
 Process

A process is basically a program in execution. A process is an 'active' entity, as opposed to a program, which is considered a 'passive' entity. Attributes held by a process include hardware state, memory, CPU time, etc.
A process is defined as an entity which represents the basic unit of work to be implemented in the system.

To put it in simple terms, we write our computer programs in a text file and when we execute this program,
it becomes a process which performs all the tasks mentioned in the program.
When a program is loaded into memory and becomes a process, it can be divided into four sections ─
stack, heap, text and data. The following image shows a simplified layout of a process inside main memory.

Process memory is divided into four sections for efficient working:

 The Text section is made up of the compiled program code, read in from non-volatile storage when the program is launched.
 The Data section is made up of the global and static variables, allocated and initialized prior to executing main.
 The Heap is used for dynamic memory allocation, and is managed via calls to new, delete, malloc, free, etc.
 The Stack is used for local variables. Space on the stack is reserved for local variables when they are declared.
 Process Life Cycle
When a process executes, it passes through different states. These stages may differ in different operating
systems, and the names of these states are also not standardized.

In general, a process can have one of the following five states at a time.

S.N. State & Description

1
Start/New
This is the initial state when a process is first started/created.

2
Ready
The process is waiting to be assigned to a processor. Ready processes are waiting to have the
processor allocated to them by the operating system so that they can run. A process may come
into this state after the Start state, or after being interrupted by the scheduler so that the CPU
can be assigned to some other process.

3
Running
Once the process has been assigned to a processor by the OS scheduler, the process state is
set to running and the processor executes its instructions.

4
Waiting
Process moves into the waiting state if it needs to wait for a resource, such as waiting for
user input, or waiting for a file to become available.

5
Terminated or Exit
Once the process finishes its execution, or it is terminated by the operating system, it is
moved to the terminated state where it waits to be removed from main memory.
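The five states above can be sketched as a small transition table. This is an illustrative Python model of the life cycle, not any particular OS's state machine; the allowed moves follow the descriptions in the table.

```python
# Legal transitions of the five-state process life cycle described above.
ALLOWED = {
    "new": {"ready"},
    "ready": {"running"},
    "running": {"ready", "waiting", "terminated"},
    "waiting": {"ready"},
    "terminated": set(),
}

def transition(state, new_state):
    """Return new_state if the move is legal, else raise ValueError."""
    if new_state not in ALLOWED[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state
```

For example, a process can go running → waiting (on an I/O request) and waiting → ready (on I/O completion), but never waiting → running directly.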
 Process Control Block (PCB)
A Process Control Block is a data structure maintained by the Operating System for every process. The
PCB is identified by an integer process ID (PID).
The architecture of a PCB is completely dependent on Operating System and may contain different
information in different operating systems. Here is a simplified diagram of a PCB −

The PCB is maintained for a process throughout its lifetime, and is deleted once the process terminates.
A PCB keeps all the information needed to keep track of a process as listed below in the table −

S.N. Information & Description

1
Process State
The current state of the process i.e., whether it is ready, running, waiting, or whatever. The
state of a process is defined in part by the current activity of that process.

2
Process privileges
This is required to allow/disallow access to system resources.

3
Process ID
Unique identification for each of the processes in the operating system.

4
Pointer
A pointer to parent process.

5
Program Counter
Program Counter is a pointer to the address of the next instruction to be executed for this
process.

6
CPU registers
Various CPU registers in which process state must be saved when the process leaves the
running state. These vary in number and type based on architecture; they include accumulators,
stack pointers, general-purpose registers, etc.

7
CPU Scheduling Information
Process priority and other scheduling information which is required to schedule the process.

8
Memory management information
This includes information such as the page table, memory limits, and segment table, depending
on the memory system used by the operating system. This includes the value of the base and
limit registers used for protection.

9
Accounting information
This includes the amount of CPU time used for process execution, time limits, execution ID etc.

10
IO status information
This includes a list of I/O devices allocated to the process.
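The fields above can be collected into a simple record. The following is an illustrative Python dataclass, not any real kernel's layout (Linux, for instance, keeps an equivalent C structure called task_struct); field names mirror the table.

```python
from dataclasses import dataclass, field
from typing import Optional

# A sketch of a PCB as a plain record; field names mirror the table above.
@dataclass
class PCB:
    pid: int                                          # process ID
    state: str = "new"                                # process state
    parent_pid: Optional[int] = None                  # pointer to parent
    program_counter: int = 0                          # next instruction
    registers: dict = field(default_factory=dict)     # saved CPU registers
    priority: int = 0                                 # scheduling information
    memory_limits: tuple = ()                         # memory management info
    cpu_time_used: int = 0                            # accounting information
    open_devices: list = field(default_factory=list)  # I/O status information
```

The OS creates one such record per process and deletes it when the process terminates.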
 Steps involved in Context Switching
The process of context switching involves a number of steps. The following diagram depicts the process
of context switching between the two processes P1 and P2.

In the above figure, initially the process P1 is in the running state and the process P2 is in the
ready state. Now, when an interrupt occurs, P1 must be switched from the running to the ready
state (after saving its context), and P2 from the ready to the running state. The following
steps are performed:

1. Firstly, the context of the process P1 i.e. the process present in the running state will be saved in
the Process Control Block of process P1 i.e. PCB1.
2. Now, you have to move the PCB1 to the relevant queue i.e. ready queue, I/O queue, waiting
queue, etc.
3. From the ready state, select the new process that is to be executed i.e. the process P2.
4. Now, update the Process Control Block of process P2 i.e. PCB2 by setting the process state to
running. If the process P2 was earlier executed by the CPU, then you can get the position of last
executed instruction so that you can resume the execution of P2.
5. Similarly, if you want to execute the process P1 again, then you have to follow the same steps as
mentioned above(from step 1 to 4).
In general, at least two processes are required for context switching to happen; in the case of the
round-robin algorithm, however, context switching can be performed with just one process.

The time involved in the context switching of one process by other is called the Context Switching
Time.
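The numbered steps above can be sketched in Python, treating the CPU and the PCBs as plain dictionaries. This is an illustrative model only; a real kernel does this in privileged code with hardware support.

```python
# Context-switch sketch: save P1's context into its PCB, queue PCB1,
# then restore P2's saved context and mark P2 running.
def context_switch(cpu, old_pcb, new_pcb, ready_queue):
    # Step 1: save the running process's context into its PCB.
    old_pcb["registers"] = dict(cpu["registers"])
    old_pcb["program_counter"] = cpu["pc"]
    old_pcb["state"] = "ready"
    # Step 2: move the old PCB to the relevant queue (ready queue here).
    ready_queue.append(old_pcb)
    # Steps 3-4: load the new process's saved context and mark it running,
    # so execution resumes at its last executed position.
    cpu["registers"] = dict(new_pcb["registers"])
    cpu["pc"] = new_pcb["program_counter"]
    new_pcb["state"] = "running"
```

Running the same function again with the roles swapped performs step 5, switching back to P1.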
 Process Scheduling
The process scheduling is the activity of the process manager that handles the removal of the running
process from the CPU and the selection of another process on the basis of a particular strategy.
Process scheduling is an essential part of multiprogramming operating systems. Such operating systems
allow more than one process to be loaded into executable memory at a time, and the loaded processes
share the CPU using time multiplexing.
Scheduling falls into one of two general categories:

 Non Pre-emptive Scheduling: When the currently executing process gives up the CPU voluntarily.
 Pre-emptive Scheduling: When the operating system decides to favour another process, pre-
empting the currently executing process.

 Process Scheduling Queues


The OS maintains all PCBs in Process Scheduling Queues. The OS maintains a separate queue for each of
the process states and PCBs of all processes in the same execution state are placed in the same queue.
When the state of a process is changed, its PCB is unlinked from its current queue and moved to its new
state queue.
The Operating System maintains the following important process scheduling queues −
 Job queue − This queue keeps all the processes in the system.
 Ready queue − This queue keeps a set of all processes residing in main memory, ready and waiting
to execute. A new process is always put in this queue.
 Device queues − The processes which are blocked due to unavailability of an I/O device constitute
this queue.

The OS can use different policies to manage each queue (FIFO, Round Robin, Priority, etc.). The OS
scheduler determines how to move processes between the ready and run queues which can only have one
entry per processor core on the system; in the above diagram, it has been merged with the CPU.
 Schedulers
Schedulers are special system software which handle process scheduling in various ways. Their main task
is to select the jobs to be submitted into the system and to decide which process to run. Schedulers are of
three types −

1. Long Term Scheduler


It is also called a job scheduler. A long-term scheduler determines which programs are admitted to the
system for processing. It selects processes from the job queue and loads them into memory for execution,
where they become available for CPU scheduling.
The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O bound and
processor bound. It also controls the degree of multiprogramming. If the degree of multiprogramming is
stable, then the average rate of process creation must be equal to the average departure rate of processes
leaving the system.
On some systems, the long-term scheduler may be absent or minimal. Time-sharing operating
systems have no long-term scheduler. It is the long-term scheduler that moves a process from the
new state to the ready state.

2. Short Term Scheduler


It is also called the CPU scheduler. Its main objective is to increase system performance in accordance with
the chosen set of criteria. It effects the change of a process from the ready state to the running state: the CPU
scheduler selects a process from among the processes that are ready to execute and allocates the CPU to it.
Short-term schedulers make the decision of which process to execute next. Short-term schedulers are faster
than long-term schedulers.

3. Medium Term Scheduler


Medium-term scheduling is a part of swapping. It removes processes from memory, reducing the
degree of multiprogramming. The medium-term scheduler is in charge of handling the swapped-out
processes.
A running process may become suspended if it makes an I/O request. A suspended process cannot make
any progress towards completion. In this condition, to remove the process from memory and make space
for other processes, the suspended process is moved to secondary storage. This is
called swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to
improve the process mix.
Comparison among Schedulers

S.N. Long-Term Scheduler | Short-Term Scheduler | Medium-Term Scheduler

1 It is a job scheduler. | It is a CPU scheduler. | It is a process swapping scheduler.

2 Speed is lesser than the short-term scheduler. | Speed is fastest among the three. | Speed is in between the short- and long-term schedulers.

3 It controls the degree of multiprogramming. | It provides lesser control over the degree of multiprogramming. | It reduces the degree of multiprogramming.

4 It is almost absent or minimal in time-sharing systems. | It is also minimal in time-sharing systems. | It is a part of time-sharing systems.

5 It selects processes from the pool and loads them into memory for execution. | It selects those processes which are ready to execute. | It can re-introduce a process into memory so that its execution can be continued.

 Operations on Process
1. Process Creation
Through appropriate system calls, such as fork or spawn, processes may create other processes. The process
which creates another process is termed the parent of that process, while the created sub-process is
termed its child.
Each process is given an integer identifier, termed as process identifier, or PID. The parent PID (PPID) is
also stored for each process.
On a typical UNIX system the process scheduler is termed sched, and is given PID 0. The first thing
it does at system start-up time is to launch init, which is given PID 1. init then launches all
the system daemons and user logins, and becomes the ultimate parent of all other processes.
A child process may receive some amount of shared resources with its parent depending on system
implementation. To prevent runaway children from consuming all of a certain system resource, child
processes may or may not be limited to a subset of the resources originally allocated to the parent.
There are two options for the parent process after creating the child:

 Wait for the child process to terminate before proceeding. The parent makes a wait() system call,
for either a specific child process or for any child process, which causes the parent process
to block until the wait() returns. UNIX shells normally wait for their children to complete before
issuing a new prompt.
 Run concurrently with the child, continuing to process without waiting. When a UNIX shell runs a
process as a background task, this is the operation seen. It is also possible for the parent to run for a
while, and then wait for the child later, which might occur in a sort of a parallel processing
operation.

There are also two possibilities in terms of the address space of the new process:

1. The child process is a duplicate of the parent process.


2. The child process has a program loaded into it.

To illustrate these different implementations, let us consider the UNIX operating system. In UNIX, each
process is identified by its process identifier, which is a unique integer. A new process is created by
the fork system call. The new process consists of a copy of the address space of the original process. This
mechanism allows the parent process to communicate easily with its child process. Both processes (the
parent and the child) continue execution at the instruction after the fork system call, with one
difference: the return code for the fork system call is zero for the new (child) process, whereas the
(non-zero) process identifier of the child is returned to the parent.
Typically, the execlp system call is used after the fork system call by one of the two processes to replace
the process memory space with a new program. The execlp system call loads a binary file into memory -
destroying the memory image of the program containing the execlp system call – and starts its execution. In
this manner the two processes are able to communicate, and then to go their separate ways.
GATE Numerical Tip: If fork is called n times, the number of child processes or new processes created will
be: 2^n – 1.
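The fork/wait mechanism described above can be demonstrated directly with Python's POSIX bindings (Linux/macOS only; os.fork is not available on Windows). The exit status 7 here is an arbitrary example value.

```python
import os

# Minimal fork/wait sketch: the child exits with status 7; the parent blocks
# in waitpid() and reads the status back, as described above.
def fork_and_wait():
    pid = os.fork()
    if pid == 0:
        # In the child, fork() returned 0.
        os._exit(7)
    # In the parent, fork() returned the child's (non-zero) PID.
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)
```

In a real program the child would typically call an exec-family function (such as execlp) right after the fork to replace its memory image with a new program.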

2. Process Termination
By making the exit() system call, typically returning an int, processes may request their own termination.
This int is passed along to the parent if it is doing a wait(), and is typically zero on successful completion
and some non-zero code in the event of any problem.
Processes may also be terminated by the system for a variety of reasons, including:

 The inability of the system to deliver the necessary system resources.
 In response to a KILL command or other unhandled process interrupts.
 A parent may kill its children if the task assigned to them is no longer needed, i.e. if the need for the child terminates.
 If the parent exits, the system may or may not allow the child to continue without a parent. (In UNIX systems, orphaned processes are generally inherited by init, which then proceeds to kill them.)

When a process ends, all of its system resources are freed up, open files flushed and closed, etc. The process
termination status and execution times are returned to the parent if the parent is waiting for the child to
terminate, or eventually returned to init if the process already became an orphan.
The processes which are trying to terminate but cannot do so because their parent is not waiting for them are
termed zombies. These are eventually inherited by init as orphans and killed off.
 CPU Scheduling
CPU scheduling is a process which allows one process to use the CPU while the execution of another
process is on hold (in the waiting state) due to the unavailability of some resource (such as I/O), thereby
making full use of the CPU. The aim of CPU scheduling is to make the system efficient, fast and fair.
Whenever the CPU becomes idle, the operating system must select one of the processes in the ready
queue to be executed. The selection process is carried out by the short-term scheduler (or CPU scheduler).
The scheduler selects from among the processes in memory that are ready to execute, and allocates the CPU
to one of them.

 CPU Scheduling: Dispatcher


Another component involved in the CPU scheduling function is the Dispatcher. The dispatcher is the
module that gives control of the CPU to the process selected by the short-term scheduler. This function
involves:

 Switching context
 Switching to user mode
 Jumping to the proper location in the user program to restart that program from where it left off last time.

The dispatcher should be as fast as possible, given that it is invoked during every process switch. The time
taken by the dispatcher to stop one process and start another process is known as the Dispatch Latency.
Dispatch Latency can be explained using the below figure:
 Types of CPU Scheduling
CPU scheduling decisions may take place under the following four circumstances:

1. When a process switches from the running state to the waiting state (for an I/O request, or an invocation of wait for the termination of one of its child processes).
2. When a process switches from the running state to the ready state (for example, when an interrupt occurs).
3. When a process switches from the waiting state to the ready state (for example, on completion of I/O).
4. When a process terminates.

In circumstances 1 and 4, there is no choice in terms of scheduling; a new process (if one exists in the ready
queue) must be selected for execution. There is a choice, however, in circumstances 2 and 3.
When Scheduling takes place only under circumstances 1 and 4, we say the scheduling scheme is non-
preemptive; otherwise the scheduling scheme is preemptive.

 Non-Preemptive Scheduling
Under non-preemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU
until it releases the CPU either by terminating or by switching to the waiting state.
This scheduling method is used by the Microsoft Windows 3.1 and by the Apple Macintosh operating
systems.
It is the only method that can be used on certain hardware platforms, because it does not require the special
hardware (for example, a timer) needed for preemptive scheduling.

 Preemptive Scheduling
In this type of scheduling, tasks are usually assigned priorities. At times it is necessary to run a
certain task with a higher priority even though another task is currently running. The running task
is therefore interrupted for some time and resumed later, once the higher-priority task has finished its execution.

 CPU Scheduling Criteria


There are many different criteria to consider when judging the "best" scheduling algorithm:
CPU Utilization
To make the best use of the CPU and not waste any CPU cycle, the CPU should be working most of the
time (ideally, 100% of the time). In a real system, CPU usage should range from 40% (lightly
loaded) to 90% (heavily loaded).
Throughput
It is the total number of processes completed per unit time or, in other words, the total amount of work done in a unit
of time. This may range from 10/second to 1/hour depending on the specific processes.
Turnaround Time
It is the amount of time taken to execute a particular process, i.e. The interval from time of submission of
the process to the time of completion of the process (Wall clock time).
Waiting Time
The sum of the periods a process spends waiting in the ready queue to acquire control of the CPU.
Response Time
Amount of time it takes from when a request was submitted until the first response is produced. Remember,
it is the time till the first response and not the completion of process execution (final response).
In general, CPU utilization and throughput are maximized, while the other factors are minimized, for proper
optimization.
 Scheduling Algorithms
To decide which process to execute first and which last, so as to achieve maximum CPU
utilisation, computer scientists have defined the following algorithms:

1. First Come First Serve(FCFS) Scheduling


2. Shortest-Job-First(SJF) Scheduling
3. Priority Scheduling
4. Round Robin(RR) Scheduling
5. Multilevel Queue Scheduling
6. Multilevel Feedback Queue Scheduling

Simple formulae for calculating various times for given processes:


Completion Time: Time taken for the execution to complete, starting from arrival time.
Turn Around Time: Time taken to complete after arrival. In simple words, it is the difference between the
Completion time and the Arrival time.
Waiting Time: Total time the process has to wait before its execution begins. It is the difference between
the Turn Around time and the Burst time of the process.
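The formulae above can be written as a small helper (times are in arbitrary units):

```python
# Turn Around time = Completion time - Arrival time
# Waiting time     = Turn Around time - Burst time
def times(arrival, burst, completion):
    turnaround = completion - arrival
    waiting = turnaround - burst
    return turnaround, waiting
```

For instance, a process arriving at time 3 with burst 1 that completes at time 7 has a turnaround time of 4 and a waiting time of 3.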

1. First Come First Serve Scheduling


In the "First come first serve" scheduling algorithm, as the name suggests, the process which arrives first,
gets executed first, or we can say that the process which requests the CPU first, gets the CPU allocated first.

 First Come First Serve is just like a FIFO (First In First Out) queue data structure, where the data
element added to the queue first is the one that leaves the queue first.
 This is used in Batch Systems.
 It's easy to understand and implement programmatically, using a queue data structure, where a
new process enters through the tail of the queue, and the scheduler selects the process at the head of
the queue.
 A perfect real-life example of FCFS scheduling is buying tickets at a ticket counter.

Calculating Average Waiting Time


For every scheduling algorithm, average waiting time is a crucial parameter to judge its performance.
AWT or Average Waiting Time is the average of the waiting times of the processes in the queue, waiting for
the scheduler to pick them for execution.
The lower the average waiting time, the better the scheduling algorithm.

Consider the processes P1, P2, P3, P4 given in the below table, arriving for execution in the same order,
with Arrival Time 0 and the given Burst Times; let's find the average waiting time using the FCFS scheduling
algorithm.

Process Id Burst time

P1 21

P2 3

P3 6

P4 2

For the above given processes, first P1 will be provided with the CPU resources:

 Hence, waiting time for P1 will be 0.
 P1 requires 21 ms for completion, hence waiting time for P2 will be 21 ms.
 Similarly, waiting time for process P3 will be the execution time of P1 + the execution time of P2, which
will be (21 + 3) ms = 24 ms.
 For process P4 it will be the sum of the execution times of P1, P2 and P3, i.e. 30 ms.

The average waiting time will be (0 + 21 + 24 + 30) / 4 = 18.75 ms.
A Gantt chart of this schedule shows the waiting time for each process.
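The FCFS arithmetic above can be checked with a short simulation. Burst times of 21 and 3 ms for P1 and P2 are stated in the walkthrough; 6 and 2 ms for P3 and P4 are inferred from the 18.75 ms average, so treat them as an assumption.

```python
# FCFS sketch: processes run in arrival order, so each process waits for the
# sum of the bursts of everything queued before it.
def fcfs_waiting_times(bursts):
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)   # time spent waiting before this burst starts
        elapsed += b
    return waits

waits = fcfs_waiting_times([21, 3, 6, 2])   # [0, 21, 24, 30]
avg = sum(waits) / len(waits)               # 18.75
```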

 Problems with FCFS Scheduling


Below are a few shortcomings or problems with the FCFS scheduling algorithm:
1. It is a non-pre-emptive algorithm, which means process priority doesn't matter.

If a process with very low priority is being executed, such as a daily routine backup process which takes
a long time, and all of a sudden some high-priority process arrives, like an interrupt to avoid a system crash,
the high-priority process will have to wait, and hence in this case the system may crash, just because of
improper process scheduling.

2. Not optimal Average Waiting Time.


3. Parallel utilization of resources is not possible, which leads to the Convoy Effect, and hence poor resource (CPU,
I/O etc.) utilization.

 Convoy Effect
Convoy Effect is a situation where many processes that need to use a resource for only a short time are blocked
by one process holding that resource for a long time.
This essentially leads to poor utilization of resources and hence poor performance.
2. Shortest Job First (SJF) Scheduling
Shortest Job First scheduling runs the process with the shortest burst time or duration first.

 This is the best approach to minimize waiting time.
 This is used in Batch Systems.
 It is of two types:
1. Non Pre-emptive
2. Pre-emptive
 To successfully implement it, the burst time/duration of the processes should be known to the processor
in advance, which is practically not feasible all the time.
 This scheduling algorithm is optimal if all the jobs/processes are available at the same time (either the
arrival time is 0 for all, or the arrival time is the same for all).

 Non Pre-emptive Shortest Job First


Consider the set of 5 processes whose arrival time and burst time are given below-

Process Id Arrival time Burst time

P1 3 1

P2 1 4

P3 4 2

P4 0 6

P5 2 3

 If the CPU scheduling policy is SJF non-preemptive, calculate the average waiting time and average turn
around time.

Solution-
Gantt Chart

Now, we know-
 Turn Around time = Exit time – Arrival time
 Waiting time = Turn Around time – Burst time
Process Id Exit time Turn Around time Waiting time

P1 7 7–3=4 4–1=3

P2 16 16 – 1 = 15 15 – 4 = 11

P3 9 9–4=5 5–2=3

P4 6 6–0=6 6–6=0

P5 12 12 – 2 = 10 10 – 3 = 7

Now,
 Average Turn Around time = (4 + 15 + 5 + 6 + 10) / 5 = 40 / 5 = 8 unit
 Average waiting time = (3 + 11 + 3 + 0 + 7) / 5 = 24 / 5 = 4.8 unit
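The non-preemptive SJF schedule above can be reproduced with a short simulation: at each scheduling point, the shortest available job runs to completion (ties broken by arrival time).

```python
# Non-preemptive SJF sketch. processes maps pid -> (arrival, burst);
# the result maps pid -> exit time.
def sjf(processes):
    remaining = dict(processes)
    t, exit_times = 0, {}
    while remaining:
        avail = [p for p, (a, _) in remaining.items() if a <= t]
        if not avail:
            # CPU idle: jump to the next arrival.
            t = min(a for a, _ in remaining.values())
            continue
        # Shortest burst first; ties broken by earlier arrival.
        pid = min(avail, key=lambda p: (remaining[p][1], remaining[p][0]))
        t += remaining[pid][1]
        exit_times[pid] = t
        del remaining[pid]
    return exit_times

jobs = {"P1": (3, 1), "P2": (1, 4), "P3": (4, 2), "P4": (0, 6), "P5": (2, 3)}
exits = sjf(jobs)   # P4 exits at 6, P1 at 7, P3 at 9, P5 at 12, P2 at 16
```

The exit times match the table above, giving the same averages of 8 and 4.8 units.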

 Problem with Non Pre-emptive SJF


If the arrival times of processes differ, meaning all processes are not available in the ready
queue at time 0 and some jobs arrive later, then sometimes a process with a short burst
time has to wait for the current process's execution to finish. This is because in Non Pre-emptive SJF, on the arrival of
a process with a short duration, the existing job/process's execution is not halted/stopped to execute the short
job first.
This leads to the problem of Starvation, where a shorter process has to wait for a long time until the current
longer process finishes. This happens when shorter jobs keep coming, but it can be solved using the
concept of aging.

 Pre-emptive Shortest Job First


In Preemptive Shortest Job First scheduling, jobs are put into the ready queue as they arrive, but when a process
with a shorter burst time arrives, the existing process is preempted (removed from execution) and the shorter
job is executed first. It is also known as SRTF (Shortest Remaining Time First).
Consider the set of 5 processes whose arrival time and burst time are given below-

Process Id Arrival time Burst time

P1 3 1

P2 1 4

P3 4 2

P4 0 6

P5 2 3
If the CPU scheduling policy is SJF preemptive, calculate the average waiting time and average turn around
time.
Solution-
Gantt Chart-

Now, we know-
 Turn Around time = Exit time – Arrival time
 Waiting time = Turn Around time – Burst time

Process Id Exit time Turn Around time Waiting time

P1 4 4–3=1 1–1=0

P2 6 6–1=5 5–4=1

P3 8 8–4=4 4–2=2

P4 16 16 – 0 = 16 16 – 6 = 10

P5 11 11 – 2 = 9 9–3=6

Now,
 Average Turn Around time = (1 + 5 + 4 + 16 + 9) / 5 = 35 / 5 = 7 unit
 Average waiting time = (0 + 1 + 2 + 10 + 6) / 5 = 19 / 5 = 3.8 unit
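The preemptive (SRTF) schedule above can be reproduced by stepping one time unit at a time: at every tick, the arrived process with the least remaining burst runs (ties broken by arrival time, which matches the schedule shown).

```python
# Preemptive SJF (SRTF) sketch. processes maps pid -> (arrival, burst);
# the result maps pid -> exit time.
def srtf(processes):
    remaining = {p: b for p, (a, b) in processes.items()}
    t, exits = 0, {}
    while remaining:
        avail = [p for p in remaining if processes[p][0] <= t]
        if not avail:
            t += 1      # CPU idle this tick
            continue
        # Least remaining time first; ties broken by earlier arrival.
        pid = min(avail, key=lambda p: (remaining[p], processes[p][0]))
        remaining[pid] -= 1
        t += 1
        if remaining[pid] == 0:
            exits[pid] = t
            del remaining[pid]
    return exits

jobs = {"P1": (3, 1), "P2": (1, 4), "P3": (4, 2), "P4": (0, 6), "P5": (2, 3)}
exits = srtf(jobs)   # P1 exits at 4, P2 at 6, P3 at 8, P5 at 11, P4 at 16
```

The exit times match the table above, giving the same averages of 7 and 3.8 units.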
3. Priority Scheduling
In the Shortest Job First scheduling algorithm, the priority of a process is, in effect, the inverse of its CPU
burst time: the larger the burst time, the lower the priority of that process.
In priority scheduling, the priority is not always the inverse of the CPU burst time; rather, it can
be set internally or externally. The scheduling is done on the basis of the priority of the process, where
the most urgent process is processed first, followed by the ones with lower priority in order.
Processes with the same priority are executed in FCFS manner.
The priority of a process, when internally defined, can be decided based on memory requirements, time
limits, number of open files, the ratio of I/O burst to CPU burst, etc.
External priorities, on the other hand, are set based on criteria outside the operating system, like the importance of the
process, funds paid for computer resource use, market factors, etc.

 Types of Priority Scheduling Algorithm


Priority scheduling can be of two types:

1. Preemptive Priority Scheduling: If a new process arriving in the ready queue has a higher priority
than the currently running process, the CPU is preempted, which means the processing of the current
process is stopped and the incoming process with higher priority gets the CPU for its execution.
2. Non-Preemptive Priority Scheduling: In the case of the non-preemptive priority scheduling algorithm, if a
new process arrives with a higher priority than the currently running process, the incoming process is
put at the head of the ready queue, which means it will be processed after the execution of the
current process.

 Example of Priority Scheduling Algorithm


Consider the below table of processes with their respective CPU burst times and priorities.

As you can see in the GANTT chart, processes are given CPU time purely on the basis of their priorities.
 Problem with Priority Scheduling Algorithm
In the priority scheduling algorithm, there is a chance of indefinite blocking, or starvation.
A process is considered blocked when it is ready to run but has to wait for the CPU because some other process is
currently running.
In priority scheduling, if new higher-priority processes keep coming into the ready queue, then the
lower-priority processes waiting in the ready queue may have to wait for long durations before getting
the CPU for execution.
In 1973, when the IBM 7094 machine at MIT was shut down, a low-priority process was found which had been submitted
in 1967 and had not yet been run.

 Using Aging Technique with Priority Scheduling


To prevent starvation of any process, we can use the concept of aging, where we keep increasing the
priority of a low-priority process based on its waiting time.
For example, suppose we decide the aging factor to be 0.5 per day of waiting, and a process with
priority 20 (which is comparatively low) comes into the ready queue. After one day of waiting, its
priority is increased to 19.5, and so on.
Doing so, we can ensure that no process will have to wait for indefinite time for getting CPU time for
processing.
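The aging example can be sketched as follows. Note that in this example a smaller number means higher priority, so "increasing" a process's priority lowers its numeric value; the factor of 0.5 per day is the assumption from the example above.

```python
# Aging sketch: each day of waiting raises priority by `factor`.
# Smaller numeric value = higher priority in this example.
def aged_priority(base, days_waiting, factor=0.5):
    return base - factor * days_waiting
```

A process that starts at priority 20 reaches 19.5 after one day of waiting and keeps climbing, so it is eventually scheduled.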

Problem-01 Consider the set of 5 processes whose arrival time and burst time are given below-
Process Id Arrival time Burst time Priority

P1 0 4 2

P2 1 3 3

P3 2 1 4

P4 3 5 5

P5 4 2 5

If the CPU scheduling policy is priority non-preemptive, calculate the average waiting time and average turn
around time. (Higher number represents higher priority)
Solution-
Gantt Chart-

Now, we know-
 Turn Around time = Exit time – Arrival time
 Waiting time = Turn Around time – Burst time

Process Id Exit time Turn Around time Waiting time

P1 4 4–0=4 4–4=0

P2 15 15 – 1 = 14 14 – 3 = 11

P3 12 12 – 2 = 10 10 – 1 = 9

P4 9 9–3=6 6–5=1

P5 11 11 – 4 = 7 7–2=5
Now,
 Average Turn Around time = (4 + 14 + 10 + 6 + 7) / 5 = 41 / 5 = 8.2 unit
 Average waiting time = (0 + 11 + 9 + 1 + 5) / 5 = 26 / 5 = 5.2 unit
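Problem-01's non-preemptive schedule can be reproduced with a short simulation: at each scheduling point, the highest-priority arrived process runs to completion (higher number = higher priority, as stated; ties broken by arrival time, which is how P4 runs before P5).

```python
# Non-preemptive priority sketch. procs maps pid -> (arrival, burst, priority);
# the result maps pid -> exit time.
def priority_np(procs):
    remaining = dict(procs)
    t, exits = 0, {}
    while remaining:
        avail = [p for p, (a, _, _) in remaining.items() if a <= t]
        if not avail:
            t = min(a for a, _, _ in remaining.values())
            continue
        # Highest priority number first; ties broken by earlier arrival.
        pid = min(avail, key=lambda p: (-remaining[p][2], remaining[p][0]))
        t += remaining[pid][1]
        exits[pid] = t
        del remaining[pid]
    return exits

jobs = {"P1": (0, 4, 2), "P2": (1, 3, 3), "P3": (2, 1, 4),
        "P4": (3, 5, 5), "P5": (4, 2, 5)}
exits = priority_np(jobs)   # P1: 4, P4: 9, P5: 11, P3: 12, P2: 15
```

The exit times match the table above, giving the same averages of 8.2 and 5.2 units.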

Problem-02: Consider the set of 5 processes whose arrival time and burst time are given below-
Process Id Arrival time Burst time Priority

P1 0 4 2

P2 1 3 3

P3 2 1 4

P4 3 5 5

P5 4 2 5

If the CPU scheduling policy is priority preemptive, calculate the average waiting time and average turn around
time. (Higher number represents higher priority)
Solution-
Gantt Chart-

Now, we know-
 Turn Around time = Exit time – Arrival time
 Waiting time = Turn Around time – Burst time

Process Id Exit time Turn Around time Waiting time

P1 15 15 – 0 = 15 15 – 4 = 11

P2 12 12 – 1 = 11 11 – 3 = 8

P3 3 3–2=1 1–1=0

P4 8 8–3=5 5–5=0

P5 10 10 – 4 = 6 6–2=4

Now,
 Average Turn Around time = (15 + 11 + 1 + 5 + 6) / 5 = 38 / 5 = 7.6 unit
 Average waiting time = (11 + 8 + 0 + 0 + 4) / 5 = 23 / 5 = 4.6 unit
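Problem-02's preemptive schedule can be reproduced one tick at a time: at every tick, the highest-priority arrived process runs (ties broken by arrival). Burst times are taken as 4, 3, 1, 5 and 2, consistent with the waiting-time arithmetic in the table above.

```python
# Preemptive priority sketch. procs maps pid -> (arrival, burst, priority);
# the result maps pid -> exit time.
def priority_p(procs):
    remaining = {p: b for p, (a, b, pr) in procs.items()}
    t, exits = 0, {}
    while remaining:
        avail = [p for p in remaining if procs[p][0] <= t]
        if not avail:
            t += 1      # CPU idle this tick
            continue
        # Highest priority number first; ties broken by earlier arrival.
        pid = min(avail, key=lambda p: (-procs[p][2], procs[p][0]))
        remaining[pid] -= 1
        t += 1
        if remaining[pid] == 0:
            exits[pid] = t
            del remaining[pid]
    return exits

jobs = {"P1": (0, 4, 2), "P2": (1, 3, 3), "P3": (2, 1, 4),
        "P4": (3, 5, 5), "P5": (4, 2, 5)}
exits = priority_p(jobs)   # P3: 3, P4: 8, P5: 10, P2: 12, P1: 15
```

The exit times match the table above, giving the same averages of 7.6 and 4.6 units.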
4. Round Robin Scheduling
 A fixed time, called the time quantum, is allotted to each process for execution.
 Once a process has executed for the given time period, it is preempted and another process
executes for its time period.
 Context switching is used to save the states of preempted processes.

Advantages
 It gives the best performance in terms of average response time.
 It is best suited for time sharing system, client server architecture and interactive system.

Disadvantages
 It leads to starvation for processes with larger burst time as they have to repeat the cycle many times.
 Its performance heavily depends on time quantum.
 Priorities can not be set for the processes.

Note-01:
With decreasing value of time quantum,
 Number of context switches increases
 Response time decreases
 Chances of starvation decrease
Thus, a smaller value of time quantum is better in terms of response time.
Note-02:
With increasing value of time quantum,
 Number of context switches decreases
 Response time increases
 Chances of starvation increase
Thus, a higher value of time quantum is better in terms of the number of context switches.
Note-03:
 With increasing value of time quantum, Round Robin Scheduling tends to become FCFS Scheduling.
 When time quantum tends to infinity, Round Robin Scheduling becomes FCFS Scheduling.

Note-04:
 The performance of Round Robin scheduling heavily depends on the value of time quantum.
 The value of time quantum should be such that it is neither too big nor too small.

Problem-01: Consider the set of 5 processes whose arrival time and burst time are given below-

Process Id Arrival time Burst time

P1 0 5

P2 1 3

P3 2 1

P4 3 2

P5 4 3

If the CPU scheduling policy is Round Robin with time quantum = 2 unit, calculate the average waiting time
and average turn around time.
Solution-
Ready Queue-
P1, P2, P3, P1, P4, P5, P2, P1, P5
Gantt Chart-

Now, we know-
 Turn Around time = Exit time – Arrival time
 Waiting time = Turn Around time – Burst time

Process Id Exit time Turn Around time Waiting time

P1 13 13 – 0 = 13 13 – 5 = 8

P2 12 12 – 1 = 11 11 – 3 = 8

P3 5 5–2=3 3–1=2

P4 9 9–3=6 6–2=4

P5 14 14 – 4 = 10 10 – 3 = 7

Now,
 Average Turn Around time = (13 + 11 + 3 + 6 + 10) / 5 = 43 / 5 = 8.6 unit
 Average waiting time = (8 + 8 + 2 + 4 + 7) / 5 = 29 / 5 = 5.8 unit
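The Round Robin policy itself can be sketched as a small simulation over a FIFO ready queue. This is an illustrative sketch, not the source's code; it assumes, as in the trace above, that a process arriving during a time slice enters the ready queue before the preempted process is re-added, and it reproduces Problem-01's exit times.

```python
from collections import deque

# Round Robin simulation for Problem-01: (id, arrival time, burst time), quantum 2.
procs = [("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 1), ("P4", 3, 2), ("P5", 4, 3)]
quantum = 2

remaining = {pid: burst for pid, _, burst in procs}
arrivals = sorted(procs, key=lambda p: p[1])
queue, exit_time, t, i = deque(), {}, 0, 0

while i < len(arrivals) or queue:
    if not queue:                                      # CPU idle until next arrival
        t = max(t, arrivals[i][1])
    while i < len(arrivals) and arrivals[i][1] <= t:   # admit arrivals up to time t
        queue.append(arrivals[i][0]); i += 1
    pid = queue.popleft()
    run = min(quantum, remaining[pid])
    t += run
    remaining[pid] -= run
    while i < len(arrivals) and arrivals[i][1] <= t:   # arrivals during this slice
        queue.append(arrivals[i][0]); i += 1           # enter before the preempted process
    if remaining[pid] > 0:
        queue.append(pid)                              # preempted: back of the queue
    else:
        exit_time[pid] = t
print(exit_time)   # {'P3': 5, 'P4': 9, 'P2': 12, 'P1': 13, 'P5': 14}
```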

Problem-02: Consider the set of 6 processes whose arrival time and burst time are given below-

Process Id Arrival time Burst time

P1 0 4

P2 1 5

P3 2 2

P4 3 1

P5 4 6

P6 6 3

If the CPU scheduling policy is Round Robin with time quantum = 2, calculate the average waiting time and
average turn around time.
Solution-
Ready Queue-
P1, P2, P3, P1, P4, P5, P2, P6, P5, P2, P6, P5
Gantt Chart-
Now, we know-
 Turn Around time = Exit time – Arrival time
 Waiting time = Turn Around time – Burst time

Process Id Exit time Turn Around time Waiting time

P1 8 8–0=8 8–4=4

P2 18 18 – 1 = 17 17 – 5 = 12

P3 6 6–2=4 4–2=2

P4 9 9–3=6 6–1=5

P5 21 21 – 4 = 17 17 – 6 = 11

P6 19 19 – 6 = 13 13 – 3 = 10
Now,
 Average Turn Around time = (8 + 17 + 4 + 6 + 17 + 13) / 6 = 65 / 6 = 10.83 unit
 Average waiting time = (4 + 12 + 2 + 5 + 11 + 10) / 6 = 44 / 6 = 7.33 unit
Problem-03: Consider the set of 6 processes whose arrival time and burst time are given below-

Process Id Arrival time Burst time

P1 5 5

P2 4 6

P3 3 7

P4 1 9

P5 2 2

P6 6 3

If the CPU scheduling policy is Round Robin with time quantum = 3, calculate the average waiting time and
average turn around time.
Solution-
Ready Queue-
P4, P5, P3, P2, P4, P1, P6, P3, P2, P4, P1, P3
Gantt Chart-

Now, we know-
 Turn Around time = Exit time – Arrival time
 Waiting time = Turn Around time – Burst time
Process Id Exit time Turn Around time Waiting time

P1 32 32 – 5 = 27 27 – 5 = 22

P2 27 27 – 4 = 23 23 – 6 = 17

P3 33 33 – 3 = 30 30 – 7 = 23

P4 30 30 – 1 = 29 29 – 9 = 20

P5 6 6–2=4 4–2=2

P6 21 21 – 6 = 15 15 – 3 = 12
Now,
 Average Turn Around time = (27 + 23 + 30 + 29 + 4 + 15) / 6 = 128 / 6 = 21.33 unit
 Average waiting time = (22 + 17 + 23 + 20 + 2 + 12) / 6 = 96 / 6 = 16 unit

Problem-04: Four jobs to be executed on a single processor system arrive at time 0 in the order A, B, C,
D. Their burst CPU time requirements are 4, 1, 8, 1 time units respectively. The completion time of A
under round robin scheduling with time slice of one time unit is-
1. 10
2. 4
3. 8
4. 9
Solution-

Process Id Arrival time Burst time

A 0 4

B 0 1

C 0 8

D 0 1

Ready Queue-
A, B, C, D, A, C, A, C, A, C
Gantt chart-

Clearly, completion time of process A = 9 unit.
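This result can be checked with a few lines of Python; the jobs and burst times are taken from the table above, and the simulation runs one time unit per turn since the time slice is 1.

```python
from collections import deque

# Quick check of Problem-04: all four jobs arrive at t = 0,
# so the ready queue starts as A, B, C, D; time quantum = 1.
remaining = {"A": 4, "B": 1, "C": 8, "D": 1}
queue = deque(["A", "B", "C", "D"])
t, completion = 0, {}
while queue:
    pid = queue.popleft()
    remaining[pid] -= 1          # run exactly one time unit
    t += 1
    if remaining[pid] == 0:
        completion[pid] = t
    else:
        queue.append(pid)        # not finished: back of the queue
print(completion["A"])           # 9
```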


5. Multilevel Queue Scheduling
Another class of scheduling algorithms has been created for situations in which processes are easily
classified into different groups.
For example: A common division is made between foreground(or interactive) processes and background
(or batch) processes. These two types of processes have different response-time requirements, and so might
have different scheduling needs. In addition, foreground processes may have priority over background
processes.
A multi-level queue scheduling algorithm partitions the ready queue into several separate queues. The
processes are permanently assigned to one queue, generally based on some property of the process, such as
memory size, process priority, or process type. Each queue has its own scheduling algorithm.
For example: separate queues might be used for foreground and background processes. The foreground
queue might be scheduled by Round Robin algorithm, while the background queue is scheduled by an FCFS
algorithm.
In addition, there must be scheduling among the queues, which is commonly implemented as fixed-priority
preemptive scheduling. For example: The foreground queue may have absolute priority over the
background queue.
Let us consider an example of a multilevel queue-scheduling algorithm with five queues:

1. System Processes

2. Interactive Processes

3. Interactive Editing Processes

4. Batch Processes

5. Student Processes

Each queue has absolute priority over lower-priority queues. No process in the batch queue, for example,
could run unless the queues for system processes, interactive processes, and interactive editing processes
were all empty. If an interactive editing process entered the ready queue while a batch process was running,
the batch process would be preempted.
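The fixed-priority scheduling among queues described above can be sketched as follows. The queue names mirror the five-queue example, and `pick_next` is a hypothetical helper, shown only to illustrate that the scheduler always serves the highest-priority non-empty queue.

```python
# Sketch of fixed-priority scheduling among queues (names are illustrative).
queues = {
    "system": [], "interactive": [], "interactive_editing": [],
    "batch": [], "student": [],
}
order = ["system", "interactive", "interactive_editing", "batch", "student"]

def pick_next():
    """Serve the highest-priority non-empty queue, FCFS within a queue."""
    for name in order:
        if queues[name]:
            return name, queues[name].pop(0)
    return None

queues["batch"].append("backup_job")
queues["interactive"].append("shell")
result = pick_next()
print(result)        # ('interactive', 'shell') — the batch job waits
```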
6. Multilevel Feedback Queue Scheduling
In a multilevel queue-scheduling algorithm, processes are permanently assigned to a queue on entry to the
system. Processes do not move between queues. This setup has the advantage of low scheduling overhead,
but the disadvantage of being inflexible.
Multilevel feedback queue scheduling, however, allows a process to move between queues. The idea is to
separate processes with different CPU-burst characteristics. If a process uses too much CPU time, it will be
moved to a lower-priority queue. Similarly, a process that waits too long in a lower-priority queue may be
moved to a higher-priority queue. This form of aging prevents starvation.
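The demotion rule described above can be sketched in a few lines. The quanta are illustrative values and aging (promotion back up) is omitted for brevity: a job that uses its full quantum without finishing is pushed down one level, so CPU-bound jobs drift toward the queues with longer quanta.

```python
from collections import deque

QUANTA = [2, 4, 8]                      # queue 0 has highest priority
queues = [deque() for _ in QUANTA]
finished = []

def schedule(procs):
    """Run (pid, burst) jobs through the feedback queues; a job that
    uses its full quantum without finishing is demoted one level."""
    for pid, burst in procs:
        queues[0].append([pid, burst])  # every job starts at the top level
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)
        pid, rem = queues[level].popleft()
        rem -= min(QUANTA[level], rem)
        if rem == 0:
            finished.append(pid)
        else:                           # used its whole quantum: demote
            queues[min(level + 1, len(queues) - 1)].append([pid, rem])

schedule([("A", 3), ("B", 10)])
print(finished)                         # ['A', 'B']
```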

An example of a multilevel feedback queue is shown in the figure below.


In general, a multilevel feedback queue scheduler is defined by the following parameters:

 The number of queues.


 The scheduling algorithm for each queue.
 The method used to determine when to upgrade a process to a higher-priority queue.
 The method used to determine when to demote a process to a lower-priority queue.
 The method used to determine which queue a process will enter when that process needs service.

The definition of a multilevel feedback queue scheduler makes it the most general CPU-scheduling
algorithm. It can be configured to match a specific system under design. Unfortunately, it also requires some
means of selecting values for all the parameters to define the best scheduler. Although a multilevel feedback
queue is the most general scheme, it is also the most complex.
Comparison of Scheduling Algorithms
Now, let us examine the advantages and disadvantages of each scheduling algorithm that we have studied
so far.

o First Come First Serve (FCFS)


Let's start with the Advantages:

 FCFS algorithm doesn't include any complex logic; it just puts the process requests in a queue and
executes them one by one.
 Hence, FCFS is pretty simple and easy to implement.
 Eventually, every process will get a chance to run, so starvation doesn't occur.

It's time for the Disadvantages:

 There is no option for pre-emption of a process. If a process is started, then CPU executes the
process until it ends.
 Because there is no pre-emption, if a process executes for a long time, the processes in the back of
the queue will have to wait for a long time before they get a chance to be executed.

o Shortest Job First (SJF)


Starting with the Advantages of the Shortest Job First scheduling algorithm:

 According to the definition, short processes are executed first and then followed by longer processes.
 The throughput is increased because more processes can be executed in less amount of time.

And the Disadvantages:

 The time taken by a process must be known to the CPU beforehand, which is not always possible.
 Longer processes will have more waiting time, eventually they'll suffer starvation.

Note: Preemptive Shortest Job First scheduling will have the same advantages and disadvantages as those for SJF.

o Round Robin (RR)


Here are some Advantages of using Round Robin Scheduling:

 Each process is served by the CPU for a fixed time quantum, so all processes are given the same priority.
 Starvation doesn't occur because for each round robin cycle, every process is given a fixed time to execute.
No process is left behind.

And here are the Disadvantages:

 The throughput in RR largely depends on the choice of the length of the time quantum. If time quantum is
longer than needed, it tends to exhibit the same behavior as FCFS.
 If time quantum is shorter than needed, the number of times the CPU switches from one process to
another increases. This leads to a decrease in CPU efficiency.
o Priority based Scheduling
Advantages of Priority Scheduling:

 The priority of a process can be selected based on memory requirement, time requirement or user preference.
For example, in a high-end game, the process that updates the screen has higher priority so as to achieve
better graphics performance.

Some Disadvantages:

 A second scheduling algorithm is required to schedule the processes which have same priority.
 In preemptive priority scheduling, a higher priority process can execute ahead of an already executing
lower priority process. If low priority processes keep waiting for higher priority ones, starvation occurs.

Usage of Scheduling Algorithms in Different Situations


Every scheduling algorithm has a type of situation in which it is the best choice. Let's look at some such
situations:

Situation 1:
The incoming processes are short and there is no need for the processes to execute in a specific order.
In this case, FCFS works best when compared to SJF and RR because the processes are short, which means
that no process will wait for a long time. When each process is executed one by one, every process will be
executed eventually.

Situation 2:
The processes are a mix of long and short processes and the task will only be completed if all the processes
are executed successfully in a given time.
Round Robin scheduling works efficiently here because it does not cause starvation and also gives equal
time quantum for each process.

Situation 3:
The processes are a mix of user based and kernel based processes.
Priority based scheduling works efficiently in this case because generally kernel based processes have
higher priority when compared to user based processes.
For example, the scheduler itself is a kernel based process, it should run first so that it can schedule other
processes.
