
Subject: Operating System

Created By: Asst. Prof. SK ABDUL ISRAR College: ABA, BLS

What is Process Management?


Process management involves various tasks such as the creation, scheduling,
and termination of processes, as well as deadlock handling. A process is a
program that is under execution, and processes are an important part of
modern-day operating systems. The OS must allocate resources that enable
processes to share and exchange information. It also protects the resources
of each process from other processes and allows synchronization among
processes.

Process Creation
A process may be created in the system for different operations. Some of
the events that lead to process creation are as follows −

• User request for process creation


• System Initialization
• Batch job initialization
• Execution of a process creation system call by a running process
A process may be created by another process using fork(). The creating
process is called the parent process and the created process is the child
process. A child process can have only one parent but a parent process
may have many children. Both the parent and child processes have the
same memory image, open files and environment strings. However, they
have distinct address spaces.
A diagram that demonstrates process creation using fork() is as follows −
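As a sketch of the idea in Python (an assumption of this example: a POSIX system, since `os.fork` wraps the `fork()` system call and is unavailable on Windows), the parent/child relationship looks like this:

```python
import os

pid = os.fork()                          # duplicate the calling process (POSIX only)
if pid == 0:
    # Child: starts with the same memory image, open files and environment
    # strings as the parent, but in a distinct address space.
    print("child pid:", os.getpid(), "my parent is:", os.getppid())
    os._exit(0)                          # child terminates with exit status 0
else:
    # Parent: fork() returned the child's PID (a parent may have many children).
    _, status = os.waitpid(pid, 0)       # wait for the child to terminate
    child_exit_code = os.WEXITSTATUS(status)
    print("parent: child", pid, "exited with status", child_exit_code)
```

In the parent, `fork()` returns the child's PID; in the child, it returns 0, which is how the two copies tell themselves apart.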

Contact: 7008443534, 9090042626

Process Termination
Process termination occurs when a process finishes execution or is removed
from the system. The exit() system call is used by most operating systems
for process termination. Some of the causes of process termination are as
follows −

• A process may be terminated after its execution is naturally


completed. This process leaves the processor and releases all its
resources.
• A child process may be terminated if its parent process requests its
termination.
• A process can be terminated if it tries to use a resource that it is
not allowed to. For example, a process can be terminated for
trying to write into a read-only file.
• If an I/O failure occurs for a process, it can be terminated. For
example - If a process requires the printer and it is not working,
then the process will be terminated.
• In most cases, if a parent process is terminated then its child
processes are also terminated. This is done because the child
process cannot exist without the parent process.
• If a process requires more memory than is currently available in
the system, then it is terminated because of memory scarcity.

Cooperating Processes
Cooperating processes are those that can affect or are affected by other processes
running on the system. Cooperating processes may share data with each other.

Reasons for needing cooperating processes


There may be many reasons for the requirement of cooperating processes. Some of
these are given as follows −

• Modularity
Modularity involves dividing complicated tasks into smaller subtasks. These
subtasks can be completed by different cooperating processes. This leads to
faster and more efficient completion of the required tasks.


• Information Sharing
Sharing of information between multiple processes can be accomplished using
cooperating processes. This may include access to the same files. A
mechanism is required so that the processes can access the files in parallel to
each other.

• Convenience
There are many tasks that a user needs to do such as compiling, printing,
editing etc. It is convenient if these tasks can be managed by cooperating
processes.

• Computation Speedup
Subtasks of a single task can be performed in parallel using cooperating
processes. This speeds up computation, as the task can be executed faster.
However, this is only possible if the system has multiple processing
elements.

Methods of Cooperation
Cooperating processes can coordinate with each other using shared data or
messages. Details about these are given as follows −

• Cooperation by Sharing
The cooperating processes can cooperate with each other using shared data
such as memory, variables, files, databases etc. A critical section is used
to preserve data integrity, and writing is made mutually exclusive to
prevent inconsistent data.
A diagram that demonstrates cooperation by sharing is given as follows −

In the above diagram, Process P1 and P2 can cooperate with each other using
shared data such as memory, variables, files, databases etc.
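A minimal sketch of cooperation by sharing in Python (assuming a POSIX system where `multiprocessing` uses the fork start method; the `deposit` helper and the counts are illustrative). Two processes increment one shared integer, and the lock makes each write a critical section:

```python
from multiprocessing import Process, Value, Lock

def deposit(balance, lock, times):
    for _ in range(times):
        with lock:                      # critical section: writes are mutually exclusive
            balance.value += 1

balance = Value("i", 0)                 # an integer placed in shared memory
lock = Lock()
workers = [Process(target=deposit, args=(balance, lock, 1000)) for _ in range(2)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(balance.value)                    # 2000; without the lock the result could be lower
```

Without the lock, the two `+= 1` updates can interleave and lose increments, which is exactly the inconsistency the critical section prevents.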


• Cooperation by Communication
The cooperating processes can cooperate with each other using messages.
This may lead to deadlock if each process is waiting for a message from the
other before performing an operation. Starvation is also possible if a
process never receives a message.
A diagram that demonstrates cooperation by communication is given as follows

In the above diagram, Process P1 and P2 can cooperate with each other using
messages to communicate.
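Cooperation by communication can be sketched with message queues in Python (the `worker` function and message strings are illustrative; a fork-capable POSIX system is assumed). One process sends a request and blocks for the reply:

```python
from multiprocessing import Process, Queue

def worker(inbox, outbox):
    msg = inbox.get()                   # blocks until a message arrives
    outbox.put("ack: " + msg)           # reply to the other process

inbox, outbox = Queue(), Queue()
p = Process(target=worker, args=(inbox, outbox))
p.start()
inbox.put("print job 42")               # P1 -> P2
reply = outbox.get()                    # P1 blocks here; if both sides only
                                        # waited and never sent, they would deadlock
p.join()
print(reply)                            # ack: print job 42
```

The blocking `get()` calls show where the deadlock risk mentioned above comes from: if each side waited for the other to send first, neither would ever proceed.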

What is Thread?
A thread is a flow of execution through the process code, with its own program counter
that keeps track of which instruction to execute next, system registers which hold its
current working variables, and a stack which contains the execution history.
A thread shares information such as the code segment, the data segment, and
open files with its peer threads. When one thread alters a shared memory
item, all other threads see the change.
A thread is also called a lightweight process. Threads provide a way to
improve application performance through parallelism. Threads represent a
software approach to improving operating system performance by reducing
overhead; in its behavior, a thread is equivalent to a classical process.
Each thread belongs to exactly one process and no thread can exist outside a
process. Each thread represents a separate flow of control. Threads have
been successfully used in implementing network servers and web servers. They also
provide a suitable foundation for parallel execution of applications on shared memory
multiprocessors. The following figure shows the working of a single-threaded and a
multithreaded process.
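A small Python sketch of the sharing described above: four threads share the process's data segment (the global `counter`) while each keeps its own stack and program counter, and a lock keeps the shared update consistent (the names and counts are illustrative):

```python
import threading

counter = 0                             # data segment: shared by all threads
lock = threading.Lock()

def work():
    global counter
    for _ in range(10_000):
        with lock:                      # serialize updates to the shared variable
            counter += 1

# Each Thread is a separate flow of control inside the same process.
threads = [threading.Thread(target=work) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                          # 40000
```

Because all four threads see the same `counter`, an update made by one is immediately visible to the others, which is exactly what distinguishes threads from processes with distinct address spaces.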


Difference between Process and Thread


1. Process is heavyweight, or resource-intensive. Thread is lightweight,
taking fewer resources than a process.
2. Process switching needs interaction with the operating system. Thread
switching does not need to interact with the operating system.
3. In multiple processing environments, each process executes the same code
but has its own memory and file resources. All threads can share the same
set of open files and child processes.
4. If one process is blocked, then no other process can execute until the
first process is unblocked. While one thread is blocked and waiting, a
second thread in the same task can run.


5. Multiple processes without using threads use more resources.
Multithreaded processes use fewer resources.
6. In multiple processes, each process operates independently of the
others. One thread can read, write, or change another thread's data.

Advantages of Thread
• Threads minimize the context switching time.
• Use of threads provides concurrency within a process.
• Efficient communication.
• It is more economical to create and context switch threads.
• Threads allow utilization of multiprocessor architectures to a greater scale and
efficiency.

Types of Thread
Threads are implemented in following two ways −
• User Level Threads − User managed threads.
• Kernel Level Threads − Operating System managed threads acting on kernel,
an operating system core.

User Level Threads


In this case, the kernel is not aware of the existence of threads.
The thread library contains code for creating and destroying threads, for passing
message and data between threads, for scheduling thread execution and for saving
and restoring thread contexts. The application starts with a single thread.


Advantages

• Thread switching does not require Kernel mode privileges.


• User level thread can run on any operating system.
• Scheduling can be application specific in the user level thread.
• User level threads are fast to create and manage.
Disadvantages

• In a typical operating system, most system calls are blocking.


• Multithreaded application cannot take advantage of multiprocessing.

Kernel Level Threads


In this case, thread management is done by the Kernel. There is no thread
management code in the application area. Kernel threads are supported directly by
the operating system. Any application can be programmed to be multithreaded. All of
the threads within an application are supported within a single process.
The Kernel maintains context information for the process as a whole and for
individual threads within the process. Scheduling by the Kernel is done on a thread
basis. The Kernel performs thread creation, scheduling and management in Kernel
space. Kernel threads are generally slower to create and manage than the user
threads.


Advantages

• Kernel can simultaneously schedule multiple threads from the same process on
multiple processors.
• If one thread in a process is blocked, the Kernel can schedule another thread of the
same process.
• Kernel routines themselves can be multithreaded.
Disadvantages

• Kernel threads are generally slower to create and manage than the user threads.
• Transfer of control from one thread to another within the same process requires a
mode switch to the Kernel.

Multithreading Models
Some operating systems provide a combined user level thread and Kernel level
thread facility. Solaris is a good example of this combined approach. In a combined
system, multiple threads within the same application can run in parallel on multiple
processors and a blocking system call need not block the entire process.
Multithreading models are of three types −

• Many to many relationship.


• Many to one relationship.
• One to one relationship.

Many to Many Model


The many-to-many model multiplexes any number of user threads onto an equal or
smaller number of kernel threads.
The following diagram shows the many-to-many threading model, where 6 user
level threads are multiplexed onto 6 kernel level threads. In this model, developers can
create as many user threads as necessary and the corresponding Kernel threads can
run in parallel on a multiprocessor machine. This model provides the best accuracy
on concurrency and when a thread performs a blocking system call, the kernel can
schedule another thread for execution.


Many to One Model


The many-to-one model maps many user level threads to one Kernel-level
thread. Thread management is done in user space by the thread library. When
a thread makes a blocking system call, the entire process will be blocked.
Only one thread can access
the Kernel at a time, so multiple threads are unable to run in parallel on
multiprocessors.
If the user-level thread library is implemented in an operating system in
such a way that the system does not support kernel threads, then the
many-to-one relationship mode is used.


One to One Model


There is a one-to-one relationship between each user-level thread and a
kernel-level thread. This model provides more concurrency than the
many-to-one model. It also allows another thread to run when a thread makes
a blocking system call. It allows multiple threads to execute in parallel
on multiprocessors.
The disadvantage of this model is that creating a user thread requires
creating the corresponding Kernel thread. OS/2, Windows NT and Windows 2000
use the one-to-one relationship model.


Difference between User-Level & Kernel-Level Thread


1. User-level threads are faster to create and manage. Kernel-level threads
are slower to create and manage.
2. Implementation is by a thread library at the user level. The operating
system supports creation of Kernel threads.
3. User-level threads are generic and can run on any operating system.
Kernel-level threads are specific to the operating system.
4. Multi-threaded applications cannot take advantage of multiprocessing.
Kernel routines themselves can be multithreaded.


What is Process Scheduling?


Process Scheduling is an OS task that schedules processes of different states
like ready, waiting, and running.

Process scheduling allows the OS to allocate a time interval of CPU
execution to each process. Another important reason for using a process
scheduling system is that it keeps the CPU busy all the time. This helps to
achieve a minimal response time for programs.

Process Scheduling Queues


Process Scheduling Queues help you to maintain a distinct queue for each
process state, holding the PCBs of all processes in that state. All
processes in the same execution state are placed in the same queue.
Therefore, whenever the state of a process is modified, its PCB is unlinked
from its existing queue and linked to the queue of its new state.

Three types of operating system queues are:

1. Job queue – It stores all the processes in the system.
2. Ready queue – It holds every process that resides in main memory and is
ready and waiting to execute.
3. Device queues – These hold the processes that are blocked due to the
unavailability of an I/O device.


In the above-given Diagram,

• Rectangle represents a queue.


• Circle denotes the resource
• Arrow indicates the flow of the process.

1. Every new process is first put in the ready queue, where it waits until
it is selected for execution, i.e. dispatched.
2. One of the processes is allocated the CPU and starts executing.
3. The process may issue an I/O request and then be placed in the I/O
queue.
4. The process may create a new subprocess and wait for its termination.
5. The process may be removed forcibly from the CPU as a result of an
interrupt; once the interrupt is handled, it is put back in the ready
queue.

Two State Process Model


Two-state process models are:

• Running
• Not Running

Running
In the Operating system, whenever a new process is built, it is entered into the
system, which should be running.

Not Running
The process that are not running are kept in a queue, which is waiting for their
turn to execute. Each entry in the queue is a point to a specific process.

Scheduling Objectives
Here, are important objectives of Process scheduling

• Maximize the number of interactive users within acceptable response


times.


• Achieve a balance between response and utilization.


• Avoid indefinite postponement and enforce priorities.
• It should also give preference to the processes holding key resources.

Type of Process Schedulers


A scheduler is a type of system software that allows you to handle process
scheduling.

There are mainly three types of Process Schedulers:

1. Long Term
2. Short Term
3. Medium Term

• Long-Term Schedulers
Long-term schedulers perform long-term scheduling. This involves selecting
the processes from the storage pool in the secondary memory and loading them
into the ready queue in the main memory for execution.
The long-term scheduler controls the degree of multiprogramming. It must
select a careful mixture of I/O bound and CPU bound processes to yield
optimum system throughput. If it selects too many CPU bound processes then
the I/O devices are idle and if it selects too many I/O bound processes then the
processor has nothing to do.

• Short-Term Schedulers
Short-term schedulers perform short-term scheduling. This involves selecting
one of the processes from the ready queue and scheduling them for execution.
A scheduling algorithm is used to decide which process will be scheduled for
execution next by the short-term scheduler.
A diagram that demonstrates scheduling using long-term and short-term
schedulers is given as follows −


• Medium-Term Schedulers
Medium-term schedulers perform medium-term scheduling. This involves
swapping out a process from main memory. The process can be swapped in
later from the point it stopped executing.
This can also be called suspending and resuming the process, and it is
helpful in reducing the degree of multiprogramming. Swapping is also useful to improve
the mix of I/O bound and CPU bound processes in the memory.
A diagram that demonstrates medium-term scheduling is given as follows −


Difference between Schedulers


1. The long-term scheduler is also known as the job scheduler. The
short-term scheduler is also known as the CPU scheduler. The medium-term
scheduler is also called the swapping scheduler.
2. The long-term scheduler is either absent or minimal in a time-sharing
system. The short-term scheduler plays an insignificant role in a
time-sharing system. The medium-term scheduler is an element of
time-sharing systems.
3. The long-term scheduler is slower than the short-term scheduler. The
short-term scheduler is the fastest of the three. The medium-term scheduler
offers medium speed.
4. The long-term scheduler selects processes from the job pool and loads
them into memory. The short-term scheduler selects only among processes
that are in the ready state. The medium-term scheduler sends swapped-out
processes back to memory.
5. The long-term scheduler offers full control over the degree of
multiprogramming. The short-term scheduler offers less control. The
medium-term scheduler reduces the degree of multiprogramming.

What is Context switch?


Switching the CPU to another process require performing a state
save of the current process and a state restore of a different process.
This task is known as a context switch. When a context switch
occurs, the kernel saves the context of the old process in its PCB and
loads the saved context of the new process scheduled to run.
Context switch time is pure overhead, because the system does no
useful work while switching. Its speed varies from machine to
machine depending on the memory speed, the number of registers
that must be copied, and the existence of special instructions. Typical
speeds are a few milliseconds, context switching times are highly
dependent on hardware support.


Types of CPU Scheduling


Here are two kinds of Scheduling methods:

Preemptive Scheduling:
Preemptive scheduling is used when a process switches from running
state to ready state or from waiting state to ready state. The resources
(mainly CPU cycles) are allocated to the process for a limited amount of
time and then taken away; the process is placed back in the ready queue if
it still has CPU burst time remaining. It stays in the ready queue until it
gets its next chance to execute. Algorithms based on preemptive scheduling
are Round Robin (RR), Shortest Remaining Time First (SRTF), Priority
(preemptive version), etc.

Non-Preemptive Scheduling:
Non-preemptive Scheduling is used when a process terminates, or a
process switches from running to waiting state. In this scheduling, once
the resources (CPU cycles) are allocated to a process, the process holds
the CPU till it terminates or reaches a waiting state. Non-preemptive
scheduling does not interrupt a process running on the CPU in the middle of
its execution. Instead, it waits till the process completes its CPU burst
time and only then allocates the CPU to another process. Algorithms based
on non-preemptive scheduling are Shortest Job First (SJF, basically
non-preemptive) and Priority (non-preemptive version), etc.


Preemptive Vs Non-Preemptive Scheduling


1. Preemptive: resources are allocated to a process for a limited time.
Non-preemptive: resources are used and then held by the process until it
gets terminated.
2. Preemptive: the process can be interrupted, even before completion.
Non-preemptive: the process is not interrupted until its life cycle is
complete.
3. Preemptive: starvation may be caused by the insertion of higher-priority
processes in the queue. Non-preemptive: starvation can occur when a process
with a large burst time occupies the system.
4. Preemptive: maintaining the queue and remaining times requires storage
overhead. Non-preemptive: no such overheads are required.

CPU Scheduling Criteria


Different CPU scheduling algorithms have different properties and the
choice of a particular algorithm may favor one class of process over
another.
In choosing which algorithm to use we must consider the following
criteria.

CPU Utilization
We want to keep the CPU as busy as possible. In a real system it should
range from 40% to 90%.

Throughput
If the CPU is busy executing processes, then work is being done. One
measure of work is the number of processes that are completed per time
unit, called throughput.

Turn Around Time


The interval from the time of submission of a process to the time of its
completion is the turnaround time. It is the sum of the periods spent
waiting to get into memory, waiting in the ready queue, executing on the
CPU, and doing I/O.


Waiting Time
It is the sum of the periods spent waiting in the ready queue.

Response Time
The time from the submission of a request until the first response is
produced is called the response time. It is the time the process takes to
start responding.

Interval Timer
Timer interruption is a method that is closely related to preemption. When a
certain process gets the CPU allocation, a timer may be set to a specified
interval. Both timer interruption and preemption force a process to return the
CPU before its CPU burst is complete.

Most multiprogrammed operating systems use some form of a timer to prevent
a process from tying up the system forever.

What is Dispatcher?
It is a module that gives control of the CPU to the process selected by the
short-term scheduler. The Dispatcher should be fast so that it can run on
every context switch. Dispatch latency is the
amount of time needed by the CPU scheduler to stop one process and start
another.

Functions performed by Dispatcher:

• Context Switching
• Switching to user mode
• Moving to the correct location in the newly loaded program.

Types of CPU scheduling Algorithm


There are mainly six types of process scheduling algorithms

1. First Come First Serve (FCFS)


2. Shortest-Job-First (SJF) Scheduling
3. Shortest Remaining Time
4. Priority Scheduling
5. Round Robin Scheduling


6. Multilevel Queue Scheduling

Scheduling Algorithms

First Come First Serve


First Come First Serve is the full form of FCFS. It is the simplest CPU
scheduling algorithm. In this type of algorithm, the process that requests
the CPU first gets the CPU first. This scheduling method can be managed
with a FIFO queue.

As a process enters the ready queue, its PCB (Process Control Block) is
linked to the tail of the queue. So, when the CPU becomes free, it is
assigned to the process at the head of the queue.


Characteristics of FCFS method:


• It is a non-preemptive scheduling algorithm.
• Jobs are always executed on a first-come, first-served basis.
• It is easy to implement and use.
• However, this method is poor in performance, and the general wait time
is quite high.

Example:
Process Arrival Time Burst Time(in ns)
P1 0 5
P2 1 4
P3 3 3
P4 4 1
P5 2 2

Gantt Chart: -
P1 (0-5) | P2 (5-9) | P5 (9-11) | P3 (11-14) | P4 (14-15)
Turn Around Time: -
Note: Turn Around Time= Completion Time – Arrival Time
P1 = 5 – 0 = 5
P2 = 9 – 1 = 8
P3 = 14 – 3 = 11
P4 = 15 – 4 = 11
P5 = 11 – 2 = 9
Average Turn Around Time = (5 + 8 + 11 + 11 + 9) / 5 = 8.8 ns

Waiting Time: -
Note: Waiting Time= Turn Around Time – Burst Time
P1 = 5 – 5 = 0
P2 = 8 – 4 = 4

P3 = 11 – 3 = 8
P4 = 11 – 1 = 10
P5 = 9 – 2 = 7
Average Waiting Time = (0 + 4 + 8 + 10 + 7) / 5 = 5.8 ns

Response Time: -
Note: Response Time= First CPU Allocation Time – Arrival Time
P1 = 0 – 0 = 0
P2 = 5 – 1 = 4
P3 = 11 – 3 = 8
P4 = 14 – 4 = 10
P5 = 9 – 2 = 7
Average Response Time = (0 + 4 + 8 + 10 + 7) / 5 = 5.8 ns
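The FCFS calculation for this example can be reproduced with a small simulator (a sketch; the `fcfs` helper and the tuple layout are illustrative, not part of the notes):

```python
def fcfs(procs):
    # procs: (name, arrival, burst); served strictly in arrival order (FIFO queue)
    time, results = 0, {}
    for name, arrival, burst in sorted(procs, key=lambda p: p[1]):
        start = max(time, arrival)      # CPU idles if the next process has not arrived
        time = start + burst            # non-preemptive: runs to completion
        results[name] = {"turnaround": time - arrival,
                         "waiting": start - arrival,    # turnaround - burst
                         "response": start - arrival}   # same as waiting in FCFS
    return results

r = fcfs([("P1", 0, 5), ("P2", 1, 4), ("P3", 3, 3), ("P4", 4, 1), ("P5", 2, 2)])
avg = lambda key: sum(v[key] for v in r.values()) / len(r)
print(avg("turnaround"), avg("waiting"), avg("response"))   # 8.8 5.8 5.8
```

Running it confirms the worked averages above: 8.8 ns turnaround and 5.8 ns waiting/response.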

Shortest Job First


SJF (Shortest Job First) is a scheduling algorithm in which the process
with the shortest execution time is selected for execution next. This
scheduling method can be preemptive or non-preemptive. It significantly
reduces the average waiting time for other processes awaiting execution.

Characteristics of SJF Scheduling


• Each job is associated with a unit of time to complete (its CPU burst).
• In this method, when the CPU is available, the process or job with the
shortest completion time is executed first.
• The example below implements the non-preemptive policy.
• This algorithm is useful for batch-type processing, where waiting for
jobs to complete is not critical.
• It improves throughput by executing shorter jobs first, which mostly
have a shorter turnaround time.


Example:

Process Arrival Time Burst Time


P1 0 4
P2 1 3
P3 2 1
P4 3 6
P5 4 2

Gantt Chart: -
P1 (0-4) | P3 (4-5) | P5 (5-7) | P2 (7-10) | P4 (10-16)
Turn Around Time: -
Note: Turn Around Time= Completion Time – Arrival Time
P1 = 4 – 0 = 4
P2 = 10 – 1 = 9
P3 = 5 – 2 = 3
P4 = 16 – 3 = 13
P5 = 7 – 4 = 3
Average Turn Around Time = (4 + 9 + 3 + 13 + 3) / 5 = 6.4 ns

Waiting Time: -
Note: Waiting Time= Turn Around Time – Burst Time
P1 = 4 – 4 = 0
P2 = 9 – 3 = 6
P3 = 3 – 1 = 2
P4 = 13 – 6 = 7
P5 = 3 – 2 = 1
Average Waiting Time = (0 + 6 + 2 + 7 + 1) / 5 = 3.2 ns


Response Time: -
Note: Response Time= First CPU Allocation Time – Arrival Time
P1 = 0 – 0 = 0
P2 = 7 – 1 = 6
P3 = 4 – 2 = 2
P4 = 10 – 3= 7
P5 = 5 – 4 = 1
Average Response Time = (0 + 6 + 2 + 7 + 1) / 5 = 3.2 ns
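The non-preemptive SJF schedule above can be reproduced with a short simulator (a sketch; the `sjf` helper name is illustrative):

```python
def sjf(procs):
    # Non-preemptive SJF: whenever the CPU is free, pick the ready process
    # with the shortest burst time.
    pending = {name: (arrival, burst) for name, arrival, burst in procs}
    time, results = 0, {}
    while pending:
        ready = [n for n, (a, _) in pending.items() if a <= time]
        if not ready:                                   # CPU idle until next arrival
            time = min(a for a, _ in pending.values())
            continue
        name = min(ready, key=lambda n: pending[n][1])  # shortest burst first
        arrival, burst = pending.pop(name)
        results[name] = {"response": time - arrival,    # equals waiting here,
                         "waiting": time - arrival,     # since it runs to completion
                         "turnaround": time + burst - arrival}
        time += burst
    return results

r = sjf([("P1", 0, 4), ("P2", 1, 3), ("P3", 2, 1), ("P4", 3, 6), ("P5", 4, 2)])
avg = lambda key: sum(v[key] for v in r.values()) / len(r)
print(avg("turnaround"), avg("waiting"))                # 6.4 3.2
```

This yields the same execution order as the Gantt chart (P1, P3, P5, P2, P4) and the same 6.4 ns / 3.2 ns averages.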

Shortest Remaining Time First


The full form of SRTF is Shortest Remaining Time First. It is also known as
preemptive SJF scheduling. In this method, the CPU is allocated to the
process that is closest to its completion. This method prevents a newer
ready-state process from holding up the completion of an older process.

Characteristics of SRTF scheduling method:


• This method is mostly applied in batch environments where short jobs
need to be given preference.
• It is not an ideal method to implement in a shared system where the
required CPU time is unknown.
• Each process is associated with the length of its next CPU burst; the
operating system uses these lengths to schedule the process with the
shortest remaining time.

Example
Process Arrival Time Burst Time
P1 0 7
P2 1 5
P3 2 3
P4 3 1
P5 4 2
P6 5 1


Gantt Chart: -
P1 (0-1) | P2 (1-2) | P3 (2-3) | P4 (3-4) | P3 (4-6) | P6 (6-7) | P5 (7-9) | P2 (9-13) | P1 (13-19)
Turn Around Time: -
Note: Turn Around Time= Completion Time – Arrival Time
P1 = 19 – 0 = 19
P2 = 13 – 1 = 12
P3 = 6 – 2 = 4
P4 = 4 – 3 = 1
P5 = 9 – 4 = 5
P6 = 7 – 5 = 2
Average Turn Around Time = (19 + 12 + 4 + 1 + 5 + 2) / 6 = 7.16 ns
Waiting Time: -
Note: Waiting Time= Turn Around Time – Burst Time
P1 = 19 – 7 = 12
P2 = 12 – 5 = 7
P3 = 4 – 3 = 1
P4 = 1 – 1 = 0
P5 = 5 – 2 = 3
P6= 2 – 1 = 1
Average Waiting Time = (12 + 7 + 1 + 0 + 3 + 1) / 6 = 4 ns

Response Time: -
Note: Response Time= First CPU Allocation Time – Arrival Time
P1 = 0 – 0 = 0
P2 = 1 – 1 = 0
P3 = 2 – 2 = 0
P4 = 3 – 3 = 0


P5 = 7 – 4 = 3
P6= 6 – 5 = 1
Average Response Time = (0 + 0 + 0 + 0 + 3 + 1) / 6 = 0.66 ns
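The preemptive schedule can be simulated one time unit at a time (a sketch; the `srtf` helper and the tie-break rule, earlier arrival wins, are assumptions that match the Gantt chart above):

```python
def srtf(procs):
    # Preemptive SJF: each time unit goes to the ready process with the
    # least remaining time (ties broken by earlier arrival).
    arrival = {n: a for n, a, _ in procs}
    burst = {n: b for n, _, b in procs}
    rem = dict(burst)
    time, first_run, completion = 0, {}, {}
    while rem:
        ready = [n for n in rem if arrival[n] <= time]
        if not ready:
            time += 1
            continue
        n = min(ready, key=lambda p: (rem[p], arrival[p]))
        first_run.setdefault(n, time)                  # for response time
        rem[n] -= 1
        time += 1
        if rem[n] == 0:
            del rem[n]
            completion[n] = time
    return {n: {"turnaround": completion[n] - arrival[n],
                "waiting": completion[n] - arrival[n] - burst[n],
                "response": first_run[n] - arrival[n]} for n in completion}

r = srtf([("P1", 0, 7), ("P2", 1, 5), ("P3", 2, 3),
          ("P4", 3, 1), ("P5", 4, 2), ("P6", 5, 1)])
avg = lambda key: sum(v[key] for v in r.values()) / len(r)
print(round(avg("turnaround"), 2), avg("waiting"))
```

The averages come out to 43/6 ≈ 7.17 ns turnaround (the notes truncate this to 7.16) and exactly 4 ns waiting.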

Priority Based Scheduling


Priority scheduling is a method of scheduling processes based on priority.
In this method, the scheduler selects the tasks to execute according to
their priority.

Priority scheduling also helps the OS to assign priorities to processes.
The processes with higher priority are carried out first, whereas jobs with
equal priorities are carried out on a round-robin or FCFS basis. Priority
can be decided based on memory requirements, time requirements, etc.

It is of two types: -
1) Preemptive priority scheduling
2) Non-preemptive priority scheduling

Non-Preemptive priority scheduling

PROCESS ARRIVAL TIME BURST TIME PRIORITY


P1 0 4 4
P2 1 5 5
P3 2 1 7
P4 3 2 2
P5 4 3 1
P6 5 6 6
NOTE: Highest number is the highest priority

Gantt Chart
P1 (0-4) | P3 (4-5) | P6 (5-11) | P2 (11-16) | P4 (16-18) | P5 (18-21)


Turn Around Time: -


Note: Turn Around Time= Completion Time – Arrival Time
P1 = 4 – 0 = 4
P2 =16 – 1 = 15
P3 = 5 – 2 = 3
P4 = 18 – 3 = 15
P5 = 21 – 4 = 17
P6 = 11 – 5 = 6
Avg. Turn Around Time = (4 + 15 + 3 + 15 + 17 + 6) / 6 = 10 ms

Waiting Time: -
Note: Waiting Time= Turn Around Time – Burst Time
P1 = 4 – 4= 0
P2 =15 – 5 = 10
P3 = 3 – 1 = 2
P4 = 15 – 2 = 13
P5 = 17 – 3 = 14
P6 = 6 – 6 = 0
Avg. Waiting Time = (0 + 10 + 2 + 13 + 14 + 0) / 6 = 6.5 ms

Response Time: -
Note: Response Time= First CPU Allocation Time – Arrival Time
P1 = 0 – 0= 0
P2 =11 – 1 = 10
P3 = 4 – 2 = 2
P4 = 16 – 3 = 13
P5 = 18 – 4 = 14
P6 = 5 – 5 = 0
Avg. Response Time = (0 + 10 + 2 + 13 + 14 + 0) / 6 = 6.5 ms
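Non-preemptive priority scheduling is the same loop as non-preemptive SJF with a different selection key. A sketch that reproduces the example above (the `priority_np` helper is illustrative; as in the notes, a larger number means higher priority):

```python
def priority_np(procs):
    # Non-preemptive priority: whenever the CPU is free, pick the ready
    # process with the highest priority number; it then runs to completion.
    pending = {n: (a, b, pr) for n, a, b, pr in procs}
    time, results = 0, {}
    while pending:
        ready = [n for n, (a, _, _) in pending.items() if a <= time]
        if not ready:                                   # CPU idle until next arrival
            time = min(a for a, _, _ in pending.values())
            continue
        name = max(ready, key=lambda n: pending[n][2])  # highest priority first
        arrival, burst, _ = pending.pop(name)
        results[name] = {"waiting": time - arrival,
                         "turnaround": time + burst - arrival}
        time += burst
    return results

r = priority_np([("P1", 0, 4, 4), ("P2", 1, 5, 5), ("P3", 2, 1, 7),
                 ("P4", 3, 2, 2), ("P5", 4, 3, 1), ("P6", 5, 6, 6)])
avg = lambda key: sum(v[key] for v in r.values()) / len(r)
print(avg("turnaround"), avg("waiting"))                # 10.0 6.5
```

This reproduces the Gantt order P1, P3, P6, P2, P4, P5 and the 10 ms / 6.5 ms averages.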


Preemptive priority scheduling

PROCESS ARRIVAL TIME BURST TIME PRIORITY


P1 1 4 5
P2 2 5 2
P3 3 6 6
P4 0 1 4
P5 4 2 7
P6 5 3 8
NOTE: Highest number is the highest priority
Gantt Chart
P4 (0-1) | P1 (1-3) | P3 (3-4) | P5 (4-5) | P6 (5-8) | P5 (8-9) | P3 (9-14) | P1 (14-16) | P2 (16-21)

Turn Around Time: -


Note: Turn Around Time= Completion Time – Arrival Time
P1 = 16 – 1 = 15
P2 =21 – 2 = 19
P3 = 14 – 3 = 11
P4 = 1 – 0 = 1
P5 = 9 – 4 = 5
P6 = 8 – 5 = 3
Avg. Turn Around Time = (15 + 19 + 11 + 1 + 5 + 3) / 6 = 9 ms

Waiting Time: -
Note: Waiting Time= Turn Around Time – Burst Time
P1 = 15 – 4= 11
P2 =19 – 5 = 14
P3 = 11 – 6 = 5
P4 = 1 – 1 = 0
P5 = 5 – 2 = 3
P6 = 3 – 3 = 0
Avg. Waiting Time = (11 + 14 + 5 + 0 + 3 + 0) / 6 = 5.5 ms


Response Time: -
Note: Response Time= First CPU Allocation Time – Arrival Time
P1 = 1 – 1= 0
P2 =16 – 2 = 14
P3 = 3 – 3 = 0
P4 = 0 – 0 = 0
P5 = 4 – 4 = 0
P6 = 5 – 5 = 0
Avg. Response Time = (0 + 14 + 0 + 0 + 0 + 0) / 6 = 2.3 ms

Round-Robin Scheduling
Round robin is one of the oldest and simplest scheduling algorithms. The
name of this algorithm comes from the round-robin principle, where each
person gets an equal share of something in turn. It is mostly used for
scheduling in multitasking systems. This method allows starvation-free
execution of processes.

Characteristics of Round-Robin Scheduling


• Round robin is a hybrid, clock-driven model.
• The time slice should be small; it is the fixed amount of processor time
assigned to each task in turn, though it may vary across systems.
• It suits real-time systems, which must respond to an event within a
specific time limit.

PROCESS ARRIVAL TIME BURST TIME


P1 0 3
P2 1 5
P3 2 2
P4 3 5
P5 4 5
NOTE: Time Slice = 2 ms
Gantt Chart: -
P1 (0-2) | P2 (2-4) | P3 (4-6) | P1 (6-7) | P4 (7-9) | P5 (9-11) | P2 (11-13) | P4 (13-15) | P5 (15-17) | P2 (17-18) | P4 (18-19) | P5 (19-20)


Turn Around Time: -


Note: Turn Around Time= Completion Time – Arrival Time
P1 = 7 – 0 = 7
P2 =18 – 1 = 17
P3 = 6 – 2 = 4
P4 = 19 – 3 = 16
P5 = 20 – 4 = 16
Avg. Turn Around Time = (7 + 17 + 4 + 16 + 16) / 5 = 12 ms

Waiting Time: -
Note: Waiting Time= Turn Around Time – Burst Time
P1 = 7 – 3= 4
P2 =17 – 5 = 12
P3 = 4 – 2 = 2
P4 = 16 – 5 = 11
P5 = 16 – 5 = 11

Avg. Waiting Time = (4 + 12 + 2 + 11 + 11) / 5 = 8 ms

Response Time: -
Note: Response Time= First CPU Allocation Time – Arrival Time
P1 = 0 – 0 = 0
P2 = 2 – 1 = 1
P3 = 4 – 2 = 2
P4 = 7 – 3 = 4
P5 = 9 – 4 = 5

Avg. Response Time = (0 + 1 + 2 + 4 + 5) / 5 = 2.4 ms
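A round-robin simulator makes the queue mechanics explicit (a sketch; the `round_robin` helper and the convention that new arrivals enqueue ahead of a just-preempted process are assumptions consistent with the Gantt chart above):

```python
from collections import deque

def round_robin(procs, quantum):
    # procs: (name, arrival, burst). A process that exhausts its time slice
    # goes to the tail of the ready queue, behind any processes that
    # arrived while it was running.
    procs = sorted(procs, key=lambda p: p[1])
    arrival = {n: a for n, a, _ in procs}
    burst = {n: b for n, _, b in procs}
    rem = dict(burst)
    queue, time, i = deque(), 0, 0
    first_run, completion = {}, {}

    def admit(now):                     # move newly arrived processes into the queue
        nonlocal i
        while i < len(procs) and procs[i][1] <= now:
            queue.append(procs[i][0])
            i += 1

    admit(0)
    while queue or i < len(procs):
        if not queue:                   # CPU idle until the next arrival
            time = procs[i][1]
            admit(time)
            continue
        n = queue.popleft()
        first_run.setdefault(n, time)
        run = min(quantum, rem[n])
        time += run
        rem[n] -= run
        admit(time)                     # arrivals enqueue before the preempted process
        if rem[n]:
            queue.append(n)             # time slice expired: back to the tail
        else:
            completion[n] = time
    return {n: {"turnaround": completion[n] - arrival[n],
                "waiting": completion[n] - arrival[n] - burst[n],
                "response": first_run[n] - arrival[n]} for n in completion}

r = round_robin([("P1", 0, 3), ("P2", 1, 5), ("P3", 2, 2),
                 ("P4", 3, 5), ("P5", 4, 5)], quantum=2)
avg = lambda key: sum(v[key] for v in r.values()) / len(r)
print(avg("turnaround"), avg("waiting"), avg("response"))   # 12.0 8.0 2.4
```

With a 2 ms time slice this reproduces the Gantt chart above and the 12 ms / 8 ms / 2.4 ms averages.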


Multiple-Level Queues Scheduling


This algorithm separates the ready queue into various separate queues. In
this method, processes are assigned to a queue based on a specific property
of the process, like the process priority, size of the memory, etc.

However, this is not an independent scheduling OS algorithm as it needs to


use other types of algorithms in order to schedule the jobs.

Characteristic of Multiple-Level Queues Scheduling:


• Multiple queues are maintained for processes with common
characteristics.
• Every queue may have its separate scheduling algorithms.
• Priorities are given for each queue.

Let us consider an example of a multilevel queue-scheduling algorithm


with five queues:

1. System Processes
2. Interactive Processes
3. Interactive Editing Processes
4. Batch Processes
5. Student Processes

Each queue has absolute priority over lower-priority queues. No process
in the batch queue, for example, could run unless the queues for system
processes, interactive processes, and interactive editing processes were
all empty. If an interactive editing process entered the ready queue while
a batch process was running, the batch process would be preempted.
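The absolute-priority rule over the five queues can be sketched as follows (the queue contents and process names are purely illustrative):

```python
from collections import deque

# Levels listed from highest to lowest priority; each level can use its own
# internal scheduling policy (plain FIFO here).
queues = {
    "system":              deque(["logger"]),
    "interactive":         deque(),
    "interactive editing": deque(),
    "batch":               deque(["payroll", "backup"]),
    "student":             deque(["assignment"]),
}

def next_process(queues):
    # Absolute priority: scan levels top-down and take the head of the first
    # non-empty queue. A batch job runs only when every queue above is empty.
    for level, q in queues.items():     # dicts preserve insertion order
        if q:
            return level, q.popleft()
    return None, None

print(next_process(queues))             # ('system', 'logger')
print(next_process(queues))             # ('batch', 'payroll')
```

The first call dispatches the system process; only once the higher queues are empty do the batch and student jobs get the CPU.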

