OS Complete

Experiment – 1

Aim: Understanding Operating Systems and their Types.


Theory:
Introduction:
An Operating System (OS) is an interface between a computer user and computer hardware. An
operating system is software that performs all the basic tasks like file management, memory
management, process management, handling input and output, and controlling peripheral devices
such as disk drives and printers.

Types of Operating Systems:


Single-user, single-tasking operating systems:
Batch OS was the first operating system for second-generation computers. This OS does not
interact with the computer directly. Instead, an operator groups similar jobs together into a batch,
and these batches are then executed one by one on a first-come, first-served basis.
Examples of this type of operating system include DOS and Windows 3.x.

Multi-user, single-tasking operating systems:


This type of operating system is designed to support multiple users but can execute only one task
at a time. Examples of this type of operating system include UNIX and Linux.

Multi-user, multi-tasking operating systems:


This type of operating system is designed to support multiple users and execute multiple tasks
simultaneously. Examples of this type of operating system include Windows NT, Windows 2000,
Windows XP, Windows 7, Windows 10, and Mac OS X.

Real-time operating systems:


In real-time systems, each job carries a certain deadline within which it is supposed to be
completed; otherwise there will be a huge loss, or even if the result is produced, it will be
completely useless. Examples of this type of operating system include VxWorks and QNX.

Embedded operating systems:


This type of operating system is designed for use in embedded systems, such as smartphones,
smartwatches, and other small electronic devices. Examples of this type of operating system include
Android, iOS, and Windows CE.

Network operating systems:


This type of operating system is designed for use in networked environments, such as servers,
routers, and other devices that provide network services. Examples of this type of operating system
include Windows Server, Linux, and UNIX.

Working of Operating Systems:


An operating system (OS) is the software that manages the resources and communication of a
computer. It acts as an intermediary between the computer's hardware and the software
applications running on it.

The operating system (OS) manages all of the software and hardware on the computer. It
performs basic tasks such as file, memory and process management, handling input and output,
and controlling peripheral devices such as disk drives and printers.
Most of the time, there are several different computer programs running at the same time, and
they all need to access your computer’s central processing unit (CPU), memory and storage. The
operating system coordinates all of this to make sure each program gets what it needs.

The operating system also manages the start-up and shut-down of the computer, as well as the
management of tasks and processes running on the computer.
Overall, the operating system acts as the central controller of a computer, managing its resources
and communication, and providing a user interface for interacting with the computer. It is
responsible for the overall functioning and stability of a computer and its software applications.
An Operating System Works As:
1. Interface: Operating systems provide a user-friendly interface to interact with the computer,
such as a graphical user interface (GUI) or command-line interface (CLI).
2. Communication: Operating systems facilitate communication between different devices and
software programs, such as networking and file sharing.
3. Resource management: Operating systems manage and allocate resources such as memory,
processing power, and storage to different programs and processes.
4. Process management: Operating systems manage and control the execution of different
processes, such as multitasking and scheduling.
5. Security: Operating systems provide security features, such as user authentication and
access control, to protect the system from unauthorized access and malware.
6. File management: Operating systems manage and organize files and folders on a computer
or device, allowing users to easily find and access them.
7. Hardware management: Operating systems manage and control the interactions between
software and hardware, such as device drivers and input/output operations.

Advantages of operating systems:

1. Multitasking: An operating system can manage multiple tasks at once, enabling users to
complete multiple things at the same time.
2. Resource management: Operating systems manage and allocate resources such as
memory, processing power, and storage to different programs and processes running on
the system.
3. Security: Operating systems provide built-in security features to protect the system and
user data from unauthorized access or malicious attacks.
4. User-friendly interface: Operating systems provide an intuitive and user-friendly interface
that makes it easy to navigate and perform common tasks.
5. No Coding Lines : The rise of GUI (graphical user interface) has eliminated the process of
writing complex commands for doing small tasks.
Disadvantages of operating systems:

1. Expensive: When compared to platforms like Linux, some operating systems are
costly. The Microsoft Windows operating system, with its GUI and other in-built features, carries a
high price.
2. Highly Complex: Operating systems are highly complex, and the language used to
build them is not always clear and well defined.
3. Vulnerability to viruses and malware: Operating systems are vulnerable to viruses and
malware, which can compromise system security and cause data loss or corruption.
4. High system requirements: Some operating systems require powerful hardware to run,
which can be expensive and limit their use on older or less powerful devices.
5. Limited customization options: Some operating systems may have limited customization
options, which can be frustrating for users who want to personalize their experience.
Experiment 2
Aim: Introduction to First Come First Serve (FCFS) Job Scheduling.
Theory:
Introduction:

First Come, First Served (FCFS) is a type of scheduling algorithm used by operating systems and
networks to efficiently and automatically execute queued tasks, processes and requests by the
order of their arrival. An FCFS scheduling algorithm may also be referred to as a first-in, first-out
(FIFO) algorithm or a first-come, first-choice (FCFC) algorithm.

Due to its simplistic nature, an FCFS algorithm is predictable, regardless of the type of tasks or
requests it has to process. Like a grocery store checkout scheme, FCFS algorithms mimic real-life
customer service situations where patrons who arrive first get served first regardless of the size
and complexity of their interaction.

FCFS Algorithm :

FCFS (First Come First Serve) is a scheduling algorithm that processes tasks in the order they
arrive in the ready queue, without any priority or preference given to specific tasks. If there are two
tasks in the ready queue, the one with the earlier arrival time is executed first.

#include<iostream>
using namespace std ;
static int count=0 ;
class process {
public :
int p_no,at,bt,ct,wt,tat;

void enter_details() {
p_no=++count;
cout<<"ENTER ARRIVAL TIME AND BURST TIME FOR PROCESS "<<p_no<< " : ";
cin>>at>>bt;
ct=tat=wt=0;
}
};
int main() {
int n ;
cout<<"ENTER NUMBER OF PROCESS : ";
cin>>n;
process p[n], temp ;
for(int i=0;i<n;i++) {
p[i].enter_details();
}
cout<<"p_id\tAT\tBT\tCT\tTAT\tWT"<<endl;
for(int i=0;i<n;i++) {

cout<<p[i].p_no<<"\t"<<p[i].at<<"\t"<<p[i].bt<<"\t"<<p[i].ct<<"\t"<<p[i].tat<<"\t"<<p[i].wt<<endl;
}
for(int i=0;i<n-1;i++) {
for(int j=i+1;j<n;j++) {
if(p[i].at > p[j].at) {
temp=p[i];
p[i]=p[j];
p[j]=temp;
}
}
}
float ct_sum=0,tat_sum=0,wt_sum=0;
int cu_t=p[0].at;
for(int i=0;i<n;i++) {
if((cu_t-p[i].at)<0) {
cu_t=p[i].at;
}
cu_t=cu_t+p[i].bt;
p[i].ct=cu_t;
p[i].tat=p[i].ct-p[i].at ;
tat_sum=tat_sum+p[i].tat;
p[i].wt=p[i].tat-p[i].bt ;
wt_sum=wt_sum+p[i].wt;
}
for(int i=0;i<n-1;i++) {
for(int j=i+1;j<n;j++) {
if(p[i].p_no > p[j].p_no) {
temp=p[i];
p[i]=p[j];
p[j]=temp;
}
}
}
cout<<"p_id\tAT\tBT\tCT\tTAT\tWT"<<endl;
for(int i=0;i<n;i++) {

cout<<p[i].p_no<<"\t"<<p[i].at<<"\t"<<p[i].bt<<"\t"<<p[i].ct<<"\t"<<p[i].tat<<"\t"<<p[i].wt<<endl;
}
cout<<"average TAT = "<<tat_sum/n<<endl;
cout<<"Average WT = "<<wt_sum/n<<endl;
return 0;
}
Output:

Some Numerical:
A computer system has 5 processes that need to be executed using the First Come First Serve (FCFS)
algorithm. The arrival time and burst time of each process are as follows:

Process Id Arrival Time Burst Time


P1 0 6
P2 4 2
P3 5 4
P4 6 1
P5 7 3

To calculate the completion time, turnaround time, and waiting time for each process, we can use
the following formulas:
Turnaround Time (TAT ) = Completion Time - Arrival Time
Waiting Time (WT ) = Turnaround Time - Burst Time

P1 P2 P3 P4 P5
0 6 8 12 13 16
Using these formulas and the Gantt chart, we can calculate the values for each process as follows:

Process Id Arrival Time Burst Time Completion Time (CT) TAT WT
P1 0 6 6 6 0
P2 4 2 8 4 2
P3 5 4 12 7 3
P4 6 1 13 7 6
P5 7 3 16 9 6
SUM = 33 17
Average Turn Around time = 6.6 , Average Waiting time = 3.4
Advantages of FCFS:
Simple: The greatest benefit of FCFS scheduling is its simplicity. Orders are completed in
the sequence in which they are placed, making the scheduling process extremely straightforward.
No debate or calculation is required to determine the queue of orders.

Another advantage is that first come, first serve can be used to manage customer flow. For example,
if a store knows that it will be busy at a certain time of day, it can use first come, first serve to ensure
that customers are served in a timely manner. This can help to reduce wait times for customers and
improve overall customer satisfaction.

Disadvantages of FCFS:
Long Waiting Time: FCFS is a non-preemptive CPU scheduling algorithm, which means that
a subsequent order cannot begin processing until the order before it has finished executing. Once a
process has been allocated the CPU, it will not release the CPU until it is completed. This
means that if the first order placed has a considerable burst time, all orders behind it are forced to
wait for its completion, no matter how small their burst times.

Lower Device Utilization : Because FCFS is so simple, it tends not to be very efficient. This goes
hand-in-hand with extended waiting times. Because the CPU can only handle one order at a time,
FCFS utilizes a minimal portion of your system’s capabilities, rendering it very inefficient.
Experiment 3
Aim: To perform the Shortest Job First (non-preemptive scheduling) algorithm.
Theory:
Introduction:

Shortest Job First (SJF) is a scheduling algorithm that assigns the job with the shortest burst time to
the CPU. When a new job arrives, it is inserted into the ready queue in the order of its burst time.
When the CPU is available, it is assigned to the job with the shortest burst time. However, it is very
difficult to predict the burst time needed for a process, hence this algorithm is difficult to
implement in a real system.
To implement SJF in a non-preemptive scheduling environment, you can follow these steps:
1. Create a ready queue to hold the jobs that are waiting to be executed.
2. When a new job arrives, sort the ready queue in increasing order of burst time.
3. When the CPU is available, assign the job with the shortest burst time to the CPU.
4. Once a job is completed, remove it from the ready queue and repeat steps 2 and 3.
It is important to note that SJF is based on the assumption that the burst time of a job is known in
advance. In reality, this is often not the case, and thus, SJF is not commonly used in practice.

SJF Algorithm and code


The code below takes the arrival time and burst time of each process as input. It first sorts the
processes by arrival time, then repeatedly picks the process with the shortest burst time among those
that have already arrived, computes its completion time, turnaround time (completion time minus
arrival time), and waiting time, and finally prints the per-process values along with the average
turnaround and waiting times.

#include<iostream>
using namespace std ;

static int count=0 ;


class process {
public :
int p_no,at,bt,ct,wt,tat;
void enter_details() {
p_no=++count;
cout<<"ENTER ARRIVAL TIME AND BURST TIME FOR PROCESS "<<p_no<< " : ";
cin>>at>>bt;
ct=tat=wt=0;
}
};
int main() {
int n ;
cout<<"ENTER NUMBER OF PROCESS : ";
cin>>n;
process p[n] ;
process temp ;
for(int i=0;i<n;i++) {
p[i].enter_details();
}
cout<<"-------------------"<<endl;
cout<<"p_id\tAT\tBT"<<endl;
for(int i=0;i<n;i++) {
cout<<p[i].p_no<<"\t"<<p[i].at<<"\t"<<p[i].bt<<endl;
}
cout<<"-------------------"<<endl;
for(int i=0;i<n-1;i++) {
for(int j=i+1;j<n;j++) {
if(p[i].at > p[j].at) {
temp=p[i];
p[i]=p[j];
p[j]=temp;
}
}
}
int shortestJob = -1, currentTime=0 ;
float tat_sum=0,wt_sum=0;
for (int i = 0; i < n; i++) {
// Find the shortest job that has arrived
shortestJob = i;
for (int j = i + 1; j < n; j++) {
if (p[j].at <= currentTime && p[j].bt < p[shortestJob].bt) {
shortestJob = j;
}
}

// Swap current process with the shortest job


swap(p[i], p[shortestJob]);

// If the CPU is idle until this process arrives, advance the clock to its arrival time
if(currentTime < p[i].at) {
currentTime = p[i].at;
}

// Update waiting and turnaround time


p[i].wt = currentTime - p[i].at;
p[i].tat = p[i].wt + p[i].bt;

// Update current time


currentTime += p[i].bt;
p[i].ct=currentTime;

// Update average waiting and turnaround time


wt_sum += p[i].wt;
tat_sum += p[i].tat;
}
for(int i=0;i<n-1;i++) {
for(int j=i+1;j<n;j++) {
if(p[i].p_no > p[j].p_no) {
temp=p[i];
p[i]=p[j];
p[j]=temp;
}
}
}
cout<<"p_id\tAT\tBT\tCT\tTAT\tWT"<<endl;
for(int i=0;i<n;i++) {
cout<<p[i].p_no<<"\t"<<p[i].at<<"\t"<<p[i].bt<<"\t"<<p[i].ct<<"\t"<<p[i].tat<<"\t"<<p[i].wt
<<endl;
}
cout<<"average TAT = "<<tat_sum/n<<endl;
cout<<"Average WT = "<<wt_sum/n<<endl;
return 0;
}
Output

Some Numerical:
SJF is the algorithm in which the process with the shortest burst time is executed first. Here is one
numerical example of SJF:

Process Id Arrival Time Burst Time


P1 3 4
P2 5 3
P3 0 2
P4 5 1
P5 4 3
To calculate the completion time, turnaround time, and waiting time for each process, we can use
the following formulas:
Turnaround Time (TAT ) = Completion Time - Arrival Time
Waiting Time (WT ) = Turnaround Time - Burst Time

P3 Idle P1 P4 P5 P2
0 2 3 7 8 11 14
Using these formulas and the Gantt chart, we can calculate the values for each process as follows:

Process Id Arrival Time Burst Time Completion Time (CT) TAT WT
P1 3 4 7 4 0
P2 5 3 14 9 6
P3 0 2 2 2 0
P4 5 1 8 3 2
P5 4 3 11 7 4
SUM = 25 12
Average Turn Around time = 5 , Average Waiting time = 2.4

Applications of SJF (Shortest Job First)


Operating Systems: Shortest job first (SJF) is commonly used in operating systems as a scheduling
algorithm to determine the order in which processes are executed. It helps to minimize the overall
completion time of all processes and increase system efficiency.

Job Scheduling: In job scheduling systems, SJF is used to prioritize jobs based on their execution
time. This ensures that jobs with shorter execution times are completed first, reducing the overall
completion time of all jobs.

Cloud Computing: In cloud computing environments, SJF is used to optimize the allocation of
resources to virtual machines. This helps to improve the performance of the cloud environment and
reduce costs.

Network Routing: In network routing, SJF is used to determine the best path for data packets to
travel through the network. This helps to minimize the delay of data packets and improve network
performance.

Advantages of SJF:

1. High throughput: shortest job first (SJF) algorithm ensures that the CPU is always working
on the shortest job, thus maximizing the throughput of the system.
2. Low waiting time: As the shortest job is executed first, the waiting time for other jobs is
minimized. This results in a more efficient use of system resources.
3. High CPU utilization: As the CPU is always working on the shortest job, it is more likely to
be in a busy state, resulting in high CPU utilization.
4. Good for interactive systems: SJF is well suited for interactive systems where the user
expects a quick response.

Disadvantages of SJF:
1. Difficult to predict job length: It is difficult to predict the length of a job, which can lead to
inaccuracies in the scheduling algorithm.
2. Starvation of long jobs: longer jobs may be starved and may not be executed for a long
time, resulting in poor performance.
3. No support for priority: SJF does not consider the priority of a job and only focuses on the
job's length, which may lead to lower-priority jobs being executed before higher-priority
jobs.
4. Non-pre-emptive: SJF is a non-pre-emptive algorithm, meaning that once a job starts, it
cannot be interrupted. This can lead to inefficiency if a shorter job arrives while a longer
job is executing.
Experiment 4
Aim: To perform the Longest Job First (non-preemptive scheduling) algorithm.
Theory:
Introduction:
LJF (Longest Job First) is a scheduling algorithm used in operating systems to prioritize the
execution of processes based on their expected or estimated processing time, where the longest
job is given the highest priority
LJF is commonly used in batch processing systems, where the goal is to complete a large number
of jobs in the shortest amount of time. It can also be used in real-time systems, where the goal is
to meet specific deadlines.
However, LJF can lead to a longer waiting time for shorter jobs, which can be a disadvantage in
some systems. Additionally, if a new job arrives with a much longer burst time than the currently
executing job, it can cause a significant delay for that job.

LJF Algorithm with Code:


The Longest Job First (LJF) algorithm is a scheduling algorithm that prioritizes the execution of jobs
with the longest execution time. The steps of the Longest Job First CPU scheduling algorithm are:
Step-1: First, sort the processes in increasing order of their Arrival Time.
Step 2: Choose the process having the highest Burst Time among all the processes that have
arrived till that time.
Step 3: Then process it for its burst time.

Step 4: Repeat the above three steps until all the processes are executed.

#include<iostream>
using namespace std ;

static int count=0 ;


class process {
public :
int p_no,at,bt,ct,wt,tat;
void enter_details() {
p_no=++count;
cout<<"ENTER ARRIVAL TIME AND BURST TIME FOR PROCESS "<<p_no<< " : ";
cin>>at>>bt;
ct=tat=wt=0;
}
};
int main() {
int n ;
cout<<"ENTER NUMBER OF PROCESS : ";
cin>>n;
process p[n] ;
process temp ;
for(int i=0;i<n;i++) {
p[i].enter_details();
}
cout<<"-------------------"<<endl;
cout<<"p_id\tAT\tBT"<<endl;
for(int i=0;i<n;i++) {
cout<<p[i].p_no<<"\t"<<p[i].at<<"\t"<<p[i].bt<<endl;
}
cout<<"-------------------"<<endl;
for(int i=0;i<n-1;i++) {
for(int j=i+1;j<n;j++) {
if(p[i].at > p[j].at) {
temp=p[i];
p[i]=p[j];
p[j]=temp;
}
}
}
int longestJob = -1, currentTime=0 ;
float tat_sum=0,wt_sum=0;
for (int i = 0; i < n; i++) {
// Find the longest job that has arrived
longestJob = i;
for (int j = i + 1; j < n; j++) {
if (p[j].at <= currentTime && p[j].bt > p[longestJob].bt) {
longestJob = j;
}
}

// Swap current process with the longest job


swap(p[i], p[longestJob]);

// If the CPU is idle until this process arrives, advance the clock to its arrival time
if(currentTime < p[i].at) {
currentTime = p[i].at;
}

// Update waiting and turnaround time


p[i].wt = currentTime - p[i].at;
p[i].tat = p[i].wt + p[i].bt;

// Update current time


currentTime += p[i].bt;
p[i].ct=currentTime;

// Update average waiting and turnaround time


wt_sum += p[i].wt;
tat_sum += p[i].tat;
}
for(int i=0;i<n-1;i++) {
for(int j=i+1;j<n;j++) {
if(p[i].p_no > p[j].p_no) {
temp=p[i];
p[i]=p[j];
p[j]=temp;
}
}
}
cout<<"p_id\tAT\tBT\tCT\tTAT\tWT"<<endl;
for(int i=0;i<n;i++) {
cout<<p[i].p_no<<"\t"<<p[i].at<<"\t"<<p[i].bt<<"\t"<<p[i].ct<<"\t"<<p[i].tat<<"\t"<<p[i].wt
<<endl;
}
cout<<"average TAT = "<<tat_sum/n<<endl;
cout<<"Average WT = "<<wt_sum/n<<endl;
return 0;
}

output

Numerical:
A computer system has 5 processes that need to be executed using the Longest Job First (LJF)
algorithm. The arrival time and burst time of each process are as follows:

Process Id Arrival Time Burst Time


P1 0 6
P2 5 2
P3 3 4
P4 2 1
P5 6 3

To calculate the completion time, turnaround time, and waiting time for each process, we can use
the following formulas:
Turnaround Time (TAT ) = Completion Time - Arrival Time
Waiting Time (WT ) = Turnaround Time - Burst Time

P1 P3 P5 P2 P4
0 6 10 13 15 16
Using these formulas and the Gantt chart, we can calculate the values for each process as follows:

Process Id Arrival Time Burst Time Completion Time (CT) TAT WT
P1 0 6 6 6 0
P2 5 2 15 10 8
P3 3 4 10 7 3
P4 2 1 16 14 13
P5 6 3 13 7 4
SUM = 44 28
Average Turn Around time = 8.8 , Average Waiting time = 5.6

Note: The priority of the jobs is not considered in this algorithm as the focus is on completing the
longest jobs first.

Applications of Longest Job First (non-preemptive scheduling)


Here are some applications of the LJF scheduling algorithm:
Batch Processing: In batch processing systems, the LJF algorithm is used to prioritize the
processing of jobs based on their processing time. This allows the system to complete longer jobs
first, reducing the overall time required to process all jobs.
Production Planning: In manufacturing plants, the LJF algorithm can be used to schedule the
production of goods based on the time required to produce each item. This allows the plant to
optimize its production schedule, ensuring that longer jobs are completed first and minimizing the
time required to produce all items.
Resource Allocation: In resource allocation systems, the LJF algorithm can be used to allocate
resources such as CPU time or network bandwidth based on the time required by each job. This
ensures that longer jobs are given priority access to resources, reducing the overall processing
time for all jobs.

Scheduling in Healthcare: The LJF algorithm can also be applied in healthcare to schedule patients
based on the time required for their medical procedures. This helps to optimize the use of medical
resources and ensure that longer procedures are scheduled first to reduce overall wait times.
Overall, the LJF algorithm is useful in any system that requires the processing of a large number of
jobs with different processing requirements, where prioritizing longer jobs can help to optimize
system performance.

Advantages of Longest Job First (Non-Preemptive Scheduling):


1. Maximum utilization of resources: The LJF algorithm ensures that the biggest job is
completed first, which means that the maximum utilization of resources is achieved. This
leads to increased efficiency and faster completion of tasks.
2. Simple and easy to implement: LJF is a simple and easy-to-implement algorithm that does
not require complex calculations. It is easy to understand and apply, making it an attractive
option for many organizations.
3. Reduced waiting time: Since the longest job is given priority, smaller jobs may have to wait
longer. However, once a job is started, it is completed quickly, which reduces the overall
waiting time for all jobs.
4. All the jobs or processes finish at the same time approximately.
Disadvantages of Longest Job First (Non-Preemptive Scheduling):
1. Poor response time: Smaller jobs may have to wait longer for processing, which can lead to
poor response times. This can be a disadvantage in systems where response time is critical,
such as real-time systems.
2. This may lead to a convoy effect.
3. It may happen that a short process may never get executed and the system keeps on
executing the longer processes.
4. It reduces the processing speed and thus reduces the efficiency and utilization of the
system.
Experiment 5
Aim: To implement the Non-pre-emptive Priority scheduling algorithm.
Theory:
Introduction:

Priority non-preemptive scheduling is a scheduling algorithm used in operating systems where
processes are assigned a priority value, and the process with the highest priority is executed first.
In this algorithm, once a process has started executing, it will continue to run until it completes or
blocks for I/O or some other reason.
In priority non-preemptive scheduling, the scheduler selects the highest-priority process from the
ready queue and assigns the CPU to it. If there are multiple processes with the same priority, they
are executed in a First-Come-First-Serve (FCFS) order. Once a process completes or blocks for I/O,
the scheduler selects the next highest-priority process and assigns the CPU to it.

Priority (Non Pre-emptive) Algorithm with Code


Step 1 : Input the number of processes, the burst time for each process, the arrival time, and their
respective scheduling priority.
Step 2 : Sort all the given processes in ascending order of arrival time; if two or more processes have
the same arrival time, they are sorted according to their priorities (a higher value represents a
higher priority) in the ready queue.
Step 3 : Calculate the Finish Time, Turn Around Time and Waiting Time for each process, which in
turn helps to calculate the Average Waiting Time and Average Turn Around Time required by the CPU
to schedule the given set of processes.
Step 4 : The process with the earliest arrival time (not necessarily the process with the highest
priority) comes first and gets scheduled first by the CPU.
Step 5 : Calculate the Average Waiting Time and Average Turn Around Time.

#include<iostream>
using namespace std ;
static int count=0 ;
class process {
public :
int p_no,at,bt,ct,wt,tat,pt;
void enter_details() {
p_no=++count;
cout<<"ENTER ARRIVAL TIME AND BURST TIME AND PRIORITY FOR PROCESS
"<<p_no<< " : ";
cin>>at>>bt>>pt;
ct=tat=wt=0;
}
};
int main() {
int n ;
cout<<"ENTER NUMBER OF PROCESS : ";
cin>>n;
process p[n] ;
process temp ;
for(int i=0;i<n;i++) {
p[i].enter_details();
}
cout<<"-------------------"<<endl;
cout<<"p_id\tAT\tBT\tPRIORITY"<<endl;
for(int i=0;i<n;i++) {
cout<<p[i].p_no<<"\t"<<p[i].at<<"\t"<<p[i].bt<<"\t"<<p[i].pt<<endl;
}
cout<<"-------------------"<<endl;
for(int i=0;i<n-1;i++) {
for(int j=i+1;j<n;j++) {
if(p[i].at > p[j].at) {
temp=p[i];
p[i]=p[j];
p[j]=temp;
}
}
}
int highestPriority = -1, currentTime=0 ;
float tat_sum=0,wt_sum=0;
// Process each job
for (int i = 0; i < n; i++) {
// Find the highest priority job that has arrived
highestPriority = i;
for (int j = i + 1; j < n; j++) {
if (p[j].at <= currentTime && p[j].pt > p[highestPriority].pt) {
highestPriority = j;
}
}
// Swap current process with the highest priority job
swap(p[i], p[highestPriority]);
// If the CPU is idle until this process arrives, advance the clock to its arrival time
if(currentTime < p[i].at) {
currentTime = p[i].at;
}
// Update waiting and turnaround time
p[i].wt = currentTime - p[i].at;
p[i].tat = p[i].wt + p[i].bt;
// Update current time
currentTime += p[i].bt;
p[i].ct=currentTime;
// Update average waiting and turnaround time
wt_sum += p[i].wt;
tat_sum += p[i].tat;
}
for(int i=0;i<n-1;i++) {
for(int j=i+1;j<n;j++) {
if(p[i].p_no > p[j].p_no) {
temp=p[i];
p[i]=p[j];
p[j]=temp;
}
}
}
cout<<"p_id\tAT\tBT\tCT\tTAT\tWT"<<endl;
for(int i=0;i<n;i++) {
cout<<p[i].p_no<<"\t"<<p[i].at<<"\t"<<p[i].bt<<"\t"<<p[i].ct<<"\t"<<p[i].tat<<"\t"<<p[i].wt
<<endl;
}
cout<<"average TAT = "<<tat_sum/n<<endl;
cout<<"Average WT = "<<wt_sum/n<<endl;
return 0;
}
Output

Numerical:
A computer system has 5 processes that need to be executed using the Non-pre-emptive Priority
algorithm. The arrival time, burst time, and priority of each process are as follows:

Process Id Arrival Time Burst Time Priority


P1 0 4 1
P2 0 3 2
P3 6 7 1
P4 11 4 3
P5 12 2 2
To calculate the completion time, turnaround time, and waiting time for each process, we can use
the following formulas:

Turnaround Time (TAT ) = Completion Time - Arrival Time


Waiting Time (WT ) = Turnaround Time - Burst Time

P2 P1 P3 P4 P5
0 3 7 14 18 20
Using these formulas and the Gantt chart, we can calculate the values for each process as follows:

Process Id Arrival Time Burst Time Priority Completion Time (CT) TAT WT
P1 0 4 1 7 7 3
P2 0 3 2 3 3 0
P3 6 7 1 14 8 1
P4 11 4 3 18 7 3
P5 12 2 2 20 8 6
SUM = 33 13
Average Turn Around time = 6.6 , Average Waiting time = 2.6
Applications of Priority Scheduling
1. Operating Systems: Priority scheduling is used in operating systems to manage the
allocation of system resources, such as CPU time and memory, to different processes and
applications.
2. Network Scheduling: In a computer network, priority scheduling is used to manage the
transmission of data packets. Packets with higher priority are transmitted first, while lower
priority packets are sent later.
3. Real-Time Systems: Priority scheduling is essential in real-time systems, where certain tasks
need to be completed within a specific timeframe. For example, in a medical device, critical
tasks must be executed first, while less critical tasks can be postponed.
4. Multitasking: In a multitasking environment, priority scheduling is used to determine which
tasks are executed first and which are postponed. This helps to optimize the use of system
resources and improve overall performance.

Advantages of Priority Scheduling


1. Simplicity: Priority non-preemptive scheduling is a simple scheduling algorithm to
implement, making it easy to understand and modify. It only requires a single priority value
to be assigned to each process, making it easier to manage than more complex scheduling
algorithms.
2. Fairness: Priority non-preemptive scheduling can ensure that high-priority processes get
executed first, which can be important in certain applications where real-time or critical
tasks need to be performed quickly. This can help ensure that the system is fair to all
processes and that they get a chance to execute based on their priority.
3. Efficiency: Because priority non-preemptive scheduling does not require context switching
(i.e., preempting one process to execute another), it can be more efficient than preemptive
scheduling algorithms in certain situations. This can lead to lower overhead and faster
processing times for processes.

Disadvantages of Priority Scheduling


1. Complexity: Priority scheduling algorithms are complex and difficult to implement,
especially in real-time systems where time constraints are critical.
2. Starvation: Lower priority processes may never get a chance to execute if higher priority
processes always run first. This phenomenon is known as starvation and can lead to
inefficiency.
3. Deadlocks: Deadlocks can occur when two processes with the same priority compete for
the same resources. This can result in a situation where neither process can continue
executing.
4. Priority Inversion: Priority inversion is a scenario where a low-priority process holds a
resource that a high-priority process needs. This can lead to the high-priority process
waiting, resulting in poor performance.
5. Prioritization Overhead: Prioritizing processes requires additional system resources and can
result in overhead, reducing system performance. Additionally, the algorithm must be
updated regularly to reflect changes in the priority of processes, further increasing
overhead.
Experiment 6
Aim: The aim of this experiment is to perform HRRN (Highest Response Ratio Next) scheduling
Algorithm.

Theory:
Introduction:
HRRN scheduling algorithm is a non-preemptive scheduling algorithm. The response ratio of a
process is defined as (waiting time + burst time) / burst time. The process with the highest
response ratio is selected for execution next. If multiple processes have the same response ratio,
the one that arrived earliest is selected, i.e. the tie is broken in FCFS order.
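For example (with made-up numbers), if at the current time a process has been waiting for 6 units and needs 3 units of burst time, its response ratio is (6 + 3) / 3 = 3, whereas a process that has just arrived with a burst time of 3 has a ratio of (0 + 3) / 3 = 1, so the process that has waited longer is selected.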

Algorithm with Code


 In the HRRN scheduling algorithm, once a process is selected for execution, it runs until its completion.
 The first step is to calculate the waiting time for all the processes. The waiting time is simply the time a process has spent waiting in the ready queue.
 Each time the CPU becomes free, the response ratio is computed for every process that has arrived.
 The process having the highest response ratio is then executed by the processor.
 If two processes have the same response ratio, the tie is broken using the FCFS scheduling algorithm.

#include <iostream>
#include <algorithm>
using namespace std;
// Process structure to store process information
struct Process {
int id;
int arrival_time;
int burst_time;
int waiting_time;
int turnaround_time;
};

// Function to calculate waiting time and turnaround time for each process
void calculate_times(Process processes[], int n) {
int current_time = 0;
int completed_processes = 0;
while (completed_processes < n) {
// Find the process with the highest response ratio that has arrived and not yet completed
int highest_response_ratio_index = -1;
double highest_response_ratio = -1;
for (int i = 0; i < n; i++) {
if (processes[i].arrival_time <= current_time && processes[i].burst_time > 0) {
double response_ratio = 1 + ((double)(current_time - processes[i].arrival_time) / processes[i].burst_time);
if (response_ratio > highest_response_ratio) {
highest_response_ratio_index = i;
highest_response_ratio = response_ratio;
}
}
}

// If a process is found, run it to completion (HRRN is non-preemptive)
if (highest_response_ratio_index != -1) {
Process &p = processes[highest_response_ratio_index];
current_time += p.burst_time;
p.turnaround_time = current_time - p.arrival_time;
p.waiting_time = p.turnaround_time - p.burst_time;
p.burst_time = 0; // mark the process as completed
completed_processes++;
}
// If no process has arrived yet, move to the next unit of time
else {
current_time++;
}
}
}

// Function to print the waiting time and turnaround time for each process
void print_times(Process processes[], int n) {
for (int i = 0; i < n; i++) {
cout << "Process " << processes[i].id << ": Waiting Time = " <<
processes[i].waiting_time << ", Turnaround Time = " << processes[i].turnaround_time <<
endl;
}
}

int main() {
// Example usage
Process processes[] = {{1, 0, 5, 0, 0}, {2, 1, 3, 0, 0}, {3, 2, 8, 0, 0}};
int n = sizeof(processes) / sizeof(processes[0]);
calculate_times(processes, n);
print_times(processes, n);
return 0;
}
OUTPUT

APPLICATIONS:
1. HRRN scheduling algorithm is commonly used in interactive systems where the
response time is a critical factor, such as time-sharing systems and interactive
user interfaces.
2. It is also useful in systems where a process with a higher response ratio should
be given priority over others, such as in real-time systems.

Advantages
1. The HRRN scheduling algorithm provides a balance between the SJF (Shortest
Job First) and FCFS (First Come, First Serve) scheduling algorithms, as it
considers both the waiting time and burst time of processes to determine their
priority.
2. This algorithm guarantees that the process with the highest response ratio will
be executed next, ensuring that the system responds quickly to user requests.
3. The HRRN algorithm avoids the problem of starvation, as processes with longer
waiting times will have higher response ratios and thus higher priority.
Disadvantages
1. The HRRN scheduling algorithm is not suitable for batch processing systems,
where the response time is not a critical factor.
2. Calculating the response ratio for each process requires additional overhead,
which can be time-consuming in large systems with many processes.
3. The HRRN algorithm may suffer from convoy effect, where a long-running
process may cause other processes with higher response ratios to wait for a
longer time.
Experiment 7
Aim: To perform Shortest Remaining Time First scheduling.
Theory:
Introduction:
This algorithm is the preemptive version of SJF scheduling. In SRTF, the execution of a process can be
stopped after a certain amount of time. At the arrival of every process, the short-term scheduler schedules
the process with the least remaining burst time among the list of available processes and the running
process.

Once all the processes are available in the ready queue, no preemption will be done and the
algorithm will work as SJF scheduling. The context of the process is saved in the Process Control
Block when the process is removed from execution and the next process is scheduled. This PCB
is accessed on the next execution of this process.

Algorithm with Code


1. When a new process arrives, the scheduler checks whether its burst time is shorter than the
remaining burst time of the currently running process.

2. If the burst time of the new process is shorter, the scheduler preempts the currently running
process and starts executing the new process.
3. If the burst time of the new process is longer, the scheduler places the new process in the
ready queue.
4. The scheduler continues this process until all the processes have completed their execution.

#include <iostream>
#include <algorithm>
#include <bits/stdc++.h>

using namespace std;

int main() {
int n = 5; // number of processes
int bt[] = {6, 8, 7, 3, 2}; // burst time of each process
int at[] = {0, 2, 3, 5, 6}; // arrival time of each process
int rt[n]; // remaining time of each process
int wt[n]; // waiting time of each process
int tt[n]; // turnaround time of each process
int completed = 0;
int current_time = 0;

// Initialize the remaining time of each process


for (int i = 0; i < n; i++) {
rt[i] = bt[i];
}

while (completed < n) {


int shortest_time = INT_MAX;
int shortest_process = -1;

// Find the process with the shortest remaining burst time


for (int i = 0; i < n; i++) {
if (at[i] <= current_time && rt[i] < shortest_time && rt[i] > 0) {
shortest_time = rt[i];
shortest_process = i;
}
}

// If no process is found, increment the current time


if (shortest_process == -1) {
current_time++;
}
else {
// Execute the selected process for one time unit
rt[shortest_process]--;
current_time++;

// If the process is completed, calculate its waiting time and turnaround time
if (rt[shortest_process] == 0) {
completed++;
wt[shortest_process] = current_time - at[shortest_process] - bt[shortest_process];
tt[shortest_process] = current_time - at[shortest_process];
}
}
}

// Calculate the average waiting time and turnaround time of all processes
float avg_waiting_time = 0;
float avg_turnaround_time = 0;
for (int i = 0; i < n; i++) {
avg_waiting_time += wt[i];
avg_turnaround_time += tt[i];
}
avg_waiting_time /= n;
avg_turnaround_time /= n;

// Print the results


cout << "Average Waiting Time: " << avg_waiting_time << endl;
cout << "Average Turnaround Time: " << avg_turnaround_time << endl;

return 0;
}

OUTPUT
Numerical:
Process Id Arrival Time Burst Time
P1 0 5
P2 1 6
P3 2 3
P4 3 1
P5 4 5
To calculate the completion time, turnaround time, and waiting time for each process, we can use
the following formulas:
Turnaround Time (TAT ) = Completion Time - Arrival Time
Waiting Time (WT ) = Turnaround Time - Burst Time
Gantt chart

P1 P1 P3 P4 P3 P1 P5 P2
0 1 2 3 4 6 9 14 20
Using these formulas and the Gantt chart, we can calculate the values for each process as follows:

Process Id Arrival Time Burst Time Completion Time (CT) TAT WT
P1 0 5 9 9 4
P2 1 6 20 19 13
P3 2 3 6 4 1
P4 3 1 4 1 0
P5 4 5 14 10 5
SUM = 43 23
Average Turn Around time = 8.6, Average Waiting time = 4.6

Applications
The SRTF scheduling algorithm is widely used in interactive systems that require fast response times.
Some applications that use SRTF include:
1. Real-time systems: SRTF scheduling is useful in real-time systems, where a process must
complete its execution within a fixed time frame.
2. Multimedia systems: SRTF scheduling is useful in multimedia systems, where a high-quality
display or audio output must be generated in real-time.
3. Interactive systems: SRTF scheduling is useful in interactive systems, such as web servers,
where fast response times are critical.
Advantages
The SRTF scheduling algorithm has several advantages, including:
1. Reduced response time: SRTF scheduling ensures that processes with shorter burst times are
given priority, resulting in reduced response times.
2. Improved efficiency: SRTF scheduling ensures that the CPU is always busy, resulting in
improved efficiency.
3. Better resource utilization: SRTF scheduling ensures that resources are utilized optimally, resulting in
better resource utilization.

Disadvantages
The SRTF scheduling algorithm also has some disadvantages, including:
1. High overhead: The SRTF scheduling algorithm requires frequent context switching, which
results in high overhead.
2. Starvation: The SRTF scheduling algorithm may result in starvation, where a process with a
longer burst time is continuously preempted by processes with shorter burst times.
3. Predictability: The SRTF scheduling algorithm is not very predictable, as the burst time of
processes may vary.

Experiment 8
Aim: To perform Longest Remaining Time First scheduling.
Theory:
Introduction:
Longest Remaining Time First (LRTF) is a CPU scheduling algorithm that prioritizes the task with the
longest remaining processing time. It is the preemptive counterpart of the Longest Job First
algorithm: at every scheduling decision the CPU is given to the task with the most execution time
left, so all tasks tend to finish at approximately the same time. Because long tasks are favoured,
shorter tasks are delayed, and the algorithm involves frequent preemption.

Algorithm with Code


The LRTF algorithm is a pre-emptive scheduling algorithm that assigns priority to the tasks based on
their remaining processing time. The tasks are sorted in a priority queue, with the task having the
longest remaining processing time at the top of the queue. The scheduler selects the task at the top
of the queue and assigns it to the CPU for processing. If a new task arrives with a longer remaining
processing time than the current task, the current task is pre-empted, and the new task is assigned
to the CPU.
The following is a step-by-step algorithm for LRTF:
1. Initialize an empty priority queue Q.
2. Insert all the tasks in the priority queue with their remaining processing time as the priority
key.
3. Select the task at the top of the priority queue for processing.

4. If a new task arrives with a longer remaining processing time than the current task, preempt
the current task and assign the new task to the CPU.
5. Continue the above steps until all tasks have been processed
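
The steps above describe the selection in terms of a priority queue, while the program used in this experiment re-sorts the process array on every time unit instead. As a complement, here is a minimal sketch (with made-up arrival and burst times) of the priority-queue formulation using std::priority_queue; ties between equal remaining times are assumed to be broken in favour of the lower process index.

#include <iostream>
#include <queue>
#include <utility>
using namespace std;

int main() {
    // Made-up sample data: arrival time and burst time of three processes
    const int n = 3;
    int arrival[] = {0, 1, 2};
    int burst[]   = {4, 2, 3};
    int remaining[n];
    for (int i = 0; i < n; i++) remaining[i] = burst[i];

    // Max-heap of (remaining time, -index): the longest remaining job is on top,
    // and ties go to the smaller process index.
    priority_queue<pair<int, int>> pq;
    int completed = 0, current_time = 0, next_arrival = 0;

    while (completed < n) {
        // Insert every process that has arrived by the current time
        while (next_arrival < n && arrival[next_arrival] <= current_time) {
            pq.push({remaining[next_arrival], -next_arrival});
            next_arrival++;
        }
        if (pq.empty()) { current_time++; continue; } // CPU idles until the next arrival

        pair<int, int> top = pq.top();
        pq.pop();
        int idx = -top.second;

        // Run the longest remaining job for one time unit, then re-insert it if unfinished
        remaining[idx]--;
        current_time++;
        if (remaining[idx] == 0) {
            completed++;
            cout << "Process " << idx + 1 << " completed at time " << current_time << endl;
        } else {
            pq.push({remaining[idx], -idx});
        }
    }
    return 0;
}

The complete program used in this experiment, which re-sorts the array instead of using a priority queue, follows below.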

#include <iostream>
#include <algorithm>
using namespace std;
// Define a struct to represent a process
struct Process {
int pid; // process ID
int arrival_time; // arrival time
int burst_time; // burst time
int remaining_time;// remaining time for the process
};
// Define a function to compare processes by their remaining time
bool compare_processes(const Process& a, const Process& b) {
if (a.remaining_time == b.remaining_time) {
return a.pid < b.pid;
} else {
return a.remaining_time > b.remaining_time;
}
}

int main() {
// Define the number of processes
int n = 5;

// Define the arrival time and burst time for each process
int arrival_time[] = {0, 1, 2, 3, 4};
int burst_time[] = {5, 4, 3, 2, 1};

// Create an array to store the processes


Process processes[n];

// Add the processes to the array


for (int i = 0; i < n; i++) {
processes[i].pid = i;
processes[i].arrival_time = arrival_time[i];
processes[i].burst_time = burst_time[i];
processes[i].remaining_time = burst_time[i];
}

// Define the current time


int current_time = 0;

// Define a flag to indicate if all processes have completed


bool all_complete = false;

// Loop through the processes until they are all complete


while (!all_complete) {
// Assume all processes are complete
all_complete = true;

// Sort the processes by their remaining time


sort(processes, processes + n, compare_processes);

// Select the process with the longest remaining time


int i = 0;
while (i < n && processes[i].arrival_time > current_time) {
i++;
}

if (i < n && processes[i].remaining_time > 0) {


// Execute the selected process
processes[i].remaining_time--;

// Update the current time


current_time++;
// Check if the process is complete
if (processes[i].remaining_time <= 0) {
cout << "Process " << processes[i].pid << " completed at time " << current_time
<< endl;
}

// Set the flag to indicate that not all processes are complete
all_complete = false;
} else {
// No arrived process is runnable; advance the clock so new processes can arrive
current_time++;
// Keep looping if any process still has work left
for (int j = 0; j < n; j++) {
if (processes[j].remaining_time > 0) {
all_complete = false;
break;
}
}
}
}

return 0;
}
OUTPUT

Numerical:
Process Id Arrival Time Burst Time
P1 0 5
P2 1 6
P3 2 3
P4 3 1
P5 4 5
To calculate the completion time, turnaround time, and waiting time for each process, we can use
the following formulas:

Turnaround Time (TAT ) = Completion Time - Arrival Time


Waiting Time (WT ) = Turnaround Time - Burst Time
Gantt chart

P1 P2 P2 P1 P5 P2 P1 P3 P4
0 1 2 3 4 9 13 16 19 20
Using these formulas and the Gantt chart, we can calculate the values for each process as follows:
Process Id Arrival Time Burst Time Completion Time (CT) TAT WT
P1 0 5 16 16 11
P2 1 6 13 12 6
P3 2 3 19 17 14
P4 3 1 20 17 16
P5 4 5 9 5 0
SUM = 67 47
Average Turn Around time = 13.4, Average Waiting time = 9.4

Applications:
The LRTF algorithm is useful in real-time systems, where tasks with shorter deadlines must be
executed first. It is also suitable for systems where the processing time of the tasks is not known
beforehand. Some of the common applications of the LRTF algorithm are:
1. Operating Systems: The LRTF algorithm is studied and simulated as a CPU scheduling
policy in operating systems coursework and research.
2. Real-time Systems: The LRTF algorithm is widely used in real-time systems, such as traffic
management systems, industrial automation systems, and medical equipment.
3. Multimedia Systems: The LRTF algorithm is used in multimedia systems to ensure that the
tasks with the longest processing time are executed first. This is important in video streaming
applications, where frames must be displayed at a specific rate.

Advantages
The LRTF algorithm has several advantages over other scheduling algorithms, including:

1. Improved Efficiency: The LRTF algorithm reduces the amount of context switching and
maximizes the CPU utilization by scheduling the tasks in the order of their remaining
execution time.
2. Fairness: The LRTF algorithm ensures that tasks with longer processing times are given higher
priority, which results in a fair allocation of CPU time.

3. Real-time Responsiveness: The LRTF algorithm is useful in real-time systems, where tasks
with shorter deadlines must be executed first.

Disadvantages
The LRTF algorithm has a few disadvantages, including:
1. Starvation: The LRTF algorithm can result in the starvation of short tasks if there are long
tasks in the queue.
2. Complexity: The LRTF algorithm is more complex than other scheduling algorithms, which
makes it difficult to implement and maintain.
3. Predictability: The LRTF algorithm is not very predictable, as the execution time of the tasks
is not known beforehand.

Experiment 9
Aim: To perform Pre-emptive Priority scheduling.
Theory:
Introduction:
Pre-emptive priority CPU scheduling is a popular scheduling algorithm used in operating systems to
determine which process should be executed next. In this algorithm, each process is assigned a
priority, which determines its position in the execution queue. The process with the highest priority
is executed first, and the execution of a process can be pre-empted by a higher-priority process.

Algorithm with Code


The pre-emptive priority CPU scheduling algorithm works as follows:

1. Assign a priority to each process, which reflects its importance.


2. When a process enters the system, it is placed in the ready queue according to its priority.
3. The process with the highest priority is selected for execution.
4. If a higher-priority process arrives, the currently executing process is pre-empted, and the
higher-priority process is selected for execution.
5. If a process with the same priority arrives, it is added to the end of the queue.

#include <iostream>
#include <algorithm>
#include <limits.h>
using namespace std;

// Process structure to store process information


struct Process {
int id;
int arrival_time;
int burst_time;
int priority;
};

// Function to implement Priority Preemptive scheduling


void priority_preemptive(Process processes[], int n) {
// Sort the processes based on their arrival time
sort(processes, processes + n, [](Process const& p1, Process const& p2) {
return p1.arrival_time < p2.arrival_time;
});

int current_time = 0;
int completed_processes = 0;
while (completed_processes < n) {
// Find the highest priority process that has arrived and not yet completed
// (in this implementation, a smaller priority value means a higher priority)
int highest_priority_index = -1;
int highest_priority = INT_MAX ;
for (int i = 0; i < n; i++) {
if (processes[i].arrival_time <= current_time && processes[i].burst_time > 0 &&
processes[i].priority < highest_priority) {
highest_priority_index = i;
highest_priority = processes[i].priority;
}
}

// If a process is found, execute it for 1 unit of time


if (highest_priority_index != -1) {
processes[highest_priority_index].burst_time--;
cout << "Process " << processes[highest_priority_index].id << " executed at time "
<< current_time << endl;
if (processes[highest_priority_index].burst_time == 0) {
completed_processes++;
cout << "Process " << processes[highest_priority_index].id << " completed at time
" << current_time + 1 << endl;
}
}
// If no process is found, move to the next unit of time
else {
cout << "Idle at time " << current_time << endl;
}

current_time++;
}
}

int main() {
// Example usage
Process processes[] = {{1, 0, 5, 2}, {2, 1, 3, 1}, {3, 2, 8, 3}};
int n = sizeof(processes) / sizeof(processes[0]);
priority_preemptive(processes, n);
return 0;
}

OUTPUT
Numerical:
Process Id Arrival Time Burst Time Priority
P1 0 3 3
P2 1 4 2
P3 2 6 4
P4 3 4 6
P5 4 2 10
In this numerical, a larger priority value means a higher priority. To calculate the completion time,
turnaround time, and waiting time for each process, we can use the following formulas:

Turnaround Time (TAT ) = Completion Time - Arrival Time


Waiting Time (WT ) = Turnaround Time - Burst Time
Gantt chart

P1 P1 P3 P4 P5 P4 P3 P1 P2
0 1 2 3 4 6 9 14 15 19
Using these formulas and the Gantt chart, we can calculate the values for each process as follows:

Process Id Arrival Time Burst Time Completion Time (CT) TAT WT
P1 0 3 15 15 12
P2 1 4 19 18 14
P3 2 6 14 12 6
P4 3 4 9 6 2
P5 4 2 6 2 0
SUM = 53 34
Average Turn Around time = 10.6, Average Waiting time = 6.8
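
The program above treats the smaller priority value as the higher priority and prints only an execution trace. To reproduce the table above, here is a minimal sketch that instead treats a larger priority value as higher priority (as in this numerical) and also computes the completion, turnaround, and waiting time of each process.

#include <iostream>
using namespace std;

int main() {
    // Data from the numerical above; a larger priority value means a higher priority
    const int n = 5;
    int at[] = {0, 1, 2, 3, 4};
    int bt[] = {3, 4, 6, 4, 2};
    int pr[] = {3, 2, 4, 6, 10};
    int rem[n], ct[n];
    for (int i = 0; i < n; i++) rem[i] = bt[i];

    int time = 0, done = 0;
    while (done < n) {
        // Pick the arrived, unfinished process with the largest priority value
        int sel = -1;
        for (int i = 0; i < n; i++) {
            if (at[i] <= time && rem[i] > 0 && (sel == -1 || pr[i] > pr[sel])) sel = i;
        }
        if (sel == -1) { time++; continue; }   // CPU idle: no process has arrived yet
        rem[sel]--;                            // run the selected process for one time unit
        time++;
        if (rem[sel] == 0) { ct[sel] = time; done++; }
    }

    float tat_sum = 0, wt_sum = 0;
    cout << "p_id\tAT\tBT\tCT\tTAT\tWT" << endl;
    for (int i = 0; i < n; i++) {
        int tat = ct[i] - at[i], wt = tat - bt[i];
        tat_sum += tat; wt_sum += wt;
        cout << i + 1 << "\t" << at[i] << "\t" << bt[i] << "\t" << ct[i] << "\t" << tat << "\t" << wt << endl;
    }
    cout << "average TAT = " << tat_sum / n << endl;
    cout << "Average WT = " << wt_sum / n << endl;
    return 0;
}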

USES:
1. Resource allocation: Priorities can be used to allocate resources such as CPU time and
memory to processes based on their importance.
2. Real-time systems: In real-time systems, priority scheduling is used to ensure that time-
critical tasks are completed within their deadlines.
3. Multitasking: Priority scheduling can be used to manage multiple tasks running
simultaneously, ensuring that high-priority tasks are completed before lower-priority
tasks.
4. Fairness: Priority scheduling can be designed to ensure fairness by aging the priorities
of processes that have been waiting for a long time.
5. Interrupt handling: In systems with hardware interrupts, priority scheduling can be
used to handle the interrupts based on their priority, ensuring that important
interrupts are serviced first.

Advantages
1. The pre-emptive priority CPU scheduling algorithm ensures that high-priority processes are
executed first, which is important in real-time systems.
2. It allows for the efficient utilization of system resources, as the most important processes
are given priority.
3. It can help to minimize the average response time of processes, as higher-priority processes
are executed more quickly.

Disadvantages
1. The pre-emptive priority CPU scheduling algorithm can lead to starvation, where low-priority
processes are never executed.
2. It can also lead to priority inversion, where a low-priority process holds a resource that is
required by a high-priority process, preventing the high-priority process from executing.
3. It can be difficult to determine appropriate priorities for processes, which can lead to
inefficiencies in the system.
Experiment 10
Aim: To implement Round Robin scheduling algorithm .
Theory:
Introduction:

Round-robin scheduling is a CPU scheduling algorithm that is designed for time-sharing systems. It
is a pre-emptive scheduling algorithm, meaning that the CPU is allocated to each process for a
fixed time slice, or quantum, before being preempted and given to the next process in the queue.

In round-robin scheduling, the ready queue is treated as a circular queue. Each process is assigned
a time slice, which is typically a small amount of time, such as 10-100 milliseconds. The scheduler
assigns the CPU to the first process in the queue and allows it to run for its time slice. If the
process completes its execution during the time slice, it is removed from the queue. If not, it is
moved to the end of the queue and the next process in the queue is given a chance to run.
The period of time for which a process or job is allowed to run in a pre-emptive method is called
time quantum.

Round Robin Algorithm with Code


Step 1 : Input the number of processes, the arrival time and burst time of each process, and the time
quantum.
Step 2 : Sort all the given processes in ascending order of arrival time and place them in the ready
queue.
Step 3 : Allocate the CPU to the process at the front of the ready queue for at most one time quantum.
If the process finishes within the quantum, record its completion time; otherwise move it to the back
of the queue with its remaining burst time, behind any processes that arrived in the meantime.
Step 4 : Repeat Step 3 until all the processes have completed their execution.
Step 5 : Calculate the Finish Time, Turn Around Time and Waiting Time for each process, and then the
Average Waiting Time and Average Turn Around Time.

#include <iostream>
#include <queue>
using namespace std;
// Process structure to store process information
struct Process {
int id;
int arrival_time;
int burst_time;
int remaining_time;
};
// Function to implement Round Robin scheduling
void round_robin(Process processes[], int n, int quantum) {
// Create a queue to store the processes (this simple version assumes all processes are available at time 0)
queue<Process> q;
for (int i = 0; i < n; i++) {
q.push(processes[i]);
}
int current_time = 0;
while (!q.empty()) {
Process current_process = q.front();
q.pop();
// If the process has not finished executing, execute it for the quantum time
if (current_process.remaining_time > quantum) {
current_time += quantum;
current_process.remaining_time -= quantum;
q.push(current_process);
}
// If the process has finished executing, print its details
else {
current_time += current_process.remaining_time;
cout << "Process " << current_process.id << " finished at time " << current_time << endl;
}
}
}
int main() {
// Example usage
Process processes[] = {{1, 0, 5, 5}, {2, 1, 3, 3}, {3, 2, 8, 8}};
int n = sizeof(processes) / sizeof(processes[0]);
int quantum = 2;
round_robin(processes, n, quantum);
return 0;
}
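Note that this version assumes all processes are already in the ready queue at time 0: the
arrival_time field is stored but never consulted, and processes simply cycle in the order in which
they were pushed. A variant that also honours arrival times is sketched after the numerical below.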
OUTPUT

Numerical:
A computer system has six processes that need to be executed using the Round Robin algorithm. The
arrival time and burst time of each process are as follows; the time quantum = 4.

Process Id Arrival Time Burst Time


P1 0 5
P2 1 6
P3 2 3
P4 3 1
P5 4 5
P6 6 4
To calculate the completion time, turnaround time, and waiting time for each process, we can use
the following formulas:
Completion Time (CT) = the time at which the process finishes its last slice, read off the Gantt chart
Turnaround Time (TAT) = Completion Time - Arrival Time
Waiting Time (WT) = Turnaround Time - Burst Time
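For example, P1 arrives at time 0 and finishes at time 17 in the Gantt chart below, so its
TAT = 17 - 0 = 17 and its WT = 17 - 5 = 12.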
Ready queue (order of dispatch):
P1  P2  P3  P4  P5  P1  P6  P2  P5

Gantt chart:
| P1 | P2 | P3 | P4 | P5 | P1 | P6 | P2 | P5 |
0    4    8    11   12   16   17   21   23   24
Using these formulas and the Gantt chart, we can calculate the values for each process as follows:

Process Id   Arrival Time   Burst Time   Completion Time (CT)   TAT   WT
P1           0              5            17                     17    12
P2           1              6            23                     22    16
P3           2              3            11                     9     6
P4           3              1            12                     9     8
P5           4              5            24                     20    15
P6           6              4            21                     15    11
                                                        SUM =   92    68
Average Turn Around Time = 92 / 6 ≈ 15.33, Average Waiting Time = 68 / 6 ≈ 11.33
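As a cross-check on these numbers, below is a minimal self-contained sketch (in the same spirit as
the program above, but honouring arrival times) that simulates Round Robin for the six processes
with a quantum of 4. It follows the convention used in the ready-queue order shown above: a process
that arrives during another process's slice joins the ready queue before the preempted process
rejoins it. The struct and helper names (Proc, admit) are illustrative only.

#include <iostream>
#include <queue>
#include <vector>
#include <algorithm>
using namespace std;

// Simple per-process record (field and helper names are illustrative)
struct Proc {
    int id, arrival, burst, remaining, completion;
};

int main() {
    // The six processes from the table above; time quantum = 4
    vector<Proc> p = {{1, 0, 5, 5, 0}, {2, 1, 6, 6, 0}, {3, 2, 3, 3, 0},
                      {4, 3, 1, 1, 0}, {5, 4, 5, 5, 0}, {6, 6, 4, 4, 0}};
    int quantum = 4, current_time = 0, done = 0;
    int n = (int)p.size();
    queue<int> rq;                  // ready queue of indices into p
    vector<bool> queued(n, false);  // true while a process is queued or running

    // Admit every process that has arrived by time t and is not already queued
    auto admit = [&](int t) {
        for (int i = 0; i < n; i++)
            if (!queued[i] && p[i].remaining > 0 && p[i].arrival <= t) {
                rq.push(i);
                queued[i] = true;
            }
    };

    admit(0);
    while (done < n) {
        if (rq.empty()) { current_time++; admit(current_time); continue; } // CPU idle
        int i = rq.front(); rq.pop();            // queued[i] stays true while it runs
        int run = min(quantum, p[i].remaining);
        current_time += run;
        p[i].remaining -= run;
        admit(current_time);                     // new arrivals join before the preempted process
        if (p[i].remaining > 0) {
            rq.push(i);                          // still flagged, so admit() will not re-add it
        } else {
            p[i].completion = current_time;
            queued[i] = false;
            done++;
        }
    }

    double sum_tat = 0, sum_wt = 0;
    for (const Proc &x : p) {
        int tat = x.completion - x.arrival;
        int wt = tat - x.burst;
        sum_tat += tat;
        sum_wt += wt;
        cout << "P" << x.id << "  CT=" << x.completion
             << "  TAT=" << tat << "  WT=" << wt << endl;
    }
    cout << "Average TAT = " << sum_tat / n
         << ", Average WT = " << sum_wt / n << endl;
    return 0;
}

With these inputs the sketch reproduces the table above, printing each process's CT, TAT and WT
followed by Average TAT ≈ 15.33 and Average WT ≈ 11.33.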

Applications of Round Robin Scheduling


1. Time-sharing systems: Round-robin scheduling is widely used in time-sharing systems,
where multiple users share a single system. The scheduler allocates the CPU to each
process for a fixed time slice, ensuring that each user gets a fair share of the system
resources.
2. Real-time systems: Round-robin scheduling is used in real-time systems to ensure that each
task gets a fixed amount of CPU time. This is important in applications like industrial
control systems or avionics, where timing is critical.
3. Web servers: Round-robin scheduling can be used in web servers to distribute incoming
requests across multiple servers. Each server is assigned a time slice to process incoming
requests, ensuring that all servers are utilized fairly and none are overwhelmed with
requests.

Advantages of Round Robin Scheduling


1. No issues of starvation or convoy effect.
2. Every job gets a fair allocation of CPU.
3. All processes are treated equally; no priorities need to be assigned.
4. The worst-case response time for a process can be estimated from the number of processes
on the run queue and the time quantum.
5. It does not require knowledge of burst times in advance and is easy to implement.
Disadvantages of Round Robin Scheduling
1. A very small time quantum reduces throughput because of frequent preemption.
2. More time is spent on context switching.
3. Performance depends heavily on the choice of time quantum.
4. Processes don't have priorities.
5. More important tasks cannot be given special priority.

Experiment 11
Aim: To implement the FCFS (First-Come, First-Served) Disk Scheduling algorithm.
Theory:
Introduction:

FCFS (First-Come, First-Served) is a simple disk scheduling algorithm used in operating systems to
manage input/output (I/O) operations on a hard disk. As the name suggests, it prioritizes the I/O
requests in the order in which they arrive, with the first request received being the first to be
serviced.
Under this scheduling algorithm, the operating system maintains a queue of pending I/O requests,
with new requests being added to the end of the queue as they arrive. The operating system then
services these requests one by one in the order they were added to the queue.
While this approach is simple to implement and ensures that all requests are eventually serviced, it
can lead to performance issues. For example, if a request far from the current head position arrives
first, the head must travel a long distance to service it, delaying subsequent requests that lie close
to the head and increasing the average response time.
Overall, FCFS is a straightforward scheduling algorithm that works well for simple I/O workloads but
may not be the most efficient option in more complex scenarios.

Algorithm with Code


The First-Come, First-Served (FCFS) disk scheduling algorithm is a simple scheduling algorithm where
the requests are processed in the order they are received by the disk. In other words, the requests
are processed in the order of their arrival time.

#include <iostream>
#include <algorithm>
#include <cstdlib>
#include <vector>
using namespace std;

// Function to calculate the total head movement of the disk arm
int calculate_head_movement(vector<int> requests, int head_position) {
    int total_head_movement = 0;
    int current_position = head_position;
    // Service the requests strictly in the order in which they arrived
    for (int i = 0; i < (int)requests.size(); i++) {
        total_head_movement += abs(requests[i] - current_position);
        current_position = requests[i];
    }
    return total_head_movement;
}

int main() {
    // Example usage
    vector<int> requests = {98, 183, 37, 122, 14, 124, 65, 67};
    int head_position = 53;
    int total_head_movement = calculate_head_movement(requests, head_position);
    cout << "Total head movement: " << total_head_movement << endl;
    return 0;
}
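For the request sequence used in main(), with the head starting at cylinder 53, the total head
movement works out to |98 - 53| + |183 - 98| + |37 - 183| + |122 - 37| + |14 - 122| + |124 - 14| +
|65 - 124| + |67 - 65| = 45 + 85 + 146 + 85 + 108 + 110 + 59 + 2 = 640 cylinders, so the program
should print "Total head movement: 640".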

OUTPUT

Advantages
1. First Come First Serve has very simple logic: it services the requests one by one in the
sequence in which they arrive.
2. Thus, First Come First Serve is very simple and easy to understand and implement.
3. Every request eventually gets a chance to be serviced, so no starvation occurs.

Disadvantages
1. This scheduling algorithm is non-preemptive, which means a request cannot be interrupted in
the middle of being serviced and will run its full course.
2. Because FCFS is non-preemptive, short requests at the back of the queue have to wait for a
long request at the front to finish.
3. The throughput of FCFS is not very efficient.
4. FCFS is used mainly on small systems where input-output efficiency is not of utmost
importance.
Experiment 12
Aim: The implementation of the SSTF (Shortest Seek Time First) Disk Scheduling algorithm
Theory:
Introduction:
SSTF (Shortest Seek Time First) is a disk scheduling algorithm which selects the request which is
closest to the current head position. To achieve this, it selects the request which has the least seek
time from the current position of the head.

Algorithm with Code


1. Consider the list of pending I/O requests (arranging them in ascending order can make it
easier to locate the nearest one).
2. The head finds the nearest request (the one at minimum distance from the current head
position) in either direction (left or right) and moves to it. The head movement for that step is
calculated as:
3. Current request - previous head position (if the current request is greater), or
4. Previous head position - current request (if the previous head position is greater), i.e. the
absolute difference between the two.
5. The head then moves to the next nearest request that has not yet been serviced, again in
either direction.
6. This process is repeated until all the requests are serviced, and the step distances are summed
to give the total head movement.

#include <iostream>
#include <algorithm>
#include <cstdlib>
#include <vector>
#include <limits.h>
using namespace std;

// Function to calculate the total head movement of the disk arm
int calculate_head_movement(vector<int> requests, int head_position) {
    int total_head_movement = 0;
    while (!requests.empty()) {
        // Find the pending request with the smallest seek time
        int min_seek_time = INT_MAX;
        int min_index = -1;
        for (int i = 0; i < (int)requests.size(); i++) {
            int seek_time = abs(requests[i] - head_position);
            if (seek_time < min_seek_time) {
                min_seek_time = seek_time;
                min_index = i;
            }
        }
        // Add the seek time to the total head movement
        total_head_movement += min_seek_time;
        // Update the head position to the selected request
        head_position = requests[min_index];
        // Remove the selected request from the vector
        requests.erase(requests.begin() + min_index);
    }
    return total_head_movement;
}

int main() {
    // Example usage
    vector<int> requests = {98, 183, 37, 122, 14, 124, 65, 67};
    int head_position = 53;
    int total_head_movement = calculate_head_movement(requests, head_position);
    cout << "Total head movement: " << total_head_movement << endl;
    return 0;
}
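For the same request sequence with the head starting at cylinder 53, SSTF services the requests in
the order 65, 67, 37, 14, 98, 122, 124, 183, giving a total head movement of
12 + 2 + 30 + 23 + 84 + 24 + 2 + 59 = 236 cylinders, so the program should print
"Total head movement: 236", noticeably less than the 640 cylinders required by FCFS for the same
trace.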

OUTPUT

Applications:
1. File systems: SSTF is used in file systems to read and write files from disk. By minimizing the
head movement of the disk arm, SSTF reduces the average seek time and access time for
reading and writing files.
2. Database systems: SSTF is used in database systems to retrieve data from disk. Databases
often store large amounts of data on disk, and the SSTF algorithm helps to optimize disk
access time for faster retrieval of data.
3. Real-time systems: SSTF is used in real-time systems that require low-latency disk access.
For example, in aviation and automotive systems, low-latency access to data from disk is
critical for safety-critical applications, and SSTF helps to optimize disk access time.

Advantages
1. The total seek time is reduced compared to First Come First Serve.
2. SSTF increases throughput.
3. Average waiting time and response time are lower in SSTF.

Disadvantages
1. SSTF incurs the overhead of finding the closest request before every seek.
2. Starvation may occur for requests that are far from the head.
3. Response time and waiting time show high variance in SSTF.
4. Frequent switching of the head's direction slows down servicing.
