IT204
Practical File
Submitted To: Mr. Rahul Gupta, Department of IT
Submitted By: Piyush Gaba, 2K22/IT/120
INDEX
S. No. Programs Date Sign
1. Case Study - Basic Operating System Commands
2. Write a program to implement FCFS(First Come First
Serve) Scheduling Algorithm
3. Write a program to implement SJF(Shortest Job First)
Scheduling Algorithm
4. Write a program to implement Priority CPU Scheduling
Algorithm
5. Write a program to implement Round Robin (RR)
Scheduling Algorithm
6. Write a program to implement Preemptive Shortest Job
First Scheduling Algorithm
7. Write a program to implement Longest Remaining Time
First (LRTF) CPU Scheduling Algorithm
8. Write a program to implement Banker’s Algorithm
9. Write a program to implement solution of Producer -
Consumer Problem
10. Write a program to implement First In First Out (FIFO)
Page Replacement Algorithm
11. Write a program to implement Least Recently Used
(LRU) Page Replacement Algorithm
12. Write a program to implement Dining - Philosophers
Problem
13. Write a program to implement Reader - Writer Problem
14. Write a program to implement First Come First Serve
(FCFS) Disk Scheduling Algorithm
15. Write a program to implement Shortest Seek Time First
(SSTF) Disk Scheduling Algorithm
16. Write a program to implement SCAN (Elevator) Disk
Scheduling Algorithm
Case Study
Aim
To study basic operating system (Unix) commands and their use in day-to-day system administration.
Introduction
In the fast-paced world of information technology, proficiency in Unix operating system commands is essential
for system administrators, developers, and IT professionals. This case study explores how a team at XYZ
Corporation leveraged basic Unix commands to enhance operational efficiency, streamline processes, and
troubleshoot issues effectively.
Background
XYZ Corporation, a leading tech company, relies heavily on Unix-based systems to power its critical infrastructure.
The IT team identified a need to optimize daily tasks and improve overall system management by leveraging the
power of Unix commands.
Objectives
To use basic Unix commands to enhance operational efficiency, streamline routine processes, and troubleshoot issues effectively.
Implementation
The IT team initiated a comprehensive training program centered on Unix operating system commands.
The program encompassed fundamental commands such as ls, cd, cp, mv, rm, mkdir, and chmod.
Additionally, advanced commands like grep, find, and awk were introduced to address specific use cases.
● chmod: Adjusts file or directory permissions for improved security.
● chown: Changes the owner of a file or directory for user control.
● jobs: Displays the shell's active jobs along with their process status information.
● kill: Terminates a process using either its ID or name.
● ping: Tests network connectivity between specified hosts.
● wget: Downloads files from the internet for offline use.
● uname: Outputs system details like the kernel and machine architecture.
● top: Shows real-time system resource usage for monitoring performance.
● history: Lists previously executed commands for reference.
● man: Provides manual pages for a specified command for in-depth information.
● echo: Prints text to the terminal, facilitating script output.
● zip: Compresses files into a zip archive, reducing storage space.
● unzip: Extracts files from a zip archive for access to compressed content.
● hostname: Reveals the current system's hostname for identification.
● useradd: Creates a new user account, enabling system access.
● userdel: Removes a user account, revoking system access.
● apt-get: Manages software packages on Debian-based Linux systems, facilitating installation and
updates.
● nano: Edits text using a straightforward command-line interface.
● vi: Edits text through a command-line interface, providing powerful editing capabilities.
● jed: Edits text with a graphical user interface, enhancing user experience.
● alias: Creates shortcuts for frequently used commands or command sequences.
● unalias: Eliminates previously created aliases to restore default commands.
● su: Switches to a different user account or the root user for administrative tasks.
● htop: Displays real-time system resource usage with a user-friendly interface, improving readability
compared to top.
● ps: Lists currently running processes, offering a snapshot of system activity.
● vim: Advanced text editor with a command-line interface, featuring powerful editing and customization
options.
Operational Efficiency
Through the integration of Unix commands into daily workflows, the team automated repetitive tasks, thereby
reducing the time and effort required for manual interventions. For instance, scheduled cron jobs using crontab
were employed to automate regular system maintenance, ensuring seamless execution of updates, backups, and
log rotations.
Capitalizing on Unix commands, the team efficiently organized and managed files and directories. They
employed cp and mv for copying and moving files, mkdir for creating directories, and rm for deleting
unnecessary files. The implementation of chmod ensured proper file permissions, thereby enhancing security
and access control.
Troubleshooting Capabilities
The introduction of powerful Unix commands such as grep and find significantly enhanced the team's ability to
quickly troubleshoot issues. Team members proficiently searched through log files using grep to identify patterns
or errors, while the find command played a crucial role in locating specific files or directories across the entire
system, facilitating swift issue resolution.
To encourage enhanced collaboration, the team established standardized practices for sharing Unix command
sequences and solutions. They developed documentation outlining common commands and their applications,
enabling team members to easily reference and share their knowledge. This collaborative approach elevated the
overall skill set within the team and reduced the learning curve for new members.
Results
Time Savings: The automation of routine tasks resulted in a significant reduction in manual effort, allowing team
members to concentrate on more strategic initiatives.
Improved Accuracy: Standardized Unix commands enhanced accuracy in file and directory management,
lowering the likelihood of errors.
Enhanced Troubleshooting: The team's ability to troubleshoot and resolve issues was expedited, minimizing
downtime and improving system reliability.
Knowledge Sharing: The documentation and collaborative approach facilitated knowledge sharing, empowering
team members to learn and apply Unix commands effectively.
Conclusion
Through the seamless integration of basic Unix operating system commands into their daily operations, XYZ
Corporation's IT team successfully enhanced operational efficiency, streamlined file management, improved
troubleshooting capabilities, and fostered improved communication and collaboration. This case study
underscores the significance of foundational Unix skills in optimizing system administration and IT operations.
Program 1
Aim
Write a program to implement the FCFS (First Come First Serve) Scheduling Algorithm.
Theory
The First Come First Serve (FCFS) CPU scheduling algorithm prioritizes processes based on their arrival time,
executing the earliest arriving process first. Processes are placed in a ready queue, and the CPU serves the one
waiting the longest. While simple and easy to implement, FCFS has a drawback—long waiting times for processes
with lengthy burst times, known as the "convoy effect." This limitation can hinder system efficiency as shorter
processes get delayed behind longer ones, impacting overall performance in scenarios with mixed burst times.
Code
void calculate_tat(int pr[], int n, int bt[], int wt[], int tat[]) {
    for (int i = 0; i < n; i++)
        tat[i] = bt[i] + wt[i];
}

int main() {
    int processes[] = {1, 2, 3};
    int n = sizeof processes / sizeof processes[0];
    int burst_time[] = {5, 6, 11};
    calculateAvgTime(processes, n, burst_time);
    return 0;
}
Output
Conclusion
In conclusion, the First-Come-First-Serve (FCFS) algorithm is a basic scheduling approach that executes
processes in the order of their arrival. While easy to implement, it may lead to inefficient resource
utilization and longer waiting times, particularly when shorter processes are queued behind longer
ones. FCFS is suitable for certain scenarios but may not be ideal for systems requiring quick response
times or prioritizing shorter jobs. Understanding its limitations is crucial when choosing scheduling
algorithms to meet specific system requirements.
Program 2
Aim
Write a program to implement the SJF (Shortest Job First) Scheduling Algorithm.
Theory
The Shortest Job First (SJF) algorithm is a CPU scheduling technique that prioritizes the process with the shortest
burst time in the ready queue for immediate execution. While effective in minimizing the average waiting time of
processes, a significant drawback lies in the uncertainty of a process's burst time until it commences execution.
This unpredictability makes the practical implementation of SJF challenging.
Code
#include <iostream>
using namespace std;
struct Process {
int pid;
int bt;
int art;
};
if (rt[shortest] == 0) {
complete++;
check = false;
finish_time = t + 1;
wt[shortest] = finish_time - proc[shortest].bt - proc[shortest].art;
if (wt[shortest] < 0)
wt[shortest] = 0;
}
t++;
}
}
int main(){
Process proc[] = { { 1, 5, 1 }, { 2, 3, 1 }, { 3, 6, 2 }, { 4, 5, 3 } };
int n = sizeof(proc) / sizeof(proc[0]);
findavgTime(proc, n);
return 0;
}
Output
Conclusion
In conclusion, the Shortest Job First (SJF) algorithm plays a crucial role in process scheduling,
emphasizing the execution of shorter tasks to enhance system efficiency. Its non-preemptive nature
aims to minimize waiting times and improve overall system performance. However, challenges like
burst time estimation and the convoy effect should be considered. SJF remains a key concept in
operating system design, influencing the development of scheduling algorithms and contributing to the
ongoing exploration of trade-offs in resource management strategies.
Program 3
Aim
Write a program to implement the Priority CPU Scheduling Algorithm.
Theory
Priority Non-Preemptive Scheduling is a scheduling algorithm in operating systems that assigns priority levels to
each process and schedules them based on these priorities. The process with the highest priority is selected for
execution first. In this non-preemptive version of the algorithm, once a process starts executing, it continues
until it completes or enters a waiting state.
Algorithm
1. Assign priority values to each process. A lower numerical value generally indicates a higher priority.
2. Select the process with the highest priority for execution.
3. Execute the selected process until it completes or enters a waiting state.
4. If multiple processes share the same priority, use additional criteria (e.g., arrival time) to break the tie.
5. Repeat the process until all processes are completed.
Code
#include <stdio.h>
#include <stdlib.h>
completionTimes[process] = clock;
isProcessCompleted[process] = 1;
process =
getNextProcess(arrivalTimes, isProcessCompleted, priority,
isHighNumberHighPriority, clock, numberOfProcesses);
}
averageTurnaroundTime /= numberOfProcesses;
averageWaitingTime /= numberOfProcesses;
printf("PID\tAT\tBT\tComp\tTA\tWT\n");
for (int i = 0; i < numberOfProcesses; i++)
{
printf("%d\t%d\t%d\t%d\t%d\t%d\n", i + 1, arrivalTimes[i], burstTimes[i],
completionTimes[i], turnAroundTimes[i], waitingTimes[i]);
}
printf("\nAverage Waiting Time: %0.2f\n", averageWaitingTime);
printf("Average Turn Around Time: %0.2f\n", averageTurnaroundTime);
free(completionTimes);
free(turnAroundTimes);
free(waitingTimes);
free(isProcessCompleted);
}
int main()
{
int arrivalTime[6] = {1, 2, 3, 4, 5, 6};
int burstTime[6] = {4, 5, 1, 2, 3, 6};
int priority[6] = {4, 5, 7, 2, 1, 6};
findAverageTimes(arrivalTime, burstTime, priority, 1, 6);
return 0;
}
Output
Conclusion
Priority Non-Preemptive Scheduling is a simple and efficient algorithm for managing the execution of processes
in an operating system. It ensures that higher-priority processes are given preference, promoting the timely
execution of critical tasks. However, it may lead to the "starvation" problem if lower-priority processes never get
a chance to execute.
Program 4
Aim
Write a program to implement the Round Robin (RR) Scheduling Algorithm.
Theory
T he Round Robin scheduling algorithm is one of the CPU scheduling algorithms in which every process gets a
fixed amount of time quantum to execute the process.
In this algorithm, every process executes cyclically: a process whose burst time remains after its time quantum
expires is sent back to the ready state to wait for its next turn, continuing until it terminates. This processing
is done in FIFO order, meaning processes are executed on a first-come, first-served basis.
Algorithm
First, the processes that are eligible enter the ready queue. The first process in the ready queue is then
executed for one time-quantum chunk of time, after which it is removed from the ready queue. If the process
still requires time to complete its execution, it is added back to the end of the ready queue.
The ready queue never holds a process that is already present in it; it is designed to hold only unique
entries, since holding the same process twice would introduce redundancy.
Code
#include <iostream>
using namespace std;
void findWaitingTime(int processes[], int n, int bt[], int wt[], int quantum)
{
int rem_bt[n];
for (int i = 0 ; i < n ; i++)
rem_bt[i] = bt[i];
int t = 0;
bool done = false; // Move done variable outside the loop
while (!done)
{
done = true; // Initialize done to true before checking in each iteration
void findTurnAroundTime(int processes[], int n, int bt[], int wt[], int tat[])
{
for (int i = 0; i < n; i++)
tat[i] = bt[i] + wt[i];
}
int main()
{
int processes[] = {1, 2, 3, 4};
int n = sizeof processes / sizeof processes[0];
int burst_time[] = {4, 1, 8, 1};
int quantum = 1;
findavgTime(processes, n, burst_time, quantum);
return 0;
}
Output
Conclusion
Round Robin is one of the most widely used CPU scheduling algorithms. It is built around a Time Quantum (TQ):
on each turn, a chunk equal to the time quantum is subtracted from a process's remaining burst time, so every
process makes steady progress toward completion.
Program 5
Aim
Write a program to implement the Preemptive Shortest Job First Scheduling Algorithm.
Theory
Shortest Remaining Time First (SRTF) is a preemptive scheduling algorithm where the process with the
smallest remaining burst time is selected for execution, and a newly arrived shorter job preempts the running one. This algorithm prioritizes shorter jobs, minimizing the
waiting time and improving system responsiveness. Processes are executed in order of their remaining time,
allowing the system to adapt dynamically to changing workloads. The algorithm reduces turnaround time and
enhances efficiency, though it may lead to a phenomenon called "starvation," where long processes might be
delayed indefinitely. SRTF is a fundamental concept in operating system scheduling, emphasizing optimal CPU
utilization and responsiveness.
Code
#include <iostream>
using namespace std;
struct Process {
int id, at, bt, rem_t;
};
if (p[sel_pr_in].rem_t == 0) {
++comp_pr_n;
tot_wt += curr_t - p[sel_pr_in].at - p[sel_pr_in].bt;
tot_wat += curr_t - p[sel_pr_in].at;
sel_pr_in = -1;
}
++curr_t;
}
}
cout << "\nAverage Waiting Time: " << awt << endl;
cout << "Average Turnaround Time: " << atat << endl;
}
int main() {
int n;
cout << "Enter the number of processes: ";
cin >> n;
Process p[n];
return 0;
}
Output
Conclusion
In conclusion, the Shortest Remaining Time First (SRTF) algorithm offers an efficient approach to
process scheduling, aiming to minimize waiting times and enhance system responsiveness by
prioritizing shorter tasks. Despite its advantages in reducing turnaround times and optimizing CPU
utilization, SRTF may face challenges such as the potential for process starvation. It remains a
fundamental concept in operating system design, contributing to the ongoing development and
refinement of scheduling algorithms. The choice of scheduling algorithm depends on the specific
requirements and characteristics of the system, balancing the trade-offs between fairness, efficiency,
and responsiveness in managing computing resources.
Program 6
Aim
Write a program to implement the Longest Remaining Time First (LRTF) CPU Scheduling Algorithm.
Theory
Longest Remaining Time First (LRTF), or Preemptive Longest Job First (LJF), is a scheduling algorithm that
selects the process with the longest remaining burst time to execute first. If a new process arrives with an
even longer burst time, it preempts the currently executing process and starts executing the new one.
Algorithm
1. Initialize the arrival time, burst time, waiting time, turnaround time, current time, etc.
2. Loop until all processes are executed.
3. At each step, select the process with the longest remaining burst time.
4. If a new process arrives with a longer burst time, preempt the currently executing process.
5. Calculate waiting time, turnaround time, and update current time accordingly.
6. Repeat until all processes are executed.
Code
#include <iostream>
struct Process {
int pid;
int arrivalTime;
int burstTime;
int remainingTime;
};
if (processes[longestIdx].remainingTime == 0) {
completed++;
int completionTime = currentTime;
int turnaroundTime = completionTime - processes[longestIdx].arrivalTime;
int waitingTime = turnaroundTime - processes[longestIdx].burstTime;
totalWaitTime += waitingTime;
totalTurnaroundTime += turnaroundTime;
cout << processes[longestIdx].pid << "\t" << processes[longestIdx].arrivalTime << "\t" <<
processes[longestIdx].burstTime << "\t" << completionTime << "\t" << turnaroundTime << "\t" << waitingTime
<< endl;
}
}
cout << "Average Waiting Time: " << totalWaitTime / n << endl;
cout << "Average Turnaround Time: " << totalTurnaroundTime / n << endl;
}
int main() {
int n;
cout << "Enter the number of processes: ";
cin >> n;
Process processes[n];
cout << "Enter arrival time and burst time for each process:\n";
for (int i = 0; i < n; ++i) {
processes[i].pid = i + 1;
cout << "Process " << i + 1 << ":\n";
cout << "Arrival time: ";
cin >> processes[i].arrivalTime;
cout << "Burst time: ";
cin >> processes[i].burstTime;
processes[i].remainingTime = processes[i].burstTime;
}
ljf(processes, n);
return 0;
}
Output
Conclusion
In conclusion, the preemptive Longest Remaining Time First (LRTF) CPU scheduling algorithm selects processes
based on their remaining burst time, always executing the longest remaining job first. However, it tends to
increase average waiting and turnaround times, since shorter processes are repeatedly deferred behind longer ones.
Program 7
Aim
Write a program to implement Banker's Algorithm.
Theory
Banker's Algorithm is a deadlock avoidance algorithm used in operating systems to manage the allocation of
multiple resources to multiple processes while ensuring that deadlock does not occur. It was developed by
Edsger Dijkstra.
The algorithm works on the principle of simulating the resource allocation process to determine if it's
safe to grant a resource request. It considers the current allocation, maximum allocation, and available
resources to make decisions. If the system remains in a safe state after granting the resources, the
allocation is permitted; otherwise, the process is blocked until resources are available.
Algorithm
1. Input available resources, maximum resource requirement, and resource allocation for each process.
2. Calculate the need matrix by subtracting allocation from maximum.
3. Initialize finish array to false and work array to available resources.
4. Repeat until all processes are finished:
a. Find an unfinished process whose need can be satisfied with available resources.
b. If such a process is found, allocate its resources, update available resources, mark the process as
finished, and record its sequence.
c. If no such process is found, the system is not in a safe state.
5. If all processes are finished, the system is in a safe state.
Code
#include <iostream>
bool isSafe(int *processes, int *avail, int **maxm, int **allot, int P, int R) {
int **need = new int*[P];
for (int i = 0; i < P; i++)
need[i] = new int[R];
if (j == R) {
for (int k = 0; k < R; k++)
work[k] += allot[p][k];
safeSeq[count++] = p;
finish[p] = true;
found = true;
}
}
}
if (!found) {
std::cout << "System is not in a safe state\n";
return false;
}
}
delete[] finish;
delete[] safeSeq;
delete[] work;
for (int i = 0; i < P; i++)
delete[] need[i];
delete[] need;
return true;
}
int main() {
int P, R;
std::cout << "Enter the number of processes: ";
std::cin >> P;
std::cout << "Enter the number of resources: ";
std::cin >> R;
delete[] processes;
delete[] avail;
return 0;
}
Output
Conclusion
The Banker's Algorithm implementation in C++ allows dynamic input for processes, resources, maximum
requirements, and allocations. By considering these inputs, it determines whether the system is in a safe state.
This interactive implementation enhances understanding of resource management and system safety in
concurrent environments.
Program 8
Aim
Write a program to implement a solution of the Producer-Consumer Problem.
Theory
The Producer-Consumer problem is a classic synchronization problem in which one or more producers place items
into a shared, bounded buffer while one or more consumers remove them. Counting semaphores track the empty and
full slots of the buffer, and a mutex enforces mutually exclusive access to it, preventing race conditions,
buffer overflow, and buffer underflow.
Algorithm
1. Define shared buffer, mutex, and semaphores for empty and full slots.
2. Initialize empty slots to maximum buffer size and full slots to 0.
3. Producer produces items:
a. Wait for empty slot semaphore.
b. Acquire mutex lock to access shared buffer.
c. Produce item and add it to the buffer.
d. Release mutex lock.
e. Increment full slot semaphore.
4. Consumer consumes items:
a. Wait for full slot semaphore.
b. Acquire mutex lock to access shared buffer.
c. Consume item from buffer.
d. Release mutex lock.
e. Increment empty slot semaphore.
5. Repeat producer and consumer steps.
Code
#include <iostream>
#include <queue>

// Shared data
std::queue<int> buffer;
int in = 0, out = 0;

// Semaphore-like variables
int empty = BUFFER_SIZE;
int full = 0;
    // Acquire lock
    while (lock)
        ;
    lock = true;

    // Produce item
    buffer.push(item);

    // Release lock
    lock = false;

    // Update semaphores
    empty--;
    full++;
}

    // Acquire lock
    while (lock)
        ;
    lock = true;

    // Consume item
    int item = buffer.front();
    buffer.pop();

    // Release lock
    lock = false;

    // Update semaphores
    empty++;
    full--;
    return item;
}
int main() {
int producers, consumers, items;
    std::cout << "Enter the number of items to produce by each producer: ";
    std::cin >> items;

    // Consume items
    std::cout << "\nConsuming items...\n";
    for (int i = 1; i <= producers * items; ++i) {
        int item = consume();
        std::cout << "Consumed item: " << item << std::endl;
    }
    return 0;
}
Output
Conclusion
The Producer-Consumer problem solution using semaphores and mutex ensures synchronization and mutual
exclusion, preventing race conditions. It optimally utilizes buffer space and maintains data integrity in concurrent
environments, facilitating efficient resource sharing between producers and consumers.
Program 9
Aim
Write a program to implement the First In First Out (FIFO) Page Replacement Algorithm.
Theory
The FIFO algorithm works on the principle of first come, first served: the page that was brought into memory
first is the first one to be replaced when a page fault occurs. The algorithm is simple to implement, requiring
only a queue data structure to maintain the order of pages in memory. However, it suffers from a major
disadvantage, known as Belady's anomaly: the page fault rate can actually increase when the number of
allocated page frames increases.
Algorithm
1. Maintain a queue of the pages currently held in memory frames.
2. For each reference, if the page is already in a frame, continue (no fault).
3. On a page fault, if a free frame exists, load the page into it; otherwise replace the page at the front of the queue (the oldest) with the new page.
4. Enqueue the newly loaded page, count the fault, and repeat until the reference string is exhausted.
Code
#include <stdio.h>
int main() {
int frames[MAX_FRAMES];
int reference[MAX_FRAMES];
int num_frames, num_references;
int num_page_faults = 0;
int frame_index = 0;
Output
Conclusion
The implementation of the FIFO page replacement algorithm proved effective in managing memory resources by
replacing the oldest page first. Through experimentation, it demonstrated the algorithm's simplicity and
suitability for systems with limited memory, showcasing its ability to maintain efficient memory usage.
Program 10
Aim
Write a program to implement the Least Recently Used (LRU) Page Replacement Algorithm.
Theory
The LRU algorithm works on the principle that the least recently used page is the one most likely to be
replaced. It requires maintaining a record of the order in which pages are accessed, which can be done using
a linked list, a stack, or an array. Whenever a page is accessed, it is moved to the front of the list; when
a page fault occurs, the page at the end of the list is the one that is replaced. LRU is known to perform
better than FIFO and does not suffer from Belady's anomaly.
Algorithm
1. Record the most recent time of use for every page resident in memory.
2. For each reference, if the page is present, update its last-used time (no fault).
3. On a page fault, if a free frame exists, load the page into it; otherwise replace the page whose last use lies furthest in the past.
4. Count the fault and repeat until the reference string is exhausted.
Code
#include <stdio.h>
int get_lru_victim();
int main() {
printf("\n\t\t\t LRU PAGE REPLACEMENT ALGORITHM");
printf("\n Enter the number of frames: ");
scanf("%d", &frames_count);
printf("Enter the size of reference string: ");
scanf("%d", &references_count);
printf("Enter the reference string (separated by space): ");
for (int i = 0; i < references_count; i++)
scanf("%d", &ref[i]);
printf("\n");
printf("\nReference String\t\tPage Frames\n");
if (flag == 0) {
count++;
if (count <= frames_count)
victim++;
else
victim = get_lru_victim();
page_faults++;
frames[victim] = ref[i];
for (int j = 0; j < frames_count; j++)
printf("%4d", frames[j]);
}
recent[ref[i]] = i;
}
printf("\n\n\t No. of page faults: %d", page_faults);
return 0;
}
int get_lru_victim() {
int temp1, temp2;
for (int i = 0; i < frames_count; i++) {
temp1 = frames[i];
lru_calculations[i] = recent[temp1];
}
temp2 = lru_calculations[0];
for (int j = 1; j < frames_count; j++) {
if (temp2 > lru_calculations[j]) {
temp2 = lru_calculations[j];
}
}
for (int i = 0; i < frames_count; i++) {
if (recent[frames[i]] == temp2) {
return i;
}
}
return 0;
}
Output
Conclusion
In conclusion, the implementation of the Least Recently Used (LRU) Page Replacement Algorithm proved
effective in managing memory efficiently by replacing the least recently used page. This algorithm offers a
balanced approach to handling memory demands in computer systems, enhancing overall performance and
minimizing resource wastage.
Program 11
Aim
Write a program to implement the Dining Philosophers Problem.
Theory
The Dining Philosophers Problem is a classic concurrency problem where a group of philosophers sits
around a dining table, with each philosopher thinking and eating. There are chopsticks (or forks) placed
between each pair of adjacent philosophers. To eat, a philosopher must pick up the two chopsticks
adjacent to them. However, a chopstick can only be used by one philosopher at a time.
The challenge lies in designing a solution that prevents deadlocks, where all philosophers are waiting
for a chopstick held by another philosopher, resulting in a circular dependency.
Algorithm
1. Initialization: Each philosopher is initially thinking and all the chopsticks are available.
2. Pick up Chopsticks: When a philosopher wants to eat, they must pick up the chopsticks on their
left and right sides. However, to avoid deadlock, the philosopher with the highest ID picks up
the right chopstick first.
3. Eat: Once a philosopher has both chopsticks, they eat for a certain amount of time.
4. Put Down Chopsticks: After eating, the philosopher puts down both chopsticks, making them
available for others.
5. Repeat: Philosophers continuously alternate between thinking and eating, attempting to avoid
deadlock and starvation.
Code
#include <iostream>
#include <chrono>
#include <thread>
forks[left_fork] = false;
forks[right_fork] = false;

printStatus(philosopher_id, "eating");
this_thread::sleep_for(chrono::seconds(1));

forks[left_fork] = true;
forks[right_fork] = true;
int main() {
for (int i = 0; i < NUM_PHILOSOPHERS; ++i) {
eat(i);
}
return 0;
}
Output
Conclusion
Various solutions exist for this problem, ranging from using semaphores or mutexes to implementing more
sophisticated algorithms. Understanding and implementing such solutions help in mastering fundamental
concepts of concurrency and synchronization in computer science.
Program 12
Aim
Write a program to implement the Reader-Writer Problem.
Theory
The Reader-Writer problem concerns a shared resource accessed concurrently by readers, which only inspect it,
and writers, which modify it. Any number of readers may read simultaneously, but a writer requires exclusive
access; synchronization primitives such as semaphores coordinate this access while avoiding race conditions.
Algorithm
The following is a high-level algorithm for the reader-writer problem using semaphores, with a
readers-preference policy:
Initialize semaphores:
● mutex (binary semaphore) to control access to the shared resource.
● wrt (binary semaphore) to give a writer exclusive access to the shared resource.
● read_count (integer variable) to keep track of the number of active readers.
Writer's algorithm:
● Wait on wrt semaphore (to ensure no other writers are active).
● Wait on mutex semaphore (to ensure no readers are active).
● Write the shared resource.
● Signal mutex semaphore (to allow other readers or writers).
● Signal wrt semaphore (to allow other waiting writers).
Reader's algorithm:
● Wait on mutex semaphore (to ensure no writers are active).
● Increment read_count.
● If read_count was 0, wait on wrt semaphore (to ensure no writers are waiting).
● Signal mutex semaphore (to allow other readers).
● Read the shared resource.
● Wait on mutex semaphore (to ensure no new writers have arrived).
● Decrement read_count.
● If read_count is 0, signal wrt semaphore (to allow waiting writers).
● Signal mutex semaphore (to allow other readers or writers).
Code
#include <iostream>
#include <thread>
#include <vector>
#include <semaphore.h>
void writer() {
sem_wait(&wrt); // Ensure no other writers are active
sem_wait(&mutex); // Ensure no readers are active
void reader() {
sem_wait(&mutex); // Ensure no writers are active
read_count++;
if (read_count == 1) {
sem_wait(&wrt); // Ensure no writers are waiting
}
sem_post(&mutex); // Allow other readers
int main() {
sem_init(&mutex, 0, 1); // Binary semaphore guarding read_count
sem_init(&wrt, 0, 1); // Binary semaphore granting writers exclusive access
sem_destroy(&mutex);
sem_destroy(&wrt);
return 0;
}
Output
Conclusion
The reader-writer problem is a fundamental synchronization problem that arises in various scenarios where
multiple threads need to access a shared resource concurrently. The solution presented here uses semaphores to
ensure mutual exclusion between writers and to coordinate access between readers and writers.
The main advantage of this solution is that it allows multiple readers to access the shared resource
simultaneously, improving concurrency and performance. However, it also introduces some overhead due to the
use of semaphores and the need for readers to acquire and release locks.
Program 13
Aim
Write a program to implement the First Come First Serve (FCFS) Disk Scheduling Algorithm.
Theory
FCFS (First-Come, First-Served) Disk Scheduling Algorithm is one of the simplest disk scheduling
algorithms used in computer operating systems. It works on the principle of serving the disk I/O
requests in the order they arrive. When a process submits an I/O request, it gets added to the end of
the disk queue. The disk controller then services these requests one by one in the order they were
received.
Algorithm
1. Initialization: Initialize the disk queue with the initial position of the disk head.
2. Request Arrival: As new I/O requests arrive, they are added to the end of the disk queue.
3. Service Request: The disk controller services the requests in the order they were received. It
starts serving the request at the head of the queue and continues until the queue is empty.
4. Movement of Disk Head: The disk head moves from its current position to the location of the
next I/O request in the queue. The total head movement is the sum of the differences in the
positions of consecutive requests in the queue.
5. Completion: After servicing all the requests in the queue, the algorithm completes, and the
total head movement is calculated.
Code
#include <iostream>
#include <cmath>
return totalHeadMovement;
}
int main() {
// Example disk requests
int requests[] = {176, 79, 34, 60, 92, 11, 41, 114};
int numRequests = sizeof(requests) / sizeof(requests[0]);
return 0;
}
Output
Conclusion
FCFS disk scheduling algorithm is easy to implement and understand, making it suitable for systems with low I/O
loads or for educational purposes. However, it suffers from the "convoy effect," where a long process ahead of a
short one can cause unnecessary delays for subsequent requests. It is also inefficient in terms of disk arm
movement since it doesn't consider the proximity of requests. As a result, FCFS may not be the best choice for
systems with high I/O loads or where minimizing seek time is critical. Other scheduling algorithms like SSTF
(Shortest Seek Time First) or SCAN provide better performance in such scenarios by considering the proximity of
requests and minimizing seek time. Understanding the characteristics and trade-offs of various disk scheduling
algorithms is essential for designing efficient and responsive I/O subsystems in operating systems.
Program 14
Aim
Write a program to implement the SSTF (Shortest Seek Time First) Disk Scheduling Algorithm.
Theory
SSTF (Shortest Seek Time First) is a disk scheduling algorithm that selects the request with the shortest
seek time from the current head position. The seek time is the time required for the disk arm to move
from its current position to the track where the requested data is located. SSTF minimizes the total seek
time, thus improving disk efficiency.
Algorithm
1. Start with an initial head position.
2. Find the request with the shortest seek time from the current head position.
3. Move the disk arm to the track of the selected request.
4. Serve the request.
5. Repeat steps 2-4 until all requests are served.
Code
// Driver code
int main()
{
int proc[] = { 45, 21, 67, 90, 4, 89, 52, 61, 87, 25 };
int n = sizeof(proc) / sizeof(proc[0]);
return 0;
}
Output
Conclusion
In this implementation, we have simulated the SSTF disk scheduling algorithm. The program takes the current
head position and a list of disk requests as input. It then calculates the shortest seek time for each request from
the current head position, selects the request with the minimum seek time, and moves the head accordingly.
Program 15
Aim
Write a program to implement the SCAN (Elevator) Disk Scheduling Algorithm.
Theory
SCAN (Elevator) disk scheduling algorithm is designed to minimize the average seek time by moving the disk arm
in one direction until reaching the end of the disk and then changing direction to scan back in the opposite
direction. It works like an elevator moving up and down the floors. SCAN provides fairness and efficiency in
accessing disk requests by servicing requests along the path of the disk arm's movement.
Algorithm
1. Start with an initial head position and direction (either towards higher tracks or lower tracks).
2. Move the disk arm in the current direction until reaching either the end of the disk or the last request in
that direction.
3. If there are no more requests in the current direction, change the direction and continue scanning in the
opposite direction.
4. Repeat steps 2-3 until all requests are serviced.
Code
#include <bits/stdc++.h>
using namespace std;
// Driver code
int main()
{
// request array
int arr[] = { 240, 94, 179, 51, 118, 15, 137, 29, 75 };
int head = 55;
string direction = "left";
return 0;
}
Output
Conclusion
SCAN disk scheduling algorithm efficiently services disk requests by moving the disk arm in a linear fashion,
traversing the disk from one end to the other and then back again. It ensures fairness in accessing disk requests
by scanning in both directions, thus reducing the average seek time and improving overall disk performance.