

‭Department of Information Technology‬


‭Operating Systems‬

‭IT204‬
‭Practical File‬

Submitted To-                                        Submitted By-
Mr. Rahul Gupta                                      Piyush Gaba
Department of IT                                     2K22/IT/120
‭INDEX‬
S. No.    Programs    Date    Sign
‭1.‬ ‭Case Study - Basic Operating System Commands‬
‭2.‬ ‭Write a program to implement FCFS(First Come First‬
‭Serve) Scheduling Algorithm‬
‭3.‬ ‭Write a program to implement SJF(Shortest Job First)‬
‭Scheduling Algorithm‬
‭4.‬ ‭Write a program to implement Priority CPU Scheduling‬
‭Algorithm‬
‭5.‬ ‭Write a program to implement Round Robin (RR)‬
‭Scheduling Algorithm‬
‭6.‬ ‭Write a program to implement Preemptive Shortest Job‬
‭First Scheduling Algorithm‬
‭7.‬ ‭Write a program to implement Longest Remaining Time‬
‭First (LRTF) CPU Scheduling Algorithm‬
‭8.‬ ‭Write a program to implement Banker’s Algorithm‬
‭9.‬ ‭Write a program to implement solution of Producer -‬
‭Consumer Problem‬
‭10.‬ ‭Write a program to implement First In First Out (FIFO)‬
‭Page Replacement Algorithm‬
‭11.‬ ‭Write a program to implement Least Recently Used‬
‭(LRU) Page Replacement Algorithm‬
‭12.‬ ‭Write a program to implement Dining - Philosophers‬
‭Problem‬
‭13.‬ ‭Write a program to implement Reader - Writer Problem‬
14. Write a program to implement First Come First Serve (FCFS) Disk Scheduling Algorithm
15. Write a program to implement Shortest Seek Time First (SSTF) Disk Scheduling Algorithm
16. Write a program to implement SCAN Disk Scheduling Algorithm
‭Case Study‬
‭Aim‬

‭Write a case study based on Basic Operating System Commands‬

‭Introduction‬

I‭n the fast-paced world of information technology, proficiency in Unix operating system commands is essential‬
‭for system administrators, developers, and IT professionals. This case study explores how a team at XYZ‬
‭Corporation leveraged basic Unix commands to enhance operational efficiency, streamline processes, and‬
‭troubleshoot issues effectively.‬

‭Background‬

XYZ Corporation, a leading tech company, relies heavily on Unix-based systems to power its critical infrastructure.
‭The IT team identified a need to optimize daily tasks and improve overall system management by leveraging the‬
‭power of Unix commands.‬

‭Objectives‬

‭●‬ ‭Improve operational efficiency through automation.‬

‭●‬ ‭Streamline file and directory management.‬

‭●‬ ‭Enhance troubleshooting capabilities.‬

‭●‬ ‭Facilitate effective communication and collaboration among team members.‬

‭Implementation‬

The IT team initiated a comprehensive training program centered on effective use of Unix operating system
commands. The program encompassed fundamental commands such as ls, cd, cp, mv, rm, mkdir, and chmod.
Additionally, advanced commands like grep, find, and awk were introduced to address specific use cases.

‭Basic Operating System Commands‬

‭‬
● chmod: Adjusts file or directory permissions for improved security.
● chown: Changes the owner of a file or directory for user control.
● jobs: Lists the jobs started from the current shell along with their status.
‭●‬ ‭kill: Terminates a process using either its ID or name.‬
‭●‬ ‭ping: Tests network connectivity between specified hosts.‬
‭●‬ ‭wget: Downloads files from the internet for offline use.‬
‭●‬ ‭uname: Outputs system details like the kernel and machine architecture.‬
‭●‬ ‭top: Shows real-time system resource usage for monitoring performance.‬
‭●‬ ‭history: Lists previously executed commands for reference.‬
‭●‬ ‭man: Provides manual pages for a specified command for in-depth information.‬
‭●‬ ‭echo: Prints text to the terminal, facilitating script output.‬
‭●‬ ‭zip: Compresses files into a zip archive, reducing storage space.‬
‭●‬ ‭unzip: Extracts files from a zip archive for access to compressed content.‬
‭‬
● hostname: Reveals the current system's hostname for identification.
‭●‬ ‭useradd: Creates a new user account, enabling system access.‬
‭●‬ ‭userdel: Removes a user account, revoking system access.‬
‭●‬ ‭apt-get: Manages software packages on Debian-based Linux systems, facilitating installation and‬
‭updates.‬
‭‬
● ‭nano: Edits text using a straightforward command-line interface.‬
‭●‬ ‭vi: Edits text through a command-line interface, providing powerful editing capabilities.‬
‭●‬ ‭jed: Edits text with a graphical user interface, enhancing user experience.‬
‭●‬ ‭alias: Creates shortcuts for frequently used commands or command sequences.‬
‭●‬ ‭unalias: Eliminates previously created aliases to restore default commands.‬
‭●‬ ‭su: Switches to a different user account or the root user for administrative tasks.‬
‭●‬ ‭htop: Displays real-time system resource usage with a user-friendly interface, improving readability‬
‭compared to top.‬
‭‬
● ‭ps: Lists currently running processes, offering a snapshot of system activity.‬
‭●‬ ‭vim: Advanced text editor with a command-line interface, featuring powerful editing and customization‬
‭options.‬

‭Operational Efficiency‬

Through the integration of Unix commands into daily workflows, the team automated repetitive tasks, thereby
‭reducing the time and effort required for manual interventions. For instance, scheduled cron jobs using crontab‬
‭were employed to automate regular system maintenance, ensuring seamless execution of updates, backups, and‬
‭log rotations.‬

‭File and Directory Management‬

Capitalizing on Unix commands, the team efficiently organized and managed files and directories. They
‭employed cp and mv for copying and moving files, mkdir for creating directories, and rm for deleting‬
‭unnecessary files. The implementation of chmod ensured proper file permissions, thereby enhancing security‬
‭and access control.‬

‭Troubleshooting Capabilities‬

The introduction of potent Unix commands such as grep and find significantly enhanced the team's ability to
‭quickly troubleshoot issues. Team members proficiently searched through log files using grep to identify patterns‬
‭or errors, while the find command played a crucial role in locating specific files or directories across the entire‬
‭system, facilitating swift issue resolution.‬

‭Communication and Collaboration‬

To encourage enhanced collaboration, the team established standardized practices for sharing Unix command
‭sequences and solutions. They developed documentation outlining common commands and their applications,‬
‭enabling team members to easily reference and share their knowledge. This collaborative approach elevated the‬
‭overall skill set within the team and reduced the learning curve for new members.‬

‭Results‬

Time Savings: The automation of routine tasks resulted in a significant reduction in manual effort, allowing team
‭members to concentrate on more strategic initiatives.‬
I‭mproved Accuracy: Standardized Unix commands enhanced accuracy in file and directory management,‬
‭lowering the likelihood of errors.‬

Enhanced Troubleshooting: The team's ability to troubleshoot and resolve issues was expedited, minimizing
‭downtime and improving system reliability.‬

Knowledge Sharing: The documentation and collaborative approach facilitated knowledge sharing, empowering
‭team members to learn and apply Unix commands effectively.‬

‭Conclusion‬

Through the seamless integration of basic Unix operating system commands into their daily operations, XYZ
‭Corporation's IT team successfully enhanced operational efficiency, streamlined file management, improved‬
‭troubleshooting capabilities, and fostered improved communication and collaboration. This case study‬
‭underscores the significance of foundational Unix skills in optimizing system administration and IT operations.‬
‭Program 1‬
‭Aim‬

‭Write a program to implement FCFS(First Come First Serve) Scheduling Algorithm‬

‭Theory‬

The First Come First Serve (FCFS) CPU scheduling algorithm prioritizes processes based on their arrival time,
executing the earliest arriving process first. Processes are placed in a ready queue, and the CPU serves the one
that has been waiting the longest. While simple and easy to implement, FCFS has a drawback: processes queued
behind a job with a long burst time suffer long waits, the so-called "convoy effect". For example, with burst times
of 5, 6 and 11 units arriving in that order, the waiting times are 0, 5 and 11 units; had the 11-unit job arrived first,
every later job would have waited at least 11 units. This limitation can hinder system efficiency as shorter
processes get delayed behind longer ones, impacting overall performance in scenarios with mixed burst times.

‭Code‬

#include <iostream>
using namespace std;

void calculate_wt(int pr[], int n, int bt[], int wt[]){
    // The first process waits for nothing; each later one waits for all earlier bursts
    wt[0] = 0;
    for (int i = 1; i < n; i++)
        wt[i] = bt[i-1] + wt[i-1];
}

void calculate_tat(int pr[], int n, int bt[], int wt[], int tat[]){
    for (int i = 0; i < n; i++)
        tat[i] = bt[i] + wt[i];
}

void calculateAvgTime(int pr[], int n, int bt[]){
    int wt[n], tat[n], total_wt = 0, total_tat = 0;
    calculate_wt(pr, n, bt, wt);
    calculate_tat(pr, n, bt, wt, tat);
    cout << "FCFS(First Come First Serve) Scheduling Algorithm" << endl;
    cout << "Pr No." << " \tBT " << "\tWT " << "\tTAT\n";
    for (int i = 0; i < n; i++){
        total_wt = total_wt + wt[i];
        total_tat = total_tat + tat[i];
        cout << i+1 << "\t" << bt[i] << "\t" << wt[i] << "\t" << tat[i] << endl;
    }
    cout << "Average waiting time = " << (float)total_wt / (float)n;
    cout << "\nAverage turn around time = " << (float)total_tat / (float)n;
}

int main(){
    int processes[] = {1, 2, 3};
    int n = sizeof processes / sizeof processes[0];
    int burst_time[] = {5, 6, 11};
    calculateAvgTime(processes, n, burst_time);
    return 0;
}

‭Output‬

‭Conclusion‬

I‭n conclusion, the First-Come-First-Serve (FCFS) algorithm is a basic scheduling approach that executes‬
‭processes in the order of their arrival. While easy to implement, it may lead to inefficient resource‬
‭utilization and longer waiting times, particularly when shorter processes are queued behind longer‬
‭ones. FCFS is suitable for certain scenarios but may not be ideal for systems requiring quick response‬
‭times or prioritizing shorter jobs. Understanding its limitations is crucial when choosing scheduling‬
‭algorithms to meet specific system requirements.‬
‭Program 2‬
‭Aim‬

‭Write a program to implement SJF(Shortest Job First) Scheduling Algorithm‬

‭Theory‬

The Shortest Job First (SJF) algorithm is a CPU scheduling technique that prioritizes the process with the shortest
‭burst time in the ready queue for immediate execution. While effective in minimizing the average waiting time of‬
‭processes, a significant drawback lies in the uncertainty of a process's burst time until it commences execution.‬
‭This unpredictability makes the practical implementation of SJF challenging.‬

‭Code‬

#include <iostream>
#include <climits> // for INT_MAX
using namespace std;

struct Process {
int pid;
int bt;
int art;
};

void findTurnAroundTime(Process proc[], int n, int wt[], int tat[]) {


‭for (int i = 0; i < n; i++)‬
‭tat[i] = proc[i].bt + wt[i];‬
‭}‬

void findWaitingTime(Process proc[], int n, int wt[]) {


‭int rt[n];‬
‭for (int i = 0; i < n; i++)‬
‭rt[i] = proc[i].bt;‬
‭int complete = 0, t = 0, minm = INT_MAX;‬
‭int shortest = 0, finish_time;‬
‭bool check = false;‬
‭while (complete != n) {‬
‭for (int j = 0; j <n ;j++){‬
‭if ((proc[j].art <= t) && (rt[j] < minm) && rt[j] > 0){‬
‭minm = rt[j];‬
‭shortest = j;‬
‭check = true;‬
‭}‬
‭}‬

if (check == false) {


‭t++;‬
‭continue;‬
}‭ ‬
‭rt[shortest]--;‬
‭minm = rt[shortest];‬

‭if (minm == 0) minm = INT_MAX;‬

if (rt[shortest] == 0) {
‭complete++;‬
‭check = false;‬
‭finish_time = t + 1;‬
‭wt[shortest] = finish_time - proc[shortest].bt - proc[shortest].art;‬
‭if (wt[shortest] < 0)‬
‭wt[shortest] = 0;‬
‭}‬
‭t++;‬
‭}‬
‭}‬

void findavgTime(Process proc[], int n) {


‭int wt[n], tat[n], total_wt = 0, total_tat = 0;‬
‭findWaitingTime(proc, n, wt);‬
‭findTurnAroundTime(proc, n, wt, tat);‬
‭cout << "Processes " << " Burst time " << " Waiting time " << " Turn around time\n";‬
‭for (int i = 0; i < n; i++) {‬
‭total_wt = total_wt + wt[i];‬
‭total_tat = total_tat + tat[i];‬
‭cout << " " << proc[i].pid << "\t\t" << proc[i].bt << "\t\t " << wt[i] << "\t\t " << tat[i] <<‬
‭endl;‬
‭}‬
‭cout << "\nAverage waiting time = " << (float)total_wt / (float)n;‬
‭cout << "\nAverage turn around time = " << (float)total_tat / (float)n;‬
‭}‬

int main(){
‭Process proc[] = { { 1, 5, 1 }, { 2, 3, 1 }, { 3, 6, 2 }, { 4, 5, 3 } };‬
‭int n = sizeof(proc) / sizeof(proc[0]);‬
‭findavgTime(proc, n);‬
‭return 0;‬
‭}‬
‭Output‬

‭Conclusion‬

I‭n conclusion, the Shortest Job First (SJF) algorithm plays a crucial role in process scheduling,‬
‭emphasizing the execution of shorter tasks to enhance system efficiency. Its non-preemptive nature‬
‭aims to minimize waiting times and improve overall system performance. However, challenges like‬
‭burst time estimation and the convoy effect should be considered. SJF remains a key concept in‬
‭operating system design, influencing the development of scheduling algorithms and contributing to the‬
‭ongoing exploration of trade-offs in resource management strategies.‬
‭Program 3‬
‭Aim‬

‭Write a program to implement Priority CPU Scheduling Algorithm‬

‭Theory‬

Priority Non-Preemptive Scheduling is a scheduling algorithm in operating systems that assigns priority levels to
‭each process and schedules them based on these priorities. The process with the highest priority is selected for‬
‭execution first. In this non-preemptive version of the algorithm, once a process starts executing, it continues‬
‭until it completes or enters a waiting state.‬

‭Algorithm‬

1. Assign priority values to each process. A lower numerical value generally indicates a higher priority.
‭2.‬ ‭Select the process with the highest priority for execution.‬
‭3.‬ ‭Execute the selected process until it completes or enters a waiting state.‬
‭4.‬ ‭If multiple processes share the same priority, use additional criteria (e.g., arrival time) to break the tie.‬
‭5.‬ ‭Repeat the process until all processes are completed.‬

‭Code‬

#include <stdio.h>
‭#include <stdlib.h>‬

‭int getNextProcess(int *arrivalTimes, int *isProcessCompleted, int *priority,‬


‭int isHighNumberHighPriority, int clock,‬
‭int numberOfProcesses)‬
‭{‬
‭if (clock == -1)‬
‭{‬
‭// the first process is the one which arrived first, irrespective of its‬
‭// priority‬
‭int min = 0;‬
‭for (int i = 0; i < numberOfProcesses; i++)‬
‭{‬
‭if (arrivalTimes[i] < arrivalTimes[min])‬
‭{‬
‭min = i;‬
‭}‬
‭}‬
‭return min;‬
‭}‬
‭int min = 0;‬
‭while (isProcessCompleted[min])‬
‭{‬
‭min++;‬
}‭ ‬
‭if (min >= numberOfProcesses)‬
‭return -1;‬
‭for (int i = min; i < numberOfProcesses; i++)‬
‭{‬
‭if (isProcessCompleted[i] || arrivalTimes[i] > clock)‬
‭{‬
‭// process has not yet arrived or it is already completed‬
‭continue;‬
‭}‬
‭if (isHighNumberHighPriority)‬
‭{‬
‭if (priority[i] > priority[min])‬
‭{‬
‭min = i;‬
‭}‬
‭} else‬
‭{‬
‭if (priority[i] < priority[min])‬
‭{‬
‭min = i;‬
‭}‬
‭}‬
‭}‬
‭return min;‬
‭}‬

‭void findAverageTimes(int *arrivalTimes, int *bustTimes, int *priority,‬


‭int isHighNumberHighPriority, int numberOfProcesses)‬
‭{‬
‭int *completionTimes = (int *)calloc(numberOfProcesses, sizeof(int));‬
‭int *turnAroundTimes = (int *)calloc(numberOfProcesses, sizeof(int));‬
‭int *waitingTimes = (int *)calloc(numberOfProcesses, sizeof(int));‬
‭int *isProcessCompleted = (int *)calloc(numberOfProcesses, sizeof(int));‬
‭int clock = -1;‬
‭int process =‬
‭getNextProcess(arrivalTimes, isProcessCompleted, priority,‬
‭isHighNumberHighPriority, clock, numberOfProcesses);‬
‭clock = arrivalTimes[process];‬

while (process != -1)
{

‭waitingTimes[process] = clock - arrivalTimes[process];‬


‭clock += bustTimes[process];‬

‭completionTimes[process] = clock;‬

‭turnAroundTimes[process] = waitingTimes[process] + bustTimes[process];‬

‭isProcessCompleted[process] = 1;‬

‭process =‬
‭getNextProcess(arrivalTimes, isProcessCompleted, priority,‬
‭isHighNumberHighPriority, clock, numberOfProcesses);‬
‭}‬

float averageWaitingTime = 0;


‭float averageTurnaroundTime = 0;‬
‭for (int i = 0; i < numberOfProcesses; i++)‬
‭{‬
‭averageWaitingTime += waitingTimes[i];‬
‭averageTurnaroundTime += turnAroundTimes[i];‬
‭}‬

averageTurnaroundTime /= numberOfProcesses;
‭averageWaitingTime /= numberOfProcesses;‬

‭// display the data‬

‭ rintf("PID\tAT\tBT\tComp\tTA\tWT\n");‬
p
‭for (int i = 0; i < numberOfProcesses; i++)‬
‭{‬
‭printf("%d\t%d\t%d\t%d\t%d\t%d\n", i + 1, arrivalTimes[i], bustTimes[i],‬
‭completionTimes[i], turnAroundTimes[i], waitingTimes[i]);‬
‭}‬
‭printf("\nAverage Waiting Time: %0.2f\n", averageWaitingTime);‬
‭printf("Average Turn Around Time: %0.2f\n", averageTurnaroundTime);‬

free(completionTimes);
‭free(turnAroundTimes);‬
‭free(waitingTimes);‬
‭free(isProcessCompleted);‬
‭}‬

int main()
‭{‬
‭int arrivalTime[6] = {1, 2, 3, 4, 5, 6};‬
int burstTime[6] = {4, 5, 1, 2, 3, 6};
‭int priority[6] = {4, 5, 7, 2, 1, 6};‬
‭findAverageTimes(arrivalTime, burstTime, priority, 1, 6);‬

‭return 0;‬
‭}‬

‭Output‬

‭Conclusion‬

Priority Non-Preemptive Scheduling is a simple and efficient algorithm for managing the execution of processes
‭in an operating system. It ensures that higher-priority processes are given preference, promoting the timely‬
‭execution of critical tasks. However, it may lead to the "starvation" problem if lower-priority processes never get‬
‭a chance to execute.‬
‭Program 4‬
‭Aim‬

‭Write a program to implement Round Robin (RR) Scheduling Algorithm‬

‭Theory‬

The Round Robin scheduling algorithm is one of the CPU scheduling algorithms in which every process gets a
‭fixed amount of time quantum to execute the process.‬
‭In this algorithm, every process gets executed cyclically. This means that processes that have their burst time‬
‭remaining after the expiration of the time quantum are sent back to the ready state and wait for their next turn‬
‭to complete the execution until it terminates. This processing is done in FIFO order which suggests that‬
‭processes are executed on a first-come, first-serve basis.‬

‭Algorithm‬

First, the processes that are eligible enter the ready queue. The process at the head of the ready queue is
executed for one time quantum. When the quantum expires, the process is removed from the CPU; if it still
requires more time to complete its execution, it is added back to the tail of the ready queue, otherwise it
terminates.

The ready queue never holds duplicate entries for a process: each process appears in the queue at most once,
since holding the same process more than once would only introduce redundancy.

‭Code‬

#include <iostream>
‭using namespace std;‬

void findWaitingTime(int processes[], int n, int bt[], int wt[], int quantum)
‭{‬
‭int rem_bt[n];‬
‭for (int i = 0 ; i < n ; i++)‬
‭rem_bt[i] = bt[i];‬

int t = 0;
‭bool done = false; // Move done variable outside the loop‬

while (!done)
{
‭done = true; // Initialize done to true before checking in each iteration‬

for (int i = 0; i < n; i++)


‭{‬
‭if (rem_bt[i] > 0)‬
‭{‬
‭done = false;‬

if (rem_bt[i] > quantum)


‭{‬
‭t += quantum;‬
‭rem_bt[i] -= quantum;‬
‭}‬
‭else‬
‭{‬
‭t = t + rem_bt[i];‬
‭wt[i] = t - bt[i];‬
‭rem_bt[i] = 0;‬
‭}‬
‭}‬
‭}‬
‭}‬
‭}‬

void findTurnAroundTime(int processes[], int n, int bt[], int wt[], int tat[])
‭{‬
‭for (int i = 0; i < n; i++)‬
‭tat[i] = bt[i] + wt[i];‬
‭}‬

void findavgTime(int processes[], int n, int bt[], int quantum)


‭{‬
‭int wt[n], tat[n], total_wt = 0, total_tat = 0;‬
‭findWaitingTime(processes, n, bt, wt, quantum);‬
‭findTurnAroundTime(processes, n, bt, wt, tat);‬

c‭ out << "ROUND ROBIN SCHEDULING ALGORITHM" << endl;‬


‭cout << "PN " << " \tBT " << " \t WT " << " \tTAT\n";‬

for (int i = 0; i < n; i++)


‭{‬
‭total_wt = total_wt + wt[i];‬
‭total_tat = total_tat + tat[i];‬
‭cout << " " << i + 1 << "\t" << bt[i] << "\t " << wt[i] << "\t " << tat[i] << endl;‬
‭}‬

c‭ out << "Average waiting time = " << (float)total_wt / (float)n;‬


‭cout << "\nAverage turn around time = " << (float)total_tat / (float)n;‬
‭}‬

‭int main()‬
‭{‬
int processes[] = {1, 2, 3, 4};
‭int n = sizeof processes / sizeof processes[0];‬
‭int burst_time[] = {4, 1, 8, 1};‬
‭int quantum = 1;‬
‭findavgTime(processes, n, burst_time, quantum);‬
‭return 0;‬
‭}‬

‭Output‬

‭Conclusion‬

Round Robin is one of the most widely used CPU scheduling algorithms. It is built around the Time Quantum (TQ):
in each turn a process runs for at most one quantum, that amount is subtracted from its remaining burst time, and
the process rejoins the ready queue until its burst time is exhausted.
‭Program 5‬
‭Aim‬

‭Write a program to implement Preemptive Shortest Job First Scheduling Algorithm‬

‭Theory‬

Shortest Remaining Time First (SRTF) is a preemptive scheduling algorithm (the preemptive form of Shortest Job
First) in which the process with the smallest remaining burst time is selected for execution. This algorithm
prioritizes shorter jobs, minimizing the
‭waiting time and improving system responsiveness. Processes are executed in order of their remaining time,‬
‭allowing the system to adapt dynamically to changing workloads. The algorithm reduces turnaround time and‬
‭enhances efficiency, though it may lead to a phenomenon called "starvation," where long processes might be‬
‭delayed indefinitely. SRTF is a fundamental concept in operating system scheduling, emphasizing optimal CPU‬
‭utilization and responsiveness.‬

‭Code‬

#include <iostream>
‭using namespace std;‬

‭struct Process {‬
‭int id, at, bt, rem_t;‬
‭};‬

‭void srtf(Process p[], int n) {‬


‭int curr_t = 0, comp_pr_n = 0;‬
‭int sel_pr_in = -1;‬
‭float tot_wt = 0, tot_wat = 0;‬

‭while (comp_pr_n < n) {‬


‭for (int i = 0; i < n; ++i) {‬
‭if (p[i].at <= curr_t && p[i].rem_t > 0) {‬
‭if (sel_pr_in == -1 || p[i].rem_t < p[sel_pr_in].rem_t)‬
‭sel_pr_in = i;‬
‭}‬
‭}‬

‭if (sel_pr_in == -1) {‬


‭++curr_t;‬
‭} else {‬
‭--p[sel_pr_in].rem_t;‬
‭cout << "Time " << curr_t << ": Process " << p[sel_pr_in].id << endl;‬

‭if (p[sel_pr_in].rem_t == 0) {‬
‭++comp_pr_n;‬
tot_wt += (curr_t + 1) - p[sel_pr_in].at - p[sel_pr_in].bt; // process finishes at curr_t + 1
tot_wat += (curr_t + 1) - p[sel_pr_in].at;
‭sel_pr_in = -1;‬
‭}‬

‭++curr_t;‬
‭}‬
‭}‬

float awt = tot_wt / n;


‭float atat = tot_wat / n;‬

c‭ out << "\nAverage Waiting Time: " << awt << endl;‬
‭cout << "Average Turnaround Time: " << atat << endl;‬
‭}‬

‭int main() {‬
‭int n;‬
‭cout << "Enter the number of processes: ";‬
‭cin >> n;‬
‭Process p[n];‬

‭for (int i = 0; i < n; ++i) {‬


‭p[i].id = i + 1;‬
‭cout << "Enter arrival time and burst time for Process " << p[i].id << ": ";‬
‭cin >> p[i].at >> p[i].bt;‬
‭p[i].rem_t = p[i].bt;‬
‭}‬

c‭ out << "\nShortest Remaining Time First (SRTF) Scheduling:\n";‬


‭srtf(p, n);‬

‭return 0;‬
‭}‬
‭Output‬

‭Conclusion‬

I‭n conclusion, the Shortest Remaining Time First (SRTF) algorithm offers an efficient approach to‬
‭process scheduling, aiming to minimize waiting times and enhance system responsiveness by‬
‭prioritizing shorter tasks. Despite its advantages in reducing turnaround times and optimizing CPU‬
‭utilization, SRTF may face challenges such as the potential for process starvation. It remains a‬
‭fundamental concept in operating system design, contributing to the ongoing development and‬
‭refinement of scheduling algorithms. The choice of scheduling algorithm depends on the specific‬
‭requirements and characteristics of the system, balancing the trade-offs between fairness, efficiency,‬
‭and responsiveness in managing computing resources.‬
‭Program 6‬
‭Aim‬

‭Write a program to implement‬‭Longest Remaining Time‬‭First (LRTF) CPU Scheduling Algorithm‬

‭Theory‬

‭Longest Remaining Time First (LRTF) or‬‭Preemptive‬‭Longest Job First (LJF) scheduling algorithm selects the‬
process with the longest remaining burst time to execute first. If a new process arrives with an even longer burst
‭time, it preempts the currently executing process and starts executing the new one.‬

‭Algorithm‬

1. Initialize the arrival time, burst time, waiting time, turnaround time, current time, etc.
‭2.‬ ‭Loop until all processes are executed.‬
‭3.‬ ‭At each step, select the process with the longest remaining burst time.‬
‭4.‬ ‭If a new process arrives with a longer burst time, preempt the currently executing process.‬
‭5.‬ ‭Calculate waiting time, turnaround time, and update current time accordingly.‬
‭6.‬ ‭Repeat until all processes are executed.‬

‭Code‬

‭#include <iostream>‬

‭using namespace std;‬

‭struct Process {‬
‭int pid;‬
‭int arrivalTime;‬
‭int burstTime;‬
‭int remainingTime;‬
‭};‬

‭void ljf(Process processes[], int n) {‬


‭int currentTime = 0;‬
‭int completed = 0;‬
‭float totalWaitTime = 0, totalTurnaroundTime = 0;‬

‭cout << "PID\tAT\tBT\tCT\tTAT\tWT\n";‬

‭while (completed < n) {‬


‭int longestIdx = -1;‬
‭int longestRemainingTime = 0;‬

// Find the process with the longest remaining time


‭for (int i = 0; i < n; ++i) {‬
‭if (processes[i].arrivalTime <= currentTime && processes[i].remainingTime > 0) {‬
‭if (longestIdx == -1 || processes[i].remainingTime > longestRemainingTime) {‬
‭longestIdx = i;‬
‭longestRemainingTime = processes[i].remainingTime;‬
‭}‬
‭}‬
‭}‬

‭if (longestIdx == -1) {‬


‭currentTime++;‬
‭continue;‬
‭}‬

// Execute the process


‭processes[longestIdx].remainingTime--;‬
‭currentTime++;‬

‭if (processes[longestIdx].remainingTime == 0) {‬
‭completed++;‬
‭int completionTime = currentTime;‬
‭int turnaroundTime = completionTime - processes[longestIdx].arrivalTime;‬
‭int waitingTime = turnaroundTime - processes[longestIdx].burstTime;‬

totalWaitTime += waitingTime;
‭totalTurnaroundTime += turnaroundTime;‬

cout << processes[longestIdx].pid << "\t" << processes[longestIdx].arrivalTime << "\t" <<
‭processes[longestIdx].burstTime << "\t" << completionTime << "\t" << turnaroundTime << "\t" << waitingTime‬
‭<< endl;‬
‭}‬
‭}‬

c‭ out << "Average Waiting Time: " << totalWaitTime / n << endl;‬
‭cout << "Average Turnaround Time: " << totalTurnaroundTime / n << endl;‬
‭}‬

‭int main() {‬
‭int n;‬
‭cout << "Enter the number of processes: ";‬
‭cin >> n;‬

Process processes[n];
‭cout << "Enter arrival time and burst time for each process:\n";‬
‭for (int i = 0; i < n; ++i) {‬
‭processes[i].pid = i + 1;‬
‭cout << "Process " << i + 1 << ":\n";‬
‭cout << "Arrival time: ";‬
‭cin >> processes[i].arrivalTime;‬
‭cout << "Burst time: ";‬
cin >> processes[i].burstTime;
‭processes[i].remainingTime = processes[i].burstTime;‬
‭}‬

‭ljf(processes, n);‬

‭return 0;‬
‭}‬

‭Output‬

‭Conclusion‬

In conclusion, the Longest Remaining Time First (LRTF) CPU scheduling algorithm always runs the process with the
largest remaining burst time, preempting the current process whenever a longer job arrives. It keeps long jobs from
being postponed, but it tends to increase the average waiting time of shorter processes and causes frequent
context switches.
‭Program 7‬
‭Aim‬

‭Write a program to implement‬‭Banker’s Algorithm‬

‭Theory‬

Banker's Algorithm is a deadlock avoidance algorithm used in operating systems to manage the
‭allocation of multiple resources to multiple processes while ensuring that deadlock does not occur. It‬
‭was developed by Edsger Dijkstra.‬
The algorithm works on the principle of simulating the resource allocation process to determine if it's
‭safe to grant a resource request. It considers the current allocation, maximum allocation, and available‬
‭resources to make decisions. If the system remains in a safe state after granting the resources, the‬
‭allocation is permitted; otherwise, the process is blocked until resources are available.‬

‭Algorithm‬

1. Input available resources, maximum resource requirement, and resource allocation for each process.
‭2.‬ ‭Calculate the need matrix by subtracting allocation from maximum.‬
‭3.‬ ‭Initialize finish array to false and work array to available resources.‬
‭4.‬ ‭Repeat until all processes are finished:‬
‭a. Find an unfinished process whose need can be satisfied with available resources.‬
‭b. If such a process is found, allocate its resources, update available resources, mark the process as‬
‭finished, and record its sequence.‬
‭c. If no such process is found, system is not in a safe state.‬
5. If all processes are finished, the system is in a safe state.

‭Code‬

‭#include <iostream>‬

‭void calculateNeed(int **need, int **maxm, int **allot, int P, int R) {‬


‭for (int i = 0; i < P; i++)‬
‭for (int j = 0; j < R; j++)‬
‭need[i][j] = maxm[i][j] - allot[i][j];‬
‭}‬

‭bool isSafe(int *processes, int *avail, int **maxm, int **allot, int P, int R) {‬
‭int **need = new int*[P];‬
‭for (int i = 0; i < P; i++)‬
‭need[i] = new int[R];‬

‭calculateNeed(need, maxm, allot, P, R);‬

‭bool *finish = new bool[P]();‬


int *safeSeq = new int[P];
‭int *work = new int[R];‬
‭for (int i = 0; i < R; i++)‬
‭work[i] = avail[i];‬

int count = 0;


‭while (count < P) {‬
‭bool found = false;‬
‭for (int p = 0; p < P; p++) {‬
‭if (!finish[p]) {‬
‭int j;‬
‭for (j = 0; j < R; j++)‬
‭if (need[p][j] > work[j])‬
‭break;‬

‭if (j == R) {‬
‭for (int k = 0; k < R; k++)‬
‭work[k] += allot[p][k];‬
‭safeSeq[count++] = p;‬
‭finish[p] = true;‬
‭found = true;‬
‭}‬
‭}‬
}‭ ‬
‭if (!found) {‬
‭std::cout << "System is not in a safe state\n";‬
‭return false;‬
‭}‬
‭}‬

s‭ td::cout << "System is in a safe state.\nSafe sequence is: ";‬


‭for (int i = 0; i < P - 1; i++)‬
‭std::cout << safeSeq[i] << " -> ";‬
‭std::cout << safeSeq[P - 1] << std::endl;‬

delete[] finish;
‭delete[] safeSeq;‬
‭delete[] work;‬
‭for (int i = 0; i < P; i++)‬
‭delete[] need[i];‬
‭delete[] need;‬

‭return true;‬
‭}‬

‭int main() {‬
‭int P, R;‬
s‭ td::cout << "Enter the number of processes: ";‬
‭std::cin >> P;‬
‭std::cout << "Enter the number of resources: ";‬
‭std::cin >> R;‬

int *processes = new int[P];


‭int *avail = new int[R];‬

s‭ td::cout << "Enter available resources: ";‬


‭for (int i = 0; i < R; i++)‬
‭std::cin >> avail[i];‬

int **maxm = new int*[P];


‭int **allot = new int*[P];‬

s‭ td::cout << "Enter maximum resources required by each process:\n";‬


‭for (int i = 0; i < P; i++) {‬
‭maxm[i] = new int[R];‬
‭std::cout << "Process " << i << ": ";‬
‭for (int j = 0; j < R; j++)‬
‭std::cin >> maxm[i][j];‬
‭}‬

s‭ td::cout << "Enter resources allocated to each process:\n";‬


‭for (int i = 0; i < P; i++) {‬
‭allot[i] = new int[R];‬
‭std::cout << "Process " << i << ": ";‬
‭for (int j = 0; j < R; j++)‬
‭std::cin >> allot[i][j];‬
‭}‬

‭if (isSafe(processes, avail, maxm, allot, P, R))‬


‭std::cout << "Safe state\n";‬
‭else‬
‭std::cout << "Unsafe state\n";‬

delete[] processes;
‭delete[] avail;‬

‭for (int i = 0; i < P; i++) {‬


‭delete[] maxm[i];‬
‭delete[] allot[i];‬
‭}‬
‭delete[] maxm;‬
‭delete[] allot;‬

‭return 0;‬
‭}‬
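A quick way to exercise the program is the standard textbook instance (these numbers are only an illustrative example, not data recorded in this practical): 5 processes, 3 resource types, Available = 3 3 2, Max = (7 5 3), (3 2 2), (9 0 2), (2 2 2), (4 3 3) and Allocation = (0 1 0), (2 0 0), (3 0 2), (2 1 1), (0 0 2). With this input the resulting need matrix lets the program report the system as safe with the sequence 1 -> 3 -> 4 -> 0 -> 2.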
‭Output‬

‭Conclusion‬

The Banker's Algorithm implementation in C++ allows dynamic input for processes, resources, maximum
‭requirements, and allocations. By considering these inputs, it determines whether the system is in a safe state.‬
‭This interactive implementation enhances understanding of resource management and system safety in‬
‭concurrent environments.‬
‭Program 8‬
‭Aim‬

‭Write a program to implement‬‭solution of Producer‬‭- Consumer Problem‬

‭Theory‬

The Producer-Consumer problem addresses efficient synchronization between processes where one


‭produces data and the other consumes it. Utilizing semaphores and shared buffers, it ensures mutual‬
‭exclusion and synchronization, facilitating smooth coordination in concurrent environments, vital for‬
‭efficient resource management and task distribution.‬

‭Algorithm‬

1. Define shared buffer, mutex, and semaphores for empty and full slots.
‭2. Initialize empty slots to maximum buffer size and full slots to 0.‬
‭3. Producer produces items:‬
‭a. Wait for empty slot semaphore.‬
‭b. Acquire mutex lock to access shared buffer.‬
‭c. Produce item and add it to the buffer.‬
‭d. Release mutex lock.‬
‭e. Increment full slot semaphore.‬
‭4. Consumer consumes items:‬
‭a. Wait for full slot semaphore.‬
‭b. Acquire mutex lock to access shared buffer.‬
‭c. Consume item from buffer.‬
‭d. Release mutex lock.‬
‭e. Increment empty slot semaphore.‬
‭5. Repeat producer and consumer steps.‬

‭Code‬

#include <iostream>
#include <queue>

// Define maximum buffer size
#define BUFFER_SIZE 100

// Shared data
std::queue<int> buffer;
int in = 0, out = 0;

// Semaphore-like variables
int empty = BUFFER_SIZE;
int full = 0;

// Mutex-like variable
bool lock = false;

// Function to produce an item
void produce(int item) {
‭// Busy wait until an empty slot is available‬
‭while (empty == 0)‬
‭;‬

// Acquire lock
while (lock)
;
lock = true;

// Produce item
buffer.push(item);

// Release lock
lock = false;

// Update semaphores
empty--;
full++;
‭}‬

// Function to consume an item
int consume() {
‭// Busy wait until a full slot is available‬
‭while (full == 0)‬
‭;‬

// Acquire lock
while (lock)
;
lock = true;

// Consume item
int item = buffer.front();
buffer.pop();

// Release lock
lock = false;

// Update semaphores
empty++;
full--;

‭return item;‬
‭}‬
‭int main() {‬
‭int producers, consumers, items;‬

s‭ td::cout << "Enter the number of producers: ";‬


‭std::cin >> producers;‬

s‭ td::cout << "Enter the number of consumers: ";‬


‭std::cin >> consumers;‬

s‭ td::cout << "Enter the number of items to produce by each producer: ";‬
‭std::cin >> items;‬

‭std::cout << "\nProducer-Consumer Simulation:\n";‬

// Simulate the producers (this demo runs them sequentially)


‭for (int i = 1; i <= producers; ++i) {‬
‭std::cout << "Producer " << i << " is producing items...\n";‬
‭for (int j = 1; j <= items; ++j) {‬
‭produce(i * 100 + j); // Unique item for each producer‬
‭}‬
‭}‬

// Consume items
‭std::cout << "\nConsuming items...\n";‬
‭for (int i = 1; i <= producers * items; ++i) {‬
‭int item = consume();‬
‭std::cout << "Consumed item: " << item << std::endl;‬
‭}‬

‭return 0;‬
‭}‬
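The listing above only simulates semaphores with plain counters and a busy-wait flag, and it runs the producer and consumer phases one after the other. For reference, here is a minimal sketch (not part of the original lab code; the buffer capacity of 5 and the item count of 10 are assumed demo values) of the same producer-consumer idea using std::mutex and std::condition_variable from C++11, which provides the mutual exclusion and the empty/full waiting described in the algorithm without busy waiting:

#include <iostream>
#include <queue>
#include <thread>
#include <mutex>
#include <condition_variable>

const unsigned int CAPACITY = 5; // assumed bounded-buffer capacity
std::queue<int> buf;
std::mutex m;
std::condition_variable not_full, not_empty;

void producer(int count) {
    for (int i = 1; i <= count; ++i) {
        std::unique_lock<std::mutex> lk(m);
        not_full.wait(lk, [] { return buf.size() < CAPACITY; }); // wait for an empty slot
        buf.push(i);                                             // produce item
        std::cout << "Produced " << i << "\n";
        not_empty.notify_one();                                  // signal a full slot
    }
}

void consumer(int count) {
    for (int i = 0; i < count; ++i) {
        std::unique_lock<std::mutex> lk(m);
        not_empty.wait(lk, [] { return !buf.empty(); });         // wait for a full slot
        int item = buf.front();                                  // consume item
        buf.pop();
        std::cout << "Consumed " << item << "\n";
        not_full.notify_one();                                   // signal an empty slot
    }
}

int main() {
    std::thread p(producer, 10), c(consumer, 10);
    p.join();
    c.join();
    return 0;
}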
‭Output‬

‭Conclusion‬

The Producer-Consumer problem solution using semaphores and mutex ensures synchronization and mutual
‭exclusion, preventing race conditions. It optimally utilizes buffer space and maintains data integrity in concurrent‬
‭environments, facilitating efficient resource sharing between producers and consumers.‬
‭Program 9‬
‭Aim‬

‭Write a program to implement‬‭First In First Out (FIFO)‬‭Page Replacement Algorithm‬

‭Theory‬

FIFO algorithm works on the principle of first come, first serve. In this algorithm, the page that is
‭brought into memory first is the first one to be replaced when a page fault occurs. This algorithm is‬
‭simple to implement and requires only a queue data structure to maintain the order of pages in‬
‭memory. However, it suffers from a major disadvantage, known as the Belady's anomaly, which‬
‭means that the page fault rate can increase with an increase in the number of page frames allocated.‬

‭Algorithm‬

1. Start the program
‭2. Prompt the user to enter the number of frames‬
‭3. Prompt the user to enter the number of pages‬
‭4. Prompt the user to enter the page numbers‬
‭5. Initialize all the elements in the frame array to -1‬
‭6. Loop through each page in the reference string:‬
‭a. Set a variable named page_found to 0‬
‭b. Get the current page number from the reference string‬
‭c. Loop through each frame:‬
‭i. Check if the current page number is already in the frame‬
‭ii. If it is, set page_found to 1 and break out of the loop‬
‭d. If page_found is still 0, add the current page to the frame array in the first available slot (based‬
‭on FIFO order)‬
‭i. Increment the number of page faults‬
‭e. Print the current reference string and the current state of the page frames‬
‭7. Print the total number of page faults‬
‭8. End the program‬

‭Code‬

‭#include <stdio.h>‬

‭#define MAX_FRAMES 50‬

‭int main() {‬
‭int frames[MAX_FRAMES];‬
‭int reference[MAX_FRAMES];‬
‭int num_frames, num_references;‬
int num_page_faults = 0;
‭int frame_index = 0;‬

‭printf("FIFO Page Replacement Algorithm\n\n");‬

// Read input from user


‭printf("Enter the number of frames: ");‬
‭scanf("%d", &num_frames);‬

‭ rintf("Enter the size of reference string: ");‬


p
‭scanf("%d", &num_references);‬

‭ rintf("Enter the reference string (separated by space): ");‬


p
‭for (int i = 0; i < num_references; i++)‬
‭scanf("%d", &reference[i]);‬

// Initialize all frames to -1, indicating they are empty


‭for (int i = 0; i < MAX_FRAMES; i++)‬
‭frames[i] = -1;‬

‭printf("\nReference String\tPage Frames\n");‬

// Loop through each reference string


‭for (int i = 0; i < num_references; i++) {‬
‭int page_found = 0;‬
‭int page = reference[i];‬

// Check if the page is already in the frame


‭for (int j = 0; j < num_frames; j++) {‬
‭if (frames[j] == page) {‬
‭page_found = 1;‬
‭break;‬
‭}‬
‭}‬

// If the page is not found in the frame, add it to the frame


‭if (!page_found) {‬
‭frames[frame_index] = page;‬
‭frame_index = (frame_index + 1) % num_frames;‬
‭num_page_faults++;‬
‭}‬

// Print the current reference string and page frames


‭printf("%4d\t\t", page);‬
‭for (int j = 0; j < num_frames; j++) {‬
‭printf("%4d", frames[j]);‬
‭}‬
‭printf("\n");‬
‭}‬

‭ rintf("\nTotal number of page faults: %d\n", num_page_faults);‬


p
‭return 0;‬
‭}‬

‭Output‬

‭Conclusion‬

The implementation of the FIFO page replacement algorithm proved effective in managing memory resources by
‭replacing the oldest page first. Through experimentation, it demonstrated the algorithm's simplicity and‬
‭suitability for systems with limited memory, showcasing its ability to maintain efficient memory usage.‬
‭Program 10‬
‭Aim‬

‭Write a program to implement‬‭Least Recently Used (LRU)‬‭Page Replacement Algorithm‬

‭Theory‬

LRU algorithm works on the principle that the least recently used page is the one that is most likely
‭to be replaced. This algorithm requires maintaining a record of the order in which pages are accessed.‬
‭This can be done using a linked list, a stack, or an array. Whenever a page is accessed, it is moved to‬
‭the front of the list. When a page fault occurs, the page at the end of the list is the one that is‬
‭replaced. LRU algorithm is known to perform better than FIFO and does not suffer from the Belady's‬
‭anomaly.‬

‭Algorithm‬

1. Start the program
‭2. Read the number of frames‬
‭3. Read the number of pages‬
‭4. Read the page numbers‬
‭5. Initialize all the elements in the frame array to -1‬
‭6. Loop through each page in the reference string:‬
‭a. Check if the current page is already in the frame array‬
‭i. If it is, update the time of the most recent access for that page‬
‭b. If it is not in the frame array, find the least recently used (LRU) page by iterating through the‬
‭frame array and finding the page with the smallest time of last access‬
‭i. Replace the LRU page with the current page‬
‭ii. Increment the number of page faults‬
‭c. Print the current reference string and the current state of the page frames‬
‭7. Print the total number of page faults‬
‭8. End the program‬

‭Code‬

‭#include <stdio.h>‬

int frames_count, references_count, flag = 0, ref[50], frames[50], page_faults = 0, victim = -1;


‭int recent[50], lru_calculations[50], count = 0;‬

‭int get_lru_victim();‬

‭int main() {‬
‭ rintf("\n\t\t\t LRU PAGE REPLACEMENT ALGORITHM");‬
p
‭printf("\n Enter the number of frames: ");‬
‭scanf("%d", &frames_count);‬
‭printf("Enter the size of reference string: ");‬
‭scanf("%d", &references_count);‬
‭printf("Enter the reference string (separated by space): ");‬
‭for (int i = 0; i < references_count; i++)‬
‭scanf("%d", &ref[i]);‬

‭ rintf("\n\n\t\t LRU PAGE REPLACEMENT ALGORITHM ");‬


p
‭for (int i = 0; i < frames_count; i++) {‬
‭frames[i] = -1;‬
‭lru_calculations[i] = 0;‬
‭}‬

‭for (int i = 0; i < 50; i++)‬


‭recent[i] = 0;‬

‭ rintf("\n");‬
p
‭printf("\nReference String\t\tPage Frames\n");‬

‭for (int i = 0; i < references_count; i++) {‬


‭flag = 0;‬
‭printf("\n\t %d\t \t \t \t ", ref[i]);‬

‭for (int j = 0; j < frames_count; j++) {‬


‭if (frames[j] == ref[i]) {‬
‭flag = 1;‬
‭break;‬
‭}‬
‭}‬

‭if (flag == 0) {‬
‭count++;‬
‭if (count <= frames_count)‬
‭victim++;‬
‭else‬
‭victim = get_lru_victim();‬

page_faults++;
‭frames[victim] = ref[i];‬
‭for (int j = 0; j < frames_count; j++)‬
‭printf("%4d", frames[j]);‬
}‭ ‬
‭recent[ref[i]] = i;‬
}‭ ‬
‭printf("\n\n\t No. of page faults: %d", page_faults);‬
‭return 0;‬
‭}‬

‭int get_lru_victim() {‬
‭int temp1, temp2;‬
‭for (int i = 0; i < frames_count; i++) {‬
‭temp1 = frames[i];‬
‭lru_calculations[i] = recent[temp1];‬
‭}‬
‭temp2 = lru_calculations[0];‬
‭for (int j = 1; j < frames_count; j++) {‬
‭if (temp2 > lru_calculations[j]) {‬
‭temp2 = lru_calculations[j];‬
‭}‬
‭}‬
‭for (int i = 0; i < frames_count; i++) {‬
‭if (recent[frames[i]] == temp2) {‬
‭return i;‬
‭}‬
‭}‬
‭return 0;‬
‭}‬
‭Output‬

‭Conclusion‬

I‭n conclusion, the implementation of the Least Recently Used (LRU) Page Replacement Algorithm proved‬
‭effective in managing memory efficiently by replacing the least recently used page. This algorithm offers a‬
‭balanced approach to handling memory demands in computer systems, enhancing overall performance and‬
‭minimizing resource wastage.‬
‭Program 11‬
‭Aim‬

‭Write a program to implement‬‭Dining - Philosophers‬‭Problem‬

‭Theory‬

The Dining Philosophers Problem is a classic concurrency problem where a group of philosophers sits
‭around a dining table, with each philosopher thinking and eating. There are chopsticks (or forks) placed‬
‭between each pair of adjacent philosophers. To eat, a philosopher must pick up the two chopsticks‬
‭adjacent to them. However, a chopstick can only be used by one philosopher at a time.‬
The challenge lies in designing a solution that prevents deadlocks, where all philosophers are waiting
‭for a chopstick held by another philosopher, resulting in a circular dependency.‬

‭Algorithm‬

1. Initialization: Each philosopher is initially thinking and all the chopsticks are available.
‭2.‬ ‭Pick up Chopsticks: When a philosopher wants to eat, they must pick up the chopsticks on their‬
‭left and right sides. However, to avoid deadlock, the philosopher with the highest ID picks up‬
‭the right chopstick first.‬
‭3.‬ ‭Eat: Once a philosopher has both chopsticks, they eat for a certain amount of time.‬
‭4.‬ ‭Put Down Chopsticks: After eating, the philosopher puts down both chopsticks, making them‬
‭available for others.‬
‭5.‬ ‭Repeat: Philosophers continuously alternate between thinking and eating, attempting to avoid‬
‭deadlock and starvation.‬

‭Code‬

#include <iostream>
‭#include <chrono>‬
‭#include <thread>‬

‭using namespace std;‬

const int NUM_PHILOSOPHERS = 5;


‭bool forks[NUM_PHILOSOPHERS] = {true, true, true, true, true};‬

‭void printStatus(int philosopher_id, const string& status) {‬


‭cout << "Philosopher " << philosopher_id << " is " << status << "." << endl;‬
‭}‬
‭void eat(int philosopher_id) {‬
‭int left_fork = philosopher_id;‬
‭int right_fork = (philosopher_id + 1) % NUM_PHILOSOPHERS;‬

‭if (!forks[left_fork] || !forks[right_fork])‬


‭return;‬

forks[left_fork] = false;
‭forks[right_fork] = false;‬

‭ rintStatus(philosopher_id, "eating");‬
p
‭this_thread::sleep_for(chrono::seconds(1));‬

forks[left_fork] = true;
‭forks[right_fork] = true;‬

‭printStatus(philosopher_id, "finished eating");‬


‭}‬

‭int main() {‬
‭for (int i = 0; i < NUM_PHILOSOPHERS; ++i) {‬
‭eat(i);‬
‭}‬

‭return 0;‬
‭}‬
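The program above calls the philosophers one after another, so contention for chopsticks never actually occurs. As a point of comparison, here is a minimal threaded sketch (not part of the original program; it assumes the same five philosophers) in which every chopstick is a std::mutex and each philosopher always picks up the lower-numbered chopstick first. This fixed acquisition order is a resource-ordering variant of the asymmetric rule described in the algorithm and prevents the circular wait that causes deadlock:

#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

const int N = 5;          // number of philosophers/chopsticks (assumed, as above)
std::mutex chopstick[N];
std::mutex print_mtx;     // keeps the printed lines from interleaving

void philosopher(int id) {
    int left = id;
    int right = (id + 1) % N;
    int first = std::min(left, right);   // always lock the lower-numbered chopstick first
    int second = std::max(left, right);

    std::lock_guard<std::mutex> a(chopstick[first]);
    std::lock_guard<std::mutex> b(chopstick[second]);

    std::lock_guard<std::mutex> p(print_mtx);
    std::cout << "Philosopher " << id << " is eating with chopsticks "
              << first << " and " << second << std::endl;
}

int main() {
    std::vector<std::thread> diners;
    for (int i = 0; i < N; ++i)
        diners.emplace_back(philosopher, i);
    for (auto& t : diners)
        t.join();
    return 0;
}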
‭Output‬

‭Conclusion‬

The Dining Philosophers Problem illustrates challenges in concurrent programming, particularly in scenarios


‭where multiple threads contend for shared resources. By carefully designing a solution that ensures mutual‬
‭exclusion (only one philosopher can use a chopstick at a time), deadlock prevention (avoiding circular‬
‭dependencies), and fairness (all philosophers have an equal chance to eat), we can create a system where‬
‭philosophers can dine without issues.‬

Various solutions exist for this problem, ranging from using semaphores or mutexes to implementing more
‭sophisticated algorithms. Understanding and implementing such solutions help in mastering fundamental‬
‭concepts of concurrency and synchronization in computer science.‬
‭Program 12‬
‭Aim‬

‭Write a program to implement‬‭Reader - Writer Problem‬

‭Theory‬

The reader-writer problem can be solved using semaphores or monitors. Semaphores are a


‭synchronization primitive that provides a way to control access to shared resources, while monitors are‬
‭a high-level synchronization construct that encapsulates shared data and the operations that‬
‭manipulate that data.‬
The key requirements for the reader-writer problem are:
1. Any number of readers can simultaneously read the data as long as there are no writers.
2. Only one writer can write the data at a time, and no readers are allowed during the write operation.
3. Readers and writers have priority over each other based on the chosen algorithm (e.g., readers-preference or writers-preference).

‭Algorithm‬

The following is a high-level algorithm for the reader-writer problem using semaphores, with a
‭readers-preference policy:‬

‭Initialize semaphores:‬
‭●‬ ‭mutex (binary semaphore) to control access to the shared resource.‬
● wrt (binary semaphore) to give a writer exclusive access to the shared resource.
‭●‬ ‭read_count (integer variable) to keep track of the number of active readers.‬
‭Writer's algorithm:‬
‭●‬ ‭Wait on wrt semaphore (to ensure no other writers are active).‬
‭●‬ ‭Wait on mutex semaphore (to ensure no readers are active).‬
‭●‬ ‭Write the shared resource.‬
‭●‬ ‭Signal mutex semaphore (to allow other readers or writers).‬
‭●‬ ‭Signal wrt semaphore (to allow other waiting writers).‬
‭Reader's algorithm:‬
‭●‬ ‭Wait on mutex semaphore (to ensure no writers are active).‬
‭●‬ ‭Increment read_count.‬
‭●‬ ‭If read_count was 0, wait on wrt semaphore (to ensure no writers are waiting).‬
‭●‬ ‭Signal mutex semaphore (to allow other readers).‬
‭●‬ ‭Read the shared resource.‬
‭●‬ ‭Wait on mutex semaphore (to ensure no new writers have arrived).‬
‭●‬ ‭Decrement read_count.‬
‭●‬ ‭If read_count is 0, signal wrt semaphore (to allow waiting writers).‬
‭●‬ ‭Signal mutex semaphore (to allow other readers or writers).‬
‭Code‬

#include <iostream>
‭#include <thread>‬
‭#include <vector>‬
‭#include <semaphore.h>‬

sem_t mutex, wrt;


‭int read_count = 0;‬

‭void writer() {‬
‭sem_wait(&wrt); // Ensure no other writers are active‬
‭sem_wait(&mutex); // Ensure no readers are active‬

// Write to the shared resource


‭std::cout << "Writer is writing...\n";‬

sem_post(&mutex); // Allow readers or writers


‭sem_post(&wrt); // Allow other waiting writers‬
‭}‬

‭void reader() {‬
‭sem_wait(&mutex); // Ensure no writers are active‬
‭read_count++;‬
‭if (read_count == 1) {‬
‭sem_wait(&wrt); // Ensure no writers are waiting‬
‭}‬
‭sem_post(&mutex); // Allow other readers‬

// Read from the shared resource


‭std::cout << "Reader is reading...\n";‬

sem_wait(&mutex); // Ensure no new writers have arrived


‭read_count--;‬
‭if (read_count == 0) {‬
‭sem_post(&wrt); // Allow waiting writers‬
‭}‬
‭sem_post(&mutex); // Allow other readers or writers‬
‭}‬

‭int main() {‬
‭sem_init(&mutex, 0, 1); // Initialize binary semaphore‬
‭sem_init(&wrt, 0, 1); // Initialize counting semaphore‬

‭std::vector<std::thread> readers, writers;‬

// Create reader threads


‭for (int i = 0; i < 5; i++) {‬
‭readers.emplace_back(reader);‬
‭}‬

// Create writer threads


‭for (int i = 0; i < 3; i++) {‬
‭writers.emplace_back(writer);‬
‭}‬

// Wait for all threads to finish


‭for (auto& t : readers) {‬
‭t.join();‬
‭}‬
‭for (auto& t : writers) {‬
‭t.join();‬
‭}‬

sem_destroy(&mutex);
‭sem_destroy(&wrt);‬

‭return 0;‬
‭}‬

‭Output‬

‭Conclusion‬

The reader-writer problem is a fundamental synchronization problem that arises in various scenarios where
‭multiple threads need to access a shared resource concurrently. The solution presented here uses semaphores to‬
‭ensure mutual exclusion between writers and to coordinate access between readers and writers.‬

The main advantage of this solution is that it allows multiple readers to access the shared resource
‭simultaneously, improving concurrency and performance. However, it also introduces some overhead due to the‬
‭use of semaphores and the need for readers to acquire and release locks.‬
‭Program 13‬
‭Aim‬

‭Write a program to implement‬‭First Come First Serve‬‭(FCFS) Disk Scheduling Algorithm‬

‭Theory‬

FCFS (First-Come, First-Served) Disk Scheduling Algorithm is one of the simplest disk scheduling
‭algorithms used in computer operating systems. It works on the principle of serving the disk I/O‬
‭requests in the order they arrive. When a process submits an I/O request, it gets added to the end of‬
‭the disk queue. The disk controller then services these requests one by one in the order they were‬
‭received.‬

‭Algorithm‬

1. Initialization: Initialize the disk queue with the initial position of the disk head.
‭2.‬ ‭Request Arrival: As new I/O requests arrive, they are added to the end of the disk queue.‬
‭3.‬ ‭Service Request: The disk controller services the requests in the order they were received. It‬
‭starts serving the request at the head of the queue and continues until the queue is empty.‬
‭4.‬ ‭Movement of Disk Head: The disk head moves from its current position to the location of the‬
‭next I/O request in the queue. The total head movement is the sum of the differences in the‬
‭positions of consecutive requests in the queue.‬
‭5.‬ ‭Completion: After servicing all the requests in the queue, the algorithm completes, and the‬
‭total head movement is calculated.‬

‭Code‬

#include <iostream>
‭#include <cmath>‬

‭using namespace std;‬

‭int calculateTotalHeadMovement(int requests[], int numRequests, int initialPosition) {‬


‭int totalHeadMovement = 0;‬
‭int currentPosition = initialPosition;‬

// Traverse through the requests and calculate head movement


‭for (int i = 0; i < numRequests; ++i) {‬
‭totalHeadMovement += abs(currentPosition - requests[i]);‬
‭currentPosition = requests[i];‬
‭}‬

‭return totalHeadMovement;‬
‭}‬
‭int main() {‬
‭// Example disk requests‬
‭int requests[] = {176, 79, 34, 60, 92, 11, 41, 114};‬
‭int numRequests = sizeof(requests) / sizeof(requests[0]);‬

// Initial position of disk head


‭int initialPosition = 50;‬

// Calculate total head movement


‭int totalHeadMovement = calculateTotalHeadMovement(requests, numRequests, initialPosition);‬

// Output the result


‭cout << "Total head movement using FCFS algorithm: " << totalHeadMovement << endl;‬

‭return 0;‬
‭}‬

‭Output‬

‭Conclusion‬

FCFS disk scheduling algorithm is easy to implement and understand, making it suitable for systems with low I/O
‭loads or for educational purposes. However, it suffers from the "convoy effect," where a long process ahead of a‬
‭short one can cause unnecessary delays for subsequent requests. It is also inefficient in terms of disk arm‬
‭movement since it doesn't consider the proximity of requests. As a result, FCFS may not be the best choice for‬
‭systems with high I/O loads or where minimizing seek time is critical. Other scheduling algorithms like SSTF‬
‭(Shortest Seek Time First) or SCAN provide better performance in such scenarios by considering the proximity of‬
‭requests and minimizing seek time. Understanding the characteristics and trade-offs of various disk scheduling‬
‭algorithms is essential for designing efficient and responsive I/O subsystems in operating systems.‬
‭Program 14‬
‭Aim‬

‭Write a program to implement‬‭the SSTF(Shortest Seek Time First) Disk Scheduling Algorithm.‬

‭Theory‬

SSTF (Shortest Seek Time First) is a disk scheduling algorithm that selects the request with the shortest
‭seek time from the current head position. The seek time is the time required for the disk arm to move‬
‭from its current position to the track where the requested data is located. SSTF minimizes the total seek‬
‭time, thus improving disk efficiency.‬

‭Algorithm‬

1. Start with an initial head position.
‭2.‬ ‭Find the request with the shortest seek time from the current head position.‬
‭3.‬ ‭Move the disk arm to the track of the selected request.‬
‭4.‬ ‭Serve the request.‬
‭5.‬ ‭Repeat steps 2-4 until all requests are served.‬

‭Code‬

// C++ program for implementation of


‭// SSTF disk scheduling‬
‭#include <bits/stdc++.h>‬
‭using namespace std;‬

// Calculates difference of each


‭// track number with the head position‬
‭void calculatedifference(int request[], int head,‬
‭int diff[][2], int n)‬
‭{‬
‭for(int i = 0; i < n; i++)‬
‭{‬
‭diff[i][0] = abs(head - request[i]);‬
‭}‬
‭}‬

// Find unaccessed track which is


‭// at minimum distance from head‬
‭int findMIN(int diff[][2], int n)‬
‭{‬
‭int index = -1;‬
‭int minimum = 1e9;‬

for(int i = 0; i < n; i++)


‭{‬
if (!diff[i][1] && minimum > diff[i][0])
‭{‬
‭minimum = diff[i][0];‬
‭index = i;‬
‭}‬
}‭ ‬
‭return index;‬
‭}‬

‭void shortestSeekTimeFirst(int request[],‬


‭int head, int n)‬
‭{‬
‭if (n == 0)‬
‭{‬
‭return;‬
‭}‬

// diff[i][0] stores the distance from the head, diff[i][1] marks serviced tracks
// (a variable-length array cannot be brace-initialized, so clear the flags in a loop)
int diff[n][2];
for (int i = 0; i < n; i++)
diff[i][1] = 0;

// Count total number of seek operations


‭int seekcount = 0;‬

// Stores sequence in which disk access is done


int seeksequence[n + 1];

for(int i = 0; i < n; i++)


‭{‬
‭seeksequence[i] = head;‬
‭calculatedifference(request, head, diff, n);‬
‭int index = findMIN(diff, n);‬
‭diff[index][1] = 1;‬

// Increase the total count


‭seekcount += diff[index][0];‬

// Accessed track is now new head


‭head = request[index];‬
}‭ ‬
‭seeksequence[n] = head;‬

‭cout << "Total number of seek operations = "‬


‭<< seekcount << endl;‬
‭cout << "Seek sequence is : " << "\n";‬

// Print the sequence


‭for(int i = 0; i <= n; i++)‬
‭{‬
‭cout << seeksequence[i] << "\n";‬
‭}‬
‭}‬

// Driver code
‭int main()‬
‭{‬
‭int n = 10;‬
int proc[] = { 45, 21, 67, 90, 4, 89, 52, 61, 87, 25 };

‭shortestSeekTimeFirst(proc, 50, n);‬

‭return 0;‬
‭}‬

‭Output‬

‭Conclusion‬

I‭n this implementation, we have simulated the SSTF disk scheduling algorithm. The program takes the current‬
‭head position and a list of disk requests as input. It then calculates the shortest seek time for each request from‬
‭the current head position, selects the request with the minimum seek time, and moves the head accordingly‬
‭Program 15‬
‭Aim‬

‭Write a program to implement‬‭the SCAN Disk Scheduling Algorithm.‬

‭Theory‬

SCAN (Elevator) disk scheduling algorithm is designed to minimize the average seek time by moving the disk arm
‭in one direction until reaching the end of the disk and then changing direction to scan back in the opposite‬
‭direction. It works like an elevator moving up and down the floors. SCAN provides fairness and efficiency in‬
‭accessing disk requests by servicing requests along the path of the disk arm's movement.‬

‭Algorithm‬

1. Start with an initial head position and direction (either towards higher tracks or lower tracks).
‭2.‬ ‭Move the disk arm in the current direction until reaching either the end of the disk or the last request in‬
‭that direction.‬
‭3.‬ ‭If there are no more requests in the current direction, change the direction and continue scanning in the‬
‭opposite direction.‬
‭4.‬ ‭Repeat steps 2-3 until all requests are serviced.‬

‭Code‬

#include <bits/stdc++.h>
‭using namespace std;‬

‭int disk_size = 200;‬

void SCAN(int arr[], int head, string direction)


‭{‬
int size = 9; // number of requests in the array passed from main
‭int seek_count = 0;‬
‭int distance, cur_track;‬
‭vector<int> left, right;‬
‭vector<int> seek_sequence;‬

// appending end values


‭// which has to be visited‬
‭// before reversing the direction‬
‭if (direction == "left")‬
‭left.push_back(0);‬
‭else if (direction == "right")‬
‭right.push_back(disk_size - 1);‬

‭for (int i = 0; i < size; i++) {‬


‭if (arr[i] < head)‬
‭left.push_back(arr[i]);‬
‭if (arr[i] > head)‬
‭right.push_back(arr[i]);‬
‭}‬

// sorting left and right vectors


‭std::sort(left.begin(), left.end());‬
‭std::sort(right.begin(), right.end());‬

// run the while loop two times.


‭// one by one scanning right‬
‭// and left of the head‬
‭int run = 2;‬
‭while (run--) {‬
‭if (direction == "left") {‬
‭for (int i = left.size() - 1; i >= 0; i--) {‬
‭cur_track = left[i];‬

// appending current track to seek sequence


‭seek_sequence.push_back(cur_track);‬

// calculate absolute distance


‭distance = abs(cur_track - head);‬

// increase the total count


‭seek_count += distance;‬

// accessed track is now the new head


‭head = cur_track;‬
}‭ ‬
‭direction = "right";‬
}‭ ‬
‭else if (direction == "right") {‬
‭for (int i = 0; i < right.size(); i++) {‬
‭cur_track = right[i];‬
‭// appending current track to seek sequence‬
‭seek_sequence.push_back(cur_track);‬

// calculate absolute distance


‭distance = abs(cur_track - head);‬

// increase the total count


‭seek_count += distance;‬

// accessed track is now new head


‭head = cur_track;‬
}‭ ‬
‭direction = "left";‬
‭}‬
‭}‬

‭cout << "Total number of seek operations = "‬


‭<< seek_count << endl;‬

‭cout << "Seek Sequence is" << endl;‬

‭for (int i = 0; i < seek_sequence.size(); i++) {‬


‭cout << seek_sequence[i] << endl;‬
‭}‬
‭}‬

// Driver code
‭int main()‬
‭{‬

// request array
‭int arr[] = { 240, 94, 179, 51, 118, 15, 137, 29, 75 };‬
‭int head = 55;‬
‭string direction = "left";‬

‭SCAN(arr, head, direction);‬

‭return 0;‬
‭}‬

‭Output‬
‭Conclusion‬

SCAN disk scheduling algorithm efficiently services disk requests by moving the disk arm in a linear fashion,
‭traversing the disk from one end to the other and then back again. It ensures fairness in accessing disk requests‬
‭by scanning in both directions, thus reducing the average seek time and improving overall disk performance.‬
