Maharaja Agrasen
Institute Of Technology
PSP Area, Sector-22, Rohini, New Delhi - 110086
Submitted To : Submitted By :
Mission
The Institute shall endeavor to incorporate the following basic missions in the teaching methodology:
Practical exercises in all Engineering and Management disciplines shall be carried out using
hardware equipment as well as the related software, enabling a deeper understanding of basic
concepts and encouraging an inquisitive nature.
The Institute strives to match technological advancements and encourages students to keep
updating their knowledge, enhancing their skills and inculcating the habit of continuous learning.
The Institute endeavors to enhance technical and management skills of students so that they are intellectually
capable and competent professionals with Industrial Aptitude to face the challenges of globalization.
Diversification
The Engineering, Technology and Management disciplines have diverse fields of studies with different
attributes. The aim is to create a synergy of the above attributes by encouraging analytical thinking.
The Institute provides seamless opportunities for innovative learning in all Engineering and Management
disciplines through digitization of learning processes using analysis, synthesis, simulation, graphics, tutorials
and related tools to create a platform for multi-disciplinary approach.
Entrepreneurship
The Institute strives to develop potential Engineers and Managers by enhancing their skills and research
capabilities so that they become successful entrepreneurs and responsible citizens.
Vision
To nurture young minds in a learning environment of high academic value and imbibe spiritual and ethical
values with technological and management competence.
Department of Computer Science & Engineering
An NBA Accredited Program
Mission
To foster an open, multidisciplinary and highly collaborative research environment to produce world-class
engineers capable of providing innovative solutions to real-life problems and fulfil societal needs.
Vision
To be a centre of excellence in education, research and technology transfer in the field of
computer engineering, and to promote entrepreneurship and ethical values.
Course Outcomes
LIST OF EXPERIMENTS (As prescribed by G.G.S.I.P.U)
1. Write a program to implement CPU scheduling for first come first serve.
2. Write a program to implement CPU scheduling for shortest job first.
3. Write a program to perform priority scheduling.
4. Write a program to implement CPU scheduling for Round Robin.
5. Write a program for page replacement policy using a) LRU b) FIFO c) Optimal.
6. Write a program to implement first fit, best fit and worst fit algorithm for memory
management.
7. Write a program to implement reader/writer problem using semaphore.
8. Write a program to implement Banker's algorithm for deadlock avoidance.
NOTE: - At least 8 Experiments out of the list must be done in the semester.
18 | P a g e
AIT/CSE
M
LIST OF EXPERIMENTS (Beyond the syllabus)
3. i) Write a script to find the greatest of three numbers (numbers passed as command-line
parameters)
ii) Write a script to check whether the given number is even or odd
iii) Write a script to check whether the given number is prime or not
iv) Write a script to check whether the given input is a number or a string
v) Write a script to compute the number of characters and words in each line of a given file
4. i) Write a script to calculate the average of n numbers
ii) Write a script to print the Fibonacci series up to n terms
iii) Write a script to calculate the factorial of a given number
iv) Write a script to calculate the sum of digits of a given number
v) Write a script to check whether the given string is a palindrome
Index
S.No. | Aim of Experiment | Date of Conduct | Marks (R1) | Marks (R2) | Marks (R3) | Marks (R4) | Marks (R5) | Total Marks
1. | Write a program to implement CPU scheduling for first come first serve. | 25 March 2021 | | | | | |
Introduction
An operating system (OS) is the software component of a computer system that is responsible
for the management and coordination of activities and the sharing of the resources of the
computer. The OS acts as a host for application programs that are run on the machine.
As a host, one of the purposes of an OS is to handle the details of the operation of the
hardware. This relieves application programs from having to manage these details and makes
it easier to write applications. Almost all computers use an OS of some type. OSs offer a
number of services to application programs and users.
Common contemporary OSs include Microsoft Windows, Mac OS X, and Linux. Microsoft
Windows has a significant majority of market share in the desktop and notebook computer
markets, while the server and embedded device markets are split amongst several OSs.
An Operating System provides services to both the users and to the programs.
● It provides programs an environment to execute.
● It provides users the services to execute the programs in a convenient manner.
Following are a few common services provided by an operating system −
● Program execution
● I/O operations
● File System manipulation
● Communication
● Error Detection
● Resource Allocation
● Protection
Program execution
Operating systems handle many kinds of activities from user programs to system programs
like printer spooler, name servers, file server, etc. Each of these activities is encapsulated as
a process.
A process includes the complete execution context (code to execute, data to manipulate,
registers, OS resources in use). Following are the major activities of an operating system
with respect to program management −
● Loads a program into memory.
● Executes the program.
● Handles program's execution.
● Provides a mechanism for process synchronization.
● Provides a mechanism for process communication.
● Provides a mechanism for deadlock handling.
I/O Operation
An I/O subsystem comprises I/O devices and their corresponding driver software. Drivers
hide the peculiarities of specific hardware devices from the users.
An Operating System manages the communication between user and device drivers.
● I/O operation means read or write operation with any file or any specific I/O device.
● The operating system provides access to the required I/O device when needed.
Communication
In the case of distributed systems, which are collections of processors that do not share
memory, peripheral devices, or a clock, the operating system manages communication
between all the processes. Multiple processes communicate with one another through
communication lines in the network.
The OS handles routing and connection strategies, and the problems of contention and
security. Following are the major activities of an operating system with respect to
communication −
● Two processes often require data to be transferred between them
● Both the processes can be on one computer or on different computers, but are
connected through a computer network.
● Communication may be implemented by two methods, either by Shared Memory or
by Message Passing.
Error handling
Errors can occur anytime and anywhere. An error may occur in CPU, in I/O devices or in the
memory hardware. Following are the major activities of an operating system with respect to
error handling −
● The OS constantly checks for possible errors.
● The OS takes an appropriate action to ensure correct and consistent computing.
Resource Management
In the case of multi-user or multi-tasking environments, resources such as main memory, CPU
cycles and file storage are to be allocated to each user or job. Following are the major
activities of an operating system with respect to resource management −
● The OS manages all kinds of resources using schedulers.
● CPU scheduling algorithms are used for better utilization of CPU.
Protection
Considering a computer system having multiple users and concurrent execution of multiple
processes, the various processes must be protected from each other's activities.
Protection refers to a mechanism or a way to control the access of programs, processes, or
users to the resources defined by a computer system. Following are the major activities of an
operating system with respect to protection −
● The OS ensures that all access to system resources is controlled.
● The OS ensures that external I/O devices are protected from invalid access attempts.
● The OS provides authentication features for each user by means of passwords.
Linux V/S Windows
Access: In Linux, users can access the source code of the kernel and can alter it according to
need. In Windows, users usually cannot access the source code, although members of some
groups can have access to it.
Variety: Linux has several distributions that are highly customizable. Windows has fewer
options to customize.
Command-Line: In Linux, the command line, usually referred to as the Terminal, is the most
useful tool of the system; it is used for administration and daily tasks, though to end-users it
does not look so effective. Windows also has a command line, but it is not as effective as the
Linux terminal, and most users prefer the GUI options for daily tasks.
Installation: The Linux installation process is a bit complicated to set up as it requires many
user inputs, but it takes less time than Windows to install. Windows is easy to install and set
up and requires fewer user inputs during installation, but it takes more time to install as
compared to Linux.
Ease of use: Linux is meant for the technical user, because you must have some exposure to
various Linux commands; users may take more time to become handy with it, and the
troubleshooting process is also complicated as compared to Windows. Windows comes with
simple and rich GUI options, so it is easy to use for technical as well as non-technical users,
and the troubleshooting process is much easier.
Reliability: Linux is highly reliable and secure, with well-established system security, process
management, and uptime. Windows is not as reliable as Linux; its reliability has improved,
but it still has some security weaknesses and system instabilities.
Support: Linux has good support through a huge community of user forums and online
search. Windows also provides good support, both free and paid, and has an easily accessible
online forum.
Update: Linux gives its users full control over updates; a user can install an update whenever
needed, and it takes less time to install. Windows updates can arrive at any time and take too
much time to install; sometimes updates start automatically when you power on the machine,
and the user does not have much control over them.
Security: Linux is more secure than Windows; it is hard for hackers and attackers to find a
loophole in it, so Linux is hard to break into. Windows is less secure; attackers primarily
target Windows for malware and viruses, and it is most vulnerable without anti-virus
software.
Aim: Write a program to implement CPU scheduling for first come first serve.
Theory :
First in, first-out (FIFO), also known as first come, first served (FCFS), is the simplest
scheduling algorithm. FIFO simply queues processes in the order that they arrive in the ready
queue. In this, the process that comes first will be executed first and the next process starts
only after the previous one gets fully executed.
Source Code :
// FCFS scheduling
#include<iostream>
using namespace std;
// Computes and prints the average waiting and turnaround times
void findavgTime(int processes[], int n, int bt[])
{
    int wt = 0, tat = 0, elapsed = 0;
    for (int i = 0; i < n; i++)
    {
        wt += elapsed;      // process i waits for all earlier bursts
        elapsed += bt[i];
        tat += elapsed;     // turnaround = completion time (all arrive at 0)
    }
    cout << "Average waiting time = " << (float)wt / n << endl;
    cout << "Average turnaround time = " << (float)tat / n << endl;
}
// Driver code
int main()
{
    // Process Id's
    int processes[] = { 1, 2, 3};
    int n = sizeof processes / sizeof processes[0];
    int burst_time[] = {10, 5, 8};
    findavgTime(processes, n, burst_time);
    return 0;
}
Output :
Viva Voce
Q1 What is CPU utilization?
CPU utilization refers to a computer's usage of processing resources, or the amount of work
handled by a CPU. Actual CPU utilization varies depending on the amount and type of
managed computing tasks. Certain tasks require heavy CPU time, while others require less
because of non-CPU resource requirements.
Q3 Which scheduling algorithm allocates the CPU first to the process that requests the
CPU first?
First Come First Serve Scheduling allocates the CPU first to the process that requests the
CPU first.
Multiprogramming Systems
Aim: Write a program to implement CPU scheduling for the shortest job first
(Non-preemptive).
Theory :
The shortest job first (SJF), or shortest job next, is a scheduling policy that selects the waiting
process with the smallest execution time to execute next. It is a non-preemptive algorithm: once
the CPU cycle is allocated to a process, the process holds it until it reaches a waiting state or
terminates.
Source Code :
// SJF (non-preemptive) scheduling
#include<iostream>
using namespace std;
int main()
{
    int bt[20],p[20],wt[20],tat[20],i,j,n,total=0,pos,temp;
    float avg_wt,avg_tat;
    cout<<"Enter Total Number of Process:";
    cin>>n;
    cout<<"\nEnter Burst Time:\n";
    for(i=0;i<n;i++) { cin>>bt[i]; p[i]=i+1; }
    // selection sort by burst time
    for(i=0;i<n;i++)
    {
        pos=i;
        for(j=i+1;j<n;j++)
            if(bt[j]<bt[pos]) pos=j;
        temp=bt[i]; bt[i]=bt[pos]; bt[pos]=temp;
        temp=p[i]; p[i]=p[pos]; p[pos]=temp;
    }
    wt[0]=0;
    for(i=1;i<n;i++)
    {
        wt[i]=0;
        for(j=0;j<i;j++)
            wt[i]+=bt[j];
        total+=wt[i];
    }
    avg_wt=(float)total/n;
    total=0;
    for(i=0;i<n;i++) { tat[i]=bt[i]+wt[i]; total+=tat[i]; } // turnaround = burst + waiting
    avg_tat=(float)total/n;
    cout<<"\n\nAverage Waiting Time="<<avg_wt;
    cout<<"\nAverage Turnaround Time="<<avg_tat<<endl;
    return 0;
}
Output :
Experiment 2 (b) 8 April 2020
Aim : Write a program to implement CPU scheduling for the shortest job first (Preemptive).
Theory :
In Preemptive SJF Scheduling, jobs are put into the ready queue as they come. A process
with the shortest burst time begins execution. If a process with even a shorter burst time
arrives, the current process is removed or preempted from execution, and the shorter job is
allocated to the CPU cycle.
Source Code :
Output :
Viva Voce
Q5 What do you mean by average turnaround time and average waiting time?
Turnaround Time: The interval from the time of submission of a process to the time of
completion is the turnaround time. Turnaround time is the sum of the periods spent waiting to
get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.
Waiting Time: the sum of the periods spent waiting in the ready queue.
Experiment 3 (a) 15 April 2020
Aim : Write a program to perform priority scheduling (Non-preemptive).
Theory :
In non-preemptive priority scheduling, each process is assigned a priority, and the CPU is
allocated to the ready process with the highest priority (here, the smallest priority number).
Once a process starts executing, it keeps the CPU until it completes.
Source Code :
// Priority scheduling (non-preemptive): smaller number = higher priority
#include<iostream>
using namespace std;
int main()
{
    int bt[20],p[20],wt[20],tat[20],pr[20],i,j,n,total=0,pos,temp;
    float avg_wt,avg_tat;
    cout<<"Enter Total Number of Process:";
    cin>>n;
    cout<<"\nEnter Burst Time and Priority:\n";
    for(i=0;i<n;i++)
    {
        cin>>bt[i]>>pr[i];
        p[i]=i+1; // process id
    }
    //sort by priority
    for(i=0;i<n;i++)
    {
        pos=i;
        for(j=i+1;j<n;j++)
        {
            if(pr[j]<pr[pos])
                pos=j;
        }
        temp=pr[i]; pr[i]=pr[pos]; pr[pos]=temp;
        temp=bt[i]; bt[i]=bt[pos]; bt[pos]=temp;
        temp=p[i]; p[i]=p[pos]; p[pos]=temp;
    }
    wt[0]=0;
    for(i=1;i<n;i++)
    {
        wt[i]=0;
        for(j=0;j<i;j++)
            wt[i]+=bt[j];
        total+=wt[i];
    }
    avg_wt=(float)total/n;
    total=0;
    for(i=0;i<n;i++)
    {
        tat[i]=bt[i]+wt[i]; // turnaround = burst + waiting
        total+=tat[i];
    }
    avg_tat=(float)total/n;
    cout<<"\n\nAverage Waiting Time="<<avg_wt;
    cout<<"\nAverage Turnaround Time="<<avg_tat;
    return 0;
}
Output :
Experiment 3 (b) 15 April 2020
Aim : Write a program to perform priority scheduling (Preemptive).
Theory :
In Preemptive Scheduling, the tasks are mostly assigned with their priorities. Sometimes it is
important to run a task with a higher priority before another lower priority task, even if the
lower priority task is still running. The lower priority task holds for some time and resumes
when the higher priority task finishes its execution.
Source Code :
// Priority scheduling (preemptive): a larger number means a higher priority
#include<iostream>
using namespace std;
int main()
{
    int x[10],a[10],b[10],p[10];
    int waiting[10],turnaround[10],completion[10];
    int i,smallest,count=0,time,end,n;
    float avg=0,tt=0;
    cout<<"Enter the number of processes (up to 9): ";
    cin>>n;
    cout<<"Enter burst time, arrival time and priority for each process:\n";
    for(i=0;i<n;i++)
    {
        cin>>b[i]>>a[i]>>p[i];
        x[i]=b[i]; // keep a copy of the burst time
    }
    p[9]=-1; // sentinel with the lowest possible priority
    for(time=0; count!=n; time++)
    {
        smallest=9;
        for(i=0; i<n; i++)
        {
            if(a[i]<=time && p[i]>p[smallest] && b[i]>0 )
                smallest=i;
        }
        if(smallest==9) continue; // no process has arrived yet; CPU idles
        b[smallest]--;
        if(b[smallest]==0)
        {
            count++;
            end=time+1;
            completion[smallest] = end;
            waiting[smallest] = end - a[smallest] - x[smallest];
            turnaround[smallest] = end - a[smallest];
        }
    }
    cout<<"Process"<<"\t"<< "burst-time"<<"\t"<<"arrival-time" <<"\t"<<"waiting-time"
    <<"\t"<<"turnaround-time"<< "\t"<<"completion-time"<<"\t"<<"Priority"<<endl;
    for(i=0; i<n; i++)
    {
        cout<<"p"<<i+1<<"\t\t"<<x[i]<<"\t\t"<<a[i]<<"\t\t"<<waiting[i]<<"\t\t"<<turnaround[i]<<"\t\t"<<completion[i]<<"\t\t"<<p[i]<<endl;
        avg = avg + waiting[i];
        tt = tt + turnaround[i];
    }
    cout<<"\n\nAverage waiting time ="<<avg/n;
    cout<<" Average Turnaround time ="<<tt/n<<endl;
    return 0;
}
Output :
Viva Voce
Q1 In the priority scheduling algorithm, when a process arrives at the ready queue, its
priority is compared with the priority of which process?
In priority scheduling algorithm, when a process arrives at the ready queue, its priority is
compared with the priority of the currently running process.
Q2 What is starvation?
Starvation is the name given to the indefinite postponement of a process because it requires
some resource before it can run, but the resource, though available for allocation, is never
allocated to this process.
Q3 What is aging?
Aging is a technique of gradually increasing the priority of processes that wait in the system
for a long time.
Q4 How is the priority of a process decided?
The priority of a process depends on which algorithm our system is using, whether FCFS or
Shortest Job Next. Under FCFS, the process that arrived first gets higher priority than the
other processes; under SJN, the process with the shortest processing time gets the higher
priority. Deciding the priority of a process is therefore an algorithm-dependent task that the
operating system performs while assigning priorities.
Experiment 4 22 April 2020
Aim : Write a program to implement CPU scheduling for Round Robin. (Preemptive).
Theory :
Round Robin is a preemptive scheduling algorithm in which each process gets the CPU for a
fixed time slice (quantum) in cyclic order. If a process does not finish within its quantum, it is
preempted and moved to the tail of the ready queue, and the CPU is handed to the next
process in the queue.
Source Code :
// Round Robin scheduling
#include<iostream>
using namespace std;
void findWaitingTime(int processes[], int n,int bt[], int wt[], int quantum)
{
    int rem_bt[n];
    for (int i = 0 ; i < n ; i++)
        rem_bt[i] = bt[i];
    int t = 0;
    while (1)
    {
        bool done = true;
        for (int i = 0 ; i < n; i++)
        {
            if (rem_bt[i] > 0)
            {
                done = false;
                if (rem_bt[i] > quantum)
                {
                    t += quantum;
                    rem_bt[i] -= quantum;
                }
                else
                {
                    t = t + rem_bt[i];
                    wt[i] = t - bt[i];
                    rem_bt[i] = 0;
                }
            }
        }
        if (done == true)
            break;
    }
}
void findTurnAroundTime(int processes[], int n,int bt[], int wt[], int tat[])
{
    for (int i = 0; i < n ; i++)
        tat[i] = bt[i] + wt[i];
}
void findavgTime(int processes[], int n, int bt[], int quantum)
{
    int wt[n], tat[n], total_wt = 0, total_tat = 0;
    findWaitingTime(processes, n, bt, wt, quantum);
    findTurnAroundTime(processes, n, bt, wt, tat);
    cout << "Processes "<< " Burst time " << " Waiting time " << " Turn around time\n";
    for (int i = 0; i < n; i++)
    {
        total_wt += wt[i];
        total_tat += tat[i];
        cout << " " << processes[i] << "\t\t" << bt[i] << "\t " << wt[i] << "\t\t " << tat[i] << endl;
    }
    cout << "Average waiting time = " << (float)total_wt / n;
    cout << "\nAverage turn around time = " << (float)total_tat / n << endl;
}
int main()
{
    int processes[] = { 1, 2, 3};
    int n = sizeof processes / sizeof processes[0];
    int burst_time[] = {10, 5, 8};
    int quantum = 2;
    findavgTime(processes, n, burst_time, quantum);
    return 0;
}
Output :
Viva Voce
Q1 What do you mean by a time slice?
A time slice is the period of time for which a process is allowed to run uninterrupted in a
pre-emptive multitasking operating system. The scheduler runs once every time slice to
choose the next process to run. If the time slice is too short, the scheduler will consume too
much processing time, but if it is too long, processes may not be able to respond to external
events quickly enough.
Q4 What are the factors that affect the Round Robin scheduling algorithm?
The main issue with the round-robin scheme is the length of the quantum. Setting the
quantum too short causes too many context switches and lowers CPU efficiency. On the
other hand, setting the quantum too long may cause poor response time and approximates
First Come First Served (FCFS).
Pre-emptive Scheduling
Experiment 5(a) 29 April 2020
Aim : Write a program to implement the first fit for memory management.
Theory :
In this scheme, we check the blocks in a sequential manner: we pick the first process and
compare its size with the first block's size; if it is less than or equal to the size of the block,
the block is allocated, otherwise we move to the second block, and so on. When the first
process is allocated, we move on to the next process, until all processes are allocated.
Source Code :
// First - Fit
#include<iostream>
using namespace std;
int main(){
int num_blocks,num_files,blocks[10],files[10],fragment[10],filesAllocated[10];
int i=0,j=0;
for(i=0;i<10;i++)
{
fragment[i]=1;
filesAllocated[i]=0;
}
cout<<"\nEnter the No. of Blocks: ";
cin>>num_blocks;
cout<<"\nEnter the No. of Files: ";
cin>>num_files;
cout<<"\nEnter the sizes of blocks ";
for(i=0;i<num_blocks;i++)
{
cout<<"\nBlock["<<i+1<<"]: ";
cin>>blocks[i];
}
cout<<"\nEnter the sizes of files ";
for(i=0;i<num_files;i++)
{
cout<<"\nFile["<<i+1<<"]: ";
cin>>files[i];
}
for(i=0;i<num_files;i++)
{
for(j=0;j<num_blocks;j++)
{
if(filesAllocated[i]==0 && files[i]<=blocks[j])
{
blocks[j]=blocks[j]-files[i]; // allocate file i in block j
filesAllocated[i]=1;
if(blocks[j]!=0)
{
fragment[j]++;
}
break; // first fit: stop at the first block that is large enough
}
}
}
cout<<"\nBLOCKS\tREMAIN\tFRAGMENTS";
for(i=0;i<num_blocks;i++)
cout<<endl<<i+1<<"\t"<<blocks[i]<<"\t"<<fragment[i];
cout<<"\n\nFiles which are not allocated memory \n";
for(i=0;i<num_files;i++)
{
if(filesAllocated[i]==0)
{
cout<<"\nFile "<<i+1;
}
}
}
Output :
Experiment 5(b) 29 April 2020
Aim : Write a program to implement the best fit for memory management.
Theory :
Best fit allocates the smallest free partition that meets the requirement of the requesting
process. The algorithm searches the entire list of free partitions and selects the hole that is
smallest but still adequate, i.e. closest to the actual process size needed.
Source Code :
// Best - Fit
#include<iostream>
using namespace std;
int main()
{
int fragment[20],b[20],p[20],i,j,nb,np,temp,lowest=9999;
static int barray[20],parray[20];
cout<<"\nEnter the number of blocks:";
cin>>nb;
cout<<"Enter the number of files:";
cin>>np;
cout<<"\nEnter the size of the blocks:-\n";
for(i=1;i<=nb;i++)
{
cout<<"Block no."<<i<<":";
cin>>b[i];
}
cout<<"\nEnter the size of the files :-\n";
for(i=1;i<=np;i++)
{
cout<<"File no. "<<i<<":";
cin>>p[i];
}
for(i=1;i<=np;i++)
{
for(j=1;j<=nb;j++)
{
if(barray[j]!=1)
{
temp=b[j]-p[i];
if(temp>=0)
if(lowest>temp)
{
parray[i]=j;
lowest=temp;
}
}
}
fragment[i]=lowest;
barray[parray[i]]=1;
lowest=9999; // reset for the next file
}
cout<<"\nFile_no\tFile_size\tBlock_no\tBlock_size\tFragment";
for(i=1;i<=np && parray[i]!=0;i++)
cout<<"\n"<<i<<"\t\t"<<p[i]<<"\t\t"<<parray[i]<<"\t\t"<<b[parray[i]]<<"\t\t"<<fragment[i];
return 0;
}
Output :
Experiment 5(c) 29 April 2020
Aim : Write a program to implement the worst fit for memory management.
Theory :
Worst fit allocates a process to the largest sufficient partition among the free partitions
available in main memory. Contrary to best fit, this strategy can make sense because the
leftover hole is large and therefore more likely to be usable by another process, which tends
to minimize the effects of external fragmentation. However, if a very large process arrives at
a later stage, memory may no longer have a hole big enough to accommodate it.
Source Code :
// Worst - Fit
#include <iostream>
using namespace std;
int main()
{
    int nBlocks,nProcess,blockSize[20],processSize[20];
    cout<<"Enter the number of blocks and processes: ";
    cin>>nBlocks>>nProcess;
    cout<<"Enter the block sizes: ";
    for(int i=0;i<nBlocks;i++)
        cin>>blockSize[i];
    cout<<"Enter the process sizes: ";
    for(int i=0;i<nProcess;i++)
        cin>>processSize[i];
    for(int i=0;i<nProcess;i++)
    {
        // find the largest remaining block
        int max = blockSize[0];
        int pos = 0;
        for(int j=0;j<nBlocks;j++)
            if(blockSize[j]>max) { max=blockSize[j]; pos=j; }
        if(processSize[i]<=max)
        {
            blockSize[pos]-=processSize[i]; // worst fit: place in the largest block
            cout<<"Process "<<i+1<<" -> Block "<<pos+1<<endl;
        }
        else
            cout<<"Process "<<i+1<<" not allocated"<<endl;
    }
    return 0;
}
Output :
Viva Voce
Q2 Describe the first-fit, best-fit and worst-fit strategies for disk space allocation, with
their merits and demerits.
Best fit is a slow process: checking the whole memory for each job makes the working of the
operating system very slow, and it takes a lot of time to complete the work.
In the worst-fit technique, the process traverses the whole memory, always searches for the
largest hole/partition, and is then placed in that hole/partition. It is a slow process because it
has to traverse the entire memory to search for the largest hole. Since this technique chooses
the largest hole/partition, there will be large internal fragmentation; that leftover space,
however, is big enough that other small processes can also be placed in the partition.
Common drawbacks of these contiguous allocation strategies are:
1) Internal fragmentation
2) External fragmentation
3) Limitation on the size of the process
4) Degree of multiprogramming is less
Experiment 6 (a) 20 May 2021
Aim : Write a program for page replacement using LRU.
Theory :
The Least Recently Used (LRU) algorithm is a greedy algorithm in which the page to be
replaced is the one least recently used. The idea is based on locality of reference: the least
recently used page is the one least likely to be needed in the near future.
Source Code :
// LRU page replacement
#include <iostream>
using namespace std;
// index of the least recently used frame (smallest timestamp)
int min(int counter[],int nFrames)
{
    int minimum = counter[0];
    int pos = 0;
    for(int i=0;i<nFrames;i++) {
        if(minimum > counter[i])
        {
            minimum = counter[i];
            pos = i;
        }
    }
    return pos;
}
int main()
{
    int pageString[50],n,frames[10],counter[10],recent = 0;
    int pageFault = 0,nFrames;
    cout<<"Enter the number of frames and pages: ";
    cin>>nFrames>>n;
    cout<<"Enter the page reference string: ";
    for(int i=0;i<n;i++) cin>>pageString[i];
    for(int i=0;i<nFrames;i++) { frames[i]=-1; counter[i]=0; }
    for(int i=0;i<n;i++)
    {
        bool hit = false;
        for(int j=0;j<nFrames;j++)
            if(frames[j]==pageString[i]) { counter[j]=++recent; hit=true; break; }
        if(!hit) // page fault: replace the least recently used frame
        {
            int pos = min(counter,nFrames);
            frames[pos]=pageString[i];
            counter[pos]=++recent;
            pageFault++;
        }
    }
    cout<<"\nPage Fault: "<<pageFault<<endl;
    return 0;
}
Output :
Experiment 6(b) 20 May 2021
Aim : Write a program for page replacement using FIFO.
Theory :
This is the simplest page replacement algorithm. The operating system keeps track of all
pages in memory in a queue, with the oldest page at the front. When a page needs to be
replaced, the page at the front of the queue is selected for removal.
Source Code :
// FIFO
#include <iostream>
using namespace std;
int getReplaceposition(int counter[],int n)
{
int max = counter[0];
int pos=0;
for(int i=0;i<n;i++) {
if(counter[i]>max)
{
pos=i;
max = counter[i];
}
}
return pos;
}
int main()
{
int pages[20],frames[10],counter[10];
int nPages,nFrames,pageFault=0;
cout<<"Enter the number of pages(MAX 20): ";
cin>>nPages;
cout<<"Enter the page reference string: ";
for(int i=0;i<nPages;i++)
cin>>pages[i];
cout<<"Enter the number of frames(MAX 10): ";
cin>>nFrames;
for(int i=0;i<nFrames;i++)
{
frames[i] = 0;
counter[i] = 0;
}
for(int i=0;i<nPages;i++)
{
int flag =0;
for(int j=0;j<nFrames;j++)
{
if(frames[j] == pages[i])
{
flag=1;
break;
}
}
if(flag == 0)
{
pageFault++;
for(int j=0;j<nFrames;j++)
{
if(frames[j] == 0)
{
frames[j] = pages[i];
flag=1;
counter[j]++;
break;
}
}
}
if(flag == 0)
{
int pos = getReplaceposition(counter,nFrames);
frames[pos] = pages[i];
counter[pos] = 1;
for(int k=0;k<nFrames;k++)
{
if(k!=pos)
counter[k]++;
}
}
cout<<endl;
for(int j=0;j<nFrames;j++)
{
cout<<frames[j]<<" ";
}
}
cout<<"\nPage Fault: "<<pageFault;
return 0;
}
Output :
Experiment 6 (c) 20 May 2021
Aim : Write a program for page replacement using the Optimal policy.
Theory :
In this algorithm, the OS replaces the page that will not be used for the longest period of time
in the future.
Source Code :
// Optimal
#include<iostream>
using namespace std;
int main()
{
int nop,nof,page[20],i,count=0;
cout<<"\nEnter the No. of Pages:";
cin>>nop;
cout<<"Enter the page reference string:";
for(i=0;i<nop;i++)
cin>>page[i];
cout<<"Enter the No. of Frames:";
cin>>nof;
int frame[nof],fcount[nof];
for(i=0;i<nof;i++)
{
frame[i]=-1;
fcount[i]=0;
}
i=0;
while(i<nop)
{
int j=0,flag=0;
while(j<nof)
{
if(page[i]==frame[j]){
flag=1;
}
j++;
}
j=0;
cout<<"\n";
cout<<"\t"<<page[i]<<"-->";
if(flag==0)
{
if(i>=nof)
{
int max=0,k=0;
while(k<nof)
{
int dist=0,j1=i+1;
while(j1<nop)
{
if(frame[k]!=page[j1])
dist++;
else
{
break;
}
j1++;
}
fcount[k]=dist;
k++;
}
k=0;
while(k<nof-1)
{
if(fcount[max]<fcount[k+1])
max=k+1;
k++;
}
frame[max]=page[i];
}
else {
frame[i%nof]=page[i];
}
count++;
while(j<nof)
{
cout<<"\t|"<<frame[j]<<"|";
j++;
}
}
i++;
}
cout<<"\n";
cout<<"\n\tPage Fault is:"<<count;
return 0;
}
Output :
Viva Questions
1. Define demand paging, page fault interrupt and thrashing.
● Demand Paging: Demand paging is the paging policy under which a page is not read into
memory until it is requested, that is, until there is a page fault on the page.
● Page fault interrupt: A page fault interrupt occurs when a memory reference is made to a
page that is not in memory. The present bit in the page table entry will be found to be off
by the virtual memory hardware, and it will signal an interrupt.
● Thrashing: The problem of many page faults occurring in a short time is called page
thrashing.
2. Under what circumstances do page faults occur? Describe the actions taken by the
operating system when a page fault occurs?
A page fault occurs when access to a page that has not been brought into the main
memory takes place. The operating system verifies the memory access, aborting the
program if it is invalid. If it is valid, a free frame is located and I/O is requested to read
the needed page into the free frame. Upon completion of I/O, the process table and page
table are updated and the instruction is restarted.
3. What is the cause of thrashing? How does the system detect thrashing? Once it
detects thrashing, what can the system do to eliminate this problem?
Thrashing is caused by a process not having enough frames for its working set, so it spends
more time paging than executing. The system can detect thrashing by monitoring CPU
utilization against the degree of multiprogramming: if utilization drops while the degree of
multiprogramming rises, thrashing is occurring, and it can be eliminated by reducing the
degree of multiprogramming.
4. What is Belady's anomaly?
Belady's anomaly is the name given to the phenomenon where increasing the number of
page frames results in an increase in the number of page faults for a given memory access
pattern.
5. What is a page and what is a frame? How are the two related?
The page frame is the storage unit (typically 4KB in size), whereas the page is the contents
that you would store in the storage unit, i.e. the page frame. For example, RAM is divided
into fixed-size blocks called page frames, typically 4KB in size, and each page frame can
store 4KB of data, i.e. the page.
Experiment 7 27 May 2021
Aim : Write a program to implement the reader/writer problem using semaphores.
Language Used: C
Theory :
Consider a file shared among a number of processes.
● If one of the processes tries editing the file, no other process should be reading or
writing at the same time; otherwise, the changes will not be visible to the others.
● However, if some process is reading the file, then others may read it at the same time.
In OS terms, this situation is precisely called the readers-writers problem.
Writer process:
1. Writer requests the entry to the critical section.
2. If allowed i.e. wait() gives a true value, it enters and performs the write. If not
allowed, it keeps on waiting.
3. It exits the critical section.
Reader process:
1. Reader requests the entry to the critical section.
2. If allowed:
● it increments the count of the number of readers inside the critical section. If
this reader is the first one entering, it locks the wrt semaphore to
restrict the entry of writers while any reader is inside.
● It then signals mutex, as any other reader is allowed to enter while others
are already reading.
● After performing the read, it exits the critical section. When exiting, it
checks whether any reader is still inside; if not, it signals the semaphore wrt so
that a writer can now enter the critical section.
3. If not allowed, it keeps on waiting.
Thus, the semaphore 'wrt' is queued on both readers and writers in such a manner that
preference is given to readers if writers are also waiting. Thus, no reader waits simply
because a writer has requested to enter the critical section.
Source Code :
#include<stdio.h>
#include<pthread.h>
#include<semaphore.h>
sem_t mutex,writeblock;
int data = 0,rcount = 0;
void *reader(void *arg)
{
    int f = (int)(long)arg;
    sem_wait(&mutex);
    rcount++;
    if(rcount==1)
        sem_wait(&writeblock); /* first reader blocks writers */
    sem_post(&mutex);
    printf("data read by reader %d is %d\n",f,data);
    sem_wait(&mutex);
    rcount--;
    if(rcount==0)
        sem_post(&writeblock); /* last reader lets writers in */
    sem_post(&mutex);
    return NULL;
}
void *writer(void *arg)
{
    int f = (int)(long)arg;
    sem_wait(&writeblock); /* exclusive access for the writer */
    data++;
    printf("data written by writer %d is %d\n",f,data);
    sem_post(&writeblock);
    return NULL;
}
int main()
{
    int i;
    pthread_t rtid[5],wtid[5];
    sem_init(&mutex,0,1);
    sem_init(&writeblock,0,1);
    for(i=0;i<=2;i++)
    {
        pthread_create(&wtid[i],NULL,writer,(void *)(long)i);
        pthread_create(&rtid[i],NULL,reader,(void *)(long)i);
    }
    for(i=0;i<=2;i++)
    {
        pthread_join(wtid[i],NULL);
        pthread_join(rtid[i],NULL);
    }
    return 0;
}
Output :
Viva - Questions
1. Define the critical section problem and explain the necessary characteristics of a
correct solution.
The critical section is a code segment where shared variables can be accessed. An
atomic action is required in a critical section, i.e. only one process can execute in its
critical section at a time; all the other processes have to wait to execute in their critical
sections.
What is a race condition?
A race condition occurs when two threads access a shared variable at the same time and at
least one of them updates it. For example, both threads may read the same value from the
variable and each write back a modified result, so one of the updates is lost; the final value
depends on the order of execution.
4. Define monitor.
A monitor is a high-level synchronization construct that encapsulates shared data together
with the procedures that operate on it, and ensures that only one process can be active inside
the monitor at a time.
Experiment 8
Aim : Write a program to implement Banker's algorithm for deadlock avoidance.
Theory :
In simple terms, the Banker's algorithm checks whether allocating a resource to a process can
lead to deadlock, i.e. whether it is safe to grant the request; if not, the resource is not allocated
to that process. Determining a safe sequence (even if there is only one) assures that the system
will not go into deadlock.
The Banker's algorithm is generally used to find whether a safe sequence exists or not; here
we determine a safe sequence and print it.
Source Code :
// Banker's algorithm for deadlock avoidance
#include <iostream>
using namespace std;
int main()
{
    // P0, P1, P2, P3, P4 are the Process names here
    int n, m, i, j, k;
    n = 5; // Number of processes
    m = 3; // Number of resources
    int alloc[5][3] = { { 0, 1, 0 }, // P0 // Allocation Matrix
                        { 2, 0, 0 }, // P1
                        { 3, 0, 2 }, // P2
                        { 2, 1, 1 }, // P3
                        { 0, 0, 2 } }; // P4
    int max[5][3] = { { 7, 5, 3 }, // P0 // MAX Matrix
                      { 3, 2, 2 }, // P1
                      { 9, 0, 2 }, // P2
                      { 2, 2, 2 }, // P3
                      { 4, 3, 3 } }; // P4
    int avail[3] = { 3, 3, 2 }; // Available resources (standard example values)
    int f[5], ans[5], ind = 0, y;
    for (k = 0; k < n; k++)
        f[k] = 0; // 0 = process not yet finished
    int need[5][3]; // Need = Max - Allocation
    for (i = 0; i < n; i++)
        for (j = 0; j < m; j++)
            need[i][j] = max[i][j] - alloc[i][j];
    for (k = 0; k < n; k++) { // at most n passes over the processes
        for (i = 0; i < n; i++) {
            if (f[i] == 0) {
                int flag = 0;
                for (j = 0; j < m; j++) {
                    if (need[i][j] > avail[j]){
                        flag = 1;
                        break;
                    }}
                if (flag == 0) { // process i can finish; release its resources
                    ans[ind++] = i;
                    for (y = 0; y < m; y++)
                        avail[y] += alloc[i][y];
                    f[i] = 1;
                }
            }
        }
    }
    cout << "Following is the SAFE Sequence" << endl;
    for (i = 0; i < n - 1; i++)
        cout << " P" << ans[i] << " ->";
    cout << " P" << ans[n - 1] << endl;
    return (0);
}
Output :
Viva - Questions
Q1 What is deadlock and what are the necessary conditions for it?
Deadlock in an OS is a situation where two or more processes are blocked, each waiting for
a resource held by another. Conditions for deadlock: mutual exclusion, hold and wait, no
preemption, and circular wait. These 4 conditions must hold simultaneously for the
occurrence of deadlock.
Q2 On what factors does the decision of when to invoke the deadlock-detection
algorithm depend?
One is how often a deadlock is likely to occur under the implementation of this algorithm.
The other is how many processes will be affected by deadlock when this algorithm is
applied.
Q3 What is a resource allocation graph?
The resource allocation graph is a pictorial representation of the state of a system. As its
name suggests, it carries complete information about all the processes that are holding some
resources or waiting for some resources. Vertices are mainly of two types: resource and
process.