B.Tech CSBS,
Submitted by
………………………………………...
April 2021.
Table of Contents
Introduction
Review of Literature
    Process Management
    I/O Management
    File Management
    Memory Management
Implementation
    Process Management: HRRN Algorithm
    I/O Management: C-LOOK Algorithm
    Memory Management: Buddy Allocation Algorithm
    File Management: DES Algorithm
Results
    Code
    Outputs
Conclusion
References
Introduction
Ubuntu is a Linux-based operating system designed for computers, smartphones, and network
servers. It is developed by a UK-based company called Canonical Ltd. All the principles used to
develop the Ubuntu software are based on the principles of open-source software development.
A default installation of Ubuntu contains a wide range of software that includes LibreOffice,
Firefox, Thunderbird, Transmission, and several lightweight games such as Sudoku and
chess. Many additional software packages are accessible from the built-in Ubuntu Software
(previously Ubuntu Software Centre) as well as from any other APT-based package management tool.
Many packages that are no longer installed by default, such as Evolution,
GIMP, Pidgin, and Synaptic, are still accessible in the repositories and installable by the main tool
or by any other APT-based package management tool. Cross-distribution Snap packages and
Flatpaks are also available; both allow installing software, such as some of Microsoft's software,
in most major Linux operating systems (such as any currently supported Ubuntu version and
Fedora). The default file manager is GNOME Files, formerly called Nautilus.
All of the application software installed by default is free software. In addition, Ubuntu
redistributes some hardware drivers that are available only in binary format, but such packages are
clearly marked in the restricted component.
Review of Literature
Process Management
When you execute a program on your Unix system, the system creates a special environment for
that program. This environment contains everything needed for the system to run the program as
if no other program were running on the system. Whenever you issue a command in Unix, it
creates, or starts, a new process. A process, in simple terms, is an instance of a running program.
The operating system tracks processes through a five-digit ID number known as the PID, or
process ID. Each process in the system has a unique PID.
PIDs eventually repeat because all the possible numbers are used up, at which point the next PID
rolls over, or starts over. At any point in time, no two processes with the same PID exist in the
system, because it is the PID that Unix uses to track each process.
A process means a program in execution. It generally takes an input, processes it, and gives us the
appropriate output. There are basically two types of processes.
1. Foreground processes: These are also known as interactive processes. They are executed or
initiated by the user or the programmer; they cannot be initiated by system services. Such
processes take input from the user and return the output. While these processes are running, we
cannot directly initiate a new process from the same terminal.
2. Background processes: These are also known as non-interactive processes. They are executed
or initiated by the system itself or by users, though they can also be managed by users. These
processes have a unique PID, or process ID, assigned to them, and we can initiate other processes
within the same terminal from which they are initiated.
Performance of HRRN:
1. Shorter processes are favoured.
2. Aging without service increases the response ratio, so longer jobs can eventually get past shorter jobs.
Gantt Chart
Explanation
· Next, at t = 13, we have two jobs available: D and E.
· The response ratios of D and E are 2.4 and 3.5 respectively.
· So, process E is selected next, and process D is selected last.
• Mutex
1. A mutex is a lock that we set before using a shared resource and release after using it.
2. When the lock is set, no other thread can access the locked region of code.
3. So, even if thread 2 is scheduled while thread 1 is not done accessing the shared
resource, and the code is locked by thread 1 using a mutex, then thread 2 cannot
access that region of code.
4. This ensures synchronized access to shared resources in the code.
• Working of a mutex
1. Suppose one thread has locked a region of code using a mutex and is executing that piece
of code.
2. Now, if the scheduler decides to do a context switch, all the other threads that are
ready to execute the same region are unblocked.
3. Only one of those threads makes it to execution, but if this thread tries to
execute the region of code that is already locked, it will again go to sleep.
4. Context switches will take place again and again, but no thread will be able to execute the
locked region of code until the mutex lock over it is released.
5. The mutex lock will only be released by the thread that locked it.
6. So, this ensures that once a thread has locked a piece of code, no other thread can
execute the same region until it is unlocked by the thread that locked it.
A visual representation of the mutex lock working with the threads can be seen below (figure omitted).
I/O Management
C-LOOK: Just as LOOK is similar to the SCAN algorithm, C-LOOK is similar to the C-SCAN
disk scheduling algorithm. In C-LOOK, instead of going all the way to the end of the disk, the
disk arm goes only to the last request to be serviced in front of the head, and then from there
jumps to the last request at the other end. Thus, it also prevents the extra delay caused by
unnecessary traversal to the end of the disk.
File Management
A multiuser system needs a way to let different users have different files with the same name. It
also needs a way to keep files in logical groups. With thousands of system files and hundreds of
files per user, it would be disastrous to have all of the files in one big heap. Even single-user
operating systems have found it necessary to go beyond "flat" file system structures.
To locate any file, we can give a sequence of names, starting from the file system's root, that shows
its exact position in the filesystem: we start with the root and then list the directories we go through
to find the file, separating them by slashes. This is called a path.
BTRFS is a Linux filesystem that has been adopted as the default filesystem in some popular
versions of Linux. It is based on copy-on-write, allowing for efficient snapshots and clones. It uses
B-trees as its main on-disk data structure. The design goal is to work well for many use cases and
workloads. To this end, much effort has been directed to maintaining even performance as the
filesystem ages, rather than trying to support a particular narrow benchmark use-case.
Linux filesystems are installed on smartphones as well as enterprise servers. This entails
challenges on many different fronts.
• Scalability: The filesystem must scale in many dimensions: disk space, memory, and CPUs.
• Data integrity: Losing data is not an option, and much effort is expended to safeguard the
content. This includes checksums, metadata duplication, and RAID support built into the
filesystem.
• Disk diversity: The system should work well with SSDs and hard disks. It is also expected to be
able to use an array of different-sized disks, which poses challenges to the RAID and striping
mechanisms.
At the encryption site, DES takes a 64-bit plaintext and creates a 64-bit ciphertext; at the
decryption site, DES takes a 64-bit ciphertext and creates a 64-bit block of plaintext. The same
56-bit cipher key is used for both encryption and decryption. The encryption process is made of
two permutations (P-boxes), which we call initial and final permutations, and sixteen Feistel
rounds. Each round uses a different 48-bit round key generated from the cipher key according to
a predefined algorithm.
Memory Management:
• Virtual Memory and Demand Paging:
Linux supports virtual memory, that is, using a disk as an extension of RAM so that the effective
size of usable memory grows correspondingly. The kernel will write the contents of a currently
unused block of memory to the hard disk so that the memory can be used for another purpose.
When the original contents are needed again, they are read back into memory. This is all made
completely transparent to the user; programs running under Linux only see the larger amount of
memory available and don't notice that parts of them reside on the disk from time to time. Of
course, reading and writing the hard disk is slower (on the order of a thousand times slower) than
using real memory, so the programs don't run as fast. The part of the hard disk that is used as
virtual memory is called the swap space. Linux can use either a normal file in the filesystem or a
separate partition for swap space.
Virtual memory makes your system appear as if it has more memory than it actually has. This may
sound interesting and may prompt one to ask how this is possible. So, let's understand the concept.
▪ To start, we must first understand that virtual memory is a layer of memory addresses
that map to physical addresses.
▪ In the virtual memory model, when a processor executes a program instruction, it reads
the instruction from virtual memory and executes it.
▪ But before executing the instruction, it first converts the virtual memory address into
a physical address.
▪ This conversion is done based on the mapping information contained in the page tables
(which are maintained by the OS).
Whenever the processor encounters a virtual address, it extracts the virtual page frame number out
of it. It then translates this virtual page frame number into a physical page frame number, and the
offset part helps it go to the exact address within the physical page. This translation of addresses is
done through the page tables, which conceptually map each virtual page frame number to a
physical page frame number, along with status bits such as valid and accessed.
Demand Paging:
According to the concept of virtual memory, in order to execute some process, only a part of the
process needs to be present in the main memory, which means that only a few pages will be
present in the main memory at any time. However, deciding which pages need to be kept in the
main memory and which need to be kept in the secondary memory is difficult, because we cannot
say in advance whether a process will require a particular page at a particular time. Therefore, to
overcome this problem, a concept called Demand Paging is introduced. It suggests keeping all
pages in the secondary memory until they are required. In other words, do not load any page into
the main memory until it is required. Whenever a page is referred to for the first time, it will be
found in the secondary memory and brought into the main memory. After that, it may or may not
be present in the main memory, depending upon the page replacement algorithm.
In the buddy system, when a request cannot be satisfied directly, a larger free block is split into
two halves that are of equal size and called buddies. In the same manner, one of the two buddies is
further divided into smaller parts until the request is fulfilled. The benefit of this technique is that
two buddies can later combine to form a block of larger size according to the memory request.
A second strategy for allocating kernel memory is known as slab allocation. It eliminates
fragmentation caused by allocations and deallocations. This method is used to retain allocated
memory that contains a data object of a certain type for reuse upon subsequent allocations of
objects of the same type. In slab allocation, memory chunks suitable to fit data objects of a certain
type or size are pre-allocated. The cache does not free the space immediately after use; instead, it
keeps track of data that are required frequently, so that whenever a request is made, the data can
be served very fast. Two terms are required:
Slab – A slab is made up of one or more physically contiguous pages. The slab is the actual
container of the data associated with objects of the specific kind of the containing cache.
Cache – A cache represents a small amount of very fast memory. A cache consists of one or more
slabs. There is a single cache for each unique kernel data structure.
Implementation
I/O Management C-LOOK Algorithm
1. Let the Request array represent an array storing the indexes of the tracks that have been
requested, in ascending order of their time of arrival, and let head be the position of the disk head.
2. The initial direction in which the head is moving is given, and it services requests in that same
direction.
3. The head services all the requests one by one in the direction it is moving.
4. The head continues to move in the same direction until all the requests in this direction have
been serviced.
5. While moving in this direction, calculate the absolute distance of each track from the head.
6. The currently serviced track position now becomes the new head position.
7. If we reach the last request in the current direction, then reverse the direction and move the
head until we reach the last request that needs to be serviced in that direction, without servicing
the intermediate requests.
8. Reverse the direction again and go to step 3, until all the requests have been serviced.
Memory Management Buddy Allocation Algorithm
The buddy system is a memory allocation and management algorithm that manages memory in
power-of-two increments. Assume the memory size is 2^U and a block of size S is required.
File Management DES Algorithm
1. The process begins with the 64-bit plain text block getting handed over to an initial
permutation (IP) function.
2. The initial permutation (IP) is then performed on the plain text.
3. Next, the initial permutation (IP) creates two halves of the permuted block, referred to
as Left Plain Text (LPT) and Right Plain Text (RPT).
4. Each LPT and RPT goes through 16 rounds of the encryption process.
5. Finally, the LPT and RPT are re-joined, and a Final Permutation (FP) is performed on
the newly combined block.
The encryption process step (step 4, above) is further broken down into five stages:
1. Key transformation
2. Expansion permutation
3. S-Box permutation
4. P-Box permutation
5. XOR and swap
For decryption, we use the same algorithm, and we reverse the order of the 16 round keys.
Results
Code:
#include<iostream>
#include<algorithm>
#include <bits/stdc++.h>
using namespace std;
//-------------------------------------------------------Process Management------------------------------------------------------//
struct process{
char process_id[50];
int burst_time;
int arrival_time;
int wait_time;
float response_ratio=0;
}a[50];
void read(int n)
{
int i;
for(i=0;i<n;i++)
{
cout<<"Enter the Process Name, Arrival Time and Burst Time for Process "<<i+1<<": ";
cin>>a[i].process_id;
cin>>a[i].arrival_time;
cin>>a[i].burst_time;
a[i].response_ratio=0;
a[i].wait_time=-a[i].arrival_time;
}
}
bool btimeSort(process a,process b)
{
return a.burst_time<b.burst_time;
}
bool atimeSort(process a,process b)
{
return a.arrival_time<b.arrival_time;
}
bool rrtimeSort(process a,process b)
{
return a.response_ratio>b.response_ratio;
}
void display(int n)
{
sort(a,a+n,btimeSort);
sort(a,a+n,atimeSort);
int ttime=0,i;
int j,completion_time[n];
for(i=0;i<n;i++)
{
j=i;
while(j!=n&&a[j].arrival_time<=ttime)
{
j++;
}
for(int q = i;q<j;q++)
{
a[q].wait_time=ttime-a[q].arrival_time;
a[q].response_ratio=(float)(a[q].wait_time+a[q].burst_time)/(float)a[q].burst_time;
}
sort(a+i,a+j,rrtimeSort);
completion_time[i]=ttime;
cout<<endl;
ttime+=a[i].burst_time;
}
completion_time[i] = ttime;
float averageWaitingTime=0;
float averageResponseTime=0;
float averageTAT=0;
cout<<"\n";
cout<<"P.Name AT\tBT\tCT\tTAT\tWT\tRT\n";
//---------------------------------------------------------I/O Management---------------------------------------------------------//
if (arr[i]>head)
{
right.push_back(arr[i]);
}
std::sort(left.begin(), left.end());
std::sort(right.begin(), right.end());
//-----------------------------------------------------Memory Management-----------------------------------------------------//
for(int i = 0; i <= n; i++)
{
free_list[i].clear();
}
free_list[n].push_back(make_pair(0, max_size-1));
}
void allocate(int sz)
{
// Smallest power-of-two block index that fits the request
int n = ceil(log(sz) / log(2));
if (free_list[n].size()>0)
{
pair<int, int> temp= free_list[n][0];
free_list[n].erase(free_list[n].begin());
cout<<"Memory from "<<temp.first<<" to "<<temp.second<<" allocated"<< "\n";
mp[temp.first] = temp.second-temp.first+1;
}
else
{
int i;
for(i= n+1; i<size; i++)
{
if(free_list[i].size() != 0)
break;
}
if(i==size)
{
cout<<"Sorry, failed to allocate memory \n";
}
else
{
pair<int, int> temp;
temp = free_list[i][0];
free_list[i].erase(free_list[i].begin());
i--;
// Split the block into two equal buddies
pair<int, int> pair1, pair2;
pair1 = make_pair(temp.first, temp.first + (temp.second - temp.first) / 2);
pair2 = make_pair(temp.first + (temp.second - temp.first) / 2 + 1, temp.second);
free_list[i].push_back(pair1);
free_list[i].push_back(pair2);
temp = free_list[i][0];
free_list[i].erase(free_list[i].begin());
}
cout << "Memory from " << temp.first<< " to " << temp.second<< " allocated" << "\n";
mp[temp.first] = temp.second - temp.first + 1;
}
}
void deallocate(int id)
{
if(mp.find(id)==mp.end())
{
cout<<"Sorry, invalid free request\n";
return;
}
int n=ceil(log(mp[id])/log(2));
int i, buddyNumber, buddyAddress;
buddyNumber=id/mp[id];
if (buddyNumber % 2 != 0)
{
buddyAddress = id - pow(2, n);
}
else
{
buddyAddress = id + pow(2, n);
}
for(i=0; i<free_list[n].size(); i++)
{
if(free_list[n][i].first==buddyAddress)
{
if(buddyNumber%2==0)
{
free_list[n + 1].push_back(make_pair(id, id + 2 * pow(2, n) - 1));
cout<<"Coalescing of blocks starting at "<<id<<" and "<<buddyAddress<<" was done"<<"\n";
}
else
{
free_list[n + 1].push_back(make_pair(buddyAddress, buddyAddress + 2 * pow(2, n) - 1));
cout<<"Coalescing of blocks starting at "<<buddyAddress<<" and "<<id<<" was done"<<"\n";
}
free_list[n].erase(free_list[n].begin() + i);
free_list[n].erase(free_list[n].begin() +
free_list[n].size() - 1);
break;
}
}
mp.erase(id);
}
//---------------------------------------------------------File Management---------------------------------------------------------//
string hexatobinary(string s)
{
// Map each hexadecimal digit to its 4-bit binary representation
unordered_map<char, string> mp = {
{'0',"0000"},{'1',"0001"},{'2',"0010"},{'3',"0011"},
{'4',"0100"},{'5',"0101"},{'6',"0110"},{'7',"0111"},
{'8',"1000"},{'9',"1001"},{'A',"1010"},{'B',"1011"},
{'C',"1100"},{'D',"1101"},{'E',"1110"},{'F',"1111"}};
string bin = "";
for (int i = 0; i < s.size(); i++)
{
bin += mp[s[i]];
}
return bin;
}
string binarytohexa(string s)
{
// Inverse of hexatobinary: convert each 4-bit group back to a hex digit
unordered_map<string, char> mp = {
{"0000",'0'},{"0001",'1'},{"0010",'2'},{"0011",'3'},
{"0100",'4'},{"0101",'5'},{"0110",'6'},{"0111",'7'},
{"1000",'8'},{"1001",'9'},{"1010",'A'},{"1011",'B'},
{"1100",'C'},{"1101",'D'},{"1110",'E'},{"1111",'F'}};
string hexa = "";
for (int i = 0; i < s.size(); i += 4)
{
hexa += mp[s.substr(i, 4)];
}
return hexa;
}
// Circular left shift of a 28-bit key half by the given number of positions
string left_shift(string k, int shifts)
{
string s = "";
for (int i=0; i<shifts; i++)
{
for (int j=1; j<28; j++)
{
s+=k[j];
}
s+= k[0];
k= s;
s= "";
}
return k;
}
plaintext= hexatobinary(plaintext);
string left= plaintext.substr(0, 32);
string right= plaintext.substr(32, 32);
cout<<"After splitting: L0="<<binarytohexa(left)<<" R0="<<binarytohexa(right)<<endl;
// DES straight permutation (P-box) table
int per[32] = { 16, 7, 20, 21,
29, 12, 28, 17,
1, 15, 23, 26,
5, 18, 31, 10,
2, 8, 24, 14,
32, 27, 3, 9,
19, 13, 30, 6,
22, 11, 4, 25 };
left= x;
if(i!= 15)
{
swap(left, right);
}
cout<<"Round "<<i + 1<<" "<< binarytohexa(left) <<" "<<binarytohexa(right)<<" "<<roundkeys[i]<<endl;
}
// DES final permutation (FP) table
int final_perm[64] = { 40, 8, 48, 16, 56, 24, 64, 32,
39, 7, 47, 15, 55, 23, 63, 31,
38, 6, 46, 14, 54, 22, 62, 30,
37, 5, 45, 13, 53, 21, 61, 29,
36, 4, 44, 12, 52, 20, 60, 28,
35, 3, 43, 11, 51, 19, 59, 27,
34, 2, 42, 10, 50, 18, 58, 26,
33, 1, 41, 9, 49, 17, 57, 25 };
int main()
{
int c;
do
{
cout<<"\n1. Process Management\n2. I/O Management\n3. Memory Management\n4. File Management\n5. Exit\n=> ";
cin>>c;
switch(c)
{
case 1:
{
int nop,choice,i;
cout<<"Enter number of processes: ";
cin>>nop;
read(nop);
display(nop);
break;
}
case 2:
{
int arr[100], head, size;
}
case 3:
{
int total,c,req;
char ch='Y';
cout<<"Enter Total Memory Size (in Bytes) => ";
cin>>total;
initialize(total);
label:
do
{
cout<<"\n1. Add Process into Memory\n2. Remove Process \n3. Exit\n=> ";
cin>>c;
switch(c)
{
case 1:
{
cout<<"Enter Process Size (in Bytes) => ";
cin>>req;
cout<<"\n===>";
if(req >= 0)
{
allocate(req);
}
else
{
cout<<"Enter positive value for allocation"<<endl;
}
break;
}
case 2:
{
cout<<"Enter Starting Address => ";
cin>>req;
cout<<"\n===>";
deallocate(req);
break;
}
case 3:
{
cout<<"Do you wish to continue? Press N to exit";
cin>>ch;
if(ch=='N')
{
break;
}
}
}
}while(ch!='N');
break;
}
case 4:
{
string plaintext, key;
cout<<"Enter plaintext(in hexadecimal): ";
cin>>plaintext;
cout<<"Enter key(in hexadecimal): ";
cin>>key;
key = hexatobinary(key);
int shift_table[16] = { 1, 1, 2, 2,
2, 2, 2, 2,
1, 2, 2, 2,
2, 2, 2, 1 };
vector<string> RoundKey_binary;
vector<string> roundkeys;
for (int i = 0; i < 16; i++)
{
left = left_shift(left, shift_table[i]);
right = left_shift(right, shift_table[i]);
RoundKey_binary.push_back(RoundKey);
roundkeys.push_back(binarytohexa(RoundKey));
}
break;
}
default:
cout<<"Enter a valid input";
break;
}
}
while(c!=5);
}
Outputs:
1. HRRN Algorithm
2. C-LOOK Algorithm
3. DES Algorithm
4. Buddy Allocation Algorithm:
Conclusion
In this project we aimed to emulate a Linux-based operating system, specifically the Ubuntu OS.
We studied the algorithms and methods used by this OS in some key areas, namely Process
Management, I/O Management, File Management, and Memory Management, and selected one
algorithm from each of these areas to implement for our project. Highest Response Ratio
Next, C-LOOK, Data Encryption Standard, and Buddy Allocation were the algorithms that we
implemented for Process, I/O, File, and Memory Management respectively. Through this project
we understood the logic and application of these methods and compiled them into a single menu-driven
C++ program simulating an Ubuntu environment. We wish to continue further study into this topic,
learn more about the advantages and disadvantages of the algorithms and procedures employed,
and work on making them more efficient and useful.
References
1. https://www.geeksforgeeks.org/highest-response-ratio-next-hrrn-cpu-scheduling/
2. https://www.geeksforgeeks.org/mutex-lock-for-linux-thread-synchronization/
3. https://www.geeksforgeeks.org/operating-system-allocating-kernel-memory-buddy-system-slab-system/
4. https://www.geeksforgeeks.org/virtual-memory-in-operating-system/
5. https://www.thegeekstuff.com/2012/02/linux-memory-manageme
6. https://docstore.mik.ua/orelly/unix/upt/ch01_19.htm
7. https://doc.nuxeo.com/nxdoc/filesystem-commands/
8. https://dl.acm.org/doi/abs/10.1145/2501620.2501623?download=true
9. https://www.simplilearn.com/what-is-des-article