
Mukesh Patel School of Technology Management and Engineering

Linux OS: Ubuntu


Operating Systems
Mini-Project Report

B.Tech CSBS,

Submitted by

1. Ishaan Kangriwala (E019)

2. Paarth Kapasi (E020)

3. Dhruv Patel (E035)

4. Nihal Shetty (E050)

Under the Guidance of


Prof. Swarnalata Bolavarappu

April 2021.

Table of Contents:
Introduction
Review of Literature
Process Management
I/O Management
File Management
Memory Management
Implementation
Process Management HRRN Algorithm
I/O Management C-LOOK Algorithm
Memory Management Buddy Allocation Algorithm
File Management DES Algorithm
Results
Code
Outputs
Conclusion
References

Introduction
Ubuntu is a Linux-based operating system. It is designed for computers, smartphones, and
network servers. The system is developed by a UK-based company called Canonical Ltd., and
Ubuntu's development follows the principles of open-source software development.

Key Features of Ubuntu


Following are some of the significant features of Ubuntu:
• The desktop version of Ubuntu supports most of the common software familiar from Windows, such as
Firefox, Chrome, and VLC.
• It supports the office suite called LibreOffice.
• Ubuntu has built-in email software called Thunderbird, which gives the user access to
email services such as Exchange, Gmail, Hotmail, etc.
• There is a host of free applications for users to view and edit photos.
• There are also applications to manage and share videos.
• It is easy to find content on Ubuntu with the smart searching facility.
• Finally, it is a free operating system backed by a huge open-source community.

A default installation of Ubuntu contains a wide range of software that includes LibreOffice,
Firefox, Thunderbird, Transmission, and several lightweight games such as Sudoku and
chess. Many additional software packages are accessible from the built-in Ubuntu Software
(previously Ubuntu Software Centre) as well as any other APT-based package management tools.
Many software packages that are no longer installed by default, such as Evolution,
GIMP, Pidgin, and Synaptic, are still accessible in the repositories and installable with the main tool
or with any other APT-based package management tool. Cross-distribution Snap packages and
Flatpaks are also available; both allow installing software, including some of Microsoft's software,
on most major Linux operating systems (such as any currently supported Ubuntu version and
Fedora). The default file manager is GNOME Files, formerly called Nautilus.

All of the application software installed by default is free software. In addition, Ubuntu
redistributes some hardware drivers that are available only in binary format, but such packages are
clearly marked in the restricted component.

Review of Literature

Process Management

When you execute a program on your Unix system, the system creates a special environment for
that program. This environment contains everything needed for the system to run the program as
if no other program were running on the system. Whenever you issue a command in Unix, it
creates, or starts, a new process. A process, in simple terms, is an instance of a running program.
The operating system tracks processes through a five-digit ID number known as the PID, or
process ID. Each process in the system has a unique PID.

PIDs eventually repeat because all the possible numbers are eventually used up and the numbering rolls
over. At any point of time, no two processes with the same PID exist in the system, because it is the
PID that Unix uses to track each process.
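As a small illustration (this is a standalone sketch, not part of the project's menu-driven program), a process on Linux can query its own PID using the getpid() system call declared in unistd.h:

#include <iostream>
#include <unistd.h>   // getpid(), getppid()

int main()
{
    // Every running process can query its own PID and its parent's PID.
    std::cout << "My PID is " << getpid() << std::endl;
    std::cout << "My parent's PID is " << getppid() << std::endl;
    return 0;
}

Running the program twice prints two different PIDs, since each run is a separate process.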
A process means a program in execution. It generally takes an input, processes it, and gives us the
appropriate output. There are basically two types of processes.
1. Foreground processes: Such processes are also known as interactive processes. These
are the processes that are executed or initiated by the user or the programmer; they cannot
be initialized by system services. Such processes take input from the user and return the output.
While these processes are running, we cannot directly initiate a new process from the same
terminal.
2. Background processes: Such processes are also known as non-interactive processes.
These are the processes that are executed or initiated by the system itself or by users, though
they can even be managed by users. These processes have a unique PID (process ID) assigned to
them, and we can initiate other processes within the same terminal from which they are initiated.

Highest Response Ratio Next (HRRN):


In this scheduling, processes with the highest response ratio are scheduled. This algorithm avoids
starvation. Given n processes with their Arrival times and Burst times, the task is to find average
waiting time and average turnaround time using HRRN scheduling algorithm.
The name itself states that we need to find the response ratio of all available processes and select
the one with the highest Response Ratio. A process once selected will run till completion.

Criteria – Response Ratio


Mode – Non-Pre-emptive

Response Ratio = (W + S)/S


Here, W is the waiting time of the process so far and S is the Burst time of the process.
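For example, a process that has so far waited W = 6 units and has a burst time of S = 3 units has a response ratio of (6 + 3)/3 = 3, while a newly arrived process with the same burst time (W = 0) has a ratio of (0 + 3)/3 = 1; the longer a process waits, the higher its ratio climbs.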

Performance of HRRN:
1. Shorter processes are favoured.
2. Aging without service increases the response ratio, so longer jobs can eventually get past shorter jobs.

Gantt Chart

Explanation

• At t = 0 we have only one process available, so A gets scheduled.
• Similarly, at t = 3 we have only one process available, so B gets scheduled.
• Now at t = 9 we have three processes available: C, D and E. Since C, D and E arrived after
4, 6 and 8 units respectively, the waiting times for C, D and E are (9 − 4 =) 5, (9 − 6 =) 3, and
(9 − 8 =) 1 unit respectively.
• Using the formula given above, we calculate the response ratios of C, D and E as 2.25, 1.6
and 1.5 respectively.
• Clearly C has the highest response ratio, so it gets scheduled.
• Next, at t = 13 we have two jobs available, D and E.
• The response ratios of D and E are 2.4 and 3.5 respectively.
• So process E is selected next and process D is selected last.

Implementation of HRRN Scheduling


1. Input the number of processes, their arrival times and burst times.
2. Sort them according to their arrival times.
3. At any given time calculate the response ratios and select the appropriate process to be
scheduled.
4. Calculate the turnaround time as completion time – arrival time.
5. Calculate the waiting time as turnaround time – burst time.
6. Turnaround time divided by the burst time gives the normalized turnaround time.
7. Sum up the waiting and turnaround times of all processes and divide by the number of
processes to get the average waiting and turnaround time.

Mutex lock for Linux Thread Synchronization


Thread synchronization is defined as a mechanism which ensures that two or more concurrent
processes or threads do not simultaneously execute some particular program segment known as a
critical section. Processes’ access to critical section is controlled by using synchronization
techniques. When one thread starts executing the critical section (a serialized segment of the
program) the other thread should wait until the first thread finishes. If proper synchronization
techniques are not applied, it may cause a race condition where the values of variables may be
unpredictable and vary depending on the timings of context switches of the processes or threads.

• Mutex
1. A Mutex is a lock that we set before using a shared resource and release after using it.
2. When the lock is set, no other thread can access the locked region of code.
3. So, we see that even if thread 2 is scheduled while thread 1 has not finished accessing the
shared resource, and the code is locked by thread 1 using a mutex, then thread 2 cannot
access that region of code.
4. So, this ensures synchronized access to shared resources in the code.

• Working of a mutex
1. Suppose one thread has locked a region of code using mutex and is executing that piece
of code.
2. Now if the scheduler decides to do a context switch, then all the other threads which are
ready to execute the same region are unblocked.
3. Only one of all the threads would make it to the execution but if this thread tries to
execute the same region of code that is already locked then it will again go to sleep.
4. Context switch will take place again and again but no thread would be able to execute the
locked region of code until the mutex lock over it is released.
5. The mutex lock will only be released by the thread that locked it.

6. So, this ensures that once a thread has locked a piece of code, no other thread can
execute the same region until it is unlocked by the thread that locked it.

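To make the behaviour described above concrete, the following is a minimal sketch using POSIX threads (pthreads) on Linux; it is not part of the report's menu-driven program, and the shared counter and loop bounds are only illustrative. Two threads increment a shared counter, and the mutex guarantees that only one of them is inside the critical section at a time (compile with g++ file.cpp -pthread):

#include <pthread.h>
#include <iostream>

long long counter = 0;                                       // shared resource
pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;    // mutex guarding the counter

void* worker(void*)
{
    for (int i = 0; i < 100000; i++)
    {
        pthread_mutex_lock(&counter_lock);    // enter the critical section
        counter++;                            // only one thread at a time executes this
        pthread_mutex_unlock(&counter_lock);  // leave the critical section
    }
    return nullptr;
}

int main()
{
    pthread_t t1, t2;
    pthread_create(&t1, nullptr, worker, nullptr);
    pthread_create(&t2, nullptr, worker, nullptr);
    pthread_join(t1, nullptr);
    pthread_join(t2, nullptr);
    std::cout << "Final counter value: " << counter << std::endl;   // always 200000 with the mutex
    return 0;
}

Without the lock/unlock calls, the two increments can interleave and the final value becomes unpredictable, which is exactly the race condition described above.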

I/O Management

Earliest Deadline First:


Earliest deadline first (EDF) or least time to go is a dynamic priority scheduling algorithm used in
real-time operating systems to place processes in a priority queue. Whenever a scheduling event
occurs (task finishes, new task released, etc.) the queue will be searched for the process closest to
its deadline. This process is the next to be scheduled for execution. EDF is an optimal scheduling
algorithm on pre-emptive uniprocessors, in the following sense: if a collection of independent jobs,
each characterized by an arrival time, an execution requirement and a deadline, can be scheduled
(by any algorithm) in a way that ensures all the jobs complete by their deadline, then EDF will
schedule this collection of jobs so that they all complete by their deadline. When scheduling periodic
processes that have deadlines equal to their periods, EDF has a utilization bound of 100%.
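As a small illustration of the 100% utilization bound mentioned above, the sketch below checks whether a set of periodic tasks (deadlines equal to periods) is schedulable under EDF; the Task structure and the sample task set are hypothetical and not part of the report's implementation:

#include <iostream>
#include <vector>

// Hypothetical task description: worst-case execution time and period
// (the deadline is assumed equal to the period, as in the text above).
struct Task {
    double wcet;    // execution requirement C
    double period;  // period T (= deadline)
};

// EDF schedulability test for such tasks: the sum of C/T must not exceed 1 (100%).
bool edfSchedulable(const std::vector<Task>& tasks)
{
    double utilization = 0.0;
    for (const Task& t : tasks)
        utilization += t.wcet / t.period;
    std::cout << "Total utilization = " << utilization * 100 << "%\n";
    return utilization <= 1.0;
}

int main()
{
    std::vector<Task> tasks = { {1, 4}, {2, 6}, {3, 12} };   // U = 0.25 + 0.33 + 0.25 = 0.83
    std::cout << (edfSchedulable(tasks) ? "Schedulable under EDF\n" : "Not schedulable\n");
    return 0;
}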

LOOK & CLOOK in Ubuntu:


LOOK: It is similar to the SCAN disk scheduling algorithm, except that the disk arm, instead of
going all the way to the end of the disk, goes only as far as the last request to be serviced in front of
the head and then reverses its direction from there. Thus it prevents the extra delay caused by
unnecessary traversal to the end of the disk.

CLOOK: Just as LOOK is similar to the SCAN algorithm, CLOOK is similar to the CSCAN
disk scheduling algorithm. In CLOOK, the disk arm, instead of going to the end, goes only as far as the
last request to be serviced in front of the head, and then from there jumps to the other end's last
request. Thus, it also prevents the extra delay caused by unnecessary traversal to the
end of the disk.

File Management

The Tree Structure of the Filesystem

A multiuser system needs a way to let different users have different files with the same name. It
also needs a way to keep files in logical groups. With thousands of system files and hundreds of
files per user, it would be disastrous to have all of the files in one big heap. Even single-user
operating systems have found it necessary to go beyond "flat" file system structures.

Almost every operating system solved this problem by implementing a tree-structured, or
hierarchical, filesystem. Linux is no exception. A hierarchical file system is not much different
from a set of filing cabinets at the office. Your set of cabinets consists of many individual cabinets.
Each individual cabinet has several drawers; each drawer may have several partitions in it; each
partition may have several hanging folders; and each hanging folder may have several files. You
can specify an individual file by naming the filing cabinet, the drawer, the partition, the group of
folders, and the individual folder. The UNIX file system works in exactly the same way (as do
most other hierarchical file systems). Rather than having a heap of assorted files, files are
organized into directories. A directory is really nothing more than a special kind of file that lists a
bunch of other files. A directory can contain any number of files (although for performance
reasons, it's a good idea to keep the number of files in one directory relatively small - under 100,
when you can). A directory can also contain other directories. Because a directory is nothing more
than a special kind of file, directories also have names. At the top (the file system "tree" is really
upside down) is a directory called the "root," which has the special name / (pronounced “slash,"
but never spelled out).

To locate any file, we can give a sequence of names, starting from the file system's root, that shows
its exact position in the filesystem: we start with the root and then list the directories you go through
to find the file, separating them by slashes. This is called a path.

For example, the names /home/mkl/mystuff/stuff and /home/hun/public/stuff both refer to files
named stuff. However, these files are in different directories, so they are different files. The names
home, hun, and so on are all names of directories. Complete paths like these are called "absolute
paths." There are shorter ways to refer to a file, called relative paths.

BTRFS is a Linux filesystem that has been adopted as the default filesystem in some popular
versions of Linux. It is based on copy-on-write, allowing for efficient snapshots and clones. It uses
B-trees as its main on-disk data structure. The design goal is to work well for many use cases and
workloads. To this end, much effort has been directed to maintaining even performance as the
filesystem ages, rather than trying to support a particular narrow benchmark use-case.

Linux filesystems are installed on smartphones as well as enterprise servers. This entails
challenges on many different fronts.

• Scalability. The filesystem must scale in many dimensions: disk space, memory, and CPUs.

• Data integrity. Losing data is not an option, and much effort is expended to safeguard the
content. This includes checksums, metadata duplication, and RAID support built into the
filesystem.

• Disk diversity. The system should work well with SSDs and hard disks. It is also expected to be
able to use an array of different-sized disks, which poses challenges to the RAID and striping
mechanisms.

Data Encryption Standard (DES):


In 1973, National Institute of Standards and Technology (NIST) published a request for proposals
for a national symmetric-key cryptosystem. A proposal from IBM, a modification of a project
called Lucifer, was accepted as DES. DES was published in the Federal Register in March 1975
as a draft of the Federal Information Processing Standard (FIPS). After the publication, the draft
was criticized severely for two reasons. First, critics questioned the small key length (only 56 bits),
which could make the cipher vulnerable to brute-force attack. Second, critics were concerned about
some hidden design behind the internal structure of DES. They were suspicious that some part of
the structure (the S-boxes) may have some hidden trapdoor that would allow the National Security
Agency (NSA) to decrypt the messages without the need for the key. Later IBM designers
mentioned that the internal structure was designed to prevent differential cryptanalysis. DES was
finally published as FIPS 46 in the Federal Register in January 1977. NIST, however, defines DES
as the standard for use in unclassified applications. DES has been the most widely used
symmetric-key block cipher since its publication. NIST later issued a new standard (FIPS 46-3) that recommends the
use of triple DES (the DES cipher repeated three times) for future applications.

At the encryption site, DES takes a 64-bit plaintext and creates a 64-bit ciphertext; at the
decryption site, DES takes a 64-bit ciphertext and creates a 64-bit block of plaintext. The same
56-bit cipher key is used for both encryption and decryption. The encryption process is made of
two permutations (P-boxes), which we call initial and final permutations, and sixteen Feistel
rounds. Each round uses a different 48-bit round key generated from the cipher key according to
a predefined algorithm.

Memory Management:
• Virtual Memory and Demand Paging:

Linux supports virtual memory, that is, using a disk as an extension of RAM so that the effective
size of usable memory grows correspondingly. The kernel will write the contents of a currently
unused block of memory to the hard disk so that the memory can be used for another purpose.
When the original contents are needed again, they are read back into memory. This is all made
completely transparent to the user; programs running under Linux only see the larger amount of
memory available and don't notice that parts of them reside on the disk from time to time. Of
course, reading and writing the hard disk is slower (on the order of a thousand times slower) than
using real memory, so the programs don't run as fast. The part of the hard disk that is used as
virtual memory is called the swap space. Linux can use either a normal file in the filesystem or a
separate partition for swap space.

Virtual memory makes your system appear as if it has more memory than it actually has. This may
sound interesting and may prompt one to ask how this is possible. So, let's understand the concept.

▪ To start, we must first understand that virtual memory is a layer of memory addresses
that map to physical addresses.
▪ In the virtual memory model, when a processor executes a program instruction, it reads
the instruction from virtual memory and executes it.
▪ But before executing the instruction, it first converts the virtual memory address into a
physical address.
▪ This conversion is done using the virtual-to-physical mapping information contained in
the page tables (which are maintained by the OS).

Whenever the processor encounters a virtual address, it extracts the virtual page frame number out
of it. It then translates this virtual page frame number into a physical page frame number, and the
offset part helps it to reach the exact address within the physical page. This translation of addresses is
done through the page tables; a small sketch of the translation is given after the list below. Theoretically,
we can consider a page table entry to contain the following information:

▪ A flag that describes whether the entry is valid or not.


▪ The physical page frame number as described by this entry.
▪ Access information regarding the page (read-only, read-write, etc.).
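The following is a minimal sketch of the translation described above, assuming a hypothetical 4 KiB page size and a toy page table held in a map; it is only meant to illustrate how the virtual page frame number and the offset are combined with the page-table mapping:

#include <iostream>
#include <unordered_map>

// Hypothetical page size of 4 KiB (4096 bytes), so an address splits into page number and offset.
const unsigned long PAGE_SIZE = 4096;

int main()
{
    // A toy "page table": virtual page number -> physical page frame number.
    std::unordered_map<unsigned long, unsigned long> page_table = { {0, 5}, {1, 9}, {2, 3} };

    unsigned long virtual_address = 0x1A38;                       // example virtual address
    unsigned long virtual_page    = virtual_address / PAGE_SIZE;  // virtual page frame number
    unsigned long offset          = virtual_address % PAGE_SIZE;  // offset within the page

    if (page_table.count(virtual_page))                           // "valid" flag: an entry exists
    {
        unsigned long physical_address = page_table[virtual_page] * PAGE_SIZE + offset;
        std::cout << "Virtual 0x" << std::hex << virtual_address
                  << " -> physical 0x" << physical_address << std::endl;   // prints physical 0x9a38
    }
    else
    {
        std::cout << "Page fault: page not present in memory" << std::endl;
    }
    return 0;
}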

Demand Paging:

According to the concept of virtual memory, in order to execute a process, only a part of the
process needs to be present in the main memory, which means that only a few of its pages will be
present in the main memory at any time. However, deciding which pages need to be kept in the
main memory and which need to be kept in the secondary memory is difficult, because
we cannot say in advance that a process will require a particular page at a particular time. Therefore,
to overcome this problem, a concept called demand paging is introduced. It suggests
keeping all pages in the secondary memory until they are required. In other words,
it says do not load any page into the main memory until it is required. Whenever a page is
referred to for the first time, it is found only in the secondary memory and is brought in on demand.
After that, it may or may not be present in the main memory, depending upon the page
replacement algorithm.
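As a rough illustration of demand paging, the sketch below walks through a hypothetical page reference string: a page referenced for the first time is "loaded" from secondary memory and counted as a page fault, while later references find it in main memory (no frame limit or replacement policy is modelled here):

#include <iostream>
#include <unordered_set>
#include <vector>

int main()
{
    // Hypothetical reference string of page numbers requested by a process.
    std::vector<int> reference_string = { 1, 3, 1, 4, 3, 5, 1 };
    std::unordered_set<int> in_memory;   // pages currently loaded in main memory
    int page_faults = 0;

    for (int page : reference_string)
    {
        if (in_memory.find(page) == in_memory.end())
        {
            // First reference: the page is only in secondary memory,
            // so it is loaded on demand and a page fault is counted.
            in_memory.insert(page);
            page_faults++;
            std::cout << "Page " << page << ": page fault (loaded on demand)\n";
        }
        else
        {
            std::cout << "Page " << page << ": already in memory\n";
        }
    }
    std::cout << "Total page faults = " << page_faults << std::endl;   // 4 for this reference string
    return 0;
}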

Kernel Memory allocation:


Buddy system
The buddy allocation system is an algorithm in which a larger memory block is divided into smaller
parts to satisfy a request. This algorithm is used to give a best fit. The two smaller parts of a block
are of equal size and are called buddies. In the same manner, one of the two buddies will further
divide into smaller parts until the request is fulfilled. The benefit of this technique is that the two
buddies can later combine to form a block of larger size according to the memory request.
A second strategy for allocating kernel memory is known as slab allocation. It eliminates
fragmentation caused by allocations and deallocations. This method is used to retain allocated
memory that contains a data object of a certain type for reuse upon subsequent allocations of
objects of the same type. In slab allocation, memory chunks suitable to fit data objects of a certain
type or size are pre-allocated. The cache does not free the space immediately after use; instead it
keeps track of the data that is required frequently, so that whenever a request is made the data can
be served very fast. Two terms are required:
Slab – A slab is made up of one or more physically contiguous pages. The slab is the actual
container of data associated with objects of the specific kind of the containing cache.
Cache – A cache represents a small amount of very fast memory. A cache consists of one or more
slabs. There is a single cache for each unique kernel data structure.
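The following is a simplified, user-space sketch of the slab idea, assuming a hypothetical Inode object type: freed objects are kept on a free list inside a cache and handed back on subsequent allocations instead of being released, mirroring how a slab cache reuses objects of one kind:

#include <iostream>
#include <vector>

// Hypothetical kernel object that is allocated and freed very frequently.
struct Inode { int number; /* ...other fields... */ };

// A user-space sketch of a slab-style cache: freed objects are kept on a free
// list and handed back on the next allocation instead of being released.
class InodeCache {
    std::vector<Inode*> free_list;
public:
    Inode* alloc() {
        if (!free_list.empty()) {                 // reuse a previously freed object
            Inode* obj = free_list.back();
            free_list.pop_back();
            return obj;
        }
        return new Inode();                       // cache empty: grab fresh memory
    }
    void release(Inode* obj) { free_list.push_back(obj); }   // keep the object for reuse
    ~InodeCache() { for (Inode* obj : free_list) delete obj; }
};

int main()
{
    InodeCache cache;
    Inode* a = cache.alloc();
    cache.release(a);
    Inode* b = cache.alloc();                     // the same object is handed back
    std::cout << "Reused the same object: " << std::boolalpha << (a == b) << std::endl;
    cache.release(b);
    return 0;
}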

Implementation

Process Management HRRN Algorithm


1. In the HRRN scheduling algorithm, once a process is selected for execution it will run until
its completion.
2. The first step is to sort processes according to their arrival times.
3. Then calculate the waiting time for all the processes. Waiting time simply means the sum
of the time spent waiting in the ready queue by processes.
4. At any given time calculate the response ratios and select the process to be scheduled that
has the highest response ratio.
5. Then execute that process till it completes its burst time, and repeat the steps until all the
processes are executed.
6. If two processes have the same response ratio, the tie is broken using the
FCFS scheduling algorithm.

I/O Management C-LOOK Algorithm

1. Let the request array represent an array storing the indexes of the tracks that have been requested, in
ascending order of their time of arrival, and let head be the position of the disk head.

2. The initial direction in which the head is moving is given and it services in the same direction.

3. The head services all the requests one by one in the direction it is moving.

4. The head continues to move in the same direction until all the requests in this direction have been
serviced.

5. While moving in this direction, calculate the absolute distance of the tracks from the head.

6. Increment the total seek count with this distance.

7. Currently serviced track position now becomes the new head position.

8. Go to step 5 until we reach the last request in this direction.

9. If we reach the last request in the current direction then reverse the direction and move the head in
this direction until we reach the last request that is needed to be serviced in this direction without
servicing the intermediate requests.

10. Reverse the direction and go to step 3, repeating until all the requests have been serviced.

Memory Management Buddy Allocation Algorithm

The buddy system is a memory allocation and management algorithm that manages memory in
power-of-two increments. Assume the total memory size is 2^U and suppose a block of size S is
required; a short worked example follows the steps below.

1. The entire available space is treated as a single block of size 2^U.

a. If 2^(U-1) < S <= 2^U, then allocate the whole block.
b. Else: recursively divide the block into two equal halves and test the condition each time;
when it is satisfied, allocate the block and exit the loop.
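As a short worked example (the numbers are only illustrative): suppose the total memory is 2^U = 1024 bytes and a process requests S = 200 bytes. Since 200 does not satisfy 512 < S <= 1024, the 1024-byte block is split into two 512-byte buddies; 200 does not satisfy 256 < S <= 512 either, so one 512-byte buddy is split into two 256-byte buddies; now 128 < 200 <= 256 holds, so a 256-byte block is allocated, and the remaining 256-byte and 512-byte buddies stay on the free lists for later requests or coalescing.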

File Management DES Algorithm

1. The process begins with the 64-bit plain text block getting handed over to an initial
permutation (IP) function.

2. The initial permutation (IP) is then performed on the plain text.

3. Next, the initial permutation (IP) creates two halves of the permuted block, referred to
as Left Plain Text (LPT) and Right Plain Text (RPT).

4. Each LPT and RPT goes through 16 rounds of the encryption process.

5. Finally, the LPT and RPT are re-joined, and a Final Permutation (FP) is performed on
the newly combined block.

6. The result of this process produces the desired 64-bit ciphertext.

The encryption process step (step 4, above) is further broken down into five stages:

1. Key transformation

2. Expansion permutation

3. S-Box permutation

4. P-Box permutation

5. XOR and swap

For decryption, we use the same algorithm, and we reverse the order of the 16 round keys.

Results

Code:
#include<iostream>
#include<algorithm>
#include <bits/stdc++.h>
using namespace std;
//-------------------------------------------------------Process Management------------------------------------------------------//
struct process{
char process_id[50];
int burst_time;
int arrival_time;
int wait_time;
float response_ratio=0;
}a[50];
void read(int n)
{
int i;

for(i=0;i<n;i++)
{
cout<<"Enter the Process Name, Arrival Time and Burst Time for Process "<<i+1<<": ";
cin>>a[i].process_id;
cin>>a[i].arrival_time;
cin>>a[i].burst_time;
a[i].response_ratio=0;
a[i].wait_time=-a[i].arrival_time;
}
}
bool btimeSort(process a,process b)
{
return a.burst_time<b.burst_time;
}
bool atimeSort(process a,process b)
{
return a.arrival_time<b.arrival_time;
}
bool rrtimeSort(process a,process b)
{
return a.response_ratio>b.response_ratio;
}
void display(int n)
{
sort(a,a+n,btimeSort);
sort(a,a+n,atimeSort);
int ttime=0,i;
int j,completion_time[n+1];
for(i=0;i<n;i++)
{
j=i;
while(j!=n&&a[j].arrival_time<=ttime)
{
j++;
}
for(int q = i;q<j;q++)
{
a[q].wait_time=ttime-a[q].arrival_time;
a[q].response_ratio=(float)(a[q].wait_time+a[q].burst_time)/(float)a[q].burst_time;
}
sort(a+i,a+j,rrtimeSort);
completion_time[i]=ttime;
cout<<endl;
ttime+=a[i].burst_time;
}
completion_time[i] = ttime;
float averageWaitingTime=0;
float averageResponseTime=0;
float averageTAT=0;

15
cout<<"\n";
cout<<"P.Name AT\tBT\tCT\tTAT\tWT\tRT\n";

for (i=0; i<n; i++)


{
cout<<'P'<< a[i].process_id << "\t";
cout<< a[i].arrival_time << "\t";
cout<< a[i].burst_time << "\t";
cout<< completion_time[i+1] << "\t";
cout<< completion_time[i]-a[i].arrival_time+a[i].burst_time << "\t";
averageTAT+=completion_time[i]-a[i].arrival_time+a[i].burst_time;
cout<< a[i].wait_time << "\t";
averageWaitingTime+=completion_time[i]-a[i].arrival_time;
cout<< completion_time[i]-a[i].arrival_time << "\t";
averageResponseTime+=completion_time[i]-a[i].arrival_time;
cout<<"\n";
}
cout<<"\n";
cout<<"\nGantt Chart\n";
for (i=0; i<n; i++){
cout <<"| P"<< a[i].process_id << " ";
}
cout<<"\n";
for (i=0; i<n+1; i++){
cout << completion_time[i] << "\t";
}
cout<<"\n";
cout<<"Average Response time: "<<(float)averageResponseTime/(float)n<<endl;
cout<<"Average Waiting time: "<<(float)averageWaitingTime/(float)n<<endl;
cout<<"Average TA time: "<<(float)averageTAT/(float)n<<endl;
}
//--------------------------------------------------------I/O Management-----------------------------------------------------------//
int total_tracks = 200;

void CLOOK(int arr[], int head, int size)


{
int total_headmovement = 0, distance, current_track;
vector<int> left, right;
vector<int> seek_sequence;

for (int i=0; i<size; i++)


{
if(arr[i]<head)
{
left.push_back(arr[i]);
}

if (arr[i]>head)
{
right.push_back(arr[i]);
}
}

std::sort(left.begin(), left.end());
std::sort(right.begin(), right.end());

for (int i=0; i<right.size(); i++)


{
current_track = right[i];
seek_sequence.push_back(current_track);
distance = abs(current_track - head);
total_headmovement += distance;
head = current_track;
}

if (!left.empty())
{
total_headmovement += abs(head - left[0]);
head = left[0];
}

for (int i = 0; i < left.size(); i++)


{
current_track = left[i];
seek_sequence.push_back(current_track);
distance = abs(current_track - head);
total_headmovement += distance;
head = current_track;
}

cout<<"Total Head Movement = "<<total_headmovement<<endl;


cout<<"The Seek Sequence is"<<endl;

for (int i = 0; i<seek_sequence.size(); i++)


{
cout<<seek_sequence[i]<< endl;
}
}
//-------------------------------------------------Memory Management (Paarth Kapasi)-------------------------------------------------//
int size;
vector<pair<int, int>> free_list[100000];
map<int, int> mp;

void initialize(int max_size)


{
int n= ceil(log(max_size) / log(2));
size= n+1;

for(int i = 0; i <= n; i++)
{
free_list[i].clear();
}
free_list[n].push_back(make_pair(0, max_size-1));
}

void allocate(int max_size)


{
int n= ceil(log(max_size)/log(2));

if (free_list[n].size()>0)
{
pair<int, int> temp= free_list[n][0];

free_list[n].erase(free_list[n].begin());
cout<<"Memory from "<<temp.first<<" to "<<temp.second<<" allocated"<< "\n";

mp[temp.first] = temp.second-temp.first+1;
}
else
{
int i;
for(i= n+1; i<size; i++)
{

if(free_list[i].size() != 0)
break;
}

if(i==size)
{
cout<<"Sorry, failed to allocate memory \n";
}

else
{
pair<int, int> temp;
temp = free_list[i][0];

free_list[i].erase(free_list[i].begin());
i--;

for(; i >= n; i--)


{

pair<int, int> pair1, pair2;
pair1 = make_pair(temp.first, temp.first + (temp.second - temp.first) / 2);

pair2 = make_pair(temp.first + (temp.second - temp.first + 1) / 2, temp.second);

free_list[i].push_back(pair1);

free_list[i].push_back(pair2);
temp = free_list[i][0];

free_list[i].erase(free_list[i].begin());
}
cout << "Memory from " << temp.first<< " to " << temp.second<< " allocated" << "\n";

mp[temp.first] = temp.second - temp.first + 1;


}
}
}

void deallocate(int id)


{

if(mp.find(id)==mp.end())
{
cout<<"Sorry, invalid free request\n";
return;
}

int n=ceil(log(mp[id])/log(2));
int i, buddyNumber, buddyAddress;

free_list[n].push_back(make_pair(id, id+pow(2, n)-1));


cout<<"Memory block from"<<id
<<"to"<<id+pow(2, n)-1
<<" freed\n";

buddyNumber=id/mp[id];

if (buddyNumber % 2 != 0)
{
buddyAddress = id - pow(2, n);
}
else
{
buddyAddress = id + pow(2, n);
}

for(i=0; i<free_list[n].size(); i++)
{

if(free_list[n][i].first==buddyAddress)
{

if(buddyNumber%2==0)
{
free_list[n + 1].push_back(make_pair(id, id + 2 * pow(2, n) - 1));

cout<<"Coalescing of blocks starting at"<<id<<" and "<<buddyAddress<<" was done"<<"\n";


}
else
{
free_list[n + 1].push_back(make_pair(buddyAddress, buddyAddress + 2 * pow(2, n) - 1));

cout<<"Coalescing of blocks starting at "<< buddyAddress <<" and "<<id <<" was done" <<"\n";
}
free_list[n].erase(free_list[n].begin() + i);
free_list[n].erase(free_list[n].begin() +
free_list[n].size() - 1);
break;
}
}
mp.erase(id);
}
//---------------------------------------------------------File Management---------------------------------------------------------//
string hexatobinary(string s)
{

unordered_map<char, string> mp;


mp['0'] = "0000";
mp['1'] = "0001";
mp['2'] = "0010";
mp['3'] = "0011";
mp['4'] = "0100";
mp['5'] = "0101";
mp['6'] = "0110";
mp['7'] = "0111";
mp['8'] = "1000";
mp['9'] = "1001";
mp['A'] = "1010";
mp['B'] = "1011";
mp['C'] = "1100";
mp['D'] = "1101";
mp['E'] = "1110";
mp['F'] = "1111";
string bin = "";

for (int i = 0; i < s.size(); i++)
{
bin += mp[s[i]];
}
return bin;
}
string binarytohexa(string s)
{

unordered_map<string, string> mp;


mp["0000"] = "0";
mp["0001"] = "1";
mp["0010"] = "2";
mp["0011"] = "3";
mp["0100"] = "4";
mp["0101"] = "5";
mp["0110"] = "6";
mp["0111"] = "7";
mp["1000"] = "8";
mp["1001"] = "9";
mp["1010"] = "A";
mp["1011"] = "B";
mp["1100"] = "C";
mp["1101"] = "D";
mp["1110"] = "E";
mp["1111"] = "F";
string hex = "";
for (int i = 0; i < s.length(); i += 4) {
string ch = "";
ch += s[i];
ch += s[i + 1];
ch += s[i + 2];
ch += s[i + 3];
hex += mp[ch];
}
return hex;
}

string permutation(string k, int* arr, int n)


{
string permute = "";
for (int i = 0; i < n; i++)
{
permute += k[arr[i] - 1];
}
return permute;
}

string left_shift(string k, int shifts)

{
string s = "";
for (int i=0; i<shifts; i++)
{
for (int j=1; j<28; j++)
{
s+=k[j];
}
s+= k[0];
k= s;
s= "";
}
return k;
}

string exor(string a, string b)


{
string ans = "";
for (int i=0; i<a.size(); i++)
{
if (a[i] == b[i])
{
ans+="0";
}
else
{
ans+="1";
}
}
return ans;
}
string encryption(string plaintext, vector<string> RoundKey_binary, vector<string> roundkeys)
{

plaintext= hexatobinary(plaintext);

int initial_permutation[64]= { 58, 50, 42, 34, 26, 18, 10, 2,


60, 52, 44, 36, 28, 20, 12, 4,
62, 54, 46, 38, 30, 22, 14, 6,
64, 56, 48, 40, 32, 24, 16, 8,
57, 49, 41, 33, 25, 17, 9, 1,
59, 51, 43, 35, 27, 19, 11, 3,
61, 53, 45, 37, 29, 21, 13, 5,
63, 55, 47, 39, 31, 23, 15, 7 };

plaintext= permutation(plaintext, initial_permutation, 64);


cout << "After initial permutation: " << binarytohexa(plaintext) << endl;

string left= plaintext.substr(0, 32);
string right= plaintext.substr(32, 32);
cout<<"After splitting: L0="<<binarytohexa(left)<<" R0="<<binarytohexa(right)<<endl;

int expansion_dbox[48] = { 32, 1, 2, 3, 4, 5, 4, 5,


6, 7, 8, 9, 8, 9, 10, 11,
12, 13, 12, 13, 14, 15, 16, 17,
16, 17, 18, 19, 20, 21, 20, 21,
22, 23, 24, 25, 24, 25, 26, 27,
28, 29, 28, 29, 30, 31, 32, 1 };

int sbox[8][4][16] = { { 14, 4, 13, 1, 2, 15, 11, 8, 3, 10, 6, 12, 5, 9, 0, 7,


0, 15, 7, 4, 14, 2, 13, 1, 10, 6, 12, 11, 9, 5, 3, 8,
4, 1, 14, 8, 13, 6, 2, 11, 15, 12, 9, 7, 3, 10, 5, 0,
15, 12, 8, 2, 4, 9, 1, 7, 5, 11, 3, 14, 10, 0, 6, 13 },
{ 15, 1, 8, 14, 6, 11, 3, 4, 9, 7, 2, 13, 12, 0, 5, 10,
3, 13, 4, 7, 15, 2, 8, 14, 12, 0, 1, 10, 6, 9, 11, 5,
0, 14, 7, 11, 10, 4, 13, 1, 5, 8, 12, 6, 9, 3, 2, 15,
13, 8, 10, 1, 3, 15, 4, 2, 11, 6, 7, 12, 0, 5, 14, 9 },

{ 10, 0, 9, 14, 6, 3, 15, 5, 1, 13, 12, 7, 11, 4, 2, 8,


13, 7, 0, 9, 3, 4, 6, 10, 2, 8, 5, 14, 12, 11, 15, 1,
13, 6, 4, 9, 8, 15, 3, 0, 11, 1, 2, 12, 5, 10, 14, 7,
1, 10, 13, 0, 6, 9, 8, 7, 4, 15, 14, 3, 11, 5, 2, 12 },
{ 7, 13, 14, 3, 0, 6, 9, 10, 1, 2, 8, 5, 11, 12, 4, 15,
13, 8, 11, 5, 6, 15, 0, 3, 4, 7, 2, 12, 1, 10, 14, 9,
10, 6, 9, 0, 12, 11, 7, 13, 15, 1, 3, 14, 5, 2, 8, 4,
3, 15, 0, 6, 10, 1, 13, 8, 9, 4, 5, 11, 12, 7, 2, 14 },
{ 2, 12, 4, 1, 7, 10, 11, 6, 8, 5, 3, 15, 13, 0, 14, 9,
14, 11, 2, 12, 4, 7, 13, 1, 5, 0, 15, 10, 3, 9, 8, 6,
4, 2, 1, 11, 10, 13, 7, 8, 15, 9, 12, 5, 6, 3, 0, 14,
11, 8, 12, 7, 1, 14, 2, 13, 6, 15, 0, 9, 10, 4, 5, 3 },
{ 12, 1, 10, 15, 9, 2, 6, 8, 0, 13, 3, 4, 14, 7, 5, 11,
10, 15, 4, 2, 7, 12, 9, 5, 6, 1, 13, 14, 0, 11, 3, 8,
9, 14, 15, 5, 2, 8, 12, 3, 7, 0, 4, 10, 1, 13, 11, 6,
4, 3, 2, 12, 9, 5, 15, 10, 11, 14, 1, 7, 6, 0, 8, 13 },
{ 4, 11, 2, 14, 15, 0, 8, 13, 3, 12, 9, 7, 5, 10, 6, 1,
13, 0, 11, 7, 4, 9, 1, 10, 14, 3, 5, 12, 2, 15, 8, 6,
1, 4, 11, 13, 12, 3, 7, 14, 10, 15, 6, 8, 0, 5, 9, 2,
6, 11, 13, 8, 1, 4, 10, 7, 9, 5, 0, 15, 14, 2, 3, 12 },
{ 13, 2, 8, 4, 6, 15, 11, 1, 10, 9, 3, 14, 5, 0, 12, 7,
1, 15, 13, 8, 10, 3, 7, 4, 12, 5, 6, 11, 0, 14, 9, 2,
7, 11, 4, 1, 9, 12, 14, 2, 0, 6, 10, 13, 15, 3, 5, 8,
2, 1, 14, 7, 4, 10, 8, 13, 15, 12, 9, 0, 3, 5, 6, 11 } };

int permute[32] = { 16, 7, 20, 21,

29, 12, 28, 17,
1, 15, 23, 26,
5, 18, 31, 10,
2, 8, 24, 14,
32, 27, 3, 9,
19, 13, 30, 6,
22, 11, 4, 25 };

cout << endl;


for (int i=0; i<16; i++)
{

string right_expanded= permutation(right, expansion_dbox, 48);

string x= exor(RoundKey_binary[i], right_expanded);

string op= "";


for (int i= 0; i<8; i++)
{
int row= 2*int(x[i*6]-'0')+int(x[i*6+5]-'0');
int col= 8*int(x[i*6+1]-'0')+4*int(x[i*6+2]-'0')+2*int(x[i*6+3]-'0')+int(x[i*6+4]-'0');
int val= sbox[i][row][col];
op+= char(val/8+'0');
val= val% 8;
op+= char(val/4+'0');
val= val% 4;
op+= char(val/2+'0');
val= val% 2;
op+= char(val+'0');
}

op= permutation(op, permute, 32);


x= exor(op, left);

left= x;

if(i!= 15)
{
swap(left, right);
}
cout<<"Round "<<i + 1<<" "<< binarytohexa(left) <<" "<<binarytohexa(right)<<" "<<roundkeys[i]<<endl;
}

string combine =left + right;

int final_permutation[64] ={ 40, 8, 48, 16, 56, 24, 64, 32,


39, 7, 47, 15, 55, 23, 63, 31,

38, 6, 46, 14, 54, 22, 62, 30,
37, 5, 45, 13, 53, 21, 61, 29,
36, 4, 44, 12, 52, 20, 60, 28,
35, 3, 43, 11, 51, 19, 59, 27,
34, 2, 42, 10, 50, 18, 58, 26,
33, 1, 41, 9, 49, 17, 57, 25 };

string ciphertext = binarytohexa(permutation(combine, final_permutation, 64));


return ciphertext;
}
//---------------------------------------------------------Main Function-------------------------------------------------------------//
int main()
{
int c;
do
{
cout<<"\nEnter your choice:\n1.Process Management: HRRN Algorithm\n2.I/O Management: C-LOOK
Algorithm\n3.Memory Management: Buddy Memory Allocation\n4.File Management: Data Encryption Standard
Algorithm\n";
cin>>c;

switch(c)
{
case 1:
{
int nop,choice,i;
cout<<"Enter number of processes: ";
cin>>nop;
read(nop);
display(nop);
break;
}
case 2:
{
int arr[100], head, size;

cout<<"Enter the number of elements in the seek sequence: ";


cin>>size;

cout<<"Enter the request sequence: ";


for(int i=0;i<size;i++)
cin>>arr[i];

cout<<"Enter the current head position: ";


cin>>head;

CLOOK(arr, head, size);


break;

}

case 3:
{
int total,c,req;
char ch='Y';
cout<<"Enter Total Memory Size (in Bytes) => ";
cin>>total;
initialize(total);
label:
do
{
cout<<"\n1. Add Process into Memory\n2. Remove Process \n3. Exit\n=> ";
cin>>c;
switch(c)
{
case 1:
{
cout<<"Enter Process Size (in Bytes) => ";
cin>>req;
cout<<"\n===>";
if(req >= 0)
{
allocate(req);
}
else
{
cout<<"Enter positive value for allocation"<<endl;
}
break;
}

case 2:
{
cout<<"Enter Starting Address => ";
cin>>req;
cout<<"\n===>";
deallocate(req);
break;
}

case 3:
{
cout<<"Do you wish to continue? Press N to exit";
cin>>ch;
if(ch=='N')
{
break;
}

}
}

}while(ch!='N');
break;
}
case 4:
{
string plaintext, key;
cout<<"Enter plaintext(in hexadecimal): ";
cin>>plaintext;
cout<<"Enter key(in hexadecimal): ";
cin>>key;

key = hexatobinary(key);

int keyp[56] = { 57, 49, 41, 33, 25, 17, 9,


1, 58, 50, 42, 34, 26, 18,
10, 2, 59, 51, 43, 35, 27,
19, 11, 3, 60, 52, 44, 36,
63, 55, 47, 39, 31, 23, 15,
7, 62, 54, 46, 38, 30, 22,
14, 6, 61, 53, 45, 37, 29,
21, 13, 5, 28, 20, 12, 4 };

key = permutation(key, keyp, 56);

int shift_table[16] = { 1, 1, 2, 2,
2, 2, 2, 2,
1, 2, 2, 2,
2, 2, 2, 1 };

int key_comp[48] = { 14, 17, 11, 24, 1, 5,


3, 28, 15, 6, 21, 10,
23, 19, 12, 4, 26, 8,
16, 7, 27, 20, 13, 2,
41, 52, 31, 37, 47, 55,
30, 40, 51, 45, 33, 48,
44, 49, 39, 56, 34, 53,
46, 42, 50, 36, 29, 32 };

string left = key.substr(0, 28);


string right = key.substr(28, 28);

vector<string> RoundKey_binary;
vector<string> roundkeys;

for (int i = 0; i < 16; i++)
{
left = left_shift(left, shift_table[i]);
right = left_shift(right, shift_table[i]);

string combine = left + right;

string RoundKey = permutation(combine, key_comp, 48);

RoundKey_binary.push_back(RoundKey);
roundkeys.push_back(binarytohexa(RoundKey));
}

cout << "\nEncryption:\n\n";


string ciphertext = encryption(plaintext, RoundKey_binary, roundkeys);
cout << "\nCipher Text: " << ciphertext << endl;

cout << "\nDecryption\n\n";


reverse(RoundKey_binary.begin(), RoundKey_binary.end());
reverse(roundkeys.begin(), roundkeys.end());
string text = encryption(ciphertext, RoundKey_binary, roundkeys);
cout << "\nPlain Text: " << text << endl;
break;
}

default:
cout<<"Enter a valid input";
break;
}
}
while(c!=5);
}

Outputs:
1. HRRN Algorithm

2. C-LOOK Algorithm

3. DES Algorithm

4. Buddy Allocation Algorithm:

Conclusion
In this project, we aimed to emulate a Linux-based operating system, specifically the Ubuntu OS.
We studied the algorithms and methods used by this OS in some key areas, namely Process
Management, I/O Management, File Management and Memory Management, and selected
one algorithm from each of these areas to implement for our project. Highest Response Ratio
Next, C-LOOK, Data Encryption Standard and Buddy Allocation were the algorithms that we
implemented for Process, I/O, File and Memory Management respectively. Through this project
we understood the logic and application of these methods and compiled them into a single menu-driven
C++ program simulating an Ubuntu environment. We wish to continue further study into this topic,
learn more about the advantages and disadvantages of the algorithms and procedures employed,
and work on making them more efficient and useful.

References
1. https://www.geeksforgeeks.org/highest-response-ratio-next-hrrn-cpu-scheduling/

2. https://www.geeksforgeeks.org/mutex-lock-for-linux-thread-synchronization/

3. https://www.geeksforgeeks.org/operating-system-allocating-kernel-memory-buddy-system-slab-system/

4. https://www.geeksforgeeks.org/virtual-memory-in-operating-system/

5. https://www.thegeekstuff.com/2012/02/linux-memory-manageme

6. https://docstore.mik.ua/orelly/unix/upt/ch01_19.htm

7. https://doc.nuxeo.com/nxdoc/filesystem-commands/

8. https://dl.acm.org/doi/abs/10.1145/2501620.2501623?download=true

9. https://www.simplilearn.com/what-is-des-article#:~:text=The%20DES%20(Data%20Encryption%20Standard,ciphertext%20using%2048%2Dbit%20keys.

