
TITLE

Project by

Name:               Roll No:
1: Rahul Kore       16102A0031
2:
3:

Project Title:
Operating System Concepts Simulator

Equipments Required:
Computer with Linux.

Duration:

Objectives:
Operating System Concepts Simulator is an application to graphically simulate Operating System concepts.

Description:

Process Management
1) First Come First Serve (FCFS):

-Analogy:

A FIFO acts like any normal queue, whether it is a line in a cinema, a checkout line in a store, or a queue for a ticket booth. The first person or process to arrive (First In) is the first one to be dealt with (First Out). If someone goes through the line and then realizes they forgot something, they have to go back through the whole line again.

This is exactly how an OS with this design lets programs conduct their business: one person (i.e. one process) at a time.

-Implementation:

To implement this, you can create a queue, an abstract data type (ADT) that can be built on a linked list. The system dequeues the next process from the front of the queue and runs it until completion (in more complex schemes, an unfinished process is enqueued again at the end of the line), then moves on, allowing the next process to use the CPU.
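As an illustrative sketch (not part of the original project code), the queue-based scheme above can be modeled in Python using `collections.deque` as the ready queue; the process names and burst times below are made up:

```python
from collections import deque

def fcfs(processes):
    """Simulate First Come First Serve scheduling.

    `processes` is a list of (name, burst_time) pairs already ordered
    by arrival; returns each process's (name, completion, waiting) times.
    """
    queue = deque(processes)        # ready queue, FIFO order
    clock = 0
    results = []
    while queue:
        name, burst = queue.popleft()   # dequeue the front process
        waiting = clock                 # time spent waiting before running
        clock += burst                  # run to completion (non-preemptive)
        results.append((name, clock, waiting))
    return results

# Example: three processes arriving in order P1, P2, P3
print(fcfs([("P1", 5), ("P2", 3), ("P3", 2)]))
# -> [('P1', 5, 0), ('P2', 8, 5), ('P3', 10, 8)]
```

Note how the short process P3 waits 8 units behind the long P1 at the front, which is exactly the non-preemptive drawback listed below.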

Advantages:

· Simple
· Easy to implement and understand
· Fair in arrival order: first come, first served

Disadvantages:

· This scheduling method is non-preemptive, that is, each process runs until it finishes.
· Because of this non-preemptive scheduling, short processes at the back of the queue have to wait for a long process at the front to finish.
· Throughput is low.
· It is used only in small systems where I/O efficiency is not very important.

2) Shortest Job First (SJF):

-Analogy:

Shortest job next (SJN), also known as shortest job first (SJF) or shortest
process next (SPN), is a scheduling policy that selects for execution the
waiting process with the smallest execution time. SJN is a non-preemptive
algorithm. Shortest remaining time is a preemptive variant of SJN.

-Implementation:

1- Sort all the processes in increasing order according to burst time.

2- Then simply apply FCFS.
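The two steps above amount to a sort followed by FCFS; here is a minimal Python sketch (the burst times are illustrative):

```python
def sjf(processes):
    """Non-preemptive Shortest Job First scheduling.

    `processes` is a list of (name, burst_time) pairs, all assumed to
    have arrived; returns (name, start_time) in execution order.
    """
    ordered = sorted(processes, key=lambda p: p[1])  # step 1: sort by burst
    clock, results = 0, []
    for name, burst in ordered:                      # step 2: plain FCFS
        results.append((name, clock))                # start time == waiting time here
        clock += burst
    return results

print(sjf([("P1", 6), ("P2", 8), ("P3", 7), ("P4", 3)]))
# -> [('P4', 0), ('P1', 3), ('P3', 9), ('P2', 16)]
```

The average waiting time here is (0 + 3 + 9 + 16) / 4 = 7 units, which no other ordering of these four jobs can beat.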

-Advantage:

Shortest job next is advantageous because of its simplicity and because it minimizes the average amount of time each process has to wait until its execution is complete. However, it risks starving long-running processes if short processes are continually added.

-Disadvantage:

A disadvantage of shortest job next is that the total execution time of a job must be known before execution. While it is impossible to predict execution time perfectly, several methods can be used to estimate it, such as a weighted average of previous execution times.

3) Priority Scheduling:

-Analogy:

Priority scheduling, in its basic form, is a non-preemptive algorithm and one of the most common scheduling algorithms in batch systems. Each process is assigned a priority; the process with the highest priority is executed first, and so on.

Processes with the same priority are executed on a first come, first served basis. Priority can be decided based on memory requirements, time requirements, or any other resource requirement.

-Implementation:

1- First input the processes with their burst time and priority.

2- Sort the processes (with their burst times) according to their priority.

3- Now simply apply the FCFS algorithm.
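A hedged Python sketch of these three steps, assuming the common convention that a smaller number means a higher priority (the process data is invented for illustration):

```python
def priority_schedule(processes):
    """Non-preemptive priority scheduling.

    `processes` holds (name, burst_time, priority) triples; a smaller
    priority number is treated as a higher priority. Returns
    (name, start_time) in execution order.
    """
    ordered = sorted(processes, key=lambda p: p[2])  # step 2: sort by priority
    clock, order = 0, []
    for name, burst, _prio in ordered:               # step 3: plain FCFS
        order.append((name, clock))
        clock += burst
    return order

print(priority_schedule([("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4)]))
# -> [('P2', 0), ('P1', 1), ('P3', 11)]
```

Because Python's `sorted` is stable, processes with equal priority keep their input order, which gives the first come, first served tie-breaking described above.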

-Advantages:

1. Simplicity.

2. Reasonable support for priority.

3. Suitable for applications with varying time and resource requirements.

-Disadvantages:

1. Indefinite blocking or starvation.

2. Priority scheduling can leave some low-priority processes waiting indefinitely for the CPU.

3. If the system eventually crashes, all unfinished low-priority processes are lost.

4) Round Robin:

-Analogy:

Round Robin is a CPU scheduling algorithm where each process is assigned a fixed time slot in a cyclic way.

· It is simple, easy to implement, and starvation-free, as all processes get a fair share of the CPU.
· It is one of the most commonly used techniques in CPU scheduling.
· It is preemptive, as processes are assigned the CPU only for a fixed slice of time at most.
· Its disadvantage is the extra overhead of context switching.

-Implementation:

1. The ready queue is a First In First Out (FIFO) queue.

2. A fixed time is allotted to every process that arrives in the queue. This fixed time is known as the time slice or time quantum.

3. The first process that arrives is selected and sent to the processor for execution. If it is not able to complete its execution within the time quantum provided, an interrupt is generated by an automated timer.

4. The process is then stopped and sent back to the end of the queue. However, its state is saved and its context is stored in memory. This lets the process resume from the point where it was interrupted.

5. The scheduler selects another process from the ready queue and dispatches it to the processor. It executes until its time quantum expires or it completes.

6. The same steps are repeated until all the processes are finished.
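The six steps above can be sketched in Python as follows (a simplified model: context-switch cost is ignored and a process's "state" is just its remaining burst time; the example workload is invented):

```python
from collections import deque

def round_robin(processes, quantum):
    """Round Robin scheduling with a fixed time quantum.

    `processes` is a list of (name, burst_time); returns the completion
    time of each process. A process that exceeds the quantum is
    preempted and re-enqueued at the back of the ready queue.
    """
    queue = deque(processes)            # step 1: FIFO ready queue
    clock, finished = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)   # steps 3/5: run one slice at most
        clock += run
        remaining -= run
        if remaining:                   # step 4: preempted, back of queue
            queue.append((name, remaining))
        else:                           # done: record completion time
            finished[name] = clock
    return finished

print(round_robin([("P1", 5), ("P2", 3), ("P3", 2)], quantum=2))
# -> {'P3': 6, 'P2': 9, 'P1': 10}
```

With quantum 2, the short process P3 finishes at time 6 instead of waiting behind P1 as it would under FCFS, which is the fairness benefit claimed above.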

-Advantages:

1. Every thread/process gets a chance to run.

2. The CPU is shared between all processes.

3. Threads with the same priority are handled fairly by Round Robin.

-Disadvantages:

1. Low-priority tasks may wait longer if many tasks are given high priority.

2. High-priority tasks may not finish their work within the stipulated amount of time.

Memory Management

-Paging

In computer operating systems, paging is a memory management scheme by which a computer stores and retrieves data from secondary storage for use in main memory. In this scheme, the operating system retrieves data from secondary storage in same-size blocks called pages. Paging is an important part of virtual memory implementations in modern operating systems, using secondary storage to let programs exceed the size of available physical memory.

For simplicity, main memory is called "RAM" and secondary storage is called "disk", but the concepts do not depend on whether these terms apply literally to a specific computer system.

Paging eliminates the need for contiguous allocation of physical memory: it permits the physical address space of a process to be non-contiguous.

· Logical Address or Virtual Address (represented in bits): an address generated by the CPU.

· Logical Address Space or Virtual Address Space: the set of all logical addresses generated by a program.

· Physical Address (represented in bits): an address actually available on the memory unit.

· Physical Address Space (represented in words or bytes): the set of all physical addresses corresponding to the logical addresses.
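To make the logical-to-physical mapping concrete, here is a small Python sketch of paging address translation; the 4 KiB page size and the page-table contents are assumptions chosen for illustration:

```python
PAGE_SIZE = 4096                   # assumed 4 KiB pages
page_table = {0: 7, 1: 2, 2: 9}    # hypothetical page -> frame mapping

def translate(logical_addr):
    """Split a logical address into (page number, offset) and map it
    to a physical address using the page table."""
    page = logical_addr // PAGE_SIZE
    offset = logical_addr % PAGE_SIZE
    if page not in page_table:
        # A real OS would handle this as a page fault (see below).
        raise KeyError(f"page fault: page {page} is not resident")
    return page_table[page] * PAGE_SIZE + offset

print(hex(translate(0x1ABC)))   # page 1, offset 0xABC -> frame 2 -> 0x2abc
```

Only the page number is replaced during translation; the offset within the page is carried over unchanged.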

· Page Faults

When a process tries to reference a page not currently present in RAM, the
processor treats this invalid memory reference as a page fault and transfers
control from the program to the operating system.

When all page frames are in use, the operating system must select a page
frame to reuse for the page the program now needs. If the evicted page
frame was dynamically allocated by a program to hold data, or if a program
modified it since it was read into RAM (in other words, if it has become
"dirty"), it must be written out to disk before being freed. If a program later
references the evicted page, another page fault occurs and the page must
be read back into RAM.

· Page Replacement Algorithm

Demand paging

When pure demand paging is used, pages are loaded only when they are referenced. A program backed by a memory-mapped file begins execution with none of its pages in RAM. As the program incurs page faults, the operating system copies the needed pages from the file or swap partition containing the page data into RAM.

· First In First Out (FIFO) –

This is the simplest page replacement algorithm. The operating system keeps track of all pages in memory in a queue, with the oldest page at the front. When a page needs to be replaced, the page at the front of the queue is selected for removal.

For example, consider the page reference string 1, 3, 0, 3, 5, 6 and 3 page slots.

Initially all slots are empty, so when 1, 3, 0 arrive they are allocated to the empty slots —> 3 page faults.

When 3 comes, it is already in memory —> 0 page faults.

Then 5 comes; it is not in memory, so it replaces the oldest page, i.e. 1 —> 1 page fault.

Finally 6 comes; it is also not in memory, so it replaces the oldest page, i.e. 3 —> 1 page fault.
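The FIFO policy traced above can be sketched as a short Python function (a simulation only, counting faults for a given reference string):

```python
from collections import deque

def fifo_page_faults(reference_string, num_frames):
    """Count page faults under FIFO replacement: on a fault with all
    frames full, evict the page that has been resident the longest."""
    frames = deque()            # oldest resident page at the left
    faults = 0
    for page in reference_string:
        if page in frames:
            continue            # hit: no fault, residency order unchanged
        faults += 1
        if len(frames) == num_frames:
            frames.popleft()    # evict the oldest page
        frames.append(page)
    return faults

print(fifo_page_faults([1, 3, 0, 3, 5, 6], 3))   # -> 5
```

This reproduces the worked example: 3 + 0 + 1 + 1 = 5 page faults in total.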

· Optimal Page Replacement –

In this algorithm, the page replaced is the one that will not be used for the longest duration of time in the future.

Consider the page reference string 7 0 1 2 0 3 0 4 2 3 0 3 2 and 4 page slots.

Initially all slots are empty, so 7, 0, 1, 2 are allocated to the empty slots —> 4 page faults.

0 is already there —> 0 page faults.

When 3 comes, it takes the place of 7, because 7 is not used for the longest duration of time in the future —> 1 page fault.

0 is already there —> 0 page faults.

4 takes the place of 1 —> 1 page fault.

For the rest of the reference string —> 0 page faults, because the pages are already in memory.
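A sketch of the optimal policy in Python (only feasible in a simulation, since it looks ahead at the full reference string):

```python
def optimal_page_faults(refs, num_frames):
    """Count page faults under the optimal policy: on a fault with full
    frames, evict the resident page whose next use lies farthest in the
    future (or that is never used again)."""
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue                    # hit
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)         # free frame still available
            continue

        def next_use(p):
            # Distance to p's next reference; "never again" sorts last.
            future = refs[i + 1:]
            return future.index(p) if p in future else len(refs)

        frames.remove(max(frames, key=next_use))
        frames.append(page)
    return faults

print(optimal_page_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 4))  # -> 6
```

This matches the worked example: 4 + 1 + 1 = 6 page faults, the minimum any replacement policy can achieve on this string.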

· Least Recently Used –

In this algorithm, the page replaced is the one that was least recently used.

Consider the same page reference string 7 0 1 2 0 3 0 4 2 3 0 3 2, with 4 page slots initially empty.

Since all slots start empty, 7, 0, 1, 2 are allocated to the empty slots —> 4 page faults.

0 is already there —> 0 page faults.

When 3 comes, it takes the place of 7, because 7 is the least recently used —> 1 page fault.

0 is already in memory —> 0 page faults.

4 takes the place of 1 —> 1 page fault.

For the rest of the reference string —> 0 page faults, because the pages are already in memory.
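LRU can be sketched by keeping the resident pages ordered by recency of use (a list suffices for a small simulation; real implementations use counters or approximation bits):

```python
def lru_page_faults(refs, num_frames):
    """Count page faults under LRU: on a fault with full frames, evict
    the page whose most recent use is furthest in the past."""
    frames, faults = [], 0          # ordered least- to most-recently used
    for page in refs:
        if page in frames:
            frames.remove(page)     # hit: refresh to most-recent position
            frames.append(page)
            continue
        faults += 1
        if len(frames) == num_frames:
            frames.pop(0)           # evict the least recently used page
        frames.append(page)
    return faults

print(lru_page_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 4))   # -> 6
```

On this particular reference string LRU happens to match the optimal policy with 6 faults, though in general it cannot do better than optimal.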

· Segmentation

Memory segmentation is the division of a computer's primary memory into segments or sections. In a computer system using segmentation, a reference to a memory location includes a value that identifies a segment and an offset (memory location) within that segment. Segments or sections are also used in object files of compiled programs when they are linked together into a program image and when the image is loaded into memory.

A process is divided into segments: the chunks that a program is divided into, which are not necessarily all of the same length. There are two types of segmentation:

1. Virtual memory segmentation –

Each process is divided into a number of segments, not all of which are resident at any one point in time.

2. Simple segmentation –

Each process is divided into a number of segments, all of which are loaded into memory at run time, though not necessarily contiguously.

There is no simple relationship between logical addresses and physical addresses in segmentation. A table called the Segment Table stores the information about all such segments.

Segment Table – It maps a two-dimensional logical address into a one-dimensional physical address. Each table entry has:

· Base Address: the starting physical address where the segment resides in memory.

· Limit: the length of the segment.
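A small Python sketch of segment-table translation using the base/limit entries described above (the table contents are hypothetical):

```python
# Hypothetical segment table: segment number -> (base address, limit)
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 1100)}

def seg_translate(segment, offset):
    """Translate a two-dimensional (segment, offset) logical address
    into a one-dimensional physical address, trapping when the offset
    exceeds the segment's limit."""
    base, limit = segment_table[segment]
    if offset >= limit:
        raise ValueError("segmentation fault: offset beyond segment limit")
    return base + offset

print(seg_translate(2, 53))   # -> 4300 + 53 = 4353
```

The limit check is what lets hardware catch out-of-bounds references; failing it is the classic "segmentation fault".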

Advantages of Segmentation –

· No internal fragmentation.

· The Segment Table consumes less space than a page table in paging.

Disadvantages of Segmentation –

· As processes are loaded and removed from memory, the free memory space is broken into little pieces, causing external fragmentation.

Theory/History:

Block Diagram:

Algorithm/Flowchart:

Application:

Conclusion:

References:

Future Scope:
Output:

Add screenshots for a software-based project, or photos for a hardware-based project.

A colourful printout is mandatory.
