
Types of Operating System

Introduction
An operating system is a set of programs that enables a user to operate and interact
with a computer. Examples of operating systems include Linux distributions, Windows,
macOS, and FreeBSD. There are many types of operating systems. In this article, we will
discuss various classifications of operating systems.

Types of Operating System


Batch Operating Systems
A batch operating system collects programs and data together in batch form and then processes
them. The main aim of using a batch processing system is to decrease the setup time when
submitting similar jobs to the CPU. Batch processing techniques were implemented with card
readers and later with hard disks: all jobs are saved on the hard disk to form a
pool of jobs for execution in batch form.

A batch monitor is started to execute all pooled jobs after reading them. These jobs are
divided into groups, and jobs of the same kind are run together in the same batch. The batched
jobs are then executed one by one; because of this, the system enhances utilization while
decreasing turnaround time.

Advantages

● In a batch system, repetitive jobs are performed without further user
intervention.
● Input data can be fed into a batch processing system without using extra
hardware components.
● Small-scale businesses can use batch processing systems to execute small
tasks to their benefit.
● Because the processors are given rest between batches, a batch system can
work in offline mode.
● A batch processing system consumes less total time to execute all jobs.
● Sharing of the batch system among multiple users is possible.
● The idle time of a batch system is very low.
● You can assign a specific time for batch jobs, so when the computer is idle it
starts processing the batch jobs.
● Batch systems can manage large volumes of repeated work easily.

Disadvantages

● Online sensors and monitoring are often not available in batch processing systems.
● Process characteristics vary over time.
● If any job halts, it becomes hard to predict how long the remaining workload will take.
● Due to a mistake, a job can enter an infinite loop.
● If the protection system is weak, one job can affect pending jobs.
● Computer operators must be trained to use batch systems.
● It is difficult to debug batch systems.
● Batch systems are sometimes costly.
● If a job takes too much time, for example when an error occurs in it, the other jobs
must wait for an unknown time.

Examples

● Payroll System
● Bank Invoice System
● Transactions Process
● Daily Report
● Research Segment
● Billing System
Time-sharing Operating Systems
Time-sharing is a logical extension of multiprogramming. The CPU executes multiple jobs by
switching among them, but the switches occur so frequently that users can interact with
each program while it is running. An interactive computer provides direct communication
between the user and the system: the user gives instructions to the OS or to a program directly,
using an input device, and waits for results.

A time-shared operating system uses CPU scheduling and multiprogramming to provide each
user with a small portion of a time-shared computer. Each user has at least one separate
program in memory. When a process executes, it executes for only a short time before it either
finishes or needs to perform input/output. In time-sharing operating systems several jobs must
be kept simultaneously in memory, so the system must have memory management and
protection.

Advantages

● Each task gets an equal opportunity.
● Fewer chances of duplication of software.
● CPU idle time can be reduced.

Disadvantages

● Reliability problems.
● One must take care of the security and integrity of user programs and data.
● Data communication problems.

Examples

● Windows 2000 Server
● Windows NT Server
● Unix
● Linux
Real-time Operating Systems
A real-time operating system (RTOS) is an operating system that runs multi-threaded
applications and can meet real-time deadlines. Most RTOSes include a scheduler, resource
management, and device drivers. Note that when we talk about “deadlines”, we do not
necessarily mean “fast”. Instead, it means we can determine, before runtime, when certain
tasks will execute.

An RTOS can be a powerful tool if you’re creating complex embedded programs. They help
isolate tasks and give you the ability to run them concurrently. You can set prioritization levels of
tasks in most RTOSes, which allow some tasks to interrupt and run before other tasks. This is
known as “preemption.” If you need concurrency or are getting into deeper embedded concepts
like IoT or machine learning, it's wise to add RTOSes and multi-threaded programming to your
toolkit.

Advantages

● Priority-Based Scheduling.
● Abstracting Timing Information.
● Maintainability/Extensibility.
● Modularity.
● Promotes Team Development.
● Easier Testing.
● Code Reuse.
● Improved Efficiency.
● Idle Processing.

Disadvantages

● Limited Tasks.
● Use Heavy System resources.
● Complex Algorithms.
● Device driver and interrupt signals.
● Thread Priority.

Examples

● Airline traffic control systems.
● Command control systems.
● Airline reservation systems.
● Heart pacemakers.
● Network multimedia systems.
● Robotics.

Multiprogramming Operating System

A multiprogramming operating system runs multiple programs on a single processor
computer. If a program waits for an I/O transfer, the other programs are ready to use the
CPU. As a result, various jobs may share CPU time. However, their jobs are not necessarily
executed in the same period. A multiprogramming OS is of the following two
types:

1. Multitasking OS: Enables execution of multiple programs at the same time. The
operating system accomplishes this by swapping each program in and out of
memory one at a time. When a program is switched out of memory, it is
temporarily saved on disk until it is required again.
2. Multiuser Operating System: This allows many users to share processing time on
a powerful central computer from different terminals. The operating system
accomplishes this by rapidly switching between terminals, each of which
receives a limited amount of processor time on the central computer.

Advantages

● It may help to run various jobs in a single application simultaneously.
● It helps to optimize the total job throughput of the computer.
● Various users may use the multiprogramming system at once.
● Short-time jobs are completed faster than long-time jobs.
● It may help to improve turnaround time for short-time tasks.
● It improves CPU utilization, so the CPU never stays idle.
● The resources are utilized smartly.

Disadvantages

● It is highly complicated and sophisticated.
● CPU scheduling is required.
● Memory management is needed in the operating system because all types of
tasks are stored in the main memory.
● Handling all processes and tasks is harder.
● If there is a large number of jobs, then long-term jobs will have a long wait.

Examples

● Applications like Office, Chrome, etc.
● Microcomputer systems like MP/M, XENIX, and DESQview.
● Windows OS
● UNIX OS
Multiprocessor Operating System
A multiprocessor operating system utilizes multiple processors, which are connected with
physical memory, computer buses, clocks, and peripheral devices (touchpad, joystick,
etc.). The main objective of using a multiprocessor OS is to harness high computing
power and increase the execution speed of the system.

Following are four major components, used in the Multiprocessor Operating System:

1. CPU – capable of accessing memories as well as controlling the entire set of I/O tasks.
2. Input/Output Processor – an I/O processor can access memory directly, and every
I/O processor is responsible for controlling all input and output tasks.
3. Input/Output Devices – these devices are used for inserting input
commands and producing output after processing.
4. Memory Unit – a multiprocessor system uses two types of memory modules:
shared memory and distributed shared memory.

Advantages

● Great Reliability.
● Improve Throughput.
● Cost-Effective System.
● Parallel Processing.

Disadvantages

● It is more expensive due to its larger architecture.
● Its speed can degrade due to the failure of any one processor.
● There is a longer delay when a processor receives a message and takes the
appropriate action.
● It has big challenges related to skew and determinism.
● It needs context switching, which can impact its performance.
Functions of Operating System

1. File Management

An operating system’s (OS) primary function is to manage files and folders.

Operating systems are responsible for managing the files on a computer. This includes
creating, opening, closing, and deleting files. The operating system is also responsible for
organizing the files on the disk.
Think of your operating system as a project manager. A project manager manages the whole
team, checks the work of all the team members, provides resources, and facilitates things for
team members. In the same way, the operating system is responsible for checking ongoing
processes, providing resources when required, and ensuring that everything is in order. This
also includes managing which files and folders are stored on the computer and who
has access to them.

The OS also handles file permissions, which dictate what actions a user can take on a
particular file or folder. For example, you may have the ability to read a file but not edit or
delete it. This prevents unauthorized users from accessing or tampering with your files.

The OS does the following tasks for managing files:

● Keeps track of the location and status of files.
● Allocates and deallocates resources.
● Decides which resource is assigned to which file.

Besides this, the OS helps in:

● Creating a file
● Editing a file
● Updating a file
● Deleting a file

2. Device management

Operating systems provide essential functions for managing devices connected to a
computer. These functions include allocating memory, processing input and output requests,
and managing storage devices. A device could be a keyboard, mouse, printer, or any other
device you may have connected.
An operating system provides options to manage how each device behaves. For
example, you can set up your keyboard to type in a specific language or make the
mouse pointer move only one screen at a time.

You can also use an operating system to install software and updates for your devices and
manage their security settings.

The operating system does the following tasks:

● Allocates and deallocates devices to different processes.
● Keeps records of all the devices attached to the computer.
● Decides which device is allocated to which process and for how much time.

3. Process management

The operating system’s responsibility is to manage the processes running on your
computer. This includes starting and stopping programs, allocating resources, and
managing memory usage. The operating system ensures that the programs running on your
computer are compatible with each other. It’s also responsible for enforcing program security,
which helps to keep your computer safe from potential attacks.

How do Operating systems manage all processes?

Each process is given a certain amount of time to execute, called a quantum. Once a
process has used its quantum, the operating system interrupts it and provides another
process with a turn. This ensures that each process gets a fair share of the CPU time.

The operating system manages processes by doing the following task:

● Allocates and deallocates resources.
● Allocates resources such that the system doesn’t run out of resources.
● Offers mechanisms for process synchronization.
● Helps in process communication.
4. Memory management

One of the most critical functions of an operating system is memory management. This is
the process of keeping track of all the different applications and processes running on your
computer and all the data they’re using.

This is especially important on computers with a limited amount of memory, as it ensures
that no application or process takes up too much space and slows down the computer.
The operating system can move data around and delete files to make more space.

Operating systems perform the following tasks-

● Allocates/deallocates memory to store programs.
● Decides the amount of memory that should be allocated to a program.
● Distributes memory while multiprocessing.
● Updates the status when memory is freed.
● Keeps a record of how much memory is used and how much is unused.

When a computer starts up, the operating system loads itself into memory and then
manages all the other running programs. It checks how much memory is used and how
much is available and makes sure that executing programs do not interfere with each other.

5. Job Accounting

An operating system’s (OS) job accounting feature is a powerful tool for tracking how your
computer’s resources are being used. This information can help you pinpoint and
troubleshoot any performance issues and identify unauthorized software installations.

Operating systems keep track of which users and processes use how many resources. This
information can be used for various purposes, including keeping tabs on system usage,
billing users for their use of resources, and providing information to system administrators
about which users and processes are causing problems.
The operating system does the following tasks:

● Keeps a record of all the activities taking place on the system.
● Keeps a record of information regarding resources, memory, errors, etc.
● Is responsible for program swapping (in and out) of memory.
● Keeps track of memory usage and assigns memory accordingly.
● Opens, closes, and writes to peripheral devices.
● Creates a file system for organizing files and directories.

Preemptive and Non-Preemptive Scheduling
Key Differences between Preemptive and Non-Preemptive Scheduling
● In Preemptive Scheduling, the CPU is allocated to the processes for a specific time
period, while in non-preemptive scheduling, the CPU is allocated to the process until it
terminates.
● In Preemptive Scheduling, tasks are switched based on priority, while in
non-preemptive Scheduling, no switching takes place.
● The preemptive algorithm has the overhead of switching the process from the ready
state to the running state, while Non-preemptive Scheduling has no such overhead of
switching.
● Preemptive Scheduling is flexible, while Non-preemptive Scheduling is rigid.

What is Preemptive Scheduling?

Preemptive Scheduling is a scheduling method where tasks are usually assigned
priorities. Sometimes it is important to run a higher-priority task before another,
lower-priority task, even if the lower-priority task is still running.
At that time, the lower-priority task is put on hold for some time and resumes when the
higher-priority task finishes its execution.
What is Non-Preemptive Scheduling?
In this type of scheduling method, the CPU is allocated to a specific process. The
process that keeps the CPU busy releases the CPU either by switching context or by
terminating.
It is the only method that can be used on various hardware platforms. That’s because it
doesn’t need specialized hardware (for example, a timer) like preemptive scheduling does.
Non-preemptive scheduling occurs when a process voluntarily enters the wait state or
terminates.

Preemptive vs Non-Preemptive Scheduling: Comparison Table

| Preemptive Scheduling | Non-preemptive Scheduling |
|---|---|
| A process can be preempted so that a different process executes in the middle of the current process's execution. | Once the processor starts executing a process, it must finish it before executing another. It can't be paused in the middle. |
| CPU utilization is more efficient compared to non-preemptive scheduling. | CPU utilization is less efficient compared to preemptive scheduling. |
| The waiting and response times of preemptive scheduling are lower. | The waiting and response times of the non-preemptive scheduling method are higher. |
| Preemptive scheduling is prioritized: the highest-priority ready process is the one that runs. | When any process enters the running state, it is not removed from the scheduler until it finishes its job. |
| Preemptive scheduling is flexible. | Non-preemptive scheduling is rigid. |
| Examples: Shortest Remaining Time First, Round Robin, etc. | Examples: First Come First Serve, Shortest Job First, Priority Scheduling, etc. |
| A running process can be preempted, that is, scheduled out and rescheduled later. | A running process cannot be preempted before it finishes or blocks. |
| The CPU is allocated to each process for a specific time period. | The CPU is allocated to a process until it terminates or switches to the waiting state. |
| The preemptive algorithm has the overhead of switching processes between the ready and running states and vice versa. | Non-preemptive scheduling has no such overhead of switching a process from the running state to the ready state. |

Advantages of Preemptive Scheduling

Here are the pros/benefits of the preemptive scheduling method:

● The preemptive scheduling method is a more robust approach, since one process cannot
monopolize the CPU.
● The choice of running task is reconsidered after each interruption.
● Each event causes an interruption of running tasks.
● The OS makes sure that CPU usage is shared equally by all running processes.
● This scheduling method also improves the average response time.
● Preemptive scheduling is beneficial when used in a multiprogramming
environment.

Advantages of Non-preemptive Scheduling


Here are the pros/benefits of the non-preemptive scheduling method:
● Offers low scheduling overhead.
● Tends to offer high throughput.
● It is a conceptually very simple method.
● Fewer computational resources are needed for scheduling.

Disadvantages of Preemptive Scheduling


Following are the drawbacks of preemptive scheduling:
● Needs more computational resources for scheduling.
● The scheduler takes more time to suspend the running task, switch the
context, and dispatch the new incoming task.
● A low-priority process may need to wait for a long time if high-priority
processes arrive continuously.

Disadvantages of Non-Preemptive Scheduling


Here are the cons/drawbacks of the non-preemptive scheduling method:
● It can lead to starvation, especially for real-time tasks.
● Bugs can cause a machine to freeze up.
● It can make real-time and priority scheduling difficult.
● Poor response time for processes.

Example of Non-Preemptive Scheduling


In non-preemptive SJF scheduling, once the CPU cycle is allocated to a process, the process
holds it till it reaches a waiting state or terminates.
Consider the following five processes, each having its own unique burst time and arrival
time.
| Process Queue | Burst time | Arrival time |
|---------------|------------|--------------|
| P1 | 6 | 2 |
| P2 | 2 | 5 |
| P3 | 8 | 1 |
| P4 | 3 | 0 |
| P5 | 4 | 4 |

Step 0) At time=0, P4 arrives and starts execution.

Step 1) At time = 1, process P3 arrives. But P4 still needs 2 execution units to
complete, so it continues execution.

Step 2) At time =2, process P1 arrives and is added to the waiting queue. P4
will continue execution.
Step 3) At time = 3, process P4 will finish its execution. The burst time of P3
and P1 is compared. Process P1 is executed because its burst time is less
compared to P3.

Step 4) At time = 4, process P5 arrives and is added to the waiting queue. P1
continues execution.

Step 5) At time = 5, process P2 arrives and is added to the waiting queue. P1
continues execution.

Step 6) At time = 9, process P1 will finish its execution. The burst time of P3,
P5, and P2 is compared. Process P2 is executed because its burst time is the
lowest.
Step 7) At time=10, P2 is executing, and P3 and P5 are in the waiting queue.

Step 8) At time = 11, process P2 will finish its execution. The burst time of P3
and P5 is compared. Process P5 is executed because its burst time is lower.

Step 9) At time = 15, process P5 will finish its execution.


Step 10) At time = 23, process P3 will finish its execution.

Step 11) Let’s calculate the average waiting time for the above example.

Wait time
P4 = 0 - 0 = 0
P1 = 3 - 2 = 1
P2 = 9 - 5 = 4
P5 = 11 - 4 = 7
P3 = 15 - 1 = 14

Average Waiting Time = (0 + 1 + 4 + 7 + 14) / 5 = 26/5 = 5.2
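
The walkthrough above can be reproduced with a short simulation. Below is a minimal Python sketch (the function and its structure are our own illustration, not part of the text) that schedules the same five processes non-preemptively by shortest burst and prints the waiting times:

```python
# Non-preemptive SJF sketch: at each decision point, pick the arrived
# process with the shortest burst and run it to completion.
def sjf_non_preemptive(processes):
    """processes: list of (name, burst_time, arrival_time) tuples."""
    remaining = list(processes)
    time, waits = 0, {}
    while remaining:
        ready = [p for p in remaining if p[2] <= time]
        if not ready:                         # CPU idle until the next arrival
            time = min(p[2] for p in remaining)
            continue
        job = min(ready, key=lambda p: p[1])  # shortest burst wins
        name, burst, arrival = job
        waits[name] = time - arrival          # waiting time = start - arrival
        time += burst                         # run to completion, no preemption
        remaining.remove(job)
    return waits

waits = sjf_non_preemptive([("P1", 6, 2), ("P2", 2, 5), ("P3", 8, 1),
                            ("P4", 3, 0), ("P5", 4, 4)])
print(waits)                                  # P4:0, P1:1, P2:4, P5:7, P3:14
print(sum(waits.values()) / len(waits))       # 5.2
```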

Example of Pre-emptive Scheduling

Consider the following three processes in Round Robin (time quantum = 2):

| Process Queue | Burst time |
|---------------|------------|
| P1 | 4 |
| P2 | 3 |
| P3 | 5 |

Step 1) The execution begins with process P1, which has burst time 4. Here,
every process executes for 2 seconds (the time quantum). P2 and P3 are still in the
waiting queue.

Step 2) At time = 2, P1 is added to the end of the queue and P2 starts
executing.

Step 3) At time = 4, P2 is preempted and added at the end of the queue. P3
starts executing.

Step 4) At time = 6, P3 is preempted and added at the end of the queue. P1
starts executing.

Step 5) At time = 8, P1 has a burst time of 4 and has completed execution. P2
starts executing.

Step 6) P2 has a burst time of 3 and has already executed for 2 intervals. At
time = 9, P2 completes execution. Then P3 executes until it completes.

Step 7) Let’s calculate the average waiting time for the above example.

Wait time
P1 = 0 + 4 = 4
P2 = 2 + 4 = 6
P3 = 4 + 3 = 7

Average Waiting Time = (4 + 6 + 7) / 3 = 17/3 ≈ 5.67
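
The same timeline can be checked with a small Round Robin simulation. The sketch below (our own helper, not from the text) assumes all three processes arrive at time 0 and uses a quantum of 2:

```python
# Round Robin sketch: rotate through a FIFO queue, running each process
# for at most one quantum, and compute waiting = completion - burst.
from collections import deque

def round_robin(processes, quantum=2):
    """processes: list of (name, burst_time) tuples, all arriving at t=0."""
    queue = deque(name for name, _ in processes)
    remaining = dict(processes)
    time, completion = 0, {}
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])
        time += run
        remaining[name] -= run
        if remaining[name] == 0:
            completion[name] = time
        else:
            queue.append(name)                # back to the end of the queue
    return {name: completion[name] - burst for name, burst in processes}

waits = round_robin([("P1", 4), ("P2", 3), ("P3", 5)])
print(waits)                                  # {'P1': 4, 'P2': 6, 'P3': 7}
print(sum(waits.values()) / len(waits))       # ≈ 5.67
```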
Deadlock in Operating System
A deadlock in an OS is a situation in which more than one process is blocked because each
is holding a resource and also requires some resource that is acquired by some
other process. The four necessary conditions for a deadlock situation to occur are
mutual exclusion, hold and wait, no preemption, and circular wait. We can prevent a
deadlock by preventing any one of these conditions. There are different ways to
detect a deadlock and recover the system from it.

The four necessary conditions for a deadlock to arise are as follows.

● Mutual Exclusion: Only one process can use a resource at any given time, i.e. the
resources are non-sharable.
● Hold and wait: A process holds at least one resource while waiting
to acquire other resources held by some other process.
● No preemption: A resource can be released only voluntarily by the process holding it,
i.e. after that process finishes execution.
● Circular wait: A set of processes is waiting for each other in a circular chain, each
holding a resource the next one needs.

Example
Consider two processes and two resources. Process 1 holds "Resource 1" and needs
"Resource 2", while Process 2 holds "Resource 2" and requires "Resource 1".
This creates a deadlock because neither of the two processes can proceed.
Since the resources are non-shareable, they can only be used by one process at a
time (mutual exclusion). Each process is holding a resource and waiting for the other
process to release the resource it requires. Neither process releases its
resource before finishing execution, and this creates a circular wait. Therefore, all four
conditions are satisfied.
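
To make this concrete, here is a minimal Python sketch (our own illustration; the resource and thread names are made up) in which two threads each hold one lock while waiting for the other, satisfying all four conditions at once:

```python
# Two-lock deadlock sketch: each thread holds one resource (mutual
# exclusion, hold and wait), locks are never forcibly taken back
# (no preemption), and each waits for the other (circular wait).
import threading, time

resource_1, resource_2 = threading.Lock(), threading.Lock()

def process_1():
    with resource_1:                  # hold Resource 1
        time.sleep(0.1)               # let process_2 grab Resource 2 first
        with resource_2:              # blocks forever: held by process_2
            print("process 1 finished")

def process_2():
    with resource_2:                  # hold Resource 2
        time.sleep(0.1)
        with resource_1:              # blocks forever: held by process_1
            print("process 2 finished")

t1 = threading.Thread(target=process_1, daemon=True)
t2 = threading.Thread(target=process_2, daemon=True)
t1.start(); t2.start()
t1.join(timeout=1); t2.join(timeout=1)
print("deadlocked:", t1.is_alive() and t2.is_alive())   # True
```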

Methods of Handling Deadlocks in Operating System

The first two methods are used to ensure the system never enters a deadlock.

Deadlock Prevention
This is done by restraining the ways a request can be made. Since deadlock occurs when all
the above four conditions are met, we try to prevent any one of them, thus preventing a
deadlock.

Deadlock Avoidance
When a process requests a resource, the deadlock avoidance algorithm examines the
resource-allocation state. If allocating that resource sends the system into an unsafe state,
the request is not granted.

Therefore, it requires additional information, such as how many resources of each type are
required by a process. If granting a request would put the system into an unsafe state, the
system has to take a step back to avoid deadlock.

Deadlock Detection and Recovery

We let the system fall into a deadlock, and if it happens, we detect it using a detection
algorithm and try to recover.

Some ways of recovery are as follows.

● Aborting all the deadlocked processes.
● Aborting one process at a time until the system recovers from the deadlock.
● Resource preemption: resources are taken one by one from a process and
assigned to higher-priority processes until the deadlock is resolved.

Deadlock Ignorance
In this method, the system assumes that a deadlock never occurs. Since deadlock
situations are not frequent, some systems simply ignore them. Operating systems such
as UNIX and Windows follow this approach. However, if a deadlock occurs, we can reboot
the system and the deadlock is resolved automatically.

Advantages of the Deadlock Method

● No preemption is needed for deadlocks.
● It is a good method if the state of the resource can be saved and restored easily.
● It is good for activities that perform a single burst of activity.
● It does not need run-time computation because the problem is solved at system
design time.

Disadvantages of the Deadlock Method

● The processes must know the maximum resources of each type required to
execute.
● Preemptions are frequently encountered.
● It delays process initiation.
● There are inherent preemption losses.
● It does not support incremental requests of resources.

Deadlock Avoidance in Operating System

The deadlock avoidance method is used by the operating system to check
whether the system is in a safe state or in an unsafe state. In order to avoid
deadlocks, each process must tell the operating system the maximum
number of resources it may request in order to complete its execution.
Safe State and Unsafe State

A state is safe if the system can allocate resources to each process (up to its maximum
requirement) in some order and still avoid a deadlock. Formally, a system is in a safe
state only if there exists a safe sequence. So a safe state is not a deadlocked state, and
conversely, a deadlocked state is an unsafe state.

In an unsafe state, the operating system cannot prevent processes from requesting
resources in such a way that a deadlock occurs. Not all unsafe states are deadlocks;
an unsafe state may lead to a deadlock.

(Figure: the safe, unsafe, and deadlocked state spaces.)
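
Whether a state is safe can be decided with the safety algorithm from the Banker's algorithm. The following Python sketch is our own; the matrices are the classic textbook example values, shown here only for illustration:

```python
# Safety check (Banker's algorithm): repeatedly find a process whose
# remaining need fits in the available resources, let it finish, and
# reclaim its allocation. If every process can finish, the state is safe.
def is_safe(available, max_need, allocation):
    need = [[m - a for m, a in zip(mrow, arow)]
            for mrow, arow in zip(max_need, allocation)]
    work, n = available[:], len(allocation)
    finished, sequence = [False] * n, []
    while len(sequence) < n:
        for i in range(n):
            if not finished[i] and all(x <= w for x, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                sequence.append(i)
                break
        else:
            return False, []          # no process can proceed: unsafe
    return True, sequence

ok, seq = is_safe(available=[3, 3, 2],
                  max_need=[[7, 5, 3], [3, 2, 2], [9, 0, 2],
                            [2, 2, 2], [4, 3, 3]],
                  allocation=[[0, 1, 0], [2, 0, 0], [3, 0, 2],
                              [2, 1, 1], [0, 0, 2]])
print(ok, seq)                        # True [1, 3, 0, 2, 4]
```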

Deadlock Detection
In this method, the OS assumes that a deadlock may occur in the future. So it
runs a deadlock detection mechanism at certain intervals of time, and
when it detects a deadlock, it starts a recovery approach.

The main task of the OS is to detect the deadlock. There are two methods of
detection, which we've already covered before.

Here, we use the same methods with some improvisations:


In the wait-for graph method, the OS checks for the formation of a cycle. It is
somewhat similar to the resource allocation graph (RAG), with some
differences; the two are often confused with each other.

The main difference between a RAG and a wait-for graph is the kinds of vertices
each graph contains. A RAG has two kinds of vertices: resources
and processes. A wait-for graph has only one kind of vertex: processes.
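
Detecting a deadlock in a wait-for graph then reduces to cycle detection. A minimal depth-first search sketch (our own; the sample graphs are made up) is shown below:

```python
# Cycle detection in a wait-for graph. An edge P -> Q means "P is
# waiting for a resource held by Q"; a cycle means deadlock.
def has_cycle(graph):
    WHITE, GRAY, BLACK = 0, 1, 2      # unvisited / on the DFS stack / done
    color = {node: WHITE for node in graph}

    def dfs(node):
        color[node] = GRAY
        for neighbour in graph.get(node, []):
            if color[neighbour] == GRAY:              # back edge: cycle found
                return True
            if color[neighbour] == WHITE and dfs(neighbour):
                return True
        color[node] = BLACK
        return False

    return any(color[node] == WHITE and dfs(node) for node in graph)

print(has_cycle({"P1": ["P2"], "P2": ["P1"]}))   # True  (deadlock)
print(has_cycle({"P1": ["P2"], "P2": []}))       # False
```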

What is Virtual Memory in OS (Operating System)?

Virtual Memory is a storage scheme that gives the user an illusion of having a very big
main memory. This is done by treating a part of secondary memory as if it were main
memory.

In this scheme, the user can load processes bigger than the available main memory,
under the illusion that enough memory is available to load them.

Instead of loading one big process in the main memory, the Operating System loads the
different parts of more than one process in the main memory.

By doing this, the degree of multiprogramming will be increased and therefore, the CPU
utilization will also be increased.

How Virtual Memory Works?

Virtual memory has become quite common in the modern world. In this scheme,
whenever some pages need to be loaded into the main memory for execution and the
memory is not available for that many pages, then, instead of stopping the
pages from entering the main memory, the OS searches for the RAM areas that are least
recently used or not referenced, and copies them into the secondary
memory to make space for the new pages in the main memory.

Since this whole procedure happens automatically, it makes the computer feel like
it has unlimited RAM.

Demand Paging
Demand Paging is a popular method of virtual memory management. In demand
paging, the pages of a process that are least used get stored in the secondary
memory.

A page is copied to the main memory when its demand is made or page fault occurs.
There are various page replacement algorithms which are used to determine the pages
which will be replaced. We will discuss each one of them later in detail.

Advantages of Virtual Memory

1. The degree of multiprogramming will be increased.

2. Users can run large applications with less real RAM.

3. There is no need to buy more RAM.

Disadvantages of Virtual Memory

1. The system becomes slower since swapping takes time.

2. It takes more time in switching between applications.

3. The user will have the lesser hard disk space for its use.

Example of virtual memory

Suppose the CPU wants to access process P1, which is divided into ten pages. Following the idea
of virtual memory, only pages P1, P3, P5, P6, and P8 are selected to be loaded into the main
memory. To locate them, the CPU has to consult the page table. First, the CPU checks whether
a page has a valid (v) or invalid (i) bit. A valid bit indicates that the page is in main memory, and
an invalid bit indicates that the page is not in main memory and has to be loaded from secondary
memory. From the page table, we can see that page 1 is at frame 1, page 3 is at frame 2, page 5
is at frame 3, and so on. If a page is not in the main memory, then pages not in use
are swapped out, and the newly required page is swapped in. In short, virtual memory combines
the concepts of demand paging and swapping.
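
The lookup described above can be sketched in a few lines of Python (our own illustration; the frame numbers for pages 6 and 8 are assumed, since the text only gives the first three):

```python
# Page-table lookup sketch: a valid bit means the page is resident in a
# frame; an invalid (missing) entry triggers a page fault.
page_table = {
    1: {"frame": 1, "valid": True},
    3: {"frame": 2, "valid": True},
    5: {"frame": 3, "valid": True},
    6: {"frame": 4, "valid": True},   # assumed frame
    8: {"frame": 5, "valid": True},   # assumed frame
}

def access(page):
    entry = page_table.get(page)
    if entry and entry["valid"]:
        return f"page {page} -> frame {entry['frame']}"
    # invalid bit: trap to the OS, which loads the page from secondary memory
    return f"page {page}: page fault, load from secondary memory"

print(access(3))   # page 3 -> frame 2
print(access(4))   # page 4: page fault, load from secondary memory
```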

Page Replacement in OS

What is Page Replacement in Operating Systems?

Page replacement is needed in the operating systems that use virtual memory using
Demand Paging. As we know that in Demand paging, only a set of pages of a process is
loaded into the memory. This is done so that we can have more processes in the
memory at the same time.

When a page that is residing in virtual memory is requested by a process for its
execution, the Operating System needs to decide which page will be replaced by this
requested page. This process is known as page replacement and is a vital component in
virtual memory management.

Why Need Page Replacement Algorithms?

To understand why we need page replacement algorithms, we first need to know about
page faults. Let’s see what is a page fault.
Page Fault: A page fault occurs when a program running on the CPU tries to access a page
that is in the address space of that program, but the requested page is currently not
loaded into the main physical memory (the RAM) of the system.

Since the actual RAM is much smaller than the virtual memory, page faults occur. So
whenever a page fault occurs, the operating system has to replace an existing page in
RAM with the newly requested page. In this scenario, page replacement algorithms help
the operating system decide which page to replace. The primary objective of all
page replacement algorithms is to minimize the number of page faults.

Example of Page Replacement Algorithms in Operating Systems

Least Recently Used (LRU) Page Replacement Algorithm


The least recently used page replacement algorithm keeps track of the usage of pages over a
period of time. This algorithm works on the basis of the principle of locality of reference, which
states that a program has a tendency to access the same set of memory locations repetitively
over a short period of time. So pages that have been used heavily in the past are most likely to
be used heavily in the future as well.

In this algorithm, when a page fault occurs, the page that has not been used for the longest
duration of time is replaced by the newly requested page.

Example: Let’s see the performance of LRU on the reference string 3, 1, 2, 1, 6, 5, 1,
3 with 3 page frames:
● Initially, since all the slots are empty, pages 3, 1, 2 cause a page fault and take the
empty slots.

Page faults = 3

● When page 1 comes, it is in the memory and no page fault occurs.

Page faults = 3

● When page 6 comes, it is not in the memory, so a page fault occurs and the least
recently used page 3 is removed.

Page faults = 4

● When page 5 comes, it again causes a page fault, and page 2 is removed as it is
now the least recently used page (pages 1 and 6 were both used more recently).

Page faults = 5

● When page 1 comes again, it is already in memory, so no page fault occurs.

Page faults = 5

● When page 3 comes, a page fault occurs again, and this time page 6 is removed
as the least recently used one.

Total page faults = 6

In the above example, LRU causes 6 page faults, one fewer than the 7 that FIFO produces on
the same string. This will not always be the case, as it depends upon the reference string, the
number of frames available in memory, etc., but on most occasions, LRU is better than FIFO.
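
The trace can be verified with a tiny simulator. This Python sketch (our own) keeps the resident pages ordered from least to most recently used:

```python
# LRU simulator: on a hit, move the page to the most-recent end; on a
# fault with full frames, evict the page at the least-recent end.
def lru_faults(reference, frames):
    memory, faults = [], 0
    for page in reference:
        if page in memory:
            memory.remove(page)       # refresh recency
        else:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)         # evict least recently used
        memory.append(page)           # most recently used at the end
    return faults

print(lru_faults([3, 1, 2, 1, 6, 5, 1, 3], 3))   # 6
```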

Advantages

● It is open to full analysis.
● Doesn’t suffer from Belady’s anomaly.
● Often more efficient than other algorithms.

Disadvantages

● It requires additional data structures to implement.
● More complex.
● Significant hardware assistance is required.

Last In First Out (LIFO) Page Replacement Algorithm

This algorithm works on the Last In First Out (LIFO) principle: the newest
page is replaced by the requested page. Usually, this is done through a stack, where we maintain
a stack of the pages currently in memory, with the newest page at the top. Whenever a
page fault occurs, the page at the top of the stack is replaced.

Example: Let’s see how the LIFO performs for our example string of 3, 1, 2, 1, 6, 5, 1, 3 with
3-page frames:

● Initially, since all the slots are empty, pages 3, 1, 2 cause page faults and take the
empty slots.

Page faults = 3

● When page 1 comes, it is in the memory and no page fault occurs.

Page faults = 3
● When page 6 comes, the page fault occurs and page 2 is removed as it is on the
top of the stack and is the newest page.

Page faults = 4

● When page 5 comes, it is not in the memory, which causes a page fault, and
hence page 6 is removed being on top of the stack.

Page faults = 5

● When page 1 and page 3 come, they are in memory already, hence no page fault
occurs.

Total page faults = 5

As you may notice, this is the same number of page faults as the Optimal page replacement
algorithm. So we can say that, for this series of pages, this is the best result that can be
achieved without prior knowledge of future references.
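
The LIFO trace can be verified the same way with a small sketch (our own), where the newest resident page sits on top of the stack:

```python
# LIFO simulator: on a fault with full frames, evict the newest page
# (top of the stack); hits do not reorder anything.
def lifo_faults(reference, frames):
    stack, faults = [], 0
    for page in reference:
        if page not in stack:
            faults += 1
            if len(stack) == frames:
                stack.pop()           # evict the newest page
            stack.append(page)        # new page goes on top
    return faults

print(lifo_faults([3, 1, 2, 1, 6, 5, 1, 3], 3))  # 5
```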

Advantages

● Simple to understand.
● Easy to implement.
● Minimal overhead.

Disadvantages

● Does not consider the locality principle, hence may produce worst-case performance.
● Old pages may reside in memory forever even if they are not used.

Differences between Memory Mapped I/O and I/O Mapped I/O

In Memory Mapped Input/Output:

● We allocate a memory address to an Input-Output device.

● Any instruction related to memory can be used to access this Input-Output
device.

● The Input-Output device data are also given to the Arithmetic Logical
Unit.

In Input-Output Mapped Input/Output:

● We give an Input-Output address to an Input-Output device.

● Only the IN and OUT instructions can access such devices.

● The ALU operations are not directly applicable to such Input-Output data.
| I/O mapped I/O | Memory mapped I/O |
|---|---|
| The devices are provided with 8-bit address values. | The devices are provided with 16-bit address values. |
| For transferring information, the instructions used are IN and OUT. | Since the peripherals are treated as memory locations, all memory-related instructions such as LDA, STA, etc. can be used. |
| I/O read or I/O write cycles are used to access the interfaced devices. | Memory read or memory write cycles are used to access the interfaced devices. |
| The entire memory address space can be used solely for addressing memory. | The entire memory address space cannot be used solely for addressing memory, since part of it addresses the interfaced devices. |
| Only the accumulator and an I/O device can be used for data transfer. | Any register and an I/O device can be used for data transfer. |
| The decoder hardware involved is less. | The decoder hardware involved is more. |
| ALU operations cannot be performed directly on the data. | ALU operations can be performed directly on the data. |
| 2^8 I/O ports are available for interfacing. | 2^16 I/O ports are available for interfacing. |

Protection and Security in Operating System

What is Protection and Security in Operating Systems?


The OS uses two sets of techniques to counter threats to information, namely:

● Protection
● Security

Protection

Protection tackles the system's internal threats. It provides a mechanism for controlling access
to processes, programs, and user resources. In simple words, it specifies which files a specific
user can access, view, and modify, to maintain the proper functioning of the system. It allows
the safe sharing of a common physical or logical address space among multiple users.

Security

Security tackles the system's external threats. The safety of system
resources such as saved data, disks, and memory is ensured by the
security system against harmful modifications, unauthorized access, and
inconsistency. It provides mechanisms (encryption and authentication) to
verify a user before allowing access to the system.

Difference between Protection and Security

| Protection | Security |
|---|---|
| Protection deals with who has access to the system resources. | Security gives the system access only to authorized users. |
| Protection tackles the system's internal threats. | Security tackles the system's external threats. |
| Protection addresses simple queries. | More complex queries are addressed in security. |
| It specifies which files a specific user can access, view, and modify. | It defines who is permitted to access the system. |
| An authorization mechanism is used in protection. | Encryption and certification (authentication) mechanisms are implemented. |
| Protection provides a mechanism for controlling access to processes, programs, and user resources. | Security provides a mechanism to safeguard the system resources and the user resources from all external users. |
Threats to Protection and Security
A program that is malicious in nature and has harmful impacts on a system is
called a threat.

Common Threats That Occur in a System

In a system, some common threats include the following:

Virus

A computer virus is a form of malware, or malicious software, that transmits between
computers and corrupts software and data. Generally, viruses are small pieces of code
that are embedded in a system. They can corrupt files, erase data, crash systems, and
do other things that make them extremely dangerous. They can also spread by replicating
themselves.

Trojan Horse

A Trojan horse is a form of malware that is downloaded onto a computer by
impersonating a trustworthy program. A Trojan horse can get unauthorized access to a
system's login information, which a malicious user may then use to enter the system.

Worm

A computer worm is a sort of malware whose main purpose is to keep operating on
infected systems while self-replicating and infecting other computers. By using a
system's resources to extreme levels, a worm can completely destroy the system. It has the
ability to produce duplicate copies that occupy all available resources and prevent any other
processes from using them.

Trap Door

A trap door is basically a back door into software that anyone can use to access any
system without having to follow the normal security access procedures. It may exist in a
system without the user's knowledge. As they're so hard to detect, trap doors need
programmers or developers to thoroughly examine all of the system's components in
order to find them.

Denial of Service

A Denial-of-Service (DoS) attack aims to shut down a computer system or network so
that its intended users are unable to access it. These kinds of attacks prevent
authorized users from accessing a system.

Methods to Ensure Protection and Security in Operating System

● Keep a data backup: It is a safe option in case of data corruption due to
problems in protection and security; you can always restore the data from the backup.

● Beware of suspicious emails and links: When we visit some malicious link over
the internet, it can cause a serious issue by acquiring user access.

● Secure authentication and authorization: The OS should provide secure
authentication and authorization for access to resources, and users should
keep their credentials safe to avoid illegal access to resources.

● Use Secure Wi-Fi Only: Sometimes using free wifi or insecure wifi may cause
security issues, because attackers can transmit harmful programs over the
network or record the activity etc, which could cause a big problem in the worst
case.

● Install anti-virus and malware protection: It helps to remove and avoid viruses
and malware from the system.

● Manage access wisely: Access should be provided to apps and software only after
thorough analysis, because no software can harm our system until it acquires
access. So we should grant software suitable access, and we can always keep an
eye on what resources and access it is using.

● Firewall utilities: Firewalls enable us to monitor and filter network traffic. We can use
them to ensure that only authorized users are allowed to access or transfer
data.

● Encryption and decryption based transfer: Data must be transferred
using an encryption algorithm that can only be reversed with the
appropriate decryption key. This process protects your data from unauthorized
access over the internet; even if the data is stolen, it remains
unreadable.

● Be cautious when sharing personal information: Personal information and
credentials must be shared only with trusted and safe sources. Otherwise,
attackers can use this information for their own ends, which could be harmful to the
system's security.

Direct Memory Access (DMA)

Direct Memory Access (DMA) transfers blocks of data between the
memory and the peripheral devices of the system, without the
participation of the processor. The unit that controls this direct memory
access activity is called a DMA controller.

The DMA controller transfers data in three modes:

1. Burst Mode: Here, once the DMA controller gains control of the
system bus, it releases the bus only after completing the
data transfer. Until then, the CPU has to wait for the system buses.

2. Cycle Stealing Mode: In this mode, the DMA controller forces the CPU
to stop its operation and relinquish control over the bus for a short
time to the DMA controller. After the transfer of every byte, the DMA
controller releases the bus and then requests the system bus again.
In this way, the DMA controller steals a clock cycle for transferring
every byte.

3. Transparent Mode: Here, the DMA controller takes charge of the
system bus only when the processor does not require the system bus.

Direct Memory Access Controller and its Working

A DMA controller is a hardware unit that allows I/O devices to access memory
directly, without the participation of the processor. Here is how the DMA
controller works, step by step:
1. Whenever an I/O device wants to transfer the data to or from memory, it
sends the DMA request (DRQ) to the DMA controller. DMA controller
accepts this DRQ and asks the CPU to hold for a few clock cycles by
sending it the Hold request (HLD).
2. CPU receives the Hold request (HLD) from DMA controller and
relinquishes the bus and sends the Hold acknowledgement (HLDA) to
DMA controller.
3. After receiving the Hold acknowledgement (HLDA), DMA controller
acknowledges I/O device (DACK) that the data transfer can be
performed and DMA controller takes the charge of the system bus and
transfers the data to or from memory.
4. When the data transfer is accomplished, the DMA controller raises an interrupt to
let the processor know that the data transfer is finished, and the
processor can take control over the bus again and resume processing
where it left off.


Direct Memory Access Advantages and Disadvantages
Advantages:

1. Transferring data without the involvement of the processor speeds up
read-write tasks.
2. DMA reduces the clock cycles required to read or write a block of data.
3. Implementing DMA also reduces the overhead on the processor.

Disadvantages

1. As it is a hardware unit, implementing a DMA controller adds cost to
the system.
2. Cache coherence problems can occur while using a DMA controller.

Disk Scheduling Algorithms


The operating system performs a disk scheduling process to schedule the I/O requests
that arrive at the disk. Disk scheduling is important since:

1. Many I/O requests may arrive from different processes, and the disk
controller can serve only one I/O request at a time. As a result, other I/O
requests need to wait in the waiting queue and get scheduled.
2. The operating system needs to manage the hardware efficiently.
3. It reduces seek time.

To perform disk scheduling, we have six disk scheduling algorithms: FCFS, SSTF, SCAN,
C-SCAN, LOOK, and C-LOOK.

The goals of a disk scheduling algorithm are to:

1. Have a minimum average seek time.
2. Have minimum rotational latency.
3. Have high throughput.

What is the C-SCAN Algorithm?

The C-SCAN (Circular SCAN) scheduling algorithm is a variant of the SCAN scheduling
algorithm designed to provide a more uniform wait time.
Note: In the Scan algorithm, the disk arm starts at one end of the disk and moves
towards the other end, servicing requests as it reaches each cylinder until it gets to the
other end of the disk.
Like Scan, C-Scan moves the head from one end of the disk to the other, servicing
requests along the way. However, when the head reaches the other end, it immediately
returns to the beginning of the disk without servicing any requests on the return trip. The
C-Scan scheduling algorithm treats the cylinders as a circular list that wraps around
from the final cylinder to the first one.

Algorithm
To understand the C-SCAN algorithm, let us assume a disk queue with requests for I/O.
'head' is the position of the disk head. We now apply the C-SCAN algorithm:

1. Arrange all the I/O requests in ascending order.
2. The head starts moving in the right direction, i.e. from 0 towards the last
block of the disk.
3. As soon as a request is encountered, the head movement is calculated as the
current request minus the previous request.
4. This process is repeated until the head reaches the end of the disk, and the
head movements are added up.
5. When the end is reached, the head goes from right to left without processing
any requests.
6. As the head reaches the beginning of the disk, i.e. 0, the head again starts
moving in the right direction, and head movement keeps being added.
7. The process is complete as soon as all the requests are processed, and
we get the total head movement.
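
The total head movement can be computed directly. The sketch below (our own helper; the request queue, head position, and disk size are made-up example values) follows steps 1–7 for a head that starts by moving right:

```python
# C-SCAN head movement: sweep right to the last cylinder, jump back to
# cylinder 0 without servicing, then sweep right to the farthest
# remaining (left-side) request.
def c_scan_movement(requests, head, disk_size):
    left = sorted(r for r in requests if r < head)   # serviced after the wrap
    movement = (disk_size - 1) - head                # right sweep to the end
    movement += disk_size - 1                        # return jump to cylinder 0
    if left:
        movement += left[-1]                         # sweep up to the last one
    return movement

print(c_scan_movement([82, 170, 43, 140, 24, 16, 190], head=50,
                      disk_size=200))                # 391
```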

File concept in OS
What is a file?
A file can be explained as the smallest unit of storage on a computer
system. The user can perform file operations like open, close, read, write, and
modify.

File concept
The operating system provides a logical view of the information stored on
the disks; this logical unit is known as a file. The information stored in files is
not lost during power failures.

A file helps to write data on the computer. It is a sequence of bits, bytes, or
records, the structure of which is defined by the owner and depends on the
type of the file.
Different types of files are:

● Executable file

In an executable file, the binary code that is loaded in the memory for
execution is stored. It is stored in an exe type file.

● Source file

The source file has subroutines and functions that are compiled later.

● Object file

An object file is a sequence of bytes used by the linker.

● Text file

A text file is a sequence of characters.

● Image file

An image file is a sequence of visual information, for example, vector art.


Access Matrix in OS
An access matrix in operating system is used to define each process’s rights
for each object executing in the domain. It helps in the protection of a
system and specifies the permissions/rights for every process executing in
the domain. The access matrix in os is represented as a two-dimensional
matrix.

Implementation of Access Matrix in OS


Four widely used access matrix implementations can be formed using these
decomposition methods:

● Global Table
● Capability Lists for Domains
● Access Lists for Objects
● Lock and key Method

Capability Lists
In the access matrix of an operating system, a capability list is a collection of
objects and the operations that can be performed on them. The object here is
specified by a physical name called a capability. In this method, we
associate each row with its domain, instead of connecting the columns of the
access matrix to the objects as in an access list.
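
A small Python sketch makes the structure concrete (the domains, objects, and rights below are made-up examples, and the dictionary layout is our own choice):

```python
# Access matrix as a dict of dicts: rows are domains, columns are objects,
# and each cell is the set of rights that domain has on that object.
access_matrix = {
    "D1": {"file1": {"read"},          "file2": {"read", "write"}},
    "D2": {"file1": {"read", "write"}, "printer": {"print"}},
    "D3": {"file2": {"read"}},
}

def check_access(domain, obj, right):
    return right in access_matrix.get(domain, {}).get(obj, set())

# A capability list is one row (everything a domain may do);
# an access list is one column (every domain allowed to touch an object).
capability_list_d2 = access_matrix["D2"]
access_list_file2 = {d: rights["file2"]
                     for d, rights in access_matrix.items() if "file2" in rights}

print(check_access("D1", "file2", "write"))  # True
print(check_access("D3", "file2", "write"))  # False
print(capability_list_d2)
print(access_list_file2)
```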
File Allocation Methods in Operating System

File Allocation Methods
There are different kinds of methods used to allocate disk space.
We must select the best method for file allocation because it
directly affects system performance and efficiency. The allocation
method determines how the disk is utilized and how files can be accessed.
There are various types of file allocation methods:

1. Contiguous allocation
2. Extents
3. Linked allocation
4. Clustering
5. FAT
6. Indexed allocation
7. Linked Indexed allocation
8. Multilevel Indexed allocation
9. Inode

There are different types of file allocation methods, but we mainly use three
types of file allocation methods:

1. Contiguous allocation
2. Linked list allocation
3. Indexed allocation

These methods provide quick access to the file blocks and utilize disk space
in an efficient manner.
Contiguous Allocation: Contiguous allocation is one of the most commonly used
allocation methods. Contiguous allocation means we allocate the blocks
in such a manner that, on the hard disk, each file gets a run of contiguous
physical blocks.
We can see in the figure below that in the directory we have three files. The
table mentions the starting block and the length of each file, and we can
see that each file is allocated a contiguous run of blocks.
Example of contiguous allocation
We can see in the given diagram that there is a file named
‘mail.’ The file starts from the 19th block, and the length of the file is 6. So
the file occupies 6 blocks in a contiguous manner: blocks
19, 20, 21, 22, 23, and 24.
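
Since a contiguous file is fully described by its start block and length, listing its blocks is trivial. A minimal sketch follows (the directory entries other than 'mail' are made up):

```python
# Contiguous allocation: a directory entry is (start_block, length).
directory = {"mail": (19, 6), "count": (0, 2), "list": (28, 4)}

def blocks_of(name):
    start, length = directory[name]
    return list(range(start, start + length))   # one contiguous run

print(blocks_of("mail"))                         # [19, 20, 21, 22, 23, 24]
```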

Advantages of Contiguous Allocation

The advantages of contiguous allocation are:

1. The contiguous allocation method gives excellent read performance.
2. Contiguous allocation is easy to implement.
3. The contiguous allocation method supports both types of file access
methods: sequential access and direct access.
4. The contiguous allocation method is fast because the number of seeks
is less, due to the contiguous allocation of file blocks.

Disadvantages of Contiguous Allocation

The disadvantages of the contiguous allocation method are:

1. In the contiguous allocation method, the disk can become
fragmented.
2. In this method, it is difficult to increase the size of a file, since this
depends on the availability of a contiguous block of memory.

Linked List Allocation

The linked list allocation method overcomes the drawbacks of the
contiguous allocation method. In this file allocation method, each file is
treated as a linked list of disk blocks. In the linked list allocation method,
it is not required that the disk blocks assigned to a specific file are in
contiguous order on the disk. The directory entry comprises a pointer to the
starting file block and a pointer to the ending file block. Each disk block
allocated to a file contains a pointer, and that pointer points to
the next disk block allocated to the same file.
Example of linked list allocation
We can see in the figure below that we have a file named ‘jeep.’ The value
of start is 9, so the allocation starts from the 9th block, and
blocks are allocated in a random manner. The value of end is 25, which
means the allocation finishes on the 25th block. The block (25) contains -1,
which is a null pointer: it does not point to another block.
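
Reading such a file means following the pointer chain from the start block. A minimal sketch (the chain below mirrors the 'jeep' example, with the block order assumed):

```python
# Linked allocation: next_block[b] is the pointer stored in block b;
# -1 is the null pointer marking the end of the file.
next_block = {9: 16, 16: 1, 1: 10, 10: 25, 25: -1}

def file_blocks(start):
    blocks, current = [], start
    while current != -1:            # follow pointers to the end
        blocks.append(current)
        current = next_block[current]
    return blocks

print(file_blocks(9))               # [9, 16, 1, 10, 25]
```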
Advantages of Linked List Allocation
There are various advantages of linked list allocation:

1. In linked list allocation, there is no external fragmentation. Due to this,
we can utilize the memory better.
2. In linked list allocation, a directory entry only comprises the
starting block address.
3. The linked allocation method is flexible because we can quickly
increase the size of a file, since allocating a file does not require a
contiguous chunk of memory.

Disadvantages of Linked List Allocation

There are various disadvantages of linked list allocation:

1. Linked list allocation does not support direct access or random
access.
2. In linked list allocation, we need to traverse each block.
3. If a pointer in the linked list breaks, the file gets corrupted.
4. The pointer needs some extra space in each disk block.

Indexed Allocation
The indexed allocation method is another method used for file
allocation. In the indexed allocation method, we have an additional block,
known as the index block. For each file, there is an individual
index block. In the index block, the ith entry holds the disk address of the
ith file block. We can see in the figure below that the directory entry
comprises the address of the index block.
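
With an index block, the ith data block is found in one table lookup, which is what gives indexed allocation direct access. A minimal sketch (the block numbers are made-up examples):

```python
# Indexed allocation: the directory points at a file's index block, and
# the index block lists the file's data blocks in order.
index_blocks = {19: [9, 16, 1, 10, 25]}   # index block 19 -> data blocks
directory = {"jeep": 19}                  # file name -> index block address

def nth_block(name, i):
    """Direct access: one lookup in the file's index block."""
    return index_blocks[directory[name]][i]

print(nth_block("jeep", 3))               # 10
```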

Advantages of Index Allocation

The advantages of index allocation are:

1. The index allocation method solves the problem of external
fragmentation.
2. Index allocation provides direct access.

Disadvantages of Index Allocation

The disadvantages of index allocation are:

1. In index allocation, pointer overhead is greater.
2. We can lose the entire file if the index block is not correct.
3. It is wasteful to create an index block for a small file.

Difference between Network Operating System and Distributed Operating System

What is Network Operating System?

Network operating systems are server-based operating systems that provide
networking-related functionality. They manage users, groups, and data, and provide
security. These operating systems permit users to transfer files and share
devices like printers among various devices in a network, such as a LAN (Local Area
Network), a private network, or another network. It is the most popular type of
operating system used in distributed architectures. The goal of a network operating
system is to allow resource sharing between two or more computers running
different operating systems.

Advantages and Disadvantages of Network Operating System

There are various advantages and disadvantages of a network operating system.
These are as follows:

Advantages

There are various advantages of a network operating system. Some of them are as
follows:

1. It is possible to gain remote access to servers from various locations and
system types.

2. New technologies, upgrades, and hardware may be easily integrated into this
operating system.

3. The servers handle the security concerns.

4. Highly stable centralized servers.

Disadvantages

There are various disadvantages of a network operating system. Some of them are as
follows:

1. Network operating systems are very expensive.

2. It needs regular maintenance and updates.

3. The user must rely on the central location for most processes.

What is Distributed Operating System?

A distributed operating system (DOS) is an essential type of operating system.
Distributed systems use many central processors to serve multiple real-time
applications and users. As a result, data processing jobs are distributed among the
processors.

It connects multiple computers via a single communication channel. Furthermore,
each of these systems has its own processor and memory. Additionally, these CPUs
communicate via high-speed buses or telephone lines. Individual systems that
communicate via a single channel are regarded as a single entity. They are also known
as loosely coupled systems.
Advantages and Disadvantages of Distributed Operating System

There are various advantages and disadvantages of the distributed operating system.
These are as follows:

Advantages

There are various advantages of the distributed operating system. Some of them are
as follows:

1. It can share all resources (CPU, disk, network interface, nodes, computers, etc.) from one site to another, increasing data availability across the entire system.

2. The sites operate independently of one another, so if one site crashes, the entire system does not halt.

3. It reduces the probability of data corruption because all data is replicated across all sites; if one site fails, the user can access data from another operational site.

4. It is an open system since it may be accessed from both local and remote
locations.

5. It increases the speed of data exchange from one site to another site.

6. Most distributed systems are made up of several nodes that interact to make
them fault-tolerant. If a single machine fails, the system remains operational.

7. It helps in the reduction of data processing time.

Disadvantages

There are various disadvantages of the distributed operating system. Some of them
are as follows:
1. The system must decide which jobs must be executed, when they must be
executed, and where they must be executed. A scheduler has limitations, which
can lead to underutilized hardware and unpredictable runtimes.

2. The underlying software is extremely complex and is not understood very well
compared to other systems.

3. It is hard to implement adequate security in a DOS, since both the nodes and the connections must be secured.

4. The more widely distributed a system is, the more communication latency can
be expected. As a result, teams and developers must choose between
availability, consistency, and latency.

5. The database connected to a DOS is relatively complicated and hard to manage in contrast to a single-user system.

6. Gathering, processing, presenting, and monitoring hardware use metrics for big
clusters can be a real issue.

7. These systems aren't widely available because they're thought to be too expensive.

Key differences between the network operating system and distributed operating system

There are various key differences between the network operating system and the distributed operating system. These are as follows:

1. Network operating systems run on heterogeneous computers and are known as loosely coupled systems. On the other hand, distributed operating systems (DOS) are tightly coupled systems, mostly used on homogeneous computers or multiprocessors.
2. Communication between computers (nodes) in a distributed operating system
is achieved through shared memory or by sending messages. On the other
hand, the network operating system transfers files to interact with other nodes.

3. The operating system installed on the computers in a network operating system can vary, but this is not the case in a distributed system.

4. The network operating system's primary goal is to give local services to remote
users. In contrast, DOS's goal is to handle the computer hardware resources.

5. The network operating system has a low level of transparency. On the other
hand, the DOS is highly transparent and hides resource usage.

6. The network operating system's scalability is higher than the DOS.

7. The network operating system uses a two-tier client/server architecture, whereas the DOS uses an n-tier architecture.

8. The network operating system maintains resources at each node, whereas the
distributed operating system manages resources globally, whether they are
centered or distributed.
Difference between Job and Process
A job is work that needs to be done.

A task is a piece of work that needs to be done.

A process is a series of actions performed for a particular purpose.

Job and task define the work to be done, whereas process defines the way the work can be done or how it should be done.

1. PROCESS :
● A process is a program under execution. A program can be defined as a set of instructions. The program is a passive entity and the process is an active entity. When we execute a program, it remains on the hard drive of our system, and when this program comes into the main memory it becomes a process. A process can be present on the hard drive, in memory, or on the CPU.
Example – In Windows, we can see each running process in the Task Manager. All the processes running in the background are visible under the Processes tab. Another example is a printer program running in the background while we perform some other task on screen; that printer program is called a process.
● A process goes through many states as it is executed. Some of these states are new (start), ready, running, waiting, and terminated; these names aren't standardized. These states are shown in the process state transition diagram, or process life cycle.
● More than one process can be executed at the same time. When
multiple processes are executed at the same time, it needs to be
decided which process needs to be executed first. This is known
as scheduling of a process or process scheduling. Thus, a
process is also known as a schedulable and executable unit.
● A process has certain attributes and a process also has a process
memory. Attributes of process are process id, process state,
priority, etc.
A process memory is divided into 4 sections – text section, data
section, heap and stack.
● Processes also use interprocess communication. When multiple processes are executed, it is necessary for the processes to communicate using communication protocols to maintain synchronization.
● To dive further into the details of processes, you may refer to Introduction of Process Management.

2. JOB :
● A job is a complete unit of work under execution. A job consists of many tasks, which in turn consist of many processes. A job is a series of tasks in batch mode. Programs are written to execute a job.
● The term job is also ambiguous, as it holds many meanings. Jobs and tasks are used synonymously in computational work.
● Example – A job of a computer may be to take input from the user, process the data, and provide the results. This job can be divided into several small tasks: taking input as one task, processing the data as another task, and outputting the results as yet another task.
These tasks are further executed as small processes. The task of taking input involves a number of processes: first, the user enters the information; then that information is converted into binary form; then it goes to the CPU for further execution; then the CPU performs the necessary actions. Hence, a job is broken into tasks, and these tasks are executed in the form of processes.
● A system may run one job at a time or multiple jobs at a time. A single job can be called a task. To perform multiple jobs at a time, the jobs need to be scheduled. A job scheduler is a kind of application program that schedules jobs; job scheduling is also known as batch scheduling.

The concepts of job, process, and task revolve around each other. Job, task, and process may be considered the same or different depending on the context in which they are used. A process is an isolated entity of the operating system. A task may be called a process if it is a single task. A job may be called a task if the job to be performed is a single unit of work. A process or group of processes can be termed a task, and a group of tasks can be termed a job.
Advantages and disadvantages of
multiprogramming systems

Advantages of multiprogramming systems

● The CPU is used most of the time and rarely becomes idle.
● The system appears fast, as all the tasks seem to run in parallel.
● Short jobs are completed faster than long jobs.
● Multiprogramming systems support multiple users.
● Resources are used efficiently.
● The total time taken to execute a program/job decreases.
● Response time is shorter.
● In some applications multiple tasks run at the same time, and multiprogramming systems handle these types of applications better.

Disadvantages of multiprogramming systems

● It is difficult to program such a system because of the complicated schedule handling.
● Tracking all tasks/processes is sometimes difficult.
● Under a heavy load of tasks, long jobs have to wait a long time.

Advantages and disadvantages of multiprocessing systems

Advantages of multiprocessing operating systems are:

● Increased reliability: Due to the multiprocessing system, processing tasks can be distributed among several processors. This increases reliability, because if one processor fails, the task can be given to another processor for completion.
● Increased throughput: As the number of processors increases, more work can be done in less time.
● Economy of scale: As multiprocessor systems share peripherals, secondary storage devices, and power supplies, they are relatively cheaper than the equivalent number of single-processor systems.

Disadvantages of multiprocessing operating systems

● The operating system of a multiprocessing system is more complex and sophisticated, as it takes care of multiple CPUs at the same time.

Difference between Batch Processing and Real Time Processing System

The key differences between a batch processing system and a real-time processing system are:

1. In a batch processing system, the processor only needs to be busy when work is assigned to it. In a real-time processing system, the processor needs to be very responsive and active all the time.
2. In a batch system, jobs with similar requirements are batched together and run through the computer as a group. In a real-time system, events mostly external to the computer system are accepted and processed within certain deadlines.
3. Completion time is not critical in batch processing. In real-time processing, the time to complete the task is very critical.
4. Batch processing provides the most economical and simplest processing method for business applications. Real-time processing is complex and costly, requiring unique hardware and software to handle complex operating system programs.
5. A computer with normal specifications can work with batch processing. Real-time processing needs high-end computer architecture and high hardware specifications.
6. In batch processing there is no time limit. A real-time system has to handle a process within the specified time limit; otherwise, the system fails.
7. Batch processing is measurement oriented. Real-time processing is action or event oriented.
8. In batch processing, sorting is performed before processing. In real-time processing, no sorting is required.
9. In batch processing, data is collected for a defined period of time and processed in batches. Real-time processing supports random data input at random times.
10. Examples of batch processing are credit card transactions, generation of bills, and processing of input and output in the operating system. Examples of real-time processing are bank ATM transactions, customer services, radar systems, weather forecasts, and temperature measurement.

What is a process scheduler? What factors affect its performance?

Answer: Scheduling can be defined as a set of policies and mechanisms which controls the order in which the work to be done is completed. The system program concerned with scheduling is called the scheduler, and the algorithm it uses is called the scheduling algorithm.

Various criteria or characteristics that help in designing a good scheduling algorithm are:

● CPU Utilization − A scheduling algorithm should be designed so that the CPU remains as busy as possible. It should make efficient use of the CPU.
● Throughput − Throughput is the amount of work completed in a unit of time; in other words, the number of jobs completed per unit of time. The scheduling algorithm must try to maximize the number of jobs processed per time unit.
● Response time − Response time is the time taken to start responding to a request. A scheduler must aim to minimize response time for interactive users.
● Turnaround time − Turnaround time refers to the time between the moment of submission of a job/process and the time of its completion. Thus, how long it takes to execute a process is also an important factor.
● Waiting time − It is the time a job waits for resource allocation when several jobs are competing in a multiprogramming system. The aim is to minimize waiting time.
● Fairness − A good scheduler should make sure that each process gets its fair share of the CPU.
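
As a worked illustration of these metrics, here is a minimal sketch that computes turnaround and waiting time for a few processes under non-preemptive first-come-first-served scheduling (the arrival and burst times are made-up example values):

#include <stdio.h>

int main(void) {
    int arrival[] = {0, 1, 2};   /* submission times */
    int burst[]   = {5, 3, 8};   /* CPU burst lengths */
    int n = 3, clock = 0;
    int total_turnaround = 0, total_waiting = 0;

    for (int i = 0; i < n; i++) {
        if (clock < arrival[i]) clock = arrival[i]; /* CPU idles until arrival */
        clock += burst[i];                          /* run the job to completion */
        int turnaround = clock - arrival[i];        /* completion - submission */
        int waiting = turnaround - burst[i];        /* time spent in the ready queue */
        total_turnaround += turnaround;
        total_waiting += waiting;
        printf("P%d: turnaround=%d waiting=%d\n", i, turnaround, waiting);
    }
    printf("avg turnaround=%.2f avg waiting=%.2f\n",
           (double)total_turnaround / n, (double)total_waiting / n);
    return 0;
}

Under first-come-first-served with no preemption, response time equals waiting time, since a process produces its first response only once it gets the CPU.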

Difference Between Paging and Segmentation

What is Paging?
Paging is a technique of memory management that breaks the process address space into blocks of the same size, known as pages. The size of a process is measured in the total number of pages. In a similar manner, the main memory is divided into frames, which are small physical memory blocks of fixed size. The size of a frame is kept equal to the size of a page so that main memory is utilized in the most optimal manner. Paging also avoids external fragmentation.
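
A minimal sketch of how a logical address is translated under paging (the 4 KB page size and the page-table contents are illustrative assumptions):

#include <stdio.h>

#define PAGE_SIZE 4096            /* assume 4 KB pages */

int page_table[] = {5, 9, 6, 7};  /* page_table[p] = frame holding page p */

unsigned translate(unsigned logical) {
    unsigned page   = logical / PAGE_SIZE;  /* page number */
    unsigned offset = logical % PAGE_SIZE;  /* offset within the page */
    return page_table[page] * PAGE_SIZE + offset;
}

int main(void) {
    /* Logical address 8200 = page 2, offset 8; page 2 sits in frame 6. */
    printf("physical = %u\n", translate(8200)); /* 6*4096 + 8 = 24584 */
    return 0;
}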

What is Segmentation?
Segmentation is a technique of memory management in which every job gets divided into blocks of varying sizes, known as segments. This way, we get one segment for every module, with its pieces performing related functions. These segments act as different spaces of the logical address of a program. While executing a process, the corresponding segments are loaded into memory in a non-contiguous fashion, even though each individual segment is loaded into a contiguous block of available memory.
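
A matching sketch of address translation under segmentation (the segment table's base/limit values are made-up example numbers):

#include <stdio.h>

struct segment { unsigned base, limit; };

/* seg_table[s] records where segment s starts and how long it is. */
struct segment seg_table[] = {{1400, 1000}, {6300, 400}, {4300, 1100}};

long translate(unsigned seg, unsigned offset) {
    if (offset >= seg_table[seg].limit)
        return -1;                         /* trap: offset beyond the segment */
    return seg_table[seg].base + offset;   /* contiguous within one segment */
}

int main(void) {
    printf("%ld\n", translate(2, 53));  /* 4300 + 53 = 4353 */
    printf("%ld\n", translate(1, 852)); /* -1: beyond segment 1's limit of 400 */
    return 0;
}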

The key differences between Paging and Segmentation are:

1. Individual Memory − In Paging, we break a process address space into blocks known as pages. In Segmentation, we break a process address space into blocks known as segments (sections).
2. Memory Size − Pages are blocks of fixed size. Segments are blocks of varying sizes.
3. Accountability − The OS divides the available memory into individual pages. The compiler mainly calculates the size of individual segments, along with their actual and virtual addresses.
4. Speed − Paging is comparatively much faster in accessing memory. Segmentation is comparatively much slower in accessing memory than Paging.
5. Size − The available memory determines the individual page sizes. The user determines the individual segment sizes.
6. Fragmentation − The Paging technique may underutilize some pages, thus leading to internal fragmentation. The Segmentation technique may not use some memory blocks at all; thus, it may lead to external fragmentation.
7. Logical Address − In Paging, a logical address divides into a page number and a page offset. In Segmentation, a logical address divides into a segment number and a segment offset.
8. Data Storage − In Paging, the page table stores the page data. In Segmentation, the segmentation table stores the segmentation data.

What is Fragmentation in OS?

Fragmentation refers to a condition of information storage in which the memory space of the system is used inefficiently, reducing its capacity, its performance, or sometimes both. The implications of fragmentation depend entirely on the specific storage-allocation scheme in operation and on the particular type of fragmentation. In many cases, fragmentation leads to storage capacity being left unused, and the term also refers to that unused space itself.

Types of Fragmentation
Fragmentation is of three types:

● External Fragmentation
● Internal Fragmentation
● Data Fragmentation (which can exist alongside the other two, or as a combination of them)
Internal Fragmentation
Whenever a memory block is allocated to a process and the process happens to be smaller than the total amount of memory requested, a free space is ultimately created inside this memory block, and that free space is left unused. This is what causes internal fragmentation. For example, if a 3 KB process is placed in a fixed 4 KB block, the remaining 1 KB inside the block is wasted as internal fragmentation.

External Fragmentation
External fragmentation occurs whenever a method of dynamic memory allocation allocates some memory and leaves small pieces of unusable memory behind. The total quantity of usable memory is reduced substantially when there is too much external fragmentation: there is enough total memory space to satisfy a request, but it is not contiguous. This is known as external fragmentation. A small sketch contrasting the two kinds of fragmentation follows below.
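
A minimal sketch contrasting internal and external fragmentation (block, hole, and request sizes are made-up illustrative values):

#include <stdio.h>

int main(void) {
    /* Internal: a 3 KB process in a fixed 4 KB block wastes 1 KB inside it. */
    int block = 4096, request = 3072;
    printf("internal fragmentation: %d bytes\n", block - request);

    /* External: three free holes of 2 KB each (6 KB in total), yet a
       5 KB request fails because no single hole is big enough. */
    int holes[] = {2048, 2048, 2048}, need = 5120, total = 0, fits = 0;
    for (int i = 0; i < 3; i++) {
        total += holes[i];
        if (holes[i] >= need) fits = 1;
    }
    printf("free=%d need=%d satisfiable=%s\n",
           total, need, fits ? "yes" : "no (external fragmentation)");
    return 0;
}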

Causes of Fragmentation
User processes are loaded into and unloaded from main memory, and all processes are kept in memory blocks in the system's main memory. After the loading and swapping of processes, various holes are left that other processes cannot fit into because of their sizes. Main memory is available, but its free space is not sufficient to load other processes, since the allocation of main memory to processes is dynamic and the free space ends up scattered.

When Does a Page Fault Occur?

What is a page fault in the OS?
In operating systems, a page fault is an exception/error raised by the memory management unit when a process accesses a memory page that is not currently loaded into memory. A mapping must be added to the virtual address space of the process to access the page, and the page contents are loaded from a backing store (secondary storage), such as a disk.

When a page fault occurs, the following sequence of events happens:

1. First, an internal table is checked to determine whether the reference was a valid or an invalid memory access.
2. If the reference was invalid, the process is terminated; if it was valid but the page is not yet in memory, the page must be paged in.
3. After checking validity, a free frame is located from the free-frame list.
4. A disk operation is scheduled to read the required page from the disk into the free frame.
5. After the I/O operation completes, the page table of the process is updated with the new frame number, and the invalid bit is changed; it is now a valid page reference.
6. Finally, the instruction that was interrupted by the page fault is restarted (a sketch of these steps follows below).
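
A minimal, self-contained simulation of the steps above (the table sizes, helper names, and the naive "evict frame 0" policy are illustrative assumptions, not a real kernel's behavior):

#include <stdio.h>

#define PAGES 8
#define FRAMES 4
#define INVALID -1

int page_table[PAGES];   /* page -> frame, or INVALID if not in memory */
int frame_used[FRAMES];  /* 1 if the frame is occupied */

int find_free_frame(void) {
    for (int f = 0; f < FRAMES; f++)
        if (!frame_used[f]) return f;            /* step 3: free-frame list */
    for (int p = 0; p < PAGES; p++)              /* none free: naively evict */
        if (page_table[p] == 0) page_table[p] = INVALID;
    return 0;
}

void handle_page_fault(int page) {
    if (page < 0 || page >= PAGES) {             /* steps 1-2: validity check */
        printf("invalid reference: terminate process\n");
        return;
    }
    int frame = find_free_frame();
    printf("reading page %d from disk into frame %d\n", page, frame); /* step 4 */
    page_table[page] = frame;                    /* step 5: update page table */
    frame_used[frame] = 1;
    printf("restarting the faulting instruction\n"); /* step 6 */
}

int main(void) {
    for (int p = 0; p < PAGES; p++) page_table[p] = INVALID;
    handle_page_fault(3);    /* valid reference: page it in */
    handle_page_fault(42);   /* out of range: invalid access */
    return 0;
}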

Difference between Long-Term Scheduler and Short-Term Scheduler

What is a long-term scheduler?

The long-term scheduler is also referred to as the job scheduler. Various processes wait for execution on a computer; these processes wait in the job queue. The long-term scheduler chooses a job from the job queue (or system memory) and brings that job to the ready queue to be executed in main memory.

What is a short-term scheduler?

The short-term scheduler is also referred to as the CPU scheduler. Its main job is to choose a process from the ready queue that is ready to run and assign the processor to it. In comparison to the long-term scheduler, the short-term scheduler executes far more frequently.
Main differences between the long term scheduler and short term scheduler

1. A long-term scheduler is an operating system scheduler that chooses processes from the job queue and loads them into main memory for execution. On the other hand, a short-term scheduler is an operating system scheduler that chooses which of the ready processes the processor runs next.

2. The long-term scheduler chooses the processes or jobs from the job pool. In contrast, the short-term scheduler chooses the processes from the ready queue.

3. The long-term scheduler controls the degree of multiprogramming. In contrast, the short-term scheduler has less control over multiprogramming.

4. The long-term scheduler assigns jobs to the ready queue for further action by the short-term scheduler; therefore, it is referred to as the job scheduler. In contrast, the short-term scheduler assigns a process to the CPU; therefore, it is also called the CPU scheduler.

5. The short-term scheduler chooses processes from the ready queue more frequently than the long-term scheduler chooses processes from the job pool.

6. The long-term scheduler is slower than the short-term scheduler.

In summary:

● Long-term scheduler: an operating system scheduler that chooses processes from the job queue and loads them into main memory for execution. Short-term scheduler: an operating system scheduler that chooses the next process to run from among the ready processes.
● Long-term: also referred to as the job scheduler. Short-term: also referred to as the CPU scheduler.
● Long-term: slower. Short-term: faster.
● Long-term: controls the degree of multiprogramming. Short-term: provides less control over the degree of multiprogramming.
● Long-term: selects processes less frequently. Short-term: selects processes more frequently.
● Long-term: always present in a batch OS, and may or may not be present in a time-sharing OS. Short-term: present in a batch OS and only minimally present in a time-sharing OS.
● Long-term: chooses processes from the job pool. Short-term: chooses processes from the ready queue.
● Long-term: chooses a good mix of input/output-bound and CPU-bound processes. Short-term: chooses a new process for the processor quite frequently.

A small sketch of how the two schedulers cooperate follows below.
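
A minimal sketch of the two schedulers cooperating on a toy system (the queue sizes, job IDs, and function names are illustrative assumptions, not a real OS interface):

#include <stdio.h>

#define MAX 8

int job_queue[MAX]   = {101, 102, 103, 104}; int jobs = 4;
int ready_queue[MAX];                        int ready = 0;

/* Long-term (job) scheduler: admits jobs into memory, thereby
   controlling the degree of multiprogramming. */
void long_term_admit(int degree) {
    while (ready < degree && jobs > 0) {
        ready_queue[ready++] = job_queue[0];
        for (int i = 1; i < jobs; i++) job_queue[i - 1] = job_queue[i];
        jobs--;
    }
}

/* Short-term (CPU) scheduler: picks the next ready process; in a
   real system this runs far more often than the long-term scheduler. */
int short_term_dispatch(void) {
    if (ready == 0) return -1;
    int pid = ready_queue[0];
    for (int i = 1; i < ready; i++) ready_queue[i - 1] = ready_queue[i];
    ready--;
    return pid;
}

int main(void) {
    long_term_admit(2);                               /* admit 2 jobs */
    printf("dispatch %d\n", short_term_dispatch());   /* 101 */
    printf("dispatch %d\n", short_term_dispatch());   /* 102 */
    return 0;
}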

What is a Process Control Block (PCB)?

When a process is created, the operating system creates a data structure to store information about that process. This data structure is known as the Process Control Block (PCB).

Structure of Process Control Block


1. Process ID:
When a new process is created by the user, the operating system assigns a unique ID
i.e a process-ID to that process. This ID helps the process to be distinguished from
other processes existing in the system.

2. Process State:
A process, from its creation to completion, goes through different states. Generally, a process may be present in one of 5 states during its execution: new, ready, running, waiting, and terminated.
3. Process Priority:
Process priority is a numeric value that represents the priority of each process. The
lesser the value, the greater the priority of that process. This priority is assigned at the
time of the creation of the PCB and may depend on many factors like the age of that
process, the resources consumed, and so on. The user can also externally assign a
priority to the process.

4. Process Accounting Information:
This attribute gives information about the resources used by that process in its lifetime. For example: CPU time, connection time, etc.

5. Program Counter:
The program counter is a pointer that points to the next instruction in the program to
be executed. This attribute of PCB contains the address of the next instruction to be
executed in the process.

6. CPU registers:
A CPU register is a quickly accessible, small-sized location available to the CPU. When a process is switched out, the contents of these registers are saved in its PCB so that the process can later resume from the same point.
7. Context Switching:
Context switching (discussed in detail below) relies on the PCB: when the CPU is switched from one process to another, the state of the old process is saved in its PCB so that it can be restored later.

8. PCB pointer:
This field contains the address of the next PCB, which is in ready state. This helps the
operating system to hierarchically maintain an easy control flow between parent
processes and child processes.

9. List of open files:
As the name suggests, it contains information on all the files used by that process. This field is important, as it helps the operating system close all the opened files when the process terminates.

10. Process I/O information:
This field lists all the input/output devices required by the process during its execution. A sketch of a PCB as a C structure follows below.
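
A minimal sketch of a PCB expressed as a C structure (the field names and sizes are illustrative assumptions, not any real kernel's layout; the comment numbers match the fields described above):

#include <stdio.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;             /* 1. process ID */
    enum proc_state state;           /* 2. process state */
    int             priority;        /* 3. lower value = higher priority */
    long            cpu_time_used;   /* 4. accounting information */
    unsigned long   program_counter; /* 5. next instruction to execute */
    unsigned long   registers[16];   /* 6. saved CPU register contents */
    struct pcb     *next;            /* 8. pointer to the next ready PCB */
    int             open_files[8];   /* 9. descriptors of open files */
    int             io_devices[4];   /* 10. I/O devices in use */
};

int main(void) {
    struct pcb p = { .pid = 42, .state = READY, .priority = 5 };
    printf("pid=%d state=%d priority=%d\n", p.pid, p.state, p.priority);
    return 0;
}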

Context Switching
A context switching is a process that involves switching the CPU from one process or
task to another. It is the process of storing the state of a process so that it can be
restored and resume execution at a later point. This allows multiple processes to share
a single CPU and is an essential feature of a multitasking operating system.

So, whenever a context switch occurs during code execution, the current state of the running process, including the contents of the CPU registers, is saved temporarily in its PCB, and the saved state of the next process is loaded. Because this state is kept in main memory, the switch is fast: no time is wasted saving and retrieving state information from secondary memory (hard disk).
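
A minimal sketch of a context switch between two processes (the context layout and the memcpy-based save/restore are illustrative stand-ins for what hardware and assembly code really do):

#include <stdio.h>
#include <string.h>

struct context { unsigned long pc, regs[4]; };

struct context saved[2];   /* one saved context per process (as in its PCB) */
struct context cpu;        /* the "hardware" state of the single CPU */

void context_switch(int from, int to) {
    memcpy(&saved[from], &cpu, sizeof cpu);  /* save old state into its PCB */
    memcpy(&cpu, &saved[to], sizeof cpu);    /* restore the new state */
}

int main(void) {
    cpu.pc = 100;            /* process 0 is running at pc = 100 */
    saved[1].pc = 500;       /* process 1 was previously stopped at pc = 500 */
    context_switch(0, 1);
    printf("now running at pc=%lu\n", cpu.pc); /* 500 */
    context_switch(1, 0);
    printf("back to pc=%lu\n", cpu.pc);        /* 100 */
    return 0;
}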

What is thrashing?
A state in which the CPU performs less "productive" work and more "swapping" is known as thrashing.

It occurs when processes do not have enough frames in memory, so pages are constantly swapped in and out.

The CPU is busy swapping pages, and hence its utilization falls.

What are the causes of thrashing?

The process scheduling mechanism tries to load many processes into the system at a time, and hence the degree of multiprogramming is increased. In this scenario, there are far more processes than the number of frames available.

The memory soon fills up, and each process starts spending a lot of time waiting for its required pages to be swapped in, causing CPU utilization to fall, as every process has to wait for pages.
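
A minimal sketch of why raising the degree of multiprogramming can trigger thrashing (the frame count and the per-process working-set size are made-up illustrative numbers):

#include <stdio.h>

int main(void) {
    int total_frames = 64;
    int working_set  = 16;   /* frames each process needs to run smoothly */

    for (int procs = 2; procs <= 8; procs *= 2) {
        int per_proc = total_frames / procs;
        printf("%d processes -> %d frames each: %s\n",
               procs, per_proc,
               per_proc >= working_set ? "ok" : "thrashing likely");
    }
    return 0;
}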

Effects of thrashing

When the operating system encounters a situation of thrashing, it tries to apply one of the following algorithms:

1. Global page replacement

2. Local page replacement

Global page replacement

Whenever there is thrashing, the global page replacement algorithm simply tries to bring in more pages, taking frames from any process.

This is not a suitable algorithm, because no process can hold on to enough frames, which causes more thrashing.

Local page replacement

This algorithm may help in the reduction of thrashing, as it only brings in pages that belong to the faulting process, so one thrashing process cannot steal frames from the others.
However, local page replacement has many other disadvantages, and hence it is only used as an alternative to global page replacement.

Hit and Miss Ratios in Caches

Hit and miss ratios measure how often a cache serves content requests successfully.

The hit ratio is a calculation of cache hits compared with the total number of content requests received: hit ratio = hits / (hits + misses).

The miss ratio is the flip side of this, where the cache misses are compared with the total number of content requests received: miss ratio = misses / (hits + misses) = 1 - hit ratio.
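
A small worked example of these formulas (the request counts are made-up values):

#include <stdio.h>

int main(void) {
    int hits = 940, misses = 60;                   /* out of 1000 requests */
    double total = hits + misses;
    printf("hit ratio  = %.2f\n", hits / total);   /* 940/1000 = 0.94 */
    printf("miss ratio = %.2f\n", misses / total); /* 60/1000  = 0.06 */
    return 0;
}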

Throughput, Turnaround, Waiting, and Response Time

● Throughput − the number of processes that complete their execution per unit time.
● Turnaround time − the amount of time taken to execute a particular process, from submission to completion.
● Waiting time − the amount of time a process has been waiting in the ready queue.
● Response time − the amount of time from when a request was submitted until the first response is produced, not the final output (relevant in a time-sharing environment).
