
Operating System

An Operating System (OS) is an interface between a computer
user and computer hardware. An operating system is software
that performs all the basic tasks like file management,
memory management, process management, handling input
and output, and controlling peripheral devices such as disk
drives and printers.
Operating System as Extended Machine
Let us understand how the operating system works as an
Extended Machine.
1. At the machine level, the structure of a computer system is
complicated to program, mainly for input and output.
Programmers do not want to deal with the hardware directly;
they mainly focus on implementing software. Therefore, a level
of abstraction has to be maintained.
2. Operating systems provide a layer of abstraction for using the
disk, such as files.
3. This level of abstraction allows a program to create, write, and
read files without dealing with the details of how the hardware
actually works.
4. This level of abstraction is the key to managing complexity.
5. Good abstractions turn an impossible task into two
manageable tasks.
6. The first is to define and implement the abstractions.
7. The second is to solve the problem at hand.
8. The operating system provides abstractions to application
programs in a top-down view.
Operating System as Resource Manager
Let us understand how the operating system works as a
Resource Manager.
1. Nowadays, all modern computers consist of processors,
memories, timers, network interfaces, printers, and many
other devices.
2. The operating system provides for an orderly and
controlled allocation of the processors, memories, and I/O
devices among the various programs in the bottom-up view.
3. The operating system allows multiple programs to be in
memory and run at the same time.
4. Resource management includes multiplexing or sharing
resources in two different ways: in time and in space.
5. In time multiplexing, different programs take turns
using the CPU: first one uses the resource, then the next
one that is ready in the queue, and so on. For example:
sharing the printer one after another.
6. In space multiplexing, instead of the programs taking
turns, each one gets part of the resource. For example:
main memory is divided among several running programs, so
each one can be resident at the same time.
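Time multiplexing of the CPU can be sketched as a round-robin queue, where each program runs for one time slice and then goes to the back of the queue until it finishes. The job names and CPU times below are assumed purely for the example:

```python
from collections import deque

def round_robin(jobs, quantum):
    """Time-multiplex a single CPU: each job runs for at most
    `quantum` units, then goes to the back of the ready queue.
    `jobs` maps job name -> remaining CPU time needed."""
    queue = deque(jobs.items())
    order = []                       # record who used the CPU, in turn
    while queue:
        name, remaining = queue.popleft()
        order.append(name)
        remaining -= quantum         # the job uses the CPU for one slice
        if remaining > 0:            # not finished: wait for the next turn
            queue.append((name, remaining))
    return order

print(round_robin({"A": 2, "B": 1, "C": 2}, quantum=1))
# ['A', 'B', 'C', 'A', 'C']
```

Each job gets the CPU in turn, exactly the "first one, then the next one that is ready in the queue" behaviour described above.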
Introduction to Shell
The shell is a program that provides the user
with an interface to use the operating system's
functions through commands. A shell
script is a program that is used to perform
specific tasks. Shell scripts are mostly used to
avoid repetitive work: you can write a script to
automate a set of instructions to be executed
one after the other, instead of typing the
commands in one after the other again and
again.
Some Shell Commands:
ls : list files in the current directory
pwd : print the present working directory
cd .. : move to the parent directory
cd / : go to the root directory
touch filename.extension : create a new file
rm filename : delete a file
mkdir foldername : create a folder
cp filename directory_for_pasting : copy the
file
Interprocess communication is the mechanism provided by the
operating system that allows processes to communicate with each
other. This communication could involve a process letting another
process know that some event has occurred or the transferring of
data from one process to another.
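As an illustration of transferring data from one process to another, the sketch below uses a pipe from Python's multiprocessing module; the message contents are assumed for the example:

```python
from multiprocessing import Process, Pipe

def worker(conn):
    # Child process: receive data from the parent through the pipe,
    # transform it, and send the result back.
    msg = conn.recv()
    conn.send(msg.upper())
    conn.close()

def ping(msg):
    # Parent process: send `msg` to a child process and wait for its reply.
    parent_conn, child_conn = Pipe()
    p = Process(target=worker, args=(child_conn,))
    p.start()
    parent_conn.send(msg)
    reply = parent_conn.recv()
    p.join()
    return reply

if __name__ == "__main__":
    print(ping("hello"))  # HELLO
```

The pipe is the OS-provided channel: neither process touches the other's memory; data only crosses via the kernel.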
Synchronization in Interprocess Communication


Synchronization is a necessary part of interprocess communication.
It is either provided by the interprocess control mechanism or
handled by the communicating processes. Some of the methods to
provide synchronization are as follows −
1. Semaphore: A semaphore is a variable that controls the access to
a common resource by multiple processes. The two types of
semaphores are binary semaphores and counting semaphores.
2. Mutual Exclusion: Mutual exclusion requires that only one
process thread can enter the critical section at a time. This is useful
for synchronization and also prevents race conditions.
3. Barrier: A barrier does not allow individual processes to proceed
until all the processes reach it. Many parallel languages and
collective routines impose barriers.
4. Spinlock: This is a type of lock. The processes trying to acquire
this lock wait in a loop while checking if the lock is available or not.
This is known as busy waiting because the process is not doing any
useful operation even though it is active.
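Python's threading module provides counterparts of several of these primitives. The sketch below uses threads rather than full processes, but it shows a counting semaphore, a mutual-exclusion lock around a critical section, and a barrier in one place:

```python
import threading

counter = 0
lock = threading.Lock()             # mutual exclusion for the critical section
barrier = threading.Barrier(3)      # no thread proceeds until all 3 arrive
sem = threading.Semaphore(2)        # at most 2 threads use the resource at once

def worker():
    global counter
    with sem:                       # counting semaphore limits concurrent entry
        with lock:                  # only one thread in the critical section
            counter += 1
    barrier.wait()                  # barrier: wait for every thread to get here

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 3
```

A spinlock has no direct high-level counterpart here, since busy waiting is exactly what these blocking primitives avoid.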
Paging is a memory-management scheme that enables the operating
system to fetch processes from secondary storage
into the main memory in the form of pages. In the paging
method, the main memory is split into small fixed-size blocks
of physical memory, which are known as frames. The size of a
frame must be kept the same as that of a page to make
maximum use of the main memory and to prevent external
fragmentation.
Paging moves pages from the swap disk into frames of the
physical memory so that data can be accessed by the
processor. Any page can be placed in any frame. This leads to
several issues that must be handled by a paging system:
1. When should a page be moved into physical memory?
2. How does the CPU find data in physical memory, especially
when its logical address is not the same as its physical address?
3. What happens when all the frames are occupied and the CPU
needs to access data from a page not currently in
physical memory?
Page Table
A page table is the data structure used by a virtual memory
system in a computer operating system to store the mapping
between virtual addresses and physical addresses.
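The mapping can be sketched with a plain dictionary standing in for the page table; the page size and table entries below are assumed for illustration:

```python
PAGE_SIZE = 4096                      # bytes per page (and per frame)

# Illustrative page table: virtual page number -> physical frame number.
page_table = {0: 5, 1: 2, 2: 7}

def translate(virtual_addr):
    """Translate a virtual address to a physical address."""
    page = virtual_addr // PAGE_SIZE  # which page the address lies in
    offset = virtual_addr % PAGE_SIZE # position within that page
    frame = page_table[page]          # a missing entry would be a page fault
    return frame * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> frame 2 -> 8196
```

The offset is unchanged by translation; only the page number is swapped for a frame number, which is why page and frame sizes must match.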

Segmentation
Segmentation is another approach to allocating memory
that can be used instead of, or in conjunction with,
paging. In its purest form, a program is broken into
multiple segments, each of which is a self-contained
unit, such as a subroutine or data structure. A segment
can start at one of many addresses and can be of any
size, so each segment table entry must contain the start
address and segment size. Some systems allow a
segment to start at any address, while others limit the
start address. One such limit is found in the Intel x86
architecture, which requires a segment to start at an
address that has 0000 as its four low-order bits.
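Segment-based translation can be sketched as a bounds check followed by adding the offset to the segment's start address; the segment table entries below are assumed for illustration:

```python
# Illustrative segment table: segment number -> (start address, size).
segment_table = {0: (0x1000, 0x400),   # e.g. a code segment
                 1: (0x2000, 0x200)}   # e.g. a data segment

def translate(segment, offset):
    """Translate (segment, offset) to a physical address,
    checking the offset against the segment's size."""
    base, size = segment_table[segment]
    if offset >= size:                 # offset past the segment's end
        raise ValueError("segmentation fault: offset out of range")
    return base + offset

print(hex(translate(1, 0x10)))  # 0x2010
```

Unlike paging, the unit here is variable-sized and meaningful to the program, which is why the limit check is part of every translation.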
Fragmentation
Fragmentation is an unwanted problem in the operating
system in which processes are loaded and unloaded
from memory, and free memory space becomes fragmented.
Processes cannot be assigned to memory blocks because of
the blocks' small size, and the memory blocks stay unused. It is
also necessary to understand that as programs are
loaded and removed from memory, they generate free
space, or holes, in the memory. These small blocks
cannot be allotted to newly arriving processes, resulting in
inefficient memory use. The extent of fragmentation
depends on the memory allocation system. As
processes are loaded and unloaded from memory, these free
areas are broken into small pieces of memory that
cannot be allocated to incoming processes. This is called
fragmentation.
Types of Fragmentation
There are mainly two types of fragmentation in the
operating system. These are as follows:
Internal Fragmentation
External Fragmentation
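Internal fragmentation is easy to quantify: with fixed-size allocation blocks, the waste is the allocated size minus the requested size. A small sketch (the block size is an assumption for the example):

```python
import math

BLOCK_SIZE = 4096   # fixed allocation unit, in bytes (assumed)

def internal_fragmentation(request):
    """Bytes wasted when `request` bytes are served with whole blocks."""
    blocks = math.ceil(request / BLOCK_SIZE)   # whole blocks handed out
    return blocks * BLOCK_SIZE - request       # allocated minus requested

print(internal_fragmentation(5000))  # 2 blocks = 8192 bytes -> 3192 wasted
```

External fragmentation is the opposite situation: the free space exists but is scattered in holes too small to satisfy a request.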
Deadlock
Deadlock is a situation where a set of processes are blocked
because each process is holding a resource and waiting for
another resource acquired by some other process.
Consider an example in which two trains are coming toward
each other on the same track, and there is only one track:
neither train can move once they are in front of each other. A
similar situation occurs in operating systems when two or
more processes hold some resources and wait for
resources held by the others.
Deadlock Detection
A deadlock can be detected by a resource scheduler, as it
keeps track of all the resources that are allocated to different
processes. After a deadlock is detected, it can be resolved
using the following methods −
All the processes that are involved in the deadlock are
terminated. This is not a good approach, as all the progress
made by the processes is destroyed.
Resources can be preempted from some processes and given
to others until the deadlock is resolved.
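Detection can be sketched as finding a cycle in a wait-for graph, where an edge from P to Q means process P is waiting for a resource held by Q. A minimal depth-first-search version (process names assumed for the example):

```python
def has_deadlock(wait_for):
    """Detect a cycle in a wait-for graph given as
    {process: set of processes it is waiting on}."""
    visited, in_stack = set(), set()

    def dfs(p):
        visited.add(p)
        in_stack.add(p)                # p is on the current DFS path
        for q in wait_for.get(p, ()):
            if q in in_stack:          # back edge: a cycle, i.e. deadlock
                return True
            if q not in visited and dfs(q):
                return True
        in_stack.discard(p)
        return False

    return any(dfs(p) for p in wait_for if p not in visited)

print(has_deadlock({"P1": {"P2"}, "P2": {"P1"}}))  # True
print(has_deadlock({"P1": {"P2"}, "P2": set()}))   # False
```

A cycle means every process on it is waiting for the next one, so none of them can ever proceed.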
Methods for handling deadlock
There are three ways to handle deadlock
1) Deadlock prevention or avoidance: The idea is to
not let the system enter a deadlock state.
One can zoom into each category individually.
Prevention is done by negating one of the four
necessary conditions for deadlock (mutual exclusion,
hold and wait, no preemption, circular wait).
Avoidance is futuristic in nature.
By using the strategy of avoidance, we have to make
an assumption. We need to ensure that all
information about the resources a process WILL
need is known to us prior to the execution of the
process. We use the Banker's algorithm (which is, in
turn, a gift from Dijkstra) in order to avoid
deadlock.
2) Deadlock detection and recovery: Let deadlock
occur, then do preemption to handle it once it has
occurred.
3) Ignore the problem altogether: If deadlock is
very rare, then let it happen and reboot the
system. This is the approach that both Windows
and UNIX take.
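The safety check at the heart of the Banker's algorithm can be sketched as follows; the resource vectors in the example are assumed for illustration:

```python
def is_safe(available, allocation, maximum):
    """Banker's algorithm safety check.
    available: free units per resource type.
    allocation[i] / maximum[i]: what process i currently holds /
    may ever claim, per resource type."""
    work = list(available)
    need = [[m - a for m, a in zip(maxi, alloc)]
            for maxi, alloc in zip(maximum, allocation)]
    finished = [False] * len(allocation)

    progressed = True
    while progressed:
        progressed = False
        for i, done in enumerate(finished):
            # A process can run to completion if its remaining need
            # fits inside what is currently available (`work`).
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # When it finishes, it releases everything it holds.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progressed = True
    return all(finished)   # safe iff every process can be driven to completion

# Classic textbook example: this state is safe.
print(is_safe([3, 3, 2],
              [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]],
              [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]))  # True
```

A request is granted only if the state that would result still passes this check; otherwise the process waits.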
Ostrich Algorithm
The ostrich algorithm means that the deadlock is
simply ignored and it is assumed that it will
never occur. This is done because in some
systems the cost of handling the deadlock is
much higher than simply ignoring it as it occurs
very rarely. So, it is simply assumed that the
deadlock will never occur and the system is
rebooted if it occurs by any chance.
Memory management
Memory management is the functionality of an
operating system which handles or manages
primary memory and moves processes back and
forth between main memory and disk during
execution. Memory management keeps track of
each and every memory location, regardless of
whether it is allocated to some process or free.
It determines how much memory is to be allocated to
processes and decides which process will get
memory at what time. It tracks whenever some
memory gets freed or unallocated and
updates the status accordingly.
Types of OS
Batch operating system
The users of a batch operating system do not interact with
the computer directly. Each user prepares his job on an off-
line device like punch cards and submits it to the computer
operator. To speed up processing, jobs with similar needs are
batched together and run as a group. The programmers leave
their programs with the operator and the operator then sorts
the programs with similar requirements into batches.
The problems with Batch Systems are as follows −
Lack of interaction between the user and the job.
CPU is often idle, because the speed of the mechanical I/O
devices is slower than the CPU.
Difficult to provide the desired priority.
Time-sharing operating systems
Time-sharing is a technique which enables many people,
located at various terminals, to use a particular computer
system at the same time. Time-sharing or multitasking is a
logical extension of multiprogramming. Processor's time
which is shared among multiple users simultaneously is
termed as time-sharing.
The main difference between Multiprogrammed Batch
Systems and Time-Sharing Systems is that in case of
Multiprogrammed batch systems, the objective is to
maximize processor use, whereas in Time-Sharing Systems,
the objective is to minimize response time.
Distributed operating System
Distributed systems use multiple central
processors to serve multiple real-time
applications and multiple users. Data
processing jobs are distributed among the
processors accordingly.
The processors communicate with one another
through various communication lines (such as
high-speed buses or telephone lines). These
are referred to as loosely coupled systems or
distributed systems. Processors in a distributed
system may vary in size and function, and these
processors are referred to as sites, nodes,
computers, and so on.
Network operating System
A Network Operating System runs on a server and
provides the server the capability to manage data,
users, groups, security, applications, and other
networking functions. The primary purpose of the
network operating system is to allow shared file and
printer access among multiple computers in a network,
typically a local area network (LAN) or a private network,
and to connect to other networks.
Real Time operating System
A real-time system is defined as a data processing
system in which the time interval required to
process and respond to inputs is so small that it
controls the environment. The time taken by the
system to respond to an input and display of
required updated information is termed as the
response time. So in this method, the response
time is very short compared to online processing.
Real-time systems are used when there are rigid
time requirements on the operation of a processor
or the flow of data, and real-time systems can be
used as a control device in a dedicated application.
A real-time operating system must have well-
defined, fixed time constraints, otherwise the
system will fail. Examples include scientific
experiments, medical imaging systems, industrial
control systems, weapon systems, robots, and air
traffic control systems.
Process state
A process is a program in execution; it is more
than the program code, which is called the text section.
This concept applies under every operating system,
because every task the operating system performs
needs a process to carry it out.
A process changes state as it executes. The state
of a process is defined by the current
activity of the process.
Each process may be in any one of the following
states:
New − The process is being created.
Running − In this state the instructions are being
executed.
Waiting − The process is in waiting state until an
event occurs like I/O operation completion or
receiving a signal.
Ready − The process is waiting to be assigned to a
processor.
Terminated − The process has finished execution.
It is important to know that only one process can
be running on any processor at any instant. Many
processes may be ready and waiting.
Now let us see the state diagram of these process
states

Explanation
Step 1 − Whenever a new process is created, it is admitted into the ready
state.
Step 2 − If no other process is present in the running state, it is
dispatched to running by the scheduler/dispatcher.
Step 3 − If the running process requests I/O or waits for an event, it
moves from the running state to the waiting state; if a higher-priority
process becomes ready, the running process is preempted and sent back
to the ready state.
Step 4 − Whenever the I/O or event is completed, the process is sent
back to the ready state, based on the interrupt signal.
Step 5 − Whenever the execution of a process is completed in the
running state, it exits to the terminated state, which is the completion
of the process.
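The steps above can be encoded as a transition table that rejects illegal moves; this is only an illustrative sketch of the state diagram, not how an OS actually stores process states:

```python
# Allowed process state transitions, following the steps above.
TRANSITIONS = {
    "new":        {"ready"},                  # admitted
    "ready":      {"running"},                # dispatched
    "running":    {"waiting",                 # I/O or event wait
                   "ready",                   # preempted
                   "terminated"},             # finished
    "waiting":    {"ready"},                  # I/O or event completed
    "terminated": set(),
}

def move(state, new_state):
    """Return the new state, or raise if the transition is illegal."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

s = "new"
for nxt in ["ready", "running", "waiting", "ready", "running", "terminated"]:
    s = move(s, nxt)
print(s)  # terminated
```

Note that there is no edge from waiting straight to running: a process that finishes its I/O must first re-enter the ready queue.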
Process Control Block
A Process Control Block (PCB) is a data structure that contains
information about the process it represents. The process control
block is also known as a task control block, an entry in the
process table, etc.
It is very important for process management, as the data
structuring for processes is done in terms of the PCB. It also
reflects the current state of the operating system.
Race Condition
A race condition is a situation that may occur inside a critical
section. This happens when the result of multiple thread
execution in critical section differs according to the order in
which the threads execute.
Race conditions in critical sections can be avoided if the
critical section is treated as an atomic instruction. Also,
proper thread synchronization using locks or atomic variables
can prevent race conditions.
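The classic race is a shared counter: an increment is a read-modify-write, so two threads can read the same old value and one update is lost. Protecting the critical section with a lock, as described above, makes the result deterministic:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:            # critical section: runs atomically w.r.t. other threads
            counter += 1      # read-modify-write, now race-free

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 40000 every run; without the lock, updates can be lost
```

Removing the `with lock:` line reintroduces the race: the final count then depends on how the threads happen to interleave.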
Context Switching
Context switching involves storing the context or state of a
process so that it can be reloaded when required and execution
can be resumed from the same point as earlier. This is a feature
of a multitasking operating system and allows a single CPU to be
shared by multiple processes.
Virtual Memory
A computer can address more memory than the amount
physically installed on the system. This extra memory is called
virtual memory, and it is a section of a hard disk that is set up to
emulate the computer's RAM.
The main visible advantage of this scheme is that programs can
be larger than physical memory. Virtual memory serves two
purposes. First, it allows us to extend the use of physical memory
by using disk. Second, it allows us to have memory protection,
because each virtual address is translated to a physical address.
Swapping
Swapping is a mechanism in which a process can be swapped
temporarily out of main memory (moved) to secondary storage
(disk), making that memory available to other processes. At
some later time, the system swaps the process back from
secondary storage to main memory.
Though performance is usually affected by the swapping process,
it helps in running multiple big processes in parallel, and
that is the reason swapping is also known as a technique for
memory compaction. The total time taken by the swapping process
includes the time it takes to move the entire process to the
secondary disk, the time to copy the process back to memory,
and the time the process takes to regain main memory.
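The swap time can be estimated from the process size and the disk transfer rate; the numbers below (transfer rate, latency) are assumed purely for illustration:

```python
def swap_time(process_mb, transfer_mb_per_s, latency_s=0.008):
    """Estimate the time to swap a process out and back in:
    two full transfers, each paying a seek/latency cost.
    All parameter values here are illustrative assumptions."""
    one_way = latency_s + process_mb / transfer_mb_per_s
    return 2 * one_way            # swap out + swap back in

# e.g. a 100 MB process on a 50 MB/s disk:
print(round(swap_time(100, 50), 3))  # 4.016 seconds
```

The transfer term dominates for large processes, which is why swap time grows roughly linearly with process size.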
