Segmentation
Segmentation is another approach to allocating memory
that can be used instead of, or in conjunction with,
paging. In its purest form, a program is broken into
multiple segments, each of which is a self-contained
unit, such as a subroutine or a data structure. A segment
can begin at any of many addresses and can be of any
size, so each segment table entry must contain the
segment's start address and size. Some systems allow a
segment to start at any address, while others restrict the
start address. One such restriction is found in the Intel x86
architecture, which requires a segment to start at an
address that has 0000 as its four low-order bits.
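The lookup a segment table supports can be sketched as follows. This is a minimal illustration, not x86 hardware behavior; the table contents, base addresses, and limits are made-up values for the example.

```python
# Hypothetical segment table: each entry holds a (base, limit) pair.
SEGMENT_TABLE = {
    0: (0x1000, 0x0400),  # code segment: starts at 0x1000, 1 KiB long
    1: (0x2000, 0x0200),  # data segment: starts at 0x2000, 512 bytes long
}

def translate(segment: int, offset: int) -> int:
    """Map a (segment, offset) logical address to a physical address."""
    base, limit = SEGMENT_TABLE[segment]
    if offset >= limit:
        # Offset beyond the segment's size: real hardware would trap here.
        raise MemoryError(f"segmentation fault: offset {offset:#x} >= limit {limit:#x}")
    return base + offset

print(hex(translate(0, 0x10)))  # 0x1010
```

The limit check is what makes each segment a protected, self-contained unit: an offset past the segment's size is rejected instead of silently reading a neighboring segment.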
Fragmentation
Fragmentation is an unwanted condition in an operating
system: as processes are loaded into and unloaded from
memory, the free memory space becomes broken up.
Processes cannot be assigned to the resulting memory
blocks because the blocks are too small, so the blocks
stay unused. As programs are loaded and deleted from
memory, they leave behind free space, or holes. These
small holes cannot be allotted to newly arriving
processes, resulting in inefficient memory use. The
extent of fragmentation depends on the memory
allocation scheme. This breaking of free memory into
small pieces that cannot be allocated to incoming
processes is called fragmentation.
Types of Fragmentation
There are mainly two types of fragmentation in an
operating system. These are as follows:
Internal Fragmentation
External Fragmentation
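Internal fragmentation, for example, is easy to quantify when memory is handed out in fixed-size blocks: it is the space wasted inside the last block. A small sketch, assuming a 4 KiB block size (an assumption for illustration):

```python
BLOCK_SIZE = 4096  # assumed fixed block/partition size in bytes

def internal_fragmentation(request: int) -> int:
    """Bytes wasted when a request is rounded up to whole blocks."""
    blocks = -(-request // BLOCK_SIZE)  # ceiling division
    return blocks * BLOCK_SIZE - request

def external_fragmentation(holes: list[int], request: int) -> bool:
    """True if enough free memory exists in total, but no single
    hole is large enough to satisfy the request."""
    return sum(holes) >= request and max(holes) < request

print(internal_fragmentation(5000))            # 3192 bytes wasted in 2 blocks
print(external_fragmentation([300, 500, 400], 1000))  # True
```

A request of 5000 bytes needs two 4096-byte blocks (8192 bytes), wasting 3192 bytes internally; the second example shows 1200 bytes free in total, yet a 1000-byte request fails because no single hole can hold it.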
Deadlock
Deadlock is a situation in which a set of processes is blocked
because each process is holding a resource while waiting for
another resource acquired by some other process.
Consider two trains approaching each other on a single
track: once they are in front of each other, neither train
can move. A similar situation occurs in operating systems
when two or more processes each hold some resources and
wait for resources held by the others.
Deadlock Detection
A deadlock can be detected by a resource scheduler, since it
keeps track of all the resources allocated to the different
processes. After a deadlock is detected, it can be resolved
using the following methods −
All the processes involved in the deadlock are terminated.
This is not a good approach, as all the progress made by
those processes is destroyed.
Resources are preempted from some processes and given to
others until the deadlock is resolved.
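The scheduler's bookkeeping reduces to a wait-for graph: an edge P → Q means process P is waiting for a resource held by Q, and a deadlock exists exactly when this graph contains a cycle. A minimal detection sketch (the graph representation is an illustrative choice, not a specific OS API):

```python
def has_deadlock(wait_for: dict[str, set[str]]) -> bool:
    """Detect a cycle in a wait-for graph (process -> processes it waits on)."""
    visited, on_stack = set(), set()

    def dfs(p: str) -> bool:
        visited.add(p)
        on_stack.add(p)
        for q in wait_for.get(p, set()):
            # A back edge to a process on the current DFS path is a cycle.
            if q in on_stack or (q not in visited and dfs(q)):
                return True
        on_stack.discard(p)
        return False

    return any(dfs(p) for p in wait_for if p not in visited)

# P1 waits on P2 and P2 waits on P1: circular wait, hence deadlock.
print(has_deadlock({"P1": {"P2"}, "P2": {"P1"}}))  # True
print(has_deadlock({"P1": {"P2"}, "P2": set()}))   # False
```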
Methods for handling deadlock
There are three ways to handle deadlock:
1) Deadlock prevention or avoidance: The idea is to
never let the system enter a deadlocked state.
Each category can be examined individually.
Prevention works by negating one of the necessary
conditions for deadlock (mutual exclusion, hold and
wait, no preemption, circular wait). Avoidance, by
contrast, looks ahead: it requires the assumption
that all information about the resources a process
WILL need is known before the process executes.
The Banker's algorithm (a gift from Dijkstra) is
used to avoid deadlock.
2) Deadlock detection and recovery: Let deadlock
occur, then use preemption to handle it once it has
occurred.
3) Ignore the problem altogether: If deadlock is
very rare, let it happen and reboot the system.
This is the approach that both Windows and UNIX
take.
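The avoidance idea can be illustrated with the safety check at the heart of the Banker's algorithm: a state is safe if there exists some order in which every process can obtain its declared maximum and finish. This is a sketch of that check only (not a full request-granting implementation); the resource numbers below are the classic textbook example, not values from this document.

```python
def is_safe(available, max_need, allocated):
    """Banker's safety check: True if some execution order lets every
    process acquire its declared maximum need and run to completion."""
    work = available[:]
    # need[i] = what process i may still request.
    need = [[m - a for m, a in zip(mrow, arow)]
            for mrow, arow in zip(max_need, allocated)]
    finished = [False] * len(allocated)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Pretend process i runs to completion and releases everything.
                work = [w + a for w, a in zip(work, allocated[i])]
                finished[i] = True
                progress = True
    return all(finished)

# Classic 5-process, 3-resource-type example: the state is safe.
print(is_safe([3, 3, 2],
              [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]],
              [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]))  # True
```

A request is granted only if the state that would result still passes this check; otherwise the process waits, and the system can never slide into deadlock.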
Ostrich Algorithm
The ostrich algorithm means that deadlock is
simply ignored and assumed never to occur. This
is done because in some systems the cost of
handling deadlock is much higher than that of
simply ignoring it, since it occurs very rarely.
So it is simply assumed that deadlock will never
occur, and the system is rebooted if it does
happen by any chance.
Memory management
Memory management is the functionality of an
operating system that handles or manages
primary memory and moves processes back and
forth between main memory and disk during
execution. Memory management keeps track of
each and every memory location, regardless of
whether it is allocated to some process or free.
It determines how much memory is to be allocated
to each process and decides which process will get
memory at what time. It tracks whenever memory
is freed or unallocated and updates the status
accordingly.
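This bookkeeping role can be sketched with a toy tracker that records, for each fixed-size frame of memory, whether it is free or which process owns it. This is a deliberate simplification (real kernels use far more elaborate structures); the class and method names are illustrative, not a real API.

```python
class MemoryTracker:
    """Toy bookkeeping: which fixed-size frames are free or allocated."""

    def __init__(self, frames: int):
        self.owner = [None] * frames  # None means the frame is free

    def allocate(self, pid: str, count: int) -> list[int]:
        """Give `count` free frames to process `pid`; return their indices."""
        free = [i for i, o in enumerate(self.owner) if o is None]
        if len(free) < count:
            raise MemoryError("not enough free frames")
        for i in free[:count]:
            self.owner[i] = pid
        return free[:count]

    def release(self, pid: str) -> None:
        """Mark every frame owned by `pid` as free again."""
        self.owner = [None if o == pid else o for o in self.owner]

mm = MemoryTracker(8)
print(mm.allocate("A", 3))      # [0, 1, 2]
mm.release("A")
print(mm.owner.count(None))     # 8 -- all frames free again
```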
Types of OS
Batch operating system
The users of a batch operating system do not interact with
the computer directly. Each user prepares a job on an
offline device, such as punched cards, and submits it to the
computer operator. To speed up processing, jobs with similar
needs are batched together and run as a group. The
programmers leave their programs with the operator, and the
operator then sorts the programs with similar requirements
into batches.
The problems with batch systems are as follows −
Lack of interaction between the user and the job.
The CPU is often idle, because mechanical I/O devices are
much slower than the CPU.
It is difficult to provide the desired priority.
Time-sharing operating systems
Time-sharing is a technique that enables many people,
located at various terminals, to use a particular computer
system at the same time. Time-sharing, or multitasking, is a
logical extension of multiprogramming. Processor time
shared among multiple users simultaneously is termed
time-sharing.
The main difference between Multiprogrammed Batch
Systems and Time-Sharing Systems is that in case of
Multiprogrammed batch systems, the objective is to
maximize processor use, whereas in Time-Sharing Systems,
the objective is to minimize response time.
Distributed operating System
Distributed systems use multiple central
processors to serve multiple real-time
applications and multiple users. Data
processing jobs are distributed among the
processors accordingly.
The processors communicate with one another
through various communication lines (such as
high-speed buses or telephone lines). These
are referred to as loosely coupled systems or
distributed systems. Processors in a distributed
system may vary in size and function, and are
referred to as sites, nodes, computers, and so on.
Network operating System
A network operating system runs on a server and
gives the server the capability to manage data,
users, groups, security, applications, and other
networking functions. The primary purpose of a
network operating system is to allow shared file and
printer access among multiple computers in a network,
typically a local area network (LAN) or a private
network, and to provide access to other networks.
Real Time operating System
A real-time system is defined as a data processing
system in which the time interval required to
process and respond to inputs is so small that it
controls the environment. The time taken by the
system to respond to an input and display the
required updated information is termed the
response time. In this method, the response time
is very short compared to online processing.
Process State Transitions
Step 1 − Whenever a new process is created, it is admitted into the
ready state.
Step 2 − If no other process is present in the running state, the
scheduler's dispatcher moves the process to the running state.
Step 3 − If a higher-priority process becomes ready, the running,
uncompleted process is preempted and sent back to the ready state.
Step 4 − A running process that requests I/O or waits for an event
moves to the waiting state; whenever the I/O or event completes, an
interrupt signal sends the process back to the ready state.
Step 5 − Whenever a process completes its execution in the
running state, it exits to the terminated state, which is the completion
of the process.
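The five steps above can be sketched as a transition table. The event names (`admit`, `dispatch`, `preempt`, and so on) are illustrative labels for this sketch, not standard kernel API names.

```python
# Allowed process state transitions, following the five steps above.
TRANSITIONS = {
    ("new", "admit"): "ready",          # Step 1
    ("ready", "dispatch"): "running",   # Step 2
    ("running", "preempt"): "ready",    # Step 3: higher-priority arrival
    ("running", "io_wait"): "waiting",  # Step 4: process requests I/O
    ("waiting", "io_done"): "ready",    # Step 4: interrupt on completion
    ("running", "exit"): "terminated",  # Step 5
}

def step(state: str, event: str) -> str:
    """Apply one event; reject transitions the model does not allow."""
    if (state, event) not in TRANSITIONS:
        raise ValueError(f"illegal transition: {event} in state {state}")
    return TRANSITIONS[(state, event)]

s = "new"
for event in ["admit", "dispatch", "io_wait", "io_done", "dispatch", "exit"]:
    s = step(s, event)
print(s)  # terminated
```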
Process Control Block
A Process Control Block (PCB) is a data structure that
contains information about its associated process. The
process control block is also known as a task control block,
an entry in the process table, etc.
It is very important for process management, as the data
structuring for processes is done in terms of the PCB. It also
records the current state of the process.
Race Condition
A race condition is a situation that may occur inside a critical
section. It happens when the result of executing multiple
threads in a critical section differs depending on the order in
which the threads execute.
Race conditions in critical sections can be avoided if the
critical section is treated as an atomic instruction. Also,
proper thread synchronization using locks or atomic variables
can prevent race conditions.
Context Switching
Context switching involves storing the context, or state, of a
process so that it can be reloaded when required and execution
can be resumed from the same point as before. This is a feature
of a multitasking operating system and allows a single CPU to be
shared by multiple processes.
Virtual Memory
A computer can address more memory than the amount physically
installed on the system. This extra memory is called virtual
memory, and it is a section of a hard disk set up to emulate the
computer's RAM.
The main visible advantage of this scheme is that programs can
be larger than physical memory. Virtual memory serves two
purposes. First, it allows us to extend the use of physical memory
by using disk. Second, it provides memory protection, because
each virtual address is translated to a physical address.
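The translation step can be sketched with a toy single-level page table. The 4 KiB page size, the table contents, and the writable flag are assumptions for illustration; real MMUs use multi-level tables in hardware.

```python
PAGE_SIZE = 4096  # assumed page size

# Hypothetical page table: virtual page number -> (physical frame, writable?)
PAGE_TABLE = {
    0: (5, True),
    1: (9, False),  # read-only page: writes to it would trap
}

def translate(vaddr: int, write: bool = False) -> int:
    """Map a virtual address to a physical one, enforcing protection."""
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn not in PAGE_TABLE:
        raise MemoryError(f"page fault at {vaddr:#x}")  # page not resident
    frame, writable = PAGE_TABLE[vpn]
    if write and not writable:
        raise PermissionError(f"protection fault at {vaddr:#x}")
    return frame * PAGE_SIZE + offset

print(hex(translate(0x1004)))  # virtual 0x1004 is in page 1 -> frame 9 -> 0x9004
```

The two purposes named above both appear here: a missing entry becomes a page fault (the page can be fetched from disk, extending memory), and the writable bit gives per-page protection.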
Swapping
Swapping is a mechanism in which a process can be temporarily
swapped out of main memory (moved) to secondary storage
(disk), making that memory available to other processes. At
some later time, the system swaps the process back from
secondary storage into main memory.
Though performance is usually affected by the swapping process,
it helps in running multiple large processes in parallel, which is
why swapping is also known as a technique for memory
compaction. The total time taken by the swapping process
includes the time it takes to move the entire process to
secondary disk, the time to copy the process back to memory,
and the time the process takes to regain main memory.
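The total swap time is therefore roughly twice the one-way transfer time. A back-of-the-envelope sketch, where the process size, disk transfer rate, and latency are all assumed numbers for illustration:

```python
# Rough swap-time estimate under assumed numbers (illustrative only).
process_size_mib = 100      # size of the process image in MiB
transfer_rate_mib_s = 50    # assumed disk transfer rate in MiB/s
latency_s = 0.008           # assumed average disk latency per transfer

# One-way time: seek/rotational latency plus the bulk transfer.
one_way = latency_s + process_size_mib / transfer_rate_mib_s
total = 2 * one_way         # swap out + swap back in
print(f"{total:.3f} s")     # 4.016 s
```

A 100 MiB process at 50 MiB/s costs about 2 seconds each way, so a single swap cycle costs roughly 4 seconds before the process even runs again, which is why swapping hurts performance even as it enables more processes to coexist.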