
OPERATING SYSTEM

Unit III – Memory Management

S.Chithra
Department of Computer Science and Applications
MEMORY MANAGEMENT
Memory Hierarchy Design:
In computer system design, the memory hierarchy is an enhancement that organizes memory so as to minimize access time.
• The memory hierarchy was developed based on a program behavior known as locality of reference.
• The memory hierarchy design is divided into two main types:
  • External or secondary memory: magnetic disk, optical disk, and magnetic tape, i.e. peripheral storage devices that the processor accesses via an I/O module.
  • Internal or primary memory: main memory, cache memory, and CPU registers. This is directly accessible by the processor.
,
MEMORY MANAGEMENT
MEMORY MANAGEMENT
Memory Management:
Memory management is the functionality of an operating system
which handles or manages primary memory and moves processes
back and forth between main memory and disk during execution.
 Main Memory refers to a physical memory that is the internal memory
to the computer.
 Memory management keeps track of each and every memory
location, regardless of either it is allocated to some process or it is
free.
 It checks how much memory is to be allocated to processes.
 It decides which process will get memory at what time.
 It tracks whenever some memory gets freed or unallocated and
correspondingly it updates the status.
Process Address Space:
The process address space is the set of logical addresses that a process references in its code.
• The operating system takes care of mapping the logical addresses to physical addresses at the time of memory allocation to the program.
• There are three types of addresses used in a program before and after memory is allocated:
  • Symbolic addresses: the addresses used in source code. Variable names, constants, and instruction labels are the basic elements of the symbolic address space.
  • Relative addresses: at the time of compilation, a compiler converts symbolic addresses into relative addresses.
  • Physical addresses: the loader generates these addresses at the time when a program is loaded into main memory.
Memory Loading:
All programs are loaded into main memory for execution. Sometimes the complete program is loaded into memory; at other times a certain part or routine of the program is loaded into main memory only when it is called by the program.
• There are two loading techniques:
  • Static loading: the absolute program (and data) is loaded into memory in order for execution to start.
  • Dynamic loading: dynamic routines of the library are stored on disk in relocatable form and are loaded into memory only when they are needed by the program.
Swapping:
Swapping is the process of bringing each process into main memory, running it for a while, and then putting it back on the disk.
• Swapping is also known as a technique for memory compaction.
• Sometimes there is not enough main memory to hold all the currently active processes in a timesharing system.
• The total time taken by the swapping process includes the time it takes to move the entire process to secondary storage, the time to copy the process back to memory, and the time the process takes to regain main memory.
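As a rough illustration of that last point (the numbers below are assumed for the example, not from the slides), the dominant cost of a swap is the transfer time, which grows with the size of the process:

```python
def swap_time(process_mb, transfer_mb_per_s, latency_s=0.0):
    """Time to move a process between memory and swap disk (one direction)."""
    return latency_s + process_mb / transfer_mb_per_s

# Swap out + swap back in for a 100 MB process on a 50 MB/s disk
# with an assumed 8 ms average latency per transfer:
total = 2 * swap_time(100, 50, latency_s=0.008)
print(round(total, 3))  # 4.016 seconds
```

Because whole processes move in and out, even modest process sizes make swapping expensive relative to a context switch.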
Partition Allocation:
In partition allocation, when there is more than one partition freely available to accommodate a process's request, a partition must be selected.
• To choose a particular partition, a partition allocation method is needed.
• When it is time to load a process into main memory and there is more than one free block of sufficient size, the OS decides which free block to allocate.
• There are three common placement algorithms:
  • First fit: the first hole that is big enough is allocated to the program.
  • Best fit: the smallest hole that is big enough is allocated to the program.
  • Worst fit: the largest hole that is big enough is allocated to the program.
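The three placement algorithms can be sketched as follows. Here `holes` is a hypothetical list of free-block sizes, each function returns the index of the chosen hole (or None if nothing fits), and ties are broken by the lowest index:

```python
def first_fit(holes, size):
    """Return the index of the first hole big enough, else None."""
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    """Return the index of the smallest hole big enough, else None."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(fits)[1] if fits else None

def worst_fit(holes, size):
    """Return the index of the largest hole big enough, else None."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(fits)[1] if fits else None

holes = [100, 500, 200, 300, 600]      # free hole sizes in KB
print(first_fit(holes, 212))   # 1 (the 500 KB hole)
print(best_fit(holes, 212))    # 3 (the 300 KB hole)
print(worst_fit(holes, 212))   # 4 (the 600 KB hole)
```

First fit is the cheapest to run; best fit minimizes leftover space per allocation; worst fit leaves the largest usable remainder.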
Memory Partitions:
Memory allocation is the process by which computer programs are assigned memory or space.
• Main memory usually has two partitions:
  • Low memory − the operating system resides in this memory.
  • High memory − user processes are held in high memory.
• The operating system uses one of the following memory partitioning types:
  • Fixed / static-partition allocation
  • Dynamic / multiple-partition allocation
Dynamic Memory Partitioning:
In this technique, the partition size is not declared initially; it is declared at the time of process loading.
• The first partition is reserved for the operating system.
• The remaining space is divided into parts.
• The size of each partition will be equal to the size of the process.
• The partition size varies according to the need of the process, so that internal fragmentation can be avoided.
Fixed / Contiguous Memory Partitions:
In this type of allocation, main memory is divided into a number of fixed-sized partitions, where each partition should contain only one process.
• In this technique, the main memory is divided into partitions of equal or different sizes.
• The operating system always resides in the first partition, while the other partitions can be used to store user processes.
• The memory is assigned to the processes in a contiguous way.
• The partitions cannot overlap.
• A process must be contiguously present in a partition for execution.
Contiguous Memory Allocation
Contiguous memory allocation refers to a memory management technique in which, whenever a user process requests memory, one of the sections of the contiguous memory block is given to that process in accordance with its requirement.
Techniques for Contiguous Memory Allocation
• Depending on the needs of the process making the memory request, a single contiguous piece of memory blocks is assigned.
• It is performed by creating fixed-sized memory segments and designating a single process to each partition. The degree of multiprogramming will therefore be constrained to the number of memory-based fixed partitions.

There are two ways to allocate this:
• Fixed-size partitioning method
• Flexible (variable-size) partitioning method
Fixed-size Partitioning Method
• Each process in this method of contiguous memory allocation is given a fixed-size contiguous block in main memory.
• This means that the entire memory is partitioned into contiguous blocks of fixed size, and each time a process enters the system, it is given one of the available blocks.
• Each process receives a block of memory space of the same size, regardless of the size of the process. Static partitioning is another name for this approach.
The memory has fixed-sized chunks because we are using the fixed-size partition technique. The first process, which is 3 MB in size, is given a 5 MB block; the second process, which is 1 MB in size, is also given a 5 MB block. So it doesn't matter how big the process is: the same fixed-size memory block is assigned to each.

The degree of multiprogramming refers to the number of processes that can run concurrently in memory. Therefore, the number of blocks formed in RAM determines the system's level of multiprogramming.
Advantages

A fixed-size partition system has the following benefits:

• This strategy is easy to employ because each block is the same size.
Now all that is left to do is allocate processes to the fixed memory
blocks that have been divided up.
• It is simple to keep track of how many memory blocks are still
available, which determines how many further processes can be
allocated memory.
• This approach can be used in a system that requires
multiprogramming since numerous processes can be maintained in
memory at once.
Variable-Sized Partitions
• Dynamic partitioning is another name for this. Allocation in this type of partitioning is done dynamically.
• Here, the size of each partition isn't declared initially. Only once we know the process size will we know the size of the partition.
• In this case the size of the process and the partition are equal; thus, it helps in preventing internal fragmentation.
Pros of Contiguous Memory Allocation
1. It supports a user's random access to files.
2. The user gets excellent read performance.
3. It is fairly simple to implement.
Cons of Contiguous Memory Allocation
1. Having a file grow might be somewhat difficult.
2. The disk may become fragmented.
Fragmentation:
Fragmentation is an unwanted problem in which memory blocks cannot be allocated to processes because of their small size, so the blocks remain unused.
• A process whose size is greater than the largest partition cannot be executed; the lack of sufficiently large contiguous memory blocks means they cannot be allocated to new upcoming processes, which results in inefficient use of memory.
• Basically, there are two types of fragmentation:
  • Internal fragmentation
  • External fragmentation
Internal Fragmentation:
The memory block assigned to a process is bigger than requested, so some portion of the memory is left unused, as it cannot be used by another process.
• In this fragmentation, the process is allocated a memory block of a size greater than the size of that process.
• Due to this, some part of the memory is left unused, and this causes internal fragmentation.
External Fragmentation:
The total memory space is enough to satisfy a request or to hold a process, but it is not contiguous, so it cannot be used.
• In this fragmentation, although the total space a process needs is available, we are not able to put that process in memory because that space is not contiguous.
• For example, after some time P1 and P3 complete and their assigned space is freed, but the freed blocks cannot be used to load a 2 MB process into memory, since they are not contiguously located.
Compaction:
• The compaction technique can be used to create more free memory out of fragmented memory.
• External fragmentation can be reduced by compaction: shuffling memory contents to place all free memory together in one large block. To make compaction feasible, relocation should be dynamic.
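A minimal sketch of compaction, under an assumed representation (a list of (owner, size) blocks in address order, with None marking a free block): allocated blocks slide down together and the scattered free space coalesces into one block at the top.

```python
def compact(blocks):
    """blocks: list of (owner, size); owner None means free.
    Returns allocated blocks packed low, with one free block at the top."""
    used = [b for b in blocks if b[0] is not None]
    free = sum(size for owner, size in blocks if owner is None)
    return used + ([(None, free)] if free else [])

mem = [("P1", 3), (None, 2), ("P2", 4), (None, 3), ("P3", 1)]
print(compact(mem))  # [('P1', 3), ('P2', 4), ('P3', 1), (None, 5)]
```

After compaction, the two free holes of 2 MB and 3 MB become a single 5 MB hole, which is why dynamic relocation (processes must be movable at run time) is a precondition.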
Internal fragmentation can be reduced by assigning the smallest partition that is still large enough for the process.
Paging:
• Paging is a storage mechanism used to retrieve processes from secondary storage into main memory in the form of pages.
• The main idea behind paging is to divide each process into pages. The main memory is likewise divided into frames.
• One page of the process is stored in one of the frames of memory. The pages can be stored at different locations in memory, but the priority is always to find contiguous frames or holes.
Example
Consider a main memory of size 16 KB and a frame size of 1 KB; the main memory will therefore be divided into a collection of 16 frames of 1 KB each.
There are 4 processes in the system, namely P1, P2, P3 and P4, of 4 KB each. Each process is divided into pages of 1 KB each, so that one page can be stored in one frame.
Initially, all the frames are empty, so the pages of the processes are stored in a contiguous way.
Consider that P2 and P4 are moved to the waiting state after some time. Now 8 frames become empty, so other pages can be loaded in that empty space. The process P5, of size 8 KB (8 pages), is waiting inside the ready queue.

Given that we have 8 non-contiguous frames available in memory, and that paging provides the flexibility of storing a process at different places, we can load the pages of process P5 in place of P2 and P4.
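The example above can be simulated with a simple frame table. The numbers (16 frames of 1 KB, 4 pages per process) follow the example; the function names and data structures are invented for illustration:

```python
FRAMES = 16                       # 16 KB main memory / 1 KB frame size
frame_table = [None] * FRAMES     # frame number -> (process, page) or None
page_tables = {}                  # per-process page table: page -> frame

def load(process, n_pages):
    """Place each page in any free frame; frames need not be contiguous."""
    page_tables[process] = {}
    for page in range(n_pages):
        frame = frame_table.index(None)       # first free frame
        frame_table[frame] = (process, page)
        page_tables[process][page] = frame

def unload(process):
    for frame in page_tables.pop(process).values():
        frame_table[frame] = None

for p in ("P1", "P2", "P3", "P4"):
    load(p, 4)                                # 4 processes of 4 pages each
print(page_tables["P2"])                      # {0: 4, 1: 5, 2: 6, 3: 7}

unload("P2"); unload("P4")                    # P2 and P4 leave: 8 frames free
load("P5", 8)                                 # P5's pages fill the scattered holes
print(sorted(page_tables["P5"].values()))     # [4, 5, 6, 7, 12, 13, 14, 15]
```

P5's eight pages land in the two non-adjacent groups of frames freed by P2 and P4, which is exactly the flexibility that paging provides.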
Paging is a non-contiguous memory allocation technique. The page table is a table that maps a page number to the frame number containing that page.
• Disadvantage of paging:
  • It increases the effective access time due to the increased number of memory accesses.
  • One memory access is required to get the frame number from the page table.
  • Another memory access is required to get the word from the page.
Translation Lookaside Buffer:
• The Translation Lookaside Buffer (TLB) is a solution that tries to reduce the effective access time.
• Being hardware, the TLB's access time is much less than that of main memory.

Structure:
The Translation Lookaside Buffer (TLB) consists of two columns:
• Page Number
• Frame Number
Translating a Logical Address into a Physical Address:
In a paging scheme using a TLB, the logical address generated by the CPU is translated into the physical address using the following steps.

Step-01:
The CPU generates a logical address consisting of two parts:
• Page Number
• Page Offset
Step-02:
The TLB is checked to see if it contains an entry for the referenced page number. The referenced page number is compared with the TLB entries all at once. Now, two cases are possible:

Case-01: TLB hit
If the TLB contains an entry for the referenced page number, a TLB hit occurs. In this case, the TLB entry is used to get the corresponding frame number for the referenced page number.

Case-02: TLB miss
If the TLB does not contain an entry for the referenced page number, a TLB miss occurs. In this case, the page table is used to get the corresponding frame number for the referenced page number. Then the TLB is updated with the page number and frame number for future references.
Step-03:
• After the frame number is obtained, it is combined with the page offset to generate the physical address.
• The physical address is then used to read the required word from main memory.
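The three steps can be sketched as a small function. The 1 KB page size is assumed for the example, and the plain dictionaries stand in for the hardware TLB and the in-memory page table:

```python
PAGE_SIZE = 1024  # assumed 1 KB pages

def translate(logical, page_table, tlb):
    """Translate a logical address to a physical address via TLB + page table."""
    page, offset = divmod(logical, PAGE_SIZE)  # Step-01: split the address
    if page in tlb:                  # Step-02, TLB hit
        frame = tlb[page]
    else:                            # Step-02, TLB miss: consult the page table
        frame = page_table[page]
        tlb[page] = frame            # cache the mapping for future references
    return frame * PAGE_SIZE + offset  # Step-03: combine frame and offset

page_table = {0: 5, 1: 2, 2: 7}
tlb = {}
print(translate(2 * PAGE_SIZE + 100, page_table, tlb))  # 7*1024 + 100 = 7268
print(tlb)  # {2: 7} -- the miss populated the TLB
```

A second reference to page 2 would now hit in the TLB and skip the page-table access.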
Advantages:
The advantages of using a TLB are:
• The TLB reduces the effective access time.
• Only one memory access is required when a TLB hit occurs.
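The reduction can be quantified with the usual effective-access-time formula, EAT = h(t + m) + (1 − h)(t + 2m), where h is the hit ratio, t the TLB lookup time and m the memory access time. The timings below (10 ns TLB, 100 ns memory, 90% hit ratio) are illustrative assumptions only:

```python
def effective_access_time(hit_ratio, tlb_ns, mem_ns):
    """EAT for a paging system with a TLB."""
    hit = tlb_ns + mem_ns          # TLB hit: one memory access for the data
    miss = tlb_ns + 2 * mem_ns     # TLB miss: page-table access + data access
    return hit_ratio * hit + (1 - hit_ratio) * miss

print(round(effective_access_time(0.9, 10, 100), 1))  # 120.0 ns
```

Without a TLB, every reference would cost two memory accesses (200 ns here), so even a modest hit ratio is a large win.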
Disadvantages:
A major disadvantage of using a TLB is:
• After a process has been running for some time, the TLB hit rate increases and the process starts to run smoothly, but then a context switch occurs.
• The entire content of the TLB is flushed, and the TLB is then updated for the currently running process.
• This happens again and again.
Memory Protection:
We have to protect the operating system from user processes, which can be done by using a relocation register together with a limit register.
• Here, the relocation register holds the value of the smallest physical address, whereas the limit register holds the range of the logical addresses. These two registers impose the condition that each logical address must be less than the limit register.
• The memory management unit translates the logical address by adding the value in the relocation register dynamically, after which the translated (or mapped) address is sent to memory.
As shown in the diagram above, when the scheduler selects a process for execution, the dispatcher is responsible for loading the relocation and limit registers with the correct values as part of the context switch. Because every address generated by the CPU is checked against these two registers, we can protect the operating system, the programs, and the data of other users from being altered by the running process.
Methods of memory protection:
• Memory protection using keys
• Memory protection using rings
• Capability-based addressing
• Memory protection using masks
• Memory protection using segmentation
• Memory protection using simulated segmentation
• Memory protection using dynamic tainting
Virtual Memory:
Virtual memory is important for improving system performance, multitasking, running large programs, and flexibility.
• Benefits of using virtual memory:
  • Frees applications from managing shared memory and saves users from having to add memory modules when RAM space runs out.
  • Increased security because of memory isolation.
  • Multiple larger applications can be run simultaneously.
  • Avoids external fragmentation.
  • Effective CPU use.
  • Data can be moved automatically.
Allocation of Frames:
• Demand paging necessitates the development of a page-replacement algorithm and a frame allocation algorithm. Frame allocation algorithms are used when there are multiple processes; they help decide how many frames to allocate to each process.
• There are various constraints on frame allocation strategies:
  • You cannot allocate more than the total number of available frames.
  • At least a minimum number of frames should be allocated to each process.
Frame allocation algorithms:
The two algorithms commonly used to allocate frames to a process are:

Equal allocation: In a system with x frames and y processes, each process gets an equal number of frames, i.e. x/y. For instance, if the system has 48 frames and 9 processes, each process will get 5 frames. The three frames that are not allocated to any process can be used as a free-frame buffer pool.
Disadvantage: In systems with processes of varying sizes, it does not make much sense to give each process equal frames. Allocating a large number of frames to a small process leads to the wastage of many allocated but unused frames.
Proportional allocation:
Frames are allocated to each process according to the process size. For a process pi of size si, the number of allocated frames is ai = (si/S)*m, where S is the sum of the sizes of all the processes and m is the number of frames in the system.

For instance, in a system with 62 frames, if there is a process of 10 KB and another process of 127 KB, then the first process will be allocated (10/137)*62 = 4 frames and the other process will get (127/137)*62 = 57 frames.
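The ai = (si/S)*m formula, checked against the 62-frame example above (truncating to whole frames, as the example does):

```python
def proportional_allocation(m, sizes):
    """sizes: process -> size; returns process -> allocated frames (floor)."""
    S = sum(sizes.values())
    return {p: int(s / S * m) for p, s in sizes.items()}

print(proportional_allocation(62, {"p1": 10, "p2": 127}))
# {'p1': 4, 'p2': 57}
```

Truncation can leave a frame or two unallocated (here 62 − 4 − 57 = 1), which a real allocator would hand out as a remainder or keep in a free pool.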
Advantage: All the processes share the available
frames according to their needs, rather than
equally.
Global vs Local Allocation:
The number of frames allocated to a process can also change dynamically, depending on whether global replacement or local replacement is used for replacing pages on a page fault.

Local replacement: When a process needs a page that is not in memory, it can bring in the new page and allocate it a frame from its own set of allocated frames only.
Advantage: The pages in memory for a particular process, and its page fault ratio, are affected by the paging behavior of only that process.
Disadvantage: A low-priority process may hinder a high-priority process by not making its frames available to the high-priority process.
Global replacement:
When a process needs a page that is not in memory, it can bring in the new page and allocate it a frame from the set of all frames, even if that frame is currently allocated to some other process; that is, one process can take a frame from another.

Advantage: Does not hinder the performance of processes and hence results in greater system throughput.

Disadvantage: The page fault ratio of a process cannot be solely controlled by the process itself. The pages in memory for a process depend on the paging behavior of other processes as well.
Thrashing is a condition or situation in which the system spends a major portion of its time servicing page faults, while the actual processing done is negligible.

Causes of thrashing:
• High degree of multiprogramming.
• Lack of frames.
• Page replacement policy.
Algorithms during Thrashing
Whenever thrashing starts, the operating system tries to apply either the global page replacement algorithm or the local page replacement algorithm.

1. Global Page Replacement
Since global page replacement can bring in any page, it tries to bring in more pages whenever thrashing is found. But what actually happens is that no process gets enough frames, and as a result the thrashing increases more and more. Therefore, the global page replacement algorithm is not suitable when thrashing happens.

2. Local Page Replacement
Unlike the global page replacement algorithm, local page replacement selects only pages that belong to that process, so there is a chance of reducing the thrashing. But local page replacement has been shown to have many disadvantages of its own. Therefore, local page replacement is just an alternative to global page replacement in a thrashing scenario.
How to Eliminate Thrashing
• Adjust the swap file size: if the system swap file is not configured correctly, disk thrashing can also happen.
• Increase the amount of RAM: as insufficient memory can cause disk thrashing, one solution is to add more RAM. With more memory, the computer can handle tasks easily and doesn't have to work excessively. Generally, this is the best long-term solution.
• Decrease the number of applications running on the computer: if too many applications are running in the background, they consume a lot of system resources, and the little that remains can result in thrashing. Closing some applications releases resources, so you can avoid thrashing to some extent.
• Replace programs: replace programs that occupy a lot of memory with equivalents that use less memory.
Techniques to Prevent Thrashing
Local page replacement is better than global page replacement, but it has many disadvantages, so it is sometimes not helpful. Therefore, below are some other techniques that are used to handle thrashing:

1. Locality Model
A locality is a set of pages that are actively used together. The locality model states that as a process executes, it moves from one locality to another. Thus, a program is generally composed of several different localities, which may overlap.

For example, when a function is called, it defines a new locality where memory references are made to the function-call instructions, local and global variables, etc. Similarly, when the function is exited, the process leaves this locality.
2. Working-Set Model
This model is based on the concept of the locality model stated above.

The basic principle is that if we allocate enough frames to a process to accommodate its current locality, it will fault only when it moves to a new locality. But if the allocated frames are fewer than the size of the current locality, the process is bound to thrash.
According to this model, based on a parameter A, the working set is defined as the set of pages in the most recent A page references. Hence, all actively used pages will always end up being part of the working set.

The accuracy of the working set depends on the value of the parameter A. If A is too large, working sets may overlap. On the other hand, for smaller values of A, the locality might not be covered entirely.
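A sketch of the working set for a page-reference string, with the window parameter named A as in the text (the reference string is an invented example):

```python
def working_set(references, A):
    """Set of distinct pages among the most recent A page references."""
    return set(references[-A:])

refs = [1, 2, 1, 3, 4, 3, 3, 4]   # page-reference string
print(working_set(refs, 4))   # {3, 4}: only pages 3 and 4 in the last 4 refs
print(working_set(refs, 6))   # {1, 3, 4}: a wider window covers more pages
```

The two calls show the sensitivity to A: the smaller window sees only the current locality {3, 4}, while the larger one also drags in page 1 from an earlier locality.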
3. Page Fault Frequency
A more direct approach to handling thrashing uses the concept of page-fault frequency.
• The problem associated with thrashing is the high page fault rate; thus, the concept here is to control the page fault rate.
• If the page fault rate is too high, it indicates that the process has too few frames allocated to it. On the contrary, a low page fault rate indicates that the process has too many frames.
• Upper and lower limits can be established on the desired page fault rate, as shown in the diagram.
• If the page fault rate falls below the lower limit, frames can be removed from the process. Similarly, if the page fault rate exceeds the upper limit, more frames can be allocated to the process.
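The control loop above can be sketched in a few lines; the limits and step size below are arbitrary illustrative values, not prescribed ones:

```python
def adjust_frames(frames, fault_rate, lower=0.02, upper=0.10, step=1):
    """Page-fault-frequency control: grow or shrink a process's allocation."""
    if fault_rate > upper:
        return frames + step            # too many faults: give it more frames
    if fault_rate < lower:
        return max(1, frames - step)    # wastefully low: reclaim a frame
    return frames                       # within the desired band: no change

print(adjust_frames(8, 0.15))  # 9  (above the upper limit)
print(adjust_frames(8, 0.01))  # 7  (below the lower limit)
print(adjust_frames(8, 0.05))  # 8  (inside the band)
```

If no free frames remain when a process needs more, the same policy implies suspending some process and redistributing its frames, rather than letting everyone thrash.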
OPERATING SYSTEM PROPERTIES
Distributed Environment:
A distributed environment refers to multiple independent CPUs or processors in a computer system.
• An operating system does the following activities related to a distributed environment:
  • The OS distributes computation logic among several physical processors.
  • The processors do not share memory or a clock. Instead, each processor has its own local memory.
  • The OS manages the communication between the processors.
  • They communicate with each other through various communication lines.
Memory Mapped Files in OS
Memory mapping is a technique that allows a part of the virtual address space to be
associated with a file logically. This technique of memory mapping leads to a significant
increase in performance.
Basic Mechanism of Memory Mapping
• The operating system uses virtual memory for memory-mapping a file. This is performed by mapping a disk block to a page present in physical memory. Initially, the file is accessed through demand paging: if a process references an address that does not exist in physical memory, a page fault occurs and the operating system takes charge of bringing the missing page into physical memory.
• A page-sized portion of the file is read from the file system into a physical page.
• Manipulating files through memory, rather than incurring the overhead of the read() and write() system calls, not only simplifies but also speeds up file access and usage.
• Multiple processes may be allowed to map a single file simultaneously to allow sharing of data.
• If any of the processes writes data to the virtual memory, the modified data will be visible to all processes that map the same section of the file.
• The memory-mapping system calls support copy-on-write functionality, which allows processes to share a file in read-only mode while having their own copies of any data they have modified.
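Python's `mmap` module exposes this mechanism. A minimal sketch (the file name and contents are made up for the demo) that reads and writes a file through memory instead of through read()/write() calls:

```python
import mmap
import os
import tempfile

# Create a small scratch file to map (hypothetical demo file).
path = os.path.join(tempfile.mkdtemp(), "demo.bin")
with open(path, "wb") as f:
    f.write(b"hello world")

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mm:   # length 0 maps the entire file
        print(mm[:5])                      # read through memory: b'hello'
        mm[:5] = b"HELLO"                  # in-place write, same length

with open(path, "rb") as f:
    print(f.read())                        # b'HELLO world'
```

The slice assignment never issues a write() system call; the OS writes the dirty page back to the file, which is exactly the demand-paged mechanism described above.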
Types of Memory Mapped Files
Basically, there are two types of memory-mapped files:
Persisted: Persisted files are connected with a source file on disk. After the last process finishes, the data is saved to the source file on disk. These memory-mapped files are appropriate for working with very big source files.
Non-persisted: Non-persisted files are not connected to any disk-based file. The data is lost when the last process using the file completes its required task. These files provide shared memory for inter-process communication (IPC).
Advantages of Memory Mapped Files
• It increases I/O performance, especially when used on large files.
• Accessing a memory-mapped file is faster than using direct system calls like read() and write().
• Another advantage is lazy loading, where only a small amount of RAM is used even for a very large file.
• Shared memory is often implemented by memory-mapping files; thus, it supports data sharing.
Disadvantages of Memory Mapped Files
• In some cases, memory-mapped file I/O may be substantially slower than standard file I/O.
• Only hardware architectures that have an MMU (Memory Management Unit) support memory-mapped files.
• With memory-mapped files, expanding the size of a file is not easy.
Allocating Kernel Memory
The process by which the kernel of the
operating system allocates memory for its
internal operations and data structures is
called kernel memory allocation.
Depending upon the requirements of the system and the type of memory allocation, there are two important kernel memory allocation techniques, namely the buddy system and the slab system.

What is the Buddy Memory Allocation System?
In the buddy system, the available memory space is divided into blocks of fixed, equal sizes. These blocks are organized in the form of a binary tree structure, in which each block has an adjacent buddy block of the same size.
The buddy system is an efficient memory allocation technique because it limits the fragmentation of memory space. The buddy system ensures that allocated blocks come in matching sizes, so that freed blocks can easily be merged with their buddy blocks. Another major advantage of the buddy system is that it allows quick allocation and deallocation of memory blocks, which is an important requirement in real-time systems for enhanced performance.
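Two defining calculations of the buddy system can be sketched directly: a request is rounded up to a power-of-two block size, and a block's buddy address is found by flipping the single address bit that corresponds to its size. The function names and the minimum block size are illustrative:

```python
def buddy_block_size(request, min_block=32):
    """Smallest power-of-two block (>= min_block) that satisfies the request."""
    size = min_block
    while size < request:
        size *= 2
    return size

def buddy_of(addr, size):
    """Address of a block's buddy: XOR flips the bit at the block's size."""
    return addr ^ size

print(buddy_block_size(100))   # 128 -- a 100-byte request gets a 128-byte block
print(buddy_block_size(300))   # 512
print(buddy_of(0, 128))        # 128 -- blocks at 0 and 128 are buddies
print(buddy_of(128, 128))      # 0   -- and the relation is symmetric
```

The gap between the request and the rounded-up block (e.g. 100 vs 128) is exactly the internal fragmentation listed as the scheme's main drawback below.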
Advantages of Buddy System
The following are the major advantages of the buddy system:
• The buddy system involves less external fragmentation than other memory allocation algorithms.
• The buddy system uses a binary tree structure to represent used and unused memory blocks.
• In the buddy system, it is easy to merge adjacent blocks.
• The buddy system provides quick allocation and deallocation of memory.
• The buddy system allocates a block of the correct memory size.
Disadvantages of Buddy System
• The buddy system leads to internal fragmentation.
• The buddy system uses a binary tree, hence it requires all allocation units to be powers of 2.
What is the Slab Memory Allocation System?
The slab system is another technique used for allocating kernel memory. The major advantage of the slab memory allocation system is that it eliminates the fragmentation due to allocation and deallocation of memory. In other words, slab allocation is a memory allocation strategy used in the operating system to manage kernel memory.
Note − In the slab system, the two required terms are slab and cache.

Cache − A cache is a very high-speed, small-sized memory space. In the slab system, a cache has one or more slabs; for each unique kernel data structure there is a single cache.

Slab − A slab is a container that stores data of a kernel object of a specific type. It is made up of physically contiguous pages, as shown in the figure above.
FILE MANAGEMENT
A file is a collection of related information that is recorded on secondary storage; alternatively, a file is a collection of logically related entities. From the user's perspective, a file is the smallest allotment of logical secondary storage.

The name of a file is divided into two parts, separated by a period:
• name
• extension
FILE MANAGEMENT
What is a File System?
A file system is a method an operating system uses to store,
organize, and manage files and directories on a storage
device. Some common types of file systems include:

• FAT (File Allocation Table): An older file system used by older versions of Windows and other operating systems.
• NTFS (New Technology File System): A modern file system used by Windows. It supports features such as file and folder permissions, compression, and encryption.
• ext (Extended File System): A family of file systems commonly used on Linux and Unix-based operating systems.
• HFS (Hierarchical File System): A file system used by macOS.
• APFS (Apple File System): A newer file system introduced by Apple for modern versions of macOS and iOS.
FILE MANAGEMENT
File Attributes And Their Operations

Attributes        Types    Operations
Name              doc      Create
Type              exe      Open
Size              jpg      Read
Creation Date     xls      Write
Author            c        Append
Last Modified     java     Truncate
Protection        class    Delete
                           Close
FILE MANAGEMENT
File type        Usual extension        Function
Executable       exe, com, bin          Ready-to-run machine-language program
Object           obj, o                 Compiled machine language, not linked
Source Code      c, java, pas, asm, a   Source code in various languages
Batch            bat, sh                Commands to the command interpreter
Text             txt, doc               Textual data, documents
Word Processor   wp, tex, rtf, doc      Various word-processor formats
Archive          arc, zip, tar          Related files grouped into one compressed file
Multimedia       mpeg, mov, rm          Containing audio/video information
Markup           xml, html, tex         Textual data and documents
Library          lib, a, so, dll        Libraries of routines for programmers
Print or View    gif, pdf, jpg          A format for printing or viewing an ASCII or binary file
FILE MANAGEMENT
File Access Methods in Operating System
When a file is used, its information is read into computer memory, and there are several ways to access this information. Some systems provide only one access method for files; other systems, such as those of IBM, support many access methods, and choosing the right one for a particular application is a major design problem.

There are three ways to access a file in a computer system: sequential access, direct access, and the indexed sequential method.
FILE MANAGEMENT
1. Sequential Access –
This is the simplest access method. Information in the file is processed in order, one record after the other. This mode of access is by far the most common; for example, editors and compilers usually access files in this fashion.
Reads and writes make up the bulk of the operations on a file. A read operation (read next) reads the next portion of the file and automatically advances the file pointer, which keeps track of the I/O location. Similarly, a write operation (write next) appends to the end of the file and advances the pointer past the newly written material.

Key points:
Data is accessed one record after another, in order.
A read command moves the pointer ahead by one record.
A write command allocates space and moves the pointer to the end of the file.
Such a method is reasonable for tape.
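The read-next/write-next behaviour can be sketched with a toy in-memory file. The class and its record layout are illustrative assumptions, not a real OS interface.

```python
class SequentialFile:
    """Toy sequential-access file: reads advance a pointer one record at a
    time; writes append at the end and move the pointer past the new data."""
    def __init__(self):
        self.records = []
        self.pointer = 0                  # index of the next record to read

    def read_next(self):
        if self.pointer >= len(self.records):
            return None                   # end of file reached
        record = self.records[self.pointer]
        self.pointer += 1                 # pointer advances automatically
        return record

    def write_next(self, record):
        self.records.append(record)             # append at the end of the file
        self.pointer = len(self.records)        # pointer sits past new material
```

To reread the file from the start, the pointer must be reset to 0, mirroring a rewind on tape.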
FILE MANAGEMENT
Advantages of Sequential Access Method:

It is simple to implement this file access mechanism.
It uses lexicographic order to quickly access the next entry.
It is suitable for applications that require access to all
records in a file, in a specific order.
It is less prone to data corruption as the data is written
sequentially and not randomly.
It is a more efficient method for reading large files, as it
only reads the required data and does not waste time
reading unnecessary data.
It is a reliable method for backup and restore operations, as
the data is stored sequentially and can be easily restored if
required.
FILE MANAGEMENT
Disadvantages of Sequential Access Method:

If the file record that needs to be accessed next is not present next to
the current record, this type of file access method is slow.
Moving a sizable chunk of the file may be necessary to insert a new
record.
It does not allow for quick access to specific records in the file. The
entire file must be searched sequentially to find a specific record, which
can be time-consuming.
It is not well-suited for applications that require frequent updates or
modifications to the file. Updating or inserting a record in the middle of
a large file can be a slow and cumbersome process.
Sequential access can also result in wasted storage space if records are
of varying lengths. The space between records cannot be used by other
records, which can result in inefficient use of storage.
FILE MANAGEMENT
2. Direct Access –
Another method is the direct access method, also known as the relative access method. It uses fixed-length logical records that allow the program to read and write records rapidly, in no particular order. Direct access is based on the disk model of a file, since a disk allows random access to any file block. For direct access, the file is viewed as a numbered sequence of blocks or records. Thus, we may read block 14, then block 59, and then write block 17. There is no restriction on the order of reading and writing for a direct-access file.
A block number provided by the user to the operating system is normally a relative block number: the first relative block of the file is 0, then 1, and so on.
FILE MANAGEMENT

Advantages of Direct Access Method:

The files can be accessed immediately, decreasing the average access time.
In the direct access method, there is no need to traverse all the blocks present before a given block in order to access it.
FILE MANAGEMENT
3. Indexed Sequential Method –
This is another method of accessing a file, built on top of the sequential access method. It constructs an index for the file. The index, like the index in the back of a book, contains pointers to the various blocks. To find a record in the file, we first search the index and then, with the help of the pointer, access the file directly.

Key points:
It is built on top of sequential access.
It controls the pointer by using the index.
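The index-then-data lookup can be sketched as follows. The keys, block numbers, and record names are invented for the example; a real implementation would keep the index sorted on disk.

```python
# Index: first key stored on each data block -> block number
index = {"A": 0, "M": 1, "T": 2}

# Data blocks, each holding records in sorted order
blocks = [["Adams", "Lee"], ["Miller", "Smith"], ["Taylor", "Young"]]

def find(key):
    """Search the small index first, then follow the pointer to one block."""
    # Pick the last index entry whose key is <= the search key
    candidates = [b for k, b in sorted(index.items()) if k <= key]
    block_no = candidates[-1]
    return key in blocks[block_no]          # only one data block is scanned
```

Only the index and a single data block are examined, instead of scanning the whole file sequentially.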
FILE MANAGEMENT
File Directories
The collection of files is a file directory. The directory
contains information about the files, including attributes,
location, and ownership. Much of this information, especially
that concerned with storage, is managed by the operating
system. The directory is itself a file, accessible by various file
management routines.
FILE MANAGEMENT
Below is the information contained in a device directory:

Name
Type
Address
Current length
Maximum length
Date last accessed
Date last updated
Owner id
Protection information
FILE MANAGEMENT
The operations performed on a directory are:
Search for a file
Create a file
Delete a file
List a directory
Rename a file
Traverse the file system
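The listed operations can be modelled with a toy single-level directory. The class and its dictionary layout are an illustrative assumption, not any real file system's structures.

```python
class Directory:
    """Toy single-level directory: one flat table of name -> attributes."""
    def __init__(self):
        self.entries = {}

    def create(self, name):
        if name in self.entries:
            raise FileExistsError(name)   # one namespace: names must be unique
        self.entries[name] = {"size": 0}

    def search(self, name):
        return name in self.entries

    def rename(self, old, new):
        self.entries[new] = self.entries.pop(old)

    def delete(self, name):
        del self.entries[name]

    def list_dir(self):
        return sorted(self.entries)
```

The `create` check also shows the single-level naming problem discussed below: two files (even from different users) cannot share a name.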
FILE MANAGEMENT
Advantages of Maintaining Directories
• Efficiency: A file can be located more quickly.
• Naming: It becomes convenient for users, as two users can have the same name for different files, or different names for the same file.
• Grouping: Logical grouping of files can be done by
properties e.g. all java programs, all games etc.
FILE MANAGEMENT
Single-Level Directory
In this, a single directory is maintained for all the users.

• Naming problem: Users cannot have the same name for two files.
• Grouping problem: Users cannot group files according to their needs.
FILE MANAGEMENT
Two-Level Directory
In this, a separate directory is maintained for each user.
Path name: Due to the two levels, every file has a path name used to locate it.
Now, different users can use the same file name.
Searching is efficient in this method.
FILE MANAGEMENT
Tree-Structured Directory
The directory is maintained in the form of
a tree. Searching is efficient and also there
is grouping capability. We have absolute
or relative path name for a file.
FILE MANAGEMENT
File Allocation Methods

The allocation methods define how the files are stored in the disk blocks. There are three main disk space or file allocation methods:

Contiguous Allocation
Linked Allocation
Indexed Allocation
FILE MANAGEMENT
The main idea behind these methods is to provide:

• Efficient disk space utilization.
• Fast access to the file blocks.
FILE MANAGEMENT
1. Contiguous Allocation
In this scheme, each file occupies a contiguous set of blocks on the disk. For example, if a file requires n blocks and is given block b as the starting location, then the blocks assigned to the file will be b, b+1, b+2, ..., b+n-1. This means that, given the starting block address and the length of the file (in blocks), we can determine the blocks occupied by the file.
The directory entry for a file with contiguous allocation contains:

Address of the starting block
Length of the allocated portion

The file 'mail' in the following figure starts at block 19 with length = 6 blocks. Therefore, it occupies blocks 19, 20, 21, 22, 23, 24.
FILE MANAGEMENT
Advantages:
Both sequential and direct access are supported by this scheme. For direct access, the address of the kth block of a file that starts at block b can easily be obtained as (b+k).
This is extremely fast, since the number of seeks is minimal because of the contiguous allocation of file blocks.
Disadvantages:
This method suffers from both internal and external fragmentation, which makes it inefficient in terms of memory utilization.
Increasing the file size is difficult, because it depends on the availability of contiguous memory at a particular instant.
FILE MANAGEMENT
2. Linked List Allocation

In this scheme, each file is a linked list of disk blocks, which need not be contiguous; the disk blocks can be scattered anywhere on the disk.
The directory entry contains pointers to the starting and ending file blocks. Each block contains a pointer to the next block occupied by the file.

The file 'jeep' in the following image shows how the blocks are randomly distributed. The last block (25) contains -1, indicating a null pointer; it does not point to any other block.
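Reading a linked-allocated file means chasing the per-block pointers from the directory's start block. The chain below uses the block numbers of the 'jeep' example, with -1 as the end marker; the dictionary stands in for the pointer field stored inside each disk block.

```python
# Per-block "next" pointers as stored on disk: block number -> next block
disk_next = {9: 16, 16: 1, 1: 10, 10: 25, 25: -1}

def blocks_of(start):
    """Traverse the linked list of blocks from the start block to the -1 end
    marker. Reaching block k requires visiting all k blocks before it, which
    is why linked allocation gives no direct access."""
    chain, block = [], start
    while block != -1:
        chain.append(block)
        block = disk_next[block]       # follow the pointer in the block
    return chain

print(blocks_of(9))   # [9, 16, 1, 10, 25]
```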
FILE MANAGEMENT
Advantages:

This is very flexible in terms of file size. The file size can be increased easily, since the system does not have to look for a contiguous chunk of memory.
This method does not suffer from external fragmentation, which makes it relatively better in terms of memory utilization.
FILE MANAGEMENT
Disadvantages:

Because the file blocks are distributed randomly on the disk, a large number of seeks is needed to access every block individually. This makes linked allocation slower.
It does not support random or direct access: we cannot directly access the blocks of a file. Block k of a file can be reached only by traversing the k blocks before it sequentially, from the starting block of the file, via the block pointers.
The pointers required by linked allocation incur some extra overhead.
FILE MANAGEMENT
3. Indexed Allocation

In this scheme, a special block known as the index block contains the pointers to all the blocks occupied by a file. Each file has its own index block: the ith entry in the index block contains the disk address of the ith file block. The directory entry contains the address of the index block, as shown in the image:
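The one-level lookup can be sketched directly: the directory maps a file name to its index block, and the ith entry of that block is the disk address of file block i. The block numbers reuse the earlier 'jeep' example and are otherwise made-up.

```python
# Index block contents on disk: index block number -> list of block pointers
index_blocks = {19: [9, 16, 1, 10, 25]}

# Directory entry: file name -> address of its index block
directory = {"jeep": 19}

def file_block_address(name, i):
    """Direct access: one index lookup gives the disk address of block i."""
    index = index_blocks[directory[name]]
    return index[i]

print(file_block_address("jeep", 0))   # 9
print(file_block_address("jeep", 3))   # 10
```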
FILE MANAGEMENT
Advantages:

This supports direct access to the blocks occupied by the file and therefore provides fast access to the file blocks.
It overcomes the problem of external fragmentation.

Disadvantages:

The pointer overhead for indexed allocation is greater than for linked allocation.
For very small files, say files that span only 2-3 blocks, indexed allocation keeps one entire block (the index block) just for the pointers, which is inefficient in terms of memory utilization; in linked allocation, by contrast, we lose the space of only one pointer per block.
FILE MANAGEMENT
For very large files, a single index block may not be able to hold all the pointers. The following mechanisms can be used to resolve this:

• Linked scheme: This scheme links two or more index blocks together to hold the pointers. Every index block then contains a pointer to (the address of) the next index block.
• Multilevel index: In this policy, a first-level index block points to second-level index blocks, which in turn point to the disk blocks occupied by the file. This can be extended to three or more levels, depending on the maximum file size.
• Combined scheme: In this scheme, a special block called the inode (index node) contains all the information about the file, such as its name, size, and permissions, and the remaining space of the inode is used to store the disk block addresses that hold the actual file, as shown in the image below. The first few of these pointers in the inode point to direct blocks, i.e. they contain the addresses of disk blocks that hold file data. The next few pointers point to indirect blocks, which may be single indirect, double indirect, or triple indirect. A single indirect block does not contain file data but the disk addresses of the blocks that do. Similarly, a double indirect block contains the disk addresses of blocks that in turn contain the addresses of the blocks holding the file data.
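A back-of-the-envelope calculation shows how far the combined scheme reaches. The parameters below (4 KB blocks, 4-byte pointers, 12 direct pointers plus one single, one double, and one triple indirect pointer) are assumed values in the style of classic ext2-like layouts, not a specification of any particular file system.

```python
BLOCK = 4096                       # assumed block size in bytes
PTR = 4                            # assumed size of one block pointer
PTRS_PER_BLOCK = BLOCK // PTR      # 1024 pointers fit in one index block

# Bytes addressable through each kind of inode pointer
direct = 12 * BLOCK                         # 12 direct pointers
single = PTRS_PER_BLOCK * BLOCK             # one single indirect block
double = PTRS_PER_BLOCK ** 2 * BLOCK        # one double indirect block
triple = PTRS_PER_BLOCK ** 3 * BLOCK        # one triple indirect block

max_file_size = direct + single + double + triple
# The triple indirect term dominates: about 4 TiB with these parameters
```

With these numbers, small files are served entirely by the 12 direct pointers (48 KB) at no indirection cost, while the indirect levels extend the maximum file size into the terabyte range.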
FILE MANAGEMENT
