OPERATING SYSTEM
S.Chithra
Department of Computer Science and Applications
MEMORY MANAGEMENT
Memory Hierarchy Design:
In computer system design, the memory hierarchy is an enhancement that organizes memory so as to minimize access time.
The memory hierarchy was developed based on a program behavior known as locality of reference.
The memory hierarchy design is divided into two main types:
External Memory or Secondary Memory: comprises magnetic disks, optical disks, and magnetic tape, i.e. peripheral storage devices that are accessible to the processor via an I/O module.
Internal Memory or Primary Memory: comprises main memory, cache memory, and CPU registers. This is directly accessible by the processor.
MEMORY MANAGEMENT
Memory Management:
Memory management is the functionality of an operating system
which handles or manages primary memory and moves processes
back and forth between main memory and disk during execution.
Main Memory refers to a physical memory that is the internal memory
to the computer.
Memory management keeps track of each and every memory location, regardless of whether it is allocated to a process or free.
It determines how much memory is to be allocated to each process.
It decides which process will get memory at what time.
It tracks whenever memory is freed or unallocated and updates the status accordingly.
MEMORY MANAGEMENT
Process Address Space:
The process address space is the set of logical addresses that a
process references in its code.
The operating system takes care of mapping the logical addresses to
physical addresses at the time of memory allocation to the program.
There are three types of addresses used in a program before and after memory is allocated:
Symbolic addresses: The addresses used in a source code. The variable names,
constants, and instruction labels are the basic elements of the symbolic address
space.
Relative addresses: At the time of compilation, a compiler converts
symbolic addresses into relative addresses.
Physical addresses: The loader generates these addresses at the time
when a program is loaded into main memory.
MEMORY MANAGEMENT
Memory Loading:
All programs are loaded into main memory for execution.
Sometimes the complete program is loaded into memory; at other times a certain part or routine of the program is loaded into main memory only when it is called by the program.
There are two types of Loading techniques:
Static Loading: The absolute program (and data) is loaded
into memory in order for execution to start.
Dynamic Loading: routines of the library are stored on disk in relocatable form and are loaded into memory only when they are needed by the program.
MEMORY MANAGEMENT
Swapping:
Swapping is the process of bringing each process into main memory, running it for a while, and then putting it back on the disk.
Swapping is also known as a technique for memory compaction.
Sometimes there is not enough main memory to hold all the currently active processes in a timesharing system.
The total time taken by the swapping process includes the time it takes to move the entire process to the secondary disk and to copy it back to memory, as well as the time the process takes to regain main memory.
MEMORY MANAGEMENT
Partition Allocation:
In Partition Allocation, when there is more than one partition
freely available to accommodate a process’s request, a
partition must be selected.
To choose a particular partition, a partition allocation method is needed.
When it is time to load a process into the main memory and if there is
more than one free block of memory of sufficient size then the OS
decides which free block to allocate.
There are different placement algorithms:
First Fit: the first hole that is big enough is allocated to the program.
Best Fit: the smallest hole that is big enough is allocated to the program.
Worst Fit: the largest hole that is big enough is allocated to the program.
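The three placement algorithms can be sketched as follows; the list of hole sizes is a hypothetical example, and the function returns which hole a request would take.

```python
def allocate(holes, size, strategy):
    """Return the index of the hole chosen for a request of `size`, or None.
    holes: list of free-block sizes. strategy: 'first', 'best', or 'worst'."""
    candidates = [(i, h) for i, h in enumerate(holes) if h >= size]
    if not candidates:
        return None
    if strategy == "first":
        return candidates[0][0]                        # first hole big enough
    if strategy == "best":
        return min(candidates, key=lambda c: c[1])[0]  # smallest adequate hole
    if strategy == "worst":
        return max(candidates, key=lambda c: c[1])[0]  # largest hole
    raise ValueError(strategy)

holes = [100, 500, 200, 300, 600]
print(allocate(holes, 212, "first"))  # 1 (500 is the first hole big enough)
print(allocate(holes, 212, "best"))   # 3 (300 is the smallest adequate hole)
print(allocate(holes, 212, "worst"))  # 4 (600 is the largest hole)
```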
MEMORY MANAGEMENT
Memory Partitions:
Memory allocation is a process by which computer programs are
assigned memory or space.
Main memory usually has two partitions −
Low Memory − Operating system resides in this memory.
High Memory − User processes are held in high memory.
The operating system uses one of the following memory partitioning schemes:
Fixed / Static-partition allocation
Dynamic/ Multiple-partition allocation
MEMORY MANAGEMENT
Dynamic Memory Partitioning:
In this technique, the partition size is not declared initially; it is declared at the time of process loading.
The first partition is reserved for the operating system.
The remaining space is divided into parts.
The size of each partition will be equal to the size of the process.
The partition size varies according to the need of the process, so that internal fragmentation can be avoided.
MEMORY MANAGEMENT
Fixed / Contiguous Memory Partitions:
In this type of allocation, main memory is divided into a
number of fixed-sized partitions where each partition should
contain only one process.
In this technique, the main memory is divided into partitions of
equal or different sizes.
The operating system always resides in the first partition while
the other partitions can be used to store user processes.
The memory is assigned to the processes in contiguous way.
The partitions cannot overlap.
A process must be contiguously present in a partition for execution.
MEMORY MANAGEMENT
Contiguous Memory Allocation
• This strategy is easy to employ because each block is the same size. All that remains is to allocate processes to the fixed memory blocks that have been divided up.
• It is simple to keep track of how many memory blocks are still
available, which determines how many further processes can be
allocated memory.
• This approach can be used in a system that requires
multiprogramming since numerous processes can be maintained in
memory at once.
MEMORY MANAGEMENT
Variable-Sized Partitions
Suppose we have 8 non-contiguous frames available in memory; paging provides the flexibility of storing a process at different places. Therefore, we can load the pages of process P5 in the place of P2 and P4.
MEMORY MANAGEMENT
Paging is a non-contiguous memory allocation technique.
Page Table is a table that maps a page number to the frame
number containing that page.
• Disadvantage of Paging:
• It increases the effective access time due to the increased number of memory accesses:
• One memory access is required to get the frame number from the page table.
• Another memory access is required to get the word from the page.
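A minimal sketch of the translation, assuming a 1 KiB page size and a toy page table; the two commented lookups correspond to the two memory accesses described above.

```python
PAGE_SIZE = 1024                  # bytes per page (assumed for illustration)
page_table = {0: 5, 1: 2, 2: 7}   # page number -> frame number (toy example)

def translate(logical_addr):
    """Split the logical address into (page, offset), look up the frame,
    and form the physical address."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]            # memory access #1: the page table
    return frame * PAGE_SIZE + offset   # memory access #2 reads the word here

print(translate(2 * 1024 + 100))  # page 2 -> frame 7 -> 7*1024 + 100 = 7268
```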
MEMORY MANAGEMENT
Translation Lookaside Buffer-
• Translation Lookaside Buffer (TLB) is a solution that tries to reduce the effective
access time.
• Being hardware, the access time of the TLB is much lower than that of main memory.
Step-01:
TLB is checked to see if it contains an entry for the referenced page number.
The referenced page number is compared with the TLB entries all at once.
If the TLB contains an entry for the referenced page number, a TLB hit occurs; in this case, the TLB entry is used to get the corresponding frame number.
If the TLB does not contain an entry for the referenced page number, a TLB miss occurs; in this case, the page table is used to get the corresponding frame number.
Then, the TLB is updated with the page number and frame number for future references.
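The hit/miss steps above can be sketched as follows; the two-entry TLB capacity and FIFO eviction are assumptions for illustration, since the real replacement policy is hardware-dependent.

```python
from collections import OrderedDict

page_table = {0: 5, 1: 2, 2: 7, 3: 1}  # page -> frame (toy example)
tlb = OrderedDict()                     # small cache of recent translations
TLB_SIZE = 2                            # assumed tiny capacity

def lookup(page):
    """Return (frame, 'hit'/'miss'), updating the TLB on a miss."""
    if page in tlb:                  # hardware compares all TLB entries at once
        return tlb[page], "hit"
    frame = page_table[page]         # miss: fall back to the page table
    if len(tlb) >= TLB_SIZE:
        tlb.popitem(last=False)      # evict the oldest entry (FIFO, assumed)
    tlb[page] = frame                # update TLB for future references
    return frame, "miss"

print(lookup(2))  # (7, 'miss')
print(lookup(2))  # (7, 'hit')
```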
MEMORY MANAGEMENT
Step-03:
• Here, the relocation register holds the value of the smallest physical address, whereas the limit register holds the range of the logical addresses. These two registers impose the condition that each logical address must be less than the value in the limit register.
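The relocation/limit check can be sketched as follows; the register values are hypothetical.

```python
RELOCATION = 14000   # smallest physical address of the process (assumed)
LIMIT = 3000         # range of the logical address space (assumed)

def to_physical(logical):
    """Each logical address must be less than the limit register,
    otherwise the hardware traps to the OS."""
    if logical >= LIMIT:
        raise MemoryError("addressing error: trap to the OS")
    return RELOCATION + logical   # relocation register is added by the MMU

print(to_physical(346))   # 14346
```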
Capability-based addressing
Local replacement: When a process needs a page which is not in the memory, it can
bring in the new page and allocate it a frame from its own set of allocated frames
only.
Advantage: The pages in memory for a particular process and the page fault
ratio is affected by the paging behavior of only that process.
Disadvantage: A low priority process may hinder a high priority process by not
making its frames available to the high priority process.
MEMORY MANAGEMENT
Global replacement:
When a process needs a page which is not in the memory, it can bring in
the new page and allocate it a frame from the set of all frames, even if that
frame is currently allocated to some other process; that is, one process can
take a frame from another.
Disadvantage: The page-fault ratio of a process cannot be controlled solely by the process itself. The pages in memory for a process depend on the paging behavior of other processes as well.
MEMORY MANAGEMENT
Thrashing is a condition in which the system spends a major portion of its time servicing page faults, while the actual processing accomplished is negligible.
Causes of thrashing:
Unlike the global page replacement algorithm, local page replacement selects only pages that belong to the faulting process, so there is a chance of reducing thrashing. But local page replacement has been shown to have many disadvantages, so it is only an alternative to global page replacement in a thrashing scenario.
MEMORY MANAGEMENT
How to Eliminate Thrashing
1. Locality Model
A locality is a set of pages that are actively used together. The locality model
states that as a process executes, it moves from one locality to another. Thus, a
program is generally composed of several different localities which may overlap.
For example, when a function is called, it defines a new locality where memory
references are made to the function call instructions, local and global variables,
etc. Similarly, when the function is exited, the process leaves this locality.
MEMORY MANAGEMENT
2. Working-Set Model
• If the page fault rate is too high, it indicates that the process has
too few frames allocated to it. On the contrary, a low page fault
rate indicates that the process has too many frames.
• If the page-fault rate falls below the lower limit, frames can be removed from the process. Similarly, if the page-fault rate exceeds the upper limit, more frames can be allocated to the process.
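The frame-adjustment rule above can be sketched as follows; the upper and lower page-fault-rate limits are assumed values for illustration.

```python
LOWER, UPPER = 0.02, 0.10   # acceptable page-fault-rate band (assumed)

def adjust_frames(frames, faults, references):
    """Page-fault-frequency scheme: allocate more frames when faulting too
    often, reclaim frames when faulting rarely."""
    rate = faults / references
    if rate > UPPER:
        return frames + 1              # too few frames: allocate one more
    if rate < LOWER:
        return max(1, frames - 1)      # too many frames: remove one
    return frames                      # rate is within the band

print(adjust_frames(4, 12, 100))  # 5 (rate 0.12 exceeds the upper limit)
print(adjust_frames(4, 1, 100))   # 3 (rate 0.01 is below the lower limit)
```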
MEMORY MANAGEMENT
OPERATING SYSTEM PROPERTIES
Distributed Environment:
A distributed environment refers to multiple independent CPUs or processors in a computer system.
An operating system does the following activities related to
distributed environment:
The OS distributes computation logics among several physical
processors.
The processors do not share memory or a clock. Instead, each processor has its own local memory.
The OS manages the communications between the processors.
They communicate with each other through various
communication lines.
MEMORY MANAGEMENT
Memory Mapped Files in OS
Memory mapping is a technique that allows a part of the virtual address space to be
associated with a file logically. This technique of memory mapping leads to a significant
increase in performance.
MEMORY MANAGEMENT
Basic Mechanism of Memory Mapping
• The Operating System uses virtual memory to memory-map a file. This is performed by mapping a disk block to a page in physical memory. Initially, the file is accessed through demand paging: if a process references an address that does not exist in physical memory, a page fault occurs and the Operating System brings the missing page into physical memory.
• A page-sized portion of the file is read from the file system into a physical page.
• Manipulating the files through the use of memory rather than incurring the
overhead of using the read() and write() system calls not only simplifies but also
speeds up file access and usage.
• Multiple processes may be allowed to map a single file simultaneously to allow
sharing of data.
• If any of the processes write data in the virtual memory, then the modified data
will be visible to all the processes that map the same section of the file.
• The memory mapping system calls support copy-on-write functionality which
allows processes to share a file in read-only mode but the processes can have
their own copies of data that they have modified.
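Python's mmap module demonstrates the mechanism above: the file's contents are read and written through memory rather than through read()/write() calls. The file name is illustrative.

```python
import mmap, os, tempfile

# Create a small file to map (a persisted memory-mapped file).
path = os.path.join(tempfile.mkdtemp(), "demo.bin")
with open(path, "wb") as f:
    f.write(b"hello world")

with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 0)    # map the whole file into memory
    print(mm[:5])                    # read through memory: b'hello'
    mm[0:5] = b"HELLO"               # write through memory, no write() call
    mm.close()                       # flushes the mapping back to disk

with open(path, "rb") as f:
    print(f.read())                  # b'HELLO world' — the change reached the file
```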
MEMORY MANAGEMENT
Types of Memory Mapped Files
Basically, there are two types of memory mapped files:
Persisted: persisted files are connected to a source file on disk. When the last process finishes its work, the data is saved to the source file on disk. These memory-mapped files are suitable for working with very large source files.
Non-persisted: non-persisted files are not connected to any disk-based file. The data is lost when the last process working with the file completes its task. These files provide shared memory for inter-process communication (IPC).
MEMORY MANAGEMENT
Advantages of Memory Mapped Files
It increases I/O performance, especially when used on large files.
Accessing a memory-mapped file is faster than using direct system calls like read() and write().
Another advantage is lazy loading, where only a small amount of RAM is used even for a very large file.
Shared memory is often implemented by memory-mapping files; thus, it supports data sharing.
MEMORY MANAGEMENT
Disadvantages of Memory Mapped Files
• In some cases, memory-mapped file I/O may be substantially slower than standard file I/O.
• Only hardware architectures that have an MMU (Memory Management Unit) support memory-mapped files.
• With memory-mapped files, expanding the size of a file is not easy.
MEMORY MANAGEMENT
Allocating Kernel Memory
The process by which the kernel of the
operating system allocates memory for its
internal operations and data structures is
called kernel memory allocation.
MEMORY MANAGEMENT
Depending upon the requirements of the system and the type of memory allocation, there are two important kernel memory allocation techniques, namely the buddy system and the slab system.
Cache − a cache is a very high-speed, small-sized memory space. In the slab system, a cache consists of one or more slabs, and there is a single cache for each unique kernel data structure.
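A toy, user-space sketch of the slab idea: a cache pre-allocates objects of one kind and recycles them instead of constructing each one anew. The class name and slab size are illustrative, not the kernel's actual allocator.

```python
class SlabCache:
    """A per-type object cache: pre-allocates objects and recycles them,
    mimicking (very loosely) a slab cache for one kernel data structure."""
    def __init__(self, factory, slab_size=4):
        self.factory = factory
        self.slab_size = slab_size
        self.free = [factory() for _ in range(slab_size)]  # one pre-filled slab

    def alloc(self):
        if not self.free:                   # cache exhausted: grow by one slab
            self.free = [self.factory() for _ in range(self.slab_size)]
        return self.free.pop()

    def release(self, obj):
        self.free.append(obj)               # object returns to the cache, reused later

cache = SlabCache(dict)
obj = cache.alloc()        # fast: object was pre-constructed
cache.release(obj)         # fast: no destruction, just recycled
print(len(cache.free))     # 4
```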
A file name consists of two parts: the name and the extension, separated by a period.
FILE MANAGEMENT
What is a File System?
A file system is the method an operating system uses to store, organize, and manage files and directories on a storage device.
FILE MANAGEMENT
File type | Usual extension | Function
Executable | exe, com, bin | Ready-to-run machine-language program
Object | obj, o | Compiled, machine language, not linked
Source Code | c, java, pas, asm, a | Source code in various languages
Batch | bat, sh | Commands to the command interpreter
Text | txt, doc | Textual data, documents
FILE MANAGEMENT
File type | Usual extension | Function
Word Processor | wp, tex, rrf, doc | Various word-processor formats
Archive | arc, zip, tar | Related files grouped into one compressed file
Multimedia | mpeg, mov, rm | For containing audio/video information
Markup | xml, html, tex | Textual data and documents
Library | lib, a, so, dll | Libraries of routines for programmers
Print or View | gif, pdf, jpg | A format for printing or viewing an ASCII or binary file
FILE MANAGEMENT
File Access Methods in Operating System
When a file is used, information is read and accessed into computer
memory and there are several ways to access this information of the
file. Some systems provide only one access method for files. Other
systems, such as those of IBM, support many access methods, and
choosing the right one for a particular application is a major design
problem.
1. Sequential Access –
Key points:
Data is accessed one record right after another, in order.
When we use a read command, the pointer moves ahead by one record.
When we use a write command, memory is allocated and the pointer moves to the end of the file.
Such a method is reasonable for tape.
FILE MANAGEMENT
Disadvantages of Sequential Access Method :
If the file record that needs to be accessed next is not present next to
the current record, this type of file access method is slow.
Moving a sizable chunk of the file may be necessary to insert a new
record.
It does not allow for quick access to specific records in the file. The
entire file must be searched sequentially to find a specific record, which
can be time-consuming.
It is not well-suited for applications that require frequent updates or
modifications to the file. Updating or inserting a record in the middle of
a large file can be a slow and cumbersome process.
Sequential access can also result in wasted storage space if records are
of varying lengths. The space between records cannot be used by other
records, which can result in inefficient use of storage.
FILE MANAGEMENT
2. Direct Access –
Another method is the direct access method, also known as the relative access method. It is based on fixed-length logical records that allow the program to read and write records rapidly, in no particular order. Direct access is based on the disk model of a file, since a disk allows random access to any file block. For direct access, the file is viewed as a numbered sequence of blocks or records. Thus, we may read block 14, then block 59, and then write block 17. There is no restriction on the order of reading and writing for a direct access file.
A block number provided by the user to the operating system is normally a
relative block number, the first relative block of the file is 0 and then 1 and so
on.
FILE MANAGEMENT
Key points (Indexed Access):
It is built on top of sequential access.
It controls the pointer by using an index.
FILE MANAGEMENT
File Directories
A file directory is a collection of files. The directory contains information about the files, including attributes, location, and ownership. Much of this information, especially that concerned with storage, is managed by the operating system. The directory is itself a file, accessible by various file-management routines.
FILE MANAGEMENT
Below is the information contained in a device directory:
Name
Type
Address
Current length
Maximum length
Date last accessed
Date last updated
Owner id
Protection information
FILE MANAGEMENT
The operations performed on a directory are:
Search for a file
Create a file
Delete a file
List a directory
Rename a file
Traverse the file system
FILE MANAGEMENT
Advantages of Maintaining Directories
• Efficiency: a file can be located more quickly.
• Naming: it becomes convenient for users, as two users can have the same name for different files, or different names for the same file.
• Grouping: files can be grouped logically by properties, e.g. all Java programs, all games, etc.
FILE MANAGEMENT
Single-Level Directory
In this, a single directory is maintained for all
the users.
There are three main file allocation methods:
Contiguous Allocation
Linked Allocation
Indexed Allocation
FILE MANAGEMENT
The main idea behind these methods is to provide efficient disk space utilization and fast access to the file blocks.
Linked Allocation:
In this scheme, each file is a linked list of disk blocks, which need not be contiguous; the disk blocks can be scattered anywhere on the disk.
The directory entry contains a pointer to the starting and the ending file block. Each block contains a pointer to the next block occupied by the file.
The file ‘jeep’ in the following image shows how the blocks are randomly distributed. The last block (25) contains -1, indicating a null pointer; it does not point to any other block.
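Traversing such a chain can be sketched as follows; the block chain for ‘jeep’ (9 → 16 → 1 → 10 → 25) is assumed from the usual textbook figure, with -1 as the end marker stated above.

```python
# next_block[b] is the pointer stored inside block b; -1 marks the last block.
next_block = {9: 16, 16: 1, 1: 10, 10: 25, 25: -1}   # assumed chain for 'jeep'
START = 9   # starting block from the directory entry (assumed)

def blocks_of_file(start):
    """Follow the per-block pointers from the directory's starting block."""
    b = start
    while b != -1:
        yield b
        b = next_block[b]   # each block points to the next block of the file

print(list(blocks_of_file(START)))  # [9, 16, 1, 10, 25]
```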
FILE MANAGEMENT
Indexed Allocation –
Advantages:
This supports direct access to the blocks occupied by the file and therefore provides fast access to the file blocks.
It overcomes the problem of external fragmentation.
For very large files, a single index block may not be able to hold all the pointers. The following mechanisms can be used to handle this:
• Linked scheme: This scheme links two or more index blocks together for holding the
pointers. Every index block would then contain a pointer or the address to the next index
block.
• Multilevel index: in this policy, a first-level index block points to second-level index blocks, which in turn point to the disk blocks occupied by the file. This can be extended to three or more levels depending on the maximum file size.
• Combined Scheme: in this scheme, a special block called the inode (index node) contains all the information about the file, such as the name, size, and permissions, and the remaining space of the inode is used to store the disk-block addresses that contain the actual file, as shown in the image below. The first few of these pointers in the inode point to the direct blocks, i.e. they contain the addresses of the disk blocks holding the file's data. The next few pointers point to indirect blocks, which may be single indirect, double indirect, or triple indirect. A single indirect block does not contain file data but the disk addresses of the blocks that do. Similarly, double indirect blocks do not contain file data but the disk addresses of the blocks that hold the addresses of the blocks containing the file data.
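The maximum file size reachable through the combined scheme can be worked out numerically; the block size, pointer size, and number of direct pointers below are assumed, Unix-like values, not taken from any particular file system.

```python
BLOCK = 4096    # block size in bytes (assumed)
PTR = 4         # bytes per disk-block address (assumed)
DIRECT = 12     # direct pointers in the inode (assumed, Unix-like)

per_block = BLOCK // PTR            # pointers held by one index block: 1024

direct = DIRECT * BLOCK             # data reached via the direct pointers
single = per_block * BLOCK          # via the single indirect block
double = per_block ** 2 * BLOCK     # via the double indirect block
triple = per_block ** 3 * BLOCK     # via the triple indirect block

# Maximum file size under these assumptions:
print((direct + single + double + triple) // 2**30, "GiB (approx.)")  # → 4100
```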
FILE MANAGEMENT