
UNIT-3

WHAT DOES THE MEMORY MANAGEMENT UNIT DO?

• Store data in memory.
• Keep track of memory locations, whether free or allocated.
• Move information between primary memory and secondary memory; this is called
swapping.
• Be responsible for protecting allocated memory.
• Allow sharing of memory space between processes.

MULTISTEP PROCESSING OF A USER PROGRAM:

• Compiler and Assembler generate an object file (containing code and data
segments) from each source file.
• Linker combines all the object files for a program into a single executable object file,
which is complete and self-sufficient.
• Loader (part of OS) loads an executable object file into memory at locations
determined by the operating system.
FUNCTIONS OF MMU:

• Keep track of the status of each location of primary memory (allocated / unallocated).
• Determining the allocation policy for memory.
• Allocation technique – once it is decided to allocate memory, the specific location
must be selected and the allocation information updated.
• Deallocation technique – a process may explicitly release previously allocated
memory, or the system may reclaim the memory based on its policy.
LOGICAL VS. PHYSICAL ADDRESS SPACE:

• Physical address – the “hardware” address of a memory word (e.g., 0xB0000).
o The address seen by the memory unit.
• Logical address (also called virtual or relative address).
o Used by the program.
o Generated by the CPU.
• Absolute code – all addresses are physical.
• Relocatable code – all addresses are relative.
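As a minimal sketch of how a relative (logical) address becomes a physical one, the translation can be modeled as a base (relocation) register plus a limit check; the base and limit values below are invented for illustration:

```python
# Minimal sketch of dynamic relocation: a base (relocation) register is
# added to every logical address, after a limit check. The base and
# limit values are illustrative, not from any real system.

def to_physical(logical: int, base: int, limit: int) -> int:
    """Translate a logical address, trapping if it is out of bounds."""
    if not 0 <= logical < limit:
        raise MemoryError("trap: logical address outside process bounds")
    return base + logical

# A logical address of 346 in a process relocated to base 0x14000:
print(hex(to_physical(346, base=0x14000, limit=0x4000)))  # 0x1415a
```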

ADDRESS BINDING:

• The Address Binding refers to the mapping of computer instructions and data to
physical memory locations.
• Both logical and physical addresses are used in computer memory.
• It assigns a physical memory region to a logical pointer by mapping a logical address
(also known as a virtual address) to a physical address.
• It is also a component of computer memory management that the OS performs on
behalf of applications that require memory access.

• There are three types of address binding:
o Compile-Time Address Binding
o Load-Time Address Binding
o Execution-Time (Dynamic) Address Binding
Compile-time Address Binding:

• If the compiler is responsible for performing address binding, it is called compile-time
address binding.
• It is done before the program is loaded into memory.
• The compiler requires interaction with the OS memory manager to perform compile-time
address binding.
Load time Address Binding:
• It is done after the program is loaded into memory.
• This type of address binding is done by the OS memory manager, i.e., the loader.

Execution time or dynamic Address Binding:


• Binding is postponed even after the program has been loaded into memory.
• The program may keep changing locations in memory until the time of execution.
• This dynamic type of address binding is done by the processor at the time of program
execution.
DYNAMIC LOADING:

• Routine is not loaded until it is called.


• Better memory-space utilization; unused routine is never loaded.
• Useful when large amounts of code are needed to handle infrequently occurring
cases.
• No special support from the operating system is required.
• Implemented through program design.
OVERLAYS:

• Keep in memory only those instructions and data that are needed at any given time.
• Related techniques:
o Dynamic linking
o Dynamic loading
DYNAMIC LINKING:

• Linking postponed until execution time.


• A small piece of code, called a stub, is used.
o The stub locates the appropriate memory-resident library routine.
o The stub replaces itself with the address of the routine and executes the routine.
• The operating system is needed to check whether the routine is in the process's
memory address space.
• Dynamic linking is particularly useful for libraries.
SWAPPING:

• A process needs to be in memory to be executed.


• A process, however, can be swapped temporarily out of memory to a backing store,
and then brought back into memory for continued execution.
• A process that is swapped out will be swapped back into the same memory space
that it occupied previously.
• The system maintains a ready queue consisting of all processes whose memory
images are on the backing store or in memory and are ready to run.
• Whenever the CPU scheduler decides to execute a process, it calls the dispatcher.
• The dispatcher checks to see whether the next process in the queue is in memory. If
not, and there is no free memory region, the dispatcher swaps out a process
currently in memory and swaps in the desired process.
• It then reloads registers as normal and transfers control to the selected process.
STANDARD SWAPPING:

• Standard swapping involves moving processes between main memory and a backing
store.
• The backing store is commonly a fast disk.
• The system maintains a ready queue consisting of all processes whose memory
images are on the backing store or in memory and are ready to run.
• Transferring a 100-MB process to or from main memory at a transfer rate of
50 MB per second takes 100 MB / (50 MB/s) = 2 seconds, i.e., 2,000 milliseconds.
• Since we must swap both out and in, the total swap time is about 4,000 milliseconds.
• Standard swapping is not used in modern operating systems. It requires too much
swapping time and provides too little execution time to be a reasonable memory-
management solution.
FUNCTIONS OF MEMORY MANAGEMENT:

• Keep track of the status of each partition (in use / not in use).
• Determining who gets the memory – handled by the job scheduler.
• Allocation technique – an available partition of sufficient size is assigned.
• Deallocation technique – when the job terminates, the partition is marked “not in
use” and becomes available for future allocation.
MULTI PROGRAMMING:

• The concurrent residency of more than one program in main memory is referred to
as multiprogramming.
• Since multiple programs are resident in memory, as soon as the currently
executing program finishes its execution, the next program is dispatched for
execution.
• The main objective of multiprogramming is:
o Maximum CPU utilization
o Efficient management of the main memory
CONTIGUOUS MEMORY:

• A contiguous memory allocation is a memory management technique where,
whenever a user process requests memory, a single section of the contiguous
memory block is given to that process according to its requirement.
• There are two schemes:
o Fixed-sized partition scheme
o Variable-sized partition scheme
• A process is allocated into memory as one complete unit.
• Multiple-partition allocation
o Hole – a block of available memory.
o Holes of various sizes are scattered throughout memory.
• When a process arrives, it is allocated memory from a hole large enough to
accommodate it.
• Operating system maintains information about:
a) allocated partitions b) free partitions (hole)

SINGLE CONTIGUOUS ALLOCATION:

• It is a simple memory management scheme that requires no special hardware
features.
• It is associated with standalone computers running simple batch operating systems.
• There is a one-to-one correspondence between the system and the user: one user
program resides in memory at a time.
• Four functions of memory management:
o Keep track of memory
o The job gets all memory when scheduled
o Allocation of memory
o Deallocation of memory
PARTITION ALLOCATION:

• In Partition Allocation, when there is more than one partition freely available to
accommodate a process’s request, a partition must be selected.
• To choose a particular partition, a partition allocation method is needed.
• When it is time to load a process into the main memory and if there is more than one
free block of memory of sufficient size then the OS decides which free block to
allocate.
DYNAMIC STORAGE ALLOCATION:
Dynamic Storage-Allocation methods:

• FIRST FIT
• BEST FIT
• WORST FIT

• First-fit: Allocate the first hole that is big enough.


• Best-fit: Allocate the smallest hole that is big enough.
o Must search entire list, unless ordered by size.
o Produces the smallest leftover hole
• Worst-fit: Allocate the largest hole
o Must search entire list, unless ordered by size
o Produces the largest leftover hole
• First-fit and best-fit better than worst-fit in terms of speed and storage utilization
FIRST FIT ALGORITHM:

• This method keeps the free/busy list of jobs organized by memory location, from
low-ordered to high-ordered memory.
• In this method, a job claims the first available memory block whose size is greater
than or equal to its own.
• The operating system doesn’t search for the most appropriate partition; it simply
allocates the job to the nearest available partition of sufficient size.
• Memory partitions: 150 KB, 220 KB, 500 KB, 350 KB, 700 KB.
• Processes: P1 – 200 KB, P2 – 160 KB, P3 – 450 KB, P4 – 500 KB.

• P4 (500 KB) must wait for memory.
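The walk-through above can be checked with a small first-fit simulation; whole-partition allocation (one job per partition) is assumed for simplicity:

```python
# A minimal first-fit simulation for the example above; each job takes
# the first (lowest-addressed) free partition large enough to hold it.

def first_fit(partitions, processes):
    free = list(partitions)           # None marks an allocated partition
    placed, waiting = {}, []
    for name, size in processes:
        for i, part in enumerate(free):
            if part is not None and part >= size:
                placed[name] = part   # record which partition the job got
                free[i] = None
                break
        else:
            waiting.append(name)      # no free partition was large enough
    return placed, waiting

placed, waiting = first_fit(
    [150, 220, 500, 350, 700],
    [("P1", 200), ("P2", 160), ("P3", 450), ("P4", 500)])
print(placed)   # {'P1': 220, 'P2': 500, 'P3': 700}
print(waiting)  # ['P4'] -- P4 must wait, as noted above
```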


Advantages of First-Fit Memory Allocation:

• It is fast in processing.
• As the processor allocates the nearest available memory partition to the job, it is
very fast in execution.
Disadvantages of First-Fit Memory Allocation:
• It wastes a lot of memory.
• The processor ignores if the size of partition allocated to the job is very large as
compared to the size of job or not. It just allocates the memory.
• As a result, a lot of memory is wasted and many jobs may not get space in the
memory, and would have to wait for another job to complete.
BEST FIT ALGORITHM:

• Allocate the smallest hole that is big enough. We must search the entire list, unless
the list is ordered by size.
• This strategy produces the smallest leftover hole.
• Memory partitions: 150 KB, 220 KB, 500 KB, 350 KB, 700 KB.
• Processes: P1 – 200 KB, P2 – 160 KB, P3 – 450 KB, P4 – 500 KB.
• All the processes are allocated in this scenario.
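The same scenario can be checked with a small best-fit simulation, where each job takes the smallest free partition that is still big enough:

```python
# A minimal best-fit simulation for the same partitions and processes.

def best_fit(partitions, processes):
    free = list(partitions)           # None marks an allocated partition
    placed, waiting = {}, []
    for name, size in processes:
        fits = [(part, i) for i, part in enumerate(free)
                if part is not None and part >= size]
        if fits:
            part, i = min(fits)       # smallest hole that still fits
            placed[name] = part
            free[i] = None
        else:
            waiting.append(name)
    return placed, waiting

placed, waiting = best_fit(
    [150, 220, 500, 350, 700],
    [("P1", 200), ("P2", 160), ("P3", 450), ("P4", 500)])
print(placed)   # {'P1': 220, 'P2': 350, 'P3': 500, 'P4': 700}
print(waiting)  # [] -- every process fits, as stated above
```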

Advantages of Best-Fit Allocation:

• Memory Efficient.
• The operating system allocates the job minimum possible space in the memory,
making memory management very efficient.
• To save memory from getting wasted, it is the best method.
Disadvantages of Best-Fit Allocation:

• It is a Slow Process.
• Checking the whole memory for each job makes the working of the operating system
very slow.
• It takes a lot of time to complete the work.
WORST FIT:

• The Worst Fit algorithm first scans all the partitions.
• It then allocates the largest partition to the process.
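A small worst-fit simulation, run on the same example data used for first fit and best fit, shows how consuming the largest holes first can leave later processes waiting:

```python
# A minimal worst-fit simulation; each job takes the largest free
# partition. With the example data from the earlier sections, the big
# partitions are consumed first and P3/P4 are left waiting.

def worst_fit(partitions, processes):
    free = list(partitions)           # None marks an allocated partition
    placed, waiting = {}, []
    for name, size in processes:
        fits = [(part, i) for i, part in enumerate(free)
                if part is not None and part >= size]
        if fits:
            part, i = max(fits)       # largest available hole
            placed[name] = part
            free[i] = None
        else:
            waiting.append(name)
    return placed, waiting

placed, waiting = worst_fit(
    [150, 220, 500, 350, 700],
    [("P1", 200), ("P2", 160), ("P3", 450), ("P4", 500)])
print(placed)   # {'P1': 700, 'P2': 500}
print(waiting)  # ['P3', 'P4']
```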
FRAGMENTATION:

• As processes are loaded and removed from memory, the free memory space is
broken into little pieces.
• Over time, processes cannot be allocated to memory blocks because the blocks are
too small, and the memory blocks remain unused. This problem is
known as fragmentation.
• Types of Fragmentation:
o Internal Fragmentation
o External Fragmentation
Internal fragmentation: The memory block assigned to a process is bigger than requested.
Some portion of the block is left unused, as it cannot be used by another process.
External fragmentation: Total memory space is enough to satisfy a request or to reside a
process in it, but it is not contiguous, so it cannot be used.
EXTERNAL FRAGMENTATION UNDER BOTH FIRST FIT AND BEST FIT:
One solution to the problem of external fragmentation is COMPACTION.
The goal is to shuffle the memory contents so as to place all free memory together in one
large block. (Costly).
Another possible solution to the external-fragmentation problem is to permit the logical
address space of the processes to be non-contiguous, thus allowing a process to be allocated
physical memory wherever such memory is available.
SEGMENTATION:

• Segmentation is a memory-management scheme that supports the programmer's
view of memory.
• A logical address space is a collection of segments; the CPU generates logical
addresses that refer to them.
• For simplicity of implementation, segments are numbered and are referred to by a
segment number, rather than by a segment name.
• Thus, a logical address consists of a two tuple:
<segment-number, offset>
• A C compiler might create separate segments for the following:
1. The code
2. Global variables
3. The heap, from which memory is allocated
4. The stacks used by each thread
5. The standard C library
SEGMENTATION:

• Divides main memory into variable-sized segments.


• Processes are divided into variable-sized segments, each representing a specific unit
(e.g., code, data, stack).
• Uses segment registers for address translation.
• The Address Translation Unit is responsible for performing the address translation
process. It takes the logical address (segment number + offset) and uses the segment
register, segment table register, and segment limit register to determine the
corresponding physical address.
• May lead to external fragmentation.
• Supports memory sharing among processes.
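The translation described above can be sketched as a segment-table lookup; the base and limit values below are invented for illustration:

```python
# Sketch of segment-based address translation: each segment number
# maps to a (base, limit) pair. The table contents are hypothetical.

SEGMENT_TABLE = {
    0: (1400, 1000),   # code segment: base 1400, limit 1000
    1: (6300, 400),    # data segment
    2: (4300, 1100),   # stack segment
}

def translate(segment: int, offset: int) -> int:
    """Map a <segment-number, offset> pair to a physical address."""
    base, limit = SEGMENT_TABLE[segment]
    if offset >= limit:
        raise MemoryError("trap: offset beyond segment limit")
    return base + offset

print(translate(2, 53))   # 4353 (= 4300 + 53)
```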

PAGING:

• A computer can address more memory than the amount physically installed on the
system.
• This extra memory is actually called virtual memory, and it is a section of secondary
storage (disk) that is set up to emulate the computer's RAM.
• Paging technique plays an important role in implementing virtual memory.
• Paging is a storage mechanism that allows OS to retrieve processes from the
secondary storage into the main memory in the form of pages.
• Paging is a memory management technique in which process address space is
broken into blocks of the same size called pages.
• The size of the process is measured in the number of pages.
• The Memory Management Unit (MMU) is responsible for converting logical
addresses to physical addresses.
• The physical address refers to the actual address of a frame in which each page will
be stored, whereas the logical address refers to the address that is generated by the
CPU for each page.
• When the CPU accesses a page using its logical address, the OS must first obtain the
corresponding physical address in order to access that page physically. There are
two elements to the logical address:
▪ Page number
▪ Offset
• The OS’s memory management unit must convert the page numbers to the frame
numbers.
• The address generated by the CPU (Logical Address) is divided into the following:
Page offset (d): the number of bits needed to represent a particular word within a
page; it is determined by the page size of the logical address space.
Page number (p): the number of bits needed to represent a page number in the
logical address space.
• The Physical Address is divided into the following:
Frame offset (d): the number of bits needed to represent a particular word within a
frame; it is determined by the frame size of the physical address space.
Frame number (f): the number of bits needed to represent a frame number in the
physical address space.
STRUCTURE OF PAGE TABLE:

• Paging is a memory management technique where a large process is divided into


pages and is placed in physical memory which is also divided into frames.
• Frame size and page size are equal.
• The operating system uses a page table to map the logical address of the page
generated by CPU to its physical address in the main memory.

Structure of page table:


o Hierarchical Page Table
o Hashed Page Table
o Inverted Page Table
HIERARCHICAL PAGE TABLE:

• Hierarchical paging is multilevel paging.
• The page table might be too big to fit in a contiguous space, so the table may be
organized as a hierarchy with several levels.
• In this type of paging, the logical address space is broken up into multiple page tables.
• Hierarchical paging is one of the simplest techniques; for this purpose, a two-level
or three-level page table can be used.
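A two-level lookup can be sketched with nested dictionaries standing in for the outer and inner tables; the indices and frame numbers below are invented:

```python
# Sketch of a two-level (hierarchical) page table: the outer table maps
# an outer index to an inner page table, and inner tables exist only
# for regions of the address space actually in use.

outer_table = {
    0: {0: 7, 1: 3},   # outer index 0 -> inner table {page -> frame}
    5: {2: 11},        # most outer slots simply have no inner table
}

def lookup(p1: int, p2: int) -> int:
    """Translate (outer index p1, inner index p2) to a frame number."""
    inner = outer_table.get(p1)
    if inner is None or p2 not in inner:
        raise KeyError("page fault")
    return inner[p2]

print(lookup(0, 1))   # 3
```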
HASHED PAGE TABLE:

• A common approach for handling address spaces larger than 32 bits is to use a
hashed page table, with the hash value being the virtual page number.
• Each element consists of three fields:
(1) the virtual page number,
(2) the value of the mapped page frame, and
(3) a pointer to the next element in the linked list.
The algorithm works as follows: the virtual page number in the virtual address is
hashed into the hash table, and the chain at that entry is searched for a matching
virtual page number.
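The lookup can be sketched with buckets of (virtual page, frame) pairs; Python lists stand in for the linked lists, and a simple modulo hash is assumed for illustration:

```python
# Sketch of a hashed page table: the virtual page number hashes to a
# bucket, and the chain at that bucket is searched for a matching VPN.
# (The next-element pointer of the description is implicit in the list.)

NUM_BUCKETS = 8
buckets = [[] for _ in range(NUM_BUCKETS)]   # each entry: (vpn, frame)

def insert(vpn: int, frame: int) -> None:
    buckets[vpn % NUM_BUCKETS].append((vpn, frame))

def lookup(vpn: int) -> int:
    for entry_vpn, frame in buckets[vpn % NUM_BUCKETS]:
        if entry_vpn == vpn:       # field (1) matches -> return field (2)
            return frame
    raise KeyError("page fault")

insert(0x12345, 42)
print(lookup(0x12345))   # 42
```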
INVERTED PAGE TABLE:

• An inverted page table has one entry for each real page (or frame) of memory.
• Each entry consists of the virtual address of the page stored in that real memory
location, with information about the process that owns the page.
• Thus, only one page table is in the system, and it has only one entry for each page of
physical memory.

• IBM was the first major company to use inverted page tables, starting with the IBM
System/38 and continuing through the RS/6000 and the current IBM Power CPUs.
• For the IBM RT, each virtual address in the system consists of a triple:
<process-id, page-number, offset>
• Each inverted page-table entry is a pair <process-id, page-number>, where the
process-id assumes the role of the address-space identifier.
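The one-entry-per-frame structure can be sketched as a list indexed by frame number; translation searches for a matching (process-id, page-number) pair, and the matching index is the frame. The data below is invented:

```python
# Sketch of an inverted page table: one entry per physical frame,
# holding the (process-id, page-number) that currently occupies it.

frames = [("A", 0), ("B", 0), ("A", 1), None]   # list index = frame number

def lookup(pid: str, page: int) -> int:
    """Search the table; the matching entry's index is the frame number."""
    for frame, entry in enumerate(frames):
        if entry == (pid, page):
            return frame
    raise KeyError("page fault: page not in physical memory")

print(lookup("A", 1))   # 2
```

The search makes lookups slower than a regular page table, which is why real systems pair inverted tables with a hash table; the trade-off is that only one table exists for the whole system.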

DEMAND PAGING:

• Demand paging is a memory management technique used by operating systems to
optimize memory usage.
• In demand paging, only the required pages of a program are loaded into memory
when needed, rather than loading the entire program at once.
• This helps to reduce memory wastage and improve overall system performance.
• A demand-paging system is similar to a paging system with swapping where
processes reside in secondary memory (usually a disk).
• When we want to execute a process, we swap it into memory.
• Rather than swapping the entire process into memory, though, we use a lazy
swapper.
• A lazy swapper never swaps a page into memory unless that page will be needed.
• A swapper manipulates entire processes, whereas a pager is concerned with the
individual pages of a process.

PERFORMANCE OF DEMAND PAGING:

• Demand paging can significantly affect the performance of a computer system.


• For most computer systems, the memory-access time, denoted ma, ranges from 10
to 200 nanoseconds.
• As long as we have no page faults, the effective access time is equal to the memory
access time. If, however, a page fault occurs, we must first read the relevant page
from disk and then access the desired word.
• Let p be the probability of a page fault (0 ≤ p ≤ 1).
• We would expect p to be close to zero—that is, we would expect to have only a few
page faults.
• The effective access time is then
o effective access time = (1 − p) × ma + p × page fault time.

• We are faced with three major components of the page-fault service time:
1. Service the page-fault interrupt.
2. Read in the page.
3. Restart the process.
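The effective-access-time formula above can be evaluated directly; the numbers here (ma = 200 ns, page-fault service time = 8 ms) are illustrative:

```python
# Effective access time = (1 - p) * ma + p * page-fault time,
# evaluated with illustrative numbers (all times in nanoseconds).

def effective_access_time(p: float, ma_ns: float, fault_ns: float) -> float:
    return (1 - p) * ma_ns + p * fault_ns

ma = 200.0                # memory-access time, nanoseconds
fault = 8_000_000.0       # 8 ms page-fault service time, in nanoseconds

# Even a 1-in-1000 fault rate slows memory access by a factor of ~40:
eat = effective_access_time(0.001, ma, fault)
print(round(eat, 1))      # 8199.8
```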
