
UNIT II:

Memory Management
Ms. Gandhalee Manohar
Memory Management:
 Memory management is the functionality of an operating system which handles or manages primary memory and moves processes back and forth between main memory and disk during execution.
 Memory management keeps track of each and every memory location, regardless of whether it is allocated to some process or it is free.
 Memory management is the process of controlling and coordinating computer memory, assigning portions called blocks to various running programs to optimize overall system performance. Memory management resides in hardware, in the OS (operating system), and in programs and applications.
Why Memory Management?
 A program must be brought into memory and placed within a process for it to be run.
 Input queue or job queue – collection of processes on the disk that are waiting to be brought into memory to run the program.
 User programs go through several steps before being run.
Memory Abstraction:
 A memory abstraction is an abstraction layer between the program execution and the memory that provides a different “view” of a memory location depending on the execution context in which the memory access is made.
No Memory Abstraction:
 Simple design, but no multiprogramming approach.
Multi-programming:
 In a multiprogramming system there are one or more programs loaded in main memory which are ready to execute.
Methods:
Fixed size partitioning:
 Physical memory is broken up into fixed partitions.
 Easy to implement, fast context switch.
 Problems: internal fragmentation; partition sizes could be too large or insufficient.
Variable size partitioning:
 Natural extension – physical memory is broken up into variable-sized partitions.
 No internal fragmentation.
 Problems: external fragmentation.
Fragmentation:
 As processes are loaded and removed from memory, the free memory space is broken into little pieces. It often happens that processes cannot be allocated to these memory blocks because of their small size, so the memory blocks remain unused. This problem is known as fragmentation.
 Fragmentation is of two types −
 Internal fragmentation:
The memory block assigned to a process is bigger than requested. Some portion of the memory is left unused, as it cannot be used by another process.
 External fragmentation:
The total memory space is enough to satisfy a request or to accommodate a process, but it is not contiguous, so it cannot be used.
Compaction:
 Memory compaction is the process of moving allocated objects together, leaving the empty space together.
 External fragmentation can be reduced by compaction, i.e. shuffling the memory contents to place all free memory together in one large block. To make compaction feasible, relocation should be dynamic. Internal fragmentation can be reduced by assigning the smallest partition that is still large enough for the process.
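A rough sketch of the compaction idea, in Python. The block names, sizes, and memory size are hypothetical; allocated blocks are simply slid toward address 0 so that all free space ends up as one contiguous hole.

```python
# Minimal compaction sketch (illustrative only): each block is (name, size).
# Allocated blocks are relocated toward address 0; all free space is left at the top.

def compact(blocks, memory_size):
    """Return new (name, start, size) placements after compaction."""
    placements = []
    next_free = 0
    for name, size in blocks:
        placements.append((name, next_free, size))  # relocate block to next_free
        next_free += size
    hole = memory_size - next_free                   # single free block remaining
    return placements, hole

if __name__ == "__main__":
    allocated = [("P1", 100), ("P2", 50), ("P3", 200)]   # hypothetical processes
    layout, free_space = compact(allocated, memory_size=512)
    print(layout)       # [('P1', 0, 100), ('P2', 100, 50), ('P3', 150, 200)]
    print(free_space)   # 162 units left as one contiguous hole
```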
Address Spaces:
 An address generated by the CPU is a logical address. A logical address is also known as a virtual address.
 An address actually available on the memory unit is a physical address.
 If the binding of instructions and data to memory addresses is done at compile time or load time, the logical and physical addresses are the same.
 If the binding occurs at run time, the logical and physical addresses differ.
Relocation:
 The role of relocation, the ability to execute processes independently from their physical location in memory, is central for memory management: virtually all the techniques in this field rely on the ability to relocate processes efficiently.
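A minimal sketch of dynamic relocation with a base (relocation) register and a limit register; the register values below are assumptions for illustration. Every logical address is checked against the limit and then added to the base to form the physical address.

```python
class MMU:
    """Toy MMU with a relocation (base) register and a limit register."""
    def __init__(self, base, limit):
        self.base = base      # where the process is loaded in physical memory
        self.limit = limit    # size of the process's logical address space

    def translate(self, logical):
        if logical >= self.limit:          # protection check
            raise MemoryError("addressing error: trap to the OS")
        return self.base + logical         # dynamic relocation

mmu = MMU(base=14000, limit=3000)          # hypothetical register values
print(mmu.translate(250))                  # 14250
```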
Swapping:
 Swapping is a mechanism in which a process can be swapped (moved) temporarily out of main memory to secondary storage (disk), making that memory available to other processes. At some later time, the system swaps the process back from secondary storage into main memory.
 Though performance is usually affected by the swapping process, it helps in running multiple and big processes in parallel, and that is why swapping is also known as a technique for memory compaction.
Managing allocated and free partitions:
 Example: memory with 5 processes and 3 holes (see the bitmap sketch below):
 Tick marks show memory allocation units.
 Shaded regions (0 in the bitmap) are free.
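The bitmap view described above can be sketched in a few lines of Python. The bitmap contents are hypothetical: each entry records whether one allocation unit is free (0) or allocated (1), and placing a process of k units means scanning for a run of k zeros.

```python
def find_hole(bitmap, k):
    """Return the index of the first run of k free units (bits == 0), or -1."""
    run_start, run_len = 0, 0
    for i, bit in enumerate(bitmap):
        if bit == 0:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len == k:
                return run_start
        else:
            run_len = 0
    return -1

# 1 = allocated unit, 0 = free unit (hypothetical layout with 3 holes)
bitmap = [1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1]
print(find_hole(bitmap, 3))   # 7 -> first hole large enough for 3 units
```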
Memory Allocation Strategies:
 Linked List
 First Fit
 Best Fit
 Worst Fit
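A minimal sketch (not from the slides) of the three placement strategies over a linked list of free holes, each hole given as (start, size); the hole list and the request size are made up for the example.

```python
def first_fit(holes, size):
    """Return the first hole large enough for the request."""
    for hole in holes:
        if hole[1] >= size:
            return hole
    return None

def best_fit(holes, size):
    """Return the smallest hole that still fits (least leftover space)."""
    candidates = [h for h in holes if h[1] >= size]
    return min(candidates, key=lambda h: h[1], default=None)

def worst_fit(holes, size):
    """Return the largest hole (most leftover space)."""
    candidates = [h for h in holes if h[1] >= size]
    return max(candidates, key=lambda h: h[1], default=None)

holes = [(0, 100), (200, 500), (800, 200), (1200, 300)]   # (start, size), hypothetical
print(first_fit(holes, 212))   # (200, 500)
print(best_fit(holes, 212))    # (1200, 300)
print(worst_fit(holes, 212))   # (200, 500)
```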
Virtual Memory:
 Virtual memory is a space where large programs can store themselves in the form of pages during their execution, while only the required pages or portions of processes are loaded into main memory. This technique is useful because a large virtual memory is provided for user programs even when only a very small physical memory is available.
 In real scenarios, most processes never need all their pages at once, for the following reasons:
 Error handling code is not needed unless that specific error occurs, some of which are quite rare.
 Arrays are often over-sized for worst-case scenarios, and only a small fraction of the arrays are actually used in practice.
 Certain features of certain programs are rarely used.

Benefits of having Virtual Memory:
 Large programs can be written, as the virtual space available is huge compared to physical memory.
 Less I/O is required, which leads to faster and easier swapping of processes.
 More physical memory is available, as programs are stored in virtual memory, so they occupy much less space in actual physical memory.
Paging:
 Paging is a memory management scheme that eliminates the need for contiguous allocation of physical memory. This scheme permits the physical address space of a process to be non-contiguous.
 The mapping from virtual to physical addresses is done by the memory management unit (MMU), which is a hardware device, and this mapping is known as the paging technique.
 Paging is used for faster access to data. When a program needs a page, it is available in main memory, as the OS copies a certain number of pages from the storage device into main memory.
 The OS performs an operation for storing and retrieving data from secondary storage devices for use in main memory. Paging is one such memory management scheme. It avoids external fragmentation.
 The physical address space is conceptually divided into a number of fixed-size blocks called frames. The logical address space is also split into fixed-size blocks called pages.
 In this technique, logical memory is divided into blocks of the same size called pages, and physical memory is divided into fixed-size blocks called frames.
Page Size = Frame Size
 Initially all pages are in secondary storage; when a process needs to execute, its pages are loaded into any available memory frames from secondary storage.
Paging (figure): p = page number, d = page offset, f = frame number.
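The translation shown in the figure (p = page number, d = offset, f = frame number) can be written out directly. The 4 KB page size and the page-table contents below are assumptions for illustration.

```python
PAGE_SIZE = 4096                      # hypothetical 4 KB pages

page_table = {0: 5, 1: 2, 2: 7}       # page number p -> frame number f (hypothetical)

def translate(logical_address):
    p = logical_address // PAGE_SIZE  # page number
    d = logical_address % PAGE_SIZE   # page offset
    f = page_table[p]                 # frame number from the page table
    return f * PAGE_SIZE + d          # physical address

print(translate(8300))                # page 2, offset 108 -> frame 7 -> 28780
```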
Memory Protection and Sharing:
 Read-write rights
 Valid-invalid pages
 Sharing common code
 Memory sharing due to mapping
Translation Look-Aside Buffer (TLB):
 In paging, two memory accesses are needed: one to find the page table entry and one to access the actual location, leading to a time delay in search and processing and slowed-down memory access.
 This is solved by the TLB: it is a high-speed memory divided into key and value fields, where the key is used to make the search faster.
 Expensive hardware; holds a small number of entries.
 Used for large page tables.
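A rough sketch of how a TLB short-circuits the page-table lookup: the TLB is modelled as a tiny dictionary from page number (key) to frame number (value). The page size, table contents, capacity, and eviction rule are all assumptions for illustration.

```python
PAGE_SIZE = 4096
page_table = {0: 5, 1: 2, 2: 7, 3: 9}   # full table kept in main memory (hypothetical)
tlb = {}                                 # small, fast key -> value cache
TLB_CAPACITY = 2

def translate(logical):
    p, d = divmod(logical, PAGE_SIZE)
    if p in tlb:                         # TLB hit: page-table access avoided
        f = tlb[p]
    else:                                # TLB miss: consult the page table
        f = page_table[p]
        if len(tlb) >= TLB_CAPACITY:     # evict the oldest cached entry (simplistic)
            tlb.pop(next(iter(tlb)))
        tlb[p] = f
    return f * PAGE_SIZE + d

print(translate(8300))   # miss on page 2, then cached -> 28780
print(translate(8304))   # hit on page 2 -> 28784
```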
Hierarchical Paging:
 A single-level scheme allocates the page table contiguously in main memory (a disadvantage).
 Solution: split the page table into smaller pieces and achieve two-level paging (the page table itself is paged).
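A minimal sketch of how a 32-bit logical address could be split for two-level paging. The 10/10/12 split below (10-bit outer index, 10-bit inner index, 12-bit offset, i.e. 4 KB pages) is an assumption for illustration.

```python
OFFSET_BITS = 12          # 4 KB pages (assumed)
INNER_BITS = 10           # index into a second-level page table
OUTER_BITS = 10           # index into the outer page table

def split(address):
    d = address & ((1 << OFFSET_BITS) - 1)                    # page offset
    p2 = (address >> OFFSET_BITS) & ((1 << INNER_BITS) - 1)   # inner page number
    p1 = address >> (OFFSET_BITS + INNER_BITS)                # outer page number
    return p1, p2, d

print(split(0x00403123))  # (1, 3, 291) -- offset 0x123
```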
Hashed Page Table:
Inverted Page Table:
Demand Paging:
 The basic idea behind demand paging is that when a process is swapped in, its pages are not swapped in all at once. Rather, they are swapped in only when the process needs them (on demand). This is termed a lazy swapper, although a pager is a more accurate term.
 The pages that are not moved into memory are marked as invalid in the page table. For an invalid entry the rest of the table is empty. Pages that are loaded in memory are marked as valid, along with the information about where to find the swapped-out page.
 When the process requires a page that is not loaded into memory, a page fault trap is triggered and the following steps are followed (see the sketch after this list):
 The memory address requested by the process is first checked, to verify the request made by the process.
 If it is found to be invalid, the process is terminated.
 If the request by the process is valid, a free frame is located, possibly from a free-frame list, where the required page will be moved.
 An operation is scheduled to move the necessary page from disk to the specified memory location. (This will usually block the process on an I/O wait, allowing some other process to use the CPU in the meantime.)
 When the I/O operation is complete, the process's page table is updated with the new frame number, and the invalid bit is changed to valid.
 The instruction that caused the page fault must now be restarted from the beginning.
 There are cases when no pages are loaded into memory initially; pages are only loaded when demanded by the process by generating page faults. This is called Pure Demand Paging.
 The main issue with demand paging is that after a new page is loaded, the faulting instruction must be restarted. This is not a big issue for small programs, but for larger programs frequent page faults affect performance drastically.
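The steps above can be summarised in a very simplified simulation. Every name here (PageTableEntry, read_page_from_disk, the free-frame list) is a placeholder for illustration, not a real OS interface.

```python
# Extremely simplified simulation of the page-fault steps listed above.

class PageTableEntry:
    def __init__(self):
        self.valid = False
        self.frame = None

def read_page_from_disk(page_number, frame):
    print(f"I/O: load page {page_number} into frame {frame}")  # stands in for disk I/O

def handle_page_fault(page_table, page_number, free_frames):
    if page_number not in page_table:            # invalid reference
        raise RuntimeError("invalid address: terminate the process")
    frame = free_frames.pop()                    # take a frame from the free-frame list
    read_page_from_disk(page_number, frame)      # would block the process in a real OS
    entry = page_table[page_number]
    entry.frame = frame                          # update the page table
    entry.valid = True                           # invalid -> valid
    # the faulting instruction would now be restarted

page_table = {0: PageTableEntry(), 1: PageTableEntry()}
handle_page_fault(page_table, 1, free_frames=[3, 4])
print(page_table[1].valid, page_table[1].frame)  # True 4
```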
Page Replacement Algorithms:
 FIFO
 OPTIMAL
 Least Recently Used (LRU)
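A minimal sketch comparing FIFO and LRU replacement on a hypothetical reference string with 3 frames; it only counts page faults.

```python
from collections import OrderedDict, deque

def fifo_faults(refs, frames):
    memory, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.discard(queue.popleft())   # evict the oldest loaded page
            memory.add(page)
            queue.append(page)
    return faults

def lru_faults(refs, frames):
    memory, faults = OrderedDict(), 0             # insertion order tracks recency
    for page in refs:
        if page in memory:
            memory.move_to_end(page)              # mark as most recently used
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)        # evict the least recently used page
            memory[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]    # hypothetical reference string
print(fifo_faults(refs, 3), lru_faults(refs, 3))  # 10 faults vs 9 faults
```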
Design Issues for Paging:
 Equal allocation
 Proportional allocation
 Global and local allocation
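Proportional allocation can be computed directly: a process of size s out of total size S receives roughly s / S of the m available frames (equal allocation would simply give each process m / n frames). The process sizes and frame count below are hypothetical.

```python
def proportional_allocation(sizes, total_frames):
    """Allocate frames to processes in proportion to their sizes."""
    total_size = sum(sizes)
    return [total_frames * s // total_size for s in sizes]

sizes = [10, 127]                              # hypothetical process sizes
print(proportional_allocation(sizes, 62))      # [4, 57]
```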
Thrashing:
Segmentation: