Introduction to Memory Management
Base and Limit Registers
A pair of base and limit registers defines the logical address space
The base register holds the smallest legal physical memory
address; the limit register specifies the size of the range.
The CPU must check every memory access generated in user mode to
be sure it falls between base and limit for that user
Hardware Address Protection
Memory Allocation Methods
Memory allocation: Fixed Partitioning
The simplest method for allocating memory is to divide
memory into several fixed-sized partitions.
Each partition may contain exactly one process.
The degree of multiprogramming is bounded by the number
of partitions.
Equal-size partitions
Any process whose size is less than or equal to
the partition size can be loaded into an available
partition
The operating system can swap a process out of a
partition if it is not in a ready or running state
Fixed Partitioning Problems
Solution – Unequal Size Partitions
Multiple-partition allocation – Variable size (MVT)
Variable-partition sizes for efficiency (sized to a given process’ needs)
Hole – Initially, all memory is available for user processes and is considered
one large block of available memory, a hole.
When a process arrives and needs memory, the system searches the set
for a hole that is large enough for this process. If the hole is too large, it is
split into two parts.
One part is allocated to the arriving process; the other is returned to the set
of holes.
Multiple-partition allocation – MVT (Cont.)
When a process terminates, it releases its block of memory, which is then
placed back in the set of holes.
If the new hole is adjacent to others, these adjacent holes are merged to
form one larger hole.
Operating system maintains information about:
a) allocated partitions b) free partitions (holes)
Dynamic Storage-Allocation Problem
How to satisfy a request of size n from a list of free holes?
First-fit: Allocate the first hole that is big enough
Best-fit: Allocate the smallest hole that is big enough; must search the
entire list, unless the list is ordered by size
Produces the smallest leftover hole
Worst-fit: Allocate the largest hole; must also search the entire list
Produces the largest leftover hole
First-fit and best-fit are better than worst-fit in terms of speed and storage
utilization
Fragmentation
Both the first-fit and best-fit strategies for memory allocation
suffer from external fragmentation.
External Fragmentation – total memory space exists to
satisfy a request, but it is not contiguous
In the worst case, we could have a block of free (or wasted)
memory between every two processes.
Internal Fragmentation – allocated memory may be slightly
larger than requested memory; this size difference is memory
internal to a partition, but not being used
Depending on the total amount of memory storage and the
average process size, external fragmentation may be a minor
or a major problem.
Fragmentation
Statistical analysis of first fit, for instance, reveals that, even
with some optimization, given N allocated blocks, another 0.5N
blocks will be lost to fragmentation.
That is, one-third of memory may be unusable (0.5N wasted out of
1.5N total blocks).
This property is known as the 50-percent rule.
Fragmentation Removal
Swapping
A process can be swapped temporarily out of memory to a
backing store, and then brought back into memory for continued
execution
Total physical memory space of processes can exceed
physical memory
Backing store – fast disk large enough to accommodate copies
of all memory images for all users; must provide direct access to
these memory images
Roll out, roll in – swapping variant used for priority-based
scheduling algorithms; lower-priority process is swapped out so
higher-priority process can be loaded and executed
Major part of swap time is transfer time; total transfer time is
directly proportional to the amount of memory swapped
System maintains a ready queue of ready-to-run processes
which have memory images on disk
Schematic View of Swapping
Context Switch Time including Swapping
Context Switch Time and Swapping (Cont.)
Address Binding
Address binding: mapping the program's logical (virtual) addresses to
corresponding physical (main-memory) addresses
Programs on disk, ready to be brought into memory to execute, form an
input queue
Without support, a program must be loaded into address 0000
Further, addresses are represented in different ways at different stages of a
program's life
Source code addresses are usually symbolic (e.g. variable names such as
count, i)
Compiled code addresses bind to relocatable addresses
i.e. "14 bytes from beginning of this module"
Linker or loader will bind relocatable addresses to absolute addresses
e.g. 74014
Each binding maps one address space to another
Binding of Instructions and Data to Memory
Dynamic Loading
So far… it has been necessary for the entire program and all
data of a process to be in physical memory for the process to
execute.
The size of a process has thus been limited to the size of
physical memory.
To obtain better memory-space utilization, we can use dynamic
loading:
1. A routine is not loaded until it is called.
2. All routines are kept on disk in a relocatable load format.
3. Main program is loaded into memory
4. When a routine needs to call another routine, relocatable linking loader is
called to load the desired routine into memory.
5. Control is transferred to new routine.
Dynamic Linking
Static linking – system libraries and program code
combined by the loader into the binary program image
Dynamic linking – linking postponed until execution time
Small piece of code, stub, used to locate the appropriate
memory-resident library routine
Stub replaces itself with the address of the routine, and
executes the routine
Operating system checks if routine is in the process's
memory address space
If not in address space, add to address space
Dynamic linking is particularly useful for libraries
System also known as shared libraries
Logical vs. Physical Address Space
Memory-Management Unit (MMU)
Hardware device that at run time maps virtual to physical address
To start, consider a simple scheme in which the value in the relocation
register is added to every address generated by a user process at the
time it is sent to memory
Base register now called relocation register
MS-DOS on Intel 80x86 used 4 relocation registers
The user program deals with logical addresses; it never sees the real
physical addresses
Execution-time binding occurs when reference is made to location
in memory
Logical address bound to physical addresses
Dynamic relocation using a relocation register
Hardware Support for Relocation and Limit Registers
Paging
Physical address space of a process can be noncontiguous; process
is allocated physical memory whenever the latter is available
Avoids external fragmentation
Avoids problem of varying sized memory chunks
Divide physical memory into fixed-sized blocks called frames
Size is power of 2, between 512 bytes and 16 Mbytes
Divide logical memory into blocks of same size called pages
Keep track of all free frames
To run a program of size N pages, need to find N free frames and
load program
Set up a page table to translate logical to physical addresses
Every process has its own page table
Backing store likewise split into pages
Still have internal fragmentation
Address Translation Scheme
Address generated by CPU is divided into:
Page number (p) – used as an index into a page table which
contains base address of each page in physical memory
Page offset (d) – combined with base address to define the
physical memory address that is sent to the memory unit
For a logical address space of size 2^m and page size 2^n, the page
number occupies the high m − n bits and the page offset the low n bits
Paging Hardware
Paging Model of Logical and Physical Memory
Paging Example
Free Frames
Implementation of Page Table
Page table is kept in main memory
Page-table base register (PTBR) points to the page table
Page-table length register (PTLR) indicates size of the page
table
In this scheme every data/instruction access requires two
memory accesses
One for the page table and one for the data / instruction
The two memory access problem can be solved by the
use of a special fast-lookup hardware cache called
associative memory or translation look-aside buffers
(TLBs)
Implementation of Page Table (Cont.)
Some TLBs store address-space identifiers (ASIDs) in each
TLB entry – uniquely identifies each process to provide
address-space protection for that process
Otherwise need to flush at every context switch
TLBs typically small (64 to 1,024 entries)
On a TLB miss, value is loaded into the TLB for faster access
next time
Replacement policies must be considered
Some entries can be wired down for permanent fast
access
Paging Hardware With TLB
Background
Create a page table for each process
PTBR (page table base register)
Points to the start of that table (is a pointer)
PTLR (page table length register)
Tells how many entries are in that table
Each PCB has values for PTBR and PTLR
OS manages the place to store page table (PTBR value)
PTLR is not fixed → the size of the in-use virtual address space is not fixed
The virtual address space has a 4 GB upper limit (on a 32-bit system)
Rarely does a process need all 4 GB
Add more entries when more memory is needed
Delete entries when the memory is freed.
Two-Level Page-Table Scheme
Benefits
The page table itself need not be allocated
contiguously
The inner page tables can be allocated at different places
At times the inner page tables themselves can also be
stored in the backing store
Protection and security are also improved
A user cannot traverse all the page-table entries by adding
to or subtracting from a pointer to an inner page-table entry
Multi-level schemes become much more relevant on 64-bit
computers
64-bit computers provide a 2^64-byte virtual address space to each
process
Memory Management
NUMERICALS
Paging – Effective Access Time
Find the effective access time when there is an 80% chance of a
TLB hit and it takes 100 nanoseconds to access memory.
Memory management – Memory size
Suppose you have a RAM of size 32 MB. Calculate the total number
of locations/entries in the RAM and the number of bits required to
address each location.
Memory management - Paging
Suppose: logical address = 26 bits (m = 26)
Physical address = 16 bits
Page size = 1 KB, so n = 10
Find the following: # of frames & # of pages
Pages: # of pages = 2^(m−n) = 2^(26−10) = 2^16 = 64K pages in the virtual
address space
Frames: # of frames = 2^(16−n) = 2^(16−10) = 2^6 = 64 frames in physical
memory
Memory management - paging
Consider a logical address space of 64 pages of 1024 words each,
mapped onto a physical memory of 32 frames.
a. How many bits are required to represent the logical address space?
b. How many bits are there in the physical address?
Logical address space bits = m = ?
Recall that the size of a page/frame is 2^n and the size of the logical address space is 2^m
Then, logical address space = # of pages × page size
➔ 2^m = # of pages × 2^n
➔ 2^(m−n) = # of pages
n = ?
Page size = 2^n ➔ 1024 = 2^n
2^10 = 2^n ➔ n = 10 bits
Again: # of pages = 2^(m−n) ➔ 64 = 2^(m−10) ➔ 2^6 = 2^(m−10) ➔ 6 = m − 10 ➔ m = 16 bits (logical
address)
Physical address: Let x be the physical address bits.
Size of physical address space = 2^x
Size of physical address space = # of frames × frame size (frame size = page size)
Size of physical address space = 32 × 1024 ➔ 2^x = 2^5 × 2^10 ➔ 2^x = 2^15 ➔ number of bits = x = 15
bits
Memory management - paging
Consider a logical address space of 64 pages of 1024 words each,
mapped onto a physical memory of 32 frames.
a. How many bits are required to represent the logical address space?
b. How many bits are there in the physical address?
Easy solution
Addressing within a 1024-word page requires 10 bits because 1024 = 2^10. Since
the logical address space consists of 64 = 2^6 pages, the logical addresses
must be 10 + 6 = 16 bits. Similarly, since there are 32 = 2^5 physical frames,
physical addresses are 5 + 10 = 15 bits long.
Memory management - paging
Consider a system using Paging for memory management. The
system uses a page size of 1024 bytes, and the length of the
address register is 18 bits. Now compute the following
parameters:
a) Size of RAM
b) Total number of frames in the RAM
c) Number of bits in a page table entry
Solution:
a) Size of RAM = 2^18 bytes = 256 KB (the 18-bit address register spans the whole RAM)
b) Total number of frames = 2^18 / 2^10 = 2^8 = 256 frames
c) Number of bits in a page-table entry (the frame number) = 18 − 10 = 8 bits
Memory management - Paging
Question
Write a C/C++ function to perform address translation in a system using Paging for
memory management. You are given with the page size, page table, and a logical block
no, and you are required to compute the corresponding physical address. Use the following
named constant, and the function prototype:
const int PS = ... ; // page size
Solution:
Memory management - Paging
Consider a system with a page size of 256. Assume a process running in this system
has the following page table:
Page number Frame number
0 50
1 10
2 90
Now translate the following logical addresses into the corresponding paging physical
addresses:
i) 700
page = 700 / 256 = 2, offset = 700 mod 256 = 188
frame = tbl[2] = 90
physical address = 90 × 256 + 188 = 23,228
Memory translation – logical to physical
Given a logical address of 2486 and a page size of 256, what is the corresponding
physical address?
We can divide the address by the page size to get the page number: 2486 / 256 ≈ 9.7,
so the page number is 9. From the page table, we grab the corresponding frame
number; here it is frame 5.
To find the offset, we take 2486 mod 256 = 182.
The frame's base address is 256 × 5 = 1280.
Adding the offset gives the physical address: 1280 + 182 = 1462 = 0x5B6, the
memory location.
Memory translation – logical to physical(2nd method)
Given a logical address of 2486 and a page size of 256, what is the corresponding physical address?
Convert the logical address to binary: 2486 = 100110110110.
Take log2 of the page size: log2(256) = log2(2^8) = 8.
So the least-significant 8 bits are the page offset: 10110110 = 182.
The most-significant 4 bits, 1001 = 9, are the page number.
From the page table, page 9 maps to frame 5, so the frame's base address is 256 × 5 = 1280.
Adding the offset gives the physical address: 1280 + 182 = 1462 = 0x5B6, the
memory location.
Paging guidance