1) Memory-management structures for paging can become very large with straightforward methods on modern computers with 32-bit (or larger) address spaces.
2) Hierarchical paging addresses this issue by breaking the page table into multiple levels, such as a two-level scheme that pages the page table itself.
3) Other techniques, such as hashed page tables and inverted page tables, can also address this issue: the former hashes the virtual page number and searches a chain for a match, while the latter keeps one entry for each real page of memory.
Structure of the Page Table
• Memory structures for paging can get huge using straightforward methods
  – Consider a 32-bit logical address space, as on modern computers
  – Page size of 4 KB (2^12 bytes)
  – Page table would have 1 million entries (2^32 / 2^12 = 2^20)
  – If each entry is 4 bytes, each process needs 4 MB of physical memory for the page table alone
• Don't want to allocate that contiguously in main memory

Saturday, November 25, 2023 Biju K Raveendran @ BITS Pilani Goa

Structure of the Page Table
• Hierarchical Paging
• Hashed Page Tables
• Inverted Page Tables
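The page-table-size figures above can be checked with a few lines of Python (a sketch; the 4-byte entry size is the slide's assumption):

```python
# Size of a single-level page table for a 32-bit address space.
address_bits = 32
page_size = 4 * 1024        # 4 KB = 2^12 bytes
entry_size = 4              # bytes per page-table entry (slide's assumption)

offset_bits = page_size.bit_length() - 1          # 12 offset bits
num_entries = 2 ** (address_bits - offset_bits)   # 2^20 entries
table_size = num_entries * entry_size             # bytes for the table alone

print(num_entries)                   # 1048576 (~1 million entries)
print(table_size // (1024 * 1024))   # 4 (MB per process)
```

This is why a flat page table is impractical: every process would need a 4 MB contiguous region just for its translation table.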
Hierarchical Page Tables
• Break up the logical address space into multiple page tables
• A simple technique is a two-level page table
Two level Page Table Scheme
Two level Paging Example
• A logical address (on a 32-bit machine with 4 KB page size) is divided into:
  – a page number consisting of 20 bits
  – a page offset consisting of 12 bits
  – One process's page table alone needs 1 M (2^20) entries, i.e. 4 MB at 4 bytes per entry
• Since the page table is paged, the page number is further divided into:
  – a 10-bit page number
  – a 10-bit page offset
Two level Paging Example
• Thus, a logical address is as follows:

      page number     page offset
    +------+------+------+
    |  p1  |  p2  |  d   |
    +------+------+------+
      10     10     12

  where p1 is an index into the outer page table, and p2 is the displacement within the page of the outer page table
• This address translation scheme is called a forward-mapped page table
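The 10/10/12 split and the two lookups can be sketched in Python; the nested-dict "page table" here is purely illustrative, not how hardware stores it:

```python
def split(vaddr):
    """Split a 32-bit logical address into (p1, p2, d) for the 10/10/12 scheme."""
    d  = vaddr & 0xFFF          # low 12 bits: page offset
    p2 = (vaddr >> 12) & 0x3FF  # next 10 bits: index into inner page table
    p1 = (vaddr >> 22) & 0x3FF  # top 10 bits: index into outer page table
    return p1, p2, d

def translate(vaddr, outer_table, frame_size=4096):
    """Forward-mapped lookup: outer table -> inner table -> frame."""
    p1, p2, d = split(vaddr)
    inner = outer_table[p1]     # first memory access: outer page table
    frame = inner[p2]           # second memory access: inner page table
    return frame * frame_size + d

# Tiny illustrative table: virtual page (p1=1, p2=2) maps to frame 7.
outer = {1: {2: 7}}
vaddr = (1 << 22) | (2 << 12) | 0x34
print(hex(translate(vaddr, outer)))   # 0x7034
```

Note that the walk touches one table per level before the data itself can be fetched, which is why the next slides discuss performance.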
Address Translation Scheme
Three level Paging Scheme
Multi-level Paging and Performance
• Since each level is stored as a separate table in memory, converting a logical address to a physical one may take four memory accesses
• Even though the time needed for one memory access is quintupled, caching (the TLB) permits performance to remain reasonable
Multi-level Paging and Performance
• N memory references with k TLB misses require:
  – (N + k) TLB accesses
  – N memory accesses (for data)
  – k page-table walks (#levels memory accesses each)
• Access Time (AT) = (MemAccessTime + TLBAccessTime) × N + (#levels × MemAccessTime + TLBAccessTime + OS exception overhead) × k

Hashed Page Tables
• Common in address spaces > 32 bits
• The virtual page number is hashed into a page table
  – This page table contains a chain of elements hashing to the same location
• Virtual page numbers are compared in this chain, searching for a match
  – If a match is found, the corresponding physical frame is extracted
• A variation for 64-bit addresses is clustered page tables
  – Similar to hashed, but each entry refers to several pages (such as 16) rather than 1
  – Especially useful for sparse address spaces (where memory references are non-contiguous and scattered)
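The access-time formula can be checked numerically. The timings below (100 ns memory access, 10 ns TLB access, 1000 ns OS exception overhead) are illustrative assumptions, not values from the slides:

```python
def access_time(N, k, levels, mem=100, tlb=10, os_overhead=1000):
    """AT = (mem + tlb)*N + (levels*mem + tlb + os_overhead)*k, times in ns.

    N references each cost one TLB access plus the data access; each of the
    k TLB misses additionally costs a full page-table walk (levels * mem),
    a TLB refill access, and the OS exception overhead.
    """
    return (mem + tlb) * N + (levels * mem + tlb + os_overhead) * k

# 1000 references, 2% TLB miss rate, two-level page table:
N, k = 1000, 20
print(access_time(N, k, levels=2))   # 110*1000 + (200 + 10 + 1000)*20 = 134200 ns
```

With a low miss rate the per-miss walk and exception costs add only ~22% here, which is the "performance remains reasonable" point made above.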
Hashed Page Table
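A hashed page table can be sketched as an array of buckets, each holding a chain of (virtual page number, frame) pairs; the modulo hash and table size here are illustrative choices:

```python
TABLE_SIZE = 1024  # number of hash buckets (illustrative)

class HashedPageTable:
    def __init__(self):
        self.buckets = [[] for _ in range(TABLE_SIZE)]

    def insert(self, vpn, frame):
        # All VPNs hashing to the same slot are chained in one bucket.
        self.buckets[vpn % TABLE_SIZE].append((vpn, frame))

    def lookup(self, vpn):
        # Walk the chain, comparing virtual page numbers for a match.
        for entry_vpn, frame in self.buckets[vpn % TABLE_SIZE]:
            if entry_vpn == vpn:
                return frame          # match found: extract physical frame
        return None                   # no match: page fault

pt = HashedPageTable()
pt.insert(5, 42)
pt.insert(5 + TABLE_SIZE, 99)   # collides with VPN 5, chained in same bucket
print(pt.lookup(5), pt.lookup(5 + TABLE_SIZE))   # 42 99
```

The table size is fixed regardless of how large (or sparse) the virtual address space is, which is why this structure suits address spaces wider than 32 bits.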
Inverted Page Table
• One entry for each real page (frame) of memory
• Each entry consists of the virtual address of the page stored in that real memory location, plus information about the process that owns that page
• Decreases the memory needed to store each page table, but increases the time needed to search the table when a page reference occurs
• Use a hash table to limit the search to one (or at most a few) page-table entries
  – A TLB can accelerate access
• But how to implement shared memory?
  – Only one mapping of a virtual address to the shared physical address is possible

Inverted Page Table Architecture
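An inverted page table can be sketched as one array indexed by physical frame, each entry holding a (pid, vpn) pair. The linear search below is for illustration; as noted above, a real system hashes (pid, vpn) to limit the search:

```python
NUM_FRAMES = 8  # illustrative physical memory size, in frames

class InvertedPageTable:
    def __init__(self):
        # One entry per real frame of memory: (pid, vpn), or None if free.
        self.entries = [None] * NUM_FRAMES

    def map(self, pid, vpn, frame):
        self.entries[frame] = (pid, vpn)

    def lookup(self, pid, vpn):
        # Search the whole table for the owning process's mapping.
        for frame, entry in enumerate(self.entries):
            if entry == (pid, vpn):
                return frame
        return None                   # no entry: page fault

ipt = InvertedPageTable()
ipt.map(pid=1, vpn=0x123, frame=3)
print(ipt.lookup(1, 0x123))   # 3
print(ipt.lookup(2, 0x123))   # None
```

The second lookup illustrates the shared-memory problem: since each frame holds exactly one (pid, vpn) entry, process 2 cannot have its own mapping to frame 3.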