Chapter 4: Memory Management

- Basic memory management
- Relocation and protection
- Swapping
- Virtual memory
- Paging
- Segmentation
- Paging with segmentation

Basic Memory Management
- Ideally, programmers want memory that is large, fast, and nonvolatile.
- Memory hierarchy:
  - a small amount of fast, expensive memory: cache
  - some medium-speed, medium-priced, volatile main memory (RAM)
  - tens or hundreds of gigabytes of slow, cheap, nonvolatile disk storage
- The memory manager handles the memory hierarchy.

The memory manager:
- keeps track of which parts of memory are in use and which are not
- allocates and deallocates memory for processes
- manages swapping between main memory and disk when main memory is too small to hold all the processes

Operating System Concepts

Basic Memory Management: Monoprogramming without Swapping or Paging
- Three simple ways of organizing memory with an operating system and one user process.

Multiprogramming with Fixed Partitions
- Fixed memory partitions with separate input queues for each partition.
- Fixed memory partitions with a single input queue.

1. When a job arrives, it can be put into the input queue for the smallest partition large enough to hold it.
2. Disadvantage: small jobs may have to wait to get into memory even though plenty of memory is free.

With a single input queue:
- Whenever a partition becomes free, the job closest to the front of the queue that fits in it could be loaded into the empty partition and run.
- Alternatively, search the whole input queue and pick the largest job that fits.
- Keep at least one small partition around for small jobs.
- A job may not be skipped over more than k times.

- Input queue: the collection of processes on the disk that are waiting to be brought into memory to run.
- User programs go through several steps before being run.

Multistep Processing of a User Program

Logical vs. Physical Address Space
- Logical address: generated by the CPU; also referred to as a virtual address.
- Physical address: the address seen by the memory unit.
- Logical and physical addresses are the same in compile-time and load-time address-binding schemes.
- Logical (virtual) and physical addresses differ in the execution-time address-binding scheme.

Relocation and Protection
- Cannot be sure where a program will be loaded in memory:
  - address locations of variables and code routines cannot be absolute
  - a program must be kept out of other processes' partitions

Relocation
- When a program is linked, the linker must know at what address the program will begin in memory.
- Example: a call to a procedure at absolute address 100, in a program loaded into partition 1 (starting at 100K), must actually refer to address 100K + 100.


Dynamic relocation using a relocation register


Protection
- IBM 360 approach: divide memory into 2-KB blocks and assign a 4-bit protection code to each block; the PSW contained a 4-bit key.
- Base and limit registers:
  - address locations are added to the base value to map to a physical address
  - an address larger than the limit value is an error
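The base-and-limit scheme above can be sketched in a few lines (my own illustration, not from the slides): check the logical address against the limit, then add the base.

```python
# Sketch of dynamic relocation with base and limit registers: every
# logical address is checked against the limit register, then added to
# the base register to form the physical address.

def translate(logical_addr, base, limit):
    """Map a logical address to a physical one, trapping on overflow."""
    if logical_addr < 0 or logical_addr >= limit:
        raise MemoryError("protection fault: address outside partition")
    return base + logical_addr

# The relocation example above: a program loaded at base 100K (102400)
# referencing logical address 100.
print(translate(100, base=102400, limit=65536))  # 102500
```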

Hardware Support for Relocation and Limit Registers

Swapping
- A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution.
- Backing store: a fast disk large enough to accommodate copies of all memory images for all users; it must provide direct access to these memory images.

Schematic View of Swapping

Swapping (1)
- Memory allocation changes as processes come into memory and leave memory.
- Shaded regions are unused memory.

- Swapping complicates allocation and deallocation of memory, as well as keeping track of it.
- Memory compaction: moving all the processes downward, one by one, as far as possible.
- It is usually not done because it requires a lot of CPU time.

- When memory is assigned dynamically, the OS must manage it.
- Two ways to keep track of memory usage: bitmaps and linked lists.

Memory Management with Bitmaps
- A part of memory with 5 processes and 3 holes; tick marks show allocation units; shaded regions are free.
- The corresponding bitmap carries the same information as a list.

- The size of the allocation unit is an important design issue:
  - the smaller the allocation unit, the larger the bitmap
  - the larger the allocation unit, the smaller the bitmap, but memory may be wasted in the last unit of the process
- Problem: searching a bitmap for a run of a given length is a slow operation.
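A minimal sketch of the slow operation just described (my own example): allocating k units means scanning the bitmap for a run of k zero bits.

```python
# Bitmap-based allocation: memory is divided into allocation units and
# bit i is 1 if unit i is in use. Allocating a k-unit process means
# scanning for a run of k consecutive 0 bits.

def find_free_run(bitmap, k):
    """Return the index of the first run of k free (0) units, or -1."""
    run_start, run_len = 0, 0
    for i, bit in enumerate(bitmap):
        if bit == 0:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len == k:
                return run_start
        else:
            run_len = 0
    return -1

def allocate(bitmap, k):
    """Mark the first suitable run as used; return its start or -1."""
    start = find_free_run(bitmap, k)
    if start >= 0:
        for i in range(start, start + k):
            bitmap[i] = 1
    return start

bitmap = [1, 1, 0, 0, 0, 1, 0, 0]
print(allocate(bitmap, 3))   # 2: units 2-4 are the first free run of 3
```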

Memory Management with Linked Lists
- Four neighbour combinations for the terminating process X.

When the processes and holes are kept on a list sorted by address, several algorithms can be used to allocate memory for a newly created process:
- First fit
- Next fit
- Best fit
- Worst fit
- Quick fit

Algorithms:
- First fit: scan the list of segments until a hole is found that is big enough to hold the process. The hole is then broken up into two pieces, one for the process and one for the unused memory.
- Next fit: works the same way as first fit, except that it keeps track of where it last found a suitable hole. The next time it is called, it starts searching the list from the place where it left off, instead of always at the beginning as first fit does.

- Best fit: the memory manager scans the entire list of segments and takes the smallest hole that is adequate.
- Worst fit: the memory manager scans the entire list of segments and takes the largest available hole.

- Quick fit: the memory manager maintains separate lists for the commonly requested sizes. With quick fit, finding a hole of the required size is extremely fast, but when a process terminates or is swapped out, finding its neighbours to see if a merge is possible is expensive.

Example
- Consider a swapping system in which memory consists of the following hole sizes, in memory order: 10, 4, 20, 18, 7, 9, 12, 15.
- Which hole is taken for successive requests of (a) 12, (b) 10, (c) 9 for first fit?
- Repeat the question for best fit, worst fit, and next fit.
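A small simulation (my own sketch) of the four strategies applied to this exercise. Each satisfied request shrinks the chosen hole in place; `serve` returns the original size of the hole chosen for each request.

```python
# Simulate first/best/worst/next fit on the exercise's hole list.
# Holes are kept in memory order; next fit keeps a rover position.

def serve(holes, requests, strategy):
    holes = holes[:]              # hole sizes, in memory order
    taken = []
    next_pos = 0                  # rover for next fit
    for r in requests:
        if strategy == "first":
            idx = next(i for i, h in enumerate(holes) if h >= r)
        elif strategy == "best":
            idx = min((i for i, h in enumerate(holes) if h >= r),
                      key=lambda i: holes[i])
        elif strategy == "worst":
            idx = max(range(len(holes)), key=lambda i: holes[i])
        elif strategy == "next":
            order = [(next_pos + i) % len(holes) for i in range(len(holes))]
            idx = next(i for i in order if holes[i] >= r)
            next_pos = (idx + 1) % len(holes)
        taken.append(holes[idx])  # original size of the chosen hole
        holes[idx] -= r
    return taken

holes = [10, 4, 20, 18, 7, 9, 12, 15]
print(serve(holes, [12, 10, 9], "first"))  # [20, 10, 18]
print(serve(holes, [12, 10, 9], "best"))   # [12, 10, 9]
print(serve(holes, [12, 10, 9], "worst"))  # [20, 18, 15]
print(serve(holes, [12, 10, 9], "next"))   # [20, 18, 9]
```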

Virtual Memory
- Overlays: split the program into pieces called overlays.
- Swapping of overlays in and out is done by the system.
- Example: a 16-MB program can run on a 4-MB machine.

Virtual Memory
- Virtual address: a program-generated address. The virtual address space is divided up into units called pages.
- Physical address: the address seen by the memory unit. The physical address space is divided into units called page frames.
- Pages and page frames are always the same size.

Memory-Management Unit (MMU)
- A hardware device that maps virtual to physical addresses.
- In the MMU scheme, the value in the relocation register is added to every address generated by a user process at the time it is sent to memory.
- The user program deals with logical addresses; it never sees the real physical addresses.

Paging (1): the position and function of the MMU.

Paging (2): the relation between virtual addresses and physical memory addresses, given by the page table.

Page Tables (1): internal operation of the MMU with 16-bit addresses and 4-KB pages.

Page Table
- The page number is used as an index into the page table, yielding the number of the page frame.
- If the present/absent bit is 0, a trap to the OS is caused.
- If the present/absent bit is 1, the page frame number found in the page table is copied to the high-order 3 bits of the output register.
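The lookup above can be sketched in the figure's terms: 16-bit virtual addresses and 4-KB pages give a 4-bit page number and a 12-bit offset, and the frame number occupies the high-order 3 bits of the output. The page table contents here are an invented toy example.

```python
# Sketch of the MMU lookup: split the virtual address into page number
# and offset, consult the page table, and splice the frame number onto
# the unchanged offset.

def translate(vaddr, page_table):
    page = vaddr >> 12             # high-order 4 bits: page number
    offset = vaddr & 0xFFF         # low-order 12 bits: offset, copied as-is
    present, frame = page_table[page]
    if not present:
        raise LookupError("page fault: trap to the OS")
    return (frame << 12) | offset  # frame number in the high-order bits

# Toy page table: entry = (present/absent bit, frame number)
page_table = {0: (1, 2), 1: (0, 0), 2: (1, 6)}
print(hex(translate(0x2004, page_table)))  # page 2 -> frame 6: 0x6004
```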

Page Table Entry
- Typical page table entry fields:
- Page frame number: the goal of the page mapping is to locate this value.
- Present/absent bit: if 1, the entry is valid and can be used; if 0, the virtual page to which the entry belongs is not currently in memory.
- Protection bits: tell what kinds of access are permitted.

Page Table Entry (continued)
- Modified and referenced bits: keep track of page usage. When a page is written to, the hardware automatically sets the modified bit. Its value helps the OS choose a page to evict when a page fault occurs: if the page has been modified (i.e., is "dirty"), it must be written back; if it has not been modified (i.e., is "clean"), it can just be abandoned.
- Referenced bit: set whenever a page is referenced, either for reading or for writing.
- Caching disabled bit: important for pages that map onto device registers rather than memory. Machines that have a separate I/O space and do not use memory-mapped I/O do not need this bit.

Page Table Issues
- The page table can be extremely large.
- The mapping must be fast: it must be done on every memory reference.
- Each process needs its own page table.
- Solution 1: a single page table consisting of an array of fast hardware registers; this is expensive.
- Solution 2: keep the page table entirely in main memory.

Two-Level Paging Example
- A logical address (on a 32-bit machine with a 4-KB page size) is divided into a page number consisting of 20 bits and a page offset consisting of 12 bits.
- Since the page table is paged, the page number is further divided into a 10-bit page number and a 10-bit page offset.
- Thus, a logical address is as follows: p1 (10 bits) | p2 (10 bits) | d (12 bits), where p1 is an index into the outer page table and p2 is the displacement within the page of the outer page table.
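The 10/10/12 split described above is just bit manipulation, as this short sketch shows:

```python
# Split a 32-bit virtual address into (p1, p2, d) for two-level paging
# with 4-KB pages: 10 bits for the outer index, 10 for the inner index,
# 12 for the offset.

def split(vaddr):
    p1 = (vaddr >> 22) & 0x3FF    # top 10 bits: outer page table index
    p2 = (vaddr >> 12) & 0x3FF    # next 10 bits: second-level index
    d = vaddr & 0xFFF             # low 12 bits: offset within the page
    return p1, p2, d

print(split(0x00403004))  # (1, 3, 4)
```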

Two-Level Page Tables
- Second-level page tables and a top-level page table.
- This method avoids keeping all the page tables in memory all the time; those that are not needed should not be kept around.

Implementation of the Page Table
- The page table is kept in main memory.
- The page-table base register (PTBR) points to the page table.
- The page-table length register (PTLR) indicates the size of the page table.
- In this scheme, every data/instruction access requires two memory accesses: one for the page table and one for the data/instruction.
- The two-memory-access problem can be solved by the use of a special fast-lookup hardware cache called associative memory or translation look-aside buffer (TLB).

TLBs: Translation Lookaside Buffers
- A TLB speeds up paging.
- To see whether a virtual page is present in the TLB, all the entries are compared simultaneously.

Paging Hardware with TLB

Inverted Page Table
- One entry for each page frame of memory.
- Each entry consists of the virtual address of the page stored in that real memory location, with information about the process that owns the page.
- Decreases the memory needed to store each page table, but increases the time needed to search the table when a page reference occurs.
- Use a hash table to limit the search to one, or at most a few, page-table entries.
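A minimal sketch of the idea (my own illustration): one entry per physical frame, keyed by (process id, virtual page), with a hash table in front so a lookup touches only a few entries instead of scanning every frame.

```python
# Inverted page table: frames[] holds one (pid, vpage) entry per
# physical frame; index{} is the hash table that makes lookups fast.

class InvertedPageTable:
    def __init__(self, num_frames):
        self.frames = [None] * num_frames   # frame -> (pid, vpage)
        self.index = {}                     # (pid, vpage) -> frame

    def map(self, pid, vpage, frame):
        self.frames[frame] = (pid, vpage)
        self.index[(pid, vpage)] = frame

    def lookup(self, pid, vpage):
        frame = self.index.get((pid, vpage))
        if frame is None:
            raise LookupError("page fault")
        return frame

ipt = InvertedPageTable(num_frames=4)
ipt.map(pid=7, vpage=12, frame=3)
print(ipt.lookup(7, 12))   # 3
```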

Inverted Page Tables: comparison of a traditional page table with an inverted page table.

Inverted Page Table Architecture

Page Replacement Algorithms
- A page fault forces a choice: which page must be removed to make room for the incoming page?
- A modified page must first be saved; an unmodified one is just overwritten.
- It is better not to choose an often-used page: it will probably need to be brought back in soon.

FIFO Page Replacement Algorithm
- Maintain a linked list of all pages in the order they came into memory, with the page at the head of the list the oldest one and the page at the tail the most recent arrival.
- The page at the head of the list is replaced.
- Disadvantage: a heavily used page may be thrown out.

FIFO Page Replacement

Second Chance Page Replacement Algorithm
- Operation of second chance: pages sorted in FIFO order, and the page list if a fault occurs at time 20 and A has its R bit set (the numbers above the pages are their loading times).

Second-Chance (Clock) Page-Replacement Algorithm

The Clock Page Replacement Algorithm

Optimal Page Replacement Algorithm
- Optimal but unrealizable.
- Working: each page can be labelled with the number of instructions that will be executed before that page is first referenced. One of these pages will be referenced on the very next instruction; other pages may not be referenced until 10, 100, or perhaps 1000 instructions later.
- The page with the highest label should be removed.
- Problem: at the time of a page fault, the OS has no way of knowing when each of the pages will be referenced next.

Not Recently Used (NRU)
- When a process is started up, both page bits for all its pages are set to 0 by the OS.
- Periodically (e.g., on each clock interrupt), the R bit is cleared, to distinguish pages that have not been referenced recently from those that have been.

Not Recently Used Page Replacement Algorithm
Pages are classified:
1. Class 0 (not referenced, not modified): the best page to replace.
2. Class 1 (not referenced, modified): not quite as good, because the page will need to be written out before replacement.
3. Class 2 (referenced, not modified): it probably will be used again soon.
4. Class 3 (referenced, modified): it probably will be used again soon, and the page will need to be written out to disk before replacement.

- NRU removes a page at random from the lowest-numbered nonempty class.
- It is better to remove a modified page that has not been referenced in at least one clock tick than a clean page that is in heavy use.
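The class-based choice above can be sketched directly (my own example): each page's class is 2*R + M, and the victim is drawn at random from the lowest-numbered nonempty class.

```python
# NRU victim selection: classify pages by (referenced, modified) bits
# and pick randomly from the lowest-numbered nonempty class.

import random

def nru_victim(pages):
    """pages: dict page -> (referenced_bit, modified_bit)."""
    cls = {p: 2 * r + m for p, (r, m) in pages.items()}
    lowest = min(cls.values())
    return random.choice([p for p, c in cls.items() if c == lowest])

# The (R, M) bits from the four-frame example later in these slides:
pages = {0: (1, 1), 1: (0, 0), 2: (0, 1), 3: (1, 0)}
print(nru_victim(pages))  # 1: the only class-0 (unreferenced, clean) page
```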

Least Recently Used (LRU)
- Assume pages used recently will be used again soon; throw out the page that has been unused for the longest time.
- Must keep a linked list of pages, most recently used at the front, least recently used at the rear, and update this list on every memory reference!
- Finding a page in the list, deleting it, and then moving it to the front is a very time-consuming operation, even in hardware.
- Alternatively, keep a counter in each page table entry and choose the page with the lowest counter value.

LRU Using a Matrix
- Pages referenced in the order 0, 1, 2, 3, 2, 1, 0, 3, 2, 3.
- The row whose value is lowest belongs to the least recently used page.

Not Frequently Used (NFU)
- Requires a software counter associated with each page, initially zero.
- At each clock interrupt, the OS scans all the pages in memory; for each page, the R bit (which is 0 or 1) is added to the counter.
- The counters are an attempt to keep track of how often each page has been referenced.
- When a page fault occurs, the page with the lowest counter is chosen for replacement.
- Problem: it never forgets anything.

Simulating LRU in Software
- The aging algorithm simulates LRU in software.
- Note: 6 pages for 5 clock ticks, (a) to (e).
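The aging step can be sketched in a few lines (my own illustration): at every clock tick, each page's counter is shifted right one bit and the current R bit is placed in the leftmost position. The page with the lowest counter approximates the least recently used page, and unlike NFU the counters gradually forget old references.

```python
# One clock tick of the aging algorithm with 8-bit counters.

COUNTER_BITS = 8

def tick(counters, r_bits):
    """Shift each counter right and insert the R bit at the left."""
    for page in counters:
        counters[page] = (counters[page] >> 1) | (
            r_bits[page] << (COUNTER_BITS - 1))

counters = {0: 0, 1: 0}
tick(counters, {0: 1, 1: 0})   # page 0 referenced in this tick
tick(counters, {0: 0, 1: 1})   # page 1 referenced in this tick
print(counters)  # {0: 64, 1: 128}: page 0's reference is older
```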

The Working Set Page Replacement Algorithm (1)
- Demand paging: pages are loaded only on demand, not in advance.
- Working set: the set of pages that a process is currently using.
- During any phase of execution, the process references only a relatively small fraction of its pages.

- Thrashing: a program causing a page fault every few instructions is said to be thrashing.
- Working set model: many paging systems try to keep track of each process's working set and make sure that it is in memory before letting the process run. This approach greatly reduces the page fault rate.
- Prepaging: loading pages before letting a process run.

The Working Set Page Replacement Algorithm (2)
- Current virtual time: the amount of CPU time a process has actually used since it started.
- The working set of a process is the set of pages it has referenced during the past T seconds of virtual time.
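Given per-page last-reference timestamps, the working set w(T) is straightforward to compute, as this sketch (with invented numbers) shows:

```python
# Working set: the pages referenced during the last T units of virtual
# time, computed from each page's most recent reference timestamp.

def working_set(last_ref, now, T):
    """last_ref: page -> virtual time of most recent reference."""
    return {page for page, t in last_ref.items() if now - t <= T}

last_ref = {0: 95, 1: 60, 2: 99, 3: 10}
print(sorted(working_set(last_ref, now=100, T=20)))  # [0, 2]
```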

The WSClock Page Replacement Algorithm

The WSClock Page Replacement Algorithm
- If R = 0 and the page is clean, remove it.
- If the page is dirty, it cannot be removed immediately; to avoid a process switch, a write to disk is scheduled, the hand is advanced, and the algorithm continues with the next page.

Review of Page Replacement Algorithms

Exercise
- Consider the following page reference string: 0 1 2 3 0 1 2 3 0 1 2 3 4 5 6 7.
- How many page faults occur, assuming three frames, for the following replacement algorithms?
  I. FIFO
  II. LRU
  III. Optimal
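A quick simulation (my own sketch) of the exercise above. For this cyclic string, FIFO and LRU both fault on every reference, while optimal does much better.

```python
# Count page faults for FIFO, LRU, and optimal replacement.

def count_faults(refs, nframes, policy):
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            if policy == "lru":                 # refresh recency order
                frames.remove(page)
                frames.append(page)
            continue
        faults += 1
        if len(frames) == nframes:
            if policy in ("fifo", "lru"):
                frames.pop(0)                   # oldest / least recent
            else:                               # optimal: farthest next use
                future = refs[i + 1:]
                victim = max(frames,
                             key=lambda p: future.index(p)
                             if p in future else len(future) + 1)
                frames.remove(victim)
        frames.append(page)
    return faults

refs = [0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3, 4, 5, 6, 7]
print(count_faults(refs, 3, "fifo"))  # 16
print(count_faults(refs, 3, "lru"))   # 16
print(count_faults(refs, 3, "opt"))   # 10
```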

Example
- Given a system with four page frames, the following table indicates each page's load time, last reference time, dirty bit, and referenced bit:

Page | Load time | Last reference | Dirty bit | Referenced bit
  0  |    167    |      374       |     1     |       1
  1  |    321    |      321       |     0     |       0
  2  |    254    |      306       |     1     |       0
  3  |    154    |      331       |     0     |       1

- Which page will FIFO replace? Answer: 3. FIFO selects the page loaded first, page 3 at time 154.
- Which page will LRU replace? Answer: 2 (the oldest last-reference time, 306).
- Which page will NRU replace? Answer: 1. NRU first looks for a clean page that was not recently referenced.
- Which page will second chance replace? Second chance first checks the oldest page, page 3. Its referenced bit is 1, so it is reset and the load time is replaced with the current time. The same happens for page 0. So page 2 is selected.

Exercise
- Given references to the following pages by a program: 0 9 0 1 8 / 1 8 7 8 7 / 1 2 8 2 7 / 8 2 3 8 3.
- How many page faults will occur if the program has three page frames available to it and uses: 1. FIFO, 2. LRU, 3. optimal replacement?

e. .y a. 256 pages of logical address space and a page size of 210 bytes. b. On a simple paging system with 224 bytes of physical memory. How many bytes are in page frame? How many bits in the physical address specify the page frame number? How many entries are in the page table? What is the size of the logical address space? How many bits are needed to store an entry in the page table? Assume each page table entry contains a valid/invalid bit in addition to the page frame number. c. d.

Exercise
On a simple paging system with a page table containing 64 entries of 11 bits (including the valid/invalid bit) each, and a page size of 512 bytes:
a. How many bits in the logical address specify the offset within the page?
b. How many bits are in a logical address?
c. What is the size of the logical address space?
d. How many bits in the physical address specify the page frame number?
e. How many bits in the physical address specify the offset within the page frame?
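The arithmetic for this exercise can be worked mechanically, as this sketch shows (everything follows from the three givens):

```python
# 64 page-table entries of 11 bits (1 valid/invalid bit + frame number),
# 512-byte pages.

from math import log2

entries = 64
entry_bits = 11        # includes 1 valid/invalid bit
page_size = 512

offset_bits = int(log2(page_size))                # (a) 9
logical_bits = int(log2(entries)) + offset_bits   # (b) 6 + 9 = 15
logical_space = 2 ** logical_bits                 # (c) 32768 bytes
frame_bits = entry_bits - 1                       # (d) 11 - 1 = 10
frame_offset_bits = offset_bits                   # (e) 9, same as in a page

print(offset_bits, logical_bits, logical_space, frame_bits,
      frame_offset_bits)  # 9 15 32768 10 9
```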

FIFO trace for the reference string 0 9 0 1 8 1 8 7 8 7 1 2 8 2 7 8 2 3 8 3 with three frames:

Ref:     0 9 0 1 8 1 8 7 8 7 1 2 8 2 7 8 2 3 8 3
Fault:   F F . F F . . F . . . F . . . . . F F .
Total:   1 2 2 3 4 4 4 5 5 5 5 6 6 6 6 6 6 7 8 8

Frames after each fault (oldest first):
0 - -,  0 9 -,  0 9 1,  9 1 8,  1 8 7,  8 7 2,  7 2 3,  2 3 8

FIFO therefore incurs 8 page faults on this string.

Design Issues for Paging Systems: Local versus Global Allocation Policies (1)
- Original configuration
- Local page replacement
- Global page replacement

Local versus Global Allocation Policies

- Fixed fraction of memory vs. dynamic allocation.
- If the working set grows, thrashing will result even if there are plenty of free page frames; if the working set shrinks, a local algorithm wastes memory.
- The system must continually decide how many page frames to assign to each process:
  - the number of pages can be made proportional to process size
  - the allocation must be updated dynamically
- PFF (page fault frequency): tells when to increase or decrease a process's page allocation, but says nothing about which page to replace.

Load Control
- Despite good designs, the system may still thrash.
- This happens when the PFF algorithm indicates that some processes need more memory, but no processes need less.
- Solution: reduce the number of processes competing for memory:
  - swap one or more processes to disk and divide up the pages they held
  - reconsider the degree of multiprogramming

Page Size (1)
- Small page size:
  - less internal fragmentation
  - programs need many pages, hence larger page tables
- Large page size:
  - programs need fewer pages, hence smaller page tables
  - more internal fragmentation

Page Size (2)
- Let s = average process size in bytes, p = page size in bytes, and e = bytes per page table entry.
- Number of pages needed per process = s/p, so page table space = s*e/p bytes.
- Internal fragmentation = p/2 (on average, half of the last page is wasted).
- Total overhead due to the page table and internal fragmentation loss:
  overhead = s*e/p + p/2
- This overhead is minimized when p = sqrt(2*s*e).
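A worked example of the formula above, with assumed numbers (s = 1 MB average process size, e = 8 bytes per entry, not values from the slides):

```python
# Optimal page size: overhead(p) = s*e/p + p/2 is minimized at
# p = sqrt(2*s*e), where the two overhead terms are equal.

from math import sqrt

s = 2 ** 20        # assumed average process size in bytes (1 MB)
e = 8              # assumed bytes per page table entry

def overhead(p):
    return s * e / p + p / 2

p_opt = sqrt(2 * s * e)          # sqrt(2 * 2^20 * 8) = sqrt(2^24) = 4096
print(p_opt)                     # 4096.0
print(overhead(4096))            # 4096.0 (= 2048 + 2048, terms balance)
print(overhead(1024), overhead(16384))  # 8704.0 8704.0: worse either way
```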

Separate Instruction and Data Spaces
- One address space
- Separate I and D spaces

Shared Pages
- Two processes sharing the same program share its page table.

Shared Pages
- Program text can be shared.
- Separate tables are kept for I-space and D-space.
- Each process has two pointers in its process table.
- Special data structures are needed to keep track of shared pages.

Cleaning Policy
- Paging works best when there are plenty of free page frames.
- This creates the need for a background process, the paging daemon, which periodically inspects the state of memory.
- When too few frames are free, it selects pages to evict using a replacement algorithm.

- Implementation: a two-handed circular list (clock):
  - the front hand is controlled by the paging daemon
  - the back hand is used for page replacement

Implementation Issues: Operating System Involvement with Paging
Four times when the OS is involved with paging:
1. Process creation: determine the program size, create the page table.
2. Process execution: the MMU is reset for the new process, the TLB is flushed.
3. Page fault time: determine the virtual address causing the fault, swap the target page out and the needed page in.
4. Process termination time: release the page table and the pages.

Page Fault Handling (1)
1. The hardware traps to the kernel, saving the program counter on the stack.
2. An assembly-code routine is started to save the general registers and other information.
3. The OS determines that a page fault has occurred and tries to discover which virtual page is needed.
4. The OS checks the validity of the address and seeks a page frame.
5. If the selected frame is dirty, it is written to disk; a context switch takes place, suspending the faulting process and letting another one run until the disk transfer completes.

Page Fault Handling (2)
- The OS schedules the new page to be brought in from disk.
- The page table is updated.
- The faulting instruction is backed up to the state it was in when it began.
- The faulting process is scheduled.
- The registers are restored.
- The program continues.

Backing Store
(a) Paging to a static swap area. (b) Backing up pages dynamically.

Swapping
- A special area on the disk: the swap area.
- Allocate space in advance: copy the entire process image to the swap area.
- Do not allocate in advance: allocate disk space for each page when it is swapped out, and deallocate it when the page is swapped back in.
- There must be a table per process telling, for each page on disk, where it is.

Segmentation (1)
- In a one-dimensional address space with growing tables, one table may bump into another.

Segmentation (2)
- Segmentation provides the machine with many completely independent address spaces.
- It allows each table to grow or shrink independently.

Segmentation (3): comparison of paging and segmentation.

Implementation of Pure Segmentation
(a)-(d) Development of checkerboarding. (e) Removal of the checkerboarding by compaction.

Segmentation with Paging: MULTICS (1)
- The descriptor segment points to the page tables.
- Segment descriptor.

Segmentation with Paging: MULTICS (2): a 34-bit MULTICS virtual address.

Segmentation with Paging: MULTICS (3): conversion of a 2-part MULTICS address into a main memory address.

Segmentation with Paging: MULTICS (4): a simplified version of the MULTICS TLB.

Segmentation with Paging: Pentium (1): a Pentium selector.

Segmentation with Paging: Pentium (2)
- Pentium code segment descriptor.
- Data segments differ slightly.

Segmentation with Paging: Pentium (3): conversion of a (selector, offset) pair to a linear address.

Segmentation with Paging: Pentium (4): mapping of a linear address onto a physical address.
