Chapter 3
Memory Management:
Virtual Memory
Understanding Operating Systems, Sixth Edition 2
Learning Objectives
After completing this chapter, you should be able to
describe:
• The basic functionality of the memory allocation methods
covered in this chapter: paged, demand paging,
segmented, and segmented/demand paged memory
allocation
• The difference between a first-in first-out page
replacement policy and a least-recently-used page
replacement policy
• The mechanics of paging and how a memory allocation
scheme determines which pages should be swapped out
of memory
• Virtual memory and cache memory
Demand Paging
• Demand Paging introduced the concept of loading only a
part of the program into memory for processing.
• When the job begins to run, its pages are brought into
memory only as they are needed.
Demand Paging
• Demand paging adds three new fields to each
page's entry in the Page Map Table (PMT):
• The first field (Status) tells the system where to
find each page. If it is already in memory, the
system will be spared the time required to bring
it from secondary storage. It is faster for the
operating system to scan a table located in main
memory than it is to retrieve a page from a disk.
• The second field (Modified), noting if the page
has been modified, is used to save time when
pages are removed from main memory and
returned to secondary storage. If the contents of
the page haven’t been modified then the page
doesn’t need to be rewritten to secondary
storage. The original, already there, is correct.
• The third field (Referenced), which indicates any
recent activity, is used to determine which pages
show the most processing activity, and which
are relatively inactive. This information is used
by several page-swapping policy schemes to
determine which pages should remain in main
memory and which should be swapped out
when the system needs to make room for other
pages being requested.
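These three fields can be sketched as flags on a table entry. This is a minimal illustration, not the textbook's implementation; the names PageEntry and swap_out are assumptions:

```python
# Illustrative sketch of a Page Map Table entry carrying the three
# demand-paging fields described above (Status, Modified, Referenced).

class PageEntry:
    def __init__(self):
        self.in_memory = False   # Status: is the page in main memory?
        self.modified = False    # Modified: changed since it was loaded?
        self.referenced = False  # Referenced: recent activity, used by swap policies

def swap_out(entry):
    """Remove a page from memory; return True only if it must be
    rewritten to secondary storage (i.e., it was modified)."""
    must_write = entry.modified  # an unmodified page needs no rewrite:
                                 # the original on disk is still correct
    entry.in_memory = False
    entry.modified = False
    entry.referenced = False
    return must_write

page = PageEntry()
page.in_memory = True
page.modified = True
print(swap_out(page))  # True: the modified page must be rewritten
```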
Swapping Process
• If a job (e.g., Job 4) requests that Page 3 be
brought into memory, the Page Fault Handler
first determines whether any empty page
frames are available, so the requested page
can be immediately copied from secondary
storage.
• If all page frames are busy, the Page Fault
Handler must decide which page will be
swapped out.
• The decision is dependent on the predefined
policy for page removal.
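The Page Fault Handler's decision can be sketched as follows. FIFO is used here only to make the predefined removal policy concrete; the frame count and function names are assumptions:

```python
from collections import deque

NUM_FRAMES = 3  # assumed number of page frames in main memory

def handle_page_fault(page, frames, fifo_queue):
    """Bring `page` into memory, swapping out the oldest resident page
    (FIFO policy) if all frames are busy. Returns the evicted page,
    or None if an empty frame was available."""
    evicted = None
    if len(frames) >= NUM_FRAMES:       # all page frames are busy
        evicted = fifo_queue.popleft()  # policy picks the victim
        frames.remove(evicted)
    frames.add(page)                    # copy page in from secondary storage
    fifo_queue.append(page)
    return evicted

frames, order = set(), deque()
for p in [1, 2, 3, 4]:                  # the fourth request forces a swap
    handle_page_fault(p, frames, order)
print(sorted(frames))  # [2, 3, 4] — page 1 was swapped out
```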
• When there is an excessive amount of page
swapping between main memory and
secondary storage, the operation becomes
inefficient. This phenomenon is called
thrashing.
• Thrashing is caused by under-allocation of the
minimum number of pages required by a
process, forcing it to continuously page fault.
• The system can detect thrashing by evaluating
the level of CPU utilization as compared to the
level of multiprogramming. It can be eliminated
by reducing the level of multiprogramming.
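How much swapping occurs depends on the removal policy. The sketch below compares FIFO and LRU fault counts on a small, assumed reference string (the specific counts apply to this example only, though they illustrate the general tendency of LRU to fault slightly less often):

```python
from collections import OrderedDict, deque

def fifo_faults(refs, num_frames):
    """Count page faults under first-in first-out replacement."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.remove(queue.popleft())  # evict the oldest page
            frames.add(page)
            queue.append(page)
    return faults

def lru_faults(refs, num_frames):
    """Count page faults under least-recently-used replacement."""
    frames, faults = OrderedDict(), 0  # insertion order tracks recency
    for page in refs:
        if page in frames:
            frames.move_to_end(page)            # mark most recently used
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.popitem(last=False)      # evict least recently used
            frames[page] = True
    return faults

refs = [1, 2, 3, 1, 4, 1, 5, 2, 1, 2]  # assumed page reference string
print(fifo_faults(refs, 3), lru_faults(refs, 3))  # 7 6
```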
Virtual Memory
• Virtual memory: a technique that allows programs to be executed even though
they are not stored entirely in main memory.
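Paging is what makes this possible: each virtual address splits into a page number and an offset, and only the page table needs to know where each page physically resides. A minimal sketch, assuming a 4 KB page size and a tiny hand-built page table:

```python
PAGE_SIZE = 4096                       # assumed page size (4 KB)
page_table = {0: 7, 1: 3, 2: None}     # virtual page -> physical frame
                                       # (None means the page is on disk)

def translate(virtual_addr):
    """Map a virtual address to a physical address, or raise a
    page fault if the page is not resident in main memory."""
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    frame = page_table.get(page)
    if frame is None:
        raise RuntimeError(f"page fault: page {page} not in memory")
    return frame * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> frame 3 -> 12292
```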
Cache Memory
• Cache memory is extremely fast memory
that is built into a computer’s central
processing unit (CPU), or located next to it
on a separate chip.
• Whenever data must be passed through the
system bus, the data transfer speed slows to
the motherboard's capability.
• Using the (b) architecture, the CPU can
process data much faster by avoiding the
bottleneck created by the system bus.
[Figure: (a) the traditional path used by early computers between main
memory and the CPU, and (b) the path used by modern computers to
connect the main memory and the CPU via cache memory.]
Cache Memory
• A typical microprocessor has two levels of caches:
• Level 1 (L1)
• Level 2 (L2)
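A toy model can show why the cache hierarchy pays off: each level is checked in turn, and a hit in a faster level avoids the trip to main memory. The latencies below are illustrative assumptions, not real hardware figures:

```python
# Toy two-level cache lookup. Latencies are assumed cycle counts
# chosen only to show the relative cost of each level.
L1_LATENCY, L2_LATENCY, RAM_LATENCY = 1, 10, 100

def access(addr, l1, l2):
    """Return the cost of reading `addr`, filling caches on a miss."""
    if addr in l1:
        return L1_LATENCY
    if addr in l2:
        l1.add(addr)                  # promote into the faster L1
        return L1_LATENCY + L2_LATENCY
    l1.add(addr)                      # fetched from main memory,
    l2.add(addr)                      # cached at both levels
    return L1_LATENCY + L2_LATENCY + RAM_LATENCY

l1, l2 = set(), set()
print(access(0x10, l1, l2))  # 111: a cold miss goes all the way to RAM
print(access(0x10, l1, l2))  # 1: the repeat access hits in L1
```

This ignores cache capacity and eviction, which a real CPU handles in hardware; the point is only the hit/miss cost structure.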
Summary
• Paged memory allocation
• Efficient use of memory
• Allocate jobs in noncontiguous memory locations
• Problems
• Increased overhead
• Internal fragmentation
• Demand paging scheme
• Eliminates physical memory size constraint
• LRU provides slightly better efficiency (compared to FIFO)
• Segmented memory allocation scheme
• Solves internal fragmentation problem
• Segmented/demand paged memory
• Problems solved
• Compaction, external fragmentation, secondary storage handling
• Virtual memory
• Programs execute if not stored entirely in memory
• Job’s size no longer restricted to main memory size
• Cache memory
• CPU can execute instructions faster