
Chapter 8

Virtual Memory
Ninth Edition, Global Edition
William Stallings

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


[Handwritten lecture notes (largely illegible OCR): page replacement, with FIFO traced on the reference string 1 2 3 4 1 2 5 2 3 4 5 using three and then four frames to illustrate Belady's Anomaly; the Optimal algorithm (a conceptual benchmark, not usable in practice); Least Recently Used traced on the same string; page buffering; allocation of frames (equal vs. proportional, fixed vs. variable); prepaging; thrashing (CPU utilization vs. degree of multiprogramming); working set and locality.]
Virtual memory: A storage allocation scheme in which secondary memory can be addressed as though it were part of main memory. The addresses a program may use to reference memory are distinguished from the addresses the memory system uses to identify physical storage sites, and program-generated addresses are translated automatically to the corresponding machine addresses. The size of virtual storage is limited by the addressing scheme of the computer system and by the amount of secondary memory available, not by the actual number of main storage locations.

Virtual address: The address assigned to a location in virtual memory to allow that location to be accessed as though it were part of main memory.

Virtual address space: The virtual storage assigned to a process.

Address space: The range of memory addresses available to a process.

Real address: The address of a storage location in main memory.

Table 8.1 Virtual Memory Terminology

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


Hardware and Control Structures
n Two characteristics fundamental to memory
management:
1) All memory references are logical addresses that are
dynamically translated into physical addresses at run time
2) A process may be broken up into a number of pieces that
don’t need to be contiguously located in main memory
during execution
n If these two characteristics are present, it is not
necessary that all of the pages or segments of a
process be in main memory during execution
Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.
n Operating system brings into main memory a few pieces of the
program
n Resident set
n Portion of process that is in main memory

n An interrupt is generated when an


address is needed that is not in main
memory
n Operating system places the process
in a blocking state

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved. Continued . . .


Execution of a Process
n Piece of process that contains the logical address is
brought into main memory
n Operating system issues a disk I/O Read request
n Another process is dispatched to run while the disk I/O takes
place
n An interrupt is issued when disk I/O is complete, which
causes the operating system to place the affected process in
the Ready state

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


Implications
n More processes may be maintained in main memory
n
Because only some of the pieces of any particular process are loaded, there is
room for more processes
n This leads to more efficient utilization of the processor because it is more
likely that at least one of the more numerous processes will be in a Ready
state at any particular time

n A process may be larger than all of main memory


n If the program being written is too large, the programmer must devise ways to
structure the program into pieces that can be loaded separately in some sort
of overlay strategy
n With virtual memory based on paging or segmentation, that job is left to the
OS and the hardware
n The OS automatically loads pieces of a process into main memory as
required

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


Real and Virtual Memory
n Real memory: main memory, the actual RAM
n Virtual memory: memory on disk
n Virtual memory allows for effective multiprogramming and relieves the user of the tight constraints of main memory
Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.
Paging (simple and virtual memory)
• Main memory partitioned into small fixed-size chunks called frames
• Program broken into pages by the compiler or memory management system
• Internal fragmentation within frames; no external fragmentation
• Operating system must maintain a page table for each process showing which frame each page occupies
• Operating system must maintain a free frame list
• Processor uses page number, offset to calculate absolute address

Segmentation (simple and virtual memory)
• Main memory not partitioned
• Program segments specified by the programmer to the compiler (i.e., the decision is made by the programmer)
• No internal fragmentation; external fragmentation
• Operating system must maintain a segment table for each process showing the load address and length of each segment
• Operating system must maintain a list of free holes in main memory
• Processor uses segment number, offset to calculate absolute address

Simple paging: all the pages of a process must be in main memory for the process to run, unless overlays are used
Virtual memory paging: not all pages of a process need be in main memory frames for the process to run; pages may be read in as needed, and reading a page into main memory may require writing a page out to disk
Simple segmentation: all the segments of a process must be in main memory for the process to run, unless overlays are used
Virtual memory segmentation: not all segments of a process need be in main memory for the process to run; segments may be read in as needed, and reading a segment into main memory may require writing one or more segments out to disk

Table 8.2 Characteristics of Paging and Segmentation

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


Thrashing

n A state in which the system spends most of its time swapping process pieces rather than executing instructions
n To avoid this, the operating system tries to guess, based on recent history, which pieces are least likely to be used in the near future

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


Principle of Locality
n Program and data references within a process tend to cluster

n Only a few pieces of a process will be needed over a short


period of time

n Therefore it is possible to make intelligent guesses about which


pieces will be needed in the future
n Avoids thrashing

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


For virtual memory to be practical and
effective:
• Hardware must support paging and
segmentation
• Operating system must include software for
managing the movement of pages and/or
segments between secondary memory and
main memory

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


Paging
n The term virtual memory is usually associated with systems that
employ paging
n Use of paging to achieve virtual memory was first reported for
the Atlas computer

n Each process has its own page table


n Each page table entry (PTE) contains the frame number of
the corresponding page in main memory
n A page table is also needed for a virtual memory
scheme based on paging

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.




[Diagram: a virtual address (page #, offset) is translated by indexing the process page table (located via the page table pointer register) with the page number to obtain a frame number, which is combined with the offset to form the physical address (frame #, offset) in main memory.]

Figure 8.2 Address Translation in a Paging System

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.
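
To make the translation of Figure 8.2 concrete, here is a minimal sketch in Python. It assumes 4-Kbyte pages (a 12-bit offset) and represents the page table as a plain list of frame numbers; the names and example values are illustrative, not taken from the slides.

    OFFSET_BITS = 12                        # 4-Kbyte pages -> 12-bit offset
    PAGE_SIZE = 1 << OFFSET_BITS

    def translate(virtual_addr, page_table):
        """Split a virtual address into (page #, offset) and rebuild the
        physical address from the frame number found in the page table."""
        page_num = virtual_addr >> OFFSET_BITS
        offset = virtual_addr & (PAGE_SIZE - 1)
        frame_num = page_table[page_num]    # the PTE holds the frame number
        return (frame_num << OFFSET_BITS) | offset

    # Example: page 2 of the process is resident in frame 7
    page_table = [5, 3, 7, 1]
    print(hex(translate(0x2ABC, page_table)))   # -> 0x7abc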


[Diagram: a 4-Kbyte root page table points to up to 4 Mbytes of user page tables, which together map a 4-Gbyte user address space.]

Figure 8.3 A Two-Level Hierarchical Page Table


Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.
[Diagram: the virtual address is split into 10 bits (root page table index), 10 bits (page table index), and a 12-bit offset; the root page table (1024 PTEs), located via the root page table pointer, selects a 4-Kbyte page table (1024 PTEs), whose entry supplies the frame number that is combined with the offset to address main memory.]

Figure 8.4 Address Translation in a Two-Level Paging System

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.
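
The two-level lookup of Figures 8.3 and 8.4 adds one indirection to the translation above. A sketch, assuming the 10/10/12-bit split shown there; the nested dictionaries standing in for the root and second-level page tables are illustrative only.

    OFFSET_BITS = 12
    INDEX_BITS = 10            # 10-bit index at each of the two levels

    def translate_two_level(vaddr, root_table):
        """root_table maps a root index to a second-level page table;
        each second-level table maps a page index to a frame number."""
        root_idx = (vaddr >> (OFFSET_BITS + INDEX_BITS)) & ((1 << INDEX_BITS) - 1)
        page_idx = (vaddr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
        offset = vaddr & ((1 << OFFSET_BITS) - 1)
        second_level = root_table[root_idx]      # first memory access
        frame = second_level[page_idx]           # second memory access
        return (frame << OFFSET_BITS) | offset

    root = {1: {3: 42}}                          # page (1, 3) resident in frame 42
    print(hex(translate_two_level((1 << 22) | (3 << 12) | 0x123, root)))  # -> 0x2a123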



n Page number portion of a virtual address is mapped into a hash
value
n Hash value points to inverted page table

n Fixed proportion of real memory is required for the tables


regardless of the number of processes or virtual pages supported

n Structure is called inverted because it indexes page table entries by


frame number rather than by virtual page number

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


[Diagram: the page number of the virtual address is hashed; the hash value indexes the inverted page table (one entry for each physical memory frame), whose entries hold page #, process ID, control bits, and a chain pointer; the index of the matching entry is the frame number, which is combined with the offset to form the real address.]

Figure 8.5 Inverted Page Table Structure


Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.
Inverted Page Table
Each entry in the page table includes:

• Page number: the page number portion of the virtual address
• Process identifier: the process that owns this page
• Control bits: includes flags and protection and locking information
• Chain pointer: the index value of the next entry in the chain

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.
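
A rough sketch of the hash-and-chain lookup described above, assuming one table entry per physical frame so that the index of the matching entry is itself the frame number. The hash function, chaining layout, and class names are illustrative and not tied to any particular implementation.

    class IPTEntry:
        def __init__(self, page_num, pid, chain=None):
            self.page_num = page_num   # page number portion of the virtual address
            self.pid = pid             # process that owns the page
            self.chain = chain         # index of the next entry with the same hash, or None

    def lookup(page_num, pid, inverted_table):
        """Return the frame number holding (pid, page_num), or None on a page fault."""
        idx = hash((pid, page_num)) % len(inverted_table)   # illustrative hash
        while idx is not None:
            entry = inverted_table[idx]
            if entry is None:
                break                   # empty slot: the page is not resident
            if entry.pid == pid and entry.page_num == page_num:
                return idx              # one entry per frame, so the index is the frame #
            idx = entry.chain           # follow the collision chain
        return None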
Translation Lookaside Buffer (TLB)
n Each virtual memory reference can cause two physical memory accesses:
n One to fetch the page table entry
n One to fetch the data
n To overcome the effect of doubling the memory access time, most virtual memory schemes make use of a special high-speed cache called a translation lookaside buffer (TLB)
n This cache functions in the same way as a memory cache and contains those page table entries that have been most recently used

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


[Flowchart: a virtual address (page #, offset) is first presented to the translation lookaside buffer; on a TLB hit the frame number is returned directly; on a TLB miss the page table in main memory is consulted; if the page is not present, a page fault causes the page to be loaded from secondary memory.]

Figure 8.6 Use of a Translation Lookaside Buffer

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.



Associative Mapping
n The TLB only contains some of the page table entries so we
cannot simply index into the TLB based on page number
n Each TLB entry must include the page number as well as the complete
page table entry

n The processor is equipped with hardware that allows it to


interrogate simultaneously a number of TLB entries to
determine if there is a match on page number

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.
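
The hit/miss flow of Figures 8.6 and 8.8 can be sketched as a small fully associative TLB sitting in front of the page table. The capacity and the LRU eviction policy chosen here are illustrative, not required by the slides.

    from collections import OrderedDict

    class TLB:
        """Tiny fully associative TLB with LRU eviction (illustrative size)."""
        def __init__(self, capacity=16):
            self.capacity = capacity
            self.entries = OrderedDict()         # page # -> frame #

        def lookup(self, page_num):
            if page_num in self.entries:         # TLB hit
                self.entries.move_to_end(page_num)
                return self.entries[page_num]
            return None                          # TLB miss

        def insert(self, page_num, frame_num):
            if len(self.entries) >= self.capacity:
                self.entries.popitem(last=False) # evict the least recently used entry
            self.entries[page_num] = frame_num

    def translate(vaddr, tlb, page_table, offset_bits=12):
        page_num, offset = vaddr >> offset_bits, vaddr & ((1 << offset_bits) - 1)
        frame = tlb.lookup(page_num)
        if frame is None:                        # miss: walk the page table, refill the TLB
            frame = page_table[page_num]
            tlb.insert(page_num, frame)
        return (frame << offset_bits) | offset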


[Diagram: (a) direct mapping, where page # 5 indexes the page table directly to obtain frame # 37; (b) associative mapping, where page # 5 is compared simultaneously against all TLB entries to obtain frame # 37; in both cases the frame number is combined with offset 502 to form the real address.]

Figure 8.8 Direct Versus Associative Lookup for Page Table Entries

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


TLB Operation
[Flowchart: the page number of the virtual address is first presented to the TLB; on a TLB miss the page table is consulted; the resulting real address (tag, remainder) is then presented to the memory cache, and on a cache miss the value is fetched from main memory.]

Figure 8.9 Translation Lookaside Buffer and Cache Operation


Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.
Page Size
n The smaller the page size, the lesser the amount of internal
fragmentation
n However, more pages are required per process
n More pages per process means larger page tables
n For large programs in a heavily multiprogrammed environment
some portion of the page tables of active processes must be in
virtual memory instead of main memory
n The physical characteristics of most secondary-memory devices
favor a larger page size for more efficient block transfer of data

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


[Graphs: (a) page fault rate as a function of page size, up to P = size of entire process; (b) page fault rate as a function of the number of page frames allocated, between W = working set size and N = total number of pages in process.]

Figure 8.10 Typical Paging Behavior of a Program


Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.
Table 8.3 Example Page Sizes

Computer                   Page Size
Atlas                      512 48-bit words
Honeywell-Multics          1024 36-bit words
IBM 370/XA and 370/ESA     4 Kbytes
VAX family                 512 bytes
IBM AS/400                 512 bytes
DEC Alpha                  8 Kbytes
MIPS                       4 Kbytes to 16 Mbytes
UltraSPARC                 8 Kbytes to 4 Mbytes
Pentium                    4 Kbytes or 4 Mbytes
IBM POWER                  4 Kbytes
Itanium                    4 Kbytes to 256 Mbytes

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


Page Size
n The design issue of page size is related to the size of physical main memory and program size
n Main memory is getting larger and the address space used by applications is also growing
n Contemporary programming techniques used in large programs tend to decrease the locality of references within a process
n This is most obvious on personal computers, where applications are becoming increasingly complex

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


Segmentation
n Segmentation allows the programmer to view memory as
consisting of multiple address spaces or segments

Advantages:

n Simplifies handling of growing data structures


n Allows programs to be altered and recompiled independently
n Lends itself to sharing data among processes
n Lends itself to protection

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


Segment Organization
n Each segment table entry contains the starting address of the
corresponding segment in main memory and the length of the
segment

n A bit is needed to determine if the segment is already in main


memory
n Another bit is needed to determine if the segment has been
modified since it was loaded in main memory

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


[Diagram: a virtual address (segment #, offset d) is translated by indexing the segment table (located via the segment table pointer register); the entry's base address is added to d to form the physical address, and the entry's length is used to bound-check d.]

Figure 8.11 Address Translation in a Segmentation System


Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.
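
A minimal sketch of the translation and bounds check of Figure 8.11, assuming the segment table maps a segment number to a (length, base) pair; the addresses are illustrative.

    def translate_segment(seg_num, offset, segment_table):
        """Check the offset against the segment length, then add the base."""
        length, base = segment_table[seg_num]
        if offset >= length:              # offset beyond the segment: addressing error
            raise MemoryError("segment offset out of bounds")
        return base + offset              # physical address = base + d

    segments = {0: (0x400, 0x20000), 1: (0x1000, 0x50000)}
    print(hex(translate_segment(1, 0x2A0, segments)))   # -> 0x502a0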
Combined Paging and Segmentation

n In a combined paging/segmentation system, a user's address space is broken up into a number of segments; each segment is broken up into a number of fixed-size pages which are equal in length to a main memory frame
n Segmentation is visible to the programmer
n Paging is transparent to the programmer

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


[Diagram: a virtual address (segment #, page #, offset) is translated by indexing the segment table to locate the per-segment page table, then indexing that page table with the page number to obtain a frame number, which is combined with the offset to address main memory.]

Figure 8.12 Address Translation in a Segmentation/Paging System

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


[Formats: (b) segmentation only, with segment table entry: P, M, other control bits, length, segment base. (c) combined segmentation and paging, with virtual address: segment number, page number, offset; segment table entry: control bits, length, segment base; page table entry: P, M, other control bits, frame number. P = present bit; M = modified bit.]

Figure 8.1 Typical Memory Management Formats


Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.
Protection and Sharing
n Segmentation lends itself to the implementation of protection
and sharing policies
n Each entry has a base address and length so inadvertent memory
access can be controlled

n Sharing can be achieved by a segment being referenced in the segment tables of more than one process

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


[Diagram: main memory laid out with the dispatcher (20K to 35K, no access allowed beyond it) and processes A (50K to 80K), B (90K to 140K), and C (140K to 190K); examples show a disallowed branch instruction into another process's segment, an allowed data reference, and a disallowed data reference.]

Figure 8.13 Protection Relationships Between Segments


Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.
Operating System Software

The design of the memory management


portion of an operating system depends on
three fundamental areas of choice:

• Whether or not to use virtual memory techniques


• The use of paging or segmentation or both
• The algorithms employed for various aspects of
memory management

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


Fetch Policy: demand paging; prepaging
Placement Policy
Replacement Policy: basic algorithms (Optimal, Least recently used (LRU), First-in-first-out (FIFO), Clock); page buffering
Resident Set Management: resident set size (fixed, variable); replacement scope (global, local)
Cleaning Policy: demand; precleaning
Load Control: degree of multiprogramming

Table 8.4 Operating System Policies for Virtual Memory

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


Fetch Policy
n Determines when a page should be brought into memory
n Two main types:
• Demand paging
• Prepaging

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


Demand Paging
n Demand Paging
n Only brings pages into main memory when a reference is made
to a location on the page
n Many page faults when process is first started
n Principle of locality suggests that as more and more pages are
brought in, most future references will be to pages that have
recently been brought in, and page faults should drop to a very
low level

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


Prepaging
n Prepaging
n Pages other than the one demanded by a page fault are brought
in
n Exploits the characteristics of most secondary memory devices
n If pages of a process are stored contiguously in secondary
memory it is more efficient to bring in a number of pages at
one time
n Ineffective if extra pages are not referenced
n Should not be confused with “swapping”

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


Placement Policy
n Determines where in real memory a process piece is
to reside
n Important design issue in a segmentation system
n For pure paging or combined paging with segmentation, placement is
irrelevant because the address translation hardware can perform its
functions with equal efficiency for any page-frame combination
n For NUMA systems an automatic placement
strategy is desirable

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


Replacement Policy
n Deals with the selection of a page in main memory
to be replaced when a new page must be brought in
n Objective is that the page that is removed be the page
least likely to be referenced in the near future

n The more elaborate the replacement policy the


greater the hardware and software overhead to
implement it

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


§ When a frame is locked the page currently stored in that
frame may not be replaced
§ Kernel of the OS as well as key control structures are held
in locked frames
§ I/O buffers and time-critical areas may be locked into
main memory frames
§ Locking is achieved by associating a lock bit with each
frame

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


Algorithms used for
the selection of a
page to replace:
• Optimal
• Least recently used (LRU)
• First-in-first-out (FIFO)
• Clock

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


Page address
stream 2 3 2 1 5 2 4 5 3 2 5 2

2 2 2 2 2 2 4 4 4 2 2 2
OPT 3 3 3 3 3 3 3 3 3 3 3
1 5 5 5 5 5 5 5 5
F F F

2 2 2 2 2 2 2 2 3 3 3 3
LRU 3 3 3 5 5 5 5 5 5 5 5
1 1 1 4 4 4 2 2 2
F F F F

2 2 2 2 5 5 5 5 3 3 3 3
FIFO 3 3 3 3 2 2 2 2 2 5 5
1 1 1 4 4 4 4 4 2
F F F F F F

2* 2* 2* 2* 5* 5* 5* 5* 3* 3* 3* 3*
CLOCK 3* 3* 3* 3 2* 2* 2* 2 2* 2 2*
1* 1 1 4* 4* 4 4 5* 5*
F F F F F

F = page fault occurring after the frame allocation is initially filled

Figure 8.14 Behavior of Four Page-Replacement Algorithms


Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.
Least Recently Used
(LRU)
n Replaces the page that has not been referenced for
the longest time
n By the principle of locality, this should be the page
least likely to be referenced in the near future
n Difficult to implement
n One approach is to tag each page with the time of last
reference
n This requires a great deal of overhead

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


First-in-First-out (FIFO)
n Treats page frames allocated to a process as a circular
buffer
n Pages are removed in round-robin style
§ Simple replacement policy to implement

n Page that has been in memory the


longest is replaced

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


Clock Policy
n Requires the association of an additional bit with each
frame
n Referred to as the use bit

n When a page is first loaded in memory or referenced, the


use bit is set to 1
n The set of frames is considered to be a circular buffer
n Any frame with a use bit of 1 is passed over
by the algorithm
n Page frames visualized as laid out in a circle
Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.
[Diagrams: (a) state of buffer just prior to a page replacement, with the next-frame pointer at page 45 (use = 1) and later frames holding page 191 (use = 1) and page 556 (use = 0); (b) state of buffer just after the next page replacement, where pages 45 and 191 have had their use bits cleared as the pointer passed, page 556 has been replaced by page 727 (use = 1), and the pointer has advanced.]

Figure 8.15 Example of Clock Policy Operation

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.
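
The four basic algorithms of the preceding slides can be compared with a small simulation on the reference string of Figure 8.14. Note that this sketch counts every miss, including the three that fill the initially empty frames, whereas Figure 8.14 marks faults only after the allocation is filled; OPT's tie-breaking between pages that are never referenced again is arbitrary.

    from collections import deque

    def fifo_faults(refs, nframes):
        frames, queue, faults = set(), deque(), 0
        for p in refs:
            if p not in frames:
                faults += 1
                if len(frames) == nframes:
                    frames.remove(queue.popleft())  # evict the oldest resident page
                frames.add(p)
                queue.append(p)
        return faults

    def lru_faults(refs, nframes):
        frames, faults = [], 0                      # ordered from LRU to MRU
        for p in refs:
            if p in frames:
                frames.remove(p)
            else:
                faults += 1
                if len(frames) == nframes:
                    frames.pop(0)                   # evict the least recently used page
            frames.append(p)
        return faults

    def opt_faults(refs, nframes):
        frames, faults = set(), 0
        for i, p in enumerate(refs):
            if p in frames:
                continue
            faults += 1
            if len(frames) == nframes:
                def next_use(q):                    # distance to the next reference of q
                    future = refs[i + 1:]
                    return future.index(q) if q in future else len(future)
                frames.remove(max(frames, key=next_use))  # evict the furthest-used page
            frames.add(p)
        return faults

    def clock_faults(refs, nframes):
        frames = [None] * nframes                   # circular buffer of [page, use bit]
        hand, faults = 0, 0
        for p in refs:
            slot = next((s for s in frames if s and s[0] == p), None)
            if slot:
                slot[1] = 1                         # hit: set the use bit
                continue
            faults += 1
            while frames[hand] and frames[hand][1] == 1:
                frames[hand][1] = 0                 # pass over frames with use bit 1
                hand = (hand + 1) % nframes
            frames[hand] = [p, 1]                   # replace, set use bit, advance hand
            hand = (hand + 1) % nframes
        return faults

    stream = [2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2]   # reference string of Figure 8.14
    for name, alg in (("OPT", opt_faults), ("LRU", lru_faults),
                      ("FIFO", fifo_faults), ("CLOCK", clock_faults)):
        print(name, alg(stream, 3))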


[Graph: page faults per 1000 references (0 to 40) versus number of frames allocated (6 to 14) for FIFO, CLOCK, LRU, and OPT under fixed-allocation, local page replacement.]

Figure 8.16 Comparison of Fixed-Allocation, Local Page Replacement Algorithms

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


Page Buffering
n Improves paging performance and allows the use of a simpler page replacement policy
n A replaced page is not lost, but rather assigned to one of two lists:
• Free page list: list of page frames available for reading in pages
• Modified page list: pages are written out in clusters

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


Replacement Policy and Cache Size
n With large caches, replacement of pages can have a
performance impact
n If the page frame selected for replacement is in the cache,
that cache block is lost as well as the page that it holds
n In systems using page buffering, cache performance can be
improved with a policy for page placement in the page buffer
n Most operating systems place pages by selecting an arbitrary
page frame from the page buffer

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


n The OS must decide how many pages to bring into
main memory
n The smaller the amount of memory allocated to each
process, the more processes can reside in memory
n Small number of pages loaded increases page faults
n Beyond a certain size, further allocation of pages will not affect the page fault rate

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


Resident Set Size
Fixed-allocation Variable-allocation
n Gives a process a fixed n Allows the number of
number of frames in main page frames allocated to a
memory within which to process to be varied over
execute the lifetime of the process
n When a page fault occurs,
one of the pages of that
process must be replaced

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


n The scope of a replacement strategy can be categorized as
global or local
n Both types are activated by a page fault when there are no
free page frames

Local
• Chooses only among the resident pages of the process that generated
the page fault

Global
• Considers all unlocked pages in main memory

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


Fixed Allocation, Local Replacement:
• Number of frames allocated to a process is fixed.
• Page to be replaced is chosen from among the frames allocated to that process.

Fixed Allocation, Global Replacement:
• Not possible.

Variable Allocation, Local Replacement:
• The number of frames allocated to a process may be changed from time to time to maintain the working set of the process.
• Page to be replaced is chosen from among the frames allocated to that process.

Variable Allocation, Global Replacement:
• Page to be replaced is chosen from all available frames in main memory; this causes the size of the resident set of processes to vary.

Table 8.5 Resident Set Management

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


Fixed Allocation, Local Scope
n Necessary to decide ahead of time the amount of
allocation to give a process
n If allocation is too small, there will be a high page fault
rate

If allocation is too large, there will be too few programs in main memory:
• Increased processor idle time
• Increased time spent in swapping

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


Variable Allocation
Global Scope
n Easiest to implement
n Adopted in a number of operating systems

n OS maintains a list of free frames


n Free frame is added to resident set of process when a page fault
occurs

n If no frames are available the OS must choose a page currently in


memory

n One way to counter potential problems is to use page buffering

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


n When a new process is loaded into main memory, allocate to it a
certain number of page frames as its resident set
n When a page fault occurs, select the page to replace from among
the resident set of the process that suffers the fault

n Reevaluate the allocation provided to the process and increase or


decrease it to improve overall performance

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


Variable Allocation
Local Scope
n Decision to increase or decrease a resident set size is based
on the assessment of the likely future demands of active
processes

Key elements:

• Criteria used to determine


resident set size
• The timing of changes

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


[Table: working set of a process for window sizes Δ = 2, 3, 4, 5 over the page reference sequence 24 15 18 23 24 17 18 24 18 17 17 15 24 17 24 18; larger windows yield larger, more slowly changing working sets.]

Figure 8.17 Working Set of Process as Defined by Window Size


Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.
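
The working sets of Figure 8.17 can be reproduced with a one-line window computation. The page reference sequence below is the one shown in the figure; the function name is illustrative.

    def working_set(refs, t, delta):
        """Pages referenced in the window of the last delta references ending at time t."""
        return set(refs[max(0, t - delta + 1):t + 1])

    refs = [24, 15, 18, 23, 24, 17, 18, 24, 18, 17, 17, 15, 24, 17, 24, 18]
    for t in range(len(refs)):
        print(t, sorted(working_set(refs, t, 3)))   # working set with window size delta = 3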
[Graph: working set size over time, alternating between transient periods and stable periods.]

Figure 8.18 Typical Graph of Working Set Size [MAEK87]


Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.
Page Fault Frequency
(PFF)
n Requires a use bit to be associated with each page in memory

n Bit is set to 1 when that page is accessed


n When a page fault occurs, the OS notes the virtual time since the
last page fault for that process

n Does not perform well during the transient periods when there is
a shift to a new locality

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.
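
A hedged sketch of the PFF idea just described: on a page fault, the resident set grows if the virtual time since the last fault is below a threshold F, and otherwise every page whose use bit is clear is discarded and the remaining use bits are reset. The threshold value and class layout are illustrative.

    F_THRESHOLD = 100                  # illustrative bound on inter-fault virtual time

    class PFFResidentSet:
        def __init__(self):
            self.pages = {}            # page # -> use bit
            self.last_fault_time = 0

        def reference(self, page, now):
            if page in self.pages:
                self.pages[page] = 1               # set the use bit on every access
                return False                       # no fault
            # page fault: decide whether to grow or shrink the resident set
            if now - self.last_fault_time < F_THRESHOLD:
                self.pages[page] = 1               # faults are frequent: grow the set
            else:
                # faults are rare: keep only pages used since the last fault, reset use bits
                self.pages = {p: 0 for p, u in self.pages.items() if u == 1}
                self.pages[page] = 1
            self.last_fault_time = now
            return True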


n Evaluates the working set of a process at sampling instances based on elapsed virtual time
n Driven by three parameters:
• The minimum duration of the sampling interval
• The maximum duration of the sampling interval
• The number of page faults that are allowed to occur between sampling instances

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


Cleaning Policy
n Concerned with determining when a modified page should be
written out to secondary memory

Demand Cleaning
A page is written out to secondary memory only when it has been selected for
replacement

Precleaning
Allows the writing of pages in batches

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


Load Control
n Determines the number of processes that will be resident in
main memory
n Multiprogramming level

n Critical in effective memory management


n Too few processes, many occasions when all processes will
be blocked and much time will be spent in swapping
n Too many processes will lead to thrashing

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


[Graph: processor utilization versus multiprogramming level; utilization rises with the multiprogramming level and then falls off as thrashing sets in.]

Figure 8.19 Multiprogramming Effects


Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.
n If the degree of multiprogramming is to be reduced, one or more
of the currently resident processes must be swapped out

Six possibilities exist:


• Lowest-priority process
• Faulting process
• Last process activated
• Process with the smallest resident set
• Largest process
• Process with the largest remaining execution window

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


UNIX
n Intended to be machine independent so its memory
management schemes will vary
n Early UNIX: variable partitioning with no virtual memory scheme
n Current implementations of UNIX and Solaris make use of paged
virtual memory
SVR4 and Solaris use
two separate schemes:
• Paging system
• Kernel memory allocator
Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.
Paging System
n Provides a virtual memory capability that allocates page frames in main memory to processes
n Allocates page frames to disk block buffers

Kernel Memory Allocator
n Allocates memory for the kernel

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


(a) Page table entry: page frame number | age | copy on write | modify | reference | valid | protect

(b) Disk block descriptor: swap device number | device block number | type of storage

(c) Page frame data table entry: page state | reference count | logical device number | block number | pfdata pointer

(d) Swap-use table entry: reference count | page/storage unit number

Figure 8.20 UNIX SVR4 Memory Management Formats


Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.
Table 8.6 UNIX SVR4 Memory Management Parameters (page 1 of 2)

Page Table Entry

Page frame number: Refers to frame in real memory.

Age: Indicates how long the page has been in memory without being referenced. The length and contents of this field are processor dependent.

Copy on write: Set when more than one process shares a page. If one of the processes writes into the page, a separate copy of the page must first be made for all other processes that share the page. This feature allows the copy operation to be deferred until necessary and avoided in cases where it turns out not to be necessary.

Modify: Indicates page has been modified.

Reference: Indicates page has been referenced. This bit is set to 0 when the page is first loaded and may be periodically reset by the page replacement algorithm.

Valid: Indicates page is in main memory.

Protect: Indicates whether write operation is allowed.

Disk Block Descriptor

Swap device number: Logical device number of the secondary device that holds the corresponding page. This allows more than one device to be used for swapping.

Device block number: Block location of page on swap device.

Type of storage: Storage may be swap unit or executable file. In the latter case, there is an indication as to whether the virtual memory to be allocated should be cleared first.

(Table can be found on page 381 in the textbook)

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


Table 8.6 UNIX SVR4 Memory Management Parameters (page 2 of 2)

Page Frame Data Table Entry

Page state: Indicates whether this frame is available or has an associated page. In the latter case, the status of the page is specified: on swap device, in executable file, or DMA in progress.

Reference count: Number of processes that reference the page.

Logical device: Logical device that contains a copy of the page.

Block number: Block location of the page copy on the logical device.

Pfdata pointer: Pointer to other pfdata table entries on a list of free pages and on a hash queue of pages.

Swap-Use Table Entry

Reference count: Number of page table entries that point to a page on the swap device.

Page/storage unit number: Page identifier on storage unit.

(Table can be found on page 381 in the textbook)

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


n The page frame data table is used for page
replacement
n Pointers are used to create lists within the table
n All available frames are linked together in a list of free
frames available for bringing in pages
n When the number of available frames drops below a
certain threshold, the kernel will steal a number of frames
to compensate

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


[Diagram: two hands sweep the circular page list from beginning to end, a fronthand and a backhand separated by a fixed handspread.]

Figure 8.21 Two-Handed Clock Page-Replacement Algorithm


Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.
n The kernel generates and destroys small tables and buffers
frequently during the course of execution, each of which requires
dynamic memory allocation.

n Most of these blocks are significantly smaller than typical pages


(therefore paging would be inefficient)
n Allocations and free operations must be made as fast as possible

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


n Technique adopted for SVR4
n UNIX often exhibits steady-state behavior in kernel
memory demand
n i.e. the amount of demand for blocks of a particular size
varies slowly in time

n Defers coalescing until it seems likely that it is


needed, and then coalesces as many blocks as
possible

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


Initial value of Di is 0
After an operation, the value of Di is updated as follows

(I) if the next operation is a block allocate request:


if there is any free block, select one to allocate
if the selected block is locally free
then Di := Di + 2
else Di := Di + 1
otherwise
first get two blocks by splitting a larger one into two (recursive operation)
allocate one and mark the other locally free
Di remains unchanged (but D may change for other block sizes because of the
recursive call)

(II) if the next operation is a block free request


Case Di ≥ 2
mark it locally free and free it locally
Di := Di - 2
Case Di = 1
mark it globally free and free it globally; coalesce if possible
Di := 0
Case Di = 0
mark it globally free and free it globally; coalesce if possible
select one locally free block of size 2i and free it globally; coalesce if possible
Di := 0

Figure 8.22 Lazy Buddy System Algorithm


Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.
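
A sketch of the D_i bookkeeping of Figure 8.22 for one block size, with the actual buddy splitting and coalescing left as placeholder hooks; this follows the figure's rules but is not SVR4's implementation.

    class LazyBuddyLevel:
        """Accounting for blocks of one size 2^i under the lazy buddy rules."""
        def __init__(self):
            self.locally_free = []     # blocks freed lazily, not yet coalescible
            self.globally_free = []    # blocks marked free in the buddy structure
            self.D = 0                 # the D_i counter of Figure 8.22

        def allocate(self):
            if self.locally_free:                          # a locally free block exists
                block = self.locally_free.pop()
                self.D += 2
            elif self.globally_free:                       # a globally free block exists
                block = self.globally_free.pop()
                self.D += 1
            else:                                          # split a larger block into two buddies
                block, spare = self._split_larger_block()
                self.locally_free.append(spare)            # D_i unchanged
            return block

        def free(self, block):
            if self.D >= 2:
                self.locally_free.append(block)            # free lazily (locally)
                self.D -= 2
            elif self.D == 1:
                self._free_globally(block)                 # free globally; coalesce if possible
                self.D = 0
            else:  # D == 0
                self._free_globally(block)
                if self.locally_free:                      # also push one locally free block out
                    self._free_globally(self.locally_free.pop())
                self.D = 0

        def _split_larger_block(self):
            return object(), object()                      # placeholder for the recursive split

        def _free_globally(self, block):
            self.globally_free.append(block)               # placeholder: no real coalescing here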
Linux
Memory Management
n Shares many characteristics with UNIX
n Is quite complex

• Process virtual
memory
Two main • Kernel memory
aspects allocation

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


n Three-level page table structure:
• Page directory: a process has a single page directory; each entry points to one page of the page middle directory; the page directory must be in main memory for an active process
• Page middle directory: may span multiple pages; each entry points to one page in the page table
• Page table: may also span multiple pages; each entry refers to one virtual page of the process

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


[Diagram: a Linux virtual address (global directory, middle directory, page table, offset fields) is translated by walking the page directory (located via the cr3 register), the page middle directory, and the page table to reach the page frame in physical memory.]

Figure 8.23 Address Translation in Linux Virtual Memory Scheme


Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.
n Based on the clock algorithm
n The use bit is replaced with an 8-bit age variable
n Incremented each time the page is accessed

n Periodically decrements the age bits


n A page with an age of 0 is an “old” page that has not been
referenced in some time and is the best candidate for
replacement

n A form of least frequently used policy

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.
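
A tiny sketch of the age mechanism just described, assuming an 8-bit saturating age per page; the increment and decrement amounts and the scan interval are illustrative, not the kernel's actual constants.

    AGE_MAX = 255                          # 8-bit age variable

    class AgedPage:
        def __init__(self):
            self.age = 0

        def touch(self):                   # called whenever the page is accessed
            self.age = min(AGE_MAX, self.age + 1)

    def periodic_scan(pages, decay=1):
        """Periodically decrement ages; pages that reach 0 are 'old' and are the
        best replacement candidates (a least-frequently-used flavor)."""
        for p in pages:
            p.age = max(0, p.age - decay)
        return [p for p in pages if p.age == 0]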


[State diagram: pages move between the inactive and active lists according to the PG_active and PG_referenced flags; a use moves a page toward the active, referenced state, while timeouts clear the referenced flag and eventually return the page to the inactive list.]

Figure 8.24 Linux Page Reclaiming


Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.
n Kernel memory capability manages physical main memory page frames
n Primary function is to allocate and deallocate frames for particular
uses
Possible owners of a frame include:
• User-space processes
• Dynamically allocated kernel data
• Static kernel code
• Page cache

n A buddy algorithm is used so that memory for the kernel can be


allocated and deallocated in units of one or more pages
n Page allocator alone would be inefficient because the kernel requires
small short-term memory chunks in odd sizes
n Slab allocation
n Used by Linux to accommodate small chunks
Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.
Windows
Memory Management
n Virtual memory manager controls how memory is allocated
and how paging is performed
n Designed to operate over a variety of platforms
n Uses page sizes ranging from 4 Kbytes to 64 Kbytes

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


Windows Virtual Address Map
n On 32 bit platforms each user process sees a separate 32 bit
address space allowing 4 GB of virtual memory per process
§ By default half is reserved for the OS
n Large memory intensive applications run more effectively
using 64-bit Windows
n Most modern PCs use the AMD64 processor architecture
which is capable of running as either a 32-bit or 64-bit
system

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


[Layout of the default 32-bit virtual address space, from address 0 to 0xFFFFFFFF: a 64-Kbyte region for NULL-pointer assignments (inaccessible); a 2-Gbyte user address space (unreserved, usable); a 64-Kbyte region for bad pointer assignments (inaccessible); a 2-Gbyte region for the operating system (inaccessible).]

Figure 8.25 Windows Default 32-bit Virtual Address Space


Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.
Windows Paging
n On creation, a process can make use of the entire user space of
almost 2 GB
n This space is divided into fixed-size pages managed in
contiguous regions allocated on 64 KB boundaries

n Regions may be in one of three states:

Available Reserved Committed

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


n Windows uses variable allocation, local scope
n When activated, a process is assigned a data
structure to manage its working set
n Working sets of active processes are
adjusted depending on the availability
of main memory

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


Android Memory
Management
n Android includes a number of extensions to the normal Linux kernel memory management
facility

n These include:
n ASHMem

n This feature provides anonymous shared memory, which abstracts memory as file
descriptors
n A file descriptor can be passed to another process to share memory

n Pmem

n This feature allocates virtual memory so that it is physically contiguous


n This feature is useful for hardware that does not support virtual memory

n Low Memory Killer


n This feature enables the system to notify an app or apps that they need to free up
memory
n If an app does not cooperate, it is terminated

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.


Summary
n Hardware and control structures
   n Locality and virtual memory
   n Paging
   n Segmentation
   n Combined paging and segmentation
   n Protection and sharing
n OS software
   n Fetch policy
   n Placement policy
   n Replacement policy
   n Resident set management
   n Cleaning policy
   n Load control
n UNIX and Solaris memory management
   n Paging system
   n Kernel memory allocator
n Linux memory management
   n Linux virtual memory
   n Kernel memory allocation
n Windows memory management
   n Windows virtual address map
   n Windows paging
   n Windows swapping
n Android memory management

Copyright © 2018 Pearson Education, Ltd. All Rights Reserved.
