
Information extracted from http://iakovlev.org/index.html?p=945
CHAPTER 3: PROTECTED MODE MEMORY MANAGEMENT

This chapter describes the Intel Architecture's protected mode memory
management facilities, including the physical memory requirements, the
segmentation mechanism, and the paging mechanism. Refer to Chapter 4,
Protection, for a description of the processor's protection mechanism.
Refer to Chapter 16, 8086 Emulation, for a description of memory
addressing protection in real-address and virtual-8086 modes.
3.1. MEMORY MANAGEMENT OVERVIEW

The memory management facilities of the Intel Architecture are divided
into two parts: segmentation and paging. Segmentation provides a
mechanism for isolating individual code, data, and stack modules so that
multiple programs (or tasks) can run on the same processor without
interfering with one another. Paging provides a mechanism for
implementing a conventional demand-paged, virtual-memory system where
sections of a program's execution environment are mapped into physical
memory as needed. Paging can also be used to provide isolation between
multiple tasks. When operating in protected mode, some form of
segmentation must be used. There is no mode bit to disable segmentation.
The use of paging, however, is optional.

These two mechanisms (segmentation and paging) can be configured to
support simple single-program (or single-task) systems, multitasking
systems, or multiple-processor systems that use shared memory.
As shown in Figure 3-1, segmentation provides a mechanism for dividing
the processor's addressable memory space (called the linear address
space) into smaller protected address spaces called segments. Segments
can be used to hold the code, data, and stack for a program or to hold
system data structures (such as a TSS or LDT). If more than one program
(or task) is running on a processor, each program can be assigned its
own set of segments. The processor then enforces the boundaries between
these segments and ensures that one program does not interfere with the
execution of another program by writing into the other program's
segments. The segmentation mechanism also allows typing of segments so
that the operations that may be performed on a particular type of
segment can be restricted.
All of the segments within a system are contained in the processor's
linear address space. To locate a byte in a particular segment, a
logical address (sometimes called a far pointer) must be provided. A
logical address consists of a segment selector and an offset. The
segment selector is a unique identifier for a segment. Among other
things, it provides an offset into a descriptor table (such as the
global descriptor table, GDT) to a data structure called a segment
descriptor. Each segment has a segment descriptor, which specifies the
size of the segment, the access rights and privilege level for the
segment, the segment type, and the location of the first byte of the
segment in the linear address space (called the base address of the
segment). The offset part of the logical address is added to the base
address of the segment to locate a byte within the segment. The base
address plus the offset thus forms a linear address in the processor's
linear address space.
Figure 3-1
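A minimal sketch of the logical-to-linear translation just described, in C. The descriptor table is reduced to a plain array, the selector is treated as a bare table index, and the limit check stands in for the full protection checks; the structure and names here are illustrative, not the processor's actual data layout.

#include <stdint.h>
#include <stdio.h>

/* Illustrative segment descriptor: only the fields discussed above. */
struct seg_descriptor {
    uint32_t base;   /* linear address of the first byte of the segment */
    uint32_t limit;  /* size of the segment in bytes                    */
};

/* A toy "GDT": entry 1 is a 64 KB segment based at linear 0x00100000. */
static struct seg_descriptor gdt[] = {
    { 0x00000000, 0x00000000 },   /* null descriptor */
    { 0x00100000, 0x0000FFFF },
};

/* Translate a logical address (selector:offset) into a linear address. */
static uint32_t logical_to_linear(uint16_t selector, uint32_t offset)
{
    const struct seg_descriptor *d = &gdt[selector];
    if (offset > d->limit) {
        fprintf(stderr, "offset outside segment limit\n");
        return 0;
    }
    return d->base + offset;      /* base address + offset = linear address */
}

int main(void)
{
    /* 0x00100000 + 0x1234 = 0x00101234 */
    printf("linear = 0x%08X\n", (unsigned)logical_to_linear(1, 0x1234));
    return 0;
}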

If paging is not used, the linear address space of the processor is
mapped directly into the physical address space of the processor. The
physical address space is defined as the range of addresses that the
processor can generate on its address bus.
Because multitasking computing systems commonly define a linear
address space much larger than it is economically feasible to
contain all at once in physical memory, some method of
"virtualizing" the linear address space is needed. This virtualization
of the linear address space is handled through the processor's
paging mechanism.
Paging supports a "virtual memory" environment where a large
linear address space is simulated with a small amount of physical
memory (RAM and ROM) and some disk storage. When using
paging, each segment is divided into pages (ordinarily 4 KBytes
each in size), which are stored either in physical memory or on the
disk. The operating system or executive maintains a page directory
and a set of page tables to keep track of the pages. When a
program (or task) attempts to access an address location in the
linear address space, the processor uses the page directory and page
tables to translate the linear address into a physical address and then
performs the requested operation (read or write) on the
memory location. If the page being accessed is not currently in
physical memory, the processor interrupts execution of the program
(by generating a page-fault exception). The operating system or
executive then reads the page into physical memory from the disk
and continues executing the program.
When paging is implemented properly in the operating system or
executive, the swapping of pages between physical memory and
the disk is transparent to the correct execution of a program. Even
programs written for 16-bit Intel Architecture processors can be
paged (transparently) when they are run in virtual-8086 mode.

http://faculty.cs.niu.edu/~berezin/463/lec/06memory/vmlayout.html

VM FIFO example
VM LRU example
VM Table layout
Table size determined by:
Number of virtual pages = round up ( VM range / page size )
Size of frame index = n bits, where 2^n = Real range / page size
Size of an entry = n-bit frame index + dirty bit + present flag
+ other overhead such as a timestamp or counter
Size of table = Number of entries * Size of entry

VM Table layout example

16 Meg virtual memory space. 2^24
1 Meg physical memory. 2^20
16 K page size. 2^14

2^24 / 2^14 = 2^10 = 1024 virtual pages (1024 slots in table).
2^20 / 2^14 = 2^6 = 64 page frames (6 bit frame pointer).
Table size = 1024 slots * ( 6 bit pointer + 1 bit P/A flag + 1 bit dirty flag )
plus additional storage for a time stamp or other aging data.
The table is kept in unswappable memory or a separate memory area, such as
a custom cache for the virtual memory logic.
Given a memory access, its address is parsed into a virtual memory
page pointer and an index or offset into the memory page.
The top 10 bits of an address become the virtual page pointer.
The lower 14 bits become the offset into the physical (and virtual) page frame.
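A small arithmetic sketch of the sizing rules above, in C. The helper names (num_virtual_pages, frame_index_bits, table_bits) and the decision to keep entry sizes in bits rather than round up to bytes are my own; the numbers plugged in at the end are the 16 MiB / 1 MiB / 16 KiB example from these notes.

#include <stdint.h>
#include <stdio.h>

/* Number of page-table entries = virtual range / page size. */
static uint64_t num_virtual_pages(unsigned vm_bits, unsigned page_bits)
{
    return 1ull << (vm_bits - page_bits);
}

/* Width of the frame index: n bits where 2^n = real range / page size. */
static unsigned frame_index_bits(unsigned phys_bits, unsigned page_bits)
{
    return phys_bits - page_bits;
}

/* Total table size in bits = entries * (frame index + flag/overhead bits). */
static uint64_t table_bits(unsigned vm_bits, unsigned phys_bits,
                           unsigned page_bits, unsigned overhead_bits)
{
    uint64_t entries = num_virtual_pages(vm_bits, page_bits);
    unsigned  entry  = frame_index_bits(phys_bits, page_bits) + overhead_bits;
    return entries * entry;
}

int main(void)
{
    /* The example above: 2^24 virtual, 2^20 physical, 2^14 pages,
     * plus 1 present/absent bit and 1 dirty bit per entry.         */
    printf("virtual pages: %llu\n",
           (unsigned long long)num_virtual_pages(24, 14));        /* 1024      */
    printf("frame index:   %u bits\n", frame_index_bits(20, 14)); /* 6         */
    printf("table size:    %llu bits\n",
           (unsigned long long)table_bits(24, 20, 14, 2));        /* 8192 bits */
    return 0;
}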
VM Table use
Given a memory access:
Parse the address into the VM page index
and the offset into the memory page.
Check the VM table for the presence of a frame.
If present
    Combine the stored frame id and offset to form the actual memory address.
Else (no frame assigned)
    Page fault:
    Pause the program.
    Select a frame to use (LRU, LFU, or other policy).
    If the frame is being used
        If the dirty bit flags a change
            Store the current frame contents on disk (swap out).
        Endif
    Endif
    Swap the page in from disk.
Endif
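The outline above, turned into a runnable sketch in C, using the 1024-page / 64-frame example from the table layout. Disk traffic is reduced to stub print functions, and the frame-selection step is a trivial round-robin stand-in for LRU/LFU, so the names and the policy here are illustrative only.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE_BITS   14                     /* 16 KiB pages          */
#define NUM_PAGES   1024                   /* 16 MiB virtual space  */
#define NUM_FRAMES  64                     /* 1 MiB physical memory */

struct vm_entry {
    uint8_t frame;      /* 6-bit frame index                        */
    uint8_t present;    /* P/A flag                                 */
    uint8_t dirty;      /* dirty flag                               */
};

static struct vm_entry vm_table[NUM_PAGES];
static int frame_owner[NUM_FRAMES];   /* which virtual page holds each frame, -1 if free */
static int next_victim;               /* round-robin stand-in for LRU/LFU selection      */

/* Stubs standing in for real disk traffic. */
static void swap_out(int page) { printf("  swap out page %d\n", page); }
static void swap_in(int page)  { printf("  swap in  page %d\n", page); }

static uint32_t vm_access(uint32_t address, int is_write)
{
    uint32_t page   = address >> PAGE_BITS;              /* top bits: VM page index    */
    uint32_t offset = address & ((1u << PAGE_BITS) - 1); /* low bits: offset into page */
    struct vm_entry *e = &vm_table[page];

    if (!e->present) {                                /* page fault               */
        printf("page fault on page %u\n", (unsigned)page);
        int frame = next_victim;                      /* select a frame to use    */
        next_victim = (next_victim + 1) % NUM_FRAMES;
        int old = frame_owner[frame];
        if (old >= 0) {                               /* frame already in use     */
            if (vm_table[old].dirty)                  /* only write back if dirty */
                swap_out(old);
            vm_table[old].present = 0;
        }
        swap_in(page);
        frame_owner[frame] = (int)page;
        e->frame   = (uint8_t)frame;
        e->present = 1;
        e->dirty   = 0;
    }
    if (is_write)
        e->dirty = 1;
    return ((uint32_t)e->frame << PAGE_BITS) | offset;   /* physical address */
}

int main(void)
{
    memset(frame_owner, -1, sizeof frame_owner);          /* all frames free   */
    printf("phys = 0x%05X\n", (unsigned)vm_access(0x123456, 0)); /* first touch */
    printf("phys = 0x%05X\n", (unsigned)vm_access(0x123456, 1)); /* now resident */
    return 0;
}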

VM problems
Each logical memory access requires two physical memory accesses:
the 1st to read the table,
the 2nd to access the page frame in physical memory.
Another issue is the size of the page table for large systems:
4 GB (2^32) virtual memory space.
256 MB (2^28) physical memory.
4 KB (2^12) page and frame size (equal to a small cluster).
2^32 / 2^12 = 2^20 = 1 Mi slots. 2^28 / 2^12 = 2^16 frames (16 bit frame pointer).
Table size:
1 Mi slots * ( 16 bit pointer + 1 bit P/A flag + 1 bit dirty flag + 14 bit timestamp )
>= 4 MiB of storage.
Too big to be practical.
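The same arithmetic with the table_bits helper sketched after the table layout example (a name from that sketch, not from these notes): table_bits(32, 28, 12, 16) gives 2^20 entries * (16 + 16) bits = 2^25 bits = 4 MiB, matching the figure above.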


Translation look-aside buffer
Speed of access is critical in virtual memory.
The TLB is a small block of high-speed associative cache
containing VM page id / frame page id pairs for the most recently
accessed pages.
Small enough to be searched in parallel.
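A minimal sketch of a TLB check in front of the page table, in C. A real TLB compares all entries in parallel in hardware, so the linear scan here only models the behaviour, not the speed; the entry count, FIFO replacement, and names are illustrative assumptions.

#include <stdint.h>

#define TLB_ENTRIES 16

struct tlb_entry {
    uint32_t vpn;      /* virtual page number   */
    uint32_t pfn;      /* physical frame number */
    int      valid;
};

static struct tlb_entry tlb[TLB_ENTRIES];
static unsigned tlb_next;                  /* simple FIFO replacement */

/* Returns 1 on a hit and fills *pfn; 0 on a miss (fall back to the page table). */
static int tlb_lookup(uint32_t vpn, uint32_t *pfn)
{
    for (unsigned i = 0; i < TLB_ENTRIES; i++) {   /* "parallel" search, modelled serially */
        if (tlb[i].valid && tlb[i].vpn == vpn) {
            *pfn = tlb[i].pfn;
            return 1;
        }
    }
    return 0;
}

/* After a page-table walk, remember the translation for next time. */
static void tlb_insert(uint32_t vpn, uint32_t pfn)
{
    tlb[tlb_next] = (struct tlb_entry){ vpn, pfn, 1 };
    tlb_next = (tlb_next + 1) % TLB_ENTRIES;
}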

Multi-tiered, or VM applied to the table itself

The table is parsed into a two-tier structure.
Only the highest tier is kept in permanent memory,
often a dedicated SRAM cache.
The second tier is swapped out just like other memory.
A memory reference is parsed into 3 elements:
page directory entry, page table entry, memory offset.
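A sketch of that three-way parse in C, using the usual 10/10/12 split for a 32-bit address with 4 KiB pages (10-bit directory index, 10-bit table index, 12-bit offset); the struct and function names are my own.

#include <stdint.h>
#include <stdio.h>

struct parsed_address {
    uint32_t dir;     /* page directory entry index */
    uint32_t table;   /* page table entry index     */
    uint32_t offset;  /* offset within the page     */
};

static struct parsed_address parse(uint32_t linear)
{
    struct parsed_address p;
    p.dir    = (linear >> 22) & 0x3FF;   /* top 10 bits    */
    p.table  = (linear >> 12) & 0x3FF;   /* middle 10 bits */
    p.offset =  linear        & 0xFFF;   /* low 12 bits    */
    return p;
}

int main(void)
{
    struct parsed_address p = parse(0x00403004);   /* dir=1 table=3 offset=0x004 */
    printf("dir=%u table=%u offset=0x%03X\n",
           (unsigned)p.dir, (unsigned)p.table, (unsigned)p.offset);
    return 0;
}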
Other Virtual Memory Issues
Non-swappable memory.
Certain code and data cannot be swapped out to VM without causing problems.
This means even fewer frames are actually available for swapping:
Operating system modules.
Kernel (OS) functions - especially time-dependent functions.
Interrupt mechanisms.
These may include the code for accessing the hard drives.
Memory used for processing data I/O (data buffers).
At least parts of the virtual memory table itself.
These are mapped outside the virtual page area, or, once they have been
assigned to actual physical frames, they are locked in.
Virtual memory space assignment
System wide - a single virtual memory space for the OS and all applications.
Simpler design on some levels.
or
For each application - each application is given its own VM space.
More overhead, but each application believes it has the full address range.
The OS, or parts of it, are often excluded from VM (kept permanently in real memory).
Windows swap space uses this idea; see http://support.microsoft.com/kb/555223
Each app is given 2 GiB private and 2 GiB shared (with the OS and other apps).
Each frame is 4 KiB in size.
Although each app is assigned 4 GiB of memory, the system will not allocate
that in physical frames or as storage in swap space unless it is actually
used.
Size of swap: 1.5 x real memory for systems with less than 2 GiB of real memory;
otherwise use a 1:1 ratio.
Windows by default allocates a minimum size and then adds to it. When
not needed, the additional drive space is freed for other use. Because
the VM system works in 4 K frames, defragmentation is a minor issue.
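A tiny helper capturing the sizing rule above, in C. The 2 GiB threshold and the 1.5x / 1:1 ratios are taken from these notes, not stated here as a universal Windows guarantee, and the function name is my own.

#include <stdint.h>
#include <stdio.h>

/* Recommended swap size per the rule above: 1.5x RAM below 2 GiB, else 1:1. */
static uint64_t swap_bytes(uint64_t ram_bytes)
{
    const uint64_t two_gib = 2ull << 30;
    return ram_bytes < two_gib ? ram_bytes + ram_bytes / 2 : ram_bytes;
}

int main(void)
{
    printf("1 GiB RAM -> %llu MiB swap\n",
           (unsigned long long)(swap_bytes(1ull << 30) >> 20));   /* 1536 */
    printf("4 GiB RAM -> %llu MiB swap\n",
           (unsigned long long)(swap_bytes(4ull << 30) >> 20));   /* 4096 */
    return 0;
}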
Part of the efficiency of the VM model is based on working with
standard sized pages.

However,
even if a program or its data does not take up all the bytes of a frame,
all of those bytes are still reserved for that specific page.
With many small blocks of memory,
much space is wasted.
This is known as internal fragmentation.
For example, a 100-byte block placed in its own 4 KiB frame leaves the
remaining 3,996 bytes of that frame unused.
Since most programs and blocks of data span several virtual pages,
the percentage of partial frames tends to be small.
The alternative is to use segmentation.