
Memory Management
(Chapter 4, Tanenbaum)
Copyright © 1996–2002 Eskicioglu and Marsland (and Prentice-Hall and Paul Lu)
July 99, Memory Mgmt
Introduction
The CPU fetches instructions and data of a program
from memory; therefore, both the program and its data
must reside in the main (RAM and ROM) memory.
Modern multiprogramming systems are capable of
storing more than one program, together with the data
they access, in the main memory.
A fundamental task of the memory management
component of an operating system is to ensure safe
execution of programs by providing:
– Sharing of memory
– Memory protection

Issues in sharing memory
• Transparency
Several processes may coexist in main memory, unaware of
each other, and run regardless of the number and location
of other processes.
• Safety (or protection)
Processes must not corrupt each other (nor the OS!)
• Efficiency
CPU utilization must be preserved and memory must be
fairly allocated. Want low overheads for memory
management.
• Relocation
Ability of a program to run in different memory
locations.

Storage allocation
Information stored in main memory can be classified in
a variety of ways:
• Program (code) and data (variables, constants)
• Read­only (code, constants) and read­write (variables)
• Address (e.g., pointers) or data (other variables); binding
(when memory is allocated for the object): static or
dynamic
The compiler, linker, loader and run­time libraries all
cooperate to manage this information.

Creating executable code
Before a program can be executed by the CPU, it must
go through several steps:
– Compiling (translating)—generates the object code.
– Linking—combines object code modules into a single
self-sufficient executable.
– Loading—copies the executable code into memory.
May include run-time linking with libraries.
– Execution—dynamic memory allocation.

From source to executable code

Figure: the compiler translates source code into object code (modules);
the linker combines object modules with libraries (object) into
executable code (a load module) on secondary storage, possibly with
shared libraries linked at execution; the loader copies the executable
into main memory as code, data, and workspace.
Address binding (relocation)
The process of associating program instructions and
data (addresses) to physical memory addresses is called
address binding, or relocation.
• Static—new locations are determined before execution.
– Compile time: The compiler or assembler translates
symbolic addresses (e.g., variables) to absolute
addresses.
– Load time: The compiler translates symbolic addresses
to relative (relocatable) addresses. The loader translates
these to absolute addresses.
• Dynamic—new locations are determined during
execution.
– Run time: The program retains its relative addresses.
The absolute addresses are generated by hardware.

Partitioning
A simple method to accommodate several programs in
memory at the same time (to support
multiprogramming) is partitioning. In this scheme, the
memory is divided into a number of contiguous regions,
called partitions. Two forms of memory partitioning,
depending on when and how partitions are created (and
modified), are possible:
• Static partitioning
• Dynamic partitioning
These techniques were used by the IBM OS/360
operating system—MFT (Multiprogramming with a Fixed
Number of Tasks) and MVT (Multiprogramming with a
Variable Number of Tasks).

Static partitioning
Main memory is divided into a fixed number of (fixed-size)
partitions during system generation or startup.
Programs are queued to run in the smallest available
partition. An executable prepared to run in one
partition may not be able to run in another without being
re-linked. This technique uses absolute loading.

Figure: memory divided into partitions for small jobs (10 K),
average jobs (100 K), and large jobs (500 K), plus the
operating system.

Dynamic partitioning
Any number of programs can be loaded into memory as long as
there is room for each. When a program is loaded (relocatable
loading), it is allocated exactly as much memory as it needs.
The addresses in the program are fixed once it is loaded, so
it cannot move. The operating system keeps track of each
partition (its size and location in memory).

Figure: partition allocation at different times, as processes
(e.g., A and B) are loaded into and removed from memory
alongside the operating system.
Partition Allocation Methods in Memory Management
• In partition allocation, when more than one free
partition is available to accommodate a process’s
request, one partition must be selected.
• To choose a particular partition, a partition allocation
method is needed.

There are three placement algorithms:
A. First Fit
B. Best Fit
C. Worst Fit
First Fit:
• Scan memory from the beginning (the top of main memory)
and allocate the first free block (hole) that is large
enough to hold the process.
Best Fit
• Allocate the process to the smallest free partition that
is large enough.
• The entire list of holes is searched to find the smallest hole
whose size is greater than or equal to the size of the process.
Worst Fit
• Allocate the process to the largest free partition that is
large enough.
• The opposite of best fit: the entire list of holes is searched
for the largest hole, which is then allocated to the process.
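The three placement algorithms can be sketched over a free list. This is an illustrative sketch, not from the slides: the function names and the hole representation are my own. Each hole is a (start, size) pair, and each function returns the start address of the chosen hole, or None if nothing fits.

```python
def first_fit(holes, size):
    """Return the first hole large enough for the request."""
    for start, hole_size in holes:
        if hole_size >= size:
            return start
    return None  # no hole fits

def best_fit(holes, size):
    """Return the smallest hole that still fits the request."""
    fits = [(hole_size, start) for start, hole_size in holes if hole_size >= size]
    return min(fits)[1] if fits else None

def worst_fit(holes, size):
    """Return the largest hole (the opposite of best fit)."""
    fits = [(hole_size, start) for start, hole_size in holes if hole_size >= size]
    return max(fits)[1] if fits else None

holes = [(0, 100), (200, 500), (800, 200)]
print(first_fit(holes, 150))  # 200 (first hole of size >= 150)
print(best_fit(holes, 150))   # 800 (smallest fitting hole, size 200)
print(worst_fit(holes, 150))  # 200 (largest hole, size 500)
```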
Fragmentation
Fragmentation refers to the unused memory that the
memory management system cannot allocate.
• Internal fragmentation
Waste of memory within a partition, caused by the
difference between the size of a partition and the
process loaded. Severe in static partitioning schemes.
• External fragmentation
Waste of memory between partitions, caused by
scattered noncontiguous free space. Severe in dynamic
partitioning schemes.
Compaction (aka relocation) is a technique that is used
to overcome external fragmentation.

Swapping
The basic idea of swapping is to treat main memory as a
‘‘preemptable’’ resource. A high­speed swapping device is
used as the backing storage of the preempted processes.

SWAP-
IN

Operati
SWAP-OUT ng
Swapping device System
(usually, a hard Memory
disk)
Swapping cont.

Swapping is a medium­term scheduling method.


Memory becomes preemptable.
Figure: the swapper moves processes between disk and memory
(swap-out and swap-in); the dispatcher moves processes between
memory and the running state (dispatch and suspend).

Swapping brings flexibility even to systems with fixed


partitions, because:
“if needed, the operating system can always make room for
high­priority jobs, no matter what!’’
Note that, although the mechanics of swapping are fairly
simple in principle, its implementation requires specific
support (e.g., special file system and dynamic relocation)
from the OS that uses it.

Segmentation
A segment is a region of contiguous memory. The most
important problem with base­and­bounds relocation is
that there is only one segment for each process.
Segmentation generalizes the base­and­bounds
technique by allowing each process to be split over
several segments (i.e., multiple base­bounds pairs). A
segment table holds the base and bounds of each
segment. Although the segments may be scattered in
memory, each segment is mapped to a contiguous
region.
Additional fields (Read/Write and Shared) in the
segment table add protection and sharing capabilities
to segments (i.e., permissions).
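The base-and-bounds lookup described above can be sketched as follows. This is an assumption-laden illustration: the table entries are invented, and bounds is modeled here as the segment length rather than a limit address.

```python
def seg_translate(seg_table, seg_num, offset):
    """Translate a (segment #, offset) logical address into a physical
    address, faulting on a bad segment number or out-of-bounds offset."""
    if seg_num >= len(seg_table):
        raise MemoryError("segmentation fault: invalid segment number")
    base, length = seg_table[seg_num]
    if offset >= length:
        raise MemoryError("segmentation fault: offset beyond bounds")
    return base + offset

# Illustrative table: one (base, length) pair per segment.
table = [(0x1000, 0x120), (0x3000, 0x340)]
print(hex(seg_translate(table, 0, 0x50)))   # 0x1050
```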

Relocation with segmentation

Figure: a logical address (segment #, offset) is translated through
the segment table, whose entries hold Base, Bounds, R/W, and S
fields, into a physical address; an invalid segment number or
out-of-bounds offset raises a fault. Example entries include
segment 0 at base 0X1000 with bounds 0X1120 and segment 4 at
base 0X4000 with bounds 0X4520.

See next slide for a memory allocation example.


Paging
Physical memory is divided into a number of fixed-size
blocks, called frames. Logical memory is also divided
into chunks of the same size, called pages. The size of a
frame/page is determined by the hardware and can be
anywhere from 512 bytes (VAX) to 16 megabytes
(MIPS R10000)! Why have such large frames? Why not?
A page table maps each page to the base address of the
frame that holds it in main memory.
The major goals of paging are to make memory allocation
and swapping easier and to reduce fragmentation.
Paging also allows allocation of non­contiguous memory
(i.e., pages need not be adjacent.)
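The mapping can be sketched as follows, assuming (for illustration only) a 1 KB page size and an invented page table; the page number selects the PTE, and the offset passes through unchanged.

```python
PAGE_SIZE = 1024   # illustrative; real page sizes vary, as noted above

def page_translate(page_table, logical_addr):
    """Split a logical address into (page #, offset), look up the frame
    in the page table, and recombine into a physical address."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]            # the PTE gives the frame number
    return frame * PAGE_SIZE + offset

page_table = [2, 5, 1, 3]               # page -> frame, invented mapping
print(page_translate(page_table, 300))  # page 0 -> frame 2: 2*1024+300 = 2348
```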

Relocation with paging

Figure: a logical address is split into a page # and a page offset;
the page # indexes the page table (each entry is a Page Table Entry,
PTE) to obtain a frame #, which is combined with the unchanged
offset to form the physical address.

Paging—an example

Figure: a worked example in which a logical address with page # 2
and offset 300 is translated through the page table to a physical
address with frame # 1 and offset 300, within an 8-frame physical
memory.

Page replacement and replacement policies
• In an operating system that uses paging for memory
management, a page replacement algorithm is needed to
decide which page to replace when a new page must be
brought in.

• Page Fault – A page fault happens when a running


program accesses a memory page that is mapped into the
virtual address space, but not loaded in physical memory.

• Page Replacement Algorithms :


1. First In First Out (FIFO)
2. Optimal Page replacement
3. Least Recently Used
First In First Out (FIFO)
This is the simplest page replacement algorithm. The operating
system keeps all pages in memory in a queue, with the oldest page at
the front. When a page needs to be replaced, the page at the front of
the queue is selected for removal.

Example 1: Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with
3 page frames. Find the number of page faults.

Initially all slots are empty, so when 1, 3, 0 come
they are allocated to the empty slots —> 3 Page Faults.
When 3 comes, it is already in memory —> 0 Page Faults.
Then 5 comes; it is not in memory, so it replaces the oldest
page, 1 —> 1 Page Fault.
When 6 comes, it is also not in memory, so it replaces the
oldest page, 3 —> 1 Page Fault.
Finally, when 3 comes again, it is not in memory, so it
replaces the oldest page, 0 —> 1 Page Fault.
Total: 6 page faults.
Belady’s anomaly – Belady’s anomaly shows that it is possible to have
more page faults when increasing the number of page frames while
using the First In First Out (FIFO) page replacement algorithm.
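The FIFO walkthrough above can be simulated directly. This is an illustrative sketch (the function name is my own), keeping the oldest resident page at the front of a queue:

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement for a reference string."""
    memory = deque()              # oldest page at the left end
    faults = 0
    for page in refs:
        if page in memory:
            continue              # hit: no fault, FIFO order unchanged
        faults += 1
        if len(memory) == frames:
            memory.popleft()      # evict the oldest resident page
        memory.append(page)
    return faults

print(fifo_faults([1, 3, 0, 3, 5, 6, 3], 3))  # 6, matching the walkthrough
```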
Optimal Page replacement
In this algorithm, the page replaced is the one that will not be used
for the longest duration of time in the future.
Example 2: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2
with 4 page frames. Find the number of page faults.

Initially all slots are empty, so 7, 0, 1, 2 are allocated to the empty slots —> 4 Page Faults.
0 is already there —> 0 Page Faults.
When 3 comes, it takes the place of 7 because 7 is not used for the longest duration of
time in the future —> 1 Page Fault.
0 is already there —> 0 Page Faults.
4 takes the place of 1 —> 1 Page Fault.

The remaining references cause 0 page faults because those pages are
already in memory. Total: 6 page faults.
Optimal page replacement is ideal, but not possible in practice, as the
operating system cannot know future requests.
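A hedged sketch of the optimal policy (function name my own): on a fault with memory full, evict the resident page whose next use lies farthest in the future, or that is never used again.

```python
def optimal_faults(refs, frames):
    """Count page faults under optimal (farthest-future-use) replacement."""
    memory = []
    faults = 0
    for i, page in enumerate(refs):
        if page in memory:
            continue
        faults += 1
        if len(memory) < frames:
            memory.append(page)
            continue

        def next_use(p):
            """Index of p's next reference, or infinity if never used again."""
            try:
                return refs.index(p, i + 1)
            except ValueError:
                return float('inf')

        victim = max(memory, key=next_use)   # farthest future use
        memory[memory.index(victim)] = page
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(optimal_faults(refs, 4))  # 6, matching the walkthrough
```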
Least Recently Used
In this algorithm, the page replaced is the one that was least recently
used.
Example 3: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2
with 4 page frames. Find the number of page faults.

Initially all slots are empty, so 7, 0, 1, 2 are allocated to the empty slots —> 4 Page Faults.
0 is already there —> 0 Page Faults.
When 3 comes, it takes the place of 7 because 7 is the least recently used —> 1 Page Fault.
0 is already there —> 0 Page Faults.
4 takes the place of 1 —> 1 Page Fault.

The remaining references cause 0 page faults because those pages are
already in memory. Total: 6 page faults.
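LRU can be sketched (function name my own) by keeping resident pages ordered by recency, with the least recently used page first:

```python
def lru_faults(refs, frames):
    """Count page faults under least-recently-used replacement."""
    memory = []               # least recently used page at the front
    faults = 0
    for page in refs:
        if page in memory:
            memory.remove(page)    # hit: refresh this page's recency
            memory.append(page)
            continue
        faults += 1
        if len(memory) == frames:
            memory.pop(0)          # evict the least recently used page
        memory.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(lru_faults(refs, 4))  # 6, matching the walkthrough
```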
Thrashing
Thrashing is a situation in which the system spends the major portion
of its time servicing page faults, while very little actual processing
gets done.

Techniques to handle it:
1. Working Set Model –
• The basic principle states that if we allocate enough frames to a process to
accommodate its current locality, it will fault only when it moves to
a new locality.
• But if the allocated frames are fewer than the size of the current locality,
the process is bound to thrash.
• According to this model, based on a parameter ‘A’, the working set is defined
as the set of pages referenced in the most recent ‘A’ page references.
• Hence, all actively used pages always end up being part of the
working set.
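As a sketch (names illustrative), the working set at time t with the slide's window parameter ‘A’ is just the set of distinct pages among the last A references:

```python
def working_set(refs, t, a):
    """Working set W(t, A): the set of pages referenced in the most
    recent `a` references ending at (0-indexed) time t."""
    window = refs[max(0, t - a + 1): t + 1]
    return set(window)

refs = [1, 2, 1, 3, 4, 4, 3, 4, 4, 4]
print(working_set(refs, 9, 4))   # {3, 4}: pages in the last 4 references
```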
