
DEPARTMENT OF COMPUTER TECHNOLOGY

IV-SEMESTER
OPERATING SYSTEMS

UNIT-4
Memory management techniques
Syllabus

• Memory management techniques: contiguous and non-contiguous allocation, paging and segmentation, translation lookaside buffer (TLB) and overheads

INTRODUCTION

An Operating System performs the following activities for memory management (a function of the OS):

• Allocates and deallocates memory.
• Keeps a record of which part of primary memory is used by whom and how much.
• Distributes memory while multiprocessing.
• In multiprogramming, the operating system decides which processes get memory, when they get it, and how much memory they get.


The CPU fetches instructions and data of a program from memory; therefore, both the program and its data must reside in the main (RAM and ROM) memory.

Modern multiprogramming systems are capable of storing more than one program, together with the data they access, in the main memory.


A fundamental task of the memory management component of an operating system is to ensure safe execution of programs by providing:
– Sharing of memory
– Memory protection

Information stored in main memory can be classified in a variety of ways:
• Program (code) and data (variables, constants)
• Read-only (code, constants) and read-write (variables)
• Address (e.g., pointers) or data (other variables); binding (when memory is allocated for the object): static or dynamic
The compiler, linker, loader and runtime libraries all cooperate to manage this information.

Creating an executable code

Before a program can be executed by the CPU, it must go through several steps:

Compiling (translating) – generates the object code.
Linking – combines object code and function libraries into an executable code.
Loading – copies/loads the executable code into memory; may include runtime linking with libraries.
Execution – dynamic memory allocation.

From source to executable code

(Figure: the compiler translates source code into an object module; the linker combines object modules and libraries, including shared libraries, into executable code (a load module) on secondary storage; the loader then copies the executable's code, data and workspace into main memory for execution.)
Address binding (relocation)

The first thing the operating system does for memory management is address binding. A program is stored in secondary storage, and it must be loaded into main memory in order to be executed. The process of mapping the logical addresses of a program to physical addresses in main memory is called address binding.

The process of associating program instructions and data (addresses) with physical memory addresses is called address binding, or relocation. It may take place at:

Compile time: The compiler or assembler translates symbolic addresses (e.g., variables) to absolute addresses, so absolute code is generated.
Load time: The compiler translates symbolic addresses to relative (relocatable) addresses; relocatable code must be generated if the memory location is not known at compile time.
Run time: The program retains its relative addresses and binding is delayed until execution; a process can be moved during run time from one memory location to another.

Static – new locations are determined before execution.
Dynamic – new locations are determined during execution.

Multistep Processing of a User Program

Logical vs. Physical Address Space

The concept of a logical address space that is bound to a separate physical address space is central to proper memory management.

Logical address – generated by the CPU; also referred to as a virtual address.

Physical address – the address seen by the memory unit.

Logical and physical addresses are the same in compile-time and load-time address-binding schemes; logical (virtual) and physical addresses differ in the execution-time (run-time) address-binding scheme.

Memory-Management Unit (MMU)

Therefore, special hardware is needed to convert a logical address into a physical address; this hardware unit is known as the Memory Management Unit (MMU).

In the MMU scheme, the value in the relocation register is added to every address generated by a user process at the time it is sent to memory.

The user program deals with logical addresses; it never sees the real physical addresses.
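For example (illustrative numbers, not from the slides): if the relocation register holds 14000 and a user process generates logical address 346, the MMU adds the two and sends physical address 14000 + 346 = 14346 to memory.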

Memory Management schemes
Memory management is the process of regulating and organizing computer memory in order to allocate and deallocate memory space efficiently for the programs and applications that require it.

This helps to guarantee that the system runs efficiently and has enough memory to run applications and tasks.

An important task of a memory management
system is to bring (load) programs into main
memory for execution.
Memory allocation schemes:
• Contiguous memory management schemes
• Non-contiguous memory management schemes

Memory management techniques:
• Swapping
• Paging
• Segmentation

Contiguous memory allocation techniques
were commonly employed by earlier operating
systems*:
•Direct placement
•Overlays
•Partitioning

*Note: Techniques similar to those listed above are still used by some modern, dedicated special-purpose operating systems and real-time operating systems.

Direct placement

Memory allocation is trivial. No special relocation is needed, because the user programs are always loaded (one at a time) into the same memory location (absolute loading). The linker produces the same loading address for every user program.

(Figure: memory layout with the operating system at address 0, the single user program above it, and the rest of memory, up to the OS drivers and buffers at the top, left unused.)

Examples:
• Batch monitors (running a series of programs one at a time)
• MS-DOS
Overlays

Overlaying is a technique to run a program that is bigger than the size of the physical memory, i.e. to allow large programs to execute (fit) in a smaller memory.

A program is organized (by the user) into a tree-like structure of object modules, called overlays. The root overlay is always loaded into memory; the program is divided into modules in such a way that not all modules need to be in memory at the same time.

(Figure: an overlay tree with root 0,0 and subtrees rooted at 1,0, 2,0 and 3,0, together with memory snapshots showing the operating system, the root overlay and one branch of the tree loaded at a time.)
Question –
The overlay tree for a program is as shown below:

(Figure: Root (2 KB) with children A (4 KB), B (6 KB) and C (8 KB); A has children D (6 KB) and E (8 KB); B has child F (2 KB); C has child G (4 KB).)

What will be the size of the partition (in physical memory) required to load (and run) this program?
(a) 12 KB (b) 14 KB (c) 10 KB (d) 8 KB
Solution –
Using the overlay concept, we need not have the entire program inside main memory; we only need the parts that are required at a given instant. At any time we need either Root-A-D, Root-A-E, Root-B-F or Root-C-G:

Root + A + D = 2 KB + 4 KB + 6 KB = 12 KB
Root + A + E = 2 KB + 4 KB + 8 KB = 14 KB
Root + B + F = 2 KB + 6 KB + 2 KB = 10 KB
Root + C + G = 2 KB + 8 KB + 4 KB = 14 KB

So if we have a 14 KB partition, we can run any of these combinations.

Answer: (b) 14 KB
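The required partition size is simply the largest root-to-leaf sum of module sizes in the overlay tree. Below is a minimal C sketch (not from the slides) that computes it; the node names and sizes are hard-coded from the question purely for illustration.

```c
/* Minimal sketch: the partition needed for an overlay tree is the maximum
 * root-to-leaf sum of module sizes. Tree below is the one from the question
 * (sizes in KB); structure and names are hard-coded for illustration only. */
#include <stdio.h>

#define MAX_CHILDREN 3

struct overlay {
    const char *name;
    int size_kb;
    int nchildren;
    struct overlay *children[MAX_CHILDREN];
};

/* Largest root-to-leaf sum = smallest partition that can run the program. */
static int partition_size(const struct overlay *node) {
    int best = 0;
    for (int i = 0; i < node->nchildren; i++) {
        int sub = partition_size(node->children[i]);
        if (sub > best)
            best = sub;
    }
    return node->size_kb + best;
}

int main(void) {
    struct overlay D = {"D", 6, 0, {0}}, E = {"E", 8, 0, {0}};
    struct overlay F = {"F", 2, 0, {0}}, G = {"G", 4, 0, {0}};
    struct overlay A = {"A", 4, 2, {&D, &E}};
    struct overlay B = {"B", 6, 1, {&F}};
    struct overlay C = {"C", 8, 1, {&G}};
    struct overlay root = {"Root", 2, 3, {&A, &B, &C}};

    printf("Required partition: %d KB\n", partition_size(&root)); /* prints 14 KB */
    return 0;
}
```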
Partitioning
In this scheme, the memory is divided into a number of
contiguous regions, called partitions.
Two forms of memory partitioning, depending on when
and how partitions are created (and modified), are
possible:
• Static partitioning (Fixed size partitioning)
MFT (Multiprogramming with Fixed Number of Tasks)
• Dynamic partitioning (variable size partitioning)
MVT (Multiprogramming with Variable Number of
Tasks.)

These techniques were used by the IBM OS/360 operating system.

Static partitioning (Fixed-size partitioning)
In this method of contiguous memory allocation, each process is given a fixed-size contiguous block in main memory.

1. Static partitioning (Fixed-size partitioning)
It is a memory allocation technique used in operating systems to divide the physical memory into fixed-size partitions or regions, each assigned to a specific process or user.

For example, in the diagram below, the memory is divided into five blocks, each of size 4 MB.

If a process of size 4 MB comes, it will easily be allocated to any of the 4 MB memory blocks.
If a process of less than 4 MB comes, we can still allocate it a block, but that process will suffer internal fragmentation: a whole 4 MB block is allocated even though it is not fully required, and the leftover memory in the block is wasted.

If a process of size greater than 4 MB comes, we cannot allocate that process in memory, because no single partition is large enough.
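As a small worked example (illustrative numbers, not from the slides): if a 3 MB process is placed in a 4 MB partition, 4 MB − 3 MB = 1 MB of that partition is wasted as internal fragmentation.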

Fragmentation
Fragmentation refers to the unused memory that the
memory management system cannot allocate.
• Internal fragmentation
Waste of memory within a partition, caused by the
difference between the size of a partition and the process
loaded. Severe in static partitioning schemes.
• External fragmentation
Waste of memory between partitions, caused by scattered
noncontiguous free space. Severe in dynamic partitioning
schemes.
Compaction is a technique that is used to overcome
external fragmentation.

2
Internal Fragmentation
When a process is assigned a memory block that is larger than the memory it requested, the unused space inside that block is wasted. The difference between the assigned memory and the requested memory is called internal fragmentation. It usually occurs when memory is divided into fixed-size blocks.

External fragmentation:
When the portions of free memory are too small (and non-contiguous) to hold any process.
Example:
The RAM has a total of 10 KB free space, but it is not contiguous; it is fragmented. If a process of size 10 KB wants to load into the RAM, it cannot, because the free space is not contiguous.

Parameter: Internal Fragmentation vs. External Fragmentation

Definition: The difference between the memory space needed and the assigned memory is considered internal fragmentation. When there are empty spaces among the non-contiguous memory blocks that cannot be assigned to any process, the problem is considered external fragmentation.

Memory block size: In internal fragmentation, the memory blocks are of fixed size. In external fragmentation, the memory blocks are of varying sizes.

Occurrence: Internal fragmentation occurs when we divide physical memory into contiguous fixed-size blocks and allocate to a process more memory than it requested; as a result, the unused allocated space remains and cannot be used by other processes. External fragmentation occurs when processes are removed from main memory and the free spaces created are too small to fit a new process.

Solution: A dynamic partitioning scheme and best-fit block search are solutions that can reduce internal fragmentation. Compaction, paging and segmentation are the solutions for external fragmentation.
Multiple-partition allocation
• Hole – block of available memory;
• holes of various size are scattered throughout memory
• When a process arrives, it is allocated memory from a
hole large enough to accommodate it
• Operating system maintains information about:
a) allocated partitions b) free partitions (hole)

(Figure: successive memory snapshots with the OS at the top and processes 5 and 2 resident; process 8 terminates and leaves a hole, and processes 9 and 10 are later allocated into the freed space.)
Difference between contiguous and non-contiguous allocation

1. In contiguous allocation, contiguous blocks of memory are allocated to a process. In non-contiguous allocation, non-contiguous blocks of memory are allocated to a process.
2. Contiguous allocation can be achieved using fixed partitioning and variable partitioning. Non-contiguous allocation can be achieved using paging and segmentation.
3. Contiguous allocation executes faster. Non-contiguous allocation executes more slowly.
4. Contiguous allocation is easier for the operating system to control. Non-contiguous allocation is harder for the operating system to control.
5. Contiguous allocation has less overhead, since many address translations are not needed. Non-contiguous allocation has more overhead, because of the many address translations.
6. Contiguous allocation suffers from internal and external fragmentation. Non-contiguous allocation suffers from external fragmentation.
7. In contiguous allocation there is wastage of memory. In non-contiguous allocation there is no wastage of memory.
8. In contiguous allocation, swapped-in processes are placed at their original location. In non-contiguous allocation, swapped-in processes can be placed at any location.
Processes that have been assigned contiguous blocks of memory fill the main memory at any given time. However, when a process completes, it leaves behind an empty block known as a hole. This space can also be used for a new process; hence, main memory consists of processes and holes, and any one of these holes can be allotted to a new incoming process.

Strategies used for contiguous memory allocation (input queues)

Three strategies to allot a hole to an incoming process (a small simulation sketch follows this list):

1. First-fit: Allocate the first hole that is big enough.
2. Best-fit: Allocate the smallest hole that is big enough; the entire list must be searched, unless it is ordered by size. Produces the smallest leftover hole.
3. Worst-fit: Allocate the largest hole; the entire list must also be searched. Produces the largest leftover hole.

First-fit and best-fit are better than worst-fit in terms of speed and storage utilization.
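The sketch below (illustrative only, not from the slides) simulates the three strategies in C; the partition and request sizes are taken from the first Solve exercise below, and leftover space in a partition is treated as a new, smaller hole.

```c
/* Minimal sketch (illustrative only): simulate first-fit, best-fit and
 * worst-fit placement over a list of free partitions. Partition and
 * request sizes are the ones from the first Solve exercise below. */
#include <stdio.h>

#define NPART 5
#define NREQ  4

enum strategy { FIRST_FIT, BEST_FIT, WORST_FIT };

/* Returns the index of the chosen partition, or -1 if the request cannot
 * be satisfied. free_kb[] holds the remaining free space of each partition. */
static int pick(const int free_kb[], int n, int need, enum strategy s) {
    int chosen = -1;
    for (int i = 0; i < n; i++) {
        if (free_kb[i] < need)
            continue;
        if (chosen == -1) { chosen = i; if (s == FIRST_FIT) break; continue; }
        if (s == BEST_FIT  && free_kb[i] < free_kb[chosen]) chosen = i;
        if (s == WORST_FIT && free_kb[i] > free_kb[chosen]) chosen = i;
    }
    return chosen;
}

int main(void) {
    const char *names[] = {"first-fit", "best-fit", "worst-fit"};
    int requests[NREQ] = {212, 417, 112, 426};

    for (int s = FIRST_FIT; s <= WORST_FIT; s++) {
        int free_kb[NPART] = {100, 500, 200, 450, 600};
        printf("%s:\n", names[s]);
        for (int r = 0; r < NREQ; r++) {
            int i = pick(free_kb, NPART, requests[r], (enum strategy)s);
            if (i < 0) {
                printf("  %d KB request must wait (no hole big enough)\n", requests[r]);
            } else {
                free_kb[i] -= requests[r];
                printf("  %d KB placed in partition %d (leftover %d KB)\n",
                       requests[r], i + 1, free_kb[i]);
            }
        }
    }
    return 0;
}
```

Changing the free_kb initializer to {100, 500, 200, 300, 600} reproduces the second Solve exercise, so the same loop can be used to check both answers.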

Solve:
1. Consider five memory partitions of size 100 KB, 500 KB, 200 KB, 450 KB and 600 KB, in that order. If requests for blocks of size 212 KB, 417 KB, 112 KB and 426 KB arrive in that order, which of the following algorithms makes the most efficient use of memory?

A. Best-fit algorithm

B. First-fit algorithm

C. Worst-fit algorithm

Solve:
2. Given five memory partitions of 100 KB, 500 KB, 200 KB, 300 KB and 600 KB (in order), how would the first-fit, best-fit and worst-fit algorithms place processes of 212 KB, 417 KB, 112 KB and 426 KB (in order)?
Which algorithm makes the most efficient use of memory?

Dynamic partitioning (variable size partitioning)
Any number of programs can be loaded into memory as long as there is room for each. When a program is loaded (relocatable loading), it is allocated exactly as much memory as it needs. The addresses in the program are fixed after it is loaded, and the operating system keeps track of each partition (its size and location in memory).

(Figure: memory snapshots at different times, with the operating system at the bottom and partitions for programs A, B, K, ... being created and released as programs are loaded and terminate.)
Variable size partitioning:
The memory is divided into blocks of varying sizes; a process is allotted a block of main memory of exactly the size it requires.
For example, a process P1 of size 4 MB comes into memory. After that, another process P2 of size 12 MB comes into memory, and then another process P3 of size 5 MB comes into memory. In memory, these processes will occupy blocks of exactly 4 MB, 12 MB and 5 MB, one after another.
Difference between fixed-size partitioning and variable-size partitioning

1. In fixed-size partitioning, the memory is divided into fixed-sized memory blocks. In variable-size partitioning, the memory is divided into variable-sized memory blocks based on the size of the process.
2. Fixed-size partitioning suffers from both internal and external fragmentation. Variable-size partitioning suffers from external fragmentation.
3. With fixed-size partitioning the degree of multiprogramming is less; with variable-size partitioning it is more.
4. With fixed-size partitioning, a process greater than the fixed partition size cannot be allocated to memory. With variable-size partitioning, a process of any size can be allocated to memory.
Contiguous Allocation

• The base register contains the value of the smallest physical address.
• The limit register contains the range of logical addresses.
• The MMU maps each logical address dynamically.
A pair of base and limit registers define the logical address space
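A minimal sketch (illustrative, not from the slides) of the check performed with a base and limit register: a logical address is valid only if it is below the limit, and the physical address is then the logical address plus the base. The register values are made up for illustration.

```c
/* Minimal sketch: MMU check with a base (relocation) and limit register.
 * A logical address is legal only if it is below the limit; the physical
 * address is then logical + base. Register values are illustrative only. */
#include <stdio.h>

static unsigned base_reg  = 0x40000;  /* smallest physical address of the partition */
static unsigned limit_reg = 0x10000;  /* size of the logical address space          */

static int translate(unsigned logical, unsigned *physical) {
    if (logical >= limit_reg)
        return -1;                    /* trap: addressing error (protection fault) */
    *physical = base_reg + logical;   /* dynamic relocation */
    return 0;
}

int main(void) {
    unsigned phys;
    if (translate(0x0123, &phys) == 0)
        printf("logical 0x0123 -> physical 0x%X\n", phys);
    if (translate(0x20000, &phys) != 0)
        printf("logical 0x20000 -> trap (beyond limit)\n");
    return 0;
}
```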

Non-Contiguous Allocation
This method allocates memory to a process from several different locations, based on its needs. Since the available memory space is distributed, the free space is also scattered here and there. This memory allocation technique reduces memory wastage, which reduces internal and external fragmentation.

There are two ways of performing non-contiguous memory allocation:
1. Paging
2. Segmentation

1. Paging (storage mechanism)
Paging is a non-contiguous memory management technique that allows the operating system to fetch processes from secondary memory and store them in the main memory in the form of pages.

Paging is a fixed-size partitioning scheme.

In paging, secondary memory and main memory are divided into equal, fixed-size partitions: secondary memory is divided into multiple pages, and main memory is divided into frames.

Both the pages and frames are of equal size.
Memory is divided into fixed-size blocks called pages. Each page is of the same size, and the size is typically a power of 2, such as 4 KB or 8 KB.

Paging has the advantage of reducing memory wastage, but it increases overhead due to address translation.

The operating system needs to maintain a table, called the page table, for each process; it contains the base address of each frame acquired by the process in memory.

Paging is done to remove external fragmentation.

The mapping between logical pages and physical page
frames is maintained by the page table,
which is used by the memory management unit to translate
logical addresses into physical addresses

Paging
Physical memory is divided into a number of fixed-size blocks, called frames. The logical memory is also divided into chunks of the same size, called pages.
The size of a frame/page is determined by the hardware and can be any value between 512 bytes (VAX) and 16 megabytes (MIPS R10000).
A page table defines (maps), for each page, the base address of the frame that holds it in main memory.
The major goals of paging are to make memory allocation and swapping easier and to reduce fragmentation.
Paging also allows allocation of noncontiguous memory (i.e., pages need not be adjacent).
1. Paging: example
Assuming that the main memory is 16 KB and the frame size is 1 KB, the main memory will be partitioned into a collection of sixteen 1 KB frames.
P1, P2, P3 and P4 are the four processes in the system, each of which is 4 KB in size. Each process is separated into 1 KB pages, allowing one page to be stored in a single frame.
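A small worked aside (arithmetic implied by these numbers, not stated on the slide): 16 KB of main memory with 1 KB frames gives 16 KB / 1 KB = 16 frames, so a physical address needs 4 bits of frame number plus a 10-bit offset (1 KB = 2^10 bytes). Each 4 KB process has 4 pages, so its logical address needs 2 bits of page number plus the same 10-bit offset.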

Translating a logical address into a physical address:

• The CPU always generates a logical address.
• A physical address is needed to access the main memory.

Steps to translate a logical address into a physical address:

Step-01:

The CPU generates a logical address consisting of two parts:
1. Page number
2. Page offset

Step-02:
For the page number generated by the CPU,
• the page table provides the corresponding frame number (the base address of the frame) where that page is stored in main memory.
Step-03:
The frame number combined with the page offset forms the required physical address.
• The frame number specifies the specific frame where the required page is stored.
• The page offset specifies the specific word that has to be read from that page.
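These three steps can be written out as a short C sketch (illustrative only, not from the slides); the 1 KB page size matches the earlier example, and the page-table contents are made up for illustration.

```c
/* Minimal sketch (illustrative): translating a logical address with a
 * per-process page table. Assumes 1 KB pages and a 4-page (4 KB) process;
 * the page-table contents below are made up for illustration. */
#include <stdio.h>

#define PAGE_SIZE 1024u            /* 1 KB pages/frames        */
#define NUM_PAGES 4u               /* a 4 KB process has 4 pages */

static unsigned page_table[NUM_PAGES] = {5, 2, 9, 12};  /* page -> frame */

static int translate(unsigned logical, unsigned *physical) {
    unsigned page   = logical / PAGE_SIZE;   /* step 1: page number */
    unsigned offset = logical % PAGE_SIZE;   /* step 1: page offset */
    if (page >= NUM_PAGES)
        return -1;                           /* invalid logical address */
    /* step 2: look up the frame; step 3: frame base + offset */
    *physical = page_table[page] * PAGE_SIZE + offset;
    return 0;
}

int main(void) {
    unsigned phys;
    if (translate(2100, &phys) == 0)         /* page 2, offset 52 */
        printf("logical 2100 -> physical %u\n", phys);  /* 9*1024 + 52 = 9268 */
    return 0;
}
```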

2. Segmentation:
Segmentation is a method in which the process is divided into parts of variable sizes, which are then put into main memory. Each part is known as a segment.

For example, there is a process P of size 500 KB. It is divided into five segments S0, S1, S2, S3 and S4, each of variable size, in the secondary memory. These five segments will be loaded non-contiguously from the secondary memory into the main memory.

Pure segmentation means segmentation without paging.
Segmentation – an example

(Figure: a logical address space containing segments 0–4 at various logical addresses is mapped, segment by segment and non-contiguously, onto physical memory; the segments end up at different physical addresses and in a different order than in the logical address space.)
Segmentation
A table stores the information about all such segments and is called the segment table.
Segment table – it maps a two-dimensional logical address into a one-dimensional physical address.
Each table entry has:
• Base address: the starting physical address where the segment resides in memory.
• Limit: the length of the segment.

The address generated by the CPU is divided into:
• Segment number (s): the number of bits required to represent the segment.
• Segment offset (d): the number of bits required to represent the size of the segment.
Address translation in segmentation

Translation of a two-dimensional logical address into a one-dimensional physical address.
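A minimal sketch (illustrative, not from the slides) of this translation: the segment number indexes the segment table, the offset is checked against the limit, and the physical address is base plus offset. The table contents are made up for illustration.

```c
/* Minimal sketch (illustrative): translating a (segment number, offset)
 * pair with a segment table. Base and limit values are made up. */
#include <stdio.h>

struct segment_entry {
    unsigned base;   /* starting physical address of the segment */
    unsigned limit;  /* length of the segment in bytes           */
};

static struct segment_entry seg_table[] = {
    {0x1000, 0x0340},   /* segment 0 */
    {0x4000, 0x0520},   /* segment 1 */
    {0x0120, 0x0EE0},   /* segment 2 */
};

static int translate(unsigned s, unsigned d, unsigned *physical) {
    if (s >= sizeof seg_table / sizeof seg_table[0])
        return -1;                       /* invalid segment number */
    if (d >= seg_table[s].limit)
        return -1;                       /* trap: offset beyond segment limit */
    *physical = seg_table[s].base + d;   /* base + offset */
    return 0;
}

int main(void) {
    unsigned phys;
    if (translate(1, 0x10, &phys) == 0)
        printf("(segment 1, offset 0x10) -> physical 0x%X\n", phys);  /* 0x4010 */
    if (translate(0, 0x400, &phys) != 0)
        printf("(segment 0, offset 0x400) -> trap: beyond limit\n");
    return 0;
}
```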

The difference between paging and segmentation:
Pure segmentation means segmentation without paging.

In paging, a process address space is broken into fixed-sized blocks called pages.

In segmentation, a process address space is broken into varying-sized blocks called sections.
Consideration: Demand Paging vs. Demand Segmentation

Programmer aware: No / Yes
How many address spaces: 1 / Many
Ease of user sharing: No / Yes
Internal fragmentation: Yes / No
External fragmentation: No / Yes
Placement question: No / Yes
Replacement question: Yes / Yes
The difference between paging and segmentation:

1. Memory size: In paging, a process address space is broken into fixed-sized blocks called pages. In segmentation, a process address space is broken into varying-sized blocks called sections.
2. Accountability: The operating system divides the memory into pages. The compiler is responsible for calculating the segment size, the virtual address and the actual address.
3. Size: Page size is determined by the available memory. Section size is determined by the user.
4. Speed: The paging technique is faster in terms of memory access. Segmentation is slower than paging.
5. Fragmentation: Paging can cause internal fragmentation, as some pages may go underutilized. Segmentation can cause external fragmentation, as some memory blocks may not be used at all.
6. Logical address: During paging, a logical address is divided into a page number and a page offset. During segmentation, a logical address is divided into a section number and a section offset.
7. Data storage: The page table stores the page data. The segmentation table stores the segmentation data.
Swapping
Swapping is a medium-term scheduling method; with swapping, memory becomes preemptable.
Swapping is a memory management scheme in which any process can be temporarily swapped from main memory to secondary memory so that the main memory can be made available for other processes. It is used to improve main memory utilization. In secondary memory, the place where the swapped-out process is stored is called swap space.
A running process may become suspended if it makes an I/O request.

(Figure: processes on disk are swapped in to join the processes in memory; one of them is dispatched to run, and a running process may be suspended and swapped out back to disk.)
Swapping has been subdivided into two concepts: swap-in and swap-out.
• Swap-in is a method of transferring a process from the hard disk to main memory (RAM).
• Swap-out is a technique for moving a process from RAM to the hard disk.

Memory protection

The second fundamental task of a memory management system is to protect programs sharing the memory from each other. This protection also covers the operating system itself. Memory protection can be provided at either of two levels:
• Hardware:
– address translation (most common!)
• Software:
– language dependent: strong typing
– language independent: software fault isolation

Translation Lookaside Buffer (TLB) in Paging
The Translation Lookaside Buffer (TLB) is a special cache used to keep track of recently used translations. The TLB contains the page table entries that have been used most recently.
Given a virtual address, the processor examines the TLB:
• If the page table entry is present (TLB hit), the frame number is obtained directly from the TLB.
• If the page table entry is not found in the TLB (TLB miss), the page table is consulted to check whether the page is already in main memory; if it is not, a page fault is issued. The TLB is then updated to include the new page entry.

Steps in a TLB hit:

1. The CPU generates a virtual (logical) address.
2. It is checked in the TLB (present).
3. The corresponding frame number is retrieved, which now tells where the main memory page lies.

Steps in a TLB miss:

1. The CPU generates a virtual (logical) address.
2. It is checked in the TLB (not present).
3. The page number is now matched against the page table residing in main memory (assuming the page table contains all PTEs).
4. The corresponding frame number is retrieved, which now tells where the main memory page lies.
5. The TLB is updated with the new PTE (if there is no space, one of the replacement techniques comes into the picture, e.g. FIFO, LRU or MFU).
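The hit and miss paths above can be put together in a small C sketch (illustrative only, not from the slides); the TLB here is tiny, uses FIFO replacement, and its contents and the page table are made up for illustration.

```c
/* Minimal sketch (illustrative): a tiny fully-searched TLB in front of an
 * in-memory page table. On a miss the page table is consulted and the
 * entry is cached in the TLB (simple FIFO replacement). */
#include <stdio.h>

#define PAGE_SIZE 1024u
#define NUM_PAGES 8u
#define TLB_SIZE  2u

static unsigned page_table[NUM_PAGES] = {3, 7, 1, 6, 0, 2, 5, 4}; /* page -> frame */

struct tlb_entry { unsigned page, frame; int valid; };
static struct tlb_entry tlb[TLB_SIZE];
static unsigned next_victim;                      /* FIFO replacement pointer */

static unsigned translate(unsigned logical) {
    unsigned page = logical / PAGE_SIZE, offset = logical % PAGE_SIZE, frame;

    for (unsigned i = 0; i < TLB_SIZE; i++)       /* step 2: check the TLB */
        if (tlb[i].valid && tlb[i].page == page) {
            printf("page %u: TLB hit\n", page);
            return tlb[i].frame * PAGE_SIZE + offset;
        }

    printf("page %u: TLB miss\n", page);
    frame = page_table[page];                     /* step 3: walk the page table */
    tlb[next_victim] = (struct tlb_entry){page, frame, 1};   /* step 5: update TLB */
    next_victim = (next_victim + 1) % TLB_SIZE;
    return frame * PAGE_SIZE + offset;            /* step 4: frame base + offset */
}

int main(void) {
    printf("physical = %u\n", translate(2100));   /* miss, then cached */
    printf("physical = %u\n", translate(2200));   /* hit on the same page */
    return 0;
}
```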

Effective memory access time (EMAT): The TLB is used to reduce the effective memory access time, as it is a high-speed associative cache.
EMAT = h*(c + m) + (1 - h)*(c + 2m)
where h = hit ratio of the TLB
m = memory access time
c = TLB access time
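A small worked example (illustrative numbers, not from the slides): with h = 0.8, c = 10 ns and m = 100 ns, EMAT = 0.8*(10 + 100) + 0.2*(10 + 200) = 88 + 42 = 130 ns, noticeably less than the 2m = 200 ns needed if every reference required a page-table lookup in memory (no TLB).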

THANK YOU

