18EI6D4
P Sujith 1RV18EI038
Assistant Professor,
EIE DEPARTMENT
Memory management requirements; memory partitioning: fixed and dynamic partitioning, Buddy system; memory allocation strategies (First Fit, Best Fit, Worst Fit, Next Fit); fragmentation; swapping; segmentation; paging; virtual memory; demand paging.
CONTENTS
4.1 Introduction
4.2 Types of Memory Management/Partitioning Techniques
4.3 Buddy System, Memory Allocation Strategies
4.3.1 First Fit
4.3.2 Next Fit
4.3.3 Best Fit
4.3.4 Worst Fit
4.4 Fragmentation
4.5 Swapping
4.6 Segmentation
4.7 Paging
4.8 Virtual Memory
4.9 Demand Paging
UNIT 4
In a Uni-programming system, the main memory is divided into two parts: One part for the
operating system and one part for the program currently being executed.
In a multiprogramming system, the "user" part of memory must be further subdivided to accommodate multiple processes. This subdivision is carried out dynamically by the operating system and is known as memory management.
Effective memory management is vital in a multiprogramming system. If only a few processes
are in memory, then for much of the time all of the processes will be waiting for I/O and the
processor will be idle. Thus memory needs to be allocated to ensure a reasonable supply of ready
processes to consume available processor time. Memory management mainly focuses on
managing Primary memory.
As shown in Fig 4.1, processes are stored in secondary memory and are brought into primary memory whenever they are needed.
Consider the following scenarios:
1. RAM size is 4MB and process size is also 4MB (Fig 4.2 a).
The number of processes that can be accommodated in RAM is 4MB/4MB = 1.
Fig 4.2 a
2. RAM size is 8MB and process size is 4MB (Fig 4.2 b).
Fig 4.2 b
3. RAM size is 16MB and process size is 4MB (Fig 4.2 c). The number of processes that can be accommodated in RAM is 16MB/4MB = 4.
Fig 4.2 c
The degree of multiprogramming affects CPU utilization. If each process spends a fraction k of its time waiting for I/O, the probability that all n resident processes are waiting simultaneously is k^n, so CPU utilization = 1 - k^n. In this scenario n = 4; with k = 0.5, CPU utilization = 1 - (0.5)^4 = 0.9375, i.e. about 94%.
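This utilization model can be sketched as a small Python snippet (the wait fraction k = 0.5 is the assumed value from the example above):

```python
# Multiprogramming CPU utilization: utilization = 1 - k**n,
# where k is the fraction of time a process waits on I/O
# and n is the number of processes resident in memory.

def cpu_utilization(k: float, n: int) -> float:
    """Probability that at least one of n resident processes is not waiting on I/O."""
    return 1 - k ** n

# With k = 0.5 and n = 4 resident processes:
print(round(cpu_utilization(0.5, 4), 4))  # 0.9375
```

Adding more resident processes raises utilization, which is why memory management tries to keep a good supply of ready processes in memory.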
The objectives of memory management include:
1. Space utilization: due to fragmentation, memory can be wasted. The goal is to minimize fragmentation.
2. Run large programs in a small memory space: for example, a 1000 KB program may need to run in 500 KB of memory. This is achieved by using virtual memory.
3. A fence address (FA) is used to detect whether the user is trying to access a memory address that has been protected.
In Fig 4.3, if the address is greater than 500, the user can access the memory; if it is less than 500, the access is trapped because it falls in the protected region.
The different Memory Management techniques are shown below in Fig 4.4:
FIXED PARTITIONING
In fixed partitioning, the number of partitions in RAM is fixed, but the size of each partition may or may not be the same.
Internal fragmentation: In Fig 4.5, the first process consumes only 1MB of its 4MB partition in main memory. Hence, internal fragmentation in the first block is (4-1) = 3MB. The sum of internal fragmentation over all blocks is (4-1)+(8-7)+(8-7)+(16-14) = 3+1+1+2 = 7MB.
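The internal-fragmentation sum above can be checked with a short Python snippet (partition and process sizes taken from Fig 4.5):

```python
# Internal fragmentation = allocated partition size - process size,
# summed over all occupied partitions (sizes in MB, from Fig 4.5).
partitions = [4, 8, 8, 16]   # fixed partition sizes
processes  = [1, 7, 7, 14]   # sizes of the processes placed in them

waste = sum(part - proc for part, proc in zip(partitions, processes))
print(waste)  # 7 (MB)
```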
DRAWBACKS:
● Internal fragmentation.
● Inefficient memory usage.
● Limited process size (e.g. a 16MB partition cannot accommodate a 32MB process).
● Limited degree of multiprogramming, because the number of partitions is fixed.
External fragmentation:
The total unused space across the various partitions cannot be used to load a process, even though enough space is available, because the free space is not contiguous.
For example, in the above figure, the total wasted space is 7 MB, but the next process cannot use it because this space is not contiguous.
HARDWARE PROTECTION:
After allocating a logical memory space to a particular process, protection of that space is enforced using a base register and a limit register. To prevent access outside the allocated space, the CPU hardware compares every requested memory address with these registers; if the address is below the base, or not below base + limit, an operating system trap occurs (Fig 4.7).
Fig 4.7 Hardware address protection with base and limit registers
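The trap check above can be sketched in Python; the register values here are illustrative assumptions, not taken from the figure:

```python
def check_access(addr: int, base: int, limit: int) -> bool:
    """Return True if the access is legal; otherwise the hardware would trap."""
    return base <= addr < base + limit

base, limit = 300040, 120900     # illustrative register values
print(check_access(300040, base, limit))  # True  (first legal address)
print(check_access(420940, base, limit))  # False (== base + limit -> trap)
```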
VARIABLE PARTITIONING:
In variable (dynamic) partitioning, partitions are created at run time, each exactly the size of the arriving process.
ADVANTAGES:
● No internal fragmentation
● No limitation on the degree of multiprogramming
● No limitation on the number of processes
● No limitation on process size
DRAWBACKS:
● After a process completes its execution, a vacancy (hole) is created in RAM, so external fragmentation occurs. Even when enough total space exists, a process cannot be split across the scattered holes. External fragmentation can be overcome by compaction.
● Implementing variable partitioning is more difficult than that of fixed partitioning because
memory is allocated during runtime rather than during system configuration. Complexity
increases when it comes to allocation and deallocation.
DYNAMIC LOADING:
● Loading the program into the main memory on demand is called dynamic loading. It
improves system performance.
● The complete program need not be loaded at once; a certain part of the program ('P1' as given in Fig 4.9) is loaded only when it is called by the program 'P'.
● Example: Calling the subroutines.
DYNAMIC LINKING:
● Establishing the link between all the modules or all the functions of the program in order to
continue the program execution is called Linking.
● Initially only ‘P’ gets loaded, and the CPU links the dependent program ‘Q’ to the executing program ‘P’ when it is needed.
A physical address identifies the physical location of required data in memory. The user never
directly deals with the physical address but can access it by its corresponding logical address. The
user program generates the logical address and thinks that the program is running in this logical
address but the program needs physical memory for its execution, therefore, the logical address
must be mapped to the physical address by MMU before they are used.
MEMORY MANAGEMENT UNIT(MMU):
At run time, the MMU maps the logical address space (LAS) to the physical address space (PAS) with the help of a relocation register: the value in the relocation register is added to every logical address generated by the process to obtain the corresponding physical address.
Fig 4.11
Memory Allocation Strategies
For both fixed and dynamic memory allocation schemes, the operating system must keep a list
of each memory location noting which are free and which are busy. Then as new jobs come
into the system, free partitions must be allocated to them. The common strategies are:
● First Fit
● Next Fit
● Best Fit
● Worst Fit
1. First Fit:
The first-fit approach allocates the first free partition (hole) large enough to accommodate the process. The search finishes as soon as the first suitable free partition is found.
Disadvantage: The unused memory left over after an allocation is wasted if it is too small to satisfy later, larger requests.
Consider Fig 4.13: a process P1 requiring 15K of memory searches from the beginning and fits into the first suitable hole.
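A minimal first-fit sketch in Python (the hole sizes are illustrative, not taken from Fig 4.13):

```python
def first_fit(holes, size):
    """Return the index of the first hole that fits `size`, or None."""
    for i, hole in enumerate(holes):
        if hole >= size:
            return i
    return None

holes = [10, 4, 20, 18, 7, 9, 12, 15]   # free-partition sizes in K (illustrative)
print(first_fit(holes, 15))  # 2 -> the 20K hole is the first one large enough
```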
2. Next Fit:
The next fit is a modified version of first fit. It begins like first fit, searching for a free partition, but when called the next time it starts searching from where it left off rather than from the beginning. In Fig 4.13, P1 is 15K and P2 is 35K.
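A minimal next-fit sketch; the class remembers where the previous search stopped (hole sizes are illustrative):

```python
class NextFit:
    """Next fit: resume scanning from where the previous search stopped."""
    def __init__(self, holes):
        self.holes = list(holes)
        self.pos = 0  # index where the next search begins

    def allocate(self, size):
        n = len(self.holes)
        for step in range(n):
            i = (self.pos + step) % n   # wrap around the hole list
            if self.holes[i] >= size:
                self.holes[i] -= size   # shrink the chosen hole
                self.pos = i            # remember where we stopped
                return i
        return None

alloc = NextFit([10, 4, 20, 18, 7, 9, 12, 15])
print(alloc.allocate(15))  # 2 -> the 20K hole
print(alloc.allocate(7))   # 3 -> resumes at index 2; first fit would pick index 0
```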
3. Best Fit: The best fit allocates the smallest free partition that meets the requirement of the requesting process. The algorithm searches the entire list of free partitions and chooses the smallest hole that is adequate, i.e. the hole closest to the actual size needed.
Advantage: Memory utilization is better than first fit, since the smallest adequate partition is chosen.
Disadvantage:
● External Fragmentation
● Slow allocation
● Slow deallocation
● Tends to produce many useless tiny fragments
(Here the whole list is searched, and P1 occupies the hole that leaves the least space. Therefore, P1 occupies the 20K hole and P2 occupies the 100K hole.)
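A minimal best-fit sketch (hole sizes chosen to mirror the 20K example above):

```python
def best_fit(holes, size):
    """Return the index of the smallest hole that still fits `size`, or None."""
    best = None
    for i, hole in enumerate(holes):
        if hole >= size and (best is None or hole < holes[best]):
            best = i
    return best

holes = [100, 20, 60, 150]   # illustrative free partitions, sizes in K
print(best_fit(holes, 15))   # 1 -> the 20K hole leaves the least waste
```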
4. Worst Fit:
The algorithm searches the free memory and selects the largest free block that can hold the requested information (i.e. larger than the information needing to be stored) and allocates it there. This is directly opposed to the best-fit algorithm, although it searches memory in much the same way.
Disadvantages:
● External fragmentation
● Tends to break large free blocks such that large partitions cannot be allocated.
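A minimal worst-fit sketch (hole sizes are illustrative):

```python
def worst_fit(holes, size):
    """Return the index of the largest hole that fits `size`, or None."""
    worst = None
    for i, hole in enumerate(holes):
        if hole >= size and (worst is None or hole > holes[worst]):
            worst = i
    return worst

holes = [100, 20, 60, 150]   # illustrative free partitions, sizes in K
print(worst_fit(holes, 15))  # 3 -> the 150K hole, the largest available
```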
Fig 4.15 Worst fit
Question 1
Given five memory partitions of 100K, 500K, 200K, 300K, and 600K (in order), how would each of the first fit, best fit, and worst fit algorithms place processes of 212K, 417K, 112K, and 426K (in order)?
Solution:
First Fit:
For each process we scan the holes from the beginning and take the first one large enough: P1 (212K) goes into the 500K partition (leaving 288K), P2 (417K) into the 600K partition (leaving 183K), and P3 (112K) into the 288K remainder of the 500K partition. P4 (426K) is not allocated, as no remaining hole is large enough.
Fig Q1.1
Best Fit:
It allocates the smallest free partition that meets the requirement of the requesting process: P1 (212K) goes into the 300K partition, P2 (417K) into the 500K partition, P3 (112K) into the 200K partition, and P4 (426K) into the 600K partition, so all four processes are placed.
Fig Q1.2
Worst Fit:
It searches for the largest possible space available: P1 (212K) goes into the 600K partition (leaving 388K), P2 (417K) into the 500K partition, and P3 (112K) into the 388K remainder of the 600K partition. P4 (426K) is not allocated, as there is not enough contiguous space left.
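The three placements above can be verified with a small simulation (a sketch that treats the given partitions as holes which shrink as processes are placed):

```python
# Simulate Question 1 under all three strategies. Each allocation
# shrinks the chosen hole in place; a request that fits nowhere waits.

def simulate(strategy, holes, requests):
    holes = holes[:]                  # work on a copy
    placements = []
    for size in requests:
        candidates = [i for i, h in enumerate(holes) if h >= size]
        if not candidates:
            placements.append(None)   # request must wait
            continue
        if strategy == "first":
            i = candidates[0]
        elif strategy == "best":
            i = min(candidates, key=lambda c: holes[c])
        else:                         # "worst"
            i = max(candidates, key=lambda c: holes[c])
        holes[i] -= size
        placements.append(i)
    return placements

holes = [100, 500, 200, 300, 600]
requests = [212, 417, 112, 426]
for s in ("first", "best", "worst"):
    print(s, simulate(s, holes, requests))
# first [1, 4, 1, None]  -> 426K must wait
# best  [3, 1, 2, 4]     -> all four placed
# worst [4, 1, 4, None]  -> 426K must wait
```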
Fig Q1.3
Question 2
Requests from the processes are 300K, 25K, 125K, and 50K respectively. Show how these could be satisfied using first fit, best fit, and worst fit (Fig Q2).
Fig Q2
Solution:
First fit:
For P1 we will search for a hole large enough which can accommodate the process. Similarly
for other processes.
Fig Q2.1
Best Fit:
It allocates the smallest free partition which meets the requirement of the requesting process.
Therefore assigning P1 to the 4th hole, P2 to the remaining portion of 4th hole and so on.
Fig Q2.2
Worst Fit:
It will search for the largest possible space available to store the information. Therefore P1 will
occupy the 4th hole and so on.
Fig Q2.3
Fragmentation
Due to continuous loading and removal of processes from memory, the free memory space gets broken into little pieces. After some time, processes cannot be allocated to these memory blocks because the blocks are too small, so the blocks remain unused. This problem is known as fragmentation. Fragmentation is of two types:
● Internal Fragmentation
● External Fragmentation
Internal Fragmentation:
● Internal fragmentation happens when the memory is split into blocks of fixed size.
● Whenever a process makes a request for memory, the fixed-sized block is allotted to
the process.
● In case the memory allocated to the process is somewhat larger than the memory
requested, the difference between the allocated and the requested memory is called
Internal Fragmentation.
● Since the allocated block size is larger than the required memory space some portion of
the memory is left unused as it cannot be used by another process.
In the above figure, three processes of size 2 MB, 4 MB, and 6 MB are allocated to fixed memory blocks of size 3 MB, 10 MB, and 8 MB respectively. The remaining space in each block cannot be allocated to any other process; these leftover pieces are called internal fragments.
The internal fragmentation can be reduced by effectively assigning the smallest partition large
enough for the process.
External Fragmentation:
External fragmentation occurs when there is a sufficient quantity of area within the memory to
satisfy the memory request of a process but the process's memory request cannot be fulfilled
because the available memory is present in a non-contiguous manner.
In the above diagram (Fig 4.17), there is enough total free space (55 KB) to run process-07 (required size 50 KB), but the free memory is not contiguous, resulting in external fragmentation.
Here, we use compaction, paging or segmentation to use the free space to run a process.
Swapping:
Swapping is a technique in which a process is temporarily moved out of main memory to a backing store (disk) and later brought back into memory for continued execution.
Fig 4.18 Swapping of two processes
● Fig 4.18 shows the swapping of two processes where the disk is used as a Backing store.
● In the above diagram, suppose there is a multiprogramming environment with a round-robin scheduling algorithm; whenever a time quantum expires, the memory manager swaps out the process that has just finished its quantum and swaps another process into the memory that has been freed.
● A variant of the swapping technique is used with priority-based scheduling. If a higher-priority process arrives and wants service, the memory manager swaps out a lower-priority process, loads the higher-priority process, and executes it.
● When the higher-priority process finishes, the lower-priority process is swapped back in and continues its execution. This variant is sometimes known as roll out, roll in.
Advantages Of Swapping:
● It helps the CPU to manage multiple processes within a single main memory.
● It helps to create and use virtual memory.
● Swapping allows the CPU to perform multiple tasks simultaneously. Therefore,
processes do not have to wait very long before they are executed.
● It improves the main memory utilization.
Disadvantages of Swapping:
● If the computer system loses power, the user may lose all information related to the
program in case of substantial swapping activity.
● If the swapping algorithm is not good, the number of page faults can increase and overall processing performance can decrease.
● There may be inefficiency in the case where a resource or a variable is commonly used
by those processes that are participating in the swapping process.
SEGMENTATION
Segmentation is a memory management technique in which a process is divided into variable-sized parts called segments, each corresponding to a logical unit of the program (such as code, stack, or data).
The details about each segment are stored in a table called a segment table. Segment table is
stored in one (or many) of the segments.
Base Address: It contains the starting physical address where the segment resides in memory.
Limit: It specifies the length of the segment.
Advantages of Segmentation:
● No internal fragmentation
● Less overhead
● The segment table is smaller than the page table used in paging.
Disadvantages of Segmentation:
● External fragmentation, since segments are of variable size.
● Allocating a contiguous block for a variable-sized segment is costlier than allocating a fixed-size frame.
PAGING
Paging is a memory management scheme that eliminates the need for contiguous allocation of physical memory; it permits the physical address space of a process to be non-contiguous.
Logical Address or Virtual Address (represented in bits): an address generated by the CPU.
Physical Address (represented in bits): an address actually available on the memory unit.
The mapping from virtual to physical address is done by the Memory management unit
(MMU) which is a hardware device and this mapping is known as the paging technique.
The Physical Address Space is conceptually divided into a number of fixed-size blocks, called
frames.
The Logical Address Space is also divided into fixed-size blocks, called pages.
(Figure: the logical address space of a process, pages P0 to P4, with pages P1, P2, and P3 placed in frames 1, 3, and 4 of physical memory.)
Let us consider an example. Let the page size be S and let l be a logical address. The logical address is interpreted as a pair (p, d), where p is the page number and d is the offset within the page:
p = l div S
d = l mod S
For l = 14 and S = 10:
p = 14 div 10 = 1
d = 14 mod 10 = 4, so l = (1, 4).
If page p resides in frame f, and frames are numbered starting from 1, the corresponding physical address is (f - 1) × S + d.
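The calculation above can be sketched in Python (the page size of 10 comes from the example; the frame assignment in the last line is an illustrative assumption):

```python
PAGE_SIZE = 10  # page size S from the example

def split(logical):
    """Split a logical address into (page number, offset)."""
    return divmod(logical, PAGE_SIZE)

def physical(frame, offset):
    """Physical address when frames are numbered starting from 1:
    base of frame f is (f - 1) * PAGE_SIZE."""
    return (frame - 1) * PAGE_SIZE + offset

print(split(14))       # (1, 4)
print(physical(5, 4))  # 44 -> if page 1 happened to live in frame 5
```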
Fig 4.21 Hardware Architecture of Paging
As shown in the figure above, the address generated by the CPU is called a logical address, and it is divided into two parts: a page number (p) and a page offset (d). The page number is used as an index into a page table, which contains the base address of each page in physical memory.
Page number(p): Number of bits required to represent the pages in Logical Address Space or
Page number
Page offset(d): Number of bits required to represent a particular word in a page or page size of
Logical Address Space or word number of a page or page offset.
Frame number(f): Number of bits required to represent the frame of Physical Address Space
or Frame number.
Frame offset(d): Number of bits required to represent a particular word in a frame or frame
size of Physical Address Space or word number of a frame or frame offset.
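A sketch of the translation path described above, assuming a 1K page size and an illustrative page table (both are assumptions for the example, not values from the text):

```python
PAGE_SIZE = 1024            # assumed 1K pages for this sketch

# page_table[p] = frame number holding page p (frames numbered from 0 here)
page_table = [5, 2, 7, 0]

def translate(logical):
    p, d = divmod(logical, PAGE_SIZE)  # page number and offset
    f = page_table[p]                  # index into the page table
    return f * PAGE_SIZE + d           # frame base + offset

print(translate(1 * PAGE_SIZE + 100))  # page 1 -> frame 2 -> 2148
```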
The hardware implementation of the page table can be done by using dedicated registers. But
the usage of register for the page table is satisfactory only if the page table is small. If the page
table contains a large number of entries, then we can use a TLB (Translation Look-aside Buffer), a special, small, fast-lookup hardware cache.
Paging with TLB:
As discussed above, a TLB is used when the page table is large: it caches recently used page-table entries so that most translations avoid an extra memory access. Paging with a TLB is shown in the figure below.
When this associative memory is consulted, the requested page number is compared with all TLB entries simultaneously. If the entry is found, the corresponding frame number is returned; this is called a TLB hit. If not, it is called a TLB miss, and the page table must be consulted.
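The hit/miss behaviour can be sketched with a dictionary standing in for the TLB (page size and page-table contents are illustrative assumptions):

```python
PAGE_SIZE = 1024
page_table = {0: 5, 1: 2, 2: 7}   # full page table (slow path)
tlb = {}                           # small cache of recent translations

def translate(logical):
    p, d = divmod(logical, PAGE_SIZE)
    if p in tlb:                   # TLB hit: frame found without a table walk
        f = tlb[p]
    else:                          # TLB miss: consult the page table, then cache
        f = page_table[p]
        tlb[p] = f
    return f * PAGE_SIZE + d

translate(100)         # first access to page 0: TLB miss, entry cached
print(translate(200))  # second access is a TLB hit; 5*1024 + 200 = 5320
```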
Memory Protection in Paging:
It is done with the help of protection bits associated with each frame. The bits are maintained in the page table, so while the frame number is being looked up, the protection bits are checked as well. For example, a bit can mark a page as read-only (modification is not allowed). Valid and invalid bits can also be associated with each page.
Example (valid-invalid bits in the page table):
Page  Bit
2     v
3     v
4     i
5     i
6     v
valid (v) → the page is in the process's logical address space; invalid (i) → it is not.
Advantages of Paging:
● No external fragmentation: any free frame can hold any page.
● Swapping is easy, because all pages and frames are the same size.
Disadvantages of Paging:
● Internal fragmentation in the last page of a process.
● The page table consumes additional memory, and address translation adds an extra memory access (mitigated by the TLB).
Virtual Memory
● Virtual Memory is a storage allocation scheme in which secondary memory can be
addressed as though it were part of main memory.
● Virtual memory is a separation of user logical memory from physical memory.
● In this method, we keep only a part of the process in the memory and the other part on
the disk(secondary storage).
● Virtual memory serves two purposes. First, it allows us to extend the use of physical
memory by using a disk. Second, it allows us to have memory protection, because each
virtual address is translated to a physical address.
Following are the situations in which the entire program is not required to be loaded fully in the main memory:
● User-written error handling routines are used only when an error occurs in the data or computation.
● Certain options and features of a program may be used rarely.
● Many tables are assigned a fixed amount of address space even though only a small amount of the table is actually used.
The ability to execute a program that is only partially in memory would confer many benefits:
● Fewer I/O operations would be needed to load or swap each user program into memory.
● Each user program could take less physical memory, so more programs could be run at the same time, with a corresponding increase in CPU utilization and throughput.
Virtual memory is a technique that is implemented using both hardware and software. It maps
memory addresses used by a program, called virtual addresses, into physical addresses in
computer memory.
● All memory references within a process are logical addresses that are dynamically translated into physical addresses at run time, as shown in Fig 4.23. This means that a process can be swapped in and out of main memory, occupying different places in main memory at different times during execution.
● A process may be broken into several pieces, and these pieces need not be contiguously located in main memory during execution. The combination of dynamic run-time address translation and the use of a page or segment table permits this.
● If these characteristics are present then, not all the pages or segments need to be present
in the main memory during execution. This means that the required pages need to be
loaded into memory whenever required. Virtual memory is implemented using Demand
Paging.
Demand Paging
In demand paging, a page is brought into main memory only when it is referenced. If a process references a page that is not in memory, a page fault occurs, and the operating system loads the required page from secondary storage.
Advantages of Demand Paging:
● More processes may be maintained in the main memory: We are going to load only
some of the pages of any particular process which means there is room for more
processes. This leads to more efficient utilization of the processor because it is more
likely that at least one of the more numerous processes will be in the ready state at any
particular time.
● A process may be larger than all of the main memory: One of the most fundamental
restrictions in programming is lifted. A process larger than the main memory can be
executed because of demand paging. The OS itself loads pages of a process in the main
memory as required.
● It allows greater multiprogramming levels by using less of the available (primary)
memory for each process.
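Demand loading can be sketched as follows; this minimal model assumes unlimited frames (no page replacement) and counts a fault on the first reference to each page:

```python
def run(reference_string):
    """Return the number of page faults for a page reference string,
    assuming pure demand loading with unlimited frames."""
    in_memory = set()
    faults = 0
    for page in reference_string:
        if page not in in_memory:   # page fault: bring the page in from disk
            faults += 1
            in_memory.add(page)
    return faults

print(run([0, 1, 0, 2, 1, 3, 0]))  # 4 -> only the first reference to each page faults
```

With limited frames a replacement policy (FIFO, LRU, etc.) would also be needed; that is beyond this minimal sketch.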
Question bank: Unit 4