
RV College of Engineering®, Bengaluru-59

(Autonomous Institution Affiliated to VTU)

Department of Electronics and Instrumentation Engineering

18EI6D4

Real Time Operating Systems

Name of the student USN

Ishita Jha 1RV18EI022

P Sujith 1RV18EI038

Phaalguni Rao Mudradi 1RV18EI039

S Kandha Kumaran 1RV18EI048

Sidhant Shekhar 1RV18EI052

Somya Garg 1RV18EI055

Under the guidance of:

Dr. Kendaganna Swamy S,

Assistant Professor,

Electronics and Instrumentation Department

Memory management requirements; Memory partitioning: fixed and dynamic partitioning; Buddy system; Memory allocation strategies (First Fit, Best Fit, Worst Fit, Next Fit); Fragmentation; Swapping; Segmentation; Paging; Virtual memory; Demand paging.

CONTENTS

4.1 Introduction
4.2 Types of Memory Management/Partitioning Techniques
4.3 Memory Allocation Strategies
4.3.1 First Fit
4.3.2 Next Fit
4.3.3 Best Fit
4.3.4 Worst Fit
4.4 Fragmentation
4.5 Swapping
4.6 Segmentation
4.7 Paging
4.8 Virtual Memory
4.9 Demand Paging

UNIT 4

Memory Management Requirements

In a Uni-programming system, the main memory is divided into two parts: One part for the
operating system and one part for the program currently being executed.
In a multiprogramming system, the "user" part of memory must be further subdivided to accommodate multiple processes. This subdivision is carried out dynamically by the operating system and is known as memory management.
Effective memory management is vital in a multiprogramming system. If only a few processes
are in memory, then for much of the time all of the processes will be waiting for I/O and the
processor will be idle. Thus memory needs to be allocated to ensure a reasonable supply of ready
processes to consume available processor time. Memory management mainly focuses on
managing Primary memory.

Fig 4.1 Memory distribution

From Fig 4.1, Processes are stored in the Secondary memory and whenever needed they are put into
the Primary memory.

Consider the following scenarios:

1. RAM size is 4MB and process size is also 4MB (Fig 4.2 a).
The number of processes that can be accommodated in RAM is 4MB/4MB = 1.

Fig 4.2 a

We know that a process alternates between CPU and I/O activity.

Let K be the fraction of time a process spends waiting on I/O. (Assume K = 0.7, i.e. 70%.)
CPU utilization = 1 - (K^n), where n is the number of processes in RAM. For this scenario,
CPU utilization = 1 - 0.7 = 0.3, i.e. 30%.

2. RAM size is 8MB and process size is 4MB (Fig 4.2 b).

Fig 4.2 b

The number of processes that can be accommodated in RAM is 8MB/4MB = 2.


CPU utilization = 1 - (K^n). Here n is 2, so CPU utilization = 1 - 0.7^2 = 1 - 0.49 = 0.51, i.e. 51%.

3. RAM size is 16MB and process size is 4MB.

Fig 4.2 c

The number of processes that can be accommodated in RAM is 16MB/4MB = 4.

CPU utilization = 1 - (K^n). Here n is 4, so CPU utilization = 1 - 0.7^4 ≈ 1 - 0.24 = 0.76, i.e. 76%.

So we can conclude that


● As the degree of multiprogramming increases, CPU utilization also increases.
● In order to bring many processes from secondary memory into primary memory, the OS takes the help of memory management (MM).
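The utilization figures in the three scenarios follow directly from the formula; a quick sketch (the function name is illustrative):

```python
def cpu_utilization(k, n):
    """CPU utilization = 1 - k^n, where k is the fraction of time each
    process spends waiting on I/O and n is the number of processes in RAM."""
    return 1 - k ** n

# Reproduce the three scenarios with k = 0.7:
for n in (1, 2, 4):
    print(n, round(cpu_utilization(0.7, n), 2))   # utilization rises with n
```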

Functions of Memory Management:

● It keeps track of every memory location.
● It tracks whether memory is allocated or not.
● It tracks how much memory is allocated.
● It manages which process should get memory and when.
● It updates the state of a memory location when it is allocated or deallocated.

Goals of Memory Management:

1. Space utilization:
Fragmentation can cause memory to be wasted. The goal is to minimize fragmentation.

2. Run large programs in a small memory space: e.g. a program of size 1000 KB may need to run in 500 KB of memory. This is achieved by using virtual memory.

3. Protection: a fence address (FA) is used to detect whether the user is trying to access a memory address that has been protected.

Fig 4.3 Fence address

In Fig 4.3, if the address is greater than 500 the user can access the memory, but if it is less than 500 the access is trapped because it targets protected memory.

The different Memory Management techniques are shown below in Fig 4.4:

● Contiguous: Processes are stored in a continuous/sequential manner.


● Static Partitioning: It is also called fixed partitioning. In this technique, the size of the
partition is fixed.
● Dynamic partitioning: it is also called Variable partitioning.
● Non-Contiguous: This technique is used in modern systems. Here the parts of a process are stored wherever free space is available, not necessarily adjacent to one another.

Paging will be discussed further in the chapter.

Fig 4.4 Different Memory Management techniques

FIXED PARTITIONING
In fixed partitioning, the number of partitions in RAM is fixed, but the size of each partition may or may not be the same.

Fig 4.5 Fixed partitioning

Internal fragmentation: In Fig 4.5, the first process consumes only 1MB of the 4MB partition in main memory. Hence, internal fragmentation in the first block is (4-1) = 3MB. The sum of internal fragmentation across all blocks is (4-1)+(8-7)+(8-7)+(16-14) = 3+1+1+2 = 7MB.
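The per-block waste above can be tallied mechanically; a small sketch using the partition and process sizes from Fig 4.5:

```python
partitions = [4, 8, 8, 16]   # fixed partition sizes in MB (Fig 4.5)
processes  = [1, 7, 7, 14]   # sizes of the processes placed in them

# Internal fragmentation = partition size minus process size, per block
waste = [part - proc for part, proc in zip(partitions, processes)]
total_internal_fragmentation = sum(waste)   # 3 + 1 + 1 + 2 = 7 MB
print(waste, total_internal_fragmentation)
```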

ADVANTAGES: Easy to implement: The design is very simple.

DRAWBACKS:
● Internal fragmentation.
● Inefficient memory usage.
● Limited process size (e.g. a 16 MB partition cannot accommodate a 32 MB process).
● Limitation on the degree of multiprogramming, because the number of partitions is fixed.

External fragmentation:
The total unused space across the partitions cannot be used to load a process, even though enough space is available in aggregate, because allocation must be contiguous.
For example, in the above figure the total wasted space is 7 MB, but a subsequent process cannot use it because the space is scattered rather than contiguous.

BASE AND LIMIT REGISTER:

Fig 4.6 Base and Limit register

Base: It holds the smallest legal physical address.

Limit: It specifies the size of the range.

In the Fig 4.6,


● The base value is 300040
● The limit is 120900
● End = base + limit = 300040+120900= 420940

HARDWARE PROTECTION:
After allocating logical memory space for a particular process, protection to that space is enforced
by using a base register and limit register. To defend against accessing memory outside of the
allocated memory space for a process, CPU hardware compares each requested memory address
to these registers. If one of the two following conditions holds, an operating system trap occurs.
(Fig 4.7)

● The target address is less than the base register.

● The target address is greater than or equal to base + limit.

Fig 4.7 Hardware address protection with base and limit registers
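The two trap conditions collapse into a single range check; a minimal sketch using the base and limit values of Fig 4.6 (the function name is illustrative):

```python
def legal_access(address, base, limit):
    """Hardware check: an address is legal iff base <= address < base + limit.
    Anything outside that range traps to the operating system."""
    return base <= address < base + limit

print(legal_access(300040, base=300040, limit=120900))  # first legal address
print(legal_access(420940, base=300040, limit=120900))  # base + limit is already out of range
```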

VARIABLE PARTITIONING:

● It is also known as Dynamic partitioning


● In contrast with fixed partitioning, partitions are not created before execution or during system configuration; they are created at run time, as processes arrive.

Fig 4.8 Dynamic partitioning

ADVANTAGES:
● No internal fragmentation
● No limitation on the degree of multiprogramming
● No limitation on the number of processes
● No limitation on process size

DRAWBACKS:
● After a process completes its execution, a vacancy (hole) is created in RAM. As a result, external fragmentation occurs: regardless of how much total free space remains, a process cannot be divided among those scattered spaces. External fragmentation can be overcome by compaction.
● Implementing variable partitioning is more difficult than that of fixed partitioning because
memory is allocated during runtime rather than during system configuration. Complexity
increases when it comes to allocation and deallocation.

DYNAMIC LOADING:
● Loading the program into the main memory on demand is called dynamic loading. It
improves system performance.
● The complete program need not be loaded at once; a certain part of the program (‘P1’ as given in Fig 4.9) is loaded only when called by the program ‘P’.
● Example: Calling the subroutines.

Fig 4.9 Dynamic loading

DYNAMIC LINKING:

● Establishing the link between all the modules or all the functions of the program in order to
continue the program execution is called Linking.
● Initially only ‘P’ gets loaded, and the CPU links the dependent program ‘Q’ to the executing program ‘P’ when it is needed.

Fig 4.10 Dynamic linking

LOGICAL ADDRESS SPACE (LAS):

● It is generated by the CPU


● It is also called a virtual address.
● With compile-time and load-time address binding, the logical address space and the physical address space (PAS) are the same.
● With execution-time binding, the logical address space and the physical address space are different.

PHYSICAL ADDRESS SPACE:

A physical address identifies the physical location of required data in memory. The user never
directly deals with the physical address but can access it by its corresponding logical address. The
user program generates the logical address and thinks that the program is running in this logical
address but the program needs physical memory for its execution, therefore, the logical address
must be mapped to the physical address by MMU before they are used.

MEMORY MANAGEMENT UNIT(MMU):

At execution time, the MMU converts logical addresses into physical addresses. This run-time mapping from the logical address space to the physical address space is achieved with a relocation register.

Fig 4.11

● The CPU generates a logical address, e.g. 346.
● The MMU holds a relocation (base) register, e.g. 1400.
● The value in the relocation register is added to every address generated by a user process at the time the address is sent to memory. The user program never sees the real physical addresses.
● The program can create a pointer to location 346, store it in memory, manipulate it, and compare it with other addresses, all as the number 346.
● The user program generates only logical addresses. However, these logical addresses must be mapped to physical addresses before they are used (here, 346 maps to 1400 + 346 = 1746).
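The run-time mapping is a single addition; a sketch with the example values above (346 and 1400):

```python
def to_physical(logical, relocation):
    """MMU run-time mapping: the relocation (base) register is added to
    every logical address the CPU generates."""
    return relocation + logical

print(to_physical(346, 1400))  # 1746
```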

Memory Allocation Strategies
For both fixed and dynamic memory allocation schemes, the operating system must keep a list
of each memory location noting which are free and which are busy. Then as new jobs come
into the system, the free partitions must be allocated.

There are four algorithms to manage the holes in dynamic partitioning.

● First Fit
● Next Fit
● Best Fit
● Worst Fit

1. First Fit:

The first-fit approach allocates the first free partition or hole large enough to accommodate the process. The search finishes as soon as the first suitable free partition is found.

Advantage: Fastest algorithm because it searches as little as possible.

Disadvantage: The unused memory left over after an allocation is wasted if it is too small to be useful; requests for larger memory then cannot be satisfied from it.

Consider Fig 4.12: a process P1 requires 15K of space in memory. It searches for the first suitable hole in which it can fit.

Therefore, in the below figure it occupies the first slot.

Fig 4.12 First fit

2. Next Fit:
The next fit is a modified version of the first fit. It begins like first fit to find a free partition, but when called the next time it starts searching from where it left off, not from the beginning. In Fig 4.13, P1 is 15K and P2 is 35K.

Fig 4.13 Next fit


This algorithm uses a pointer that moves along the memory chain to search for the next fit. (Here P1 occupies the first slot, and for P2 the search starts not from the beginning but from the hole after the one occupied by the first process.)

3. Best Fit: Best fit allocates the smallest free partition that meets the requirement of the requesting process. The algorithm searches the entire list of free partitions and picks the smallest hole that is adequate, i.e. the one closest to the actual size the process needs.

Advantage: Memory utilization is much better than with first fit, because the smallest adequate partition is allocated and larger holes are preserved for later requests.

Disadvantage:

● External Fragmentation
● Slow allocation
● Slow deallocation
● Tends to produce many useless tiny fragments

(In this the whole list is searched and P1 will occupy the hole which will leave a small amount
of space. Therefore, P1 will occupy 20k hole and P2 will occupy 100K hole.)

Fig 4.14 Best fit

4. Worst Fit:

The algorithm searches the free spaces in memory and selects the largest hole that can hold the process (i.e. bigger than the space needed), placing the process there.

It searches memory in much the same way as best fit, but makes the directly opposite choice.

Advantage: Works best if allocations are of medium sizes.

Disadvantages:

● External fragmentation
● Tends to break large free blocks such that large partitions cannot be allocated.

Fig 4.15 Worst fit
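The four strategies differ only in which hole they pick; a minimal simulator (a sketch, not a production allocator: holes simply shrink in place as blocks are carved out, and next fit keeps a roving pointer) that reproduces the placements worked out in Question 1 below:

```python
def allocate(holes, requests, strategy):
    """Place each request into one of the free holes, shrinking the chosen
    hole in place. Returns the hole index chosen for each request
    (None = request must wait) and the final hole sizes."""
    holes = list(holes)
    placements, last = [], 0
    for size in requests:
        candidates = [i for i, h in enumerate(holes) if h >= size]
        if not candidates:
            placements.append(None)                        # no hole is large enough
            continue
        if strategy == "first":
            i = candidates[0]                              # first adequate hole
        elif strategy == "next":
            after = [i for i in candidates if i >= last]
            i = after[0] if after else candidates[0]       # resume from last position
        elif strategy == "best":
            i = min(candidates, key=holes.__getitem__)     # smallest adequate hole
        else:                                              # "worst"
            i = max(candidates, key=holes.__getitem__)     # largest hole
        placements.append(i)
        holes[i] -= size                                   # carve the block out
        last = i
    return placements, holes

# Question 1: partitions 100K, 500K, 200K, 300K, 600K; processes 212K, 417K, 112K, 426K
for s in ("first", "best", "worst"):
    print(s, allocate([100, 500, 200, 300, 600], [212, 417, 112, 426], s)[0])
```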

Question 1
Given five memory partitions of 100K, 500K, 200K, 300K, and 600K (in order), how would each of the first-fit, best-fit, and worst-fit algorithms place processes of 212K, 417K, 112K, and 426K (in order)?

Solution:
First Fit:
For P1 we will search for a hole large enough which can accommodate the process. Similarly
for other processes.

(In this P4 will not be allocated as there is not enough memory space left.)

Fig Q1.1

Best Fit:
It allocates the smallest free partition which meets the requirement of the requesting process.
Therefore P1 is assigned to the 4th hole, P2 to the 2nd hole, and so on.

Fig Q1.2

Worst Fit:

It will search for the largest possible space available to store the information. Therefore P1 will
occupy the last hole and so on. P4 will not be allocated as there is not enough space left.

Fig Q1.3

Question 2
Requests from the processes are 300K, 25K, 125K, and 50K respectively. These can be satisfied with first fit, best fit, and worst fit (Fig Q2).

Fig Q2

Solution:

First fit:
For P1 we will search for a hole large enough which can accommodate the process. Similarly
for other processes.

Fig Q2.1

Best Fit:

It allocates the smallest free partition which meets the requirement of the requesting process.
Therefore assigning P1 to the 4th hole, P2 to the remaining portion of 4th hole and so on.

Fig Q2.2

Worst Fit:
It will search for the largest possible space available to store the information. Therefore P1 will
occupy the 4th hole and so on.

Fig Q2.3

Fragmentation

Due to the continuous loading and removal of processes, the free memory space is broken into little pieces. After some time, processes cannot be allocated to these memory blocks because the blocks are too small, and so the blocks remain unused. This problem is known as fragmentation.

There are two types of Fragmentation :

● Internal Fragmentation
● External Fragmentation

Internal Fragmentation:

● Internal fragmentation happens when the memory is split into blocks of fixed size.
● Whenever a process makes a request for memory, the fixed-sized block is allotted to
the process.
● In case the memory allocated to the process is somewhat larger than the memory
requested, the difference between the allocated and the requested memory is called
Internal Fragmentation.
● Since the allocated block size is larger than the required memory space some portion of
the memory is left unused as it cannot be used by another process.

Fig 4.16 Internal fragmentation

In the above figure, we can see that three processes of size 2 MB, 4 MB, and 6 MB are allocated to fixed memory blocks of 3 MB, 10 MB, and 8 MB respectively. The remaining space in each block cannot be allocated further; these leftovers are called internal fragments.

The internal fragmentation can be reduced by effectively assigning the smallest partition large
enough for the process.

External Fragmentation:

External fragmentation occurs when there is sufficient total free space in memory to satisfy a process's request, but the request cannot be fulfilled because the available memory is not contiguous.

Fig 4.17 External fragmentation

In the above diagram (Fig 4.17), we can see that there is enough space (55 KB) to run process-07 (required size 50 KB), but the free memory is not contiguous, resulting in external fragmentation.
Here, compaction, paging, or segmentation can be used to make the free space usable by a process.

Swapping:

● Swapping is a memory management scheme in which any process can be temporarily


swapped from main memory to secondary memory so that the main memory can be
made available for other processes.
● It is used to improve main memory utilization.
● In secondary memory, the place where the swapped-out process is stored is called swap
space.
● The procedure by which any process gets removed from the hard disk and placed in the
main memory or RAM is commonly known as Swap In.
● On the other hand, Swap Out is the method of removing a process from the main
memory or RAM and then adding it to the Hard Disk.

Fig 4.18 Swapping of two processes

● Fig 4.18 shows the swapping of two processes where the disk is used as a Backing store.
● In the above diagram, suppose there is a multiprogramming environment with a round-
robin scheduling algorithm; whenever the time quantum expires then the memory
manager starts to swap out those processes that are just finished and swap another
process into the memory that has been freed.
● A variant of this swapping technique is used with priority-based scheduling algorithms.
● If any higher-priority process arrives and wants service, then the memory manager
swaps out lower priority processes and then loads the higher priority processes and then
executes them.
● When the higher-priority process finishes, the lower-priority process is swapped back in and continues its execution. This variant is sometimes known as roll out, roll in.

Advantages Of Swapping:

● It helps the CPU to manage multiple processes within a single main memory.
● It helps to create and use virtual memory.
● Swapping allows the CPU to perform multiple tasks simultaneously. Therefore,
processes do not have to wait very long before they are executed.
● It improves the main memory utilization.

Disadvantages of Swapping:

● If the computer system loses power, the user may lose all information related to the
program in case of substantial swapping activity.
● If the swapping algorithm is not good, swapping can increase the number of page faults and decrease overall processing performance.
● There may be inefficiency in the case where a resource or a variable is commonly used
by those processes that are participating in the swapping process.

SEGMENTATION

In operating systems, segmentation is a memory management technique in which the memory is divided into variable-sized parts. Each part is known as a segment, which can be allocated to a process.

The details about each segment are stored in a table called a segment table. The segment table itself is stored in one (or more) of the segments.

The segment table contains mainly two pieces of information about each segment:

1. Base: It is the base address of the segment

2. Limit: It is the length of the segment.

Segment Table: It maps two-dimensional logical addresses to one-dimensional physical addresses. Each of its entries has:

Base Address: It contains the starting physical address where the segments reside in memory.

Limit: It specifies the length of the segment.

Fig 4.19 Translation of a two-dimensional logical address to a one-dimensional physical address
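The translation in Fig 4.19 can be sketched in a few lines: look up the segment's (base, limit) pair, trap if the offset exceeds the limit, otherwise add the base. The segment table values below are hypothetical, chosen only for illustration:

```python
def translate(segment, offset, segment_table):
    """Map a two-dimensional (segment, offset) logical address to a
    one-dimensional physical address; offsets past the limit trap."""
    base, limit = segment_table[segment]
    if offset >= limit:
        raise MemoryError("offset beyond segment limit: trap")
    return base + offset

# Hypothetical segment table: segment number -> (base, limit)
segment_table = {0: (1400, 1000), 1: (6300, 400)}
print(translate(1, 53, segment_table))  # 6300 + 53 = 6353
```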

Advantages of Segmentation:

● No internal fragmentation

● The average segment size is larger than the actual page size, so fewer table entries are needed.

● Less overhead

● It is easier to relocate segments than the entire address space.

● The segment table is smaller compared to the page table in paging.

Disadvantages of Segmentation:

● It can have external fragmentation.

● It is difficult to allocate contiguous memory to variable-sized partitions.

● Costly memory management algorithms.

PAGING

Paging is a memory management scheme that eliminates the need for contiguous allocation of physical memory. This scheme permits the physical address space of a process to be non-contiguous.

Logical Address or Virtual Address (represented in bits): An address generated by the CPU.

Physical Address (represented in bits): An address actually available on the memory unit.

The mapping from virtual to physical address is done by the Memory management unit
(MMU) which is a hardware device and this mapping is known as the paging technique.

The Physical Address Space is conceptually divided into a number of fixed-size blocks, called frames.

The Logical Address Space is also divided into fixed-size blocks, called pages.

Pages are stored in noncontiguous locations.

Page Size = Frame Size

[Fig: a process's logical address space (pages P0-P4) mapped through a page map table (PMT) to frames of main memory, e.g. P1 in frame 1, P2 in frame 3, P3 in frame 4.]

This PMT tells which page is stored in which frame.

Let us consider an example:

Physical Address = 12 bits, then Physical Address Space = 4 K words

Logical Address = 13 bits, then Logical Address Space = 8 K words

Page size = frame size = 1 K words (assumption)

Some of the parameters related to paging are:

● P = page size (equal to the frame size)
● l = logical address
● l is interpreted as the pair (p, d), where p is the page number and d is the offset within the page

The page number p and offset d are given as:

p = l (div) P

d = l (mod) P

Suppose P (page size) is 10, and the location to be accessed is 14; then

p = 14 (div) 10 = 1
d = 14 (mod) 10 = 4, giving the pair (1, 4).

If the page resides in frame f (frames numbered from 1), the physical address is given as:

(f - 1) × P + d

Here there is no external fragmentation but there is a chance of internal fragmentation.
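The div/mod computation above maps directly onto code; a sketch using the page size 10 of the worked example, with frames numbered from 1 as the formula assumes:

```python
PAGE_SIZE = 10  # words, matching the worked example above

def split_logical(l, page_size=PAGE_SIZE):
    """Split a logical address into (page number, offset)."""
    return divmod(l, page_size)   # (l div P, l mod P)

def physical_address(frame, offset, page_size=PAGE_SIZE):
    """Frames numbered from 1, as in the formula (f - 1) * P + d."""
    return (frame - 1) * page_size + offset

print(split_logical(14))          # (1, 4)
print(physical_address(3, 4))     # page in frame 3 -> (3 - 1) * 10 + 4 = 24
```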

Eg: A process of size 41K with a page size of 10K needs five pages. Four pages hold 10K each (placed, say, in frames 1, 3, 4, and 2), while the fifth page holds only the remaining 1K. In that last frame only 1K of data is stored and the remaining part of the frame is wasted; this is called internal fragmentation.

Fig 4.21 Hardware Architecture of Paging

From Fig 4.21, the address generated by the CPU is called a logical address, and it is divided into two parts: a page number (p) and a page offset (d). The page number is used as an index into a page table. The page table contains the base address of each page in physical memory.

The address generated by the CPU is divided into:

Page number(p): Number of bits required to represent the pages in Logical Address Space or
Page number

Page offset(d): Number of bits required to represent a particular word in a page or page size of
Logical Address Space or word number of a page or page offset.

Physical Address is divided into:

Frame number(f): Number of bits required to represent the frame of Physical Address Space
or Frame number.

Frame offset(d): Number of bits required to represent a particular word in a frame or frame
size of Physical Address Space or word number of a frame or frame offset.


Paging with TLB:

The hardware implementation of the page table can be done by using dedicated registers. But
the usage of register for the page table is satisfactory only if the page table is small. If the page
table contains a large number of entries then we can use TLB (translation Look-aside buffer),
a special, small, fast look-up hardware cache. Paging with TLB is shown in Fig 4.22.

The TLB is associative, high-speed memory.

Each entry in TLB consists of two parts: a tag and a value.

When this memory is used, then an item is compared with all tags simultaneously.

If the item is found, then the corresponding frame number is returned; this is called a TLB hit. If not, it is called a TLB miss, and the page table in memory must be consulted.
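The hit/miss behaviour can be sketched with a dictionary standing in for the associative memory (a simplification: a real TLB has bounded capacity and a replacement policy; the page-table values here are hypothetical):

```python
def translate(page, tlb, page_table):
    """Look the page up in the TLB first; fall back to the page table on a
    miss (an extra memory access) and cache the mapping for next time."""
    if page in tlb:
        return tlb[page], "TLB hit"
    frame = page_table[page]      # slow path: walk the in-memory page table
    tlb[page] = frame             # cache the translation
    return frame, "TLB miss"

page_table = {0: 5, 1: 2, 2: 7}   # hypothetical page -> frame mapping
tlb = {}
print(translate(1, tlb, page_table))  # (2, 'TLB miss')
print(translate(1, tlb, page_table))  # (2, 'TLB hit')
```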

Fig 4.22 Paging with TLB

Memory Protection in Paging:

It is done with the help of protection bits associated with each frame. The bits are maintained in the page table, so while the frame number is being looked up, the protection bits are also checked.

For example, a page may be read-only (modification is not allowed). Valid and invalid bits can also be associated with each page.

[Fig: a page table whose entries carry a valid/invalid bit, e.g. entries holding frames 2, 3, 1, and 6 marked V, and two entries marked I.]

Valid (V) → the page is in the process's logical address space.

Invalid (I) → the page is not in the process's logical address space.

Advantages of Paging:

● Easy to use memory management algorithm


● No external fragmentation
● Swapping is easy between equal-sized pages and page frames.

Disadvantages of Paging:

● May cause Internal fragmentation


● Complex memory management algorithm
● Page tables consume additional memory.
● Multi-level paging may lead to memory reference overhead.

Virtual Memory
● Virtual Memory is a storage allocation scheme in which secondary memory can be
addressed as though it were part of main memory.
● Virtual memory is a separation of user logical memory from physical memory.
● In this method, we keep only a part of the process in the memory and the other part on
the disk(secondary storage).
● Virtual memory serves two purposes. First, it allows us to extend the use of physical
memory by using a disk. Second, it allows us to have memory protection, because each
virtual address is translated to a physical address.

Following are the situations when the entire program is not required to be loaded fully in the
main memory.

● User written error handling routines are used only when an error occurs in the data or
computation.
● Certain options and features of a program may be used rarely.
● Many tables are assigned a fixed amount of address space even though only a small
amount of the table is actually used.
The ability to execute a program that is only partially in memory would confer many benefits:

● Fewer I/O operations would be needed to load or swap each user program into memory.
● Each user program could take less physical memory, so more programs could be run at the same time, with a corresponding increase in CPU utilization and throughput.

Fig 4.23 Mapping of Logical Address into Physical Address

Virtual memory is a technique that is implemented using both hardware and software. It maps
memory addresses used by a program, called virtual addresses, into physical addresses in
computer memory.

● All memory references within a process are logical addresses that are dynamically translated into physical addresses at run time, as shown in Fig 4.23 above. This means that a process can be swapped in and out of the main memory such that it occupies different places in the main memory at different times during execution.
● A process may be broken into several pieces, and these pieces need not be contiguously located in the main memory during execution. The combination of dynamic run-time address translation and the use of a page or segment table permits this.
● If these characteristics are present then, not all the pages or segments need to be present
in the main memory during execution. This means that the required pages need to be
loaded into memory whenever required. Virtual memory is implemented using Demand
Paging.

Advantages of Virtual Memory

● The degree of Multiprogramming increases and external fragmentation decreases.


● Users can run large applications with less real RAM.
● Programmers are relieved of trying to fit a program into limited memory.
● Logical address space is much larger than physical address space.

Disadvantages of Virtual Memory

● The system becomes slower since swapping takes time.


● It takes more time to switch between applications.
● The user will have less hard disk space available, since part of it is reserved as swap space.

Demand Paging

● It is a combination of paging and swapping.


● The complete process is stored inside the disk in the form of pages.
● A page is copied to the main memory when its demand is made or a page fault occurs.
● In this technique, whichever part of the process needs to be executed is placed in the main memory, and the rest stays in the secondary memory, as shown in Fig 4.24.

Fig 4.24 Demand paging

Advantages of Demand Paging:

● More processes may be maintained in the main memory: We are going to load only
some of the pages of any particular process which means there is room for more
processes. This leads to more efficient utilization of the processor because it is more
likely that at least one of the more numerous processes will be in the ready state at any
particular time.
● A process may be larger than all of the main memory: One of the most fundamental
restrictions in programming is lifted. A process larger than the main memory can be
executed because of demand paging. The OS itself loads pages of a process in the main
memory as required.
● It allows greater multiprogramming levels by using less of the available (primary)
memory for each process.

Disadvantages of Demand Paging:

● Page faults occur. A page fault is a type of exception raised by computer hardware when a running program accesses a memory page that is not currently mapped by the memory management unit (MMU) into the virtual address space of the process.
● The number of tables and the amount of processor overhead for handling page
interrupts are greater than in the case of the simple paged management techniques.
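The load-on-first-touch behaviour described above can be sketched with a valid bit per page (a toy model: the page names and contents are hypothetical, and no frame limit or replacement policy is modelled):

```python
def access(page, page_table, backing_store, memory, faults):
    """Serve a memory reference under demand paging: a page whose valid bit
    is clear triggers a page fault and is loaded from the backing store."""
    if not page_table[page]["valid"]:
        faults.append(page)                    # page fault: page not resident
        memory[page] = backing_store[page]     # swap the page in on demand
        page_table[page]["valid"] = True
    return memory[page]

backing_store = {0: "code", 1: "data", 2: "stack"}    # whole process on disk
page_table = {p: {"valid": False} for p in backing_store}
memory, faults = {}, []

access(1, page_table, backing_store, memory, faults)  # faults, loads page 1
access(1, page_table, backing_store, memory, faults)  # already resident
print(faults)  # [1]
```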

Question bank: Unit 4

1. How are programs compiled and run?


2. Discuss in brief the memory hierarchy.
3. Brief about memory management and its requirements.
4. What are the memory management techniques
5. Differentiate between contiguous and non contiguous techniques of memory
management.
6. What are the functions of memory management ?
7. What are the goals of memory management?
8. What is the function of Fence Address ?
9. What are the advantages and disadvantages of fixed partitioning?
10. Explain the terms Base and limit registers.
11. What is dynamic loading ?
12. Explain about the logical address space (LAS) and physical address space (PAS).
13. Brief about the memory management unit.
14. What are the 4 different types of algorithms used to manage the holes in dynamic memory? What are the advantages of each?
15. Elaborate on the paging concept.
16. Explain what a page mapping table (PMT) is.
17. Explain the hardware architecture of paging with the help of a block diagram.
18. Brief about paging with the translation look-aside buffer with the help of a block diagram.
19. Write about memory management swapping.
20. Differentiate between segmentation and paging.
21. Explain the architecture of segmentation.
22. What is virtual memory? Mention its advantages and disadvantages.
23. What is demand paging? Mention its advantages and disadvantages.

