
Unit 3: Address Spaces

Paging
Need of Paging
• Segmentation:
• It logically chops the address space into variable-sized pieces (segments).
• The free space itself gets divided into fragments of variable sizes.
• Thus, while allocating space for a new process, one needs to consider the sizes of the available memory chunks.

• If the address space is divided into fixed-sized pieces (instead of variable-sized segments), this problem can be overcome.
• The fixed-sized pieces of an address space are known as “pages”.
Paging
• Paging is a storage mechanism used to load processes from secondary storage into main memory in the form of pages.
• It divides the address space of a process into fixed-sized pages.
• The main memory is likewise divided into frames.
• One page of the process is stored in one of the frames of memory.
• The pages can be stored at different locations of memory, but the priority is always to find contiguous frames or holes.
Paging
• The basic method for implementing paging involves breaking physical memory (RAM) into fixed-sized blocks called frames,
• and breaking logical memory (the address space) into blocks of the same size called pages.
• When a process is to be executed, its pages are loaded into any available memory frames from their source (e.g., the file system or the backing store).
Paging
Basic Method of Implementing “Paging”

Every address generated by the CPU is divided into two parts: a page number (p) and a page offset (d).
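As a sketch, the split can be done with a shift and a mask; the 32-bit address width and 4 KB page size below are assumptions for illustration, not values from the slides:

```python
# Splitting a logical address into page number (p) and offset (d).
# Assumes a hypothetical address space with 4 KB (2^12) pages.
PAGE_SIZE = 4096       # bytes per page
OFFSET_BITS = 12       # log2(PAGE_SIZE)

def split_address(logical_addr):
    p = logical_addr >> OFFSET_BITS      # high-order bits: page number
    d = logical_addr & (PAGE_SIZE - 1)   # low-order bits: offset within page
    return p, d

p, d = split_address(0x12345)
# 0x12345 = page 0x12, offset 0x345
```

Because the page size is a power of two, the offset is just the low-order bits, which is why hardware can do this split with no arithmetic.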
Basic Method of Implementing “Paging”
The steps taken by the MMU to translate
a logical address generated by the CPU
to a physical address:
1. Extract the page number p and use it
as an index into the page table.
2. Extract the corresponding frame
number f from the page table.
3. Replace the page number p in the
logical address with the frame
number f .
Basic Method of Implementing “Paging”
Example:
Page size = 4 bytes
Physical memory of 32 bytes (8 frames)
How can the programmer’s view of memory be mapped into physical memory?

Logical address | Page no. + Offset | Physical address
0  | 0 + 0 | 20 = (5 × 4) + 0
3  | 0 + 3 | 23 = (5 × 4) + 3
4  | 1 + 0 | 24 = (6 × 4) + 0
13 | 3 + 1 |  9 = (2 × 4) + 1
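The mapping in the example above can be checked with a short sketch (the frame for page 2 is an assumption, since the example rows never reference it):

```python
# Reproducing the worked example: 4-byte pages, 32-byte memory (8 frames).
PAGE_SIZE = 4
# Page table from the example: page 0 -> frame 5, page 1 -> frame 6,
# page 3 -> frame 2 (page 2 -> frame 1 is assumed for completeness).
page_table = {0: 5, 1: 6, 2: 1, 3: 2}

def translate(logical_addr):
    p, d = divmod(logical_addr, PAGE_SIZE)  # page number and offset
    f = page_table[p]                       # look up frame number
    return f * PAGE_SIZE + d                # physical address

for la in (0, 3, 4, 13):
    print(la, "->", translate(la))
```

Running this reproduces the four rows of the table: 0→20, 3→23, 4→24, 13→9.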
Paging

The mechanism of paging requires three basic elements to store the mapping information:
• Page Table
• Page Table Base Register
• Translation Lookaside Buffer
Page Table

• There can be multiple processes executing on multiple CPU cores at a time.
• Every process has its pages loaded in RAM.
• Every process has its own page table.
• A page table maps page numbers to frame numbers.
• Page tables are stored in a memory area that can be accessed only by the OS kernel.
Page Table Base Register

• The address of every page table is stored in the Page Table Base Register (PTBR), which is part of the Process Control Block (PCB).
• When a process is being executed by a processor, its page table address is retrieved from the PTBR and that page table is activated.
Need for Translation Lookaside Buffer
• A page table entry (PTE) tells where in main memory the actual page resides.
• A PTE contains the frame number (the main-memory address we want to refer to) and some other useful bits (e.g., valid/invalid bit, dirty bit, protection bits).
• Earlier systems used to store the page table in registers, but the number of registers in a CPU is not scalable (they cannot be increased on the go).
• So the page table is stored in main memory.
• But this approach needs to access main memory twice:
1. To find the frame number
2. To go to the address specified by the frame number
• Two memory accesses make the system slower.


Need for Translation Lookaside Buffer
• To overcome the problem of double access of main memory, a high-speed cache is set up for page table entries.
• It is called the Translation Lookaside Buffer (TLB).
• The TLB is a special cache used to keep track of recently used translations.
• The TLB contains the page table entries that have been most recently used,
• while the page table contains entries for all pages in main memory.
Need for Translation Lookaside Buffer
• Given a virtual address, the processor first examines the TLB.
• If the page table entry is present (TLB hit), the frame number is retrieved and the physical address is formed.
• If the entry is not found in the TLB (TLB miss), the page number is used as an index into the page table in main memory. If the page is present there, the frame number is retrieved and the TLB is updated to include the new entry; if the page is not in main memory, a page fault is issued.
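One way to quantify the benefit of the TLB is the effective access time; the timings below are illustrative assumptions, not figures from the slides:

```python
# Effective memory-access time (EAT) with a TLB, using assumed timings:
TLB_TIME = 10      # ns to search the TLB (hypothetical)
MEM_TIME = 100     # ns per main-memory access (hypothetical)

def eat(hit_ratio):
    hit = TLB_TIME + MEM_TIME        # hit: one memory access for the data
    miss = TLB_TIME + 2 * MEM_TIME   # miss: page-table access + data access
    return hit_ratio * hit + (1 - hit_ratio) * miss

print(eat(0.90))   # average access time at a 90% hit ratio
print(eat(0.99))   # a higher hit ratio brings EAT close to one access
```

With these numbers, raising the hit ratio from 90% to 99% lowers the average access time from 120 ns to 111 ns, which is why a good hit ratio is critical.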
How TLB is used for faster Address Translation?
• Steps in TLB hit:
1. CPU generates virtual
(logical) address.
2. It is checked in TLB
(present).
3. The corresponding frame number is retrieved, which tells where in main memory the page lies.
How TLB is used for faster Address Translation?
• Steps in TLB miss:
1. CPU generates a virtual (logical) address.
2. It is checked in the TLB (not present).
3. The page number is matched against the page table residing in main memory.
4. The frame number is retrieved from the page table.
5. The TLB is updated with the new PTE (if there is no space, one of the replacement techniques comes into the picture, e.g., FIFO, LRU, or MFU).
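The miss path with LRU replacement (one of the policies named above) can be sketched as follows; the TLB capacity and page-table contents are illustrative:

```python
from collections import OrderedDict

# A tiny TLB with LRU replacement.
class TLB:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = OrderedDict()        # page -> frame, LRU first

    def lookup(self, page, page_table):
        if page in self.entries:            # TLB hit
            self.entries.move_to_end(page)  # mark as most recently used
            return self.entries[page], True
        frame = page_table[page]            # TLB miss: walk the page table
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # evict least recently used
        self.entries[page] = frame            # update TLB with the new PTE
        return frame, False

page_table = {p: p + 10 for p in range(8)}   # hypothetical mapping
tlb = TLB(capacity=2)
hits = [tlb.lookup(p, page_table)[1] for p in (0, 1, 0, 2, 1)]
# pages 0 and 1 miss, 0 hits, 2 misses (evicting 1), then 1 misses again
```

Note how the second access to page 1 misses even though it was cached earlier: the LRU entry was evicted to make room for page 2.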
Who Handles the TLB Miss?
• Earlier systems had complex instruction sets (CISC).
• In such systems, there used to be special instructions to handle a TLB miss.
• Such instructions know the address of the page table and can access it directly.

• Modern architectures have reduced instruction sets (RISC).
• A TLB miss is handled by a trap handler routine.
• On a TLB miss, the hardware simply raises an exception, which pauses the execution of the current instruction and jumps to a trap handler.
• The trap handler is code within the OS that looks up the translation in the page table, uses special “privileged” instructions to update the TLB, and returns from the trap.
TLB Issue: Context Switches
• The TLB contains recent information about page-number → frame-number mappings.
• But this information is valid only for the currently running process.
• When context-switching between processes, the translations in the TLB for the last process are not meaningful to the about-to-be-run process.
TLB Issue: Context Switches
• Approach 1:
• On a context switch, the TLB entries of the old process are flushed, as they are not relevant to the new process to be executed.
• Example:
• Process P1 is being executed.
• It is halted for the purpose of an I/O operation.
• The TLB entries related to it are flushed, as they are not relevant to the new process P2 to be executed.
• But when P1 comes back from its I/O operation, it gets a TLB miss for every page, which is a huge overhead.
TLB Issue: Context Switches

• Approach 2:
• Instead of flushing the TLB,
• some systems provide an address space identifier (ASID) field in the TLB to identify which process each entry belongs to.
• The ASID is like a process identifier (PID), but usually it has fewer bits (e.g., 8 bits for the ASID versus 32 bits for a PID).
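A minimal sketch of ASID-tagged lookups, with made-up ASIDs and frame numbers: entries are keyed by (ASID, page), so a context switch needs no flush.

```python
# ASID-tagged TLB: entries keyed by (ASID, page). Values are illustrative.
tlb = {}
tlb[(1, 0)] = 5    # process with ASID 1: page 0 -> frame 5
tlb[(2, 0)] = 9    # process with ASID 2: page 0 -> frame 9

def tlb_lookup(asid, page):
    # None signals a TLB miss (fall back to a page-table walk).
    return tlb.get((asid, page))

# The same page number resolves differently per process:
assert tlb_lookup(1, 0) == 5
assert tlb_lookup(2, 0) == 9
assert tlb_lookup(1, 1) is None   # miss for a page not yet cached
```

Both processes keep their translations cached across context switches; only the ASID of the running process decides which entries match.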
Paging Vs. Segmentation
S. No. | Paging | Segmentation
1. | In paging, the program is divided into fixed-size pages. | In segmentation, the program is divided into variable-size sections.
2. | For paging, the operating system is accountable. | For segmentation, the compiler is accountable.
3. | Page size is determined by the hardware. | Here, the section size is given by the user.
4. | It is faster in comparison with segmentation. | Segmentation is slower.
5. | Paging can result in internal fragmentation. | Segmentation can result in external fragmentation.
6. | In paging, the logical address is split into page number and page offset. | Here, the logical address is split into section number and section offset.
7. | Paging comprises a page table which encloses the base address of every page. | Segmentation comprises a segment table which encloses the base address and limit of every segment.
Paging Vs. Segmentation
S. No. | Paging | Segmentation
8. | A page table is employed to keep up the page data. | A section table maintains the section data.
9. | In paging, the operating system must maintain a free-frame list. | In segmentation, the operating system maintains a list of holes in main memory.
10. | Paging is invisible to the user. | Segmentation is visible to the user.
11. | In paging, the processor needs the page number and offset to calculate the absolute address. | In segmentation, the processor uses the segment number and offset to calculate the full address.
12. | It is hard to allow sharing of procedures between processes. | Facilitates sharing of procedures between processes.
13. | In paging, a programmer cannot efficiently handle data structures. | It can efficiently handle data structures.
Hybrid Approach: Paging and Segments
• Segmentation can be combined with Paging to get the best features out of both the techniques.

• Segmented Paging: The main memory is divided into variable size segments which are further
divided into fixed size pages.
• Pages are smaller than segments.
• Each Segment has a page table which means every program has multiple page tables.
• The logical address is represented as Segment Number, Page Number, and Page Offset.
• Segment Number → selects the appropriate entry in the segment table.
• Page Number → points to the exact page within the segment.
Hybrid Approach: Paging and Segments
Step-01:
• CPU generates a logical address consisting of three parts:
1. Segment Number
2. Page Number
3. Page Offset
Hybrid Approach: Paging and Segments
Step-02:
• For the generated segment number, the corresponding entry is located in the segment table.
• The segment table provides the frame number of the frame storing the page table of the referred segment.
• The frame containing the page table is located.
Hybrid Approach: Paging and Segments
Step-03:
• For the generated page number, the corresponding entry is located in the page table.
• The page table provides the frame number of the frame storing the required page of the referred segment.
• The frame containing the required page is located.
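The three translation steps above can be sketched as follows; the segment table, per-segment page tables, and page size are all illustrative:

```python
# Sketch of the three translation steps for segmented paging.
# Each segment has its own page table (page -> frame), per the slides.
segment_table = {
    0: {0: 7, 1: 3},   # segment 0's page table (illustrative)
    1: {0: 2},         # segment 1's page table (illustrative)
}
PAGE_SIZE = 4096       # assumed page size

def translate(seg, page, offset):
    page_table = segment_table[seg]    # Step-01/02: segment table -> page table
    frame = page_table[page]           # Step-03: page table -> frame number
    return frame * PAGE_SIZE + offset  # form the physical address

print(translate(0, 1, 100))   # segment 0, page 1 (frame 3), offset 100
```

The key point the sketch shows is the two-level lookup: one table walk per segment, then one per page within that segment.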
Hybrid Approach: Paging and Segments
The advantages of segmented paging are:
• The segment table contains only one entry corresponding to each segment.
• It reduces memory usage.
• The size of the page table is limited by the segment size.
• It solves the problem of external fragmentation.

The disadvantages of segmented paging are:
• Segmented paging suffers from internal fragmentation.
• The complexity level is much higher as compared to plain paging.
Beyond Physical Memory: Mechanisms

So far we have assumed some unrealistic things, e.g.:
1. The address space of a process is small and completely fits into physical memory.
2. The address space of every running process fits into memory.

We will now relax these big assumptions and assume that we wish to support many concurrently running processes having large address spaces.
The OS uses some more mechanisms to deal with this actual situation. These we are going to study next.
Swap Space

• The OS reserves some space on the hard disk for moving pages back and forth.
• This space is known as swap space, because we swap pages out of main memory to it and swap
pages into main memory from it.
• Thus, we will simply assume that the OS can read from and write to the swap space, in page-
sized units.
• To do so, the OS will need to remember the disk address of a given page.
Swap Space

• How does using swap space allow the system to pretend that main memory is larger than it actually is?
• A typical scenario:
• Three processes (Proc 0, Proc 1, and Proc 2) are actively sharing physical/main memory;
• each of the three, however, has only some of its valid pages in main memory, with the rest located in swap space on disk.
• A fourth process (Proc 3) has all of its pages swapped out to disk, and thus clearly isn’t currently running.
Present Bit

• The OS swaps pages between main memory and swap space as and when needed.
• For this purpose, the “present bit” in the page table entry is used.
• For all pages that are present in main memory this bit is set to 1, and it is 0 for all pages that are absent.
• An access to a page that is not present in main memory causes a page fault.
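A minimal sketch of the present-bit check; the PTE layout and values are illustrative, and the OS's fault handling is reduced to an exception:

```python
# Page-table entry with a present bit: a cleared bit triggers a page fault.
page_table = {
    0: {"frame": 5, "present": True},       # resident in main memory
    1: {"frame": None, "present": False},   # swapped out to disk
}

def access(page):
    pte = page_table[page]
    if not pte["present"]:
        # Page fault: here the OS would load the page from swap space,
        # fill in the frame number, and set present = True.
        raise RuntimeError("page fault on page %d" % page)
    return pte["frame"]

print(access(0))   # page 0 is present, so its frame is returned
```

Accessing page 1 would raise the page fault, modelling the trap the hardware delivers when the present bit is 0.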
Page replacement control flow
• When the present bit = 0, the page is not present in main memory. This situation is called a “page fault”.
• The particular page is loaded into main memory and the PTE is updated, i.e., the present bit is set to 1.
What If Memory Is Full?

• In the process described above, we assumed there is plenty of free space available in main memory
to load a page from swap space.
• But this may not be the case; memory may be full (or close to it).
• Here, the OS might like to first page out one or more pages to make room for the new page(s) the
OS is about to bring in.
• The process of picking a page to kick out, or replace, is known as the page-replacement policy. Common policies include:
• First In First Out
• Optimal Page Replacement
• Least recently used
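As a sketch, the FIFO and LRU policies listed above can be compared by counting page faults on a sample reference string (the string and frame count are made up for illustration):

```python
from collections import OrderedDict

def count_faults(refs, frames, policy):
    mem = OrderedDict()                 # pages in memory, eviction order first
    faults = 0
    for page in refs:
        if page in mem:
            if policy == "LRU":
                mem.move_to_end(page)   # refresh recency; FIFO ignores hits
            continue
        faults += 1                     # page fault: bring the page in
        if len(mem) >= frames:
            mem.popitem(last=False)     # evict head: oldest (FIFO) or LRU
        mem[page] = None
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(count_faults(refs, 3, "FIFO"))   # 10 faults
print(count_faults(refs, 3, "LRU"))    # 9 faults
```

The only difference between the two policies is whether a hit refreshes the page's position, yet on this string LRU already saves one fault.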
The Linux Virtual Memory System.

• Memory management under Linux has two components.


• The first deals with allocating and freeing physical memory: pages, groups of pages, and small blocks of RAM.
• The second handles virtual memory, which is memory-mapped into the address space of running
processes.
The Linux Virtual Memory System.
Management of Physical Memory:
• Linux separates physical memory into four different zones, or regions:
• ZONE_DMA: memory usable by legacy DMA devices (< 16 MB)
• ZONE_DMA32: memory usable by 32-bit DMA devices (< 4 GB)
• ZONE_NORMAL: regularly mapped pages
• ZONE_HIGHMEM: memory not permanently mapped into the kernel's address space (> 896 MB on 32-bit x86), available to user space

Page allocator: Each zone has its own allocator, which is responsible for allocating and freeing all
physical pages for the zone and is capable of allocating ranges of physically contiguous pages on
request.
The Linux Virtual Memory System.
Virtual Memory:
• The Linux virtual memory system is responsible for maintaining the address space accessible to
each process.
• It creates pages of virtual memory on demand and manages loading those pages from disk and
swapping them back out to disk as required.
• Under Linux, the virtual memory manager maintains two separate views of a process’s address
space:
1. Logical view
2. Physical view
The Linux Virtual Memory System.
Virtual Memory:
1. Logical view
• It describes instructions that the virtual memory system has received concerning the layout of
the address space.
• In this view, the address space consists of a set of nonoverlapping regions, each region
representing a continuous, page-aligned subset of the address space.
2. Physical view
• This view is stored in the hardware page tables for the process.
• The page table entries identify the exact current location of each page of virtual memory.
The Linux Virtual Memory System.
Swapping and Paging:
• An important task for a virtual memory system is to relocate pages of memory from physical
memory out to disk when that memory is needed.
• Early UNIX systems performed this relocation by swapping out the contents of entire processes
at once, but modern versions of UNIX rely more on paging—the movement of individual pages
of virtual memory between physical memory and disk.
The paging system can be divided into two sections:
1. Policy algorithm: decides which pages to write out to backing store and when to write them.
2. Paging mechanism: carries out the transfer and pages data back into physical memory when
they are needed again.
The Linux Virtual Memory System.
• Kernel Virtual Memory:
• Linux reserves for its own internal use a constant, architecture-dependent region of the virtual
address space of every process.
• The page-table entries that map to these kernel pages are marked as protected, and not visible or
modifiable when the processor is running in user mode.
The Linux Virtual Memory System.
• Execution and Loading of User Programs:
• The execution of user programs is triggered by a call to the exec() system call.
• This exec() call commands the kernel to run a new program within the current process, completely
overwriting the current execution context with the initial context of the new program.
• The first job of this system service is to verify that the calling process has permission rights to the file being executed.
• Once that matter has been checked, the kernel invokes a loader routine to start running the
program.
The Linux Virtual Memory System.
• Mapping of Programs into Memory
• Under Linux, the binary loader does not load a binary file into physical memory.
• Rather, the pages of the binary file are mapped into regions of virtual memory.
• Only when the program tries to access a given page will a page fault result in the loading of that
page into physical memory using demand paging.
• It is the responsibility of the kernel’s binary loader to set up the initial memory mapping.
The Linux Virtual Memory System.
• Static and Dynamic Linking
• Once the program has been loaded and has started running, all the necessary contents of the binary
file have been loaded into the process’s virtual address space.
• Linux implements dynamic linking in user mode through a special linker library.
• Every dynamically linked program contains a small, statically linked function that is called when the
program starts.
• This static function just maps the link library into memory and runs the code that the function
contains.
