
IT14: Operating Systems Concepts

Prof. Priti Kale (ICEM-MCA)

Indira College of Engineering Management, Pune


Syllabus
• 1.1  Overview of operating systems
• 1.2  Functionalities and characteristics of OS
• 1.3  Hardware concepts related to OS
• 1.4  CPU states
• 1.5  I/O channels
• 1.6  Memory management
• 1.6.1  Memory management techniques
• 1.6.2  Contiguous and non-contiguous allocation
• 1.6.3  Logical and physical memory; conversion of logical to physical address
• 1.7  Paging
• 1.7.1  Demand paging
• 1.7.2  Page replacement concept
• 1.8  Segmentation; segmentation with paging
• 1.9  Virtual memory concept
• 1.10  Thrashing



Architecture of Computer System
Hardware

Operating System (OS)

Programming Language (e.g. PASCAL)

Application Programs (e.g. WORD, EXCEL)



Detailed Layered View of a Computer



System Software, Application Software and Driver Programs
• System Software – performs essential operating tasks
• Operating system
• Utility programs
• Application Software - Performs specific tasks for users
• Business application
• Communications application
• Multimedia application
• Entertainment and educational software
• Driver Programs (Device Driver)
• small program that allows a specific input or output
device to communicate with the rest of the computer
system



Three types of programs

• user / application programs
• programs used by users to perform a task
• system programs
• an interface between the user and the computer
• driver programs
• allow I/O devices to communicate with the computer



Hierarchy of computer software



Program Hierarchy
(layered figure) Users 1 to n run application programs (text editor,
database system, spreadsheet, game) on top of the Operating System,
which in turn runs on the Computer Hardware.



Operating System
• a collection of programs which control the resources of a
computer system
• written in low-level languages (i.e. machine-dependent)
• an interface between the users and the hardware
• when the computer is switched on, the OS is first loaded into main
memory



Basic functions of the operating system
• Device configuration – controls peripheral devices connected to the
computer
• File management – transfers files between main memory and secondary
storage, manages file folders, allocates the secondary storage space,
and provides file protection and recovery
• Memory management – allocates the use of random access memory (RAM)
to requesting processes
• Interface platform – allows the computer to run other applications


Other functions of the operating system
• make the best use of the computer's resources
• provide an environment for users' programs to execute
• display and deal with errors when they happen
• control the selection and operation of the peripherals
• act as a communication link between users
• system protection



Memory Management in
Operating Systems



Base and Limit Registers

• A pair of base and limit registers define the logical address space



Logical vs. Physical Address Space
• The concept of a logical address space that is
bound to a separate physical address space is
central to proper memory management
• Logical address – generated by the CPU; also
referred to as virtual address
• Physical address – address seen by the memory unit
• Logical and physical addresses are the same in
compile-time and load-time address-binding
schemes; logical (virtual) and physical addresses
differ in execution-time address-binding scheme



Memory
1. Memory is central to the operation of a modern computer system.
2. Memory is a large array of words or bytes, each location with its own
address.
3. Interaction is achieved through a sequence of reads/writes of
specific memory addresses.
4. The CPU fetches the program from the hard disk and stores it in
memory.
5. If a program is to be executed, it must be mapped to absolute
addresses and loaded into memory.



Contd…
 In a multiprogramming environment, in order to improve both
CPU utilisation and the speed of the computer's response, several
processes must be kept in memory.
 There are many different algorithms for managing memory, depending
on the particular situation.
 Selection of a memory management scheme for a specific system
depends upon many factors, but especially upon the hardware design
of the system.
 Each algorithm requires its own hardware support.



Memory Management – Responsibilities of OS

 Keep track of which parts of memory are currently
being used and by whom.
 Decide which processes are to be loaded into
memory when memory space becomes available.
 Allocate and deallocate memory space as needed.



Contd…

 In a multiprogramming environment the operating
system dynamically allocates memory to multiple
processes.
 Thus memory plays a significant role in the
important aspects of computer system like
performance, S/W support, reliability and stability.



Memory-Management Unit (MMU)
• Hardware device that maps virtual to physical
address

• In the MMU scheme, the value in the relocation
register is added to every address generated by a
user process at the time it is sent to memory

• The user program deals with logical addresses;
it never sees the real physical addresses
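The relocation described above can be sketched in Python. This is a minimal illustration, with hypothetical register values: every logical address is first checked against the limit register, then relocated by adding the relocation (base) register.

```python
# Hypothetical register contents for one process.
RELOCATION_REGISTER = 14000   # base physical address of the process
LIMIT_REGISTER = 3000         # size of the logical address space

def translate(logical_address):
    """Map a logical address to a physical address, trapping on overflow."""
    if not 0 <= logical_address < LIMIT_REGISTER:
        # The MMU would raise a trap (addressing error) to the OS here.
        raise MemoryError(f"trap: addressing error at {logical_address}")
    return logical_address + RELOCATION_REGISTER

print(translate(346))   # 14346
```

The user program only ever manipulates values like 346; the physical address 14346 is produced by the hardware at access time.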



Dynamic relocation using a relocation register



Dynamic Loading
• Routine is not loaded until it is called
• Better memory-space utilization; unused routine
is never loaded
• Useful when large amounts of code are needed
to handle infrequently occurring cases
• No special support from the operating system is
required; it is implemented through program design



Dynamic Linking
• Linking postponed until execution time
• Small piece of code, stub, used to locate the
appropriate memory-resident library routine
• Stub replaces itself with the address of the
routine, and executes the routine
• The operating system is needed to check if the routine is
in the process's memory address space
• Dynamic linking is particularly useful for
libraries
• System also known as shared libraries



Overlays
 In an ideal situation, the entire program and its related
data are loaded in physical memory for execution.
 But what if the process is larger than the amount of memory
allocated to it?
 We can overcome this problem by adopting a
technique called overlays. Like dynamic loading,
overlays can also be implemented by users without OS
support.



 The entire program or application is divided
into instruction and data sets such that when one
instruction set is needed it is loaded in memory, and after its
execution is over, the space is released.
 As and when the requirement for another instruction set arises,
it is loaded into the space that was released previously by the
instructions that are no longer needed.
 Such instructions are called overlays, and they are
loaded and unloaded by the program.



 Definition: An overlay is a part of an application
which is loaded at the same origin where previously
some other part(s) of the program was residing.
 A program based on an overlay scheme mainly consists
of the following:
 A "root" piece which is always memory resident
 A set of overlays
 Overlays give the program a way to extend limited main
storage. An important aspect of identifying overlays
in a program is the concept of mutual exclusion:
routines which do not invoke each other need not be
loaded in memory simultaneously.
Example of Overlay

Without overlays, the whole program is resident (180K total):
Read( ) 20K, Function1( ) 50K, Function2( ) 70K, Display( ) 40K.

With overlays, the root pieces Read( ) (20K) and Display( ) (40K) stay
resident, while Overlay A = Function1( ) (50K) and Overlay B =
Function2( ) (70K) share the same region, so at most 130K is needed.



Limitations of Overlays
The overlay scheme suffers from the following limitations:
 Requires careful and time-consuming planning.
 The programmer is responsible for organizing the overlay structure
of the program with the help of file structures etc., and also for
ensuring that a piece of code is already loaded when it is
called.
 The operating system provides only the facility to load files into
the overlay region.
Swapping
 Swapping is an approach to memory management: each process is
brought in in its entirety, run, and then put
back on the disk, so that another program may be loaded
into that space.
 Swapping is a technique that lets you use a disk file as an
extension of memory.
 Lower-priority user processes are swapped to the backing
store (disk) when they are waiting for I/O or some
other event, such as the arrival of higher-priority processes. This
is roll-out swapping.
 Swapping the process back into store when
some event occurs or when needed (maybe in a different
partition) is known as roll-in swapping.
Swapping
• A process can be swapped temporarily out of memory to a backing store, and then
brought back into memory for continued execution

• Backing store – fast disk large enough to accommodate copies of all memory
images for all users; must provide direct access to these memory images

• Roll out, roll in – swapping variant used for priority-based scheduling algorithms;
lower-priority process is swapped out so higher-priority process can be loaded and
executed

• Major part of swap time is transfer time; total transfer time is directly proportional to
the amount of memory swapped

• Modified versions of swapping are found on many systems (i.e., UNIX, Linux, and
Windows)
• System maintains a ready queue of ready-to-run processes which have memory
images on disk



Schematic View of Swapping



Swapping

(figure: the operating system and a user process/application reside in
main memory; processes P1 and P2 reside on disk; Roll Out moves a
process from memory to disk, and Roll In brings it back)



Benefits of Swapping
 Allows a higher degree of multiprogramming.
 Allows dynamic relocation: if address binding at
execution time is being used, we can swap a process into a different
location; with compile-time and load-time binding,
processes have to be moved back to the same location.
 Better memory utilisation.
 Less wastage of CPU time on compaction.
 Can easily be applied to priority-based scheduling
algorithms to improve performance.



Limitations
 The entire program must be resident in store while it is
executing.
 Processes with changing memory requirements will need
to issue system calls for requesting and releasing memory.
 It is necessary to know exactly how much memory a user
process is using, and also whether it is blocked or waiting for I/O.



Memory Allocation

• Contiguous Allocation
• Single-partition systems
• Multiple-partition systems
• Fixed-sized partitions (equal-sized or unequal-sized)
• Variable-sized partitions
• Non-Contiguous Allocation


Contiguous Allocation
• Main memory is usually divided into two partitions:
• Resident operating system, usually held in low memory with interrupt
vector
• User processes then held in high memory

• Relocation registers used to protect user processes from each other, and
from changing operating-system code and data
• Base register contains value of smallest physical address
• Limit register contains range of logical addresses – each logical
address must be less than the limit register
• MMU maps logical address dynamically



HW address protection with base and limit registers



Contiguous Allocation (Cont.)
• Multiple-partition allocation
• Hole – block of available memory; holes of various
size are scattered throughout memory
• When a process arrives, it is allocated memory from
a hole large enough to accommodate it
• Operating system maintains information about:
a) allocated partitions b) free partitions (hole)
(figure: successive memory snapshots with the OS in low memory and
processes 2, 5, 8, 9 and 10 being allocated and freed, leaving holes)



Dynamic Storage-Allocation Problem
How to satisfy a request of size n from a list of free holes
• First-fit: Allocate the first hole that is big enough
• Best-fit: Allocate the smallest hole that is big enough; must
search entire list, unless ordered by size
• Produces the smallest leftover hole
• Worst-fit: Allocate the largest hole; must also search entire
list
• Produces the largest leftover hole
First-fit and best-fit are better than worst-fit in terms of
speed and storage utilization
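The three placement strategies can be sketched as follows. This is a minimal Python sketch over a hypothetical free-hole list; each function returns the index of the chosen hole, or None if no hole is big enough.

```python
def first_fit(holes, n):
    # Scan the list and take the first hole that is big enough.
    for i, size in enumerate(holes):
        if size >= n:
            return i
    return None

def best_fit(holes, n):
    # Take the smallest hole that is big enough (smallest leftover hole).
    candidates = [(size, i) for i, size in enumerate(holes) if size >= n]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, n):
    # Take the largest hole (largest leftover hole).
    candidates = [(size, i) for i, size in enumerate(holes) if size >= n]
    return max(candidates)[1] if candidates else None

holes = [100, 500, 200, 300, 600]
print(first_fit(holes, 212))  # 1  (500 is the first big-enough hole)
print(best_fit(holes, 212))   # 3  (300 leaves the smallest leftover)
print(worst_fit(holes, 212))  # 4  (600 leaves the largest leftover)
```

Note that best-fit and worst-fit must examine the whole list, matching the point above that they search the entire list unless it is kept ordered by size.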



Fragmentation
• External Fragmentation – total memory space exists to satisfy a request,
but it is not contiguous
• Internal Fragmentation – allocated memory may be slightly larger than
requested memory; this size difference is memory internal to a partition, but
not being used
• Reduce external fragmentation by compaction
• Shuffle memory contents to place all free memory together in one large
block
• Compaction is possible only if relocation is dynamic, and is done at
execution time
• I/O problem
• Latch job in memory while it is involved in I/O
• Do I/O only into OS buffers





Paging
• Logical address space of a process can be noncontiguous;
process is allocated physical memory whenever the latter is
available
• Divide physical memory into fixed-sized blocks called
frames (size is power of 2, between 512 bytes and 8,192
bytes)
• Divide logical memory into blocks of same size called
pages
• Keep track of all free frames
• To run a program of size n pages, need to find n free frames
and load program
• Set up a page table to translate logical to physical addresses
• Internal fragmentation may occur
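The page-table translation above can be sketched in Python. This uses the 32-byte memory with 4-byte pages from the example slide; the page table contents are hypothetical.

```python
PAGE_SIZE = 4                            # 4-byte pages, so 2 offset bits
page_table = {0: 5, 1: 6, 2: 1, 3: 2}    # page number -> frame number

def translate(logical_address):
    # Split the address into page number p and page offset d.
    p, d = divmod(logical_address, PAGE_SIZE)
    frame = page_table[p]                # look up the frame for this page
    return frame * PAGE_SIZE + d         # frame base plus offset

print(translate(13))   # page 3, offset 1 -> frame 2 -> physical address 9
```

Because pages map to arbitrary frames, logically adjacent pages need not be physically adjacent, which is exactly what makes the process's physical memory noncontiguous.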



Address Translation Scheme
• Address generated by CPU is divided into:

• Page number (p) – used as an index into a page table which contains
base address of each page in physical memory

• Page offset (d) – combined with base address to define the physical
memory address that is sent to the memory unit

page number | page offset
     p      |      d
 m − n bits |  n bits

• For a given logical address space of size 2^m and page size 2^n



Paging Hardware



Paging Model of Logical and Physical Memory



Paging Example

32-byte memory and 4-byte pages


Free Frames

Before allocation After allocation



Implementation of Page Table
• Page table is kept in main memory
• Page-table base register (PTBR) points to the page table
• Page-table length register (PTLR) indicates the size of the
page table
• In this scheme every data/instruction access requires two
memory accesses. One for the page table and one for the
data/instruction.
• The two memory access problem can be solved by the use
of a special fast-lookup hardware cache called associative
memory or translation look-aside buffers (TLBs)
• Some TLBs store address-space identifiers (ASIDs) in
each TLB entry – uniquely identifies each process to
provide address-space protection for that process



Associative Memory
• Associative memory – parallel search
Page # Frame #

Address translation (p, d)


• If p is in associative register, get frame # out
• Otherwise get frame # from page table in memory
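The hit/miss logic above can be sketched in Python. All table contents here are hypothetical; on a TLB hit the frame number comes straight from the associative registers, while a miss falls back to the in-memory page table and caches the entry.

```python
PAGE_SIZE = 4096
page_table = {0: 7, 1: 3, 2: 9}   # page table kept in main memory
tlb = {}                          # associative registers: page -> frame

def translate(logical_address):
    p, d = divmod(logical_address, PAGE_SIZE)
    if p in tlb:                  # TLB hit (a parallel search in hardware)
        frame = tlb[p]
    else:                         # TLB miss: costs an extra memory access
        frame = page_table[p]
        tlb[p] = frame            # cache the mapping for next time
    return frame * PAGE_SIZE + d

print(translate(4100))   # miss: page 1 -> frame 3 -> 12292
print(translate(4200))   # hit this time, same frame: 12392
```

A real TLB also evicts old entries and, with ASIDs, distinguishes entries from different processes; both are omitted here for brevity.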



Paging Hardware With TLB



Effective Access Time
• Associative lookup = ε time units
• Assume the memory cycle time is 1 microsecond
• Hit ratio – percentage of times that a page
number is found in the associative registers;
the ratio is related to the number of associative registers
• Hit ratio = α
• Effective Access Time (EAT)
EAT = (1 + ε)α + (2 + ε)(1 − α)
    = 2 + ε − α


Memory Protection
• Memory protection is implemented by
associating a protection bit with each frame

• A valid–invalid bit is attached to each entry in
the page table:
• “valid” indicates that the associated page is in
the process’ logical address space, and is thus a
legal page
• “invalid” indicates that the page is not in the
process’ logical address space



Valid (v) or Invalid (i) Bit In A Page Table



Shared Pages
• Shared code
• One copy of read-only (reentrant) code shared
among processes (e.g., text editors, compilers,
window systems).
• Shared code must appear in same location in the
logical address space of all processes

• Private code and data
• Each process keeps a separate copy of the code
and data
• The pages for the private code and data can
appear anywhere in the logical address space



Shared Pages Example



Structure of the Page Table
• Hierarchical Paging

• Hashed Page Tables

• Inverted Page Tables



Hierarchical Page Tables
• Break up the logical address space into multiple
page tables

• A simple technique is a two-level page table



Two-Level Page-Table Scheme



Two-Level Paging Example
• A logical address (on a 32-bit machine with 1K page size) is divided into:
• a page number consisting of 22 bits
• a page offset consisting of 10 bits
• Since the page table is paged, the page number is further divided into:
• a 12-bit page number
• a 10-bit page offset
• Thus, a logical address is as follows:

  page number       | page offset
   p1     |   p2    |     d
 12 bits  | 10 bits | 10 bits

where p1 is an index into the outer page table, and p2
is the displacement within the page of the outer page table
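The bit-field split above can be sketched with plain shifts and masks (the example value is arbitrary):

```python
def split(addr):
    """Split a 32-bit logical address into (p1, p2, d) for the 12/10/10 layout."""
    d = addr & 0x3FF             # low 10 bits: page offset
    p2 = (addr >> 10) & 0x3FF    # next 10 bits: index into the inner page table
    p1 = addr >> 20              # high 12 bits: index into the outer page table
    return p1, p2, d

print(split(0x00ABCDEF))   # (10, 755, 495)
```

Reassembling the pieces with `(p1 << 20) | (p2 << 10) | d` recovers the original address, which is a handy sanity check for any such bit layout.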



Address-Translation Scheme



Three-level Paging Scheme



Hashed Page Tables
• Common in address spaces > 32 bits

• The virtual page number is hashed into a page
table. This page table contains a chain of
elements hashing to the same location.

• Virtual page numbers are compared in this chain
searching for a match. If a match is found, the
corresponding physical frame is extracted.



Hash Function & hash table
• A hash function is any function that can be used to map data of arbitrary size to
fixed-size values.
• The values returned by a hash function are called hash values, hash codes, digests,
or simply hashes.
• The values are usually used to index a fixed-size table called a hash table.
• Use of a hash function to index a hash table is called hashing or
scatter storage addressing.



Hash Function & hash table
• a hash table (hash map) is a data structure that implements an associative
array abstract data type, a structure that can map keys to values.
• A hash table uses a hash function to compute an index, also called a hash code,
into an array of buckets or slots, from which the desired value can be found.
• During lookup, the key is hashed and the resulting hash indicates where the
corresponding value is stored.
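The bucket-and-chain idea, applied to a hashed page table, can be sketched as below (bucket count and table contents are hypothetical):

```python
NUM_BUCKETS = 8
buckets = [[] for _ in range(NUM_BUCKETS)]   # each bucket holds a chain

def insert(vpn, frame):
    # Hash the virtual page number to pick a bucket, then chain the entry.
    buckets[hash(vpn) % NUM_BUCKETS].append((vpn, frame))

def lookup(vpn):
    # Walk the chain, comparing virtual page numbers until a match is found.
    for entry_vpn, frame in buckets[hash(vpn) % NUM_BUCKETS]:
        if entry_vpn == vpn:
            return frame
    return None   # not mapped: would trigger a page fault

insert(0x12345, 7)
insert(0x54321, 2)
print(lookup(0x12345))   # 7
```

Two pages that hash to the same bucket simply extend the same chain, which is exactly the collision behaviour the hashed page table relies on.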



Hash table example



Hashed Page Table



Inverted Page Table
• One entry for each real page of memory
• Entry consists of the virtual address of the
page stored in that real memory location,
with information about the process that
owns that page
• Decreases memory needed to store each
page table, but increases time needed to
search the table when a page reference
occurs
• Use hash table to limit the search to one
— or at most a few — page-table entries
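A minimal sketch of the inverted table, with hypothetical contents: one entry per physical frame, each holding (process id, virtual page number), and a linear search whose matching index is the frame number.

```python
inverted_table = [
    ("P1", 0),   # frame 0 holds page 0 of process P1
    ("P2", 3),   # frame 1 holds page 3 of process P2
    ("P1", 2),   # frame 2 holds page 2 of process P1
]

def frame_of(pid, vpn):
    # Search every entry; the index of the match IS the frame number.
    for frame, entry in enumerate(inverted_table):
        if entry == (pid, vpn):
            return frame
    raise MemoryError("page fault: no frame holds this page")

print(frame_of("P2", 3))   # 1
```

The linear search here is the cost the slide mentions; the hash table mentioned above replaces this scan with a lookup of one or a few entries.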
Inverted Page Table Architecture



PAGING and SEGMENTATION COMPARISON

1. Memory size
   Paging: a process address space is broken into fixed-sized blocks called pages.
   Segmentation: a process address space is broken into varying-sized blocks called sections.

2. Accountability
   Paging: the operating system divides the memory into pages.
   Segmentation: the compiler is responsible for calculating the segment size, the virtual address and the actual address.

3. Size
   Paging: page size is determined by the available memory.
   Segmentation: section size is determined by the user.

4. Speed
   Paging: faster in terms of memory access.
   Segmentation: slower than paging.



PAGING and SEGMENTATION COMPARISON (contd.)

5. Fragmentation
   Paging: can cause internal fragmentation, as some pages may go underutilized.
   Segmentation: can cause external fragmentation, as some memory blocks may not be used at all.

6. Logical address
   Paging: a logical address is divided into page number and page offset.
   Segmentation: a logical address is divided into section number and section offset.

7. Table
   Paging: the OS maintains a page table for each process.
   Segmentation: the OS maintains a segment table for each process.

8. Data storage
   Paging: the page table stores the page data.
   Segmentation: the segmentation table stores the segmentation data.



Segmentation
• Memory-management scheme that supports user view of
memory
• A program is a collection of segments. A segment is a logical
unit such as:
main program,
procedure,
function,
method,
object,
local variables, global variables,
common block,
stack,
symbol table, arrays
User’s View of a Program



Logical View of Segmentation

(figure: segments 1–4 laid out contiguously in user space are mapped to
scattered, noncontiguous regions of physical memory)



Segmentation Architecture
• Logical address consists of a two tuple:
<segment-number, offset>,
• Segment table – maps two-dimensional physical
addresses; each table entry has:
• base – contains the starting physical address where the
segments reside in memory
• limit – specifies the length of the segment
• Segment-table base register (STBR) points to the
segment table’s location in memory
• Segment-table length register (STLR) indicates
number of segments used by a program;
segment number s is legal if s < STLR



Segmentation Architecture (Cont.)
• Protection
• With each entry in segment table associate:
• validation bit = 0 → illegal segment
• read/write/execute privileges
• Protection bits associated with segments;
code sharing occurs at segment level
• Since segments vary in length, memory
allocation is a dynamic storage-allocation
problem
• A segmentation example is shown in the
following diagram



Segmented Paging

In segmented paging,
• The process is first divided into segments and then each segment is
divided into pages.
• These pages are then stored in the frames of main memory.
• A page table exists for each segment that keeps track of the frames
storing the pages of that segment.
• Each page table occupies one frame in the main memory.
• Number of entries in the page table of a segment = number of pages
that the segment is divided into.
• A segment table exists that keeps track of the frames storing the
page tables of the segments.
• Number of entries in the segment table of a process = number of
segments that the process is divided into.
• The base address of the segment table is stored in the segment-table
base register.
Segmented Paging
In segmented paging, the main memory is divided into variable-size
segments which are further divided into fixed-size pages.
1. Pages are smaller than segments.
2. Each segment has a page table, which means every program has
multiple page tables.
3. The logical address is represented as segment number (base address),
page number and page offset.
Segment number → points to the appropriate segment.
Page number → points to the exact page within the segment.
Page offset → used as an offset within the page frame.
Each page table contains information about every page of the
segment.
The segment table contains information about every segment.
Each segment table entry points to a page table,
and every page table entry is mapped to one of the pages within the segment.



Translation of logical address to physical address

• The CPU generates a logical address which is divided into
two parts: segment number and segment offset.
• The segment offset must be less than the segment limit.
• The offset is further divided into page number and page offset.
• To locate the entry in the page table, the page
number is added to the page table base.
• The frame number combined with the page offset is used to address
main memory and fetch the desired word in the page of the
given segment of the process.
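The two-step segment → page → frame translation described above can be sketched as follows; all table contents and sizes here are hypothetical.

```python
PAGE_SIZE = 1024
# Segment table: segment number -> (page table, segment limit in bytes).
segment_table = {
    0: ({0: 4, 1: 9}, 2048),   # segment 0 spans two pages
    1: ({0: 2}, 1024),         # segment 1 spans one page
}

def translate(s, offset):
    page_table, limit = segment_table[s]
    if offset >= limit:                 # offset must be below the segment limit
        raise MemoryError("trap: offset beyond segment limit")
    p, d = divmod(offset, PAGE_SIZE)    # split the offset into page number + page offset
    return page_table[p] * PAGE_SIZE + d

print(translate(0, 1030))   # page 1 of segment 0, offset 6 -> frame 9 -> 9222
```

The segment lookup and the page lookup each correspond to one table access, which is why segmented paging costs more translation work than plain paging.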



Translation of logical address to physical address



Advantages of Segmented Paging
• It reduces memory usage.
• Page table size is limited by the segment size.
• The segment table has only one entry per actual segment.
• There is no external fragmentation.
• It simplifies memory allocation.

Disadvantages of Segmented Paging
• There is internal fragmentation.
• The complexity level is much higher as compared to plain paging.
• Page tables need to be contiguously stored in memory.



Segmentation Hardware



Example of Segmentation



Virtual Memory
• A computer can address more memory than the amount
physically installed on the system. This extra memory is
actually called virtual memory and it is a section of a
hard disk that's set up to emulate the computer's RAM.
• The main visible advantage of this scheme is that
programs can be larger than physical memory.
• Virtual memory serves two purposes.
I. First, it allows us to extend the use of physical memory
by using disk.
II. Second, it allows us to have memory protection,
because each virtual address is translated to a physical
address.
Virtual Memory
The following are situations when the entire program is not required to be loaded fully in main
memory:
• User-written error-handling routines are used only when an error occurs in the data or
computation.
• Certain options and features of a program may be used rarely.
• Many tables are assigned a fixed amount of address space even though only a small amount
of the table is actually used.
The ability to execute a program that is only partially in memory would confer many
benefits:
• Fewer I/O operations would be needed to load or swap each user program into memory.
• A program would no longer be constrained by the amount of physical memory that is
available.
• Each user program could take less physical memory, so more programs could be run at the
same time, with a corresponding increase in CPU utilization and throughput.



Virtual Memory
•  The MMU's job is to
translate virtual addresses
into physical addresses. A
basic example is given
below −



Virtual Memory
• Virtual memory is commonly implemented by demand paging.
• It can also be implemented in a segmentation system.
• Demand segmentation can also be used to provide virtual memory.



Demand Paging
• A demand paging system is quite similar to a paging system with
swapping where processes reside in secondary memory and pages are
loaded only on demand, not in advance.
• When a context switch occurs, the operating system does not copy
any of the old program’s pages out to the disk or any of the new
program’s pages into the main memory
• Instead, it just begins executing the new program after loading the
first page and fetches that program’s pages as they are referenced
• While executing a program, if the program references a page which is
not available in main memory because it was swapped out a little
while ago, the processor treats this invalid memory reference as a page
fault and transfers control from the program to the operating system to
demand the page back into memory.
Advantages & Disadvantages: Demand Paging
• Advantages
• Following are the advantages of Demand Paging −
• Large virtual memory.
• More efficient use of memory.
• There is no limit on degree of multiprogramming.
• Disadvantages
• Number of tables and the amount of processor overhead for handling
page interrupts are greater than in the case of the simple paged
management techniques.



What is page replacement?
• When data located in secondary memory is
needed, it can be retrieved back to main memory.
• Moving data from main memory to
secondary memory → swapping out
• Retrieving data back to main memory → swapping in



Page Replacement Algorithm

• Page replacement algorithms are the techniques by which an operating
system decides which memory pages to swap out (write to disk) when a
page of memory needs to be allocated. Paging happens whenever a page
fault occurs and a free page cannot be used for the allocation, either
because no pages are available or because the number of free
pages is lower than required.
• When the page that was selected for replacement and paged out is
referenced again, it has to be read in from disk, and this requires waiting
for I/O completion. This process determines the quality of the page
replacement algorithm: the less time spent waiting for page-ins, the
better the algorithm.



Page Replacement Algorithm

• A page replacement algorithm looks at the limited information about
page accesses provided by the hardware and tries to select which
pages should be replaced so as to minimize the total number of page
misses, while balancing this against the costs of primary storage and
of the processor time the algorithm itself consumes.
• There are many different page replacement algorithms.
• We evaluate an algorithm by running it on a particular string of
memory references and computing the number of page faults.
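This evaluation method can be tried directly. The sketch below (an illustrative implementation, not a definitive one) counts page faults for the simplest of the algorithms listed next, First In First Out, on a given reference string with a fixed number of frames:

```python
from collections import deque

def fifo_faults(reference_string, num_frames):
    """Count page faults under FIFO replacement with num_frames frames."""
    frames = deque()   # oldest-loaded page sits at the left end
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()      # evict the page loaded earliest
            frames.append(page)
    return faults

# A commonly used textbook reference string, evaluated with 3 frames:
refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3))  # -> 15
```

Running the same function with a different frame count or reference string is exactly the evaluation procedure described above: fix a reference string, run the algorithm, count the faults.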



Algorithms
• First In First Out
• Optimal Replacement
• Not Recently Used
• Second Chance
• CLOCK
• Not Frequently Used
• Least Recently Used
• Random Replacement
• Working Set Replacement
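Of the algorithms listed, Least Recently Used is a useful contrast to FIFO: it evicts the page whose last use is furthest in the past. A minimal sketch (illustrative only; the `OrderedDict`-based recency tracking is an implementation choice, not the only one):

```python
from collections import OrderedDict

def lru_faults(reference_string, num_frames):
    """Count page faults under Least Recently Used replacement."""
    frames = OrderedDict()   # iteration order = least to most recently used
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)        # mark as most recently used
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.popitem(last=False)  # evict least recently used page
            frames[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))  # -> 12
```

On this reference string with 3 frames, LRU (12 faults) beats FIFO (15 faults), which is the usual motivation for recency-based replacement.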
Thrashing
• In computer science, thrashing occurs when a computer's virtual
memory resources are overused, leading to a constant state of paging
and page faults, inhibiting most application-level processing.
• This causes the performance of the computer to degrade or collapse.
• The situation can continue indefinitely until either the user closes
some running applications or the active processes free up additional
virtual memory resources.



Causes of Thrashing:
• High degree of multiprogramming: If the number of processes in memory
keeps increasing, the number of frames allocated to each process
decreases, so fewer frames are available to each process. Because of
this, page faults occur more frequently, more CPU time is wasted just
swapping pages in and out, and CPU utilization keeps decreasing. For
example:
Let free frames = 400
Case 1: Number of processes = 100
Then each process gets 4 frames.
Case 2: Number of processes = 400
Each process gets 1 frame.
Case 2 is a condition of thrashing: as the number of processes
increases, frames per process decrease, and CPU time is consumed in
just swapping pages.
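The two cases above are simple integer division (the numbers are the illustrative ones from the example, not measurements):

```python
# Frames available per process for the example above.
free_frames = 400

for num_processes in (100, 400):
    frames_per_process = free_frames // num_processes
    print(num_processes, "processes ->", frames_per_process, "frame(s) each")
# 100 processes -> 4 frame(s) each
# 400 processes -> 1 frame(s) each
```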



Causes of Thrashing:
• Lack of frames:
• If a process has fewer frames, fewer of its pages can reside in
memory, and hence more frequent swapping in and out is required.
• This may lead to thrashing. Hence a sufficient number of frames must be
allocated to each process in order to prevent thrashing.



Techniques to Handle Thrashing
• Working Set Model
• This model is based on locality.
• Locality says that a page used recently is likely to be used again,
and that the pages near it are likely to be used as well.
• The working set is the set of pages referenced in the most recent D
time units. A page that has gone D time units in the working set
without being referenced is automatically dropped from it.
• So the accuracy of the working set depends on the D we have chosen.
• The working set model avoids thrashing while keeping the degree of
multiprogramming as high as possible.
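The definition above can be computed directly: the working set at time t is the set of distinct pages referenced in the last D time units. A minimal sketch (the window size and reference string are illustrative assumptions):

```python
def working_set(reference_string, t, delta):
    """Distinct pages referenced in the window of the last delta references
    ending at time t (inclusive) — the working set at time t."""
    start = max(0, t - delta + 1)
    return set(reference_string[start:t + 1])

refs = [1, 2, 1, 3, 4, 4, 3, 3, 3, 3]
print(sorted(working_set(refs, 4, 5)))  # -> [1, 2, 3, 4]
print(sorted(working_set(refs, 9, 5)))  # -> [3, 4]  (locality has shifted)
```

Notice how the working set shrinks as the process settles into a new locality: pages that have not been referenced within the window D drop out, which is exactly the automatic-drop behaviour described above.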



Page Fault Frequency
• It is a more direct approach than the
working set model.
• When a process is thrashing, we know
it has too few frames.
• And if its page-fault rate is very low,
it may have too many frames.
• Based on this property, we set an upper
and a lower bound on the desired
page-fault rate.
• According to the page-fault rate, we
allocate or remove frames. If the page-
fault rate falls below the lower limit,
frames can be removed from the process.
• Similarly, if the page-fault rate rises
above the upper limit, more frames can
be allocated to the process.
• And if no frames are available despite a
high page-fault rate, we suspend the
process and restart it later when frames
become available.
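The control rule above can be sketched as a small decision function (the bounds, function name, and parameters are assumed example values, not part of any real scheduler):

```python
# Sketch of page-fault-frequency control: keep each process's fault rate
# inside a band by adding or reclaiming frames.
LOWER, UPPER = 0.02, 0.10   # assumed acceptable page-fault-rate band

def adjust_frames(fault_rate, frames, free_frames):
    if fault_rate > UPPER and free_frames > 0:
        return frames + 1   # thrashing risk: give the process another frame
    if fault_rate < LOWER and frames > 1:
        return frames - 1   # faulting rarely: reclaim a frame
    return frames           # within bounds: leave the allocation alone

print(adjust_frames(0.15, 4, free_frames=10))  # -> 5 (rate too high)
print(adjust_frames(0.01, 4, free_frames=10))  # -> 3 (rate too low)
print(adjust_frames(0.05, 4, free_frames=10))  # -> 4 (within the band)
```

The suspend-and-restart case from the last bullet would correspond to `fault_rate > UPPER` with `free_frames == 0`, where this sketch simply leaves the allocation unchanged.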
Thank You
