
Computer Organization

(EE 311)

Dr. B. Kiran Babu


Ad-hoc Faculty
EEE, NIT AP – 534101
kiranbabu.b@gmail.com, 9177667874
Content to be discussed:
❖ Storage Devices

❖ Types of Storage
➢ Primary Storage - Internal memory
➢ Secondary Storage – External memory
➢ Tertiary Storage – External memory
➢ Off-line Storage – External memory

❖ Other Examples of Storage Devices

❖ Characteristics of Secondary Memory


Storage Devices
➢ Computer storage comprises the computer components used to store data.

➢ Provides one of the core functions of the contemporary computer.

Types of Storage
1. Primary Storage - Internal memory
2. Secondary Storage – External memory
3. Tertiary Storage – External memory
4. Off-line Storage – External memory
1. Primary Storage - Internal memory
➢ Primary storage is also known as processor memory, primary memory, main memory, or internal memory.

➢ Main memory is directly or indirectly connected to the central processing unit (CPU) via a bus.

➢ The CPU continuously reads instructions stored there and executes them as required.

➢ Example:
❑ RAM (DRAM & SRAM)
❑ ROM (PROM, EPROM & EEPROM)
❑ Cache

Advantages and Disadvantages:


➢ It has faster access times but is expensive and limited in size.
➢ It is volatile.
2. Secondary Storage - External memory
➢ To store large amounts of data or programs permanently, we need a cheaper and permanent memory. Such memory is called secondary memory.

➢ Secondary memory devices can be used to store large amounts of data, audio, video and multimedia files.

➢ It is not directly accessible by the CPU.

➢ The computer usually uses its input/output channels to access secondary storage, transferring the desired data through an intermediate area in primary storage.

Example:
– Hard disk
2. Secondary Storage - External memory

Hard Disk

➢ The hard disk drive is the main and usually largest data storage device in a computer.
➢ It can store anywhere from 160 gigabytes to 2 terabytes.
➢ Hard disk speed: the speed at which content can be read and written on a hard disk.
➢ A hard disk unit comes with a rotation speed varying from 4500 to 7200 rpm.
➢ Disk access time is measured in milliseconds.


3. Tertiary Storage – External memory

➢ Typically it involves a robotic mechanism which will mount (insert) and dismount removable mass
storage media into a storage device.

➢ It is a comprehensive computer storage system that is usually very slow, so it is usually used to
archive data that is not accessed frequently.

➢ This is primarily useful for extraordinarily large data stores, accessed without human operators.

Examples:

❑ Magnetic Tape
❑ Optical Disc
❑ Tape Libraries
❑ Optical Jukeboxes
3. Tertiary Storage – External memory

Magnetic Tape

➢ A magnetically coated strip of plastic on which data can be encoded.
➢ Tapes for computers are similar to the tapes used to store music.
➢ Tape is much less expensive than other storage media but is much slower; it is commonly used for backup.
3. Tertiary Storage – External memory

Optical Disc
➢ An optical disc is a storage medium that holds content in digital format and is read using a laser assembly.

➢ The most common types of optical media are


– Blu-ray (BD)
– Compact Disc (CD-ROM)
– Digital Versatile Disc (DVD)
3. Tertiary Storage – External memory

Tape Libraries

➢ These may contain one or more tape drives, a barcode reader for the tapes, and a robot to load the tapes.
➢ The capacity of these tape libraries is more than a thousand times that of hard drives, so they are useful for storing large amounts of data.

Optical Jukeboxes

➢ These are storage devices that can handle optical disks and provide tertiary
storage ranging from terabytes to petabytes.
➢ They can also be called optical disk libraries, robotic drives etc.
4. Off-line Storage – External memory

➢ Also known as disconnected storage.

➢ It is computer data storage on a medium or a device that is not under the control of a processing unit.

➢ It must be inserted or connected by a human operator before a computer can access it again.

Examples:
– Floppy Disk
– Zip diskette
– USB Flash drive
– Memory card
4. Off-line Storage – External memory

Floppy Disk
➢ It is a soft magnetic disk.
➢ Floppy disks are portable.
➢ Floppy disks are slower to access than hard disks and have less storage capacity, but they are much
less expensive.
➢ Can store up to 1.44 MB of data.
➢ Two common sizes: 5¼” and 3½”.
4. Off-line Storage – External memory

Zip Diskette

➢ A hardware data storage device developed by Iomega that functions like a standard 1.44″ floppy drive.
➢ Capable of holding up to 100 MB of data, or 250 MB on newer drives.
➢ It is now less popular, as users need larger storage capacities.
4. Off-line Storage – External memory

USB Flash Drive


➢ A small portable flash memory card that plugs into a computer’s USB port and functions as a portable
hard drive.

➢ Flash drives are available in sizes such as 256 MB, 512 MB, 1 GB, 5 GB and 16 GB, and are an easy way to transfer and store information.
4. Off-line Storage – External memory
Memory Card

➢ An electronic flash memory storage disk commonly used in consumer electronic devices such as digital
cameras, MP3 players, mobile phones, and other small portable devices.

➢ Memory cards are usually read by connecting the device containing the card to your computer, or by
using a USB card reader.
Other Examples of Storage Devices
➢ Punch Card
➢ Cloud Storage

Punched Card
➢ An early method of data storage used with early computers.
➢ Punched cards are also known as Hollerith cards.
➢ They contain several punched holes that represent data.

Cloud Storage
➢ Cloud storage means "the storage of data online in the cloud," wherein data is stored in and accessible from multiple distributed and connected resources that comprise a cloud.
➢ Cloud storage can provide the benefits of greater
accessibility and reliability; rapid deployment; strong
protection for data backup, archival and disaster
recovery purposes.
Characteristics of Secondary Memory – External memory

These are some characteristics of secondary memory, which distinguish it from primary
memory −

➢ It is non-volatile, i.e. it retains data when power is switched off.

➢ It has large capacities, to the tune of terabytes.

➢ It is cheaper as compared to primary memory.


THANK YOU
Computer Organization
(EE 311)

Dr. B. Kiran Babu


Ad-hoc Faculty
EEE, NIT AP – 534101
kiranbabu.b@gmail.com, 9177667874
Content to be discussed:

❖ Introduction to Operating System (OS)

➢ What is an Operating System?

➢ Operating system goals

➢ Operating System (OS) as a Resource Manager (RM)


What is an Operating System?
• A modern computer consists of:

➢ One or more processors
➢ Main memory
➢ Disks
➢ Printers
➢ Various input/output devices.

• Managing all these components requires a layer of software – the Operating System (OS).

▪ An Operating System is a collection of programs designed to manage the system’s resources, namely, memory, processors, peripheral
devices, and information.

▪ A program that acts as an intermediary between a user of a computer and the computer hardware, and controls the execution of all kinds of programs.

▪ An operating system is a software which performs all the basic tasks like
file management, memory management, process management, handling input
and output, and controlling peripheral devices such as disk drives and printers.

▪ Ex: Windows, Unix, Linux, Ubuntu, OS/400, Advanced Interactive Executive (AIX),
z/OS, Virtual Memory System (VMS), etc.
Operating system goals:
❖ Execute user programs and make solving user problems easier

❖ Make the computer system convenient to use

❖ Use the computer hardware in an efficient manner

❖ By means of passwords and similar techniques, it prevents unauthorized access to programs and data.

❖ Recording delays between requests for a service and the response from the system.

❖ Keeping track of time and resources used by various jobs and users.

❖ Production of error messages, and other debugging and error detecting aids.

❖ Coordination and assignment of compilers, interpreters, assemblers and other software to the various users of the computer systems.
Operating System (OS) as a Resource Manager (RM):

➢ The Operating System is a manager of system resources. Since there can be many conflicting requests for the resources, the
Operating System must decide which requests are to be allocated resources to operate the computer system fairly and efficiently.

Basic Functions of an operating System:


➢ Memory Management
➢ Processor/Process Management
➢ Device Management
➢ File Management
➢ Network Management
Memory Management:
❑ Main memory provides storage that can be accessed directly by the CPU. For a program to be executed, it must be in main memory.

❑ Memory management refers to the management of Primary Memory or Main Memory. Main memory is
a large array of words or bytes where each word or byte has its own address.

❑ An Operating System does the following activities for memory management:

✓ Keeps track of primary memory, i.e., which parts of it are in use and by whom, and which parts are not in use.

✓ In multiprogramming, the OS decides which process will get memory when and how much.

✓ Allocates the memory when a process requests it.

✓ De-allocates the memory when a process no longer needs it.


Processor/Process Management:
In a multiprogramming environment, the OS decides which process gets the processor, when, and for how much time. This function is called process scheduling.

An Operating System does the following activities for processor management:

✓ Keeps track of the processor and the status of each process. The program responsible for this task is known as the traffic controller.

✓ Allocates the processor (CPU) to a process.

✓ De-allocates processor when a process is no longer required.


Device Management

An Operating System (OS) manages device communication via their respective drivers.

It does the following activities for device management:

Keeps track of all devices. The program responsible for this task is known as the I/O controller.

Decides which process gets the device when and for how much time.

Allocates the device in the most efficient way.

De-allocates devices.
File Management:
A file is a unit of (usually named) information stored on a computer. It may be a document, a webpage or a wide
range of other types of information.

A file system is normally organized into directories for easy navigation and usage.

These directories may contain files and other directories.

An Operating System does the following activities for file management:

✓ Keeps track of information, location, usage, status, etc. These collective facilities are often known as the file system.

✓ Decides who gets the resources.

✓ Allocates the resources.

✓ De-allocates the resources.


Network Management:

❑ An Operating System is responsible for networking the computer system in a distributed environment.

❑ A distributed system is a collection of processors that do not share memory, a clock, or any peripheral devices. Instead, each processor has its own clock and RAM, and they communicate through a network.

❑ Various networking protocols are TCP/IP (Transmission Control Protocol/ Internet Protocol), UDP (User
Datagram Protocol), FTP (File Transfer Protocol), HTTP (Hyper Text Transfer Protocol), NFS (Network File
System), etc.
THANK YOU
Computer Organization
(EE 311)

Dr. B. Kiran Babu


Ad-hoc Faculty
EEE, NIT AP – 534101
kiranbabu.b@gmail.com, 9177667874
Content to be discussed:

➢ Evolution of OS

➢ Types of Operating Systems


Evolution of OS
Operating systems have evolved through a number of stages: (i) serial processing, (ii) batch processing, (iii) multiprogramming.

Serial Processing:
✓ No operating system
✓ Machines run from a console with display lights (error messages), input device (punch card, tape) and printer (for
output)
✓ Setup included: loading and compiling the program, and loading and linking common functions – very time
consuming (errors!)
Batch Processing:

✓ Users submit jobs to an operator, and the operator batches the jobs


✓ Monitor controls sequence of events to process batch
✓ When one job is finished, control returns to Monitor which reads next job

Multiprogramming
✓ In multiprogramming, many processes are simultaneously resident in memory, and execution switches between
processes.
✓ More efficient use of computer time.
Operating System Types:

The most commonly used types of operating systems are:

✓ Batch Operating System


✓ Time-sharing Operating Systems
✓ Distributed Operating System
✓ Network Operating System
✓ Real-Time Operating System
Batch Operating System:
❖ The users of a batch operating system do not interact with the computer directly. Each user prepares his
job on an off-line device like punch cards and submits it to the computer operator. To speed up
processing, jobs with similar needs are batched together and run as a group. The programmers leave their
programs with the operator and the operator then sorts the programs with similar requirements into
batches.
❖ The problems with Batch Systems are as follows:
Lack of interaction between the user and the job.

CPU is often idle, because the speed of the mechanical I/O devices is slower than the CPU.

Difficult to provide the desired priority.

Examples of Batch based Operating System:


Payroll System, Bank Statements, etc.
Time-sharing Operating Systems
• Each task is given some time to execute so that all the tasks work smoothly.
• Each user gets a share of CPU time while using a single system.
• These systems are also known as Multitasking Systems.
• The tasks can come from a single user or from different users.
• The time that each task gets to execute is called the quantum. After this time interval is over, the OS switches over to the next task.
• Time-Sharing Systems are used to minimize response time.

Advantages of Timesharing operating systems are as follows:


Provides the advantage of quick response
Avoids duplication of software
Reduces CPU idle time
Disadvantages of Time-sharing operating systems are as follows:
Problem of reliability
Question of security and integrity of user programs and data

Examples of Time-Sharing OSs are: Multics, Unix, etc.


Distributed Operating System
➢ Distributed systems use multiple central processors to serve multiple real-time applications and multiple users.

➢ The processors communicate with one another through various communication lines (such as high-speed buses). These are referred to as loosely coupled systems or distributed systems. Processors in a distributed system may vary in size and function, and are referred to as sites, nodes, computers, and so on.
Examples of Distributed Operating Systems are LOCUS, etc.
Advantages:
With resource sharing facility, a user at one site may be able to use the resources available at another.
Speeds up the exchange of data among sites via electronic mail.
If one site fails in a distributed system, the remaining sites can potentially continue operating.
Better service to the customers.
Reduction of the load on the host computer.
Reduction of delays in data processing.

Disadvantages
• Failure of the main network will stop the entire communication
Network Operating System
A Network Operating System runs on a server and provides the server the capability to manage data, users,
groups, security, applications, and other networking functions. The primary purpose of the network operating
system is to allow shared file and printer access among multiple computers in a network, typically a local area
network (LAN), a private network, or other networks.

Ex: Microsoft Windows Server 2003, Microsoft Windows Server 2008, UNIX, Linux, Mac OS X,
Novell NetWare, and BSD.

Advantages:
➢ Highly stable.
➢ Security is server managed.
➢ Remote access to servers is possible from different locations
Disadvantages:
➢ High cost of buying and running a server.
➢ Dependency on a central location for most operations.
➢ Regular maintenance and updates are required.
Real-Time Operating System
➢ These types of OSs serve real-time systems.
➢ A real-time system is defined as a data processing system in which the time interval required to process and respond
to inputs is so small that it controls the environment.

➢ Real-time systems are used when there are time requirements that are very strict like missile systems, air traffic
control systems, robots, etc.
EX: Scientific experiments, medical imaging systems, industrial control systems, weapon systems, robots, air
traffic control systems, etc.
Advantages:
➢ Focus is on running applications, with less importance given to applications waiting in the queue.
➢ These types of systems are error-free.
➢ Memory allocation is best managed in these types of systems.
Disadvantages
➢ Very few tasks run at the same time, and the system concentrates on only a few applications in order to avoid errors.
➢ The algorithms are very complex and difficult for the designer to write.
➢ Sometimes the system resources are not so good and they are expensive as well.
THANK YOU
Computer Organization
(EE 311)

Dr. B. Kiran Babu


Ad-hoc Faculty
EEE, NIT AP – 534101
kiranbabu.b@gmail.com, 9177667874
Content to be discussed:

❖ Memory Management in Operating System


➢ Memory Hierarchy

➢ Definition

➢ Terms used in MM

➢ Requirements
Memory hierarchy
❑ What is the memory hierarchy?
✓ Different levels of memory
✓ Some are small & fast
✓ Others are large & slow

❑ What levels are usually included?

✓ Cache: small amount of fast, expensive memory


▪ L1 (level 1) cache: usually on the CPU chip
▪ L2 & L3 cache: off-chip, made of SRAM
✓ Main memory: medium-speed, medium price memory (DRAM)
✓ Disk: many gigabytes of slow, cheap, non-volatile storage
Definition of Memory management of OS

❖ It is the process of

➢ allocating primary memory to user programs

➢ reclaiming that memory when it is no longer needed

➢ protecting each user’s memory area from other user programs; i.e., ensuring that
each program only references memory locations that have been allocated to it
Memory Management Important Terms

Frame: A fixed-length block of main memory.

Page: A fixed-length block of data that resides in secondary memory (such as disk).
A page of data may temporarily be copied into a frame of main memory.

Segment: A variable-length block of data that resides in secondary memory. An entire segment may temporarily be copied into an available region of main memory (segmentation), or the segment may be divided into pages which can be individually copied into main memory (combined segmentation and paging).
Memory Management Requirements
❖ In order to manage memory effectively the OS must have

✓ Memory allocation policies

✓ Methods to track the status of memory locations (free or allocated)

✓ Policies for pre-empting memory from one process to allocate to another

❖ Memory management is intended to satisfy the following requirements:


➢ Relocation
➢ Protection
➢ Sharing
➢ Logical organization
➢ Physical organization
Relocation
➢ Relocation is the process of adjusting program addresses to match the actual physical addresses
where the program resides when it executes
➢ Programmers typically do not know in advance which other programs will be resident in main memory
at the time of execution of their program

✓ Active processes need to be able to be swapped in and out of main memory in order to maximize
processor utilization

✓ Specifying that a process must be placed back in the same memory region when it is swapped in would be too restrictive
▪ the OS may instead need to relocate the process to a different area of memory

Protection
➢ Processes need to acquire permission to reference memory locations for reading or writing purposes

➢ Location of a program in main memory is unpredictable

➢ Memory references generated by a process must be checked at run time

➢ Mechanisms that support relocation also support protection


Sharing
➢ It is advantageous to allow each process access to the same copy of the program rather than having its own separate copy

➢ Memory management must allow controlled access to shared areas of memory without compromising
protection

➢ Mechanisms used to support relocation support sharing capabilities

Logical Organization
✓ Main memory is organized as a linear, one-dimensional address space consisting of a sequence of bytes or words.
✓ Programs can be organized into modules (some of those are unmodifiable (read-only, execute only)
and some of those contain data that can be modified)
✓ Modules can be written and compiled independently
✓ Different degrees of protection given to modules (read-only, execute-only)
✓ Share modules among processes
✓ Segmentation is the tool that most readily satisfies requirements
Physical Organization

❑ The structure of computer memory has two levels referred to as main memory and
secondary memory.

❑ Main memory is relatively very fast and costly as compared to the secondary memory.

❑ Main memory is volatile. Thus secondary memory is provided for storage of data on a
long-term basis while the main memory holds currently used programs.

❑ Memory available for a program plus its data may be insufficient

➢ Overlaying allows various modules to be assigned the same region of memory

❑ Programmer does not know how much space will be available


Preparing Program for Execution
❖ Program Transformations
❑ Translation (Compilation)
❑ Linking
❑ Loading
THANK YOU
Computer Organization
(EE 311)

Dr. B. Kiran Babu


Ad-hoc Faculty
EEE, NIT AP – 534101
kiranbabu.b@gmail.com, 9177667874
Content to be discussed:

❖ Memory Management in Operating System


➢ Fragmentation

➢ Swapping

➢ Partitioning

➢ Virtual Memory

➢ Paging
Fragmentation
• As processes are loaded and removed from memory, the free memory space is broken into little pieces.
• After some time, it happens that processes cannot be allocated to memory blocks because the blocks are too small, and these memory blocks remain unused.
• This problem is known as Fragmentation.

Compaction: the exertion of force on something so that it becomes more dense; in memory management, rearranging memory contents so that scattered free areas are merged into one larger free block.

(Figure: fragmented memory before and after compaction.)

TWO TYPES of FRAGMENTATION:


1. External fragmentation: The total memory space is enough to satisfy a request or to hold a process, but it is not contiguous, so it cannot be used.
2. Internal fragmentation: The memory block assigned to a process is bigger than requested; some portion of the block is left unused and cannot be used by another process.
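
The two cases can be made concrete with a small sketch; the hole and block sizes below are hypothetical, chosen only to illustrate the definitions above.

```python
# Illustration of the two kinds of fragmentation
# (hole/block sizes are hypothetical, chosen only for intuition).

# External fragmentation: total free memory is sufficient,
# but no single contiguous hole is large enough for the request.
free_holes_kb = [30, 20, 25]              # scattered free holes
request_kb = 60
total_free_kb = sum(free_holes_kb)        # 75 KB free in total
can_allocate = any(hole >= request_kb for hole in free_holes_kb)
print(f"total free = {total_free_kb} KB, request = {request_kb} KB, "
      f"allocatable = {can_allocate}")    # allocatable = False

# Internal fragmentation: the block given to a process is bigger than needed,
# so the leftover space inside the block is wasted.
block_kb, process_kb = 8, 5
print(f"internal fragmentation = {block_kb - process_kb} KB")  # 3 KB wasted
```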
Swapping
➢ Swapping is a technique in which a process can be swapped temporarily out of main memory (or move)
to secondary storage (disk) and make that memory available to other processes.

➢ At some later time, the system swaps back the process from the secondary storage to main memory.

➢ Though performance is usually affected by the swapping process, it helps in running multiple large processes in parallel, and that is why swapping is also known as a technique for memory compaction.

➢ The total time taken by the swapping process includes the time it takes to move the entire process to the secondary disk and then copy it back to memory, as well as the time the process takes to regain main memory.
Swapping
Limitations of swapping

➢ Process must fit into physical memory (impossible to run larger processes)

➢ Memory becomes fragmented


✓ External fragmentation: lots of small free memory areas

✓ Compaction is needed to reassemble the larger free areas


Partitioning
1. Equal fixed-size partitions:
➢ The simplest scheme for partitioning the available memory is fixed-size partitions.
➢ Note that, although the partitions are of fixed size, they need not be of equal size.
➢ When a process is brought into memory, it is placed in the smallest available partition that will hold it (see the allocation sketch after this list).

2. Unequal fixed-size partitions:
➢ Even with the use of unequal fixed-size partitions, there will be wasted memory.
➢ In most cases, a process will not require exactly as much memory as provided by the partition.
➢ For example, a process that requires 3M bytes of memory would be placed in the 4M partition of Figure b, wasting 1M that could be used by another process.
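
A minimal sketch of the placement rule above (smallest free partition that will hold the process); the partition sizes and the helper place_process are hypothetical, chosen so that a 3 MB process lands in a 4 MB partition and wastes 1 MB.

```python
# Sketch: place a process in the smallest free fixed-size partition that
# can hold it. Partition sizes and place_process() are hypothetical.

def place_process(partitions, process_kb):
    """partitions: list of dicts {'size': KB, 'used_by': None or process size}."""
    candidates = [p for p in partitions
                  if p['used_by'] is None and p['size'] >= process_kb]
    if not candidates:
        return None                                   # nothing large enough is free
    best = min(candidates, key=lambda p: p['size'])   # smallest partition that fits
    best['used_by'] = process_kb
    return best

partitions = [{'size': s, 'used_by': None} for s in (2048, 4096, 6144, 8192)]
chosen = place_process(partitions, 3072)              # a 3 MB process
if chosen:
    waste = chosen['size'] - chosen['used_by']        # internal fragmentation
    print(f"placed in the {chosen['size']} KB partition, wasting {waste} KB")
    # -> placed in the 4096 KB partition, wasting 1024 KB (the 3M-in-4M case)
```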
Advantages of Fixed-size Partition Scheme
• This scheme is simple and is easy to implement
• It supports multiprogramming as multiple processes can be stored inside the main memory.
• Management is easy using this scheme

Disadvantages of Fixed-size Partition Scheme


• Internal Fragmentation
• Limitation on the size of the process
• External Fragmentation
• Degree of multiprogramming is less
Partitioning
3. Variable-size partitions (or) Dynamic Partitioning:

➢ A more efficient approach is to use variable-size partitions.

➢ When a process is brought into memory, it is allocated exactly as much memory as it requires and no more.
Advantages of Variable-size Partition Scheme
• No Internal Fragmentation
• Degree of Multiprogramming is Dynamic
• No Limitation on the Size of Process

Disadvantages of Variable-size Partition Scheme


• External Fragmentation
• Difficult Implementation

Note: Compaction technique can be used to overcome this problem.


Virtual Memory

❖ A computer can address more memory than the amount physically installed on the system. This extra
memory is actually called virtual memory.

❖ Basic idea: allow the OS to hand out more memory than exists on the system.

❖ Keep recently used stuff in physical memory.

❖ Move less recently used stuff to disk.

❖ Virtual memory (VM) especially helpful in multiprogrammed system


✓ CPU schedules process B while process A waits for its memory to be retrieved from disk
Virtual and physical addresses

❖ Program uses virtual addresses
• Addresses local to the process
• MMU translates virtual addresses to physical addresses

❖ Translation done by the Memory Management Unit (MMU)
• Usually on the same chip as the CPU
• Only physical addresses leave the CPU/MMU chip

❖ Physical memory indexed by physical addresses

(Figure: the CPU chip contains the CPU and MMU; virtual addresses pass from the CPU to the MMU, and physical addresses travel on the bus to memory and the disk controller.)
Paging
➢ It is a memory management scheme that is used to retrieve processes from the secondary memory (hard disk) in the form of pages and store them in the main memory.
➢ Pages of a process are only brought from the secondary memory to the main memory when they are needed.
➢ The main objective of paging is to divide each process into pages (of fixed size).
➢ These pages are stored in the frames of main memory.
➢ The paging technique plays an important role in implementing virtual memory.
➢ Virtual addresses are mapped to physical addresses.
➢ A page table translates virtual page numbers to physical page numbers.
✓ Not all of virtual memory has a physical page
✓ Not every physical page need be used
➢ Example:
✓ 64 KB virtual address space
✓ 32 KB physical memory

(Figure: the 64 KB virtual address space divided into sixteen 4 KB pages, some of which map to the eight 4 KB frames of the 32 KB physical memory.)
Mapping: logical → physical address
• Split the address from the CPU into two pieces:
• Page number (p): used as an index into the page table; the page table contains the base address of the page in physical memory.
• Page offset (d): added to the base address to get the actual physical memory address.
• Page size = 2^d bytes.

Example: with 4 KB (= 4096 byte) pages and 32-bit logical addresses, 2^d = 4096 gives d = 12, so the logical address splits into a 20-bit page number p (32 - 12 = 20 bits) and a 12-bit offset d.
Address translation architecture

(Figure: the CPU issues a logical address (p, d); the page number p indexes the page table to obtain the frame number f, which is combined with the offset d to address physical memory.)

Page table: used to keep track of the relation between a page of a process and a frame in physical memory.
Page address (or logical address) = page number + offset
Frame address (or physical address) = frame number + offset
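
A minimal sketch of this translation, assuming 4 KB pages, 32-bit logical addresses, and a small hypothetical page table that loosely mirrors the earlier 64 KB / 32 KB example:

```python
# Sketch of logical -> physical address translation with 4 KB pages.
# The page-table contents below are hypothetical, loosely mirroring the
# earlier 64 KB virtual / 32 KB physical example.

PAGE_SIZE = 4096          # 2**12 bytes, so the offset d is 12 bits wide
OFFSET_BITS = 12

# page table: virtual page number p -> frame number f (absent = not resident)
page_table = {0: 7, 1: 4, 4: 0, 7: 3, 10: 1, 11: 5, 12: 6}

def translate(logical_addr):
    p = logical_addr >> OFFSET_BITS        # page number (upper bits)
    d = logical_addr & (PAGE_SIZE - 1)     # page offset (lower 12 bits)
    if p not in page_table:
        raise RuntimeError(f"page fault: page {p} is not in main memory")
    f = page_table[p]
    return (f << OFFSET_BITS) | d          # frame address = frame number + offset

logical = 0x1A2B                           # page 1, offset 0xA2B
print(hex(translate(logical)))             # frame 4 -> 0x4a2b
```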
Advantages and Disadvantages of Paging

➢ Due to the equal size of pages and frames, swapping becomes very easy.

➢ It reduces external fragmentation but still suffers from internal fragmentation.

➢ It is simple to implement and is regarded as an efficient memory management technique.

➢ The page table requires extra memory space, so paging may not be suitable for a system with a small RAM.
THANK YOU
Computer Organization
(EE 311)

Dr. B. Kiran Babu


Ad-hoc Faculty
EEE, NIT AP – 534101
kiranbabu.b@gmail.com, 9177667874
Content to be discussed:

❖ Page Replacement Algorithms


➢ First In First Out (FIFO)

➢ Least Recently Used (LRU)

➢ Optimal Page Replacement


Paging:
➢ It is a memory management scheme that is used to retrieve processes from the secondary memory
(hard disk) in the form of pages and store them in the main memory.
➢ Pages of a process are only brought from the secondary memory to the main memory when they are
needed.
➢ The main objective of paging is to divide each process into pages (of fixed size).
➢ These pages are stored in the frames of main memory.

➢ When an executing process refers to a page, it is first searched in the main memory. If it is not
present in the main memory, a page fault (or) page miss occurs.
➢ Page Fault is the condition in which a running process refers to a page that is not loaded in the
main memory.
➢ In such a case, the OS has to bring the page from the secondary storage into the main memory.
➢ This may cause some pages in the main memory to be replaced due to limited storage.
➢ A Page Replacement Algorithm is required to decide which page has to be removed.
Page Replacement Algorithm

➢ Page Replacement happens when a requested page is not present in the main memory and the available space
is not sufficient for allocation to the requested page.

➢ Page Replacement Algorithm decides which page to remove, also called swap out when a new page needs to
be loaded into the main memory.

➢ A page replacement algorithm should keep the waiting time for page-ins as low as possible.

➢ A page replacement algorithm tries to select which pages should be replaced so as to minimize the total
number of page misses.

➢ The fewer the page faults, the better the algorithm.

➢ If a process requests a page and that page is found in main memory, it is called a page hit; otherwise it is a page miss or page fault.
Some Page Replacement Algorithms:

❖ First In First Out (FIFO)

❖ Least Recently Used (LRU)

❖ Optimal Page Replacement


First In First Out (FIFO) Page Replacement Algorithm

➢ This is the simplest page replacement algorithm.

➢ In this algorithm, the OS maintains a queue that keeps track of all the pages in memory, with the oldest page at
the front and the most recent page at the back.

➢ When there is a need for page replacement, the FIFO algorithm swaps out the page at the front of the queue, that is, the page which has been in memory for the longest time.
Example for FIFO implementation:
Consider the page reference string of size 12: 1, 2, 3, 4, 5, 1, 3, 1, 6, 3, 2, 3 with frame size 4 (i.e. maximum 4 pages in a frame).

(Figure: step-by-step contents of frames F1–F4 for each reference.)

Total Page Hits = 3
Total Page Faults = 9
Page Hit Ratio = 3/12 = 0.25
Page Miss Ratio = 9/12 = 0.75
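
A minimal Python sketch of the FIFO policy described above; run on the reference string from the example, it reproduces the 3 hits and 9 faults (the function name fifo is just an illustrative helper).

```python
from collections import deque

def fifo(reference_string, num_frames):
    """Simulate FIFO page replacement; return (page hits, page faults)."""
    frames = deque()                  # front of the queue = oldest page
    hits = faults = 0
    for page in reference_string:
        if page in frames:
            hits += 1
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()      # evict the page that entered memory first
            frames.append(page)
    return hits, faults

refs = [1, 2, 3, 4, 5, 1, 3, 1, 6, 3, 2, 3]
hits, faults = fifo(refs, 4)
print(hits, faults, hits / len(refs), faults / len(refs))   # 3 9 0.25 0.75
```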
FIFO Page Replacement Algorithm

➢ Advantages

➢ Simple and easy to implement.


➢ Low overhead (i.e. it requires little bookkeeping on the part of the operating system)

➢ Disadvantages

➢ Poor performance.
➢ Doesn’t consider the frequency of use or last used time, simply replaces the oldest page.
➢ Suffers from Belady’s Anomaly (i.e. increasing the number of page frames may increase the number of page faults).
Least Recently Used (LRU) Page Replacement
Algorithm

❖ In this algorithm page will be replaced which is least recently used.

❖ It works on the idea that the pages that have been most heavily used in the past are
most likely to be used heavily in the future too.

❖ In LRU, whenever page replacement happens, the page which has not been used
for the longest amount of time is replaced.
Example for LRU implementation:
Consider the page reference string of size 12: 1, 2, 3, 4, 5, 1, 3, 1, 6, 3, 2, 3 with frame size 4 (i.e. maximum 4 pages in a frame).

(Figure: step-by-step contents of frames F1–F4 for each reference.)

Total Page Hits = 4
Total Page Faults = 8
Page Hit Ratio = 4/12 = 0.33
Page Miss Ratio = 8/12 = 0.67
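
A minimal Python sketch of LRU on the same reference string; it reproduces the 4 hits and 8 faults above (lru is an illustrative helper that keeps the frames list ordered by recency rather than relying on special hardware support).

```python
def lru(reference_string, num_frames):
    """Simulate LRU page replacement; return (page hits, page faults)."""
    frames = []                        # ordered from least to most recently used
    hits = faults = 0
    for page in reference_string:
        if page in frames:
            hits += 1
            frames.remove(page)        # will be re-appended as most recently used
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.pop(0)          # evict the least recently used page
        frames.append(page)            # this page is now the most recently used
    return hits, faults

refs = [1, 2, 3, 4, 5, 1, 3, 1, 6, 3, 2, 3]
print(lru(refs, 4))                    # (4, 8): hit ratio 0.33, miss ratio 0.67
```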
LRU Page Replacement Algorithm

➢ Advantages

➢ Efficient.
➢ Doesn't suffer from Belady’s Anomaly.

➢ Disadvantages

➢ Complex Implementation.
➢ Expensive.
➢ Requires hardware support.
Optimal Page Replacement
➢ Optimal Page Replacement algorithm is the best page replacement algorithm as it gives the
least number of page faults.

➢ It is also known as Clairvoyant Replacement Algorithm, or Belady’s Optimal Page


Replacement policy.

➢ In this algorithm, pages are replaced which would not be used for the longest duration of
time in the future, i.e., the pages in the memory which are going to be referred farthest in
the future are replaced.

➢ This algorithm was introduced long ago and is difficult to implement because it requires future knowledge of the program's behavior. However, it is possible to implement optimal page replacement on a second run by using the page reference information collected during the first run.
Example for Optimal Page Replacement implementation:
Consider the page reference string of size 12: 1, 2, 3, 4, 5, 1, 3, 1, 6, 3, 2, 3 with frame size 4 (i.e. maximum 4 pages in a frame).

(Figure: step-by-step contents of frames F1–F4 for each reference.)

Total Page Hits = 6
Total Page Faults = 6
Page Hit Ratio = 6/12 = 0.5
Page Miss Ratio = 6/12 = 0.5
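
A minimal Python sketch of the optimal policy; because the whole reference string is known in advance here, the future knowledge the algorithm needs is available, and the sketch reproduces the 6 hits and 6 faults above (optimal is an illustrative helper).

```python
def optimal(reference_string, num_frames):
    """Simulate Optimal (Belady) page replacement; return (page hits, page faults)."""
    frames = []
    hits = faults = 0
    for i, page in enumerate(reference_string):
        if page in frames:
            hits += 1
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
            continue
        # Evict the resident page whose next use lies farthest in the future
        # (pages never referenced again are the best victims of all).
        future = reference_string[i + 1:]
        victim = max(frames,
                     key=lambda p: future.index(p) if p in future else float('inf'))
        frames[frames.index(victim)] = page
    return hits, faults

refs = [1, 2, 3, 4, 5, 1, 3, 1, 6, 3, 2, 3]
print(optimal(refs, 4))                # (6, 6): hit ratio 0.5, miss ratio 0.5
```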
Optimal Page Replacement Algorithm

➢ Advantages

➢ Easy to Implement.
➢ Simple data structures are used.
➢ Highly efficient.

➢ Disadvantages

➢ Requires future knowledge of the program.


➢ Time-consuming.
THANK YOU
