
Operating System

(Diploma in Computer Engineering)

IIIrd year / Ist part

Made By Anil Oli

All rights belong to Tech About Need


Table Of Contents

Unit-1: Operating Systems
    Introduction: Definition, Functions
    OS as resource manager
    OS as an extended machine
    History of OS
    Types of OS
    Introduction to System Call
    Introduction to Shell
    Open source systems

Unit 2: Process Management
    Definition of Process
    Process Vs Program
    Process Model
    Process State
    Process Transitions
    PCB (Process Control Block)
    Thread
    Process Vs Thread
    Inter Process Communication
    Race Condition
    Critical Section
    Mutual Exclusion with Busy Waiting
    Sleep and Wakeup
    Semaphore
    Process Scheduling
    Process Scheduling Goals
    Batch System Scheduling
    Deadlock
    Preemption
    Non-Preemption
    Preemptive vs Non-preemptive
    Deadlock Modeling
    Deadlock handling strategies
    Recovery From Deadlock
    Starvation

Unit-3: Memory Management
    Memory Manager
    Memory Hierarchy
    Memory Management in Monoprogramming
    Memory Management in Multiprogramming
    Multiprogramming with Fixed Partitions
    Multiprogramming with Variable Partitions
    Relocation and protection
    Compaction
    Coalescing
    Virtual Memory
    Page Fault
    Page Replacement algorithms
    Segmentation
    Fragmentation

Unit-4: File Management
    File System
    File System Layout
    File Naming
    File Structure
    File Attributes
    File Access
    File Operations
    Directory
    File System Implementation

Unit-5: I/O Management
    Classification of IO devices
    Controllers
    Memory Mapped IO vs IO mapped IO
    Interrupt IO vs Polled IO
    DMA (Direct Memory Access)
    Goals of IO software
    Handling IO
    IO Software Layers
    Disk Structure
    Disk Scheduling


Unit-1: Operating Systems

Introduction: Definition, Functions


An operating system is a program that acts as an interface between the user and the computer hardware and
controls the execution of all kinds of programs.

Functions of Operating Systems


1. Memory Management

2. Processor Management

3. Device Management

4. File Management

5. Security

6. Control over system performance

7. Job accounting

8. Error detecting aids

9. Coordination between other software and users

Memory Management

Memory management refers to management of Primary Memory or Main Memory. Main memory is a large
array of words or bytes where each word or byte has its own address.
An Operating System does the following activities for memory management −

1. Keeps track of primary memory, i.e., what part of it is in use by whom, what part is not in use.

2. In multiprogramming, the OS decides which process will get memory when and how much.

3. Allocates the memory when a process requests it to do so.

4. De-allocates the memory when a process no longer needs it or has been terminated.

Processor Management

In a multiprogramming environment, the OS decides which process gets the processor when and
for how much time. This function is called process scheduling. An Operating System does the
following activities for processor management −

1. Keeps track of processor and status of process. The program responsible for this task is known as
traffic controller.

2. Allocates the processor (CPU) to a process.

3. De-allocates the processor when a process is no longer required.

Device Management

An Operating System manages device communication via their respective drivers. It does the
following activities for device management −

1. Keeps track of all devices. The program responsible for this task is known as the I/O controller.

2. Decides which process gets the device when and for how much time.

3. Allocates the device in an efficient way.

4. De-allocates devices.

File Management

A file system is normally organized into directories for easy navigation and usage. These directories may
contain files and other directories.

An Operating System does the following activities for file management −

1. Keeps track of information, location, uses, status, etc. The collective facilities are often known as the file
system.

2. Decides who gets the resources.

3. Allocates the resources.

4. De-allocates the resources.


Security

By means of password and similar other techniques, it prevents unauthorized access to programs and data.

Control over system performance

Recording delays between request for a service and response from the system.

Job accounting

Keeping track of time and resources used by various jobs and users.

Error detecting aids

Production of dumps, traces, error messages, and other debugging and error detecting aids.

Coordination between other software and users

Coordination and assignment of compilers, interpreters, assemblers and other software to the various users
of the computer systems.

OS as resource manager
An Operating System is a collection of programs and utilities. It acts as an interface between user and
computer. It creates a user-friendly environment.

A computer has many resources (hardware and software) which may be required to complete a task. The
commonly required resources are input/output devices, memory, file storage space, CPU (Central Processing
Unit) time, and so on.

When a number of computers are connected through a network and more than one of them competes for a
printer or another common resource, the operating system decides the order of access and manages the
resources in an efficient manner.

Resources are shared in two ways: "in time" and "in space". When a resource is time-shared, first
one of the tasks gets the resource for some time, then another, and so on.

The other kind of sharing is "space sharing". In this method, the users share portions of the resource at the
same time.

Because the OS allocates time and space among the competing programs according to their requirements, it
is called a resource manager.

OS as an extended machine
1. provides stable, portable, reliable, safe, well-behaved environment (ideally)
2. Magician: makes computer appear to be more than it really is
3. Single processor appears like many separate processors
4. Single memory made to look like many separate memories, each potentially larger than the real
memory

History of OS

Types of OS

Batch OS
A set of similar jobs are stored in the main memory for execution. A job gets assigned to the CPU, only
when the execution of the previous job completes.

Multiprogramming OS
The main memory consists of jobs waiting for CPU time. The OS selects one of the processes and assigns it
to the CPU. Whenever the executing process needs to wait for any other operation (like I/O), the OS selects
another process from the job queue and assigns it to the CPU. This way, the CPU is never kept idle and the
user gets the flavor of getting multiple tasks done at once.

Multitasking OS
Multitasking OS combines the benefits of Multiprogramming OS and CPU scheduling to perform quick
switches between jobs. The switch is so quick that the user can interact with each program as it runs.

Time Sharing OS
Time-sharing systems require interaction with the user to instruct the OS to perform various tasks. The OS
responds with an output. The instructions are usually given through an input device like the keyboard.

Real Time OS
Real-Time OS are usually built for dedicated systems to accomplish a specific set of tasks within deadlines.

Introduction to System Call


A system call is the programmatic way in which a computer program requests a service from the kernel of
the operating system it is executed on. A system call is a way for programs to interact with the operating
system.

System calls provide the services of the operating system to the user programs via the Application Program
Interface (API). They provide an interface between a process and the operating system, allowing user-level
processes to request services of the operating system.

Services Provided by System Calls:

1. Process creation and management


2. Main memory management
3. File Access, Directory and File system management
4. Device handling(I/O)
5. Protection
6. Networking
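
As a concrete illustration, the short C program below (a minimal sketch, assuming a POSIX/Linux system) uses two system calls through their C library wrappers: getpid() for process management and write() for file/device I/O.

#include <stdio.h>
#include <unistd.h>

int main(void) {
    pid_t pid = getpid();                       /* process-management system call */
    char msg[] = "hello from a system call\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);  /* file/device I/O system call */
    printf("my process id is %d\n", (int)pid);
    return 0;
}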

Introduction to Shell
The shell is a program that provides the user with an interface to use the operating system’s functions
through some commands. A shell script is a program that is used to perform specific tasks.

Shell scripts are mostly used to avoid repetitive work. You can write a script to automate a set of instructions
to be executed one after the other, instead of typing in the commands one after the other n number of times.

Some Shell Commands:


ls : list files

pwd : print the present working directory

cd .. : move up to the parent directory

cd / : go to the root directory

touch filename.extension : create a new file

rm filename : delete a file

mkdir foldername : create a folder

cp filename destination_directory : copy a file

adduser username : add a new user

passwd username : change a user's password

Note: These are Linux-based commands.

Open source systems


Open source is a term that originally referred to open source software (OSS). Open source software is code
that is designed to be publicly accessible: anyone can see, modify, and distribute the code as they see fit.

Open source software is developed in a decentralized and collaborative way, relying on peer review and
community production. Open source software is often cheaper, more flexible, and has more longevity than
its proprietary peers because it is developed by communities rather than a single author or company.

Open source has become a movement and a way of working that reaches beyond software production. The
open source movement uses the values and decentralized production model of open source software to find
new ways to solve problems in their communities and industries.

Unit 2: Process Management

Definition of Process
In the Operating System, a Process is something that is currently under execution. So, an active program can
be called a Process. For example, when you want to search for something on the web, you start a browser;
that running browser is a Process.

Process Vs Program


Process Model

Running: The currently executing process.

Waiting/Blocked: Process waiting for some event such as completion of I/O operation, waiting for other
processes, synchronization signal, etc.

Ready: A process that is waiting to be executed.

New: The process that is just being created. The Process Control Block has already been made, but the
program is not yet loaded into main memory. The process remains in the new state until the long-term
scheduler moves it to the ready state (main memory).

Terminated/Exit: A process that is finished or aborted due to some reason.

Process State
The states that a Process enters in working from start till end are known as Process states. These are listed
below as:
 Created - Process is newly created by a system call and is not yet ready to run.

 User running - Process is running in user mode, which means it is a user process.

 Kernel running - Indicates the process is running in kernel mode.

 Zombie - Process does not exist / is terminated.

 Preempted - When the kernel preempts the process as it returns from kernel mode to user mode, it is
said to be preempted.

 Ready to run in memory - Indicates that the process has reached a state where it is ready to run in
memory and is waiting for the kernel to schedule it.

 Ready to run, swapped - Process is ready to run, but no free main memory is available; it must be
swapped in first.

 Sleep, swapped - Process has been swapped to secondary storage and is in a blocked state.

 Asleep in memory - Process is in memory (not swapped to secondary storage) but is in a blocked state.

Process Transitions
The working of a process is explained in the following steps:

User-running: Process is in user-running.

Kernel-running: Process is allocated to kernel and hence, is in kernel mode.

Ready to run in memory: The process is not executing but is ready to run as soon as the kernel schedules it.

Asleep in memory: Process is sleeping but resides in main memory. It is waiting for an event to occur.

Ready to run, swapped: Process is ready to run but has been swapped out to secondary storage; it must be
swapped back into main memory before the kernel can schedule it for execution.

Sleep, Swapped: Process is in sleep state in secondary memory, making space for execution of other
processes in main memory. It may resume once the task is fulfilled.

Pre-empted: Kernel preempts an on-going process for allocation of another process, while the first process
is moving from kernel to user mode.

Created: Process is newly created but not running. This is the start state for all processes.

Zombie: Process has finished execution and the exit call has been made.

The process, thereby, no longer exists, but it leaves behind a statistical record. This is the final state
of all processes.

PCB (Process Control Block)


Process Control Block is a data structure that contains information of the process related to it. The process
control block is also known as a task control block, entry of the process table, etc.

It is very important for process management as the data structuring for processes is done in terms of the
PCB. It also defines the current state of the operating system.

The process control block stores many data items that are needed for efficient process management.
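
As an illustration, a simplified PCB might look like the C structure below; this is a sketch, and the field names are illustrative rather than taken from any real kernel.

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state;

struct pcb {
    int           pid;               /* process identifier */
    proc_state    state;             /* current process state */
    unsigned long program_counter;   /* saved PC for context switching */
    unsigned long registers[16];     /* saved CPU registers */
    int           priority;          /* scheduling information */
    void         *page_table;        /* memory-management information */
    int           open_files[16];    /* I/O status information */
    struct pcb   *next;              /* link in a scheduler queue */
};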
Thread
Thread is an execution unit that is part of a process. A process can have multiple threads, all executing at the
same time. It is a unit of execution in concurrent programming. A thread is lightweight and can be managed
independently by a scheduler. It helps you to improve the application performance using parallelism.

Multiple threads share information like data, code, files, etc. We can implement threads in three different
ways:

 Kernel-level threads

 User-level threads

 Hybrid threads
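
On many systems, user-visible threads are created through a threading library such as POSIX threads. The sketch below (C, assuming a POSIX system; compile with -lpthread) creates two threads that share the process's data:

#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg) {
    /* both threads share the process's address space */
    printf("hello from thread %d\n", *(int *)arg);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    int id1 = 1, id2 = 2;
    pthread_create(&t1, NULL, worker, &id1);
    pthread_create(&t2, NULL, worker, &id2);
    pthread_join(t1, NULL);   /* wait for the threads to finish */
    pthread_join(t2, NULL);
    return 0;
}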

Process Vs Thread

Inter Process Communication


Interprocess communication is the mechanism provided by the operating system that allows processes to
communicate with each other. This communication could involve a process letting another process know
that some event has occurred or the transferring of data from one process to another.
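
One common IPC mechanism on Unix-like systems is the pipe. The C sketch below (assuming a POSIX system) has a parent process send a message to its child, illustrating both notification and data transfer:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    pipe(fd);                           /* fd[0] = read end, fd[1] = write end */
    if (fork() == 0) {                  /* child: wait for the message */
        char buf[64];
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child received: %s\n", buf);
        }
        return 0;
    }
    const char *msg = "event occurred";
    write(fd[1], msg, strlen(msg));     /* parent: notify the child */
    wait(NULL);                         /* reap the child */
    return 0;
}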
Race Condition
A race condition is an undesirable situation that occurs when a device or system attempts to perform two or
more operations at the same time, but because of the nature of the device or system, the operations must be
done in the proper sequence to be done correctly.

When race conditions occur

A race condition occurs when two threads access a shared variable at the same time. The first thread reads
the variable, and the second thread reads the same value from the variable. Then the first thread and second
thread perform their operations on the value, and they race to see which thread can write the value last to the
shared variable. The value of the thread that writes its value last is preserved, because the thread is writing
over the value that the previous thread wrote.
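
The C sketch below (POSIX threads) makes this concrete: two threads increment a shared counter with no synchronization. Because counter++ is a non-atomic read-modify-write, updates are frequently lost and the final value is usually less than 2000000.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;               /* shared variable */

static void *increment(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter++;                     /* read, modify, write: not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}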

Critical Section
Critical Section is the part of a program which tries to access shared resources. That resource may be any
resource in a computer like a memory location, Data structure, CPU or any IO device.
The critical section cannot be executed by more than one process at the same time; the operating system
faces difficulty in allowing and disallowing processes from entering the critical section.

The critical section problem is used to design a set of protocols which can ensure that the Race condition
among the processes will never arise.

1. Mutual Exclusion

2. Progress

3. Bounded waiting

Mutual Exclusion means no two processes may be inside their critical sections at the same time; it is often
enforced with a special binary semaphore that controls access to the shared resource.

Progress means that if no process is in the critical section and some process wants in, the choice of which
process enters next cannot be postponed indefinitely.

Bounded waiting means that after a process makes a request to enter its critical section, there is a limit on
how many other processes can enter their critical sections before it.

Mutual Exclusion with Busy Waiting

Lock Variables
The lock variable mechanism is a synchronization mechanism that is implemented in user mode. It is a
software procedure.

Lock variable is a busy-waiting solution that can be applied to more than two processes.
In the lock variable mechanism, we use a lock variable, i.e., Lock. The Lock variable has two values,
1 and 0. If the value of Lock is 1, the critical section is occupied; if the value of Lock is 0, the critical
section is free.
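
In pseudo-C the mechanism looks like the sketch below. Note that because the test and the set are two separate steps, two processes can both observe lock == 0 and enter together; this lack of atomicity is the well-known flaw of the plain lock variable.

int lock = 0;                    /* 0 = critical section free, 1 = occupied */

void enter_critical_section(void) {
    while (lock == 1)
        ;                        /* busy wait while someone else is inside */
    lock = 1;                    /* occupy (not atomic with the test above!) */
}

void leave_critical_section(void) {
    lock = 0;                    /* release */
}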

Peterson’s Solution
Peterson’s solution provides a good algorithmic description of solving the critical-section problem and
illustrates some of the complexities involved in designing software that addresses the requirements of
mutual exclusion, progress, and bounded waiting.

This solution is for 2 processes to enter into critical section. This solution works for only 2 processes.
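
The algorithm is commonly presented as the following C sketch for two processes numbered 0 and 1:

int turn;                         /* whose turn it is */
int interested[2] = {0, 0};       /* 1 when a process wants to enter */

void enter_region(int process) {  /* process is 0 or 1 */
    int other = 1 - process;      /* number of the other process */
    interested[process] = 1;      /* show interest */
    turn = process;               /* set flag */
    while (turn == process && interested[other] == 1)
        ;                         /* busy wait until it is safe to enter */
}

void leave_region(int process) {
    interested[process] = 0;      /* done with the critical section */
}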

Disadvantage of Peterson’s Solution:

 This solution works for only 2 processes, but it is the best user-mode scheme for the critical section.

 This is also a busy waiting solution, so CPU time is wasted, and because of that the "SPIN LOCK"
problem can arise. This problem can occur in any busy waiting solution.

Sleep and Wakeup


The concept of sleep and wakeup is very simple. If the critical section is not empty then the process will go
to sleep. It will be woken up by the other process which is currently executing inside the critical section, so
that the waiting process can get inside the critical section. A classic illustration is the sleeping barber problem:
 If there is no customer, then the barber sleeps in his own chair.

 When a customer arrives, he has to wake up the barber.

 If there are many customers and the barber is cutting a customer’s hair, then the remaining customers
either wait if there are empty chairs in the waiting room or they leave if no chairs are empty.

Semaphore
Semaphores are integer variables that are used to solve the critical section problem by using two atomic
operations, wait and signal that are used for process synchronization.

Types of Semaphores

Counting Semaphores
These are integer value semaphores and have an unrestricted value domain. These semaphores are used to
coordinate resource access, where the semaphore count is the number of available resources. If resources
are added, the semaphore count is automatically incremented, and if resources are removed, the count
is decremented.

Binary Semaphores
The binary semaphores are like counting semaphores but their value is restricted to 0 and 1. The wait
operation only works when the semaphore is 1 and the signal operation succeeds when semaphore is 0. It is
sometimes easier to implement binary semaphores than counting semaphores.
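
As a sketch of wait and signal in practice, the C program below uses POSIX semaphores: sem_wait is the wait (P) operation and sem_post is the signal (V) operation. Initializing the value to 1 makes the semaphore behave as a binary semaphore. Compile with -lpthread.

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t mutex;                      /* binary semaphore guarding the section */

static void *task(void *arg) {
    sem_wait(&mutex);             /* wait: decrement, block if value is 0 */
    printf("in critical section\n");
    sem_post(&mutex);             /* signal: increment, wake a waiter */
    return NULL;
}

int main(void) {
    pthread_t t;
    sem_init(&mutex, 0, 1);       /* initial value 1 = binary semaphore */
    pthread_create(&t, NULL, task, NULL);
    task(NULL);                   /* main also contends for the section */
    pthread_join(t, NULL);
    sem_destroy(&mutex);
    return 0;
}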

Advantages of Semaphores

 Semaphores allow only one process into the critical section. They follow the mutual exclusion
principle strictly and are much more efficient than some other methods of synchronization.

 There is no resource wastage because of busy waiting in semaphores as processor time is not wasted
unnecessarily to check if a condition is fulfilled to allow a process to access the critical section.
 Semaphores are implemented in the machine independent code of the microkernel. So they are
machine independent.

Disadvantages of Semaphores

 Semaphores are complicated so the wait and signal operations must be implemented in the correct
order to prevent deadlocks.

 Semaphores are impractical for large scale use as their use leads to loss of modularity. This happens
because the wait and signal operations prevent the creation of a structured layout for the system.

 Semaphores may lead to a priority inversion where low priority processes may access the critical
section first and high priority processes later.

Process Scheduling
The process scheduling is the activity of the process manager that handles the removal of the running
process from the CPU and the selection of another process on the basis of a particular strategy.

Process scheduling is an essential part of multiprogramming operating systems. Such operating systems
allow more than one process to be loaded into executable memory at a time, and the loaded processes
share the CPU using time multiplexing.

Process Scheduling Goals


Fairness: Each process gets fair share of the CPU.

Efficiency: Keep the CPU as close to 100% busy as possible.

Response Time: Minimize the response time for interactive user.

Throughput: Maximizes jobs per given time period.

Waiting Time: Minimizes total time spent waiting in the ready queue.

Turn Around Time: Minimizes the time between submission and termination.

Batch System Scheduling


A Process Scheduler schedules different processes to be assigned to the CPU based on particular scheduling
algorithms.
These algorithms are either non-preemptive or preemptive. Non-preemptive algorithms are designed so that
once a process enters the running state, it cannot be preempted until it completes its allotted time, whereas
the preemptive scheduling is based on priority where a scheduler may preempt a low priority running
process anytime when a high priority process enters into a ready state.
First Come First Serve (FCFS)

 Jobs are executed on first come, first serve basis.

 It is a non-preemptive scheduling algorithm.

 Easy to understand and implement.

 Its implementation is based on FIFO queue.

 Poor in performance as average wait time is high.

Wait time of each process is as follows –

Average Wait Time: (0+4+6+13) / 4 = 5.75
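
The process table that accompanied this example is not reproduced here; the C sketch below assumes the classic four-process table (arrival times 0, 1, 2, 3 and burst times 5, 3, 8, 6), since those values reproduce the quoted wait times 0, 4, 6, 13 and the average of 5.75.

#include <stdio.h>

int main(void) {
    int arrival[] = {0, 1, 2, 3};          /* assumed arrival times */
    int burst[]   = {5, 3, 8, 6};          /* assumed execution times */
    int n = 4, time = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {          /* already in arrival order */
        if (time < arrival[i]) time = arrival[i];
        int wait = time - arrival[i];      /* start time minus arrival */
        total_wait += wait;
        printf("P%d waits %d\n", i, wait);
        time += burst[i];                  /* run to completion (non-preemptive) */
    }
    printf("Average Wait Time: %.2f\n", (double)total_wait / n);
    return 0;
}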

Shortest Job Next (SJN)

 This is also known as shortest job first, or SJF

 This is a non-preemptive scheduling algorithm; its preemptive variant is Shortest Remaining Time (SRT).

 Best approach to minimize waiting time.

 Easy to implement in Batch systems where required CPU time is known in advance.

 Impossible to implement in interactive systems where required CPU time is not known.
 The processor should know in advance how much time the process will take.

Given: Table of processes, and their Arrival time, Execution time

Waiting time of each process is as follows –

Average Wait Time: (0 + 4 + 12 + 5)/4 = 21 / 4 = 5.25

Priority Based Scheduling

 Priority scheduling is a non-preemptive algorithm and one of the most common scheduling
algorithms in batch systems.

 Each process is assigned a priority. Process with highest priority is to be executed first and so on.
 Processes with same priority are executed on first come first served basis.

 Priority can be decided based on memory requirements, time requirements or any other resource
requirement.

Given: Table of processes, and their Arrival time, Execution time, and priority. Here we are
considering 1 is the lowest priority.

Waiting time of each process is as follows –

Average Wait Time: (0 + 10 + 12 + 2)/4 = 24 / 4 = 6

Shortest Remaining Time

 Shortest remaining time (SRT) is the preemptive version of the SJN algorithm.
 The processor is allocated to the job closest to completion but it can be preempted by a newer ready
job with shorter time to completion.

 Impossible to implement in interactive systems where required CPU time is not known.

 It is often used in batch environments where short jobs need to be given preference.

Round Robin Scheduling

 Round Robin is a preemptive process scheduling algorithm.

 Each process is provided a fixed time to execute, called a quantum.

 Once a process is executed for a given time period, it is preempted and other process executes for a
given time period.

 Context switching is used to save states of preempted processes.

Wait time of each process is as follows –

Average Wait Time: (9+2+12+11) / 4 = 8.5


Rate Monotonic Scheduling (see https://www.youtube.com/watch?v=7SyX1mmcSDs)
Deadlock
Deadlock is a situation where a set of processes are blocked because each process is holding a resource and
waiting for another resource acquired by some other process.
Condition of Deadlock arises:

Mutual Exclusion –
At least one resource must be kept in a non-shareable state; if another process requests it, it must wait for it
to be released.

Hold and Wait –


A process must hold at least one resource while also waiting for at least one resource that another process is
currently holding.

No preemption –
Once a process holds a resource (i.e. after its request is granted), that resource cannot be taken away from
that process until the process voluntarily releases it.

Circular Wait –
There must be a set of processes P0, P1, P2, ..., PN such that every P[i] is waiting for P[(i + 1) mod (N + 1)].
(It is important to note that this condition implies the hold-and-wait condition, but dealing with the four
conditions is easier if they are considered separately.)

Methods for handling deadlock

 Preventing or avoiding deadlock by never allowing the system to enter a circular wait.

 Detection and recovery of deadlocks: when a deadlock is detected, abort a process or preempt
some resources.

 Ignore the problem entirely.

 To avoid deadlocks, the system requires more information about all processes. The system, in
particular, must understand what resources a process will or may request in the future.
 Deadlock detection is relatively simple, but deadlock recovery necessitates either aborting processes
or preempting resources, neither of which is an appealing option.

 If deadlocks are not avoided or detected, the system will gradually slow down as more processes
become stuck waiting for resources that the deadlock has blocked and other waiting processes.
Unfortunately, when the computing requirements of a real-time process are high, this slowdown can
be confused with a general system slowdown.

Resources
In the context of deadlock, a resource is anything that must be acquired, used, and released by a process,
such as hardware devices (printers, disk drives), files, memory, and semaphores. Resources may be
preemptable, meaning they can be taken away from the process holding them (like memory), or
non-preemptable, meaning they cannot be taken away without causing the computation to fail (like a
printer in the middle of a print job).

Preemption
Preemption is the act of temporarily interrupting an executing task, with the intention of resuming it at a
later time. This interrupt is done by an external scheduler with no assistance or cooperation from the task. 
This preemptive scheduler usually runs in the most privileged protection ring, meaning that interruption and
resuming are considered highly secure actions. Such a change in the currently executing task of a processor
is known as context switching.

Non-Preemption
Non-preemptive scheduling does not interrupt a process running CPU in the middle of the execution.
Instead, it waits till the process completes its CPU burst time, and then it can allocate the CPU to another
process.
Preemptive vs Non-preemptive

Deadlock Modeling
A deadlock occurs when a set of processes is stalled because each process is holding a resource and waiting
for a resource acquired by another process. For example, Process 1 is holding Resource 1 and waiting for
Resource 2, while Process 2 is holding Resource 2 and waiting for Resource 1.
Deadlock handling strategies
1. Deadlock Prevention

2. Deadlock avoidance

3. Deadlock detection

Deadlock Prevention
The strategy of deadlock prevention is to design the system in such a way that the possibility of deadlock is
excluded. Indirect methods prevent the occurrence of one of the three necessary conditions of deadlock, i.e.,
mutual exclusion, no pre-emption, or hold and wait. Direct methods prevent the occurrence of circular wait.

Deadlock avoidance
This approach allows the three necessary conditions of deadlock but makes judicious choices to assure that
the deadlock point is never reached. It allows more concurrency than deadlock prevention.

A decision is made dynamically whether the current resource allocation request will, if granted, potentially
lead to deadlock. It requires the knowledge of future process requests. Two techniques to avoid deadlock:

1. Process initiation denial

2. Resource allocation denial

Advantages of deadlock avoidance techniques:

 Not necessary to pre-empt and rollback processes

 Less restrictive than deadlock prevention

Disadvantages:

 Future resource requirements must be known in advance

 Processes can be blocked for long periods

 A fixed number of resources must exist for allocation

Deadlock Detection:
Deadlock detection works by employing an algorithm that tracks circular waiting and kills one or
more processes so that the deadlock is removed. The system state is examined periodically to determine if a set
of processes is deadlocked. A deadlock is resolved by aborting and restarting a process, relinquishing all the
resources that the process held.

 This technique does not limit resource access or restrict process actions.

 Requested resources are granted to processes whenever possible.


 It never delays the process initiation and facilitates online handling.

 The disadvantage is the inherent pre-emption losses.

Recovery From Deadlock:


There are three basic approaches to recovering from deadlock:

 Inform the system operator and give him/her permission to intervene manually.

 Stop one or more of the processes involved in the deadlock.

 Preempt some resources from the processes involved.

Starvation
Starvation is the problem that occurs when high priority processes keep executing and low priority processes
get blocked for an indefinite time. In a heavily loaded computer system, a steady stream of higher-priority
processes can prevent a low-priority process from ever getting the CPU.

Ostrich Algorithm
The ostrich algorithm means that the deadlock is simply ignored and it is assumed that it will never occur.

 Pretend (imagine) that there is no problem.

 This is the easiest way to deal with the problem.

 This algorithm says to stick your head in the sand and pretend (imagine) that there is no problem at
all.

 This strategy suggests ignoring the deadlock: deadlocks occur rarely, while system crashes due
to hardware failures, compiler errors, and operating system bugs happen frequently, so it is not worth
paying a large penalty in performance or convenience to eliminate deadlocks.

Unit-3: Memory Management


Memory management is the process of controlling and coordinating computer memory, assigning portions
called blocks to various running programs to optimize overall system performance. Memory management
resides in hardware, in the operating system, and in programs and applications.

In hardware, memory management involves components that physically store data, such as RAM chips,
memory caches, and flash-based SSDs. In the OS, memory management involves the allocation of specific
memory blocks to individual programs as user demands change.
At the application level, memory management ensures the availability of adequate memory for the objects
and data structures of each running program at all times. Application memory management combines two
related tasks, known as allocation and recycling.

Memory Manager
A memory manager is a software utility that operates in conjunction with the operating system. It helps
manage memory more efficiently and provides additional features such as flushing out unused segments of
memory. All modern operating systems provide memory management.

Memory Hierarchy
(Figure: the memory hierarchy, from CPU registers and cache at the top to main memory, magnetic disk, and magnetic tape/optical storage at the bottom.)

The following characteristics of memory hierarchy design follow from the figure above:

Capacity:

It is the global volume of information the memory can store. As we move from top to bottom in the
Hierarchy, the capacity increases.

Access Time:

It is the time interval between the read/write request and the availability of the data. As we move from top to
bottom in the Hierarchy, the access time increases.

Performance:

Earlier, when computer systems were designed without a memory hierarchy, the speed gap between the
CPU registers and main memory grew due to the large difference in access time. This resulted in lower
performance of the system, and an enhancement was required. This enhancement was made in the form of
memory hierarchy design, which increases the performance of the system. One of the most
significant ways to increase system performance is minimizing how far down the memory hierarchy one has
to go to manipulate data.
Cost per bit:

As we move from bottom to top in the Hierarchy, the cost per bit increases i.e. Internal Memory is costlier
than External Memory.

Memory Management in Monoprogramming


One process in memory at all times

• Process has entire memory available to it

• Only one process at a time can be running

• Not practical for multiprogramming

• Used on older OS such as MS-DOS

Memory Management in Multiprogramming


Multiprogramming is required to support multiple users (processes)

• Used in just about all modern OS

• Requires multiple processes to be resident in memory at the same time

• Increases processor utilization – Especially for I/O bound processes

• Able to analyze and predict system performance

• Able to model the relationship between the quantity of memory and CPU utilization

Multiprogramming with Fixed Partitions


• Desire: Have more than one program in memory at one time

• Solution: Divide memory up into partitions – Partitions may be equally or variably sized

• Create an input queue to place processes in the smallest partition that is large enough to hold the process

• Problems:

– Some memory partitions may have processes waiting while other partitions are unused

– The space in the partition, which is larger than the process memory requirement, is not used - thus
it is wasted

• May be used for batch systems, where memory requirements can be modeled

• Not good for interactive systems that often have dynamic memory requirements
Multiprogramming with Variable Partitions

Allocate “just large enough”

• Good idea to make the variable partitions a little larger than needed for "growing" memory requirements

• Memory can be compacted to consolidate holes

– Move memory partitions down as far as possible

– Compaction is inefficient because of excessive CPU requirements to reorganize memory partitions

• If memory requirements grow beyond a process's partition size, move the process to a new partition

– Requires relocation

Relocation and protection


Relocation:

When a program is run it does not know in advance what location it will be loaded at. Therefore, the
program cannot simply generate static addresses (e.g. from jump instructions). Instead, they must be made
relative to where the program has been loaded.

Protection:

Once you can have two programs in memory at the same time there is a danger that one program can write
to the address space of another program. This is obviously dangerous and should be avoided.

In order to cater for relocation we could make the loader modify all the relevant addresses as the binary file
is loaded. The OS/360 worked in this way but the scheme suffers from the following problems

· The program cannot be moved after it has been loaded, without going through the same process again.

· Using this scheme does not help the protection problem, as the program can still generate illegal addresses
(maybe by using absolute addressing).

· The program needs to have some sort of map that tells the loader which addresses need to be modified.

A solution, which solves both the relocation and protection problem is to equip the machine with two
registers called the base and limit registers.
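
A small C sketch of the idea: every logical address is first checked against the limit register (protection) and then offset by the base register (relocation). The register values here are illustrative.

#include <stdio.h>
#include <stdlib.h>

unsigned long base  = 0x40000;   /* where the program was loaded (assumed) */
unsigned long limit = 0x10000;   /* size of the program's partition */

unsigned long translate(unsigned long logical) {
    if (logical >= limit) {      /* protection: illegal address */
        fprintf(stderr, "trap: address out of range\n");
        exit(1);
    }
    return base + logical;       /* relocation: add the base register */
}

int main(void) {
    printf("logical 0x100 -> physical 0x%lx\n", translate(0x100));
    return 0;
}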

Compaction
 Compaction is a process in which the free space is collected in a large memory chunk to make some
space available for processes.

 In memory management, swapping creates multiple fragments in the memory because of the
processes moving in and out.

 Compaction refers to combining all the empty spaces together on one side of memory and all the processes on the other.

 Compaction helps to solve the problem of fragmentation, but it requires too much of CPU time.

 It moves all the occupied areas of store to one end and leaves one large free space for incoming jobs,
instead of numerous small ones.

 In compaction, the system also maintains relocation information and it must be performed on each
new allocation of job to the memory or completion of job from memory.

Coalescing
In computer science, coalescing is a part of memory management in which two adjacent free blocks of
computer memory are merged.

When a program no longer requires certain blocks of memory, these blocks of memory can be freed.
Without coalescing, these blocks of memory stay separate from each other in their original requested size,
even if they are next to each other. If a subsequent request for memory specifies a size of memory that
cannot be met with an integer number of these (potentially unequally-sized) freed blocks, these neighboring
blocks of freed memory cannot be allocated for this request. Coalescing alleviates this issue by setting the
neighboring blocks of freed memory to be contiguous without boundaries, such that part or all of it can be
allocated for the request.

Virtual Memory
Virtual memory is a section of the storage drive set aside to act temporarily as additional main memory. It is
created when a computer is running many processes at once and RAM is running low.

Paging

Paging is a computer memory management function that presents storage locations to the computer's CPU
as additional memory, called virtual memory. Each piece of data needs a storage address. Paging is the
memory management technique in which the address space is divided into fixed-size blocks called pages.

Page tables
Page Table is a data structure used by the virtual memory system to store the mapping between logical
addresses and physical addresses.

Logical addresses are generated by the CPU for the pages of the processes therefore they are generally used
by the processes.

Physical addresses are the actual frame address of the memory. They are generally used by the hardware or
more specifically by RAM subsystems.

Example:
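As a small worked example (values chosen for illustration): with a page size of 4 KB (4096 bytes), logical address 8500 splits into page number 8500 / 4096 = 2 and offset 8500 mod 4096 = 308. If the page table maps page 2 to frame 5, the physical address is 5 * 4096 + 308 = 20788.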
Page Fault
A page fault occurs when a program attempts to access data or code that is in its address space but is not
currently located in system RAM. When a page fault occurs, the following sequence of events happens:
the computer hardware traps to the kernel and the program counter (PC) is saved on the stack.

Page Replacement algorithms

First In First Out (FIFO)


In this algorithm, the operating system keeps track of all pages in the memory in a queue, the oldest page is
in the front of the queue. When a page needs to be replaced page in the front of the queue is selected for
removal.

Example: Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Find the number of page faults.


Initially all slots are empty, so when 1, 3, 0 came they are allocated to the empty slots —> 3 Page Faults.

When 3 comes, it is already in memory so —> 0 Page Faults.

Then 5 comes, it is not available in memory so it replaces the oldest page slot i.e 1. —>1 Page Fault.

6 comes, it is also not available in memory so it replaces the oldest page slot i.e 3 —>1 Page Fault.

Finally, when 3 comes again, it is not available in memory, so it replaces 0 —> 1 Page Fault. The total is 6 page faults.
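
The walkthrough above can be checked with the short C simulation below, which counts FIFO page faults for the reference string 1, 3, 0, 3, 5, 6, 3 with 3 frames and prints 6.

#include <stdio.h>

int main(void) {
    int ref[] = {1, 3, 0, 3, 5, 6, 3};     /* page reference string */
    int n = 7, nframes = 3;
    int frames[3] = {-1, -1, -1};          /* -1 marks an empty slot */
    int next = 0, faults = 0;              /* next = index of oldest frame */

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < nframes; j++)
            if (frames[j] == ref[i]) hit = 1;
        if (!hit) {                        /* page fault */
            frames[next] = ref[i];         /* evict the oldest page (FIFO) */
            next = (next + 1) % nframes;
            faults++;
        }
    }
    printf("page faults = %d\n", faults);  /* prints 6 */
    return 0;
}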


Belady’s anomaly
Belady’s anomaly proves that it is possible to have more page faults when increasing the number of page
frames while using the First in First Out (FIFO) page replacement algorithm. For example, if we consider
reference string 3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4 and 3 slots, we get 9 total page faults, but if we increase slots
to 4, we get 10 page faults.

Optimal Page replacement


In this algorithm, pages are replaced which would not be used for the longest duration of time in the future.

Example: Consider the page references 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page frames. Find the number
of page faults.

Initially all slots are empty, so when 7 0 1 2 are allocated to the empty slots —> 4 Page faults

0 is already there so —> 0 Page fault.

When 3 comes, it will take the place of 7 because 7 is not used for the longest duration of time in the future
—> 1 Page fault.

0 is already there so —> 0 Page fault.

4 takes the place of 1 —> 1 Page Fault.

Now for the further page reference string —> 0 Page fault because they are already available in the memory.

Optimal page replacement is perfect, but not possible in practice as the operating system cannot know future
requests. The use of Optimal Page replacement is to set up a benchmark so that other replacement
algorithms can be analyzed against it.
Not Recently Used
The not recently used (NRU) page replacement algorithm is an algorithm that favors keeping pages in
memory that have been recently used. This algorithm works on the following principle: when a page is
referenced, a referenced bit is set for that page, marking it as referenced. Similarly, when a page is modified
(written to), a modified bit is set. The setting of the bits is usually done by the hardware, although it is
possible to do so on the software level as well.

At a certain fixed time interval, a timer interrupt triggers and clears the referenced bit of all the pages, so
only pages referenced within the current timer interval are marked with a referenced bit. When a page needs
to be replaced, the operating system divides the pages into four classes:

3. referenced, modified

2. referenced, not modified

1. Not referenced, modified

0. Not referenced, not modified

Although it does not seem possible for a page to be modified yet not referenced, this happens when a class 3
page has its referenced bit cleared by the timer interrupt. The NRU algorithm picks a random page from the
lowest category for removal. So out of the above four page categories, the NRU algorithm will replace a not-
referenced, not-modified page if such a page exists. Note that this algorithm implies that a modified but not-
referenced (within the last timer interval) page is less important than a not-modified page that is intensely
referenced.

Clock Page Replacement Algorithms


Clock is a more efficient version of FIFO than Second-chance because pages don't have to be constantly
pushed to the back of the list, but it performs the same general function as Second-Chance. The clock
algorithm keeps a circular list of pages in memory, with the "hand" (iterator) pointing to the last examined
page frame in the list. When a page fault occurs and no empty frames exist, then the R (referenced) bit is
inspected at the hand's location. If R is 0, the new page is put in place of the page the "hand" points to, and
the hand is advanced one position. Otherwise, the R bit is cleared, then the clock hand is incremented and
the process is repeated until a page is replaced. This algorithm was first described in 1969 by F. J. Corbató.
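
A C sketch of the victim-selection loop, with the circular list represented as an array of frames and their R bits (the frame contents here are illustrative):

#include <stdio.h>
#define NFRAMES 4

int page[NFRAMES] = {10, 20, 30, 40};  /* page loaded in each frame */
int rbit[NFRAMES] = { 0,  1,  1,  0};  /* referenced bit per frame */
int hand = 0;                          /* clock hand: last examined frame */

int clock_select(void) {               /* returns the frame to replace */
    for (;;) {
        if (rbit[hand] == 0) {         /* R == 0: victim found */
            int victim = hand;
            hand = (hand + 1) % NFRAMES;
            return victim;
        }
        rbit[hand] = 0;                /* R == 1: clear it (second chance) */
        hand = (hand + 1) % NFRAMES;   /* advance the hand and keep looking */
    }
}

int main(void) {
    int v = clock_select();
    printf("replace frame %d (page %d)\n", v, page[v]);
    return 0;
}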

Segmentation
A process is divided into Segments. The chunks that a program is divided into which are not necessarily all
of the same sizes are called segments. Segmentation gives user’s view of the process which paging does not
give. Here the user’s view is mapped to physical memory.

There are two types of segmentation:


Virtual memory segmentation
Each process is divided into a number of segments, not all of which are resident at any one point in time.

Simple segmentation
Each process is divided into a number of segments, all of which are loaded into memory at run time, though
not necessarily contiguously.

There is no simple relationship between logical addresses and physical addresses in segmentation. A table
stores the information about all such segments and is called Segment Table.

Segment Table
It maps two-dimensional logical addresses into one-dimensional physical addresses. Each table entry has:

Base Address: It contains the starting physical address where the segments reside in memory.

Limit: It specifies the length of the segment.


Translation of Two dimensional Logical Address to one dimensional Physical Address.

Address generated by the CPU is divided into:

Segment number (s): Number of bits required to represent the segment.

Segment offset (d): Number of bits required to represent the size of the segment.
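
A C sketch of the translation (the base and limit values are illustrative): a logical address (s, d) maps to base[s] + d, provided the offset d is within the segment's limit.

#include <stdio.h>
#include <stdlib.h>

struct segment { unsigned base, limit; };

struct segment seg_table[] = {
    {1400, 1000},                       /* segment 0 */
    {6300,  400},                       /* segment 1 */
    {4300, 1100},                       /* segment 2 */
};

unsigned translate(unsigned s, unsigned d) {
    if (d >= seg_table[s].limit) {      /* offset beyond segment length */
        fprintf(stderr, "trap: addressing error\n");
        exit(1);
    }
    return seg_table[s].base + d;       /* add the segment's base address */
}

int main(void) {
    /* logical address (segment 2, offset 53) -> physical 4300 + 53 = 4353 */
    printf("physical = %u\n", translate(2, 53));
    return 0;
}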

Importance of Segmentation

 No internal fragmentation.

 Average Segment Size is larger than the actual page size.

 Less overhead

 It is easier to relocate segments than entire address space.

 The segment table is of lesser size as compared to the page table in paging, so it consumes less memory.

 The segment size is specified by the user.
Drawback of Segmentation

 It can have external fragmentation.

 It is difficult to allocate contiguous memory to variable sized partition.

 Costly memory management algorithms.

 More time may be required to fetch instructions or code.

 Swapping segments of unequal size is not easy.

Fragmentation
Fragmentation is an unwanted problem in the operating system in which processes are loaded and
unloaded from memory, and the free memory space becomes fragmented. Free memory blocks cannot be
assigned to processes because the blocks are too small, so they stay unused. It is also necessary to understand
that as programs are loaded and removed from memory, they create free spaces, or holes, in memory. These
small blocks cannot be allotted to newly arriving processes, resulting in inefficient memory use.

The conditions of fragmentation depend on the memory allocation system. As the process is loaded and
unloaded from memory, these areas are fragmented into small pieces of memory that cannot be allocated to
incoming processes. It is called fragmentation.

Types of Fragmentation

Internal Fragmentation
When a process is allocated to a memory block, and if the process is smaller than the amount of memory
requested, a free space is created in the given memory block. Due to this, the free space of the memory
block is unused, which causes internal fragmentation.

For Example:

Assume that memory allocation in RAM is done using fixed partitioning (i.e., memory blocks of fixed
sizes). 2MB, 4MB, 4MB, and 8MB are the available sizes. The Operating System uses a part of this RAM.

Let's suppose a process P1 with a size of 3MB arrives and is given a memory block of 4MB. As a result, the
1MB of free space in this block is unused and cannot be used to allocate memory to another process. It is
known as internal fragmentation.
How to avoid internal fragmentation?
The problem of internal fragmentation may arise due to the fixed sizes of the memory blocks. It may be
solved by assigning space to the process via dynamic partitioning. Dynamic partitioning allocates only the
amount of space requested by the process. As a result, there is no internal fragmentation.

External Fragmentation
External fragmentation happens when a dynamic memory allocation method allocates some memory but
leaves a small amount of memory unusable. The quantity of available memory is substantially reduced if
there is too much external fragmentation. There is enough memory space to complete a request, but it is not
contiguous. It's known as external fragmentation.

For Example:

Let's take an example of external fragmentation: suppose there is sufficient free space (50 KB) to run a
process that needs 45 KB, but the free memory is not contiguous. Compaction, paging, or segmentation can
be used to make the free space usable for executing the process.

How to remove external fragmentation?


This problem occurs when RAM is allocated to processes contiguously. Paging and segmentation allocate
memory to processes non-contiguously, so adopting them removes this condition and external fragmentation
may be decreased.
Compaction is another method for removing external fragmentation. External fragmentation may be
decreased when dynamic partitioning is used for memory allocation by combining all free memory into a
single large block. The larger memory block is used to allocate space based on the requirements of the new
processes. This method is also known as defragmentation.
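
A minimal sketch of compaction in Python, using a hypothetical memory map in which None marks a free hole; allocated blocks are slid together so the holes merge into one large block:

# Memory as an ordered list of (owner, size-in-KB) blocks; None = free hole.
memory = [("P1", 10), (None, 5), ("P2", 20), (None, 15), ("P3", 8), (None, 30)]

def compact(blocks):
    used = [b for b in blocks if b[0] is not None]            # keep allocated blocks in order
    free = sum(size for owner, size in blocks if owner is None)
    return used + [(None, free)]                              # one merged hole at the end

print(compact(memory))
# [('P1', 10), ('P2', 20), ('P3', 8), (None, 50)]  -> the 50 KB hole can now hold P5 (45 KB)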

Advantages Of fragmentation

 Fast Data Writes

Data write in a system that supports data fragmentation may be faster than reorganizing data storage
to enable contiguous data writes.

 Fewer Failures

If there is insufficient sequential space in a system that does not support fragmentation, the write will
fail.

 Storage Optimization

A fragmented system might potentially make better use of a storage device by utilizing every
available storage block.

Disadvantages Of fragmentation

 Need for regular defragmentation

As a storage device becomes more fragmented, its performance degrades with time, necessitating
time-consuming defragmentation operations.

 Slower Read Times

The time it takes to read a non-sequential file might increase as a storage device becomes more
fragmented.
Unit-4: File Management
The operating system manages the files of a computer system. A file is a collection of specific information
stored in the memory of a computer system. File management is the process of manipulating files in a
computer system; it includes creating, modifying, and deleting files.

File System
A file is a collection of correlated information which is recorded on secondary or non-volatile storage like
magnetic disks, optical disks, and tapes. It serves as the medium through which a program is given input
and through which it delivers output.

In general, a file is a sequence of bits, bytes, or records whose meaning is defined by the file creator and
user. Every File has a logical location where they are located for storage and retrieval.

File System Layout


A file system is a set of files, directories, and other structures. File systems maintain information and
identify where a file or directory's data is located on the disk. In addition to files and directories, file systems
contain a boot block, a superblock, bitmaps, and one or more allocation groups. An allocation group
contains disk i-nodes and fragments. Each file system occupies one logical volume.

File Naming
Descriptive file names are an important part of organizing, sharing, and keeping track of data files. Develop
a naming convention based on elements that are important to the project.
File Structure
A file structure is a predefined format, determined by the file's type, that the operating system understands.

Three common file structures in an OS:

A text file: It is a series of characters that is organized in lines.

An object file: It is a series of bytes that is organized into blocks.

A source file: It is a series of functions and processes.

File Attributes
A file has a name and data. Moreover, it also stores metadata such as the file creation date and time, current
size, and last modified date. All this information is called the attributes of the file.

Here, are some important File attributes used in OS:

Name: It is the only information stored in a human-readable form.

Identifier: Every file is identified by a unique tag number within a file system known as an identifier.

Location: Points to file location on device.

Type: This attribute is required for systems that support various types of files.

Size: Attribute used to display the current file size.

Protection: This attribute assigns and controls the access rights of reading, writing, and executing the file.

Time, date and security: It is used for protection, security, and also used for monitoring.

File Access
File access is the process that determines the way files are accessed and read into memory. Generally, an
operating system supports a single access method, though some operating systems support multiple access
methods.

Three file access methods are:

 Sequential access

 Direct random access

 Index sequential access

Sequential Access
In this type of file access method, records are accessed in a certain pre-defined sequence. In the sequential
access method, information stored in the file is also processed one by one. Most compilers access files using
this access method.
Direct Random Access
The random access method is also called direct access. This method allows a record to be accessed
directly: each record has its own address, at which it can be read or written without touching the preceding
records.

Index Sequential Access


This type of access method is built on top of sequential access. In this method, an index is maintained
for every file, containing direct pointers to different blocks of the file. The index is searched sequentially,
and its pointer is then used to access the file directly. Multiple levels of indexing can be used to offer
greater efficiency in access, which also reduces the time needed to reach a single record.
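
The contrast between sequential and direct access can be sketched with ordinary file calls (Python; the file name and the 4-byte record size are made up for illustration):

RECORD_SIZE = 4
with open("records.dat", "wb") as f:
    f.write(b"AAAABBBBCCCCDDDD")             # four 4-byte records

# Sequential access: each read advances the file pointer to the next record.
with open("records.dat", "rb") as f:
    while (record := f.read(RECORD_SIZE)):
        print(record)                        # b'AAAA', b'BBBB', b'CCCC', b'DDDD'

# Direct (random) access: jump straight to record i via its address.
with open("records.dat", "rb") as f:
    f.seek(2 * RECORD_SIZE)                  # address of record #2
    print(f.read(RECORD_SIZE))               # b'CCCC'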

File Operations
A file is an abstract data type. OS can provide system calls to create, write, read, reposition, delete and
truncate files.

Creating a file – First space in the file system must be found for the file. Second, an entry for the new file
must be made in the directory.

Writing a file – To write a file, specify both the name of the file and the information to be written to the
file. The system must keep a write pointer to the location in the file where the next write is to take place.

Reading a file – To read from a file, directory is searched for the associated entry and the system needs to
keep a read pointer to the location in the file where the next read is to take place. Because a process is either
reading from or writing to a file, the current operation location can be kept as a per process current file
position pointer.

Repositioning within a file – Directory is searched for the appropriate entry and the current file position
pointer is repositioned to a given value. This operation is also known as file seek.

Deleting a file – To delete a file, search the directory for the named file. When found, release all file space
and erase the directory entry.

Truncating a file – User may want to erase the contents of a file but keep its attributes. This function
allows all attributes to remain unchanged except for file length.
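
These operations map directly onto ordinary system calls; a minimal sketch in Python (the file name is hypothetical):

import os

f = open("demo.txt", "w+")        # create: space is found and a directory entry is made
f.write("hello, file system")     # write: the write pointer advances as data is written
f.seek(0)                         # reposition (file seek): move the position pointer
print(f.read(5))                  # read: 'hello', read from the current position
f.truncate(5)                     # truncate: contents cut to 5 bytes, attributes kept
f.close()

os.remove("demo.txt")             # delete: file space released, directory entry erased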

Directory
A directory can be defined as a listing of the related files on the disk. A directory may store some or all of
the file attributes.

To get the benefit of different file systems on different operating systems, a hard disk can be divided into
a number of partitions of different sizes. The partitions are also called volumes or minidisks.
Each partition must have at least one directory in which, all the files of the partition can be listed. A
directory entry is maintained for each file in the directory which stores all the information related to that file.

Directory Systems

 Single-level directory

 Two-level directory

 Hierarchical directory

Single-level directory
The single-level directory is the simplest directory structure. In it, all files are contained in the same
directory which makes it easy to support and understand.

A single-level directory has a significant limitation, however, when the number of files increases or when
the system has more than one user. Since all the files are in the same directory, they must have unique
names. If two users name their data file test, the unique-name rule is violated.

Advantages:

 Since it is a single directory, its implementation is very easy.

 If the files are smaller in size, searching will become faster.

 Operations like file creation, searching, deletion, and updating are very easy in such a directory
structure.

Disadvantages:

 There is a chance of name collision because no two files can have the same name.

 Searching becomes time-consuming if the directory is large.

 Files of the same type cannot be grouped together.


Two-level directory
In the two-level directory structure, each user has their own user file directory (UFD). The UFDs have
similar structures, but each lists only the files of a single user. The system's master file directory (MFD) is
searched whenever a user logs in. The MFD is indexed by username or account number, and each entry
points to the UFD for that user.

Advantages:

 We can give full path like /User-name/directory-name/.

 Different users can have the same directory as well as the file name.

 Searching of files becomes easier due to pathname and user-grouping.

Disadvantages:

 A user is not allowed to share files with other users.

 Still, it is not very scalable; two files of the same type cannot be grouped together under the same user.

Hierarchical directory
Hierarchical directory systems are used when users have a large number of files, since the single-level and
two-level directory systems are not satisfactory.

Even on a single-user PC, the two-level directory system is inconvenient, because users commonly want to
group their files together in logical ways.

Therefore, what is needed is a general hierarchy, where hierarchy means a tree of directories.


With the hierarchical approach, each user on the computer system can have as many directories as needed,
so that files can be grouped together in natural ways.

In such a system, the directories form a tree rooted at a single root directory, and each file is identified by
its path from the root.
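
As a toy sketch, nested Python dictionaries can stand in for a directory tree (all names hypothetical); resolving a path walks one tree level per path component:

# Directories are dicts; files are plain strings standing in for their contents.
root = {
    "home": {
        "anil": {"notes.txt": "os notes", "labs": {"lab1.c": "..."}},
        "sita": {"notes.txt": "a different file with the same name"},
    },
    "bin": {"ls": "<binary>"},
}

def lookup(path):
    node = root
    for part in path.strip("/").split("/"):   # descend one directory per component
        node = node[part]
    return node

print(lookup("/home/anil/notes.txt"))   # 'os notes'
print(lookup("/home/sita/notes.txt"))   # same file name, different directory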

File System Implementation

Continuous Allocation
A single continuous set of blocks is allocated to a file at the time of file creation. Thus, this is a pre-
allocation strategy, using variable size portions. The file allocation table needs just a single entry for each
file, showing the starting block and the length of the file. This method is best from the point of view of the
individual sequential file. Multiple blocks can be read in at a time to improve I/O performance for sequential
processing. It is also easy to retrieve a single block. For example, if a file starts at block b, and the ith block
of the file is wanted, its location on secondary storage is simply b+i-1.
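
That address arithmetic is the method's main attraction; a one-line check in Python:

def block_location(b, i):
    # i-th block (1-based) of a file allocated contiguously from block b
    return b + i - 1

print(block_location(14, 1))   # the first block is the start block itself: 14
print(block_location(14, 5))   # the fifth block: 18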
Disadvantage

 External fragmentation will occur, making it difficult to find contiguous blocks of space of sufficient
length. A compaction algorithm will be necessary to free up additional space on the disk.

 Also, with pre-allocation, it is necessary to declare the size of the file at the time of creation.

Linked List Allocation:


The second method for storing files is to keep each one as a linked list of disk blocks. The first word of each
block is used as a pointer to the next one; the rest of the block holds data. Unlike contiguous allocation, no
space is lost to disk fragmentation. However, random access of a file is very slow, since reaching block n
requires following n pointers from the start of the chain.

Linked-List Allocation Using FAT:


The disadvantage of the linked list can be overcome by taking the pointer word from each disk block and
putting it in a table in memory. Such a table in main memory is called a FAT (File Allocation Table). Using
the FAT, random access becomes much easier, because the chain can be followed entirely in memory.

The primary disadvantage of this method is that the entire table must be in memory all the time to make it
work.
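
A minimal sketch of following a file's chain through an in-memory FAT (Python; the table contents are hypothetical, and -1 marks end of file):

EOF = -1
# fat[i] gives the number of the block that follows block i in its file's chain.
fat = {4: 7, 7: 2, 2: 10, 10: EOF,     # one file occupies blocks 4 -> 7 -> 2 -> 10
       6: 3, 3: 11, 11: EOF}           # another occupies blocks 6 -> 3 -> 11

def file_blocks(start):
    blocks, b = [], start
    while b != EOF:          # the chain is followed entirely in memory, no disk reads
        blocks.append(b)
        b = fat[b]
    return blocks

print(file_blocks(4))   # [4, 7, 2, 10]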

Unit-5: I/O Management

Classification of IO devices
Block devices − A block device is one with which the driver communicates by sending entire blocks of data.
For example, Hard disks, USB cameras, Disk-On-Key etc.

Character devices − A character device is one with which the driver communicates by sending and
receiving single characters (bytes, octets). For example, serial ports, parallel ports, sound cards, etc.
Controllers
Device drivers are software modules that can be plugged into an OS to handle a particular device. Operating
System takes help from device drivers to handle all I/O devices.

The Device Controller works like an interface between a device and a device driver. I/O units (Keyboard,
mouse, printer, etc.) typically consist of a mechanical component and an electronic component where
electronic component is called the device controller.

There is always a device controller and a device driver for each device to communicate with the operating
system. A device controller may be able to handle multiple devices. As an interface, its main task is to
convert a serial bit stream to a block of bytes and perform error correction as necessary.

Any device connected to the computer is connected by a plug and socket, and the socket is connected to a
device controller. Following is a model for connecting the CPU, memory, controllers, and I/O devices where
CPU and device controllers all use a common bus for communication.
Memory Mapped IO vs IO mapped IO
In memory-mapped I/O, device registers are mapped into the same address space as main memory, so they
are accessed with ordinary load/store instructions. In I/O-mapped (port-mapped) I/O, devices occupy a
separate address space and are accessed with special I/O instructions (such as IN and OUT on x86).

Interrupt IO vs Polled IO
In interrupt-driven I/O, the device signals the CPU with an interrupt when it needs attention, so the CPU
can do other work in the meantime. In polled I/O, the CPU repeatedly checks the device's status register in
a loop, wasting CPU cycles while the device is not ready.
DMA (Direct Memory Access)
Direct memory access (DMA) is a method that allows an input/output (I/O) device to send or receive data
directly to or from the main memory, bypassing the CPU to speed up memory operations.

The process is managed by a chip known as a DMA controller (DMAC).

The operating system uses the DMA hardware as follows: the CPU programs the DMA controller with the
source, the destination, and the number of bytes to transfer; the DMA controller then moves the data
between the device and memory without involving the CPU; when the transfer is complete, the controller
interrupts the CPU.
Goals of IO software

Uniform naming:
The names of files and devices should not depend on the underlying device. For example, file naming in
operating systems is done in such a way that the user does not have to be aware of the underlying hardware
name.

Synchronous versus Asynchronous:
Most physical I/O is asynchronous: the CPU starts a transfer and goes off to do something else until the
interrupt occurs. User programs, however, are much easier to write if the I/O operations are blocking
(synchronous). It is the operating system's responsibility to make interrupt-driven operations look blocking
to the user program.

Device Independence:
The most important goal of I/O software is device independence. It should be possible to write programs
that can access any I/O device without having to specify the device in advance. For example, a program
that takes input should not have to be rewritten for every different file or device; that would mean
duplicated work and extra space to store the different programs.

Buffering:
Data entering the system often cannot be stored directly at its final destination. For example, incoming data
may be gathered into smaller groups and transferred to a buffer for examination first.

Buffering has a major impact on I/O software, as it is what ultimately enables storing and copying data.
Many devices have timing constraints, so some data is placed into the buffer in advance to keep the rate at
which the buffer fills and the rate at which it empties balanced.

Error handling:
Errors are mostly generated by the controller, and they are mostly handled by the controller itself. When a
lower level solves the problem, it does not reach the upper levels.

Shareable and Non-Shareable Devices:


Devices like hard disks can be shared among multiple processes, while devices like printers cannot be
shared. The goal of I/O software is to handle both types of devices.
Handling IO

Programmed I/O
The programmed I/O method controls the transfer of data between connected devices and the computer.
Each I/O device connected to the computer is continually checked for inputs. Once it receives an input
signal from a device, it carries out that request until it no longer receives an input signal. Let's say you want
to print a document. When you select print on your computer, the request is sent through the central
processing unit (CPU) and the communication signal is acknowledged and sent out to the printer.
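
The essence of programmed I/O is a busy-wait loop in which the CPU itself checks device status before every transfer. A simulated sketch in Python, with a made-up device class standing in for the printer's status and data registers:

import random

class PrinterPort:
    """Hypothetical stand-in for a device's status/data registers."""
    def ready(self):
        return random.random() < 0.3   # pretend the device is often still busy
    def write(self, ch):
        print(ch, end="", flush=True)  # "transfer" one character to the device

port = PrinterPort()
for ch in "hello\n":
    while not port.ready():            # busy-wait: the CPU polls until ready
        pass
    port.write(ch)                     # then transfers one character itself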

Interrupt-Based I/O
The interrupt-based I/O method controls the data transfer activity to and from connected I/O devices. It
allows the CPU to continue to process other work instead and will be interrupted only when it receives an
input signal from an I/O device. For example, if you strike a key on a keyboard, the interrupt I/O will send a
signal to the CPU that it needs to pause from its current task and carry out the request from the keyboard
stroke.

Direct Memory Access (DMA) I/O


The name itself explains what the direct memory access I/O method does. It directly transfers blocks of data
between the memory and I/O devices without having to involve the CPU. If the CPU was involved, it would
slow down the computer. When an input signal is received from an I/O device that requires access to
memory, the DMA will receive the necessary information required to make that transfer, allowing the CPU
to continue with its other tasks. For example, if you need to transfer pictures from a camera plugged into a
USB port on your computer, instead of the CPU processing this request, the signal will be sent to the DMA,
which will handle it.

IO Software Layers
Basically, input/output software is organized in the following four layers:

 Interrupt handlers

 Device drivers

 Device-independent input/output software

 User-space input/output software

In every input/output software stack, each of the four layers given above has a well-defined function to
perform and a well-defined interface to the adjacent layers.
Interrupt Handlers
Whenever the interrupt occurs, then the interrupt procedure does whatever it has to in order to handle the
interrupt.

Device Drivers
Basically, a device driver is device-specific code for controlling an input/output device attached to the
computer system.

Disk Structure
The actual physical details of a modern hard disk may be quite complicated. Simply, there are one or more
surfaces, each of which contains several tracks, each of which is divided into sectors.

There is one read/write head for every surface of the disk. The set of matching tracks on all surfaces is
known as a cylinder. When talking about movement of the read/write head, the cylinder is a useful concept,
because all the heads (one for each surface) move in and out of the disk together.

We say that the “read/write head is at cylinder #2", when we mean that the top read/write head is at track #2
of the top surface, the next head is at track #2 of the next surface, the third head is at track #2 of the third
surface, etc.
The unit of information transfer is the sector (though often whole tracks may be read and written, depending
on the hardware). As far as most file systems are concerned, though, the sectors are what matter. In fact, we
usually talk about a 'block device'. A block often corresponds to a sector, though it need not; several
sectors may be aggregated to form a single logical block.


Disk Scheduling
Disk scheduling is done by operating systems to schedule I/O requests arriving for the disk. Disk scheduling
is also known as I/O scheduling.

Disk scheduling is important because:

 Multiple I/O requests may arrive by different processes and only one I/O request can be served at a
time by the disk controller. Thus other I/O requests need to wait in the waiting queue and need to be
scheduled.

 Two or more requests may be far apart from each other, resulting in greater disk arm movement.

 Hard drives are one of the slowest parts of the computer system and thus need to be accessed in an
efficient manner.

FCFS
FCFS is the simplest of all the Disk Scheduling Algorithms. In FCFS, the requests are addressed in the order
they arrive in the disk queue. Let us understand this with the help of an example.

Example:

Suppose the order of request is- (82, 170, 43, 140, 24, 16, 190)

And current position of Read/Write head is: 50


So, total seek time:

=(82-50)+(170-82)+(170-43)+(140-43)+(140-24)+(24-16)+(190-16)

=642
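
The same total can be computed mechanically; a short sketch in Python:

def fcfs_seek_time(head, requests):
    total = 0
    for r in requests:            # serve requests strictly in arrival order
        total += abs(r - head)    # distance moved for this request
        head = r
    return total

print(fcfs_seek_time(50, [82, 170, 43, 140, 24, 16, 190]))   # 642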

Advantages:

 Every request gets a fair chance

 No indefinite postponement

Disadvantages:
 Does not try to optimize seek time

 May not provide the best possible service

SSTF
In SSTF (Shortest Seek Time First), requests having shortest seek time are executed first. So, the seek time
of every request is calculated in advance in the queue and then they are scheduled according to their
calculated seek time. As a result, the request near the disk arm will get executed first. SSTF is certainly an
improvement over FCFS as it decreases the average response time and increases the throughput of system.
Let us understand this with the help of an example.

Example:
Suppose the order of request is- (82, 170, 43, 140, 24, 16, 190)

And current position of Read/Write head is: 50

So, total seek time:

=(50-43)+(43-24)+(24-16)+(82-16)+(140-82)+(170-140)+(190-170)

=208
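
The greedy nearest-first choice is easy to express in code; a short sketch in Python:

def sstf_seek_time(head, requests):
    pending, total = list(requests), 0
    while pending:
        nearest = min(pending, key=lambda r: abs(r - head))  # shortest seek first
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

print(sstf_seek_time(50, [82, 170, 43, 140, 24, 16, 190]))   # 208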

Advantages:

 Average Response Time decreases

 Throughput increases

Disadvantages:

 Overhead to calculate seek time in advance

 Can cause Starvation for a request if it has higher seek time as compared to incoming requests

 High variance of response time as SSTF favors only some requests

SCAN
In the SCAN algorithm the disk arm moves in a particular direction, servicing the requests in its path; after
reaching the end of the disk, it reverses direction and services the requests arriving in its path on the way
back. The algorithm works like an elevator and hence is also known as the elevator algorithm. As a result,
requests in the midrange are serviced more often, while those arriving just behind the disk arm have to wait.

Example:
Suppose the requests to be addressed are-82, 170, 43, 140, 24, 16, and 190. And the Read/Write arm is at 50,
and it is also given that the disk arm should move “towards the larger value”.

Therefore, assuming the disk has cylinders numbered 0 to 199, the seek time is calculated as:

= (199-50) + (199-16)

=332
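
A sketch of the elevator behaviour in Python, assuming cylinders 0 to 199 as in the calculation above:

def scan_seek_time(head, requests, disk_end=199):
    up = sorted(r for r in requests if r >= head)                   # serviced on the way up
    down = sorted((r for r in requests if r < head), reverse=True)  # serviced after reversing
    total, pos = 0, head
    for r in up:
        total += r - pos
        pos = r
    if down:
        total += disk_end - pos    # travel to the end of the disk, then reverse
        pos = disk_end
        for r in down:
            total += pos - r
            pos = r
    return total

print(scan_seek_time(50, [82, 170, 43, 140, 24, 16, 190]))   # 332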

Advantages:

 High throughput

 Low variance of response time

 Good average response time

Disadvantages:

 Long waiting time for requests for locations just visited by disk arm
