
Unit V - I/O & File Management
Unit Contents
• I/O Management and Disk Scheduling: I/O Devices, Organization of the I/O Function, Operating System Design Issues
• I/O Buffering, Disk Scheduling (e.g., FCFS, SSTF & SCAN), Disk Cache
• File Management: Overview, File Organization and Access, File Directories, File Allocation Methods and Free Space Management
• Case Study: Linux

I/O Devices

Differences between I/O Devices

1. Data rate: data transfer rates vary over a very wide range from one device to another.
2. Unit of transfer: data may be transferred as a stream of bytes or characters, or in larger blocks.
3. Data representation: different devices use different data encoding schemes.
4. Nature of errors: the nature of errors differs from one device to another.

Organization of I/O functions
• Device Management is the part of the OS responsible for directly manipulating hardware devices.
• It is implemented through the interaction of a device driver and an interrupt routine.

Techniques used for performing I/O
 Programmed I/O - The processor issues an I/O command, on behalf of a process, to an I/O module; that process then busy-waits for the operation to be completed before proceeding.

 Interrupt-driven I/O - The processor issues an I/O command on behalf of a process. There are then two possibilities:
• If the I/O instruction from the process is non-blocking, then the processor continues to execute instructions from the process that issued the I/O command.
• If the I/O instruction is blocking, then the next instruction that the processor executes is from the OS, which puts the current process in a blocked state and schedules another process.

 Direct Memory Access (DMA) - A DMA module controls the exchange of data between main memory and an I/O module. The processor sends a request for the transfer of a block of data to the DMA module and is interrupted only after the entire block has been transferred.
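
The busy wait in programmed I/O (the first technique above) can be made concrete with a minimal sketch in C. The register addresses, the DEV_READY bit, and the function name are illustrative assumptions, not a real device interface.

#include <stdint.h>

#define DEV_STATUS ((volatile uint32_t *)0x40000000u)  /* assumed status register   */
#define DEV_DATA   ((volatile uint32_t *)0x40000004u)  /* assumed data register     */
#define DEV_READY  0x1u                                /* assumed "done" status bit */

uint32_t programmed_io_read(void)
{
    /* The processor spins here, doing no useful work, until the
     * controller reports that the operation has completed.        */
    while ((*DEV_STATUS & DEV_READY) == 0) {
        /* busy wait */
    }
    return *DEV_DATA;   /* read the word once the device is ready */
}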

Steps to perform an input instruction
1. The application process requests a read operation.
2. The device driver queries the status register to determine whether the device is idle. If the device is busy, the driver waits for it to become idle.
3. The driver stores an input command in the controller's command register, thereby starting the device.
4. When the device driver completes its work, it saves information about the operation it began in the device status table.
5. Eventually the device completes the operation and interrupts the CPU, causing the interrupt handler to run.
6. The interrupt handler determines which device caused the interrupt and branches to the device handler for that device.
7. The device handler retrieves the pending I/O status information from the device status table.
8. The device handler copies the contents of the controller's data registers into the user process's space.
9. The device handler finishes its work and returns control to the application process.
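
A minimal sketch of how a driver and a device handler might cooperate through a device status table, following steps 1-9 above. All names here (dev_entry, MAX_DEV, driver_read, device_handler) are illustrative assumptions, not an actual OS interface.

#include <stdbool.h>

#define MAX_DEV 8                     /* assumed number of devices */

struct dev_entry {                    /* one row of the device status table       */
    bool  busy;                       /* is an operation currently in progress?   */
    void *user_buf;                   /* where the data must be copied (step 8)   */
    int   pending_op;                 /* info about the operation begun (step 4)  */
};

static struct dev_entry dev_table[MAX_DEV];

/* Steps 2-4: the driver waits for the device, starts it, and records the
 * pending operation in the device status table.                          */
void driver_read(int dev, void *user_buf)
{
    while (dev_table[dev].busy)
        ;                             /* step 2: wait for the device to become idle */
    dev_table[dev].busy       = true;
    dev_table[dev].user_buf   = user_buf;
    dev_table[dev].pending_op = 1;    /* step 4: remember that an input was started */
    /* step 3: store an input command into the controller's command register here   */
}

/* Steps 7-9: the device handler, reached from the interrupt handler, retrieves
 * the pending status, copies the data, and marks the operation finished.     */
void device_handler(int dev)
{
    struct dev_entry *e = &dev_table[dev];    /* step 7: pending I/O status info */
    /* step 8: copy the controller's data registers into e->user_buf here        */
    e->busy = false;                           /* step 9: operation complete      */
}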
DMA
• A special control unit may be provided to allow the transfer of a block of data directly between an external device and main memory, without continuous intervention by the processor. This technique is called DMA.
• DMA can be used with either polling or interrupt software.
• DMA is usually used for data transfer from or to disks (many bytes of information can be transferred in a single I/O operation).
DMA data transfer operation
1. The program makes a DMA set-up request.
2. The program deposits the address value A and the data count d.
3. The program also indicates the virtual memory address of the data on disk.
4. The DMA controller records the relevant information and acknowledges that the DMA set-up is complete.
5. The device transfers the data to the controller's buffer.
DMA operation (contd.)
6. The controller grabs the address bus and data bus to store the data in memory, one word at a time.
7. The data count is decremented.
8. The above steps are repeated until the desired data transfer is accomplished, after which a DMA transfer-complete signal is sent to the processor.
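
A minimal sketch of steps 1-2 from the processor's side, assuming a hypothetical DMA controller with memory-mapped registers; the register names, addresses, and start bit are illustrative assumptions, not a real controller interface.

#include <stdint.h>

#define DMA_ADDR  ((volatile uint32_t *)0x40001000u)  /* assumed: buffer address A       */
#define DMA_COUNT ((volatile uint32_t *)0x40001004u)  /* assumed: data count d (words)   */
#define DMA_CTRL  ((volatile uint32_t *)0x40001008u)  /* assumed: control/start register */
#define DMA_START 0x1u

void dma_read_block(uint32_t buffer_addr, uint32_t word_count)
{
    *DMA_ADDR  = buffer_addr;   /* step 2: deposit the address value A          */
    *DMA_COUNT = word_count;    /* step 2: deposit the data count d             */
    *DMA_CTRL  = DMA_START;     /* hand the transfer over to the DMA controller */
    /* The processor is now free to run other code; steps 6-8 proceed in the
     * controller, and the CPU is interrupted only when the whole block is done. */
}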

I/O Buffering
• Buffering is the technique by which the device manager can keep slower I/O devices busy during times when a process does not require I/O operations.
Types of I/O buffering schemes:
1. Single buffering
2. Double buffering
3. Circular buffering
Single buffer

(Diagram: user process ↔ OS buffer ↔ I/O device)

Single Buffer (for a block-oriented device)
1. Input transfers are made to a system buffer.
2. After the transfer, the process moves the block into user space and requests another block.
3. The user process can be processing one block of data while the next block is being read in.
4. The OS is able to swap the process out.
5. The OS must keep track of the assignment of system buffers to user processes.
Double Buffer
• There are two buffers in the system.
• One buffer is for the driver or controller to store data while waiting for it to be retrieved by the higher level of the hierarchy.
• The other buffer is used to store data from the lower-level module.
• Double buffering is also called buffer swapping.

Circular Buffer
• When more than two buffers are used, the collection of buffers is itself referred to as a circular buffer.
• In this scheme, the producer cannot pass the consumer, because it would overwrite buffers before they had been consumed.
• The producer can only fill up to buffer j-1 while the data in buffer j is waiting to be consumed.
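
A minimal sketch of a circular (bounded) buffer with N slots, showing why the producer cannot pass the consumer. The names N, cbuf, put, and get are illustrative, and no locking between producer and consumer is shown.

#include <stdbool.h>

#define N 8                    /* assumed number of buffer slots */

static int cbuf[N];
static int in  = 0;            /* next slot the producer will fill  */
static int out = 0;            /* next slot the consumer will drain */

/* Producer side: refuses to overwrite a slot the consumer has not read yet. */
bool put(int item)
{
    if ((in + 1) % N == out)
        return false;          /* buffer full: producer may not pass the consumer */
    cbuf[in] = item;
    in = (in + 1) % N;
    return true;
}

/* Consumer side: returns false when there is nothing to consume. */
bool get(int *item)
{
    if (in == out)
        return false;          /* buffer empty */
    *item = cbuf[out];
    out = (out + 1) % N;
    return true;
}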

DISK SCHEDULING
• Disk scheduling is done by operating systems to schedule I/O requests arriving for the disk. Disk scheduling is also known as I/O scheduling.
• Disk scheduling is important because:
• Multiple I/O requests may arrive from different processes, and only one I/O request can be served at a time by the disk controller. Thus the other I/O requests need to wait in the waiting queue and need to be scheduled.
• Two or more requests may be far from each other, which can result in greater disk arm movement.
• Hard drives are one of the slowest parts of the computer system and thus need to be accessed in an efficient manner.

• Seek Time: Seek time is the time taken to move the disk arm to the track where the data is to be read or written. The disk scheduling algorithm that gives the minimum average seek time is better.
• Rotational Latency: Rotational latency is the time taken by the desired sector of the disk to rotate into a position where it can be accessed by the read/write head. The disk scheduling algorithm that gives the minimum rotational latency is better.
• Transfer Time: Transfer time is the time taken to transfer the data. It depends on the rotational speed of the disk and the number of bytes to be transferred.
• Disk Access Time:
Disk Access Time = Seek Time + Rotational Latency + Transfer Time
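For example, with illustrative values of 5 ms seek time, 4 ms rotational latency, and 1 ms transfer time, the disk access time would be 5 + 4 + 1 = 10 ms.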

• FCFS: FCFS is the simplest of all the disk scheduling algorithms. In FCFS, the requests are addressed in the order they arrive in the disk queue. Let us understand this with the help of an example.
Example:
• Suppose the order of requests is (82, 170, 43, 140, 24, 16, 190) and the current position of the read/write head is 50.

So, total seek time
= (82-50) + (170-82) + (170-43) + (140-43) + (140-24) + (24-16) + (190-16)
= 642
Advantages:
• Every request gets a fair chance
• No indefinite postponement
Disadvantages:
• Does not try to optimize seek time
• May not provide the best possible service
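
A minimal sketch, in C, of the FCFS calculation above: the head simply visits the requests in arrival order and the absolute distances are summed. The function name fcfs_seek is illustrative.

#include <stdio.h>
#include <stdlib.h>

/* Sum of head movements when requests are serviced strictly in arrival order. */
int fcfs_seek(int head, const int *req, int n)
{
    int total = 0;
    for (int i = 0; i < n; i++) {
        total += abs(req[i] - head);   /* distance to the next request    */
        head = req[i];                 /* the head is now at that request */
    }
    return total;
}

int main(void)
{
    int req[] = {82, 170, 43, 140, 24, 16, 190};
    printf("%d\n", fcfs_seek(50, req, 7));   /* prints 642, as computed above */
    return 0;
}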

Consider a disk queue with requests for I/O to
blocks on cylinders 98, 183, 41, 122, 14, 124,
65, 67. The FCFS scheduling algorithm is used.
The head is initially at cylinder number 53. The
cylinders are numbered from 0 to 199. The total
head movement (in number of cylinders)
incurred while servicing these requests is
_______

• Total head movement incurred while servicing these requests
= (98 – 53) + (183 – 98) + (183 – 41) + (122 – 41) + (122 – 14) + (124 – 14) + (124 – 65) + (67 – 65)
= 45 + 85 + 142 + 81 + 108 + 110 + 59 + 2
= 632

• SSTF: In SSTF (Shortest Seek Time First), the requests having the shortest seek time are executed first. So, the seek time of every request is calculated in advance in the queue, and the requests are then scheduled according to their calculated seek times. As a result, the request nearest to the disk arm gets executed first. SSTF is certainly an improvement over FCFS, as it decreases the average response time and increases the throughput of the system. Let us understand this with the help of an example.
Example:
• Suppose the order of requests is (82, 170, 43, 140, 24, 16, 190) and the current position of the read/write head is 50.

So, total seek time
= (50-43) + (43-24) + (24-16) + (82-16) + (140-82) + (170-140) + (190-170)
= 208
or, equivalently, (190-16) + (50-16) = 208
Advantages:
• Average response time decreases
• Throughput increases
Disadvantages:
• Overhead to calculate seek time in advance
• Can cause starvation for a request if it has a higher seek time than incoming requests
• High variance of response time, as SSTF favours only some requests
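
A minimal sketch of SSTF for the same example: at each step the pending request closest to the current head position is serviced next. The function name sstf_seek is illustrative, and ties are broken by taking the first closest request found.

#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

int sstf_seek(int head, const int *req, int n)
{
    bool done[32] = { false };         /* assumes at most 32 pending requests */
    int total = 0;

    for (int served = 0; served < n; served++) {
        int best = -1;
        for (int i = 0; i < n; i++)    /* find the nearest unserviced request */
            if (!done[i] && (best < 0 || abs(req[i] - head) < abs(req[best] - head)))
                best = i;
        total += abs(req[best] - head);
        head = req[best];
        done[best] = true;
    }
    return total;
}

int main(void)
{
    int req[] = {82, 170, 43, 140, 24, 16, 190};
    printf("%d\n", sstf_seek(50, req, 7));   /* prints 208, as computed above */
    return 0;
}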

Consider a disk queue with requests for I/O to
blocks on cylinders 98, 183, 41, 122, 14, 124,
65, 67. The SSTF scheduling algorithm is used.
The head is initially at cylinder number 53
moving towards larger cylinder numbers on its
servicing pass. The cylinders are numbered from
0 to 199. The total head movement (in number of
cylinders) incurred while servicing these
requests is _______
• Total head movement incurred while servicing these requests
= (65 – 53) + (67 – 65) + (67 – 41) + (41 – 14) + (98 – 14) + (122 – 98) + (124 – 122) + (183 – 124)
= 12 + 2 + 26 + 27 + 84 + 24 + 2 + 59
= 236

• SCAN: In the SCAN algorithm, the disk arm moves in a particular direction and services the requests coming in its path; after reaching the end of the disk, it reverses direction and again services the requests arriving in its path. The algorithm works like an elevator and is hence also known as the elevator algorithm. As a result, requests in the mid-range are serviced more often, while those arriving just behind the disk arm have to wait.
Example:
• Suppose the requests to be addressed are 82, 170, 43, 140, 24, 16, 190, the read/write arm is at 50, and it is given that the disk arm should move towards the larger value.

Therefore, the seek time is calculated as:
= (199-50) + (199-16)
= 332
Advantages:
• High throughput
• Low variance of response time
• Better average response time
Disadvantages:
• Long waiting time for requests for locations just visited by the disk arm
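
A minimal sketch of the SCAN calculation for this example: the head sweeps up to the last cylinder (assumed to be 199) and then reverses down to the lowest pending request. The function name scan_seek is illustrative, and the sketch assumes at least one request lies below the initial head position, as in the example.

#include <stdio.h>

int scan_seek(int head, const int *req, int n, int max_cyl)
{
    int lowest = req[0];
    for (int i = 1; i < n; i++)        /* find the lowest pending request */
        if (req[i] < lowest)
            lowest = req[i];
    /* Sweep up to the end of the disk, then reverse down to the lowest request. */
    return (max_cyl - head) + (max_cyl - lowest);
}

int main(void)
{
    int req[] = {82, 170, 43, 140, 24, 16, 190};
    printf("%d\n", scan_seek(50, req, 7, 199));   /* prints 332, as computed above */
    return 0;
}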
Consider a disk queue with requests for I/O to
blocks on cylinders 98, 183, 41, 122, 14, 124,
65, 67. The SCAN scheduling algorithm is used.
The head is initially at cylinder number 53
moving towards larger cylinder numbers on its
servicing pass. The cylinders are numbered from
0 to 199. The total head movement (in number of
cylinders) incurred while servicing these
requests is _______
• Total head movement incurred while servicing these requests
= (199 – 53) + (199 – 14)
= 146 + 185
= 331

