
2 Device Management

Recall that a stored program or von Neumann computer comprises a CPU that is made up of an ALU and a
control unit, a primary memory unit, a collection of I/O devices, and a bus system that interconnects the other
components, as illustrated in Figure 6.

Figure 6: (a) The von Neumann architecture; (b) abbreviated form (Source: (Nutt, 2003))

As can be seen from the figure, I/O devices are attached to the computer bus. Data transfer between the various
components of the computer is through the bus system.
It is the job of the OS device manager to control all the computer’s I/O devices. The device manager issues
commands to devices, catches interrupts, and handles errors. It also provides an interface between the devices
and the rest of the system that is simple and easy to use.
I/O devices can be categorised by the function they perform, viz.: input devices like the keyboard and mouse;
output devices like the printer and screen; storage devices like disks, which are both input and output devices;
and communication devices like network cards and serial and parallel ports, which also act as both input and
output devices.
I/O devices can also be classified by the type of data they handle, as block devices and character devices. A
block device stores or reads data in fixed-size blocks. A typical example is a disk. A character device on the
other hand delivers or receives a stream of characters without regard to any block structure. Printers, network
cards, mice, as well as most other devices that are not disk-like are character devices.
Most operating systems treat all I/O devices in the same general manner, but treat the processor and memory
differently. Device management refers to the way these I/O devices are handled.
Why is device management important? Computation (i.e., use of the processor) is many, many times faster than I/O. If several processes are running on a computer, and the processor has to halt until I/O is completed for a given process, then the overall computation time for all the processes could be very long. Even if, through some mechanism, the processor could be made to attend to other processes that need its attention, and only periodically check whether I/O for a given process is complete so that that process could also be allocated processor time, time is wasted whenever the processor checks to determine if I/O for that process is still in progress2.
Secondly, apart from the great disparity between processor and I/O speeds, there is also great disparity in the speeds of different I/O devices. The keyboard and mouse, for example, are millions of times slower than network cards; the device manager must therefore be capable of managing these speed differences amongst I/O devices.

2 Busy-wait is the term used to describe what happens to the processor in such a circumstance, since the
processor is busy (testing to see if I/O is completed) but is effectively waiting for I/O to complete before the
processor can be allocated to the process that was involved in I/O.

The device manager comprises two parts for each device that it manages: a device-dependent part and a device-independent part. The dependent parts (also called device drivers) implement aspects of device management that are unique to each device type, while the independent (or generic) part defines a general software environment in which a device-dependent driver can execute.
The OS designer decides which aspects of device management are device-dependent and which are not. The independent parts are implemented in the operating system, complete with an interface for making read and/or write calls to/from any device, and work with all devices; the dependent parts are implemented in the driver software for each device. With this partitioning of the device manager into dependent and independent parts, a new device can be added to the computer system simply by attaching it to the bus system and then providing the device-specific driver, which interfaces with the device-independent part of the manager. The device-dependent component of the device manager (i.e., the drivers) must therefore come with the device, and is usually supplied by the device manufacturer.

2.1 Device Controllers


Each I/O device consists of the device itself (mostly mechanical) and a second, electronic, hardware component (usually implemented on a printed circuit board) called the device controller (see Figure 7), which controls the operation of the device. The first task of the controller is to connect the device to the computer's address and data buses, and to provide a set of components that CPU instructions can manipulate to cause the device to function. Different controllers have different speeds, capabilities, and operations, but they provide the same general interface to the operating system. The OS uses this common interface to achieve the goal of resource abstraction by hiding the details of the different controllers from the programmer. The second task of the controller is to monitor the device's status; this is a mundane job that, if handled by the processor, would take up too much of the processor's time.

Figure 7: Device controllers

Figure 8 is a pictorial representation of device management in a computer system3. The hardware-hardware interface between the devices and controllers is important, but it is of interest to the device manufacturer and not to software designers. The interface between the device controller and the bus system is also important, to the person attaching the device to the computer, and is transparent to driver software, i.e., abstraction is used to make the details of the connection of no concern to the driver software.

3 File management is treated later as a separate topic.

Figure 8. Device management (Source: (Nutt, 2003), with minor modifications)

We explain below how, in the hypothetical example above, the software interface between the controller and the OS works.
We notice that the controller has a number of registers (three in this example): command, status, and data registers. The status register comprises a number of flag bits, including a busy bit, a done bit, and an error bit, which is set whenever the controller encounters an error it cannot recover from. The truth table shows how the busy and done flags are used to place the controller in different states.
• In the idle state (when both flags are set to 0), the software is allowed to place a command in the controller's command register, and data in its data register. The direction in which the data will travel depends on the contents of the command register: if it is an input command, data will be read from the device; if it is an output command, data will be written to the device.
• The presence of the new command in the command register causes the busy flag to be set (resulting in the 1, 0 or working state), and the data is moved into or out of the device, depending on the command. The process knows which direction to move data by reading the state from the status register.
• At the end of the operation, the controller clears the busy flag and sets the done flag, i.e., the device is set to the 0, 1 or finished state.
• If the read or write operation completed without error, the done flag is then cleared, and the device returns to the 0, 0 or idle state to indicate that the device is again ready for use.
If the controller encountered an error, the error flag in the status register will be set.
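The protocol just described can be made concrete with a short C sketch. Everything device-specific below, namely the register layout, the base address 0x8000, the flag bit positions, and the command code, is an assumption invented for illustration; a real controller's data sheet would define the actual values.

    #include <stdint.h>

    /* Hypothetical controller registers, assumed memory-mapped at 0x8000. */
    typedef struct {
        volatile uint8_t command;   /* written by software to start an operation */
        volatile uint8_t status;    /* holds the busy, done, and error flag bits */
        volatile uint8_t data;      /* one byte in or out */
    } dev_regs_t;

    #define DEV      ((dev_regs_t *)0x8000)  /* assumed base address */
    #define ST_BUSY  0x01                    /* assumed flag bit positions */
    #define ST_DONE  0x02
    #define ST_ERROR 0x04
    #define CMD_READ 0x01                    /* assumed command code */

    /* Read one byte from the device, following the busy/done truth table. */
    int dev_read_byte(uint8_t *out) {
        while (DEV->status & (ST_BUSY | ST_DONE))
            ;                         /* wait for the idle (0, 0) state */
        DEV->command = CMD_READ;      /* controller enters working (1, 0) */
        while (!(DEV->status & ST_DONE))
            ;                         /* wait for the finished (0, 1) state */
        if (DEV->status & ST_ERROR)
            return -1;                /* unrecoverable controller error */
        *out = DEV->data;
        return 0;                     /* controller then returns to idle (0, 0) */
    }

The two empty while loops are precisely the busy waiting discussed under the methods of performing I/O below.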

2.2 Memory-Mapped I/O
Each controller has a few registers that are used for communicating with the CPU. By writing into these
registers, the OS can command the device to deliver or accept data, or perform some other action. By reading
from the registers, the OS can learn what the device’s state is. In addition to registers, many devices have a data
buffer that the OS can read data from or write data to. But how does the CPU communicate with the control
registers and the data buffers?
Recall that the set of operation codes (or machine instructions) designed for a given microprocessor constitutes the instruction set for that computer. Two approaches exist for this communication: isolated I/O and memory-mapped I/O (see Figure 9).
Isolated I/O
Traditionally, the instruction set included special I/O instructions separate from the instructions that access memory. In this approach, each device register is assigned an I/O port number or address. Hence, reading and writing data from/to an I/O device register required instructions like:

in R3, port #
out R3, port #

Reading and writing to/from memory from/to a register, on the other hand, requires instructions of the form:

Load R3, mem addr
Mov mem addr, R3

We notice that using this scheme, the address spaces for memory and I/O are different.

Figure 9: Isolated I/O and Memory-Mapped I/O

Memory-Mapped I/O
The second approach, used in all modern computer systems, is memory-mapped I/O. Here, devices are mapped to primary memory addresses rather than having specialized device addresses: all the control registers and data buffers are mapped into the memory space by assigning a unique memory address to each control register and ensuring that such addresses are not also assigned to primary memory. Some computer systems use a hybrid approach, in which data buffers are mapped to memory but separate I/O ports are used for the control registers.
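As a minimal sketch of the contrast between the two approaches, the C fragment below accesses device registers through ordinary loads and stores, which is all memory-mapped I/O requires. The register addresses are assumptions chosen for illustration.

    #include <stdint.h>

    /* Memory-mapped I/O: a device register is just an address. 'volatile'
     * prevents the compiler from caching register contents in a CPU
     * register or optimizing the accesses away. Addresses are assumed. */
    #define DEV_CONTROL (*(volatile uint8_t *)0x9000)
    #define DEV_STATUS  (*(volatile uint8_t *)0x9001)

    void start_device(uint8_t cmd) {
        DEV_CONTROL = cmd;      /* an ordinary store writes the register */
    }

    uint8_t device_status(void) {
        return DEV_STATUS;      /* an ordinary load reads the register */
    }

    /* Under isolated I/O, the same two operations would instead require
     * special instructions (the "in R3, port #" and "out R3, port #"
     * forms shown above) addressing a separate I/O address space. */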
Advantages of memory-mapped I/O include the following:
1. No special I/O instructions (which are generally accessible only through an assembly language routine call, with the additional overhead that involves) are required.
2. The traditional approach's requirement for extra categories of I/O instructions, which enlarges the instruction set (and with it the possibilities and difficulties facing the programmer), is avoided. With the memory-mapped approach, every I/O instruction is treated the same way as the equivalent memory instruction.
3. The OS can better control which processes may use which I/O device, since the ordinary memory-protection mechanisms apply to the mapped register addresses.

Disadvantages of the memory-mapped approach include:
1. The risk of caching: while caching is fine for ordinary main memory, caching the contents of a device control register would be disastrous, since software would then read a stale copy of the register instead of the device's current state.
2. The use of a separate memory bus between the CPU and RAM is a problem for I/O devices, since the devices are not connected to that bus and so cannot directly see that their attention is needed.
The difficulties mentioned above are all handled by more complex designs in the electronics. In spite of the increased hardware complexity, the net benefits of memory-mapped I/O far outweigh the disadvantages, so much so that memory-mapped I/O is today almost universally used.

2.3 Methods of Performing Input and Output


Polling
Whether or not a CPU has memory-mapped I/O, it needs to address device controllers in order to exchange data with them. In the simplest I/O method, a user program issues a system call, which the OS kernel translates into a procedure call to the appropriate driver. The driver then starts the I/O and continuously queries the device to see if the I/O has completed; this method of determining when the device has completed I/O is known as polling or busy waiting. The processor (i.e., CPU) is involved in the data transfer itself, because it runs the device driver code that performs the transfer: in effect, the CPU moves the data between main memory and the device controller. This is illustrated in Figure 10(a). When the I/O has completed, the driver puts the data where they are needed, and the OS returns control to the calling program. Busy waiting has the disadvantage that it ties up the CPU polling the device until the I/O is completed.
Direct Memory Access
The second method of performing I/O is direct memory access (DMA). DMA controllers differ from conventional controllers in that the hardware is designed to perform the same data transfer function that the CPU performs, so the CPU can be bypassed completely in the data transfer process; its only involvement is starting the DMA transfer. The DMA scheme is illustrated in Figure 10(b).
DMA can significantly improve I/O performance, because the CPU is freed from the data transfer process. It can also significantly improve controller performance, because the controller no longer needs to wait for the CPU to transfer data to and from memory.
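The following C sketch shows how a driver might start such a transfer. The DMA controller's register layout, base address, and command bits are invented for illustration; real DMA hardware differs in detail but is programmed in essentially this way.

    #include <stdint.h>

    /* Hypothetical DMA controller registers; layout and address assumed. */
    typedef struct {
        volatile uint32_t mem_addr;  /* main-memory address of the buffer */
        volatile uint32_t count;     /* number of bytes to transfer */
        volatile uint32_t command;   /* start bit plus transfer direction */
    } dma_regs_t;

    #define DMA       ((dma_regs_t *)0xA000)  /* assumed base address */
    #define DMA_START 0x1
    #define DMA_READ  0x2                     /* device-to-memory transfer */

    /* Start a device-to-memory transfer; the CPU is free again immediately. */
    void dma_read(void *buf, uint32_t nbytes) {
        DMA->mem_addr = (uint32_t)(uintptr_t)buf;
        DMA->count    = nbytes;
        DMA->command  = DMA_START | DMA_READ;
        /* The DMA controller now moves the data by itself; completion is
         * signalled later, typically by an interrupt. */
    }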

Figure 10: Comparing (a) conventional I/O and (b) the use of a DMA controller

Just as with buses, DMA controllers can operate in word-at-a-time mode or in block mode. In the former mode, the DMA controller requests the transfer of one word and gets it; if the CPU also wants the bus at that moment, it has to wait. This mechanism is known as cycle stealing, because the DMA controller occasionally steals a bus cycle from the CPU, delaying it slightly. In block mode, the DMA controller tells the device to acquire the bus and issue a series of data transfers. This form of operation is known as burst mode, because not just one word but a burst of data is transferred. Burst mode is more efficient than cycle stealing, because acquiring the bus takes time and multiple words are transferred in one bus acquisition. The downside of burst mode is that the CPU may face long delays if it needs the bus while several words are being transferred.
Interrupts
The third method of performing I/O makes use of interrupts, and is aimed at eliminating the waste caused by the busy waiting involved in polling. In this approach, when the device driver starts the device, it also requests that the device send a signal when it is done with the I/O. The OS then blocks the calling program if the caller must get the results of the I/O before it can proceed, and looks for other work that needs the processor's attention. Notice that the processor is not held up until the I/O is completed. The process of generating and handling interrupts is summarized in Figure 11.


Figure 11: (a) Steps in starting an I/O device and getting an interrupt. (b) Interrupt processing
(Adapted from Tanenbaum (2001))

We first consider Figure 11 (a) – steps in generating an interrupt.


Step 1: The driver tells the controller what to do by writing into the controller’s registers, and the controller in
its turn starts the device.
Step 2: After writing or reading the number of bytes it was instructed to transfer, the controller sends an
interrupt signal to the interrupt controller.
Step 3: If the interrupt controller is ready to accept the interrupt, it informs the CPU by asserting a pin on the CPU chip. Notice that the interrupt controller does not always accept the interrupt immediately, for example when another interrupt is being processed, or when a higher-priority interrupt request is made simultaneously. In such a case, the interrupting device is ignored for the moment, and the device continues to assert its interrupt signal until it is eventually serviced.
Step 4: In addition to informing the CPU of the presence of an interrupt, the interrupt controller also puts the
number of the interrupting device on the bus so that the CPU can read the device number and know which one
of several possible I/O devices has just finished I/O.
An interrupt signal causes the CPU to stop (i.e., interrupt) what it was doing (that is why it is called an interrupt!) and start doing something else. After the CPU decides to take the interrupt, the program counter (PC) (a register which holds the address of the next instruction to be fetched) and the program status word (PSW) (a register which holds control information, the CPU priority, and the mode, supervisor or user) are pushed onto the stack, and the CPU is switched to kernel mode. Part of memory contains the interrupt vector, a table of pointers to interrupt handlers (i.e., the starting memory addresses of the routines that service each interrupt); the device number is used as an index into this vector to locate the handler for the interrupting device. With the PC and PSW stored on the stack, the PC is now loaded with the address of the interrupt handler, which causes program control to branch to the code that handles the interrupt. When execution of this code comes to an end, the old PC and PSW are popped off the stack, so that flow of control returns to the first instruction that had not yet been executed when the interrupt was received (see Figure 11(b)).
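The sequence just described can be rendered schematically in C. In a real system the PC/PSW saving, the mode switch, and the jump through the vector are performed by hardware and small assembly stubs; the names and the table size below are assumptions made for illustration.

    #include <stdint.h>

    #define NDEVICES 16                       /* assumed number of devices */

    typedef void (*interrupt_handler_t)(void);

    /* The interrupt vector: one handler entry point per device number. */
    static interrupt_handler_t interrupt_vector[NDEVICES];

    void dispatch_interrupt(uint32_t device_number) {
        /* At this point the hardware has already pushed the old PC and PSW
         * onto the stack and switched to kernel mode. The device number
         * read from the bus indexes the vector. */
        if (device_number < NDEVICES && interrupt_vector[device_number])
            interrupt_vector[device_number]();
        /* Returning restores the saved PSW and PC, so execution resumes at
         * the first instruction not yet executed when the interrupt came. */
    }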

2.4 I/O Software Layers


I/O software is typically organised in four layers as illustrated in Figure 12.

Figure 12: I/O Software Layers

Interrupts are an unpleasant fact of life, and their handlers should be hidden away in the lowest layer, with as little involvement of the rest of the operating system as possible.
The next layer up consists of the device drivers. As we saw earlier, device drivers are specific to individual devices; they are device-specific code, typically written by the device manufacturer, for controlling the device.
Device drivers can control devices because they have access to the device controller registers.
The device-independent part of the operating system provides the following functions:
1. Uniform interfacing for device drivers
2. Buffering
3. Error reporting
4. Allocating and releasing dedicated devices
5. Providing a device-independent block size
Uniform interfacing for device drivers
The idea here is to make all I/O devices and drivers look more or less the same to the rest of the OS, so that each driver simply plugs into the OS. And because the driver interface is the same, different driver writers can independently write drivers for their devices knowing that they will work as expected. Without this facility, the OS would have to be modified each time a new device came along.
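One widely used realization of this idea, loosely modelled on Unix-style operation tables, is a structure of function pointers that every driver must fill in; the structure and names below are illustrative assumptions rather than any particular OS's actual interface.

    #include <stddef.h>

    /* The uniform interface: every driver supplies the same entry points,
     * so the device-independent layer can drive any device the same way. */
    struct device_ops {
        int  (*open)(int minor);
        int  (*read)(int minor, void *buf, size_t n);
        int  (*write)(int minor, const void *buf, size_t n);
        void (*close)(int minor);
    };

    /* Each driver plugs its own functions into one of these tables; the
     * rest of the OS only ever calls through struct device_ops. */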
Buffering
Buffering is a technique widely used in I/O. Consider, for example, a process that wants to read data from a modem. A number of possibilities are illustrated in Figure 13.

Figure 13: (a) Unbuffered input; (b) Buffering in user space; (c) Buffering in kernel followed by copying to user
space; (d) Double buffering in kernel. (Source: Tanenbaum (2001))
In Figure 13 (a), the user process invokes the read system call and then blocks (i.e., waits) until a character is read. After the character arrives, an interrupt is generated, the character is handed to the user process, the process is unblocked ready for the next character, and the cycle repeats. The problem with this approach is that the user process has to be woken up for every incoming character, which is very expensive.
Figure 13 (b) is an improvement over Figure 13 (a). Here, the user process provides an n-character buffer in user space and reads up to n characters at a time. The interrupt generated for each arriving character causes the character to be put in the buffer, and only when the buffer fills up is the user process woken up. Though this scheme is far more efficient than the previous one, it too has problems, for example when the buffer fills up and its data needs to be transferred (perhaps to disk) at the same time that a new character is arriving; copying to disk is an expensive (i.e., slow) operation.
Figure 13 (c) is an improvement over Figure 13 (b). Here, a buffer is created in the kernel, and the interrupt handler puts incoming characters there (Step 1). When this buffer fills up, all its contents are copied to the user-space buffer in one step (Step 2), and the empty kernel buffer can then start receiving data again. This is far more efficient than copying to disk. But a problem remains: what happens to characters that arrive while the kernel buffer is being copied to user space?
One way to overcome the problem above is through double buffering (see Figure 13 (d)). Here, the kernel is
provided with a second buffer. After the first buffer fills up, the second one starts receiving data, while the
contents of the first one are being copied to user space. By the time the second buffer fills up, the first one has
been emptied, and will resume accepting data while the second one is being copied into user space.
Circular buffering is achieved when the number of buffers is increased from 2 to n. In this technique, the data producer (the device controller in read operations, the CPU in write operations) goes ahead and fills up available buffers, while the data consumer (the CPU in read operations, the device controller in write operations) reads data from full buffers. Care must be taken, though, to ensure that the producer does not go past the consumer, because otherwise data will be overwritten before they are consumed; similarly, the consumer must not go past the producer, because if it does, it will read from a buffer before valid data have been put in it.
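A minimal C sketch of such a ring of n buffers follows. The sizes are assumptions, and it is deliberately simplified: a real kernel would protect the shared indices with a lock, or by disabling interrupts, while they are being updated.

    #include <stddef.h>

    #define NBUF  8                /* n buffers in the ring (assumed) */
    #define BUFSZ 512              /* bytes per buffer (assumed) */

    static char   ring[NBUF][BUFSZ];
    static size_t fill  = 0;       /* next buffer the producer fills */
    static size_t drain = 0;       /* next buffer the consumer empties */
    static size_t count = 0;       /* number of full buffers in the ring */

    /* Producer: claim the next free buffer, or fail if doing so would
     * overrun the consumer (data would be overwritten before use). */
    char *producer_claim(void) {
        if (count == NBUF)
            return NULL;           /* producer must not pass the consumer */
        char *b = ring[fill];
        fill = (fill + 1) % NBUF;
        count++;
        return b;
    }

    /* Consumer: take the oldest full buffer, or fail if none is ready
     * (the consumer must not pass the producer). */
    char *consumer_take(void) {
        if (count == 0)
            return NULL;
        char *b = ring[drain];
        drain = (drain + 1) % NBUF;
        count--;
        return b;
    }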
Error reporting
Although most errors are device-specific, the device-independent component provides the framework for handling them. For example, programming errors, such as trying to read from an output device like a printer or to write to an input device like a keyboard, may be handled easily. Other errors, such as writing to a damaged block of data, may be handled by the driver, but if the driver does not know what to do, it may pass the problem back to the device-independent component.

Allocating and releasing dedicated devices
This concerns controlling access to dedicated devices, such as CD-ROMs, that can only be used by a single process at a time.
Providing a device-independent block size
Different disks may have different sector sizes, and it is up to the device-independent part of the OS to hide this
fact, and provide a common block size to higher layers.
User-level I/O Software
Most I/O software is implemented in the OS, but part of it consists of libraries linked together with user programs. Examples in C include the write system call wrapper and the printf and scanf library functions.
Another category of I/O software (i.e., other than library procedures) is the spooling system. Spooling is a way to handle dedicated I/O devices in a multiprogramming system. For example, although it is technically easy to let each user process open the special file associated with a printer, the problem is that a user process may keep that file open for a long time, thereby preventing other user processes from having access to the printer. What is done in practice is to create a special daemon process and a spooling directory. To print a file, a process generates the entire file and places it in the spooling directory. It is the job of the daemon process (the only process with permission to access the printer's special file) to print the files in the spooling directory. Spooling is also used in other situations, e.g., in file transfer over networks: the user simply puts the file in a network spooling directory, and a network daemon handles the file transfer.

2.5 Device Classes


In this section, we consider specifics of a sample of I/O devices.
The OS distinguishes devices as being either block- or character-oriented. An I/O operation on a character-oriented device reads or writes one byte at a time. Most character-oriented devices, like modems and printers, have cables connecting the device to the controller via serial or parallel ports on the computer. The cable acts as a communication medium, transmitting information between the computer and the I/O device. Other forms of communication media include broadcast links, telephone lines, and coaxial cables.
Block-oriented devices on the other hand write a fixed number (usually 512 or more) of bytes in one operation.
Block-oriented devices like storage devices are usually integrated with the controller as a single hardware
component. Some block-oriented devices access data sequentially (e.g., tapes) so that when a block of data is
read, it is not necessary to provide the address of the next block of data, since this address is implicitly defined
by the previous operation.
Unlike sequentially accessed storage devices, randomly accessed storage devices like floppy disks, hard disks, and optical disks have no limitation that forces blocks of data to be written or read in any particular order. This is because the read/write head can be moved from one block of data to any other block without having to read the intervening blocks.

2.5.1 Communication Devices


These are character devices used to transfer bytes of information between a computer and a remote device (e.g.,
printer, modem). A communications device controller manipulates the remote device using a controller-device
protocol (i.e., agreement on the syntax and semantics of information); the controller itself is manipulated by the
driver software.
2.5.1.1 Serial Communication
In serial communication (e.g., communication between a modem and a computer), data is transferred between the computer and the communication device one bit at a time. To connect a modem to a computer using the serial communication port, for example, a serial cable is used to connect the serial communications controller in the computer to the modem, and the other end of the modem is connected to a telephone jack4. The controller and the serial device can then use a common protocol to exchange data.

4 Most modems and their controllers today are actually built as a single unit, eliminating the need for the controller-modem cabling.
A serial cable comprises several wires and is terminated by a 9-pin or 25-pin connector, of which one pin is used for transmitting data, one for receiving data, and one for ground; the other pins are meant for various control functions, most of which are not used.
The serial controller is implemented using a specialized microprocessor called a UART (Universal Asynchronous Receiver/Transmitter). UARTs are needed because the computer works only with whole characters (byte, word, etc.), but data comes over the serial line one bit at a time; the UART does the character-to-serial and serial-to-character conversions.
RS-232 is the most commonly used controller-device protocol for serial communication involving an asynchronous5 device like a modem. The standard defines the interface between the terminal and the controller for exchanging 8-bit bytes of information. It specifies the type of physical connection (9-pin or 25-pin), as well as the meaning of the signal on each of the 4 pins used by the standard (send, receive, ground, control). Although data is transmitted a byte at a time, the byte must be split into individual bits, together with control and error-checking bits, and each of these bits is transmitted as a separate signal.
The transmission speed of serial devices is usually in the range of 110 to 57600 signals per second; this rate is known as the baud rate. In the RS-232 standard, 11 signals are transmitted for each 8-bit byte: three of the 11 signals are used for synchronizing the operation of the device controller and the device, and the other 8 represent the data bits. At 57600 baud, for example, this gives an effective data rate of 57600 / 11, or roughly 5236 bytes per second.

2.5.1.2 Parallel Communication


Parallel communication ports used to be commonly used to connect computers to printers. Unlike serial communication, in which one bit is transmitted at a time, parallel communication involves the simultaneous transmission of several bits (comprising a character) at a time. Parallel communication is thus faster, and the interface is much simpler, as there is no need for character-to-serial and serial-to-character conversions.

2.5.1.3 USB (Universal Serial Bus)


With advances in computer technology, computers and their accessories became much faster, and the need arose to transfer vast amounts of data at a time; serial and parallel communication were inadequate for the communication speeds required. The USB protocol was developed to meet such demands.

2.5.2 Sequentially Accessed Storage Devices


The most common sequential storage device is the magnetic tape, used today principally for backup. A
magnetic tape is a plastic tape with a ferrite coating. Traditionally, magnetic tapes had a width of 0.5 inch, but
today, smaller width tapes are prevalent.
0.5-inch tapes are formatted with 9 logical tracks, each running the full length of the tape. The read/write head of the tape can sense 9 bits across the 9 tracks, comprising a data byte (8 bits) and one parity bit used to provide simple error checking when a byte is read. A collection of bytes packed densely on the tape constitutes a physical record, or block. Physical records (or blocks) are separated by an inter-record gap. The density of the bytes on the tape is measured in bytes per inch (the number of bytes packed on an inch length of tape).

2.5.3 Randomly Accessed Storage Devices


The blocks in a randomly accessed device like a magnetic disk can be accessed in any order. Though such access is much faster than accessing tape data, there is a small, measurable performance penalty for accessing blocks stored at physically distant locations on the recording surface.
Magnetic disks have one or more disk platters, each having one or two storage surfaces (see Figure 14). Each
surface is divided into tracks and sectors as shown. The platters spin on a common axis, past read/write heads
that can be moved radially to align heads with the track that is being read or written to. Data is read/written on a
track on a sector-by-sector basis. A sector of data is the minimum amount of data that can be read or written at a time. The collection of tracks at a given arm position on the different platter surfaces is called a cylinder.

5 An asynchronous terminal is a character-oriented device that exchanges characters with the computer, using explicit signals to control the transfer of each character.
Because of the need to minimize the cost of disks, the read/write heads are ganged together and controlled by a single motor. The penalty for this reduction in cost is that all the heads move together, and it is not possible for one head to be positioned over different tracks at the same time.

Figure 14: Hard Disk

Access time (the time it takes to access a file) depends on three factors:
• Seek time, the time it takes to position the read/write head on the proper track; this is the slowest of the three factors, and as a result, several studies have been conducted to determine the most efficient way of moving the disk read/write head;
• Search time, also known as rotational delay or disk latency time, the time it takes to rotate the disk until the requested record is moved under the read/write head;
• Transfer time, the fastest of the three times, the time it takes to copy the data.
Example:
Assume that one revolution of the disk takes 16.8 ms, the maximum seek time is 50 ms, and it takes 0.00094 ms
to transfer a byte. Compute the average and maximum access times to transfer a record of 100 bytes of data,
assuming the data is read one character at a time.
Max rotational delay = 16.8 ms, and so avg rotational delay = 8.4 ms.
Max seek time = 50 ms, and so avg seek time = 25 ms.
Transfer time = 100 × 0.00094 = 0.094 ms
Maximum access time = (16.8 + 50 + 0.094) ms = 66.894 ms to read 1 record.
Avg access time = (8.4 + 25 + 0.094) ms = 33.494 ms to read 1 record.
Now consider the effect of blocking. Assume first that we read 10 records individually without blocking. On
average, it takes 10 * 33.494 = 334.94 ms to access the data.
But if the 10 records all form a block, the average access time would be computed as follows:
Avg access time = ((8.4 + 25) + 0.094 * 10) ms = 33.4 + 0.94 ms = 34.34 ms, significantly less than the time
used when no blocking was involved.
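The same arithmetic can be checked with a few lines of C:

    #include <stdio.h>

    /* Disk access-time figures from the example above; all times in ms. */
    int main(void) {
        double rev      = 16.8;     /* one revolution */
        double max_seek = 50.0;
        double per_byte = 0.00094;  /* transfer time per byte */

        double avg = rev / 2 + max_seek / 2 + 100 * per_byte;
        printf("avg access, 1 record:    %.3f ms\n", avg);      /* 33.494 */
        printf("10 unblocked records:    %.2f ms\n", 10 * avg); /* 334.94 */

        /* One seek and one rotational delay, then ten records moved: */
        double blocked = rev / 2 + max_seek / 2 + 10 * 100 * per_byte;
        printf("10 records in one block: %.2f ms\n", blocked);  /* 34.34 */
        return 0;
    }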

We now look more closely at seek time, the most costly component of access time. Consider, for example, a multiprogramming system with multiple requests to access the disk at the same time. Assume that requests are made for blocks on tracks 12, 123, 50, 13, 124, and 49, in that order. Suppose that the time it takes to initiate the seek operation for a track is X ms, and that it requires a further Y × K ms to get to a track Y tracks away from the current track. Assuming the value of K to be 3, the seek time for the problem above (with the tracks accessed in the given order) is:

CF
+ (X + 3 × (123 – 12))
+ (X + 3 × (123 – 50))
+ (X + 3 × (50 – 13))
+ (X + 3 × (124 – 13))
+ (X + 3 × (124 – 49)) = CF + 5X + 1221 ms

where CF is the time it takes to access the first of the tracks (track 12 in this example), and varies depending on the original position of the read/write head. Minimizing seek time could therefore lead to a great reduction in the overall access time, and various algorithms have been developed in this regard.

2.5.3.1 First-Come-First-Served (FCFS)
Requests are serviced in the order in which they arrive at the driver. The algorithm is simple but does not give
good performance. For example, if requests are made to read tracks 76, 124, 17, 269, 201, 29, 137, and 12, the
FCFS algorithm will start at track 76, move 48 tracks to track 124, etc., until all the tracks are read, by which
time the read/write heads would have moved over 880 tracks. This is illustrated in Figure 15(a).

2.5.3.2 Shortest-Seek-Time-First (SSTF)


For this algorithm, the driver selects the next request as the one requiring the minimum seek time from the
current position. The main drawback with this algorithm is that under heavy load conditions, SSTF can prevent
distant requests from ever being serviced, a phenomenon known as starvation. For the disk request in the
previous section, the algorithm responds by moving the read/write heads from track 76 to 29, 17, 12, 124, 137, 201, and 269, crossing 321 tracks, as illustrated in Figure 15(b).
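As a check on these figures, the short C program below computes the total head movement for FCFS and SSTF on the same request sequence, starting from track 76.

    #include <stdio.h>
    #include <stdlib.h>

    #define N 8

    int main(void) {
        int req[N]  = {76, 124, 17, 269, 201, 29, 137, 12}; /* first = start */
        int done[N] = {0};

        /* FCFS: service the requests in arrival order. */
        int moved = 0;
        for (int i = 1; i < N; i++)
            moved += abs(req[i] - req[i - 1]);
        printf("FCFS: %d tracks\n", moved);                 /* prints 880 */

        /* SSTF: always pick the closest outstanding request. */
        int pos = req[0], total = 0;
        done[0] = 1;
        for (int served = 1; served < N; served++) {
            int best = -1;
            for (int i = 1; i < N; i++)
                if (!done[i] &&
                    (best < 0 || abs(req[i] - pos) < abs(req[best] - pos)))
                    best = i;
            total += abs(req[best] - pos);
            pos = req[best];
            done[best] = 1;
        }
        printf("SSTF: %d tracks\n", total);                 /* prints 321 */
        return 0;
    }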

2.5.3.3 Scan/Look Algorithms


The scan algorithm has the head move from the current track toward the highest numbered track, servicing all
requests for a track as it passes that track. When it reaches the highest numbered track, it reverses direction of
the scan, servicing newly arrived requests as it moves towards track 0.
The look algorithm is similar to scan, except that the last track that is serviced before reversing direction is the
highest numbered track requested, not the highest numbered track available on the disk.
Given the requests above, and assuming that the disk's tracks are numbered from 0 to 299 (so that the highest numbered track is 299), the look algorithm will move the head from track 76 to 124, 137, 201, 269, 29, 17, and 12, a total of 450 tracks, while the scan algorithm will move the head from track 76 to 124, 137, 201, 269, 299, 29, 17, and 12, a total of 510 tracks. The relative performance of the two algorithms is illustrated in Figure 15(c).

2.5.3.4 Circular Scan/Look


Consider what happens if a request for track 15 is serviced, and shortly after, a request for track 13 is made. Using the scan algorithm, almost two full scans of the disk are made (forward to the highest numbered track, and then backwards towards track 0) before the request for track 13 is serviced. This is very expensive. Circular scan overcomes this problem by ensuring that the scan of the disk always takes place in the same direction: after the highest numbered track is scanned, the read/write head moves back to track zero, and scanning then resumes as before. Hence, the request for track 13 above will be serviced approximately one disk scan (and not two, as before) later. Circular look is similar to circular scan, except that the highest track accessed is the highest that was requested, not the highest present on the disk. Both the circular scan and circular look algorithms rely on the existence of a special homing command that moves the head to track zero in a small amount of time.
Using the example above, circular look would move the head from track 76 to 124, 137, 201, 269, 12, 17, and 29, while circular scan will move the head from track 76 to 124, 137, 201, 269, 299, 12, 17, and 29. Assuming that the drive requires the equivalent of 100 steps to move the head from track 269 or 299 to track zero, circular scan requires 352 steps, and circular look 322 steps, as illustrated in Figure 15(d).

Figure 15: Seek time algorithms compared. (a) First-come-first-served; (b) Shortest-seek-time-first; (c) Scan/Look; (d) Circular Scan/Look
