MODULE 5
Chapter 12: MASS-STORAGE STRUCTURE
12.1 Overview of Mass-Storage Structures:
12.1.1 Magnetic Disks:
• Magnetic disks provide the bulk of secondary storage for modern computer systems (Figure 12.1).
• Each disk-platter has a flat circular-shape, like a CD.
• The two surfaces of a platter are covered with a magnetic material.
• Information is stored on the platters by recording magnetically.
Solid-State Disks:
• An SSD is non-volatile memory that is used like a hard drive.
• For example: DRAM with a battery to maintain its state in a power failure, or flash-memory technologies.
• Advantages compared to Hard-disks:
1) More reliable: SSDs have no moving parts, and they are faster because they have no seek time or
rotational latency.
2) Less power consumption.
• Disadvantages:
1) More expensive
2) Less capacity than hard disks, and they may have shorter life spans, so their uses are somewhat limited.
• Applications:
1) One use for SSDs is in storage-arrays, where they hold file-system metadata that require high
performance.
2) SSDs are also used in laptops to make them smaller, faster, and more energy-efficient.
• A storage-area network (SAN) is a private network connecting servers and storage units (Figure 12.3).
• The power of a SAN lies in its flexibility.
• Multiple hosts and multiple storage-arrays can attach to the same SAN.
• Storage can be dynamically allocated to hosts.
• A SAN switch allows or prohibits access between the hosts and the storage.
• SANs make it possible for clusters of servers to share the same storage and for storage arrays to
include multiple direct host connections.
• SANs typically have more ports than storage-arrays.
• Fibre Channel (FC) is the most common SAN interconnect.
• Another SAN interconnect is InfiniBand — a special-purpose bus architecture that provides hardware and
software support for high-speed interconnection networks for servers and storage units.
1. FCFS Scheduling:
• Starting with cylinder 53, the disk head will first move from 53 to 98, then to 183, 37, 122, 14, 124, 65,
and finally to 67, as shown in Figure 5.1.
Head movement from 53 to 98 = 45
Head movement from 98 to 183 = 85
Head movement from 183 to 37 = 146
Head movement from 37 to 122 = 85
Head movement from 122 to 14 = 108
Head movement from 14 to 124 = 110
Head movement from 124 to 65 = 59
Head movement from 65 to 67 = 2
Total head movement = 640
• Advantage: This algorithm is simple & fair.
• Disadvantage: Generally, this algorithm does not provide the fastest service.
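The arithmetic above can be checked with a short sketch in Python (an illustration only; the start cylinder and request queue are the ones from the example):

```python
def fcfs_total(start, requests):
    """Total head movement when requests are serviced in arrival order."""
    total, pos = 0, start
    for cyl in requests:
        total += abs(pos - cyl)  # distance of this individual move
        pos = cyl
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(fcfs_total(53, queue))  # 640, matching the total above
```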
2. SSTF Scheduling:
• For example:
• The closest request to the initial head position 53 is at cylinder 65. Once we are at cylinder 65, the
next closest request is at cylinder 67.
• From there, the request at cylinder 37 is closer than 98, so 37 is served next. Continuing, we service
the request at cylinder 14, then 98, 122, 124, and finally 183. It is shown in Figure 5.2.
Head movement from 53 to 65 = 12
Head movement from 65 to 67 = 2
Head movement from 67 to 37 = 30
Head movement from 37 to 14 = 23
Head movement from 14 to 98 = 84
Head movement from 98 to 122 = 24
Head movement from 122 to 124 = 2
Head movement from 124 to 183 = 59
Total head movement = 236
• Advantage: SSTF is a substantial improvement over FCFS, although it is not optimal.
• Disadvantage: Essentially, SSTF is a form of SJF scheduling, and it may cause starvation of some requests.
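The greedy selection described above can be sketched as follows (a minimal illustration using the example's queue):

```python
def sstf_total(start, requests):
    """Shortest-Seek-Time-First: always service the pending request
    closest to the current head position."""
    pending, total, pos = list(requests), 0, start
    while pending:
        nearest = min(pending, key=lambda cyl: abs(pos - cyl))
        total += abs(pos - nearest)
        pos = nearest
        pending.remove(nearest)
    return total

print(sstf_total(53, [98, 183, 37, 122, 14, 124, 65, 67]))  # 236
```

The service order it produces is 65, 67, 37, 14, 98, 122, 124, 183, the same as in the worked example.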
3. SCAN Scheduling:
• The SCAN algorithm is sometimes called the elevator algorithm, since the disk-arm behaves just like
an elevator in a building.
• Here is how it works:
1. The disk-arm starts at one end of the disk.
2. Then, the disk-arm moves towards the other end, servicing requests as it reaches each
cylinder.
3. At the other end, the direction of the head movement is reversed and servicing continues.
Kavitha D. N., Asst. Prof., CS&E, VVCE
OS Module 5 Notes
• The head continuously scans back and forth across the disk.
• For example:
• Before applying the SCAN algorithm, we need to know the current direction of head movement.
• Assume the disk arm is moving toward 0; the head will service 37 and then 14.
• At cylinder 0, the arm will reverse and move toward the other end of the disk, servicing the
requests at 65, 67, 98, 122, 124, and 183. It is shown in Figure 5.3.
Head movement from 53 to 37 = 16
Head movement from 37 to 14 = 23
Head movement from 14 to 0 = 14
Head movement from 0 to 65 = 65
Head movement from 65 to 67 = 2
Head movement from 67 to 98 = 31
Head movement from 98 to 122 = 24
Head movement from 122 to 124 = 2
Head movement from 124 to 183 = 59
Total head movement = 236
• Disadvantage: If a request arrives just in front of the head, it will be serviced almost immediately.
On the other hand, if a request arrives just behind the head, it will have to wait until
the arm reaches the other end and reverses direction.
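The sweep above can be sketched in the same style as the earlier examples (a minimal illustration for the case where the head starts by moving toward cylinder 0):

```python
def scan_total_toward_zero(start, requests):
    """SCAN (elevator): sweep down to cylinder 0, servicing requests on
    the way, then reverse and sweep up through the remaining requests."""
    lower = sorted(cyl for cyl in requests if cyl <= start)
    upper = sorted(cyl for cyl in requests if cyl > start)
    total, pos = 0, start
    for cyl in lower[::-1] + [0] + upper:  # 37, 14, 0, 65, ..., 183
        total += abs(pos - cyl)
        pos = cyl
    return total

print(scan_total_toward_zero(53, [98, 183, 37, 122, 14, 124, 65, 67]))  # 236
```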
4. C-SCAN Scheduling:
• Circular SCAN (C-SCAN) scheduling is a variant of SCAN designed to provide a more uniform
wait time.
• Like SCAN, C-SCAN moves the head from one end of the disk to the other, servicing requests along
the way.
• Before applying the C-SCAN algorithm, we need to know the current direction of head movement.
• Assume the disk arm is moving toward 199; the head will service 65, 67, 98, 122, 124, and 183.
• Then it will move to 199, where the arm will reverse and move toward 0.
• While moving toward 0, it services no requests. After reaching 0, it reverses again and then services
14 and 37. It is shown in Figure 5.7.
Head movement from 53 to 65 = 12
Head movement from 65 to 67 = 2
Head movement from 67 to 98 = 31
Head movement from 98 to 122 = 24
Head movement from 122 to 124 = 2
Head movement from 124 to 183 = 59
Head movement from 183 to 199 = 16
Head movement from 199 to 0 = 199
Head movement from 0 to 14 = 14
Head movement from 14 to 37 = 23
Total head movement = 382
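The C-SCAN computation can be sketched as below (an illustration only; per the example, the 199 → 0 return sweep services nothing but its movement is still counted):

```python
def cscan_total(start, requests, max_cyl=199):
    """C-SCAN: sweep up to max_cyl servicing requests, return to 0
    without servicing, then sweep up through the remaining low requests."""
    upper = sorted(cyl for cyl in requests if cyl >= start)
    lower = sorted(cyl for cyl in requests if cyl < start)
    total, pos = 0, start
    for cyl in upper + [max_cyl]:
        total += abs(pos - cyl)
        pos = cyl
    total += max_cyl  # the 199 -> 0 return sweep
    pos = 0
    for cyl in lower:
        total += abs(pos - cyl)
        pos = cyl
    return total

print(cscan_total(53, [98, 183, 37, 122, 14, 124, 65, 67]))  # 382
```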
5. LOOK and C-LOOK Scheduling:
• Both SCAN and C-SCAN move the disk-arm across the full width of the disk.
• In practice, neither algorithm is implemented in this way.
• Instead, the arm goes only as far as the final request in each direction.
• Then, the arm reverses (LOOK), or jumps to the request nearest the other end (C-LOOK), without going
all the way to the end of the disk.
• For example (C-LOOK):
• Assume the disk arm is moving toward 199; the head will service 65, 67, 98, 122, 124, and 183.
• Then the arm will move directly to the lowest pending request, 14, and service it, then continue upward
and service 37. It is shown in Figure 5.5.
Head movement from 53 to 65 = 12
Head movement from 65 to 67 = 2
Head movement from 67 to 98 = 31
Head movement from 98 to 122 = 24
Head movement from 122 to 124 = 2
Head movement from 124 to 183 = 59
Head movement from 183 to 14 = 169
Head movement from 14 to 37 = 23
Total head movement = 322
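The C-LOOK computation above can be sketched the same way (a minimal illustration; after the highest pending request, the arm jumps straight to the lowest pending request and continues upward):

```python
def clook_total(start, requests):
    """C-LOOK: sweep up through pending requests only, then jump to the
    lowest pending request and continue upward; no travel to the disk ends."""
    upper = sorted(cyl for cyl in requests if cyl >= start)
    lower = sorted(cyl for cyl in requests if cyl < start)
    total, pos = 0, start
    for cyl in upper + lower:  # ..., 183, then 14, then 37
        total += abs(pos - cyl)
        pos = cyl
    return total

print(clook_total(53, [98, 183, 37, 122, 14, 124, 65, 67]))  # 322
```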
Example Problems
1) Suppose that the disk-drive has 5000 cylinders numbered from 0 to 4999. The drive is currently serving a
request at cylinder 143, and the previous request was at cylinder 125. The queue of pending requests in FIFO
order is 86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130. Starting from the current head position,
what is the total distance (in cylinders) that the disk-arm moves to satisfy all the pending requests, for each of
the following disk-scheduling algorithms?
(i) FCFS
(ii) SSTF
(iii) SCAN
(iv) LOOK
(v) C-SCAN
Solution: (Problem solved in class.)
3) Given the following queue 95, 180, 34, 119, 11, 123, 62, 64 with the head initially at track 50 and ending
at track 199, calculate the number of head moves using
i) FCFS
ii) SSTF
iii) Elevator (SCAN) and
iv) C-LOOK.
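As a quick check on part (i) of this problem, servicing the queue in arrival order from track 50 gives the total below (only the FCFS part is computed here):

```python
def fcfs_total(start, requests):
    """Total head movement in arrival (FIFO) order."""
    total, pos = 0, start
    for track in requests:
        total += abs(pos - track)
        pos = track
    return total

print(fcfs_total(50, [95, 180, 34, 119, 11, 123, 62, 64]))  # 644
```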
Chapter 7: Deadlocks
• When a process requests a resource that is not available at that time, the process enters a
waiting state.
• Sometimes a waiting process is never again able to change state, because the resources it has
requested are held by other waiting processes.
• This situation, where processes wait for each other to release the resources they hold, is called a deadlock.
• The system consists of a finite number of resources distributed among a number of
competing processes.
• A process must request a resource before using it, and it must release the resource after
using it.
• The number of resources requested may not exceed the total number of resources
available.
• A process may utilize a resource only in the following sequence:
1. Request: If the request is not granted immediately, then the requesting process must wait
until it can acquire the resource.
2. Use: The process can operate on the resource.
3. Release: The process releases the resource.
For a deadlock to arise, the following four conditions must hold simultaneously:
1. Mutual Exclusion: Only one process can hold the resource at a time. If any other process
requests the resource, the requesting process must wait until the resource has been
released.
2. Hold and Wait: A process must be holding at least one resource and waiting to acquire
additional resources that are currently held by other processes.
3. No Preemption: Resources cannot be preempted, i.e., a resource can be released only
voluntarily by the process holding it, after that process has completed its task.
4. Circular Wait: A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is waiting
for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn−1 is waiting for a
resource held by Pn, and Pn is waiting for a resource held by P0.
• Deadlocks are described by using a directed graph called system resource allocation
graph.
• The graph consists of set of vertices (v) and set of edges (e).
• The set of vertices (V) is partitioned into two different types of nodes:
1. P = {P1, P2, ..., Pn}, the set of all active processes, each represented by a circle.
2. R = {R1, R2, ..., Rm}, the set of all resource types in the system, each represented by a
rectangle, with dots representing the instances of that resource type.
1. Request edge – a directed edge from process Pi to resource type Rj, denoted Pi → Rj.
2. Assignment edge – a directed edge from resource type Rj to process Pi, denoted Rj → Pi.
• The resource-allocation graph shown in Figure 7.2 depicts a particular set of resource
instances and process states (requests and allocations).
• Given the definition of a resource-allocation graph, it can be shown that if the graph contains
no cycles, then no process in the system is deadlocked.
• If each resource type has exactly one instance, then a cycle implies that a deadlock has
occurred.
• If the cycle involves only a set of resource types, each of which has only a single instance,
then a deadlock has occurred.
• If each resource type has several instances, then a cycle does not necessarily imply that a
deadlock has occurred. In this case, a cycle in the graph is necessary but not sufficient for
deadlock to occur.
• For example, suppose the graph contains the two cycles
P1 → R1 → P2 → R3 → P3 → R2 → P1
P2 → R3 → P3 → R2 → P2
Then processes P1, P2, and P3 are deadlocked.
• Now consider a graph with the single cycle P1 → R1 → P3 → R2 → P1.
• Here there is no deadlock: process P2 may release its instance of resource type R1, and
process P4 may release its instance of resource type R2, breaking the cycle.
• If a resource-allocation graph does not have a cycle, then the system is not in a deadlocked
state.
• If there is a cycle, then the system may or may not be in a deadlocked state.
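Deadlock detection on a resource-allocation graph therefore reduces to cycle detection in a directed graph. A minimal depth-first-search sketch (the adjacency-list encoding, with request edges P → R and assignment edges R → P, is an assumption made for illustration):

```python
def has_cycle(graph):
    """Detect a cycle in a directed graph given as {node: [successors]}."""
    WHITE, GRAY, BLACK = 0, 1, 2      # unvisited / on current path / done
    color = {n: WHITE for n in graph}

    def visit(n):
        color[n] = GRAY
        for m in graph.get(n, []):
            if color.get(m, WHITE) == GRAY:   # back edge: cycle found
                return True
            if color.get(m, WHITE) == WHITE and visit(m):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in graph)

# The deadlocked example: P1 -> R1 -> P2 -> R3 -> P3 -> R2 -> P1
g = {"P1": ["R1"], "R1": ["P2"], "P2": ["R3"], "R3": ["P3"],
     "P3": ["R2"], "R2": ["P1"]}
print(has_cycle(g))  # True
```

Remember that for multi-instance resource types a cycle is necessary but not sufficient, so this check alone does not prove deadlock.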
1. Use a protocol to prevent or avoid deadlocks, ensuring that the system will never enter a
deadlock state.
2. Allow the system to enter a deadlock state, detect it, and recover.
3. Ignore the problem altogether and pretend that deadlocks never occur in the system.
• The third solution is the one used by most operating systems, including UNIX and Windows.
• To ensure that deadlocks never occur, the system can use either a deadlock-prevention
or deadlock-avoidance scheme.
1. Deadlock-prevention: provides a set of methods for ensuring that at least one of the
necessary conditions cannot hold.
• For a deadlock to occur each of the four necessary conditions must hold.
• If at least one of the conditions does not hold then we can prevent the occurrence of a
deadlock.
1. Mutual Exclusion: This must hold for non-sharable resources. For example, a printer can be used
by only one process at a time.
• Sharable resources, in contrast, do not require mutually exclusive access and thus cannot be involved
in a deadlock. Read-only files are a good example of a sharable resource; a process never waits
to access one. So, in general, we cannot prevent deadlocks by denying the mutual-exclusion
condition, because some resources are intrinsically non-sharable.
2. Hold and Wait: We must guarantee that whenever a process requests a resource, it does not
hold any other resources.
• One protocol requires a process to request and be allocated all its resources before it begins execution;
another allows a process to request resources only when it holds none. Both suffer from low resource
utilization, and starvation is possible.
3. No Preemption: To ensure that this condition never holds, resources must be preemptible.
• If a process that is holding some resources requests another resource that cannot be
immediately allocated to it, then all resources currently being held are released.
• Preempted resources are added to the list of resources for which the process is waiting.
Process will be restarted only when it can regain its old resources, as well as the new ones
that it is requesting.
4. Circular Wait: Impose a total ordering of all resource types, and require that each process
requests resources in an increasing order of enumeration.
• Let R = {R1, R2, ..., Rn} be the set of resource types. We assign each resource type a
unique integer value. This allows us to compare two resources and determine whether
one precedes the other in the ordering.
• E.g., we can define a one-to-one function F: R → N, where N is the set of natural numbers:
F(tape drive)=1
F(disk drive) =5
F(printer)=12
• A process holding a disk drive cannot request the tape drive, but it can request the printer.
1. Each process can request resources only in increasing order of enumeration: a process can
initially request any number of instances of a resource type, say Ri; after that, it can request
instances of resource type Rj only if F(Rj) > F(Ri).
• If this protocol is followed, then the circular-wait condition cannot hold.
• E.g., if a process holds an instance of the disk drive, whose F value is 5, then it can request
only the printer, because F(printer) > F(disk drive).
If the process wants the tape drive, it has to release all the resources it holds.
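The ordering rule can be enforced mechanically: before granting a request, compare F values against everything the process already holds. A minimal sketch using the F values from the example (`can_request` is a hypothetical helper, not a real OS API):

```python
# F values taken from the example above.
F = {"tape drive": 1, "disk drive": 5, "printer": 12}

def can_request(held, resource):
    """Grant a request only if it respects the F-ordering; processes that
    follow this rule can never form a circular wait."""
    return all(F[resource] > F[r] for r in held)

print(can_request({"disk drive"}, "printer"))     # True
print(can_request({"disk drive"}, "tape drive"))  # False: must release first
```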
• Deadlock prevention algorithm may lead to low device utilization and reduces system
throughput.
• Avoiding deadlocks requires additional information about the sequence in which the
resources are to be requested.
• Simplest and most useful model requires that each process declare the maximum number
of resources of each type that it may need.
• A safe state is a state in which there exists at least one order in which all the processes can
run to completion without resulting in a deadlock.
• A sequence of processes <P1, P2, ..., Pn> is a safe sequence for the current allocation state
if, for each Pi, the resources that Pi can still request can be satisfied by the currently available
resources plus the resources held by all Pj with j < i. In this situation,
if Pi's resource needs are not immediately available, then Pi can wait until all Pj
have finished.
• A safe state is not a deadlocked state; conversely, a deadlocked state is an unsafe state. Not all
unsafe states are deadlocks, however.
• Deadlock Avoidance ensures that a system will never enter an unsafe state.
• In addition to the request edge and the assignment edge a new edge called claim edge is
used.
• A claim edge Pi → Rj indicates that process Pi may request resource Rj at some time in the
future; it is represented by a dashed line.
• When process Pi actually requests resource Rj, the claim edge Pi → Rj is converted to a request
edge, and the request is granted only if converting that request edge to an assignment edge
Rj → Pi does not result in a cycle.
• If there are no cycles, then the allocation of the resource leaves the system in a safe
state.
• If a cycle is found, then the allocation will put the system in an unsafe state.
• This algorithm is applicable to the system with multiple instances of each resource types.
• The Banker's Algorithm gets its name because the algorithm could be used in a banking
system to ensure that the bank never allocated its available cash in such a way that it could
no longer satisfy the needs of all its customers.
• When a process starts, it must state in advance the maximum allocation of resources it may
request, up to the amount available on the system.
• When a request is made, the scheduler determines whether granting the request would leave
the system in a safe state.
• If not, then the process must wait until the request can be granted safely.
1. Available: A vector of length m indicates the number of available resources of each type.
If Available[j] = k, there are k instances of resource type Rj available.
2. Max: An n × m matrix defines the maximum demand of each process. If Max[i][j] = k,
process Pi may request at most k instances of resource type Rj.
3. Allocation: An n × m matrix defines the number of resources of each type currently
allocated to each process.
4. Need: An n × m matrix indicates the remaining resource need of each process:
Need[i][j] = Max[i][j] – Allocation[i][j]
• These data structures vary over time in both size and value.
Safety algorithm:
1. Let Work and Finish be vectors of length m and n, respectively. Initialize Work = Available
and Finish[i] = false for all i.
2. Find an index i such that Finish[i] == false and Needi ≤ Work. If no such i exists, go to
step 4.
3. Work = Work + Allocationi; Finish[i] = true. Go to step 2.
4. If Finish[i] == true for all i, then the system is in a safe state.
• This algorithm may require an order of m*n² operations to determine whether a state is
safe.
• When a request for resources is made by process Pi, the following actions are taken:
1. If Requesti ≤ Needi, go to step 2. Otherwise, raise an error condition, since the process has
exceeded its maximum claim.
2. If Requesti ≤ Available, go to step 3. Otherwise, Pi must wait, since the resources are not
available.
3. Have the system pretend to allocate the requested resources to Pi by modifying the state:
Available = Available – Requesti
Allocationi = Allocationi + Requesti
Needi = Needi – Requesti
• If the resulting resource-allocation state is safe, the transaction is completed, and the
resources are allocated to Pi.
• If it is unsafe, Pi must wait for Requesti, and the old resource-allocation state is restored.
Process Need
A B C
P0 7 4 3
P1 1 2 2
P2 6 0 0
P3 0 1 1
P4 4 3 1
• The system is in a safe state, since the sequence <P1, P3, P4, P0, P2> satisfies the safety
criteria. (Problem solved in class.)
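The safety check behind this claim can be sketched as below. The notes list only the Need matrix; the Allocation and Available values used here are taken from the standard textbook example this Need matrix corresponds to, so treat them as assumptions:

```python
def is_safe(available, allocation, need):
    """Banker's safety algorithm: return (is_safe, one safe process order)."""
    work, n = list(available), len(allocation)
    finish, order = [False] * n, []
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finish[i] and all(x <= w for x, w in zip(need[i], work)):
                # Pretend Pi runs to completion and releases its resources.
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                order.append(i)
                progress = True
    return all(finish), order

available  = [3, 3, 2]                                                # assumed
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]  # assumed
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]  # from the table
safe, order = is_safe(available, allocation, need)
print(safe, order)  # True [1, 3, 4, 0, 2]
```

This run finds the safe sequence <P1, P3, P4, P0, P2>; note that a safe state may admit more than one safe sequence.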
• The following algorithm employs several time-varying data structures that are similar to
those used in Banker’s algorithm.
• Available: A vector of length m indicates the number of available resources of each type.
Detection algorithm:
1. Let Work and Finish be vectors of length m and n, respectively. Initialize Work = Available.
For i = 0, 1, ..., n–1, if Allocationi ≠ 0, then Finish[i] = false;
otherwise, Finish[i] = true.
2. Find an index i such that Finish[i] == false and Requesti ≤ Work. If no such i exists, go to
step 4.
3. Work = Work + Allocationi; Finish[i] = true. Go to step 2.
4. If Finish[i] == false for some i, then the system is in a deadlocked state; moreover,
process Pi is deadlocked.
• Example: we can reclaim the resources held by process P0, but there are insufficient resources to
fulfill the requests of the other processes. A deadlock exists, consisting of processes P1, P2, P3, and P4.
7.6.3 Detection-Algorithm Usage:
• When should we invoke the detection algorithm? The answer depends on two factors:
• If deadlocks occur frequently, then the detection algorithm should be invoked frequently.
• Resources allocated to deadlocked processes will be idle until the deadlock can be broken.
• In addition, the number of processes involved in the deadlock cycle may grow.
• If the deadlock-detection algorithm is invoked for every resource request, this will incur a
considerable overhead in computation.
• A less expensive alternative is simply to invoke the algorithm at less frequent intervals.
(E.g. Once per hour)
Recovery from deadlock can be done either by process termination or by resource preemption.
1. Process Termination: To eliminate deadlocks by aborting processes, we use one of two methods:
1. Abort all deadlocked processes. This method breaks the deadlock cycle by terminating
all the processes involved in the deadlock, but at great expense: the deadlocked processes
may have computed for a long time, and the results of these partial computations must be
discarded and probably recomputed later.
2. Abort one process at a time until the deadlock cycle is eliminated. This method incurs
considerable overhead, since after each process is aborted, the deadlock-detection algorithm
must be invoked to determine whether any processes are still deadlocked.
• In the latter case, many factors can go into deciding which process to
terminate next:
1. What the priority of the process is.
2. How long the process has computed, and how much longer it needs before completing its task.
2. Resource Preemption: If preemption is required to deal with deadlocks, then three issues need to be addressed:
1. Selecting a victim: Which resources and which processes are to be preempted? The order of
preemption should be chosen to minimize cost.
2. Rollback: Ideally, one would like to roll back a preempted process to some safe state prior to
the point at which the resource was originally allocated to it.
• Unfortunately, it can be difficult or impossible to determine what such a safe state is, and
so the only safe rollback is to roll the process back all the way to the beginning (i.e., abort the
process and make it start over).
3. Starvation: There is a chance that the same process is picked as a victim every time a
deadlock occurs, so that it never completes its task. This is starvation. To avoid it, a count
can be kept of the number of rollbacks of each process, and a process may be chosen as a
victim only a finite number of times.