Computer systems are full of shared resources, and it is necessary in several situations to control use of these
resources so that only one process can use one of them at a time (i.e., mutual exclusion). For example, two
processes should not send output to the same printer at the same time. Usually, a process needs to have
exclusive access to more than one resource, and some of the resulting request patterns may lead to deadlock. For example, process A has exclusive access to resource X and is blocked requesting resource Y, which is currently held by process B, which in turn is blocked requesting resource X.
A set of processes is said to be deadlocked if each process in the set is waiting for an event that only another
process in the set can cause (in the example above, the event would be releasing a resource that a process
needs). Deadlock can occur under several other circumstances, including the need for exclusive use of shared
memory, the need for exclusive use of shared devices in a networked environment, the need to access database
records that are locked by processes that are reading/updating the records, etc.
If any of the resources in the paragraphs above were preemptable, meaning that the resource can be taken away from the process without causing an error in the computation, then the deadlock could be avoided. If all of the resources were nonpreemptable, meaning that taking them away would lead to an error in the computation, then deadlock would be possible.
Like critical sections, deadlock is a global condition, not a local one. That is, the individual threads involved in
a deadlock have no error. The problem arises from the collective action of a group of threads. Four conditions
must hold for there to be a deadlock:
1. Mutual exclusion condition: Each resource is either currently assigned to exactly one process or is
available.
2. Hold and wait condition: Processes currently holding resources granted earlier are allowed to request
new resources.
3. No pre-emption condition: Resources previously granted to a process cannot be forcibly taken away
from the process. They must be explicitly released by the process holding them.
4. Circular wait condition: There must be a circular chain of two or more processes, each of which is
waiting for a resource held by the next member of the chain.
The example in Figure 43 illustrates how a resource graph can be used. Three processes (A, B, and C) make
requests for three resources (R, S, and T) as shown in Figure 43 (a), (b), and (c). Assume round robin
scheduling is used such that the processor receives the requests in the order shown in Figure 43 (d). Figure 43
(e), (f), (g), (h), (i), and (j) show the six resulting process-resource graphs. The cycle in Figure 43 (j) is an
indication of a deadlock.
If, however, the OS knew that granting a particular request might lead to deadlock, it could suspend the
process until the condition that would lead to deadlock is cleared. For example, if the OS suspended process B
and used the schedule in Figure 43 (k), then the corresponding process-resource graphs are shown in Figure 43
(l), (m), (n), (o), (p), and (q), and this sequence does not lead to deadlock.
Note again that the treatment of process-resource graphs here has assumed that there is only one resource of a
particular type present. The approach can, however, be generalized to handle multiple resources of a particular
type.
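For single-unit resources, checking a process-resource graph for deadlock amounts to cycle detection in a directed graph. The sketch below uses a depth-first search; the graph encodes the A/B/X/Y example from the introduction (not Figure 43), with a resource-to-process edge meaning "assigned to" and a process-to-resource edge meaning "requested by":

```python
graph = {
    "X": ["A"],   # resource X is assigned to process A
    "A": ["Y"],   # A requests resource Y
    "Y": ["B"],   # Y is assigned to B
    "B": ["X"],   # B requests X, closing the cycle
}

def has_cycle(graph):
    """Depth-first search for a back edge (a successor already on the current path)."""
    visited, on_path = set(), set()

    def dfs(node):
        visited.add(node)
        on_path.add(node)
        for nxt in graph.get(node, []):
            if nxt in on_path or (nxt not in visited and dfs(nxt)):
                return True
        on_path.discard(node)
        return False

    return any(dfs(n) for n in graph if n not in visited)

print(has_cycle(graph))  # True: the A-Y-B-X cycle means deadlock
```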
Figure 1. Simple state transition diagram for one process which may request up to two units of a single resource
type
The model represents a process which may request up to two units of a single resource type, one at a time. State s0 is
the initial state; the process holds no resource, and the only possible state transition is to state s1, through a
request (r) for a resource. At state s1, the process still holds no resource but is waiting for one. When the
resource is allocated, the process transitions to state s2. From state s2, the process may release, i.e., deallocate
(d) the resource and return to state s0, or request (r) the second resource to transition to state s3 (waiting for
requested resource), and to s4 when the resource gets allocated. From s4, there is only one possible transition,
and that is to s2 if one of the resources held by the process is released.
Of course, there cannot be any deadlock here because only one process is involved.
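The one-process diagram can be written down directly as a transition table. In the sketch below, the event labels r and d follow the text; the label a for "resource allocated" is an assumption, since the text does not name that transition explicitly:

```python
# transitions: state -> {event: next state}
# r = request, a = allocation by the system (assumed label), d = deallocation
transitions = {
    "s0": {"r": "s1"},               # holding nothing, may request
    "s1": {"a": "s2"},               # waiting for first unit
    "s2": {"d": "s0", "r": "s3"},    # holding one unit
    "s3": {"a": "s4"},               # waiting for second unit
    "s4": {"d": "s2"},               # holding both units
}

def run(events, state="s0"):
    """Replay a sequence of events and return the final state."""
    for e in events:
        state = transitions[state][e]
    return state

print(run(["r", "a", "r", "a", "d", "d"]))  # acquire both units, release both: 's0'
```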
The model can be extended to consider two processes as shown in Figure 1.
In the figure, the state transition diagram of Figure 1 has been replicated to simultaneously represent the states
for the two processes, i.e., state sij represents a state in which the first process is in state si, and the second
process in state sj. Some states are not feasible, for example, s44 would represent a state in which both processes
have acquired both of the units of the resource, which is not possible.
State s33 is a deadlock state since both processes are holding one unit of the resource and waiting for the other.
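The deadlock at s33 and the infeasibility of s44 can be checked mechanically. The sketch below reads the per-state resource holdings off the one-process diagram (two units of the resource exist in total), enumerates the feasible joint states, and reports those from which no transition is possible while some process is waiting:

```python
held = {"s0": 0, "s1": 0, "s2": 1, "s3": 1, "s4": 2}   # units held in each state
waiting = {"s1", "s3"}                                  # states that wait for a unit
trans = {"s0": {"r": "s1"}, "s1": {"a": "s2"}, "s2": {"d": "s0", "r": "s3"},
         "s3": {"a": "s4"}, "s4": {"d": "s2"}}
TOTAL = 2                                               # total units in existence

def moves(joint):
    """All joint states reachable by one process taking one step."""
    out = []
    free = TOTAL - held[joint[0]] - held[joint[1]]
    for i in (0, 1):
        for ev, nxt in trans[joint[i]].items():
            if ev == "a" and free < 1:
                continue  # an allocation needs a free unit
            out.append(tuple(nxt if k == i else joint[k] for k in (0, 1)))
    return out

feasible = [(a, b) for a in held for b in held if held[a] + held[b] <= TOTAL]
deadlocked = [j for j in feasible if not moves(j) and any(s in waiting for s in j)]
print(deadlocked)  # [('s3', 's3')]: the only deadlock state
```

The enumeration confirms both claims in the text: s44 never appears among the feasible states, and s33 is the unique state with no way out.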
We next consider a case with multiple resources of each type, and a matrix-based solution to deadlock
detection. Let there be n processes, P1 through Pn, m resource classes with E1 resources of class 1, E2 resources
of class 2,…Em resources of class m, i.e., E is the existing resource vector, and gives the total number of
instances of each resource in existence.
Let A be the vector of currently available (i.e., unassigned) resources. Hence Ai is the number of resources of
class i that are currently unassigned.
Let C represent the current allocation matrix. Cell Cij of this matrix represents the number of instances of
resource j that are currently held by process Pi.
Let R represent the request matrix. Cell Rij of this matrix represents the number of instances of resource j that
process Pi wants.
These four data structures are illustrated in Figure 45. The following invariant holds for each resource class j:
Σi Cij + Aj = Ej, i.e., the sum of the allocated and available resources equals the total number of resources in existence.
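The detection algorithm built on these structures repeatedly looks for a process whose entire request can be satisfied from A, assumes it runs to completion and returns its resources, and marks it; any process left unmarked at the end is deadlocked. A sketch with illustrative numbers (assumed, not taken from Figure 45):

```python
E = [4, 2, 3, 1]                 # existing resources, one entry per class
C = [[0, 0, 1, 0],               # C[i][j]: units of class j held by process Pi
     [2, 0, 0, 1],
     [0, 1, 2, 0]]
R = [[2, 0, 0, 1],               # R[i][j]: units of class j that Pi still wants
     [1, 0, 1, 0],
     [2, 1, 0, 0]]
# available = existing minus everything currently allocated (the invariant above)
A = [E[j] - sum(C[i][j] for i in range(len(C))) for j in range(len(E))]

def detect_deadlock(C, R, A):
    A = A[:]                     # work on a copy of the available vector
    done = [False] * len(C)
    progress = True
    while progress:
        progress = False
        for i in range(len(C)):
            # a process whose whole request fits in A can run to completion
            if not done[i] and all(R[i][j] <= A[j] for j in range(len(A))):
                A = [A[j] + C[i][j] for j in range(len(A))]  # it frees its resources
                done[i] = True
                progress = True
    return [i for i, d in enumerate(done) if not d]          # still blocked: deadlocked

print(detect_deadlock(C, R, A))  # []: every process can eventually finish here
```

With these particular numbers no deadlock exists; raising any blocked process's request (say, giving P3 an extra demand on the last class) makes the returned list non-empty.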
Assume now that, with the same initial state as above, A requests and gets another resource (state (b) of Figure
48). The scheduler could run B until B has acquired all its resource demands (state (c)), after which B releases its resources (state (d)).
Now, we cannot guarantee that A or C can successfully run to completion. In retrospect, the allocation of a unit
of the resource to A in state (b) was an error. State (b) is said to be an unsafe state because there is no sequence of allocations
that guarantees that all processes complete.
It should be noted that an unsafe state is not a deadlocked state. With luck, a process may release some of its
resources in time, allowing another process to run to completion before the former process makes its
maximum resource demand.
Safe and unsafe states can also be demonstrated in the context of the state-transition model, but we shall not
cover that here. Interested readers can read about this on pages 389-391 of Nutt (2003).
Figure 49: Banker's algorithm for a single resource: (a) safe, (b) safe, (c) unsafe
Four customers A, B, C, D (processes) are granted lines of credit, i.e., maximum resource demands (Max
column). In this example, only 10 units of the resource (10 million CFA) are available to the banker (the
operating system). The customers go about their business, sometimes making loan requests. Figure 49 (b)
illustrates the situation at a certain moment. That is a safe state: even if all customers make their
maximum demands at once, the banker can delay all of them except C, whose remaining request can be met. When C releases its resources (i.e.,
pays back its debt), four units of the resource will be available, enough to service either B or D, and so on.
However, if B requested one more unit in (b) and the request was granted, leading to state (c), then if all the
customers suddenly made their maximum requests, none of them could be serviced, i.e., the system would be deadlocked.
The banker’s algorithm considers each request as it is made and checks whether granting the request leads to a safe
state. If it does, the request is granted; if not, it is postponed until later. A state is judged to be safe if there are
enough resources to satisfy the remaining need of some customer. If so, that customer's loan is assumed repaid, and the customer now closest to
the limit is checked, and so on. If all loans can eventually be repaid, the state is safe, and the initial request can be
granted.
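The single-resource version can be sketched as follows. The loan figures are assumptions chosen to be consistent with the description above: 10 units in total, C's remaining request can be met in state (b), and four units become free once C pays back:

```python
def is_safe(loans, maxdemand, free):
    """Banker's safety check for a single resource class (a sketch)."""
    loans = dict(loans)
    need = {p: maxdemand[p] - loans[p] for p in loans}
    while loans:
        # find a customer whose remaining need fits in the free pool
        runnable = [p for p in loans if need[p] <= free]
        if not runnable:
            return False             # every customer could end up blocked: unsafe
        p = runnable[0]
        free += loans.pop(p)         # p borrows up to its limit, finishes, repays all
        need.pop(p)
    return True

maxdemand = {"A": 6, "B": 5, "C": 4, "D": 7}   # credit lines (Max column, assumed)
state_b = {"A": 1, "B": 1, "C": 2, "D": 4}     # 8 units lent, 2 free
state_c = {"A": 1, "B": 2, "C": 2, "D": 4}     # 9 units lent, 1 free
print(is_safe(state_b, maxdemand, 2), is_safe(state_c, maxdemand, 1))  # True False
```

In the safe state, the check finds C first (need 2, free 2), then B, A, and D in turn; in the unsafe state no customer's remaining need fits in the single free unit, so the loop stops immediately.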
The matrix on the left (current resource assignment) and the matrix on the right (resources still needed) correspond to the C
and R matrices in Figure 50. Vector E is the existing resources vector, vector P the possessed (i.e., assigned)
resource vector, vector A the available resources. The algorithm is as follows:
1. Look for a row whose unmet resource needs are all smaller than or equal to A. If no such row exists,
the system will eventually deadlock since no process can run to completion.
2. Assume the process of the row chosen requests all the resources it needs and finishes. Mark that process
as terminated and add all its resources to the A vector.
3. Repeat steps 1 and 2 until either all processes are marked terminated, in which case the initial state was
safe, or until a deadlock occurs, in which case the initial state was not safe.
Going back to the example, which is in a safe state, consider what happens if B requests a scanner. This request
can be granted because the resulting state is still safe (D can finish, then A or E, and then the rest). However,
if, after granting B's request above, E requests the last scanner, granting that request would reduce the
available resources vector to (1 0 0 0), which leads to deadlock, so E's request must be delayed.
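Steps 1-3 above can be sketched for multiple resource classes as follows. The matrices are illustrative assumptions in the spirit of the scanner example (five processes A-E, four resource classes; the first state is the one after B's scanner request was granted, and the second drops the available vector to (1 0 0 0)):

```python
def is_safe_multi(C, need, A):
    """Safety check from steps 1-3: can every process finish in some order?"""
    A, done = A[:], [False] * len(C)
    while not all(done):
        rows = [i for i in range(len(C))
                if not done[i] and all(need[i][j] <= A[j] for j in range(len(A)))]
        if not rows:
            return False                             # step 1 fails: unsafe
        i = rows[0]                                  # step 2: let that process finish
        A = [A[j] + C[i][j] for j in range(len(A))]  # it releases all its resources
        done[i] = True                               # step 3: repeat until all done
    return True

# Illustrative matrices (assumed): processes A-E, four resource classes
C    = [[3,0,1,1], [0,1,1,0], [1,1,1,0], [1,1,0,1], [0,0,0,0]]  # current assignment
need = [[1,1,0,0], [0,1,0,2], [3,1,0,0], [0,0,1,0], [2,1,1,0]]  # still needed
A    = [1, 0, 1, 0]
print(is_safe_multi(C, need, A))               # True: D finishes, then A or E, then the rest

# now suppose E were also granted the last unit of the third class (the scanner)
C2 = [row[:] for row in C]; need2 = [row[:] for row in need]
C2[4][2] += 1; need2[4][2] -= 1
print(is_safe_multi(C2, need2, [1, 0, 0, 0]))  # False: no process's needs fit, delay E
```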
The problem with the deadlock avoidance algorithms
Processes rarely know in advance what their maximum resource needs are, so although in theory the algorithms
are excellent, in practice they are useless.