At time t0, the system is in a safe state, and the sequence <P1, P0, P2> satisfies
the safety condition.
It is possible to go from a safe state to an unsafe state: at time t1, process P2
requests and is allocated one more tape drive, and the system is no longer in a
safe state.
If no cycle exists, then the allocation of the resource will leave the system
in a safe state. If a cycle is found, then the allocation will put the system in
an unsafe state. Therefore, process Pj will have to wait for its requests to
be satisfied
Example
BANKER'S ALGORITHM
The resource-allocation graph algorithm is not applicable to a resource
allocation system with multiple instances of each resource type
When a new process enters the system, it must declare the maximum
number of instances of each resource type that it may need.
This number may not exceed the total number of resources in the system.
When a user requests a set of resources, the system must determine
whether the allocation of the resources will leave the system in a safe state.
If it will, the resources are allocated; otherwise, the process must wait until
some other process releases enough resources.
Let n be the number of processes in the system and m be the number of
resource types
Data structures
Available: A vector of length m indicates the number of available
resources of each type. If Available[j] = k, there are k instances of
resource type Rj available.
Max: An n x m matrix defines the maximum demand of each process. If
Max[i,j] = k, then Pi may request at most k instances of resource type Rj.
Allocation: An n x m matrix defines the number of resources of each type
currently allocated to each process. If Allocation[i,j] = k, then process Pi is
currently allocated k instances of resource type Rj.
Need: An n x m matrix indicates the remaining resource need of each
process. If Need[i,j] = k, then Pi may need k more instances of resource
type Rj to complete its task.
Note that Need[i,j] = Max[i,j] - Allocation[i,j].
SAFETY ALGORITHM
The algorithm for finding out whether or not a system is in a safe state can
be described as follows:
Steps of Algorithm:
1. Let Work and Finish be vectors of length m and n, respectively.
Initialize Work := Available and Finish[i] := false for i = 1 to n.
2. Find an index i such that Finish[i] = false and Need i ≤ Work. If no
such i exists, go to step 4.
3. Set Work := Work + Allocation i and Finish[i] := true, and go to step 2.
4. If Finish[i] = true for all i, then the system is in a safe state.
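The safety algorithm can be sketched in Python as follows. The state in the example is a commonly used illustrative one (five processes, three resource types), not data taken from this text:

```python
def is_safe(available, max_need, allocation):
    """Safety algorithm: return a safe sequence of process indices,
    or None if no such sequence exists (the state is unsafe)."""
    n, m = len(allocation), len(available)
    need = [[max_need[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work = list(available)       # Step 1: Work := Available
    finish = [False] * n         #         Finish[i] := false
    sequence = []
    while len(sequence) < n:
        for i in range(n):       # Step 2: find i with Finish[i] = false
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]   # Step 3: reclaim P_i's resources
                finish[i] = True
                sequence.append(i)
                break
        else:
            return None          # Step 4: some Finish[i] is false, so unsafe
    return sequence              # Finish[i] = true for all i, so safe

# Illustrative state: 5 processes, resource types A, B, C
print(is_safe([3, 3, 2],
              [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]],
              [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]))
```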
RESOURCE-REQUEST ALGORITHM
Let Request i be the request vector for process Pi. If Request i[j] = k, then
process Pi wants k instances of resource type Rj. When a request for
resources is made by process Pi, the following actions are taken:
1. If Request i ≤ Need i, go to step 2. Otherwise, raise an error condition,
since the process has exceeded its maximum claim.
2. If Request i ≤ Available, go to step 3. Otherwise, Pi must wait, since the
resources are not available.
3. Have the system pretend to have allocated the requested resources to
process Pi by modifying the state as follows:
Available := Available - Request i
Allocation i := Allocation i + Request i
Need i := Need i - Request i
(recall that Need = Max - Allocation)
If the resulting state is safe, the transaction is completed and Pi is allocated
its resources; otherwise, Pi must wait and the old resource-allocation state
is restored.
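The three steps of the resource-request algorithm can be sketched as follows. This is a minimal illustration with hypothetical numbers; a full banker's implementation would additionally run the safety algorithm on the pretended state and undo the changes if it is unsafe:

```python
def try_request(pid, request, available, max_need, allocation):
    """Steps 1-3 of the resource-request algorithm for process `pid`."""
    m = len(available)
    need = [max_need[pid][j] - allocation[pid][j] for j in range(m)]
    # Step 1: Request_i must not exceed Need_i
    if any(request[j] > need[j] for j in range(m)):
        raise ValueError("process has exceeded its maximum claim")
    # Step 2: if Request_i exceeds Available, P_i must wait
    if any(request[j] > available[j] for j in range(m)):
        return False
    # Step 3: pretend to allocate the requested resources
    for j in range(m):
        available[j] -= request[j]        # Available := Available - Request_i
        allocation[pid][j] += request[j]  # Allocation_i := Allocation_i + Request_i
    return True

# Hypothetical two-process, three-resource-type state
available = [3, 3, 2]
max_need = [[7, 5, 3], [3, 2, 2]]
allocation = [[0, 1, 0], [2, 0, 0]]
print(try_request(1, [1, 0, 2], available, max_need, allocation))
print(available)
```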
The sequence <P0, P2, P3, P1, P4> will result in Finish[i] = true for all i.
Suppose now that process P2 makes one additional request for an instance
of type C
Modified Request Matrix
Process Termination
To eliminate deadlocks by aborting a process, we use one of two methods.
In both methods, the system reclaims all resources allocated to the
terminated processes.
1. Abort all deadlocked processes
2. Abort one process at a time until the deadlock cycle is eliminated
How to choose the process?
1. What the priority of the process is
2. How long the process has computed, and how much longer the process
will compute before completing its designated task
3. How many and what type of resources the process has used (for
example, whether the resources are simple to preempt)
4. How many more resources the process needs in order to complete
5. How many processes will need to be terminated
6. Whether the process is interactive or batch
Resource Preemption
To eliminate deadlocks using resource preemption, we successively preempt
some resources from processes and give these resources to other
processes until the deadlock cycle is broken.
If preemption is required to deal with deadlocks, then three issues need to
be addressed:
1. Selecting a victim
2. Rollback
3. Starvation
COMBINED APPROACH TO DEADLOCK HANDLING
None of the basic approaches for handling deadlocks (prevention,
avoidance, and detection) alone is appropriate for the entire spectrum of
resource-allocation problems encountered in operating systems
One possibility is to combine the three basic approaches, allowing the use
of the optimal approach for each class of resources in the system. The
proposed method is based on the notion that resources can be partitioned
into classes that are hierarchically ordered.
A resource-ordering technique is applied to the classes. Within each class,
the most appropriate technique for handling deadlocks can be used.
Example
Consider a system that consists of the following four classes of
resources:
• Internal resources: Resources used by the system, such as a process
control block
• Central memory: Memory used by a user job
• Job resources: Assignable devices (such as a tape drive) and files
• Swappable space: Space for each user job on the backing store
One mixed deadlock solution for this system orders the classes as listed
above, and uses the following approaches for each class:
• Internal resources: Prevention through resource ordering can be used,
since run-time choices between pending requests are unnecessary.
• Central memory: Prevention through preemption can be used, since a job
can always be swapped out and the central memory preempted.
• Job resources: Avoidance can be used, since the information needed
about resource requirements can be obtained in advance.
• Swappable space: Preallocation can be used, since the maximum storage
requirements are usually known.
SUMMARY
A deadlock state occurs when two or more processes are waiting
indefinitely for an event that can be caused only by one of the waiting
processes. Principally, there are three methods for dealing with deadlocks:
• Use some protocol to ensure that the system will never enter a deadlock
state.
• Allow the system to enter a deadlock state and then recover.
• Ignore the problem altogether, and pretend that deadlocks never occur
in the system.
A deadlock situation may occur if and only if four necessary conditions
hold simultaneously in the system: mutual exclusion, hold and wait, no
preemption, and circular wait. To prevent deadlocks, we ensure that at
least one of the necessary conditions never holds.
Another method for avoiding deadlocks that is less stringent than the
prevention algorithms is to have a priori information on how each process
will be utilizing the resources. The banker's algorithm needs to know the
maximum number of each resource class that may be requested by each
process. Using this information, we can define a deadlock-avoidance
algorithm.
If a system does not employ a protocol to ensure that deadlocks will never
occur, then a detection and recovery scheme must be employed. A
deadlock detection algorithm must be invoked to determine whether a
deadlock has
occurred. If a deadlock is detected, the system must recover either by
terminating some of the deadlocked processes, or by preempting resources
from some of the deadlocked processes.
In a system that selects victims for rollback primarily on the basis of cost
factors, starvation may occur. As a result, the selected process never
completes its designated task.
Finally, researchers have argued that none of these basic approaches
alone is appropriate for the entire spectrum of resource-allocation
problems in operating systems. The basic approaches can be combined,
allowing the separate selection of an optimal one for each class of
resources in a system.
PROCESS SYNCHRONISATION
Cooperating Process
A cooperating process is one that can affect or be affected by the other
processes executing in the system. Cooperating processes may either
directly share a logical address space (that is, both code and data), or be
allowed to share data only through files.
SYNCHRONIZATION HARDWARE
Uniprocessor environment: mutual exclusion can be achieved simply by not
allowing interrupts to occur while a shared variable is being modified.
Disabling interrupts on a multiprocessor can be time-consuming, as the
message is passed to all the processors. This message passing delays entry
into each critical section, and system efficiency decreases. Also, consider
the effect on a system's clock, if the clock is kept updated by interrupts.
Many machines therefore provide special hardware instructions that allow
us either to test and modify the content of a word, or to swap the contents
of two words, atomically.
The important characteristic is that this instruction is executed atomically
—that is, as one uninterruptible unit. Thus, if two Test-and-Set
instructions are executed simultaneously (each on a different CPU), they
will be executed sequentially in some arbitrary order. If the machine
supports the Test-and-Set instruction, then we can implement mutual
exclusion by declaring a Boolean variable lock, initialized to false.
SEMAPHORES
A semaphore is a synchronization tool used for more complex problems.
A semaphore S is an integer variable that, apart from initialization, is
accessed only through two standard atomic operations: wait and signal.
The classical definitions of wait and signal are
wait(S): while S ≤ 0 do no-op;
S := S - 1;
signal(S): S := S + 1;
Modifications to the integer value of the semaphore in the wait and signal
operations must be executed indivisibly.
Conditions
No two processes can simultaneously modify the same semaphore value. In
addition, in the case of wait(S), the testing of the integer value of S (S ≤ 0)
and its possible modification (S := S - 1) must also be executed without
interruption.
Usage
We can use semaphores to deal with the n-process critical-section
problem.
The n processes share a semaphore, mutex (standing for mutual
exclusion),
initialized to 1.
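A minimal sketch of the n-process solution, using Python's threading.Semaphore as the shared mutex initialized to 1 (the thread and iteration counts are illustrative):

```python
import threading

mutex = threading.Semaphore(1)   # shared semaphore mutex, initialized to 1
balance = 0

def process(n_iters):
    global balance
    for _ in range(n_iters):
        mutex.acquire()          # wait(mutex): entry section
        balance += 1             # critical section
        mutex.release()          # signal(mutex): exit section

threads = [threading.Thread(target=process, args=(5000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(balance)
```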
Implementation
All require busy waiting.
While a process is in its critical section, any other process that tries to
enter its critical section must loop continuously in the entry code. This
continual looping is clearly a problem in a real multiprogramming system,
where a single CPU is shared among many processes. Busy waiting wastes
CPU cycles that some other process might be able to use productively.
This type of semaphore is also called a spinlock
Semaphores are integer variables that are used to solve the critical section
problem by using two atomic operations, wait and signal that are used for
process synchronization.
The definitions of wait and signal are as follows −
Wait
The wait operation decrements the value of its argument S if it is
positive. If S is zero or negative, the process loops until S becomes
positive and only then performs the decrement.
wait(S)
{
while (S<=0);
S--;
}
Signal
The signal operation increments the value of its argument S.
signal(S)
{
S++;
}
Types of Semaphores
There are two main types of semaphores i.e. counting semaphores and
binary semaphores. Details about these are given as follows −
Counting Semaphores
These are integer value semaphores and have an unrestricted value
domain. These semaphores are used to coordinate the resource access,
where the semaphore count is the number of available resources. If the
resources are added, semaphore count automatically incremented and if
the resources are removed, the count is decremented.
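A short sketch of a counting semaphore coordinating access to a pool of three identical resource instances (the counts and the brief sleep are illustrative):

```python
import threading
import time

pool = threading.Semaphore(3)    # 3 interchangeable resource instances
guard = threading.Lock()
in_use = 0
max_in_use = 0

def client():
    global in_use, max_in_use
    pool.acquire()               # wait: blocks when all 3 instances are taken
    with guard:
        in_use += 1
        max_in_use = max(max_in_use, in_use)
    time.sleep(0.01)             # hold the resource briefly
    with guard:
        in_use -= 1
    pool.release()               # signal: an instance becomes available again

threads = [threading.Thread(target=client) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
print(max_in_use)                # never exceeds the semaphore's initial count
```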
Binary Semaphores
Binary semaphores are like counting semaphores, but their value is
restricted to 0 and 1. The wait operation succeeds only when the
semaphore's value is 1 (and sets it to 0), and the signal operation sets the
value back to 1. It is sometimes easier to implement binary semaphores
than counting semaphores.
Advantages of Semaphores
Some of the advantages of semaphores are as follows
Semaphores allow only one process into the critical section. They follow
the mutual exclusion principle strictly and are much more efficient than
some other methods of synchronization.
With a semaphore implemented using a waiting queue, there is no resource
wastage due to busy waiting, as processor time is not wasted repeatedly
checking whether a condition is fulfilled before a process may enter the
critical section.
Semaphores are implemented in the machine independent code of the
microkernel. So they are machine independent.
Disadvantages of Semaphores
Some of the disadvantages of semaphores are as follows
Semaphores are complicated so the wait and signal operations must be
implemented in the correct order to prevent deadlocks.
Semaphores are impractical for large-scale use, as their use leads to a loss
of modularity. This happens because the wait and signal operations prevent
the creation of a structured layout for the system.
Semaphores may lead to a priority inversion where low priority processes
may access the critical section first and high priority processes later.
Deadlocks and Starvation
The implementation of a semaphore with a waiting queue may result in a
situation where two or more processes are waiting indefinitely for an event
that can be caused by only one of the waiting processes. The event in
question is the execution of a signal operation. When such a state is
reached, these processes are said to be deadlocked.
Example
The job of the producer is to generate data, put it into the buffer, and
start generating data again.
The job of the consumer is to consume the data from the buffer.
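The producer-consumer scheme is classically solved with a bounded buffer and three semaphores: one counting empty slots, one counting filled slots, and a mutex protecting the buffer. A minimal sketch with an illustrative buffer size:

```python
import threading
from collections import deque

N = 4                             # illustrative buffer capacity
buffer = deque()
empty = threading.Semaphore(N)    # counts empty slots
full = threading.Semaphore(0)     # counts filled slots
mutex = threading.Semaphore(1)    # protects the buffer itself
consumed = []

def producer():
    for item in range(20):
        empty.acquire()           # wait for an empty slot
        mutex.acquire()
        buffer.append(item)       # put the item into the buffer
        mutex.release()
        full.release()            # one more filled slot

def consumer():
    for _ in range(20):
        full.acquire()            # wait for a filled slot
        mutex.acquire()
        consumed.append(buffer.popleft())
        mutex.release()
        empty.release()           # one more empty slot

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed == list(range(20)))
```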
The Readers-Writers Problem
In the code for this problem, mutex and wrt are semaphores that are
initialized to 1, and rc is a variable that is initialized to 0. The mutex
semaphore ensures mutual exclusion, and wrt handles the writing
mechanism and is common to the reader and writer process code.
The variable rc denotes the number of readers accessing the object. As
soon as rc becomes 1, the wait operation is performed on wrt, which
means that a writer can no longer access the object. After a read operation
is done, rc is decremented. When rc becomes 0, the signal operation is
performed on wrt, so a writer can access the object again.
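The reader behaviour described above, together with the writer, can be sketched in Python as follows; the variable names follow the text, while the shared object and thread counts are illustrative:

```python
import threading

mutex = threading.Semaphore(1)   # protects rc, as in the text
wrt = threading.Semaphore(1)     # held by a writer or by the group of readers
rc = 0                           # number of readers accessing the object
obj = {"value": 0}               # the shared object (hypothetical)
seen = []

def reader():
    global rc
    mutex.acquire()
    rc += 1
    if rc == 1:                  # first reader locks writers out
        wrt.acquire()
    mutex.release()
    seen.append(obj["value"])    # READ THE OBJECT
    mutex.acquire()
    rc -= 1
    if rc == 0:                  # last reader lets writers back in
        wrt.release()
    mutex.release()

def writer():
    wrt.acquire()
    obj["value"] += 1            # WRITE INTO THE OBJECT
    wrt.release()

threads = [threading.Thread(target=reader) for _ in range(3)]
threads.append(threading.Thread(target=writer))
for t in threads: t.start()
for t in threads: t.join()
print(obj["value"], rc)
```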
Writer Process
wait(wrt);
.
. WRITE INTO THE OBJECT
.
signal(wrt);
If a writer wants to access the object, wait operation is performed on wrt.
After that no other writer can access the object. When a writer is done
writing into the object, signal operation is performed on wrt.
The Dining-Philosophers Problem
The dining-philosophers problem states that there are five philosophers
sharing a circular table, and they alternately eat and think. There is a
bowl of rice for each philosopher and five chopsticks. A philosopher
needs both the left and the right chopstick to eat; a hungry philosopher
may eat only if both chopsticks are available. Otherwise, the philosopher
puts down the chopstick and begins thinking again.
The dining philosopher is a classic synchronization problem as it
demonstrates a large class of concurrency control problems.
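A semaphore-based sketch: each chopstick is a binary semaphore, and a room semaphore admitting at most four philosophers at once breaks the circular wait (one standard remedy among several; the meal count is illustrative):

```python
import threading

N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]   # one per chopstick
room = threading.Semaphore(N - 1)   # at most 4 philosophers compete at once,
                                    # which breaks the circular-wait condition
meals = [0] * N

def philosopher(i):
    for _ in range(10):                      # illustrative number of meals
        room.acquire()
        chopstick[i].acquire()               # pick up left chopstick
        chopstick[(i + 1) % N].acquire()     # pick up right chopstick
        meals[i] += 1                        # eat
        chopstick[(i + 1) % N].release()     # put down right chopstick
        chopstick[i].release()               # put down left chopstick
        room.release()                       # go back to thinking

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)
```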
The general syntax of a monitor is:
monitor monitor-name
{
// shared variable declarations
Procedure P1(....)
{
}
Procedure P2(....)
{
}
...
Procedure Pn(....)
{
}
Initialization Code(....)
{
}
}
Only one process can be active in a monitor at a time. Other processes that
need to access the shared variables in a monitor have to line up in a queue
and are granted access only when the previous process releases the shared
variables.
MONITOR VS SEMAPHORE
Both semaphores and monitors are used to solve the critical section
problem (as they allow processes to access the shared resources in mutual
exclusion) and to achieve process synchronization in the multiprocessing
environment.
MONITOR
A monitor is a high-level synchronization construct and an abstract data
type. The monitor type contains shared variables and the set of procedures
that operate on those shared variables.
When any process wishes to access the shared variables in the monitor, it
needs to do so through these procedures. The processes line up in a
queue and are granted access only when the previous process releases the
shared variables. Only one process can be active in a monitor at a time.
Monitor has condition variables.
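Python has no monitor construct, but a lock shared by all methods plus a condition variable approximates one; the class name and counter behaviour below are hypothetical illustrations:

```python
import threading

class BoundedCounter:
    """Monitor-style class: one lock makes at most one thread active
    inside at a time, and a condition variable lets a thread wait
    inside the monitor until it is signalled."""
    def __init__(self):
        self._lock = threading.Lock()
        self._nonzero = threading.Condition(self._lock)
        self.value = 0

    def increment(self):
        with self._lock:              # enter the monitor
            self.value += 1
            self._nonzero.notify()    # signal a waiting thread

    def decrement(self):
        with self._lock:              # enter the monitor
            while self.value == 0:    # wait on the condition variable
                self._nonzero.wait()
            self.value -= 1

c = BoundedCounter()
t = threading.Thread(target=c.decrement)   # blocks until a value exists
t.start()
c.increment()
t.join()
print(c.value)
```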
SEMAPHORE
A Semaphore is a lower-level object. A semaphore is a non-negative
integer variable. The value of Semaphore indicates the number of shared
resources available in the system. The value of semaphore can be modified
only by two functions, namely wait() and signal() operations (apart from
the initialization).
When any process accesses the shared resources, it performs the wait()
operation on the semaphore and when the process releases the shared
resources, it performs the signal() operation on the semaphore. Semaphore
does not have condition variables. When a process is modifying the value
of the semaphore, no other process can simultaneously modify the value of
the semaphore.