
Name: Mohamed Abdelrahman Anwar
ID: 20011634

Operating Systems
Sheet 7
Deadlock and Starvation

6.1 PRINCIPLES OF DEADLOCK


1.

Process P          Process Q
Get A              Get B
Get B              Get A
Release A          Release B
Release B          Release A

A deadlock path would be:

Process P gets resource A, then process Q gets resource B. Process P then
waits on resource B while process Q waits on resource A, so neither can
proceed.

Process P          Process Q
Get A              Get B
Release A          Get A
Get B              Release B
Release B          Release A

A path with no deadlock would be:

Process P gets A, then process Q gets B. Process P releases A, so process Q
can acquire it; process Q then releases B, which process P can acquire
afterwards. No circular wait ever forms.
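
For illustration, a minimal Python sketch of the deadlock-prone ordering in
question 1 (the lock names and functions are hypothetical, standing in for
resources A and B):

import threading

# Hypothetical locks standing in for resources A and B.
lock_a = threading.Lock()
lock_b = threading.Lock()

def process_p():
    # P: Get A, then Get B (the deadlock-prone ordering above).
    with lock_a:
        with lock_b:
            pass  # use A and B
    # B and A are released automatically when the blocks are exited.

def process_q():
    # Q: Get B, then Get A -- the opposite order to P.
    with lock_b:
        with lock_a:
            pass  # use A and B

# If P acquires lock_a and Q acquires lock_b before either takes its second
# lock, both threads block forever -- the deadlock path described above.
# (Inserting a short sleep between the two acquisitions makes this
# interleaving almost certain.)
t_p = threading.Thread(target=process_p)
t_q = threading.Thread(target=process_q)
t_p.start(); t_q.start()
t_p.join(); t_q.join()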
2.
• Deadlock Prevention
• Deadlock Avoidance
• Deadlock Detection
3.

A cycle can be detected in this graph, which means a deadlock is possible.
Cycle: P0 -> C -> P2 -> D -> P1 -> B -> P0.
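
A possible deadlock is exactly a cycle in this directed graph, so it can
also be checked programmatically. A small sketch, assuming the edge list
below reflects the cycle read off the figure (the full graph is not
reproduced here):

def has_cycle(graph):
    # Depth-first search; a back edge to a node on the current path means a cycle.
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GRAY          # node is on the current DFS path
        for nxt in graph.get(node, []):
            if color[nxt] == GRAY:  # back edge -> cycle
                return True
            if color[nxt] == WHITE and visit(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[node] == WHITE and visit(node) for node in graph)

# Hypothetical edge list containing the cycle P0 -> C -> P2 -> D -> P1 -> B -> P0.
rag = {
    "P0": ["C"], "C": ["P2"], "P2": ["D"],
    "D": ["P1"], "P1": ["B"], "B": ["P0"],
}
print(has_cycle(rag))  # True: a deadlock is possible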
6.2 DEADLOCK PREVENTION
4. We can rearrange the get calls so that resources are always requested in
alphabetical order; with a single global ordering on resources, no circular
wait (and therefore no deadlock) can occur.

As expected, there are no cycles in the resulting resource allocation graph (RAG).
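
Continuing the hypothetical sketch from question 1, the reordered
(alphabetical) acquisition would look like this; both threads now take A
before B, so no circular wait can form:

import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def process_p():
    with lock_a:            # A before B
        with lock_b:
            pass            # use A and B

def process_q():
    with lock_a:            # Q also takes A before B
        with lock_b:
            pass            # use A and B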


6.3 DEADLOCK AVOIDANCE
5.
a. Free space = 25.
Process    Max   Hold   Need
1           70    45     25
2           60    40     20
3           60    15     45
4           60    25     35
Either process 1 (need 25) or process 2 (need 20) can finish first with the
25 free units; once it releases its memory, the remaining processes can
finish in any order, so the state is safe.
b. Free space = 15.
Process    Max   Hold   Need
1           70    45     25
2           60    40     20
3           60    15     45
4           60    35     25
This is an unsafe state: no process's remaining need fits in the 15 free
units, so no process can finish.
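
The check used in both parts can be written as a small single-resource
safety test; a sketch using the numbers from the tables above (the function
name is an assumption):

def is_safe(free, hold, need):
    # Repeatedly finish any process whose remaining need fits in the free
    # memory, reclaiming what it holds; unsafe if at some point nobody fits.
    hold, need = list(hold), list(need)
    unfinished = set(range(len(hold)))
    while unfinished:
        runnable = [i for i in unfinished if need[i] <= free]
        if not runnable:
            return False             # no process can finish -> unsafe
        i = runnable[0]
        free += hold[i]              # process i finishes and releases its memory
        unfinished.remove(i)
    return True

# (a) free = 25: safe (process 1 or 2 can finish first)
print(is_safe(25, hold=[45, 40, 15, 25], need=[25, 20, 45, 35]))  # True
# (b) free = 15: unsafe (no remaining need fits)
print(is_safe(15, hold=[45, 40, 15, 35], need=[25, 20, 45, 25]))  # False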
6.4 DEADLOCK DETECTION
6.
W = (2,1,0,0)
Mark P3 -> W = (2,2,2,0)
Mark P2 -> W = (4,2,2,1)
Mark P1 -> no deadlock is detected.
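
The marking steps above follow the standard detection algorithm; a generic
sketch (the problem's Allocation and Request matrices are not reproduced
here, so no concrete data is filled in):

def deadlocked_processes(available, allocation, request):
    # Work starts as Available; processes holding nothing are pre-marked.
    # Any unmarked process whose outstanding request fits in Work is marked
    # and its allocation is added back to Work.
    n, m = len(allocation), len(available)
    work = list(available)
    marked = [all(x == 0 for x in allocation[i]) for i in range(n)]
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if not marked[i] and all(request[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]
                marked[i] = True
                changed = True
    # Unmarked processes are deadlocked; an empty list matches the
    # "no deadlock detected" conclusion above.
    return [f"P{i + 1}" for i in range(n) if not marked[i]]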
7.
Need = Claim - Allocated = (2, 1, 6, 5).
P2 needs 1, so we now have 2.
P1 needs 0, so we now have 3, for a total of 5.
P4 needs 2, so we now have 7, for a total of 12.
P3 needs 0.
The minimum is 3.
6.5 AN INTEGRATED DEADLOCK STRATEGY
8.
a. Rank order, from 1 to 6, of the ways of handling deadlock, based on
which approach permits the greatest concurrency when there is no deadlock:
1. Resource ordering - This approach allows the most
concurrency because it requires no preemption or rollback
of threads. By ordering resource requests, deadlocks can
be avoided altogether, allowing all threads to make
progress without waiting.
2. Banker's algorithm - This approach allows for a high
degree of concurrency as long as there are sufficient
resources to meet the needs of all threads. However, it
requires careful tracking of available resources and may
cause delays in granting requests to ensure safety.
3. Restart thread and release all resources if the thread
needs to wait - This approach can allow for good
concurrency as long as the system is not heavily loaded.
However, it can be inefficient as it requires restarting the
thread and releasing resources, which may result in
wasted effort and resources.
4. Reserve all resources in advance - This approach can limit
concurrency as it requires all resources to be reserved in
advance. This means that some resources may go unused,
leading to inefficiency and reduced concurrency.
5. Detect deadlock and kill the thread, releasing all resources
- This approach can lead to reduced concurrency as
threads are terminated and resources are released. It can
also result in wasted effort if the thread was close to
completing its task before being terminated.
6. Detect deadlock and roll back thread's actions - This
approach can be the least concurrent as it requires threads
to be rolled back, which can be time-consuming and
resource-intensive. It can also result in wasted effort if the
thread had made significant progress before being rolled
back.
b. Rank order the approaches from 1 to 6, with 1 being the
most efficient, assuming that deadlock is a very rare event:
1. Resource ordering - This approach requires no overhead and is
the most efficient.
2. Reserve all resources in advance - This approach can be
efficient as it eliminates the need to track available
resources. However, it can result in some resources going
unused.
3. Banker's algorithm - This approach requires some overhead to
track available resources and may cause delays in granting
requests to ensure safety.
4. Detect deadlock and roll back thread's actions - This approach
can be inefficient as it requires threads to be rolled back,
which can be time-consuming and resource-intensive.
5. Restart thread and release all resources if the thread needs
to wait - This approach can be inefficient as it requires
resources to be released and may result in wasted effort and
resources.
6. Detect deadlock and kill the thread, releasing all resources -
This approach can be the least efficient as it requires
resources to be released and may result in wasted effort if the
thread was close to completing its task.

6.6 DINING PHILOSOPHERS’ PROBLEM


9.
a. The solution of picking up the left fork first and then checking for the
availability of the right fork can lead to a deadlock in which every
philosopher has picked up their left fork and waits indefinitely for the
right one. This happens when all philosophers take their left forks at
roughly the same time: each philosopher's right fork is the left fork of
their neighbour, so every right fork is already held and none of them can
ever proceed.
b. The solution of picking up both forks at the same time (inside a
critical section), and only if both are available, prevents the deadlock
described above. If both forks are not free, the philosopher picks up
neither and waits until both become available. A philosopher therefore only
eats when both forks can be acquired together, so the circular hold-and-wait
pattern from part (a) cannot arise.
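
A rough sketch of solution (b), where the check for both forks and the
pick-up happen inside one critical section (the names forks, forks_available
and philosopher are illustrative assumptions):

import threading

N = 5
forks = [False] * N                      # False = fork is on the table
forks_available = threading.Condition()  # guards the "pick up both forks" check

def pick_up_both(i):
    left, right = i, (i + 1) % N
    with forks_available:
        # Wait until BOTH forks are free, then take them atomically.
        while forks[left] or forks[right]:
            forks_available.wait()
        forks[left] = forks[right] = True

def put_down_both(i):
    left, right = i, (i + 1) % N
    with forks_available:
        forks[left] = forks[right] = False
        forks_available.notify_all()

def philosopher(i):
    for _ in range(3):                   # think/eat a few rounds, then stop
        pick_up_both(i)                  # eats only when both forks were taken together
        put_down_both(i)

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()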

GENERAL QUESTIONS
10.
a.
Process    R1   R2   R3   R4
P1          0    0    0    0
P2          0    7    5    0
P3          6    6    2    2
P4          2    0    0    2
P5          0    3    2    0
b. Applying the banker's algorithm, the system is in a safe state: a
complete safe sequence exists.
Start with Available = (2,1,0,0).
P1 finishes -> Available = (2,1,1,2)
P4 finishes -> Available = (4,4,6,6)
P5 finishes -> Available = (4,6,9,8)
P2 finishes -> Available = (6,6,9,8)
P3 finishes -> done. Safe sequence: P1, P4, P5, P2, P3.
c. Running the deadlock detection algorithm gives the same result: every
process can be marked, so no deadlock is detected.
d. None.
e. The request can be granted immediately, but granting it may later lead
to a deadlock involving processes P2 and P3.
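
The run in part (b) is the banker's safety algorithm; a generic sketch that
returns the completion order (the problem's Allocation matrix is not
reproduced here, so no concrete data is filled in):

def safe_sequence(available, allocation, need):
    # Repeatedly pick an unfinished process whose remaining need fits in
    # Work, let it finish, and reclaim its allocation. Returns a safe
    # completion order, or None if the state is unsafe.
    work = list(available)
    n, m = len(allocation), len(available)
    finished = [False] * n
    order = []
    while len(order) < n:
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]
                finished[i] = True
                order.append(f"P{i + 1}")
                break
        else:
            return None                  # no candidate could finish -> unsafe
    return order

# With the problem's Allocation and Need matrices and the initial Available
# vector, this returns a safe completion order such as the one found in
# part (b), or None if the state were unsafe.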
