
OS 2nd Assignment Answers

1. Outline the critical-section problem; point out and explain its three requirements

Consider a system consisting of n processes {P0, P1, ..., Pn-1}. Each process has a segment of code,
called a critical section, in which the process may be changing common variables, updating a table,
writing a file, and so on. The important feature of the system is that, when one process is executing in
its critical section, no other process is to be allowed to execute in its critical section. That is, no two
processes are executing in their critical sections at the same time. The critical-section problem is to
design a protocol that the processes can use to cooperate. Each process must request permission to
enter its critical section. The section of code implementing this request is the entry section . The
critical section may be followed by an exit section. The remaining code is the remainder section. The
general structure of a typical process Pi is shown below. A solution to the critical-section
problem must satisfy the following three requirements:
1. Mutual exclusion. If process Pi is executing in its critical section, then no other processes can be
executing in their critical sections.
2. Progress. If no process is executing in its critical section and some processes wish to enter their
critical sections, then only those processes that are not executing in their remainder sections can
participate in deciding which will enter its critical section next, and this selection cannot be
postponed indefinitely.
3. Bounded waiting. There exists a bound, or limit, on the number of times that other processes are
allowed to enter their critical sections after a process has made a request to enter its critical
section and before that request is granted.

do {
    entry section
    critical section
    exit section
    remainder section
} while (TRUE);

2. Summarize semaphores and explain how mutual exclusion is implemented with semaphores

The hardware-based solutions to the critical-section problem are complicated for application
programmers to use. To overcome this difficulty, we can use a synchronization tool called a
semaphore. A semaphore S is an integer variable that, apart from initialization, is accessed only
through two standard atomic operations: wait() and signal(). The wait() operation was originally
termed P, and the signal() operation was originally termed V. The definition of wait() is as follows:
wait(S) {
    while (S <= 0)
        ;   // busy wait
    S--;
}

The definition of signal() is as follows:

signal(S) {
    S++;
}
All modifications to the integer value of the semaphore in the wait () and signal() operations must be
executed indivisibly. That is, when one process modifies the semaphore value, no other process can
simultaneously modify that same semaphore value. In addition, in the case of wait (S), the testing of
the integer value of S (S <= 0), as well as its possible modification (S--), must be executed without
interruption.
Semaphores are integer variables that can be accessed only through two atomic operations: wait() and
signal().

Mutual exclusion is a property that ensures that only one process can access a critical section (a shared
resource or code) at a time. To implement mutual exclusion with semaphores, we can use a semaphore variable
mutex (initialized to 1) to control access to the critical section. A process that wants to enter the critical section
must first perform a wait(mutex) operation: if the value of mutex is positive, the operation decrements it and the
process proceeds; otherwise, the process must wait until mutex becomes positive again. When the process exits
the critical section, it performs a signal(mutex) operation, which increments the value of mutex by 1 and allows
another process that is waiting on mutex to enter the critical section. This scheme ensures that at most one
process can be in the critical section at any time, since the value of mutex can never exceed 1.
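
A minimal sketch of this scheme, in the same pseudocode style as the definitions above (mutex is assumed to be
a semaphore initialized to 1, as described):

semaphore mutex = 1;    // binary semaphore guarding the critical section

do {
    wait(mutex);        // entry section: proceed only when mutex is positive, then decrement it
        /* critical section */
    signal(mutex);      // exit section: increment mutex, letting a waiting process in
        /* remainder section */
} while (TRUE);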

3. Explain Peterson's solution for the race condition with the algorithm
Peterson’s solution is a software-based algorithm for achieving mutual exclusion among two processes that
share a common variable. The algorithm works as follows:

• Each process has a boolean flag, flag[i], indicating whether it wants to enter the critical section.
Initially, both flags are false.
• There is also a shared variable, turn, indicating whose turn it is to enter the critical section. Initially,
turn can be either 0 or 1.
• To enter the critical section, a process P[i] sets its flag to true and assigns the turn to the other process
P[j]. Then, it repeatedly checks whether the other process also wants to enter the critical section and
whether it is the other process’s turn. If both conditions are true, P[i] waits; otherwise, it enters the
critical section.
• To exit the critical section, a process P[i] simply sets its flag to false.

The algorithm satisfies the three requirements of the critical-section problem: mutual exclusion, progress, and
bounded waiting. Progress is ensured because a process can enter the critical section only if the other process does not
want to enter or has given the turn to the first process. Bounded waiting is ensured because a process that wants
to enter the critical section will get the turn after at most one entry by the other process. Mutual exclusion is
ensured because only one process can enter the critical section at a time, as the flag and turn variables prevent
simultaneous entry.
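
The structure of process Pi in Peterson's solution can be sketched as follows (a standard rendering in the
pseudocode style used above; j = 1 - i denotes the other process):

// Shared variables
boolean flag[2];    // flag[i] == TRUE means Pi wants to enter its critical section
int turn;           // whose turn it is to enter

do {
    flag[i] = TRUE;                 // Pi announces its intent to enter
    turn = j;                       // give priority to the other process
    while (flag[j] && turn == j)
        ;                           // busy wait while Pj is interested and it is Pj's turn

        /* critical section */

    flag[i] = FALSE;                // Pi is no longer interested

        /* remainder section */
} while (TRUE);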

4. Define deadlock and explain the necessary conditions

A deadlock is a situation where a set of processes are blocked because each process is holding a resource
and waiting for another resource acquired by some other process. Deadlocks can occur in various
systems, such as operating systems, database systems, and distributed
systems. Deadlocks can be prevented, avoided, detected, or recovered from, depending on the methods used
by the system. Deadlocks are an important problem for operating system designers, as they can affect the
performance, reliability, and correctness of the system.

There are four necessary conditions for the occurrence of a deadlock:

1. Mutual Exclusion: At least one resource must be held in a non-shareable mode, meaning it cannot be
simultaneously used by multiple processes.
2. Hold and Wait: A process must be holding at least one resource while waiting for another resource to
be released by another process.
3. No Preemption: Resources cannot be preempted from a process; that is, a resource can be released
only voluntarily by the process holding it.
4. Circular Wait: A set of processes must be waiting for each other in a circular chain.

If any one of these conditions is not met, a deadlock cannot occur. Therefore, to prevent deadlocks, one or more
of these conditions must be eliminated. Various techniques, such as deadlock detection, avoidance, and
prevention, can be used to eliminate these conditions and prevent deadlocks from occurring.
5. Explain the various methods of recovery from deadlock

There are several methods for recovering from a deadlock, including:

1. Process Termination: This method involves aborting one or more processes to break the circular wait
condition causing the deadlock. The deadlocked processes may have been computing for a long time,
and the results of those partial computations must be discarded and will probably have to be recomputed
later. Aborting all deadlocked processes at once is simple and resolves the deadlock quickly.
However, it can result in the loss of
data and other resources that were being used by the terminated processes, and it may cause further
problems in the system if the terminated processes were critical to the system’s operation.
2. Resource Preemption: This method involves preempting resources from one or more processes that
are deadlocked. The resources are then allocated to other processes that need them. This method is
more complex than process termination, but it is less likely to result in the loss of data and other
resources. However, it may cause further problems in the system if the preempted resources were
critical to the system’s operation.
3. Priority Inversion: This method involves temporarily lowering the priority of a process that is holding
a resource needed by another process. This allows the other process to acquire the resource and proceed
with its execution. Once the other process has finished executing, the priority of the first process is
restored. This method is useful when the resources are not shareable and the processes have different
priorities.
4. Rollback: This method involves rolling back one or more processes to a previous state and restarting
them from that state. This method is useful when the processes have checkpoints that can be used to
restore their previous state. However, it can result in the loss of data and other resources that were
being used by the rolled-back processes.

These methods can be used alone or in combination to recover from a deadlock. The choice of method depends
on the specific situation and the resources available.

6. Explain any one synchronization problem used for testing newly proposed synchronization schemes

One synchronization problem for testing newly proposed sync schemes is the dining-philosophers problem.
This problem is described as follows:

• There are five philosophers who spend their time thinking and eating.
• They share a circular table with five chopsticks between them, one for each pair of adjacent
philosophers.
• To eat, a philosopher needs to pick up both chopsticks next to him. He cannot pick up a chopstick that
is already in use by another philosopher.
• The problem is to design a synchronization protocol that allows the philosophers to eat without
causing deadlocks or starvation.

One possible solution using semaphores is:

• Define a semaphore mutex initialized to 1 to control access to the chopsticks.


• Define an array of semaphores philosopher[5], one for each philosopher, initialized to 0. A hungry
philosopher blocks on his own semaphore until he is allowed to eat.
• Define an array of integers state[5] to keep track of the state of each philosopher: 0 means thinking,
1 means hungry, 2 means eating.
• Define a function test(i) that checks whether philosopher i can eat, that is, whether he is hungry and
neither of his neighbours is eating (so both chopsticks next to him are available). If so, it sets state[i]
to 2 (eating) and signals philosopher[i], allowing him to proceed.
• Define a function pickup(i) that is called by philosopher i when he wants to eat. It waits on mutex, sets
state[i] to 1, calls test(i), and releases mutex. Then it waits on philosopher[i] until it is signaled by
test(i).
• Define a function putdown(i) that is called by philosopher i when he finishes eating. It waits on mutex,
sets state[i] to 0, calls test((i+4) mod 5) and test((i+1) mod 5) to check if the adjacent philosophers can
eat, and releases mutex.
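
A sketch of this scheme in C-style pseudocode (a minimal illustration following the description above;
THINKING, HUNGRY, and EATING stand for the state values 0, 1, and 2, and wait() and signal() are the
semaphore operations from question 2):

#define N        5
#define THINKING 0
#define HUNGRY   1
#define EATING   2
#define LEFT(i)  (((i) + 4) % N)    // left neighbour of philosopher i
#define RIGHT(i) (((i) + 1) % N)    // right neighbour of philosopher i

int state[N];                       // state of each philosopher
semaphore mutex = 1;                // protects state[]
semaphore philosopher[N];           // each initialized to 0; a hungry philosopher blocks here

void test(int i) {
    // Philosopher i may eat only if he is hungry and neither neighbour is eating
    if (state[i] == HUNGRY &&
        state[LEFT(i)] != EATING && state[RIGHT(i)] != EATING) {
        state[i] = EATING;
        signal(philosopher[i]);     // allow philosopher i to proceed
    }
}

void pickup(int i) {
    wait(mutex);
    state[i] = HUNGRY;
    test(i);                        // try to acquire both chopsticks now
    signal(mutex);
    wait(philosopher[i]);           // block until test() grants permission to eat
}

void putdown(int i) {
    wait(mutex);
    state[i] = THINKING;
    test(LEFT(i));                  // a neighbour may now be able to eat
    test(RIGHT(i));
    signal(mutex);
}

This scheme is deadlock-free, since a philosopher starts eating only when both chopsticks are free, but it does
not by itself rule out starvation of an individual philosopher.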

7. Explain solution to producer-consumer problem using semaphores

The producer-consumer problem is a classic synchronization problem that involves two processes: a producer
that produces some data and puts it into a buffer, and a consumer that consumes the data from the buffer. The
challenge is to ensure that the producer and the consumer do not access the buffer at the same time, that the
producer does not add an item to a full buffer, and that the consumer does not remove an item from an empty buffer.

One possible solution to this problem is to use semaphores, which are synchronization primitives that can be
used to control access to shared resources. A semaphore is an integer variable that can be incremented or
decremented by special operations, called wait and signal. The wait operation decrements the semaphore value
by one, and blocks the process if the value becomes negative. The signal operation increments the semaphore
value by one and wakes up a blocked process, if any process is waiting on the semaphore.

To solve the producer-consumer problem using semaphores, we need three semaphores: mutex, full, and empty.
The mutex semaphore is used to ensure mutual exclusion between the producer and the consumer when they
access the buffer. The full semaphore is used to count the number of full slots in the buffer, and the empty
semaphore is used to count the number of empty slots in the buffer. The initial values of the semaphores are:

• mutex = 1 (the buffer is initially free)


• full = 0 (the buffer is initially empty)
• empty = n (the buffer has n slots)

The pseudocode for the producer and the consumer processes is as follows:

Producer:

while (true) {
    produce an item;
    wait(empty);      // decrement empty and block if zero
    wait(mutex);      // enter critical section
    put the item into the buffer;
    signal(mutex);    // leave critical section
    signal(full);     // increment full
}

Consumer:

while (true) {
    wait(full);       // decrement full and block if zero
    wait(mutex);      // enter critical section
    take an item from the buffer;
    signal(mutex);    // leave critical section
    signal(empty);    // increment empty
    consume the item;
}

This solution ensures that the producer and the consumer can access the buffer in a synchronized manner,
without causing any deadlock or starvation. The producer can only put an item into the buffer if there is an
empty slot, and the consumer can only take an item from the buffer if there is a full slot. The mutex semaphore
prevents the producer and the consumer from accessing the buffer at the same time, thus avoiding data
inconsistency.
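
For concreteness, the same scheme can be sketched in C using POSIX semaphores and threads. This is a minimal
illustration rather than a complete solution; the buffer size, the number of items, and the variable names are
choices made for the example:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define BUFFER_SIZE 5                  /* n slots (chosen for the example) */
#define NUM_ITEMS   20                 /* number of items produced and consumed */

static int buffer[BUFFER_SIZE];
static int in = 0, out = 0;            /* next free slot / next full slot */

static sem_t mutex;                    /* binary semaphore protecting the buffer */
static sem_t full;                     /* counts full slots */
static sem_t empty;                    /* counts empty slots */

static void *producer(void *arg) {
    (void)arg;
    for (int item = 0; item < NUM_ITEMS; item++) {
        sem_wait(&empty);              /* wait for an empty slot */
        sem_wait(&mutex);              /* enter critical section */
        buffer[in] = item;             /* put the item into the buffer */
        in = (in + 1) % BUFFER_SIZE;
        sem_post(&mutex);              /* leave critical section */
        sem_post(&full);               /* one more full slot */
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < NUM_ITEMS; i++) {
        sem_wait(&full);               /* wait for a full slot */
        sem_wait(&mutex);              /* enter critical section */
        int item = buffer[out];        /* take an item from the buffer */
        out = (out + 1) % BUFFER_SIZE;
        sem_post(&mutex);              /* leave critical section */
        sem_post(&empty);              /* one more empty slot */
        printf("consumed %d\n", item); /* consume the item */
    }
    return NULL;
}

int main(void) {
    sem_init(&mutex, 0, 1);
    sem_init(&full,  0, 0);
    sem_init(&empty, 0, BUFFER_SIZE);

    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}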

8. Explain the readers-writers problem and give its solution with semaphores

The readers-writers problem is a synchronization problem that arises when multiple processes need to
access a shared resource. In this problem, there are multiple readers and writers that need to access a shared
file or database. The readers only read the data, while the writers can both read and write the data. The
problem is to ensure that the readers and writers can access the shared resource without interfering with
each other.

One solution to this problem is to use semaphores. We can use two semaphores, mutex and rw_mutex, both
initialized to 1, together with an integer read_count initialized to 0. The mutex semaphore protects updates to
read_count, while the rw_mutex semaphore ensures that writers have exclusive access to the shared resource
(the first reader to arrive locks rw_mutex on behalf of all readers, and the last reader to leave releases it).

Here is the algorithm for the reader:

do {
    wait(mutex);
    read_count++;
    if (read_count == 1) {
        wait(rw_mutex);
    }
    signal(mutex);

    // read the data

    wait(mutex);
    read_count--;
    if (read_count == 0) {
        signal(rw_mutex);
    }
    signal(mutex);
} while (true);

Here is the algorithm for the writer:

do {
    wait(rw_mutex);

    // write the data

    signal(rw_mutex);
} while (true);

In the reader algorithm, the reader first waits on the mutex semaphore to gain exclusive access to the
read_count variable. The reader then increments read_count to indicate that it is reading the data. If
read_count is 1, this is the first reader to arrive, so it waits on the rw_mutex semaphore to ensure that no
writer is accessing the shared resource. The reader then signals the mutex semaphore to release read_count.
After reading the data, the reader waits on the mutex semaphore again and decrements read_count. If
read_count is 0, this is the last reader to leave, so it signals the rw_mutex semaphore to allow writers to
access the shared resource.
In the writer algorithm, the writer waits on the rw_mutex semaphore to gain exclusive access to the shared
resource. The writer then writes the data and signals the rw_mutex semaphore to release access to the
shared resource.

This scheme ensures that the readers and writers can access the shared resource without interfering with
each other. The mutex semaphore serializes the readers' updates to read_count, while the rw_mutex
semaphore ensures that writers have exclusive access to the shared resource.

9. Demonstrate the test-and-set instruction. How can it be used to implement mutual exclusion? Consider
using a fragment of pseudo-assembly language to aid your explanation.

Here is some information about the test-and-set instruction:

• Definition: The test-and-set instruction is a hardware instruction that atomically reads and modifies the
content of a memory location. It takes a memory address and a new value as parameters, and returns
the old value of the memory location while setting it to the new value.
• Usage: The test-and-set instruction can be used to implement mutual exclusion locks, which are
synchronization mechanisms that prevent concurrent access to a shared resource by multiple processes
or threads. A process can acquire a lock by repeatedly executing the test-and-set instruction until it
returns zero, indicating that the lock was free. A process can release a lock by setting its value to zero.
• Example: The following pseudocode shows how the test-and-set instruction can be used to implement
a simple spinlock:

// Global variable lock initialized to 0
int lock = 0;

// Function to acquire the lock
void acquire_lock() {
    while (test_and_set(&lock, 1) == 1) {
        // Busy wait until the lock is free
    }
}

// Function to release the lock
void release_lock() {
    lock = 0;
}
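
The behaviour of the instruction itself is often described with a C-style sketch like the following; the key
point is that the hardware executes the whole function atomically:

// Executed atomically by the hardware: returns the old value of *target
// and unconditionally sets *target to the new value.
int test_and_set(int *target, int new_value) {
    int old = *target;
    *target = new_value;
    return old;
}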

13. Explain how a resource-allocation graph is useful in deadlock detection

A resource-allocation graph (RAG) is a directed graph that represents the allocation of resources to processes
in a system. In a RAG, each process and each resource is represented by a node. A request edge points from a
process to a resource it is waiting for, and an assignment edge points from a resource to the process that holds
it. The RAG can be used to detect deadlocks in a system.

To detect deadlocks using a RAG, we look for cycles in the graph. If every resource type has only a single
instance, a cycle implies that the system is in a deadlock state; if resource types have multiple instances, a
cycle is necessary but not sufficient for deadlock. Each cycle in the graph corresponds to a set of processes that
are waiting for resources held by other processes in the same set. Deadlocks can be resolved by breaking one or
more of the necessary conditions for deadlock, such as by preemption, rollback, or termination of processes.

The RAG is useful in detecting deadlocks because it provides a visual representation of the allocation of
resources to processes, which can help identify the processes that are involved in the deadlock. It can also help
identify the resources that are causing the deadlock, which can be useful in resolving the deadlock. By analyzing
the RAG, we can determine whether the system is in a safe state, where no deadlock can occur, or an unsafe
state, where deadlock can occur.
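
As an illustration of the cycle test, a RAG can be stored as an adjacency matrix over all process and resource
nodes and searched with a depth-first traversal. The following is a minimal sketch; the node count NODES and
the contents of the edge matrix are assumptions made for the example:

#define NODES 8                 // processes and resources together (example size)

int edge[NODES][NODES];         // edge[u][v] = 1 if there is a request or assignment edge u -> v
int color[NODES];               // 0 = unvisited, 1 = on the current DFS path, 2 = finished

// Returns 1 if a cycle is reachable from node u
int dfs(int u) {
    color[u] = 1;
    for (int v = 0; v < NODES; v++) {
        if (!edge[u][v]) continue;
        if (color[v] == 1) return 1;            // back edge: a cycle exists
        if (color[v] == 0 && dfs(v)) return 1;
    }
    color[u] = 2;
    return 0;
}

// Returns 1 if the resource-allocation graph contains a cycle
int has_cycle(void) {
    for (int u = 0; u < NODES; u++)
        if (color[u] == 0 && dfs(u)) return 1;
    return 0;
}

For single-instance resource types a cycle found this way means deadlock; for multiple-instance types a
detection algorithm similar to the banker's safety check in question 14 is needed.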
14. What do you mean by deadlock avoidance? Explain the use of the Banker's algorithm for deadlock
avoidance with an illustration.

Deadlock avoidance is a technique to ensure that a system will never enter an unsafe state that could lead to
deadlock. One way to implement deadlock avoidance is to use the banker’s algorithm, which is based on the
analogy of a bank granting loans to customers.

The banker’s algorithm works as follows:

• The system maintains information about the total amount of each resource available, the amount of
each resource allocated to each process, and the maximum demand of each process.
• The system also keeps track of the available resources, which are the remaining resources after
satisfying the current allocation.
• The system must decide whether to grant a resource request from a process. To do so, it checks if the
request is valid (that is, it does not exceed the maximum claim) and if it is safe (that is, it does not leave
the system in an unsafe state).
• A state is safe if there exists a sequence of processes (a safe sequence) such that each process can be
allocated resources up to its maximum demand and still leave enough resources for the remaining
processes to proceed.
• The system grants the request only if it is both valid and safe. Otherwise, the request is postponed until
a later time.

An illustration of the banker's algorithm is shown below:

• The system has 14 tape drives, which are the resources to be allocated (12 are currently allocated and 2 are
available). There are five processes, P0 through P4, which have varying maximum demands and current
allocations of tape drives.
• The table below shows the current state of the system, where Allocation is the number of tape drives
allocated to each process, Max is the maximum demand of each process, and Available is the number
of tape drives available in the system.

Process   Allocation   Max
P0        3            5
P1        2            4
P2        2            3
P3        2            4
P4        3            6

Available = 2

• Suppose P2 requests one more tape drive. The system checks if the request is valid and safe.
• The request is valid, since P2’s allocation will not exceed its maximum claim (2 + 1 <= 3).
• The request is safe, since the system can still find a safe sequence after pretending to grant it. One possible
safe sequence is P2, P0, P1, P3, P4: with 1 tape drive left available, P2 (which has then reached its maximum
of 3) can run to completion and release its 3 drives, leaving 4 available; that is enough for P0 to obtain its
remaining 2 drives, finish, and release 5 more. The remaining processes can then be satisfied in order.
• Therefore, the system grants the request and updates the state of the system as follows:

Process   Allocation   Max
P0        3            5
P1        2            4
P2        3            3
P3        2            4
P4        3            6

Available = 1

• This is an example of how the banker’s algorithm can prevent deadlock by avoiding unsafe states.
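
The safety check at the heart of the algorithm can be sketched in C for this single-resource example (a minimal
illustration using the figures from the state after granting P2's request; the need of each process is its maximum
minus its current allocation):

#define P 5                                     // number of processes

int allocation[P] = {3, 2, 3, 2, 3};            // state after granting P2's request
int max_demand[P] = {5, 4, 3, 4, 6};
int available     = 1;

// Returns 1 if a safe sequence exists, 0 otherwise
int is_safe(void) {
    int work = available;
    int finished[P] = {0};
    int done = 0;

    while (done < P) {
        int progress = 0;
        for (int i = 0; i < P; i++) {
            int need = max_demand[i] - allocation[i];
            if (!finished[i] && need <= work) {
                work += allocation[i];          // Pi can finish and release its drives
                finished[i] = 1;
                done++;
                progress = 1;
            }
        }
        if (!progress)
            return 0;                           // no remaining process can finish: unsafe
    }
    return 1;                                   // every process can finish: safe
}

With these figures the check succeeds and finds the safe sequence P2, P0, P1, P3, P4, so the state reached by
granting the request is safe.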

15. Summarize the safe, unsafe, and deadlock state spaces

Here is a summary of the safe, unsafe, and deadlock state spaces:

• Safe state: A state is safe if the system can allocate resources to each process (up to its maximum) in some
order and still avoid a deadlock. More formally, a system is in a safe state only if there exists a safe
sequence. A safe state is not a deadlocked state, although the system may later drift into an unsafe state if it
grants requests carelessly; conversely, a deadlocked state is always an unsafe state.
• Unsafe state: A state is unsafe if there is no safe sequence in which all processes can finish. An unsafe state
may or may not lead to a deadlock, depending on the future requests and releases of resources; not all
unsafe states are deadlocks. As long as the state is safe, the operating system can avoid a deadlock; in an
unsafe state, it cannot prevent the processes from requesting resources in such a way that a deadlock occurs.
• Deadlock state: A state is a deadlock state if no process can proceed because every process is waiting for a
resource that is held by another waiting process. A deadlock state is always an unsafe state, but not vice
versa.

Given the concept of a safe state, we can define avoidance algorithms that ensure that the
system will never deadlock. The idea is simply to ensure that the system will always remain in
a safe state. Initially, the system is in a safe state. Whenever a process requests a resource
that is currently available, the system must decide whether the resource can be allocated
immediately or whether the process must wait. The request is granted only if the allocation
leaves the system in a safe state. In this scheme, if a process requests a resource that is
currently available, it may still have to wait. Thus, resource utilization may be lower than it
would otherwise be.
