1Q. What is a deadlock? Explain the conditions necessary for a deadlock to occur.
Ans. A deadlock in computer science refers to a situation where two or more processes are unable to proceed
because each is waiting for another to release a resource. Essentially, a deadlock is a state in which no progress
can be made, and the system is effectively "stuck." A deadlock can arise only when the following four conditions
hold simultaneously:
1. Mutual Exclusion: At least one resource must be held in a non-sharable mode, meaning that only one process
can use the resource at a time. This condition implies that once a process acquires a resource, it cannot be
shared or used by other processes until the owning process releases it.
2. Hold and Wait: A process must be holding at least one resource and waiting to acquire additional resources
that are currently held by other processes. This condition can lead to a situation where processes are waiting
indefinitely for resources to be released by other processes.
3. No Preemption: Resources cannot be forcibly taken away from a process; they can only be released
voluntarily by the process holding them. This condition ensures that a process cannot be interrupted and have
its resources reassigned to another process, which could potentially prevent a deadlock.
4. Circular Wait: There must be a circular chain of two or more processes, each waiting for a resource held by
the next process in the chain. This circular wait implies that there is a closed loop of processes, each waiting
for a resource held by the next process in the loop.
If all these conditions are met simultaneously, a deadlock is likely to occur. Deadlock prevention and recovery
strategies are employed to manage and avoid these situations, such as resource allocation policies, deadlock
detection algorithms, and methods for recovering from deadlock states.
2Q. Explain in detail various methods of deadlock prevention.*
Ans. Deadlock prevention involves designing systems and algorithms in a way that eliminates or avoids the
conditions that lead to deadlock. Several methods are employed to prevent deadlocks in computer systems.
Here are some commonly used deadlock prevention techniques:
1. Mutual Exclusion:
Use of Shared Resources: Allow resources to be shared among processes rather than requiring exclusive access.
Where resources can safely be shared (for example, read-only files), this removes the mutual exclusion condition.
2. Hold and Wait:
Request All Resources at Once: Require each process to request all the resources it will need before it begins
execution, or to release all held resources before requesting new ones. Either rule prevents a process from
holding some resources while waiting for others.
3. No Preemption:
Resource Preemption: Allow resources to be preempted from one process and given to another. This means that a
resource can be forcefully taken from a process that is waiting for additional resources. However, preemption is
often complex and not always feasible, especially in real-time systems.
4. Circular Wait:
Use of a Resource Hierarchy: Assign a unique numerical value (or priority) to each resource type in the system.
Processes are then required to request resources in ascending order of priority. This establishes a hierarchy
and prevents circular waits.
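The resource-hierarchy idea can be sketched in code. Below is a minimal illustration, assuming two hypothetical resources protected by locks whose list index serves as the hierarchy level; the function name and resource ids are my own, not from any particular system.

```python
import threading

# Two hypothetical resources; the list index defines the hierarchy level.
locks = [threading.Lock(), threading.Lock()]

def use_resources(i, j):
    """Always acquire the lower-numbered resource first.

    Because every thread follows the same ascending order, no circular
    chain of waits can form, so these two locks cannot deadlock even
    when callers name the resources in opposite orders.
    """
    first, second = sorted((i, j))
    with locks[first]:
        with locks[second]:
            return f"worked with resources {first} and {second}"

# Two threads request the same pair in opposite orders; with ordered
# acquisition both complete instead of deadlocking.
results = []
t1 = threading.Thread(target=lambda: results.append(use_resources(0, 1)))
t2 = threading.Thread(target=lambda: results.append(use_resources(1, 0)))
t1.start(); t2.start(); t1.join(); t2.join()
```

Without the `sorted` step, the same two calls could each grab one lock and wait forever for the other, which is exactly the circular wait the hierarchy rule forbids.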
- Banker's Algorithm: Strictly a deadlock avoidance technique, the Banker's Algorithm ensures that resource
requests do not lead to unsafe states. It dynamically checks whether granting a request will leave the system in
a safe state, and grants the request only if it does.
It's important to note that some of these methods may impose additional constraints on system behavior or
may not be suitable for all types of systems. The choice of deadlock prevention method often depends on the
specific requirements and characteristics of the application or system in question. Additionally, these methods
may have trade-offs in terms of system performance, complexity, and resource utilization.
3Q. Explain the Banker's Algorithm with an example.
Ans. The Banker's Algorithm is a deadlock avoidance algorithm used to determine whether granting a resource
request will lead to a safe state. It was developed by Edsger Dijkstra. The algorithm operates by maintaining
information about the maximum demand of each process, the currently allocated resources, and the available
resources in the system. The system grants a resource request only if it determines that the resulting state will
be safe.
The algorithm maintains the following data structures:
Available: A vector giving the number of available instances of each resource type.
Max: A matrix giving the maximum demand of each process for each resource type.
Allocation: A matrix giving the number of resources of each type currently allocated to each process.
Need: A matrix giving the remaining resource needs of each process for each resource type (Need = Max -
Allocation).
The algorithm checks whether the system can satisfy the resource request without leading to an unsafe state.
If the request is safe, the resources are allocated; otherwise, the process must wait.
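The safety check at the heart of the algorithm can be sketched directly from the structures above. The worked numbers below are the classic five-process, three-resource-type textbook example, not values from this document.

```python
def is_safe(available, max_demand, allocation):
    """Banker's safety algorithm: return True if some ordering lets
    every process run to completion with the resources on hand."""
    n = len(allocation)                       # number of processes
    need = [[m - a for m, a in zip(max_demand[i], allocation[i])]
            for i in range(n)]                # Need = Max - Allocation
    work = list(available)                    # resources currently free
    finished = [False] * n
    while True:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j]
                                       for j in range(len(work))):
                # Process i can finish; it then releases everything it holds.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progressed = True
        if not progressed:
            return all(finished)

# Classic example: 5 processes, resource types A, B, C.
available = [3, 3, 2]
max_demand = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
print(is_safe(available, max_demand, allocation))   # True (a safe sequence exists)
```

A request is granted only if pretending to allocate it still leaves `is_safe` returning True; otherwise the requesting process waits.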
4Q. Explain the implementation of the access matrix in operating systems.
Ans. There are different ways to implement the access matrix in operating systems; one common approach is
to use a two-dimensional matrix. Let's explore how the access matrix is implemented:
1. Subjects and Objects:
- Identify the subjects (active entities such as users and processes) and the objects (resources such as files,
devices, and memory segments) whose access rights the matrix records.
2. Operations:
- Define the operations or actions that can be performed on objects. These can include read, write, execute,
delete, etc.
3. Matrix Structure:
- Create a two-dimensional matrix where rows represent subjects, columns represent objects, and each entry
in the matrix represents the access rights of a subject on an object.
Example:
Object1 Object2 Object3
Subject1 RWX R ---
Subject2 --- W R-X
Subject3 R-- --- RW
In this example:
- Subject1 has Read (R), Write (W), and Execute (X) permissions on Object1, only Read (R) permission on
Object2, and no access to Object3.
- Subject2 has Write (W) permission on Object2 and Read (R) and Execute (X) permissions on Object3, but no
access to Object1.
- Subject3 has Read (R) permission on Object1, Read (R) and Write (W) permissions on Object3, and no access
to Object2.
5. Dynamic Updates:
- The access matrix can be updated dynamically based on changes in the system, such as user permissions
being modified, new processes being created, or objects being created or deleted.
The access matrix is a flexible and powerful model for access control, but it can become impractical in large
systems due to its size and the need for dynamic updates. Therefore, various access control mechanisms, such
as ACLs and capabilities, are often used to manage access rights more efficiently.
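Since full matrices are sparse in practice, one minimal sketch is a nested dictionary keyed by subject and object, where a missing entry means "no access". The names mirror the example table above; `check_access` and `grant` are illustrative helpers of my own, not a real OS API.

```python
# Access matrix from the example, stored sparsely: subject -> object -> rights.
access_matrix = {
    "Subject1": {"Object1": {"R", "W", "X"}, "Object2": {"R"}},
    "Subject2": {"Object2": {"W"}, "Object3": {"R", "X"}},
    "Subject3": {"Object1": {"R"}, "Object3": {"R", "W"}},
}

def check_access(subject, obj, right):
    """Return True if `subject` holds `right` on `obj`; absent entries deny."""
    return right in access_matrix.get(subject, {}).get(obj, set())

def grant(subject, obj, right):
    """Dynamic update: add a right, creating rows and cells as needed."""
    access_matrix.setdefault(subject, {}).setdefault(obj, set()).add(right)

print(check_access("Subject1", "Object1", "W"))   # True
print(check_access("Subject2", "Object1", "R"))   # False
```

Reading the rights of one row gives a capability list for that subject; reading one column gives the ACL of that object, which is why those two mechanisms are described as slices of the same matrix.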
5Q. Explain the Readers-Writers problem and its solution using monitors.
Ans. The Readers-Writers problem is a classical synchronization problem in computer science, particularly in the
field of concurrent programming and operating systems. The problem involves multiple processes (readers and
writers) accessing a shared resource (e.g., a database or file) concurrently. The goal is to ensure that the readers
and writers follow certain rules to prevent conflicts and maintain data consistency.
1. Multiple readers should be allowed to access the shared resource simultaneously because reading doesn't
modify the data and can be done concurrently.
2. Writers, on the other hand, need exclusive access to the resource to prevent conflicts and maintain data
integrity.
A solution to the Readers-Writers problem can be implemented using Monitors, which are high-level
synchronization constructs that provide a way to encapsulate shared data and the operations on that data.
Monitors ensure that only one process can execute within the monitor at any given time.
A typical monitor-based solution exposes four procedures:
- `StartRead`: Called by a reader before reading. If there are writers active, the reader waits. Otherwise, it
increments the number of readers.
- `EndRead`: Called by a reader after reading. It decrements the number of readers and signals the `canWrite`
condition if there are no more readers.
- `StartWrite`: Called by a writer before writing. If there are readers or writers active, the writer waits.
Otherwise, it increments the number of writers.
- `EndWrite`: Called by a writer after writing. It decrements the number of writers and signals both `canRead`
and `canWrite` conditions to allow either more readers or a writer to access the resource.
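The four procedures above can be sketched as a monitor-style Python class, assuming a lock shared by two condition variables to stand in for the monitor's implicit mutual exclusion; class and method names are mine, and fairness refinements (e.g. writer preference) are omitted.

```python
import threading

class ReadersWriters:
    """Monitor-style sketch of StartRead/EndRead/StartWrite/EndWrite."""

    def __init__(self):
        self.lock = threading.Lock()                 # the monitor's lock
        self.can_read = threading.Condition(self.lock)
        self.can_write = threading.Condition(self.lock)
        self.readers = 0
        self.writers = 0

    def start_read(self):
        with self.lock:
            while self.writers > 0:                  # wait out active writers
                self.can_read.wait()
            self.readers += 1

    def end_read(self):
        with self.lock:
            self.readers -= 1
            if self.readers == 0:                    # last reader lets a writer in
                self.can_write.notify()

    def start_write(self):
        with self.lock:
            while self.readers > 0 or self.writers > 0:
                self.can_write.wait()                # writers need exclusive access
            self.writers += 1

    def end_write(self):
        with self.lock:
            self.writers -= 1
            self.can_write.notify()                  # wake one waiting writer,
            self.can_read.notify_all()               # or all waiting readers

rw = ReadersWriters()
rw.start_read(); rw.start_read()    # two readers share the resource
rw.end_read(); rw.end_read()
rw.start_write()                    # a writer now gets exclusive access
rw.end_write()
```

The `while` loops (rather than `if`) re-check the condition after each wakeup, which is the standard guard against spurious wakeups when using condition variables.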
UNIT IV
1Q. Calculate the total number of head movements with the help of the below algorithms. Suppose the disk
request queue contains the following set of block references: **
76, 124, 17, 269, 20, 29, 137
1)FCFS
2)SSTF
3)SCAN
Ans. To calculate the total number of head movements for different disk scheduling algorithms, we need the
initial position of the disk head and the sequence of disk requests. For this example, assume the initial
head position is at track 50, and the disk request queue is: 76, 124, 17, 269, 20, 29, 137.
1) FCFS (First-Come-First-Serve):
-Explanation:
- FCFS is a simple disk scheduling algorithm where the disk arm moves to the next request in the order they
arrive.
- The requests are served in the order they are generated, without considering the distance between requests.
- Head movements:
- 50 to 76 (26 movements)
- 76 to 124 (48 movements)
- 124 to 17 (107 movements)
- 17 to 269 (252 movements)
- 269 to 20 (249 movements)
- 20 to 29 (9 movements)
- 29 to 137 (108 movements)
- Total head movements for FCFS = 26 + 48 + 107 + 252 + 249 + 9 + 108 = 799 movements.
2) SSTF (Shortest Seek Time First):
- Explanation:
- SSTF selects the request with the shortest seek time, i.e., the request closest to the current head position.
- This algorithm reduces total seek time by always choosing the nearest pending request.
- Head movements:
- 50 to 29 (21 movements)
- 29 to 20 (9 movements)
- 20 to 17 (3 movements)
- 17 to 76 (59 movements)
- 76 to 124 (48 movements)
- 124 to 137 (13 movements)
- 137 to 269 (132 movements)
- Total head movements for SSTF = 21 + 9 + 3 + 59 + 48 + 13 + 132 = 285 movements.
3) SCAN:
- Explanation:
- SCAN (the elevator algorithm) moves the head in one direction, servicing requests along the way, then
reverses direction and services the remaining requests.
- Assuming the head starts at 50 moving toward higher-numbered tracks and reverses after the largest
request (269):
- Head movements: 50 to 76 (26), 76 to 124 (48), 124 to 137 (13), 137 to 269 (132), then reverse:
269 to 29 (240), 29 to 20 (9), 20 to 17 (3).
- Total head movements for SCAN = 26 + 48 + 13 + 132 + 240 + 9 + 3 = 471 movements.
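The head-movement totals for these algorithms are easy to verify programmatically. Below is a minimal sketch of FCFS and SSTF as pure functions; the function names are my own, and SCAN is omitted since its result depends on an assumed sweep direction.

```python
def fcfs(start, requests):
    """Total head movement serving requests in arrival order."""
    total, pos = 0, start
    for r in requests:
        total += abs(pos - r)   # seek distance to the next request
        pos = r
    return total

def sstf(start, requests):
    """Total head movement always serving the closest pending request."""
    pending, total, pos = list(requests), 0, start
    while pending:
        nxt = min(pending, key=lambda r: abs(pos - r))  # nearest request
        total += abs(pos - nxt)
        pos = nxt
        pending.remove(nxt)
    return total

queue = [76, 124, 17, 269, 20, 29, 137]
print(fcfs(50, queue))   # 799, matching the FCFS total worked above
print(sstf(50, queue))   # 285
```

The same functions can be reused for any initial head position and request queue, which makes them handy for checking answers to questions of this type.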
Device drivers are essential components that help the operating system communicate with hardware like
printers and graphics cards. They act as middlemen, managing resources, handling device-specific tasks, and
providing a standardized interface for applications.
3. Driver Architecture:
- Layered Architecture: Organize drivers into layers for easier maintenance.
- Device Stack: Arrange drivers in a stack for effective communication.
6. Security Considerations:
- Privileges: Kernel mode drivers have high privileges, requiring strong security.
- Digital Signatures: Ensure authenticity with digital signatures.
4Q. Suppose a disk drive has 400 cylinders, numbered 0 to 399. The drive is currently serving a request at
cylinder 143, and the previous request was at cylinder 125. The queue of pending requests, in FIFO order, is:
86, 147, 312, 91, 177, 48, 309, 222, 175, 130. Starting from the current head position, what is the total
distance (in cylinders) that the disk arm moves to satisfy all the pending requests, for each of the following
disk scheduling algorithms?
1] SSTF 2] SCAN 3] C-SCAN*
Work through it yourself; it is easy. Refer to question 2 above.
Note: include a graph (head-movement diagram) in the answer.