
Unit 3 Part-B (Long Answer Questions)


1Q. Define deadlock and explain the necessary conditions for deadlock to occur.**

Ans. A deadlock in computer science refers to a situation where two or more processes are unable to proceed
because each is waiting for another to release a resource. In such a state no progress can be made, and the
system is effectively "stuck."

To have a deadlock, four necessary conditions must be satisfied. These are:

1. Mutual Exclusion: At least one resource must be held in a non-sharable mode, meaning that only one process
can use the resource at a time. This condition implies that once a process acquires a resource, it cannot be
shared or used by other processes until the owning process releases it.

2. Hold and Wait: A process must be holding at least one resource and waiting to acquire additional resources
that are currently held by other processes. This condition can lead to a situation where processes are waiting
indefinitely for resources to be released by other processes.

3. No Preemption: Resources cannot be forcibly taken away from a process; they can only be released
voluntarily by the process holding them. This condition ensures that a process cannot be interrupted and have
its resources reassigned to another process, which could potentially prevent a deadlock.

4. Circular Wait: There must be a circular chain of two or more processes, each waiting for a resource held by
the next process in the chain. This circular wait implies that there is a closed loop of processes, each waiting
for a resource held by the next process in the loop.

If all four conditions hold simultaneously, a deadlock can occur. Deadlock prevention and recovery
strategies, such as resource allocation policies, deadlock detection algorithms, and methods for recovering
from deadlock states, are employed to manage and avoid these situations.
2Q. Explain in detail various methods of deadlock prevention.*

Ans. Deadlock prevention involves designing systems and algorithms in a way that eliminates or avoids the
conditions that lead to deadlock. Several methods are employed to prevent deadlocks in computer systems.
Here are some commonly used deadlock prevention techniques:

1. Mutual Exclusion:
Use of Shared Resources: Allow resources to be shared among processes rather than having exclusive access.
In cases where resources can be safely shared, this can help prevent the mutual exclusion condition.

2. Hold and Wait:


Require Processes to Request All Resources at Once: One way to address the hold and wait condition is to
require each process to request all the resources it needs at the beginning of its execution. This ensures
that a process starts only after it has acquired all necessary resources.
Two-Phase Locking Protocol: In database systems, the two-phase locking protocol ensures that a transaction
acquires all the required locks before it starts execution and releases all locks when it completes. This helps
prevent a process from holding some locks while waiting for others.

3. No Preemption:
Process Preemption: Allow for the preemption of resources from one process to another. This means that a
resource can be forcefully taken from a process and given to another. However, preemption is often complex
and not always feasible, especially in real-time systems.

4. Circular Wait:
Use of a Resource Hierarchy: Assign a unique numerical value (or priority) to each resource type in the system.
Processes are then required to request resources in an ascending order of priority. This establishes a hierarchy
and prevents circular waits.

- Banker's Algorithm: Strictly speaking a deadlock avoidance technique rather than prevention, the Banker's
Algorithm ensures that resource requests do not lead to unsafe states. It dynamically checks whether granting
a request will leave the system in a safe state, and grants the request only if it does.
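The resource-hierarchy idea above can be demonstrated with ordinary locks. The sketch below is my own illustration (worker names and the `acquire_all` helper are hypothetical, not from the notes): two locks are given fixed ranks, and both threads acquire them in ascending rank even though they name the locks in opposite orders, so no circular wait can form.

```python
import threading

# Two hypothetical resources; their position in ORDER is their rank.
lock_a, lock_b = threading.Lock(), threading.Lock()
ORDER = [lock_a, lock_b]  # ascending rank

def acquire_all(*locks):
    """Acquire locks in ascending rank, preventing circular wait."""
    for lk in sorted(locks, key=ORDER.index):
        lk.acquire()

def release_all(*locks):
    for lk in locks:
        lk.release()

results = []

def worker(name, first, second):
    # The two workers name the locks in opposite orders, but acquire_all
    # always takes them in rank order, so they cannot deadlock.
    acquire_all(first, second)
    results.append(name)
    release_all(first, second)

t1 = threading.Thread(target=worker, args=("t1", lock_a, lock_b))
t2 = threading.Thread(target=worker, args=("t2", lock_b, lock_a))
t1.start(); t2.start(); t1.join(); t2.join()
```

Without the enforced ordering, the same two threads could each grab their first lock and wait forever for the other's.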

It's important to note that some of these methods may impose additional constraints on system behavior or
may not be suitable for all types of systems. The choice of deadlock prevention method often depends on the
specific requirements and characteristics of the application or system in question. Additionally, these methods
may have trade-offs in terms of system performance, complexity, and resource utilization.
3Q. Explain the Banker's Algorithm with an example.
Ans. The Banker's Algorithm is a deadlock avoidance algorithm used to determine whether granting a resource
request will lead to a safe state. It was developed by Edsger Dijkstra. The algorithm operates by maintaining
information about the maximum demand of each process, the currently allocated resources, and the available
resources in the system. The system grants a resource request only if it determines that the resulting state will
be safe.

The Banker's Algorithm is based on the following data structures:

Available: A vector giving the number of available instances of each resource type.
Max: A matrix giving the maximum demand of each process for each resource type.
Allocation: A matrix giving the number of instances of each resource type currently allocated to each process.
Need: A matrix giving the remaining resource needs of each process for each resource type (Need = Max -
Allocation).

The algorithm checks whether the system can satisfy the resource request without leading to an unsafe state.
If the request is safe, the resources are allocated; otherwise, the process must wait.

Here is a simplified version of the Banker's Algorithm:

1. Initialize Available, Max, Allocation, and Need matrices.


2. Whenever a process requests resources:
a. If the request does not exceed the process's Need and the Available vector, go to step 3.
b. Otherwise, the process must wait.
3. Tentatively allocate the requested resources to the process.
4. Check whether the resulting state is safe by running the safety algorithm. If the state is safe, grant the
request; otherwise, roll back the tentative allocation and make the process wait.
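The steps above can be sketched in Python. The matrices used in the demo are a standard textbook example (five processes, three resource types), chosen here only for illustration; the function names are my own.

```python
def is_safe(available, max_need, allocation):
    """Safety algorithm: True if some completion order lets every process finish."""
    n, m = len(max_need), len(available)
    need = [[max_need[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work, finish = list(available), [False] * n
    while True:
        progressed = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):                  # process i finishes and
                    work[j] += allocation[i][j]     # releases its resources
                finish[i] = True
                progressed = True
        if not progressed:
            return all(finish)

def request(pid, req, available, max_need, allocation):
    """Steps 2-4 above: tentatively allocate, keep the allocation only if safe."""
    m = len(available)
    need = [max_need[pid][j] - allocation[pid][j] for j in range(m)]
    if any(req[j] > need[j] for j in range(m)):
        raise ValueError("request exceeds the process's declared maximum")
    if any(req[j] > available[j] for j in range(m)):
        return False                                # step 2b: must wait
    for j in range(m):                              # step 3: tentative allocation
        available[j] -= req[j]
        allocation[pid][j] += req[j]
    if is_safe(available, max_need, allocation):
        return True                                 # step 4: safe, grant
    for j in range(m):                              # unsafe: roll back and deny
        available[j] += req[j]
        allocation[pid][j] -= req[j]
    return False

available  = [3, 3, 2]
max_need   = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]

print(is_safe(available, max_need, allocation))                # True: safe state
print(request(1, [1, 0, 2], available, max_need, allocation))  # True: granted
```

For this state a safe sequence such as P1, P3, P4, P0, P2 exists, so the initial state is safe and P1's request for (1, 0, 2) can be granted.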

4Q. How is the access matrix implemented?**


Ans. The access matrix is a security model used in operating systems to define and control access rights to resources
by different entities, such as processes, users, or groups. It represents a table of subjects (e.g., processes or
users) and objects (e.g., files or devices) with their associated access rights. The matrix structure helps enforce
security policies by specifying who can perform what actions on which resources.

There are different ways to implement the access matrix in operating systems, and one common approach is
to use a two-dimensional matrix. Let's explore how the access matrix is implemented:

1. Subjects and Objects:


- Subjects: These are entities that request access to resources. They can be processes, users, or any active
entities in the system.
- Objects: These are resources that subjects want to access. Objects can include files, devices, memory
segments, etc.

2. Operations:
- Define the operations or actions that can be performed on objects. These can include read, write, execute,
delete, etc.

3. Matrix Structure:
- Create a two-dimensional matrix where rows represent subjects, columns represent objects, and each entry
in the matrix represents the access rights of a subject on an object.

Example:

            Object1   Object2   Object3
Subject1    RWX       R--       ---
Subject2    ---       -W-       R-X
Subject3    R--       ---       RW-

In this example:
- Subject1 has Read (R), Write (W), and Execute (X) permissions on Object1, only Read (R) permission on
Object2, and no access to Object3.
- Subject2 has Write (W) permission on Object2 and Read (R) and Execute (X) permissions on Object3, but no
access to Object1.
- Subject3 has Read (R) permission on Object1, Read (R) and Write (W) permissions on Object3, and no access
to Object2.

4. Access Control Mechanisms:


- Implement access control mechanisms in the operating system to check access rights before allowing or
denying operations on resources.
- Access control mechanisms may involve checking the corresponding entry in the access matrix to ensure
that a subject has the required permissions on the requested object.
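This lookup can be sketched directly from the example matrix. The code below maps the matrix onto a dictionary of dictionaries; the `check_access` helper is my own illustration, not a real OS API.

```python
# The access matrix from the example: rows are subjects, columns are objects,
# each entry is the string of granted operations.
access_matrix = {
    "Subject1": {"Object1": "rwx", "Object2": "r",  "Object3": ""},
    "Subject2": {"Object1": "",    "Object2": "w",  "Object3": "rx"},
    "Subject3": {"Object1": "r",   "Object2": "",   "Object3": "rw"},
}

def check_access(subject, obj, op):
    """Return True if the matrix grants operation op ('r', 'w' or 'x')
    on obj to subject; unknown subjects/objects get no access."""
    return op in access_matrix.get(subject, {}).get(obj, "")

print(check_access("Subject1", "Object1", "x"))  # True
print(check_access("Subject2", "Object1", "r"))  # False: no access
```

Reading the ACL for an object corresponds to one column of this structure; a subject's capability list corresponds to one row.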

5. Dynamic Updates:
- The access matrix can be updated dynamically based on changes in the system, such as user permissions
being modified, new processes being created, or objects being created or deleted.

6. Access Control Lists (ACLs) and Capabilities:


- Access matrices can be implemented using access control lists (ACLs) or capabilities, which are more
compact representations of access rights associated with objects.
- Access Control Lists (ACLs): Each object has a list of subjects and their associated access rights.
- Capabilities: Each subject has a list of objects and their associated access rights.

The access matrix is a flexible and powerful model for access control, but it can become impractical in large
systems due to its size and the need for dynamic updates. Therefore, various access control mechanisms, such
as ACLs and capabilities, are often used to manage access rights more efficiently.

5Q. Explain the Readers-Writers problem and its solution using Monitors.

Ans. The Readers-Writers problem is a classical synchronization problem in computer science, particularly in the
field of concurrent programming and operating systems. It involves multiple processes (readers and
writers) accessing a shared resource (e.g., a database or file) concurrently. The goal is to ensure that readers
and writers follow certain rules to prevent conflicts and maintain data consistency.

The main issues to address in the Readers-Writers problem are:

1. Multiple readers should be allowed to access the shared resource simultaneously because reading doesn't
modify the data and can be done concurrently.
2. Writers, on the other hand, need exclusive access to the resource to prevent conflicts and maintain data
integrity.

A solution to the Readers-Writers problem can be implemented using Monitors, which are high-level
synchronization constructs that provide a way to encapsulate shared data and the operations on that data.
Monitors ensure that only one process can execute within the monitor at any given time.

Here's a simplified solution using Monitors:
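The monitor code itself is missing from these notes; a sketch of what it likely looked like is given below in Python, with a Lock standing in for the monitor's implicit mutual exclusion and two Condition variables modeling `canRead` and `canWrite`. Method names mirror the operations described in the explanation.

```python
import threading

class ReadersWritersMonitor:
    """Monitor-style Readers-Writers: at most one thread executes a
    method body at a time (the shared lock), readers may overlap,
    writers get exclusive access."""

    def __init__(self):
        self._lock = threading.Lock()
        self._can_read = threading.Condition(self._lock)   # canRead
        self._can_write = threading.Condition(self._lock)  # canWrite
        self.readers = 0
        self.writers = 0

    def start_read(self):
        with self._lock:
            while self.writers > 0:        # wait while a writer is active
                self._can_read.wait()
            self.readers += 1

    def end_read(self):
        with self._lock:
            self.readers -= 1
            if self.readers == 0:          # last reader lets a writer in
                self._can_write.notify()

    def start_write(self):
        with self._lock:
            while self.readers > 0 or self.writers > 0:
                self._can_write.wait()     # wait for exclusive access
            self.writers += 1

    def end_write(self):
        with self._lock:
            self.writers -= 1
            self._can_read.notify_all()    # wake all waiting readers
            self._can_write.notify()       # or one waiting writer
```

This version favors readers: a stream of readers can starve a writer, which is the classic "first" Readers-Writers formulation.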

Explanation:

- `StartRead`: Called by a reader before reading. If there are writers active, the reader waits. Otherwise, it
increments the number of readers.

- `EndRead`: Called by a reader after reading. It decrements the number of readers and signals the `canWrite`
condition if there are no more readers.

- `StartWrite`: Called by a writer before writing. If there are readers or writers active, the writer waits.
Otherwise, it increments the number of writers.

- `EndWrite`: Called by a writer after writing. It decrements the number of writers and signals both `canRead`
and `canWrite` conditions to allow either more readers or a writer to access the resource.
UNIT IV
1Q. Calculate the total number of head movements with the help of the below algorithms, given that the disk
request queue contains the following set of block references: **
76, 124, 17, 269, 20, 29, 137
1) FCFS
2) SSTF
3) SCAN

Ans. To calculate the total number of head movements for different disk scheduling algorithms, we need to know
the initial position of the disk head and the sequence of disk requests. For this example, let's assume the initial
head position is at track 50, and the disk request queue is: 76, 124, 17, 269, 20, 29, 137.

1) FCFS (First-Come-First-Serve):

- Initial head position: 50


- Head movements:
- 50 to 76 (26 movements)
- 76 to 124 (48 movements)
- 124 to 17 (107 movements)
- 17 to 269 (252 movements)
- 269 to 20 (249 movements)
- 20 to 29 (9 movements)
- 29 to 137 (108 movements)

Total head movements for FCFS = 26 + 48 + 107 + 252 + 249 + 9 + 108 = 799 movements.

2) SSTF (Shortest Seek Time First):

- Initial head position: 50


- Head movements (choose the shortest seek time at each step):
- 50 to 29 (21 movements)
- 29 to 20 (9 movements)
- 20 to 17 (3 movements)
- 17 to 76 (59 movements)
- 76 to 124 (48 movements)
- 124 to 137 (13 movements)
- 137 to 269 (132 movements)

Total head movements for SSTF = 21 + 9 + 3 + 59 + 48 + 13 + 132 = 285 movements.

3) SCAN:

- Initial head position: 50


- Head movements (move in one direction until the end, then reverse):
- 50 to 29 (21 movements, scanning inwards)
- 29 to 20 (9 movements)
- 20 to 17 (3 movements)
- 17 to 0 (17 movements, reaching the innermost track)
- 0 to 76 (76 movements, scanning outwards)
- 76 to 124 (48 movements)
- 124 to 137 (13 movements)
- 137 to 269 (132 movements)

Total head movements for SCAN = 21 + 9 + 3 + 17 + 76 + 48 + 13 + 132 = 319 movements.
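The three totals can be checked with a short script. The function names are my own; the SCAN variant matches the worked answer above, sweeping inward to track 0 first and then outward.

```python
def fcfs(head, requests):
    """Serve requests in arrival order, summing seek distances."""
    total = 0
    for r in requests:
        total += abs(head - r)
        head = r
    return total

def sstf(head, requests):
    """Always serve the pending request nearest the current head."""
    pending, total = list(requests), 0
    while pending:
        nearest = min(pending, key=lambda r: abs(head - r))
        total += abs(head - nearest)
        head = nearest
        pending.remove(nearest)
    return total

def scan(head, requests, low=0):
    """SCAN as worked above: sweep inward to track `low` (covering all
    requests below the head), then reverse and sweep outward."""
    above = [r for r in requests if r > head]
    total = head - low            # 50 -> 0, serving 29, 20, 17 on the way
    if above:
        total += max(above) - low  # 0 -> 269, serving 76, 124, 137 on the way
    return total

queue = [76, 124, 17, 269, 20, 29, 137]
print(fcfs(50, queue))   # 799
print(sstf(50, queue))   # 285
print(scan(50, queue))   # 319
```

The printed totals agree with the hand calculations above.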

Note: include Graph


2Q. Explain any two disk scheduling algorithms and calculate the average seek time for each.**

Ans. Two commonly used disk scheduling algorithms are FCFS (First-Come-First-Serve) and SSTF (Shortest Seek
Time First). Let's explain each algorithm and then calculate the average seek time for both.

1) FCFS (First-Come-First-Serve):

- Explanation:
- FCFS is a simple disk scheduling algorithm where the disk arm moves to the next request in the order they
arrive.
- The requests are served in the order they are generated, without considering the distance between requests.

Calculation of Average Seek Time:


- Assume the initial head position is at track 50.
- Disk requests: 76, 124, 17, 269, 20, 29, 137.

- Head movements:
- 50 to 76 (26 movements)
- 76 to 124 (48 movements)
- 124 to 17 (107 movements)
- 17 to 269 (252 movements)
- 269 to 20 (249 movements)
- 20 to 29 (9 movements)
- 29 to 137 (108 movements)

- Total head movements for FCFS = 26 + 48 + 107 + 252 + 249 + 9 + 108 = 799 movements.

- Average Seek Time = Total Seek Time / Number of Requests


= 799 / 7
= 114.14 (approx)

2) SSTF (Shortest Seek Time First):

Explanation:
- SSTF selects the request with the shortest seek time, i.e., the request closest to the current head position.
- This algorithm minimizes the total seek time by always choosing the nearest available request.

Calculation of Average Seek Time:


- Assume the initial head position is at track 50.
- Disk requests: 76, 124, 17, 269, 20, 29, 137.

- Head movements (choose the shortest seek time at each step):


- 50 to 29 (21 movements)
- 29 to 20 (9 movements)
- 20 to 17 (3 movements)
- 17 to 76 (59 movements)
- 76 to 124 (48 movements)
- 124 to 137 (13 movements)
- 137 to 269 (132 movements)

- Total head movements for SSTF = 21 + 9 + 3 + 59 + 48 + 13 + 132 = 285 movements.

- Average Seek Time = Total Seek Time / Number of Requests


= 285 / 7
= 40.71 (approx)

Note: include Graph

3Q. Discuss in detail about device drivers.*

Ans. Device drivers are essential components that help the operating system communicate with hardware like
printers and graphics cards. They act as middlemen, managing resources, handling device-specific tasks, and
providing a standardized interface for applications.

1. Purpose of Device Drivers:


- Abstraction: Simplify hardware complexity for applications and the OS.
- Interfacing: Provide a consistent way for the OS to talk to hardware.
- Efficiency: Optimize hardware performance with smart strategies.

2. Types of Device Drivers:


- Kernel Mode Drivers: Directly access hardware, critical for low-level operations.
- User Mode Drivers: Offer security and stability but limited access.
- Plug and Play Drivers: Enable automatic device detection.
- File System Drivers: Manage file systems and storage devices.

3. Driver Architecture:
- Layered Architecture: Organize drivers into layers for easier maintenance.
- Device Stack: Arrange drivers in a stack for effective communication.

4. Device Driver Development:


- Programming Language: Typically use low-level languages like C or assembly.
- Interrupt Handling: Manage hardware interrupts for quick responses.
- Memory Management: Carefully handle memory to prevent issues.
- Error Handling: Implement robust error handling for stability.
5. Testing and Debugging:
- Testing Environments: Thoroughly test drivers for compatibility.
- Debugging Tools: Use specialized tools for identifying and fixing issues.

6. Security Considerations:
- Privileges: Kernel mode drivers have high privileges, requiring strong security.
- Digital Signatures: Ensure authenticity with digital signatures.

7. Dynamic Loading and Unloading:


- Dynamic Loading: Load drivers when needed and unload when not.
- Hot Plugging: Support connecting/disconnecting devices while the system runs.

8. Updates and Maintenance:


- Regular Updates: Update drivers regularly for new hardware and OS versions.
- Patch Management: Manage patches to address security and performance issues.

4Q. Suppose a disk drive has 400 cylinders, numbered 0 to 399. The drive is currently serving a request at
cylinder 143 and the previous request was at cylinder 125. The queue of pending requests in FIFO order is:
86, 147, 312, 91, 177, 48, 309, 222, 175, 130. Starting from the current head position, what is the total distance
in cylinders that the disk arm moves to satisfy all the pending requests for each of the following disk
scheduling algorithms?
1] SSTF 2] SCAN 3] C-SCAN*
Work it out yourself; it is easy. Refer to the 2nd question.
Note: include Graph
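For checking your working, a sketch with my own function names. Assumptions: since the head moved from 125 to 143, SCAN sweeps toward higher cylinders first and goes to the last cylinder (399) before reversing; this C-SCAN counts the wrap-around jump from 399 to 0, which some textbooks omit.

```python
def sstf(head, requests):
    """Always serve the pending request nearest the current head."""
    pending, total = list(requests), 0
    while pending:
        nxt = min(pending, key=lambda r: abs(head - r))
        total += abs(head - nxt)
        head = nxt
        pending.remove(nxt)
    return total

def scan_up(head, requests, high=399):
    """SCAN moving toward higher cylinders first, sweeping to the last
    cylinder and then reversing down to the lowest pending request."""
    below = [r for r in requests if r < head]
    total = high - head             # 143 -> 399
    if below:
        total += high - min(below)  # 399 -> 48
    return total

def cscan(head, requests, high=399, low=0):
    """C-SCAN: sweep up to the last cylinder, jump to cylinder 0, and
    sweep up again to the highest remaining request. The jump
    (high -> low) is counted here; some texts ignore it."""
    below = [r for r in requests if r < head]
    total = high - head             # 143 -> 399
    if below:
        total += high - low         # wrap-around jump 399 -> 0
        total += max(below) - low   # 0 -> 130
    return total

queue = [86, 147, 312, 91, 177, 48, 309, 222, 175, 130]
print(sstf(143, queue))     # 367
print(scan_up(143, queue))  # 607
print(cscan(143, queue))    # 785
```

Under these assumptions the totals are 367 (SSTF), 607 (SCAN) and 785 (C-SCAN, counting the wrap); if your course's SCAN/C-SCAN variant stops at the last request instead of the disk edge, the SCAN and C-SCAN totals will differ.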
