NAME: Babar
SEMESTER: II
CODE:
Q1.
Q2. Write a detailed note on FCFS, SJF and Priority scheduling taking suitable examples. What is
Preemptive and Non-preemptive Scheduling?
Scheduling algorithms are essential in operating systems to manage the execution of processes or
tasks. Three common scheduling algorithms are First-Come, First-Served (FCFS), Shortest Job First
(SJF), and Priority Scheduling. In preemptive scheduling, the operating system may suspend a running
process and hand the CPU to another process, for example when a higher-priority or shorter job
arrives; in non-preemptive scheduling, a process keeps the CPU until it terminates or blocks. Let's
explore each algorithm with suitable examples.
1. First-Come, First-Served (FCFS) Scheduling: FCFS is a simple scheduling algorithm that
executes processes in the order they arrive. It follows a non-preemptive approach, meaning that
once a process starts executing, it continues until it completes or gets blocked. The next process in
the queue starts executing once the current process finishes.
Example:
Consider a scenario where three processes, P1, P2, and P3, arrive in the order P1, P2, P3. Their
burst times (time required for execution) are 6 ms, 3 ms, and 4 ms, respectively. In FCFS, the
processes are executed in the order of their arrival:
P1 (6 ms) -> P2 (3 ms) -> P3 (4 ms)
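The FCFS order above can be sketched with a short, hypothetical Python function (the function name is illustrative) that computes each process's start and finish times, assuming all three processes arrive at t = 0:

```python
# FCFS: dispatch processes strictly in arrival order.
def fcfs(processes):
    """processes: list of (name, burst_time) in arrival order.
    Returns a list of (name, start_time, finish_time)."""
    time = 0
    schedule = []
    for name, burst in processes:
        schedule.append((name, time, time + burst))
        time += burst  # next process starts when this one finishes
    return schedule

# Burst times from the example: P1 = 6 ms, P2 = 3 ms, P3 = 4 ms
for name, start, finish in fcfs([("P1", 6), ("P2", 3), ("P3", 4)]):
    print(f"{name}: runs from {start} ms to {finish} ms")
```

Here P1 waits 0 ms, P2 waits 6 ms, and P3 waits 9 ms, giving an average waiting time of (0 + 6 + 9) / 3 = 5 ms.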
2. Shortest Job First (SJF) Scheduling: SJF scheduling selects the process with the shortest burst
time first. It can be either preemptive or non-preemptive. In the preemptive version (also known as
Shortest Remaining Time First), if a new process arrives with a burst time shorter than the remaining
time of the currently executing process, the current process is preempted and the new process is
scheduled.
Example:
Consider the same scenario as above but with SJF scheduling. In non-preemptive SJF, the
processes are executed based on their burst times:
P2 (3 ms) -> P3 (4 ms) -> P1 (6 ms)
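Non-preemptive SJF with simultaneous arrivals reduces to sorting by burst time before dispatching. A minimal sketch (function name hypothetical), reproducing the order above:

```python
# Non-preemptive SJF: with all processes arriving at t = 0,
# dispatching shortest-burst-first is just a sort by burst time.
def sjf(processes):
    """processes: list of (name, burst_time).
    Returns a list of (name, start_time, finish_time)."""
    time = 0
    schedule = []
    for name, burst in sorted(processes, key=lambda p: p[1]):
        schedule.append((name, time, time + burst))
        time += burst
    return schedule

print(sjf([("P1", 6), ("P2", 3), ("P3", 4)]))
```

Waiting times here are 0 ms (P2), 3 ms (P3), and 7 ms (P1), for an average of about 3.33 ms, versus 5 ms under FCFS for the same workload.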
3. Priority Scheduling: Priority scheduling assigns a priority value to each process and executes them
in order of priority, with the highest priority being executed first. It can also be preemptive or non-
preemptive. In the preemptive version, a process with a higher priority can interrupt the execution
of a lower-priority process.
Example:
Consider a scenario with three processes, P1, P2, and P3, with priority values of 3, 1, and 2,
respectively, where a lower value indicates a higher priority. In non-preemptive priority scheduling,
the processes are executed in order of priority:
P2 (priority 1) -> P3 (priority 2) -> P1 (priority 3)
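The priority example can be sketched similarly. The burst times below are assumed for illustration (the original example gives only priorities), and a lower priority value means higher priority, matching the example:

```python
# Non-preemptive priority scheduling: lower value = higher priority.
# Priorities from the example (P1=3, P2=1, P3=2); burst times are assumed.
def priority_schedule(processes):
    """processes: list of (name, burst_time, priority).
    Returns a list of (name, start_time, finish_time)."""
    time = 0
    schedule = []
    for name, burst, prio in sorted(processes, key=lambda p: p[2]):
        schedule.append((name, time, time + burst))
        time += burst
    return schedule

order = priority_schedule([("P1", 6, 3), ("P2", 3, 1), ("P3", 4, 2)])
print(" -> ".join(name for name, _, _ in order))  # P2 -> P3 -> P1
```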
Q3. Discuss Banker’s algorithm and how to find out if a system is in a safe state or not?
The Banker's algorithm is a resource allocation and deadlock avoidance algorithm used in operating
systems. It is designed to ensure that a system avoids deadlock by allocating resources to processes in
a safe and controlled manner. The algorithm considers the current state of the system and the future
resource requests of processes to determine whether a particular allocation will lead to a safe state or
risk deadlock.
The Banker's algorithm works with the concepts of available resources and resource claim (maximum
demand) matrices: a request is granted only if the resulting state remains safe. Here's an overview of
how the algorithm operates:
1. Available Resources: The system keeps track of the number of available instances of each resource
type. This information is stored in an array called "Available," where each element represents the
number of available instances of a specific resource.
2. Resource Claim Matrix: A two-dimensional matrix called the "Resource Claim Matrix" is used to
record the maximum resource requirements of each process. Each element in the matrix represents
the maximum number of instances of a resource that a process may need.
3. Allocation Matrix: Another two-dimensional matrix called the "Allocation Matrix" is used to
record the currently allocated resources for each process. Each element in this matrix represents
the number of instances of a resource allocated to a process.
4. Need Matrix: The "Need Matrix" is derived by subtracting the Allocation Matrix from the
Resource Claim Matrix. It represents the remaining resource need for each process to complete its
execution.
5. Checking for a Safe State: To determine if a system is in a safe state, the Banker's algorithm
simulates the resource allocation process: it repeatedly searches for a process whose remaining
need (its row in the Need Matrix) can be satisfied by the currently available resources. When such
a process is found, the algorithm assumes it runs to completion and returns its allocated resources,
adding them back to the available pool, and the search repeats. If every process can finish this
way, the system is in a safe state and the order found is a safe sequence; otherwise the state is
unsafe and the requested allocation should be denied.
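The safety check in step 5 can be sketched as a short Python function. The matrices below are hypothetical example values (a textbook-style configuration with 5 processes and 3 resource types), not taken from the text above:

```python
# Banker's algorithm safety check.
def is_safe(available, allocation, claim):
    """available: vector of free instances per resource type.
    allocation: per-process currently allocated resources.
    claim: per-process maximum demand (Resource Claim Matrix).
    Returns (is_safe, safe_sequence_of_process_indices)."""
    n, m = len(allocation), len(available)
    # Need = Claim - Allocation (step 4 above)
    need = [[claim[i][j] - allocation[i][j] for j in range(m)] for i in range(n)]
    work = list(available)
    finished = [False] * n
    sequence = []
    progress = True
    while progress:
        progress = False
        for i in range(n):
            # Find a process whose remaining need fits in 'work'
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]  # it finishes, releasing resources
                finished[i] = True
                sequence.append(i)
                progress = True
    return all(finished), sequence

# Hypothetical state: 5 processes, 3 resource types
available = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
claim = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
safe, seq = is_safe(available, allocation, claim)
print(safe, seq)  # True [1, 3, 4, 0, 2]
```

The returned sequence is a safe sequence: executing the processes in that order never leaves a process waiting for resources that cannot eventually be freed.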
Set-II
Q4.
Q6. Explain the different Multiprocessor Interconnections and types of Multiprocessor Operating
Systems.
In hardware, multiprocessors are commonly interconnected through a shared bus (simple and
inexpensive, but the bus becomes a bandwidth bottleneck as processors are added), a crossbar
switch (a direct connection between every processor and every memory module, at higher hardware
cost), or a multistage interconnection network (a compromise between the two). On the software
side, the main types of multiprocessor operating systems are:
1. Symmetric Multiprocessor (SMP) Operating System: SMP operating systems treat all
processors as equal and provide a single system image. They distribute the workload
across processors, allowing processes or threads to execute on any available processor.
SMP operating systems typically offer symmetric access to shared resources and provide
mechanisms for synchronization and load balancing.
2. Asymmetric Multiprocessor (AMP) Operating System: AMP operating systems
designate one processor as the master or controlling processor, responsible for managing
system resources and scheduling tasks. The master processor handles critical system
functions, while other processors perform specific tasks or execute application-level code.
AMP operating systems are often used in embedded systems or real-time applications.
3. Non-Uniform Memory Access (NUMA) Operating System: NUMA operating systems
manage multiprocessor systems where processors have varying access times to memory.
Memory is divided into multiple banks or nodes, and each processor has its local memory.
NUMA operating systems optimize memory access by scheduling tasks closer to the
memory bank where the required data resides, reducing memory latency and improving
performance.
4. Clustered Operating System: Clustered operating systems manage multiprocessor
systems composed of multiple interconnected clusters. Each cluster has its own
processors, memory, and I/O subsystem. Clustered operating systems provide mechanisms
for load balancing, fault tolerance, and resource management across clusters. They are
commonly used in high-performance computing and server environments.