
Internal Assignment

NAME: Babar

ROLL NUMBER: 0000

PROGRAM: MASTER OF COMPUTER APPLICATION (MCA)

SEMESTER: II

COURSE NAME: OPERATING SYSTEM

CODE:

SESSION: MAY 2023


Set-I

Q1.

a) Discuss the different architectures of an Operating System.


 Operating systems can be categorized into different architectures based on their design principles and
structures. Here are some common architectures of operating systems:
1. Monolithic Architecture:
The monolithic architecture is one of the earliest and simplest operating system architectures. In
this design, the entire operating system is implemented as a single, large program running in
kernel mode. It consists of a tightly integrated collection of functions and services, including
process management, memory management, file system, and device drivers. The monolithic
architecture provides efficient inter-process communication but lacks modularity, making it
difficult to modify or extend the system without affecting its stability.
2. Layered Architecture:
The layered architecture organizes the operating system into a hierarchy of layers, where each
layer provides a specific set of services to the layer above it. Each layer only interacts with
adjacent layers, and higher layers depend on lower layers. This design improves modularity and
simplifies system maintenance. Layers typically include hardware abstraction, device drivers, file
system, networking, and user interface.
3. Microkernel Architecture:
The microkernel architecture aims to minimize the kernel's size and complexity by moving most
services out of the kernel and into user space. The microkernel provides only essential
functionalities such as process management, memory management, and inter-process
communication. Other services, such as file systems and device drivers, run as separate user-level
processes. This design enhances flexibility, extensibility, and robustness.
4. Modular Architecture:
The modular architecture extends the microkernel design by allowing additional kernel modules to
be dynamically loaded and unloaded at runtime. These modules provide specific functionalities or
device support. This approach combines the advantages of a microkernel (flexibility, extensibility)
with optimized performance since critical services are part of the kernel. The modular architecture
is widely used in modern operating systems.

5. Virtual Machine Architecture:
The virtual machine architecture employs a layer of abstraction called a virtual machine monitor
(VMM) or hypervisor. It allows multiple operating systems, known as guest operating systems, to
run simultaneously on a single physical machine. The VMM provides a virtualized environment,
isolating each guest operating system from the underlying hardware.

6. Client-Server Architecture:
In a client-server architecture, the operating system is divided into client and server components.
The server component provides services such as file sharing, printing, and database management,
while the client component interacts with users and applications. The client sends requests to the
server, which processes them and returns the results. This architecture promotes distributed
computing, scalability, and modular design, enabling systems to handle multiple requests
concurrently.
b) What is VMware? Write a short note on it.
 VMware is a software company that specializes in virtualization and cloud computing technologies. It
provides a range of products and solutions that enable organizations to virtualize their infrastructure,
creating virtual machines (VMs) that can run multiple operating systems and applications on a single
physical server or across a cluster of servers.
VMware's flagship product is VMware vSphere, a comprehensive virtualization platform that allows
businesses to create and manage virtualized environments. vSphere includes a hypervisor, which is a
thin layer of software that enables the creation and management of VMs. The hypervisor abstracts the
underlying hardware, allowing multiple VMs to run independently on the same physical server.
Benefits of VMware virtualization technology include:
1. Server Consolidation
2. Resource Optimization
3. High Availability
4. Disaster Recovery
5. Desktop Virtualization

Q2. Write a detailed note on FCFS, SJF and Priority scheduling taking suitable examples. What is
Preemptive and Non-preemptive Scheduling?
 Scheduling algorithms are essential in operating systems to manage the execution of processes or
tasks. Three common scheduling algorithms are First-Come, First-Served (FCFS), Shortest Job First
(SJF), and Priority Scheduling. Let's explore each algorithm and discuss preemptive and non-
preemptive scheduling.
1. First-Come, First-Served (FCFS) Scheduling: FCFS is a simple scheduling algorithm that
executes processes in the order they arrive. It follows a non-preemptive approach, meaning that
once a process starts executing, it continues until it completes or gets blocked. The next process in
the queue starts executing once the current process finishes.
Example:
Consider a scenario where three processes, P1, P2, and P3, arrive in the order P1, P2, P3. Their
burst times (time required for execution) are 6 ms, 3 ms, and 4 ms, respectively. In FCFS, the
processes are executed in the order of their arrival:
P1 (6 ms) -> P2 (3 ms) -> P3 (4 ms)
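With this order, P1 waits 0 ms, P2 waits 6 ms, and P3 waits 9 ms, giving an average waiting time of
(0 + 6 + 9) / 3 = 5 ms.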
2. Shortest Job First (SJF) Scheduling: SJF scheduling selects the process with the shortest burst
time first. It can be either preemptive or non-preemptive. In the preemptive version, if a new
process with a shorter burst time arrives while a process is executing, the current process can be
preempted and the new process gets scheduled.
Example:
Consider the same scenario as above but with SJF scheduling. In non-preemptive SJF, the
processes are executed based on their burst times:
P2 (3 ms) -> P3 (4 ms) -> P1 (6 ms)
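Here P2 waits 0 ms, P3 waits 3 ms, and P1 waits 7 ms, so the average waiting time falls to
(0 + 3 + 7) / 3 ≈ 3.33 ms, compared with 5 ms under FCFS.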
3. Priority Scheduling: Priority scheduling assigns a priority value to each process and executes them
in order of priority, with the highest priority being executed first. It can also be preemptive or non-
preemptive. In the preemptive version, a process with a higher priority can interrupt the execution
of a lower-priority process.
Example:
Consider a scenario with three processes, P1, P2, and P3, with priority values of 3, 1, and 2,
respectively. In non-preemptive priority scheduling, the processes are executed based on their
priority:
P2 (priority 1) -> P3 (priority 2) -> P1 (priority 3)
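A small Python sketch can reproduce these three non-preemptive schedules and their average waiting
times (the process data comes from the examples above; the average_waiting_time helper and its output
format are illustrative, not a standard library function):

```python
def average_waiting_time(processes, key):
    """Order the processes by the given criterion and compute the average waiting time.

    processes: list of (name, burst_ms, priority) tuples, all arriving at time 0.
    key: function selecting the scheduling criterion (arrival, burst, or priority).
    """
    order = sorted(processes, key=key)       # non-preemptive: fix the order up front
    waiting, elapsed = {}, 0
    for name, burst, _priority in order:
        waiting[name] = elapsed              # time spent waiting before first dispatch
        elapsed += burst                     # CPU is held until the burst completes
    return [name for name, _, _ in order], sum(waiting.values()) / len(waiting)

procs = [("P1", 6, 3), ("P2", 3, 1), ("P3", 4, 2)]   # (name, burst in ms, priority)

print(average_waiting_time(procs, key=lambda p: 0))     # FCFS: (['P1','P2','P3'], 5.0)
print(average_waiting_time(procs, key=lambda p: p[1]))  # SJF: (['P2','P3','P1'], ~3.33)
print(average_waiting_time(procs, key=lambda p: p[2]))  # Priority: (['P2','P3','P1'], ~3.33)
```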

1. Preemptive Scheduling: Preemptive scheduling allows a running process to be interrupted or
preempted by a higher-priority process. When a higher-priority process becomes ready to execute, the
currently running process is temporarily halted, and the CPU is allocated to the higher-priority
process. The preempted process is placed back in the ready queue to wait for its next turn.
Examples of preemptive scheduling algorithms include Preemptive Priority Scheduling, Round Robin
Scheduling with time quantum, and Multilevel Queue Scheduling with priority levels.
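Round Robin, listed above, is the simplest preemptive policy to sketch: each process runs for at most
one time quantum before being preempted and requeued. A minimal Python illustration follows (the 2 ms
quantum and the round_robin helper are assumptions made for this example):

```python
from collections import deque

def round_robin(processes, quantum=2):
    """Simulate Round Robin for (name, burst_ms) processes that all arrive at time 0."""
    queue = deque(processes)
    timeline = []                             # sequence of (process, ms executed) slices
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        timeline.append((name, run))
        remaining -= run
        if remaining > 0:                     # quantum expired: preempt and requeue
            queue.append((name, remaining))
    return timeline

print(round_robin([("P1", 6), ("P2", 3), ("P3", 4)]))
# [('P1', 2), ('P2', 2), ('P3', 2), ('P1', 2), ('P2', 1), ('P3', 2), ('P1', 2)]
```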

2. Non-Preemptive Scheduling: Non-preemptive scheduling, also known as cooperative scheduling,
does not allow a running process to be interrupted by other processes. Once a process starts executing,
it continues until it voluntarily yields the CPU, blocks for I/O, or completes its execution. Only when
the currently running process finishes or enters a waiting state does the operating system select the
next process from the ready queue for execution.
Examples of non-preemptive scheduling algorithms include First-Come, First-Served (FCFS)
Scheduling and the non-preemptive variants of Shortest Job First (SJF) and Priority Scheduling.

Q3. Discuss Banker’s algorithm and how to find out if a system is in a safe state or not?
 The Banker's algorithm is a resource allocation and deadlock avoidance algorithm used in operating
systems. It is designed to ensure that a system avoids deadlock by allocating resources to processes in
a safe and controlled manner. The algorithm considers the current state of the system and the future
resource requests of processes to determine whether a particular allocation will lead to a safe state or
risk deadlock.
The Banker's algorithm works on a snapshot of the system's resource state: an array of available
resources together with per-process matrices for maximum claims, current allocations, and remaining
needs. Here's an overview of how the algorithm operates:
1. Available Resources: The system keeps track of the number of available instances of each resource
type. This information is stored in an array called "Available," where each element represents the
number of available instances of a specific resource.
2. Resource Claim Matrix: A two-dimensional matrix called the "Resource Claim Matrix" is used to
record the maximum resource requirements of each process. Each element in the matrix represents
the maximum number of instances of a resource that a process may need.
3. Allocation Matrix: Another two-dimensional matrix called the "Allocation Matrix" is used to
record the currently allocated resources for each process. Each element in this matrix represents
the number of instances of a resource allocated to a process.
4. Need Matrix: The "Need Matrix" is derived by subtracting the Allocation Matrix from the
Resource Claim Matrix. It represents the remaining resource need for each process to complete its
execution.
5. Checking for a Safe State: To determine if a system is in a safe state, the Banker's algorithm
simulates the resource allocation process by checking if there is a sequence of processes that can
complete their execution without causing a deadlock. It does this by considering the available
resources, the current allocation, and the future resource requests of processes.

The algorithm follows these steps to check for a safe state:
1. Initialize a "Work" array with the values of the "Available" array.
2. Create a "Finish" array to keep track of finished processes. Initially, all elements are set to false.
3. Iterate through the processes and find an unfinished process whose resource needs (specified by the
"Need" matrix) can be fulfilled by the available resources (specified by the "Work" array).
4. If such a process is found, mark it as finished, update the "Work" array by adding the allocated
resources of the finished process, and repeat the previous step.
5. If all processes are marked as finished, the system is in a safe state. Otherwise, it is in an unsafe
state.
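A minimal Python sketch of this safety check is given below (the is_safe function name and the matrix
values are illustrative, not part of any standard API; the data follows the familiar five-process,
three-resource textbook layout):

```python
def is_safe(available, allocation, need):
    """Return (True, safe_sequence) if a safe sequence exists, else (False, [])."""
    n = len(allocation)                  # number of processes
    m = len(available)                   # number of resource types
    work = available[:]                  # Work := Available
    finish = [False] * n                 # Finish[i] := false for all processes
    order = []
    while len(order) < n:
        progressed = False
        for i in range(n):
            # Step 3: find an unfinished process whose remaining need fits in Work.
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                # Step 4: assume it runs to completion and releases its allocation.
                work = [work[j] + allocation[i][j] for j in range(m)]
                finish[i] = True
                order.append(f"P{i}")
                progressed = True
        if not progressed:               # no process can proceed -> unsafe state
            return False, []
    return True, order

available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe(available, allocation, need))
# (True, ['P1', 'P3', 'P4', 'P0', 'P2'])
```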

Set-II

Q4.

a) What is PCB? What information is stored in it?


 PCB stands for Process Control Block, which is a data structure used by operating systems to manage
and track information about each running process or task in a computer system. Also known as a Task
Control Block (TCB), it serves as a repository of essential process-related information that the
operating system uses for process scheduling, resource management, and process synchronization.
The PCB is typically stored in the operating system's kernel and contains various fields or attributes
that store important information about a process. The specific information stored in a PCB may vary
slightly depending on the operating system, but generally, it includes the following:
1. Process ID (PID): A unique identifier assigned to each process by the operating system. The PID
helps in distinguishing and identifying processes.
2. Process State: Indicates the current state of the process, such as running, ready, waiting, or
terminated. It helps the scheduler to understand the process's execution status.
3. Program Counter (PC): A pointer to the address of the next instruction to be executed in the
process's code. It allows the operating system to keep track of the process's execution progress.
4. CPU Registers: Includes the values of the processor's registers that store the process's working
data, such as the accumulator, index registers, stack pointer, and others. These registers are saved
in the PCB when the process is preempted or scheduled out.
5. Process Priority: The priority level assigned to the process, which determines its relative
importance for scheduling purposes. It helps the scheduler in making decisions about process
execution order.
6. Process Scheduling Information: Includes data related to the process's scheduling, such as its
scheduling queue, quantum or time slice, waiting time, and other scheduling-related attributes.
7. Memory Management Information: Contains details about the memory allocated to the process,
including its base address, limit, and pointers to memory segments or pages. This information aids
in memory management and address space protection.
8. I/O Information: Keeps track of the I/O devices and files associated with the process. It includes
information about open files, I/O requests, and device status.
9. Parent-Child Relationship: Stores the identifier of the parent process and, if applicable, the list of
child processes created by the parent.
10. Accounting Information: Tracks statistical data related to the process, such as CPU usage,
execution time, and memory utilization. This information can be useful for performance
monitoring and resource allocation.
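As an illustration only, the fields above can be pictured as a single record. The sketch below uses a
Python dataclass for readability; the field names are assumptions that mirror the list above, whereas a
real kernel defines the PCB as a C structure (for example, Linux's task_struct):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class PCB:
    pid: int                                   # 1. unique process identifier
    state: str = "ready"                       # 2. running / ready / waiting / terminated
    program_counter: int = 0                   # 3. address of the next instruction
    registers: Dict[str, int] = field(default_factory=dict)   # 4. saved CPU registers
    priority: int = 0                          # 5. scheduling priority
    time_slice_ms: int = 10                    # 6. scheduling information (quantum, etc.)
    memory_base: int = 0                       # 7. memory-management information
    memory_limit: int = 0
    open_files: List[str] = field(default_factory=list)       # 8. I/O information
    parent_pid: int = 0                        # 9. parent-child relationship
    cpu_time_used_ms: int = 0                  # 10. accounting information

# Example: the scheduler records a context switch in the process's PCB.
pcb = PCB(pid=42, state="running", program_counter=0x4006F0, priority=5)
pcb.state = "ready"    # process preempted: its execution status is saved back into the PCB
```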
b) What are monitors? Explain.
 In the context of operating systems, monitors refer to synchronization constructs that allow multiple
threads or processes to coordinate their activities and access shared resources in a mutually exclusive
manner. Monitors provide a higher-level abstraction than low-level synchronization primitives like
locks and semaphores.
Monitors were introduced by C.A.R. Hoare in the 1970s as a synchronization mechanism. They
encapsulate both the data structures and the synchronization mechanisms required to access them.
Monitors ensure that only one thread or process can be active within the monitor at any given time,
preventing simultaneous access to shared resources and potential data corruption.
Monitors typically provide two fundamental mechanisms: condition variables and entry procedures.
1. Condition Variables: These are used to block and awaken threads or processes waiting for a
particular condition to be satisfied. Threads can wait on a condition variable within a monitor and
will be automatically suspended until another thread signals or broadcasts that the condition is
met. This helps avoid busy waiting and improves efficiency.
2. Entry Procedures: These are methods or functions within the monitor that define the critical
sections where shared resources are accessed. Only one thread can be executing an entry
procedure within a monitor at any given time. If a thread attempts to enter a monitor while another
thread is already executing an entry procedure, it is queued and waits until the monitor becomes
available.
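To make this concrete, here is a minimal sketch of a monitor-style bounded buffer using Python's
threading primitives (the class and method names are illustrative; languages such as Java expose
monitors directly through synchronized methods):

```python
import threading

class BoundedBufferMonitor:
    """Monitor whose entry procedures (put/get) are mutually exclusive."""
    def __init__(self, capacity):
        self._buffer = []
        self._capacity = capacity
        self._lock = threading.Lock()                       # monitor lock (mutual exclusion)
        self._not_full = threading.Condition(self._lock)    # condition variable
        self._not_empty = threading.Condition(self._lock)   # condition variable

    def put(self, item):                                    # entry procedure
        with self._lock:
            while len(self._buffer) == self._capacity:
                self._not_full.wait()                       # block until space is available
            self._buffer.append(item)
            self._not_empty.notify()                        # wake one waiting consumer

    def get(self):                                          # entry procedure
        with self._lock:
            while not self._buffer:
                self._not_empty.wait()                      # block until an item arrives
            item = self._buffer.pop(0)
            self._not_full.notify()                         # wake one waiting producer
            return item
```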
Q5. Discuss IPC and critical-section problem along with use of semaphores.
 IPC (Interprocess Communication) is a mechanism that enables communication and data exchange
between multiple processes or threads in an operating system. It allows processes to share data,
synchronize their activities, and coordinate their operations. One of the fundamental challenges in IPC
is managing access to shared resources, which is addressed by the critical-section problem.
Semaphores are synchronization primitives commonly used to solve the critical-section problem and
coordinate access to shared resources. Let's explore these concepts in more detail:
1. Critical-Section Problem:
The critical-section problem arises when multiple processes or threads need to access a shared
resource or a critical section of code concurrently. The goal is to ensure that only one process or
thread can execute the critical section at a time to maintain data consistency and prevent race
conditions.
To solve the critical-section problem, the following conditions must be satisfied:
 Mutual Exclusion: Only one process can access the critical section at a time.
 Progress: If no process is executing in the critical section, one of the waiting
processes should be granted access.
 Bounded Waiting: There should be a limit on the number of times a process can be
denied access to the critical section.
2. Semaphores:
Semaphores are synchronization variables used to control access to shared resources and solve the
critical-section problem. They were introduced by E. W. Dijkstra and come in two types: counting
semaphores and binary semaphores.
 Counting Semaphore: A counting semaphore can take any non-negative integer value. It
maintains a count that represents the number of available resources. The two fundamental
operations on a counting semaphore are "wait" (P) and "signal" (V). The "wait" operation
decrements the semaphore count; if no resource is available (the count is zero), the process is
blocked until one is released. The "signal" operation increments the semaphore count and
unblocks a waiting process, if any.
 Binary Semaphore: A binary semaphore, also known as a mutex (mutual exclusion), can
have only two values: 0 and 1. It is primarily used to implement mutual exclusion. The
"wait" operation on a binary semaphore sets the value to 0 if it was previously 1,
effectively acquiring the lock. The "signal" operation sets the value to 1, releasing the lock.
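The following sketch shows how these semaphores coordinate a classic producer/consumer exchange
around a critical section, using Python's threading.Semaphore (the buffer size, item count, and thread
structure are illustrative assumptions):

```python
import threading
from collections import deque

BUFFER_SIZE = 5
buffer = deque()

mutex = threading.Semaphore(1)             # binary semaphore: protects the critical section
empty = threading.Semaphore(BUFFER_SIZE)   # counting semaphore: free buffer slots
full  = threading.Semaphore(0)             # counting semaphore: filled buffer slots

def producer():
    for item in range(10):
        empty.acquire()        # wait(empty): block if the buffer is full
        mutex.acquire()        # wait(mutex): enter the critical section
        buffer.append(item)
        mutex.release()        # signal(mutex): leave the critical section
        full.release()         # signal(full): one more item is available

def consumer():
    for _ in range(10):
        full.acquire()         # wait(full): block if the buffer is empty
        mutex.acquire()
        item = buffer.popleft()
        mutex.release()
        empty.release()
        print("consumed", item)

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```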

Q6. Explain the different Multiprocessor Interconnections and types of Multiprocessor Operating
Systems.

 Different kinds of Multiprocessor Interconnections are as follows:


1. Bus-Based Interconnection: In a bus-based interconnection, processors are connected to a shared
bus, which serves as a communication medium. Processors share the bus to transfer data,
instructions, and signals. Bus-based interconnections are relatively simple and cost-effective but
can become a bottleneck as the number of processors and the data traffic increase.
2. Crossbar Interconnection: A crossbar interconnection uses a matrix-like structure to connect
processors to each other. It provides a dedicated connection between each pair of processors,
allowing simultaneous communication between any two processors. Crossbar interconnections
offer high bandwidth and low latency but become more complex and expensive as the number of
processors grows.
3. Switched Interconnection: Switched interconnections employ switches or routers to connect
processors in a network-like fashion. Each processor is connected to a switch, and the switches
route the communication packets between processors. Switched interconnections provide
scalability and flexibility but require more complex hardware and protocols.
4. Hierarchical Interconnection: Hierarchical interconnections divide processors into multiple levels
or clusters, with each level having its own interconnection scheme. Processors within a cluster
communicate using a local interconnect, while clusters communicate through a higher-level
interconnect. Hierarchical interconnections strike a balance between performance and complexity
by reducing the overall communication overhead.

Different kinds of Multiprocessor Operating Systems are:

1. Symmetric Multiprocessor (SMP) Operating System: SMP operating systems treat all
processors as equal and provide a single system image. They distribute the workload
across processors, allowing processes or threads to execute on any available processor.
SMP operating systems typically offer symmetric access to shared resources and provide
mechanisms for synchronization and load balancing.
2. Asymmetric Multiprocessor (AMP) Operating System: AMP operating systems
designate one processor as the master or controlling processor, responsible for managing
system resources and scheduling tasks. The master processor handles critical system
functions, while other processors perform specific tasks or execute application-level code.
AMP operating systems are often used in embedded systems or real-time applications.
3. Non-Uniform Memory Access (NUMA) Operating System: NUMA operating systems
manage multiprocessor systems where processors have varying access times to memory.
Memory is divided into multiple banks or nodes, and each processor has its local memory.
NUMA operating systems optimize memory access by scheduling tasks closer to the
memory bank where the required data resides, reducing memory latency and improving
performance.
4. Clustered Operating System: Clustered operating systems manage multiprocessor
systems composed of multiple interconnected clusters. Each cluster consists of its own
processors, memory, and I/O subsystem. Clustered operating systems provide mechanisms
for load balancing, fault tolerance, and resource management across clusters. They are
commonly used in high-performance computing and server environments.
