INTERNAL ASSIGNMENT of Operating System
NAME DHANANJAY
ROLL NO 2214504860
SEMESTER 2
COURSE CODE & NAME DCA1201 – OPERATING SYSTEM
Ans: Operating systems can be classified into several architectural models based on how they manage hardware resources, handle process scheduling, and provide services to applications. Here are some of the commonly recognized operating system architectures:
Monolithic Architecture: The monolithic architecture is the traditional model in which the operating system runs as a single, large program in kernel mode. It directly manages hardware resources and provides services to applications. All components, such as device drivers, file systems, and process management, are tightly integrated into a single binary. While this allows efficient communication between components, a fault in one module can potentially crash the entire system.
Layered Architecture: In the layered architecture, the operating system is divided into layers, with each layer providing specific functionality and services to the layer above it. Layers are organized hierarchically, and communication between layers takes place through well-defined interfaces. This design allows for modularity and ease of maintenance. However, overall performance can suffer because of the overhead of passing through multiple layers.
Microkernel Architecture: The microkernel architecture aims to keep the kernel as small as possible, providing only essential services such as process scheduling, memory management, and interprocess communication. Other services, such as file systems and device drivers, are implemented as separate user-space processes known as servers. This approach improves system reliability, security, and flexibility, but introduces performance overhead due to communication between user-level servers and the microkernel.
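As an illustrative sketch only (not part of the assignment), the microkernel idea of moving services into user-space servers reached by message passing can be mimicked in Python with queues; the server function and message format here are invented for illustration:

```python
import queue
import threading

# Hypothetical sketch: a "file server" running outside the kernel,
# reached only through message passing (as in a microkernel design).
requests = queue.Queue()
replies = queue.Queue()

def file_server():
    """User-space server: handles one request message, then stops."""
    op, name = requests.get()               # receive a request message
    if op == "read":
        replies.put(f"contents of {name}")  # reply by message, not direct call

server = threading.Thread(target=file_server)
server.start()

# A "client" asks the server for a file via IPC instead of a kernel call.
requests.put(("read", "notes.txt"))
result = replies.get()
server.join()
print(result)  # -> contents of notes.txt
```

The extra queue hops are exactly the communication overhead the text mentions; the payoff is that a crash inside `file_server` would not take down the "kernel" side.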
● Write a detailed note on FCFS, SJF and Priority scheduling taking suitable
examples. What is Preemptive and Non-preemptive Scheduling?
Ans: FCFS (First-Come, First-Served), SJF (Shortest Job First), and Priority are three classic CPU scheduling algorithms, and each can be used in preemptive or non-preemptive form.
For example, consider three processes P1, P2, and P3 arriving at time 0, where P1 has a burst time of 6 and P2 a burst time of 4. Using FCFS, the execution order would be P1 -> P2 -> P3. The waiting times would be 0, 6, and 10, respectively, and the average waiting time would be (0 + 6 + 10) / 3 = 5.33.
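The FCFS arithmetic above can be checked with a short sketch. The burst times 6 and 4 for P1 and P2 are implied by the waiting times in the example; P3's burst time (3 here) is an assumed placeholder, since it does not affect any waiting time:

```python
# FCFS: processes run in arrival order; each process waits for the
# total burst time of everything queued ahead of it.
# Bursts 6 and 4 (P1, P2) are implied by the example's waiting times;
# P3's burst (3) is an assumed placeholder -- it changes no waiting time.
bursts = [6, 4, 3]

waiting = []
elapsed = 0
for b in bursts:
    waiting.append(elapsed)  # time spent waiting before dispatch
    elapsed += b

average = sum(waiting) / len(waiting)
print(waiting, round(average, 2))  # -> [0, 6, 10] 5.33
```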
In summary, scheduling algorithms like FCFS, SJF, and Priority provide different strategies for deciding the order of process execution. Preemptive scheduling allows a running process to be interrupted, while non-preemptive scheduling lets each process run to completion before the next one is dispatched. The choice of scheduling algorithm depends on system requirements, priorities, and the desired performance metrics.
● Discuss Banker’s algorithm and how to find out if a system is in a safe state or
not?
Ans: The operating system uses the Banker's Algorithm to allocate resources and prevent deadlocks. It was developed by Edsger Dijkstra and named for its resemblance to the way a banker manages loans. The main purpose of the Banker's Algorithm is to prevent deadlock by determining whether the system can safely grant resource requests from processes. It is based on the principle of avoiding situations in which a process could wait indefinitely for a resource and deadlock.
The Banker's Algorithm assumes that each process declares, before it begins execution, the maximum number of instances of each resource type that it may need while running. The algorithm maintains several data structures to keep track of the current state of the system:
1. Available Resources: This represents the number of available instances of each
resource type in the system.
2. Maximum Claim Matrix: This matrix specifies the maximum number of each type
of resource that each process requires.
3. Allocation Matrix: This matrix represents the number of each type of resource
currently allocated to each process.
4. Need Matrix: This matrix shows the remaining resources of each type that each
process requires to complete execution.
To determine whether the system is in a safe state, the Banker's Algorithm runs a safety algorithm that checks whether there exists an order in which all processes can complete without deadlock:
1. Work and Finish Arrays: Create a "Work" array that initially contains the number of available instances of each resource type, and a "Finish" array representing the completion status of each process (initially set to false for all processes).
2. Safety Algorithm Loop: Repeat the following steps until all processes are marked as finished or no process meeting the criteria can be found.
3. A. Find a candidate process: Scan the processes for one that meets both of the following criteria:
● Its remaining resource demand (a row of the Need matrix) can be satisfied by the available resources (the Work array).
● It has not finished yet (Finish[process] = false).
B. Process found: If a process satisfying the conditions in step 3A is found, simulate it running to completion and releasing its resources: add its allocated resources to the Work array, and mark it as finished by setting Finish[process] = true.
4. Safety Check: When the safety algorithm completes, check whether all processes are marked as finished. If so, the system is in a safe state.
Otherwise, the system is in an unsafe state. The Banker's Algorithm prevents the system from entering an unsafe state by carefully checking, before a resource allocation is granted, whether a safe state can still be reached. It provides a way to allocate resources to processes in a controlled manner, avoiding potential deadlocks and using resources efficiently.
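The safety algorithm described above can be sketched as follows. The function name `is_safe` is our own, and the example matrices are the classic textbook instance (five processes, three resource types), not data from this assignment:

```python
def is_safe(available, max_claim, allocation):
    """Banker's safety check: return (is the state safe?, completion order)."""
    n = len(allocation)                       # number of processes
    need = [[m - a for m, a in zip(max_claim[i], allocation[i])]
            for i in range(n)]                # Need = Max - Allocation
    work = list(available)                    # Work: resources currently free
    finish = [False] * n
    order = []
    progress = True
    while progress:
        progress = False
        for i in range(n):
            # Step 3A: unfinished process whose remaining need fits in Work
            if not finish[i] and all(nd <= w for nd, w in zip(need[i], work)):
                # Step 3B: simulate completion, release its allocation
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                order.append(i)
                progress = True
    return all(finish), order                 # Step 4: safe iff all finished

# Classic example instance.
available = [3, 3, 2]
max_claim = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
safe, order = is_safe(available, max_claim, allocation)
print(safe, order)  # -> True [1, 3, 4, 0, 2]
```

The returned order is one safe sequence; to vet an incoming request, the algorithm would tentatively apply it and re-run this check before granting it.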
Assignment Set – 2
Questions
Ans: When creating a process, the operating system performs several operations. To identify the process, each process is assigned a process identification number (PID). Since the operating system supports multiprogramming, it needs to keep track of all processes; for this it uses the process control block (PCB) to record the execution state of each process. Each PCB contains information about the process state, program counter, stack pointer, open-file status, scheduling information, and more. All of this information must be preserved when the process transitions from one state to another, so on every state transition the operating system updates the information in the process's PCB. The process table is an array of PCBs: logically, it contains one PCB for every current process in the system.
1. Pointer - This is the stack pointer that must be saved to maintain the current position
of the process as it transitions from one state to another.
2. Process State – Saves the current status of the process.
3. Process number - Each process is assigned a unique ID, called a process ID or PID,
and the process ID is saved.
4. Program Counter - Stores a counter containing the address of the next instruction to
be executed for the process.
5. Registers - These are the CPU registers, including the accumulator, index and base registers, and general-purpose registers.
6. Memory Limit - This field contains information about the memory management
system used by the operating system. This may include page tables, segment tables,
etc.
7. Open Files List - This field contains the list of files opened by the process.
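The PCB fields listed above can be sketched as a plain data record. This `PCB` class and its field names are illustrative only, not an actual kernel structure:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Illustrative process control block mirroring the fields above."""
    pid: int                        # 3. unique process number
    state: str = "new"              # 2. process state (new/ready/running/...)
    program_counter: int = 0        # 4. address of the next instruction
    stack_pointer: int = 0          # 1. saved stack pointer
    registers: dict = field(default_factory=dict)      # 5. saved CPU registers
    memory_limits: dict = field(default_factory=dict)  # 6. page/segment tables
    open_files: list = field(default_factory=list)     # 7. open files list

# A tiny "process table": one PCB per current process in the system.
process_table = {p.pid: p for p in (PCB(pid=1), PCB(pid=2, state="ready"))}
print(process_table[2].state)  # -> ready
```

On a context switch the kernel would save the running process's counters and registers into its PCB entry and restore them from the PCB of the process being dispatched.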
1. Shared data: This is data that is accessed and modified by multiple threads or processes. The shared data resides within the monitor and is protected from concurrent access.
2. Procedures: Procedures, also known as methods or functions, define the operations that can be performed on the shared data. These procedures are implemented within the monitor and are called by external threads or processes.
3. Condition Variables: Condition variables are used to coordinate the execution of threads or processes within a monitor. They allow threads to wait until certain conditions are met before continuing. Condition variables provide operations such as wait() (release the monitor and wait for a signal), signal() (wake one waiting thread), and broadcast() (wake all waiting threads).
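Python's `threading.Condition` behaves much like a monitor's condition variable; `notify()` is Python's name for signal(). This one-slot buffer is an invented illustration, not code from the assignment:

```python
import threading

# Hypothetical one-slot buffer guarded monitor-style by a Condition:
# wait() releases the lock and sleeps; notify() wakes one waiter.
cond = threading.Condition()
slot = []

def produce(item):
    with cond:                # enter the "monitor"
        slot.append(item)
        cond.notify()         # signal(): wake a waiting consumer

def consume():
    with cond:
        while not slot:       # condition not yet met
            cond.wait()       # wait(): release the monitor and sleep
        return slot.pop()

results = []
t = threading.Thread(target=lambda: results.append(consume()))
t.start()
produce("data")
t.join()
print(results)  # -> ['data']
```

Note the `while` loop around `wait()`: a woken thread must re-check the condition, since another thread may have consumed the item first.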
For the critical section problem, there are three conditions or criteria that must be met:
1. Mutual Exclusion: Only one process or thread at a time can execute within the critical section. This ensures that shared resources are accessed and modified in a serialized and controlled manner.
2. Progress: If no process is currently running in the critical section and some processes are waiting to enter, one of the waiting processes must be granted access within a bounded amount of time. This criterion ensures that no process hangs indefinitely waiting to enter the critical section.
3. Bounded Waiting: There must be a limit on the number of times other processes can enter the critical section while a given process waits to enter it. This ensures that no single process is starved or postponed indefinitely in favor of others.
A common technique to solve the critical section problem and achieve mutual exclusion is to use semaphores. A semaphore is a synchronization primitive that can be used to control access to shared resources and enforce mutual exclusion.
A semaphore is a non-negative integer variable that is accessed atomically and supports two basic operations:
1. Wait (also called P or Acquire): If the semaphore value is greater than 0, the process decrements the semaphore and enters the critical section. A value of 0 indicates that the critical section is currently occupied, and the process blocks or sleeps until the semaphore value becomes greater than 0.
2. Signal (also called V or Release): When the process finishes executing within the critical section, it releases the semaphore by incrementing its value. If other processes are waiting, one of them wakes up and gains access to the critical section.
Semaphores satisfy the mutual exclusion requirement of the critical section problem by allowing processes to coordinate access to shared resources and ensuring that only one process at a time enters the critical section.
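The wait/signal behavior can be demonstrated with Python's `threading.Semaphore`, where `acquire()` is wait/P and `release()` is signal/V; the shared counter and iteration counts here are arbitrary illustration values:

```python
import threading

# A binary semaphore (initial value 1) enforcing mutual exclusion on `counter`.
sem = threading.Semaphore(1)
counter = 0

def worker():
    global counter
    for _ in range(10_000):
        sem.acquire()        # wait/P: blocks while another thread holds it
        counter += 1         # critical section: one thread at a time
        sem.release()        # signal/V: let one waiting thread proceed

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # -> 40000
```

Without the acquire/release pair, the four threads would race on `counter += 1` (a read-modify-write) and updates could be lost.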
Semaphores can also be used to solve synchronization problems beyond mutual exclusion, for example synchronizing producer and consumer processes, avoiding deadlock situations, and enforcing bounded waiting.
Overall, IPC and the critical section problem are important aspects of concurrent programming, and semaphores are a valuable tool for achieving mutual exclusion, coordinating access to shared resources, ensuring proper synchronization, and preventing race conditions.
Ans: Multiprocessor interconnections refer to the various ways in which multiple processors or central processing units (CPUs) are connected in a multiprocessor system to facilitate communication and collaboration between them. These connections play a crucial part in determining overall system performance, scalability, and efficiency. Here are some commonly used multiprocessor interconnections:
1. Shared Bus: In a shared-bus architecture, all processors in the system are connected to a common bus. The bus acts as a communication medium, allowing processors to exchange data and instructions. However, the shared bus can become a bottleneck and limit scalability as multiple processors contend for access.
2. Crossbar Switch: A crossbar switch provides a dedicated point-to-point connection between each pair of processors in the system. This eliminates access conflicts and allows concurrent communication between processors. Crossbar switches offer high bandwidth, but can be expensive and complex to implement.
3. Switched Interconnect: A switched interconnect uses a network of switches to connect the processors. Each processor has a dedicated connection to a switch, which forwards data packets to their appropriate destinations. Switched interconnects offer good scalability and flexibility, but can add latency and complexity.
4. Mesh and Torus: A mesh or torus interconnect arranges processors in a grid-like structure, with each processor connected to its neighbors. This arrangement provides regular and predictable communication patterns and allows efficient data exchange. Mesh and torus interconnects are scalable and fault tolerant, but performance can be impacted by congestion in heavily loaded systems.
5. Hypercube: A hypercube interconnect is a more complex network topology in which processors are connected in a hypercube structure. In a hypercube, each processor is connected to log2 N other processors, where N is the total number of processors. Hypercubes offer high fault tolerance, scalability, and efficient communication, but can be expensive to implement.
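The log2 N connectivity of a hypercube is easy to see in code: each node's neighbors differ from its address in exactly one bit. The helper below is an illustrative sketch:

```python
def hypercube_neighbors(node, dimensions):
    """Neighbors of `node` in a d-dimensional hypercube: flip one address bit."""
    return [node ^ (1 << bit) for bit in range(dimensions)]

# In a 3-dimensional hypercube (N = 8 processors), every node has
# log2(8) = 3 neighbors.
print(hypercube_neighbors(0, 3))  # -> [1, 2, 4]
print(hypercube_neighbors(5, 3))  # -> [4, 7, 1]
```

Routing between any two nodes takes at most `dimensions` hops, one per differing address bit, which is why hypercube diameter grows only logarithmically with N.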
Multiprocessor operating systems are specifically designed to manage and coordinate the activities of multiple processors in a multiprocessor system. These operating systems provide mechanisms for scheduling tasks and processes across multiple processors, allocating resources, synchronizing access to shared resources, and efficiently using the available computing power. Common designs include master-slave (asymmetric) systems, in which one processor runs the operating system and assigns work to the others, and symmetric multiprocessing (SMP) systems, in which every processor runs the same operating system and shares memory.
These are just a few examples of multiprocessor interconnection types and multiprocessor operating systems. The choice of interconnect and operating system depends on factors such as performance requirements, scalability, fault tolerance, cost, and the specific use case or application of the multiprocessor system.