
INTERNAL ASSIGNMENT

NAME DHANANJAY
ROLL NO 2214504860
SEMESTER 2
COURSE CODE & NAME DCA1201 – OPERATING SYSTEM

SESSION SEPTEMBER 2022


Assignment Set – 1

● Discuss different architectures of Operating Systems.

Ans: Operating systems can be classified into several architectural models based on how they manage hardware resources, handle process scheduling, and provide services to applications. Here are some of the commonly recognized operating system architectures:

Monolithic Architecture: The monolithic architecture is the traditional model in which the operating system runs as a single, large program in kernel mode. It directly manages hardware resources and provides services to applications. All components, such as device drivers, file systems, and process management, are tightly integrated into a single binary. While this allows efficient communication between components, a fault in one module can potentially crash the entire system.

Layered Architecture: In the layered architecture, the operating system is divided into layers, with each layer providing specific functionality and services to the layer above it. Layers are organized hierarchically, and communication between layers takes place through well-defined interfaces. This design allows for modularity and ease of maintenance; however, overall performance can suffer from the overhead of passing through multiple layers.

Microkernel Architecture: The microkernel architecture aims to keep the kernel as small as possible, providing only essential services such as process scheduling, memory management, and interprocess communication. Other services, such as file systems and device drivers, are implemented as separate user-space processes known as servers. This approach improves system reliability, security, and flexibility, but introduces performance overhead due to communication between the user-level servers and the microkernel.

Modular Architecture: The modular architecture combines aspects of both the monolithic and microkernel architectures. It allows the dynamic loading and unloading of kernel modules, enabling specific functionality to be added or removed without rebooting the system. This approach offers flexibility, scalability, and the ability to extend the operating system's capabilities while maintaining good performance.

Virtual Machine Architecture: The virtual machine architecture uses a virtualization layer that allows multiple operating systems, known as guest operating systems, to run simultaneously on a single physical machine. Each guest operating system runs in its own virtual machine, isolated from the others. This approach provides strong isolation, security, and the flexibility to run different operating systems and applications on the same hardware.

Client-Server Architecture: In a client-server architecture, the operating system provides services to client applications running on separate machines. The server offers resources and services, such as file sharing, printing, and database access, while the client requests and uses those services. This model is popular in distributed computing and can be found in network operating systems and cloud-based environments.

These are some of the main architectural models of operating systems, each with its own benefits and trade-offs. Many modern operating systems combine elements from several architectures to address different requirements, such as performance, scalability, security, and virtualization support.

● What is VMware? Write a short note on it.

Ans: VMware is a leading software company that specializes in virtualization and cloud computing technologies. Founded in 1998, VMware has played a significant role in transforming the IT industry by enabling organizations to optimize their IT infrastructure, improve efficiency, and reduce costs.

At its core, VMware's virtualization technology allows multiple virtual machines (VMs) to run on a single physical server. Each VM operates independently and can run different operating systems and applications, effectively maximizing the utilization of server resources. This approach provides several advantages, such as increased flexibility, scalability, and improved resource management.

VMware's flagship product is VMware vSphere, a comprehensive virtualization platform that provides a robust infrastructure for running and managing virtual machines. It offers features such as high availability, live migration, and resource pooling, which ensure that applications and services remain available even during hardware failures or maintenance activities.

In addition to server virtualization, VMware offers solutions for desktop virtualization, network virtualization, storage virtualization, and cloud management. These offerings allow organizations to create virtual desktop environments, abstract network functionality, optimize storage infrastructure, and effectively manage multi-cloud environments. Furthermore, VMware has expanded its portfolio to include cloud-native technologies and solutions; for example, VMware Tanzu provides a suite of tools and services for building, deploying, and managing containerized applications in Kubernetes environments.

Overall, VMware has been instrumental in revolutionizing the way IT infrastructure is deployed and managed. Its virtualization technologies have helped organizations achieve greater efficiency, agility, and cost savings while paving the way for the adoption of cloud computing and modern software development practices.

● Write a detailed note on FCFS, SJF and Priority scheduling taking suitable
examples. What is Preemptive and Non-preemptive Scheduling?

Ans: The following discusses the FCFS (First-Come, First-Served), SJF (Shortest Job First), and Priority scheduling algorithms, along with preemptive and non-preemptive scheduling.

FCFS (First-Come, First-Served) Scheduling:


FCFS is a non-preemptive scheduling algorithm that executes processes in the order of their arrival. The process that arrives first is executed first, and subsequent processes run in the order in which they arrive. The algorithm is simple and easy to understand, but it can lead to poor performance if long-running processes arrive before short ones. Here's an example:
Consider three processes P1, P2, and P3 with arrival times and burst times as follows:
Process Arrival Time Burst Time
P1 0 6
P2 2 4
P3 4 2

Using FCFS, the execution order is P1 -> P2 -> P3: P1 runs from time 0 to 6, P2 from 6 to 10, and P3 from 10 to 12. The waiting times (start time minus arrival time) are therefore 0, 4, and 6, and the average waiting time is (0 + 4 + 6) / 3 = 3.33.
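To check this arithmetic, here is a minimal Python sketch (not part of the original question) that simulates FCFS for the table above and prints each process's waiting time and the average; the tuple layout and process names are illustrative assumptions.

```python
# FCFS simulation: run processes to completion in order of arrival.
processes = [("P1", 0, 6), ("P2", 2, 4), ("P3", 4, 2)]  # (name, arrival, burst)

time = 0
waits = {}
for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
    start = max(time, arrival)        # the CPU may sit idle until the process arrives
    waits[name] = start - arrival     # waiting time = start time - arrival time
    time = start + burst              # non-preemptive: the process runs to completion

print(waits)                              # {'P1': 0, 'P2': 4, 'P3': 6}
print(sum(waits.values()) / len(waits))   # 3.33...
```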

SJF (Shortest Job First) Scheduling:


SJF can be implemented as either a non-preemptive or a preemptive scheduling algorithm; it selects the process with the shortest burst time for execution. It minimizes the average waiting time and gives better turnaround times for shorter processes. In the non-preemptive version, once a process starts executing, it runs to completion. In the preemptive version (Shortest Remaining Time First), the scheduler can interrupt a running process if a shorter process arrives. Consider the same set of processes as above:
Using SJF (non-preemptive), the execution order is P1 -> P3 -> P2: only P1 has arrived at time 0, and when it finishes at time 6, P3 has the shorter burst. The waiting times are 0, 6, and 2 for P1, P2, and P3 respectively, giving an average waiting time of (0 + 6 + 2) / 3 = 2.67 (see the simulation sketch after the priority example below).
Priority Scheduling: Priority scheduling assigns a priority value to each process based on factors such as importance, deadlines, or resource requirements. The process with the highest priority is executed first (in the example below, a lower number means a higher priority). Priority scheduling can be either preemptive or non-preemptive. In preemptive priority scheduling, a running process can be interrupted if a higher-priority process arrives. In non-preemptive priority scheduling, a running process continues until completion, regardless of new arrivals. Here's an example:
Consider the same set of processes with their priorities:
Process Arrival Time Burst Time Priority
P1 0 6 2
P2 2 4 1
P3 4 2 3
Using priority scheduling (non-preemptive), only P1 has arrived at time 0, so it runs to completion; at time 6, P2 (priority 1) is chosen before P3, giving the execution order P1 -> P2 -> P3. The waiting times are 0, 4, and 6, resulting in an average waiting time of (0 + 4 + 6) / 3 = 3.33.
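The SJF and priority examples can be verified with the same kind of simulation: at every scheduling decision, pick among the processes that have already arrived, using burst time as the key for SJF and the priority number for priority scheduling. The sketch below is illustrative only; the field names are assumptions, not part of the original question.

```python
# Non-preemptive scheduler: at each decision point, choose among the arrived
# processes using a key function (burst time for SJF, priority for Priority).
def simulate(processes, key):
    time, waits, pending = 0, {}, list(processes)
    while pending:
        ready = [p for p in pending if p["arrival"] <= time]
        if not ready:                        # CPU idle until the next arrival
            time = min(p["arrival"] for p in pending)
            continue
        p = min(ready, key=key)              # pick the best ready process
        waits[p["name"]] = time - p["arrival"]
        time += p["burst"]                   # run it to completion
        pending.remove(p)
    return waits

procs = [
    {"name": "P1", "arrival": 0, "burst": 6, "priority": 2},
    {"name": "P2", "arrival": 2, "burst": 4, "priority": 1},
    {"name": "P3", "arrival": 4, "burst": 2, "priority": 3},
]

print(simulate(procs, key=lambda p: p["burst"]))     # SJF:      {'P1': 0, 'P3': 2, 'P2': 6}
print(simulate(procs, key=lambda p: p["priority"]))  # Priority: {'P1': 0, 'P2': 4, 'P3': 6}
```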

Preemptive and Non-preemptive Scheduling:


Preemptive scheduling refers to a scheduling policy in which a running process can be interrupted and moved off the CPU before it completes, either because a higher-priority process arrives or because its time slice (quantum) expires. This allows better responsiveness and the ability to prioritize important or time-critical tasks.

Non-preemptive scheduling, on the other hand, does not allow a running process to be interrupted before it completes its execution. It follows a "run-to-completion" approach, which can lead to longer response times for higher-priority processes but ensures predictability and simplicity in scheduling.
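To illustrate preemption, the following sketch (illustrative only, not part of the question) runs the same three processes under round-robin with a time quantum of 2, assuming for simplicity that all three are ready at time 0: a process is pushed back to the end of the queue whenever its quantum expires.

```python
from collections import deque

# Round-robin: preemptive scheduling driven by a fixed time quantum.
quantum = 2
remaining = {"P1": 6, "P2": 4, "P3": 2}     # remaining burst time per process
queue = deque(["P1", "P2", "P3"])           # assume all three are ready at time 0

time = 0
while queue:
    name = queue.popleft()
    run = min(quantum, remaining[name])     # run until quantum expiry or completion
    time += run
    remaining[name] -= run
    if remaining[name] > 0:
        queue.append(name)                  # preempted: back to the end of the queue
    else:
        print(f"{name} finishes at time {time}")   # P3 at 6, P2 at 10, P1 at 12
```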

In summary, scheduling algorithms such as FCFS, SJF, and Priority provide different strategies for deciding the order of process execution. Preemptive scheduling allows running processes to be interrupted, while non-preemptive scheduling lets each process finish before the next one is dispatched. The choice of scheduling algorithm depends on system requirements, priorities, and the desired performance metrics.

● Discuss Banker’s algorithm and how to find out if a system is in a safe state or
not?

Ans: The operating system uses the Banker's Algorithm to allocate resources and avoid deadlocks. It was developed by Edsger Dijkstra and named for its resemblance to the way bankers manage loans. The main purpose of the Banker's Algorithm is to prevent deadlock by determining whether the system can safely grant resource requests from processes. It is based on the principle of avoiding situations in which a process could wait indefinitely for a resource and deadlock.

The Banker's Algorithm assumes that each process specifies in advance the maximum number of resources of each type it may need while running, and that a process never requests more than this declared maximum. A resource request is granted only if doing so leaves the system in a safe state.
This algorithm maintains several data structures to keep track of the current state of the
system.
1. Available Resources: This represents the number of available instances of each
resource type in the system.
2. Maximum Claim Matrix: This matrix specifies the maximum number of each type
of resource that each process requires.
3. Allocation Matrix: This matrix represents the number of each type of resource
currently allocated to each process.
4. Need Matrix: This matrix shows the remaining resources of each type that each
process still requires to complete execution (Need = Maximum Claim - Allocation).
To determine whether the system is in a safe state, the Banker's Algorithm runs a safety algorithm that checks whether there is an ordering in which all processes can obtain their remaining resources and complete without deadlock.

The safety algorithm works as follows:


1. Initialization: Initialize the available resources, maximum claim matrix, allocation matrix, and need matrix based on the current state of the system.

2. Work and Finish arrays: Create a "Work" array that initially contains the number of available instances of each resource type, and a "Finish" array representing the completion status of each process (initially set to false for all processes).

3. Safety algorithm loop: Repeat the following steps until all processes are marked as finished or no process can be found that meets the criteria.

   a. Find a candidate process: Scan the processes for one that satisfies both of the following conditions:
   - Its remaining resource need (represented by the need matrix) can be satisfied by the currently available resources (represented by the Work array).
   - The process has not finished yet (Finish[process] = false).

   b. Process found: If a process satisfying the conditions in step 3a is found, simulate its completion: add the resources allocated to it back into the Work array and mark it as finished by setting Finish[process] = true.

4. Safety check: Once the loop terminates, check whether all processes are marked as finished. If so, the system is in a safe state; otherwise, the system is in an unsafe state.

The Banker's Algorithm keeps the system out of unsafe states by carefully checking whether a safe state remains reachable before a resource request is granted. It provides a way to allocate resources to processes in a controlled manner, avoiding potential deadlocks and using resources efficiently. A sketch of the safety check appears below.
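The following Python sketch implements the safety algorithm described above. The matrices shown are small illustrative values (a commonly used textbook-style state), not data taken from the question; the Need matrix is computed as Maximum Claim minus Allocation.

```python
# Banker's safety algorithm: returns a safe sequence if one exists, else None.
def is_safe(available, max_claim, allocation):
    n = len(allocation)                       # number of processes
    need = [[m - a for m, a in zip(max_claim[i], allocation[i])] for i in range(n)]
    work = list(available)                    # Work starts as the Available vector
    finish = [False] * n
    sequence = []
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            # Pick an unfinished process whose remaining need fits within Work.
            if not finish[i] and all(need[i][j] <= work[j] for j in range(len(work))):
                work = [w + a for w, a in zip(work, allocation[i])]  # it releases its resources
                finish[i] = True
                sequence.append(i)
                progressed = True
    return sequence if all(finish) else None  # safe sequence, or None if unsafe

# Illustrative state: 5 processes, 3 resource types.
available  = [3, 3, 2]
max_claim  = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
print(is_safe(available, max_claim, allocation))  # [1, 3, 4, 0, 2] -> the system is safe
```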

Assignment Set – 2
Questions

● What is PCB? What information is stored in it?

Ans: When creating a process, the operating system performs several operations. To identify the process, each process is assigned a process identification number (PID). Since the operating system supports multiprogramming, it needs to keep track of all processes. For this it uses the process control block (PCB) to record the execution state of each process. A PCB contains information such as the process state, program counter, stack pointer, open files, scheduling parameters, and more. All of this information must be saved whenever the process transitions from one state to another, so the operating system updates the PCB on every such transition. The process table is an array of PCBs: logically, it contains one PCB for every current process in the system. Typical PCB fields include:

1. Pointer - The stack pointer, which must be saved to preserve the current position
of the process when it transitions from one state to another.
2. Process state - The current state of the process (for example ready, running, or waiting).
3. Process number - Each process is assigned a unique identifier, called the process ID
or PID, which is stored here.
4. Program counter - The address of the next instruction to be executed for the process.
5. Registers - The CPU registers, including the accumulator, base and index registers,
and general-purpose registers.
6. Memory limits - Information about the memory-management scheme used by the
operating system, such as page tables or segment tables.
7. Open files list - The list of files opened by the process.
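The fields above can be pictured as a simple record. The Python dataclass below is an illustrative sketch only; a real kernel stores the PCB as a C structure with many more fields, and the field names here are assumptions.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

# Illustrative process control block: one record per process in the process table.
@dataclass
class PCB:
    pid: int                                  # unique process identifier
    state: str = "new"                        # new, ready, running, waiting, terminated
    program_counter: int = 0                  # address of the next instruction
    stack_pointer: int = 0                    # saved stack position at the last context switch
    registers: Dict[str, int] = field(default_factory=dict)  # saved CPU register values
    memory_limits: Tuple[int, int] = (0, 0)   # e.g. base/limit pair or page-table handle
    open_files: List[str] = field(default_factory=list)

process_table = [PCB(pid=1, state="running"), PCB(pid=2, state="ready")]
print(process_table[0])
```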

● What are monitors? Explain.

Ans: In the context of computer programming and concurrent computing, a monitor is a synchronization construct used to control access to shared resources in a multithreaded or multi-process environment.
In concurrent programming, multiple threads or processes may run simultaneously, often needing access to shared data or resources. Uncontrolled concurrent access can lead to race conditions and data inconsistencies. Monitors provide a mechanism that guarantees mutually exclusive access to shared resources and prevents concurrent threads or processes from interfering with one another.
The monitor concept was introduced by C.A.R. Hoare in 1974 as an operating system structuring concept. A monitor encapsulates shared data together with the operations or procedures that manipulate that data. It enforces mutual exclusion by allowing only one thread or process to be active inside the monitor at a time, ensuring that the shared data is accessed in a serialized and controlled manner.
A monitor generally consists of the following components:

1. Shared data: The data that is accessed and modified by multiple threads or
processes. The shared data resides within the monitor and is protected from concurrent
access.
2. Procedures: Procedures, also known as methods or functions, define the operations
that can be performed on the shared data. These procedures are implemented within the
monitor and called by external threads or processes.
3. Condition variables: Condition variables are used to coordinate the execution of
threads or processes within a monitor. They allow threads to wait until certain
conditions are met before continuing. A condition variable typically supports wait()
(release the monitor and wait for a signal), signal()/notify() (wake one waiting
thread), and broadcast()/notify_all() (wake all waiting threads). A bounded-buffer
sketch using these operations is shown below.
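As referenced in item 3, here is a minimal monitor-style bounded buffer in Python, using threading.Condition to provide the wait/notify behaviour described above. The class name and buffer size are illustrative assumptions; Python has no built-in monitor construct, so the lock and condition variables are combined by hand.

```python
import threading
from collections import deque

# Monitor-style bounded buffer: the shared lock gives mutual exclusion,
# the condition variables give wait()/notify() coordination.
class BoundedBuffer:
    def __init__(self, capacity=5):
        self.capacity = capacity
        self.items = deque()                         # shared data, protected by the lock
        self.lock = threading.Lock()
        self.not_full = threading.Condition(self.lock)
        self.not_empty = threading.Condition(self.lock)

    def put(self, item):
        with self.not_full:                          # enter the monitor
            while len(self.items) >= self.capacity:
                self.not_full.wait()                 # release the lock and wait
            self.items.append(item)
            self.not_empty.notify()                  # signal a waiting consumer

    def get(self):
        with self.not_empty:
            while not self.items:
                self.not_empty.wait()
            item = self.items.popleft()
            self.not_full.notify()                   # signal a waiting producer
            return item

buf = BoundedBuffer()
threading.Thread(target=lambda: print(buf.get())).start()
buf.put("hello")                                     # the consumer thread prints "hello"
```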

Monitors provide a higher level of abstraction than other synchronization primitives such as locks and semaphores. Encapsulating both the data and the operations on that data in a single structure makes it easier to reason about the concurrency in the code and avoids certain kinds of programming errors.
It is worth noting that the term "monitor" can also refer to a computer display. In the context of concurrent computing, however, a monitor is a synchronization mechanism used by operating systems and programming languages to ensure proper coordination and access to shared resources in a multithreaded or multi-process environment.

● Discuss IPC and critical-section problems along with use of semaphores.

Ans: IPC (inter-process communication) refers to the mechanisms and techniques used by operating systems to enable communication and data exchange between different processes running concurrently on a computer system. IPC allows processes to exchange information, coordinate their activities, and synchronize their execution.

One of the fundamental challenges in concurrent programming is the critical-section problem. A critical section is a region of code within a program where shared resources are accessed and modified. Critical-section problems occur when multiple processes or threads attempt to access and modify shared resources at the same time, which can lead to data inconsistencies and race conditions.

Any solution to the critical-section problem must satisfy three conditions:

1. Mutual exclusion: Only one process or thread can execute within the critical
section at a time. This ensures that shared resources are accessed and modified in a
serialized and controlled manner.

2. Progress: If no process is currently executing in the critical section and some
processes are waiting to enter, one of the waiting processes must be granted access
within a finite amount of time. This criterion ensures that processes do not wait
indefinitely to enter the critical section.

3. Bounded waiting: There must be a limit on the number of times other processes can
enter the critical section after a process has requested entry and before that request
is granted. This ensures that no process is starved or postponed indefinitely in favor
of others.

A common technique for solving the critical-section problem and achieving mutual exclusion is to use semaphores. A semaphore is a synchronization primitive that can be used to control access to shared resources and enforce mutual exclusion.

A semaphore is a non-negative integer variable that is accessed atomically and supports two basic operations:

1. Wait (also called P or acquire): If the semaphore value is greater than 0, the
process decrements the value and enters the critical section. A value of 0 indicates
that the critical section is currently occupied, and the process blocks or sleeps until
the semaphore value becomes greater than 0 again.
2. Signal (also called V or release): When the process finishes executing within the
critical section, it releases the semaphore by incrementing its value. If other
processes are waiting, one of them is woken up and gains access to the critical
section.
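These two operations map directly onto a binary semaphore. In the sketch below (illustrative only), a semaphore initialized to 1 protects a shared counter so that only one thread updates it at a time; acquire() corresponds to wait/P and release() to signal/V.

```python
import threading

mutex = threading.Semaphore(1)   # binary semaphore: 1 means the critical section is free
counter = 0                      # shared resource

def worker():
    global counter
    for _ in range(100_000):
        mutex.acquire()          # wait / P: block if another thread is inside
        counter += 1             # critical section: update the shared resource
        mutex.release()          # signal / V: let a waiting thread proceed

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                   # 400000 on every run; without the semaphore it could be lower
```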

Semaphores satisfy the mutual-exclusion requirement of the critical-section problem by allowing processes to coordinate access to shared resources and ensuring that only one process at a time enters the critical section.

Semaphores can also be used to solve synchronization problems beyond mutual exclusion, such as coordinating producer and consumer processes, avoiding deadlock situations, and enforcing bounded waiting.

Overall, IPC and the critical-section problem are important aspects of concurrent programming, and semaphores are a valuable tool for achieving mutual exclusion, coordinating access to shared resources, ensuring proper synchronization, and preventing race conditions.

● Explain the different Multiprocessor Interconnections and types of Multiprocessor Operating Systems.

Ans: Multiprocessor interconnection refers to the various ways in which multiple processors or central processing units (CPUs) are connected in a multiprocessor system to facilitate communication and coordination between them. These interconnections play a crucial role in determining overall system performance, scalability, and efficiency. Here are some commonly used multiprocessor interconnections:

1. Shared Bus: In a shared-bus architecture, all processors in the system are
connected to a common bus. The bus acts as the communication medium, allowing
processors to exchange data and instructions. However, the shared bus can become a
bottleneck and limit scalability as multiple processors contend for access.
2. Crossbar Switch: A crossbar switch provides a dedicated point-to-point connection
between each pair of processors in the system. This eliminates access conflicts and allows
concurrent communication between processors. Crossbar switches are highly scalable, but
can be expensive and complex to implement.
3. Switched Interconnect: A switched interconnect uses a network of switches to connect
the processors. Each processor has a dedicated connection to a switch, which forwards data
packets to their appropriate destinations. Switched interconnects offer good scalability and
flexibility, but can add latency and complexity.
4. Mesh and Torus: A mesh or torus interconnect arranges processors in a grid-like
structure, with each processor connected to its neighbors. This arrangement provides regular
and predictable communication patterns and allows efficient data exchange. Mesh and torus
interconnects are scalable and fault tolerant, but performance can be affected by congestion in
heavily loaded systems.
5. Hypercube: A hypercube interconnect is a more complex network topology in which
processors are connected in a hypercube structure. In a hypercube of N processors, each
processor is connected to log2 N other processors. Hypercubes offer high fault tolerance,
scalability, and efficient communication, but can be expensive to implement.

Multiprocessor operating systems are specifically designed to manage and coordinate the activities of multiple processors in a multiprocessor system. These operating systems provide mechanisms for scheduling tasks and processes across multiple processors, allocating resources, synchronizing access to shared resources, and efficiently using the available computing power. Here are some types of multiprocessor operating systems:

1. Symmetric Multiprocessing (SMP): SMP operating systems treat all processors in a
system as equals, allowing any processor to execute any task or process. They provide a single
shared memory space that all processors can access and use load-balancing techniques to
distribute tasks evenly across processors. SMP systems are commonly used in desktop
computers, servers, and some high-performance computing environments.
2. Asymmetric Multiprocessing (AMP): In AMP systems, different processors are assigned
specific roles or tasks. For example, one processor may handle operating system functions
while another runs user applications. Each processor works independently and may have its
own private memory. AMP systems are often used in embedded systems where specific
processors are dedicated to real-time tasks or special functions.
3. Non-Uniform Memory Access (NUMA): NUMA operating systems manage
multiprocessor systems with physically distributed memory. Each processor can access its
local memory quickly, while accessing remote memory incurs additional latency. NUMA
operating systems optimize memory accesses by keeping data as close as possible to the
processors that need it, minimizing latency and improving performance.
4. Clustered Systems: A clustered operating system manages a collection of individual
computers or nodes connected over a network. Each node in the cluster has its own operating
system and processor, and the clustered operating system provides mechanisms for load
balancing, task distribution, and fault tolerance across the cluster. Clustered systems are
commonly used in high-performance computing and large data centers.

These are just a few examples of multiprocessor interconnections and multiprocessor operating systems. The choice of interconnect and operating system depends on factors such as performance requirements, scalability, fault tolerance, cost, and the specific use case or application of the multiprocessor system.
