
ASSIGNMENT-01

Q.1 Explain about Complexity of Algorithms, Performance Measurements, Asymptotic Notations.
Answer:- The term complexity is used to describe the performance of an algorithm.
Typically, performance is measured in terms of time or space. Space complexity of an algorithm
is the amount of memory needed to run the algorithm.
Time complexity of an algorithm is the amount of computer time it needs to run.
The most common notation for describing time complexity is O(f(n)), where n is the input size and f is a
function of n. An algorithm is O(f(n)) if there exist constants c and N such that for all n > N the running time of
the algorithm is at most c·f(n).

We can measure two parameters to check the performance of any algorithm:

(1) Space complexity
(2) Time complexity
The word Asymptotic means approaching a value or curve arbitrarily closely (i.e., as some sort
of limit is taken).
Asymptotic notation is a way of comparing functions that ignores constant factors and small
input sizes. Three notations are used to describe the running time complexity of an algorithm:

Big-oh notation: Big-oh is the formal method of expressing the upper bound of an algorithm's
running time; it measures the longest amount of time the algorithm can take. The function f(n) = O(g(n))
[read as "f of n is big-oh of g of n"] if and only if there exist positive constants c and n0 such that

f(n) ≤ c·g(n) for all n ≥ n0

Hence, function g(n) is an upper bound for function f(n), as c·g(n) grows at least as fast as f(n) beyond n0.
Omega (Ω) notation: Omega expresses the lower bound of an algorithm's running time. The function
f(n) = Ω(g(n)) [read as "f of n is omega of g of n"] if and only if there exist positive constants c and n0 such that

f(n) ≥ c·g(n) for all n ≥ n0

Theta (Θ) notation: The function f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)); that is,
g(n) is a tight bound on f(n).
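The Big-oh definition can be checked numerically. The sketch below takes f(n) = 3n + 5 and the (hypothetically chosen) witnesses c = 4, n0 = 5, and verifies f(n) ≤ c·n over a range of inputs:

```python
# Checking the Big-O definition numerically: f(n) = 3n + 5 is O(n),
# witnessed by the constants c = 4 and n0 = 5 (hypothetical choices).
def f(n):
    return 3 * n + 5

c, n0 = 4, 5

# For every tested n > n0, f(n) <= c * g(n) holds with g(n) = n.
assert all(f(n) <= c * n for n in range(n0 + 1, 10_000))
print("f(n) = 3n + 5 is O(n) with c = 4, n0 = 5")
```

Any larger c (with a matching n0) would work just as well; the definition only requires that some pair of constants exists.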


Q.2 Explain Divide and Conquer with Examples Such as Sorting, Matrix Multiplication
Answer:- Divide and Conquer is an algorithmic pattern.

The design idea is to take a problem on a large input, break the input into
smaller pieces, solve the problem on each of the small pieces, and then merge the piecewise
solutions into a global solution.

This mechanism of solving the problem is called the Divide & Conquer strategy.

A Divide and Conquer algorithm solves a problem using the following three steps.

1. Divide the original problem into a set of subproblems.

2. Conquer: Solve every subproblem individually, recursively.

3. Combine: Put together the solutions of the subproblems to get the solution to the whole
problem.

Divide and Conquer (towards Strassen’s Matrix Multiplication)

Following is the simple divide-and-conquer method to multiply two square matrices.
1) Divide matrices A and B into 4 sub-matrices of size N/2 x N/2, calling the quadrants of A a, b, c, d and those of B e, f, g, h.
2) Calculate the following values recursively: ae + bg, af + bh, ce + dg and cf + dh. These are the four quadrants of the product.

In the above method, we do 8 multiplications on matrices of size N/2 x N/2 and 4 additions.
Addition of two matrices takes O(N²) time. So the time complexity can be written as
T(N) = 8T(N/2) + O(N²)
which solves to O(N³). Strassen’s method improves on this by computing the product with only 7 recursive multiplications instead of 8, bringing the complexity down to about O(N^2.81).
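The simple 8-multiplication scheme above can be sketched as follows. This is a minimal illustration assuming N is a power of two and matrices are plain lists of lists; the helper names `split` and `add` are ours, not from the text:

```python
# Simple divide-and-conquer multiplication of two N x N matrices
# (N a power of two), matching the recurrence T(N) = 8T(N/2) + O(N^2).

def split(M):
    """Split M into four N/2 x N/2 quadrants."""
    n = len(M) // 2
    return ([row[:n] for row in M[:n]],   # top-left
            [row[n:] for row in M[:n]],   # top-right
            [row[:n] for row in M[n:]],   # bottom-left
            [row[n:] for row in M[n:]])   # bottom-right

def add(X, Y):
    """Entry-wise matrix addition, O(N^2)."""
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def multiply(A, B):
    if len(A) == 1:                       # base case: 1 x 1 matrices
        return [[A[0][0] * B[0][0]]]
    a, b, c, d = split(A)
    e, f, g, h = split(B)
    # Eight recursive multiplications, four quadrant additions.
    top = [r1 + r2 for r1, r2 in zip(add(multiply(a, e), multiply(b, g)),
                                     add(multiply(a, f), multiply(b, h)))]
    bot = [r1 + r2 for r1, r2 in zip(add(multiply(c, e), multiply(d, g)),
                                     add(multiply(c, f), multiply(d, h)))]
    return top + bot

print(multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]]))   # [[19, 22], [43, 50]]
```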
Here, we will sort an array using the divide and conquer approach (i.e., merge sort).

1. Start with the given (unsorted) array.

2. Divide the array into two halves. Again, divide each subpart recursively into two halves until you get individual elements.

3. Now, combine the individual elements in a sorted manner.
Here, the conquer and combine steps go side by side.
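The steps above can be sketched as a minimal merge sort:

```python
# Merge sort: divide the array into halves, sort each half recursively,
# then combine (merge) the two sorted halves.

def merge_sort(arr):
    if len(arr) <= 1:                    # an individual element is sorted
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])         # divide + conquer on each half
    right = merge_sort(arr[mid:])
    # combine: merge the two sorted halves into one sorted list
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([7, 6, 1, 5, 4, 3]))    # [1, 3, 4, 5, 6, 7]
```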

Q.3 Explain Minimum Spanning Trees – Prim’s and Kruskal’s Algorithms
Answer:-
Prim’s Algorithm
 It starts to build the Minimum Spanning Tree from any vertex in the graph.
 It may traverse a node more than once to get the minimum distance.
 Prim’s algorithm has a time complexity of O(V²), V being the number of vertices, and can
be improved up to O(E log V) using Fibonacci heaps.
 Prim’s algorithm always gives a connected component; it works only on connected graphs.
 Prim’s algorithm runs faster on dense graphs.
 Prim’s algorithm uses an adjacency list together with a priority queue.
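A minimal sketch of Prim's algorithm using a binary heap as the priority queue (the adjacency-matrix variant gives the O(V²) bound above). The graph representation, an adjacency list mapping each vertex to `(weight, neighbour)` pairs, is an assumption for illustration:

```python
import heapq

# Prim's algorithm: grow the MST from a start vertex, always taking the
# cheapest edge that leaves the tree.

def prim(graph, start):
    visited = {start}
    heap = list(graph[start])            # candidate edges out of the tree
    heapq.heapify(heap)
    mst_weight = 0
    while heap:
        w, v = heapq.heappop(heap)       # cheapest candidate edge
        if v in visited:
            continue                     # would create a cycle; skip
        visited.add(v)
        mst_weight += w
        for edge in graph[v]:
            if edge[1] not in visited:
                heapq.heappush(heap, edge)
    return mst_weight

g = {'A': [(1, 'B'), (4, 'C')],
     'B': [(1, 'A'), (2, 'C')],
     'C': [(4, 'A'), (2, 'B')]}
print(prim(g, 'A'))   # 3  (edges A-B and B-C)
```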
Kruskal’s Algorithm
 It starts to build the Minimum Spanning Tree from the edge carrying the minimum weight
in the graph.
 It traverses each node only once.
 Kruskal’s algorithm’s time complexity is O(E log V), V being the number of vertices.
 Kruskal’s algorithm can generate a forest (disconnected components) at any instant, and
it can work on disconnected graphs.
 Kruskal’s algorithm runs faster on sparse graphs.
 Kruskal’s algorithm uses a heap (to order the edges by weight).
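A minimal sketch of Kruskal's algorithm; it sorts edges by weight and uses a simple union-find (disjoint-set) structure to reject edges that would form a cycle. The edge tuples and vertex names are illustrative:

```python
# Kruskal's algorithm: take edges cheapest-first, adding an edge only
# when its endpoints are in different components.

def kruskal(vertices, edges):
    parent = {v: v for v in vertices}    # each vertex starts in its own set

    def find(v):
        while parent[v] != v:            # follow links to the set's root
            v = parent[v]
        return v

    mst_weight = 0
    for w, u, v in sorted(edges):        # edges as (weight, u, v), cheapest first
        ru, rv = find(u), find(v)
        if ru != rv:                     # different components: no cycle
            parent[ru] = rv              # union the two components
            mst_weight += w
    return mst_weight

edges = [(1, 'A', 'B'), (2, 'B', 'C'), (4, 'A', 'C')]
print(kruskal(['A', 'B', 'C'], edges))   # 3
```

Sorting the edges dominates the cost, giving the O(E log E) = O(E log V) bound mentioned above.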

Q.4 Explain the Concept of FCFS scheduling algorithm, Concept of priority scheduling algorithm like SJF, Concept of non-preemptive and preemptive algorithms, Concept of round-robin scheduling algorithm.
Answer:-
First Come First Serve (FCFS) is an operating system scheduling algorithm that
executes queued requests and processes in order of their arrival. It is the easiest and simplest
CPU scheduling algorithm: the process that requests the CPU first gets the CPU first.

This is managed with a FIFO queue.
As a process enters the ready queue, its PCB (Process Control Block) is linked to the tail of
the queue, and when the CPU becomes free, it is assigned to the process at the head of the queue.
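FCFS can be sketched in a few lines: each process waits for the total burst time of everything ahead of it. The burst times below are an illustrative example, with all processes assumed to arrive at time 0:

```python
# FCFS: processes run to completion in arrival order; a process's waiting
# time is the time the CPU spent on everything queued ahead of it.

def fcfs_waiting_times(bursts):
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)              # waited until the CPU became free
        clock += burst                   # runs to completion (non-preemptive)
    return waits

bursts = [24, 3, 3]                      # burst times in ms
waits = fcfs_waiting_times(bursts)
print(waits)                             # [0, 24, 27]
print(sum(waits) / len(waits))           # average waiting time: 17.0
```

Note how the long first burst makes the two short processes wait, inflating the average; this is the "convoy effect" that motivates SJF.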

Priority scheduling executes processes depending upon their priority. Each
process is allocated a priority, and the process with the highest priority is executed first. Priorities
can be defined internally as well as externally.
Shortest Job First (SJF) scheduling is based upon the burst time of the process: processes
are put into the ready queue based on their burst times, and the process
with the least burst time is processed first. Only the burst times of processes that have already
arrived are compared. SJF is non-preemptive in nature; its preemptive
version is called the Shortest Remaining Time First (SRTF) algorithm.
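Non-preemptive SJF can be sketched the same way; this minimal example assumes all processes arrive at time 0 and simply runs them shortest-burst-first:

```python
# Non-preemptive SJF: with every process available at time 0, running
# them in order of increasing burst time minimizes average waiting time.

def sjf_waiting_times(bursts):
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, clock = [0] * len(bursts), 0
    for i in order:                      # shortest burst first
        waits[i] = clock
        clock += bursts[i]
    return waits

bursts = [24, 3, 3]                      # burst times in ms
waits = sjf_waiting_times(bursts)
print(waits)                             # [6, 0, 3]
print(sum(waits) / len(waits))           # average waiting time: 3.0
```

With the same burst times, SJF's average waiting time (3 ms) is far below what FCFS order would give, because the long job no longer blocks the short ones.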

Preemptive Scheduling is a CPU scheduling technique that works by dividing the CPU's time into
slots given to processes. The time slot given may or may not be enough to complete the whole
process: when the process's remaining burst time is greater than the CPU time it received, it is placed
back into the ready queue and runs again in a later turn. This scheduling is used whenever a
process switches to the ready state.
Algorithms based on preemptive scheduling include round-robin (RR), preemptive priority, and SRTF
(shortest remaining time first).
Non-preemptive Scheduling is a CPU scheduling technique in which a process takes the resource
(CPU time) and holds it until the process terminates or moves to the waiting state. No
process is interrupted before it completes; only then does the processor switch to another process.
Algorithms based on non-preemptive scheduling include non-preemptive priority and
Shortest Job First.
Round Robin(RR) scheduling algorithm is mainly designed for time-sharing systems. This
algorithm is similar to FCFS scheduling, but in Round Robin(RR) scheduling, preemption is
added which enables the system to switch between processes.

 A fixed time is allotted to each process, called a quantum, for execution.

 Once a process is executed for the given time period that process is preempted and
another process executes for the given time period.

 Context switching is used to save states of preempted processes.

 This algorithm is simple and easy to implement, and most importantly, it is
starvation-free, as all processes get a fair share of the CPU.

 It is important to note that the length of the time quantum is generally from 10 to 100
milliseconds.
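The round-robin mechanism above can be sketched as a small simulation; the burst times and quantum are illustrative:

```python
from collections import deque

# Round robin: each process runs for at most one quantum, then is
# preempted and re-queued at the tail if work remains.

def round_robin(bursts, quantum):
    queue = deque(enumerate(bursts))     # FIFO queue of (pid, remaining)
    finish, clock = {}, 0
    while queue:
        pid, remaining = queue.popleft()
        run = min(quantum, remaining)    # run for one quantum at most
        clock += run
        if remaining > run:
            queue.append((pid, remaining - run))   # preempt and re-queue
        else:
            finish[pid] = clock          # process completed at this time
    return finish                        # pid -> completion time

print(round_robin([5, 3, 1], quantum=2))   # {2: 5, 1: 8, 0: 9}
```

Each re-queue in the simulation corresponds to a context switch, whose state-saving cost the quantum length must amortize.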

Q.5 Explain the Concept of deadlock prevention and its avoidance.


Answer:-
Deadlock Prevention :
Deadlock prevention means to block at least one of the four conditions required for deadlock to
occur. If we are able to block any one of them then deadlock can be prevented.

The four conditions which need to be blocked are:-

 Mutual Exclusion
 Hold and Wait
 No Preemption
 Circular Wait
Deadlock Avoidance :
In deadlock avoidance we anticipate deadlock before it actually occurs and ensure that the
system never enters an unsafe state. Deadlock can be avoided if resources are allocated
carefully. For deadlock avoidance we use the Banker’s algorithm (with its safety algorithm) for resource
allocation. In deadlock avoidance, the maximum number of resources of each type that a process will ever
need is stated at the beginning of the process.
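The safety algorithm at the heart of deadlock avoidance can be sketched as follows: a state is safe if every process can finish in some order, each one releasing its allocation for the others to use. The matrices below are an illustrative example, not from the text:

```python
# Safety algorithm (used by the Banker's algorithm): repeatedly find an
# unfinished process whose remaining need fits in the available resources,
# let it finish, and reclaim its allocation.

def is_safe(available, allocation, need):
    work = available[:]                  # resources currently free
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            # pick any unfinished process whose need fits in `work`
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # it can run to completion and release its allocation
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)                 # safe iff every process can finish

available  = [3, 3, 2]                   # free instances of 3 resource types
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe(available, allocation, need))   # True
```

The Banker's algorithm grants a request only if the state after the (tentative) grant still passes this safety check.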

Q.6 Explain the Need of Memory management and its requirements, paging, segmentation,
concept of fragmentation. 
Answer:-
Reasons for using memory management:

 It allows you to decide how much memory to allocate to each process, and
which process should get memory at what time.
 It tracks whenever memory gets freed or unallocated and updates the
status accordingly.
 It allocates space to application routines.
 It also makes sure that these applications do not interfere with one another.
 It helps protect different processes from each other.
 It places programs in memory so that memory is utilized to its full extent.
Paging is a storage mechanism that allows the OS to retrieve processes from secondary storage
into main memory in the form of pages. In the paging method, main memory is divided
into small fixed-size blocks of physical memory called frames. The size of a frame
is kept the same as that of a page to maximize utilization of main memory and
to avoid external fragmentation. Paging allows faster access to data, and it is a logical
concept.
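Under paging, a logical address splits into a page number and an offset; the page table maps the page number to a frame number, and the offset is kept unchanged. A minimal sketch, with an assumed page size of 1024 bytes and an illustrative page table:

```python
# Logical-to-physical address translation under paging.

PAGE_SIZE = 1024                          # assumed page/frame size in bytes
page_table = {0: 5, 1: 2, 2: 7}           # page number -> frame number

def translate(logical_address):
    page = logical_address // PAGE_SIZE   # which page the address is on
    offset = logical_address % PAGE_SIZE  # position within that page
    frame = page_table[page]              # look up the frame for the page
    return frame * PAGE_SIZE + offset     # same offset within the frame

print(translate(2100))   # page 2, offset 52 -> frame 7 -> 7220
```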

Segmentation method works almost similarly to paging. The only difference between the two is
that segments are of variable-length, whereas, in the paging method, pages are always of fixed
size.

A program segment includes the program’s main function, data structures, utility functions, etc.
The OS maintains a segment map table for all processes. The table also includes a list of free
memory blocks along with their sizes, segment numbers, and memory locations in main
memory or virtual memory.

Fragmentation
As processes are loaded into and removed from memory, the free memory space gets broken into
pieces that are too small for other processes to use.

Over time, processes cannot be allocated memory blocks because the blocks are too small, and
those blocks remain permanently unused; this is called fragmentation. This problem happens
in a dynamic memory allocation system when the free blocks are so small that they cannot
fulfill any request.

Two types of Fragmentation methods are:

1. External fragmentation
2. Internal fragmentation
