
Q.1 Find out the time complexity of the given algorithm.

Sum(a, n)
{
    if (n <= 0) return 0;
    else return Sum(a, n-1) + a[n];
}

Ans = The recurrence is T(n) = T(n-1) + c, which solves to O(n), i.e., linear time.

Q.2 Discuss space complexity.

The space complexity of an algorithm refers to the amount of memory or space required by the
algorithm to solve a computational problem, expressed as a function of the input size. In the case of
the given algorithm for calculating the sum of elements in an array, the space complexity is determined
by the recursion: each of the n recursive calls adds a frame to the call stack, so the space
complexity is O(n).
Q.3 Distinguish Greedy Method and Dynamic Programming.

Greedy Method: makes a locally optimal choice at each step and never reconsiders it; it is usually
faster and simpler, but it guarantees an optimal solution only for problems with the greedy-choice
property (e.g., fractional knapsack, Prim's and Kruskal's algorithms).
Dynamic Programming: solves every relevant subproblem, stores the results (memoization or
tabulation), and combines them; it guarantees an optimal solution when the problem has optimal
substructure and overlapping subproblems (e.g., 0/1 knapsack, matrix chain multiplication), at the
cost of more time and memory than a greedy approach.

Q.4 Define 0/1 knapsack problem.


The 0/1 Knapsack Problem is a classic optimization problem in computer science and mathematics. It
is a type of combinatorial optimization problem where the goal is to select a subset of items, each
with a weight and a value, to maximize the total value while staying within the given weight capacity
of a knapsack. The term "0/1" indicates that each item is either included whole in the knapsack (1)
or left out completely (0); fractional items are not allowed.
Q.5 Recognize Lower Bound Theory.
According to the lower bound theory, for a lower bound L(n) of a problem, it is not possible to
have any algorithm (for that problem) whose worst-case time complexity is less than L(n). In other
words, every algorithm for the problem must take at least L(n) time in the worst case; L(n) is the
minimum, over all possible algorithms, of their worst-case complexity.
The lower bound is very important for any algorithm. Once we have calculated it, we can compare it
with the actual complexity of the algorithm, and if their orders are the same, we can declare the
algorithm optimal.

Q.6 Identify Pattern Matching Algorithms


Some well-known pattern matching algorithms are:

1. Brute Force Algorithm

2. Knuth-Morris-Pratt (KMP) Algorithm

3. Boyer-Moore Algorithm

4. Rabin-Karp Algorithm

5. Aho-Corasick Algorithm

6. Finite Automaton-based Algorithm

Q.7 State about assignment problem


The Assignment Problem is a combinatorial optimization problem in the field of
operations research and optimization. In this problem, the objective is to find the
optimal assignment of a set of tasks to a set of agents or resources, such that the
overall cost or benefit is minimized or maximized. The problem is commonly
formulated as a linear programming problem.
Q.8 Describe Randomized Algorithms
Randomized algorithms are algorithms that use random numbers during their execution to make
decisions or introduce an element of randomness. Unlike deterministic algorithms, which produce
the same output for a given input every time they run, randomized algorithms may exhibit different
behaviors on different runs due to the use of randomization.

Q.9 Explain set cover problem


The Set Cover Problem is a classic optimization problem in computer science and
combinatorial optimization. In this problem, the goal is to find the smallest set of
subsets (or "sets") from a given collection that covers all elements in a universal set.
The problem is often formulated as a decision problem: Given a universal set U and a
collection of subsets S1,S2,…,Sk, does there exist a subcollection of Si whose union
covers all elements in U?
Q.10 Define Cook's Theorem.

Stephen Cook's theorem is a fundamental result in computational complexity theory. It was
published in 1971 and is also known as the "Cook-Levin Theorem." The theorem states that the
Boolean satisfiability problem (SAT) is NP-complete: SAT is in NP, and every problem in NP can be
reduced to SAT in polynomial time.

PART B

--------------------------

Q.1 Contrast between 3 asymptotic notations & define the use & objective of
asymptotic notations
Contrast between Asymptotic Notations:

1. Big O Notation (O):

• Definition: Represents an upper bound on the growth rate of an algorithm. It


provides an upper limit on the rate of growth of a function in the worst-case
scenario.

• Use: Used to describe the upper bound or worst-case time complexity of an


algorithm.

• Example: If an algorithm has a time complexity of O(n^2), it means the running time
grows at most quadratically.

2. Omega Notation (Ω):

• Definition: Represents a lower bound on the growth rate of an algorithm. It provides


a lower limit on the rate of growth of a function in the best-case scenario.

• Use: Used to describe the lower bound or best-case time complexity of an algorithm.

• Example: If an algorithm has a time complexity of Ω(n), it means the running time
grows at least linearly.

3. Theta Notation (Θ):

• Definition: Represents both upper and lower bounds on the growth rate of an
algorithm. It provides a tight bound on the rate of growth of a function.

• Use: Used when the growth rate of an algorithm is known exactly; Θ(g(n)) means the
function is both O(g(n)) and Ω(g(n)), i.e., a tight bound.

• Example: If an algorithm has a time complexity of Θ(n), it means the running time
grows linearly.
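For reference, the three notations can be written formally (standard textbook definitions, not taken from the notes above):

```latex
f(n) = O(g(n)) \iff \exists\, c > 0,\ n_0 \ge 1 :\ 0 \le f(n) \le c\, g(n) \quad \forall n \ge n_0

f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 \ge 1 :\ f(n) \ge c\, g(n) \quad \forall n \ge n_0

f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \ \text{and}\ f(n) = \Omega(g(n))
```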

Objective and Use of Asymptotic Notations:


1. Objective:

• Analysis of Algorithm Efficiency: Asymptotic notations are used to analyze and


describe the efficiency of algorithms in terms of their running time or space
requirements as the input size grows.

2. Comparison of Algorithms:

• Asymptotic notations provide a framework for comparing the efficiency of different


algorithms without getting into the details of machine-specific constants or low-level
implementation details.

3. Sensitivity to Input Size:

• Asymptotic notations focus on the growth rate of algorithms with respect to the
input size. This allows for a more abstract and generalized understanding of
algorithm performance.

4. Simplification of Analysis:

• Asymptotic notations simplify the analysis of algorithms by focusing on the most


significant terms and ignoring constant factors and lower-order terms, providing a
high-level perspective.

5. Uniformity Across Architectures:

• Asymptotic notations provide a uniform and architecture-independent way to


discuss and compare algorithms. This is important because the efficiency of
algorithms should be evaluated independently of specific hardware or
implementation details.

Q.2 Sort the following sequence using Quick Sort Method. Consider first element as

the Pivot element.

15, 56, 62, 2, 9, 16, 21, 17, 23, 3, 10

Sure, let's use the Quick Sort method to sort the given sequence with the first element as the pivot.
Here are the steps:

1. Original Sequence: 15, 56, 62, 2, 9, 16, 21, 17, 23, 3, 10

2. Step 1 - Partitioning:

• Choose the first element (15) as the pivot.

• Rearrange the elements so that all elements less than the pivot are on its left,
and all elements greater than the pivot are on its right.

Result: 10, 2, 3, 9, 15, 16, 21, 17, 23, 56, 62

3. Step 2 - Recursive Steps:

• Apply the Quick Sort method recursively to the subarrays on the left and right of the
pivot.

• Left Subarray: 10, 2, 3, 9

• Pivot 10 partitions it into (9, 2, 3) and (); pivot 9 then partitions (9, 2, 3)
into (3, 2) and (); pivot 3 sorts (3, 2) into (2, 3).

• Result: 2, 3, 9, 10

• Right Subarray: 16, 21, 17, 23, 56, 62

• Pivot 16 has no smaller elements, leaving (21, 17, 23, 56, 62) on its right;
pivot 21 partitions that into (17) and (23, 56, 62); pivot 23 leaves (56, 62),
which pivot 56 sorts.

• Result: 16, 17, 21, 23, 56, 62

4. Final Sorted Sequence:

• Concatenate the sorted left subarray, the pivot, and the sorted right subarray.

Result: 2, 3, 9, 10, 15, 16, 17, 21, 23, 56, 62

So, the given sequence sorted using the Quick Sort method with the first element as the pivot is:

2, 3, 9, 10, 15, 16, 17, 21, 23, 56, 62
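The trace above can be checked with a short sketch (a simple out-of-place quicksort, always using the first element as the pivot):

```python
def quicksort(arr):
    """Quick sort using the first element as the pivot (out of place, for clarity)."""
    if len(arr) <= 1:
        return arr
    pivot = arr[0]
    left = [x for x in arr[1:] if x < pivot]    # elements less than the pivot
    right = [x for x in arr[1:] if x >= pivot]  # elements greater than or equal
    return quicksort(left) + [pivot] + quicksort(right)

seq = [15, 56, 62, 2, 9, 16, 21, 17, 23, 3, 10]
print(quicksort(seq))  # [2, 3, 9, 10, 15, 16, 17, 21, 23, 56, 62]
```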

Q.3 Find out the solution generated by the job sequencing. When n = 7 with
following profit &
deadline -

Profit (p1, p2, ..., p7) = (3, 5, 20, 18, 1, 6, 30)

Deadline (d1, d2, ..., d7) = (1, 3, 4, 3, 2, 1, 2)


ANS= To solve the job sequencing problem, where each job has a profit and a deadline, and the
objective is to maximize the total profit by scheduling jobs within their respective deadlines, the
following steps can be taken:

1. Sort jobs by profit in non-increasing order: Sort the jobs based on their profits in non-
increasing order. In this case, the sorted order is (P7, P3, P4, P6, P2, P1, P5) with profits
(30, 20, 18, 6, 5, 3, 1).

2. Allocate jobs to time slots within deadlines: The maximum deadline is 4, so at most 4 jobs
can be scheduled. Iterate through the sorted jobs and place each job in the latest free slot at
or before its deadline; if no such slot is free, reject the job.

• P7 (deadline 2): Time slot 2 (Profit = 30)

• P3 (deadline 4): Time slot 4 (Profit = 20)

• P4 (deadline 3): Time slot 3 (Profit = 18)

• P6 (deadline 1): Time slot 1 (Profit = 6)

• P2 (deadline 3): slots 3, 2, 1 are all occupied — rejected

• P1 (deadline 1): slot 1 is occupied — rejected

• P5 (deadline 2): slots 2, 1 are occupied — rejected

3. Calculate the total profit: The total profit is the sum of profits of the scheduled jobs. In this
case, the total profit is 30 + 20 + 18 + 6 = 74.

So, the solution generated by job sequencing for the given profits and deadlines is the schedule
P6, P7, P4, P3 in time slots 1 through 4, and the total profit is 74.
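The greedy procedure described above can be sketched as follows (slots are 1-indexed; each job goes into the latest free slot at or before its deadline):

```python
def job_sequencing(jobs):
    """Greedy job sequencing: jobs is a list of (name, profit, deadline)."""
    jobs = sorted(jobs, key=lambda j: j[1], reverse=True)  # by profit, descending
    max_deadline = max(d for _, _, d in jobs)
    slots = [None] * (max_deadline + 1)  # slots[1..max_deadline]; index 0 unused
    total = 0
    for name, profit, deadline in jobs:
        # place the job in the latest free slot at or before its deadline
        for t in range(deadline, 0, -1):
            if slots[t] is None:
                slots[t] = name
                total += profit
                break
    return slots[1:], total

jobs = [("P1", 3, 1), ("P2", 5, 3), ("P3", 20, 4), ("P4", 18, 3),
        ("P5", 1, 2), ("P6", 6, 1), ("P7", 30, 2)]
schedule, profit = job_sequencing(jobs)
print(schedule, profit)  # ['P6', 'P7', 'P4', 'P3'] 74
```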

Q.4 Solve the TSP problem for the following Cost Matrix -

W X Y Z
W [A 8 13 18]
X [3 A 7 8]
Y [4 11 A 10]
Z [6 6 7 A]
The Traveling Salesman Problem (TSP) involves finding the shortest possible route that visits a set of
cities and returns to the starting city. Given the cost matrix you provided, we'll solve the TSP using a
simple approach, assuming we start at city W.

Cost Matrix (A marks the diagonal, i.e., no self-loop):

      W   X   Y   Z
W  [  A   8  13  18 ]
X  [  3   A   7   8 ]
Y  [  4  11   A  10 ]
Z  [  6   6   7   A ]
Solution:

1. Step 1: Initial State

• Start at city W.

2. Step 2: Enumerate Tours

• With only four cities there are 3! = 6 distinct tours starting and ending at W, so we
can evaluate all of them and pick the cheapest.

• For example, W -> X -> Y -> Z -> W costs 8 + 7 + 10 + 6 = 31, while
W -> X -> Z -> Y -> W costs 8 + 8 + 7 + 4 = 27, which turns out to be the minimum.

3. Step 3: Return to Starting City

• After visiting all cities, the tour returns to the starting city (W).

Path:

• W -> X -> Z -> Y -> W

Total Cost:

• 8 + 8 + 7 + 4 = 27

So, the solution to the TSP for the given cost matrix is the path W -> X -> Z -> Y -> W, with a total
cost of 27; for this small instance, exhaustive enumeration confirms it is optimal. The TSP in general
is NP-hard, and more sophisticated methods (such as dynamic programming with bitmasks, branch and
bound, or integer linear programming) are used for larger instances.
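The enumeration above can be sketched as a brute-force search over all tours (cities encoded as W=0, X=1, Y=2, Z=3):

```python
from itertools import permutations

INF = float("inf")
# Cost matrix from the question; INF on the diagonal (no self-loop).
cost = [
    [INF, 8, 13, 18],  # W
    [3, INF, 7, 8],    # X
    [4, 11, INF, 10],  # Y
    [6, 6, 7, INF],    # Z
]

def tsp_brute_force(cost, start=0):
    """Try every tour starting and ending at `start`; return the cheapest."""
    n = len(cost)
    best_tour, best_cost = None, INF
    for perm in permutations(range(n)):
        if perm[0] != start:
            continue
        c = sum(cost[perm[i]][perm[(i + 1) % n]] for i in range(n))
        if c < best_cost:
            best_tour, best_cost = perm, c
    return best_tour, best_cost

tour, total = tsp_brute_force(cost)
print(tour, total)  # (0, 1, 3, 2) 27, i.e. W -> X -> Z -> Y -> W
```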

Q.5 Discuss Quadratic Assignment Problem using a suitable example.

The Quadratic Assignment Problem (QAP) is a combinatorial optimization problem that deals with
assigning a set of facilities to a set of locations, each with an associated cost, in such a way as to
minimize the total cost. The cost is quadratic, meaning it depends on the distance between the
facilities and the flow of objects between the facilities.

Problem Definition: Given n facilities and n locations, the QAP seeks the best permutation of
facilities to locations that minimizes the total cost, where the cost is the sum, over all pairs of
facilities, of the flow between them multiplied by the distance between their assigned locations.

Example: Let's consider a specific example to illustrate the Quadratic Assignment Problem.

• Facilities: A, B, C

• Locations: 1, 2, 3
• Flow Matrix (F):

Objective:

Find the assignment of facilities to locations that minimizes the total cost, considering both the flow
and distance matrices.

Solution:
To solve the QAP, one needs to explore different permutations of facility assignments and calculate
the total cost for each permutation. An optimal solution is the permutation that yields the minimum
total cost.

Due to the computational complexity of the QAP, exact solutions are often difficult to find for large
instances. Various heuristic and metaheuristic algorithms, such as simulated annealing, genetic
algorithms, and tabu search, are commonly employed to find good approximations to the optimal
solution.
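As a small concrete illustration (the flow matrix F and distance matrix D below are hypothetical, chosen only to make the example runnable, since the original notes omit the matrix data):

```python
from itertools import permutations

# Hypothetical symmetric flow matrix F (between facilities A, B, C)
# and distance matrix D (between locations 1, 2, 3).
F = [[0, 5, 2],
     [5, 0, 3],
     [2, 3, 0]]
D = [[0, 1, 2],
     [1, 0, 1],
     [2, 1, 0]]

def qap_brute_force(F, D):
    """Return the assignment (facility i -> location perm[i]) of minimum total cost."""
    n = len(F)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        # total cost = sum over facility pairs of flow * distance of assigned locations
        cost = sum(F[i][j] * D[perm[i]][perm[j]]
                   for i in range(n) for j in range(n))
        if cost < best_cost:
            best_perm, best_cost = perm, cost
    return best_perm, best_cost

perm, cost = qap_brute_force(F, D)
print(perm, cost)  # (0, 1, 2) 24
```

Brute force works only for tiny instances; as noted above, heuristics such as simulated annealing or tabu search are used in practice.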

Note:

The QAP arises in various practical applications, including manufacturing facility layout,
telecommunications network design, and data clustering. It represents a challenging optimization
problem with applications across different domains.

Q.6 Differentiate between Las Vegas and Monte Carlo Algorithm with example.

Las Vegas and Monte Carlo algorithms are two types of randomized algorithms,
but they differ in their approach and characteristics. Let's differentiate between
them with examples:

Las Vegas Algorithm:

1. Definition:

• Las Vegas algorithms always produce the correct result, but their
running time is randomized.

2. Characteristics:

• The algorithm may run for different durations on different


executions, but it always produces the correct output.

• The randomness is in the execution time rather than the result.

3. Example: Randomized QuickSort:


• QuickSort is an example of a Las Vegas algorithm.

• The choice of the pivot in QuickSort is randomized.

• The algorithm may have different running times on different inputs,


but it always produces a correctly sorted array.

Monte Carlo Algorithm:

1. Definition:

• Monte Carlo algorithms have a randomized output, and the result


may be incorrect with a certain probability.

2. Characteristics:

• The algorithm has a certain probability of producing an incorrect


result.

• The randomness is in the result rather than the execution time.

3. Example: Primality Testing (Miller-Rabin):

• Miller-Rabin algorithm is an example of a Monte Carlo algorithm for


primality testing.

• It may incorrectly claim a composite number as prime with a certain


probability.

• The probability of error can be reduced by running the algorithm


multiple times.

Comparison:

• Deterministic vs. Randomized Output:

• Las Vegas: Always produces a correct result, but the running time is
randomized.
• Monte Carlo: May produce an incorrect result with a certain
probability.

• Example:

• Las Vegas: Randomized QuickSort is an example where the output


is always correct, but the running time is randomized.

• Monte Carlo: Miller-Rabin primality testing is an example where the


output may be incorrect with a certain probability.

• Probability of Error:

• Las Vegas: Zero probability of error in the output.

• Monte Carlo: Non-zero probability of error in the output.

In summary, Las Vegas algorithms always produce the correct result, but their
running time is randomized. On the other hand, Monte Carlo algorithms may
produce an incorrect result with a certain probability, and the randomness is in
the output rather than the execution time.
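A minimal sketch of the contrast: a randomized quicksort (Las Vegas — the output is always correct) and a Fermat primality test (Monte Carlo — the answer "prime" can be wrong with small probability; the Miller-Rabin test mentioned above is the stronger standard choice, Fermat is used here only for brevity):

```python
import random

def las_vegas_quicksort(arr):
    """Las Vegas: random pivot -> randomized running time, always a correct result."""
    if len(arr) <= 1:
        return arr
    pivot = random.choice(arr)
    less = [x for x in arr if x < pivot]
    equal = [x for x in arr if x == pivot]
    greater = [x for x in arr if x > pivot]
    return las_vegas_quicksort(less) + equal + las_vegas_quicksort(greater)

def monte_carlo_is_prime(n, k=20):
    """Monte Carlo (Fermat test): may wrongly report a composite as prime;
    the error probability shrinks as the number of rounds k grows."""
    if n < 4:
        return n in (2, 3)
    for _ in range(k):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:
            return False  # definitely composite
    return True  # probably prime (could be wrong for some composites)

print(las_vegas_quicksort([5, 1, 4, 2, 3]))  # always [1, 2, 3, 4, 5]
print(monte_carlo_is_prime(97))              # True (97 is prime)
```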

Q.7 Prove that the Hamiltonian Cycle Problem is NP-Complete.

https://www.geeksforgeeks.org/proof-that-hamiltonian-cycle-is-np-complete/

PART C

--------------------------------------

Q.1 Solve the recurrence using Master theorem -


(a) T(n)= T(n^0.5) + C
(b) T(n) = 8T(n/2)+n^2
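Worked solutions (standard results; note that (a) is not directly in Master-theorem form and is solved by substitution):

```latex
\textbf{(a)}\quad T(n) = T(\sqrt{n}) + c.\quad
\text{Let } m = \log_2 n \text{ and } S(m) = T(2^m):\ S(m) = S(m/2) + c
\Rightarrow S(m) = \Theta(\log m),\ \text{so } T(n) = \Theta(\log \log n).

\textbf{(b)}\quad T(n) = 8\,T(n/2) + n^2.\quad
a = 8,\ b = 2,\ n^{\log_b a} = n^3.\
\text{Since } n^2 = O(n^{3-\epsilon}) \text{ for } \epsilon = 1 \text{ (Case 1)},\
T(n) = \Theta(n^3).
```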
Q.2 Discuss matrix chain multiplication with reference to dynamic

programming technique. Explain 0/1 Knapsack Problem with suitable

example

Matrix Chain Multiplication with Dynamic Programming:

Matrix Chain Multiplication is an optimization problem that involves

finding the most efficient way to multiply a given sequence of matrices.

The objective is to minimize the number of scalar multiplications

needed to compute the product.

Problem Statement:

Given a sequence of matrices A1, A2, ..., An, where the dimensions of the i-th matrix Ai are
p(i-1) x p(i), find the most efficient way to parenthesize the product A1 . A2 . ... . An to
minimize the total number of scalar multiplications.

Dynamic Programming Approach:

1. Define Subproblems:

• Define subproblems by considering all possible ways to parenthesize the product.
For a subproblem (i, j), find the optimal parenthesization for the matrix product
Ai . A(i+1) . ... . Aj.

2. Formulate Recursive Relations:

• Let m[i, j] represent the minimum number of scalar multiplications needed to
compute the product Ai . A(i+1) . ... . Aj.

• The recursive relation is, with m[i, i] = 0:

m[i, j] = min over i <= k < j of { m[i, k] + m[k+1, j] + p(i-1) * p(k) * p(j) }

3. Build Bottom-Up Table:

• Use a bottom-up approach to fill the table m[i, j] for all subproblems, in order of
increasing chain length.

• The result m[1, n] provides the minimum number of scalar multiplications needed
for the entire matrix chain.

4. Construct Optimal Parenthesization:

• To construct the optimal parenthesization, keep track of the value of k that gives
the minimum number of scalar multiplications in each subproblem.
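The bottom-up table-filling described above can be sketched as follows (only the minimum cost is computed, not the parenthesization itself):

```python
def matrix_chain_order(p):
    """Minimum scalar multiplications to compute A1..An,
    where Ai has dimensions p[i-1] x p[i]."""
    n = len(p) - 1  # number of matrices
    # m[i][j] = min cost of multiplying Ai..Aj (1-indexed; m[i][i] = 0)
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):          # increasing chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = min(m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                          for k in range(i, j))
    return m[1][n]

# A1: 10x30, A2: 30x5, A3: 5x60 -> (A1 A2) A3 costs 1500 + 3000 = 4500
print(matrix_chain_order([10, 30, 5, 60]))  # 4500
```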


0/1 Knapsack Problem:

The 0/1 Knapsack Problem is a classic optimization problem in combinatorial optimization. Given a
set of items, each with a weight and a value, determine the maximum value that can be obtained by
selecting a subset of the items such that the total weight does not exceed a given capacity.

Problem Statement: Given n items with weights w1, ..., wn, values v1, ..., vn, and a knapsack of
capacity W, choose xi in {0, 1} to maximize the total value sum(vi * xi) subject to
sum(wi * xi) <= W. The standard dynamic programming solution builds a table
K[i][w] = best value using the first i items with capacity w, via
K[i][w] = max(K[i-1][w], K[i-1][w - wi] + vi) when wi <= w.

Example: With items (weight, value) = (1, 1), (3, 4), (4, 5), (5, 7) and capacity W = 7, the best
choice is the items of weights 3 and 4, giving total value 4 + 5 = 9.
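A compact sketch of the 0/1 knapsack dynamic program (using the common 1-D table optimization, iterating capacities downward so each item is used at most once; the example item data is illustrative only):

```python
def knapsack_01(weights, values, capacity):
    """Bottom-up 0/1 knapsack: K[w] = best value achievable with capacity w."""
    K = [0] * (capacity + 1)
    for wt, val in zip(weights, values):
        # iterate capacities downward so each item is taken at most once
        for w in range(capacity, wt - 1, -1):
            K[w] = max(K[w], K[w - wt] + val)
    return K[capacity]

# Items (weight, value): (1,1), (3,4), (4,5), (5,7); capacity 7 -> take weights 3 and 4
print(knapsack_01([1, 3, 4, 5], [1, 4, 5, 7], 7))  # 9
```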
Q.3 (a) Find the pattern ABCBC in the test ACABABCABCBCA using KMP Matcher.

(b) Discuss Naive String Matching Algorithm in detail.

(a) KMP Matcher for Pattern Searching:

The Knuth-Morris-Pratt (KMP) algorithm is a string-searching algorithm that efficiently finds


occurrences of a pattern within a text. It avoids unnecessary character comparisons by using
information from previous comparisons.

Steps for KMP Matcher:

1. Preprocessing (Compute LPS Array):

• Compute the Longest Prefix Suffix (LPS) array for the given pattern.

• LPS[i] represents the length of the longest proper prefix of the pattern that is also a
suffix of the pattern, considering the first i characters.

2. Pattern Matching:
• Traverse the text and pattern simultaneously, using the LPS array to skip unnecessary
comparisons.

• If a mismatch occurs, use the information from the LPS array to determine the next
position to compare.

Example: Finding Pattern "ABCBC" in Text "ACABABCABCBCA"

1. Compute LPS Array:

• Pattern: ABCBC

• LPS: [0, 0, 0, 0, 0] (no proper prefix of ABCBC is also a suffix: every proper prefix
starts with 'A' and no proper suffix does)

2. Pattern Matching (text: ACABABCABCBCA, 1-indexed):

• Alignment at position 1: 'A' matches, then text 'C' vs pattern 'B' mismatches; since
the LPS value is 0, shift the pattern past the mismatch.

• Alignment at position 3: 'A', 'B' match, then text 'A' vs pattern 'C' mismatches; shift.

• Alignment at position 5: 'A', 'B', 'C' match, then text 'A' vs pattern 'B' mismatches; shift.

• Alignment at position 8: 'A', 'B', 'C', 'B', 'C' all match.

The pattern ABCBC occurs in the text starting at position 8.
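The LPS computation and the search can be sketched as follows (0-based indices):

```python
def compute_lps(pattern):
    """lps[i] = length of the longest proper prefix of pattern[:i+1]
    that is also a suffix of it."""
    lps = [0] * len(pattern)
    length = 0
    for i in range(1, len(pattern)):
        while length > 0 and pattern[i] != pattern[length]:
            length = lps[length - 1]  # fall back to a shorter border
        if pattern[i] == pattern[length]:
            length += 1
        lps[i] = length
    return lps

def kmp_search(text, pattern):
    """Return all 0-based start indices where pattern occurs in text."""
    lps, matches, j = compute_lps(pattern), [], 0
    for i, ch in enumerate(text):
        while j > 0 and ch != pattern[j]:
            j = lps[j - 1]  # skip comparisons using the LPS array
        if ch == pattern[j]:
            j += 1
        if j == len(pattern):
            matches.append(i - j + 1)
            j = lps[j - 1]
    return matches

print(compute_lps("ABCBC"))                  # [0, 0, 0, 0, 0]
print(kmp_search("ACABABCABCBCA", "ABCBC"))  # [7]  (position 8 if 1-based)
```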

(b) Naive String Matching Algorithm:

The Naive String Matching Algorithm, also known as the Brute Force or Simple Matching
Algorithm, is a straightforward approach to pattern matching. It involves comparing each
substring of the text with the pattern until a match is found or the end of the text is reached.

Steps for Naive String Matching:

1. Iterate Through Text:

• For each position i in the text, compare the substring starting at i with the pattern.

2. Pattern Matching:

• If a match is found, record the position where the match starts.

3. Continue Iterating:

• Continue the process until the end of the text is reached.

Example:

Consider the text "ABCABCABC" and the pattern "ABC". The steps would be as follows:

1. Compare "ABC" with "ABC" (match at position 1).

2. Move to the next position and compare "BCA" with "ABC" (no match).

3. Move to the next position and compare "CAB" with "ABC" (no match).

4. Continue the process until the end of the text; further matches are found at positions 4 and 7.

Discussion:

• Time Complexity:

• In the worst case, the Naive String Matching Algorithm has a time complexity of O((n-
m+1)m), where n is the length of the text and m is the length of the pattern.

• Space Complexity:

• The algorithm has a constant space complexity as it does not require additional data
structures.

While the Naive String Matching Algorithm is simple, it may not be efficient for large texts and
patterns. More advanced algorithms like KMP and Boyer-Moore provide better time
complexities for pattern searching.
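The naive algorithm described above can be sketched as follows (0-based indices):

```python
def naive_search(text, pattern):
    """Compare the pattern against every alignment of the text: O((n-m+1)*m)."""
    n, m = len(text), len(pattern)
    matches = []
    for i in range(n - m + 1):
        if text[i:i + m] == pattern:  # character-by-character comparison
            matches.append(i)
    return matches

print(naive_search("ABCABCABC", "ABC"))  # [0, 3, 6]  (positions 1, 4, 7 if 1-based)
```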

Q.4 (a) Define the terms flow networks and flow. Explain the essential properties of
flow.

(b) Discuss the terms (i) Residual Network (ii) Augmenting Path

(a) A flow network is a directed graph G = (V, E) in which every edge (u, v) has a non-negative
capacity c(u, v), together with two distinguished vertices: a source s and a sink t. A flow is a
function f(u, v) on the edges that describes how much material moves along each edge. The essential
properties of a flow are:

• Capacity constraint: 0 <= f(u, v) <= c(u, v) for every edge; no edge carries more than its
capacity.

• Flow conservation: for every vertex other than s and t, the total flow entering the vertex
equals the total flow leaving it.

The value of a flow is the net amount leaving the source, and the maximum-flow problem asks for a
flow of maximum value.

(b) (i) Residual Network: given a flow f, the residual network Gf contains, for each edge (u, v), a
forward residual capacity c(u, v) - f(u, v) and a backward residual capacity f(u, v) (which allows
flow to be "undone"). Edges with zero residual capacity are omitted.

(ii) Augmenting Path: an augmenting path is a simple path from s to t in the residual network.
Pushing flow equal to the minimum residual capacity along this path increases the total flow; the
Ford-Fulkerson method repeats this until no augmenting path remains.

Q.5 (a) Describe the terms P, NP, NP-Hard, NP-Complete with suitable examples.
Also, give the relationship between them.

(b) Write algorithm for approximation for vertex cover problem

with example

(a) Complexity Classes and Their Relationships:

1. P (Polynomial Time):

• Definition: P consists of decision problems that can be solved in polynomial

time by a deterministic Turing machine.


• Example: Sorting an array, finding the shortest path in a graph using Dijkstra's

algorithm.

2. NP (Non-deterministic Polynomial Time):

• Definition: NP consists of decision problems for which a solution can be

checked quickly (in polynomial time) by a deterministic Turing machine.

• Example: The subset sum problem – given a set of integers, is there a subset

that sums to zero?

3. NP-Hard (Non-deterministic Polynomial Time Hard):

• Definition: NP-Hard consists of problems that are at least as hard as the

hardest problems in NP.

• Example: The traveling salesman problem is NP-Hard.

4. NP-Complete (Non-deterministic Polynomial Time Complete):

• Definition: NP-Complete consists of problems that are both in NP and NP-

Hard. If a polynomial-time algorithm exists for any NP-Complete problem,

then a polynomial-time algorithm exists for all problems in NP (P = NP).

• Example: The boolean satisfiability problem (SAT) is NP-Complete.

Relationships:

1. P ⊆ NP: Every problem that can be solved in polynomial time is also in NP.

2. NP-Hard problems need not be in NP: they are at least as hard as the hardest

problems in NP, but may not themselves be verifiable in polynomial time (for

example, the halting problem is NP-Hard but not in NP). NP-Complete = NP ∩ NP-Hard.

3. P = NP or P ≠ NP: The million-dollar question. Whether P equals NP is one of

the most significant unsolved problems in computer science. If P equals NP,

then every problem in NP has a polynomial-time algorithm.

4. NP-Complete problems are the hardest in NP: Every problem in NP can be

polynomial-time reduced to an NP-Complete problem. If a polynomial-time

algorithm exists for any NP-Complete problem, then it exists for all problems

in NP.

(b) Write algorithm for approximation for vertex cover problem with example

https://www.javatpoint.com/daa-approximation-algorithm-vertex-cover
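The standard 2-approximation for vertex cover (the approach covered at the link above) can be sketched as: repeatedly pick any uncovered edge, add both of its endpoints to the cover, and discard all edges they cover. The returned cover is at most twice the size of an optimal one.

```python
def approx_vertex_cover(edges):
    """2-approximation: pick an uncovered edge (u, v), add both endpoints."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:  # edge still uncovered
            cover.add(u)
            cover.add(v)
    return cover

# Example path graph 0-1-2-3: optimal cover is {1, 2}; the approximation
# may return {0, 1, 2, 3}, within the 2x guarantee.
edges = [(0, 1), (1, 2), (2, 3)]
print(approx_vertex_cover(edges))  # {0, 1, 2, 3}
```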
