
UNIT I INTRODUCTION

1. What is an algorithm?
An algorithm is a sequence of unambiguous instructions for solving a problem,
i.e., for obtaining a required output for any legitimate input in a finite amount of time.

2. What do you mean by Amortized Analysis?


Amortized analysis finds the average running time per operation over a worst case
sequence of operations. Amortized analysis differs from average-case performance in that
probability is not involved; amortized analysis guarantees the time per operation over
worst-case performance.

3. What are important problem types? (or) Enumerate some important types of
problems.
1. Sorting 2. Searching
3. Numerical problems 4. Geometric problems
5. Combinatorial Problems 6. Graph Problems
7. String processing Problems

4. Name some basic Efficiency classes


1. Constant 2. Logarithmic 3. Linear 4. nlogn
5. Quadratic 6. Cubic 7. Exponential 8. Factorial

5. What are algorithm design techniques?


Algorithm design techniques (or strategies, or paradigms) are general approaches
to solving problems algorithmically, applicable to a variety of problems from different
areas of computing. General design techniques are:
(i) Brute force (ii) divide and conquer
(iii) decrease and conquer (iv) transform and conquer
(v) greedy technique (vi) dynamic programming
(vii) backtracking (viii) branch and bound

6. How is an algorithm’s time efficiency measured?


Time efficiency indicates how fast the algorithm runs. An algorithm’s time
efficiency is measured as a function of its input size by counting the number of times its
basic operation is executed. The basic operation is the most time-consuming
operation in the algorithm’s innermost loop, and its count dominates the running time.

7. What is Big ‘Oh’ notation?


A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded above
by some constant multiple of g(n) for all large n, i.e., if there exist some positive constant
c and some nonnegative integer n0 such that
t(n) ≤ c·g(n) for all n ≥ n0

8. What is an Activation frame?


It is a storage area for an invocation of a recursive program (parameters, local
variables, return address/value, etc.). An activation frame is allocated from the frame stack,
which is pointed to by the frame pointer.

9. Define order of an algorithm


Measuring the performance of an algorithm in relation with the input size n is
known as order of growth.

10. What is recursive call?


A recursive algorithm makes one or more calls to itself; each such call is a recursive
call. An algorithm that calls itself directly is direct recursive. An algorithm “A” is said to be
indirect recursive if it calls another algorithm which in turn calls “A”.

11. What do you mean by stepwise refinement?


In the top-down design methodology, solving the problem in a sequence of steps,
each refining the previous one, is known as stepwise refinement.

12. How is the efficiency of the algorithm defined?


The efficiency of an algorithm is defined with two components:
(i) Time efficiency -indicates how fast the algorithm runs
(ii) Space efficiency -indicates how much extra memory the algorithm needs

13. Define direct recursive and indirect recursive algorithms.


Recursion occurs where the definition of an entity refers to the entity itself.
Recursion can be direct, when an entity refers to itself directly, or indirect, when it refers to
other entities which refer to it. A (directly) recursive routine calls itself. Mutually
recursive routines are an example of indirect recursion. A (directly) recursive data type
contains pointers to instances of the data type.

14. What are the characteristics of an algorithm?


Every algorithm should have the following five characteristics
(i) Input
(ii) Output
(iii) Definiteness
(iv) Effectiveness
(v) Termination
Therefore, an algorithm can be defined as a sequence of definite and effective
instructions, which terminates with the production of correct output from the given input.
In other words, viewed a little more formally, an algorithm is a step-by-step formalization
of a mapping function that maps an input set onto an output set.

15. What do you mean by time complexity and space complexity of an algorithm?
Time complexity indicates how fast the algorithm runs. Space complexity deals with
the extra memory it requires. Time efficiency is analyzed by determining the number of
repetitions of the basic operation as a function of input size. Basic operation: the
operation that contributes most towards the running time of the algorithm. The running
time of an algorithm is the function defined by the number of steps (or amount of
memory) required to solve input instances of size n.

16. Define Big Omega Notations


A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is bounded below
by some positive constant multiple of g(n) for all large n, i.e., if there exist some positive
constant c and some nonnegative integer n0 such that
t(n) ≥ c·g(n) for all n ≥ n0

17. What are the different criteria used to improve the effectiveness of algorithm?
(i) The effectiveness of an algorithm is improved when the design minimizes the
following:
Time efficiency - how fast the algorithm in question runs.
Space efficiency - the extra space the algorithm requires.
(ii) The algorithm has to provide a result for all valid inputs.

18. Analyze the time complexity of the following segment:


for(i=0; i<N; i++)
    for(j=N/2; j>0; j--)
        sum++;
Time Complexity = N * N/2
                = N²/2
                ∈ O(N²)
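The count of basic operations can be checked empirically with a minimal sketch (the function name and test size are illustrative):

```python
# Count how many times the basic operation (sum++) executes
# in the nested loops above: N outer iterations, N/2 inner each.
def basic_op_count(N):
    count = 0
    for i in range(N):
        for j in range(N // 2, 0, -1):
            count += 1
    return count

print(basic_op_count(10))  # 10 * (10 // 2) = 50
```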

19. Write the general plan for analyzing non-recursive algorithms.


i. Decide on a parameter indicating an input’s size.
ii. Identify the algorithm’s basic operation.
iii. Check whether the number of times the basic operation is executed depends only on the
size of the input; if it also depends on some additional property, then the best, worst, and
average cases need to be investigated separately.
iv. Set up a sum expressing the number of times the basic operation is executed, and
simplify it (establishing the order of growth).

20. How will you measure input size of algorithms?


The time taken by an algorithm grows with the size of the input, so the running
time of a program depends on the size of its input. The input size is measured as the
number of items in the input; a parameter n indicates the algorithm’s input size.
21. Define the terms: pseudocode, flow chart
A pseudocode is a mixture of a natural language and programming language like
constructs. A pseudocode is usually more precise than natural language. A flowchart is a
method of expressing an algorithm by a collection of connected geometric shapes
containing descriptions of the algorithm’s steps.

22. Write the general plan for analyzing recursive algorithms.


i. Decide on a parameter indicating an input’s size.
ii. Identify the algorithm’s basic operation.
iii. Check whether the number of times the basic operation is executed depends only on the
size of the input; if it also depends on some additional property, then the best, worst, and
average cases need to be investigated separately.
iv. Set up a recurrence relation, with an appropriate initial condition, for the number
of times the basic operation is executed.
v. Solve the recurrence (establishing the order of growth).

23. What do you mean by Combinatorial Problem?


Combinatorial problems are problems that ask us to find a combinatorial object, such
as a permutation, a combination, or a subset, that satisfies certain constraints and has some
desired property.

24. Define Big Theta Notations


A function t(n) is said to be in Θ(g(n)), denoted t(n) ∈ Θ(g(n)), if t(n) is bounded both
above and below by some positive constant multiples of g(n) for all large n, i.e., if there
exist some positive constants c1 and c2 and some nonnegative integer n0 such that
c1·g(n) ≤ t(n) ≤ c2·g(n) for all n ≥ n0

25. What is performance measurement?


Performance measurement is concerned with obtaining the space and the time
requirements of a particular algorithm.

26. What is an algorithm?


An algorithm is a finite set of instructions that, if followed, accomplishes a
particular task. In addition, all algorithms must satisfy the following criteria:
1) Input
2) Output
3) Definiteness
4) Finiteness
5) Effectiveness.

27. Define Program.


A program is the expression of an algorithm in a programming language.
Sometimes words such as procedure, function, and subroutine are used synonymously
with program.

28. What is recursive algorithm?


An algorithm is said to be recursive if the same algorithm is invoked in the body.

29. What is space complexity and time complexity ?


The space complexity of an algorithm is the amount of memory it needs to run to
completion. The time complexity of an algorithm is the amount of computer time it needs
to run to completion.

30. Give the two major phases of performance evaluation


Performance evaluation can be loosely divided into two major phases:
(i) a priori estimates (performance analysis)
(ii) a posteriori testing (performance measurement)

31. Define input size.


The input size of any instance of a problem is defined to be the number of
words (or the number of elements) needed to describe that instance.

32. Define best-case step count.


The best-case step count is the minimum number of steps that can be executed for
the given parameters.

33. Define worst-case step count.


The worst-case step count is the maximum number of steps that can be executed
for the given parameters.

34. Define average step count.


The average step count is the average number of steps executed on instances with
the given parameters.

35. Define Little “oh”.


The function f(n) = o(g(n)) iff
lim (n→∞) f(n)/g(n) = 0

36. Define Little Omega.


The function f(n) = ω(g(n)) iff
lim (n→∞) g(n)/f(n) = 0

37. Write an algorithm using an iterative function to find the sum of n numbers.


Algorithm sum(a, n)
{
    S := 0.0;
    for i := 1 to n do
        S := S + a[i];
    return S;
}
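The iterative pseudocode translates directly into Python (a minimal sketch; the function name is illustrative):

```python
def iterative_sum(a):
    """Iterative sum of n numbers, mirroring Algorithm sum(a, n)."""
    s = 0.0
    for x in a:       # one pass over a[1..n]
        s += x
    return s

print(iterative_sum([1, 2, 3]))  # 6.0
```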

38. Write an algorithm using a recursive function to find the sum of n numbers.


Algorithm Rsum(a, n)
{
    if (n ≤ 0) then
        return 0.0;
    else
        return Rsum(a, n-1) + a[n];
}
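A Python sketch of the recursive version (0-based indexing instead of the pseudocode's 1-based a[n]):

```python
def rsum(a, n):
    """Recursive sum of a[0..n-1], mirroring Rsum(a, n)."""
    if n <= 0:
        return 0.0
    return rsum(a, n - 1) + a[n - 1]   # a[n-1] is the n-th element, 0-based

print(rsum([1, 2, 3, 4], 4))  # 10.0
```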

39. Define the divide-and-conquer method.


Given a function to compute on ‘n’ inputs, the divide-and-conquer strategy
suggests splitting the inputs into ‘k’ distinct subsets, 1 < k ≤ n, yielding ‘k’ subproblems.
The subproblems must be solved, and then a method must be found to combine the
subsolutions into a solution of the whole. If the subproblems are still relatively large, then
the divide-and-conquer strategy can possibly be reapplied.

40. Define control abstraction.


By a control abstraction we mean a procedure whose flow of control is clear but
whose primary operations are specified by other procedures whose precise meanings are
left undefined.
Write the Control abstraction for Divide-and conquer.
Algorithm DAndC(P)
{
    if Small(P) then return S(P);
    else
    {
        divide P into smaller instances P1, P2, ..., Pk, k ≥ 1;
        apply DAndC to each of these subproblems;
        return Combine(DAndC(P1), DAndC(P2), ..., DAndC(Pk));
    }
}

41.What is the substitution method?


One of the methods for solving any such recurrence relation is called the
substitution method.

42. What is the binary search?


If ‘q’ is always chosen such that ‘aq’ is the middle element (that is, q = ⌊(n+1)/2⌋),
then the resulting search algorithm is known as binary search.

43. Give computing time for Binary search?


The computing time of binary search can be completely described by giving formulas
for the best, average, and worst cases.
Successful searches:
Θ(1) best, Θ(log n) average, Θ(log n) worst
Unsuccessful searches:
Θ(log n) in the best, average, and worst cases

45. Define external path length?


The external path length E is defined analogously as the sum of the distances of all
external nodes from the root.

46. Define internal path length.


The internal path length ‘I’ is the sum of the distances of all internal nodes from the root.

47. What is the maximum and minimum problem?


The problem is to find the maximum and minimum items in a set of ‘n’ elements. Though
this problem may look so simple as to be contrived, it allows us to demonstrate
divide-and-conquer in a simple setting.

48. What is the Quick sort?


Quicksort is a divide-and-conquer strategy that works by partitioning its input
elements according to their value relative to some preselected element (the pivot). It uses
recursion, and the method is also called partition-exchange sort.

49. Write the Analysis for the Quick sort.


O(n log n) in the average and best cases
O(n²) in the worst case
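A minimal sketch of quicksort as described in question 48 (taking the first element as the preselected pivot; an in-place partition-exchange version would avoid the extra lists):

```python
def quicksort(a):
    """Partition around a pivot, then sort each side recursively.
    Average/best case O(n log n), worst case O(n^2)."""
    if len(a) <= 1:
        return a
    pivot = a[0]                              # preselected element
    less = [x for x in a[1:] if x < pivot]    # values below the pivot
    rest = [x for x in a[1:] if x >= pivot]   # values at or above it
    return quicksort(less) + [pivot] + quicksort(rest)

print(quicksort([3, 1, 2]))  # [1, 2, 3]
```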
50. What is Merge sort? Is insertion sort better than merge sort?
Merge sort is a divide-and-conquer strategy that works by dividing an input array into
two halves, sorting them recursively, and then merging the two sorted halves to get the
original array sorted.
Insertion sort works exceedingly fast on arrays of fewer than about 16 elements, though
for large ‘n’ its computing time is O(n²).

51. Write a algorithm for straightforward maximum and minimum?


Algorithm StraightMaxMin(a, n, max, min)
// Set max to the maximum and min to the minimum of a[1:n]
{
    max := min := a[1];
    for i := 2 to n do
    {
        if (a[i] > max) then max := a[i];
        if (a[i] < min) then min := a[i];
    }
}
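A runnable version of the straightforward max-min scan (note that the minimum test must use `<`; the function name is illustrative):

```python
def straight_max_min(a):
    """Single scan over a[0..n-1]: 2(n-1) comparisons to find max and min."""
    mx = mn = a[0]          # initialize both to the first element
    for x in a[1:]:
        if x > mx:
            mx = x
        if x < mn:
            mn = x
    return mx, mn

print(straight_max_min([3, 1, 4, 1, 5]))  # (5, 1)
```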

52. What is the general divide-and-conquer recurrence?


The time efficiency T(n) of many divide-and-conquer algorithms satisfies the equation
T(n) = a·T(n/b) + f(n). This is the general recurrence relation.
53. What is Master’s theorem?
For the general divide-and-conquer recurrence T(n) = a·T(n/b) + f(n), with f(n) ∈ Θ(n^d), d ≥ 0:
T(n) ∈ Θ(n^d) if a < b^d
T(n) ∈ Θ(n^d log n) if a = b^d
T(n) ∈ Θ(n^(log_b a)) if a > b^d
54. Write the algorithm for Iterative binary search?
Algorithm BinSearch(a, n, x)
// Given an array a[1:n] of elements in nondecreasing
// order, n > 0, determine whether x is present
{
    low := 1;
    high := n;
    while (low ≤ high) do
    {
        mid := ⌊(low + high)/2⌋;
        if (x < a[mid]) then high := mid - 1;
        else if (x > a[mid]) then low := mid + 1;
        else return mid;
    }
    return 0;
}
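An equivalent runnable version (0-based indexing, returning -1 instead of 0 for absence; note the loop must continue while low ≤ high):

```python
def bin_search(a, x):
    """Iterative binary search on a sorted list; returns an index or -1."""
    low, high = 0, len(a) - 1
    while low <= high:
        mid = (low + high) // 2
        if x < a[mid]:
            high = mid - 1     # search the left half
        elif x > a[mid]:
            low = mid + 1      # search the right half
        else:
            return mid         # found
    return -1                  # unsuccessful search

print(bin_search([1, 3, 5, 7], 5))  # 2
```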

55. Describe the recurrence relation for merge sort?


If the time for the merging operation is proportional to n, then the computing time of
merge sort is described by the recurrence relation
T(n) = a              for n = 1, a a constant
T(n) = 2T(n/2) + c·n  for n > 1, c a constant
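The recurrence mirrors the structure of the algorithm itself: two recursive calls on halves plus a linear merge. A minimal Python sketch:

```python
def merge_sort(a):
    """Divide in half, sort each half recursively, merge: T(n) = 2T(n/2) + c*n."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # linear-time merge
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]      # append the leftover tail

print(merge_sort([5, 2, 4, 1]))  # [1, 2, 4, 5]
```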

UNIT II GREEDY METHOD

The general method- Optimal storage on tapes- Knapsack problem-


Minimum spanning trees- Single source shortest path method

1. Explain the greedy method.


The greedy method is an important design technique, which makes the choice that
looks best at the moment. Given ‘n’ inputs, we are required to obtain a subset that
satisfies some constraints; such a subset is a feasible solution. The greedy method suggests
that one can devise an algorithm that works in stages, considering one input at a time.

2. Define feasible and optimal solution.


Given n inputs, if we are required to form a subset such that it satisfies some
given constraints, then such a subset is called a feasible solution. A feasible solution
that either maximizes or minimizes the given objective function is called an optimal
solution.
3. Write the control abstraction for greedy method.
Algorithm Greedy(a, n)
{
    solution := Ø;
    for i := 1 to n do
    {
        x := Select(a);
        if Feasible(solution, x) then
            solution := Union(solution, x);
    }
    return solution;
}

4. What are the constraints of knapsack problem?


To maximize Σ pi·xi (1 ≤ i ≤ n)
subject to the constraint: Σ wi·xi ≤ m (1 ≤ i ≤ n) and 0 ≤ xi ≤ 1, 1 ≤ i ≤ n,
where m is the bag capacity, n is the number of objects, and for each object i, wi
and pi are the weight and profit of the object respectively.
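The fractional knapsack problem defined by these constraints can be solved greedily by profit-to-weight ratio (a minimal sketch; the function name is illustrative):

```python
def fractional_knapsack(weights, profits, m):
    """Greedy: take items in decreasing profit/weight order,
    taking a fraction of the last item if the bag capacity m runs out."""
    items = sorted(zip(weights, profits),
                   key=lambda wp: wp[1] / wp[0], reverse=True)
    total = 0.0
    for w, p in items:
        if m <= 0:
            break
        take = min(w, m)            # fraction x_i = take / w
        total += p * take / w
        m -= take
    return total

print(fractional_knapsack([10, 20, 30], [60, 100, 120], 50))  # 240.0
```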

5. What is a minimum cost spanning tree?


A spanning tree of a connected graph is a connected acyclic subgraph that
contains all vertices of the graph. A minimum spanning tree of a weighted
connected graph is a spanning tree of the smallest weight, where the weight of the
tree is the sum of the weights of all its edges.
A minimum spanning subtree of a weighted graph (G, w) is a spanning subtree of
G of minimum weight w(T) = Σ w(e), e ∈ T.
Minimum Spanning Subtree Problem: Given a weighted connected undirected graph
(G, w), find a minimum spanning subtree.
6. Specify the algorithms used for constructing Minimum cost spanning tree.
a) Prim’s Algorithm
b) Kruskal’s Algorithm

7. State single source shortest path algorithm (Dijkstra’s algorithm).


For a given vertex called the source in a weighted connected
graph, find the shortest paths to all its other vertices. Dijkstra’s algorithm applies to
graphs with non-negative weights only.
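Dijkstra's algorithm can be sketched with a min-heap priority queue (an illustrative implementation; the adjacency-list format `{u: [(v, w), ...]}` is an assumption):

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source over non-negative edge weights."""
    dist = {source: 0}
    pq = [(0, source)]                      # (distance, vertex) min-heap
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue                        # stale queue entry, skip
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd                # relax edge (u, v)
                heapq.heappush(pq, (nd, v))
    return dist

g = {'a': [('b', 1), ('c', 4)], 'b': [('c', 2)], 'c': []}
print(dijkstra(g, 'a'))  # {'a': 0, 'b': 1, 'c': 3}
```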

8. What is Knapsack problem?


A bag or sack of given capacity m and n objects are given. Each object has weight
wi and profit pi. The fraction of object i taken is xi (i.e., 0 ≤ xi ≤ 1). If the fraction is 1,
the entire object is put into the sack. When we place fraction xi into the sack we get
weight wi·xi and profit pi·xi.

9. Write any two characteristics of Greedy Algorithm?


* To solve a problem in an optimal way, construct the solution from the given set of
candidates.
* As the algorithm proceeds, two other sets get accumulated: one set contains the
candidates that have been already considered and chosen, while the other set contains
the candidates that have been considered but rejected.
10. What is the Greedy approach?
The method suggests constructing the solution through a sequence of steps, each
expanding the partially constructed solution obtained so far, until a complete solution is
reached. At each step, the choice must be
• Feasible (satisfy the problem’s constraints)
• Locally optimal (the best local choice among all feasible choices available at that
step)
• Irrevocable (once made, it cannot be changed)
11. What are the steps required to develop a greedy algorithm?
* Determine the optimal substructure of the problem.
* Develop a recursive solution.
* Prove that at any stage of recursion one of the optimal choices is greedy choice.
Thus it is always safe to make greedy choice.
* Show that all but one of the sub problems induced by having made the greedy
choice are empty.
* Develop a recursive algorithm and convert into iterative algorithm.
13. Define forest.
The collection of subtrees obtained when the root node is eliminated is known as a
forest.
14. Write the difference between the Greedy method and Dynamic programming.
Greedy method:
- Only one sequence of decisions is generated.
- It does not guarantee to give an optimal solution always.
Dynamic programming:
- Many sequences of decisions are generated.
- It definitely gives an optimal solution always.
15. State the requirement in the optimal storage on tapes problem.
Finding a permutation for the n programs so that when they are stored on the tape in
this order the mean retrieval time (MRT) is minimized. This problem fits the ordering
paradigm.
16. State the efficiency of Prim’s algorithm.
O(|V|²) (weight matrix and priority queue as unordered array)
O(|E| log |V|) (adjacency list and priority queue as min-heap)
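The min-heap variant of Prim's algorithm can be sketched as follows (an illustrative implementation returning only the MST weight; the adjacency-list format is an assumption):

```python
import heapq

def prim_mst_weight(graph, start):
    """Total MST weight via Prim's algorithm with a min-heap, O(|E| log |V|).
    graph: {u: [(v, w), ...]} with both directions of each undirected edge."""
    visited = {start}
    heap = [(w, v) for v, w in graph[start]]   # edges leaving the start vertex
    heapq.heapify(heap)
    total = 0
    while heap and len(visited) < len(graph):
        w, u = heapq.heappop(heap)             # cheapest edge into the tree
        if u in visited:
            continue
        visited.add(u)
        total += w
        for v, w2 in graph[u]:
            if v not in visited:
                heapq.heappush(heap, (w2, v))
    return total

g = {'a': [('b', 1), ('c', 3)],
     'b': [('a', 1), ('c', 2)],
     'c': [('a', 3), ('b', 2)]}
print(prim_mst_weight(g, 'a'))  # 3
```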
17. State Kruskal’s Algorithm.
The algorithm looks at an MST for a weighted connected graph as an acyclic subgraph
with |V| - 1 edges for which the sum of the edge weights is the smallest.
18. State the efficiency of Dijkstra’s algorithm.
O(|V|²) (weight matrix and priority queue as unordered array)
O(|E| log |V|) (adjacency list and priority queue as min-heap)
19. Differentiate subset paradigm and ordering paradigm.
Subset paradigm: at each stage a decision is made regarding whether a particular input
is in an optimal solution (generating suboptimal solutions). Examples: knapsack, MST.
Ordering paradigm: for problems that do not call for selection of an optimal subset, in
the greedy manner we make decisions by considering the inputs in some order.
Example: optimal storage on tapes.

UNIT III
DYNAMIC PROGRAMMING

The General method- All pairs shortest path- Optimal binary tree-
Multistage graphs

1. Write the difference between the Greedy method and Dynamic programming.
Greedy method:
1. Only one sequence of decisions is generated.
2. It does not guarantee to give an optimal solution always.
Dynamic programming:
1. Many sequences of decisions are generated.
2. It definitely gives an optimal solution always.
2. Define dynamic programming.
Dynamic programming is an algorithm design method that can be used when the
solution to a problem can be viewed as the result of a sequence of decisions. It is a
technique for solving problems with overlapping subproblems.
3. What are the features of dynamic programming?
• Optimal solutions to sub problems are retained so as to avoid recomputing of their
values.
• Decision sequences containing subsequences that are sub optimal are not
considered.
• It definitely gives the optimal solution always.
4. What are the drawbacks of dynamic programming?
• Time and space requirements are high, since storage is needed for all levels.
• Optimality should be checked at all levels.
5. Write the general procedure of dynamic programming.
The development of dynamic programming algorithm can be broken into a
sequence of 4 steps.
1. Characterize the structure of an optimal solution.
2. Recursively define the value of the optimal solution.
3. Compute the value of an optimal solution in the bottom-up fashion.
4. Construct an optimal solution from the computed information.

6. Define principle of optimality.


It states that an optimal solution to any instance of a problem must be made up of
optimal solutions to its subinstances.

7. Define multistage graph


A multistage graph G = (V, E) is a directed graph in which the vertices are partitioned
into k ≥ 2 disjoint sets Vi, 1 ≤ i ≤ k. The multistage graph problem is to find a minimum-
cost path from s (the source) to t (the sink).
There are two approaches (forward and backward).

8. Define All pair shortest path problem


Given a weighted connected graph, the all-pairs shortest path problem asks to find the
lengths of the shortest paths from each vertex to all other vertices.

9.Define Distance matrix


Recording the lengths of the shortest paths in an n x n matrix is called the distance matrix (D).
10. Define Floyd’s algorithm.
Floyd’s algorithm finds all-pairs shortest paths.
The algorithm computes the distance matrix of a weighted graph with n vertices
through a series of n-by-n matrices: D(0), ..., D(k-1), D(k), ..., D(n)

11. State the time efficiency of Floyd’s algorithm.


O(n³) (cubic)
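The triple loop behind that cubic bound is visible in a direct sketch of Floyd's algorithm (the weight-matrix format is an assumption; `INF` marks missing edges):

```python
INF = float('inf')

def floyd(W):
    """All-pairs shortest paths from an n x n weight matrix W, O(n^3)."""
    n = len(W)
    D = [row[:] for row in W]      # D(0) is the weight matrix itself
    for k in range(n):             # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D

W = [[0, 3, INF],
     [INF, 0, 1],
     [2, INF, 0]]
print(floyd(W))  # e.g. the 0->2 path goes through vertex 1 with length 4
```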
12. Define OBST.
• Dynamic programming is used.
• If the probabilities of searching for the elements of a set are known, the problem is to
find an optimal BST for which the average number of comparisons in a search is the
smallest possible.
13. Define Catalan number.
The total number of binary search trees with n keys is equal to the nth Catalan
number:
C(n) = C(2n, n) · 1/(n+1) for n > 0, C(0) = 1
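The formula can be evaluated directly (a minimal sketch using the standard-library binomial coefficient):

```python
from math import comb

def catalan(n):
    """n-th Catalan number: C(n) = C(2n, n) / (n + 1).
    Counts the binary search trees with n keys."""
    return comb(2 * n, n) // (n + 1)

print([catalan(i) for i in range(5)])  # [1, 1, 2, 5, 14]
```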
14. State the time and space efficiency of OBST.
Space efficiency: quadratic.
Time efficiency: cubic.

UNIT 4 AND 5
BACKTRACKING BRANCH AND BOUND

1. What are the requirements that are needed for performing Backtracking?
To solve any problem using backtracking, it requires that all the solutions satisfy a
complex set of constraints. They are:
i. Explicit constraints.
ii. Implicit constraints.
2.Define explicit constraint.
They are rules that restrict each xi to take on values only from a given set. They
depend on the particular instance I of the problem being solved. All tuples that satisfy the
explicit constraints define a possible solution space.

3. Define implicit constraint.


They are rules that determine which of the tuples in the solution space of I satisfy the
criterion function. They describe the way in which the xi must relate to each other.
4.Define state space tree.
The tree organization of the solution space is referred to as state space tree.
5.Define state space of the problem.
All the paths from the root of the organization tree to all the nodes are called the state
space of the problem.
6.Define answer states.
Answer states are those solution states s for which the path from the root to s
defines a tuple that is a member of the set of solutions of the problem.
7.What are static trees?
The tree organizations that are independent of the problem instance being solved
are called as static tree.

8.What are dynamic trees?


The tree organizations that depend on the problem instance being solved
are called dynamic trees.

9.Define a live node.


A node which has been generated and all of whose children have not yet been
generated is called as a live node.
10. Define a E – node.
E – node (or) node being expanded. Any live node whose children are currently
being generated is called as a E – node.
11.Define a dead node.
A dead node is defined as a generated node which is not to be expanded further or all
of whose children have been generated.
12. What are the factors that influence the efficiency of the backtracking algorithm?
The efficiency of the backtracking algorithm depends on the following four
factors. They are:
i. The time needed to generate the next xk.
ii. The number of xk satisfying the explicit constraints.
iii. The time for the bounding functions Bk.
iv. The number of xk satisfying the Bk.
13.Define Branch-and-Bound method.
The term Branch-and-Bound refers to all the state space methods in which all
children of the E-node are generated before any other live node can become the E- node.
14.What are the searching techniques that are commonly used in Branch-and-Bound
method.
The searching techniques that are commonly used in Branch-and-Bound method
are:
i. FIFO
ii. LIFO
iii. LC
iv. Heuristic search
15.State 8 – Queens problem.
The problem is to place eight queens on an 8 x 8 chessboard so that no two queens
“attack”, that is, so that no two of them are on the same row, column, or diagonal.
16.State Sum of Subsets problem.
Given n distinct positive numbers usually called as weights , the problem calls for finding
all the combinations of these numbers whose sums are m.
17. State m – colorability decision problem.
Let G be a graph and m be a given positive integer. We want to discover whether the
nodes of G can be colored in such a way that no two adjacent nodes have the same color yet only
m colors are used.
18.Define chromatic number of the graph.
The m – colorability optimization problem asks for the smallest integer m for which the
graph G can be colored. This integer is referred to as the chromatic number of the graph.
19. Define a planar graph.
A graph is said to be planar iff it can be drawn in such a way that no two edges cross each
other.
20. What are NP- hard and Np-complete problems?
NP-hard problems are problems to which the satisfiability problem reduces; they are at
least as hard as every problem in NP. NP-complete problems are NP-hard problems that
also belong to NP.
21. What is a decision problem?
Any problem for which the answer is either zero or one is called decision problem.
22. What is an approximate solution?
A feasible solution with value close to the value of an optimal solution is called
approximate solution.
23. What are promising and non-promising nodes?
A node in a state space tree is said to be promising if it corresponds to a
partially constructed solution from which a complete solution can be obtained.
The nodes which are not promising for a solution in a state space tree are called
non-promising nodes.
24.Write formula for bounding function in Knapsack problem
In the knapsack problem the upper-bound value is computed by the formula
UB = v + (W - w) * (v(i+1)/w(i+1))
25. Write about traveling salesperson problem.
Let G = (V, E) be a directed graph. A tour of G is a directed simple cycle that includes
every vertex in V. The cost of a tour is the sum of the costs of the edges on the tour. The
traveling salesperson problem is to find a tour of minimum cost.
In the branch-and-bound technique for the TSP, the lower bound is lb = s/2.
26. Write some applications of traveling salesperson problem.
-> Routing a postal van to pick up mail from boxes located at n different sites.
-> Using a robot arm to tighten the nuts on some piece of machinery on an assembly line.
-> Production environment in which several commodities are manufactured on the same set of
machines.
27. Give the time complexity and space complexity of traveling salesperson problem.
Time complexity is O(n²·2ⁿ). Space complexity is O(n·2ⁿ).
28. Differentiate decision problem and optimization problem
Any problem for which the answer is either zero or one is called a decision problem.
Any problem that involves the identification of an optimal (maximum or minimum)
value of a given cost function is called an optimization problem.
29. What are the classes P and NP?
P is the set of all decision problems solvable by deterministic algorithms in polynomial time.
NP is the set of all decision problems solvable by nondeterministic algorithms in polynomial time.
30. Define NP-Hard and NP-Complete problems
Problem L is NP-Hard if and only if satisfiability reduces to L.
A Problem L is NP-Complete if and only if L is NP-Hard and L belongs to NP.

1.Explain about algorithm with suitable example (Notion of algorithm).


An algorithm is a sequence of unambiguous instructions for solving a computational problem,
i.e., for obtaining a required output for any legitimate input in a finite amount of time.
Algorithms – Computing the Greatest Common Divisor of Two Integers(gcd(m, n): the
largest integer that divides both m and n.)
• Euclid’s algorithm: gcd(m, n) = gcd(n, m mod n)
Step1: If n = 0, return the value of m as the answer and stop; otherwise, proceed to Step 2.
Step2: Divide m by n and assign the value of the remainder to r.
Step 3: Assign the value of n to m and the value of r to n. Go to Step 1.
Algorithm Euclid(m, n)
//Computes gcd(m, n) by Euclid‘s algorithm
//Input: Two nonnegative, not-both-zero integers m and n
//Output: Greatest common divisor of m and n
while n ≠ 0 do
r ← m mod n
m ← n
n ← r
return m
About This algorithm
• Finiteness: how do we know that Euclid’s algorithm actually comes to a stop?
• Definiteness: nonambiguity
• Effectiveness: effectively computable.
• Consecutive Integer Algorithm
Step1: Assign the value of min{m, n} to t.
Step2: Divide m by t. If the remainder of this division is 0, go to Step3;otherwise, go to Step
4.
Step3: Divide n by t. If the remainder of this division is 0, return the value of t as the answer
and stop; otherwise, proceed to Step4.
Step4: Decrease the value of t by 1. Go to Step2.
About This algorithm
• Finiteness
• Definiteness
• Effectiveness
• Middle-school procedure
Step1: Find the prime factors of m.
Step2: Find the prime factors of n.
Step3: Identify all the common factors in the two prime expansions found in Step1 and Step2.
(If p is a common factor occurring Pm and Pn times in m and n, respectively, it should be
repeated in min{Pm, Pn} times.)
Step4: Compute the product of all the common factors and return it as the gcd of the numbers
given.
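Euclid's algorithm from this question translates directly into Python (a minimal sketch of the same while loop):

```python
def gcd(m, n):
    """Euclid's algorithm: gcd(m, n) = gcd(n, m mod n)."""
    while n != 0:
        m, n = n, m % n   # one step of the recurrence
    return m              # when n = 0, m holds the gcd

print(gcd(60, 24))  # 12
```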
2. Write short note on Fundamentals of Algorithmic Problem Solving
• Understanding the problem
  • Asking questions, doing a few examples by hand, thinking about special cases, etc.
• Deciding on
  • Exact vs. approximate problem solving
  • Appropriate data structure
• Designing an algorithm
• Proving correctness
• Analyzing the algorithm
  • Time efficiency: how fast the algorithm runs
  • Space efficiency: how much extra memory the algorithm needs
• Coding the algorithm
3. Discuss important problem types that you face during Algorithm Analysis.
• Sorting
  • Rearrange the items of a given list in ascending order.
  • Input: a sequence of n numbers <a1, a2, …, an>
  • Output: a reordering <a´1, a´2, …, a´n> of the input sequence such that
    a´1 ≤ a´2 ≤ … ≤ a´n.
  • A key is a specially chosen piece of information used to guide sorting, e.g., sort
    student records by names.
  Examples of sorting algorithms:
  • Selection sort
  • Bubble sort
  • Insertion sort
  • Merge sort
  • Heap sort …
  Sorting algorithm complexity is evaluated by the number of key comparisons.
  Two properties:
  • Stability: a sorting algorithm is called stable if it preserves the relative order of
    any two equal elements in its input.
  • In place: a sorting algorithm is in place if it does not require extra memory,
    except, possibly, for a few memory units.
• Searching
  • Find a given value, called a search key, in a given set.
  • Examples of searching algorithms:
    • Sequential search
    • Binary search …
• String processing
  • A string is a sequence of characters from an alphabet.
  • Text strings: letters, numbers, and special characters.
  • String matching: searching for a given word/pattern in a text.
• Graph problems
  • Informal definition: a graph is a collection of points called vertices, some of
    which are connected by line segments called edges.
  • Modeling real-life problems:
    • Modeling the WWW
    • Communication networks
    • Project scheduling …
  Examples of graph algorithms:
  • Graph traversal algorithms
  • Shortest-path algorithms
  • Topological sorting
• Combinatorial problems
• Geometric problems
• Numerical problems
4. Discuss Fundamentals of the analysis of algorithm efficiency elaborately.
 Algorithm‘s efficiency
 Three notations
 Mathematical analysis of the efficiency of recursive algorithms
 Mathematical analysis of the efficiency of non-recursive algorithms
Analysis of algorithms means investigating an algorithm’s efficiency with respect to two
resources: running time and memory space.
 Time efficiency: how fast an algorithm runs.
 Space efficiency: the space an algorithm requires.
 Measuring an input‘s size
 Measuring running time
 Orders of growth (of the algorithm‘s efficiency function)
 Worst-base, best-case and average efficiency
 Measuring Input Sizes
 Efficiency is defined as a function of input size.
 Input size depends on the problem.
 Example 1, what is the input size of the problem of sorting n numbers?
 Example 2, what is the input size of adding two n by n matrices?
 Units for Measuring Running Time
 Measure the running time using standard unit of time measurements,
such as seconds, minutes?
 Depends on the speed of the computer.
 count the number of times each of an algorithm‘s operations is
executed.
 Difficult and unnecessary
 count the number of times an algorithm‘s basic operation is executed.
 Basic operation: the most important operation of the algorithm, the
operation contributing the most to the total running time.
 For example, the basic operation is usually the most time-consuming
operation in the algorithm‘s innermost loop.
 Orders of Growth
 consider only the leading term of a formula
 Ignore the constant coefficient.
 Worst-Case, Best-Case, and Average-Case Efficiency
 Algorithm efficiency depends on the input size n
 For some algorithms efficiency depends on type of input.
Example: Sequential Search
 Problem: Given a list of n elements and a search key K, find an
element equal to K, if any.
 Algorithm: Scan the list and compare its successive elements with K
until either a matching element is found (successful search) or the list
is exhausted (unsuccessful search)
Worst case Efficiency
 Efficiency (# of times the basic operation will be executed) for the
worst case input of size n.
 The algorithm runs the longest among all possible inputs of size n.
Best case
 Efficiency (# of times the basic operation will be executed) for the best
case input of size n.
 The algorithm runs the fastest among all possible inputs of size n.
Average case:
 Efficiency (#of times the basic operation will be executed) for a
typical/random input of size n.
 NOT the average of worst and best case
5. Explain Asymptotic Notations
Three notations used to compare orders of growth of an algorithm‘s basic operation count
a. O(g(n)): class of functions t(n) that grow no faster than g(n).
A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded above by
some constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and
some nonnegative integer n0 such that t(n) ≤ c·g(n) for all n ≥ n0.
b. Ω(g(n)): class of functions t(n) that grow at least as fast as g(n).
A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is bounded below by
some constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and
some nonnegative integer n0 such that t(n) ≥ c·g(n) for all n ≥ n0.
[Plot comparing the orders of growth of log(n), n, n log(n), n*n, and n*n*n.]
c. Θ(g(n)): class of functions t(n) that grow at the same rate as g(n).
A function t(n) is said to be in Θ(g(n)), denoted t(n) ∈ Θ(g(n)), if t(n) is bounded both above
and below by some positive constant multiples of g(n) for all large n, i.e., if there exist some
positive constants c1 and c2 and some nonnegative integer n0 such that c2·g(n) ≤ t(n) ≤ c1·g(n) for
all n ≥ n0.
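These definitions can be checked numerically. As a small illustration, the sketch below verifies that t(n) = 100n + 5 is in O(n²) using the (hypothetical, but valid) witnesses c = 1 and n0 = 101:

```python
# Numerically check that t(n) = 100n + 5 is in O(g(n)) with g(n) = n^2,
# using the witnesses c = 1 and n0 = 101 (one valid choice among many).
def t(n):
    return 100 * n + 5

def g(n):
    return n * n

c, n0 = 1, 101

# The definition requires t(n) <= c*g(n) for ALL n >= n0; we spot-check a range.
assert all(t(n) <= c * g(n) for n in range(n0, 10_000))

# Below n0 the bound may fail, which the definition permits.
assert t(50) > c * g(50)   # 5005 > 2500
print("100n + 5 is in O(n^2) on the sampled range")
```

Note that the spot-check covers only a finite range; the definition itself is proved algebraically (n² − 100n − 5 ≥ 0 for all n ≥ 101).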
 Amortized efficiency
6. List out the Steps in Mathematical Analysis of non recursive Algorithms.
 Steps in mathematical analysis of nonrecursive algorithms:
 Decide on parameter n indicating input size
 Identify algorithm‘s basic operation
 Check whether the number of times the basic operation is executed depends only
on the input size n. If it also depends on the type of input, investigate worst,
average, and best case efficiency separately.
 Set up summation for C(n) reflecting the number of times the algorithm‘s basic
operation is executed.
 Example: Finding the largest element in a given array
Algorithm MaxElement (A[0..n-1])
//Determines the value of the largest element in a given array
//Input: An array A[0..n-1] of real numbers
//Output: The value of the largest element in A
maxval ← A[0]
for i ← 1 to n-1 do
    if A[i] > maxval
        maxval ← A[i]
return maxval
The basic operation is the comparison A[i] > maxval, executed once per loop iteration, so C(n) = Σ(i=1 to n-1) 1 = n − 1 ∈ Θ(n).
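A direct Python transcription of MaxElement, instrumented (the comparison counter is an illustrative addition) to confirm that the basic operation runs n − 1 times:

```python
def max_element(a):
    """Return the largest element of a, counting key comparisons."""
    comparisons = 0
    maxval = a[0]
    for i in range(1, len(a)):
        comparisons += 1          # the basic operation: A[i] > maxval
        if a[i] > maxval:
            maxval = a[i]
    return maxval, comparisons

value, count = max_element([3, 9, 2, 7, 5])
print(value, count)   # 9 and 4 comparisons for n = 5, i.e., n - 1
```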
7. List out the Steps in Mathematical Analysis of Recursive Algorithms.
 Decide on parameter n indicating input size
 Identify algorithm‘s basic operation
 Determine worst, average, and best case for input of size n
 Set up a recurrence relation and initial condition(s) for C(n)-the number of times the basic
operation
will be executed for an input of size n (alternatively count recursive calls).
 Solve the recurrence or estimate the order of magnitude of the solution
F(n) = 1 if n = 0
F(n) = n * (n-1) * (n-2) * … * 3 * 2 * 1 if n > 0
 Recursive definition
F(n) = 1 if n = 0
F(n) = n * F(n-1) if n > 0
Algorithm F(n)
if n=0
return 1 //base case
else
return F (n -1) * n //general case
Example: recursive evaluation of n!
 Two Recurrences
The one for the factorial function value: F(n)
F(n) = F(n – 1) * n for every n > 0
F(0) = 1
The one for number of multiplications to compute n!, M(n)
M(n) = M(n – 1) + 1 for every n > 0
M(0) = 0
M(n) ∈ Θ (n)
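Both recurrences can be checked with a small Python version of F(n) that also counts multiplications (the counter is an illustrative addition):

```python
def factorial(n):
    """Recursive n!, returning (value, number_of_multiplications)."""
    if n == 0:
        return 1, 0                      # base case: F(0) = 1, M(0) = 0
    value, mults = factorial(n - 1)
    return n * value, mults + 1          # F(n) = n * F(n-1), M(n) = M(n-1) + 1

for n in range(6):
    value, mults = factorial(n)
    assert mults == n                    # confirms M(n) = n, i.e., M(n) ∈ Θ(n)
print(factorial(5))   # (120, 5)
```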
8. Explain in detail about linear search.
Sequential Search searches for the key value in the given set of items sequentially and returns the
position of the key value else returns -1.
Analysis:
For sequential search, best-case inputs are lists of size n whose first element equals the search key;
accordingly, Cbest(n) = 1. In the worst case (the key is in the last position or absent), Cworst(n) = n.
Average Case Analysis:
The standard assumptions are that
(a) the probability of a successful search is equal to p (0 ≤ p ≤ 1), and
(b) the probability of the first match occurring in the ith position of the list is the same for every i.
Under these assumptions, the average number of key comparisons Cavg(n) is found as follows.
In the case of a successful search, the probability of the first match occurring in the ith position of
the list is p/n for every i, and the number of comparisons made by the algorithm in such a situation is
obviously i. In the case of an unsuccessful search, the number of comparisons is n, with the probability
of such a search being (1 − p). Therefore,
Cavg(n) = [1·(p/n) + 2·(p/n) + … + n·(p/n)] + n·(1 − p) = p(n + 1)/2 + n(1 − p).
For example, if p = 1 (i.e., the search must be successful), the average number of key comparisons
made by sequential search is (n + 1)/2; i.e., the algorithm will inspect, on average, about half of the
list's elements. If p = 0 (i.e., the search must be unsuccessful), the average number of key comparisons
will be n, because the algorithm will inspect all n elements on all such inputs.
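A minimal Python sequential search matching this description, returning the position of the key or -1:

```python
def sequential_search(items, key):
    """Scan the list; return the index of key, or -1 if it is absent."""
    for i, item in enumerate(items):
        if item == key:
            return i
    return -1

assert sequential_search([31, 7, 12, 50], 12) == 2    # successful search
assert sequential_search([31, 7, 12, 50], 99) == -1   # unsuccessful search
print("sequential search ok")
```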
9. Explain in detail about Tower of Hanoi.
In this puzzle, there are n disks of different sizes and three pegs. Initially, all the disks are on the first
peg in order of size, the largest on the bottom and the smallest on top. The goal is to move all the
disks to the
third peg, using the second one as an auxiliary, if necessary. On1y one disk can be moved at a time,
and it is
forbidden to place a larger disk on top of a smaller one.
[Figure: three pegs (1, 2, 3) illustrating the general plan for the Tower of Hanoi puzzle.]
The number of disks n is the obvious choice for the input's size indicator, and so is moving one disk as
the algorithm's basic operation. Clearly, the number of moves M(n) depends on n only, and we get the
following recurrence equation for it:
M(n) = M(n−1) + 1 + M(n−1)
With the obvious initial condition M(1) = 1, the recurrence relation for the number of moves M(n) is:
M(n) = 2M(n−1) + 1 for n > 1, M(1) = 1.
Solving this recurrence (e.g., by backward substitution) gives the closed form M(n) = 2^n − 1, so the Tower of Hanoi algorithm makes an exponential number of moves.
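A short recursive Python solver (the function and parameter names are illustrative) that records every move and verifies the closed form M(n) = 2^n − 1:

```python
def hanoi(n, source, auxiliary, target, moves):
    """Move n disks from source to target peg, recording each move."""
    if n == 1:
        moves.append((source, target))
        return
    hanoi(n - 1, source, target, auxiliary, moves)  # M(n-1) moves
    moves.append((source, target))                  # + 1 move
    hanoi(n - 1, auxiliary, source, target, moves)  # + M(n-1) moves

for n in range(1, 8):
    moves = []
    hanoi(n, 1, 2, 3, moves)
    assert len(moves) == 2**n - 1   # matches the closed form M(n) = 2^n - 1
print("Tower of Hanoi move counts verified")
```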
10. Explain in detail about quicksort implementation
Quicksort is a fast sorting algorithm that works by splitting a large array of data into smaller
sub-arrays. Each iteration works by splitting the input into two components around a pivot,
sorting them, and then recombining them. For big datasets the technique is highly efficient,
since its average and best-case complexity is O(n log n).

It was created by Tony Hoare in 1961 and remains one of the most effective general-purpose

sorting algorithms available today. It works by recursively sorting the sub-lists to either side of

a given pivot and dynamically shifting elements inside the list around that pivot.

As a result, the quick sort method can be summarized in three steps:

 Pick: Select an element.

 Divide: Split the problem set, move smaller parts to the left of the pivot and larger
items to the right.

 Repeat and combine: Repeat the steps and combine the arrays that have
previously been sorted.

Benefits of Quicksort

Let’s go through a few key benefits of using Quicksort:

 It works rapidly and effectively.

 It has one of the best average-case time complexities, O(n log n), among comparison-based sorting algorithms.

 Quick sort has a space complexity of O(logn), making it an excellent choice for
situations when space is limited.

Limitations of Quicksort

Despite its speed, Quicksort has a few drawbacks. Let’s have a look at some of them.


 This sorting technique is considered unstable since it does not maintain the key-
value pairs initial order.

 When the pivot element is always the largest or smallest, or when all of the elements
are equal, quicksort degrades to its O(n²) worst case; its performance is significantly
impacted by these scenarios.

 It’s difficult to implement since it’s a recursive process, especially if recursion isn’t available.

Let’s take a look at an example to get a better understanding of the Quicksort algorithm. In

this example, the array(shown in graphic below) contains unsorted values, which we will sort

using Quicksort.

Unsorted & Sorted Array | Image by Author

1). Selecting Pivot

The process starts by selecting one element (known as the pivot) from the list; this can be

any element. A pivot can be:

 Any element at random


 The first or last element

 Middle element

For this example, we’ll use the last element, 4, as our pivot.

2). Rearranging the Array

Now, the goal here is to rearrange the list such that all the elements less than the pivot are

towards the left of it, and all the elements greater than the pivot are towards the right of

it.

 The pivot element is compared to all of the items starting with the first index. If
the element is greater than the pivot element, a second pointer is appended.

 When compared to other elements, if a smaller element than the pivot element
is found, the smaller element is swapped with the larger element identified
before.

Rearranging Elements | Image by Author

Let’s simplify the above example,

 Every element, starting with 7, will be compared to the pivot(4). A second pointer
will be placed at 7 because 7 is bigger than 4.
 The next element, element 2 will now be compared to the pivot. As 2 is less
than 4, it will be replaced by the bigger figure 7 which was found earlier.

 The numbers 7 and 2 are swapped. Now, pivot will be compared to the next
element, 1 which is smaller than 4.

 So once again, 7 will be swapped with 1.

 The procedure continues until the next-to-last element is reached, and at the
end the pivot element is then replaced with the second pointer. Here,
number 4(pivot) will be replaced with number 6.

Rearranging Elements | Image by Author

As elements 2, 1, and 3 are less than 4, they are on the pivot’s left side. Elements can be in

any order: ‘1’,’2’,’3’, or ‘3’,’1’,’2’, or ‘2’,’3’,’1’. The only requirement is that all of the

elements must be less than the pivot. Similarly, on the right side, regardless of their sequence,

all components should be greater than the pivot.


Pivot in its sorted position | Image by Author

In simple words,the algorithm searches for every value that is smaller than the pivot. Values

smaller than pivot will be placed on the left, while values larger than pivot will be placed on

the right. Once the values are rearranged, it will set the pivot in its sorted position.

3). Dividing Subarrays

Once we have partitioned the array, we can break this problem into two sub-problems.

First, sort the segment of the array to the left of the pivot, and then sort the segment of the

array to the right of the pivot.


Sorting the sub-arrays | Image by Author

 In the same way that we rearranged elements in step 2, we will pick a pivot
element for each of the left and right sub-parts individually.

 Now, we will rearrange the sub-list such that all the elements are less than the
pivot point, which is towards the left. For example, element 3 is the largest among
the three elements, which satisfies the condition, hence the element 3 is in its
sorted position.

 In a similar manner, we will again work on the sub-list and sort the
elements 2 and 1. We will stop the process when we get a single element at the
end.

 Repeat the same process for the right-side sub-list. The subarrays are
subdivided until each subarray consists of only one element.

 Now At this point, the array is sorted :)

Quick Sort Algorithm

Quick Sort Function Algorithm


//start --> Starting index, end --> Ending index
Quicksort(A, start, end)
{
    if (start < end)
    {
        pIndex = Partition(A, start, end)
        Quicksort(A, start, pIndex - 1)
        Quicksort(A, pIndex + 1, end)
    }
}

Partition Function Algorithm

The sub-arrays are rearranged in a certain order using the partition method. You will find

various ways to partition. Here we will see one of the most used methods.
partition(arr, start, end)
{
    // Setting rightmost element as pivot
    pivot = arr[end];
    i = (start - 1)  // Index of smaller element; indicates the
                     // right position of the pivot found so far
    for (j = start; j <= end - 1; j++)
    {
        // If current element is smaller than the pivot
        if (arr[j] < pivot)
        {
            i++;  // increment index of smaller element
            swap arr[i] and arr[j]
        }
    }
    swap arr[i + 1] and arr[end]
    return (i + 1)
}
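The pseudocode above translates directly into runnable Python. This is an in-place sketch using the Lomuto partition scheme described in the text; the names mirror the pseudocode:

```python
def partition(arr, start, end):
    """Lomuto partition: place arr[end] (the pivot) in its sorted position."""
    pivot = arr[end]
    i = start - 1                     # boundary of the "smaller than pivot" region
    for j in range(start, end):
        if arr[j] < pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[end] = arr[end], arr[i + 1]
    return i + 1

def quicksort(arr, start, end):
    if start < end:
        p_index = partition(arr, start, end)
        quicksort(arr, start, p_index - 1)   # sort the left sub-array
        quicksort(arr, p_index + 1, end)     # sort the right sub-array

data = [7, 2, 1, 6, 8, 5, 3, 4]
quicksort(data, 0, len(data) - 1)
print(data)   # [1, 2, 3, 4, 5, 6, 7, 8]
```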
11.Explain Dijkstra’s Algorithm in detail with example and analyze its efficiency

Dijkstra's Algorithm

Dijkstra's algorithm finds the shortest path from a source vertex to every other vertex of a weighted graph (and hence between any two vertices).

It differs from the minimum spanning tree because the shortest distance between two vertices
might not include all the vertices of the graph.

How Dijkstra's Algorithm works

Dijkstra's Algorithm works on the basis that any subpath B -> D of the shortest path A ->
D between vertices A and D is also the shortest path between vertices B and D.
Each subpath is the shortest path
Dijkstra used this property in the opposite direction, i.e., we overestimate the distance of each
vertex from the starting vertex. Then we visit each node and its neighbors to find shorter
subpaths to those neighbors.

The algorithm uses a greedy approach in the sense that we find the next best solution hoping
that the end result is the best solution for the whole problem.

Example of Dijkstra's algorithm

It is easier to start with an example and then think about the algorithm.

Start with a weighted graph


Choose a starting vertex and assign infinity path values to all other devices

Go to each vertex and update its path length

Update the path length of an adjacent vertex only if the new path length is smaller than its current one,
and avoid updating path lengths of already visited vertices

After each iteration, we pick the unvisited vertex with the least path length. So we choose 5
before 7

Notice how the rightmost vertex has its path length updated twice
Repeat until all the vertices have been visited

Dijkstra's algorithm pseudocode

We need to maintain the path distance of every vertex. We can store that in an array of size v,
where v is the number of vertices.

We also want to be able to get the shortest path, not only know the length of the shortest path.
For this, we map each vertex to the vertex that last updated its path length.

Once the algorithm is over, we can backtrack from the destination vertex to the source vertex
to find the path.

A minimum priority queue can be used to efficiently receive the vertex with least path
distance.

function dijkstra(G, S)
for each vertex V in G
distance[V] <- infinite
previous[V] <- NULL
If V != S, add V to Priority Queue Q
distance[S] <- 0
while Q IS NOT EMPTY
U <- Extract MIN from Q
for each unvisited neighbour V of U
tempDistance <- distance[U] + edge_weight(U, V)
if tempDistance < distance[V]
distance[V] <- tempDistance
previous[V] <- U
return distance[], previous[]
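The minimum priority queue in this pseudocode maps naturally onto Python's heapq. The compact sketch below assumes an adjacency-list dict of the form {vertex: [(neighbor, weight), ...]} (a representation chosen for illustration; the full matrix-based listing follows later):

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances and predecessors from source.

    graph: dict mapping each vertex to a list of (neighbor, weight) pairs.
    """
    distance = {v: float("inf") for v in graph}
    previous = {v: None for v in graph}
    distance[source] = 0
    queue = [(0, source)]                      # min-heap of (distance, vertex)
    while queue:
        d, u = heapq.heappop(queue)            # extract vertex with least distance
        if d > distance[u]:
            continue                           # stale heap entry; skip
        for v, weight in graph[u]:
            temp = distance[u] + weight
            if temp < distance[v]:             # relax the edge (u, v)
                distance[v] = temp
                previous[v] = u
                heapq.heappush(queue, (temp, v))
    return distance, previous

g = {"a": [("b", 4), ("c", 1)],
     "b": [("a", 4), ("c", 2), ("d", 5)],
     "c": [("a", 1), ("b", 2), ("d", 8)],
     "d": [("b", 5), ("c", 8)]}
dist, prev = dijkstra(g, "a")
print(dist)   # {'a': 0, 'b': 3, 'c': 1, 'd': 8}
```

Backtracking through `previous` from any destination to the source recovers the actual path, as described above.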

Code for Dijkstra's Algorithm

An implementation of Dijkstra's Algorithm in Python is given below. The complexity of
the code can be improved, but the abstractions are convenient to relate the code with the
algorithm.
# Dijkstra's Algorithm in Python

import sys

# Providing the graph
vertices = [[0, 0, 1, 1, 0, 0, 0],
            [0, 0, 1, 0, 0, 1, 0],
            [1, 1, 0, 1, 1, 0, 0],
            [1, 0, 1, 0, 0, 0, 1],
            [0, 0, 1, 0, 0, 1, 0],
            [0, 1, 0, 0, 1, 0, 1],
            [0, 0, 0, 1, 0, 1, 0]]

edges = [[0, 0, 1, 2, 0, 0, 0],
         [0, 0, 2, 0, 0, 3, 0],
         [1, 2, 0, 1, 3, 0, 0],
         [2, 0, 1, 0, 0, 0, 1],
         [0, 0, 3, 0, 0, 2, 0],
         [0, 3, 0, 0, 2, 0, 1],
         [0, 0, 0, 1, 0, 1, 0]]

# Find which vertex is to be visited next
def to_be_visited():
    global visited_and_distance
    v = -10
    for index in range(num_of_vertices):
        if visited_and_distance[index][0] == 0 \
                and (v < 0 or visited_and_distance[index][1] <=
                     visited_and_distance[v][1]):
            v = index
    return v

num_of_vertices = len(vertices[0])

visited_and_distance = [[0, 0]]
for i in range(num_of_vertices - 1):
    visited_and_distance.append([0, sys.maxsize])

for vertex in range(num_of_vertices):
    # Find next vertex to be visited
    to_visit = to_be_visited()
    for neighbor_index in range(num_of_vertices):
        # Updating new distances
        if vertices[to_visit][neighbor_index] == 1 and \
                visited_and_distance[neighbor_index][0] == 0:
            new_distance = visited_and_distance[to_visit][1] \
                + edges[to_visit][neighbor_index]
            if visited_and_distance[neighbor_index][1] > new_distance:
                visited_and_distance[neighbor_index][1] = new_distance

    visited_and_distance[to_visit][0] = 1

i = 0
# Printing the distance
for distance in visited_and_distance:
    print("Distance of ", chr(ord('a') + i),
          " from source vertex: ", distance[1])
    i = i + 1
Dijkstra's Algorithm Complexity

Time Complexity: O(E log V) when a binary-heap priority queue is used; the simple
adjacency-matrix implementation above runs in O(V²). Here E is the number of edges
and V is the number of vertices.

Space Complexity: O(V)

Dijkstra's Algorithm Applications

 To find the shortest path

 In social networking applications

 In a telephone network

 To find the locations in the map

12.Explain in detail about prims algorithm with example and analyze its efficiency
Spanning tree - A spanning tree is a connected acyclic subgraph of an undirected connected graph that includes all of its vertices.

Minimum Spanning tree - Minimum spanning tree can be defined as the spanning tree in
which the sum of the weights of the edge is minimum. The weight of the spanning tree is the
sum of the weights given to the edges of the spanning tree.

Prim's Algorithm is a greedy algorithm that is used to find the minimum spanning tree from
a graph. Prim's algorithm finds the subset of edges that includes every vertex of the graph
such that the sum of the weights of the edges can be minimized.

Prim's algorithm starts with a single node and explores all the adjacent nodes with all the
connecting edges at every step. At each step, the edge with the minimal weight that causes
no cycle in the graph is selected.

How does the prim's algorithm work?


Prim's algorithm is a greedy algorithm that starts from one vertex and continues to add the
edges with the smallest weight until the goal is reached. The steps to implement Prim's
algorithm are given as follows -

o First, we have to initialize an MST with the randomly chosen vertex.


o Now, we have to find all the edges that connect the tree in the above step with the
new vertices. From the edges found, select the minimum edge and add it to the tree.
o Repeat step 2 until the minimum spanning tree is formed.

The applications of prim's algorithm are -

o Prim's algorithm can be used in network designing.


o It can be used in cluster analysis.
o It can also be used to lay down electrical wiring cables.

Example of prim's algorithm

Now, let's see the working of prim's algorithm using an example. It will be easier to
understand the prim's algorithm using an example.

Suppose, a weighted graph is -

Step 1 - First, we have to choose a vertex from the above graph. Let's choose B.
Step 2 - Now, we have to choose and add the shortest edge from vertex B. There are two
edges from vertex B that are B to C with weight 10 and edge B to D with weight 4. Among
the edges, the edge BD has the minimum weight. So, add it to the MST.

Step 3 - Now, again, choose the edge with the minimum weight among all the other edges. In
this case, the edges DE and CD are such edges. Add them to MST and explore the adjacent of
C, i.e., E and A. So, select the edge DE and add it to the MST.
Step 4 - Now, select the edge CD, and add it to the MST.

Step 5 - Now, choose the edge CA. Here, we cannot select the edge CE as it would create a
cycle to the graph. So, choose the edge CA and add it to the MST.

So, the graph produced in step 5 is the minimum spanning tree of the given graph. The cost of
the MST is given below -

Cost of MST = 4 + 2 + 1 + 3 = 10 units.

Algorithm

1. Step 1: Select a starting vertex


2. Step 2: Repeat Steps 3 and 4 until there are fringe vertices
3. Step 3: Select an edge 'e' connecting the tree vertex and fringe vertex that has minimu
m weight
4. Step 4: Add the selected edge and the vertex to the minimum spanning tree T
5. [END OF LOOP]
6. Step 5: EXIT
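The steps above can be sketched in Python with a min-heap over fringe edges. This is an illustrative lazy implementation assuming an adjacency-list dict {vertex: [(neighbor, weight), ...]}; the graph used matches the worked example (MST cost 10):

```python
import heapq

def prim(graph, start):
    """Return the MST edges and total weight, growing the tree from start."""
    visited = {start}
    # Fringe: heap of (weight, tree_vertex, fringe_vertex)
    fringe = [(w, start, v) for v, w in graph[start]]
    heapq.heapify(fringe)
    mst_edges, total = [], 0
    while fringe and len(visited) < len(graph):
        w, u, v = heapq.heappop(fringe)        # minimum-weight fringe edge
        if v in visited:
            continue                           # would create a cycle; skip it
        visited.add(v)
        mst_edges.append((u, v, w))
        total += w
        for nxt, nw in graph[v]:               # new fringe edges from v
            if nxt not in visited:
                heapq.heappush(fringe, (nw, v, nxt))
    return mst_edges, total

g = {"A": [("C", 3)],
     "B": [("C", 10), ("D", 4)],
     "C": [("A", 3), ("B", 10), ("D", 2), ("E", 6)],
     "D": [("B", 4), ("C", 2), ("E", 1)],
     "E": [("C", 6), ("D", 1)]}
edges, total = prim(g, "B")
print(total)   # 10, matching Cost of MST = 4 + 2 + 1 + 3
```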
Complexity of Prim's algorithm

Now, let's see the time complexity of Prim's algorithm. The running time of the prim's
algorithm depends upon using the data structure for the graph and the ordering of edges.
Below table shows some choices -

Data structure used for the minimum edge weight | Time Complexity
Adjacency matrix, linear searching              | O(|V|²)
Adjacency list and binary heap                  | O(|E| log |V|)
Adjacency list and Fibonacci heap               | O(|E| + |V| log |V|)

Prim's algorithm can be simply implemented by using the adjacency matrix or adjacency list
graph representation; adding the edge with the minimum weight then requires linearly searching
an array of weights, which takes O(|V|²) running time. It can be improved further by using a
heap to find the minimum-weight edges in the inner loop of the algorithm.

With a binary heap, the time complexity of Prim's algorithm is O(E log V); with a Fibonacci
heap it improves to O(E + V log V), where E is the number of edges and V is the number of
vertices.

Implementation of Prim's algorithm

Now, let's see the implementation of prim's algorithm.

Program: Write a program to implement prim's algorithm in C language.

#include <stdio.h>
#include <limits.h>
#define vertices 5 /* Define the number of vertices in the graph */

/* minimum_key() finds the vertex that has the minimum key value
   and that is not added to the MST yet */
int minimum_key(int k[], int mst[])
{
    int minimum = INT_MAX, min, i;

    /* iterate over all vertices to find the vertex with minimum key value */
    for (i = 0; i < vertices; i++)
        if (mst[i] == 0 && k[i] < minimum)
            minimum = k[i], min = i;
    return min;
}

/* prim() constructs and prints the MST.
   g[vertices][vertices] is an adjacency matrix that defines the graph. */
void prim(int g[vertices][vertices])
{
    /* array of size equal to the total number of vertices for storing the MST */
    int parent[vertices];
    /* k[vertices] array for selecting an edge having minimum weight */
    int k[vertices];
    int mst[vertices];
    int i, count, edge, v; /* here 'v' is the vertex */

    for (i = 0; i < vertices; i++)
    {
        k[i] = INT_MAX;
        mst[i] = 0;
    }
    k[0] = 0;       /* select vertex 0 as the first vertex */
    parent[0] = -1; /* set first value of parent[] to -1 to make it the root of the MST */

    for (count = 0; count < vertices - 1; count++)
    {
        /* select the vertex having minimum key that is not yet in the MST */
        edge = minimum_key(k, mst);
        mst[edge] = 1;
        for (v = 0; v < vertices; v++)
        {
            if (g[edge][v] && mst[v] == 0 && g[edge][v] < k[v])
            {
                parent[v] = edge, k[v] = g[edge][v];
            }
        }
    }

    /* Print the constructed minimum spanning tree */
    printf("\n Edge \t Weight\n");
    for (i = 1; i < vertices; i++)
        printf(" %d <-> %d %d \n", parent[i], i, g[i][parent[i]]);
}

int main()
{
    int g[vertices][vertices] = {{0, 0, 3, 0, 0},
                                 {0, 0, 10, 4, 0},
                                 {3, 10, 0, 2, 6},
                                 {0, 4, 2, 0, 1},
                                 {0, 0, 6, 1, 0}};
    prim(g);
    return 0;
}

Output

 Edge 	 Weight
 3 <-> 1 4
 0 <-> 2 3
 2 <-> 3 2
 3 <-> 4 1