
DESIGN AND ANALYSIS OF ALGORITHMS

UNIT I

INTRODUCTION TO DESIGN AND ANALYSIS OF ALGORITHMS

1. Define an algorithm.
A: An algorithm is a collection of unambiguous instructions occurring in a specific
sequence; such an algorithm should produce the output for a given set of inputs in a
finite amount of time.
2. List out the criteria that all algorithms must satisfy.
A: All algorithms must satisfy the following criteria:
Input: zero or more quantities are externally supplied.
Output: at least one quantity is produced.
Definiteness: Each instruction is clear and unambiguous.
Finiteness: If we trace the instructions of the algorithm, then for all cases the
algorithm terminates after a finite number of steps.
Effectiveness: Every instruction must be very basic and feasible.
Once an algorithm for a problem has been given and shown to be correct, an important
next step is to determine how many resources, such as time and space, the algorithm
will require.
3. What is meant by computational procedure?
A: Algorithms that are definite (clear, unambiguous) and effective (feasible) are known as
computational procedures.
4. Define program.
A: A program is the expression of an algorithm in a programming language.
5. Define an Abstract Data Type.
A: An ADT (Abstract Data Type) is a mathematical model which gives a set of utilities
available to the user but never states the details of its implementation.
6. List out the steps in designing an algorithm.
A: The various steps in designing an algorithm are
Understanding the problem.
Ascertaining the capabilities of a computational device
Choosing between exact and approximate problem solving
Deciding on appropriate data structures
Algorithm Design techniques
Methods of specifying an algorithm
Proving an algorithms correctness
Analyzing an algorithm
Coding an Algorithm.

7. Give the reason for using approximation algorithm.


A: The reasons are
There are important problems that simply cannot be solved exactly, e.g. computing
the square root of n.
Available algorithms for solving a problem exactly can be unacceptably slow
because of the problem's intrinsic complexity, e.g. the Traveling Salesman Problem.
An approximation algorithm can be a part of a more sophisticated algorithm that
solves a problem exactly.
8. Define algorithm design technique or algorithmic strategies
A: An algorithm design technique is a general approach to solving problems
algorithmically that is applicable to a variety of problems from different areas of
computing. These are important because
They provide guidance for designing algorithms for new problems
They make it possible to classify algorithms according to an underlying design
idea.
E.g. Divide and conquer, Dynamic programming, Greedy method, Backtracking.
9. Define the methods of specifying an algorithm.
A: The various methods are
i. Natural language, which can be imprecise.
ii. Pseudocode: a mixture of natural language and programming-language-like
constructs. This method is more precise.
iii. Flowchart: expresses an algorithm by a collection of geometric shapes
containing descriptions of the algorithm's steps.
10. What are the two kinds of algorithm efficiency?
A: Time complexity: the time taken by an algorithm to run, determines whether the
algorithm is slow or fast.
Space complexity: the amount of space/memory taken by an algorithm, determines the
extra memory the algorithm needs.
11. List out the factors to be followed while analyzing an algorithm.
A: While analyzing an algorithm we should consider the following factors:
i. Time efficiency of the algorithm
ii. Space efficiency of the algorithm
iii. Simplicity of the algorithm
iv. Generality of the algorithm
v. Range of input.
12. What is meant by Time efficiency?
A: Time efficiency indicates how fast the algorithm runs.

13. What is Space efficiency?


A: Space efficiency indicates how much extra memory the algorithm needs.
14. List out the areas the study of algorithms includes.
A: The various areas are:
Sorting
Searching
String processing
Graph problems
Combinatorial problems
Geometric problems
Numerical problems
15. Define validation.
A: Once an algorithm is devised, it is necessary to show that it computes the correct
answer for all possible legal inputs. This process is called algorithm validation.
16. What is meant by analysis of algorithms?
A: Analysis of algorithms or Performance analysis refers to the task of determining how
much computing time and storage an algorithm requires. An important result of this study
is that
i. It allows us to make quantitative judgements about the value of one algorithm
over another.
ii. It allows us to predict whether the software will meet any efficiency constraints
that exist.
17. What are the steps in analyzing an algorithm?
A: A general framework for analyzing the efficiency of an algorithm is:
Measuring input size
Measuring running time
Computing Orders of growth of algorithms
Measuring Time and Space complexities
Computing Worst case, best case and average case efficiencies.
18. How can you compute the running time of an algorithm?
A: From an algorithm:
We first identify the BASIC OPERATION: the most important operation (core logic) of
the algorithm. Generally, the most time-consuming operation is the basic operation.
The concept of basic operations can be understood with the help of the following
examples:

Problem Statement               Input Size              Basic Operation
Searching a key element         List of n elements      Comparison of the key with
in a list of n elements                                 every element of the list
Performing matrix               Two matrices of         Multiplication of the row and
multiplication                  order n x n             column elements of the matrices
Computing the GCD of            Two numbers             Division
two numbers

Then we compute the total number of times this basic operation is executed. The
running time can be obtained approximately using the following formula:
T(n) ≈ cop × C(n)
Where
T(n) is the running time of the algorithm
cop is the time taken by the basic operation to execute once
C(n) is the number of times the basic operation is executed.
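The count C(n) can be observed directly by instrumenting an algorithm. Below is a small sketch (not from the text, names are illustrative) that counts the basic operation, key comparison, of a sequential search:

```python
# Counting the basic operation (key comparisons) of a sequential search,
# so that C(n) can be observed directly.

def sequential_search(lst, key):
    comparisons = 0                  # C(n): number of basic operations executed
    for item in lst:
        comparisons += 1             # one execution of the basic operation
        if item == key:
            return True, comparisons
    return False, comparisons

# Worst case: the key is absent, so the comparison is executed n times.
found, c = sequential_search([5, 3, 8, 1], 99)
# T(n) is then approximated as cop * C(n), where cop is the execution
# time of one comparison on the given machine.
```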

19. Define worst case efficiency of an algorithm?


A: The worst case efficiency of an algorithm is its efficiency for the worst case input of
size n, which is an input of size n for which the algorithm runs the longest among all
possible inputs of that size.
20. Define best case efficiency of an algorithm?
A: The best case efficiency of an algorithm is its efficiency for the best case input of size
n, which is an input of size n for which the algorithm runs the fastest among all possible
inputs of that size.
21. Define average case efficiency of an algorithm?
A: The average case efficiency of an algorithm describes the algorithm's behavior on a
typical or random input. Assumptions are made about the possible inputs of size n in
order to analyze the algorithm's average case efficiency.
22. Why space complexity of a program is necessary?
A: The reasons are
i. If the program is to run on a multi-user computer system, then it is important to
specify the amount of memory to be allocated to the program.
ii. To know in advance whether or not sufficient memory is available in the
computer system to run the program.
iii. A problem might have several possible solutions with different space
requirements.
iv. To estimate the size of the largest problem that a program can solve.
23. Why Time complexity of a program is necessary?
A: The reasons are
i. Some computer systems require the user to provide an upper limit on the amount
of time the program will run. Once this upper limit is reached, the program is
aborted.
ii. The program might need to provide a satisfactory real-time response.
iii. If there are alternative ways to solve a problem, then the decision on which to
use will be based primarily on the expected performance difference among these
solutions.
24. Define Order of Growth.
A: Measuring the performance of an algorithm in relation to the input size n is called
order of growth. For example, the orders of growth for varying input sizes n are as
given below:

n      log n   n log n   n^2     2^n
1      0       0         1       2
2      1       2         4       4
4      2       8         16      16
8      3       24        64      256
16     4       64        256     65536
32     5       160       1024    4294967296
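The table above can be regenerated with a quick sketch (assuming base-2 logarithms):

```python
# Regenerate the order-of-growth table for n, log n, n log n, n^2 and 2^n.
import math

rows = []
for n in [1, 2, 4, 8, 16, 32]:
    log_n = int(math.log2(n))                      # base-2 logarithm
    rows.append((n, log_n, n * log_n, n ** 2, 2 ** n))

for row in rows:
    print(*row)
```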
25. What is time space trade-off?
A: Time space trade-off is basically a situation where either space efficiency (memory
utilization) can be achieved at the cost of time or time efficiency (performance efficiency)
can be achieved at the cost of memory.
26. List out the techniques that exploit space-for-time trade-offs.
A: The techniques that exploit space-for-time trade-offs are
Input enhancement
Pre-structuring
27. Define Input enhancement.
A: It is a main technique that exploits space-for-time trade-offs. The idea is to
preprocess the problem's input, in whole or in part, and store the additional
information obtained in order to accelerate solving the problem afterward.
E.g. counting methods for sorting, the Boyer-Moore algorithm for string matching.
28. Define Pre-structuring.
A: It is a main technique that exploits space-for-time trade-offs. It uses extra space
to facilitate faster and/or more flexible access to the data. E.g. hashing, B-trees.
29. What is meant by asymptotic notations?
A: To choose the best algorithm, we need to check efficiency of each algorithm. The
efficiency can be measured by computing the time complexity of each algorithm. This is
done using notations called asymptotic notations. In short, asymptotic notation is a
shorthand way to represent the time complexity. The standard notations used are
O [Big Oh], Ω [Omega] and Θ [Theta].
30. Define Big Oh Notation.
A: Let f and g be two functions defined from the set of natural numbers to the set of
non-negative real numbers, that is, f, g: N → R≥0. It is said that f(n) = O(g(n)) if
there exist two positive constants c ∈ R and n0 ∈ N such that f(n) ≤ c·g(n) for all
n ≥ n0. Here, the function f(n) is bounded above by a constant multiple of g(n).
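The definition can be checked empirically for a concrete pair of functions. A minimal sketch (the functions f and g are illustrative, not from the text): for f(n) = 3n + 2 and g(n) = n, the constants c = 4 and n0 = 2 witness that f(n) = O(n).

```python
# Empirical check of the Big Oh definition: f(n) <= c*g(n) for all n >= n0.

def f(n):
    return 3 * n + 2

def g(n):
    return n

c, n0 = 4, 2  # witnesses: 3n + 2 <= 4n holds exactly when n >= 2
assert all(f(n) <= c * g(n) for n in range(n0, 1000))
```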
31. Define Omega Notation.
A: Let f and g be two functions defined from the set of natural numbers to the set of
non-negative real numbers, that is, f, g: N → R≥0. It is said that f(n) = Ω(g(n)) if
there exist two positive constants c ∈ R and n0 ∈ N such that f(n) ≥ c·g(n) for all
n ≥ n0. Here, the function f(n) is bounded below by a constant multiple of g(n).
32. Define Theta Notation.
A: Let f and g be two functions defined from the set of natural numbers to the set of
non-negative real numbers, that is, f, g: N → R≥0. It is said that f(n) = Θ(g(n)) if
there exist three positive constants c1, c2 ∈ R and n0 ∈ N such that
c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0. Here, the function f(n) is bounded both
above and below by constant multiples of g(n).
33. Define conditional asymptotic notation.
A: Many algorithms are easier to analyse if we impose conditions on them initially.
When we specify an asymptotic value under such a condition, it is called conditional
asymptotic notation. Let f, g: N → R≥0. It is said that f(n) = O(g(n) | A(n)), read
as O of g(n) when A(n), if there exist two positive constants c ∈ R and n0 ∈ N such
that A(n) ⇒ [f(n) ≤ c·g(n)], for all n ≥ n0.
34. What are the properties of order of growth?
A: If f1(n) is O(g1(n)) and f2(n) is O(g2(n)), then f1(n) + f2(n) is O(max(g1(n), g2(n))).
1. A polynomial of degree m is Θ(n^m); that is, the maximum degree in the
polynomial determines its order of growth.
2. O(1) < O(log n) < O(n) < O(n^2) < O(n^3) < O(2^n)
3. Exponential functions a^n have different orders of growth for different values of a.
35. Define recurrence equation.
A: It is an equation that defines a sequence recursively. It is normally in the
following form:
T(n) = T(n-1) + n for n > 0 ...(1)
T(0) = 0 ...(2)
(1) is called the recurrence relation and (2) is called the initial condition.
The general solution to the recursive function is some closed-form formula.
E.g. f(n) = 2f(n-1) + 1 for n > 1, with f(1) = 1. Solving this recurrence relation
we get f(n) = 2^n - 1.
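The closed form can be verified by evaluating the recurrence directly, a small sketch:

```python
# Verify that f(n) = 2f(n-1) + 1 with f(1) = 1 solves to f(n) = 2^n - 1.

def f(n):
    if n == 1:
        return 1                  # initial condition
    return 2 * f(n - 1) + 1       # recurrence relation

for n in range(1, 15):
    assert f(n) == 2 ** n - 1     # closed-form solution
```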
36. What is meant by divide and conquer?
A: A technique for designing algorithms that decomposes an instance into smaller
sub-instances of the same problem, solves the sub-instances independently, and
combines the sub-solutions to obtain the solution of the original instance is called
divide and conquer. It uses the top-down approach to obtain solutions to problems.
37. Define divide and conquer algorithm.
A: Formally, a divide and conquer algorithm consists of the following major phases:
Divide: Breaking the problem (if the input size is too large) into several
sub-problems that are similar to the original problem but smaller in size.
Conquer: Recursively solve the sub-problems.
Combine: Take the solutions of the sub-problems and merge them into a solution
for the original problem.
38. Give the recurrence relation for divide and conquer algorithm.
A:
T(n) = g(n)             if n is small
T(n) = aT(n/b) + f(n)   otherwise
39. Give the examples of divide and conquer algorithm.
A:
i Sorting: merge sort and quick sort
ii Binary tree traversals
iii Binary Search
iv Multiplication of large integers
40. What is meant by binary search?
A: Binary search is an efficient searching method. While searching for an element
using this method, the most essential requirement is that the list of elements must
already be sorted.
41. Give the divide and conquer steps of binary search.
A: Let A[m] be the middle element of array A and K be the search key. Three
conditions need to be tested while searching the array using this method:
i. If K = A[m], the desired element is found.
ii. Otherwise, divide the array into two sub-arrays about half as large. If
K < A[m], search the left sub-list.
iii. Otherwise, if K > A[m], search the right sub-list.

42. Give the applications of binary search.


A: Finding a name in the phone book, searching for a desired record in a
database, and solving a nonlinear equation in one unknown.
43. Give the worst case analysis of binary search algorithm.
A: The worst case occurs when
1) the search key is not in the sorted list, or
2) the search key is in the last position examined.
Each key comparison halves the size of the array under consideration. Therefore in
the worst case T(n) = Θ(log n).

44. Give the average case analysis of binary search algorithm.


A: The number of key comparisons in the average case is slightly smaller than
in the worst case. Therefore T(n) = Θ(log n).
45. Give the best case analysis of binary search algorithm.
A: The number of comparisons in the best case is only one. Therefore
T(n) = Θ(1).
46. Find the element 10 in the list 10 15 20 25 30 35 40 using the binary search
algorithm.
A: Finding the middle element of the given list of elements (indices 1 to 7):
Mid = (1+7)/2 = 4. Now A[mid] = A[4] = 25 and the search key is 10.
Since 10 < A[4], search the left sub-list: 10 15 20 25. Again
Mid = (1+4)/2 = 2. A[2] = 15 and 10 < A[2]. Hence search the left sub-list again:
Mid = (1+1)/2 = 1; A[1] = 10 and 10 = 10.
Hence the element is found.
47. Give the best case, worst case, average case analysis of finding maximum and
minimum.
A: For finding the maximum and minimum elements, 3n/2 - 2 is the number of
comparisons in the best, worst and average cases when n is a power of 2. Hence,
neglecting the constant factors, the time complexity is O(n).
48. Give the divide and conquer steps of merge sort.
A: i. Divide the array into two sub-arrays, each with n/2 items.
ii. Conquer each sub-array by sorting it; unless the array is sufficiently small,
use recursion to do this.
iii. Combine the solutions to the sub-arrays by merging them into a single sorted
array.
49. How can you merge two sorted lists?
A: 1) Choose the smaller of the two lists' front elements.
2) Remove it from its list and put it into a new list.
3) Repeat the previous steps until all elements are arranged.
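The merge step above can be sketched as follows (a minimal illustration, not the text's own code):

```python
# Merge two sorted lists: repeatedly move the smaller front element
# of the two lists into the output list.

def merge(left, right):
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:        # choose the smaller front element
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])            # append whatever remains in either list
    result.extend(right[j:])
    return result
```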
50. Define merge sort tree
A:
Merge sort can be visualized by means of binary tree where each node of
the tree represents a recursive call and each external node represents individual
elements of given array. Such a tree is called merge sort tree.
51. Give the running time of merge sort.
A: T(n) = Θ(n log n)

UNIT 2
1) What is meant by greedy algorithm?
A: Greedy algorithm is an algorithmic design technique mainly used to solve
optimization problems. Greedy algorithms take decisions on the basis of information
at hand without worrying about the effect these decisions may have in the future.
This approach suggests constructing a solution through a sequence of steps, each
expanding a partially constructed solution obtained so far, until a complete solution
to the problem is reached. On each step, the choice made must be
Feasible: it should satisfy the problem's constraints
Locally optimal: among all feasible choices the best one is made
Irrevocable: once the choice is made, it should not be changed on the
subsequent steps of the algorithm.
2) What is meant by feasible solution?
A: Solutions that satisfy the problem's constraints.
3) What is meant by locally optimal solution?
A:
The best choice among all feasible solutions is called a locally optimal
solution
4) What is meant by irrevocable solution?
A: Once a choice is made, it should not be changed on the subsequent steps of
the algorithm.
5) What are the functions of greedy algorithm?
A:
The greedy approach consists of 4 functions
A function that checks whether a given set of candidates give a solution.
A function that checks the feasibility of a set of candidates
A selection function which chooses some candidate which has not yet
been chosen
An objective function which gives the value of a solution.
6) Write down the advantages and disadvantages of greedy algorithm?
A: Advantage:
Greedy algorithms are fast when they work, and they are simple algorithms that
are easy to implement.
Disadvantage:
Greedy algorithms don't always work, and they are hard to prove correct.
7) What is meant by container loading?
A: In this problem, a ship is to be loaded with containers. All containers are
the same size, but different containers may have different weights. At each stage,
one container is loaded onto the ship. The total weight of all the loaded containers
must be less than or equal to the capacity of the ship.

8) Give the greedy solution for container loading problem


A: Select the container that has the least weight, then the one with the next
smallest weight, and so on, until either all containers have been loaded or there
isn't capacity for the next one, i.e. until loading the next container would make
the total weight exceed the capacity of the ship.
9) Give the analysis for container loading problem
A: Sorting the containers takes O(n log n) time. The remaining part of the
algorithm takes O(n) time. The overall time complexity is O(n log n).
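The greedy rule above can be sketched as follows (a minimal illustration; the function and variable names are assumptions, not from the text):

```python
# Greedy container loading: sort containers by weight and load them in
# increasing order until the ship's capacity W would be exceeded.

def container_loading(weights, W):
    loaded = []
    # sort (index, weight) pairs by weight, lightest first
    for i, wt in sorted(enumerate(weights), key=lambda p: p[1]):
        if wt <= W:
            loaded.append(i)   # load container i
            W -= wt            # remaining capacity
        else:
            break              # no capacity for the next container
    return loaded
```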
10) Define Knapsack problem.
A: Given n items, each with profit value vi and weight wi, and a knapsack that
can hold at most weight W, the objective is to find the most valuable subset of the
n items of given weights and values that fits into the knapsack of the given capacity.
11) Define fractional Knapsack problem.
A: Given n items, each with profit value vi and weight wi, and a knapsack that
can hold at most weight W, the objective is to find the most valuable subset of the
n items of given weights and values that fits into the knapsack of the given
capacity. Here the items can be broken into smaller pieces, so a fraction xi of
item i, where 0 ≤ xi ≤ 1, may be chosen.
12) Write the algorithm for greedy Knapsack.
A: 1) Sort items in decreasing order of profit per weight (pi/wi).
2) While there is still room in the knapsack:
3) Consider the next item in the sorted list.
4) Take as much of it as possible.
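The four steps above can be sketched as follows (a minimal fractional-knapsack illustration; names are assumptions, not from the text):

```python
# Greedy fractional knapsack: sort by profit/weight ratio and take as
# much of each item as fits; the last item taken may be a fraction.

def fractional_knapsack(profits, weights, W):
    # step 1: sort items in decreasing order of pi/wi
    items = sorted(zip(profits, weights),
                   key=lambda it: it[0] / it[1], reverse=True)
    total = 0.0
    for p, w in items:                 # steps 2-3: next item while room remains
        if W <= 0:
            break
        take = min(w, W)               # step 4: take as much as possible
        total += p * take / w          # profit is proportional to the fraction
        W -= take
    return total
```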
13) Give the analysis for greedy Knapsack.
A: If the items are already sorted into decreasing order of pi/wi, then the
while loop takes time in O(n). The overall time complexity, including the sort, is
O(n log n).
14) Define subset paradigm.
A: If the inputs to a problem using the greedy approach are considered in an
order based on some selection procedure, i.e. some optimization measure is used for
selection, then it is said to work in the subset paradigm. Here, at every stage, the
input is checked as to whether it leads to an optimal solution. If the inclusion of
the input into the partial solution yields an infeasible solution, discard the input;
otherwise, add it to the partial solution.

15) What are the various applications of Greedy method?


A:
Knapsack problem
Container loading
Single source shortest paths
Prim's algorithm for minimum spanning tree
Kruskal's algorithm for minimum spanning tree
Job sequencing with deadlines
16) What is the basic operation in container loading problem?
A:
The basic operation is:
for (i = 1 to num AND container[i].wt <= W) do
{
    solution[container[i].id] = 1;
    W = W - container[i].wt;
}
17) Give time complexities for a successful and unsuccessful binary search
A:
Successful binary search
Best case: Θ(1)
Average case: Θ(log n)
Worst case: Θ(log n)
Unsuccessful binary search
Best case: Θ(log n)
Average case: Θ(log n)
Worst case: Θ(log n)
18) What is Minimum Spanning Tree?
A:
A Minimum Spanning Tree of a weighted connected graph G is a spanning
tree with minimum or smallest weight.
19) What is the weight of a tree?
A: The weight of a tree is defined as the sum of the weights of all its edges.
20) What is Prim's algorithm?
A: This algorithm obtains a minimum spanning tree by selecting vertices adjacent
to the already selected vertices.
21) What is Kruskal's algorithm?
A: This algorithm also obtains a minimum spanning tree, but it is not necessary to
choose vertices adjacent to the already selected vertices.
22) What is the single source shortest paths algorithm?
A: Dijkstra's algorithm is a popular algorithm for finding shortest paths. It is
called the single source shortest paths algorithm. In this algorithm, for a given
vertex called the source, the shortest paths to all other vertices are obtained. The
main focus is not on finding one single path but on finding the shortest paths from
one vertex to all the other remaining vertices.
23) What is the use of Dijkstra's algorithm?
A: It is used to solve the single source shortest paths problem: for a given vertex
called the source in a weighted connected graph, find the shortest paths to all its
other vertices.
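Dijkstra's algorithm can be sketched compactly with a priority queue (the adjacency-list representation and names here are illustrative assumptions, not from the text):

```python
# Dijkstra's single source shortest paths using a min-heap.
import heapq

def dijkstra(graph, source):
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    pq = [(0, source)]                      # (distance, vertex)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                        # stale heap entry, skip
        for v, w in graph[u]:
            if d + w < dist[v]:             # relax edge (u, v)
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

# Small illustrative graph: vertex -> list of (neighbor, edge weight).
g = {'a': [('b', 4), ('c', 1)],
     'b': [('d', 1)],
     'c': [('b', 2), ('d', 5)],
     'd': []}
```

Running `dijkstra(g, 'a')` yields the shortest distance from 'a' to every vertex, e.g. the path a → c → b beats the direct edge a → b.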
UNIT 3
1) What is meant by dynamic programming?
A: Dynamic programming is an algorithm design technique that can be used
when the solution to a problem can be viewed as the result of a sequence of
decisions. For example, the knapsack problem: its solution can be viewed as the
result of a sequence of decisions. We have to decide the values of xi, 1 ≤ i ≤ n.
First a decision is made on x1, then on x2, then on x3, and so on. An optimal
sequence of decisions maximizes the objective function Σ pi·xi. An optimal sequence
of decisions is obtained by the principle of optimality.
2) Define the principle of optimality
A:
Dynamic Programming works on the principle of optimality. The principle
of optimality states that an optimal sequence of decisions has the property that
whatever the initial state and decision are, the remaining decisions must constitute an
optimal sequence of decisions with regard to the state resulting from the first
decision.
3) Compare and contrast between Greedy algorithm and Dynamic programming
A:
GREEDY METHOD vs DYNAMIC PROGRAMMING:
1. The greedy method is used for obtaining an optimum solution; dynamic
programming is also used for obtaining an optimum solution.
2. In the greedy method an optimum solution is picked from a set of feasible
solutions; in dynamic programming there is no special set of feasible solutions.
3. The greedy method makes its optimum selection without revising previously
generated solutions; dynamic programming considers all possible sequences in
order to obtain the optimum solution.
4. In the greedy method only one decision sequence is ever generated; in dynamic
programming many decision sequences may be generated.
5. The greedy method gives no guarantee of an optimum solution; dynamic
programming is guaranteed to generate an optimum solution, using the principle
of optimality.

4) Give the applications/examples of dynamic programming

A:

i Multistage graphs
ii Shortest path problems
iii Longest common sub-sequence problem
iv Knapsack Problem.

5) Compare and contrast between Divide and Conquer and Dynamic programming
A:
DIVIDE AND CONQUER vs DYNAMIC PROGRAMMING:
1. In divide and conquer the problem is divided into small sub-problems, which
are solved independently; finally all the solutions of the sub-problems are
combined to get the whole solution to the given problem. In dynamic programming
many decision sequences are generated and all the overlapping sub-instances are
considered.
2. Divide and conquer is less efficient because of rework on solutions; dynamic
programming is more efficient than the divide and conquer strategy.
3. Divide and conquer uses the top-down approach of problem solving (recursive
method); dynamic programming uses the bottom-up approach (iterative method).
4. Divide and conquer splits its input at specific deterministic points; dynamic
programming splits its input at every possible split point rather than at a
particular point, and after trying all split points it determines which split
point is optimal.
5. Divide and conquer has logarithmic time complexity; dynamic programming has
polynomial time complexity.
6) Define multistage graphs
A: A multistage graph G = (V, E) is a directed graph in which the vertices are
partitioned into k >= 2 disjoint sets Vi, 1 ≤ i ≤ k. The objective is to find a
minimum cost path from the source (s) to the sink (t). Each set Vi defines a stage
in the graph. That is, every path from s to t starts in stage 1, goes to stage 2,
then to stage 3, and so on until terminating in stage k. Hence we obtain the minimum
path length for each vertex at each stage.
7) What is meant by all pairs shortest path problem?
A: The all pairs shortest path problem is to determine a matrix A such that
A(i, j) is the length of a shortest path from i to j. It is computed by solving n
single source problems, as follows:
A path originates from vertex i, goes through some intermediate vertices, and
terminates at vertex j.
If k is an intermediate vertex on this shortest path, then the sub-paths from
i to k and from k to j must themselves be shortest paths from i to k and from
k to j, respectively; otherwise the path from i to j is not of minimum length.
Therefore the principle of optimality holds.
8) Give any two properties of dynamic programming approach

A:
1) Optimal substructure: The dynamic programming technique makes use
of principle of optimality to find the optimal solution from sub-problems.
2) Overlapping sub-problems: The dynamic programming is a technique
in which the problem is divided into sub-problems. The solutions of sub-problems are
shared to get the final solution to the problem. It avoids repetition of work and we can
get the solution more efficiently.
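The overlapping sub-problems property can be illustrated with a classic Fibonacci sketch (an illustrative example, not from the text): naive recursion solves the same sub-problems repeatedly, while storing (memoizing) their solutions removes the rework.

```python
# Memoized Fibonacci: each sub-problem fib(k) is solved once and its
# solution is shared by every later call that needs it.

def fib(n, memo=None):
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]          # sub-problem already solved: reuse it
    if n <= 1:
        return n
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]
```

Without the `memo` table the call tree would have exponentially many nodes; with it, `fib(50)` is computed in linear time.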
9) What does dynamic programming have in common with divide and conquer?
What is the principle difference between the two?
A: Both divide and conquer and dynamic programming solve a problem by
breaking it into a number of sub-problems, and in both methods the solutions of the
sub-problems are combined to get the final solution to the problem. But in divide
and conquer the sub-problems are solved independently, whereas in dynamic
programming the sub-problems share solutions among themselves. Dynamic
programming also works on the principle of optimality.
10) Give the commonly used designing steps for dynamic programming algorithm.
A:
Dynamic programming design involves 4 major steps
Characterize the structure of optimal solution.
Recursively define the value of an optimal solution.
By using bottom up technique compute the value of optimal solution.
Compute an optimal solution from computed information.
11) Define OBST
A: A binary search tree can be organized so as to minimize the number of nodes
visited across all searches, given that it is known how often each word occurs. To
increase search efficiency, more frequently used words are arranged nearer to the
root and less frequently used words farther from the root, with the help of the
probability value of each word. This type of arrangement is called an optimal binary
search tree.
12) Give the components of dynamic programming.
A: A dynamic programming solution has three components:
i. Formulate the answer as a recurrence relation or recursive algorithm.
ii. Show that the number of different instances of the recurrence is bounded by a
polynomial.
iii. Specify an order of evaluation for the recurrence so that the solution is
obtained in the way we desire.

UNIT 4

1) Define exhaustive search.


A: Exhaustive search is a basic search routine in which all possible solutions
to a given problem are produced. For each candidate it is checked whether the
problem is solved, and the search continues until a correct solution is generated.
It produces the entire solution space for the problem.
2) Differentiate backtracking over exhaustive search
A:
Backtracking can be considered as an improvement over exhaustive
search. Unlike exhaustive search, they construct candidate solutions, one component
at a time and evaluate the partially constructed solutions; if no potential values of the
remaining components can lead to a solution, the remaining components are not
generated at all.
3) Define backtracking
A:
Backtracking is a general algorithm for finding all (or some) solutions to
some computational problem, that incrementally builds candidates to the solutions,
and abandons each partial candidate c ("backtracks") as soon as it determines that c
cannot possibly be completed to a valid solution.
4) Give categories of the problems in backtracking
A:
In backtracking, problems can be categorized into three categories:
i. Decision problem: finds whether there is any feasible solution.
ii. Optimization problem: finds whether there exists a best solution.
iii. Enumeration problem: finds all the possible feasible solutions.
5) Compare and contrast between Branch and Bound and Backtracking
A:
BACKTRACKING vs BRANCH AND BOUND:
1. In backtracking the solution is traced using depth first search; in branch and
bound it is not necessary to use DFS to obtain the solution, and even breadth
first search or best first search can be applied.
2. Typically decision problems are solved using backtracking; typically
optimization problems are solved using branch and bound.
3. In backtracking, bad choices can be made while finding solutions to the
problem; branch and bound proceeds on better solutions, so there cannot be any
bad solutions.
4. In backtracking the state space tree is searched until the solution is
obtained; in branch and bound the state space tree needs to be searched
completely, as there may be a chance of getting an optimum solution anywhere in
the state space tree.
5. Applications of backtracking: m-coloring, knapsack. Applications of branch
and bound: job sequencing, TSP.

6) What is the basic idea of backtracking?

A: The principal idea of backtracking is to construct solutions one component at
a time and evaluate such partially constructed candidates as follows:
If a partially constructed solution can be developed further without
violating the problem's constraints, it is done by taking the first remaining
legitimate option for the next component.
If there is no legitimate option for the next component, no alternatives for
any remaining components need to be considered. In this case, the algorithm
backtracks to replace the last component of the partially constructed
solution with its next option.

7) Define promising node and non-promising node


A:
A node in a state space tree is said to be promising if it corresponds to a
partially constructed solution that may lead to a complete solution; otherwise it is
called non-promising.
8) What is a state space tree?
A:
A state space tree is a rooted tree whose nodes represent partially
constructed solution to the given problem. In backtracking method, the state space
tree is built for finding the solution. This tree is built using Depth First Search
fashion.
9) What are the types of constraints used in backtracking?
A: Explicit constraints are rules that restrict each xi to take on values only
from a given set. Explicit constraints depend on the particular instance I of the
problem being solved. All tuples that satisfy the explicit constraints define a
possible solution space for I.
Examples of explicit constraints:
xi ≥ 0, or all non-negative real numbers
xi ∈ {0, 1}
li ≤ xi ≤ ui
Implicit constraints are rules that determine which of the tuples in the
solution space of I satisfy the criterion function. Implicit constraints describe
the way in which the xi must relate to each other.
10) Give the formal definition of n queens problem.
A:
Consider an n x n chessboard on which we have to place n queens such that
no two queens can attack each other by being in the same row, in the same column,
or on the same diagonal. The diagonal conflicts can be checked by the following
formula:
Two positions P1 = (i, j) and P2 = (k, l) are on the same diagonal if
i + j = k + l or
i - j = k - l

11) How can you represent the solution for 8 queen problem?

A:
All solutions to the 8-queens problem can be represented as an 8-tuple
(x1, x2, ..., x8), where xi is the column on which queen i is placed.
For example, the solution below is expressed as the 8-tuple (4, 6, 8, 2, 7, 1, 3, 5):
. . . Q . . . .
. . . . . Q . .
. . . . . . . Q
. Q . . . . . .
. . . . . . Q .
Q . . . . . . .
. . Q . . . . .
. . . . Q . . .
12) Give the explicit and implicit constraint for 8 queen problem.
A:
Explicit constraints using the 8-tuple formulation are Si = {1, 2, 3, 4, 5, 6, 7, 8},
1 ≤ i ≤ 8. Therefore the solution space consists of 8^8 8-tuples.
Implicit constraints are:
No two xi can be the same, i.e. all the queens must be in different columns.
All solutions are permutations of the 8-tuple (1, 2, 3, 4, 5, 6, 7, 8), which
reduces the size of the solution space from 8^8 to 8! tuples.
No two queens can be on the same diagonal.
13) Give the condition that says two queens are on the same diagonal?
A: The diagonal conflicts can be checked by the following formula:
Two positions P1 = (i, j) and P2 = (k, l) are on the same diagonal if
i + j = k + l or
i - j = k - l
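This check can be expressed directly in code. A minimal Python sketch (the function names are our own), which also validates a full placement such as the tuple from question 11:

```python
def same_diagonal(i, j, k, l):
    # positions (i, j) and (k, l) share a diagonal
    return i + j == k + l or i - j == k - l

def is_valid_placement(cols):
    # cols[r] is the column of the queen in row r + 1 (columns are 1-based)
    n = len(cols)
    for a in range(n):
        for b in range(a + 1, n):
            if cols[a] == cols[b] or same_diagonal(a + 1, cols[a], b + 1, cols[b]):
                return False
    return True

print(is_valid_placement([4, 6, 8, 2, 7, 1, 3, 5]))  # True: the tuple from question 11
print(is_valid_placement([1, 2, 3, 4, 5, 6, 7, 8]))  # False: all on one diagonal
```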
14) Define Subset sum Problem
A: Given n positive numbers wi, 1 ≤ i ≤ n, and a number m, find all subsets of the
wi whose sums are m.
For example, for n = 4, w = (11, 13, 24, 7), and m = 31, the desired subsets
are (11, 13, 7) and (24, 7).
The solution vectors can also be represented by the indices of the numbers,
as (1, 2, 4) and (3, 4).
All solutions are k-tuples, 1 ≤ k ≤ n.

15) Give the explicit and implicit constraint for Subset sum Problem
A: Explicit constraints
xi ∈ {j | j is an integer and 1 ≤ j ≤ n}
Implicit constraints
No two xi can be the same.
Σ wxi = m
xi < xi+1, 1 ≤ i < k (total order in indices): this helps avoid the generation
of multiple instances of the same subset; (1, 2, 4) and (1, 4, 2) are the same
subset.
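These constraints translate into a straightforward backtracking sketch. A minimal Python illustration (the names are our own), run on the example instance from question 14:

```python
def subset_sums(w, m):
    """Return index tuples (1-based, increasing) of subsets of w summing to m."""
    results = []
    def extend(start, chosen, total):
        if total == m:
            results.append(tuple(chosen))
            return
        for i in range(start, len(w)):
            if total + w[i] <= m:            # positive weights: prune overshoots
                chosen.append(i + 1)         # xi < xi+1 holds since i only increases
                extend(i + 1, chosen, total + w[i])
                chosen.pop()
    extend(0, [], 0)
    return results

print(subset_sums((11, 13, 24, 7), 31))  # [(1, 2, 4), (3, 4)]
```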
16) Define Problem state
A: A problem state is each node in the depth-first search tree.
17) Define Solution states
A: Solution states are the problem states s for which the path from the root node to s
defines a tuple in the solution space
In the variable tuple size formulation tree, all nodes are solution states.
In the fixed tuple size formulation tree, only the leaf nodes are solution states.
The solution space is partitioned into disjoint sub-solution spaces at each
internal node.
18) Define Answer states
A: Answer states are those solution states s for which the path from root node to s
defines a tuple that is a member of the set of solutions. These states satisfy implicit
constraints.
19) Define Static trees and Dynamic trees
A: Static trees are ones for which the tree organization is independent of the
problem instance being solved, e.g. the fixed tuple size formulation.
Dynamic trees are ones for which the tree organization is dependent on the
problem instance being solved.
20) Define Live node
A: Live node is a generated node for which all of the children have not been
generated.
21) Define Dead node
A: Dead node is a generated node that is not to be expanded any further
All the children of a dead node are already generated
Live nodes are killed using a bounding function to make them dead nodes
22) Define m colorability Problem
A: Let G be a graph and m a given positive integer. The problem is to find
whether the nodes of G can be colored in such a way that no two adjacent nodes have
the same color yet only m colors are used. This is called the m-colorability decision
problem.
E.g.: if d is the maximum degree of the given graph, then it can be colored with
d + 1 colors. The m-colorability optimization problem asks for the smallest integer
m for which the graph G can be colored. This integer is referred to as the chromatic
number of the graph.
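A small backtracking solver for the decision problem can be sketched as follows. This is a minimal illustration, not from the source; the graph is given as adjacency lists and colors are numbered 1..m:

```python
def m_coloring(adj, m):
    """Return a valid coloring (list of colors 1..m) or None if impossible."""
    n = len(adj)
    colors = [0] * n
    def color(v):
        if v == n:
            return True
        for c in range(1, m + 1):
            if all(colors[u] != c for u in adj[v]):   # no adjacent node has color c
                colors[v] = c
                if color(v + 1):
                    return True
                colors[v] = 0                          # backtrack
        return False
    return colors if color(0) else None

triangle = [[1, 2], [0, 2], [0, 1]]   # complete graph on 3 vertices
print(m_coloring(triangle, 3))  # [1, 2, 3]
print(m_coloring(triangle, 2))  # None: a triangle needs 3 colors
```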
23) Define graph coloring problem
A: same answer as in question 22 with e.g.
24) Define Knapsack Problem
A: Given n items, each with profit value vi and weight wi, and a knapsack that can
hold at most weight W, the objective is to find a vector (x1, ..., xn) that
maximizes Σ vixi subject to Σ wixi ≤ W.
The xi constitute a zero-one-valued vector. The solution space for this problem
consists of the 2^n distinct ways to assign zero or one values to the xi.
25.Define Hamiltonian circuit problem
A: A Hamiltonian circuit is defined as a cycle that passes through all the vertices
of the graph exactly once. It is a sequence of n + 1 adjacent vertices
Vi0, Vi1, ..., Vi(n-1), Vi0, where the first vertex of the sequence is the same as
the last one while all the other n - 1 vertices are distinct.
26. Define graph.
A graph G consists of a nonempty set V, which is the set of nodes of the graph,
a set E, which is the set of edges of the graph, and a mapping from the set of edges
E to a set of pairs of elements of V. It can also be represented as G = (V, E).
27. List out the types of graph
Undirected graph
Directed graph
Weighted graph
Connected graph
Dense graph
Sparse graph
Multigraph
28. Define weighted graph.
A graph in which weights are assigned to every edge is called a weighted graph
29. Define undirected graph
A graph in which every edge is undirected is called an undirected graph.
30. Define directed graph.
A graph in which every edge is directed is called a directed graph.
31. Define connected graph.

A graph is said to be connected if for every pair of its vertices u and v there is a
path from u to v.
32. Define multigraph
A multigraph allows multiple edges between the same vertices.
33. Define complete graph
A graph with every pair of its vertices connected by an edge is called complete
graph.
34. Define connected components of a graph.
A connected component is a maximal connected subgraph of a given graph.
35. How can you represent the graph?
Graphs can be represented in two ways:
Adjacency matrix
Adjacency linked lists
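The two representations can be illustrated on a small made-up graph. A minimal Python sketch (Python lists stand in for linked lists):

```python
# Undirected graph with 4 vertices and edges 0-1, 0-2, 1-2, 2-3 (a made-up example).
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
n = 4

# Adjacency matrix: matrix[u][v] == 1 iff there is an edge between u and v.
matrix = [[0] * n for _ in range(n)]
for u, v in edges:
    matrix[u][v] = matrix[v][u] = 1

# Adjacency linked lists: adj[u] holds all neighbors of u.
adj = [[] for _ in range(n)]
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)

print(matrix[1][2], matrix[1][3])  # 1 0
print(adj[2])                      # [0, 1, 3]
```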
36. List out the types of graph traversal.
Breadth first search
Depth first search
37. Differentiate between DFS and BFS
                                          DFS                          BFS
Data structure                            stack                        queue
Number of vertex orderings                2 orderings                  1 ordering
Edge types (undirected graphs)            tree and back edges          tree and cross edges
Applications                              connectivity, acyclicity,    connectivity, acyclicity,
                                          articulation points          minimum-edge paths
Efficiency for adjacency matrix           Θ(|V|²)                      Θ(|V|²)
Efficiency for adjacency linked lists     Θ(|V|+|E|)                   Θ(|V|+|E|)
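The stack/queue distinction above can be seen directly in code. A minimal Python sketch of both traversals on an adjacency-list graph (the example graph and function names are our own):

```python
from collections import deque

def dfs_order(adj, start=0):
    visited, stack, order = set(), [start], []
    while stack:
        v = stack.pop()                      # LIFO: deepest vertex explored first
        if v not in visited:
            visited.add(v)
            order.append(v)
            stack.extend(reversed(adj[v]))   # reversed so smaller neighbors pop first
    return order

def bfs_order(adj, start=0):
    visited, queue, order = {start}, deque([start]), []
    while queue:
        v = queue.popleft()                  # FIFO: vertices explored level by level
        order.append(v)
        for u in adj[v]:
            if u not in visited:
                visited.add(u)
                queue.append(u)
    return order

cycle = [[1, 2], [0, 3], [0, 3], [1, 2]]     # 4-cycle 0-1-3-2-0
print(dfs_order(cycle))  # [0, 1, 3, 2]
print(bfs_order(cycle))  # [0, 1, 2, 3]
```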

38. How can you check the graph is connected?


Start a DFS traversal at an arbitrary vertex and check, after the algorithm halts,
whether all the graph's vertices have been visited.
If they have, the graph is connected; otherwise it is not connected.
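This procedure can be sketched in a few lines. A minimal Python illustration (recursive DFS; the function name is our own):

```python
def is_connected(adj):
    """DFS from vertex 0; the graph is connected iff every vertex gets visited."""
    visited = set()
    def dfs(v):
        visited.add(v)
        for u in adj[v]:
            if u not in visited:
                dfs(u)
    dfs(0)
    return len(visited) == len(adj)

print(is_connected([[1], [0, 2], [1]]))    # True: path 0-1-2
print(is_connected([[1], [0], [3], [2]]))  # False: two separate components
```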
39. Define spanning tree.
A spanning tree of a connected graph is its connected acyclic subgraph that
contains all the vertices of the graph.
40. Define DFS spanning tree
Spanning tree obtained using a depth first search is called depth first spanning
tree.
41. Define BFS spanning tree

Spanning tree obtained using a breadth first search is called breadth first
spanning tree.
42. Define biconnected graph.
A graph G is biconnected if and only if it contains no articulation points.
43. Define biconnected components.
A maximal biconnected subgraph is called a biconnected component.

UNIT 5
1. What are the additional features required in branch-and-bound when compared to
backtracking?
Compared to backtracking, branch-and-bound requires:
A way to provide, for every node of a state space tree, a bound on the
best value of the objective function on any solution that can be obtained
by adding further components to the partial solution represented by the
node.
The value of the best solution seen so far.
2. List out the types of branch-and-bound technique.
First in first out
Least cost or max profit
3. Give the applications of branch and bound technique.
Assignment problem
Knapsack problem
Traveling salesperson problem
4. What is the assignment problem?
Assigning n people to n jobs so that the total cost of the assignment is as small
as possible. An instance of the problem is specified as an n-by-n cost matrix C,
so the problem can be stated as: select one element in each row of the matrix so
that no two selected elements are in the same column and their sum is the smallest
possible.

5. Give the formula used to find the upper bound for knapsack problem.
A simple way to find the upper bound ub is to add to v, the total value of the
items already selected, the product of the remaining capacity of the knapsack
(W - w) and the best per-unit payoff among the remaining items, which is vi+1/wi+1:
ub = v + (W - w)(vi+1/wi+1)
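As a quick numeric sketch of this formula (the values below are made-up):

```python
def upper_bound(v, w, W, next_value, next_weight):
    # v: total value of items already selected; w: their total weight
    # next_value / next_weight: best per-unit payoff among the remaining items
    return v + (W - w) * (next_value / next_weight)

# Selected items worth 40 weighing 4, capacity 10, best remaining ratio 42/7 = 6:
print(upper_bound(40, 4, 10, 42, 7))  # 76.0
```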
6. What are the strengths of backtracking and branch-and-bound?
The strengths are as follows:
They are typically applied to difficult combinatorial problems for which no
efficient algorithm for finding exact solutions exists.
They hold the hope of solving some instances of nontrivial sizes in an
acceptable amount of time.

7. Define decision algorithm


Any problem for which the answer is either yes or no is called a decision problem.
The algorithm for a decision problem is called a decision algorithm.
8. Define optimization algorithm
Any problem that involves the identification of an optimal cost (minimum or
maximum) is called an optimization problem. The algorithm for an optimization
problem is called an optimization algorithm.
9. Define P.
Problems that can be solved in polynomial time. (P stands for polynomial).
E.g. searching of key element, sorting of elements, all pairs shortest path.
10. Define NP
It stands for nondeterministic polynomial time: the class of decision problems
that can be solved in polynomial time by a nondeterministic algorithm. E.g.
traveling salesperson problem, graph coloring problem, Hamiltonian circuit problem.
11. Define deterministic and non-deterministic algorithm
The algorithm in which every operation is uniquely defined is called a
deterministic algorithm. An algorithm in which an operation may not have a unique
result, but instead a specified set of possibilities, is called a non-deterministic
algorithm. Non-deterministic means that no particular rule is followed to make the
guess.
This is a two stage algorithm
Non-deterministic (guessing) stage: generate an arbitrary string that can
be thought of as a candidate solution.
Deterministic (verification) stage: It takes as input the candidate solution
and the instance to the problem and returns yes if the candidate solution
represents the actual solution.
12. Name the functions used in a non-deterministic algorithm.
1) Choose - arbitrarily chooses one of the elements from the given input set.
2) Fail- Indicates the unsuccessful completion.
3) Success- Indicates successful completion.
13. Give the relationship between P, NP, NP complete and NP hard problems.
The relationship between P, NP, NP-complete, and NP-hard (assuming P ≠ NP):
P is contained in NP, since every problem solvable in polynomial time can also
be verified in polynomial time.
NP-hard problems are those at least as hard as every problem in NP; they need
not themselves belong to NP.
NP-complete problems are exactly the problems that are both in NP and NP-hard,
i.e. the intersection of the NP and NP-hard classes.
