
DMC1653 DESIGN AND ANALYSIS OF ALGORITHMS

Unit I

Introduction


Structure of Unit I
1 Introduction

1.1 What is an algorithm?

1.2 Fundamentals of problem solving

1.2.1 Computational Devices

1.2.2 Selecting Design Techniques

1.2.3 Choice of Data Structures

1.2.4 Algorithm Representation

1.2.5 Algorithm Correctness

1.2.6 Algorithm Analysis

1.3 Common Problem Types

1.4 Asymptotic Notations

1.5 Linear Search - An Example for Determining Algorithm Efficiency

1.6 Mathematical Analysis - Some Examples

1.6.1 Nonrecursive Algorithms

1.6.2 Recursive Algorithms


Learning Objectives
• Basic definition of algorithm and problem solving.
• Various models of computation.
• Asymptotic representation of time and space complexities.
• Analyzing the algorithm with simple examples.


1. Introduction
1.1 What is an algorithm?

An algorithm is a sequence of unambiguous instructions for solving a problem in a
finite amount of time. An input to an algorithm is an instance of the problem, and the
instances of a problem are drawn from a domain. For instance, if we want to add two
integers, and assume for the sake of argument that we have an algorithm for this, then
an instance of the problem is any two arbitrary integers, and the domain is the set of
integers. The algorithm must solve the problem whatever the instance (from the
particular domain) may be, and within the time bound that we refer to as the worst-case
complexity. Another important property of an algorithm is that it must terminate within
a finite number of steps. This is why an algorithm always differs from a program, even
though algorithms are used for developing programs; one can give several examples of
non-terminating programs, for instance operating systems. That is why the first
sentence says "solving a problem in a finite amount of time". A sequence of arbitrary
instructions does not by itself constitute an algorithm: each instruction must be
unambiguous, doing exactly what it is meant to do (definiteness). Now you are advised
to read the first sentence again. We hope you understand what we mean by an algorithm;
keep these points in mind while studying this course.

1.2 Fundamentals of problem solving

Solutions to programming problems are formulated as so-called algorithms. An
algorithm is a well-defined procedure, consisting of a number of instructions that are
executed in turn in order to solve the given problem.

Normally, an algorithm has certain inputs; for each input, the algorithm should
compute an output which is related to the input by a certain so-called input-output

relation. Formulating an algorithm makes problem-solving decidedly harder, because
it is necessary to formulate very clearly and precisely the procedure for solving the
problem. The more general the problem, the harder it gets. The advantage, however,
is a much greater understanding of the solution, for the process of formulating an
algorithm demands a full understanding of why the algorithm is correct.

The design and analysis of an algorithm for a particular problem involves several
steps, and a good computer scientist must follow them in order to write efficient
algorithms. What are those steps? Understanding the problem; deciding on a design
technique and data structures; designing the algorithm; proving the correctness of the
algorithm; analyzing the algorithm; and then refining the algorithm, if required.

Note that in the design of algorithms we are interested not only in getting a solution
to a problem, but in getting it within a time bound. So an algorithm that has not been
analyzed is of little use.

1.2.1 Computational Devices

It is worth to know some details about the various computational devices, as you
cannot expect that the algorithm you have developed will run on all sort of comput-
ers. One must know Flynn’s classification of these machines, in order to get good
understanding of variety of computational devices.

Systems
├── SISD
├── SIMD
│   ├── Shared Memory Model: EREW, ERCW, CREW, CRCW
│   └── Direct Connection Machines
├── MISD
└── MIMD

Figure 1.1: Flynn's Classification


Based on the number of instruction streams and the number of data streams, a system
can first be classified as Single Instruction Single Data (SISD), Single Instruction
Multiple Data (SIMD), Multiple Instruction Single Data (MISD) or Multiple Instruction
Multiple Data (MIMD). SISD machines are simply the sequential machines; SIMD machines
are the class of parallel computers, while MIMD machines fall into the category of
distributed computers. The class MISD, however, does not exist in practice, as it is
of no use. So one who writes a parallel algorithm must focus on SIMD machines, a
sequential algorithm on SISD machines, and a distributed algorithm on MIMD machines.
These classes are defined by the way instructions are executed over data.

Now let us look further at the SIMD machines. These architectures are classified
under two categories, viz. the shared memory model and direct connection machines.
In the shared memory model there is a large common global shared memory and a small
local memory for each processor, so data exchange on these machines happens only
through the shared memory. In direct connection machines there is no common memory,
but the local memory is somewhat larger; here data exchange takes place only through
the connections available between the processors, and depending on the type of
connection these machines can be viewed as a tree, a mesh, etc. Coming back to the
shared memory model: there are several processors, and each of them can perform read
and write operations on the shared memory. Based on these read and write operations,
the shared memory model can further be classified into four categories: Exclusive
Read Exclusive Write (EREW), Exclusive Read Concurrent Write (ERCW), Concurrent Read
Exclusive Write (CREW) and Concurrent Read Concurrent Write (CRCW). Of these, the
ERCW model does not exist.

In this course, however, we focus on sequential algorithms, and various design
techniques and their analysis are dealt with.

1.2.2 Selecting Design Techniques

For the purpose of designing an algorithm, several design techniques are presently
known. One cannot, however, expect that a problem can be solved using every design
technique; this is due purely to the nature of the problem. Some problems are
polynomially solvable, while others are not (more detail in part V). For polynomially
solvable problems, one can devise a polynomial-time algorithm. There are problems
that admit only an approximate solution, while others admit an exact solution. For
certain problems the solution must be optimized, while for others it need not be. So
the choice of a particular design technique is multi-fold, and you may appreciate
this better after studying the various design techniques. The design techniques we
are going to see in this course are divide & conquer, greedy, dynamic programming,
backtracking and branch & bound. At the end, some useful tips on designing
approximation algorithms are also given.

1.2.3 Choice of Data Structures

Broadly speaking, data structures are the core component of an efficient algorithm.
You might have seen some small algorithms in which no major data structures are used;
however, this is not always the case. The majority of problems can be solved
efficiently by choosing proper data structures, and a good choice of data structures
gives better time and space efficiency. We are, however, not going into more detail
on data structures, as you have already studied a course on them.

1.2.4 Algorithm Representation

There is no specific rule for representing an algorithm; even a natural language is
sufficient, as long as the representation satisfies the properties of an algorithm.
Some prefer to write algorithms in a form that resembles a programming language,
while others use a pseudocode representation, and some use a graphical representation
such as a flow chart. How the algorithm is represented is immaterial; what matters is
what it represents.

1.2.5 Algorithm Correctness

One must guarantee that the designed algorithm works correctly. Since execution-based
testing is not possible at the design stage, correctness must be established by means
of a mathematical proof. You might have studied various proof techniques in your
discrete mathematics course. What does correctness mean? The algorithm must yield the
required result for every legitimate input within a finite amount of time. For
instance, if we have an algorithm for finding the shortest path between two vertices
of a graph, then it must generate exactly the shortest path between any pair of
vertices, for any graph. Note that in order to prove an algorithm incorrect, you need
just one instance of input for which the algorithm fails. If the algorithm is found
to be incorrect, you need to redesign it. The notion of correctness for approximation
algorithms is less straightforward than it is for exact algorithms; for approximation
algorithms, we are interested in a bound on the error produced by the algorithm.

1.2.6 Algorithm Analysis

Space and time complexity are the two factors we analyze. The time complexity
indicates how fast the algorithm runs, while the space complexity indicates how much
extra memory the algorithm needs. The ultimate aim of algorithm design is to reduce
both the time and the space complexity. However, such reduction is possible only up
to a certain level; beyond that, reducing the time increases the space, and reducing
the space increases the time. This is what is called the time-space trade-off. Note
that time and space are always expressed in terms of the size of the input; for
instance, to sort n numbers, the size of the input is n.

Have you understood?


1. What is an algorithm?
2. How to represent the algorithm?
3. What are the various computing models?
4. What do you mean by algorithm analysis?
5. How do you prove the correctness?


1.3 Common Problem Types


Though there is no limit on the kinds of problems that can be solved using computers,
researchers are attracted to some specific types of problems such as sorting,
searching, string processing, graph problems, combinatorial problems, geometric
problems and numeric problems. These are only a few; several other problems are also
important. For each of these problem types, several algorithms already exist.
Research focuses not only on devising algorithms for new problems, but also on
improving existing algorithms by bettering their time and space efficiencies. Each
type of problem has its own nature, and hence specific data structures exist for it.
For instance, geometric problems use data structures such as the segment tree, the
k-d tree, etc., while graph-theoretic problems use data structures such as the
adjacency matrix, the adjacency list, etc.

I hope it is not necessary to spell out what is being solved in each problem type, as
the names themselves clearly state that.

1.4 Asymptotic Notations


Three standard notations are commonly used in algorithm analysis. They are O,Ω
and Θ, which are respectively named as big oh (order), big omega and theta. Let us
see the detailed definitions of these notations now.

Let f and g be two functions defined from the set of natural numbers to the set of
non-negative real numbers; that is, $f, g : \mathbb{N} \to \mathbb{R}_{\ge 0}$. It is
said that $f(n) = O(g(n))$ if there exist two positive constants $c \in \mathbb{R}$
and $n_0 \in \mathbb{N}$ such that $f(n) \le c\,g(n)$ for all $n \ge n_0$.

Here, the function f(n) is bounded above by a constant multiple of g(n), so this
definition gives an upper bound for f(n). Similarly, a lower bound is defined with
the notation Ω:

Let f and g be two functions defined from the set of natural numbers to the set of
non-negative real numbers; that is, $f, g : \mathbb{N} \to \mathbb{R}_{\ge 0}$. It is
said that $f(n) = \Omega(g(n))$ if there exist two positive constants $c \in \mathbb{R}$
and $n_0 \in \mathbb{N}$ such that $f(n) \ge c\,g(n)$ for all $n \ge n_0$.

Sometimes we need a notation for the exact order of growth, in which the function
f(n) is bounded both above and below by constant multiples of another function g(n).
Θ is such a notation, defined mathematically as follows.

Let f and g be two functions defined from the set of natural numbers to the set of
non-negative real numbers; that is, $f, g : \mathbb{N} \to \mathbb{R}_{\ge 0}$. It is
said that $f(n) = \Theta(g(n))$ if there exist three positive constants
$c_1, c_2 \in \mathbb{R}$ and $n_0 \in \mathbb{N}$ such that
$c_1 g(n) \le f(n) \le c_2 g(n)$ for all $n \ge n_0$.

The definitions of O, Ω and Θ require that the functions involved be non-negative for
all sufficiently large values of n, a condition we describe as asymptotic
non-negativity. Note that both f(n) and g(n) above are asymptotically non-negative.

The condition ∀n ≥ n₀ appears in all three definitions given above. Is it really
needed? Yes, of course. For example, cn/log n, where c is a constant, is well defined
only for n ≥ 2 (cn/log n is indeterminate for n = 0, 1), yet it can still be written
as O(n/log n).

Now, let us see some useful properties of the big oh (O) notation.

1. $O(f(n)) + O(g(n)) = O(\max\{f(n), g(n)\})$

Proof: Assume $f(n) \le g(n)$, so that $\max\{f(n), g(n)\} = g(n)$. The left-hand
side is then at most

$$c_1 f(n) + c_2 g(n) \le c_1 g(n) + c_2 g(n) = (c_1 + c_2)\,g(n) = O(g(n))$$

Therefore, the right-hand side is $O(\max\{f(n), g(n)\})$. □

2. $f(n) = O(g(n))$ and $g(n) \le h(n)$ implies $f(n) = O(h(n))$.
This statement shows the relaxation permitted on the tightness of the asymptotic
value in the O notation.


3. Any function is an order of itself; that is, $f(n) = O(f(n))$.
The proof follows trivially from the fact that $f(n) \le 1 \times f(n)$.
4. Any constant value is equivalent to $O(1)$; that is, $C = O(1)$, where $C$ is a
constant.

Be careful when applying these properties: they hold for a fixed number of functions
of the input size, not for a sum whose number of terms itself grows with n. For
instance, consider the summation 1 + 2 + · · · + n. Misapplying the property above,
we would get

$$1 + 2 + \cdots + n = O(\max\{1, 2, \ldots, n\}) = O(n)$$

which is wrong. Actually, the asymptotic value of the sum is $O(n^2)$, obtained as
follows:

$$1 + 2 + \cdots + n = \frac{n(n+1)}{2} = O(\max\{n^2, n\}) = O(n^2)$$

There are some more useful properties.

1. If $\lim_{n\to\infty} f(n)/g(n) \in \mathbb{R}^{>0}$, then $f(n) \in \Theta(g(n))$.

2. If $\lim_{n\to\infty} f(n)/g(n) = 0$, then $f(n) \in O(g(n))$ but
$f(n) \notin \Theta(g(n))$. That is, $O(f(n)) \subset O(g(n))$, which also implies
$f(n) \in O(g(n))$ and $g(n) \notin O(f(n))$.

3. If $\lim_{n\to\infty} f(n)/g(n) = +\infty$, then $f(n) \in \Omega(g(n))$ but
$f(n) \notin \Theta(g(n))$. That is, $O(f(n)) \supset O(g(n))$.

Sometimes, L'Hôpital's rule is useful for obtaining the limit value. The rule states
that

$$\lim_{n\to\infty} \frac{f(n)}{g(n)} = \lim_{n\to\infty} \frac{f'(n)}{g'(n)}$$

where $f'$ and $g'$ are the first derivatives of $f$ and $g$ respectively.

Problem: 1.1 Prove that $\log n \in O(\sqrt{n})$ but $\sqrt{n} \notin O(\log n)$.


Solution: Using L'Hôpital's rule,

$$\lim_{n\to\infty} \frac{\log n}{\sqrt{n}}
= \lim_{n\to\infty} \frac{1/n}{\frac{1}{2}n^{-1/2}}
= \lim_{n\to\infty} \frac{2\sqrt{n}}{n}
= \lim_{n\to\infty} \frac{2}{\sqrt{n}} = 0$$

Therefore, by the second limit property above, $\log n \in O(\sqrt{n})$ but
$\sqrt{n} \notin O(\log n)$. □
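As a quick numerical sanity check of this limit (our own Python illustration, not
part of the original text), one can tabulate the ratio log n/√n for growing n and
watch it shrink towards zero:

import math

# The ratio log(n)/sqrt(n) should approach 0 as n grows,
# illustrating that log n grows strictly slower than sqrt(n).
for k in range(1, 7):
    n = 10 ** k
    print(n, math.log(n) / math.sqrt(n))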

Have you understood?


1. How do you define various asymptotic notations?
2. What are the properties satisfied by O notation?

1.5 Linear Search - An Example for Determining Algorithm Efficiency
Algorithms are usually analyzed to obtain best-case, worst-case and average-case
asymptotic values. Each problem is defined over a certain domain, so when we carry
out the analysis, the value obtained must hold for all instances of the domain.

Now, we see how an algorithm is analyzed in various cases, viz., best case, worst-case
and average-case.

Let $D_n$ be the domain of a problem, where n is the size of the input. Let
$I \in D_n$ be an instance of the problem taken from the domain $D_n$, and let T(I)
be the computation time of the algorithm for the instance $I \in D_n$.

Best-case analysis: This gives the minimum computation time of the algorithm
with respect to all instances from the respective domain. Mathematically, it can be


Figure 1.2: A sample array of n elements: a at location 1, b at location 2, x at
some location i, and k at location n.

stated as

$$B(n) = \min\{T(I) \mid I \in D_n\} \tag{1.1}$$

Worst-case analysis: This gives the maximum computation time of the algorithm
with respect to all instances from the respective domain. Mathematically,

$$W(n) = \max\{T(I) \mid I \in D_n\} \tag{1.2}$$

Average-case analysis: Mathematically, the average-case value can be stated as

$$A(n) = \sum_{I \in D_n} p(I)\,T(I) \tag{1.3}$$

where p(I) is the probability of occurrence of the instance I.

As an example, we take an array of some elements. Here, the array is to be searched


for the existence of an element x. If the element x is in the array, the corresponding
location of the array should be returned as an output. Notice that the element may
or may not be in the array. If the element is in the array, it may be in any location,
that is, in the first location, second location, anywhere in the middle or at the end
of the array. To find the location of the element x, we use the following algorithm,
which is said to be the linear search algorithm.

Algorithm 1.1: Linear Search(A[1 · · · n], x)


begin
for i ← 1 to n do
if A[i] = x then Return i;
end
Return “Element does not exist”;
end
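The pseudocode above translates directly into Python; the sketch below (our own, with
a hypothetical function name and Python's 0-based indexing) is a minimal runnable
version:

def linear_search(a, x):
    # Return the index of x in list a, or None if x is absent.
    for i, item in enumerate(a):
        if item == x:          # the basic operation counted in the analysis
            return i
    return None                # "Element does not exist"

print(linear_search([7, 3, 9, 1], 9))   # 2 (found at index 2)
print(linear_search([7, 3, 9, 1], 4))   # None (element does not exist)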


Table 1.1: Number of comparisons required to find the element at various locations

Location of the element | Number of comparisons required
1                       | 1
2                       | 2
3                       | 3
⋮                       | ⋮
n                       | n
not in the array        | n

The searching process done here is sequential as the searching continues one location
at a time, and in linear order starting from the first location to the end of the array,
and hence the name linear search. Also, the algorithm terminates as soon as it finds
the element or after searching the whole array, if it fails to find the element in the
array. The return value for the successful search is the location of the element x.

In the above algorithm, the comparison is made between the array element and the
element which is to be searched for. This is a major elementary operation and is
used for the analysis. The table 1.1 provides the information about the number of
comparisons required to find the element present in various locations of the array. By
equation 1.1, we say that,

$$B(n) = \min\{1, 2, \ldots, n\} = 1 = \Theta(1)$$

That is, the best case arises when the element being searched for appears in the
first location (that is, at location 1). Similarly, by equation 1.2,

$$W(n) = \max\{1, 2, \ldots, n\} = n = O(n)$$

That is the worst case arises when the element could either be in the last location or
not be in the array.

Now, let us see the average-case analysis. Let k be the probability of x being in
the array, and let $I_i$ (for 0 ≤ i ≤ n − 1) denote the instance in which x is found
after i + 1 comparisons, with $I_n$ the instance in which x is absent. Assuming all
positions are equally likely when x is present, the probabilities are

$$p(I_i) = k/n \;\;\text{for } 0 \le i \le n-1, \qquad p(I_n) = 1 - k \tag{1.4}$$

Here $p(I_n)$ in equation 1.4 is the probability of x not being in the array. Now,

$$A(n) = \sum_{i=0}^{n} p(I_i)\,T(I_i)
= \frac{k}{n}\sum_{i=0}^{n-1}(i+1) + (1-k)\,n
= \frac{k}{n}\cdot\frac{n(n+1)}{2} + (1-k)\,n
= \frac{k(n+1)}{2} + (1-k)\,n$$

Suppose x is certain to be in the array; then k = 1 and

$$A(n) = (n+1)/2 = O(n)$$

If x is equally likely to be present or absent, then k = 1/2, which gives

$$A(n) = \frac{n+1}{4} + \frac{n}{2} = \frac{3}{4}n + \frac{1}{4} = O(n)$$

Table 1.2 summarizes the above discussion.


Table 1.2: Computation time for linear searching

Algorithm Best-case Worst-case Average-case


Linear Search Θ(1) O(n) O(n)
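The average-case figure can also be checked empirically. The following simulation is
a sketch of ours (not part of the original text) that counts comparisons over many
random trials with k = 1/2 and compares the observed mean with (3/4)n + 1/4:

import random

def comparisons(a, x):
    # Number of comparisons linear search makes on list a for key x.
    for i, item in enumerate(a):
        if item == x:
            return i + 1
    return len(a)

n, trials, total = 100, 100_000, 0
for _ in range(trials):
    a = list(range(n))
    random.shuffle(a)
    # With probability 1/2 search for a present key, else an absent one.
    x = random.choice(a) if random.random() < 0.5 else -1
    total += comparisons(a, x)

print(total / trials)     # close to 75.25
print(0.75 * n + 0.25)    # 75.25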

1.6 Mathematical Analysis - Some Examples


This section gives an idea of how worst-case complexities can be found for various
problems. For clarity, the algorithms are categorized into nonrecursive and recursive
algorithms.


1.6.1 Nonrecursive Algorithms

Example: 1.1 Finding the largest element in an array.

Algorithm

The formal description is given in algorithm 1.2.

Algorithm 1.2: MaxArray(A[1 · · · n])


begin
maxval ← A[1];
for i ← 2 to n do
if A[i] > maxval then maxval ← A[i];
end
Return maxval;
end

Analysis

The size of the input is n, as there are n elements in the array. Here the basic
operation is the comparison, which is done inside the loop, so it is required to find
how many comparisons are made. As the loop iterates n − 1 times, the time complexity
of algorithm 1.2 is

$$\sum_{i=1}^{n-1} 1 = n - 1 = \Theta(n)$$
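A runnable Python version of algorithm 1.2 (our own sketch), with a counter
confirming that exactly n − 1 comparisons are performed:

def max_array(a):
    # Return the largest element of a non-empty list and the comparison count.
    maxval, count = a[0], 0
    for item in a[1:]:
        count += 1            # one comparison per remaining element
        if item > maxval:
            maxval = item
    return maxval, count

print(max_array([3, 9, 4, 7, 1]))   # (9, 4): n - 1 = 4 comparisons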


Example: 1.2 Check whether all the elements in a given array are distinct.

Algorithm

The formal description is given in algorithm 1.3.

Algorithm 1.3: Unique(A[1 · · · n])


begin
for i ← 1 to n − 1 do
for j ← i + 1 to n do
if A[i] = A[j] then Return “false”;
end
end
Return “true”;
end

Analysis

Here too the size of the input is n, as there are n elements in the array, and the
basic operation is once again the comparison. This time, however, the comparison sits
inside two nested loops: the outer loop iterates n − 1 times and, for a given i, the
inner one iterates n − i times. Therefore, the time complexity of algorithm 1.3 is

$$\sum_{i=1}^{n-1} \sum_{j=i+1}^{n} 1 = \sum_{i=1}^{n-1} (n - i)
= (n-1) + (n-2) + \cdots + 1 = \frac{(n-1)n}{2} = \Theta(n^2)$$

Students are advised to note the usage of Θ and O: wherever the exact complexity is
known we use Θ, and wherever only an upper bound is known we use O.
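A Python rendering of algorithm 1.3 (a sketch of ours) makes the nested-loop
structure, and hence the Θ(n²) comparison count, explicit:

def unique(a):
    # True iff all elements of a are distinct, by pairwise comparison.
    n = len(a)
    for i in range(n - 1):
        for j in range(i + 1, n):
            if a[i] == a[j]:
                return False
    return True

print(unique([1, 2, 3]), unique([1, 2, 1]))   # True False

In practice the same check can be done in expected Θ(n) time with hashing, e.g.
len(set(a)) == len(a); the quadratic version above simply mirrors the analyzed
algorithm.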

Example: 1.3 Given two n × n matrices, find their product.


Algorithm

The formal description is given in algorithm 1.4.

Algorithm 1.4: MatrixMul(A, B)


begin
for i ← 1 to n do
for j ← 1 to n do
C[i, j] ← 0;
for k ← 1 to n do
C[i, j] ← C[i, j] + A[i, k] ∗ B[k, j];
end
end
end
Return C;
end

Analysis

For this problem, the size of the input is n, as we consider two matrices of order
n × n. Note that the size of the input is not the number of input values; it is a
parameter that determines the number of inputs. What is the basic operation? Addition
and multiplication, which occur inside the innermost loop. As there are three nested
loops, each iterating n times, the complexity of algorithm 1.4 is

$$\sum_{i=1}^{n} \sum_{j=1}^{n} \sum_{k=1}^{n} 1
= \sum_{i=1}^{n} \sum_{j=1}^{n} n
= \sum_{i=1}^{n} n^2 = n^3 = \Theta(n^3)$$
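The triple loop of algorithm 1.4 carries over almost verbatim to Python (our own
sketch, shown on small illustrative matrices):

def matrix_mul(A, B):
    # Multiply two n-by-n matrices given as lists of lists.
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):        # n^3 scalar multiplications in total
                C[i][j] += A[i][k] * B[k][j]
    return C

print(matrix_mul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))   # [[19, 22], [43, 50]]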


Example: 1.4 Given a positive decimal integer, find the number of binary digits in
its binary representation.

Algorithm

The formal description is given in algorithm 1.5.

Algorithm 1.5: Binary(n)


begin
count ← 1;
while n > 1 do
count ← count + 1;
n ← ⌊n/2⌋;
end
Return count;
end

Analysis

Note here that the frequently executed instruction (comparison) is not inside the
loop, rather it determines whether the loop’s body to be executed. Since the number
of times the comparison will be executed is larger than the number of repetitions of
the loop’s body by exactly 1, the choice is not that important. The important thing
here is the loop’s variable generally takes only a few values between its lower and
upper bounds, therefore, we have to use alternate way of computing the number of
times the body of the loop is executed. Since the value of n is about halved on each
iteration, the answer should be log n. That is, the number of times the value of n is
compared is log n + 1, and therefore the complexity of the algorithm 1.5 is Θ(log n).
Another important factor to consider here is the size of the input, which is n. Note
that we are giving only one positive integer n as an input, however the size of the
input is n, As value of n varies, the complexity also varies as related to this n, and
hence the size of the input is n.
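Algorithm 1.5 in Python (a sketch of ours); Python's built-in int.bit_length()
provides an independent check of the count:

def binary_digits(n):
    # Count the bits in the binary representation of n >= 1.
    count = 1
    while n > 1:       # executed about log2(n) times, since n is halved
        count += 1
        n //= 2
    return count

for n in (1, 2, 5, 16, 1000):
    print(n, binary_digits(n), n.bit_length())   # the two counts agree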


1.6.2 Recursive Algorithms

Example: 1.5 Given a positive integer n, find the factorial of n.

Algorithm

As n! = n(n − 1) · · · 1, n! can be written as n! = n ∗ (n − 1)!. Also by definition 0! = 1.


The formal description is given in algorithm 1.6.

Algorithm 1.6: Factorial(n)


begin
if n = 0 then Return 1;
else Return n ∗ Factorial(n − 1);
end

Analysis

Let T(n) be the time taken for executing algorithm 1.6. Then

$$T(n) = T(n-1) + 1$$

Here T(n − 1) accounts for computing (n − 1)!, and the 1 for the multiplication.
Equations of this type are called recurrence equations. By applying the value of T
recursively, we get

$$T(n) = T(n-1) + 1 = T(n-2) + 2 = \cdots = T(0) + n = \Theta(n)$$
Note that this recurrence equation is simple; we could solve it by recursively
substituting the value. However, such an approach is not always possible when the
recurrence equation is more complex. You can refer to the book "Object Oriented Data
Structures" by K.S.Easwarakumar to learn how to solve such complex recurrence
equations.
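A direct Python transcription of algorithm 1.6 (our own sketch); the recursion
unwinds exactly n times, matching T(n) = Θ(n):

import math

def factorial(n):
    # Recursive factorial; one multiplication per level, so T(n) = T(n-1) + 1.
    if n == 0:
        return 1
    return n * factorial(n - 1)

print(factorial(10), math.factorial(10))   # 3628800 3628800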


Example: 1.6 Tower of Hanoi: You have n disks of different sizes and three pegs.
Initially, all the disks are on the first peg in order of size, the largest on the bottom
and the smallest on the top. The goal is to move all the disks to the third peg using
the second one as an auxiliary, if necessary. You are permitted to move only one disk
at a time, and it is forbidden to place a larger disk on the top of a smaller one.

Algorithm

Here, we assume that initially all n disks are placed on peg A, peg B is the
auxiliary and peg C is the target. The formal description is given in algorithm 1.7.

Algorithm 1.7: Hanoi(A, B, C, n)


begin
if n = 1 then Move disk from peg A to Peg C;
else
Hanoi(A, C, B, n − 1);
Move disk from peg A to Peg C;
Hanoi(B, A, C, n − 1);
end
end

In algorithm 1.7, for n > 1, we first move the top n − 1 disks recursively from peg A
to peg B, keeping C as auxiliary; then the largest disk is moved from peg A to peg C;
finally, the n − 1 disks are moved recursively from peg B to peg C, keeping peg A as
auxiliary.

Analysis

Let T(n) be the time taken by algorithm 1.7, measured in disk moves. Then T(n) can be
recursively defined as

$$T(n) = 2T(n-1) + 1, \qquad T(1) = 1$$

This implies

$$T(n) = 2^{n-1} + 2^{n-2} + \cdots + 2 + 1 = 2^n - 1 = \Theta(2^n)$$
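Algorithm 1.7 in Python (a sketch of ours) that records every move; the move count
for n disks comes out as 2^n − 1, as derived above:

def hanoi(n, src="A", aux="B", dst="C", moves=None):
    # Move n disks from src to dst using aux, recording each move.
    if moves is None:
        moves = []
    if n == 1:
        moves.append((src, dst))
    else:
        hanoi(n - 1, src, dst, aux, moves)   # n-1 disks: src -> aux
        moves.append((src, dst))             # largest disk: src -> dst
        hanoi(n - 1, aux, src, dst, moves)   # n-1 disks: aux -> dst
    return moves

print(len(hanoi(4)))   # 15, i.e. 2**4 - 1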

Example: 1.7 Find the number of binary digits in a decimal number, recursively.

Algorithm

As a last example, let us see a recursive algorithm for the problem we have considered
already, that is determining the number of binary digits of a given decimal number.
The formal description is given in algorithm 1.8.

Algorithm 1.8: BinRec(n)


begin
if n = 1 then Return 1;
else Return BinRec(⌊n/2⌋) + 1;
end
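Algorithm 1.8 in Python (a sketch of ours):

def bin_rec(n):
    # Number of binary digits of n >= 1: T(n) = T(floor(n/2)) + 1.
    if n == 1:
        return 1
    return bin_rec(n // 2) + 1

print(bin_rec(16), bin_rec(1000))   # 5 10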

Analysis

Let T(n) be the time taken by algorithm 1.8. Then T(n) is recursively defined as

$$T(n) = T(\lfloor n/2 \rfloor) + 1, \qquad T(1) = 1$$

The right-hand side of the recurrence contains T(⌊n/2⌋), so direct substitution of
its value, as we have done in the previous examples, does not readily simplify the
equation. So we first assume that n is a power of 2, say n = 2^k. Then the equation
becomes

$$T(2^k) = T(2^{k-1}) + 1$$

By recursive substitution, we get

$$T(2^k) = T(2^{k-2}) + 2 = T(2^{k-3}) + 3 = \cdots = T(1) + k = 1 + k$$

Remember that the size of the input is stated in terms of n, not k, so we must
resubstitute k = log₂ n. Hence we get

$$T(n) = 1 + \log n = \Theta(\log n), \quad \text{when } n \text{ is a power of } 2$$

Note that we have derived this complexity only for n a power of 2. This motivates
conditional asymptotic notation, defined below.

Let $f, g : \mathbb{N} \to \mathbb{R}_{\ge 0}$. It is said that
$f(n) = O(g(n) \mid A(n))$, read as "$g(n)$ when $A(n)$", if there exist two positive
constants $c \in \mathbb{R}$ and $n_0 \in \mathbb{N}$ such that
$A(n) \Rightarrow [f(n) \le c\,g(n)]$ for all $n \ge n_0$.

Similar definitions can also be stated for the Ω and Θ notations. Just before the
above definition, we saw that
$T(n) = T(\lfloor n/2 \rfloor) + 1 = \Theta(\log n \mid n \text{ is a power of } 2)$,
which is a conditional asymptotic value.

However, we cannot assume our input always satisfies this condition. So what can be
done? We have to prove the bound for all values of n. How? Refer to the following
theorem.

Theorem: 1.1 Let p ≥ 2 be an integer and let
$f, g : \mathbb{N} \to \mathbb{R}_{\ge 0}$. Also, let f be an eventually
non-decreasing function and g be a p-smooth function. If
$f(n) \in O(g(n) \mid n \text{ is a power of } p)$, then $f(n) \in O(g(n))$.

Proof: Omitted. □

Two terminologies are used in this theorem, one is eventually non-decreasing and the
other is p-smooth. What are these meant for? Just read further.


A function $f : \mathbb{N} \to \mathbb{R}_{\ge 0}$ is eventually non-decreasing if
there exists $n_0 \in \mathbb{N}$ such that $f(n) \le f(n+1)$ for all $n \ge n_0$.

Let p ≥ 2 be an integer and let f be an eventually non-decreasing function. f is said
to be p-smooth if $f(pn) \in O(f(n))$.

Now, let us take the example given above and apply the theorem to prove that
T(n) = O(log n) for all n. To do so, we have to prove that T(n) is eventually
non-decreasing and that log n is 2-smooth.

Claim(i): T (n) is eventually non-decreasing.

Using mathematical induction, the proof is as follows.

$$T(1) = 1 \le 1 + 1 = T(2)$$

Assume for all m < n that T(m) ≤ T(m + 1). In particular,

$$T(\lfloor n/2 \rfloor) + 1 \le T(\lfloor (n+1)/2 \rfloor) + 1$$

Now,

$$T(n) = T(\lfloor n/2 \rfloor) + 1 \le T(\lfloor (n+1)/2 \rfloor) + 1 = T(n+1)$$

Therefore, T is eventually non-decreasing.

Claim (ii): log n is 2-smooth.

$$\log 2n = \log 2 + \log n = O(\log n)$$

which implies that log n is 2-smooth.

Therefore, T (n) = Θ(log n).


Have you understood?


1. How are best-, worst- and average-case complexities defined?
2. How is time analysis done for various problems?

Exercises
1. Find the value of T (n) for
(a) T(n) = 2T(⌊n/2⌋) + n.
(b) T(n) = 2T(⌈n/2⌉) + n + 34.
(c) T(n) = 3T(n/3) + n/3.
(d) T(n) = 2T(n/2) + n/log n.
(e) T(n) = √n · T(√n) + n.
(f) T(n) = 2T(n/4) + √n.
(g) T(n) = T(√n) + 1.
2. Suppose the worst-case running time of an algorithm is O(f (n)) and its best-case
running time is Ω(f (n)). Then, can you say the running time of the algorithm
is Θ(f (n))? Justify your answer.
3. Let R be a relation. R is transitive if aRb and bRc imply aRc; R is reflexive if
aRa; R is symmetric if aRb implies bRa. Determine which of the notations O, o, Ω, ω
and Θ are transitive, which are reflexive and which are symmetric.
4. Show that for any real constants a and b, where b > 0,

(n + a)b = Θ(nb )

5. Write an algorithm to find the number of occurrences of each character in a given
string. Calculate the time and space requirements of your algorithm.

Summary
An algorithm is a sequence of unambiguous instructions for solving a problem in
a finite amount of time. An algorithm can be specified in several forms, including a

natural language or a pseudocode. A good algorithm is usually a result of repeated


efforts and rework. Analyzing the algorithm is done for time efficiency and also
for space efficiency. The efficiency must be computed based on the size of inputs.
Algorithms can be analyzed for the best case, worst case or for the average case.


Unit II

Divide & Conquer and Greedy Method


Structure of Unit II
1 Divide and Conquer

1.1 Methodology

1.2 Merge Sort

1.3 Quick Sort

1.4 Binary Search

1.5 Binary Tree Traversal

1.6 Multiplication of Large Integers

1.6.1 Strassen’s Matrix Multiplication

2 Greedy Method

2.1 General Approach

2.2 Minimum Cost Spanning Trees

2.2.1 Kruskal’s Algorithm

2.2.2 Prim’s Algorithm

2.3 Shortest Path Problems

2.3.1 Single Source Shortest Paths


Learning Objectives
• Methodology of Divide & Conquer and Greedy approaches.

• Addressing solution to various problems using these methods.


1. Divide and Conquer


1.1 Methodology
Certain problems are difficult to solve when the problem size is large. In such
situations, the problem can be simplified algorithmically by dividing it into smaller
ones, since finding solutions to problems of smaller size is much simpler than
solving the bigger problem directly. The solutions to the smaller problems are not by
themselves the solution to the original problem; rather, they must be combined
together correctly to obtain it.

So the general approach of the divide and conquer method is to split the input into
two subproblems of the same kind as the original one. These subproblems are divided
further in turn, until we reach problem instances that are comfortable to solve
directly or easily. Moreover, due to the nature of the method, it is always possible
to write recursive algorithms. Remember that recursive algorithms may be good for
readability and simplicity, but for real efficiency one may think of converting the
recursive algorithm into a non-recursive one, using known standard approaches.

A good example of the divide and conquer methodology: suppose we wish to select the
fastest 100 m runner from a college, and this must be determined with only four
tracks and no clock available to measure running times. Is it possible to have all
the students run at once? Obviously not. So the selection authorities divide the
students into groups of four and make each group run; this is called the divide
phase. Now, what is the conquer phase? From each race, one can determine the best
runner among those four students. But who is the overall best? That is where we must
conquer: the winners of the different races are asked to run again, and this is
continued until the overall best runner is found.

In the example above we initially divided the problem into several smaller problems,
depending on the size of the input. However, in the course of our study, for the
various problems encountered, we always divide a problem into two subproblems. Those
subproblems are then divided into two subproblems each, and the same is repeated
until we get problem instances of small size.

Now let us see various standard problems that can be solved using the divide and
conquer strategy.

1.2 Merge Sort


Given an array of n numbers, merge sort sorts the numbers in non-decreasing order.
Informally, the given array containing the numbers to be sorted is split into two
halves, say a left half and a right half, each now containing at most ⌈n/2⌉ elements.
Each subarray is then further split into two, so that each of the resulting subarrays
contains at most about n/4 elements. This process continues until each subarray holds
only one element and is therefore trivially sorted. This sequence of operations
together constitutes the divide phase.

For instance, let us take an array of 8 numbers as given below:

99, 76, 45, 32, 90, 47, 82, 23

This example requires three passes in the divide phase, as shown below.

99, 76, 45, 32 | 90, 47, 82, 23 ← Pass 1

99, 76 | 45, 32 | 90, 47 | 82, 23 ← Pass 2

99 | 76 | 45 | 32 | 90 | 47 | 82 | 23 ← Pass 3
After three passes in the above example, each subarray is a sorted sequence, as it
contains only one element. Now the merge phase is required: we take two sorted
subarrays and merge their elements to form another sorted subarray of double the
size, and this process is continued until a single sorted array is obtained. This
sequence of operations is illustrated below.

76, 99 | 32, 45 | 47, 90 | 23, 82 ← Pass 1


32, 45, 76, 99 | 23, 47, 82, 90 ← Pass 2

23, 32, 45, 47, 76, 82, 90, 99 ← Pass 3

Algorithm

Here, it is assumed that an array, say A, is given as input, having l as its lower
index and h as its higher index; that is, the given array is A(l..h) and contains
h − l + 1 elements. For simplicity, the merge portion of the algorithm is described
separately as algorithm 1.2, where B(l..h) is a temporary array and g, i, j, k are
integer variables.

Algorithm 1.1: Mergesort(l,h)


begin
if l < h then
m ← ⌊(l + h)/2⌋;
call Mergesort(l,m);
call Mergesort(m+1,h);
call Merge(l,m,h);
end
end

Analysis

Due to the recursion present in the algorithm, its time complexity T(n), where n is
the size of the input, can be described by the recurrence equation

$$T(n) = \begin{cases} 1 & \text{if } n = 1 \\ 2T(n/2) + n & \text{if } n > 1 \end{cases}$$

which leads to T(n) = O(n log n).

To learn how to solve this recurrence equation, the reader can refer to the book
"Object Oriented Data Structures" by Prof. K.S.Easwarakumar.


Algorithm 1.2: Merge(l,m,h)


begin
g ← l; i ← l; j ← m + 1;
while g ≤ m and j ≤ h do
if A(g) ≤ A(j) then
B(i) ← A(g); g ← g + 1;
else
B(i) ← A(j); j ← j + 1;
end
i ← i + 1;
end
if g > m then
for k ← j to h do
B(i) ← A(k); i ← i + 1;
end
else
for k ← g to m do
B(i) ← A(k); i ← i + 1;
end
end
for k ← l to h do
A(k) ← B(k);
end
end
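The two pseudocode routines combine naturally into a short Python sketch (our own,
using slicing instead of explicit index arithmetic; it returns a new sorted list
rather than sorting in place):

def merge_sort(a):
    # Sort list a in non-decreasing order by divide and conquer.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])     # divide: sort the left half
    right = merge_sort(a[mid:])    # divide: sort the right half
    return merge(left, right)      # conquer: merge two sorted halves

def merge(left, right):
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])           # append whichever half has leftovers
    out.extend(right[j:])
    return out

print(merge_sort([99, 76, 45, 32, 90, 47, 82, 23]))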

1.3 Quick Sort

Quick sort is yet another sorting technique designed using the divide and conquer
strategy. In merge sort, the given array is divided into two equal halves in each
divide step. In quick sort, however, the division is based on a chosen pivot element,
usually the first element of the array. After selecting the pivot element, the quick
sort partitioning algorithm determines where the pivot should be placed in the
overall sorted sequence; that is, the pivot element is placed at the exact position
it is supposed to occupy in the final array. Moreover, all the elements lying to the
left of the placed pivot are smaller than the pivot, and all the elements lying to
its right are larger. Hence two subproblems remain: one for the subarray to the left
of the pivot element, and the other for the subarray to its right. Further processing
is carried out by running the quick sort algorithm on these two subarrays.

For instance, let us consider the following input array.

45, 32, 99, 76, 90, 47, 82, 23

Let us take the first element as the pivot; that is, in this example 45 is the pivot
element. After one pass, the pivot 45 is placed at its exact position, and the array
is thereby partitioned into two subproblems. We will see a little later how the pivot
gets placed; the array after placing the pivot looks as follows.

23, 32, 45, 76, 90, 47, 82, 99

Observe the elements lying to the left and to the right of the pivot element in the
above array. Now let us see how one such pass is carried out to fix the pivot
element.

Let i and j be two pointers, set respectively at the second and the last positions of
the array, marked (i) and (j) below:

45, 32(i), 99, 76, 90, 47, 82, 23(j)

The j pointer moves towards the left and stops once it locates an element smaller
than the pivot, while the i pointer moves towards the right and stops once it locates
an element larger than the pivot. After this move the array configuration becomes

45, 32, 99(i), 76, 90, 47, 82, 23(j)

As j already points to an element smaller than the pivot, the j pointer does not move
in this case. The next step is to interchange the values pointed to by the i and j
pointers, provided the i pointer lies to the left of the j pointer. After this swap
the array configuration becomes

45, 32, 23(i), 76, 90, 47, 82, 99(j)

The i and j pointers are again permitted to move as described above, and the array
configuration becomes

45, 32, 23(j), 76(i), 90, 47, 82, 99

In this case the i pointer lies to the right of the j pointer, so swapping the values
pointed to by i and j does not arise; instead another swap is required, namely
swapping the pivot element with the element pointed to by j. That leads to

23, 32, 45(j), 76(i), 90, 47, 82, 99

and this swap places the pivot element at its correct position.

Algorithm

Now, let us see the algorithm for such a partition of the array. In algorithm 1.3, it
is assumed that the starting index of the array is m and the ending index is j, which
is also used as the moving pointer from right to left.

Algorithm 1.3: Partition(m,j)


begin
v ← A(m);
i ← m;
while true do
repeat i ← i + 1 until A(i) ≥ v;
repeat j ← j − 1 until A(j) ≤ v;
if i < j then call Swap(A(i),A(j));
else Exit
end
call Swap(v,A(j));
end

This partition algorithm fixes only one pivot element; however, every element of the
array must be fixed in its proper place, so repeated calls of algorithm 1.3 are
required, as described in algorithm 1.4. Algorithm 1.4 is assumed to sort the
elements of the array A,

Algorithm 1.4: Quicksort(p,q)


begin
if p < q then
j ← q + 1;
call Partition(p,j);
call Quicksort(p,j-1);
call Quicksort(j+1,q);
end
end

whose starting index is p and ending index is q, lying inside the global array
A(1 · · · n). The call to Partition is understood to leave in j the final position of
the pivot. A(n + 1) is taken to be an element greater than all elements in the array
(that is, it can be assumed to be infinity); this sentinel is included only for
logical consistency in executing the algorithm.
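A compact Python sketch of the partition-and-recurse scheme (our own; it returns the
pivot's final index instead of updating j by reference, and replaces the sentinel
with an explicit bound check):

def quicksort(a, p=0, q=None):
    # In-place quick sort with the first element of each range as pivot.
    if q is None:
        q = len(a) - 1
    if p < q:
        j = partition(a, p, q)     # pivot lands at its final index j
        quicksort(a, p, j - 1)
        quicksort(a, j + 1, q)
    return a

def partition(a, m, h):
    v, i, j = a[m], m + 1, h
    while True:
        while i <= h and a[i] < v:
            i += 1
        while a[j] > v:
            j -= 1
        if i < j:
            a[i], a[j] = a[j], a[i]
            i += 1; j -= 1
        else:
            a[m], a[j] = a[j], a[m]   # place the pivot at position j
            return j

print(quicksort([45, 32, 99, 76, 90, 47, 82, 23]))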


Analysis

The Quicksort function is recursive, as in merge sort, so its time complexity is
stated using a recurrence. Note that in quick sort the subarrays may be partitioned
into arbitrary sizes. The running time of quick sort equals the running time of the
two recursive calls plus the time spent on partitioning. If T(n) is the time taken by
the quick sort algorithm, T(n) can be defined as

$$T(n) = T(p) + T(n - p - 1) + cn, \qquad T(0) = T(1) = 1 \tag{1.1}$$

where p is the number of elements in the first partition.

The worst case arises when the given array is already sorted, so that the pivot
element is the smallest one. Then p is zero and equation 1.1 becomes

$$T(n) = T(n-1) + cn$$

Solving this recurrence gives $(x-1)^3 = 0$ as the characteristic equation;
therefore, T(n) becomes $O(n^2)$ in the worst case.

The best case arises only when the array is partitioned into two equal halves every
time. This simplifies equation 1.1 to

$$T(n) = 2T(n/2) + cn$$

which can be shown to be $O(n \log n)$ by assuming n is a power of 2.

The average-case complexity is determined by assuming that all n! initial orderings
of the array elements are equally likely. Then the probability of the pivot element
ending up at any particular location is 1/n. If the pivot element is placed at
location k, this results in two subarrays of sizes k − 1 and n − k respectively, and
the average time complexity

is stated as

$$T(n) = cn + \frac{1}{n}\sum_{k=1}^{n}\left[T(k-1) + T(n-k)\right]
= cn + \frac{2}{n}\sum_{k=1}^{n} T(k-1) \tag{1.2}$$

where c is a positive constant. This recurrence equation is of a special type and has
to be solved in a different way. Here we remove all but one term of the summation in
equation 1.2. This is accomplished by first replacing n by n − 1 in equation 1.2,
which yields

$$T(n-1) = c(n-1) + \frac{2}{n-1}\sum_{k=1}^{n-1} T(k-1) \tag{1.3}$$

Now, multiplying equation (1.2) by n and equation (1.3) by n − 1, we obtain

$$nT(n) = cn^2 + 2\sum_{k=1}^{n} T(k-1) \tag{1.4}$$

and

$$(n-1)T(n-1) = c(n-1)^2 + 2\sum_{k=1}^{n-1} T(k-1) \tag{1.5}$$

Subtracting equation (1.5) from equation (1.4), we get

$$nT(n) = (n+1)T(n-1) + c(2n-1) \tag{1.6}$$

This recurrence equation differs from those we have seen so far in that its
coefficients are not constants, so it cannot be solved using any of the standard
methods. The method of mathematical induction can instead be used. This approach
shows T(n) to be O(n log n), under the assumption that T(0) ≤ b and T(1) ≤ b for some
constant b.

Let n = 2. Then equation (1.6) becomes

$$2T(2) = 3T(1) + 3c \le 3b + 3c \;\Rightarrow\; T(2) \le \tfrac{3}{2}(b+c) = O(2\log 2)$$

Therefore, the statement is true for the base value 2. For the induction hypothesis,
assume that T(n) ≤ kn log n for 1 ≤ n < m, where k is some constant. We now have to
prove that T(m) ≤ km log m. From equation (1.6), we get

$$T(m) = \frac{m+1}{m}\,T(m-1) + \frac{c(2m-1)}{m}
\le k\,\frac{m+1}{m}\,(m-1)\log(m-1) + \frac{c(2m-1)}{m}
= k\,\frac{m^2-1}{m}\,\log(m-1) + \frac{c(2m-1)}{m}
\le km\log m - \frac{k}{m}\log m + 2c - \frac{c}{m}
= O(m\log m)$$

Therefore, the time complexity of the quick sort algorithm is summarized as

Best Case  | Average Case | Worst Case
O(n log n) | O(n log n)   | O(n²)

1.4 Binary Search


Binary search algorithm is a technique for finding a particular value in a sorted list.
It makes progressively better guesses, and closes in on the sought value, by examining
the value of the list member exactly halfway between what it determined to be an
element too low in the list and one too high in the list. A binary search finds the
median element in a list, compares its value to the one we are searching for, and
determines if it is greater than, less than, or equal to the one we want. A guess that
turns out to be too high becomes the new top of the list, and one too low the new
bottom of the list. The binary search’s next guess is halfway between the new list’s
top and bottom. Pursuing this strategy iteratively, it very quickly prunes the list,
narrows the search, and finds the value.


An example of binary search in action is a simple guessing game in which a player


has to guess a positive integer, between 1 and N , selected by another player, using
only questions answered with yes or no. Supposing N is 16 and the number 11 is
selected, the game might proceed as follows.

Is the number greater than 8? (Yes)


Is the number greater than 12? (No)
Is the number greater than 10? (Yes)
Is the number greater than 11? (No)

Therefore, the number must be 11. At each step, we choose a number right in the
middle of the range of possible values for the number. For example, once we know the
number is greater than 8, but less than or equal to 12, we know to choose a number
in the middle of the range [9, 12] (in this case 10 is optimal).

Similarly, when the given data are arranged in some sorted order, say ascending, a
search for a particular element proceeds by comparing against the middle element;
based on its value, it is always possible to determine which half, left or right,
must contain the sought element, if it is present. Further comparisons keep narrowing
the subarray still to be searched. Sometimes the chosen middle element is itself the
wanted element, in which case the search terminates. Also, if the element has not
been found even after the search range has narrowed to size one, the search must
declare that no such element exists.

Algorithm

The most common application of binary search is to find a specific value in a sorted
list. To cast this in the frame of the guessing game, realize that we are now guessing
the index, or numbered place, of the value in the list. This is useful because, given
the index, other data structures will contain associated information. Suppose a data
structure containing the classic collection of name, address, telephone number and so
forth has been accumulated, and an array is prepared containing the names, numbered
from 1 to n. A query might be: what is the telephone number for a given name X. To


answer this the array would be searched and the index (if any) corresponding to that
name determined, whereupon it would be used to report the associated telephone
number and so forth. Appropriate provision must be made for the name not being in the
list (typically by returning an index value of zero); indeed, the question of
interest might be only whether X is in the list or not.

If the list of names is in sorted order, a binary search will find a given name with far
fewer probes than the simple procedure of probing each name in the list, one after the
other in a Linear search, and the procedure is much simpler than organizing a Hash
table (kind of data structure) though that would be faster still, typically averaging
just over one probe. This applies for a uniform distribution of search requests; but
if it is known that a few items are much more likely to be sought than the majority,
then a linear search over a list ordered so that the most popular items come first
may do better.

The binary search begins by comparing the sought value X to the value in the middle
of the list; because the values are sorted, it is clear whether the sought value would
belong before or after that middle value, and the search then continues through the
correct half in the same way. The formal representation for doing binary search on
an array A with indices in the range 0 to n − 1 (inclusive) is given in algorithm 1.5.

Algorithm 1.5: BinarySearch(A, value)


begin
low ← 0;
high ← n − 1;
while low <= high do
p ← low + ⌊(high − low)/2⌋;
if A[p] > value then high ← p − 1;
else if A[p] < value then low ← p + 1;
else return p;
end
return NotFound;
end


In this algorithm, n is the number of elements in the array and value is the element
we are searching for. Initially, the lower index of the search range is low and the
upper index is high; these working values are altered later in the algorithm based on
the comparisons, so at all times the subarray still to be searched lies between the
low and high indices.
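Algorithm 1.5 translates directly into Python (a sketch of ours, 0-based):

def binary_search(a, value):
    # Iterative binary search on a sorted list; returns an index or None.
    low, high = 0, len(a) - 1
    while low <= high:
        p = low + (high - low) // 2   # middle of the current range
        if a[p] > value:
            high = p - 1
        elif a[p] < value:
            low = p + 1
        else:
            return p
    return None                       # "NotFound"

a = [3, 5, 7, 7, 7, 8, 9]
print(binary_search(a, 8))   # 5
print(binary_search(a, 6))   # None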

Should the sorted array contain multiple elements with equal keys (for instance,
3, 5, 7, 7, 7, 8, 9), then should such a key value be sought (for instance, 7), the
index returned will be that of the first equal element encountered as the spans are
halved; this is not necessarily the first, last, or middle element of the run of
equal keys, but depends on the placement of the values relative to n. That is, for a
given list the same index is returned each time, but if the list is changed by
adding, deleting, or moving elements, a different equal value may then be selected.

Analysis

Binary search is a logarithmic algorithm and executes in O(log n) time. Specifically,
at most ⌊log₂ n⌋ + 1 iterations are needed to return an answer. In most cases it is
considerably faster than a linear search. It can be implemented using recursion or
iteration; we have given a non-recursive version of the algorithm.

1.5 Binary Tree Traversal

A binary tree is an important tree structure used in several applications. It is
characterized by the fact that no node can have more than two children. Unlike
general trees, a binary tree distinguishes its sub-trees as left and right sub-trees.
Moreover, a binary tree may have zero nodes, whereas a tree must have at least one
node. Thus a tree is really a different object from a binary tree.

A binary tree is a finite set of nodes which is either empty or consists of a root
and two disjoint binary trees called the left sub-tree and the right sub-tree. The
left and right sub-trees of a binary tree are themselves binary trees.


Classification of binary trees based on their sub-trees makes the following two trees
different.

    a        a
   /          \
  b            b

The first one has an empty right sub-tree, whereas the second one does not have any
left sub-tree. In the case of general trees, both are the same, as there is no
distinction between the children.

A binary tree can be implemented either using arrays or using pointers; those in need
of the details can refer to any data structures book. As the scope of this material
covers only traversal, let us focus on that, independent of the implementation.
However, we must distinguish the left and right subtrees, and the respective
terminology is given below.

Lchild – the root of the left subtree.
item – the information stored in the node.
Rchild – the root of the right subtree.

Traversal

The most important operation which can be performed on a binary tree is visiting
all nodes exactly once. This is said to be the traversal. Traversal can be used
to find information available in the binary tree in a linear order. Since the tree is
defined recursively, each node and its left and right children should be treated equally.
Moreover, this recursive definition helps us to write a simple recursive procedure for
the traversal.

Binary tree traversal can be carried out in three different ways. They are:

Inorder: Traversing the left sub-tree, visiting the root and finally traversing the right
sub-tree.


Preorder: Visiting the root, traversing the left sub-tree and finally traversing the
right sub-tree.

Postorder: Traversing the left sub-tree, traversing the right sub-tree and finally
visiting the root.

Based on the order of visiting the root node with respect to its children, the traversals
are named as inorder, preorder or postorder.

Algorithms

The three traversals listed above are formally described as in algorithms 1.6, 1.7 and
1.8.

Algorithm 1.6: Preorder(T)


begin
if T is not empty then
Visit the root;
call Preorder(Lchild(T));
call Preorder(Rchild(T));
end
end

Algorithm 1.7: Inorder(T)


begin
if T is not empty then
call Inorder(Lchild(T));
Visit the root;
call Inorder(Rchild(T));
end
end


Algorithm 1.8: Postorder(T)


begin
if T is not empty then
call Postorder(Lchild(T));
call Postorder(Rchild(T));
Visit the root;
end
end

The above traversals are useful for generating the infix, prefix and postfix forms of an
algebraic expression, if the expression is represented as a binary tree. For instance,
the expression

(a + ((b ∗ (c − e))/f ))

can be represented in the form of a binary tree as shown in figure 1.1. The outputs of
inorder, preorder and postorder are infix, prefix and postfix forms respectively. The
traversals and related outputs for the tree are also tabulated.


        +
       / \
      a   /
         / \
        *   f
       / \
      b   -
         / \
        c   e

Figure 1.1: A binary tree for (a + ((b ∗ (c − e))/f ))


Traversal | Output
Inorder   | a + b ∗ c − e/f
Preorder  | + a / ∗ b − c e f
Postorder | a b c e − ∗ f / +
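The three traversals, written in Python over a minimal node class (our own sketch),
reproduce the outputs tabulated above when run on the tree of figure 1.1:

class Node:
    def __init__(self, item, left=None, right=None):
        self.item, self.left, self.right = item, left, right

def inorder(t):
    return inorder(t.left) + [t.item] + inorder(t.right) if t else []

def preorder(t):
    return [t.item] + preorder(t.left) + preorder(t.right) if t else []

def postorder(t):
    return postorder(t.left) + postorder(t.right) + [t.item] if t else []

# The expression tree for (a + ((b * (c - e)) / f))
t = Node('+', Node('a'),
         Node('/',
              Node('*', Node('b'), Node('-', Node('c'), Node('e'))),
              Node('f')))
print(''.join(inorder(t)))    # a+b*c-e/f
print(''.join(preorder(t)))   # +a/*b-cef
print(''.join(postorder(t)))  # abce-*f/+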

Analysis

As all three traversal algorithms visit each and every node exactly once, their
complexity is linear in the number of nodes of the binary tree.

1.6 Multiplication of Large Integers


Why do we treat multiplication of large integers as a problem to be attacked with the
divide and conquer strategy? A computing system is limited in the size of integer it
can hold directly; this depends on its word length. So large integers cannot be held
in a single machine word, and it becomes necessary to divide a number into parts and
operate on the parts.

Think of the way we perform integer multiplication by hand: we take each digit of one
number in turn and multiply it with all the digits of the other number, and after
considering all the digits, we sum the partial values to get the final result. The
divide and conquer strategy follows almost the same scheme but reduces the number of
multiplications. That is, normal multiplication of two n-digit integers requires
O(n²) digit multiplications, whereas the approach considered here requires fewer.

For the purpose of explanation, let us first consider numbers having only
two digits. That is, let a and b be the two numbers, whose digits are a = a1a0
and b = b1b0. This gives a = a1 ∗ 10¹ + a0 ∗ 10⁰ and b = b1 ∗ 10¹ + b0 ∗ 10⁰. So, if
c = a ∗ b, then c becomes

c = c2 ∗ 10² + c1 ∗ 10¹ + c0


where,
c2 = a1 ∗ b1
c0 = a0 ∗ b0
c1 = (a1 + a0) ∗ (b1 + b0) − (c2 + c0)

Note that the number of multiplications is reduced (three instead of four) at the
cost of a slight increase in the number of additions. Let us now apply this strategy to
the multiplication of two n-digit numbers, wherein we assume that a = a1 ∗ 10^(n/2) + a0
and b = b1 ∗ 10^(n/2) + b0. This is possible when n is an even number. If n/2 is also
even, then we can repeat the same for computing the products c2, c1 and c0. So, if n
is a power of 2, we have a recursive algorithm for computing the product of two
n-digit integers, and the recursion stops when n becomes one.

Analysis

How many digit multiplications does this algorithm make? Since multiplication of two
n-digit numbers requires three multiplications of n/2-digit numbers, the recurrence
equation becomes

T(n) = 3T(n/2), n > 1, T(1) = 1

Solving this recurrence equation, we get

T(n) = 3^(log₂ n) = n^(log₂ 3) ≈ n^1.585
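
The following Python sketch implements this divide and conquer multiplication. It assumes base-10 splitting as in the discussion above and, for simplicity, lets Python's arbitrary-precision integers stand in for digit arrays; the function name multiply is ours.

def multiply(a, b):
    if a < 10 or b < 10:                 # one-digit case: recursion stops
        return a * b
    n = max(len(str(a)), len(str(b)))
    half = n // 2
    a1, a0 = divmod(a, 10 ** half)       # a = a1 * 10^half + a0
    b1, b0 = divmod(b, 10 ** half)       # b = b1 * 10^half + b0
    c2 = multiply(a1, b1)
    c0 = multiply(a0, b0)
    c1 = multiply(a1 + a0, b1 + b0) - (c2 + c0)   # the third product
    return c2 * 10 ** (2 * half) + c1 * 10 ** half + c0

print(multiply(1234, 5678))   # 7006652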

1.6.1 Strassen’s Matrix Multiplication

In a similar fashion to what we did for integer multiplication, we can perform
matrix multiplication. To illustrate this, let us first take two 2 × 2 matrices and then
extend the same to two n × n matrices. Note that we use capital letters to represent
the matrices and small letters for their elements.
   
Let A = [ a00  a01 ]  and  B = [ b00  b01 ] ; then for C = A ∗ B,
        [ a10  a11 ]           [ b10  b11 ]

C = [ m1 + m4 − m5 + m7     m3 + m5           ]
    [ m2 + m4               m1 + m3 − m2 + m6 ]


where,
m1 = (a00 + a11) ∗ (b00 + b11)
m2 = (a10 + a11) ∗ b00
m3 = a00 ∗ (b01 − b11)
m4 = a11 ∗ (b10 − b00)
m5 = (a00 + a01) ∗ b11
m6 = (a10 − a00) ∗ (b00 + b01)
m7 = (a01 − a11) ∗ (b10 + b11)

Thus, multiplying two 2 × 2 matrices requires only 7 multiplications, at the cost of
extra additions and subtractions (18 of them, against the 8 multiplications and 4
additions of the brute-force method). Of course, the point of this approach is not
these small numbers; the savings matter for n × n matrices as n grows large.

Now, let us generalize this approach for multiplying two n × n matrices, where n
is a power of 2. If n is not a power of two, the matrices can be padded with rows and
columns of zeros to bring their order up to a power of 2. One can easily extend the
above approach to higher order matrices by viewing the product as follows.
     
[ C00  C01 ]   [ A00  A01 ]   [ B00  B01 ]
[ C10  C11 ] = [ A10  A11 ] ∗ [ B10  B11 ]

Note here that we now consider submatrices rather than scalar elements, so the same
seven-product scheme can be applied at this level too.

Analysis

From the fact described above, it is known that according to Strassen's algorithm,
the multiplication of two n × n matrices leads to the recurrence equation

T(n) = 7T(n/2), n > 1, T(1) = 1

where n is a power of 2. Solving this recurrence equation gives

T(n) = 7^(log₂ n) = n^(log₂ 7) ≈ n^2.807

which is better than the O(n³) multiplications required by the brute-force algorithm.
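
As an illustration, the following Python sketch computes the seven products m1, . . . , m7 for a single 2 × 2 step (with the corrected m3). For n × n matrices these scalar operations would become operations on (n/2) × (n/2) blocks, applied recursively.

def strassen_2x2(A, B):
    # A and B are 2x2 matrices given as nested lists.
    (a00, a01), (a10, a11) = A
    (b00, b01), (b10, b11) = B
    m1 = (a00 + a11) * (b00 + b11)
    m2 = (a10 + a11) * b00
    m3 = a00 * (b01 - b11)
    m4 = a11 * (b10 - b00)
    m5 = (a00 + a01) * b11
    m6 = (a10 - a00) * (b00 + b01)
    m7 = (a01 - a11) * (b10 + b11)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 + m3 - m2 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]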


2. Greedy Method
2.1 General Approach
The greedy method is the most straightforward design technique. Therein, given n
inputs, it is required to find a subset that satisfies some constraints. Any subset that
satisfies these constraints is called a feasible solution. Any feasible solution that either
maximises or minimizes the given objective function is called an optimal solution.

This method suggests that the algorithm can work in stages, and at each stage a
decision is made regarding whether or not a particular input is in an optimal solution.
This is done by considering the inputs in an order determined by some selection
procedure. A general scheme for the greedy approach is given in algorithm 2.1.

Algorithm 2.1: Greedy(A,n)


begin
for i ← 1 to n do
x ← Select(A);
if Feasible(solution,x) then
solution ← U nion(solution, x);
end
end
Return (Solution);
end

In the above algorithm the given n inputs are available in an array A; the Select
procedure selects an input from A and removes that element from the array. The
feasibility of the chosen element is determined by the function Feasible. The Union
function adds the element to the solution set; as it is invoked inside the if
condition, only feasible elements are added to the solution set.


2.2 Minimum Cost Spanning Trees

A spanning tree of a connected graph G is a tree consisting of edges of G that together
connect all the vertices of G. Figure 2.1 shows a graph and two of its spanning trees.

  
[Figure: a graph on five vertices A–E and two of its spanning trees; the drawing did not survive extraction.]


Figure 2.1: A graph and two of its spanning trees

Spanning trees find application in a variety of problems. One such application involves
determining a set of independent cycles in a graph. Note that the inclusion in a
spanning tree T of any edge of G not in T creates exactly one cycle. The edges of G
not in T thus give, together with the edges of T, a set of independent cycles. Problems
which involve finding the independent circuit equations for an electrical network
require cycles of this kind.

Now let us deal with weighted graphs. This means that the edges of the graph
are assigned weights. Our aim is therefore to find a spanning tree T of minimum
cost for a given graph G, which means that the total weight of the edges of the
spanning tree must be minimum compared to all other spanning trees generated
from G. An example of this concept is shown in figure 2.2. In this section, a
discussion will be taken up on two different algorithms, viz. Prim's and Kruskal's, for
finding the minimum cost spanning tree. In both methods the tree is built up in
stages by adding one edge to T at a time. Recall that a spanning tree exists only
for connected graphs; therefore the input to these algorithms is a connected,
weighted graph.


2.2.1 Kruskal’s Algorithm

In Kruskal's algorithm, the edges must first be sorted according to their weights; a
formal description follows after examining the example below.

[Figure: (a) a weighted graph on vertices A–G, and (b) one of its minimum cost spanning trees, of total cost 18; the drawing did not survive extraction, but the edge weights appear in the example below.]

Figure 2.2: A graph and a minimum cost spanning tree

Consider the graph given in figure 2.2(a). The order in which the edges of this graph
are considered for inclusion in the minimum cost spanning tree is (A, B), (A, G), (C, D),
(A, D), (B, C), (A, E), (B, F), (B, G), (D, E), (C, F), (C, G) and (D, G). The costs
(weights) of these edges correspond to the sequence 1, 2, 2, 3, 4, 5, 5, 6, 6, 7, 8 and 9.
The first four edges, i.e. (A, B), (A, G), (C, D) and (A, D), are included one after
another into the minimum cost spanning tree T, as these edges do not create any cycle
on their inclusion. The next edge to be considered is (B, C). The inclusion of this edge
leads to a cycle. Since a cycle is not permissible in a spanning tree, we merely reject
this edge. The edges (A, E) and (B, F) are included next, bringing the total number
of edges added to the spanning tree to 6. The algorithm now terminates. This is due
to the fact that a spanning tree of a graph consisting of n vertices has exactly n − 1
edges. The resultant spanning tree is shown in figure 2.2(b), and its cost is 18. The
stages of the algorithm for the graph given in figure 2.2(a) are shown in figure 2.3.


Edge      Weight   Action
–         –        –  (initially T contains all the vertices and no edges)
(A, B)    1        accept
(A, G)    2        accept
(C, D)    2        accept

Figure 2.3 (contd.)


Edge      Weight   Action
(A, D)    3        accept
(B, C)    4        reject
(A, E)    5        accept
(B, F)    5        accept

Figure 2.3: Stages of Kruskal's algorithm (only the edge decisions are shown; the tree snapshots of the original figure did not survive extraction)


Algorithm

Let us now consider the formal algorithm (algorithm 2.2) for Kruskal's method to
find the minimum cost spanning tree. In this algorithm we assume that the given
graph G contains n vertices and that E, the set of edges of the graph, is available as a
sequence sorted by weight. Let T = (VT, ET). Initially ET = ∅ and VT = V, where V is
the set of vertices of G.

Algorithm 2.2: Kruskal


begin
Create separate singleton sets for the vertices of G;
while T has less than n − 1 edges and E ≠ ∅ do
    Choose an edge (u, v) from E of lowest cost;
    Delete (u, v) from E;
    if find(u) ≠ find(v) then
        Add (u, v) to T;
        call Union(find(u), find(v));
    end
    else Discard (u, v);
end
if T has less than n − 1 edges then
    Print(“No spanning tree exists for the given graph”);
end
end

Analysis

The time complexity of algorithm 2.2 depends on the preprocessing task of sorting.
The best comparison-based algorithms known for sorting require O(n log n) time to
sort n given numbers; as there are e edges, O(e log e) time is required to sort the
edges according to their weights. A cycle can be detected as follows. Initially each
vertex is in a set which contains only that vertex. Whenever a new edge is included,
the sets corresponding to the end vertices of this edge are merged by applying the union
operation. Note that if the end vertices of a newly encountered edge belong to the same
set, then the inclusion of this edge will certainly form a cycle, as the vertices already
belonging to that set are the vertices of a subtree. In our example, we initially have the
sets {A}, {B}, {C}, {D}, {E}, {F} and {G}. The first edge to be included is (A,B).
As A and B belong to different sets, the inclusion of this edge does not create any
cycle. Therefore, the edge (A,B) is included and results in the sets {A,B}, {C}, {D},
{E}, {F} and {G}. The next minimum weighted edge is (A,G) and its inclusion leads
to the sets {A,B,G}, {C}, {D}, {E} and {F}. The third edge to be included is (C,D),
and the sets become {A,B,G}, {C,D}, {E} and {F}. Now the edge (A,D) is to be
considered for inclusion, and due to this process the sets we obtain are {A,B,G,C,D},
{E} and {F}. For the next iteration here, the edge (B,C) is encountered. As the end
vertices of this edge (B,C), belong to same set, it creates a cycle and therefore we
reject the edge. By proceeding in this way, the result will be a single set consisting
of all vertices of the graph, if the graph is connected. At this stage, the tree obtained
due to the inclusion of edges as stated above, would be the required spanning tree.

It is known that the sequence of union-find operations on sets can be done in near
optimal time as per the following lemma 2.1 (proof omitted), where α(m, n) is inverse
Ackermann’s function.

Lemma: 2.1 Let T(m,n) be the maximum time required to process any intermixed se-
quence of m ≥ n finds and n−1 unions. Then k1 mα(m, n) ≤ T (m, n) ≤ k2 mα(m, n),
for some positive constants k1 and k2 .

Therefore, the computing time of Kruskal's algorithm is dominated by the complexity
of sorting the edges according to their weights, which in the worst case is O(e log e).
As the maximum value of e is O(n²), the total time required by Kruskal's algorithm
to compute a minimum cost spanning tree is O(e log n).
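
The following Python sketch puts algorithm 2.2 together with a simple (non-optimized) union-find; the edge-list representation and names are ours. Run on the edges of figure 2.2(a), it returns the six tree edges of total cost 18.

def kruskal(vertices, edges):
    parent = {v: v for v in vertices}      # one singleton set per vertex

    def find(v):                           # root of the set containing v
        while parent[v] != v:
            v = parent[v]
        return v

    tree = []
    for w, u, v in sorted(edges):          # edges in nondecreasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                       # different sets: no cycle
            tree.append((u, v, w))
            parent[ru] = rv                # union of the two sets
    return tree if len(tree) == len(vertices) - 1 else None

edges = [(1, 'A', 'B'), (2, 'A', 'G'), (2, 'C', 'D'), (3, 'A', 'D'),
         (4, 'B', 'C'), (5, 'A', 'E'), (5, 'B', 'F'), (6, 'B', 'G'),
         (6, 'D', 'E'), (7, 'C', 'F'), (8, 'C', 'G'), (9, 'D', 'G')]
t = kruskal('ABCDEFG', edges)
print(t, sum(w for _, _, w in t))   # 6 edges of total cost 18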


2.2.2 Prim’s Algorithm

In contrast to Kruskal's algorithm, Prim's algorithm permits T to grow only as a single
tree. This means that at no stage of the algorithm does a forest appear. Let the set of
vertices belonging to T be VT . Therefore the set of vertices not in T will be V − VT ,
where V is the set of vertices of the given graph G. Initially VT contains only an
arbitrary vertex which is chosen to be a root of T , and ET is empty, where ET is the
set of edges in T . Now, from the set of edges adjacent from a vertex in VT to a vertex
in V − VT , select a minimum weighted edge and this edge will become a member of
ET , if the chosen edge does not create any cycle with the edges of the tree already
considered. Repeating this process of selecting a minimum weighted edge between
the vertices in VT and V − VT for inclusion into T till T has n − 1 edges, will result
in a minimum cost spanning tree.

Figure 2.4 gives a sketch of this method for the graph given in figure 2.2. Once
A is chosen as a root of T , the edges (A,B), (A,G), (A,D) and (A,E) are considered
for choosing the minimum weighted one as these edges are incident on A, a vertex
presently in T . Here, the edge (A,B) has the least weight, and so the edge (A,B) will
be included in T . The vertices A and B are in VT . For the next iteration, the edges
(A,G), (A,D), (A,E), (B,G), (B,C), and (B,F) should be considered for choosing the
minimum weighted edge and (A,G) is accordingly selected. As the edge (A,G) does
not create any cycle with the edges already in T , this edge gets included in T . If we
repeat this process till T has n − 1 edges, T becomes the required spanning tree.

Algorithm

While implementing Prim's algorithm, we require a data structure to determine the
minimum weighted edge between the tree vertices and their adjacent non-tree vertices.
The data structure we have chosen is the binary heap tree, or heap tree in short. In
the heap tree the value of a node must be less than or equal to the value of its
children. The minimum valued node will always be the root. Now let us see the
formal description of the algorithm, which is given in algorithm 2.3. In algorithm 2.3,
it is assumed that the given graph consists of n vertices.


Edges under consideration (weights)                        Chosen edge
–                                                          –  (initially T contains only the root A)
(A,B) 1, (A,G) 2, (A,D) 3, (A,E) 5                         (A,B)
(A,G) 2, (A,D) 3, (A,E) 5, (B,G) 6, (B,C) 4, (B,F) 5       (A,G)
(A,D) 3, (A,E) 5, (B,G) 6, (B,C) 4, (B,F) 5,
(G,D) 9, (G,C) 8                                           (A,D)

Figure 2.4 (contd.)


Edges under consideration (weights)                        Chosen edge
(A,E) 5, (B,G) 6, (B,C) 4, (B,F) 5, (C,D) 2,
(G,D) 9, (G,C) 8, (D,E) 6                                  (C,D)
(A,E) 5, (B,G) 6, (B,C) 4, (B,F) 5, (G,D) 9,
(G,C) 8, (D,E) 6, (C,F) 7                                  (B,C) — rejected (creates a cycle)
(A,E) 5, (B,G) 6, (B,F) 5, (G,D) 9, (G,C) 8,
(D,E) 6, (C,F) 7                                           (A,E)
(B,G) 6, (B,F) 5, (G,D) 9, (G,C) 8, (D,E) 6, (C,F) 7       (B,F)

Figure 2.4: Stages of Prim's algorithm (only the candidate edges and the decisions are shown; the tree snapshots of the original figure did not survive extraction)


Algorithm 2.3: Prim’s


begin
Let T = (VT, ET) such that VT = {r} and ET = ∅, where r is a vertex of G;
Let Er be the set of edges (r, v), v ∈ V − VT;
Create a binary heap tree BT for the edges in Er;
while T has fewer than n − 1 edges do
    Delete the root of BT; let its edge be ei = (ui, vi)
    (assume ui ∈ VT and vi ∈ V − VT);
    if ei does not create any cycle in T then
        include ei in T;
        insert the edges of the kind ej = (vi, vj), vj ∈ V − VT, into BT;
    end
end
end

Analysis

The time complexity of Prim's algorithm depends mainly on two factors. One
is the detection of cycles and the other is the manipulation of the heap tree. The
detection of cycles can be done using sets, as seen in Kruskal's algorithm. As far
as the binary heap operations are concerned, the insertion and deletion of nodes can
be carried out in O(log e) time if the heap tree contains e nodes. As there are e edges
which may pass through the heap tree over the course of the algorithm, the total time
required to insert and delete the nodes containing information on these edges is
O(e log e). Thus, the time required by Prim's algorithm is O(e log n), as e = O(n²) in
the worst case.
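
Here is a minimal Python sketch of this method, using the standard library's heapq module as the binary heap; the adjacency-list representation and names are ours. The sample graph is the one of figure 2.2(a).

import heapq

def prim(graph, root):
    in_tree = {root}
    heap = list(graph[root])           # candidate edges leaving the root
    heapq.heapify(heap)
    tree, cost = [], 0
    while heap and len(in_tree) < len(graph):
        w, v = heapq.heappop(heap)     # minimum weighted candidate edge
        if v in in_tree:               # both ends in T: would form a cycle
            continue
        in_tree.add(v)                 # grow T by this vertex
        tree.append((v, w))
        cost += w
        for e in graph[v]:             # new candidate edges leaving v
            if e[1] not in in_tree:
                heapq.heappush(heap, e)
    return tree, cost

g = {'A': [(1, 'B'), (2, 'G'), (3, 'D'), (5, 'E')],
     'B': [(1, 'A'), (4, 'C'), (5, 'F'), (6, 'G')],
     'C': [(4, 'B'), (2, 'D'), (7, 'F'), (8, 'G')],
     'D': [(3, 'A'), (2, 'C'), (6, 'E'), (9, 'G')],
     'E': [(5, 'A'), (6, 'D')],
     'F': [(5, 'B'), (7, 'C')],
     'G': [(2, 'A'), (6, 'B'), (8, 'C'), (9, 'D')]}
print(prim(g, 'A'))   # total cost 18, as in figure 2.2(b)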

2.3 Shortest Path Problems

The shortest path problem also deals with weighted graphs, as in the previous section;
the difference here is that the edges have an orientation (direction). Consider
a graph G to be a pictorial representation of the road network connecting
various cities. The vertices of G are the cities, and the weight of an edge is nothing
but the distance between two cities along the road. Assume that we wish to travel
from city A to city B. If more than one path exists between A and B, then our
interest lies in finding the shortest one. The length of a path is defined to be the sum
of the weights of the edges on that path rather than the number of edges. In each
path, the starting vertex of the path will be referred to as the source, and the last
vertex as the destination. With the terms source and destination, we study here the
single source shortest path problem. It is also assumed that the weights of the edges
are positive.

2.3.1 Single Source Shortest Paths

Assuming a given directed graph G = (V, E), with weights assigned to the edges of G,
we have to find the shortest paths from any given source vertex to all the remaining
vertices of G. Recall that the edges are assigned with positive weights. The problem
of finding the single source shortest path from a source vertex v0 is solved as follows:
Let Pv be the set of vertices, including v0, to which the shortest paths from v0 have
already been determined. Then, let dist(w), w ∉ Pv, be the length of the shortest path
from v0 going through only the vertices in Pv and ending in w. Note that the paths
are generated in the non-decreasing order of path lengths. Therefore the destination
of the next path generated must be the vertex u with minimum dist(u) among all
vertices not in Pv . In this process, all intermediate vertices on the shortest path to u
must be in Pv. To prove this fact, assume that there is an intermediate vertex
w ∉ Pv on this path; then the path from v0 to u contains the path from v0 to w,
and the length of the path from v0 to w will be less than the length of the path from
v0 to u. Therefore the path from v0 to w must have been generated before the path
from v0 to u is considered. Hence, we can conclude that all intermediate vertices of
the path from v0 to u must be in Pv .

Example

Once the path from v0 to u is generated, the vertex u becomes a member of Pv. At
this moment, dist(w) for a vertex w not in Pv may decrease if the shortest


  
[Figure: a weighted digraph on vertices A–E with source A; the drawing did not survive extraction, but the resulting dist values are tabulated below.]

Pv           dist(A)  dist(B)  dist(C)  dist(D)  dist(E)   chosen vertex
A              0        5       10        ∞        ∞            B
A,B            0        5       10        8       17            D
A,B,D          0        5       10        8       11            C
A,B,D,C        0        5       10        8       11            E
A,B,D,C,E      0        5       10        8       11

Figure 2.5: A graph and the table showing how to find the shortest paths
from the single source A

path from v0 to w passes through u. Moreover, the new portion of this path, from u
to w, must be a single edge; that is, there cannot be any intermediate vertex
between u and w, and the new length of the path from v0 to w is given by
dist(u) + length(< u, w >).

First we work out this method for the graph given in figure 2.5. Let us denote the
weight of an edge e as w(e). A is taken as the source vertex. Initially dist(v) =
w((A, v)), where v ∈ {B, C, D, E}; it is clear that dist(A) = 0. These values are
given in the first row of the table. Next, the minimum dist among the vertices not in Pv
is 5, and the corresponding vertex is B. Therefore, B will be included in Pv. Due to
the inclusion of B in Pv, the dist values of the vertices not in Pv and adjacent to B get
changed as
dist(w) = min{dist(w), dist(B) + w((B, w))}

where w ∉ Pv and w is adjacent to B. This process has to be repeated till all
vertices are chosen. Now dist(v) provides the shortest distance between A to v,
where v ∈ {B, C, D, E}.

Algorithm

The formal algorithm for this method is given in algorithm 2.4. In this algorithm, it
is assumed that the source vertex is v. At the end of the algorithm dist(i), 1 ≤ i ≤ n
provides the shortest distance from the vertex v to i. The given graph contains n
vertices. Let w(e) denote the weight of the edge e; the weight of the edge e = (u, v)
can be obtained from the cost adjacency matrix COST as COST(u, v).

Algorithm 2.4: SSSP


begin
Pv ← ∅;
dist(v) ← 0;
for u ∈ V(G), u ≠ v do
    Set dist(u) ← w((v, u));
end
Pv ← Pv ∪ {v};
i ← 2;
while i ≤ n do
    Choose x ∉ Pv such that dist(x) = min{dist(y) : y ∉ Pv};
    Pv ← Pv ∪ {x};
    i ← i + 1;
    for all y ∉ Pv do
        Set dist(y) ← min{dist(y), dist(x) + w((x, y))};
    end
end
end

The above algorithm outputs the length of the shortest path from the source vertex
to each of the remaining vertices, but does not give the set of edges belonging to the
path. A little modification carried out in the algorithm would generate such paths.


Analysis

The total time required for the execution of the statements inside the while loop is
O(n²). Therefore, the time required by the SSSP algorithm is O(n²). Note that the
use of the cost adjacency matrix rather than a cost adjacency list does not increase
the time, as choosing the vertex x in each iteration of the while loop itself requires
O(n) time.
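
A minimal Python sketch of algorithm 2.4 follows. The cost structure below is an assumption: it is one set of edge weights consistent with the dist table of figure 2.5 (the original drawing did not survive), chosen only so that the example reproduces the tabulated values.

INF = float('inf')

def sssp(cost, source):
    dist = {u: cost[source][u] for u in cost}   # direct-edge distances
    dist[source] = 0
    in_p = {source}                             # the set Pv
    while len(in_p) < len(cost):
        # vertex with the minimum dist among those not yet in Pv
        x = min((u for u in cost if u not in in_p), key=dist.get)
        in_p.add(x)
        for y in cost:                          # relax edges leaving x
            if y not in in_p:
                dist[y] = min(dist[y], dist[x] + cost[x][y])
    return dist

cost = {'A': {'A': 0, 'B': 5, 'C': 10, 'D': INF, 'E': INF},
        'B': {'A': INF, 'B': 0, 'C': INF, 'D': 3, 'E': 12},
        'C': {'A': INF, 'B': INF, 'C': 0, 'D': INF, 'E': 7},
        'D': {'A': INF, 'B': INF, 'C': INF, 'D': 0, 'E': 3},
        'E': {'A': INF, 'B': INF, 'C': INF, 'D': INF, 'E': 0}}
print(sssp(cost, 'A'))   # {'A': 0, 'B': 5, 'C': 10, 'D': 8, 'E': 11}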

Have you understood?


1. What is the complexity of the divide phase of merge sort?
2. How much time is required for merging two sorted subarrays, each of size n?
3. What is the worst case complexity of the quick sort algorithm?
4. Why do we prefer quick sort even though it does not give optimal time in the worst case?
5. What is the nature of the input for doing binary search?
6. What is the difference between searching and traversal?
7. What is the difference between an optimal solution and a feasible solution?

Exercises
1. Determine the lower bound for sorting n numbers using swapping and compar-
ison?
2. Given an array of n numbers, the first half is sorted in ascending order and rest
in descending order. If you wish to sort the array in descending order using
quick sort technique, what will be the time complexity, if you are not aware of
the nature of input?
3. Under what circumstances, linear search is better as compared to binary search?
4. Write an algorithm to construct a binary tree for an expression given in postfix
form.
5. Can you extend Kruskal's algorithm to generate a rooted minimum spanning
tree on a directed graph? Justify your answer.
6. Can you extend Prim's algorithm to generate a rooted minimum spanning
tree on a directed graph? Justify your answer.


7. Write a divide and conquer algorithm for finding a position of the largest element
in an array of n numbers.
8. Design a divide and conquer algorithm for computing the number of levels in a
binary tree.
9. Which of the three classic traversal algorithms yields a sorted list if applied to a
binary search tree? Prove this property.

Summary
Divide and conquer is a general algorithm design technique that solves a problem’s
instance by dividing it into several smaller instances, solving each of them recursively,
and then combining their solutions to get a solution to the original instance of the
problem. However, the greedy technique suggests constructing a solution to an opti-
mization problem through a sequence of steps, each expanding a partially constructed
solution obtained so far, until a complete solution to the problem is reached. On each
step, the choice made must be feasible, locally optimal and irrevocable.


Unit III

Dynamic Programming


Structure of Unit III


1 Dynamic Programming

1.1 Computing a Binomial Coefficient

1.2 Warshall’s and Floyd’s Algorithms

1.2.1 Transitive Closure - Warshall’s Algorithm

1.2.2 All Pair Shortest Paths - Floyd’s Algorithm

1.3 Optimal Binary Search Trees

1.4 Knapsack Problem and Memory Functions

1.4.1 Memory Function Approach


Learning Objectives
• Concept of principle of optimality.

• Addressing solution to various problems using dynamic programming approach.


1. Dynamic Programming
It is yet another algorithm design method that can be used when the solution to a
problem may be viewed as the result of a sequence of decisions. One way to solve
problems in which a sequence of stepwise decisions leads to the optimal decision
sequence is to try out all possible decision sequences and then pick the optimal one.
However, this exhaustive exploration blows up the time complexity. To avoid that,
dynamic programming is used. It often drastically reduces the amount of enumeration
by avoiding the enumeration of decision sequences that cannot possibly be optimal.
How is this possible? It is due to the Principle of Optimality. What is it?

Principle of Optimality: An optimal sequence of decisions has the property that
whatever the initial state and decision are, the remaining decisions must constitute
an optimal decision sequence with regard to the state resulting from the first decision.

This is what makes dynamic programming different from the greedy method. In the
greedy method only one decision sequence is ever generated. In dynamic programming,
many decision sequences may be generated; however, sequences containing suboptimal
subsequences cannot be optimal (by the principle of optimality) and so, as far as
possible, are not generated.

1.1 Computing a Binomial Coefficient

It is a nice problem to illustrate how the dynamic programming technique can be
applied to a non-optimization problem. Hopefully, you are aware of what a binomial
coefficient is; some may remember it by the notation C(n, k) or (n choose k).
The name comes from the participation of these numbers in the so-called binomial
formula:

(a + b)ⁿ = C(n, 0)aⁿ + · · · + C(n, i)aⁿ⁻ⁱ bⁱ + · · · + C(n, n)bⁿ


But what makes it interesting to compute this binomial coefficient using dynamic
programming? It is due to the following interesting property it possesses.

C(n, k) = C(n − 1, k − 1) + C(n − 1, k), for n > k > 0 (1.1)

and
C(n, 0) = C(n, n) = 1

To find the value of C(n, k), we record the values of the binomial coefficients in a table
of n + 1 rows and k + 1 columns, numbered from 0 to n and from 0 to k, respectively,
as shown below.

       0    1    2    3   · · ·   k
0      1
1      1    1
2      1    2    1
3      1    3    3    1
⋮
k      1                          1
⋮
n      1                          C(n, k)

What is happening in this table? It is very simple: we fill the table row by row,
starting with row 0 and ending with row n, and each row is filled from left to right.
Note that each row starts with 1, as C(n, 0) = 1 for any n. Also, rows 0 through k
end with 1 on the main diagonal of the table, as C(i, i) = 1 for 0 ≤ i ≤ k. The
remaining entries are computed using equation 1.1.

Algorithm

The formal description of the method stated above is given in algorithm 1.1.


Algorithm 1.1: Binomial(n, k)


begin
for i ← 0 to n do
for j ← 0 to min{i, k} do
if j = 0 or j = i then C[i, j] ← 1;
else C[i, j] ← C[i − 1, j − 1] + C[i − 1, j];
end
end
return C[n, k];
end

Analysis

Clearly, the complexity of this algorithm is governed by the number of additions, say
A(n, k); equation 1.1 uses exactly one addition per entry. It is also worth noting that
the first k + 1 rows of the table form a triangle, while the remaining n − k rows
form a rectangle. So, we split the number of additions into two parts:

A(n, k) = Σ_{i=1}^{k} Σ_{j=1}^{i−1} 1 + Σ_{i=k+1}^{n} Σ_{j=1}^{k} 1

        = Σ_{i=1}^{k} (i − 1) + Σ_{i=k+1}^{n} k

        = (k − 1)k/2 + k(n − k)

        = O(nk)
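
A direct Python rendering of algorithm 1.1 (with the j = i boundary condition noted above) is given below as a sketch for illustration.

def binomial(n, k):
    # table of n+1 rows and k+1 columns, filled row by row
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:           # boundary values of the triangle
                C[i][j] = 1
            else:                          # equation 1.1
                C[i][j] = C[i - 1][j - 1] + C[i - 1][j]
    return C[n][k]

print(binomial(6, 3))   # 20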

1.2 Warshall’s and Floyd’s Algorithms


In this section, we study two algorithms, one for finding the transitive closure of a
directed graph, which is known as Warshall’s algorithm, and the other is to find all pair
shortest path on the weighted directed graph, which is known as Floyd’s algorithm.


Basically, these two algorithms resemble each other conceptually, and hence both are
considered in the same section.

1.2.1 Transitive Closure - Warshall’s Algorithm

A graph can be represented by an adjacency matrix (recollect what you have studied
in data structures), and this representation is also useful for a directed graph, wherein
a one in the ith row and jth column means there is a directed edge from the ith vertex
to the jth vertex. However, in some cases we are interested in knowing the existence of
a directed path from the ith vertex to the jth vertex. For this, the transitive closure
matrix can be computed; in that matrix, if the entry corresponding to the ith row
and jth column contains a 1, then there exists a path from the ith vertex to the jth
vertex. Otherwise, the entry contains a zero.

As an example, let us consider the graph given in figure 1.1.

[Figure: a digraph on vertices a, b, c and d with edges a → b, b → d, d → a and d → c; see the adjacency matrix below.]

Figure 1.1: A sample graph for illustration.

Its adjacency matrix A becomes,

        a   b   c   d
   a    0   1   0   0
   b    0   0   0   1
   c    0   0   0   0
   d    1   0   1   0

and the transitive closure becomes,


        a   b   c   d
   a    1   1   1   1
   b    1   1   1   1
   c    0   0   0   0
   d    1   1   1   1

Obvious Method

It is obvious that one can determine the transitive closure of a digraph (directed
graph) with the help of the DFS or BFS searching methods. How? Performing either
traversal starting at the ith vertex gives the information about the vertices reachable
from the ith vertex, and hence the columns that contain a one in the ith row of the
transitive closure; repeating this process with each vertex as the starting point
yields the transitive closure of the digraph.

Algorithm

Warshall's algorithm constructs the transitive closure of a digraph with n vertices
through a series of n × n boolean matrices R0, . . . , Rk, . . . , Rn. Each of these matrices
provides certain information about the directed paths in the digraph. Specifically, the
ijth entry of the matrix Rk is one iff there exists a directed path from the ith vertex to
the jth vertex with each intermediate vertex, if any, numbered not higher than k. Thus
Rn reflects paths that can use all n vertices of the digraph as intermediate vertices and
hence is nothing else but the digraph's transitive closure. Note that in Warshall's
algorithm, Rk is constructed from its immediate predecessor Rk−1.

While constructing Rk, two situations are possible for a path from vertex i to vertex j.
In the first, the list of its intermediate vertices does not contain the kth vertex; in the
other, the kth vertex may appear. Without loss of generality, we can assume that the
vertex numbered k appears only once in such a path. Why? If it appeared more than
once, the portion of the path between its first and last occurrences would form a
directed cycle, and removing this cycle still leaves a directed path from i to j in which
k appears only once.


Now, in the first case, the path counted in Rk is the same as one in Rk−1, as it does
not contain the vertex k. In the second case, the new path is obtained by
concatenating the path from the ith vertex to the kth vertex and the path from the
kth vertex to the jth vertex. Thus,

Rk [i, j] = Rk−1 [i, j] or (Rk−1 [i, k] and Rk−1 [k, j])

Thus, the formal description of this approach is given in algorithm 1.2.

Algorithm 1.2: Transitive Closure


begin
for k ← 1 to n do
for i ← 1 to n do
for j ← 1 to n do
Rk [i, j] ← Rk−1 [i, j] or (Rk−1 [i, k] and Rk−1 [k, j]);
end
end
end
end

Analysis

The time complexity of algorithm 1.2 is O(n³), as there are three nested loops, each
iterating n times.
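
The following Python sketch runs algorithm 1.2 in place on the adjacency matrix of figure 1.1. A single matrix R is reused instead of the series R0, . . . , Rn — a standard space-saving observation (see also exercise 4 below).

def warshall(R):
    n = len(R)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    return R

A = [[0, 1, 0, 0],    # a -> b
     [0, 0, 0, 1],    # b -> d
     [0, 0, 0, 0],    # c has no outgoing edge
     [1, 0, 1, 0]]    # d -> a, d -> c
print(warshall(A))
# [[1, 1, 1, 1], [1, 1, 1, 1], [0, 0, 0, 0], [1, 1, 1, 1]]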

1.2.2 All Pair Shortest Paths - Floyd’s Algorithm

The problem of all pair shortest paths is to find the shortest distance between every
pair of vertices of the graph. This is an extension of the single source shortest paths
problem, as repeatedly calling the function for the single source shortest path with
each vertex as the source would yield all pair shortest paths. The time complexity of
the single source shortest paths algorithm is O(n²); therefore calling this function
n times to obtain all pair shortest paths requires O(n³) time. However, the
restriction of this method is that it does not allow negative weights for edges. In
this section we see another algorithm which also requires O(n³) time, but with the
flexibility of permitting negative edge weights, so long as G has no cycles of
negative length.

Algorithm

The algorithm is designed to work in an incremental fashion. Let A0 be the cost
adjacency matrix and {v1, v2, . . . , vn} be the set of vertices of G. In the first iteration,
the matrix A0 is modified in such a way that the shortest path between any two
vertices may have only v1 as an intermediate vertex. Let this new matrix be A1. Then
A1 is modified so that the intermediate vertices in the shortest path between any
two vertices come from the set {v1, v2}, giving the new matrix A2. In general, the
matrix Ai contains the shortest path distance between any two vertices such that the
intermediate vertices of that path belong to the set {v1, v2, . . . , vi}. This process leads
to An as the required output, as the graph has only n vertices.

Note that at a particular instance of the cost matrix, say Ai , the value at Ai (vj , vk )
is determined by

Ai (vj , vk ) = min{Ai−1 (vj , vk ), Ai−1 (vj , vi ) + Ai−1 (vi , vk )}, i ≥ 1 (1.2)

This indicates that the shortest path between any two vertices vj and vk so that the
intermediate vertices are only in {v1 , v2 , . . . , vi } falls under any one of the following:-

1. The shortest path between vertices vj and vk does not have vi as an intermediate
vertex, and so its distance will be the same as Ai−1 (vj , vk ).
2. The shortest path between vj and vk is constituted as the first shortest path
from vj to vi followed by the shortest path from vi to vk . Note here that the
shortest path vj to vi or vi to vk does not contain vi as an intermediate vertex
due to vi itself being a source or sink of this path. Therefore the shortest path
distance between vj and vk is determined by Ai−1 (vj , vi ) + Ai−1 (vi , vk ).

Consider the changes that have taken place in the cost adjacency matrix according



[Figure: a weighted digraph on vertices A, B and C with edges A → B (4), A → C (4), B → A (2), B → C (7) and C → B (3); see the cost matrix A0 below.]
Figure 1.2: A sample graph for illustration.

to equation 1.2 for the graph given in figure 1.2. The initial cost adjacency matrix
for this graph is

A0 A B C
A 0 4 4
B 2 0 7
C ∞ 3 0

using equation 1.2, the matrices A1 , A2 and A3 are obtained as

A1 A B C A2 A B C A3 A B C
A 0 4 4 A 0 4 4 A 0 4 4
B 2 0 6 B 2 0 6 B 2 0 6
C ∞ 3 0 C 5 3 0 C 5 3 0

The matrix A3 is the required matrix.

The formal description of this method for the all pair shortest path problem is given
in algorithm 1.3. Therein, we assume that A0 be the cost adjacency matrix of a graph
with n vertices. Note here that suffix on the matrix A is used only for the clarity.
Without loss of generality, one can remove the suffix on the original matrix A, and
the modified version is given in algorithm 1.4.


Algorithm 1.3: APSP


begin
for i ← 1 to n do
for j ← 1 to n do
for k ← 1 to n do
Ai (j, k) ← min{Ai−1 (j, k), Ai−1 (j, i) + Ai−1 (i, k)};
end
end
end
end

Algorithm 1.4: APSP


begin
for i ← 1 to n do
for j ← 1 to n do
for k ← 1 to n do
A(j, k) ← min{A(j, k), A(j, i) + A(i, k)};
end
end
end
end

Analysis

It is clear that the time required to compute APSP is O(n³), as there are three for
loops, each running for n iterations. The output of this algorithm gives only the
distance of the shortest path between any pair of vertices. The algorithm can be
modified to obtain the sequence of edges as they appear in the shortest path, and the
same is left to the interested students.
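
A minimal Python sketch of algorithm 1.4, run on the graph of figure 1.2, is given below; float('inf') stands for the absence of an edge.

INF = float('inf')

def floyd(A):
    n = len(A)
    for i in range(n):              # allow vertex i as an intermediate
        for j in range(n):
            for k in range(n):
                A[j][k] = min(A[j][k], A[j][i] + A[i][k])
    return A

A = [[0, 4, 4],     # costs from A
     [2, 0, 7],     # costs from B
     [INF, 3, 0]]   # costs from C
print(floyd(A))     # [[0, 4, 4], [2, 0, 6], [5, 3, 0]], i.e. the matrix A3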


1.3 Optimal Binary Search Trees


A binary search tree is a special kind of binary tree. In this type of binary tree, the
elements in the left and right sub-trees of each node are respectively lesser and greater
than the element of that node.

Searching for an element x in a binary search tree can be performed recursively, in
the following way:

1. Compare x with the root element of the binary tree, if the binary tree is non-
empty. If it matches, the element is in the root and the algorithm terminates
successfully by returning the address of the root node. If the binary tree is
empty, it returns a NULL value.
2. If x is less than the element in the root, the search continues in the left sub-tree;
3. If x is greater than the element in the root, the search continues in the right
sub-tree.

From the above search method, one can easily note that the keys of a binary search
tree must be distinct.

One of the applications of a binary search tree is to implement a dictionary, wherein we
may have to perform operations such as searching, insertion and deletion. For a given
set of values one can get different binary search trees; this is due to the choice of the
order of the input values. Now, the question is: when there exist several binary search
trees, which one is good in the sense of searching time? That is what is called the
problem of finding the optimal binary search tree. Note that, in general, such optimal
trees account not only for successful searches but also for unsuccessful ones; here we
restrict ourselves to successful searches.

In the problem statement, apart from the keys themselves, we are also given the
probability of each key being searched for. Moreover, the complexity measure is the
number of comparisons made. That is, if K is a key that appears at level 2, and
the probability of searching for the key K is 0.3, then the average number of
comparisons spent in searching for this key is 2 × 0.3, which is the level times the
probability. All that is required here is to determine a binary search tree wherein
the average number of comparisons over all keys is minimal.

Obvious Method

Note that even when a tree contains just 4 keys, there are 14 different binary search
trees that can be generated. The obvious method suggests that one can generate all
possible trees and then determine which is optimal. Is this always feasible? Certainly
not. Why? This exhaustive search approach is unrealistic, as the total number of
binary search trees with n keys is equal to the nth Catalan number

c(n) = C(2n, n) · 1/(n + 1) for n > 0, c(0) = 1.

This function grows very fast to infinity, and so a cleverer method is required to
determine the optimal binary search tree; this is where the dynamic programming
approach is useful.

Algorithm

Let a1, a2, . . . , an be the distinct keys given in ascending order, and let the respective
probabilities of searching for the keys be p1, p2, . . . , pn. Let C[i, j] be the least average
number of comparisons made in a successful search in a binary tree Ti,j made up of
keys ai, . . . , aj, for 1 ≤ i ≤ j ≤ n. So, the problem here is to find the value of C[1, n]
from the smaller instances C[i, j], based on the principle of optimality.

To derive the recurrence equation underlying the dynamic programming algorithm,
we need to consider all possible ways to choose a root ak among the keys ai, . . . , aj. If
ak forms the root, then the keys ai, . . . , ak−1 form the left subtree and ak+1, . . . , aj
form the right subtree. Let us assume that the level of the root is 1; then the
recurrence equation for C[i, j] is
 
C[i, j] = min_{i≤k≤j} Σ_{s=i}^{j} p_s · (level of a_s in T_{i,j})

        = min_{i≤k≤j} { p_k · 1 + Σ_{s=i}^{k−1} p_s · (level of a_s in T_{i,j})
                                + Σ_{s=k+1}^{j} p_s · (level of a_s in T_{i,j}) }

        = min_{i≤k≤j} { p_k · 1 + Σ_{s=i}^{k−1} p_s · (level of a_s in T_{i,k−1} + 1)
                                + Σ_{s=k+1}^{j} p_s · (level of a_s in T_{k+1,j} + 1) }

        = min_{i≤k≤j} { Σ_{s=i}^{k−1} p_s · (level of a_s in T_{i,k−1})
                                + Σ_{s=k+1}^{j} p_s · (level of a_s in T_{k+1,j}) + Σ_{s=i}^{j} p_s }

        = min_{i≤k≤j} { C[i, k − 1] + C[k + 1, j] + Σ_{s=i}^{j} p_s }

Note that this formula implies that

C[i, i − 1] = 0 for 1 ≤ i ≤ n + 1

and
C[i, i] = pi for 1 ≤ i ≤ n

The table shown in figure 1.3 illustrates the values needed for computing C[i, j]. The
arrows point to the pairs of entries whose sums are computed in order to find the
smallest one to be recorded as the value of C[i, j]; the starred entry is the goal, C[1, n].
From the formula given above and the table in figure 1.3, the table is filled along its
diagonals, starting with zeros on the diagonal C[i, i − 1], with the probabilities pi on
the main diagonal just above it, and moving toward the upper right corner. The table
in figure 1.3 serves only for computing the C values; however, the construction of the
optimal binary search tree itself is also possible. For this, we need to maintain another
two-dimensional table to record the value of k for which the minimum in the derived
equation is achieved. This root table has the same shape as the table shown in
figure 1.3.

Let us now consider an example showing how these two tables are constructed. Let
us assume that there are four keys A, B, C and D, and their


[Figure: the table of C[i, j] values — rows 1 to n + 1, columns 0 to n; zeros on the diagonal C[i, i − 1], the probabilities p1, . . . , pn on the main diagonal just above it, and the goal entry C[1, n] (marked ∗) in the upper right corner. The arrows of the original drawing did not survive extraction.]

Figure 1.3: Table for constructing optimal binary search tree

searching probabilities are respectively 0.1, 0.2, 0.4 and 0.3. The initial tables look
like

C  |  0    1    2    3    4          R  |  0   1   2   3   4
1  |  0   0.1                        1  |      1
2  |       0   0.2                   2  |          2
3  |            0   0.4              3  |              3
4  |                 0   0.3         4  |                  4
5  |                      0          5  |

Here, for instance, C[1, 2] = min{k = 1 ⇒ 0.5, k = 2 ⇒ 0.4}. Thus the minimum
value is obtained when k = 2, and so the root of the optimal tree on the first two keys
has index 2. Proceeding this way, we finally get

C  |  0    1    2    3    4          R  |  0   1   2   3   4
1  |  0   0.1  0.4  1.1  1.7         1  |      1   2   3   3
2  |       0   0.2  0.8  1.4         2  |          2   3   3
3  |            0   0.4  1.0         3  |              3   3
4  |                 0   0.3         4  |                  4
5  |                      0          5  |


Now, let us see for this example how the optimal binary search tree is constructed.
Since R[1, 4] = 3, the root is C. As A and B are smaller than C, these keys must be
in the left subtree, and the key D must be in the right subtree. To know the structure
of the subtree on the two keys A and B, we consider R[1, 2]. Since R[1, 2] = 2, the
root of this subtree is B, and A must be its left child, as A is less than B. The final
optimal tree for the given example is shown in figure 1.4. Now, let us see

[Figure: root C, with left child B and right child D; A is the left child of B.]

Figure 1.4: Optimal binary search tree for the example.

the formal description of this approach, and the same is given in algorithm 1.5. In
algorithm 1.5, it is assumed that the array P contains the search probabilities for the
sorted list of n keys.

Analysis

One can easily say that the time complexity of algorithm 1.5 is O(n³), as there are
three loops running one inside the other. However, a cleverer analysis shows that the
entries in the root table R are always nondecreasing along each row and column, and
that makes it possible to reduce the running time of the algorithm to O(n²).
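
The following Python sketch mirrors algorithm 1.5 (with the index fixes incorporated); 1-based indexing is simulated with a dummy entry p[0]. On the four-key example above it reports an average of about 1.7 comparisons and root index 3.

def optimal_bst(p):
    n = len(p) - 1                       # p[0] is a dummy entry
    C = [[0.0] * (n + 2) for _ in range(n + 2)]
    R = [[0] * (n + 2) for _ in range(n + 2)]
    for i in range(1, n + 1):            # one-key trees
        C[i][i] = p[i]
        R[i][i] = i
    for d in range(1, n):                # d = j - i, the range length minus 1
        for i in range(1, n - d + 1):
            j = i + d
            best, kmin = float('inf'), i
            for k in range(i, j + 1):    # try each key as the root
                q = C[i][k - 1] + C[k + 1][j]
                if q < best:
                    best, kmin = q, k
            R[i][j] = kmin
            C[i][j] = best + sum(p[i:j + 1])
    return C[1][n], R

avg, R = optimal_bst([0, 0.1, 0.2, 0.4, 0.3])
print(avg, R[1][4])   # ~1.7 comparisons on average; root index 3 (key C)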

1.4 Knapsack Problem and Memory Functions


Knapsack is one of the very important problem considered in the design and analysis
of algorithm, as the solution to this problem is obtained using various design methods.
In this section, we consider a specific knapsack problem, that is 0/1-Knapsack. First;
let us see what the knapsack problems, then we see its variant 0/1-Knapsack function.

Knapsack Problem: Given n items of known weights w1 , w2 , . . . , wn , the respective


Algorithm 1.5: OptimalBST


begin
for i ← 1 to n do
    C[i, i − 1] ← 0;
    C[i, i] ← P[i];
    R[i, i] ← i;
end
C[n + 1, n] ← 0;
for d ← 1 to n − 1 do
    for i ← 1 to n − d do
        j ← i + d;
        min ← ∞;
        for k ← i to j do
            if C[i, k − 1] + C[k + 1, j] < min then
                min ← C[i, k − 1] + C[k + 1, j];
                kmin ← k;
            end
        end
        R[i, j] ← kmin;
        sum ← P[i];
        for s ← i + 1 to j do sum ← sum + P[s];
        C[i, j] ← min + sum;
    end
end
Return C[1, n], R;
end

Knapsack Problem: Given n items of known weights w1, w2, . . . , wn, respective item
values v1, v2, . . . , vn, and a knapsack of capacity W, find the most valuable subset of
the items (possibly taking a portion of an item) that fits into the knapsack.

The 0/1-knapsack variant never permits the inclusion of a portion of an item into
the knapsack: either an item is inserted fully into the knapsack, or not at all. We
consider here only the 0/1-knapsack. In the previous sections on dynamic
programming we have seen the very nice feature that all the problems are somehow
put in the form of recurrence equations, wherein the solution of an instance is
derived from the solutions of its smaller instances.

Let V[i, j] be the value of an optimal solution to the instance defined by the first i
items, with weights w1, w2, . . . , wi, values v1, v2, . . . , vi, and knapsack capacity j.
Note that prior to considering the inclusion of the ith item, the problem instance with
the first i − 1 items has been considered, and we have the optimal solutions for the
sub-instances consisting of the first i − 1 items. Therefore, we can now view the
problem from two angles.

• When the ith item is not included in the knapsack, the value of an optimal
subset is simply V[i − 1, j].

• On the other hand, if the ith item is included, then j − wi ≥ 0, and an optimal
subset is made up of this item and an optimal subset of the first i − 1 items
that fits into the knapsack capacity j − wi. Therefore, the value of such an
optimal subset is vi + V[i − 1, j − wi].

Based on this, the recurrence equation is defined as

V[i, j] = max{V[i − 1, j], vi + V[i − 1, j − wi]}   if j − wi ≥ 0
V[i, j] = V[i − 1, j]                               otherwise

with the initial conditions

V[0, j] = 0 for j ≥ 0   and   V[i, 0] = 0 for i ≥ 0

Now, the problem is to find V [n, W ], the maximal value of a subset of the n given
items that fit into the knapsack capacity W .


An Example

Consider the following problem instance with the knapsack capacity 5.

item   weight   value (in Rs.)
1      2        12
2      1        10
3      3        20
4      2        15

Based on the above formula, the dynamic programming table is filled as,

capacity j
i 0 1 2 3 4 5
0 0 0 0 0 0 0
w1 =2, v1 = 12 1 0 0 12 12 12 12
w2 =1, v2 = 10 2 0 10 12 22 22 22
w3 =3, v3 = 20 3 0 10 12 22 30 32
w4 =2, v4 = 15 4 0 10 15 25 30 37

This table is filled based on the formula derived above. But this is not enough, as we
should also know which items are to be included in the knapsack in order to maximise
the value (profit).

Here V[4, 5] = 37 (so the maximum profit earned is 37) and V[4, 5] ≠ V[3, 5], so
item 4 is included in the optimal solution. Once item 4 is included, the remaining
capacity of the knapsack is 3, so we next consider V[3, 3]. As V[3, 3] = V[2, 3],
item 3 is not a part of an optimal subset. Now V[2, 3] ≠ V[1, 3], so item 2 is included
in the optimal solution, which leaves a remaining knapsack capacity of 2. Finally
V[1, 2] ≠ V[0, 2], and so item 1 is also included in the optimal solution.
Therefore, items 1, 2 and 4 form the optimal solution.
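
The following Python sketch fills the table bottom-up and then performs the trace-back just described; it is one possible rendering of the approach (the formal algorithm itself is left as an exercise below).

def knapsack(w, v, W):
    n = len(w)
    V = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            if j >= w[i - 1]:      # item i fits: take the better choice
                V[i][j] = max(V[i - 1][j],
                              v[i - 1] + V[i - 1][j - w[i - 1]])
            else:                  # item i does not fit
                V[i][j] = V[i - 1][j]
    # trace back which items were included
    items, j = [], W
    for i in range(n, 0, -1):
        if V[i][j] != V[i - 1][j]:
            items.append(i)
            j -= w[i - 1]
    return V[n][W], sorted(items)

print(knapsack([2, 1, 3, 2], [12, 10, 20, 15], 5))   # (37, [1, 2, 4])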


Analysis

We are not providing a formal algorithm for this approach, as it is simple and
similar to the other algorithms described in this chapter; writing it is left as an
exercise to the students. There are two parts in the algorithm: one is the construction
of the table and the other is obtaining the optimal solution. From a straightforward
view, one can easily see that the time complexity of the first part is O(nW) and of the
second part O(n + W), where n is the number of items and W is the knapsack
capacity. Therefore the overall time complexity is dominated by the first part, and
that is O(nW).

Disadvantage

Recollect the approach we have followed throughout this chapter for solving problems
by the dynamic programming method: derive a recurrence equation and then
construct a table, wherein the solution of an instance is derived from the solutions of
its sub-instances. Though this approach seems good, it requires determining the
values of all the entries of the table, and not all of these values are actually utilized.
Why is this? Basically, we view the problem bottom-up, and one cannot predict which
entries will be required at the top. To avoid this unnecessary computation, it is better
to go top-down, and that is what we call the memory function approach.

1.4.1 Memory Function Approach

Here the solution to the problem is determined in a top-down manner but, in addition,
a table of the kind that would have been used by the bottom-up dynamic programming
approach is maintained. Initially, all the entries of the table are marked null. Then,
whenever a new value is needed, the method checks the corresponding entry in the
table; only if it is null is the value calculated by recursive calls, and the result recorded
in the table. If the corresponding table entry is not null, the value is simply retrieved.
This way, one can save time by avoiding unnecessary recomputation.


The following table shows the values computed for the problem given in the example
above, using the memory function approach (null marks entries that are never
computed).

capacity j
                     i    0    1     2     3     4     5
                     0    0    0     0     0     0     0
w1 = 2, v1 = 12      1    0    0    12    12    12    12
w2 = 1, v2 = 10      2    0   null  12    22   null   22
w3 = 3, v3 = 20      3    0   null null   22   null   32
w4 = 2, v4 = 15      4    0   null null  null  null   37

Algorithm

The formal description of the memory function based knapsack algorithm is given in
algorithm 1.6. This algorithm determines only the optimal profit, and not the solution
set; the latter is left as an exercise to the students.

In this algorithm, i is the number of first items being considered and j indicates the
knapsack capacity. The weights are stored in the array W and the values in Val. It is
also assumed that the table V[0 · · · n, 0 · · · W] is initialized with −1's, except for row 0
and column 0, which are initialized with 0's.

Analysis

It is important to note here that the memory function based knapsack solution
improves the running time only by a constant factor, and not in the asymptotic sense.

Have you understood?


1. What is the principle of optimality?
2. What is a transitive closure?
3. How does dynamic programming differ from the greedy method?


Algorithm 1.6: MFKnapsack(i,j)


begin
if V[i, j] < 0 then
    if j < W[i] then value ← MFKnapsack(i − 1, j);
    else
        value ← max{MFKnapsack(i − 1, j), Val[i] + MFKnapsack(i − 1, j − W[i])};
    end
    V[i, j] ← value;
end
Return V[i, j];
end
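
A Python sketch of this memory function approach is given below, wrapped in a small driver that builds the table with −1 entries for "not yet computed"; the wrapper name is ours.

def mf_knapsack_solver(w, v, W):
    n = len(w)
    V = [[-1] * (W + 1) for _ in range(n + 1)]
    for j in range(W + 1):
        V[0][j] = 0                # row 0: no items considered
    for i in range(n + 1):
        V[i][0] = 0                # column 0: zero capacity

    def mf(i, j):
        if V[i][j] < 0:            # compute only when still unknown
            if j < w[i - 1]:
                V[i][j] = mf(i - 1, j)
            else:
                V[i][j] = max(mf(i - 1, j),
                              v[i - 1] + mf(i - 1, j - w[i - 1]))
        return V[i][j]

    return mf(n, W)

print(mf_knapsack_solver([2, 1, 3, 2], [12, 10, 20, 15], 5))   # 37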

Exercises
1. What does dynamic programming have in common with divide-and-conquer?
2. Compute C(6, 3) by applying the dynamic programming algorithm?
3. What is the space efficiency of the dynamic programming algorithm for com-
puting C(n, k)?
4. Explain how to implement Warshall’s algorithm without using extra memory
for storing elements of the algorithm’s intermediate matrices.
5. Give an example of a graph or a digraph with negative weights for which Floyd’s
algorithm does not yield the correct result.
6. How would you construct an optimal binary search tree for a set of n keys if all
the keys are equally likely to be searched for? What will be the average number
of comparisons in the tree if n = 2^k?
7. Design a Θ(n2 ) algorithm for finding an optimal binary search tree.
8. Give two reasons why the memory function approach is unattractive for the
problem of computing a binomial coefficient.
9. Prove that the efficiency class of the memory function algorithm for the knap-
sack problem is the same as that of the bottom-up algorithm.


Summary
Dynamic programming is a technique for solving problems with overlapping subprob-
lems. Typically, these subproblems arise from a recurrence relating a solution to a
given problem with solutions to smaller subproblems of the same type. Moreover,
the dynamic programming method satisfies the principle of optimality, i.e. an
optimal solution to any of its instances must be made up of optimal solutions to its
subinstances.


Unit IV

Backtracking
and
Branch & Bound


Structure of Unit IV
1 Backtracking

1.1 N-Queens Problem

1.2 Hamiltonian Circuit Problem

1.3 Subset-Sum Problem

1.3.1 A General Algorithm

2 Branch & Bound

2.1 Assignment Problem

2.2 Knapsack Problem

2.3 Traveling Salesman Problem

2.4 A General Algorithm


Learning Objectives
• General concept of backtracking and branch & bound techniques.

• Addressing solution to various problems using these methods.


1. Backtracking
The two techniques which we study in this unit often make it possible to solve
at least some large instances of difficult combinatorial problems. Let us first see the
details of the working of the backtracking approach. How this method differs
conceptually from branch and bound will be seen when we deal with branch and
bound; at this moment, to avoid confusion, we focus only on backtracking.

As mentioned before, the problems dealt with by these methods are complex
problems, for which a polynomial time solution probably does not exist. One way
of finding a (non-polynomial time) solution to these problems is to conduct an
exhaustive search, that is, to explore all possible combinations and find the one
which is a solution. For large problem instances this is impractical, as the number
of combinations grows exponentially.

Backtracking is a more intelligent variation of this approach. Here, a state-space tree
is constructed, in the depth first manner. The principal idea is to construct solutions
one component at a time and to evaluate such partially constructed candidates as
follows:

If a partially constructed solution can be developed further without violating the
problem's constraints, it is done by taking the first remaining legitimate option for the
next component. If there is no such option, and no other candidates remain for the
component, the algorithm backtracks to replace the last component of the partially
constructed solution with its next option.

What we wish to say here is that the state-space tree is constructed in the depth
first manner; when at a particular node it is found that exploring the node further
cannot lead to a solution (as it would violate the constraints), the algorithm
backtracks to the previous node and continues with the next depth first candidate.
Note that for a given problem there may be several solutions, but the algorithm
stops as soon as it finds one. If we wish to find all the existing solutions, it is
required to continue the algorithm from the node where the last solution was found.

In a state-space tree, the root represents the initial state, the nodes in the first level represent the choices made for the first component of a solution, and, in general, the nodes in the ith level represent the choices made for the ith component. A node that corresponds to a partially constructed solution that may still lead to a complete solution is said to be a promising node; otherwise it is called a nonpromising node.

Here, the concept of backtracking is described through a few examples, which by itself is not sufficient. As this is the course material, we follow the treatment of the prescribed textbook; however, students are advised to consult the book “Fundamentals of Computer Algorithms” by Horowitz and Sahni to learn how to develop backtracking algorithms for the problems we discuss here.

Let us now see some problems that can be dealt with by the backtracking method.

1.1 N-Queens Problem


Problem Statement: Place N queens on an N × N chess board so that no two
queens attack each other by being in the same row or in the same column or on the
same diagonal.

It is known that the problem is trivially solvable for N = 1, and it is easy to see that there is no solution for N = 2 and 3. So, for the purpose of illustration, let us take N = 4. In this problem, three attacking situations must be avoided: placing two queens in the same row, in the same column, or on the same diagonal. The first constraint is eliminated directly by stipulating that the ith queen is placed in the ith row, so that no two queens occupy the same row. It remains to handle the other two constraints. Assume that the rows and columns are numbered 1 to N (here, 4). The solution vector is represented by (x(1), · · · , x(4)), where x(i) denotes the column position of the ith queen; initially the vector is (0,0,0,0).


[Figure 1.1: State-space tree for the 4-Queens problem. The root is (0,0,0,0); a node at level i fixes the column of queen i, so the nodes are partial solution vectors such as (1,0,0,0), (1,3,0,0) and (2,4,1,0), expanded in depth-first order until the solution vector (2,4,1,3) is reached.]

Now let us trace the state-space tree shown in figure 1.1. First, queen 1 is placed in the first possible position, (1,1) (first row, first column). Then queen 2, after columns 1 and 2 are tried unsuccessfully, is placed at (2,3). This proves to be a dead end because there is no acceptable position for queen 3, so the algorithm backtracks and puts queen 2 at (2,4). Then queen 3 is placed at (3,2), which turns out to be another dead end, as queen 4 cannot be placed. The algorithm then backtracks all the way to queen 1 and moves it to (1,2); queen 2 then goes to (2,4), queen 3 to (3,1), and queen 4 to (4,3), which is a solution to the problem. So, the final solution vector is (2,4,1,3).

Now, let us see how this backtracking method works for another problem called
Hamiltonian circuit.


1.2 Hamiltonian Circuit Problem


The Hamiltonian circuit is yet another problem that is well suited for explaining how the backtracking approach works. Note that not all graphs contain a Hamiltonian circuit. For illustration, let us consider the graph shown in figure 1.2, which does contain one.

[Figure 1.2: A graph on six vertices, numbered 1 to 6, used for demonstrating the backtracking approach for the Hamiltonian circuit problem.]

Like the previous problem, here too we consider the solution vector in the form (x(1), · · · , x(n)), where n is the number of vertices of the graph and x(i) represents the ith vertex on the Hamiltonian circuit. If we now detect the Hamiltonian circuit of the graph in figure 1.2 by backtracking, the solution vector starts as (1,0,0,0,0,0,0), assuming that the circuit begins at vertex 1. Proceeding in depth-first order, after 6 iterations we get the vector (1,2,3,4,5,6,0), which is a dead end, as a Hamiltonian circuit is supposed to end at the start vertex. So the algorithm backtracks to the position (1,2,3,0,0,0,0) and then proceeds towards (1,2,3,5,4,0,0), which is also a dead end. It next backtracks to the position (1,2,3,5,0,0,0) and proceeds to (1,2,3,5,6,0,0); again it reaches a dead end. It then backtracks to the position where the partial solution vector is (1,2,0,0,0,0,0), which finally leads to the vector (1,2,6,5,3,4,1), the solution vector.

We have not shown the state-space tree here; interested readers can construct it themselves. A tree of this kind was shown for the previous problem for the purpose of understanding; algorithmically, however, we deal only with the solution vector.
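
As a companion to this trace, the following is a Python sketch of the same depth-first search. The adjacency structure g below is a hypothetical reconstruction, chosen only so that it is consistent with the dead ends and the solution vector described above:

    def hamiltonian_circuit(adj, start=1):
        """Return one Hamiltonian circuit as a vertex list ending at start, or None.
        adj maps each vertex to the set of its neighbours."""
        n = len(adj)
        path = [start]

        def extend():
            if len(path) == n:                    # all vertices on the path:
                return start in adj[path[-1]]     # does the circuit close?
            for v in sorted(adj[path[-1]]):       # try neighbours in a fixed order
                if v not in path:
                    path.append(v)
                    if extend():
                        return True
                    path.pop()                    # backtrack
            return False

        return path + [start] if extend() else None

    # Hypothetical six-vertex graph consistent with the trace above:
    g = {1: {2, 3, 4}, 2: {1, 3, 6}, 3: {1, 2, 4, 5},
         4: {1, 3, 5}, 5: {3, 4, 6}, 6: {2, 5}}
    print(hamiltonian_circuit(g))                 # [1, 2, 6, 5, 3, 4, 1]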

For more clarity, before moving to a general description of the backtracking algorithm, let us see one more example.


1.3 Subset-Sum Problem


Here, we have to find a subset of a given set S = {s1 , . . . , sn } of n positive integers whose sum is equal to a given positive integer d. For example, for S = {1, 2, 5, 6, 8} and d = 9, there are two solutions, viz. {1, 2, 6} and {1, 8}. Note also that some instances have no solution at all.

Before proceeding to the algorithm, we can sort the elements of the set in increasing order. This is not mandatory, but it simplifies the constraints. Let s′ be the sum of the numbers already included at a node. If this sum is equal to d, we have a solution to the problem. Otherwise, the node can be terminated as nonpromising if either of the following two constraints holds:

s′ + si+1 > d (adding even the smallest remaining element makes the sum too large), or

s′ + (si+1 + si+2 + · · · + sn) < d (adding all the remaining elements still leaves the sum too small).

Consider the instance S = {3, 5, 6, 7} and d = 15. Taking one value at a time initially leads to the partial solution vector (3,5,6), which is a dead end, as it satisfies the first constraint (14 + 7 > 15). The algorithm therefore backtracks to the previous position, which then leads to the vector (3,5,7), the solution.
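
The two pruning constraints translate directly into code. Below is a minimal Python sketch (the names are our own), assuming the input is sorted in increasing order:

    def subset_sum(s, d):
        """Return the first subset of s summing to d, or None."""
        s = sorted(s)                  # increasing order enables the first test
        chosen = []

        def explore(i, acc, remaining):
            if acc == d:
                return True
            # Nonpromising: the next element overshoots d, or even taking
            # every remaining element falls short of d.
            if i == len(s) or acc + s[i] > d or acc + remaining < d:
                return False
            chosen.append(s[i])                        # include s[i]
            if explore(i + 1, acc + s[i], remaining - s[i]):
                return True
            chosen.pop()                               # backtrack: exclude s[i]
            return explore(i + 1, acc, remaining - s[i])

        return chosen if explore(0, 0, sum(s)) else None

    print(subset_sum({3, 5, 6, 7}, 15))    # [3, 5, 7]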

1.3.1 A General Algorithm

We have seen three examples illustrating how the backtracking approach works. In all three, we described the solution set in the form of a vector. As mentioned before, it is worthwhile to design an algorithm for each of these problems, and students are advised to do so. The three examples used in this chapter should, however, have given a clear understanding of what backtracking is.

In this section, we see a more general algorithm (algorithm 1.1) that describes the behavior of the backtracking method.


Algorithm 1.1: Backtrack(n)

begin
    k ← 1;
    while k > 0 do
        if ∃ an untried X[k] such that X[k] is consistent with X[1, . . . , k − 1] and satisfies the constraints then
            if X[1, . . . , k] is a path to an answer node then
                Print X[1, . . . , k];
            end
            k ← k + 1;
        else
            k ← k − 1;
        end
    end
end

Analysis

The efficiency of the backtracking algorithm depends largely on four factors:

• The time to generate the next X[k],

• the number of X[k] satisfying the explicit constraints,

• the time it takes to verify the constraints, and

• the number of nodes generated in the state-space tree (logically).

These four factors determine the time required by a backtracking algorithm. Once a state-space tree organization is selected, the first three are relatively independent of the problem instance being solved; only the fourth, the number of nodes generated, varies from one problem instance to another. A backtracking algorithm on one problem instance might generate only O(n) nodes, while on a different instance it might generate almost all the nodes of the state-space tree, whose size may be of the order of 2^n or n!. So, the worst-case running time


for a backtracking algorithm is either O(p(n)·2^n) or O(q(n)·n!), where p(n) and q(n) are polynomials in n (the size of the input).


2. Branch & Bound


Like backtracking, branch and bound makes it possible to solve at least some large instances of problems that are difficult to solve in polynomial time. In branch and bound, too, we generate a state-space tree whose nodes reflect the specific choices made for a solution's components, and here, too, the method terminates a node as soon as it can be guaranteed that no solution to the problem can be obtained by considering choices that correspond to the node's descendants. The two techniques differ in the nature of the problems they are applied to: branch and bound is applicable only to optimization problems, because it is based on computing a bound on possible values of the problem's objective function, while backtracking is not constrained in this way and is often applied to non-optimization problems. Another distinction lies in the order in which nodes of the state-space tree are generated: in backtracking, depth-first order is used, while in branch and bound several rules are possible, the most natural one being best-first.

In the state-space tree concept described in the previous section, a branch is cut off as soon as we can deduce that it cannot lead to a solution. This idea can be strengthened further for optimization problems, where one seeks to minimize or maximize an objective function, usually subject to some constraints. A feasible solution is one that satisfies all the problem's constraints, while an optimal solution is a feasible solution with the best value of the objective function.

Here, too, let us see the branch and bound solutions to three standard problems for a clear understanding of how this approach works. For all three problems we use the best-first approach for exploring the tree, with the choice made on the basis of a bound value. Thus, the most difficult part of designing a branch and bound algorithm for a particular problem is selecting a good bound.


2.1 Assignment Problem


The assignment problem deals with assigning n people to n jobs so that the total cost
of the assignment is as small as possible. The instance of this problem is specified by
the cost matrix of size n × n. Let us consider the following cost matrix
              job 1   job 2   job 3   job 4
  person a      9       2       7       8
  person b      6       4       3       7
  person c      5       8       1       8
  person d      7       6       9       4
Now, the problem is to select one element in each row so that no two selected elements are in the same column and their sum is the smallest possible. The task is to find a lower bound on the cost of an optimal selection without actually solving the problem. The only constraint in choosing the lower bound value is that the cost of any solution, including the optimal one, cannot be smaller than this value. Such a bound can be obtained in several ways; here we take the sum of the smallest values in each of the rows, which for the given problem is 2 + 3 + 1 + 4 = 10. Note that this is not the cost of any legitimate selection (3 and 1 come from the same column of the matrix); it serves only as a lower bound. Now consider figure 2.1. We start with the root, which corresponds to no elements selected from the cost matrix; as discussed, its lower bound (lb) value is 10. The nodes on the first level of the tree correspond to the four jobs in the first row of the matrix, as these are the potential selections for the first component of the solution. Any of these four nodes may contain an optimal solution; the most promising one corresponds to job 2, as it provides the lowest bound value. Since our strategy is best-first, we select this node for further exploration, which produces three more nodes at level 2. At this moment there are six leaves, including three on level one, of which the leaf corresponding to (a → 2, b → 1) is the most promising to explore further. Now, selecting the third column's element from c's row leaves us no choice but to select the element from the fourth column of d's row. This yields a feasible solution at the node corresponding to (a → 2, b → 1, c → 3, d → 4) with a total cost of 13, and its sibling corresponding to (a → 2, b → 1, c → 4, d → 3) with a total cost of 25.


[Figure 2.1: State-space tree for the instance of the assignment problem. Root: start, lb = 10. Level 1: a→1 (lb = 17, pruned), a→2 (lb = 10), a→3 (lb = 20, pruned), a→4 (lb = 18, pruned). Under a→2: b→1 (lb = 13), b→3 (lb = 14, pruned), b→4 (lb = 17, pruned). Under b→1: (c→3, d→4) with cost C = 13 — solution; (c→4, d→3) with cost C = 25 — inferior solution.]

Since this cost is higher than that of its sibling, this node is terminated from further exploration. At this point, the best solution seen so far is the one with cost 13, and since the lb values of all other live nodes are not smaller than this cost, we terminate them as well and stop the algorithm.
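
To make the procedure concrete, here is an illustrative best-first branch and bound sketch in Python, using heapq to hold the live nodes and computing the lower bound exactly as described above (fixed costs plus each remaining row's minimum over unused columns); all names are our own:

    import heapq

    def assignment_bb(cost):
        """Best-first branch and bound for the assignment problem.
        cost[i][j] = cost of giving person i job j. Returns (total cost, jobs)."""
        n = len(cost)

        def lb(partial):
            # Fixed assignments plus, for every remaining person, the smallest
            # cost among the still-unused columns (ignoring column clashes).
            used = set(partial)
            s = sum(cost[i][j] for i, j in enumerate(partial))
            for i in range(len(partial), n):
                s += min(c for j, c in enumerate(cost[i]) if j not in used)
            return s

        heap = [(lb(()), ())]                 # live nodes keyed by lower bound
        while heap:
            bound, partial = heapq.heappop(heap)
            if len(partial) == n:             # first complete node popped is
                return bound, list(partial)   # optimal: lb never overestimates
            for j in range(n):
                if j not in partial:
                    child = partial + (j,)
                    heapq.heappush(heap, (lb(child), child))

    matrix = [[9, 2, 7, 8],
              [6, 4, 3, 7],
              [5, 8, 1, 8],
              [7, 6, 9, 4]]
    print(assignment_bb(matrix))   # (13, [1, 0, 2, 3]): a→2, b→1, c→3, d→4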

2.2 Knapsack Problem


Let us now discuss the branch and bound technique for the knapsack problem. Recall the problem definition: given n items of known weights wi and values vi , and a knapsack of capacity W , find the most valuable subset of the items that fits in the knapsack. To reduce the number of nodes of the state-space tree, let us arrange the items of the given instance in descending order of their value-to-weight ratios. That is,

v1 /w1 ≥ v2 /w2 ≥ · · · ≥ vn /wn .

Once this is done, the first item gives the best payoff per weight unit and the last one gives the worst, with ties resolved arbitrarily.


[Figure 2.2: State-space tree for the instance of the knapsack problem. Root: w = 0, v = 0, ub = 100. With item 1: w = 4, v = 40, ub = 76; without item 1: ub = 60 (inferior). With item 2: w = 11 (infeasible); without item 2: w = 4, v = 40, ub = 70. With item 3: w = 9, v = 65, ub = 69; without item 3: w = 4, v = 40, ub = 64 (inferior). With item 4: w = 12 (infeasible); without item 4: w = 9, v = 65, ub = 65 — the solution.]

For illustration, let us consider the following problem instance, wherein the knapsack capacity is 10.

    item   weight   value   value/weight
     1       4        40        10
     2       7        42         6
     3       5        25         5
     4       3        12         4

We structure the state-space tree for this problem as a binary tree constructed as follows (figure 2.2). Each node on the ith level represents all the subsets of the n items that include a particular selection made from the first i ordered items. Of the two branches leaving each node, the left one represents inclusion of the next item and the right one its exclusion. For instance, at the first level, the left child of the root corresponds to the inclusion of item 1 and the right child to its exclusion.


Now the next task is to determine the bound value. As the problem here is to
maximize the profit, we have to define the upper bound. It is computed by adding
v (the total value of the items already selected) and the product of the remaining
capacity of the knapsack W − w and the best per unit payoff among the remaining
items, which is vi+1 /wi+1 . Therefore the upper bound ub is

ub = v + (W − w)(vi+1 /wi+1 ).

Now consider the state-space tree given in figure 2.2. The root node represents the state in which no items have been selected yet, so v = 0 and w = 0, and the upper bound is ub = 0 + (10 − 0) ∗ 10 = 100. The left child of the root is for the subsets that include item 1; here the total weight and value of the items already included are 4 and 40 respectively, and the upper bound is 40 + (10 − 4) ∗ 6 = 76. The right child is for the subsets that do not include item 1; accordingly, w = 0, v = 0 and ub = 0 + (10 − 0) ∗ 6 = 60. As the left child's bound is larger than the right child's (and these are the only two leaves at present), we select the left child for further expansion. Proceeding in this way, the node corresponding to the vector (1,0,1,0) provides the solution (1 represents the presence of an item and 0 its absence); that is, items 1 and 3 are selected. The nodes corresponding to the vectors (1,0,0,–) and (0,–,–,–) are not considered for further expansion, as their bounds are inferior to the solution already obtained, while the nodes corresponding to (1,1,–,–) and (1,0,1,1) are infeasible, as they exceed the knapsack capacity. Therefore, the maximum profit that can be earned for the given problem instance is 65.
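
For illustration, here is a sketch of the best-first branch and bound with the upper bound ub = v + (W − w)(vi+1 /wi+1 ); the names and structure are our own, and the items are assumed to be pre-sorted by value-to-weight ratio:

    import heapq

    def knapsack_bb(items, W):
        """Best-first branch and bound for the 0/1 knapsack. items holds
        (value, weight) pairs pre-sorted by value-to-weight ratio, descending."""
        n = len(items)

        def ub(i, v, w):
            # v plus spare capacity times the best remaining per-unit payoff.
            if i < n and w < W:
                return v + (W - w) * items[i][0] / items[i][1]
            return v

        best_v, best_set = 0, []
        heap = [(-ub(0, 0, 0), 0, 0, 0, [])]      # max-heap via negated bound
        while heap:
            neg_bound, i, v, w, taken = heapq.heappop(heap)
            if -neg_bound <= best_v:              # cannot beat best solution seen
                continue
            if i == n:                            # complete selection
                best_v, best_set = v, taken
                continue
            value, weight = items[i]
            if w + weight <= W:                   # branch: include item i
                heapq.heappush(heap, (-ub(i + 1, v + value, w + weight),
                                      i + 1, v + value, w + weight, taken + [i]))
            # branch: exclude item i
            heapq.heappush(heap, (-ub(i + 1, v, w), i + 1, v, w, taken))
        return best_v, best_set

    items = [(40, 4), (42, 7), (25, 5), (12, 3)]  # ratios 10, 6, 5, 4
    print(knapsack_bb(items, 10))                 # (65, [0, 2]): items 1 and 3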

2.3 Traveling Salesman Problem


Let us now consider the traveling salesman problem and determine a solution based on the branch and bound technique. As with the other problems discussed in this chapter, it is required to find a reasonable lower bound. One simple way of finding such a bound is to choose the smallest element in the intercity distance matrix D and multiply it by the number of cities. But we can find a more informative lower bound with little computational effort. How? For each city i, 1 ≤ i ≤ n, find the sum si of the distances from city i to the two nearest cities; compute the sum s of these n numbers, divide it by 2, and, since the distances are integers, round the result up: lb = ⌈s/2⌉.


[Figure 2.3: A graph and the state-space tree for the instance of the traveling salesman problem. The graph is complete on the vertices a, b, c, d, e with edge weights a–b = 3, a–c = 1, a–d = 5, a–e = 8, b–c = 6, b–d = 7, b–e = 9, c–d = 4, c–e = 2, d–e = 3. In the state-space tree, the root a has lb = 14; its children are (a,b) with lb = 14, (a,c) pruned (b not before c), (a,d) with lb = 16, and (a,e) with lb = 19, the latter two pruned once a better tour is known. Expanding (a,b) gives (a,b,c) with lb = 16, (a,b,d) with lb = 16, and (a,b,e) with lb = 19 (pruned). The four complete tours are a,b,c,d,(e,a) of length 24 (first tour), a,b,c,e,(d,a) of length 19 (better tour), a,b,d,c,(e,a) of length 24 (inferior tour), and a,b,d,e,(c,a) of length 16 (the optimal tour).]


Now, for the problem instance considered in this section (figure 2.3), the lower bound is

lb = ⌈[(1 + 3) + (3 + 6) + (1 + 2) + (3 + 4) + (2 + 3)]/2⌉ = 14.

Each time we add an edge to the tour, the lower bound must be modified accordingly. For instance, if we require the edge (a,d) to be included, we get the lower bound

⌈[(1 + 5) + (3 + 6) + (1 + 2) + (3 + 5) + (2 + 3)]/2⌉ = 16.


It is obtained by summing the lengths of the two shortest edges incident with each
of the vertices, with the required inclusion of edges (a,d) and (d,a).
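
As a sketch, this bound can be computed as follows in Python (the helper name and the representation of committed edges are our own choices; the distance matrix encodes the graph of figure 2.3 as reconstructed above):

    import math

    def tsp_lower_bound(dist, required=()):
        """lb = ceil(s/2), where s sums, for every city, its two shortest incident
        edges -- except that edges already committed to the tour must be counted.
        dist: symmetric distance matrix; required: pairs (i, j) of committed edges."""
        n = len(dist)
        s = 0
        for i in range(n):
            # Endpoints forced at city i by the committed edges.
            ends = [v if i == u else u for (u, v) in required if i in (u, v)]
            forced = [dist[i][j] for j in ends]
            free = sorted(dist[i][j] for j in range(n) if j != i and j not in ends)
            s += sum((forced + free)[:2])     # two cheapest, forced ones first
        return math.ceil(s / 2)

    # Distance matrix of figure 2.3, vertices a..e as indices 0..4:
    D = [[0, 3, 1, 5, 8],
         [3, 0, 6, 7, 9],
         [1, 6, 0, 4, 2],
         [5, 7, 4, 0, 3],
         [8, 9, 2, 3, 0]]
    print(tsp_lower_bound(D))            # 14
    print(tsp_lower_bound(D, [(0, 3)]))  # 16, with the edge (a, d) required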

To apply the branch and bound technique for the problem instance given here, the
following constraints are assumed.

• The tour always starts at a, and

• generate only the tours in which b is visited before c.

Little more explanation should be required here, as figure 2.3 illustrates how the solution is arrived at. Note that at the last level of the tree the tour has no choice but to visit the remaining unvisited city and return to the starting one. There are four nodes at this level, all of them giving feasible solutions: the first gives a tour of length 24; the second gives 19, which is better than the first; the third, of length 24, is inferior to the previous best; but the fourth, of length 16, is better than all the other feasible solutions found and is hence the optimal tour. Also, the node corresponding to the tour (a,c) is not considered for further expansion, as it does not satisfy the second constraint.

2.4 A General Algorithm


Though there are several ways a state-space tree can be generated, in this chapter we have considered the best-first method: when a node is selected as the best node, all its children are generated, and the best node is chosen from the list of live nodes, that is, the leaf nodes that are still possible candidates to lead to a solution. The algorithm thus depends on the selection of the best node, for which we use the heap data structure. It may be either a min-heap or a max-heap, depending on whether the optimization is a minimization or a maximization. Note that inserting a node into, or deleting the best node from, a heap of n nodes takes O(log n) time. The heap is an additional data structure maintained apart from the state-space tree.


Now, let us see the algorithm (algorithm 2.1) for the best-first branch and bound
technique.

Algorithm 2.1: BFBB(T)

begin
    if T is an answer node then
        Output T;
        return;
    end
    E ← T;
    heap ← empty;
    while true do
        for each child X of E do
            Add X to the state-space tree;
            if X is a live node then Add X to the heap;
        end
        Let X′ be a child of E providing the best bound value;
        if X′ is an answer node then
            Output the path from X′ to T;
            return;
        end
        if heap is empty then
            Print “no answer node”;
            exit;
        end
        E ← Delete(heap);
    end
end

Analysis

As with backtracking, the complexity here depends on the number of nodes generated in the state-space tree, which may be of the order of 2^n or n!. Therefore the complexity of a branch


and bound algorithm is either O(p(n)·2^n) or O(q(n)·n!), where p(n) and q(n) are polynomials in n (the size of the input).

Have you understood?


1. How does backtracking differ from branch & bound?
2. What do you mean by a state-space tree?
3. Why do we prefer backtracking or branch & bound, even though the problems they deal with are hard?

Exercises
1. Explain how the board’s symmetry can be used to find the second solution to
the four queens problem?
2. Implement the backtracking algorithm for the n-queens problem in the language
of your choice.
3. Generate all permutations of {1, 2, 3, 4} by backtracking.
4. What data structure would you use to keep track of live nodes in a best-first
branch-and-bound algorithm?
5. Write a program to solve the knapsack problem with the branch and bound
algorithm.
6. Give an example of the best-case input for the branch-and-bound algorithm for
the assignment problem.

Summary
Backtracking and branch-and-bound are two algorithm design techniques for solving problems in which the number of choices grows at least exponentially with the instance size. Both methods employ, as their principal mechanism, a state-space tree: a rooted tree whose nodes represent partially constructed solutions to the problem in question. The basic difference is that in backtracking the state-space tree is constructed in a depth-first fashion, whereas branch-and-bound generates the


state-space tree with the idea of estimating the best value obtainable from the current node of the tree: if that estimate is not superior to the best solution seen up to that point in the processing, the node is eliminated from further consideration.


Unit V

NP-Hard and NP-Complete


Problems


Structure of Unit V
1 NP-Complete Problems

1.1 P and NP Classes

1.2 Approximation Algorithms

1.2.1 Traveling Salesman Problem

1.2.2 Knapsack Problem


Learning Objectives
• P and NP Classes.

• Finding approximate solutions to hard problems.


1. NP-Complete Problems
1.1 P and NP Classes
This section deals with the theory of computational complexity. So far, we have seen several algorithms for various problems: some of them run in polynomial time, and others require exponential time. So, one may now be interested in knowing whether a given problem can be solved in polynomial time by some algorithm.

The problems that can be solved in polynomial time are called tractable, and the others intractable. That is, arbitrary instances of intractable problems cannot be solved in a reasonable amount of time unless the instances are very small.

We have seen several problems that have polynomial-time algorithms, a class that computer science theoreticians call P. A more formal definition includes in P only decision problems, i.e., problems with yes/no answers.

That is, P is the class of decision problems that can be solved in polynomial time by deterministic algorithms.

The restriction of P to decision problems can be justified by the following reason: many important problems that are not decision problems in their most natural formulation can be reduced to a series of decision problems that are easier to study. For instance, in the graph coloring problem, rather than asking for the smallest number of colors needed to color the vertices of a graph so that no two adjacent vertices get the same color, we can ask whether there exists such a coloring with a given number of colors.

It is natural to ask whether every decision problem can be solved in polynomial time. The answer is no: there are decision problems that cannot be solved by any algorithm at all. Such problems are called undecidable. A famous example of an

undecidable problem is the halting problem. What is it? Given a computer program
and an input to it, determine whether the program will halt on that input or continue
working indefinitely on it.

Let us now see a short proof of this fact by way of contradiction. Suppose A is an algorithm that solves the halting problem; that is, given a program P and an input I,

A(P, I) = 1 if P halts on input I, and A(P, I) = 0 otherwise.

Now consider giving the program P as an input to itself, and use the output of A to construct another program Q as follows:

Q(P) halts if A(P, P) = 0, and Q(P) does not halt if A(P, P) = 1.

Substituting Q for P, we obtain

Q(Q) halts if A(Q, Q) = 0, and Q(Q) does not halt if A(Q, Q) = 1.

This is a contradiction, as neither of the two outcomes for the program Q is possible, which completes the proof.

We extend the study of complexity theory by posing a question: are all decidable problems tractable? The answer is believed to be no. There is a large number of important problems for which no polynomial-time algorithm has been found, yet the impossibility of such an algorithm has not been proved either. This class of problems is discussed in the monograph by Garey and Johnson; such problems are called NP-complete (we will see the formal definition later). The problems studied in this course that fall into this category are the Hamiltonian circuit, traveling salesman, and knapsack problems; other problems in this category include the partition problem, bin packing, graph coloring, and so on. Some of these problems are decision problems; those that are not have decision-version counterparts.

Another common feature of a vast majority of decision problems is the fact that while
solving such problems can be computationally difficult, checking whether a proposed

115 ANNA UNIVERSITY CHENNAI


DMC1653 DESIGN AND ANALYSIS OF ALGORITHMS

solution actually solves the problem is computationally easy. This general observation about decision problems has led to the notion of a nondeterministic algorithm.

A nondeterministic algorithm has two stages: a guessing stage and a verification stage. On taking a problem instance I as input, the guessing stage chooses an arbitrary string S, which can be thought of as a candidate solution to the instance I. In the verification stage, a deterministic algorithm takes both I and S as its input and outputs “yes” if S represents a solution to I; if S is not a solution, the algorithm either returns “no” or never halts. A nondeterministic algorithm is said to be nondeterministic polynomial if the time efficiency of its verification stage is polynomial.
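
For instance, checking a guessed Hamiltonian circuit can clearly be done in polynomial time. The sketch below (reusing the hypothetical six-vertex graph from Unit IV) illustrates such a deterministic verification stage:

    def verify_hamiltonian(adj, certificate):
        """Verification stage: check in polynomial time whether a guessed
        vertex sequence is a Hamiltonian circuit of the graph."""
        n = len(adj)
        if len(certificate) != n or set(certificate) != set(adj):
            return False                 # must visit every vertex exactly once
        # Every consecutive pair (including the wrap-around) must be an edge.
        return all(certificate[(k + 1) % n] in adj[certificate[k]]
                   for k in range(n))

    g = {1: {2, 3, 4}, 2: {1, 3, 6}, 3: {1, 2, 4, 5},
         4: {1, 3, 5}, 5: {3, 4, 6}, 6: {2, 5}}
    print(verify_hamiltonian(g, [1, 2, 6, 5, 3, 4]))   # True: checking is easy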

Now we can define the class NP.

NP is the class of decision problems that can be solved by nondeterministic polynomial algorithms.

Now we come to a more interesting point, owing to the following facts. Most decision problems are in NP, and the class NP includes all problems in P; that is, P ⊆ NP. How? If a problem is in P, we can use the deterministic polynomial-time algorithm that solves it as the verification stage of a nondeterministic algorithm that simply ignores the string S generated in its nondeterministic guessing stage. So there is no doubt that P ⊆ NP. But NP also contains problems such as Hamiltonian circuit, traveling salesman, knapsack, and graph coloring. This leads to the most important open question: is P a proper subset of NP, or is P = NP? Symbolically,

P ≟ NP

Now it is time to define NP-completeness. Before seeing its definition, we need the definition of problem reducibility.

A decision problem D1 is said to be polynomially reducible to a decision problem D2


if there exists a function f that transforms instances of D1 into instances of D2 such that

• f maps all yes instances of D1 to yes instances of D2 and all no instances of D1 to no instances of D2 , and

• f is computable by a polynomial-time algorithm.

If the problem D1 is polynomially reducible to the problem D2 and D2 is polynomially solvable, then so is D1 .

A decision problem D is said to be NP-complete if

• it belongs to the class NP, and

• every problem in NP is polynomially reducible to D.

Notice the nice feature that NP-completeness possesses: if any one NP-complete problem is ever solved by a deterministic polynomial-time algorithm, then so are all the NP-complete problems. So what would it take to prove P = NP? Just show that a single NP-complete problem is solvable by a deterministic polynomial-time algorithm. Remarkably, no one has managed to prove or disprove this, and it remains an open problem.

What is required at this point? We cannot simply throw out all the NP-complete problems, as most of them have practical importance. The alternative is to settle for approaches that seek to alleviate the intractability of such problems; these are addressed in the following section.

1.2 Approximation Algorithms


To illustrate the behavior of approximation algorithm, we consider the two stan-
dard problems that we have discussed before, which are traveling salesman and the


knapsack. If an instance of the problem is very small, we might be able to solve it by one of the methods discussed before; for instance, we have seen the dynamic programming approach for the knapsack problem. Though this approach works in principle, its practicality is limited by the requirement that the instance parameters be relatively small. The discovery of the branch and bound method was an important breakthrough, as this technique makes it possible to solve many large instances of difficult combinatorial optimization problems in an acceptable amount of time; however, such good performance cannot usually be guaranteed. Approximation is another way of dealing with these problems using a fast algorithm. This approach is appealing for applications in which a good but not necessarily optimal solution is sufficient.

Many approximation algorithms are greedy-based, using some problem-specific heuristic. What is a heuristic? A common-sense rule drawn from experience rather than from a mathematically proved assertion. The name itself indicates that the solution produced is an approximation of the actual optimal solution. How accurate is this approximation, and how do we quantify it? The accuracy of an approximate solution sa to a problem of minimizing some function f is measured by the relative error of the approximation,

re(sa) = (f(sa) − f(s*)) / f(s*),

where s* is an exact solution to the problem. Alternatively,

re(sa) = f(sa)/f(s*) − 1,

so we simply use the accuracy ratio

r(sa) = f(sa)/f(s*)

as a measure of the accuracy of sa. For a maximization problem, for the sake of uniformity, this is defined as

r(sa) = f(s*)/f(sa).

The closer r(sa) is to 1, the better the approximation. The lowest upper bound of possible r(sa) values, taken over all instances of the


problem is called the performance ratio, which is denoted by RA . We would like to


have approximation algorithms with RA as close to 1 as possible.

A polynomial-time approximation algorithm is said to be a c-approximation if its performance ratio is at most c; that is, for every instance,

f(sa) ≤ c · f(s*).

1.2.1 Traveling Salesman Problem

The decision version of the traveling salesman problem is a well-known NP-complete problem. Here we see two approximation algorithms for it: the first is the nearest-neighbor algorithm and the other is the twice-around-the-tree algorithm.

Nearest Neighbor Algorithm

It is a greedy-based algorithm, and it performs as described in algorithm 1.1. Why

Algorithm 1.1: Nearest-neighbor


begin

1. To start with, choose an arbitrary city.


2. Repeat the following operation until all the cities have been visited:
go to the unvisited city nearest the one visited last (ties can be broken
arbitrarily).
3. Return to the starting city.

end

this algorithm is called an approximation algorithm? Consider the following example (figure 1.1). Choosing a as the start vertex of the tour, algorithm 1.1 yields the tour sa : a − b − c − d − a of length 10, while the optimal solution for this instance is s* : a − b − d − c − a of length 8. So remember that the solution obtained by the nearest-neighbor algorithm is not necessarily optimal; it is only an approximation.


[Figure 1.1: A complete graph on vertices a, b, c, d with edge weights a–b = 1, b–c = 2, c–d = 1, a–c = 3, b–d = 3, a–d = 6, used for illustrating the nearest-neighbor algorithm.]

Thus, the accuracy ratio is

r(sa) = f(sa)/f(s*) = 10/8 = 1.25.
That is, the tour obtained by the algorithm 1.1 is 25% longer than the optimal one.

What are the merits and demerits of this algorithm? The merit is its simplicity; the demerit is that nothing can be said about its accuracy in general. For instance, if we change the weight of the edge (a, d) from 6 to an arbitrarily large number w ≥ 6, the algorithm will still yield the same tour, but of length 4 + w, while the optimal length is still 8. Therefore, the ratio

r(sa) = (4 + w)/8,

which can be made arbitrarily large by choosing w large enough. Hence RA = ∞ for this algorithm, which is really bad.
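
A minimal Python sketch of algorithm 1.1 on a distance matrix follows; the matrix encodes the instance of figure 1.1 under the vertex order a, b, c, d, following our reconstruction of that figure:

    def nearest_neighbor_tour(dist, start=0):
        """Greedy nearest-neighbor heuristic for the traveling salesman problem."""
        n = len(dist)
        tour, visited = [start], {start}
        while len(tour) < n:
            last = tour[-1]
            nxt = min((j for j in range(n) if j not in visited),
                      key=lambda j: dist[last][j])   # closest unvisited city
            tour.append(nxt)
            visited.add(nxt)
        return tour + [start]                        # return to the start city

    # Instance of figure 1.1, vertices a, b, c, d as indices 0..3:
    D = [[0, 1, 3, 6],
         [1, 0, 2, 3],
         [3, 2, 0, 1],
         [6, 3, 1, 0]]
    tour = nearest_neighbor_tour(D)
    print(tour, sum(D[u][v] for u, v in zip(tour, tour[1:])))  # [0,1,2,3,0] 10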

Twice-around-the-tree Algorithm

Here we see a rather simple approximation algorithm with a finite performance ratio for Euclidean instances. Euclidean instances satisfy the following properties on the intercity distances.

Triangle inequality:

d[i, j] ≤ d[i, k] + d[k, j]

for any i, j, k.


Symmetry:

d[i, j] = d[j, i]

for any i, j.

Now, let us see the algorithm (algorithm 1.2). As an example consider the graph

Algorithm 1.2: Twice-around-tree


begin

1. Construct a minimum spanning tree of the graph corresponding to a


given instance of the traveling salesman problem.
2. Starting at an arbitrary vertex, perform a walk around the minimum
spanning tree recording the vertices passed by.
3. Scan the list of vertices obtained in step 2 and eliminate from it
all repeated occurrences of the same vertex except the starting one at
the end of the list. The vertices remaining on the list will form a
Hamiltonian circuit, which is the output of the algorithm.

end

shown in figure 1.2. The minimum spanning tree of this graph is made up of the edges (a,b), (b,c), (b,d) and (d,e). Therefore, the walk determined by algorithm 1.2 is

a, b, c, b, d, e, d, b, a,

and hence the Hamiltonian circuit is

a, b, c, d, e, a,

whose length is 41; this tour is not optimal. Here we cannot compute the accuracy ratio, as we do not know the length of an optimal tour. However, if we restrict attention to Euclidean instances, we can at least estimate the ratio from above, as shown in the following theorem.



[Figure 1.2: A weighted graph on vertices a, b, c, d, e and the walk around its minimum spanning tree, used for illustrating the twice-around-the-tree algorithm; the minimum spanning tree consists of the edges (a,b), (b,c), (b,d) and (d,e).]

Theorem: 1.1 Algorithm 1.2 is a 2-approximation for the traveling salesman prob-
lem with Euclidean distance.

Proof: The time complexity of algorithm 1.2 is dominated by the construction of the minimum spanning tree. As we know, such construction can be done in polynomial time (by Prim's or Kruskal's algorithm), so algorithm 1.2 is a polynomial-time approximation algorithm.

We have to prove that

f(sa) ≤ 2f(s*),

that is, the length of the tour sa obtained by algorithm 1.2 is at most twice the length of the optimal tour s*.

Note that one can obtain a spanning tree by removing an edge from the optimal tour s*; call this spanning tree T. Its weight w(T) must be greater than or equal to the weight w(T*) of the graph's minimum spanning tree. This implies

f(s*) > w(T) ≥ w(T*).

Multiplying this inequality by 2, we get 2f(s*) > 2w(T*), which is the length of the walk obtained in step 2 of algorithm 1.2. In step 3 we make shortcuts, which, by the triangle inequality, cannot increase the total length. Therefore,

2f(s*) > f(sa).


Hence the proof. □

The above theorem makes us wonder whether there is a polynomial-time approximation algorithm for the traveling salesman problem with a finite performance ratio on all instances of the problem. The answer is no, unless P = NP. This leads to the following theorem.

Theorem: 1.2 If P ≠ NP, then there exists no c-approximation algorithm for the traveling salesman problem; that is, there is no polynomial-time algorithm for this problem with f(sa) ≤ c·f(s*) for some constant c and all instances of the problem.

Proof: We prove it by contradiction. Suppose there exists a polynomial-time algorithm A such that f(sa) ≤ c·f(s*) for some constant c; without loss of generality, assume c is a positive integer.

Let us reduce the Hamiltonian circuit problem to this problem. Let G be an arbitrary graph with n vertices. Map G to a complete weighted graph G′ by assigning weight 1 to each of its edges and adding new edges, of weight cn + 1, between each pair of vertices not adjacent in G.

If G has a Hamiltonian circuit, its length in G′ is n; hence it is the exact solution s* to the traveling salesman problem for G′. Then

f(sa) ≤ cn.

If G does not have a Hamiltonian circuit, the shortest tour in G′ contains at least one edge of weight cn + 1, so

f(sa) ≥ f(s*) > cn.

From these two inequalities we conclude that one could solve the Hamiltonian circuit problem for the graph G in polynomial time by mapping G to G′, applying algorithm A to obtain a tour in G′, and comparing its length with cn. But the Hamiltonian circuit problem is known to be NP-complete, and hence this is impossible unless P = NP. □


1.2.2 Knapsack Problem

The problem considered here is the 0/1-knapsack problem: each item either goes into the knapsack in its entirety or not at all. Here also a greedy-based approach is considered: first the items are arranged in nonincreasing order of their value-to-weight ratios, and then they are selected one by one in that order. The corresponding procedure is described in algorithm 1.3.

Algorithm 1.3: Greedy algorithm for 0/1-knapsack


begin

1. Compute the value-to-weight ratios ri = vi /wi , i = 1, 2, . . . , n, for the


items given.
2. Sort the items in nonincreasing order of the ratios computed in step 1
(ties can be broken arbitrarily).
3. Repeat the following operation until no item is left in the sorted list,
if the current item on the list fits into the knapsack, place it in the
knapsack; otherwise, proceed to the next item.

end
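
A minimal Python sketch of algorithm 1.3 (the names are our own):

    def greedy_knapsack_01(items, W):
        """Algorithm 1.3: items holds (value, weight) pairs; take items greedily
        in nonincreasing order of value-to-weight ratio, skipping misfits."""
        order = sorted(range(len(items)),
                       key=lambda i: items[i][0] / items[i][1], reverse=True)
        total_v, total_w, taken = 0, 0, []
        for i in order:
            v, w = items[i]
            if total_w + w <= W:          # current item fits: place it
                total_v += v
                total_w += w
                taken.append(i + 1)       # report 1-based item numbers
        return total_v, taken

    print(greedy_knapsack_01([(40, 4), (42, 7), (25, 5), (12, 3)], 10))
    # (65, [1, 3]): items 1 and 3, as in the example below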

To determine the accuracy ratio, consider the following example, wherein the knapsack capacity is 10.

    item   weight   value   value/weight
     1       4        40        10
     2       7        42         6
     3       5        25         5
     4       3        12         4

For this instance, algorithm 1.3 selects the first item, of weight 4, skips the next item, of weight 7, then selects the third item, of weight 5, and finally skips the last item, of weight 3. The solution obtained happens to be optimal. However, this may


not always happen; otherwise we would have a polynomial-time algorithm for an NP-hard problem. To demonstrate this, consider the following instance, for which the knapsack capacity is W > 2.

    item   weight   value   value/weight
     1       1        2          2
     2       W        W          1

For this instance, algorithm 1.3 first selects item 1 and then skips item 2, so the value of the solution is 2. However, the optimal selection is item 2, whose value is W. The accuracy ratio of this approximate solution is therefore W/2, which is unbounded.

How can we get a better guarantee? One simple idea is to choose the better of two alternatives: the solution obtained by algorithm 1.3, and the one consisting of the single item of the largest value that fits into the knapsack. Can we do better still, by systematically considering various subsets? The following scheme illustrates that.

For the 0/1 knapsack problem there exists a polynomial-time approximation scheme: a parametric family of algorithms that allows us to get approximations sa^(k) with any predefined accuracy level,

f(sa^(k)) / f(s*) ≤ 1 + 1/k,

for any instance of size n, where k is an integer parameter in the range 0 ≤ k < n.

For instance, the following scenario with k = 2 yields {1,3,4} as the optimal solution, where the knapsack capacity is 10.


    item   weight   value   value/weight
     1       4        40        10
     2       7        42         6
     3       5        25         5
     4       1         4         4

    subset    added items     value
    ∅         1, 3, 4          69
    {1}       3, 4             69
    {2}       4                46
    {3}       1, 4             69
    {4}       1, 3             69
    {1,2}     not feasible
    {1,3}     4                69
    {1,4}     3                69
    {2,3}     not feasible
    {2,4}                      46
    {3,4}     1                69

The first table gives the problem instance, and the second the subsets generated by the algorithm. The time efficiency of the algorithm is polynomial in n. Indeed, the total number of subsets the algorithm generates is

Σ_{j=0}^{k} C(n, j) = Σ_{j=0}^{k} n(n − 1) · · · (n − j + 1)/j! ≤ Σ_{j=0}^{k} n^j ≤ Σ_{j=0}^{k} n^k = (k + 1)n^k.

For each of those subsets, O(n) time is needed to determine its possible extension, so the total time required by the algorithm is O(k·n^{k+1}). Note that the time efficiency of the algorithm, though polynomial in n, is exponential in k.

We have not yet said how to determine the upper bound for the algorithm. It is simple: the solution to the continuous version of the knapsack problem suffices, as the greedy algorithm always yields the optimal solution for that version. In the continuous version, we are permitted to take arbitrary fractions of the given items. As before, the items are ordered

according to their efficiency in using the knapsack capacity. If at some point the current item does not fit into the knapsack, the fraction of it that fills the knapsack to its full capacity is taken instead. The procedure for the continuous knapsack problem is given in algorithm 1.4.

Algorithm 1.4: Greedy algorithm for the continuous knapsack problem


begin

1. Compute the value-to-weight ratios ri = vi /wi , i = 1, 2, . . . , n, for the


items given.
2. Sort the items in nonincreasing order of the ratios computed in step 1
(ties can be broken arbitrarily).
3. Repeat the following operations until the knapsack is filled to its full
capacity or no item is left in the sorted list: if the current item on the
list fits into the knapsack in its entirety, take it and proceed to the
next item; otherwise, take its largest fraction to fill the knapsack to
its full capacity and stop.

end
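
A minimal Python sketch of algorithm 1.4; on the branch and bound instance considered earlier it returns 76.0, illustrating how the continuous optimum bounds the best 0/1 value (65) from above:

    def greedy_knapsack_continuous(items, W):
        """Algorithm 1.4: fractions allowed; the greedy choice is optimal here,
        so the result upper-bounds the 0/1 optimum."""
        order = sorted(items, key=lambda it: it[0] / it[1], reverse=True)
        total_v, capacity = 0.0, W
        for v, w in order:
            if w <= capacity:             # item fits entirely: take it all
                total_v += v
                capacity -= w
            else:                         # take the largest fraction and stop
                total_v += v * capacity / w
                break
        return total_v

    print(greedy_knapsack_continuous([(40, 4), (42, 7), (25, 5), (12, 3)], 10))
    # 76.0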


This algorithm also provides a more refined way of computing the upper bounds used in solving the 0/1-knapsack problem by the branch and bound method.

Have you understood?


1. What do you mean by a deterministic algorithm?
2. What does the class NP refer to?
3. What are the two stages of a nondeterministic algorithm?
4. Why are the algorithms described in this unit called approximation algorithms?

Exercises
1. What is the time efficiency class of greedy algorithm for the knapsack problem?
2. What is the time efficiency of the nearest-neighbor algorithm?
3. Prove that making a shortcut of the kind used by the twice-around-the-tree algorithm cannot increase the tour's length in a Euclidean graph.
4. Design a simple 2-approximation algorithm for finding a minimum vertex cover
in a given graph.
5. Design a polynomial time greedy algorithm for the graph-coloring problem, and
show that the performance ratio of your approximation algorithm is infinitely
large.

Summary
Approximation algorithms are often used to find approximate solutions to difficult problems of combinatorial optimization. Nearest neighbor is a simple greedy algorithm for approximating a solution to the traveling salesman problem; its performance ratio is unbounded. Twice-around-the-tree is a 2-approximation algorithm for the traveling salesman problem on Euclidean graphs. A sensible greedy algorithm for the knapsack problem processes the input's items in descending order of their value-to-weight ratios; for the continuous version of the problem, this algorithm always yields an exact optimal solution.
