
Travelling Salesman Problem – What is it?

▪ The Travelling Salesman Problem (TSP) is a classic optimization problem. It is
defined as follows: “given n cities and the distance between each pair of cities, find a
path that visits each city exactly once and comes back to the starting city, while
minimizing the total travelling distance.”
▪ TSP has many practical applications, for example in network design and
transportation route design. The objective is to minimize the total distance. We can
start the tour from any city and visit the other cities in any order. With n cities,
n! different permutations are possible, so exploring all paths by brute force is not
practical for real-life problem sizes, as the sketch below illustrates.
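
As a rough illustration of that factorial blow-up, here is a minimal brute-force TSP sketch in Python; the 4-city distance matrix is made up for the example.

```python
# Brute-force TSP: try every permutation of the remaining cities.
# The distance matrix below is a made-up 4-city example.
from itertools import permutations

dist = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]

def tsp_brute_force(dist):
    n = len(dist)
    best_cost, best_tour = float("inf"), None
    # Fix city 0 as the start; permute the remaining n-1 cities,
    # so (n-1)! tours are examined -- infeasible for large n.
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        cost = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_cost, best_tour

print(tsp_brute_force(dist))  # (80, (0, 1, 3, 2, 0))
```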
LCBB using Static State Space Tree for Travelling Salesman Problem
▪ Branch and bound is an effective way to find a good, if not the best, solution quickly
by pruning unnecessary branches of the search tree.
▪ It works as follows:
Consider a directed weighted graph G = (V, E, W), where nodes represent cities
and weighted directed edges represent the direction of travel and the distance
between two cities.
1. Initially, the graph is represented by a cost matrix C, where
Cij = cost of the edge, if there is a direct path from city i to city j
Cij = ∞, if there is no direct path from city i to city j.

2. Convert the cost matrix to a reduced matrix by subtracting the minimum values from
the appropriate rows and columns, so that each row and each column contains at least
one zero entry.

3. Find the cost of the reduced matrix. The cost is the sum of the amounts subtracted
from the cost matrix to convert it into the reduced matrix.

4. Prepare the state space tree for the reduced matrix.

5. Find the least-cost node A (i.e. the E-node) by computing the reduced cost matrix
for every remaining node.

6. If the edge <i, j> is to be included, then do the following:
(i) Set all values in row i and all values in column j of A to ∞.
(ii) Set A[j, 1] = ∞, so that the tour cannot return to the starting city before all cities are visited.
(iii) Reduce A again, except for rows and columns whose entries are all ∞.
7. Compute the cost of the newly created reduced matrix as,
Cost = L + Cost(i, j) + r

where L is the cost of the parent’s reduced cost matrix, Cost(i, j) is the entry A[i, j] of that matrix, and r is the reduction cost of the new matrix.

8. If not all nodes have been visited, go to step 4.

The reduction procedure is sketched below:
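
The following is a minimal Python sketch of the reduction step and of the edge-inclusion steps 6–7; the list-of-lists matrix representation with math.inf for missing edges, and the function names, are illustrative assumptions rather than part of the original algorithm statement.

```python
import math

INF = math.inf

def reduce_matrix(m):
    """Reduce rows then columns so every finite row and column
    contains a zero. Returns the total amount subtracted (r)."""
    n = len(m)
    cost = 0
    for i in range(n):                       # row reduction
        row_min = min(m[i])
        if 0 < row_min < INF:                # skip zero rows and all-INF rows
            cost += row_min
            m[i] = [x - row_min if x < INF else INF for x in m[i]]
    for j in range(n):                       # column reduction
        col_min = min(m[i][j] for i in range(n))
        if 0 < col_min < INF:
            cost += col_min
            for i in range(n):
                if m[i][j] < INF:
                    m[i][j] -= col_min
    return cost

def include_edge(parent, L, i, j, start=0):
    """Child matrix and lower bound after committing edge <i, j>.
    L is the parent's bound; parent[i][j] is Cost(i, j)."""
    a = [row[:] for row in parent]           # copy A from the parent
    edge = a[i][j]
    a[i] = [INF] * len(a)                    # row i to infinity
    for r in range(len(a)):
        a[r][j] = INF                        # column j to infinity
    a[j][start] = INF                        # A[j, 1] = infinity (0-indexed here)
    r_cost = reduce_matrix(a)                # reduce A again
    return a, L + edge + r_cost              # Cost = L + Cost(i, j) + r
```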

Dynamic Programming – Introduction

Dynamic programming was invented by the U.S. mathematician Richard Bellman in the 1950s.
Like greedy algorithms, it is used to solve optimization problems. But unlike the greedy
approach, dynamic programming always guarantees an optimal (best) solution.
A feasible solution is a solution that satisfies the constraints of the problem. When the
problem has multiple feasible solutions with different costs, the solution with the
minimum cost or maximum profit is called the optimal solution.
The cost metric depends on the problem. For a sorting problem, the cost metric may be the
number of comparisons or the number of swaps. For matrix multiplication, the cost metric is
the number of multiplications. For the knapsack problem, the cost metric is the total profit earned.

General Strategy
▪ Dynamic programming is a powerful design technique for optimization problems.
Here the word “programming” refers to planning or constructing a solution; it
has no connection with computer programming.
▪ Divide and conquer divides the problem into small subproblems, which are solved
recursively. Unlike divide and conquer, the subproblems in dynamic programming
are not independent: they overlap with each other. Solutions of the subproblems
are merged to obtain the solution of the original, larger problem.
▪ In divide and conquer the subproblems are treated as independent, so repeated
subproblems are solved multiple times. Dynamic programming saves each solution in
a table, so when the same subproblem is encountered again, its solution is retrieved
from the table. It is a bottom-up approach: it starts by solving the smallest possible
problems and uses their solutions to build solutions to larger problems.
Limitations
▪ The method is applicable only to those problems that possess the property of the
principle of optimality.
▪ We must keep track of partial solutions.
▪ Dynamic programming is more complex and time-consuming.

Characteristics of Dynamic Programming


Dynamic programming works on the following principles:

▪ Characterize the structure of an optimal solution, i.e. build a mathematical model of
the solution.
▪ Recursively define the value of the optimal solution.
▪ Using a bottom-up approach, compute the value of the optimal solution for each
possible subproblem.
▪ Construct an optimal solution for the original problem using the information computed
in the previous step.
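
To make these four steps concrete, here is a small bottom-up sketch in Python for the making change problem (one of the applications listed in the next section); the coin system in the demo call is made up for illustration.

```python
def min_coins(coins, amount):
    """Bottom-up DP for the make-change problem.
    Step 1 (structure): an optimal way to pay v uses some coin c
    plus an optimal way to pay v - c.
    Step 2 (recurrence): best[v] = 1 + min(best[v - c] for usable c).
    Step 3 (bottom-up): fill best[] from 0 up to amount."""
    best = [0] + [float("inf")] * amount
    choice = [0] * (amount + 1)            # remembers the coin used at each value
    for v in range(1, amount + 1):
        for c in coins:
            if c <= v and best[v - c] + 1 < best[v]:
                best[v] = best[v - c] + 1
                choice[v] = c
    # Step 4 (reconstruct): walk the recorded choices back down to 0.
    used, v = [], amount
    while v > 0 and best[amount] != float("inf"):
        used.append(choice[v])
        v -= choice[v]
    return best[amount], used

print(min_coins([1, 5, 6, 8], 11))  # (2, [5, 6])
```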
Applications of Dynamic Programming
Dynamic programming is used to solve optimization problems. It is used to solve many real-life
problems such as,

▪ Making change problem
▪ Knapsack problem
▪ Optimal binary search tree
▪ Travelling salesman problem
▪ All pair shortest path problem
▪ Assembly line scheduling
▪ Multi stage graph problem
Principle of Optimality
Principle of optimality : “In an optimal sequence of decisions or choices, each
subsequence must also be optimal.”
▪ The principle of optimality is the heart of dynamic programming. It states that
to find the optimal solution of the original problem, the solution of each
subproblem must also be optimal. It is not possible to derive an optimal solution
using dynamic programming if the problem does not possess the principle of
optimality.
▪ The shortest path problem satisfies the principle of optimality. If A – X1 – X2 – . . . –
Xn – B is the shortest path between nodes A and B, then any sub-path from Xi to
Xj must also be shortest. If there exist multiple paths from Xi to Xj and the selected
one is not the minimum, then clearly the path from A to B cannot be the shortest.
▪ For example, in Figure (a) the shortest path from A to C is A – B – C. There exist
two paths from B to C: one is B – C and the other is B – E – D – C. But B – C is the
shorter one, so it is the one added to the final solution.
▪ On the other hand, the longest path problem does not satisfy the principle of
optimality. In Figure (b), the longest non-cyclic path from A to D is A – B – C –
D. In this path, the sub-path from B to C is just the edge joining B and C. However,
B – C itself is not the longest path from B to C, because the longest non-cyclic path
between them is B – A – E – D – C. Thus a sub-path of the longest path need not
itself be longest, which violates the principle of optimality.

Elements of Dynamic Programming


▪ Optimal substructure: “For the optimal solution of the problem, the solutions of
its subproblems must also be optimal.” Dynamic programming builds the optimal
solution of the bigger problem using the solutions of smaller subproblems.
Hence we should consider only those subproblems that have an optimal
solution.
▪ Overlapping subproblems: When the big problem is divided into small
problems, it may create an exponential number of subproblems, of which only a
polynomial number are distinct.
▪ The following figure shows the overlapping subproblems of the binomial coefficient.
Dynamic programming saves each solution in a table, so no rework is done. When
the subproblem C(n – 3, r – 2) is encountered again, its solution is retrieved from the table.
▪ Divide and conquer solves C(n – 3, r – 2) four times, whereas dynamic
programming solves it only once. There may exist many subproblems with
even greater multiplicity; a memoized sketch follows.
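
Here is a short memoized sketch in Python; functools.lru_cache plays the role of the table, so each distinct C(n, r) is computed only once. This realizes the same table idea top-down rather than bottom-up.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def C(n, r):
    """Binomial coefficient via Pascal's rule:
    C(n, r) = C(n-1, r-1) + C(n-1, r).
    The cache stores every distinct subproblem, so a repeated
    subproblem such as C(n-3, r-2) is solved once, then looked up."""
    if r == 0 or r == n:
        return 1
    return C(n - 1, r - 1) + C(n - 1, r)

print(C(10, 4))  # 210
```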

All Pairs Shortest Path Algorithm – Introduction

The All Pairs Shortest Path algorithm is also known as the Floyd-Warshall algorithm. It is
an optimization problem that can be solved using dynamic programming.
Let G = <V, E> be a directed graph, where V is a set of vertices and E is a set of edges with
nonnegative lengths. The problem is to find the shortest path between each pair of nodes.

L = matrix giving the length of each edge:

L[i, j] = 0, if i == j // the distance of a node from itself is zero
L[i, j] = ∞, if i ≠ j and (i, j) ∉ E
L[i, j] = w(i, j), if i ≠ j and (i, j) ∈ E // w(i, j) is the weight of edge (i, j)

Principle of optimality :
If k is a node on the shortest path from i to j, then the sub-paths from i to k and from k to j
must also be shortest.

In the following figure, the optimal path from i to j is either the direct path p or the
concatenation of p1 (from i to k) and p2 (from k to j).
Algorithm for All Pairs Shortest Path
This approach is also known as the Floyd-Warshall shortest path algorithm. The
algorithm for the all pairs shortest path (APSP) problem is sketched below.
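
A compact Python sketch of the algorithm follows; the 4-node distance matrix in the demo is made up for illustration.

```python
import math

INF = math.inf

def floyd_warshall(L):
    """L[i][j]: direct edge length, 0 on the diagonal, INF if no edge.
    Returns the matrix of shortest path lengths between all pairs."""
    n = len(L)
    D = [row[:] for row in L]
    for k in range(n):                 # allow node k as an intermediate stop
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D

L = [
    [0,   5,   INF, 10],
    [INF, 0,   3,   INF],
    [INF, INF, 0,   1],
    [INF, INF, INF, 0],
]
for row in floyd_warshall(L):
    print(row)   # first row: [0, 5, 8, 9]
```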
Bellman–Ford Algorithm for Shortest Paths
The Bellman-Ford algorithm is used to find the minimum distance from the source vertex to
every other vertex. The main difference from Dijkstra’s algorithm is that Dijkstra’s
algorithm cannot handle negative edge weights, whereas Bellman-Ford handles them
correctly as long as no negative-weight cycle is reachable from the source.

The Bellman-Ford algorithm finds the distances in a bottom-up manner. First it finds the
shortest distances that use at most one edge in the path; it then increases the allowed
path length pass by pass until all shortest paths are found, as the sketch below shows.
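
A minimal Python sketch of this bottom-up relaxation, including the standard extra pass that detects a negative cycle reachable from the source; the edge list in the demo is made up.

```python
import math

def bellman_ford(n, edges, source):
    """edges: list of (u, v, w) triples for a directed graph with n nodes.
    Returns shortest distances from source, or raises on a reachable
    negative-weight cycle."""
    dist = [math.inf] * n
    dist[source] = 0
    # Pass k guarantees all shortest paths using at most k edges are found.
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # One more pass: any further improvement implies a negative cycle.
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            raise ValueError("negative-weight cycle reachable from source")
    return dist

edges = [(0, 1, 6), (0, 2, 7), (1, 2, 8), (1, 3, 5),
         (2, 3, -3), (3, 1, -2)]
print(bellman_ford(4, edges, 0))  # [0, 2, 7, 4]
```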
