QUICK SORT:
Quick Sort is a Divide and Conquer algorithm. It picks an element as the pivot
and partitions the given array around the picked pivot. There are many
different versions of Quick Sort that pick the pivot in different ways:
1. Always pick the first element as the pivot.
2. Always pick the last element as the pivot (implemented below).
3. Pick a random element as the pivot.
4. Pick the median as the pivot.
The key process in Quick Sort is partition(). The target of partition() is, given
an array and an element x of the array as the pivot, to put x at its correct
position in the sorted array, put all elements smaller than x before x, and put
all elements greater than x after x. All this should be done in linear time.
Pseudo Code for recursive Quick Sort function:
/* low --> Starting index, high --> Ending index */
quickSort(arr[], low, high)
{
    if (low < high)
    {
        /* pi is partitioning index, arr[pi] is now
           at the right place */
        pi = partition(arr, low, high);
        quickSort(arr, low, pi - 1);  // Before pi
        quickSort(arr, pi + 1, high); // After pi
    }
}
Implementation:
Following is a C++ implementation of Quick Sort:
/* C++ implementation of QuickSort (last element as pivot) */
#include <bits/stdc++.h>
using namespace std;

// Places the pivot (last element) at its sorted position and
// returns that position (partitioning index)
int partition(int arr[], int low, int high)
{
    int pivot = arr[high];
    int i = low - 1;
    for (int j = low; j < high; j++)
        if (arr[j] < pivot)
            swap(arr[++i], arr[j]);
    swap(arr[i + 1], arr[high]);
    return i + 1;
}

void quickSort(int arr[], int low, int high)
{
    if (low < high) {
        int pi = partition(arr, low, high);
        quickSort(arr, low, pi - 1);
        quickSort(arr, pi + 1, high);
    }
}

// Function to print an array
void printArray(int arr[], int size)
{
    for (int i = 0; i < size; i++)
        cout << arr[i] << " ";
}

// Driver Code
int main()
{
    int arr[] = {10, 7, 8, 9, 1, 5};
    int n = sizeof(arr) / sizeof(arr[0]);
    quickSort(arr, 0, n - 1);
    cout << "Sorted array: \n";
    printArray(arr, n);
    return 0;
}
MERGE SORT:
Like Quick Sort, Merge Sort is a Divide and Conquer algorithm. It divides the
input array into two halves, calls itself for the two halves, and then merges
the two sorted halves. The merge() function is used for merging the two
halves. merge(arr, l, m, r) is the key process: it assumes that arr[l..m]
and arr[m+1..r] are sorted and merges the two sorted sub-arrays into one.
See the following implementation for details.
MergeSort(arr[], l, r)
If r > l
1. Find the middle point to divide the array into two halves:
middle m = l+ (r-l)/2
2. Call mergeSort for first half:
Call mergeSort(arr, l, m)
3. Call mergeSort for second half:
Call mergeSort(arr, m+1, r)
4. Merge the two halves sorted in step 2 and 3:
Call merge(arr, l, m, r)
// UTILITY FUNCTIONS
// Function to print an array
void printArray(int A[], int size)
{
    for (auto i = 0; i < size; i++)
        cout << A[i] << " ";
}

// Driver code
int main()
{
    int arr[] = { 12, 11, 13, 5, 6, 7 };
    auto arr_size = sizeof(arr) / sizeof(arr[0]);
    mergeSort(arr, 0, arr_size - 1);
    printArray(arr, arr_size);
    return 0;
}
BINARY SEARCH:
Binary Search searches a sorted array by repeatedly dividing the search
interval in half. Begin with an interval covering the whole array. If the value of
the search key is less than the item in the middle of the interval, narrow the
interval to the lower half. Otherwise, narrow it to the upper half. Repeatedly
check until the value is found or the interval is empty. The idea of binary
search is to use the information that the array is sorted to reduce the time
complexity to O(log n).
We basically ignore half of the elements after just one comparison.
1. Compare x with the middle element.
2. If x matches with the middle element, we return the mid index.
3. Else If x is greater than the mid element, then x can only lie in the right
half subarray after the mid element. So we recur for the right half.
4. Else (x is smaller) recur for the left half.
// C++ program to implement recursive binary search
#include <bits/stdc++.h>
using namespace std;

// Returns the index of x in arr[l..r] if present, otherwise -1
int binarySearch(int arr[], int l, int r, int x)
{
    if (r >= l) {
        int mid = l + (r - l) / 2;
        // If the element is present at the middle itself
        if (arr[mid] == x)
            return mid;
        // If x is smaller, it can only be in the left subarray
        if (arr[mid] > x)
            return binarySearch(arr, l, mid - 1, x);
        // Else x can only be in the right subarray
        return binarySearch(arr, mid + 1, r, x);
    }
    // Element is not present in the array
    return -1;
}

int main(void)
{
    int arr[] = { 2, 3, 4, 10, 40 };
    int x = 10;
    int n = sizeof(arr) / sizeof(arr[0]);
    int result = binarySearch(arr, 0, n - 1, x);
    (result == -1) ? cout << "Element is not present in array"
                   : cout << "Element is present at index " << result;
    return 0;
}
Output:
Element is present at index 3
FRACTIONAL KNAPSACK PROBLEM:
In the Fractional Knapsack problem, we can break items to maximize the total
value of the knapsack; this problem, in which we can take a fraction of an
item, is called the fractional knapsack problem.
Input:
Items as (value, weight) pairs
arr[] = {{60, 10}, {100, 20}, {120, 30}}
Knapsack Capacity, W = 50
Output:
Maximum possible value = 240
by taking the full items of weight 10 kg and 20 kg and a 2/3
fraction of the 30 kg item. Hence the total value is
60 + 100 + (2/3)(120) = 240.
// C++ program for the fractional knapsack problem (greedy by
// value/weight ratio)
#include <bits/stdc++.h>
using namespace std;

struct Item {
    int value, weight;

    // Constructor
    Item(int value, int weight)
    {
        this->value = value;
        this->weight = weight;
    }
};

// Greedy: sort by value/weight ratio, take whole items while they
// fit, then a fraction of the next one
double fractionalKnapsack(int W, Item arr[], int n)
{
    sort(arr, arr + n, [](Item a, Item b) {
        return (double)a.value / a.weight > (double)b.value / b.weight;
    });

    double finalValue = 0.0;
    for (int i = 0; i < n; i++) {
        if (arr[i].weight <= W) {
            W -= arr[i].weight;
            finalValue += arr[i].value;
        }
        else {
            finalValue += arr[i].value * ((double)W / arr[i].weight);
            break;
        }
    }
    return finalValue;
}

// Driver code
int main()
{
    int W = 50; // Weight of knapsack
    Item arr[] = { { 60, 10 }, { 100, 20 }, { 120, 30 } };
    int n = sizeof(arr) / sizeof(arr[0]);

    // Function call
    cout << "Maximum value we can obtain = "
         << fractionalKnapsack(W, arr, n);
    return 0;
}
Output
Maximum value we can obtain = 240
0-1 KNAPSACK PROBLEM:
/* Naive recursive implementation of the 0-1 Knapsack problem */
#include <bits/stdc++.h>
using namespace std;

// Returns the maximum value that can fit in a knapsack of capacity W
int knapSack(int W, int wt[], int val[], int n)
{
    // Base Case
    if (n == 0 || W == 0)
        return 0;

    // If the weight of the nth item is more than the capacity W,
    // it cannot be included
    if (wt[n - 1] > W)
        return knapSack(W, wt, val, n - 1);

    // Return the maximum of (1) the nth item included and
    // (2) the nth item not included
    return max(val[n - 1] + knapSack(W - wt[n - 1], wt, val, n - 1),
               knapSack(W, wt, val, n - 1));
}

// Driver code
int main()
{
    int val[] = { 60, 100, 120 };
    int wt[] = { 10, 20, 30 };
    int W = 50;
    int n = sizeof(val) / sizeof(val[0]);
    cout << knapSack(W, wt, val, n);
    return 0;
}
Output: 220
It should be noted that the above function computes the same sub-
problems again and again. In the recursion tree for this input (figure not
reproduced here), K(1, 1) is evaluated twice. The time complexity of this
naive recursive solution is exponential, O(2^n).
Complexity Analysis:
Time Complexity: O(2^n), as there are redundant subproblems.
Auxiliary Space: O(1), as no extra data structure is used for storing values.
Since subproblems are evaluated repeatedly, this problem has the Overlapping
Subproblems property. So the 0-1 Knapsack problem has both properties of a
dynamic programming problem.
Method 2: Like other typical Dynamic Programming (DP) problems,
recomputation of the same subproblems can be avoided by constructing a
temporary array K[][] in a bottom-up manner. Following is a Dynamic
Programming based implementation.
The tables below use the example val[] = {10, 15, 40}, wt[] = {1, 2, 3},
W = 6 (these values can be read off from the table entries). K[i][j] is the
best value using the first i items with capacity j.

After filling the row for item 2:

     j: 0   1   2   3   4   5   6
i = 0:  0   0   0   0   0   0   0
i = 1:  0   10  10  10  10  10  10
i = 2:  0   10  15  25  25  25  25
i = 3:  0

Explanation:
While filling the row for item 2 (weight 2), at j = 3 we take the
maximum of (10, 15 + K[1][3 - 2]) = 25: either item 2 is not taken
(value 10) or it is taken (value 15 plus the best value for the
remaining capacity).

After filling the row for item 3:

     j: 0   1   2   3   4   5   6
i = 0:  0   0   0   0   0   0   0
i = 1:  0   10  10  10  10  10  10
i = 2:  0   10  15  25  25  25  25
i = 3:  0   10  15  40  50  55  65

Explanation:
While filling the row for item 3 (weight 3), at j = 4 we take the
maximum of (25, 40 + K[2][4 - 3]) = 50.
// A DP-based (bottom-up) C++ program for the 0-1 Knapsack problem
#include <bits/stdc++.h>
using namespace std;

// Returns the maximum value that can fit in a knapsack of capacity W
int knapSack(int W, int wt[], int val[], int n)
{
    vector<vector<int>> K(n + 1, vector<int>(W + 1));

    // Build table K[][] in a bottom-up manner
    for (int i = 0; i <= n; i++) {
        for (int w = 0; w <= W; w++) {
            if (i == 0 || w == 0)
                K[i][w] = 0;
            else if (wt[i - 1] <= w)
                K[i][w] = max(val[i - 1] + K[i - 1][w - wt[i - 1]],
                              K[i - 1][w]);
            else
                K[i][w] = K[i - 1][w];
        }
    }
    return K[n][W];
}

// Driver Code
int main()
{
    int val[] = { 60, 100, 120 };
    int wt[] = { 10, 20, 30 };
    int W = 50;
    int n = sizeof(val) / sizeof(val[0]);
    cout << knapSack(W, wt, val, n);
    return 0;
}
Output: 220
Complexity Analysis:
Time Complexity: O(N*W), where 'N' is the number of items and 'W' is the
capacity. For every item we traverse through all capacities 1 <= w <= W.
Auxiliary Space: O(N*W), for the 2-D array of size N*W.
Method 3: This method uses the Memoization Technique (an extension of the
recursive approach).
It extends the recursive approach to overcome the problem of recomputing
redundant cases and the resulting increased complexity. We can solve this by
creating a 2-D array that stores the result of a particular state (n, w) the first
time we compute it. If we come across the same state (n, w) again, instead of
recomputing it in exponential time we directly return its result stored in the
table in constant time. This gives memoization an edge over the plain
recursive approach.
// Memoized C++ implementation of the 0-1 Knapsack problem
#include <bits/stdc++.h>
using namespace std;

const int MAXN = 3, MAXW = 50;
int dp[MAXN + 1][MAXW + 1]; // dp[n][W] == -1 means "not computed yet"

int knapSack(int W, int wt[], int val[], int n)
{
    // Base case
    if (n == 0 || W == 0)
        return 0;

    // Return the stored result if this state was seen before
    if (dp[n][W] != -1)
        return dp[n][W];

    // If the weight of the nth item is more than W, it cannot be included
    if (wt[n - 1] > W)
        return dp[n][W] = knapSack(W, wt, val, n - 1);

    // Store and return the maximum of including / excluding the nth item
    return dp[n][W]
        = max(val[n - 1] + knapSack(W - wt[n - 1], wt, val, n - 1),
              knapSack(W, wt, val, n - 1));
}

// Driver Code
int main()
{
    int val[] = { 60, 100, 120 };
    int wt[] = { 10, 20, 30 };
    int W = 50;
    int n = sizeof(val) / sizeof(val[0]);
    memset(dp, -1, sizeof(dp));
    cout << knapSack(W, wt, val, n);
    return 0;
}
Output: 220
Method 4: We use the dynamic programming approach, but with optimized
space complexity (a 1-D array instead of the 2-D table).
#include <bits/stdc++.h>
using namespace std;
int knapSack(int W, int wt[], int val[], int n)
{
    // making and initializing dp array
    int dp[W + 1];
    memset(dp, 0, sizeof(dp));

    for (int i = 1; i < n + 1; i++) {
        // traverse capacities right to left so that each item
        // is used at most once
        for (int w = W; w >= 0; w--) {
            if (wt[i - 1] <= w)
                // finding the maximum value
                dp[w] = max(dp[w],
                            dp[w - wt[i - 1]] + val[i - 1]);
        }
    }
    return dp[W]; // returning the maximum value of knapsack
}
int main()
{
int val[] = { 60, 100, 120 };
int wt[] = { 10, 20, 30 };
int W = 50;
int n = sizeof(val) / sizeof(val[0]);
cout << knapSack(W, wt, val, n);
return 0;
}
Output: 220
KRUSKAL'S ALGORITHM (Minimum Spanning Tree):
The example graph (figure not reproduced here) contains 9 vertices and 14
edges, so the minimum spanning tree formed will have (9 - 1) = 8 edges.
After sorting the edges by weight, pick edges one by one from the sorted list
(only some representative steps survive here):
1. Pick edge 7-6: no cycle is formed, include it.
...
6. Pick edge 8-6: including this edge results in a cycle, discard it.
...
8. Pick edge 7-8: including this edge results in a cycle, discard it.
Since the number of edges included equals (V - 1), the algorithm stops here.
Below is the implementation of the above idea:
// C++ program for Kruskal's algorithm to find the MST of a given
// connected, undirected and weighted graph
#include <bits/stdc++.h>
using namespace std;

class Edge {
public:
    int src, dest, weight;
};

class Graph {
public:
    int V, E;
    Edge* edge;
};

// Creates a graph with V vertices and E edges
Graph* createGraph(int V, int E)
{
    Graph* graph = new Graph;
    graph->V = V;
    graph->E = E;
    graph->edge = new Edge[E];
    return graph;
}

class subset {
public:
    int parent;
    int rank;
};

// A utility function to find the set of an element i
// (path compression)
int find(subset subsets[], int i)
{
    if (subsets[i].parent != i)
        subsets[i].parent
            = find(subsets, subsets[i].parent);
    return subsets[i].parent;
}

// A function that does union of two sets x and y (union by rank)
void Union(subset subsets[], int x, int y)
{
    int xroot = find(subsets, x);
    int yroot = find(subsets, y);

    if (subsets[xroot].rank < subsets[yroot].rank)
        subsets[xroot].parent = yroot;
    else if (subsets[xroot].rank > subsets[yroot].rank)
        subsets[yroot].parent = xroot;
    else {
        subsets[yroot].parent = xroot;
        subsets[xroot].rank++;
    }
}

// Compare two edges by weight, used by qsort()
int myComp(const void* a, const void* b)
{
    Edge* a1 = (Edge*)a;
    Edge* b1 = (Edge*)b;
    return a1->weight > b1->weight;
}

// The main function to construct MST using Kruskal's algorithm
void KruskalMST(Graph* graph)
{
    int V = graph->V;
    Edge result[V]; // this will store the resultant MST
    int e = 0;      // index used in result[]
    int i = 0;      // index used in sorted edges

    // Step 1: sort all edges in non-decreasing order of weight
    qsort(graph->edge, graph->E, sizeof(graph->edge[0]),
          myComp);

    subset* subsets = new subset[V];
    for (int v = 0; v < V; ++v) {
        subsets[v].parent = v;
        subsets[v].rank = 0;
    }

    // Number of edges to be taken is V-1
    while (e < V - 1 && i < graph->E) {
        // Step 2: pick the smallest remaining edge
        Edge next_edge = graph->edge[i++];
        int x = find(subsets, next_edge.src);
        int y = find(subsets, next_edge.dest);

        // Include the edge only if it doesn't form a cycle
        if (x != y) {
            result[e++] = next_edge;
            Union(subsets, x, y);
        }
    }

    // Print the contents of the built MST
    cout << "Following are the edges in the constructed "
            "MST\n";
    int minimumCost = 0;
    for (i = 0; i < e; ++i) {
        cout << result[i].src << " -- " << result[i].dest
             << " == " << result[i].weight << endl;
        minimumCost += result[i].weight;
    }
    cout << "Minimum Cost Spanning Tree: " << minimumCost
         << endl;
}

// Driver code
int main()
{
    /*       10
         0--------1
         |  \     |
        6|   5\   |15
         |      \ |
         2--------3
             4          */
    int V = 4, E = 5;
    Graph* graph = createGraph(V, E);

    graph->edge[0].src = 0;
    graph->edge[0].dest = 1;
    graph->edge[0].weight = 10;
    graph->edge[1].src = 0;
    graph->edge[1].dest = 2;
    graph->edge[1].weight = 6;
    graph->edge[2].src = 0;
    graph->edge[2].dest = 3;
    graph->edge[2].weight = 5;
    graph->edge[3].src = 1;
    graph->edge[3].dest = 3;
    graph->edge[3].weight = 15;
    graph->edge[4].src = 2;
    graph->edge[4].dest = 3;
    graph->edge[4].weight = 4;

    // Function call
    KruskalMST(graph);
    return 0;
}
Output
Following are the edges in the constructed MST
2 -- 3 == 4
0 -- 3 == 5
0 -- 1 == 10
Minimum Cost Spanning Tree: 19
PRIM'S ALGORITHM (Minimum Spanning Tree):
The idea of using key values is to pick the minimum weight edge from the cut.
Key values are used only for vertices not yet included in the MST; the key
value of such a vertex indicates the minimum weight of an edge connecting it
to the set of vertices already included in the MST.
The set mstSet is initially empty and the keys assigned to vertices are {0, INF,
INF, INF, INF, INF, INF, INF}, where INF indicates infinity. Now pick the
vertex with the minimum key value. Vertex 0 is picked; include it in mstSet,
so mstSet becomes {0}. After including 0 in mstSet, update the key values of
its adjacent vertices. Adjacent vertices of 0 are 1 and 7, and their key values
are updated to 4 and 8. At this point only the vertices with finite key values
are candidates; the vertices included in the MST are the ones already picked.
Pick the vertex with the minimum key value that is not already included in the
MST (not in mstSet). Vertex 1 is picked and added to mstSet, so mstSet now
becomes {0, 1}. Update the key values of the adjacent vertices of 1: the key
value of vertex 2 becomes 8.
Pick the vertex with the minimum key value not already in mstSet. We can
pick either vertex 7 or vertex 2; let vertex 7 be picked, so mstSet now
becomes {0, 1, 7}. Update the key values of the adjacent vertices of 7: the
key values of vertices 6 and 8 become finite (1 and 7 respectively).
Pick the vertex with the minimum key value not already in mstSet. Vertex 6 is
picked, so mstSet now becomes {0, 1, 7, 6}. Update the key values of the
adjacent vertices of 6: the key values of vertices 5 and 8 are updated.
We repeat the above steps until mstSet includes all vertices of the given graph.
Finally, we get the following graph.
// A C++ program for Prim's MST for an adjacency matrix
// representation of the graph
#include <bits/stdc++.h>
using namespace std;
#define V 5

// Find the vertex with the minimum key value, from the set of
// vertices not yet included in the MST
int minKey(int key[], bool mstSet[])
{
    int min = INT_MAX, min_index = -1;
    for (int v = 0; v < V; v++)
        if (mstSet[v] == false && key[v] < min)
            min = key[v], min_index = v;
    return min_index;
}

// Print the constructed MST stored in parent[]
void printMST(int parent[], int graph[V][V])
{
    cout << "Edge \tWeight\n";
    for (int i = 1; i < V; i++)
        cout << parent[i] << " - " << i << " \t"
             << graph[i][parent[i]] << " \n";
}

// Function to construct and print MST for a graph represented
// using adjacency matrix representation
void primMST(int graph[V][V])
{
    int parent[V];  // stores the constructed MST
    int key[V];     // key values used to pick minimum weight edge
    bool mstSet[V]; // set of vertices included in the MST

    for (int i = 0; i < V; i++)
        key[i] = INT_MAX, mstSet[i] = false;

    key[0] = 0;     // pick the first vertex first
    parent[0] = -1; // the first node is the root of the MST

    for (int count = 0; count < V - 1; count++) {
        int u = minKey(key, mstSet);
        mstSet[u] = true;

        // Update key and parent of the adjacent vertices of u
        // that are not yet in the MST
        for (int v = 0; v < V; v++)
            if (graph[u][v] && mstSet[v] == false
                && graph[u][v] < key[v])
                parent[v] = u, key[v] = graph[u][v];
    }
    printMST(parent, graph);
}

// Driver code
int main()
{
    /*    2    3
      (0)--(1)--(2)
       |   / \   |
      6| 8/   \5 |7
       | /     \ |
      (3)-------(4)
            9         */
    int graph[V][V] = { { 0, 2, 0, 6, 0 },
                        { 2, 0, 3, 8, 5 },
                        { 0, 3, 0, 0, 7 },
                        { 6, 8, 0, 0, 9 },
                        { 0, 5, 7, 9, 0 } };
    primMST(graph);
    return 0;
}
Output:
Edge Weight
0 - 1 2
1 - 2 3
0 - 3 6
1 - 4 5
Time Complexity of the above program is O(V^2).
STRASSEN'S MATRIX MULTIPLICATION:
Simple Divide and Conquer also leads to O(N^3); can there be a better
way?
In the simple divide and conquer method, the main source of the high time
complexity is the 8 recursive calls. The idea of Strassen's method is to reduce
the number of recursive calls to 7. Strassen's method is similar to the simple
divide and conquer method in that it also divides the matrices into
sub-matrices of size N/2 x N/2, but the four sub-matrices of the result are
calculated from seven products (p1..p7 in the implementation below) instead
of eight.
Time Complexity of Strassen's Method
Addition and subtraction of two matrices takes O(N^2) time, so the time
complexity can be written as
T(N) = 7T(N/2) + O(N^2)
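Solving this recurrence with the Master theorem gives the well-known sub-cubic bound:

```latex
T(N) = 7\,T(N/2) + O(N^2)
\quad\Longrightarrow\quad
T(N) = O\!\left(N^{\log_2 7}\right) \approx O(N^{2.81})
```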
# Version 3.6
import numpy as np

def split(matrix):
    """
    Splits a given matrix into four quarters.
    Input: n x n matrix (n even)
    Output: tuple of four n/2 x n/2 sub-matrices
    """
    row, col = matrix.shape
    row2, col2 = row // 2, col // 2
    return (matrix[:row2, :col2], matrix[:row2, col2:],
            matrix[row2:, :col2], matrix[row2:, col2:])

def strassen(x, y):
    """
    Computes the product of x and y recursively using Strassen's
    seven-multiplication scheme.
    Input: n x n matrices x and y (n a power of 2)
    Output: n x n matrix, the product of x and y
    """
    # Base case: 1x1 matrices
    if len(x) == 1:
        return x * y

    a, b, c, d = split(x)
    e, f, g, h = split(y)

    # The seven recursive products
    p1 = strassen(a, f - h)
    p2 = strassen(a + b, h)
    p3 = strassen(c + d, e)
    p4 = strassen(d, g - e)
    p5 = strassen(a + d, e + h)
    p6 = strassen(b - d, g + h)
    p7 = strassen(a - c, e + f)

    # The four quadrants of the result
    c11 = p5 + p4 - p2 + p6
    c12 = p1 + p2
    c21 = p3 + p4
    c22 = p1 + p5 - p3 - p7

    # Combining the 4 quadrants into a single matrix by stacking
    # horizontally and vertically.
    c = np.vstack((np.hstack((c11, c12)), np.hstack((c21, c22))))
    return c
DIJKSTRA'S ALGORITHM (Single Source Shortest Path):
The set sptSet is initially empty and the distances assigned to vertices are {0,
INF, INF, INF, INF, INF, INF, INF}, where INF indicates infinity. Now pick the
vertex with the minimum distance value. Vertex 0 is picked; include it in
sptSet, so sptSet becomes {0}. After including 0 in sptSet, update the
distance values of its adjacent vertices. Adjacent vertices of 0 are 1 and 7,
and their distance values are updated to 4 and 8. At this point only the
vertices with finite distance values are candidates; the vertices included in
the SPT are the ones already picked.
Pick the vertex with minimum distance value and not already included in SPT
(not in sptSET). The vertex 1 is picked and added to sptSet. So sptSet now
becomes {0, 1}. Update the distance values of adjacent vertices of 1. The
distance value of vertex 2 becomes 12.
Pick the vertex with the minimum distance value that is not already included
in the SPT (not in sptSet). Vertex 7 is picked, so sptSet now becomes
{0, 1, 7}. Update the distance values of the adjacent vertices of 7: the
distance values of vertices 6 and 8 become finite (15 and 9 respectively).
Pick the vertex with the minimum distance value not already in sptSet.
Vertex 6 is picked, so sptSet now becomes {0, 1, 7, 6}. Update the distance
values of the adjacent vertices of 6: the distance values of vertices 5 and 8
are updated.
We repeat the above steps until sptSet includes all vertices of the given
graph. Finally, we get the following Shortest Path Tree (SPT).
How to implement the above algorithm?
We use a boolean array sptSet[] to represent the set of vertices included in
SPT. If a value sptSet[v] is true, then vertex v is included in SPT, otherwise
not. Array dist[] is used to store the shortest distance values of all vertices.
// A Java program for Dijkstra's single source shortest path
// algorithm, for an adjacency matrix representation of the graph
import java.util.*;
import java.lang.*;
import java.io.*;

class ShortestPath {
    static final int V = 9;

    // A utility function to find the vertex with minimum distance value,
    // from the set of vertices not yet included in shortest path tree
    int minDistance(int dist[], Boolean sptSet[])
    {
        int min = Integer.MAX_VALUE, min_index = -1;
        for (int v = 0; v < V; v++)
            if (sptSet[v] == false && dist[v] <= min) {
                min = dist[v];
                min_index = v;
            }
        return min_index;
    }

    // A utility function to print the constructed distance array
    void printSolution(int dist[])
    {
        System.out.println("Vertex Distance from Source");
        for (int i = 0; i < V; i++)
            System.out.println(i + " \t\t " + dist[i]);
    }

    // Function that implements Dijkstra's single source shortest path
    // algorithm for a graph represented using adjacency matrix
    // representation
    void dijkstra(int graph[][], int src)
    {
        int dist[] = new int[V]; // The output array. dist[i] will hold
                                 // the shortest distance from src to i
        Boolean sptSet[] = new Boolean[V];
        for (int i = 0; i < V; i++) {
            dist[i] = Integer.MAX_VALUE;
            sptSet[i] = false;
        }
        dist[src] = 0;

        // Find shortest path for all vertices
        for (int count = 0; count < V - 1; count++) {
            // Pick the minimum distance vertex not yet processed;
            // u is always equal to src in the first iteration.
            int u = minDistance(dist, sptSet);
            sptSet[u] = true;

            // Update dist value of the adjacent vertices of the
            // picked vertex.
            for (int v = 0; v < V; v++)
                if (!sptSet[v] && graph[u][v] != 0
                    && dist[u] != Integer.MAX_VALUE
                    && dist[u] + graph[u][v] < dist[v])
                    dist[v] = dist[u] + graph[u][v];
        }
        printSolution(dist);
    }

    // Driver method
    public static void main(String[] args)
    {
        int graph[][] = new int[][] {
            { 0, 4, 0, 0, 0, 0, 0, 8, 0 },
            { 4, 0, 8, 0, 0, 0, 0, 11, 0 },
            { 0, 8, 0, 7, 0, 4, 0, 0, 2 },
            { 0, 0, 7, 0, 9, 14, 0, 0, 0 },
            { 0, 0, 0, 9, 0, 10, 0, 0, 0 },
            { 0, 0, 4, 14, 10, 0, 2, 0, 0 },
            { 0, 0, 0, 0, 0, 2, 0, 1, 6 },
            { 8, 11, 0, 0, 0, 0, 1, 0, 7 },
            { 0, 0, 2, 0, 0, 0, 6, 7, 0 } };
        ShortestPath t = new ShortestPath();
        t.dijkstra(graph, 0);
    }
}
OUTPUT:
Vertex Distance from Source
0 0
1 4
2 12
3 19
4 21
5 11
6 9
7 8
8 14
Notes:
1) The code calculates the shortest distances but doesn't calculate the path
information. We can create a parent array, update the parent array when a
distance is updated (as in Prim's implementation) and use it to show the
shortest path from the source to different vertices.
2) The code is for undirected graphs, the same Dijkstra function can be used
for directed graphs also.
3) The code finds the shortest distances from the source to all vertices. If we
are interested only in the shortest distance from the source to a single target,
we can break the for loop when the picked minimum distance vertex is equal
to the target (Step 3.a of the algorithm).
4) Time Complexity of the implementation is O(V^2). If the input graph is
represented using adjacency list, it can be reduced to O(E log V) with the
help of a binary heap. Please see
Dijkstra’s Algorithm for Adjacency List Representation for more details.
5) Dijkstra's algorithm doesn't work for graphs with negative weight cycles. It
may give correct results for a graph with negative edges, but only if a vertex
is allowed to be visited multiple times, and that version loses its fast time
complexity. For graphs with negative weight edges, Bellman-Ford can be
used instead.
BELLMAN-FORD ALGORITHM:
Let all edges be processed in the following order: (B, E), (D, B), (B, D), (A,
B), (A, C), (D, C), (B, C), (E, D). We get the following distances when all
edges are processed the first time (the distance tables are in a figure that is
not reproduced here). The first row shows the initial distances. The second
row shows the distances after edges (B, E), (D, B), (B, D) and (A, B) are
processed. The third row shows the distances after (A, C) is processed. The
fourth row shows the distances after (D, C), (B, C) and (E, D) are processed.
The first iteration guarantees to give all shortest paths which are at most 1
edge long. We get the following distances when all edges are processed
second time (The last row shows final values).
The second iteration guarantees to give all shortest paths which are at most
2 edges long. The algorithm processes all edges 2 more times. The
distances are minimized after the second iteration, so third and fourth
iterations don’t update the distances.
Implementation:
// A Java program for Bellman-Ford's single source shortest path
// algorithm.
import java.util.*;
import java.lang.*;
import java.io.*;

class Graph {
    // A class to represent a weighted edge in the graph
    class Edge {
        int src, dest, weight;
        Edge() { src = dest = weight = 0; }
    };

    int V, E;
    Edge edge[];

    // Creates a graph with V vertices and E edges
    Graph(int v, int e)
    {
        V = v;
        E = e;
        edge = new Edge[e];
        for (int i = 0; i < e; ++i)
            edge[i] = new Edge();
    }

    // The main function that finds shortest distances from src to all
    // other vertices using Bellman-Ford. It also detects negative
    // weight cycles.
    void BellmanFord(Graph graph, int src)
    {
        int V = graph.V, E = graph.E;
        int dist[] = new int[V];

        // Step 1: Initialize distances from src to all other
        // vertices as INFINITE
        for (int i = 0; i < V; ++i)
            dist[i] = Integer.MAX_VALUE;
        dist[src] = 0;

        // Step 2: Relax all edges |V| - 1 times
        for (int i = 1; i < V; ++i) {
            for (int j = 0; j < E; ++j) {
                int u = graph.edge[j].src;
                int v = graph.edge[j].dest;
                int weight = graph.edge[j].weight;
                if (dist[u] != Integer.MAX_VALUE
                    && dist[u] + weight < dist[v])
                    dist[v] = dist[u] + weight;
            }
        }

        // Step 3: Check for negative-weight cycles
        for (int j = 0; j < E; ++j) {
            int u = graph.edge[j].src;
            int v = graph.edge[j].dest;
            int weight = graph.edge[j].weight;
            if (dist[u] != Integer.MAX_VALUE
                && dist[u] + weight < dist[v]) {
                System.out.println(
                    "Graph contains negative weight cycle");
                return;
            }
        }
        printArr(dist, V);
    }

    // A utility function used to print the solution
    void printArr(int dist[], int V)
    {
        System.out.println("Vertex Distance from Source");
        for (int i = 0; i < V; ++i)
            System.out.println(i + "\t\t" + dist[i]);
    }

    // Driver method
    public static void main(String[] args)
    {
        int V = 5; // Number of vertices in graph
        int E = 8; // Number of edges in graph
        Graph graph = new Graph(V, E);

        // add edge 0-1 (or A-B in above figure)
        graph.edge[0].src = 0;
        graph.edge[0].dest = 1;
        graph.edge[0].weight = -1;
        // add edge 0-2 (or A-C in above figure)
        graph.edge[1].src = 0;
        graph.edge[1].dest = 2;
        graph.edge[1].weight = 4;
        // add edge 1-2 (or B-C in above figure)
        graph.edge[2].src = 1;
        graph.edge[2].dest = 2;
        graph.edge[2].weight = 3;
        graph.edge[3].src = 1;
        graph.edge[3].dest = 3;
        graph.edge[3].weight = 2;
        graph.edge[4].src = 1;
        graph.edge[4].dest = 4;
        graph.edge[4].weight = 2;
        graph.edge[5].src = 3;
        graph.edge[5].dest = 2;
        graph.edge[5].weight = 5;
        graph.edge[6].src = 3;
        graph.edge[6].dest = 1;
        graph.edge[6].weight = 1;
        graph.edge[7].src = 4;
        graph.edge[7].dest = 3;
        graph.edge[7].weight = -3;

        graph.BellmanFord(graph, 0);
    }
}
Output:
Vertex Distance from Source
0               0
1               -1
2               2
3               -2
4               1
Notes
1) Negative weights are found in various applications of graphs. For
example, instead of paying cost for a path, we may get some advantage if
we follow the path.
2) Bellman-Ford works better (better than Dijkstra’s) for distributed systems.
Unlike Dijkstra’s where we need to find the minimum value of all vertices, in
Bellman-Ford, edges are considered one by one.
3) Bellman-Ford does not work for undirected graphs with negative edges, as
an undirected negative edge u-v behaves like the negative weight cycle
u -> v -> u and will be reported as one.
MULTISTAGE GRAPH (Shortest Path):
// From 0, we can go to 1, 2 or 3 to reach 7.
M(1, 0) = min(1 + M(2, 1),
              2 + M(2, 2),
              5 + M(2, 3))
This means that our problem of 0 --> 7 is now sub-divided into 3 sub-
problems: reaching 7 from 1, from 2 and from 3.
So if we have a total of 'n' stages and target T, then the stopping
condition is:
M(n-1, i) = cost(i --> T) + M(n, T), where M(n, T) = 0
M(1, 0)
/ | \
/ | \
M(2, 1) M(2, 2) M(2, 3)
/ \ / \ / \
M(3, 4) M(3, 5) M(3, 4) M(3, 5) M(3, 6) M(3, 6)
. . . . . .
. . . . . .
. . . . . .
So, here we have drawn a very small part of the Recursion Tree and we can
already see Overlapping Sub-Problems. We can largely reduce the number
of M(x, y) evaluations using Dynamic Programming.
Implementation details:
The below implementation assumes that nodes are numbered from 0 to N-1
from first stage (source) to last stage (destination). We also assume that the
input graph is multistage.
// Java program to find the shortest distance in a multistage graph
class GFG {
    static int N = 8;
    static int INF = Integer.MAX_VALUE;

    // Returns the shortest distance from node 0 to node N-1
    public static int shortestDist(int[][] graph)
    {
        // dist[i] holds the shortest distance from node i to node N-1
        int[] dist = new int[N];
        dist[N - 1] = 0;

        // Work backwards from the next-to-last node
        for (int i = N - 2; i >= 0; i--) {
            dist[i] = INF;
            // Try every outgoing edge of node i
            for (int j = i + 1; j < N; j++) {
                if (graph[i][j] != INF && dist[j] != INF)
                    dist[i] = Math.min(dist[i],
                                       graph[i][j] + dist[j]);
            }
        }
        return dist[0];
    }
// Driver code
public static void main(String[] args)
{
// Graph stored in the form of an
// adjacency Matrix (the last node has no outgoing edges)
int[][] graph = new int[][]{{INF, 1, 2, 5, INF, INF, INF, INF},
    {INF, INF, INF, INF, 4, 11, INF, INF},
    {INF, INF, INF, INF, 9, 5, 16, INF},
    {INF, INF, INF, INF, INF, INF, 2, INF},
    {INF, INF, INF, INF, INF, INF, INF, 18},
    {INF, INF, INF, INF, INF, INF, INF, 13},
    {INF, INF, INF, INF, INF, INF, INF, 2},
    {INF, INF, INF, INF, INF, INF, INF, INF}};
System.out.println(shortestDist(graph));
}
}
Output: 9
FLOYD WARSHALL ALGORITHM:
The Floyd Warshall Algorithm solves the All Pairs Shortest Path problem:
finding the shortest distances between every pair of vertices in a given
edge-weighted directed graph.
Example:
Input:
graph[][] = { {0, 5, INF, 10},
{INF, 0, 3, INF},
{INF, INF, 0, 1},
{INF, INF, INF, 0} }
which represents the following graph
          10
   (0)------->(3)
    |        /|\
   5|         |
    |         | 1
   \|/        |
   (1)------->(2)
          3
Note that the value of graph[i][j] is 0 if i is equal to j,
and graph[i][j] is INF (infinite) if there is no edge from vertex i
to j.
Output:
Shortest distance matrix
0 5 8 9
INF 0 3 4
INF INF 0 1
INF INF INF 0
// A Java program for the Floyd Warshall All Pairs Shortest Path algorithm
class AllPairShortestPath
{
    final static int INF = 99999, V = 4;

    void floydWarshall(int graph[][])
    {
        int dist[][] = new int[V][V];

        // Initialize the solution matrix the same as the input graph
        for (int i = 0; i < V; i++)
            for (int j = 0; j < V; j++)
                dist[i][j] = graph[i][j];

        // Pick all vertices one by one as an intermediate vertex k
        for (int k = 0; k < V; k++)
            for (int i = 0; i < V; i++)
                for (int j = 0; j < V; j++)
                    // If k is on a shorter path from i to j,
                    // update dist[i][j]
                    if (dist[i][k] + dist[k][j] < dist[i][j])
                        dist[i][j] = dist[i][k] + dist[k][j];

        printSolution(dist);
    }

    // A utility function to print the solution
    void printSolution(int dist[][])
    {
        System.out.println("Following matrix shows the shortest "
                           + "distances between every pair of vertices");
        for (int i = 0; i < V; ++i) {
            for (int j = 0; j < V; ++j) {
                if (dist[i][j] == INF)
                    System.out.print("INF ");
                else
                    System.out.print(dist[i][j] + "   ");
            }
            System.out.println();
        }
    }

    // Driver code
    public static void main(String[] args)
    {
        int graph[][] = { { 0, 5, INF, 10 },
                          { INF, 0, 3, INF },
                          { INF, INF, 0, 1 },
                          { INF, INF, INF, 0 } };
        AllPairShortestPath a = new AllPairShortestPath();
        a.floydWarshall(graph);
    }
}
Output:
Following matrix shows the shortest distances between every pair of
vertices
0 5 8 9
INF 0 3 4
INF INF 0 1
INF INF INF 0
Time Complexity: O(V^3)
TRAVELLING SALESMAN PROBLEM:
For example, consider the graph shown in the figure (not reproduced here). A
TSP tour in the graph is 1-2-4-3-1. The cost of the tour is 10+25+30+15,
which is 80.
Following are different solutions for the travelling salesman problem.
Naive Solution:
1) Consider city 1 as the starting and ending point.
2) Generate all (n-1)! permutations of cities.
3) Calculate cost of every permutation and keep track of minimum cost
permutation.
4) Return the permutation with minimum cost.
Time Complexity: Θ(n!)
Dynamic Programming:
Let the given set of vertices be {1, 2, 3, 4, ..., n}. Let us consider 1 as the
starting and ending point of the output. For every other vertex i (other than 1),
we find the minimum cost path with 1 as the starting point, i as the ending
point and all vertices appearing exactly once. Let the cost of this path be
cost(i); the cost of the corresponding cycle would be cost(i) + dist(i, 1), where
dist(i, 1) is the distance from i to 1. Finally, we return the minimum of all
[cost(i) + dist(i, 1)] values. This looks simple so far. Now the question is how
to get cost(i)?
To calculate cost(i) using Dynamic Programming, we need to have some
recursive relation in terms of sub-problems. Let us define a term C(S, i) be
the cost of the minimum cost path visiting each vertex in set S exactly once,
starting at 1 and ending at i.
We start with all subsets of size 2 and calculate C(S, i) for all subsets where
S is the subset, then we calculate C(S, i) for all subsets S of size 3 and so
on. Note that 1 must be present in every subset.
For a set of size n, we consider n-2 subsets, each of size n-1, such that no
subset contains the nth vertex.
Using the above recurrence relation, we can write a dynamic programming
based solution. There are at most O(n * 2^n) subproblems, and each one
takes linear time to solve. The total running time is therefore O(n^2 * 2^n).
The time complexity is much less than O(n!), but still exponential. The space
required is also exponential, so this approach is also infeasible even for a
slightly higher number of vertices.
// Java program to implement the traveling salesman problem using
// the naive (permutation) approach
import java.util.*;

class GFG {
    static int V = 4;

    // implementation of traveling
    // Salesman Problem
    static int travllingSalesmanProblem(int graph[][],
                                        int s)
    {
        // store all vertices apart from the source vertex
        ArrayList<Integer> vertex =
            new ArrayList<Integer>();
        for (int i = 0; i < V; i++)
            if (i != s)
                vertex.add(i);

        // store the minimum weight
        // Hamiltonian Cycle.
        int min_path = Integer.MAX_VALUE;
        do {
            // compute the current path weight
            int current_pathweight = 0;
            int k = s;
            for (int i = 0; i < vertex.size(); i++) {
                current_pathweight +=
                    graph[k][vertex.get(i)];
                k = vertex.get(i);
            }
            current_pathweight += graph[k][s];

            // update minimum
            min_path = Math.min(min_path,
                                current_pathweight);
        } while (findNextPermutation(vertex));

        return min_path;
    }

    // Function to swap the elements at positions left and right
    public static ArrayList<Integer> swap(
        ArrayList<Integer> data, int left, int right)
    {
        int temp = data.get(left);
        data.set(left, data.get(right));
        data.set(right, temp);
        return data;
    }

    // Function to reverse the sub-array from left to right,
    // both inclusive
    public static ArrayList<Integer> reverse(
        ArrayList<Integer> data, int left, int right)
    {
        while (left < right) {
            int temp = data.get(left);
            data.set(left++,
                     data.get(right));
            data.set(right--, temp);
        }
        return data;
    }

    // Function to find the next permutation of the list in place;
    // returns false when no further permutation exists
    public static boolean findNextPermutation(
        ArrayList<Integer> data)
    {
        if (data.size() <= 1)
            return false;

        // Find the rightmost element smaller than its
        // successor (the pivot)
        int last = data.size() - 2;
        while (last >= 0) {
            if (data.get(last) <
                data.get(last + 1))
                break;
            last--;
        }
        if (last < 0)
            return false;

        // Find the rightmost element greater than
        // the pivot
        int nextGreater = data.size() - 1;
        for (int i = data.size() - 1; i > last; i--) {
            if (data.get(i) >
                data.get(last)) {
                nextGreater = i;
                break;
            }
        }

        // Swap the next greater element with
        // the pivot
        data = swap(data,
                    nextGreater, last);

        // Reverse the suffix after the pivot
        data = reverse(data, last + 1,
                       data.size() - 1);

        // next_permutation is done
        return true;
    }

    // Driver Code
    public static void main(String[] args)
    {
        // matrix representation of the example graph
        int graph[][] = { { 0, 10, 15, 20 },
                          { 10, 0, 35, 25 },
                          { 15, 35, 0, 30 },
                          { 20, 25, 30, 0 } };
        int s = 0;
        System.out.println(
            travllingSalesmanProblem(graph, s));
    }
}
Output
80
8-QUEENS PROBLEM:
The eight queens problem is the problem of placing eight queens on an 8×8
chessboard such that none of them attack one another (no two are in the
same row, column, or diagonal). More generally, the n queens problem
places n queens on an n×n chessboard.
There are different solutions for the problem.
The expected output is a binary matrix which has 1s for the squares where
queens are placed. For example, the following is the output matrix for a
4 queens solution.
{ 0, 1, 0, 0}
{ 0, 0, 0, 1}
{ 1, 0, 0, 0}
{ 0, 0, 1, 0}
Naive Algorithm
Generate all possible configurations of queens on board and print a
configuration that satisfies the given constraints.
while there are untried configurations
{
generate the next configuration
if queens don't attack in this configuration then
{
print this configuration;
}
}
Backtracking Algorithm
The idea is to place queens one by one in different columns, starting from
the leftmost column. When we place a queen in a column, we check for
clashes with already placed queens. In the current column, if we find a row
for which there is no clash, we mark this row and column as part of the
solution. If we do not find such a row due to clashes then we backtrack and
return false.
1) Start in the leftmost column
2) If all queens are placed
return true
3) Try all rows in the current column.
Do following for every tried row.
a) If the queen can be placed safely in this row
then mark this [row, column] as part of the
solution and recursively check if placing
queen here leads to a solution.
b) If placing the queen in [row, column] leads to
a solution then return true.
c) If placing queen doesn't lead to a solution then
unmark this [row, column] (Backtrack) and go to
step (a) to try other rows.
4) If all rows have been tried and nothing worked,
   return false to trigger backtracking.
/* C program to solve the N Queen problem using backtracking */
#include <stdbool.h>
#include <stdio.h>
#define N 4

// A utility function to print the solution
void printSolution(int board[N][N])
{
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++)
            printf(" %d ", board[i][j]);
        printf("\n");
    }
}

// Returns true if a queen can be placed on board[row][col].
// Only the left side needs checking, because queens are placed
// column by column from the left.
bool isSafe(int board[N][N], int row, int col)
{
    int i, j;
    // Check this row on the left side
    for (i = 0; i < col; i++)
        if (board[row][i])
            return false;
    // Check the upper diagonal on the left side
    for (i = row, j = col; i >= 0 && j >= 0; i--, j--)
        if (board[i][j])
            return false;
    // Check the lower diagonal on the left side
    for (i = row, j = col; j >= 0 && i < N; i++, j--)
        if (board[i][j])
            return false;
    return true;
}

// A recursive utility function to solve the N Queen problem
bool solveNQUtil(int board[N][N], int col)
{
    // Base case: all queens are placed
    if (col >= N)
        return true;
    // Try placing a queen in each row of this column
    for (int i = 0; i < N; i++) {
        if (isSafe(board, i, col)) {
            board[i][col] = 1; // place queen
            if (solveNQUtil(board, col + 1))
                return true;
            board[i][col] = 0; // backtrack
        }
    }
    // No row worked in this column
    return false;
}

// Solves the N Queen problem using backtracking
bool solveNQ()
{
    int board[N][N] = { 0 };
    if (solveNQUtil(board, 0) == false) {
        printf("Solution does not exist");
        return false;
    }
    printSolution(board);
    return true;
}

// driver program to test above function
int main()
{
    solveNQ();
    return 0;
}