
Brute Force

A straightforward approach, usually based directly on the problem's statement and the definitions of the concepts involved.

Examples:
1. Computing a^n (a > 0, n a nonnegative integer)
2. Computing n!
3. Multiplying two matrices
4. Searching for a key of a given value in a list
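
To make the first, second and fourth examples concrete, here is a minimal C sketch of the brute-force definitions; the function names power, factorial and sequential_search are illustrative choices, not taken from the slides.

#include <stdio.h>

// a^n computed directly from its definition: multiply a by itself n times.
double power(double a, int n)
{
    double result = 1.0;
    for (int i = 0; i < n; i++)
        result *= a;
    return result;
}

// n! computed directly from its definition: 1 * 2 * ... * n.
long factorial(int n)
{
    long result = 1;
    for (int i = 2; i <= n; i++)
        result *= i;
    return result;
}

// Sequential (linear) search: compare the key with every element in turn.
// Returns the index of the first match, or -1 if the key is not present.
int sequential_search(const int a[], int n, int key)
{
    for (int i = 0; i < n; i++)
        if (a[i] == key)
            return i;
    return -1;
}

int main(void)
{
    int a[] = {25, 12, 32, 11, 45};
    printf("2^10 = %.0f, 5! = %ld, index of 11 = %d\n",
           power(2, 10), factorial(5), sequential_search(a, 5, 11));
    return 0;
}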
Brute-Force Sorting Algorithm
• Selection Sort

• In this method, we first determine the smallest element in the given array and exchange it with A[0].

• Next, out of the remaining elements, we find the next smallest element and swap it with A[1], and so on, up to position n-1.

• In each iteration, the unsorted part of the array shrinks and the partially sorted part grows.
Analysis of Selection Sort
// C function for selection sort
void selection_sort(int a[], int n)
{
    int i, j, min, temp;
    for (i = 0; i < n - 1; i++)
    {
        min = i;                      // index of the smallest element in a[i..n-1]
        for (j = i + 1; j < n; j++)
        {
            if (a[j] < a[min])
                min = j;
        }
        temp = a[i];                  // swap a[i] and a[min]
        a[i] = a[min];
        a[min] = temp;
    }
}

Analysis:

The basic operation (the key comparison) is executed n(n-1)/2 times, so C(n) = O(n²).
Bubble Sort:

This is the simplest and easiest sorting technique. In it, two successive items A[i] and A[i+1] are exchanged whenever A[i] > A[i+1].

Another name for bubble sort is "sinking sort", because in each pass (iteration) the larger values sink to the bottom of the array while the smaller elements gradually "bubble" their way up toward the top. Hence the name "bubble sort".
// C function for bubble sort
void bubble_sort(int a[], int n)
{
    int i, j, temp;
    for (i = 0; i < n - 1; i++)           // one pass per iteration of the outer loop
    {
        for (j = 0; j < n - 1 - i; j++)   // compare adjacent pairs in the unsorted part
        {
            if (a[j] > a[j + 1])
            {
                temp = a[j];              // swap a[j] and a[j+1]
                a[j] = a[j + 1];
                a[j + 1] = temp;
            }
        }
    }
}
Analysis:
The algorithm consists of two for loops. The outer for loop controls the number of passes, and (n-1) passes are required to sort n elements, so the outer loop index i varies from 0 to n-2.
According to the function, the inner loop works as follows (shown here for n = 6):

Pass 1 : i=0   j = 0,1,2,3,4   (up to n-2-0)
Pass 2 : i=1   j = 0,1,2,3     (up to n-2-1)
Pass 3 : i=2   j = 0,1,2       (up to n-2-2)
Pass 4 : i=3   j = 0,1         (up to n-2-3)
Pass 5 : i=4   j = 0           (up to n-2-4)

In general, pass i performs n-1-i comparisons, and summing over all passes gives n(n-1)/2 comparisons.
So the complexity is C(n) = O(n²).
Brute-Force String Matching
• Pattern : a string of m characters to search for
• Text : a (longer) string of n characters to search in
• Problem: find a substring in the text that matches the pattern

Brute-force algorithm
Step 1 Align pattern at beginning of text
Step 2 Moving from left to right, compare each character of
pattern to the corresponding character in text until
• all characters are found to match (successful search); or
• a mismatch is detected
Step 3 While pattern is not found and the text is not yet
exhausted, realign pattern one position to the right and
repeat Step 2
Pseudocode
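
The pseudocode slide is not reproduced here; the following is a minimal C sketch of the three steps above (the function name brute_force_match and the convention of returning the first matching position, or -1, are choices made here, not taken from the slides).

#include <stdio.h>
#include <string.h>

// Brute-force string matching: slide the pattern over the text one
// position at a time and compare character by character.
// Returns the index of the first match, or -1 if there is none.
int brute_force_match(const char *text, const char *pattern)
{
    int n = (int)strlen(text);
    int m = (int)strlen(pattern);
    for (int i = 0; i <= n - m; i++)                 // Step 1/3: align pattern at position i
    {
        int j = 0;
        while (j < m && text[i + j] == pattern[j])   // Step 2: compare left to right
            j++;
        if (j == m)                                  // all m characters matched
            return i;
    }
    return -1;                                       // text exhausted, pattern not found
}

int main(void)
{
    printf("%d\n", brute_force_match("NOBODY_NOTICED_HIM", "NOT"));  // prints 7
    return 0;
}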
Brute-Force Strengths and Weaknesses

• Strengths
• wide applicability
• simplicity
• yields reasonable algorithms for some important problems
(e.g., matrix multiplication, sorting, searching, string matching)

• Weaknesses
• rarely yields efficient algorithms
• some brute-force algorithms are unacceptably slow
• not as constructive as some other design techniques
Exhaustive Search

A brute-force approach to a problem that involves searching for an element with a special property, usually among combinatorial objects such as permutations, combinations, or subsets of a set.

Method:
• generate a list of all potential solutions to the problem in a systematic manner

• evaluate potential solutions one by one, disqualifying infeasible ones and, for an optimization problem, keeping track of the best one found so far

• when the search ends, announce the solution(s) found


Example 1: Traveling Salesman Problem

• Given n cities with known distances between each pair, find the
shortest tour that passes through all the cities exactly once
before returning to the starting city

• Alternatively: find the shortest Hamiltonian circuit in a weighted connected graph
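
As a sketch of the exhaustive-search idea (not an algorithm given on the slides), the lengths of all (n-1)! tours can be generated recursively by fixing the start city and permuting the rest; the city count and distance matrix below are made-up illustrative values.

#include <stdio.h>

#define N 4
// Hypothetical symmetric distance matrix for 4 cities (illustrative values only).
int dist[N][N] = {
    { 0,  2,  9, 10},
    { 2,  0,  6,  4},
    { 9,  6,  0,  8},
    {10,  4,  8,  0}
};

int best = 1000000;          // length of the best tour found so far
int perm[N], used[N];        // current ordering of cities and "already placed" flags

// Fix city 0 as the start and recursively try every ordering of the others.
void search(int pos, int length)
{
    if (pos == N) {                                  // complete tour: close the cycle
        int total = length + dist[perm[N - 1]][0];
        if (total < best)
            best = total;
        return;
    }
    for (int c = 1; c < N; c++) {
        if (!used[c]) {
            used[c] = 1;
            perm[pos] = c;
            search(pos + 1, length + dist[perm[pos - 1]][c]);
            used[c] = 0;
        }
    }
}

int main(void)
{
    perm[0] = 0;
    used[0] = 1;
    search(1, 0);
    printf("shortest tour length = %d\n", best);     // prints 23 for this matrix
    return 0;
}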
Example 2: Knapsack Problem

Given n items:
• weights: w1, w2, …, wn
• values: v1, v2, …, vn
• a knapsack of capacity W

Find most valuable subset of the items that fit into the knapsack
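
A minimal exhaustive-search sketch for the knapsack problem (not from the slides) enumerates all 2^n subsets, e.g. with a bitmask; the item weights, values and capacity below are illustrative assumptions.

#include <stdio.h>

int main(void)
{
    // Hypothetical instance: 4 items and capacity W = 10 (illustrative values only).
    int n = 4, W = 10;
    int w[] = {7, 3, 4, 5};
    int v[] = {42, 12, 40, 25};

    int best_value = 0, best_subset = 0;

    // Each bitmask from 0 to 2^n - 1 encodes one subset of the items.
    for (int mask = 0; mask < (1 << n); mask++) {
        int weight = 0, value = 0;
        for (int i = 0; i < n; i++) {
            if (mask & (1 << i)) {       // item i is in this subset
                weight += w[i];
                value  += v[i];
            }
        }
        if (weight <= W && value > best_value) {   // feasible and better than the best so far
            best_value  = value;
            best_subset = mask;
        }
    }

    printf("best value = %d, items:", best_value);
    for (int i = 0; i < n; i++)
        if (best_subset & (1 << i))
            printf(" %d", i + 1);
    printf("\n");
    return 0;
}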
Example 3: The Assignment Problem
There are n people who need to be assigned to n jobs, one
person per job. The cost of assigning person i to job j is C[i,j].
Find an assignment that minimizes the total cost.

              Job 0   Job 1   Job 2   Job 3
Person 0        9       2       7       8
Person 1        6       4       3       7
Person 2        5       8       1       8
Person 3        7       6       9       4

Algorithmic plan: pose the problem as one about the cost matrix above, generate all legitimate assignments (permutations of the jobs), compute their costs, and select the cheapest one.
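
A minimal sketch of this plan (not code from the slides) generates every permutation of the jobs recursively and keeps the cheapest assignment, using the 4x4 cost matrix above; the names assign and taken are choices made here.

#include <stdio.h>

#define N 4
// Cost matrix from the example: C[i][j] is the cost of assigning person i to job j.
int C[N][N] = {
    {9, 2, 7, 8},
    {6, 4, 3, 7},
    {5, 8, 1, 8},
    {7, 6, 9, 4}
};

int best = 1000000;     // cheapest total cost found so far
int taken[N];           // taken[j] = 1 if job j is already assigned

// Give a job to person `person`, trying every still-free job.
void assign(int person, int cost)
{
    if (person == N) {              // every person has a job: one complete assignment
        if (cost < best)
            best = cost;
        return;
    }
    for (int j = 0; j < N; j++) {
        if (!taken[j]) {
            taken[j] = 1;
            assign(person + 1, cost + C[person][j]);
            taken[j] = 0;
        }
    }
}

int main(void)
{
    assign(0, 0);
    printf("minimum total cost = %d\n", best);   // prints 13 for this matrix
    return 0;
}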
Final Comments on Exhaustive Search

• Exhaustive-search algorithms run in a realistic amount of time only on very small instances.

• In some cases, there are much better alternatives!

• In many cases, exhaustive search or a variation of it is the only known way to get an exact solution.
Divide and Conquer
Divide and conquer is one of the most powerful design strategies in the theory of algorithms.

Divide-and-conquer algorithms work according to the following general plan:

a) A problem's instance is divided into several smaller instances of the same problem, ideally of about the same size.
b) The smaller instances are solved (by using recursion or by using a different algorithm).
c) The solutions obtained for the smaller instances are combined to get a solution to the original problem.
The figure below shows the diagrammatic representation of the divide-and-conquer strategy.

[Figure: a problem of size n is divided into sub-problem 1 of size n/2 and sub-problem 2 of size n/2; the solutions to the two sub-problems are then combined into the solution to the original problem.]
Quick sort
This sorting technique works very well on large sets of data. Basically, quick sort works as follows.

The first step is to partition the given array into two sub-arrays around a key (pivot) element, such that the elements to the left of the key are less than the key and the elements to the right of the key are greater than the key.

This step is called the "partition" technique. In the next step the sub-arrays are sorted recursively.
Partition technique :
We take the first element of the given list of n elements as the key (pivot) element.

The remaining (n-1) elements are compared with this key element in order to place the key at its appropriate (final) position.

This is achieved primarily by maintaining two indices i and j. The initialization of i and j, the comparisons, and the swaps are done by the following function.
int partition(int a[], int low, int high)
{
    int i, j, temp, key;
    key = a[low];                         // the first element is the pivot
    i = low + 1;
    j = high;
    while (1)
    {
        while (i < high && key >= a[i])   // move i right past elements <= key
            i++;
        while (key < a[j])                // move j left past elements > key
            j--;
        if (i < j)                        // indices have not crossed: swap a[i] and a[j]
        {
            temp = a[i];
            a[i] = a[j];
            a[j] = temp;
        }
        else                              // indices crossed: put the key in its place
        {
            temp = a[low];
            a[low] = a[j];
            a[j] = temp;
            return j;                     // final position of the key
        }
    }
}
// C function for quick sort
void quick_sort(int a[], int low, int high)
{
    int j;
    if (low < high)
    {
        j = partition(a, low, high);      // a[j] is now in its final position
        quick_sort(a, low, j - 1);
        quick_sort(a, j + 1, high);
    }
}
// Complete program for quick sort.
#include <stdio.h>
/* include the two functions above */
int main(void)
{
    int i, n, a[20];
    printf("enter the value of n: ");
    scanf("%d", &n);
    printf("enter the numbers to be sorted: ");
    for (i = 0; i < n; i++)
        scanf("%d", &a[i]);
    quick_sort(a, 0, n - 1);
    printf("the sorted array is: ");
    for (i = 0; i < n; i++)
        printf("%d ", a[i]);
    return 0;
}
Example (key = a[low] = 25):

25 12 32 11 45 36 18 38
• i moves right while key >= a[i] and stops at 32; j moves left while key < a[j] and stops at 18.
• Since i < j, a[i] and a[j] are swapped:

25 12 18 11 45 36 32 38
• i continues right and stops at 45; j continues left and stops at 11.
• Now the index value of i is greater than j, so the key a[low] is swapped with a[j] and j is returned as the partition position:

11 12 18 25 45 36 32 38
Best-Case Analysis:

The best case arises when the partition always splits the array in the middle. Then

T(n) = T(n/2) + T(n/2) + d·n
     = 2T(n/2) + d·n
     = 2(2T(n/4) + d·n/2) + d·n = 4T(n/4) + 2·d·n
     = 4(2T(n/8) + d·n/4) + 2·d·n = 8T(n/8) + 3·d·n
     = ...
     = 2^k · T(n/2^k) + k·d·n
     = n·T(1) + d·n·log2(n)          (taking n = 2^k, so k = log2(n))

T(n) = O(n log n)
Merge Sort:

The process of merging two sorted vectors into a single sorted vector is called a simple merge.

The only necessary condition is that both vectors (arrays) be sorted.

The following algorithm merges two sorted arrays:
Algorithm Merge(A, B, C, m, n)
// A and B are two sorted input arrays of size m and n respectively.
// C is the output array.
    i = j = k = 0;
    while (i < m and j < n)
        if (A[i] < B[j]) then
            C[k] = A[i];  i = i + 1;  k = k + 1;
        else
            C[k] = B[j];  j = j + 1;  k = k + 1;
        end if
    end while
    while (i < m)
        C[k] = A[i];  i = i + 1;  k = k + 1;
    end while
    while (j < n)
        C[k] = B[j];  j = j + 1;  k = k + 1;
    end while
In the above algorithm we considered two input arrays. Suppose instead that we have a single array (vector) that is divided into two sorted parts, as shown below.

Vector A:   10 20 30 40 50 | 15 25 35 45
            (low .. mid)     (mid+1 .. high)

Now suppose the array to be sorted is

65 50 25 10 35 25 75 30

These elements can be sorted using merge sort as follows.
Algorithm MergeSort(A, low, high)
{
    if (low < high)
        mid = (low + high) / 2;
        MergeSort(A, low, mid);
        MergeSort(A, mid + 1, high);
        Merge(A, low, mid, high);
    end if
}
// C program for merge sort.
#include <stdio.h>
#define MAX 50

void merge(int a[], int low, int mid, int high)
{
    int c[MAX];                          // auxiliary array for the merged result
    int i = low, j = mid + 1, k = low;
    while (i <= mid && j <= high)        // merge the two sorted halves
    {
        if (a[i] < a[j])
        {
            c[k] = a[i];
            i = i + 1;  k = k + 1;
        }
        else
        {
            c[k] = a[j];
            j = j + 1;  k = k + 1;
        }
    }
    while (i <= mid)                     // copy any remaining elements of the left half
        c[k++] = a[i++];
    while (j <= high)                    // copy any remaining elements of the right half
        c[k++] = a[j++];
    for (i = low; i <= high; i++)        // copy the merged result back into a[]
        a[i] = c[i];
}

void merge_sort(int a[], int low, int high)
{
    int mid;
    if (low < high)
    {
        mid = (low + high) / 2;
        merge_sort(a, low, mid);
        merge_sort(a, mid + 1, high);
        merge(a, low, mid, high);
    }
}

int main(void)
{
    int a[MAX];
    int n, i;
    printf("enter the number of elements to sort: ");
    scanf("%d", &n);
    printf("enter the elements: ");
    for (i = 0; i < n; i++)
        scanf("%d", &a[i]);
    merge_sort(a, 0, n - 1);
    printf("the sorted elements are ");
    for (i = 0; i < n; i++)
        printf("%d ", a[i]);
    return 0;
}
              65 50 25 10 35 25 75 30
Dividing:
  1)  65 50 25 10 | 35 25 75 30
  2)  65 50 | 25 10 | 35 25 | 75 30
  3)  65 | 50 | 25 | 10 | 35 | 25 | 75 | 30

Conquering (merging):
  1)  50 65 | 10 25 | 25 35 | 30 75
  2)  10 25 50 65 | 25 30 35 75
  3)  10 25 25 30 35 50 65 75
Binary Search :
It is a simple searching technique which can be applied if the items to be searched are arranged in either ascending or descending order.

Method :
The program can be designed as follows,

If low is the position of the first element and high is the position of the last element, the position of the middle element can be obtained using the statement

mid = ( low + high ) / 2 ;

The key to be searched is compared with the middle element. This can be done using the statement

if ( key == a[mid] )
{
pos = mid;
return;
}

If this condition is false, the key may lie in the left part of the array (indices less than mid) or in the right part (indices greater than mid). If the key is less than a[mid], the range from low to mid-1 has to be searched; otherwise the range from mid+1 to high has to be searched.

if ( key < a[mid] )


high = mid - 1;
else
low = mid + 1;
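
Putting the fragments above together, one possible complete function is the following iterative sketch (the recursive version analyzed below is equivalent; the name binary_search and the -1 "not found" convention are choices made here, not taken from the slides).

#include <stdio.h>

// Iterative binary search on an array sorted in ascending order.
// Returns the position of key in a[0..n-1], or -1 if it is not present.
int binary_search(const int a[], int n, int key)
{
    int low = 0, high = n - 1;
    while (low <= high)
    {
        int mid = (low + high) / 2;
        if (key == a[mid])          // found: report the position
            return mid;
        if (key < a[mid])           // key can only be in a[low..mid-1]
            high = mid - 1;
        else                        // key can only be in a[mid+1..high]
            low = mid + 1;
    }
    return -1;                      // range is empty: key is not in the array
}

int main(void)
{
    int a[] = {10, 15, 25, 35, 45};
    printf("%d\n", binary_search(a, 5, 35));   // prints 3
    return 0;
}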
Analysis of Binary Search : ( Recursive ) :
The basic operation in binary search is the comparison of
key with the elements of the array.

Best-case analysis :
If the key happens to be A[mid] at the very first comparison, no recursive calls are needed, and
T ( n ) = O ( 1 ).
Worst-case analysis :
The worst case occurs when the key is not present in the list, or is found only after the search range has shrunk to a single element (for example, when it is the first or the last element). The recurrence is

T(n) = T(n/2) + c

which solves to T(n) = O(log n).
Binary Trees:
A binary tree is a collection of zero or more nodes. The tree can be empty; otherwise it is partitioned into three parts: a root, a left subtree and a right subtree.

In general, a tree in which each node has either zero, one or two subtrees is called a binary tree.

[Figure: three small example binary trees built from nodes such as 10, 12 and 20.]
Traversals :
Traversing is a method of visiting each node of a
tree exactly once in a systematic order.
During traversal, we may print the info field of
each node visited.

Different types of tree traversals:

Preorder
Inorder
Post order
Preorder traversal:

Step 1 : Process the root node


Step 2 : Traverse the left sub tree in preorder
Step 3 : Traverse the right sub tree in preorder

This can be done in two ways,

a) Recursive technique.
b) Iterative procedure.
// Recursive function for preorder traversal.
void preorder(NODE root)
{
    if (root == NULL) return;
    printf("%d ", root->info);      // process the root
    preorder(root->llink);          // traverse the left subtree
    preorder(root->rlink);          // traverse the right subtree
}
// Iterative function for preorder traversal.
void preorder(NODE root)
{
    NODE cur, s[20];          // explicit stack of nodes
    int top = -1;
    if (root == NULL)
    {
        printf("Tree is empty");
        return;
    }
    cur = root;
    for (;;)
    {
        while (cur != NULL)   // process the node, push it, and go left
        {
            printf("%d ", cur->info);
            s[++top] = cur;
            cur = cur->llink;
        }
        if (top != -1)        // pop and continue with the right subtree
        {
            cur = s[top--];
            cur = cur->rlink;
        }
        else
            return;
    }
}
Inorder traversal :

Step 1 : Traverse the left sub tree in Inorder


Step 2 : Process the root node
Step 3 : Traverse the right sub tree in Inorder

This can be done in two ways,

a) Recursive technique.
b) Iterative procedure.
// Recursive function for inorder traversal.
void inorder(NODE root)
{
    if (root == NULL) return;
    inorder(root->llink);           // traverse the left subtree
    printf("%d ", root->info);      // process the root
    inorder(root->rlink);           // traverse the right subtree
}
// Iterative function for inorder traversal.
void inorder(NODE root)
{
    NODE cur, s[20];          // explicit stack of nodes
    int top = -1;
    if (root == NULL)
    {
        printf("Tree is empty");
        return;
    }
    cur = root;
    for (;;)
    {
        while (cur != NULL)   // push the node and go left
        {
            s[++top] = cur;
            cur = cur->llink;
        }
        if (top != -1)        // pop, process the node, then go right
        {
            cur = s[top--];
            printf("%d ", cur->info);
            cur = cur->rlink;
        }
        else
            return;
    }
}
Postorder traversal :

Step 1 : Traverse the left sub tree in postorder


Step 2 : Traverse the right sub tree in postorder
Step 3 : Process the root node

This can be done in two ways,

a) Recursive technique.
b) Iterative procedure.
// Recursive function for postorder traversal.
void postorder(NODE root)
{
    if (root == NULL) return;
    postorder(root->llink);         // traverse the left subtree
    postorder(root->rlink);         // traverse the right subtree
    printf("%d ", root->info);      // process the root
}
// Iterative function for postorder traversal.
void postorder(NODE root)
{
    struct stack
    {
        NODE address;
        int flag;               // 1: left subtree in progress, -1: right subtree in progress
    };
    NODE cur;
    struct stack s[20];
    int top = -1;
    if (root == NULL)
    {
        printf("Tree is empty");
        return;
    }
    cur = root;
    for (;;)
    {
        while (cur != NULL)     // push the node with flag 1 and go left
        {
            top++;
            s[top].address = cur;
            s[top].flag = 1;
            cur = cur->llink;
        }
        while (s[top].flag < 0) // both subtrees done: pop and process the node
        {
            cur = s[top].address;
            top--;
            printf("%d ", cur->info);
            if (top == -1)
                return;
        }
        cur = s[top].address;   // left subtree done: now traverse the right subtree
        cur = cur->rlink;
        s[top].flag = -1;
    }
}
Example:

[Figure: binary tree with root A; A's children are B (left) and C (right); B's left child is D, whose children are G and H; C's children are E (left) and F (right); E's right child is I.]

Preorder  : ABDGHCEIF
Postorder : GHDBIEFCA
Inorder   : GDHBAEICF
Exercises:

1. Construct the binary tree corresponding to the following traversals:
   preorder : ABDEGHCFIJ
   inorder  : DBGEHACIFJ

2. Construct the binary tree corresponding to the following traversals:
   preorder : ABDFGCEHJI
   inorder  : DBGEHACIFJ
Decrease and Conquer

It is similar to divide and conquer, but instead of dividing the problem of size n into subproblems of size n/2, the size is decreased by one (to n-1), by a constant factor, or by a variable amount.

Problems solved by decrease and conquer can be implemented either top-down (recursively) or bottom-up (iteratively).
The figure below shows the diagrammatic representation of the decrease-and-conquer strategy.

[Figure: a problem of size n is reduced to a sub-problem of size n-1; the solution to the sub-problem is then extended to a solution to the original problem.]
The following are the applications discussed under this
strategy.

a) Insertion Sort

b) DFS (depth first search)

c) BFS (breadth first search)

d) Topological sorting.
Graphs
A graph G = (V, E) consists of two things:

1. A set V of elements called nodes (or points or vertices)

2. A set E of edges such that each edge e in E is identified with a unique (unordered) pair [u, v] of nodes in V, denoted by e = [u, v].
E.g.:

V = { 1, 2, 3, 4, 5 }
E = { (1,2), (1,5), (1,3), (5,4), (4,3), (2,3) }

[Figure: the undirected graph on vertices 1-5 with the edge set E above.]
Graph Terminologies :

Directed graph & Undirected graph :


If every edge (a, b) in a graph is marked with a direction from a to b, then we call it a directed graph (digraph). On the other hand, if directions are not marked on the edges, then the graph is called an undirected graph.
Ex:

[Figure: the same graph on vertices 1-5 drawn once as an undirected graph and once as a directed graph.]


Adjacent vertices:
Two vertices a & b are said to be adjacent, if there
is an edge connecting a & b.
E.g.: 5 & 4 are adjacent .

Path:
A path is defined as a sequence of distinct vertices in which each vertex is adjacent to the next.

E.g.: a path from 1 to 4 => ((1,5), (5,4)) or ((1,3), (3,4))

A path P of length k through a graph is a sequence of connected vertices P = <v0, v1, ..., vk>.
Cycle:
A path from a node back to itself is called a cycle. A graph with no cycles is acyclic.
A directed acyclic graph is called a dag.

Weighted graph:
A weighted graph is a graph in which a cost (weight) is associated with each edge.

Degree of a node :
The degree of a node is the number of edges
incident on it.
In degree & out degree:
The in degree of a node n is the number of edges
that have n as the head.
The out degree of a node n is the number of edges
that have n as the tail.

Successor & predecessor:

A node n is adjacent to a node m if there is an edge from m to n. If n is adjacent to m, n is called a successor of m and m is a predecessor of n.

Connected graph:
A connected graph is a graph in which a path exists between every pair of vertices.
Representation of a graph:

• Adjacency list representation
• Adjacency matrix representation

Adjacency list representation:

An adjacency list representation of a graph G = (V, E) consists of an array of adjacency lists, one list adj[v] for each vertex v.
Ex:

[Figure: an undirected graph on vertices 1-5 with edges (1,2), (1,3), (1,5), (2,3), (2,4), (3,5), (4,5).]
The adjacency list representation of the above graph is:

Adj[1] = { 2, 3, 5 }
Adj[2] = { 1, 3, 4 }
Adj[3] = { 1, 2, 5 }
Adj[4] = { 2, 5 }
Adj[5] = { 1, 3, 4 }
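
As a small illustration (not from the slides), an adjacency list can be stored in C as an array of singly linked lists; the names node, add_edge and add_edge_dir used here are assumptions.

#include <stdio.h>
#include <stdlib.h>

#define V 6                       // vertices 1..5 (index 0 unused)

typedef struct node {
    int vertex;
    struct node *next;
} node;

node *adj[V];                     // adj[v] points to the list of vertices adjacent to v

// Insert w at the front of v's adjacency list.
void add_edge_dir(int v, int w)
{
    node *p = malloc(sizeof(node));
    p->vertex = w;
    p->next = adj[v];
    adj[v] = p;
}

// For an undirected graph, each edge (v, w) appears in both lists.
void add_edge(int v, int w)
{
    add_edge_dir(v, w);
    add_edge_dir(w, v);
}

int main(void)
{
    add_edge(1, 2); add_edge(1, 3); add_edge(1, 5);
    add_edge(2, 3); add_edge(2, 4); add_edge(3, 5); add_edge(4, 5);

    for (int v = 1; v < V; v++) {              // print each adjacency list
        printf("Adj[%d] =", v);
        for (node *p = adj[v]; p != NULL; p = p->next)
            printf(" %d", p->vertex);
        printf("\n");
    }
    return 0;
}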
Adjacency matrix representation:

An adjacency matrix representation of a graph G = (V, E) is a matrix A = (a_ij) such that a_ij = 1 if the edge (i, j) belongs to E, and a_ij = 0 otherwise.

For the above example the adjacency matrix is:

      1  2  3  4  5
  1   0  1  1  0  1
  2   1  0  1  1  0
  3   1  1  0  0  1
  4   0  1  0  0  1
  5   1  0  1  1  0
Graph traversal algorithms :

Depth-first search & breadth-first search:

DFS and BFS are two graph traversal algorithms. Both start at some vertex v and then visit all vertices reachable from v.

If there are vertices that remain unvisited, that is, vertices that are not reachable from v, the only way they can be visited is by starting a new traversal from a new vertex.
Depth-first search (DFS):

The DFS method of graph traversal uses a stack as its main data structure, and the graph is traversed using the decrease-and-conquer technique.

The main applications of DFS are:

• checking the connectivity of a graph;
• checking whether a graph has cycles or not.
Method:

The input to the DFS algorithm is the adjacency matrix of a graph G and the starting vertex v.

The output is a DFS spanning tree (if the given graph is connected) or a DFS forest (if the given graph is disconnected).

Before initiating a DFS traversal, all the vertices in the graph are marked as unvisited (set to 0). We start from the initial vertex v and mark it as visited (set to 1).

The next step is to find all unvisited vertices adjacent to the starting vertex, pick any one vertex from this list (if there are many), and call it w.

Now the DFS algorithm is invoked recursively with w as the new starting vertex, and this continues until all reachable vertices have been visited.
Algorithm dfs(G, v)
// Input  : a graph G and a starting vertex v
// Output : a DFS tree

    visited[v] = 1                    // mark v as visited
    for each vertex w adjacent to v do
        if visited[w] = 0             // vertex not yet visited
            dfs(G, w)                 // continue with the new vertex
end dfs
E.g. 1. Connected undirected graph:

[Figure: a connected undirected graph on vertices 1-8 used as the DFS example.]
2. Strongly connected, directed graph:

[Figure: a strongly connected directed graph on vertices 1-4 (input) and its DFS tree (output).]

3. Undirected, disconnected graph:

[Figure: a disconnected undirected graph on vertices 1-8 (input); DFS produces two trees (output 1 and output 2), one per connected component.]

4. Directed, weakly connected graph:

[Figure: a weakly connected directed graph on vertices 1-5 (input); DFS started at a single vertex produces two trees (output 1 and output 2).]
In the above four cases, our algorithm "dfs" works directly only for example 1 and example 2. To make it work for the others, the algorithm has to be invoked for all vertices, as follows.
Algorithm dfs_visit_all(G)
// Input  : G = (V, E)
// Output : a DFS spanning tree or forest
    for each vertex v in V do
        visited[v] = 0                // mark all vertices as unvisited
    for each vertex v in V do
        if visited[v] = 0
            dfs(G, v)
end dfs_visit_all
// C function for dfs
void dfs(int v)
{
    visited[v] = 1;
    for (int w = 1; w <= n; w++)      // examine every vertex adjacent to v
    {
        if (visited[w] == 0 && a[v][w] == 1)
        {
            printf("%d ", w);
            dfs(w);
        }
    }
}
// C function for dfs_visit_all.
void dfs_visit_all()
{
    for (int i = 1; i <= n; i++)      // mark all vertices as unvisited
    {
        visited[i] = 0;
    }
    for (int i = 1; i <= n; i++)      // start a new traversal from every unvisited vertex
    {
        if (visited[i] == 0)
        {
            printf("%d ", i);
            dfs(i);
        }
    }
}
// Complete C program.
#include <stdio.h>
#define MAX 10
int a[MAX][MAX], visited[MAX], n;
// include the dfs and dfs_visit_all functions here

int main(void)
{
    printf("enter the no. of vertices: ");
    scanf("%d", &n);
    printf("enter the adjacency matrix: ");
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= n; j++)
            scanf("%d", &a[i][j]);
    dfs_visit_all();
    return 0;
}
Check whether a given graph is connected or not using the DFS method (C program).

#include<stdio.h>
#include<conio.h>

int a[20][20],reach[20],n;

void dfs(int v)
{ int i;
reach[v]=1;

for(i=1;i<=n;i++)
if(a[v][i] && !reach[i])
{
printf("\n %d->%d",v,i);
dfs(i);
}
}
int main()
{
int i,j,count=0;
//clrscr();
printf("\n Enter number of vertices:");
scanf("%d",&n);

for(i=1;i<=n;i++)
{
reach[i]=0;
for(j=1;j<=n;j++)
a[i][j]=0;
}

printf("\n Enter the adjacency matrix:\n");


for(i=1;i<=n;i++)
for(j=1;j<=n;j++)
scanf("%d",&a[i][j]);
dfs(1);
printf("\n");

for(i=1;i<=n;i++)
{
if(reach[i])
count++;
}

if(count==n)
printf("\n Graph is connected");
else
printf("\n Graph is not connected");

getch();
}
OUTPUT:
Enter number of vertices : 8
Enter the adjacency matrix :
0 1 1 0 0 0 0 0
1 0 0 1 1 0 0 0
1 0 0 0 0 1 1 0
0 1 0 0 0 0 0 1
0 1 0 0 0 0 0 1
0 0 1 0 0 0 0 1
0 0 1 0 0 0 0 1
0 0 0 1 1 1 1 0

1->2 2->4 4->8 8->5 8->6 6->3 3->7


Graph is connected
BREADTH-FIRST SEARCH (BFS) :

The BFS method of graph traversal uses a queue as its main data structure. BFS is somewhat similar to DFS, except that all the vertices adjacent to the current vertex are visited before going deeper.

Method:
A BFS starting at a vertex v first visits v, then it visits all neighbors of v, then all neighbors of those neighbors that have not been visited before, and so on.
Algorithm bfs(G, v)
// Input  : a graph G with V vertices and E edges, and a starting vertex v
// Output : a BFS tree
    visited[v] = 1
    insqueue(v)
    while the queue is not empty do
        v = delqueue()
        for all w adjacent to v do
            if visited[w] = 0
                insqueue(w)
                visited[w] = 1
end bfs
E.g. 1. Connected undirected graph:

[Figure: the same connected undirected graph on vertices 1-8 used for the DFS example.]
// C function for bfs
void bfs(int v)
{
    f = 0; r = -1;                   // initialize the queue front and rear
    visited[v] = 1;
    insqueue(v);

    while (f <= r)                   // while the queue is not empty
    {
        v = delqueue();
        for (int w = 1; w <= n; w++)
        {
            if (visited[w] == 0 && a[v][w] == 1)
            {
                insqueue(w);
                printf("%d ", w);
                visited[w] = 1;
            }
        }
    }
}
void insqueue( int e)
{
q[++r] = e;
}
int delqueue()
{
int t = q[f];
f++;
if( f > r)
{
f = 0;
r = -1;
}
return t;
}
// Complete C program for bfs
#include <stdio.h>
#define MAX 10
int a[MAX][MAX], visited[MAX], q[MAX], n, f, r;
/* include the bfs, insqueue and delqueue functions here */

int main(void)
{
    int s, i, j;
    printf("Enter the no. of vertices: ");
    scanf("%d", &n);
    printf("Enter the adjacency matrix: ");
    for (i = 1; i <= n; i++)
        for (j = 1; j <= n; j++)
            scanf("%d", &a[i][j]);
    for (i = 1; i <= n; i++)
        visited[i] = 0;

    printf("Enter the starting vertex: ");
    scanf("%d", &s);
    printf("The vertices visited from %d are: ", s);
    bfs(s);
    return 0;
}
TOPOLOGICAL SORTING :
Suppose you have a list of tasks to do, some of which depend on others being completed first.

E.g.: a piece of work may involve 6 tasks, numbered 1 to 6.

Conditions:
Tasks 3 and 4 must be carried out only after task 1. Task 3 must be completed before task 4.
Tasks 4 and 5 can be done only after task 2.
Task 6 can be done only after all the rest have been completed.

We can arrange this information in a dependency graph. The vertices of this directed graph are the tasks to be performed, and there is an edge from task v to task w if v must be completed before w can be started.

[Dependency graph: edges 1 -> 3, 1 -> 4, 3 -> 4, 2 -> 4, 2 -> 5, 4 -> 6, 5 -> 6.]

Topological orders:
1 -> 3 -> 4 -> 2 -> 5 -> 6
2 -> 5 -> 1 -> 3 -> 4 -> 6
Conditions to apply topological sorting:

• The graph must be directed.
• If the graph has a directed cycle then there is no topological order.
• In other words, it must be a dag (directed acyclic graph).

Given a directed acyclic graph as input, there are two methods we discuss for computing a topological ordering:

a) using DFS
b) the source removal method.
TOPOLOGICAL ORDER USING DFS:

This can be done by applying the DFS algorithm with a small change: simply execute the DFS algorithm and keep track of the order in which vertices are popped off the stack (i.e., the order in which their DFS calls finish).

Reversing this order directly gives a topological ordering of the given dag.
E.g. (for the dependency graph above):

DFS started at vertex 1 visits : 1 -> 3 -> 4 -> 6
DFS is called again with 2 as the start vertex and visits : 2 -> 5

Stack popping-up order : 6 -> 4 -> 3 -> 1 -> 5 -> 2

Topological order (the reverse) : 2 -> 5 -> 1 -> 3 -> 4 -> 6
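
A minimal C sketch of this DFS-based method (the slides only give a program for the source-removal method below; the names topo_dfs and order[], and the hard-coded edge list, are assumptions made here):

#include <stdio.h>

#define MAX 10
int a[MAX][MAX], visited[MAX], order[MAX], n, k;

// DFS that records each vertex when its call finishes (stack popping-up order).
void topo_dfs(int v)
{
    visited[v] = 1;
    for (int w = 0; w < n; w++)
        if (a[v][w] == 1 && visited[w] == 0)
            topo_dfs(w);
    order[k++] = v;                  // v is "popped" after all its successors
}

int main(void)
{
    // Dependency graph from the example: edges 1->3, 1->4, 3->4, 2->4, 2->5, 4->6, 5->6
    // (tasks stored internally as vertices 0..5, printed back as 1..6).
    int edges[][2] = {{1,3},{1,4},{3,4},{2,4},{2,5},{4,6},{5,6}};
    n = 6;
    for (int e = 0; e < 7; e++)
        a[edges[e][0] - 1][edges[e][1] - 1] = 1;

    for (int v = 0; v < n; v++)      // call DFS from every unvisited vertex
        if (!visited[v])
            topo_dfs(v);

    printf("Topological order:");
    for (int i = k - 1; i >= 0; i--) // reverse of the popping-up order
        printf(" %d", order[i] + 1);
    printf("\n");                    // prints: 2 5 1 3 4 6
    return 0;
}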


TOPOLOGICAL ORDER USING SOURCE REMOVAL:

This method is based on repeatedly removing a vertex that has no incoming edges (zero in-degree), called a source vertex. Once we remove such a vertex, all its outgoing edges must also be removed.

At every stage (iteration), identify such a source vertex and remove it; iterate until all vertices are covered.

The sequence generated by this removal process forms a topological ordering.
Obtain the topological ordering of the vertices of a given graph (source removal method, C program).
#include<stdio.h>
#include<conio.h>
int a[10][10],n,indegre[10];

void find_indegre()
{
int j,i,sum;

for(j=0;j<n;j++)
{ sum=0;
for(i=0;i<n;i++)
sum+=a[i][j];
indegre[j]=sum;
}
}

void topology()
{
int i,u,v,t[10],s[10],top=-1,k=0;

find_indegre();
for(i=0;i<n;i++)
{
if(indegre[i]==0) s[++top]=i;
}
while(top!=-1)
{
u=s[top--]; t[k++]=u;

for(v=0;v<n;v++)
{
if(a[u][v]==1)
{
indegre[v]--;
if(indegre[v]==0) s[++top]=v;
}
}
}
printf("The topological Sequence is:\n");
for(i=0;i<n;i++)
printf("%d ",t[i]);
}
void main()
{
int i,j;

clrscr();

printf("Enter number of Nodes:");


scanf("%d",&n);

printf("\nEnter the adjacency matrix:\n");


for(i=0;i<n;i++)
{
for(j=0;j<n;j++)
scanf("%d",&a[i][j]);
}

topology(); getch();
}
ENTER NUMBER OF Nodes: 5

ENTER THE ADJACENCY MATRIX:

0 1 0 0 0
0 0 0 1 1
0 1 0 0 0
0 0 0 0 1
0 0 0 0 0

THE TOPOLOGICAL SEQUENCE IS:


2 0 1 3 4
Greedy Technique
The greedy method is the most straightforward design technique. These problems have "n" inputs and require us to obtain a subset that satisfies some constraints.

Any subset that satisfies these constraints is called a feasible solution.

We need to find a feasible solution that either maximizes or minimizes a given objective function. A feasible solution that does this is called an optimal solution.
The greedy method suggests that one can devise an algorithm that works in stages, considering one input at a time. At each stage, a decision is made regarding whether a particular input belongs to an optimal solution.

If the inclusion of the next input into the partially constructed solution would result in an infeasible solution, then this input is not added to the partial solution; otherwise it is added.

The selection procedure itself is based on some optimization measure, which may be the objective function itself.
// The algorithm below builds a subset of a given set, adding each element only if it keeps the solution feasible.

Algorithm Greedy ( a , n )
// a[1..n] contains “n” inputs.
{
solution = null.
for i = 1 to n do
{
x = select ( i );
if feasible ( solution , x ) then
solution = Union ( solution , x );
}
return solution;
}
Minimum cost spanning trees:

Spanning tree:

A spanning tree of a connected graph is a connected acyclic subgraph that contains all the vertices of the graph.

or

Let G = (V, E) be an undirected connected graph. A subgraph t = (V, E') of G is a spanning tree of G if and only if t is a tree.
Minimum spanning tree:

A minimum spanning tree of a weighted connected graph is a spanning tree of the smallest weight, where the weight of a tree is defined as the sum of the weights of all its edges.
[Figure: a weighted graph on vertices a, b, c, d with edges a-b = 1, b-d = 2, c-d = 3, a-c = 5, and three of its spanning trees with weights w(t1) = 6, w(t2) = 9 and w(t3) = 8.]
Minimum cost spanning trees:

For the previous example we obtain three spanning trees; of these, the minimum spanning tree is t1.

The minimum spanning tree of any given weighted undirected graph can be obtained by the following algorithms:

a) Prim's algorithm
b) Kruskal's algorithm
Prim's Algorithm

Prim's algorithm constructs a minimum spanning tree through a sequence of expanding subtrees.

The initial subtree in such a sequence consists of a single vertex selected arbitrarily from the set V of the graph's vertices.

On each iteration, we expand the current tree in a greedy manner by simply attaching to it the nearest vertex not yet in the tree.

The algorithm stops after all the graph's vertices have been included in the tree being constructed.
Algorithm Prim(G)
{
    for i = 1 to n do
        visited[i] = 0;
    T = ∅;  u = 1;                        // start from vertex 1
    while there is still an unvisited vertex do
    {
        find a minimum-weight edge <u, v> between any visited vertex u
            and any unvisited vertex v;
        visited[v] = 1;
        T = T U { <u, v> };               // add the edge to the spanning tree
    }
}
Pseudocode of Prim's algorithm

ALGORITHM Prim(G)
// Prim's algorithm for constructing a minimum spanning tree
// Input: A weighted connected graph G = (V, E)
// Output: ET, the set of edges composing a minimum spanning tree of G
    VT <- { v0 }                  // v0 can be selected arbitrarily
    ET <- { }
    for i <- 1 to |V| - 1 do
        find a minimum-weight edge e* = (v*, u*) among all the edges (v, u)
            such that v is in VT and u is in V - VT
        VT <- VT U { u* }
        ET <- ET U { e* }
    return ET
An example

[Figure: a weighted graph on vertices a-e with edge weights between 1 and 7, used to trace Prim's algorithm.]
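
The slides give only pseudocode for Prim's algorithm; a minimal C sketch of the same idea over a cost adjacency matrix might look as follows (the names prim and INF, and the small test graph in main, are choices made here, not taken from the slides).

#include <stdio.h>

#define MAX 10
#define INF 999                      // stands for "no edge", as in the programs below

// Prim's algorithm on a cost adjacency matrix with vertices 1..n.
// Prints the chosen edges and returns the total weight of the spanning tree.
int prim(int cost[MAX][MAX], int n)
{
    int visited[MAX] = {0};
    int total = 0;
    visited[1] = 1;                  // start from vertex 1 (arbitrary choice)

    for (int count = 1; count < n; count++)
    {
        int min = INF, u = 0, v = 0;
        for (int i = 1; i <= n; i++)             // i ranges over tree vertices
            if (visited[i])
                for (int j = 1; j <= n; j++)     // j ranges over non-tree vertices
                    if (!visited[j] && cost[i][j] < min)
                    {
                        min = cost[i][j];
                        u = i;  v = j;
                    }
        visited[v] = 1;              // attach the nearest non-tree vertex
        printf("edge (%d,%d) = %d\n", u, v, min);
        total += min;
    }
    return total;
}

int main(void)
{
    // Small illustrative graph (made-up weights): 1-2=3, 1-3=5, 2-3=4, 2-4=6, 3-4=2
    int cost[MAX][MAX];
    int n = 4;
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= n; j++)
            cost[i][j] = INF;
    cost[1][2] = cost[2][1] = 3;
    cost[1][3] = cost[3][1] = 5;
    cost[2][3] = cost[3][2] = 4;
    cost[2][4] = cost[4][2] = 6;
    cost[3][4] = cost[4][3] = 2;

    printf("total weight = %d\n", prim(cost, n));   // prints 9 for this graph
    return 0;
}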
Kruskal's Algorithm

Kruskal's algorithm also uses the greedy criterion to find a minimum cost spanning tree.

Method:

• Arrange all the edges of the given graph in ascending order of their weights.
• The lowest-cost edge is selected and included in the spanning tree, which initially contains only the vertices.
• Among the remaining edges, each edge is tried for inclusion in the spanning tree: if including it would create a cycle it is rejected, otherwise it is included.
• All the edges are tried in this way, and the resulting subgraph is a minimum cost spanning tree.
Algorithm Kruskal(G)
{
    Et = ∅;
    ecount = 0;
    k = 0;
    while ecount < |V| - 1 do
        k = k + 1;
        if Et U { ek } is acyclic
            Et = Et U { ek };
            ecount = ecount + 1;
    return Et;
}
Basic Kruskal's Algorithm

ALGORITHM Kruskal(G)
// Input: A weighted connected graph G = (V, E)
// Output: ET, the set of edges composing a minimum spanning tree of G

    sort E in nondecreasing order of the edge weights: w(e_i1) <= ... <= w(e_i|E|)
    ET <- { };  ecounter <- 0        // initialize the set of tree edges and its size
    k <- 0
    while ecounter < |V| - 1 do
        k <- k + 1
        if ET U { e_ik } is acyclic
            ET <- ET U { e_ik };  ecounter <- ecounter + 1
    return ET
Dijkstra's Algorithm:

The main objective of this algorithm is to find the shortest paths from a single source vertex to the remaining vertices of a graph.

This algorithm is applicable only to graphs with non-negative weights. It finds the shortest paths to the graph's vertices in order of their distance from the given source.

Edsger W. Dijkstra proposed this algorithm in the mid 1950s, and it is popular even today.

Before its i-th iteration, the algorithm has already identified the shortest paths to the i-1 vertices nearest to the source.

The vertices adjacent to the vertices already in the tree are referred to as "fringe vertices". The fringe vertices are the candidates from which Dijkstra's algorithm selects the next vertex, namely the one nearest to the source.
Shortest Paths – Dijkstra’s Algorithm

• Shortest path problems:
  • All-pairs shortest paths (Floyd's algorithm)
  • Single-source shortest paths (Dijkstra's algorithm): given a weighted graph G, find the shortest paths from a source vertex s to each of the other vertices.

[Figure: a small weighted graph on vertices a-e illustrating the shortest-path problem.]

Example (edges: a-b = 3, a-d = 7, b-c = 4, b-d = 2, c-d = 5, c-e = 6, d-e = 4; source a):

Tree vertices    Remaining vertices
a(-, 0)          b(a, 3)    c(-, inf)    d(a, 7)    e(-, inf)
b(a, 3)          c(b, 3+4)  d(b, 3+2)    e(-, inf)
d(b, 5)          c(b, 7)    e(d, 5+4)
c(b, 7)          e(d, 9)
e(d, 9)
From a given vertex in a weighted connected graph, find
shortest paths to other vertices using Dijkstra's
algorithm (C programming).

#include<stdio.h>
#include<conio.h>
#define infinity 999

void dij(int n,int v,int cost[10][10],int dist[100])


{
int i, u, count, w, flag[10], min;

for(i=1;i<=n;i++)
{
flag[i]=0;
dist[i]=cost[v][i];
}
count=2;
flag[v]=1;
while(count<=n)
{
min=infinity;
for(w=1;w<=n;w++)
if(dist[w]<min&&!flag[w])
{
min=dist[w];
u=w;
}
flag[u]=1;
count++;
for(w=1;w<=n;w++)
if((dist[u]+cost[u][w]<dist[w]) && !flag[w])
dist[w]=dist[u]+cost[u][w];
}
}
void main()
{
int n,v,i,j,cost[10][10],dist[10];

clrscr();

printf("\n Enter the number of nodes:");


scanf("%d",&n);

printf("\n Enter the cost matrix:\n");

for(i=1;i<=n;i++)
for(j=1;j<=n;j++)
{
scanf("%d",&cost[i][j]);

if(cost[i][j]==0)
cost[i][j]=infinity;
}
printf("\n Enter the source vertex:");
scanf("%d",&v);
dij(n,v,cost,dist);
printf("\n Shortest path:\n");
for(i=1;i<=n;i++)
if(i!=v)
printf("%d->%d,cost=%d\n",v,i,dist[i]);
getch();
}
OUTPUT:

Enter the number of nodes : 5

Enter the cost matrix :

0 3 999 7 999
3 0 4 2 999
999 4 0 5 6
7 2 5 0 4
999 999 6 4 0

Enter the source vertex : 1

Shortest path:
1 -> 2, cost = 3
1 -> 3, cost = 7
1 -> 4, cost = 5
1 -> 5, cost = 9
Find Minimum Cost Spanning Tree of a given
undirected graph using Kruskal's algorithm
( C programming)

#include<stdio.h>
#include<conio.h>
#include<stdlib.h>

int i,j,k,a,b,u,v,n,ne=1;
int min,mincost=0,cost[9][9],parent[9];

int find(int);
int uni(int,int);
void main()
{
clrscr();
printf("\n\tImplementation of Kruskal's algorithm\n");
printf("\nEnter the no. of vertices:");
scanf("%d",&n);
printf("\nEnter the cost adjacency matrix:\n");
for(i=1;i<=n;i++)
{
for(j=1;j<=n;j++)
{
scanf("%d",&cost[i][j]);
if(cost[i][j]==0)
cost[i][j]=999;
}
}
printf("The edges of Minimum Cost Spanning Tree
are\n");
while(ne < n)
{
for(i=1,min=999;i<=n;i++)
{
for(j=1;j <= n;j++)
{
if(cost[i][j] < min)
{
min=cost[i][j];
a=u=i;
b=v=j;
}
}
}
u=find(u);
v=find(v);
if(uni(u,v))
{
printf("%d edge (%d,%d) =%d\n",ne++,a,b,min);
mincost +=min;
}
cost[a][b]=cost[b][a]=999;
}
printf("\n\tMinimum cost = %d\n",mincost);
getch();
}
int find(int i)
{
while(parent[i])
i=parent[i];
return i;
}

int uni(int i,int j)


{
if(i!=j)
{
parent[j]=i;
return 1;
}
return 0;
}
