
BANGALORE INSTITUTE OF TECHNOLOGY

Department of Information Science and Engineering


Subject Name: Design and Analysis of Algorithms
Subject Code: 18CS42
MODULE - 3

Staff In-charge
S MERCY
Assistant Professor
MODULE - 3
• General Method

• Coin Change problem

• Knapsack problem

• Job sequencing with deadlines

• Minimum cost spanning trees- Prim’s algorithm, Kruskal’s algorithm

• Single Source Shortest Paths- Dijkstra’s algorithm

• Optimal Tree problem- Huffman trees and codes

• Transform and Conquer Approach- Heaps and Heapsort


GENERAL METHOD
• The most straightforward design technique.

• Given n inputs, we are required to obtain a subset that satisfies some constraints.

• Any subset that satisfies the constraints is called a feasible solution.

• We need a feasible solution that either maximizes or minimizes a given objective function. A feasible solution that does this is called an optimal solution.

• Greedy method suggests that one can devise an algorithm that works in stages,
considering one input at a time.

• At each stage, a decision is made regarding whether a particular input is in an optimal solution. This is done by considering the inputs in an order determined by some selection procedure.

• The selection procedure is based on some optimization measure (objective function).


Strategies used for Optimization Problem
• Greedy Method - Greedy algorithms construct a solution through a sequence of steps, each step expanding the partially constructed solution obtained so far, until a complete solution to the problem is reached.

• Dynamic Programming

• Branch and Bound


Control Abstraction
Algorithm Greedy(a, n) // a[1..n] contains the 'n' inputs
{
Solution := Ø; // Initialize the solution
for i := 1 to n do
{
x := Select(a);
if Feasible(Solution, x) then
Solution := Union(Solution, x);
}
return Solution;
}
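A minimal Java rendering of this control abstraction is sketched below. The class name GreedySolver and the chooseNext/isFeasible hooks are illustrative assumptions (they play the roles of Select and Feasible above); a concrete problem such as coin change or knapsack must supply them.

import java.util.ArrayList;
import java.util.List;

// Sketch of the greedy control abstraction. A concrete greedy problem
// extends this class and supplies the selection and feasibility checks.
abstract class GreedySolver<T> {
    // Selection procedure: pick the next candidate input (plays the role of Select).
    protected abstract T chooseNext(List<T> inputs);

    // Check whether adding x keeps the partial solution feasible (plays the role of Feasible).
    protected abstract boolean isFeasible(List<T> solution, T x);

    public List<T> solve(List<T> inputs) {
        List<T> solution = new ArrayList<>();      // Solution := Ø
        List<T> remaining = new ArrayList<>(inputs);
        while (!remaining.isEmpty()) {
            T x = chooseNext(remaining);           // x := Select(a)
            remaining.remove(x);
            if (isFeasible(solution, x))           // if Feasible(Solution, x)
                solution.add(x);                   // Solution := Union(Solution, x)
        }
        return solution;
    }
}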
Applications of the Greedy Strategy
• Optimal solutions:
– change making for “normal” coin denominations
– Minimum Spanning Tree (MST)
– single-source shortest paths
– simple scheduling problems
– Huffman codes
• Approximations/heuristics:
– Traveling Salesman Problem (TSP)
– knapsack problem
– other combinatorial optimization problems
Difference between divide and conquer and greedy
COIN CHANGE PROBLEM
Problem Statement:
Given coins of several denominations, find a way to give a customer an amount using the fewest number of coins.
Example:
If denominations are 1, 5, 10, 25 and 100 and the change required is 30, the
solutions are,
Amount: 30

Solutions: 3 x 10 (3 coins)
           6 x 5 (6 coins)
           1 x 25 + 5 x 1 (6 coins)
           1 x 25 + 1 x 5 (2 coins)
The last solution is the optimal one, as it gives the required change with only 2 coins.
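A short Java sketch of this greedy strategy for the denominations above is given below; the class and method names are assumptions for illustration. Note that the strategy is only guaranteed to be optimal for "normal" denominations such as 1, 5, 10, 25, 100.

import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.Map;

public class GreedyCoinChange {
    // Repeatedly pick the largest denomination that still fits until the
    // remaining amount becomes zero.
    static Map<Integer, Integer> makeChange(int[] denominations, int amount) {
        int[] coins = denominations.clone();
        Arrays.sort(coins);                                // ascending order
        Map<Integer, Integer> used = new LinkedHashMap<>();
        for (int i = coins.length - 1; i >= 0 && amount > 0; i--) {
            int count = amount / coins[i];                 // how many of this coin fit
            if (count > 0) {
                used.put(coins[i], count);
                amount -= count * coins[i];
            }
        }
        return used;
    }

    public static void main(String[] args) {
        // Amount 30 with denominations 1, 5, 10, 25, 100 -> {25=1, 5=1}, i.e. 2 coins
        System.out.println(makeChange(new int[]{1, 5, 10, 25, 100}, 30));
    }
}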
KNAPSACK PROBLEM (FRACTIONAL KNAPSACK PROBLEM)
Given n objects and a knapsack or bag. Object i has a weight wi and the knapsack has a capacity m. If a fraction xi, 0 <= xi <= 1, of object i is placed into the knapsack, then a profit of pi*xi is earned. The objective is to maximize the total profit earned. Since the knapsack capacity is m, it is required that the total weight of all chosen objects be at most m. Objects are considered in decreasing order of pi/wi.
Knapsack Problem - 1
Obtain the optimal solution for the knapsack problem using greedy method for the
given data.
M = 15 and n = 7
p1, p2, p3, p4, p5, p6, p7 = 10, 5, 15, 7, 6, 18, 3
w1, w2, w3, w4, w5, w6, w7= 2, 3, 5, 7, 1, 4, 1

There are several greedy methods to obtain the feasible solutions.


Method – 1 (Select object with maximum profit)

Solution Vector = (1, 0, 1, 4/7, 0, 1, 0) = (1, 0, 1, 0.57, 0, 1, 0)

The solution obtained using this method is (x1, x2, x3, x4, x5, x6, x7) = (1, 0, 1, 0.57, 0, 1, 0) with profit = 47
Method – 2 (Select object with minimum weight)

Solution Vector = (1, 1, 4/5, 0, 1, 1, 1)= (1, 1, 0.8, 0, 1, 1, 1)

The solution obtained using this method is (x1, x2, x3, x4, x5, x6, x7) = (1, 1, 0.8, 0, 1, 1, 1) with profit = 54
An optimal solution is not guaranteed using methods 1 and 2.
Method – 3 (Select object with maximum pi/wi)

Solution Vector = (1, 2/3, 1, 0, 1, 1, 1)= (1, 0.67, 1, 0, 1, 1, 1)

Optimal solution is (x1, x2, x3, x4, x5, x6, x7) = (1, 0.67, 1, 0, 1, 1, 1)
Profit = [1*10 + (2/3)*5 + 1*15 + 0*7 + 1*6 + 1*18 + 1*3] ≈ 55.33
Weight = [1*2 + (2/3)*3 + 1*5 + 0*7 + 1*1 + 1*4 + 1*1] = 15
Knapsack Algorithm
Algorithm GreedyKnapsack(m, n)
// p[1:n] and w[1:n] contain the profits and weights respectively of the n objects
ordered such that p[i]/w[i] >= p[i+1]/w[i+1].
// m is the knapsack size and x[1:n] is the solution vector
{
for i := 1 to n do

x[i] := 0.0; // Initialize x

U := m; // sack capacity
for i :=1 to n do
{
if (w[i] > U) then break; // object's weight exceeds the remaining capacity U
x[i] := 1.0; U :=U-w[i];
}
if(i<=n) then x[i] :=U/w[i];
}
Analysis: Disregarding the time needed to initially sort the objects, each of the above strategies uses O(n) time.
Lab Program 6b
import java.util.Scanner;
class KObject
{ // Knapsack object details
float w, p, r;
}
public class KnapsackGreedy2
{
static final int MAX = 20; // max. no. of objects
static int n; // no. of objects
static float M; // capacity of Knapsack
public static void main(String args[])
{
Scanner scanner = new Scanner(System.in);
System.out.println("Enter number of objects: ");
n = scanner.nextInt();
KObject[] obj = new KObject[n];
for(int i = 0; i<n;i++)
obj[i] = new KObject(); // allocate memory for members
ReadObjects(obj);
Knapsack(obj);
scanner.close();
}
static void ReadObjects(KObject obj[])
{
KObject temp = new KObject();
Scanner scanner = new Scanner(System.in);
System.out.println("Enter the max capacity of knapsack: ");
M = scanner.nextFloat();
System.out.println("Enter Weights: ");
for (int i = 0; i < n; i++)
obj[i].w = scanner.nextFloat();
System.out.println("Enter Profits: ");
for (int i = 0; i < n; i++)
obj[i].p = scanner.nextFloat();
for (int i = 0; i < n; i++)
obj[i].r = obj[i].p / obj[i].w;
// sort objects in descending order, based on p/w ratio
for(int i = 0; i<n-1; i++)
for(int j=0; j<n-1-i; j++)
if(obj[j].r < obj[j+1].r)
{
temp = obj[j];
obj[j] = obj[j+1];
obj[j+1] = temp;
}
scanner.close();
}
static void Knapsack(KObject kobj[])
{
float x[] = new float[MAX];
float totalprofit, U;
int i;
U = M; totalprofit = 0;
for (i = 0; i < n; i++)
x[i] = 0;
for (i = 0; i < n; i++)
{
if (kobj[i].w > U) break;
else
{
x[i] = 1; totalprofit = totalprofit + kobj[i].p;
U = U - kobj[i].w;
}
}
System.out.println("i = " + i);
if (i < n)
{
x[i] = U / kobj[i].w; // place a fraction of the next object
totalprofit = totalprofit + (x[i] * kobj[i].p);
}
System.out.println("The Solution vector, x[]: ");
for (i = 0; i < n; i++)
System.out.print(x[i] + " ");
System.out.println("\nTotal profit is = " + totalprofit);
}
}
JOB SEQUENCING WITH DEADLINES
Given n jobs where every job i has a deadline di >= 0 and an associated profit pi > 0; the profit is earned only if the job is finished before its deadline. It is also given that every job takes a single unit of time, so the minimum possible deadline for any job is 1. The objective is to maximize the total profit, provided only one job can be scheduled at a time. The value of a feasible solution (a subset J of jobs that can all be completed by their deadlines) is the sum of the profits of the jobs in J. An optimal solution is a feasible solution with maximum value.
Problem - 1
Solve the following job sequencing problem (maximizing the profit by completing
jobs before their deadlines) using greedy algorithm.
Arrange profit Pi in descending order.
Number of jobs n = 5
Profits associated with jobs (p1, p2, p3, p4, p5) = (20, 15, 10, 5, 1)
Deadlines associated with jobs (d1, d2, d3, d4, d5) = (2, 2, 1, 3, 3)
Job Sequencing with Deadlines Algorithm
int JS(int D[], int J[], int n)
{
D[0] = J[0] = 0; // initialize
J[1] = 1; // include job 1
int k = 1;
for (int i = 2; i <= n; i++)
{ // Consider jobs in non-increasing order of P[i].
// Find a position for job i and check the feasibility of inserting it.
int r = k;
while ((D[J[r]] > D[i]) && (D[J[r]] != r))
r--;
if ((D[J[r]] <= D[i]) && (D[i] > r))
{ // insert i into J[]
for (int q = k; q >= r + 1; q--)
J[q + 1] = J[q];
J[r + 1] = i;
k++;
}
}
return k;
}
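These notes do not include a lab program for job sequencing, so the following Java sketch (class and method names are assumptions) illustrates the same greedy idea using a simple slot array: jobs are considered in non-increasing order of profit, and each job is placed in the latest free time slot on or before its deadline.

import java.util.Arrays;
import java.util.Comparator;

public class JobSequencing {
    // Greedy job sequencing with deadlines; every job takes one unit of time.
    static int schedule(int[] profit, int[] deadline) {
        int n = profit.length;
        Integer[] order = new Integer[n];
        for (int i = 0; i < n; i++) order[i] = i;
        // Consider jobs in non-increasing order of profit.
        Arrays.sort(order, Comparator.comparingInt(i -> -profit[i]));

        int maxDeadline = Arrays.stream(deadline).max().orElse(0);
        int[] slot = new int[maxDeadline + 1]; // slot[t] = job number scheduled at time t (0 if free)
        int totalProfit = 0;

        for (int idx : order) {
            // Place the job in the latest free slot on or before its deadline.
            for (int t = Math.min(deadline[idx], maxDeadline); t >= 1; t--) {
                if (slot[t] == 0) {
                    slot[t] = idx + 1;
                    totalProfit += profit[idx];
                    break;
                }
            }
        }
        System.out.println("Schedule (slot -> job): " + Arrays.toString(slot));
        return totalProfit;
    }

    public static void main(String[] args) {
        int[] p = {20, 15, 10, 5, 1};
        int[] d = {2, 2, 1, 3, 3};
        // For Problem 1 above this selects jobs 1, 2 and 4 with total profit 40.
        System.out.println("Total profit = " + schedule(p, d));
    }
}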
MINIMUM SPANNING TREE (MST)
Spanning tree of a connected graph G: a connected acyclic subgraph of G that includes all of G's vertices.

Minimum spanning tree of a weighted, connected graph G: a spanning tree of G of the minimum total weight.
Example of MST
PRIM’S ALGORITHM
• Construct a MST through a sequence of expanding subtrees.
• The initial subtree in such a sequence consists of a single vertex selected arbitrarily from
the set V of the graph’s vertices.

• The algorithm stops after all the graph's vertices have been included in the tree being constructed.
• Algorithm expands a tree by exactly one vertex on each of its iterations.
• Total number of iterations is n-1 where n is the number of vertices in the graph.

• A graph is guaranteed to have only one minimum spanning tree if the weights associated with all the edges in the graph are distinct.
• In the case of a digraph, the notion of a minimum spanning tree does not make complete sense.
Applications: used for designing efficient routing algorithms.
The algorithm proceeds by selecting adjacent edges of minimum weight; care should be taken not to form a cycle.
Prim’s Algorithm Example
Prim’s Algorithm Working Procedure

Total Cost= 15
Prim’s Algorithm Example
Lab Program 9
import java.util.Scanner;
public class PrimsClass
{
final static int MAX = 20;
static int n; // No. of vertices of G
static int cost[][]; // Cost matrix
static Scanner scan = new Scanner(System.in);
public static void main(String[] args)
{ ReadMatrix(); Prims(); }
static void ReadMatrix()
{
int i, j;
cost = new int[MAX][MAX];
System.out.println("\n No. of nodes:");
n = scan.nextInt();
System.out.println("\n Enter matrix:\n");
for (i = 1; i <= n; i++)
for (j = 1; j <= n; j++)
{
cost[i][j] = scan.nextInt();
if (cost[i][j] == 0)
cost[i][j] = 999; // no edge between i and j: treat as infinity
}
}
static void Prims()
{
int visited[] = new int[MAX]; // visited[i] = 1 if vertex i is already in the tree
int ne = 1, i, j, min, a = 0, b = 0, u = 0, v = 0;
int mincost = 0; visited[1] = 1;
while (ne < n)
{
for (i = 1, min = 999; i <= n; i++)
for (j = 1; j <= n; j++)
if (cost[i][j] < min)
if (visited[i] != 0)
{
min = cost[i][j]; a = u = i;
b = v = j;
}
if (visited[u] == 0 || visited[v] == 0)
{
System.out.println("Edge" + ne++ + ":(" + a + "," + b + ")" + "cost:" + min);
mincost += min; visited[b] = 1;
}
cost[a][b] = cost[b][a] = 999;
}
System.out.println("\n Minimun cost" + mincost);
} }
KRUSKAL’S ALGORITHM
• Kruskal's algorithm finds the MST of a weighted connected graph G = (V, E) as an acyclic subgraph with |V| - 1 edges whose total weight is minimum.

• The algorithm begins by sorting the graph's edges in non-decreasing order of their weights.

• Then it scans this sorted list, starting with the empty subgraph, and adds the next edge on the list to the current subgraph if such an inclusion does not create a cycle, skipping the edge otherwise.
Brute Force Example
Kruskal’s Example
Kruskal’s Algorithm
Algorithm : Kruskal(G)
// Kruskal’s algorithm for constructing a minimum spanning tree
// Input : A weighted connected graph G=(V,E)
// Output : ET, the set of edges composing a minimum spanning tree of G
Sort E in non-decreasing order of the edge weights: w(ei1) <= … <= w(ei|E|)

ET ← Ø ; ecounter ← 0 //Initialize the set of tree edges and its size


k←0 //initialize the number of processed edges
while ecounter < |V|-1 do
k←k+1
if ET U {eik} is acyclic
ET←ET U {eik};
ecounter ←ecounter+1
return ET
Kruskal’s Algorithm
• Disjoint subsets and the union-find operations

Makeset(x) - creates a one-element set {x}. This operation can be applied to each element of the set S only once.

Find(x) - returns the subset containing x.

Union(x, y) - constructs the union of the disjoint subsets Sx and Sy containing x and y respectively, and adds it to the collection to replace Sx and Sy, which are deleted from it.
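A minimal Java sketch of these operations is shown below (array-based, without the union-by-size or path-compression refinements; the class name DisjointSets is an assumption). Kruskal's algorithm uses find to test whether the two endpoints of an edge already lie in the same tree.

public class DisjointSets {
    private final int[] parent;

    // Makeset for elements 1..n: each element starts as the root of its own tree.
    public DisjointSets(int n) {
        parent = new int[n + 1];
        for (int x = 1; x <= n; x++)
            parent[x] = x;
    }

    // Find(x): follow parent links up to the representative (root) of x's set.
    public int find(int x) {
        while (parent[x] != x)
            x = parent[x];
        return x;
    }

    // Union(x, y): merge the two sets by linking one root under the other.
    public void union(int x, int y) {
        int rx = find(x), ry = find(y);
        if (rx != ry)
            parent[ry] = rx;
    }

    public static void main(String[] args) {
        DisjointSets ds = new DisjointSets(4);
        ds.union(1, 2); // accept edge (1, 2)
        System.out.println(ds.find(1) == ds.find(2)); // true: another edge (1, 2) would form a cycle
        System.out.println(ds.find(1) == ds.find(3)); // false: edge (1, 3) is safe to add
    }
}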
Kruskal’s Example
Time complexity
• The crucial check of whether two vertices belong to the same tree can be carried out using union-find algorithms.
If the graph is represented as an adjacency matrix, then the complexity of Kruskal's algorithm is O(V^2).
Using a heap and an adjacency list, the complexity is of the order of O(E log V).
Application: useful in designing a low-cost computer network.
Lab Program 8
import java.util.Scanner;
public class KruskalsClass
{
final static int MAX = 20;
static int n; // No. of vertices of G
static int cost[][]; // Cost matrix
static int parent[]; // parent links for union-find (cycle detection)
static Scanner scan = new Scanner(System.in);
public static void main(String[] args)
{
ReadMatrix(); Kruskals();
}
static void ReadMatrix()
{
int i, j;
cost = new int[MAX][MAX];
parent = new int[MAX]; // all zero: every vertex initially forms its own set
System.out.println("Implementation of Kruskal's algorithm"); System.out.println("Enter the no. of vertices");
n = scan.nextInt(); System.out.println("Enter the cost adjacency matrix");
for (i = 1; i <= n; i++)
{
for (j = 1; j <= n; j++)
{
cost[i][j] = scan.nextInt(); if (cost[i][j] == 0)
cost[i][j] = 999;
}
}
}
static void Kruskals()
{
int a = 0, b = 0, u = 0, v = 0, i, j, ne = 1, min, mincost = 0;
System.out.println("The edges of Minimum Cost Spanning Tree are");
while (ne < n)
{
for (i = 1, min = 999; i <= n; i++)
{
for (j = 1; j <= n; j++)
{
if (cost[i][j] < min)
{
min = cost[i][j]; a = u = i; b = v = j;
}
}
}
u = find(u); v = find(v);
if (u != v)
{
uni(u, v);
System.out.println(ne++ + "edge (" + a + "," + b + ") =" + min); mincost += min;
}
cost[a][b] = cost[b][a] = 999;
}
System.out.println("Minimum cost :" + mincost);
}
static int find(int i)
{
// follow parent links until a root (parent 0) is reached
while (parent[i] != 0)
i = parent[i];
return i;
}
static void uni(int i, int j)
{
// merge the two trees by making i the parent of j
parent[j] = i;
}
}
SHORTEST PATHS – DIJKSTRA’S ALGORITHM
• Dijkstra's algorithm finds the shortest paths from a given vertex to all the remaining vertices in a digraph.

• The constraint is that each edge has non-negative cost. The length of
the path is the sum of the costs of the edges on the path.

• Find out the shortest path from a given source vertex ‘S’ to each of
the destinations (other vertices ) in the graph.
Time Complexity - Dijkstra's Algorithm: O(V^2) with a cost adjacency matrix, and of the order of E log V when adjacency lists and a min-heap are used.
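Lab Program 7 below implements the O(V^2) adjacency-matrix version. For contrast, the following sketch uses adjacency lists with java.util.PriorityQueue, which gives the E log V bound; the class and method names are assumptions for illustration.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.PriorityQueue;

public class DijkstraHeap {
    // Heap-based Dijkstra: graph.get(u) holds {v, weight} pairs, vertices are 0-based.
    static int[] shortestPaths(List<List<int[]>> graph, int source) {
        int n = graph.size();
        int[] dist = new int[n];
        Arrays.fill(dist, Integer.MAX_VALUE);
        dist[source] = 0;

        PriorityQueue<int[]> pq = new PriorityQueue<>((a, b) -> a[1] - b[1]);
        pq.add(new int[]{source, 0});
        while (!pq.isEmpty()) {
            int[] top = pq.poll();
            int u = top[0];
            if (top[1] > dist[u]) continue;      // stale queue entry, skip it
            for (int[] edge : graph.get(u)) {
                int v = edge[0], w = edge[1];
                if (dist[u] + w < dist[v]) {     // relax edge (u, v)
                    dist[v] = dist[u] + w;
                    pq.add(new int[]{v, dist[v]});
                }
            }
        }
        return dist;
    }

    public static void main(String[] args) {
        List<List<int[]>> g = new ArrayList<>();
        for (int i = 0; i < 3; i++) g.add(new ArrayList<>());
        g.get(0).add(new int[]{1, 4}); // edge 0 -> 1, weight 4
        g.get(0).add(new int[]{2, 1}); // edge 0 -> 2, weight 1
        g.get(2).add(new int[]{1, 2}); // edge 2 -> 1, weight 2
        System.out.println(Arrays.toString(shortestPaths(g, 0))); // [0, 3, 1]
    }
}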
Working Procedure-Dijkstra’s Algorithm
Lab Program 7
import java.util.*;
public class DijkstrasClass
{
final static int MAX = 20; final static int infinity = 9999;
static int n; // No. of vertices of G
static int a[][]; // Cost matrix
static Scanner scan = new Scanner(System.in);
public static void main(String[] args)
{
ReadMatrix();
int s = 0; // starting vertex
System.out.println("Enter starting vertex: ");
s = scan.nextInt();
Dijkstras(s); // find shortest path
}
static void ReadMatrix()
{
a = new int[MAX][MAX];
System.out.println("Enter the number of vertices:");
n = scan.nextInt();
System.out.println("Enter the cost adjacency matrix:");
for (int i = 1; i <= n; i++)
for (int j = 1; j <= n; j++)
{
a[i][j] = scan.nextInt();
if (a[i][j] == 0 && i != j)
a[i][j] = infinity; // treat a 0 entry (no edge) as infinity, as in the other programs
}
}
static void Dijkstras(int s)
{
int S[] = new int[MAX]; int d[] = new int[MAX]; int u, v; int i;
for (i = 1; i <= n; i++)
{
S[i] = 0; d[i] = a[s][i];
}
S[s] = 1; d[s] = 0; i = 2; // distance from the source to itself is 0
while (i <= n)
{
u = Extract_Min(S, d); S[u] = 1; i++;
for (v = 1; v <= n; v++)
{
if (((d[u] + a[u][v] < d[v]) && (S[v] == 0)))
d[v] = d[u] + a[u][v];
}
}
for (i = 1; i <= n; i++)
if (i != s)
System.out.println(i + ":" + d[i]);
}
static int Extract_Min(int S[], int d[])
{
int i, j = 1, min;
min = infinity;
for (i = 1; i <= n; i++)
{
if ((d[i] < min) && (S[i] == 0))
{
min = d[i];
j = i;
}
}
return (j);
}
}
OPTIMAL TREE PROBLEM- HUFFMAN TREES AND CODES
• The Huffman algorithm was developed by David Huffman.
• It is a coding technique for encoding data; the encoded data is used in data compression.
• Algorithm:
step 1: Initialize n one node trees and label them with the
characters of the alphabet. Record the frequency of each character in
its tree’s root to indicate the tree’s weight.
step 2: Repeat the following operation until a single tree is obtained. Find two trees with the smallest weights (ties can be broken arbitrarily). Make them the left and right subtrees of a new tree and record the sum of their weights in the root of the new tree as its weight.
A tree constructed using this algorithm is called a Huffman tree. It defines, in the manner described below, a Huffman code.
Example
• Consider the five-character alphabet {A, B, C, D, -} with the following occurrence probabilities:

Character      A      B      C      D      -
Probability   0.35   0.1    0.2    0.2    0.15
• With the occurrence probabilities given and the codeword lengths obtained (2 bits for A, C and D, 3 bits for B and -), the expected number of bits per character in this code is

Average code length per character = ∑ (probabilityi x codeword lengthi)
= 2(0.35) + 3(0.1) + 2(0.2) + 2(0.2) + 3(0.15) = 2.25

Total number of bits in the Huffman-encoded message = total number of characters in the message x average code length per character.
The Huffman code achieves a compression ratio of ((3 - 2.25) / 3) * 100 = 25%, where the compression ratio is a standard measure of a compression algorithm's effectiveness. The Huffman encoding of such a text will therefore use 25% less memory than its fixed-length encoding (which needs 3 bits per character for a five-character alphabet).
• To encode a text that comprises characters from some n-character alphabet by assigning to each of the text's characters some sequence of bits called a codeword, we can use:

Fixed-length encoding - assigns to each character a bit string of the same length m.

Variable-length encoding - assigns codewords of different lengths to different characters.

• Problem: how to tell how many bits of an encoded text represent the first character? To avoid this complication, we limit ourselves to prefix-free codes. In a prefix code, no codeword is a prefix of the codeword of another character.

• With such an encoding, we can simply scan the bit string until we get the first group of bits that is a codeword for some character, replace these bits by this character, and repeat this operation until the end of the bit string is reached.

• To create a binary prefix code for some alphabet, associate the alphabet's characters with the leaves of a binary tree in which all the left edges are labelled by 0 and all the right edges are labelled by 1. The codeword of a character is obtained by recording the labels on the simple path from the root to the character's leaf.
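The following Java sketch illustrates the two-step tree construction and the 0/1 edge-labelling rule above, using java.util.PriorityQueue; the Node class and the method names are assumptions for illustration.

import java.util.HashMap;
import java.util.Map;
import java.util.PriorityQueue;

public class HuffmanSketch {
    // One tree node: a leaf carries a character, an internal node carries only a weight.
    static class Node {
        final double weight;
        final Character ch;     // null for internal nodes
        final Node left, right;
        Node(double w, Character c, Node l, Node r) { weight = w; ch = c; left = l; right = r; }
    }

    // Step 1: one single-node tree per character.
    // Step 2: repeatedly merge the two lightest trees until one tree remains.
    static Node buildTree(Map<Character, Double> freq) {
        PriorityQueue<Node> pq = new PriorityQueue<>((a, b) -> Double.compare(a.weight, b.weight));
        for (Map.Entry<Character, Double> e : freq.entrySet())
            pq.add(new Node(e.getValue(), e.getKey(), null, null));
        while (pq.size() > 1) {
            Node l = pq.poll(), r = pq.poll();
            pq.add(new Node(l.weight + r.weight, null, l, r));
        }
        return pq.poll();
    }

    // Label left edges 0 and right edges 1; the root-to-leaf path gives the codeword.
    static void assignCodes(Node node, String code, Map<Character, String> out) {
        if (node == null) return;
        if (node.ch != null) { out.put(node.ch, code.isEmpty() ? "0" : code); return; }
        assignCodes(node.left, code + "0", out);
        assignCodes(node.right, code + "1", out);
    }

    public static void main(String[] args) {
        Map<Character, Double> freq = new HashMap<>();
        freq.put('A', 0.35); freq.put('B', 0.1); freq.put('C', 0.2);
        freq.put('D', 0.2);  freq.put('-', 0.15);
        Map<Character, String> codes = new HashMap<>();
        assignCodes(buildTree(freq), "", codes);
        System.out.println(codes); // codeword lengths: 2 for A, C, D and 3 for B, -
    }
}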
TRANSFORM AND CONQUER APPROACH- HEAPS AND HEAPSORT
• Transform and Conquer: This group of techniques solves a problem by a transformation either to a simpler or more convenient instance of the same problem (instance simplification), or to a different representation of the same instance (representation change), or to a different problem for which an algorithm is already available (problem reduction).

• Heaps and Heapsort:

A heap is a partially ordered data structure that is suitable for implementing priority queues. A heap can be defined as a binary tree with keys assigned to its nodes (one key per node), provided the following 2 conditions are met:

1. The tree shape requirement - the binary tree is essentially complete, i.e., all its levels are full except possibly the last level, where only some rightmost leaves may be missing.

2. The parental dominance requirement - the key at each node is greater than or equal to the keys at its children.
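A short Java heapsort sketch based on this definition is given below (class and method names are assumptions): heapify restores parental dominance at a node, the first loop builds a max-heap bottom-up, and the second loop repeatedly swaps the maximum key to the end of the array.

import java.util.Arrays;

public class HeapSortSketch {
    // Restore parental dominance at index i in the heap a[0..n-1] (0-based max-heap).
    static void heapify(int[] a, int n, int i) {
        int largest = i, left = 2 * i + 1, right = 2 * i + 2;
        if (left < n && a[left] > a[largest]) largest = left;
        if (right < n && a[right] > a[largest]) largest = right;
        if (largest != i) {
            int t = a[i]; a[i] = a[largest]; a[largest] = t;
            heapify(a, n, largest);   // continue sifting the key down
        }
    }

    static void heapSort(int[] a) {
        int n = a.length;
        for (int i = n / 2 - 1; i >= 0; i--)      // bottom-up heap construction
            heapify(a, n, i);
        for (int end = n - 1; end > 0; end--) {   // repeatedly move the maximum to the end
            int t = a[0]; a[0] = a[end]; a[end] = t;
            heapify(a, end, 0);
        }
    }

    public static void main(String[] args) {
        int[] a = {2, 9, 7, 6, 5, 8};
        heapSort(a);
        System.out.println(Arrays.toString(a)); // [2, 5, 6, 7, 8, 9]
    }
}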
