MODULE 2

CHAPTER 1: DIVIDE AND CONQUER:


MERGESORT,
QUICK SORT,
FINDING MAXIMUM AND MINIMUM ELEMENT IN THE LIST,
STRASSEN’S MULTIPLICATION,
BINARY SEARCH

CHAPTER 2: DECREASE AND CONQUER:


TOPOLOGICAL SORTING
DIVIDE AND CONQUER
• General Plan:
1. A problem is divided into several sub-problems of the same type, ideally of about equal size.

2. The sub-problems are solved (typically recursively, though sometimes a different algorithm is employed, especially when sub-problems become small enough).

3. If necessary, the solutions to the sub-problems are combined to get a solution to the original problem.
[Diagram: a problem of size n (an instance) is split into subproblem 1 and subproblem 2, each of size n/2; a solution to each subproblem is then combined into a solution to the original problem.]

• Divide and conquer generally leads to a recursive algorithm!
Recurrence relation and Master theorem
T(n) = a T(n/b) + f(n)
f(n) - time spent on dividing the problem and on combining the results,
where f(n) ∈ Θ(n^d), d ≥ 0

Master Theorem: If a < b^d, T(n) ∈ Θ(n^d)
                If a = b^d, T(n) ∈ Θ(n^d log n)
                If a > b^d, T(n) ∈ Θ(n^(log_b a))
Note: The same results hold with O instead of Θ.
Example: SUM of array elements
ALGORITHM: SUM(a, low, high)
//Computes the sum of the array elements
//Input: a - array; low and high - indices of the first and last elements
//Output: sum of a[low..high]
if low > high return 0
if low = high return a[low]
mid ← (low + high)/2
return SUM(a, low, mid) + SUM(a, mid+1, high)
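The pseudocode above translates directly into Python (a minimal sketch, assuming a 0-indexed list and the usual convention that an empty range sums to 0):

```python
# Divide-and-conquer sum of a[low..high]: split the range at mid,
# sum each half recursively, and add the two partial sums.
def array_sum(a, low, high):
    if low > high:           # empty range
        return 0
    if low == high:          # single element
        return a[low]
    mid = (low + high) // 2
    return array_sum(a, low, mid) + array_sum(a, mid + 1, high)
```

For example, `array_sum([1, 2, 3, 4], 0, 3)` returns 10.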
Analysis
T(n) = 0 if n = 1
T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + 1 otherwise
(the two recursive terms are the time to add the items in the left and right parts of the array; the +1 is the final addition of the two partial sums)

T(n) = θ(n)
Solve using the repeated substitution method or using the Master theorem.
MERGESORT
• ALGORITHM: Mergesort(A[0..n − 1])
//Sorts array A[0..n − 1] by recursive mergesort
//Input: An array A[0..n − 1] of orderable elements
//Output: Array A[0..n − 1] sorted in nondecreasing order
if n > 1
    copy A[0..⌊n/2⌋ − 1] to B[0..⌊n/2⌋ − 1]
    copy A[⌊n/2⌋..n − 1] to C[0..⌈n/2⌉ − 1]
    Mergesort(B)
    Mergesort(C)
    Merge(B, C, A)
• ALGORITHM: Merge(B[0..p − 1], C[0..q − 1], A[0..p + q − 1])
//Merges two sorted arrays into one sorted array
//Input: Arrays B[0..p − 1] and C[0..q − 1], both sorted
//Output: Sorted array A[0..p + q − 1] of the elements of B and C
i ← 0; j ← 0; k ← 0
while i < p and j < q do
    if B[i] ≤ C[j]
        A[k] ← B[i]; i ← i + 1
    else A[k] ← C[j]; j ← j + 1
    k ← k + 1
if i = p
    copy C[j..q − 1] to A[k..p + q − 1]
else
    copy B[i..p − 1] to A[k..p + q − 1]
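A minimal Python sketch of the two routines, using list slices in place of the explicit copies (list indices start at 0):

```python
# Top-down mergesort: split A into halves B and C, sort each
# recursively, then merge them back into A in place.
def mergesort(A):
    n = len(A)
    if n > 1:
        B = A[:n // 2]          # copy A[0..n/2 - 1] to B
        C = A[n // 2:]          # copy A[n/2..n - 1] to C
        mergesort(B)
        mergesort(C)
        merge(B, C, A)

def merge(B, C, A):
    # Merge two sorted lists B and C into A.
    i = j = k = 0
    while i < len(B) and j < len(C):
        if B[i] <= C[j]:
            A[k] = B[i]; i += 1
        else:
            A[k] = C[j]; j += 1
        k += 1
    # copy whichever half still has elements left
    A[k:] = B[i:] if i < len(B) else C[j:]
```

Calling `mergesort(x)` on `x = [5, 3, 1, 9]` leaves `x` as `[1, 3, 5, 9]`.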
ANALYSIS
T(n) = 0 if n = 1
T(n) = 2T(n/2) + Tmerge(n) otherwise

In the worst case Tmerge(n) = n − 1,

so T(n) = 2T(n/2) + n − 1 if n > 1

Solve either using the repeated substitution method or the Master theorem:
T(n) = θ(n log2 n)
ADVANTAGES AND DISADVANTAGES
ADVANTAGES
• It is a stable algorithm.
• It can be applied to files of any size.
• Used for internal sorting with arrays in main memory, or for external sorting with files.

DISADVANTAGES
• Uses more memory on the stack because of recursion.
• Uses extra space proportional to n, so the algorithm is not in place.
• Though merging can be done in place, the resulting algorithm is quite complicated and has a significantly larger multiplicative constant.
QUICK SORT(partition exchange sort)
• Unlike mergesort, which divides its input
elements according to their position in the
array, quicksort divides them according to
their value.

• A[0] . . . A[s − 1] A[s] A[s + 1] . . . A[n − 1]

all are ≤A[s] all are ≥A[s]


• ALGORITHM: Quicksort(A[l..r])
//Sorts a subarray by quicksort
//Input: Subarray of array A[0..n − 1], defined by its left and right indices l and r
//Output: Subarray A[l..r] sorted in nondecreasing order
if l < r
    s ← Partition(A[l..r]) //s is a split position
    Quicksort(A[l..s − 1])
    Quicksort(A[s + 1..r])
• ALGORITHM: Partition(A[l..r])
//Partitions a subarray by Hoare's algorithm, using the first element as a pivot
//Input: Subarray of array A[0..n − 1], defined by its left and right indices l and r (l < r)
//Output: Partition of A[l..r], with the split position returned as this function's value
p ← A[l]
i ← l; j ← r + 1
repeat
    repeat i ← i + 1 until A[i] ≥ p
    repeat j ← j − 1 until A[j] ≤ p
    swap(A[i], A[j])
until i ≥ j
swap(A[i], A[j]) //undo last swap when i ≥ j
swap(A[l], A[j])
return j
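A Python sketch of quicksort with a Hoare-style partition using the first element as the pivot; the bound `i < r` stands in for the pseudocode's implicit sentinel, so the left scan cannot run past the subarray:

```python
def quicksort(A, l, r):
    # Sort A[l..r] in place.
    if l < r:
        s = partition(A, l, r)   # s is the split position
        quicksort(A, l, s - 1)
        quicksort(A, s + 1, r)

def partition(A, l, r):
    # Hoare-style partition with A[l] as the pivot.
    p = A[l]
    i, j = l, r + 1
    while True:
        i += 1
        while i < r and A[i] < p:   # scan right for an element >= pivot
            i += 1
        j -= 1
        while A[j] > p:             # scan left for an element <= pivot
            j -= 1
        if i >= j:                  # scans have crossed
            break
        A[i], A[j] = A[j], A[i]
    A[l], A[j] = A[j], A[l]         # put the pivot in its final place
    return j
```

Usage: `quicksort(x, 0, len(x) - 1)` sorts the whole list `x`.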
ANALYSIS
• BEST CASE: (pivot lands exactly at the center)
T(n) = 0 if n = 1
T(n) = 2T(n/2) + n otherwise
T(n) = θ(n log2 n)

• WORST CASE: (partitioned into 2 subarrays with one of them being empty)
T(n) = 0 if n = 1
T(n) = T(0) + T(n − 1) + n otherwise
T(n) = O(n^2)

• AVERAGE CASE: (the pivot can end up at any position with equal probability)
           n−1
T(n) = 1/n ∑ [(n + 1) + T(k) + T(n − 1 − k)]   for n > 1
           k=0

T(0) = 0 and T(1) = 1

T(n) = θ(n log2 n)
ADVANTAGES AND DISADVANTAGES

ADVANTAGES
• It has an extremely short inner loop.
• Time complexity is O(n log2 n) in the best and average case to sort n items.
• The algorithm is in place, since it uses only a small auxiliary stack.

DISADVANTAGES
• It is not stable.
• Time complexity is O(n^2) in the worst case.
• It is fragile, i.e., a simple mistake in the implementation can go unnoticed and the algorithm may not work.
BINARY SEARCH
ALGORITHM: BS(key, a, low, high)
//Searches for the key element in the list
//Input: a - array; key - element to be searched for;
//       low and high - indices of the first and last elements
//Output: position of key if found, −1 otherwise
if low > high return −1
mid ← (low + high)/2
if key = a[mid] return mid
if key < a[mid] return BS(key, a, low, mid − 1)
return BS(key, a, mid + 1, high)
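The recursive pseudocode maps directly onto Python (a minimal sketch, assuming a sorted 0-indexed list):

```python
# Recursive binary search: compare the key with the middle element
# and recurse into the half that can still contain it.
def binary_search(a, key, low, high):
    if low > high:                  # key is not present
        return -1
    mid = (low + high) // 2
    if key == a[mid]:
        return mid
    if key < a[mid]:
        return binary_search(a, key, low, mid - 1)
    return binary_search(a, key, mid + 1, high)
```

For example, `binary_search([2, 4, 6, 8, 10], 8, 0, 4)` returns 3.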
ANALYSIS
• BEST CASE: T(n) = Ω(1)

• WORST CASE:
T(n) = 1 if n = 1
T(n) = T(n/2) + 1 otherwise

T(n) = θ(log2 n)
FINDING MINIMUM AND MAXIMUM ELEMENT
Algorithm: Maxmin(i, j, max, min)
if (i = j) then
    max = min = a[i]
else if (i = j − 1) then
    if (a[i] < a[j]) then
        max = a[j]; min = a[i]
    else
        max = a[i]; min = a[j]
    end if
else
    mid = (i + j)/2
    Maxmin(i, mid, max, min)
    Maxmin(mid + 1, j, max1, min1)
    max = max(max, max1)
    min = min(min, min1)
end if
End Maxmin
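Since Python has no reference parameters, a sketch of the same algorithm can return the pair (max, min) instead:

```python
# Recursive max/min: one comparison for a pair, otherwise split the
# range, solve each half, and combine with two more comparisons.
def maxmin(a, i, j):
    if i == j:                       # one element
        return a[i], a[i]
    if i == j - 1:                   # two elements: a single comparison
        return (a[j], a[i]) if a[i] < a[j] else (a[i], a[j])
    mid = (i + j) // 2
    max1, min1 = maxmin(a, i, mid)
    max2, min2 = maxmin(a, mid + 1, j)
    return max(max1, max2), min(min1, min2)
```

For example, `maxmin([3, 7, 1, 9, 4], 0, 4)` returns `(9, 1)`.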
ANALYSIS
T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + 2 for n > 2
T(n) = 1 for n = 2
T(n) = 0 for n = 1

T(n) = θ(n)
STRASSEN'S MATRIX MULTIPLICATION

| C00 C01 |   | A00 A01 |   | B00 B01 |
| C10 C11 | = | A10 A11 | * | B10 B11 |

            | m1 + m4 − m5 + m7    m3 + m5           |
          = | m2 + m4              m1 + m3 − m2 + m6 |

m1 = (A00 + A11) * (B00 + B11)    m5 = (A00 + A01) * B11
m2 = (A10 + A11) * B00            m6 = (A10 − A00) * (B00 + B01)
m3 = A00 * (B01 − B11)            m7 = (A01 − A11) * (B10 + B11)
m4 = A11 * (B10 − B00)
ANALYSIS

T(n) = 7T(n/2) for n > 1
T(n) = 1 for n = 1

T(n) = θ(n^log2 7) ≈ θ(n^2.807)
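For the 2×2 case the seven products can be checked numerically; the sketch below uses Strassen's standard formulas, including m4 = A11 * (B10 − B00), which completes the set of seven:

```python
# Strassen's seven products for a 2x2 multiplication: 7 multiplications
# instead of the usual 8, at the cost of extra additions.
def strassen_2x2(A, B):
    (a00, a01), (a10, a11) = A
    (b00, b01), (b10, b11) = B
    m1 = (a00 + a11) * (b00 + b11)
    m2 = (a10 + a11) * b00
    m3 = a00 * (b01 - b11)
    m4 = a11 * (b10 - b00)
    m5 = (a00 + a01) * b11
    m6 = (a10 - a00) * (b00 + b01)
    m7 = (a01 - a11) * (b10 + b11)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 + m3 - m2 + m6]]
```

For example, `strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]])` returns `[[19, 22], [43, 50]]`, matching the ordinary product. In the recursive algorithm the entries are n/2 × n/2 blocks rather than scalars.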
GENERAL METHOD
Algorithm: DAndC(P) {
    if Small(P) then return S(P);
    else
    {
        divide P into smaller instances P1, P2, ..., Pk, k ≥ 1;
        apply DAndC to each of these sub-problems;
        return Combine(DAndC(P1), DAndC(P2), ..., DAndC(Pk));
    }
}
DECREASE AND CONQUER
• The decrease-and-conquer technique is based on exploiting the relationship between a solution to a given instance of a problem and a solution to its smaller instance.

• Once such a relationship is established, it can be exploited either top down or bottom up.

• The bottom-up variation is usually implemented iteratively, starting with a solution to the smallest instance of the problem; it is sometimes called the incremental approach.

• There are three major variations of decrease-and-conquer:
1. decrease by a constant
2. decrease by a constant factor
3. variable size decrease
decrease-by-a-constant
• In the decrease-by-a-constant variation, the size of an instance is reduced by the same constant on each iteration of the algorithm.

• Typically, this constant is equal to one, although other constant-size reductions do happen occasionally.

• For example, for computing a^n:
  f(n) = f(n − 1) · a if n > 0,
  f(n) = 1 if n = 0.
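The relation above can be sketched directly in Python (decrease-by-one exponentiation, assuming a non-negative integer exponent):

```python
# Decrease-by-one exponentiation: f(n) = f(n - 1) * a, f(0) = 1.
# Each call reduces the instance size by exactly one.
def power(a, n):
    if n == 0:
        return 1
    return power(a, n - 1) * a
```

For example, `power(2, 10)` returns 1024, after n recursive steps.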
decrease-by-a-constant-factor
• The decrease-by-a-constant-factor technique suggests reducing a problem instance by the same constant factor on each iteration of the algorithm.
• In most applications, this constant factor is equal to two.

• For an example, let us revisit the exponentiation problem.

• If the instance of size n is to compute a^n, the instance of half its size is to compute a^(n/2), with the obvious relationship between the two: a^n = (a^(n/2))^2.

• But since we consider here instances with integer exponents only, the former does not work for odd n.
• If n is odd, we have to compute a^(n−1) by using the rule for even-valued exponents and then multiply the result by a:

  a^n = (a^(n/2))^2 if n is even and positive,
  a^n = (a^((n−1)/2))^2 · a if n is odd,
  a^n = 1 if n = 0.
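The three-way case analysis above becomes a short Python sketch (decrease-by-half exponentiation, assuming a non-negative integer exponent):

```python
# Decrease-by-a-constant-factor exponentiation: halve the exponent
# when it is even, peel off one factor of a when it is odd.
def fast_power(a, n):
    if n == 0:
        return 1
    if n % 2 == 0:
        half = fast_power(a, n // 2)
        return half * half              # a^n = (a^(n/2))^2
    half = fast_power(a, (n - 1) // 2)
    return half * half * a              # a^n = (a^((n-1)/2))^2 * a
```

For example, `fast_power(2, 10)` returns 1024 in about log2 n recursive steps, versus the n steps of the decrease-by-one version.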
variable-size-decrease
• In the variable-size-decrease variety of decrease-and-conquer, the size-reduction pattern varies from one iteration of an algorithm to another.

• Euclid's algorithm for computing the greatest common divisor provides a good example of such a situation: gcd(m, n) = gcd(n, m mod n).
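A minimal sketch of Euclid's algorithm: each step replaces the instance (m, n) with (n, m mod n), a reduction whose size varies from one iteration to the next:

```python
# Euclid's algorithm: repeatedly replace (m, n) by (n, m mod n)
# until the second component becomes 0.
def gcd(m, n):
    while n != 0:
        m, n = n, m % n
    return m
```

For example, `gcd(60, 24)` goes through (60, 24) → (24, 12) → (12, 0) and returns 12.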
Topological Sorting
• A topological sort of a directed acyclic graph G = (V, E) is a linear ordering of all vertices such that for every edge (u, v) in the graph G, the vertex u appears before the vertex v in the ordering.

• It can be viewed as an ordering of vertices along a horizontal line so that all directed edges go from left to right.

• NOTE: for a cyclic graph no linear ordering is possible.

• Methods:
1. DFS method
2. Source removal method
DFS METHOD
Step 1: select any arbitrary vertex.

Step 2: when a vertex is visited for the first time, it is pushed onto the stack.

Step 3: when a vertex becomes a dead end, it is popped from the stack.

Step 4: repeat steps 2 and 3 for all the vertices in the graph.

Step 5: reverse the order of the popped vertices to get the topological sequence.
DFS METHOD Algorithm
Algorithm 1: Topologicalorder(n, a)
Step 1: [initialize to indicate no vertex has been visited]
    for i ← 0 to n − 1 do
        s[i] ← 0
    end for
    j ← 0 //index into res, which stores vertices as they become dead ends
Step 2: [process each vertex]
    for u ← 0 to n − 1 do
        if (s[u] = 0) call DFS(u, n, a)
    end for
Step 3: [print topological order]
    for i ← n − 1 to 0 do
        print res[i]
    end for
Step 4: return

Algorithm 2: DFS(u, n, a)
Step 1: [visit the vertex u]
    s[u] ← 1
Step 2: [traverse deeper into the graph till we reach a dead end]
    for v ← 0 to n − 1 do
        if (a[u][v] = 1 and s[v] = 0) then call DFS(v, n, a)
        end if
    end for
Step 3: [store the dead-end vertex]
    res[j] ← u
    j ← j + 1
Step 4: return
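The two algorithms above can be sketched in Python, with the dead-end list reversed at the end instead of printed backwards (adjacency-matrix representation assumed):

```python
# DFS-based topological sort: append a vertex to `res` when it
# becomes a dead end, then reverse to get the topological order.
def topological_order(n, a):
    s = [0] * n          # visited marks
    res = []             # vertices in dead-end order

    def dfs(u):
        s[u] = 1
        for v in range(n):
            if a[u][v] == 1 and s[v] == 0:
                dfs(v)
        res.append(u)    # u has become a dead end

    for u in range(n):
        if s[u] == 0:
            dfs(u)
    return res[::-1]     # reverse the dead-end order
```

For the DAG with edges 0→1, 0→2, 1→3, 2→3 this returns `[0, 2, 1, 3]`, a valid ordering in which every edge goes left to right.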
ANALYSIS
for v ← 0 to n − 1 do
    if (a[u][v] = 1 and s[v] = 0) then DFS(v, n, a)

       n−1  n−1
T(n) =  ∑    ∑   1
       u=0  v=0

T(n) ∈ θ(n^2) with an adjacency matrix

T(n) = O(|V| + |E|) with adjacency lists
SOURCE REMOVAL METHOD
• In this method, a vertex with no incoming edges is selected and deleted along with its outgoing edges.

• If there are several vertices with no incoming edges, a vertex is selected arbitrarily.

• The order in which the vertices are visited and deleted one by one results in a topological sort.

Procedure: repeat the following steps till the stack is empty.
1. Find the vertices with indegree 0 and place them on the stack.
2. Pop a vertex u; this is the next task to be done.
3. Add vertex u to the solution vector, representing the next job to be considered.
4. Find each vertex v adjacent to vertex u.
5. Decrement the in-degree of v by one, thereby reducing the dependencies on v by 1.
Algorithm: Topologicalsort(a, n, s)
    //compute the indegree of each vertex
    for j ← 0 to n − 1 do
        sum ← 0
        for i ← 0 to n − 1 do
            sum ← sum + a[i][j]
        end for
        indegree[j] ← sum
    end for

    //push all vertices with indegree 0 onto the stack s
    top ← −1
    for i ← 0 to n − 1 do
        if (indegree[i] = 0) then
            top ← top + 1
            s[top] ← i
        end if
    end for

    while top != −1
        u ← s[top]
        top ← top − 1
        add u to solution vector T
        for each vertex v adjacent to u
            decrement indegree[v] by 1
            if (indegree[v] = 0) then
                top ← top + 1
                s[top] ← v
            end if
        end for
    end while

    print T
    return
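The source removal algorithm above can be sketched in Python, using a list as the stack and an adjacency matrix as input:

```python
# Source-removal topological sort: repeatedly pop a vertex of
# indegree 0 and decrement the indegrees of its successors.
def source_removal_sort(n, a):
    # indegree[j] = number of edges coming into vertex j
    indegree = [sum(a[i][j] for i in range(n)) for j in range(n)]
    stack = [i for i in range(n) if indegree[i] == 0]
    T = []                            # solution vector
    while stack:
        u = stack.pop()
        T.append(u)                   # u is the next task to be done
        for v in range(n):
            if a[u][v] == 1:
                indegree[v] -= 1      # remove edge u -> v
                if indegree[v] == 0:
                    stack.append(v)
    return T
```

For the DAG with edges 0→1, 0→2, 1→3, 2→3 this returns a valid ordering such as `[0, 2, 1, 3]`; if the graph has a cycle, the result contains fewer than n vertices, which detects the cycle.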