Vth SEM ADA
UNIT-4 DIVIDE AND CONQUER
Introduction
Divide and Conquer is an algorithmic pattern. The idea is to take a problem on a large input, break the
input into smaller pieces, solve the problem on each of the small pieces, and then merge the
piecewise solutions into a global solution. This mechanism of solving the problem is called the
Divide & Conquer strategy.
A Divide and Conquer algorithm solves a problem using the following three steps:
1. Divide: Break the original problem into a set of sub-problems.
2. Conquer: Solve every sub-problem individually, recursively.
3. Combine: Put together the solutions of the sub-problems to get the solution to the whole
problem.
Examples: The following computer algorithms are based on the Divide & Conquer approach:
1. Maximum and Minimum Problem
2. Binary Search
3. Sorting (merge sort, quick sort)
4. Tower of Hanoi.
Fundamentals of the Divide & Conquer Strategy:
There are two fundamentals of the Divide & Conquer Strategy:
1. Relational Formula
2. Stopping Condition
1. Relational Formula: It is the formula (recurrence) that we generate from the given technique. After
generating the formula, we apply the D&C strategy, i.e., we break the problem recursively and
solve the resulting sub-problems.
2. Stopping Condition: When we break a problem using the Divide & Conquer strategy, we
need to know how long to keep applying it. The condition at which we stop the recursive
steps of D&C is called the stopping condition.
Applications of Divide and Conquer Approach:
The following algorithms are based on the concept of the Divide and Conquer technique:
1. Binary Search: The binary search algorithm is a searching algorithm, which is also called
a half-interval search or logarithmic search. It works by comparing the target value with the
middle element of a sorted array. If the values differ, the half that cannot contain the
target is eliminated, and the search continues on the other half. We again take the middle
element of the remaining half and compare it with the target value. The process keeps repeating
until the target value is found. If the remaining half becomes empty, it can be
concluded that the target is not present in the array.
2. Quick Sort: It is one of the most efficient sorting algorithms, also known as partition-
exchange sort. It starts by selecting a pivot value from the array and then divides the
rest of the array elements into two sub-arrays. The partition is made by comparing each of
the elements with the pivot value: elements smaller than the pivot go to one sub-array and
larger elements to the other, and the sub-arrays are then sorted recursively.
3. Merge Sort: It is a sorting algorithm that sorts an array by making comparisons. It starts by
dividing the array into sub-arrays and then recursively sorts each of them. After the sorting is
done, it merges them back together.
4. Closest Pair of Points: It is a problem of computational geometry. Given n points in a
metric space, the algorithm finds the pair of points whose distance apart is minimal.
5. Strassen's Algorithm: It is an algorithm for matrix multiplication, named after
Volker Strassen. It has proven to be much faster than the traditional algorithm when working
on large matrices.
Advantages of Divide and Conquer
Divide and Conquer can successfully solve some famously hard problems, such as the
Tower of Hanoi, a mathematical puzzle. It is challenging to solve a complicated problem for
which you have no basic idea, but the divide and conquer approach lessens the effort, since it
divides the main problem into smaller sub-problems and then solves them recursively. Such
algorithms are often much faster than naive alternatives.
It uses cache memory efficiently without occupying much space, because small sub-problems
can be solved entirely within the cache instead of repeatedly accessing the slower main memory.
It is more efficient than its counterpart, the brute-force technique.
Since these algorithms naturally exhibit parallelism (the sub-problems are independent), they can
be handled by systems that support parallel processing with little or no modification.
It is easy to implement and understand.
It reduces the time complexity of the algorithm by breaking down the problem into smaller
sub-problems.
It can be used to solve a wide range of problems, including sorting, searching, and
optimization.
It can be used in parallel computing to improve performance.
Disadvantages of Divide and Conquer
Since most of its algorithms are designed using recursion, the approach requires careful
memory management.
An explicit stack may use extra space.
It may even crash the system if the recursion depth exceeds the stack space available.
It can be difficult to determine the base case or stopping condition for the recursive calls.
It may not be the most efficient algorithm for all problems.
It may require more memory than other algorithms because it involves storing intermediate
results.
MERGE SORT
Merge sort is yet another sorting algorithm that falls under the category of the Divide and Conquer
technique. It is one of the best sorting techniques and a natural example of a recursive algorithm.
Divide and Conquer Strategy
In this technique, we segment a problem into two halves and solve them individually. After finding
the solution of each half, we merge them back to represent the solution of the main problem.
Suppose we have an array A, such that our main concern will be to sort the subsection, which
starts at index p and ends at index r, represented by A[p..r].
Divide
Assuming q to be the central point somewhere between p and r, we fragment the sub-array
A[p..r] into two sub-arrays A[p..q] and A[q+1..r].
Conquer
After splitting the array into two halves, the next step is to conquer. In this step, we individually
sort both of the sub-arrays A[p..q] and A[q+1..r]. If we have not yet reached the base case,
we again follow the same procedure, i.e., we further segment these sub-arrays and then
sort them separately.
Combine
Once the conquer step reaches the base case, we obtain the sorted sub-arrays
A[p..q] and A[q+1..r], which we then merge back to form a single sorted array A[p..r].
Algorithm:
The merging of two sorted arrays can be done as follows. Two pointers (array indices)
are initialized to point to the first elements of the arrays being merged. The elements pointed
to are compared, and the smaller of them is added to a new array being constructed; after
that, the index of the smaller element is incremented to point to its immediate successor in
the array it was copied from. This operation is repeated until one of the two given arrays is
exhausted, and then the remaining elements of the other array are copied to the end of the
new array.
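A minimal Python sketch of this merging procedure and of the overall merge sort; the function names and the sample array are illustrative assumptions, not taken from the notes:

```python
def merge(left, right):
    """Merge two already sorted lists into one sorted list."""
    result = []
    i = j = 0
    # Compare the front elements of both lists and copy the smaller one.
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    # One list is exhausted; copy the remaining elements of the other.
    result.extend(left[i:])
    result.extend(right[j:])
    return result

def merge_sort(a):
    """Divide the list into halves, sort each half recursively, then merge."""
    if len(a) <= 1:                   # base case: 0 or 1 elements are already sorted
        return a
    mid = len(a) // 2                 # divide
    left = merge_sort(a[:mid])        # conquer the left half
    right = merge_sort(a[mid:])       # conquer the right half
    return merge(left, right)         # combine

# Example usage (hypothetical input):
print(merge_sort([38, 27, 43, 3, 9, 82, 10]))   # [3, 9, 10, 27, 38, 43, 82]
```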
Analysis of Merge Sort:
Let T(n) be the total time taken by the Merge Sort algorithm on an input of size n.
Sorting the two halves will take at most 2T(n/2) time.
When we merge the two sorted halves, we make at most n-1 comparisons, because the last
element that is left over is simply copied into the combined list without any further
comparison.
Thus, the relational formula (recurrence) will be
T(n) = 2T(n/2) + (n - 1)
We usually drop the '-1', because the left-over element still takes some time to be copied into the merged list, giving
T(n) = 2T(n/2) + n
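As a supporting sketch (assuming n is a power of 2 and a constant-time base case T(1)), the recurrence can be unrolled to obtain the familiar closed form:

```latex
\begin{aligned}
T(n) &= 2\,T(n/2) + n \\
     &= 4\,T(n/4) + 2n \\
     &= 8\,T(n/8) + 3n \\
     &\;\;\vdots \\
     &= 2^{k}\,T\!\left(\tfrac{n}{2^{k}}\right) + k\,n
        \qquad\text{taking } k = \log_{2} n \\
     &= n\,T(1) + n\log_{2} n \;=\; \Theta(n \log n).
\end{aligned}
```

Hence merge sort runs in O(n log n) time in the best, average, and worst cases, while the merging step needs O(n) additional space.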
Merge Sort Applications
The concept of merge sort is applicable in the following areas:
Inversion count problem
External sorting
E-commerce applications
QUICK SORT
It is an algorithm of Divide & Conquer type.
Divide: Rearrange the elements and split the array into two sub-arrays with a pivot element in
between, such that each element in the left sub-array is less than or equal to the pivot and each
element in the right sub-array is greater than the pivot.
Conquer: Recursively sort the two sub-arrays.
Combine: Combine the already sorted sub-arrays.
Example of Quick Sort:
44 33 11 55 77 90 40 60 99 22 88
Let 44 be the pivot element, with scanning done from right to left.
Compare 44 with the elements on its right; if a right-side element is smaller than
44, swap them. As 22 is smaller than 44, swap them.
22 33 11 55 77 90 40 60 99 44 88
Now compare 44 with the elements on its left; if a left-side element is greater than 44,
swap them. As 55 is greater than 44, swap them.
22 33 11 44 77 90 40 60 99 55 88
Repeat steps 1 & 2 until the pivot 44 has one list to its left and one to its right.
Scanning again from the right, 40 is smaller than 44, so swap them.
22 33 11 40 77 90 44 60 99 55 88
Scanning again from the left, 77 is greater than 44, so swap them:
22 33 11 40 44 90 77 60 99 55 88
Now, every element on the right side of 44 is greater than 44 and every element on its left side
is smaller, so 44 is in its final position.
This gives two sub-lists, (22 33 11 40) to the left of the pivot and (90 77 60 99 55 88) to its
right, which are then sorted recursively in the same way.
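A minimal Python sketch of this partition-exchange scheme, using the first element as the pivot as in the example above; the function names are illustrative, not from the notes:

```python
def quick_sort(a, low=0, high=None):
    """Sort list a in place between indices low and high (inclusive)."""
    if high is None:
        high = len(a) - 1
    if low < high:
        p = quick_partition(a, low, high)   # divide: place the pivot in its final spot
        quick_sort(a, low, p - 1)           # conquer: sort elements smaller than the pivot
        quick_sort(a, p + 1, high)          # conquer: sort elements larger than the pivot
        # combine: nothing to do, the array is sorted in place

def quick_partition(a, low, high):
    """Partition a[low..high] around the pivot a[low] and return its final index."""
    pivot = a[low]
    i, j = low, high
    while i < j:
        while i < high and a[i] <= pivot:   # scan from the left for an element > pivot
            i += 1
        while a[j] > pivot:                 # scan from the right for an element <= pivot
            j -= 1
        if i < j:
            a[i], a[j] = a[j], a[i]
    a[low], a[j] = a[j], a[low]             # put the pivot between the two sub-arrays
    return j

# Example usage with the array from the notes:
data = [44, 33, 11, 55, 77, 90, 40, 60, 99, 22, 88]
quick_sort(data)
print(data)   # [11, 22, 33, 40, 44, 55, 60, 77, 88, 90, 99]
```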
BINARY SEARCH
The Binary Search Algorithm is the process of finding some particular element in the list. If the
element is present in the list, then the process is called successful, and the process returns the
location of that element. Otherwise, the search is called unsuccessful.
Linear Search and Binary Search are the two popular searching techniques. Here we will
discuss the Binary Search Algorithm.
Binary search is the search technique that works efficiently on sorted lists. Hence, to search an
element into some list using the binary search technique, we must ensure that the list is sorted.
Binary search follows the divide and conquer approach in which the list is divided into two
halves, and the item is compared with the middle element of the list. If the match is found then,
the location of the middle element is returned. Otherwise, we search into either of the halves
depending upon the result produced through the match.
Working of Binary search
Now, let's see the working of the Binary Search Algorithm.
To understand the working of the Binary search algorithm, let's take a sorted array. It will be
easy to understand the working of Binary search with an example.
There are two methods to implement the binary search algorithm -
Iterative method
Recursive method
The recursive method of binary search follows the divide and conquer approach.
As an example, let the elements of the array be a sorted list such as the one used in the sketch below.
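A minimal Python sketch of both implementations; the sample sorted array and function names are illustrative assumptions, not taken from the notes:

```python
def binary_search_iterative(a, target):
    """Return the index of target in sorted list a, or -1 if it is absent."""
    low, high = 0, len(a) - 1
    while low <= high:
        mid = (low + high) // 2
        if a[mid] == target:
            return mid                 # match found at the middle element
        elif a[mid] < target:
            low = mid + 1              # target can only be in the right half
        else:
            high = mid - 1             # target can only be in the left half
    return -1                          # search space exhausted: unsuccessful search

def binary_search_recursive(a, target, low, high):
    """Divide-and-conquer form: recurse into the half that may hold target."""
    if low > high:
        return -1
    mid = (low + high) // 2
    if a[mid] == target:
        return mid
    if a[mid] < target:
        return binary_search_recursive(a, target, mid + 1, high)
    return binary_search_recursive(a, target, low, mid - 1)

# Example usage (hypothetical sorted array):
a = [10, 12, 24, 29, 39, 40, 51, 56, 69]
print(binary_search_iterative(a, 56))                # 7
print(binary_search_recursive(a, 56, 0, len(a) - 1)) # 7
```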
Best Case Complexity - In binary search, the best case occurs when the element to be searched is
found in the first comparison, i.e., when the first middle element itself is the element being
searched. The best-case time complexity of binary search is O(1).
Average Case Complexity - The average-case time complexity of binary search is O(log n).
Worst Case Complexity - In binary search, the worst case occurs when we have to keep
reducing the search space until it has only one element. The worst-case time complexity of
binary search is O(log n).
Advantages of Binary Search Algorithm
1. When it comes to searching large data sets, it is quite efficient, since at every step it
eliminates half of the remaining array elements.
2. It takes less computation time and thus has better time complexity.
3. As it halves the array at each step, it is considered an improvement over linear search.
Disadvantages of Binary Search Algorithm
1. It can only be implemented over a sorted array.
2. For an unsorted array, sorting it first and then searching will take extra time. Thus, binary
search is not a good option in such cases.
Binary Tree traversal and related properties
What is a Binary Tree?
A binary tree is a tree data structure in which each node has at most two children, typically
referred to as the left and right child. The topmost node in the tree is called the root, and the nodes
with no children are called leaves. Each node in the tree consists of a value (or key) and pointers to
its children. Binary trees have various applications in computing, such as search algorithms, data
compression, and programming language parsing.
Properties of Binary Trees
Before diving into traversal techniques, let's discuss some properties of binary trees:
1. Height: The height of a binary tree is the length of the longest path from the root to a leaf.
An empty tree has a height of -1, and a tree with only the root node has a height of 0 (a small
code sketch after this list computes the height recursively).
2. Balanced Binary Tree: A binary tree is balanced if the height difference between the left
and right subtrees of every node is at most 1. Balanced trees ensure optimal performance in
search operations.
3. Full Binary Tree: A binary tree is full if every node has either zero or two children, i.e., no
node has exactly one child.
4. Complete Binary Tree: A binary tree is complete if every level, except the last, is
completely filled, and all nodes in the last level are as far left as possible.
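To make these properties concrete, here is a small Python sketch (names are illustrative) that defines a node type and computes the height using the recursive definition above:

```python
class Node:
    """A binary tree node with a value and optional left/right children."""
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def height(root):
    """Height = length of the longest root-to-leaf path; an empty tree has height -1."""
    if root is None:
        return -1
    # Divide: compute the heights of the two subtrees, then combine.
    return 1 + max(height(root.left), height(root.right))

# Example usage (hypothetical tree):
#        A
#       / \
#      B   C
#     /
#    D
root = Node('A', Node('B', Node('D')), Node('C'))
print(height(root))   # 2
```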
Binary Tree Traversals and Related Properties
In this section, we see how the divide-and-conquer technique can be applied to binary trees.
A binary tree T is defined as a finite set of nodes that is either empty or consists of a root
and two disjoint binary trees TL and TR called, respectively, the left and right subtree of the
root.
Since the definition itself divides a binary tree into two smaller structures of the same type,
the left subtree and the right subtree, many problems about binary trees can be solved by
applying the divide-and-conquer technique.
The most important divide-and-conquer algorithms for binary trees are the three classic
traversals: preorder, inorder, and postorder. All three traversals visit nodes of a binary
tree recursively, i.e., by visiting the tree’s root and its left and right subtrees. They differ
only by the timing of the root’s visit:
In the preorder traversal, the root is visited before the left and right subtrees are visited (in
that order).
In the inorder traversal, the root is visited after visiting its left subtree but before visiting
the right subtree.
In the postorder traversal, the root is visited after visiting the left and right subtrees (in that
order).
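A short Python sketch of the three traversals; the node class and the sample tree are illustrative (the same shape as in the earlier height sketch):

```python
class Node:
    """Minimal binary tree node (same shape as in the earlier sketch)."""
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def preorder(root):
    """Visit the root, then the left subtree, then the right subtree."""
    if root is None:
        return []
    return [root.value] + preorder(root.left) + preorder(root.right)

def inorder(root):
    """Visit the left subtree, then the root, then the right subtree."""
    if root is None:
        return []
    return inorder(root.left) + [root.value] + inorder(root.right)

def postorder(root):
    """Visit the left subtree, then the right subtree, then the root."""
    if root is None:
        return []
    return postorder(root.left) + postorder(root.right) + [root.value]

# Example usage (hypothetical tree: A with children B and C, B with left child D):
root = Node('A', Node('B', Node('D')), Node('C'))
print(preorder(root))   # ['A', 'B', 'D', 'C']
print(inorder(root))    # ['D', 'B', 'A', 'C']
print(postorder(root))  # ['D', 'B', 'C', 'A']
```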
UNIT-5 DECREASE AND CONQUER
Introduction
Decrease and Conquer Technique
The divide and conquer technique divides a problem into smaller sub-problems of the same
kind, conquers the sub-problems by solving them recursively (solving them directly when they are
small enough), and finally combines the solutions of the sub-problems into the solution of the
original big problem.
The decrease and conquer approach works similarly: it reduces the problem to a smaller
instance of the same problem, conquers that smaller instance by solving it (usually recursively),
and then extends the solution of the smaller instance to obtain a solution to the big problem.
The difference is that decrease and conquer reduces the problem to a single smaller instance
and solves only that one instance, thereby decreasing the size of the original problem at each
step. This approach is known as the decrease and conquer approach, and is also called the
incremental or inductive approach.
Variations in Decrease and Conquer Technique
There are three major variations of decrease-and-conquer:
Decrease by a constant
Decrease by a constant factor
Variable size decrease
Decrease by a Constant: In this variation, the size of an instance is reduced by the same constant
on each iteration or recursive step of the algorithm. Typically, this constant is equal to one,
although other constant-size reductions can occur. This variation is used in many algorithms, such as:
Insertion sort
Graph search algorithms: DFS, BFS
Topological sorting
Algorithms for generating permutations, or subsets
Decrease by a Constant Factor: This technique reduces a problem instance by the same constant
factor on each iteration or recursive step of the algorithm. In most applications, this constant
factor is equal to two; a reduction by a factor other than two is relatively rare. Decrease-by-a-
constant-factor algorithms are very efficient, especially when the factor is greater than 2, as in the
fake-coin problem. Examples where this approach is used (a sketch of Russian peasant multiplication follows the list):
Binary search
Fake-coin problems
Russian peasant multiplication
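A minimal Python sketch of Russian peasant multiplication, which decreases the first factor by a constant factor of two at each step; the function name is illustrative:

```python
def russian_peasant(n, m):
    """Compute n * m using only halving, doubling and addition.
    Each step decreases n by a factor of two (decrease by a constant factor)."""
    result = 0
    while n > 0:
        if n % 2 == 1:        # when n is odd, compensate for the half that is dropped
            result += m
        n //= 2               # decrease the instance size by a factor of two
        m *= 2
    return result

# Example usage:
print(russian_peasant(50, 65))   # 3250
```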
Variable-Size Decrease: In this variation, the size-reduction pattern varies from one iteration or
step of the algorithm to another. For example, in the problem of finding the gcd of two numbers,
the value of the second argument is always smaller on the right-hand side than on the left-hand
side, yet it decreases neither by a constant nor by a constant factor. Below are some examples
that use this approach (a sketch of Euclid's algorithm follows the list):
Computing median and selection problem
Interpolation Search
Euclid’s algorithm
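A minimal Python sketch of Euclid's algorithm; the amount by which the second argument shrinks (m mod n) varies from call to call, which is exactly the variable-size-decrease pattern:

```python
def gcd(m, n):
    """Euclid's algorithm: gcd(m, n) = gcd(n, m % n), with gcd(m, 0) = m.
    The second argument shrinks by a variable amount on each step."""
    while n != 0:
        m, n = n, m % n
    return m

# Example usage:
print(gcd(60, 24))   # 12
```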
Advantages of Decrease and Conquer:
1. Simplicity: Decrease-and-conquer is often simpler to implement compared to other
techniques like dynamic programming or divide-and-conquer.
2. Efficient Algorithms: The technique often leads to efficient algorithms as the size of the
input data is reduced at each step, reducing the time and space complexity of the solution.
3. Problem-Specific: The technique is well-suited for specific problems where it’s easier to
solve a smaller version of the problem.
Disadvantages of Decrease and Conquer:
1. Problem-Specific: The technique is not applicable to all problems and may not be suitable
for more complex problems.
2. Implementation Complexity: The technique can be more complex to implement when
compared to other techniques like divide-and-conquer, and may require more careful
planning.
INSERTION SORT
Insertion sort is one of the simplest sorting algorithms, because it inserts a single
element into its proper place at each step. It is not the best sorting algorithm in terms of performance, but it is
slightly more efficient than selection sort and bubble sort in practical scenarios. It is an intuitive
sorting technique.
Let's consider the example of cards to have a better understanding of the logic behind the insertion
sort.
Suppose we have a set of cards in our hand, such that we want to arrange these cards in ascending
order. To sort these cards, we have a number of intuitive ways.
One way to do this is to initially hold all of the cards in our left hand, and then take cards one
after another from the left hand, building a sorted arrangement in the right hand.
Assuming the first card to be already sorted, we select the next unsorted card and compare it
with the cards already in the sorted hand: we place it to the right of the cards it is greater than
and to the left of the cards it is smaller than. At any stage during this whole process, the cards in
the left hand are unsorted, and the cards in the right hand are sorted.
In the same way, we will sort the rest of the unsorted cards by placing them in the correct position.
At each iteration, the insertion algorithm places an unsorted element at its right place.
How Insertion Sort Works
1. We will start by assuming that the very first element of the array is already sorted. We store the
second element in a variable called key.
Next, we compare the key with the first element: if the key is smaller than the first element, we
place the key at the first index (moving the first element one position to the right). After
doing this, we will notice that the first two elements are sorted.
2. Now, we will move on to the third element and compare it with the elements on its left-hand side. If it
is the smallest of them, then we will place the third element at the first index.
Otherwise, if it is greater than the first element but smaller than the second, we will insert it
between them, at the second position. After doing this, we will have our first three elements in
sorted order.
3. Similarly, we will sort the rest of the elements and place them in their correct position.
Consider the following example of an unsorted array that we will sort with the help of the Insertion
Sort algorithm.
A = (41, 22, 63, 14, 55, 36)
Initially, the first element 41 is treated as sorted; each remaining element is then inserted into its correct position among the elements to its left, as traced pass by pass in the sketch below.
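A short Python sketch following the description above; the function name is illustrative, and the array A = (41, 22, 63, 14, 55, 36) is the one from the notes:

```python
def insertion_sort(a):
    """Sort list a in place. The sub-array a[0..i-1] is always kept sorted,
    and on each pass the element a[i] (the key) is inserted into its place."""
    for i in range(1, len(a)):
        key = a[i]                     # the next unsorted element
        j = i - 1
        # Shift the sorted elements that are greater than key one position right.
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key                 # drop the key into its correct position
        print(a)                       # show the array after each pass

A = [41, 22, 63, 14, 55, 36]
insertion_sort(A)
# Passes: [22, 41, 63, 14, 55, 36]
#         [22, 41, 63, 14, 55, 36]
#         [14, 22, 41, 63, 55, 36]
#         [14, 22, 41, 55, 63, 36]
#         [14, 22, 36, 41, 55, 63]
```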
Time Complexities:
Best Case Complexity: The insertion sort algorithm has a best-case time complexity of
O(n) for an already sorted array, because the outer loop runs n times while the inner loop
performs only a single comparison per element and never shifts anything.
Average Case Complexity: The average-case time complexity of insertion sort is O(n²),
which occurs when the existing elements are in jumbled order, i.e., neither in ascending
order nor in descending order.
Worst Case Complexity: The worst-case time complexity is also O(n²), which occurs
when the array is in the reverse of the required order (for example, sorting a descending array
into ascending order).
In that case, every element is compared with all the elements before it, so up to n-1
comparisons are made for the nth element.
Space Complexity
The insertion sort encompasses a space complexity of O(1) due to the usage of an extra variable
key.
Insertion Sort Applications
The insertion sort algorithm is used in the following cases:
When the array contains only a few elements.
When there exist few elements to sort.
Advantages of Insertion Sort
1. It is simple to implement.
2. It is efficient on small datasets.
3. It is stable (does not change the relative order of elements with equal keys)
4. It is in-place (only requires a constant amount O (1) of extra memory space).
5. It is an online algorithm: it can sort a list as it receives it.
TOPOLOGICAL SORTING
Topological sorting for Directed Acyclic Graph (DAG) is a linear ordering of vertices
such that for every directed edge u-v, vertex u comes before v in the ordering.
Note1: Topological Sorting for a graph is not possible if the graph is not a DAG.
Note 2: As a rule, cyclic graphs don't have valid topological orderings.
What is Topological Sort
Topological sort is a technique used in graph theory to order the vertices of a directed acyclic
graph (DAG). It ensures that for every directed edge from vertex A to vertex B, vertex A comes
before vertex B in the ordering. This is useful in scheduling problems, where tasks depend on the
completion of other tasks. The algorithm begins by selecting a vertex with no incoming edges,
adding it to the ordering, and removing all outgoing edges from the vertex. This process is
repeated until all vertices are visited, and the resulting ordering is a topological sort of the DAG.
Example:
1. Find the number of different topological orderings possible for the given graph.
Step-01: Write the in-degree of each vertex.
Step-02:
Vertex-A has the least in-degree.
So, remove vertex-A and its associated edges.
Now, update the in-degree of other vertices.
Step-03:
Vertex-B has the least in-degree.
So, remove vertex-B and its associated edges.
Now, update the in-degree of other vertices.
Step-04:
There are two vertices with the least in-degree, so the following 2 cases are possible:
Case-01
Remove vertex-C and its associated edges.
Then, update the in-degree of other vertices.
Case-02
Remove vertex-D and its associated edges.
Then, update the in-degree of other vertices.
Step-05:
Now, the above two cases are continued separately in a similar manner.
Case-01
Remove vertex-D since it has the least in-degree.
Then, remove the remaining vertex-E.
Case-02
Remove vertex-C since it has the least in-degree.
Then, remove the remaining vertex-E.
Hence, 2 different topological orderings are possible for the given graph.
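A Python sketch of this repeated "remove a vertex of in-degree zero" procedure (Kahn's algorithm). The graph in the usage example is a hypothetical DAG chosen to mirror the A-E pattern of the steps above; it is not the graph from the original figure:

```python
from collections import deque

def topological_sort(graph):
    """Kahn's algorithm: repeatedly remove a vertex whose in-degree is zero.
    graph maps each vertex to the list of vertices it points to."""
    indegree = {v: 0 for v in graph}
    for v in graph:
        for w in graph[v]:
            indegree[w] += 1

    queue = deque(v for v in graph if indegree[v] == 0)  # vertices with no incoming edges
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in graph[v]:            # "remove" v's outgoing edges
            indegree[w] -= 1
            if indegree[w] == 0:
                queue.append(w)

    if len(order) != len(graph):      # a cycle prevents a complete ordering
        raise ValueError("graph is not a DAG; no topological ordering exists")
    return order

# Example usage (hypothetical DAG with two valid orderings, C and D interchangeable):
g = {'A': ['B'], 'B': ['C', 'D'], 'C': ['E'], 'D': ['E'], 'E': []}
print(topological_sort(g))   # e.g. ['A', 'B', 'C', 'D', 'E']
```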
Advantages of the Topological Sort
Some advantages of topological sort include:
Efficient ordering.
Easy implementation using adjacency lists.
Wide range of applications.
Low time and space complexity.
Scalability for large-scale data processing.
Disadvantages of the Topological Sort
It only works on directed acyclic graphs (DAGs) and cannot handle cyclic graphs.
It does not provide a unique solution when there are multiple valid orderings of the nodes.
It may not be the best algorithm for certain graph-related problems, depending on the
specific requirements and constraints.
It may require additional data structures and memory overhead for larger graphs,
which can affect performance.
Applications of Topological Sort in Data Structure
1. Task Scheduling:
In project management and task scheduling, topological sort helps determine the
optimal order of tasks with dependencies. Each node in the directed acyclic graph (DAG)
represents a task, and directed edges represent dependencies between tasks. By performing
a topological sort, you can schedule tasks in a sequence that ensures every task is carried
out only after all the tasks it depends on have been completed.
2. Software Dependency Resolution:
When managing software projects with multiple modules or libraries, topological sort
assists in resolving dependencies between software components. The algorithm ensures that
dependencies are resolved in the correct order during compilation or deployment,
preventing issues arising from missing dependencies.
3. Building Makefiles:
Makefiles in software development specify the sequence of commands required to
build an application from its source code. Topological sort can be used to determine the
correct order in which source files need to be compiled, taking into account their
interdependencies.
4. Compiler Optimizations:
In compilers, topological sort helps order code generation and instruction
scheduling according to the data dependencies between program segments, so that every
value is computed before any code that uses it.
5. Dependency Analysis:
In software engineering, topological sort is used to analyze dependencies between
modules or components. This helps in understanding the relationships between different
parts of a software system, enabling better maintenance and modification.
6. Deadlock Detection:
Topological sort can be used in resource allocation systems to detect deadlocks,
which occur when processes are waiting for resources that are blocked by other processes.
By creating a resource allocation graph and performing a topological sort, you can identify
cycles that indicate potential deadlocks.
7. Course Scheduling:
In educational institutions, topological sort can help create efficient course
schedules that ensure prerequisites are met. Each course is represented as a node, and
prerequisites are represented by directed edges.
8. Event Management:
In event scheduling systems, topological sort assists in determining the optimal
order of events or tasks to ensure that no event starts before its prerequisites are completed.