
UNIT-2

DIVIDE AND CONQUER:

General method:

 Given a function to compute on n inputs, the divide-and-conquer strategy suggests splitting the inputs into k distinct subsets, 1 < k <= n, yielding k subproblems.
 These subproblems must be solved, and then a method must be found to combine the sub-solutions into a solution of the whole.
 If the subproblems are still relatively large, then the divide-and-conquer strategy can possibly be reapplied.
 Often the subproblems resulting from a divide-and-conquer design are of the same type as the original problem.
 For those cases the reapplication of the divide-and-conquer principle is naturally expressed by a recursive algorithm.
 DAndC (Algorithm) is initially invoked as DAndC(P), where P is the problem to be solved.
 Small(P) is a Boolean-valued function that determines whether the input size is small enough that the answer can be computed without splitting.
 If this is so, the function S is invoked.
 Otherwise, the problem P is divided into smaller subproblems.
 These subproblems P1, P2, ..., Pk are solved by recursive application of DAndC.
 Combine is a function that determines the solution to P using the solutions to the k subproblems.
 If the size of P is n and the sizes of the k subproblems are n1, n2, ..., nk respectively, then the computing time of DAndC is described by the recurrence relation:

T(n) = g(n)                                     if n is small
T(n) = T(n1) + T(n2) + ... + T(nk) + f(n)       otherwise

where T(n) is the time for DAndC on any input of size n,
g(n) is the time to compute the answer directly for small inputs, and
f(n) is the time for dividing P and combining the solutions to the subproblems.
Merge Sort

It closely follows the divide-and-conquer paradigm.

Conceptually, it works as follows:

1. Divide: Divide the unsorted list into two sublists of about half the size.
2. Conquer: Sort each of the two sublists recursively until we have list sizes of length 1, in
which case the list items are returned.
3. Combine: Join the two sorted Sub lists back into one sorted list.

The main purpose is to sort the unsorted list in nondecreasing order.

Merge Sort Algorithm-


 
Merge Sort Algorithm works in the following steps-
 It divides the given unsorted array into two halves- left and right sub arrays.
 The sub arrays are divided recursively.
 This division continues until the size of each sub array becomes 1.
 After each sub array contains only a single element, each sub array is sorted trivially.
 Then, the above discussed merge procedure is called.
 The merge procedure combines these trivially sorted arrays to produce a final sorted array.

Let's consider an array with values {14, 7, 3, 12, 9, 11, 6, 12}


Pictorially, merge sort repeatedly halves the given array into subarrays and then merges the sorted subarrays back together.
In merge sort we follow the following steps:

1. We take a variable p and store the starting index of our array in this. And we take another
variable r and store the last index of array in it.
2. Then we find the middle of the array using the formula (p + r)/2 and mark the middle
index as q, and break the array into two subarrays, from p to q and from q + 1 to r index.
3. Then we divide these 2 subarrays again, just like we divided our main array and this
continues.
4. Once we have divided the main array into subarrays with single elements, then we start
merging the subarrays.

Time Complexity Analysis-


 
In merge sort, we divide the array into two (nearly) equal halves and solve them recursively using merge sort only.
So, we have two subproblems of size n/2 each, costing T(n/2) time apiece.

Finally, we merge these two subarrays using the merge procedure, which takes Θ(n) time as explained above.

If T(n) is the time required by merge sort for sorting an array of size n, then the recurrence relation for the time complexity of merge sort is-

T(n) = Θ(1)              if n = 1
T(n) = 2T(n/2) + Θ(n)    if n > 1

On solving this recurrence relation, we get T(n) = Θ(nlogn).
Thus, the time complexity of the merge sort algorithm is T(n) = Θ(nlogn).

Time Complexity:
The list of size N is divided across at most logN levels of splitting, and merging all the sublists at each level takes O(N) time, so the worst case run time of this algorithm is O(NlogN).

Space Complexity Analysis-


 
 Merge sort uses additional memory for left and right sub arrays.
 Hence, total Θ(n) extra memory is needed.
 
Properties-
 
Some of the important properties of merge sort algorithm are-
 Merge sort uses a divide and conquer paradigm for sorting.
 Merge sort is a recursive sorting algorithm.
 Merge sort is a stable sorting algorithm.
 Merge sort is not an in-place sorting algorithm.
 The time complexity of merge sort algorithm is Θ(nlogn).
 The space complexity of merge sort algorithm is Θ(n).

NOTE
Merge sort achieves the optimal Θ(nlogn) worst-case time for comparison-based sorting, so it is among the best choices if we are not concerned with the auxiliary space used.
Quick sort

It is an algorithm of divide-and-conquer type.

Divide: Rearrange the elements and split the array into two subarrays with an element (the pivot) in between, such that each element in the left subarray is less than or equal to the pivot and each element in the right subarray is greater than the pivot.

Conquer: Recursively sort the two subarrays.

It is also called partition-exchange sort. This algorithm divides the list into three main parts:

1. Elements less than the Pivot element


2. Pivot element(Central element)
3. Elements greater than the pivot element

Pivot element can be any element from the array, it can be the first element, the last element or
any random element. In this tutorial, we will take the rightmost element or the last element
as pivot.

How Quick Sorting Works?


Following are the steps involved in quick sort algorithm:

1. After selecting an element as pivot, which is the last index of the array in our case, we
divide the array for the first time.
2. In quick sort, we call this partitioning. It is not a simple breaking down of the array into 2
subarrays; in partitioning, the array elements are positioned so that all the elements
smaller than the pivot end up on the left side of the pivot and all the elements
greater than the pivot end up on the right side of it.
3. And the pivot element will be at its final sorted position.
4. The elements to the left and right, may not be sorted.
5. Then we pick subarrays, elements on the left of pivot and elements on the right of pivot,
and we perform partitioning on them by choosing a pivot in the subarrays.
Initial Step - First Partition

For example: In the array {52, 37, 63, 14, 17, 8, 6, 25}, we take 25 as pivot. After the first
partition pass, 25 is at its final sorted position, with every smaller element ({14, 17, 8, 6})
somewhere to its left and every larger element ({52, 37, 63}) to its right. The left partition is
then sorted in the same way.
Quick Sort Analysis-
 
 To find the location of an element that splits the array into two parts, O(n) operations are
required.
 This is because every element in the array is compared to the partitioning element.
 After the division, each section is examined separately.
 If the array is split approximately in half (which is not always the case), then there will be log2n splits.
 Therefore, total comparisons required are f(n) = n x log2n = O(nlog2n).
 

Order of Quick Sort = O(nlog2n)

Worst Case-
 
 Quick Sort is sensitive to the order of input data.
 It gives the worst performance when elements are already in the ascending order.
 It then divides the array into sections of 1 and (n-1) elements in each call.
 Then, there are (n-1) divisions in all.
 Therefore, here total comparisons required are f(n) = n x (n-1) = O(n²).
 

Order of Quick Sort in worst case = O(n²)

Advantages of Quick Sort-


 
The advantages of quick sort algorithm are-
 Quick Sort is an in-place sort, so apart from the recursion stack it requires no temporary memory.
 Quick Sort is typically faster than other algorithms.
(because its inner loop can be efficiently implemented on most architectures)
 Quick Sort tends to make excellent usage of the memory hierarchy like virtual memory or
caches.
 Quick Sort can be easily parallelized due to its divide and conquer nature.
 
Disadvantages of Quick Sort-
 
The disadvantages of quick sort algorithm are-
 The worst case complexity of quick sort is O(n²).
 This complexity is worse than the O(nlogn) worst case complexity of algorithms like merge sort,
heap sort etc.
 It is not a stable sort i.e. the order of equal elements may not be preserved.

What is Searching?

 Searching is the process of finding a given value position in a list of values.


 It decides whether a search key is present in the data or not.
 It is the algorithmic process of finding a particular item in a collection of items.
 It can be done on internal data structure or on external data structure.

Searching Techniques

To search an element in a given array, it can be done in following ways:

1. Linear Search or Sequential Search (works on data in any order)


2. Binary Search (elements must be in sorted order, e.g. (1, 2, 3, 5, 10, 39))
3. Lexicographic Search (as in a dictionary, on sorted data)
4. Fibonacci Search (also requires sorted data)

Searching Algorithms are designed to check for an element or retrieve an element from any data
structure where it is stored.

 Linear Search or Sequential Search


 Sequential search is also called as Linear Search.
 Sequential search starts at the beginning of the list and checks every element of the list.
 It is a basic and simple search algorithm.
 Sequential search compares the element with all the other elements given in the list. If the
element is matched, it returns the value index, else it returns -1.
Linear search is implemented using following steps...

 Step 1 - Read the search element from the user.


 Step 2 - Compare the search element with the first element in the list.
 Step 3 - If both are matched, then display "Given element is found!!!" and terminate the
function
 Step 4 - If both are not matched, then compare search element with the next element in
the list.
 Step 5 - Repeat steps 3 and 4 until search element is compared with last element in the
list.
 Step 6 - If last element in the list also doesn't match, then display "Element is not
found!!!" and terminate the function.

Algorithm :: Linear Search ( Array A, Value x)

Step 1: Set i to 1
Step 2: if i > n then go to step 7
Step 3: if A[i] = x then go to step 6
Step 4: Set i to i + 1
Step 5: Go to Step 2
Step 6: Print Element x Found at index i and go to step 8
Step 7: Print element not found
Step 8: Exit
Complexity of algorithm

Complexity    Best Case    Average Case    Worst Case
Time          O(1)         O(n)            O(n)
Space         O(1)         O(1)            O(1)

Binary Search

Binary search is the search technique which works efficiently on the sorted lists. Hence, in order
to search an element into some list by using binary search technique, we must ensure that the list
is sorted.

Binary search follows divide and conquer approach in which, the list is divided into two halves
and the item is compared with the middle element of the list. If the match is found then, the
location of middle element is returned otherwise, we search into either of the halves depending
upon the result produced through the match.

Binary search algorithm is given below.

Binary search is implemented using following steps...

 Step 1 - Read the search element from the user.


 Step 2 - Find the middle element in the sorted list.
 Step 3 - Compare the search element with the middle element in the sorted list.
 Step 4 - If both are matched, then display "Given element is found!!!" and terminate the
function.
 Step 5 - If both are not matched, then check whether the search element is smaller or
larger than the middle element.
 Step 6 - If the search element is smaller than middle element, repeat steps 2, 3, 4 and 5
for the left sublist of the middle element.
 Step 7 - If the search element is larger than middle element, repeat steps 2, 3, 4 and 5 for
the right sublist of the middle element.
 Step 8 - Repeat the same process until we find the search element in the list or until
sublist contains only one element.
 Step 9 - If that element also doesn't match with the search element, then display
"Element is not found in the list!!!" and terminate the function.
Complexity
SN Performance Complexity

1 Worst case O(log n)

2 Best case O(1)

3 Average Case O(log n)

4 Worst case space complexity O(1)

Example

Consider the following list of elements and the element to be searched. Let us take the array
with the values below and find the location of the item 88. (As an exercise, the same steps locate
23 in arr = {1, 5, 7, 8, 13, 19, 20, 23, 29}.)

Array index:  0    1    2    3    4    5    6    7
Value:        8    13   17   26   44   56   88   97

Step 1: first = 0, last = 7
        mid = (first + last) / 2 = (0 + 7) / 2 = 3 (integer division)
        arr[3] = 26
        Is 88 < 26? No. 88 > 26, so search after this element (the right half).

Step 2: first = 4, last = 7
        mid = (first + last) / 2 = (4 + 7) / 2 = 5
        arr[5] = 56
        Is 88 < 56? No. 88 > 56, so search the right half again.

Step 3: first = 6, last = 7
        mid = (first + last) / 2 = (6 + 7) / 2 = 6
        arr[6] = 88
        88 = 88, so the search element is found at index 6 of the array.

Binary Tree Traversal


Traversing a tree means visiting every node in the tree. Any process for visiting all of the nodes
in some order is called a traversal. Any traversal that lists every node in the tree exactly once is
called an enumeration of the tree's nodes.

A binary tree data structure is a non-linear data structure unlike the linear data structures like arrays, linked lists, stacks,
and queues.
 A binary tree is a tree data structure in which each node has up to two child nodes that create the branches of the
tree.
 The two children are usually referred to as the left and right nodes.
 Parent nodes are nodes with children, while child nodes can contain references to their parents.
 The topmost node of the tree is called the root node, the node to the left of the root is the left node which can serve
as the root for the left sub-tree and the node to the right of the root is the right node which can serve as the root for
the right sub-tree.

Why binary trees?

 Binary trees can be used to store data in a hierarchical order.


 Insertion and deletion operations are quicker in trees when compared to arrays and linked lists.
 Any number of nodes can be stored using a binary tree and the size is dynamic.
 Accessing elements in trees is faster when compared to linked lists and slower when compared to arrays.

Types of binary trees


Binary trees can be classified as follows

1. Complete binary tree: All the levels are completely filled except possibly the last level, and the
nodes in the last level are as far left as possible.
2. Full binary tree: Every node of the binary tree possesses either 0 or 2 children.
3. Perfect binary tree: It is a full binary tree in which all the leaf nodes are at the same level, so
every node other than the leaf nodes has exactly two children.
Applications of binary trees:
The following are the applications of binary trees:
Binary Search Tree - Used in many search applications where data is constantly entering
and leaving, such as the map and set objects in many libraries.
Binary Space Partition - Used in almost any 3D video game to determine which
objects need to be rendered.
Binary Tries - Used in almost every high-bandwidth router to store router tables.
Syntax Tree - Constructed by compilers and (implicit) calculators to parse expressions.
Hash Trees - Used in P2P programs and specialized image signatures in which a hash
needs to be verified, but the whole file is not available.
Heaps - Used to implement efficient priority queues and also used in heap sort.
Treap - Randomized data structure used in wireless networking and memory allocation.
T-Tree - Though most databases use some form of B-tree to store data on the drive,
databases which keep all (or most of) their data in memory often use T-trees.
Huffman Coding Tree - Used in compression algorithms, for example in the .jpeg and
.mp3 file formats.
GGM Trees - Used in cryptographic applications to generate a tree of pseudo-random numbers.
There are three standard depth-first methods to traverse a binary tree, plus level order
traversal. Binary tree traversal can therefore be done in the following ways:

 Inorder traversal
 Preorder traversal
 Postorder traversal
 Levelorder traversal

Inorder traversal(Left, Root, Right)


Left → Root → Right

1. First, visit all the nodes in the left subtree

2. Then the root node

3. Visit all the nodes in the right subtree

inorder(root->left)
display(root->data)
inorder(root->right)
Preorder traversal (Root, Left, Right)
Root → Left → Right

1. Visit root node

2. Visit all the nodes in the left subtree

3. Visit all the nodes in the right subtree

display(root->data)
preorder(root->left)
preorder(root->right)
Postorder traversal (Left, Right, Root)
Left → Right → Root

1. Visit all the nodes in the left subtree

2. Visit all the nodes in the right subtree

3. Visit the root node

postorder(root->left)
postorder(root->right)
display(root->data)
Traverse the following binary tree by using in-order traversal: 18 is the root, its left subtree is rooted at 211 (with children 23 and 89), and its right subtree is rooted at 20 (with children 10 and 32).

 print the left most node of the left sub-tree i.e. 23.
 print the root of the left sub-tree i.e. 211.
 print the right child i.e. 89.
 print the root node of the tree i.e. 18.
 Then, move to the right sub-tree of the binary tree and print the left most node i.e. 10.
 print the root of the right sub-tree i.e. 20.
 print the right child i.e. 32.
 hence, the printing sequence will be 23, 211, 89, 18, 10, 20, 32.

Level Order Traversal visits the nodes level by level: 18, 211, 20, 23, 89, 10, 32.

Binary tree (root node, left node, right node)

Inorder: left, root, right

Preorder: root, left, right

Postorder: left, right, root.
