General method:
T(n) = g(n), if n is small
T(n) = T(n1) + T(n2) + ... + T(nk) + f(n), otherwise
where T(n) is the time for divide and conquer on any input of size n, g(n) is the time to compute the answer directly for small inputs, and f(n) is the time for dividing P and combining the solutions to the subproblems.
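This recurrence corresponds to the standard divide-and-conquer control abstraction, sketched here in pseudocode (Small, S, Divide, and Combine are placeholder routines, not defined in the text):

```text
Algorithm DAndC(P)
{
    if Small(P) then
        return S(P);                    // solve directly: g(n) time
    else
    {
        Divide P into subproblems P1, P2, ..., Pk;   // part of f(n)
        // solving the subproblems costs T(n1) + T(n2) + ... + T(nk)
        return Combine(DAndC(P1), DAndC(P2), ..., DAndC(Pk));  // rest of f(n)
    }
}
```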
Merge Sort
1. Divide: Divide the unsorted list into two sublists of about half the size.
2. Conquer: Sort each of the two sublists recursively until we have list sizes of length 1, in
which case the list items are returned.
3. Combine: Join the two sorted sublists back into one sorted list.
1. We take a variable p and store the starting index of our array in this. And we take another
variable r and store the last index of array in it.
2. Then we find the middle of the array using the formula (p + r)/2 and mark the middle
index as q, and break the array into two subarrays, from p to q and from q + 1 to r index.
3. Then we divide these 2 subarrays again, just like we divided our main array and this
continues.
4. Once we have divided the main array into subarrays with single elements, then we start
merging the subarrays.
Finally, we merge these two subarrays using the merge procedure, which takes Θ(n) time, as explained above.
If T(n) is the time required by merge sort for sorting an array of size n, then the recurrence relation for the time complexity of merge sort is
T(n) = 2T(n/2) + Θ(n) for n > 1, and T(1) = Θ(1).
On solving this recurrence relation, we get T(n) = Θ(n log n).
Thus, the time complexity of the merge sort algorithm is T(n) = Θ(n log n).
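Assuming the merge sort recurrence T(n) = 2T(n/2) + cn for some constant c, repeated substitution shows where Θ(n log n) comes from:

```latex
\begin{aligned}
T(n) &= 2T(n/2) + cn \\
     &= 4T(n/4) + 2cn \\
     &\;\;\vdots \\
     &= 2^{k} T(n/2^{k}) + kcn \qquad (k = \log_2 n) \\
     &= nT(1) + cn\log_2 n = \Theta(n \log n)
\end{aligned}
```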
Time Complexity:
The list of size N is divided into halves log N times, and merging all the sublists at each level into a single list takes O(N) time, so the worst-case running time of this algorithm is O(N log N).
NOTE
Merge sort is among the best comparison-based sorting algorithms in terms of time complexity, Θ(n log n),
if we are not concerned with the auxiliary space used.
Quick sort
Divide: Rearrange the elements and split the array into two sub-arrays, with a pivot element in between, such that each element in the left sub-array is less than or equal to the pivot element and each element in the right sub-array is greater than the pivot element.
It is also called partition-exchange sort. This algorithm divides the list into three main parts: elements less than the pivot, the pivot itself, and elements greater than the pivot.
Pivot element can be any element from the array, it can be the first element, the last element or
any random element. In this tutorial, we will take the rightmost element or the last element
as pivot.
1. After selecting an element as pivot, which is the last index of the array in our case, we
divide the array for the first time.
2. In quick sort, we call this partitioning. It is not a simple breaking down of the array into 2
subarrays; in partitioning, the array elements are positioned so that all the
elements smaller than the pivot are on the left side of the pivot and all the elements
greater than the pivot are on the right side of it.
3. And the pivot element will be at its final sorted position.
4. The elements to the left and right of the pivot may not yet be sorted.
5. Then we pick subarrays, elements on the left of pivot and elements on the right of pivot,
and we perform partitioning on them by choosing a pivot in the subarrays.
Initial Step - First Partition
For example: In the array {52, 37, 63, 14, 17, 8, 6, 25}, we take 25 as the pivot. After the first pass, all elements smaller than 25 end up on its left and all larger elements on its right.
Quick Sort Analysis-
To find the location of an element that splits the array into two parts, O(n) operations are
required.
This is because every element in the array is compared to the partitioning element.
After the division, each section is examined separately.
If the array is split approximately in half (which is not always the case), then there will be log2n splits.
Therefore, total comparisons required are f(n) = n x log2n = O(nlog2n).
Worst Case-
Quick Sort is sensitive to the order of input data.
It gives the worst performance when the elements are already in ascending (or descending) order.
It then divides the array into sections of 1 and (n-1) elements in each call.
Then, there are (n-1) divisions in all.
Therefore, here total comparisons required are f(n) = n x (n-1) = O(n²).
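Counting the comparisons level by level gives a slightly tighter sum with the same order as the text's n × (n-1) bound:

```latex
\begin{aligned}
f(n) &= (n-1) + (n-2) + \cdots + 2 + 1 \\
     &= \frac{n(n-1)}{2} = O(n^2)
\end{aligned}
```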
What is Searching?
Searching Techniques
Searching Algorithms are designed to check for an element or retrieve an element from any data
structure where it is stored.
Linear Search
Step 1: Set i to 1
Step 2: if i > n then go to step 7
Step 3: if A[i] = x then go to step 6
Step 4: Set i to i + 1
Step 5: Go to Step 2
Step 6: Print Element x Found at index i and go to step 8
Step 7: Print element not found
Step 8: Exit
Complexity of algorithm

Complexity    Best Case    Average Case    Worst Case
Time          O(1)         O(n)            O(n)
Space         O(1)         O(1)            O(1)
Binary Search
Binary search is a search technique that works efficiently on sorted lists. Hence, in order
to search for an element in a list using the binary search technique, we must ensure that the list
is sorted.
Binary search follows divide and conquer approach in which, the list is divided into two halves
and the item is compared with the middle element of the list. If the match is found then, the
location of middle element is returned otherwise, we search into either of the halves depending
upon the result produced through the match.
Example
Let us consider an array arr = {1, 5, 7, 8, 13, 19, 20, 23, 29}. Find the location of the item 23 in
the array.
Find 88 in the data given below.

index: 0  1  2  3  4  5  6  7
value: 8  13 17 26 44 56 88 97

1. first = 0, last = 7
   mid = (first + last) / 2 = (0 + 7) / 2 = 3.5, which is truncated to 3
   arr[3] = 26; 88 > 26, so search the right half (indices 4 to 7).
2. first = 4, last = 7
   mid = (first + last) / 2 = (4 + 7) / 2 = 5.5, truncated to 5
   arr[5] = 56; is 88 < 56? No, 88 > 56, so search the right half (indices 6 to 7).
3. first = 6, last = 7
   mid = (first + last) / 2 = (6 + 7) / 2 = 6.5, truncated to 6
   arr[6] = 88; the search element is 88, and 88 = 88, so the element is found at index 6 of the array.
A binary tree is a non-linear data structure, unlike linear data structures such as arrays, linked lists, stacks,
and queues.
A binary tree is a tree data structure in which each node has up to two child nodes that create the branches of the
tree.
The two children are usually referred to as the left and right nodes.
Parent nodes are nodes with children, while child nodes can contain references to their parents.
The topmost node of the tree is called the root node, the node to the left of the root is the left node which can serve
as the root for the left sub-tree and the node to the right of the root is the right node which can serve as the root for
the right sub-tree.
1. Complete binary tree: All the levels are completely filled except possibly the last level, and the nodes
in the last level are as far left as possible.
2. Full binary tree: Every node of the binary tree has either 0 or 2 children.
3. Perfect binary tree: A full binary tree in which every internal node has exactly two children and all the
leaf nodes are at the same level.
Applications of binary trees:
The following are the applications of binary trees:
Binary Search Tree - Used in many search applications where data is constantly being
inserted and removed. For example, map and set objects in many libraries.
Binary Space Partition - Used in almost any 3D video game to determine which
objects need to be rendered.
Binary Tries - Used in almost every high-bandwidth router to store router tables.
Syntax Tree - Constructed by compilers (and, implicitly, by calculators) to parse expressions.
Hash Trees - Used in P2P programs and special image signatures that require a hash
to be validated, but the entire file is not available.
Heaps - Used to implement efficient priority queues and also used in heap sort.
Treap - Randomized data structure for wireless networks and memory allocation.
T-Tree - Although most databases use a form of B-tree to store data on the drive,
in-memory databases that keep all (or most) of their data in memory often use T-trees.
Huffman Coding Tree - Used in compression algorithms, for example in the .jpeg and .mp3
file formats.
GGM Trees - Used in cryptographic applications to generate a tree of pseudo-random numbers.
Binary tree traversal can be done in the following ways:
Level order traversal
Inorder traversal
Preorder traversal
Postorder traversal
Inorder traversal (Left, Root, Right)
Left → Root → Right
inorder(root->left)
display(root->data)
inorder(root->right)
Preorder traversal (Root, Left, Right)
Root → Left → Right
display(root->data)
preorder(root->left)
preorder(root->right)
Postorder traversal (Left, Right, Root)
Left → Right → Root
postorder(root->left)
postorder(root->right)
display(root->data)
Traverse the following binary tree using in-order traversal.
1. Print the left-most node of the left sub-tree, i.e. 23.
2. Print the root of the left sub-tree, i.e. 211.
3. Print the right child, i.e. 89.
4. Print the root node of the tree, i.e. 18.
5. Then move to the right sub-tree of the binary tree and print its left-most node, i.e. 10.
6. Print the root of the right sub-tree, i.e. 20.
7. Print the right child, i.e. 32.
Hence, the printing sequence will be 23, 211, 89, 18, 10, 20, 32.