Q1: Create an AVL tree from the following sequence of nodes: 10, 2, 20, 30, 25, 40, 8, 6, 55, 60, 19
Ans:
Step 1: insert 10, 2, 20, 30, 40

    [tree diagram for step 1, levels top to bottom: 10; 2, 20; 30, 40]
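Since the remaining steps are not drawn out here, the following is a minimal AVL-insert sketch in C++ that reproduces the whole construction. The node layout and helper names (height, update, balance, rotateLeft, rotateRight) are illustrative choices, not part of the original answer.

    #include <algorithm>
    #include <iostream>

    struct Node {
        int key;
        int height = 1;                       // height of the subtree rooted here
        Node *left = nullptr, *right = nullptr;
        explicit Node(int k) : key(k) {}
    };

    int height(Node* n) { return n ? n->height : 0; }
    void update(Node* n) { n->height = 1 + std::max(height(n->left), height(n->right)); }
    int balance(Node* n) { return n ? height(n->left) - height(n->right) : 0; }

    Node* rotateRight(Node* y) {              // fixes a left-left imbalance
        Node* x = y->left;
        y->left = x->right;
        x->right = y;
        update(y); update(x);
        return x;
    }

    Node* rotateLeft(Node* x) {               // fixes a right-right imbalance
        Node* y = x->right;
        x->right = y->left;
        y->left = x;
        update(x); update(y);
        return y;
    }

    Node* insert(Node* root, int key) {
        if (!root) return new Node(key);
        if (key < root->key) root->left  = insert(root->left, key);
        else                 root->right = insert(root->right, key);
        update(root);

        int b = balance(root);
        if (b > 1  && key < root->left->key)   return rotateRight(root);                         // LL
        if (b < -1 && key > root->right->key)  return rotateLeft(root);                          // RR
        if (b > 1)  { root->left  = rotateLeft(root->left);   return rotateRight(root); }        // LR
        if (b < -1) { root->right = rotateRight(root->right); return rotateLeft(root); }         // RL
        return root;
    }

    int main() {
        int seq[] = {10, 2, 20, 30, 25, 40, 8, 6, 55, 60, 19};
        Node* root = nullptr;
        for (int k : seq) root = insert(root, k);
        std::cout << "root after all insertions: " << root->key << '\n';
    }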

Q2: How are binary search trees different from complete binary trees? Differentiate between the height and depth of a node by taking a suitable example.
Ans:

    [example tree from the source figure: root 5; left subtree with 3 as its root and children 1 and 4; right subtree containing 6 and 9]

A binary tree is made of nodes, where each node contains a "left" pointer, a "right" pointer, and a data element. The "root" pointer points to the topmost node in the tree. The left and right pointers recursively point to smaller "subtrees" on either side. A null pointer represents a binary tree with no elements -- the empty tree. The formal recursive definition is: a binary tree is either empty (represented by a null pointer), or is made of a single node whose left and right pointers each point to a binary tree.

A "binary search tree" (BST) or "ordered binary tree" is a binary tree whose nodes are arranged in order: for each node, all elements in its left subtree are less than or equal to the node (<=), and all elements in its right subtree are greater than the node (>). The tree shown above is a binary search tree -- the root node is 5, its left subtree nodes (1, 3, 4) are <= 5, and its right subtree nodes (6, 9) are > 5. Recursively, each subtree must also obey the binary search tree constraint: in the (1, 3, 4) subtree, 3 is the root, 1 <= 3 and 4 > 3. Watch out for the exact wording in the problems -- a "binary search tree" is different from a "binary tree".

A complete binary tree, by contrast, is defined by shape rather than by key order: every level is completely filled except possibly the last, which is filled from left to right. So a binary search tree constrains where keys may go but not the overall shape (it can even degenerate into a linked list), while a complete binary tree constrains the shape but says nothing about the ordering of keys.

The depth of a node is the number of edges from the root down to that node; the height of a node is the number of edges on the longest path from that node down to a leaf. In the tree above, the root 5 has depth 0 and height 2, the node 3 has depth 1 and height 1, and the leaf 4 has depth 2 and height 0.
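A small C++ sketch of the ordering rule just described; the Node layout and the insert/contains helpers are illustrative choices, not part of the original answer. Inserting 5, 3, 1, 4, 6, 9 reproduces the example tree above.

    #include <iostream>

    // Minimal BST: left subtree holds keys <= node, right subtree holds keys > node.
    struct Node {
        int data;
        Node *left = nullptr, *right = nullptr;
        explicit Node(int d) : data(d) {}
    };

    Node* insert(Node* root, int value) {
        if (!root) return new Node(value);
        if (value <= root->data) root->left  = insert(root->left, value);
        else                     root->right = insert(root->right, value);
        return root;
    }

    bool contains(const Node* root, int value) {
        if (!root) return false;
        if (value == root->data) return true;
        return value < root->data ? contains(root->left, value) : contains(root->right, value);
    }

    int main() {
        int keys[] = {5, 3, 1, 4, 6, 9};          // builds the example tree from the text
        Node* root = nullptr;
        for (int k : keys) root = insert(root, k);
        std::cout << std::boolalpha << contains(root, 4) << ' ' << contains(root, 7) << '\n';  // true false
    }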

Q5: Why are various notations used in defining algorithmic complexity? List and explain the purpose of using Big O, little o, and Omega notations.
Ans:
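The answer is left blank in the source; as a reference only, the standard textbook definitions the question asks about can be stated compactly. Big O gives an asymptotic upper bound, Omega gives an asymptotic lower bound, and little o gives a strict (non-tight) upper bound.

    f(n) = O(g(n))      \iff \exists\, c > 0,\ n_0 : 0 \le f(n) \le c\,g(n)\ \text{for all } n \ge n_0
    f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 : 0 \le c\,g(n) \le f(n)\ \text{for all } n \ge n_0
    f(n) = o(g(n))      \iff \forall\, c > 0\ \exists\, n_0 : 0 \le f(n) < c\,g(n)\ \text{for all } n \ge n_0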

Q6: Perform a comparative study of various sorting algorithms and list which algorithm is best suited to which scenario.
Ans:

Insertion sort
Insertion sort is penalized when comparison or copying is slow: the maximum array size for which insertion sort beats the O(n log n) algorithms shrinks as comparing or copying the array elements gets more expensive.
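For reference, a textbook insertion sort in C++ (this is not the benchmark code the comparison above is based on):

    #include <vector>

    void insertion_sort(std::vector<int>& a) {
        for (std::size_t i = 1; i < a.size(); ++i) {
            int key = a[i];                       // element to place into the sorted prefix a[0..i-1]
            std::size_t j = i;
            while (j > 0 && a[j - 1] > key) {     // one comparison and one copy per shifted element,
                a[j] = a[j - 1];                  // which is why slow comparison/copying hurts it
                --j;
            }
            a[j] = key;
        }
    }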

Shell sort
Shell sort is a very viable alternative to heap sort (at least with arrays of up to 1 million elements). If comparison or copying of the elements is slow, shell sort might even be slightly faster. Shell sort seems especially suited to the case where a sorted array has a small unsorted section at its end, beating even (the Linux gcc) std::sort in most cases. With other element distributions it is slower, sometimes significantly. Shell sort also seems to benefit from repetition in the array.
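A shell sort sketch; the simple halving gap sequence used here is an illustrative choice (the text above only notes that performance depends on the gap sequence):

    #include <vector>

    void shell_sort(std::vector<int>& a) {
        for (std::size_t gap = a.size() / 2; gap > 0; gap /= 2) {
            // gapped insertion sort: each pass leaves the array "gap-sorted"
            for (std::size_t i = gap; i < a.size(); ++i) {
                int key = a[i];
                std::size_t j = i;
                while (j >= gap && a[j - gap] > key) {
                    a[j] = a[j - gap];
                    j -= gap;
                }
                a[j] = key;
            }
        }
    }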

Heap sort
No surprises here: a basic, trustworthy O(n log n) algorithm. It seems to be slowest with very large arrays of completely random elements, and is significantly slower than std::sort.
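A compact heap sort expressed with the standard library's heap primitives, shown only to make the O(n log n) structure concrete; it is not the implementation that was benchmarked:

    #include <algorithm>
    #include <vector>

    void heap_sort(std::vector<int>& a) {
        std::make_heap(a.begin(), a.end());   // O(n): arrange the elements as a max-heap
        std::sort_heap(a.begin(), a.end());   // n pops at O(log n) each -> O(n log n) total
    }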

Merge sort
Surprisingly fast, at least with the optimizations used in this test (i.e. the sorting function does not need to allocate the secondary array each time it is called). Given that it is always O(n log n), it is a very good alternative when the extra memory requirement is not a problem. Array elements with fast comparisons but slow copying seem to penalize merge sort slightly.
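A merge sort sketch with a caller-allocated scratch buffer, mirroring the optimization mentioned above (the secondary array is allocated once rather than on every call); the names are illustrative:

    #include <vector>

    static void merge_sort_rec(std::vector<int>& a, std::vector<int>& tmp,
                               std::size_t lo, std::size_t hi) {        // sorts a[lo, hi)
        if (hi - lo < 2) return;
        std::size_t mid = lo + (hi - lo) / 2;
        merge_sort_rec(a, tmp, lo, mid);
        merge_sort_rec(a, tmp, mid, hi);
        std::size_t i = lo, j = mid, k = lo;
        while (i < mid && j < hi) tmp[k++] = (a[j] < a[i]) ? a[j++] : a[i++];  // ties take the left run, so the sort is stable
        while (i < mid) tmp[k++] = a[i++];
        while (j < hi)  tmp[k++] = a[j++];
        for (std::size_t t = lo; t < hi; ++t) a[t] = tmp[t];
    }

    void merge_sort(std::vector<int>& a) {
        std::vector<int> tmp(a.size());       // secondary array allocated once
        merge_sort_rec(a, tmp, 0, a.size());
    }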

Quick sort
Quicksort clearly showed its unpredictable nature, even when optimized. When it worked well it was very fast, but in pathological cases it was really slow. Why the "random end" test was pathological for the optimized quicksort is a complete mystery; one simply has to conclude that optimizing quicksort is far from trivial. Vanilla quicksort seems to be penalized if comparison is slow (but copying fast); if comparison is fast it suffers no penalty even when copying is slow.
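A plain, unoptimized quicksort sketch with a middle-element pivot, for reference; it does not reproduce the optimizations discussed above:

    #include <algorithm>
    #include <vector>

    static void quick_sort_rec(std::vector<int>& a, long lo, long hi) {  // sorts a[lo..hi]
        if (lo >= hi) return;
        int pivot = a[lo + (hi - lo) / 2];
        long i = lo, j = hi;
        while (i <= j) {                        // Hoare-style partition around the pivot value
            while (a[i] < pivot) ++i;
            while (a[j] > pivot) --j;
            if (i <= j) std::swap(a[i++], a[j--]);
        }
        quick_sort_rec(a, lo, j);
        quick_sort_rec(a, i, hi);
    }

    void quick_sort(std::vector<int>& a) {
        if (!a.empty()) quick_sort_rec(a, 0, static_cast<long>(a.size()) - 1);
    }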

std::sort
As expected, a sure bet. This standard function has been developed for many years, and a great deal of algorithmic expertise has been poured into it, so it performs very well on average and has no pathological cases (or they are rather hard to find). It can be beaten in individual cases, but in the general case, i.e. on average, it is very difficult. (The speed of std::sort also seems to depend on the platform. I have performed similar tests on a Sparc/Solaris computer and was completely unable to beat this function there; on Linux it is possible in some individual cases for some reason.)

So, which one is the fastest?
It is still hard to answer this question with certainty, as only a few cases were tested. However, in the average case it seems that, not surprisingly, std::sort gives the best overall times, even though in individual cases some other algorithm may be a bit faster. std::sort uses a highly optimized quicksort enhanced with insertion sort and heap sort, and possibly some other tricks.

Merge sort got close and even beat it in a few cases. On average it was slower, though.
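For completeness, the standard call these benchmarks compare against, including the optional custom-comparator overload:

    #include <algorithm>
    #include <functional>
    #include <vector>

    int main() {
        std::vector<int> v = {30, 10, 20, 40, 25};
        std::sort(v.begin(), v.end());                        // introsort-style hybrid in common
                                                              // implementations, O(n log n) on average
        std::sort(v.begin(), v.end(), std::greater<int>());   // descending order via a comparator
    }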

Comparison sorts (name; best / average / worst case; memory; stable; method; other notes):

Quicksort -- n log n / n log n / n^2; O(log n) memory; stability depends on the implementation; partitioning.
    Quicksort is usually done in place with O(log n) stack space. Most implementations are unstable, as stable in-place partitioning is more complex; naive variants use an O(n) space array to store the partition.

Merge sort -- n log n / n log n / n log n; memory depends, O(n) in the worst case; stable; merging.
    Highly parallelizable (up to O(log n)) for processing large amounts of data.

In-place merge sort -- worst case n (log n)^2; O(1) memory; stable; merging.
    Implemented in the Standard Template Library (STL);[2] can be implemented as a stable sort based on stable in-place merging.[3]

Heapsort -- n log n / n log n / n log n; O(1) memory; not stable; selection.

Insertion sort -- n / n^2 / n^2; O(1) memory; stable; insertion.
    Runs in O(n + d), where d is the number of inversions.

Selection sort -- n^2 / n^2 / n^2; O(1) memory; not stable; selection.
    Stable with O(n) extra space, for example using lists.[4] Used to sort this table in Safari or other WebKit web browsers.[5]

Shell sort -- complexity depends on the gap sequence (best known worst case is O(n (log n)^2)); O(1) memory; not stable; insertion.

Bubble sort -- n / n^2 / n^2; O(1) memory; stable; exchanging. Tiny code size.
