A1 a) BIG-O NOTATION (O)
Let f and g be functions from the set of integers or the set of real numbers to the set of real numbers. We say that f(x) is O(g(x)) if there are constants C and k such that f(x) <= C g(x) whenever x > k. For example, when we say the running time T(n) of some program is O(n²), read "big oh of n squared" or just "oh of n squared", we mean that there are positive constants c and n0 such that for n greater than or equal to n0 we have T(n) <= c n².

BIG-OMEGA NOTATION (Ω)
Let f and g be functions from the set of integers or the set of real numbers to the set of real numbers. We say that f(x) is Ω(g(x)) if there are constants C and k such that f(x) >= C g(x) whenever x > k.

BIG-THETA NOTATION (Θ)
For functions f and g as in the two definitions above, we say that f(x) is Θ(g(x)) if there are constants C1, C2 and k such that 0 <= C1 g(x) <= f(x) <= C2 g(x) whenever x > k. Since Θ(g(x)) bounds a function from both the upper and the lower side, it is also called a tight bound for the function f(x). A worked example illustrating these definitions is given below.
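As an illustration (the particular function here is chosen for this note and is not from the original answer), take f(x) = 3x² + 5x + 2. For x > 1 we have 3x² + 5x + 2 <= 3x² + 5x² + 2x² = 10x², so f(x) is O(x²) with C = 10 and k = 1. In the other direction, f(x) >= 3x² for all x > 0, so f(x) is Ω(x²) with C = 3. Taking C1 = 3, C2 = 10 and k = 1 gives 0 <= 3x² <= f(x) <= 10x² whenever x > 1, i.e. f(x) is Θ(x²).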

b) A linear search is the simplest possible search algorithm: you simply iterate through a list of items until you find the item you are looking for. Clearly, the time it takes to find an item will be proportional to the number of items in the list (say n). If an item occurs just once in the list then, on average, (n + 1)/2 items will need to be examined before a match is found. For example, if a list of ints contains the five numbers 1, 2, 3, 4 and 5 and you are looking for a random number in this range then, on average, you will need to examine (5 + 1)/2 = 3 numbers until you find the one you are looking for. In 'big O' notation, a linear search algorithm is O(n).

c) In computer science, a binary search tree (BST), sometimes also called an ordered or sorted binary tree, is a node-based binary tree data structure which has the following properties:[1]

The left subtree of a node contains only nodes with keys less than the node's key.

The right subtree of a node contains only nodes with keys greater than the node's key.
The left and right subtrees must each also be a binary search tree.
There must be no duplicate nodes.

Generally, the information represented by each node is a record rather than a single data element, so for sequencing purposes nodes are compared according to their keys rather than any part of their associated records. The major advantage of binary search trees over other data structures is that the related sorting algorithms and search algorithms such as in-order traversal can be very efficient.

d) To analyze a graph it is important to look at the degree of a vertex. One way to find the degree is to count the number of edges which have that vertex as an endpoint. An easy way to do this is to draw a circle around the vertex and count the number of edges that cross the circle. To find the degree of a graph, figure out all of the vertex degrees; the degree of the graph will be its largest vertex degree.

e) Postorder traversal: To traverse a binary tree in postorder, the following operations are carried out: (i) traverse all the left external nodes starting with the leftmost subtree, which is then followed by bubble-up of all the internal nodes; (ii) traverse the right subtree starting at the left external node, which is then followed by bubble-up of all the internal nodes; and (iii) visit the root.

f) An AVL tree (Adelson-Velskii and Landis' tree, named after its inventors) is a self-balancing binary search tree. It was the first such data structure to be invented.[1] In an AVL tree, the heights of the two child subtrees of any node differ by at most one; if at any time they differ by more than one, rebalancing is done to restore this property. Lookup, insertion, and deletion all take O(log n) time in both the average and worst cases, where n is the number of nodes in the tree prior to the operation. Insertions and deletions may require the tree to be rebalanced by one or more tree rotations. The AVL tree is named after its two Soviet inventors, G. M. Adelson-Velskii and E. M. Landis, who published it in their 1962 paper "An algorithm for the organization of information".[2]

g) A B-tree is a tree data structure that keeps data sorted and allows searches, sequential access, insertions, and deletions in logarithmic time. The B-tree is a generalization of a binary search tree in that a node can have more than two children. Unlike self-balancing binary search trees, the B-tree is optimized for systems that read and write large blocks of data. It is commonly used in databases and file systems.
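To make the ordering property of part c) and the logarithmic lookup claimed for balanced trees in part f) concrete, here is a minimal search routine for a binary search tree; the node type and its field names (info, left, right) are assumptions made for this sketch and are not part of the original answer.

#include <stddef.h>

struct bstnode {
    int info;
    struct bstnode *left;
    struct bstnode *right;
};

/* Returns a pointer to the node holding key, or NULL if key is absent.
   Each comparison discards one whole subtree, so on a balanced tree the
   number of comparisons is proportional to the height, i.e. O(log n). */
struct bstnode *bst_search(struct bstnode *root, int key)
{
    while (root != NULL) {
        if (key == root->info)
            return root;            /* found the key */
        else if (key < root->info)
            root = root->left;      /* smaller keys lie in the left subtree */
        else
            root = root->right;     /* larger keys lie in the right subtree */
    }
    return NULL;                    /* key not present */
}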

h) A binary tree is a tree in which each node has at most two child nodes (denoted as the left child and the right child). Nodes with children are referred to as parent nodes, and child nodes may contain references to their parents. Following this convention, you can define ancestral relationships in a tree: that is, one node can be an ancestor of another node, a descendant, or a great-grandchild of another node. The root node is the ancestor of all nodes of the tree, and any node in the tree can be reached from the root node. A tree with n nodes has exactly n−1 branches (edges). A tree which does not have any node other than the root node is called a null tree. In a binary tree, the degree of every node can be at most two.

i) In a standard queue data structure a re-buffering problem occurs for each dequeue operation. This problem is solved by joining the front and rear ends of the queue so that it becomes a circular queue. A circular queue is a linear data structure in which the last node is connected back to the first node to make a circle; it is also called a "ring buffer". A circular queue follows the First In First Out (FIFO) principle: elements are added at the rear end and deleted at the front end of the queue. Both the front and the rear pointers initially point to the beginning of the array, and items can be inserted into and deleted from the queue in O(1) time.
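A minimal array-based sketch of the circular queue described in part i) follows; the fixed size MAX, the count variable and the function names are assumptions made for this illustration, not part of the original answer.

#define MAX 5

int cq[MAX];
int front = 0, rear = 0, count = 0;   /* both pointers start at the beginning of the array */

/* Insert at the rear end; the index wraps around, so no re-buffering is needed. */
int enqueue(int item)
{
    if (count == MAX) return -1;      /* queue full */
    cq[rear] = item;
    rear = (rear + 1) % MAX;          /* joins the rear back to the front of the array */
    count++;
    return 0;
}

/* Delete from the front end in O(1) time (FIFO order). */
int dequeue(int *item)
{
    if (count == 0) return -1;        /* queue empty */
    *item = cq[front];
    front = (front + 1) % MAX;
    count--;
    return 0;
}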

A2 For any two-dimensional m x n array A, the computer keeps track of Base(A), the address of the first element A[0,0] of A, and computes the address ADD(A[i, j]) of A[i, j] using the formula

Row-major order:    ADD(A[i, j]) = Base(A) + w[n(i − l1) + (j − l2)]
Column-major order: ADD(A[i, j]) = Base(A) + w[m(j − l2) + (i − l1)]

where w denotes the number of words per memory location for the array A and l1, l2 are the lower bounds of the two indices. Note that the formulas are linear in i and j. The C language makes use of row-major ordering; with lower bounds of zero the formula reduces to ADD(A[i, j]) = Base(A) + w(n*i + j).
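A small sketch of the row-major address computation from A2; the element size w, the base address and the chosen indices are illustrative values for this note, not part of the original answer.

#include <stdio.h>

int main(void)
{
    int m = 3, n = 4;        /* a 3 x 4 array */
    int w = 4;               /* assumed words per element, e.g. sizeof(int) */
    long base = 1000;        /* assumed base address Base(A) */
    int i = 2, j = 1;        /* element whose address we want (lower bounds zero) */

    /* Row-major order: ADD(A[i,j]) = Base(A) + w(n*i + j) */
    long addr = base + w * (n * i + j);
    printf("ADD(A[%d,%d]) = %ld\n", i, j, addr);   /* prints 1000 + 4*(4*2+1) = 1036 */
    (void)m;
    return 0;
}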

A3 The restrictions on a queue imply that the first element which is inserted into the queue will be the first one to be removed. Thus A is the first letter to be removed, and queues are known as First In First Out (FIFO) lists.

Addition into a queue:

procedure addq (item : items);
{add item to the queue q}
begin
  if rear = n then queuefull
  else begin
    rear := rear + 1;
    q[rear] := item;
  end;
end; {of addq}

Deletion in a queue:

procedure deleteq (var item : items);
{delete from the front of q and put into item}
begin
  if front = rear then queueempty
  else begin
    front := front + 1;
    item := q[front];
  end;
end; {of deleteq}

A4

A5

void insert_at_position(node *head)
{
    int flag = 1, ele, num;
    node *temp;

    printf("\nEnter the element after which u want to insert= ");
    scanf("%d", &num);
    while (head != NULL)
    {
        if (head->info == num)
        {
            flag = 2;
            break;
        }
        head = head->next;
    }
    if (flag == 1)
    {
        printf("\nNo. is not in list\n");
        return;
    }
    else
    {
        printf("\nEnter the element to insert= ");
        scanf("%d", &ele);
        temp = (node *)malloc(sizeof(node));
        temp->info = ele;
        temp->next = head->next;
        temp->prev = head;
        if (temp->next != NULL)        /* guard: head may be the last node */
            temp->next->prev = temp;
        head->next = temp;
    }
}
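The routine above assumes a doubly linked list node type and the standard I/O and allocation headers, which are not shown in the original answer; a matching declaration would be along these lines.

#include <stdio.h>
#include <stdlib.h>

/* Doubly linked list node assumed by insert_at_position() */
typedef struct node {
    int info;                 /* data stored in the node */
    struct node *next;        /* pointer to the next node */
    struct node *prev;        /* pointer to the previous node */
} node;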

A6 In mathematics and computer science, an algorithm is a step-by-step procedure for calculations. Algorithms are used for calculation, data processing, and automated reasoning. An algorithm is an effective method expressed as a finite list[1] of well-defined instructions for calculating a function. Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input.

By following the steps below we can write a simple algorithm:
* Understand the problem.
* Decide on exact vs. approximate solving.
* Design an algorithm.
* Prove correctness.
* Analyze the algorithm.
* Code the algorithm.

In computer science, the analysis of algorithms is the determination of the amount of resources (such as time and storage) necessary to execute them. Most algorithms are designed to work with inputs of arbitrary length. Usually, the efficiency or running time of an algorithm is stated as a function relating the input length to the number of steps (time complexity) or storage locations (space complexity). Algorithm analysis is an important part of a broader computational complexity theory, which provides theoretical estimates for the resources needed by any algorithm which solves a given computational problem. These estimates provide an insight into reasonable directions of search for efficient algorithms.

In theoretical analysis of algorithms it is common to estimate their complexity in the asymptotic sense, i.e. to estimate the complexity function for arbitrarily large input. Big-O notation, Big-omega notation and Big-theta notation are used to this end. For instance, binary search is said to run in a number of steps proportional to the logarithm of the length of the list being searched, or in O(log(n)), colloquially "in logarithmic time". Usually asymptotic estimates are used because different implementations of the same algorithm may differ in efficiency. However, the efficiencies of any two "reasonable" implementations of a given algorithm are related by a constant multiplicative factor called a hidden constant.

Exact (not asymptotic) measures of efficiency can sometimes be computed, but they usually require certain assumptions concerning the particular implementation of the algorithm, called a model of computation. A model of computation may be defined in terms of an abstract computer, e.g. a Turing machine, and/or by postulating that certain operations are executed in unit time. For example, if the sorted list to which we apply binary search has n elements, and we can guarantee that each lookup of an element in the list can be done in unit time, then at most log2 n + 1 time units are needed to return an answer.
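To make the binary search example concrete, here is a minimal iterative version; the array contents and the key searched for are illustrative only and not part of the original answer.

#include <stdio.h>

/* Returns the index of key in the sorted array a[0..n-1], or -1 if absent.
   Each iteration halves the search interval, so at most about log2(n) + 1
   comparisons are needed, i.e. the running time is O(log n). */
int binary_search(const int a[], int n, int key)
{
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;
        if (a[mid] == key)
            return mid;
        else if (a[mid] < key)
            low = mid + 1;      /* key can only be in the upper half */
        else
            high = mid - 1;     /* key can only be in the lower half */
    }
    return -1;
}

int main(void)
{
    int a[] = {1, 3, 5, 7, 9, 11};
    printf("%d\n", binary_search(a, 6, 9));   /* prints 4 */
    return 0;
}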

A7 TRAVERSALS OF A BINARY TREE

A traversal of a graph is to visit each node exactly once. It is useful in many applications, e.g. in searching for particular nodes. Compilers commonly build binary trees in the process of scanning, parsing, generating code and evaluating arithmetic expressions. In this section we shall discuss traversal of a binary tree.

Let T be a binary tree. There are a number of different ways to proceed; the methods differ primarily in the order in which they visit the nodes. The four different traversals of T are in order, post order, preorder and level-by-level traversal.

IN ORDER TRAVERSAL
It follows the general strategy of Left-Root-Right. In this traversal, if T is not empty, we first traverse (in order) the left sub tree, then visit the root node of T, and then traverse (in order) the right sub tree.

Consider the binary tree given in Figure 9. This is an example of an expression tree for (A + B*C) - (D*E);

such trees are called expression trees. A binary tree can be used to represent arithmetic expressions if the node values can be either operators or operand values and are such that:
· each operator node has exactly two branches,
· each operand node has no branches.

The in order traversal produces a (parenthesized) left expression, then prints out the operator at the root and then a (parenthesized) right expression. The tree T, at the start, is rooted at '-'. Since left(T) is not empty, current T becomes rooted at '+'. Since left(T) is not empty, current T becomes rooted at 'A'. Since left(T) is empty, we visit its root, i.e. 'A', check right(T), which is empty, and therefore move back to the parent tree. We visit T's root, i.e. '+'. We now perform in order traversal of right(T); current T becomes rooted at '*'. Since left(T) is empty, we visit its root, i.e. 'B'. Now in order traversal of right(T) is performed, which would give us '*' and 'C'. Moving back to the root of the whole tree, we visit it, i.e. '-', and perform in order traversal of right(T): we access its root's left child, i.e. 'D', and the remaining in order traversal would give us '*' and 'E'. Therefore, the complete listing is A + B * C - D * E. You may note that the expression is in infix notation. This method of traversal is probably the most widely used.

The following is a Pascal procedure for in order traversal of a binary tree:

procedure INORDER (TREE : BINTREE);
begin
  if TREE <> nil then
  begin
    INORDER (TREE^.LEFT);
    writeln (TREE^.DATA);
    INORDER (TREE^.RIGHT);
  end
end;

Figure 10 gives a trace of the in order traversal of the tree given in Figure 9 (output: A + B * C - D * E). Please notice that this procedure, like the definition for traversal, is recursive.

POST ORDER TRAVERSAL
In this traversal we first traverse left(T) (in post order), then traverse right(T) (in post order), and finally visit the root. It is a Left-Right-Root strategy, i.e.:
Traverse the left sub tree in post order.
Traverse the right sub tree in post order.
Visit the root.
For example, a post order traversal of the tree given in Figure 9 would be

For example. suppose we make a depth first search of the binary tree given in Figure 11. (See Unit 4. Block 4). PREORDER TRAVERSAL In this traversal. we visit root first. You may also implement it using Pascal or C language. then recursively perform preorder traversal of Left(T).e. Traverse the right sub tree preorder.(D*E) Preorder traversal is employed in Depth First Search. traversal of Right(T) i. A preorder traversal of the tree given in Figure 9 would yield . i.ABC*+DE*You may notice that it is the postfix notation of the expression (A + (B*C)) -(D*E) We leave the details of the post order traversal method as an exercise. Visit the root Traverse the left sub tree preorder.+A*BC*DE It is the prefix notation of the expression (A+ (B*C)) . Figure 12: Binary tree example for depth first search .e. followed by pre order. a Root-Left-Right traversal.

Figure 12: Binary tree example for depth first search

We shall visit a node, go left as deeply as possible before searching to its right. The order in which the nodes would be visited is A B D E C F H I J K G.

A8 a) SPANNING TREES
A tree is a connected graph which contains no cycles. The concept of a spanning tree of a graph originated with optimization problems in communication networks: a communication network can be represented by a graph in which the vertices are the stations and the edges are the communication lines between stations, and a subnetwork that connects all the stations without any redundancy will be a tree. A spanning tree for a connected graph is a tree whose vertex set is the same as the vertex set of the given graph, and whose edge set is a subset of the edge set of the given graph. Any connected graph will have a spanning tree.

b) Depth-first search (DFS) is an algorithm for traversing or searching tree or graph data structures. One starts at the root (selecting some node as the root in the graph case) and explores as far as possible along each branch before backtracking.

Example: For the following graph (figure not reproduced here),

a depth-first search starting at A, assuming that the left edges in the shown graph are chosen before right edges, and assuming the search remembers previously visited nodes and will not repeat them (since this is a small graph), will visit the nodes in the following order: A, B, D, F, E, C, G. Performing the same search without remembering previously visited nodes results in visiting nodes in the order A, B, D, F, E, A, B, D, F, E, and so on forever, caught in the A, B, D, F, E cycle and never reaching C or G. The edges traversed in this search form a Trémaux tree, a structure with important applications in graph theory.

c) In graph theory, breadth-first search (BFS) is a strategy for searching in a graph when search is limited to essentially two operations: (a) visit and inspect a node of a graph; (b) gain access to visit the nodes that neighbor the currently visited node. The BFS begins at a root node and inspects all the neighboring nodes. Then for each of those neighbor nodes in turn, it inspects their neighbor nodes which were unvisited, and so on. Compare BFS with the equivalent, but more memory-efficient, iterative deepening depth-first search, and contrast it with depth-first search. The algorithm uses a queue data structure to store intermediate results as it traverses the graph, as follows:
1. Enqueue the root node.
2. Dequeue a node and examine it.
   If the element sought is found in this node, quit the search and return a result.
   Otherwise enqueue any successors (the direct child nodes) that have not yet been discovered.
3. If the queue is empty, every node on the graph has been examined – quit the search and return "not found".
4. If the queue is not empty, repeat from Step 2.
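A compact sketch of the queue-based steps listed above, using an adjacency matrix; the graph representation, the vertex count N and the function name are assumptions made for this illustration, not part of the original answer.

#include <stdio.h>

#define N 7                       /* number of vertices, assumed for the example */

/* Breadth-first search from vertex `start` over an adjacency matrix `adj`.
   Follows the steps above: enqueue the root, then repeatedly dequeue a vertex
   and enqueue its undiscovered neighbours until the queue is empty. */
void bfs(int adj[N][N], int start)
{
    int queue[N], front = 0, rear = 0;
    int visited[N] = {0};

    queue[rear++] = start;        /* step 1: enqueue the root node */
    visited[start] = 1;

    while (front < rear) {        /* steps 2-4: loop until the queue is empty */
        int v = queue[front++];   /* dequeue a node and examine it */
        printf("%d ", v);
        for (int w = 0; w < N; w++)
            if (adj[v][w] && !visited[w]) {   /* enqueue undiscovered successors */
                visited[w] = 1;
                queue[rear++] = w;
            }
    }
}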

A9
/* Function to get the count of leaf nodes in a binary tree */
unsigned int getLeafCount(struct node* node)
{
    if (node == NULL)
        return 0;
    if (node->left == NULL && node->right == NULL)     /* leaf node */
        return 1;
    else
        return getLeafCount(node->left) + getLeafCount(node->right);
}

/* Function to count the full nodes (nodes with both children) in a binary tree */
int FullNodes(TreeNode* root)
{
    if (root == NULL)                                  /* if tree is empty */
        return 0;
    if (root->left == NULL && root->right == NULL)     /* leaf nodes */
        return 0;
    if (root->left != NULL && root->right == NULL)     /* nodes with no right child */
        return FullNodes(root->left);
    if (root->left == NULL && root->right != NULL)     /* nodes with no left child */
        return FullNodes(root->right);
    return 1 + FullNodes(root->left) + FullNodes(root->right);   /* full nodes */
}
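The two routines above assume a binary tree node type that is not shown in the original answer. A minimal declaration and driver consistent with them might look like this (the field names, the typedef and the sample tree are assumptions; in an actual program the declarations below would be placed before the two functions in the same file).

#include <stdio.h>
#include <stdlib.h>

struct node {
    int data;
    struct node *left;
    struct node *right;
};
typedef struct node TreeNode;     /* FullNodes() above is written against this name */

static struct node *newNode(int data)
{
    struct node *n = malloc(sizeof *n);
    n->data = data;
    n->left = n->right = NULL;
    return n;
}

int main(void)
{
    /* a small tree: 1 is the only full node; 4 and 5 are the leaves */
    struct node *root = newNode(1);
    root->left = newNode(2);
    root->right = newNode(3);
    root->left->left = newNode(4);
    root->right->right = newNode(5);

    printf("leaves = %u, full nodes = %d\n", getLeafCount(root), FullNodes(root));
    /* prints: leaves = 2, full nodes = 1 */
    return 0;
}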