Spring 2012 Master of Computer Application (MCA) – Semester II
MC0068 – Data Structures using 'C' – 4 Credits (Book ID: B0701 & B0702)
Assignment Set – 1 (40 Marks)

Questions From Book-ID: B0701
1. Write the advantages of the implementation checking preconditions. [5 Marks]

Ans: There are several advantages to having the implementation check its own preconditions:

1. It sometimes has access to information not available to the user (e.g. implementation details about space requirements), although this is often a sign of a poorly constructed specification.
2. Programs won't bomb mysteriously – errors will be detected (and reported) at the earliest possible moment. This is not true when the user checks preconditions, because the user is human and occasionally might forget to check, or might think that checking was unnecessary when in fact it was needed.
3. Most important of all, if we ever change the specification and wish to add, delete, or modify preconditions, we can do this easily, because the precondition occurs in exactly one place in our program.

There are arguments on both sides. The literature specifies that procedures should signal an error if their preconditions are not satisfied. This means that these procedures must check their own preconditions. That is what our model solutions will do too. We thereby sacrifice some efficiency for a high degree of maintainability and robustness.

An additional possibility is to selectively include or exclude the implementation's condition-checking code, e.g. using #ifdef:

    #ifdef SAFE
    if (!condition)
        error("condition not satisfied");
    #endif

This code is included only if we supply the -DSAFE argument to the compiler (or otherwise define SAFE). Thus in an application where the user checks carefully for all preconditions, we have the option of omitting all checks by the implementation.

2. Discuss the STACK operation with a suitable example and show how to implement stack operations on integers in C by using an array. [10 Marks]

Ans: Stack: A stack is a linear list of elements for which all insertions and deletions (and usually accesses) are made at only one end of the list. Stacks are also called LIFO lists (Last In, First Out). The operations supported are:

1) IsEmpty: Returns whether the stack is empty or not.
2) IsFull: Returns whether the stack is full or not.
3) Push(X, S): Pushes element X on to the top of the stack S.

4) Pop(S): Pops an element from the top of the stack on to the output (printing to the output console is not strictly necessary, in which case we can define another function Top(S) which gives the top element of the stack).

All the above mentioned operations are of O(1) complexity.

Implementation of stack using array:

    #include <stdio.h>

    #define maxsize 100

    int stack[maxsize];
    int stacktop = 0;

    void push(int);
    int pop(void);
    void display(void);

    int main()
    {
        int choice = 0, val = 0;
        do {
            printf("\n\t1.Push 2.Pop 3.Display 4.Exit\n\tSelect Your Choice : ");
            scanf("%d", &choice);
            switch (choice) {
            case 1:
                printf("\tElement to be Pushed : ");
                scanf("%d", &val);
                push(val);
                break;
            case 2:
                val = pop();
                if (val != -1)
                    printf("\tPopped Element : %d\n", val);
                break;
            case 3:
                display();
                break;
            case 4:
                break;
            default:
                printf("\tWrong Choice");
                break;
            }
        } while (choice != 4);
        return 0;
    }

    void push(int val)
    {
        if (stacktop < maxsize)
            stack[stacktop++] = val;
        else
            printf("Stack Overflow");
    }

    int pop(void)
    {
        if (stacktop > 0)
            return stack[--stacktop];
        printf("Stack is Empty");
        return -1;
    }

    void display(void)
    {
        int i = 0;
        if (stacktop > 0) {
            printf("\tElements are:");
            while (i < stacktop)
                printf("\t%d", stack[i++]);
            printf("\n");
        } else {
            printf("\tStack is Empty\n");
        }
    }

3. Discuss how queues differ from stacks. [5 Marks]

Ans: Queues: A queue is defined as a special type of data structure where elements are inserted at one end and deleted from the other end. The end where elements are inserted is called the 'rear end' (r) and the end where elements are deleted is called the 'front end' (f). In a queue, elements are always inserted at the rear end and deleted from the front end, so a queue follows the First In, First Out (FIFO) discipline. The operations that can be performed on a queue are:

- Insert an item into the queue.
- Delete an item from the queue.
- Display the contents of the queue.

Stack: A stack is defined as a special type of data structure where items are inserted at one end, called the top of the stack, and items are deleted from the same end. Here, the last item inserted will be on top of the stack. Since deletion is done from the same end, the last item inserted is the first item to be deleted, and so a stack is also called a Last In, First Out (LIFO) data structure.

The various operations that can be performed on stacks are:

- Insert an item into the stack.
- Delete an item from the stack.
- Display the contents of the stack.

Questions From Book-ID: B0702
4. Explain the AVL Tree with a suitable example. [4 Marks]

Ans: An AVL tree is a binary search tree whose left subtree and right subtree differ in height by no more than 1, and whose left and right subtrees are themselves AVL trees. To maintain balance in a height-balanced binary tree, each node has to keep an additional piece of information that is needed to efficiently maintain balance in the tree after every insert and delete operation. For an AVL tree, this additional piece of information is called the balance factor, and it indicates whether the heights of the left and right subtrees are the same or, if not, which of the two subtrees is one unit taller. If a node has the balance factor rh (right high), the height of its right subtree is 1 greater than the height of its left subtree. Similarly, the balance factor for a node could be lh (left high) or eh (equal height).

Example: Consider the AVL tree depicted below. The right subtree has height 1 and the left subtree height 0. The balance factor of the root is tilted toward the right (right high – rh) since the right subtree has height one larger than the left subtree. Inserting the new node 21 into the tree will cause the right subtree to have height 2 and cause a violation of the definition of an AVL tree. This violation of the AVL property is indicated at the root by showing that the balance factor is now doubly unbalanced to the right. The other balance factors along the path of insertion will also be changed as indicated. The node holding 12 is also doubly unbalanced to the right.

[Figure: the AVL tree with root 4, left child 2, and right subtree 12 → 24, before and after inserting the new node 21.]
The AVL property of this tree is restored through a succession of rotations. The root of this tree is doubly unbalanced to the right. The child of this unbalanced node (node 12) is also doubly unbalanced to the right, but its child (node 24) is left high. Before a rotation around the root of the right subtree can be performed, a rotation around node 24 is required so that the balance factors of both the child and grandchild of the unbalanced node – the subtree root (node 12) – agree in direction (both right high in this case).

[Figure: the tree after the right rotation around node 24, with 21 now the parent of 24.]

Now both nodes 12 and 21 have a balance factor of rh, and a left rotation can be performed about node 12. This rotation reduces the height of the right subtree by 1 and restores AVL balance to the tree.

[Figure: the rebalanced tree after the left rotation about node 12.]

Next we add a new key 6 to this tree. Addition of a new node holding this key causes the root to become doubly unbalanced to the right.

[Figure: the tree before and after inserting the key 6, with the root doubly unbalanced to the right.]

Since the root is doubly unbalanced right and its right child is left high, we must first perform a right rotation around node 21. Now a rotation around the root readjusts the balance of the tree.

[Figure: the rebalanced tree after the right rotation around node 21 and the left rotation around the root, with 12 as the new root.]

The left rotation about the root promotes the right child of the original root (node 12) and makes the old root (node 4) the left child of the new root, replacing the left subtree originally attached to node 12. The former left subtree of node 12 is now the right subtree of node 4. In a left rotation, all of the keys in the left subtree of the right child must be greater than the key of the root and less than the key of that right child. When the root becomes the left child of the new root, the keys in this subtree therefore remain in the left subtree of the new root and in the right subtree of the new left child.

6. Explain the adjacency matrix and adjacency list with suitable examples. [5 Marks]

Ans: Adjacency Matrix and Adjacency List: Two main data structures for the representation of graphs are used in practice. The first is an adjacency matrix, in which the rows and columns of a two-dimensional array represent source and destination vertices, and entries in the matrix indicate whether an edge exists between the vertices. The second is called an adjacency list, and is implemented by representing each node as a data structure that contains a list of all adjacent nodes. Adjacency lists are preferred for sparse graphs; otherwise, an adjacency matrix is a good choice. Finally, for very large graphs with some regularity in the placement of edges, a symbolic graph is a possible choice of representation.

Adjacency Matrix

The adjacency matrix uses a vector (one-dimensional array) for the vertices and a matrix (two-dimensional array) to store the edges (see figure). If two vertices are adjacent – that is, if there is an edge between them – the matrix entry at their intersection has the value 1; if there is no edge between them, the entry is set to 0. If the graph is directed, then the intersection in the adjacency matrix indicates the direction. For the six-vertex graph in the figure, the matrix is:

      A  B  C  D  E  F
    A 0  1  0  0  0  0
    B 1  0  1  0  1  0
    C 0  1  0  1  1  0
    D 0  0  1  0  1  0
    E 0  1  1  1  0  1
    F 0  0  0  0  1  0

[Figure: a graph with vertices A–F, its vertex vector, and the adjacency matrices for the non-directed and directed versions of the graph.]

In addition to the limitation that the size of the graph must be known before the program starts, there is another serious limitation in the adjacency matrix: only one edge can be stored between any two vertices. Although this limitation does not prevent many graphs from using the matrix format, some network structures require multiple lines between vertices.

Adjacency Lists

In graph theory, an adjacency list is the representation of all edges or arcs in a graph as a list. If the graph is undirected, every entry is a set of two nodes containing the two ends of the corresponding edge; if it is directed, every entry is a tuple of two nodes, one denoting the source node and the other denoting the destination node of the corresponding arc. Typically, adjacency lists are unordered.

In computer science, an adjacency list is a closely related data structure for representing graphs. In an adjacency list representation, we keep, for each vertex in the graph, a list of all other vertices which it has an edge to (that vertex's "adjacency list"). For instance, the representation suggested by van Rossum, in which a hash table is used to associate each vertex with an array of adjacent vertices, can be seen as an instance of this type of representation, as can the representation in Cormen et al., in which an array indexed by vertex numbers points to a singly-linked list of the neighbors of each vertex.

The triangle graph on vertices a, b, and c, in which every vertex is adjacent to the other two, has this adjacency list representation:

    a | adjacent to b, c
    b | adjacent to a, c
    c | adjacent to a, b


The adjacency list uses a two-dimensional ragged array to store the edges. An adjacency list for the graph with vertices A–F is shown as follows:

[Figure: the vertex list A–F, with each vertex pointing to a linked list of its edges; for example, B's edge list links to A, C, and E.]

The vertex list is a singly linked list of the vertices in the graph. Depending on the application, it could also be implemented using doubly linked lists or a circularly linked list. The pointer at the left of the list links the vertex entries. The pointer at the right of each vertex entry is a head pointer to a linked list of edges from that vertex. Thus, in the non-directed graph in the figure, there is a path from vertex B to vertices A, C, and E. To find these edges in the adjacency list, we start at B's entry in the vertex list and traverse its edge list to A, then to C, and finally to E.

7. Write an algorithm for the following:
i. Dijkstra Algorithm
ii. Bellman–Ford Algorithm [5 Marks]

Ans: 1) Dijkstra's algorithm

Dijkstra's algorithm, conceived by Dutch computer scientist Edsger Dijkstra in 1959, is a graph search algorithm that solves the single-source shortest path problem for a graph with non-negative edge path costs, outputting a shortest-path tree. This algorithm is often used in routing. For a given source vertex (node) in the graph, the algorithm finds the path with lowest cost (i.e. the shortest path) between that vertex and every other vertex. It can also be used for finding the cost of the shortest path from a single vertex to a single destination vertex, by stopping the algorithm once the shortest path to the destination vertex has been determined. For example, if the vertices of the graph represent cities and edge path costs represent driving distances between pairs of cities connected by a direct road, Dijkstra's algorithm can be used to find the shortest route between one city and all other cities.
As a result, shortest path first is widely used in network routing protocols, most notably IS-IS and OSPF (Open Shortest Path First).

Description of the algorithm: Suppose you create a knotted web of strings, with each knot corresponding to a node and the strings corresponding to the edges of the web; the length of each string is proportional to the weight of each edge. Now you compress the web into a small pile without making any knots or tangles in it. You then grab your starting knot and pull straight up. As new knots come up with the original, you can measure the straight up-down distance to these knots: this must be the shortest distance from the starting node to each of them. The acts of "pulling up" and "measuring" must be abstracted for the computer, but the general idea of the algorithm is the same: you have two sets, one of knots that are on the table and another of knots that are in the air. At every step of the algorithm, you take the closest knot remaining on the table, pull it into the air, and mark it with its length. If any knots are left on the table when you are done, you mark them with the distance infinity.

Using a street map, suppose you are marking over the streets (tracing a street with a marker) in a certain order, until you have a route marked from the starting point to the destination. The order is conceptually simple: from all the street intersections of the already marked routes, find the closest unmarked intersection – closest to the starting point (the "greedy" part). Its distance is the whole marked route to the intersection, plus the street to the new, unmarked intersection. Mark that street to that intersection, draw an arrow with the direction, then repeat. Never mark to any intersection twice. When you get to the destination, follow the arrows backwards. There will be only one path back against the arrows, the shortest one.

2) The Bellman–Ford algorithm

The Bellman–Ford algorithm, a label-correcting algorithm, computes single-source shortest paths in a weighted digraph (where some of the edge weights may be negative). Dijkstra's algorithm solves the same problem with a lower running time, but requires edge weights to be non-negative. Thus, Bellman–Ford is usually used only when there are negative edge weights. According to Robert Sedgewick, "Negative weights are not merely a mathematical curiosity; they arise in a natural way when we reduce other problems to shortest-paths problems", and he gives the specific example of a reduction from the NP-complete Hamiltonian path problem to the shortest paths problem with general weights. If a graph contains a cycle of total negative weight, then arbitrarily low weights are achievable and so there is no solution; Bellman–Ford detects this case.
If the graph does contain a cycle of negative weight, Bellman–Ford can only detect this; it cannot find the shortest path that does not repeat any vertex in such a graph. This problem is at least as hard as the NP-complete longest path problem.

Bellman–Ford is in its basic structure very similar to Dijkstra's algorithm, but instead of greedily selecting the minimum-weight node not yet processed to relax, it simply relaxes all the edges, and does this |V| − 1 times, where |V| is the number of vertices in the graph. The repetitions allow minimum distances to propagate accurately throughout the graph since, in the absence of negative cycles, the shortest path can only visit each node at most once. Unlike the greedy approach, which depends on certain structural assumptions derived from positive weights, this straightforward approach extends to the general case. Bellman–Ford runs in O(|V|·|E|) time, where |V| and |E| are the number of vertices and edges respectively.
