
Q1. Write a program in C using one-dimensional arrays to sort a given list of n numbers using any of the sorting techniques.

Answer:

In the following program, the selection sort method is used to sort a given list of n numbers.

#include <stdio.h>
#include <conio.h>

void main()
{
    int n, i, j, temp, a[50];
    printf("\nEnter how many elements=");
    scanf("%d", &n);
    printf("\nEnter %d elements", n);
    for (i = 0; i < n; i++)
    {
        scanf("%d", &a[i]);
    }
    for (i = 0; i < n - 1; i++)
    {
        for (j = i + 1; j < n; j++)
        {
            if (a[i] > a[j])
            {
                temp = a[j];
                a[j] = a[i];
                a[i] = temp;
            }
        }
    }
    printf("\nSorted Array:\n");
    for (i = 0; i < n; i++)
    {
        printf("%d\n", a[i]);
    }
    getch();
}


Q2. Write the advantages of an implementation that checks preconditions.

Answer: Many components have operations with preconditions. That means that before calling such an operation, the programmer must make sure that its preconditions are satisfied. If they are not, the operation is allowed to do anything it pleases, including crashing the program in most unpleasant ways. While it may seem like a good idea to force every operation to check its preconditions, doing so would result in severe and unnecessary efficiency penalties. The reason it is not always necessary to check the preconditions is that in a debugged or verified program one can be certain that they are satisfied.

However, during the testing and debugging of a program, it is very helpful to know that if the program calls an operation when its preconditions are not satisfied, a helpful message to that effect will be output and the program will stop immediately. To facilitate this, a special "checking" implementation of every component should be constructed. This implementation serves as a wrapper around any non-checking implementation of the component, which is accomplished with the use of templates. The checking wrapper provides a new implementation for every operation with preconditions, and simply re-exports all the operations without preconditions. The checking implementation of an operation with preconditions checks those preconditions, and calls through to the non-checking implementation of the operation only if the preconditions are satisfied. If they are not, it stops the program and produces an explanatory message.

Thus, during testing and debugging, checking versions of all components are used. However, the final version of the program is compiled with non-checking versions of all components, since this preserves correctness while improving efficiency.

Q3. Explain the theory of non-linear data structures.
Answer:

Trees

We consider one of the most important non-linear information structures: trees. A tree is often used to represent a hierarchy. This is because the relationships between the items in the hierarchy suggest the branches of a botanical tree. For example, a tree-like organization chart is often used to represent the lines of responsibility in a business, as shown in the figure. The president of the company is shown at the top of the tree and the vice-presidents are indicated below her. Under the vice-presidents we find the managers, and below the managers the rest of the clerks. Each clerk reports to a manager, each manager reports to a vice-president, and each vice-president reports to the president.

It takes just a little imagination to see the tree in the figure. Of course, the tree is upside-down. However, this is the usual way the data structure is drawn. The president is called the root of the tree and the clerks are the leaves.

A tree is extremely useful for certain kinds of computations. For example, suppose we wish to determine the total salaries paid to employees by division or by department. The total of the salaries in division A can be found by computing the sum of the salaries paid in departments A1 and A2 plus the salary of the vice-president of division A. Similarly, the total of the salaries paid in department A1 is the sum of the salaries of the manager of department A1 and of the two clerks below her. Clearly, in order to compute all the totals, it is necessary to consider the salary of every employee. Therefore, an implementation of this computation must visit all the employees in the tree. An algorithm that systematically visits all the items in a tree is called a tree traversal. In this chapter we consider several different kinds of trees as well as several different tree traversal algorithms. In addition, we show how trees can be used to represent arithmetic expressions and how we can evaluate an arithmetic expression by doing a tree traversal.

The following is a mathematical definition of a tree:

Definition (Tree): A tree T is a finite, non-empty set of nodes, T = {r} U T1 U T2 U ... U Tn, with the following properties:
1. A designated node of the set, r, is called the root of the tree; and
2. The remaining nodes are partitioned into n >= 0 subsets T1, T2, ..., Tn, each of which is a tree.

For convenience, we shall use the notation T = {r, T1, T2, ..., Tn} to denote the tree T. Notice that the definition is recursive: a tree is defined in terms of itself! Fortunately, we do not have a problem with infinite recursion, because every tree has a finite number of nodes and because in the base case a tree has n = 0 subtrees.
It follows from the definition that the minimal tree is a tree comprising a single root node. For example, Ta = {A}.


Finally, the following are also trees:

Tb = {B, {C}}
Tc = {D, {E, {F}}, {G, {H, {I}}}, {J, {K}, {L}, {M}}}

How do Ta, Tb, and Tc resemble their arboreal namesake? The similarity becomes apparent when we consider the graphical representation of these trees shown in the figure. To draw such a pictorial representation of a tree T = {r, T1, T2, ..., Tn}, the root r is drawn first; then the subtrees T1, T2, ..., Tn are drawn beside each other below the root. Finally, lines are drawn from r to the roots of each of the subtrees.

Figure: Examples of trees. Of course, trees drawn in this fashion are upside down. Nevertheless, this is the conventional way in which tree data structures are drawn. In fact, it is understood that when we speak of up and down, we do so with respect to this pictorial representation. For example, when we move from a root to a subtree, we will say that we are moving down the tree. The inverted pictorial representation of trees is probably due to the way that genealogical lineal charts are drawn. A lineal chart is a family tree that shows the descendants of some person, and it is from genealogy that much of the terminology associated with tree data structures is taken. The figure shows one representation of the tree Tc defined above. In this case, the tree is represented as a set of nested regions in the plane. In fact, what we have is a Venn diagram, which corresponds to the view that a tree is a set of sets.

Figure: An alternate graphical representation for trees.

Binary Tree

A binary tree is used to implement lists whose elements have a natural order (e.g. numbers) and either (a) the application would like the list kept in this order or (b) the order of elements is irrelevant to the application (e.g. the list is implementing a set). Each element in a binary tree is stored in a "node" class (or struct). Each node contains pointers to a left child node and a right child node. In some implementations, it may also contain a pointer to the parent node. A tree may also have an object of a second "tree" class (or struct) which acts as a header for the tree. The "tree" object contains a pointer to the root of the tree (the node with no parent) and whatever other information the programmer wants to squirrel away in it (e.g. the number of nodes currently in the tree).

In a binary search tree, elements are kept sorted in left-to-right order across the tree. That is, if N is a node, then the value stored in N must be larger than the value stored in left-child(N) and less than the value stored in right-child(N). Variant trees may have the opposite order (smaller values to the right rather than to the left) or may allow two different nodes to contain equal values.

Hash Tables

A very common paradigm in data processing involves storing information in a table and then later retrieving the information stored there. For example, consider a database of driver's license records. The database contains one record for each driver's license issued. Given a driver's license number, we can look up the information associated with that number. Similar operations are done by the C compiler. The compiler uses a symbol table to keep track of the user-defined symbols in a program. As it compiles a program, the compiler inserts an entry in the symbol table every time a new symbol is declared. In addition, every time a symbol is used, the compiler looks up the attributes associated with that symbol to see that it is being used correctly. Typically the database comprises a collection of key-and-value pairs. Information is retrieved from the database by searching for a given key. In the case of the driver's license database, the key is the driver's license number, and in the case of the symbol table, the key is the name of the symbol. In general, an application may perform a large number of insertion and/or look-up operations. Occasionally it is also necessary to remove items from the database.
Because a large number of operations will be done, we want to do them as quickly as possible. Hash tables are a very practical way to maintain a dictionary. As with bucket sort, the approach assumes we know that the distribution of keys is fairly well-behaved. A hash function is a mathematical function which maps keys to integers; once you have a key's index, you can go straight to its position. In bucket sort, our hash function mapped each key to a bucket based on the first letters of the key. "Collisions" were the sets of keys mapped to the same bucket. If the keys are uniformly distributed, then each bucket contains very few keys. The resulting short lists are easily sorted, and can just as easily be searched.


We examine data structures which are designed specifically with the objective of providing efficient insertion and find operations. In order to meet the design objective certain concessions are made. Specifically, we do not require that there be any specific ordering of the items in the container. In addition, while we still require the ability to remove items from the container, it is not our primary objective to make removal as efficient as the insertion and find operations.

Ideally we would build a data structure for which both the insertion and find operations are O(1) in the worst case. However, this kind of performance can only be achieved with complete a priori knowledge: we need to know beforehand specifically which items are to be inserted into the container. Unfortunately, we do not have this information in the general case. So, if we cannot guarantee O(1) performance in the worst case, then we make it our design objective to achieve O(1) performance in the average case.

The constant-time performance objective immediately leads us to the following conclusion: our implementation must be based in some way on an array, since we can access the k-th element of an array in constant time, whereas the same operation in a linked list takes O(k) time.

In the previous section, we considered two searchable containers: the ordered list and the sorted list. In the case of an ordered list, the cost of an insertion is O(1) and the cost of the find operation is O(n). For a sorted list the cost of insertion is O(n) and the cost of the find operation is O(log n) for the array implementation. Clearly, neither the ordered list nor the sorted list meets our performance objectives. The essential problem is that a search, either linear or binary, is always necessary. In the ordered list, the find operation uses a linear search to locate the item. In the sorted list, a binary search can be used to locate the item because the data is sorted. However, in order to keep the data sorted, insertion becomes O(n). In order to meet the performance objective of constant-time insert and find operations, we need a way to do them without performing a search. That is, given an item x, we need to be able to determine directly from x the array position where it is to be stored.


Q4. Discuss the STACK operation with a suitable example and show how to implement stack operations on integers in C by using an array.

Answer: A stack is a last in, first out (LIFO) abstract data type and data structure. A stack can have any abstract data type as an element, but is characterized by only two fundamental operations: push and pop. The push operation adds an item to the top of the stack, hiding any items already on the stack, or initializing the stack if it is empty. The pop operation removes an item from the top of the stack and returns this value to the caller. A pop either reveals previously concealed items or results in an empty stack.

A stack is a restricted data structure, because only a small number of operations are performed on it. The nature of the pop and push operations also means that stack elements have a natural order. Elements are removed from the stack in the reverse order to the order of their addition; therefore, the lower elements are those that have been on the stack the longest. The following figure shows a simple representation of a stack.

The following example shows how to implement stack operations on integers in C by using an array.

#include <stdio.h>
#include <conio.h>

void create(void);
void push(void);
void pop(void);
void display(void);
void topelement(void);

int a[25];


int top;

void create()
{
    int i;
    printf("Enter the number of elements in the stack \n");
    scanf("%d", &top);
    printf("Enter the elements \n");
    for (i = 0; i < top; ++i)
    {
        scanf("%d", &a[i]);
    }
    return;
}

void display()
{
    int i;
    if ((top != 0) && (top != 25))
    {
        printf("The elements in the stack are \n");
        for (i = top - 1; i >= 0; i--)
            printf("%d", a[i]);
        getch();
    }
}


void push()
{
    if (top == 25)
    {
        printf("The stack is full \n");
    }
    else
    {
        printf("Enter the element \n");
        scanf("%d", &a[top]);
        top++;
    }
    return;
}

void pop()
{
    if (top == 0)
    {
        printf("the stack is empty \n");
    }
    else
    {
        printf("The popped element is %d", a[--top]);
    }


    return;
}

void topelement()
{
    int t;
    if (top == 0)
    {
        printf("There is no top element \n");
    }
    else
    {
        t = top - 1;
        printf("The top element is %d\n", a[t]);
    }
    return;
}

void main()
{
    int ch;
    clrscr();
    create();
    display();
    do {
        clrscr();


        printf("Stack operations \n");
        printf("----------------\n");

        printf("1. PUSH \n");
        printf("2. POP \n");
        printf("3. TOP ELEMENT \n");
        printf("4. Displaying the stack \n");
        printf("5. Quit \n");
        printf("Enter your choice \n");
        scanf("%d", &ch);
        switch (ch)
        {
        case 1:
            push();
            display();
            break;
        case 2:
            pop();
            display();
            break;
        case 3:
            topelement();
            // display();
            break;
        case 4:
            display();
            break;
        case 5:
            printf("END \n");
            break;


        default:
            printf("Invalid Entry \n");
        }
        getch();
    } while (ch != 5);
}

Sample output:

Enter the number of elements in the stack
4
Enter the elements
1 2 3 4
The elements in the stack are
4321
Stack operations
1. PUSH
2. POP
3. TOP ELEMENT
4. Displaying the stack
5. Quit
Enter your choice


1
Enter the element
0
The elements in the stack are
04321
Enter your choice
3
The top element is 0
Enter your choice
2
The popped element is 0
Enter your choice
1
Enter the element
1
The elements in the stack are
14321
Enter your choice
5
END


Q5. Describe the theory and applications of Double Ended Queues (Deque) and circular queues.

Answer: A double-ended queue (often abbreviated to deque) is an abstract data type that implements a queue for which elements can only be added to or removed from the front (head) or back (tail). It is also often called a head-tail linked list.

There are at least two common ways to efficiently implement a deque: with a modified dynamic array or with a doubly-linked list. The dynamic array implementation uses a variant of a dynamic array that can grow from both ends, sometimes called an array deque. These array deques have all the properties of a dynamic array, such as constant-time random access, good locality of reference, and inefficient insertion/removal in the middle, with the addition of amortized constant-time insertion/removal at both ends, instead of just one end. Three common implementations include:

1. Storing deque contents in a circular buffer, and only resizing when the buffer becomes completely full. This decreases the frequency of resizings, but requires an expensive branch instruction for indexing.
2. Allocating deque contents from the center of the underlying array, and resizing the underlying array when either end is reached. This approach may require more frequent resizings and waste more space, particularly when elements are only inserted at one end.
3. Storing contents in multiple smaller arrays, allocating additional arrays at the beginning or end as needed. Indexing is implemented by keeping a dynamic array containing pointers to each of the smaller arrays.



Circular Queues: A circular buffer, cyclic buffer or ring buffer is a data structure that uses a single, fixed-size buffer as if it were connected end-to-end. This structure lends itself easily to buffering data streams.

An example that could possibly use an overwriting circular buffer is multimedia. If the buffer is used as the bounded buffer in the producer-consumer problem, then it is probably desired for the producer (e.g., an audio generator) to overwrite old data if the consumer (e.g., the sound card) is momentarily unable to keep up. Another example is the digital waveguide synthesis method, which uses circular buffers to efficiently simulate the sound of vibrating strings or wind instruments.

The prized attribute of a circular buffer is that it does not need to have its elements shuffled around when one is consumed. (If a non-circular buffer were used, then it would be necessary to shift all elements when one is consumed.) In other words, the circular buffer is well suited as a FIFO buffer, while a standard, non-circular buffer is well suited as a LIFO buffer. Circular buffering makes a good implementation strategy for a queue that has a fixed maximum size. Should a maximum size be adopted for a queue, then a circular buffer is a completely ideal implementation; all queue operations are constant time. However, expanding a circular buffer requires shifting memory, which is comparatively costly. For arbitrarily expanding queues, a linked list approach may be preferred instead.

Q6.
Write algorithms for the following:

i. Dijkstra's Algorithm
ii. Bellman-Ford Algorithm

Answer:

i. Dijkstra's Algorithm:

#include <stdio.h>
#include <stdlib.h>


void main()
{
    int graph[15][15], s[15], pathestimate[15], mark[15];
    int num_of_vertices, source, i, j, u, predecessor[15];
    int count = 0;
    int minimum(int a[], int m[], int k);
    void printpath(int, int, int[]);
    printf("\nenter the no.of vertices\n");
    scanf("%d", &num_of_vertices);
    if (num_of_vertices <= 0)
    {
        printf("\nthis is meaningless\n");
        exit(1);
    }
    printf("\nenter the adjacent matrix\n");
    for (i = 1; i <= num_of_vertices; i++)
    {
        printf("\nenter the elements of row %d\n", i);
        for (j = 1; j <= num_of_vertices; j++)
        {
            scanf("%d", &graph[i][j]);
        }
    }
    printf("\nenter the source vertex\n");
    scanf("%d", &source);
    for (j = 1; j <= num_of_vertices; j++)
    {
        mark[j] = 0;
        pathestimate[j] = 999;
        predecessor[j] = 0;
    }
    pathestimate[source] = 0;
    while (count < num_of_vertices)
    {
        u = minimum(pathestimate, mark, num_of_vertices);
        s[++count] = u;
        mark[u] = 1;
        for (i = 1; i <= num_of_vertices; i++)
        {
            if (graph[u][i] > 0)
            {


                if (mark[i] != 1)
                {
                    if (pathestimate[i] > pathestimate[u] + graph[u][i])
                    {
                        pathestimate[i] = pathestimate[u] + graph[u][i];
                        predecessor[i] = u;
                    }
                }
            }
        }
    }
    for (i = 1; i <= num_of_vertices; i++)
    {
        printpath(source, i, predecessor);
        if (pathestimate[i] != 999)
            printf("->(%d)\n", pathestimate[i]);
    }
}

int minimum(int a[], int m[], int k)
{
    int mi = 999;
    int i, t;
    for (i = 1; i <= k; i++)
    {
        if (m[i] != 1)
        {
            if (mi >= a[i])
            {
                mi = a[i];
                t = i;
            }
        }
    }
    return t;
}

void printpath(int x, int i, int p[])
{
    printf("\n");
    if (i == x)
    {
        printf("%d", x);
    }
    else if (p[i] == 0)


        printf("no path from %d to %d", x, i);
    else
    {
        printpath(x, p[i], p);
        printf("..%d", i);
    }
}

ii. Bellman-Ford Algorithm

#include <stdio.h>

typedef struct {
    int u, v, w;
} Edge;

int n;                 /* the number of nodes */
int e;                 /* the number of edges */
Edge edges[1024];      /* large enough for n <= 2^5 = 32 */
int d[32];             /* d[i] is the minimum distance from node s to node i */

#define INFINITY 10000

void printDist()
{
    int i;
    printf("Distances:\n");
    for (i = 0; i < n; ++i)
        printf("to %d\t", i + 1);
    printf("\n");
    for (i = 0; i < n; ++i)
        printf("%d\t", d[i]);
    printf("\n\n");
}

void bellman_ford(int s)
{
    int i, j;


    for (i = 0; i < n; ++i)
        d[i] = INFINITY;
    d[s] = 0;
    for (i = 0; i < n - 1; ++i)
        for (j = 0; j < e; ++j)
            if (d[edges[j].u] + edges[j].w < d[edges[j].v])
                d[edges[j].v] = d[edges[j].u] + edges[j].w;
}

int main(int argc, char *argv[])
{
    int i, j;
    int w;
    FILE *fin = fopen("dist.txt", "r");
    fscanf(fin, "%d", &n);
    e = 0;
    for (i = 0; i < n; ++i)
        for (j = 0; j < n; ++j)
        {
            fscanf(fin, "%d", &w);
            if (w != 0)
            {
                edges[e].u = i;
                edges[e].v = j;
                edges[e].w = w;
                ++e;
            }
        }
    fclose(fin);
    /* printDist(); */
    bellman_ford(0);
    printDist();
    return 0;
}

Q7. Explain the theory of Minimum spanning trees. Answer:

Basically, a minimum spanning tree is a subset of the edges of the graph such that there is a path from any node to any other node and the sum of the weights of the edges is minimum.


Here's the minimum spanning tree of the example:

The above image contains all of the initial nodes and some of the initial edges. In fact, it contains exactly n-1 edges, where n is the number of nodes. It is called a tree because there are no cycles. The graph can be depicted as a map, with the nodes being cities, the edges passable terrain, and the weights the distances between the cities. It is worth mentioning that a graph can have several minimum spanning trees. Take the above example, but replace all the weights with 1: the resulting graph will have 6 minimum spanning trees. The problem, then: given a graph, find one of its minimum spanning trees.

Prim's Algorithm

One of the classic algorithms for this problem is that found by Robert C. Prim. It is a greedy-style algorithm and it is guaranteed to produce a correct result. In the following discussion, let the distance from each node not in the tree to the tree be the edge of minimal weight between that node and some node in the tree. If there is no such edge, assume the distance is infinity (this shouldn't happen). The algorithm builds the minimal spanning tree by iteratively adding nodes into a working tree:

1. Start with a tree which contains only one node.
2. Identify a node (outside the tree) which is closest to the tree, add the minimum-weight edge from that node to some node in the tree, and incorporate the additional node as a part of the tree.
3. If there are fewer than n-1 edges in the tree, go to step 2.

For the example graph, here's how it would run:


Start with only node A in the tree.

Find the closest node to the tree, and add it.

Repeat until there are n-1 edges in the tree.


Q8. Explain the adjacency matrix and adjacency list with a suitable example. Answer:

Adjacency matrix: Each cell aij of an adjacency matrix contains 1 if there is an edge between the i-th and j-th vertices, and 0 otherwise. Before discussing the advantages and disadvantages of this kind of representation, let us see an example.

Graph

Adjacency matrix

Edge (2, 5)

Cells for the edge (2, 5)


Edge (1, 3)

Cells for the edge (1, 3)

The graph presented in the example is undirected. This means that its adjacency matrix is symmetric: indeed, in an undirected graph, if there is an edge (2, 5) then there is also an edge (5, 2). This is also the reason why there are two cells for every edge in the sample. Loops, if they are allowed in a graph, correspond to the diagonal elements of the adjacency matrix.

Advantages. An adjacency matrix is very convenient to work with. Adding (or removing) an edge can be done in O(1) time, and the same time is required to check if there is an edge between two vertices. It is also very simple to program, and in all our graph tutorials we are going to work with this kind of representation. To sum up, the adjacency matrix is a good solution for dense graphs, which implies having a roughly constant number of vertices.

Adjacency list: This kind of graph representation is one of the alternatives to the adjacency matrix. It requires less memory and, in particular situations, can even outperform the adjacency matrix. For every vertex, an adjacency list stores a list of the vertices adjacent to it. Let us see an example.


Graph

Adjacency list

Vertices, adjacent to {2}

Row in the adjacency list

Advantages. An adjacency list allows us to store a graph in a more compact form than an adjacency matrix, but the difference decreases as the graph becomes denser. Another advantage is that the adjacency list allows us to get the list of adjacent vertices in O(1) time, which is a big advantage for some algorithms. To sum up, the adjacency list is a good solution for sparse graphs and lets us change the number of vertices more efficiently than an adjacency matrix. But there are still better solutions for storing fully dynamic graphs.


December 2010
Master of Computer Application (MCA) Semester 2
MC0068 Data Structures using C (4 Credits)

Q1. What do you mean by searching? Define the searching problem, and how do you evaluate the performance of any searching algorithm?

Answer: Pulling up records, data, and information is one of the most vital applications of computers. It usually involves giving a piece of information called the key, and asking to find a record that contains other associated information. This is achieved by first going through the list to find whether the given key exists or not, a process called searching. Computer systems are often used to store large amounts of data from which individual records must be retrieved according to some search criterion. The process of searching for an item in a data structure can be quite straightforward or very complex. Searching can be done on internal data structures or on external data structures. Information retrieval in the required format is the central activity in all computer applications, and it involves searching.

Consider a list of n elements, which can represent a file of n records, where each element is a key/number. The task is to find a particular key in the list in the shortest possible time. If you know you are going to search for an item in a set, you will need to think carefully about what type of data structure you will use for that set. At a low level, the only searches that get mentioned are for sorted and unsorted arrays. However, these are not the only data structures that are useful for searching.

Sequential Search [Linear Search]

This is the most natural searching method. Simply put, it means going through a list or a file till the required record is found. It makes no demands on the ordering of records. The algorithm for a sequential search procedure is now presented.

Algorithm: Sequential Search

This represents the algorithm to search a list of values to find the required one.

INPUT: List of size N. Target value T


OUTPUT: Position of T in the list I

BEGIN
1. Set FOUND to false
   Set I to 0
2. While (I <= N) and (FOUND is false)
      If List[I] = T
         FOUND = true
      Else
         I = I + 1
   END
3. If FOUND is false
      T is not present in List.
END

This algorithm can easily be extended for searching for a record with a matching key value.

Analysis of Sequential Search

Whether the sequential search is carried out on lists implemented as arrays or linked lists or on files, the critical part in performance is the comparison loop, step 2. Obviously, the fewer the number of comparisons, the sooner the algorithm will terminate. The fewest possible comparisons is 1, when the required item is the first item in the list. The maximum number of comparisons is N, when the required item is the last item in the list. Thus if the required item is in position I in the list, I comparisons are required. Hence the average number of comparisons done by sequential search is (1 + 2 + ... + N)/N = (N + 1)/2.


Sequential search is easy to write and efficient for short lists. It does not require sorted data. However, it is disastrous for long lists. There is no way of quickly establishing that the required item is not in the list, or of finding all occurrences of a required item at one place. We can overcome these deficiencies with the next searching method, namely binary search.

Example: Program to search for an item using linear search.

#include <stdio.h>

/* Search for key in the table; returns the 1-based position, or 0. */
int seq_search(int key, int a[], int n)
{
    int i;
    for (i = 0; i < n; i++)
    {
        if (a[i] == key)
            return i + 1;
    }
    return 0;
}

void main()
{
    int i, n, key, pos, a[20];
    printf("Enter the value of n\n");


    scanf("%d", &n);
    printf("Enter n values\n");
    for (i = 0; i < n; i++)
        scanf("%d", &a[i]);
    printf("Enter the item to be searched\n");
    scanf("%d", &key);
    pos = seq_search(key, a, n);
    if (pos == 0)
        printf("Search unsuccessful\n");
    else
        printf("Key found at position = %d\n", pos);
}

Binary Search

The drawbacks of sequential search can be eliminated if it becomes possible to eliminate large portions of the list from consideration in subsequent iterations. The binary search method does just that: it halves the size of the list to search in each iteration. Binary search can be explained simply by the analogy of searching for a page in a book. Suppose you were searching for page 90 in a book of 150 pages. You would first open it at random towards the later half of the book. If the page is less than 90, you would open at a page to the right; if it is greater than 90, you would open at a page to the left, repeating the process till page 90 was found. As you can see, by the first instinctive search, you dramatically reduced the number of pages to search.

Binary search requires sorted data to operate on, since the data may not be contiguous like the pages of a book. We cannot guess which quarter of the data the required item may be in, so we divide the list in the centre each time. We will first illustrate binary search with an example before going on to formulate the algorithm and analysing it.

Example: Use the binary search method to find Scorpio in the following list of 11 zodiac signs.


Aquarius 1
Aries 2
Cancer 3
Capricorn 4
Gemini 5
Leo 6           <- Comparison 1 (Leo vs Scorpio)
Libra 7
Pisces 8
Sagittarius 9   <- Comparison 2 (Sagittarius vs Scorpio)
Scorpio 10      <- Comparison 3 (Scorpio found)
Taurus 11

This is a sorted list of size 11. The first comparison is with the middle element, number 6, i.e. Leo. This eliminates the first 5 elements. The second comparison is with the middle element from 7 to 11, i.e. 9, Sagittarius. This eliminates 7 to 9. The third comparison is with the middle element from 9 to 11, i.e. 10, Scorpio. Thus we have found the target in 3 comparisons; sequential search would have taken 10 comparisons.

We will now formulate the algorithm for binary search.

Algorithm: Binary Search

This represents the binary search method to find a required item in a list sorted in increasing order.

INPUT: Sorted LIST of size N, Target value T
OUTPUT: Position of T in the LIST = I

BEGIN
1. MAX = N


   MIN = 1
   FOUND = false
2. WHILE (FOUND is false) AND (MAX >= MIN)
   2.1 MID = (MAX + MIN) DIV 2
   2.2 IF T = LIST[MID]
           I = MID
           FOUND = true
       ELSE IF T < LIST[MID]
           MAX = MID - 1
       ELSE
           MIN = MID + 1
END

It is recommended that the student apply this algorithm to some examples.

Analysis of Binary Search: In general, the binary search method needs no more than floor(log2 n) + 1 comparisons. This implies that for an array of a million entries, only about twenty comparisons will be needed. Contrast this with sequential search, which on the average needs n/2 comparisons. The condition (MAX >= MIN) is necessary to ensure that step 2 terminates even in the case that the required element is not present. Consider the example of the Zodiac signs. Suppose the 10th item was Solar (an imaginary Zodiac sign) and we searched for Scorpio. Then at that point we would have

    MID = 10, MAX = 11, MIN = 9

and from 2.2 (Scorpio < Solar) get MAX = MID - 1 = 9.


In the next iteration we get (2.1) MID = (9 + 9) DIV 2 = 9; (2.2) since Scorpio > Sagittarius, MIN = MID + 1 = 10. Since MAX < MIN, the loop terminates, and since FOUND is still false, we conclude that the target was not found.

In the binary search method just described, it is always the key in the middle of the list currently being examined that is used for comparison. The splitting of the list can be illustrated through a binary decision tree in which the value of a node is the index of the key being tested. Suppose there are 31 records; then the first key compared is at location 16 of the list, since (1 + 31)/2 = 16. If the key is less than the key at location 16, then location 8 is tested, since (1 + 15)/2 = 8; if the key is greater than the key at location 16, then location 24 is tested. The binary tree describing this process is shown in the figure below.

Illustrations of C Programs

Q2. Illustrate how to implement a deque using a circular linked list. Answer:

#include <stdio.h>
#include <conio.h>
#include <alloc.h>

struct node
{
    int data ;
    struct node *link ;
} ;


struct dqueue
{
    struct node *front ;
    struct node *rear ;
} ;

void initdqueue ( struct dqueue * ) ;
void addqatend ( struct dqueue *, int item ) ;
void addqatbeg ( struct dqueue *, int item ) ;
int delqatbeg ( struct dqueue * ) ;
int delqatend ( struct dqueue * ) ;
void display ( struct dqueue ) ;
int count ( struct dqueue ) ;
void deldqueue ( struct dqueue * ) ;

void main( )
{
    struct dqueue dq ;
    int i, n ;

    clrscr( ) ;
    initdqueue ( &dq ) ;

    addqatend ( &dq, 11 ) ;
    addqatbeg ( &dq, 10 ) ;
    addqatend ( &dq, 12 ) ;
    addqatbeg ( &dq, 9 ) ;
    addqatend ( &dq, 13 ) ;
    addqatbeg ( &dq, 8 ) ;
    addqatend ( &dq, 14 ) ;
    addqatbeg ( &dq, 7 ) ;

    display ( dq ) ;
    n = count ( dq ) ;
    printf ( "\nTotal elements: %d", n ) ;

    i = delqatbeg ( &dq ) ;
    printf ( "\nItem extracted = %d", i ) ;
    i = delqatbeg ( &dq ) ;
    printf ( "\nItem extracted = %d", i ) ;


    i = delqatbeg ( &dq ) ;
    printf ( "\nItem extracted = %d", i ) ;
    i = delqatend ( &dq ) ;
    printf ( "\nItem extracted = %d", i ) ;

    display ( dq ) ;
    n = count ( dq ) ;
    printf ( "\nElements Left: %d", n ) ;

    deldqueue ( &dq ) ;
    getch( ) ;
}

/* initializes elements of structure */
void initdqueue ( struct dqueue *p )
{
    p -> front = p -> rear = NULL ;
}

/* adds item at the end of dqueue */
void addqatend ( struct dqueue *p, int item )
{
    struct node *temp ;

    temp = ( struct node * ) malloc ( sizeof ( struct node ) ) ;
    temp -> data = item ;
    temp -> link = NULL ;

    if ( p -> front == NULL )
        p -> front = temp ;
    else
        p -> rear -> link = temp ;
    p -> rear = temp ;
}

/* adds item at beginning of dqueue */
void addqatbeg ( struct dqueue *p, int item )
{
    struct node *temp ;

    temp = ( struct node * ) malloc ( sizeof ( struct node ) ) ;
    temp -> data = item ;


    temp -> link = NULL ;

    if ( p -> front == NULL )
        p -> front = p -> rear = temp ;
    else
    {
        temp -> link = p -> front ;
        p -> front = temp ;
    }
}

/* deletes item from beginning of dqueue */
int delqatbeg ( struct dqueue *p )
{
    struct node *temp = p -> front ;
    int item ;

    if ( temp == NULL )
    {
        printf ( "\nQueue is empty." ) ;
        return 0 ;
    }
    else
    {
        item = temp -> data ;
        p -> front = temp -> link ;
        free ( temp ) ;
        if ( p -> front == NULL )    /* queue became empty */
            p -> rear = NULL ;
        return ( item ) ;
    }
}

/* deletes item from end of dqueue */
int delqatend ( struct dqueue *p )
{
    struct node *temp, *rleft = NULL, *q ;
    int item ;

    temp = p -> front ;
    if ( p -> rear == NULL )
    {
        printf ( "\nQueue is empty." ) ;
        return 0 ;


    }
    else
    {
        while ( temp != p -> rear )
        {
            rleft = temp ;
            temp = temp -> link ;
        }

        q = p -> rear ;
        item = q -> data ;
        free ( q ) ;

        if ( q == p -> front )       /* only node: queue is now empty */
            p -> front = p -> rear = NULL ;
        else
        {
            p -> rear = rleft ;
            p -> rear -> link = NULL ;
        }
        return ( item ) ;
    }
}

/* displays the queue */
void display ( struct dqueue dq )
{
    struct node *temp = dq.front ;

    printf ( "\nfront -> " ) ;
    while ( temp != NULL )
    {
        if ( temp -> link == NULL )
        {
            printf ( "\t%d", temp -> data ) ;
            printf ( " <- rear" ) ;
        }
        else
            printf ( "\t%d", temp -> data ) ;
        temp = temp -> link ;
    }
    printf ( "\n" ) ;
}

/* counts the number of items in dqueue */
int count ( struct dqueue dq )
{
    int c = 0 ;
    struct node *temp = dq.front ;


    while ( temp != NULL )
    {
        temp = temp -> link ;
        c++ ;
    }
    return c ;
}

/* deletes the queue */
void deldqueue ( struct dqueue *p )
{
    struct node *temp ;

    if ( p -> front == NULL )
        return ;

    while ( p -> front != NULL )
    {
        temp = p -> front ;
        p -> front = p -> front -> link ;
        free ( temp ) ;
    }
}

Q3. a. Compare and contrast DFS, BFS and DFS+ID approaches. b. Discuss how a Splay Tree differs from a Binary Tree? Justify your answer with an appropriate example.

Answer:

a. Compare and contrast DFS, BFS and DFS+ID approaches

BFS

Description:
1. A simple strategy in which the root is expanded first, then all the root's successors, then their successors, level by level.
2. Nodes are expanded in order of increasing depth.

DFS

1. DFS progresses by expanding the first child node of the search tree that appears, going deeper and deeper until a goal node is found or until it hits a node that has no children. The search then backtracks, returning to the most recent node it hasn't finished exploring.
2. Nodes are expanded in depth-first order: the deepest unexpanded node first.

DFS+ID

It is a search strategy resulting from combining BFS and DFS, taking the advantages of each: the completeness and optimality of BFS and the modest memory requirements of DFS. IDS works by looking for the best search depth d: it starts with depth limit 0 and performs a depth-limited DFS, and if the search fails it increases the depth limit by 1 and tries the search


again with depth 1, and so on (first d = 0, then 1, then 2, and so on) until a depth d is reached where a goal is found.

Completeness:

BFS: It is easy to see that breadth-first search is complete: it visits every level, so given that the branching factor b is finite, at some depth d it will find a solution.

DFS: DFS is not complete. To convince yourself, consider that the search may start expanding the left subtree of the root along a very long (possibly infinite) path when a different choice near the root could lead to a solution. If that left subtree has no solution and is unbounded, the search will continue going deeper forever. In this case we say that DFS is not complete.

IDS: IDS, like BFS, is complete when the branching factor b is finite.

Optimality:

BFS: Breadth-first search is optimal only when all actions have the same cost.

DFS: Consider the scenario where there is more than one goal node and the search first expands the left subtree of the root, which contains a solution at a very deep level, while the right subtree of the root has a solution near the root. It is not guaranteed that the first goal found is the optimal one, so we conclude that DFS is not optimal.

IDS: IDS, like BFS, is optimal when the steps are of the same cost.

Conclusion:

BFS: Space complexity is a bigger problem for BFS than its exponential execution time.

DFS: DFS may suffer from non-termination when the length of a path in the search tree is infinite, so we perform DFS only to a limited depth, which is called depth-limited search.

IDS:
1. We can conclude that IDS is a hybrid search strategy between BFS and DFS, inheriting their advantages.
2. IDS is faster than BFS and DFS.
3. It is said that "IDS is the preferred uninformed search method when there is a large search space and the depth of the solution is not known".

b. Discuss how a Splay Tree differs from a Binary Tree? Justify your answer with an appropriate example.

A splay tree is a self-adjusting binary search tree for placing and locating files (called records or keys) in a database. The algorithm finds data by repeatedly making choices at decision points called nodes, and after every access the accessed node is moved to the root through a sequence of rotations called splaying, so that frequently accessed keys stay near the root. A binary search tree, by contrast, is a method of placing and locating files (records or keys) in a database, especially when all the data is known to be in random access memory (RAM); it finds data by repeatedly dividing the number of ultimately accessible records in half until only one remains, but its shape is fixed by the insertion order and it does not reorganize itself on access. For example, if the key 20 is looked up repeatedly, a splay tree moves 20 to the root on the first lookup, so later lookups find it immediately, whereas an ordinary binary search tree retraces the same root-to-node path on every lookup.

Q4. Write notes on:


i. Threaded lists ii. Dynamic Hashing Answer: i. Threaded lists: A threaded list is a list in which additional linkage structures, called threads, have been added to provide for traversals in special orders. This permits bounded workspace, i.e. read-only traversals along the direction provided by the threads. It does presuppose that the list and any sublists are not recursive and further that no sublist is shared.

ii. Dynamic Hashing: The number of buckets is not fixed, as it is with static hashing, but grows or shrinks as needed.

- The hash function: keys are mapped to an arbitrarily long pseudorandom bit string (often called a "signature" or "pseudokey").
  o Only the first "so many" bits will be used,
  o but we won't know in advance how many.
  o A possible hash function: Key % (a prime), where the prime is safely above the maximum file size.
- If we add a record:
  o either it fits on the page it maps to (this should happen most often),
  o or the page overflows and must be split.
- The file starts with a single bucket. Once the bucket is full and a new record is to be inserted, the bucket splits into 2 buckets and all values are re-inserted into the appropriate bucket.
- On the first bucket split, all values whose hash value starts with a 0 are inserted into one bucket, while all values whose hash value starts with a 1 are inserted into the other bucket.
- At this point a binary tree structure, called a directory, is built, which has 2 types of nodes:
  o Internal nodes: these guide the search. Each has a left pointer corresponding to a 0 bit and a right pointer corresponding to a 1 bit.
  o Leaf nodes (or bucket pointers): these hold a bucket address.
- We will "fake out" the situation by declaring a record/object type that has 3 pointer fields:
  o zero: holds either null or a real pointer to another internal directory node,
  o bucket: holds either null or a real pointer to a bucket,
  o one: holds either null or a real pointer to another internal directory node.


Thus, a true "leaf" node will look like this:

    zero = null, bucket = real address, one = null

whereas a true "internal" node will look like this:

    zero = address a, bucket = null, one = address b

where either a or b can be null, but both cannot be null at the same time! (Usually both will be non-null; for some "badly shaped" trees, one may be null.)

Algorithm for the Search Procedure for Dynamic Hashing:

Let h be the hash value (pseudokey of 0s and 1s) of the record:

    h <--- hash value of record
    t <--- root of the directory
    i <--- 1

While t is an internal node of the directory do
begin
    if the ith bit of h is a zero
        then t <--- left son of t
        else t <--- right son of t ;
    i <--- i + 1
end ;

Search the bucket whose address is in node t, continuing the search through chained overflow buckets if necessary;
Return null if not found, the bucket pointer where found otherwise.


(Figure: worked example of dynamic hashing, not reproduced here.)

Spring 2012
Master of Computer Application (MCA) – Semester II
MC0068 – Data Structures using 'C' – 4 Credits (Book ID: B0701 & B0702)
Assignment Set – 1 (40 Marks)


