
Chapter 1

Review
1.1 CONCEPT OF ALGORITHM
A common man's belief is that a computer can do anything and everything that he imagines. It is very difficult to make people realize that it is not really the computer but the man behind the computer who does everything. In the modern internet world, man feels that just by entering what he wants to search into the computer he can get the information he desires. He believes that this is done by the computer. A common man seldom understands that a man-made procedure called search has done the entire job, and that the only support provided by the computer is the execution speed and the organized storage of information.

In the above instance, the designer of the information system should know what one frequently searches for. He should make a structured organization of all those details to store in the memory of the computer. Based on the requirement, the right information is brought out. This is accomplished through a set of instructions created by the designer of the information system to search for the right information matching the requirement of the user. This set of instructions is termed a program. It should be evident by now that it is not the computer which generates the program automatically; it is the designer of the information system who has created it. Thus, the program is the one which, through the medium of the computer, executes to perform all the activities desired by the user. This implies that programming a computer is more important than the computer itself while solving a problem using a computer, and this part of programming has to be done by the man behind the computer.

Even at this stage, one should not quickly jump to the conclusion that coding is programming. Coding is perhaps the last stage in the process of programming. Programming involves various activities from the stage of conceiving the problem up to the stage of creating a model to solve the problem. The formal representation of this model as a sequence of instructions is called an algorithm, and a coded algorithm in a specific computer language is called a program.

One can now see that the focus has shifted from the computer to computer programming, and then to creating an algorithm. This is algorithm design, the heart of problem solving.

1.2 CHARACTERISTICS OF AN ALGORITHM
Let us try to present the scenario of a man brushing his own teeth (natural denture) as an algorithm, as follows:

Step 1. Take the brush
Step 2. Apply the paste
Step 3. Start brushing
Step 4. Rinse
Step 5. Wash
Step 6. Stop

If one goes through these six steps without being aware of the statement of the problem, he could possibly feel that this is an algorithm for cleaning a toilet. This is because of several ambiguities in comprehending every step. Step 1 may imply a tooth brush, a paint brush, a toilet brush, and so on. Such ambiguity arises from the wording of the algorithmic step; thus every step has to be made unambiguous. An unambiguous step is called a definite instruction. Even if step 2 is rewritten as 'apply the tooth paste' to eliminate this ambiguity, conflicts such as where to apply the tooth paste and where the source of the tooth paste is still need to be resolved, since the manner of applying the toothpaste is not mentioned. Although unambiguous, such unrealizable steps cannot be included as algorithmic instructions, because they are not effective.

The definiteness and effectiveness of an instruction imply the successful termination of that instruction. However, these two alone may not be sufficient to guarantee the termination of the algorithm. Therefore, while designing an algorithm, care should be taken to provide proper termination.

Thus, every algorithm should have the following five characteristic features:
1) Input
2) Output
3) Definiteness
4) Effectiveness
5) Termination

Therefore, an algorithm can be defined as a sequence of definite and effective instructions which terminates with the production of correct output from the given input.

In other words, viewed a little more formally, an algorithm is a step-by-step formalization of a mapping function that maps an input set onto an output set. The problem of writing down the correct algorithm for the above problem of brushing the teeth is left to the reader.

1.3 HOW TO DEVISE THE ALGORITHMS
The process of devising an algorithm is both an art and a science. This is one part that cannot be automated fully. Given a problem description, one has to think of converting it into a series of steps which, when executed in a given sequence, solve the problem. To do this, one has to be familiar with the problem domain as well as the computer domain. This aspect may never be taught fully and, most often, given a problem description, how a person proceeds to convert it into an algorithm becomes a matter of his 'style'; no firm rules are applicable here. For the purpose of clarity in understanding, let us consider the following example.

Problem: Finding the largest value among n >= 1 numbers.
Input: the value of n and the n numbers.
Output: the largest value.
Steps:
1. Let the value of the first number be the largest value, denoted by BIG.
2. Let R denote the number of remaining numbers; R = n-1.
3. If R != 0, the list is still not exhausted; therefore look at the next number, called NEW.
4. Now R becomes R-1.
5. If NEW is greater than BIG, then replace BIG by the value of NEW.
6. Repeat steps 3 to 5 until R becomes zero.
7. Print BIG.
8. Stop.
End of algorithm
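As a concrete illustration, the steps above can be translated almost directly into C. This is only a sketch; the variable names big and new_val and the decision to read the numbers one by one with scanf are our own choices, not prescribed by the text.

#include <stdio.h>

int main(void)
{
    int n;
    scanf("%d", &n);              /* the value of n                  */
    int big, new_val;
    scanf("%d", &big);            /* step 1: first number is BIG     */
    int r = n - 1;                /* step 2: R remaining numbers     */
    while (r != 0) {              /* step 3: list not yet exhausted  */
        scanf("%d", &new_val);    /* look at the next number NEW     */
        r = r - 1;                /* step 4                          */
        if (new_val > big)        /* step 5                          */
            big = new_val;
    }                             /* step 6: repeat until R == 0     */
    printf("%d\n", big);          /* step 7: print BIG               */
    return 0;
}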

1.4 HOW TO VALIDATE THE ALGORITHMS
Once an algorithm has been devised, it becomes necessary to show that it works, i.e., that it computes the correct answer for all possible legal inputs. One simple way is to code it into a program. However, converting an algorithm into a program is a time-consuming process. Hence, it is essential to be reasonably sure about the effectiveness of the algorithm before it is coded. This process, at the algorithm level, is called "validation". Several mathematical and other empirical methods of validation are available. Proving the validity of an algorithm is a fairly complex process, and most often a complete theoretical validation, though desirable, may not be provided. Alternatively, algorithm segments which have been proved elsewhere may be used, and the overall working algorithm may be empirically validated for several test cases. Such methods, although they suffice in most cases, may lead to the presence of unidentified bugs or side effects later on.

1.5 HOW TO TEST THE ALGORITHMS
If there is more than one possible way of solving a problem, then one may think of more than one algorithm for the same problem. Hence, it is necessary to know in what domains these algorithms are applicable. The data domain is an important aspect to be known in the field of algorithms. Once we have more than one algorithm for a given problem, how do we choose the best among them? The solution is to devise some data sets and determine a performance profile for each of the algorithms. A best-case data set can be obtained by having all distinct data in the set. The ultimate test of an algorithm is that programs based on the algorithm should run satisfactorily.

Testing a program really involves two phases: a) debugging and b) profiling. Debugging is the process of executing programs with sample data sets to determine whether the results obtained are satisfactory. When unsatisfactory results are generated, suitable changes are made in the program to get the desired results. On the other hand, profiling or performance measurement is the process of executing a correct program on different data sets to measure the time and space it takes to compute the results. However, it has been pointed out that "debugging can only indicate the presence of errors but not their absence", i.e., a program that yields unsatisfactory results with a sample data set is definitely faulty, but the fact that a program produces the desired results with one or more data sets does not prove that the program is correct. Even after it produces satisfactory results with, say, 10000 data sets, its results may be faulty with the 10001st set. In order to actually prove that a program is perfect, a process called "proving" is taken up. Here, the program is analytically proved to be correct, and in such cases it is bound to yield perfect results for all possible sets of data.

1.6 ALGORITHMIC NOTATIONS
In this section we present the pseudo code that we use throughout the book to describe algorithms.

The pseudo code used resembles PASCAL and C language control structures; hence, it is expected that the reader is aware of PASCAL/C. In any case, the reader should preferably know C in order to practically test the algorithms in this course work. However, for the sake of completeness, we present the commonly employed control constructs used in the algorithms.

1. A conditional statement has the following form:
   If <condition> then
       Block 1
   Else
       Block 2
   If end
This pseudo code executes Block 1 if the condition is true; otherwise Block 2 is executed.

2. The two types of loop structures are counter based and condition based. The counter-based loop is:
   For variable = value1 to value2 do
       Block
   For end
Here the block is executed for all values of the variable from value1 to value2.

There are two types of conditional looping, the while type and the repeat type:
   While (condition) do
       Block
   While end
Here the block is executed as long as the condition is true.
   Repeat
       Block
   Until <condition>
Here the block is executed as long as the condition is false. It may be observed that the block is executed at least once in the repeat type.
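For readers who wish to test the algorithms in C, the four constructs map onto C statements roughly as sketched below. The correspondence (and the toy values used) is our own illustration and is not part of the original notation.

#include <stdio.h>

int main(void)
{
    int condition = 1, v;

    /* If <condition> then Block 1 Else Block 2 If end */
    if (condition) printf("block 1\n"); else printf("block 2\n");

    /* For variable = value1 to value2 do Block For end */
    for (v = 1; v <= 3; v++) printf("for body, v = %d\n", v);

    /* While (condition) do Block While end */
    v = 3;
    while (v > 0) { printf("while body\n"); v--; }

    /* Repeat Block Until <condition>  (body executes at least once) */
    v = 0;
    do { printf("repeat body\n"); v++; } while (!(v >= 1));

    return 0;
}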

SUMMARY
In this chapter, the concept of an algorithm is presented and the properties of an algorithm are given. The concept of designing an algorithm is explained with a specific example. The concepts of validating and testing the devised algorithms are also presented.

EXERCISE
1. __________ is the process of executing a correct program on data sets and measuring the time and space it takes to compute the results.
2. Define algorithm. What are its properties?
3. What is debugging and what is profiling?
4. One of the properties of an algorithm is beauty (true / false).

Chapter 2

Elementary Data Structures
2.1 FUNDAMENTALS
A data structure is a method of organizing data with some sort of relationship, with the intention of enhancing the qualities of the associated algorithms. In this chapter, we review only the non-primitive data structures. The non-primitive data structures can be broadly classified into two types: linear and non-linear data structures. Under linear data structures, arrays, stacks and queues are reviewed, and under non-linear data structures, graphs and trees are reviewed.

2.2 LINEAR DATA STRUCTURES
A data structure in which every data element has exactly two neighbours (adjacent elements), except for two elements which have exactly one neighbour each, is called a linear data structure. Otherwise it is called a non-linear data structure.

2.2.1 Array and its representation
An array is a finite ordered list of data elements of the same type. In order to create an array it is required to reserve an adequate number of memory locations. The allocated memory should be contiguous in nature. The size of the array is finite and fixed as a constant. Some of the important operations related to arrays are:

Creation() -> A[n], array created (an adequate number of contiguous memory locations reserved);
Read(A, i) -> 'e', the element at the ith position;
Write(A, i, e) -> updated array with 'e' at the ith position;
Compare(i, j, relational operator) -> Boolean;
Search(A, e) -> Boolean;

REPRESENTATION OF A SINGLE DIMENSIONAL ARRAY
[Figure: a single dimensional array of six elements 3, 7, -8, 10, 15, 5 with indices 1 to 6, stored at byte addresses 1000, 1002, 1004, 1006, 1008, 1010.]

Any array is associated with a lower bound l and an upper bound u in general. When the array is created, it starts at some base address B. Since the computer is byte addressable, each address holds one byte. If an integer takes 2 bytes for its representation, and if B = 1000 and l = 1, then

the 1st element is at location (1000 + 0)
the 2nd element is at location (1000 + 2)
the 3rd element is at location (1000 + 4)
...
the ith element is at location (1000 + (i-1)*2)

In general, if w is the size of a data element, then the ith element is at location B + (i - l)w, i.e., the address of A[i] = B + (i - l)w.

REPRESENTATION OF A TWO DIMENSIONAL ARRAY
The two dimensional array A[l1..u1][l2..u2] may be interpreted as n = u1 - l1 + 1 rows and m = u2 - l2 + 1 columns, i.e., each row consists of u2 - l2 + 1 elements.

[Figure: an n x m grid with row indices l1, l1+1, ..., u1 and column indices l2, l2+1, ..., u2.]

Although we have an n x m matrix representation, the allocation is not in the form of a matrix but is contiguous in nature, and will be as shown below (in case l1 = 1 and l2 = 1, the indices start at [1, 1]):

(1,1) (1,2) (1,3) ... (1,m) (2,1) (2,2) (2,3) ... (2,m) ... (n,1) (n,2) ... (n,m)

Given the indices (i, j), with l1 = l2 = 1 and word size w, the address can be computed as A[i, j] = B + ((i-1)m + (j-1))w.

In general, if l1 <= i <= u1 and l2 <= j <= u2, then A[i, j] = B + {(i - l1)(u2 - l2 + 1) + (j - l2)}w.

For higher n-dimensional arrays, the address of A(i1, i2, i3, ..., in), where l1 <= i1 <= u1, l2 <= i2 <= u2, ..., ln <= in <= un, can be computed as

A(i1, i2, i3, ..., in) = B + { (i1 - l1)(u2 - l2 + 1)(u3 - l3 + 1)...(un - ln + 1)
                             + (i2 - l2)(u3 - l3 + 1)(u4 - l4 + 1)...(un - ln + 1)
                             + (i3 - l3)(u4 - l4 + 1)(u5 - l5 + 1)...(un - ln + 1)
                             + ...
                             + (i(n-1) - l(n-1))(un - ln + 1)
                             + (in - ln) } w

The algorithm thus designed to compute the row-major address is as follows.

Algorithm: Address Computation (Row Major)
Input: (1) n, the dimension
       (2) l1, l2, l3, ..., ln, the n lower limits
       (3) u1, u2, u3, ..., un, the n upper limits
       (4) w, the word size
       (5) i1, i2, i3, ..., in, the values of the subscripts
       (6) B, the base address
Output: A, the address of the element at (i1, i2, i3, ..., in)
Method:
       A = B + w * SUM for j = 1 to n of (ij - lj) * Pj
       where Pj = PRODUCT for k = j+1 to n of (uk - lk + 1), and Pn = 1
Algorithm ends.
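A C sketch of this row-major address computation is given below. The function name, the use of plain arrays for the limits and subscripts, and the sample array in main are our own choices.

#include <stdio.h>

/* Row-major address of element idx[0..n-1] in an n-dimensional array
   with lower limits l[], upper limits u[], word size w, base address B. */
long row_major_address(int n, const int l[], const int u[],
                       const int idx[], long w, long B)
{
    long addr = B;
    long p = 1;                       /* P(n) = 1                          */
    for (int j = n - 1; j >= 0; j--) {
        addr += (idx[j] - l[j]) * p * w;
        p *= (u[j] - l[j] + 1);       /* build P(j-1) for the next round   */
    }
    return addr;
}

int main(void)
{
    /* A[1..3][1..4] of 2-byte words starting at base address 1000 */
    int l[] = {1, 1}, u[] = {3, 4}, idx[] = {2, 3};
    printf("%ld\n", row_major_address(2, l, u, idx, 2, 1000));  /* prints 1012 */
    return 0;
}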

2.2.2 Stacks

A stack is an ordered list in which all insertions and deletions are made at one end, called the top. It is a linear data structure which is open for operations at only one end (both insertions and deletions are defined at only one end). A stack works on the strategy last-in-first-out (LIFO). Stacks are generally used to remember things in reverse, and they find major use in backtracking approaches. One natural example of a stack which arises in computer programming is the processing of procedure calls and their terminations.

Some of the functions related to a stack are:
Create() -> S, stack created
Insertion(S, e) -> updated S
Deletion(S) -> S, updated
Top(S) -> e
Isempty(S) -> Boolean
Isfull(S) -> Boolean
Destroy(S)

It has to be noted that, with respect to a stack, the insertion and deletion operations are in general called the PUSH and POP operations respectively. Following are the algorithms for some functions of a stack.

Algorithm: Create
Output: S, stack created
Method:
  Declare S[SIZE]                 // Array of size = SIZE
  Declare and initialize T = 0    // Top pointer to remember the number of elements
Algorithm ends

Algorithm: Isempty
Input: S, stack
Output: Boolean
Method:
  If (T == 0) Return (yes) Else Return (no) If end
Algorithm ends

Algorithm: Isfull
Input: S, stack
Output: Boolean
Method:
  If (T == SIZE) Return (yes) Else Return (no) If end
Algorithm ends

Algorithm: Push
Input: (1) S, stack; (2) e, element to be inserted; (3) SIZE, size of the stack; (4) T, the top pointer
Output: (1) S, updated; (2) T, updated
Method:
  If (Isfull(S)) then
    Print ('stack overflow')
  Else
    T = T + 1
    S[T] = e
  If end
Algorithm ends
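A compact C sketch of an array-based stack implementing Create, Isempty, Isfull, Push and Pop is given below (the Pop algorithm of the text appears right after this sketch). The fixed SIZE, the global variables, 1-based indexing and the error return value are our own simplifications.

#include <stdio.h>
#define SIZE 100

int S[SIZE + 1];          /* S[1..SIZE]; index 0 unused to match the text */
int T = 0;                /* top pointer: number of elements currently held */

int isempty(void) { return T == 0;    }
int isfull(void)  { return T == SIZE; }

void push(int e)
{
    if (isfull()) { printf("stack overflow\n"); return; }
    S[++T] = e;
}

int pop(void)             /* returns the popped element, or -1 on an empty stack */
{
    if (isempty()) { printf("stack is empty\n"); return -1; }
    return S[T--];
}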

Algorithm: Pop
Input: (1) S, stack; (2) T, the top pointer
Output: (1) S, updated; (2) T, updated; (3) 'e', the element popped
Method:
  If (Isempty(S)) then
    Print ('stack is empty')
  Else
    e = S[T]
    T = T - 1
  If end
Algorithm ends

2.2.3 Queues

A queue is an ordered list in which all insertions take place at one end, called the rear end, while all deletions take place at the other end, called the front end. A queue is a linear data structure which works on the strategy first-in-first-out (FIFO). Unlike stacks, queues also arise quite naturally in the computer solution of many problems; perhaps the most common occurrence of a queue in computer applications is in the scheduling of jobs.

A minimal set of useful operations on a queue includes the following:
Create() -> Q, queue created
Insertion(Q, e) -> updated Q
Deletion(Q) -> Q, updated; e, the element deleted
Front(Q) -> e
Back(Q) -> e
Isempty(Q) -> Boolean
Isfull(Q) -> Boolean
Destroy(Q)

Following are the algorithms for some functions of a queue.

Algorithm: Create
Output: Q, queue created
Method:
  Declare Q[SIZE]                       // Array with size = SIZE
  Declare and initialize F = 0, R = 0   // Front and Rear pointers to keep track of the front and rear elements
Algorithm ends

Algorithm: Isempty
Input: Q, queue
Output: Boolean
Method:
  If (F == 0) Return (yes) Else Return (no) If end
Algorithm ends

Algorithm: Isfull
Input: Q, queue
Output: Boolean
Method:
  If (R == SIZE) Return (yes) Else Return (no) If end
Algorithm ends

Algorithm: Front
Input: Q, queue
Output: the element at the front
Method:
  If (Isempty(Q)) Print ('no front element') Else Return (Q[F]) If end
Algorithm ends

Algorithm: Rear
Input: Q, queue
Output: the element at the rear
Method:
  If (Isempty(Q)) Print ('no back element') Else Return (Q[R]) If end
Algorithm ends

Algorithm: Insertion
Input: (1) Q, queue; (2) e, the element to be inserted; (3) SIZE, the size of the queue; (4) F, the front pointer; (5) R, the rear pointer
Output: (1) Q, updated; (2) F, updated; (3) R, updated
Method:
  If (Isfull(Q)) then
    Print ('overflow')
  Else
    R = R + 1
    Q[R] = e
    If (F == 0) F = 1 If end
  If end
Algorithm ends

Algorithm: Deletion
Input: (1) Q, queue; (2) F, the front pointer; (3) R, the rear pointer
Output: (1) Q, updated; (2) F, updated; (3) R, updated; (4) e, the element deleted
Method:
  If (Isempty(Q)) then
    Print ('Queue is empty')
  Else
    e = Q[F]
    If (F == R)
      F = R = 0
    Else
      F = F + 1
    If end
  If end
Algorithm ends

However, the linear queue makes less than full utilization of memory, i.e., the 'return (Isfull(Q)) = yes' does not necessarily imply that there are n elements in the queue. This can be overcome by using an alternate queue called the circular queue.
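Before moving on to the circular queue, here is a small C sketch of the linear queue operations above; the fixed SIZE, the global pointers and the 1-based indexing mirror the pseudocode and are our own choices.

#include <stdio.h>
#define SIZE 100

int Q[SIZE + 1];
int F = 0, R = 0;            /* front and rear pointers; 0 means empty */

int isempty(void) { return F == 0;    }
int isfull(void)  { return R == SIZE; }

void insertion(int e)
{
    if (isfull()) { printf("overflow\n"); return; }
    Q[++R] = e;
    if (F == 0) F = 1;       /* first element becomes the front */
}

int deletion(void)           /* returns the deleted element, or -1 if empty */
{
    if (isempty()) { printf("Queue is empty\n"); return -1; }
    int e = Q[F];
    if (F == R) F = R = 0;   /* queue became empty: reset both pointers */
    else        F = F + 1;
    return e;
}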

2.2.4 Circular Queue

A circular queue uses the same conventions as the linear queue. In order to add an element, it is necessary to move the rear pointer one position clockwise; similarly, it is necessary to move the front pointer one position clockwise each time a deletion is made. It should also be noted that the front pointer always points one position counterclockwise from the first element in the queue. The algorithms for Create(), Isempty(), Isfull(), Front() and Rear() are the same as those of the linear queue. The algorithms for the other functions are:

Algorithm: Insertion
Input: (1) CQ, circular queue; (2) e, the element to be inserted; (3) SIZE, the size of the circular queue; (4) F, the front pointer; (5) R, the rear pointer
Output: (1) CQ, updated; (2) F, updated; (3) R, updated
Method:
  If (Isfull(CQ)) then
    Print ('overflow')
  Else
    R = R mod SIZE + 1
    CQ[R] = e
    If (Isempty(CQ)) F = 1 If end
  If end
Algorithm ends

Algorithm: Deletion
Input: (1) CQ, circular queue; (2) SIZE, the size of the circular queue; (3) F, the front pointer; (4) R, the rear pointer
Output: (1) CQ, updated; (2) F, updated; (3) R, updated; (4) e, the element deleted
Method:
  If (Isempty(CQ)) then
    Print ('Queue is empty')
  Else
    e = CQ[F]
    If (F == R)
      F = R = 0
    Else
      F = F mod SIZE + 1
    If end
  If end
Algorithm ends

2.3 NON-LINEAR DATA STRUCTURES

A data structure which is not a linear data structure is said to be a non-linear data structure. Meaning that, a data structure is said to be non-linear if its data elements are allowed to have more than two adjacent elements; a data element can have any number of relations with other data elements, and there is no upper limit placed on this number. The number of elements adjacent to a given element is called the 'arity' or the degree of the element (degree = number of relations the element has with others). Some general examples are: friendship among classmates, family structure, the nervous system of the human body, and a university with every component being a sub-component and the relations among them being non-linear. A few of the non-linear data structures specific to computer science are graphs and trees.

2.3.1 Introduction to Graph Theory

A graph G = (V, E) consists of a set of objects V = {v1, v2, ...} whose elements are called vertices, and another set E = {e1, e2, ...} whose elements are called edges. Each edge ek in E is identified with an unordered pair (vi, vj) of vertices; the vertices vi, vj associated with edge ek are called the end vertices of ek. A graph can be used to represent almost any physical situation involving discrete objects and a relationship among them. Because of its inherent simplicity, graph theory has a very wide range of applications in engineering, in the physical, social and biological sciences, in linguistics, and in numerous other areas.

The most common representation of a graph is by means of a diagram (Fig. 2.1), in which the vertices are represented as points and each edge as a line segment joining its end vertices. In drawing a graph, it is immaterial whether the lines are drawn straight or curved, long or short: what is important is the incidence between the edges and vertices. An edge having the same vertex as both its end vertices, such as e1 in Fig. 2.1, is called a self-loop. There may be more than one edge associated with a given pair of vertices, for example e4 and e5 in Fig. 2.1; such edges are referred to as parallel edges. A graph that has neither self-loops nor parallel edges is called a simple graph; otherwise it is called a general graph.

2.3.1.1 Finite and Infinite Graphs

Although in the definition of a graph neither the vertex set V nor the edge set E need be finite, in most of the theory and almost all applications these sets are finite. A graph with a finite number of vertices as well as a finite number of edges is called a finite graph; otherwise, it is an infinite graph.

2.3.1.2 Incidence and Degree

When a vertex vi is an end vertex of some edge ej, vi and ej are said to be incident with (on or to) each other. In Fig. 2.1, for example, edges e2, e6, and e7 are incident with vertex v4. Two nonparallel edges are said to be adjacent if they are incident on a common vertex.

In other words, two vertices are said to be adjacent if they are the end vertices of the same edge. In Fig. 2.1, for example, v4 and v5 are adjacent, but v1 and v4 are not. Similarly, two adjacent edges are said to be in series if their common vertex is of degree two; in Fig. 2.2, the two edges incident on v1 are in series.

The number of edges incident on a vertex vi, with self-loops counted twice, is called the degree, d(vi), of vertex vi. In Fig. 2.1, for example, d(v1) = d(v3) = d(v4) = 3, d(v2) = 4, and d(v5) = 1. Since each edge contributes two degrees, the sum of the degrees of all vertices in G is twice the number of edges in G.

2.3.1.3 Isolated Vertex, Pendent Vertex and Null Graph

A vertex having no incident edge is called an isolated vertex; in other words, isolated vertices are vertices with zero degree. Vertices v4 and v7 in Fig. 2.2, for example, are isolated vertices. A vertex of degree one is called a pendent vertex or an end vertex; vertex v3 in Fig. 2.2 is a pendent vertex.

Fig. 2.2 Graph containing isolated vertices, series edges and a pendent vertex

In the definition of a graph G = (V, E), it is possible for the edge set E to be empty, although the vertex set V must not be empty; without any vertices, by definition, there is no graph. In other words, a graph must have at least one vertex. A graph without any edges is called a null graph; every vertex in a null graph is an isolated vertex. A null graph of six vertices is shown in Fig. 2.3.

Fig. 2.3 Null graph of six vertices

2.3.1.4 Walk, Path and Connected Graph

A "walk" is a sequence of alternating vertices and edges, starting with a vertex and ending with a vertex, with any number of revisits to vertices and retracings of edges. If a walk has the restriction that no vertex is repeated and no edge is retraced, it is called a "path". If there is a walk to every vertex from any other vertex of the graph, then it is called a "connected" graph.

2.3.2 Matrix Representation of Graphs

Although a pictorial representation of a graph is very convenient for a visual study, other representations are better for computer processing. A matrix is a convenient and useful way of representing a graph to a computer. Matrices lend themselves easily to mechanical manipulations. Besides, many known results of matrix algebra can be readily applied to study the structural properties of graphs from an algebraic point of view. In many applications of graph theory, such as in electrical network analysis and operations research, matrices also turn out to be the natural way of expressing the problem.

2.3.2.1 Adjacency Matrix Representation

Since the edges are the relationship between two vertices, the graph can be represented by a matrix: two vertices are adjacent if there is an edge connecting them. Consider a 2D matrix X of size |V| x |V|, where |V| is the number of vertices in Fig. 2.4; each row and each column corresponds to a vertex of the graph, and

    X[i, j] = 1, if (vi, vj) belongs to E
            = 0, otherwise.

Thus, for the graph G shown in Fig. 2.4 the matrix is:

        V1  V2  V3  V4    V5  V6
  V1     0   1   0   0     1   0
  V2     1   0   1   1     1   0
  V3     0   1   0   0     0   0
  V4     0   1   0   0     1   1(2)
  V5     1   1   0   1     0   0
  V6     0   0   0   1(2)  0   0

The above matrix X is called an adjacency matrix. This matrix uniquely represents the graph; it is indeed possible to reconstruct the graph back from the matrix. An adjacency matrix is a symmetric matrix. A non-zero entry in the matrix says that the vertex corresponding to the row and the vertex corresponding to the column are adjacent, and an entry greater than one (shown in parentheses above) records parallel edges. If a diagonal element is non-zero then the graph has a self-loop, and the respective row index gives the vertex with the self-loop. The sum of the elements of the ith row gives the degree of the vertex vi; while calculating the row sum of the adjacency matrix of a non-simple graph, a weightage of 1 has to be given if the diagonal cell is non-zero, and additional weightage for every parallel edge. A row of all zeros represents an isolated vertex.

2.3.2.2 Incidence Matrix

Let G be a graph with n vertices, e edges, and no self-loops. Define an n by e matrix A = [aij], whose n rows correspond to the n vertices and whose e columns correspond to the e edges, as follows: the matrix element

    Aij = 1, if the jth edge ej is incident on the ith vertex vi
        = 0, otherwise.

Such a matrix A is called the vertex-edge incidence matrix, or simply the incidence matrix. Matrix A for a graph G is sometimes also written as A(G). A graph and its incidence matrix are shown in Fig. 2.4 and Fig. 2.5 respectively.

        a  b  c  d  e  f  g  h
  v1    0  0  0  1  0  1  0  0
  v2    0  0  0  0  1  1  1  1
  v3    0  0  0  0  0  0  0  1
  v4    1  1  1  0  1  0  0  0
  v5    0  0  1  1  0  0  1  0
  v6    1  1  0  0  0  0  0  0

  Fig. 2.5 Incidence matrix of the graph in Fig. 2.4

The entries in the incidence matrix are only two elements, 0 and 1; such a matrix is called a binary matrix or a (0, 1)-matrix. The following observations about the incidence matrix A can readily be made:

1. Since every edge is incident on exactly two vertices, each column of A has exactly two 1's.
2. The number of 1's in each row equals the degree of the corresponding vertex.
3. A row with all 0's, therefore, represents an isolated vertex.
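As a small illustration of the adjacency-matrix representation, the C sketch below builds the matrix for an undirected simple graph from an edge list and computes each vertex degree as a row sum. The edge list used here is an arbitrary example of our own, not the graph of Fig. 2.4.

#include <stdio.h>
#define NV 6                            /* number of vertices             */

int main(void)
{
    int X[NV][NV] = {{0}};              /* adjacency matrix, all zeros    */
    int edges[][2] = { {0,1}, {0,4}, {1,2}, {1,3}, {1,4}, {3,4}, {3,5} };
    int ne = sizeof edges / sizeof edges[0];

    for (int k = 0; k < ne; k++) {      /* undirected: mark both cells    */
        int i = edges[k][0], j = edges[k][1];
        X[i][j] = X[j][i] = 1;
    }
    for (int i = 0; i < NV; i++) {      /* degree of vi = sum of row i    */
        int d = 0;
        for (int j = 0; j < NV; j++) d += X[i][j];
        printf("d(v%d) = %d\n", i + 1, d);
    }
    return 0;
}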

4. Parallel edges in a graph produce identical columns in its incidence matrix, columns 1 and 2 in Fig. 2.5 for instance.

2.3.3 Trees

The concept of a tree is probably the most important in graph theory, especially for those interested in applications of graphs. A tree is a connected graph without any circuits. The graph in Fig. 2.6, for instance, is a tree. It follows immediately from the definition that a tree has to be a simple graph, that is, a graph having neither a self-loop nor parallel edges (because both of these form circuits).

Fig. 2.6 Tree

Trees appear in numerous instances. The genealogy of a family is often represented by means of a tree. A river with its tributaries and sub-tributaries can also be represented by a tree. The sorting of mail according to zip code and the sorting of punched cards are done according to a tree (called a decision tree or sorting tree).

2.3.3.1 Some Properties of Trees

1. There is one and only one path between every pair of vertices in a tree.
2. A tree with n vertices has n-1 edges.
3. Any connected graph with n vertices and n-1 edges is a tree.
4. A graph is a tree if and only if it is minimally connected.

Therefore, a graph G with n vertices is called a tree if
1. G is connected and is circuitless, or
2. G is connected and has n-1 edges, or
3. G is circuitless and has n-1 edges, or
4. there is exactly one path between every pair of vertices in G, or
5. G is a minimally connected graph.

SUMMARY

This chapter devotes itself to presenting an overview of the fundamental data structures essential for the design of algorithms. One of the basic techniques for improving algorithms is to structure the data in such a way that the resulting operations can be carried out efficiently. Though the chapter does not present all data structures, we have selected several which occur more frequently in this book. The notions of arrays, stacks, queues, graphs and trees have been exposed. The chapter also presents some of the terminology used for graphs.

EXERCISE

1. Give at least 5 real life examples where we use stack operations.
2. Give at least 5 real life applications where a queue is used.
3. Name 10 situations that can be represented by means of graphs. Explain what each vertex and edge represents.
4. Draw a connected graph that becomes disconnected when any edge is removed from it.
5. Draw all trees of n labeled vertices for n = 1, 2, 3, 4 and 5.
6. Sketch all binary trees with six pendent edges.
7. Write the adjacency and incidence matrices for all the graphs developed.

Chapter 3

Some Simple Algorithms

An algorithm is a blueprint; the program follows it to achieve the end result. The end result thus depends on a proper understanding of the logic constructs rather than of the programming constructs specific to the programming language. In this chapter we present some simple problems, discuss the issues in solving them, and finally design the algorithms for the same.

3.1 ADDITION OF TWO NUMBERS

In a programming situation, if we need to add two numbers, we require two memory locations to take the input and one more memory location to store the result: totally three memory locations. Consider the following example:

    3 + 6 = 9
    a   b   c

Here 'a', 'b' and 'c' are the memory locations, from now on called variables. When we generalize this, it appears as c = a + b. To make it even more simple, we have two inputs and one output; to store one output and two inputs we need three variables or memory locations. The algorithm to add two numbers is now given below.

Algorithm: Add_two_numbers
Input: a, b, the two numbers to be added
Output: c, updated
Method:
  c = a + b
  Display c
Algorithm ends

3.2 INPUT THREE NUMBERS AND OUTPUT THEM IN ASCENDING ORDER

The problem of sorting is a major computer science problem. It is dealt with in depth in the next chapter, but here only the problem of sorting three numbers is presented. To sort three numbers in ascending order, one has to follow the procedure described here: find the smallest among the three numbers; that becomes the first element in the sorted list. Then find the smaller of the remaining two elements; that becomes the second element, and the remaining one becomes the last element.

Consider the following example: 3, 1, 7 is the input. The smallest amongst these three is 1, therefore 1 becomes the first element in the sorted list. Amongst 3 and 7, 3 is smaller, hence 3 is the second element, and 7 is left out and becomes the last element. Hence the sorted list becomes 1, 3, 7. The algorithm is as follows.

Algorithm: Sorting_3_Elements
Input: a, b, c, the three numbers to be sorted
Output: the ascending order sequence of the 3 elements
Method:
  small = a          // assume 'a' is the smallest
  k1 = b
  k2 = c
  if (small > b)
    small = b        // 'b' is smaller; the other two are 'a' and 'c'
    k1 = a
    k2 = c
  end_if
  if (small > c)
    small = c        // 'c' is the smallest; the other two are 'a' and 'b'
    k1 = a
    k2 = b
  end_if
  Display small
  if (k1 > k2)
    Display k2
    Display k1
  else
    Display k1
    Display k2
  end_if
Algorithm ends
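A direct C rendering of this three-number sort is sketched below; reading the inputs with scanf and printing the result on one line are our own additions.

#include <stdio.h>

int main(void)
{
    int a, b, c;
    scanf("%d %d %d", &a, &b, &c);

    int small = a, k1 = b, k2 = c;      /* assume a is the smallest */
    if (small > b) { small = b; k1 = a; k2 = c; }
    if (small > c) { small = c; k1 = a; k2 = b; }

    printf("%d ", small);               /* smallest first           */
    if (k1 > k2) printf("%d %d\n", k2, k1);
    else         printf("%d %d\n", k1, k2);
    return 0;
}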

3.3 TO FIND THE QUADRANT OF A GIVEN CO-ORDINATE POSITION

The problem of finding the quadrant in which a given co-ordinate position (x, y) lies is a computer graphics problem. In a Cartesian system, if both the X and Y co-ordinates are positive, then the point is said to be in the first quadrant of the Cartesian plane. If X is negative and Y is positive, then it is said to be in the second quadrant; if both X and Y are negative, then it is in the third quadrant; and if X is positive and Y is negative, then it is in the fourth quadrant. The diagram of the Cartesian plane is given in Fig. 3.1 below to make the facts clearer.

Fig. 3.1 Cartesian plane

The co-ordinate positions entered thus have to fall into one of these quadrants, based on the signs of the X and Y co-ordinates.

Algorithm: Quadrant_Finder
Input: x, the X co-ordinate; y, the Y co-ordinate
Output: the corresponding quadrant
Method:
  if (x >= 0)
    if (y >= 0)
      Display 'I Quadrant'
    else
      Display 'IV Quadrant'
    end_if
  else
    if (y >= 0)
      Display 'II Quadrant'
    else
      Display 'III Quadrant'
    end_if
  end_if
Algorithm ends

3.4 TO FIND THE ROOTS OF A QUADRATIC EQUATION

The general form of the quadratic equation is ax^2 + bx + c = 0. Its two roots are

    R1 = (-b + sqrt(b^2 - 4ac)) / 2a
    R2 = (-b - sqrt(b^2 - 4ac)) / 2a

The term (b^2 - 4ac) in the solution is called the discriminant. The nature of the discriminant reflects the nature of the solution of the quadratic equation. Basically, the roots can be real or imaginary in nature. If the roots are real, they can be either real and distinct or both equal; if they are imaginary, the roots exist as complex conjugates. The problem of identifying the nature of the roots is thus reduced to checking the nature of the discriminant. If the discriminant is negative, i.e., if (b^2 - 4ac) = some negative value, then the roots will be imaginary in nature, because the square root of any negative number always results in an imaginary, i.e., complex, value.

Now the roots R1 and R2 have imaginary parts, hence they are imaginary in nature and exist as conjugates. The other possibility is that the roots are real. For the roots to be real and equal, the discriminant must vanish, i.e., (b^2 - 4ac) = 0, in which case R1 = R2 = -b/2a. In case (b^2 - 4ac) > 0, the roots are real and distinct. The algorithm to find the solution of a quadratic equation is given below.

Algorithm: Quadratic_solver
Input: a, b, c, the co-efficients of the quadratic equation
Output: the two roots of the equation
Method:
  disc = (b*b) - (4*a*c)
  if (disc = 0)
    display 'roots are real and equal'
    r1 = -b/2a
    r2 = -b/2a
    display r1
    display r2
  else if (disc > 0)
    display 'roots are real and distinct'
    r1 = (-b + sqrt(disc))/2a
    r2 = (-b - sqrt(disc))/2a
    display r1
    display r2
  else
    display 'roots are complex'
    display 'real part', -b/2a
    display 'imaginary part', sqrt(absolute_value_of(disc))/2a
    display 'the two roots exist as conjugates'
  end_if
Algorithm ends
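A C sketch corresponding to this algorithm is given below; the use of double arithmetic and sqrt from math.h is our own choice.

#include <stdio.h>
#include <math.h>

int main(void)
{
    double a, b, c;
    scanf("%lf %lf %lf", &a, &b, &c);

    double disc = b * b - 4.0 * a * c;     /* the discriminant */
    if (disc == 0.0) {
        printf("roots are real and equal: %g\n", -b / (2.0 * a));
    } else if (disc > 0.0) {
        printf("roots are real and distinct: %g and %g\n",
               (-b + sqrt(disc)) / (2.0 * a),
               (-b - sqrt(disc)) / (2.0 * a));
    } else {
        printf("roots are complex conjugates: %g +/- %gi\n",
               -b / (2.0 * a), sqrt(-disc) / (2.0 * a));
    }
    return 0;
}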

3.5 CHECKING FOR PRIME

Prime number checking has been a very interesting problem in computer science for a very long time. A prime number is divisible only by 1 and by itself, and not by any other number. Prime number checking is technically called primality testing.

First Approach

Here we keep dividing the number by every value from 2 up to half of the number. For example, if 47 is the number under consideration for primality testing, then we divide 47 by the values from 2 to 47/2 (23.5, approximately 24) and check whether any number in this interval divides 47 without a remainder. If so, then the given number is not prime. But in the case of 47, no number between 2 and 24 divides it exactly, and hence it is declared a prime number.

Algorithm: Primality_Testing (First approach)
Input: n, the number; flag, the test condition
Output: flag, updated
Method:
  flag = 0
  for (i = 2 to n/2 in steps of +1 and flag = 0)
    if (n % i = 0)        // n mod i
      flag = 1
    end_if
  end_for
  if (flag = 0)
    display 'Number is prime'
  else
    display 'Number is not prime'
  end_if
Algorithm ends

Second Approach

It is proved in number theory that instead of setting the upper limit of the divisors to n/2 we can simply set it to the square root of the number under consideration, and the test works exactly as well as the previous algorithm. Note that with this algorithm we achieve the same result with a smaller number of operations, because we reduce the size of the interval.

Algorithm: Primality_Testing (Second approach)
Input: n, the number; flag, the test condition
Output: flag, updated
Method:
  flag = 0
  for (i = 2 to square_root(n) in steps of +1 and flag = 0)
    if (n % i = 0)        // n mod i
      flag = 1
    end_if
  end_for
  if (flag = 0)
    display 'Number is prime'
  else
    display 'Number is not prime'
  end_if
Algorithm ends
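The second approach translates into C as sketched below; treating n < 2 as non-prime is our own small addition, since the text does not discuss that case.

#include <stdio.h>
#include <math.h>

int is_prime(long n)
{
    if (n < 2) return 0;                  /* 0 and 1 are not prime (our convention) */
    long limit = (long)sqrt((double)n);
    for (long i = 2; i <= limit; i++)     /* divisors only up to sqrt(n) */
        if (n % i == 0) return 0;
    return 1;
}

int main(void)
{
    printf("%s\n", is_prime(47) ? "Number is prime" : "Number is not prime");
    return 0;
}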

3.6 FACTORIAL OF A NUMBER

Finding the factorial of a given number is another interesting problem. The factorial of n, mathematically represented as n!, is the product of all the numbers from n down to 1, i.e., n! = n * (n-1) * (n-2) * ... * 1. For example, 5! = 5*4*3*2*1. Not to forget that 1! = 1 and 0! = 1. We can therefore generalize the factorial of any number other than zero and one as the product of all the numbers ranging from the given number down to 1.

Algorithm: Factorial
Input: n
Output: factorial of n
Method:
  fact = 1
  for i = n to 1 in steps of -1 do
    fact = fact * i
  end_for
  display 'factorial = ', fact
Algorithm ends

In the above algorithm we have implemented the logic of the equation n! = n * (n-1) * (n-2) * ... * 1. The same result can be achieved by the following algorithm, which follows incremental steps rather than the decremental steps of the algorithm above.

Algorithm: Factorial
Input: n
Output: factorial of n
Method:
  fact = 1
  for i = 1 to n in steps of 1 do
    fact = fact * i
  end_for
  display 'factorial = ', fact
Algorithm ends

3.7 TO GENERATE FIBONACCI SERIES

The Fibonacci series is as follows:

    0, 1, 1, 2, 3, 5, 8, ...

The property of this series is that any given element after the first two is the sum of its first and second predecessors. For example, consider 8: its first and second predecessors are 5 and 3, and 5 + 3 = 8. Consider 3: its first and second predecessors are 2 and 1, and 2 + 1 = 3. The property of the Fibonacci series holds. The following is the algorithm to generate the Fibonacci series up to a given number of elements.

Algorithm: Fibonacci_Series
Input: n, the number of elements in the series
Output: the Fibonacci series up to the nth element
Method:
  a = -1
  b = 1
  for (i = 1 to n in steps of +1 do)
    c = a + b
    display c
    a = b
    b = c
  end_for
Algorithm ends
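Both routines are easy to express in C; the sketch below mirrors the incremental factorial and the a = -1, b = 1 initialization used by the Fibonacci algorithm above. The function names and sample calls are our own.

#include <stdio.h>

long factorial(int n)                 /* iterative, incremental version       */
{
    long fact = 1;
    for (int i = 1; i <= n; i++)
        fact *= i;
    return fact;
}

void fibonacci_series(int n)          /* prints the first n Fibonacci numbers */
{
    long a = -1, b = 1;
    for (int i = 1; i <= n; i++) {
        long c = a + b;               /* first iteration yields 0             */
        printf("%ld ", c);
        a = b;
        b = c;
    }
    printf("\n");
}

int main(void)
{
    printf("factorial = %ld\n", factorial(5));   /* 120              */
    fibonacci_series(8);                         /* 0 1 1 2 3 5 8 13 */
    return 0;
}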

3.8 SUM OF N NUMBERS AND AVERAGE

Consider the addition of the five numbers 12, 15, 10, 5 and 1.

    Initially add 12 and 15     => 12 + 15 = 27
    Subsequently add 10 to 27   => 27 + 10 = 37
    Subsequently add 5 to 37    => 37 + 5  = 42
    And finally add 1 to 42     => 42 + 1  = 43

which is nothing but 12 + 15 + 10 + 5 + 1 = 43. The resulting sum divided by the number of elements yields the average of the input set. This logic of adding two successive elements and iterating the same process over the remaining elements is described in the algorithm given below.

Algorithm: Sum_and_Average
Input: n, the number of elements; a(n), an array of n elements
Output: sum and average of the n array elements
Method:
  Display 'Enter the number of elements'
  Accept n
  Display 'Enter the elements one by one'
  for (i = 1 to n in steps of +1 do)
    Accept a(i)
  end_for
  sum = 0
  for (i = 1 to n in steps of +1 do)
    sum = sum + a(i)
  end_for
  Display 'Sum = ', sum
  Display 'Average = ', sum/n
Algorithm ends

3.9 TO ADD TWO MATRICES

Consider

    a11 a12 a13     b11 b12 b13     a11+b11  a12+b12  a13+b13
    a21 a22 a23  +  b21 b22 b23  =  a21+b21  a22+b22  a23+b23
    a31 a32 a33     b31 b32 b33     a31+b31  a32+b32  a33+b33

The above example is of matrix addition, which leads to the realization of matrix addition in general: the respective elements of the two matrices are added together into a third matrix, and the result is thus obtained. Matrix addition is possible if and only if the orders of the two matrices are the same. The two matrices have to be read initially into two-dimensional arrays, and then the respective row and column elements of the matrices are added to get the third matrix. The procedure is described below.

Algorithm: Matrix_Addition
Input: n, the order of the matrices; a(n, n), b(n, n), the two input matrices
Output: c(n, n), the resultant sum matrix
Method:
  Accept n
  for (i = 1 to n in steps of +1 do)
    for (j = 1 to n in steps of +1 do)
      accept a(i, j)
    end_for
  end_for
  for (i = 1 to n in steps of +1 do)
    for (j = 1 to n in steps of +1 do)
      accept b(i, j)
    end_for
  end_for
  for (i = 1 to n in steps of +1 do)
    for (j = 1 to n in steps of +1 do)
      c(i, j) = a(i, j) + b(i, j)
    end_for
  end_for
  for (i = 1 to n in steps of +1 do)
    for (j = 1 to n in steps of +1 do)
      display c(i, j)
    end_for
  end_for
Algorithm ends

SUMMARY

In this chapter some simple problems, the problem-solving concepts and the concept of designing algorithms for the problems have been presented. For the ease of the students, some simple problems were considered and the algorithms were developed. The students are expected to implement all the algorithms explained above and experience the way they work.

EXERCISE

1. Design and develop an algorithm for multiplying n integers. Hint: follow the algorithm to add n numbers given in the text.
2. Design an algorithm to find the reverse of a number.
3. Design an algorithm to check whether a number is a palindrome or not. A number is said to be a palindrome if the reverse of the number is the same as the original.
4. Develop an algorithm to find the number of permutations and combinations for a given n and r.
5. Design an algorithm to generate all prime numbers within the limits l1 and l2.
6. Design an algorithm to check whether a given string is a palindrome or not.
7. Design and develop an algorithm for finding the middle element among three numbers.
8. Implement all the devised algorithms and also the algorithms discussed in the chapter.

Chapter 4

Searching and Sorting

4.1 SEARCHING

Let us assume that we have a sequential file and we wish to retrieve an element matching a key element 'k'. Then we have to search the entire file from the beginning till the end to check whether an element matching k is present in the file or not. There are a number of complex searching algorithms to serve this purpose; the linear search and binary search methods are relatively straightforward methods of searching.

4.1.1 Sequential Search

In this method, we start the search from the beginning of the list and examine each element till the end of the list. If the desired element is found, we stop the search and return the index of that element. If the item is not found and the list is exhausted, the search returns a zero value. In the worst case the item is not found, or the search item is the last (nth) element; in both situations we must examine all n elements of the array. The algorithm for sequential search is as follows.

Algorithm: Sequential_search
Input: A, vector of n elements; k, the search element
Output: j, the index of k
Method:
  i = 1
  while (i <= n)
    if (A[i] = k)
      write ('search successful')
      write (k is at location i)
      exit()
    else
      i = i + 1
    if end
  while end
  write ('search unsuccessful')
Algorithm ends
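A C version of the sequential search is sketched below; returning 0 when the key is absent follows the convention stated in the text, and the 1-based indexing (with A[0] unused) is our own device to match the pseudocode.

#include <stdio.h>

/* returns the 1-based position of k in A[1..n], or 0 if not found */
int sequential_search(const int A[], int n, int k)
{
    for (int i = 1; i <= n; i++)
        if (A[i] == k) return i;
    return 0;
}

int main(void)
{
    int A[] = {0, 10, 7, -8, 3, 15};     /* A[0] unused to keep 1-based indexing */
    int pos = sequential_search(A, 5, 3);
    if (pos) printf("search successful: k is at location %d\n", pos);
    else     printf("search unsuccessful\n");
    return 0;
}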

4.1.2 Binary Search

The binary search method is also a relatively simple method. For this method it is necessary to have the vector in alphabetically or numerically increasing order. A search for a particular item with key X resembles the search for a word in a dictionary. The approximate middle entry is located and its key value is examined. If the middle value is greater than X, then the list is chopped off at the (mid-1)th location; the list now gets reduced to half the original list, and the middle entry of the left-reduced list is examined in a similar manner. On the other hand, if the middle value is lesser than X, then the list is chopped off at the (mid+1)th location, and the middle entry of the right-reduced list is examined. This procedure is repeated until the item is found or the list has no more elements. The algorithm for binary search is as follows.

Algorithm: Binary_search
Input: A, vector of n elements; k, the search element

Output: low, the index of k
Method:
  low = 1, high = n
  while (low <= high - 1)
    mid = (low + high)/2
    if (k < A[mid])
      high = mid
    else
      low = mid
    if end
  while end
  if (k = A[low])
    write ('search successful')
    write (k is at location low)
    exit()
  else
    write ('search unsuccessful')
  if end
Algorithm ends
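The same method in C is sketched below; a conventional low/high formulation with 0-based indexing is used here instead of the exact loop condition of the pseudocode above, so the details differ slightly.

#include <stdio.h>

/* A[0..n-1] must be sorted in increasing order; returns the index of k, or -1 */
int binary_search(const int A[], int n, int k)
{
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = (low + high) / 2;
        if (A[mid] == k)     return mid;
        else if (A[mid] < k) low = mid + 1;   /* discard the left half  */
        else                 high = mid - 1;  /* discard the right half */
    }
    return -1;
}

int main(void)
{
    int A[] = {10, 12, 30, 40, 45, 54, 56, 78, 500};
    int pos = binary_search(A, 9, 45);
    if (pos >= 0) printf("found at index %d\n", pos);
    else          printf("not found\n");
    return 0;
}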

4.2 SORTING

One of the major applications in computer science is the sorting of information in a table. Sorting algorithms arrange items in a set according to a predefined ordering relation. The most common types of data are string information and numerical information. The ordering relation for numeric data simply involves arranging items in sequence from smallest to largest or from largest to smallest, which is called ascending and descending order respectively. For example, the items in a set arranged in non-decreasing order are {7, 11, 13, 16, 16, 19, 23}, and arranged in descending order they are {23, 19, 16, 16, 13, 11, 7}. Similarly, for string information, {a, abacus, above, be, become, beyond} is in ascending order and {beyond, become, be, above, abacus, a} is in descending order.

There are numerous methods available for sorting information, but not even one of them is best for all applications. Performance of the methods depends on parameters like the size of the data set, the degree of relative order already present in the data, and so on.

4.2.1 Insertion Sorting

The first class of sorting algorithms that we consider comprises algorithms that sort by insertion. An algorithm that sorts by insertion takes the initial, unsorted sequence S and computes a series of sorted sequences S'0, S'1, ..., S'n, as follows:

1. The first sequence in the series, S'0, is the empty sequence.
2. Given a sequence S'i in the series, for 0 <= i < n, the next sequence, S'(i+1), is obtained by inserting the (i+1)th element of the unsorted sequence S into the correct position in S'i.

Each sequence S'i, for 0 <= i <= n, contains the first i elements of the unsorted sequence S. Therefore, the final sequence in the series, S'n, is the sorted sequence we seek, i.e., S'n = S sorted.

Fig. 4.1 illustrates the insertion sorting algorithm: it shows the progression of the algorithm as it sorts an array of ten integers. The array is sorted in place, i.e., the sorted sequence S' and the unsorted sequence S occupy the same array. In the ith step, the element at position i in the array is inserted into the sorted sequence S'i, which occupies array positions 0 to (i-1). After this is done, array positions 0 to i contain the i+1 elements of S'(i+1), and positions (i+1) to (n-1) contain the remaining n-i-1 elements of the unsorted sequence S.

Fig. 4.1 Insertion sort

As shown in Fig. 4.1, the first step (i = 0) is trivial: inserting an element into the empty list involves no work. Altogether, n-1 non-trivial insertions are required to sort a list of n elements.

Algorithm: Insertion_Sort
Input: n, size of the input domain; a[1..n], array of n elements
Output: a[1..n] sorted
Method:
  for j = 2 to n in steps of 1 do
    item = a[j]
    i = j - 1
    while ((i >= 1) and (item < a[i])) do
      a[i+1] = a[i]
      i = i - 1
    while end
    a[i+1] = item
  for end
Algorithm ends

4.2.2 Selection Sorting

Selection sorting algorithms construct the sorted sequence one element at a time by adding elements to the sorted sequence in order: at each step, the next element to be added to the sorted sequence is selected from the remaining elements. Because the elements are added to the sorted sequence in order, they are always added at one end. This is what makes selection sorting different from insertion sorting: in insertion sorting, elements are added to the sorted sequence in an arbitrary order, so the position at which each subsequent element is inserted is arbitrary. Both selection sorts described in this section sort the array in place; consequently, the sorts are implemented by exchanging array elements.

Straight Selection Sorting

The simplest of the selection sorts is called straight selection. Nevertheless, selection differs from exchange sorting because at each step we select the next element of the sorted sequence from the remaining elements and then move it into its final position in the array by exchanging it with whatever happens to occupy that position. In the version shown, the sorted list is constructed from the right (i.e., from the largest to the smallest element values). Fig. 4.2 illustrates how straight selection works.

At each step of the algorithm, a linear search of the unsorted elements is made in order to determine the position of the largest remaining element. That element is then moved into the correct position of the array by swapping it with the element which currently occupies that position. For example, in the first step shown in Fig. 4.2, a linear search of the entire array reveals that 9 is the largest element. Since 9 is the largest element, it belongs in the last array position; to move it there, we swap it with the 4 that initially occupies that position. The second step of the algorithm identifies 6 as the largest remaining element and moves it next to the 9. Each subsequent step of the algorithm moves one element into its final position; therefore, the algorithm is done after n-1 such steps.

Fig. 4.2 Selection sort

Algorithm: Selection_Sort
Input: n, size of the input domain; a[1..n], array of n elements
Output: a[1..n] sorted
Method:
  for i = 1 to n in steps of 1 do
    j = i
    for k = i+1 to n in steps of 1 do
      if (a[k] < a[j]) then j = k
    for end
    interchange a[i] and a[j]
  for end
Algorithm ends

SUMMARY

In this chapter two techniques for checking whether an element is present in a list of elements were presented. Linear search is best employed when the searching operation is used minimally; but when searching has to be done several times on the same data set, it is better to sort that data set and apply binary search, since binary search reduces the searching effort compared with linear search. The chapter also presented some sorting techniques in sequel to searching. Selection sort is the simplest of all sorting algorithms; it goes for the selection of the largest element at each iteration. Insertion sort builds a sorted list by inserting elements into a small sorted sub-list. Two more sorting algorithms are discussed in the next chapter.

EXERCISE

1. Implement all the algorithms designed in the chapter.
2. Consider a data set of nine elements {10, 30, 45, 54, 56, 78, 213, 415, 500} and trace the linear search algorithm to find whether the keys 30, 150, 700 are present in the data set or not.
3. Trace the binary search algorithm on the same data set and the same key elements of problem 2.
4. Hand simulate the insertion sort on the data set {13, 9, 45, 10, 12, 40}.
5. What are the serious shortcomings of the binary search method and the sequential search method?
6. Try to know more sorting techniques and make a comparative study of them.

Chapter 5

Recursion

5.1 WHAT IS RECURSION?

We look at the concept of recursion, which is one of the very powerful programming concepts supported by most languages. At the same time, it is also a fact that most beginners are confused by the way it works and are unable to use it effectively.

Recursion is an offshoot of the concept of subprograms. A subprogram, as we know, is the concept of writing separate modules which can be called from other points in the program or from other programs. Where do we use them normally? A subprogram is called to perform a function which the caller subprogram cannot perform itself. The control goes to the called subprogram; the subprogram no doubt performs the assigned tasks and comes back to the caller program. Then came a concept wherein any subprogram can call any other subprogram. Then comes the question: can a program call itself? Theoretically it is possible. But if the caller program calls itself, what purpose does it serve? The answer is that, in most cases, the program calls itself with a different value of the parameter, and the calling continues until some specific value of the parameter is reached. Recursion is one of the applications of stacks, which are used to save the parameters, local variables and return addresses of the pending calls.

We now understand that recursion is a process of defining a process/problem/object in terms of itself. Any program can be written using recursion. The recursive mechanisms are extremely powerful, but even more importantly, many times they can express an otherwise complex process very clearly. In fact, recursion can be used whenever the problem itself can be defined recursively. Of course, some languages may not support recursion, and in such cases it may become necessary to rewrite recursive functions into non-recursive ones. Also, the recursive program could in some cases be tougher to understand. All these and many other aspects are dealt with in this chapter.

The general procedure for any recursive algorithm is as follows:

1. Save the parameters, local variables and the return address.
2. If the termination criterion has been reached, perform the final computation and go to step 3; otherwise perform the partial computations and go to step 1 (initiate a recursive call).
3. Restore the most recently saved parameters, local variables and return address, and go to the latest return address.

There are two conditions that must be satisfied by any recursive procedure:

1. Each time a function calls itself it should get nearer to the solution.
2. There must be a decision criterion for stopping the process.

5.2 WHY DO WE NEED RECURSION?

When iteration can easily be used, and is supported by most programming languages, why do we need recursion at all? The answer is that iteration has certain demerits, as made clear below:

1. In iterative techniques, looping of statements is very much necessary.
2. Iteration is more of a bottom-up approach: it begins with what is known and from this constructs the solution step by step.

Recursion, on the other hand, is a top-down approach to problem solving: it divides the problem into pieces or selects out one key step, postponing the rest. Mathematical functions such as factorial and Fibonacci series generation can be implemented more easily using recursion than iteration.

5.3 WHEN TO USE RECURSION?

Recursion can be used for repetitive computations in which each action is stated in terms of a previous result. In making the decision about whether to write an algorithm in recursive or non-recursive form, it is always advisable to consider a tree structure for the problem. If the tree appears quite bushy, with little duplication of tasks, then recursion is suitable; if the structure is simple, then use the non-recursive form.

Let us now look at some basic examples which are often devised using recursion.

5.4 FACTORIAL OF A POSITIVE INTEGER

The factorial of a number n is n! = n * (n-1) * (n-2) * ... * 3 * 2 * 1. An iterative way of obtaining the factorial of a given number is to use a 'for' loop to repeat the multiplication n times. The factorial of a number can also be obtained recursively: we start with 1, then evaluate 1 * 2, then that product * 3, and so on.

Suppose we are asked to evaluate N!. If we somehow know (N-1)!, then we can evaluate N! = N * (N-1)!. But how do we get (N-1)!? The same logic can be employed: (N-1)! = (N-1) * (N-2)!, and so on. Thus, the problem of finding the factorial of a given number can be recursively defined as:

  Factorial(n) = 1                     if n = 1
  Factorial(n) = n * Factorial(n-1)    if n > 1

(where n is a positive integer)

Thus, the algorithm developed to compute the factorial of a given number is:

Algorithm: Factorial
Input: n, the integer value whose factorial is to be computed
Output: factorial of n
Method:
  If (n == 1) then
    Return (1)
  Else
    Return (n * Factorial(n-1))
  If end
Algorithm ends

The students are advised to try various values of n to actually see how the method works.
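As a hedged illustration, the pseudocode above translates almost directly into C. The function name fact and the use of unsigned long are our own choices, not part of the text; the sketch assumes n >= 1, matching the algorithm's termination test.

#include <stdio.h>

/* Recursive factorial, following the Factorial algorithm above:
   the base case n == 1 returns 1, otherwise n * fact(n - 1). */
unsigned long fact(unsigned int n)
{
    if (n == 1)                 /* termination criterion */
        return 1UL;
    return n * fact(n - 1);     /* recursive step with a smaller parameter */
}

int main(void)
{
    unsigned int n = 5;
    printf("%u! = %lu\n", n, fact(n));   /* prints 5! = 120 */
    return 0;
}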

5.5 FINDING THE NTH FIBONACCI NUMBER

As already introduced in Chapter 3, a Fibonacci series is a sequence of integers 0, 1, 1, 2, 3, 5, ..., i.e., the Fibonacci sequence starts from 0 and 1, and after that each new term is the sum of the previous two terms. For instance, the first Fibonacci number is 0 and the second Fibonacci number is 1. In general, if the kth Fibonacci number is required, it can be obtained by summing up the (k-1)th and (k-2)th Fibonacci numbers; this process can be carried out recursively until the known values at the start of the series are reached, and finally one obtains the kth Fibonacci number. For instance, if the 6th Fibonacci number is asked for, we are expected to produce the number 5. We shall look here at finding the nth Fibonacci number in the series. Thus, the following is the recursive algorithm designed to find the nth Fibonacci number:

Algorithm: Fibonacci
Input: n, the position at which the Fibonacci number has to be computed
Output: nth Fibonacci number
Method:
  If (n == 0)
    Return (0)
  Else
    If (n == 1)
      Return (1)
    Else
      Return (Fibonacci(n-1) + Fibonacci(n-2))
    If end
  If end
Algorithm ends

Again, the correctness can be checked for various input values.
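The same algorithm can be sketched in C as below; the function name fib is ours. Note that this direct translation recomputes the same values many times (see the discussion around Fig. 5.1 later in the chapter), so it is meant only to mirror the pseudocode, not to be an efficient implementation.

#include <stdio.h>

/* nth Fibonacci number, mirroring the recursive algorithm above:
   fib(0) = 0, fib(1) = 1, fib(n) = fib(n-1) + fib(n-2). */
long fib(int n)
{
    if (n == 0)
        return 0;
    if (n == 1)
        return 1;
    return fib(n - 1) + fib(n - 2);
}

int main(void)
{
    int n;
    for (n = 0; n <= 6; n++)
        printf("fib(%d) = %ld\n", n, fib(n));   /* 0 1 1 2 3 5 8 */
    return 0;
}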

5.6 SUM OF FIRST N INTEGERS

We use a third example which, though not normally used to explain recursion, is nevertheless a very useful method. The sum of the integers up to n is the sum of the integers up to n-1, plus n. The sum of the integers up to n-1 is the sum up to n-2, plus n-1, and so on. Eventually we reach a small enough subset of the problem for which a terminating condition can be defined: we know that the sum of the first positive integer is 1. The recursive algorithm to achieve this is as follows:

Algorithm: SumPosInt
Input: n, the upper limit
Output: sum of the first n positive integers
Method:
  if (n <= 0)                       // we only want positive integers
    return 0
  else
    if (n == 1)                     // our terminating condition
      return 1
    else
      return (n + SumPosInt(n-1))   // recursive step
    if end
  if end
Algorithm ends
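In C the algorithm is only a few lines. The terminating test is taken as n == 1 returning 1, which matches the remark that the sum of the first positive integer is 1; the function name sum_pos_int is ours.

#include <stdio.h>

/* Sum of the first n positive integers, computed recursively. */
int sum_pos_int(int n)
{
    if (n <= 0)                       /* we only want positive integers */
        return 0;
    if (n == 1)                       /* terminating condition */
        return 1;
    return n + sum_pos_int(n - 1);    /* recursive step */
}

int main(void)
{
    printf("sum 1..10 = %d\n", sum_pos_int(10));   /* prints 55 */
    return 0;
}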

5.7 BINARY SEARCH

The binary search method, as explained earlier, is a process of searching for the presence or absence of a key element K in a sorted list. The approximate middle entry is located and its key value is examined. If the middle value is greater than K, then the list is chopped off at the (mid-1)th location; the list now gets reduced to half of the original, and the middle entry of this left-reduced list is examined in a similar manner. If the middle value is smaller than K, the right half of the list is examined in the same way. Since the same process is repeated on a smaller list each time, one can always think of using a recursive algorithm. The recursive algorithm for binary search is as follows:

Algorithm: BinarySearch
Input: A, vector of n elements
       K, search element
       low, the lower limit
       high, the upper limit    // initially low = 1 and high = n, the number of elements
Output: the position of K in A, or 0 if K is absent
Method:
  if (low <= high)
    mid = (low + high)/2
    if (A[mid] == K)
      return (mid)
    else
      if (A[mid] < K)
        BinarySearch(A, K, mid+1, high)
      else
        BinarySearch(A, K, low, mid-1)
      if end
    if end
  else
    return (0)
  if end
Algorithm ends
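A C sketch of the recursive binary search follows. It uses 0-based indexing (the pseudocode is 1-based) and returns -1 rather than 0 when the key is absent, since 0 is a valid index in C; the recursive results are also explicitly returned, which the pseudocode leaves implicit. These adjustments and the function name binary_search are ours.

#include <stdio.h>

/* Recursive binary search over a sorted array a[low..high].
   Returns the index of key, or -1 if the key is not present. */
int binary_search(const int a[], int key, int low, int high)
{
    int mid;
    if (low > high)
        return -1;                    /* empty range: key absent */
    mid = (low + high) / 2;
    if (a[mid] == key)
        return mid;
    if (a[mid] < key)                 /* key can only be in the right half */
        return binary_search(a, key, mid + 1, high);
    return binary_search(a, key, low, mid - 1);   /* otherwise the left half */
}

int main(void)
{
    int a[] = {2, 6, 9, 15, 32};
    int n = sizeof a / sizeof a[0];
    printf("15 found at index %d\n", binary_search(a, 15, 0, n - 1));  /* 3 */
    printf("7 found at index %d\n",  binary_search(a, 7, 0, n - 1));   /* -1 (absent) */
    return 0;
}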

5.8 MAXIMUM AND MINIMUM IN THE GIVEN LIST OF N ELEMENTS

Here the problem is to find the maximum and minimum values in a given list a of n data elements. The recursive algorithm designed to serve this purpose is as follows:

Algorithm: MaxMin
Input: p, q, the lower and upper limits of the data set
       max, min, two variables to return the maximum and minimum values in the list
Output: the maximum and minimum values in the data set
Method:
  If (p = q) Then                    // only one element
    max = a(p)
    min = a(q)
  Else
    If (p = q-1) Then                // exactly two elements
      If a(p) > a(q) Then
        max = a(p)
        min = a(q)
      Else
        max = a(q)
        min = a(p)
      If End
    Else                             // more than two elements: divide the list
      m = (p+q)/2
      MaxMin(p, m, max1, min1)
      MaxMin(m+1, q, max2, min2)
      max = large(max1, max2)
      min = small(min1, min2)
    If End
  If End
Algorithm Ends
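Because C has no reference parameters, a sketch of MaxMin passes the results back through pointers; large and small from the pseudocode become plain comparisons, array indices are 0-based, and the name max_min is ours.

#include <stdio.h>

/* Recursive maximum and minimum of a[p..q] (divide and conquer).
   Results are written through the max and min pointers. */
void max_min(const int a[], int p, int q, int *max, int *min)
{
    if (p == q) {                        /* one element */
        *max = *min = a[p];
    } else if (p == q - 1) {             /* two elements */
        if (a[p] > a[q]) { *max = a[p]; *min = a[q]; }
        else             { *max = a[q]; *min = a[p]; }
    } else {                             /* split, solve both halves, combine */
        int max1, min1, max2, min2;
        int m = (p + q) / 2;
        max_min(a, p, m, &max1, &min1);
        max_min(a, m + 1, q, &max2, &min2);
        *max = (max1 > max2) ? max1 : max2;
        *min = (min1 < min2) ? min1 : min2;
    }
}

int main(void)
{
    int a[] = {22, 13, -5, -8, 15, 60, 17, 31, 47};
    int mx, mn;
    max_min(a, 0, 8, &mx, &mn);
    printf("max = %d, min = %d\n", mx, mn);   /* max = 60, min = -8 */
    return 0;
}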

5.9 MERGE SORT

Sorting, as stated in Chapter 4, is a process of arranging a set of given numbers in some order. The basic concept of merge sort is this: consider a series of n numbers, say A(1), A(2), ..., A(n/2) and A(n/2 + 1), A(n/2 + 2), ..., A(n). Suppose we individually sort the first set and also the second set. To get the final sorted list, we merge the two sets into one common set.

We first look at the concept of arranging two individually sorted series of numbers into a common series using an example. Let the first set be A = {3, 8, 14, 18, 27, 30} and the second set be B = {2, 6, 9, 15, 32}. The two lists need not be equal in length; for example, the first list could have 8 elements and the second 5. Now we want to merge these two lists to form a common list C. Look at the elements A(1) and B(1): A(1) is 3 and B(1) is 2. Since B(1) < A(1), B(1) becomes the first element of C, i.e., C(1) = 2. Now compare A(1) = 3 with B(2) = 6. Since A(1) is smaller than B(2), A(1) becomes the second element of C, so C[ ] = {2, 3}. Similarly, compare A(2) with B(2); since A(2) is smaller, it becomes the third element, and so on. Finally, C is built up as C[ ] = {2, 3, 6, 8, 9, 14, 15, 18, 27, 30, 32}.

In the above example, we presumed that both A and B are originally sorted; only then can they be merged. But how do we sort them in the first place? To do this, and to show the consequent merging process, we look at the following example. Consider the series A = (7 5 15 6 4). Divide A into two parts, (7, 5, 15) and (6, 4). Divide (7, 5, 15) again as ((7, 5) and (15)). Again (7, 5) is divided, and hence ((7, 5) and (15)) becomes (((7) and (5)) and (15)). Similarly (6, 4) becomes ((6) and (4)). Now, since every list has only one number, we cannot divide any further, and we start merging the lists, taking two at a time. Merging 7 and 5 as explained above gives (5, 7); merging this with 15 gives (5, 7, 15); merging this with 6 gives (5, 6, 7, 15); and merging this with 4, we finally get (4, 5, 6, 7, 15). This is the sorted list. You are now expected to take different sets of examples and see that the method always works.

We design two algorithms in the following. The main algorithm, MERGESORT, is a recursive algorithm (somewhat similar to the binary search algorithm that we saw earlier); each time it divides the list (low, high) into the two lists (low, mid) and (mid+1, high), and it calls the other algorithm, MERGE, which does the merging operation discussed above.

Algorithm: MERGESORT
Input: low, high, the lower and upper limits of the list to be sorted
       A, the list of elements
Output: A, sorted list
Method:
  If (low < high)
    mid = (low + high)/2
    MERGESORT(low, mid)
    MERGESORT(mid+1, high)
    MERGE(A, low, mid, high)
  If end
Algorithm ends

You may recall that this algorithm runs on lines parallel to the binary search algorithm; the main work, however, lies in merging the two sorted halves, which is done by the following algorithm.

Algorithm: MERGE
Input: low, mid, high, the limits of the two lists to be merged, i.e., A(low, mid) and A(mid+1, high)
       A, the list of elements
Output: B, the merged and sorted list
Method:
  h = low, i = low, j = mid + 1
  While ((h <= mid) and (j <= high)) do
    If (A(h) <= A(j))
      B(i) = A(h), h = h + 1
    Else
      B(i) = A(j), j = j + 1
    If end
    i = i + 1
  While end
  If (h > mid)
    For k = j to high
      B(i) = A(k), i = i + 1
    For end
  Else
    For k = h to mid
      B(i) = A(k), i = i + 1
    For end
  If end
Algorithm ends

The first portion of the algorithm works exactly as in the explanation given earlier, except that instead of using two separate lists A and B to fill a third array C, we use the elements of the same array, A[low..mid] and A[mid+1..high], and write them into another array B. Now, it is not necessary that both the lists from which we keep picking elements should get exhausted simultaneously. If the first list gets exhausted earlier, the remaining elements of the second list are written directly into B without any further comparisons, and vice versa. This aspect is taken care of by the second half of the algorithm.
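The pair of algorithms can be sketched in C as below. The pseudocode leaves implicit how the merged array B gets back into A; in this sketch the merge step copies the buffer back at the end, and the buffer b is simply passed in by the caller. Indices are 0-based and the function names merge and merge_sort are ours.

#include <stdio.h>

/* Merge the sorted runs a[low..mid] and a[mid+1..high] using buffer b,
   then copy the merged run back into a. */
static void merge(int a[], int b[], int low, int mid, int high)
{
    int h = low, j = mid + 1, i = low, k;

    while (h <= mid && j <= high) {      /* pick the smaller head element */
        if (a[h] <= a[j])
            b[i++] = a[h++];
        else
            b[i++] = a[j++];
    }
    if (h > mid)                         /* first run exhausted: copy the rest of the second */
        for (k = j; k <= high; k++)
            b[i++] = a[k];
    else                                 /* second run exhausted: copy the rest of the first */
        for (k = h; k <= mid; k++)
            b[i++] = a[k];

    for (k = low; k <= high; k++)        /* copy back (left implicit in the pseudocode) */
        a[k] = b[k];
}

/* Recursive merge sort of a[low..high]. */
static void merge_sort(int a[], int b[], int low, int high)
{
    if (low < high) {
        int mid = (low + high) / 2;
        merge_sort(a, b, low, mid);
        merge_sort(a, b, mid + 1, high);
        merge(a, b, low, mid, high);
    }
}

int main(void)
{
    int a[] = {7, 5, 15, 6, 4};
    int b[5];
    int i;
    merge_sort(a, b, 0, 4);
    for (i = 0; i < 5; i++)
        printf("%d ", a[i]);             /* prints 4 5 6 7 15 */
    printf("\n");
    return 0;
}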

5.10 QUICKSORT

This is another method of sorting that uses a different methodology to arrive at the same sorted result. It "partitions" the list into two parts (similar to merge sort), but not necessarily at the centre: the split is made at an arbitrary "pivot" element, and the partitioning ensures that all elements to the left of the pivot element are less than the element itself and all those to the right of it are greater. Consider the following example:

A(1) A(2) A(3) A(4) A(5) A(6) A(7) A(8) A(9)
 75   80   85   90   95   70   65   60   55

To facilitate ordering, we add a very large element, say 1000, at the end, keeping in mind that this is something we have added and is not a part of the list:

A(1) A(2) A(3) A(4) A(5) A(6) A(7) A(8) A(9) A(10)
 75   80   85   90   95   70   65   60   55  1000

Now consider the first element, 75. We want to move this element to its correct position in the list: at the end of the operation, all elements to the left of 75 should be less than 75 and those to its right should be greater than 75. This we do as follows. Start from A(2) and keep moving forward until an element greater than 75 is found. Simultaneously, start from A(10) and keep moving backward until an element smaller than 75 is found. Now A(2) is larger than A(1) and A(9) is less than A(1), so interchange them and continue the process:

75 55 85 90 95 70 65 60 80 1000

Again A(3) is larger than A(1) and A(8) is less than A(1); interchange them:

75 55 60 90 95 70 65 85 80 1000

Similarly A(4) is larger than A(1) and A(7) is less than A(1); interchange them and continue:

75 55 60 65 95 70 90 85 80 1000

In the next stage, A(5) is larger than A(1) and A(6) is less than A(1); after interchanging, we have:

75 55 60 65 70 95 90 85 80 1000

In the next stage, A(6) is larger than A(1) and A(5) is less than A(1), and we can see that the pointers have crossed each other. Hence we interchange A(1) and A(5):

70 55 60 65 75 95 90 85 80 1000

We have completed one series of operations. Note that 75 is now at its proper place: all elements to its left are smaller and all elements to its right are greater. Next we repeat the same sequence of operations on A(1) to A(4), and also on A(6) to A(10). This we keep repeating until single-element lists are arrived at.

Now we present a detailed algorithm to do the same. As before, two algorithms are written. The main algorithm, called QuickSort, repeatedly calls itself with smaller and smaller lists of elements; the sequence of operations explained above is done by another algorithm called PARTITION.

Algorithm: QuickSort
Input: p, q, the lower and upper limits of the list of elements A to be sorted
Output: A, the sorted list
Method:
  If (p < q)
    j = q + 1
    PARTITION(p, j)
    QuickSort(p, j-1)
    QuickSort(j+1, q)
  If end
Algorithm ends

Algorithm: PARTITION
Input: m, the position of the element whose actual position in the sorted list has to be found
       p, one past the upper limit of the list   // in the call above, p enters as q+1
Output: the final position of the mth element, returned through p
Method:
  v = A(m), i = m
  Repeat
    Repeat
      i = i + 1
    Until (A(i) >= v)
    Repeat
      p = p - 1
    Until (A(p) <= v)
    If (i < p)
      INTERCHANGE(A(i), A(p))
    If end
  Until (i >= p)
  A(m) = A(p), A(p) = v
Algorithm ends
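A C sketch of the QuickSort/PARTITION pair follows. To keep it self-contained, it checks the array bound in the forward scan instead of relying on the very large sentinel element used in the walkthrough, and the pivot's final position is returned as a value rather than through a reference parameter. These adjustments, and the names quick_sort, partition and swap, are ours.

#include <stdio.h>

static void swap(int *x, int *y)
{
    int t = *x; *x = *y; *y = t;
}

/* Place the pivot a[m] at its correct position within a[m..q] and return
   that position; smaller elements end up on its left, larger on its right. */
static int partition(int a[], int m, int q)
{
    int v = a[m];        /* the pivot value */
    int i = m;           /* forward pointer */
    int p = q + 1;       /* backward pointer, one past the end */

    for (;;) {
        do { i++; } while (i <= q && a[i] < v);   /* stop at an element >= pivot */
        do { p--; } while (a[p] > v);             /* stop at an element <= pivot */
        if (i < p)
            swap(&a[i], &a[p]);                   /* put the pair on the correct sides */
        else
            break;                                /* the pointers have crossed */
    }
    swap(&a[m], &a[p]);                           /* move the pivot into place */
    return p;
}

/* Recursive quicksort of a[p..q]. */
static void quick_sort(int a[], int p, int q)
{
    if (p < q) {
        int j = partition(a, p, q);
        quick_sort(a, p, j - 1);
        quick_sort(a, j + 1, q);
    }
}

int main(void)
{
    int a[] = {75, 80, 85, 90, 95, 70, 65, 60, 55};
    int i, n = sizeof a / sizeof a[0];
    quick_sort(a, 0, n - 1);
    for (i = 0; i < n; i++)
        printf("%d ", a[i]);    /* prints 55 60 65 70 75 80 85 90 95 */
    printf("\n");
    return 0;
}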

5.11 DEMERITS OF RECURSION

Now, are you thinking that recursion is the best programming technique to be followed in all cases? Well, you would be wrong. Recursion has some demerits too, which often make it a not-so-favoured solution to a problem. Some of the demerits of recursive algorithms are listed below:

1. Many programming languages do not support recursion; in such languages a recursively defined mathematical function has to be implemented using iterative methods.
2. Even though mathematical functions can be easily implemented using recursion, it is always at the cost of execution time and memory space.
3. Recursive programs need considerably more storage and take more time.
4. A recursive procedure can be called from within or outside itself, and to ensure its proper functioning it has to save, in some order, the return addresses so that a return to the proper location results when the return to a calling statement is made.

For example, consider computing the nth Fibonacci number recursively. A Fibonacci series is of the form 0, 1, 1, 2, 3, 5, 8, 13, ..., where each term after the second is the sum of the preceding two terms. The recursion tree for generating 6 numbers of the Fibonacci series is given in Fig. 5.1. It can be noticed from Fig. 5.1 that f(n-2) is computed twice, f(n-3) is computed thrice, f(n-4) is computed 5 times, and so on: the same values are recomputed again and again, which is where the extra time and space go.

[Fig. 5.1 Time-space tree of the algorithm Fibonacci]
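As a concrete illustration of demerits 1 and 2, the Fibonacci computation can be rewritten without recursion. The iterative sketch below keeps only the last two values and does the work in a single pass, avoiding the repeated calls visible in Fig. 5.1; the function name fib_iter is ours.

#include <stdio.h>

/* Iterative Fibonacci: fib(0) = 0, fib(1) = 1, each later term is the
   sum of the previous two.  Runs in O(n) time and O(1) extra space. */
long fib_iter(int n)
{
    long prev = 0, curr = 1;
    int k;
    if (n == 0)
        return 0;
    for (k = 2; k <= n; k++) {
        long next = prev + curr;   /* next term is the sum of the last two */
        prev = curr;
        curr = next;
    }
    return curr;
}

int main(void)
{
    printf("fib_iter(10) = %ld\n", fib_iter(10));   /* prints 55 */
    return 0;
}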

SUMMARY

In this chapter, the concepts of recursion are introduced. Some simple problems which were discussed in the earlier chapters are reconsidered and recursive algorithms are designed to achieve the same outcomes. Two sorting algorithms, namely Quick sort and Merge sort, are presented and the corresponding recursive algorithms are designed.

EXERCISE

1. Implement all the algorithms designed in this chapter.
2. Trace out the algorithm Quick Sort on the data set {12, 45, ..., 10}.
3. Trace out the algorithm Merge Sort on the data set {1, ..., 6}.
4. Trace out the algorithm MaxMin on a data set consisting of at least 8 elements.
5. List out the merits and demerits of recursion.
6. When is it appropriate to use recursion?
7. The data structure used by recursive algorithms is _____________

Chapter 6

Representation and Traversal of a Binary Tree

6.1 BINARY TREE

As already seen in Chapter 2, a tree is a minimally connected graph without a circuit. It can also be said to be a graph with n vertices and (n-1) edges without a circuit, since for a graph with n vertices to be minimally connected there have to be n-1 edges. A tree is, therefore, a connected graph with n vertices and (n-1) edges. Sometimes a specific node is given importance; the tree is then called a "rooted" tree. Such a tree is a collection of vertices and edges in which one vertex has been identified as the root and the remaining vertices are categorized into k classes (k >= 0), where each class is itself a rooted tree. Each such rooted-tree class is called a sub-tree of the given tree.

[Fig. 6.1 (a) A complete binary tree (b) A full binary tree (c) A binary tree showing levels]

In the above definition, if k is restricted to be at most 2, then the rooted tree is called a "binary tree". If k = 0, it implies that there are no children; a node without any children is called a "leaf" node. A "complete" binary tree is one which allows sequencing of the nodes: all the previous levels are maximally accommodated before the next level is accommodated, i.e., the siblings are first accommodated before the children of any one of them. A binary tree which is maximally accommodated, with all leaves at the same level, is called a "full" binary tree. A full binary tree is always complete, but a complete binary tree need not be full. Fig. 6.1 (a) and (b) show examples of complete and full binary trees.

The number of levels in the tree is called the "depth" of the tree. The maximum number of vertices at each level in a binary tree can be found as follows:

At level 0: 2^0 vertices
At level 1: 2^1 vertices
At level 2: 2^2 vertices
...
At level i: 2^i vertices

Therefore, the maximum number of vertices in a binary tree of depth l is

  2^0 + 2^1 + 2^2 + ... + 2^l = 2^(l+1) - 1

A binary tree indicating the levels is shown in Fig. 6.1 (c).

6.2 REPRESENTATION OF A BINARY TREE

A binary tree can be represented using two methods: static allocation and dynamic allocation. In static allocation, we have two ways of representing the binary tree: one is through the use of adjacency matrices and the other through the use of a single dimensional array.

6.2.1 Adjacency Matrix Representation

A two dimensional array can be used to store the adjacency relations very easily and can be used to represent a binary tree. In this representation, to represent a binary tree with n vertices we use an n x n matrix.

Fig. Each row may have 0.2 is given below. All other columns have only one element. Here the row indices correspond to the parent nodes and the column corresponds to the child nodes. There is space allocated for n x n matrix. 6. only one entry means that the node has only one child and two entries denote that the node has both the children. the percentage of space utilization is given as follows: . i.Representation and Traversal of a Binary Tree matrix. 1 or 2 entries. Zero entry in the row means that.. A row corresponding to the vertex vi having the entries ‘L’ and ‘R’ indicate that vi has as its left child the index corresponding to the column with the entry ‘L’ and the index corresponding to the column with ‘R’ entry as its right child. let us see the space utilization of this method of binary tree representation. Therefore.. ABCDEFGHIJK A LR B LR C L D E LR F R G H I LR J K Now. Let ‘n’ be the number of vertices. we have n2 space allocated. Zero entry in the column indicates that the node is the root. “L” is used to represent left child and “R” is used to represent right child entries. that element is a leaf node. The column with no entries corresponds to the root node.e.e.2 A Binary tree The adjacency matrix representation for the binary tree shown in Fig. 6. i.60 Chapter 6 . but we have only n-1 entries in the matrix.

The space utilization decreases as n increases, and for large n the percentage of utilization becomes negligible. This is, therefore, not the most efficient method of representing a binary tree. An important observation to be made here is that the organization of the data in the binary tree decides the space utilization of the representation used.

6.2.2 Single Dimensional Array Representation

Since the two dimensional array is a sparse matrix, we can consider the prospect of mapping the tree onto a single dimensional array for better space utilization. In this representation, we have to note the following points:

- The left child of the ith node is placed at the (2i)th position.
- The right child of the ith node is placed at the (2i+1)th position.
- The parent of the ith node is at the (i/2)th position in the array.

If l is the depth of the binary tree, then the number of possible nodes in the binary tree is 2^(l+1) - 1; hence it is necessary to have 2^(l+1) - 1 locations allocated to represent the binary tree. Thus, the single dimensional array representation for the binary tree shown in Fig. 6.2 turns out to be as follows (a dash denotes an empty position):

  Position: 1  2  3  4  5  6  7  8  9  10  11  12  13  14 ... 26  27 ... 31
  Node:     A  B  C  D  E  F  -  -  -  G   H   -   I   -      J   K

If n is the number of nodes, then the percentage of utilization is

  n / (2^(l+1) - 1) x 100

For a complete and full binary tree there is 100% utilization, while there is maximum wastage if the binary tree is right-skewed or left-skewed, where only l+1 locations are utilized out of the 2^(l+1) - 1 locations allocated; this is the worst possible utilization in the single dimensional array representation.
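The index arithmetic is easy to capture in C. The sketch below stores the tree of Fig. 6.2 in a 1-based array of characters (index 0 unused, '\0' marking empty positions) and shows the parent and child index computations; the helper names and the assumed node positions follow the array laid out above.

#include <stdio.h>

#define TREE_SIZE 32            /* 2^(l+1) locations for depth l = 4; index 0 unused */

/* Index arithmetic of the single dimensional array representation. */
static int left_child(int i)  { return 2 * i; }
static int right_child(int i) { return 2 * i + 1; }
static int parent(int i)      { return i / 2; }

int main(void)
{
    char t[TREE_SIZE] = {0};    /* '\0' means "no node at this position" */

    /* The tree of Fig. 6.2, as laid out in the array above. */
    t[1] = 'A';  t[2] = 'B';  t[3] = 'C';
    t[4] = 'D';  t[5] = 'E';  t[6] = 'F';
    t[10] = 'G'; t[11] = 'H'; t[13] = 'I';
    t[26] = 'J'; t[27] = 'K';

    printf("left child of E  : %c\n", t[left_child(5)]);    /* G */
    printf("right child of E : %c\n", t[right_child(5)]);   /* H */
    printf("parent of I      : %c\n", t[parent(13)]);       /* F */
    return 0;
}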

The other alternative way of representing the binary tree is based on linked allocation, the dynamic allocation method.

6.2.3 Linked Representation

Here, the ordering of nodes is not mandatory, and we require a header to point to the root node of the binary tree. In this representation the "structure" of each node becomes regular: every node structure holds a pointer to its left child, the data of the node, and a pointer to its right child. This is pictorially represented below:

  Lchild | Data | Rchild

[Fig. 6.3 Linked representation of a binary tree]

A binary tree represented using the linked representation is shown in Fig. 6.3. If there are n nodes to be represented, only n node structures are allocated. This means that there are 2n link fields, out of which only n-1 are actually used to point to other nodes; the remaining links are wasted. Therefore the percentage of utilization is

  (n-1)/(2n) x 100, which is approximately 50%.
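In C, the node structure of the linked representation is typically declared as below. The type name node and the small constructor make_node are our own naming, used again in the traversal sketch later in the chapter.

#include <stdio.h>
#include <stdlib.h>

/* One node of the linked representation: Lchild | Data | Rchild. */
struct node {
    struct node *lchild;
    char         data;
    struct node *rchild;
};

/* Allocate and initialise a node with no children. */
struct node *make_node(char data)
{
    struct node *n = malloc(sizeof *n);
    if (n != NULL) {
        n->lchild = NULL;
        n->data   = data;
        n->rchild = NULL;
    }
    return n;
}

int main(void)
{
    /* A tiny two-level tree: A with children B and C. */
    struct node *root = make_node('A');
    root->lchild = make_node('B');
    root->rchild = make_node('C');
    printf("root %c, left %c, right %c\n",
           root->data, root->lchild->data, root->rchild->data);
    free(root->lchild);
    free(root->rchild);
    free(root);
    return 0;
}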

6.3 TRAVERSAL OF A BINARY TREE

Traversal is the most important operation done on a binary tree. Traversal is the process of visiting all the vertices of the tree in a systematic order; systematic means that every time the tree is traversed it should yield the same result.

There are three major methods of traversal, and the rest are reversals of them. They are:

1. Pre-order traversal
2. In-order traversal
3. Post-order traversal

Pre-Order Traversal

In this traversal, the nodes are visited in the order root, left child and then right child, i.e., the root node is visited first, then the left sub-tree is traversed, and then the right sub-tree. The leaf nodes denote the stopping criteria, and we go to the sibling sub-tree after the completion of a sub-tree. The pre-order traversal sequence for the binary tree shown in Fig. 6.3 is:

A B D E G H C F I J K

In-Order Traversal

In this traversal, the nodes are visited in the order left child, root and then right child, i.e., the left sub-tree is traversed first, then the root is visited, and then the right sub-tree is traversed. Here also, the leaf nodes represent the stopping criteria. The in-order traversal sequence for the above considered example (Fig. 6.3) is:

D B G E H A F J I K C

Post-Order Traversal

In this traversal, the nodes are visited in the order left child, right child and then root, i.e., the left sub-tree is traversed first, then the right sub-tree, and the root is visited last. The post-order traversal sequence for the above example (Fig. 6.3) is:

D G H E B J K I F C A

6.3.1 Traversal of a Binary Tree Represented in an Adjacency Matrix

Traversing a binary tree from its adjacency matrix representation firstly requires finding out the root node; it then entails traversing the left sub-tree and the right sub-tree in the specific order required. In order to remember the nodes already visited, it is necessary to maintain a stack data structure. Thus, the following are the steps involved in traversing through a binary tree given in an adjacency matrix representation:

1. Locate the root (the column sum is zero for the root).
2. Display it.
3. Push it on to the stack.
4. Scan its row in search of 'L' for the left child information.
5. Pop from the stack.
6. Scan the row in search of 'R' for the right child information.
7. Check whether the stack IsEmpty().
8. Stop.

Sequencing the above stated steps helps us in arriving at the preorder, inorder and postorder traversal sequences.

Preorder Traversal

Fig. 6.4 shows the flow graph of the preorder traversal. Note that before popping, we need to check whether the stack is empty or not.

[Fig. 6.4 Preorder flow graph]
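As a hedged sketch, the following C program performs a preorder traversal directly from the adjacency matrix of Fig. 6.2. It locates the root as the column whose sum of entries is zero and then scans each row for 'L' and 'R'; recursion plays the role of the explicit stack mentioned in the steps above. The helper names and the child assignments (which reproduce the tree implied by the traversal sequences given earlier) are ours.

#include <stdio.h>

#define N 11   /* vertices A..K of Fig. 6.2 */

/* adj[i][j] is 'L' or 'R' if vertex j is the left/right child of vertex i. */
static char adj[N][N];

static void set_children(int parent, int left, int right)
{
    if (left  >= 0) adj[parent][left]  = 'L';
    if (right >= 0) adj[parent][right] = 'R';
}

/* Preorder: display the node, then its left sub-tree, then its right sub-tree. */
static void preorder(int v)
{
    int j;
    printf("%c ", 'A' + v);
    for (j = 0; j < N; j++)            /* scan the row for the left child */
        if (adj[v][j] == 'L')
            preorder(j);
    for (j = 0; j < N; j++)            /* scan the row for the right child */
        if (adj[v][j] == 'R')
            preorder(j);
}

int main(void)
{
    int i, j, root = -1;

    /* Tree of Fig. 6.2: A(B,C) B(D,E) C(F,-) E(G,H) F(-,I) I(J,K). */
    set_children(0, 1, 2);    /* A -> B, C */
    set_children(1, 3, 4);    /* B -> D, E */
    set_children(2, 5, -1);   /* C -> F    */
    set_children(4, 6, 7);    /* E -> G, H */
    set_children(5, -1, 8);   /* F ->    I */
    set_children(8, 9, 10);   /* I -> J, K */

    /* The root is the column with no entries (its column sum is zero). */
    for (j = 0; j < N && root < 0; j++) {
        int sum = 0;
        for (i = 0; i < N; i++)
            if (adj[i][j] != 0)
                sum++;
        if (sum == 0)
            root = j;
    }

    preorder(root);            /* prints A B D E G H C F I J K */
    printf("\n");
    return 0;
}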

Inorder Traversal

The in-order traversal will also have the same steps explained above; only the flow graph sequence changes, and it is given in Fig. 6.5.

[Fig. 6.5 Inorder flow graph]

Designing a flow graph for the post-order traversal is left as an exercise to the students.

6.3.2 Binary Tree Traversal from the One Dimensional Array Representation

Preorder Traversal

Algorithm: Preorder Traversal
Input: A[], the one dimensional array representing the binary tree
       i, the root address   // initially i = 1
Output: preorder sequence
Method:
  If (A[i] != 0)
    Display(A[i])
    Preorder Traversal(A, 2i)
    Preorder Traversal(A, 2i+1)
  If end
Algorithm ends

Inorder Traversal

Algorithm: Inorder Traversal
Input: A[], the one dimensional array representing the binary tree
       i, the root address   // initially i = 1
Output: inorder sequence
Method:
  If (A[i] != 0)
    Inorder Traversal(A, 2i)
    Display(A[i])
    Inorder Traversal(A, 2i+1)
  If end
Algorithm ends

Postorder Traversal

Algorithm: Postorder Traversal
Input: A[], the one dimensional array representing the binary tree
       i, the root address   // initially i = 1
Output: postorder sequence
Method:
  If (A[i] != 0)
    Postorder Traversal(A, 2i)
    Postorder Traversal(A, 2i+1)
    Display(A[i])
  If end
Algorithm ends
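These three algorithms translate directly into C over the 1-based character array used earlier for Fig. 6.2 (a '\0' entry plays the role of the zero test in the pseudocode). A bound check on the index, which the pseudocode leaves implicit, is added; only the position of the display call differs between the three routines.

#include <stdio.h>

#define TREE_SIZE 32   /* 1-based array; index 0 unused, '\0' = no node */

static void preorder(const char t[], int i)
{
    if (i < TREE_SIZE && t[i] != '\0') {
        printf("%c ", t[i]);           /* root first */
        preorder(t, 2 * i);            /* then left sub-tree */
        preorder(t, 2 * i + 1);        /* then right sub-tree */
    }
}

static void inorder(const char t[], int i)
{
    if (i < TREE_SIZE && t[i] != '\0') {
        inorder(t, 2 * i);
        printf("%c ", t[i]);           /* root in between */
        inorder(t, 2 * i + 1);
    }
}

static void postorder(const char t[], int i)
{
    if (i < TREE_SIZE && t[i] != '\0') {
        postorder(t, 2 * i);
        postorder(t, 2 * i + 1);
        printf("%c ", t[i]);           /* root last */
    }
}

int main(void)
{
    char t[TREE_SIZE] = {0};
    t[1] = 'A';  t[2] = 'B';  t[3] = 'C';  t[4] = 'D';  t[5] = 'E';
    t[6] = 'F';  t[10] = 'G'; t[11] = 'H'; t[13] = 'I'; t[26] = 'J'; t[27] = 'K';

    preorder(t, 1);  printf("\n");     /* A B D E G H C F I J K */
    inorder(t, 1);   printf("\n");     /* D B G E H A F J I K C */
    postorder(t, 1); printf("\n");     /* D G H E B J K I F C A */
    return 0;
}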

We shall now look into the tree traversals in the linked representation.

6.3.3 Binary Tree in Linked Representation

As we have already studied, in this representation every node has a structure with links to its left and right children. The algorithms for traversing the binary tree in the linked representation are given below.

Algorithm: Pre-order Traversal
Input: bt, address of the root node
Output: preorder sequence
Method:
  if (bt != NULL)
  {
    Display([bt].data)
    Pre-order Traversal([bt].Lchild)
    Pre-order Traversal([bt].Rchild)
  }
Algorithm ends

Algorithm: In-order Traversal
Input: bt, address of the root node
Output: inorder sequence
Method:
  if (bt != NULL)
  {
    In-order Traversal([bt].Lchild)
    Display([bt].data)
    In-order Traversal([bt].Rchild)
  }
Algorithm ends

Algorithm: Post-order Traversal
Input: bt, address of the root node
Output: postorder sequence
Method:
  if (bt != NULL)
  {
    Post-order Traversal([bt].Lchild)
    Post-order Traversal([bt].Rchild)
    Display([bt].data)
  }
Algorithm ends
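Reusing the node structure from the earlier sketch, the linked-representation traversals look like this in C; all three are exercised in main on a small, hypothetical four-node tree built on the spot.

#include <stdio.h>
#include <stdlib.h>

struct node {
    struct node *lchild;
    char         data;
    struct node *rchild;
};

static struct node *make_node(char data)
{
    struct node *n = malloc(sizeof *n);
    n->lchild = NULL;
    n->data   = data;
    n->rchild = NULL;
    return n;
}

/* The three traversals of the linked representation. */
static void preorder(const struct node *bt)
{
    if (bt != NULL) {
        printf("%c ", bt->data);
        preorder(bt->lchild);
        preorder(bt->rchild);
    }
}

static void inorder(const struct node *bt)
{
    if (bt != NULL) {
        inorder(bt->lchild);
        printf("%c ", bt->data);
        inorder(bt->rchild);
    }
}

static void postorder(const struct node *bt)
{
    if (bt != NULL) {
        postorder(bt->lchild);
        postorder(bt->rchild);
        printf("%c ", bt->data);
    }
}

int main(void)
{
    /* A small tree: A with children B and C; B has a left child D. */
    struct node *root = make_node('A');
    root->lchild = make_node('B');
    root->rchild = make_node('C');
    root->lchild->lchild = make_node('D');

    preorder(root);  printf("\n");   /* A B D C */
    inorder(root);   printf("\n");   /* D B A C */
    postorder(root); printf("\n");   /* D B C A */
    return 0;
}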

SUMMARY

In this chapter binary trees are introduced. Trees form the core of non-linear data structures, and the binary tree helps in storing data more efficiently, although the access is not as simple as with arrays. The various ways of representing a binary tree (one dimensional arrays, two dimensional arrays and the linked representation) are discussed. The algorithms for traversing a binary tree held in these various representations are also presented.

EXERCISE

1. What is a binary tree?
2. Differentiate complete and full binary trees.
3. What is the maximum number of nodes in a binary tree of level 7, 8 and 9?
4. What is the wastage of memory for a binary tree with 16 nodes represented in a 1D array, a 2D array and a linked representation?
5. For at least 5 binary trees of your choice, of different depths greater than or equal to 6, obtain the preorder, postorder and inorder sequences.

