
MARTHANDAM COLLEGE OF ENGINEERING AND TECHNOLOGY

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING (UG & PG)

DATA STRUCTURES

Second Year Computer Science and Engineering, 3rd Semester

16-Mark Questions and Answers

Subject Code & Name: CS 2201 & DATA STRUCTURES

Prepared by: SAHITHA RAJ A.D.


UNIT –I

LINEAR STRUCTURES

1.Explain the linked list implementation?

Function to test whether a linked list is empty

/* Return True if L is empty */


int IsEmpty(List L)
{
return L→Next ==NULL;
}

Example:

Function to test whether current position is last in a linked list

/* Return true if P is the last position in list L */


int IsLast(Position P, List L)
{
return P→Next==NULL;
}

Example:
Find routine

/* Return Position of X in L;NULL if not found */

Position Find(ElementType X, List L)


{
Position P;
P=L→Next;
while(P!=NULL && P→Element!=X)
P=P→Next;
return P;
}
Example: Consider X=20

Deletion Routine for linked list

void Delete (ElementType X,List L)


{
Position P, TmpCell;
P=FindPrevious(X,L);
if(!IsLast(P,L))
{
TmpCell=P→Next;
P→Next=TmpCell→Next;
free(TmpCell);
}
}

Example:

Find Previous

/* If X is not found, then the Next field of the returned Position is NULL */

Position FindPrevious(ElementType X,List L)


{
Position P;
P=L;
while(P→Next!=NULL && P→Next→Element!=X)
P=P→Next;
return P;
}
Eg: FindPrevious(20,L)
Insertion Routine for Linked List

 The next routine is Insertion routine. We will pass an element to be inserted along with
list L and a Position P. The insertion routine will insert an element after Position P.

void Insert (ElementType X, List L, Position P)


{
Position Tmpcell;
Tmpcell=malloc(sizeof(struct Node));
if(Tmpcell==NULL)
FatalError("Out of space");
Tmpcell→Element=X;
Tmpcell→Next=P→Next;
P→Next=Tmpcell;
}

2.Explain the cursor-based implementation?

Function to test whether P is last in a linked list

/*return true if p is the last position in List L*/


int IsLast(Position P, List L)
{
return CursorSpace[P].Next==0;
}

Find routine

/* return position of x in L;0 if not found*/


/*uses a header node*/
Position Find(ElementType X, List L)
{
Position P;
P=CursorSpace[L].Next;
while(P!=0 && CursorSpace[P].Element!=X)
P=CursorSpace[P].Next;
return P;
}

Deletion Routine

/*Delete first occurrence of X from a list */


/*Assume use of a header node */
void Delete (ElementType X, List L)
{
Position P, Tmpcell;
P=FindPrevious(X,L);
if(!IsLast(P,L))
{
Tmpcell=CursorSpace[P].Next;
CursorSpace [P].Next= CursorSpace[Tmpcell].Next;
CursorFree (Tmpcell);
}
}

Insertion Routine

/*Insert after Position P*/

void Insert(ElementType X, List L, Position P)
{
Position TmpCell;
TmpCell=CursorAlloc();
if(TmpCell==0)
FatalError("Out of space");
CursorSpace[TmpCell].Element=X;
CursorSpace[TmpCell].Next=CursorSpace[P].Next;
CursorSpace[P].Next=TmpCell;
}

Example of a cursor implementation of linked lists

Slot Element Next


-----------------------------------------------------
0 - 6
1 b 9
2 f 0
3 header 7
4 - 0
5 header 10
6 - 4
7 c 8
8 d 2
9 e 0
10 a 1

Figure: Example of a cursor implementation of linked lists

 In the above example, if the value of L is 5 and the value of M is 3, then (following the Next fields in the table) L represents the list a, b, e and M represents the list c, d, f.
 The ‘L’ list is as follows:

 The header of L is in position 5 in the cursor space.


 The ‘M’ list is as follows:

 The header of M is in position 3 in the cursor space.

Difference between cursor implementation and pointer implementation

Pointer implementation
1. Data are stored in a collection of structures. Each structure contains the data and a next pointer.
2. The malloc function is used to create a node and the free function is used to release a cell.

Cursor implementation
1. Data are stored in a global array of structures. Here the array index is considered as an address.
2. The CursorAlloc function is used to allocate space for a new node and the CursorFree function is used to release a cell.

3.Explain doubly linked list?

 A Doubly linked list is a linked list in which each node has three fields namely,
o Data field
o Forward Link(FLINK)
o Backward Link(BLINK)

FLINK-Points to the successor node in the list

BLINK-Points to the predecessor node in the list

Node in Doubly Linked List

BLINK | DATA | FLINK

Figure: A doubly linked list

A Doubly linked list with a header


Insert an element in a Doubly linked list

Delete an element in a doubly linked list

Advantages

 Deletion operation is easier.


 Finding the predecessor and successor of node is easier.

Disadvantage

 More memory space is required since it has two pointers.

4.Explain the applications of lists?


 The applications of lists are the following:
1. Polynomial ADT
2. Radix Sort
3. Multilist

1. Polynomial ADT
o An abstract data type for single-variable polynomials (with non-negative
exponents) can be defined by using a list.
o Let F(X) = Σ (i = 0 to N) Ai X^i. If most of the coefficients Ai are nonzero, then a simple
array is used to store the coefficients.
o Then write the routines to perform addition, subtraction, multiplication and other
operations on these polynomials.
Linked list representation of polynomials

Consider the polynomials

P1(X) = 10X^1000 + 5X^14 + 1 and

P2(X) = 3X^1990 - 2X^1492 + 11X + 5

 The first node of the list is header.


 The first field of the node is the coefficient.
 The second field is the exponent.
 The third field is the pointer to the next node.

Type Declaration For Array Implementation Of Polynomial ADT:

typedef struct
{
int CoeffArray[MaxDegree + 1];
int HighPower;
}*Polynomial;

Procedure To Initialize A Polynomial To Zero:

void ZeroPolynomial(Polynomial Poly)


{
int i;
for(i=0; i<=MaxDegree; i++)
Poly→CoeffArray[i]=0;
Poly→HighPower =0;
}

Procedure To Add Two Polynomials:

void AddPolynomial(const Polynomial Poly1, const Polynomial Poly2, Polynomial PolySum)


{
int i;
ZeroPolynomial(PolySum);
PolySum→HighPower=Max(Poly1→HighPower,Poly2→HighPower);
for(i=PolySum→HighPower;i>=0;i--)
PolySum→CoeffArray[i]=Poly1→CoeffArray[i]+Poly2→CoeffArray[i];
}

Procedure To Multiply Two Polynomials:

void MultPolynomials(const Polynomial Poly1,const Polynomial Poly2,Polynomial PolyProd)


{
int i,j;
ZeroPolynomial(PolyProd);
PolyProd→HighPower=Poly1→HighPower+Poly2→HighPower;
if(PolyProd→HighPower>MaxDegree)
Error("Exceeded array size");
else
for(i=0;i<=Poly1→HighPower;i++)
for(j=0;j<=Poly2→HighPower;j++)
PolyProd→CoeffArray[i+j]+=Poly1→CoeffArray[i]*Poly2→CoeffArray[j];
}

2. Radix sort

 The Radix Sort is implemented using Linked List.


 Radix sort performs sorting by using the bucket sort technique.
 In radix sort, bucket sort is first performed on the least significant digit, then the next digit, and
so on.
 It is sometimes known as card sort.
 The following example shows the action of radix sort on 10 numbers.

The input is 64,8,216,512,27,729,0,1,343,125.

 The first step bucket-sorts by the least significant digit. So, the list sorted by least
significant digit is: 0, 1, 512, 343, 64, 125, 216, 27, 8, 729.

0 1 512 343 64 125 216 27 8 729


--------------------------------------------------
0 1 2 3 4 5 6 7 8 9

Figure: Buckets after first step of radix sort

 These are now sorted by the next significant digit: 0, 1, 8, 512, 216, 125, 27, 729, 343, 64.

8 729
1 216 27
0 512 125 343 64
--------------------------------------
0 1 2 3 4 5 6 7 8 9

Figure: Buckets after the second pass of radix sort

 The list is now sorted with respect to the two least significant digits.
 The final pass, bucket-sorts by the most significant digit.
 The final list is 0,1,8,27,64,125,216,343,512,729.

64
27
8
1
0 125 216 343 512 729
------------------------------------------
0 1 2 3 4 5 6 7 8 9

Figure: Buckets after the last pass of radix sort


3. MULTILISTS
 When two or more lists are combined into one, the result is called a multilist.

Example:

A university with 40,000 students and 2,500 courses needs to be able to generate
two types of reports.
1. The first report lists the registration for each class.
2. The second report lists, by student, the classes that each student is
registered for.
 The implementation might use a two-dimensional array. Such an array would
have 100 million entries. The average student registers for about three courses, so
only 120,000 of these entries would have meaningful data.
 This problem can be easily solved using linked list. Two lists are needed for this
implementation.
 A list for each class containing the student in the class, and a list for each student
containing the classes the student is registered for.
 The following figure shows the implementation.
 As the figure shows, two lists are combined into one.
 All lists use a header and are circular.
Figure: Multilist implementation for registration problem

 To list all of the students in class C3, start at C3 and traverse its list by going right.
 The first cell belongs to student s1.

5.Explain the linked list implementation of stack?

Routine to Test Whether a stack is empty

int IsEmpty(Stack S)
{
return S→Next ==NULL;
}

Routine to create an Empty Stack

Stack CreateStack(void)
{
Stack S;
S=malloc(sizeof(struct Node));
if(S==NULL)
FatalError("Out of space");
MakeEmpty(S);
return S;
}

Routine for MakeEmpty

void MakeEmpty(Stack S)
{
if(S==NULL)
Error(“Must use CreateStack first”);
else
while(!IsEmpty(S))
Pop(S);
}

Routine to push an element into a Stack

void Push(ElementType X,Stack S)


{
PtrToNode TmpCell;
TmpCell=malloc(sizeof(struct Node));
if(TmpCell==NULL)
FatalError("Out of space");
else
{
TmpCell→Element=X;
TmpCell→Next=S→Next;
S→Next=TmpCell;
}
}
Example:

Insert 5

 Insertion is done at front of list

Routine to return the top element of a Stack

ElementType Top(Stack S)
{
if(!IsEmpty(S))
return S→Next→Element;
Error("Empty Stack");
return 0;
}

Example:
 It returns 10

Routine to Pop from a Stack

void Pop(Stack S)
{
PtrToNode FirstCell;
if(IsEmpty(S))
Error("Empty Stack");
else
{
FirstCell=S→Next;
S→Next=S→Next →Next;
Free(FirstCell);
}
}
Example:

Drawbacks of Linked list Implementation

 Malloc and free calls are expensive.


o This can be avoided by using a second stack, which is initially empty.
When a cell is to be dropped from the first stack, it is placed in the second stack. Then, when
new cells are needed for the first stack, the second stack is checked first.

6.Explain the array implementation of stack?


Empty Stack Initialization

 Each stack has a TopOfStack field, whose value is set to -1 for an empty stack.

PUSH

 To push some element X onto the stack, increment TopOfStack and then set
Stack[TopOfStack]=X, where Stack is the array representing the actual stack.

POP:

 To pop, set the return value to Stack[TopOfStack] and then decrement TopOfStack.

The Push and Pop operations are very fast. In some machines it takes only one machine
instruction.

 The structure contains the TopofStack and capacity fields.


 Once the maximum size is known the stack array can be dynamically allocated.

#define EmptyTOS (-1)

#define MinStackSize (5)

struct StackRecord
{
int Capacity;
int TopOfStack;
ElementType *Array;
};

 Now the maximum size of stack is known, Stack can be dynamically allocated.

Stack Creation

Stack CreateStack(int MaxElements)


{
Stack S;
if(MaxElements<MinStackSize)
Error("Stack size is too small");
S=malloc(sizeof(struct StackRecord));
if(S==NULL)
FatalError("Out of space");
S→Array=malloc(sizeof(ElementType)*MaxElements);
if(S→Array==NULL)
FatalError("Out of space");
S→Capacity=MaxElements;
MakeEmpty(S);
return S;
}
 This routine creates a Stack of a given maximum size. The malloc function allocates the
Stack Structure and array. Then TopOfStack and Capacity fields are initialized.

Routine for Freeing Stack

 DisposeStack should be written to free the stack structure.

void DisposeStack ( Stack S)


{
if (S!=NULL)
{
Free (S→Array);
Free (S);
}
}
Example

Routine to test whether a stack is empty

int IsEmpty(Stack S)
{
return S→TopOfStack==EmptyTOS;
}
Routine to create an empty stack

void MakeEmpty (Stack S)


{
S→TopOfStack = EmptyTOS;
}

Routine to Pop from Stack

void Pop (Stack S)


{
if(IsEmpty(S))
Error("Empty Stack");
else
S→TopOfStack--;
}
Example:

Routine to return top of Stack


ElementType Top(Stack S)
{
if(!IsEmpty(S))
return S→Array[S→TopOfStack];
Error("Empty Stack");
return 0;
}
Routine to give top element and Pop a stack

ElementType TopAndPop (Stack S)


{
if(!IsEmpty(S))
return S→Array[S→TopOfStack--];
Error("Empty Stack");
return 0;
}

7.Convert the expression below to postfix


A*B+(C-D/E) #

The postfix expression is AB*CDE/-+.

8.Convert the infix expression a + b * c + ( d*e + f ) * g to postfix

First, the symbol a is read, so it is passed through to the output. Then '+' is read and
pushed onto the stack. Next b is read and passed through to the output.

Next '*' is read. The top entry on the operator stack has lower precedence than '*', so nothing is
output and '*' is put on the stack. Next, c is read and output. Thus far, we have
The next symbol is a '+'. Checking the stack, we find that we will pop the '*' and place it on the
output, then pop the other '+', which is of equal (not lower) priority, and output it, and then push
the new '+'.

The next symbol read is an '(', which, being of highest precedence, is placed on the stack. Then d
is read and output.

We continue by reading a '*'. Since open parentheses do not get removed except when a closed
parenthesis is being processed, there is no output. Next, e is read and output.

The next symbol read is a '+'. We pop and output '*' and then push '+'. Then we read and output
f.
Now we read a ')', so the stack is emptied back to the '('. We output a '+'.

We read a '*' next; it is pushed onto the stack. Then g is read and output.

The input is now empty, so we pop and output symbols from the stack until it is empty.

 The postfix expression is abc * + de * f + g * +


UNIT II

TREE STRUCTURES

1.Explain the tree traversal with example?


There are 3 types of tree traversals
 Inorder Traversal
 Preorder Traversal
 Postorder Traversal

Inorder Traversal
 Traverse the left subtree
 process the node
 process the right subtree

Pseudocode for Inorder


void Inorder(Tree T)
{
if(T!=NULL)
{
Inorder(T→Left);
PrintElement(T→Element);
Inorder(T→Right);
}
}

Preorder Traversal
 process the node
 process left subtree in preorder
 process right subtree in preorder

Pseudocode for Preorder


void Preorder(Tree T)
{
if(T!=NULL)
{
PrintElement(T→Element);
Preorder(T→Left);
Preorder(T→Right);
}
}

Postorder Traversal:
 process the left subtree
 process the right subtree
 process the root

Pseudocode for Postorder


void Postorder(Tree T)
{
if(T!=NULL)
{
Postorder(T→Left);
Postorder(T→Right);
PrintElement(T→Element);
}
}

2.Explain the expression tree with examples?

An expression tree is a binary tree in which the leaf nodes are operands and the interior nodes
contain operators.
For eg:
Expression tree for (a+b*c)+((d*e+f)*g)

 If the operator is unary minus operator, the node can have only one child.

Infix Expression (Inorder Traversal)


To get infix expression, produce the parenthesized left expression then print the operator
and finally parenthesized right expression. Eg. a+b*c+d*e+f*g

Postorder Traversal
 Recursively print left subtree
 Then right subtree
 Operator
 Eg: abc*+de*f+g*+

Preorder Traversal
 Print the operator
 Left subtree
 Right subtree

3.Explain the applications of trees?

There are many applications for trees. One of the popular uses is the directory structure in
many common operating systems, including UNIX, VAX/VMS, and DOS.

Fig: Unix Directory


The root of this directory is /usr. /usr has three children, mark, alex, and bill, which are
themselves directories. Thus, /usr contains three directories and no regular files. The filename
/usr/mark/book/ch1.r is obtained by following the leftmost child three times. Each / after the first
indicates an edge; the result is the full pathname. This hierarchical file system is very popular,
because it allows users to organize their data logically. Furthermore, two files in different
directories can share the same name, because they must have different paths from the root and
thus have different pathnames.

Routine to list a directory in a hierarchical file system


void list_directory ( Directory_or_file D )
{
list_dir ( D, 0 );
}

void list_dir ( Directory_or_file D, unsigned int depth )


{
if ( D is a legitimate entry)
{
print_name ( depth, D );
if( D is a directory )
for each child, c, of D
list_dir( c, depth+1 );
}
}

The popular methods for traversing a tree are preorder, postorder and inorder. In a
preorder traversal, work at a node is performed before (pre) its children are processed. In a
postorder traversal, the work at a node is performed after (post) its children are evaluated.

Routine to calculate the size of a directory


unsigned int size_directory( Directory_or_file D )
{
unsigned int total_size;
total_size = 0;
if( D is a legitimate entry)
{
total_size = file_size( D );
if( D is a directory )
for each child, c, of D
total_size += size_directory( c );
}
return( total_size );
}

4.Explain the binary search tree operations?

Binary Search tree declarations


struct TreeNode
{
ElementType Element;
SearchTree Left;
SearchTree Right;
};

Make Empty
This operation is mainly for initialization. When programmers prefer to initialize the first
element as a one node tree.

SearchTree MakeEmpty(SearchTree T)
{
if(T!=NULL)
{
MakeEmpty(T→Left);
MakeEmpty(T→Right);
free(T);
}
return NULL;
}

Find:
This operation returns a pointer to the node in tree T that has a key X or NULL if there is
no such node.
Position Find(ElementType X, SearchTree T)
{
if(T==NULL)
return NULL;
if(X < T→Element)
return Find(X, T→Left);
else if(X > T→Element)
return Find(X, T→Right);
else
return T;
}

Eg: Find an element 10 from binary search tree(X=10)

 10 is checked with root


 Here, 10>8, goto right child of 8
 10 is checked with 15
 10<15, goto left child of 15

 10 is checked with 10
 Element is found

FindMin and FindMax:

FindMin – returns the position of smallest elements in the tree

Position FindMin(SearchTree T)
{
if(T==NULL)
return NULL;
else if(T→Left==NULL)
return T;
else
return FindMin(T→Left);
}

Non-recursive routine for FindMin


Position FindMin(SearchTree T)
{
if (T!=NULL)
while (T→Left!=NULL)
T=T→Left;
return T;
}

FindMax – This operation returns the largest elements in the tree


 Start at the root and go right as long as there is right child. The stopping point is the
largest element.

Recursive routine for FindMax:


Position FindMax(SearchTree T)
{
if(T==NULL)
return NULL;
else if(T→Right==NULL)
return T;
else
return FindMax(T→Right);
}
Non-recursive routine for FindMax
Position FindMax(SearchTree T)
{
if(T!=NULL)
while(T→Right!=NULL)
T=T→Right;
return T;
}
The Maximum element is 20

5.Explain the insert and delete operations in a binary search tree?


Insert
 To insert an element X into the tree
 Check with root node T
 If X < root: Traverse the left subtree recursively until T→Left equals NULL, then place X
in T→Left
 If X > root: Traverse the right subtree recursively until T→Right equals NULL, then place
X in T→Right

Routine to insert into a Binary Search Tree


SearchTree Insert( ElementType X, SearchTree T )
{
if( T == NULL )
{
T = malloc( sizeof( struct TreeNode ) );
if( T == NULL )
FatalError( "Out of space!!!" );
else
{
T->Element = X;
T->Left = T->Right = NULL;
}
}
else
if( X < T->Element )
T->Left = Insert( X, T->Left );
else if( X > T->Element )
T->Right = Insert( X, T->Right );
return T;
}

Eg: Insert the element 8,5,10,15,20,18,3


Insert 8:

Insert 5:

Insert 10:

Insert 15:
Insert 20:

Insert 18:

Insert 3:

6.Explain the threaded binary tree?


A Threaded Binary Tree is a binary tree in which every node that does not have a right child has
a THREAD (in actual sense, a link) to its INORDER successor. By doing this threading we
avoid the recursive method of traversing a Tree, which makes use of stacks and consumes a lot
of memory and time.

The node structure for a threaded binary tree varies a bit and its like this
struct NODE
{
struct NODE *leftchild;
int node_value;
struct NODE *rightchild;
struct NODE *thread;
};

Let's make the Threaded Binary tree out of a normal binary tree...

The INORDER traversal for the above tree is -- D B A E C. So, the respective Threaded Binary
tree will be –

B has no right child and its inorder successor is A, so a thread has been made between
them. Similarly for D and E. C has no right child, and it has no inorder successor either, so it has
a hanging (NULL) thread.

Non recursive Inorder traversal for a Threaded Binary Tree

As this is a non-recursive method for traversal, it has to be an iterative procedure; meaning, all
the steps for the traversal of a node have to be under a loop so that the same can be applied to all
the nodes in the tree.

Consider the INORDER traversal again. Here, for every node, we'll visit the left sub-tree (if it
exists) first (if and only if we haven't visited it earlier); then we visit (i.e print its value, in our
case) the node itself and then the right sub-tree (if it exists). If the right sub-tree is not there, we
check for the threaded link and make the threaded node the current node in consideration. Please,
follow the example given below.

step-1:

'A' has a left child i.e B, which has not been visited. So, we put B in our "list of visited nodes"
and B becomes our current node in consideration.

List of visited node

Inorder

step-2:

'B' also has a left child, 'D', which is not there in our list of visited nodes. So, we put 'D' in that
list and make it our current node in consideration.

List of visited node

BD

Inorder

step-3:

'D' has no left child, so we print 'D'. Then we check for its right child. 'D' has no right child and
thus we check for its thread-link. It has a thread going till node 'B'. So, we make 'B' as our
current node in consideration.

List of visited node

BD

Inorder
D

step-4:

'B' certainly has a left child but it's already in our list of visited nodes. So, we print 'B'. Then we
check for its right child but it doesn't exist. So, we make its threaded node (i.e. 'A') our current
node in consideration.

List of visited node

BD

Inorder

DB

step-5:

'A' has a left child, 'B', but it's already there in the list of visited nodes. So, we print 'A'. Then we
check for its right child. 'A' has a right child, 'C', and it's not there in our list of visited nodes. So,
we add it to that list and make it our current node in consideration.

List of visited node

BDC

Inorder

DBA

step-6:

'C' has 'E' as its left child and it's not there in our list of visited nodes yet. So, we add it to that
list and make it our current node in consideration.

List of visited node

BDCE

Inorder

DBA

and finally.....

DBEAC
7.Explain the C implementation for threaded binary tree?

Algorithm:-
Step-1: For the current node check whether it has a left child which is not there in the visited list.
If it has then go to step-2 or else step-3.
Step-2: Put that left child in the list of visited nodes and make it your current node in
consideration. Go to step-6.
Step-3: For the current node check whether it has a right child. If it has then go to step-4 else go
to step-5
Step-4: Make that right child as your current node in consideration. Go to step-6.
Step-5: Check for the threaded node and if its there make it your current node.
Step-6: Go to step-1 if all the nodes are not over otherwise quit

struct NODE
{
struct NODE *left;
int value;
struct NODE *right;
struct NODE *thread;
};

inorder(struct NODE *curr)


{
while(all the nodes are not over )
{
if(curr->left != NULL && ! visited(curr->left))
{
visit(curr->left);
curr = curr->left;
}
else
{
printf("%d", curr->value);
if(curr->right != NULL)
curr = curr->right;
else
if(curr->thread != NULL)
curr = curr->thread;
}
}
}

8.Explain the deletion routine for binary search tree?


Deletion
Deletion is the complex operation in binary search tree. There are three possibilities:
 If the node is a leaf node, delete it immediately
 If the node has one child, it can be deleted by adjusting its parent pointer that point to its
child node
 If the node has 2 children, replace the data of the node to be deleted with the smallest
data of its right subtree and recursively delete that node.

Deletion Routine:

SearchTree Delete( ElementType X, SearchTree T )


{
Position TmpCell;

if( T == NULL )
Error( "Element not found" );
else
if( X < T->Element )
T->Left = Delete( X, T->Left );
else
if( X > T->Element )
T->Right = Delete( X, T->Right );
else
if( T->Left && T->Right )
{
TmpCell = FindMin( T->Right );
T->Element = TmpCell->Element;
T->Right = Delete( T->Element, T->Right );
}
else
{
TmpCell = T;
if( T->Left == NULL )
T = T->Right;
else if( T->Right == NULL )
T = T->Left;
free( TmpCell );
}

return T;
}

Case (i) Node to be deleted is leaf node:


Eg: Delete 8:

Case (ii) Node with one child


Eg: Delete 5:

Case (iii) Node with two children


Eg: Delete 5:
After deleting 5:

Example 2:
Delete 25
9.With a neat example, construct an expression tree:
An expression tree can be constructed from a postfix expression.
Algorithm for construction of expression tree:
1. If infix expression is given, convert it to postfix expression
2. Read the expression one symbol at a time
3. If the symbol is an operand, create a one node tree and push a pointer onto a stack
4. If the symbol is an operator, pop pointers of two trees T1 and T2 from the stack and
form a new tree whose root is the operator, T2 is the left child and T1 is the right
child. A pointer to this new tree is then pushed onto the stack.

Eg: Consider the expression a b + c d e + * *


The first two symbols are operands, so create one-node trees and push pointers to them
onto a stack.

Next, a '+' is read, so two pointers to trees are popped, a new tree is formed, and a pointer
to it is pushed onto the stack.

Next, c, d, and e are read, and for each a one-node tree is created and a pointer to the
corresponding tree is pushed onto the stack.
Now a '+' is read, so two trees are merged.

Now, a '*' is read, so we pop two tree pointers and form a new tree with a '*' as root.

Finally, the last symbol is read, two trees are merged, and a pointer to the final tree is left
on the stack.
UNIT –III

BALANCED TREE

1.Explain AVL tree with its rotation?

An AVL (Adelson-Velskii and Landis) tree is a binary search tree with a balance
condition. The balance condition
 Must be easy to maintain
 Ensures that the depth of the tree is O(log N)
 (Requiring the left and right subtrees of every node to have exactly the same height would be too restrictive.)
Definition:
An AVL tree is a binary search tree, except that for every node in the tree, the heights of
the left and right subtrees can differ by at most 1.
 The height of an empty tree is defined to be -1.

Balance Factor:
 The balance factor is the height of the left subtree minus the height of the right subtree.
 BF = height of left subtree - height of right subtree
 For an AVL tree, all balance factors should be +1, 0 or -1.

Example: consider two root nodes.

Tree 1: height of left subtree = 3, height of right subtree = 2; the heights differ by at most 1, so the AVL condition holds.
Tree 2: height of left subtree = 3, height of right subtree = 1; the heights differ by 2, so the AVL condition is violated.
Rotation
If we insert a node into an AVL tree, then the property of AVL tree may be violated. In
this case the property has to be restored by doing a simple modification to the tree known as
rotation.
Call the node to be rebalanced α. An AVL tree becomes height-imbalanced when
any one of the following conditions occurs.
1) An insertion into the left subtree of the left child of α.
2) An insertion into right subtree of left child of α.
3) An insertion into left subtree of right child of α.
4) An insertion into the right subtree of the right child of α.

Two types of rotations:


Single rotation: the insertion occurs on the outside, i.e. left-left or right-right (cases 1 and 4).
Double rotation: the insertion occurs on the inside, i.e. left-right or right-left, and handles
cases 2 and 3.

Single Rotation:
Case (i): An insertion into the left subtree of the left child of K2.
Single rotation to fix case 1

 K2 satisfies AVL property before insertion, but violates it afterwards.


 To rebalance the tree, move X up a level and Z down a level, Y stays at the same level.
 The height of entire subtree is exactly the same as the height of original subtree prior to
the insertion.
Eg: Insertion of 6 to the AVL tree

Routine to perform single rotation with left


static Position SingleRotateWithLeft( Position K2 )
{
Position K1;
K1 = K2->Left;
K2->Left = K1->Right;
K1->Right = K2;
K2->Height = Max( Height( K2->Left ), Height( K2->Right ) ) + 1;
K1->Height = Max( Height( K1->Left ), K2->Height ) + 1;
return K1;
}

Case (iv): An insertion into the right subtree of the right child of K1
Single Rotation to fix case 4:
Eg: Insertion of 10 to the AVL tree

Before Rotation After Rotation

Routine to perform single rotation with right:

static Position SingleRotateWithRight( Position K1 )


{
Position K2;
K2 = K1->Right;
K1->Right = K2->Left;
K2->Left = K1;
K1->Height = Max( Height( K1->Left ), Height( K1->Right ) ) + 1;
K2->Height = Max( Height( K2->Right ), K1->Height ) + 1;
return K2; /* New root */
}

Double Rotation:

Case (ii): An insertion into the right subtree of the left child of K3.

General Representation

Routine to perform double rotation with left:

static Position DoubleRotateWithLeft( Position K3 )


{
/* Rotate between K1 and K2 */
K3->Left = SingleRotateWithRight( K3->Left );

/* Rotate between K3 and K2 */


return SingleRotateWithLeft( K3 );
}

Double rotation can be performed with 2 single rotations.


Eg: insertion of either 12 or 18 makes the AVL tree imbalanced. This can be fixed by
performing a single rotation with right at 10 and then a single rotation with left at 20.

Case (iii): An insertion into the left subtree of the right child of K1.


This can be performed by single rotation with left and then single rotation with right.

Routine to perform double rotation with right:


static Position DoubleRotateWithRight( Position K1 )
{
/* Rotate between K3 and K2 */
K1->Right = SingleRotateWithLeft( K1->Right );

/* Rotate between K1 and K2 */


return SingleRotateWithRight( K1 );
}
Eg: insertion of either 11 or 14 makes the AVL tree imbalanced. This can be fixed by
performing a single rotation with left at 15 and then a single rotation with right at 10.

2.Explain the routines for AVL insertion and deletion?

Declaration
struct AvlNode
{
ElementType Element;
AvlTree Left;
AvlTree Right;
int Height;
};

Function to compute the height of an AVL node:


static int Height(Position P)
{
if(P==NULL)
return -1;
else
return P→Height;
}

Routine to insert in an AVL tree:


AvlTree Insert( ElementType X, AvlTree T )
{
if( T == NULL )
{
T = malloc( sizeof( struct AvlNode ) );
if( T == NULL )
FatalError( "Out of space!!!" );
else
{
T->Element = X; T->Height = 0;
T->Left = T->Right = NULL;
}
}
else
if( X < T->Element )
{
T->Left = Insert( X, T->Left );
if( Height( T->Left ) - Height( T->Right ) == 2 )
if( X < T->Left->Element )
T = SingleRotateWithLeft( T );
else
T = DoubleRotateWithLeft( T );
}
else
if( X > T->Element )
{
T->Right = Insert( X, T->Right );
if( Height( T->Right ) - Height( T->Left ) == 2 )
if( X > T->Right->Element )
T = SingleRotateWithRight( T );
else
T = DoubleRotateWithRight( T );
}

T->Height = Max( Height( T->Left ), Height( T->Right ) ) + 1;

return T;
}

3.Insert the following nodes into the AVL tree?

Eg: Insert the following elements into the AVL tree 2,1,4,5,9,3,6,7
4.Explain binary heap?

BINARY HEAP (HEAP)


The efficient way of implementing a priority queue is the binary heap. Binary heaps are
often referred to simply as heaps.
Properties of Binary Heap
 Structure property
 Heap order property

Structure Property:
 A heap is a binary tree that is completely filled with the possible exception of bottom
level which is filled from left to right. Such a tree is known as complete binary tree.
 A complete binary tree of height h has between 2^h and 2^(h+1) - 1 nodes.
 For example, if the height is 3, the number of nodes lies between 8 and 15 (i.e. 2^3 and 2^4 - 1).

A Complete binary tree

Array implementation of complete binary tree

Position: 0   1   2   3   4   5   6   7   8   9   10  11  12  13  14
Key:          A   B   C   D   E   F   G   H   I   J
 For any element in array position i, the left child is in position 2i, the right child is in
position (2i + 1) and parent is in position (i/2).
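These index relations can be stated directly as one-line helpers (1-based indexing as above; the helper names are illustrative, not from the text):

```c
#include <assert.h>

/* 1-based array layout of a complete binary tree */
int LeftChild(int i)  { return 2 * i; }       /* left child of node at i  */
int RightChild(int i) { return 2 * i + 1; }   /* right child of node at i */
int Parent(int i)     { return i / 2; }       /* parent (integer division) */
```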

Advantage:
 It doesn’t require pointers
 Operations required to traverse the tree is simple & fast.

Disadvantage:
 Estimation of maximum heap size is required in advance.
Heap Order Property:
 It allows operations to be performed quickly.
 In order to find the minimum quickly, the smallest element should be the root.
 Every node should be smaller than all of its descendants.
 For every node X, the key in the parent of X is smaller than or equal to the key in X, with
the exception of the root.

Binary Tree with Structure and Heap order property:

5.Explain the binary heap operation?

Basic Heap Operation:

Insert
To insert an element X into the heap, we create a hole in the next available location; otherwise
the tree would not be complete. If X can be placed in the hole without violating the heap order,
then X is placed there. Otherwise, we slide the element in the hole's parent node into the hole,
thus bubbling the hole up toward the root. This process continues until X can be placed in the
hole. This strategy is known as percolate up (the new element is percolated up the heap until the
correct location is found).
Routine to insert into a binary heap:
void Insert(ElementType X, PriorityQueue H)
{
int i;
if (IsFull(H)) {
Error("Priority queue is full");
return;
}
/* Percolate up; Elements[0] must hold a sentinel smaller than any key */
for (i = ++H->Size; H->Elements[ i / 2 ] > X; i /= 2)
H->Elements[ i ] = H->Elements[ i / 2 ];
H->Elements[ i ] = X;
}

Eg. Insert 14 into heap tree


DeleteMin:
In a binary heap, the minimum element is found at the root. When the minimum is
removed, a hole is created at the root. Since the heap now has one fewer element, the last
element X in the heap must move somewhere in the heap.
If X can be placed in the hole without violating the heap-order property, place it there.
Otherwise, we slide the smaller of the hole's children into the hole, pushing the hole down one level.
Function to perform DeleteMin in a binary heap:
ElementType DeleteMin(PriorityQueue H)
{
int i, Child;
ElementType MinElement, LastElement;

if (IsEmpty(H))
{
Error("Priority queue is empty");
return H->Elements[ 0 ];
}
MinElement = H->Elements[ 1 ];
LastElement = H->Elements[ H->Size-- ];
for (i = 1; i * 2 <= H->Size; i = Child) {
/* Find smaller child */
Child = i * 2;
if (Child != H->Size && H->Elements[ Child + 1 ]
< H->Elements[ Child ])
Child++;
/*Percolate one level */
if (LastElement > H->Elements[ Child ])
H->Elements[ i ] = H->Elements[ Child ];
else
break;
}
H->Elements[ i ] = LastElement;
return MinElement;
}

Eg. Remove 13
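The two routines can be tried together in a small self-contained sketch. It fixes ElementType to int, uses a fixed-capacity array, and stores INT_MIN in Elements[0] as the sentinel that stops percolate-up at the root; the struct layout and function names here are illustrative.

```c
#include <assert.h>
#include <limits.h>

#define CAPACITY 100

/* minimal fixed-size min-heap sketch; names and capacity are assumptions */
struct Heap {
    int Size;
    int Elements[CAPACITY + 1];
};

void HeapInit(struct Heap *H)
{
    H->Size = 0;
    H->Elements[0] = INT_MIN;    /* sentinel: smaller than any real key */
}

void HeapInsert(struct Heap *H, int X)
{
    int i;
    /* percolate up: slide parents down until X fits */
    for (i = ++H->Size; H->Elements[i / 2] > X; i /= 2)
        H->Elements[i] = H->Elements[i / 2];
    H->Elements[i] = X;
}

int HeapDeleteMin(struct Heap *H)
{
    int i, Child;
    int MinElement = H->Elements[1];
    int LastElement = H->Elements[H->Size--];

    /* percolate down: slide the smaller child up until LastElement fits */
    for (i = 1; i * 2 <= H->Size; i = Child) {
        Child = i * 2;
        if (Child != H->Size && H->Elements[Child + 1] < H->Elements[Child])
            Child++;
        if (LastElement > H->Elements[Child])
            H->Elements[i] = H->Elements[Child];
        else
            break;
    }
    H->Elements[i] = LastElement;
    return MinElement;
}
```

Successive calls to HeapDeleteMin return the keys in ascending order, which is also the idea behind heapsort.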
6.Explain B Tree?

Definition of a B-tree

• A B-tree of order m is an m-way tree (i.e., a tree where each node may have up to m
children) in which:

1. the number of keys in each non-leaf node is one less than the number of its children and
these keys partition the keys in the children in the fashion of a search tree

2. all leaves are on the same level

3. all non-leaf nodes except the root have at least ⌈m / 2⌉ children

4. the root is either a leaf node, or it has from two to m children

5. a leaf node contains no more than m – 1 keys

• The number m should always be odd

An example B-Tree

Constructing a B-tree
• Suppose we start with an empty B-tree and keys arrive in the following order:1 12 8 2
25 5 14 28 17 7 52 16 48 68 3 26 29 53 55 45

• We want to construct a B-tree of order 5

• The first four items go into the root:

• To put the fifth item in the root would violate condition 5

• Therefore, when 25 arrives, pick the middle key to make a new root

5, 14, 28 get added to the leaf nodes:

Adding 17 to the right leaf node would over-fill it, so we take the middle key, promote it (to the
root) and split the leaf

7, 52, 16, 48 get added to the leaf nodes


Adding 68 causes us to split the right most leaf, promoting 48 to the root, and adding 3 causes us
to split the left most leaf, promoting 3 to the root; 26, 29, 53, 55 then go into the leaves

Adding 45 causes a split of

and promoting 28 to the root then causes the root to split

Inserting into a B-Tree

• Attempt to insert the new key into a leaf

• If this would result in that leaf becoming too big, split the leaf into two, promoting the
middle key to the leaf’s parent

• If this would result in the parent becoming too big, split the parent into two, promoting
the middle key

• This strategy might have to be repeated all the way to the top

• If necessary, the root is split in two and the middle key is promoted to a new root, making
the tree one level higher
7.Explain the steps to insert into B Tree?
Exercise in Inserting a B-Tree

• Insert the following keys to a 5-way B-tree:

• 3, 7, 9, 23, 45, 1, 5, 14, 25, 24, 13, 11, 8, 19, 4, 31, 35, 56

Removal from a B-tree

• During insertion, the key always goes into a leaf. For deletion we wish to remove from a
leaf. There are three possible ways we can do this:

• 1 - If the key is already in a leaf node, and removing it doesn’t cause that leaf node to
have too few keys, then simply remove the key to be deleted.

• 2 - If the key is not in a leaf then it is guaranteed (by the nature of a B-tree) that its
predecessor or successor will be in a leaf -- in this case we can delete the key and
promote the predecessor or successor key to the non-leaf deleted key’s position.

• If (1) or (2) lead to a leaf node containing less than the minimum number of keys then we
have to look at the siblings immediately adjacent to the leaf in question:

• 3: if one of them has more than the min. number of keys then we can promote one
of its keys to the parent and take the parent key into our lacking leaf

• 4: if neither of them has more than the min. number of keys then the lacking leaf and one of its
neighbours can be combined with their shared parent (the opposite of promoting a key) and the
new leaf will have the correct number of keys; if this step leaves the parent with too few keys then
we repeat the process up to the root itself, if required

Type #1: Simple leaf deletion

Assuming a 5-way B-Tree, as before...


Delete 2: Since there are enough keys in the node, just delete it

Simple non-leaf deletion

Too few keys in node and its siblings


Enough siblings

Exercise in Removal from a B-Tree

• Given 5-way B-tree created by these data (last exercise):

• 3, 7, 9, 23, 45, 1, 5, 14, 25, 24, 13, 11, 8, 19, 4, 31, 35, 56

• Add these further keys: 2, 6, 12

• Delete these keys: 4, 5, 7, 3, 14

Analysis of B-Trees

• The maximum number of items in a B-tree of order m and height h:

root        m – 1

level 1     m(m – 1)

level 2     m^2(m – 1)

. . .

level h     m^h(m – 1)

• So, the total number of items is

(1 + m + m^2 + m^3 + … + m^h)(m – 1) = [(m^(h+1) – 1) / (m – 1)](m – 1) = m^(h+1) – 1

• When m = 5 and h = 2 this gives 5^3 – 1 = 124
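The closed form above is easy to sanity-check with a tiny helper (the function name is illustrative):

```c
#include <assert.h>

/* maximum number of items in a B-tree of order m and height h: m^(h+1) - 1 */
long MaxItems(long m, int h)
{
    long power = 1;
    for (int i = 0; i <= h; i++)   /* compute m^(h+1) */
        power *= m;
    return power - 1;
}
```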

Reasons for using B-Trees

• When searching tables held on disc, the cost of each disc transfer is high but doesn't
depend much on the amount of data transferred, especially if consecutive items are
transferred
– If we use a B-tree of order 101, say, we can transfer each node in one disc read
operation

– A B-tree of order 101 and height 3 can hold 101^4 – 1 items (approximately 100
million) and any item can be accessed with 3 disc reads (assuming we hold the
root in memory)

• If we take m = 3, we get a 2-3 tree, in which non-leaf nodes have two or three children
(i.e., one or two keys)

– B-Trees are always balanced (since the leaves are all at the same level), so 2-3
trees make a good type of balanced tree

Comparing Trees

• Binary trees

– Can become unbalanced and lose their good time complexity (big O)

– AVL trees are strict binary trees that overcome the balance problem

– Heaps remain balanced but only prioritise (not order) the keys

• Multi-way trees

– B-Trees can be m-way, they can have any (odd) number of children

– One B-Tree, the 2-3 (or 3-way) B-Tree, approximates a permanently balanced
binary tree, exchanging the AVL tree’s balancing operations for insertion and
(more complex) deletion operations

8.Explain motivation of B Tree?

Motivation for B-Trees

• Index structures for large datasets cannot be stored in main memory

• Storing it on disk requires different approach to efficiency

• Assuming that a disk spins at 3600 RPM, one revolution occurs in 1/60 of a second, or
16.7ms

• Crudely speaking, one disk access takes about the same time as 200,000 instructions

• Assume that we use an AVL tree to store about 20 million records


• We end up with a very deep binary tree with lots of different disk accesses; log2
20,000,000 is about 24, so this takes about 0.2 seconds

• We know we can’t improve on the log n lower bound on search for a binary tree

• But, the solution is to use more branches and thus reduce the height of the tree!

– As branching increases, depth decreases


UNIT IV

HASHING AND SET

1.Explain separate chaining with its routine?

SEPARATE CHAINING

Separate chaining is a technique that keeps a list of all elements that hash to the same value.
This is called separate chaining because each hash table slot points to a separate chain (linked list).
A separate chaining hash table

Type declaration
#define MinTableSize (10)
struct ListNode
{
ElementType Element;
Position Next;
};

typedef struct ListNode *Position;
typedef Position List;


struct HashTbl
{
int TableSize;
List *TheLists;
};
The hash table structure contains an array of linked lists, which are dynamically allocated
when the table is initialized. The HashTable type is a pointer to this structure. The TheLists
field is a pointer to a pointer to a ListNode structure.

Initialization of routine for separate chaining hash table.


HashTable InitializeTable( int TableSize )
{
HashTable H;
int i;
if( TableSize < MinTableSize )
{
Error( "Table size too small" );
return NULL;
}
H = malloc( sizeof( struct HashTbl ) );
if( H == NULL )
FatalError( "Out of space!!!" );
H->TableSize = NextPrime( TableSize );
H->TheLists = malloc( sizeof( List ) * H->TableSize );
if( H->TheLists == NULL )
FatalError( "Out of space!!!" );
for( i = 0; i < H->TableSize; i++ )
{
H->TheLists[ i ] = malloc( sizeof( struct ListNode ) );
if( H->TheLists[ i ] == NULL )
FatalError( "Out of space!!!" );
else
H->TheLists[ i ]->Next = NULL;
}
return H;
}

Find routine:

Uses the hash function to determine which list to traverse


Find( Key, H ) returns a pointer to the cell containing Key.

Position Find( ElementType Key, HashTable H )


{
Position P;
List L;
L = H->TheLists[ Hash( Key, H->TableSize ) ];
P = L->Next;
while( P != NULL && P->Element != Key )
P = P->Next;
return P;
}

Insertion

Traverse down the list to check whether the element is already present.
If the element is new, it is inserted either at the front of the list or at the end of the list.

void Insert( ElementType Key, HashTable H )


{
Position Pos, NewCell;
List L;

Pos = Find( Key, H );


if( Pos == NULL ) /* Key is not found */
{
NewCell = malloc( sizeof( struct ListNode ) );
if( NewCell == NULL )
FatalError( "Out of space!!!" );
else
{
L = H->TheLists[ Hash( Key, H->TableSize ) ];
NewCell->Next = L->Next;
NewCell->Element = Key; /* Probably need strcpy! */
L->Next = NewCell;
}
}
}

Disadvantage of separate chaining


 Requires Pointers
 Slows the algorithm

2.Explain the collision resolution strategies with example?

Three common collision resolution strategies


 Linear probing
 Quadratic probing
 Double hashing

LINEAR PROBING

 Probing is the process of getting next available cell.


 In linear probing, F is a linear function of i, typically F(i) = i.
 This amounts to trying cells sequentially in search of an empty cell.
 If the end of the table is reached and no empty cell has been found, then the search is
continued from the beginning of the table. It has a tendency to create clusters in the table.

Example: Insert the keys {89, 18, 49, 58, 69} into a hash table.

Cell    Empty   After 89   After 18   After 49   After 58   After 69
0                                     49         49         49
1                                                58         58
2                                                           69
3
4
5
6
7
8                          18         18         18         18
9               89         89         89         89         89

The first collision occurs when 49 is inserted; it is put in the next available spot (spot 0).
The key 58 collides with 18, 89 and then 49 before an empty cell is found. The collision for
69 is handled in a similar manner.
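The probe sequence above can be reproduced with a short sketch; the table size, the hash function h(x) = x mod 10, and the function name are assumptions matching this example.

```c
#include <assert.h>

#define TABLE_SIZE 10
#define EMPTY (-1)

/* linear probing insert into a fixed table; hash(x) = x mod 10 is assumed */
void LinearInsert(int Table[], int X)
{
    int Pos = X % TABLE_SIZE;
    while (Table[Pos] != EMPTY)          /* collision: try cells sequentially */
        Pos = (Pos + 1) % TABLE_SIZE;    /* wrap around at the end of table  */
    Table[Pos] = X;
}
```

Inserting 89, 18, 49, 58, 69 in order reproduces the final column of the table above.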

Primary Clustering:
Any key that hashes into cluster will require several attempts to resolve the collision, and
then it will add to the cluster.
Advantage:
It does not require pointers .

Disadvantage:
It forms clusters which degrades the performance of the Hash table for storing and
retrieving data.

Quadratic Probing
Quadratic probing is a collision resolution method that eliminates the primary clustering
problem of linear probing. The collision function is quadratic, the popular choice is (i) = i2.

When 49 collides with 89, the next position attempted is one cell away. This cell is
empty, so 49 is placed there. Next, 58 collides at position 8. The cell one away is tried, but
another collision occurs. A vacant cell is found at the next cell tried, which is 2^2 = 4 away. 58 is
thus placed in cell 2. The same thing happens for 69.

Disadvantage:
There is no guarantee of finding an empty cell once the table gets more than half full, or
even before the table gets half full if the table size is not prime.
If quadratic probing is used, and the table size is prime, then a new element can always
be inserted if the table is at least half empty.

type declarations
enum KindOfEntry { Legitimate, Empty, Deleted };

struct HashEntry
{
ElementType Element;
enum KindOfEntry Info;
};

typedef struct HashEntry Cell;

struct HashTbl
{
int TableSize;
Cell *TheCells;
};

Routine to initialize closed(Open Addressing) hash table

HashTable InitializeTable( int TableSize )


{
HashTable H;
int i;
if( TableSize < MinTableSize )
{
Error( "Table size too small" );
return NULL;
}
H = malloc( sizeof( struct HashTbl ) );
if( H == NULL )
FatalError( "Out of space!!!" );
H->TableSize = NextPrime( TableSize );
H->TheCells = malloc( sizeof( Cell ) * H->TableSize );
if( H->TheCells == NULL )
FatalError( "Out of space!!!" );
for( i = 0; i < H->TableSize; i++ )
H->TheCells[ i ].Info = Empty;
return H;
}

Find routine for (closed) hashing with quadratic probing


Find(key, H) will return the position of key in the hash table. If key is not present, then
find will return the last cell. The quadratic resolution function is, f(i) = f(i - 1) + 2i -1 .
Position Find( ElementType Key, HashTable H )
{
Position CurrentPos;
int CollisionNum;

CollisionNum = 0;
CurrentPos = Hash( Key, H->TableSize );
while( H->TheCells[ CurrentPos ].Info != Empty &&
H->TheCells[ CurrentPos ].Element != Key )
{
CurrentPos += 2 * ++ CollisionNum - 1;
if(CurrentPos >= H-> TableSize)
CurrentPos -= H-> TableSize;
}
return CurrentPos;
}

Insert routine for (closed )hash tables with quadratic probing


void Insert( ElementType Key, HashTable H )
{
Position Pos;
Pos = Find( Key, H );
if( H->TheCells[ Pos ].Info != Legitimate )
{
/* OK to insert here */
H->TheCells[ Pos ].Info = Legitimate;
H->TheCells[ Pos ].Element = Key;
}
}

Secondary Clustering
Although quadratic probing eliminates primary clustering, elements that hash to the
same position will probe the same alternate cells. This is known as secondary clustering.

3.Explain rehashing?

Rehashing
If the table gets too full, the running time for the operations will start taking too long and inserts
might fail for closed hashing with quadratic resolution. This can happen if there are too many
deletions intermixed with insertions. A solution, then, is to build another table that is about twice
as big and scan down the entire original hash table, computing the new hash value for each (non-
deleted) element and inserting it in the new table.
As an example, suppose the elements 13, 15, 24, and 6 are inserted into a closed hash
table of size 7. The hash function is h(x) = x mod 7.
If 23 is inserted into the table, the resulting table will be over 70 percent full. Because the
table is so full, a new table is created. The size of this table is 17, because this is the first prime
which is twice as large as the old table size. The new hash function is then h(x) = x mod 17.
Theold table is scanned, and elements 6, 15, 23, 24, and 13 are inserted into the new table. The
result after rehashing is given below.
This entire operation is called rehashing. This is a very expensive operation.

HashTable Rehash( HashTable H )


{
int i, OldSize;
Cell *OldCells;

OldCells = H->TheCells;
OldSize = H->TableSize;

/* Get a new, empty table */


H = InitializeTable( 2 * OldSize );

/* Scan through old table, reinserting into new */


for( i=0; i < OldSize; i++ )
if( OldCells[ i ].Info == Legitimate )
Insert( OldCells[i].Element, H );
free( OldCells );
return H;
}

4.Explain dynamic equivalence problem?

Given an equivalence relation ~, the natural problem is to decide, for any a and b, if a ~ b.
suppose the equivalence relation is defined over the five-element set {a1, a2, a3, a4, a5}. Then
there are 25 pairs of elements, each of which is either related or not. However, the information a1
~ a2, a3 ~ a4, a5 ~ a1, a4 ~ a2 implies that all pairs are related.
The equivalence class of an element a €S is the subset of S that contains all the elements
that are related.
Every member of S appears in exactly one equivalence class. To decide if a ~ b, we need
only to check whether a and b are in the same equivalence class. This provides our strategy to
solve the equivalence problem.
The input is initially a collection of n sets, each with one element. This initial
representation is that all relations are false. Each set has a different element, so that
Si ∩ Sj = Ø; this makes the sets disjoint.
There are two permissible operations.
1. The first is find, which returns the name of the set (that is, the equivalence class)
containing a given element.
2. The second operation is Union which merges the two equivalence classes containing
a and b into a new equivalence class.
This algorithm is dynamic because, during the course of the algorithm, the sets can
change via the union operation.
In an on-line algorithm, when a find is performed, an answer must be given before
continuing. An off-line algorithm may see the entire sequence of unions and finds before
giving any answer.

Basic Data Structure


A tree is used to represent each set, since each element in a tree has the same root. Thus,
the root can be used to name the set. The trees used are not binary trees, but their representation
is easy, because the only information we will need is a parent pointer.
The name of a set is given by the node at the root. Since only the name of the parent is
required, we can assume that this tree is stored implicitly in an array: each entry p[i] in the array
represents the parent of element i. If i is a root, then p[i] = 0.

Figure : Eight elements, initially in different sets

To perform a union of two sets, we merge the two trees by making the root of one tree point to
the root of the other.
Figure After union (5, 6)

Figure After union (7, 8)

Figure After union (5, 7)

Figure: Implicit representation of previous tree

Disjoint set initialization routine


void initialize( DisjSet S )
{
int i;
for( i = NumSets; i > 0; i-- )
S[i] = 0;
}
A simple disjoint set find algorithm

A find(x) on element x is performed by returning the root of the tree containing x.


SetType Find( ElementType x, DisjSet S )
{
if( S[x] <= 0 )
return x;
else
return Find( S[x], S ) ;
}
Union
unions are performed on the roots of the trees.
void SetUnion( DisjSet S, SetType Root1, SetType Root2 )
{
S[ Root2 ] = Root1;
}

5.Explain smart union algorithm?

The unions are performed by making the second tree a subtree of the first.

1) Union-By-Size

In this method the smaller tree is made a subtree of the larger one.
The three unions in the preceding example were all ties, and so we can consider that they were
performed by size.

Eg: If the next operation were union (4, 5),

Result of union-by-size
To implement this strategy, we need to keep track of the size of each tree. Since we are
really just using an array, we can have the array entry of each root contain the negative of the
size of its tree.
Thus, initially the array representation of the tree is all -1s. When a union is performed,
check the sizes; the new size is the sum of the old.
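Only the union-by-height code is given below; a union-by-size counterpart, under the convention just stated that each root's array entry holds the negative of its tree size, might look like this (the function name is an assumption):

```c
#include <assert.h>

/* union-by-size sketch: S[root] holds -(size of that root's tree) */
void SetUnionBySize(int S[], int Root1, int Root2)
{
    if (S[Root2] < S[Root1]) {   /* Root2's tree is larger (more negative) */
        S[Root2] += S[Root1];    /* new size is the sum of the old sizes   */
        S[Root1] = Root2;        /* smaller tree becomes a subtree         */
    } else {
        S[Root1] += S[Root2];
        S[Root2] = Root1;
    }
}
```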
2) Union-By-Height

An alternative implementation, which also guarantees that all the trees will have depth at
most O(log n), is union-by-height.
We keep track of the height, instead of the size, of each tree and perform unions by
making the shallow tree a subtree of the deeper tree.

Code for union-by-height


void SetUnion (DisjSet S, SetType Root1, SetType Root2 )
{
if( S[Root2] < S[Root1] )
S[Root1] = Root2;
else
{
if( S[Root2] == S[Root1] )
S[Root1]--;
S[Root2] = Root1;
}
}
6.Explain path compression?

If there are many more find operations than unions, this running time is worse than that
of the quick-find algorithm. Moreover, no further improvement is possible for the
union algorithm.
Therefore, the only way to speed the algorithm up, without reworking the data structure
entirely, is to do something clever on the find operation.

The clever operation is known as path compression.


Path compression is performed during a find operation and is independent of the strategy
used to perform unions.
Suppose the operation is find(x). Then the effect of path compression is that every node
on the path from x to the root has its parent changed to the root.
Eg:The effect of path compression is that with an extra two pointer moves, nodes 13 and
14 are now one position closer to the root and nodes 15 and 16 are now two positions closer.

Figure: An example of path compression

The only change to the find routine is that S[x] is made equal to the value returned by
find; thus after the root of the set is found recursively, x is made to point directly to it. This
occurs recursively to every node on the path to the root, so this implements path compression
Code for disjoint set find with path compression

SetType Find( ElementType x, DisjSet S )


{
if( S[x] <= 0 )
return x;
else
return S[x] = Find( S[x] , S ) ;
}
Path compression is perfectly compatible with union-by-size, and thus both routines can
be implemented at the same time.
Path compression is not entirely compatible with union-by-height, because path
compression can change the heights of the trees.

7.Explain extendible hashing?

Extendible hashing:
Extendible hashing is used where the amount of data is too large to fit in main memory.
The main consideration then is the number of disk accesses required to retrieve data.
We assume that at any point we have n records to store, and at most m records fit in one
disk block so assume m = 4. Extendible hashing, allows a find to be performed in two disk
accesses. Insertions also require few disk accesses. Let us suppose, for the moment, that our data
consists of several six-bit integers.
The root of the "tree" contains four pointers determined by the leading two bits of the
data. Each leaf has up to m = 4 elements. D will represent the number of bits used by the root,
which is sometimes known as the directory. The number of entries in the directory is thus 2^D.
dl is the number of leading bits that all the elements of some leaf l have in common. dl will
depend on the particular leaf, and dl ≤ D.

Suppose that we want to insert the key 100100. This would go into the third leaf, but as
the third leaf is already full, there is no room. We thus split this leaf into two leaves, which are
now determined by the first three bits. This requires increasing the directory size to 3.
If the key 000000 is now inserted, then the first leaf is split, generating two leaves with dl
= 3. Since D = 3, the only change required in the directory is the updating of the 000 and 001
pointers.

This very simple strategy provides quick access times for insert and find operations on
large databases.
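The directory lookup described above amounts to taking the leading D bits of a six-bit key; DirIndex is a hypothetical helper illustrating this:

```c
#include <assert.h>

/* directory slot of a six-bit key: its leading D bits (illustrative helper) */
int DirIndex(unsigned Key, int D)
{
    return (int)(Key >> (6 - D));
}
```

For example, with the key 100100 from the text, a two-bit directory uses the bits 10 and a three-bit directory uses 100.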

8.Explain double hashing?

Double Hashing
For double hashing, one popular choice is F(i) = i · h2(x).
This formula says that we apply a second hash function to x and probe at a distance
h2(x), 2h2(x), . . ., and so on.
The second hash function such as h2(x) = R - (x mod R), with R a prime smaller than
TableSize, will work well. If we choose R = 7, then the results of inserting is shown below.
The first collision occurs when 49 is inserted. h2(49) = 7 - 0 = 7, so 49 is inserted in
position 6. h2(58) = 7 - 2 = 5, so 58 is inserted at location 3. Finally, 69 collides and is inserted at
a distance h2(69) = 7 - 6 = 1 away.

The table size should be prime when double hashing is used. If we attempt to insert 23 into
the table, it would collide with 58. Since h2(23) = 7 - 2 = 5, and the table size is 10, we
essentially have only one alternate location, and it is already taken. Thus, if the table size is not
prime, it is possible to run out of alternate locations prematurely.
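The probe positions from the worked example can be reproduced with a short sketch, assuming the table size 10, h(x) = x mod 10, and h2(x) = R - (x mod R) with R = 7 from the text; the function name is illustrative.

```c
#include <assert.h>

#define TABLE_SIZE 10
#define R 7          /* prime smaller than the table size, as in the notes */
#define EMPTY (-1)

/* slot where X lands: h(X) = X mod 10, probe step h2(X) = R - (X mod R) */
int DoubleHashSlot(const int Table[], int X)
{
    int Pos = X % TABLE_SIZE;
    int Step = R - (X % R);
    while (Table[Pos] != EMPTY)
        Pos = (Pos + Step) % TABLE_SIZE;
    return Pos;
}
```

Inserting 89, 18, 49, 58, 69 in order places 49 at position 6, 58 at position 3, and 69 at position 0, matching the walkthrough.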
UNIT V

GRAPHS

1.Explain topological sorting?


A topological sort is a linear ordering of vertices in a directed acyclic graph such that if there is a
path from vi to vj, then vj appears after vi in the linear ordering. Topological ordering is not
possible if the graph has a cycle, since for two vertices v and w on the cycle, v precedes w and w
precedes v.
To implement the topological sort perform the following steps.
Step 1: Find the indegree of every vertex.
Step 2: Place the vertices whose indegree is 0 on the empty queue.
Step 3: Dequeue a vertex V and decrement the indegrees of all its adjacent vertices.
Step 4: Enqueue a vertex on the queue if its indegree falls to zero.
Step 5: Repeat from Step 3 until the queue becomes empty.
Step 6: The topological ordering is the order in which the vertices are dequeued.

Routine to perform Topological Sort


void Topsort( Graph G )
{
Queue Q;
int Counter = 0;
Vertex V, W;
Q = CreateQueue( NumVertex );
MakeEmpty( Q );
for each vertex V
if( Indegree[ V ] == 0 )
Enqueue( V, Q );
while( !IsEmpty( Q ) )
{
V = Dequeue( Q );
TopNum[ V ] = ++Counter;
for each W adjacent to V
if( --Indegree[ W ] == 0 )
Enqueue( W, Q );
}
if( Counter != NumVertex )
Error( "Graph has a cycle" );
DisposeQueue( Q );
}

Example :1: Topological Sorting

Step 1:
Number of 1’s present in each column of adjacency matrix represents the Indegree of the
corresponding vertex.
In figure. Indegree[a] = 0
Indegree[b] = 2
Indegree[c] =1
Indegree[d] = 2
Step 2:
Enqueue the vertex, whose Indegree is ‘0’
Vertex ‘a’ is 0, so place it on the queue.
Step 3:
Dequeue the vertex ‘a’ from the queue and decrement the Indegree’s of its adjacent
vertex ‘b’ & ‘c’.
Hence, Indegree[b] = 1
Indegree[c] = 0
Now, Enqueue the vertex ‘c’ as it’s Indegree becomes zero.
Step 4:
Dequeue the vertex ‘c’ from Q and decrement the Indegree’s of it’s adjacent vertex ‘b’ and ‘d’.
Hence, Indegree[b] = 0
Indegree[d] =1
Now, Enqueue the vertex ‘b’ as it’s Indegree falls to zero.
Step 5:
Dequeue the vertex ‘b’ from Q and decrement the Indegree’s of it’s adjacent vertex ‘d’.
Hence, Indegree[d]=0
Now, Enqueue the vertex ‘d’ as it’s Indegree falls to zero.
Step 6:
Dequeue the vertex ‘d’.
Step 7:
As the queue becomes empty, topological ordering is performed which is nothing but, the
order in which the vertices are dequeued.
Result of applying Topological sort to the graph, in the above figure.

Vertex 1 2 3 4
a 0 0 0 0
b 2 1 0 0
c 1 0 0 0
d 2 2 1 0
Enqueue a c b d
Dequeue a c b d

Topological Ordering for the given graph is: a c b d
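The worked example can be replayed with a compact array-based sketch. The adjacency-matrix layout and the edge set (a→b, a→c, c→b, b→d, c→d, inferred from the indegree table above) are assumptions of this sketch, not given explicitly in the notes.

```c
#include <assert.h>

#define N 4   /* vertices a=0, b=1, c=2, d=3 */

/* array-based topological sort; Adj[v][w] != 0 means an edge v -> w */
int TopSort(const int Adj[N][N], int Order[N])
{
    int Indegree[N] = {0}, Queue[N], Head = 0, Tail = 0, Count = 0;

    for (int v = 0; v < N; v++)          /* Step 1: compute indegrees */
        for (int w = 0; w < N; w++)
            if (Adj[v][w])
                Indegree[w]++;
    for (int v = 0; v < N; v++)          /* Step 2: enqueue indegree-0 vertices */
        if (Indegree[v] == 0)
            Queue[Tail++] = v;
    while (Head < Tail) {                /* Steps 3-5 */
        int v = Queue[Head++];
        Order[Count++] = v;
        for (int w = 0; w < N; w++)
            if (Adj[v][w] && --Indegree[w] == 0)
                Queue[Tail++] = w;
    }
    return Count == N;                   /* 0 indicates a cycle */
}
```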


2.Explain shortest path algorithm?

To implement the unweighted Shortest Path, perform the following steps:


Step 1: Assign the source node as ‘s’ and Enqueue ‘s’.
Step 2: Dequeue the vertex ‘s’ from the queue, mark it as known, and then find its adjacent
vertices.
Step 3: If the distance of an adjacent vertex is equal to infinity, then change the distance of that
vertex to the distance of its source vertex incremented by ‘1’ and Enqueue the vertex.
Step 4: Repeat from Step 2 until the queue becomes empty.
Routine for Unweighted Shortest Path
void Unweighted( Table T )
{
Queue Q;
Vertex V, W;
Q = CreateQueue( NumVertex );
MakeEmpty( Q );
/*Enqueue the start vertex s*/
Enqueue( S , Q );
while( !IsEmpty( Q ) )
{
V =Dequeue( Q );
T[ V ].Known = True;
for each W adjacent to V
if( T[ W ].Dist = = Infinity )
{
T[ W ].Dist = T[ V ].Dist + 1;
T[ W ].Path = V;
Enqueue( W,Q );
}
}
DisposeQueue( Q ); /*Free the memory*/
}
EXAMPLE 1: Unweighted Shortest Path

Initial Configuration:
1) Source vertex ‘a’ is initially assigned a path length ‘0’.

V Known dv Pv
a 0 0 0
b 0 ∞ 0
c 0 ∞ 0
d 0 ∞ 0
Queue a

2)After finding all vertices whose path length from ‘a’ is 1.

‘a’ is dequeued
V Known dv Pv
a 1 0 0
b 0 1 a
c 0 1 a
d 0 ∞ 0
Queue b,c
3) After finding all vertices whose path length from ‘a’ is 2.

‘b’ is dequeued
V Known dv Pv
a 1 0 0
b 1 1 a
c 0 1 a
d 0 2 b
Queue c,d

‘c’ is dequeued
V Known dv Pv
a 1 0 0

b 1 1 a

c 1 1 a
d 0 2 b
Queue d

‘d’ is dequeued

V Known dv Pv
a 1 0 0
b 1 1 a
c 1 1 a
d 1 2 b

Queue empty

Shortest path from source Vertex “a” to other vertices is


a→b is 1
a→c is 1
a→d is 2
3.Explain Dijkstra's algorithm?
The Strategy:
1. Create a table with Known, pv, dv parameters, where the size of the table equivalent to
the number of vertices(0 to N-1). The information in known, pv, dv are same as an
undirected graph.
2. Read graph from the adjacency list representation.
3. Select a vertex V which has the smallest dv among all unknown vertices and set the shortest
path from S to V as known.
4. The adjacent vertices W of V are located, and dw is set as dw = dv + Cv,w if the new sum
is less than the existing dw. That is, at every stage dw is updated only if there is an
improvement in the path distance.
5. Repeat step 3 and 4 until all vertices are classified under known.
Declarations for Dijkstra's algorithm
struct TableEntry
{
List Header;
int Known;
DistType Dist;
Vertex Path;
};
Table initialization routine
void InitTable( Vertex Start, Graph G, Table T )
{
int i;
ReadGraph( G, T );
for( i = 0; i < NumVertex; i++ )
{
T[ i ].Known = False;
T[ i ].Dist = Infinity;
T[ i ].Path = NotAVertex;
}
T[ Start ].dist = 0;
}
Routine for Dijkstra Algorithm
void Dijkstra( Table T )
{
Vertex V, W;
for( ; ; )
{
V = smallest unknown distance vertex;
if( V == NotAVertex )
break;
T[ V ].Known = True;
for each W adjacent to V
if( !T[ W ].Known )
if( T[ V ].Dist + Cvw < T[ W ].Dist )
{
Decrease( T[ W ].Dist to T[ V ].Dist + Cvw );
T[ W ].Path = V;
}
}
}
EXAMPLE 1: The directed graph G:

Intial Configuration:

V Known dv Pv
a 0 0 0
b 0 ∞ 0
c 0 ∞ 0
d 0 ∞ 0

Vertex a is chosen as source and is declared as known vertex. Then the adjacent vertices
of a is found and its distance are updated as follows:
T [ b ].Dist = Min[ T [ b ].Dist, T[ a ].Dist + Ca,b ]
= Min[ ∞, 0+2 ]
= 2
T[ d ].Dist = Min[ T[ d ].Dist,T[ a ].Dist + Ca,d ]
= Min[ ∞, 0+1 ]
= 1
After ‘a’ is declared known

V Known dv Pv
a 1 0 0
b 0 2 a
c 0 ∞ 0
d 0 1 a

Now select the vertex with minimum distance, which is not known and mark that vertex
as visited. Here ‘d’ is the next minimum distance vertex. The adjacent vertex to ‘d’ is ‘c’
therefore, the distance of c is updated as follows
T[ c ].Dist = Min[ T[ c ].Dist, T[ d ].Dist + Cd,c ]
= Min[ ∞, 1 + 1 ]
= 2
After ‘d’ is declared known
V Known dv Pv
a 1 0 0
b 0 2 a
c 0 2 d
d 1 1 a

The next minimum vertex is b and mark it as visited. Since the adjacent vertex d is
already visited, select the next minimum vertex ‘c’ and mark it as visited.
After ‘b’ is declared known
V Known dv Pv
a 1 0 0
b 1 2 a
c 0 2 d
d 1 1 a

After ‘c’ is declared known and algorithm terminates

V Known dv Pv
a 1 0 0
b 1 2 a
c 1 2 d
d 1 1 a
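The walkthrough above can be replayed with a compact adjacency-matrix sketch. Edge costs are taken from the example (Ca,b = 2, Ca,d = 1, Cd,c = 1); the weight of the b→d edge is never stated in the notes, so the value used below (3) is an assumption, and it does not change the final distances because d is already known by then.

```c
#include <assert.h>

#define N 4          /* vertices a=0, b=1, c=2, d=3 */
#define INF 1000000

/* Dijkstra sketch; Cost[v][w] == 0 is taken to mean "no edge" */
void DijkstraM(const int Cost[N][N], int Src, int Dist[N])
{
    int Known[N] = {0};
    for (int i = 0; i < N; i++)
        Dist[i] = INF;
    Dist[Src] = 0;
    for (;;) {
        /* pick the unknown vertex with the smallest distance */
        int v = -1;
        for (int i = 0; i < N; i++)
            if (!Known[i] && (v == -1 || Dist[i] < Dist[v]))
                v = i;
        if (v == -1 || Dist[v] == INF)
            break;
        Known[v] = 1;
        for (int w = 0; w < N; w++)      /* relax edges out of v */
            if (!Known[w] && Cost[v][w] && Dist[v] + Cost[v][w] < Dist[w])
                Dist[w] = Dist[v] + Cost[v][w];
    }
}
```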

4.Explain minimum spanning tree?


A minimum spanning tree of a weighted connected graph G is its spanning tree of the smallest
weight, where the weight of a tree is defined as the sum of the weights on all its edges. The total
number of edges in a minimum spanning tree (MST) is |V| - 1, where |V| is the number of vertices.
A minimum spanning tree exists if and only if G is connected. For any spanning tree T, if an edge
e that is not in T is added, a cycle is created. The removal of any edge on the cycle reinstates the
spanning tree property.
Example 1: Connected Graph G
Spanning Trees for the above Graph: Cost = 7, Cost = 8, Cost = 9, Cost = 8, Cost = 5
Minimum Spanning Tree: Cost = 5
EXAMPLE 2: Minimum Spanning Tree ( MST ), Cost = 16
Applications of MST in real world are:
1. Wiring a house with a minimum of cable
2. Cheapest cost tour of traveling salesman
3. Networking the PC’s with low cost
Implementation of Minimum Spanning Tree:
1. Prim's Algorithm
2. Kruskal's Algorithm

5.Explain Prim's algorithm?
Prim's Algorithm is one of the ways to compute a minimum spanning tree; it uses a greedy
technique. In this method the minimum spanning tree is constructed in successive stages. The
algorithm begins with a set U initialized to {1} and grows a spanning tree one edge at a time.
At each step it finds a shortest edge (u,v) whose cost is the smallest among all edges with u
already in the minimum spanning tree and v not yet in it.
Routine For Prim's Algorithm
void Prims( Table T )
{
Vertex V, W;
int i;
/* Table initialization */
for( i = 0; i < NumVertex; i++ )
{
T[ i ].Known = False;
T[ i ].Dist = Infinity;
T[ i ].Path = 0;
}
T[ Start ].Dist = 0;   /* Start is the chosen start vertex */
for( ; ; )
{
V = smallest unknown distance vertex;
if( V == NotAVertex )
break;
T[ V ].Known = True;
for each W adjacent to V
if( !T[ W ].Known && Cvw < T[ W ].Dist )
{
T[ W ].Dist = Cvw;   /* edge weight, not path length */
T[ W ].Path = V;
}
}
}
Example 1: Undirected Graph G
Initial Configuration:
V Known dv Pv
V1 0 0 0
V2 0 ∞ 0
V3 0 ∞ 0
V4 0 ∞ 0
V5 0 ∞ 0
V6 0 ∞ 0
V7 0 ∞ 0
Consider V1 as the source vertex and proceed from there. Vertex V1 is marked as visited and
then the distances of its adjacent vertices are updated as follows.
T[ V2 ].Dist = Min[ T[ V2 ].Dist, Cv1, v2 ]
= Min[ ∞, 2 ]
=2
T[ V3 ].Dist = Min[ T[ V3 ].Dist, Cv1, v3 ]
= Min[ ∞, 4 ]
= 4
The table after V1 is declared known

V Known dv Pv
V1 1 0 0
V2 0 2 V1
V3 0 4 V1
V4 0 1 V1
V5 0 ∞ 0
V6 0 ∞ 0
V7 0 ∞ 0
Vertex V4 is marked as visited and then the distance of its adjacent vertices are updated.

The table after V4 is declared known
V Known dv Pv
V1 1 0 0
V2 0 2 V1
V3 0 2 V4
V4 1 1 V1
V5 0 7 V4
V6 0 8 V4
V7 0 4 V4
T[ V3 ].Dist = Min[ T[ V3 ].Dist, Cv4,v3 ] = Min[ 4, 2 ] = 2
T[ V5 ].Dist = Min[ T[ V5 ].Dist, Cv4,v5 ] = Min[ ∞, 7 ] = 7
T[ V6 ].Dist = Min[ T[ V6 ].Dist, Cv4,v6 ] = Min[ ∞, 8 ] = 8
T[ V7 ].Dist = Min[ T[ V7 ].Dist, Cv4,v7 ] = Min[ ∞, 4 ] = 4
Vertex V2 is visited and the distances of its adjacent vertices are updated.
T[ V5 ].Dist = Min[ T[ V5 ].Dist, Cv2,v5 ]
= Min[ 7, 10 ]
= 7
The table after V2 is declared known
V Known dv Pv
V1 1 0 0
V2 1 2 V1
V3 0 2 V4
V4 1 1 V1
V5 0 7 V4
V6 0 8 V4
V7 0 4 V4

Vertex V3 is visited next and the distance of V6 is updated.
T[ V6 ].Dist = Min[ T[ V6 ].Dist, Cv3,v6 ]
= Min[ 8, 5 ]
= 5
The table after V3 is declared known
V Known dv Pv
V1 1 0 0
V2 1 2 V1
V3 1 2 V4
V4 1 1 V1
V5 0 7 V4
V6 0 5 V3
V7 0 4 V4
Next, V7 is declared known and the distances of its adjacent vertices are updated.
T[ V6 ].Dist = Min[ T [ V6 ].Dist, Cv7, v6 ]
= Min[ 5, 1 ]
=1
T[ V5 ].Dist = Min[T [ V5 ].Dist, Cv7, v5 ]
= Min[ 7, 6 ] = 6
The table after V7 is declared known
V Known dv Pv
V1 1 0 0
V2 1 2 V1
V3 1 2 V4
V4 1 1 V1
V5 0 6 V7
V6 0 1 V7
V7 1 4 V4
The table after V6 is declared known
V Known dv Pv
V1 1 0 0
V2 1 2 V1
V3 1 2 V4
V4 1 1 V1
V5 0 6 V7
V6 1 1 V7
V7 1 4 V4
The table after V5 is declared known
V Known dv Pv
V1 1 0 0
V2 1 2 V1
V3 1 2 V4
V4 1 1 V1
V5 1 6 V7
V6 1 1 V7
V7 1 4 V4
The minimum cost of the spanning Tree is 16
6.Explain depth first traversal and breadth first traversal?
Depth first search is a kind of tree traversal. The starting vertex may be determined by the
problem or chosen arbitrarily. The analogy with tree traversal is easiest to discuss for directed
graphs, because the edges, like tree edges, point in one direction.
The two important key points of depth first search are:
1. If a path exists from one node to another node, walk across the edge - exploring the edge.
2. If a path does not exist from one specific node to any other node, return to the previous
node where we have been before - backtracking.
The theme of depth first search is to explore if possible, otherwise backtrack.
Example :
Given directed graph G = ( V, E), where V={ A, B, C, D, E, F, G }.
For simplicity, assume the start vertex is A and exploration is done in alphabetical order. From
the start vertex A, exploration proceeds to B; now AB is an explored edge.
Algorithm: Depth First Search Or Traversal (DFS)
dfs( G, v )
    Mark v as "discovered"
    for each vertex w such that edge vw is in G:
        if w is undiscovered:
            dfs( G, w ); that is, explore vw, visit w,
            explore from there as much as possible, and backtrack from w to v.
        otherwise:
            "check" vw without visiting w
    Mark v as "finished".
The traversal of the whole graph is driven by dfssweep, which restarts dfs from every
undiscovered vertex:
dfssweep( G )
    initialise all vertices of G to "undiscovered"
    for each vertex v ∈ G, in some order
        if v is undiscovered:
            dfs( G, v );
Breadth first search performs simultaneous explorations starting from a common point and spreading out,
visiting all vertices at one distance from the start before moving on to the next distance.
Assume the start vertex is A. Explore all paths from vertex A: the explored edges are AF, AB.
Then explore all paths from vertices B and F. From B the explored edges are BD, BC; from F the
explored edges are FA, FC. The dashed lines show edges that were explored but led to vertices
that were previously discovered (i.e. FA, FC). From D the explored edge is DA, but A already
exists in the discovered-vertices list, so we say that the edge DA is checked rather than explored.
Algorithm: Breadth First Search Or Traversal (BFS)
bfs( G, v )
    Queue Q = create( n );
    mark v as discovered; enqueue( Q, v );
    while Q is non empty
        v = front( Q );
        dequeue( Q );
        for each vertex w adjacent to v
            if w is undiscovered:
                mark w as discovered;
                enqueue( Q, w );
                parent[ w ] = v;
7.Explain biconnectivity?
A connected undirected graph is biconnected if there are no vertices whose removal
disconnects the rest of the graph.
Articulation point
The vertices whose removal would disconnect the graph are known as articulation points.
The graph is not biconnected if it has articulation points. Depth first search provides a
linear-time algorithm to find all articulation points in a connected graph.
Step to find Articulation Points
Step 1: Perform Depth first search starting at any vertex.
Step 2: Number the vertex as they are visited as Num(v)
Step 3: For every vertex v in the depth first spanning tree, compute Low(v), the lowest-numbered
vertex reachable from v by taking zero or more tree edges and then possibly one
back edge.
By definition:
Low(v) is the minimum of
i. Num(v)
ii. The lowest Low(w) among all back edges(v,w)
iii. The lowest Low(w) among all tree edges(v,w)
Step 4: (i) The root is an articulation point if and only if it has more than one child.
(ii) Any vertex v other than the root is an articulation point if and only if v has some child w
such that Low(w) >= Num(v). The time taken to compute this algorithm on a graph is O(|E|+|V|).
Note
For any edge (v,w) we can tell whether it is a tree edge or a back edge merely by checking
Num(v) and Num(w): if Num(w) > Num(v) then (v,w) is a tree (forward) edge; otherwise it is a back edge.
Routine to compute low and test for articulation points
void AssignLow( Vertex V )
{
Vertex W;
Low[ V ] = Num[ V ];   /* Rule 1 */
for each W adjacent to V
{
if( Num[ W ] > Num[ V ] )   /* forward (tree) edge */
{
AssignLow( W );
if( Low[ W ] >= Num[ V ] )
printf( "%v is an articulation point\n", V );
Low[ V ] = Min( Low[ V ], Low[ W ] );   /* Rule 3 */
}
else
if( Parent[ V ] != W )   /* back edge */
Low[ V ] = Min( Low[ V ], Num[ W ] );   /* Rule 2 */
}
}
Example:
Depth First Tree for Num and Low
Low can be computed by performing a postorder traversal of the depth-first spanning tree, i.e.
Low(F)=Min(Num(F),Low(D))
/*Since there is no tree edge & only one back edge*/
=Min(6,4) =4
Low(E)=Min(Num(E),Low(F))
/*there is no back edge*/
=Min(5,4)
=4
Low(D)=Min(Num(D),Low(E),Num(A))
=Min(4,4,1)
=1
Low(G)=Min(Num(G))
=Min(7)
=7
Low(C)=Min(Num(C),Low(D),Low(G))
=Min(3,1,7)
=1
Low(B)=Min(Num(B),Low(C))
=Min(2,1)
=1
Low(A)=Min(Num(A),Low(B))
=Min(1,1)
=1
From the figure it is clear that Low(G) >= Num(C), i.e. 7 >= 3. Since Low(W) >= Num(V) makes
V an articulation point, ‘C’ is an articulation point.
Similarly Low(E) >= Num(D) (4 >= 4), hence D is an articulation point.
8.Explain Euler's circuit?
A path is the sequence of vertices such that for each of the vertices, there is an
edge to its successor vertex. A cycle, on the other hand, is a path where both the starting and
ending vertex in the path are the same. An Eulerian path (also known as Eulerian trail or Euler
walk) is a path that uses each edge exactly once, while an Eulerian circuit (also known as
Eulerian cycle or Euler tour) is a cycle that uses each edge exactly once.
As an example, consider the graph in the following figure.
It is easily seen that this graph has an Euler
circuit. Suppose we start at vertex 5, and traverse
the circuit 5, 4, 10, 5. Then we are stuck, and most
of the graph is still untraversed.
Graph remaining after 5, 4, 10, 5
We then continue from vertex 4, which still
has unexplored edges. A depth-first search might
come up with the path 4, 1, 3, 7, 4, 11, 10, 7, 9, 3, 4.
If we splice this path into the previous path of 5, 4,
10, 5, then we get a new path of 5, 4, 1, 3, 7 ,4, 11,
10, 7, 9, 3, 4, 10, 5.
Graph after the path 5, 4, 1, 3, 7, 4, 11, 10, 7,
9, 3, 4, 10, 5
The next vertex on the path that has untraversed edges is vertex 3. A possible circuit would
then be 3, 2, 8, 9, 6, 3. When spliced in, this gives the path 5, 4, 1, 3, 2, 8, 9, 6, 3, 7,
4, 11, 10, 7, 9, 3, 4, 10, 5.
On this path, the next vertex with an untraversed edge is
9, and the algorithm finds the circuit 9, 12, 10, 9. When
this is added to the current path, a circuit of 5, 4, 1, 3, 2,
8, 9, 12, 10, 9, 6, 3, 7, 4, 11, 10, 7, 9, 3, 4, 10, 5 is
obtained. As all the edges are traversed, the algorithm
terminates with an Euler circuit.
Graph remaining after the path 5, 4, 1, 3, 2, 8, 9, 6, 3, 7, 4, 11, 10, 7, 9, 3, 4, 10, 5
Conditions for existence of Eulerian circuit/path
Euler Circuit
1. An Eulerian circuit exists on an undirected graph if the graph is connected and every
vertex has an even degree.
2. An Eulerian circuit exists on a directed graph if the graph is connected and the in-degree
of every vertex is equal to its out-degree.
Euler Path
1. An Euler path exists on an undirected graph if the graph is connected and exactly two of
the vertices are of odd degree. These two vertices are the starting and ending vertices of
the Eulerian path.