
Persistent data structures

A persistent data structure is a data structure that always preserves the previous version of itself when it is
modified. Such structures can be considered 'immutable', since updates are not in-place.

A data structure is partially persistent if all versions can be accessed but only the newest version can be
modified, and fully persistent if every version can be both accessed and modified. It is confluently persistent
if two or more versions can be merged to produce a new version; this turns the version graph into a DAG.

Persistence can be achieved by simply copying, but this is inefficient in CPU time and memory, since most
operations make only a small change to the data structure. A better method is therefore to exploit the
similarity between the new and old versions and share structure between them.

Examples:

1. Linked List Concatenation : Consider the problem of concatenating two singly linked lists with n and
m nodes. Say n > m. We need to keep the old versions, i.e., we should still be able to access the original
lists.
One way is to make a copy of every node and rebuild the connections: O(n + m) for traversal of the lists and
O(1) for each of the (n + m – 1) connections.
The other way, more efficient in both time and space, involves traversing only one of the two lists and
making fewer new connections. Since we assume m < n, we pick the list with m nodes to be copied, giving
O(m) for traversal and O(1) for each of the m connections. We must copy that list, since otherwise its
original form would not have persisted.
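The cheaper approach can be sketched in C++ (a minimal, illustrative sketch; `Node`, `cons` and `concat` are names chosen here, not from the text). For a concatenation a ++ b it copies only the nodes of a and shares b untouched, so both original lists remain accessible:

```cpp
#include <cassert>
#include <string>

// Immutable list node: never modified after creation.
struct Node {
    int value;
    const Node* next;
};

const Node* cons(int value, const Node* next) {
    return new Node{value, next};
}

// Persistent concatenation a ++ b: copies only a's nodes,
// shares b as-is, and leaves both inputs untouched.
const Node* concat(const Node* a, const Node* b) {
    if (a == nullptr) return b;
    return cons(a->value, concat(a->next, b));
}

std::string to_string(const Node* l) {
    std::string s;
    for (; l != nullptr; l = l->next)
        s += std::to_string(l->value) + " ";
    return s;
}
```

Note that for left-to-right concatenation it is the first list whose nodes must be copied, so the O(m) bound applies when the first list is the shorter one; the second list's tail is shared, which is what makes the result persistent.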
2. Binary Search Tree Insertion : Consider the problem of inserting a new node into a binary search tree.
Being a binary search tree, there is a specific location where the new node must be placed. All the nodes on
the path from the new node up to the root of the BST observe a change in structure (the change cascades).
For example, the node whose child the new node becomes now holds a new pointer, and this structural
change propagates along the whole path up to the root. Consider the tree below, with each node's value
listed inside it.
Approaches to make data structures persistent

For the methods suggested below, updates and access time and space of versions vary with whether we are
implementing full or partial persistence.

1. Path copying: Make a copy of the node we are about to update, perform the update, and then cascade
the change back through the data structure, much as we did in example two above. This causes a chain of
copies, until we reach a node that no other node points to: the root.
How do we access the state at time t? Maintain an array of roots indexed by timestamp.
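A minimal path-copying sketch in C++ (illustrative names, not from the text): insert returns a new root, copying only the nodes on the search path, while every other subtree is shared with the old version, and an array of roots gives access to every version:

```cpp
#include <cassert>
#include <vector>

// Immutable BST node: never modified after creation.
struct BST {
    int key;
    const BST* left;
    const BST* right;
};

// Path-copying insert: returns the root of a new version.
// Only the O(height) nodes on the search path are copied;
// every untouched subtree is shared with the old version.
const BST* insert(const BST* root, int key) {
    if (root == nullptr) return new BST{key, nullptr, nullptr};
    if (key < root->key)
        return new BST{root->key, insert(root->left, key), root->right};
    return new BST{root->key, root->left, insert(root->right, key)};
}

bool contains(const BST* root, int key) {
    if (root == nullptr) return false;
    if (key == root->key) return true;
    return contains(key < root->key ? root->left : root->right, key);
}
```

To access the state at time t, keep a `std::vector<const BST*>` of roots and index it by timestamp.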
2. Fat nodes: As the name suggests, we make every node store its modification history, thereby making it
‘fat’.
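A fat field can be sketched as follows (an illustrative fragment, not a full fat-node structure: it versions a single field, and `FatField` and its methods are names invented here). Each write appends a (timestamp, value) pair, and a read at time t binary-searches for the last write at or before t, so reads cost O(log m) in the number of modifications:

```cpp
#include <algorithm>
#include <cassert>
#include <climits>
#include <utility>
#include <vector>

// One "fat" field: it stores its whole modification history,
// in increasing timestamp order, instead of a single value.
struct FatField {
    std::vector<std::pair<int, int>> history; // (timestamp, value)

    // Record a modification at time t (timestamps must increase).
    void set(int t, int value) { history.push_back({t, value}); }

    // Read the field as it was at time t: the last write
    // with timestamp <= t, found by binary search.
    int get(int t) const {
        auto it = std::upper_bound(history.begin(), history.end(),
                                   std::make_pair(t, INT_MAX));
        assert(it != history.begin()); // no write at or before t
        return std::prev(it)->second;
    }
};
```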
3. Nodes with boxes: Amortized O(1) time and space can be achieved for access and updates. This method
is due to Driscoll, Sarnak, Sleator and Tarjan. For a tree, it involves a per-node modification box that can hold:
a. one modification to the node (the modification could be to one of the pointers, to the node's key, or to
some other node-specific data)
b. the time when the modification was applied
The timestamp is crucial for reaching the version of the node we care about. With this algorithm, given any
time t, at most one modification box exists in the data structure with time t. Thus, a modification at time t
splits the tree into three parts: one part contains the data from before time t, one part contains the data from
after time t, and one part was unaffected by the modification.
Non-tree data structures may require more than one modification box per node, but as long as the number of
boxes is bounded by the node's in-degree, the amortized O(1) bound still holds.
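For a single node, the modification box idea can be sketched like this (an illustrative fragment for the partially persistent case; `PNode` and its members are invented names, and the copying step that a full implementation performs when the box is already full is omitted):

```cpp
#include <cassert>

// A node with one modification box: the original value plus
// at most one (timestamp, new value) modification.
struct PNode {
    int value;           // value as of the node's creation
    int box_time = -1;   // -1 means the box is empty
    int box_value = 0;   // the single boxed modification

    // Read the node's value as it was at time t.
    int value_at(int t) const {
        return (box_time != -1 && t >= box_time) ? box_value : value;
    }

    // Apply a modification at time t. Returns false if the box
    // is full; a full implementation would then copy the node
    // and cascade the change to the nodes pointing at it.
    bool modify(int t, int v) {
        if (box_time != -1) return false;
        box_time = t;
        box_value = v;
        return true;
    }
};
```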
Splay Tree
Splay trees are self-balancing (self-adjusting) binary search trees; in other words, splay trees are variants of
the binary search tree. The prerequisite for splay trees is that we should already know about binary search
trees.

As we already know, the time complexity of a binary search tree depends on its shape: O(logn) in the
average case and O(n) in the worst case. In a binary search tree, the values in the left subtree are smaller
than the root node and the values in the right subtree are greater than the root node; when the tree stays
balanced in this way, the time complexity is O(logn). If the binary tree is left-skewed or right-skewed, the
time complexity becomes O(n). To limit this skewness, the AVL and Red-Black trees came into the picture,
having O(logn) time complexity for all operations in all cases. This performance can be improved further
for many practical workloads, and so a new tree data structure was designed, known as the Splay tree.

What is a Splay Tree?

A splay tree is a self-balancing tree, but AVL and Red-Black trees are also self-balancing, so what makes
the splay tree different from those two trees? It has one extra operation that makes it unique: splaying.

A splay tree supports the same operations as a binary search tree, i.e., insertion, deletion and searching, but
it also performs one more operation, i.e., splaying. So, all the operations in the splay tree are followed by
splaying.

Splay trees are not strictly balanced trees, but they are roughly balanced trees. Let's understand the search
operation in the splay tree.

Suppose we want to search for element 7 in the tree, which is shown below:


To search for any element in the splay tree, we first perform the standard binary search tree operation. As
7 is less than 10, we move to the left of the root node. After performing the search operation, we need to
perform splaying. Here splaying means that the element on which we are performing the operation should
become the root node after some rearrangement. The rearrangement of the tree is done through rotations.

Note: The splay tree can be defined as a self-adjusting tree in which any operation performed on an element
rearranges the tree so that the element on which the operation was performed becomes the root node of the
tree.

Rotations
There are six types of rotations used for splaying:

1. Zig rotation (Right rotation)

2. Zag rotation (Left rotation)

3. Zig zag (Zig followed by zag)

4. Zag zig (Zag followed by zig)

5. Zig zig (two right rotations)


6. Zag zag (two left rotations)

Factors required for selecting a type of rotation

The following are the factors used for selecting a type of rotation:

● Does the node which we are trying to rotate have a grandparent?

● Is the node left or right child of the parent?

● Is the node left or right child of the grandparent?

Cases for the Rotations

Case 1: If the node does not have a grand-parent, and if it is the right child of the parent, then we carry out
the left rotation; otherwise, the right rotation is performed.

Case 2: If the node has a grandparent, then based on the following scenarios; the rotation would be
performed:

Scenario 1: If the node is the left child of its parent and the parent is also the left child of its parent, then
zig zig (two right rotations) is performed.

Scenario 2: If the node is the right child of its parent, but the parent is the left child of its parent, then zag
zig (a left rotation followed by a right rotation) is performed.

Scenario 3: If the node is the right child of its parent and the parent is also the right child of its parent, then
zag zag (two left rotations) is performed.

Scenario 4: If the node is the left child of its parent, but the parent is the right child of its parent, then zig
zag (a right rotation followed by a left rotation) is performed.

Now, let's understand the above rotations with examples.

To rearrange the tree, we need to perform some rotations. The following are the types of rotations in the
splay tree:
● Zig rotations

The zig rotations are used when the item to be searched is either a root node or the child of a root node (i.e.,
left or the right child).

The following are the cases that can exist in the splay tree while searching:

Case 1: If the search item is a root node of the tree.

Case 2: If the search item is a child of the root node, then the two scenarios will be there:

1. If the child is a left child, the right rotation would be performed, known as a zig right rotation.

2. If the child is a right child, the left rotation would be performed, known as a zig left rotation.

Let's look at the above two scenarios through an example.

Consider the below example:

In the above example, we have to search for element 7 in the tree. We will follow the steps below:

Step 1: First, we compare 7 with the root node. As 7 is less than 10, it is the left child of the root node.

Step 2: Once the element is found, we perform splaying. A right rotation is performed so that 7
becomes the root node of the tree, as shown below:
Let's consider another example.

In the above example, we have to search for element 20 in the tree. We will follow the steps below:

Step 1: First, we compare 20 with the root node. As 20 is greater than the root node, it is the right child of
the root node.
Step 2: Once the element is found, we perform splaying. A left rotation is performed so that element 20
becomes the root node of the tree.

● Zig zig rotations

Sometimes the situation arises where the item to be searched has a parent as well as a grandparent. In
this case, we have to perform two rotations for splaying.

Let's understand this case through an example.

Suppose we have to search for element 1 in the tree, which is shown below:

Step 1: First, we perform a standard BST search operation in order to find element 1. As 1 is less than 10
and less than 7, it will be at the left of node 7. Therefore, element 1 has a parent, i.e., 7, as well as a
grandparent, i.e., 10.
Step 2: In this step, we perform splaying. We need to make node 1 the root node with the help of some
rotations. In this case, we cannot simply perform a single zig or zag rotation; we have to perform a zig zig
rotation.

In order to make node 1 the root node, we need to perform two right rotations, known as zig zig rotations.
When we perform the first right rotation, node 10 moves downwards and node 7 comes upwards, as shown
in the figure below:

Then we perform the second zig (right) rotation: node 7 moves downwards and node 1 comes upwards, as
shown below:
As we observe in the above figure, node 1 has become the root node of the tree; therefore, the searching
is completed.

Suppose we want to search for 20 in the tree below.

In order to search for 20, we need to perform two left rotations. The following are the steps required to reach
node 20:
Step 1: First, we perform the standard BST search operation. As 20 is greater than 10 and 15, it will
be at the right of node 15.

Step 2: The second step is to perform splaying. In this case, two left rotations would be performed. In the
first rotation, node 10 will move downwards, and node 15 would move upwards as shown below:

In the second left rotation, node 15 will move downwards, and node 20 becomes the root node of the tree, as
shown below:
As we have observed, two left rotations are performed, so this is known as a zag zag (left) rotation.

● Zig zag rotations

Till now, both the parent and the grandparent have been in an RR or LL relationship. Now, we will
see the RL or LR relationship between the parent and the grandparent.

Let's understand this case through an example.

Suppose we want to search for element 13 in the tree which is shown below:


Step 1: First, we perform the standard BST search operation. As 13 is greater than 10 but less than 15,
node 13 will be the left child of node 15.

Step 2: Since node 13 is at the left of 15 and node 15 is at the right of node 10, so RL relationship exists.
First, we perform the right rotation on node 15, and 15 will move downwards, and node 13 will come
upwards, as shown below:
Still, node 13 is not the root node, and 13 is at the right of the root node, so we will perform left rotation
known as a zag rotation. The node 10 will move downwards, and 13 becomes the root node as shown below:

As we can observe in the above tree that node 13 has become the root node; therefore, the searching is
completed. In this case, we have first performed the zig rotation and then zag rotation; so, it is known as a
zig zag rotation.

● Zag zig rotation

Let's understand this case through an example.

Suppose we want to search for element 9 in the tree, which is shown below:


Step 1: First, we perform the standard BST search operation. As 9 is less than 10 but greater than 7, it
will be the right child of node 7.

Step 2: Since node 9 is at the right of node 7, and node 7 is at the left of node 10, so LR relationship exists.
First, we perform the left rotation on node 7. The node 7 will move downwards, and node 9 moves upwards
as shown below:
Still the node 9 is not a root node, and 9 is at the left of the root node, so we will perform the right rotation
known as zig rotation. After performing the right rotation, node 9 becomes the root node, as shown below:
As we can observe in the above tree, node 9 is the root node; therefore, the searching is completed. In this
case, we first performed the zag rotation (left rotation) and then the zig rotation (right rotation), so it is
known as a zag zig rotation.

Advantages of Splay tree


● In the splay tree, we do not need to store any extra information. In contrast, AVL trees need to store the
balance factor of each node, which requires extra space, and Red-Black trees also need to store one extra
bit of information per node to denote its color, either Red or Black.

● It is among the fastest types of binary search tree for various practical applications. Splay trees are used
in Windows NT and in GCC.

● It provides better performance because frequently accessed nodes move nearer to the root node, so those
elements can be accessed quickly. This also makes splay trees useful in cache implementations, where
recently accessed data should remain quick to access again, without going back to main memory.

Drawback of Splay tree

The major drawback of the splay tree is that the trees are not strictly balanced, only roughly balanced.
Sometimes a splay tree becomes linear, so a single operation can take O(n) time (although the amortized
cost per operation is still O(logn)).

Insertion operation in Splay tree

In the insertion operation, we first insert the element in the tree and then perform the splaying operation on
the inserted element.

Suppose we insert the following elements into an initially empty splay tree: 15, 10, 17, 7.

Step 1: First, we insert node 15 in the tree. After insertion, we need to perform splaying, but as 15 is the
root node, no splaying is required.

Step 2: The next element is 10. As 10 is less than 15, node 10 will be the left child of node 15, as shown
below:

Now, we perform splaying. To make 10 the root node, we perform a right rotation, as shown below:
Step 3: The next element is 17. As 17 is greater than 10 and 15, it becomes the right child of node 15.

Now, we perform splaying. As 17 has a parent as well as a grandparent, with both links being right links,
we perform two left rotations (a zag zag):
In the above figure, we can observe that 17 becomes the root node of the tree; therefore, the insertion is
completed.

Step 4: The next element is 7. As 7 is less than 17, 15, and 10, node 7 will be the left child of 10.

Now, we have to splay the tree. As 7 has a parent as well as a grandparent, with both links being left links,
we perform two right rotations (a zig zig), as shown below:
Still, node 7 is not the root node; it is the left child of the root node, i.e., 17. So, we need to perform one more
right rotation to make node 7 the root node, as shown below:

Algorithm for Insertion operation

Insert(T, n)
    temp = T->root
    y = NULL
    while (temp != NULL)
        y = temp
        if (n->data < temp->data)
            temp = temp->left
        else
            temp = temp->right
    n->parent = y
    if (y == NULL)
        T->root = n
    else if (n->data < y->data)
        y->left = n
    else
        y->right = n
    Splay(T, n)

In the above algorithm, T is the tree and n is the node which we want to insert. We have created a temp
variable that contains the address of the root node. We will run the while loop until the value of temp becomes
NULL.

Once the insertion is completed, splaying would be performed

Algorithm for Splaying operation

Splay(T, n)
    while (n->parent != NULL)
        if (n->parent == T->root)
            if (n == n->parent->left)
                right_rotation(T, n->parent)
            else
                left_rotation(T, n->parent)
        else
            p = n->parent
            g = p->parent
            if (n == p->left && p == g->left)
                right_rotation(T, g), right_rotation(T, p)
            else if (n == p->right && p == g->right)
                left_rotation(T, g), left_rotation(T, p)
            else if (n == p->left && p == g->right)
                right_rotation(T, p), left_rotation(T, g)
            else
                left_rotation(T, p), right_rotation(T, g)

Implementation of right_rotation(T, x)

right_rotation(T, x)
    y = x->left
    x->left = y->right
    y->right = x
    return y

In the above implementation, x is the node on which the rotation is performed, whereas y is the left child of
the node x. (For brevity, the parent pointers and the link from x's old parent, which Splay relies on, are not
updated here.)

Implementation of left_rotation(T, x)

left_rotation(T, x)
    y = x->right
    x->right = y->left
    y->left = x
    return y

In the above implementation, x is the node on which the rotation is performed and y is the right child of the
node x.
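The pseudocode above leaves the parent pointers and the root update implicit. As a runnable companion (illustrative code, not the article's exact listing), here is a compact C++ sketch in which a single `rotate_up` performs the zig (right) or zag (left) rotation and also maintains parent links:

```cpp
#include <cassert>

struct SNode {
    int key;
    SNode *left = nullptr, *right = nullptr, *parent = nullptr;
    SNode(int k) : key(k) {}
};

// Rotate the edge between x and its parent: a right rotation
// when x is a left child, a left rotation when x is a right child.
void rotate_up(SNode*& root, SNode* x) {
    SNode* p = x->parent;
    SNode* g = p->parent;
    if (x == p->left) {
        p->left = x->right;
        if (x->right) x->right->parent = p;
        x->right = p;
    } else {
        p->right = x->left;
        if (x->left) x->left->parent = p;
        x->left = p;
    }
    p->parent = x;
    x->parent = g;
    if (!g) root = x;
    else if (g->left == p) g->left = x;
    else g->right = x;
}

// Splay n to the root: zig/zag at the root; zig zig/zag zag when
// n and its parent are same-side children (rotate the parent
// first); zig zag/zag zig otherwise (rotate n twice).
void splay(SNode*& root, SNode* n) {
    while (n->parent) {
        SNode* p = n->parent;
        SNode* g = p->parent;
        if (!g) rotate_up(root, n);
        else if ((g->left == p) == (p->left == n)) {
            rotate_up(root, p);
            rotate_up(root, n);
        } else {
            rotate_up(root, n);
            rotate_up(root, n);
        }
    }
}

// BST insert followed by splaying the new node, as in Insert(T, n).
void insert(SNode*& root, int key) {
    SNode *y = nullptr, *t = root;
    while (t) { y = t; t = (key < t->key) ? t->left : t->right; }
    SNode* n = new SNode(key);
    n->parent = y;
    if (!y) root = n;
    else if (key < y->key) y->left = n;
    else y->right = n;
    splay(root, n);
}
```

Inserting 15, 10, 17, 7 in order reproduces the worked insertion example earlier in the text: after the final splay, 7 is the root with 17 as its right child.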

Deletion in Splay tree

As we know that splay trees are the variants of the Binary search tree, so deletion operation in the splay tree
would be similar to the BST, but the only difference is that the delete operation is followed in splay trees by
the splaying operation.

Types of Deletions:

There are two types of deletions in the splay trees:

1. Bottom-up splaying

2. Top-down splaying

Bottom-up splaying
In bottom-up splaying, first we delete the element from the tree and then we perform splaying on the parent
of the deleted node.

Let's understand the deletion in the Splay tree.

Suppose we want to delete 12, 14 from the tree shown below:

● First, we simply perform the standard BST deletion operation to delete element 12. As 12 is a leaf node,
we simply remove the node from the tree.

The deletion is still not complete: we need to splay the parent of the deleted node, i.e., 10, so we perform
Splay(10) on the tree. As we can observe in the above tree, 10 is at the right of node 7, and node 7 is at the
left of node 13. So, first, we perform the left rotation on node 7 and then the right rotation on node 13, as
shown below:
Still, node 10 is not the root node; node 10 is the left child of the root node. So, we need to perform a right
rotation on the root node, i.e., 14, to make node 10 the root node, as shown below:

● Now, we have to delete element 14 from the tree, which is shown below:

As we know, we cannot simply delete an internal node. We replace the value of the node using either its
inorder predecessor or its inorder successor. Suppose we use the inorder successor, replacing the value with
the lowest value that exists in the right subtree. The lowest value in the right subtree of node 14 is 15, so we
replace the value 14 with 15. The node that originally held 15 is a leaf, so we can simply delete it, as shown
below:
Still, the deletion is not complete. We need to perform one more operation, i.e., splaying, in which we make
the parent of the deleted node the root node. Before deletion, the parent of node 14 was already the root
node, i.e., 10, so we do not need to perform any splaying in this case.

Top-down splaying

In top-down splaying, we first perform splaying on the node on which the deletion is to be performed, and
then delete that node from the tree. Once the element is deleted, we perform a join operation.

Let's understand the top-down splaying through an example.


Suppose we want to delete 16 from the tree which is shown below:

Step 1: In top-down splaying, first we perform splaying on node 16. Node 16 has both a parent and a
grandparent. Node 16 is the right child of its parent and the parent is also the right child of its parent, so
this is a zag zag situation. In this case, we perform two left rotations, first on node 13 and then on node 14,
as shown below:
The node 16 is still not the root node; it is the right child of the root node, so we need to perform a left
rotation on node 12 to make node 16 the root node.

Once the node 16 becomes a root node, we will delete the node 16 and we will get two different trees, i.e.,
left subtree and right subtree as shown below:
As we know that the values of the left subtree are always lesser than the values of the right subtree. The root
of the left subtree is 12 and the root of the right subtree is 17. The first step is to find the maximum element
in the left subtree. In the left subtree, the maximum element is 15, and then we need to perform splaying
operation on 15.

As we can observe in the above tree, element 15 has a parent as well as a grandparent. The node is the right
child of its parent, and the parent is also the right child of its parent, so we need to perform two left rotations
to make node 15 the root node, as shown below:

After performing two rotations on the tree, node 15 becomes the root node. As we can see, the right child of
the 15 is NULL, so we attach node 17 at the right part of the 15 as shown below, and this operation is known
as a join operation.
Note: If the element to be deleted is not present in the splay tree, splaying is still performed: it is applied to
the last node accessed before reaching NULL.

Algorithm of Delete operation

Delete(root, data)
    if (root == NULL)
        return NULL
    Splay(root, data)
    if (data != root->data)
        return root                  // element is not present
    if (root->left == NULL)
        root = root->right
    else
        temp = root
        // splaying the left subtree on data brings its maximum to its root
        root1 = Splay(root->left, data)
        root1->right = temp->right
        root = root1
        free(temp)
    return root

In the above algorithm, we first check whether the root is NULL; if the root is NULL, the tree is empty. If
the tree is not empty, we perform the splaying operation on the element which is to be deleted. Once the
splaying operation is completed, we compare the root's data with the element to be deleted; if they are not
equal, the element is not present in the tree. If they are equal, then one of the following cases occurs:

Case 1: If the left child of the root is NULL, the right child of the root becomes the root node.

Case 2: If both left and right subtrees exist, then we splay the maximum element in the left subtree. When
the splaying is completed, the maximum element becomes the root of the left subtree, and the right subtree
is attached as the right child of this new root.
Van Emde Boas Tree
Basics and Construction
The Van Emde Boas tree supports search, successor, predecessor, insert and delete operations in O(lglgN)
time, which is faster than related data structures such as priority queues and binary search trees. It also
answers minimum and maximum queries with O(1) time complexity. Here N is the size of the universe over
which the tree is defined and lg is log base 2.

Note: The Van Emde Boas data structure's keys must be defined over a range of 0 to n − 1 (n is a positive
integer of the form 2^k), and it works only when duplicate keys are not allowed.

Abbreviations:

1. VEB is an abbreviation of Van Emde Boas tree.

2. VEB(u) denotes a VEB tree defined over a universe of size u; the summary and each cluster of a VEB(u)
are VEB(√u) trees.

Structure of Van Emde Boas Tree:

Van Emde Boas Tree is a recursively defined structure.


1. u: Size of the universe over which the VEB tree is defined (valid keys range from 0 to u − 1).
2. Minimum: Contains the minimum key present in the VEB Tree.
3. Maximum: Contains the maximum key present in the VEB Tree.
4. Summary: Points to a VEB(√u) tree which contains an overview of which keys are present in the clusters
array.

5. Clusters: An array of size √u; each entry in the array points to a VEB(√u) tree.

See the image below to understand the basics of Van Emde Boas Tree, although it does not represent the
actual structure of Van Emde Boas Tree:

Basic Understanding of Van Emde Boas Tree:

1. The Van Emde Boas tree is a recursively defined structure, similar to the Proto Van Emde Boas tree.
2. In a Van Emde Boas tree, minimum and maximum queries work in O(1) time, since the tree stores the
minimum and maximum keys present as attributes.
3. Advantages of adding the Maximum and Minimum attributes, which help to decrease the time complexity:
○ If either the Minimum or the Maximum value of a VEB tree is empty (NIL, or -1 in the code), then no
element is present in the tree.
○ If Minimum and Maximum are equal, then exactly one value is present in the structure.
○ If both are present and distinct, then two or more elements are present in the tree.
○ We can insert and delete keys by just setting the maximum and minimum values as per the conditions, in
constant time (O(1)), which helps to shorten the chain of recursive calls: if only one key is present in the
VEB, then to delete that key we simply set min and max to the nil value. Similarly, if no keys are present,
we can insert by just setting min and max to the key we want to insert. These are O(1) operations.
○ In successor and predecessor queries, we can make decisions based on the minimum and maximum values
of a VEB, which makes the work easier.

In the Proto Van Emde Boas tree the universe size is restricted to be of the form 2^(2^k), but the Van Emde
Boas tree allows the universe size to be any exact power of two. So we need to modify the High(x), Low(x)
and generate_index() helper functions used in the Proto Van Emde Boas tree, as below.

1. High(x): It will return floor( x/ceil(√u) ), which is basically the cluster index
in which the key x is present.

High(x) = floor(x/ceil(√u))

2. Low(x): It will return x mod ceil( √u ) which is its position in the cluster.
Low(x) = x % ceil( √u)

3. generate_index(a, b): It will return the position of a key from its position b within its cluster and its
cluster index a.

generate_index(a, b) = a * ceil(√u) + b

4. Construction of Van Emde Boas Tree: Construction of the Van Emde Boas tree is very similar to the
Proto Van Emde Boas tree. The difference is that we now allow the universe size to be any power of two,
so high(), low() and generate_index() are different.
To construct an empty VEB: the procedure is the same as for the Proto VEB, except that the minimum and
maximum fields are added to each VEB. To represent that minimum and maximum are null, we use -1.
Note: In the base case, we need only the minimum and maximum values, because adding a cluster array of
size 2 would be redundant once the min and max values are stored.
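As a quick sanity check of the three helper formulas, here they are as standalone functions (hypothetical free functions that take u as a parameter; the class implementation that follows keeps them as methods) for u = 16, where ceil(√16) = 4:

```cpp
#include <cassert>
#include <cmath>

// Cluster index of key x in a universe of size u.
int high(int u, int x) { return x / (int)ceil(sqrt((double)u)); }

// Position of key x within its cluster.
int low(int u, int x) { return x % (int)ceil(sqrt((double)u)); }

// Reassemble a key from cluster index a and in-cluster position b.
int generate_index(int u, int a, int b) {
    return a * (int)ceil(sqrt((double)u)) + b;
}
```

For u = 16 and key 9: high = 9 / 4 = 2, low = 9 % 4 = 1, and generate_index(2, 1) = 2 * 4 + 1 = 9, so the round trip recovers the key.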

Below is the implementation:

C++ implementation of the approach


#include <bits/stdc++.h>
using namespace std;

class Van_Emde_Boas {
public:
    int universe_size;
    int minimum;
    int maximum;
    Van_Emde_Boas* summary;
    vector<Van_Emde_Boas*> clusters;

    // Function to return the cluster number
    // in which key x is present
    int high(int x)
    {
        int div = ceil(sqrt(universe_size));
        return x / div;
    }

    // Function to return the position of x within its cluster
    int low(int x)
    {
        int mod = ceil(sqrt(universe_size));
        return x % mod;
    }

    // Function to return the key from
    // its cluster number and position
    int generate_index(int x, int y)
    {
        int ru = ceil(sqrt(universe_size));
        return x * ru + y;
    }

    // Constructor
    Van_Emde_Boas(int size)
    {
        universe_size = size;
        minimum = -1;
        maximum = -1;

        // Base case: only min and max are needed
        if (size <= 2) {
            summary = nullptr;
            clusters = vector<Van_Emde_Boas*>(0, nullptr);
        }
        else {
            int no_clusters = ceil(sqrt(size));

            // Assigning VEB(sqrt(u)) to summary
            summary = new Van_Emde_Boas(no_clusters);

            // Creating an array of VEB tree pointers of size sqrt(u)
            clusters = vector<Van_Emde_Boas*>(no_clusters, nullptr);

            // Assigning VEB(sqrt(u)) to all of its clusters
            for (int i = 0; i < no_clusters; i++) {
                clusters[i] = new Van_Emde_Boas(ceil(sqrt(size)));
            }
        }
    }
};

// Driver code
int main()
{
    // New Van_Emde_Boas tree with u = 16
    Van_Emde_Boas* veb = new Van_Emde_Boas(16);
}

Van Emde Boas Tree - Insertion, Find, Minimum and Maximum Queries
Procedure for Insert :

1. If no keys are present in the tree then simply assign minimum and maximum of the tree to the key.
2. Otherwise we will go deeper in the tree and do the following:
○ If the key we want to insert is less than the current minimum of the tree, then we swap the two values,
because the new key will be the real minimum of the tree, and the key which was previously stored as the
minimum will be used for the further process.
This concept can be thought of as lazy propagation in the Van Emde Boas tree: the old minimum value
really is the minimum of one of the clusters of the recursive Van Emde Boas structure, so we do not go
deeper into the structure until the need arises.
○ If we are not at the base case, i.e., the universe size of the tree is greater than 2, then:
■ If the tree's cluster[High(key)] is empty, we recursively call insert over the summary and, as we are doing
lazy propagation, we just set that cluster's minimum and maximum to the key and stop the recursion.
■ Otherwise, we call insert over the cluster in which the key is present.
3. Finally, we check the maximum and set the key as the maximum if it is greater than the current maximum.

Below Image represents empty VEB(4) Tree:


Now we insert 1, then it will just set the minimum and maximum of the tree to 1. You can see the Lazy
propagation of 1:

Now if we insert 0, then 1 will propagate to the 1st cluster and zero will be the new minimum:
Procedure for isMember Query :

● At any point of our search, If the key is minimum or maximum of the tree, which means the key is present,
then return true.
● If we reach the base case but the above condition is false, then the key is not present in the tree, so return
false.
● Otherwise recursively call the function over the cluster of the key i.e.(High(key)) and its position in the
cluster i.e.(Low(key)).
● Here we allow universe_size to be any power of 2, so if the situation arises in which the key is not
smaller than universe_size, we return false.

Minimum & Maximum : Van Emde Boas Tree stores minimum and maximum as its attributes, so we can
return its value if it is present and null otherwise.

Implementation:

#include <bits/stdc++.h>
using namespace std;
class Van_Emde_Boas {
public:
int universe_size;
int minimum;
int maximum;
Van_Emde_Boas* summary;
vector<Van_Emde_Boas*> clusters;
// Function to return cluster numbers
// in which key is present
int high(int x)
{
int div = ceil(sqrt(universe_size));
return x / div;
}
// Function to return position of x in cluster
int low(int x)
{
int mod = ceil(sqrt(universe_size));
return x % mod;
}
// Function to return the index from
// cluster number and position
int generate_index(int x, int y)
{
int ru = ceil(sqrt(universe_size));
return x * ru + y;
}
// Constructor
Van_Emde_Boas(int size)
{
universe_size = size;
minimum = -1;
maximum = -1;
// Base case
if (size <= 2) {
summary = nullptr;
clusters = vector<Van_Emde_Boas*>(0, nullptr);
}
else {
int no_clusters = ceil(sqrt(size));
// Assigning VEB(sqrt(u)) to summary
summary = new Van_Emde_Boas(no_clusters);
// Creating array of VEB Tree pointers of size sqrt(u)
clusters = vector<Van_Emde_Boas*>(no_clusters, nullptr);
// Assigning VEB(sqrt(u)) to all its clusters
for (int i = 0; i < no_clusters; i++) {
clusters[i] = new Van_Emde_Boas(ceil(sqrt(size)));
}
}
}
};
// Function to return the minimum value
// from the tree if it exists
int VEB_minimum(Van_Emde_Boas* helper)
{
return (helper->minimum == -1 ? -1 : helper->minimum);
}
// Function to return the maximum value
// from the tree if it exists
int VEB_maximum(Van_Emde_Boas* helper)
{
return (helper->maximum == -1 ? -1 : helper->maximum);
}
// Function to insert a key in the tree
void insert(Van_Emde_Boas* helper, int key)
{
// If no key is present in the tree
// then set both minimum and maximum
// to the key (Read the previous article
// for more understanding about it)
if (helper->minimum == -1) {
helper->minimum = key;
helper->maximum = key;
}
else {
if (key < helper->minimum) {
// If the key is less than current minimum
// then swap it with the current minimum
// because this minimum is actually
// minimum of one of the internal cluster
// so as we go deeper into the Van Emde Boas
// we need to take that minimum to its real position
// This concept is similar to "Lazy Propagation"
swap(helper->minimum, key);
}
// Not base case then...
if (helper->universe_size > 2) {
// If no key is present in the cluster then insert key into
// both cluster and summary
if (VEB_minimum(helper->clusters[helper->high(key)]) == -1) {
insert(helper->summary, helper->high(key));
// Sets the minimum and maximum of cluster to the key
// as no other keys are present we will stop at this level
// we are not going deeper into the structure like
// Lazy Propagation
helper->clusters[helper->high(key)]->minimum = helper->low(key);
helper->clusters[helper->high(key)]->maximum = helper->low(key);
}
else {
// If there are other elements in the tree then recursively
// go deeper into the structure to set attributes accordingly
insert(helper->clusters[helper->high(key)], helper->low(key));
}
}
// Set the key as maximum if it is greater than the current maximum
if (key > helper->maximum) {
helper->maximum = key;
}
}
}
// Function that returns true if the
// key is present in the tree
bool isMember(Van_Emde_Boas* helper, int key)
{
// If universe_size is less than the key
// then we can not search the key so returns
// false
if (helper->universe_size < key) {
return false;
}
// If at any point of our traversal
// of the tree if the key is the minimum
// or the maximum of the subtree, then
// the key is present so returns true
if (helper->minimum == key || helper->maximum == key) {
return true;
}
else {
// If the above condition fails and
// the size of the tree is 2, the key
// would have to be the minimum or
// maximum of the tree to be present;
// since it is not, return false
if (helper->universe_size == 2) {
return false;
}
else {
// Recursive call over the cluster
// in which the key can be present
// and also pass the new position of the key
// i.e., low(key)
return isMember(helper->clusters[helper->high(key)],
helper->low(key));
}
}
}
// Driver code
int main()
{
Van_Emde_Boas* veb = new Van_Emde_Boas(8);
// Inserting Keys
insert(veb, 2);
insert(veb, 3);
insert(veb, 6);
cout << boolalpha;
// Checking isMember query
cout << isMember(veb, 3) << endl;
cout << isMember(veb, 4) << endl;
// Maximum of VEB
cout << VEB_maximum(veb) << endl;
// Minimum of VEB
cout << VEB_minimum(veb) << endl;
}

Output:
true
false
6
2

Van Emde Boas Tree – Successor and Predecessor

Procedure for successor:

● Base case: If the size of the tree is 2, then if the query-key is 0 and key 1 is present in the tree, return 1,
as it will be the successor. Otherwise, return null.
● If the key is less than the minimum then we can easily say that the minimum will be the successor of the
query-key.
● Recursive case:
○ We first search for the successor in the cluster in which the key is present.
○ If we find any successor in the cluster then generate its index and return it.
○ Otherwise, search for the next cluster, with at least one key present, in the summary, and return the
index of the minimum of that cluster.

See the query for the successor of 0 in the below image:

The below image represents the successor-of-1 query over a VEB tree containing keys 1 & 2:
Procedure for Predecessor:

● Base case: If the size of the tree is 2, then if the query-key is 1 and key 0 is present in the tree, return 0,
as it will be the predecessor. Otherwise, return null.
● If the key is greater than the maximum then we can easily say that the maximum will be the predecessor
of the query-key.
● Recursive case:
○ We first search for the predecessor in the cluster in which the key is present.
○ If we find any predecessor in the cluster then generate its index and return it.
○ Otherwise, search for the previous cluster, with at least one key present, in the summary. If any such
cluster is present then return the index of the maximum of that cluster.
○ If no cluster with that property is present then check whether, due to lazy propagation, the minimum of
the tree (in which the cluster is present) is less than the key; if so, return the minimum, otherwise return
null.

The below image represents the query for the predecessor of key 2:


#include <bits/stdc++.h>
using namespace std;
class Van_Emde_Boas {
public:
int universe_size;
int minimum;
int maximum;
Van_Emde_Boas* summary;
vector<Van_Emde_Boas*> clusters;
// Function to return cluster numbers
// in which key is present
int high(int x)
{
int div = ceil(sqrt(universe_size));
return x / div;
}
// Function to return position of x in cluster
int low(int x)
{
int mod = ceil(sqrt(universe_size));
return x % mod;
}
// Function to return the index from
// cluster number and position
int generate_index(int x, int y)
{
int ru = ceil(sqrt(universe_size));
return x * ru + y;
}
// Constructor
Van_Emde_Boas(int size)
{
universe_size = size;
minimum = -1;
maximum = -1;

// Base case
if (size <= 2) {
summary = nullptr;
clusters = vector<Van_Emde_Boas*>(0, nullptr);
}
else {
int no_clusters = ceil(sqrt(size));
// Assigning VEB(sqrt(u)) to summary
summary = new Van_Emde_Boas(no_clusters);
// Creating array of VEB Tree pointers of size sqrt(u)
clusters = vector<Van_Emde_Boas*>(no_clusters, nullptr);
// Assigning VEB(sqrt(u)) to all its clusters
for (int i = 0; i < no_clusters; i++) {
clusters[i] = new Van_Emde_Boas(ceil(sqrt(size)));
}
}
}
};
// Function to return the minimum value
// from the tree if it exists
int VEB_minimum(Van_Emde_Boas* helper)
{
return (helper->minimum == -1 ? -1 : helper->minimum);
}
// Function to return the maximum value
// from the tree if it exists
int VEB_maximum(Van_Emde_Boas* helper)
{
return (helper->maximum == -1 ? -1 : helper->maximum);
}
// Function to insert a key in the tree
void insert(Van_Emde_Boas* helper, int key)
{
// If no key is present in the tree
// then set both minimum and maximum
// to the key (Read the previous article
// for more understanding about it)
if (helper->minimum == -1) {
helper->minimum = key;
helper->maximum = key;
}
else {
if (key < helper->minimum) {
// If the key is less than the current minimum
// then swap it with the current minimum
// because this minimum is actually
// minimum of one of the internal cluster
// so as we go deeper into the Van Emde Boas
// we need to take that minimum to its real position
// This concept is similar to "Lazy Propagation"
swap(helper->minimum, key);
}
// Not base case then...
if (helper->universe_size > 2) {
// If no key is present in the cluster then insert key into
// both cluster and summary
if (VEB_minimum(helper->clusters[helper->high(key)]) == -1) {
insert(helper->summary, helper->high(key));
// Sets the minimum and maximum of cluster to the key
// as no other keys are present we will stop at this level
// we are not going deeper into the structure like
// Lazy Propagation
helper->clusters[helper->high(key)]->minimum = helper->low(key);
helper->clusters[helper->high(key)]->maximum = helper->low(key);
}
else {
// If there are other elements in the tree then recursively
// go deeper into the structure to set attributes accordingly
insert(helper->clusters[helper->high(key)], helper->low(key));
}
}
// Set the key as maximum if it is greater than the current maximum
if (key > helper->maximum) {
helper->maximum = key;
}
}
}
// Function that returns true if the
// key is present in the tree
bool isMember(Van_Emde_Boas* helper, int key)
{
// If universe_size is less than the key
// then we can not search the key so returns
// false
if (helper->universe_size < key) {
return false;
}
// If at any point of our traversal
// of the tree if the key is the minimum
// or the maximum of the subtree, then
// the key is present so returns true
if (helper->minimum == key || helper->maximum == key) {
return true;
}
else {
// If the above condition fails and
// the size of the tree is 2, the key
// would have to be the minimum or
// maximum of the tree to be present;
// since it is not, return false
if (helper->universe_size == 2) {
return false;
}
else {
// Recursive call over the cluster
// in which the key can be present
// and also pass the new position of the key
// i.e., low(key)
return isMember(helper->clusters[helper->high(key)],
helper->low(key));
}
}
}
// Function to find the successor of the given key
int VEB_successor(Van_Emde_Boas* helper, int key)
{
// Base case: If key is 0 and its successor
// is present then return 1 else return null
if (helper->universe_size == 2) {
if (key == 0 && helper->maximum == 1) {
return 1;
}
else {
return -1;
}
}
// If key is less then minimum then return minimum
// because it will be successor of the key
else if (helper->minimum != -1 && key < helper->minimum) {
return helper->minimum;
}
else {
// Find successor inside the cluster of the key
// First find the maximum in the cluster
int max_incluster = VEB_maximum(helper->clusters[helper->high(key)]);
int offset{ 0 }, succ_cluster{ 0 };
// If there is any key( maximum!=-1 ) present in the cluster then find
// the successor inside of the cluster
if (max_incluster != -1 && helper->low(key) < max_incluster) {
offset = VEB_successor(helper->clusters[helper->high(key)],
helper->low(key));
return helper->generate_index(helper->high(key), offset);
}
// Otherwise look for the next cluster with at least one key present
else {
succ_cluster = VEB_successor(helper->summary, helper->high(key));
// If there is no cluster with any key present
// in summary then return null
if (succ_cluster == -1) {
return -1;
}
// Find minimum in successor cluster which will
// be the successor of the key
else {
offset = VEB_minimum(helper->clusters[succ_cluster]);
return helper->generate_index(succ_cluster, offset);
}
}
}
}
// Function to find the predecessor of the given key
int VEB_predecessor(Van_Emde_Boas* helper, int key)
{
// Base case: If the key is 1 and its predecessor
// is present then return 0 else return null
if (helper->universe_size == 2) {
if (key == 1 && helper->minimum == 0) {
return 0;
}
else {
return -1;
}
}
// If the key is greater than the maximum of the tree then
// return the maximum as it will be the predecessor of the key
else if (helper->maximum != -1 && key > helper->maximum) {
return helper->maximum;
}
else {
// Find predecessor in the cluster of the key
// First find the minimum in the key's cluster to check whether any key
// is present in the cluster
int min_incluster = VEB_minimum(helper->clusters[helper->high(key)]);
int offset{ 0 }, pred_cluster{ 0 };
// If any key is present in the cluster then find predecessor in
// the cluster
if (min_incluster != -1 && helper->low(key) > min_incluster) {
offset = VEB_predecessor(helper->clusters[helper->high(key)],
helper->low(key));
return helper->generate_index(helper->high(key), offset);
}
// Otherwise look for predecessor in the summary which
// returns the index of predecessor cluster with any key present
else {
pred_cluster = VEB_predecessor(helper->summary, helper->high(key));
// If no predecessor cluster then...
if (pred_cluster == -1) {
// Special case which is due to lazy propagation
if (helper->minimum != -1 && key > helper->minimum) {
return helper->minimum;
}
else
return -1;
}
// Otherwise find maximum in the predecessor cluster
else {
offset = VEB_maximum(helper->clusters[pred_cluster]);
return helper->generate_index(pred_cluster, offset);
}
}
}
}
// Driver code
int main()
{
Van_Emde_Boas* veb = new Van_Emde_Boas(8);
// Inserting Keys
insert(veb, 2);
insert(veb, 3);
insert(veb, 4);
insert(veb, 6);
// Queries
cout << VEB_successor(veb, 2) << endl;
cout << VEB_predecessor(veb, 6) << endl;
cout << VEB_successor(veb, 4) << endl;
return 0;
}

Output:
3
4
6

Van Emde Boas Tree - Deletion

Procedure for Delete:

Here we are assuming that the key is already present in the tree.
● First we check whether only one key is present; if so, delete the key by assigning the null value to both
the minimum and maximum of the tree.
● Base Case: If the universe size of the tree is two then, once the above condition turns out to be false,
exactly two keys are present in the tree. So delete the query key by assigning both the maximum and the
minimum of the tree to the other key present in the tree.
● Recursive Case:
○ If the key is the minimum of the tree then find the next minimum of the tree, assign it as the minimum
of the tree, and delete the query key.
○ Now the query key is not present in the tree. We will have to change the rest of the structure in the tree
to eliminate the key completely:
1. If the minimum of the cluster of the query key is null then we will delete it from the summary as well.
Also, if the key is the maximum of the tree then we will find the new maximum and assign it as the
maximum of the tree.

2. Otherwise, if the key is the maximum of the tree then find the new maximum and assign it as the
maximum of the tree.

Below is the series of images representing the ‘delete key 0’ query over the VEB Tree in which keys 0, 1,
and 2 are present:
Step 1: As 0 is the minimum of the tree, it satisfies the first condition of the else part of the algorithm.

First, it finds the next minimum, which is 1, and sets it as the minimum.

Step 2: Now it will delete key 1 from cluster[0].

Step 3: The next condition, cluster[0] has no key, is true, so it will clear the key from the summary as well.

Time Complexity: O(log log N), where N is the universe size.

Auxiliary Space: O(N).


Dynamic Connectivity - Incremental

Dynamic connectivity is a data structure that dynamically maintains information about the connected
components of a graph. In simple words, suppose there is a graph G(V, E) in which the number of vertices
V is constant but the number of edges E is variable. There are three ways in which we can change the
number of edges:

1. Incremental Connectivity: Edges are only added to the graph.
2. Decremental Connectivity: Edges are only deleted from the graph.
3. Fully Dynamic Connectivity: Edges can be both added to and deleted from the graph.

In this article only Incremental connectivity is discussed. There are mainly two operations that need to be
handled:

1. An edge is added to the graph.
2. A query about two nodes x and y: whether they are in the same connected component or not.

Example:

Input: V = 7
Number of operations = 11

1 0 1
2 0 1
2 1 2
1 0 2
2 0 2
2 2 3
2 3 4
1 0 5
2 4 5
2 5 6
1 2 6

Note: 7 represents the number of nodes,
11 represents the number of queries.

There are two types of queries:

Type 1: 1 x y - if the nodes x and y are connected, print Yes, else No
Type 2: 2 x y - add an edge between nodes x and y

Output: No
Yes
No
Yes

Explanation:

Initially there are no edges, so nodes 0 and 1 will be disconnected and the answer will be No. Nodes 0 and
2 will be connected through node 1, so the answer will be Yes. Similarly, for the other queries we can find
whether two nodes are connected or not.

To solve the problems of incremental connectivity, a disjoint-set data structure is used. Here each connected
component represents a set, and if two nodes belong to the same set it means that they are connected.

The implementation is given below; here we are using union by rank and path compression.
A C++ implementation of incremental connectivity

#include<bits/stdc++.h>
using namespace std;

// Finding the root of node i
int root(int arr[], int i)
{
    while (arr[i] != i) {
        arr[i] = arr[arr[i]];
        i = arr[i];
    }
    return i;
}

// union of two nodes a and b
void weighted_union(int arr[], int rank[], int a, int b)
{
    int root_a = root(arr, a);
    int root_b = root(arr, b);

    // union based on rank
    if (rank[root_a] < rank[root_b]) {
        arr[root_a] = arr[root_b];
        rank[root_b] += rank[root_a];
    }
    else {
        arr[root_b] = arr[root_a];
        rank[root_a] += rank[root_b];
    }
}

// Returns true if two nodes have the same root
bool areSame(int arr[], int a, int b)
{
    return (root(arr, a) == root(arr, b));
}

// Performing an operation according to query type
void query(int type, int x, int y, int arr[], int rank[])
{
    // type 1 query means checking if nodes x and y
    // are connected or not
    if (type == 1) {
        // If the roots of x and y are the same
        // then Yes is the answer
        if (areSame(arr, x, y) == true)
            cout << "Yes" << endl;
        else
            cout << "No" << endl;
    }
    // type 2 query refers to the union of x and y
    else if (type == 2) {
        // If x and y have different roots then
        // union them
        if (areSame(arr, x, y) == false)
            weighted_union(arr, rank, x, y);
    }
}

// Driver function
int main()
{
    // No. of nodes
    int n = 7;

    // The following two arrays are used to
    // implement the disjoint set data structure.
    // arr[] holds the parent nodes while the rank
    // array holds the rank of the subsets
    int arr[n], rank[n];

    // initializing both arrays
    for (int i = 0; i < n; i++) {
        arr[i] = i;
        rank[i] = 1;
    }

    // number of queries
    int q = 11;

    query(1, 0, 1, arr, rank);
    query(2, 0, 1, arr, rank);
    query(2, 1, 2, arr, rank);
    query(1, 0, 2, arr, rank);
    query(2, 0, 2, arr, rank);
    query(2, 2, 3, arr, rank);
    query(2, 3, 4, arr, rank);
    query(1, 0, 5, arr, rank);
    query(2, 4, 5, arr, rank);
    query(2, 5, 6, arr, rank);
    query(1, 2, 6, arr, rank);

    return 0;
}
Output:
No
Yes
No
Yes

Time Complexity:

The amortized time complexity is O(α(n)) per operation, where α is the inverse Ackermann function,
which is nearly constant.

Dynamic Connectivity - DSU with Rollback

Dynamic connectivity, in general, refers to the storage of the connectivity of the components of a graph,
where the edges change between some or all the queries. The basic operations are:

1. Add an edge between nodes a and b
2. Remove the edge between nodes a and b

Types of problems using Dynamic Connectivity

Problems using dynamic connectivity can be of the following forms:

● Edges added only (can be called “Incremental Connectivity”): use a DSU data structure.
● Edges removed only (can be called “Decremental Connectivity”): start with the graph in its final state
after all the required edges are removed. Process the queries from the last query to the first (in the opposite
order), adding the edges in the opposite order in which they are removed.
● Edges added and removed (can be called “Fully Dynamic Connectivity”): this requires a rollback
function for the DSU structure, which can undo the changes made to a DSU, returning it to a certain point
in its history. This is called a DSU with rollback.

DSU with Rollback

A DSU with rollback stores the history of the DSU on stacks, following the steps below.

● In the following implementation, we use 2 stacks:


○ One of the stacks (Stack 1) stores a pointer to the position in the array (rank array or parent array) that we
have changed, and
○ In the other (Stack 2) we store the original value stored at that position (alternatively we can use a single
stack of a structure like pairs in C++).
● To undo the last change made to the DSU, set the value at the location indicated by the pointer at the top
of Stack 1 to the value at the top of Stack 2. Pop an element from both stacks.
● Each point in the history of modifications of the graph is uniquely determined by the length of the stack
at the moment that state was reached.
● So, if we want to undo some changes to reach a certain state, all we need to know is the length of the
stack at that point. Then, we can pop elements off the stack and undo those changes until the stack is of
the required length.

The code for the generic implementation is as follows:

A C++ implementation

#include <iostream>
using namespace std;

const int MAX_N = 1000;

// Stack 1 (t) stores pointers to the changed positions,
// Stack 2 (v) stores the values they held before the change
int sz = 0, v[MAX_N], p[MAX_N], r[MAX_N];
int* t[MAX_N];

void update(int* a, int b)
{
    if (*a != b) {
        t[sz] = a;
        v[sz] = *a;
        *a = b;
        sz++;
    }
}

void rollback(int x)
{
    // Undo the changes made,
    // until the stack has length x
    for (; sz > x;) {
        sz--;
        *t[sz] = v[sz];
    }
}

int find(int n)
{
    return p[n] ? find(p[n]) : n;
}

void merge(int a, int b)
{
    // Parent elements of a and b
    a = find(a), b = find(b);
    if (a == b)
        return;

    // Merge small to big
    if (r[b] > r[a])
        std::swap(a, b);

    // Update the rank
    update(r + a, r[a] + r[b]);

    // Update the parent element
    update(p + b, a);
}

int main()
{
    return 0;
}

Example to understand Dynamic Connectivity

Let us look into an example for a better understanding of the concept

Given a graph with N nodes (labelled from 1 to N) and no edges initially, and Q queries. Each query either
adds an edge to or removes an edge from the graph. Our task is to report the number of connected
components after each query is processed (Q lines of output). Each query is of the form {i, a, b} where

● if i = 1 then an edge between a and b is added


● If i = 2, then an edge between a and b is removed

Examples

Input: N = 3, Q = 4, queries = { {1, 1, 2}, {1, 2, 3}, {2, 1, 2}, {2, 2, 3} }

Output: 2 1 2 3

Explanation:
The image shows how the graph changes in each of the 4 queries, and how many connected components

there are in the graph.

Input: N = 5, Q = 7, queries = { {1, 1, 2}, {1, 3, 4}, {1, 2, 3}, {1, 1, 4}, {2, 2, 1}, {1, 4, 5}, {2, 3, 4} }

Output: 4 3 2 2 2 1 2

Explanation:

The image shows how the graph changes in each of the 7 queries, and how many connected components

there are in the graph.

Approach: The problem can be solved with a combination of DSU with rollback and divide and conquer
approach based on the following idea:

The queries can be solved offline. Think of the Q queries as a timeline.


● For each edge, that was at some point a part of the graph, store the disjoint intervals in the timeline where
this edge exists in the graph.
● Maintain a DSU with rollback to add and remove edges from the graph.

The divide and conquer approach will be used on the timeline of queries. The function will be called for
intervals (l, r) in the timeline of queries, and will:

● Add all edges which are present in the graph for the entire interval (l, r).
● Recursively call the same function for the intervals (l, mid) and (mid+1, r) (if the interval (l, r) has
length 1, answer the l-th query and store it in an answers array).
● Call the rollback function to restore the graph to its state at the function call.

Below is the implementation of the above approach:

A C++ code to implement the approach

#include <bits/stdc++.h>
using namespace std;

int N, Q, ans[10];

// Number of components and size of the stack
int nc, sz;

map<pair<int, int>, vector<pair<int, int> > > graph;

// Parent and rank arrays
int p[10], r[10];

// Stack 1 (t) - pointers to changed positions,
// Stack 2 (v) - their previous values
int *t[20], v[20];

// Stack 3 - stores the change in the number of components
// (the component count only changes for updates to p, not r)
int n[20];

// Function to set the stacks
// for performing DSU rollback
int setv(int* a, int b, int toAdd)
{
    t[sz] = a;
    v[sz] = *a;
    *a = b;
    n[sz] = toAdd;
    ++sz;
    return b;
}

// Function for performing rollback
void rollback(int x)
{
    for (; sz > x;) {
        --sz;
        *t[sz] = v[sz];
        nc += n[sz];
    }
}

// Function to find the parent
int find(int n)
{
    return p[n] ? find(p[n]) : n;
}

// Function to merge two disjoint sets
bool merge(int a, int b)
{
    a = find(a), b = find(b);
    if (a == b)
        return 0;
    nc--;
    if (r[b] > r[a])
        std::swap(a, b);

    // Update the rank of the surviving root
    setv(r + a, r[a] + r[b], 0);

    // Update the parent element
    return setv(p + b, a, 1), 1;
}

// Function to find the number of connected components
void solve(int start, int end)
{
    // Initial state of the graph at function call,
    // determined by the length of the stack at this point
    int tmp = sz;

    // Iterate through the graph
    for (auto it = graph.begin(); it != graph.end(); ++it) {

        // End nodes of the edge
        int u = it->first.first;
        int v = it->first.second;

        // Check all intervals where it is present
        for (auto it2 = it->second.begin(); it2 != it->second.end(); ++it2) {

            // Start and end point of the interval
            int w = it2->first, c = it2->second;

            // If (w, c) is a superset of (start, end),
            // merge the 2 components
            if (w <= start && c >= end) {
                merge(u, v);
                break;
            }
        }
    }

    // If the interval is of length 1, answer the query
    // (and undo this call's merges before returning)
    if (start == end) {
        ans[start] = nc;
        rollback(tmp);
        return;
    }

    // Recursively call the function
    int mid = (start + end) >> 1;
    solve(start, mid);
    solve(mid + 1, end);

    // Return the graph to the state
    // at the function call
    rollback(tmp);
}

// Utility function to solve the problem
void componentAtInstant(vector<int> queries[])
{
    // Initially the graph is empty, so N components
    nc = N;
    for (int i = 0; i < Q; i++) {
        int t = queries[i][0];
        int u = queries[i][1], v = queries[i][2];

        // To standardise the procedure
        if (u > v)
            swap(u, v);

        if (t == 1) {
            // Add the edge and start a new interval for it
            graph[{ u, v }].push_back({ i, Q });
        }
        else {
            // Close the interval for the edge
            graph[{ u, v }].back().second = i - 1;
        }
    }

    // Call the function to find the components
    solve(0, Q);
}

// Driver code
int main()
{
    N = 3, Q = 4;
    vector<int> queries[] = { { 1, 1, 2 }, { 1, 2, 3 }, { 2, 1, 2 }, { 2, 2, 3 } };

    // Function call
    componentAtInstant(queries);

    for (int i = 0; i < Q; i++)
        cout << ans[i] << " ";

    return 0;
}
Output:
2 1 2 3

Time Complexity: O(Q * logQ * logN)

● Analysis: Let there be M edges that exist in the graph initially.
○ The total number of edges that could possibly exist in the graph is M + Q (at most one edge is added
per query).
○ The total number of edge addition and removal operations is O((M+Q) log Q), and each operation takes
O(logN) time.
○ In the above problem, M = 0. So, the time complexity of this algorithm is O(Q * logQ * logN).

Auxiliary Space: O(N+Q)
