
DATA STRUCTURES USING C

CAP2001
School of Engineering and Sciences
Department of Computer Science and Engineering

Submitted By:
Student Name Shreya Prakash
Roll No 220160212091
Programme BCA
Section B, T4
Department Computer Science and Engineering
Session/Semester 2023-24/3rd Semester
Submitted To:
Faculty Name Ms. Sapna Sharma

G D Goenka University
Gurgaon, Haryana
Q1. What do you mean by Searching? Explain Sequential search and Binary search with the
help of an example.
Ans. Searching is the process of finding a particular item or value in a collection of data. It's a
common operation in computer science and is used in various applications such as databases,
information retrieval, and algorithms. There are different searching algorithms, and two common
ones are sequential search and binary search.
Sequential Search:
• Definition: Sequential search, also known as linear search, is a simple searching algorithm
that iterates through each element in a collection until it finds the target value or reaches
the end of the collection.
• Example: Let's consider an array of integers: [3, 1, 4, 7, 2, 8, 5]. Suppose we want to search
for the value 7.
Array: [3, 1, 4, 7, 2, 8, 5]
Index: 0 1 2 3 4 5 6

• Sequential search steps:
1. Start from the first element (index 0).
2. Compare each element with the target value (7 in this case).
3. When the target value (7) is found at index 3, stop the search.
4. If the target value is not found by the end of the array, conclude that the value is not present.
Binary Search:
• Definition: Binary search is a more efficient searching algorithm, but it requires the data
to be sorted. It works by repeatedly dividing the search interval in half.
• Example: Consider a sorted array of integers: [1, 2, 3, 4, 5, 7, 8]. We want to search for
the value 7.
Array: [1, 2, 3, 4, 5, 7, 8]
Index: 0 1 2 3 4 5 6

• Binary search steps:
1. Start with the entire sorted array.
2. Compare the target value (7) with the middle element (index 3).
3. Since 7 > 4, the search is narrowed to the right half of the array: [5, 7, 8].
4. Repeat the process in the right half by comparing with the middle element (index 5).
5. The target value (7) is found at index 5.
Binary search is more efficient than sequential search, especially for large datasets, as it eliminates
half of the remaining elements in each step. However, it requires the data to be sorted, which is a
limitation compared to sequential search.

Q2. Define Linked List. What are the types of linked lists? What are the ways of
implementing a linked list? State the advantages of circular lists over doubly linked lists.

Ans. A linked list is a data structure used for organizing and storing a sequence of elements. Unlike
arrays, elements in a linked list are not stored in contiguous memory locations; instead, each
element points to the next one, forming a chain-like structure. The basic building block of a linked
list is a node, which contains data and a reference (or link) to the next node in the sequence.

Types of Linked Lists:

There are several types of linked lists, with the main distinctions being in the way nodes are
connected. The common types include:

• Singly Linked List: Each node points to the next node in the sequence.
• Doubly Linked List: Each node has references to both the next and the previous nodes in
the sequence.
• Circular Linked List: In a circular linked list, the last node points back to the first node,
forming a circle.

Ways of Implementing Linked Lists:

Linked lists can be implemented using various programming languages, and the choice of
implementation may depend on the language and the specific requirements. The main ways to
implement linked lists include:

• Node-based Implementation: Define a node structure that contains data and a reference
to the next (and possibly previous) node.
• Class-based Implementation: In object-oriented languages like Java or Python, you can
define a class for the linked list, where each instance of the class represents a node.
• Dynamic Memory Allocation (C): In languages like C, nodes can be dynamically
allocated using pointers.
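
For instance, a minimal node-based sketch in C using dynamic memory allocation, as mentioned above (the structure and variable names here are illustrative):

#include <stdio.h>
#include <stdlib.h>

// A node of a singly linked list: data plus a pointer to the next node.
struct Node {
    int data;
    struct Node *next;
};

int main(void) {
    // Build a two-node list 10 -> 20 -> NULL from dynamically allocated nodes.
    struct Node *head = malloc(sizeof(struct Node));
    head->data = 10;
    head->next = malloc(sizeof(struct Node));
    head->next->data = 20;
    head->next->next = NULL;

    for (struct Node *p = head; p != NULL; p = p->next)
        printf("%d -> ", p->data);
    printf("NULL\n");

    free(head->next);
    free(head);
    return 0;
}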

Advantages of Circular Lists over Doubly Linked Lists:

Circular linked lists have some advantages over doubly linked lists in certain scenarios:

• Simplicity of Implementation: Circular lists can be simpler to implement in some cases
because the last node in the list points back to the first node, eliminating the need to check
for the end of the list explicitly.
• Reachability from Any Node: Because the last node connects back to the first, every node
can be reached from any starting node by simply continuing forward around the circle; a
doubly linked list needs a second (previous) pointer per node to offer comparable flexibility.
• Memory Utilization: A circular singly linked list stores only one pointer per node, so in
certain scenarios it uses memory more efficiently than a doubly linked list, especially when
cycling repeatedly over a continuous stream of data.

However, it's essential to note that the choice between circular and doubly linked lists depends on
the specific requirements of the application, and each has its own set of advantages and trade-offs.

Q3. What do you mean by a Linked List? Write an algorithm to insert and delete a node in a
Singly Linked List.

Ans. A linked list is a linear data structure consisting of a sequence of elements where each element
points to the next element in the sequence. The basic building block of a linked list is a node, which
contains data and a reference (or link) to the next node in the sequence.

• Algorithm to Insert a Node in a Singly Linked List:

Insert(Node, Data, Position)

1. Create a new node with the given data.

2. If the specified position is 1:

a. Set the new node's next pointer to the current head.

b. Set the head of the list to the new node.

3. Else, traverse the list to the node at position - 1.

a. Set the new node's next pointer to the next node of the current node.

b. Set the next pointer of the current node to the new node.

• Algorithm to Delete a Node from a Singly Linked List:

Delete(Node, Position)

1. If the specified position is 1:

a. Set the head of the list to the next node of the current head.

2. Else, traverse the list to the node at position - 1.

a. Set the next pointer of the current node to the next node of the next node.
Pseudocode:

Node Structure:

Node:

data

next

Linked List Operations:

InsertNode(head, data, position):

1. Create a new node with the given data.

2. If position is 0:

a. Set new_node.next to head.

b. Set head to new_node.

3. Else:

a. Initialize a pointer current to head.

b. Repeat (position - 1) times:

i. Move current to the next node.

ii. If current is null, return an error (position out of bounds).

c. Set new_node.next to current.next.

d. Set current.next to new_node.

DeleteNode(head, position):

1. If head is null, return an error (empty list).

2. If position is 0:
a. Set temp to head.

b. Set head to head.next.

c. Free the memory allocated for temp.

3. Else:

a. Initialize a pointer current to head.

b. Repeat (position - 1) times:

i. Move current to the next node.

ii. If current is null or current.next is null, return an error (position out of bounds).

c. Set temp to current.next.

d. Set current.next to temp.next.

e. Free the memory allocated for temp.

This pseudocode assumes a zero-based index for the positions, and it performs error checking to
handle cases such as an empty list or attempting to insert/delete at an out-of-bounds position.
Additionally, it dynamically allocates and frees memory for nodes as needed.
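
To connect the pseudocode above with C, here is a hedged sketch of a position-based insert (zero-based positions). It assumes the struct Node definition used in the program that follows; the function name insertAtPosition is illustrative:

// Insert newData at the given zero-based position; returns the (possibly new) head.
struct Node* insertAtPosition(struct Node* head, int newData, int position) {
    struct Node* newNode = (struct Node*)malloc(sizeof(struct Node));
    newNode->data = newData;

    if (position == 0) {              // insert at the front
        newNode->next = head;
        return newNode;
    }

    struct Node* current = head;
    for (int i = 0; i < position - 1 && current != NULL; i++)
        current = current->next;      // walk to the node at position - 1

    if (current == NULL) {            // position out of bounds
        free(newNode);
        return head;
    }

    newNode->next = current->next;    // splice the new node in
    current->next = newNode;
    return head;
}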

Program:

#include <stdio.h>
#include <stdlib.h>

// Node structure
struct Node {
    int data;
    struct Node* next;
};

// Function to print the linked list
void printList(struct Node* head) {
    struct Node* current = head;
    while (current != NULL) {
        printf("%d -> ", current->data);
        current = current->next;
    }
    printf("NULL\n");
}

// Function to insert a node at the beginning of the linked list
struct Node* insertNodeAtBeginning(struct Node* head, int newData) {
    struct Node* newNode = (struct Node*)malloc(sizeof(struct Node));
    newNode->data = newData;
    newNode->next = head;
    head = newNode;
    return head;
}

// Function to delete a node with a given key from the linked list
struct Node* deleteNode(struct Node* head, int key) {
    struct Node *current = head, *prev = NULL;

    // If the key is present in the first node
    if (current != NULL && current->data == key) {
        head = current->next;
        free(current);
        printf("Node with data %d deleted.\n", key);
        return head;
    }

    // Search for the key to be deleted, keeping track of the previous node
    while (current != NULL && current->data != key) {
        prev = current;
        current = current->next;
    }

    // If the key is not present
    if (current == NULL) {
        printf("Node with data %d not found.\n", key);
        return head;
    }

    // Unlink the node from the linked list
    prev->next = current->next;
    free(current);
    printf("Node with data %d deleted.\n", key);
    return head;
}

// Main function
int main() {
    struct Node* head = NULL;

    // Insert nodes at the beginning
    head = insertNodeAtBeginning(head, 3);
    head = insertNodeAtBeginning(head, 7);
    head = insertNodeAtBeginning(head, 9);

    // Print the original linked list
    printf("Original Linked List: ");
    printList(head);

    // Delete a node with data 7
    head = deleteNode(head, 7);

    // Print the modified linked list
    printf("Linked List after deletion: ");
    printList(head);

    return 0;
}

Q4. What do you mean by Searching? Write a C program for Linear Search and binary
search. Write their time complexity.

Ans. Searching is the process of finding a particular item or value in a collection of data. It involves
examining the elements of a data structure and determining if a particular element matches the
target value. Searching is a fundamental operation in computer science and is used in various
algorithms and applications.

Linear Search: Linear search, also known as sequential search, is a simple searching algorithm
that examines each element in a collection one by one until the target element is found or the end
of the collection is reached.

• Program for Linear Search:

Input:

#include <stdio.h>

int linearSearch(int arr[], int n, int key) {
    for (int i = 0; i < n; i++) {
        if (arr[i] == key) {
            return i; // Return the index if key is found
        }
    }
    return -1; // Return -1 if key is not found
}

int main() {
    int n, key;

    // Input the size of the array
    printf("Enter the number of elements in the array: ");
    scanf("%d", &n);

    int arr[n]; // Declare an array of size n

    // Input the elements of the array
    printf("Enter %d elements of the array:\n", n);
    for (int i = 0; i < n; i++) {
        scanf("%d", &arr[i]);
    }

    // Input the element to be searched
    printf("Enter the element to be searched: ");
    scanf("%d", &key);

    // Perform linear search
    int result = linearSearch(arr, n, key);

    // Display the result
    if (result != -1) {
        printf("Element found at index %d\n", result);
    } else {
        printf("Element not found in the array\n");
    }

    return 0;
}

Output:

Enter the number of elements in the array: 10

Enter 10 elements of the array:

1 4 7 9 10 13 16 18 23 25

Enter the element to be searched: 13

Element found at index 5

• Time Complexity of Linear Search:

The time complexity of linear search is O(n), where 'n' is the number of elements in the array. In
the worst case, the algorithm may need to iterate through all elements to find the target.

Binary Search: Binary search is a more efficient searching algorithm that works on sorted arrays.
It repeatedly divides the search interval in half.

• Program for Binary Search:

Input:

#include <stdio.h>

int binarySearch(int arr[], int low, int high, int key) {
    while (low <= high) {
        int mid = low + (high - low) / 2;

        if (arr[mid] == key) {
            return mid; // Return the index if key is found
        } else if (arr[mid] < key) {
            low = mid + 1; // If key is greater, search in the right half
        } else {
            high = mid - 1; // If key is smaller, search in the left half
        }
    }
    return -1; // Return -1 if key is not found
}

int main() {
    int n, key;

    // Input the size of the array
    printf("Enter the number of elements in the array: ");
    scanf("%d", &n);

    int arr[n]; // Declare an array of size n

    // Input the elements of the array (assuming the array is sorted)
    printf("Enter %d sorted elements of the array:\n", n);
    for (int i = 0; i < n; i++) {
        scanf("%d", &arr[i]);
    }

    // Input the element to be searched
    printf("Enter the element to be searched: ");
    scanf("%d", &key);

    // Perform binary search
    int result = binarySearch(arr, 0, n - 1, key);

    // Display the result
    if (result != -1) {
        printf("Element found at index %d\n", result);
    } else {
        printf("Element not found in the array\n");
    }

    return 0;
}

Output:

Enter the number of elements in the array: 10

Enter 10 sorted elements of the array:

1 2 3 5 6 8 10 13 16 18

Enter the element to be searched: 13

Element found at index 7

• Time Complexity of Binary Search:

The time complexity of binary search is O(log n), where 'n' is the number of elements in the array.
This is because, in each step, the search space is halved, leading to a more efficient search
compared to linear search.

Q5. Define:

Ans.

1. Graph: A graph is a collection of nodes (or vertices) and edges that connect pairs of nodes.
Graphs are widely used to represent relationships between entities. The edges may be
directed or undirected, and they may have weights.
2. Weighted Graph: A weighted graph is a type of graph in which each edge is assigned a
numerical value or weight. This weight represents some quantitative measure of the
relationship between the nodes connected by the edge. Weighted graphs are used when the
connections between nodes have associated costs, distances, or other numerical values.
3. Directed Graph: A directed graph (or digraph) is a graph in which edges have a direction,
meaning they are ordered pairs. If there is a directed edge from node A to node B, it doesn't
imply the existence of an edge from B to A unless explicitly defined. Directed graphs are
useful for representing relationships with a sense of direction, such as dependencies or
flows.
4. Undirected Graph: An undirected graph is a graph in which edges do not have a direction.
The edges are unordered pairs, meaning that if there is an edge between node A and node
B, it implies an edge between B and A. Undirected graphs are used to represent symmetric
relationships.
5. Indegree and Outdegree of a Graph:
• Indegree: For a directed graph, the indegree of a node is the number of incoming edges to
that node. It represents how many edges are directed towards the node.
• Outdegree: For a directed graph, the outdegree of a node is the number of outgoing edges
from that node. It represents how many edges are directed away from the node.
6. Adjacency Matrix: An adjacency matrix is a two-dimensional array used to represent a
finite graph. The rows and columns of the matrix correspond to the vertices of the graph,
and the presence or absence of an edge between two vertices is indicated by the values in
the matrix. For an undirected graph, the matrix is symmetric, and for a directed graph, it
may not be symmetric. The matrix may also contain weights if it's a weighted graph. An
entry A[i][j] is usually 1 if there is an edge between vertices i and j, and 0 otherwise. If it's
a weighted graph, A[i][j] may represent the weight of the edge.
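
As a small illustration of the adjacency matrix idea in C (the graph, array name, and vertex numbering are made up for this example):

#include <stdio.h>

#define V 3

int main(void) {
    // Adjacency matrix of an undirected graph with vertices 0, 1, 2
    // and edges 0-1 and 1-2; adj[i][j] == 1 means an edge between i and j.
    int adj[V][V] = {
        {0, 1, 0},
        {1, 0, 1},
        {0, 1, 0}
    };

    // For an undirected graph, the degree of a vertex is the sum of its row.
    for (int i = 0; i < V; i++) {
        int degree = 0;
        for (int j = 0; j < V; j++)
            degree += adj[i][j];
        printf("Degree of vertex %d = %d\n", i, degree);
    }
    return 0;
}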

Q6. What is sorting? What do you mean by internal and external sorting? List the different
types of sorting techniques. Write their time complexities.

Ans. Sorting is the process of arranging elements in a specific order, typically in ascending or
descending order based on some criteria. It is a fundamental operation in computer science and
finds applications in various areas, such as searching, data analysis, and information retrieval.

Internal Sorting and External Sorting:

• Internal Sorting: Internal sorting refers to sorting the entire data set in the main memory
(RAM). It assumes that the entire data set fits into the computer's memory.
• External Sorting: External sorting is used when the data set is too large to fit entirely into
the main memory. It involves dividing the data into smaller chunks, sorting each chunk
internally, and then merging the sorted chunks.

Types of Sorting Techniques and Their Time Complexity:

1. Bubble Sort:

Time Complexity:
• Worst Case: O(n^2)
• Best Case: O(n) (when the list is already sorted and an early-exit check is used)

void bubbleSort(int arr[], int n);

2. Selection Sort:

Time Complexity:

• Worst Case: O(n^2)


• Best Case: O(n^2)

void selectionSort(int arr[], int n);

3. Insertion Sort:

Time Complexity:

• Worst Case: O(n^2)


• Best Case: O(n) (when the list is nearly sorted)

void insertionSort(int arr[], int n);

4. Merge Sort:

Time Complexity:

• Worst Case: O(n log n)


• Best Case: O(n log n)

void mergeSort(int arr[], int l, int r);

5. Quick Sort:

Time Complexity:

• Worst Case: O(n^2) (rare, usually O(n log n))


• Best Case: O(n log n)

void quickSort(int arr[], int low, int high);

6. Heap Sort:
Time Complexity:

• Worst Case: O(n log n)


• Best Case: O(n log n)

void heapSort(int arr[], int n);

7. Radix Sort:

Time Complexity:

• Worst Case: O(k * n) (k is the number of digits in the maximum number)


• Best Case: O(k * n)

void radixSort(int arr[], int n);

8. Counting Sort:

Time Complexity:

• Worst Case: O(n + k) (k is the range of input)


• Best Case: O(n + k)

void countingSort(int arr[], int n);

9. Bucket Sort:

Time Complexity:

• Worst Case: O(n^2) (when each element is in the same bucket)


• Best Case: O(n + k) (k is the number of buckets)

void bucketSort(int arr[], int n);
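
As one concrete example of how these prototypes could be implemented, here is a sketch of bubble sort with the early-exit check that gives it its O(n) best case (the code is illustrative, not tied to any particular library):

#include <stdio.h>
#include <stdbool.h>

// Bubble sort: repeatedly swap adjacent out-of-order elements.
// The 'swapped' flag lets the function stop early on an already-sorted array.
void bubbleSort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        bool swapped = false;
        for (int j = 0; j < n - 1 - i; j++) {
            if (arr[j] > arr[j + 1]) {
                int temp = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = temp;
                swapped = true;
            }
        }
        if (!swapped)
            break;   // no swaps in this pass, so the array is already sorted
    }
}

int main(void) {
    int arr[] = {5, 1, 4, 2, 8};
    int n = sizeof(arr) / sizeof(arr[0]);
    bubbleSort(arr, n);
    for (int i = 0; i < n; i++)
        printf("%d ", arr[i]);   // prints: 1 2 4 5 8
    printf("\n");
    return 0;
}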

Q7. Define an AVL tree. Obtain an AVL tree by inserting one integer at a time in the
following sequence: 50, 55, 60, 15, 10, 40, 20, 45, 30, 47, 70, 80. Show all the steps.

Ans. An AVL tree (Adelson-Velsky and Landis tree) is a self-balancing binary search tree. In an
AVL tree, the heights of the two child subtrees of any node differ by at most one, ensuring that the
tree remains balanced. If, at any time during an insertion or a deletion operation, the AVL property
is violated, rotations are performed to restore balance.
Here are the steps to obtain an AVL tree by inserting one integer at a time in the following
sequence: 50, 55, 60, 15, 10, 40, 20, 45, 30, 47, 70, 80.

Steps (showing any rebalancing performed after each insertion):

1. Insert 50: the tree is just the node 50.
2. Insert 55: 55 becomes the right child of 50.
3. Insert 60: right-right imbalance at 50; perform a left rotation at 50.

       55
      /  \
    50    60

4. Insert 15: 15 becomes the left child of 50.
5. Insert 10: left-left imbalance at 50; perform a right rotation at 50.

       55
      /  \
    15    60
   /  \
  10    50

6. Insert 40: left-right imbalance at 55; perform a left rotation at 15 followed by a right rotation at 55.

        50
      /    \
    15      55
   /  \        \
  10    40      60

7. Insert 20: 20 becomes the left child of 40; no rotation is needed.
8. Insert 45: 45 becomes the right child of 40; no rotation is needed.
9. Insert 30: right-left imbalance at 15; perform a right rotation at 40 followed by a left rotation at 15.

          50
        /    \
      20      55
     /  \        \
    15    40      60
   /     /  \
  10    30    45

10. Insert 47: 47 becomes the right child of 45, causing a left-right imbalance at 50; perform a left rotation at 20 followed by a right rotation at 50.

          40
        /    \
      20      50
     /  \    /  \
    15    30 45    55
   /           \     \
  10            47    60

11. Insert 70: 70 becomes the right child of 60, causing a right-right imbalance at 55; perform a left rotation at 55, making 60 the right child of 50 with children 55 and 70.
12. Insert 80: 80 becomes the right child of 70; no rotation is needed.

The final AVL tree is shown below:

          40
        /    \
      20      50
     /  \    /  \
    15    30 45    60
   /           \   /  \
  10            47 55    70
                            \
                             80

Now, the AVL tree is balanced after inserting all the integers in the given sequence. Each node's
balance factor (the height difference between the left and right subtrees) is either -1, 0, or 1,
ensuring that the tree remains balanced.
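
For reference, a minimal C sketch of the two single rotations used in the steps above, assuming each node stores its subtree height (the names AVLNode, leftRotate, and rightRotate are illustrative):

#include <stdio.h>
#include <stdlib.h>

struct AVLNode {
    int key;
    int height;                        // height of the subtree rooted here
    struct AVLNode *left, *right;
};

static int height(struct AVLNode *n) { return n ? n->height : 0; }
static int max2(int a, int b) { return a > b ? a : b; }

static struct AVLNode *newNode(int key) {
    struct AVLNode *n = malloc(sizeof *n);
    n->key = key;
    n->height = 1;
    n->left = n->right = NULL;
    return n;
}

// Right rotation: fixes a left-left imbalance at y.
struct AVLNode *rightRotate(struct AVLNode *y) {
    struct AVLNode *x = y->left;
    struct AVLNode *T2 = x->right;
    x->right = y;                      // y becomes the right child of x
    y->left = T2;                      // x's old right subtree moves under y
    y->height = 1 + max2(height(y->left), height(y->right));
    x->height = 1 + max2(height(x->left), height(x->right));
    return x;                          // new subtree root
}

// Left rotation: fixes a right-right imbalance at x.
struct AVLNode *leftRotate(struct AVLNode *x) {
    struct AVLNode *y = x->right;
    struct AVLNode *T2 = y->left;
    y->left = x;                       // x becomes the left child of y
    x->right = T2;                     // y's old left subtree moves under x
    x->height = 1 + max2(height(x->left), height(x->right));
    y->height = 1 + max2(height(y->left), height(y->right));
    return y;                          // new subtree root
}

int main(void) {
    // Build the unbalanced chain 50 -> 55 -> 60 (step 3 above) and rebalance it.
    struct AVLNode *root = newNode(50);
    root->right = newNode(55);
    root->right->right = newNode(60);
    root->right->height = 2;
    root->height = 3;

    root = leftRotate(root);
    printf("root = %d, left = %d, right = %d\n",
           root->key, root->left->key, root->right->key);  // 55, 50, 60
    return 0;
}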
Q8. Define binary search tree. Draw the binary search tree that is created if the following
numbers are inserted in the tree in the given order: Kavita, Helan, Reena, Sonam, Amir,
Ranbir, Salman, Abhishek.

Ans. A Binary Search Tree (BST) is a binary tree data structure in which each node has at most
two child nodes, referred to as the left child and the right child. The key (value) of nodes in the left
subtree is less than the key of the root, and the key of nodes in the right subtree is greater than the
key of the root. This ordering property extends to all nodes in the tree, making it an efficient data
structure for searching, insertion, and deletion operations.

BST for the given names: Kavita, Helan, Reena, Sonam, Amir, Ranbir, Salman, Abhishek.
Names are compared alphabetically: each new name goes into the left subtree if it precedes
the current node's name and into the right subtree otherwise.

                Kavita
               /      \
          Helan        Reena
          /           /     \
      Amir        Ranbir     Sonam
      /                      /
  Abhishek              Salman
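
A minimal C sketch of string-keyed BST insertion using strcmp, which produces the tree above when the names are inserted in the given order (the node layout and function names are illustrative):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct BSTNode {
    const char *name;
    struct BSTNode *left, *right;
};

// Insert 'name' into the BST rooted at 'root', comparing names alphabetically.
struct BSTNode *insert(struct BSTNode *root, const char *name) {
    if (root == NULL) {
        struct BSTNode *n = malloc(sizeof *n);
        n->name = name;
        n->left = n->right = NULL;
        return n;
    }
    if (strcmp(name, root->name) < 0)
        root->left = insert(root->left, name);
    else
        root->right = insert(root->right, name);
    return root;
}

// Inorder traversal prints the names in alphabetical order.
void inorder(struct BSTNode *root) {
    if (root == NULL) return;
    inorder(root->left);
    printf("%s ", root->name);
    inorder(root->right);
}

int main(void) {
    const char *names[] = {"Kavita", "Helan", "Reena", "Sonam",
                           "Amir", "Ranbir", "Salman", "Abhishek"};
    struct BSTNode *root = NULL;
    for (int i = 0; i < 8; i++)
        root = insert(root, names[i]);
    inorder(root);   // Abhishek Amir Helan Kavita Ranbir Reena Salman Sonam
    printf("\n");
    return 0;
}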

Q9. Distinguish between:


• Iteration and Recursion.
• Datatypes and Data Structures.
• Time complexity and Space complexity.
• Binary tree and Binary Search tree.
Ans.
• Iteration and Recursion:
Iteration: Iteration is a repetitive process where a set of instructions or statements are executed
repeatedly. It is a looping construct that repeats a block of code until a certain condition is met.
Recursion: Recursion is a programming technique where a function calls itself directly or
indirectly to solve a problem. In recursion, the problem is divided into subproblems, and the
solution for the main problem is obtained by solving the subproblems. (A short C sketch
contrasting iteration and recursion appears at the end of this answer.)
• Datatypes and Data Structures:
Datatypes: Datatypes define the type of data that a variable can hold in a programming language.
Examples include integers, floating-point numbers, characters, and custom-defined structures.
Data Structures: Data structures are collections of data elements and the relationships among
them. They provide a way to organize and store data to perform operations efficiently. Examples
include arrays, linked lists, stacks, queues, and trees.
• Time Complexity and Space Complexity:
Time Complexity: Time complexity is a measure of the amount of time an algorithm takes with
respect to the input size. It quantifies the amount of time taken by an algorithm to run as a function
of the size of the input.
Time complexity is often expressed using Big O notation, which represents the upper bound or
worst-case scenario of an algorithm's running time. It provides an asymptotic upper bound on the
growth rate of the running time concerning the input size.
Space Complexity: Space complexity is a measure of the amount of memory space an algorithm
needs concerning the input size. It quantifies the amount of memory space required by an algorithm
during its execution.
Similar to time complexity, space complexity is also often expressed using Big O notation,
representing the upper bound or worst-case scenario of the space requirements.

• Binary Tree and Binary Search Tree:


Binary Tree: A binary tree is a hierarchical data structure where each node has at most two
children, referred to as the left child and the right child. Nodes in a binary tree are connected by
edges, and there is a unique path from the root to each node.
Binary Search Tree (BST): A binary search tree is a binary tree with an additional property: for
each node, the values in its left subtree are less than or equal to the node's value, and the values in
its right subtree are greater than or equal to the node's value. This ordering property allows for
efficient search, insertion, and deletion operations.
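
As mentioned above, a short C sketch contrasting iteration and recursion using factorial (the function names are illustrative):

#include <stdio.h>

// Iterative version: a loop accumulates the product.
long factorialIterative(int n) {
    long result = 1;
    for (int i = 2; i <= n; i++)
        result *= i;
    return result;
}

// Recursive version: the function calls itself on a smaller subproblem.
long factorialRecursive(int n) {
    if (n <= 1)
        return 1;                         // base case
    return n * factorialRecursive(n - 1); // recursive case
}

int main(void) {
    printf("%ld %ld\n", factorialIterative(5), factorialRecursive(5)); // 120 120
    return 0;
}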

Q10. What are the two types of Complexities? Explain them. Explain the concept of Big O
and Big Omega and Theta.
Ans. In the context of algorithm analysis, two types of complexities are often discussed: time
complexity and space complexity.
• Time Complexity:
Time complexity is a measure of the amount of time an algorithm takes to complete concerning
the input size. It provides an estimation of the running time of an algorithm as a function of the
size of the input.
Time complexity is often expressed using Big O notation, which represents the upper bound or
worst-case scenario of an algorithm's running time. It provides an asymptotic upper bound on the
growth rate of the running time concerning the input size.
• Space Complexity:
Space complexity is a measure of the amount of memory space an algorithm needs to complete
concerning the input size. It quantifies the amount of memory space required by an algorithm
during its execution.
Similar to time complexity, space complexity is also often expressed using Big O notation,
representing the upper bound or worst-case scenario of the space requirements.
Concepts of Big O, Big Omega, and Theta:
• Big O (O):
Big O notation describes the upper bound or worst-case scenario of an algorithm's time or space
complexity. It represents the maximum amount of resources an algorithm may consume
concerning the input size.
For example, if an algorithm has a time complexity of O(f(n)), it means that the running time of
the algorithm grows at most proportionally to the function f(n).
• Big Omega (Ω):
Big Omega notation describes the lower bound or best-case scenario of an algorithm's time or
space complexity. It represents the minimum amount of resources an algorithm requires
concerning the input size.
If an algorithm has a time complexity of Ω(f(n)), it means that the running time of the algorithm
grows at least proportionally to the function f(n).
• Theta (Θ):
Theta notation represents both the upper and lower bounds of an algorithm's time or space
complexity. It provides a tight asymptotic bound, indicating that the running time or space
requirements of the algorithm grow at the same rate as the function f(n).
If an algorithm has a time complexity of Θ(f(n)), it means that the running time of the algorithm
grows at the same rate as the function f(n), both in the worst and best cases.

➢ Big O provides an upper bound, Big Omega provides a lower bound, and Theta provides a
tight bound on the growth rate of an algorithm's time or space complexity. These notations
are essential for analyzing and comparing the efficiency of different algorithms.
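
For reference, the standard formal definitions behind these notations can be stated as follows (c, c1, c2, and n0 denote positive constants):

f(n) = O(g(n))  if there exist c > 0 and n0 such that f(n) <= c * g(n) for all n >= n0.
f(n) = Ω(g(n))  if there exist c > 0 and n0 such that f(n) >= c * g(n) for all n >= n0.
f(n) = Θ(g(n))  if f(n) = O(g(n)) and f(n) = Ω(g(n)), i.e. c1 * g(n) <= f(n) <= c2 * g(n) for all n >= n0.

For example, f(n) = 3n^2 + 5n is O(n^2), Ω(n^2), and therefore Θ(n^2).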

Q11. What is Queue? Why is it known as FIFO? Write an algorithm to insert and delete an
element from a simple Queue.
Ans. A queue is a linear data structure that follows the First-In-First-Out (FIFO) principle. In a
queue, elements are added at the rear (enqueue operation), and elements are removed from the
front (dequeue operation). The element that has been in the queue the longest is the first one to be
removed.
FIFO Principle: The FIFO (First-In-First-Out) principle means that the element that is added first
to the queue is the one that will be removed first. It mimics the behavior of a physical queue or
line, where the first person to join the line is the first one to be served.
Algorithm to Insert and Delete an Element from a Simple Queue:
Queue: a linear data structure with operations enqueue and dequeue.

Operation: Enqueue (Insert)


1. Check if the queue is full.
- If the queue is full, return an overflow error.
2. Increment the rear pointer.
3. Add the new element to the position pointed by the rear.

Operation: Dequeue (Delete)


1. Check if the queue is empty.
- If the queue is empty, return an underflow error.
2. Get the element at the front of the queue.
3. Increment the front pointer.
4. If the front becomes greater than the rear, reset both front and rear to -1 (indicating an empty
queue).

Pseudocode:
Queue:
- Initialize front and rear pointers to -1.

Enqueue(element):
1. if (rear == MAX_SIZE - 1)
return "Queue Overflow"
2. else if (front == -1 && rear == -1)
front = rear = 0
3. else
rear = rear + 1
4. queue[rear] = element
5. return "Element enqueued successfully"

Dequeue():
1. if (front == -1)
       return "Queue Underflow"
2. element = queue[front]
3. if (front == rear)
       front = rear = -1   (the queue is now empty)
4. else
       front = front + 1
5. return element

In this pseudocode, MAX_SIZE is the maximum size of the queue. The enqueue operation adds
an element at the rear, and the dequeue operation removes an element from the front. The front
and rear pointers are adjusted accordingly.
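
A minimal array-based implementation of this pseudocode in C (MAX_SIZE and the variable names are illustrative):

#include <stdio.h>

#define MAX_SIZE 100

int queue[MAX_SIZE];
int front = -1, rear = -1;

// Enqueue: insert an element at the rear; report overflow if the array is full.
void enqueue(int element) {
    if (rear == MAX_SIZE - 1) {
        printf("Queue Overflow\n");
        return;
    }
    if (front == -1)             // first element of an empty queue
        front = 0;
    queue[++rear] = element;
}

// Dequeue: remove the element at the front; return -1 and report underflow if empty.
int dequeue(void) {
    if (front == -1) {
        printf("Queue Underflow\n");
        return -1;
    }
    int element = queue[front];
    if (front == rear)           // the queue becomes empty
        front = rear = -1;
    else
        front++;
    return element;
}

int main(void) {
    enqueue(10);
    enqueue(20);
    enqueue(30);
    printf("%d\n", dequeue());   // prints 10 (first in, first out)
    printf("%d\n", dequeue());   // prints 20
    return 0;
}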

Q12. Explain Breadth First Search traversal and Depth First Search traversal of Graph
using an example.
Ans. Example Graph:
Consider the following undirected graph:

A -- B
| |
C -- D -- E

• Breadth-First Search (BFS):


BFS explores a graph level by level, visiting all the neighbors of a node before moving on to the
next level.
• Algorithm for BFS:
Start with an initial node.
Enqueue the initial node.
Dequeue a node, visit it, and enqueue its unvisited neighbors.
Repeat until the queue is empty.
• BFS Traversal:
Starting from node A and marking each node as visited when it is enqueued:
1. Enqueue A (Queue: A).
2. Dequeue A, visit it, and enqueue its unvisited neighbours B and C (Queue: B, C).
3. Dequeue B, visit it, and enqueue its unvisited neighbour D (Queue: C, D).
4. Dequeue C, visit it; its neighbours A and D have already been visited or enqueued (Queue: D).
5. Dequeue D, visit it, and enqueue its unvisited neighbour E (Queue: E).
6. Dequeue E, visit it; the queue is now empty.
The BFS traversal order is A, B, C, D, E.

• Depth-First Search (DFS):


DFS explores a graph by going as deep as possible along each branch before backtracking.
• Algorithm for DFS:
Start with an initial node.
Visit the node and mark it as visited.
Recursively visit all unvisited neighbors.
• DFS Traversal:
Starting from node A (visiting D's neighbours in the order E, then C):
1. Visit A and mark it as visited.
2. Move to A's unvisited neighbour B and visit it.
3. From B, move to its unvisited neighbour D and visit it.
4. From D, move to its unvisited neighbour E and visit it; E has no unvisited neighbours, so
backtrack to D.
5. From D, visit its remaining unvisited neighbour C.
6. All nodes are now visited.
The DFS traversal order is A, B, D, E, C.
➢ BFS explores the graph level by level, while DFS explores the graph by going as deep as
possible before backtracking. The order in which nodes are visited may vary based on the
specific implementation details.
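
A minimal BFS sketch in C for the example graph, assuming an adjacency-matrix representation and the vertex numbering A=0, B=1, C=2, D=3, E=4 (the names here are illustrative):

#include <stdio.h>

#define V 5   // vertices: A=0, B=1, C=2, D=3, E=4

// Breadth-first search from 'start' over an adjacency matrix.
void bfs(int adj[V][V], int start) {
    int visited[V] = {0};
    int queue[V], front = 0, rear = 0;

    visited[start] = 1;
    queue[rear++] = start;            // enqueue the start vertex

    while (front < rear) {
        int u = queue[front++];       // dequeue
        printf("%c ", 'A' + u);       // visit

        for (int v = 0; v < V; v++) { // enqueue all unvisited neighbours
            if (adj[u][v] && !visited[v]) {
                visited[v] = 1;
                queue[rear++] = v;
            }
        }
    }
    printf("\n");
}

int main(void) {
    // Edges of the example graph: A-B, A-C, B-D, C-D, D-E.
    int adj[V][V] = {
        {0, 1, 1, 0, 0},
        {1, 0, 0, 1, 0},
        {1, 0, 0, 1, 0},
        {0, 1, 1, 0, 1},
        {0, 0, 0, 1, 0}
    };
    bfs(adj, 0);   // prints: A B C D E
    return 0;
}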

Q13. What is a Circular Linked List? State the advantages and disadvantages of a Circular
Linked List over Doubly Linked Lists and Singly Linked Lists. Also write the advantages of a
Linked List over an Array.
Ans. A circular linked list is a variation of a linked list in which the last node of the list points back
to the first node, forming a loop or circle. In a circular linked list, there is no NULL at the end;
instead, the last node points back to the first node.
Advantages of Circular Linked List:
• Efficient Operations on Both Ends: Since the last node points to the first node, operations
like insertion and deletion at both ends (front and rear) are more efficient compared to a
singly linked list.
• Traversal from Any Node: It's easy to traverse the entire list starting from any node, as
there is no NULL indicating the end of the list.
Disadvantages of Circular Linked List:
• Complexity: The implementation of a circular linked list is more complex than that of a
singly linked list, especially in handling edge cases and avoiding infinite loops.
• Memory Overhead: The extra link from the last node to the first node introduces
additional memory overhead.
Advantages of Circular Linked List Over Doubly Linked List:
• Space Efficiency: Circular linked lists can be more space-efficient than doubly linked lists
because they only need one pointer to the next node instead of two.
• Reachability from Any Node: Although a circular singly linked list only moves forward, any
node can still be reached from any other node by continuing around the circle, without the
second pointer per node that a doubly linked list requires.
Disadvantage of Circular Linked List Over Doubly Linked List:
• Complexity of Operations: Operations like insertion and deletion are more complex in
circular linked lists compared to doubly linked lists due to the need to update the next
pointer of the previous node and the previous pointer of the next node.
Advantages of Circular Linked List Over Singly Linked List:
• Efficient Operations at Both Ends: Similar to the advantage over doubly linked lists,
circular linked lists support efficient operations at both ends.
• Traversal from Any Node: Traversing the circular linked list from any node is more
straightforward compared to a singly linked list.
Advantage of Linked List Over an Array:
• Dynamic Size: Linked lists can easily grow or shrink in size during program execution,
whereas arrays have a fixed size determined during their declaration.
• Efficient Insertion and Deletion: Insertion and deletion operations in a linked list can be
more efficient than in an array, especially for large datasets, as they involve changing
pointers rather than shifting elements.
• No Wasted Memory: Linked lists use memory efficiently by allocating memory only
when needed. In contrast, arrays may have unused allocated space.
• Ease of Implementation: Implementing certain data structures and algorithms can be
simpler with linked lists compared to arrays.
However, it's essential to note that the choice between linked lists and arrays depends on the
specific requirements of the application, as each has its own advantages and disadvantages.

Q14. Discuss following with reference to trees.


• Height of the tree.
• Complete Binary Tree.
• Expression tree.
• Sibling.
• Full Binary Tree.
Ans. 1. Height of the Tree:
In a tree data structure, the height of a node is the length of the longest path from the node to a
leaf. The height of the tree is the height of the root node. It is commonly used to measure the
efficiency of certain tree operations. (A short recursive C function for computing height appears
after these definitions.)
2. Complete Binary Tree:
A complete binary tree is a binary tree in which all levels, except possibly the last, are completely
filled, and all nodes are as left as possible. In other words, every level of the tree is filled from left
to right, and the last level is filled from left to right as much as possible.
3. Expression Tree:
An expression tree is a binary tree in which each leaf node represents an operand, and each internal
node represents an operator. Expression trees are used to represent mathematical expressions in a
tree form, making it easier to evaluate expressions and perform operations.
4. Sibling:
In a tree, siblings are nodes that share the same parent. For example, in a binary tree, if two nodes
have the same parent, they are considered siblings. Sibling nodes are at the same level in the tree.
5. Full Binary Tree:
A full binary tree is a binary tree in which every node has either 0 or 2 children. In other words,
every node in a full binary tree either has no children (a leaf node) or has two children. It is also
known as a proper binary tree.
These concepts are fundamental in understanding and working with tree structures in computer
science and data structures. Each concept has specific properties and use cases that are relevant in
different contexts.
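
As mentioned under the first definition, a minimal recursive C sketch for computing the height of a binary tree (here height is counted in edges, so a single node has height 0; the names are illustrative):

#include <stdio.h>

struct TreeNode {
    int data;
    struct TreeNode *left, *right;
};

// Height in edges: an empty tree has height -1, a single node has height 0.
int treeHeight(struct TreeNode *node) {
    if (node == NULL)
        return -1;
    int lh = treeHeight(node->left);
    int rh = treeHeight(node->right);
    return 1 + (lh > rh ? lh : rh);
}

int main(void) {
    // A tiny tree: 1 has children 2 and 3, and 2 has a left child 4.
    struct TreeNode n4 = {4, NULL, NULL};
    struct TreeNode n3 = {3, NULL, NULL};
    struct TreeNode n2 = {2, &n4, NULL};
    struct TreeNode n1 = {1, &n2, &n3};
    printf("Height = %d\n", treeHeight(&n1));   // Height = 2
    return 0;
}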

Q15. Explain the various representation of graph with example in detail.


Ans. Graphs can be represented in various ways, and the choice of representation depends on the
type of graph and the operations to be performed on it. The two most common representations are
the Adjacency Matrix and the Adjacency List. Let's discuss each in detail:
1. Adjacency Matrix:
In an adjacency matrix, a 2D array is used to represent a graph. The rows and columns of the
matrix represent the vertices of the graph, and the values in the matrix indicate the presence or
absence of edges between vertices.
Example:
Consider the following undirected graph:

A
/ \
B---C

The adjacency matrix for this graph is:


A B C
A 0 1 1
B 1 0 1
C 1 1 0

Here, a '1' in the matrix indicates the presence of an edge between the corresponding vertices, and
'0' indicates no edge.
Pros:
• Easy to implement and understand.
• Suitable for dense graphs (graphs with many edges).
Cons:
• Consumes more space for sparse graphs.
• Inefficient for graphs with a large number of vertices and few edges.

2. Adjacency List:
In an adjacency list, each vertex maintains a list of its neighboring vertices. This can be
implemented using an array of lists or a hash table.
Example:
Consider the same undirected graph:

A
/ \
B---C

The adjacency list for this graph is:

A: B, C
B: A, C
C: A, B

Here, each vertex is associated with a list of its neighboring vertices.


Pros:
• Efficient for sparse graphs (graphs with fewer edges).
• Consumes less space compared to the adjacency matrix.
• Suitable for dynamic data structures.
Cons:
• Traversing edges takes longer in comparison to the adjacency matrix.

3. Incidence Matrix:
An incidence matrix represents a graph with one row per vertex and one column per edge. An
entry is 1 if the vertex in that row is an endpoint of the edge in that column, and 0 otherwise
(for directed graphs, +1/-1 conventions are often used to distinguish the tail and head of an
edge).
Example:
Consider the graph with vertices A, B, C and edges e1 = (A, B), e2 = (A, C), e3 = (B, C):

A---B
 \ /
  C

The incidence matrix for this graph is:

     e1  e2  e3
A     1   1   0
B     1   0   1
C     0   1   1

Here, '1' in the matrix indicates that the corresponding vertex is incident to the corresponding edge.
4. Edge List:
In an edge list, the graph is represented as a list of edges, where each edge is represented by a pair
of vertices.
Example:
For the graph:

A
/ \
B---C

The edge list is:


[(A, B), (A, C), (B, C)]

Here, each tuple represents an edge in the graph.


Pros:
• Simple representation.
• Efficient for graphs with a small number of edges.
Cons:
• Inefficient for operations requiring adjacency information.
These representations offer different trade-offs in terms of space and time complexity, and the
choice depends on the specific requirements and characteristics of the graph being represented.
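
A minimal adjacency-list sketch in C for the triangle graph used in the examples above (vertices A=0, B=1, C=2; the array-of-linked-lists layout and names are illustrative):

#include <stdio.h>
#include <stdlib.h>

#define V 3   // vertices: A=0, B=1, C=2

struct AdjNode {
    int vertex;
    struct AdjNode *next;
};

// Prepend 'dest' to the adjacency list of 'src'.
void addEdge(struct AdjNode *adj[], int src, int dest) {
    struct AdjNode *node = malloc(sizeof *node);
    node->vertex = dest;
    node->next = adj[src];
    adj[src] = node;
}

int main(void) {
    struct AdjNode *adj[V] = {NULL, NULL, NULL};

    // Undirected edges A-B, A-C, B-C: each edge is added in both directions.
    addEdge(adj, 0, 1); addEdge(adj, 1, 0);
    addEdge(adj, 0, 2); addEdge(adj, 2, 0);
    addEdge(adj, 1, 2); addEdge(adj, 2, 1);

    for (int v = 0; v < V; v++) {
        printf("%c:", 'A' + v);
        for (struct AdjNode *p = adj[v]; p != NULL; p = p->next)
            printf(" %c", 'A' + p->vertex);
        printf("\n");
    }
    return 0;
}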
