

1.1 Introduction to Data Structures and Algorithms

A data structure is a particular way of organizing data in a computer so that it can be used effectively. Depending on your requirements and project, it is important to choose the right data structure. For example, if you want to store data sequentially in memory, you can go for the Array data structure.

A data structure is a storage that is used to store and organize data. It is a way of arranging data on a computer so that it can be accessed and updated efficiently.

What is an Algorithm?
In computer programming terms, an algorithm is a set of well-defined instructions to solve a particular problem. It takes a set of inputs and produces the desired output. For example,
An algorithm to add two numbers:
1. Take two number inputs
2. Add numbers using the + operator
3. Display the result
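The three steps above translate almost directly into code. Below is a minimal C++ sketch; the function name add is ours for illustration:

```cpp
// Step 1: the two number inputs arrive as parameters a and b.
// Step 2: add the numbers using the + operator.
// Step 3: displaying the result is left to the caller.
int add(int a, int b) {
    return a + b;
}
```

For example, add(7, 5) returns 12.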
Qualities of Good Algorithms
 Input and output should be defined precisely.
 Each step in the algorithm should be clear and unambiguous.
 Algorithms should be most effective among many different ways to solve a problem.
 An algorithm shouldn't include computer code. Instead, the algorithm should be written in
such a way that it can be used in different programming languages.
Why Learn Data Structures and Algorithms? (programiz.com)
1.2 Data Structures: Definition and Types
Types of Data Structure
Basically, data structures are divided into two categories:

1. Linear data structure


In linear data structures, the elements are arranged in sequence one after the
other. Since the elements are arranged in a particular order, they are easy to
implement.
a. Array Data Structure
In an array, elements are arranged in contiguous memory. All the
elements of an array are of the same type. And, the type of elements that
can be stored in the form of arrays is determined by the programming
language.
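As a small sketch of this contiguity (the helper name elementGap is ours), consecutive elements of an int array are always exactly sizeof(int) bytes apart:

```cpp
#include <cstddef>

// Byte distance between two consecutive elements of an int array.
// Because array elements occupy contiguous memory, this distance
// is always exactly sizeof(int).
std::ptrdiff_t elementGap(const int* a) {
    return reinterpret_cast<const char*>(a + 1) -
           reinterpret_cast<const char*>(a);
}
```

This contiguity is what makes constant-time access by index possible: the address of a[i] is just the base address plus i * sizeof(int).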

b. Stack Data Structure


In the stack data structure, elements are stored following the LIFO principle. That is, the
last element stored in a stack will be removed first.
It works just like a pile of plates where the last plate kept on the pile will be
removed first.
c. Queue Data Structure
Unlike a stack, the queue data structure works on the FIFO principle, where the first
element stored in the queue will be removed first.

It works just like a queue of people at a ticket counter, where the first person
in the queue gets the ticket first.

https://www.programiz.com/dsa/data-structure-types

2. Non-linear data structure

Unlike linear data structures, elements in non-linear data structures are not arranged in any
sequence. Instead, they are arranged in a hierarchical manner where one element is
connected to one or more elements.

Non-linear data structures are further divided into graph and tree based data structures.
a. Graph Data Structure

In the graph data structure, each node is called a vertex, and each vertex is connected to other
vertices through edges.


[Figure: Graph data structure example]

Popular Graph Based Data Structures:


 Spanning Tree and Minimum Spanning Tree
 Strongly Connected Components
 Adjacency Matrix
 Adjacency List
b. Trees Data Structure

Similar to a graph, a tree is also a collection of vertices and edges. However, in a tree
there can be only one path between any two vertices.

[Figure: Tree data structure example]

Popular Tree-based Data Structures


 Binary Tree
 Binary Search Tree
 AVL Tree
 B-Tree
 B+ Tree
 Red-Black Tree

Topics:

Array
Linked List
Stack
Queue
Binary Tree
Binary Search Tree
Heap
Hashing
Graph
Matrix
Misc
Advanced Data Structure

Linear Vs Non-linear Data Structures

Linear Data Structures:
o The data items are arranged in sequential order, one after the other.
o All the items are present on a single layer.
o It can be traversed in a single run. That is, if we start from the first element, we can traverse all the elements sequentially in a single pass.
o The memory utilization is not efficient.
o The time complexity increases with the data size.
o Example: Arrays, Stack, Queue

Non-Linear Data Structures:
o The data items are arranged in non-sequential order (hierarchical manner).
o The data items are present at different layers.
o It requires multiple runs. That is, if we start from the first element, it might not be possible to traverse all the elements in a single pass.
o Different structures utilize memory in different efficient ways depending on the need.
o The time complexity remains the same.
o Example: Tree, Graph, Map

Assignment 1
Write a case study about the application of binary tree and binary search tree.
1. Introduction
2. Advantages of binary tree and binary search tree
3. Uses of binary tree and binary search tree
4. Literature review about binary tree and binary search tree
5. Findings about binary tree and binary search tree (conclusion)
6. References
Make a presentation slide about this case study and demonstrate it in a class
presentation.
Assignment 2
Read and cite any three articles, journals, or books about dynamic memory.

1.3 Abstract Data Type


An Abstract Data Type (ADT) is a type (or class) for objects whose behavior is defined by a
set of values and a set of operations. The definition of an ADT only mentions what
operations are to be performed, not how these operations will be implemented. It
does not specify how data will be organized in memory or what algorithms will be
used for implementing the operations. It is called “abstract” because it gives an
implementation-independent view. The process of providing only the essentials and
hiding the details is known as abstraction.
The user of a data type does not need to know how that data type is implemented. For
example, we have been using primitive data types like int, float, and char knowing only
what operations can be performed on them, without any idea of how they are
implemented. So a user only needs to know what a data type can do, not how it is
implemented. Think of an ADT as a black box which hides the inner structure and
design of the data type. Now we'll define three ADTs, namely List ADT, Stack ADT, and
Queue ADT.

List ADT
The data is generally stored in key sequence in a list which has a head structure
consisting of count, pointers and address of compare function needed to compare the
data in the list.

The data node contains the pointer to a data structure and a self-referential
pointer which points to the next node in the list.

//List ADT Type Definitions


typedef struct node
{
    void *DataPtr;
    struct node *link;
} Node;

typedef struct
{
    int count;
    Node *pos;
    Node *head;
    Node *rear;
    int (*compare) (void *argument1, void *argument2);
} LIST;
In object-oriented languages, a class plays the same role: users of the Triangle class below call perimeter() and scale() without knowing how the sides are stored.

class Triangle {
    double a;
    double b;
    double c;

public:
    Triangle(double a_in, double b_in, double c_in);

    double perimeter() const {
        return this->a + this->b + this->c;
    }

    void scale(double s) {
        this->a *= s;
        this->b *= s;
        this->c *= s;
    }
};
Stack ADT

Stack is a linear data structure that follows a particular order in which the operations are
performed. The order may be LIFO(Last In First Out) or FILO(First In Last Out).
Mainly the following four basic operations are performed in the stack:
 Push: Adds an item in the stack. If the stack is full, then it is said to be an Overflow
condition.
 Pop: Removes an item from the stack. The items are popped in the reversed order in
which they are pushed. If the stack is empty, then it is said to be an Underflow
condition.
 Peek or Top: Returns the top element of the stack.
 isEmpty: Returns true if the stack is empty, else false.
How to understand a stack practically?

There are many real-life examples of a stack. Consider the simple example of plates
stacked over one another in a canteen. The plate which is at the top is the first one to be
removed, i.e. the plate which has been placed at the bottommost position remains in the
stack for the longest period of time. So, it can be simply seen to follow the LIFO/FILO
order.
Time Complexities of operations on stack:
push(), pop(), isEmpty() and peek() all take O(1) time. We do not run any loop in any of
these operations.

Applications of stack:
 Balancing of symbols
 Infix to Postfix /Prefix conversion
 Redo-undo features at many places like editors, photoshop.
 Forward and backward feature in web browsers
 Used in many algorithms like Tower of Hanoi, tree traversals, stock span
problem, histogram problem.
 Backtracking is one of the algorithm design techniques. Some examples of
backtracking are the Knight's Tour problem, the N-Queen problem, finding your way
through a maze, and games like chess or checkers. In all these problems we explore one
path; if that path is not fruitful, we come back to the previous state and try another
path. To get back from the current state, we need to store the previous state, and for
that purpose we need a stack.
 In Graph Algorithms like Topological Sorting and Strongly Connected Components
 In memory management, any modern computer uses a stack as the primary
management of running programs. Each program running in a computer
system has its own memory allocations.
 String reversal is also another application of stack. Here one by one each character
gets inserted into the stack. So the first character of the string is on the bottom of the
stack and the last element of a string is on the top of the stack. After Performing the
pop operations on the stack we get a string in reverse order.
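The string-reversal application above can be sketched with the standard std::stack; each character is pushed, then popped back in LIFO order (the helper name reverseWithStack is ours):

```cpp
#include <stack>
#include <string>

// Reverse a string using a stack: characters are pushed one by one,
// so the first character sits at the bottom and the last on top;
// popping then yields the string in reverse order (LIFO).
std::string reverseWithStack(const std::string& s) {
    std::stack<char> st;
    for (char c : s)
        st.push(c);

    std::string out;
    while (!st.empty()) {
        out += st.top();  // last pushed comes out first
        st.pop();
    }
    return out;
}
```

For example, reverseWithStack("stack") returns "kcats".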
Implementation:
There are two ways to implement a stack:
 Using array
 Using linked list

/* C++ program to implement basic stack operations */
#include <bits/stdc++.h>
using namespace std;

#define MAX 1000

class Stack {
    int top;

public:
    int a[MAX]; // Maximum size of Stack

    Stack() { top = -1; }
    bool push(int x);
    int pop();
    int peek();
    bool isEmpty();
};

bool Stack::push(int x)
{
    if (top >= (MAX - 1)) {
        cout << "Stack Overflow";
        return false;
    }
    else {
        a[++top] = x;
        cout << x << " pushed into stack\n";
        return true;
    }
}

int Stack::pop()
{
    if (top < 0) {
        cout << "Stack Underflow";
        return 0;
    }
    else {
        int x = a[top--];
        return x;
    }
}

int Stack::peek()
{
    if (top < 0) {
        cout << "Stack is Empty";
        return 0;
    }
    else {
        int x = a[top];
        return x;
    }
}

bool Stack::isEmpty()
{
    return (top < 0);
}

// Driver program to test above functions
int main()
{
    Stack s;
    s.push(10);
    s.push(20);
    s.push(30);
    cout << s.pop() << " Popped from stack\n";

    // print all elements in stack
    cout << "Elements present in stack : ";
    while (!s.isEmpty()) {
        // print top element in stack
        cout << s.peek() << " ";
        // remove top element in stack
        s.pop();
    }

    return 0;
}

Queue ADT
A Queue is a linear structure which follows a particular order in which the operations are
performed. The order is First In First Out (FIFO). A good example of a queue is any queue of
consumers for a resource, where the consumer that came first is served first. The difference
between stacks and queues lies in how elements are removed: a stack removes the most
recently added element, while a queue removes the least recently added one.
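A minimal sketch of this FIFO behavior using the standard std::queue (the helper name serveNext is ours for illustration):

```cpp
#include <queue>
#include <string>

// Serve the consumer at the front of the queue: the one who
// arrived first (FIFO) is removed first.
std::string serveNext(std::queue<std::string>& q) {
    std::string served = q.front(); // first in ...
    q.pop();                        // ... first out
    return served;
}
```

If "Alice" joins the queue before "Bob", serveNext returns "Alice" first and "Bob" second.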
1.4 Dynamic Memory: malloc, calloc, realloc and free
C is a structured language and has some fixed rules for programming. One of them involves
changing the size of an array. An array is a collection of items stored at contiguous
memory locations.

Suppose the length (size) of an array is 9. But what if there is a requirement to change this
length (size)? For example:
 If there is a situation where only 5 elements are needed in this array, the remaining
4 indices are just wasting memory. So there is a requirement to lessen the length
(size) of the array from 9 to 5.
 Take another situation: there is an array of 9 elements with all 9 indices filled, but
3 more elements need to be entered into this array. In this case, 3 more indices are
required, so the length (size) of the array needs to be changed from 9 to 12.

This procedure is referred to as Dynamic Memory Allocation in C.

Therefore, C dynamic memory allocation can be defined as a procedure in which the
size of a data structure (like an array) is changed during runtime.

C provides some functions to achieve these tasks. There are 4 library functions
provided by C, defined under the <stdlib.h> header file, to facilitate dynamic memory
allocation in C programming. They are:
1. malloc()
2. calloc()
3. free()
4. realloc()
C malloc() method

The “malloc” or “memory allocation” method in C is used to dynamically allocate a single
large block of memory with the specified size. It returns a pointer of type void which can
be cast into a pointer of any form. It does not initialize the memory at execution time,
so each block initially holds a default garbage value.
Syntax:
ptr = (cast-type*) malloc(byte-size)
For Example:
ptr = (int*) malloc(100 * sizeof(int));

Since the size of int is typically 4 bytes, this statement will allocate 400 bytes of memory,
and the pointer ptr holds the address of the first byte of the allocated memory.

If space is insufficient, allocation fails and returns a NULL pointer.


#include <stdio.h>
#include <stdlib.h>

int main()
{
    // This pointer will hold the
    // base address of the block created
    int* ptr;
    int n, i;

    // Get the number of elements for the array
    printf("Enter number of elements:");
    scanf("%d", &n);
    printf("Entered number of elements: %d\n", n);

    // Dynamically allocate memory using malloc()
    ptr = (int*)malloc(n * sizeof(int));

    // Check if the memory has been successfully
    // allocated by malloc or not
    if (ptr == NULL) {
        printf("Memory not allocated.\n");
        exit(0);
    }
    else {
        // Memory has been successfully allocated
        printf("Memory successfully allocated using malloc.\n");

        // Get the elements of the array
        for (i = 0; i < n; ++i) {
            ptr[i] = i + 1;
        }

        // Print the elements of the array
        printf("The elements of the array are: ");
        for (i = 0; i < n; ++i) {
            printf("%d, ", ptr[i]);
        }

        // Deallocate the memory
        free(ptr);
    }
    return 0;
}
calloc()
The calloc() function in C++ allocates a block of memory for an array of objects and
initializes all its bits to zero. The calloc() function returns a pointer to the first byte of
the allocated memory block if the allocation succeeds.
Example

#include <cstdlib>
#include <iostream>
using namespace std;

int main() {
    int *ptr;
    ptr = (int *)calloc(5, sizeof(int));
    if (!ptr) {
        cout << "Memory Allocation Failed";
        exit(1);
    }

    cout << "Initializing values..." << endl << endl;
    for (int i = 0; i < 5; i++) {
        ptr[i] = i * 2 + 1;
    }

    cout << "Initialized values" << endl;
    for (int i = 0; i < 5; i++) {
        /* ptr[i] and *(ptr+i) can be used interchangeably */
        cout << *(ptr + i) << endl;
    }

    free(ptr);
    return 0;
}
Example
#include <cstdlib>
#include <iostream>
using namespace std;

int main() {
    int *ptr = (int *)calloc(0, 0);
    if (ptr == NULL) {
        cout << "Null pointer";
    } else {
        cout << "Address = " << ptr << endl;
    }
    free(ptr);
    return 0;
}
C++ free()

The free() function in C++ deallocates a block of memory previously allocated using the
calloc, malloc or realloc functions, making it available for further allocations.
The free() function does not change the value of the pointer, that is it still points to
the same memory location.
Example 1
#include <iostream>
#include <cstdlib>
using namespace std;

int main()
{
    int *ptr;
    ptr = (int*) malloc(5*sizeof(int));
    cout << "Enter 5 integers" << endl;

    for (int i = 0; i < 5; i++)
    {
        // *(ptr+i) can be replaced by ptr[i]
        cin >> *(ptr+i);
    }

    cout << endl << "User entered value" << endl;
    for (int i = 0; i < 5; i++)
    {
        cout << *(ptr+i) << " ";
    }

    free(ptr);

    /* Reading *(ptr+i) after free is undefined behavior;
       it typically prints garbage values. */
    cout << "Garbage Value" << endl;
    for (int i = 0; i < 5; i++)
    {
        cout << *(ptr+i) << " ";
    }
    return 0;
}

https://www.programiz.com/cpp-programming/library-function/cstdlib/free

C++ realloc()

The realloc() function in C++ reallocates a block of memory that was previously
allocated using the malloc(), calloc() or realloc() functions and not yet freed using
the free() function.

If the new size is zero, the value returned depends on the implementation of the
library. It may or may not return a null pointer.
Example 1: How realloc() function works?
#include <iostream>
#include <cstdlib>
using namespace std;

int main()
{
    float *ptr, *new_ptr;
    ptr = (float*) malloc(5*sizeof(float));
    if (ptr == NULL)
    {
        cout << "Memory Allocation Failed";
        exit(1);
    }

    /* Initializing memory block */
    for (int i = 0; i < 5; i++)
    {
        ptr[i] = i*1.5;
    }

    /* reallocating memory */
    new_ptr = (float*) realloc(ptr, 10*sizeof(float));
    if (new_ptr == NULL)
    {
        cout << "Memory Re-allocation Failed";
        exit(1);
    }

    /* Initializing re-allocated memory block */
    for (int i = 5; i < 10; i++)
    {
        new_ptr[i] = i*2.5;
    }

    cout << "Printing Values" << endl;
    for (int i = 0; i < 10; i++)
    {
        cout << new_ptr[i] << endl;
    }

    free(new_ptr);
    return 0;
}

1.5 Introduction to Algorithms: Definition and properties of algorithms


An algorithm is a process or a set of rules required to perform calculations or some
other problem-solving operations, especially by a computer. The formal definition of
an algorithm is that it contains a finite set of instructions which are carried out in
a specific order to perform a specific task. It is not the complete program or code; it
is just the solution (logic) of a problem, which can be represented either as an informal
description using a flowchart or as pseudocode.
The following are the characteristics of an algorithm:

o Input: An algorithm has some input values. We can pass zero or more inputs to an
algorithm.
o Output: We will get one or more outputs at the end of an algorithm.
o Unambiguity: An algorithm should be unambiguous which means that the
instructions in an algorithm should be clear and simple.
o Finiteness: An algorithm should have finiteness. Here, finiteness means that the
algorithm should contain a limited number of instructions, i.e., the instructions
should be countable.
o Effectiveness: An algorithm should be effective as each instruction in an algorithm
affects the overall process.
o Language independent: An algorithm must be language-independent so that the
instructions in an algorithm can be implemented in any of the languages with the
same output.

The following are the factors that we need to consider for designing an algorithm:

o Modularity: If any given problem can be broken down into small modules or small
steps, which is a basic property of an algorithm, it means that the problem is well
suited to an algorithmic solution.
o Correctness: An algorithm is correct when the given inputs produce the desired
output; this means the algorithm has been designed correctly.
o Maintainability: Here, maintainability means that the algorithm should be designed
in a very simple structured way so that when we redefine the algorithm, no major
change will be done in the algorithm.
o Functionality: It considers various logical steps to solve the real-world problem.
o Robustness: Robustness means how clearly an algorithm can define our
problem.
o User-friendly: If the algorithm is not user-friendly, then the designer will not be able
to explain it to the programmer.
o Simplicity: If the algorithm is simple then it is easy to understand.
o Extensibility: If any other algorithm designer or programmer wants to use your
algorithm then it should be extensible.

The major categories of algorithms are given below:

o Sort: Algorithm developed for sorting the items in a certain order.


o Search: Algorithm developed for searching the items inside a data structure.
o Delete: Algorithm developed for deleting the existing element from the data
structure.
o Insert: Algorithm developed for inserting an item inside a data structure.
o Update: Algorithm developed for updating the existing element inside a data
structure.
Algorithm Analysis
An algorithm can be analyzed at two levels: before creating the algorithm and after
creating it. The following are the two analyses of an algorithm:

o A priori analysis: This is the theoretical analysis of an algorithm, done before
implementing it. Factors such as processor speed, which have no bearing on the
implementation itself, are left out of consideration.
o A posteriori analysis: This is the practical analysis of an algorithm, achieved by
implementing the algorithm in a programming language. This analysis evaluates
how much running time and space the algorithm takes.

Algorithm Complexity
The performance of the algorithm can be measured in two factors:

o Time complexity: The time complexity of an algorithm is the amount of time


required to complete the execution. The time complexity of an algorithm is denoted
by the big O notation. Here, big O notation is the asymptotic notation to represent
the time complexity. The time complexity is mainly calculated by counting the
number of steps to finish the execution. Let's understand the time complexity
through an example.

sum = 0;
// Suppose we have to calculate the sum of n numbers.
for i = 1 to n
    sum = sum + i;
// When the loop ends, sum holds the sum of the n numbers.
return sum;
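The loop above performs one addition per iteration, so its step count grows with n, i.e. O(n). For comparison, the same sum can be computed in a constant number of steps with the closed-form formula n(n+1)/2. A quick sketch of both:

```cpp
// O(n): the number of additions grows linearly with n.
long sumLoop(long n) {
    long sum = 0;
    for (long i = 1; i <= n; ++i)
        sum = sum + i;
    return sum;
}

// O(1): a fixed number of operations, independent of n.
long sumFormula(long n) {
    return n * (n + 1) / 2;
}
```

Both return the same value; counting their steps is exactly how the time complexities O(n) and O(1) are obtained.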
o Space complexity: An algorithm's space complexity is the amount of space required
to solve a problem and produce an output. Similar to the time complexity, space
complexity is also expressed in big O notation.

For an algorithm, the space is required for the following purposes:

1. To store program instructions


2. To store constant values
3. To store variable values
4. To track the function calls, jumping statements, etc.

Auxiliary space: The extra space required by the algorithm, excluding the input size, is
known as an auxiliary space. The space complexity considers both the spaces, i.e., auxiliary
space, and space used by the input.

So,

Space complexity = Auxiliary space + Input size.
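As a sketch of the auxiliary-space idea: both functions below compute the same sum, but the iterative version needs only a fixed number of variables (O(1) auxiliary space), while the recursive version keeps one stack frame per call (O(n) auxiliary space):

```cpp
// Iterative: a constant number of variables regardless of n,
// so the auxiliary space is O(1).
long sumIter(long n) {
    long total = 0;
    for (long i = 1; i <= n; ++i)
        total += i;
    return total;
}

// Recursive: each call adds a frame to the call stack until n
// reaches 0, so the auxiliary space is O(n).
long sumRec(long n) {
    if (n == 0)
        return 0;
    return n + sumRec(n - 1);
}
```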

Types of Algorithms
The following are the types of algorithm:

o Search Algorithm
o Sort Algorithm

1.6 Asymptotic Notations: Big- O, Big-Ω and Big-θ

A data structure is a way of organizing the data efficiently, and that efficiency is
measured either in terms of time or space. So, the ideal data structure is a structure
that takes the least possible time to perform all its operations and uses the least
memory space. Our focus will be on finding the time complexity rather than the space
complexity, and by finding the time complexity, we can decide which data structure is
best for an algorithm.
The main question that arises is: on what basis should we compare the time
complexity of data structures? The time complexity can be compared based on the
operations performed on them.
How to find the Time Complexity or running time for performing the operations?
Measuring the actual running time is not practical at all. The running time to
perform any operation depends on the size of the input. Let's understand this statement
through a simple example.

Suppose we have an array of five elements, and we want to add a new element at the
beginning of the array. To achieve this, we need to shift each element one position to the
right, and suppose each shift takes one unit of time. There are five elements, so five units
of time are needed. If there are 1000 elements in the array, it takes 1000 units of time to
shift. This shows that the time complexity depends upon the input size.
Therefore, if the input size is n, then f(n) is a function of n that denotes the time
complexity.

How to calculate f(n)?


Let's look at a simple example.

f(n) = 5n² + 6n + 12

where f(n) denotes the number of instructions executed, which depends on the input size n.

When n = 1, f(1) = 5 + 6 + 12 = 23:

% of running time due to 5n² = (5/23) * 100 = 21.74%

% of running time due to 6n = (6/23) * 100 = 26.09%

% of running time due to 12 = (12/23) * 100 = 52.17%

From the above calculation, it is observed that most of the time is taken by the constant 12.
But since we have to find the growth rate of f(n), we cannot conclude that the maximum
amount of time will always be taken by 12. Let's take different values of n to find the
growth rate of f(n).

n        5n²       6n        12
1        21.74%    26.09%    52.17%
10       87.41%    10.49%    2.09%
100      98.79%    1.19%     0.02%
1000     99.88%    0.12%     0.0002%
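The table values can be reproduced with a short sketch that computes each term's share of f(n) = 5n² + 6n + 12 (the helper name share is ours):

```cpp
// Percentage of f(n) = 5n^2 + 6n + 12 contributed by one term.
// term 0 -> 5n^2, term 1 -> 6n, term 2 -> the constant 12.
double share(double n, int term) {
    double parts[3] = {5 * n * n, 6 * n, 12};
    double f = parts[0] + parts[1] + parts[2];
    return 100.0 * parts[term] / f;
}
```

For n = 1 the constant dominates (share(1, 2) ≈ 52.17%), while for n = 1000 the quadratic term accounts for about 99.88% of the running time, which is why only the highest-order term matters for the growth rate.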
Usually, the time required by an algorithm comes under three types:

Worst case: it defines the input for which the algorithm takes the longest time.

Average case: it is the time taken for typical program execution.

Best case: it defines the input for which the algorithm takes the least time.

Asymptotic Notations
The commonly used asymptotic notations for calculating the running time complexity
of an algorithm are given below:

o Big oh Notation (O)


o Omega Notation (Ω)
o Theta Notation (θ)

Big oh Notation (O)


o Big O notation is an asymptotic notation that measures the performance of an
algorithm by simply providing the order of growth of the function.
o This notation provides an upper bound on a function, which ensures that the function
never grows faster than this upper bound.

It is the formal way to express the upper boundary of an algorithm's running time. It
measures the worst-case time complexity, i.e., the longest amount of time the algorithm
can take to complete its operation.

For example:

If f(n) and g(n) are the two functions defined for positive integers,

then f(n) = O(g(n)) (read as "f(n) is big oh of g(n)" or "f(n) is of the order of g(n)") if there
exist constants c and n0 such that:

f(n) ≤ c.g(n) for all n ≥ n0

This implies that f(n) does not grow faster than g(n), or that g(n) is an upper bound on the
function f(n). In this case, we are calculating the growth rate of the function, which
eventually gives the worst-case time complexity, i.e., how badly an algorithm can perform.

Let's understand through examples

Example 1: f(n)=2n+3 , g(n)=n

Now, we have to check: is f(n) = O(g(n))?

To check f(n)=O(g(n)), it must satisfy the given condition:

f(n)<=c.g(n)
First, we will replace f(n) by 2n+3 and g(n) by n.

2n+3 <= c.n

Let's assume c=5, n=1 then

2*1+3<=5*1

5<=5

For n=1, the above condition is true.

If n=2

2*2+3<=5*2

7<=10

For n=2, the above condition is true.

For any value of n ≥ 1, the condition 2n+3 ≤ c.n is satisfied when c = 5. Therefore, for the
constants c = 5 and n0 = 1, 2n+3 ≤ c.n always holds. Since the condition is satisfied, f(n)
is big oh of g(n), i.e., f(n) grows linearly. Therefore, it follows that c.g(n) is an upper
bound of f(n).
The idea of using big O notation is to give an upper bound on a particular function, which
eventually leads to the worst-case time complexity. It provides an assurance that the
function does not suddenly behave in a quadratic or cubic fashion; it behaves in a linear
manner in the worst case.
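The argument above can be checked numerically. With the witness constants from the text (c = 5, n0 = 1), the inequality 2n + 3 ≤ 5n holds for every n ≥ 1 (a small sketch; the function name is ours):

```cpp
// Checks f(n) <= c*g(n) for f(n) = 2n + 3, g(n) = n, c = 5.
// By the definition of big O, f(n) = O(n) because this holds
// for all n >= n0 = 1.
bool bigOHolds(long n) {
    return 2 * n + 3 <= 5 * n;
}
```

Note that bigOHolds(0) is false (3 ≤ 0 fails), which is exactly why the definition only requires the bound beyond some n0.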

Omega Notation (Ω)


o It basically describes the best-case scenario which is opposite to the big o notation.
o It is the formal way to represent the lower bound of an algorithm's running time. It
measures the best amount of time an algorithm can possibly take to complete or the
best-case time complexity.
o It determines what is the fastest time that an algorithm can run.

If we require that an algorithm takes at least a certain amount of time, without giving an
upper bound, we use big-Ω notation, i.e., the Greek letter "omega". It is used to bound the
growth of the running time from below for large input sizes.

If f(n) and g(n) are the two functions defined for positive integers,

then f(n) = Ω(g(n)) (read as "f(n) is Omega of g(n)" or "f(n) is of the order of g(n)") if there
exist constants c and n0 such that:

f(n) ≥ c.g(n) for all n ≥ n0 and c > 0


Let's consider a simple example.

If f(n) = 2n+3, g(n) = n,

Is f(n)= Ω (g(n))?

It must satisfy the condition:

f(n)>=c.g(n)

To check the above condition, we first replace f(n) by 2n+3 and g(n) by n.

2n+3>=c*n

Suppose c=1

2n+3>=n (This equation will be true for any value of n starting from 1).

Therefore, it is proved that f(n) = 2n+3 is big omega of g(n) = n.

Here, the function g(n) is a lower bound of the function f(n) when the value of c is equal
to 1. Therefore, this notation gives the fastest running time. But we are usually less
interested in finding the fastest running time; we are more interested in the worst-case
scenario, because we want to know, for larger inputs, the worst time our algorithm can
take, so that we can make decisions in the further process.

Theta Notation (θ)


o The theta notation mainly describes the average case scenarios.
o It represents the realistic time complexity of an algorithm. Every time, an algorithm
does not perform worst or best, in real-world problems, algorithms mainly fluctuate
between the worst-case and best-case, and this gives us the average case of the
algorithm.
o Big theta is mainly used when the values of the worst case and the best case are the same.
o It is the formal way to express both the upper bound and lower bound of an
algorithm running time.

Let's understand the big theta notation mathematically:

Let f(n) and g(n) be functions of n, where n is the number of steps required to execute the
program. Then:

f(n) = θ(g(n))

The above condition is satisfied only when

c1.g(n)<=f(n)<=c2.g(n)

where the function is bounded by two limits, i.e., the upper and lower limits, and f(n)
comes in between. The condition f(n) = θ(g(n)) will be true if and only if c1.g(n) is less
than or equal to f(n) and c2.g(n) is greater than or equal to f(n).
Let's consider the same example where
f(n)=2n+3
g(n)=n

As c1.g(n) should be less than or equal to f(n), c1 has to be 1, whereas c2.g(n) should be
greater than or equal to f(n), so c2 is equal to 5. Then c1.g(n) is the lower limit of f(n)
while c2.g(n) is the upper limit of f(n).

c1.g(n)<=f(n)<=c2.g(n)

Replace g(n) by n and f(n) by 2n+3

c1.n <=2n+3<=c2.n

if c1 = 1, c2 = 5, n = 1

1*1 <= 2*1+3 <= 5*1

1 <= 5 <= 5 // for n=1, it satisfies the condition c1.g(n)<=f(n)<=c2.g(n)

If n = 2

1*2 <= 2*2+3 <= 5*2
2 <= 7 <= 10 // for n=2, it satisfies the condition c1.g(n)<=f(n)<=c2.g(n)

Therefore, we can say that for any value of n, it satisfies the condition
c1.g(n)<=f(n)<=c2.g(n). Hence, it is proved that f(n) is big theta of g(n). So, this is the
average-case scenario which provides the realistic time complexity.
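The same sandwich can be sketched in code: with the witness constants c1 = 1 and c2 = 5 from the example, c1·n ≤ 2n + 3 ≤ c2·n holds for every n ≥ 1 (the function name is ours):

```cpp
// Checks c1*g(n) <= f(n) <= c2*g(n) for f(n) = 2n + 3, g(n) = n,
// with witness constants c1 = 1 and c2 = 5, so f(n) = Theta(n).
bool thetaHolds(long n) {
    long f = 2 * n + 3;
    return 1 * n <= f && f <= 5 * n;
}
```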

2 Stacks
2.1 Definition, Stack as ADT

2.2 Stack Operations: Concept and Algorithms


2.3 Stack Applications
3 Queues
3.1 Definition, Queue as ADT
3.2 Queue Operations: Concept and Algorithms
Enqueue
Dequeue
Peek()
Isempty()
Isfull()

Enqueue Operation

Queues maintain two data pointers, front and rear. Therefore, its operations are
comparatively more difficult to implement than those of stacks.
The following steps should be taken to enqueue (insert) data into a queue −
Step 1 − Check if the queue is full.
Step 2 − If the queue is full, produce overflow error and exit.
Step 3 − If the queue is not full, increment rear pointer to point the next empty space.
Step 4 − Add data element to the queue location, where the rear is pointing.
Step 5 − return success.

Sometimes, we also check to see if a queue is initialized or not, to handle any unforeseen
situations.
Algorithm for enqueue operation
procedure enqueue(data)
step1: start to procedure enqueue s[3]={8,1,5};
step 2: if queue is full?
return or exit
step3: else
if queue is not full
increase rear place
step4: assign value enqueue
step5: to take any new space/place
step6:- end

if queue is full
return overflow
end if

rear ← rear + 1
queue*rear+ ← data
return true

end procedure
Implementation of enqueue() in C programming language −
Example
int enqueue(int data) {
   if(isfull())
      return 0;

   rear = rear + 1;
   queue[rear] = data;

   return 1;
}

Dequeue Operation

Accessing data from the queue is a process of two tasks − access the data where front is
pointing and remove the data after access. The following steps are taken to
perform dequeue operation −
Step 1 − Check if the queue is empty.
Step 2 − If the queue is empty, produce underflow error and exit.
Step 3 − If the queue is not empty, access the data where front is pointing.
Step 4 − Increment front pointer to point to the next available data element.
Step 5 − Return success.

Algorithm for dequeue operation


procedure dequeue
if queue is empty
return underflow
end if
data = queue[front]
front ← front + 1
return true
end procedure
Implementation of dequeue() in C programming language −
Example
int dequeue() {
if(isempty())
return 0;

int data = queue[front];


front = front + 1;

return data;
}
peek() operation
This function helps to see the data at the front of the queue. The algorithm of peek()
function is as follows −
Algorithm
begin procedure peek
return queue[front]
end procedure
Implementation of peek() function in C programming language −
Example
int peek() {
return queue[front];
}
isfull()
As we are using a single-dimensional array to implement the queue, we just check
whether the rear pointer has reached MAXSIZE - 1 to determine that the queue is full.
If we maintain the queue in a circular linked list, the algorithm will differ. Algorithm of
the isfull() function −
Algorithm
begin procedure isfull
if rear equals to MAXSIZE - 1
return true
else
return false
endif

end procedure
Implementation of isfull() function in C programming language −
Example
bool isfull() {
if(rear == MAXSIZE - 1)
return true;
else
return false;
}
isempty()
Algorithm of isempty() function −
Algorithm
begin procedure isempty

if front is less than MIN OR front is greater than rear


return true
else
return false
endif

end procedure
If the value of front is less than MIN (i.e. 0), it tells that the queue is not yet initialized,
and hence empty.
Here's the C programming code −
Example
bool isempty() {
if(front < 0 || front > rear)
return true;
else
return false;
}

3.3 Queue Applications


 Queues are widely used as waiting lists for a single shared resource like a printer, disk, or
CPU.
 Queues are used in the asynchronous transfer of data (where data is not transferred
at the same rate between two processes), e.g. pipes, file IO, sockets.
 Queues are used as buffers in many applications like MP3 media players, CD
players, etc.
 Queues are used to maintain the playlist in media players, in order to add and remove
songs from the playlist.
 Queues are used in operating systems for handling interrupts.

3.4 Linear vs Circular Queue


Linear Queue: A Linear Queue is generally referred to as Queue. It is a linear data
structure that follows the FIFO (First In First Out) order. A real-life example of a queue is
any queue of customers waiting to buy a product from a shop where the customer that
came first is served first. In Queue all deletions (dequeue) are made at the front and all
insertions (enqueue) are made at the rear end.

Circular Queue: A circular queue is just a variation of the linear queue in which the front and
rear ends are connected to each other, to reduce the space wastage of the linear queue
and make it more efficient.

Tabular difference between linear and circular queue:


1. Linear Queue: Arranges the data in a linear pattern.
   Circular Queue: Arranges the data in a circular order where the rear end is connected
   with the front end.

2. Linear Queue: The insertion and deletion operations are fixed, i.e. done at the rear
   and front end respectively.
   Circular Queue: Insertion and deletion are not fixed and can be done in any position.

3. Linear Queue: Requires more memory space.
   Circular Queue: Requires less memory space.

4. Linear Queue: The element added in the first position is going to be deleted first;
   the order of operations performed on any element is fixed, i.e. FIFO.
   Circular Queue: The order of operations performed on an element may change.

5. Linear Queue: It is inefficient in comparison to a circular queue.
   Circular Queue: It is more efficient in comparison to a linear queue.

6. Linear Queue: We can easily fetch out the peek value.
   Circular Queue: We cannot fetch out the peek value easily.

3.5 Circular Queue Operations: Concept and Algorithms


Prerequisite – Queues
Circular Queue is a linear data structure in which the operations are performed based on
FIFO (First In First Out) principle and the last position is connected back to the first
position to make a circle. It is also called ‘Ring Buffer’.

In a normal queue, we can insert elements until the queue becomes full. But once the queue
becomes full, we cannot insert the next element even if there is space at the front of the
queue.
Operations on Circular Queue:

 Front: Get the front item from queue.


 Rear: Get the last item from queue.
 enQueue(value) This function is used to insert an element into the circular queue. In a
circular queue, the new element is always inserted at Rear position.
1. Check whether queue is Full – Check ((rear == SIZE-1 && front == 0) || (rear ==
front-1)).
2. If it is full then display Queue is full. If queue is not full then, check if (rear == SIZE –
1 && front != 0) if it is true then set rear=0 and insert element.
 deQueue() This function is used to delete an element from the circular queue. In a
circular queue, the element is always deleted from front position.
1. Check whether queue is Empty means check (front==-1).
2. If it is empty then display Queue is empty. If queue is not empty then step 3
3. Check if (front==rear) if it is true then set front=rear= -1 else check if (front==size-
1), if it is true then set front=0 and return the element.

// C++ program for insertion and
// deletion in Circular Queue
#include<bits/stdc++.h>
using namespace std;

class Queue
{
// Initialize front and rear
int rear, front;

// Circular Queue
int size;
int *arr;
public:
Queue(int s)
{
front = rear = -1;
size = s;
arr = new int[s];
}

void enQueue(int value);


int deQueue();
void displayQueue();
};

/* Function to create Circular queue */


void Queue::enQueue(int value)
{
if ((front == 0 && rear == size-1) ||
((rear + 1) % size == front))
{
printf("\nQueue is Full");
return;
}

else if (front == -1) /* Insert First Element */


{
front = rear = 0;
arr[rear] = value;
}

else if (rear == size-1 && front != 0)


{
rear = 0;
arr[rear] = value;
}

else
{
rear++;
arr[rear] = value;
}
}

// Function to delete element from Circular Queue


int Queue::deQueue()
{
if (front == -1)
{
printf("\nQueue is Empty");
return INT_MIN;
}

int data = arr[front];


arr[front] = -1;
if (front == rear)
{
front = -1;
rear = -1;
}
else if (front == size-1)
front = 0;
else
front++;

return data;
}

// Function displaying the elements


// of Circular Queue
void Queue::displayQueue()
{
if (front == -1)
{
printf("\nQueue is Empty");
return;
}
printf("\nElements in Circular Queue are: ");
if (rear >= front)
{
for (int i = front; i <= rear; i++)
printf("%d ",arr[i]);
}
else
{
for (int i = front; i < size; i++)
printf("%d ", arr[i]);

for (int i = 0; i <= rear; i++)


printf("%d ", arr[i]);
}
}

/* Driver of the program */


int main()
{
Queue q(5);

// Inserting elements in Circular Queue


q.enQueue(14);
q.enQueue(22);
q.enQueue(13);
q.enQueue(-6);

// Display elements present in Circular Queue


q.displayQueue();

// Deleting elements from Circular Queue


printf("\nDeleted value = %d", q.deQueue());
printf("\nDeleted value = %d", q.deQueue());

q.displayQueue();
q.enQueue(9);
q.enQueue(20);
q.enQueue(5);

q.displayQueue();

q.enQueue(20);
return 0;
}
4 Recursion
Computer programming languages allow a module or function to call itself. This technique
is known as recursion.
a function calling itself:
void function(int value) {
if(value < 1)
return;
function(value - 1);

printf("%d ",value);
}
a function that calls another function which in turn calls it again:
void function2(int value2); /* forward declaration */

void function1(int value1) {
if(value1 < 1)
return;
function2(value1 - 1);
printf("%d ",value1);
}
void function2(int value2) {
function1(value2);
}

4.1 Definition, Recursion vs Iteration


Recursion:- The process in which a function calls itself directly or indirectly is called
recursion and the corresponding function is called a recursive function. Using a
recursive algorithm, certain problems can be solved quite easily.
Iteration:- The process in which a set of instructions is repeatedly executed until a
termination condition is met, typically using a loop.
Recursion vs Iteration

Property        | Recursion                          | Iteration
Definition      | Function calls itself.             | A set of instructions repeatedly executed.
Application     | For functions.                     | For loops.
Termination     | Through a base case, where there will be no further function call. | When the termination condition for the iterator ceases to be satisfied.
Usage           | Used when code size needs to be small and time complexity is not an issue. | Used when time complexity needs to be balanced against an expanded code size.
Code Size       | Smaller code size.                 | Larger code size.
Time Complexity | Very high (generally exponential). | Relatively lower (generally polynomial or logarithmic).

A program is called recursive when an entity calls itself, and iterative when there is a
loop.
// C++ program to find factorial of given number
#include<bits/stdc++.h>
using namespace std;

// ----- Recursion -----


// method to find factorial of given number
int factorialUsingRecursion(int n)
{
if (n == 0)
return 1;
// recursion call
return n * factorialUsingRecursion(n - 1);
}

// ----- Iteration -----


// Method to find the factorial of a given number
int factorialUsingIteration(int n)
{
int res = 1, i;

// using iteration
for (i = 2; i <= n; i++)
res *= i;
return res;
}

int main()
{
int num = 5;
cout << "Factorial of " << num <<
" using Recursion is: " <<
factorialUsingRecursion(num) << endl;

cout << "Factorial of " << num <<
" using Iteration is: " <<
factorialUsingIteration(num);

return 0;
}

4.3 Factorial, Fibonacci sequence, and TOH


Fibonacci sequence:-
The Fibonacci numbers are the numbers in the following integer sequence.
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, ……..
//Fibonacci Series using Recursion
#include<bits/stdc++.h>
using namespace std;

int fib(int n)
{
if (n <= 1)
return n;
return fib(n-1) + fib(n-2);
}

int main ()
{
int n = 9;
cout << fib(n);
getchar();
return 0;
}
TOH
Tower of Hanoi is a mathematical puzzle where we have three rods and n disks. The
objective of the puzzle is to move the entire stack to another rod, obeying the following
simple rules:
1. Only one disk can be moved at a time.
2. Each move consists of taking the upper disk from one of the stacks and placing it on top
of another stack i.e. a disk can only be moved if it is the uppermost disk on a stack.
3. No disk may be placed on top of a smaller disk.

// C++ recursive function to


// solve tower of hanoi puzzle
#include <bits/stdc++.h>
using namespace std;

void towerOfHanoi(int n, char from_rod,


char to_rod, char aux_rod)
{
if (n == 0)
{
return;
}
towerOfHanoi(n - 1, from_rod, aux_rod, to_rod);
cout << "Move disk " << n << " from rod " << from_rod <<
" to rod " << to_rod << endl;
towerOfHanoi(n - 1, aux_rod, to_rod, from_rod);
}

int main()
{
int n = 4; // Number of disks
towerOfHanoi(n, 'A', 'C', 'B'); // A, B and C are names of rods
return 0;
}

4.4 Applications and Efficiency of recursion


The application of recursion is vital in computer science:

 It is the backbone of AI.
 NP problems can't be solved in general, but they can be tackled using recursion up
to a certain extent (not completely) by limiting the depth of recursion.
 The most important data structure, the tree, is difficult to work with without recursion;
tree problems can be solved iteratively as well, but that is a much tougher task.
 Many well-known sorting algorithms (quicksort, merge sort, etc.) use recursion.
 Many puzzle games (chess, Candy Crush, etc.) make broad use of recursion.
 The uses of recursion are countless nowadays, because it is the backbone of
searching, which is one of the most important tasks in computing.

5 Linked lists
5.1 Definition, Linked List as ADT
A linked list is a sequence of data structures which are connected together via links. A linked
list is a sequence of links, each of which contains an item, along with a connection to another
link. The linked list is the second most-used data structure after the array. The following terms
are important to understand the concept of a linked list.
 Link − Each link of a linked list can store a data item called an element.
 Next − Each link of a linked list contains a link to the next link, called Next.
 LinkedList − A linked list contains the connection link to the first link, called First.

A linked list is an Abstract Data Type (ADT) that holds a collection of nodes which can be
accessed in a sequential way. When the nodes are connected with only the next pointer, the
list is called a singly linked list.
★ A linked list is a series of connected nodes, where each node is a data structure.
★ A linked list can grow or shrink in size as the program runs. This is possible
because the nodes in a linked list are dynamically allocated.
★ If new data need to be added to a linked list, the program simply allocates another
node and inserts it into the series.
★ If a particular piece of data needs to be removed from the linked list, the program
deletes the node containing that data.

★ Linked lists are among the simplest and most common data structures. They can be
used to implement other common abstract data types, including lists, stacks, queues,
and so on.

The Composition of a Linked List


★ The elements of a linked list are called the Nodes.
★ Each node consists of two fields: Data and Pointer.
★ Each node in a linked list contains one or more members that represent data.
★ The nodes stored in a linked list can be anything from primitive types such as
integers to more complex types like instances of classes.
★ In addition to the data, each node contains a pointer, which can point to another
node.

The pointer contains the address of the next node.

5.2 Types of Linked List


There are four key types of linked lists:

 Singly linked lists


 Doubly linked lists
 Circular linked lists
 Circular doubly linked lists

5.3 Basic operations in Singly Linked List:


creation, node insertion and deletion from beginning, end, and specified position
#include <stdio.h>
#include <stdlib.h>

struct node {
int value;
struct node *next;
};

void insert();
void display();
void delete();
int count();

typedef struct node DATA_NODE;

DATA_NODE *head_node, *first_node, *temp_node = 0, *prev_node, *next_node;


int data;
int main() {
int option = 0;

printf("Singly Linked List Example - All Operations\n");

while (option < 5) {

printf("\nOptions\n");
printf("1 : Insert into Linked List \n");
printf("2 : Delete from Linked List \n");
printf("3 : Display Linked List\n");
printf("4 : Count Linked List\n");
printf("Others : Exit()\n");
printf("Enter your option:");
scanf("%d", &option);
switch (option) {
case 1:
insert();
break;
case 2:
delete();
break;
case 3:
display();
break;
case 4:
count();
break;
default:
break;
}
}

return 0;
}
void insert() {
printf("\nEnter Element for Insert Linked List : \n");
scanf("%d", &data);

temp_node = (DATA_NODE *) malloc(sizeof (DATA_NODE));

temp_node->value = data;

if (first_node == 0) {
first_node = temp_node;
} else {
head_node->next = temp_node;
}
temp_node->next = 0;
head_node = temp_node;
fflush(stdin);
}

void delete() {
int countvalue, pos, i = 0;
countvalue = count();
temp_node = first_node;
printf("\nDisplay Linked List : \n");

printf("\nEnter Position for Delete Element : \n");


scanf("%d", &pos);

if (pos > 0 && pos <= countvalue) {


if (pos == 1) {
temp_node = temp_node -> next;
first_node = temp_node;
printf("\nDeleted Successfully \n\n");
} else {
while (temp_node != 0) {
if (i == (pos - 1)) {
prev_node->next = temp_node->next;
if(i == (countvalue - 1))
{
head_node = prev_node;
}
printf("\nDeleted Successfully \n\n");
break;
} else {
i++;
prev_node = temp_node;
temp_node = temp_node -> next;
}
}
}
} else
printf("\nInvalid Position \n\n");
}

void display() {
int count = 0;
temp_node = first_node;
printf("\nDisplay Linked List : \n");
while (temp_node != 0) {
printf("# %d # ", temp_node->value);
count++;
temp_node = temp_node -> next;
}
printf("\nNo Of Items In Linked List : %d\n", count);
}

int count() {
int count = 0;
temp_node = first_node;
while (temp_node != 0) {
count++;
temp_node = temp_node -> next;
}
printf("\nNo Of Items In Linked List : %d\n", count);
return count;
}

https://www.c-lang.thiyagaraaj.com/data-structures/linked-list/simple-singly-linked-
list-example-program-in-c
5.4 Linked List Implementation of Stack and Queue
Stack can be implemented using both, arrays and linked list. The limitation in case of array is
that we need to define the size at the beginning of the implementation. This makes our Stack
static. It can also result in "Stack overflow" if we try to add elements after the array is full. So, to
alleviate this problem, we use linked list to implement the Stack so that it can grow in real time.

First, we will create our Node class which will form our Linked List. We will be using this same
Node class to implement the Queue also in the later part of this article.

internal class Node
{
    internal int data;
    internal Node next;

    // Constructor to create a new node. Next is by default initialized as null
    public Node(int d)
    {
        data = d;
        next = null;
    }
}

Now, we will create our Stack class. We will define a pointer, top, and initialize it to null.
So, our LinkListStack class will be:

internal class LinkListStack
{
    Node top;

    public LinkListStack()
    {
        this.top = null;
    }
}

Push an element into Stack

Now, our Stack and Node class is ready. So, we will proceed to Push operation on Stack. We
will add a new element at the top of Stack.

Algorithm
 Create a new node with the value to be inserted.
 If the Stack is empty, set the next of the new node to null.
 If the Stack is not empty, set the next of the new node to top.
 Finally, increment the top to point to the new node.
The time complexity for Push operation is O(1). The method for Push will look like this.

internal void Push(int value)
{
    Node newNode = new Node(value);

    if (top == null)
        newNode.next = null;
    else
        newNode.next = top;

    top = newNode;
    Console.WriteLine("{0} pushed to stack", value);
}

Pop an element from Stack

We will remove the top element from Stack.

Algorithm
 If the Stack is empty, terminate the method as it is Stack underflow.
 If the Stack is not empty, increment top to point to the next node.
 Hence the element pointed by top earlier is now removed.
The time complexity for Pop operation is O(1). The method for Pop will be like following.

internal void Pop()
{
    if (top == null)
    {
        Console.WriteLine("Stack Underflow. Deletion not possible");
        return;
    }

    Console.WriteLine("Item popped is {0}", top.data);
    top = top.next;
}

Peek the element from Stack

The peek operation will always return the top element of Stack without removing it from Stack.

Algorithm
 If the Stack is empty, terminate the method as it is Stack underflow.
 If the Stack is not empty, return the element pointed by the top.

The time complexity for Peek operation is O(1). The Peek method will be like following.

internal void Peek()
{
    if (top == null)
    {
        Console.WriteLine("Stack Underflow.");
        return;
    }

    Console.WriteLine("{0} is on the top of Stack", this.top.data);
}

Uses of Stack
 Stack can be used to implement back/forward button in the browser.
 Undo feature in the text editors are also implemented using Stack.
 It is also used to implement recursion.
 Call and return mechanism for a method uses Stack.
 It is also used to implement backtracking.

Implementing Queue functionalities using Linked List

Similar to Stack, the Queue can also be implemented using both, arrays and linked list. But it
also has the same drawback of limited size. Hence, we will be using a Linked list to implement
the Queue.

The Node class will be the same as defined above in Stack implementation. We will define
LinkedListQueue class as below.

internal class LinkListQueue
{
    Node front;
    Node rear;

    public LinkListQueue()
    {
        this.front = this.rear = null;
    }
}

Here, we have taken two pointers, rear and front, to refer to the rear and the front end of the
Queue respectively, and will initialize them to null.

Enqueue of an Element

We will add a new element to our Queue from the rear end.

Algorithm
 Create a new node with the value to be inserted.
 If the Queue is empty, then set both front and rear to point to newNode.
 If the Queue is not empty, then set next of rear to the new node and the rear to point to
the new node.
The time complexity for Enqueue operation is O(1). The Method for Enqueue will be like the
following.

internal void Enqueue(int item)
{
    Node newNode = new Node(item);

    // If queue is empty, then new node is front and rear both
    if (this.rear == null)
    {
        this.front = this.rear = newNode;
    }
    else
    {
        // Add the new node at the end of queue and change rear
        this.rear.next = newNode;
        this.rear = newNode;
    }

    Console.WriteLine("{0} inserted into Queue", item);
}

Dequeue of an Element

We will delete the existing element from the Queue from the front end.
Algorithm
 If the Queue is empty, terminate the method.
 If the Queue is not empty, increment front to point to next node.
 Finally, check if the front is null, then set rear to null also. This signifies empty Queue.
The time complexity for Dequeue operation is O(1). The Method for Dequeue will be like
following.

internal void Dequeue()
{
    // If queue is empty, return NULL.
    if (this.front == null)
    {
        Console.WriteLine("The Queue is empty");
        return;
    }

    // Store previous front and move front one node ahead
    Node temp = this.front;
    this.front = this.front.next;

    // If front becomes NULL, then change rear also as NULL
    if (this.front == null)
    {
        this.rear = null;
    }

    Console.WriteLine("Item deleted is {0}", temp.data);
}

Uses of Queue
 CPU scheduling in operating systems uses queues. The processes ready to execute and the
requests for CPU resources wait in a queue, and the requests are served on a first-come,
first-served basis.
 Data buffers, physical memory storage used to temporarily store data while it is
being moved from one place to another, are also implemented using queues.

5.5 Concept of other types of Linked Lists


5.6 Applications of Linked List
A linked list is a linear data structure, in which the elements are not stored at contiguous
memory locations. The elements in a linked list are linked using pointers as shown in the
below
image:

Applications of linked list in computer science –


1. Implementation of stacks and queues
2. Implementation of graphs: the adjacency-list representation of graphs, which is the most
popular, uses a linked list to store the adjacent vertices.
3. Dynamic memory allocation: we use a linked list of free blocks.
4. Maintaining a directory of names
5. Performing arithmetic operations on long integers
6. Manipulation of polynomials by storing their coefficients in the nodes of a linked list
7. Representing sparse matrices

6 Trees
6.1 Concept and Definition: Concept of level, depth, number of nodes
A tree is a non-linear, hierarchical data structure consisting of a collection of
nodes such that each node of the tree stores a value and a list of references to
other nodes (the “children”). It consists of a central node, structural nodes, and
sub-nodes, which are connected via edges. We can also say that a tree data
structure has roots, branches, and leaves connected with one another.
A tree consists of a root, and zero or more subtrees T1, T2, … , Tk such that there
is an edge from the root of the tree to the root of each subtree.

Basic Terminologies In Tree Data Structure:


 Parent Node: The node which is a predecessor of a node is called the parent node
of that node. {2} is the parent node of {6, 7}.
 Child Node: The node which is the immediate successor of a node is called the child
node of that node. Examples: {6, 7} are the child nodes of {2}.
 Root Node: The topmost node of a tree, or the node which does not have any parent
node, is called the root node. {1} is the root node of the tree. A non-empty tree must
contain exactly one root node and exactly one path from the root to all other nodes of
the tree.
 Leaf Node or External Node: The nodes which do not have any child nodes are
called leaf nodes. {6, 14, 8, 9, 15, 16, 4, 11, 12, 17, 18, 19} are the leaf nodes of the
tree.
 Ancestor of a Node: Any predecessor nodes on the path from the root to that node are
called ancestors of that node. {1, 2} are the ancestor nodes of the node {7}.
 Descendant: Any successor node on the path from a leaf node to that node. {7,
14} are the descendants of the node {2}.
 Sibling: Children of the same parent node are called siblings. {8, 9, 10} are called
siblings.
 Level of a node: The count of edges on the path from the root node to that node.
The root node has level 0.
 Internal node: A node with at least one child is called an internal node.
 Neighbour of a Node: Parent or child nodes of that node are called neighbours of that
node.
 Subtree: Any node of the tree along with its descendants.
Properties of a Tree:
 Number of edges: An edge can be defined as the connection between two nodes. If
a tree has N nodes then it will have (N-1) edges. There is only one path from each
node to any other node of the tree.
 Depth of a node: The depth of a node is defined as the length of the path from the
root to that node. Each edge adds 1 unit of length to the path. So, it can also be
defined as the number of edges in the path from the root of the tree to the node.
 Height of a node: The height of a node can be defined as the length of the longest
path from the node to a leaf node of the tree.
 Height of the Tree: The height of a tree is the length of the longest path from the
root of the tree to a leaf node of the tree.
 Degree of a Node: The total count of subtrees attached to that node is called the
degree of the node. The degree of a leaf node must be 0. The degree of a tree is the
maximum degree of a node among all the nodes in the tree.
Some more properties are:
 Traversal of a tree is done by depth-first search and breadth-first search algorithms.
 It has no loops and no circuits.
 It has no self-loops.
 It is a hierarchical model.

6.2 Binary Tree and Binary Search Tree


Binary Tree Data Structure
A tree whose elements have at most 2 children is called a binary tree. Since each element
in a binary tree can have only 2 children, we typically name them the left and right
children.
Binary Search Tree is a node-based binary tree data structure which has the following properties:
 The left subtree of a node contains only nodes with keys lesser than the node’s key.
 The right subtree of a node contains only nodes with keys greater than the node’s key.
 The left and right subtree each must also be a binary search tree.

Difference between Binary Tree and Binary Search Tree:


1. Definition
   Binary Tree: a nonlinear data structure where each node can have at most two child nodes.
   Binary Search Tree: a node-based binary tree whose left and right subtrees are themselves
   binary search trees.
2. Types
   Binary Tree: full binary tree, complete binary tree, extended binary tree, and more.
   Binary Search Tree: AVL tree, splay tree, T-trees, and more.
3. Operations
   Binary Tree: unordered, hence slower in the process of insertion, deletion, and searching.
   Binary Search Tree: insertion, deletion, and searching of an element are faster due to its
   ordered characteristics.
4. Structure
   Binary Tree: there is no ordering in terms of how the nodes are arranged.
   Binary Search Tree: the left subtree has elements less than the node's element and the
   right subtree has elements greater than the node's element.
5. Data Representation
   Binary Tree: carried out in a hierarchical format.
   Binary Search Tree: carried out in an ordered format.
6. Duplicate Values
   Binary Tree: allows duplicate values.
   Binary Search Tree: does not allow duplicate values.
7. Speed
   Binary Tree: deletion, insertion, and searching are slower because it is unordered.
   Binary Search Tree: conducts deletion, insertion, and searching faster because of its
   ordered properties.
8. Complexity
   Binary Tree: time complexity is usually O(n).
   Binary Search Tree: time complexity is usually O(log n).
9. Application
   Binary Tree: used for fast retrieval of information and data lookup.
   Binary Search Tree: works well for element deletion, insertion, and searching.
10. Usage
   Binary Tree: serves as the foundation for implementing full binary trees, BSTs, perfect
   binary trees, and others.
   Binary Search Tree: utilized in the implementation of balanced binary search trees such
   as AVL trees, red-black trees, and so on.

6.3 Insertion, Deletion, and Traversal of BST


Traversal of BST
A binary search tree is traversed in exactly the same way a binary tree is traversed;
in other words, BST traversal is the same as binary tree traversal.
The various tree traversal techniques are preorder, inorder, and postorder traversal.
Consider the following binary search tree-

Now, let us write the traversal sequences for this binary search tree-
Preorder Traversal-
100 , 20 , 10 , 30 , 200 , 150 , 300
Inorder Traversal-
10 , 20 , 30 , 100 , 150 , 200 , 300
Postorder Traversal-
10 , 30 , 20 , 150 , 300 , 200 , 100

Insertion in a binary search tree:

To insert an element in a BST, we start searching from the root node. If the key to be
inserted is less than the root node's key, we search for an empty location in the left
subtree; otherwise, we search for an empty location in the right subtree, and insert the
new node there.
Deletion in a binary search tree:

There are three possible cases to consider deleting a node from BST:

Case 1: Deleting a node with no children: remove the node from the tree.

Case 2: Deleting a node with two children: call the node to be deleted N . Do not delete N . Instead,
choose either its inorder successor node or its inorder predecessor node, R . Copy the value
of R to N , then recursively call delete on R until reaching one of the other two cases. If we choose
the inorder successor of a node whose right subtree is not NULL (our present case is a node with 2
children), then its inorder successor is the node with the least value in its right subtree, which will
have at most one subtree, so deleting it falls into one of the other two cases.
Case 3: Deleting a node with one child: remove the node and replace it with its child.

Broadly speaking, nodes with children are harder to delete. As with all binary trees, a node’s
inorder successor is its right subtree’s leftmost child, and a node’s inorder predecessor is the left
subtree’s rightmost child. In either case, this node will have zero or one child. Delete it according
to one of the two simpler cases above.
// C program to construct a BST from its preorder traversal
#include <stdio.h>
#include <stdlib.h>

struct node {
    int data;
    struct node* left;
    struct node* right;
};

// Allocate a new node with the given data and NULL children
struct node* newNode(int data)
{
    struct node* temp
        = (struct node*)malloc(sizeof(struct node));

    temp->data = data;
    temp->left = temp->right = NULL;

    return temp;
}

// Build the subtree for pre[low..high]; *preIndex tracks the
// next unused element of the preorder sequence
struct node* constructTreeUtil(int pre[], int* preIndex,
                               int low, int high, int size)
{
    if (*preIndex >= size || low > high)
        return NULL;

    // The first element of the range is the subtree's root
    struct node* root = newNode(pre[*preIndex]);
    *preIndex = *preIndex + 1;

    if (low == high)
        return root;

    // Find the first element greater than the root; elements before
    // it form the left subtree, the rest form the right subtree
    int i;
    for (i = low; i <= high; ++i)
        if (pre[i] > root->data)
            break;

    root->left = constructTreeUtil(pre, preIndex, *preIndex,
                                   i - 1, size);
    root->right
        = constructTreeUtil(pre, preIndex, i, high, size);

    return root;
}

struct node* constructTree(int pre[], int size)
{
    int preIndex = 0;
    return constructTreeUtil(pre, &preIndex, 0, size - 1,
                             size);
}

// Inorder traversal of the constructed tree (prints keys in sorted order)
void printInorder(struct node* node)
{
    if (node == NULL)
        return;
    printInorder(node->left);
    printf("%d ", node->data);
    printInorder(node->right);
}

// Driver code
int main()
{
    int pre[] = { 10, 5, 1, 7, 40, 50 };
    int size = sizeof(pre) / sizeof(pre[0]);

    struct node* root = constructTree(pre, size);

    printf("Inorder traversal of the constructed tree: \n");
    printInorder(root);

    return 0;
}

6.2 Applications of Tree, Concept of Balanced Trees


Applications of tree data structure

What is a Tree?
A tree is a hierarchical, non-linear data structure consisting of a collection of nodes connected
by edges.
Terminologies:
According to the example tree shown above:
Nodes : 1 2 3 4 5 6 7 8 9 10 11 13 14
Root : 1
Internal Nodes : 1 2 3 4 5 6 7
External nodes : 8 9 10 11 13 14
(Parent , Child) : (1, 2 and 3), (2, 4 and 5), (3, 6 and 7),(4, 8 and 9), (5, 10 and 11) , (6, 13) , (7,14)
Siblings : (2, 3) , (4, 5), (6, 7), (8, 9), (10, 11)
Why Tree?
Unlike arrays and linked lists, which are linear data structures, a tree is a hierarchical (non-linear)
data structure.
One reason to use trees is to store information that naturally forms a hierarchy, for example the
file system on a computer:

/ <-- root
/ \
... home
/ \
ugrad course
/ / | \
... cs101 cs112 cs113
If we organize keys in the form of a tree (with some ordering, e.g., a BST), we can search for a given
key in moderate time (quicker than a linked list but slower than a sorted array). Self-balancing search
trees like AVL and Red-Black trees guarantee an upper bound of O(log n) for search.
We can insert/delete keys in moderate time (quicker than arrays but slower than unordered
linked lists). Self-balancing search trees like AVL and Red-Black trees guarantee an upper
bound of O(log n) for insertion/deletion.
Like linked lists and unlike arrays, a pointer implementation of trees has no upper
limit on the number of nodes, as nodes are linked using pointers.
Other Applications:

1. Store hierarchical data, like folder structure, organization structure, XML/HTML data.
2. Binary Search Tree is a tree that allows fast search, insert and delete on sorted data. It also
allows finding the closest item.
3. Heap is a tree data structure which is implemented using arrays and used to implement
priority queues.
4. B-Tree and B+ Tree: they are used to implement indexing in databases.
5. Syntax Tree: scanning, parsing, code generation and evaluation of arithmetic expressions
in compiler design.
6. K-D Tree: a space-partitioning tree used to organize points in k-dimensional space.
7. Trie: used to implement dictionaries with prefix lookup.
8. Suffix Tree: for quick pattern searching in a fixed text.
9. Spanning trees and shortest-path trees are used in bridges and routers respectively in
computer networks.
10. As a workflow for compositing digital images for visual effects.
11. Decision trees.
12. Organization chart of a large organization.
13. In XML parsers.
14. In machine learning algorithms.
15. For indexing in databases.
16. In servers like DNS (Domain Name System).
17. In computer graphics.
18. To evaluate an expression.
19. In chess games, to store the moves of a player.
20. In the Java Virtual Machine.

7 Graphs
7.1 Representation and Applications of Graph
Graphs are powerful data structures that represent real-life relationships between entities. Graphs are used
everywhere, from social networks, Google Maps, and the World Wide Web to blockchains and
neural networks. Because they provide abstractions of real-life relationships, they are used in a
variety of practical problems.
A graph is a non-linear data structure which consists of vertices (or nodes) connected by
edges (or arcs), where edges may be directed or undirected.
 In computer science, graphs are used to represent the flow of computation.
 Google Maps uses graphs for building transportation systems, where the intersection of two (or
more) roads is considered to be a vertex and the road connecting two vertices is
considered to be an edge; their navigation system is thus based on an algorithm to
calculate the shortest path between two vertices.
 In Facebook, users are considered to be the vertices, and if they are friends then there is an
edge running between them. Facebook's friend-suggestion algorithm uses graph theory.
Facebook is an example of an undirected graph.
 In the World Wide Web, web pages are considered to be the vertices. There is an edge from a
page u to another page v if there is a link to page v on page u. This is an example of a directed
graph. It was the basic idea behind the Google Page Rank algorithm.
 In operating systems, we come across the Resource Allocation Graph, where each process
and resource is considered to be a vertex. Edges are drawn from a resource to the process
it is allocated to, or from a requesting process to the requested resource. If this leads to the
formation of a cycle, then a deadlock can occur.
 In mapping systems we use graphs. They are useful for finding good destinations as well as
nearby locations. GPS also uses graphs.
 Facebook uses graphs to suggest mutual friends; it shows a list of followed pages, friends,
and the contact list.
 Microsoft Excel uses DAGs (Directed Acyclic Graphs).
 Dijkstra's algorithm uses a graph to find the shortest path between two or more
nodes.
 On social media sites, we use graphs to track the data of users, like showing preferred
post suggestions, recommendations, etc.

7.2 Graph Traversal Algorithms: Depth First Traversal and Breadth First Traversal
Depth first traversal
Depth First Traversal (or Search) for a graph is similar to Depth First Traversal of a
tree. The only catch here is, unlike trees, graphs may contain cycles (a node may be
visited twice). To avoid processing a node more than once, use a boolean visited array.
Example:
Input: n = 4, e = 6
0 -> 1,
0 -> 2,
1 -> 2,
2 -> 0,
2 -> 3,
3 -> 3
Output: DFS from vertex 1 : 1 2 0 3

Algorithm:
Create a recursive function that takes the index of the node and a visited array.
1. Mark the current node as visited and print the node.
2. Traverse all the adjacent and unmarked nodes and call the recursive function with the
index of the adjacent node
// C++ program to print DFS traversal from
// a given vertex in a given graph
#include <bits/stdc++.h>
using namespace std;

class Graph {
public:
    map<int, bool> visited;
    map<int, list<int> > adj;

    void addEdge(int v, int w);
    void DFS(int v);
};

void Graph::addEdge(int v, int w)
{
    adj[v].push_back(w);
}

void Graph::DFS(int v)
{
    // Mark the current node as visited and print it
    visited[v] = true;
    cout << v << " ";

    // Recur for all unvisited adjacent vertices
    list<int>::iterator i;
    for (i = adj[v].begin(); i != adj[v].end(); ++i)
        if (!visited[*i])
            DFS(*i);
}

int main()
{
    Graph g;
    g.addEdge(0, 1);
    g.addEdge(0, 2);
    g.addEdge(1, 2);
    g.addEdge(2, 0);
    g.addEdge(2, 3);
    g.addEdge(3, 3);

    cout << "Following is Depth First Traversal"
            " (starting from vertex 2) \n";
    g.DFS(2);

    return 0;
}

Breadth First Traversal

Breadth First Traversal (or Search) for a graph is similar to the Breadth First Traversal of a tree.
The only catch here is that, unlike trees, graphs may contain cycles, so the same vertex may be
reached again. To avoid processing a node more than once, we divide the vertices into two categories:
 Visited and
 Not visited.
A boolean visited array is used to mark the visited vertices. For simplicity, it is assumed
that all vertices are reachable from the starting vertex. BFS uses a queue data
structure for traversal.
Example:
In the following graph, we start traversal from vertex 2.

When we come to vertex 0, we look for all adjacent vertices of it.


 2 is also an adjacent vertex of 0.
 If we don’t mark visited vertices, then 2 will be processed again and it will become a
non-terminating process.
There can be multiple BFS traversals for a graph. Different BFS traversals for the above
graph :
2, 3, 0, 1
2, 0, 3, 1
Implementation of BFS traversal:
Follow the steps below to implement BFS traversal.
 Declare a queue, mark the starting vertex as visited, and insert it into the queue.
 Repeat the following until the queue becomes empty:
 Remove the front vertex of the queue and process it.
 Insert all the unvisited neighbours of that vertex into the queue, marking each
one as visited as it is inserted.
// Program to print BFS traversal from a given
// source vertex. BFS(int s) traverses vertices
// reachable from s.
#include <bits/stdc++.h>
using namespace std;

class Graph
{
    int V;                  // number of vertices
    vector<list<int>> adj;  // adjacency lists
public:
    Graph(int V);
    void addEdge(int v, int w);
    void BFS(int s);
};

Graph::Graph(int V)
{
    this->V = V;
    adj.resize(V);
}

void Graph::addEdge(int v, int w)
{
    adj[v].push_back(w);
}

void Graph::BFS(int s)
{
    vector<bool> visited;
    visited.resize(V, false);

    list<int> queue;
    visited[s] = true;
    queue.push_back(s);

    while (!queue.empty())
    {
        s = queue.front();
        cout << s << " ";
        queue.pop_front();

        // Enqueue all unvisited neighbours of the dequeued vertex,
        // marking each as visited when it is enqueued
        for (auto adjacent : adj[s])
        {
            if (!visited[adjacent])
            {
                visited[adjacent] = true;
                queue.push_back(adjacent);
            }
        }
    }
}

int main()
{
    Graph g(4);
    g.addEdge(0, 1);
    g.addEdge(0, 2);
    g.addEdge(1, 2);
    g.addEdge(2, 0);
    g.addEdge(2, 3);
    g.addEdge(3, 3);

    cout << "Following is Breadth First Traversal "
         << "(starting from vertex 2) \n";
    g.BFS(2);

    return 0;
}
7.3 Minimum Spanning Trees: Kruskal’s Algorithms
https://www.youtube.com/watch?v=plqvOcnYWkg
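As a companion to the video linked above, here is a minimal sketch of Kruskal's algorithm in C: sort the edges by weight, then accept each edge whose endpoints lie in different components, using a simple union-find to detect cycles. The `Edge` struct and the function names are illustrative, not from the source:

```c
#include <stdlib.h>

/* An edge of a weighted undirected graph. */
struct Edge { int u, v, w; };

static int cmpEdge(const void *a, const void *b) {
    return ((const struct Edge *)a)->w - ((const struct Edge *)b)->w;
}

/* Union-Find "find" with path halving. */
static int findSet(int parent[], int x) {
    while (parent[x] != x) {
        parent[x] = parent[parent[x]];
        x = parent[x];
    }
    return x;
}

/* Returns the total weight of the MST of a connected graph
   with n vertices and e edges. */
int kruskalMST(struct Edge edges[], int e, int n) {
    int *parent = malloc(n * sizeof(int));
    for (int i = 0; i < n; i++) parent[i] = i;

    /* Greedy step: consider edges in non-decreasing weight order. */
    qsort(edges, e, sizeof(struct Edge), cmpEdge);

    int total = 0, taken = 0;
    for (int i = 0; i < e && taken < n - 1; i++) {
        int ru = findSet(parent, edges[i].u);
        int rv = findSet(parent, edges[i].v);
        if (ru != rv) {          /* different components: no cycle */
            parent[ru] = rv;     /* merge the two components */
            total += edges[i].w;
            taken++;
        }
    }
    free(parent);
    return total;
}
```

On the small graph with edges (0,1,10), (0,2,6), (0,3,5), (1,3,15), (2,3,4), the accepted edges are (2,3), (0,3), and (0,1), giving a total weight of 19.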

8 Sorting and Searching


8.1 Concept of Sorting and Searching
A Sorting Algorithm is used to rearrange a given array or list elements according to a comparison operator on
the elements. The comparison operator is used to decide the new order of elements in the respective data
structure.
For example: the below list of characters is sorted in increasing order of their ASCII values. That is, a
character with a lower ASCII value is placed before a character with a higher ASCII value.
Searching Algorithms are designed to check for an element or retrieve an element from any data structure where
it is stored. Based on the type of search operation, these algorithms are generally classified into two categories:
1. Sequential Search: In this, the list or array is traversed sequentially and every element is checked. For
example: Linear Search.
For example, given Ary[3] = {34, 543, 2}, a linear search examines Ary[0] = 34 first, then each
following element in turn.
2. Interval Search: These algorithms are specifically designed for searching in sorted data-structures.
These type of searching algorithms are much more efficient than Linear Search as they repeatedly target
the center of the search structure and divide the search space in half. For Example: Binary Search.
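The interval-search idea just described, i.e. repeatedly halving the search space of a sorted array, can be sketched as an iterative binary search in C:

```c
/* Iterative binary search on a sorted array.
   Returns the index of key, or -1 if it is absent. */
int binarySearch(const int arr[], int n, int key) {
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;  /* avoids overflow of (low + high) / 2 */
        if (arr[mid] == key)
            return mid;
        if (arr[mid] < key)
            low = mid + 1;    /* key can only be in the right half */
        else
            high = mid - 1;   /* key can only be in the left half */
    }
    return -1;
}
```

Each iteration halves the remaining range, which is where the O(log n) running time comes from.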
1.1.2 Difference between Searching and Sorting Algorithms:

S.No. | Searching Algorithm | Sorting Algorithm
1. | Searching algorithms are designed to retrieve an element from the data structure where it is stored. | A sorting algorithm is used to arrange the data of a list or array into some specific order.
2. | These algorithms are generally classified into two categories, i.e. Sequential Search and Interval Search. | There are two different categories in sorting: Internal and External Sorting.
3. | The worst-case time complexity of searching algorithms is O(N). | The worst-case time complexity of many sorting algorithms like Bubble Sort, Insertion Sort, Selection Sort, and Quick Sort is O(N^2).
4. | There are no stable and unstable searching algorithms. | Bubble Sort, Insertion Sort, Merge Sort etc. are stable sorting algorithms, whereas Quick Sort, Heap Sort etc. are unstable sorting algorithms.
5. | Linear Search and Binary Search are examples of searching algorithms. | Bubble Sort, Insertion Sort, Selection Sort, Merge Sort, Quick Sort etc. are examples of sorting algorithms.

8.2 Comparison Sorting: Bubble, Selection, and Insertion Sort and their complexity
a. Comparison Sorting: Bubble
Bubble Sort is the simplest sorting algorithm that works by repeatedly swapping the
adjacent elements if they are in the wrong order. This algorithm is not suitable for large
data sets as its average and worst-case time complexity is quite high.
1.1.3 How Bubble Sort Works?
Consider an array arr[] = {5, 1, 4, 2, 8}
First Pass:
 Bubble sort starts with the very first two elements, comparing them to check which one is
greater.
 ( 5 1 4 2 8 ) –> ( 1 5 4 2 8 ), Here, algorithm compares the first two elements,
and swaps since 5 > 1.
 ( 1 5 4 2 8 ) –> ( 1 4 5 2 8 ), Swap since 5 > 4
 ( 1 4 5 2 8 ) –> ( 1 4 2 5 8 ), Swap since 5 > 2
 ( 1 4 2 5 8 ) –> ( 1 4 2 5 8 ), Now, since these elements are already in order (8
> 5), algorithm does not swap them.
Second Pass:
 Now, during second iteration it should look like this:
 ( 1 4 2 5 8 ) –> ( 1 4 2 5 8 )
 ( 1 4 2 5 8 ) –> ( 1 2 4 5 8 ), Swap since 4 > 2
 ( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
 ( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
Third Pass:
 Now, the array is already sorted, but our algorithm does not know if it is completed.
 The algorithm needs one whole pass without any swap to know it is sorted.
 ( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
 ( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
 ( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
 ( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
b. Comparison Sorting: Selection
The selection sort algorithm sorts an array by repeatedly finding the minimum element
(considering ascending order) from unsorted part and putting it at the beginning. The
algorithm maintains two subarrays in a given array.
 The subarray which is already sorted.
 Remaining subarray which is unsorted.
In every iteration of selection sort, the minimum element (considering ascending order)
from the unsorted subarray is picked and moved to the sorted subarray
Lets consider the following array as an example: arr[] = {64, 25, 12, 22, 11}
First pass:
 For the first position in the sorted array, the whole array is traversed from index 0 to 4
sequentially. 64 is currently stored at the first position; after traversing the whole array
it is clear that 11 is the lowest value.
64 25 12 22 11

 Thus, swap 64 with 11. After one iteration, 11, which happens to be the least value in
the array, appears in the first position of the sorted list.
11 25 12 22 64

Second Pass:
 For the second position, where 25 is present, again traverse the rest of the array in a
sequential manner.
11 25 12 22 64

 After traversing, we found that 12 is the second lowest value in the array and it should
appear at the second place in the array, thus swap these values.
11 12 25 22 64

Third Pass:
 Now, for the third place, where 25 is present, again traverse the rest of the array and find
the third least value present in the array.
11 12 25 22 64

 While traversing, 22 came out to be the third least value and it should appear at the
third place in the array, thus swap 22 with element present at third position.
11 12 22 25 64

Fourth pass:
 Similarly, for fourth position traverse the rest of the array and find the fourth least
element in the array
 As 25 is the 4th lowest value hence, it will place at the fourth position.

11 12 22 25 64

Fifth Pass:
 At last, the largest value present in the array is automatically placed at the last position
in the array.
 The resulting array is the sorted array.

11 12 22 25 64
Complexity Analysis of Selection Sort:
Time Complexity: The time complexity of Selection Sort is O(N^2) as there are two nested
loops:
 One loop to select an element of the array one by one = O(N)
 Another loop to compare that element with every other array element = O(N)
Therefore overall complexity = O(N) * O(N) = O(N^2)
Auxiliary Space: O(1), as the only extra memory used is a temporary variable for swapping
two values in the array. The good thing about selection sort is that it never makes more
than O(N) swaps, which can be useful when memory writes are costly.
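The passes above correspond to the following C sketch (the function name is illustrative):

```c
/* Selection sort: in each pass, move the minimum of the
   unsorted part to the front of that part. */
void selectionSort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        /* Find the index of the minimum in arr[i..n-1]. */
        int min_idx = i;
        for (int j = i + 1; j < n; j++)
            if (arr[j] < arr[min_idx])
                min_idx = j;

        /* One swap per pass: at most O(N) swaps in total. */
        int t = arr[min_idx];
        arr[min_idx] = arr[i];
        arr[i] = t;
    }
}
```

Running selectionSort on {64, 25, 12, 22, 11} performs exactly the swaps traced in the passes above.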

c. Comparison Sorting: Insertion


Insertion sort is a simple sorting algorithm that works similar to the way you sort
playing cards in your hands. The array is virtually split into a sorted and an unsorted
part. Values from the unsorted part are picked and placed at the correct position in the
sorted part.
Characteristics of Insertion Sort:
 This algorithm is one of the simplest algorithm with simple implementation
 Basically, Insertion sort is efficient for small data values
 Insertion sort is adaptive in nature, i.e. it is appropriate for data sets which are already
partially sorted.
1.1.4 Working of Insertion Sort algorithm:
Consider an example: arr[]: {12, 11, 13, 5, 6}

12 11 13 5 6
First Pass:
 Initially, the first two elements of the array are compared in insertion sort.

12 11 13 5 6

 Here, 12 is greater than 11 hence they are not in the ascending order and 12 is not at
its correct position. Thus, swap 11 and 12.
 So, for now 11 is stored in a sorted sub-array.
11 12 13 5 6

Second Pass:
 Now, move to the next two elements and compare them
11 12 13 5 6

 Here, 13 is greater than 12, thus both elements seem to be in ascending order; hence,
no swapping will occur. 12 is also stored in the sorted sub-array along with 11.
Third Pass:
 Now, two elements are present in the sorted sub-array which are 11 and 12
 Moving forward to the next two elements which are 13 and 5
11 12 13 5 6

 Both 5 and 13 are not present at their correct place so swap them
11 12 5 13 6

 After swapping, elements 12 and 5 are not sorted, thus swap again
11 5 12 13 6

 Here, again 11 and 5 are not sorted, hence swap again


5 11 12 13 6

 Here, 5 is at its correct position.


Fourth Pass:
 Now, the elements which are present in the sorted sub-array are 5, 11 and 12
 Moving to the next two elements 13 and 6

5 11 12 13 6

 Clearly, they are not sorted, thus perform swap between both
5 11 12 6 13

 Now, 6 is smaller than 12, hence, swap again


5 11 6 12 13

 Here, also swapping makes 11 and 6 unsorted hence, swap again


5 6 11 12 13

 Finally, the array is completely sorted.


Insertion Sort Algorithm
To sort an array of size N in ascending order:
 Iterate from arr[1] to arr[N-1] over the array.
 Compare the current element (key) to its predecessor.
 If the key element is smaller than its predecessor, compare it to the elements
before it. Move the greater elements one position up to make space for the key
element.
Time Complexity: O(N^2)
Auxiliary Space: O(1)
What are the Boundary Cases of Insertion Sort algorithm?
Insertion sort takes maximum time to sort if elements are sorted in reverse order. And it
takes minimum time (Order of n) when elements are already sorted.

8.3 Divide and Conquer Sorting: Merge, and Quick Sort and their Complexity
Divide and Conquer is an algorithmic paradigm in which the problem is solved using the
Divide, Conquer, and Combine strategy.
A typical Divide and Conquer algorithm solves a problem using following three steps:
1. Divide: This involves dividing the problem into smaller sub-problems.
2. Conquer: Solve sub-problems by calling recursively until solved.
3. Combine: Combine the sub-problems to get the final solution of the whole problem.
1.2 Example of Divide and Conquer algorithm
A classic example of Divide and Conquer is Merge Sort demonstrated below. In Merge Sort, we divide array
into two halves, sort the two halves recursively, and then merge the sorted halves.

a. Divide and Conquer Sorting: Merge
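The Merge Sort just described, i.e. divide at the midpoint, sort the two halves recursively, then merge the sorted halves, can be sketched in C as follows (a minimal sketch using a temporary buffer for the merge step):

```c
#include <stdlib.h>
#include <string.h>

/* Combine: merge the two sorted halves arr[l..m] and arr[m+1..r]. */
static void merge(int arr[], int l, int m, int r) {
    int n = r - l + 1;
    int *tmp = malloc(n * sizeof(int));
    int i = l, j = m + 1, k = 0;

    /* Take the smaller front element of either half (<= keeps it stable). */
    while (i <= m && j <= r)
        tmp[k++] = (arr[i] <= arr[j]) ? arr[i++] : arr[j++];
    while (i <= m) tmp[k++] = arr[i++];
    while (j <= r) tmp[k++] = arr[j++];

    memcpy(arr + l, tmp, n * sizeof(int));
    free(tmp);
}

/* Divide at the midpoint, Conquer each half, Combine with merge. */
void mergeSort(int arr[], int l, int r) {
    if (l >= r) return;
    int m = l + (r - l) / 2;
    mergeSort(arr, l, m);
    mergeSort(arr, m + 1, r);
    merge(arr, l, m, r);
}
```

Because the array is halved at every level and each level does O(N) merging work, the total running time is O(N log N); the temporary buffer makes the auxiliary space O(N).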

b. Divide and Conquer Sorting: Quick Sort
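Quick Sort follows the same divide-and-conquer pattern: partition the array around a pivot so that smaller elements end up on its left and larger ones on its right, then sort the two sides recursively. A minimal C sketch using the Lomuto partition scheme (pivot = last element):

```c
/* Lomuto partition: place the pivot (last element) at its final
   position and return that index. */
static int partition(int arr[], int low, int high) {
    int pivot = arr[high];
    int i = low - 1;   /* boundary of the "smaller than pivot" region */
    for (int j = low; j < high; j++) {
        if (arr[j] < pivot) {
            i++;
            int t = arr[i]; arr[i] = arr[j]; arr[j] = t;
        }
    }
    int t = arr[i + 1]; arr[i + 1] = arr[high]; arr[high] = t;
    return i + 1;
}

void quickSort(int arr[], int low, int high) {
    if (low < high) {
        int p = partition(arr, low, high);   /* Divide around the pivot */
        quickSort(arr, low, p - 1);          /* Conquer the left part  */
        quickSort(arr, p + 1, high);         /* Conquer the right part */
    }
}
```

Unlike Merge Sort, no Combine step is needed: once each side of the pivot is sorted, the whole array is sorted in place. Average time is O(N log N), but an unlucky pivot choice on every level degrades it to the O(N^2) worst case noted in the comparison table above.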


8.4 Searching Algorithms: Sequential, and Binary Search
8.5 Concept of Hash Data Structure and Hash Function
