
Unit 1

Syllabus:
Basic Concepts:
Arrays, Structures and Unions, Internal Implementation of Structures, Self-Referential
Structures
Simple Searching and Sorting Techniques:
Introduction to Searching, Sequential Search, Binary Search, Interpolation Search, Selection
Sort, Bubble Sort, Insertion Sort, Introduction to Merge Sort
Introduction to Recursion:
Towers of Hanoi, Quick Sort, Merge Sort

Basic Concepts:

What is an algorithm?

An algorithm is a step-by-step procedure to solve a problem. In everyday language, an

algorithm is defined as a sequence of statements which are used to perform a task. In
computer science, an algorithm can be defined as follows...
An algorithm is a sequence of unambiguous instructions for solving a problem, which
can be implemented (as a program) on a computer.

Specifications of Algorithms
Every algorithm must satisfy the following specifications...

1. Input - Every algorithm must take zero or more input values from outside.
2. Output - Every algorithm must produce an output as result.
3. Definiteness - Every statement/instruction in an algorithm must be clear and
unambiguous (only one interpretation).
4. Finiteness - For all different cases, the algorithm must produce the result within a
finite number of steps.
5. Effectiveness - Every instruction must be basic enough to be carried out and it also
must be feasible.

Example for an Algorithm


Let us consider the following problem for finding the largest value in a given list of values.
Problem Statement : Find the largest number in the given list of numbers.
Input : A list of positive integer numbers. (List must contain at least one number).
Output : The largest number in the given list of positive integer numbers.

Consider the given list of numbers as 'L' (input), and the largest number as 'max' (Output).

Algorithm

1. Step 1: Define a variable 'max' and initialize with '0'.


2. Step 2: Compare first number (say 'x') in the list 'L' with 'max', if 'x' is larger than
'max', set 'max' to 'x'.
3. Step 3: Repeat step 2 for all numbers in the list 'L'.
4. Step 4: Display the value of 'max' as a result.

What is Performance Analysis of an algorithm?

Performance analysis of an algorithm is the process of making an evaluative judgement
about that algorithm.

It can also be defined as follows...


Performance analysis of an algorithm means predicting the resources required by that
algorithm to perform its task.
That means when we have multiple algorithms to solve a problem, we need to select a
suitable one. We compare the algorithms that solve the same problem with one another to
select the best. To compare algorithms, we use a set of parameters such as the memory
required by the algorithm, its execution speed, how easy it is to understand, how easy it is
to implement, etc.
Generally, the performance of an algorithm depends on the following elements...
1. Whether that algorithm is providing the exact solution for the problem?
2. Whether it is easy to understand?
3. Whether it is easy to implement?
4. How much space (memory) it requires to solve the problem?
5. How much time it takes to solve the problem? Etc.,

When we want to analyse an algorithm, we consider only the space and time required by
that particular algorithm and we ignore all the remaining elements. Based on this
information, performance analysis of an algorithm can also be defined as follows…

Performance analysis of an algorithm is the process of calculating space and time


required by that algorithm.

Performance analysis of an algorithm is performed by using the following measures...


1. Space required to complete the task of that algorithm (Space Complexity). It
includes program space and data space
2. Time required to complete the task of that algorithm (Time Complexity)

What is Space complexity?


When we design an algorithm to solve a problem, it needs some computer memory to
complete its execution. For any algorithm, memory is required for the following purposes...

1. To store program instructions.


2. To store constant values.
3. To store variable values.
4. And for a few other things like function calls, jump statements, etc.
Space complexity of an algorithm can be defined as follows...

The total amount of computer memory required by an algorithm to complete its execution is


called the space complexity of that algorithm.

Generally, when a program is under execution it uses the computer memory for THREE
reasons. They are as follows...
1. Instruction Space: It is the amount of memory used to store compiled version of
instructions.
2. Environmental Stack: It is the amount of memory used to store information of
partially executed functions at the time of function call.
3. Data Space: It is the amount of memory used to store all the variables and
constants.

What is Time complexity?


Every algorithm requires some amount of computer time to execute its instructions and
perform its task. This computer time required is called the time complexity.
The time complexity of an algorithm can be defined as follows…

The time complexity of an algorithm is the total amount of time required by an


algorithm to complete its execution.

Generally, the running time of an algorithm depends upon the following...


1. Whether it is running on Single processor machine or Multi processor machine.
2. Whether it is a 32 bit machine or 64 bit machine.
3. Read and Write speed of the machine.
4. The amount of time required by an algorithm to
perform Arithmetic operations, logical operations, return value
and assignment operations etc.,
5. Input data

What is Asymptotic Notation?


Whenever we want to analyse an algorithm, we need to calculate its complexity. But the
calculated complexity does not give the exact amount of resource required. So instead of
taking the exact amount of resource, we represent the complexity in a general form
(notation), which captures the basic behaviour of the algorithm. We use that general form
(notation) for the analysis process.
Majorly, we use THREE types of Asymptotic Notations and those are as follows...
1. Big - Oh (O)
2. Big - Omega (Ω)
3. Big - Theta (Θ)

Big - Oh Notation (O)


Big - Oh notation is used to define the upper bound of an algorithm in terms of Time
Complexity.
That means Big - Oh notation always indicates the maximum time required by an algorithm
for all input values. That means Big - Oh notation describes the worst case of an algorithm
time complexity.
Big - Oh Notation can be defined as follows…

Consider function f(n) as the time complexity of an algorithm and g(n) as its most
significant term. If f(n) <= C g(n) for all n >= n0, for some constants C > 0 and n0 >= 1,
then we can represent f(n) as O(g(n)).

Big - Omega Notation (Ω)


Big - Omega notation is used to define the lower bound of an algorithm in terms of Time
Complexity.
That means Big-Omega notation always indicates the minimum time required by an
algorithm for all input values. That means Big-Omega notation describes the best case of an
algorithm time complexity.
Big - Omega Notation can be defined as follows…

Consider function f(n) as the time complexity of an algorithm and g(n) as its most
significant term. If f(n) >= C g(n) for all n >= n0, for some constants C > 0 and n0 >= 1,
then we can represent f(n) as Ω(g(n)).

Big - Theta Notation (Θ)


Big - Theta notation is used to define the tight bound of an algorithm in terms of Time
Complexity.
That means Big - Theta notation bounds the running time of an algorithm from both above
and below, so it describes the exact order of growth of the algorithm's time complexity
(it is sometimes loosely described as the average case).
Big - Theta Notation can be defined as follows…
Consider function f(n) as the time complexity of an algorithm and g(n) as its most
significant term. If C1 g(n) <= f(n) <= C2 g(n) for all n >= n0, for some constants C1 > 0,
C2 > 0 and n0 >= 1, then we can represent f(n) as Θ(g(n)).
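As a worked example (our own, using f(n) = 3n + 2 with most significant term g(n) = n), the same function satisfies all three definitions:

```latex
% Big-Oh: with C = 4 and n0 = 2
3n + 2 \le 4n \quad (n \ge 2) \;\Rightarrow\; f(n) = O(n)
% Big-Omega: with C = 3 and n0 = 1
3n + 2 \ge 3n \quad (n \ge 1) \;\Rightarrow\; f(n) = \Omega(n)
% Big-Theta: with C1 = 3, C2 = 4 and n0 = 2
3n \le 3n + 2 \le 4n \quad (n \ge 2) \;\Rightarrow\; f(n) = \Theta(n)
```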

What is Data Structure?


Whenever we want to work with a large amount of data, then organizing that data is very
important. If that data is not organized effectively, it is very difficult to perform any task on
that data. If it is organized effectively then any operation can be performed easily on that
data.
A data structure can be defined as follows…

A data structure is a method of organizing a large amount of data efficiently, so


that any operation on that data becomes easy.

Based on the organizing method of data structure, data structures are divided into two
types.

● Linear Data Structures


● Non - Linear Data Structures

Linear Data Structures


If a data structure organizes the data in sequential order, then that data structure is
called a Linear Data Structure.
Example

1. Arrays
2. List (Linked List)
3. Stack
4. Queue

Non - Linear Data Structures


If a data structure organizes the data in a non-sequential (hierarchical or networked)
order, then that data structure is called a Non-Linear Data Structure.
Example

1. Tree
2. Graph
3. Dictionaries
4. Heaps etc.,
Arrays

Whenever we want to work with a large number of data values, we need that many
different variables. As the number of variables increases, the complexity of the program
also increases and programmers get confused with the variable names. There may be
situations in which we need to work with a large number of similar data values. To make
this work easier, the C programming language provides a concept called "Array".

An array is a variable which can store multiple values of same data type at a time.

An array can also be defined as follows…

"Collection of similar data items stored in continuous memory locations with single
name".

To understand the concept of arrays, consider the following example declaration.


int a, b, c;
Here, the compiler allocates 2 bytes of memory with the name 'a' (these notes assume a
2-byte int; on most modern compilers an int occupies 4 bytes), another 2 bytes with the
name 'b' and 2 more bytes with the name 'c'. These three memory locations may or may not
be in sequence. Each of these individual variables stores only one value at a time.

Now consider the following declaration...

int a[3];

Here, the compiler allocates a total of 6 bytes of continuous memory locations with the
single name 'a', but it allows three different integer values (each in 2 bytes of memory) to
be stored at a time. And the memory is organized as follows...
That means all three memory locations are named 'a'. But "how can we refer to the
individual elements?" is the big question. The answer is that the compiler not only
allocates the memory but also assigns a numerical value to each individual element of the
array. This numerical value is called an "Index". The index values for the above example
are as follows...

The individual elements of an array are identified using the combination of 'name' and
'index' as follows...

arrayName[indexValue]

For the above example, the individual elements can be referred as follows...

If we want to assign a value to any of these memory locations (array elements), we can
assign it as follows...

a[1] = 100;

The result will be as follows...

Structure and Union Data Structure

Structure
The structure is a user-defined data type that allows members of different data types to be
combined into a single unit. It is used to describe a record. It is declared with the
struct keyword in the program.
A structure is defined with the struct structure_name statement, as shown in the syntax.

Syntax
struct structure_name
{
data-type member 1;
data-type member 2; …
data-type member n;
};
Note: struct structure_name is used to define the name of the structure and data-type
member defines the data-type.

Advantages of Structure

1. It can store different types of the data-type.


2. It is user-friendly and simple to understand.
3. Related data can be grouped and passed around under a single name, which shortens the program.

Program
#include<stdio.h>
int main()
{
    struct Teacher
    {
        int teacher_id;
        float salary;
        int mobile_no;
    };
    struct Teacher t1, t2, t3;
    printf("\nEnter the teacher_id, salary & mobile_no of the teacher\n");
    scanf("%d %f %d", &t1.teacher_id, &t1.salary, &t1.mobile_no);
    scanf("%d %f %d", &t2.teacher_id, &t2.salary, &t2.mobile_no);
    scanf("%d %f %d", &t3.teacher_id, &t3.salary, &t3.mobile_no);
    printf("\nEntered Result");
    printf("\n%d %f %d", t1.teacher_id, t1.salary, t1.mobile_no);
    printf("\n%d %f %d", t2.teacher_id, t2.salary, t2.mobile_no);
    printf("\n%d %f %d", t3.teacher_id, t3.salary, t3.mobile_no);
    return 0;
}

Union

The Union is a user-defined data type in the data structure that allows different data types
to be stored in the same memory location. It is represented by the union keyword in the
program. A union of several members may be defined together, but at any time, only one
member can be valued. Union provides an effective way to use the same memory space for
multiple purposes.
To define the Union, it uses the union union_name statement, as shown in syntax.
Syntax
union union_name
{
data-type member 1;
data-type member 2; …
data-type member n;
};

Note: union union_name is used to define the name of the Union and data-type
member defines the data-type.
A union is declared like a structure, except that the keyword union is written in place of
the struct keyword.

Program

#include<stdio.h>
int main()
{
    union Teacher
    {
        int teacher_id;
        float salary;
        int mobile_no;
    };
    union Teacher t1;
    /* Only one member of a union holds a valid value at a time, so we
       assign and print the members one after another. */
    t1.teacher_id = 101;
    printf("\nteacher_id = %d", t1.teacher_id);
    t1.salary = 45000.50f;      /* overwrites teacher_id */
    printf("\nsalary = %f", t1.salary);
    t1.mobile_no = 987654;      /* overwrites salary */
    printf("\nmobile_no = %d", t1.mobile_no);
    return 0;
}

Advantages of Union

1. It takes less memory space than a structure.


2. It is used when two or more data members have to share the same memory location.
Difference between structure and union

Structure                                      Union

It is represented by the struct                It is represented by the union
keyword in the program.                        keyword in the program.

In a structure, every member is                In a union, all members share the
assigned a separate memory location.           same memory location.

The memory size of a structure is              The memory size of a union is equal
equal to the sum of the memory sizes           to the size of its largest data-type
of all its data-type members.                  member.

It supports a flexible array member.           It does not support a flexible
                                               array member.

A structure can store the values of            A union can store only one value
multiple data members at a time.               at a time.

Syntax: struct structure_name {                Syntax: union union_name {
data-type member 1; data-type                  data-type member 1; data-type
member 2; ... data-type member n; };           member 2; ... data-type member n; };

Internal Implementation of Structures


In most cases you need not be concerned with exactly how the C compiler will store the
fields of a structure in memory.
Generally, if you have a structure definition such as:
struct emp
{
    int eno;
    char ename[20];
    float esal;
};
int main()
{
    int a;
    struct emp e;
}
it takes a total of 26 bytes of memory to store the structure: 2 bytes for the int (with the
2-byte int assumed earlier in these notes), 20 bytes for the char array and 4 bytes for the
float. Compilers may also insert padding bytes for alignment, so the actual size can be larger.
In the above example, struct emp is the data type and 'e' is a variable of that type.

Self Referential Structures

Self Referential structures are those structures that have one or more pointers which point
to the same type of structure, as their member.

In other words, structures pointing to the same type of structure are self-referential in
nature.

struct node {
int data1;
char data2;
struct node* link;
};

int main()
{
struct node ob;
return 0;
}
In the above example ‘link’ is a pointer to a structure of type ‘node’. Hence, the structure
‘node’ is a self-referential structure with ‘link’ as the referencing pointer.
An important point to consider is that the pointer should be initialized properly before it is
accessed, as by default it contains a garbage value.
Types of Self Referential Structures
1.Self Referential Structure with Single Link
2.Self Referential Structure with Multiple Links
1.Self Referential Structure with Single Link: These structures can have only one
self-pointer as their member. The following example will show us how to connect the
objects of a self-referential structure with the single link and access the corresponding data
members. The connection formed is shown in the following figure.

Program:
#include <stdio.h>
struct node {
int data1;
char data2;
struct node* link;
};
int main()
{
struct node ob1; // Node1
// Initialization
ob1.link = NULL;
ob1.data1 = 10;
ob1.data2 = 20;
struct node ob2; // Node2
// Initialization
ob2.link = NULL;
ob2.data1 = 30;
ob2.data2 = 40;
// Linking ob1 and ob2
ob1.link = &ob2;
// Accessing data members of ob2 using ob1
printf("%d", ob1.link->data1);
printf("\n%d", ob1.link->data2);
return 0;
}

Output

30
40
2. Self Referential Structure with Multiple Links: Self-referential structures with
multiple links can have more than one self-pointer. Many complicated data structures can
be constructed easily using these structures. Such structures can connect to more than
one node at a time. The following example shows one such structure with more than one
link.
The connections made in the above example can be understood using the following figure.

Program:

#include <stdio.h>
struct node {
int data;
struct node* prev_link;
struct node* next_link;
};
int main()
{
struct node ob1; // Node1
// Initialization
ob1.prev_link = NULL;
ob1.next_link = NULL;
ob1.data = 10;
struct node ob2; // Node2
// Initialization
ob2.prev_link = NULL;
ob2.next_link = NULL;
ob2.data = 20;
struct node ob3; // Node3
// Initialization
ob3.prev_link = NULL;
ob3.next_link = NULL;
ob3.data = 30;
// Forward links
ob1.next_link = &ob2;
ob2.next_link = &ob3;
// Backward links
ob2.prev_link = &ob1;
ob3.prev_link = &ob2;
// Accessing data of ob1, ob2 and ob3 by ob1
printf("%d\t", ob1.data);
printf("%d\t", ob1.next_link->data);
printf("%d\n", ob1.next_link->next_link->data);
// Accessing data of ob1, ob2 and ob3 by ob2
printf("%d\t", ob2.prev_link->data);
printf("%d\t", ob2.data);
printf("%d\n", ob2.next_link->data);
// Accessing data of ob1, ob2 and ob3 by ob3
printf("%d\t", ob3.prev_link->prev_link->data);
printf("%d\t", ob3.prev_link->data);
printf("%d", ob3.data);
return 0;
}

Output

10 20 30
10 20 30
10 20 30

In the above example we can see that ‘ob1’, ‘ob2’ and ‘ob3’ are three objects of the self
referential structure ‘node’. And they are connected using their links in such a way that any
of them can easily access each other’s data. This is the beauty of the self referential
structures. The connections can be manipulated according to the requirements of the
programmer.
Applications: Self-referential structures are very useful in creation of other complex data
structures like:

Linked Lists
Stacks
Queues
Trees
Graphs etc.
Simple Searching and Sorting Techniques:
Introduction to Searching:

Searching: In the data structure, searching is the process in which an element is searched
in a list that satisfies one or more than one condition.
Types of searching

There are two types of searching in the data structure.

1. Linear searching
2. Binary searching

Linear Search Algorithm (Sequential Search Algorithm)


Linear search algorithm finds a given element in a list of elements with O(n) time
complexity, where n is the total number of elements in the list. The search process starts by
comparing the search element with the first element in the list. If both match, the result is
"element found"; otherwise the search element is compared with the next element in the
list. This repeats until the search element has been compared with the last element in the
list; if that last element also doesn't match, the result is "Element not found in the list".
That means the search element is compared with the list element by element.

Linear search is implemented using following steps...


● Step 1 - Read the search element from the user.
● Step 2 - Compare the search element with the first element in the list.
● Step 3 - If both are matched, then display "Given element is found!!!" and terminate
the function
● Step 4 - If both are not matched, then compare search element with the next
element in the list.
● Step 5 - Repeat steps 3 and 4 until search element is compared with last element in
the list.
● Step 6 - If last element in the list also doesn't match, then display "Element is not
found!!!" and terminate the function.

Example
Consider the following list of elements and the element to be searched…
Complexity of Linear searching

Complexity Best case Average case Worst case

Time O(1) O(n) O(n)

Space O(1)
Binary Search Algorithm
Binary search algorithm finds a given element in a list of elements with O(log n) time
complexity where n is total number of elements in the list. The binary search algorithm can
be used with only a sorted list of elements. That means the binary search is used only with
a list of elements that are already arranged in order. The binary search cannot be used
for a list of elements arranged in random order. This search process starts comparing the
search element with the middle element in the list. If both are matched, then the result is
"element found". Otherwise, we check whether the search element is smaller or larger than
the middle element in the list. If the search element is smaller, then we repeat the same
process for the left sublist of the middle element. If the search element is larger, then we
repeat the same process for the right sublist of the middle element. We repeat this process
until we find the search element in the list or until we are left with a sublist of only one
element. And if that element also doesn't match the search element, then the result is
"Element not found in the list".

Binary search is implemented using following steps…

● Step 1 - Read the search element from the user.


● Step 2 - Find the middle element in the sorted list.
● Step 3 - Compare the search element with the middle element in the sorted list.
● Step 4 - If both are matched, then display "Given element is found!!!" and terminate
the function.
● Step 5 - If both are not matched, then check whether the search element is smaller
or larger than the middle element.
● Step 6 - If the search element is smaller than middle element, repeat steps 2, 3, 4
and 5 for the left sublist of the middle element.
● Step 7 - If the search element is larger than middle element, repeat steps 2, 3, 4 and
5 for the right sublist of the middle element.
● Step 8 - Repeat the same process until we find the search element in the list or until
sublist contains only one element.
● Step 9 - If that element also doesn't match with the search element, then
display "Element is not found in the list!!!" and terminate the function.

Example
Consider the following list of elements and the element to be searched...
Complexity of Binary search

Complexity Best case Average case Worst case

Time O(1) O(log n) O(log n)

Space O(1)

Interpolation Search
Interpolation search is an improvement over binary search.
Binary search always probes the middle index, whereas interpolation search probes
different positions based on the value of the element being searched.
It works only on sorted lists, and works most efficiently when the elements are uniformly
distributed.
The goal is to reduce the number of probes, and hence the time complexity, compared with
binary search.

Interpolation Search Algorithm


Step 1:-- The array elements must be in sorted order. If they are not, arrange them in
sorted order.
Step 2:-- Set low = 0 and high = n - 1.
Step 3:-- Compute the probe position with the standard interpolation formula
          pos = low + ((key - a[low]) * (high - low)) / (a[high] - a[low])
          where a[index] is the value of the array at that index and key is the element
          you want to search.
Step 4:-- If a[pos] == key, then the element is found directly.
          If key > a[pos], then set low = pos + 1.
          If key < a[pos], then set high = pos - 1, and repeat from Step 3.
Example:
Consider the sorted array a[] = {1, 2, 4, 6, 7, 10, 11, 14, 15} with n = 9 and key = 4.

First probe:
    low = 0, high = n - 1 = 8
    pos = 0 + ((4 - 1) * (8 - 0)) / (15 - 1) = 24 / 14 ≈ 1.71, so pos = 1
    a[pos] == key?  a[1] == 4?  2 == 4  (False)
    key > a[pos]?   4 > 2  (True)
    so low = pos + 1 = 2

Second probe:
    low = 2, high = 8
    pos = 2 + ((4 - 4) * (8 - 2)) / (15 - 4) = 2
    a[pos] == key?  a[2] == 4?  4 == 4  (True)
So the element is found in the list.

Program:
#include <stdio.h>
int interpolation_search(int a[], int n, int key)
{
    int low = 0, high = n - 1, pos;
    while (low <= high && key >= a[low] && key <= a[high])
    {
        if (low == high)
        {
            if (a[low] == key)
                return low;
            break;
        }
        /* probe position based on the value of key */
        pos = low + ((key - a[low]) * (high - low)) / (a[high] - a[low]);
        if (key == a[pos])
            return pos;
        else if (key > a[pos])
            low = pos + 1;
        else
            high = pos - 1;
    }
    return -1;
}
int main()
{
    int a[] = {1, 2, 4, 6, 7, 10, 11, 14, 15};
    int key = 4;
    int n = sizeof(a) / sizeof(a[0]);
    int index = interpolation_search(a, n, key);
    if (index != -1)
        printf("element found at index: %d", index);
    else
        printf("element not found in the array");
    return 0;
}

Case               Runtime
Best               O(1)
Average            O(log(log n))
Worst              O(n)
Auxiliary Space    O(1)

Interpolation search works best when the data are equally distributed and can converge to
the solution in an average runtime of O(log (log n)).

Selection Sort:
Selection Sort algorithm is used to arrange a list of elements in a particular order
(Ascending or Descending). In selection sort, the first element in the list is selected and it is
compared repeatedly with all the remaining elements in the list. If any element is smaller
than the selected element (for Ascending order), then both are swapped so that first
position is filled with the smallest element in the sorted order. Next, we select the element
at a second position in the list and it is compared with all the remaining elements in the list.
If any element is smaller than the selected element, then both are swapped. This procedure
is repeated until the entire list is sorted.

The selection sort algorithm is performed using the following steps...


● Step 1 - Select the first element of the list (i.e., the element at the first position in
the list).
● Step 2 - Compare the selected element with all the other elements in the list.
● Step 3 - In every comparison, if any element is found smaller than the selected
element (for Ascending order), then both are swapped.
● Step 4 - Repeat the same procedure with the element in the next position in the list
till the entire list is sorted.

Selection Sort Logic

//Selection sort logic

for(i=0; i<size; i++){


for(j=i+1; j<size; j++){
if(list[i] > list[j])
{
temp=list[i];
list[i]=list[j];
list[j]=temp;
}
}
}

Example
Complexity of the Selection Sort Algorithm:

To sort an unsorted list with 'n' elements, we need to make


((n-1)+(n-2)+(n-3)+......+1) = (n (n-1))/2 comparisons in every case: even if the list is
already sorted, the same (n (n-1))/2 comparisons are made (only the swaps are avoided).
Worst Case: O(n²)
Best Case: Ω(n²)
Average Case: Θ(n²)

Program:

#include<stdio.h>
int main(){
    int size, i, j, temp, list[100];
    printf("Enter the size of the List: ");
    scanf("%d", &size);
    printf("Enter %d integer values: ", size);
    for(i=0; i<size; i++)
        scanf("%d", &list[i]);
    //Selection sort logic
    for(i=0; i<size; i++){
        for(j=i+1; j<size; j++){
            if(list[i] > list[j])
            {
                temp=list[i];
                list[i]=list[j];
                list[j]=temp;
            }
        }
    }
    printf("List after sorting is: ");
    for(i=0; i<size; i++)
        printf(" %d", list[i]);
    return 0;
}
Output:
Enter the size of the List: 5
Enter 5 integer values: 20 30 10 50 40
List after sorting is: 10 20 30 40 50

Insertion Sort Algorithm

Sorting is the process of arranging a list of elements in a particular order (Ascending or


Descending).
Insertion sort algorithm arranges a list of elements in a particular order. In insertion sort
algorithm, every iteration moves an element from unsorted portion to sorted portion until
all the elements are sorted in the list.
The insertion sort algorithm is performed using the following steps...

● Step 1 - Assume that first element in the list is in sorted portion and all the
remaining elements are in unsorted portion.
● Step 2: Take first element from the unsorted portion and insert that element into
the sorted portion in the order specified.
● Step 3: Repeat the above process until all the elements from the unsorted portion
are moved into the sorted portion.
Insertion Sort Logic

//Insertion sort logic


for i = 1 to size-1 {
    temp = list[i];
    j = i-1;
    while ((j >= 0) && (temp < list[j])) {
        list[j+1] = list[j];
        j = j - 1;
    }
    list[j+1] = temp;
}

Example
Complexity of the Insertion Sort Algorithm:
To sort an unsorted list with 'n' elements, we need to make (1+2+3+......+n-1) =
(n (n-1))/2 comparisons in the worst case. If the list is already sorted then it requires
only (n-1) comparisons.
Worst Case: O(n²)
Best Case: Ω(n)
Average Case: Θ(n²)
Program:

#include<stdio.h>
int main(){
    int size, i, j, temp, list[100];
    printf("Enter the size of the list: ");
    scanf("%d", &size);
    printf("Enter %d integer values: ", size);
    for (i = 0; i < size; i++)
        scanf("%d", &list[i]);
    //Insertion sort logic
    for (i = 1; i < size; i++) {
        temp = list[i];
        j = i - 1;
        while ((j >= 0) && (temp < list[j])) {
            list[j + 1] = list[j];
            j = j - 1;
        }
        list[j + 1] = temp;
    }
    printf("List after Sorting is: ");
    for (i = 0; i < size; i++)
        printf(" %d", list[i]);
    return 0;
}

Output:
Enter the size of the list: 5
Enter 5 integer values: 63 45 76 98 80
List after Sorting is: 45 63 76 80 98

Bubble Sort
Bubble sort is a sorting algorithm that compares two adjacent elements and swaps them
until they are in the intended order.

Just like air bubbles rising to the surface of water, the largest remaining element of the
array moves to the end in each iteration. Therefore, it is called bubble sort.

Working of Bubble Sort

Suppose we are trying to sort the elements in ascending order.


1. First Iteration (Compare and Swap)

1. Starting from the first index, compare the first and the second elements.
2. If the first element is greater than the second element, they are swapped.
3. Now, compare the second and the third elements. Swap them if they are not in order.
4. The above process goes on until the last element.

2. Remaining Iteration

The same process goes on for the remaining iterations.

After each iteration, the largest element among the unsorted elements is placed at the end.
In each iteration, the comparison takes place up to the last unsorted element.

The array is sorted when all the unsorted elements are placed at their correct
positions.
Algorithm:

algorithm Bubble_Sort(list)
    Pre: list != ∅
    Post: list is sorted in ascending order for all values
    for i <- 0 to list.Count - 2
        for j <- 0 to list.Count - 2 - i
            if list[j] > list[j + 1]
                Swap(list[j], list[j + 1])
            end if
        end for
    end for
    return list
end Bubble_Sort

Merge Sort
Merge sort is a sorting algorithm based on the Divide and Conquer strategy. It works by
recursively dividing the array into two halves, sorting them, and then combining them. It
takes O(n log n) time in the worst case.

What is Merge Sort?


The merge sort algorithm is an implementation of the divide and conquer technique. Thus,
it gets completed in three steps:
1. Divide: In this step, the array/list divides itself recursively into sub-arrays until the base
case is reached.
2. Recursively solve: Here, the sub-arrays are sorted using recursion.
3. Combine: This step makes use of the merge( ) function to combine the sub-arrays into
the final sorted array.

Algorithm for Merge Sort


Step 1: Find the middle index of the array.
Middle = first + (last – first)/2
Step 2: Divide the array from the middle.
Step 3: Call merge sort for the first half of the array
MergeSort(array, first, middle)
Step 4: Call merge sort for the second half of the array.
MergeSort(array, middle+1, last)
Step 5: Merge the two sorted halves into a single sorted array.

Pseudo-code for Merge Sort


Consider an array Arr which is being divided into two sub-arrays left and right. Let L1 =
Number of elements in the left sub-array; and L2 = Number of elements in the right
sub-array.
Let variable i point to the first index of the left sub-array and j point to the first index of the
right sub-array. Then, the pseudo-code for the merge( ) function will be:
procedure merge(Arr[], lt, mid, rt):
int L1 = mid - lt + 1
int L2 = rt - mid
int left[L1], right[L2]
for i = 0 to L1 - 1:
left[i] = Arr[lt + i]
END for loop

for j = 0 to L2 - 1:
right[j] = Arr[mid + 1 + j]
END for loop
i = 0, j = 0, k = lt
while(left and right both have elements):
if(left[i] <= right[j])
Arr[k] = left[i]; i = i + 1
else
Arr[k] = right[j]; j = j + 1
k = k + 1
END while loop
copy any remaining elements of left, and then of right, into Arr
END procedure
procedure Merge_sort(Arr[], first, last):
if(first < last)
mid = first + (last - first)/2
Merge_sort(Arr, first, mid)
Merge_sort(Arr, mid + 1, last)
merge(Arr, first, mid, last)
END if
END procedure

Working of Merge Sort

Consider an array having the following elements: 1,6,3,2,7,5,8,4. We need to sort this array
using merge sort.
There are 8 elements in the array, so we first divide it into two sub-arrays of size 4 each, as
shown:

Next, we recursively divide these arrays into further halves. The half of 4 is 2. So, now we
have 4 arrays of size 2 each.

We will keep on dividing into further halves until we have reached an array length of size 1
as shown:

After this, we have the combining step. We will compare each element with its consecutive
elements and arrange them in a sorted manner.

In the next iteration, we will compare two arrays and sort them as shown:

Finally, we will compare the elements of the two arrays each of size 4 and we will get our
resultant sorted array as shown:

Characteristics of Merge Sort

1. External sorting algorithm: Merge sort can be used when the data size is more than the
RAM size. In such a case, whole data cannot come into RAM at once. Thus, data is loaded
into the RAM in small chunks and the rest of the data resides in the secondary memory.
2. Non-inplace sorting algorithm: Merge sort is not an inplace sorting technique, as its
space complexity is O(n). Inplace or space-efficient algorithms are those whose space
complexity is not more than O(log n).

Program:
#include <stdio.h>
#include <stdlib.h>
void Merge_sort(int[], int, int);
void merge(int[], int, int, int);
void Display_array(int[], int);
int main()
{
int array[] = {1,6,3,2,7,5,8,4};
int arr_size = sizeof(array) / sizeof(array[0]);

printf("Given array is: \n");


Display_array(array, arr_size);
Merge_sort(array, 0, arr_size - 1);
printf("\nSorted array is: \n");
Display_array(array, arr_size);
return 0;
}

void merge(int Arr[], int lt, int mid, int rt)


{
int i, j, k;
int L1 = mid - lt + 1;
int L2 = rt - mid;
int left[L1], right[L2]; //Creating temporary arrays, additional memory needed
for (i = 0; i < L1; i++)
left[i] = Arr[lt + i];

for (j = 0; j < L2; j++)


right[j] = Arr[mid + 1 + j];
i = j = 0;
k = lt;
while (i < L1 && j < L2) {
if (left[i] <= right[j]) {
Arr[k] = left[i];
i++;
}
else {
Arr[k] = right[j];
j++;
}
k++;
}

while (i < L1)


Arr[k++] = left[i++];

while (j < L2)


Arr[k++] = right[j++];
}
void Merge_sort(int Arr[], int l, int r)
{
if (l < r) {
int mid = l + (r - l) / 2;
Merge_sort(Arr, l, mid);
Merge_sort(Arr, mid + 1, r);
merge(Arr, l, mid, r);
}
}

void Display_array(int a[], int size)


{
for (int i = 0; i < size; i++)
printf("%d ", a[i]);
printf("\n");
}

Complexity of Merge Sort

Scenario        Time complexity
Worst case      O(n log n)
Average case    O(n log n)
Best case       O(n log n)

The space complexity of merge sort is O(n).
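These O(n log n) bounds follow from the standard divide-and-conquer recurrence for merge sort (a sketch, assuming for simplicity that n is a power of 2 and that dividing plus merging an array of size n costs cn):

```latex
T(n) = 2\,T(n/2) + cn, \qquad T(1) = c
```

Unfolding the recurrence k times gives T(n) = 2^k T(n/2^k) + kcn; taking k = log₂ n yields

```latex
T(n) = cn + cn\log_2 n = O(n \log n)
```

Because the divide and merge work is the same regardless of input order, the best, average, and worst cases all cost O(n log n).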

Applications of Merge Sort


1. To sort linked lists in O(n log n) time: In a linked list, an element can be inserted in the
middle, or anywhere in between, with a time complexity of O(1). If we use merge sort on a
linked list, the merge step can therefore be performed with O(1) extra space. Moreover, a
linked list does not support random access, and merge sort does not require it. Thus, merge
sort is the best algorithm to sort a linked list, with a complexity of O(n log n).
2. Inversion count problem: Merge sort helps to solve this problem by telling the number
of inversion pairs in an unsorted array. The inversion count problem tells how many pairs
need to be swapped in order to get a sorted array. Merge sort works best for solving this
problem.
3. External sorting: Merge sort is an external sorting technique. Thus, if we have data of
1GB but the available RAM size is 500MB, then we will use merge sort.
Drawbacks of Merge Sort
● Merge sort is not a space-efficient algorithm. It makes use of an extra O(n) space.
● In the case of smaller input size, merge sort works slower in comparison to other sorting
techniques.
● If the data is already sorted, merge sort will be a very expensive algorithm in terms of
time and space. This is because it will still traverse the whole array and perform all the
operations.

Introduction to Recursion
Tower Of Hanoi
In the Tower of Hanoi puzzle we have three towers and some disks. We have to move the
disks from the initial (source) tower to the destination tower using an auxiliary tower,
moving one disk at a time and never placing a larger disk on a smaller one.
As an example, suppose we have two disks and we want to move them from the source
tower to the destination tower.
The approach we will follow:
● First, we move the top disk to the auxiliary tower.
● Then we move the next disk (the bottom one in this case) to the
destination tower.
● Finally, we move the disk from the auxiliary tower to the destination tower.
Similarly, if we have n disks, our aim is to move the bottom disk (disk n)
from source to destination, and then put all the other (n-1) disks onto it.
The steps we follow are:
● Step 1 − Move n-1 disks from source to aux
● Step 2 − Move the nth disk from source to dest
● Step 3 − Move n-1 disks from aux to dest
That is, to move n > 1 disks from Tower 1 to Tower 2, using auxiliary Tower 3:
● Step 1 − Move n-1 disks from Tower 1 to Tower 3
● Step 2 − Move the nth disk from Tower 1 to Tower 2
● Step 3 − Move n-1 disks from Tower 3 to Tower 2
Algorithm
START
Procedure TOH(disk, source, dest, aux)
IF disk == 1, THEN
move disk from source to dest
ELSE
TOH(disk - 1, source, aux, dest) // Step 1
moveDisk(source to dest) // Step 2
TOH(disk - 1, aux, dest, source) // Step 3
END IF
END Procedure
STOP
Pictorial Representation of How Tower of Hanoi works
Let’s see the example below, where three disks have to be moved to the destination tower
(the middle one) with the help of the auxiliary tower.

Move disk 1 from tower source to tower destination

Move disk 2 from tower source to tower auxiliary

Move disk 1 from tower destination to tower auxiliary

Move disk 3 from tower source to tower destination


Move disk 1 from tower auxiliary to tower source

Move disk 2 from tower auxiliary to tower destination

Move disk 1 from tower source to tower destination

Quick Sort
● Quick sort is a fast sorting algorithm used to sort a list of elements. The quick sort
algorithm was invented by C. A. R. Hoare.
The quick sort algorithm attempts to separate the list of elements into two parts and
then sort each part recursively; that is, it uses the divide and conquer strategy. In
quick sort, the partition of the list is performed based on an element called the pivot,
which is one of the elements in the list.
The list is divided into two partitions such that "all elements to the left of the pivot
are smaller than the pivot and all elements to the right of the pivot are greater
than or equal to the pivot".

In Quick sort algorithm, partitioning of the list is performed using following steps...

● Step 1 - Consider the first element of the list as pivot (i.e., Element at first position
in the list).
● Step 2 - Define two variables i and j. Set i and j to first and last elements of the list
respectively.
● Step 3 - Increment i until list[i] > pivot, then stop.
● Step 4 - Decrement j until list[j] <= pivot, then stop.
● Step 5 - If i < j then exchange list[i] and list[j].
● Step 6 - Repeat steps 3, 4 & 5 until i >= j.
● Step 7 - Exchange the pivot element with list[j] element.
Following is the sample code for Quick sort...

Quick Sort Logic


//Quick Sort Logic
void quickSort(int list[10],int first,int last){
int pivot,i,j,temp;
if(first < last){
pivot = first;
i = first;
j = last;
while(i < j){
while(list[i] <= list[pivot] && i < last)
i++;
while(list[j] > list[pivot])
j--;
if(i < j){
temp = list[i];
list[i] = list[j];
list[j] = temp;
}
}

temp = list[pivot];
list[pivot] = list[j];
list[j] = temp;
quickSort(list,first,j-1);
quickSort(list,j+1,last);
}
}
Complexity of the Quick Sort Algorithm:
To sort a list with 'n' elements, quick sort needs to make
((n-1)+(n-2)+(n-3)+......+1) = (n(n-1))/2 comparisons in the worst case. With the first
element chosen as pivot, an already sorted list triggers this worst case, because every
partition is maximally unbalanced.
Worst Case : O(n²)
Best Case : O(n log n)
Average Case : O(n log n)
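The worst-case comparison count above is an arithmetic series (a standard identity, shown here for reference):

```latex
(n-1) + (n-2) + \cdots + 1 = \sum_{k=1}^{n-1} k = \frac{n(n-1)}{2} = O(n^2)
```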

Program:
#include<stdio.h>
void quickSort(int [10],int,int);
int main(){
int list[20],size,i;
printf("Enter size of the list: ");
scanf("%d",&size);
printf("Enter %d integer values: ",size);
for(i = 0; i < size; i++)
scanf("%d",&list[i]);
quickSort(list,0,size-1);
printf("List after sorting is: ");
for(i = 0; i < size; i++)
printf(" %d",list[i]);
return 0;
}
void quickSort(int list[10],int first,int last){
int pivot,i,j,temp;

if(first < last){


pivot = first;
i = first;
j = last;
while(i < j){
while(list[i] <= list[pivot] && i < last)
i++;
while(list[j] > list[pivot])
j--;
if(i < j){
temp = list[i];
list[i] = list[j];
list[j] = temp;
}
}
temp = list[pivot];
list[pivot] = list[j];
list[j] = temp;
quickSort(list,first,j-1);
quickSort(list,j+1,last);
}
}
Output:
Enter size of the list: 8
Enter 8 integer values: 5 3 8 1 4 6 2 7
List after sorting is: 1 2 3 4 5 6 7 8
