Syllabus:
Basic Concepts:
Arrays, Structures and Unions, Internal Implementation of Structures, Self-Referential
Structures
Simple Searching and Sorting Techniques:
Introduction to Searching, Sequential Search, Binary Search, Interpolation Search, Selection
Sort, Bubble Sort, Insertion Sort, Introduction to Merge Sort
Introduction to Recursion:
Towers of Hanoi, Quick Sort, Merge Sort
Basic Concepts:
What is an algorithm?
Specifications of Algorithms
Every algorithm must satisfy the following specifications...
1. Input - Every algorithm must take zero or more input values from the external
world.
2. Output - Every algorithm must produce an output as result.
3. Definiteness - Every statement/instruction in an algorithm must be clear and
unambiguous (only one interpretation).
4. Finiteness - For all different cases, the algorithm must produce result within a finite
number of steps.
5. Effectiveness - Every instruction must be basic enough to be carried out and it also
must be feasible.
Example: consider a given list of numbers 'L' (input); the algorithm must find the largest number 'max' (output).
Algorithm
When we want to analyse an algorithm, we consider only the space and time required by
that particular algorithm and we ignore all the remaining elements. Based on this
information, performance analysis of an algorithm can also be defined as follows…
Generally, when a program is under execution it uses the computer memory for THREE
reasons. They are as follows...
1. Instruction Space: It is the amount of memory used to store compiled version of
instructions.
2. Environmental Stack: It is the amount of memory used to store information of
partially executed functions at the time of function call.
3. Data Space: It is the amount of memory used to store all the variables and
constants.
Consider f(n) as the time complexity of an algorithm and g(n) as its most
significant term. If f(n) <= C g(n) for all n >= n0, for some constants C > 0 and
n0 >= 1, then we can represent f(n) as O(g(n)) (Big-O, an upper bound).
Similarly, if f(n) >= C g(n) for all n >= n0, for some constants C > 0 and
n0 >= 1, then we can represent f(n) as Ω(g(n)) (Big-Omega, a lower bound).
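As a worked example (standard, not from the source), take f(n) = 3n + 2 with most significant term g(n) = n:

```
f(n) = 3n + 2
3n + 2 <= 4n   for all n >= 2    (choose C = 4, n0 = 2)   =>  f(n) = O(n)
3n + 2 >= 3n   for all n >= 1    (choose C = 3, n0 = 1)   =>  f(n) = Ω(n)
```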
Based on the organizing method, data structures are divided into two types.
Linear data structures:
1. Arrays
2. List (Linked List)
3. Stack
4. Queue
Non-linear data structures:
1. Tree
2. Graph
3. Dictionaries
4. Heaps etc.
Arrays
Whenever we want to work with a large number of data values, we would otherwise
need an equally large number of variables. As the number of variables increases, the
complexity of the program also increases and programmers get confused with the variable
names. For situations in which we need to work with many similar data values, the
C programming language provides a concept called the "Array".
An array is a variable which can store multiple values of the same data type at a time.
"Collection of similar data items stored in continuous memory locations with single
name".
int a[3];
Here, on a compiler where an int occupies 2 bytes (e.g., Turbo C), the compiler allocates
a total of 6 bytes of contiguous memory with the single name 'a', which can store three
different integer values (each in 2 bytes of memory) at a time. The memory is organized
as follows...
That means all three memory locations are named 'a'. But "how can we refer to
individual elements?" is the big question. The answer is that the compiler not only
allocates memory, but also assigns a numerical value to each individual element of the
array. This numerical value is called the "index". The index values for the above
example are as follows...
The individual elements of an array are identified using the combination of 'name' and
'index' as follows...
arrayName[indexValue]
For the above example, the individual elements can be referred as follows...
If we want to assign a value to any of these memory locations (array elements), we can
do so as follows...
a[1] = 100;
Structure
A structure is a user-defined data type that allows members of different data types to
be combined. It is used to describe a record, and is declared with the
struct keyword in the program.
To define the structure, it uses the struct structure_name statement, as shown in syntax.
Syntax
struct structure_name
{
data-type member 1;
data-type member 2; …
data-type member n;
};
Note: struct structure_name is used to define the name of the structure and data-type
member defines the data-type.
Advantages of Structure
Program
#include<stdio.h>
int main( )
{
struct Teacher
{
int teacher_id ;
float salary ;
int mobile_no ;
} ;
struct Teacher t1,t2,t3;
printf("\nEnter the teacher_id, salary & mobile_no of the teacher\n");
scanf("%d %f %d", &t1.teacher_id, &t1.salary, &t1.mobile_no);
scanf("%d %f %d", &t2.teacher_id, &t2.salary, &t2.mobile_no);
scanf("%d %f %d", &t3.teacher_id, &t3.salary, &t3.mobile_no);
printf("\n Entered Result ");
printf("\n%d %f %d", t1.teacher_id, t1.salary, t1.mobile_no);
printf("\n%d %f %d", t2.teacher_id, t2.salary, t2.mobile_no);
printf("\n%d %f %d", t3.teacher_id, t3.salary, t3.mobile_no);
return 0;
}
Union
A union is a user-defined data type that allows different data types
to be stored in the same memory location. It is declared with the union keyword in the
program. A union may be defined with several members, but at any time only one
member can hold a value. A union provides an effective way to use the same memory
space for multiple purposes.
To define the Union, it uses the union union_name statement, as shown in syntax.
Syntax
union union_name
{
data-type member 1;
data-type member 2; …
data-type member n;
};
Note: union union_name is used to define the name of the Union and data-type
member defines the data-type.
A union is used just like a structure; when writing the program, the union keyword is
simply written instead of struct.
Program
#include<stdio.h>
int main( )
{
union Teacher
{
int teacher_id ;
float salary ;
int mobile_no ;
};
union Teacher t1;
/* All members of a union share the same memory, so only the most
   recently written member holds a valid value at any moment. */
t1.teacher_id = 101;
printf("\nteacher_id = %d", t1.teacher_id);
t1.salary = 45000.50f;
printf("\nsalary = %f", t1.salary);  /* t1.teacher_id is no longer valid here */
return 0;
}
Advantages of Union
Structure: the memory size of the structure is equal to the sum of the memory sizes of
all its data-type members.
Union: the memory size of the union is equal to the size of the largest data-type
member in the union.
Self-referential structures are those structures that have, as a member, one or more
pointers which point to the same type of structure.
In other words, structures pointing to structures of the same type are self-referential
in nature.
struct node {
int data1;
char data2;
struct node* link;
};
int main()
{
struct node ob;
return 0;
}
In the above example ‘link’ is a pointer to a structure of type ‘node’. Hence, the structure
‘node’ is a self-referential structure with ‘link’ as the referencing pointer.
An important point to consider is that the pointer should be initialized properly before
accessing, as by default it contains garbage value.
Types of Self Referential Structures
1. Self-Referential Structure with a Single Link
2. Self-Referential Structure with Multiple Links
1.Self Referential Structure with Single Link: These structures can have only one
self-pointer as their member. The following example will show us how to connect the
objects of a self-referential structure with the single link and access the corresponding data
members. The connection formed is shown in the following figure.
Program:
#include <stdio.h>
struct node {
int data1;
char data2;
struct node* link;
};
int main()
{
struct node ob1; // Node1
// Initialization
ob1.link = NULL;
ob1.data1 = 10;
ob1.data2 = 20;
struct node ob2; // Node2
// Initialization
ob2.link = NULL;
ob2.data1 = 30;
ob2.data2 = 40;
// Linking ob1 and ob2
ob1.link = &ob2;
// Accessing data members of ob2 using ob1
printf("%d", ob1.link->data1);
printf("\n%d", ob1.link->data2);
return 0;
}
Output
30
40
2. Self Referential Structure with Multiple Links: Self-referential structures with
multiple links can have more than one self-pointer. Many complicated data structures can
easily be constructed using these structures, since such structures can connect to more
than one node at a time. The following example shows one such structure with more than
one link.
The connections made in the above example can be understood using the following figure.
Program:
#include <stdio.h>
struct node {
int data;
struct node* prev_link;
struct node* next_link;
};
int main()
{
struct node ob1; // Node1
// Initialization
ob1.prev_link = NULL;
ob1.next_link = NULL;
ob1.data = 10;
struct node ob2; // Node2
// Initialization
ob2.prev_link = NULL;
ob2.next_link = NULL;
ob2.data = 20;
struct node ob3; // Node3
// Initialization
ob3.prev_link = NULL;
ob3.next_link = NULL;
ob3.data = 30;
// Forward links
ob1.next_link = &ob2;
ob2.next_link = &ob3;
// Backward links
ob2.prev_link = &ob1;
ob3.prev_link = &ob2;
// Accessing data of ob1, ob2 and ob3 by ob1
printf("%d\t", ob1.data);
printf("%d\t", ob1.next_link->data);
printf("%d\n", ob1.next_link->next_link->data);
// Accessing data of ob1, ob2 and ob3 by ob2
printf("%d\t", ob2.prev_link->data);
printf("%d\t", ob2.data);
printf("%d\n", ob2.next_link->data);
// Accessing data of ob1, ob2 and ob3 by ob3
printf("%d\t", ob3.prev_link->prev_link->data);
printf("%d\t", ob3.prev_link->data);
printf("%d", ob3.data);
return 0;
}
Output
10 20 30
10 20 30
10 20 30
In the above example we can see that ‘ob1’, ‘ob2’ and ‘ob3’ are three objects of the self
referential structure ‘node’. And they are connected using their links in such a way that any
of them can easily access each other’s data. This is the beauty of the self referential
structures. The connections can be manipulated according to the requirements of the
programmer.
Applications: Self-referential structures are very useful in creation of other complex data
structures like:
Linked Lists
Stacks
Queues
Trees
Graphs etc.
Simple Searching and Sorting Techniques:
Introduction to Searching:
Searching: In data structures, searching is the process of finding an element in a list
that satisfies one or more conditions.
Types of searching
1. Linear searching
2. Binary searching
Example
Consider the following list of elements and the element to be searched…
Complexity of Linear searching
Time: O(n) in the worst case, O(1) in the best case
Space: O(1)
Binary Search Algorithm
The binary search algorithm finds a given element in a list with O(log n) time
complexity, where n is the total number of elements in the list. Binary search can be
used only with a sorted list of elements, i.e., a list already arranged in order; it
cannot be used on a list of elements arranged in random order. The search starts by
comparing the search element with the middle element of the list. If both match, the
result is "element found". Otherwise, we check whether the search element is smaller or
larger than the middle element. If it is smaller, we repeat the same process on the
left sublist of the middle element; if it is larger, we repeat it on the right sublist.
We repeat this process until we find the search element or until we are left with a
sublist of only one element. If that element also doesn't match the search element,
then the result is "Element not found in the list".
Example
Consider the following list of elements and the element to be searched...
Complexity of Binary search
Time: O(log n) in the worst case, O(1) in the best case
Space: O(1)
Interpolation Search
Interpolation search is an improvement over binary search. Binary search always probes
the middle index, whereas interpolation search probes a position estimated from the
value of the key being searched. It works efficiently only if the elements are sorted,
and its aim is to reduce the time complexity below that of binary search.
The probe position is computed as:
pos = low + ((key - a[low]) * (high - low)) / (a[high] - a[low])
For the array a = {1, 2, 4, 6, 7, 10, 11, 14, 15} with key = 4:
First iteration:
low = 0
high = n - 1 = 9 - 1 = 8
pos = 0 + ((4 - 1) * (8 - 0)) / (15 - 1) = 24 / 14 ≈ 1.71, so pos = 1
🡺 a[pos] == key?
a[1] == 4
2 == 4 (False)
🡺 key > a[pos]?
4 > a[1]
4 > 2 (True)
🡺 low = pos + 1 = 1 + 1 = 2
Second iteration:
low = 2
high = 8
pos = 2 + ((4 - 4) * (8 - 2)) / (15 - 4) = 2
🡺 a[pos] == key?
a[2] == 4
4 == 4 (True)
So the element is found in the list at index 2.
Program:
#include <stdio.h>
int interpolationSearch(int a[], int n, int key)
{
int low = 0, high = n - 1, pos;
while (low <= high && key >= a[low] && key <= a[high])
{
if (a[high] == a[low])   /* avoid division by zero */
{
if (a[low] == key)
return low;
break;
}
pos = low + ((key - a[low]) * (high - low)) / (a[high] - a[low]);
if (key == a[pos])
{
return pos;
}
else if (key < a[pos])
{
high = pos - 1;
}
else
{
low = pos + 1;
}
}
return -1;
}
int main()
{
int a[] = {1, 2, 4, 6, 7, 10, 11, 14, 15};
int key = 4;
int n = sizeof(a) / sizeof(a[0]);
int index = interpolationSearch(a, n, key);
if (index != -1)
{
printf("Element found at index: %d", index);
}
else
{
printf("Element not found in the array");
}
return 0;
}
Case              Runtime
Best              O(1)
Average           O(log(log n))
Worst             O(n)
Auxiliary Space   O(1)
Interpolation search works best when the data are uniformly distributed, and it can
converge to the solution in an average runtime of O(log(log n)).
Selection Sort:
The Selection Sort algorithm is used to arrange a list of elements in a particular
order (ascending or descending). In selection sort, the first element in the list is
selected and compared repeatedly with all the remaining elements in the list. If any
remaining element is smaller than the selected element (for ascending order), the two
are swapped, so that the first position is filled with the smallest element. Next, we
select the element at the second position in the list and compare it with all the
remaining elements. If any element is smaller than the selected element, the two are
swapped. This procedure is repeated until the entire list is sorted.
Example
Complexity of the Selection Sort Algorithm:
Worst Case : O(n²)
Best Case : O(n²)
Average Case : O(n²)
Program:
#include<stdio.h>
int main(){
int size,i,j,temp,list[100];
printf("Enter the size of the List: ");
scanf("%d",&size);
printf("Enter %d integer values: ",size);
for(i=0; i<size; i++)
scanf("%d",&list[i]);
//Selection sort logic
for(i=0; i<size; i++){
for(j=i+1; j<size; j++){
if(list[i] > list[j])
{
temp=list[i];
list[i]=list[j];
list[j]=temp;
}
}
}
printf("List after sorting is: ");
for(i=0; i<size; i++)
printf(" %d",list[i]);
return 0;
}
Output:
Enter the size of the List: 5
Enter 5 integer values: 20 30 10 50 40
List after sorting is: 10 20 30 40 50
● Step 1 - Assume that the first element in the list is in the sorted portion and all
the remaining elements are in the unsorted portion.
● Step 2 - Take the first element from the unsorted portion and insert it into the
sorted portion in the order specified.
● Step 3 - Repeat the above process until all the elements from the unsorted portion
are moved into the sorted portion.
Insertion Sort Logic
Example
Complexity of the Insertion Sort Algorithm:
To sort an unsorted list with 'n' elements, we need to make (1+2+3+......+(n-1)) =
(n(n-1))/2 comparisons in the worst case. If the list is already sorted, then it
requires only 'n-1' comparisons.
Worst Case : O(n²)
Best Case : Ω(n)
Average Case : Θ(n²)
Program:
#include<stdio.h>
int main(){
int size, i, j, temp, list[100];
printf("Enter the size of the list: ");
scanf("%d", &size);
printf("Enter %d integer values: ", size);
for (i = 0; i < size; i++)
scanf("%d", &list[i]);
//Insertion sort logic
for (i = 1; i < size; i++) {
temp = list[i];
j = i - 1;
while ((j >= 0) && (temp < list[j])) { /* check j >= 0 first to avoid reading list[-1] */
list[j + 1] = list[j];
j = j - 1;
}
list[j + 1] = temp;
}
printf("List after Sorting is: ");
for (i = 0; i < size; i++)
printf(" %d", list[i]);
return 0;
}
Output:
Enter the size of the list: 5
Enter 5 integer values: 63 45 76 98 80
List after Sorting is: 45 63 76 80 98
Bubble Sort
Bubble sort is a sorting algorithm that compares two adjacent elements and swaps them
until they are in the intended order.
Just like air bubbles in water rising to the surface, the largest remaining element of
the array moves to the end in each iteration. Therefore, it is called bubble sort.
1. Starting from the first index, compare the first and the second elements.
2. If the first element is greater than the second element, they are swapped.
3. Now, compare the second and the third elements. Swap them if they are not in order.
4. The above process goes on until the last element.
2. Remaining Iteration
After each iteration, the largest element among the unsorted elements is placed at the end.
In each iteration, the comparison takes place up to the last unsorted element.
The array is sorted when all the unsorted elements are placed at their correct
positions.
Algorithm:
algorithm Bubble_Sort(list)
Pre: list is a non-empty list of comparable values
Post: list is sorted in ascending order for all values
for i <- 0 to list.Count - 2
for j <- 0 to list.Count - i - 2
if list[j] > list[j + 1]
Swap(list[j], list[j + 1])
end if
end for
end for
return list
end Bubble_Sort
Merge Sort
Merge sort is a sorting algorithm based on the divide and conquer strategy. It works by
recursively dividing the array into two equal halves, sorting each half, and then
merging them. It takes O(n log n) time in the worst case.
procedure merge(Arr[], mid, L1, L2):
for i = 0 to L1:
left[i] = Arr[i]
END for loop
for j = 0 to L2:
right[j] = Arr[mid+1+j]
END for loop
i = 0, j = 0
while(left and right have elements):
if(left[i] < right[j])
Add left[i] to the end of Arr and increment i
else
Add right[j] to the end of Arr and increment j
END while loop
Add any remaining elements of left or right to the end of Arr
END procedure
procedure Merge_sort(Arr[]):
if Arr has only one element, return Arr
divide Arr into two halves L1 and L2 around mid
l1 = Merge_sort(L1)
l2 = Merge_sort(L2)
return merge(l1, l2)
END procedure
Consider an array having the following elements: 1,6,3,2,7,5,8,4. We need to sort this array
using merge sort.
There is a total of 8 elements in the array. Thus, mid = 4. So we will first divide this array
into two arrays of size 4 as shown:
Next, we recursively divide these arrays into further halves. The half of 4 is 2. So, now we
have 4 arrays of size 2 each.
We will keep on dividing into further halves until we have reached an array length of size 1
as shown:
After this, we have the combining step. We will compare each element with its consecutive
elements and arrange them in a sorted manner.
In the next iteration, we will compare two arrays and sort them as shown:
Finally, we will compare the elements of the two arrays each of size 4 and we will get our
resultant sorted array as shown:
1. External sorting algorithm: Merge sort can be used when the data size is larger than
the RAM size. In such a case, the whole data set cannot fit into RAM at once, so data
is loaded into RAM in small chunks and the rest resides in secondary memory.
2. Non-inplace sorting algorithm: Merge sort is not an inplace sorting technique as its
space complexity is O(n). Inplace or space-efficient algorithms are those whose space
complexity is not more than O(logn).
Program:
#include <stdio.h>
#include <stdlib.h>
void Merge_sort(int[], int, int);
void merge(int[], int, int, int);
void Display_array(int[], int);
int main()
{
int array[] = {1,6,3,2,7,5,8,4};
int arr_size = sizeof(array) / sizeof(array[0]);
Introduction to Recursion
Tower Of Hanoi
In the Tower of Hanoi puzzle we have three towers and some disks. We have to move the
disks from the initial (source) tower to the destination tower using the auxiliary (aux) tower.
For example, suppose we have two disks and we want to move them from the source to the
destination tower.
So the approach we will follow
● First, we will move the top disk to aux tower.
● Then we will move the next disk (which is the bottom one in this case) to the
destination tower.
● And at last, we will move the disk from aux tower to the destination tower.
Similarly, if we have n disks, then our aim is to move the bottom one, disk n, from the
source to the destination, and then put all the other (n-1) disks onto it.
Steps we will follow is
● Step 1 − Move n-1 disks from source to aux
● Step 2 − Move nth disk from source to dest
● Step 3 − Move n-1 disks from aux to dest
That is, to move n > 1 disks from tower 1 to tower 2, using auxiliary tower 3:
● Step 1- Move n – 1 disks from Tower 1 to tower 3
● Step 2 – Move the n-th disk from Tower 1 to Tower 2
● Step 3 – Move n – 1 disk from Tower 3 to Tower 2
Algorithm
START
Procedure TOH(disk, source, dest, aux)
IF disk == 1, THEN
move disk from source to dest
ELSE
TOH(disk - 1, source, aux, dest) // Step 1
moveDisk(source to dest) // Step 2
TOH(disk - 1, aux, dest, source) // Step 3
END IF
END Procedure
STOP
Pictorial Representation of How Tower of Hanoi works
Let’s see the below example where we have three disks that have to move in the destination
tower which is the middle one with the help of the auxiliary tower.
Quick Sort
● Quick sort is a fast sorting algorithm used to sort a list of elements. The quick
sort algorithm was invented by C. A. R. Hoare.
The quick sort algorithm separates the list of elements into two parts and then
sorts each part recursively; that is, it uses the divide and conquer strategy. In
quick sort, the partition of the list is performed around an element called the pivot,
which is one of the elements in the list.
The list is divided into two partitions such that "all elements to the left of pivot are
smaller than the pivot and all elements to the right of pivot are greater than or
equal to the pivot".
In Quick sort algorithm, partitioning of the list is performed using following steps...
● Step 1 - Consider the first element of the list as pivot (i.e., Element at first position
in the list).
● Step 2 - Define two variables i and j. Set i and j to first and last elements of the list
respectively.
● Step 3 - Increment i until list[i] > pivot then stop.
● Step 4 - Decrement j until list[j] < pivot then stop.
● Step 5 - If i < j then exchange list[i] and list[j].
● Step 6 - Repeat steps 3,4 & 5 until i > j.
● Step 7 - Exchange the pivot element with list[j] element.
Following is the sample code for Quick sort...
void quickSort(int list[], int first, int last){
int pivot, i, j, temp;
if(first < last){
pivot = first; i = first; j = last;
while(i < j){
while(list[i] <= list[pivot] && i < last) i++;
while(list[j] > list[pivot]) j--;
if(i < j){ temp = list[i]; list[i] = list[j]; list[j] = temp; }
}
temp = list[pivot];
list[pivot] = list[j];
list[j] = temp;
quickSort(list,first,j-1);
quickSort(list,j+1,last);
}
}
Complexity of the Quick Sort Algorithm:
To sort a list with 'n' elements, quick sort makes
((n-1)+(n-2)+(n-3)+......+1) = (n(n-1))/2 comparisons in the worst
case. Note that with the first element chosen as pivot, an already sorted list
produces exactly this worst case.
Worst Case : O(n²)
Best Case : O(n log n)
Average Case : O(n log n)
Program:
#include<stdio.h>
void quickSort(int [],int,int);
int main(){
int list[20],size,i;
printf("Enter size of the list: ");
scanf("%d",&size);
printf("Enter %d integer values: ",size);
for(i = 0; i < size; i++)
scanf("%d",&list[i]);
quickSort(list,0,size-1);
printf("List after sorting is: ");
for(i = 0; i < size; i++)
printf(" %d",list[i]);
return 0;
}
void quickSort(int list[],int first,int last){
int pivot,i,j,temp;
if(first < last){
pivot = first; i = first; j = last;
while(i < j){
while(list[i] <= list[pivot] && i < last) i++;
while(list[j] > list[pivot]) j--;
if(i < j){ temp = list[i]; list[i] = list[j]; list[j] = temp; }
}
temp = list[pivot];
list[pivot] = list[j];
list[j] = temp;
quickSort(list,first,j-1);
quickSort(list,j+1,last);
}
}