1. Given a list with numbers (6, 5, 12, 10, 9, 1), apply Merge sort and sort the given list. Also
implement the same using any programming language in recursive and iterative modes.
Derive the complexity for the best and worst case for Merge sort. Do specify various
applications of Merge sort.
Recursive approach:
#include <iostream>
using namespace std;
int main() {
    int arr[] = {6, 5, 12, 10, 9, 1};
    int size = sizeof(arr) / sizeof(arr[0]);
    mergeSortRecursive(arr, 0, size - 1); // sort routine omitted in this fragment
    return 0;
}
Iterative approach:
#include <iostream>
using namespace std;
int main() {
    int arr[] = {6, 5, 12, 10, 9, 1};
    int size = sizeof(arr) / sizeof(arr[0]);
    mergeSortIterative(arr, size); // iterative sort routine omitted in this fragment
    return 0;
}
The best and worst case time complexity of Merge sort is as follows:
• Best case time complexity: O(n log n)
• Worst case time complexity: O(n log n)
Applications of Merge sort:
• Sorting: Merge sort is primarily used for sorting large datasets efficiently. It guarantees a time complexity
of O(n log n) in both the best and worst cases, making it suitable for sorting applications where
performance is crucial.
• External Sorting: Merge sort is well-suited for external sorting scenarios where the data to be sorted
exceeds the available memory. It can handle large datasets by reading and writing data from and to
external storage devices, minimizing the need for excessive memory usage.
• Inversion Count: Merge sort can be utilized to count the number of inversions in an array, where an
inversion is a pair of elements that appear out of order relative to their sorted positions. The count can
be accumulated during the merge step at no extra asymptotic cost, which makes merge sort useful for
inversion-based analysis.
• External Merging in External Memory Algorithms: Merge sort is extensively used in external memory
algorithms, particularly for external merging. When dealing with large datasets that cannot fit entirely in
memory, external merging involves merging sorted subsequences from disk or other external storage
devices. Merge sort's efficient merging operation is crucial for external memory algorithms.
• Inversion-based Problems: Merge sort's property of counting inversions can be used to solve various
inversion-based problems, such as finding the number of swaps required to sort an array or identifying
the closest pair of points in a plane.
2. Given a list with numbers (52, 37, 63, 14, 17, 8, 6, 25), apply Quick sort and sort the given list. Also
implement the same using any programming language like C, C++, or Java in iterative and
recursive modes. Derive the complexity for the best and worst case for Quick sort. Do specify various
applications of Quicksort.
Recursive Approach
#include <iostream>
using namespace std;
int main() {
    int arr[] = {52, 37, 63, 14, 17, 8, 6, 25};
    int size = sizeof(arr) / sizeof(arr[0]);
    quickSortRecursive(arr, 0, size - 1); // recursive sort routine omitted in this fragment
    return 0;
}
Iterative Approach
#include <iostream>
#include <stack>
using namespace std;
// Core of the iterative version: an explicit stack replaces the recursion
void quickSortIterative(int arr[], int low, int high) {
    stack<int> st;
    st.push(low);
    st.push(high);
    while (!st.empty()) {
        high = st.top(); st.pop();
        low = st.top(); st.pop();
        // partition arr[low..high] and push the two subranges (omitted in this fragment)
    }
}
int main() {
    int arr[] = {52, 37, 63, 14, 17, 8, 6, 25};
    int size = sizeof(arr) / sizeof(arr[0]);
    quickSortIterative(arr, 0, size - 1);
    return 0;
}
The best and worst case time complexity of QuickSort is as follows:
• Best case time complexity: O(n log n)
• Worst case time complexity: O(n^2)
Applications of Quicksort:
• Sorting: QuickSort is widely used for sorting large datasets efficiently. Its average case
performance of O(n log n) and in-place partitioning make it a popular choice for sorting
applications.
• Language Compilers: QuickSort is often used in language compilers to optimize code generation. It
can be utilized for sorting function calls, register allocation, and other optimization techniques.
• Numerical Analysis: QuickSort finds applications in numerical analysis, such as solving linear
systems of equations, calculating eigenvalues, and matrix factorizations.
• Data Deduplication: QuickSort is used in data deduplication algorithms to efficiently identify and
eliminate duplicate entries from large datasets.
• Searching: QuickSort's partitioning property can be utilized for efficient searching algorithms, such
as quick select, where the goal is to find the kth smallest or largest element in an array.
3. Given a list with numbers (10, 7, 2, 5, 11), apply Cycle sort and sort the given list. Also implement the
same using any programming language like C, C++, or Java in recursive and iterative modes. Derive the
complexity for the best and worst case for Cycle sort. Do specify various applications of Cycle sort.
Recursive Approach:
#include <iostream>
using namespace std;
// Fragment of cycleSortRecursive(arr, n, currentIndex); the code that
// computes `position` (the correct slot for `item`) is omitted:
    if (position == currentIndex)
        return;                           // item already in place
    swap(item, arr[position]);            // place item; pick up the displaced value
    cycleSortRecursive(arr, n, position); // continue the cycle
}
int main() {
    int arr[] = {10, 7, 2, 5, 11};
    int size = sizeof(arr) / sizeof(arr[0]);
    cycleSortRecursive(arr, size, 0);
    return 0;
}
Iterative Approach:
#include <iostream>
using namespace std;
// Fragment of cycleSortIterative(arr, n); the loops that pick `item` at
// `currentIndex` and compute its correct `position` are omitted:
        if (position == currentIndex)
            continue;                     // item already in place: next index
        if (item != arr[position])
            swap(item, arr[position]);    // place item; pick up the displaced value
    }
}
int main() {
    int arr[] = {10, 7, 2, 5, 11};
    int size = sizeof(arr) / sizeof(arr[0]);
    cycleSortIterative(arr, size);
    return 0;
}
The best and worst case time complexity of Cycle Sort is as follows:
• Best case time complexity: O(n^2)
• Worst case time complexity: O(n^2)
Applications of Cycle Sort:
• Memory Systems: Cycle Sort is useful in memory systems where the cost of write or swap operations is
high compared to the number of comparisons. It minimizes the number of writes or swaps required to
sort an array.
• Embedded Systems: Cycle Sort finds applications in embedded systems, where memory and processing
power are limited. Its in-place nature and minimal memory usage make it suitable for such systems.
• Data Deduplication: Cycle Sort can help detect duplicate entries in a dataset, since the algorithm
notices when an item is equal to the value already at its computed target position.
• Small Data Sets: Cycle Sort is efficient for small data sets, as it reduces the number of writes or swaps
compared to other sorting algorithms. It can be a good choice when the number of writes or swaps is
costly.
• Stable Sorting: Cycle Sort itself is not stable, since rotating cycles can reorder equal elements; when
stability matters, it is usually paired with, or replaced by, a stable algorithm such as Merge sort.
4. Given a list with numbers (0.5, 2, 9, 5, 2, 3, 5), apply Counting sort and sort the given list. Derive the
complexity for the best and worst case for Counting sort. Do specify various applications of Counting sort.
Implementation:
#include <iostream>
#include <vector>
#include <algorithm> // for max_element and min_element
using namespace std;
// Fragment of the output phase: count[i] holds how many times the key
// (i + minElement) occurs; the counting phase itself is omitted:
int outputIndex = 0;
for (int i = 0; i < range; i++) {
    while (count[i] > 0) {               // emit each key count[i] times
        output[outputIndex] = i + minElement;
        count[i]--;
        outputIndex++;
    }
}
int main() {
    vector<double> arr = {0.5, 2, 9, 5, 2, 3, 5};
    int size = arr.size();
    return 0;
}
The best and worst case time complexity of Counting Sort is as follows:
• Best case time complexity: O(n + k), where k is the range of the input values
• Worst case time complexity: O(n + k)
Applications of Counting Sort:
• Frequency analysis: Counting Sort can be used to analyse the frequency of occurrence of elements in a
given dataset. It can provide insights into the distribution of elements and identify patterns or
anomalies.
• Stable sorting: Counting Sort is a stable sorting algorithm, meaning that it maintains the relative order
of elements with equal values. This property is useful in situations where preserving the order of equal
elements is important.
• Auxiliary algorithm: Counting Sort is often used as an auxiliary algorithm in combination with other
sorting algorithms to optimize performance. For example, it can be used as a subroutine within Radix
Sort or Bucket Sort.
• Histogram generation: Counting Sort can be used to generate a histogram from a set of data. It counts
the occurrences of different values and produces a histogram representing the frequency distribution.