
Write an algorithm for Binary search method.

Here is a step-by-step algorithm for implementing binary search in a program:

1. Start by defining the search range by two indices: the left index,
pointing to the first element of the list, and the right index,
pointing to the last element of the list.
2. Find the middle index of the search range by calculating the
average of the left and right indices (rounded down).
3. Compare the value at the middle index with the target value.
4. If the target value is equal to the value at the middle index,
return the middle index as the result of the search.
5. If the target value is less than the value at the middle index,
set the right index to be one position to the left of the middle
index, effectively reducing the search range to the left half of
the list.
6. If the target value is greater than the value at the middle
index, set the left index to be one position to the right of the
middle index, effectively reducing the search range to the right
half of the list.
7. Repeat steps 2 to 6 until the target value is found or the
search range becomes empty (left index becomes greater than
right index).
8. If the target value was not found after the search range
became empty, return -1 to indicate that the target value was
not found in the list.

Note: It is important to ensure that the list being searched is sorted in ascending order before applying the binary search algorithm.
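The steps above can be sketched in C as follows; the function name and signature are illustrative, not part of the original algorithm statement.

```c
/* Minimal sketch of the algorithm above: returns the index of target
   in the ascending-sorted array arr[0..n-1], or -1 if absent. */
int binary_search(const int arr[], int n, int target) {
    int left = 0;               /* step 1: range covers the whole list */
    int right = n - 1;
    while (left <= right) {     /* step 7: repeat until the range is empty */
        int mid = left + (right - left) / 2;  /* step 2 (overflow-safe average) */
        if (arr[mid] == target)
            return mid;         /* step 4: target found at the middle index */
        else if (target < arr[mid])
            right = mid - 1;    /* step 5: keep the left half */
        else
            left = mid + 1;     /* step 6: keep the right half */
    }
    return -1;                  /* step 8: target not in the list */
}
```

For example, with the sorted array {11, 22, 33, 40, 55}, binary_search(arr, 5, 33) returns index 2.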

Write an algorithm for Quick sort


Here is a step-by-step algorithm for implementing the Quick Sort
algorithm in a program:
1. Choose a pivot element from the list. This element can be any
element from the list, but it is usually the first, the last, or the
middle element.
2. Partition the list around the pivot element. Rearrange the
elements of the list so that all elements less than the pivot are
to the left of the pivot, and all elements greater than the pivot
are to the right of the pivot.
3. Recursively sort the sub-list to the left of the pivot using the
Quick Sort algorithm.
4. Recursively sort the sub-list to the right of the pivot using the
Quick Sort algorithm.
5. The list is now sorted in ascending order.
6. Return the sorted list.

Note: In practice, the Quick Sort algorithm can be implemented to sort the elements in-place, without the need for additional memory.
The in-place implementation is more efficient, as it reduces the
amount of memory required to sort the list.
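As a sketch, the steps above might look like this in C, using the Lomuto partition scheme with the last element as the pivot (one of the common choices mentioned in step 1); the function names are illustrative.

```c
/* Swap two integers in place. */
static void swap_ints(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Step 2: partition arr[low..high] around the pivot and
   return the pivot's final index. */
static int partition(int arr[], int low, int high) {
    int pivot = arr[high];      /* step 1: choose the last element as pivot */
    int i = low - 1;
    for (int j = low; j < high; j++)
        if (arr[j] < pivot)                  /* smaller elements go left */
            swap_ints(&arr[++i], &arr[j]);
    swap_ints(&arr[i + 1], &arr[high]);      /* place pivot between halves */
    return i + 1;
}

/* Steps 3-4: recursively sort the two sub-lists in place. */
void quick_sort(int arr[], int low, int high) {
    if (low < high) {
        int p = partition(arr, low, high);
        quick_sort(arr, low, p - 1);         /* left sub-list */
        quick_sort(arr, p + 1, high);        /* right sub-list */
    }
}
```

Calling quick_sort(arr, 0, n - 1) sorts the whole array in place, matching the in-place note above.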

Explain Binary search method


Binary search is an efficient search algorithm that works by
repeatedly dividing the search interval in half. The idea behind
binary search is to eliminate half of the remaining elements after
just one comparison.

Binary search is used to search for a specific value (also known as the target value) in a sorted list of elements. The algorithm
compares the target value to the middle element of the list. If the
target value is equal to the middle element, the search is over and
the middle element's index is returned as the result. If the target
value is less than the middle element, the search continues in the
left half of the list. If the target value is greater than the middle
element, the search continues in the right half of the list. The
process is repeated until the target value is found or until the search
interval becomes empty (indicating that the target value is not
present in the list).

Binary search has a time complexity of O(log n), where n is the number of elements in the list. This makes it much faster than linear
search, which has a time complexity of O(n), and is a good choice
for searching large lists of elements. However, binary search can
only be used with sorted lists. If the list is not sorted, the results of
the search may be incorrect.

Explain Sequential search method.

Sequential search, also known as linear search, is a basic search
algorithm that involves looking at each element of a list one-by-
one, in sequence, until the target value is found or it is determined
that the target value is not present in the list.

The sequential search algorithm starts at the first element of the list
and compares it to the target value. If the first element is equal to
the target value, the search is over and the first element's index is
returned as the result. If the first element is not equal to the target
value, the algorithm moves on to the next element and repeats the
comparison. This process is repeated for each element in the list,
until either the target value is found or it is determined that the
target value is not present in the list.

Sequential search has a time complexity of O(n), where n is the number of elements in the list. This means that the time it takes to
perform a sequential search increases linearly with the number of
elements in the list. This makes sequential search less efficient than
binary search, which has a time complexity of O(log n). However,
sequential search can be used with lists that are not sorted, while
binary search requires the list to be sorted.
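A minimal C sketch of sequential search (the function name is illustrative):

```c
/* Sequential (linear) search: returns the index of the first element
   equal to target, or -1 if target is not in arr[0..n-1]. */
int linear_search(const int arr[], int n, int target) {
    for (int i = 0; i < n; i++)     /* examine elements one by one */
        if (arr[i] == target)
            return i;               /* found: report its index */
    return -1;                      /* reached the end without a match */
}
```

Unlike binary search, this works correctly whether or not the array is sorted.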
“If no interchanges occurred, then the table must be sorted and no
further passes are required.” Which sorting method works on this
principle?
The sorting method that works on this principle is the Bubble Sort
algorithm.

In Bubble Sort, elements are repeatedly compared and swapped with their neighbors until the list is sorted. The algorithm works by
repeatedly passing through the list and comparing adjacent
elements. If two adjacent elements are in the wrong order, they are
swapped. This process continues until no more interchanges are
necessary, which means the list is sorted.

The statement "If no interchanges occurred, then the table must be sorted and no further passes are required" reflects the idea behind
Bubble Sort that the algorithm can stop as soon as it makes a pass
through the list without making any swaps, indicating that the list is
already sorted. This optimization helps to reduce the number of
passes required to sort the list, making the Bubble Sort algorithm
more efficient.
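This early-exit idea can be sketched in C with a "swapped" flag; a minimal illustration, not a definitive implementation:

```c
/* Bubble Sort with the early-exit optimization described above:
   stop as soon as a full pass makes no interchanges. */
void bubble_sort(int arr[], int n) {
    for (int pass = 0; pass < n - 1; pass++) {
        int swapped = 0;                      /* interchange flag */
        for (int i = 0; i < n - 1 - pass; i++) {
            if (arr[i] > arr[i + 1]) {        /* adjacent pair out of order */
                int t = arr[i];
                arr[i] = arr[i + 1];
                arr[i + 1] = t;
                swapped = 1;
            }
        }
        if (!swapped)   /* no interchanges occurred: the table is sorted */
            break;      /* no further passes are required */
    }
}
```

On an already-sorted array the first pass makes no swaps, so the function returns after a single pass.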

a. What is the complexity of the binary search algorithm? b. Name two divide and conquer algorithms for sorting.
a. The time complexity of the binary search algorithm is O(log n),
where n is the number of elements in the list. This means that the
time it takes to perform a binary search increases logarithmically
with the number of elements in the list. Binary search is considered
to be an efficient search algorithm, especially when searching large
lists of elements.

b. Two divide and conquer algorithms for sorting are:

1. Merge Sort: Merge Sort is a sorting algorithm that works by dividing the list of elements into two halves, sorting each half
separately, and then merging the two sorted halves back
together. Merge Sort has a time complexity of O(n log n),
making it a good choice for sorting large lists of elements.
2. Quick Sort: Quick Sort is a sorting algorithm that works by
choosing a pivot element from the list, partitioning the list
around the pivot, and then recursively sorting the two sub-lists
created by the partition. Quick Sort has an average time
complexity of O(n log n), but in the worst case it can have a
time complexity of O(n^2). However, in practice, Quick Sort is
usually faster than Merge Sort and is a popular choice for
sorting large lists of elements.

Describe Linear Probing with an example


Linear Probing is a collision resolution technique used in hash
tables. A hash table is a data structure that stores key-value pairs,
where the keys are hashed to produce an index into an array. The
idea behind hash tables is to store the key-value pairs at the index
corresponding to the hash value of the key. However, when two
keys have the same hash value, a collision occurs and the hash table
must resolve the collision by finding an empty spot in the array to
store the second key-value pair.

Linear Probing is a simple collision resolution technique that involves looking for the next available spot in the array, starting
from the original hash value. If the original hash value is occupied,
the algorithm checks the next spot in the array. If that spot is
occupied, the algorithm checks the next spot, and so on, until an
empty spot is found. The key-value pair is then stored at the empty
spot.

For example, consider a hash table with an array of size 10 to store key-value pairs. The hash function maps the keys to array indices
between 0 and 9. Let's say the hash values of the keys "John" and
"Jane" are both 5. When the hash table tries to store the key-value
pair ("John", "Smith") at index 5, it finds that the spot is already
occupied by the key-value pair ("Jane", "Doe"). To resolve the
collision, the hash table uses linear probing to look for the next
available spot in the array. It checks index 6 and finds that it is
empty, so it stores the key-value pair ("John", "Smith") at index 6.
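The probing logic can be sketched in C. This is a simplified illustration: the hash function (sum of character codes modulo the table size) is a toy choice, and deletion and resizing are omitted.

```c
#include <stddef.h>
#include <string.h>

#define TABLE_SIZE 10

struct slot { const char *key; const char *value; };
static struct slot table[TABLE_SIZE];   /* empty slots have key == NULL */

/* Toy hash: sum of character codes modulo the table size. */
static unsigned hash(const char *key) {
    unsigned h = 0;
    for (; *key; key++) h += (unsigned char)*key;
    return h % TABLE_SIZE;
}

/* Insert with linear probing; returns the index used, or -1 if full. */
int put(const char *key, const char *value) {
    unsigned start = hash(key);
    for (int i = 0; i < TABLE_SIZE; i++) {
        unsigned idx = (start + i) % TABLE_SIZE;  /* probe the next spot */
        if (table[idx].key == NULL || strcmp(table[idx].key, key) == 0) {
            table[idx].key = key;                 /* empty spot (or same key) */
            table[idx].value = value;
            return (int)idx;
        }
    }
    return -1;   /* table full */
}
```

Two keys that hash to the same index end up in consecutive slots, exactly as in the "John"/"Jane" example above.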

What is the complexity of the quick sort algorithm on sorted data? Justify your answer.
The time complexity of Quick Sort on sorted data can be O(n^2) in
the worst case.

In Quick Sort, the pivot element is chosen as a dividing point to partition the list into two sub-lists. Ideally, the pivot element should
be chosen such that the sub-lists are roughly equal in size. However,
if the data is already sorted, the worst-case scenario is that the pivot
element is always chosen as either the smallest or largest element,
resulting in one sub-list being empty and the other sub-list having
all n elements. This means that on each subsequent iteration, Quick
Sort will still have to sort n-1 elements, resulting in a time
complexity of O(n^2).

Therefore, it can be concluded that Quick Sort is not efficient on already sorted data and performs much better on randomly shuffled
data. This highlights the importance of choosing a good pivot
element in Quick Sort in order to ensure good performance and to
avoid the worst-case scenario.

Explain the difference between insertion sort and selection sort with an example. What is the time complexity of these algorithms? How?

Insertion Sort and Selection Sort are both simple sorting algorithms
that are commonly taught as introductory algorithms. The main
difference between the two algorithms lies in the way they select
and place elements in their final position in the sorted list.
Insertion Sort works by dividing the list into two parts: a sorted part
and an unsorted part. At the beginning of the algorithm, the sorted
part is empty and the unsorted part contains all elements. The
algorithm iterates over the unsorted part, selecting one element at
a time, and inserts it into its correct position in the sorted part. This
process continues until all elements in the unsorted part have been
inserted into the sorted part, resulting in a fully sorted list.

Selection Sort, on the other hand, works by dividing the list into two
parts: a sorted part and a remaining part. At the beginning of the
algorithm, the sorted part is empty and the remaining part contains
all elements. The algorithm iterates over the remaining part,
selecting the smallest element, and appends it to the end of the
sorted part. This process continues until all elements in the
remaining part have been added to the sorted part, resulting in a
fully sorted list.

The time complexity of Insertion Sort is O(n^2), where n is the number of elements in the list. This is because the algorithm
requires n-1 passes to sort n elements and in the worst case, the
number of comparisons and swaps required in each pass grows
linearly with the number of elements.

The time complexity of Selection Sort is also O(n^2), where n is the number of elements in the list. This is because the algorithm
requires n-1 passes to sort n elements and in each pass, it must
compare n-i elements, where i is the current pass number. This
results in a total of (n-1) + (n-2) + ... + 1 comparisons, which is
equivalent to n(n-1)/2, or O(n^2).

Here is a simple example to illustrate the difference between Insertion Sort and Selection Sort:

Example: Let's say we have a list of integers [5, 2, 4, 6, 1, 3].


Insertion Sort (state after each insertion): [2, 5, 4, 6, 1, 3], [2, 4, 5, 6, 1, 3], [2, 4, 5, 6, 1, 3], [1, 2, 4, 5, 6, 3], [1, 2, 3, 4, 5, 6]

Selection Sort (state after each pass): [1, 2, 4, 6, 5, 3], [1, 2, 4, 6, 5, 3], [1, 2, 3, 6, 5, 4], [1, 2, 3, 4, 5, 6], [1, 2, 3, 4, 5, 6]

As we can see, the main difference between the two algorithms is the way they place the selected element in its final position in the
sorted list. In Insertion Sort, the selected element is inserted into its
correct position in the sorted part, whereas in Selection Sort, the
selected element is simply appended to the end of the sorted part.

What are the factors to be considered during the selection of a sorting technique?
When selecting a sorting technique, several factors should be
considered, including:

1. Size of the data: The size of the data being sorted is an important factor to consider. For small data sets, any sorting
algorithm may be appropriate, but for large data sets, more
efficient algorithms such as Quick Sort or Merge Sort are
preferred.
2. Structure of the data: The structure of the data can also play a
role in selecting a sorting algorithm. For example, if the data is
already partially sorted, Insertion Sort or Bubble Sort may be a
good choice, as they perform well on partially sorted data.
3. Memory constraints: If memory is limited, then algorithms that
require less memory, such as Insertion Sort or Selection Sort,
may be preferred over algorithms that require more memory,
such as Quick Sort or Merge Sort.
4. Stability: If the data contains elements with equal values, the
stability of the sorting algorithm is important to consider. A
stable sorting algorithm maintains the relative order of equal
elements, while an unstable sorting algorithm does not.
5. Time complexity: The time complexity of the algorithm is an
important factor to consider, as it determines the amount of
time it takes to sort the data. For large data sets, algorithms
with a lower time complexity, such as Quick Sort or Merge
Sort, are preferred over algorithms with a higher time
complexity, such as Insertion Sort or Selection Sort.
6. Code complexity: The code complexity of the sorting
algorithm is also important, as a more complex algorithm may
require more time to write, debug, and maintain.
7. Adaptability: The adaptability of the algorithm is important to
consider, as some algorithms may be more suitable for certain
data structures or architectures than others.

In conclusion, selecting a sorting algorithm depends on the specific requirements of the task at hand, and a careful evaluation of the
various factors should be performed to determine the best
algorithm for the job.

Explain analysis of quick sort algorithm


The Quick Sort algorithm is a divide-and-conquer algorithm that
sorts an array of elements by repeatedly dividing the unsorted
portion into smaller sub-arrays and sorting these sub-arrays.

The time complexity of Quick Sort can be analyzed in two ways: average case and worst case.

Average Case: In the average case, the Quick Sort algorithm has a
time complexity of O(n log n), where n is the number of elements in
the array. This is because the algorithm randomly selects a pivot
element and divides the array into two sub-arrays, one with
elements less than the pivot and the other with elements greater
than the pivot. The average case analysis assumes that the pivot
element is randomly selected and the sub-arrays are roughly the
same size, resulting in a balanced partition.
Worst Case: In the worst case, the Quick Sort algorithm has a time
complexity of O(n^2), where n is the number of elements in the
array. This can occur if the pivot element is always the largest or
smallest element in the array, causing one sub-array to have n-1
elements and the other to have 0 elements. In this scenario, the
algorithm will be inefficient and take much longer to complete.

However, the worst-case scenario is rare and can be avoided by using a more sophisticated pivot selection strategy, such as
selecting the median of the array as the pivot element.

Overall, Quick Sort is a highly efficient sorting algorithm, especially in the average case, and is widely used in practice for sorting large
data sets.

Apply merge sort algorithm for the following data and show the
steps. (Summer 2018) 66, 33, 40, 22, 55, 88, 11, 80, 20, 50, 44, 77, 30
The Merge Sort algorithm is a divide-and-conquer algorithm that
sorts an array of elements by dividing the array into smaller sub-
arrays and then merging these sub-arrays in sorted order.

Here's how to apply Merge Sort to the given data set:

1. Divide the data into two sub-arrays: [66, 33, 40, 22, 55], [88, 11,
80, 20, 50, 44, 77, 30]
2. Sort each of the sub-arrays by dividing them into smaller sub-
arrays: [66, 33], [40, 22], [55], [88, 11], [80, 20], [50, 44], [77, 30]
3. Sort the sub-arrays by dividing them again into even smaller
sub-arrays and merging them in sorted order: [33, 66], [22, 40],
[55], [11, 88], [20, 80], [44, 50], [30, 77]
4. Merge the sorted sub-arrays into larger sub-arrays: [22, 33, 40,
66], [55], [11, 88], [20, 44, 50, 80], [30, 77]
5. Merge the sorted sub-arrays into even larger sub-arrays: [11,
22, 33, 40, 55, 66, 88], [20, 30, 44, 50, 77, 80]
6. Finally, merge the two largest sub-arrays into the final sorted
array: [11, 20, 22, 30, 33, 40, 44, 50, 55, 66, 77, 80, 88]

And that's it! The final sorted array is [11, 20, 22, 30, 33, 40, 44, 50,
55, 66, 77, 80, 88]. The Merge Sort algorithm has a time complexity
of O(n log n), where n is the number of elements in the array.
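The divide-and-merge steps shown above can be sketched compactly in C; the function names are illustrative, and one auxiliary buffer is used for merging.

```c
#include <string.h>

/* Merge the two sorted runs arr[l..m] and arr[m+1..r] using tmp. */
static void merge(int arr[], int tmp[], int l, int m, int r) {
    int i = l, j = m + 1, k = l;
    while (i <= m && j <= r)                     /* take the smaller head */
        tmp[k++] = (arr[i] <= arr[j]) ? arr[i++] : arr[j++];
    while (i <= m) tmp[k++] = arr[i++];          /* drain the left run */
    while (j <= r) tmp[k++] = arr[j++];          /* drain the right run */
    memcpy(arr + l, tmp + l, (size_t)(r - l + 1) * sizeof(int));
}

/* Recursively divide arr[l..r] in half, sort each half, then merge. */
void merge_sort(int arr[], int tmp[], int l, int r) {
    if (l >= r) return;                          /* 0 or 1 element: sorted */
    int m = l + (r - l) / 2;
    merge_sort(arr, tmp, l, m);                  /* sort the left half */
    merge_sort(arr, tmp, m + 1, r);              /* sort the right half */
    merge(arr, tmp, l, m, r);                    /* merge in sorted order */
}
```

Running it on the 13 values above produces the same final array, [11, 20, 22, 30, 33, 40, 44, 50, 55, 66, 77, 80, 88], though the exact intermediate sub-arrays depend on how the halves are split.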

Write a ‘C’ program for insertion sort and discuss its efficiency.
Here's a 'C' program for the Insertion Sort algorithm:

#include <stdio.h>

void insertionSort(int arr[], int n) {
    int i, j, key;
    for (i = 1; i < n; i++) {
        key = arr[i];
        j = i - 1;
        while (j >= 0 && arr[j] > key) {
            arr[j + 1] = arr[j];
            j = j - 1;
        }
        arr[j + 1] = key;
    }
}

void printArray(int arr[], int n) {
    int i;
    for (i = 0; i < n; i++) {
        printf("%d ", arr[i]);
    }
    printf("\n");
}

int main() {
    int arr[] = {66, 33, 40, 22, 55, 88, 11, 80, 20, 50, 44, 77, 30};
    int n = sizeof(arr) / sizeof(arr[0]);
    insertionSort(arr, n);
    printf("Sorted array: \n");
    printArray(arr, n);
    return 0;
}

The Insertion Sort algorithm sorts an array by repeatedly inserting each element in the array into its correct position in the sorted
portion of the array. The time complexity of Insertion Sort is O(n^2),
where n is the number of elements in the array. This means that the
algorithm takes a long time to sort a large array and is not suitable
for use with large data sets.

However, Insertion Sort is efficient for small arrays and arrays that
are nearly sorted. It is also simple to implement and can be useful
for understanding the basic concepts of sorting algorithms.

Apply quick sort on following data (Winter 2017): 42 23 74 11 65 58 94 36 99 87

Here's how to apply the Quick Sort algorithm to the given data:
1. Choose a pivot element. In this example, we'll choose 42 as
the pivot.
2. Partition the elements into two sub-arrays, where all elements
less than the pivot are in one sub-array and all elements
greater than the pivot are in the other sub-array.
Left sub-array: 23 11 36
Right sub-array: 74 65 58 94 99 87
3. Repeat the process for each sub-array until they are each
sorted.
Left sub-array: 11 23 36
Right sub-array: 58 65 74 87 94 99
4. Combine the two sub-arrays to form the final sorted array.
Sorted array: 11 23 36 42 58 65 74 87 94 99

Note that this is just one example of how the Quick Sort algorithm could be applied to the given data. A different pivot choice or partitioning scheme would produce different intermediate steps, but the final sorted array is always the same.

Write an algorithm for Selection sort method. Explain each step with an example.
The Selection Sort algorithm is a simple sorting method that sorts
an array by repeatedly finding the minimum element in the
unsorted portion of the array and swapping it with the first
unsorted element.

Here is an algorithm for the Selection Sort method:

1. Input: unsorted array A of length n.
2. Repeat the following steps n-1 times:
   a. Initialize the minimum element to the first unsorted element.
   b. Compare each remaining unsorted element to the current minimum element.
   c. If the current element is smaller than the current minimum, set it as the new minimum.
   d. After all unsorted elements have been examined, swap the minimum element with the first unsorted element.
3. Output: sorted array A.

Here's an example of how the Selection Sort algorithm would sort the array [5, 4, 3, 2, 1]:

1. First pass: a. The first unsorted element is 5, which is taken as
the initial minimum. b. The remaining elements are scanned,
and the smallest value found is 1. c. The minimum element is
now 1. d. The minimum is swapped with the first unsorted
element, 5. Result: [1, 4, 3, 2, 5]
2. Second pass: a. The first unsorted element is now 4. b. The
remaining elements are scanned, and the smallest value found
is 2. c. The minimum element is now 2. d. The minimum is
swapped with 4. Result: [1, 2, 3, 4, 5]
3. Third pass: a. The first unsorted element is now 3. b. The
remaining elements 4 and 5 are scanned, and no smaller value
is found. c. The minimum remains 3, so no swap is needed.
Result: [1, 2, 3, 4, 5]
4. Fourth pass: a. The first unsorted element is now 4. b. The
remaining element 5 is compared, and no smaller value is
found. c. No swap is needed. Result: [1, 2, 3, 4, 5]

After n-1 = 4 passes, the array is fully sorted.

The time complexity of the Selection Sort algorithm is O(n^2), where n is the number of elements in the list. This makes the
Selection Sort algorithm less efficient than other sorting algorithms
for large data sets. However, it is simple to implement and
understand, making it useful for educational purposes.
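The passes described above can be sketched in C as follows (the function name is illustrative):

```c
/* Selection Sort: on each pass, find the minimum of the unsorted
   part, then swap it with the first unsorted element. */
void selection_sort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {   /* n-1 passes */
        int min = i;                    /* assume first unsorted is smallest */
        for (int j = i + 1; j < n; j++)
            if (arr[j] < arr[min])
                min = j;                /* remember the smaller element */
        if (min != i) {                 /* swap minimum into position i */
            int t = arr[i];
            arr[i] = arr[min];
            arr[min] = t;
        }
    }
}
```

Note that the swap happens only once per pass, after the whole unsorted part has been scanned; this is what distinguishes it from the repeated swapping of Bubble Sort.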
