complexity-analysis
1.1 Merge Sort
[1]: def merge(s1, s2, a):
         i, j, k = 0, 0, 0
         while i < len(s1) and j < len(s2):
             if s1[i] <= s2[j]:
                 a[k] = s1[i]
                 i += 1
             else:
                 a[k] = s2[j]
                 j += 1
             k += 1
         while i < len(s1):   # copy whatever is left of s1
             a[k] = s1[i]
             k += 1
             i += 1
         while j < len(s2):   # copy whatever is left of s2
             a[k] = s2[j]
             k += 1
             j += 1

     def mergeSort(a):
         if len(a) == 0 or len(a) == 1:
             return
         mid = len(a) // 2
         s1 = a[:mid]
         s2 = a[mid:]
         mergeSort(s1)
         mergeSort(s2)
         merge(s1, s2, a)
[3]: def create_rev_array(n):
         # build [n, n-1, ..., 1] — a reverse-sorted input for the timing experiments below
         a = []
         for i in range(n, 0, -1):
             a.append(i)
         return a
2 Experimental Analysis
• Experimental analysis, also known as empirical analysis, involves measuring the actual performance of an algorithm or program using real-world data and hardware.
• This approach provides insights into how an algorithm behaves in practice and can help
validate or refine the theoretical complexity analysis.
• Experimental analysis can be influenced by factors such as hardware, operating system, and
Python interpreter optimizations. Therefore, while experimental analysis provides practical
insights, it’s advisable to combine it with theoretical complexity analysis for a comprehensive
understanding of an algorithm’s behavior.
[4]: import time
time.time() # This gives you the current time in seconds
[4]: 1692967822.7111595
[5]: n = 1000 # For n = 10, 100, 1000, 10000: check the time taken
[6]: a = create_rev_array(n)
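To run the experiment, one can time mergeSort on the reversed arrays for several values of n. A minimal sketch, assuming the create_rev_array and mergeSort functions defined above (absolute timings will differ across machines):

import time

for n in [10, 100, 1000, 10000]:
    a = create_rev_array(n)          # reversed input as built above
    start = time.time()
    mergeSort(a)
    elapsed = time.time() - start
    print(f"n = {n:6d}: {elapsed:.6f} seconds")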
3 Theoretical Analysis
Theoretical analysis, also known as algorithmic analysis, involves evaluating the performance of
an algorithm based on mathematical reasoning and analysis, rather than actual implementation or
experimentation. This approach provides a high-level understanding of an algorithm’s behavior as
the input size grows. The two main aspects of theoretical analysis are time complexity and space
complexity.
1. Time Complexity Analysis: Time complexity analysis estimates the amount of time an
algorithm takes to complete as a function of the input size. The goal is to understand how the
algorithm’s runtime increases as the input size grows. The analysis typically focuses on the
number of basic operations (such as assignments, comparisons, and arithmetic operations)
performed by the algorithm.
Time complexity is often expressed using big O notation, which provides an upper bound on
the growth rate of the algorithm’s runtime. Some common time complexity notations include:
• O(1): Constant time
• O(log n): Logarithmic time
• O(n): Linear time
• O(n log n): Linearithmic time
• O(n^2): Quadratic time
• O(n^k): Polynomial time (for some constant k)
• O(2^n) or O(3^n): Exponential time
The goal is to find the tightest possible upper bound on the algorithm’s time complexity.
2. Space Complexity Analysis: Space complexity analysis estimates the amount of memory an algorithm uses as a function of the input size. It considers the memory required for variables, data structures, and function call stacks. Similar to time complexity, space complexity is also expressed using big O notation.
For space complexity analysis, consider the maximum amount of memory used by the algorithm at any point during its execution. This often involves analyzing the memory required for data structures and auxiliary variables.
• In theoretical analysis, the time factor used to determine the efficiency of an algorithm is measured by counting the number of unit operations.
3.0.1 Factorial
[10]: n = 10
      ans = 1                 # 1 unit of work
      for i in range(1, n + 1):
          ans = ans * i       # 2 units of work: one multiplication, one increment of i
      print(ans)

3628800
• TC: O(n)
4 Algorithmic Complexity Components
When analyzing algorithmic time complexity, an expression such as k1 + k2*n + k3*n^3 + k4*log(n) consists of various terms:
• k1: a constant factor with minimal impact.
• k2*n: linear growth (O(n)) that increases steadily with the input.
• k3*n^3: cubic growth (O(n^3)) that dominates for larger inputs.
• k4*log(n): logarithmic growth (O(log n)) with a slower pace.
For instance, in the expression k1 + k2*n^2 + k3*n^3 + k4*n^2*log(n), the cubic term k3*n^3 remains dominant for larger n; the snippet below makes this concrete.
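A quick numerical check of the first expression (the constants k1–k4 below are arbitrary illustrative choices, not from the original notes):

import math

k1, k2, k3, k4 = 1000, 100, 1, 10    # arbitrary illustrative constants
for n in [10, 100, 1000]:
    total = k1 + k2 * n + k3 * n**3 + k4 * math.log(n)
    cubic = k3 * n**3
    print(f"n = {n:4d}: total = {total:.0f}, cubic term's share = {cubic / total:.1%}")

For n = 10 the cubic term is only about a third of the total, but by n = 1000 it accounts for essentially all of it, which is why the lower-order terms are dropped in big O notation.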
5 Bubble Sort
[11]: def bubble_sort(arr):
          n = len(arr)
          for i in range(n):
              # Last i elements are already in place, no need to check them
              for j in range(n - i - 1):
                  if arr[j] > arr[j + 1]:
                      arr[j], arr[j + 1] = arr[j + 1], arr[j]
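A quick usage check (the input list is an arbitrary example). Each pass bubbles the largest remaining element to the end, for O(n^2) comparisons in total:

data = [5, 1, 4, 2, 8]
bubble_sort(data)
print(data)    # [1, 2, 4, 5, 8]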
6 Insertion Sort
[12]: def insertion_sort(arr):
          for i in range(1, len(arr)):
              key = arr[i]
              j = i - 1
              # shift elements greater than key one slot to the right
              while j >= 0 and arr[j] > key:
                  arr[j + 1] = arr[j]
                  j -= 1
              arr[j + 1] = key
Example 1:

n = 4
i = 0
while i < n:
    print(i)
    i += 1

0
1
2
3
• TC: O(n)
Example 2:
[15]: n = 4
      k = 2
      for i in range(n):
          for j in range(k):
              print(i + j)

0
1
1
2
2
3
3
4
• TC: O(n*k); if k is a small constant compared to n, this is effectively O(n)
Example 3:
[16]: n = 10
      for i in range(n):
          k = n
          # halving k each step takes about log2(n) iterations
          while k > 0:
              k = k // 2
• Time Complexity
= k1 + k2*log(n) + k2*log(n) + ... + k2*log(n)   (n terms, one per outer-loop iteration)
= k1 + k2*n*log(n)
= O(n*log(n))
Example 4:
[17]: n = int(input())
      print()
      while n > 0:
          print(n)
          n = n // 4
1000
1000
250
62
15
3
• Time Complexity (the loop divides n by 4, so it runs about log base 4 of n times)
= k1 + k2*log(n)
= O(log(n))
7 Recursion
It's important to note that while recursive algorithms can be elegant and intuitive, they may not always be the most efficient solution. In some cases, they can lead to excessive function calls and redundant work. In such cases, iterative approaches or dynamic programming techniques may provide better performance.
[19]: def factorial(n):
          if n == 0 or n == 1:
              return 1
          return n * factorial(n - 1)

      num = 5
      result = factorial(num)
      print(f"The factorial of {num} is {result}")

The factorial of 5 is 120
The recurrence for recursive factorial is:
T(n) = k + T(n-1)
T(n-1) = k + T(n-2)
T(n-2) = k + T(n-3)
...
T(1) = k
Adding these equations gives T(n) = k*n, so the time complexity is O(n).
8 Binary Search
1. Iterative
[22]: def binary_search(arr, val):
          start = 0
          end = len(arr) - 1
          while start <= end:
              mid = (start + end) // 2
              if arr[mid] == val:
                  return mid
              elif val < arr[mid]:
                  end = mid - 1
              else:
                  start = mid + 1
          return -1
[23]: binary_search([1,2,3,4,5,6,7,8,9], 9)
[23]: 8
2. Recursive
[24]: def binary_search_recursive(arr, val, start, end):
          if start > end:
              return -1
          mid = (start + end) // 2
          if arr[mid] == val:
              return mid
          elif val < arr[mid]:
              return binary_search_recursive(arr, val, start, mid - 1)
          else:
              return binary_search_recursive(arr, val, mid + 1, end)
Index of 66 is 5
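The cell that produced this output is not shown; a driver along these lines, with an assumed sorted array that happens to have 66 at index 5, would reproduce it:

arr = [11, 22, 33, 44, 55, 66, 77]   # assumed example array; 66 sits at index 5
idx = binary_search_recursive(arr, 66, 0, len(arr) - 1)
print(f"Index of 66 is {idx}")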
The recurrence is T(n) = k + T(n/2), with T(1) = k.
It takes x halvings of n to reach 1:
n / 2^x = 1
x = log(n)
Total work = k*log(n)
Time Complexity = O(log(n))
9 Merge Sort
[26]: def merge(s1, s2, a):
          i, j, k = 0, 0, 0
          while i < len(s1) and j < len(s2):
              if s1[i] <= s2[j]:
                  a[k] = s1[i]
                  i += 1
              else:
                  a[k] = s2[j]
                  j += 1
              k += 1
          while i < len(s1):   # copy whatever is left of s1
              a[k] = s1[i]
              k += 1
              i += 1
          while j < len(s2):   # copy whatever is left of s2
              a[k] = s2[j]
              k += 1
              j += 1

      def mergeSort(a):
          if len(a) == 0 or len(a) == 1:
              return
          mid = len(a) // 2
          s1 = a[:mid]
          s2 = a[mid:]
          mergeSort(s1)
          mergeSort(s2)
          merge(s1, s2, a)
[27]: a = [1,45,232,12,345,56,5763,4233]
      mergeSort(a)
      a

[27]: [1, 12, 45, 56, 232, 345, 4233, 5763]
[37]: # Merging two sorted arrays of size m and n into a sorted array of size m+n
      # takes O(m+n) operations. Merge sort therefore satisfies the recurrence
      # T(n) = 2*T(n/2) + O(n), which solves to O(n*log(n)).
For the naive recursive Fibonacci function fib(n) (defined in section 12 below), the recurrence for the time complexity can be expressed as:
T(n) = T(n-1) + T(n-2) + O(1)
In other words, the time complexity of fib(n) depends on the time complexities of the two recursive calls fib(n-1) and fib(n-2), plus the constant-time work done within the function itself.
To analyze the worst-case time complexity, we’ll consider the upper bound scenario where each
level of recursion splits into two branches. This forms a binary tree structure with a height of n.
At each level, the number of nodes doubles compared to the previous level.
For each level i (0-based index), there will be approximately 2^i nodes, and since the height of the
tree is n, the total number of nodes in the tree will be:
1 + 2 + 2^2 + … + 2^(n-1) = 2^n - 1
This is the total number of recursive calls made by the function.
Considering that each call takes constant time (O(1)),
the total time complexity is:
T(n) = O(1) * (2^n - 1) = O(2^n)
So, the time complexity of this naive recursive Fibonacci function is exponential: O(2^n). This
means that the time taken by the function grows exponentially with the input value n. This
inefficiency is a result of repeated calculations of overlapping subproblems. For larger values of n,
this recursive approach becomes impractical due to its rapidly increasing time consumption.
In general, we assume a computer can do about 10^8 operations in 1 second. For fib(100) we would need roughly 2^100 ≈ 10^30 operations, i.e. 10^30 / 10^8 = 10^22 seconds.
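The blow-up comes from recomputing the same subproblems. A memoized variant (one possible dynamic-programming fix, sketched here rather than taken from the original notes) brings the time down to O(n):

def fib_memo(n, memo=None):
    # cache previously computed values so each fib(i) is computed only once
    if memo is None:
        memo = {}
    if n == 1 or n == 2:
        return 1
    if n not in memo:
        memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]

print(fib_memo(100))    # 354224848179261915075, computed almost instantly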
10 Space Complexity
1. The space complexity is the maximum space required at any point in time.
2. Auxiliary space is the extra space used beyond the input.
3. Recursion takes space (one stack frame per active call).
• Don't count the input's space requirement in the space complexity.
• Only count the extra space the algorithm needs.
[29]: i = 1
      n = 5
      while i <= n:
          print(i)
          i += 1
      # Space complexity: O(1)

1
2
3
4
5
[30]: i = 1
      n = 5
      while i <= n:
          j = 0      # j is one extra variable, reused every iteration: still O(1) space
          print(i)
          i += 1
      # Space complexity: O(1)

1
2
3
4
5
10.1 Bubble Sort
[31]: def bubble_sort(arr):
          n = len(arr)
          for i in range(n):
              for j in range(n - i - 1):
                  if arr[j] > arr[j + 1]:
                      arr[j], arr[j + 1] = arr[j + 1], arr[j]

• Bubble sort swaps in place and uses only a few loop variables, so its space complexity is O(1).
[33]: factorial(5)

[33]: 120
Space complexity:
fact(n) -> fact(n-1) -> fact(n-2) -> fact(n-3) -> ...
• Up to n+1 function frames live in memory at once
• Each frame takes k space
• Space Complexity: O(n) (illustrated below)
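A small instrumented factorial (an illustrative sketch, not from the original notes) makes the O(n) stack growth visible by threading the current depth through the calls:

def factorial_depth(n, depth=1):
    # depth mirrors the size of the call stack at this point
    if n == 0 or n == 1:
        return 1, depth
    result, max_depth = factorial_depth(n - 1, depth + 1)
    return n * result, max_depth

value, frames = factorial_depth(5)
print(value, frames)    # 120 5 — one frame per value of n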
[34]: def multiplyRec(m, n):
          # multiply m by n using repeated addition (assumes n >= 1)
          if n == 1:
              return m
          return m + multiplyRec(m, n - 1)
      # TC: O(n), SC: O(n) recursion depth
11 Merge Sort
[35]: def merge(s1, s2, a):
          i, j, k = 0, 0, 0
          while i < len(s1) and j < len(s2):
              if s1[i] <= s2[j]:
                  a[k] = s1[i]
                  k += 1
                  i += 1
              else:
                  a[k] = s2[j]
                  k += 1
                  j += 1
          while i < len(s1):   # copy whatever is left of s1
              a[k] = s1[i]
              k += 1
              i += 1
          while j < len(s2):   # copy whatever is left of s2
              a[k] = s2[j]
              k += 1
              j += 1

      def mergeSort(a):
          if len(a) == 0 or len(a) == 1:
              return
          mid = len(a) // 2
          s1 = a[:mid]      # slicing copies the halves: O(n) extra space overall
          s2 = a[mid:]
          mergeSort(s1)
          mergeSort(s2)
          merge(s1, s2, a)

• Space Complexity: O(n) for the sub-array copies, plus O(log n) recursion depth.
12 Fibonacci
[36]: def fib(n):
          if n == 1 or n == 2:
              return 1
          return fib(n - 1) + fib(n - 2)
      # TC: O(2^n), as derived above
13 Quick Sort
QuickSort is a widely used sorting algorithm that follows the divide-and-conquer strategy to sort
an array or a list of elements.
1. Choose a pivot element from the array. The choice of pivot can affect the algorithm’s efficiency.
Common choices include the first element, the last element, the middle element, or a random
element.
2. Partition the array into two sub-arrays: elements less than the pivot and elements greater
than the pivot. This is typically done using two pointers, one scanning from the left and the
other from the right, swapping elements as needed.
3. Recursively apply QuickSort to the sub-arrays created in step 2.
4. Combine the sorted sub-arrays and the pivot in their correct order to obtain the final sorted
array.
[37]: def quicksort(arr):
          if len(arr) <= 1:
              return arr
          pivot = arr[len(arr) // 2]                 # middle element as pivot
          left = [x for x in arr if x < pivot]
          middle = [x for x in arr if x == pivot]
          right = [x for x in arr if x > pivot]
          return quicksort(left) + quicksort(middle) + quicksort(right)

      arr = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]
      sorted_arr = quicksort(arr)
      print(sorted_arr)

[1, 1, 2, 3, 3, 4, 5, 5, 5, 6, 9]
While QuickSort is generally efficient and has an average-case time complexity of O(n log n), its worst-case time complexity can be O(n^2) if the pivot choice consistently leads to unbalanced partitions.
• If the chosen pivot is always the largest or smallest element, the partitions are maximally unbalanced, which leads to O(n^2) time complexity:
• T(n) = T(n-1) + kn ⇒ O(n^2)
• If the split is near the middle,
• then T(n) = 2*T(n/2) + kn ⇒ O(n log n)
However, techniques like choosing a good pivot or randomizing the pivot choice can mitigate this issue in practice.
Quick Sort with a random pivot is an example of a randomised algorithm.
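A sketch of the randomized-pivot variant, using Python's random module and mirroring the list-building quicksort above rather than an in-place partition:

import random

def quicksort_random(arr):
    if len(arr) <= 1:
        return arr
    pivot = random.choice(arr)   # a random pivot makes the O(n^2) worst case very unlikely
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quicksort_random(left) + quicksort_random(middle) + quicksort_random(right)

print(quicksort_random([3, 1, 4, 1, 5, 9, 2, 6]))   # [1, 1, 2, 3, 4, 5, 6, 9]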
14 Problems
1. Power
[38]: def power(x, n):
          ans = 1
          for i in range(1, n + 1):
              ans *= x
          return ans
[39]: power(5, 2)

[39]: 25

[40]: # Recursive: TC O(n), SC O(n) (recursion depth)
      def power(x, n):
          if n == 0:
              return 1
          return x * power(x, n - 1)

      power(5, 2)

[40]: 25
[41]: def power(x, n):
          if n == 0:
              return 1
          if n % 2 == 0:
              temp = power(x, n // 2)
              return temp * temp
          else:
              temp = power(x, (n - 1) // 2)
              return x * temp * temp

      result = power(5, 2)
      print(result)  # Output: 25
      # TC: O(logn)
      # SC: O(logn)

25
14.0.1 Note:
1. Look at the constraints.
2. Assume roughly 10^8 operations per second.
If n <= 10^6, then an O(n^2) solution can take up to 10^12 operations — about 10^4 seconds — which is far too slow; aim for O(n log n) or better.
[42]: def intersection(nums1, m, nums2, n):
          # two-pointer walk over the first m and n elements of two sorted arrays
          intersec = []
          i, j = 0, 0
          while i < m and j < n:
              if nums1[i] == nums2[j]:
                  intersec.append(nums2[j])
                  i += 1
                  j += 1
              elif nums1[i] < nums2[j]:
                  i += 1
              else:
                  j += 1
          return intersec

      intersection([1,1,2,3,4,5],5,[1,2,3,4],4)
[42]: [1, 2, 3, 4]
Equilibrium Index: 3
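The cell that produced this output is not shown; a typical prefix-sum solution (the input array here is an assumed example, chosen so the answer is 3) looks like:

def equilibrium_index(arr):
    total = sum(arr)
    left_sum = 0
    for i, x in enumerate(arr):
        # the right sum is the total minus the left sum minus the current element
        if left_sum == total - left_sum - x:
            return i
        left_sum += x
    return -1

print("Equilibrium Index:", equilibrium_index([-7, 1, 5, 2, -4, 3, 0]))   # 3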
14.1.2 Unique Element
Approach 1: Using a Hash Map - Create a hash map to store the frequency of each element in the array. - Iterate through the array and update the frequency in the hash map. - Iterate through the hash map and find the element with a frequency of 1. - Time complexity: O(n), where n is the number of elements in the array.
Approach 2: Using XOR - XOR all the elements in the array together. - The result will be the unique element, since the XOR of two equal numbers cancels out (resulting in 0). - Time complexity: O(n), where n is the number of elements in the array.
Approach 3: Using Sorting - Sort the array so that duplicates sit next to each other. - Walk the array two elements at a time; the first position where a pair does not match holds the unique element. - Time complexity: O(n log n), dominated by the sort.
def find_unique_using_dict(arr):
    freq = {}
    for x in arr:
        freq[x] = freq.get(x, 0) + 1
    for x, count in freq.items():
        if count == 1:
            return x
    return -1

arr = [4, 2, 3, 2, 4]
unique_element = find_unique_using_dict(arr)
print("Unique element:", unique_element)

Unique element: 3
def find_unique_using_xor(arr):
    xor_result = 0
    for x in arr:
        xor_result ^= x
    return xor_result

print("Unique Element:", find_unique_using_xor([4, 2, 3, 2, 4]))

Unique Element: 3
def find_unique_using_sorting(arr):
    arr.sort()
    i = 0
    while i < len(arr):
        # the unique element is the first one not followed by its twin
        if i == len(arr) - 1 or arr[i] != arr[i + 1]:
            return arr[i]
        i += 2
    return -1

print("Unique Element:", find_unique_using_sorting([4, 2, 3, 2, 4]))

Unique Element: 3
def duplicateElement(arr):
    seen = set()
    for x in arr:
        if x in seen:
            return x
        seen.add(x)
    return -1

result = duplicateElement([1, 2, 3, 5, 9, 9])
print("Duplicate Element:", result)

Duplicate Element: 9
[50]: # Approach 2: sort, then scan with two pointers
      def pair_sum(arr, val):
          arr.sort()
          print(arr)
          pairs = 0
          start, end = 0, len(arr) - 1
          while start < end:    # start < end so an element isn't paired with itself
              sum_start_end = arr[start] + arr[end]
              if sum_start_end == val:
                  pairs += 1
                  start += 1
                  end -= 1
              elif sum_start_end > val:
                  end -= 1
              else:
                  start += 1
          return pairs
[2, 3, 4, 6, 7]
[51]: 2
[52]: # Approach 3: hash map of counts seen so far
      def pair_sum_hash(arr, val):
          num_map = {}
          pairs = 0
          for num in arr:
              complement = val - num
              if complement in num_map:
                  pairs += num_map[complement]   # pair num with every earlier complement
              num_map[num] = num_map.get(num, 0) + 1
          return pairs

      arr = [1, 2, 3, 4, 3, 5]
      target = 6
      result = pair_sum_hash(arr, target)
      print("Number of pairs:", result)

Number of pairs: 3
Approach 1: Using an Extra Array - Build a new array with every element placed at its rotated position, then copy the new array back to the original array (a sketch follows below). - Time complexity: O(n), where n is the number of elements in the array.
Approach 2: In-place Rotation - Reverse the entire array. - Reverse the first k elements. - Reverse the remaining n - k elements. - Time complexity: O(n), where n is the number of elements in the array.
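The code for Approach 1 is not shown in the notebook; a minimal sketch (rotate_array is a hypothetical helper name, and the list is assumed non-empty) could look like this, before the in-place version below:

def rotate_array(arr, k):
    n = len(arr)
    k = k % n
    # build the right-rotated order in a new list: last k elements move to the front
    rotated = arr[-k:] + arr[:-k] if k else arr[:]
    for i in range(n):
        arr[i] = rotated[i]

arr = [1, 2, 3, 4, 5, 6, 7]
rotate_array(arr, 3)
print(arr)    # [5, 6, 7, 1, 2, 3, 4]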
[53]: def reverse_array(arr, start, end):
          while start < end:
              arr[start], arr[end] = arr[end], arr[start]
              start += 1
              end -= 1

      def rotate_array_inplace(arr, k):
          n = len(arr)
          k = k % n
          reverse_array(arr, 0, n - 1)   # reverse the whole array
          reverse_array(arr, 0, k - 1)   # reverse the first k elements
          reverse_array(arr, k, n - 1)   # reverse the remaining n - k elements

      arr = [1, 2, 3, 4, 5, 6, 7]
      rotate_array_inplace(arr, 3)
      print(arr)
[5, 6, 7, 1, 2, 3, 4]
def find_triplets_list(arr, target):
    # collect all triplets that sum to target, using a sorted two-pointer scan
    arr.sort()
    triplets = []
    for i in range(len(arr) - 2):
        left, right = i + 1, len(arr) - 1
        while left < right:
            total = arr[i] + arr[left] + arr[right]
            if total == target:
                triplets.append([arr[i], arr[left], arr[right]])
                left += 1
                right -= 1
            elif total < target:
                left += 1
            else:
                right -= 1
    return triplets
def find_triplets(arr, target_sum):
    arr.sort()
    for i in range(len(arr) - 2):
        left, right = i + 1, len(arr) - 1
        while left < right:
            current_sum = arr[i] + arr[left] + arr[right]
            if current_sum == target_sum:
                print(arr[i], arr[left], arr[right])  # Print the triplet
                left += 1
                right -= 1
            elif current_sum < target_sum:
                left += 1
            else:
                right -= 1

arr = [1, 2, 3, 4, 5, 6, 7]
target_sum = 12
find_triplets(arr, target_sum)
1 4 7
1 5 6
2 3 7
2 4 6
3 4 5