
Complexity Analysis in Python


August 25, 2023

1 Time Complexity and Space Complexity


Complexity analysis, also known as algorithmic complexity or time and space complexity analysis,
is the process of evaluating the performance of an algorithm in terms of its resource usage as the
input size grows. In Python, this typically involves analyzing how the runtime and memory usage
of an algorithm change as the input data increases.
There are two main aspects of complexity analysis: time complexity and space complexity.
1. Time Complexity: Time complexity refers to the amount of time an algorithm takes to
complete as a function of the input size n. It is often expressed using big O notation, which
provides an upper bound on the growth rate of the algorithm’s runtime. Common notations
include O(1) for constant time, O(log n) for logarithmic time, O(n) for linear time, O(n log
n) for linearithmic time, O(n^2) for quadratic time, and so on.
2. Space Complexity: Space complexity refers to the amount of memory an algorithm uses
as a function of the input size. It’s also often expressed using big O notation, indicating the
upper bound on memory usage. This includes both auxiliary memory (for variables, data
structures, etc.) and input memory.
To perform complexity analysis in Python:
1. Analyze Code: Study the algorithm and identify its major operations, loops, and recursive
calls.
2. Count Operations: Count the number of basic operations (assignments, comparisons, arith-
metic operations) executed by the algorithm.
3. Determine Patterns: Determine how the number of operations or memory usage changes
with respect to the input size.
4. Express Complexity: Use big O notation to express the time and space complexity based
on the patterns observed.
5. Test and Validate: Use empirical testing on various input sizes to validate the theoretical
complexity analysis.
Complexity analysis provides a high-level understanding of an algorithm’s behavior as the input
grows, but constant factors and lower-order terms might not be accurately captured by it. It’s a
valuable tool for comparing algorithms and making informed decisions about their use.
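As a minimal sketch of steps 2 and 3 (an illustration, not part of the original notes), the function below tallies the basic operations of a simple summation loop; the tally grows in direct proportion to n, the pattern summarized as O(n):

def sum_with_op_count(n):
    # Sum 0..n-1 while counting basic operations (a rough, illustrative tally)
    ops = 0
    total = 0
    for i in range(n):
        total += i   # one addition and one assignment per iteration
        ops += 2
    return total, ops

for n in (10, 100, 1000):
    _, ops = sum_with_op_count(n)
    print(n, ops)   # the operation count scales linearly with n -> O(n)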

1.1 Merge Sort
[1]: def merge(s1, s2, a):
         i, j, k = 0, 0, 0
         # Merge while both halves still have elements left
         while i < len(s1) and j < len(s2):
             if s1[i] <= s2[j]:
                 a[k] = s1[i]
                 k += 1
                 i += 1
             else:
                 a[k] = s2[j]
                 k += 1
                 j += 1

         # Copy any leftovers from s1
         while i < len(s1):
             a[k] = s1[i]
             k += 1
             i += 1

         # Copy any leftovers from s2
         while j < len(s2):
             a[k] = s2[j]
             k += 1
             j += 1

     def mergeSort(a):
         if len(a) == 0 or len(a) == 1:
             return

         mid = len(a) // 2
         s1 = a[:mid]
         s2 = a[mid:]

         mergeSort(s1)
         mergeSort(s2)
         merge(s1, s2, a)

1.2 Selection Sort


[2]: def selectionSort(a):
         for i in range(len(a)):
             # Find the index of the smallest element in a[i:]
             min_idx = i
             for j in range(i + 1, len(a)):
                 if a[min_idx] > a[j]:
                     min_idx = j
             a[i], a[min_idx] = a[min_idx], a[i]
         return a

[3]: def create_rev_array(n):
         # Build [n, n-1, ..., 1]: a reversed array as worst-case input for the sorts
         a = []
         for i in range(n, 0, -1):
             a.append(i)
         return a

2 Experimental Analysis:
• Experimental analysis, also known as empirical analysis, involves measuring the actual per-
formance of an algorithm or program using real-world data and hardware.
• This approach provides insights into how an algorithm behaves in practice and can help
validate or refine the theoretical complexity analysis.
• Experimental analysis can be influenced by factors such as hardware, operating system, and
Python interpreter optimizations. Therefore, while experimental analysis provides practical
insights, it’s advisable to combine it with theoretical complexity analysis for a comprehensive
understanding of an algorithm’s behavior.
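The cells below time the sorts with time.time() deltas. A more robust alternative (a hedged sketch, not part of the original notebook) is the standard library's timeit module, which repeats the measurement and lets you keep the best run to reduce noise:

import timeit

# Assumes mergeSort and create_rev_array are defined as above.
n = 1000
times = timeit.repeat(
    stmt="mergeSort(create_rev_array(n))",
    globals=globals(),
    number=1,   # one sort per measurement
    repeat=5,   # repeat five times and keep the minimum
)
print(f"Merge Sort best of 5: {min(times):.6f} seconds")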
[4]: import time
time.time() # This gives you the current time in seconds

[4]: 1692967822.7111595

[5]: n = 1000 # For n = 10, 100, 1000, 10000: check the time taken

[6]: a = create_rev_array(n)

[7]: start_time = time.time()
     mergeSort(a)
     end_time = time.time()

     execution_time = end_time - start_time
     print(f"Merge Sort Execution time: {execution_time} seconds")

Merge Sort Execution time: 0.00498652458190918 seconds

[8]: a = create_rev_array(n)

[9]: start_time = time.time()
     selectionSort(a)
     end_time = time.time()

     execution_time = end_time - start_time
     print(f"Selection Sort Execution time: {execution_time} seconds")

Selection Sort Execution time: 0.04084348678588867 seconds

3 Theoretical Analysis
Theoretical analysis, also known as algorithmic analysis, involves evaluating the performance of
an algorithm based on mathematical reasoning and analysis, rather than actual implementation or
experimentation. This approach provides a high-level understanding of an algorithm’s behavior as
the input size grows. The two main aspects of theoretical analysis are time complexity and space
complexity.
1. Time Complexity Analysis: Time complexity analysis estimates the amount of time an
algorithm takes to complete as a function of the input size. The goal is to understand how the
algorithm’s runtime increases as the input size grows. The analysis typically focuses on the
number of basic operations (such as assignments, comparisons, and arithmetic operations)
performed by the algorithm.
Time complexity is often expressed using big O notation, which provides an upper bound on
the growth rate of the algorithm’s runtime. Some common time complexity notations include:
• O(1): Constant time
• O(log n): Logarithmic time
• O(n): Linear time
• O(n log n): Linearithmic time
• O(n^2): Quadratic time
• O(n^k): Polynomial time (for some constant k)
• O(2^n) or O(3^n): Exponential time
The goal is to find the tightest possible upper bound on the algorithm’s time complexity.
2. Space Complexity Analysis: Space complexity analysis estimates the amount of memory
an algorithm uses as a function of the input size. It considers the memory required for vari-
ables, data structures, and function call stacks. Similar to time complexity, space complexity
is also expressed using big O notation.
For space complexity analysis, consider the maximum amount of memory used by the algo-
rithm at any point during its execution. This often involves analyzing the memory required
for data structures and auxiliary variables.
• In theoretical analysis, the time efficiency of an algorithm is measured by counting the number of
unit operations it performs.

3.0.1 Factorial
[10]: n = 10
      ans = 1   # 1 unit of work
      for i in range(1, n):
          # 2 units of work per iteration: one multiplication and one increment of i
          ans = ans * i

      print(ans)   # note: range(1, n) stops at n-1, so this prints (n-1)! = 9!

362880
• TC: O(n)

4 Algorithmic Complexity Components
When analyzing algorithmic time complexity, an expression such as k1 + k2*n + k3*n^3 + k4*log(n)
consists of several terms:
• k1: a constant factor with minimal impact.
• k2*n: linear growth (O(n)) that increases steadily with the input.
• k3*n^3: cubic growth (O(n^3)) that dominates for larger inputs.
• k4*log(n): logarithmic growth (O(log n)) with the slowest pace.
For instance, in the expression k1 + k2*n^2 + k3*n^3 + k4*log(n)*n^2, the cubic term k3*n^3 remains
dominant for larger n.
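To see this dominance numerically, the sketch below (with arbitrary constants chosen purely for illustration) evaluates each term as n grows; the cubic term quickly dwarfs the others:

import math

k1, k2, k3, k4 = 100.0, 10.0, 1.0, 50.0   # arbitrary illustrative constants
for n in (10, 100, 1000):
    print(
        f"n={n}:",
        f"k1={k1:.0f},",
        f"k2*n={k2 * n:.0f},",
        f"k3*n^3={k3 * n**3:.0f},",
        f"k4*log(n)={k4 * math.log(n):.0f}",
    )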

5 Bubble Sort
[11]: def bubble_sort(arr):
          n = len(arr)
          for i in range(n):
              # Last i elements are already in place, no need to check them
              for j in range(n - i - 1):
                  if arr[j] > arr[j + 1]:
                      arr[j], arr[j + 1] = arr[j + 1], arr[j]

      arr = [64, 34, 25, 12, 22, 11, 90]
      bubble_sort(arr)
      print("Sorted array:", arr)

Sorted array: [11, 12, 22, 25, 34, 64, 90]


• Time Complexity
  = k1 + k2*(n-1) + k2*(n-2) + k2*(n-3) + ... + k2
  = k1 + k2*(1 + 2 + 3 + ... + (n-1))
  = k1 + k2*((n-1)*n/2)
  = k*n^2 + k*n
  ≈ k*n^2
  = O(n^2)
• A single for loop doesn't automatically mean O(n); what matters is the work done inside it.

6 Insertion Sort
[12]: def insertion_sort(arr):
          for i in range(1, len(arr)):
              key = arr[i]
              j = i - 1
              # Shift elements greater than key one position to the right
              while j >= 0 and arr[j] > key:
                  arr[j + 1] = arr[j]
                  j -= 1
              arr[j + 1] = key

      arr = [12, 11, 13, 5, 6]
      insertion_sort(arr)
      print("Sorted array:", arr)

Sorted array: [5, 6, 11, 12, 13]


• Time Complexity
  = k1 + k2*1 + k2*2 + k2*3 + ... + k2*(n-1)
  = k1 + k2*(1 + 2 + 3 + ... + (n-1))
  = k1 + k2*((n-1)*n/2)
  = k*n^2 + k*n
  ≈ k*n^2
  = O(n^2)

6.1 Selection Sort


[13]: def selection_sort(arr):
          n = len(arr)
          for i in range(n - 1):
              min_index = i
              for j in range(i + 1, n):
                  if arr[j] < arr[min_index]:
                      min_index = j
              arr[i], arr[min_index] = arr[min_index], arr[i]

      arr = [64, 25, 12, 22, 11]
      selection_sort(arr)
      print("Sorted array:", arr)

Sorted array: [11, 12, 22, 25, 64]


• Time Complexity
  = k1 + k2*(n-1) + k2*(n-2) + k2*(n-3) + ... + k2*1
  = k1 + k2*(1 + 2 + 3 + ... + (n-1))
  = k1 + k2*((n-1)*n/2)
  = k*n^2 + k*n
  ≈ k*n^2
  = O(n^2)
Example 1:
[14]: i = 0
      n = 4
      while i < n:
          while i < n:
              print(i)
              i += 1

0
1
2
3
• TC: O(n); the inner loop does all the work, since i is incremented only n times in total.
Example 2:
[15]: k = 2   # n is still 4 from the previous cell

      for i in range(n):
          for j in range(k):
              print(i + j)

0
1
1
2
2
3
3
4
• TC: O(n*k); if k is a small constant compared with n, this is effectively O(n).
Example 3:
[16]: n = 10
      for i in range(n):
          k = n
          while k > 0:
              k = k // 2

• Time Complexity
  = k1 + k2*log(n) + k2*log(n) + ... + k2*log(n)   (n times)
  = k1 + k2*n*log(n)
  = k*n*log(n)
  = O(n log n)
Example 4:
[17]: n = int(input())
      print()

      while n > 0:
          print(n)
          n = n // 4

1000

1000
250
62
15
3
• Time Complexity (the logarithm here is base 4)
  = k1 + k2*log(n)
  = k*log(n)
  = O(log n)

7 Complexity of Recursive Functions


When analyzing the time complexity of recursive algorithms, we need to consider the number of
recursive calls made and the work done per call.
1. Number of Recursive Calls: The number of recursive calls made by an algorithm greatly
impacts its time complexity. If the number of recursive calls grows significantly with the
input size, the algorithm’s time complexity can become inefficient.
2. Work Done Per Call: The work done within each recursive call is also a critical factor.
This includes any operations, comparisons, and calculations performed within each call.
General approach to analyzing the time complexity of a recursive algorithm:
1. Identify the Recurrence Relation: Determine how the problem is broken down into
smaller instances, and express the time complexity of the problem in terms of the time
complexity of those smaller instances.
2. Write the Recurrence Equation: Formulate a mathematical equation that describes the
relationship between the time complexity of the problem and the time complexity of its
subproblems.
3. Solve the Recurrence Equation: Solve the recurrence equation to obtain a closed-form
solution that expresses the time complexity in terms of the input size.
• O(1): This is rare in recursive algorithms. It indicates constant time complexity, meaning
that each recursive call performs a constant amount of work.
• O(log n): Common in algorithms that use a binary search or divide-and-conquer strategy,
where the problem size is reduced by a constant factor with each recursive call.
• O(n): Linear time complexity, often seen in algorithms that involve traversing a data structure
or processing each element individually.
• O(n log n): Common in algorithms like Merge Sort or Quick Sort, which divide the problem
into smaller instances and then merge or combine the results.
• O(n^2), O(n^3), …: Polynomial time complexities, often seen in algorithms with nested
recursive calls or multiple nested loops.

It’s important to note that while recursive algorithms can be elegant and intuitive, they may not
always be the most efficient solution. In some cases, they can lead to excessive function calls and
redundant work. In such cases, iterative approaches or dynamic programming techniques may
provide better performance.

7.0.1 Factorial recursive


[18]: def factorial(n):
          if n == 0 or n == 1:
              return 1
          else:
              return n * factorial(n - 1)

[19]: num = 5
result = factorial(num)
print(f"The factorial of {num} is {result}")

The factorial of 5 is 120


• Time Complexity

T(n) = T(n-1) + k   -> recurrence relation

T(n-1) = T(n-2) + k
T(n-2) = T(n-3) + k
...
T(1) = k

Substituting each equation into the previous one:

T(n) = k*n + T(0)
T(n) = k*n + k
T(n) ≈ k*n

Time complexity: O(n)


[20]: def multiplyRec(m, n):
          # Multiply m by n via repeated addition: n recursive calls
          if n == 1:
              return m
          return m + multiplyRec(m, n - 1)

      ## Time Complexity: O(n)

[21]: def sumOfDigits(n):
          # One digit is stripped per call, so there are about log10(n) calls
          if n < 10:
              return n
          return n % 10 + sumOfDigits(n // 10)

      ## Time Complexity: O(log n), base 10

8 Binary Search
1. Iterative
[22]: def binary_search(arr, val):
          start = 0
          end = len(arr) - 1

          while start <= end:
              mid = (start + end) // 2
              if arr[mid] == val:
                  return mid
              elif val < arr[mid]:
                  end = mid - 1
              else:
                  start = mid + 1
          return -1   # val not found

[23]: binary_search([1,2,3,4,5,6,7,8,9], 9)

[23]: 8

2. Recursive
[24]: def binary_search_recursive(arr, val, start, end):
          if start > end:
              return -1
          mid = (start + end) // 2
          if arr[mid] == val:
              return mid
          elif val < arr[mid]:
              return binary_search_recursive(arr, val, start, mid - 1)
          else:
              return binary_search_recursive(arr, val, mid + 1, end)

[25]: arr = [1, 2, 30, 45, 58, 66, 70, 88, 92]
      start = 0
      end = len(arr) - 1
      val = 66
      print("Index of", val, "is", binary_search_recursive(arr, val, start, end))

Index of 66 is 5

8.0.1 Time Complexity


T(n) = T(n/2) + k   -> recurrence relation
T(n/2) = T(n/4) + k
T(n/4) = T(n/8) + k
...
T(1) = k

How many halvings (say x) does it take to reach a subarray of size 1?

n -> n/2 -> n/4 -> n/8 -> ... -> 1

n/(2^x) = 1
x = log2(n)

Substituting back: T(n) = k*log(n)
Time Complexity = O(log n)
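A quick empirical check of this result (a sketch, not from the original notebook): counting loop iterations while searching for a value that is absent forces the worst case, and the count tracks log2(n):

import math

def binary_search_steps(arr, val):
    # Same search as above, but return the number of loop iterations
    steps, start, end = 0, 0, len(arr) - 1
    while start <= end:
        steps += 1
        mid = (start + end) // 2
        if arr[mid] == val:
            break
        elif val < arr[mid]:
            end = mid - 1
        else:
            start = mid + 1
    return steps

for n in (16, 1024, 1_000_000):
    arr = list(range(n))
    # Searching for -1 (absent) gives the worst case: about log2(n) steps
    print(n, binary_search_steps(arr, -1), round(math.log2(n)))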

9 Merge Sort
[26]: def merge(s1, s2, a):
          i, j, k = 0, 0, 0
          while i < len(s1) and j < len(s2):
              if s1[i] <= s2[j]:
                  a[k] = s1[i]
                  i += 1
              else:
                  a[k] = s2[j]
                  j += 1
              k += 1

          while i < len(s1):
              a[k] = s1[i]
              k += 1
              i += 1

          while j < len(s2):
              a[k] = s2[j]
              k += 1
              j += 1

      def mergeSort(a):
          if len(a) == 0 or len(a) == 1:
              return
          mid = len(a) // 2
          s1 = a[:mid]
          s2 = a[mid:]
          mergeSort(s1)
          mergeSort(s2)
          merge(s1, s2, a)

[27]: a = [1, 45, 232, 12, 345, 56, 5763, 4233]
      mergeSort(a)
      a

[27]: [1, 12, 45, 56, 232, 345, 4233, 5763]

9.0.1 Time Complexity


T(n) = k*n (for splitting) + T(n/2) + T(n/2) + k*n (for merging)   -> recurrence relation

T(n) = 2*T(n/2) + O(n)
T(n/2) = 2*T(n/4) + O(n/2)

Expanding gives log(n) levels, each doing O(n) total work, so the time complexity is O(n log n).

[37]: # Merging two sorted arrays of sizes m and n into a sorted array of size m+n
      # requires O(m+n) operations

[28]: def fib(n):
          if n == 0 or n == 1:
              return n
          return fib(n-1) + fib(n-2)

The recursive formula for the time complexity can be expressed as:
T(n) = T(n-1) + T(n-2) + O(1)
In other words, the time complexity of fib(n) depends on the time complexities of the two recursive
calls fib(n-1) and fib(n-2), plus the constant time work done within the function itself.
To analyze the worst-case time complexity, we’ll consider the upper bound scenario where each
level of recursion splits into two branches. This forms a binary tree structure with a height of n.
At each level, the number of nodes doubles compared to the previous level.
For each level i (0-based index), there will be approximately 2^i nodes, and since the height of the
tree is n, the total number of nodes in the tree will be:
1 + 2 + 2^2 + … + 2^(n-1) = 2^n - 1
This is the total number of recursive calls made by the function.
Considering that each call takes constant time (O(1)),
the total time complexity is:
T(n) = O(1) * (2^n - 1) = O(2^n)
So, the time complexity of this naive recursive Fibonacci function is exponential: O(2^n). This
means that the time taken by the function grows exponentially with the input value n. This
inefficiency is a result of repeated calculations of overlapping subproblems. For larger values of n,
this recursive approach becomes impractical due to its rapidly increasing time consumption.

In general, we assume a computer can perform about 10^8 operations per second. For fib(100) we would
need roughly 2^100 ≈ 10^30 operations, and 10^30 / 10^8 = 10^22 seconds.
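As noted above, the blow-up comes from recomputing overlapping subproblems. A hedged sketch of the memoized fix (one form of dynamic programming): caching each result means every value from 0 to n is computed once, bringing the time down to O(n):

from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    # Each fib_memo(k) is computed once and then served from the cache
    if n == 0 or n == 1:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(100))   # returns instantly; the naive version would take ~10^22 seconds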

10 Space Complexity
1. Space complexity is the maximum space required at any point in time.
2. Count auxiliary space.
3. Recursion takes space (one stack frame per active call).
• Don't count the input's space requirement in the space complexity.
• Only count the extra space the algorithm needs.
[29]: i = 1
      n = 5

      while i <= n:
          print(i)
          i += 1
      # Space complexity: O(1)

1
2
3
4
5

[30]: i = 1
      n = 5
      while i <= n:
          j = 0   # j is reused each iteration, so this is still constant extra space
          print(i)
          i += 1
      # Space complexity: O(1)

1
2
3
4
5

[81]: a = [i for i in range(0, n)]

      # Space complexity: O(n), the list holds n elements

10.1 Bubble Sort
[31]: def bubble_sort(arr):
          n = len(arr)
          for i in range(n):
              for j in range(n - i - 1):
                  if arr[j] > arr[j + 1]:
                      arr[j], arr[j + 1] = arr[j + 1], arr[j]

      arr = [64, 34, 25, 12, 22, 11, 90]
      bubble_sort(arr)
      print("Sorted array:", arr)

      # Space complexity: O(1), sorting happens in place

Sorted array: [11, 12, 22, 25, 34, 64, 90]

10.1.1 Iterative Factorial


[32]: def factorial(n):
          fact = 1
          while n != 0:
              fact *= n
              n -= 1
          return fact

      factorial(5)

      # Space complexity: O(1)

[32]: 120

10.2 Recursive Factorial


[33]: def factorial(n):
          if n == 0 or n == 1:
              return 1
          return n * factorial(n - 1)

      factorial(5)

[33]: 120

Space complexity:
fact(n) -> fact(n-1) -> fact(n-2) -> ... -> fact(1)
• up to n+1 function frames are in memory at once
• each frame takes k space
• Space Complexity: O(n)

[34]: def multiplyRec(m, n):
          if n == 1:
              return m
          return m + multiplyRec(m, n - 1)

      # Space complexity: O(n), one stack frame per recursive call

11 Merge Sort
[35]: def merge(s1, s2, a):
          i, j, k = 0, 0, 0
          while i < len(s1) and j < len(s2):
              if s1[i] <= s2[j]:
                  a[k] = s1[i]
                  k += 1
                  i += 1
              else:
                  a[k] = s2[j]
                  k += 1
                  j += 1

          while i < len(s1):
              a[k] = s1[i]
              k += 1
              i += 1

          while j < len(s2):
              a[k] = s2[j]
              k += 1
              j += 1

      def mergeSort(a):
          if len(a) == 0 or len(a) == 1:
              return

          mid = len(a) // 2
          s1 = a[:mid]
          s2 = a[mid:]

          mergeSort(s1)
          mergeSort(s2)
          merge(s1, s2, a)

      # Space complexity: O(n), the slices s1 and s2 use linear extra space

12 Fibonacci
[36]: def fib(n):
          if n == 1 or n == 2:
              return 1
          return fib(n-1) + fib(n-2)

      # At any point, the deepest active chain is fib(n) -> fib(n-1) -> ... -> fib(1)
      # Space complexity: O(n)

13 Quick Sort
QuickSort is a widely used sorting algorithm that follows the divide-and-conquer strategy to sort
an array or a list of elements.
1. Choose a pivot element from the array. The choice of pivot can affect the algorithm’s efficiency.
Common choices include the first element, the last element, the middle element, or a random
element.
2. Partition the array into two sub-arrays: elements less than the pivot and elements greater
than the pivot. This is typically done using two pointers, one scanning from the left and the
other from the right, swapping elements as needed.
3. Recursively apply QuickSort to the sub-arrays created in step 2.
4. Combine the sorted sub-arrays and the pivot in their correct order to obtain the final sorted
array.
[37]: def quicksort(arr):
          if len(arr) <= 1:
              return arr

          pivot = arr[len(arr) // 2]   # Choosing the middle element as the pivot

          left = [x for x in arr if x < pivot]
          middle = [x for x in arr if x == pivot]
          right = [x for x in arr if x > pivot]
          return quicksort(left) + middle + quicksort(right)

      arr = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]
      sorted_arr = quicksort(arr)
      print(sorted_arr)

[1, 1, 2, 3, 3, 4, 5, 5, 5, 6, 9]
QuickSort is generally efficient, with an average-case time complexity of O(n log n), but its worst-case
time complexity can be O(n^2) if the pivot choice consistently leads to unbalanced partitions.
• If the chosen pivot is always the largest or smallest element, the partitions are maximally
  unbalanced: T(n) = T(n-1) + k*n => O(n^2)
• If the split is in the middle,
• then T(n) = 2*T(n/2) + k*n => O(n log n)
However, techniques like choosing a good pivot and randomizing the pivot choice can mitigate this
issue in practice; a sketch follows.
Quick Sort is a classic example of a randomized algorithm.
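A hedged sketch of such a randomized, in-place variant (helper names are illustrative; it uses a Lomuto-style partition rather than the two scanning pointers described in step 2):

import random

def partition(arr, low, high):
    # Swap a randomly chosen pivot to the end: makes the O(n^2) worst case unlikely
    pivot_idx = random.randint(low, high)
    arr[pivot_idx], arr[high] = arr[high], arr[pivot_idx]
    pivot = arr[high]
    i = low - 1
    for j in range(low, high):
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1   # final index of the pivot

def quicksort_inplace(arr, low=0, high=None):
    if high is None:
        high = len(arr) - 1
    if low < high:
        p = partition(arr, low, high)
        quicksort_inplace(arr, low, p - 1)
        quicksort_inplace(arr, p + 1, high)

arr = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]
quicksort_inplace(arr)
print(arr)   # [1, 1, 2, 3, 3, 4, 5, 5, 5, 6, 9]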

14 Problems
1. Power
[38]: def power(x, n):
          ans = 1
          for i in range(1, n + 1):
              ans *= x
          return ans

[39]: power(5, 2)   # iterative

      # TC: O(n)
      # SC: O(1)

[39]: 25

[40]: def power(x, n):
          if n == 0:
              return 1
          return x * power(x, n - 1)

      power(5, 2)   # recursive

      # TC: O(n)
      # SC: O(n)

[40]: 25

Improvement Of Power Function


We can improve time complexity of this function using an approach called “exponentiation by
squaring,” which reduces the number of recursive calls and thus improves the overall efficiency.
This approach is based on the idea that for even values of n, we can calculate x^n as (x^(n/2))^2,
and for odd values of n, we can calculate x^n as x * (x^((n-1)/2))^2.
In the implementation below, the number of recursive calls is reduced, leading to a significant
improvement in time complexity.
• The time complexity of this optimized version is O(log n), where n is the exponent: at each step
  the exponent is halved, giving a logarithmic number of recursive calls.
• The space complexity is likewise O(log n), due to the recursive call stack.

[41]: def power(x, n):
          if n == 0:
              return 1
          if n % 2 == 0:
              temp = power(x, n // 2)
              return temp * temp
          else:
              temp = power(x, (n - 1) // 2)
              return x * temp * temp

      result = power(5, 2)
      print(result)   # Output: 25

      # TC: O(log n)
      # SC: O(log n)

25

14.0.1 Note:
1. Look at the problem's constraints.
2. Assume roughly 10^8 operations per second.
If n <= 10^6, then n^2 can reach 10^12 operations, about 10^4 seconds at 10^8 operations per second,
so an O(n^2) algorithm would be far too slow for such constraints.

14.1 Array intersection Problem


Approach 1:
- Iterate through each element in one array and check its presence in the other array.
- Time complexity: O(m*n).

Approach 2:
- If both arrays contain only unique elements, sort the second array and perform binary search on it
  for each element of the first.
- Sorting the second array takes n*log(n) time, and binary searching for m elements takes m*log(n) time.
- Time complexity: O(n*log(n) + m*log(n)).

Approach 3:
- Sort both arrays individually.
- Find common elements by comparing elements while traversing both arrays.
- Sorting takes m*log(m) and n*log(n) time respectively, and finding common elements takes O(m + n) time.
- Time complexity: O(m*log(m) + n*log(n) + m + n) = O(m*log(m) + n*log(n)).

Approach 4:
- Use a hash map to store elements from one array, then check for their presence while scanning the
  other array (a sketch appears after the Approach 3 code below).
- Time complexity depends on hash-map efficiency, typically around O(m + n).
Please note that the time complexities provided here are based on generalizations and might vary
depending on implementation details and specific scenarios.
[42]: # Approach 3
      def intersection(nums1, m, nums2, n):
          nums1.sort()
          nums2.sort()

          intersec = []
          i, j = 0, 0
          while i < m and j < n:
              if nums1[i] == nums2[j]:
                  intersec.append(nums2[j])
                  i += 1
                  j += 1
              elif nums1[i] < nums2[j]:
                  i += 1
              else:
                  j += 1
          return intersec

      intersection([1,1,2,3,4,5], 5, [1,2,3,4], 4)

[42]: [1, 2, 3, 4]
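A sketch of Approach 4 (an assumption of how it might look, since the notebook implements only Approach 3): count the elements of one array in a dict, then consume matches while scanning the other. Roughly O(m + n):

def intersection_hash(nums1, nums2):
    counts = {}
    for x in nums1:
        counts[x] = counts.get(x, 0) + 1

    intersec = []
    for y in nums2:
        if counts.get(y, 0) > 0:   # consume one occurrence per match
            intersec.append(y)
            counts[y] -= 1
    return intersec

print(intersection_hash([1, 1, 2, 3, 4, 5], [1, 2, 3, 4]))   # [1, 2, 3, 4]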

14.1.1 Equilibrium Index Problem:


Approach 1:
- For each index, calculate the sum of elements on its left side and the sum of elements on its right side.
- Compare the left and right sums at each index to find the equilibrium index.
- Time complexity: O(n^2), due to the nested loop for sum calculations.

Approach 2:
- Calculate the total sum of all elements in the array.
- Initialize a variable to track the left sum, starting at 0.
- Iterate through each index, computing the right sum as (total sum - current element - left sum).
- Compare the left and right sums at each index to find the equilibrium index.
- Time complexity: O(n), a single pass through the array.
[43]: def find_equilibrium_index(arr):
          total_sum = sum(arr)
          left_sum = 0

          for index, value in enumerate(arr):
              # After this subtraction, total_sum holds the sum to the right of index
              total_sum -= value
              if left_sum == total_sum:
                  return index
              left_sum += value

          return -1   # No equilibrium index found

      arr = [-7, 1, 5, 2, -4, 3, 0]
      equilibrium_index = find_equilibrium_index(arr)
      print("Equilibrium Index:", equilibrium_index)

Equilibrium Index: 3

14.1.2 Unique Element
Approach 1: Using a Hash Map
- Create a hash map to store the frequency of each element in the array.
- Iterate through the array and update the frequencies in the hash map.
- Iterate through the hash map and find the element with a frequency of 1.
- Time complexity: O(n), where n is the number of elements in the array.

Approach 2: Using XOR
- XOR all the elements in the array together.
- The result is the unique element, since the XOR of two equal numbers cancels out (resulting in 0).
- Time complexity: O(n), where n is the number of elements in the array.

Approach 3: Using Sorting
- Sort the array so that equal elements become adjacent.
- Scan the sorted array two at a time; the element whose neighbor does not match is the unique one.
- Time complexity: O(n log n), dominated by the sort.

[44]: def find_unique_using_dict(arr):
          freq_elements = {}

          for num in arr:
              freq_elements[num] = freq_elements.get(num, 0) + 1

          for num, freq in freq_elements.items():
              if freq == 1:
                  return num

      arr = [4, 2, 3, 2, 4]
      unique_element = find_unique_using_dict(arr)
      print("Unique element:", unique_element)

Unique element: 3

[48]: def uniqueElement(arr):
          xor_result = 0

          for num in arr:
              xor_result ^= num

          return xor_result

      result = uniqueElement([1, 1, 2, 3, 2])
      print("Unique Element:", result)

Unique Element: 3

[46]: def uniqueElement(arr):
          arr.sort()
          i = 0
          while i < len(arr):
              # After sorting, duplicates sit in pairs; the unique element breaks the pattern
              if i == len(arr) - 1 or arr[i] != arr[i + 1]:
                  return arr[i]
              i += 2
          return -1

      result = uniqueElement([1, 1, 2, 3, 2])
      print("Unique Element:", result)

Unique Element: 3

[49]: # Find a duplicate element in an array
      def duplicateElement(arr):
          arr.sort()
          i = 0

          while i + 1 < len(arr):
              if arr[i] == arr[i + 1]:
                  return arr[i]
              i += 1

          return -1

      result = duplicateElement([1, 2, 3, 5, 9, 9])
      print("Duplicate Element:", result)

Duplicate Element: 9

15 Pair Sum Problem

Approach 1:
- For each element in the array, check the remaining elements to find whether any of them sums with it
  to the required target.
- Time complexity: O(n^2), where n is the number of elements in the array.

Approach 2: Two-Pointer Approach
- Sort the array.
- Initialize two pointers, one at the beginning (left) and one at the end (right) of the sorted array.
- Compare the sum of the elements at the left and right pointers with the target sum.
- If the sum is less than the target, move the left pointer to the right.
- If the sum is greater than the target, move the right pointer to the left.
- If the sum equals the target, you've found a pair.
- Time complexity: O(n log n) due to sorting; the search itself takes only O(n) time.

Approach 3: Using a Hash Map
- Iterate through the array; for each element num, calculate the complement target - num.
- Check whether the complement exists in a hash map that stores the elements seen so far.
- If it does, you've found a pair.
- If it doesn't, add the current element to the hash map.
- Time complexity: O(n), where n is the number of elements in the array.

[50]: # Approach 2
      def pair_sum(arr, val):
          arr.sort()
          print(arr)
          pairs = 0

          start, end = 0, len(arr) - 1
          while start < end:   # start < end so an element isn't paired with itself
              sum_start_end = arr[start] + arr[end]
              if sum_start_end == val:
                  pairs += 1
                  start += 1
                  end -= 1
              elif sum_start_end > val:
                  end -= 1
              else:
                  start += 1

          return pairs

[51]: pair_sum([4, 6, 2, 7, 3], 10)

[2, 3, 4, 6, 7]

[51]: 2

[52]: # Approach 3
      def pair_sum_hash(arr, val):
          num_map = {}
          pairs = 0

          for num in arr:
              complement = val - num
              if complement in num_map:
                  pairs += num_map[complement]
              if num in num_map:
                  num_map[num] += 1
              else:
                  num_map[num] = 1

          return pairs

      arr = [1, 2, 3, 4, 3, 5]
      target = 6
      result = pair_sum_hash(arr, target)
      print("Number of pairs:", result)

Number of pairs: 3

15.1 Rotate Array

Approach 1: Using Extra Space
- Create a new array of the same size as the original array.
- Copy the elements from the original array to the new array with the appropriate shift.
- Copy the new array back to the original array (a sketch follows the in-place version below).
- Time complexity: O(n), where n is the number of elements in the array; it also uses O(n) extra space.

Approach 2: In-place Rotation
- Reverse the entire array.
- Reverse the first d elements.
- Reverse the remaining n - d elements.
- Time complexity: O(n), where n is the number of elements in the array, with O(1) extra space.
[53]: def reverse_array(arr, start, end):
          while start < end:
              arr[start], arr[end] = arr[end], arr[start]
              start += 1
              end -= 1

      def rotate_array_inplace(arr, d):
          n = len(arr)
          d = d % n

          reverse_array(arr, 0, n - 1)   # reverse all elements
          reverse_array(arr, 0, d - 1)   # reverse the first d elements
          reverse_array(arr, d, n - 1)   # reverse the remaining n - d elements

      arr = [1, 2, 3, 4, 5, 6, 7]
      rotate_array_inplace(arr, 3)
      print(arr)

[5, 6, 7, 1, 2, 3, 4]
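For completeness, a sketch of Approach 1 (extra space; not implemented in the original notebook): build a shifted copy and write it back, O(n) time with O(n) extra space:

def rotate_array_extra_space(arr, d):
    n = len(arr)
    d = d % n
    rotated = [0] * n
    for i in range(n):
        # Right-rotate by d, matching the in-place version above
        rotated[(i + d) % n] = arr[i]
    arr[:] = rotated

arr = [1, 2, 3, 4, 5, 6, 7]
rotate_array_extra_space(arr, 3)
print(arr)   # [5, 6, 7, 1, 2, 3, 4]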

[54]: def three_sum(arr, target):
          arr.sort()
          triplets = []

          for i in range(len(arr) - 2):
              # Skip duplicate values for the first element of the triplet
              if i > 0 and arr[i] == arr[i - 1]:
                  continue

              left, right = i + 1, len(arr) - 1

              while left < right:
                  total = arr[i] + arr[left] + arr[right]

                  if total == target:
                      triplets.append([arr[i], arr[left], arr[right]])
                      left += 1
                      right -= 1

                      # Skip duplicates for the second and third elements
                      while left < right and arr[left] == arr[left - 1]:
                          left += 1
                      while left < right and arr[right] == arr[right + 1]:
                          right -= 1

                  elif total < target:
                      left += 1
                  else:
                      right -= 1

          return triplets

[77]: def find_triplets(arr, target_sum):
          arr.sort()   # Sort the array in ascending order

          for i in range(len(arr) - 2):
              left = i + 1
              right = len(arr) - 1

              while left < right:
                  current_sum = arr[i] + arr[left] + arr[right]

                  if current_sum == target_sum:
                      print(arr[i], arr[left], arr[right])   # Print the triplet
                      left += 1
                      right -= 1
                  elif current_sum < target_sum:
                      left += 1
                  else:
                      right -= 1

      arr = [1, 2, 3, 4, 5, 6, 7]
      target_sum = 12
      find_triplets(arr, target_sum)

1 4 7
1 5 6
2 3 7
2 4 6
3 4 5
