
Unit: 1

Asymptotic Notations for Time and Space Complexity


Asymptotic notations are used to analyze the efficiency of algorithms in terms of time and
space. The three common notations are Big O, Omega, and Theta:

Big O Notation (O): Describes the upper bound on the growth rate of an algorithm's
resource consumption. It provides an upper limit for the worst-case scenario. For example,
O(n) means the algorithm's complexity grows linearly with the input size.
Omega Notation (Ω): Represents the lower bound, meaning it provides a lower limit for the
best-case scenario. For example, Ω(n) implies that the algorithm will take at least linear
time.
Theta Notation (Θ): Gives both upper and lower bounds, indicating that the algorithm's
complexity is tightly bound. For example, Θ(n) means the algorithm has a linear time
complexity.
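
For reference, these notations have standard formal definitions. A compact summary (c, c1, c2, and n0 denote positive constants):

f(n) = O(g(n)) \iff \exists\, c, n_0 > 0 : 0 \le f(n) \le c \cdot g(n) \text{ for all } n \ge n_0
f(n) = \Omega(g(n)) \iff \exists\, c, n_0 > 0 : 0 \le c \cdot g(n) \le f(n) \text{ for all } n \ge n_0
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \text{ and } f(n) = \Omega(g(n))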

Methods for Solving Recurrence Relations


Recurrence relations are equations that describe a function in terms of its previous values. In
algorithm analysis, they are often used to express the time complexity of recursive algorithms.
Methods for solving them include:

1. Substitution Method:

General Idea: Guess the form of the solution and then use mathematical induction to
prove it.
Steps:
Guess the form of the solution based on the recurrence relation.
Use mathematical induction to prove that your guess is correct.
Example:
If the recurrence relation is T(n) = 2T(n/2) + n, you can guess T(n) = O(n log n)
and then use induction to prove it.

2. Master Theorem:

Applicability: Primarily used to solve divide-and-conquer recurrence relations.


General Idea: A general framework that provides solutions for recurrence relations of
the form T(n) = aT(n/b) + f(n), where a ≥ 1 and b > 1 are constants and f(n) is an
asymptotically positive function.
Steps:
Compare the function f(n) with n^(log_b(a)).
Depending on the comparison, the Master Theorem provides a solution in terms of
big O notation.
Example:
If T(n) = 2T(n/2) + n, you can apply the Master Theorem to find that T(n) = O(n log
n).

3. Recurrence Tree Method:

General Idea: Convert the recurrence relation into a tree and analyze its structure to
find the time complexity.
Steps:
Create a tree diagram where each node represents the cost of a subproblem.
Sum the costs of all nodes in each level of the tree.
Analyze the total cost by adding up the costs at each level.
Example:
For T(n) = 2T(n/2) + n, each node of the tree has two children; the costs at each
level sum to n, and there are about log n levels, so the total cost is O(n log n).

4. Back Substitution:

General Idea: Repeatedly substitute the recurrence into itself until the base case is
reached, then sum the resulting terms to obtain a closed form.
Steps:
Expand T(n) using the recurrence, then expand the resulting terms again, and so on.
Continue until the expression involves only the base case (usually T(1) or T(0)).
Sum the accumulated terms and simplify.
Example:
If the base case is T(1) = 1 and the recurrence relation is T(n) = T(n-1) + n, back
substitution gives T(n) = n + (n-1) + ... + 2 + 1 = n(n+1)/2, which is Θ(n^2).
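
As a quick sanity check, the closed form above can be verified numerically; this is a minimal sketch with illustrative function names, assuming the recurrence T(n) = T(n-1) + n with T(1) = 1:

def t_recurrence(n):
    total = 1                    # base case T(1) = 1
    for k in range(2, n + 1):
        total += k               # apply T(k) = T(k-1) + k
    return total

def t_closed_form(n):
    return n * (n + 1) // 2      # 1 + 2 + ... + n

for n in (1, 5, 10, 100):
    assert t_recurrence(n) == t_closed_form(n)
print("T(n) matches n(n+1)/2 for the sampled values")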

Brief Review of Graphs


Graphs are a fundamental data structure used to represent relationships between objects. Key
concepts in graph theory include:

Vertices and Edges: Graphs consist of vertices (nodes) connected by edges (lines).
Directed vs. Undirected Graphs: Edges can have a direction (directed) or no direction
(undirected).
Types of Graphs: Common types include trees, bipartite graphs, and complete graphs.
Graph Traversal Algorithms: Depth-First Search (DFS) and Breadth-First Search (BFS)
are fundamental for exploring graphs.
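
A minimal Python sketch of both traversals, assuming the graph is stored as an adjacency-list dictionary (the sample graph and function names are illustrative):

from collections import deque

def dfs(graph, start, visited=None):
    if visited is None:
        visited = set()
    visited.add(start)
    for neighbor in graph[start]:
        if neighbor not in visited:
            dfs(graph, neighbor, visited)    # go as deep as possible first
    return visited

def bfs(graph, start):
    visited = {start}
    order = []
    queue = deque([start])
    while queue:
        node = queue.popleft()               # expand vertices level by level
        order.append(node)
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return order

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(dfs(graph, 'A'))    # all vertices reachable from 'A'
print(bfs(graph, 'A'))    # ['A', 'B', 'C', 'D']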

Sets:
A set is a fundamental data structure in computer science and mathematics. It's a collection of
distinct elements with the following key properties:

Uniqueness: Every element in a set is unique; there are no duplicate elements.


No Particular Order: Sets don't have a specific order, so the elements are not arranged in
any particular sequence.
Fast Membership Check: Sets are designed to efficiently determine whether an element is
present in the set or not. This operation should be close to constant time, on average.

Common Set Operations:

1. Adding an Element: You can add an element to a set. If it's already in the set, the set
remains unchanged.
2. Removing an Element: You can remove an element from a set. If the element is not in the
set, it has no effect.
3. Checking Membership: You can check if an element is in the set. This is a fast operation.
4. Union of Sets: The union of two sets combines all the elements from both sets into a new
set.
5. Intersection of Sets: The intersection of two sets contains only the elements that exist in
both sets.
6. Difference of Sets: The difference between two sets contains the elements that are in one
set but not in the other.

Sets are often used in various algorithms and data structures to keep track of unique elements
and to efficiently perform set operations.
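
In Python, the built-in set type supports these operations directly; a small illustration:

a = {1, 2, 3}
b = {3, 4, 5}

a.add(2)             # adding a duplicate leaves the set unchanged
a.discard(7)         # removing a missing element has no effect
print(2 in a)        # membership check, close to O(1) on average
print(a | b)         # union:        {1, 2, 3, 4, 5}
print(a & b)         # intersection: {3}
print(a - b)         # difference:   {1, 2}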

Disjoint Sets (Union-Find Data Structure):


A disjoint-set data structure (also known as a union-find data structure) is used to efficiently
manage a collection of disjoint sets. The main operations are:

1. Make Set: Create a new set with a single element.


2. Union: Combine two sets into one set, merging their elements.
3. Find: Determine which set an element belongs to.
Disjoint sets are widely used in applications like graph theory and network connectivity
problems. They are particularly helpful in algorithms for finding connected components in
graphs or in algorithms like Kruskal's Minimum Spanning Tree algorithm.

Disjoint Set Operations:

Make Set: Creates a new set containing a single element, which is initially its own
representative.
Find: Given an element, find the representative of the set to which it belongs. This
operation is used to determine which set an element is part of.
Union: Given two elements, merge the sets to which they belong by making one set's
representative the representative of the other set. This operation combines two sets into
one.
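
A minimal union-find sketch with path compression and union by rank (the class and method names are illustrative, not a standard library API):

class DisjointSet:
    def __init__(self):
        self.parent = {}
        self.rank = {}

    def make_set(self, x):
        self.parent[x] = x       # each element starts as its own representative
        self.rank[x] = 0

    def find(self, x):
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])   # path compression
        return self.parent[x]

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return                            # already in the same set
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx                  # attach shallower tree under deeper one
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1

ds = DisjointSet()
for x in "abcd":
    ds.make_set(x)
ds.union('a', 'b')
ds.union('c', 'd')
print(ds.find('a') == ds.find('b'))   # True
print(ds.find('a') == ds.find('c'))   # False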

Sorting and Searching Algorithms and Their Analysis

Sorting Algorithms:
Sorting algorithms are used to rearrange elements in a specific order, such as in ascending or
descending order. There are various sorting algorithms, each with its own characteristics and
performance.

Common Sorting Algorithms:

1. Bubble Sort:

Idea: Repeatedly swap adjacent elements if they are in the wrong order.
Time Complexity: O(n^2) in the worst and average cases, where n is the number of
elements.

2. Selection Sort:

Idea: Select the minimum element from the unsorted part and place it at the beginning.
Time Complexity: O(n^2) in the worst and average cases.

3. Insertion Sort:

Idea: Build the sorted array one element at a time by inserting the next element into its
proper position.
Time Complexity: O(n^2) in the worst and average cases.

4. Merge Sort:
Idea: Divide the unsorted list into n sublists, each with one element, and then
repeatedly merge sublists to produce new sorted sublists.
Time Complexity: O(n log n) in the worst and average cases, making it more efficient
for large datasets.

5. Quick Sort:

Idea: Select a "pivot" element and partition the other elements into two subarrays,
according to whether they are less than or greater than the pivot.
Time Complexity: O(n^2) in the worst case but O(n log n) on average. It's widely used
due to its average-case efficiency.
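
As a concrete example of one of the quadratic sorts listed above, here is a minimal insertion sort sketch (in-place, ascending order):

def insertion_sort(arr):
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        while j >= 0 and arr[j] > key:   # shift larger elements one slot right
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key                 # insert key into its proper position
    return arr

print(insertion_sort([12, 11, 13, 5, 6]))   # [5, 6, 11, 12, 13]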

Searching Algorithms:
Searching algorithms are used to find a specific element in a collection of data, such as an array
or a list. Different searching algorithms are used based on the structure of the data and the
specific requirements of the search.

Common Searching Algorithms:

1. Linear Search:

Idea: Iterate through the elements one by one until the target element is found.
Time Complexity: O(n), where n is the number of elements. It performs a simple
comparison at each step.

2. Binary Search:

Idea: Divide the sorted array in half repeatedly until the target element is found.
Time Complexity: O(log n), where n is the number of elements. Binary search is
highly efficient for sorted data.

3. Hashing:

Idea: Map the target element to an index using a hash function; this typically involves
building a data structure such as a hash table.
Time Complexity: O(1) on average for hash table-based searching, but it depends on
the quality of the hash function.
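
Minimal sketches of linear search and a hash-based lookup; the dictionary here plays the role of a hash table, and the names are illustrative:

def linear_search(items, target):
    for i, value in enumerate(items):    # compare elements one by one
        if value == target:
            return i
    return -1                            # target not present

data = [7, 3, 9, 4]
print(linear_search(data, 9))            # 2

index = {value: i for i, value in enumerate(data)}   # build the hash table once
print(index.get(9, -1))                  # O(1) average-time lookup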

Analysis of Sorting and Searching Algorithms:


The analysis of these algorithms is crucial to understanding their performance in different
scenarios:
Time Complexity: Describes how the running time of an algorithm scales with the size of
the input data. Sorting and searching algorithms can have different time complexities, such
as O(n), O(log n), or O(n log n), depending on the algorithm.

Space Complexity: Quantifies the amount of memory required by an algorithm. It's
important to consider not only the time but also the space needed by an algorithm,
especially when working with limited resources.

Best, Worst, and Average Cases: Sorting and searching algorithms may have different
performance characteristics depending on the distribution of data. The best-case scenario
represents the most favorable data arrangement, while the worst-case represents the least
favorable.

Stability: In sorting algorithms, stability refers to the preservation of the relative order of
equal elements in the sorted output.

In-Place Sorting: Some sorting algorithms can sort the data with minimal additional
memory usage. This property is known as "in-place sorting."

Adaptive Sorting: Some sorting algorithms can take advantage of existing order in the
data to perform more efficiently. These are called "adaptive sorting" algorithms.

Understanding the analysis of sorting and searching algorithms helps in selecting the most
suitable algorithm for a specific task and optimizing the performance of algorithms in various
applications.

Divide and Conquer: General Method


General Idea: Divide and Conquer is a problem-solving strategy that breaks a problem into
smaller subproblems, solves the subproblems recursively, and then combines their
solutions to obtain the solution to the original problem.
Steps:
1. Divide: Break the problem into smaller, more manageable subproblems.
2. Conquer: Solve the subproblems, often recursively.
3. Combine: Merge the solutions of subproblems to produce the solution of the original
problem.
Use Cases: Divide and Conquer is commonly applied in sorting, searching, and various
other algorithmic problems.

Binary Search
Idea: Binary search is a Divide and Conquer algorithm used to find a target element in a
sorted array. It repeatedly divides the search space in half.
Steps:
1. Divide: Compare the target with the middle element to determine if it's in the left or
right subarray.
2. Conquer: Recursively apply binary search to the appropriate subarray.
3. Combine: Return the index of the target if found, or an indication that the target is not
in the array.
Time Complexity: O(log n), where n is the number of elements in the array.
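
A recursive binary-search sketch that mirrors the divide/conquer/combine steps above; it returns the index of the target, or -1 if the target is absent:

def binary_search(arr, target, low=0, high=None):
    if high is None:
        high = len(arr) - 1
    if low > high:
        return -1                        # empty search space: not found
    mid = (low + high) // 2
    if arr[mid] == target:
        return mid
    if target < arr[mid]:
        return binary_search(arr, target, low, mid - 1)   # search left half
    return binary_search(arr, target, mid + 1, high)      # search right half

data = [2, 5, 8, 12, 16, 23, 38]
print(binary_search(data, 23))   # 5
print(binary_search(data, 7))    # -1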

Merge Sort
Idea: Merge Sort is a sorting algorithm that uses the Divide and Conquer approach. It
divides the unsorted array into smaller subarrays, recursively sorts them, and then merges
them to create a fully sorted array.
Steps:
1. Divide: Split the unsorted array into two equal subarrays.
2. Conquer: Recursively sort both subarrays.
3. Combine: Merge the sorted subarrays to produce the final sorted array.
Time Complexity: O(n log n) in the worst, average, and best cases, making it highly
efficient.
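
A merge-sort sketch following the same divide/conquer/combine steps (it returns a new sorted list rather than sorting in place):

def merge_sort(arr):
    if len(arr) <= 1:
        return arr                            # 0 or 1 elements: already sorted
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])              # conquer each half recursively
    right = merge_sort(arr[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # combine: merge the sorted halves
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))   # [1, 2, 5, 5, 6, 9]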

Quick Sort
Idea: Quick Sort is a Divide and Conquer sorting algorithm that chooses a "pivot" element
and partitions the array into two subarrays: elements less than the pivot and elements
greater than the pivot. The subarrays are then recursively sorted.
Steps:
1. Divide: Choose a pivot element and partition the array into two subarrays.
2. Conquer: Recursively sort both subarrays.
3. Combine: No specific combine step is needed as the subarrays are sorted in place.
Time Complexity: On average, O(n log n), but in the worst case, it can be O(n^2).
However, Quick Sort is often faster in practice due to its smaller constant factors.
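
A quick-sort sketch using the Lomuto partition scheme with the last element as pivot (an illustrative pivot choice; other schemes are common):

def quick_sort(arr, low=0, high=None):
    if high is None:
        high = len(arr) - 1
    if low < high:
        pivot = arr[high]
        i = low - 1
        for j in range(low, high):            # partition around the pivot
            if arr[j] <= pivot:
                i += 1
                arr[i], arr[j] = arr[j], arr[i]
        arr[i + 1], arr[high] = arr[high], arr[i + 1]
        p = i + 1                             # pivot is now in its final place
        quick_sort(arr, low, p - 1)           # recursively sort both sides
        quick_sort(arr, p + 1, high)
    return arr

print(quick_sort([5, 2, 9, 1, 5, 6]))   # [1, 2, 5, 5, 6, 9]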

Selection Sort
Idea: Selection Sort is a simple sorting algorithm that divides the array into two subarrays:
one with sorted elements and another with unsorted elements.
Steps:
1. Divide: Divide the array into two subarrays: one with sorted elements and one with
unsorted elements.
2. Conquer: Find the minimum element in the unsorted subarray and swap it with the first
unsorted element.
3. Combine: No specific combine step is needed, as the sorted subarray grows with each
iteration.
Time Complexity: O(n^2) in the worst, average, and best cases. It is not efficient for large
datasets.
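
A minimal selection-sort sketch: the sorted prefix grows by one element per pass:

def selection_sort(arr):
    for i in range(len(arr)):
        min_idx = i
        for j in range(i + 1, len(arr)):      # find the minimum of the unsorted part
            if arr[j] < arr[min_idx]:
                min_idx = j
        arr[i], arr[min_idx] = arr[min_idx], arr[i]   # move it to position i
    return arr

print(selection_sort([64, 25, 12, 22, 11]))   # [11, 12, 22, 25, 64]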

Strassen's Matrix Multiplication


Idea: Strassen's Matrix Multiplication is an efficient algorithm for multiplying two matrices by
dividing them into smaller submatrices and recursively calculating their products.
Steps:
1. Divide: Divide each matrix into four smaller submatrices.
2. Conquer: Recursively compute seven products of these submatrices.
3. Combine: Combine these products to obtain the final result using addition and
subtraction.
Time Complexity: O(n^(log2 7)) ≈ O(n^2.81), which is asymptotically faster than the
standard O(n^3) matrix multiplication algorithm for large matrices.
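
A compact Strassen sketch using NumPy, under the simplifying assumption that both matrices are square with a power-of-two dimension (padding is omitted to keep the sketch short):

import numpy as np

def strassen(A, B):
    n = A.shape[0]
    if n == 1:
        return A * B                          # 1x1 base case
    k = n // 2
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]

    # The seven recursive products that replace the usual eight.
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)

    # Combine the products into the four quadrants of the result.
    C11 = M1 + M4 - M5 + M7
    C12 = M3 + M5
    C21 = M2 + M4
    C22 = M1 - M2 + M3 + M6
    return np.vstack((np.hstack((C11, C12)), np.hstack((C21, C22))))

A = np.random.randint(0, 10, (4, 4))
B = np.random.randint(0, 10, (4, 4))
assert np.array_equal(strassen(A, B), A @ B)   # agrees with ordinary multiplication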

Analysis of Algorithms for These Problems


The analysis of these algorithms involves determining their time and space complexities:

Binary Search: O(log n) time complexity.


Merge Sort: O(n log n) time complexity, efficient for large datasets.
Quick Sort: O(n log n) on average, but it may be O(n^2) in the worst case.
Selection Sort: O(n^2) time complexity, not suitable for large datasets.
Strassen's Matrix Multiplication: O(n^2.81) time complexity, faster than traditional matrix
multiplication for large matrices.

Unit: 2
Greedy Method: General Method
General Idea: The Greedy Method is a problem-solving strategy where, at each step, you
choose the best option available without considering the consequences of your choice on
future steps. It's suitable for problems where local optimization leads to a globally optimal
solution.
Steps:
1. Selection: At each step, choose the best available option.
2. Feasibility: Ensure that the selected option satisfies problem constraints.
3. Optimality: Prove that the selected option leads to an optimal solution.
Use Cases: The Greedy Method is often used in various optimization problems, including
scheduling, graph algorithms, and more.

Knapsack Problem
Idea: The Knapsack Problem is an optimization problem where you need to select items
from a given set, each with a weight and a value, to maximize the total value while not
exceeding a given weight capacity.
Greedy Approach:
Fractional Knapsack: Sort items by value-to-weight ratio and take fractions of items
as long as they fit.
0/1 Knapsack: The greedy choice does not guarantee an optimal solution here;
dynamic programming is used instead, so that each item is either taken whole or left out.
Analysis: The fractional knapsack has a time complexity of O(n log n), while the 0/1
knapsack has a time complexity of O(nW), where n is the number of items and W is the
capacity.
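
A greedy fractional-knapsack sketch; items are (value, weight) pairs and fractions of an item may be taken (the input format is an illustrative assumption):

def fractional_knapsack(items, capacity):
    # Sort by value-to-weight ratio, best ratio first.
    items = sorted(items, key=lambda it: it[0] / it[1], reverse=True)
    total_value = 0.0
    for value, weight in items:
        if capacity == 0:
            break
        take = min(weight, capacity)          # whole item, or only what still fits
        total_value += value * (take / weight)
        capacity -= take
    return total_value

print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))   # 240.0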

Huffman Codes
Idea: Huffman Codes are used in data compression to create variable-length codes for
characters, with shorter codes assigned to more frequent characters.
Steps:
1. Frequency Analysis: Determine the frequency of each character in the input.
2. Build Huffman Tree: Create a binary tree with characters as leaves and the most
frequent characters closer to the root.
3. Assign Codes: Assign binary codes to characters based on their position in the tree.
Analysis: Building the Huffman tree has a time complexity of O(n log n), where n is the
number of unique characters.
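
A minimal Huffman-code construction sketch using a heap; instead of an explicit tree, each heap entry carries a dictionary of partial codes, and a counter breaks frequency ties (names are illustrative):

import heapq
from collections import Counter

def huffman_codes(text):
    freq = Counter(text)
    heap = [(f, i, {ch: ""}) for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    if len(heap) == 1:                        # single distinct character edge case
        return {ch: "0" for ch in heap[0][2]}
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)     # merge the two least frequent subtrees
        f2, _, right = heapq.heappop(heap)
        merged = {ch: "0" + code for ch, code in left.items()}
        merged.update({ch: "1" + code for ch, code in right.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

print(huffman_codes("aaaabbc"))   # the most frequent character 'a' gets the shortest code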

Job Sequencing with Deadlines


Idea: Job sequencing with deadlines is an optimization problem where you need to
schedule jobs with associated deadlines and profits. The goal is to maximize the total profit
by completing jobs within their respective deadlines.
Greedy Approach:
Sort Jobs: Sort the jobs by profit in decreasing order.
Allocate Jobs: Allocate each job to the latest possible time slot such that it meets its
deadline.
Analysis: Sorting the jobs has a time complexity of O(n log n), where n is the number of
jobs.
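
A greedy job-sequencing sketch; each job is a (profit, deadline) pair and takes one unit of time (the input format is an illustrative assumption):

def job_sequencing(jobs):
    jobs = sorted(jobs, key=lambda job: job[0], reverse=True)   # highest profit first
    max_deadline = max(deadline for _, deadline in jobs)
    slots = [None] * (max_deadline + 1)       # slots[t] holds the job run at time t
    total_profit = 0
    for profit, deadline in jobs:
        for t in range(deadline, 0, -1):      # latest free slot before the deadline
            if slots[t] is None:
                slots[t] = (profit, deadline)
                total_profit += profit
                break
    return total_profit, [job for job in slots if job is not None]

print(job_sequencing([(100, 2), (19, 1), (27, 2), (25, 1), (15, 3)]))   # profit 142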

Minimum Spanning Trees


Idea: Minimum Spanning Trees (MSTs) are used to connect all vertices of a weighted graph
with the minimum total edge weight.
Greedy Algorithms:
Kruskal's Algorithm: Sort edges by weight and add them to the MST if they don't
create cycles.
Prim's Algorithm: Start with an arbitrary vertex and add the closest edge to the MST,
repeating until all vertices are included.
Analysis: Kruskal's has a time complexity of O(E log V), and Prim's has a time complexity
of O(V^2) with a simple data structure or O(E + V log V) with a more efficient data structure.
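
A minimal Kruskal's sketch that reuses the union-find idea from Unit 1; edges are (weight, u, v) triples over vertices 0..n-1 (the input format is an illustrative assumption):

def kruskal(num_vertices, edges):
    parent = list(range(num_vertices))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]     # path halving
            x = parent[x]
        return x

    mst, total = [], 0
    for weight, u, v in sorted(edges):        # consider edges in increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                          # skip edges that would create a cycle
            parent[ru] = rv
            mst.append((u, v, weight))
            total += weight
    return mst, total

edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
print(kruskal(4, edges))   # MST weight 1 + 2 + 3 = 6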

Single-Source Shortest Paths


Idea: Finding the shortest path from a source vertex to all other vertices in a weighted
graph is a common problem.
Algorithms:
Dijkstra's Algorithm (greedy): Repeatedly select the unvisited vertex with the smallest
tentative distance and relax its neighbors' distances.
Bellman-Ford Algorithm (not greedy, but often discussed alongside it): Iteratively relax
all edges until the shortest paths are found; it works even with negative edge weights.
Analysis: Dijkstra's has a time complexity of O((V + E) log V) with a binary heap (or
O(E + V log V) with a Fibonacci heap), and Bellman-Ford has a time complexity of O(VE)
but can handle negative weights.
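
A minimal Dijkstra sketch using a binary heap; the adjacency-list format graph[u] = [(v, weight), ...] is an illustrative assumption:

import heapq

def dijkstra(graph, source):
    dist = {v: float("inf") for v in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)            # closest not-yet-finalized vertex
        if d > dist[u]:
            continue                          # stale heap entry, skip
        for v, w in graph[u]:
            if d + w < dist[v]:               # relax edge (u, v)
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

graph = {
    'A': [('B', 1), ('C', 4)],
    'B': [('C', 2), ('D', 6)],
    'C': [('D', 3)],
    'D': [],
}
print(dijkstra(graph, 'A'))   # {'A': 0, 'B': 1, 'C': 3, 'D': 6}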

Analysis of Algorithms for These Problems


The analysis of these algorithms involves determining their time and space complexities. Time
complexity describes how the algorithm's running time scales with input size, and space
complexity quantifies the amount of memory required.

The time complexity of Greedy Algorithms varies depending on the specific problem.
Knapsack Problem: O(n log n) for sorting items (Fractional Knapsack), O(nW) for 0/1
Knapsack (where n is the number of items and W is the capacity).
Huffman Codes: O(n log n) to build the Huffman tree.
Job Sequencing with Deadlines: O(n log n) for sorting jobs.
Minimum Spanning Trees: O(E log V) for Kruskal's and O(V^2) for Prim's.
Single-Source Shortest Paths: O(VE) for Bellman-Ford and O((V + E) log V) for Dijkstra's with a binary heap.

Backtracking: General Method


General Idea: Backtracking is a problem-solving strategy where you explore the solution
space by making a series of choices. If a choice leads to a dead-end, you backtrack and try
another choice.
Steps:
1. Choose: Make a choice that seems promising.
2. Explore: Explore the consequences of the choice, often recursively.
3. Unchoose: If the choice leads to a dead-end, undo it and try another choice.
Use Cases: Backtracking is used when problems have multiple solutions, and you need to
find all or the best ones.

8-Queens Problem
Idea: The 8-Queens problem is a classic puzzle where you need to place eight queens on
an 8x8 chessboard so that no two queens threaten each other.
Backtracking Approach:
Choose: Place queens one by one in different columns.
Explore: Check if the placement violates any rules (horizontal, vertical, and diagonal),
and recursively proceed.
Unchoose: If a valid placement isn't found, backtrack and explore other possibilities.
Analysis: The 8-Queens problem is typically solved using backtracking; a loose upper
bound on the search is O(8^8), since each of the eight queens has eight candidate
columns, though pruning eliminates most placements in practice.
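
A backtracking sketch for the N-Queens problem (N = 8 by default); queens[r] stores the column of the queen in row r, so placing one queen per row is implicit:

def solve_n_queens(n=8):
    solutions = []
    queens = []

    def safe(row, col):
        for r, c in enumerate(queens):
            if c == col or abs(c - col) == abs(r - row):
                return False                  # same column or same diagonal
        return True

    def place(row):
        if row == n:
            solutions.append(list(queens))    # all rows filled: record a solution
            return
        for col in range(n):
            if safe(row, col):
                queens.append(col)            # choose
                place(row + 1)                # explore
                queens.pop()                  # unchoose (backtrack)

    place(0)
    return solutions

print(len(solve_n_queens(8)))   # 92 distinct solutions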

Graph Coloring
Idea: Graph coloring involves assigning colors to vertices in such a way that no adjacent
vertices have the same color. The Backtracking approach can be used to find a proper
vertex coloring.
Backtracking Approach:
Choose: Pick a color for a vertex and move to the next vertex.
Explore: Check if the chosen color is valid for the vertex (i.e., not used by any adjacent
vertices) and proceed.
Unchoose: If a valid coloring isn't found, backtrack and explore other color options.
Analysis: The time complexity for graph coloring using backtracking depends on the
specific graph, but it can be exponential in the worst case.
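
A backtracking sketch for coloring an undirected graph with at most m colors, given an adjacency-list dictionary (the input format is an illustrative assumption):

def graph_coloring(graph, m):
    vertices = list(graph)
    colors = {}

    def valid(v, color):
        return all(colors.get(u) != color for u in graph[v])   # no adjacent clash

    def assign(i):
        if i == len(vertices):
            return True                       # every vertex has a valid color
        v = vertices[i]
        for color in range(m):
            if valid(v, color):
                colors[v] = color             # choose
                if assign(i + 1):             # explore
                    return True
                del colors[v]                 # unchoose (backtrack)
        return False

    return dict(colors) if assign(0) else None

triangle = {'A': ['B', 'C'], 'B': ['A', 'C'], 'C': ['A', 'B']}
print(graph_coloring(triangle, 3))   # a valid 3-coloring
print(graph_coloring(triangle, 2))   # None: a triangle is not 2-colorable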

Hamiltonian Cycles
Idea: A Hamiltonian Cycle is a path in a graph that visits every vertex exactly once and
returns to the starting vertex. Finding Hamiltonian Cycles is an NP-complete problem.
Backtracking Approach:
Choose: Start at a vertex and add it to the path.
Explore: Recursively try to add adjacent unvisited vertices to the path.
Unchoose: If a Hamiltonian cycle isn't found, backtrack and explore other paths.
Analysis: The time complexity for finding Hamiltonian Cycles using backtracking is typically
factorial (O(N!)), as it explores all possible permutations of vertices.
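
A backtracking sketch that searches for one Hamiltonian cycle in an undirected graph given as an adjacency list (the input format is an illustrative assumption):

def hamiltonian_cycle(graph):
    start = next(iter(graph))
    path = [start]
    visited = {start}

    def extend(v):
        if len(path) == len(graph):
            return start in graph[v]          # cycle must close back to the start
        for u in graph[v]:
            if u not in visited:
                path.append(u)                # choose
                visited.add(u)
                if extend(u):                 # explore
                    return True
                path.pop()                    # unchoose (backtrack)
                visited.remove(u)
        return False

    return path + [start] if extend(start) else None

square = {'A': ['B', 'D'], 'B': ['A', 'C'], 'C': ['B', 'D'], 'D': ['C', 'A']}
print(hamiltonian_cycle(square))   # e.g. ['A', 'B', 'C', 'D', 'A']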

Analysis of These Problems


The analysis of Backtracking algorithms involves determining their time and space complexities.
Time complexity describes how the algorithm's running time scales with input size, and space
complexity quantifies the amount of memory required.

The time complexity of Backtracking algorithms varies depending on the specific problem.
8-Queens Problem: O(8^8) as a loose upper bound, since there are 8 candidate columns
for each of the 8 queens.
Graph Coloring: O(m^N), where m is the number of colors and N is the number of vertices.
Hamiltonian Cycles: O(N!), which is factorial time complexity due to the combinatorial
nature of the problem.
