UNIT 1: Basics & Algorithm Analysis

1. Algorithm

An algorithm is a well-defined, step-by-step procedure for solving a problem or performing a
computation. It takes an input, processes it using a finite number of well-defined steps, and
produces an output. Algorithms must be:

 Definite (clear and unambiguous instructions).
 Correct (produce the correct output for every valid input).
 Finite (terminate after a finite number of steps).
 Efficient (execute within reasonable time and space).
 Independent (work on various computing platforms).

Example: The Euclidean algorithm for finding the greatest common divisor (GCD).
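As a sketch of that example (the function name `gcd` is illustrative), the Euclidean algorithm repeatedly replaces the pair (a, b) with (b, a mod b) until the remainder is zero:

```cpp
// Euclidean algorithm: gcd(a, b) = gcd(b, a mod b); stops when b == 0.
int gcd(int a, int b) {
    while (b != 0) {
        int r = a % b; // remainder of a divided by b
        a = b;         // shift the pair down
        b = r;
    }
    return a; // last non-zero remainder is the GCD
}
```

For instance, gcd(48, 18) reduces as 48 = 2·18 + 12, 18 = 1·12 + 6, 12 = 2·6 + 0, so the answer is 6. Note that every step is definite, the loop is finite (b strictly decreases), and the result is correct for all valid inputs, matching the properties listed above.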

2. Asymptotic Notations

Asymptotic notations describe the behavior of an algorithm's running time (or space) as the
input size n approaches infinity. They are the standard tools for analyzing time and space complexity.

 Big-O (O): Represents the upper bound or worst-case time complexity. It provides a
guarantee that an algorithm will not take longer than a certain time.
o Example: If an algorithm runs in O(n²) time, its execution time grows
quadratically with input size.
 Small-o (o): Represents a loose upper bound where the function grows strictly slower
than another function.
o Example: f(n) = o(g(n)) means f(n) grows slower than g(n) but is not
asymptotically tight.
 Omega (Ω): Represents the lower bound or best-case time complexity, ensuring that an
algorithm will take at least this much time.
o Example: If an algorithm runs in Ω(n log n) time, then it will not run faster than
this in any case.
 Theta (Θ): Represents a tight bound, meaning the algorithm's running time is bounded
both above and below by the same function, up to constant factors.
o Example: If an algorithm runs in Θ(n log n), its running time grows
proportionally to n log n for large inputs.

3. Time Complexity
Time complexity measures the amount of time an algorithm takes to complete as a function of
the input size n. It depends on:

 Number of operations performed


 Input size (n)
 Efficiency of each operation

Example: Linear Search has a time complexity of O(n) since it checks every element in the
worst case.
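A minimal sketch of that example in C++ (the function name `linearSearch` is illustrative):

```cpp
#include <vector>
using namespace std;

// Linear search: compare the target against each element in turn.
// Worst case (target absent or in the last slot) inspects all n elements: O(n).
int linearSearch(const vector<int>& a, int target) {
    for (int i = 0; i < (int)a.size(); i++)
        if (a[i] == target)
            return i; // found: return the index of the match
    return -1;        // scanned everything without a match
}
```

The best case (target in the first slot) takes one comparison, illustrating why worst, best, and average cases are analyzed separately below.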

4. Space Complexity

Space complexity refers to the amount of memory an algorithm needs to execute. It includes:

 Fixed space: Constants, program variables.


 Dynamic space: Data structures like arrays, linked lists, recursion stacks.

Example: Recursive functions generally have higher space complexity due to stack usage.

5. Worst, Best, and Average Case Analysis

 Worst Case: The maximum time an algorithm takes (e.g., searching for an element that
does not exist in an unsorted array).
 Best Case: The minimum time an algorithm takes (e.g., searching for the first element in
an unsorted array).
 Average Case: The expected time an algorithm takes over all possible inputs.

Example: QuickSort has a worst-case complexity of O(n²) but an average case of O(n log n).

6. Recursive Algorithm

A recursive algorithm is one that calls itself to solve a smaller instance of the same problem. It
consists of:

 Base case: The condition when recursion stops.


 Recursive case: The function calling itself with modified parameters.

Example:

int factorial(int n) {
    if (n == 0) return 1;        // Base case
    return n * factorial(n - 1); // Recursive case
}

7. Recurrence Relations

A recurrence relation is an equation that expresses a function in terms of its smaller inputs.

 Substitution Method: Solve by expanding until a pattern emerges.


 Recursion Tree Method: Visualize recursion as a tree and sum all levels.
 Master’s Theorem: Used to analyze divide-and-conquer algorithms of the form:

T(n) = aT(n/b) + O(n^d)

where a, b, and d are constants.

Example: MergeSort recurrence relation is:

T(n) = 2T(n/2) + O(n)

which solves to O(n log n).
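This can be checked by repeated substitution (equivalently, summing the recursion tree level by level), writing the O(n) merge cost as cn for some constant c:

```latex
T(n) = 2T(n/2) + cn
     = 4T(n/4) + 2cn
     = 8T(n/8) + 3cn
     \;\vdots
     = 2^{k}\,T(n/2^{k}) + k\,cn
```

Setting k = log₂ n gives T(n) = n·T(1) + cn log₂ n = O(n log n). The same answer follows from Master's Theorem with a = 2, b = 2, d = 1 (the case a = b^d).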

UNIT 2: Algorithmic Strategies


8. Brute Force

A brute force algorithm tries all possible solutions and selects the best one.
Example: Checking every substring to find the longest palindrome takes O(n³) time: there are
O(n²) substrings, and verifying each one takes O(n).

9. Greedy Algorithm

A greedy algorithm makes locally optimal choices at each step in the hope of finding a global
optimum.
Example: Huffman coding for data compression.

10. Dynamic Programming (DP)


A dynamic programming algorithm breaks a problem into smaller overlapping subproblems
and stores intermediate results to avoid recomputation.
Example: Fibonacci series using DP runs in O(n) instead of O(2ⁿ) (plain recursion).
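A minimal bottom-up sketch of that example (the function name `fibDP` is illustrative):

```cpp
#include <vector>
using namespace std;

// Bottom-up DP: each Fibonacci number is computed once and stored,
// so the loop runs in O(n) instead of the O(2^n) of naive recursion.
long long fibDP(int n) {
    if (n <= 1) return n;
    vector<long long> dp(n + 1);
    dp[0] = 0;
    dp[1] = 1;
    for (int i = 2; i <= n; i++)
        dp[i] = dp[i - 1] + dp[i - 2]; // reuse stored subproblem results
    return dp[n];
}
```

The table `dp` is exactly the "stored intermediate results" mentioned above: the naive recursion recomputes fib(n-2) inside both fib(n-1) and fib(n), while the DP version reads it back in O(1).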

11. Backtracking

A backtracking algorithm explores all possible solutions and discards those that fail to satisfy
constraints.
Example: N-Queens problem, Sudoku Solver.
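A compact sketch of backtracking on the N-Queens example, counting valid placements (the function name `placeQueens` is illustrative; `col[r]` stores the column of the queen in row r):

```cpp
#include <vector>
#include <cstdlib>
using namespace std;

// Place one queen per row. If a column or diagonal conflict appears,
// the branch is abandoned (backtracking) and the next column is tried.
int placeQueens(int row, int n, vector<int>& col) {
    if (row == n) return 1; // all queens placed: one valid arrangement
    int count = 0;
    for (int c = 0; c < n; c++) {
        bool safe = true;
        for (int r = 0; r < row; r++)
            if (col[r] == c || abs(col[r] - c) == row - r) { // same column or diagonal
                safe = false;
                break;
            }
        if (safe) {
            col[row] = c;                           // tentative placement
            count += placeQueens(row + 1, n, col);  // explore deeper
            // returning to the loop discards the placement (backtracks)
        }
    }
    return count;
}
```

For n = 4 this finds 2 arrangements; constraint checks prune the vast majority of the n^n naive placements.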

UNIT 3: Graph and Tree Algorithms


(Detailed Notes)
Graph and tree algorithms are fundamental in computer science, used in networking, pathfinding,
scheduling, and various optimizations. This unit focuses on graph traversal, shortest path algorithms,
minimum spanning trees, topological sorting, and network flow algorithms.

1. Graphs and Their Representations

A graph is a data structure consisting of vertices (nodes) and edges (connections between nodes). It is
represented as:

 Adjacency Matrix: A 2D array where each cell (i, j) stores 1 if an edge exists between i and
j, otherwise 0.
o Space Complexity: O(V²)
 Adjacency List: An array of linked lists where each index represents a vertex, and the list
contains all adjacent vertices.
o Space Complexity: O(V + E)

Types of Graphs

1. Directed Graph (Digraph): Edges have directions (e.g., a one-way road).


2. Undirected Graph: Edges do not have direction (e.g., a two-way road).
3. Weighted Graph: Edges have weights representing distances or costs.
4. Unweighted Graph: All edges have equal weight.
5. Cyclic Graph: Contains cycles (a path that returns to the starting vertex).
6. Acyclic Graph: No cycles exist.
7. Connected Graph: Every vertex is reachable from any other vertex.
8. Disconnected Graph: At least one vertex is isolated from the rest.
2. Graph Traversal Algorithms

A. Depth-First Search (DFS)

DFS is a traversal algorithm that explores as far as possible before backtracking.

Algorithm

1. Start at a chosen vertex.


2. Visit the vertex and mark it as visited.
3. Recursively visit all unvisited adjacent vertices.
4. Backtrack if no more unvisited neighbors exist.

Implementation (Recursive)
#include <iostream>
#include <vector>
using namespace std;

void DFS(vector<int> adj[], vector<bool> &visited, int node) {
    visited[node] = true;
    cout << node << " ";
    for (int neighbor : adj[node]) {
        if (!visited[neighbor])
            DFS(adj, visited, neighbor);
    }
}

int main() {
    const int V = 5; // must be a compile-time constant to size the array
    vector<int> adj[V] = {{1, 2}, {0, 3, 4}, {0}, {1}, {1}};
    vector<bool> visited(V, false);

    DFS(adj, visited, 0);
    return 0;
}

Time Complexity: O(V + E)

Applications:

 Maze and puzzle solving


 Detecting cycles in graphs
 Topological sorting
B. Breadth-First Search (BFS)

BFS explores all neighbors at the present depth before moving to the next level.

Algorithm

1. Start at a chosen vertex and enqueue it.


2. Dequeue a vertex, mark it as visited, and enqueue all unvisited adjacent vertices.
3. Repeat until the queue is empty.

Implementation (Using Queue)


#include <iostream>
#include <vector>
#include <queue>
using namespace std;

void BFS(vector<int> adj[], int V, int start) {
    vector<bool> visited(V, false);
    queue<int> q;

    visited[start] = true;
    q.push(start);

    while (!q.empty()) {
        int node = q.front();
        q.pop();
        cout << node << " ";

        for (int neighbor : adj[node]) {
            if (!visited[neighbor]) {
                visited[neighbor] = true;
                q.push(neighbor);
            }
        }
    }
}

int main() {
    const int V = 5; // must be a compile-time constant to size the array
    vector<int> adj[V] = {{1, 2}, {0, 3, 4}, {0}, {1}, {1}};

    BFS(adj, V, 0);
    return 0;
}

Time Complexity: O(V + E)

Applications:

 Shortest path in unweighted graphs


 Social networking (friend recommendations)
 Web crawling

3. Shortest Path Algorithms

A. Dijkstra’s Algorithm

Dijkstra’s algorithm finds the shortest path from a source vertex to all other vertices in a weighted graph
(non-negative weights).

Algorithm

1. Initialize distances from the source to all vertices as infinity, except for the source itself (0).
2. Use a priority queue to select the vertex with the minimum distance.
3. Update the distances of adjacent vertices if a shorter path is found.
4. Repeat until all vertices are processed.

Time Complexity: O((V + E) log V)
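The steps above can be sketched with a min-priority queue of (distance, vertex) pairs (the function name `dijkstra` and the adjacency-list layout are illustrative; `adj[u]` holds `{v, w}` pairs, and non-negative weights are assumed):

```cpp
#include <vector>
#include <queue>
#include <climits>
using namespace std;

// Returns the shortest distance from src to every vertex.
vector<int> dijkstra(const vector<vector<pair<int,int>>>& adj, int src) {
    int V = adj.size();
    vector<int> dist(V, INT_MAX); // step 1: all distances start at infinity
    priority_queue<pair<int,int>, vector<pair<int,int>>,
                   greater<pair<int,int>>> pq; // min-heap on (dist, vertex)
    dist[src] = 0;
    pq.push({0, src});
    while (!pq.empty()) {
        auto [d, u] = pq.top(); pq.pop(); // step 2: closest unsettled vertex
        if (d > dist[u]) continue;        // stale queue entry: skip
        for (auto [v, w] : adj[u]) {
            if (dist[u] + w < dist[v]) {  // step 3: relax edge (u, v)
                dist[v] = dist[u] + w;
                pq.push({dist[v], v});
            }
        }
    }
    return dist;
}
```

Each edge is relaxed at most once per queue pop, giving the O((V + E) log V) bound.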

B. Floyd-Warshall Algorithm

This algorithm finds the shortest paths between all pairs of vertices using dynamic programming.

Algorithm

1. Create a distance matrix initialized with direct edge weights (infinity if no direct edge exists).
2. Use an intermediate vertex k and update distances using:

dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])

3. Repeat for all vertices.

Time Complexity: O(V³)

Applications:

 Network routing
 Flight price calculation between cities
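The update rule above translates almost directly into three nested loops over a distance matrix (the function name `floydWarshall` and the INF sentinel are illustrative; INF is kept well below INT_MAX so the addition cannot overflow):

```cpp
#include <vector>
#include <algorithm>
using namespace std;

const int INF = 1e9; // sentinel for "no direct edge"

// dist starts as the direct-edge weight matrix and is updated in place
// to all-pairs shortest distances.
void floydWarshall(vector<vector<int>>& dist) {
    int V = dist.size();
    for (int k = 0; k < V; k++)          // intermediate vertex
        for (int i = 0; i < V; i++)
            for (int j = 0; j < V; j++)
                if (dist[i][k] < INF && dist[k][j] < INF) // both halves exist
                    dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j]);
}
```

The three nested loops over V vertices give the O(V³) bound directly.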
4. Minimum Spanning Tree (MST)

A spanning tree is a subset of edges that connects all vertices with the minimum total edge weight.

A. Prim’s Algorithm

Greedy approach that grows the MST by selecting the smallest edge at each step.

 Time Complexity: O(E log V)

B. Kruskal’s Algorithm

Sorts edges by weight and adds them while avoiding cycles using the Union-Find method.

 Time Complexity: O(E log E)
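A compact sketch of Kruskal's algorithm with Union-Find (the names `DSU` and `kruskalMST` are illustrative; edges are `{weight, u, v}` triples and the graph is assumed connected):

```cpp
#include <vector>
#include <array>
#include <algorithm>
using namespace std;

// Union-Find with path compression: detects whether two vertices
// are already in the same component (adding the edge would form a cycle).
struct DSU {
    vector<int> parent;
    DSU(int n) : parent(n) { for (int i = 0; i < n; i++) parent[i] = i; }
    int find(int x) { return parent[x] == x ? x : parent[x] = find(parent[x]); }
    bool unite(int a, int b) {
        a = find(a); b = find(b);
        if (a == b) return false; // same component: skip the edge
        parent[a] = b;
        return true;
    }
};

// Sort edges by weight, then greedily add each edge that joins
// two different components. Returns the total MST weight.
int kruskalMST(int V, vector<array<int,3>> edges) {
    sort(edges.begin(), edges.end()); // ascending by weight
    DSU dsu(V);
    int total = 0;
    for (auto [w, u, v] : edges)
        if (dsu.unite(u, v)) total += w;
    return total;
}
```

Sorting dominates the cost, which is where the O(E log E) bound comes from.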

Applications of MST:

 Network design
 Clustering algorithms
 Image segmentation

5. Topological Sorting

Topological sorting orders vertices in a Directed Acyclic Graph (DAG) such that for every directed edge
(u, v), u appears before v.

Algorithm (Using DFS)

1. Perform DFS and push vertices onto a stack after all descendants are visited.
2. Pop and print stack contents.

Time Complexity: O(V + E)

Applications:

 Scheduling jobs/tasks
 Resolving dependencies (e.g., package managers)
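The two-step DFS procedure above can be sketched as follows (the names `topoDFS` and `topoSort` are illustrative; a valid DAG is assumed):

```cpp
#include <vector>
#include <algorithm>
using namespace std;

// A vertex is appended to the order only after all of its descendants
// have been visited (this plays the role of the stack push).
void topoDFS(int u, const vector<vector<int>>& adj,
             vector<bool>& visited, vector<int>& order) {
    visited[u] = true;
    for (int v : adj[u])
        if (!visited[v]) topoDFS(v, adj, visited, order);
    order.push_back(u); // all descendants done
}

vector<int> topoSort(const vector<vector<int>>& adj) {
    int V = adj.size();
    vector<bool> visited(V, false);
    vector<int> order;
    for (int u = 0; u < V; u++)
        if (!visited[u]) topoDFS(u, adj, visited, order);
    reverse(order.begin(), order.end()); // equivalent to popping the stack
    return order;
}
```

For every edge (u, v), u ends up before v in the returned order, which is exactly the topological property.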
6. Network Flow Algorithm

Ford-Fulkerson Algorithm (Maximum Flow)

Used to find the maximum possible flow in a flow network where each edge has a capacity limit.

Algorithm

1. Start with 0 flow and augment paths using BFS/DFS.


2. Reduce capacities along paths and repeat.

Time Complexity: O(E * max_flow)

Applications:

 Airline scheduling
 Traffic management
 Bipartite matching
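The augmenting-path idea can be sketched on a residual capacity matrix, using DFS to find paths (the names `dfsAugment` and `maxFlow` are illustrative; `cap[u][v]` is the remaining capacity of edge (u, v)):

```cpp
#include <vector>
#include <algorithm>
#include <climits>
using namespace std;

// Try to push flow from u toward sink t along edges with residual capacity.
int dfsAugment(int u, int t, int pushed, vector<vector<int>>& cap,
               vector<bool>& seen) {
    if (u == t) return pushed;
    seen[u] = true;
    for (int v = 0; v < (int)cap.size(); v++) {
        if (!seen[v] && cap[u][v] > 0) {
            int f = dfsAugment(v, t, min(pushed, cap[u][v]), cap, seen);
            if (f > 0) {
                cap[u][v] -= f; // consume forward capacity
                cap[v][u] += f; // add residual (undo) capacity
                return f;
            }
        }
    }
    return 0; // no augmenting path through u
}

// Start with 0 flow and keep augmenting until no path remains.
int maxFlow(vector<vector<int>> cap, int s, int t) {
    int flow = 0, f;
    do {
        vector<bool> seen(cap.size(), false);
        f = dfsAugment(s, t, INT_MAX, cap, seen);
        flow += f;
    } while (f > 0);
    return flow;
}
```

The reverse-capacity update is the key step: it lets a later path cancel flow sent down a poor earlier choice.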

Summary
Algorithm Time Complexity Applications

DFS O(V + E) Graph traversal, cycle detection

BFS O(V + E) Shortest path in unweighted graphs

Dijkstra O((V + E) log V) Shortest path in weighted graphs

Floyd-Warshall O(V³) All-pairs shortest path

Prim’s Algorithm O(E log V) Minimum spanning tree

Kruskal’s Algorithm O(E log E) Minimum spanning tree

Topological Sorting O(V + E) Task scheduling

Ford-Fulkerson O(E * max_flow) Maximum flow problem

UNIT 4: TRACTABLE AND INTRACTABLE PROBLEMS (DETAILED EXPLANATION)
Computational problems are categorized based on their tractability, meaning how efficiently they can
be solved. Problems that can be solved in polynomial time (O(n^k)) are considered tractable, while
problems requiring exponential or super-polynomial time are considered intractable. This unit explores
computability, complexity classes, NP-completeness, Cook’s theorem, and advanced algorithmic
techniques like approximation and randomized algorithms.

1. COMPUTABILITY OF ALGORITHMS
Computability theory studies whether a problem can be solved using an algorithm. A problem is said to
be computable if an algorithm exists that produces a correct answer in a finite number of steps.

Decidable and Undecidable Problems

1. Decidable Problems: Problems for which an algorithm exists that always produces a correct
answer in finite time.
o Example: Determining if a given number is prime (O(√n)).
2. Undecidable Problems: Problems for which no algorithm can determine the correct answer for
all inputs.
o Example: The Halting Problem (Alan Turing, 1936) – It is impossible to create an
algorithm that decides whether another program will halt or run forever.

2. COMPLEXITY THEORY AND CLASSES OF PROBLEMS
Complexity theory studies how efficiently problems can be solved. Problems are classified into
complexity classes based on the computational resources required.

A. The Complexity Classes

P (Polynomial Time Complexity)


 Definition: Class of problems solvable in polynomial time O(n^k), where k is a constant.
 Examples:
o Sorting Algorithms: Merge Sort (O(n log n))
o Graph Algorithms: Dijkstra’s Algorithm (O((V + E) log V))
o Matrix Multiplication: Strassen’s Algorithm (O(n^2.81))
 Significance: Problems in P are considered efficiently solvable.

NP (Nondeterministic Polynomial Time Complexity)

 Definition: The class of problems for which a given solution can be verified in polynomial time,
even if finding the solution is hard.
 Examples:
o Traveling Salesman Problem (TSP)
o Knapsack Problem
o Graph Coloring
 Significance: If a problem is in NP, it means that, although finding the solution may take
exponential time, verifying a given solution is easy (polynomial time).
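The defining property of NP, easy verification, can be illustrated with the Subset Sum problem: a proposed subset is the certificate, and checking it takes only linear time even though finding it may take exponential time. A minimal sketch (the function name `verifySubsetSum` is illustrative; the certificate is a list of chosen indices):

```cpp
#include <vector>
using namespace std;

// Polynomial-time verifier: confirm in O(n) that the chosen elements
// sum to the target. This is what "solution verifiable in polynomial
// time" means for an NP problem.
bool verifySubsetSum(const vector<int>& nums, long long target,
                     const vector<int>& certificate) {
    long long sum = 0;
    for (int idx : certificate) {
        if (idx < 0 || idx >= (int)nums.size()) return false; // invalid index
        sum += nums[idx];
    }
    return sum == target; // accept iff the certificate checks out
}
```

Finding a valid certificate, by contrast, may require trying all 2ⁿ subsets in the worst case.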

NP-Complete (NPC) Problems

 Definition: A problem is NP-Complete if:


1. It is in NP (a solution can be verified in polynomial time).
2. Every NP problem can be reduced to it in polynomial time.
 Examples:

o Boolean Satisfiability Problem (SAT)


o Hamiltonian Cycle Problem
o Graph Coloring Problem
 Significance: If any NP-Complete problem is solved in polynomial time, then P = NP (one of the
biggest unsolved questions in computer science).

NP-Hard Problems

 Definition: Problems that are at least as hard as NP-complete problems but do not necessarily
belong to NP (i.e., their solutions might not be verifiable in polynomial time).
 Examples:
o Halting Problem
o Generalized Chess (Finding the best move is NP-Hard)
 Significance: NP-hard problems are generally intractable unless P = NP.
3. COOK’S THEOREM (FIRST NP-COMPLETE PROBLEM)
A. Statement of Cook’s Theorem (1971)

 Proposed by: Stephen Cook


 Statement: The Boolean Satisfiability Problem (SAT) is NP-Complete.
 Significance:
o SAT was the first NP-complete problem.
o If SAT can be solved in polynomial time, then P = NP.
o Established the foundation for proving other problems NP-complete via polynomial-
time reductions.

B. Reduction Technique

To prove a problem is NP-Complete, we show:

1. The problem belongs to NP.


2. A known NP-Complete problem (like SAT) can be reduced to it in polynomial time.

4. STANDARD NP-COMPLETE PROBLEMS AND REDUCTION TECHNIQUES
A. Common NP-Complete Problems

1. Boolean Satisfiability Problem (SAT): Can a Boolean formula be satisfied?


2. Graph Coloring Problem: Can a graph be colored with k colors such that no adjacent nodes
share the same color?
3. Traveling Salesman Problem (TSP): Find the shortest route that visits all cities exactly once.
4. Vertex Cover Problem: Find the minimum number of vertices covering all edges in a graph.
5. Subset Sum Problem: Can a subset of numbers sum to a given value?

B. Reduction Techniques

Reduction is the process of transforming one problem into another.


1. Transforming SAT to 3-SAT (Converting Boolean expressions into CNF with three literals per
clause).
2. Reducing 3-SAT to CLIQUE Problem (Graph representation of Boolean formulas).
3. Hamiltonian Path to TSP (Converting a graph problem into a path optimization problem).

5. ADVANCED TOPICS IN COMPLEXITY


A. Approximation Algorithms

Since NP-hard problems are difficult to solve exactly, approximation algorithms provide near-optimal
solutions within a guaranteed factor.

Approximation Ratio:

 If an algorithm guarantees a solution at most α times the optimal solution, it has an


approximation ratio of α.
 Example:
o Greedy Algorithm for Vertex Cover: Provides a 2-approximation.
o Approximate TSP using MST: Provides a 2-approximation using a Minimum Spanning
Tree (MST).
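A sketch of the greedy 2-approximation for Vertex Cover (the function name `approxVertexCover` is illustrative): repeatedly pick any edge not yet covered and add both of its endpoints. Since any optimal cover must contain at least one endpoint of each picked edge, and the picked edges share no endpoints, the result is at most twice the optimum.

```cpp
#include <vector>
#include <utility>
using namespace std;

// 2-approximation: for each uncovered edge, take BOTH endpoints.
vector<int> approxVertexCover(int V, const vector<pair<int,int>>& edges) {
    vector<bool> inCover(V, false);
    vector<int> cover;
    for (auto [u, v] : edges) {
        if (!inCover[u] && !inCover[v]) { // edge not yet covered
            inCover[u] = inCover[v] = true;
            cover.push_back(u);
            cover.push_back(v);
        }
    }
    return cover; // |cover| <= 2 * OPT
}
```

On a triangle (3 edges), this returns 2 vertices, which also happens to be optimal there.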

B. Randomized Algorithms

Randomized algorithms use randomness to improve performance.

 Monte Carlo Algorithm: Runs in bounded time but may return an incorrect answer with
small probability (e.g., randomized primality testing such as Miller–Rabin).
 Las Vegas Algorithm: Always produces the correct result but has a randomized runtime
(e.g., Randomized QuickSort).
 Example:
o Randomized Min-Cut Algorithm (Graph partitioning).
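Randomized QuickSort illustrates the idea: a random pivot choice makes the expected running time O(n log n) on every input, while the output is always correctly sorted. A minimal sketch (the function name `randQuickSort` is illustrative; the RNG seed is fixed only for reproducibility):

```cpp
#include <vector>
#include <random>
#include <algorithm>
using namespace std;

static mt19937 rng(12345); // fixed seed: reproducible pivot choices

void randQuickSort(vector<int>& a, int lo, int hi) {
    if (lo >= hi) return;
    int p = uniform_int_distribution<int>(lo, hi)(rng); // random pivot index
    swap(a[p], a[hi]); // move the pivot to the end
    int pivot = a[hi], i = lo;
    for (int j = lo; j < hi; j++)
        if (a[j] < pivot) swap(a[i++], a[j]); // Lomuto partition
    swap(a[i], a[hi]); // pivot lands in its final position i
    randQuickSort(a, lo, i - 1);
    randQuickSort(a, i + 1, hi);
}
```

Because the pivot is random, no fixed adversarial input can force the O(n²) worst case on every run; only the running time varies, never the correctness.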

C. Complexity Beyond NP (P-SPACE and EXPTIME)

P-SPACE (Polynomial Space Complexity)

 Problems solvable in polynomial space but possibly exponential time.


 Example: Chess endgames with a fixed number of pieces.
EXPTIME (Exponential Time Complexity)

 Problems requiring exponential time, making them practically unsolvable for large inputs.
 Example: Solving Chess or Go optimally.

6. SUMMARY TABLE
Complexity Class Definition Examples

P Solvable in polynomial time Sorting, Shortest Path

NP Solution verifiable in polynomial time TSP, Knapsack

NP-Complete Hardest problems in NP SAT, Graph Coloring

NP-Hard At least as hard as NP-Complete Halting Problem, TSP (Optimization)

P-SPACE Solvable in polynomial space Chess Endgames

EXPTIME Requires exponential time Chess, Go
