
Asymptotic notations

1. A function t(n) is said to be in O(g(n)) (denoted t(n) ∈ O(g(n))) if t(n) is bounded above by some constant multiple of g(n) for all sufficiently large n, i.e.

t(n) ≤ c · g(n) ∀ n ≥ n0, for some constants c > 0 and n0 ≥ 0

2. A function t(n) is said to be in Ω(g(n)) (denoted t(n) ∈ Ω(g(n))) if t(n) is bounded below by some constant multiple of g(n) for all sufficiently large n, i.e.

t(n) ≥ c · g(n) ∀ n ≥ n0, for some constants c > 0 and n0 ≥ 0

3. A function t(n) is said to be in Θ(g(n)) (denoted t(n) ∈ Θ(g(n))) if it is bounded both above and below by positive constant multiples of g(n) for all sufficiently large n, i.e.

c1 · g(n) ≤ t(n) ≤ c2 · g(n) ∀ n ≥ n0, for some constants c2 ≥ c1 > 0 and n0 ≥ 0
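
For instance (an illustrative example, not part of the original notes): 100n + 5 ∈ O(n²), since 100n + 5 ≤ 100n + n = 101n ≤ 101n² for all n ≥ 5, so the definition is satisfied with c = 101 and n0 = 5.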

General plan for analysing time efficiency


Non recursive algorithms
1. Find the parameter that best denotes the input size.
2. Identify the algorithm's basic operation (generally located in the innermost loop).
3. Check whether the running time depends only on the input size. If it also depends on other factors, then the worst-case, best-case, and average-case efficiencies have to be calculated separately.
4. Set up a summation expression that denotes the number of times the algorithm's basic operation is executed.
5. Using standard summation formulas and rules, find a closed-form formula for the count, or at the very least establish its order of growth (a worked example follows this list).
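
As a minimal worked example of this plan (illustrative, not from the original notes), consider finding the largest element of an array A[0..n − 1]: the input size is n, the basic operation is the comparison inside the loop, the count does not depend on the particular input values, and

C(n) = Σ (i = 1 to n − 1) 1 = n − 1 ∈ Θ(n)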

Recursive algorithms
1. Find the parameter(s) that indicate the input size.
2. Identify the algorithm's basic operation.
3. Check whether the running time can vary on different inputs of the same size. If yes, then the worst-case, average-case, and best-case efficiencies must be investigated separately.
4. Set up a recurrence relation, with an appropriate initial condition, for the number of times the basic operation is executed.
5. Solve the recurrence relation, or at least ascertain the order of growth of its solution (a worked example follows this list).
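
As a minimal worked example of this plan (illustrative, not from the original notes), consider the recursive factorial F(n) = F(n − 1) · n with F(0) = 1: the input size is n, the basic operation is the multiplication, and the multiplication count M(n) satisfies

M(n) = M(n − 1) + 1 for n > 0, with M(0) = 0

whose solution by backward substitution is M(n) = n ∈ Θ(n).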

Techniques
Greedy
This involves constructing a solution through a series of steps, each expanding a partially constructed solution, until a complete solution is obtained. At each step, the choice made must be:

1. Feasible, i.e. it has to satisfy the problem's constraints.
2. Locally optimal, i.e. it has to be the best local choice among all feasible choices available on that step.
3. Irrevocable, i.e. once the choice is made, it cannot be changed on subsequent steps of the algorithm.
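
A minimal sketch of the greedy pattern in Python (an illustration added here, not part of the original notes): making change with the fewest coins from a canonical coin system. The function name and denominations are assumptions for illustration; each chosen coin is feasible, locally optimal, and never reconsidered.

# Greedy change-making sketch; assumes a "canonical" coin system for which
# the greedy choice is in fact globally optimal.
def greedy_change(amount, denominations=(25, 10, 5, 1)):
    coins = []
    for d in sorted(denominations, reverse=True):
        while amount >= d:          # feasible: the coin does not overshoot the amount
            coins.append(d)         # locally optimal: largest usable coin
            amount -= d             # irrevocable: the chosen coin stays chosen
    return coins

print(greedy_change(48))  # [25, 10, 10, 1, 1, 1]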

Divide and conquer


1. Divide the problem of size n into k distinct subproblems, 1 < k ≤ n, of the same type, ideally of equal size.
2. Solve the subproblems (usually recursively).
3. Combine the subproblem solutions to get a solution to the original problem.

T(n) = { g(n),              n is small
       { 2T(n/2) + f(n),    otherwise

procedure DANDC(p, q)
    if SMALL(p, q)
        then return G(p, q)
    else
        m ← DIVIDE(p, q)
        return COMBINE(DANDC(p, m), DANDC(m + 1, q))
    endif
end

Decrease and conquer


The decrease-and-conquer technique is based on exploiting the relationship between a solution to a given instance of a problem and a solution to a smaller instance of the same problem.

1. decrease by a constant:
The size of the problem is reduced by the same constant on each iteration of the algorithm.

aⁿ = aⁿ⁻¹ · a

F(n) = { F(n − 1) · a,    n > 0
       { 1,               n = 0

2. decrease by a constant factor:


The size of the problem is reduced by the same constant factor on each iteration of the algorithm.

aⁿ = (a^(n/2))²

F(n) = { (F(n/2))²,             n is even, n > 0
       { (F((n − 1)/2))² · a,   n is odd
       { 1,                     n = 0

3. variable size decrease:


The size reduction pattern varies from one iteration of the algorithm to another.

𝑔𝑐𝑑(𝑚, 𝑛) = 𝑔𝑐𝑑(𝑛, 𝑚 𝑚𝑜𝑑 𝑛)

Dynamic programming
This technique solves problems with overlapping subproblems: each smaller subproblem is solved only once, its result is recorded in a table, and the table entries are then combined into a solution to the original problem.
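
A minimal sketch of the idea in Python (illustrative, not from the original notes): Fibonacci numbers computed bottom-up, with each subproblem solved once and stored in a table.

# Bottom-up dynamic programming for Fibonacci numbers.
def fib(n):
    table = [0, 1] + [0] * max(0, n - 1)        # table[i] will hold F(i)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]  # reuse stored subproblem results
    return table[n]

print(fib(10))  # 55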

Backtracking
The principal idea is to construct the solution one component at a time. If a partially constructed solution can be developed further without violating the problem's constraints, this is done by taking the first legitimate option for the next component. If there is no legitimate option for the next component, then no alternatives for any remaining components need to be considered. In such a case, the algorithm backtracks to replace the last component of the partially constructed solution with its next option.

ALGORITHM Backtrack(X[1..i])
//Template for a generic backtracking algorithm
//Input: X[1..i] specifies the first i promising components of a solution
//Output: All the tuples representing the problem's solutions
if X[1..i] is a solution
    write X[1..i]
else
    for each element x ∈ S(i+1) consistent with X[1..i] and the constraints do
        X[i + 1] ← x
        Backtrack(X[1..i + 1])
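
A concrete instantiation of this template in Python (an illustrative sketch, not from the original notes): the n-queens problem, where component i is the column of the queen placed in row i, and consistency means no two queens share a column or a diagonal.

# Backtracking for n-queens; cols[0..i-1] gives the column of the queen
# in each of the first i rows.
def solve_queens(n, cols=()):
    i = len(cols)
    if i == n:                      # cols is a complete solution
        print(cols)
        return
    for c in range(n):              # candidate next components
        # consistent: no shared column or diagonal with earlier queens
        if all(c != cj and abs(c - cj) != i - j for j, cj in enumerate(cols)):
            solve_queens(n, cols + (c,))   # extend; backtracking happens on return

solve_queens(4)  # prints (1, 3, 0, 2) and (2, 0, 3, 1)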
Branch and bound
Compared to backtracking, branch and bound requires two additional items:
1. A way to provide, for every node, a bound on the value of the objective function for any solution that can be obtained by adding further components to the partially constructed solution represented by the node
2. The value of the best solution seen so far

In general, we can terminate a search path at the current node of a state-space tree for any one of the following three reasons:
1. The bound value of the node is not better than the best solution seen so far
2. The node represents no feasible solutions as it already violates the problem constraints.
3. The subset of feasible solutions represented by the node consists of a single point (i.e. there are no further choices to be made). In this case, the value of the objective function for this feasible solution is compared with the best solution obtained so far.

Types of problems
1. sorting
2. searching
3. graph
4. combinatorial
5. string processing

Transitive closure
The transitive closure of a directed graph with n vertices is defined as an n × n boolean matrix T = {tij}, in which the element in the i-th row and j-th column is 1 if there exists a non-trivial path (i.e. a directed path of positive length) from the i-th vertex to the j-th vertex; otherwise tij is 0.

The time efficiency of both Warshall's and Floyd's algorithms is cubic, Θ(n³).

ALGORITHM Warshall(A[1..n, 1..n])
//Implements Warshall's algorithm for computing the transitive closure
//Input: The adjacency matrix A of a digraph with n vertices
//Output: The transitive closure of the digraph
R(0) ← A
for k ← 1 to n do        //k must be the outermost loop: R(k) is built from R(k−1)
    for i ← 1 to n do
        for j ← 1 to n do
            R(k)[i, j] ← R(k−1)[i, j] or ( R(k−1)[i, k] and R(k−1)[k, j] )
return R(n)
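
A direct Python rendering of the pseudocode above (an illustrative sketch, not from the original notes), with the matrices R(k) collapsed into a single matrix updated in place, which is safe here because entries only ever change from 0 to 1.

# Warshall's algorithm on a 0/1 adjacency matrix.
def warshall(adj):
    n = len(adj)
    r = [row[:] for row in adj]          # R(0) <- A
    for k in range(n):                   # allowed intermediate vertex, outermost
        for i in range(n):
            for j in range(n):
                r[i][j] = r[i][j] or (r[i][k] and r[k][j])
    return r

# Example: edges 0->1 and 1->2 imply a path 0->2 in the closure.
print(warshall([[0, 1, 0], [0, 0, 1], [0, 0, 0]]))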

(All pairs shortest path algorithm)


ALGORITHM Floyd(W[1..n, 1..n])
//Implements Floyd's algorithm for the all-pairs shortest-paths problem
//Input: The weight matrix W of a graph with no negative-length cycle
//Output: The distance matrix of the shortest paths' lengths
for k ← 1 to n do        //again, k (the intermediate vertex) is the outermost loop
    for i ← 1 to n do
        for j ← 1 to n do
            W[i, j] ← min{ W[i, j], W[i, k] + W[k, j] }
return W
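
The same triple loop in Python (an illustrative sketch, not from the original notes), using float('inf') for absent edges; updating a single matrix in place is safe for Floyd's algorithm as well.

# Floyd's all-pairs shortest paths on a weight matrix.
def floyd(w):
    n = len(w)
    d = [row[:] for row in w]
    for k in range(n):                   # intermediate vertex, outermost
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d

INF = float('inf')
print(floyd([[0, 3, INF], [INF, 0, 2], [1, INF, 0]]))  # d[0][2] becomes 5 via vertex 1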

Minimum spanning tree


A spanning tree of a connected undirected graph is a connected acyclic subgraph (i.e. a tree) that contains all of its vertices.
If such a graph has weights assigned to its edges, then the minimum spanning tree is the spanning tree with the minimum sum of the weights of its edges.

ALGORITHM Prim(G)
//Applies Prim's algorithm to the graph G
//Input: A weighted connected graph G = ⟨V, E⟩
//Output: Et, the set of edges composing a minimum spanning tree of G
Et ← ∅
Vt ← {v0}
for i ← 1 to |V| − 1 do
    find the minimum-weight edge e = (u, v) among all edges such that u is in Vt and v is in V − Vt
    Vt ← Vt ∪ {v}
    Et ← Et ∪ {e}
return Et
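
A heap-based sketch of Prim's algorithm in Python (illustrative, not from the original notes). The adjacency-list format, with (weight, neighbour) pairs, is an assumption for illustration, and the graph is assumed connected and undirected; the heap plays the role of "find the minimum-weight crossing edge".

import heapq

def prim(graph, start):
    visited = {start}                           # Vt
    mst = []                                    # Et
    heap = [(w, start, v) for w, v in graph[start]]
    heapq.heapify(heap)
    while heap and len(visited) < len(graph):
        w, u, v = heapq.heappop(heap)           # minimum-weight candidate edge
        if v in visited:
            continue                            # both endpoints already in Vt
        visited.add(v)
        mst.append((u, v, w))
        for w2, x in graph[v]:
            if x not in visited:
                heapq.heappush(heap, (w2, v, x))
    return mst

g = {0: [(1, 1), (4, 2)], 1: [(1, 0), (2, 2)], 2: [(4, 0), (2, 1)]}
print(prim(g, 0))  # [(0, 1, 1), (1, 2, 2)]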

ALGORITHM Kruskal(G)
//Kruskal's algorithm for constructing a minimum spanning tree
//Input: A weighted connected graph G = ⟨V, E⟩
//Output: Et, the set of edges composing a minimum spanning tree of G
sort the edges E in nondecreasing order of their weights w(e1) ≤ … ≤ w(e|E|)
Et ← ∅
ecount ← 0
k ← 0
while ecount < |V| − 1 do
    k ← k + 1
    if Et ∪ {ek} is acyclic then
        ecount ← ecount + 1
        Et ← Et ∪ {ek}
return Et
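
A Python sketch of Kruskal's algorithm (illustrative, not from the original notes), where the test "Et ∪ {ek} is acyclic" is implemented the usual way with a union–find structure: an edge is safe exactly when its endpoints lie in different components.

def kruskal(n, edges):                      # edges: list of (weight, u, v)
    parent = list(range(n))
    def find(x):                            # root of x's component
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    mst = []
    for w, u, v in sorted(edges):           # nondecreasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                        # acyclic: different components
            parent[ru] = rv                 # union the two components
            mst.append((u, v, w))
    return mst

print(kruskal(3, [(1, 0, 1), (2, 1, 2), (4, 0, 2)]))  # [(0, 1, 1), (1, 2, 2)]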

Multistage graphs
Optimal BST
Let C(i, j) be the smallest average number of comparisons made in a successful search in a binary search tree made up of keys ai, …, aj, where 1 ≤ i ≤ j ≤ n.

C(i, j) = min{ C(i, k − 1) + C(k + 1, j) : i ≤ k ≤ j } + Σ (s = i to j) ps,  ∀ 1 ≤ i ≤ j ≤ n

where C(i, i − 1) = 0 and C(i, i) = pi

ALGORITHM OptimalBST(P[1..n])
//Finds an optimal binary search tree by dynamic programming
//Input: An array P[1..n] of search probabilities for a sorted list of n keys
//Output: Average number of comparisons in successful searches in the optimal BST, and table R of the subtrees' roots
for i ← 1 to n do
    C[i, i − 1] ← 0
    C[i, i] ← P[i]
    R[i, i] ← i
C[n + 1, n] ← 0
for d ← 1 to n − 1 do        //diagonal count
    for i ← 1 to n − d do
        j ← i + d
        minval ← ∞
        kmin ← i
        sum ← 0
        for k ← i to j do
            sum ← sum + P[k]
            x ← C[i, k − 1] + C[k + 1, j]
            if x < minval then
                minval ← x
                kmin ← k
        R[i, j] ← kmin
        C[i, j] ← minval + sum
return C, R

Dijkstra
(SSSP – single source shortest path algorithm)

ALGORITHM Dijkstra(G, s)
//Input: A weighted connected graph G = ⟨V, E⟩ with nonnegative weights and a source vertex s
//Output: For every vertex v in V, the length dv of a shortest path from s to v and its penultimate vertex pv
for every vertex v in V do
    dv ← ∞
    pv ← null
    Insert(Q, v, dv)
ds ← 0
Decrease(Q, s, ds)
Vt ← ∅
for i ← 0 to |V| − 1 do
    u ← DeleteMin(Q)
    Vt ← Vt ∪ {u}
    for every vertex v in V − Vt that is adjacent to u do
        x ← du + w(u, v)
        if x < dv then
            dv ← x
            pv ← u
            Decrease(Q, v, dv)
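
A heap-based Python sketch of the same algorithm (illustrative, not from the original notes). Python's heapq has no Decrease operation, so the usual workaround of pushing duplicate entries and skipping stale ones is used; the adjacency-list format is an assumption for illustration.

import heapq

def dijkstra(graph, s):                      # graph: vertex -> [(neighbour, weight)]
    d = {v: float('inf') for v in graph}
    d[s] = 0
    heap = [(0, s)]
    done = set()                             # Vt
    while heap:
        du, u = heapq.heappop(heap)          # DeleteMin
        if u in done:
            continue                         # stale duplicate entry
        done.add(u)
        for v, w in graph[u]:
            if du + w < d[v]:
                d[v] = du + w                # the Decrease step
                heapq.heappush(heap, (d[v], v))
    return d

g = {'s': [('a', 3), ('b', 5)], 'a': [('b', 1)], 'b': []}
print(dijkstra(g, 's'))  # {'s': 0, 'a': 3, 'b': 4}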

Travelling Salesman Problem


(i.e. finding the shortest Hamiltonian circuit)

Using dynamic programming


Let g(i, S) be the length of a shortest path starting at vertex i, going through all the vertices in S exactly once, and terminating at vertex 1.
Then g(1, V − {1}) is the length of an optimal salesperson tour.

g(i, S) = min { cij + g(j, S − {j}) : j ∈ S },  with the base case g(i, ∅) = ci1
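
A small Python sketch of this recurrence (the Held–Karp dynamic program; illustrative, not from the original notes), with S encoded as a bitmask over the remaining vertices and vertex 0 playing the role of vertex 1 in the formula.

from functools import lru_cache

def tsp(c):                                  # c: cost matrix
    n = len(c)
    full = (1 << n) - 2                      # all vertices except 0

    @lru_cache(maxsize=None)
    def g(i, S):                             # shortest i -> (all of S) -> 0
        if S == 0:
            return c[i][0]                   # base case g(i, empty) = c[i][0]
        return min(c[i][j] + g(j, S & ~(1 << j))
                   for j in range(1, n) if S & (1 << j))

    return g(0, full)

c = [[0, 10, 15, 20], [5, 0, 9, 10], [6, 13, 0, 12], [8, 8, 9, 0]]
print(tsp(c))  # 35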
Using branch and bound
lb = s/2

where s is the sum, over all cities, of the distances from each city to its two nearest cities.

Knapsack

using dynamic programming


F(i, j) is defined as the value of the most valuable subset of the first i items that fits into a knapsack of capacity j.

F(i, j) = { F(i − 1, j),                                    wi > j
          { max{ F(i − 1, j), vi + F(i − 1, j − wi) },      otherwise

The initial conditions are defined as follows:

𝐹(0, 𝑗) = 0 for 𝑗 ≥ 0 and 𝐹(𝑖, 0) = 0 for 𝑖 ≥ 0
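
A bottom-up Python sketch of this recurrence (illustrative, not from the original notes); F[i][j] holds the value defined above, and the function name and 0-based item indexing are assumptions for illustration.

def knapsack(weights, values, W):
    n = len(weights)
    F = [[0] * (W + 1) for _ in range(n + 1)]   # F(0, j) = F(i, 0) = 0
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            if weights[i - 1] > j:               # item i does not fit
                F[i][j] = F[i - 1][j]
            else:
                F[i][j] = max(F[i - 1][j],
                              values[i - 1] + F[i - 1][j - weights[i - 1]])
    return F[n][W]

print(knapsack([2, 1, 3, 2], [12, 10, 20, 15], 5))  # 37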

Using branch and bound


ub = v + (W − w) · (v(i+1) / w(i+1))

where:
ub: upper bound
v: total value of the items already selected
W: capacity of the knapsack
w: total weight of the items already selected
W − w: remaining capacity
v(i+1) / w(i+1): value per unit weight of the best remaining item

Other Algorithms
ALGORITHM MAXMIN(A, i, j, min, max)
//Sets min and max to the smallest and largest values in A[i..j]
//Input: An array A of integers; i, j are the bounds of the search
//Output: min and max
case
    : i = j :
        max ← min ← A[i]
    : j = i + 1 :
        if A[j] > A[i] then
            max ← A[j]
            min ← A[i]
        else
            max ← A[i]
            min ← A[j]
    : else :
        mid ← (i + j) / 2
        MAXMIN(A, i, mid, gmin, gmax)
        MAXMIN(A, mid + 1, j, hmin, hmax)
        max ← max(gmax, hmax)
        min ← min(gmin, hmin)

T(n) = { 0,               n = 1
       { 1,               n = 2
       { 2T(n/2) + 2,     otherwise

T(n) = 3n/2 − 2

(the +2 accounts for the two comparisons that combine the results of the two halves)

ALGORITHM MERGESORT(low, high)
//Sorts the global array A[low..high] by merge sort, using auxiliary array B
if low < high then
    mid ← (low + high) / 2
    MERGESORT(low, mid)
    MERGESORT(mid + 1, high)
    MERGE(low, mid, high)

ALGORITHM MERGE(low, mid, high)
//Merges the sorted subarrays A[low..mid] and A[mid + 1..high] into A[low..high]
i ← low
j ← mid + 1
k ← low
while i ≤ mid and j ≤ high do
    if A[i] ≤ A[j] then
        B[k] ← A[i]
        i ← i + 1
    else
        B[k] ← A[j]
        j ← j + 1
    k ← k + 1
while i ≤ mid do
    B[k] ← A[i]
    i ← i + 1
    k ← k + 1
while j ≤ high do
    B[k] ← A[j]
    j ← j + 1
    k ← k + 1
copy B[low..high] to A[low..high]

T(n) = { a,                  n = 1, a is a constant
       { 2T(n/2) + c·n,      n > 1, c is a constant

T(n) = O(n log₂ n)

ALGORITHM QuickSort(A[l..r])
//Sorts a subarray by quicksort
//Input: Subarray of A[0..n − 1] defined by its left and right indices l and r
//Output: Subarray A[l..r] sorted in ascending order
if l < r then
    m ← HoarePartition(A[l..r])   //m is the split position
    QuickSort(A[l..m − 1])
    QuickSort(A[m + 1..r])
return A[l..r]

(repeat-until stops looping when the condition is True)

ALGORITHM HoarePartition(A[l..r])
//Partitions a subarray using its first element as the pivot
//Input: Subarray of A[0..n − 1] defined by its left and right indices l and r (l < r)
//Output: Partition of A[l..r], with the split position returned as this function's value
p ← A[l]
i ← l
j ← r + 1
repeat
    repeat i ← i + 1 until A[i] ≥ p
    repeat j ← j − 1 until A[j] ≤ p
    swap(A[i], A[j])
until i ≥ j
swap(A[i], A[j])   //undo last swap when i ≥ j
swap(A[l], A[j])
return j

ALGORITHM EUCLID(m, n)
//Computes gcd(m, n) by Euclid's algorithm
while n ≠ 0 do
    r ← m mod n
    m ← n
    n ← r
return m
