1. A function t(n) is said to be in O(g(n)) (denoted by t(n) ∈ O(g(n))) if t(n) is bounded above by some constant multiple of g(n) for all sufficiently large n, i.e.
t(n) ≤ c · g(n) for all n ≥ n0, for some c > 0 and n0 ≥ 0
2. A function t(n) is said to be in Ω(g(n)) (denoted by t(n) ∈ Ω(g(n))) if t(n) is bounded below by some positive constant multiple of g(n) for all sufficiently large n, i.e.
t(n) ≥ c · g(n) for all n ≥ n0, for some c > 0 and n0 ≥ 0
3. A function t(n) is said to be in Θ(g(n)) (denoted by t(n) ∈ Θ(g(n))) if it is bounded both above and below by some positive constant multiples of g(n) for all sufficiently large n, i.e.
c2 · g(n) ≤ t(n) ≤ c1 · g(n) for all n ≥ n0, for some c1 ≥ c2 > 0 and n0 ≥ 0
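As a quick illustration (the function and the witness constants are my choices, not from the notes), t(n) = 3n + 5 is in O(n) with c = 4 and n0 = 5, since 3n + 5 ≤ 4n exactly when n ≥ 5:

```python
# Illustrative check that t(n) = 3n + 5 <= c * g(n) for g(n) = n,
# using the witness constants c = 4 and n0 = 5.
def t(n):
    return 3 * n + 5

def g(n):
    return n

c, n0 = 4, 5
# The definition only requires the bound from n0 onward.
assert all(t(n) <= c * g(n) for n in range(n0, 1000))
```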
Recursive algorithms
1. Find the parameter(s) that indicate the input size.
2. Identify the algorithm's basic operation.
3. Check whether the running time can vary on different inputs of the same size. If yes, then the worst-case, average-case, and best-case efficiencies must be investigated
separately.
4. Formulate a recurrence relation with an appropriate initial condition for the number of times the basic operation is executed.
5. Solve the recurrence relation, or at least ascertain the order of growth of its solution.
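As a worked instance of this plan (the example is illustrative, not from the notes): for recursive factorial, the input size is n, the basic operation is multiplication, the running time does not vary across inputs of the same size, and the recurrence is M(n) = M(n−1) + 1 with M(0) = 0, which solves to M(n) = n, i.e. Θ(n).

```python
# Recursive factorial, instrumented to count the basic operation
# (multiplication) so the recurrence M(n) = M(n-1) + 1 can be checked.
def factorial(n):
    """Return (n!, number of multiplications performed)."""
    if n == 0:                        # initial condition: M(0) = 0
        return 1, 0
    value, mults = factorial(n - 1)
    return n * value, mults + 1       # recurrence: M(n) = M(n-1) + 1
```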
Techniques
Greedy
This involves constructing a solution through a series of steps, each expanding a partially constructed solution, until a complete solution is obtained. At each step, the choice made must be feasible (it satisfies the problem's constraints), locally optimal (it is the best choice among all feasible options available at that step), and irrevocable (once made, it cannot be changed later).
Divide and conquer

T(n) = { g(n),             if n is small
       { 2T(n/2) + f(n),   otherwise
procedure DANDC(p, q)
if SMALL(p, q)
then return (G(p, q))
else
m <- DIVIDE(p, q)
return (COMBINE(DANDC(p, m), DANDC(m + 1, q)))
endif
end
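The control abstraction above can be instantiated, for example, as merge sort. The following sketch (names chosen to mirror SMALL/DIVIDE/COMBINE; the instantiation is mine, not from the notes) assumes the instance is a list to be sorted:

```python
# Merge sort as an instance of the DANDC control abstraction.
def merge_sort(a):
    if len(a) <= 1:                        # SMALL(p, q)
        return a                           # G(p, q): trivially sorted
    m = len(a) // 2                        # DIVIDE(p, q)
    left, right = merge_sort(a[:m]), merge_sort(a[m:])
    # COMBINE: merge the two sorted halves
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]
```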
1. decrease by a constant:
The size of the problem is reduced by the same constant on each iteration of the algorithm.
a^n = a^(n-1) · a

2. decrease by a constant factor:
The size of the problem is reduced by the same constant factor (typically two) on each iteration of the algorithm.

a^n = (a^(n/2))^2

F(n) = { F(n/2)^2,           n is even and n > 0
       { F((n-1)/2)^2 · a,   n is odd
       { 1,                  n = 0
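The two exponentiation recurrences can be compared directly in Python (this sketch is illustrative, not from the notes): decrease-by-one performs Θ(n) multiplications, decrease-by-a-constant-factor only Θ(log n).

```python
# Decrease by a constant: a^n = a^(n-1) * a, Theta(n) multiplications.
def power_dec_one(a, n):
    return 1 if n == 0 else power_dec_one(a, n - 1) * a

# Decrease by a constant factor: the even/odd recurrence F(n) above,
# Theta(log n) multiplications. For odd n, n // 2 equals (n - 1) // 2.
def power_dec_half(a, n):
    if n == 0:
        return 1
    half = power_dec_half(a, n // 2)
    return half * half if n % 2 == 0 else half * half * a
```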
Dynamic programming
Backtracking
The principal idea is to build the solution one component at a time. If a partially constructed solution can be developed further without violating the problem constraints, this is done by taking the first legitimate option for the next component. If there is no legitimate option for the next component, then no alternatives for any remaining components need to be considered. In such a case, the algorithm backtracks to replace the last component of the partially constructed solution with its next option.
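As a concrete sketch of this process (the n-queens instance is my illustration, not from the notes), each component is the column of the queen placed in the next row; when no column is legitimate, the algorithm backtracks by undoing the previous choice:

```python
# Backtracking for n-queens: build a solution one row at a time.
def n_queens(n):
    solutions, cols = [], []          # cols[r] = column of queen in row r

    def safe(row, col):
        # Constraint check: no shared column, no shared diagonal.
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(cols))

    def place(row):
        if row == n:                  # complete solution reached
            solutions.append(cols.copy())
            return
        for col in range(n):          # try each option in turn
            if safe(row, col):        # first legitimate option is taken
                cols.append(col)
                place(row + 1)
                cols.pop()            # backtrack: undo the choice
    place(0)
    return solutions
```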
Branch and bound

In general, we can terminate a search path at the current node of a branch-and-bound algorithm's state-space tree for any one of the following three reasons:
1. The bound value of the node is not better than the best solution seen so far
2. The node represents no feasible solutions as it already violates the problem constraints.
3. The subset of feasible solutions represented by the node consists of a single point (i.e., there are no further choices to be made). In this case, the value of the objective function for this feasible solution is compared with that of the best solution obtained so far.
Types of problems
1. sorting
2. searching
3. graph
4. combinatorial
5. string processing
Transitive closure
The transitive closure of a directed graph with n vertices is defined as an n × n boolean matrix T = {t_ij}, in which the element in the i-th row and j-th column is 1 if there exists a non-trivial path (i.e., a directed path of positive length) from the i-th vertex to the j-th vertex; otherwise t_ij is 0.
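The notes do not spell out how T is computed; the standard choice is Warshall's algorithm, sketched here with the graph given as a 0/1 adjacency matrix:

```python
# Warshall's algorithm: transitive closure T from adjacency matrix A.
def warshall(A):
    n = len(A)
    T = [row[:] for row in A]         # start from the adjacency matrix
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # allow paths i -> j passing through vertices 0..k
                T[i][j] = T[i][j] or (T[i][k] and T[k][j])
    return T
```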
ALGORITHM 𝑃𝑟𝑖𝑚𝑠(G)
//Applies Prim’s algorithm to the graph 𝐺
//Input: A weighted connected graph 𝐺 = < 𝑉, 𝐸 >
//Output: 𝐸𝑡 , the set of edges composing a minimum spanning tree of 𝐺
𝐸𝑡 ← 𝜙
𝑉𝑡 ← {𝑣0 }
for 𝑖 ← 1 to |𝑉| − 1
find the minimum cost edge 𝑒 = (𝑢, 𝑣), among all edge pairs such that 𝑢 lies in 𝑉𝑡 and 𝑣 lies in 𝑉 − 𝑉𝑡
𝑉𝑡 ← 𝑉𝑡 ∪ {𝑣}
𝐸𝑡 ← 𝐸𝑡 ∪ {𝑒}
return 𝐸𝑡
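A direct Python rendering of the pseudocode above, assuming the graph is an undirected, connected graph given as a dict of dicts of edge weights. It scans all crossing edges each round instead of maintaining a priority queue, which keeps the sketch short at the cost of efficiency:

```python
# Prim's algorithm: grow the tree one minimum-cost crossing edge at a time.
def prim(graph, start):
    vt, et = {start}, set()           # Vt and Et from the pseudocode
    while len(vt) < len(graph):
        # minimum-cost edge (u, v) with u in Vt and v in V - Vt
        u, v = min(((u, v) for u in vt for v in graph[u] if v not in vt),
                   key=lambda e: graph[e[0]][e[1]])
        vt.add(v)
        et.add((u, v))
    return et
```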
ALGORITHM 𝐾𝑟𝑢𝑠𝑘𝑎𝑙(𝐺)
//Kruskal’s algorithm for constructing a minimum spanning tree
//Input: A weighted connected graph 𝐺 = ⟨𝑉 , 𝐸⟩
//Output: 𝐸𝑡 , the set of edges composing a minimum spanning tree of 𝐺
sort the edges 𝐸 in nondecreasing order of their weights 𝑤(𝑒1 ) ≤ ⋯ ≤ 𝑤(𝑒|𝐸| )
𝐸𝑡 ← 𝜙
𝑒𝑐𝑜𝑢𝑛𝑡 ← 0
𝑘←0
while 𝑒𝑐𝑜𝑢𝑛𝑡 < |𝑉| − 1
𝑘 ←𝑘+1
if 𝐸𝑡 ∪ {𝑒𝑘 } is acyclic then
𝑒𝑐𝑜𝑢𝑛𝑡 ← 𝑒𝑐𝑜𝑢𝑛𝑡 + 1
𝐸𝑡 ← 𝐸𝑡 ∪ {𝑒𝑘 }
return 𝐸𝑡
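The acyclicity test in the pseudocode is left abstract; a common way to implement it (assumed here, not specified in the notes) is a union-find structure, so that adding an edge is acyclic exactly when its endpoints lie in different components:

```python
# Kruskal's algorithm with a simple union-find for the acyclicity test.
def kruskal(vertices, edges):
    """edges: iterable of (weight, u, v) tuples."""
    parent = {v: v for v in vertices}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    et = []
    for w, u, v in sorted(edges):           # nondecreasing weight order
        ru, rv = find(u), find(v)
        if ru != rv:                        # adding (u, v) keeps Et acyclic
            parent[ru] = rv
            et.append((w, u, v))
            if len(et) == len(vertices) - 1:
                break
    return et
```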
Multistage graphs
Optimal BST
Let C(i, j) be the smallest average number of comparisons made in a successful search in a binary search tree T_i^j made up of keys a_i, …, a_j, 1 ≤ i ≤ j ≤ n:

C(i, j) = min_{i ≤ k ≤ j} { C(i, k−1) + C(k+1, j) } + Σ_{s=i..j} p_s
for 𝑘 ← 𝑖 to 𝑗 do
𝑠𝑢𝑚 ← 𝑠𝑢𝑚 + 𝑝𝑘
𝑥 ← 𝐶[𝑖, 𝑘 − 1] + 𝐶[𝑘 + 1, 𝑗]
if 𝑥 < 𝑚𝑖𝑛𝑣𝑎𝑙 then
𝑚𝑖𝑛𝑣𝑎𝑙 ← 𝑥
𝑘𝑚𝑖𝑛 ← 𝑘
𝑅[𝑖, 𝑗] ← 𝑘𝑚𝑖𝑛
𝐶[𝑖, 𝑗] ← 𝑚𝑖𝑛𝑣𝑎𝑙 + 𝑠𝑢𝑚
return 𝐶, 𝑅
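The fragment above omits the initialization and the outer loops over i and j; a complete bottom-up version following the usual formulation (1-based probabilities p[1..n], with p[0] unused) might look like:

```python
# Optimal BST by dynamic programming.
def optimal_bst(p):
    """p[1..n]: search probabilities of keys a_1 < ... < a_n (p[0] unused).
    Returns (C, R): C[1][n] is the smallest average number of comparisons,
    R[i][j] the index of the root of the optimal tree on keys a_i..a_j."""
    n = len(p) - 1
    C = [[0.0] * (n + 2) for _ in range(n + 2)]   # C[i][i-1] = 0 by default
    R = [[0] * (n + 2) for _ in range(n + 2)]
    for i in range(1, n + 1):
        C[i][i] = p[i]
        R[i][i] = i
    for d in range(1, n):                # diagonal d: trees on d + 1 keys
        for i in range(1, n - d + 1):
            j = i + d
            minval, kmin = float("inf"), i
            for k in range(i, j + 1):    # try each key a_k as the root
                x = C[i][k - 1] + C[k + 1][j]
                if x < minval:
                    minval, kmin = x, k
            R[i][j] = kmin
            C[i][j] = minval + sum(p[i:j + 1])
    return C, R
```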
Dijkstra
(SSSP – single source shortest path algorithm)
ALGORITHM Dijkstra(G, s)
for every vertex v in V
𝑑𝑣 ← ∞
𝑝𝑣 ← null
𝐼𝑛𝑠𝑒𝑟𝑡(𝑄, 𝑣, 𝑑𝑣 )
𝑑𝑠 ← 0
𝐷𝑒𝑐𝑟𝑒𝑎𝑠𝑒(𝑄, 𝑠, 𝑑𝑠 )
𝑉𝑡 ← 𝜙
for 𝑖 ← 0 to |𝑉| − 1 do
𝑢 ← 𝐷𝑒𝑙𝑒𝑡𝑒𝑀𝑖𝑛(𝑄)
𝑉𝑡 ← 𝑉𝑡 ∪ {𝑢}
for every vertex 𝑣 in 𝑉 − 𝑉𝑡 that is adjacent to 𝑢 do
if 𝑑𝑢 + 𝑤(𝑢, 𝑣) < 𝑑𝑣 then
𝑑𝑣 ← 𝑑𝑢 + 𝑤(𝑢, 𝑣); 𝑝𝑣 ← 𝑢
𝐷𝑒𝑐𝑟𝑒𝑎𝑠𝑒(𝑄, 𝑣, 𝑑𝑣 )
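A compact Python version using heapq as the priority queue. heapq has no Decrease operation, so this sketch pushes a fresh entry on each relaxation and skips stale entries on extraction (a standard workaround, assumed here):

```python
import heapq

# Dijkstra's single-source shortest paths.
def dijkstra(graph, s):
    """graph: dict mapping vertex -> dict of neighbor -> edge weight."""
    d = {v: float("inf") for v in graph}
    d[s] = 0
    q, done = [(0, s)], set()
    while q:
        du, u = heapq.heappop(q)            # DeleteMin
        if u in done:
            continue                        # stale queue entry, skip
        done.add(u)                         # u joins Vt
        for v, w in graph[u].items():
            if du + w < d[v]:               # edge relaxation
                d[v] = du + w
                heapq.heappush(q, (d[v], v))  # stands in for Decrease
    return d
```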
Knapsack
F(i, j) = { F(i−1, j),                                 if w_i > j
          { max{ F(i−1, j), v_i + F(i−1, j − w_i) },   otherwise
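The recurrence above filled bottom-up in Python (0/1 knapsack; F[i][j] is the best value achievable with the first i items and capacity j):

```python
# 0/1 knapsack by dynamic programming.
def knapsack(weights, values, W):
    n = len(weights)
    F = [[0] * (W + 1) for _ in range(n + 1)]   # F(0, j) = 0
    for i in range(1, n + 1):
        wi, vi = weights[i - 1], values[i - 1]
        for j in range(W + 1):
            if wi > j:                          # item i cannot fit
                F[i][j] = F[i - 1][j]
            else:                               # skip it, or take it
                F[i][j] = max(F[i - 1][j], vi + F[i - 1][j - wi])
    return F[n][W]
```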
Other Algorithms
ALGORITHM 𝑀𝐴𝑋𝑀𝐼𝑁(𝐴, 𝑖, 𝑗, 𝑚𝑖𝑛, 𝑚𝑎𝑥)
//sets min and max to the smallest and largest values in 𝐴(𝑖: 𝑗)
//Input: An array, A of integers. 𝑖, 𝑗 are the bounds for the search.
//Output: min and max
case
: 𝑖 = 𝑗:
max ← 𝑚𝑖𝑛 ← 𝐴[𝑖]
: 𝑗 = 𝑖 + 1:
if 𝐴[𝑗] > 𝐴[𝑖] then
𝑚𝑎𝑥 ← 𝐴[𝑗]
𝑚𝑖𝑛 ← 𝐴[𝑖]
else
𝑚𝑎𝑥 ← 𝐴[𝑖]
𝑚𝑖𝑛 ← 𝐴[𝑗]
else
𝑚𝑖𝑑 ← ⌊(𝑖 + 𝑗)/2⌋
𝑀𝐴𝑋𝑀𝐼𝑁(𝐴, 𝑖, 𝑚𝑖𝑑, 𝑔𝑚𝑖𝑛, 𝑔𝑚𝑎𝑥)
𝑀𝐴𝑋𝑀𝐼𝑁(𝐴, 𝑚𝑖𝑑 + 1, 𝑗, ℎ𝑚𝑖𝑛, ℎ𝑚𝑎𝑥)
𝑚𝑎𝑥 ← 𝑚𝑎𝑥(𝑔𝑚𝑎𝑥, ℎ𝑚𝑎𝑥)
𝑚𝑖𝑛 ← 𝑚𝑖𝑛(𝑔𝑚𝑖𝑛, ℎ𝑚𝑖𝑛)
T(n) = { 0,               n = 1
       { 1,               n = 2
       { 2T(n/2) + 2,     otherwise

T(n) = 3n/2 − 2
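A Python version of the MAXMIN scheme that also counts comparisons, so the 3n/2 − 2 bound can be checked when n is a power of two (the comparison counting is my addition):

```python
# Divide-and-conquer min and max with an explicit comparison count.
def max_min(a, i, j):
    """Return (mn, mx, comparisons) over a[i..j] inclusive."""
    if i == j:                                  # one element: 0 comparisons
        return a[i], a[i], 0
    if j == i + 1:                              # two elements: 1 comparison
        return (a[i], a[j], 1) if a[j] > a[i] else (a[j], a[i], 1)
    mid = (i + j) // 2
    gmin, gmax, c1 = max_min(a, i, mid)
    hmin, hmax, c2 = max_min(a, mid + 1, j)
    # 2 more comparisons to combine the two halves
    return min(gmin, hmin), max(gmax, hmax), c1 + c2 + 2
```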
T(n) = { a,                n = 1
       { 2T(n/2) + c·n,    n > 1, where c is a constant
ALGORITHM HoarePartition(A[l..r])
//Partitions subarray A[l..r] using the first element A[l] as the pivot
p ← A[l]
i ← l
j ← r + 1
repeat
repeat i ← i + 1 until A[i] ≥ p
repeat j ← j − 1 until A[j] ≤ p
swap(A[i], A[j])
until i ≥ j
swap(A[i], A[j]) //undo the last swap, made when i ≥ j
swap(A[l], A[j]) //place the pivot in its final position
return j
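A runnable Python variant of this partition inside quicksort. It is slightly restructured from the pseudocode: the index scans are guarded rather than relying on sentinels, and the pivot is swapped into place at the end instead of undoing the last swap:

```python
# Quicksort with a Hoare-style partition around pivot A[l].
def hoare_partition(A, l, r):
    """Partition A[l..r] in place; return the pivot's final index."""
    p = A[l]
    i, j = l + 1, r
    while True:
        while i <= r and A[i] < p:    # scan right for an element >= p
            i += 1
        while A[j] > p:               # scan left for an element <= p
            j -= 1
        if i >= j:
            break
        A[i], A[j] = A[j], A[i]       # exchange the out-of-place pair
        i, j = i + 1, j - 1
    A[l], A[j] = A[j], A[l]           # pivot to its final position
    return j

def quicksort(A, l=0, r=None):
    if r is None:
        r = len(A) - 1
    if l < r:
        s = hoare_partition(A, l, r)
        quicksort(A, l, s - 1)
        quicksort(A, s + 1, r)
    return A
```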
ALGORITHM EUCLID(m, n)
//Computes gcd(m, n) by Euclid's algorithm
while n ≠ 0 do
r ← m mod n
m ← n
n ← r
return m
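Euclid's algorithm translates almost verbatim into Python:

```python
# Euclid's algorithm: gcd(m, n) by repeated remainders.
def gcd(m, n):
    while n != 0:
        m, n = n, m % n
    return m
```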