Introduction
Evaluation
• Your evaluation in this course is based on three components:
• 20% Minor 1
• 20% Minor 2
• 30% Major
• 30% Homeworks, Quizzes and Projects
Evaluation
• A student who misses exams must provide:
• A Verification of Illness form indicating a severe illness, or
• Other formal documentation, as appropriate
• No re-minors: the weighted average of the Major and the other
Minor will be awarded instead
• A re-major may take time to arrange; an Incomplete may be awarded in the interim.
• Late Homeworks:
• 20% penalty per late day.
• 0 will be awarded after the fifth day
Evaluation
• The graders may penalize for unreadable homeworks
• The graders may award bonus points for “nice” homeworks
• The graders may not grade all the assigned problems in the
homeworks
• The graders may not grade all the homeworks
Academic Offences
• Academic Offences include, but are not limited to:
• Infringing unreasonably on the work of other members
• E.g., disrupting classes
• Cheating
• Plagiarism
• Misrepresentations
Introduction
ADA
What is an algorithm?
• An algorithm is a sequence of unambiguous instructions for
solving a problem, i.e., for obtaining a required output for any
legitimate input in a finite amount of time.
Historical Perspective
• Euclid’s algorithm for finding the greatest common divisor
Notion of algorithm
[Diagram: a problem is solved algorithmically — input → "computer" executing the algorithm → output]
Example of computational problem: sorting
• Statement of problem:
• Input: A sequence of n numbers <a1, a2, …, an>
• Output: A reordering of the input sequence <a´1, a´2, …, a´n>
so that a´i ≤ a´j whenever i < j
• Instance: The sequence <5, 3, 2, 8, 3>
• Algorithms:
• Selection sort
• Insertion sort
• Merge sort
• (many others)
Selection Sort
• Input: array a[1],…,a[n]
• Output: array a sorted in non-decreasing order
• Algorithm:
for i ← 1 to n do
    swap a[i] with the smallest of a[i], …, a[n]
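The pseudocode above can be rendered as a short Python sketch (the function name is mine, not the slides'):

```python
def selection_sort(a):
    """Sort list a in non-decreasing order by repeatedly selecting the minimum."""
    n = len(a)
    for i in range(n - 1):
        m = i                      # index of the smallest of a[i..n-1]
        for j in range(i + 1, n):
            if a[j] < a[m]:
                m = j
        a[i], a[m] = a[m], a[i]    # swap a[i] with the smallest
    return a
```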
Some Well-known Computational Problems
• Sorting
• Searching
• Shortest paths in a graph
• Minimum spanning tree
• Primality testing
• Traveling salesman problem
• Knapsack problem
• Chess
• Towers of Hanoi
• Program termination
Basic Issues Related to Algorithms
• How to design algorithms
• How to express algorithms
• Proving correctness
• Efficiency
• Theoretical analysis
• Empirical analysis
• Optimality
Algorithm design strategies
• Brute force
• Divide and conquer
• Decrease and conquer
• Transform and conquer
• Greedy approach
• Dynamic programming
• Backtracking and Branch and bound
• Space and time tradeoffs
Analysis of Algorithms
• How good is the algorithm?
• Correctness
• Time efficiency
• Space efficiency
What is an algorithm?
• Recipe, process, method, technique, procedure, routine,…
with following requirements:
• Finiteness
• terminates after a finite number of steps
• Definiteness
• rigorously and unambiguously specified
• Input
• valid inputs are clearly specified
• Output
• can be proved to produce the correct output given a valid input
• Effectiveness
• steps are sufficiently simple and basic
Why study algorithms?
• Theoretical importance
• the core of computer science
• Practical importance
• A practitioner’s toolkit of known algorithms
• Framework for designing and analyzing algorithms for new
problems
Euclid’s Algorithm
• Problem: Find gcd(m,n), the greatest common divisor of two
nonnegative integers m and n, not both zero
• Examples:
• gcd(60,24) = 12, gcd(60,0) = 60, gcd(0,0) = ?
• Euclid’s algorithm is based on repeated application of equality
• gcd(m,n) = gcd(n, m mod n)
• until the second number becomes 0, which makes the
problem trivial.
• Example: gcd(60,24) = gcd(24,12) = gcd(12,0) = 12
Two descriptions of Euclid’s algorithm
• Step 1
• If n = 0, return m and stop; otherwise go to Step 2
• Step 2
• Divide m by n and assign the value of the remainder to r
• Step 3
• Assign the value of n to m and the value of r to n.
• Go to Step 1.
• while n ≠ 0 do
•     r ← m mod n
•     m ← n
•     n ← r
• return m
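The second description translates directly to Python (a sketch; the function name is mine):

```python
def gcd(m, n):
    """Euclid's algorithm: gcd(m, n) = gcd(n, m mod n) until n becomes 0."""
    while n != 0:
        m, n = n, m % n
    return m
```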
Other methods for computing gcd(m,n)
• Middle-school procedure
• Step 1
• Find the prime factorization of m
• Step 2
• Find the prime factorization of n
• Step 3
• Find all the common prime factors
• Step 4
• Compute the product of all the common prime factors and
return it as gcd(m,n)
• Is this an algorithm?
Sieve of Eratosthenes
• Input: Integer n ≥ 2
• Output: List of primes less than or equal to n
for p ← 2 to n do A[p] ← p
for p ← 2 to n do
    if A[p] ≠ 0      // p hasn't been previously eliminated from the list
        j ← p * p
        while j ≤ n do
            A[j] ← 0  // mark element as eliminated
            j ← j + p
• Example: 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
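A Python sketch of the sieve (the function name is mine); for the example above it keeps exactly the primes up to 20:

```python
def sieve(n):
    """Sieve of Eratosthenes: return the list of primes <= n (n >= 2)."""
    a = list(range(n + 1))          # a[p] == p means p not yet eliminated
    for p in range(2, n + 1):
        if a[p] != 0:               # p hasn't been previously eliminated
            j = p * p
            while j <= n:
                a[j] = 0            # mark multiples of p as eliminated
                j += p
    return [p for p in range(2, n + 1) if a[p] != 0]
```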
Two main issues related to algorithms
• How to design algorithms
• How to analyze algorithm efficiency
Algorithm design techniques/strategies
• Brute force
• Divide and conquer
• Decrease and conquer
• Transform and conquer
• Space and time tradeoffs
• Greedy approach
• Dynamic programming
• Iterative improvement
• Backtracking
• Branch and bound
Analysis of algorithms
• How good is the algorithm?
• time efficiency
• space efficiency
Important problem types
• sorting
• searching
• string processing
• graph problems
• combinatorial problems
• geometric problems
• numerical problems
Algorithm Efficiency
ADA
Analysis of algorithms
• Issues:
• correctness
• time efficiency
• space efficiency
• optimality
• Approaches:
• theoretical analysis
• empirical analysis
Theoretical analysis of time efficiency
• Time efficiency is analyzed by determining the number of
repetitions of the basic operation as a function of input size
• Basic operation: the operation that contributes most towards
the running time of the algorithm
• Example: for multiplication of two matrices, the input size is the
matrix dimensions (or the total number of elements) and the basic
operation is the multiplication of two numbers
• Worst case
• Best case
• Average case
Types of formulas for basic operation’s count
• Exact formula
• e.g., C(n) = n(n-1)/2
• Formula indicating order of growth with specific multiplicative
constant
• e.g., C(n) ≈ 0.5n²
• Formula indicating order of growth with unknown multiplicative
constant
• e.g., C(n) ≈ cn²
Order of growth
• Most important: Order of growth within a constant multiple as
n→∞
• Example:
• How much faster will algorithm run on computer that is twice as
fast?
• How much longer does it take to solve problem of double input
size?
Values of some important functions as n → ∞
Asymptotic order of growth
• A way of comparing functions that ignores constant factors
and small input sizes
• O(g(n)): class of functions f(n) that grow no faster than g(n)
• Θ(g(n)): class of functions f(n) that grow at same rate as g(n)
• Ω(g(n)): class of functions f(n) that grow at least as fast as
g(n)
Big-oh
Big-omega
Big-theta
Establishing the order of growth
• Definition: f(n) is in O(g(n)) if order of growth of f(n) ≤ order of
growth of g(n) (within constant multiple),
• i.e., there exist positive constant c and non-negative integer
n0 such that
• f(n) ≤ c g(n) for every n ≥ n0
• Examples:
• 10n is O(n²)
• 5n + 20 is O(n)
Properties of asymptotic order of growth
• f(n) ∈ O(f(n))
• f(n) ∈ O(g(n)) iff g(n) ∈ Ω(f(n))
• If f(n) ∈ O(g(n)) and g(n) ∈ O(h(n)), then f(n) ∈ O(h(n))
• Note the similarity with a ≤ b
• If f1(n) ∈ O(g1(n)) and f2(n) ∈ O(g2(n)),
then f1(n) + f2(n) ∈ O(max{g1(n), g2(n)})
Establishing order of growth using limits
• Compute lim (n→∞) f(n)/g(n):
• 0 ⇒ f(n) grows slower than g(n): f(n) ∈ O(g(n))
• c > 0 ⇒ f(n) and g(n) have the same order of growth: f(n) ∈ Θ(g(n))
• ∞ ⇒ f(n) grows faster than g(n): f(n) ∈ Ω(g(n))
• Examples:
• 10n vs. n²
• n(n+1)/2 vs. n²
Example: Recursive evaluation of n!
• Size: n
• Basic operation: multiplication
• Recurrence relation: M(n) = M(n-1) + 1, M(0) = 0
Solving the recurrence for M(n)
• M(n) = M(n-1) + 1 = M(n-2) + 2 = … = M(0) + n = n
Example 2: The Tower of Hanoi Puzzle
• Recurrence: M(n) = 2M(n-1) + 1, M(1) = 1; solution: M(n) = 2ⁿ - 1
[Figure: recursion tree — each instance of size n spawns two instances of size n-1]
Example 3: Counting #bits
Fibonacci numbers
• The Fibonacci numbers:
• 0, 1, 1, 2, 3, 5, 8, 13, 21, …
• The Fibonacci recurrence:
• F(n) = F(n-1) + F(n-2)
• F(0) = 0
• F(1) = 1
• General 2nd order linear homogeneous recurrence with
constant coefficients:
• aX(n) + bX(n-1) + cX(n-2) = 0
Solving aX(n) + bX(n-1) + cX(n-2) = 0
• Set up the characteristic equation (quadratic)
• ar² + br + c = 0
• Solve to obtain roots r1 and r2
• General solution to the recurrence:
• if r1 and r2 are two distinct real roots: X(n) = αr1ⁿ + βr2ⁿ
• if r1 = r2 = r (a double real root): X(n) = αrⁿ + βnrⁿ
• Particular solution can be found by using initial conditions
Application to the Fibonacci numbers
• F(n) = F(n-1) + F(n-2) or F(n) - F(n-1) - F(n-2) = 0
• Characteristic equation: r² - r - 1 = 0, with roots r1,2 = (1 ± √5)/2
• Matrix identity:
  [ F(n-1)  F(n)   ]   [ 0  1 ]ⁿ
  [ F(n)    F(n+1) ] = [ 1  1 ]
• Examples:
• Computing aⁿ (a > 0, n a nonnegative integer)
• Computing n!
• Example: sorting the list 7 3 2 5
Analysis of Selection Sort
• Time efficiency: Θ(n²) comparisons (C(n) = n(n-1)/2), Θ(n) swaps
• Space efficiency: Θ(1) — sorts in place
• Stability: no
Brute-Force String Matching
• pattern: a string of m characters to search for
• text: a (longer) string of n characters to search in
• problem: find a substring in the text that matches the pattern
• Brute-force algorithm
• Step 1 Align pattern at beginning of text
• Step 2 Moving from left to right, compare each character of
pattern to the corresponding character in text until
• all characters are found to match (successful search); or
• a mismatch is detected
• Step 3 While pattern is not found and the text is not yet
exhausted, realign pattern one position to the right and
repeat Step 2
Examples of Brute-Force String Matching
• Pattern: 001011
Text: 10010101101001100101111010
• Pattern: happy
Text: It is never too late to have a happy childhood.
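The three steps can be sketched in Python (function name mine); the outer loop realigns the pattern, the inner loop compares left to right:

```python
def brute_force_match(text, pattern):
    """Return the index of the first occurrence of pattern in text, or -1."""
    n, m = len(text), len(pattern)
    for i in range(n - m + 1):          # Step 1/3: align pattern at position i
        j = 0
        while j < m and text[i + j] == pattern[j]:
            j += 1                      # Step 2: compare left to right
        if j == m:                      # all m characters matched
            return i
    return -1
```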
Pseudocode and Efficiency
• Efficiency: Θ(nm) character comparisons in the worst case
Brute-Force Polynomial Evaluation
• Problem: Find the value of polynomial
• p(x) = aₙxⁿ + aₙ₋₁xⁿ⁻¹ + … + a₁x + a₀
• at a point x = x0
Brute-Force Polynomial Evaluation
• Brute-force algorithm
  p ← 0.0
  for i ← n downto 0 do
      power ← 1
      for j ← 1 to i do        // compute xⁱ
          power ← power * x
      p ← p + aᵢ * power
  return p
• Efficiency: Θ(n²) multiplications
Polynomial Evaluation: Improvement
• We can do better by evaluating from right to left: compute each
power from the previous one with a single multiplication
• Efficiency: Θ(n) — only 2n multiplications
Closest-Pair Problem
• Find the two closest points in a set of n points (in the two-
dimensional Cartesian plane).
• Brute-force algorithm
• Compute the distance between every pair of distinct points
• and return the indexes of the points for which the distance is
the smallest.
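The brute-force algorithm in Python (names are mine; `math.dist` computes the Euclidean distance):

```python
from math import dist, inf

def closest_pair(points):
    """Brute force: check all n(n-1)/2 pairs of distinct points.
    Returns (smallest distance, (index i, index j))."""
    best, pair = inf, None
    n = len(points)
    for i in range(n - 1):
        for j in range(i + 1, n):
            d = dist(points[i], points[j])
            if d < best:
                best, pair = d, (i, j)
    return best, pair
```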
Closest-Pair Brute-Force Algorithm (cont.)
• Efficiency: Θ(n²) distance computations
• Weaknesses
• rarely yields efficient algorithms
• some brute-force algorithms are unacceptably slow
• not as constructive as some other design techniques
Exhaustive Search
Exhaustive Search
• A brute force solution to a problem involving search for an
element with a special property, usually among combinatorial
objects such as permutations, combinations, or subsets of a
set.
• Method:
• generate a list of all potential solutions to the problem in a
systematic manner
[Example graph: vertices a, b, c, d; edge weights ab = 2, ac = 8, ad = 5, bc = 3, bd = 4, cd = 7]
TSP by Exhaustive Search
• Tour Cost
• a→b→c→d→a 2+3+7+5 = 17
• a→b→d→c→a 2+4+7+8 = 21
• a→c→b→d→a 8+3+4+5 = 20
• a→c→d→b→a 8+7+4+2 = 21
• a→d→b→c→a 5+4+3+8 = 20
• a→d→c→b→a 5+7+3+2 = 17
• More tours? There are (n-1)! tours in total
• Fewer tours? Each tour and its reverse have the same cost, so
(n-1)!/2 tours suffice
Example 2: Knapsack Problem
• Given n items:
• weights: w1, w2, … , wn
• values: v1, v2, … , vn
• a knapsack of capacity W
• Find most valuable subset of the items that fit into the
knapsack
Divide-and-Conquer
[Diagram: a problem of size n is divided into two subproblems of size n/2; solutions to the subproblems are combined into a solution to the original problem]
Divide-and-Conquer Examples
• Sorting: mergesort and quicksort
General Divide-and-Conquer Recurrence
• T(n) = aT(n/b) + f(n), where f(n) ∈ Θ(nᵈ), d ≥ 0
• Master Theorem:
• if a < bᵈ, T(n) ∈ Θ(nᵈ)
• if a = bᵈ, T(n) ∈ Θ(nᵈ log n)
• if a > bᵈ, T(n) ∈ Θ(n^(log_b a))
Mergesort
• Split array A[0..n-1] in two about equal halves and make
copies of each half in arrays B and C
• Sort arrays B and C recursively
• Merge sorted arrays B and C into array A as follows:
• Repeat the following until no elements remain in one of the
arrays:
• compare the first elements in the remaining unprocessed
portions of the arrays
• copy the smaller of the two into A, while incrementing the index
indicating the unprocessed portion of that array
• Once all elements in one of the arrays are processed, copy the
remaining unprocessed elements from the other array into A.
Pseudocode of Mergesort
Pseudocode of Merge
Mergesort Example
8 3 2 9 7 1 5 4
8 3 2 9 7 1 5 4
8 3 2 9 7 1 5 4
8 3 2 9 7 1 5 4
3 8 2 9 1 7 4 5
2 3 8 9 1 4 5 7
1 2 3 4 5 7 8 9
Analysis of Mergesort
• All cases have same efficiency: Θ(n log n)
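The split-sort-merge steps above can be sketched in Python (function name mine):

```python
def mergesort(a):
    """Split in two halves, sort each recursively, merge the results."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    b, c = mergesort(a[:mid]), mergesort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(b) and j < len(c):
        # copy the smaller of the two front elements into the result
        if b[i] <= c[j]:
            merged.append(b[i]); i += 1
        else:
            merged.append(c[j]); j += 1
    merged.extend(b[i:])   # copy the remaining unprocessed elements
    merged.extend(c[j:])
    return merged
```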
Quicksort
• Select a pivot (e.g., the first element)
• Rearrange the list so that all elements in the first subarray are ≤ the
pivot and all elements in the second subarray are ≥ the pivot
• Exchange the pivot with the last element in the first
subarray — the pivot is now in its final position
• Sort the two subarrays recursively
Partitioning Algorithm
Quicksort Example
•5 3 1 9 8 2 4 7
Analysis of Quicksort
• Best case: split in the middle — Θ(n log n)
• Worst case: sorted array! — Θ(n2)
• Average case: random arrays — Θ(n log n)
• Improvements:
• better pivot selection: median of three partitioning
• switch to insertion sort on small subfiles
• elimination of recursion
• These combine to 20-25% improvement
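One way to render the scheme above in Python — first-element pivot with a Hoare-style partition (names are mine; a sketch, not the only possible partition):

```python
def partition(a, l, r):
    """Partition a[l..r] around pivot p = a[l]; return the pivot's final index."""
    p = a[l]
    i, j = l, r + 1
    while True:
        i += 1
        while i <= r and a[i] < p:   # scan right for an element >= p
            i += 1
        j -= 1
        while a[j] > p:              # scan left for an element <= p
            j -= 1
        if i >= j:
            break
        a[i], a[j] = a[j], a[i]
    a[l], a[j] = a[j], a[l]          # put the pivot in its final position
    return j

def quicksort(a, l=0, r=None):
    """In-place quicksort of a[l..r]."""
    if r is None:
        r = len(a) - 1
    if l < r:
        s = partition(a, l, r)
        quicksort(a, l, s - 1)
        quicksort(a, s + 1, r)
    return a
```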
Binary Search
l ← 0; r ← n-1
while l ≤ r do
    m ← ⌊(l + r)/2⌋
    if K = A[m] return m
    else if K < A[m] r ← m - 1
    else l ← m + 1
return -1
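The same search in Python (function name mine):

```python
def binary_search(a, key):
    """Return an index of key in sorted list a, or -1 if key is absent."""
    l, r = 0, len(a) - 1
    while l <= r:
        m = (l + r) // 2          # middle index
        if key == a[m]:
            return m
        elif key < a[m]:
            r = m - 1             # continue in the left half
        else:
            l = m + 1             # continue in the right half
    return -1
```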
Analysis of Binary Search
• Time efficiency
• worst-case recurrence: Cw(n) = 1 + Cw(⌊n/2⌋), Cw(1) = 1
• solution: Cw(n) = ⌈log₂(n+1)⌉
• Efficiency: Θ(log n)
Binary Tree Algorithms (cont.)
• Ex. 2: Computing the height of a binary tree T:
h(T) = max{h(TL), h(TR)} + 1 if T ≠ ∅, and h(∅) = -1
• Efficiency: Θ(n)
Multiplication of Large Integers
• Consider the problem of multiplying two (large) n-digit
integers represented by arrays of their digits, such as:
  A = 12345678901357986429
  B = 87654321284820912836
• The grade-school algorithm computes n rows of partial
products (one per digit of B) and adds them up
• Efficiency: n² one-digit multiplications
First Divide-and-Conquer Algorithm
• Split each number in half: A = A1A2, B = B1B2
• Recurrence: M(n) = 4M(n/2), M(1) = 1
• Solution: M(n) = n² — no better than the grade-school algorithm
Second Divide-and-Conquer Algorithm
• A·B = A1B1·10ⁿ + (A1B2 + A2B1)·10^(n/2) + A2B2
• The middle term can be computed as (A1 + A2)(B1 + B2) - A1B1 - A2B2,
so only three half-size multiplications are needed
• Recurrence: M(n) = 3M(n/2), M(1) = 1; solution: M(n) = n^(log₂3) ≈ n^1.585
Strassen's Matrix Multiplication
• The product of two 2×2 block matrices is computed with 7 multiplications:
  [ C00  C01 ]   [ M1 + M4 - M5 + M7    M3 + M5           ]
  [ C10  C11 ] = [ M2 + M4              M1 + M3 - M2 + M6 ]
Formulas for Strassen’s Algorithm
• M1 = (A00 + A11) (B00 + B11)
• M2 = (A10 + A11) B00
• M3 = A00 (B01 - B11)
• M4 = A11 (B10 - B00)
• M5 = (A00 + A01) B11
• M6 = (A10 - A00) (B00 + B01)
• M7 = (A01 - A11) (B10 + B11)
Analysis of Strassen’s Algorithm
• If n is not a power of 2, matrices can be padded with zeros.
• Number of multiplications:
• M(n) = 7M(n/2), M(1) = 1
• Solution:
• M(n) = 7^(log₂ n) = n^(log₂ 7) ≈ n^2.807
• vs. n³ for the brute-force algorithm
• Efficiency?
• How to make it faster?
Closest-Pair Problem by Divide-and-Conquer
• Step 1 Divide the points given into two subsets S1 and S2 by
a vertical line x = c so that half the points lie to the left or on
the line and half the points lie to the right or on the line.
Efficiency of Quickhull Algorithm
• Finding point farthest away from line P1P2 can be done in
linear time
• Time efficiency:
• worst case: Θ(n2) (as quicksort)
• average case: Θ(n)
• under reasonable assumptions about distribution of points given
Decrease-and-Conquer
Decrease-and-Conquer
• Reduce problem instance to smaller instance of the same
problem
• Solve smaller instance
• Extend solution of smaller instance to obtain solution to
original instance
• Can be implemented either top-down or bottom-up
• Also referred to as inductive or incremental approach
3 Types of Decrease and Conquer
• Decrease by a constant (usually by 1):
• insertion sort, graph traversals (DFS, BFS), topological sorting
• Decrease by a constant factor (usually by half):
• binary search, exponentiation by squaring
• Variable-size decrease:
• Euclid's algorithm
• selection by partition
What’s the difference?
• Consider the problem of exponentiation: compute aⁿ
• Brute force: aⁿ = a · a · … · a (n - 1 multiplications)
• Decrease by one: aⁿ = aⁿ⁻¹ · a
• Decrease by a constant factor: aⁿ = (a^(n/2))² when n is even
Insertion Sort
• Example: Sort 6, 4, 1, 8, 5
6 4 1 8 5
4 6 1 8 5
1 4 6 8 5
1 4 6 8 5
1 4 5 6 8
Pseudocode of Insertion Sort
Analysis of Insertion Sort
• Time efficiency
• Cworst(n) = n(n-1)/2 ∈ Θ(n²)
• Cavg(n) ≈ n²/4 ∈ Θ(n²)
• Cbest(n) = n - 1 ∈ Θ(n) (also fast on almost-sorted arrays)
• Stability: yes
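A Python sketch of insertion sort (function name mine) — each a[i] is inserted into its place among the already sorted a[0..i-1]:

```python
def insertion_sort(a):
    """In-place insertion sort: decrease-by-one approach."""
    for i in range(1, len(a)):
        v, j = a[i], i - 1
        while j >= 0 and a[j] > v:
            a[j + 1] = a[j]       # shift larger elements one place right
            j -= 1
        a[j + 1] = v              # insert a[i] into its sorted position
    return a
```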
Depth-first search (DFS)
• Uses a stack
• a vertex is pushed onto the stack when it’s reached for the first
time
• a vertex is popped off the stack when it becomes a dead end,
i.e., when there is no adjacent unvisited vertex
[Example graph on vertices a, b, c, d, e, f, g, h]
Notes on DFS
• DFS can be implemented with graphs represented as:
• adjacency matrices: Θ(|V|²)
• adjacency lists: Θ(|V| + |E|)
• Applications:
• checking connectivity, finding connected components
• checking acyclicity
• finding articulation points and biconnected components
• searching state-space of problems for solution (AI)
Breadth-first search (BFS)
• Visits graph vertices by moving across to all the neighbors of
last visited vertex
Notes on BFS
• BFS has same efficiency as DFS and can be implemented
with graphs represented as:
• adjacency matrices: Θ(|V|²)
• adjacency lists: Θ(|V| + |E|)
Directed Acyclic Graphs and Topological Sorting
[Figures: a dag and a digraph that is not a dag, each on vertices a, b, c, d]
• Example: ordering the vertices of a food-chain digraph — plankton, shrimp, fish, sheep, wheat, human, tiger
DFS-based Algorithm
• DFS-based algorithm for topological sorting
• Perform DFS traversal, noting the order vertices are popped off
the traversal stack
• Reverse order solves topological sorting problem
• Back edges encountered?→ NOT a dag!
• Efficiency: Θ(|V| + |E|)
Source Removal Algorithm
• Source removal algorithm
• Repeatedly identify and remove a source (a vertex with no
incoming edges) and all the edges incident to it until either no
vertex is left (problem is solved) or there is no source among
remaining vertices (not a dag)
• Decrease-by-a-constant-factor examples:
• Binary search and the method of bisection
• Exponentiation by squaring
• Fake-coin puzzle
• Josephus problem
Exponentiation by Squaring
• The problem: Compute aⁿ where n is a nonnegative integer
• If n is even, aⁿ = (a^(n/2))²; if n is odd, aⁿ = (a^((n-1)/2))² · a
• Efficiency: Θ(log n) multiplications
• Variable-size-decrease examples:
• Euclid’s algorithm for greatest common divisor
• Partition-based algorithm for selection problem
• Interpolation search
• Some algorithms on binary search trees
• Nim and Nim-like games
Euclid’s Algorithm
• Euclid’s algorithm is based on repeated application of equality
• gcd(m, n) = gcd(n, m mod n)
Interpolation Search
• Searches a sorted array like binary search, but estimates the position
of the search key v by linear interpolation between A[l] and A[r]:
x = l + ⌊ (v - A[l])(r - l) / (A[r] - A[l]) ⌋
Analysis of Interpolation Search
• Efficiency
• average case: C(n) < log2 log2 n + 1
• worst case: C(n) = n
Searching in Binary Search Tree
Algorithm BST(x, v)
//Searches for a node with key equal to v in the BST rooted at node x
• if x = NIL return -1
• else if v = K(x) return x
• else if v < K(x) return BST(left(x), v)
• else return BST(right(x), v)
• Efficiency
• worst case: C(n) = n
• average case: C(n) ≈ 2ln n ≈ 1.39log2 n
One-Pile Nim
• There is a pile of n chips.
• Two players take turns removing from the pile at least 1 and
at most m chips.
• (The number of chips taken can vary from move to move.)
• The winner is the player who takes the last chip.
• Who wins the game — the player moving first or second — if
both players make the best moves possible?
Transform-and-Conquer: Instance Simplification — Presorting
• Many problems involving lists are easier when the list is sorted:
• searching
• computing the median (selection problem)
• checking if all elements are distinct (element uniqueness)
• Also:
• Topological sorting helps solve some problems for dags.
• Presorting is used in many geometric algorithms.
How fast can we sort ?
• Efficiency of algorithms involving sorting depends on
efficiency of sorting.
• Presorting-based algorithm:
• Stage 1 Sort the array by an efficient sorting algorithm
• Stage 2 Apply binary search
• Good or bad?
• Efficiency: Θ(n log n) + O(log n)
Gaussian Elimination
for i ← 1 to n-1 do
    replace each of the subsequent rows (i.e., rows i+1, …, n) by
    the difference between that row and an appropriate multiple of
    the i-th row, to make the new coefficient in the i-th column of
    that row 0
Example of Gaussian Elimination
2x1 - 4x2 + x3 = 6
• Solve:
3x1 - x2 + x3 = 11
x1 + x2 - x3 = -3
• Gaussian elimination
  2 -4  1 |  6
  3 -1  1 | 11    row2 ← row2 - (3/2)·row1
  1  1 -1 | -3    row3 ← row3 - (1/2)·row1

  2 -4   1   |  6
  0  5 -1/2  |  2
  0  3 -3/2  | -6    row3 ← row3 - (3/5)·row2

  2 -4   1   |   6
  0  5 -1/2  |   2
  0  0 -6/5  | -36/5
• Backward substitution
x3 = (-36/5) / (-6/5) = 6
x2 = (2 + (1/2)·6) / 5 = 1
x1 = (6 + 4·1 - 6) / 2 = 2
Gaussian Elimination
Pseudocode and Efficiency
• Stage 1: Reduction to the upper-triangular matrix
for i ← 1 to n-1 do
for j ← i+1 to n do
for k ← i to n+1 do
A[j, k] ← A[j, k] - A[i, k] * A[j, i] / A[i, i] //improve!
• Stage 2: Backward substitution
for j ← n downto 1 do
t←0
for k ← j +1 to n do
t ← t + A[j, k] * x[k]
x[j] ← (A[j, n+1] - t) / A[j, j]
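Both stages translate to Python as follows (a sketch without pivoting, so it assumes nonzero pivots, as the slide's pseudocode does; names are mine):

```python
def gauss_solve(a, b):
    """Solve Ax = b by forward elimination and backward substitution.
    No pivoting: assumes every pivot m[i][i] is nonzero."""
    n = len(a)
    m = [row[:] + [bi] for row, bi in zip(a, b)]   # augmented matrix
    # Stage 1: reduction to upper-triangular form
    for i in range(n - 1):
        for j in range(i + 1, n):
            f = m[j][i] / m[i][i]                  # factor computed once per row
            for k in range(i, n + 1):
                m[j][k] -= m[i][k] * f
    # Stage 2: backward substitution
    x = [0.0] * n
    for j in range(n - 1, -1, -1):
        t = sum(m[j][k] * x[k] for k in range(j + 1, n))
        x[j] = (m[j][n] - t) / m[j][j]
    return x
```

Computing `f` once per row is exactly the "improve!" hinted at in the pseudocode: it avoids recomputing `A[j,i]/A[i,i]` inside the innermost loop.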
• Tree searching
• binary search tree
• binary balanced trees: AVL trees, red-black trees
• multiway balanced trees: 2-3 trees, 2-3-4 trees, B trees
• Hashing
• open hashing (separate chaining)
• closed hashing (open addressing)
Binary Search Tree
• Arrange keys in a binary tree with the binary search tree
property: for every node with key K, all keys in its left subtree
are < K and all keys in its right subtree are > K
AVL Trees
• An AVL tree is a BST in which, for every node, the balance
factor — the difference between the heights of the left and right
subtrees — is at most 1 in absolute value
[Figure: (a) an AVL tree and (b) a tree that is not an AVL tree, with balance factors shown at each node]
Rotations
• If a key insertion violates the balance requirement at some node,
the subtree rooted at that node is transformed by a rotation,
performed at the lowest unbalanced node on the path from the new leaf
• Four rotation types: single right (R), single left (L),
double left-right (LR), and double right-left (RL)
[Figures: general cases of the single R-rotation and the double LR-rotation]
AVL tree construction - an example
• Construct an AVL tree for the list 5, 6, 8, 3, 2, 4, 7
• Insert 5, then 6; inserting 8 unbalances the root → L-rotation about 5:
root 6 with children 5 and 8
• Insert 3, then 2: node 5 becomes unbalanced → R-rotation about 5:
6’s left child becomes 3, with children 2 and 5
• Insert 4: node 6 becomes unbalanced → double LR-rotation about 6:
root 5, left subtree 3 (children 2, 4), right subtree 6 (child 8)
• Insert 7: node 6 becomes unbalanced → double RL-rotation about 6:
final tree has root 5, left subtree 3 (children 2, 4), right subtree 7 (children 6, 8)
Analysis of AVL trees
• h ≤ 1.4404 log₂(n + 2) - 1.3277, so search and insertion are Θ(log n)
• average height: ≈ 1.01 log₂ n + 0.1 for large n (found empirically)
• Disadvantages:
• frequent rotations
• complexity of maintaining the balance information
2-3 Trees
• A 2-3 tree contains 2-nodes (one key K, two children) and 3-nodes
(two ordered keys K1 < K2, three children); all leaves are on the same level
[Figure: construction of a 2-3 tree for the list 9, 5, 8, 3, 2, 4, 7 — when a node overflows, it is split and its middle key is promoted to the parent]
Analysis of 2-3 trees
• log₃(n + 1) - 1 ≤ h ≤ log₂(n + 1) - 1
Heaps
• Definition: a binary tree with keys at its nodes that is
(1) essentially complete and (2) satisfies parental dominance:
the key at each node is ≥ the keys at its children
[Figure: examples of heaps and non-heaps; the array representation of a heap]
• In the array representation (indices 1..n):
• Left child of node j is at 2j
• Right child of node j is at 2j + 1
• Parent of node j is at ⌊j/2⌋
• Parental nodes are represented in the first ⌊n/2⌋ locations
Heap Construction (bottom-up)
• Step 0: Initialize the structure with keys in the order given
• Step 1: Starting with the last (rightmost) parental node, fix the
heap rooted at it, if it doesn’t satisfy the heap condition: keep
exchanging it with its largest child until the heap condition
holds
[Figure: bottom-up heap construction for the list 2, 9, 7, 6, 5, 8 — heapifying the node holding 7 gives 2 9 8 6 5 7; heapifying the root 2 gives 9 6 8 2 5 7]
Pseudocode of bottom-up heap construction
Heapsort
• Stage 1: Construct a heap for a given list of n keys
• Stage 2: Repeat n - 1 times: exchange the root key with the last
key in the heap, decrease the heap size by 1, and heapify the new root
[Figure: heapsort trace for the list 1, 9, 7, 6, 5, 8]
Analysis of Heapsort
• Stage 1: Build heap for a given list of n keys
• worst-case:
C(n) = Σ (i = 0 to h-1) 2(h - i)·2ⁱ = 2(n - log₂(n + 1)) ∈ Θ(n),
where 2ⁱ is the number of nodes at level i
• Stage 2: Θ(n log n); both stages together: Θ(n log n)
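Both heapsort stages can be sketched in Python, using 0-based array indexing instead of the slides' 1-based layout (names are mine):

```python
def heapify(a, n, i):
    """Sift a[i] down in the max-heap a[0..n-1] (0-based indexing)."""
    while 2 * i + 1 < n:
        j = 2 * i + 1                      # left child
        if j + 1 < n and a[j + 1] > a[j]:
            j += 1                         # right child is larger
        if a[i] >= a[j]:
            break                          # parental dominance holds
        a[i], a[j] = a[j], a[i]
        i = j

def heapsort(a):
    """Stage 1: bottom-up heap construction; Stage 2: n-1 root removals."""
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):    # last parental node down to root
        heapify(a, n, i)
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]        # move current max to its final place
        heapify(a, end, 0)
    return a
```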
Horner’s Rule For Polynomial Evaluation
• Given a polynomial of degree n and a specific value of x, find
the value of p at that point.
• p(x) = aₙxⁿ + aₙ₋₁xⁿ⁻¹ + … + a₁x + a₀
• Two brute-force algorithms:
• Algorithm 1 (recompute each power):
  p ← 0
  for i ← n downto 0 do
      power ← 1
      for j ← 1 to i do
          power ← power * x
      p ← p + aᵢ * power
  return p
• Algorithm 2 (reuse the previous power):
  p ← a₀; power ← 1
  for i ← 1 to n do
      power ← power * x
      p ← p + aᵢ * power
  return p
Horner’s Rule Example
• p(x) = 2x⁴ - x³ + 3x² + x - 5
       = x(2x³ - x² + 3x + 1) - 5
       = x(x(2x² - x + 3) + 1) - 5
       = x(x(x(2x - 1) + 3) + 1) - 5
• coefficients: 2, -1, 3, 1, -5
• at x = 3: 2 → 3·2 - 1 = 5 → 3·5 + 3 = 18 → 3·18 + 1 = 55 → 3·55 - 5 = 160
Horner’s Rule pseudocode
• Efficiency of Horner’s Rule: # multiplications = # additions = n
Horner’s Rule
• Synthetic division of p(x) by (x - x₀)
• Example: Let p(x) = 2x⁴ - x³ + 3x² + x - 5. Find p(x) ÷ (x - 3)
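Horner's rule itself is a two-line loop in Python (function name mine; coefficients are listed from the highest degree down):

```python
def horner(coeffs, x):
    """Evaluate p(x) by Horner's rule: p = p*x + a_i for each coefficient.
    coeffs = [a_n, ..., a_1, a_0], highest degree first."""
    p = 0
    for c in coeffs:
        p = p * x + c
    return p
```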
Computing an (revisited)
• Left-to-right binary exponentiation:
• Initialize product accumulator by 1.
• Scan n’s binary expansion from left to right and do the
following:
• If the current binary digit is 0, square the accumulator (S);
if the binary digit is 1, square the accumulator and multiply it by
a (SM).
• Efficiency: (b - 1) ≤ M(n) ≤ 2(b - 1), where b = ⌊log₂ n⌋ + 1 is the
number of bits in n's binary expansion
• Example: Compute a¹³. Here n = 13 = 1101₂, and the accumulator
evolves as 1 → a → a³ → a⁶ → a¹³ (operations SM, SM, S, SM)
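The left-to-right scan described above can be sketched in Python (function name mine):

```python
def power_lr(a, n):
    """Left-to-right binary exponentiation: for each bit of n,
    square the accumulator (S); if the bit is 1, also multiply by a (M)."""
    result = 1
    for bit in bin(n)[2:]:      # n's binary expansion, most significant bit first
        result *= result        # S
        if bit == '1':
            result *= a         # M
    return result
```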
Example of Horspool’s alg. application
• Shift table for the pattern BAOBAB: t(A) = 1, t(B) = 2, t(O) = 3, and
t(c) = 6 for every other character (including _)
• BARD LOVED BANANAS
• BAOBAB
• BAOBAB
• BAOBAB
• BAOBAB (unsuccessful search)
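Horspool's algorithm as sketched above — right-to-left comparison plus the precomputed shift table — can be written in Python (names are mine):

```python
def horspool(text, pattern):
    """Horspool string matching: return the index of the first match, or -1."""
    m = len(pattern)
    # shift table: distance from the rightmost occurrence of c among the
    # first m-1 pattern characters to the pattern's last position
    shift = {}
    for i, c in enumerate(pattern[:-1]):
        shift[c] = m - 1 - i            # later occurrences overwrite earlier ones
    i = m - 1                           # index in text aligned with pattern's end
    while i < len(text):
        k = 0
        while k < m and pattern[m - 1 - k] == text[i - k]:
            k += 1                      # compare right to left
        if k == m:
            return i - m + 1            # full match
        i += shift.get(text[i], m)      # characters not in the table shift by m
    return -1
```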
Boyer-Moore algorithm
• Based on same two ideas:
• comparing pattern characters to text from right to left
• precomputing shift sizes in two tables: a bad-symbol shift table
and a good-suffix shift table
Boyer-Moore Algorithm (cont.)
• Step 1 Fill in the bad-symbol shift table
• Step 2 Fill in the good-suffix shift table
• Step 3 Align the pattern against the beginning of the text
• Step 4 Repeat until a matching substring is found or text ends:
• Compare the corresponding characters right to left.
• If no characters match, retrieve entry t1(c) from the bad-symbol
table for the text’s character c causing the mismatch,
and shift the pattern to the right by t1(c).
• If 0 < k < m characters are matched, retrieve entry t1(c) from
the bad-symbol table for the text’s character c causing the
mismatch and entry d2(k) from the good-suffix table, and shift
the pattern to the right by
d = max{d1, d2}, where d1 = max{t1(c) - k, 1}.
Example of Boyer-Moore alg. application
• Pattern: BAOBAB (bad-symbol table: t1(A) = 1, t1(B) = 2, t1(O) = 3,
t1(c) = 6 for all other characters)
• Text: BESS_KNEW_ABOUT_BAOBABS
• First alignment: mismatch on K with k = 0 matched → shift d1 = t1(K) = 6
• Second alignment: 2 characters match, then mismatch on _ →
d1 = t1(_) - 2 = 4, d2(2) = 5, shift d = max{4, 5} = 5
• Third alignment: the pattern BAOBAB is found
Hashing
• A very efficient method for implementing a dictionary, i.e., a
set with the operations:
• find
• insert
• delete
• Important applications:
• symbol tables
• databases (extendible hashing)
Hash tables and hash functions
• The idea of hashing is to map keys of a given file of size n into
a table of size m, called the hash table, by using a predefined
function, called the hash function:
h: K → location (cell) in the hash table
[Example: keys hashed into cells 0–12 of a table of size 13; searching for KID]
Open hashing (cont.)
• If hash function distributes keys uniformly, average length of
linked list will be α = n/m. This ratio is called load factor.
[Figure: open hashing of the keys A, FOOL, AND, HIS, MONEY, ARE, SOON, PARTED into a table of size 13 — each key is appended to the linked list of the cell it hashes to]
Closed hashing (cont.)
• Does not work if n > m
• Avoids pointers
• Deletions are not straightforward
• Number of probes to find/insert/delete a key depends on load
factor α = n/m (hash table density) and collision resolution
strategy. For linear probing:
• S = (½) (1+ 1/(1- α)) and U = (½) (1+ 1/(1- α)²)
• As the table gets filled (α approaches 1), number of probes in
linear probing increases dramatically:
Chapter 8
Dynamic Programming
• Main idea:
- set up a recurrence relating a solution to a larger instance
to solutions of some smaller instances
- solve smaller instances once
- record solutions in a table
- extract solution to the initial instance from that table
Example: Fibonacci numbers
F(n) = F(n-1) + F(n-2)
F(0) = 0
F(1) = 1
F(2) = 1+0 = 1
…
F(n-2) =
F(n-1) =
F(n) = F(n-1) + F(n-2)
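The bottom-up table above becomes a few lines of Python (function name mine) — each entry is computed once and recorded:

```python
def fib(n):
    """Bottom-up DP: record F(0..n) in a table, one addition per entry."""
    f = [0, 1]
    for i in range(2, n + 1):
        f.append(f[i - 1] + f[i - 2])   # reuse the two recorded solutions
    return f[n]
```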
Warshall’s Algorithm: Transitive Closure
• Computes the transitive closure of a digraph
[Example: a digraph on vertices 1–4 with edges 1→3, 2→1, 2→4, 4→2; its adjacency matrix and transitive closure]
Warshall’s Algorithm
• On the k-th iteration, the algorithm determines for every pair of
vertices i, j whether a path exists from i to j with intermediate
vertices numbered not higher than k:
R(k)[i,j] = R(k-1)[i,j]                       (path using just 1, …, k-1)
            or (R(k-1)[i,k] and R(k-1)[k,j])  (path from i to k and from k to j,
                                               each using just 1, …, k-1)
Warshall’s Algorithm (matrix generation)
R(0) =        R(1) =
0 0 1 0       0 0 1 0
1 0 0 1       1 0 1 1
0 0 0 0       0 0 0 0
0 1 0 0       0 1 0 0
(in R(1), the path 2→1→3 adds the entry R(1)[2,3] = 1)
Floyd’s Algorithm: All-Pairs Shortest Paths
[Example: a weighted digraph on vertices 1–4 with edges 1→3 (3), 2→1 (2), 3→2 (7), 3→4 (1), 4→1 (6)]
Floyd’s Algorithm (matrix generation)
• On the k-th iteration, the algorithm determines the shortest paths
between every pair of vertices i, j that use only vertices numbered
not higher than k:
D(k)[i,j] = min { D(k-1)[i,j], D(k-1)[i,k] + D(k-1)[k,j] }
Floyd’s Algorithm (example)
2
1 2 0∞3∞2 0∞3∞
3 6 7 0∞∞ 205∞
D(0) = ∞701 D(1) = ∞701
3
1
4 6∞∞0 6∞90
0∞3∞ 0 10 3 4 0 10 3 4
205∞ 2056 2056
D(2) = 9701 D(3) = 9701 D(4) = 7701
6∞90 6 16 9 0 6 16 9 0
Floyd’s Algorithm (pseudocode and analysis)
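The recurrence gives a triple loop in Python (names are mine; w is the weight matrix with `INF` for missing edges); time efficiency is Θ(n³), space Θ(n²):

```python
INF = float('inf')

def floyd(w):
    """Floyd's algorithm: d[i][j] = min(d[i][j], d[i][k] + d[k][j]) for each k."""
    n = len(w)
    d = [row[:] for row in w]          # work on a copy of the weight matrix
    for k in range(n):                 # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d
```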
DP for Optimal BST Problem
• Recurrence: C[i,j] = min (i ≤ k ≤ j) { C[i,k-1] + C[k+1,j] } + Σ (s = i to j) ps,
with C[i,i-1] = 0 and C[i,i] = pi
[Figure: the table C[i,j] is filled diagonal by diagonal, from the main diagonal up to C[1,n]]
Example: key A B C D
probability 0.1 0.2 0.4 0.3
The tables below are filled diagonal by diagonal: the left one is filled
using the recurrence j
C[i,j] = min {C[i,k-1] + C[k+1,j]} + ∑ ps , C[i,i] = pi ;
i≤k≤j s=i
the right one, for trees’ roots, records k’s values giving the minima
Table C (costs):                 Table R (roots):
i\j   0    1    2    3    4      i\j   1   2   3   4
1     0   .1   .4  1.1  1.7      1     1   2   3   3
2          0   .2   .8  1.4      2         2   3   3
3               0   .4  1.0      3             3   3
4                    0   .3      4                 4
5                         0
• Optimal BST: C at the root, left subtree rooted at B (with child A),
right child D; average number of comparisons is 1.7
Analysis of DP for Optimal BST Problem
• Time efficiency: Θ(n³), reducible to Θ(n²) by exploiting
monotonicity of the root-table entries
• Space efficiency: Θ(n²)
Greedy Technique
• Constructs a solution through a sequence of steps, each expanding a
partially constructed solution, until a complete solution is reached
• On each step, the choice made must be:
• feasible — it satisfies the problem’s constraints
• locally optimal — it is the best choice among all feasible ones
• irrevocable — once made, it cannot be changed later
• Approximations:
• traveling salesman problem (TSP)
• knapsack problem
• other combinatorial optimization problems
Change-Making Problem
• Given unlimited amounts of coins of denominations d1 > … > dm,
give change for amount n with the least number of coins
• Greedy solution: repeatedly give the largest-denomination coin
that does not exceed the remaining amount
• The greedy solution is:
• optimal for any amount with “normal” sets of denominations (e.g., 25, 10, 5, 1)
• possibly suboptimal for arbitrary coin denominations
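The greedy rule in Python (function name mine), plus a denomination set where it fails to be optimal:

```python
def make_change(amount, denominations):
    """Greedy change-making: always take the largest coin that still fits.
    Optimal for 'normal' systems like (25, 10, 5, 1), not in general."""
    coins = []
    for d in sorted(denominations, reverse=True):
        while amount >= d:
            amount -= d
            coins.append(d)
    return coins if amount == 0 else None   # None: exact change impossible
```

For example, with denominations (25, 10, 1) and amount 30, the greedy answer 25+1+1+1+1+1 uses 6 coins, while 10+10+10 uses only 3.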
Minimum Spanning Tree (MST)
• Spanning tree of a connected graph G: a connected acyclic
subgraph of G that includes all of G’s vertices
• Example:
[Example: a weighted graph on vertices a, b, c, d with edge weights 1, 2, 3, 4, 6, and its minimum spanning tree]
Prim’s MST algorithm
• Start with tree T1 consisting of one (any) vertex and “grow”
tree one vertex at a time to produce MST through a series of
expanding subtrees T1, T2, …, Tn
Notes about Prim’s algorithm
• Proof by induction that this construction actually yields MST
• Efficiency
• O(n2) for weight matrix representation of graph and array
implementation of priority queue
• O(m log n) for adjacency list representation of graph with n
vertices and m edges and min-heap implementation of priority
queue
Another greedy algorithm for MST: Kruskal’s
• Sort the edges in nondecreasing order of lengths
• On each iteration, add the next edge on the sorted list unless
this would create a cycle. (If it would, skip the edge.)
Example
[Example: the same weighted graph on vertices a, b, c, d — edges are added in increasing order of weight, skipping any edge that would create a cycle]
Notes about Kruskal’s algorithm
• Algorithm looks easier than Prim’s but is harder to implement
(checking for cycles!)
Shortest paths – Dijkstra’s algorithm
• Single Source Shortest Paths Problem: Given a weighted
connected graph G, find shortest paths from source vertex s
to each of the other vertices
[Example: a weighted graph with edges a–b 3, a–d 7, b–c 4, b–d 2, c–d 5, c–e 6, d–e 4; source a]
• Fringe after adding a: b(a,3) c(-,∞) d(a,7) e(-,∞) → add b(a,3)
• Fringe: c(b,3+4) d(b,3+2) e(-,∞) → add d(b,5)
• Fringe: c(b,7) e(d,5+4) → add c(b,7)
• Fringe: e(d,9) → add e(d,9)
• Shortest paths from a: to b (3), to d via b (5), to c via b (7), to e via b, d (9)
Notes on Dijkstra’s algorithm
• Doesn’t work for graphs with negative weights
• Efficiency
• O(|V|2) for graphs represented by weight matrix and array
implementation of priority queue
• O(|E|log|V|) for graphs represented by adj. lists and min-heap
implementation of priority queue
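A min-heap implementation in Python (names are mine; the graph is an adjacency list `{u: [(v, weight), ...]}`, and nonnegative weights are assumed, as the notes require):

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths with a min-heap priority queue.
    Returns {vertex: distance from source} for all reachable vertices."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                      # stale heap entry, already improved
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd              # relax edge (u, v)
                heapq.heappush(heap, (nd, v))
    return dist
```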
• Huffman’s algorithm
• Initialize n one-node trees with alphabet characters and the
tree weights with their frequencies.
• Repeat the following step n-1 times: join two binary trees with
smallest weights into one (as left and right subtrees) and
make its weight equal the sum of the weights of the two trees.
• Mark edges leading to left and right subtrees with 0’s and 1’s,
respectively.
Example
• character: A B C D _
• frequency: 0.1 0.15 0.2 0.2 0.35
[Figure: Huffman tree construction — the two smallest-weight trees (0.1 and 0.15) are joined first, and the process repeats until one tree remains]
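Huffman's algorithm in Python (names are mine; a sketch using a min-heap, with a counter to break weight ties so trees are never compared directly):

```python
import heapq
from itertools import count

def huffman_codes(freqs):
    """Build Huffman codes by repeatedly joining the two smallest-weight trees.
    freqs: {symbol: weight}; returns {symbol: bit string}."""
    tick = count()                      # tie-breaker for equal weights
    heap = [(w, next(tick), {sym: ""}) for sym, w in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)
        w2, _, right = heapq.heappop(heap)
        # mark edges to the left subtree with 0 and to the right with 1
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (w1 + w2, next(tick), merged))
    return heap[0][2]
```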
Chapter 10
Iterative Improvement
[Figure: the feasible region — the quadrilateral with extreme points (0, 0), (4, 0), (3, 1), (0, 2) — bounded by the constraint lines x + y = 4 and x + 3y = 6 and the coordinate axes]
Geometric solution
• maximize 3x + 5y
• subject to x + y ≤ 4
• x + 3y ≤ 6 y
• x ≥ 0, y ≥ 0
[Figure: level lines 3x + 5y = 10, 14, 20 sweeping across the feasible region with extreme points (0, 0), (4, 0), (3, 1), (0, 2)]
• Optimal solution: x = 3, y = 1 (objective value 14)
Extreme Point Theorem Any LP problem with a nonempty bounded feasible
region has an optimal solution; moreover, an optimal solution can always be
found at an extreme point of the problem's feasible region.
3 possible outcomes in solving an LP problem
• has a finite optimal solution, which may not be unique
• unbounded: the objective function can be made arbitrarily large (or small)
• infeasible: no point satisfies all the constraints
Reduction to standard form:
• maximize 3x + 5y → maximize 3x + 5y + 0u + 0v
• subject to x + y ≤ 4 → subject to x + y + u = 4
• x + 3y ≤ 6 → x + 3y + v = 6
• x ≥ 0, y ≥ 0 → x ≥ 0, y ≥ 0, u ≥ 0, v ≥ 0
• Variables u and v, transforming inequality constraints into
equality constraints, are called slack variables
Basic feasible solutions
• A basic solution to a system of m linear equations in n
unknowns (n ≥ m) is obtained by setting n – m variables to 0
and solving the resulting system to get the values of the other
m variables. The variables set to 0 are called nonbasic; the
variables obtained by solving the system are called basic.
• A basic solution is called feasible if all its (basic) variables are
nonnegative.
• Example x + y + u = 4
• x + 3y + v = 6
• x ≥ 0, y ≥ 0, u ≥ 0, v ≥ 0
With the nonbasic variables x and y set to 0, solving gives the basic
variables u = 4 and v = 6, i.e. the basic feasible solution (0, 0, 4, 6).
[Simplex tableaus: starting from the basic feasible solution (0, 0, 4, 6)
with z = 0, one pivot moves to (0, 2, 2, 0) with z = 10, and a second
pivot reaches the optimal solution (3, 1, 0, 0) with z = 14.]
Notes on the Simplex Method
• Finding an initial basic feasible solution may pose a problem
[Figure: an example network on vertices 1–6, with vertex 1 the source and vertex 6 the sink; edge labels are capacities.]
Definition of a Flow
• A flow is an assignment of real numbers xij to edges (i,j) of a
given network that satisfies the following:
• capacity constraints
0 ≤ xij ≤ uij for every edge (i,j), where uij is the edge’s
capacity (edge labels in the figures read xij/uij)
• flow-conservation requirements
The total amount of material entering an intermediate
vertex must be equal to the total amount of the material
leaving the vertex
[Figure: the example network with the zero flow, every edge labeled 0/uij.]
Example 1
[Figure: the zero flow, with the augmenting path 1→2→3→6 highlighted.]
Example 1 (cont.)
[Figure: the flow after the first augmentation, with the next augmenting path 1→4→3←2→5→6 highlighted; edge (2,3) is traversed backward.]
Example 1 (maximum flow)
[Figure: the final flows, e.g. 1/3, 1/4, 1/1; max flow value = 3]
4
Finding a flow-augmenting path
• To find a flow-augmenting path for a flow x, consider paths
from source to sink in the underlying undirected graph in
which any two consecutive vertices i,j are either:
• connected by a directed edge (i to j) with some positive unused
capacity rij = uij – xij
• known as forward edge ( → )
• OR
• connected by a directed edge (j to i) with positive flow xji
• known as backward edge ( ← )
• If a flow-augmenting path is found, the current flow can be
increased by r units by increasing xij by r on each forward
edge and decreasing xji by r on each backward edge, where
r = min{rij over the path’s forward edges, xji over its backward edges}
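The forward/backward augmentation above can be sketched as follows, with BFS choosing each augmenting path (the shortest-augmenting-path strategy discussed below); the network in the test is an illustrative assumption:

```python
# Max flow by repeated BFS augmenting paths. The residual capacity of
# (u,v) combines the forward slack u_uv - x_uv and the backward flow x_vu.

from collections import deque

def max_flow(cap, s, t):
    """cap: dict-of-dicts of edge capacities; returns the max flow value."""
    verts = set(cap) | {v for u in cap for v in cap[u]}
    flow = {u: {v: 0 for v in verts} for u in verts}
    total = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:          # BFS for an augmenting path
            u = q.popleft()
            for v in verts:
                r = cap.get(u, {}).get(v, 0) - flow[u][v] + flow[v][u]
                if v not in parent and r > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return total                      # sink unlabeled: flow is max
        path, v = [], t                       # recover the path found
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        # r = minimum residual capacity along the path
        r = min(cap.get(u, {}).get(v, 0) - flow[u][v] + flow[v][u]
                for u, v in path)
        for u, v in path:                     # cancel backward flow first,
            back = min(r, flow[v][u])         # then push forward
            flow[v][u] -= back
            flow[u][v] += r - back
        total += r
```

With integer capacities the loop always terminates, and by the max-flow min-cut theorem the returned value equals the capacity of a minimum cut.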
• Example 2
[Figure: a network whose outer edges have capacity U and whose middle edge has capacity 1. Repeatedly augmenting by 1 through the middle edge takes 2U iterations, while two augmentations of U units along the outer paths reach the maximum flow V = 2U immediately.]
Example
[Figure: the labeling (shortest-augmenting-path) method on Example 1. Each vertex is labeled with the amount of extra flow that can reach it and the vertex it was reached from; the source is labeled (∞, −).]
Augment the flow by 2 (the sink’s first
label) along the path 1→2→3→6
Example (cont.)
Augment the flow by 1 (the sink’s first
label) along the path 1→4→3←2→5→6
Example (cont.)
[Figure: the final labeling attempt; the queue holds vertices 1 and 4 and then stalls.]
No augmenting path (the sink is unlabeled):
the current flow is maximum
Definition of a Cut
• Let X be a set of vertices in a network that includes its source
but does not include its sink, and let X̄, the complement of X,
be the rest of the vertices, including the sink. The cut induced
by this partition of the vertices is the set of all the edges with a
tail in X and a head in X̄.
• Capacity of a cut is defined as the sum of capacities of the
edges that compose the cut.
Max-Flow Min-Cut Theorem
• The value of maximum flow in a network is equal to the
capacity of its minimum cut
Bipartite Graphs
[Figure: a bipartite graph with vertex sets V = {1, 2, 3, 4, 5} and U = {6, 7, 8, 9, 10}.]
Bipartite Graphs (cont.)
• A bipartite graph is 2-colorable: the vertices can be colored in
two colors so that every edge has its vertices colored
differently
Matching in a Graph
• A matching in a graph is a subset of its edges with the
property that no two edges share a vertex
[Figure: the bipartite graph with V = {1, 2, 3, 4, 5} and U = {6, 7, 8, 9, 10}; a matching in this graph: M = {(4,8), (5,9)}]
[Figure: the same graph and matching M, with one matched vertex and one free vertex marked.]
Augmentation along 3, 8, 4, 9, 5, 10
[Figure: the matching before and after augmenting along this path, which alternates non-matching and matching edges and enlarges the matching by one edge.]
Example
[Figure: BFS search for an augmenting path starting from a free vertex in V.]
Each vertex is labeled with the vertex it was reached from. Queue deletions are
indicated by arrows. The free vertex found in U is shaded and labeled for clarity;
the new matching obtained by the augmentation is shown on the next slide.
Example (cont.)
[Figure: successive stages of the search and the resulting augmentations.]
[Figure: the resulting maximum matching.]
Notes on Maximum Matching Algorithm
• Each iteration (except the last) matches two free vertices (one
each from V and U). Therefore, the number of iterations
cannot exceed n/2 + 1, where n is the number of vertices in
the graph. The time spent on each iteration is in O(n+m),
where m is the number of edges in the graph. Hence, the time
efficiency is in O(n(n+m))
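A minimal sketch of the augmenting-path idea, here via DFS rather than the BFS labeling described above (either search works); the adjacency lists in the test are an illustrative assumption:

```python
# Maximum bipartite matching: for each vertex of V, try to find an
# augmenting path - a free U-vertex, possibly after re-matching mates.

def max_matching(adj):
    """adj: {v in V: [neighbors in U]}; returns {v: matched u}."""
    match_u = {}                       # u -> v currently matched with u

    def try_augment(v, visited):
        for u in adj[v]:
            if u in visited:
                continue
            visited.add(u)
            # u is free, or u's current mate can be re-matched elsewhere
            if u not in match_u or try_augment(match_u[u], visited):
                match_u[u] = v
                return True
        return False

    for v in adj:                      # one augmentation attempt per vertex
        try_augment(v, set())
    return {v: u for u, v in match_u.items()}
```

Each call to `try_augment` is one O(n+m) search, matching the efficiency bound stated above.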
Stable Marriage Problem
• There is a set Y = {m1,…,mn} of n men and a set X =
{w1,…,wn} of n women. Each man has a ranking list of the
women, and each woman has a ranking list of the men (with
no ties in these lists).
• A marriage matching M is a set of n pairs (mi, wj).
• A pair (m, w) is said to be a blocking pair for matching M if
man m and woman w are not matched in M but prefer each
other to their mates in M.
• A marriage matching M is called stable if there is no blocking
pair for it; otherwise, it’s called unstable.
• The stable marriage problem is to find a stable marriage
matching for men’s and women’s given preferences.
Instance of the Stable Marriage Problem
• An instance of the stable marriage problem can be specified
either by two sets of preference lists or by a ranking matrix, as
in the example below.
• men’s preferences women’s preferences
• 1st 2nd 3rd 1st 2nd 3rd
• Bob: Lea Ann Sue Ann: Jim Tom Bob
• Jim: Lea Sue Ann Lea: Tom Bob Jim
• Tom: Sue Lea Ann Sue: Jim Tom Bob
• Step 1 While there are free men, arbitrarily select one of them
and do the following:
Proposal The selected free man m proposes to w, the next
woman on his preference list
• Response If w is free, she accepts the proposal to be
matched with m. If she is not free, she compares m with her
current mate. If she prefers m to him, she accepts m’s
proposal, making her former mate free; otherwise, she simply
rejects m’s proposal, leaving m free
Lower Bounds
• Examples:
• number of comparisons needed to find the largest element in
a set of n numbers
• number of comparisons needed to sort an array of size n
• number of comparisons necessary for searching in a sorted
array
• number of multiplications needed to multiply two n-by-n
matrices
Lower Bounds (cont.)
• Lower bound can be
• an exact count
• an efficiency class (Ω)
• Methods for establishing lower bounds:
• trivial lower bounds
• information-theoretic arguments (decision trees)
• adversary arguments
• problem reduction
Trivial Lower Bounds
• Trivial lower bounds: based on counting the number of items
that must be processed in input and generated as output
• Examples
• finding max element
• polynomial evaluation
• sorting
• element uniqueness
[Figure: a decision tree for sorting three elements a, b, c; internal nodes are comparisons such as b < c and a < c with yes/no branches, and leaves are the possible orderings (abc, bac, …).]
• Any comparison-based sorting algorithm needs at least
⌈log2 n!⌉ ≈ n log2 n comparisons in the worst case
• Possible answers:
• yes (give examples)
• no
• because it’s been proved that no algorithm exists at all
(e.g., Turing’s halting problem)
• because it’s been be proved that any algorithm takes
exponential time
• unknown
Problem Types: Optimization and Decision
• Optimization problem: find a solution that maximizes or
minimizes some objective function
• Decision problem: answer yes/no to a question
• Examples:
• searching
• element uniqueness
• graph connectivity
• graph acyclicity
• Big question: P = NP ?
NP-Complete Problems
• A decision problem D is NP-complete if it’s as hard as any
problem in NP, i.e.,
• D is in NP
• every problem in NP is polynomial-time reducible to D
[Figure: the class of NP problems, with an NP-complete problem to which every problem in NP reduces.]
[Figure: proving a candidate problem NP-complete by polynomial-time reduction from a known NP-complete problem.]
• dynamic programming
• applicable to some problems (e.g., the knapsack problem)
• backtracking
• eliminates some unnecessary cases from consideration
• yields solutions in reasonable time for many instances but
worst case is still exponential
• branch-and-bound
• further refines the backtracking idea for optimization problems
Backtracking
• Construct the state-space tree
• nodes: partial solutions
• edges: choices in extending partial solutions
State-Space Tree of the 4-Queens Problem
[Figure: a 4×4 board with queen i placed in row i; the state-space tree branches on the column chosen for each successive queen, backtracking when no column in the current row is safe.]
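The pruning rule just described — abandon a partial solution as soon as the new queen is attacked — can be sketched as:

```python
# Backtracking over the n-queens state-space tree. A partial solution
# is a tuple of column choices for the rows placed so far.

def solve_queens(n, cols=()):
    """Yields complete placements; cols[i] = column of the queen in row i."""
    row = len(cols)
    if row == n:
        yield cols                       # a leaf: full solution found
        return
    for c in range(n):                   # try extending the partial solution
        # safe if no earlier queen shares a column or a diagonal
        if all(c != pc and abs(c - pc) != row - pr
               for pr, pc in enumerate(cols)):
            yield from solve_queens(n, cols + (c,))
```

For n = 4 this yields exactly the two solutions (1, 3, 0, 2) and (2, 0, 3, 1), matching the two successful leaves of the state-space tree.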
Example: Hamiltonian Circuit Problem
[Figure: a graph on vertices a, b, c, d, e, f and the state-space tree of the backtracking search for a Hamiltonian circuit.]
Example: Subset-Sum Problem
[Figure: the state-space tree for the instance S = {3, 5, 6, 7}, d = 15. Each level decides whether the next element joins the subset (“with 3” / “w/o 3”, …); a branch is pruned when its sum already exceeds 15 (e.g. 14+7>15, 9+7>15) or cannot reach 15 even with all remaining elements (e.g. 0+13<15). The leaf with sum 15 = 3+5+7 is the solution.]
Branch-and-Bound
• An enhancement of backtracking
Example
Job 1 Job 2 Job 3 Job 4
Person a 9 2 7 8
Person b 6 4 3 7
Person c 5 8 1 8
Person d 7 6 9 4
Lower bound: Any solution to this problem will have total cost
at least: 2 + 3 + 1 + 4 (or 5 + 2 + 1 + 4)
Example: First two levels of the state-space
tree
Example (cont.)
Example: Complete state-space tree
Example: Traveling Salesman Problem
Approximation Approach
• Apply a fast (i.e., a polynomial-time) approximation algorithm
to get a solution that is not necessarily optimal but hopefully
close to it
• Accuracy measures:
• accuracy ratio of an approximate solution sa
• r(sa) = f(sa) / f(s*) for minimization problems
• r(sa) = f(s*) / f(sa) for maximization problems
• where f(sa) and f(s*) are values of the objective function f for
the approximate solution sa and actual optimal solution s*
• Stage 2: Add next edge on the sorted list to the tour, skipping
those whose addition would’ve created a vertex of
degree 3 or a cycle of length less than n. Repeat
this step until a tour of length n is obtained
[Figure: a weighted graph on vertices a, b, c, d, e and its minimum spanning tree; doubling the tree’s edges gives the walk below, and shortcutting repeated vertices gives the tour.]
Walk: a – b – c – b – d – e – d – b – a Tour: a – b – c – d – e – a
Christofides Algorithm
• Stage 1: Construct a minimum spanning tree of the graph
[Figure: the example graph, its minimum spanning tree, and the tree augmented with a minimum-weight matching of its odd-degree vertices; an Eulerian circuit of the augmented graph is then shortcut into a tour.]
Euclidean Instances
• Theorem If P ≠ NP, there exists no polynomial-time approximation
algorithm for TSP with a finite performance ratio.
• Definition An instance of TSP is called Euclidean, if its
distances satisfy two conditions:
• 1. symmetry d[i, j] = d[j, i] for any pair of cities i and j
2. triangle inequality d[i, j] ≤ d[i, k] + d[k, j] for any cities i, j, k
• Example of a 2-change
[Figure: a tour through cities C1–C6 with two edges deleted and replaced by the two other edges that reconnect the tour.]
• Example of a 3-change
[Figure: the same tour with three edges deleted; several reconnections are possible.]
Empirical Data for Euclidean Instances
Greedy Algorithm for Knapsack Problem
• Step 1: Order the items in decreasing order of relative values:
v1/w1 ≥ v2/w2 ≥ … ≥ vn/wn
• Step 2: Select the items in this order, skipping those that don’t
fit into the knapsack
Example: The knapsack’s capacity is 16
• item weight value v/w
• 1 2 $40 20
• 2 5 $30 6
• 3 10 $50 5
• 4 5 $10 2
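The two greedy steps can be sketched directly; the test uses the capacity-16 instance above, for which the greedy value is $80 while the optimal subset (items 1 and 3) is worth $90:

```python
# Greedy heuristic for the 0/1 knapsack problem: sort by value/weight
# ratio, then take each item that still fits.

def greedy_knapsack(capacity, items):
    """items: list of (weight, value); returns (total value, chosen items)."""
    order = sorted(items, key=lambda wv: wv[1] / wv[0], reverse=True)
    total, chosen = 0, []
    for w, v in order:                 # skip items that don't fit
        if w <= capacity:
            capacity -= w
            total += v
            chosen.append((w, v))
    return total, chosen
```

The gap between $80 and $90 here illustrates why the accuracy section below considers schemes that improve on the plain greedy choice.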
Accuracy
Approximation Scheme for Knapsack
Problem
• Step 1: Order the items in decreasing order of relative values:
v1/w1 ≥ v2/w2 ≥ … ≥ vn/wn
• Step 2: For a given integer parameter k, 0 ≤ k ≤ n, generate all
subsets of k items or less and for each of those that fit the
knapsack, add the remaining items in decreasing
order of their value to weight ratios
• Step 3: Find the most valuable subset among the subsets
generated in Step 2 and return it as the algorithm’s output
• Accuracy (first-fit algorithm)
• Number of extra bins never exceeds optimal by more than
70% (i.e., RA ≤ 1.7)
• Empirical average-case behavior is much better. (In one
experiment with 128,000 bins, the relative error was found
to be no more than 2%.)
Bin Packing: First-Fit Decreasing Algorithm
• First-Fit Decreasing (FFD) Algorithm: Sort the items in
decreasing order (i.e., from the largest to the smallest). Then
proceed as above by placing an item in the first bin in which it
fits and starting a new bin if there are no such bins
• Accuracy
• Number of extra bins never exceeds optimal by more than
50% (i.e., RA ≤ 1.5)
• Empirical average-case behavior is much better, too
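The FFD rule above can be sketched in a few lines; bin capacity and item sizes in the test are illustrative assumptions (integers, to avoid floating-point edge cases at exact capacity):

```python
# First-Fit Decreasing bin packing: sort items largest first, place each
# in the first bin with room, and open a new bin only when none fits.

def first_fit_decreasing(sizes, capacity):
    """Returns the list of bins, each a list of item sizes."""
    bins = []
    for s in sorted(sizes, reverse=True):
        for b in bins:                     # first bin where the item fits
            if sum(b) + s <= capacity:
                b.append(s)
                break
        else:
            bins.append([s])               # no such bin: start a new one
    return bins
```

For example, items of sizes 5, 5, 4, 4, 1, 1 with capacity 10 pack into two bins, which is optimal here.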
Numerical Algorithms
• Numerical algorithms are concerned with solving mathematical
problems such as
• evaluating functions (e.g., √x, e^x, ln x, sin x)
• solving nonlinear equations
• finding extrema of functions
• computing definite integrals
• Principal accuracy issues:
• truncation errors, e.g. in the approximation
e^x ≈ 1 + x + x²/2! + … + xⁿ/n!
• round-off errors
Solving Quadratic Equation
• Quadratic equation ax² + bx + c = 0 (a ≠ 0)
• x1,2 = (−b ± √D)/(2a), where D = b² − 4ac
• Problems:
• computing the square root
• use Newton’s method: xn+1 = 0.5(xn + D/xn)
• subtractive cancellation
• use alternative formulas (see p. 411)
• use double precision for computing D = b² − 4ac
• Useful:
• sketch graph of f(x)
• separate roots
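The two fixes above can be combined in one sketch (an assumed implementation, not the book's code): Newton's method supplies √D, and the cancellation-prone subtraction is avoided by computing the larger-magnitude root first and obtaining the other from the product of roots, x1·x2 = c/a:

```python
# Quadratic roots without subtractive cancellation.
# sqrt(D) is computed by Newton's method: x_{n+1} = 0.5*(x_n + D/x_n).

def quadratic_roots(a, b, c, eps=1e-12):
    d = b * b - 4 * a * c
    if d < 0:
        raise ValueError("no real roots")
    x = max(d, 1.0)                       # Newton's method for sqrt(d)
    while abs(x * x - d) > eps * max(d, 1.0):
        x = 0.5 * (x + d / x)
    # q = -(b + sign(b)*sqrt(D))/2: the sign choice adds magnitudes,
    # so b and sqrt(D) never cancel
    q = -0.5 * (b + x) if b >= 0 else -0.5 * (b - x)
    return (q / a, c / q) if q != 0 else (0.0, 0.0)
```

For x² − 3x + 2 = 0 this returns the roots 2 and 1; for the double root of x² + 2x + 1 = 0 both returned values are close to −1.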
Three Classic Methods
• Three classic methods for solving a nonlinear equation
f(x) = 0
in one unknown:
• bisection method
• method of false position
• Newton’s method
Bisection Method
• Based on
• Theorem: If f(x) is continuous on a ≤ x ≤ b and f(a) and f(b) have
opposite signs, then f(x) = 0 has a root on a < x < b
• binary search idea
[Figure: f(x) crossing the x-axis between a and b; x1 is the midpoint of the interval.]
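The theorem and the binary-search idea combine into a short sketch, tested here on the x³ − x − 1 = 0 example that follows:

```python
# Bisection method for f(x) = 0 on [a, b], assuming f is continuous
# and f(a), f(b) have opposite signs.

def bisection(f, a, b, eps=1e-6):
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while (b - a) / 2 > eps:           # the interval halves each step
        m = (a + b) / 2
        fm = f(m)
        if fm == 0:
            return m
        if fa * fm < 0:                # sign change in [a, m]
            b = m
        else:                          # sign change in [m, b]
            a, fa = m, fm
    return (a + b) / 2
```

Since the bracketing interval halves on every iteration, reaching absolute error ε takes about log2((b − a)/ε) steps.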
Example
• x³ − x − 1 = 0 on the interval [0, 2],
with the absolute error not exceeding a given ε
[Figure: the successive bracketing intervals [an, bn] and midpoints xn of the bisection method.]
[Figure: Newton’s method: the tangent to f at (xn, f(xn)) crosses the x-axis at xn+1.]