
Recursion
COMS W1007 Introduction to Computer Science
Christopher Conway
26 June 2003

The Fibonacci Sequence

The Fibonacci numbers are:

  1, 1, 2, 3, 5, 8, 13, 21, 34, . . .

We can calculate the nth Fibonacci number (n ≥ 2) using the formula:

  Fn = Fn−1 + Fn−2

Recursion

• Defining a function in terms of itself is called recursion. We call a method that calls itself a recursive method.
• We don’t have to do anything special to write a recursive method in Java; any method can call itself.
• Each recursive call has its own distinct set of parameters and local variables. A recursive call is a separate entry on the execution stack.

The Basis Case

• In order for recursion to work correctly, every recursive method must have a basis case.
• The basis case is an input for which the method does not make a recursive call. The basis case prevents infinite recursion.
• It’s not enough for there to simply be a basis case; the values of the input must reliably approach the basis value.

The Fibonacci Sequence Redux

In the Fibonacci sequence, the basis cases are n = 0 and n = 1. Since the sequence is only defined for nonnegative integers n, the recursive definition will always approach 0.

Basis cases:

  F0 = 1
  F1 = 1

Recursive case (n ≥ 2):

  Fn = Fn−1 + Fn−2

The Factorial Function

The factorial function is:

  n! = n(n − 1) · · · 1

We can define the factorial function recursively as follows.

Basis case:

  1! = 1

Recursive case (n > 1):

  n! = n · (n − 1)!
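The recursive definition of factorial translates directly into Java. This is a minimal sketch (the `Factorial` class name is illustrative, not from the slides); the two branches mirror the basis and recursive cases above:

```java
public class Factorial {
    // Basis case: 1! = 1. Recursive case (n > 1): n! = n * (n-1)!
    public static long factorial(int n) {
        if (n == 1) {
            return 1;                      // basis case: no recursive call
        }
        return n * factorial(n - 1);       // recursive case
    }

    public static void main(String[] args) {
        System.out.println(factorial(5));  // 120
    }
}
```

Note how each call reduces n by 1, so the input reliably approaches the basis value.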

The Towers of Hanoi

The Towers of Hanoi problem is to move a stack of plates from the first post to the third. You may only move one plate at a time and a larger plate cannot be stacked on top of a smaller one.

[Figures: Steps 1-7 of the solution for a three-plate puzzle.]

The Towers of Hanoi is a classic example of a recursive problem. To solve it for n plates:

1. Move n − 1 plates from the first post to the extra post.
2. Move the largest plate to the destination post.
3. Move n − 1 plates from the extra post to the destination.

The basis case (1 plate) is trivial.
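The three steps above can be sketched in Java. The `Hanoi` class and post names below are illustrative; the method returns the number of single-plate moves it performs, which is 2^n − 1 for n plates:

```java
public class Hanoi {
    // Move n plates from post `from` to post `to`, using `extra` as the spare.
    // Returns the number of single-plate moves performed.
    public static int move(int n, String from, String to, String extra) {
        if (n == 1) {
            // Basis case: a single plate moves directly.
            System.out.println("move plate from " + from + " to " + to);
            return 1;
        }
        int moves = move(n - 1, from, extra, to);   // 1. n−1 plates to the extra post
        System.out.println("move plate from " + from + " to " + to); // 2. largest plate
        moves += 1 + move(n - 1, extra, to, from);  // 3. n−1 plates to the destination
        return moves;
    }

    public static void main(String[] args) {
        System.out.println(move(3, "A", "C", "B") + " moves"); // 3 plates take 7 moves
    }
}
```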

Merge Sort

Input: A list of numbers X1, X2, . . . , Xn and a range i..j to sort. (i and j are initially 1 and n, respectively.)

Output: A list in ascending order.

1. If i = j, go to Step 6.
2. k := (i + j) ÷ 2.
3. Y := sort Xi..k.
4. Z := sort Xk+1..j.
5. Merge Y with Z into X′.
6. Return sorted list X′.

Merge Sort: Example

[Figure: merge sort of the list 5 9 2 1 6, repeatedly split into halves and then merged back together into 1 2 5 6 9.]

Merge Sort: Efficiency

From the top down, a merge sort of a list of length n will:

• Merge two lists of length n/2: approximately n operations.
• Merge four lists of length n/4: n operations.
• Merge eight lists of length n/8: n operations.
• And so on, until we have n lists of length one: n operations.

If each step takes n operations, the question is: how many steps until we reach the basis case?
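Steps 1-6 above can be sketched as a Java method over arrays, using 0-based indices instead of the slide's 1-based range (the `MergeSort` class name is illustrative):

```java
import java.util.Arrays;

public class MergeSort {
    // Sort x[i..j] (inclusive), following the slide's steps.
    public static int[] sort(int[] x, int i, int j) {
        if (i == j) {
            return new int[] { x[i] };     // Steps 1/6: one element is already sorted
        }
        int k = (i + j) / 2;               // Step 2: split point
        int[] y = sort(x, i, k);           // Step 3: sort the left half
        int[] z = sort(x, k + 1, j);       // Step 4: sort the right half
        return merge(y, z);                // Step 5: merge Y and Z into X'
    }

    // Merge two sorted lists into one sorted list.
    private static int[] merge(int[] y, int[] z) {
        int[] out = new int[y.length + z.length];
        int a = 0, b = 0;
        for (int c = 0; c < out.length; c++) {
            if (b >= z.length || (a < y.length && y[a] <= z[b])) {
                out[c] = y[a++];           // take the smaller front element from y
            } else {
                out[c] = z[b++];           // ... or from z
            }
        }
        return out;
    }

    public static void main(String[] args) {
        int[] x = { 5, 9, 2, 1, 6 };       // the list from the example slide
        System.out.println(Arrays.toString(sort(x, 0, x.length - 1)));
        // [1, 2, 5, 6, 9]
    }
}
```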

Merge Sort: Efficiency, 2

• If we continually divide n by 2, it takes ⌊log2 n⌋ steps to reach 1.
• Since we use log2 n a lot in computer science, we like to abbreviate it lg n.
• Merge sort performs n operations in every one of ⌊lg n⌋ steps. The running time is:

  T(n) ≈ n lg n

• Recall that the other sorts we studied took approximately n² operations. n lg n is much better than n². (Compare them for n = 100 or n = 1000.)

Binary Search

Input: A sorted list X1, X2, . . . , Xn and a number a to find.

Output: A boolean value Found indicating whether a is contained in X.

1. Found := 0, i := 1, j := n.
2. k := (i + j) ÷ 2.
3. If j ≤ i, go to Step 6.
4. If Xk > a, j := k − 1, go to Step 2.
5. If Xk < a, i := k + 1, go to Step 2.
6. If Xk = a, Found := 1.

Binary Search: Efficiency

• Each step of the binary search divides the list in 2. The number of steps it will take to find a number in a list of length n is:

  T(n) ≈ lg n

• The log function grows quite slowly:

  lg 100 ≈ 7
  lg 1,000 ≈ 10
  lg 1,000,000 ≈ 20
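The goto-style binary search above translates naturally into a Java loop. This sketch (the `BinarySearch` class name is illustrative) returns the boolean Found directly:

```java
public class BinarySearch {
    // Returns true if a appears in the sorted array x.
    public static boolean contains(int[] x, int a) {
        int i = 0, j = x.length - 1;       // 0-based version of i := 1, j := n
        while (i < j) {
            int k = (i + j) / 2;           // k := (i + j) / 2
            if (x[k] > a) {
                j = k - 1;                 // a must lie in the left half
            } else if (x[k] < a) {
                i = k + 1;                 // a must lie in the right half
            } else {
                return true;               // Xk = a: found it
            }
        }
        // The range has narrowed to (at most) one element.
        return i == j && x[i] == a;
    }
}
```

Each pass through the loop halves the remaining range, which is where the lg n step count comes from.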
Calculating Fibonacci Numbers

Consider a recursive method for calculating the nth Fibonacci number:

int fibo(int n) {
  if( n==0 || n==1 )
    return 1 ;
  else
    return fibo(n-1) + fibo(n-2) ;
}

What is the running time of fibo on an input n?

Calculating Fibonacci Numbers, 2

From the top down, a call to fibo will:

• Add the result of two recursive method calls: 1 operation.
• Each recursive call will add the value of two further recursive calls (4 in all): 2 operations.
• Each of those calls will add the value of two further recursive calls (8 in all): 4 operations.
• And so on, until we reach fibo(1) and fibo(0).

It will take n − 1 steps to reach the basis case.

Calculating Fibonacci Numbers, 3

The running time of fibo grows exponentially with n:

  T(n) = 1 + 2 + 4 + · · · + 2^(n−2)
       = Σ_{i=0}^{n−2} 2^i
       = 2^(n−1) − 1
       ≈ 2^n

This is bad. Exponential growth is worse than n². In fact, it’s worse than n^c for any c.

Iteration vs. Recursion

Consider an iterative method for calculating Fibonacci numbers:

int fibo2(int n) {
  int n1 = 1, n2 = 1 ;
  for( int i=2 ; i <= n ; i++ ) {
    int tmp = n2 ;
    n2 = n1 ;
    n1 = tmp + n2 ;
  }
  return n1 ;
}

Iteration vs. Recursion: 2

• fibo2 performs one addition for each number in the sequence, from 2 to n. Thus, it is linear in n.

  T(n) ≈ n

• Recursion is not always the best solution to a problem, even when the problem itself is defined recursively.
• We can usually solve a problem iteratively (i.e., using loops) or recursively. Which one we choose depends on the particular problem and personal taste.

Orders of Magnitude

• When we say the running time of an algorithm is approximately f(n), what we really mean is that it is on the same order of magnitude as f(n).
• We express orders of magnitude using the notation Θ(f(n)). T(n) = Θ(f(n)) means that an algorithm grows neither faster nor slower than f(n).
• The orders of magnitude are related as follows:

  c ≺ lg n ≺ n ≺ n lg n ≺ n^c ≺ c^n

  where c is a constant.

Orders of Magnitude, 2

• Constant-time algorithms (Θ(1)) are as good as it gets. That means we can calculate the result in a fixed number of steps, regardless of the input.
• Exponential algorithms (Θ(c^n)) are just about as bad as it gets. We call exponential algorithms intractable: it is not practical to run them on anything but very small inputs.
• Quadratic and higher polynomial algorithms (Θ(n^c)) are tractable but slow. Linear (Θ(n)), logarithmic (Θ(lg n)) and linear-logarithmic (Θ(n lg n)) algorithms are what we shoot for.
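To see these growth rates concretely, a short sketch can tabulate lg n, n lg n and n² for a few input sizes (Java has no built-in log base 2, so we compute it as ln n / ln 2; the `Growth` class name is illustrative):

```java
public class Growth {
    public static void main(String[] args) {
        // Compare the orders of magnitude from these slides for a few input sizes.
        for (int n : new int[] { 100, 1000, 1000000 }) {
            double lg = Math.log(n) / Math.log(2);   // lg n = log base 2 of n
            System.out.printf("n=%7d  lg n=%5.1f  n lg n=%12.0f  n^2=%14.0f%n",
                    n, lg, n * lg, (double) n * n);
        }
    }
}
```

Even at n = 1000, n lg n is about ten thousand operations while n² is a million: a hundredfold difference.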
