Recursion

Functions can call other functions. In fact, a function can call itself, either directly or
indirectly. When a function calls itself, this is known as recursion, a fundamental technique
in Computer Science.

Many examples of the use of recursion may be found: the technique is useful both for the
definition of mathematical functions and for the definition of data structures. Naturally, if a
data structure may be defined recursively, it may be processed by a recursive function.

3.4.1 Recursive functions

Many mathematical functions can be defined recursively:

- factorial
- Fibonacci
- Euclid's GCD (greatest common divisor; see the sketch below)
- Fourier Transform
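
As a concrete illustration, Euclid's GCD has a particularly compact recursive definition. A
minimal C version (not taken from the original notes) might look like:

int gcd( int a, int b )
{
    if ( b == 0 ) return a;        /* base case: gcd(a, 0) = a */
    else return gcd( b, a % b );   /* recursive step: gcd(a, b) = gcd(b, a mod b) */
}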

Many problems can be solved recursively, e.g. games of all types, from simple ones like the
Towers of Hanoi puzzle to complex ones like chess. In games, recursive solutions are
particularly convenient because, having solved the problem by a series of recursive calls, you
usually want to know how you arrived at the solution. Because the program's call stack
records the move chosen at each point, it does this housekeeping for you! This is explained in
more detail later.

3.4.2 Example: Factorial


One of the simplest examples of a recursive definition is that for the factorial function:
factorial( n ) = if ( n = 0 ) then 1
else n * factorial( n-1 )
A natural way to calculate factorials is to write a recursive function which matches this
definition:
int fact( int n )
{
    if ( n == 0 ) return 1;
    else return n * fact( n-1 );
}

Note how this function calls itself to evaluate the next term. Eventually it will reach the
termination condition and exit. However, before it reaches the termination condition, it will
have pushed n stack frames onto the program's run-time stack.
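
For example, a call fact(4) unwinds as follows, each line corresponding to one more stack
frame being pushed before the base case is reached:

fact(4) = 4 * fact(3)
        = 4 * 3 * fact(2)
        = 4 * 3 * 2 * fact(1)
        = 4 * 3 * 2 * 1 * fact(0)
        = 4 * 3 * 2 * 1 * 1
        = 24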

The termination condition is obviously extremely important when dealing with recursive
functions. If it is omitted, then the function will continue to call itself until the program runs
out of stack space - usually with moderately unpleasant results!

Failure to include a correct termination condition in a recursive function is a recipe for
disaster!

Another commonly used (and abused!) example of a recursive function is the calculation of
Fibonacci numbers. Following the definition:
fib( n ) = if ( n = 0 ) then 1
           else if ( n = 1 ) then 1
           else fib( n-1 ) + fib( n-2 )

one can write:


int fib( int n )
{
    if ( (n == 0) || (n == 1) ) return 1;
    else return fib( n-1 ) + fib( n-2 );
}

Technically, a recursive function is a function that makes a call to itself. To prevent infinite
recursion, you need an if-else statement (of some sort) where one branch makes a recursive
call, and the other branch does not. The branch without a recursive call is usually the base
case (base cases do not make recursive calls to the function).
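
As a quick check, a minimal driver (an assumed main function, not part of the original notes)
exercises both functions defined above:

#include <stdio.h>

int fact( int n );   /* defined earlier */
int fib( int n );    /* defined earlier */

int main( void )
{
    printf( "fact(5) = %d\n", fact(5) );   /* prints 120 */
    printf( "fib(6) = %d\n", fib(6) );     /* prints 13, since fib(0) = fib(1) = 1 here */
    return 0;
}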

Tower of Hanoi

The Tower of Hanoi (or Towers of Hanoi) is a mathematical game or puzzle. It consists of
three rods, and a number of disks of different sizes which can slide onto any rod. The puzzle
starts with the disks in a neat stack in ascending order of size on one rod, the smallest at the
top, thus making a conical shape.

The objective of the puzzle is to move the entire stack to another rod, obeying the following
rules:

- Only one disk may be moved at a time.
- Each move consists of taking the upper disk from one of the rods and sliding it onto
  another rod, on top of the other disks that may already be present on that rod.
- No disk may be placed on top of a smaller disk.

When the initial pile has one, two, or three rings, you can probably work out how to solve the
problem in each case. But clearly, the more rings there are, the harder the problem becomes.
So what if we want to write a program which will work for any pile of rings, however large?
We need an algorithm which will work for an arbitrarily large pile of rings.

Logical analysis of the recursive solution

As in many mathematical puzzles, finding a solution is made easier by solving a slightly
more general problem: how to move a tower of h (h = height) disks from a starting peg A
(f = from) onto a destination peg C (t = to), B being the remaining third peg and assuming
t ≠ f.

First, observe that the problem is symmetric for permutations of the names of the pegs
(symmetric group S3). If a solution is known moving from peg A to peg C, then, by renaming
the pegs, the same solution can be used for every other choice of starting and destination peg.

If there is only one disk (or even none at all), the problem is trivial. If h = 1, then simply
move the disk from peg A to peg C. If h > 1, then somewhere along the sequence of moves,
the largest disk must be moved from peg A to another peg, preferably to peg C. The only
situation that allows this move is when all h-1 smaller disks are on peg B. Hence, first all h-1
smaller disks must go from A to B. Subsequently move the largest disk, and finally move the
h-1 smaller disks from peg B to peg C. The presence of the largest disk does not impede any
move of the h-1 smaller disks and can temporarily be ignored. Now the problem is reduced to
moving h-1 disks from one peg to another, first from A to B and subsequently from B to C,
but the same method can be used both times by renaming the pegs. The same strategy can be
used to reduce the h-1 problem to h-2, h-3, and so on until only one disk is left. This is called
recursion.

This algorithm can be schematized as follows. Identify the disks in order of increasing size
by the natural numbers from 0 up to, but not including, h. Hence disk 0 is the smallest one
and disk h-1 the largest one. To move h disks from peg A to peg C, with peg B as the spare:

1. Move the top h-1 disks from A to B (using C as the spare).
2. Move disk h-1 (the largest) from A to C.
3. Move the h-1 disks from B to C (using A as the spare).
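
A compact C sketch of this strategy (the function and parameter names here are illustrative,
not taken from the original notes) follows the three steps directly:

#include <stdio.h>

/* Move h disks from peg 'from' to peg 'to', using 'spare' as the third peg. */
void hanoi( int h, char from, char to, char spare )
{
    if ( h == 0 ) return;                   /* base case: nothing to move */
    hanoi( h-1, from, spare, to );          /* step 1: move h-1 disks out of the way */
    printf( "move disk %d: %c -> %c\n", h-1, from, to );   /* step 2: move the largest disk */
    hanoi( h-1, spare, to, from );          /* step 3: move the h-1 disks onto the target */
}

int main( void )
{
    hanoi( 3, 'A', 'C', 'B' );              /* prints the 7 moves for three disks */
    return 0;
}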

By means of mathematical induction, it is easily proven that the above procedure requires the
minimal number of moves possible, and that the produced solution is the only one with this
minimal number of moves. Using recurrence relations, the exact number of moves that this
solution requires can be calculated: it is 2^h - 1. This result is obtained by noting that steps 1
and 3 each take T(h-1) moves, and step 2 takes one move, giving T(h) = 2 T(h-1) + 1.
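
For instance, with h = 3 and T(1) = 1:

T(3) = 2 T(2) + 1 = 2 (2 T(1) + 1) + 1 = 2 * 3 + 1 = 7 = 2^3 - 1,

which matches the seven moves printed by the sketch above.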
