
Integre Technical Publishing Co., Inc. College Mathematics Journal 40:2, December 17, 2008, 12:49 p.m. sultan.tex page 87

CORDIC: How Hand Calculators Calculate


Alan Sultan

Alan Sultan is a professor of mathematics at Queens College of the City University of New York. He is the author of two books and several research articles, and is currently heavily involved in the math education of prospective high school math teachers. He enjoys singing bass in the Oratorio Society of Queens, and loves to see how the math we teach can be applied.

Almost every calculus teacher I ask answers the question, "How do calculators compute sines and cosines?" with the words, "Taylor polynomials." It comes as quite a surprise to most that, though it is reasonable to use Taylor polynomials, it is really a method known as CORDIC that is used to compute these and other special functions. CORDIC was discovered by Jack Volder in 1959 while working for the Convair corporation and was developed to replace the analog resolver in the B-58 bomber's navigation computer. What was needed was a program that could compute trigonometric functions in real time without the use of much hardware. CORDIC, an acronym for COordinate Rotation DIgital Computer, was the solution. CORDIC is fast, much faster than Taylor series on the low-level hardware used in a calculator, though when you first see it, it is hard to believe.
What is nice about CORDIC is that it connects mathematics usually learned in high school with computer hardware, making it especially interesting to students who wonder where the math they learn might be used. The method can be adapted to compute inverse trigonometric functions, hyperbolic functions, logarithms, exponentials, and can even be modified to do multiplication and division! (See [2] or [3] for more on this.) This amazing method is a genuinely key part of calculator technology, and knowledge of it can benefit both teacher and student.
We present CORDIC in its original binary form even though it now uses binary-coded decimal (BCD). What we present is the heart of the algorithm, focusing on a special case: the computation of sines and cosines. (For the history of computing transcendental functions see [4]. For its actual implementation in hardware, see [1].)

The basics
What makes CORDIC fast are two facts: (1) When we take a binary number and multiply it by 2^n, we shift the binary point n places to the right, and when we divide by 2^n, we shift the binary point n places to the left. (2) The operations on a computer that are cheapest and fastest to perform are (A) addition and subtraction, (B) comparing numbers to see which is larger or smaller, (C) storing and retrieving numbers from memory, and (D) shifting the binary point.

Addition and subtraction are very fast, but not multiplication or division. The machine does compute multiplications and divisions by powers of 2 very quickly, however, by just shifting the binary point. CORDIC exploits this, and thus uses only operations (A)-(D) to evaluate sines and cosines. The heart of the algorithm is a series of cleverly performed rotations.
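The shift-for-powers-of-2 trick is easy to demonstrate. Here is a minimal Python sketch (illustrative only; the variable names are mine, and a calculator works in fixed point rather than with Python integers):

```python
# Multiplying or dividing by 2**n is a binary-point shift, not a true
# multiplication: the hardware just moves the point n places.
x = 0b101101                  # the binary number 101101 (decimal 45)

# Shift left 3 places: the same as multiplying by 2**3 = 8.
assert x << 3 == x * 8        # 101101000 in binary

# Shift right 2 places: the same as dividing by 2**2 = 4 (floor division).
assert x >> 2 == x // 4       # 1011 in binary
```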

VOL. 40, NO. 2, MARCH 2009 THE COLLEGE MATHEMATICS JOURNAL 87



Rotations
In high school, students learn to rotate a point P0 = (x0, y0) through an angle θ. The rotated point has coordinates P1 = (x, y), where

x = x0 cos θ − y0 sin θ,
y = x0 sin θ + y0 cos θ,

which can be written in matrix form as P1 = Rθ P0, where

Rθ = [ cos θ   −sin θ ]
     [ sin θ    cos θ ] .        (1)

Now suppose we rotate the point (1, 0) by θ radians. Then the rotated point P1 = (x, y) has coordinates (cos θ, sin θ). However, we can't use (1) to rotate by the angle θ unless we already know sin θ and cos θ. But finding them is our goal! So what are we to do?
CORDIC breaks down the rotation by θ radians into rotations by smaller angles θ0, θ1, θ2, θ3, . . . . These are cleverly chosen so that the rotations can be calculated using angles hard-wired into the computer. It then generates points P1, P2, P3, etc., on the unit circle which approach the point (cos θ, sin θ). (See Figure 1.)

Figure 1. Successive points P0 = (1, 0), P1, P2, P3 on the unit circle, approaching (cos θ, sin θ).

Hard-wiring the machine


Which angles does CORDIC hard-wire? They are the angles whose tangents are of the form 1/2^n, where n = 0, 1, 2, 3, . . . .
Let us list these θi's:

θ0 = arctan(1)    = π/4 ≈ .785398,
θ1 = arctan(1/2)  ≈ .463648 ≈ 1/2,
θ2 = arctan(1/4)  ≈ .244979 ≈ 1/4,

88 © THE MATHEMATICAL ASSOCIATION OF AMERICA

θ3 = arctan(1/8)  ≈ .124355 ≈ 1/8,
θ4 = arctan(1/16) ≈ .062419 ≈ 1/16.

We observe that when θ is small, tan θ ≈ θ. Thus, we don’t even need to store all the
values θi , since eventually tan θ and θ are the same for practical purposes.
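The hard-wired table is easy to reproduce. A short Python sketch (the variable name is mine, not the paper's):

```python
import math

# The hard-wired CORDIC angles: theta_i = arctan(1/2**i).
thetas = [math.atan(2 ** -i) for i in range(40)]

print(thetas[0])        # pi/4 = 0.7853981...
print(thetas[1])        # 0.4636476...

# For large i, arctan(1/2**i) and 1/2**i agree to machine precision,
# so the later entries need not be stored separately.
print(thetas[30], 2 ** -30)
```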

A numerical example
We go through an example to illustrate how CORDIC works. We set as our goal the angle θ = 1 radian, and try to build it from the θi's we have chosen. We start with z0 = θ0 = .785398. (The letters zi will keep a running total of our rotation.) This is too small, so the algorithm now adds θ1, attempting to bring the total up to 1 radian. We get

z1 = θ0 + θ1 ≈ .785398 + .463648 = 1.249046.

This is more than 1 radian, so the algorithm backs up. It subtracts θ2 , to get

z2 = θ0 + θ1 − θ2 ≈ 1.249046 − .244979 = 1.004067.

This is still more than 1, so the algorithm backs up again getting

z3 = θ0 + θ1 − θ2 − θ3 ≈ 1.004067 − .124355 = 0.879712.

This is less than 1, so we add θ4 = .062419, to get

z4 = θ0 + θ1 − θ2 − θ3 + θ4 ≈ .879712 + .062419 = 0.942131,

and so on. With 40 terms, which is what many machines use,

z39 = θ0 + θ1 − θ2 − θ3 + θ4 + θ5 + θ6 + θ7 + θ8 + · · · − θ39.    (2)

(Notice the −θ39.) At each step, the machine checks to see if the running total zi is more or less than the angle of 1 radian, and either adds or subtracts the next θ accordingly.
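This add-or-subtract bookkeeping is only a few lines of code. A Python sketch of the decision loop (names are mine):

```python
import math

thetas = [math.atan(2 ** -i) for i in range(40)]
target = 1.0                  # we are building the angle of 1 radian

z = thetas[0]                 # z0 = theta0: we always start by adding
for i in range(1, 40):
    if z < target:            # running total too small: add theta_i
        z += thetas[i]
    else:                     # running total too big: back up by theta_i
        z -= thetas[i]

print(z)                      # z39, within 1/2**39 of 1 radian
```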
Simultaneously, the algorithm performs the corresponding rotations. So, following
(2) we initially rotate (1, 0) by θ0 , then by θ1 , then rotate by −θ2 , then rotate by −θ3
and so on.
After 40 rotations, we get

P40 = R−θ39 · · · R−θ2 · Rθ1 · Rθ0 · P0

    [ cos(−θ39)   −sin(−θ39) ]       [ cos θ1   −sin θ1 ] [ cos θ0   −sin θ0 ] [ 1 ]
  = [ sin(−θ39)    cos(−θ39) ] · · · [ sin θ1    cos θ1 ] [ sin θ0    cos θ0 ] [ 0 ] .    (3)

Forty rotations may seem like a lot of work. We will see.


How good is our approximation?


For any iterative procedure to be useful, we must have an idea of how accurate our
answers are. The following theorem tells us.

Theorem 1. Using the method described in this paper, after n + 1 iterations, the approximations obtained for sin θ and cos θ are within 1/2^n of the true values, provided −π/2 ≤ θ ≤ π/2. Thus, the CORDIC method converges to (cos θ, sin θ).

A detailed proof is given in [5] and uses a combination of induction, trig identities, and the Mean Value Theorem. If the inequality −π/2 ≤ θ ≤ π/2 fails, then we subtract a multiple of 2π to yield an angle between 0 and 2π and then, using trig identities, replace θ with an angle between −π/2 and π/2.
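One way to sketch this range reduction in Python (using the IEEE remainder to land in [−π, π] directly, rather than the 0-to-2π intermediate step described above; the helper name is mine):

```python
import math

def reduce_angle(theta):
    """Return (t, sign) with -pi/2 <= t <= pi/2 such that
    cos(theta) = sign*cos(t) and sin(theta) = sign*sin(t)."""
    t = math.remainder(theta, 2 * math.pi)   # now in [-pi, pi]
    if t > math.pi / 2:
        return t - math.pi, -1               # cos and sin both flip sign
    if t < -math.pi / 2:
        return t + math.pi, -1
    return t, 1

t, sign = reduce_angle(5.0)
# CORDIC can now be run on t; multiply both of its outputs by sign.
```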
Theorem 1 guarantees that the error we make in computing sin 1 and cos 1 by computing P40 is not more than 1/2^39 ≈ 1.8190 × 10^−12. Thus, we have at least 10 places of decimal accuracy!

Taming equation (3)


Equation (3) certainly looks daunting. You must be thinking “This method is far more
difficult than using Taylor polynomials.” And so it is, unless we can multiply those
matrices rapidly. In this section we show how to accomplish this.
Each matrix in the above product (3) is of the form

Rθi = [ cos θi   −sin θi ]
      [ sin θi    cos θi ] ,

which can be rewritten as

Rθi = cos θi · [ 1        −tan θi ]
               [ tan θi    1      ] ,      (4)

and now we see how the tangent function comes into play. Since by design tan θi = 1/2^i, (4) becomes

Rθi = cos θi · [ 1       −1/2^i ]
               [ 1/2^i    1     ] .      (5)

(Note: R−θi is of the same form with the off-diagonal entries switched.) Finally, since for −π/2 ≤ θ ≤ π/2 we have cos θ = 1/√(1 + tan^2 θ), so that cos θi = 1/√(1 + (2^−i)^2), (5) can be written as

Rθi = 1/√(1 + (2^−i)^2) · [ 1       −1/2^i ]
                          [ 1/2^i    1     ] .

Now substituting into (3), we get

P40 = 1/√(1 + (2^−39)^2) · [ 1         1/2^39 ] · · · 1/√(1 + (2^−1)^2) · [ 1     −1/2 ] · 1/√(1 + (2^0)^2) · [ 1   −1 ] [ 1 ]
                           [ −1/2^39   1      ]                           [ 1/2    1   ]                      [ 1    1 ] [ 0 ] .


Pulling the constants 1/√(1 + (2^−i)^2) to the front of this matrix product, we get

P40 = K · [ 1         1/2^39 ] · · · [ 1     −1/2 ] [ 1   −1 ] [ 1 ]
          [ −1/2^39   1      ]       [ 1/2    1   ] [ 1    1 ] [ 0 ] ,

where

K = 1/√(1 + (2^−39)^2) · 1/√(1 + (2^−38)^2) · · · 1/√(1 + (2^−1)^2) · 1/√(1 + (2^0)^2),

and we are on our way. K is a constant! Once we compute it, we can, and do, hard-wire it into the machine. It turns out that K ≈ .607252935. So finally (3) becomes

P40 ≈ .607252935 · [ 1         1/2^39 ] · · · [ 1     −1/2 ] [ 1   −1 ] [ 1 ]
                   [ −1/2^39   1      ]       [ 1/2    1   ] [ 1    1 ] [ 0 ] .    (6)

Observe that all entries in (6) are hard-wired into the machine. No computation is
needed to obtain them. Then, as we shall see, multiplying the matrices is very fast.
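The hard-wired constant K is easy to reproduce. A Python sketch:

```python
import math

# K is the product of the 40 normalizing constants 1/sqrt(1 + (2**-i)**2).
K = 1.0
for i in range(40):
    K /= math.sqrt(1 + (2 ** -i) ** 2)

print(K)   # 0.607252935...
```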

The speed with which the matrices are multiplied


We observe that

[ 1       −1/2^n ] [ a ]   [ a − b/2^n ]
[ 1/2^n    1     ] [ b ] = [ a/2^n + b ] .    (7)

Now b/2^n is computed by shifting the binary point in b. So our first entry in the product, a − b/2^n, uses a shift and a subtraction (two fast operations). Our second entry, a/2^n + b, uses a shift and an addition (another 2 fast operations). Thus, when we multiply the two matrices in (7) we do 4 very fast operations. We then store the two results for a total of 6 operations, all of which are among the computer's fastest.
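Equation (7) is the whole inner loop: two shifts, an addition, and a subtraction. A Python sketch of one such multiplication (floating-point division stands in for the machine's fixed-point shift; the function name is mine):

```python
def step(a, b, n):
    # Multiply the matrix [[1, -1/2**n], [1/2**n, 1]] into the column (a, b).
    # In hardware, b / 2**n and a / 2**n are binary-point shifts.
    return a - b / 2 ** n, a / 2 ** n + b

print(step(1.0, 0.0, 0))   # the first rotation applied to (1, 0): (1.0, 1.0)
```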
Now consider all of the operations in (6). We must execute 40 matrix multiplications, each of which takes only 6 fast operations. The bottom line: this requires only 40 × 6 = 240 fast operations. (After we multiply all of the matrices in (6) we must not forget to multiply our final result by K ≈ .607252935.)
If, for our example with θ = 1 radian, we multiply just 11 matrices and then multiply by K, we obtain

(cos 1, sin 1) ≈ (0.5397, 0.84185).

According to Theorem 1 we should be within 1/2^10 = .00097656 of the correct results, and in fact,

cos 1 ≈ 0.5403 and sin 1 ≈ 0.84147.    (8)

Of course, if we multiply all 40 matrices together and then multiply by K, the approxi-
mations we get for sin θ and cos θ will be accurate to more than 10 places, as we have
pointed out.
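Putting the pieces together, the whole computation fits in a dozen lines. A Python sketch of the method described above (floating-point division stands in for the hardware shifts; the names are mine):

```python
import math

THETAS = [math.atan(2 ** -i) for i in range(40)]   # hard-wired angles
K = 1.0
for i in range(40):
    K /= math.sqrt(1 + (2 ** -i) ** 2)             # hard-wired constant

def cordic(theta):
    """Approximate (cos theta, sin theta) for -pi/2 <= theta <= pi/2."""
    x, y, z = 1.0, 0.0, 0.0
    for i in range(40):
        d = 1 if z < theta else -1                 # add or subtract theta_i
        # Rotate by d*theta_i with two shifts, an add, and a subtract, as in (7).
        x, y = x - d * y / 2 ** i, d * x / 2 ** i + y
        z += d * THETAS[i]
    return K * x, K * y                            # multiply by K at the end

c, s = cordic(1.0)
print(c, s)   # close to cos 1 = 0.5403... and sin 1 = 0.8414...
```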


Final remarks
CORDIC is a robust and practical algorithm, preferred by calculator manufacturers because it requires relatively low-level electronics. It is perfect for calculators, and good for the company's bottom line. What is also amazing about CORDIC is that by just slightly modifying the method, we get all of the usual calculator functions. The details of this are technical and some are even proprietary. But one cannot help but be astounded by the fact that essentially one algorithm, and its variations, can be used to compute sines, cosines, tangents, inverse trig functions, hyperbolic functions, logarithms, exponentials, roots, and can even do multiplications and divisions. It is just mind-boggling!

References

1. R. Andraka, A survey of CORDIC algorithms for FPGA based computers, FPGA '98: Proceedings of the 1998 ACM/SIGDA Sixth International Symposium on Field Programmable Gate Arrays, Monterey, CA, 1998, 191–200.
2. N. Eklund, CORDIC: Elementary function computation using recursive sequences, this JOURNAL 32 (2001) 330–333.
3. ———, CORDIC: Elementary function computation using recursive sequences, Electronic Proceedings of the ICTCM 11 (1998) C027.
4. C. W. Schelin, Calculator function approximation, Amer. Math. Monthly 90 (1983) 317–325.
5. J. Underwood and B. Edwards, How do calculators calculate trigonometric functions?, Educational Resources Information Center (ERIC) document ED461519. Also available at http://www.math.ufl.edu/~haven/papers/paper.pdf.

On Achieving the Petaflop

(Sung to the tune of the Ode to Joy in Beethoven's Choral Symphony)

Hectic, hectic apoplectic
Life in the Computer Age
Things start fast
Then go yet faster
Soon we'll reach the final stage
Mega-tasks in nano-seconds
Glad we've achieved the Petaflop
If we are forced to go yet faster
Things will come to a
Dead
Stop

—David Seppala-Holtzman (dholtzman@sjcny.edu)

[David Seppala-Holtzman is a professor of mathematics at St. Joseph's College where he has taught for the past 27 years. His mathematical interests are many and varied, his love for the subject, in general, being the only constant.]

