
An Algorithm for the Determination of the Polynomial of Best Minimax Approximation to a Function Defined on a Finite Point Set*

PHILIP C. CURTIS, JR. AND WERNER L. FRANK

Space Technology Laboratories, Los Angeles, California

Statement of Problem
Given a function f(x) defined on a finite point set S = {x_1, ..., x_N}, it is desired to approximate this function by the unique polynomial p(x), of degree less than or equal to n, which satisfies the minimax or Chebyshev criterion

    min_{q(x)} [ max_{i=1,...,N} |f(x_i) - q(x_i)| ] = max_{i=1,...,N} |f(x_i) - p(x_i)|        (1)

where q(x) ranges over all polynomials of degree less than or equal to n.


Although the properties which determine p(x) uniquely are well known, we state them here for future reference. In what follows u = (u_0, u_1, ..., u_{n+1}) will always denote a vector whose coordinates are points of S satisfying u_0 < u_1 < ... < u_{n+1}. We shall denote the class of such vectors by T.
For u in T, consider the following equations:

    a_0 + a_1 u_k + a_2 u_k² + ... + a_n u_k^n + (-1)^k δ = f(u_k),   k = 0, 1, ..., n+1        (2)

    |δ| = max_{i=1,...,N} |f(x_i) - (a_0 + a_1 x_i + ... + a_n x_i^n)|        (3)

The system (2) may always be solved for the unknowns a_0, ..., a_n, δ. Moreover, there exists u in T such that the solution of (2) satisfies (3), and if q(x) is the polynomial with coefficients a_0, ..., a_n, then q(x) is the polynomial p(x) of best approximation to f(x) in the sense of (1). Conversely, if p(x) is a polynomial satisfying (1), then there is a vector u in T such that the coefficients of p(x) satisfy (2) and (3). The coefficients of p(x) are uniquely determined by this property, although the vector u is not necessarily unique. For proofs of these facts see [3].
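In modern terms, system (2) is a small (n+2) × (n+2) linear solve. The following sketch (the helper name solve_system2 is ours, not the paper's; plain Gaussian elimination stands in for whatever solver one prefers) recovers the classical equioscillation values for a linear fit to x² on the points 0, 1/2, 1:

```python
def solve_system2(u, f):
    """Solve a_0 + a_1*u_k + ... + a_n*u_k^n + (-1)^k * delta = f(u_k),
    k = 0..n+1, for the n+1 coefficients and delta."""
    m = len(u)                 # m = n + 2 equations and unknowns
    n = m - 2
    A = [[uk ** j for j in range(n + 1)] + [(-1) ** k]
         for k, uk in enumerate(u)]
    b = [f(uk) for uk in u]
    for col in range(m):       # Gaussian elimination with partial pivoting
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            t = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= t * A[col][c]
            b[r] -= t * b[col]
    x = [0.0] * m
    for r in range(m - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, m))) / A[r][r]
    return x[:-1], x[-1]       # (a_0..a_n, delta)
```

For f(x) = x² and u = (0, 1/2, 1) this returns a_0 = -1/8, a_1 = 1, δ = 1/8, the known best linear approximation x - 1/8 with error 1/8.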
A practical method for determining a set of n+2 points with the desired property is due to Remes [5]. This was extended to more general classes of functions by Novodvorskii and Pinsker [4], and numerical applications have been described by Shenitzer [6]. This algorithm is as follows:
1) Select an arbitrary vector u from T.
2) Solve the linear system (2). If the coefficients a_0, a_1, ..., a_n and the error δ satisfy (3), then the iteration ceases. If not, let p(x) = sum_{j=0}^{n} a_j x^j and proceed to step 3.
3) A new vector u' = (u_0', u_1', ..., u_{n+1}') from T is chosen such that for each
* Received August, 1958.

k = 0, 1, ..., n+1,

    |f(u_k') - p(u_k')| ≥ |δ|,

    sgn [f(u_k') - p(u_k')] = (-1) sgn [f(u_{k-1}') - p(u_{k-1}')],   k = 1, ..., n+1,

and

    max_{k=0,1,...,n+1} |f(u_k') - p(u_k')| = max_{i=1,...,N} |f(x_i) - p(x_i)|.

Such a selection is always possible. Step (2) is now repeated.


It has been shown in [4] that the magnitudes of the successive δ's chosen by this algorithm are monotonically increasing and converge in a finite number of steps to the best approximation. Furthermore, the corresponding sequence of polynomials converges to the desired polynomial p(x).
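Steps 1-3 above can be sketched as a single-point-exchange variant of the Remes iteration on a finite grid. This is a simplified rendering, not the authors' program; the names remes and _solve are hypothetical, and the exchange always swaps in the single point of largest error while keeping the residual signs alternating:

```python
def _solve(u, fv):
    """Solve system (2) on the points u with values fv by Gaussian elimination."""
    m = len(u)
    n = m - 2
    A = [[uk ** j for j in range(n + 1)] + [(-1) ** k] for k, uk in enumerate(u)]
    b = list(fv)
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            t = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= t * A[col][c]
            b[r] -= t * b[col]
    x = [0.0] * m
    for r in range(m - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, m))) / A[r][r]
    return x[:-1], x[-1]

def remes(S, f, n, tol=1e-12):
    """One-point Remes exchange on the finite set S."""
    S = sorted(S)
    u = [S[round(i * (len(S) - 1) / (n + 1))] for i in range(n + 2)]
    while True:
        a, d = _solve(u, [f(x) for x in u])
        p = lambda x: sum(c * x ** j for j, c in enumerate(a))
        xs = max(S, key=lambda x: abs(f(x) - p(x)))
        if abs(f(xs) - p(xs)) <= abs(d) + tol:
            return a, d            # criterion (3) holds: done
        e_new = f(xs) - p(xs)
        e_u = [f(uk) - p(uk) for uk in u]
        # exchange one point of u for xs, keeping the signs alternating
        if xs < u[0]:
            if e_new * e_u[0] > 0:
                u[0] = xs
            else:
                u = [xs] + u[:-1]
        elif xs > u[-1]:
            if e_new * e_u[-1] > 0:
                u[-1] = xs
            else:
                u = u[1:] + [xs]
        else:
            for k in range(len(u) - 1):
                if u[k] < xs < u[k + 1]:
                    if e_new * e_u[k] > 0:
                        u[k] = xs
                    else:
                        u[k + 1] = xs
                    break
```

On a 9-point grid over [-1, 1] with f(x) = x³ and n = 1, the iteration terminates at the Chebyshev answer p(x) = (3/4)x with |δ| = 1/4.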
We propose an alternate procedure. Let u be a vector from T, and let a_0, a_1, ..., a_n, δ be the corresponding solution of system (2). The polynomial q(x) = sum_{j=0}^{n} a_j x^j will be called the polynomial corresponding to u. Applying Cramer's rule to the system (2) for the unknown δ, one gets the following formula:

    δ = [ sum_{k=0}^{n+1} (-1)^k f(u_k) B_k(u) ] / [ sum_{k=0}^{n+1} B_k(u) ]

where

    B_k(u) = prod_{l=0, l≠k}^{n+1} |u_l - u_k|^{-1}.        (4)
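Formula (4) lends itself directly to computation; a minimal sketch (the function name delta is ours):

```python
def delta(u, fv):
    """delta via formula (4): an alternating, B_k-weighted mean of the f(u_k)."""
    B = []
    for k, uk in enumerate(u):
        prod = 1.0
        for l, ul in enumerate(u):
            if l != k:
                prod *= abs(ul - uk)   # B_k(u) = 1 / prod_{l != k} |u_l - u_k|
        B.append(1.0 / prod)
    num = sum((-1) ** k * fv[k] * B[k] for k in range(len(u)))
    return num / sum(B)
```

For f(x) = x² on u = (0, 1/2, 1) this gives δ = 1/8, matching the direct solution of system (2).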

We note that, for a fixed function f, δ is a function of n+2 variables. The following classical result of de la Vallée Poussin ([3], theorems 56 and 63) shows that if a vector v is picked so that |δ| is a maximum, then the corresponding polynomial is the polynomial of best approximation. For completeness we include the proof.
THEOREM 1. Let v be a vector from T, and p(x) the corresponding polynomial. p(x) is the polynomial of best approximation to f in the sense of (1) if and only if |δ(v)| = max_{u∈T} |δ(u)|.
PROOF. Let p(x) be the polynomial of best approximation. Choose a vector v = (v_0, v_1, ..., v_{n+1}) from T for which p(x) is the corresponding polynomial. Suppose there is a vector u = (u_0, u_1, ..., u_{n+1}) in T for which |δ(u)| > |δ(v)|. If q(x) is the corresponding polynomial for that vector, then

    f(u_k) - q(u_k) = (-1)^k δ(u)

and

    |f(u_k) - q(u_k)| > |f(v_k) - p(v_k)| = max_{i=1,...,N} |f(x_i) - p(x_i)| ≥ |f(u_k) - p(u_k)|.

It now follows easily that [f(x) - q(x)] - [f(x) - p(x)] = p(x) - q(x) has n+2 changes of sign in the interval [x_1, x_N], which is a contradiction.
Conversely, if for some vector v, |δ(v)| = max_{u∈T} |δ(u)| and q(x) is the corresponding polynomial, then it must follow that

    |δ(v)| = max_{i=1,...,N} |f(x_i) - q(x_i)|.

For if not, choose a point x' ∈ S such that

    |f(x') - q(x')| = max_{i=1,...,N} |f(x_i) - q(x_i)|.

As in part 3 of the Remes iteration one may form a new vector v' in T by replacing one coordinate of v by x' in such a way that

    sgn (f(v_k') - q(v_k')) = (-1) sgn (f(v_{k-1}') - q(v_{k-1}')),   k = 1, ..., n+1,

and either

    |f(v_k') - q(v_k')| = |δ(v)|

or

    |f(v_k') - q(v_k')| = max_{i=1,...,N} |f(x_i) - q(x_i)|,   k = 0, 1, ..., n+1.

It now follows easily that

    |δ(v')| = | sum_{k=0}^{n+1} (-1)^k f(v_k') B_k(v') | / sum_{k=0}^{n+1} B_k(v')

            = [ sum_{k=0}^{n+1} |f(v_k') - q(v_k')| B_k(v') ] / sum_{k=0}^{n+1} B_k(v')

            > min_{k=0,1,...,n+1} |f(v_k') - q(v_k')| = |δ(v)|.
This is a contradiction, and hence the theorem follows.
Before describing the algorithm for maximizing |δ|, we introduce the following notation. For an arbitrary u in T, if we write δ(u_0, u_1, ..., u_{n+1}) = δ(u), then we define for j = 1, ..., n,

    δ_j(x) = δ(u_0, ..., u_{j-1}, x, u_{j+1}, ..., u_{n+1})

where

    u_{j-1} < x < u_{j+1} and x ∈ S.

For j = 0, n+1,

    δ_0(x) = δ(x, u_1, ..., u_{n+1})      if x_1 ≤ x < u_1, x ∈ S
           = δ(u_1, ..., u_{n+1}, x)      if u_{n+1} < x ≤ x_N, x ∈ S;

    δ_{n+1}(x) = δ(u_0, ..., u_n, x)      if u_n < x ≤ x_N, x ∈ S
               = δ(x, u_0, ..., u_n)      if x_1 ≤ x < u_0, x ∈ S.



The algorithm now proceeds as follows.
1) Select an arbitrary initial vector u from T.
2) Maximize |δ_1(x)| over its domain of definition. We choose δ_1(x) rather than δ_0(x), since initially one usually has u_0 = x_1, u_{n+1} = x_N. Let x' be a point for which |δ_1(x)| is a maximum. If |δ_1(x')| > |δ(u)|, replace u_1 by x', forming a new vector u'. If not, let u' = u. Using the new vector we now maximize |δ_2(x)| over its domain of definition and replace u_2' by the point x'' for which |δ_2(x)| achieves its maximum, provided that |δ_2(x'')| > |δ(u')|.
We now proceed inductively. Special attention is necessary for δ_{n+1}(x) and δ_0(x). Let u be the vector obtained by maximizing |δ_n(x)|. If x' is a point for which |δ_{n+1}(x)| is a maximum and |δ_{n+1}(x')| > |δ(u)|, then u' is formed in the following way. If x_N ≥ x' > u_n, then u_i' = u_i, i = 0, 1, ..., n, and u_{n+1}' = x'; if x_1 ≤ x' < u_0, then u_0' = x' and u_i' = u_{i-1}, i = 1, ..., n+1. In the latter case the next function maximized is |δ_2(x)|. If the first case occurs, then |δ_0(x)| is maximized. Let x'' be a point for which |δ_0(x)| is a maximum and |δ_0(x'')| > |δ(u')|. If x_1 ≤ x'' < u_1', then u_0'' = x'' and u_i'' = u_i', i = 1, 2, ..., n+1. If x_N ≥ x'' > u_{n+1}', then u_i'' = u_{i+1}', i = 0, 1, ..., n, and u_{n+1}'' = x''. If the first possibility occurs, then the next function maximized is |δ_1(x)|; if the second occurs, |δ_0(x)| is maximized. Of course, if |δ_{n+1}(x')| ≤ |δ(u)| or |δ_0(x'')| ≤ |δ(u')|, then u' = u or u'' = u', respectively. If n+2 consecutive maximizations produce no change in the vector u, then this is taken to be the desired vector for which |δ| is a maximum, and the coefficients of the polynomial of best approximation are then determined from (2).
We now prove that if the vector u is chosen in this way, |δ(u)| is actually a maximum. For if not, let q(x) be the associated polynomial. There is a point x' ∈ S for which

    |f(x') - q(x')| = max_{i=1,...,N} |f(x_i) - q(x_i)| > |δ(u)|.

Form u' by replacing one coordinate of u by x' in such a way that f(x) - q(x) still oscillates on u' (i.e.,

    sgn [f(u_j') - q(u_j')] = (-1) sgn [f(u_{j-1}') - q(u_{j-1}')],   j = 1, ..., n+1).

Then by the reasoning at the end of the proof of theorem 1,

    |δ(u')| > |δ(u)|.

On the other hand, suppose for some j, 0 ≤ j ≤ n, u_j < x' < u_{j+1}; then x' replaces u_j or u_{j+1}. But this contradicts the fact that |δ_j(x')| ≤ |δ(u)| and |δ_{j+1}(x')| ≤ |δ(u)|. If u_{n+1} < x' ≤ x_N, then u' is either (u_0, u_1, ..., u_n, x') or (u_1, u_2, ..., u_{n+1}, x'). But

    |δ(u_0, u_1, ..., u_n, x')| = |δ_{n+1}(x')| ≤ |δ(u)|

and

    |δ(u_1, ..., u_{n+1}, x')| = |δ_0(x')| ≤ |δ(u)|.

A similar verification can be made if x_1 ≤ x' < u_0. Hence |δ(u')| ≤ |δ(u)|, which is a contradiction.

In the case that the finite point set S is replaced by an interval [a, b], a maxi-
mization procedure and proof of convergence has been announced by Bratten
[1]. For applications to more generM families of approximating functions see [2].

Numerical Results
In order to test the second algorithm a subroutine was prepared for the UNIVAC Scientific 1103A. The computation is performed in floating point arithmetic, where the number representation consists of an 8 binary bit exponent and a 27 binary bit fractional part. The input is the tabular data (x_i, f(x_i)) and the degree of the desired polynomial approximation. Normalization of the independent variable is made to the interval [-1, 1] by the routine.
The program gives the following output:
A. Basic Output
1. Coefficients of the normalized approximating polynomial over [-1, 1]
2. δ, as obtained from equation (1)
3. Transformation which carries [x_1, x_N] to [-1, 1]
B. Optional Output
1. Identification of the successive vectors u and the associated δ for each step of the iteration
2. Coefficients of the polynomial over [x_1, x_N]
3. Table of x_i, f(x_i), p(x_i), f(x_i) - p(x_i), where p(x) is computed over [-1, 1] and/or [x_1, x_N]
4. Additional table of values z_i, p(z_i), where the z_i are any desired arguments in [x_1, x_N]. The computation of p(z_i) is performed by first normalizing z_i to [-1, 1] and using the normalized polynomial.
A number of features of the code deserve special comment:
1. To start the iteration off it is desirable to pick a vector u for which the magnitude of the corresponding error δ is as large as possible. An a priori best choice for the coordinates of such a vector are the n+2 abscissae associated with the extrema of the Chebyshev polynomial of degree n+1 over the interval [-1, 1], i.e.,

    ū_k = -cos(kπ/(n+1)),   k = 0, 1, ..., n+1.

If S does not contain these points one selects close approximations to them. The reason why this choice is "best" can be seen if one assumes that f(x) is a polynomial of degree n+1. The error function E(x), associated with the minimax solution of order less than or equal to n, is proportional to the Chebyshev polynomial of degree n+1, which attains its extrema at the points defined above.
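Snapping the Chebyshev abscissae to the nearest available points of S can be sketched as follows (chebyshev_start is a hypothetical name; the paper does not specify how ties or collisions are resolved, so here each grid point is used at most once):

```python
import math

def chebyshev_start(S, n):
    """Initial vector: the extrema -cos(k*pi/(n+1)), k = 0..n+1, of the
    Chebyshev polynomial of degree n+1, each snapped to the nearest
    still-unused point of S."""
    targets = [-math.cos(k * math.pi / (n + 1)) for k in range(n + 2)]
    u, used = [], set()
    for t in targets:
        x = min((s for s in S if s not in used), key=lambda s: abs(s - t))
        used.add(x)
        u.append(x)
    return sorted(u)
```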
2. To avoid difficulties which numerical round-off might contribute, it is practical in the maximizing process to compare δ_j(x) to (1 + ε)δ(u) for some small ε > 0. This has the desired effect of eliminating unnecessary iterations which may be required due to noise generated in computing δ_j(x). It has been found that a value of ε = 10⁻⁵ is satisfactory.
3. A substantial reduction in the amount of computation is realized by avoiding the complete computation of the B_k's of formula (4) for each new set of n+2 points. Rather, for k ≠ j one obtains the B_k', when going from δ_j(x) to δ_j(x'), by forming

    B_k' = B_k |x - u_k| / |x' - u_k|.

In the case that k = j, the original formula for B_k is used. This procedure has, however, the disadvantage of introducing some round-off errors due to the loss of significance which results when x or x' is close to u_k.
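The update rule can be verified numerically; in the sketch below (helper name B is ours) the full recomputation appears only to confirm the incremental formula, which changes exactly one factor of each B_k:

```python
def B(u):
    """The B_k(u) of formula (4)."""
    out = []
    for k, uk in enumerate(u):
        prod = 1.0
        for l, ul in enumerate(u):
            if l != k:
                prod *= abs(ul - uk)
        out.append(1.0 / prod)
    return out

u = [0.0, 0.4, 1.0]
j, x_old, x_new = 1, 0.4, 0.6      # coordinate j moves from x to x'
v = list(u)
v[j] = x_new
full = B(v)                         # complete recomputation
for k in range(len(u)):
    if k != j:                      # incremental update: one factor changes
        incremental = B(u)[k] * abs(x_old - u[k]) / abs(x_new - u[k])
        assert abs(incremental - full[k]) < 1e-12
```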
Tables 1 and 2 exhibit numerical results in making minimax approximations to a number of functions. We observe the following:
1. Although the magnitude of δ decreases monotonically as the degree of the approximating polynomial increases, this property is violated when δ becomes of the order of a few units in the 8th significant digit of the input data f(x). In this case, due to the effect of round-off error, one no longer achieves proper convergence and |δ| may begin to increase, and/or the extremes of the error function E(x) may be greater than |δ|. Even though a minimax solution is therefore not obtained, it has been observed, for the approximations considered, that the fit obtained is at worst only a few units off in the 8th significant digit. While the best approximating polynomial is not determined, one still has obtained a reasonable fit.
Table 1, which is a study in approximating the arcsine function for the interval [0, √2/2], illustrates these remarks. The horizontal lines in the three columns

TABLE 1
Approximation of Arcsin x over [0, √2/2]
(|δ| for fits to Arcsin x and to Arcsin x - x - x³/6; |δ| and max E(x_i) for the odd-function device of item 2 below)

Degree    Arcsin x      Arcsin x - x - x³/6    Device |δ|    Device max E(x_i)
  1       .17 × 10⁻¹                           .41 × 10⁻²    .29 × 10⁻²
  2       .39 × 10⁻²                           .41 × 10⁻³    .29 × 10⁻³
  3       .63 × 10⁻³    .63 × 10⁻³             .48 × 10⁻⁴    .34 × 10⁻⁴
  4       .14 × 10⁻³    .14 × 10⁻³             .60 × 10⁻⁵    .42 × 10⁻⁵
  5       .31 × 10⁻⁴    .31 × 10⁻⁴             .81 × 10⁻⁶    .56 × 10⁻⁶
  6       .74 × 10⁻⁵    .74 × 10⁻⁵             .13 × 10⁻⁶    .89 × 10⁻⁷
  7       .18 × 10⁻⁵    .18 × 10⁻⁵             .60 × 10⁻⁷    .37 × 10⁻⁷
  8       .45 × 10⁻⁶    .44 × 10⁻⁶             .52 × 10⁻⁷    .71 × 10⁻⁷
  9       .11 × 10⁻⁶    .11 × 10⁻⁶
 10       .42 × 10⁻⁷    .30 × 10⁻⁷
 11                     .85 × 10⁻⁸
 12                     .46 × 10⁻⁸

TABLE 2
|δ| for Several Functions

Degree of
Approximating
Polynomial      log x          2^x            |x⁵|           Acceleration Data

  1          .30 × 10⁻¹     .43 × 10⁻¹     .5              104.37
  2          .34 × 10⁻²     .25 × 10⁻²     .16             8.87
  3          .44 × 10⁻³     .11 × 10⁻³     .16             1.40
  4          .61 × 10⁻⁴     .37 × 10⁻⁵     .15 × 10⁻¹      .63
  5          .87 × 10⁻⁵     .15 × 10⁻⁶     .15 × 10⁻¹      .62
  6          .13 × 10⁻⁵     .12 × 10⁻⁷                     .57
  7          .20 × 10⁻⁶     .11 × 10⁻⁸     .12 × 10⁻²      .54
  8          .37 × 10⁻⁷                    .25 × 10⁻³      .54
  9          .24 × 10⁻⁷     .11 × 10⁻⁸     .25 × 10⁻³      .52
 10                                        .78 × 10⁻⁴      .49
 12                                        .31 × 10⁻⁴
 14                                        .14 × 10⁻⁴
 16                                        .71 × 10⁻⁵
 18                                        .39 × 10⁻⁵

Range on x   1 ≤ x ≤ 2      0 ≤ x ≤ 1      -1 ≤ x ≤ 1      12 ≤ x ≤ 18
Range on y   0 ≤ y ≤ .69    1 ≤ y ≤ 2      0 ≤ y ≤ 1       6 × 10¹ ≤ y ≤ 10²

of |δ| show the point at which proper convergence ceases due to numerical instability.
Such round-off difficulties can be partially controlled by performing simple changes in the function prior to making the approximation. In the case of arcsin x the first two terms of the Taylor series are removed and a fit is made to the function

    g(x) = Arcsin x - x - x³/6

as indicated in column three. The removal of a major component of the function delays the effect of round-off error in this case for two more stages. The desired fit to arcsin x is then recovered by simply adding x + x³/6 to the polynomial which approximates g(x). In the problem cited here this procedure had the desired effect of reducing the maximum deviation of the error function from 8 to 3 units in the 8th significant figure for the case n = 10.
A second device, which also has the effect of removing error due to round-off, is to approximate the input function f(x) by a polynomial p(x) and then perform a fit of the error function E(x) = f(x) - p(x). While the second polynomial should be zero in theory, one actually obtains the fit p_1(x) because of round-off errors. An improved fit to f(x) is then obtained by taking the polynomial p(x) + p_1(x).
2. Knowledge of the properties of a function can materially aid in finding more efficient fits. For example, consider an even function f(x) defined over [-a, a]. One can apply the minimax theory and obtain the function of best approximation to f(x), which in theory is an even polynomial of degree k. In practice the actual fit is not exactly even. It is possible, however, to fit f(√ω) over [0, a²] by a polynomial p(ω) of degree k and then make the change of variable ω = x² in order to obtain a minimax fit which is an even polynomial of degree 2k over [-a, a]. By applying this device one can obtain a better fit for the same number of coefficients.
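The change of variable ω = x² amounts to interleaving zero coefficients for the odd powers; a small sketch (helper names even_from_omega and horner are ours):

```python
def even_from_omega(c):
    """Coefficients of p(w) (low order first) -> coefficients of the even
    polynomial p(x^2): interleave zeros for the odd powers of x."""
    out = []
    for coeff in c:
        out.extend([coeff, 0.0])
    return out[:-1]

def horner(coeffs, x):
    """Evaluate a polynomial given low-order-first coefficients."""
    acc = 0.0
    for coeff in reversed(coeffs):
        acc = acc * x + coeff
    return acc
```

For the odd-function device described next, x * horner(even_from_omega(c), x) supplies the odd polynomial x·p(x²).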
A similar consideration can be given odd functions. Assuming the existence of the limit of f(x)/x as x → 0, we write

    g(√ω) = f(√ω)/√ω

and approximate g(√ω) by a polynomial p(ω) of degree k over [0, a²]. Making the change of variable ω = x² we obtain the even polynomial p(x²). The approximation to f(x) is then attained by forming x·p(x²), which is an odd polynomial.
This procedure does not, of course, produce the best approximating polynomial. Nevertheless, as is exhibited in columns four and five of table 1, the resulting approximation is far better than the normal results of column two. In fact, for n = 4 the improvement is more than two orders of magnitude.
3. The slow rate of decrease of |δ| for increasing n for the function f(x) = |x⁵| is predictable, since the fifth derivative of f fails to exist at the origin. In fact, it is well known that for such a function there exists a constant C such that for each n the best approximation δ ≤ C/n⁵.
4. The graph of the acceleration data in the last column of table 2 is given by figure 1. The behavior of the δ's suggests that this function is essentially a fourth degree polynomial, and no substantial improvements can be obtained by approximations of higher degree.
The error function for the approximation n = 4 is given by figure 2. Unlike most error functions, which are smooth and have only n + 1 zeros, this particular
[FIG. 1. Graph of the acceleration data of table 2: acceleration (ordinate, roughly 80-120) against time in seconds (abscissa, 12-15).]

[FIG. 2. Error function for n = 4.]

one exhibits many fluctuations, and the convergence was slightly slower than usual. Such problems offer a good test to study the convergence of the method and the efficiency of the machine program.
5. The acceleration data of table 2 is an example of data for which a minimax solution is attained over the interval [-1, 1] but the polynomial transformed to [x_1, x_N] does not satisfy the convergence criteria. An illustration of this difficulty is given by this data for n = 8, for which the error function over the restored interval [12, 18] had extremes of 6δ.
The transformation of the polynomial from coefficients a_i over [-1, 1] to coefficients b_i over [x_1, x_N] is given by

    b_i = ψ_{n+1,i+1},   i = 0, 1, ..., n,

where

    ψ_{j,0} = 0,   ψ_{1,1} = a_n;

    ψ_{j+1,1} = β ψ_{j,1} + a_{n-j},
    ψ_{j+1,i+1} = α ψ_{j,i} + β ψ_{j,i+1},   i = 1, ..., j,   j = 1, ..., n,

and

    α = 2/(x_N - x_1),   β = (x_1 + x_N)/(x_1 - x_N).
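The recursion just described is a Horner-style composition with t = αx + β; a sketch (the helper names to_original_interval and horner are ours) that agrees with it term by term:

```python
def to_original_interval(a, x1, xN):
    """Compose q(t) = sum a_i t^i with t = alpha*x + beta, yielding the
    coefficients of p(x) = q(alpha*x + beta) over [x1, xN]."""
    alpha = 2.0 / (xN - x1)
    beta = (x1 + xN) / (x1 - xN)
    b = [a[-1]]                          # Horner composition, one degree at a time
    for coeff in reversed(a[:-1]):
        nb = [0.0] * (len(b) + 1)
        for i, c in enumerate(b):        # multiply by (alpha*x + beta) ...
            nb[i] += beta * c
            nb[i + 1] += alpha * c
        nb[0] += coeff                   # ... then add the next coefficient
        b = nb
    return b

def horner(coeffs, x):
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc
```

Note that x = x_1 maps to t = -1 and x = x_N to t = +1, so the endpoint values of p must agree with q(-1) and q(1).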

If the unnormalized polynomial is actually desired, it is essential that the error function be checked over [x_1, x_N]. If the residuals do not agree substantially with those calculated over [-1, 1], then it becomes imperative to use the normalized polynomial together with the transformation which maps x from [x_1, x_N] to [-1, 1].
It is of interest to compare the Remes algorithm with this method. In the second procedure the function δ is always computed from the input data (x, f(x)), while in the first method the evaluation of the error function depends in sequence upon the solution of a linear system, the computation of the value of a polynomial, and finally the difference f(x) - p(x). The latter computation results in a loss of significant figures when p(x) is close to f(x). Experimental evidence is at hand which suggests that the second algorithm can do better when the error function E(x) approaches the order of the limits of the precision of the computer.
The successive vectors u chosen by each respective algorithm will in general not be the same even though the initial sets are equal. As a result, one or the other of the two methods may converge faster. Unfortunately, lack of experience with the first algorithm prohibits any conclusions. However, one may still discuss the difference in the number of operations (multiplications and divisions) associated with the basic computation for the two procedures.
A complete cycle r for either of the algorithms is defined to be, respectively, the determination of E(x) over the entire interval [-1, 1] or the successive determination of δ_j(x), j = 0, ..., n+1, for a particular choice of a vector u.
The number of cycles r required for the first procedure must be integral and greater than or equal to 2, unless the chosen initial set of n+2 points happens to be the solution. In [6], r is reported to be in the range 6-8.
On the other hand, the number of cycles for the second method can be fractional, and r has been observed to be approximately 2 for most problems.
Considering the cycle to be the unit which measures the convergence, we obtain the following results, where n is the degree of the approximating polynomial and N is the number of points in S:
Method    Number of multiplications and divisions

I:        r[(1/3)(n + 1)(n + 2)(n + 6) + 3nN]

II:       r[(n + 1)(n + 2) + N(7n + 9)] + (n + 2)(n² + 10n + 6)

In the practical range of interest, say n ≤ 10 and N = 100, one can probably do better with respect to computing time when using method I. However, in either case the total computer time requirements are quite small.
REFERENCES
1. D. BRATTEN, New results in the theory and techniques of Chebyshev fitting. Abstract No. 546-34, Notices Amer. Math. Soc. 5 (1958), 248.
2. P. C. CURTIS, JR., n-parameter families and best approximation. To appear; see also Abstract No. 548-70, Notices Amer. Math. Soc. 5 (1958), 496.
3. C. DE LA VALLÉE POUSSIN, Leçons sur l'approximation des fonctions d'une variable réelle. Gauthier-Villars, Paris, 1952.
4. E. N. NOVODVORSKII AND I. SH. PINSKER, On a process of equalization of maxima. Usp. Mat. Nauk 6 (1951), 174-181 [Russian]. (English translation by A. Shenitzer available from New York University.)
5. YA. L. REMES, On a method of Chebyshev type approximation of functions. Ukr. A.N., 1935.
6. A. SHENITZER, Chebyshev approximation of a continuous function by a class of functions. J. Assoc. Comp. Mach. 4 (1957), 30-35.
