
A Direct Method for Chebyshev Approximation

by Rational Functions*

JOSEF STOER

Mathematisches Institut der Technischen Hochschule München, Germany


1. Introduction. In this paper, an efficient method is described for computing
the "best-fit" rational function

    R*(x) = P_l(x)/Q_m(x) = (c_0 + c_1 x + ... + c_l x^l) / (d_0 + d_1 x + ... + d_m x^m)   (1)

which approximates a given real continuous function f(x) on a given interval
[a, b] in the sense of Chebyshev. That is, we try to find that rational function
R*(x) which minimizes the maximum modulus of the weighted error function

    δ_R(x) = (R(x) − f(x)) / g(x)

on the interval [a, b], where g(x) denotes a given weight function positive and
continuous on [a, b]. In what follows, we consider only the case l ≥ m. Otherwise,
we interchange l and m and approximate the reciprocal 1/f(x) using the
weight function g(x)/(f(x))².
The general outline of the method is already known (see Fraser, Hart [2],
Maehly [3, 5], Meinardus, Strauer [6], Werner [16, 17]). It is an extension (to the
rational case) of Remez' Second Algorithm [10, 11] for polynomial approximation
(see also Murnaghan, Wrench [8, 9]). By iteration, this method generates a
sequence of rational functions R_i(x) which converge to R*(x). In the rational
case (m > 0), the computation of the coefficients c_k^(i), d_k^(i) of any of these
intermediate rational functions requires solving a system of nonlinear equations (3),
arising from an interpolation problem. The methods described in [2, 3, 5, 6]
have the common feature that they solve these nonlinear equations without
regarding the interpolative character of these equations. Taking up an idea of
Stiefel [12], we therefore propose to compute the rational functions R_i(x) not in
the conventional form (1) but in the form

    R(x) = e_0 + e_1(x − y_0) + ... + e_{l−m}(x − y_0)⋯(x − y_{l−m−1})
         + e_{l−m+1}(x − y_0)⋯(x − y_{l−m}) / (1 + (x − y_{l−m+1})/(e_{l−m+2} + ... + (x − y_{l+m−1})/e_{l+m}))   (2)

suggested by the theory of interpolation by rational functions (see [13]). It will


be shown in Sections 3 and 4 that the coefficients e_i of (2) are easily calculated
by difference methods.
In addition to this methodological advantage, the extra degrees of freedom of
(2) due to the parameters y_k can be used so as to make the evaluation of (2)
numerically more stable than that of form (1). An example in Section 6 illustrates
the numerical efficiency of the method under extreme circumstances. In Section
7 an ALGOL formulation of the algorithm is given.

* Received August, 1963.

Journal of the Association for Computing Machinery, Vol. 11, No. 1 (January, 1964), pp. 59-69
2. The error function δ_{R*}(x) of the best-fit rational function has an oscillating
character which determines R*(x) uniquely. In most cases encountered in
practice, the error function is of standard form, which means:

δ_{R*}(x) has exactly l+m+2 local extrema x_i* in [a, b] with alternating signs and
equal amplitude: δ_{R*}(x_i*) = (−1)^i λ*, where a = x_0* < x_1* < ... < x_{l+m+1}* = b.

The points x_i* are called critical points.
We shall deal only with the case in which δ_{R*}(x) has standard form.
Assume that approximate values x_i have already been found for the critical
points. Then the method is a two-stage iteration procedure which may be
described roughly as follows:

STAGE I. Find a solution R(x) and λ of the equations

    δ_R(x_i) = (R(x_i) − f(x_i)) / g(x_i) = (−1)^i λ,   i = 0, 1, ..., l+m+1.   (3)

STAGE II. Determine a new set of approximate critical points

    x̄_i, i = 1, 2, ..., l+m;   x̄_0 = x_0 = a,   x̄_{l+m+1} = x_{l+m+1} = b

by evaluating the actual extrema of the error function δ_R(x) of the rational function
determined in Stage I, and repeat Stage I.
Usually, the extrema

    x_i = (a + b)/2 + ((b − a)/2) cos(π(1 − i/(l+m+1))),   i = 0, 1, ..., l+m+1   (4)

of the suitably translated Chebyshev polynomial T_{l+m+1}(x) are taken as the
initial guess.
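The initial guess (4) can be sketched in a few lines of Python (a hedged sketch; the function name is mine, not from the paper). It reproduces the starting abscissae listed in Section 6 for l = m = 3 on [1, 2].

```python
import math

def chebyshev_extrema(a, b, l, m):
    """Initial guess (4): extrema of the translated Chebyshev polynomial
    T_{l+m+1} on [a, b], returned in increasing order."""
    n = l + m + 1
    return [(a + b) / 2 + (b - a) / 2 * math.cos(math.pi * (1 - i / n))
            for i in range(n + 1)]

pts = chebyshev_extrema(1.0, 2.0, 3, 3)
# pts[0] = a = 1.0, pts[-1] = b = 2.0, pts[1] ≈ 1.04951 5567 (cf. Section 6)
```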
Practice has shown that for this guess four or five iterations will yield the
best-fit rational function with sufficient accuracy. For the polynomial case (m = 0),
quadratic convergence has been confirmed by Veidinger [14].

3. From a theoretical standpoint, Stage I is more difficult than Stage II.
Equations (3) may be transformed into the system

    P_l(x_i) − (f(x_i) + (−1)^i λ g(x_i)) Q_m(x_i) = 0,   i = 0, 1, ..., l+m+1,   (5)

which may be interpreted as arising from an interpolation problem, if λ were
known. Also it can be regarded as a linear system of equations for the coefficients
c_i, d_k of (1) with coefficients depending on λ. Its determinant is a polynomial in
λ of degree m+1. The linear system has nontrivial solutions if and only if λ
is chosen to be a root of the determinant. However, we can only use such
solutions which correspond to a real λ¹ and yield rational functions having no pole

¹ Recently, Werner has proved in [18] that all roots λ are real. He shows that the determination
of λ leads to solving an eigenvalue problem of the type det(A + λB) = 0, where
A is real and symmetric, and B is positive definite. For m = 1, he also treats the question
when λ yields a pole-free solution.

in the interval [a, b]. Maehly [5] has shown that there is at most one pole-free
solution. But apart from simple special cases (l = 1, m = 1, see [15]), no general
answer has been given to the question as to which λ yields this pole-free solution.
Since in our problem λ should be very small, we choose λ to be the root λ_0 with the
smallest modulus.
Now, let us turn to the problem of how to evaluate λ_0 and R(x) from the system
(5). Maehly [3] first determined the root λ_0 of the determinant by iteration;
for this λ_0, he then solved the linear homogeneous equations (5), which have a
well defined nontrivial solution if a suitable normalizing condition (say Q_m(1) = 1,
d_0 = 1 or d_m = 1) is added. Fraser and Hart [2] and Meinardus, Strauer [6]
solved the system (5) simultaneously for the unknowns c_i, d_k and λ_0 by an
overall iteration.
Our method is iterative, also. As mentioned in the introduction, we use the
theory of interpolation by rational functions and represent R(x) in the form (2).
As a byproduct, we do not need any artificial normalizing condition.
Since we interpolate at the approximate values x_k of the critical points in
Stage I, we put

    y_k = x_{i_k},   k = 0, 1, ..., l+m−1

with x_{i_k} ≠ x_{i_j} for k ≠ j, for the parameters y_k entering into the representation (2).
For the sake of demonstration only, we choose simply y_k = x_k. (In the ALGOL
program at the end of this paper, the "inner" critical points x_1, ..., x_{l+m} are
used.)
Now, R(x) may be written

    R(x) = P_{l−m}(x) + (P_{l−m+1}(x) − P_{l−m}(x)) / φ(x).   (6)

Here, P_{l−m}(x), P_{l−m+1}(x) and φ(x) denote the expressions

    P_{l−m}(x) = e_0 + e_1(x − x_0) + ... + e_{l−m}(x − x_0)⋯(x − x_{l−m−1}),

    P_{l−m+1}(x) = e_0 + e_1(x − x_0) + ... + e_{l−m+1}(x − x_0)⋯(x − x_{l−m}),

    φ(x) = 1 + (x − x_{l−m+1})/(e_{l−m+2} + (x − x_{l−m+2})/(e_{l−m+3} + ... + (x − x_{l+m−1})/e_{l+m})),

respectively. To evaluate the coefficients e_i, assume for the moment that we
know the interpolation points (x_i, h_i), i = 0, 1, ..., l+m+1, where h_i =
f(x_i) + (−1)^i λ g(x_i). Our aim is to construct R(x) such that R(x_i) = h_i for
i = 0, 1, ..., l+m+1. The coefficients e_i, i = 0, 1, ..., l−m+1, of the polynomial
part of (6) can be found easily by the technique of divided differences (see
for instance [7]).
    R(x_0) = a_{0,0}
                        a_{1,1}
    R(x_1) = a_{1,0}
                        a_{2,1}   . . .   a_{l−m+1,l−m+1}
    R(x_2) = a_{2,0}
       ⋮         ⋮                a_{l−m+1,1}
    R(x_{l−m+1}) = a_{l−m+1,0}

Starting from the first column, the divided differences a_{i,k} are computed
recursively by the well-known formula

    a_{i,k} = (a_{i,k−1} − a_{i−1,k−1}) / (x_i − x_{i−k}),

and we have

    e_i = a_{i,i},   i = 0, 1, ..., l−m+1.

Note that in the case m = 0, the condition a_{l+1,l+1} = 0 is necessary and
sufficient for the polynomial P_l(x) of degree l to pass through the l+2 points (x_i, h_i),
i = 0, 1, ..., l+1.
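The divided-difference computation of the coefficients e_i = a_{i,i} can be sketched as follows (a minimal Python sketch; the function name is mine). The in-place column update is the standard Newton scheme and is equivalent to the recursion above.

```python
def newton_coefficients(xs, hs):
    """Divided-difference table kept as one column updated in place.
    After step k, e[i] holds a_{i,k} for i >= k; on return, e[i] = a_{i,i}."""
    e = list(hs)
    for k in range(1, len(xs)):
        for i in range(len(xs) - 1, k - 1, -1):
            e[i] = (e[i] - e[i - 1]) / (xs[i] - xs[i - k])
    return e

# Values of x^2 + 1 at 0, 1, 2 give the Newton form 1 + x + x(x-1) = x^2 + 1:
coeffs = newton_coefficients([0.0, 1.0, 2.0], [1.0, 2.0, 5.0])
# coeffs == [1.0, 1.0, 1.0]
```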
For the computation of the remaining e_i, i = l−m+2, ..., l+m, we use
Thiele's method of reciprocal differences (see for instance [7]). The continued
fraction φ(x) can be expressed in terms of R(x), P_{l−m}(x) and P_{l−m+1}(x):

    φ(x) = (P_{l−m+1}(x) − P_{l−m}(x)) / (R(x) − P_{l−m}(x)).

Therefore, we have to take as starting values the known quantities

    φ(x_i) = (P_{l−m+1}(x_i) − P_{l−m}(x_i)) / (h_i − P_{l−m}(x_i)).
Thus the table of reciprocal differences

    φ(x_{l−m+1}) = c_{0,0}
                            c_{1,1}
    φ(x_{l−m+2}) = c_{1,0}
                            c_{2,1}   . . .   c_{2m−1,2m−1}
    φ(x_{l−m+3}) = c_{2,0}                           c_{2m,2m}
        ⋮               ⋮          . . .   c_{2m,2m−1}
    φ(x_{l+m+1}) = c_{2m,0}

can be obtained from the first column by the recursion formula (set c_{i,−1} = 0):

    c_{i,k} = (x_{l−m+1+i} − x_{l−m+1+i−k}) / (c_{i,k−1} − c_{i−1,k−1}) + c_{i−1,k−2}.

The remaining coefficients e_i are easily calculated from the diagonal elements
c_{i,i}:

    e_{l−m+1+i} = c_{i,i} − c_{i−2,i−2},   i = 1, 2, ..., 2m−1.


The rational function R(x) thus determined passes through the points (x_i, h_i):

    R(x_i) = h_i,   i = 0, 1, ..., l+m.

To meet the additional requirement

    R(x_{l+m+1}) = h_{l+m+1},

the condition c_{2m,2m} = ∞ or, equivalently, c_{2m−1,2m−1} = c_{2m,2m−1} must be fulfilled
for the table of reciprocal differences.
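The reciprocal-difference table and the evaluation of the resulting continued fraction can be sketched in the same way (a hedged Python sketch; the names and the small test data are mine, not from the paper). The diagonal c_{i,i} plays the role of Thiele's reciprocal differences, and the partial denominators of the continued fraction are the differences c_{i,i} − c_{i−2,i−2}, as in the formula for e_{l−m+1+i} above.

```python
def reciprocal_differences(xs, vs):
    """Build the table c[i][k] from the first column c[i][0] = vs[i] by
        c[i][k] = (x_i - x_{i-k}) / (c[i][k-1] - c[i-1][k-1]) + c[i-1][k-2],
    taking c[.][-1] as 0.  Returns the diagonal [c[0][0], ..., c[n-1][n-1]]."""
    n = len(xs)
    c = [[0.0] * n for _ in range(n)]
    for i in range(n):
        c[i][0] = vs[i]
    for k in range(1, n):
        for i in range(k, n):
            below = c[i - 1][k - 2] if k >= 2 else 0.0
            c[i][k] = (xs[i] - xs[i - k]) / (c[i][k - 1] - c[i - 1][k - 1]) + below
    return [c[i][i] for i in range(n)]

def thiele_eval(xs, rho, x):
    """Evaluate the interpolating continued fraction
    rho[0] + (x-x0)/(rho[1] + (x-x1)/(rho[2]-rho[0] + ...)) by backward recurrence."""
    tail = 0.0
    for i in range(len(rho) - 1, 0, -1):
        tail = (x - xs[i - 1]) / (rho[i] - (rho[i - 2] if i >= 2 else 0.0) + tail)
    return rho[0] + tail

# Three samples of 1/(1+x) are reproduced exactly by the continued fraction:
xs = [0.0, 1.0, 2.0]
rho = reciprocal_differences(xs, [1.0 / (1.0 + x) for x in xs])
value = thiele_eval(xs, rho, 4.0)
# value ≈ 1/(1+4) = 0.2
```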

4. Now, let us apply this interpolation algorithm to the determination of
R(x) and λ in Stage I. Here the quantities h_i are linear functions of the unknown
amplitude λ:

    h_i = f(x_i) + λ(−1)^i g(x_i).

Nevertheless, we may compute the difference tables just described; but now, the
divided differences a_{i,k} become linear functions and the quantities φ(x_i) and
c_{i,k} rational functions of λ. Finally, λ may be determined from the conditions

    c_{2m−1,2m−1}(λ) = c_{2m,2m−1}(λ)   for m > 0,   (7)
    a_{l+1,l+1}(λ) = 0                  for m = 0.
The exact determination of the functions c_{i,k}(λ), which is rather cumbersome,
is not necessary if we solve equations (7) by Newton iteration: λ → λ + Δλ.
In order to get a correction Δλ for an approximate value λ̄ of the "right" λ,
we need only the values of a_{i,k}(λ̄), c_{i,k}(λ̄) and their first derivatives a′_{i,k}(λ̄)
and c′_{i,k}(λ̄), which again may be computed recursively. With the notation

    b_{i,k} = a′_{i,k}(λ̄),   Q_{l−m}(x) = ∂P_{l−m}(x)/∂λ at λ = λ̄,

    d_{i,k} = c′_{i,k}(λ̄),   Q_{l−m+1}(x) = ∂P_{l−m+1}(x)/∂λ at λ = λ̄,

we obtain the new recursion formulas for the derivatives b_{i,k} and d_{i,k}:

    b_{i,k} = (b_{i,k−1} − b_{i−1,k−1}) / (x_i − x_{i−k}),

    d_{i,k} = −(x_{l−m+1+i} − x_{l−m+1+i−k}) (d_{i,k−1} − d_{i−1,k−1}) / (c_{i,k−1} − c_{i−1,k−1})² + d_{i−1,k−2},

and the starting values (set h_i = f(x_i) + λ̄(−1)^i g(x_i)):

    a_{i,0} = h_i,   b_{i,0} = (−1)^i g(x_i),   i = 0, 1, ..., l−m+1   (8)

and

    c_{i−l+m−1,0} = (P_{l−m+1}(x_i) − P_{l−m}(x_i)) / (h_i − P_{l−m}(x_i)),
                                                                          (9)
    d_{i−l+m−1,0} = (Q_{l−m+1}(x_i) − Q_{l−m}(x_i)) / (h_i − P_{l−m}(x_i)) − c_{i−l+m−1,0} ((−1)^i g(x_i) − Q_{l−m}(x_i)) / (h_i − P_{l−m}(x_i))

for i = l−m+1, ..., l+m+1. Since λ is expected to be a very small number,
the initial guess λ̄ = 0 will be a good choice.
Finally, one Newton iteration of equation (7) can be performed and we get
λ̄ + Δλ as a new approximation for λ, where

    Δλ = (c_{2m−1,2m−1} − c_{2m,2m−1}) / (d_{2m,2m−1} − d_{2m−1,2m−1})   for m > 0,
                                                                          (10)
    Δλ = −a_{l+1,l+1} / b_{l+1,l+1}                                      for m = 0.
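For the polynomial case m = 0, the whole Stage-I computation collapses to two divided-difference tables and one division, since a_{l+1,l+1}(λ) is linear in λ. This can be sketched in Python (a hedged sketch; the helper name and the sample data are mine): the table of the values f(x_i) gives a_{l+1,l+1}(0), the table of the signs (−1)^i g(x_i) gives b_{l+1,l+1}, and λ = −a_{l+1,l+1}(0)/b_{l+1,l+1} then makes the top divided difference of h_i = f(x_i) + λ(−1)^i g(x_i) vanish.

```python
import math

def top_divided_difference(xs, vals):
    """Return the highest-order divided difference a_{n,n} of the data."""
    v = list(vals)
    for k in range(1, len(xs)):
        for i in range(len(xs) - 1, k - 1, -1):
            v[i] = (v[i] - v[i - 1]) / (xs[i] - xs[i - k])
    return v[-1]

# l = 2, f(x) = exp(x), g(x) = 1: four critical-point abscissae, signs (-1)^i
xs = [0.0, 0.5, 1.0, 1.5]
signs = [1.0, -1.0, 1.0, -1.0]
f_vals = [math.exp(x) for x in xs]
lam = -top_divided_difference(xs, f_vals) / top_divided_difference(xs, signs)
h = [f_vals[i] + lam * signs[i] for i in range(4)]
# the top divided difference of the h_i now vanishes (up to roundoff)
```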

This value may again be corrected by another Newton iteration, and so on.
Naturally, this iteration process for λ converges quadratically if the starting
value λ̄ = 0 is close enough to λ. Note that for m = 0, formula (10) gives the
exact value of λ after the first step, since the divided differences a_{i,k} are linear
functions of λ. Numerical experiments have shown that for m > 0, two Newton
iterations yield λ with sufficient accuracy.
Instead of Newton's method, regula falsi may be used for determining λ.
This amounts to evaluating the difference table for two different small values of
λ, say 0 and 10⁻⁵, and then obtaining a better λ by linear interpolation.
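The regula falsi variant needs no derivative tables at all: one evaluates the defect of condition (7) for two trial values of λ and interpolates linearly. A minimal Python sketch (the defect function F below is a hypothetical stand-in for c_{2m−1,2m−1}(λ) − c_{2m,2m−1}(λ), which in practice comes from rebuilding the difference table; the linear example only exercises the interpolation step):

```python
def regula_falsi_step(F, lam0=0.0, lam1=1.0e-5):
    """One secant step: evaluate the defect at two small trial values of
    lambda and return the zero of the interpolating straight line."""
    F0, F1 = F(lam0), F(lam1)
    return lam1 - F1 * (lam1 - lam0) / (F1 - F0)

# For a defect that happens to be linear in lambda the step is exact:
lam = regula_falsi_step(lambda t: 2.0 * (t - 3.0e-10))
# lam ≈ 3.0e-10
```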
From equations (8) and (9), it is easy to see that the absolute error Δλ of the λ
determined by the process just described will be, in general, not less than
ε·max_i |f(x_i)|/g(x_i). Here ε = 5·10⁻ᵗ denotes the maximum relative error of the machine
number representation if the calculation is carried through with t decimal digits
in the mantissa. The numerical stability and the adequacy of representation
(2) is underlined by the fact that this rough estimate for Δλ has been confirmed
by practice: an accuracy of this order of magnitude for λ has been obtained. This
estimate gives some information about the machine accuracy needed in order
to guarantee for λ a certain relative accuracy, which describes the quality of the
error function. For example, consider the approximation of f(x) := ln(x) in
the interval [1, 2] with the weight function g(x) := 1, and with the requirement
that λ be not greater than 10⁻¹⁰. If in addition, the relative error of λ itself can
be as large as 0.1, then the machine error must be about

    5·10⁻ᵗ · ln(2) ≤ 0.1 · 10⁻¹⁰ ≤ 0.1 · g(x) · λ,

or equivalently t ≥ 12. That is, single precision arithmetic with a word length of
t = 12 decimal places in the mantissa is necessary.
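The arithmetic behind t ≥ 12 can be checked directly (a Python sketch of the estimate only): the machine error 5·10⁻ᵗ·max|f| = 5·10⁻ᵗ·ln 2 must not exceed 0.1·λ = 10⁻¹¹.

```python
import math

# Smallest integer t with 5 * 10**(-t) * ln(2) <= 1e-11:
t_min = math.ceil(11 + math.log10(5 * math.log(2)))
# t_min == 12
```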

5. At the beginning of Stage II a new rational function R(x) is given, and now
the problem is to find the extrema x̄_i of its error function

    δ_R(x) = (R(x) − f(x)) / g(x).

Following Maehly [3, 5], we determine the extrema by a simple searching
procedure. We start from the previous extrema x_i, and compare the values of δ_R(x)
at equidistant abscissae x_k = x_i + k·h_i, k = 0, 1, .... Having found the
closest extreme value of δ_R(x), say at the point x̄, we compute the new critical
point x̄_i as the abscissa of the extremum of the parabola of second order passing
through the neighboring points

    (x̄ − h_i, δ_R(x̄ − h_i)),   (x̄, δ_R(x̄)),   (x̄ + h_i, δ_R(x̄ + h_i))

and get

    x̄_i = x̄ − (h_i/2) · (δ_R(x̄ + h_i) − δ_R(x̄ − h_i)) / (δ_R(x̄ + h_i) − 2δ_R(x̄) + δ_R(x̄ − h_i)).

As to the increments h_i, it would be poor policy to choose them too small, because
the increased searching time would more than outweigh the better accuracy of
x̄_i. Moreover, noise in the error function δ_R(x) may cause the searching procedure
to stop before the "real" extremum is reached. On the other hand, the increment
at the old critical point x_i must be smaller than the distance between x_i and its
neighboring critical points x_{i−1} and x_{i+1}, because otherwise one might find a
wrong extremum. Therefore, choose for every critical point x_i an individual
increment h_i which is proportional to the distance from x_i to its next neighboring
critical points. As the factor of proportionality, use a suitable multiple of the
maximum correction |x̄_i − x_i| found in the previous run of Stage II.
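Stage II as just described, a coarse walk with step h_i followed by one parabolic correction, can be sketched as follows (hedged Python; the names are mine, and the test function below is an artificial error curve, not one produced by Stage I).

```python
def refine_extremum(delta, x_start, h, max_steps=100):
    """Walk from x_start in steps of h towards increasing |delta|, then put a
    parabola through the last three equidistant points and return its vertex
    (the formula for the new critical point in Section 5)."""
    sgn = 1.0 if delta(x_start) >= 0.0 else -1.0
    f = lambda x: sgn * delta(x)          # search for a maximum of sgn*delta
    x = x_start
    if f(x + h) < f(x):                   # wrong direction: reverse the step
        h = -h
    for _ in range(max_steps):
        if f(x + h) <= f(x):
            break
        x += h
    fm, f0, fp = f(x - h), f(x), f(x + h)
    denom = fp - 2.0 * f0 + fm
    return x if denom == 0.0 else x - 0.5 * h * (fp - fm) / denom

# A pure parabola is located exactly from a nearby start:
x_new = refine_extremum(lambda x: -(x - 0.3) ** 2 + 1.0, 0.21, 0.05)
# x_new ≈ 0.3
```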

6. In this section, we illustrate our method by approximating f(x) := ln(x)
in the interval [1, 2] by a rational function R(x) with l = 3, m = 3. Choose the
weight function g(x) := 1 (minimization of absolute error). The calculation was
accomplished with the Siemens 2002 computer (word length t = 10 digits).
We describe the example in a condensed form, giving only the successive λ
(Stage I) and the new extrema x̄_i and the corresponding error amplitudes
δ_R(x̄_i) of the error function (Stage II).

Initial guess for x_i:

    x_0 = 1.0             x_4 = 1.61126 0466
    x_1 = 1.04951 5567    x_5 = 1.81174 4900
    x_2 = 1.18825 5101    x_6 = 1.95048 4433
    x_3 = 1.38873 9534    x_7 = 2.0

1st Run, Stage I:
    λ_2 = 19.5·10⁻¹⁰
    λ_3 = 21.6·10⁻¹⁰

Stage II:
    i    x̄_i              δ_R(x̄_i)·10¹⁰
    1    1.03282 9500     −28.6
    2    1.13930 8775      43
    3    1.31896 5771     −41
    4    1.55154 1059      38
    5    1.76154 3540     −26
    6    1.93728 0282      31

2d Run, Stage I:
    λ_4 = 30·10⁻¹⁰
    λ_5 = 32·10⁻¹⁰

Stage II:
    i    x̄_i              δ_R(x̄_i)·10¹⁰
    1    1.03408 6414     −33.8
    2    1.13930 8776      31
    3    1.32675 5176     −33
    4    1.55154 1059      31
    5    1.76487 1141     −28
    6    1.94265 6744      33

Since the differences between the error amplitudes and λ̄ = λ lie within the error bound

    ε · max |f(x)| / g(x) = 5·10⁻¹⁰ · ln(2) / 1 ≈ 3.5·10⁻¹⁰,

a further run would yield no improvement. The final result is:

    R(x) = 0.03230 211978 + 0.92149 03503 (x − 1.03282 9500) /
           (1 + (x − 1.13930 8775)/(2.29709 997 + (x − 1.31896 5771)/(3.61707 2046
           + (x − 1.55154 1059)/(1.24719 940 + (x − 1.76154 3540)/(7.28646 432
           + (x − 1.93728 0282)/…)))))
7. ALGOL Program. The following program describes our method of approximation
in ALGOL 60. It has been tested on the Siemens 2002 computer with the
help of the ALGOL translator ALCOR Mainz 2002. The example of the previous
section was computed by this program.

procedure Rational approximation (f) weight function: (g) order: (l, m) interval: (aa, bb)
    error bound: (eps) result: (e, x, lb);
comment This program determines for the function f(x) the coefficients of the best-fit
    rational function (in the Chebyshev sense)

    r(x) := e[0] + e[1](x−x[1]) + ... + e[l−m](x−x[1])⋯(x−x[l−m]) +
          + e[l−m+1](x−x[1])⋯(x−x[l−m+1]) / (1 + (x−x[l−m+2])/(e[l−m+2] + ... + (x−x[l+m])/e[l+m]))

    which minimizes the maximum modulus of the weighted error function (r(x) − f(x))/g(x)
    in the interval [aa, bb]. The degree l of the numerator of r(x) is supposed to be not less
    than the degree m of its denominator. The weight function g(x) must be positive in
    [aa, bb]. The maximum weighted error of the best-fit rational function is lb. The
    components of the vector

    x[1:l+m+2] (with x[1] < x[2] < ... < x[l+m+1] = bb, x[l+m+2] = aa)

    are the abscissae of the extrema of the error function. The error bound eps shall be not
    less than 5·10⁻ᵗ, if the calculation is performed on a machine with a word length of t
    decimal places in the mantissa;
value l, m, aa, bb, eps; integer l, m; real aa, bb, eps, lb; array e, x; real procedure f, g;
begin integer n, r, r1, r2, u, i, k, k1;
real b1, m1, eps1, zz, z, y, p, q, f1, f2, z1, z2, z3, dlb, h1, y2, y3, ab;
array xx, h[0:l+m+1], a, b[0:l−m+1, 0:l−m+1], c, d[−1:2×m, −1:2×m];
Boolean s, bbb;
Boolean array ss[0:l+m+2];
procedure psi(y, f1, f2, a);
  value y; real y, f1, f2; array a;
  begin real p; integer i;
    f1 := a[r, r]; f2 := 0;
    for i := l−m step −1 until 0 do
      begin p := y − x[i+1]; f1 := f1 × p; f2 := f2 × p + a[i, i]; end;
  end psi;
real procedure phi(y);
  value y; real y;
  begin real s; integer i;
    s := 0;
    for i := r1 step −1 until 1 do s := (y − x[i+r])/(c[i, i] − c[i−2, i−2] + s);
    psi(y, f1, f2, a); phi := (f2 + f1/(s+1) − f(y))/g(y);
  end phi;
A DIRECT METHOD FOR CHEB~(SHEV APPROXI~IATION 67

Computation of starting values:
b1 := ab := (bb−aa)/2; m1 := (bb+aa)/2; n := l+m+1; r := l−m+1; r1 := 2×m−1; r2 := r1+1; z1 := 0;
c[0, 0] := 1; d[0, 0] := 0;
for i := −1 step 1 until r2 do d[i, −1] := c[i, −1] := 0;
s := false; k1 := entier(n/2 + 0.1); bbb := 2×k1 ≠ n;
for i := 1 step 1 until k1 do
begin z := cos(3.14159265×(1 − i/n))×b1; x[i] := z + m1; x[n−i] := m1 − z;
  ss[i] := s; s := ¬s; ss[n−i] := bbb ≡ s;
  h[n−i] := h[i] := (1 − abs(z)/b1)×2/i;
end;
x[n+1] := aa; ss[n+1] := true; x[n] := bb; ss[n] := ¬bbb;
for i := 0 step 1 until n do
begin zz := x[i+1]; z2 := abs(f(zz))/g(zz);
  if z1 < z2 then z1 := z2;
end;
lb := 0; eps1 := eps×z1;
Stage 1:
u := 0;
Lb loop:
for i := 0 step 1 until r do
begin z := g(x[i+1]); z := b[i, 0] := if ss[i+1] then z else −z;
  a[i, 0] := f(x[i+1]) + lb × z;
end;
for k := 1 step 1 until r do
for i := k step 1 until r do
begin z := 1/(x[i−k+1] − x[i+1]); a[i, k] := (a[i−1, k−1] − a[i, k−1]) × z;
  b[i, k] := (b[i−1, k−1] − b[i, k−1]) × z;
end;
for i := 1 step 1 until r2 do
begin y := x[r+i+1]; p := f(y); q := g(y); psi(y, f1, f2, a);
  if ¬ss[r+i+1] then q := −q; p := p + lb × q;
  z := 1/(p − f2); y2 := c[i, 0] := z × f1;
  psi(y, f1, f2, b); d[i, 0] := z × (f1 − y2 × (q − f2));
end;
for k := 1 step 1 until r1 do
for i := k step 1 until r2 do
begin z := c[i−1, k−1] − c[i, k−1]; z1 := (x[r+i−k+1] − x[r+i+1])/z;
  c[i, k] := z1 + c[i−1, k−2];
  d[i, k] := (d[i, k−1] − d[i−1, k−1]) × z1/z + d[i−1, k−2];
end;
if m = 0 then dlb := −a[r, r]/b[r, r]
else dlb := (c[r1, r1] − c[r2, r1])/(d[r2, r1] − d[r1, r1]);
lb := lb + dlb; u := u + 1;
if u ≤ sign(m) then go to Lb loop;
for i := 0 step 1 until r do a[i, i] := b[i, i] × dlb + a[i, i];
for i := 1 step 1 until r1 do c[i, i] := d[i, i] × dlb + c[i, i];
Stage 2:
k := sign(lb); z1 := zz := 0;
for i := 1 step 1 until n−1 do
begin h1 := 0.2 × h[i] × ab; u := if ss[i] then k else −k;
  y2 := x[i]; y3 := y2 + h1; z2 := phi(y2) × u; z3 := phi(y3) × u;
  if z3 > z2 then go to step;
  h1 := −h1; z := z3; z3 := z2; z2 := z; z := y3; y3 := y2; y2 := z;
step: y := y3 + h1; z := phi(y) × u;
  if z > z3 then
  begin y2 := y3; y3 := y; z2 := z3; z3 := z; go to step; end;
  y := −2×z3 + z + z2;
  y := if y = 0 then y3 else (y2+y3)/2 + h1 × (z2−z3)/y;
  z := y − x[i]; xx[i] := y; h1 := phi(y);
  z := abs(z); if zz < z then zz := z;
  z := abs(h1) − abs(lb); if z1 < z then z1 := z;
end;
if z1 > eps1 then
begin ab := zz;
  for i := 1 step 1 until n−1 do
  begin z1 := x[i] := xx[i]; h[i] := 1 − abs(z1−m1)/b1; end;
  go to Stage 1;
end;
for i := 0 step 1 until r do e[i] := a[i, i];
for i := 1 step 1 until r1 do e[i+r] := c[i, i] − c[i−2, i−2];
end rational approximation;

Acknowledgment. The author wishes to thank Dr. C. Witzgall for many useful
discussions on the subject of Chebyshev approximation and Mr. D. Bulman for
his careful reading of the manuscript.

REFERENCES
1. ACHIESER, N. I. Vorlesungen über Approximationstheorie. Akademie-Verlag, Berlin, 1953.
2. FRASER, W., AND HART, J. F. On the computation of rational approximations to continuous functions. Comm. ACM 5 (1962), 401–403.
3. MAEHLY, H. J. First interim progress report on rational approximation. Off. Nav. Research, Project NR 044-196, Princeton, 1958.
4. MAEHLY, H. J., AND WITZGALL, C. Tschebyscheff-Approximationen in kleinen Intervallen I. Approximation durch Polynome. Num. Math. 2 (1960), 142–150.
5. MAEHLY, H. J. Methods for fitting rational approximations. Pts. II and III. J. ACM 10 (1963), 257–277.
6. MEINARDUS, G., AND STRAUER, H.-D. Über die Approximation von Funktionen bei der Aufstellung von Unterprogrammen. Elektron. Datenver. (1961), 180–187.
7. MILNE, W. E. Numerical Calculus. Princeton University Press, 1949.
8. MURNAGHAN, F. D., AND WRENCH, J. W. The approximation of differentiable functions by polynomials. David Taylor Model Basin Report 1175, 1958.
9. MURNAGHAN, F. D., AND WRENCH, J. W. The determination of the Chebyshev approximating polynomial for a differentiable function. MTAC 13 (1959), 185–193.
10. REMES, E. Sur la détermination des polynômes d'approximation de degré donné. Communications de la Société Mathématique de Kharkoff et de l'Institut des Sciences Mathématiques et Mécaniques de l'Université de Kharkoff, série 4, t. 10, Kharkoff, 1934.
11. REMES, E. Sur le calcul effectif des polynômes d'approximation de Tchebycheff. Compt. Rend. Acad. Sci. Paris 199 (1934), 337–340.
12. STIEFEL, E. L. Numerical methods of Tchebycheff approximation. In On Numerical Approximation, R. E. Langer, Ed., 217–232, Madison, 1959.
13. STOER, J. Über zwei Algorithmen zur Interpolation mit rationalen Funktionen. Num. Math. 3 (1961), 285–304.
14. VEIDINGER, L. On the numerical determination of the best approximation in the Chebyshev sense. Num. Math. 2 (1960), 99–105.
15. WERNER, H. Ein Satz über diskrete Tschebyscheff-Approximation bei gebrochen linearen Funktionen. Num. Math. 4 (1962), 154–157.
16. WERNER, H. Tschebyscheff-Approximationen im Bereich der rationalen Funktionen bei Vorliegen einer guten Ausgangsnäherung. Arch. Rat. Mech. Anal. 10 (1962), 205–219.
17. WERNER, H. Die konstruktive Ermittlung der Tschebyscheff-Approximierenden im Bereich der rationalen Funktionen. Arch. Rat. Mech. Anal. 11 (1962), 368–384.
18. WERNER, H. Rationale Tschebyscheff-Approximation, Eigenwerttheorie und Differenzenrechnung. Arch. Rat. Mech. Anal. 13 (1963), 330–347.
