Golub 1983
Summary. Algorithms are derived for the evaluation of Gauss knots in the
presence of fixed knots by modification of the Jacobi matrix for the weight
function of the integral. Simple Gauss knots are obtained as eigenvalues of
symmetric tridiagonal matrices and a rapidly converging simple iterative
process, based on the merging of free and fixed knots, of quadratic convergence is presented for multiple Gauss knots. The procedures also allow
for the evaluation of the weights of the quadrature corresponding to the
simple Gauss knots. A new characterization of simple Gauss knots as a
solution of a partial inverse eigenvalue problem is derived.
Subject Classifications: AMS (MOS): 65F15, 65D30; CR: 5.14, 5.16.
1. Introduction
    I(f) = Q(f) := ∑_{j=1}^{m} ∑_{i=1}^{m_j} D_{ji} f^{(i-1)}(v_j) + ∑_{j=1}^{n} ∑_{i=1}^{n_j} C_{ji} f^{(i-1)}(x_j)

for any polynomial f of degree less than k. Here f^{(i)}(x) denotes the value of the i-th derivative of f at x. Given the knots and their multiplicities it is possible to find the weights such that the quadrature has polynomial order at least N := ∑_{j=1}^{m} m_j + ∑_{j=1}^{n} n_j. Such quadratures are called interpolatory; finding their weights is a linear problem.
Given the fixed (or prescribed) knots v_1, ..., v_m and the multiplicities m_1, ..., m_m, n_1, ..., n_n the quadrature Q is called the Gauss quadrature if the free (or Gauss) knots x_1, ..., x_n are such that the corresponding interpolatory quadrature has polynomial order N + n. Here, again, N is the number of weights (C_ji and D_ji).
Theorem 1. Let q_F(t) := ∏_{j=1}^{m} (t - v_j)^{m_j} and q_G(t) := ∏_{j=1}^{n} (t - x_j)^{n_j}. The knots x_1, ..., x_n are the Gauss knots if and only if

    ∫_a^b t^k q_G(t) q_F(t) w(t) dt = 0                                    (1)

for k = 0, 1, ..., n - 1.
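For the classical case of no fixed knots (q_F = 1) and w ≡ 1 on (-1, 1), i.e. Gauss-Legendre quadrature, condition (1) is easy to verify numerically. The following NumPy check is ours, added for illustration only:

```python
import numpy as np

# Gauss-Legendre: n simple free knots, no fixed knots (q_F = 1), w(t) = 1.
n = 5
nodes, _ = np.polynomial.legendre.leggauss(n)      # the Gauss knots x_1, ..., x_n

# Condition (1): int_{-1}^{1} t^k q_G(t) dt = 0 for k = 0, ..., n-1,
# where q_G(t) = prod_j (t - x_j).  Integrate with a rule exact to degree 2n-1.
t, wq = np.polynomial.legendre.leggauss(2 * n)
qG = np.prod(t[:, None] - nodes[None, :], axis=1)
for k in range(n):
    print(k, np.sum(wq * t**k * qG))               # all residuals close to zero
```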
Theorem 2. If w > 0 on (a, b), if the multiplicities n_j of the Gauss knots are odd and if the multiplicities m_j of the fixed knots v_j ∈ (a, b) are even then there exist real distinct Gauss knots x_j ∈ (a, b), j = 1, 2, ..., n.
For general (odd) multiplicities n_j of the Gauss knots (1) is a system of nonlinear algebraic equations which can be solved by some iterative method (see e.g. [12] for a few special cases without prescribed knots). For simple Gauss knots (n_j = 1) q_G is a polynomial of order n which, by (1), has to be orthogonal (to lower degree polynomials) with respect to the function w̃(t) := σ q_F(t) w(t) (where σ is a suitable constant to achieve w̃ > 0 on (a, b)) in order that its roots are the required Gauss knots. To find the coefficients of q_G from (1) is then a linear algebraic problem; the Gauss knots can be found by a suitable rootfinder. There is a vast literature on the calculation of such simple Gauss quadratures based on this approach (see [11] for numerical results and [3] for an extensive reference list); a common aspect being that the function w, which determines the integral approximated by the quadrature, enters the calculations through the moments μ_j = ∫_a^b p_j w of the function w, where p_0, p_1, ... are some linearly independent polynomials. The resulting numerical procedures may be unstable unless the polynomial basis p_0, p_1, ... is chosen carefully.
Calculation of Gauss Quadratures 149
In this section we will establish some basic well-known relations for ortho-
gonal polynomials in matrix notation which we will then use throughout this
paper. We will deal with real functions of a real variable only.
Let k ≥ 1 be an integer, p_0, p_1, ..., p_{k-1} be linearly independent polynomials of degree less than k and let p_k be a polynomial of degree exactly k. We denote by p := (p_0, p_1, ..., p_{k-1})^T the column vector of these polynomials. Given such a set of polynomials there exists a unique constant matrix J and a unique constant vector u such that

    t p(t) = J p(t) + p_k(t) u.

For orthonormal polynomials this reduces to the three-term recurrence

    β_{j+1} p_{j+1}(t) = (t - α_{j+1}) p_j(t) - β_j p_{j-1}(t),   j = 0, 1, ... .
Note again that the elements of the Jacobi matrix are independent of its order
in the sense stated in Lemma 3 and that for orthonormal polynomials
    D = ∫_a^b p p^T w = I     and     J = ∫_a^b t p(t) p^T(t) w(t) dt.         (4)
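For a concrete instance of (4) and the recurrence above, the Jacobi matrix of the orthonormal Legendre polynomials has α_j = 0 and β_j = j/√(4j²-1); as stated in the summary, its eigenvalues are the simple Gauss knots. A short NumPy sketch (ours, for illustration):

```python
import numpy as np

# Jacobi matrix of the orthonormal Legendre polynomials: alpha_j = 0,
# beta_j = j / sqrt(4 j^2 - 1) in the three-term recurrence above.
k = 7
beta = np.array([j / np.sqrt(4.0 * j * j - 1.0) for j in range(1, k)])
J = np.diag(beta, 1) + np.diag(beta, -1)

# Its eigenvalues are the k simple Gauss knots for w = 1 on (-1, 1).
knots = np.sort(np.linalg.eigvalsh(J))
print(np.allclose(knots, np.polynomial.legendre.leggauss(k)[0]))   # True
```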
Assuming the Jacobi matrix J for the function w is known we wish to find the Jacobi matrix J̃ for the modified function w̃ = σ q_F w. In this section we will consider the two special cases of q_F(t) = (t - v)^s, s = 1, 2. For s = 1 we must assume v ∉ (a, b) and choose σ > 0 if v < a and σ < 0 if v > b to ensure w̃ > 0. For s = 2 we choose σ > 0 and v may be arbitrary.
The following result is due to Galant [1]. We present it using the matrix
notation introduced in §2, with a shorter and more instructive proof.
Theorem 3. In the case s = 1 the matrix σ(J - vI) is positive definite; let L be a lower triangular bidiagonal matrix such that

    σ(J - vI) = L L^T                                                       (5)
152 G.H. Golub and J. Kautsky
(Cholesky decomposition, L nonsingular). Then the Jacobi matrix for the function w̃(t) = σ(t - v) w(t) is

    J̃ = (1/σ) L^T L + v I + γ e_k e_k^T                                     (6)

where

    γ = σ β_k^2 / (e_k^T L e_k)^2.
Proof. As the eigenvalues of J lie in (a, b) the positive definiteness of σ(J - vI) follows from the choice of σ and v. Note that (by (4))

    σ(J - vI) = σ ∫_a^b (t - v) p(t) p^T(t) w(t) dt

so that, for q(t) := L^{-1} p(t),

    ∫_a^b q q^T w̃ dt = σ L^{-1} [∫_a^b (t - v) p p^T w dt] L^{-T} = L^{-1} L L^T L^{-T} = I,

i.e. the polynomials q are orthonormal with respect to w̃; we have similarly

    J̃ - vI = ∫_a^b (t - v) q(t) q^T(t) w̃(t) dt = σ L^{-1} [∫_a^b (t - v)^2 p(t) p^T(t) w(t) dt] L^{-T}.

Since (t - v) p(t) = (J - vI) p(t) + β_k p_k(t) e_k, the inner integral equals (J - vI)^2 + β_k^2 e_k e_k^T, so that

    J̃ - vI = σ L^{-1} [(J - vI)^2 + β_k^2 e_k e_k^T] L^{-T} = (1/σ) L^T L + γ e_k e_k^T.
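Theorem 3 maps directly onto a few lines of NumPy. The sketch below (our variable names, not the paper's code; Legendre weight on (-1, 1), one fixed knot v = -2 with σ = 1, so w̃(t) = t + 2) forms J̃ by (5), (6) and checks the resulting Gauss rule:

```python
import numpy as np

k = 6
beta = np.array([j / np.sqrt(4.0 * j * j - 1.0) for j in range(1, k + 1)])
J = np.diag(beta[:k-1], 1) + np.diag(beta[:k-1], -1)  # order-k Jacobi matrix, w = 1
beta_k = beta[k-1]                                    # coefficient beta_k from (6)

v, sigma = -2.0, 1.0               # v < a = -1 and sigma > 0, so (t - v) w > 0
L = np.linalg.cholesky(sigma * (J - v * np.eye(k)))   # (5): sigma (J - vI) = L L^T
gamma = sigma * beta_k**2 / L[-1, -1]**2              # e_k^T L e_k = L[k-1, k-1]
Jt = L.T @ L / sigma + v * np.eye(k)
Jt[-1, -1] += gamma                                   # (6)

# Sanity check: Jt generates the k-point Gauss rule for wtilde = (t + 2).
lam, Q = np.linalg.eigh(Jt)
t, wq = np.polynomial.legendre.leggauss(k + 1)        # reference rule
mu0 = np.sum(wq * (t - v))                            # zeroth moment of wtilde
wts = mu0 * Q[0, :]**2                                # Golub-Welsch weights
err = [abs(np.sum(wts * lam**d) - np.sum(wq * t**d * (t - v)))
       for d in range(2 * k)]
print(max(err))
```

The rule is exact for degrees up to 2k - 1, so all residuals are at roundoff level.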
Let us now consider two such similarity transformations, with two generally different shifts v_1 and v_2. Starting with the Jacobi matrix J they may be described as follows:

    σ_1 (J - v_1 I) = L_1 L_1^T,      J_1 = (1/σ_1) L_1^T L_1 + v_1 I,
    σ_2 (J_1 - v_2 I) = L_2 L_2^T,    J_2 = (1/σ_2) L_2^T L_2 + v_2 I.       (7)

    σ (J - vI) = Q R,      J_2 = (1/σ) R Q + v I.                            (9)
Corollary 1. In the case s = 2 the Jacobi matrix J̃ for the function w̃(t) = σ(t - v)^2 w(t) coincides up to the last two rows and columns with the matrix J_2 in (9) for any v and σ > 0.
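Corollary 1 can be exercised the same way; the sketch below (again our names, with NumPy's dense QR standing in for a band-preserving implementation) performs (9) for a double knot at v = 0.3 and tests the truncated matrix:

```python
import numpy as np

k = 7
beta = np.array([j / np.sqrt(4.0 * j * j - 1.0) for j in range(1, k)])
J = np.diag(beta, 1) + np.diag(beta, -1)        # order-k Jacobi matrix for w = 1

v, sigma = 0.3, 1.0                             # double knot; v may lie inside (a, b)
Q, R = np.linalg.qr(sigma * (J - v * np.eye(k)))
J2 = R @ Q / sigma + v * np.eye(k)              # (9)
J2 = (J2 + J2.T) / 2                            # remove roundoff asymmetry
Jt = J2[:k-2, :k-2]                             # keep all but last two rows/columns

# Sanity check: Jt generates the Gauss rule for wtilde = (t - 0.3)^2.
lam, V = np.linalg.eigh(Jt)
t, wq = np.polynomial.legendre.leggauss(k)      # reference rule, exact to degree 2k-1
mu0 = np.sum(wq * (t - v)**2)                   # zeroth moment of wtilde
wts = mu0 * V[0, :]**2
err = [abs(np.sum(wts * lam**d) - np.sum(wq * t**d * (t - v)**2))
       for d in range(2 * (k - 2))]
print(max(err))
```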
It is now fairly obvious how to obtain the modified matrix J̃ for general q_F. Starting with the Jacobi matrix J for w we perform [m_j/2] QR-type transformations (9) for each knot v_j of multiplicity m_j and one LR-type transformation (5), (6) (no correction γ) for each knot v_j of odd multiplicity m_j, discarding the appropriate number of last rows and columns. The details of the algorithm are given in the next section. We note that by (8) the shifts are in fact the differences between the knots, i.e. only m nontrivial shifts are needed. Finally, it is not surprising that for multiple knots we can replace each two symmetric LR steps by one QR step; see, for example, Wilkinson [15], Chap. 8.
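This outer loop can be sketched as follows, assuming NumPy's QR and Cholesky in place of the paper's carefully ordered band transformations; the function names and the example q_F(t) = (t - 0.3)^2 (t + 2) are ours:

```python
import numpy as np

def legendre_jacobi(order):
    # Jacobi matrix for w = 1 on (-1, 1): alpha_j = 0, beta_j = j/sqrt(4j^2 - 1).
    b = np.array([j / np.sqrt(4.0 * j * j - 1.0) for j in range(1, order)])
    return np.diag(b, 1) + np.diag(b, -1)

def qr_step(J, v, sigma=1.0):
    # QR-type transformation (9) for a double knot at v; discard last 2 rows/cols.
    n = J.shape[0]
    Q, R = np.linalg.qr(sigma * (J - v * np.eye(n)))
    J2 = R @ Q / sigma + v * np.eye(n)
    return ((J2 + J2.T) / 2)[:n-2, :n-2]

def lr_step(J, v, sigma=1.0):
    # LR-type transformation (5), (6) without the correction gamma;
    # discard the last row/column.
    n = J.shape[0]
    L = np.linalg.cholesky(sigma * (J - v * np.eye(n)))
    return (L.T @ L / sigma + v * np.eye(n))[:n-1, :n-1]

# q_F(t) = (t - 0.3)^2 (t + 2): one QR step for the double knot at 0.3 and
# one LR step for the single knot at -2 (outside (-1, 1), so sigma = 1 works).
k = 5                                 # required order of the resulting matrix
Jt = lr_step(qr_step(legendre_jacobi(k + 3), 0.3), -2.0)

# Sanity check: the k-point Gauss rule from Jt integrates t^d q_F(t) over
# (-1, 1) exactly for d <= 2k - 1.
lam, V = np.linalg.eigh(Jt)
tref, wref = np.polynomial.legendre.leggauss(k + 3)
qF = (tref - 0.3)**2 * (tref + 2.0)
wts = np.sum(wref * qF) * V[0, :]**2          # mu0 = int q_F over (-1, 1)
err = [abs(np.sum(wts * lam**d) - np.sum(wref * tref**d * qF))
       for d in range(2 * k)]
print(max(err))
```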
Input
k        required order of the resulting matrix J̃,
m        number of fixed knots,
v_j, m_j, j = 1, ..., m    fixed knots and their multiplicities,
Step 1. Set k_c := N, set J_c := J - v_1 I (k_c is the order of the current modified matrix J_c which may overwrite the storage for J).
Step 3. exit.
Remark. The interval of integration (a, b) is not really required in this algo-
rithm; all that is needed is to know whether the prescribed knots of odd
multiplicities are to the left or to the right of this interval so that σ in Step 2.5
can be determined. Such Boolean information can be encoded, for example, in
the sign of the multiplicity of the knot in question and the input parameters
a, b may be discarded.
x = (x_1, ..., x_n)^T, Newton's method for F(x) = 0 defines the iterations as x^{(i+1)} = x^{(i)} + δ where δ solves

    F(x^{(i)}) + H(x^{(i)}) δ = 0.

    H_{ki} := ∂F_k/∂x_i = -n_i ∫_a^b t^k (t - x_i)^{-1} ∏_{j=1}^n (t - x_j)^{n_j} q_F(t) w(t) dt
which, using t - x_i, we may rewrite as
appropriate averages for the starting values. We should note here that the prescribed multiplicities of the Gauss knots, if not all equal, have to be interpreted in some ordered way, e.g. implying x_1 < x_2 < ... < x_n for the required knots. In this case if λ_1 < λ_2 < ... < λ_{N_1} are the eigenvalues of J̃ we define x_1^{(0)} = (ρ_1 λ_1 + ρ_2 λ_2 + ... + ρ_{n_1} λ_{n_1})/(ρ_1 + ... + ρ_{n_1}), x_2^{(0)} = (ρ_{n_1+1} λ_{n_1+1} + ... + ρ_{n_1+n_2} λ_{n_1+n_2})/(ρ_{n_1+1} + ... + ρ_{n_1+n_2}), etc. Our numerical tests (see §8) indicate that taking ρ_j := w(λ_j) gives better starting values than the ordinary average ρ_j = 1. However, this would be the only explicit use of the weight function w in our algorithms.
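The starting values and the Newton iteration described above can be sketched in NumPy for the Chebyshev weight with two triple free knots and no fixed knots; the quadrature-based evaluation of the integrals and all names are ours, not the paper's:

```python
import numpy as np

# Two free knots of multiplicity 3, no fixed knots, Chebyshev weight on (-1, 1).
n = 2
mult = np.array([3, 3])
N1 = mult.sum()

# Jacobi matrix for the Chebyshev weight: beta_1 = 1/sqrt(2), beta_j = 1/2 after.
beta = np.array([1 / np.sqrt(2)] + [0.5] * (N1 - 2))
lam = np.sort(np.linalg.eigvalsh(np.diag(beta, 1) + np.diag(beta, -1)))

# Starting values: weighted averages of eigenvalue groups, rho_j = w(lam_j).
rho = 1.0 / np.sqrt(1.0 - lam**2)
x = np.array([np.sum(rho[i:i+3] * lam[i:i+3]) / np.sum(rho[i:i+3])
              for i in (0, 3)])

# Newton's method on F_k(x) = int t^k q_G(t) w(t) dt = 0, k = 0, ..., n-1,
# with H_ki = -n_i int t^k q_G(t) (t - x_i)^{-1} w(t) dt; all integrals are
# evaluated with a Gauss-Chebyshev rule of ample order.
t, wq = np.polynomial.chebyshev.chebgauss(40)
for _ in range(20):
    qG = np.prod((t[:, None] - x[None, :]) ** mult[None, :], axis=1)
    F = np.array([np.sum(wq * t**k * qG) for k in range(n)])
    H = np.array([[-mult[i] * np.sum(wq * t**k * qG / (t - x[i]))
                   for i in range(n)] for k in range(n)])
    delta = np.linalg.solve(H, -F)
    x = x + delta
    if np.max(np.abs(delta)) < 1e-13:
        break

print(x)   # converges to the symmetric pair -1/sqrt(2), +1/sqrt(2)
```

For this configuration the exact knots can be verified by hand from the Chebyshev moments, which makes the sketch a convenient self-test.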
We may now summarise the steps in evaluating the Gauss knots.
Algorithm 2
Input
m        number of fixed knots,
v_j, m_j, j = 1, ..., m    fixed knots and their multiplicities,
a, b     end points of the interval of integration,
n        number of Gauss knots,
n_j, j = 1, ..., n    multiplicities of the Gauss knots,
J        Jacobi matrix of order N := N_1 + N_2 (N_1 := ∑_{j=1}^n n_j, N_2 := ∑_{j=1}^m m_j) for w,
ε_1      tolerance for merging knots,
ε_2      tolerance for convergence of multiple Gauss knots,
w        the weight function.
Step 1. Apply Algorithm 1 to J and the fixed knots to obtain J̃, the Jacobi matrix of order N_1 for the weight function w̃ = σ q_F w, q_F(t) := ∏_{j=1}^m (t - v_j)^{m_j}.
Step 2. Calculate the eigenvalues λ_1, ..., λ_{N_1} of J̃.
Step 3. If N_1 = n (all Gauss knots simple) set x_j := λ_j, j = 1, ..., n and go to Step 5.
Step 4. (Multiple Gauss knots; the eigenvalues λ_j are assumed ordered):
Step 4.1. (Initial approximation): Set i := 0.
Step 4.2. For j = 1, 2, ..., n do:
Step 4.2.1. Set

    y_j := ∑_{k=1}^{n_j} w(λ_{i+k}) λ_{i+k} / ∑_{k=1}^{n_j} w(λ_{i+k}),
Step 5. (Merging knots): For every j and k such that |x_j - v_k| < ε_1, do:
Step 5.1. Set x_j := (n_j x_j + m_k v_k)/(n_j + m_k) and n_j := n_j + m_k.
Step 5.2. Remove v_k, m_k from the list of fixed knots.
Output
x_j, n_j, j = 1, 2, ..., n    Gauss knots and their multiplicities,
m        number of fixed knots,
v_j, m_j, j = 1, 2, ..., m    fixed knots and their multiplicities.
Once all the knots of the quadrature are known the weights are uniquely determined by the requirement that the quadrature is interpolatory (i.e. of polynomial order at least N, N the number of weights). The algorithms described in [7] may then be used as they exploit the same information as Algorithm 2, i.e. the Jacobi matrix J for the weight function w rather than its moments. In fact, as we show in this section, some of the weights can be more efficiently calculated by a suitable modification of the approach used in [4] and [5]. The other weights can then be calculated by the algorithms of [7] which allow for the independent evaluation of the weights corresponding to selected knots.
The advantage of the approach used in [4], [5] is that we can, with rather minimal computational effort, evaluate selected elements of the normalized eigenvectors simultaneously with the eigenvalues of a Jacobi matrix (this is
achieved by a slight change in the procedure IMTQL2 of [9] which evaluates
all elements of these eigenvectors). We now have the following result.
Theorem 4. Let q_k ∈ π_k, q_R be polynomials such that the knots of the quadrature are the roots of their product. Let q := (q_0, q_1, ..., q_{k-1})^T be linearly independent polynomials of order k such that, for some vector u and some symmetric matrix K,

    t q(t) = K q(t) + q_k(t) u,   ∀t.                                       (10)
Let v be a root of q_k which is a simple knot of the quadrature. Then the weight C corresponding to this knot is

    C = γ / (q_R(v) q^T(v) q(v))                                            (11)

where
with fixed knots) have to be determined by the other means mentioned earlier.
The possibility of evaluating the weights of simple prescribed knots exists but is rather limited, as we will point out in the next section.
For the evaluation of weights it would be thus advantageous to keep the
simple Gauss knots in a separate list. An abbreviated version of Algorithm 2
for the calculation of Gauss quadratures with simple free knots only (no
Step 4) should therefore merge (Step 5) Gauss knots to fixed knots to produce
such a list automatically.
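The device of [4], [5] mentioned above, namely that only the first components of the normalized eigenvectors are needed, is easy to illustrate for a quadrature whose weights are known in closed form; here the k-point Gauss-Chebyshev rule, whose weights all equal π/k (an illustration of the weight formula, not of the modified eigensolver itself):

```python
import numpy as np

# k-point Gauss rule for the Chebyshev weight w(t) = (1 - t^2)^(-1/2) on (-1, 1).
k = 5
beta = np.array([1 / np.sqrt(2)] + [0.5] * (k - 2))
J = np.diag(beta, 1) + np.diag(beta, -1)       # Jacobi matrix (orthonormal basis)

lam, Q = np.linalg.eigh(J)
mu0 = np.pi                     # zeroth moment: int w = pi
wts = mu0 * Q[0, :]**2          # weights need only the FIRST eigenvector components
print(wts)                      # all k weights equal pi/k for this weight function
```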
so the process may be applied for any v which is not a root of p_{N-1}. Of course, there is nothing to be gained when v is a Gauss knot and the quadrature is without fixed knots (p_N(v) = e_N^T u = 0).
However, the situation is more interesting when we prescribe two knots, say v_1, v_2. By Theorem 5 the matrix K will differ from J in the last two elements in the last row; thus K is tridiagonal but not necessarily symmetric. If we wish to employ Theorem 4 so that the weights can be evaluated from the eigenvectors we must symmetrize K by a similarity transformation which must be by a diagonal matrix only to preserve the orthogonality of the polynomial basis. A simple calculation shows that the corrected last two coefficients α*_N, β*_{N-1} of the symmetrized matrix K satisfy
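The diagonal-similarity symmetrization can be sketched for a generic tridiagonal K with b_i c_i > 0; the entries below are hypothetical, chosen only to illustrate that D^{-1} K D has off-diagonals √(b_i c_i) and the same eigenvalues:

```python
import numpy as np

# Hypothetical tridiagonal K: diagonal a, superdiagonal b, subdiagonal c.
a = np.array([0.1, -0.2, 0.3, 0.0])
b = np.array([0.5, 0.4, 0.6])
c = np.array([0.2, 0.9, 0.4])                 # b_i * c_i > 0 is required

K = np.diag(a) + np.diag(b, 1) + np.diag(c, -1)
# Diagonal similarity D^{-1} K D with (d_{i+1}/d_i)^2 = c_i / b_i makes both
# off-diagonals equal to sqrt(b_i c_i); eigenvalues are preserved.
Ks = np.diag(a) + np.diag(np.sqrt(b * c), 1) + np.diag(np.sqrt(b * c), -1)

ev = np.sort(np.linalg.eigvals(K).real)       # eigenvalues of K are real here
print(np.allclose(ev, np.sort(np.linalg.eigvalsh(Ks))))   # True
```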
8. Numerical Experiments
Table 2. Relative errors (x_j^{(ν)} - x_j)/x_j of the iterations in evaluating multiple Gauss knots

Chebyshev weight function

ν   Error          ν   Error
0   0.13           0   -0.26
1   0.55 (-1)      1   -0.13
2   0.20 (-1)      2   -0.26 (-1)
3   0.11 (-2)      3   -0.66 (-3)
4   0.77 (-6)      4   -0.25 (-5)
5   0.75 (-12)     5   0.43 (-12)
References