Statistics & Decisions 3, 91-113 (1985)
© R. Oldenbourg Verlag, München 1985

ESTIMATION FROM A LENGTH-BIASED DISTRIBUTION

Lajos Horváth

Received: Revised Version: February 10, 1984

Abstract. We prove the weak convergence and the strong approximation of the process $\alpha_n(t) = n^{1/2}(F_n(t) - F(t))$, where $F_n$ is the nonparametric maximum likelihood estimate of a lifetime distribution $F$ on the basis of a sample of size $n$ from the length-biased distribution of $F$.

AMS 1980 subject classification: 62 G 05, 62 C 12.

Key words and phrases: Length-biased distribution, weak convergence, strong approximation.

1. Introduction

A random variable $Y$ is said to be a length-biased distributed random variable if it is nonnegative and its distribution function has the form

$G(t) = \mu^{-1}\int_0^t x\,dF(x), \qquad 0 \le t < \infty,$



where $F$ is a distribution function defined on the positive half line and $\mu = \int_0^\infty x\,dF(x)$.

$G = G_F$ is usually called the length-biased distribution of $F$, and it arises naturally in many fields. Interesting applications can be found in Cox (1969), Patil and Rao (1977, 1978), Coleman (1979) and Vardi (1982b).

Cox (1969) and Vardi (1982a) considered the problem of finding a nonparametric maximum likelihood estimate (NPMLE) of $F$ on the basis of a sample $\{Y_i,\ 1 \le i \le n\}$ from $G$. Vardi (1982a) proposed a simple way of finding a NPMLE of $F$, which we briefly treat here.
Let $y_1 < \dots < y_h$ denote the ordered values of $\{Y_i,\ 1 \le i \le n\}$ ($h \le n$ because of possible ties) and let $n_j$ be the multiplicity of the $Y$'s at $y_j$. Vardi (1982a) showed that it is enough to maximize

(1.1)   $\log L(p_1,\dots,p_h) = \sum_{j=1}^{h} n_j \log(y_j p_j) - n \log\Bigl(\sum_{i=1}^{h} y_i p_i\Bigr)$

subject to

$\sum_{j=1}^{h} p_j = 1, \qquad p_j > 0, \quad 1 \le j \le h,$

and proved that the unique solution of (1.1) is $(p_1^*,\dots,p_h^*)$, where

$p_j^* = \frac{n_j y_j^{-1}}{\sum_{k=1}^{h} n_k y_k^{-1}}, \qquad 1 \le j \le h,$


and he proposed the estimate $F_n$ of $F$ to be

$F_n(t) = \sum_{y_j \le t} p_j^*, \qquad 0 \le t < \infty.$

Let $I(A)$ denote the indicator of the event $A$. Introducing the empirical distribution function of $\{Y_i,\ 1 \le i \le n\}$,

$G_n(t) = \frac{1}{n}\sum_{i=1}^{n} I(Y_i \le t), \qquad 0 \le t < \infty,$

$F_n$ can be written in the form

$F_n(t) = v_n^{-1}\int_0^t y^{-1}\,dG_n(y),$

where $v_n = \int y^{-1}\,dG_n(y)$ (throughout this paper $\int = \int_0^\infty$).

The main aim of this paper is to prove the weak convergence and the strong approximation of

$\alpha_n(t) = n^{1/2}(F_n(t) - F(t)).$
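Remark. The estimator above is easy to compute. The following short Python sketch, included purely as an illustration (the function names and the artificial sample are our own choices, not part of the original development), evaluates the weights $p_j^*$ and the estimate $F_n$ from a length-biased sample:

```python
import numpy as np

def vardi_npmle(y):
    """Weights p_j* of the NPMLE of F from a length-biased sample y.

    p_j* is proportional to n_j / y_j, where y_j are the distinct ordered
    values and n_j their multiplicities, normalized to sum to one.
    """
    y = np.asarray(y, dtype=float)
    vals, counts = np.unique(y, return_counts=True)  # y_j and n_j
    w = counts / vals                                # n_j / y_j
    return vals, w / w.sum()                         # y_j and p_j*

def F_n(t, vals, p_star):
    """F_n(t) = sum of the weights p_j* over the support points y_j <= t."""
    return p_star[vals <= t].sum()

# toy illustration with an artificial sample standing in for {Y_i}
rng = np.random.default_rng(0)
y = rng.gamma(shape=4.0, scale=1.0, size=200)
vals, p_star = vardi_npmle(y)
print(F_n(2.0, vals, p_star))
```

This is exactly $F_n(t) = v_n^{-1}\int_0^t y^{-1}\,dG_n(y)$ written out for the empirical distribution $G_n$.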

Vardi (1982a) studied the weak convergence of $(m+n)^{1/2}(F_{m,n}(t) - F(t))$, where $F_{m,n}$ is based on two independent samples, one a sample of size $n$ from $G$ and the other a sample of size $m$ from $F$. His result cannot be used in our case because he assumed that $\lim_{n,m\to\infty} m(m+n)^{-1} > 0$. Sen (1984) proved the weak convergence of $\alpha_n$ to a Gaussian process assuming that $EY^{-2} < \infty$. Actually, under the more stringent regularity condition that $EY^{-2-\delta} < \infty$ for some $\delta > 0$, he obtained the Bahadur representation of sample quantiles in length-biased sampling.

2. The strong uniform consistency of $F_n$

Throughout this paper we assume that $G$ is continuous on $[0,\infty)$. An elementary calculation shows that $F$ is determined uniquely by $G$, namely

$F(t) = v^{-1}\int_0^t y^{-1}\,dG(y),$

where $v = EY^{-1} = \int y^{-1}\,dG(y)$, from which it follows that $F$ is also continuous.

THEOREM 2.1. If $v < \infty$, then $\sup_{0\le t<\infty}|F_n(t) - F(t)| \to 0$ a.s., $n\to\infty$.

Proof. Let $0 < a < \infty$. Integrating by parts we obtain that

v| + v"^ / y ' ^ d G (y) + v"^ / y"^dG(y) 0 " 0 supl a^t«» G(t)-G(t)l.L. " The Kolmogorov law of large numbers implies that (2.1). < a } . and (2.y"^d(G (y) .s.s. but on the other hand lim a-»0 a 1 / y ' dG(y) = 0 0 so we proved Theorem 2.1./ y ' ^ dG(y) 1=1 ^ ^ 0 = EY'^HYO} a.2) and from the Glivenko-Cantelli theorem that lim sup n ->• ® 1 supl F (t) .s. Brought to you by | New York University Bobst Library Technical Services Authenticated Download Date | 1/13/15 6:46 PM .y-X(y) 0 " = i " S YT^ I { Y . We get from (2.v|+ v"^ supl .F(t)| ^ 2v"^ 0^t<" 3 / y 0 1 dG(y) a.G(y)) . (2. Horväth 95 sup |F (t) .2) .F{t)|^v-^|v ä " + (av)-^ .1) v^ = i V = EY-1 a.

3. Weak and strong approximation of the empirical process

Approximations of $\alpha_n$ will be derived from the well-known approximations of the empirical process

$\beta_n(t) = n^{1/2}(G_n(t) - G(t)), \qquad 0 \le t < \infty.$

Whenever we write $R_n = O(r_n)$ for a sequence of random variables $R_n$ and positive constants $r_n$ we mean that $\limsup_{n\to\infty}|R_n|/r_n \le C$ a.s. with a non-random positive constant $C$. Without loss of generality we can assume that our probability space $(\Omega, \mathcal A, P)$ is so rich that the approximation

(3.1)   $\sup_{0\le t<\infty}|\beta_n(t) - B_n(t)| = O(n^{-1/2}\log^2 n)$

of Komlós, Major and Tusnády (1975) holds. Here $B_n$ is a two-parameter Gaussian process with zero mean and covariance

$EB_n(x)B_m(y) = (mn)^{-1/2}(m\wedge n)(G(x\wedge y) - G(x)G(y))$

$(a\wedge b = \min(a,b))$.
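Remark. For a fixed $n$ the covariance above reduces to $G(x\wedge y) - G(x)G(y)$, so a realization of $B_n$ can be generated as a standard Brownian bridge run in the time scale $G$. The following sketch does this on a grid; the particular $G$ (a Gamma(4,1) distribution function) is an arbitrary illustrative choice:

```python
import numpy as np
from scipy import stats

def simulate_Bn(t_grid, G, rng):
    """Simulate a Gaussian process with covariance G(min(x, y)) - G(x)G(y) on t_grid,
    i.e. a standard Brownian bridge composed with G."""
    u = G(t_grid)                                   # nondecreasing time points in [0, 1]
    du = np.diff(u, prepend=0.0)
    W = np.cumsum(rng.normal(0.0, np.sqrt(du)))     # Brownian motion at the points u
    W1 = W[-1] + rng.normal(0.0, np.sqrt(max(1.0 - u[-1], 0.0)))  # W(1)
    return W - u * W1                               # bridge: B(u) = W(u) - u W(1)

rng = np.random.default_rng(2)
t_grid = np.linspace(0.01, 15.0, 500)
B = simulate_Bn(t_grid, lambda t: stats.gamma.cdf(t, a=4.0), rng)
print(B[:5])
```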

We need a weighted version of (3.1) in the proof of the weak convergence of $\alpha_n$. Introduce

(3.2)   $e_n(t) = (1 - G(t))^{-1}\beta_n(t), \quad e_n^*(t) = (1 - G(t))^{-1}B_n(t), \qquad 0 \le t < \infty,$

and

$f_n(t) = (G(t))^{-1}B_n(t), \quad f_n^*(t) = (G(t))^{-1}\beta_n(t), \qquad 0 < t < \infty.$

The martingale properties of these processes will be very useful. The processes $e_n$, $e_n^*$ are used when we deal with the processes near zero, but when we investigate the behaviour of $\alpha_n$ near infinity we have to replace $e_n$, $e_n^*$ by $f_n^*$, $f_n$. For each $s$, $0 \le s < \infty$, let $\mathcal H_s^{(n)}$ denote the $\sigma$-field generated by the random variables $\{\beta_n(u)\colon 0 \le u \le s\}$ and let $\mathcal K_s^{(n)}$ denote the $\sigma$-field generated by the random variables $\{B_n(u)\colon 0 \le u \le s\}$. Observe that for each $s \le t$ and $n = 1, 2, \dots$ both

$E(e_n(t)\mid\mathcal H_s^{(n)}) = e_n(s)$  and  $E(e_n^*(t)\mid\mathcal K_s^{(n)}) = e_n^*(s)$

hold; it is well known and easy to show that both $\{e_n(t),\ 0 \le t < \infty\}$ and $\{e_n^*(t),\ 0 \le t < \infty\}$ are separable square integrable martingales for each $n$.

It is likewise well known that $\{f_n^*(t),\ 0 < t < \infty\}$ and $\{f_n(t),\ 0 < t < \infty\}$ are separable square integrable reverse martingales for each $n$, with respect to the $\sigma$-fields generated by the random variables $\{\beta_n(u)\colon s \le u < \infty\}$ and $\{B_n(u)\colon s \le u < \infty\}$, respectively.

In the following section we prove not only the weak convergence of $\alpha_n$ to a Gaussian process but we will get the convergence of $\alpha_n$ in a weighted metric. We assume that the weight function $\ell(t)$, $0 \le t < \infty$, satisfies the following conditions:

(3.3) (i) $\ell$ is a nonnegative function on $(t_G, T_G)$, where $(t_G, T_G)$ is the support of $G$;

(ii) there exist two possibly degenerate intervals $(t_G, a)$ and $(b, T_G)$, $t_G \le a < b \le T_G$, such that $\ell(t)$ is nonincreasing on $(t_G, a)$, $t^{-1}\ell(t)$ is nondecreasing on $(b, T_G)$, and $\sup_{a\le t\le b} t^{-1}\ell(t) < \infty$;

(iii) $\int_0^\infty t^{-2}(\ell(t))^2\,dG(t) < \infty$.

If $\ell(t) = 1$ then the conditions (3.3)(i), (ii) are satisfied and (iii) is equivalent to $EY^{-2} < \infty$. If $EY^{-(2+2q+r)} < \infty$ with some $r > 0$ and $q > 0$, then we can choose a weight function with $\ell(t) = t^{-q}$ for $0 < t < a$ and $\ell(t)$ constant on $a \le t \le b$, where $a$ and $b$ are arbitrary constants.

THEOREM 3.1. If the conditions (3.3)(i)-(iii) are satisfied, then

$\Delta_n^{(1)} = \sup_{0\le t<\infty}\ell(t)t^{-1}|\beta_n(t) - B_n(t)| \stackrel{P}{\longrightarrow} 0, \qquad n\to\infty.$

Proof. Let $t_G \le a < b \le T_G$ be as in (3.3)(ii). Dividing the positive half line into three parts with the points $a$ and $b$ we get

$\Delta_n^{(1)} \le \sup_{a\le t\le b} t^{-1}\ell(t)|\beta_n(t) - B_n(t)| + \sup_{t_G<t<a} t^{-1}\ell(t)|\beta_n(t)| + \sup_{t_G<t<a} t^{-1}\ell(t)|B_n(t)| + \sup_{b<t<T_G} t^{-1}\ell(t)|\beta_n(t)| + \sup_{b<t<T_G} t^{-1}\ell(t)|B_n(t)|.$

The first term goes to zero almost surely by (3.1). Using Theorem 5.1 of Birnbaum and Marshall (1961) we obtain that

$P\Bigl\{\sup_{t_G<t\le a} t^{-1}\ell(t)|\beta_n(t)| > \lambda\Bigr\} = P\Bigl\{\sup_{t_G<t\le a} t^{-1}\ell(t)(1 - G(t))|e_n(t)| > \lambda\Bigr\} \le \lambda^{-2}\int_0^a t^{-2}\ell^2(t)(1 - G(t))^2\,dE(e_n(t))^2 \le \lambda^{-2}\int_0^a t^{-2}\ell^2(t)\,dG(t)$

and

$P\Bigl\{\sup_{b<t<T_G} t^{-1}\ell(t)|\beta_n(t)| > \lambda\Bigr\} = P\Bigl\{\sup_{b<t<T_G} t^{-1}\ell(t)G(t)|f_n^*(t)| > \lambda\Bigr\} \le \lambda^{-2}\int_b^{T_G} t^{-2}(\ell(t))^2\,dG(t).$

So we proved that for each $\lambda > 0$ and $\varepsilon > 0$ there exist $a = a(\varepsilon,\lambda)$ and $b = b(\varepsilon,\lambda)$ such that for each $n$

$P\Bigl\{\sup_{t_G<t\le a} t^{-1}\ell(t)|\beta_n(t)| > \lambda\Bigr\} \le \varepsilon$  and  $P\Bigl\{\sup_{b\le t<T_G} t^{-1}\ell(t)|\beta_n(t)| > \lambda\Bigr\} \le \varepsilon.$

A similar argument shows that

$P\Bigl\{\sup_{t_G<t\le a} t^{-1}\ell(t)|B_n(t)| > \lambda\Bigr\} \le \varepsilon$  and  $P\Bigl\{\sup_{b\le t<T_G} t^{-1}\ell(t)|B_n(t)| > \lambda\Bigr\} \le \varepsilon,$

and therefore we proved this theorem.

When approximating $\alpha_n$ strongly, conditions of the form

$v(r) = \int_0^\infty y^{-1-r/2}(G(y)(1 - G(y)))^{1/2}\,dy < \infty$

will play an important role. This is slightly stronger than $EY^{-r} < \infty$, but $EY^{-r}[\log Y]^{1+\delta} < \infty$ with any $\delta > 0$ implies $v(r) < \infty$ (see the Appendix of Hoeffding (1973)). The integrals

$\int_0^t y^{-2}B_n(y)\,dy, \qquad 0 \le t < \infty,$

will appear in the approximating Gaussian processes. These integrals exist as Lebesgue integrals if and only if $v(2) < \infty$, because

$E\int_0^\infty y^{-2}|B_n(y)|\,dy = (2/\pi)^{1/2}\int_0^\infty y^{-2}(G(y)(1 - G(y)))^{1/2}\,dy.$

On the other hand, these integrals exist as improper Riemann integrals under the condition $EY^{-2} < \infty$, and, by Theorem 3 of §2 in Chapter 2 of Skorohod (1965), $\int_0^t y^{-2}B_n(y)\,dy$ is continuous, as a function of $t$, for each $n$.

THEOREM 3.2. If $v(r) < \infty$ for some $r > 2$, then

$\Delta_n^{(1)} = \sup_{0\le t<\infty} t^{-1}|\beta_n(t) - B_n(t)| = O(n^{-\lambda})$

for any $0 < \lambda < 1/2 - 1/r$.

Proof. Let $a(n) = n^{-\tau}$ with $0 < \tau < r^{-1}$. Condition $v(r) < \infty$ implies that $EY^{-r} < \infty$ and therefore

(3.4)   $\lim_{t\to 0} t^{-r}G(t) = 0.$

We get from (3.1) that

$\sup_{a(n)\le t<\infty} t^{-1}|\beta_n(t) - B_n(t)| = O(n^{\tau - 1/2}\log^2 n).$

Using the theorem of James (1975) and (3.4) we obtain for any $0 < \delta < 1/2$ that

$\sup_{0\le t\le a(n)} t^{-1}|\beta_n(t)| \le \sup_{0\le t\le a(n)} t^{-1}(G(t))^{1/2-\delta}\ \sup_{0\le t<\infty}(G(t))^{\delta-1/2}|\beta_n(t)| = O(\exp(-\tau(r/2 - r\delta - 1)\log n)(\log\log n)^{1/2}),$

and Corollary 1.15.2 in Csörgő and Révész (1981) implies that

$\sup_{0\le t\le a(n)} t^{-1}|B_n(t)| = O(\exp(-\tau(r/2 - r\delta - 1)\log n)(\log\log n)^{1/2}).$

If we choose $\delta$ near zero and $\tau$ near $r^{-1}$ we get the theorem.

THEOREM 3.3. If the conditions (3.3)(i)-(iii) are satisfied, then for each $t_G < T < T_G$

$\Delta_n^{(2)}(T) = \sup_{0\le t\le T}\ell(t)\Bigl|\int_0^t y^{-2}(\beta_n(y) - B_n(y))\,dy\Bigr| \stackrel{P}{\longrightarrow} 0, \qquad n\to\infty.$

Proof. Let $t_G < a < T$. We obtain by a simple manipulation that

(3.5)   $\Delta_n^{(2)}(T) \le \Delta_n^{(2,1)}(a) + \Delta_n^{(2,2)}(a) + \Delta_n^{(2,3)}(a),$

where

$\Delta_n^{(2,1)}(a) = \sup_{0\le t\le T}\ell(t)\Bigl|\int_0^{t\wedge a} y^{-2}\beta_n(y)\,dy\Bigr|, \qquad \Delta_n^{(2,2)}(a) = \sup_{0\le t\le T}\ell(t)\Bigl|\int_0^{t\wedge a} y^{-2}B_n(y)\,dy\Bigr|,$

$\Delta_n^{(2,3)}(a) = \sup_{a\le t\le T}\ell(t)\Bigl|\int_a^{t} y^{-2}(\beta_n(y) - B_n(y))\,dy\Bigr|.$

Introduce now the process

$e_n^{**}(t) = \int_0^t y^{-2}e_n^*(y)\,dy, \qquad 0 \le t < \infty,$

where $e_n^*$ is defined by (3.2). Notice that by the $c_r$-inequality (Loève (1960), p. 155)

$E(e_n^{**}(t))^2 \le 2\int_0^t\!\!\int_0^t x^{-2}y^{-2}(1 - G(x))^{-1}(1 - G(y))^{-1}(G(x\wedge y) - G(x)G(y))\,dx\,dy.$

(t))2 0 " 0 " ^ 272X"^ .2t"^(l . t'V(t)dG(t).104 L. 0 if G(t) ^ 1/2./ y"^ e*(y)d6(y).*(t) + (1 . (1 . 0 " 0 " We have for each X > 0 by the Birnbaum-Marshal1 inequality that sup |Jl(t)e**(t)| >X/4} tg<t^a " + P{ sup |t"^Jl(t)e*(t)| > X/4} tg<t^a ^ 16X-2 /£2(t)dE(e.G{t))"^G(t) t ? ^ 16 / y"^dG(y). Brought to you by | New York University Bobst Library Technical Services Authenticated Download Date | 1/13/15 6:46 PM .G(t))e.G(y)) 0 9 * d (/u"^ e*(u)du) 0 " = (1 ..(y)(l ..*(t))2 + 16X"2 J t-V(t)dE(e. Horväth . We can assume that G(a) ^ 1/2. Integrating by parts we get /y'^0n(y)dy = / y'^ e.6(y))dy t y = .G(t))t"^ B*{t) + / e n y ) d G ( y ) .

If we replace $e_n^*$ by $e_n$ in the previous inequalities we get

$P\{\Delta_n^{(2,1)}(a) > \lambda\} \le 272\lambda^{-2}\int_0^a t^{-2}\ell^2(t)\,dG(t)$

for each $\lambda > 0$. Since $\Delta_n^{(2,3)}(a) \to 0$ a.s. by (3.1), and the integral on the right-hand side can be made arbitrarily small by (3.3)(iii), the proof of Theorem 3.3 is complete.

Next we prove a strong version of Theorem 3.3.

THEOREM 3.4. If $v(r) < \infty$ for some $r > 2$, then

$\Delta_n^{(2)} = \sup_{0\le t<\infty}\Bigl|\int_0^t y^{-2}(\beta_n(y) - B_n(y))\,dy\Bigr| = O(n^{-\rho})$

for any $0 < \rho < 1/2 - 1/r$.

Proof. Let $\ell(t) = 1$ and $a = a(n) = n^{-\tau}$ with $0 < \tau < r^{-1}$ in (3.5). Then we obtain by (3.1) that $\Delta_n^{(2,3)}(a(n)) = O(n^{\tau - 1/2}\log^2 n)$. Let $\delta > 0$. Using again the theorem of James (1975) and (3.4) we get

$\Delta_n^{(2,1)}(a(n)) = O(\exp((1/2 - 1/r - \delta)\log G(a(n)))(\log\log n)^{1/2}) = O(\exp(-\tau r(1/2 - 1/r - \delta)\log n)(\log\log n)^{1/2}).$

If we substitute James' theorem for Corollary 1.15.2 in Csörgő and Révész we obtain in a similar way as before that

$\Delta_n^{(2,2)}(a(n)) = O(\exp(-\tau r(1/2 - 1/r - 2\delta)\log n)).$

The theorem is proved because $\delta > 0$ is arbitrarily small and $\tau < 1$ can be as close to $r^{-1}$ as we wish.

4. Weak and strong approximation of $\alpha_n$

Integrating by parts we get the following almost sure representation of $\alpha_n$:

(4.1)   $\alpha_n(t) = v^{-1}\Bigl\{t^{-1}\beta_n(t) + \int_0^t y^{-2}\beta_n(y)\,dy\Bigr\} - v^{-1}F(t)\int_0^\infty y^{-2}\beta_n(y)\,dy$
$\qquad\qquad + n^{-1/2}v^{-1}v_n^{-1}\Bigl\{F(t)\Bigl(\int_0^\infty y^{-2}\beta_n(y)\,dy\Bigr)^2 - \Bigl(t^{-1}\beta_n(t) + \int_0^t y^{-2}\beta_n(y)\,dy\Bigr)\int_0^\infty y^{-2}\beta_n(y)\,dy\Bigr\}.$

This form of $\alpha_n$ suggests that the approximating processes will be

$\Gamma_n(t) = v^{-1}t^{-1}B_n(t) + v^{-1}\int_0^t y^{-2}B_n(y)\,dy - v^{-1}F(t)\int_0^\infty y^{-2}B_n(y)\,dy = v^{-1}\int_0^t y^{-1}\,dB_n(y) - v^{-1}F(t)\int_0^\infty y^{-1}\,dB_n(y).$

It is easy to check that $\{\Gamma_n(t),\ 0 \le t < \infty\}$ are Gaussian processes with expectation zero and covariance

(4.2)   $E\Gamma_n(x)\Gamma_n(y) = \sigma(x\wedge y) - F(x)\sigma(y) - F(y)\sigma(x) + F(x)F(y)\sigma,$

where

$\sigma(t) = v^{-2}\int_0^t y^{-2}\,dG(y)$  and  $\sigma = \lim_{t\to\infty}\sigma(t) = v^{-2}\int_0^\infty y^{-2}\,dG(y).$

Let $\{W(u,v),\ u \ge 0,\ v \ge 0\}$ denote a two-parameter Wiener process. By (4.2) the following representation holds:

$\{n^{1/2}\Gamma_n(t),\ 0 \le t < \infty,\ n \ge 1\} \stackrel{D}{=} \{W(\sigma(t), n) - F(t)W(\sigma, n),\ 0 \le t < \infty,\ n \ge 1\}.$
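Remark. The representation above also gives a direct way to simulate the limiting process for a single $n$: generate a standard Wiener process $W$ on the time scale $\sigma(t)$ and form $W(\sigma(t)) - F(t)W(\sigma)$. The sketch below does this with the purely illustrative choice $F =$ Gamma(3,1) (hence $G =$ Gamma(4,1)), computing $v$, $F(t)$ and $\sigma(t)$ by numerical integration:

```python
import numpy as np
from scipy import stats
from scipy.integrate import cumulative_trapezoid

# illustrative choice: F = Gamma(3,1), hence the length-biased G = Gamma(4,1)
y = np.linspace(1e-4, 60.0, 200_000)
g = stats.gamma.pdf(y, a=4.0)                          # density of G

I1 = cumulative_trapezoid(g / y, y, initial=0.0)       # running integral of y^{-1} dG
I2 = cumulative_trapezoid(g / y**2, y, initial=0.0)    # running integral of y^{-2} dG
v = I1[-1]                                             # v = E Y^{-1} (about 1/3 here)

t_grid = np.linspace(0.05, 20.0, 400)
F_t = np.interp(t_grid, y, I1) / v                     # F(t) = v^{-1} int_0^t y^{-1} dG
sigma_t = np.interp(t_grid, y, I2) / v**2              # sigma(t) = v^{-2} int_0^t y^{-2} dG
sigma = I2[-1] / v**2

# Gamma(t) = W(sigma(t)) - F(t) W(sigma) for a standard Wiener process W
rng = np.random.default_rng(3)
tau = np.append(sigma_t, sigma)                        # increasing times, ending at sigma
W = np.cumsum(rng.normal(0.0, np.sqrt(np.diff(tau, prepend=0.0))))
Gamma = W[:-1] - F_t * W[-1]
print(Gamma[:5])
```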

= n-l/V^"^ + 0 sup ß (t) ßn(y)dy|-| / y"^ ßn(y)dy|.3)(iii) imply that 1^1/2 ^(3) gi^j 1^1/2 ^(4) i^g^g nondegenerate limit distributions and v^ goes to V in probability. " 0 " a(5)(t) = sup |)l(t)(x^(t) and aJ^^t) = sup U ( t ) r (t)|.3) A^ ^ + v"^ + A(3)(t) + A^'^^T) + where = en(y)dy)^ sup Jl(t)F(t). ^(l)' and AI a(2)' go to zero in It follows fron Theorems 3. (4.3 that A^ n n ^ probability.108 L. Horväth Proof. Let tQ<T<Tg. Clearly.1 and 3. These theorems and condition (3. An elementary calculation shows that Brought to you by | New York University Bobst Library Technical Services Authenticated Download Date | 1/13/15 6:46 PM .

$\Delta_n^{(5)}(T) \le \sup_{T\le t<\infty} v_n^{-1}\ell(t)t^{-1}|\beta_n(t)| + \sup_{T\le t<\infty} v_n^{-1}\ell(t)\Bigl|\int_t^\infty y^{-2}\beta_n(y)\,dy\Bigr| + \sup_{T\le t<\infty} v_n^{-1}\ell(t)(1 - F(t))\Bigl|\int_0^\infty y^{-2}\beta_n(y)\,dy\Bigr|.$

Condition (3.3)(iii) implies that

(4.4)   $\lim_{t\to\infty}\ell(t)(1 - F(t)) = 0$

and therefore

$\lim_{T\to\infty}\limsup_{n\to\infty} P\{\Delta_n^{(5)}(T) > \lambda\} = 0$

for each $\lambda > 0$. Introducing the process

$f_n^{**}(t) = \int_t^\infty y^{-2}f_n^*(y)\,dy, \qquad 0 < t < \infty,$

we can repeat the proofs of Theorems 3.1 and 3.3, because $\{f_n^{**}(t),\ 0 < t < \infty\}$ is a separable square integrable reverse martingale. Using the Birnbaum-Marshall inequality again we get

$\lim_{T\to\infty}\limsup_{n\to\infty} P\Bigl\{\sup_{T\le t<\infty}\ell(t)\Bigl|\int_t^\infty y^{-2}B_n(y)\,dy\Bigr| > \lambda\Bigr\} = 0$

for each $\lambda > 0$. A similar argument shows that

$\lim_{T\to\infty}\limsup_{n\to\infty} P\{\Delta_n^{(6)}(T) > \lambda\} = 0$

for each $\lambda > 0$, and so we proved this theorem.

The weak convergence of $\alpha_n$ is a simple consequence of Theorem 4.1.

Corollary. If $EY^{-2} < \infty$, then

$\sup_{0\le t<\infty}|\alpha_n(t) - \Gamma_n(t)| \stackrel{P}{\longrightarrow} 0, \qquad n\to\infty.$

THEOREM 4.2. If $v(r) < \infty$ for some $r > 2$, then

$\Delta_n = \sup_{0\le t<\infty}|\alpha_n(t) - \Gamma_n(t)| = O(n^{-\rho})$

for any $0 < \rho < 1/2 - 1/r$.

Proof. We follow the proof of Theorem 4.1 with $\ell(t) = 1$ and $T = T_G$, but we have to use Theorems 3.2 and 3.4 instead of Theorems 3.1 and 3.3:

$\Delta_n^{(1)} = O(n^{-\rho})$  and  $\Delta_n^{(2)} = O(n^{-\rho}).$

It follows from James' theorem and (3.4) that for each $0 < \delta < 1/2 - 1/r$

$\sup_{0\le t<\infty}\Bigl|\int_0^t y^{-2}\beta_n(y)\,dy\Bigr| = O((\log\log n)^{1/2})$

and

$\sup_{0\le t<\infty} t^{-1}|\beta_n(t)| \le \sup_{0\le t<\infty} t^{-1}(G(t))^{1/2-\delta}\ \sup_{0\le t<\infty}(G(t))^{\delta-1/2}|\beta_n(t)| = O((\log\log n)^{1/2}).$

So we obtain that

$\Delta_n^{(3)} = O(n^{-1/2}\log\log n)$ a.s.  and  $\Delta_n^{(4)} = O(n^{-1/2}\log\log n)$ a.s.,

and the proof of Theorem 4.2 is complete.

References

[1] Birnbaum, Z. W. and Marshall, A. W.: Some multivariate Chebyshev inequalities with extensions to continuous parameter processes. Ann. Math. Statist. 32 (1961), 687-703.

[2] Coleman, R.: An Introduction to Mathematical Stereology. Memoirs No. 3, Dept. of Theoretical Statistics, Univ. of Aarhus, Denmark (1979).

[3] Cox, D. R.: Some sampling problems in technology. In: New Developments in Survey Sampling (N. L. Johnson and H. Smith, Eds.) (1969), 506-527.

[4] Csörgő, M. and Révész, P.: Strong Approximations in Probability and Statistics. Academic Press, New York (1981).

[5] Hoeffding, W.: On the centering of a simple linear rank statistic. Ann. Statist. 1 (1973), 54-66.

[6] James, B. R.: A functional law of the iterated logarithm for weighted empirical distributions. Ann. Probability 3 (1975), 762-772.

[7] Komlós, J., Major, P. and Tusnády, G.: An approximation of partial sums of independent r.v.'s and the sample d.f. I. Z. Wahrscheinlichkeitstheorie verw. Gebiete 32 (1975), 111-131.

[8] Loève, M.: Probability Theory. Second Ed. Van Nostrand, Princeton (1960).

[9] Patil, G. P. and Rao, C. R.: Weighted distributions: A survey of their applications. In: Applications of Statistics (P. R. Krishnaiah, Ed.) (1977), 383-405.

[10] Patil, G. P. and Rao, C. R.: Weighted distributions and size-biased sampling with applications to wildlife populations and human families. Biometrics 34 (1978), 179-189.

[11] Sen, P. K.: On asymptotic representations for reduced quantiles in sampling from a length-biased distribution. Calcutta Statistical Association Bulletin 33 (1984), 59-67.

[12] Skorohod, A. V.: Studies in the Theory of Random Processes. Addison-Wesley, Reading, Mass. (1965).

[13] Vardi, Y.: Nonparametric estimation in the presence of length bias. Ann. Statist. 10 (1982a), 616-620.

[14] Vardi, Y.: Nonparametric estimation in renewal processes. Ann. Statist. 10 (1982b), 772-785.

Lajos Horváth
Szeged University
Bolyai Institute
Aradi vértanúk tere 1
H-6720 Szeged
Hungary
