General Interest Section

Maximum Likelihood Estimation and Mathematica

By IAIN D. CURRIE†

Heriot-Watt University, Edinburgh, UK

†Address for correspondence: Department of Actuarial Mathematics and Statistics, Heriot-Watt University, Edinburgh, UK.

1. Introduction

We show how several maximum likelihood problems can be solved by computer algebra. We use Mathematica (Wolfram, 1991) but almost any computer algebra program would suffice. Mathematica is an attractive tool for maximum likelihood estimation since a wide range of problems can be tackled with a common method. The rest of the paper gives examples that show how Mathematica is used for maximum likelihood estimation. Section 2 uses two elementary problems (one discrete and one continuous) to establish the basic approach that is used with Mathematica. Section 3 applies the method to problems involving censoring, truncating and grouping: an example is taken from Yule and Kendall (1968) to illustrate estimation with grouped normal data, whereas an example discussed by Brass (1958) is used to demonstrate the method for the truncated negative binomial distribution. Section 4 shows how generalized linear models can be fitted, our examples here being a binary regression with logit link and an analysis of a dilution assay; the data come from Bliss (1935) and Wetherill (1981) respectively. Section 5 looks at two research problems. The first shows how Mathematica is used to deal with a gamma model with incomplete data and presents a problem used by Meng and Rubin (1993); the second is a quadratic logistic model discussed by Dellaportas and Smith (1993). Some conclusions are given in Section 6.

2. Simple Examples

In this section we consider two elementary examples. The first shows how to deal with a discrete distribution (the Poisson) whereas the second uses a continuous distribution (the normal). We begin with estimation for the Poisson distribution. Table 1 gives the Bortkewitsch (1898) data on deaths from the kick of a horse in 10 Prussian Army corps over the 20-year period 1875-94. As is usual for data of this sort, the 200 observations are presented in a frequency table.

Table 1. Frequency of deaths from the kick of a horse
   No. of deaths     0     1     2     3     4
   Frequency       109    65    22     3     1

Mathematica consists of a core program, known as the kernel, and many more specialized functions organized in packages. We begin by loading two of the statistics packages with

   Get["Statistics`DiscreteDistributions`"]
   Get["Statistics`ContinuousDistributions`"]

To obtain the likelihood for the Poisson model for these data in Mathematica we proceed as follows. First, we use the discrete distributions package and obtain the probability distribution with

   prob = Table[PDF[PoissonDistribution[t], r], {r, 0, 4}]                 (1)

Alternatively, we could supply the probability function directly with

   prob = Table[E^-t t^r/r!, {r, 0, 4}]                                    (2)

In this paper we use the statistics packages, but, whichever method is used, Mathematica responds with a list of the Poisson probabilities in familiar algebraic form:

   Out[4] = {E^-t, E^-t t, (E^-t t^2)/2, (E^-t t^3)/6, (E^-t t^4)/24}

Here we use the Mathematica numbering of input and output statements; Out[4] refers to the output from the fourth input line. Second, we supply the data with

   freq = {109, 65, 22, 3, 1}

The output from

   logl = Apply[Plus, freq Log[prob]]                                      (3)

gives the log-likelihood in algebraic form:

   Out[6] = 109 Log[E^-t] + 65 Log[E^-t t] + 22 Log[(E^-t t^2)/2]
              + 3 Log[(E^-t t^3)/6] + Log[(E^-t t^4)/24]

Out[4] and Out[6] are examples of the symbolic output that characterizes Mathematica (and programs like it). The two examples give an indication of the kind of expression that Mathematica produces. From now on, we generally omit these outputs.
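For the record, Out[6] can be simplified by hand (a worked check of ours): since the frequencies in Table 1 sum to 200 and the total number of deaths is 109 x 0 + 65 x 1 + 22 x 2 + 3 x 3 + 1 x 4 = 122, the log-likelihood reduces, up to an additive constant, to

   logl = -200 t + 122 Log[t] + constant

Mathematica does not make this simplification automatically unless a command such as PowerExpand is applied, but the reduced form shows that the maximizing value is t = 122/200 = 0.61, which provides a useful check on the numerical work below.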
One of the attractions of a program like Mathematica is the ease with which plots can be produced. The command

   Plot[logl, {t, a, b}]                                                   (4)

produces a plot of logl where the plot parameters a and b determine the range of the plot. A little experimentation with a and b will give a satisfactory plot; Fig. 1 was produced with a = 0.3 and b = 0.8.

The maximum likelihood estimate of the Poisson parameter t is obtained with

   maxlogl = FindMinimum[-logl, {t, 1}]
   mle = maxlogl[[2]]                                                      (5)

The FindMinimum function uses the method of steepest descent with derivatives computed algebraically. FindMinimum returns a list with two entries: the first element of the list is the minimum value of -logl and the second is the maximum likelihood estimate of t. The initial value of t in equation (5) can be taken from Fig. 1. The information function is found symbolically with

   information = -D[logl, t, t]                                            (6)

and the standard error of the maximum likelihood estimate of t follows with

   standarderror = Sqrt[1/information] /. mle                              (7)

where the symbol /. means replace the variable t with its value in mle.

Fig. 1. Log-likelihood for the Poisson model

Table 2. Amount of sleep of 10 patients with and without a sedative

Our second example involves the normal distribution. The method follows that laid out in equations (1), (3) and (5)-(7). Table 2 gives some data provided by Snedecor and Cochran (1967), pages 43-44, on the amount of sleep of 10 patients troubled with sleeplessness. We suppose that the differences in the amount of sleep with and without a sedative follow a normal distribution N(m, s) with mean m and standard deviation s. The process of finding the maximum likelihood estimates of m and s is only slightly different from that for the Poisson data. We can read in the data with

   data = {0.7, 0.0, 3.7, 0.8, 2.0, -1.6, 3.4, -0.2, -1.2, -0.1}

and evaluate the normal density function at each data point by

   pdf = PDF[NormalDistribution[m, s], data]                               (8)

The likelihood is obtained as in equation (3) by

   logl = Apply[Plus, Log[pdf]]                                            (9)

The maximum likelihood estimates of the normal parameters are obtained with the two-parameter version of equation (5):

   maxlogl = FindMinimum[-logl, {m, 0}, {s, 1}]
   mle = maxlogl[[2]]                                                      (10)

Initial values of 0 and 1 have been supplied for the mean m and standard deviation s. Finally, the standard errors of m and s are obtained as in equations (6) and (7):

   i11 = -D[logl, m, m]
   i12 = -D[logl, m, s]
   i22 = -D[logl, s, s]
   information = {{i11, i12}, {i12, i22}}                                  (11)
   variance = Inverse[information] /. mle
   standarderrorofm = Sqrt[variance[[1, 1]]]
   standarderrorofs = Sqrt[variance[[2, 2]]]

Mathematica can produce contour plots or three-dimensional plots; Fig. 2, a contour plot of logl with specified contour levels, is produced by

   summit = -maxlogl[[1]]
   contours = summit - {0.5, 1, 1.5, 2, 2.5, 3, 3.5}                       (12)
   ContourPlot[logl, {m, -1, 2.8}, {s, 1, 3.5}, Contours -> contours]

Fig. 2. Contour plot of the log-likelihood for the normal model

These two examples show the general approach used to obtain maximum likelihood estimates with Mathematica:

(a) the elements of the likelihood function are found (equation (1) or (2));
(b) the log-likelihood is obtained (equation (3) or (9));
(c) if required, a plot is produced (equation (4) or (12));
(d) the maximum likelihood estimate(s) are obtained with the general minimization routine (equation (5) or (10));
(e) standard errors are found (equations (6) and (7) or equation (11)).

(A compact sketch assembling these five steps for the Poisson example is given at the end of this section.) We do not suggest that maximum likelihood estimation should always be tackled in this way. Clearly, there is no attempt to make use of any special structure that a problem might have, and this puts a severe practical limit on the size of the problem that can be tackled.
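As promised above, here is a sketch of ours that collects steps (a)-(e) into a single block of input for the Poisson data of Table 1; it assumes that the statistics packages have been loaded as at the start of the section and reuses the names introduced there.

   prob = Table[PDF[PoissonDistribution[t], r], {r, 0, 4}];   (* step (a) *)
   freq = {109, 65, 22, 3, 1};
   logl = Apply[Plus, freq Log[prob]];                        (* step (b) *)
   Plot[logl, {t, 0.3, 0.8}];                                 (* step (c) *)
   maxlogl = FindMinimum[-logl, {t, 1}];                      (* step (d) *)
   mle = maxlogl[[2]]                                         (* {t -> 0.61} *)
   information = -D[logl, t, t];
   standarderror = Sqrt[1/information] /. mle                 (* about 0.055; step (e) *)

The values quoted in the comments are easily verified by hand since the log-likelihood is -200 t + 122 Log[t] up to a constant.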
Nevertheless, some non-trivial problems can still be solved by using little more than the simple Mathematica code given above. From a conceptual point of view, this unified approach is attractive, particularly in teaching. In the rest of the paper we give a variety of examples which demonstrate this point.

3. Censored, Truncated and Grouped Data

Several papers appeared in the 1950s on maximum likelihood estimation where the sample came from a truncated distribution. Plackett (1953) and Moore (1954) considered the Poisson distribution whereas Sampford (1955) and Brass (1958) looked at the problem when the sampled distribution was negative binomial. The aim of these papers was to find efficient alternatives to maximum likelihood since, at that time, the exact solution of the maximum likelihood equations and the evaluation of the information matrix were daunting arithmetical tasks. Cohen published a series of papers on estimation with censored samples; Cohen (1991), a book on truncated and censored samples, summarizes much of the work in this area.

Yule and Kendall (1968) presented many examples of grouped frequency distributions. They used Sheppard's corrections in the calculation of moments from grouped data. Although today's data handling capabilities do not require that data be grouped, grouped data still arise and present fitting problems similar to those that arise for censored and truncated data. In this section we show that the direct approach laid out in Section 2 can be applied to estimation with censored, truncated or grouped data.

Table 3 is taken from p. 82 of Yule and Kendall (1968) and gives the heights of 8585 adult males in the British Isles; the data were collected in 1883. We suppose that a normal distribution N(m, s) is to be fitted to the data.

Table 3. Heights of 8585 adult males in the British Isles
   Height (in)    57-   58-   59-   60-   61-   62-   63-   64-   65-   66-   67-
   Frequency        2     4    14    41    83   169   394   669   990  1223  1329
   Height (in)    68-   69-   70-   71-   72-   73-   74-   75-   76-   77-
   Frequency     1230  1063   646   392   202    79    32    16     5     2

The measurements leading to Table 3 were made to the nearest 1/8th of an inch, so we can set up the class boundaries with

   leftend = Table[56 + i, {i, 1, 21}] - 1/16
   rightend = leftend + 1

The probabilities associated with each class interval are found similarly to equation (8):

   prob = CDF[NormalDistribution[m, s], rightend]
            - CDF[NormalDistribution[m, s], leftend]

We finish the calculation by using the method laid out in Section 2. The data are supplied with

   freq = {2, 4, 14, 41, 83, 169, 394, 669, 990, 1223, 1329, 1230, 1063, 646,
           392, 202, 79, 32, 16, 5, 2}

The likelihood function logl is found with equation (3) and the maximum likelihood estimates of m and s similarly to equation (10). Yule and Kendall (1968) gave the mean of these data as 67.458 in and the Sheppard-corrected standard deviation as 2.586 in; the maximum likelihood estimates are also 67.458 in and 2.586 in respectively. We can also find the standard errors of our estimates; formulae (11) apply without alteration and give 0.028 in and 0.020 in as the standard errors of the estimates of m and s respectively. We can now also give a measure of the loss of information that has resulted from the grouping. We assume that the population standard deviation is 2.586 in and ask, if there is no grouping, what sample size would give a standard error of the mean of 0.028 in? This gives a sample size of 8478; the loss of information is very small, about 1%. A similar calculation for s gives an equivalent sample size of 8371; the loss of information is slightly greater at 2.5%, but still very small.
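The equivalent-sample-size calculation can be checked from the standard large-sample results for an ungrouped N(m, s) sample, namely se(m) = s/sqrt(n) and se(s) is approximately s/sqrt(2n). Inverting these (a quick check of ours, using s = 2.586 in and the rounded standard errors quoted above):

   for the mean:                n = (s/se)^2 = (2.586/0.028)^2, about 8530
   for the standard deviation:  n = s^2/(2 se^2) = 2.586^2/(2 x 0.020^2), about 8360

Both values are close to the actual sample size of 8585; the small differences from the figures quoted in the text reflect the rounding of the standard errors.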
For our second example in this section we use the data in Table 4. These data were used by Brass (1958) to illustrate his simplified method for fitting the truncated negative binomial distribution to data. The observations are the numbers of children ever born to a particular sample of mothers over 40 years old.

Table 4. Number of children ever born to a sample of African mothers aged over 40

The fitting method is almost identical with that described in Section 2 for a sample from the (complete) Poisson distribution. It is convenient to define

   negbin = NegativeBinomialDistribution[r, p]

and then the truncated probabilities are simply given by

   prob = Table[PDF[negbin, i], {i, 1, 12}]/(1 - PDF[negbin, 0])

The list freq is obtained from Table 4 and the maximum likelihood estimates are found with equations (3) and (10); initial estimates of 1 for r and 0.6 for p are sufficient. A contour plot of the log-likelihood can be obtained with equation (12) and standard errors follow as in equations (11). The values found by Brass in 1958 are confirmed.

4. Generalized Linear Models

Generalized linear models (GLMs) are a large and important class of models that are conveniently fitted by GLIM (Francis et al., 1993) (and many other statistical packages). The GLIM algorithm takes full advantage of the structure of GLMs, and models with many parameters are efficiently fitted. The maximum likelihood aspect of the model fitting process is scarcely noticed by the user since it only appears in the $error directive. Mathematica allows GLMs with small numbers of parameters to be fitted in the same way as the models in Sections 2 and 3. We demonstrate by looking at two models. The first is a standard binary regression with logit link. The second example is a dilution assay which is fitted with the help of the $offset directive in GLIM; in Mathematica the standard method applies directly.

The data in Table 5 were described by Bliss (1935) and give the numbers of beetles killed after exposure to various concentrations of carbon disulphide (CS2). A preliminary plot (which could easily be done in Mathematica) suggests that a standard binary regression with the logit link describes the data very well.

Table 5. Numbers of beetles killed after exposure to carbon disulphide (CS2)

We set up the data lists as follows: dose = {1.6907, 1.7242, ...}, exposed = {59, 60, ...} and killed = {6, 13, ...}. The probabilities of a death at each dose are given by

   p = Exp[b0 + b1 dose]/(1 + Exp[b0 + b1 dose])                           (13)

The contribution to the likelihood at each dose level is given by

   prob = Table[PDF[BinomialDistribution[exposed[[k]], p[[k]]],
                    killed[[k]]], {k, 1, 8}]                               (14)

As before, we find the log-likelihood with equation (3). The maximum likelihood estimates and their standard errors are found with equations (10) and (11); initial values for b0 and b1 can both be taken as 0.
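The whole fit can be written out as a short sketch. The dose, exposed and killed values below are those usually quoted for the Bliss (1935) assay and should be checked against Table 5; the estimates quoted in the final comment are only approximate.

   (* Binary regression with logit link for the beetle data; data values as
      commonly reported for this assay, to be checked against Table 5. *)
   dose = {1.6907, 1.7242, 1.7552, 1.7842, 1.8113, 1.8369, 1.8610, 1.8839};
   exposed = {59, 60, 62, 56, 63, 59, 62, 60};
   killed = {6, 13, 18, 28, 52, 53, 61, 60};
   p = Exp[b0 + b1 dose]/(1 + Exp[b0 + b1 dose]);
   prob = Table[PDF[BinomialDistribution[exposed[[k]], p[[k]]],
                    killed[[k]]], {k, 1, 8}];
   logl = Apply[Plus, Log[prob]];
   maxlogl = FindMinimum[-logl, {b0, 0}, {b1, 0}]
   (* the fitted intercept and slope should be roughly -60.7 and 34.3 *)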
The data in Table 6 are from a dilution assay described by Wetherill (1981), p. 95.

Table 6. Data from a dilution assay (Wetherill, 1981)

The concentrations in the suspensions S0, S1, S2, S3 and S4 are taken to be ρ, ρ/2, ρ/4, ρ/8 and ρ/16 respectively. In GLIM the standard dilution assay model is fitted with a binomial error and a complementary log-log link (Francis et al. (1993), p. 572). The regressor variable corresponding to the dilution appears in the linear predictor with a known coefficient which GLIM handles with the $offset directive. The maximum likelihood estimate of ρ can be obtained in Mathematica in a more transparent way. If Xi is the number of samples out of ni which contain particles then Xi ~ Bin{ni, 1 - exp(-ρi)}, i = 1, ..., 5, where ρi = ρ/2^(i-1) is the concentration in the ith suspension. With x = {10, ...} and n = {10, 12, ...} the binomial probabilities are given similarly to equation (13) by

   p = Table[1 - Exp[-2^(-k) rho], {k, 0, 4}]

and the contributions to the likelihood function by

   prob = Table[PDF[BinomialDistribution[n[[k]], p[[k]]], x[[k]]], {k, 1, 5}]

The model is fitted by following the recipe laid out in equations (3) and (5).

5. Further Examples

This section looks at two examples from recent published research. The first shows how Mathematica is used to deal with a gamma model with incomplete data. This example was used by Meng and Rubin (1993) to illustrate the expectation conditional maximization (ECM) algorithm. The ECM algorithm is an extension of the EM algorithm (Dempster et al., 1977) and can be used when the maximization step of the EM algorithm does not have an explicit solution. The ECM algorithm might be used in survival analysis when the lifetimes follow a gamma distribution. The standard parametric models in survival analysis are the exponential, Weibull and extreme value (Aitkin and Clayton, 1980); these models give rise to proportional hazards and can be fitted by using GLIM. Gamma models do not fit into the proportional hazards framework and it is difficult to see how they could be fitted in GLIM. With Mathematica we can fit both the standard proportional hazards models and gamma models with the same direct approach.

The data in Table 7 were discussed by Gehan (1965) and used by Aitkin and Clayton (1980) to illustrate their method.

Table 7. Times of remission of leukaemia patients

The likelihood function for data of the kind given in Table 7 is

   ∏_U f_i(t_i) ∏_C S_i(t_i)                                               (15)

where the products are taken over the uncensored (U) and censored (C) observations and f_i(·) and S_i(·) are the density function and survivor function respectively of the ith individual. In Mathematica, both f(t) and S(t) are available for the exponential, Weibull, extreme value and gamma distributions, and simple censored survival models can be fitted by using the method laid out earlier. For example, one of the models fitted by Aitkin and Clayton (1980) to the data in Table 7 is a Weibull model with hazard functions α t^(α-1) exp(β0) and α t^(α-1) exp(β0 + β1). The Mathematica functions in expression (15) are supplied with functions like

   tailW1[t_] := 1 - CDF[WeibullDistribution[a, Exp[-b0/a]], t]            (16)

where the "_" character indicates that tailW1[t] is a user-defined function with dummy argument t; the parameterization chosen matches the proportional hazards parameterization used in Aitkin and Clayton (1980). The maximum likelihood estimates of α, β0 and β1 are now obtained by minimizing the log-likelihood in the usual way. One advantage of Mathematica is that the whole log-likelihood is considered and hence the standard errors of the estimates of β0 and β1 are obtained directly as in equations (11). In GLIM there is the minor complication that the estimation of β0 and β1 is conditional on the estimate of α, and hence additional work is required to obtain the correct standard errors.
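For completeness, the remaining density and survivor functions for this Weibull model can be defined in the same style. The following is a sketch of ours (the names pdfW1, tailW2 and pdfW2 are our choice): the scale parameter Exp[-(b0 + b1)/a] for the second group follows from the hazard α t^(α-1) exp(β0 + β1) in exactly the same way as the scale parameter in expression (16).

   (* Group 1: hazard a t^(a-1) Exp[b0];  group 2: hazard a t^(a-1) Exp[b0 + b1] *)
   pdfW1[t_]  := PDF[WeibullDistribution[a, Exp[-b0/a]], t]
   tailW2[t_] := 1 - CDF[WeibullDistribution[a, Exp[-(b0 + b1)/a]], t]
   pdfW2[t_]  := PDF[WeibullDistribution[a, Exp[-(b0 + b1)/a]], t]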
The fitting of the exponential, Weibull and extreme value models to the data in Table 7 is straightforward in Mathematica, but there is a complication with the gamma model; we give some details of the solution for the gamma model since this not only lays out the method for the other models but also shows some other features of Mathematica. We suppose that the data are supplied in the data lists times = {6, 6, ...}, group = {1, 1, ..., 2, 2} and censor = {0, 1, ...}, where 0 codes for a censored value and 1 for a value observed exactly. We wish to fit a model with gamma survival times where the distributions of the survival times in the two populations have a common shape parameter a and mean survival times m1 and m2 respectively. The density and survival functions in expression (15) are given by

   tailG1[t_] := 1 - CDF[GammaDistribution[a, m1/a], t]
   pdfG1[t_]  := PDF[GammaDistribution[a, m1/a], t]
   tailG2[t_] := 1 - CDF[GammaDistribution[a, m2/a], t]                    (17)
   pdfG2[t_]  := PDF[GammaDistribution[a, m2/a], t]

We now define a function which will find the contribution to the likelihood function for an individual survival time, z, according to the group value y and the censoring indicator x:

   element[x_, y_, z_] :=
     Which[x == 0 && y == 1, tailG1[z],
           x == 1 && y == 1, pdfG1[z],
           x == 0 && y == 2, tailG2[z],
           x == 1 && y == 2, pdfG2[z]]                                     (18)

Finally, we use the Mathematica function MapThread to apply the function element to the three data lists censor, group and times; the components of the likelihood function result:

   components = MapThread[element, {censor, group, times}]                 (19)

The log-likelihood function is given as usual by

   logl = Apply[Plus, Log[components]]                                     (20)

The log-likelihoods appropriate for exponential, Weibull and extreme value models can be obtained by making the obvious changes to expressions (17), and the FindMinimum function can then be used to estimate the parameters in the models in the usual way (equations (5) and (10)); the standard errors follow with a simple extension of equations (11). For the gamma distribution the FindMinimum function will not work in the form of equation (5) or (10) since Mathematica cannot compute the symbolic derivatives of the functions tailG1 and tailG2, but FindMinimum has a numerical form where two initial values are supplied for each parameter. Some care is needed in the choice of these values. The Plot function is invaluable here. We define the Mathematica function ll[a, m1, m2] by

   ll[a_, m1_, m2_] = logl

We can now find approximations to the maximum likelihood estimates by a succession of plots of ll, varying one variable while the other two variables are held fixed. For example, we might start with a fixed at 1 (an exponential distribution) and m2 fixed at 8.7 (the mean of the second group). We issue

   Plot[ll[1, m1, 8.7], {m1, 10, 50}]

The plot suggests that m1 is about 32. We continue with

   Plot[ll[a, 32, 8.7], {a, 0.5, 2}]

This leads quickly to good approximations to the maximum likelihood estimates. FindMinimum can now be used to obtain more accurate values. In the present case we used

   maxlogl = FindMinimum[-logl, {a, 1.4, 1.5}, {m1, 32, 33}, {m2, 8, 9}]
   mleG = maxlogl[[2]]                                                     (21)

The hazard functions for the two groups and the ratio of the hazards are given by

   hazardG1 = pdfG1[t]/tailG1[t] /. mleG
   hazardG2 = pdfG2[t]/tailG2[t] /. mleG
   ratioG = hazardG2/hazardG1

A plot of the ratios of the hazards for the exponential, Weibull and gamma models can now be obtained with

   Plot[{ratioE, ratioW, ratioG}, {t, 1, 30}]

and is given in Fig. 3 (we have assumed that ratioE and ratioW have been obtained, and we have omitted the graphics options that control the detailed appearance of the graph). Fig. 3 emphasizes the proportional hazards nature of the exponential and Weibull models, and the non-proportional hazards nature of the gamma model is apparent.

Fig. 3. Ratios of hazards for the exponential, Weibull and gamma models

The information matrix must also be found numerically. One simple way of doing this is to use finite differences. First, we obtain the maximum likelihood estimates amax, m1max and m2max as follows:

   amax = a /. mleG;  m1max = m1 /. mleG;  m2max = m2 /. mleG

where the semicolon allows more than one command on a single input line. The finite difference approximation to ∂²(log l)/∂a² at the point (amax, m1max, m2max) with increment h is

   d2 = (ll[amax + h, m1max, m2max] - 2 ll[amax, m1max, m2max]
           + ll[amax - h, m1max, m2max])/h^2

and a few trials with h give a satisfactory estimate of ∂²(log l)/∂a². The remaining entries of the information matrix can be found in a similar fashion; Mathematica functions could be defined to streamline this process.
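One possible way of streamlining this (a sketch of ours, not code from the paper) is a single function that returns the central difference approximation to the full matrix of second derivatives of ll at the maximum; the information matrix and standard errors then follow as in equations (11). The increment 0.01 is an assumption and should be varied as described above.

   (* central difference approximation to the 3 x 3 matrix of second
      derivatives of ll at (amax, m1max, m2max); e[k] is h times the kth
      unit vector *)
   hess[h_] := Module[{x0 = {amax, m1max, m2max}, e},
     e[k_] := h Table[If[jj == k, 1, 0], {jj, 3}];
     Table[(ll @@ (x0 + e[i] + e[j]) - ll @@ (x0 + e[i] - e[j])
              - ll @@ (x0 - e[i] + e[j]) + ll @@ (x0 - e[i] - e[j]))/(4 h^2),
           {i, 3}, {j, 3}]]
   variance = Inverse[-hess[0.01]];           (* minus the Hessian is the information *)
   standarderrors = Table[Sqrt[variance[[i, i]]], {i, 3}]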
The second example is taken from Dellaportas and Smith (1993), who showed how the Gibbs sampler can be used to make Bayesian inferences for GLMs. They gave data from Knuiman and Speed (1988) on the connection between duration of diabetes and retinopathy, an eye disease. Table 8 is a modification of Knuiman and Speed's original data and is used by Dellaportas and Smith to illustrate the fitting of the quadratic logistic model

   log(π1j/π2j) = β1 + β2 Zj + β3 Zj²

where (π1j, π2j) are the probabilities of being with or without retinopathy for duration in the jth category and Z = (Zj) = (0, 4, 7, 10, 13, 16, 19, 24) is the vector of mid-durations.

Table 8. Retinopathy by duration of diabetes

In this example, Dellaportas and Smith assumed that no prior data are available and took as the prior p(β1, β2, β3) = constant. There is no difficulty in obtaining the posterior summary statistics of Knuiman and Speed since these are the maximum likelihood estimates. We set yes = {1, 1, ...}, no = {6, 4, ...} and z = {0, 4, ...}/10, where the scaling of Z gives a more convenient scale for the estimation of the regression parameters β2 and β3. The linear predictor for the quadratic logistic model is given by

   eta = beta1 + beta2 z + beta3 z^2

and the log-likelihood follows (Dellaportas and Smith (1993), equation (2)) as

   logl = Apply[Plus, yes eta - (yes + no) Log[1 + Exp[eta]]]

The maximum likelihood estimate is found by equation (10) and its covariance matrix by equations (11). Mathematica provides a variety of integration routines and these can be used to find the posterior mean and covariance matrix of the Bayesian estimate of β.
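One way this might be done is sketched below (our sketch, not the paper's code). Under the flat prior the posterior is proportional to Exp[logl]; it is recentred by maxlogl[[1]], the minimum of -logl assumed to have been stored by the FindMinimum call of equation (10), to avoid numerical underflow. The integration limits are an assumption (several posterior standard deviations either side of the mode) and the three-dimensional quadrature can be slow.

   (* posterior expectation of f under the flat prior, up to a common constant *)
   post[f_] := NIntegrate[f Exp[logl + maxlogl[[1]]],
                 {beta1, -7, 2}, {beta2, -6, 10}, {beta3, -4, 3}]
   const = post[1];
   meanbeta1 = post[beta1]/const
   varbeta1  = post[beta1^2]/const - meanbeta1^2
   (* the remaining means, variances and covariances follow in the same way *)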
We obtained

   E(β|y) = (-2.469, 2.457, -0.495)'

   D(β|y) =    1.517   -2.594    0.892
              -2.594    6.170   -2.403
               0.892   -2.403    1.039

in close agreement with the simulated values of Dellaportas and Smith. One of the advantages of the Gibbs sampler approach is that, once the sample of β-values has been assembled, further statistics (like means and covariances) are easily found. Of course, Mathematica must compute any integrals from scratch, but its powerful language and simple graphics have other advantages. Dellaportas and Smith (1993) compared the maximum likelihood and Bayesian solutions by graphing the marginal distributions of β1, β2 and β3, which they readily obtained from their sample of β-values. Instead we compare the two solutions by plotting the linear predictor η together with approximate (pointwise) 95% confidence or Bayesian intervals. In Mathematica we proceed in a direct fashion: let betabayes = {-2.469, 2.457, -0.495} and covbayes = {{1.517, -2.594, 0.892}, ...} be E(β|y) and D(β|y) respectively. For the Bayesian plot the functions to be computed are given by

   etabayes = betabayes[[1]] + betabayes[[2]] zz + betabayes[[3]] zz^2
   sterror = Sqrt[{1, zz, zz^2}.covbayes.{1, zz, zz^2}]
   upper = etabayes + 2 sterror
   lower = etabayes - 2 sterror
   Plot[{etabayes, upper, lower}, {zz, 0, 2.4}]

In these expressions the generic variable zz is used, not the data list z; the output from the line etabayes = ... is the algebraic expression

   Out[n] = -2.469 + 2.457 zz - 0.495 zz^2

Fig. 4 also includes the plot from the maximum likelihood estimation. Although the maximum likelihood estimate and the posterior mean E(β|y) differ, the predicted values of η under the two methods are quite close over the range of Z. The Bayesian interval is noticeably wider than the maximum likelihood interval for small and large values of Z.

Fig. 4. Quadratic logistic model: Bayesian and maximum likelihood fits with approximate 95% intervals

6. Concluding Remarks

This paper has used Mathematica to solve problems in maximum likelihood estimation which range from the elementary to the computationally demanding. A feature of the solutions described has been the common approach to these diverse problems. We have targeted the paper at anyone with little or no experience of Mathematica, and for that reason we have used a very restricted set of Mathematica commands. Three omissions are worth mentioning. First, Mathematica has extensive algebraic simplification commands. The output provided by equations (3), (9) and similar expressions can be greatly simplified; we have omitted such simplifications since they refer mainly to the efficiency of the computations and do not materially affect the method. Second, we have used the FindMinimum command. A different approach involves differentiating the log-likelihood and solving the resulting equation(s). For example, instead of equation (5) we could proceed with

   dlogl = D[logl, t]
   mle = Solve[dlogl == 0, t]
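As a check, for the Poisson example of Section 2 (with logl the log-likelihood of equation (3)) the derivative simplifies to

   dlogl = -200 + 122/t

(apply Simplify if necessary), and Solve[dlogl == 0, t] then returns {{t -> 61/100}}, that is t = 0.61, in agreement with the value found by FindMinimum.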
The Solve command will find exact solutions, if they exist; a numerical version of Solve (NSolve) exists for equations without exact solutions. In this paper we have ignored this avenue, although this alternative approach could provide a more efficient solution for many problems. Third, we have ignored any particular structure that a problem may have. This leads to a conceptually simple approach, but one which in practice will be severely limited. Even a program as powerful and sophisticated as Mathematica will need help; a serious user could supply such help as he or she saw fit.

We see Mathematica as having two roles in statistics. In teaching, the approach outlined in this paper brings many conceptually simple but computationally heavy problems within easy reach of the student. As a research instrument, Mathematica provides a symbolic computational tool of immense power, which complements the data handling and model fitting abilities of any statistical package.

References

Aitkin, M. and Clayton, D. (1980) The fitting of exponential, Weibull and extreme value distributions to complex censored survival data using GLIM. Appl. Statist., 29, 156-163.
Bliss, C. I. (1935) The calculation of the dosage-mortality curve. Ann. Appl. Biol., 22, 134-167.
Bortkewitsch, L. von (1898) Das Gesetz der kleinen Zahlen. Leipzig: Teubner.
Brass, W. (1958) Simplified methods of fitting the truncated negative binomial distribution. Biometrika, 45, 59-68.
Cohen, A. C. (1991) Truncated and Censored Samples. New York: Dekker.
Dellaportas, P. and Smith, A. F. M. (1993) Bayesian inference for generalized linear and proportional hazards models via Gibbs sampling. Appl. Statist., 42, 443-459.
Dempster, A. P., Laird, N. M. and Rubin, D. B. (1977) Maximum likelihood from incomplete data via the EM algorithm (with discussion). J. R. Statist. Soc. B, 39, 1-38.
Francis, B., Green, M. and Payne, C. (eds) (1993) The GLIM System: Release 4 Manual. Oxford: Oxford University Press.
Gehan, E. A. (1965) A generalized Wilcoxon test for comparing arbitrarily singly-censored samples. Biometrika, 52, 203-223.
Knuiman, M. W. and Speed, T. P. (1988) Incorporating prior information into the analysis of contingency tables. Biometrics, 44, 1061-1071.
Meng, X. L. and Rubin, D. B. (1993) Maximum likelihood estimation via the ECM algorithm: a general framework. Biometrika, 80, 267-278.
Moore, P. G. (1954) A note on truncated Poisson distributions. Biometrics, 10, 402-406.
Plackett, R. L. (1953) The truncated Poisson distribution. Biometrics, 9, 485-488.
Sampford, M. R. (1955) The truncated negative binomial distribution. Biometrika, 42, 58-69.
Snedecor, G. W. and Cochran, W. G. (1967) Statistical Methods, 6th edn. Ames: Iowa State University Press.
Wetherill, G. B. (1981) Intermediate Statistical Methods. London: Chapman and Hall.
Wolfram, S. (1991) Mathematica: a System for Doing Mathematics by Computer, 2nd edn. New York: Addison-Wesley.
Yule, G. U. and Kendall, M. G. (1968) An Introduction to the Theory of Statistics, 14th edn. London: Griffin.
