
CHAPTER VI

APPLICATION OF G-FUNCTION IN STATISTICS

During the past few years Meijer's G-function has been used extensively in the theory of statistical distributions, in the characterization of distributions and in the study of certain structural properties of statistical distributions. This chapter deals with the applications of the G-function in these various fields of Statistics. Only the main results are discussed in the text. Some further results are given in the exercises at the end of this chapter, and more can be found in the references. Not all the articles in the list of references make use of G-functions directly, but the problems under consideration in these articles are such that at least particular cases of the G-function could be made use of. Some other applications in the Physical Sciences, Engineering and related fields will be discussed in the next chapter.

6.1. EXACT DISTRIBUTIONS OF MULTIVARIATE TEST CRITERIA

In order to apply a test of a statistical hypothesis to a practical problem one needs the exact distribution of the test criterion in a form suitable for computing the exact percentage points. Several statistical tests associated with a multivariate normal distribution, along with the exact distributions of the likelihood ratio test criteria in particular cases, are available in Anderson [16]. A method of obtaining approximate distributions in these cases is given by Box [50].

There are different techniques available to tackle these distribution problems, but none of them has proved powerful enough to give the exact distributions of these test criteria in the general cases. The methods of Fourier and Laplace transforms are useful in some of these problems. The method of Mellin transform was successfully used by Nair [215] to work out some exact distributions in particular cases. In a series of papers Consul ([68],[69],[70],[71],[72]) obtained the exact distributions in particular cases by using the inverse Mellin transform technique and represented them in terms of Hypergeometric functions. These Hypergeometric functions are of the logarithmic type discussed in Section 5.4. The users of these results are often misled because it is not specifically mentioned in Consul's papers that the results are in terms of Hypergeometric functions in the logarithmic cases.

In a number of recent papers on exact null and non-null distributions Pillai, Al-Ani and Jouris [231], Pillai and Jouris [234] and Pillai and Nagarsenker [235] expressed the exact distributions in terms of G-functions. But unfortunately the problems under consideration in these papers are not thereby solved, because these are cases in which the poles of the integrand are not simple, except in particular cases. The exact null and non-null distributions of almost all the multivariate test criteria, in the general multinormal cases, were obtained in computable forms for the first time by Mathai ([173],[174],[179],[180],[181],[182]) and Mathai and Rathie ([186],[187],[189],[190],[191]) by using different techniques, including the techniques developed in Chapter V. The discussion in this section is mainly based on these articles. These problems had been open since the 1930's, and several authors had worked out particular cases with the help of a number of techniques. The expansions of Meijer's G-function given in Chapter V yielded the exact distributions of all these test criteria in computable forms and for the general cases. A detailed account of the different methods applied to these problems so far is available from Mathai [185].

Only the density functions are given in the following subsections. The distribution functions are available by term-by-term integration with the help of Theorem 6.1.1 and hence the discussion is omitted. The density functions are given in series forms, which are computable, but in particular cases they can be simplified in terms of elementary Special Functions, mostly in the logarithmic cases. Some of these simplifications are given in the exercises.

All the density functions are represented in terms of series involving terms of the type $x^{a}(-\log x)^{b}$. Hence in order to obtain the distribution functions one needs an integral of the type given in Theorem 6.1.1.



Theorem 6.1.1. For $\alpha > 0$, $k$ a positive integer and $0 < x < 1$,

$$\int_0^x u^{\alpha}(-\log u)^{k-1}\,du \;=\; x^{\alpha+1}\sum_{r=1}^{k}\frac{(k-1)(k-2)\cdots(k-r+1)}{(\alpha+1)^{r}}\,(-\log x)^{k-r}. \qquad (6.1.1)$$

This result follows by successive integration by parts.
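The integral in Theorem 6.1.1 can be checked numerically. The following sketch compares a midpoint-rule quadrature with the closed form as reconstructed above; the parameter values are arbitrary illustrative choices.

```python
import math

def lhs_integral(alpha, k, x, steps=200000):
    # midpoint-rule approximation of  \int_0^x u^alpha (-log u)^(k-1) du
    h = x / steps
    total = 0.0
    for i in range(steps):
        u = (i + 0.5) * h
        total += u ** alpha * (-math.log(u)) ** (k - 1) * h
    return total

def rhs_closed_form(alpha, k, x):
    # x^(alpha+1) * sum_{r=1}^{k} (k-1)(k-2)...(k-r+1) (-log x)^(k-r) / (alpha+1)^r
    s = 0.0
    for r in range(1, k + 1):
        coeff = 1
        for m in range(k - r + 1, k):   # product (k-1)(k-2)...(k-r+1); empty for r = 1
            coeff *= m
        s += coeff * (-math.log(x)) ** (k - r) / (alpha + 1) ** r
    return x ** (alpha + 1) * s

num = lhs_integral(1.5, 3, 0.7)
exact = rhs_closed_form(1.5, 3, 0.7)
```

The agreement is limited only by the quadrature error.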

6.1.1. Testing Linear Hypotheses on Regression Coefficients.

In this as well as in the later sections we will only give the h-th moment of the likelihood ratio criterion of the problem under consideration and the exact density function. The details of the tests and the methods of deriving these moments may be seen from any book on Multivariate Statistical Analysis or from Anderson [16]. In all these problems the (s-1)th moment $E(U^{s-1})$ is nothing but the Mellin transform of the density function, and hence the density function is available from the inverse Mellin transform. Since the density functions exist in all these cases, existence conditions are not stated separately.
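The moment-Mellin relation used throughout can be illustrated on a simple Beta density, which serves here as a hypothetical stand-in for the criteria below: the Mellin transform of the density at s equals $E(X^{s-1})$, which for a Beta variate is an explicit gamma ratio.

```python
import math

a, b = 2.0, 3.0   # illustrative Beta parameters
const = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))

def density(x):
    # Beta(a, b) density on (0, 1)
    return const * x ** (a - 1) * (1 - x) ** (b - 1)

def mellin_numeric(s, steps=200000):
    # M{f}(s) = \int_0^1 x^{s-1} f(x) dx = E(X^{s-1}), by midpoint quadrature
    h = 1.0 / steps
    return sum(((i + 0.5) * h) ** (s - 1) * density((i + 0.5) * h) * h
               for i in range(steps))

def mellin_exact(s):
    # E(X^{s-1}) = Gamma(a+s-1)Gamma(a+b) / (Gamma(a)Gamma(a+b+s-1))
    return (math.gamma(a + s - 1) * math.gamma(a + b)
            / (math.gamma(a) * math.gamma(a + b + s - 1)))
```

Inverting the gamma ratio is exactly the inverse Mellin transform step used for the criteria in this section.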

In the problem of testing linear hypotheses on regression coefficients the h-th moment of the criterion U is as follows [Anderson [16], pp. 192-194]:

$$E(U^{h}) \;=\; \prod_{j=1}^{p}\frac{\Gamma\left(\frac{n+1-j}{2}+h\right)\,\Gamma\left(\frac{n+q+1-j}{2}\right)}{\Gamma\left(\frac{n+1-j}{2}\right)\,\Gamma\left(\frac{n+q+1-j}{2}+h\right)}, \qquad (6.1.2)$$

where E denotes mathematical expectation and p, q, n are all positive integers. From (6.1.2) it is easily seen, by taking the inverse Mellin transform, that the density function, denoted by f(u), is available as

$$f(u) \;=\; \left[\prod_{j=1}^{p}\frac{\Gamma\left(\frac{n+q+1-j}{2}\right)}{\Gamma\left(\frac{n+1-j}{2}\right)}\right] u^{-1}\, G^{\,p,0}_{\,p,p}\!\left[u \,\middle|\, \begin{matrix} \frac{n+q+1-j}{2},\; j=1,\dots,p \\[4pt] \frac{n+1-j}{2},\; j=1,\dots,p \end{matrix}\right], \qquad 0 < u < 1. \qquad (6.1.3)$$
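The gamma-ratio structure of (6.1.2) reflects the standard representation of U as a product of p independent Beta((n+1-j)/2, q/2) variates. The following sketch verifies, via log-gammas, that the two ways of writing the h-th moment agree; the parameter values are illustrative.

```python
import math

def moment_6_1_2(p, q, n, h):
    # h-th moment of U as the gamma ratio in (6.1.2)
    log_m = 0.0
    for j in range(1, p + 1):
        log_m += (math.lgamma((n + 1 - j) / 2 + h)
                  + math.lgamma((n + q + 1 - j) / 2)
                  - math.lgamma((n + 1 - j) / 2)
                  - math.lgamma((n + q + 1 - j) / 2 + h))
    return math.exp(log_m)

def moment_beta_product(p, q, n, h):
    # E(prod_j B_j^h) for independent B_j ~ Beta(a_j, b_j) with
    # a_j = (n+1-j)/2, b_j = q/2, using E(B^h) = B(a+h,b)/B(a,b)
    log_m = 0.0
    for j in range(1, p + 1):
        aj, bj = (n + 1 - j) / 2, q / 2
        log_m += (math.lgamma(aj + h) + math.lgamma(aj + bj)
                  - math.lgamma(aj) - math.lgamma(aj + bj + h))
    return math.exp(log_m)

m_gamma = moment_6_1_2(3, 4, 10, 2.5)
m_beta = moment_beta_product(3, 4, 10, 2.5)
```

Since $a_j + b_j = (n+q+1-j)/2$, the two forms are term-by-term identical.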



Evidently the G-function in (6.1.3) is in the logarithmic case, but it can be evaluated by using the technique of Chapter V after identifying the poles. In order to identify the poles one has to consider four different cases, namely, Case I: p even, q even; Case II: p odd, q even; Case III: p even, q odd; Case IV: p odd, q odd. It is seen that in Cases I, II and III, $E(U^{s-1})$ can be written in the following form:

$$E(U^{s-1}) \;=\; C\,\prod_{j\in a}(\alpha - j)^{-a_j}\ \prod_{j\in b}\left(\alpha - \tfrac{1}{2} - j\right)^{-b_j}, \qquad (6.1.4)$$

and for Case IV,

$$E(U^{s-1}) \;=\; C\,\frac{\Gamma\left(\alpha - \tfrac{1}{2}\right)}{\Gamma(\alpha)}\,\prod_{j\in a}(\alpha - j)^{-a_j}\ \prod_{j\in b}\left(\alpha - \tfrac{1}{2} - j\right)^{-b_j}, \qquad (6.1.5)$$

where

$$\alpha \;=\; s + \frac{n+q}{2} - 1 \qquad\text{and}\qquad C \;=\; \prod_{j=1}^{p}\frac{\Gamma\left(\frac{n+q+1-j}{2}\right)}{\Gamma\left(\frac{n+1-j}{2}\right)}. \qquad (6.1.6)$$

The index sets a and b and the exponents $a_j$ and $b_j$ are different for the different cases; they will be given below.

Case I: p even, q even (q ≥ p):

$$a_j = b_j = \begin{cases} j, & j = 1,2,\dots,\frac{p}{2}, \\[2pt] \frac{p}{2}, & j = \frac{p}{2}, \frac{p}{2}+1, \dots, \frac{q}{2}, \\[2pt] \frac{p}{2}-i, & j = \frac{q}{2}+i,\ i = 1,2,\dots,\frac{p}{2}-1; \end{cases} \qquad (6.1.7)$$

$$a = b = \left\{1,2,\dots,\frac{p+q}{2}-1\right\}. \qquad (6.1.8)$$

Case II: p odd, q even (q > p):

$$a_j = \begin{cases} j, & j = 1,2,\dots,\frac{p+1}{2}, \\[2pt] \frac{p+1}{2}, & j = \frac{p+1}{2},\dots,\frac{q}{2}, \\[2pt] \frac{p+1}{2}-i, & j = \frac{q}{2}+i,\ i = 1,2,\dots,\frac{p-1}{2}; \end{cases} \qquad (6.1.9)$$

$$b_j = \begin{cases} j, & j = 1,2,\dots,\frac{p-1}{2}, \\[2pt] \frac{p-1}{2}, & j = \frac{p-1}{2},\dots,\frac{q}{2}, \\[2pt] \frac{p-1}{2}-i, & j = \frac{q}{2}+i,\ i = 1,2,\dots,\frac{p-3}{2}; \end{cases} \qquad (6.1.10)$$

$$a = \left\{1,2,\dots,\frac{p+q-1}{2}\right\}, \qquad b = \left\{1,2,\dots,\frac{p+q-3}{2}\right\}. \qquad (6.1.11)$$

Case III: p even, q odd (q > p):

$$a_j = \begin{cases} j, & j = 1,2,\dots,\frac{p}{2}, \\[2pt] \frac{p}{2}, & j = \frac{p}{2},\dots,\frac{q+1}{2}, \\[2pt] \frac{p}{2}-i, & j = \frac{q+1}{2}+i,\ i = 1,2,\dots,\frac{p}{2}-1; \end{cases} \qquad (6.1.12)$$

$$b_j = \begin{cases} j, & j = 1,2,\dots,\frac{p}{2}, \\[2pt] \frac{p}{2}, & j = \frac{p}{2},\dots,\frac{q-1}{2}, \\[2pt] \frac{p}{2}-i, & j = \frac{q-1}{2}+i,\ i = 1,2,\dots,\frac{p}{2}-1; \end{cases} \qquad (6.1.13)$$

$$a = \left\{1,2,\dots,\frac{p+q-1}{2}\right\}, \qquad b = \left\{1,2,\dots,\frac{p+q-3}{2}\right\}. \qquad (6.1.14)$$

Case IV: p odd, q odd (q ≥ p):

$$a_j = \begin{cases} j-1, & j = 2,3,\dots,\frac{p+1}{2}, \\[2pt] \frac{p-1}{2}, & j = \frac{p+1}{2},\dots,\frac{q+1}{2}, \\[2pt] \frac{p-1}{2}-i, & j = \frac{q+1}{2}+i,\ i = 1,2,\dots,\frac{p-3}{2}; \end{cases} \qquad (6.1.15)$$

$$b_j = \begin{cases} j+1, & j = 1,2,\dots,\frac{p-1}{2}, \\[2pt] \frac{p+1}{2}, & j = \frac{p-1}{2},\dots,\frac{q-1}{2}, \\[2pt] \frac{p+1}{2}-i, & j = \frac{q-1}{2}+i,\ i = 1,2,\dots,\frac{p-1}{2}; \end{cases} \qquad (6.1.16)$$

$$a = \left\{2,3,\dots,\frac{p+q-2}{2}\right\} \qquad\text{and}\qquad b = \left\{1,2,\dots,\frac{p+q-2}{2}\right\}. \qquad (6.1.17)$$

For Cases I, II and III the poles are available by equating to zero the various factors of

$$\prod_{j\in a}(\alpha - j)^{a_j}\ \prod_{j\in b}\left(\alpha - \tfrac{1}{2} - j\right)^{b_j}, \qquad (6.1.18)$$

and for Case IV the poles are available from

$$\prod_{\nu=0}^{\infty}\left(\alpha - \tfrac{1}{2} + \nu\right)\ \prod_{j\in a}(\alpha - j)^{a_j}\ \prod_{j\in b}\left(\alpha - \tfrac{1}{2} - j\right)^{b_j}, \qquad (6.1.19)$$

where the exponents denote the orders of the poles and the quantities a, b, $a_j$ and $b_j$ are available from (6.1.7) to (6.1.17). Now by using the results in Chapter V we can write down the density function as follows.

Theorem 6.1.2. For Cases I, II and III, that is, when p and q are not both odd, the density function of U is given by

$$f(u) = C\left\{\sum_{j\in a}\frac{u^{\frac{n+q}{2}-1-j}}{(a_j-1)!}\sum_{v=0}^{a_j-1}\binom{a_j-1}{v}(-\log u)^{a_j-1-v}A_v V \;+\; \sum_{j\in b}\frac{u^{\frac{n+q}{2}-\frac{3}{2}-j}}{(b_j-1)!}\sum_{v=0}^{b_j-1}\binom{b_j-1}{v}(-\log u)^{b_j-1-v}B_v W\right\},$$
$$0 < u < 1, \qquad (6.1.20)$$

where

$$V = \prod_{\substack{t\in a\\ t\neq j}}(j-t)^{-a_t}\ \prod_{t\in b}\left(j-\tfrac{1}{2}-t\right)^{-b_t}, \qquad (6.1.21)$$

$$W = \prod_{t\in a}\left(j+\tfrac{1}{2}-t\right)^{-a_t}\ \prod_{\substack{t\in b\\ t\neq j}}(j-t)^{-b_t}, \qquad (6.1.22)$$

$$A_v = \sum_{v_1=0}^{v-1}\binom{v-1}{v_1}A_0^{(v-1-v_1)}\sum_{v_2=0}^{v_1-1}\binom{v_1-1}{v_2}A_0^{(v_1-1-v_2)}\cdots, \qquad (6.1.23)$$

$$B_v = \sum_{v_1=0}^{v-1}\binom{v-1}{v_1}B_0^{(v-1-v_1)}\sum_{v_2=0}^{v_1-1}\binom{v_1-1}{v_2}B_0^{(v_1-1-v_2)}\cdots, \qquad (6.1.24)$$

$$A_0^{(r)} = (-1)^{r+1}\,r!\left\{\sum_{\substack{t\in a\\ t\neq j}}\frac{a_t}{(j-t)^{r+1}} \;+\; \sum_{t\in b}\frac{b_t}{\left(j-\tfrac{1}{2}-t\right)^{r+1}}\right\}, \quad\text{for } r \ge 0, \text{ and} \qquad (6.1.25)$$

$$B_0^{(r)} = (-1)^{r+1}\,r!\left\{\sum_{t\in a}\frac{a_t}{\left(j+\tfrac{1}{2}-t\right)^{r+1}} \;+\; \sum_{\substack{t\in b\\ t\neq j}}\frac{b_t}{(j-t)^{r+1}}\right\}, \qquad (6.1.26)$$

for $r \ge 0$, where C is given in (6.1.6).

Theorem 6.1.3. When p and q are both odd the density function of U is given by

$$f(u) = C\left\{\sum_{\nu=0}^{\infty}\frac{(-1)^{\nu}}{\nu!\,\Gamma\left(\tfrac{1}{2}-\nu\right)}\,u^{\frac{n+q}{2}-\frac{3}{2}+\nu}\prod_{t\in a}\left(\tfrac{1}{2}-\nu-t\right)^{-a_t}\prod_{t\in b}(-\nu-t)^{-b_t}\right.$$
$$+\; \sum_{j\in a}\frac{u^{\frac{n+q}{2}-1-j}}{(a_j-1)!}\sum_{v=0}^{a_j-1}\binom{a_j-1}{v}(-\log u)^{a_j-1-v}A_v' V'$$
$$\left.+\; \sum_{j\in b}\frac{u^{\frac{n+q}{2}-\frac{3}{2}-j}}{(b_j-1)!}\sum_{v=0}^{b_j-1}\binom{b_j-1}{v}(-\log u)^{b_j-1-v}B_v' W'\right\}, \qquad 0 < u < 1, \qquad (6.1.27)$$

where $A_v'$ and $B_v'$ have the same expressions as in (6.1.23) and (6.1.24) with $A_0$ and $B_0$ replaced by $A_0'$ and $B_0'$ respectively, and

$$V' = \frac{\Gamma\left(j-\tfrac{1}{2}\right)}{\Gamma(j)}\,V, \qquad W' = \frac{\Gamma(j)}{\Gamma\left(\tfrac{1}{2}+j\right)}\,W, \qquad (6.1.28)$$

$$A_0' = \psi\left(j-\tfrac{1}{2}\right) - \psi(j) + A_0, \qquad (6.1.29)$$

$$B_0' = \psi(j) - \psi\left(\tfrac{1}{2}+j\right) + B_0, \qquad (6.1.30)$$

$$A_0'^{(r)} = (-1)^{r+1}\,r!\left[\zeta\left(r+1,\ j-\tfrac{1}{2}\right) - \zeta(r+1,\ j)\right] + A_0^{(r)}, \qquad r \ge 1, \qquad (6.1.31)$$

$$B_0'^{(r)} = (-1)^{r+1}\,r!\left[\zeta(r+1,\ j) - \zeta\left(r+1,\ \tfrac{1}{2}+j\right)\right] + B_0^{(r)}, \qquad r \ge 1, \qquad (6.1.32)$$

where $\psi(\cdot)$ and $\zeta(\cdot,\cdot)$ are defined in Chapter V. The cumulative distribution function can be easily worked out by using Theorem 6.1.1 and hence the discussion is omitted.

Exact percentage points have been computed by using Theorems 6.1.1, 6.1.2 and 6.1.3; these are available in Mathai [180]. The exact percentage points for a number of other likelihood ratio test criteria have also been computed for the first time by using the expansions given in Chapter V. Only a few of them are given here; a number of others are available from the references cited at the end of this chapter.

6.1.2. The Problem of Testing Independence.

The h-th moment of a criterion V for testing independence of sub-vectors in a multinormal case is available in Anderson ([16], p.235) as

$$E(V^{h}) \;=\; C_1\,\frac{\prod_{j=p_0+1}^{p}\Gamma\left(\frac{n+1-j}{2}+h\right)}{\prod_{i=1}^{q}\prod_{j=1}^{p_i}\Gamma\left(\frac{n+1-j}{2}+h\right)}, \qquad (6.1.33)$$

where

$$C_1 \;=\; \frac{\prod_{i=1}^{q}\prod_{j=1}^{p_i}\Gamma\left(\frac{n+1-j}{2}\right)}{\prod_{j=p_0+1}^{p}\Gamma\left(\frac{n+1-j}{2}\right)},$$

$p = p_0 + p_1 + \cdots + p_q$, and $n, p_0, \dots, p_q$ are all non-negative integers. Evidently the density function of V is available from the inverse Mellin transform as

$$f_1(v) \;=\; C_1\,v^{-1}\,G^{\,p-p_0,\,0}_{\,p-p_0,\,p-p_0}\!\left[v \,\middle|\, \begin{matrix} \frac{n}{2}, \frac{n-1}{2}, \dots, \frac{n-p_1+1}{2},\ \dots,\ \frac{n}{2}, \frac{n-1}{2}, \dots, \frac{n-p_q+1}{2} \\[4pt] \frac{n-p_0}{2}, \frac{n-p_0-1}{2}, \dots, \frac{n-p+1}{2} \end{matrix}\right], \qquad 0 < v < 1. \qquad (6.1.34)$$

In order to use the results of Chapter V and to put (6.1.34) in computable forms one has to identify the poles of the integrand in (6.1.34). In this connection one has to consider three different cases. Case I: $p_0 \ge p_1 \ge \cdots \ge p_r$ all even and $p_{r+1}$ odd, such that $p_{r+1}$ in magnitude lies between $p_t$ and $p_{t+1}$ for some $t \le r$, and $p_0, \dots, p_r, p_{r+1}$ exhaust all of $p_0, p_1, \dots, p_q$; Case II: $p_0 \ge p_1 \ge \cdots \ge p_r$ all even and $p_{r+1} \ge p_{r+2} \ge \cdots \ge p_q$ all odd with $q-r = 2m$, $m = 0,1,\dots$; Case III: $p_0 \ge p_1 \ge \cdots \ge p_r$ all even and $p_{r+1} \ge \cdots \ge p_q$ all odd with $q-r = 2m+1$, $m = 0,1,\dots$. The ordering of the p's is done without loss of any generality. The simplification of the gammas in (6.1.33) is a lengthy process and hence further discussion is omitted. The details of the simplification and the exact density $f_1(v)$, in computable form, are available from Mathai and Rathie [190].

Remark: In the problems discussed in Sections 6.1.1 and 6.1.2 it may be observed that, in a number of cases, the gammas cancel out leaving linear factors in the denominator of the moment expressions in (6.1.2) and (6.1.33). Hence the distributions can also be worked out by using a generalized partial fraction technique developed by Mathai and Rathie [189].

6.1.3. Testing the Hypothesis that the Covariance Matrix is Diagonal.

This test is described in Anderson ([16], p.262) and the h-th moment of a criterion $W_1$ for testing this hypothesis is given as

$$E(W_1^{h}) \;=\; \frac{\Gamma^{p}\left(\frac{n}{2}\right)}{\Gamma^{p}\left(\frac{n}{2}+h\right)}\,\prod_{j=1}^{p}\frac{\Gamma\left(\frac{n+1-j}{2}+h\right)}{\Gamma\left(\frac{n+1-j}{2}\right)}, \qquad 0 < w_1 < 1, \qquad (6.1.35)$$

where n and p are positive integers and $\Gamma^{p}(\cdot) = \{\Gamma(\cdot)\}^{p}$. Therefore the density function of $W_1$, denoted by $f_2(\omega_1)$, is as follows:

$$f_2(\omega_1) \;=\; C_2\,\omega_1^{-1}\,G^{\,p-1,0}_{\,p-1,p-1}\!\left[\omega_1 \,\middle|\, \begin{matrix} \frac{n}{2},\ \frac{n}{2},\ \dots,\ \frac{n}{2} \\[4pt] \frac{n-1}{2},\ \frac{n-2}{2},\ \dots,\ \frac{n+1-p}{2} \end{matrix}\right], \qquad 0 < \omega_1 < 1, \qquad (6.1.36)$$

where

$$C_2 \;=\; \frac{\Gamma^{p}\left(\frac{n}{2}\right)}{\prod_{j=1}^{p}\Gamma\left(\frac{n+1-j}{2}\right)}.$$
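For p = 2 the criterion $W_1$ is known to reduce to a Beta((n-1)/2, 1/2) variate, so the reconstructed moment expression (6.1.35) can be spot-checked against that Beta moment; the reduction is an identity in the gammas. Parameter values below are illustrative.

```python
import math

def moment_6_1_35(p, n, h):
    # E(W_1^h) = Gamma^p(n/2)/Gamma^p(n/2+h)
    #            * prod_j Gamma((n+1-j)/2 + h)/Gamma((n+1-j)/2)
    log_m = p * (math.lgamma(n / 2) - math.lgamma(n / 2 + h))
    for j in range(1, p + 1):
        log_m += math.lgamma((n + 1 - j) / 2 + h) - math.lgamma((n + 1 - j) / 2)
    return math.exp(log_m)

def beta_moment(a, b, h):
    # E(B^h) for B ~ Beta(a, b)
    return math.exp(math.lgamma(a + h) + math.lgamma(a + b)
                    - math.lgamma(a) - math.lgamma(a + b + h))

n, h = 8, 1.7
w1_moment = moment_6_1_35(2, n, h)
beta_ref = beta_moment((n - 1) / 2, 0.5, h)
```

For p = 2 the j = 1 factor cancels one power of $\Gamma(n/2+h)/\Gamma(n/2)$, leaving exactly the Beta((n-1)/2, 1/2) gamma ratio.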

Case I, p even: When p is even the poles of the integrand in (6.1.36) are available by equating to zero the various factors in

$$\prod_{j=1}^{\frac{p}{2}-1}\left(\alpha - \tfrac{p}{2} + j\right)^{a_j}\ \prod_{j=1}^{\infty}\left(\alpha - \tfrac{1}{2} - \tfrac{p}{2} + j\right)^{b_j}, \qquad \alpha = s + \tfrac{n}{2}, \qquad (6.1.37)$$

where $a_j = j$ and

$$b_j = \begin{cases} j, & j = 1,2,\dots,\frac{p}{2}-1, \\[2pt] \frac{p}{2}, & j \ge \frac{p}{2}. \end{cases}$$

Hence when p is even, $f_2(\omega_1)$ reduces to the following form:

$$f_2(\omega_1) \;=\; C_2\,\omega_1^{-1}\sum_{j}\left\{\omega_1^{\frac{n}{2}-\frac{p}{2}+j}\,M_{a_j}(\omega_1) \;+\; \omega_1^{\frac{n}{2}-\frac{p}{2}-\frac{1}{2}+j}\,M_{b_j}(\omega_1)\right\}, \qquad 0 < \omega_1 < 1, \qquad (6.1.38)$$

where, for example, $M_{a_j}(\omega_1)$ stands for the expression

$$M_{a_j}(\omega_1) = \frac{1}{(a_j-1)!}\sum_{j_1=0}^{a_j-1}\binom{a_j-1}{j_1}(-\log \omega_1)^{a_j-1-j_1}\sum_{j_2=0}^{j_1-1}\binom{j_1-1}{j_2}A_{a_j}^{(j_1-1-j_2)}\sum_{j_3=0}^{j_2-1}\binom{j_2-1}{j_3}A_{a_j}^{(j_2-1-j_3)}\cdots\ B_{a_j}. \qquad (6.1.39)$$

The notation in (6.1.39) will be retained throughout the remaining subsections. In the various problems to be discussed below the quantities $a_j$, $b_j$, $A_{a_j}^{(t)}$, $A_{b_j}^{(t)}$, $B_{a_j}$, $B_{b_j}$ are different for the different problems. Further, $A_{a_j}^{(t)}$ is available from $B_{a_j}$ by the following technique. Introduce a dummy variable y in each factor of $B_{a_j}$, and then evaluate the logarithmic derivative of $B_{a_j}$ at y = 0. That is,

$$A_{a_j}^{(t)} \;=\; \frac{\partial^{\,t+1}}{\partial y^{\,t+1}}\,\log B_{a_j}(y)\,\Big|_{y=0}. \qquad (6.1.40)$$

Similarly $A_{b_j}^{(t)}$ is available from $B_{b_j}$. Hence in the following subsections only $a_j$, $b_j$, $B_{a_j}$ and $B_{b_j}$ will be listed. The method of obtaining $A_{a_j}^{(t)}$ from $B_{a_j}$ for (6.1.38) will be illustrated here.

i p(22 _j. ~)"


3 ..N(~2 _j_ p~2I)
r(~2 -j- ~)
B : ~ ~ -I] " (6.1.41)
aj ? 2 (~ -j) { (-j+l)(-j+2)2...(-l) j'l lJ+12J+2...(~ -j-l)2

Introduce a variable y in each factor and write B (y) as


a.
]

r(~2 _j_ ~ +7) 3 ~ +y)


B (y) = . (6.1.42)
aj £ P-I
F2 (~2 -J+Y) {(-j+l+y) (-j+2+y)2... (-l+y)j'l... (22 -j-l+y)2 }

Now take the logarithmic derivative of (6.1.42) and evaluate at y = 0 then one gets,

P-I
A (r) • r _> I = ( - 1 ) r + t r ' ( 2 Z ~ ( r + l , ~ - j - ~i - k )
k=o

i" 1
P ~(r+l, ~2 "j) + [ _ _ _ _ L + 2 +...+ (-I) r+l
2 (_ j+l)r+l (_j+2)r+l

P-I
+ (~+1) + (j+2) +...+ 2 ]}; (6.1.43)
I r+l 2r+l (~ -J-z)"
..r+l

A (0) is obtained from (6.1.43) by putting r = 0 and replacing ~(r+l,.) by

-~k(.), where ~ and zeta functions are defined in (5.3.2) and (5.3.12) respectively.

Also in (6.1.38), Bb, is as follows:


]
-201-

F(~-j) I~-j-l)...r(2)
Bb • = . . . . i P -i
J (.i)J-I(__2)0-2... (__j+l) ip ~ (I + ~2 -j)j [(3 __j)(5 __j)2...(~2 " ~ .j)2 ]

for j = 1,2.,.. , ~ -I ; (6.1.44)


" 2

R R R ~-I R
= ~(_1)2 (_~)~ ...(~_j)2 (~_j_1)2 ...(_~+1~1E~2 (~ +j j)

~(~ -j)(~ -~)~... (~ - 7I -~) I~_


2 I j~ -I for j > I~ (6.1.45)
-- 2 "

Case II, p odd: When p is odd the density function is of the form

$$f_2(\omega_1) \;=\; C_2\,\omega_1^{-1}\sum_{j}\left\{\omega_1^{\frac{n}{2}-\frac{p+1}{2}+j}\,M_{a_j}(\omega_1) \;+\; \omega_1^{\frac{n}{2}-\frac{p-1}{2}-\frac{1}{2}+j}\,M_{b_j}(\omega_1)\right\}, \qquad 0 < \omega_1 < 1, \qquad (6.1.46)$$

where $a_j$ and $b_j$ have the same forms as in (6.1.38) with p replaced by p+1 and p-1 respectively. $B_{a_j}$ and $B_{b_j}$ have the same structure as in Case I: $B_{a_j}$, given in (6.1.47), corresponds to the poles indexed by $j = 1,2,\dots,\frac{p-1}{2}$, and $B_{b_j}$ is given in (6.1.48), with separate expressions for $j = 1,2,\dots,\frac{p-1}{2}-1$ and for $j \ge \frac{p-1}{2}$; the explicit expressions are lengthy and are omitted here. Again $A_{a_j}^{(r)}$ and $A_{b_j}^{(r)}$ are available by the procedure described in (6.1.40).

6.1.4. Testing Equality of Diagonal Elements.

In the multinormal case the h-th moment of a criterion $W_2$ for testing the hypothesis that the diagonal elements are equal, given that the covariance matrix is diagonal, is given in Anderson ([16], p.262) as

$$E(W_2^{h}) \;=\; p^{ph}\,\frac{\Gamma^{p}\left(\frac{n}{2}+h\right)\,\Gamma\left(\frac{np}{2}\right)}{\Gamma^{p}\left(\frac{n}{2}\right)\,\Gamma\left(\frac{np}{2}+ph\right)}. \qquad (6.1.49)$$

The density of $W_2$ can be written in terms of an H-function, given in Section 5.8 of Chapter V, or in terms of a G-function after simplifying the gamma $\Gamma\left(\frac{np}{2}+ph\right)$ in (6.1.49) with the help of the Gauss-Legendre multiplication formula (1.2.6). By using the technique of Chapter V the density can then be represented in computable form. These different forms are given here.
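The multiplication formula invoked here, $\Gamma(pz) = (2\pi)^{(1-p)/2}\,p^{pz-\frac{1}{2}}\prod_{k=0}^{p-1}\Gamma\left(z+\frac{k}{p}\right)$, is what converts the H-function form into a G-function. A quick log-gamma check (values illustrative):

```python
import math

def log_gamma_direct(p, z):
    # log Gamma(p*z), computed directly
    return math.lgamma(p * z)

def log_gauss_legendre(p, z):
    # log of (2*pi)^((1-p)/2) * p^(p*z - 1/2) * prod_{k=0}^{p-1} Gamma(z + k/p)
    val = ((1 - p) / 2) * math.log(2 * math.pi) + (p * z - 0.5) * math.log(p)
    for k in range(p):
        val += math.lgamma(z + k / p)
    return val
```

The case p = 2 is the familiar Legendre duplication formula.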

$$f_3(\omega_2) \;=\; C_3'\,\omega_2^{-1}\,H^{\,p,0}_{\,1,p}\!\left[\frac{\omega_2}{p^{p}} \,\middle|\, \begin{matrix} \left(\frac{np}{2},\ p\right) \\[4pt] \left(\frac{n}{2},1\right),\ \left(\frac{n}{2},1\right),\ \dots,\ \left(\frac{n}{2},1\right) \end{matrix}\right]$$

$$=\; C_3\,\omega_2^{-1}\,G^{\,p-1,0}_{\,p-1,p-1}\!\left[\omega_2 \,\middle|\, \begin{matrix} \frac{n}{2}+\frac{1}{p},\ \frac{n}{2}+\frac{2}{p},\ \dots,\ \frac{n}{2}+\frac{p-1}{p} \\[4pt] \frac{n}{2},\ \frac{n}{2},\ \dots,\ \frac{n}{2} \end{matrix}\right]$$

$$=\; C_3\,\omega_2^{-1}\sum_{v=0}^{\infty}\omega_2^{\frac{n}{2}+v}\,M_{a_j}(\omega_2), \qquad 0 < \omega_2 < 1, \qquad (6.1.50)$$

where $M_{a_j}(\omega_2)$ has the same form as in (6.1.39) with $a_j = p-1$,

$$C_3' \;=\; \frac{\Gamma\left(\frac{np}{2}\right)}{\Gamma^{p}\left(\frac{n}{2}\right)}, \qquad C_3 \;=\; \frac{(2\pi)^{\frac{p-1}{2}}\,\Gamma\left(\frac{np}{2}\right)}{p^{\frac{np-1}{2}}\,\Gamma^{p}\left(\frac{n}{2}\right)}, \qquad (6.1.51)$$

$$B_{a_j} \;=\; \frac{(-1)^{v(p-1)}}{\left[(1)(2)\cdots(v)\right]^{p-1}}\ \prod_{r=1}^{p-1}\Gamma^{-1}\!\left(-v+\tfrac{r}{p}\right), \qquad (6.1.52)$$

n and p are positive integers, and, for $r \ge 1$,

$$A_{a_j}^{(r)} \;=\; (-1)^{r+1}\,r!\left\{(p-1)\,\zeta(r+1,1) \;-\; \sum_{j=1}^{p-1}\zeta\!\left(r+1,\,-v+\tfrac{j}{p}\right) \;+\; (p-1)\left[\frac{1}{(-1)^{r+1}} + \cdots + \frac{1}{(-v)^{r+1}}\right]\right\}. \qquad (6.1.53)$$

Put r = 0 in (6.1.53) and replace $\zeta(r+1,\cdot)$ by $-\psi(\cdot)$ to obtain $A_{a_j}^{(0)}$, where the $\psi$ and zeta functions are defined in (5.3.2) and (5.3.12) respectively.

6.1.5. The Sphericity Test.

The hypothesis that the covariance matrix $\Sigma$ is of the form $\sigma^2 I_p$, where $\sigma^2$ is an unknown scalar and $I_p$ is an identity matrix, is often known as the sphericity hypothesis. This test is a combination of the tests given in 6.1.3 and 6.1.4. The h-th moment of a criterion W for testing sphericity in the multinormal case is given in Anderson [16] as

$$E(W^{h}) \;=\; p^{ph}\,\frac{\Gamma\left(\frac{np}{2}\right)}{\Gamma\left(\frac{np}{2}+ph\right)}\,\prod_{j=1}^{p}\frac{\Gamma\left(\frac{n+1-j}{2}+h\right)}{\Gamma\left(\frac{n+1-j}{2}\right)}. \qquad (6.1.54)$$

As indicated in Section 6.1.4, the density function of W can be put in terms of an H-function, a G-function, and computable elementary functions as follows:

$$f_4(\omega) \;=\; C_4'\,H^{\,p,0}_{\,1,p}\!\left[\frac{\omega}{p^{p}} \,\middle|\, \begin{matrix} \left(\frac{np}{2}-p,\ p\right) \\[4pt] \left(\frac{n-1-j}{2},\ 1\right),\ j = 1,2,\dots,p \end{matrix}\right]$$

$$=\; C_4\,G^{\,p-1,0}_{\,p-1,p-1}\!\left[\omega \,\middle|\, \begin{matrix} \frac{n-2}{2}+\frac{j}{p},\ j = 1,2,\dots,p-1 \\[4pt] \frac{n-2}{2}-\frac{j}{2},\ j = 1,2,\dots,p-1 \end{matrix}\right]$$

$$=\; C_4\left[\sum_{j}\omega^{\frac{n}{2}-\frac{p}{2}-1+j}\,M_{a_j}(\omega) \;+\; \sum_{j}\omega^{\frac{n}{2}-\frac{p}{2}-\frac{3}{2}+j}\,M_{b_j}(\omega)\right], \qquad 0 < \omega < 1, \qquad (6.1.55)$$

where $M_{a_j}(\omega)$ is given in (6.1.39), n and p are positive integers, and

$$C_4' \;=\; \frac{\Gamma\left(\frac{np}{2}\right)}{p^{p}\,\prod_{j=1}^{p}\Gamma\left(\frac{n+1-j}{2}\right)}, \qquad C_4 \;=\; \frac{(2\pi)^{\frac{p-1}{2}}\,\Gamma\left(\frac{np}{2}\right)}{p^{\frac{np-1}{2}}\,\prod_{j=1}^{p}\Gamma\left(\frac{n+1-j}{2}\right)}. \qquad (6.1.56)$$

$B_{a_j}$ and $B_{b_j}$ occurring in (6.1.55), and the corresponding $A_{a_j}^{(r)}$ and $A_{b_j}^{(r)}$, are available by the procedure described in (6.1.40). The exponents are as follows.

Case I, p odd (p ≥ 3):

$$a_j = b_j = \begin{cases} j, & j = 1,2,\dots,\frac{p-3}{2}, \\[2pt] \frac{p-1}{2}, & j = \frac{p-1}{2},\ \frac{p+1}{2},\ \dots. \end{cases}$$

Case II, p even (p ≥ 4):

$$a_j = \begin{cases} j, & j = 1,2,\dots,\frac{p-2}{2}, \\[2pt] \frac{p}{2}-1, & j = \frac{p}{2}-1,\ \frac{p}{2},\ \dots; \end{cases} \qquad b_j = \begin{cases} j, & j = 1,2,\dots,\frac{p}{2}, \\[2pt] \frac{p}{2}, & j = \frac{p}{2}+1,\ \frac{p}{2}+2,\ \dots. \end{cases}$$

In both cases $B_{a_j}$ and $B_{b_j}$ are, as before, the values of the non-singular factors of the integrand at the respective poles, that is, ratios of products of gammas to powers of linear factors in j; the explicit expressions, given in (6.1.57) for Case I and in (6.1.58) for Case II, are lengthy and are omitted here. In Case II, when $j = p/2$, the vanishing factor $\left(-j+\frac{p}{2}\right)$ is to be deleted from the denominator.
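A useful spot-check of (6.1.54): for p = 2 the right side reduces, via the duplication formula, to the h-th moment of a Beta((n-1)/2, 1) variate. The following sketch verifies this numerically; the parameter values are illustrative.

```python
import math

def sphericity_moment(p, n, h):
    # E(W^h) from (6.1.54): p^(ph) Gamma(np/2)/Gamma(np/2+ph)
    #   * prod_j Gamma((n+1-j)/2 + h)/Gamma((n+1-j)/2)
    log_m = (p * h * math.log(p) + math.lgamma(n * p / 2)
             - math.lgamma(n * p / 2 + p * h))
    for j in range(1, p + 1):
        log_m += math.lgamma((n + 1 - j) / 2 + h) - math.lgamma((n + 1 - j) / 2)
    return math.exp(log_m)

def beta_moment(a, b, h):
    # E(B^h) for B ~ Beta(a, b)
    return math.exp(math.lgamma(a + h) + math.lgamma(a + b)
                    - math.lgamma(a) - math.lgamma(a + b + h))

n, h = 9, 2.3
w_moment = sphericity_moment(2, n, h)
beta_ref = beta_moment((n - 1) / 2, 1.0, h)
```

The factor $p^{ph} = 4^{h}$ cancels exactly the $2^{-2h}$ produced when $\Gamma(n+2h)$ is split by the duplication formula, which is why the reduction is parameter-free.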

There are many other test criteria described in books on Multivariate Analysis. The exact distributions of the likelihood ratio criteria for several other tests are given in Mathai [172]. Some of these fall under the category of H-functions, but these are reducible to G-functions and hence the techniques of Chapter V are applicable. There are some other test criteria for which the distributions do not fall in the category of G-functions, and the exact distributions for these are not yet available for the general cases. One such test is the one for testing that the mean vector equals a given vector and the covariance matrix equals a given matrix in a multivariate normal distribution. Only approximations are available for the distribution of the likelihood ratio criterion for testing this hypothesis. A description of the test is available in Anderson [16].

6.2. THE EXACT NON-NULL DISTRIBUTIONS OF MULTIVARIATE TEST CRITERIA

This is another topic where the theory of the G-function is applicable. Non-null distributions are the distributions of statistical test criteria when the null hypothesis is not assumed to hold. These distributions, and their representations in computable forms, are needed for comparisons of tests and for studying various other properties of tests. The exact non-null distributions of the likelihood ratio criteria for testing hypotheses on multinormal populations were not available for most of the problems till the 1960's. A breakthrough was achieved in this

direction with the help of the theory of Zonal Polynomials and Hypergeometric Functions of matrix arguments developed by several authors, of which the main articles are James ([122],[123],[124],[125]), Herz [117] and Constantine [67]. Mathai and Saxena [199] defined and developed the theory of the G-function with a matrix argument.

Exact non-null distributions of various test criteria have been represented in terms of G-functions by several authors, including Khatri and Srivastava [136], Pillai, Al-Ani and Jouris [231], Pillai and Jouris [234] and Pillai and Nagarsenker [235]. In these articles, as remarked earlier, the problems under consideration are not thereby solved, because the problems are logarithmic cases and hence these representations do not give computable forms. The non-central distribution of the determinant of a Wishart matrix and the non-null distributions of a collection of multivariate test criteria were obtained for the general cases, and for the first time, in Mathai ([183],[184]). This section is mainly based on Mathai [183].

In order to introduce the non-null distributions one needs a Hypergeometric function of a matrix argument, which is in turn defined in terms of Zonal Polynomials. Hence a brief description of Zonal Polynomials is given here.

6.2.1. Zonal Polynomials.

Let A be a positive definite symmetric m × m matrix and $\varphi(A)$ a polynomial in the elements of A. Consider the transformation

$$\varphi(A) \;\longrightarrow\; \varphi\!\left(L^{-1}A\,L'^{-1}\right), \qquad L \in GL(m), \qquad (6.2.1)$$

which defines a representation of the real linear group GL(m) in the vector space of all polynomials in A, where L' denotes the transpose of L. Under the transformation (6.2.1) the space $V_k$ of homogeneous polynomials of degree k is invariant, and $V_k$ decomposes into the direct sum of irreducible subspaces,

$$V_k \;=\; \sum_{K}\oplus\ V_{k,K}, \qquad (6.2.2)$$

where

$$K = (k_1,\dots,k_m), \qquad k_1 \ge k_2 \ge \cdots \ge k_m, \qquad k_1 + k_2 + \cdots + k_m = k, \qquad (6.2.3)$$

and the summation is over all partitions of k into not more than m parts. Each $V_{k,K}$ contains a unique one-dimensional subspace invariant under the orthogonal group O(m). These subspaces are generated by the Zonal Polynomials $Z_K(A)$, which are invariant under the orthogonal group. That is,

$$Z_K(H'AH) \;=\; Z_K(A), \qquad H \in O(m). \qquad (6.2.4)$$

The Zonal Polynomials are homogeneous symmetric polynomials in the eigenvalues of A. Instead of $Z_K(A)$, Constantine [67] uses a normalized polynomial $C_K(A)$, where

$$C_K(A) \;=\; c(K)\,Z_K(A)\big/\left[(1)(3)(5)\cdots(2k-1)\right], \qquad (6.2.5)$$

and $c(K)$ is the degree of the representation $[2K]$ of the symmetric group on 2k symbols.
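For k = 1 and k = 2 the normalized polynomials have explicit classical forms: $C_{(1)}(A) = \mathrm{tr}\,A$, $C_{(2)}(A) = \left[(\mathrm{tr}\,A)^2 + 2\,\mathrm{tr}\,A^2\right]/3$ and $C_{(1,1)}(A) = 2\left[(\mathrm{tr}\,A)^2 - \mathrm{tr}\,A^2\right]/3$, and the partition sum satisfies $\sum_K C_K(A) = (\mathrm{tr}\,A)^k$. A minimal sketch with a concrete 2 × 2 matrix:

```python
# degree-2 zonal polynomials of a symmetric 2x2 matrix, using the
# classical closed forms C_(2) = ((tr A)^2 + 2 tr A^2)/3 and
# C_(1,1) = 2((tr A)^2 - tr A^2)/3, which satisfy sum_K C_K(A) = (tr A)^2
A = [[2.0, 0.5],
     [0.5, 1.0]]
tr_A = A[0][0] + A[1][1]
A_sq = [[sum(A[i][k] * A[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
tr_A_sq = A_sq[0][0] + A_sq[1][1]

C_1 = tr_A                                  # C_(1)(A)
C_2 = (tr_A ** 2 + 2 * tr_A_sq) / 3         # C_(2)(A)
C_11 = 2 * (tr_A ** 2 - tr_A_sq) / 3        # C_(1,1)(A)
```

Both polynomials are symmetric functions of the eigenvalues of A, as required by the orthogonal invariance (6.2.4).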

6.2.2. Hypergeometric Functions with Matrix Arguments.

Let Z be a complex symmetric m × m matrix. The Hypergeometric function with argument Z is defined as

$${}_pF_q\left(a_1,\dots,a_p;\ b_1,\dots,b_q;\ Z\right) \;=\; \sum_{k=0}^{\infty}\sum_{K}\frac{(a_1)_K\cdots(a_p)_K}{(b_1)_K\cdots(b_q)_K}\,\frac{C_K(Z)}{k!}, \qquad (6.2.6)$$

with $p < q+1$, or $p = q+1$ and $\|Z\| < 1$, where $\|Z\|$ is a suitable norm, for example the maximum of the absolute values of the characteristic roots of Z,

$$(a)_K \;=\; \prod_{i=1}^{m}\left(a - \tfrac{i-1}{2}\right)_{k_i}, \qquad K = (k_1,\dots,k_m),\ k_1 \ge \cdots \ge k_m \ge 0,$$
$$k = k_1 + k_2 + \cdots + k_m, \qquad (a)_n = a(a+1)\cdots(a+n-1). \qquad (6.2.7)$$

The parameters $a_i$ and $b_j$ are all complex numbers, and no $b_j$ is permitted to be an integer or half-integer small enough to make a denominator factor $(b_j)_K$ vanish. When any $a_i$ is a negative integer the series terminates. By using the above concepts the non-null distributions can be worked out in terms of computable series. For a detailed discussion of the various problems in this direction see Mathai [172]. For the purpose of illustration we will discuss one problem here.

6.2.3. A Test Statistic and Its Non-null Distribution.

Consider the independent matrix variates $X(p \times n_1)$ and $Y(p \times n_2)$, $p \le n_i$, $i = 1,2$, with the columns of X and Y distributed according to the multinormal distributions $N_p(O, \Sigma_1)$ and $N_p(O, \Sigma_2)$ respectively, where $N_p(\mu, \Sigma)$ stands for a p-variate multinormal density with mean vector $\mu$ and covariance matrix $\Sigma$. Then $S_1 = XX'$ and $S_2 = YY'$ are independent and have the Wishart distributions $W_p(n_i, \Sigma_i)$, $i = 1,2$, where $W_p(n, \Sigma)$ stands for the density of a Wishart distribution on p variates, n degrees of freedom and covariance matrix $\Sigma$. Let $0 < f_1 < \cdots < f_p < \infty$ be the roots of

$$|S_1 - f\,S_2| = 0, \qquad (6.2.8)$$

and $0 < \lambda_1 \le \lambda_2 \le \cdots \le \lambda_p < \infty$ be the roots of

$$|\Sigma_1 - \lambda\,\Sigma_2| = 0. \qquad (6.2.9)$$

Then a criterion W for testing the hypothesis

$$H:\ \Lambda/\delta = I_p,\qquad \delta > 0\ \text{known},$$

is given by

$$W \;=\; \prod_{i=1}^{p}(1-e_i) \;=\; |I_p - E_1|, \qquad (6.2.10)$$

where $I_p$ is an identity matrix of order p, $\Lambda = \mathrm{diag}(\lambda_1,\dots,\lambda_p)$, $E_1 = \mathrm{diag}(e_1,\dots,e_p)$, $e_i = \delta f_i/(1+\delta f_i)$, $i = 1,2,\dots,p$, and $|(\cdot)|$ stands for the determinant of $(\cdot)$. The h-th moment of W is available in Pillai, Al-Ani and Jouris [231] as

$$E(W^{h}) \;=\; \frac{\Gamma_p\left(\frac{n}{2}\right)\,\Gamma_p\left(\frac{n_2}{2}+h\right)}{\Gamma_p\left(\frac{n_2}{2}\right)\,\Gamma_p\left(\frac{n}{2}+h\right)}\,|\delta\Lambda|^{-\frac{n_1}{2}}\ {}_2F_1\!\left(\tfrac{n}{2},\ \tfrac{n_1}{2};\ \tfrac{n}{2}+h;\ I_p - (\delta\Lambda)^{-1}\right), \qquad (6.2.11)$$

where E denotes mathematical expectation, $n = n_1 + n_2$ and

$$\Gamma_p(u) \;=\; \pi^{p(p-1)/4}\prod_{i=1}^{p}\Gamma\!\left(u - \tfrac{i-1}{2}\right). \qquad (6.2.12)$$

By taking the inverse Mellin transform of (6.2.11) the density function can be written as

$$g(\omega) \;=\; \sum_{k=0}^{\infty}\sum_{K} A_K\,\omega^{\frac{n_2}{2}-1}\,G^{\,p,0}_{\,p,p}\!\left[\omega \,\middle|\, \begin{matrix} \frac{n_1}{2}+k_i-\frac{i-1}{2},\ i = 1,\dots,p \\[4pt] -\frac{i-1}{2},\ i = 1,2,\dots,p \end{matrix}\right], \qquad 0 < \omega < 1, \qquad (6.2.13)$$

where $A_K$ stands for the expression

$$A_K \;=\; \frac{\Gamma_p\left(\frac{n}{2}\right)\,|\delta\Lambda|^{-\frac{n_1}{2}}}{\Gamma_p\left(\frac{n_2}{2}\right)}\ \frac{\left(\frac{n}{2}\right)_K\left(\frac{n_1}{2}\right)_K\,C_K\!\left(I_p-(\delta\Lambda)^{-1}\right)}{k!}. \qquad (6.2.14)$$

The density function in (6.2.13) can be put into computable form by expressing the G-function in (6.2.13) in computable form. By using the technique of Chapter V and the notations in (6.1.39) and (6.1.40) we write, for convenience, the G-function part of $g(\omega)$ as $g_1(\omega)$, and in $g_1(\omega)$ we write $n_1$ as q. For Cases I, II and III, to be discussed below,

$$g_1(\omega) \;=\; \sum_{j\in a}\omega^{-\frac{q}{2}+j}\,M_{a_j}(\omega) \;+\; \sum_{j\in b}\omega^{-\frac{q}{2}-\frac{1}{2}+j}\,M_{b_j}(\omega), \qquad 0 < \omega < 1, \qquad (6.2.15)$$

where $a_j$, $b_j$, $B_{a_j}$ and $B_{b_j}$ are listed below for the different values of p and $q = n_1$.
The exponents $a_j$, $b_j$ and the index sets a and b for the different cases are given in (6.2.16) to (6.2.19). They are obtained, exactly as in Section 6.1.1, by counting the orders of the poles of the integrand in (6.2.13), and they now depend on p, on $q = n_1$, and on the partition $K = (k_1,\dots,k_p)$ through the quantities $k_1,\dots,k_p$. The explicit listings are lengthy and are omitted here.

For Cases I, II and III,

$$B_{a_j} \;=\; \prod_{\substack{t\in a\\ t\neq j}}(-j+t)^{-a_t}\ \prod_{t\in b}\left(-j-\tfrac{1}{2}+t\right)^{-b_t}; \qquad (6.2.20)$$

$$B_{b_j} \;=\; \prod_{t\in a}\left(\tfrac{1}{2}-j+t\right)^{-a_t}\ \prod_{\substack{t\in b\\ t\neq j}}(-j+t)^{-b_t}. \qquad (6.2.21)$$

For Case IV,

$$g_1(\omega) \;=\; \sum_{j\in a}\omega^{-\frac{q}{2}+j}\,M_{a_j}(\omega) \;+\; \sum_{j\in b}\omega^{-\frac{q}{2}-\frac{1}{2}+j}\,M_{b_j}(\omega) \;+\; \sum_{\nu=0}^{\infty}R_{\nu}, \qquad 0 < \omega < 1, \qquad (6.2.22)$$

where the $B_{a_j}$ and $B_{b_j}$ of Case IV are the $B_{a_j}$ and $B_{b_j}$ of (6.2.20) and (6.2.21) multiplied by $\Gamma\left(\frac{p}{2}-j+\frac{q-1}{2}+k_2\right)\big/\Gamma\left(\frac{p}{2}-j+\frac{q}{2}+k_1\right)$ and $\Gamma\left(\frac{p}{2}+\frac{1}{2}-j+\frac{q-1}{2}+k_2\right)\big/\Gamma\left(\frac{p}{2}+\frac{1}{2}-j+\frac{q}{2}+k_1\right)$ respectively, and $R_\nu$, given in (6.2.23), contains the factor $(-1)^{\nu}\omega^{\frac{q+1}{2}+k_2+\nu}/\nu!$, a ratio of gammas depending on $k_1 - k_2 - \nu$, and reciprocal linear factors over $t \in a$ and $t \in b$ with exponents $a_t$ and $b_t$; it collects the contributions of the simple poles arising from the leftover gamma, in the same way as the $\nu$-series in Theorem 6.1.3. Now the density in (6.2.13) is available in computable form from (6.2.13) to (6.2.23).

Remark 1: There are several test criteria associated with the test criterion discussed in this section. Two of them are the likelihood ratio test criteria associated with Wilks' criterion for testing regression and for testing independence when there are only two subvectors. The exact distributions of these are available from Mathai [183].

Remark 2: The exact non-null distributions of most of the test criteria mentioned in Section 6.1 are still open problems. From the discussion in Section 6.2 it should be remarked that once the non-null moments of these criteria are available in terms of Hypergeometric functions of matrix arguments, the exact distributions are easily available by using the procedure discussed in this section.

Remark 3: In a recent series of papers Davis has employed the method of differential equations in deriving the exact distributions, and he has obtained some particular cases. For a paper of this category see Davis [77]. It should be remarked that whenever the density functions are representable in terms of G-functions the differential equations can be written down directly from (1.5.1), as may be noticed from the different problems considered in Sections 6.1 and 6.2. The method of differential equations, for getting exact distributions of multivariate test criteria, was employed for the first time by Nair [215], and from this paper it can be noticed that the method is not a powerful one. The most powerful technique available so far for tackling these types of problems is the one given in Chapter V and Sections 6.1 and 6.2.

6.3. CHARACTERIZATIONS OF PROBABILITY LAWS.

Characterization of probability laws is a fast developing branch of Statistics and Probability Theory. A forthcoming book by C.R. Rao, to be published by Wiley and Sons, New York, contains up to date literature in this field. Some applications of characterization theorems may be seen from Csorgo and Seshadri [75].

There is another type of characterization problem in Statistics, namely the characterization of fundamental concepts or different measures which are used in Statistics and Information Theory. This is a method of giving axiomatic definitions to the various concepts in Statistics and Information Theory. Recent developments in this direction are available from the report of Mathai and Rathie [192].

Characterization of a probability law means that a particular probability distribution is shown to be the only distribution enjoying a given set of properties. It is evident that a G-function, due to its generality, may not be useful in showing that a particular property is the characteristic property of a probability law. But a G-function is useful in showing the converse, that a particular property is not a unique property of a certain distribution, and thus a G-function helps to produce counter-examples in such problems. Some such uses will be indicated here.

6.3.1. The Gamma Property

Consider a real gamma variate X with the density function

h(x) = \frac{\delta^{\alpha}}{\Gamma(\alpha)}\, x^{\alpha-1} e^{-\delta x}, \quad \delta > 0,\ \alpha > 0,\ 0 < x < \infty.     (6.3.1)

The characteristic function of X is given by

\varphi_X(t) = E(e^{itX}) = \frac{\delta^{\alpha}}{\Gamma(\alpha)} \int_0^{\infty} x^{\alpha-1} e^{-\delta x} e^{itx}\, dx = \left(1 - \frac{it}{\delta}\right)^{-\alpha},     (6.3.2)

where i = (-1)^{1/2}, t is an arbitrary real constant and E denotes mathematical expectation.

If X_1, \ldots, X_n is a simple random sample from this gamma population, that is, a set of independently and identically distributed gamma variates, then the characteristic function of the sample mean

\bar{X} = (X_1 + \cdots + X_n)/n

is given by

\varphi_{\bar{X}}(t) = \left[\varphi_X\!\left(\frac{t}{n}\right)\right]^n = \left(1 - \frac{it}{n\delta}\right)^{-n\alpha}.     (6.3.3)

Since the characteristic function uniquely determines the distribution, which follows from the properties of Fourier transforms, a comparison of (6.3.2) and (6.3.3) shows that the sample mean \bar{X} again has a gamma distribution, with the parameters \alpha and \delta scaled by the sample size n; that is, the new parameters are n\alpha and n\delta. It is natural to ask whether this property is a unique property of the gamma distribution or not. By using some properties of Special Functions it can be shown that this is not a characteristic property of the gamma distribution. We will illustrate it by taking a density function associated with a Bessel function.
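This scaling property is easy to verify numerically. The following sketch (not part of the original text; it assumes NumPy and SciPy are available) simulates sample means of n i.i.d. gamma variates and tests them against the gamma law with parameters n\alpha and n\delta:

```python
# Numerical check of (6.3.3): the mean of n i.i.d. gamma(alpha, rate delta)
# variates is again gamma distributed, with parameters n*alpha and n*delta.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, delta, n = 2.0, 3.0, 5

# 20000 replications of the sample mean of n gamma variates
means = rng.gamma(shape=alpha, scale=1.0 / delta, size=(20000, n)).mean(axis=1)

# Kolmogorov-Smirnov test against gamma with shape n*alpha and rate n*delta
ks = stats.kstest(means, stats.gamma(a=n * alpha, scale=1.0 / (n * delta)).cdf)
assert ks.pvalue > 0.01
```

Note that the scale parameter of `scipy.stats.gamma` is the reciprocal of the rate, so the fitted law here has shape n\alpha = 10 and scale 1/(n\delta) = 1/15.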

Theorem 6.3.1. If a real stochastic variable Y has the probability law

h_1(y, \nu, p, a) = \frac{\nu\, e^{-py}\, I_{\nu}(ay)}{y\, a^{\nu}\, [p + (p^2 - a^2)^{1/2}]^{-\nu}}     (6.3.4)

for y > 0, \nu > 0, p > a > 0, and h_1(y, \nu, p, a) = 0 elsewhere, then the sample mean has the probability law h_1(y, n\nu, np, na), where n is the sample size and I_{\nu}(\cdot) is the modified Bessel function of the first kind which is defined in Chapter II.

Proof: From Erdélyi, A. et al. ([86], I, p.195) we have

\int_0^{\infty} e^{-pt}\, I_{\nu}(at)\, t^{-1}\, dt = \nu^{-1}\, a^{\nu}\, \left[p + (p^2 - a^2)^{1/2}\right]^{-\nu}.     (6.3.5)

Now the characteristic function of Y, denoted by \varphi(t, \nu, p, a), is available from (6.3.5). That is,

\varphi(t, \nu, p, a) = \frac{\left\{(p - it) + [(p - it)^2 - a^2]^{1/2}\right\}^{-\nu}}{[p + (p^2 - a^2)^{1/2}]^{-\nu}}.     (6.3.6)

Therefore

\left[\varphi\!\left(\frac{t}{n}, \nu, p, a\right)\right]^n = \varphi(t, n\nu, np, na).     (6.3.7)

This completes the proof.

Remark: By using a G-function one can construct more general classes of distributions enjoying the same property. Similar properties are investigated by Mathai and Saxena [195]. Due to the presence of a number of parameters, the various functions appearing in this and in the remaining sections are all assumed to be non-negative, without loss of any generality.

6.3.2. The Ratio Property

It is a well known fact that the ratio of two independent standard normal variables is distributed according to a Cauchy probability law. It is natural to ask the converse question: if the ratio of two independent variates is distributed according to a Cauchy distribution, are the individual variables standard normal? Several authors have given counterexamples; see Laha [145]. There are also two other ratio distributions frequently used in Statistical Inference, namely the Student-t and the F-statistic. Mathai and Saxena [197] developed a general technique for constructing counterexamples for such problems. These counterexamples belong to very general classes of distributions, and wider classes can be constructed by using G-functions. We will state one such theorem here.

Theorem 6.3.2. If two independent stochastic variables X_1 and X_2 have the density functions

M_{a_i,\nu}(x) = \frac{\Gamma(a_i + 1 + \nu)\, x^{\nu}}{2^{\nu}\, \Gamma(\nu+1)\, \Gamma(a_i)}\; {}_2F_1\!\left(\frac{a_i+\nu+1}{2}, \frac{a_i+\nu+2}{2};\ \nu+1;\ -x^2\right),     (6.3.8)

for x > 0, i = 1, 2, then X_1/X_2 has the F-distribution for all values of \nu such that M_{a_i,\nu}(x) exists.


Proof: A variable X is said to have an F-distribution if its density function is of the form

h_2(x) = \frac{\Gamma(\beta_1+\beta_2)}{\Gamma(\beta_1)\Gamma(\beta_2)}\, \frac{x^{\beta_1-1}}{(1+x)^{\beta_1+\beta_2}},\quad x > 0,\ \beta_1 > 0,\ \beta_2 > 0.     (6.3.9)

This is a slightly modified form of the density. It is easy to see that (6.3.8) is a density, that is, M_{a_i,\nu}(x) > 0 for all x and \int_0^{\infty} M_{a_i,\nu}(x)\, dx = 1 for i = 1, 2. This follows from Erdélyi, A. et al. ([86], I, p.336 (3)). Now the theorem is proved if we can show that

\int_0^{\infty} u\, M_{a_1,\nu}(ux)\, M_{a_2,\nu}(u)\, du = \frac{\Gamma(a_1+a_2)}{\Gamma(a_1)\Gamma(a_2)}\, \frac{x^{a_2-1}}{(1+x)^{a_1+a_2}},     (6.3.10)

because when X_1 and X_2 are independently distributed according to (6.3.8), the density of the ratio X_1/X_2 is given by the left-hand side of (6.3.10). From Erdélyi, A. et al. ([86], II, p.29) we have

\int_0^{\infty} t\, J_{\nu}(ut)\, \frac{t^{a_i-1}\, e^{-t}}{\Gamma(a_i)}\, dt = M_{a_i,\nu}(u),\quad \Re(a_i + \nu) > -1,\ u > 0,     (6.3.11)

and

\int_0^{\infty} t\, J_{\nu}(ut)\, \frac{t^{a_i-1}\, e^{-t/a}}{\Gamma(a_i)}\, dt = M_{a_i,\nu}(au)\, a^{a_i+1}.     (6.3.12)

Now, by applying the Parseval property of the Hankel transform, we have

\int_0^{\infty} u\, M_{a_1,\nu}(au)\, M_{a_2,\nu}(u)\, du = \int_0^{\infty} t \left[\frac{t^{a_1-1}\, e^{-t/a}}{\Gamma(a_1)\, a^{a_1+1}}\right] \left[\frac{t^{a_2-1}\, e^{-t}}{\Gamma(a_2)}\right] dt = \frac{\Gamma(a_1+a_2)}{\Gamma(a_1)\Gamma(a_2)}\, \frac{a^{a_2-1}}{(1+a)^{a_1+a_2}},\quad \text{for all } \nu.     (6.3.13)

This completes the proof.



Note: For convenience, the Parseval property of the Hankel transform is stated here. If

\int_0^{\infty} t\, J_{\nu}(pt)\, p_1(t)\, dt = \varphi_1(p) \quad \text{and} \quad \int_0^{\infty} t\, J_{\nu}(pt)\, p_2(t)\, dt = \varphi_2(p),

then

\int_0^{\infty} t\, \varphi_1(t)\, \varphi_2(t)\, dt = \int_0^{\infty} p\, p_1(p)\, p_2(p)\, dp,

where J_{\nu}(\cdot) is the Bessel function defined in Chapter II.
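After the Parseval step, the right-hand side of (6.3.13) is an ordinary gamma integral, so the identity can be checked numerically without any Bessel functions. A small sketch (SciPy assumed; a_1, a_2 and a are arbitrary positive values):

```python
# Numerical check of (6.3.13): the Parseval step reduces the ratio-density
# integral to a gamma integral whose value is the F-type density (6.3.9),
# with beta_1 = a_2 and beta_2 = a_1, evaluated at the point a.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

a1, a2, a = 1.7, 2.3, 0.8

integrand = lambda t: (t * (t**(a1 - 1) * np.exp(-t / a) / (gamma(a1) * a**(a1 + 1)))
                         * (t**(a2 - 1) * np.exp(-t) / gamma(a2)))
lhs, _ = quad(integrand, 0, np.inf)
rhs = gamma(a1 + a2) / (gamma(a1) * gamma(a2)) * a**(a2 - 1) / (1 + a)**(a1 + a2)
assert abs(lhs - rhs) < 1e-10
```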

Remark: From the nature of Theorem 6.3.2 it can be seen that wider classes can be constructed by using G-functions. Several theorems similar to the one stated above, and further references to the literature on this topic, are given in Mathai and Saxena [197].

6.4. PRIOR AND POSTERIOR DISTRIBUTIONS

In the branch of Statistical Inference known as Bayesian Inference, statistical distributions involving random parameters play an important role. A number of authors have investigated the posterior distributions of random variables when one or more parameters have prior distributions. Several such practical problems can be found in standard books on Bayesian Statistical Inference and on Probability Theory; for example, see Karlin ([133], p.21). Instead of dealing with particular conditional and unconditional distributions, one can use generalized Special Functions to derive wide classes of unconditional distributions from a very general class of conditional distributions. Several such results are obtained by Kaufman, Mathai and Saxena [134]. One such theorem will be given here.

Theorem 6.4.1. Let the conditional distribution of a real stochastic variable X (discrete or continuous), given the parameter p, be a generalized gamma with the density function

f(x|p) = \frac{\beta\, p^{\alpha/\beta}}{\Gamma(\alpha/\beta)}\, x^{\alpha-1}\, e^{-p x^{\beta}},\quad x > 0,\ \alpha, \beta, p > 0,     (6.4.1)

and f(x|p) = 0 elsewhere, where \alpha and \beta are additional parameters. Let the marginal distribution of p be given by a density g(p), where

g(p) = \frac{\prod_{j=k+1}^{s} \Gamma(-b_j)\, \prod_{j=v+1}^{r} \Gamma(1+a_j)}{\prod_{j=1}^{k} \Gamma(1+b_j)\, \prod_{j=1}^{v} \Gamma(-a_j)}\; \lambda\, G^{k,v}_{r,s}\!\left[\lambda p\ \Big|\ {a_1, \ldots, a_r \atop b_1, \ldots, b_s}\right]     (6.4.2)

for p > 0. The parameters in the G-function are assumed to be such that the G-function under consideration exists and is real and non-negative. Then the unconditional distribution of X is given by the density

h(x) = \frac{\beta \lambda\, \prod_{j=k+1}^{s} \Gamma(-b_j)\, \prod_{j=v+1}^{r} \Gamma(1+a_j)}{\prod_{j=1}^{k} \Gamma(1+b_j)\, \prod_{j=1}^{v} \Gamma(-a_j)\, \Gamma(\alpha/\beta)}\; x^{-(\beta+1)}\; G^{k,v+1}_{r+1,s}\!\left[\frac{\lambda}{x^{\beta}}\ \Big|\ {-\alpha/\beta,\ a_1, \ldots, a_r \atop b_1, \ldots, b_s}\right],

where k+v > \frac{r+s}{2} and \Re\!\left(b_j + 1 + \frac{\alpha}{\beta}\right) > 0, j = 1, \ldots, k.     (6.4.3)

Proof: The unconditional distribution is given by

h(x) = \int_0^{\infty} f(x|p)\, g(p)\, dp.     (6.4.4)

Now (6.4.3) follows from (6.4.4) and Erdélyi, A. et al. ([86], II, p.419 (5)).

Some useful particular cases of Theorem 6.4.1 are given below for different g(p), but with the same f(x|p) as in (6.4.1). In each case h_1(x) = \int_0^{\infty} f(x|p)\, g(p)\, dp.

SOME PRIOR AND POSTERIOR DISTRIBUTIONS

(a) Exponential prior:
g(p) = \lambda\, e^{-p\lambda} = \lambda\, G^{1,0}_{0,1}[\lambda p\,|\,0],\quad p > 0;
h_1(x) = \alpha \lambda\, x^{\alpha-1}\, (\lambda + x^{\beta})^{-(\alpha/\beta + 1)},\quad x > 0.

(b) Gamma prior:
g(p) = \frac{\delta^{a}\, p^{a-1}\, e^{-p\delta}}{\Gamma(a)},\quad p > 0;
h_1(x) = \frac{\beta\, \delta^{a}\, \Gamma\!\left(a + \frac{\alpha}{\beta}\right)}{\Gamma(a)\, \Gamma(\alpha/\beta)}\, x^{\alpha-1}\, (\delta + x^{\beta})^{-(a + \alpha/\beta)},\quad x > 0.

(c) Beta prior:
g(p) = \frac{\Gamma(\gamma+\eta)}{\Gamma(\gamma)\Gamma(\eta)}\, p^{\gamma-1}(1-p)^{\eta-1},\quad 0 < p < 1;
h_1(x) = \frac{\beta\, \Gamma(\gamma+\eta)\, \Gamma\!\left(\gamma + \frac{\alpha}{\beta}\right)}{\Gamma(\gamma)\, \Gamma\!\left(\gamma+\eta+\frac{\alpha}{\beta}\right)\, \Gamma(\alpha/\beta)}\, x^{\alpha-1}\; {}_1F_1\!\left(\gamma + \frac{\alpha}{\beta};\ \gamma+\eta+\frac{\alpha}{\beta};\ -x^{\beta}\right),\quad x > 0.
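Entry (a) of the table can be confirmed by carrying out the integration (6.4.4) numerically. The following sketch (SciPy assumed; the parameter values and the evaluation point are arbitrary) compares the numerical integral with the stated closed form:

```python
# Numerical check of entry (a): the generalized gamma conditional density
# (6.4.1) with an exponential prior lam*exp(-lam*p) on p gives the
# unconditional density alpha*lam*x**(alpha-1)*(lam + x**beta)**(-(alpha/beta + 1)).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

alpha, beta, lam, x = 2.5, 1.5, 0.7, 1.2

f_cond = lambda p: (beta * p**(alpha / beta) * x**(alpha - 1)
                    * np.exp(-p * x**beta) / gamma(alpha / beta))
g_prior = lambda p: lam * np.exp(-lam * p)

h_num, _ = quad(lambda p: f_cond(p) * g_prior(p), 0, np.inf)
h_exact = alpha * lam * x**(alpha - 1) * (lam + x**beta)**(-(alpha / beta + 1))
assert abs(h_num - h_exact) < 1e-10
```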
6.5. GENERALIZED PROBABILITY DISTRIBUTIONS

Wells, Anderson and Cell [354] considered the distribution of the product of two independent stochastic variables, one of which is a central Rayleigh variable and the other a non-central Rayleigh variable. Their problem came from a problem in the Physical Sciences. There are several papers on the ratios, products and linear combinations of stochastic variables, and these problems come from the Physical, Natural and Biological Sciences and from Economics. The literature in this direction is vast, and a short list of references is available from Miller [213].

In all these problems the different authors have considered combinations of stochastic variables having some specified distributions. Mathai and Saxena [196] considered the distribution of a product of two independent stochastic variables where the density of each one is defined in terms of a product of two H-functions. After analysing the structural set-up of the densities, the authors have noticed that almost all the ratio and product distributions which are used in the statistical literature come out as special cases of the results obtained in Mathai and Saxena [196]. A large number of other product and ratio distributions are also available from the results in this article. The technique can also be extended to products and ratios of several independent variables.
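As an elementary illustration of such product distributions: for independent gamma variates X and Y with shapes a and b and unit scale, the product XY has the Bessel-function density 2 z^{(a+b)/2-1} K_{a-b}(2\sqrt{z})/[\Gamma(a)\Gamma(b)], a well-known special case of the G-function machinery referred to above (this particular example is supplied here for illustration and is not quoted from [196]). A simulation sketch, assuming SciPy:

```python
# Check that the product of two independent unit-scale gamma variates has the
# Bessel K (G-function) density 2*z**((a+b)/2-1)*K_{a-b}(2*sqrt(z))/(G(a)*G(b)).
import numpy as np
from scipy.integrate import quad
from scipy.special import kv, gamma

a, b = 2.0, 3.5
pdf = lambda z: 2 * z**((a + b) / 2 - 1) * kv(a - b, 2 * np.sqrt(z)) / (gamma(a) * gamma(b))

# the claimed density integrates to one over (0, infinity)
total, _ = quad(pdf, 0, np.inf)
assert abs(total - 1) < 1e-8

# simulated products agree with it in mean and at one point of the CDF
rng = np.random.default_rng(1)
z = rng.gamma(a, size=100000) * rng.gamma(b, size=100000)
assert abs(z.mean() - a * b) < 0.2          # E(XY) = ab
p5, _ = quad(pdf, 0, 5.0)                   # P(XY <= 5) from the density
assert abs((z <= 5.0).mean() - p5) < 0.01
```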

In the field of the distributions of linear combinations of independent stochastic variables there is a large number of papers. Mathai and Saxena [198] considered a linear combination of k independent stochastic variables, where each one has a density defined in terms of H-functions with different parameters, and obtained the exact distribution. The results of this article cover almost all the distributions of linear combinations of stochastic variables used in the statistical literature, and many more in this category. Since the H-function is the most generalized Special Function available so far, a large number of results are obtainable from the above mentioned articles by specializing the parameters. In Section 6.5 we are not giving any specific result, in order to save space. These results are readily available from the references cited at the end of this chapter.

From Mathai and Saxena ([196],[197],[198]) it is evident that G and H-functions and the techniques of Special Functions are useful and powerful in tackling problems of statistical distributions. While solving statistical problems the authors have come across some statistical techniques for deriving new results in the theory of Special Functions. This may be seen from Mathai and Saxena ([200],[201]), and articles of the authors in this line are given in the list of references.

From the discussions in this chapter it is evident that G-functions are applicable in a number of statistical problems. It is quite likely that in the next few years G-functions will be applied to many other topics in Statistics as well. Several authors have started applying the G-function to statistical problems. These applications may be seen from the recent articles by Pillai and Jouris [234], Pillai and Nagarsenker [235] and Khatri and Srivastava [136]. Two more articles in these lines by Bagai, O.P. are scheduled to appear in Sankhya Series A.

Some applications of the G-function in other fields are given in Chapter VII.

EXERCISES

Note: For a discussion of the Hypergeometric function in the logarithmic case see Section 5.4 of Chapter V. These appear in some of the following problems.

6.1. Show that f(u) in (6.1.3) reduces to the following forms for particular cases.

(i) p = 3, q = 3:
C u^{\frac{n}{2}-2}\left[(1-u)^{3/2} - 3u^{1/2}\sin^{-1}(1-u)^{1/2} + 3u\,\log\left\{u^{-1/2} + u^{-1/2}(1-u)^{1/2}\right\}\right],
C = \Gamma(n+2)\,\Gamma\!\left(\frac{n+1}{2}\right)\Big/\left[3\pi^{1/2}\,\Gamma(n-1)\,\Gamma\!\left(\frac{n}{2}-1\right)\right];

(ii) p = 3, q = 4:
C u^{\frac{n}{2}-2}\left[1 - u^{2} + 8u^{1/2}(1-u) - 6u\log u\right],
C = \Gamma(n+3)\,\Gamma\!\left(\frac{n}{2}+1\right)\Big/\left[24\,\Gamma(n-1)\,\Gamma\!\left(\frac{n}{2}-1\right)\right];

(iii) p = 3, q = 5:
C u^{\frac{n}{2}-2}\left[(1-u)^{5/2} - 45u(1-u)^{1/2} + \left(30u^{1/2} - \frac{15}{2}u^{3/2}\right)\sin^{-1}(1-u)^{1/2} + \left(30u - \frac{15}{2}u^{2}\right)\log\left\{u^{-1/2} + u^{-1/2}(1-u)^{1/2}\right\}\right],
C = \Gamma(n+4)\,\Gamma\!\left(\frac{n+3}{2}\right)\Big/\left[90\,\pi^{1/2}\,\Gamma(n-1)\,\Gamma\!\left(\frac{n}{2}-1\right)\right];

(iv) p = 3, q = 6:
C u^{\frac{n}{2}-2}\left[1 - 16u^{1/2} - 65u + 65u^{2} + 16u^{5/2} - u^{3} - 30u(1-u)\log u\right],
C = \Gamma(n+5)\,\Gamma\!\left(\frac{n}{2}+2\right)\Big/\left[1440\,\Gamma(n-1)\,\Gamma\!\left(\frac{n}{2}-1\right)\right];

(v) p = 3, q = 7:
C u^{\frac{n}{2}-2}\left[(1-u)^{7/2} - \frac{1855}{4}u(1-u)^{3/2} - \frac{105}{4}\left(1 - 20u + 8u^{2}\right)u^{1/2}\sin^{-1}(1-u)^{1/2} + \frac{105}{4}\left(8 - 20u + u^{2}\right)u\,\log\left\{[1+(1-u)^{1/2}]u^{-1/2}\right\}\right],
C = \Gamma(n+6)\,\Gamma\!\left(\frac{n+5}{2}\right)\Big/\left[2\,\Gamma(7)\,\pi^{1/2}\,\Gamma(n-1)\,\Gamma\!\left(\frac{n}{2}-1\right)\right];

(vi) p = 3, q = 8:
C u^{\frac{n}{2}-2}\left[1 - u^{4} - 24u^{1/2}(1-u^{3}) - \frac{1428}{5}u(1-u^{2}) + 896u^{3/2}(1-u) - 84u(1 - 5u + u^{2})\log u\right],
C = \Gamma(n+7)\,\Gamma\!\left(\frac{n}{2}+3\right)\Big/\left[2\,\Gamma(8)\,\Gamma(5)\,\Gamma(n-1)\,\Gamma\!\left(\frac{n}{2}-1\right)\right];

(vii) p = 4, q = 4:
C u^{\frac{n-5}{2}}\left[1 - 15u^{1/2} - 80u + 80u^{3/2} + 15u^{2} - u^{5/2} - 30(u + u^{3/2})\log u\right],
C = \Gamma(n+1)\,\Gamma(n+3)\Big/\left[2(3!)(5!)\,\Gamma(n-3)\,\Gamma(n-1)\right];

(viii) p = 4, q = 5:
C u^{\frac{n-5}{2}}\left[1 - 24u^{1/2} - 375u + 375u^{2} + 24u^{5/2} - u^{3} - 30(3u + 8u^{3/2} + 3u^{2})\log u\right],
C = \Gamma(n+2)\,\Gamma(n+4)\Big/\left[2(4!)(6!)\,\Gamma(n-3)\,\Gamma(n-1)\right];

(ix) p = 4, q = 6:
C u^{\frac{n-5}{2}}\left[1 - 35u^{1/2} - 1099u - 1575u^{3/2} + 1575u^{2} + 1099u^{5/2} + 35u^{3} - u^{7/2} - 210(u + 5u^{3/2} + 5u^{2} + u^{5/2})\log u\right],
C = \Gamma(n+3)\,\Gamma(n+5)\Big/\left[2(5!)(7!)\,\Gamma(n-3)\,\Gamma(n-1)\right];

(x) p = 4, q = 7:
C u^{\frac{n-5}{2}}\left[1 - u^{4} - 48u^{1/2}(1-u^{3}) - 2548u(1-u^{2}) - 8624u^{3/2}(1-u) - 420(u + 8u^{3/2} + 15u^{2} + 8u^{5/2} + u^{3})\log u\right],
C = \Gamma(n+4)\,\Gamma(n+6)\Big/\left[2(6!)(8!)\,\Gamma(n-3)\,\Gamma(n-1)\right];

(xi) p = 4, q = 8:
C u^{\frac{n-5}{2}}\left[1 - u^{9/2} - 63(u^{1/2} - u^{4}) - 5104(u - u^{7/2}) - 29988(u^{3/2} - u^{3}) - 28244(u^{2} - u^{5/2}) - 252(3u + 35u^{3/2} + 105u^{2} + 105u^{5/2} + 35u^{3} + 3u^{7/2})\log u\right],
C = \Gamma(n+5)\,\Gamma(n+7)\Big/\left[2(7!)(9!)\,\Gamma(n-3)\,\Gamma(n-1)\right];

(Consul, 1966, [68])

6.2. Show that the density function f_1(v) in (6.1.34) reduces to the following forms in particular cases. [Most of these are worked out by Consul (1967, [69]).]

(i) q = 1, all p_1, p_0 = 1:
C v^{(n-p_1-2)/2}\,(1-v)^{\frac{p_1}{2}-1},\qquad C = \Gamma\!\left(\frac{n}{2}\right)\Big/\left[\Gamma\!\left(\frac{p_1}{2}\right)\Gamma\!\left(\frac{n-p_1}{2}\right)\right];

(ii) q = 1, all p_1, p_0 = 2:
C v^{(n-p_1-3)/2}\,(1-v^{1/2})^{p_1-1},\qquad C = \Gamma(n-1)\Big/\left[2\,\Gamma(n-p_1-1)\,\Gamma(p_1)\right];

(iii) q = 1, all p_1, p_0 = 3:
C v^{(n-p_1-4)/2}\,(1-v)^{p_1/2} \sum_{i=0}^{p_1-1}\binom{p_1-1}{i}(-v^{1/2})^{i}\; {}_2F_1\!\left(\frac{i}{2}, \frac{p_1}{2};\ \frac{p_1}{2}+1;\ 1-v\right),
C = \Gamma(n-1)\,\Gamma\!\left(\frac{n}{2}-1\right)\Big/\left[2\,\Gamma(n-p_1-1)\,\Gamma\!\left(\frac{n-p_1}{2}-1\right)\Gamma(p_1)\,\Gamma\!\left(\frac{p_1}{2}+1\right)\right];

(iv) q = 1, all p_1, p_0 = 4:
C v^{(n-p_1-5)/2}\,(1-v^{1/2})^{2p_1-1}\; {}_2F_1\!\left(p_1-2,\ p_1;\ 2p_1;\ 1-v^{1/2}\right),
C = \Gamma(n-3)\,\Gamma(n-1)\Big/\left[2\,\Gamma(2p_1)\,\Gamma(n-p_1-1)\,\Gamma(n-p_1-3)\right];

(v) q = 2, all p_2, p_0 = p_1 = 1:
C v^{(n-p_2-3)/2}\,(1-v)^{p_2-\frac12}\; {}_2F_1\!\left(\frac{p_2}{2}, \frac{p_2}{2};\ p_2+\frac12;\ 1-v\right),
C = \Gamma^{2}\!\left(\frac{n}{2}\right)\Big/\left[\Gamma\!\left(\frac{n-p_2}{2}\right)\Gamma\!\left(\frac{n-p_2-1}{2}\right)\Gamma\!\left(p_2+\frac12\right)\right];

(vi) q = 2, all p_2, p_0 = 1, p_1 = 2:
C v^{(n-p_2-3)/2}\,(1-v)^{\frac{p_2}{2}+1} \sum_{r=0}^{p_2-1}\binom{p_2-1}{r}(-v^{1/2})^{r}\; {}_2F_1\!\left(\frac{p_2}{2}+1,\ \frac{r}{2}+\frac32;\ \frac{p_2}{2}+2;\ 1-v\right),
C = \Gamma(n-1)\,\Gamma\!\left(\frac{n}{2}\right)\Big/\left[2\,\Gamma(n-p_2-1)\,\Gamma\!\left(\frac{n-p_2}{2}-1\right)\Gamma(p_2)\,\Gamma\!\left(\frac{p_2}{2}+1\right)\right];

(vii) q = 2, all p_2, p_0 = p_1 = 2:
C v^{(n-p_2-5)/2}\,(1-v^{1/2})^{2p_2+1}\; {}_2F_1\!\left(p_2,\ p_2;\ 2p_2+2;\ 1-v^{1/2}\right),
C = \Gamma^{2}(n-1)\Big/\left[2\,\Gamma(n-p_2-1)\,\Gamma(n-p_2-3)\,\Gamma(2p_2+2)\right].

6.3. Show that for p = 3 the density function f_2(\lambda_1) in (6.1.36) can be put in terms of a hypergeometric function in the logarithmic case as follows:

C \lambda_1^{\frac{n}{2}-2}\,(1-\lambda_1)^{1/2}\; {}_2F_1\!\left(\frac12, \frac12;\ \frac32;\ 1-\lambda_1\right)

(Mathai, 1971, [181]).

6.4. Show that for p = 3 the density in (6.1.50) reduces to the form

C \lambda_2^{\frac{n}{2}-1}\; {}_2F_1\!\left(\frac12, \frac12;\ 1;\ 1-\lambda_2\right),
C = \Gamma\!\left(\frac{3n}{2}\right)\Big/\left[3^{(3n-1)/2}\,(2\pi)^{-1}\,\Gamma^{3}\!\left(\frac{n}{2}\right)\right]

(Mathai, 1971, [182]).

6.5. Show that the density function in (6.1.55) reduces to the following forms for particular cases:

(i) p = 3:
C \omega^{\frac{n}{2}-2}\,(1-\omega)^{3/2}\; {}_2F_1\!\left(\frac52, \frac72;\ 5;\ 1-\omega\right),
C = 2^{n+1}\,\Gamma\!\left(\frac{3n}{2}\right)\Big/\left[3^{(3n+1)/2}\,\Gamma(n-1)\,\Gamma\!\left(\frac{n}{2}-1\right)\right];

(ii) p = 4:
C \omega^{\frac{n-5}{2}}\,(1-\omega^{2})^{1/2}\; {}_2F_1\!\left(1, \frac72;\ \frac92;\ 1-\omega^{2}\right),
C = (n-1)\,\Gamma\!\left(n+\frac12\right)\Big/\left[\pi\,\Gamma(n-3)\,\Gamma\!\left(\frac{n}{2}\right)\right]

(Consul, 1967, [70])

6.6. Obtain the exact densities of (det S), W_2 and W_3 from the following moment expressions.

(i) E\left[(\det S)^{t}\right] = \frac{\Gamma_m\!\left(t+\frac{n}{2}\right)}{\Gamma_m\!\left(\frac{n}{2}\right)}\,(\det 2\Sigma)^{t}\, e^{-\operatorname{tr}\Omega}\; {}_1F_1\!\left(t+\frac{n}{2};\ \frac{n}{2};\ \Omega\right),

where \Sigma, \Omega, S are positive definite matrices, n, m are positive integers, (det S) denotes the determinant of S and tr(\cdot) denotes the trace of (\cdot), that is, the sum of the leading diagonal elements of (\cdot);

(ii) E(W_2^{h}) = \frac{\Gamma_p\!\left(h+\frac{t}{2}\right)\Gamma_p\!\left(\frac{n}{2}\right)}{\Gamma_p\!\left(\frac{t}{2}\right)\Gamma_p\!\left(h+\frac{n}{2}\right)}\; {}_1F_1\!\left(h;\ h+\frac{n}{2};\ -\Omega\right),

where \Omega is a positive definite matrix, n and p are positive integers, and 0 < W_2 < 1;

(iii) E(W_3^{h}) = |I - P^{2}|^{\frac{n}{2}}\, \frac{\Gamma_p\!\left(\frac{n}{2}\right)\Gamma_p\!\left(\frac{n-q}{2}+h\right)}{\Gamma_p\!\left(\frac{n-q}{2}\right)\Gamma_p\!\left(\frac{n}{2}+h\right)}\; {}_2F_1\!\left(\frac{n}{2}, \frac{n}{2};\ \frac{n}{2}+h;\ P^{2}\right),

where P is a positive definite matrix, n, p, q are positive integers, n \geq q, p+q \leq n, 0 < W_3 < 1, and |(\cdot)| denotes the determinant of (\cdot)

(Mathai, 1972, 1971, [184],[183]).

6.7. Show that the following densities have the gamma property discussed in Section 6.3.1.

(i) \frac{\pi^{1/2}\,(\alpha-\beta)^{\frac12-\nu}\, e^{-px}\, x^{\nu-\frac12}\, e^{-(\alpha+\beta)x/2}\, I_{\nu-\frac12}\!\left(\frac{(\alpha-\beta)x}{2}\right)}{\Gamma(\nu)\,(p+\alpha)^{-\nu}\,(p+\beta)^{-\nu}},\quad x > 0,\ \alpha \geq \beta,\ \nu > 0;

(ii) \frac{\nu\,(\alpha-\beta)^{-\nu}\, e^{-px}\, x^{-1}\, e^{-(\alpha+\beta)x/2}\, I_{\nu}\!\left(\frac{(\alpha-\beta)x}{2}\right)}{\left[(p+\alpha)^{1/2} + (p+\beta)^{1/2}\right]^{-2\nu}},\quad x > 0,\ \nu > 0,\ \alpha \geq \beta;

(iii) \frac{e^{-(p-\alpha)x}\, x^{2\nu-2m-1}\; {}_1F_1\!\left(2\nu;\ 2\nu-2m;\ (\beta-\alpha)x\right)}{\Gamma(2\nu-2m)\,(p-\alpha)^{2m}\,(p-\beta)^{-2\nu}},\quad x > 0,\ \nu - m > 0;

(iv) \frac{e^{-px}\, x^{2m+2\nu-1}\; {}_1F_2\!\left(\nu;\ m+\nu,\ m+\nu+\frac12;\ -\frac{\alpha^{2}x^{2}}{4}\right)}{\Gamma(2m+2\nu)\,p^{-2m}\,(p^{2}+\alpha^{2})^{-\nu}},\quad x > 0,\ m+\nu > 0.

(Mathai and Saxena, 1968, [195]).

6.8. Show that the ratio of any two independent stochastic variables having density functions in the class of N_{\nu}(x) has a Cauchy probability law, where

N_{\nu}(x) = \frac{\Gamma\!\left(\frac{\nu}{2}+1\right)\, x^{\nu}\, e^{-x^{2}/2}}{\Gamma(\nu+1)\,(2\pi)^{1/2}\, 2^{\nu/2}}\; {}_1F_1\!\left(\frac{\nu}{2};\ \nu+1;\ \frac{x^{2}}{2}\right),\quad x > 0.

6.9. Evaluate the exact density of p_1X_1 + \cdots + p_nX_n if X_1, \ldots, X_n are independent variates, p_1, \ldots, p_n are constants and the density function of X_i is

(i) C_1\, x_i^{\alpha_i-1}\, e^{-a_i x_i^{\delta_i}},\quad x_i > 0,\ \alpha_i > 0,\ a_i > 0,\ \delta_i > 0;

(ii) C_2\, x_i^{\delta_i-1}\, e^{-a_i x_i},\quad x_i > 0,\ \delta_i > 0,\ a_i > 0;

(iii) C_3\, x_i^{\alpha_i-1}\, e^{-a_i x_i},\quad x_i > 0,\ \alpha_i > 0,\ a_i > 0;

(iv) C_4\, e^{-a_i x_i},\quad x_i > 0,\ a_i > 0,

where C_1, C_2, C_3 and C_4 are the normalizing constants.
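For case (iv) the answer can be written down in elementary terms: p_i X_i is exponential with rate r_i = a_i/p_i, so when the r_i are distinct the sum is hypoexponential. The sketch below (NumPy assumed; the partial-fraction formula is a standard special case, stated here for illustration rather than taken from the text) checks the closed form by simulation:

```python
# Sketch for case (iv) of Exercise 6.9: for independent X_i ~ exponential with
# rate a_i and constants p_i > 0, p_i*X_i is exponential with rate r_i = a_i/p_i,
# so the sum p_1*X_1 + ... + p_n*X_n is hypoexponential (r_i assumed distinct):
#   f(x) = sum_i [prod_{j != i} r_j/(r_j - r_i)] * r_i * exp(-r_i*x).
import numpy as np

def hypoexp_pdf(x, rates):
    rates = np.asarray(rates, dtype=float)
    total = 0.0
    for i, ri in enumerate(rates):
        coef = np.prod([rj / (rj - ri) for j, rj in enumerate(rates) if j != i])
        total += coef * ri * np.exp(-ri * x)
    return total

a = [1.0, 3.0, 5.0]           # exponential rates a_i
p = [0.5, 1.0, 2.0]           # constants p_i
rates = [ai / pi for ai, pi in zip(a, p)]   # distinct: [2.0, 3.0, 2.5]

# Monte-Carlo check of the closed form at one point
rng = np.random.default_rng(2)
s = sum(pi * rng.exponential(1.0 / ai, size=200000) for ai, pi in zip(a, p))
x0, h = 1.0, 0.05
mc = np.mean((s > x0 - h) & (s <= x0 + h)) / (2 * h)
assert abs(mc - hypoexp_pdf(x0, rates)) < 0.05
```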
