An Identity for the Wishart Distribution with Applications

L. R. HAFF

University of California, San Diego

Let S_{p×p} have a Wishart distribution with unknown matrix Σ and k degrees of freedom. For a matrix T(S) and a scalar h(S), an identity is obtained for E[h(S) tr(TΣ^{-1})]. Two applications are given. The first provides product moments and related formulae for the Wishart distribution; from these, moments of the inverted Wishart distribution can be generated. The second application concerns good estimators of Σ and Σ^{-1}. In particular, unbiased estimators of the risk are obtained for several risk functions, and estimators of Σ (Σ^{-1}) of the form aS (bS^{-1}) are described, a ≤ 1/k (b ≤ k − p − 1). Haff [J. Multivar. Anal. 7 374-385; Ann. Statist. 8 (1980)] used special cases of the identity to find unbiased risk estimators. These are unobtainable in closed form for certain natural loss functions. In this paper, we treat these cases as well. The results provide a unified theory for the estimation of Σ and Σ^{-1}.

1. INTRODUCTION

Let S_{p×p} have a Wishart distribution with unknown matrix Σ and k degrees of freedom. In the standard notation, S ~ W(Σ, k), ES = kΣ. For a possibly random matrix T_{p×p} = (T_{ij}(S)) and a scalar h(S), an identity is obtained for E[h(S) tr(TΣ^{-1})]. This identity (given by (2.1)) and two applications are the subject of this paper.
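Since everything that follows is phrased in terms of draws S ~ W(Σ, k), a small simulation harness helps fix the notation. The sketch below is not from the paper; it assumes only NumPy, uses an arbitrary illustrative Σ and sizes, and checks the first-moment fact ES = kΣ by averaging S = XX' over draws in which the columns of X are i.i.d. N(0, Σ).

```python
import numpy as np

# Illustrative sizes and a fixed positive definite Sigma (arbitrary choices).
p, k, N = 3, 10, 40000
Sigma = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])

rng = np.random.default_rng(0)
L = np.linalg.cholesky(Sigma)

# S = X X' with X a p x k matrix whose columns are i.i.d. N(0, Sigma),
# so that S ~ W(Sigma, k).
X = L @ rng.standard_normal((N, p, k))
S = X @ X.transpose(0, 2, 1)

S_mean = S.mean(axis=0)   # Monte Carlo estimate of E S; should be close to k * Sigma
```

Each simulated S is symmetric positive definite (almost surely, since k > p), and the sample mean converges to kΣ at the usual root-N rate.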
The identity generalizes some special cases which appear in Haff [3] and [4]. It is derived from Stokes' theorem, a multivariate integration by parts. Stein [14] introduced the integration by parts idea within the context of estimating the multivariate normal mean. He did not use Stokes' theorem, but certain other generalizations of the Fundamental Theorem of Calculus.

The first application (Section 3) generates product moments and related formulae for the Wishart distribution. As usual, denote S by (s_{ij}) and S^{-1} by (s^{ij}). Olkin and Rubin [10] gave a method for computing Wishart moments by


Received 2, 1978; revised 25, 1979.

AMS 1970 subject classifications:
Key words and phrases: Wishart and inverted Wishart distributions, estimation of covariance matrix and its inverse, Stokes' theorem, unbiased estimator for the risk function.

© 1979 by Academic Press, Inc. All rights of reproduction in any form reserved.

symmetry. Kaufman [7] computed the moments E(s^{ij}s^{kl}) by exploiting the factorization S = L'L, L a lower triangular matrix. In essence, he used the results of Olkin [9] to obtain the distribution of L^{-1}. Kaufman's results were given for a class of generalized Wishart distributions, more general than ours. Later, Martin [8] obtained the same moments by using characteristic functions. Our method of computing inverse Wishart moments is easier than either of these. Roughly speaking, we obtain (E s^{ij}s^{kl}, E s^{ik}s^{jl}, E s^{il}s^{jk}) (all indices ≤ p) as the solution of a nonsingular system of linear equations. The higher moments involving S^{-1} can be obtained recursively.

Section 4 presents the second application of the identity. Its results extend and unify some of my previous work on estimating Σ and Σ^{-1}; see [3], [4], and [5]. Throughout, if an estimator Σ̂ incurs loss L(Σ̂, Σ), then the risk R(Σ̂, Σ) = E L(Σ̂, Σ) is an average over the W(Σ, k) distribution. In [3], special cases of (2.1) were used to obtain an unbiased estimator for the risk function. It was obtained in certain cases in which the loss function depended on Σ^{-1} explicitly, e.g., L(Σ̂, Σ) = tr(Σ̂Σ^{-1} − I)² and L(Σ̂^{-1}, Σ^{-1}) = tr(Σ̂^{-1}Σ − I)². The natural estimators of Σ (Σ^{-1}) are

    Σ̂ = aS,  0 < a ≤ 1/k    (Σ̂^{-1} = bS^{-1},  0 < b ≤ k − p − 1),        (1.1)

with a = 1/k (b = k − p − 1) specifying the unbiased estimator. In the present paper, we provide estimators Σ̂ (Σ̂^{-1}) which dominate aS (bS^{-1}) for a variety of loss functions. Our estimators have the form

    Σ̂ = a(S + u t(u) I)    [Σ̂^{-1} = b(S^{-1} + v t(v) I)]        (1.2)

in which t(·) is bounded and nonincreasing, and u (v) is the arithmetic, geometric, or harmonic mean eigenvalue of S (S^{-1}). We impose conditions on t(·) under which Σ̂ (Σ̂^{-1}) dominates aS (bS^{-1}). The precise conditions are found in Subsection 4.2. Also, the unbiased risk estimator is obtained for other loss functions which depend on Σ^{-1} explicitly, and some useful identities are provided for loss functions not of this type. For the loss function

    L(Σ̂, Σ) = tr[(Σ̂ − Σ)²Q],  Q an arbitrary p.d. matrix,        (1.3)

the unbiased risk estimator is unobtainable in closed form except in special cases. In this paper, we treat these cases as well.

This paper was the subject of two talks given by me at Stanford University during February 1978. At that time, I learned of Charles Stein's unpublished results in this area. Among these results is the identity (2.1), which he essentially derived several years ago.

My derivation is independent and quite distinct from his; see Section 2 for further comment.

2. AN IDENTITY FOR THE WISHART DISTRIBUTION

Let M = (m_{ij}(S)) be a p × p matrix. The identity is stated in terms of the following definitions:

DEFINITION 2.1. (Matrix divergence) D*M = Σ_i Σ_j ∂m_{ij}/∂s_{ij}.

DEFINITION 2.2. (Off-diagonal multiplication) The matrix M_{(c)} = (m'_{ij}) is such that m'_{ij} = m_{ij} if i = j and m'_{ij} = c m_{ij} if i ≠ j, c ≠ 0. Whenever c = 1/2, we simply write M_{(1/2)}. Note that for all p × p matrices M and N, tr[M_{(c)} N_{(1/c)}] = tr(MN).

DEFINITION 2.3. (The usual norm) ‖M‖ = [Σ Σ m_{ij}²]^{1/2}.

Stokes' theorem is applied on the regions

    W = W_{ρ₁,ρ₂} = {S : S > 0 (p.d.), ρ₁ < ‖S‖ < ρ₂},

with boundary pieces b₁(W) = {S : S ≥ 0, ‖S‖ = ρ₁} and b₂(W) = {S : S > 0, ‖S‖ = ρ₂}.

Our identity for the Wishart distribution is now given by

THEOREM 2.1. For a matrix T_{p×p} = (T_{ij}(S)) and a scalar h(S), assume that (i) the functions T_{ij}(S) and h(S) satisfy the conditions of Stokes' theorem on all regions W; (ii) on b₁, |h(S)| ‖T‖ → 0 as ρ₁ → 0+; and (iii) on b₂, |h(S)| ‖T‖ = o(exp[ρ₂/4m]) as ρ₂ → ∞ for arbitrary m > 0.
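The divergence D* and the off-diagonal weighting interact with the symmetry of S. One consistent reading of the derivative convention (an assumption on my part: for i ≠ j the single variable s_{ij} perturbs both the (i, j) and (j, i) entries of S) can be checked numerically; the sketch below, which is illustrative and not from the paper, approximates D*(SQ)_{(1/2)} by finite differences and recovers the value ((p + 1)/2) tr Q given in Lemma 3.1(i) of Section 3. It assumes NumPy, with arbitrary test matrices.

```python
import numpy as np

# Arbitrary test data: a symmetric p.d. S and a constant matrix Q.
p = 4
rng = np.random.default_rng(1)
A = rng.standard_normal((p, p))
S = A @ A.T + p * np.eye(p)
Q = rng.standard_normal((p, p))

M = lambda S_: S_ @ Q      # the matrix function M(S) = SQ
eps = 1e-6

# D* M_(1/2): sum over (i, j) of dm_ij/ds_ij, off-diagonal terms weighted 1/2.
# Assumed convention: for i != j, perturbing s_ij perturbs s_ji as well.
div = 0.0
for i in range(p):
    for j in range(p):
        D = np.zeros((p, p))
        D[i, j] = D[j, i] = 1.0
        deriv = (M(S + eps * D)[i, j] - M(S - eps * D)[i, j]) / (2 * eps)
        div += deriv if i == j else 0.5 * deriv

expected = 0.5 * (p + 1) * np.trace(Q)   # Lemma 3.1(i)
```

Because M(S) = SQ is linear in S, the finite difference is exact up to round-off, so the agreement is essentially to machine precision.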

Then we have

    E[h(S) tr(TΣ^{-1})] = 2E[h(S) D*T_{(1/2)}] + 2E tr[(∂h(S)/∂S) · T_{(1/2)}] + (k − p − 1) E[h(S) tr(S^{-1}T)],        (2.1)

provided the integrals in (2.1) exist.

Remark 1. Stokes' theorem concerns regions W and scalars g(S) for which

    ∫_W ∂g(S)/∂s_{ij} dS = ∫_{b(W)} g(S) cos γ_{ij} dT.        (2.2)

Here b(W) is the boundary of W, cos γ_{ij} the direction cosine at S ∈ b(W) associated with (i, j), and dT differential surface area. See Haff [3] and [4] for particular applications of Stokes' theorem on the p.d. matrices and a brief description of the geometry. Regarding condition (i) (or the validity of (2.2)), the reader might consult Whitney [17], pp. 100ff. Whitney's conditions are more general than needed for the applications in the present paper.

Remark 2. Stein's unpublished proof of (2.1) is based on certain identities for the normal distribution; see [14]. A geometric proof, which is somewhat more direct, is obtained by extending the work in [3], pp. 377-379. It entails an application of (2.2) on the cone of p.d. matrices. We shall omit the details.

3. COMPUTATION OF WISHART MOMENTS

In this section, the identity (2.1) is used to compute second order moments for the Wishart and inverted Wishart distributions. The second moments are well known; see Press [12, Chapter 5] for a convenient summary. Here we do the computations in a relatively efficient manner. Also, we derive some related equations.

We need the following from [4] and [5]:

LEMMA 3.1. If Q_{p×p} is an arbitrary matrix of constants, then

    (i)   D*(SQ)_{(1/2)} = ((p + 1)/2) tr Q,
    (ii)  D*(S²Q)_{(1/2)} = ((p + 1)/2) tr(SQ) + (1/2)(tr S)(tr Q) + (1/2) tr(SQ'),
    (iii) D*(S^{-1}Q)_{(1/2)} = −(1/2) tr(S^{-2}Q') − (1/2)(tr S^{-1})(tr S^{-1}Q),

where Q' denotes the transpose of Q.
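The simplest instance of (2.1) makes a convenient numerical check. With h(S) = 1 and T = SQ the gradient term drops out, Lemma 3.1(i) gives D*(SQ)_{(1/2)} = ((p + 1)/2) tr Q, and (2.1) collapses to E tr(SQΣ^{-1}) = (p + 1) tr Q + (k − p − 1) tr Q = k tr Q, which also follows directly from ES = kΣ. The sketch below is not from the paper; it assumes NumPy and uses arbitrary illustrative Σ, Q, and sizes.

```python
import numpy as np

p, k, N = 3, 10, 40000
Sigma = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
Q = np.array([[ 0.5, -0.2, 0.1],
              [ 0.3,  1.0, 0.0],
              [-0.1,  0.4, 0.8]])   # arbitrary constant matrix

rng = np.random.default_rng(2)
Lc = np.linalg.cholesky(Sigma)
X = Lc @ rng.standard_normal((N, p, k))
S = X @ X.transpose(0, 2, 1)          # S ~ W(Sigma, k)

M = Q @ np.linalg.inv(Sigma)          # tr(S Q Sigma^{-1}) = tr(S M)
lhs = np.trace(S @ M, axis1=1, axis2=2).mean()   # Monte Carlo E tr(T Sigma^{-1}), T = SQ
rhs = k * np.trace(Q)                 # value predicted by (2.1)
```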

Some standard Wishart formulae are given by

THEOREM 3.1. If S ~ W(Σ, k), then

    (i)   E(s_{ij}) = kσ_{ij},
    (ii)  Cov(s_{ij}, s_{kl}) = k(σ_{ik}σ_{jl} + σ_{il}σ_{jk}),
    (iii) E(SAS) = k(k + 1)ΣAΣ + k(tr ΣA)Σ,  A a p.d. matrix of constants.

Proof. Denote the ith unit vector by e_i. (i) In equation (2.1), set h(S) = 1 and T = Se_ie_j'. By Lemma 3.1(i), (2.1) becomes E(Σ^{-1}S)_{ji} = (p + 1)δ_{ij} + (k − p − 1)δ_{ij} = kδ_{ij}, so that Σ^{-1}ES = kI, i.e., E(s_{ij}) = kσ_{ij}. (ii) Set h(S) = s_{ij} and T = Se_ke_l', and use Lemma 3.1(i). The calculations are routine, hence we omit them. (iii) Set h(S) = 1 and T = S²A, and use Lemma 3.1(ii) to establish the result for A = I. The case A > 0 then follows from obvious transformations. We omit further details.

To treat the inverted Wishart distribution, we need to differentiate S^{-1} with respect to the variables s_{ij}.

LEMMA 3.2. The matrix ∂S^{-1}/∂s_{ij} is given by

    ∂S^{-1}/∂s_{ij} = −S^{-1}(e_ie_j' + e_je_i')S^{-1},  i ≠ j,
    ∂S^{-1}/∂s_{ii} = −S^{-1}e_ie_i'S^{-1}.

Proof. Differentiate both sides of SS^{-1} = I with respect to s_{ij}, and the result follows since ∂S/∂s_{ij} = e_ie_j' + e_je_i' for i ≠ j (and e_ie_i' for i = j).

The analogue of Theorem 3.1 for the inverted Wishart distribution is

THEOREM 3.2. If S ~ W(Σ, k) and k − p − 1 > 0, then

    (i)  E(s^{ij}) = σ^{ij}/(k − p − 1),
    (ii) the second moments E(s^{ij}s^{kl}) satisfy the linear equations (3.1)-(3.3) below.

Proof. (i) Set h(S) = 1 and T = e_ie_j' in (2.1). Since T is a constant matrix, D*T_{(1/2)} = 0, and (2.1) reduces to σ^{ij} = (k − p − 1)E(s^{ij}). Part (ii) is proved below.
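Both moment formulae are easy to spot-check by simulation. The sketch below is illustrative only (NumPy; Σ and the sizes are arbitrary choices with k − p − 1 > 0 so that E(S^{-1}) exists); it verifies Theorem 3.1(iii) in the special case A = I, where E(S²) = k(k + 1)Σ² + k(tr Σ)Σ, together with Theorem 3.2(i).

```python
import numpy as np

p, k, N = 3, 10, 50000          # k - p - 1 = 6 > 0
Sigma = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])

rng = np.random.default_rng(3)
Lc = np.linalg.cholesky(Sigma)
X = Lc @ rng.standard_normal((N, p, k))
S = X @ X.transpose(0, 2, 1)                     # S ~ W(Sigma, k)

# Theorem 3.1(iii) with A = I:  E(S^2) = k(k+1) Sigma^2 + k tr(Sigma) Sigma
ES2 = (S @ S).mean(axis=0)
ES2_theory = k * (k + 1) * Sigma @ Sigma + k * np.trace(Sigma) * Sigma

# Theorem 3.2(i):  E(S^{-1}) = Sigma^{-1} / (k - p - 1)
EinvS = np.linalg.inv(S).mean(axis=0)
EinvS_theory = np.linalg.inv(Sigma) / (k - p - 1)
```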

To compute the second moments of the inverted Wishart distribution, observe from Lemma 3.2 that the (i, j)th element of ∂S^{-1}/∂s_{kl} is

    −s^{ik}s^{jl} − s^{il}s^{jk}  if k ≠ l,    and  −s^{ik}s^{jk}  if k = l.

First set h(S) = s^{ij} and T = e_ke_l' in (2.1). Then (2.1) becomes

    (k − p − 1) E(s^{ij}s^{kl}) − E(s^{ik}s^{jl}) − E(s^{il}s^{jk}) = σ^{ij}σ^{kl}/(k − p − 1).        (3.1)

In a similar way, from h(S) = s^{il} and T = e_je_k' we obtain

    (k − p − 1) E(s^{il}s^{jk}) − E(s^{ij}s^{kl}) − E(s^{ik}s^{jl}) = σ^{il}σ^{jk}/(k − p − 1),        (3.2)

and from h(S) = s^{ik} and T = e_je_l',

    (k − p − 1) E(s^{ik}s^{jl}) − E(s^{ij}s^{kl}) − E(s^{il}s^{jk}) = σ^{ik}σ^{jl}/(k − p − 1).        (3.3)

Thus Cov(s^{ij}, s^{kl}) is determined by the linear equations (3.1), (3.2), and (3.3). The system is nonsingular whenever k − p − 3 > 0, and its solution is

    E(s^{ij}s^{kl}) = [(k − p − 2)σ^{ij}σ^{kl} + σ^{ik}σ^{jl} + σ^{il}σ^{jk}] / [(k − p)(k − p − 1)(k − p − 3)].
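Solving the system (3.1)-(3.3) gives, for k − p − 3 > 0, the closed form E(s^{ij}s^{kl}) = [(k − p − 2)σ^{ij}σ^{kl} + σ^{ik}σ^{jl} + σ^{il}σ^{jk}] / [(k − p)(k − p − 1)(k − p − 3)], and this is easy to test by simulation. The sketch below is illustrative, not from the paper (NumPy; Σ = I and the sizes are arbitrary choices); it compares Monte Carlo averages of s^{ij}s^{kl} with that solution.

```python
import numpy as np

p, k, N = 3, 20, 100000                     # k - p - 3 = 14 > 0; Sigma = I
rng = np.random.default_rng(4)

X = rng.standard_normal((N, p, k))
Sinv = np.linalg.inv(X @ X.transpose(0, 2, 1))     # S^{-1} with S ~ W(I, k)

def second_moment(i, j, a, b):
    # E(s^{ij} s^{ab}) from solving the linear system (3.1)-(3.3);
    # here sigma^{ij} = delta_ij because Sigma = I. Valid for k - p - 3 > 0.
    sig = np.eye(p)
    m = k - p
    num = ((m - 2) * sig[i, j] * sig[a, b]
           + sig[i, a] * sig[j, b]
           + sig[i, b] * sig[j, a])
    return num / (m * (m - 1) * (m - 3))

mc_1111 = (Sinv[:, 0, 0] ** 2).mean()              # E (s^{11})^2
mc_1212 = (Sinv[:, 0, 1] ** 2).mean()              # E (s^{12})^2
mc_1122 = (Sinv[:, 0, 0] * Sinv[:, 1, 1]).mean()   # E s^{11} s^{22}
mean_11 = Sinv[:, 0, 0].mean()                     # E s^{11} = 1/(k - p - 1)
```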

As an application of Theorem 3.2, we prove a result which is used in Section 4.

THEOREM 3.3. Assume that Σ^{-1} is estimated by a scalar multiple of S^{-1}, say Σ̂_b^{-1} = bS^{-1} (b > 0), and that this estimate incurs the loss L(Σ̂_b^{-1}, Σ^{-1}) = tr(bS^{-1}Σ − I)². Then the risk R(Σ̂_b^{-1}, Σ^{-1}) is constant in Σ,

    R(Σ̂_b^{-1}, Σ^{-1}) = b² p(k − 1)/[(k − p)(k − p − 1)(k − p − 3)] − 2bp/(k − p − 1) + p,

and it is minimized at b = (k − p)(k − p − 3)/(k − 1).

Proof. Set V = Σ^{-1/2}SΣ^{-1/2}, so that V ~ W(I, k) and L(Σ̂_b^{-1}, Σ^{-1}) = b² tr(V^{-2}) − 2b tr(V^{-1}) + p. Hence the risk is an average with respect to the distribution of V alone and is constant in Σ. From Theorem 3.2(i), E tr(V^{-1}) = p/(k − p − 1), and from (3.1)-(3.3), E tr(V^{-2}) = p(k − 1)/[(k − p)(k − p − 1)(k − p − 3)]. The result follows by differentiation with respect to b.

4. FURTHER RESULTS ON ESTIMATING THE COVARIANCE MATRIX AND ITS INVERSE

4.1. Preliminary Remarks. We give estimators of Σ (Σ^{-1}) which dominate the arbitrary scalar multiples

    Σ̂_a = aS,  0 < a ≤ 1/k    (Σ̂_b^{-1} = bS^{-1},  0 < b ≤ k − p − 1)        (4.1)

with respect to several loss functions. The loss functions are natural ones, and some of them have not been treated in the previous literature. Denote the loss functions by L_i(Σ̂, Σ) and L^{(i)}(Σ̂^{-1}, Σ^{-1}). As before, R_i(Σ̂, Σ) = E_Σ L_i(Σ̂, Σ) is an average with respect to the distribution of S given Σ and k; the functions R^{(i)} have a like meaning. It is understood that each R_i (R^{(i)}) specifies a separate estimation problem.
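The constant-risk theorem above can be probed directly: since the loss tr(bS^{-1}Σ − I)² has risk independent of Σ, it suffices to simulate at Σ = I. The sketch below is illustrative, not from the paper (NumPy; p, k, and the sample size are arbitrary choices, and the closed-form risk is the one computed from the Section 3 moments); it compares the exact risk at the optimal multiple with the risk of the unbiased choice b = k − p − 1.

```python
import numpy as np

p, k, N = 5, 15, 20000                    # k - p - 3 = 7 > 0
rng = np.random.default_rng(5)

X = rng.standard_normal((N, p, k))
Vinv = np.linalg.inv(X @ X.transpose(0, 2, 1))    # V^{-1} with V ~ W(I, k)

def mc_risk(b):
    # Monte Carlo risk of b S^{-1} under tr(b S^{-1} Sigma - I)^2 at Sigma = I.
    T = b * Vinv - np.eye(p)
    return np.trace(T @ T, axis1=1, axis2=2).mean()

def exact_risk(b):
    # b^2 E tr(V^{-2}) - 2b E tr(V^{-1}) + p, from the Section 3 moments.
    D = (k - p) * (k - p - 1) * (k - p - 3)
    return b**2 * p * (k - 1) / D - 2 * b * p / (k - p - 1) + p

b_opt = (k - p) * (k - p - 3) / (k - 1)   # minimizer of exact_risk; here 5.0
b_unb = k - p - 1                         # unbiased choice; here 9
```

With these sizes the optimal multiple shrinks the unbiased estimator substantially (b = 5 versus b = 9), and the risk drops from 4.0 to about 2.22.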

History. First we describe some of the known results for

    L_1(Σ̂, Σ) = tr(Σ̂Σ^{-1}) − log|Σ̂Σ^{-1}| − p    and    L_2(Σ̂, Σ) = tr(Σ̂Σ^{-1} − I)².

Let Σ̂ and Σ̂* be competing estimators of Σ. We say that Σ̂ dominates Σ̂* (mod L_i) if R_i(Σ̂, Σ) ≤ R_i(Σ̂*, Σ) (∀Σ). Let β be a positive scalar. James and Stein [6] showed that all estimators of the form β(S/k) are inadmissible with respect to L_1, and Stein [15] presented an estimator which is substantially better. Selliah [13] gave a corresponding result for L_2. Perlman [11] showed that if (kp − 2)/(kp + 2) < β < 1, then β(S/k) dominates S/k with respect to the quadratic loss L_3 given below.

Regarding the estimation of Σ^{-1}, let

    L^{(0)}(Σ̂^{-1}, Σ^{-1}) = tr[(Σ̂^{-1} − Σ^{-1})²Q]    and    L^{(1)}(Σ̂^{-1}, Σ^{-1}) = tr[(Σ̂^{-1} − Σ^{-1})²S],

Q an arbitrary p.d. matrix. Efron and Morris [2] studied the estimators Σ̂_EM^{-1} = aS^{-1} + [b/tr(S)]I and proved that if a = k − p − 1 and b = p² + p − 2, then Σ̂_EM^{-1} dominates the unbiased estimator (k − p − 1)S^{-1} (mod L^{(1)}). These estimators were given an empirical Bayes interpretation. See, also, Stein, Efron, and Morris [16]. In Haff [3], Σ̂_EM^{-1} was generalized as Σ̂^{-1} = aS^{-1} + [r(w)/tr(S)]I for w = p|S|^{1/p}/tr(S), r(·) real and nonincreasing, 0 ≤ r(w) ≤ 1, and conditions were given for dominance of (k − p − 1)S^{-1}. Haff [4] studied yet a larger class Σ̂_*^{-1} = [a + f(S)]S^{-1} + g(S)I (f(S) and g(S) real) and gave conditions under which Σ̂_*^{-1} dominates Σ̂_b^{-1} under L^{(0)}. In these papers, results were given for f(S) = −m(w) and g(S) = r(w)/tr(S). Although Σ̂_EM^{-1} dominates (k − p − 1)S^{-1} (mod L^{(1)}), the domination can reverse (mod L^{(0)}). The reversal is troublesome because L^{(0)} and L^{(1)} are qualitatively close if Q = kΣ and k is large. A more recent paper [5] proposed estimators of the form (4.2) below, and conditions were established for their dominance of aS under both L_1 and L_2.

Our results provide a unified theory for estimating Σ and its inverse. We dominate Σ̂_a (Σ̂_b^{-1}) of (4.1) by certain estimators

    Σ̂ = a[S + u t(u)I]    (Σ̂^{-1} = b[S^{-1} + v t(v)I])        (4.2)

in which t(·) is a bounded, nonincreasing function of an average eigenvalue of S (S^{-1}). The average eigenvalue u (v) might be the arithmetic average (tr S)/p [tr(S^{-1})/p], the geometric average |S|^{1/p} [|S|^{-1/p}], or the harmonic average p/tr(S^{-1}) [p/tr(S)]. Our choices of t(·) and u (v) will depend on the loss function.

THEOREM 1. Under loss function L_2, the best estimator of the form Σ̂_a = aS is given by a = 1/(k + p + 1).

THEOREM 2. Let Σ̂_a and Σ̂ be given by (4.1) and (4.2), respectively, with a = 1/(k + p + 1), u = 1/tr(S^{-1}), t a constant, and 0 < t < 2(p − 1)/(k − p + 3). Then R_2(Σ̂, Σ) < R_2(Σ̂_a, Σ) (∀Σ).

Theorems 1 and 2, which pertain to L_2, are proved in [5]. They are stated here for completeness only. The new results are proved in Subsections 4.3 and 4.4. Theorems 3 and 4 pertain to the loss function

    L^{(2)}(Σ̂^{-1}, Σ^{-1}) = Σ_{i≤j} q_{ij}(σ̂^{ij} − σ^{ij})²,

with q_{ij} > 0 an arbitrary set of weights.

THEOREM 3. Let Σ̂_b^{-1} be given by (4.1). If k − p − 3 < b < k − p − 1, then R^{(2)}(Σ̂_b^{-1}, Σ^{-1}) < R^{(2)}(Σ̂_{b₀}^{-1}, Σ^{-1}) (∀Σ). Recall that Σ̂_{b₀}^{-1} = (k − p − 1)S^{-1} is the unbiased estimator of Σ^{-1}. From the proof of Theorem 3, it is seen that b = k − p − 2 is an optimal choice.

THEOREM 4. Let Σ̂_b^{-1} and Σ̂^{-1} be given by (4.1) and (4.2), respectively, with k − p − 3 < b < k − p − 1, b* = b − (k − p − 1), v = |S|^{-1/p}, t'(v) ≤ 0, and

    {(4/p)[v t'(v) + t(v)] + 2b* t(v)} q* + b t²(v) ≤ 0,

in which q* = (Π_i q_{ii})^{1/p}. Then R^{(2)}(Σ̂^{-1}, Σ^{-1}) < R^{(2)}(Σ̂_b^{-1}, Σ^{-1}) (∀Σ).
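Theorem 1 is convenient to check numerically. Under the quadratic loss tr(Σ̂Σ^{-1} − I)² the risk of aS is constant in Σ, so it suffices to simulate at Σ = I, where the loss is tr(aV − I)² with V ~ W(I, k); since E tr V = pk and E tr V² = pk(k + p + 1), the risk a²pk(k + p + 1) − 2apk + p is minimized at a = 1/(k + p + 1). The sketch below is illustrative, not from the paper (NumPy; sizes are arbitrary choices), and compares S/(k + p + 1) with the unbiased multiple S/k.

```python
import numpy as np

p, k, N = 4, 12, 20000
rng = np.random.default_rng(6)

X = rng.standard_normal((N, p, k))
V = X @ X.transpose(0, 2, 1)              # V ~ W(I, k)

def mc_risk(a):
    # Monte Carlo risk of aS under tr(a V - I)^2 at Sigma = I.
    T = a * V - np.eye(p)
    return np.trace(T @ T, axis1=1, axis2=2).mean()

a_opt = 1.0 / (k + p + 1)                 # Theorem 1
a_unb = 1.0 / k                           # unbiased multiple

risk_opt_exact = p * (p + 1) / (k + p + 1)   # risk of a_opt in closed form
risk_unb_exact = p * (p + 1) / k             # risk of 1/k in closed form
```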

Theorems 5 and 6 pertain to the loss function L_3(Σ̂, Σ) = Σ_{i≤j} q_{ij}(σ̂_{ij} − σ_{ij})², and Theorems 7 and 8 to L^{(3)}(Σ̂^{-1}, Σ^{-1}) = tr(Σ̂^{-1}Σ − I)².

THEOREM 5. Let Σ̂_{a₀} = S/k, the unbiased estimator of Σ. If (k − 1)/k(k + 1) ≤ a < 1/k, then R_3(Σ̂_a, Σ) < R_3(Σ̂_{a₀}, Σ) (∀Σ). Theorem 5 extends the result of Perlman [11] which was referenced above.

THEOREM 6. Let Σ̂_a and Σ̂ be given by (4.1) and (4.2), respectively, with (k − 1)/k(k + 1) ≤ a < 1/k, u = |S|^{1/p}, and t(·) such that (i) t'(u) ≤ 0, (ii) u t''(u) + 2t'(u) ≥ 0, and (iii) 0 ≤ t(u) ≤ 2(pk − 2)/pk². Then R_3(Σ̂, Σ) < R_3(Σ̂_a, Σ) (∀Σ). If t is a positive constant and t < 2(pk − 2)/pk², then conditions (ii) and (iii) are satisfied.

THEOREM 7. Under loss function L^{(3)}, the best estimator of the form Σ̂_b^{-1} = bS^{-1} is given by b = (k − p)(k − p − 3)/(k − 1). Recall that Theorem 7 was proved in Section 3.

THEOREM 8. Let Σ̂_b^{-1} and Σ̂^{-1} be given by (4.1) and (4.2), respectively, with b = (k − p)(k − p − 3)/(k − 1), v = 1/tr(S), t'(v) ≤ 0, and 0 ≤ t(v) ≤ (2/b)[(k − p − 1) − 2/p]. Then R^{(3)}(Σ̂^{-1}, Σ^{-1}) < R^{(3)}(Σ̂_b^{-1}, Σ^{-1}) (∀Σ).

Some simulation results in [4] indicate that our estimators are substantially better than the scalar multiples.

4.3. Unbiased Estimation of the Risk. The terms under the expectation on the right side of (2.1) provide an unbiased estimator of E[h(S) tr(TΣ^{-1})]. Such estimators are used to prove the first four theorems. Note that L_2 and L^{(2)} are explicit functions of Σ^{-1}; these are typical of loss functions for which we can find an unbiased estimator of the risk. In this subsection, we prove Theorems 3 and 4. Theorems 1 and 2 are proved in [5]. The remaining results are for the loss functions L_3 and L^{(3)}.

First, we need the following:

LEMMA 4.1. Let R and Q be p × p matrices with R p.s.d. and Q = diag(q_{11},..., q_{pp}), q_{ii} > 0 (i = 1,..., p). Then Σ_{i=1}^p r_{ii} q_{ii} ≥ p(|R| Π_i q_{ii})^{1/p}.

Proof. See Bellman [1, p. 134].

LEMMA 4.2. Let R and Q be p × p matrices with R p.d., |R| = 1, and Q p.d. Then min tr(RQ) = p|Q|^{1/p}.

Proof. The eigenvalues of RQ are positive, so by the arithmetic-geometric mean inequality tr(RQ) ≥ p|RQ|^{1/p} = p|Q|^{1/p}, with equality at R = |Q|^{1/p}Q^{-1}.

Proof of Theorem 3. Set Σ̂_b^{-1} = bS^{-1} = (b₀ + c)S^{-1}, in which b₀ = k − p − 1. An identity for E(s^{ij}σ^{ij}) is obtained from (2.1) with T = e_ie_j' and h(S) = s^{ij}:

    E(s^{ij}σ^{ij}) = E[−s^{ii}s^{jj} − (s^{ij})²] + b₀ E[(s^{ij})²].        (4.3.1)

Now R^{(2)}(Σ̂_b^{-1}, Σ^{-1}) = R^{(2)}(Σ̂_{b₀}^{-1}, Σ^{-1}) + α^{(2)}(Σ), in which

    α^{(2)}(Σ) = E[2c Σ_{i≤j} (b₀s^{ij} − σ^{ij})s^{ij} q_{ij} + c² Σ_{i≤j} (s^{ij})² q_{ij}].

Applying (4.3.1) term by term gives

    α^{(2)}(Σ) = E[Σ_{i≤j} {2c(s^{ii}s^{jj} + (s^{ij})²) + c²(s^{ij})²} q_{ij}].

Since (s^{ij})² ≤ s^{ii}s^{jj}, a sufficient condition for α^{(2)}(Σ) < 0 (∀Σ) is −2 < c < 0, which proves the theorem; minimizing the quadratic in c suggests c = −1, i.e., b = k − p − 2.

Proof of Theorem 4. Take Σ̂^{-1} = Σ̂_b^{-1} + h(S)I with h(S) = b v t(v) and v = |S|^{-1/p}. Since ∂v/∂S = −(v/p)S^{-1}, we have ∂h(S)/∂S = −b(v/p)[v t'(v) + t(v)]S^{-1}, and (2.1) with T = e_ie_i' yields

    E[h(S)σ^{ii}] = bE{−2(v/p)[v t'(v) + t(v)]s^{ii} + b₀ v t(v) s^{ii}}.        (4.3.4)

Now R^{(2)}(Σ̂^{-1}, Σ^{-1}) = R^{(2)}(Σ̂_b^{-1}, Σ^{-1}) + ᾱ^{(2)}(Σ), and applying (4.3.4) gives

    ᾱ^{(2)}(Σ) = E{bv[(4/p)(v t'(v) + t(v)) + 2b* t(v)] Σ_i s^{ii} q_{ii} + bv[b v t²(v)] Σ_i q_{ii}}.        (4.3.5)

By Lemma 4.1 with R = S^{-1}, Σ_i s^{ii} q_{ii} ≥ p(|S^{-1}| Π_i q_{ii})^{1/p} = p v q*. Since the hypothesis of Theorem 4 makes the coefficient of Σ_i s^{ii} q_{ii} nonpositive, it follows that ᾱ^{(2)}(Σ) < 0 (∀Σ) under the displayed condition of Theorem 4, and the proof is complete.
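Lemma 4.1 is an arithmetic-geometric mean statement about the eigenvalues of RQ, which are nonnegative when R is p.s.d. and Q is a positive diagonal matrix (RQ is similar to Q^{1/2}RQ^{1/2}). A quick deterministic check, with randomly generated test matrices, is below; it is illustrative only (NumPy, arbitrary test data) and also exhibits the minimizer R = |Q|^{1/p}Q^{-1} of Lemma 4.2.

```python
import numpy as np

p = 4
rng = np.random.default_rng(7)

# Lemma 4.1: for R p.s.d. and Q = diag(q_ii), q_ii > 0,
#   sum_i r_ii q_ii >= p (|R| prod_i q_ii)^(1/p).
margins = []
for _ in range(100):
    A = rng.standard_normal((p, p))
    R = A @ A.T                                   # p.s.d.
    q = rng.uniform(0.1, 2.0, size=p)             # positive diagonal weights
    detR = max(np.linalg.det(R), 0.0)             # guard tiny negative round-off
    lhs = float(np.sum(np.diag(R) * q))
    rhs = p * (detR * np.prod(q)) ** (1.0 / p)
    margins.append(lhs - rhs)
min_margin = min(margins)

# Lemma 4.2: min over p.d. R with |R| = 1 of tr(RQ) is p |Q|^(1/p),
# attained at R = |Q|^(1/p) Q^{-1}.
B = rng.standard_normal((p, p))
Qm = B @ B.T + np.eye(p)                          # a p.d. Q
bound = p * np.linalg.det(Qm) ** (1.0 / p)
R_star = np.linalg.det(Qm) ** (1.0 / p) * np.linalg.inv(Qm)
attained = np.trace(R_star @ Qm)                  # equals the bound exactly

C = rng.standard_normal((p, p))
R0 = C @ C.T + 0.1 * np.eye(p)
R1 = R0 / np.linalg.det(R0) ** (1.0 / p)          # random unit-determinant R
trial = np.trace(R1 @ Qm)                         # must be >= the bound
```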

4.4. Exploitations of the Identity for L_3 and L^{(3)}. Recall that L_3 and L^{(3)} are explicit functions of Σ (rather than Σ^{-1}). Here we prove Theorems 5, 6, and 8, and we illustrate the utility of (2.1) in natural situations where an unbiased estimator of the risk is unobtainable.

Proof of Theorem 5. Set Σ̂ = (a₀ − d)S, a₀ = 1/k, d > 0. We have R_3(Σ̂, Σ) = R_3(a₀S, Σ) + α_3(Σ), in which

    α_3(Σ) = E[−2a₀d Σ_{i≤j} s²_{ij} q_{ij} + 2d Σ_{i≤j} s_{ij}σ_{ij} q_{ij} + d² Σ_{i≤j} s²_{ij} q_{ij}].        (4.4.1)

We use an identity for E(s_{ij}σ_{ij}) which is given by (2.1) with T = Se_ie_j'Σ and h(S) = s_{ij} (here 2 tr[(∂s_{ij}/∂S)·T_{(1/2)}] = s_{ij}σ_{ij} + s_{ii}σ_{jj}):

    E(s²_{ij}) = (k + 1) E(s_{ij}σ_{ij}) + E(s_{ii}σ_{jj}).        (4.4.2)

If we solve (4.4.2) for E(s_{ij}σ_{ij}) and apply the result to (4.4.1), we obtain

    α_3(Σ) = {−2d[1/k − 1/(k + 1)] + d²} E[Σ_{i≤j} s²_{ij} q_{ij}] − E{[2d/(k + 1)] Σ_{i≤j} s_{ii}σ_{jj} q_{ij}}.

The second term is negative, so α_3(Σ) < 0 (∀Σ) if −2d[1/k − 1/(k + 1)] + d² ≤ 0, or, equivalently, 0 < d ≤ 2/k(k + 1). Since a = a₀ − d, we finally obtain the sufficient condition (k − 1)/k(k + 1) ≤ a < 1/k, and the proof is complete.

Proof of Theorem 6. Set Σ̂ = aS + m(u)I, in which m(u) = a u t(u) and u = |S|^{1/p}. We have R_3(Σ̂, Σ) = R_3(aS, Σ) + α_3(Σ), in which

    α_3(Σ) = E[2a m(u) Σ_i s_{ii} q_{ii} − 2m(u) Σ_i σ_{ii} q_{ii} + m²(u) Σ_i q_{ii}].        (4.4.3)

For T = Se_ie_i'Σ and h(S) = m(u), note that D*T_{(1/2)} = ((p + 1)/2)σ_{ii} (recall Lemma 3.1(i)) and ∂m(u)/∂S = [u m'(u)/p]S^{-1}, so the identity (2.1) becomes

    E[m(u)s_{ii}] = kE[m(u)σ_{ii}] + (2/p)E[u m'(u)σ_{ii}].        (4.4.4)

Another application of (2.1), with h(S) = u m'(u), gives

    E[u m'(u)s_{ii}] = kE[u m'(u)σ_{ii}] + (2/p)E{u[u m'(u)]'σ_{ii}}.        (4.4.5)

After solving (4.4.4) and (4.4.5) for E[m(u)σ_{ii}] and applying the result to (4.4.3), we obtain an estimator of α_3(Σ): if m(u) = a u t(u) satisfies condition (iii), then α_3(Σ) < 0 (∀Σ) if

    [2a m(u) − (2/k) m(u) + (4/pk²) u m'(u)][Σ_{i=1}^p s_{ii} q_{ii}] + [m²(u)][Σ_{i=1}^p q_{ii}] < 0.

An application of Lemma 4.1 and simple algebra shows that condition (ii) is sufficient for the latter inequality. We omit further details.

Proof of Theorem 8. We have Σ̂^{-1} = Σ̂_b^{-1} + h(S)I, with h(S) = b v t(v) and v = 1/tr(S). Then R^{(3)}(Σ̂^{-1}, Σ^{-1}) = R^{(3)}(Σ̂_b^{-1}, Σ^{-1}) + α^{(3)}(Σ), in which

    α^{(3)}(Σ) = E[2b h(S) tr(S^{-1}Σ²) − 2h(S) tr(Σ) + h²(S) tr(Σ²)].

Set T = Σ². The identity (2.1) becomes

    E[h(S) tr(Σ)] = 2E tr[(∂h(S)/∂S) · Σ²_{(1/2)}] + (k − p − 1) E[h(S) tr(S^{-1}Σ²)].

(Here we use the fact that, for all matrices M and N, tr[M N_{(1/2)}] = tr[M_{(1/2)} N].) After substituting for E[h(S) tr(S^{-1}Σ²)] in α^{(3)}(Σ), we infer that α^{(3)}(Σ) < 0 (∀Σ) if

    tr{[−4(∂h(S)/∂S)_{(1/2)} − 2b*h(S)S^{-1} − h²(S)I]Σ²} ≥ 0    (∀Σ),  b* = b − (k − p − 1).

Since Σ² ≥ 0 (∀Σ), we complete the proof by establishing the positive semidefiniteness of the bracketed matrix. For h(S) = b v t(v) and v = 1/tr(S), we have ∂h(S)/∂S = −b v²[v t'(v) + t(v)]I. Since the eigenvalues of vS are all less than unity and t'(v) ≤ 0, positive semidefiniteness holds whenever t(v) satisfies the bound given in the hypothesis, and the proof is complete. We omit further details.

REFERENCES

[1] BELLMAN, R. (1970). Introduction to Matrix Analysis. McGraw-Hill, New York.
[2] EFRON, B., AND MORRIS, C. (1976). Multivariate empirical Bayes and estimation of covariance matrices. Ann. Statist. 4 22-32.
[3] HAFF, L. R. (1977). Minimax estimators for a multinormal precision matrix. J. Multivariate Anal. 7 374-385.
[4] HAFF, L. R. (1979). Estimation of the inverse covariance matrix: random mixtures of the inverse Wishart matrix and the identity. Ann. Statist. 7.
[5] HAFF, L. R. (1980). Empirical Bayes estimation of the multivariate normal covariance matrix. Ann. Statist. 8.

[6] JAMES, W., AND STEIN, C. (1961). Estimation with quadratic loss. In Proceedings Fourth Berkeley Symposium Math. Statist. Prob., pp. 361-379. Univ. of California Press, Berkeley.
[7] KAUFMAN, G. M. (1967). Some Bayesian Moment Formulae. Report No. 6710, Center for Operations Research and Econometrics, Catholic University of Louvain, Heverlee, Belgium.
[8] MARTIN, C. (1965). Some moment formulas for multivariate analysis.
[9] OLKIN, I. (1959). A class of integral identities with matrix argument. Duke Math. J. 26 207-213.
[10] OLKIN, I., AND RUBIN, H. (1962). A characterization of the Wishart distribution. Ann. Math. Statist. 33 1272-1275.
[11] PERLMAN, M. (1972). Reduced mean square estimation for several parameters. Sankhya B 34 89-92.
[12] PRESS, S. J. (1972). Applied Multivariate Analysis. Holt, Rinehart & Winston, New York.
[13] SELLIAH, J. B. (1964). Estimation and Testing Problems in a Wishart Distribution. Ph.D. thesis, Department of Statistics, Stanford University.
[14] STEIN, C. (1973). Estimation of the mean of a multivariate normal distribution. In Proceedings Prague Symposium Asymptotic Statistics, pp. 345-381.
[15] STEIN, C. (1975). Rietz lecture. Thirty-eighth Annual Meeting, IMS, Atlanta, Georgia. Unpublished.
[16] STEIN, C., EFRON, B., AND MORRIS, C. (1972). Improving the usual estimator of a normal covariance matrix. Technical Report No. 37, Department of Statistics, Stanford University.
[17] WHITNEY, H. (1957). Geometric Integration Theory. Princeton Univ. Press, Princeton, N.J.