
The projection theorem in Hilbert spaces

Wolfgang H. Schmidt

Fachhochschule für Technik und Wirtschaft Berlin

Summary. The projection theorem in Hilbert spaces with real valued scalar products is shown to imply several well known results. Special attention is paid to statistical problems.

AMS subject classification:

Key words: Hilbert spaces, projection, Fourier approximation, Least-Squares, best linear unbiased estimator, best unbiased estimator, Bayesian estimator, Cramér-Rao inequality

1 Introduction

The purpose of this paper is to show that several well known results in different fields can be deduced from the projection theorem in Hilbert spaces. This is demonstrated, e.g., for the computation of Fourier approximations of periodic functions, for Least-Squares approximations in $\mathbb{R}^n$ and in $\mathbb{R}^{n\times m}$, and for the determination of best unbiased estimators. The Gauss-Markov theorem and the Lehmann-Scheffé theorem, as well as a characterisation of the Maximum-Likelihood estimator by the attainment of the Rao-Cramér inequality, are shown to be consequences of the projection theorem. These examples are intended to highlight the projection theorem; of course, the interested reader may have in mind further applications.

2 The projection theorem

Definition. Let $F$ be a linear space endowed with a real valued scalar product $\langle f,g\rangle$ for $f,g\in F$ and the norm $\|f\| = \sqrt{\langle f,f\rangle}$, and let $G$ be a subset of $F$. A $g^*\in G$ is called a projection of $f\in F$ onto $G$ iff $\|f-g^*\| \le \|f-g\|$ for all $g\in G$.

Theorem. Let $F$ be a Hilbert space with the real valued scalar product $\langle\cdot,\cdot\rangle$, the norm $\|\cdot\|$ and $G\subset F$. Then it holds:

a) The condition
$$\langle f-g^*,\, g^*-g\rangle = 0 \quad\text{for all } g\in G \qquad (1)$$
is sufficient for $g^*$ to be the unique projection of $f$ onto $G$.

b) If, moreover, $G$ is a linear subspace of $F$, the condition (1) is necessary for $g^*$ to be a projection of $f\in F$ onto $G$.

Proof. Let the condition (1) be fulfilled. It then holds
$$\|f-g\|^2 = \|f-g^*+g^*-g\|^2 = \|f-g^*\|^2 + \|g^*-g\|^2 \ge \|f-g^*\|^2$$
for all $g\in G$, since the cross term vanishes by (1); thus $g^*$ is a projection of $f$ onto $G$. We show that $g^*$ is the unique projection. To that aim we assume that $g^{**}$ is another projection, so that $\|g^{**}-f\| = \|f-g^*\|$. Then we infer with (1)
$$\begin{aligned}
\|g^{**}-g^*\|^2 &= \|g^{**}-f+f-g^*\|^2 = \|g^{**}-f\|^2 + \|f-g^*\|^2 + 2\langle g^{**}-f,\, f-g^*\rangle\\
&= 2\|f-g^*\|^2 + 2\langle g^{**}-f,\, f-g^*\rangle\\
&= 2\|f-g^*\|^2 + 2\langle g^{**}-g^*+g^*-f,\, f-g^*\rangle\\
&= 2\langle g^{**}-g^*,\, f-g^*\rangle = 0,
\end{aligned}$$
the last equality following from (1) applied with $g = g^{**}$, which implies $g^{**} = g^*$.

To prove the statement b) we assume $g^*$ to be a projection of $f$ onto the linear subspace $G$ and let (1) be violated. Then there must be a $g\in G$ with $\langle f-g^*,\, g^*-g\rangle \ne 0$, which implies $\langle f-g^*,\, g-g^*\rangle \ne 0$ and $\|g^*-g\| \ne 0$. Now
$$\tilde g = g^* + \lambda(g-g^*) \quad\text{with}\quad \lambda = \frac{\langle f-g^*,\, g-g^*\rangle}{\|g^*-g\|^2}$$
belongs to $G$ since $G$ is a linear subspace. Finally,
$$\|f-\tilde g\|^2 = \|f-g^*\|^2 + \lambda^2\|g-g^*\|^2 - 2\lambda\langle f-g^*,\, g-g^*\rangle = \|f-g^*\|^2 - \lambda^2\|g-g^*\|^2 < \|f-g^*\|^2$$
demonstrates that $g^*$ cannot be a projection of $f$ onto $G$ if (1) is violated.

Remark 1. The assumption that $G$ is a linear subspace can be weakened: it suffices that $G$ contains with any two elements $h$ and $k$ also $h+\lambda(h-k)$ for all $\lambda\in\mathbb{R}$. Notice that this weaker assumption is fulfilled if $G$ is the set of all unbiased estimators for an unknown parameter.

Remark 2. The condition (1) is implied by the somewhat stronger condition
$$\langle f-g^*,\, g\rangle = 0 \quad\text{for all } g\in G. \qquad (2)$$

Remark 3. The condition (1) ensures $g^*$ to be a projection of $f$ onto $G$ even if $\langle\cdot,\cdot\rangle$ is only a semi-scalar product. Under this weaker assumption there may exist other projections besides $g^*$.

3 Applications of the projection theorem

3.1 Approximation of square integrable functions

First we discuss the problem of approximating a square integrable function $f\in L^2[a,b]$ within a linear subspace of $L^2[a,b]$. Let
$$F = L^2[a,b] = \Big\{ f : [a,b]\to\mathbb{R} \;:\; \int_a^b f^2(x)\,dx < \infty \Big\}$$
be the linear space of all square integrable functions defined over a finite interval $[a,b]\subset\mathbb{R}$. Obviously, $F$ is endowed with the semi-scalar product
$$\langle f,g\rangle = \int_a^b f(x)g(x)\,dx$$
and the semi-norm
$$\|f\| = \sqrt{\int_a^b f^2(x)\,dx}.$$
For given linearly independent basis functions $g_1,\dots,g_m\in F$,
$$G = \Big\{ g : g(x) = \sum_{i=1}^m \beta_i g_i(x);\ \beta_1,\dots,\beta_m\in\mathbb{R}\Big\}$$
is an $m$-dimensional linear subspace of $F$. Thus minimising $\|f-g\|$ over $G$ means minimising $\int_a^b \big(f(x)-g(x)\big)^2\,dx$ over $G$.

Let $g^* = \sum_{i=1}^m \beta_i^* g_i$ be a best quadratic-mean approximation for $f$. The problem is the determination of the coefficients $\beta_1^*,\dots,\beta_m^*$. Now the condition (2),
$$\langle f-g^*,\, g\rangle = 0 \quad\text{for all } g\in G,$$
is equivalent to
$$\langle f, g_j\rangle = \sum_{i=1}^m \langle g_i, g_j\rangle\,\beta_i^*, \quad j = 1,\dots,m, \qquad (3)$$
which is the well known normal equations system used to compute the coefficients $\beta_1^*,\dots,\beta_m^*$.

From (3) we easily get the unique Fourier coefficients of the Fourier approximation of a periodic function with period $2\pi$. For that purpose we choose $a = -\pi$, $b = \pi$,
$$g_1(x)\equiv 1,\quad g_2(x) = \cos x,\quad g_3(x) = \cos 2x,\ \dots,\ g_{n+1}(x) = \cos nx,$$
$$g_{n+2}(x) = \sin x,\quad g_{n+3}(x) = \sin 2x,\ \dots,\ g_{2n+1}(x) = \sin nx,$$
thus $m = 2n+1$ for an integer $n$. Then, in view of $\langle g_1,g_1\rangle = 2\pi$, $\langle g_i,g_i\rangle = \pi$ for $i = 2,\dots,m$ and $\langle g_i,g_j\rangle = 0$ for $i\ne j$, $i,j = 1,\dots,m$, the normal equations system (3) leads to the Fourier approximation
$$g^*(x) = \frac{a_0^*}{2} + \sum_{k=1}^n a_k^*\cos kx + \sum_{k=1}^n b_k^*\sin kx$$
with the Fourier coefficients
$$a_k^* = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos kx\,dx, \quad k = 0,\dots,n,$$
and
$$b_k^* = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin kx\,dx, \quad k = 1,\dots,n.$$

Especially, for the best polynomial approximation of $f$ by $g^*(x) = \sum_{j=0}^n \beta_j^* x^j$ the condition (3) reads as
$$\int_a^b x^i f(x)\,dx = \sum_{j=0}^n \beta_j^*\int_a^b x^{i+j}\,dx, \quad i = 0,\dots,n.$$
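As a concrete numerical illustration of the normal equations (3) (a sketch only; the function $f(x)=e^x$, the interval $[0,1]$ and the polynomial degree are invented for this example), one can solve for the coefficients and check the orthogonality condition (2):

```python
import numpy as np

# Best quadratic-mean polynomial approximation of f(x) = exp(x) on [0, 1]
# with basis g_j(x) = x^j, j = 0, ..., 3.  Integrals use the trapezoidal rule.
deg = 3
x = np.linspace(0.0, 1.0, 20001)
dx = x[1] - x[0]
f = np.exp(x)

def integral(y):
    """Trapezoidal rule on the uniform grid x."""
    return float(np.sum((y[:-1] + y[1:]) * 0.5 * dx))

# Normal equations (3): sum_i <g_i, g_j> beta_i* = <f, g_j>.
Gram = np.array([[integral(x**(i + j)) for j in range(deg + 1)]
                 for i in range(deg + 1)])      # entries tend to 1/(i+j+1)
rhs = np.array([integral(x**j * f) for j in range(deg + 1)])
beta = np.linalg.solve(Gram, rhs)

g_star = sum(beta[j] * x**j for j in range(deg + 1))

# Condition (2): the residual f - g* is orthogonal to every basis function.
for j in range(deg + 1):
    assert abs(integral(x**j * (f - g_star))) < 1e-9
```

The orthogonality of the residual to each $g_j$ is exactly the content of (3); the maximal pointwise error of the cubic approximation is already of order $10^{-3}$.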

3.2 Least-Squares approximations

Let us look for the solution, nearest with respect to the quadratic distance, of an inhomogeneous linear equation system $G\beta = f$ which may have no exact solution. Here $f\in\mathbb{R}^n$ is an $n$-vector with real components and $G$ is an $n\times m$ matrix with real entries. The problem is to find a $\beta^*\in\mathbb{R}^m$ such that
$$\|f-G\beta^*\| = \min_{\beta\in\mathbb{R}^m}\|f-G\beta\|,$$
where
$$\|f\|^2 = \sum_{i=1}^n f_i^2$$
denotes the Euclidean norm of $f$. $G\beta^*$ then is the Least-Squares approximation of $f$ in the range of $G$. To apply the projection theorem we choose $F = \mathbb{R}^n$, $G = \{G\beta : \beta\in\mathbb{R}^m\}\subset\mathbb{R}^n$ and $\langle f,g\rangle = \sum_{i=1}^n f_i g_i$, the usual scalar product in $\mathbb{R}^n$. Here the condition (2), $\langle f-g^*, g\rangle = 0$ for all $g\in G$, is equivalent to the normal equations system for $\beta^*$,
$$G^T G\beta^* = G^T f, \qquad (4)$$
where $G^T$ denotes the transpose of $G$. All solutions of (4) are obtained in the form
$$\beta^* = (G^T G)^- G^T f,$$
where $A^-$ denotes any generalised inverse of the matrix $A$. By the projection theorem
$$f^* = G\beta^* = G(G^T G)^- G^T f$$
is the unique projection of $f$ onto $G$.

Remark 4. The matrix $G(G^TG)^-G^T$ is known to be the unique projection matrix onto the linear space generated by the columns of $G$, i.e. the range of $G$. The equality $f^* = G\beta^* = GG^-f$ also holds.

The empirical Fourier analysis fits into this frame too. Assume that a $2\pi$-periodic function $h(t)$ can be observed at the equidistant points $t_i = \frac{2\pi i}{N}$, $i = 0,\dots,N-1$, $N = 2n$, $n$ being an integer, only. For instance it may happen in technical applications that the analytic expression for $h$ is not available but $h$ may be observed at the points $t_i$.
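Before specialising to the trigonometric case, the vector Least-Squares projection can be checked numerically. In this sketch (matrix and data invented for the example) the matrix $G$ is deliberately rank-deficient, so a generalised inverse is genuinely needed:

```python
import numpy as np

# Rank-deficient system G beta = f: column 3 = column 1 + column 2.
G = np.array([[1., 0., 1.],
              [1., 1., 2.],
              [1., 2., 3.],
              [1., 3., 4.]])
f = np.array([1., 2., 2., 4.])

# One solution of the normal equations (4), taking the Moore-Penrose inverse
# as a particular generalised inverse of G^T G:
beta_star = np.linalg.pinv(G.T @ G) @ G.T @ f
f_star = G @ beta_star              # the projection of f onto the range of G

# Condition (2): the residual is orthogonal to every column of G.
assert np.allclose(G.T @ (f - f_star), 0.0)

# The projection f* is unique although beta* is not: np.linalg.lstsq
# returns the same f* from a (possibly different) solution beta.
beta_lstsq = np.linalg.lstsq(G, f, rcond=None)[0]
assert np.allclose(G @ beta_lstsq, f_star)
```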

With $f = \big(h(t_0),\dots,h(t_{N-1})\big)^T$,
$$f(t,\beta) = \frac{a_0}{2} + \sum_{k=1}^{n} a_k\cos kt + \sum_{k=1}^{n-1} b_k\sin kt$$
and $\beta = (a_0, a_1,\dots,a_n, b_1,\dots,b_{n-1})^T$, the problem is to find a vector $\beta^*$ with
$$\|f - f(t,\beta^*)\| = \min_{\beta\in\mathbb{R}^N}\|f - f(t,\beta)\|.$$
With
$$G = \begin{pmatrix} \tfrac12 & \cos t_0 & \cdots & \cos nt_0 & \sin t_0 & \cdots & \sin(n-1)t_0\\ \vdots & \vdots & & \vdots & \vdots & & \vdots\\ \tfrac12 & \cos t_{N-1} & \cdots & \cos nt_{N-1} & \sin t_{N-1} & \cdots & \sin(n-1)t_{N-1} \end{pmatrix}$$
it holds
$$G^T G = n\,I_N, \qquad (5)$$
where $I_N$ denotes the $N\times N$ identity matrix. The condition (5) is implied by the property that the functions $1, \cos t,\dots,\cos nt, \sin t,\dots,\sin(n-1)t$ form an orthogonal system with respect to the discrete scalar product over the points $t_0,\dots,t_{N-1}$. Utilising (5) we get that
$$f(t,\beta^*) = \frac{a_0^*}{2} + \sum_{k=1}^{n} a_k^*\cos kt + \sum_{k=1}^{n-1} b_k^*\sin kt$$
with
$$a_k^* = \frac1n\sum_{i=0}^{N-1} h(t_i)\cos kt_i, \quad k = 0,\dots,n, \qquad (6)$$
and
$$b_k^* = \frac1n\sum_{i=0}^{N-1} h(t_i)\sin kt_i, \quad k = 1,\dots,n-1, \qquad (7)$$
is the best approximation for $f$. The equations (6) and (7) define the well known empirical Fourier coefficients.
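As a quick numerical sanity check (an illustration, not from the paper; the test function is invented), the empirical coefficients (6) and (7) recover the coefficients of a low-order trigonometric polynomial exactly:

```python
import numpy as np

# N = 2n equidistant points t_i = 2*pi*i/N; invented test function:
# h(t) = 3 + 2 cos t - sin 2t, i.e. a_0 = 6 (constant term a_0/2 = 3),
# a_1 = 2, b_2 = -1, all other coefficients zero.
n = 8
N = 2 * n
t = 2.0 * np.pi * np.arange(N) / N
h = 3.0 + 2.0 * np.cos(t) - np.sin(2.0 * t)

a = np.array([np.sum(h * np.cos(k * t)) / n for k in range(n + 1)])   # (6)
b = np.array([np.sum(h * np.sin(k * t)) / n for k in range(1, n)])    # (7)

assert abs(a[0] - 6.0) < 1e-12 and abs(a[1] - 2.0) < 1e-12
assert abs(b[1] - (-1.0)) < 1e-12     # b[1] holds b_2 since k starts at 1
assert np.max(np.abs(np.delete(a, [0, 1]))) < 1e-12
```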

Next we discuss the problem of approximating a matrix within a linear subspace of matrices. Let $\mathbb{R}^{n\times p}$ be the set of all $n\times p$ matrices with real elements. The problem is to approximate a given $F\in\mathbb{R}^{n\times p}$ using the Least-Squares approach by a matrix $GB^*$ in the linear subspace
$$G = \{GB : B\in\mathbb{R}^{k\times p}\}.$$
Here $G$ is a fixed $n\times k$ matrix. As a scalar product in $\mathbb{R}^{n\times p}$ we choose $\langle F,H\rangle = \operatorname{tr}(F^TH)$. Now we are looking for a matrix $B^*\in\mathbb{R}^{k\times p}$ with
$$\|F-GB^*\|^2 = \min_{B\in\mathbb{R}^{k\times p}}\|F-GB\|^2,$$
a task usually occurring in the theory of multivariate linear models.

Again the condition (2),
$$\langle F-GB^*,\, GB\rangle = \operatorname{tr}\big[(F-GB^*)^T GB\big] = 0 \quad\text{for all } B\in\mathbb{R}^{k\times p},$$
is equivalent to the normal equations system
$$G^TGB^* = G^TF. \qquad (8)$$
The condition (4) is a special case of the condition (8) for $p = 1$.

Remark 5. Usually, the result (8) is derived utilising the theory of Kronecker matrices.

Remark 6. Every other scalar product in $\mathbb{R}^{n\times p}$ with a norm equivalent to $\|F\| = \sqrt{\operatorname{tr}(F^TF)}$ leads to the system (8) too. As an example one might choose $\langle F,H\rangle = \lambda_{\max}(F^TH + H^TF)$, where $\lambda_{\max}$ denotes the largest eigenvalue of a given symmetric matrix.

3.3 Approximation of a random variable by a constant

Let $[\Omega,\mathcal A,P]$ be a probability space and let $X$ be a random variable mapping $[\Omega,\mathcal A,P]\to[\mathcal X,\mathcal A_{\mathcal X},P^X]$ with $\mathcal X\subset\mathbb{R}$. By $F$ we denote the set of all such random variables with finite second moments and positive variance, i.e.
$$EX^2 = \int_{\mathcal X} x^2\,dP^X < \infty, \qquad \sigma^2 = E(X-EX)^2 > 0 \quad\text{with}\quad EX = \int_{\mathcal X} x\,dP^X.$$
As a semi-scalar product serves
$$\langle X,Y\rangle = EXY = \int_\Omega X(\omega)Y(\omega)\,dP.$$
The problem is to find a constant $a^*\in\mathbb{R}$ with
$$E(X-a^*)^2 = \min_{a\in\mathbb{R}} E(X-a)^2.$$
It is well known that $a^* = EX$ is the solution. Again this can be deduced from the projection theorem regarding
$$\langle X-a^*,\, a\rangle = E\big[(X-a^*)a\big] = a\,E(X-a^*) = 0 \quad\text{for all } a\in\mathbb{R}$$
if $a^* = EX$.

Using this result one also easily proves that the posterior mean is a Bayes estimator. For that we assume that the random sample $X$ with values in $\mathcal X$ possesses a probability distribution $P$ within a family $\mathcal P = \{P_\vartheta : \vartheta\in\Theta\}$, $\vartheta$ being a real parameter. Let $\mathcal P$ be dominated by a $\sigma$-finite measure $\mu$ and let $f(x/\vartheta) = \frac{dP_\vartheta}{d\mu}(x)$ be a version of the Radon-Nikodym densities. $f(x/\vartheta)$ is assumed to be the conditional density of $X$ given $\vartheta$, and $\vartheta$ is considered at random with a given prior density $\pi(\vartheta)$. The Bayesian risk of an estimator $\hat\vartheta = \hat\vartheta(X)$ for $\vartheta$ is
$$r(\hat\vartheta,\pi) = \int_\Theta\Big[\int_{\mathcal X}\big(\hat\vartheta(x)-\vartheta\big)^2 f(x/\vartheta)\,d\mu\Big]\pi(\vartheta)\,d\vartheta,$$
and $\hat\vartheta^* = \hat\vartheta^*(X)$ is a Bayes estimator if
$$r(\hat\vartheta^*,\pi) = \min_{\hat\vartheta} r(\hat\vartheta,\pi).$$
Because of
$$r(\hat\vartheta,\pi) = \int_{\mathcal X} g(x)\Big[\int_\Theta\big(\hat\vartheta(x)-\vartheta\big)^2 h(\vartheta/x)\,d\vartheta\Big]d\mu$$
with
$$g(x) = \int_\Theta f(x/\vartheta)\pi(\vartheta)\,d\vartheta$$
and
$$h(\vartheta/x) = \frac{f(x/\vartheta)\pi(\vartheta)}{\int_\Theta f(x/\vartheta)\pi(\vartheta)\,d\vartheta},$$
$\hat\vartheta^*$ is a Bayes estimator iff $\hat\vartheta^*$ minimises
$$\int_\Theta\big(\hat\vartheta(x)-\vartheta\big)^2 h(\vartheta/x)\,d\vartheta.$$
Thus
$$\hat\vartheta^*(X) = \int_\Theta \vartheta\, h(\vartheta/X)\,d\vartheta$$
is a Bayes estimator.
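The posterior-mean result can be illustrated with a conjugate example (the Beta prior, the binomial likelihood and all numbers below are invented for this sketch; the $\vartheta$-integrals are discretised on a grid):

```python
import numpy as np
from math import comb

# theta ~ Beta(2, 2) prior, X ~ Binomial(m, theta).
m, a_prior, b_prior = 5, 2.0, 2.0
theta = np.linspace(1e-4, 1.0 - 1e-4, 4001)
dt = theta[1] - theta[0]

def integrate(y):
    """Trapezoidal rule on the theta grid."""
    return float(np.sum((y[:-1] + y[1:]) * 0.5 * dt))

prior = theta**(a_prior - 1) * (1.0 - theta)**(b_prior - 1)
prior /= integrate(prior)

def bayes_risk(estimate):
    # r(est, pi) = ∫ [ sum_s (est(s) - theta)^2 P_theta(X = s) ] pi(theta) dtheta
    inner = np.zeros_like(theta)
    for s in range(m + 1):
        lik = comb(m, s) * theta**s * (1.0 - theta)**(m - s)
        inner += (estimate(s) - theta)**2 * lik
    return integrate(inner * prior)

post_mean = lambda s: (s + a_prior) / (m + a_prior + b_prior)  # posterior mean
mle = lambda s: s / m                                          # for comparison

# The posterior mean has smaller Bayesian risk than the competing estimator.
assert bayes_risk(post_mean) < bayes_risk(mle)
```

Here the posterior under a Beta$(\alpha,\beta)$ prior is Beta$(\alpha+s,\beta+m-s)$, so its mean $(s+\alpha)/(m+\alpha+\beta)$ is available in closed form.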

3.4 Best linear unbiased estimator, the Gauss-Markov theorem

Let $F$ be the set of random variables as described in section 3.3. Further, let $X_1,\dots,X_n$ be a sample of size $n$ to the distribution $P^X$, i.e. $X_1,\dots,X_n$ are independent and identically distributed random variables, each with distribution $P^X$. By $F$ we denote the set of all random variables being measurable functions of the random vector $X^{(n)} = (X_1,\dots,X_n)^T$ with finite moments of second order. Again in $F$ the semi-scalar product $\langle X,Y\rangle = EXY$ and the semi-norm $\|X\| = \sqrt{EX^2}$ are defined. We are looking for a best linear unbiased estimator for
$$\mu = \int_{\mathcal X} x\,dP^X,$$
that is, a $g^*\in G$ is to be determined with
$$E(g^*-\mu)^2 = \min_{g\in G} E(g-\mu)^2.$$
Here
$$G = \Big\{ g\big(X^{(n)}\big) = \sum_{i=1}^n c_i X_i,\ \sum_{i=1}^n c_i = 1 \Big\}$$
is the set of all linear unbiased estimators for $\mu$. It is well known that the arithmetic mean
$$g^* = \bar X = \frac1n\sum_{i=1}^n X_i$$
is the solution. Again this follows by the projection theorem. With $g^* = \sum_{i=1}^n c_i^* X_i$ and $g = \sum_{i=1}^n c_i X_i$ we have
$$\langle \mu-g^*,\, g^*-g\rangle = 0 \quad\text{for all } g\in G \text{ and all } \mu\in\mathbb{R}$$
iff
$$\sigma^2\sum_{i=1}^n c_i^*\big(c_i-c_i^*\big) = 0.$$
Thus condition (1) is fulfilled for $c_i^* = \frac1n$, $i = 1,\dots,n$.
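A minimal numerical check of this minimisation (the random weights are invented for the illustration): for iid $X_i$ with variance $\sigma^2$, $\operatorname{Var}\big(\sum c_iX_i\big) = \sigma^2\sum c_i^2$, and under $\sum c_i = 1$ the sum of squares is minimised by $c_i = 1/n$.

```python
import numpy as np

# Compare sum c_i^2 for random unbiased weights against the arithmetic mean.
rng = np.random.default_rng(0)
n = 6
best = np.full(n, 1.0 / n)
for _ in range(1000):
    c = rng.normal(size=n)
    c /= c.sum()                  # enforce unbiasedness: sum c_i = 1
    assert np.sum(c**2) >= np.sum(best**2) - 1e-12
```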

Next we shall discuss the linear model $f = G\beta + \varepsilon$. Here $f$ is the $n$-vector of random observations, $G\in\mathbb{R}^{n\times m}$ is the known design matrix, $\beta\in\mathbb{R}^m$ is the unknown vector of regression coefficients and $\varepsilon$ is an unobservable random error vector with the properties $E\varepsilon = 0\in\mathbb{R}^n$ and $E\varepsilon\varepsilon^T = \sigma^2\Lambda$ with unknown variance $0<\sigma^2<\infty$, where $\Lambda\in\mathbb{R}^{n\times n}$ is a known positive definite matrix. In particular, $\Lambda$ could be the $n\times n$ identity $I_n$. The Gauss-Markov theorem claims that the estimator
$$\hat\gamma^* = C\big(G^T\Lambda^{-1}G\big)^+G^T\Lambda^{-1}f$$
for $\gamma = C\beta$ has smallest covariance matrix within the class of all linear unbiased estimators for $\gamma = C\beta$ whenever $\gamma = C\beta$ with $C\in\mathbb{R}^{l\times m}$ is an estimable parameter. As usual, $A^+$ denotes the Moore-Penrose inverse of a given matrix $A$.

In the set $F$ of all random $l$-vectors with existing covariance matrices we introduce the semi-scalar product $\langle f,h\rangle = \operatorname{tr} E(fh^T)$. Let $G$ be the set of all linear unbiased estimators for $\gamma = C\beta$,
$$G = \{Lf : LG = C,\ f\in F\}.$$
The problem is to find an $L^*\in\mathbb{R}^{l\times n}$ such that $\hat\gamma^* = L^*f$ minimises
$$\|Lf-\gamma\|^2 = E(Lf-\gamma)^T(Lf-\gamma) = \operatorname{tr} E(Lf-\gamma)(Lf-\gamma)^T$$
over all $L\in\mathbb{R}^{l\times n}$ with $LG = C$. Obviously, $E(Lf-\gamma)(Lf-\gamma)^T$ is the covariance matrix of the linear unbiased estimator $\hat\gamma = Lf$ provided $LG = C$.

With these notations the condition (1) reads as follows:
$$\langle \gamma-\hat\gamma^*,\, \hat\gamma^*-\hat\gamma\rangle = 0 \quad\text{for all } \hat\gamma\in G \text{ and all } f\in F.$$
Because of
$$\hat\gamma^*-\gamma = L^*(G\beta+\varepsilon) - C\beta = L^*\varepsilon \quad\text{(since } L^*G = C \text{ for estimable } \gamma\text{)}$$
and
$$\hat\gamma^*-\hat\gamma = (L^*-L)\varepsilon$$
we have
$$\langle \gamma-\hat\gamma^*,\, \hat\gamma^*-\hat\gamma\rangle = \langle L^*\varepsilon,\, (L-L^*)\varepsilon\rangle = \operatorname{tr} E\big[L^*\varepsilon\varepsilon^T(L-L^*)^T\big] = \sigma^2\operatorname{tr}\big[L^*\Lambda(L-L^*)^T\big] = 0$$
for $L^* = C(G^T\Lambda^{-1}G)^+G^T\Lambda^{-1}$, since
$$L^*\Lambda L^T - L^*\Lambda L^{*T} = C\big(G^T\Lambda^{-1}G\big)^+G^TL^T - C\big(G^T\Lambda^{-1}G\big)^+\big(G^T\Lambda^{-1}G\big)\big(G^T\Lambda^{-1}G\big)^+C^T = 0.$$
Thus the generalised Aitken estimator
$$\hat\gamma^* = C\big(G^T\Lambda^{-1}G\big)^+G^T\Lambda^{-1}f$$
minimises the trace of the covariance matrix within the class of all linear unbiased estimators of the estimable parameter $\gamma = C\beta$. Of course, $\hat\gamma^*$ also has minimal covariance matrix, not only minimal trace, since $k^T\hat\gamma^*$ as an estimator for $k^T\gamma$ has smallest variance for every fixed $k\in\mathbb{R}^l$, which can be proved by the same reasoning. For details see C.R. Rao (1973).
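A small numerical sketch of the Gauss-Markov/Aitken statement (design matrix, $\Lambda$ and $C$ invented for the example; only the trace comparison is checked):

```python
import numpy as np

# Compare the generalised Aitken (GLS) matrix L* with the ordinary Least-Squares
# matrix; both satisfy L G = C, but the GLS covariance trace is no larger.
rng = np.random.default_rng(3)
n, m = 7, 3
G = rng.normal(size=(n, m))
C = np.eye(m)                                   # estimate gamma = beta itself
Lam = np.diag(rng.uniform(0.5, 3.0, size=n))    # known positive definite Lambda

Li = np.linalg.inv(Lam)
L_gls = C @ np.linalg.pinv(G.T @ Li @ G) @ G.T @ Li
L_ols = C @ np.linalg.pinv(G.T @ G) @ G.T

# Unbiasedness: L G = C for both estimators.
assert np.allclose(L_gls @ G, C) and np.allclose(L_ols @ G, C)

# Covariance of L f is sigma^2 L Lambda L^T; sigma^2 cancels in the comparison.
tr_gls = np.trace(L_gls @ Lam @ L_gls.T)
tr_ols = np.trace(L_ols @ Lam @ L_ols.T)
assert tr_gls <= tr_ols + 1e-12
```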

3.5 The Lehmann-Scheffé Theorem

Let $X$ be a random vector having a distribution in $\mathcal P = \{P_\vartheta : \vartheta\in\Theta\}$ and let $\mathcal P$ possess a sufficient and complete statistic $T(X)$. The Lehmann-Scheffé theorem asserts that for every estimable one-dimensional parameter $\gamma = \gamma(\vartheta)$ there is an estimator of the form $\hat\gamma^* = h(T(X))$ which has smallest variance within the class of all unbiased estimators for $\gamma$, see e.g. H. Witting (1985).

Indeed, let $\hat\gamma^\bullet = \hat\gamma^\bullet(X)$ be an arbitrary but fixed unbiased estimator of $\gamma$; then
$$\hat\gamma^* = E\big\{\hat\gamma^\bullet(X)/T(X)\big\},$$
a version of the conditional expectation independent of $\vartheta$ outside of a $\mathcal P$-zero set, has the desired property. Actually, for $\hat\gamma^*$ with the semi-scalar product
$$\langle\hat\gamma,\tilde\gamma\rangle = E_\vartheta\,\hat\gamma\tilde\gamma$$
and any other unbiased $\hat\gamma$,
$$\begin{aligned}
\langle\gamma-\hat\gamma^*,\, \hat\gamma^*-\hat\gamma\rangle
&= E_\vartheta\big(\gamma-\hat\gamma^*\big)\big(\hat\gamma^*-\hat\gamma\big)\\
&= E_\vartheta\Big(E\big\{(\gamma-\hat\gamma^*)(\hat\gamma^*-\hat\gamma)/T(X)\big\}\Big)\\
&= E_\vartheta\Big((\gamma-\hat\gamma^*)\,E\big\{(\hat\gamma^*-\hat\gamma)/T(X)\big\}\Big)\\
&= E_\vartheta\Big((\gamma-\hat\gamma^*)\,k\big(T(X)\big)\Big)
\end{aligned} \qquad (9)$$
with $k(T(X)) = E\{(\hat\gamma^*-\hat\gamma)/T(X)\}$, using that $\gamma-\hat\gamma^*$ is measurable with respect to $T(X)$.

Now
$$E_\vartheta\, k\big(T(X)\big) = E_\vartheta\hat\gamma^* - E_\vartheta\hat\gamma = \gamma(\vartheta)-\gamma(\vartheta) = 0 \qquad (10)$$
implies $k(T(X)) = 0$ $\mathcal P$-almost everywhere by the completeness of $T(X)$. (9) and (10) together yield
$$\langle\gamma-\hat\gamma^*,\, \hat\gamma^*-\hat\gamma\rangle = 0 \quad\text{for all } \hat\gamma\in G \text{ and all } \vartheta\in\Theta.$$

Remark 7. The extension to $l$-dimensional estimable parameters $\gamma = \gamma(\vartheta)\in\mathbb{R}^l$ is obvious. Thus every unbiased estimator for $\gamma = \gamma(\vartheta)$ depending on $X$ only through a sufficient and complete statistic $T(X)$ possesses smallest covariance matrix within the class of all unbiased estimators for $\gamma(\vartheta)$.

Example 1. In the normal linear model $f\sim N(G\beta,\sigma^2\Lambda)$,
$$T(X) = \begin{pmatrix} G^T\Lambda^{-1}f\\ f^T\Lambda^{-1}f \end{pmatrix}$$
is a complete and sufficient statistic for $\vartheta = \begin{pmatrix}\beta\\ \sigma^2\end{pmatrix}$. Therefore, the generalised Aitken estimator
$$\hat\gamma^* = C\big(G^T\Lambda^{-1}G\big)^+G^T\Lambda^{-1}f$$
is a best unbiased estimator for $\gamma(\vartheta) = C\beta$, which is assumed to be an estimable parameter. Similarly,
$$\hat\sigma^2 = \frac{1}{n-r_G}\,\big\| f - G\big(G^T\Lambda^{-1}G\big)^+G^T\Lambda^{-1}f \big\|^2,$$
with $r_G$ the rank of $G$, is a best unbiased estimator for $\gamma(\vartheta) = \sigma^2$.

Example 2. Quality control. Let $X_1,\dots,X_n$ be a sample from the binomial distribution $B(\vartheta,1)$, $0<\vartheta<1$. Then with $X = (X_1,\dots,X_n)^T$,
$$T(X) = \sum_{i=1}^n X_i$$
is a complete and sufficient statistic for $\vartheta\in(0,1)$, and therefore
$$\hat\vartheta^* = \bar X = \frac1n\sum_{i=1}^n X_i$$
is a best unbiased estimator for $\vartheta$.
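Example 2 can be illustrated by simulation (sample size, $\vartheta$ and the replication count are invented for this sketch): the single observation $X_1$ is also unbiased for $\vartheta$, but conditioning on the sufficient and complete statistic $T = \sum X_i$ gives $E\{X_1/T\} = T/n = \bar X$, whose variance $\vartheta(1-\vartheta)/n$ is $n$ times smaller than $\operatorname{Var}X_1 = \vartheta(1-\vartheta)$.

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 0.3, 10, 200_000
X = rng.binomial(1, theta, size=(reps, n))

v_single = X[:, 0].var()          # close to theta*(1 - theta)     = 0.21
v_mean = X.mean(axis=1).var()     # close to theta*(1 - theta)/n   = 0.021
assert v_mean < v_single
```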

3.6 Attainment of the Rao-Cramér inequality

Attainment of the Rao-Cramér inequality can take place for Maximum-Likelihood estimators only. To make this statement precise let us first formulate some regularity conditions. Let $X$ be a random vector with values in $\mathcal X\subset\mathbb{R}^n$ having a distribution in a family $\mathcal P = \{P_\vartheta : \vartheta\in\Theta\}$ which is dominated by a $\sigma$-finite measure $\mu$. For a version of the Radon-Nikodym densities
$$\frac{dP_\vartheta}{d\mu}(x) = f(x,\vartheta)$$
it holds $f(x,\vartheta) > 0$ for all $x\in\mathcal X$ and for all $\vartheta\in\Theta$. Let $f(x,\vartheta)$ be differentiable with respect to $\vartheta$ for all $x\in\mathcal X$ and let $\Theta$ be an open interval of the real line. Further,
$$\int_{\mathcal X}\frac{d}{d\vartheta}f(x,\vartheta)\,d\mu = 0, \qquad \int_{\mathcal X}\hat\vartheta(x)\frac{d}{d\vartheta}f(x,\vartheta)\,d\mu = 1 \ \text{for every unbiased } \hat\vartheta(X),$$
$$I(\vartheta) = \int_{\mathcal X}\Big(\frac{d}{d\vartheta}\ln f(x,\vartheta)\Big)^2 f(x,\vartheta)\,d\mu < \infty$$
and $I(\vartheta) > 0$ for all $\vartheta\in\Theta$ should be fulfilled. Under these assumptions the Rao-Cramér inequality
$$\operatorname{Var}_\vartheta\hat\vartheta(X) \ge I^{-1}(\vartheta) \quad\text{for all } \vartheta\in\Theta$$
holds true, see e.g. P. Hoel, S. Port, C. Stone (1971). Further let us assume that a unique Maximum-Likelihood estimator $\vartheta_{ML}$ for $\vartheta$ exists. We shall show that the equality
$$\operatorname{Var}_\vartheta\hat\vartheta^*(X) = I^{-1}(\vartheta) \quad\text{for } \vartheta\in\Theta,$$
for an unbiased estimator $\hat\vartheta^*(X)$, implies the existence of a $\mu$-zero set $N$ such that $\hat\vartheta^*(x) = \vartheta_{ML}(x)$ outside of $N$.

To this aim let $F$ be the set of all random variables being functions of $X$ with finite second moments and let $G$ be the subset of all unbiased estimators for $\vartheta$ in $F$. With the semi-scalar product
$$\langle\hat\vartheta,\tilde\vartheta\rangle = \int_{\mathcal X}\hat\vartheta(x)\tilde\vartheta(x)f(x,\vartheta)\,d\mu$$
it holds
$$\operatorname{Var}_\vartheta\hat\vartheta = \|\hat\vartheta-\vartheta\|^2.$$
Since $\hat\vartheta^*(X)$ attains the lower bound $I^{-1}(\vartheta)$, it is a projection of $\vartheta$ onto $G$. Therefore, it follows by the projection theorem
$$\langle\vartheta-\hat\vartheta^*,\, \hat\vartheta^*-\hat\vartheta\rangle = 0 \quad\text{for all } \hat\vartheta\in G \text{ and all } \vartheta\in\Theta. \qquad (11)$$
Because of
$$\int_{\mathcal X}\big(\vartheta + I^{-1}(\vartheta)\,t\big)f(x,\vartheta)\,d\mu = \vartheta \quad\text{with}\quad t = \frac{d}{d\vartheta}\ln f(x,\vartheta),$$
(11) holds especially for $\hat\vartheta = \vartheta + I^{-1}(\vartheta)\,t$. Therefore, we get
$$\langle\vartheta-\hat\vartheta^*,\, \hat\vartheta^*-\vartheta\rangle = \langle\vartheta-\hat\vartheta^*,\, I^{-1}(\vartheta)\,t\rangle$$
or
$$-I^{-1}(\vartheta) = I^{-1}(\vartheta)\,\langle\vartheta-\hat\vartheta^*,\, t\rangle$$
or
$$1 = \langle\hat\vartheta^*-\vartheta,\, t\rangle,$$
and because of
$$\|t\|\,\|\hat\vartheta^*-\vartheta\| = \sqrt{I(\vartheta)}\,\sqrt{I^{-1}(\vartheta)} = 1,$$
$$\langle\hat\vartheta^*-\vartheta,\, t\rangle = \|\hat\vartheta^*-\vartheta\|\,\|t\|$$
holds true. In view of the equality case of the Cauchy-Schwarz inequality there is a $\mu$-zero set $N$ such that
$$t = a(\vartheta)\big(\hat\vartheta^*(x)-\vartheta\big) \quad\text{for all } x\in\mathcal X\setminus N$$
for some constant $a(\vartheta)$. Since
$$I(\vartheta) = \|t\|^2 = a^2(\vartheta)\int_{\mathcal X}\big(\hat\vartheta^*-\vartheta\big)^2 f(x,\vartheta)\,d\mu = a^2(\vartheta)\,I^{-1}(\vartheta)$$
is positive, $a(\vartheta)$ cannot be zero for $\vartheta\in\Theta$. By assumption $\vartheta_{ML}$ is a Maximum-Likelihood estimator for $\vartheta$, and therefore the Likelihood equation
$$\frac{d}{d\vartheta}f(x,\vartheta)\Big|_{\vartheta=\vartheta_{ML}(x)} = a(\vartheta_{ML})\,f(x,\vartheta_{ML})\,\big(\hat\vartheta^*(x)-\vartheta_{ML}(x)\big) = 0 \quad\text{for } x\in\mathcal X\setminus N$$
holds true, which implies, since $f > 0$ and $a(\vartheta_{ML})\ne 0$,
$$\hat\vartheta^*(x) = \vartheta_{ML}(x) \quad\text{outside of } N.$$
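For the binomial situation of Example 2 in section 3.5 the attainment can be verified exactly (a sketch with invented values of $\vartheta$ and $n$): the sample mean, which is also the Maximum-Likelihood estimator there, attains the Rao-Cramér bound.

```python
import numpy as np
from math import comb

# Bernoulli(theta) sample of size n: Fisher information I(theta) = n/(theta(1-theta)),
# and X-bar is unbiased with Var(X-bar) = theta(1-theta)/n = I(theta)^{-1}.
theta, n = 0.3, 10

# Exact Var(X-bar) by enumerating T = sum X_i ~ Binomial(n, theta).
pmf = np.array([comb(n, s) * theta**s * (1 - theta)**(n - s) for s in range(n + 1)])
xbar = np.arange(n + 1) / n
var_xbar = float(np.sum(pmf * xbar**2) - np.sum(pmf * xbar)**2)

I = n / (theta * (1.0 - theta))
assert abs(var_xbar - 1.0 / I) < 1e-12     # the bound is attained exactly
```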

References

[1] P.G. Hoel, S.C. Port, C.J. Stone (1971), Introduction to Statistical Theory, Houghton Mifflin Company, Boston.

[2] C.R. Rao (1973), Linear Statistical Inference and Its Applications, Wiley, New York.

[3] H. Witting (1985), Mathematische Statistik, Teubner Verlag.

