Estimation 2


We want to estimate the parameter θ with a GOOD estimator.

Estimator: u(X1, X2, ..., Xn) (a statistic, a random variable)

Estimate: u(x1, x2, ..., xn) (its observed value)

One possible definition of a GOOD estimator: one that is unbiased and has small variance.

Def 7.1.1. Y = u(X1, X2, ..., Xn) is called a minimum variance unbiased estimator (MVUE) of the parameter θ if Y is unbiased and the variance of Y is less than or equal to the variance of every other unbiased estimator of θ.

Example. X1, ..., X9 iid N(θ, 1). Consider two estimators of θ:

Y1 = u1(X1, ..., X9) = X̄,   Y2 = u2(X1, ..., X9) = X1.

E{Y1} = θ, V{Y1} = 1/9;   E{Y2} = θ, V{Y2} = 1.

Both are unbiased, and Y1 has the smaller variance. Can we say that Y1 has minimum variance among all unbiased estimators of θ?
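As a quick numerical check (a sketch, not part of the notes; the value θ = 2, the replication count, and the seed are arbitrary choices), a short Monte Carlo comparison of the two estimators:

```python
import random
import statistics

def simulate_estimators(theta=2.0, n=9, reps=20000, seed=463):
    """Draw repeated N(theta, 1) samples of size n and record
    Y1 = X-bar and Y2 = X1 for each sample."""
    rng = random.Random(seed)
    y1_vals, y2_vals = [], []
    for _ in range(reps):
        sample = [rng.gauss(theta, 1.0) for _ in range(n)]
        y1_vals.append(sum(sample) / n)  # Y1 = sample mean
        y2_vals.append(sample[0])        # Y2 = first observation
    return (statistics.mean(y1_vals), statistics.variance(y1_vals),
            statistics.mean(y2_vals), statistics.variance(y2_vals))

m1, v1, m2, v2 = simulate_estimators()
# Both empirical means sit near theta = 2 (both estimators unbiased),
# while the empirical variances sit near 1/9 and 1 respectively.
```

The simulation illustrates the definition but cannot answer the question posed above: it only compares these two estimators, not all unbiased estimators.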


Stat 463

Let δ(y) be a function of the observed value of the statistic Y; δ is called a decision function or decision rule.

We need a measure of the difference between θ and δ(y).

Loss function: L[θ, δ(y)]
1. Nonnegative.
2. Calculable for each pair [θ, δ(y)].

Risk function: R(θ, δ) = E{L[θ, δ(Y)]}

The risk is generally a function of both θ and δ. A single decision rule may or may not minimize the risk function for all θ.

Example. Let Y be the mean of a random sample of size 25 from N(θ, 1), and use squared-error loss L[θ, δ(y)] = [θ − δ(y)]². Consider the two rules

δ1(y) = y,   δ2(y) = 0.

R(θ, δ1) = E[(θ − Y)²] = 1/25
R(θ, δ2) = E[(θ − 0)²] = θ²

R(θ, δ2) < R(θ, δ1) for −1/5 < θ < 1/5,
R(θ, δ2) ≥ R(θ, δ1) elsewhere.
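To illustrate, a small Monte Carlo sketch of the two risk functions (the function names and the evaluation points θ = 0.1 and θ = 1 are my choices, picked only to land inside and outside the interval (−1/5, 1/5)):

```python
import random

def empirical_risk(delta, theta, n=25, reps=20000, seed=1):
    """Monte Carlo estimate of R(theta, delta) = E[(theta - delta(Y))^2],
    where Y is the mean of an N(theta, 1) sample of size n."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        y = sum(rng.gauss(theta, 1.0) for _ in range(n)) / n
        total += (theta - delta(y)) ** 2
    return total / reps

d1 = lambda y: y    # delta_1(y) = y
d2 = lambda y: 0.0  # delta_2(y) = 0

r1_inside = empirical_risk(d1, theta=0.1)   # near 1/25 = 0.04
r2_inside = empirical_risk(d2, theta=0.1)   # exactly theta^2 = 0.01
r1_outside = empirical_risk(d1, theta=1.0)  # still near 0.04
r2_outside = empirical_risk(d2, theta=1.0)  # theta^2 = 1
# Inside (-1/5, 1/5) the trivial rule delta_2 wins; outside it loses badly.
```

This makes the point of the example concrete: neither rule dominates the other at every θ.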

Note
1. Restriction to unbiased rules: E[δ(Y)] = θ.
2. Minimize the maximum of the risk function: the minimax criterion.

Examples of loss functions:

1. Squared-error loss function: L[θ, δ] = [θ − δ]²
2. Absolute-error loss function: L[θ, δ] = |θ − δ|
3. Goal post loss function:
   L[θ, δ] = 0 if |θ − δ| ≤ a;  L[θ, δ] = b if |θ − δ| > a.
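The three losses above are one-liners in code; for instance the goal post loss (the tolerance a, penalty b, and test values below are arbitrary):

```python
def goal_post_loss(theta, delta, a, b):
    """Goal post loss: no penalty while the estimate is within a of the
    true value, flat penalty b once it leaves that band."""
    return 0.0 if abs(theta - delta) <= a else b

# An estimate 0.05 off with tolerance a = 0.1 costs nothing;
# an estimate 0.2 off costs the full penalty b = 5.
inside = goal_post_loss(1.0, 1.05, a=0.1, b=5.0)
outside = goal_post_loss(1.0, 1.20, a=0.1, b=5.0)
```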


Ex (P7.1.4.)

Ex (P7.1.5.)


X1, ..., Xn iid f(x; θ). Examples of sufficient statistics:

f(x; θ) = N(θ, 1):  Y = u(X1, ..., Xn) = X̄
f(x; θ) = N(0, θ):  Y = u(X1, ..., Xn) = Σ Xi²
f(x; θ) = b(1, θ):  Y = u(X1, ..., Xn) = Σ Xi

- A sufficient statistic reduces the dimension of the data without losing any information necessary to estimate the parameter θ.
- The conditional distribution of X1, ..., Xn given the sufficient statistic does not depend on the parameter θ.
- A sufficient statistic exhausts all the information about θ that the data have.
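The conditional-distribution characterization can be checked by simulation. A sketch for b(1, θ) with n = 3 and sufficient statistic Σ Xi (the θ values and the rejection-sampling setup are my choices): conditioning on Σ Xi = 1, each of the three arrangements (1,0,0), (0,1,0), (0,0,1) should appear with probability 1/3 no matter what θ is.

```python
import random
from collections import Counter

def conditional_dist(theta, n=3, t=1, reps=200000, seed=7):
    """Empirical conditional distribution of (X1, ..., Xn) given
    sum(X) = t, for X_i iid Bernoulli(theta)."""
    rng = random.Random(seed)
    counts, kept = Counter(), 0
    for _ in range(reps):
        x = tuple(1 if rng.random() < theta else 0 for _ in range(n))
        if sum(x) == t:
            counts[x] += 1
            kept += 1
    return {x: c / kept for x, c in counts.items()}

low = conditional_dist(theta=0.3)
high = conditional_dist(theta=0.7, seed=8)
# Every arrangement with one success has conditional probability near 1/3
# under both values of theta: the conditional law carries no information
# about theta once the sum is known.
```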

Def. Let X1, ..., Xn be iid f(x; θ), and let Y1 = u1(X1, ..., Xn) be a statistic whose p.d.f. is g1(y1; θ). Then Y1 is a sufficient statistic for θ if and only if

f(x1; θ) ··· f(xn; θ) / g1[u1(x1, ..., xn); θ] = H(x1, ..., xn),

where H(x1, ..., xn) does not depend on θ.

Ex (B7.2.2.) X1, ..., Xn iid gamma(2, θ). Show that Y1 = X1 + ··· + Xn is a sufficient statistic for θ using the definition of a sufficient statistic.

It is usually not easy to find a sufficient statistic directly from the distribution of the random sample, but we can with the Factorization Theorem.

Factorization Theorem. Let X1, ..., Xn be iid f(x; θ). The statistic Y1 = u1(X1, ..., Xn) is a sufficient statistic for θ if and only if we can find two nonnegative functions, k1 and k2, such that

f(x1; θ) ··· f(xn; θ) = k1[u1(x1, ..., xn); θ] k2(x1, ..., xn),

where k2(x1, ..., xn) does not depend on θ.
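A concrete instance of the factorization for b(1, θ), taking u1(x) = Σ xi, k1(t; θ) = θ^t (1 − θ)^(n−t), and k2 ≡ 1 (the sample x and the grid of θ values below are arbitrary illustration choices):

```python
import math

def bernoulli_joint(x, theta):
    """Joint p.m.f. f(x1; theta) ... f(xn; theta) for a Bernoulli sample."""
    return math.prod(theta**xi * (1 - theta)**(1 - xi) for xi in x)

def k1(t, theta, n):
    """k1[u1(x); theta] with u1(x) = sum(x)."""
    return theta**t * (1 - theta)**(n - t)

def k2(x):
    """k2(x) is free of theta; here it is identically 1."""
    return 1.0

x = (1, 0, 1, 1, 0)
checks = [math.isclose(bernoulli_joint(x, th), k1(sum(x), th, len(x)) * k2(x))
          for th in (0.2, 0.5, 0.8)]
# The joint p.m.f. factors exactly as k1 * k2 at every theta,
# so sum(X) is sufficient by the Factorization Theorem.
```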

Ex (B7.2.4.) (… known.)

Ex (B7.2.6.) Let Y1 < Y2 < Y3 denote the order statistics of a random sample of size 3 from the distribution with p.d.f.

f(x; θ) = e^(−(x − θ)) I_(θ, ∞)(x).


A function of a sufficient statistic with a single-valued inverse is also sufficient. Let Y1 = u1(X1, ..., Xn) be a sufficient statistic for the parameter θ, and let Z = u(Y1) = u[u1(X1, ..., Xn)] = v(X1, ..., Xn) be a function of the sufficient statistic that does not involve θ and has a single-valued inverse Y1 = w(Z). Then Z is also a sufficient statistic, since

f(x1; θ) ··· f(xn; θ) = k1[u1(x1, ..., xn); θ] k2(x1, ..., xn)
                      = k1{w[v(x1, ..., xn)]; θ} k2(x1, ..., xn).

A sufficient statistic is not unique.

Thm (Rao–Blackwell). Let X1, ..., Xn be iid f(x; θ). Let Y1 = u1(X1, ..., Xn) be a sufficient statistic for the parameter θ and Y2 = u2(X1, ..., Xn) be an unbiased estimator of θ. Then φ(y1) = E(Y2 | y1) defines a statistic φ(Y1), and

1. φ(Y1) is an unbiased estimator of θ,
2. φ(Y1) is a function of the sufficient statistic,
3. Var{φ(Y1)} ≤ Var{Y2}.
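A sketch of this theorem in action for the uniform(0, θ) family (my choice of example; θ, n, and the seed are arbitrary). Here Y2 = 2X1 is unbiased for θ, Y1 = max Xi is sufficient, and conditioning gives φ(Y1) = ((n + 1)/n)·Y1, a standard computation assumed here without proof:

```python
import random
import statistics

def rao_blackwell_uniform(theta=3.0, n=5, reps=30000, seed=11):
    """Compare the crude unbiased estimator Y2 = 2*X1 with its
    Rao-Blackwellization phi(Y1) = (n+1)/n * max(X) under uniform(0, theta)."""
    rng = random.Random(seed)
    y2_vals, phi_vals = [], []
    for _ in range(reps):
        x = [rng.uniform(0, theta) for _ in range(n)]
        y2_vals.append(2 * x[0])                  # Y2 = 2 * X1
        phi_vals.append((n + 1) / n * max(x))     # phi(Y1) = E(Y2 | Y1)
    return (statistics.mean(y2_vals), statistics.variance(y2_vals),
            statistics.mean(phi_vals), statistics.variance(phi_vals))

m2, v2, mphi, vphi = rao_blackwell_uniform()
# Both estimators average near theta = 3, but the conditioned estimator's
# variance is far smaller, as part 3 of the theorem promises.
```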

Thm 7.3.2. Let X1, ..., Xn be iid f(x; θ). If Y1 = u1(X1, ..., Xn) is a sufficient statistic for θ and θ̂ is the unique m.l.e. of θ, then θ̂ is a function of u1(X1, ..., Xn).

Proof.

Ex (B7.3.2.)

Example (P7.3.2.) Let Y1 < ··· < Y5 be the order statistics of a random sample of size 5 from the distribution with p.d.f.

f(x; θ) = 1/θ,  0 < x < θ,  0 < θ < ∞.

Is 2Y3 unbiased?
φ(y5) = E(2Y3 | y5)
Compare the variances of 2Y3 and φ(Y5).

Example (P7.3.3.) X1, X2 is a random sample of size 2 from the distribution with p.d.f.

f(x; θ) = (1/θ) exp(−x/θ),  0 < x < ∞.

Is Y2 unbiased?
φ(y1) = E(Y2 | y1)
Compare the variances of Y2 and φ(Y1).


X1, ..., Xn iid f(x; θ). The p.d.f. of the sufficient statistic Y1 is

g1(y1; θ) = ?

E[u(Y1)] = 0 if and only if ? Why?

Def 7.4.1. Let the random variable Z have a p.d.f. from the family {h(z; θ): θ ∈ Ω}. If the condition E[u(Z)] = 0 for every θ ∈ Ω requires that u(z) = 0 except on a set of points that has probability zero for each p.d.f. h(z; θ), θ ∈ Ω, then the family {h(z; θ): θ ∈ Ω} is called a complete family of probability density functions.

Ex (B7.4.1.) Let Z have a p.d.f. that is a member of the family {h(z; θ): 0 < θ < ∞}, where

h(z; θ) = (1/θ) exp(−z/θ),  0 < z < ∞.

Thm (Lehmann–Scheffé). Let X1, ..., Xn be iid f(x; θ), θ ∈ Ω, with sufficient statistic Y1 = u1(X1, ..., Xn), and suppose the family of p.d.f.s {fY1(y1; θ): θ ∈ Ω} is complete. If there is a function of Y1 that is an unbiased estimator of θ, then this function of Y1 is the unique minimum variance unbiased estimator (MVUE) of θ.


Ex (P7.4.3.)

Ex (P7.4.6.)

