
Exercise 1
B is true.

Exercise 2
Denote the marginal distribution of X by g(x). The marginal distribution of X is:
g(0) = f(0,-1) + f(0,0) + f(0,1) = .13 + .01 + .04 = .18
g(1) = f(1,-1) + f(1,0) + f(1,1) = .12 + .30 + .08 = .50
g(2) = f(2,-1) + f(2,0) + f(2,1) = .14 + .06 + .12 = .32
Hence B is the correct answer.

Exercise 3
Denote the joint distribution of X and Y by f(x,y), the marginal distribution of X by fX(x), and the marginal distribution of Y by fY(y). By definition, X and Y are independent if and only if f(x,y) = fX(x)fY(y) for all x,y. We calculate the marginal distribution of Y to be fY(-1) = .39, fY(0) = .37 and fY(1) = .24. From this we see that, for example, fX(0)fY(-1) = (.18)(.39) = .0702 ≠ .13 = f(0,-1). Thus we have found an (x,y) such that f(x,y) ≠ fX(x)fY(y); hence X and Y are not independent.

Exercise 4
A is true.

Exercise 5
1. The distribution of X1 - X2 is normal with mean E[X1 - X2] = E[X1] - E[X2] = 12 - 10 = 2 and, since X1 and X2 are independent, variance V[X1 - X2] = (1)²V[X1] + (-1)²V[X2] = 9 + 16 = 25.
2. P(X1 - X2 < 4) = Φ((4 - 2)/√25) = Φ(0.4).
P(X1 > X2) = P(X1 - X2 > 0) = 1 - P(X1 - X2 < 0) = 1 - Φ((0 - 2)/√25) = 1 - Φ(-0.40) = Φ(0.40) ≈ 0.66.

Exercise 6
E is the correct answer.

Exercise 7
E[2X + 3Y] = 2E[X] + 3E[Y] = 2(14) + 3(5) = 43.
V[2X - Y] = (2)²V[X] + (-1)²V[Y] + 2(2)(-1)Cov(X,Y) = 4(16) + 100 - 4(-20) = 244.

Exercise 8
D is the correct answer.

Exercise 9
1. E[Y] = E[2X1 + X2] = 2E[X1] + E[X2] = 2(5) + 10 = 20 and E[Z] = 2(10) + 15 = 35.
V[Y] = (2)²V[X1] + V[X2] + 2(2)(1)Cov(X1,X2) = 4(9) + 16 = 52.
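As a quick numerical check of Exercises 2, 3, and 5, the sketch below rebuilds the marginals from the joint table, tests the independence condition, and evaluates the normal probability. The joint probabilities are the ones given above, and Φ is built from `math.erf` rather than a statistics library.

```python
from math import erf, sqrt

# Joint distribution f(x, y) from Exercises 2-3 (values as given above).
f = {(0, -1): .13, (0, 0): .01, (0, 1): .04,
     (1, -1): .12, (1, 0): .30, (1, 1): .08,
     (2, -1): .14, (2, 0): .06, (2, 1): .12}

# Exercise 2: marginal distribution of X.
gX = {x: sum(p for (xi, _), p in f.items() if xi == x) for x in (0, 1, 2)}
print([round(gX[x], 2) for x in (0, 1, 2)])  # [0.18, 0.5, 0.32]

# Exercise 3: X and Y are independent iff f(x, y) = fX(x) fY(y) for all x, y.
fY = {y: sum(p for (_, yi), p in f.items() if yi == y) for y in (-1, 0, 1)}
independent = all(abs(f[x, y] - gX[x] * fY[y]) < 1e-12 for (x, y) in f)
print(independent)  # False: gX[0]*fY[-1] = 0.0702, but f(0,-1) = 0.13

# Exercise 5: standard normal CDF via the error function.
def Phi(z):
    return 0.5 * (1 + erf(z / sqrt(2)))

print(round(Phi(0.40), 2))  # 0.66 = P(X1 > X2)
```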

V[Z] = (2)²V[X2] + V[X3] + 2(2)(1)Cov(X2,X3) = 4(16) + 25 + 4(2) = 97.
2. Cov(Y,Z) = Cov(2X1 + X2, 2X2 + X3) = E[(2X1 + X2)(2X2 + X3)] - E[2X1 + X2]E[2X2 + X3]
= E[4X1X2 + 2X1X3 + 2X2² + X2X3] - {E[2X1]E[2X2] + E[2X1]E[X3] + E[X2]E[2X2] + E[X2]E[X3]}
= 4{E[X1X2] - E[X1]E[X2]} + 2{E[X1X3] - E[X1]E[X3]} + 2{E[X2²] - E[X2]E[X2]} + {E[X2X3] - E[X2]E[X3]}
= 4Cov(X1,X2) + 2Cov(X1,X3) + 2V[X2] + Cov(X2,X3) = 2(16) + 2 = 34.
3. The correlation coefficient:
Corr(Y,Z) = Cov(Y,Z)/√(V(Y)V(Z)) = 34/√((52)(97)) ≈ .4787.

Exercise 10

1. E[Ȳ] = E[(1/4)(Y1 + Y2 + Y3 + Y4)] = (1/4)(E[Y1] + E[Y2] + E[Y3] + E[Y4]) = (1/4)4μ = μ.
V[Ȳ] = V[(1/4)(Y1 + Y2 + Y3 + Y4)] = (1/16)V[Y1 + Y2 + Y3 + Y4] = (1/16)(V[Y1] + V[Y2] + V[Y3] + V[Y4]) = (1/16)4σ² = σ²/4, since the Yi are independent.
2. E[W] = E[(1/8)Y1 + (1/8)Y2 + (1/4)Y3 + (1/2)Y4] = (1/8)E[Y1] + (1/8)E[Y2] + (1/4)E[Y3] + (1/2)E[Y4] = (1/8 + 1/8 + 1/4 + 1/2)μ = μ, hence W is an unbiased estimator of μ.
V[W] = (1/64)V[Y1] + (1/64)V[Y2] + (1/16)V[Y3] + (1/4)V[Y4] = ((1 + 1 + 4 + 16)/64)σ² = (22/64)σ².
3. Since V[W] = 22σ²/64 > 16σ²/64 = V[Ȳ], Ȳ has smaller variance than W, hence we prefer Ȳ.
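The arithmetic in Exercises 9 and 10 is easy to verify mechanically. The sketch below assumes the moments implied by the numbers above (V[X1] = 9, V[X2] = 16, V[X3] = 25, Cov(X2,X3) = 2, and the remaining covariances zero) and then compares the variance factors of the two estimators from Exercise 10:

```python
from fractions import Fraction as F
from math import sqrt

# Exercise 9: moments implied by the solution (assumed):
# V[X1]=9, V[X2]=16, V[X3]=25, Cov(X2,X3)=2, Cov(X1,X2)=Cov(X1,X3)=0.
v1, v2, v3 = 9, 16, 25
c12, c13, c23 = 0, 0, 2

var_Y = 4 * v1 + v2 + 4 * c12              # V[2X1 + X2] = 52
var_Z = 4 * v2 + v3 + 4 * c23              # V[2X2 + X3] = 97
cov_YZ = 4 * c12 + 2 * c13 + 2 * v2 + c23  # bilinearity of Cov: 34
corr = cov_YZ / sqrt(var_Y * var_Z)
print(var_Y, var_Z, cov_YZ, round(corr, 4))  # 52 97 34 0.4787

# Exercise 10: V[sum w_i Y_i] = (sum w_i^2) * sigma^2 for independent Y_i.
def var_factor(weights):
    return sum(w * w for w in weights)

w_bar = [F(1, 4)] * 4                          # sample average Ybar
w_W = [F(1, 8), F(1, 8), F(1, 4), F(1, 2)]     # estimator W
print(var_factor(w_bar), var_factor(w_W))      # 1/4 vs 11/32 (= 16/64 vs 22/64)
print(sum(w_W) == 1)                           # True: W is unbiased
```

Exact rationals (`fractions.Fraction`) are used for the weights so the 16/64 vs 22/64 comparison is free of floating-point noise.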

Exercise 11 (Estimators; this is part of Exercise C.2 in Wooldridge)
Let Y1, Y2, Y3, ..., Yn be n pairwise independent, identically distributed random variables with common mean μ and common variance σ². Let Ȳ denote the sample average.
1. Define the class of linear estimators of μ by Wa = a1Y1 + a2Y2 + ... + anYn, where the ai are constants. What restriction on the ai is needed for Wa to be an unbiased estimator of μ?
2. Find V(Wa).
3. Suppose that the restriction from part 1 is satisfied. Find the condition that needs to hold in order for Ȳ to be the more efficient estimator (that is, to have the smaller variance) of Ȳ and Wa.

1. E[Wa] = E[a1Y1 + ... + anYn] = a1E[Y1] + ... + anE[Yn] = (a1 + ... + an)μ, hence the restriction required for unbiasedness is that a1 + ... + an = 1.

2. V[Wa] = a1²V[Y1] + ... + an²V[Yn] = (a1² + ... + an²)σ².

3. In order for Ȳ to have the smaller variance of Ȳ and Wa, the following must hold:
V[Ȳ] ≤ V[Wa], that is, (1/n)σ² ≤ (a1² + ... + an²)σ², that is, 1/n ≤ a1² + ... + an².
Further to this part of the exercise (this was not asked in the exercise), you can see in Wooldridge Exercise C.2 (iii) that (1/n)(a1 + ... + an)² ≤ a1² + ... + an² always holds. Since we have assumed in this question that the condition for Wa to be unbiased holds, we have a1 + ... + an = 1, hence the inequality from Wooldridge becomes 1/n ≤ a1² + ... + an², which is exactly the condition we found for Ȳ to have smaller variance than Wa. Hence, what you have shown is that among all weighted averages that produce an unbiased estimator of μ, the usual sample average Ȳ is the most efficient.
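The inequality from part 3 can be illustrated numerically: for any weights summing to one, the sum of squared weights is at least 1/n, with equality exactly at the equal weights of the sample average. The weight vectors below are arbitrary examples chosen for this sketch.

```python
# For any weights a_i with sum(a_i) = 1, sum(a_i^2) >= 1/n, with equality
# exactly at the equal weights a_i = 1/n used by the sample average.
n = 5
candidates = [
    [1 / n] * n,                     # the sample average Ybar
    [0.5, 0.2, 0.1, 0.1, 0.1],       # arbitrary alternative unbiased weights
    [0.4, 0.3, 0.2, 0.05, 0.05],
]
for a in candidates:
    assert abs(sum(a) - 1) < 1e-12   # the unbiasedness restriction from part 1
    print(round(sum(x * x for x in a), 4), ">= 1/n =", 1 / n)
```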

Exercise 12

1. The 95% confidence interval is x̄ ± t_{n-1, (1-0.95)/2} √(S²/n) = x̄ ± t_{99, 0.025} √(S²/n). Since t_{99, 0.025} ≈ 1.984 (see the table for the t distribution), we get that the confidence interval is
[0.2495 - 1.984√(0.0009/100), 0.2495 + 1.984√(0.0009/100)] = [0.2435, 0.2555].
2. The half-width of the confidence interval is t_{99, 0.025} √(S²/n), so it is determined by the estimated variance S² and the sample size n. Hence, one way to make the confidence interval smaller is to increase the sample size.
3. Since we have found the 95% confidence interval, we can test the proposed null hypothesis at a 5% significance level. Since 0.25 is contained in the confidence interval, we cannot reject the null hypothesis that μ = 0.25.
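The interval in part 1 can be reproduced in a few lines, taking the tabled critical value t_{99, 0.025} ≈ 1.984 as given rather than recomputing it:

```python
from math import sqrt

# Exercise 12 inputs: xbar = 0.2495, S^2 = 0.0009, n = 100; the tabled
# critical value t_{99, 0.025} ~ 1.984 is taken as given in the solution.
xbar, S2, n = 0.2495, 0.0009, 100
t_crit = 1.984

half_width = t_crit * sqrt(S2 / n)
ci = (round(xbar - half_width, 4), round(xbar + half_width, 4))
print(ci)  # (0.2435, 0.2555)

# Part 3: 0.25 lies inside the interval, so H0: mu = 0.25 is not rejected at 5%.
print(ci[0] <= 0.25 <= ci[1])  # True
```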

The hypotheses are H0: μ = 0 against H1: μ > 0.

3. t = x̄/√(S²/n) = 32.8/√(466.6/900) = 45.5536.
The critical value in N(0,1) at the 5% level for a one-sided test is approximately 1.645, so since 45.5536 > 1.645 we strongly reject the null at the 5% level. The critical value at the 1% level for a one-sided test is approximately 2.33, so we also strongly reject at the 1% level.
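Assuming the figures used above (x̄ = 32.8, S² = 466.6, n = 900), the test statistic and both rejection decisions can be reproduced as follows:

```python
from math import sqrt

# Figures used above: xbar = 32.8, S^2 = 466.6, n = 900; H0: mu = 0 vs H1: mu > 0.
xbar, S2, n = 32.8, 466.6, 900
t = xbar / sqrt(S2 / n)
print(round(t, 4))  # 45.5536

# One-sided critical values of N(0,1) at the 5% and 1% levels.
for alpha, z in [(0.05, 1.645), (0.01, 2.33)]:
    print(f"reject at {alpha:.0%}:", t > z)  # True at both levels
```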
