
EXAMINATIONS

(Semester 2: AY 2011-2012)

April/May 2012

1. This examination paper contains four (4) questions and comprises seven (7) printed pages.

2. Candidates must answer ALL questions on the paper.

3. Each question carries 20 marks. The total mark for the paper is 80.

4. Calculators may not be used.

5. Statistical tables will not be available.

6. This is a closed book exam.

ST5223

1.

Consider the linear regression model

$$Y_i = \beta_0 + \sum_{j=1}^{p-1} \beta_j x_{ij} + \epsilon_i, \qquad i \in \{1, \dots, n\}$$

where the $\epsilon_i$ are i.i.d. zero-mean random variables and $p > 1$. For simplicity we will assume $\epsilon_i \overset{\text{i.i.d.}}{\sim} N(0, \sigma^2)$, with $N(0, \sigma^2)$ the univariate normal distribution with zero mean and variance $\sigma^2$, and that $p < n$.

(i) Consider the estimation of $\beta = \beta_{0:p-1}$. Show that maximizing the log-likelihood of the normal linear model and minimizing the residual sum of squares lead to the same estimators. [3 Marks].

(ii) Using vector differentiation, derive the least squares estimator. Show that the estimator is unbiased. Note: you do not need to use any QR decompositions. [4 Marks].

(iii) State, but do not prove, the Gauss-Markov theorem. [3 Marks].

(iv) Let $\tilde{\beta} = BY$, where $B$ is a deterministic $p \times n$ matrix (which could depend upon $X$), be any unbiased estimator of $\beta$. Show that $BX = I_p$ and establish that

$$(B - S^{-1}X')(B - S^{-1}X')' + S^{-1} = BB'$$

where $S = X'X$. [5 Marks].

(v) Show that the variance of $\tilde{\beta}$ in (iv) is minimized (that is, the diagonal elements of the covariance matrix) when $\tilde{\beta} = (X'X)^{-1}X'Y$. [5 Marks].
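As a study aid (my own sketch, not part of the exam paper; the data and variable names are invented), the closed-form estimator from parts (ii) and (v) can be checked numerically: $(X'X)^{-1}X'Y$ agrees with a generic least squares solver and achieves a smaller residual sum of squares than nearby perturbations.

```python
import numpy as np

# Simulate a small instance of the model Y = X beta + eps
rng = np.random.default_rng(0)
n, p = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])  # intercept column
beta = np.array([1.0, -2.0, 0.5])
Y = X @ beta + rng.normal(size=n)

S = X.T @ X                                  # S = X'X
beta_hat = np.linalg.solve(S, X.T @ Y)       # (X'X)^{-1} X'Y

def rss(b):
    """Residual sum of squares at b."""
    r = Y - X @ b
    return r @ r

# Agrees with numpy's generic least squares solver
assert np.allclose(beta_hat, np.linalg.lstsq(X, Y, rcond=None)[0])
# Any small perturbation of beta_hat increases the RSS
for _ in range(100):
    assert rss(beta_hat) <= rss(beta_hat + 0.01 * rng.normal(size=p))
```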


2.

This question again concerns the normal linear model, under study in Question 1, which in matrix and vector notation is:

$$Y = X\beta + \epsilon$$

with $\epsilon \sim N_n(0, \sigma^2 I_n)$. Recall that $N_n(0, \sigma^2 I_n)$ is the normal distribution with mean vector $0$ and covariance matrix $\sigma^2 I_n$, $I_n$ the $n \times n$ identity matrix.

(i) Suppose that there is an orthogonal matrix $Q$ such that the $n \times p$ matrix $R = QX$ is upper-triangular ($r_{ij} = 0$ for $i > j$). Show that

$$(Y - X\beta)'(Y - X\beta) = (QY - R\beta)'(QY - R\beta)$$

and hence that minimizing the residual sum of squares is the same as minimizing $\|QY - R\beta\|^2$. [4 Marks].

(ii) By partitioning $R$ into its first $p$ rows ($U$; you may assume that this matrix has rank $p$) and its zero $n - p$ rows, and partitioning $Q$ in a similar manner into $V$ and $W$, show that one must take $U\beta = VY$ to minimize the residual sum of squares. [4 Marks].

(iii) For the linear model under study, show that:

$$E[\hat{\beta}] = \beta, \qquad \operatorname{Var}[\hat{\beta}] = \sigma^2 (X'X)^{-1}$$

where $U\hat{\beta} = VY$. [6 Marks].

(iv) For the linear model under study, show that:

$$E[\tilde{\sigma}^2] = \sigma^2$$

where $\tilde{\sigma}^2 = \operatorname{RSS}(\hat{\beta})/(n - p)$ and $\operatorname{RSS}(\hat{\beta}) = \|WY\|^2$. [6 Marks].
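The QR route in this question can be illustrated numerically (my own sketch, not part of the exam; all data are simulated). Partitioning $Q$ and $R$ as in part (ii), solving $U\hat{\beta} = VY$ recovers the least squares estimator, and $\|WY\|^2$ equals the residual sum of squares.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 40, 4
X = rng.normal(size=(n, p))
Y = X @ np.array([2.0, -1.0, 0.5, 3.0]) + rng.normal(size=n)

# numpy factorizes X = Q0 R0 with Q0 orthogonal; the question's Q is Q0'
Q0, _ = np.linalg.qr(X, mode="complete")
Q = Q0.T
R = Q @ X                      # n x p, upper-triangular
U = R[:p, :]                   # first p rows of R (rank p here)
V = Q[:p, :]                   # first p rows of Q
W = Q[p:, :]                   # remaining n - p rows of Q

beta_hat = np.linalg.solve(U, V @ Y)          # solve U beta = V Y
sigma2_tilde = (W @ Y) @ (W @ Y) / (n - p)    # RSS(beta_hat) / (n - p)

resid = Y - X @ beta_hat
# U beta_hat = V Y gives the least squares estimator ...
assert np.allclose(beta_hat, np.linalg.lstsq(X, Y, rcond=None)[0])
# ... and the residual sum of squares equals ||W Y||^2
assert np.allclose(resid @ resid, (W @ Y) @ (W @ Y))
```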


3.

This question again concerns the normal linear model, under study in Question 1, which in matrix and vector notation is:

$$Y = X\beta + \epsilon.$$

(i) Consider a Bayesian approach to estimation. It is assumed that the prior is:

$$\beta \mid \sigma^2 \sim N_p(m, \sigma^2 V), \qquad \sigma^2 \sim IG(a, b)$$

where $IG(a, b)$ is the inverse Gamma distribution with probability density function:

$$p(\sigma^2) = \frac{b^a}{\Gamma(a)} (\sigma^2)^{-(a+1)} e^{-b/\sigma^2}$$

with $a, b > 0$ and $\Gamma(\cdot)$ the gamma function. The joint distribution of $\beta, \sigma^2$ is called the normal-inverse gamma distribution, denoted $NIG(m, V, a, b)$.

(a) Show that the posterior density function of $\beta, \sigma^2$ is of the form:

$$\pi(\beta, \sigma^2 \mid Y, X) \propto (\sigma^2)^{-(a^* + p/2 + 1)} \exp\left\{ -\frac{1}{2\sigma^2}\left[ (\beta - m^*)'(V^*)^{-1}(\beta - m^*) + 2b^* \right] \right\}$$

where

$$m^* = V^*(V^{-1}m + X'Y)$$
$$V^* = (V^{-1} + X'X)^{-1}$$
$$a^* = a + n/2$$
$$b^* = b + \left[ m'V^{-1}m + Y'Y - (m^*)'(V^*)^{-1}m^* \right]/2$$

where all inverses are assumed to exist. Hence identify the joint posterior distribution of $\beta, \sigma^2$. You may assume, without proof, that

$$(Y - X\beta)'(Y - X\beta) + (\beta - m)'V^{-1}(\beta - m) + 2b = (\beta - m^*)'(V^*)^{-1}(\beta - m^*) + 2b^*.$$

[5 Marks].

(b) Show that the marginal posterior density of $\beta$ is:

$$\pi(\beta \mid Y, X) = \frac{\Gamma\!\left(\tfrac{1}{2}(2a^* + p)\right)}{\Gamma(a^*)\, \pi^{p/2}\, (2b^*)^{p/2}}\, |V^*|^{-1/2} \left[ 1 + (\beta - m^*)'(V^*)^{-1}(\beta - m^*)/2b^* \right]^{-(2a^* + p)/2}.$$

Comment on the tails of the posterior distribution, relative to that of the prior. [5 Marks].
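A numerical sketch of the conjugate update in (i)(a) (my own illustration, not part of the paper; `m_s`, `V_s`, `a_s`, `b_s` are my names for $m^*, V^*, a^*, b^*$, and the data are simulated). It also verifies, at an arbitrary $\beta$, the completion-of-squares identity quoted in the question.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 30, 2
X = rng.normal(size=(n, p))
Y = X @ np.array([1.0, -1.0]) + rng.normal(size=n)

# NIG(m, V, a, b) prior hyperparameters
m, V, a, b = np.zeros(p), np.eye(p), 2.0, 2.0
Vinv = np.linalg.inv(V)

# Posterior hyperparameters, following the update formulas in (i)(a)
V_s = np.linalg.inv(Vinv + X.T @ X)
m_s = V_s @ (Vinv @ m + X.T @ Y)
a_s = a + n / 2
b_s = b + (m @ Vinv @ m + Y @ Y - m_s @ np.linalg.inv(V_s) @ m_s) / 2

# Check the quoted identity at an arbitrary beta
beta = rng.normal(size=p)
lhs = (Y - X @ beta) @ (Y - X @ beta) + (beta - m) @ Vinv @ (beta - m) + 2 * b
rhs = (beta - m_s) @ np.linalg.inv(V_s) @ (beta - m_s) + 2 * b_s
assert np.isclose(lhs, rhs)
```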


(ii) An often used method for regularization is ridge regression. That is, when there is no well-defined solution of the normal equations in linear regression, $(X'X)\beta = X'y$. In this scenario, it is sought to minimize, with respect to (w.r.t.) $\beta$, the quantity

$$\|Y - X\beta\|^2 + \lambda \|\beta\|^2.$$

(a) Show that, for an appropriate range of $\lambda$, the ridge regression estimator is:

$$\hat{\beta} = (X'X + \lambda I_p)^{-1} X'Y.$$

[5 Marks].

(b) Bayesian maximum a posteriori (MAP) estimation consists of picking a parameter which maximizes the posterior density (assuming the maximum exists). Identify a Bayesian interpretation of ridge regression via MAP estimation. [5 Marks].
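A sketch for part (ii) (my own, not part of the exam; data simulated). One Bayesian reading, under the assumption $\beta \sim N_p(0, (\sigma^2/\lambda) I_p)$ with $\sigma^2$ fixed: the log posterior is, up to constants, $-(\|Y - X\beta\|^2 + \lambda\|\beta\|^2)/(2\sigma^2)$, so the MAP estimate coincides with the ridge solution.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 20, 5
X = rng.normal(size=(n, p))
Y = rng.normal(size=n)
lam = 0.7  # ridge penalty lambda > 0

# Closed-form ridge estimator (X'X + lam I)^{-1} X'Y
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ Y)

def objective(b):
    """Penalized residual sum of squares ||Y - Xb||^2 + lam ||b||^2."""
    r = Y - X @ b
    return r @ r + lam * (b @ b)

# The closed form beats random perturbations of itself
for _ in range(100):
    assert objective(beta_ridge) <= objective(beta_ridge + 0.01 * rng.normal(size=p))
```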


4.

(i) Consider observations $(y_1, x_{1,0:p-1}), \dots, (y_n, x_{n,0:p-1})$. Describe the three elements of generalized linear models that have been studied in this course. [5 Marks].

(ii) The following question concerns a dynamic probit regression model. Here one observes $(y_1, x_1), \dots, (y_n, x_n), \dots$ sequentially in time with $(y_n, x_n) \in \{0, 1\} \times \mathbb{R}$. The proposed model is:

$$Y_n = I_{(0,\infty)}(Z_n)$$
$$Z_n = \beta_n x_n + \epsilon_n \qquad (1)$$
$$\beta_n = \beta_{n-1} + \nu_n \qquad (2)$$

where $\epsilon_n \overset{\text{i.i.d.}}{\sim} N(0, 1)$ and $\nu_n \overset{\text{i.i.d.}}{\sim} N(0, 1)$, independently of each other, with $\beta_0 = 0$.

(a) Show that

$$P(Y_n = 1 \mid \beta_n) = \Phi(\beta_n x_n)$$

where $\Phi(\cdot)$ is the standard normal cumulative distribution function. [2 Marks].

(b) Consider only equations (1)-(2) that define the model. Show that, for $n \geq 2$,

$$p(\beta_n \mid z_{1:n}) = \frac{p(z_n \mid \beta_n)\, p(\beta_n \mid z_{1:n-1})}{\int p(z_n \mid \beta_n)\, p(\beta_n \mid z_{1:n-1})\, d\beta_n}$$

where

$$p(\beta_n \mid z_{1:n-1}) = \int p(\beta_n \mid \beta_{n-1})\, p(\beta_{n-1} \mid z_{1:n-1})\, d\beta_{n-1}.$$

[4 Marks].

(c) Again, considering only (1)-(2), show that:

$$\beta_1 \mid z_1 \sim N\!\left( \frac{z_1 x_1}{1 + x_1^2},\ \frac{1}{1 + x_1^2} \right).$$

In addition, assuming that

$$\beta_{n-1} \mid z_{1:n-1} \sim N(\mu_{n-1}, \sigma_{n-1}^2),$$

show that:

$$\beta_n \mid z_{1:n-1} \sim N(\mu_{n|n-1}, \sigma_{n|n-1}^2), \qquad \beta_n \mid z_{1:n} \sim N(\mu_n, \sigma_n^2)$$

for some $\mu_{n|n-1}, \sigma_{n|n-1}^2, \mu_n, \sigma_n^2$ to be determined. [7 Marks].
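The recursion asked for in (c) is a scalar Kalman filter, and can be sketched numerically (my own illustration, not part of the exam). It assumes, consistently with the stated $\beta_1 \mid z_1$ distribution, that $\epsilon_n$ and $\nu_n$ are i.i.d. $N(0,1)$ and $\beta_0 = 0$; given $z_{1:n}$, each step predicts ($\mu_{n|n-1} = \mu_{n-1}$, $\sigma^2_{n|n-1} = \sigma^2_{n-1} + 1$) and then updates by adding precision $x_n^2$.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 200
x = rng.normal(size=N)
# Simulate the state and latent z's from equations (1)-(2)
beta = np.cumsum(rng.normal(size=N))   # beta_n = beta_{n-1} + nu_n, beta_0 = 0
z = beta * x + rng.normal(size=N)      # z_n = beta_n x_n + eps_n

mu, s2 = 0.0, 0.0                      # beta_0 = 0 exactly (assumed)
for n in range(N):
    # Predict: beta_n | z_{1:n-1} ~ N(mu_pred, s2_pred)
    mu_pred, s2_pred = mu, s2 + 1.0
    # Update with z_n: add precision x_n^2, shift the mean
    s2 = 1.0 / (1.0 / s2_pred + x[n] ** 2)
    mu = s2 * (mu_pred / s2_pred + x[n] * z[n])
    if n == 0:
        # First step matches the stated beta_1 | z_1 distribution
        assert np.isclose(mu, x[0] * z[0] / (1 + x[0] ** 2))
        assert np.isclose(s2, 1 / (1 + x[0] ** 2))
```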


(d) It is of interest to infer the regression coefficients as data arrive. Given the above results, and assuming one can perform expectations w.r.t. $p(z_{1:n} \mid y_{1:n})$, suggest how one may calculate the expectation

$$E[\beta_n \mid y_{1:n}].$$

Note that, as one cannot analytically perform expectations w.r.t. $p(z_{1:n} \mid y_{1:n})$, you may leave your answer in terms of an integral. [2 Marks].
