
BIRKBECK

(University of London)

MSc EXAMINATION FOR INTERNAL STUDENTS

MSc IN FINANCIAL ENGINEERING

DEPARTMENT OF ECONOMICS, MATHEMATICS AND STATISTICS

FINANCIAL ECONOMETRICS

AND

QUANTITATIVE METHODS

EMMS012S7

June 7 2012, 10:00 – 13:15 (includes 15 minutes reading time)

The paper is divided into two sections. There are four questions in each
section.

Answer TWO questions from Section A and TWO questions from Section
B.

All questions carry the same weight.

© Birkbeck College 2012
SECTION A (Answer TWO questions from this section)

1. Consider the linear regression model

y = Xβ + u,

where y is an n × 1 vector of observations on the dependent variable, X is a full-rank n × k matrix of observations on k non-stochastic explanatory variables, β is a k × 1 vector of unknown coefficients, and u is an n × 1 vector of unobservable random disturbances such that E(u) = 0 and E(uu′) = σ²I_n, σ² being an unknown positive constant and I_n the n × n identity matrix. Let β̂ be the ordinary least squares (OLS) estimator of β.

(a) Show that β̂ is the best linear unbiased estimator of β. [7 marks]

(b) Stating any further assumptions you may require, show that β̂ is consistent for β. [4 marks]

(c) If û = y − Xβ̂, show that σ̂² = û′û/(n − k) is an unbiased and consistent estimator of σ². [7 marks]

(d) Suppose X is stochastic and plim_{n→∞}(n⁻¹X′u) ≠ 0. Let W be a full-rank n × p matrix of instruments (k ≤ p < n) such that plim_{n→∞}(n⁻¹W′u) = 0. Define X̂ = P_W X, where P_W = W(W′W)⁻¹W′ (and note that P_W² = P_W′ = P_W). Stating any additional assumptions you make, show that the OLS estimator in the regression of y on X̂, denoted β̃ = (X̂′X̂)⁻¹X̂′y, is a consistent estimator of β. [7 marks]
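The consistency claim in (d) can be checked numerically. The sketch below (an illustration, not part of the question) simulates one endogenous regressor with a single instrument, then compares plain OLS with the projection-based estimator β̃ = (X̂′X̂)⁻¹X̂′y; the coefficient value, sample size, and variable construction are all assumptions made for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000  # large n so sample estimates approximate probability limits

# Endogenous regressor x correlated with the error u, plus an instrument w
# correlated with x but independent of u (assumed data-generating process).
w = rng.normal(size=n)
u = rng.normal(size=n)
x = 0.8 * w + 0.5 * u + rng.normal(size=n)  # plim(n^-1 X'u) != 0
beta = 2.0
y = beta * x + u

X = x.reshape(-1, 1)
W = w.reshape(-1, 1)

# OLS: (X'X)^-1 X'y -- inconsistent under endogeneity
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)[0]

# IV via the projection X_hat = P_W X, then OLS of y on X_hat (2SLS)
Xhat = W @ np.linalg.solve(W.T @ W, W.T @ X)
beta_iv = np.linalg.solve(Xhat.T @ Xhat, Xhat.T @ y)[0]

print(beta_ols, beta_iv)  # beta_iv should be close to the true beta = 2
```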

2. Consider the linear regression model

y = Xβ + u,

where y is an n × 1 vector of observations on the dependent variable, X is a full-rank n × k matrix of observations on k non-stochastic explanatory variables, β is a k × 1 vector of unknown coefficients, and u is an n × 1 vector of unobservable disturbances such that E(u) = 0 and E(uu′) = σ²Ω, with Ω being a known symmetric, positive definite matrix and σ² an unknown positive constant. Assume that: (i) (1/n)X′Ω⁻¹X → Q as n → ∞, where Q is a finite, positive definite matrix; (ii) (1/√n)X′Ω⁻¹u →d N(0, σ²Q) as n → ∞ (with →d denoting convergence in distribution).

(a) Let β̂ be the generalised least squares (GLS) estimator of β. Show that β̂ can be obtained by minimising the GLS criterion function Q(β) = (y − Xβ)′Ω⁻¹(y − Xβ) with respect to β. [5 marks]

(b) Show that β̂ is unbiased and obtain var(β̂). How does var(β̂) compare to var(β̃), where β̃ = (X′X)⁻¹X′y? [5 marks]

(c) Show that β̂ is consistent for β. [4 marks]

(d) Obtain the limiting distribution of √n(β̂ − β) as n → ∞. [5 marks]

(e) Suppose we wish to test the hypothesis H₀: φ(β) = 0 against H₁: φ(β) ≠ 0, where φ is a q × 1 vector function (with q < k). The function φ(β) has continuous first derivatives collected into the full-rank q × k matrix F(β) = ∂φ(β)/∂β′. Explain how to construct a test of H₀ using the Wald principle and obtain the asymptotic distribution of your test statistic under H₀. [6 marks]
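One way to make parts (a)–(b) concrete: for known Ω, the GLS estimator (X′Ω⁻¹X)⁻¹X′Ω⁻¹y coincides with OLS applied after premultiplying the data by Ω^(−1/2). The sketch below assumes a diagonal Ω purely for illustration; all parameter values are made up for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta = np.array([1.0, -0.5])

# Known heteroskedastic Omega (diagonal here only to keep the sketch simple)
omega_diag = np.exp(rng.uniform(-1, 1, size=n))
Omega_inv = np.diag(1.0 / omega_diag)
u = rng.normal(size=n) * np.sqrt(omega_diag)
y = X @ beta + u

# GLS: beta_hat = (X' Omega^-1 X)^-1 X' Omega^-1 y
beta_gls = np.linalg.solve(X.T @ Omega_inv @ X, X.T @ Omega_inv @ y)

# Equivalent: OLS after scaling each row by omega_i^(-1/2) (weighted LS)
w = 1.0 / np.sqrt(omega_diag)
Xs, ys = X * w[:, None], y * w
beta_wls = np.linalg.solve(Xs.T @ Xs, Xs.T @ ys)

print(beta_gls, beta_wls)  # the two estimates coincide
```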

3. Suppose (X_1, ..., X_n) is a random sample of size n from the distribution with probability density function

f(x) = (1/θ) exp(−x/θ),  x > 0,

where θ > 0 is an unknown parameter. Note that ∫₀^∞ x f(x) dx = θ and ∫₀^∞ x² f(x) dx = 2θ².

(a) Derive the maximum likelihood estimator (MLE) θ̂ of θ. [4 marks]

(b) Show that θ̂ is an unbiased, consistent and fully efficient estimator of θ. [6 marks]

(c) Obtain the limiting distribution of √n(θ̂ − θ) as n → ∞. [3 marks]

(d) What is the MLE of the parameter β = exp(θ)? Obtain the limiting distribution of √n(β̂ − β) as n → ∞. [6 marks]

(e) Derive the Wald and likelihood-ratio statistics for testing the null hypothesis H₀: θ = 1 against the alternative H₁: θ ≠ 1. What is the asymptotic distribution of these statistics under H₀? [6 marks]
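For this model the MLE in (a) is the sample mean, and part (e) can be illustrated by computing the Wald and likelihood-ratio statistics on data simulated under H₀. The sample size, seed, and the Fisher-information expression I(θ) = n/θ² used below are stated assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
x = rng.exponential(scale=1.0, size=n)  # data generated under H0: theta = 1

# MLE of theta in f(x) = (1/theta) exp(-x/theta) is the sample mean
theta_hat = x.mean()

def loglik(theta):
    # log-likelihood of the exponential sample at a given theta
    return -n * np.log(theta) - x.sum() / theta

# Wald statistic for H0: theta = 1, using estimated information n / theta_hat^2
wald = n * (theta_hat - 1.0) ** 2 / theta_hat**2

# Likelihood-ratio statistic: twice the log-likelihood gap
lr = 2.0 * (loglik(theta_hat) - loglik(1.0))

print(theta_hat, wald, lr)  # both statistics are asymptotically chi-squared(1) under H0
```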

4. Suppose (X_1, ..., X_n) is a random sample of size n from the N(µ, σ²) distribution and let θ = (µ, σ²)′.

(a) Explain in detail how to estimate θ by the generalised method of moments (GMM) using the fact that, if X ∼ N(µ, σ²), then E(X) = µ and E[(X − µ)²] = σ². [5 marks]

(b) Show that the GMM estimator of θ in (a) is the same as the maximum likelihood estimator. [6 marks]

(c) Explain in detail how your answer to (a) would change if you used the additional information that E[(X − µ)³] = 0 and E[(X − µ)⁴] = 3σ⁴. [5 marks]

(d) Explain how to obtain an asymptotically efficient GMM estimate of θ using the four moment conditions in part (c). [5 marks]

(e) Explain how to test the validity of the four moment conditions used to compute the GMM estimator of θ in part (c). [4 marks]
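For part (b): with the two moment conditions of part (a) the system is just identified, so the GMM estimator solves the sample moment equations exactly and reproduces the normal MLE. The sketch below checks this numerically; the true parameter values are assumptions chosen for the illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(loc=2.0, scale=1.5, size=5000)  # true mu = 2, sigma^2 = 2.25

# Just-identified GMM with m1 = X - mu and m2 = (X - mu)^2 - sigma^2:
# set the sample moments to zero and solve directly.
mu_gmm = x.mean()
sigma2_gmm = np.mean((x - mu_gmm) ** 2)

# MLE for the normal gives exactly the same expressions
mu_mle = x.mean()
sigma2_mle = np.mean((x - x.mean()) ** 2)

print(mu_gmm, sigma2_gmm)  # identical to the MLE values
```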

SECTION B (Answer TWO questions from this section)

5. Consider the bivariate VAR process defined by

x_t = 0.2 x_{t−1} − 0.3 y_{t−1} + u_{1t},
y_t = α x_{t−1} + 0.1 y_{t−1} + u_{2t},

where {u_t = (u_{1t}, u_{2t})′} are independent and identically distributed random vectors with zero mean and variance–covariance matrix equal to the identity matrix.

(a) Under what condition on α is the VAR stationary? [5 marks]

(b) Show that there exists a univariate ARMA(2, 1) representation for x_t. (Hint: using the second equation of the VAR, write y_t as a function of x_t.) [5 marks]

(c) Under what condition(s) is the ARMA representation in (b) stationary? Compare the stationarity conditions in (a) and (b). [2 marks]

(d) Establish whether x_t Granger-causes y_t and/or y_t Granger-causes x_t. [4 marks]

(e) Derive the impulse response functions Ψ_s, for s = 1, 2, 3, 4. Discuss whether the innovations of the above VAR need to be orthogonalised. [5 marks]

(f) Find an expression for E_t(x_{t+n}), where E_t denotes conditional expectation given information available at time t. [4 marks]
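For part (a), a VAR(1) is stationary when all eigenvalues of its coefficient matrix lie strictly inside the unit circle, and the non-orthogonalised impulse responses in (e) are Ψ_s = A^s. A numerical sketch, with the two α values chosen only as illustrations:

```python
import numpy as np

def coef_matrix(alpha):
    # VAR(1) coefficient matrix from the question, with free parameter alpha
    return np.array([[0.2, -0.3],
                     [alpha, 0.1]])

def is_stationary(alpha):
    # Stationary iff every eigenvalue has modulus strictly below one
    return bool(np.all(np.abs(np.linalg.eigvals(coef_matrix(alpha))) < 1))

# Non-orthogonalised impulse responses Psi_s = A^s for s = 1, ..., 4
A = coef_matrix(0.1)
psis = [np.linalg.matrix_power(A, s) for s in range(1, 5)]

print(is_stationary(0.1), is_stationary(5.0))  # True False
```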

6. Suppose that the bivariate time series of real stock prices (P_t) and real dividends (D_t) satisfies the system

P_t + βD_t = u_{1t},  u_{1t} = φu_{1,t−1} + ε_{1t},
D_t = u_{2t},  u_{2t} = u_{2,t−1} + ε_{2t},

where β ≠ 0, −1 < φ < 1, and {(ε_{1t}, ε_{2t})′} are independent and identically distributed random vectors with zero mean and variance–covariance matrix equal to the identity matrix.

(a) Determine the order of integration of {P_t} and {D_t}. [4 marks]

(b) Are {P_t} and {D_t} cointegrated? Explain. [3 marks]

(c) Derive the error-correction representation of the system. [7 marks]

(d) Assume that stock prices are the discounted value of the future dividend stream, i.e.,

P_t = E_t [ Σ_{s=1}^∞ (1/(1 + r))^s D_{t+s} ] + ε_{1t},

where r is a fixed discount rate, ε_{1t} is a measurement error, and E_t denotes conditional expectation given information available at time t.

(i) Derive the stochastic process for {P_t} assuming that {D_t} follows the process given above. (Note that Σ_{s=1}^∞ a^s = a Σ_{s=0}^∞ a^s = a/(1 − a) for |a| < 1.) [7 marks]

(ii) What value of φ ensures that the long-run relationship P_t + βD_t will be identical to the present-value solution of the model? Explain. [4 marks]
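Parts (a)–(b) can be illustrated by simulation: D_t is a random walk (I(1)), P_t inherits that stochastic trend, yet the combination P_t + βD_t = u_{1t} is a stationary AR(1), so the two series are cointegrated with cointegrating vector (1, β). Sample size and parameter values below are assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(4)
T, beta, phi = 10_000, 2.0, 0.5

e1 = rng.normal(size=T)
e2 = rng.normal(size=T)

D = np.cumsum(e2)          # D_t = u_{2t} is a random walk, hence I(1)
u1 = np.zeros(T)
for t in range(1, T):
    u1[t] = phi * u1[t - 1] + e1[t]  # stationary AR(1) disturbance

P = u1 - beta * D          # so that P_t + beta * D_t = u_{1t}

# The linear combination is stationary even though P and D each have a unit root
z = P + beta * D
print(np.var(D), np.var(z))  # the random walk's sample variance dwarfs z's
```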

7. Consider the following ARCH models for ε_{1t} and ε_{2t}:

ε_{1t} = v_t (c_1 + α_1 ε²_{1,t−1})^{1/2},  where v_t ∼ N(0, 1),
ε_{2t} = u_t (c_2 + α_2 ε²_{2,t−1})^{1/2},  where u_t ∼ N(0, 1) and E(u_t v_{t−s}) = 0 for all s.

(a) Derive the first, second, third and fourth conditional moments of ε_{1t}. [6 marks]

(b) Derive the first, second, third and fourth unconditional moments of ε_{1t}. [7 marks]

(c) Show that the n-periods-ahead forecast of the conditional variance of ε_{1t} can be written as

E_{t−1}(σ²_{t+n}) = c_1/(1 − α_1) + (σ²_t − c_1/(1 − α_1)) α_1^n,

where σ²_t = E_{t−1}(ε²_{1t}) and E_{t−1} denotes conditional expectation given information available at time t − 1. [6 marks]

(d) Define ε²_t = γ_1 ε²_{1t} + γ_2 ε²_{2t}. Does ε²_t have ARCH effects? Do the results change when α_1 = α_2 and, if so, which ARCH model characterises ε²_t? [6 marks]
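The closed form in (c) follows from iterating the one-step law of motion E_{t−1}(σ²_{t+j}) = c_1 + α_1 E_{t−1}(σ²_{t+j−1}). The sketch below checks the closed form against that recursion; the parameter values and starting variance are arbitrary illustrations.

```python
# Verify the ARCH(1) variance-forecast formula against its defining recursion.
c1, alpha1 = 0.4, 0.6
sigma2_t = 2.0  # current conditional variance (arbitrary starting value)

def forecast_recursive(n):
    # Iterate E_{t-1}(sigma^2_{t+j}) = c1 + alpha1 * E_{t-1}(sigma^2_{t+j-1})
    s = sigma2_t
    for _ in range(n):
        s = c1 + alpha1 * s
    return s

def forecast_closed_form(n):
    # Closed form from part (c); lr is the long-run (unconditional) variance
    lr = c1 / (1 - alpha1)
    return lr + (sigma2_t - lr) * alpha1**n

for n in (1, 2, 5, 20):
    assert abs(forecast_recursive(n) - forecast_closed_form(n)) < 1e-12

print(forecast_closed_form(200))  # converges to c1 / (1 - alpha1) = 1.0
```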

8. Consider the bivariate VAR model

y′_t = a_{11} y′_{t−1} + a_{12} x′_{t−1} + ε_t,
x′_t = a_{21} y′_{t−1} + a_{22} x′_{t−1} + u_t,

where {ε_t} and {u_t} are white-noise processes, y′_t and x′_t are given by

y′_t = y_t − α_0 − α_1 S_t,
x′_t = x_t − α_0 − α_2 S_t,

and {S_t} follows a two-state Markov process with transition probabilities

p = P(S_t = 1 | S_{t−1} = 1),
q = P(S_t = 0 | S_{t−1} = 0).

(a) Derive E(S_t) and determine under which conditions on p and q this moment exists. Explain how E(S_t) is related to the unconditional probability P(S_t = 1). [6 marks]

(b) Explain what p = 1 implies about the Markov process {S_t}. [3 marks]

(c) Derive the expected value of the state n periods ahead conditional on the information about the state available at time t, E(S_{t+n} | S_t). [7 marks]

(d) Suppose y_t is the first difference of the spot exchange rate (e_t), y_t = e_t − e_{t−1}, and x_t is the spread between the one-period forward rate (f_t) and the spot exchange rate, x_t = f_t − e_t. A version of the hypothesis that the forward rate is an unbiased predictor of the future spot rate may be expressed as

x_t = E_t(y_{t+1}).

What testable restrictions does this theory impose on the parameters of the VAR, and which on the switching parameters? [9 marks]
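For parts (a) and (c): stacking the transition probabilities into a matrix gives E(S_{t+n} | S_t) as an entry of a matrix power, and as n grows the forecast converges to the unconditional probability P(S_t = 1) = (1 − q)/(2 − p − q) whenever |p + q − 1| < 1. The p and q values below are illustrative assumptions.

```python
import numpy as np

p, q = 0.9, 0.8  # assumed transition probabilities for the illustration

# Transition matrix over states (0, 1); row i gives P(S_t = j | S_{t-1} = i)
T = np.array([[q, 1 - q],
              [1 - p, p]])

def expected_state(n, s_t):
    """E(S_{t+n} | S_t = s_t): probability of being in state 1 after n steps."""
    dist = np.zeros(2)
    dist[s_t] = 1.0
    dist = dist @ np.linalg.matrix_power(T, n)
    return dist[1]

# Unconditional (ergodic) probability of state 1
pi1 = (1 - q) / (2 - p - q)

print(expected_state(1, 1), expected_state(50, 1), pi1)
```

Note that the one-step case reproduces the definition directly, E(S_{t+1} | S_t = 1) = p, while the long-horizon forecast settles at pi1.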

