
Mathematics for Economics and Finance

Answer Key to Final Exam


Instructor: Norman Schürhoff

Date: January 10, 2014

Question I (25 points)
1. Are the following statements true or false? Just state TRUE or FALSE. A correct
answer receives a 1 point bonus while a wrong answer receives a 0.5 point deduction.
State nothing to avoid any deduction.

(a) Spectral decomposition is a method to decompose light into different colors.


(b) Gauss Inversion is a method to invert linear functions.
(c) The Weierstrass Theorem provides necessary requirements for the existence of
solutions to Lagrange problems.
(d) The Implicit Function Theorem allows one to compute the change in solutions to
Lagrange problems.
(e) If the rows of matrix Am×n are linearly dependent, then m > n.
(f) If the null space of matrix Am×n has dimension n, then m > n.
(g) For any random variables X and Y , E[X|Y ] is the best linear unbiased predictor.
(h) Quasi-concavity of a function implies pseudo-concavity.

2. Are the following statements true or false? Just state TRUE or FALSE. A correct
answer receives a 4 point bonus while a wrong answer receives a 2 point deduction.
State nothing to avoid any deduction.
(a) lim_{n→∞} (1 + x/n)^n = 1 + x + x^2/2! + x^3/3! + x^4/4! + · · ·

(b) lim_{x→0} sin(ax)/(bx) = −a/b

(c) ∂/∂x ∫_0^x ln(xy) dy = ln x + 1

3. Compute
   E[ ∫_0^∞ e^{−rt} X_t dt ],

where X_t is a Gaussian stochastic process with X_t ∼ N(1, 1).

Question I Solution
1. (a) False. It is a representation of a matrix in terms of its eigenvalues and eigenvectors.
Theorem 2.5 (Spectral Theorem)
(b) True. Gauss Elimination Method is a method to invert matrices which represent
linear functions. Algorithm 2.4 (How to Find an Inverse Matrix)
(c) False. The Weierstrass Theorem provides sufficient requirements for the existence
of solutions to Lagrange problems. Theorem 3.6 (Weierstrass’ Theorem)
(d) True. The Implicit Function Theorem allows one to compute the change in solutions
to Lagrange problems. Section 6.9.4 (Implicit Function Theorem)
(e) False. Linear dependence of the rows only implies rank(A) < m; it does not allow
us to conclude whether m > n.
(f) False. If the null space of A_{m×n} has dimension n, then rank(A) = 0, so A is the
zero matrix, which can have any shape; nothing follows about m and n. Definition 2.12
(Null Space & Nullity)
(g) False. E[X|Y] is the best unbiased predictor, but not necessarily linear. Section
4.6.4 (Best Nonlinear Predictor)
(h) False. Pseudo-concavity implies quasi-concavity, not the other way around. Section
3.3.3 (Concavity, Quasi- and Pseudo-Concavity for Differentiable Functions)
2. (a) True. lim_{n→∞} (1 + x/n)^n = e^x = 1 + x + x^2/2! + x^3/3! + x^4/4! + · · ·
(b) False. lim_{x→0} sin(ax)/(bx) = a/b.
(c) False. By Leibniz's formula, with F(x) = ∫_0^x ln(xy) dy,

   F′(x) = ln(x·x) + ∫_0^x ∂/∂x ln(xy) dy = 2 ln x + ∫_0^x (1/x) dy = 2 ln x + 1.
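These three computations are easy to double-check symbolically. Below is a small
verification sketch using Python's sympy library (not part of the official solution):

    import sympy as sp

    x, y, n, a, b = sp.symbols('x y n a b', positive=True)

    # (a) lim_{n->oo} (1 + x/n)^n = e^x
    print(sp.limit((1 + x/n)**n, n, sp.oo))        # exp(x)

    # (b) lim_{x->0} sin(ax)/(bx) = a/b, not -a/b
    print(sp.limit(sp.sin(a*x)/(b*x), x, 0))       # a/b

    # (c) d/dx of integral_0^x ln(x*y) dy is 2*ln(x) + 1, not ln(x) + 1
    F = sp.integrate(sp.log(x*y), (y, 0, x))
    print(sp.expand_log(sp.simplify(sp.diff(F, x))))   # 2*log(x) + 1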

3.
   E[ ∫_0^∞ e^{−rt} X_t dt ] = ∫_0^∞ e^{−rt} E[X_t] dt = ∫_0^∞ e^{−rt} · 1 dt

   If r = 0: the integral equals ∫_0^∞ 1 dt = t |_0^∞, which diverges.
   If r > 0: −(1/r) e^{−rt} |_0^∞ = −(1/r)(0 − 1) = 1/r.
   If r < 0: the integral diverges.
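For r > 0 the value 1/r can be confirmed numerically, e.g. with scipy (a verification
sketch, not part of the official solution):

    import numpy as np
    from scipy.integrate import quad

    # E[X_t] = 1, so the expected discounted integral is int_0^oo e^{-rt} dt = 1/r.
    for r in (0.5, 1.0, 2.0):
        val, _ = quad(lambda t: np.exp(-r * t), 0, np.inf)
        print(r, val, 1 / r)    # val agrees with 1/r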

Question II (25 points)
Let x be the number of Red Bull® drinks consumed per month. Denote by y the number
of Zweifel® potato chips packages consumed over the same period. A consumer has utility
function

   U(x, y) = e^{min{x, ay}},

with parameter constraint a > 0 and budget constraint

   Rx + My ≤ W.

Let D be the constraint set, that is, the set of all feasible consumption bundles.

1. Give an interpretation to a, R, M , and W .

2. What does the constraint set D look like?

3. Formulate the optimization problem.

4. Is constraint qualification satisfied?

5. Is the maximum of U (x, y) achieved on D? Explain why or why not.

6. Can the Kuhn-Tucker method be applied to this optimization problem? If yes, formulate
the Kuhn-Tucker conditions. If not, explain why.

7. Solve the optimization problem. Is the solution unique?

8. How do the solution and the value function depend on (R, M, W )? What are the
properties of the value function and the solution in terms of W ?

Question II Solution
1. a is the desired proportion between the two goods: the consumer wants to consume
x = ay.
R is the price of a Red Bull drink.
M is the price of a Zweifel package.
W is the consumer's monthly budget.
(3 points)

2. D = { (x, y) ∈ R^2 : Rx + My ≤ W, x ≥ 0, y ≥ 0 }, a triangle with vertices (0, 0),
(W/R, 0), and (0, W/M). (3 points)

3. The problem is

   U(x, y) = e^{min{x, ay}} → max_{x,y}
   s.t. Rx + My ≤ W, x ≥ 0, y ≥ 0.

For simplicity, let us work with V(x, y) = ln U(x, y). Since f(z) = ln z is a strictly
increasing differentiable transformation, it does not affect the solution:

   V(x, y) = min{x, ay} → max_{x,y}
   s.t. Rx + My ≤ W, x ≥ 0, y ≥ 0.

(3 points)

4. Yes, it is, since the constraints are linear. (2 points)

5. It is, by the Weierstrass theorem: the objective function is continuous and the
constraint set D is compact (closed and bounded). (3 points)

6. It cannot, because the objective function is not differentiable at the optimal
points (along the kink x = ay). (3 points)

7. Optimal points of V form the ray x = ay, y ≥ 0 (the kink of the min function).
Substituting x = ay into the binding budget constraint, we get Ray + My = (Ra + M)y = W.
Thus

   y* = W/(Ra + M),
   x* = aW/(Ra + M).

The solution is unique, since the ray x = ay meets the budget line Rx + My = W in
exactly one point. The value function is

   v(R, M, W) = V(x*, y*) = min{x*, ay*} = min{ aW/(Ra + M), aW/(Ra + M) } = aW/(Ra + M),

or, in terms of the original utility,

   u(R, M, W) = U(x*, y*) = e^{v(R,M,W)} = e^{aW/(Ra+M)}.

(4 points)
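The closed-form solution can be cross-checked numerically: maximizing min{x, ay} over
the budget set is equivalent to the linear program max t s.t. t ≤ x, t ≤ ay,
Rx + My ≤ W, x, y, t ≥ 0, which scipy's linprog solves. The parameter values below are
purely illustrative (a sketch, not part of the official solution):

    from scipy.optimize import linprog

    a, R, M, W = 2.0, 3.0, 1.5, 12.0             # hypothetical parameter values

    # Decision variables z = (x, y, t); linprog minimizes, so minimize -t.
    res = linprog(c=[0, 0, -1],
                  A_ub=[[-1,  0, 1],             # t - x  <= 0
                        [ 0, -a, 1],             # t - ay <= 0
                        [ R,  M, 0]],            # Rx + My <= W
                  b_ub=[0, 0, W])                # default bounds keep x, y, t >= 0
    x_star, y_star = res.x[0], res.x[1]
    print(x_star, a * W / (R * a + M))           # both 3.2 = aW/(Ra+M)
    print(y_star, W / (R * a + M))               # both 1.6 = W/(Ra+M)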

8. To find the properties of the solution and the value function, we need to compute their
derivatives w.r.t. the corresponding parameters:
   ∂x*/∂R = −a^2 W/(Ra+M)^2 < 0,    ∂x*/∂M = −aW/(Ra+M)^2 < 0,    ∂x*/∂W = a/(Ra+M) > 0,
   ∂y*/∂R = −aW/(Ra+M)^2 < 0,       ∂y*/∂M = −W/(Ra+M)^2 < 0,     ∂y*/∂W = 1/(Ra+M) > 0,
   ∂u/∂R = −(a^2 W/(Ra+M)^2) u < 0, ∂u/∂M = −(aW/(Ra+M)^2) u < 0, ∂u/∂W = (a/(Ra+M)) u > 0.

All three functions increase with income and decrease with both goods' prices. The
solutions x* and y* (and hence the value v) are linear in W, while u is increasing and
convex in W. (4 points)
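The signs above can be verified symbolically, e.g. with sympy (a verification sketch,
not part of the official solution):

    import sympy as sp

    R, M, W, a = sp.symbols('R M W a', positive=True)
    x_star = a * W / (R * a + M)
    y_star = W / (R * a + M)
    u_star = sp.exp(a * W / (R * a + M))

    for f in (x_star, y_star, u_star):
        print([sp.simplify(sp.diff(f, p)) for p in (R, M, W)])
    # With R, M, W, a > 0, each derivative w.r.t. R and M is negative and each
    # derivative w.r.t. W is positive, matching the table above.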

Question III (25 points)
The rating agency PiRates splits companies into three categories: A (high performance),
B (average performance), and C (defaulting). Each quarter they reconsider their ratings of
companies.
Historical statistics describe probabilities of transition between different categories in the
following way:
• A sound company stays sound with probability 2/3 and has no chance to default in the
next period.

• An average company improves its performance, maintains it, or defaults with equal
probabilities.

• The default state is absorbing, i.e., companies which default today never improve their
performance.

Answer the following questions.

1. Write down a matrix


Π = {πij }i,j∈{A,B,C} ,
where πij is the probability of moving from group i at time t to group j at time t + 1.

2. What are the properties of a probability transition matrix? Does the matrix Π satisfy
them?

3. Is the given Π symmetric? Is it orthogonal? Is it idempotent?

4. Assume p_t^i = 1/3, i ∈ {A, B, C}, is the probability of having rating A, B, or C,
respectively, today. What are the probability distributions yesterday, p_{t−1}, and
tomorrow, p_{t+1}?

5. Find all eigenvalues and corresponding eigenvectors of the matrix Π.

6. Compute the determinant of the matrix Π.

7. Find the inverse matrix of Π.

8. Assume there is a stationary distribution q = (q^A, q^B, q^C) with Σ_{i∈{A,B,C}} q^i = 1,
q^i ≥ 0. Explain what a stationary distribution is. Find all possible q.

Question III Solution
1.
                                 [ 2/3  1/3   0  ]
   Π = {π_ij}_{i,j∈{A,B,C}} =    [ 1/3  1/3  1/3 ] .
                                 [  0    0    1  ]

(3 points)
2. All entries must lie in [0, 1], and the sums along the rows must equal 1:
∀i Σ_{j=1}^{3} π_ij = 1. The given Π satisfies both requirements. (3 points)

3. It is not symmetric, by definition: Π ≠ Π^T.
It is not orthogonal, by definition: ΠΠ^T ≠ I.
It is not idempotent, by definition: ΠΠ ≠ Π.
(3 points)

4. Distributions propagate as p_{t+k} = p_t Π^k.

Yesterday: p_{t−1} must satisfy p_{t−1} Π = p_t. Since

   (0 1 0) Π = (1/3 1/3 1/3) = p_t,

we get p_{t−1} = (0 1 0).

Tomorrow:

   p_{t+1} = p_t Π = (1/3 1/3 1/3) Π = (1/3 2/9 4/9) ≈ (0.3333 0.2222 0.4444).

(3 points)
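Both propagation steps are easy to check numerically (a verification sketch, not part
of the official solution):

    import numpy as np

    Pi = np.array([[2/3, 1/3, 0],
                   [1/3, 1/3, 1/3],
                   [0,   0,   1]])
    p_t = np.array([1/3, 1/3, 1/3])

    print(p_t @ Pi)                    # tomorrow: [0.3333 0.2222 0.4444]
    print(np.linalg.solve(Pi.T, p_t))  # yesterday: [0 1 0], solves p_{t-1} Pi = p_t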

5. We solve Πv = λv, i.e., det(Π − λI) = 0:

   det(Π − λI) = (2/3 − λ)(1/3 − λ)(1 − λ) − (1/3)(1/3)(1 − λ)
               = (1 − λ) [ (2/3 − λ)(1/3 − λ) − 1/9 ]
               = (1 − λ) ( λ^2 − λ + 1/9 )

   {The solutions of the quadratic equation are λ = (3 ± √5)/6}

               = ( (3 − √5)/6 − λ ) ( (3 + √5)/6 − λ ) (1 − λ).

   The eigenvalues are ( (3 + √5)/6, (3 − √5)/6, 1 ) ≈ (0.8727, 0.1273, 1).

λ = 1:

   [ 2/3 − 1   1/3      0    ]     [ −1/3   1/3    0  ]     [ −1   1   0 ]
   [   1/3   1/3 − 1   1/3   ]  =  [  1/3  −2/3   1/3 ]  ⇔  [  1  −2   1 ]
   [    0       0     1 − 1  ]     [   0     0     0  ]     [  0   0   0 ]

   [ −1   1   0 ] [ v1 ]   [ 0 ]        v1 = v2,
   [  1  −2   1 ] [ v2 ] = [ 0 ]   ⇒    v1 = v3,
   [  0   0   0 ] [ v3 ]   [ 0 ]        v3 = c, c ∈ R.

   Normalization: √(c^2 + c^2 + c^2) = 1 ⇔ |c|√3 = 1 ⇔ |c| = 1/√3. The normalized
   eigenvector corresponding to λ = 1 is (1/√3, 1/√3, 1/√3)′ ≈ (0.5774, 0.5774, 0.5774)′.


λ = (3 + √5)/6:

   [ 2/3 − (3+√5)/6      1/3              0        ]     [ (1−√5)/6     1/3        0      ]
   [      1/3       1/3 − (3+√5)/6       1/3       ]  =  [   1/3     (−1−√5)/6    1/3     ]
   [       0              0         1 − (3+√5)/6   ]     [    0          0      (3−√5)/6  ]

   Row reduction yields

   [ (1−√5)/2   1   0 ] [ v1 ]   [ 0 ]        v1 = c, c ∈ R,
   [     0      0   0 ] [ v2 ] = [ 0 ]   ⇒    v2 = −((1−√5)/2) v1,
   [     0      0   1 ] [ v3 ]   [ 0 ]        v3 = 0.

   Normalization: √( c^2 + ((1−√5)/2 · c)^2 + 0^2 ) = 1 ⇔ |c| √(5/2 − √5/2) = 1 ⇔
   |c| = 1/√(5/2 − √5/2) ≈ 0.8507. The normalized eigenvector corresponding to
   λ = (3 + √5)/6 is approximately (0.8507, 0.5257, 0)′.

λ = (3 − √5)/6:

   [ 2/3 − (3−√5)/6      1/3              0        ]     [ (1+√5)/6     1/3        0      ]
   [      1/3       1/3 − (3−√5)/6       1/3       ]  =  [   1/3     (−1+√5)/6    1/3     ]
   [       0              0         1 − (3−√5)/6   ]     [    0          0      (3+√5)/6  ]

   Row reduction yields

   [ (1+√5)/2   1   0 ] [ v1 ]   [ 0 ]        v1 = c, c ∈ R,
   [     0      0   0 ] [ v2 ] = [ 0 ]   ⇒    v2 = −((1+√5)/2) v1,
   [     0      0   1 ] [ v3 ]   [ 0 ]        v3 = 0.

   Normalization: √( c^2 + ((1+√5)/2 · c)^2 + 0^2 ) = 1 ⇔ |c| √(5/2 + √5/2) = 1 ⇔
   |c| = 1/√(5/2 + √5/2) ≈ 0.5257. The normalized eigenvector corresponding to
   λ = (3 − √5)/6 is approximately (0.5257, −0.8507, 0)′.
The matrix of normalized eigenvectors (columns ordered as λ = (3 + √5)/6, (3 − √5)/6, 1) is

   [ 0.8507   0.5257   0.5774 ]
   [ 0.5257  −0.8507   0.5774 ] .
   [   0        0      0.5774 ]

(4 points)
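The full eigen-decomposition can be confirmed with numpy (a verification sketch, not
part of the official solution; numerical eigenvectors may differ by sign):

    import numpy as np

    Pi = np.array([[2/3, 1/3, 0],
                   [1/3, 1/3, 1/3],
                   [0,   0,   1]])
    eigvals, eigvecs = np.linalg.eig(Pi)
    print(eigvals)   # approx [0.8727, 0.1273, 1.0] = ((3+sqrt(5))/6, (3-sqrt(5))/6, 1)
    print(eigvecs)   # columns: the normalized eigenvectors above, up to sign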
6. Expanding along the last row, det(Π) = (2/3)(1/3) − (1/3)(1/3) = 1/9 ≈ 0.1111.
(3 points)

7. Gauss Elimination Method:

   [ 2/3  1/3   0  | 1 0 0 ]     [ 2 1 0 | 3 0 0 ]     [ 2 1 0 | 3 0  0 ]
   [ 1/3  1/3  1/3 | 0 1 0 ]  ⇔  [ 1 1 1 | 0 3 0 ]  ⇔  [ 1 1 0 | 0 3 −1 ]
   [  0    0    1  | 0 0 1 ]     [ 0 0 1 | 0 0 1 ]     [ 0 0 1 | 0 0  1 ]

       [ 1 0 0 | 3  −3   1 ]     [ 1 0 0 |  3 −3  1 ]
   ⇔   [ 1 1 0 | 0   3  −1 ]  ⇔  [ 0 1 0 | −3  6 −2 ]
       [ 0 0 1 | 0   0   1 ]     [ 0 0 1 |  0  0  1 ]

             [  3  −3   1 ]
   Π^(−1) =  [ −3   6  −2 ] .
             [  0   0   1 ]

(3 points)
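Both the determinant and the inverse are quick to verify with numpy (a sketch, not
part of the official solution):

    import numpy as np

    Pi = np.array([[2/3, 1/3, 0],
                   [1/3, 1/3, 1/3],
                   [0,   0,   1]])
    Pi_inv = np.array([[ 3, -3,  1],
                       [-3,  6, -2],
                       [ 0,  0,  1]])

    print(np.linalg.det(Pi))                    # 0.1111... = 1/9
    print(np.allclose(Pi @ Pi_inv, np.eye(3)))  # True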

8. A stationary distribution is a distribution over the ratings that reproduces itself
under the transition dynamics, i.e., it solves

   q = qΠ,   q^A + q^B + q^C = 1.

Written out (and multiplying each equation by 3):

   (2/3)q^A + (1/3)q^B = q^A,          2q^A + q^B = 3q^A,
   (1/3)q^A + (1/3)q^B = q^B,     ⇔    q^A + q^B = 3q^B,
   (1/3)q^B + q^C = q^C,               q^B + 3q^C = 3q^C,
   q^A + q^B + q^C = 1.                q^A + q^B + q^C = 1.

The first equation gives −q^A + q^B = 0, i.e., q^B = q^A; substituting into the second
gives q^B − 2q^B = 0, so q^A = q^B = 0. The third equation then holds trivially, and
normalization yields q^C = 1. Hence

   q = (0 0 1):

in the long run every company ends up in the absorbing default state.
(3 points)
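The same answer drops out of the limit of Π^k, or of the left eigenvector of Π for
eigenvalue 1 (a verification sketch, not part of the official solution):

    import numpy as np

    Pi = np.array([[2/3, 1/3, 0],
                   [1/3, 1/3, 1/3],
                   [0,   0,   1]])

    print(np.linalg.matrix_power(Pi, 100))    # every row converges to (0, 0, 1)

    eigvals, eigvecs = np.linalg.eig(Pi.T)    # left eigenvectors of Pi
    q = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1))])
    print(q / q.sum())                        # stationary distribution (0, 0, 1)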

Question IV (25 points)
Suppose Y_1, . . . , Y_n are independent and identically distributed random variables, with
each Y_i having density function

   f(y; θ) = y^2/(2θ^3) · e^{−y/θ},   y > 0,

where θ > 0 is a parameter. It is known that E[Y_i] = 3θ and V[Y_i] = 3θ^2 for each
i = 1, . . . , n.

1. Determine θ̂_MOM, the method of moments estimator of θ.

2. Compute the likelihood function L(θ; y) for this random sample.

3. Show that the maximum likelihood estimator of θ is


   θ̂_MLE = (1/(3n)) Σ_{i=1}^{n} Y_i = Ȳ/3.

4. Based on your answers above, show that both θ̂_MOM and θ̂_MLE are unbiased estimators
of θ.

5. Find the Fisher information number I(θ).

6. Explain why θ̂_MLE is the minimum variance unbiased estimator of θ.

Question IV Solution
1. To find the method of moments estimator, we equate the first population moment with
the first sample moment. Since E(Y_i) = 3θ, we conclude θ̂_MOM = Ȳ/3. (4 points)

2. By definition, the likelihood function L(θ) is given by

   L(θ) = ∏_{i=1}^{n} f(y_i | θ) = ∏_{i=1}^{n} [ y_i^2/(2θ^3) · e^{−y_i/θ} ]
        = 2^{−n} θ^{−3n} ( ∏_{i=1}^{n} y_i )^2 exp{ −(1/θ) Σ_{i=1}^{n} y_i }.

(4 points)

3. The log-likelihood function is

   l(θ) = ln L(θ) = −n ln 2 − 3n ln θ + 2 Σ_{i=1}^{n} ln y_i − (1/θ) Σ_{i=1}^{n} y_i.

Taking the derivative yields

   ∂l/∂θ = −3n/θ + (1/θ^2) Σ_{i=1}^{n} y_i.

Setting ∂l/∂θ = 0 gives

   θ = (1/(3n)) Σ_{i=1}^{n} y_i = ȳ/3.

We conclude that

   θ̂_MLE = Ȳ/3.

(4 points)
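The score equation can also be solved symbolically (a sketch, not part of the official
solution; the symbol S below stands for Σ_i y_i):

    import sympy as sp

    theta, n, S = sp.symbols('theta n S', positive=True)   # S = sum_i y_i

    # log-likelihood, dropping 2*sum(ln y_i), which is free of theta
    loglik = -n * sp.log(2) - 3 * n * sp.log(theta) - S / theta
    score = sp.diff(loglik, theta)
    print(sp.solve(sp.Eq(score, 0), theta))   # [S/(3*n)], i.e. theta_hat = ybar/3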

4. Since E(Y_i) = 3θ, we have

   E(θ̂_MLE) = E(θ̂_MOM) = E( Ȳ/3 ) = E(Y_i)/3 = 3θ/3 = θ.

In other words, both θ̂_MLE and θ̂_MOM are unbiased estimators of θ.
(4 points)

5. Since

   ln f_Y(y|θ) = −ln 2 − 3 ln θ + 2 ln y − y/θ,

we compute

   ∂/∂θ ln f_Y(y|θ) = −3/θ + y/θ^2

and

   ∂^2/∂θ^2 ln f_Y(y|θ) = 3/θ^2 − 2y/θ^3.

Therefore,

   I(θ) = −E[ ∂^2/∂θ^2 ln f_Y(Y|θ) ] = (2/θ^3) E(Y) − 3/θ^2 = 6θ/θ^3 − 3/θ^2 = 3/θ^2.

(5 points)

6. Recall that the Cramer-Rao lower bound tells us that if θ̂ is an unbiased estimator of
θ, then

   Var(θ̂) ≥ 1/(nI(θ)).

Since θ̂_MLE is unbiased (shown in the exercises before),

   Var(θ̂_MLE) = Var( Ȳ/3 ) = (1/(9n)) Var(Y_1) = 3θ^2/(9n) = θ^2/(3n),

and since

   1/(nI(θ)) = 1/(3n/θ^2) = θ^2/(3n),

we see that the lower bound of the Cramer-Rao inequality is attained, and so we deduce
that θ̂_MLE is the minimum variance unbiased estimator of θ.
(4 points)
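A short Monte Carlo experiment illustrates both the unbiasedness and the attainment of
the bound (a simulation sketch with hypothetical parameter values, not part of the
official solution):

    import numpy as np

    rng = np.random.default_rng(0)
    theta, n, reps = 2.0, 50, 20000    # hypothetical parameter values

    # Each Y_i ~ Gamma(shape=3, scale=theta), which has the density f(y; theta) above.
    samples = rng.gamma(shape=3.0, scale=theta, size=(reps, n))
    theta_hat = samples.mean(axis=1) / 3          # theta_hat = Ybar/3

    print(theta_hat.mean())                       # approx theta = 2.0 (unbiased)
    print(theta_hat.var(), theta**2 / (3 * n))    # both approx 0.0267 = CR bound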
