
2301212244 梁忠诚

1. Reproduce example 3.1, 3.3 and 3.5 from Jehle and Reny (third edition) on the CES production
function.

(a) Example 3.1


Given the CES function y = (∑_{i=1}^n α_i x_i^ρ)^{1/ρ}, where ∑_{i=1}^n α_i = 1.

MRTS_ij(x_i, x_j) = MP_i/MP_j = (∂y/∂x_i)/(∂y/∂x_j)
= [(1/ρ)(∑_{k=1}^n α_k x_k^ρ)^{1/ρ−1} · α_i ρ x_i^{ρ−1}] / [(1/ρ)(∑_{k=1}^n α_k x_k^ρ)^{1/ρ−1} · α_j ρ x_j^{ρ−1}]
= α_i x_i^{ρ−1}/(α_j x_j^{ρ−1}) = (α_i/α_j)(x_i/x_j)^{ρ−1}.

Since ln MRTS_ij = ln(α_i/α_j) − (ρ − 1) ln(x_j/x_i),
σ_ij = d ln(x_j/x_i)/d ln MRTS_ij(x_i, x_j) = −1/(ρ − 1) = 1/(1 − ρ).

i. When ρ → 1, σ → +∞ and y = ∑_{i=1}^n α_i x_i (perfect substitutes).

ii. When ρ → 0, σ → 1, and lim_{ρ→0} y = lim_{ρ→0} (∑_{i=1}^n α_i x_i^ρ)^{1/ρ} = e^{lim_{ρ→0} (1/ρ) ln(∑_{i=1}^n α_i x_i^ρ)}. By L'Hôpital's rule,
lim_{ρ→0} ln(∑_{i=1}^n α_i x_i^ρ)/ρ = lim_{ρ→0} [∑_{i=1}^n α_i x_i^ρ ln x_i]/[∑_{i=1}^n α_i x_i^ρ] = ∑_{i=1}^n α_i ln x_i.
So lim_{ρ→0} y = e^{∑_{i=1}^n α_i ln x_i} = ∏_{i=1}^n x_i^{α_i} (Cobb–Douglas).

iii. When ρ → −∞, σ → 0, and lim_{ρ→−∞} y = lim_{ρ→−∞} (∑_{i=1}^n α_i x_i^ρ)^{1/ρ} = e^{lim_{ρ→−∞} (1/ρ) ln(∑_{i=1}^n α_i x_i^ρ)}. By L'Hôpital's rule,
lim_{ρ→−∞} ln(∑_{i=1}^n α_i x_i^ρ)/ρ = lim_{ρ→−∞} [∑_{i=1}^n α_i x_i^ρ ln x_i]/[∑_{i=1}^n α_i x_i^ρ]. Let
x_K = min[x_1, x_2, ⋯, x_n] and divide numerator and denominator by x_K^ρ; every term with x_i > x_K vanishes as ρ → −∞, so
lim_{ρ→−∞} [∑_{i=1}^n α_i x_i^ρ ln x_i]/[∑_{i=1}^n α_i x_i^ρ] = α_K ln x_K/α_K = ln x_K.
So lim_{ρ→−∞} y = e^{ln x_K} = x_K = min[x_1, x_2, ⋯, x_n] (Leontief).
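The three limiting cases can be checked numerically; a minimal Python sketch, where the weights and input levels are arbitrary illustrative values:

```python
import math

def ces(x, alpha, rho):
    # CES output y = (sum_i alpha_i * x_i^rho)^(1/rho), rho != 0
    return sum(a * xi**rho for a, xi in zip(alpha, x)) ** (1.0 / rho)

alpha = [0.3, 0.7]   # illustrative weights summing to 1
x = [2.0, 5.0]       # illustrative input levels

# rho = 1: linear aggregate sum_i alpha_i x_i (perfect substitutes)
assert abs(ces(x, alpha, 1.0) - sum(a*xi for a, xi in zip(alpha, x))) < 1e-12

# rho -> 0: Cobb-Douglas limit prod_i x_i^alpha_i
cobb_douglas = math.prod(xi**a for a, xi in zip(alpha, x))
assert abs(ces(x, alpha, 1e-6) - cobb_douglas) < 1e-4

# rho -> -inf: Leontief limit min_i x_i
assert abs(ces(x, alpha, -500.0) - min(x)) < 1e-2
```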

(b) Example 3.3


1
min w1 x1 + w2 x 2 s.t. (x1ρ + x2ρ ) ρ ≥ y.
x1≥0,x 2 ≥0
1
L = w1 x1 + w2 x 2 − λ(y − (x1ρ + x2ρ ) ρ ), assuming an interior solution, the first order conditions
∂L 1 1
∂x1
= w1 + λ ρ (x1ρ + x 2ρ ) ρ −1 ρx1ρ−1 = 0
1 w1 x 1
are: ∂L
= w2 + λ ρ1 (x1ρ + x 2ρ ) ρ −1 ρx 2ρ−1 = 0 ⇒ = ( 1 ) ρ−1, (x1ρ + x2ρ ) ρ − y = 0 ⇒
∂x 2 w2 x2
∂L 1
∂λ
= (x1ρ + x 2ρ ) ρ − y = 0
1
w 1 w ρ 1 y y ⋅ w2ρ − 1
x1 = ( 1 ) ρ − 1 ⋅ x 2, (( 1 ) ρ − 1 + 1) ρ ⋅ x 2 = y ⇒ x 2 = ρ = ρ ρ ,
w2 w2 w 1 1
(( w1 ) ρ − 1 + 1) ρ (w1ρ − 1 + w2ρ − 1 ) ρ
2
1
y y ⋅ w1ρ − 1
x1 = ρ = ρ ρ
w 1 1
(( w2 ) ρ − 1 + 1) ρ ρ−1
(w1 ρ−1 ρ
+ w2 )
1

1
2301212244 梁忠诚

Cost function:
1 1
y ⋅ w1ρ − 1 y ⋅ w2ρ − 1 ρ ρ ρ−1
c(w1, w2, y) = w1 x1 + w2 x 2 = w1 ρ ρ + w2 ρ ρ = y(w1ρ − 1 + w2ρ − 1 ) ρ
1 1
(w1ρ − 1 + w2ρ − 1 ) ρ (w1ρ − 1 + w2ρ − 1 ) ρ
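Writing r = ρ/(ρ−1), the cost function is c(w, y) = y(w1^r + w2^r)^{1/r}. A quick numerical sanity check of the demands and the cost function, using arbitrary illustrative parameter values:

```python
# CES cost minimization: check the closed-form demands and cost function
# (rho, prices and output below are arbitrary illustrative values)
rho, w1, w2, y = 0.5, 2.0, 3.0, 10.0

W = w1**(rho/(rho - 1)) + w2**(rho/(rho - 1))
x1 = y * w1**(1/(rho - 1)) / W**(1/rho)
x2 = y * w2**(1/(rho - 1)) / W**(1/rho)
c = y * W**((rho - 1)/rho)

# the demands exactly reach output y and exhaust the cost function
assert abs((x1**rho + x2**rho)**(1/rho) - y) < 1e-9
assert abs(w1*x1 + w2*x2 - c) < 1e-9

# no feasible bundle on a fine grid of x1 values costs less
best = min(w1*t + w2*(y**rho - t**rho)**(1/rho)
           for t in [0.01*k for k in range(1, 1000)])
assert best >= c - 1e-6
```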

(c) Example 3.5


max_{x,y≥0} p y − (w1 x1 + w2 x2)  s.t.  (x1^ρ + x2^ρ)^{β/ρ} ≥ y  ⇒  max_{x≥0} p(x1^ρ + x2^ρ)^{β/ρ} − (w1 x1 + w2 x2)

FOC:
∂L/∂x1 = pβ(x1^ρ + x2^ρ)^{β/ρ−1}·x1^{ρ−1} − w1 = 0
∂L/∂x2 = pβ(x1^ρ + x2^ρ)^{β/ρ−1}·x2^{ρ−1} − w2 = 0
⇒ (x1/x2)^{ρ−1} = w1/w2, x1 = (w1/w2)^{1/(ρ−1)}·x2. Substituting into y = (x1^ρ + x2^ρ)^{β/ρ} (as in Example 3.3, with y^{1/β} in place of y):
x2 = y^{1/β}·w2^{1/(ρ−1)}/(w1^{ρ/(ρ−1)} + w2^{ρ/(ρ−1)})^{1/ρ}, x1 = y^{1/β}·w1^{1/(ρ−1)}/(w1^{ρ/(ρ−1)} + w2^{ρ/(ρ−1)})^{1/ρ}. Substituting into
pβ(x1^ρ + x2^ρ)^{β/ρ−1}·x1^{ρ−1} − w1 = 0
⇒ pβ·y^{(β−ρ)/β}·y^{(ρ−1)/β}·w1·(w1^{ρ/(ρ−1)} + w2^{ρ/(ρ−1)})^{−(ρ−1)/ρ} − w1 = 0
⇒ pβ·y^{(β−1)/β}·(w1^{ρ/(ρ−1)} + w2^{ρ/(ρ−1)})^{−(ρ−1)/ρ} = 1
⇒ y^{(1−β)/β} = pβ(w1^{ρ/(ρ−1)} + w2^{ρ/(ρ−1)})^{−(ρ−1)/ρ}
⇒ Supply function: y = (pβ)^{β/(1−β)}·(w1^{ρ/(ρ−1)} + w2^{ρ/(ρ−1)})^{(ρ−1)β/(ρ(β−1))}
⇒ Input demand functions: x_i = w_i^{1/(ρ−1)}·(pβ)^{1/(1−β)}·(w1^{ρ/(ρ−1)} + w2^{ρ/(ρ−1)})^{(ρ−β)/(ρ(β−1))}, i = 1, 2
⇒ Profit function:
π(w, p) = p(pβ)^{β/(1−β)}·(w1^{ρ/(ρ−1)} + w2^{ρ/(ρ−1)})^{(ρ−1)β/(ρ(β−1))} − ∑_{i=1}^2 w_i·w_i^{1/(ρ−1)}·(pβ)^{1/(1−β)}·(w1^{ρ/(ρ−1)} + w2^{ρ/(ρ−1)})^{(ρ−β)/(ρ(β−1))}
= p^{1/(1−β)}·(w1^{ρ/(ρ−1)} + w2^{ρ/(ρ−1)})^{(ρ−1)β/(ρ(β−1))}·(β^{β/(1−β)} − β^{1/(1−β)})
= p^{1/(1−β)}·(w1^{ρ/(ρ−1)} + w2^{ρ/(ρ−1)})^{(ρ−1)β/(ρ(β−1))}·(1 − β)·β^{β/(1−β)}

i. When β < 1 (decreasing returns to scale), the profit function is as above.
ii. When β = 1 (constant returns to scale), the profit function is undefined.
iii. When β > 1 (increasing returns to scale), checking the second-order conditions shows that what we get above is a local profit minimum, so the expression above is not the profit function.
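The closed forms above can be sanity-checked numerically: the input demands should produce the stated supply, satisfy the first-order conditions, and satisfy the identity π = py − w·x. The parameter values below are arbitrary illustrative choices with β < 1:

```python
# numerical check of the supply, input demand, and profit formulas
# (rho, beta, p, w1, w2 below are arbitrary illustrative values, beta < 1)
rho, beta, p, w1, w2 = 0.5, 0.8, 3.0, 1.0, 2.0

W = w1**(rho/(rho - 1)) + w2**(rho/(rho - 1))
y  = (p*beta)**(beta/(1 - beta)) * W**((rho - 1)*beta/(rho*(beta - 1)))
x1 = w1**(1/(rho - 1)) * (p*beta)**(1/(1 - beta)) * W**((rho - beta)/(rho*(beta - 1)))
x2 = w2**(1/(rho - 1)) * (p*beta)**(1/(1 - beta)) * W**((rho - beta)/(rho*(beta - 1)))
pi = p**(1/(1 - beta)) * W**((rho - 1)*beta/(rho*(beta - 1))) * (1 - beta) * beta**(beta/(1 - beta))

# demands produce the stated supply, satisfy the FOC, and give pi = p*y - w.x
f = (x1**rho + x2**rho)**(beta/rho)
assert abs(f - y) < 1e-9 * y
assert abs(p*beta*(x1**rho + x2**rho)**(beta/rho - 1)*x1**(rho - 1) - w1) < 1e-9
assert abs(p*y - w1*x1 - w2*x2 - pi) < 1e-9
```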

2. Suppose your company needs only two inputs x1 and x 2 to produce y, and the production function is
f (x1, x 2 ) = α x1 + β x 2.

(a) Find your firm’s cost function and conditional input demand function.
Cost minimization: min w1 x1 + w2 x 2 s.t. f (x1, x 2 ) ≥ y
x1≥0,x 2 ≥0
Lagrangian: L = w1 x1 + w2 x2 − λ(α x1 + β x2 − y) − μ1 x1 − μ2 x2, with λ, μ1, μ2 ≥ 0.
Obviously the output constraint binds, y − (α x1 + β x2) = 0, and the FOCs are:
w1 − λ α − μ1 = 0
w2 − λ β − μ2 = 0


i. x1 = 0, x2 > 0 ⇒ μ2 = 0, μ1 ≥ 0 ⇒ λ = w2/β and μ1 = w1 − (α/β)w2 ≥ 0 ⇒ w1/w2 ≥ α/β. So when
w1/w2 > α/β: x1 = 0, x2 = y/β, c(w, y) = (w2/β)y.
ii. x2 = 0, x1 > 0 ⇒ μ1 = 0, μ2 ≥ 0 ⇒ λ = w1/α and μ2 = w2 − (β/α)w1 ≥ 0 ⇒ w1/w2 ≤ α/β. So when
w1/w2 < α/β: x1 = y/α, x2 = 0, c(w, y) = (w1/α)y.
iii. x1 > 0, x2 > 0 ⇒ μ1 = μ2 = 0 ⇒ w1/w2 = α/β. When w1/w2 = α/β,
(x1, x2) ∈ {(x1, x2) | α x1 + β x2 = y, x1 ≥ 0, x2 ≥ 0}, c(w, y) = (w1/α)y = (w2/β)y.

In summary:
x1 = y/α, x2 = 0, c(w, y) = (w1/α)y, when w1/w2 < α/β
x1 = 0, x2 = y/β, c(w, y) = (w2/β)y, when w1/w2 > α/β
α x1 + β x2 = y, x1 ≥ 0, x2 ≥ 0, c(w, y) = (w1/α)y = (w2/β)y, when w1/w2 = α/β
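The corner-solution logic above (use whichever input has the lower cost per unit of output) can be checked numerically. In the sketch below a and b stand in for α and β, and all numbers are illustrative:

```python
# linear technology f = a*x1 + b*x2: the input with the lower cost per unit
# of output (w_i over its coefficient) is used exclusively
# (a, b stand in for alpha, beta; all numbers are illustrative)
def cost(w1, w2, a, b, y):
    return y * min(w1/a, w2/b)

a, b, y = 2.0, 3.0, 12.0
assert cost(1.0, 3.0, a, b, y) == y*1.0/a   # w1/w2 < a/b: only x1 = y/a used
assert cost(4.0, 3.0, a, b, y) == y*3.0/b   # w1/w2 > a/b: only x2 = y/b used

# brute-force check over feasible bundles on the isoquant a*x1 + b*x2 = y
w1, w2 = 1.0, 3.0
best = min(w1*x1 + w2*(y - a*x1)/b
           for x1 in [0.01*k for k in range(0, int(y/a*100) + 1)])
assert best >= cost(w1, w2, a, b, y) - 1e-9
```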

(b) Verify whether the properties of Theorem 3.2 and Theorem 3.3 hold.
Theorem 3.2

cost function:
c(w, y) = (w1/α)y, when w1/w2 < α/β
c(w, y) = (w2/β)y, when w1/w2 > α/β
c(w, y) = (w1/α)y = (w2/β)y, when w1/w2 = α/β

Based on the cost function above, we can verify that:

1. When y = 0, c(w, y) = 0 always holds.
2. Obviously, c(w, y) is always continuous when y ≥ 0.
3. As α, β > 0, ∀w ≫ 0, c(w, y) is strictly increasing and unbounded above in y.
4. When y = 0, c = 0; when y > 0, y/α > 0 and y/β > 0. So c(w, y) is always increasing in w.
5. c(t·w, y) = t·c(w, y) always holds.
6. c(w, y) = y·min{w1/α, w2/β} is the minimum of two linear functions of w, so it is concave in w.
7. Shephard's lemma: when w ≫ 0 and w1/w2 ≠ α/β, c(w, y) = (w1/α)y or (w2/β)y is differentiable in w, and ∂c(w, y)/∂wi = xi(w, y), i = 1, 2. (Note that f here is linear, hence not strictly quasiconcave; at w1/w2 = α/β the cost function is kinked and the conditional demand is not single-valued.)

Theorem 3.3

conditional input demand function:
x1 = y/α, x2 = 0, when w1/w2 < α/β
x1 = 0, x2 = y/β, when w1/w2 > α/β
α x1 + β x2 = y, x1 ≥ 0, x2 ≥ 0, when w1/w2 = α/β

Based on the conditional input demand function above, we can verify that:
1. x(t·w, y) = x(w, y): the conditional input demand is homogeneous of degree zero in w.
2. Away from w1/w2 = α/β, the substitution matrix is
( 0 0 )
( 0 0 ),
which is symmetric and negative semidefinite.


(c) Calculate the elasticity of substitution.


MP1 = α, MP2 = β, MRTS12 = MP1/MP2 = α/β, which is constant, so d ln MRTS12 = 0 and
σ12 = d ln(x2/x1)/d ln MRTS12 → ∞: the two inputs are perfect substitutes.

3. Consider a price-taking firm whose production function is f(x1, x2) = √x1 + √x2.

(a) Derive the long run cost function and the conditional input demand functions.
Cost minimization: min w1 x1 + w2 x 2 s.t. f (x1, x 2 ) ≥ y
x1≥0,x 2 ≥0
As f(x) is strictly increasing and strictly concave, there is a unique solution, it is interior, and it lies on f(x1, x2) = y.
Lagrangian: L = w1 x1 + w2 x2 − λ(√x1 + √x2 − y)
FOC:
∂L/∂x1 = w1 − λ/(2√x1) = 0
∂L/∂x2 = w2 − λ/(2√x2) = 0
∂L/∂λ = √x1 + √x2 − y = 0
⇒ w1/w2 = (x2/x1)^{1/2} ⇒ x2 = x1·(w1/w2)². Substituting into √x1 + √x2 − y = 0:
√x1·(1 + w1/w2) = y ⇒ √x1 = w2 y/(w1 + w2)
⇒ Conditional input demand functions: x1 = (w2 y/(w1 + w2))², x2 = (w1 y/(w1 + w2))².
⇒ Cost function: c(w, y) = w1 x1 + w2 x2 = w1·(w2 y/(w1 + w2))² + w2·(w1 y/(w1 + w2))² = w1 w2 y²/(w1 + w2)
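A quick numerical check of the conditional demands, the cost identity, and Shephard's lemma by finite differences, with illustrative values:

```python
# f(x1, x2) = sqrt(x1) + sqrt(x2): check the conditional demands, the cost
# identity, and Shephard's lemma by finite differences (illustrative values)
w1, w2, y = 2.0, 3.0, 6.0

x1 = (w2*y/(w1 + w2))**2
x2 = (w1*y/(w1 + w2))**2
c  = w1*w2*y**2/(w1 + w2)

assert abs(x1**0.5 + x2**0.5 - y) < 1e-12   # demands hit the target output
assert abs(w1*x1 + w2*x2 - c) < 1e-9        # cost identity

h = 1e-6                                    # dc/dw1 should recover x1
dc = ((w1 + h)*w2*y**2/(w1 + h + w2) - (w1 - h)*w2*y**2/(w1 - h + w2))/(2*h)
assert abs(dc - x1) < 1e-4
```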

(b) Derive the long run profit function, output supply function, and input demand functions.
Profit maximization: max_{x,y≥0} p y − w1 x1 − w2 x2 s.t. f(x1, x2) ≥ y
⇒ max_{x≥0} p·f(x1, x2) − (w1 x1 + w2 x2) = p(√x1 + √x2) − (w1 x1 + w2 x2)
Lagrangian: L = p(√x1 + √x2) − (w1 x1 + w2 x2)
FOC:
∂L/∂x1 = p/(2√x1) − w1 = 0
∂L/∂x2 = p/(2√x2) − w2 = 0
⇒ w1/w2 = (x2/x1)^{1/2} ⇒ x2 = x1·(w1/w2)². Substituting into √x1 + √x2 − y = 0:
√x1·(1 + w1/w2) = y ⇒ x1 = (w2 y/(w1 + w2))². Substituting into p/(2√x1) − w1 = 0:
p(w1 + w2)/(2 w2 y) − w1 = 0
⇒ Supply function: y = p(w1 + w2)/(2 w1 w2)
⇒ Demand functions: xi = (p/(2wi))², i = 1, 2
⇒ Profit function: π(w, p) = p y − w1 x1 − w2 x2 = p²(w1 + w2)/(2w1w2) − p²/(4w1) − p²/(4w2) = p²(w1 + w2)/(4w1w2)
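The profit identity and Hotelling's lemma for this closed-form π can be verified by finite differences, with illustrative values:

```python
# profit function pi(w, p) = p^2 (w1 + w2)/(4 w1 w2): check the profit
# identity and Hotelling's lemma by finite differences (illustrative values)
p, w1, w2 = 4.0, 2.0, 3.0

def pi(p, w1, w2):
    return p**2*(w1 + w2)/(4*w1*w2)

y  = p*(w1 + w2)/(2*w1*w2)     # supply
x1 = (p/(2*w1))**2             # input demands
x2 = (p/(2*w2))**2

assert abs(pi(p, w1, w2) - (p*y - w1*x1 - w2*x2)) < 1e-9

h = 1e-6
assert abs((pi(p + h, w1, w2) - pi(p - h, w1, w2))/(2*h) - y) < 1e-6   # dpi/dp  =  y
assert abs((pi(p, w1 + h, w2) - pi(p, w1 - h, w2))/(2*h) + x1) < 1e-6  # dpi/dw1 = -x1
```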


(c) Verify the solutions you obtained in part (a) and part (b) satisfy their properties mentioned in
class.
Part (a) — Cost function: c(w, y) = w1 w2 y²/(w1 + w2)
Based on the cost function above:
1. When y = 0, c(w, y) = 0 always holds.
2. c(w, y) is always continuous when y ≥ 0 (and w ≫ 0).
3. ∀w ≫ 0, ∂c(w, y)/∂y = 2y w1 w2/(w1 + w2) > 0 for y > 0, so c(w, y) is strictly increasing in y; and obviously c(w, y) is unbounded above in y.
4. c(w, y) = w1 w2 y²/(w1 + w2) = y²/(1/w1 + 1/w2), so c(w, y) is always increasing in w.
5. c(t·w, y) = t² w1 w2 y²/(t(w1 + w2)) = t·w1 w2 y²/(w1 + w2) = t·c(w, y) always holds.
6. ∂²c(w, y)/∂wi² = ∂(y² wj²/(wi + wj)²)/∂wi = −2y² wj²/(wi + wj)³ < 0, i, j = 1, 2, i ≠ j, and the Hessian of c in w (the substitution matrix below) is negative semidefinite, so c is concave in w.
7. Shephard's lemma: f is strictly quasiconcave, and when w ≫ 0, c(w, y) = w1 w2 y²/(w1 + w2) is differentiable in w, with ∂c(w, y)/∂wi = y²·wj²/(wi + wj)² = (wj y/(wi + wj))² = xi(w, y), i = 1, 2.

Part (a) — conditional input demand functions: x1 = (w2 y/(w1 + w2))², x2 = (w1 y/(w1 + w2))²
Based on the conditional input demand functions above:
1. x1(t·w, y) = (t w2 y/(t w1 + t w2))² = (w2 y/(w1 + w2))² = x1(w, y);
x2(t·w, y) = (t w1 y/(t w1 + t w2))² = (w1 y/(w1 + w2))² = x2(w, y). The conditional input demand
functions are homogeneous of degree zero in w.
2. The substitution matrix is
S = ( −2y²w2²/(w1 + w2)³    2y²w1w2/(w1 + w2)³ )
    (  2y²w1w2/(w1 + w2)³  −2y²w1²/(w1 + w2)³ ).
S is obviously symmetric, and since −2y²w2²/(w1 + w2)³ ≤ 0 and |S| = 0, S is negative semidefinite.

p 2(w1 + w2 )
Part (b) — profit function: π (w, p) =
4w1w2
∂π ( p, w) (w1 + w2 )
1. Obviously = 2p ⋅ > 0, increasing in p.
∂p 4w1w2
∂π ( p, w) p2 1
2. = ⋅ (− 2 ) < 0, i = 1,2, decreasing in w.
∂wi 4 wi
2 2
t p (t w1 + t w2 ) p 2(w1 + w2 )
3. π (t w, t p) = =t⋅ = t π (w, p), homogeneous of degree one in
4t 2 w1w2 4w1w2
( p, w).
∂ 2 π (w, p) (w1 + w2 ) ∂ 2 π ( p, w) p2 1
4. = > 0 , = ⋅ > 0, i = 1,2. Convex in ( p, w).
∂p 2 2w1w2 ∂wi2 2 wi3


p 2(w1 + w2 )
5. Obviously π (w, p) = in ( p, w) ≫ 0. As f (x1, x 2 ) = x1 + x 2 is strictly
4w1w2
∂π (w, p) p(w1 + w2 )
concave, it is easy to show that = = y(w, p) and
∂p 2w1w2
∂π (w, p) p2
=− = − xi(w, p). So Hotelling’s lemma holds.
∂wi 4wi2

p(w1 + w2 )
Part (b) — supply function: y(w, p) = & input demand functions:
2w1w2
p 2
xi(w, p) = ( ) , i = 1,2
2wi
t p(t w1 + t w2 ) p(w1 + w2 )
1. y(t w, t p) = = = y(w, p);
2t 2 w1w2 2w1w2
tp 2 p 2
xi(t w, t p) = ( ) =( ) = xi(w, p), i = 1,2. So supply function and input demand
2t wi 2wi
functions are homogeneous of degree zero.
p(w1 + w2 ) ∂y(w, p) w + w2
2. y(w, p) = ⇒ = 1 ≥ 0;
2w1w2 ∂p 2w1w2
p 2 ∂x (w, p) p2
xi(w, p) = ( ) ⇒ i =− ≤ 0, i = 1,2.
2wi ∂wi 2wi3
3. The substitution matrix (the Hessian of π in (p, w1, w2)) is

S = (  (w1 + w2)/(2w1w2)   −p/(2w1²)       −p/(2w2²)   )
    (  −p/(2w1²)            p²/(2w1³)       0           )
    (  −p/(2w2²)            0               p²/(2w2³)   ).

Firstly, S is obviously symmetric. Secondly, the leading principal minors are
(w1 + w2)/(2w1w2) ≥ 0,
[(w1 + w2)/(2w1w2)]·[p²/(2w1³)] − [p/(2w1²)]² = p²(w1 + w2)/(4w1⁴w2) − p²/(4w1⁴) = p²/(4w1³w2) ≥ 0,
|S| = p⁴(w1 + w2)/(8w1⁴w2⁴) − p⁴/(8w1³w2⁴) − p⁴/(8w1⁴w2³) = 0,
so S is positive semidefinite.
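Positive semidefiniteness of S can also be spot-checked numerically: S should be symmetric and its quadratic form nonnegative in arbitrary directions. The prices below are illustrative:

```python
import random

# the Hessian of pi(p, w1, w2) = p^2 (w1 + w2)/(4 w1 w2) should be symmetric
# and positive semidefinite (prices below are illustrative)
p, w1, w2 = 3.0, 1.5, 2.5
S = [[(w1 + w2)/(2*w1*w2), -p/(2*w1**2),   -p/(2*w2**2)],
     [-p/(2*w1**2),        p**2/(2*w1**3),  0.0],
     [-p/(2*w2**2),        0.0,             p**2/(2*w2**3)]]

assert all(S[i][j] == S[j][i] for i in range(3) for j in range(3))

random.seed(0)
for _ in range(1000):                        # z'Sz >= 0 in random directions
    z = [random.uniform(-1, 1) for _ in range(3)]
    quad = sum(z[i]*S[i][j]*z[j] for i in range(3) for j in range(3))
    assert quad >= -1e-12
```

Since π is homogeneous of degree one in (p, w), its gradient is homogeneous of degree zero and S·(p, w1, w2)ᵀ = 0, which is why |S| = 0.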


4. At current output and input prices, and at the current level of the fixed input, a firm's short-run profit-
maximizing level of output is y* = 20. If
smc(y*, w, w̄, x̄ ) = 15,
savc(y*, w, w̄, x̄ ) = 8,
safc(y*, w, w̄, x̄ ) = 9,
what is the most you can say about the firm’s level of profit in the short run? (smc: short-run
marginal cost, savc: short-run average variable cost, safc: short-run average fixed cost)

When profit is maximized, p = smc = 15, so the profit is


π = p y − c(y) = p y − (savc + safc)·y = 15 × 20 − (8 + 9) × 20 = −40. In the short run the firm
makes a loss of 40, which is better than not producing.
If the firm had not produced any output, then y = 0 ⇒ π = 0 − sfc = −180: it would have made a
loss of the entire fixed cost, sfc = safc · y* = 9 × 20 = 180.
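The arithmetic in Python form:

```python
# short-run profit at y* = 20 with p = smc = 15, savc = 8, safc = 9
y_star, p, savc, safc = 20, 15, 8, 9

profit_producing = p*y_star - (savc + safc)*y_star
assert profit_producing == -40          # loss of 40 when producing

sfc = safc*y_star                       # total fixed cost = 9 * 20 = 180
profit_shutdown = -sfc
assert profit_shutdown == -180          # loss of 180 when shut down
assert profit_producing > profit_shutdown
```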

5. Prove Hotelling’s lemma:


∂π/∂p = y(p, w),  ∂π/∂wi = −xi(p, w),  i = 1, 2, ⋯, n.
(Hint: use the first-order conditions you obtain from the profit maximization problem (the first version)
and the fact that y(p, w) = f(x(p, w)).)

Profit maximization problem: max_{x,y≥0} p y − w1 x1 − w2 x2 s.t. f(x1, x2) ≥ y

L(y, x1, x2, λ) = p y − w1 x1 − w2 x2 − λ(y − f(x1, x2))
FOC: ∂L/∂x1 = −w1 + λ·∂f(x1, x2)/∂x1 = 0; ∂L/∂x2 = −w2 + λ·∂f(x1, x2)/∂x2 = 0;
∂L/∂y = p − λ = 0; ∂L/∂λ = y − f(x1, x2) = 0.
From ∂L/∂y = 0 we have λ* = p, so the FOCs give −wi + p·∂f/∂xi = 0, i = 1, 2. Then
π(w, p) = p y(w, p) − w·x(w, p) = p f(x1*(w, p), x2*(w, p)) − w1 x1*(w, p) − w2 x2*(w, p).

∂π/∂p = f(x1*, x2*) + p[(∂f/∂x1)(∂x1*/∂p) + (∂f/∂x2)(∂x2*/∂p)] − w1(∂x1*/∂p) − w2(∂x2*/∂p)
= y + (p·∂f/∂x1 − w1)(∂x1*/∂p) + (p·∂f/∂x2 − w2)(∂x2*/∂p)
= y(p, w).

∂π/∂wi = −xi* + p[(∂f/∂xi)(∂xi*/∂wi) + (∂f/∂xj)(∂xj*/∂wi)] − wi(∂xi*/∂wi) − wj(∂xj*/∂wi)
= −xi* + (p·∂f/∂xi − wi)(∂xi*/∂wi) + (p·∂f/∂xj − wj)(∂xj*/∂wi)
= −xi*(p, w),  i, j = 1, 2, i ≠ j.
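The envelope argument can be illustrated with a hypothetical one-input technology f(x) = ln(1 + x), not taken from the problem set: maximizing p·ln(1 + x) − wx gives x*(p, w) = p/w − 1 (for p > w) and π(p, w) = p·ln(p/w) − p + w, so finite differences on π should recover y* and −x*:

```python
import math

def profit(p, w):
    # value function of max_x p*ln(1+x) - w*x, valid for p > w
    return p*math.log(p/w) - p + w

p, w = 5.0, 2.0
x_star = p/w - 1                 # from the FOC p/(1+x) = w
y_star = math.log(1 + x_star)    # output at the optimum

h = 1e-6
assert abs((profit(p + h, w) - profit(p - h, w))/(2*h) - y_star) < 1e-6   # dpi/dp = y
assert abs((profit(p, w + h) - profit(p, w - h))/(2*h) + x_star) < 1e-6   # dpi/dw = -x
```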


6. Prove the Expected Utility Theorem. (You can find the complete proof in the textbook. Writing it
down once will be useful for understanding the use of the axioms that stand behind it.)

Proof: let L̄ be a best and L̲ a worst lottery in ℒ, so that L̄ ≿ L ≿ L̲ for all L ∈ ℒ; assume L̄ ≻ L̲
(otherwise any constant function represents ≿).
i. L ≻ L′ and α ∈ (0,1) ⇒ L ≻ αL + (1 − α)L′ ≻ L′.
By the independence axiom, L = αL + (1 − α)L ≻ αL + (1 − α)L′ ≻ αL′ + (1 − α)L′ = L′,
so L ≻ αL + (1 − α)L′ ≻ L′.

ii. For α, β ∈ [0,1]: β > α ⟺ βL̄ + (1 − β)L̲ ≻ αL̄ + (1 − α)L̲.

(⇒) β > α ⇒ βL̄ + (1 − β)L̲ ≻ αL̄ + (1 − α)L̲:
Let γ = (β − α)/(1 − α) ∈ (0,1]; then βL̄ + (1 − β)L̲ = γL̄ + (1 − γ)(αL̄ + (1 − α)L̲). Firstly,
L̄ ≻ αL̄ + (1 − α)L̲ by step i; secondly, again by step i, γL̄ + (1 − γ)(αL̄ + (1 − α)L̲) ≻ αL̄ + (1 − α)L̲. Therefore,
βL̄ + (1 − β)L̲ ≻ αL̄ + (1 − α)L̲.
(⇐) βL̄ + (1 − β)L̲ ≻ αL̄ + (1 − α)L̲ ⇒ β > α:
This is equivalent to: β ≤ α ⇒ βL̄ + (1 − β)L̲ ≾ αL̄ + (1 − α)L̲.
Firstly, when β = α, βL̄ + (1 − β)L̲ = αL̄ + (1 − α)L̲ ⇒ βL̄ + (1 − β)L̲ ∼ αL̄ + (1 − α)L̲.
Secondly, when β < α, the argument for (⇒) with the roles of α and β exchanged gives αL̄ + (1 − α)L̲ ≻ βL̄ + (1 − β)L̲.

iii. ∀L ∈ ℒ, there is a unique αL ∈ [0,1] such that αL L̄ + (1 − αL)L̲ ∼ L.

∀L ∈ ℒ, let A = {α ∈ [0,1] | αL̄ + (1 − α)L̲ ≿ L} and B = {α ∈ [0,1] | αL̄ + (1 − α)L̲ ≾ L}. As
≿ is continuous, A and B are closed; as ≿ is complete, every α ∈ [0,1] belongs to at least one of the two
sets. Since both sets are nonempty and closed and [0,1] is connected, A ∩ B ≠ ∅.
Let αL ∈ A ∩ B; then αL L̄ + (1 − αL)L̲ ≿ L and αL L̄ + (1 − αL)L̲ ≾ L ⇒ αL L̄ + (1 − αL)L̲ ∼ L.
Suppose A ∩ B had two elements αL ≠ αL′. WLOG let αL > αL′;
then by step ii, L ∼ αL L̄ + (1 − αL)L̲ ≻ αL′ L̄ + (1 − αL′)L̲ ∼ L, which is impossible.
So there is a unique αL such that αL L̄ + (1 − αL)L̲ ∼ L, for every L ∈ ℒ.

iv. The function U: ℒ → ℝ that assigns U(L) = αL for all L ∈ ℒ represents the preference relation ≿.

∀L ∈ ℒ, assign U(L) = αL such that L ∼ αL L̄ + (1 − αL)L̲.
Then, by step ii, L ≿ L′ ⟺ αL L̄ + (1 − αL)L̲ ≿ αL′ L̄ + (1 − αL′)L̲ ⟺ αL ≥ αL′ ⟺ U(L) ≥ U(L′).

v. U(L) = αL is linear and therefore has the expected utility form.

Let L ∼ αL L̄ + (1 − αL)L̲ and L′ ∼ αL′ L̄ + (1 − αL′)L̲. By the independence axiom,
βL + (1 − β)L′ ∼ β(αL L̄ + (1 − αL)L̲) + (1 − β)(αL′ L̄ + (1 − αL′)L̲)
= (βαL + (1 − β)αL′)L̄ + (β(1 − αL) + (1 − β)(1 − αL′))L̲
= (βαL + (1 − β)αL′)L̄ + (1 − βαL − (1 − β)αL′)L̲.
So βL + (1 − β)L′ ∼ (βαL + (1 − β)αL′)L̄ + (1 − βαL − (1 − β)αL′)L̲ ⇒ U(βL + (1 − β)L′) = βαL + (1 − β)αL′
⇒ U(βL + (1 − β)L′) = βU(L) + (1 − β)U(L′), which means U(L) = αL is linear and has the
expected utility form.
In summary, the Expected Utility Theorem holds.
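Step v can be illustrated numerically: normalizing u(worst) = 0 and u(best) = 1, αL is just the expected utility of L, and it is linear in probability mixtures. The outcome utilities and lotteries below are arbitrary:

```python
# step v numerically: normalize u(worst) = 0, u(best) = 1; then alpha_L is
# the expected utility of L and is linear in probability mixtures
# (the outcome utilities and lotteries below are arbitrary)
u = [0.0, 0.35, 0.8, 1.0]

def alpha(L):
    # alpha_L = E[u] under lottery L (probabilities over the 4 outcomes)
    return sum(p_i*u_i for p_i, u_i in zip(L, u))

L1 = [0.1, 0.4, 0.3, 0.2]
L2 = [0.25, 0.25, 0.25, 0.25]
beta = 0.6
mix = [beta*a + (1 - beta)*b for a, b in zip(L1, L2)]

assert abs(alpha(mix) - (beta*alpha(L1) + (1 - beta)*alpha(L2))) < 1e-12
```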
