pin@unive.it
http://venus.unive.it/pin
August, 2005
Rangarajan K. Sundaram, 1996, “A First Course in
Optimization Theory”, Cambridge University Press:
Solutions of (some) exercises
Appendix C (pp. 330-347): Structures on Vector Spaces.
1.
(1/4)( ‖x+y‖² − ‖x−y‖² )
= (1/4)( ⟨x+y, x+y⟩ − ⟨x−y, x−y⟩ )
= (1/4)( ⟨x,x⟩ + 2⟨x,y⟩ + ⟨y,y⟩ − (⟨x,x⟩ − 2⟨x,y⟩ + ⟨y,y⟩) )
= (1/4)( 4⟨x,y⟩ )
= ⟨x,y⟩ . □
2.
‖x+y‖² + ‖x−y‖² = ‖x+y‖² − ‖x−y‖² + 2‖x−y‖²
(Polarization Identity) = 4⟨x,y⟩ + 2‖x−y‖²
= 2⟨x,x⟩ + 2⟨y,y⟩
= 2( ‖x‖² + ‖y‖² ) □
3. Let’s take e.g. ℝ² with the ‖·‖₁ norm, x = (1, 0) and y = (0, 2):
‖x+y‖₁² + ‖x−y‖₁² = 3² + 3² = 18 ≠ 10 = 2·(1² + 2²) = 2( ‖x‖₁² + ‖y‖₁² ) . □
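As a quick numerical sanity check (a sketch, not part of the original solution), the failure of the parallelogram law for the ℓ¹ norm can be verified directly:

```python
def norm1(v):
    # l1 norm: sum of the absolute values of the components
    return sum(abs(c) for c in v)

x, y = (1.0, 0.0), (0.0, 2.0)
s = tuple(a + b for a, b in zip(x, y))   # x + y
d = tuple(a - b for a, b in zip(x, y))   # x - y

lhs = norm1(s) ** 2 + norm1(d) ** 2          # parallelogram law, left side
rhs = 2 * (norm1(x) ** 2 + norm1(y) ** 2)    # parallelogram law, right side
print(lhs, rhs)  # 18.0 10.0
```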
4. a) {xₙ} → x in (V, d₁) if and only if:
lim_{n→∞} d₁(xₙ, x) = 0 ⇒ lim_{n→∞} d₂(xₙ, x) ≤ lim_{n→∞} c₁·d₁(xₙ, x) = c₁·lim_{n→∞} d₁(xₙ, x) = 0
since d₂(xₙ, x) ≥ 0, ∀ n: lim_{n→∞} d₂(xₙ, x) = 0 ;
b) A is open in (V, d₁) if and only if:
∀ x ∈ A, ∃ r > 0 s.t. B₁(x, r) = {y ∈ V | d₁(x, y) < r} ⊂ A
⇒ ∀ x ∈ A, ∃ r/c₂ > 0 s.t. B₂(x, r/c₂) = {y ∈ V | c₂·d₂(x, y) < r} ⊆ B₁(x, r) ⊂ A
the converse directions of (a) and (b) follow by exchanging the roles of d₁ and d₂. □
5. Consider ℝ with the usual metric d(x, y) ≡ |x − y|
and with the other one defined as δ(x, y) ≡ min{d(x, y), 1};
δ is a metric; to prove the triangle inequality consider that:
δ(x, z) = min{d(x, z), 1} ≤ min{d(x, y) + d(y, z), 1}
≤ min{d(x, y) + d(y, z), 1 + d(x, y), 1 + d(y, z), 2}
= min{d(x, y), 1} + min{d(y, z), 1} = δ(x, y) + δ(y, z) ;
∀ B_δ(x, r) ⊂ ℝ, B_d(x, r) ⊆ B_δ(x, r),
similarly ∀ B_d(x, r) ⊂ ℝ, B_δ(x, min{r, 1/2}) ⊆ B_d(x, r),
so they generate the same open sets;
nevertheless ∄ c ∈ ℝ₊ such that d(x, y) ≤ c·δ(x, y), ∀ x, y ∈ ℝ:
suppose by contradiction it exists,
then d(0, 2c) = 2c ≤ c·δ(0, 2c) is impossible, since δ(x, y) ≤ 1, ∀ x, y ∈ ℝ. □
6. d∞(x, y) = max{|x₁ − y₁|, |x₂ − y₂|} is equivalent to any
d_p(x, y) = ( |x₁ − y₁|^p + |x₂ − y₂|^p )^{1/p} , with p ∈ ℕ, p ≥ 1, because:
d∞ ≤ d_p ∧ d_p ≤ 2^{1/p}·d∞
all the metrics are equivalent by symmetry and transitivity (see exercise 7). □
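The sandwich d∞ ≤ d_p ≤ 2^{1/p}·d∞ can be spot-checked on random points of ℝ² (a sketch, not part of the original solution):

```python
import random

def d_inf(x, y):
    return max(abs(x[0] - y[0]), abs(x[1] - y[1]))

def d_p(x, y, p):
    return (abs(x[0] - y[0]) ** p + abs(x[1] - y[1]) ** p) ** (1.0 / p)

random.seed(0)
ok = True
for _ in range(1000):
    x = (random.uniform(-5, 5), random.uniform(-5, 5))
    y = (random.uniform(-5, 5), random.uniform(-5, 5))
    for p in (1, 2, 3, 7):
        # sandwich: d_inf <= d_p <= 2^(1/p) * d_inf
        ok = ok and d_inf(x, y) <= d_p(x, y, p) + 1e-12
        ok = ok and d_p(x, y, p) <= 2 ** (1.0 / p) * d_inf(x, y) + 1e-9
print(ok)  # True
```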
7. If d₁ and d₂ are equivalent in V, ∃ b₁, b₂ ∈ ℝ such that ∀ x, y ∈ V:
d₂(x, y)/b₁ ≤ d₁(x, y) ≤ b₂·d₂(x, y)
similarly if d₂ and d₃ are equivalent in V, ∃ c₁, c₂ ∈ ℝ such that ∀ x, y ∈ V:
d₃(x, y)/c₁ ≤ d₂(x, y) ≤ c₂·d₃(x, y)
hence ∃ (b₁·c₁), (b₂·c₂) ∈ ℝ such that ∀ x, y ∈ V:
d₃(x, y)/(b₁·c₁) ≤ d₁(x, y) ≤ b₂·c₂·d₃(x, y)
⇒ d₁ and d₃ are equivalent. □
8.
ρ(x, y) = { 1 if x ≠ y ; 0 if x = y }
Positivity and Symmetry come from the definition;
for the Triangle inequality consider the exhaustive cases:
x = y = z ⇒ ρ(x, z) = 0 = 0 + 0 = ρ(x, y) + ρ(y, z)
x = y ≠ z ⇒ ρ(x, z) = 1 = 0 + 1 = ρ(x, y) + ρ(y, z)
x ≠ y = z ⇒ symmetric
x = z ≠ y ⇒ ρ(x, z) = 0 ≤ 1 + 1 = ρ(x, y) + ρ(y, z)
all different ⇒ ρ(x, z) = 1 ≤ 1 + 1 = ρ(x, y) + ρ(y, z) □
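The case analysis above can also be exhausted by brute force over three labelled points, which covers every equality pattern:

```python
import itertools

def rho(a, b):
    # the discrete metric
    return 0 if a == b else 1

points = [0, 1, 2]
violations = [(x, y, z)
              for x, y, z in itertools.product(points, repeat=3)
              if rho(x, z) > rho(x, y) + rho(y, z)]
print(violations)  # []
```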
9. We generalize from ℝ to any V;
every subset X in (V, ρ) is closed because X^C is open (every subset is open in (V, ρ));
in (V, ρ) the only convergent sequences are those constant from a certain point on
(not only constant sequences as stated at page 337);
consider a finite subset A of V: every sequence in A has a constant (hence converging)
subsequence;
consider now an infinite subset B of V: (by the Axiom of Choice) we can construct
a sequence with all different elements, whose subsequences all have all different
elements,
⇒ compact subsets in (V, ρ) are all and only the finite ones. □
10. Since every subset X in (V, ρ) is closed (exercise 9), cl(X) = X, ∀ X ⊆ V, and
∀ X ⊆ V, ∄ A ⊂ X s.t. A ≠ X and cl(A) = X,
not even for V itself, so that if V is uncountable, there exists no countable subset of
V whose closure is V. □
11. The correct statement of Corollary C.23 (page 340) is:
A function f : V₁ → V₂ is continuous if and only if, ∀ open set U₂ ⊆ V₂,
f⁻¹(U₂) = {x ∈ V₁ | f(x) ∈ U₂} is an open set in V₁.
(The inverse image of every open set is an open set.)
Since every subset of (ℝ, ρ) is open, every function (ℝ, ρ) → (ℝ, d) and (ℝ, ρ) →
(ℝ, ρ) is continuous, that is F = G = {f | f : ℝ → ℝ};
clearly not every function (ℝ, d) → (ℝ, d) is continuous (here continuity has its
standard meaning), then H ⊂ F = G and H ≠ F. □
12. See exercise 11 for the correct statement of Corollary C.23 (page 340);
the open sets in (ℝⁿ, d∞) and (ℝⁿ, d₂) coincide, because:
∀ B_{d∞}(x, r) = {y ∈ ℝⁿ | max_{i∈{1,…n}} |yᵢ − xᵢ| < r}
∃ B_{d₂}(x, r) = {y ∈ ℝⁿ | ( Σᵢ₌₁ⁿ (yᵢ − xᵢ)² )^{1/2} < r} ⊆ B_{d∞}(x, r)
and
∀ B_{d₂}(x, r) = {y ∈ ℝⁿ | ( Σᵢ₌₁ⁿ (yᵢ − xᵢ)² )^{1/2} < r}
∃ q ≡ √(r²/n) s.t. B_{d∞}(x, q) = {y ∈ ℝⁿ | max_{i∈{1,…n}} |yᵢ − xᵢ| < q} ⊆ B_{d₂}(x, r) ;
hence also the continuous functions (ℝⁿ, d∞) → (ℝ, d) and the continuous functions
(ℝⁿ, d₂) → (ℝ, d) coincide. □
13. [The p-adic valuation here is not well defined:
if r = 0, a = 0 and any n would do;
if r = 1 there are no a, b, n such that 1 = p^{−n}·(a/b), with a not a multiple of p.
For a survey: http://en.wikipedia.org/wiki/P-adic_number ]
14. Since V with the discrete topology is metrizable from (V, ρ), topological
compactness is equivalent to compactness in the sequential sense (page 343), hence
exercise 9 proves that the compact subsets in (V, ρ) are all and only the finite
ones. □
15. As seen in exercise 11, every such function is continuous. □
16.
d[(x, y), (x′, y′)] = ( d₁(x, y)² + d₂(x′, y′)² )^{1/2}
positivity and symmetry are straightforward from positivity of d₁ and d₂;
triangle inequality:
d[(x, z), (x′, z′)] = ( d₁(x, z)² + d₂(x′, z′)² )^{1/2}
≤ ( (d₁(x, y) + d₁(y, z))² + d₂(x′, z′)² )^{1/2}
≤ ( (d₁(x, y) + d₁(y, z))² + (d₂(x′, y′) + d₂(y′, z′))² )^{1/2}
= ( d₁(x, y)² + 2 d₁(x, y) d₁(y, z) + d₁(y, z)² + d₂(x′, y′)² + 2 d₂(x′, y′) d₂(y′, z′) + d₂(y′, z′)² )^{1/2}
= ( d₁(x, y)² + d₂(x′, y′)² + d₁(y, z)² + d₂(y′, z′)² + 2( d₁(x, y) d₁(y, z) + d₂(x′, y′) d₂(y′, z′) ) )^{1/2}
≤ ( d₁(x, y)² + d₂(x′, y′)² + d₁(y, z)² + d₂(y′, z′)² + …
… + 2( d₁(x, y) d₁(y, z) + d₁(x, y) d₂(y′, z′) + d₂(x′, y′) d₁(y, z) + d₂(x′, y′) d₂(y′, z′) ) )^{1/2}
= ( d₁(x, y)² + d₂(x′, y′)² + d₁(y, z)² + d₂(y′, z′)² + 2( d₁(x, y) + d₂(x′, y′) )( d₁(y, z) + d₂(y′, z′) ) )^{1/2}
since √( a + b + 2√(ab) ) = √a + √b
= ( d₁(x, y)² + d₂(x′, y′)² )^{1/2} + ( d₁(y, z)² + d₂(y′, z′)² )^{1/2}
= d[(x, y), (x′, y′)] + d[(y, z), (y′, z′)] . □
17. A base of a metric space (X, d) is the one of its balls B(x, r);
every ball in (X, d₁) × (Y, d₂) = (X × Y, (d₁² + d₂²)^{1/2}) is of the form:
B((x*, y*), r) = {(x, y) ∈ X × Y | (d₁(x, x*)² + d₂(y, y*)²)^{1/2} ≤ r}
⊆ {(x, y) ∈ X × Y | max{d₁(x, x*), d₂(y, y*)} ≤ r} = B(x*, r) × B(y*, r)
so for every open set in (X, d₁) × (Y, d₂) there is a couple of open sets, one in (X, d₁)
and one in (Y, d₂), whose product contains it;
moreover
B(x*, r₁) × B(y*, r₂) = {(x, y) ∈ X × Y | d₁(x, x*) ≤ r₁ ∧ d₂(y, y*) ≤ r₂}
⊆ {(x, y) ∈ X × Y | (d₁(x, x*)² + d₂(y, y*)²)^{1/2} ≤ r₁ + r₂} = B((x*, y*), r₁ + r₂)
so for every couple of open sets in (X, d₁) and (Y, d₂) there is an open set in the
product space containing their product;
then the two topologies coincide. □
18. [(a) is not hard only for f compact ⇒ f sequentially compact,
the other way round and (b) are really non trivial;
see e.g.:
http://www-history.mcs.st-and.ac.uk/~john/MT4522/Lectures/L22.html ]
19. f topologically continuous ⇒ f sequentially continuous:
consider xₙ → x ∈ V, and an open O_{f(x)} containing f(x) ∈ V′;
f⁻¹(O_{f(x)}) contains x and all of the xₙ after a certain n₀,
then all of the f(xₙ), after the same n₀, are in O_{f(x)},
so f is sequentially continuous;
f sequentially continuous ⇒ f topologically continuous:
non trivial. . . □
20. a) 1) Clearly ∅, [0, 1] ∈ τ;
2) if O₁, O₂ ∈ τ, then O₁^C and O₂^C are finite,
but then so is their union O₁^C ∪ O₂^C = (O₁ ∩ O₂)^C ⇒ O₁ ∩ O₂ ∈ τ;
3) if {O_α}_{α∈A} ⊆ τ, then O_α^C is finite ∀ α ∈ A,
but then so is their intersection ∩_{α∈A} O_α^C = (∪_{α∈A} O_α)^C ⇒ ∪_{α∈A} O_α ∈ τ;
b) suppose ∃ x ∈ [0, 1] with a countable base {B_i}_{i∈ℕ};
since B_i^C is finite ∀ i ∈ ℕ, ∪_{i∈ℕ} B_i^C is countable;
since [0, 1] is uncountable, ∃ y ≠ x, y ∈ [0, 1], such that y ∉ ∪_{i∈ℕ} B_i^C,
⇒ y ∈ B_i ∀ i ∈ ℕ;
{y}^C is then an open set containing x in which no element of the base is contained. □
Chapter 3 (pp. 90-99): Existence of Solutions.
1. As a counterexample consider f : (0, 1) → ℝ such that f(x) = 1/x :
sup_{(0,1)} f(x) = lim_{x→0} 1/x = +∞. □
2. It can be shown (e.g. by construction) that in ℝ (in any completely ordered set)
sup and inf of any finite set coincide respectively with max and min.
If D ⊂ ℝⁿ is finite, so is f(D) = {y ∈ ℝ | ∃ x ∈ D s.t. f(x) = y}.
The result is implied by the Weierstrass theorem because every finite set A is
compact, i.e. every sequence in A has a constant (hence converging) subsequence. □
3. a) Consider x̄ = max{D}, which exists because D is a compact subset of ℝ;
f(x̄) is the desired maximum, because ∄ x ∈ D such that x > x̄,
⇒ ∄ x ∈ D such that f(x) > f(x̄);
b) consider D ≡ {(x, y) ∈ ℝ²₊ | x + y = 1} ⊂ ℝ²,
D is the closed segment from (0, 1) to (1, 0), hence it is compact;
there are no two points in D which are ordered by the ≥ partial ordering,
hence every function D → ℝ is nondecreasing;
consider:
f(x, y) = { 1/x if x ≠ 0 ; 0 if x = 0 }
sup f on D is lim_{x→0⁺} f(x, y) = lim_{x→0⁺} 1/x = +∞. □
4. A finite set is compact, because every sequence has a constant (hence converging)
subsequence.
Every function from a finite set is continuous, because we are dealing with the
discrete topology, and every subset is open (see exercise 11 of Appendix C for the
correct statement of Corollary C.23 (page 340)).
We can take a subset A of cardinality k in ℝⁿ and construct a one-to-one function
assigning to every element of A a different element in ℝ.
A compact and convex subset A ⊂ ℝⁿ cannot have a finite number k ≥ 2 of
elements (because otherwise (Σ_{a∈A} a)/k is different from every a ∈ A but should
be in A).
Suppose A is convex and ∃ a, b ∈ A such that f(a) ≠ f(b), then f|_{[a,b]}(λ) =
f(λa + (1 − λ)b) is a restriction of f : A → ℝ but can be also considered as a
function f|_{[a,b]} : [0, 1] → ℝ.
By continuity every element of [f(a), f(b)] ⊂ ℝ must be in the codomain of f|_{[a,b]}
(the rigorous proof is long, see Th. 1.69, page 60), which is then not finite, so that
neither is the codomain of f. □
5. Consider sup(f): it cannot be less than 1; if it is 1, then max(f) = f(0).
Suppose sup(f) > 1; then, by continuity, we can construct a sequence
{xₙ ∈ ℝ₊ | f(xₙ) = sup(f) − (sup(f) − 1)/n}; moreover, since lim_{x→∞} f(x) = 0,
∃ x̄ s.t. ∀ x > x̄, f(x) < 1.
{xₙ} is then bounded in the compact set [0, x̄], hence a subsequence must converge
to some x̂; f(x̂) is however sup(f) and then f(x̂) = max(f).
f(x) = e^{−x}, restricted to ℝ₊, satisfies the conditions but has no minimum. □
6. Continuity (see exercise 11 of Appendix C for the correct statement of Corollary
C.23 (page 340)) is invariant under composition, i.e. the composition of continuous
functions is still continuous.
Suppose g : V₁ → V₂ and f : V₂ → V₃ are continuous; if A ⊂ V₃ is open then, by
continuity of f, so is f⁻¹(A) ⊂ V₂, and, by continuity of g, so is g⁻¹(f⁻¹(A)) ⊂ V₁.
g⁻¹ ∘ f⁻¹ : V₃ → V₁ is the inverse image map of f ∘ g : V₁ → V₃, which is then also
continuous.
We are in the conditions of the Weierstrass theorem. □
7.
g(x) = { −1 if x ≤ 0 ; x if x ∈ (0, 1) ; 1 if x ≥ 1 } ,  f(x) = e^{−x}
arg max(f) = 0, max(f) = e⁰ = 1 and also sup(f ∘ g) = 1, which is however never
attained. □
8.
min p · x subject to x ∈ D ≡ {y ∈ ℝⁿ₊ | u(y) ≥ ū}
where u : ℝⁿ₊ → ℝ is continuous and p ≫ 0
a) p · x is a linear, and hence continuous, function of x;
there are two possibilities: (i) u is nondecreasing in x, (ii) u is not;
(i) D is not compact,
we can however consider a utility level U and define W_U ≡ {y ∈ ℝⁿ₊ | u(y) ≤ U},
by continuity of u, W_U is compact,
for U large enough D ∩ W_U is not empty and the minimum can be found in it;
(ii) for the components of x where u is decreasing,
maximal utility may be at 0,
so D may be compact; if not we can proceed as in (i);
b) if u is not continuous we could have no minimum even if it is nondecreasing
(see exercise 3);
if ∃ pᵢ < 0, and u is nondecreasing in xᵢ, the problem minimizes for xᵢ → +∞. □
9.
max p · g(x) − w′ · x s.t. x ∈ ℝⁿ₊
with p > 0, g continuous, w ≫ 0
a solution is not guaranteed because ℝⁿ₊ is not compact. □
10.
F(p, w) = {(x, l) ∈ ℝⁿ⁺¹₊ | p′x ≤ w(H − l), l ≤ H}
F(p, w) compact ⇒ p ≫ 0 (by contradiction):
if ∃ pᵢ ≤ 0, the constraint could be satisfied also at the limit xᵢ → +∞,
and F would not be compact;
p ≫ 0 ⇒ F(p, w) compact:
F is closed because the inequalities are not strict,
l is bounded in [0, H],
∀ i ∈ {1, . . . n}, xᵢ is surely bounded in [0, wH/pᵢ]. □
11.
max U(y(φ)) subject to φ ∈ {ψ ∈ ℝᴺ | p · ψ ≤ 0, y(ψ) ≥ 0}
where y_s(φ) = ω_s + Σᵢ₌₁ᴺ φᵢ z_{is}, U : ℝ^S₊ → ℝ is continuous and strictly increasing
(y ≫ y′ in ℝ^S₊ if yᵢ ≥ y′ᵢ ∀ i ∈ {1, . . . , S} and at least one inequality is strict).
Arbitrage: ∃ φ ∈ ℝᴺ such that p · φ ≤ 0 and Z′φ ≫ 0.
A solution exists ⇐⇒ no arbitrage
(⇒) (by contraposition: (A ⇒ B) ⇔ (¬B ⇒ ¬A))
suppose ∃ ψ ∈ ℝᴺ such that p · ψ ≤ 0 and Z′ψ ≫ 0, then any multiple k·ψ of ψ
has the same property,
y(k·ψ) increases linearly with k and then also U(y(k·ψ)) increases with k,
so max U(y(k·ψ)) is not bounded above;
(⇐) ∀ ψ ∈ ℝᴺ such that p · ψ ≤ 0 there are two possibilities:
either Σᵢ₌₁ᴺ ψᵢ z_{is} = 0 ∀ s ∈ {1, . . . S},
or ∃ s ∈ {1, . . . S} such that Σᵢ₌₁ᴺ ψᵢ z_{is} < 0;
in the first case the function k ↦ U(y(k·ψ)) : ℝ → ℝ is constant,
then a maximum exists;
in the second case the constraint y(φ) ≥ 0 bounds the feasible set,
and by the Weierstrass theorem a solution exists. □
12.
max W( u₁(x₁, h(x)), . . . uₙ(xₙ, h(x)) )
with h(x) ≥ 0, (x, x₁, . . . xₙ) ∈ [0, w]ⁿ⁺¹, w − x = Σᵢ₌₁ⁿ xᵢ
a sufficient condition for continuity is that W, the uᵢ and h are continuous;
the problem is moreover well defined only if ∃ x ∈ [0, w] such that h(x) ≥ 0;
the simplex Δ_w ≡ {(x, x₁, . . . xₙ) ∈ [0, w]ⁿ⁺¹ | x + Σᵢ₌₁ⁿ xᵢ = w} is closed and bounded;
if h(x) is continuous, H = {(x, x₁, . . . xₙ) ∈ [0, w]ⁿ⁺¹ | h(x) ≥ 0} is closed because the
inequality is not strict;
then the feasible set Δ_w ∩ H is closed and bounded, being the intersection of closed
and bounded sets. □
13.
max π(x) = max_{x∈ℝ₊} ( x·p(x) − c(x) )
with p : ℝ₊ → ℝ₊ and c : ℝ₊ → ℝ₊ continuous, c(0) = 0 and p(·) decreasing;
a) ∃ x* > 0 such that p(x*) = 0, then, ∀ x > x*, π(x) ≤ π(0) = 0,
the solution can then be found in the compact set [0, x*] ⊂ ℝ₊;
b) now ∀ x > x*, π(x) = π(0) = 0, as before;
c) consider c(x) = (p̄/2)·x, so that π(x) = (p̄/2)·x and the maximization problem is
not bounded above. □
14. a)
max v(c(1), c(2)) subject to c(1) ≫ 0, c(2) ≫ 0, p(1)·c(1) + (p(2)/(1+r))·c(2) ≤ W₀
b) we can consider, in ℝ²ⁿ, the arrays c = (c(1), c(2)) and p = (p(1), p(2)/(1+r)); the
conditions
c(1) ≫ 0, c(2) ≫ 0, p(1)·c(1) + (p(2)/(1+r))·c(2) ≤ W₀
become
c ≫ 0, p · c ≤ W₀
we are in the conditions of example 3.6 (pages 92-93), and p ≫ 0 if and only if
p(1) ≫ 0 and p(2) ≫ 0. □
15.
max ( π(x₁) + π(x₂) + π(x₃) ) subject to
0 ≤ x₁ ≤ y₁
0 ≤ x₂ ≤ f(y₁ − x₁)
0 ≤ x₃ ≤ f(f(y₁ − x₁) − x₂)
let's call A the subset of ℝ³₊ that satisfies the conditions; A is compact if it is closed
and bounded (Th. 1.21, page 23).
It is bounded because
A ⊆ [0, y₁] × [0, max f([0, y₁])] × [0, max f([0, max f([0, y₁])])] ⊂ ℝ³₊
and the maxima exist because of continuity.
Consider (x̂₁, x̂₂, x̂₃) ∈ A^c in ℝ³₊; then (if we reasonably suppose that f(0) = 0),
because of continuity, at least one of the following holds:
x̂₁ > y₁ , x̂₂ > f(y₁ − x̂₁) or x̂₃ > f(f(y₁ − x̂₁) − x̂₂).
If we take r = max{x̂₁ − y₁, x̂₂ − f(y₁ − x̂₁), x̂₃ − f(f(y₁ − x̂₁) − x̂₂)}, then r > 0 and
B((x̂₁, x̂₂, x̂₃), r) ⊂ A^c.
A^c is open ⇒ A is closed. □
Chapter 4 (pp. 100-111): Unconstrained Optima.
1. Consider D = [0, 1] and f : D → ℝ, f(x) = x:
arg max_{x∈D} f(x) = 1, but Df(1) = 1 ≠ 0. □
2.
f′(x) = 1 − 2x − 3x² = 0 for x = (1 ± √(1+3))/(−3) = { −1 , 1/3 }
f″(x) = −2 − 6x ,  f″(−1) = 4 ,  f″(1/3) = −4
−1 is a local minimum and 1/3 a local maximum; they are not global because
lim_{x→−∞} f(x) = +∞ and lim_{x→+∞} f(x) = −∞. □
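The computation can be verified numerically (a sketch, not part of the original; f here is the primitive x − x² − x³, which is consistent with the stated derivative):

```python
def f(x):       return x - x**2 - x**3       # primitive consistent with f'(x) = 1 - 2x - 3x^2
def fprime(x):  return 1 - 2*x - 3*x**2
def fsecond(x): return -2 - 6*x

for c in (-1.0, 1.0/3.0):
    assert abs(fprime(c)) < 1e-12            # both are critical points
assert fsecond(-1.0) > 0                     # local minimum at -1
assert fsecond(1.0/3.0) < 0                  # local maximum at 1/3
assert f(-10.0) > f(-1.0) and f(10.0) < f(1.0/3.0)   # neither optimum is global
print("ok")  # ok
```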
3. The proof is analogous to the proof at page 107, reversing the inequalities. □
4. a)
f_x = 6x² + y² + 10x
f_y = 2xy + 2y
⇒ y = 0 ∧ x = 0
D²f(x, y) = ( 12x + 10 , 2y ; 2y , 2x + 2 )
(0, 0) is a minimum;
b)
f_x = e^{2x} + 2e^{2x}(x + y² + 2y) = e^{2x}(1 + 2(x + y² + 2y))
f_y = e^{2x}(2y + 2)
⇒ y = −1 ; 1 + 2(x + 1 − 2) = 0 ⇒ x = 1/2 ⇒ f(1/2, −1) = −(1/2)e
since lim_{x→−∞} f(x, −1) = 0 and lim_{x→+∞} f(x, −1) = +∞, the only critical point
(1/2, −1) is a global minimum;
c) the function is symmetric, limits for x or y to ±∞ are +∞, ∀ a ∈ ℝ:
f_x = ay − 2xy − y²
f_y = ax − 2xy − x²
⇒ (0, 0) ∨ (0, a) ∨ (a, 0) ∨ (−(2/5)a, −(1/5)a) ∨ (−(1/5)a, −(2/5)a)
D²f(x, y) = ( 2y , a − 2x − 2y ; a − 2x − 2y , 2x )
(−(2/5)a, −(1/5)a) and (−(1/5)a, −(2/5)a) are local minima ∀ a ∈ ℝ;
d) when sin y ≠ 0, limits for x → ±∞ are ±∞,
f_x = sin y
f_y = x cos y
⇒ (0, kπ) ∀ k ∈ ℤ = {. . . , −2, −1, 0, 1, . . .}
f is null at the critical points; adding any (ε, ε) to them (ε < π/2), f is positive,
adding (ε, −ε) to them, f is negative,
so the critical points are saddles;
e) limits for x or y to ±∞ are +∞,
f_x = 4x³ + 2xy²
f_y = 2x²y − 1
⇒ y = 1/(2x²) ⇒ 4x³ = −1/(2x³) ⇒ impossible
there are no critical points;
f) limits for x or y to ±∞ are +∞,
f_x = 4x³ − 3x²
f_y = 4y³
⇒ (0, 0) ∨ (3/4, 0)
since f(0, 0) = 0 and f(3/4, 0) < 0, (3/4, 0) is a minimum and (0, 0) a saddle;
g) limits for x or y to ±∞ are 0,
f_x = (1 − x² + y²)/(1 + x² + y²)²
f_y = −2xy/(1 + x² + y²)²
⇒ (1, 0) ∨ (−1, 0)
since f(1, 0) = 1/2 and f(−1, 0) = −1/2, the former is a maximum, the latter a
minimum;
h) limits for x to ±∞ are +∞,
limits for y to ±∞ depend on the sign of (x² − 1),
f_x = x³/8 + 2xy² − 1
f_y = 2y(x² − 1)
⇒ { y = 0 ⇒ x = 2 } ∨ { x = 1 ⇒ y = ±√7/4 } ∨ { x = −1 ⇒ impossible }
D²f(x, y) = ( (3/8)x² + 2y² , 4xy ; 4xy , 2(x² − 1) )
D²f(2, 0) = ( 3/2 , 0 ; 0 , 6 ) ⇒ minimum,
D²f(1, √7/4) = ( 5/4 , √7 ; √7 , 0 ) ⇒ saddle,
D²f(1, −√7/4) = ( 5/4 , −√7 ; −√7 , 0 ) ⇒ saddle. □
5. The unconstrained function is given by the substitution y = 9 − x:
f(x) = 2 + 2x + 2(9 − x) − x² − (9 − x)² = −2x² + 18x − 61
which is a parabola with a global maximum in x = 9/2 ⇒ y = 9/2. □
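A small check that the substitution and the vertex are right (a sketch, not part of the original solution):

```python
def f_original(x, y):
    return 2 + 2*x + 2*y - x**2 - y**2

def f_sub(x):
    # after the substitution y = 9 - x
    return -2*x**2 + 18*x - 61

# the substituted form agrees with the original along the constraint
assert all(abs(f_original(x, 9 - x) - f_sub(x)) < 1e-9 for x in range(-5, 15))
x_star = -18 / (2 * -2)          # vertex of the parabola, -b/(2a)
print(x_star)  # 4.5
```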
6. a) x* is a local maximum of f if ∃ ε ∈ ℝ₊ such that ∀ y ∈ B(x*, ε), f(y) ≤ f(x*);
considering then that:
liminf_{y→x*} ( f(x*) − f(y) ) = liminf_{y∈B(x*,ε), y→x*} ( f(x*) − f(y) ) ≥ 0
∀ y < x*, x* − y > 0 ∧ ∀ y > x*, x* − y < 0
−1 · liminf = −limsup
we have the result;
b) a limit may not exist, while liminf and limsup always exist if the function
is defined on all ℝ;
c) if x* is a strict local maximum the inequality for liminf_{y→x*} is strict, so the
two inequalities to prove are strict as well. □
7. Consider f : ℝ → ℝ:
f(x) = { 1 if x ∈ ℚ ; 0 otherwise }
where ℚ ⊂ ℝ are the rational numbers; x = 0 is a local, not strict, maximum, and f is
not constant around x = 0. □
8. f is not constantly null, otherwise f′ would also be constantly null;
since lim_{y→∞} f(y) = 0, f has a global maximum x̄, with f(x̄) > 0 and f′(x̄) = 0;
since x̄ is the only point for which f′ vanishes, f′(x) = 0 ⇒ x = x̄. □
9. a) φ′ > 0:
d(φ∘f)/dxᵢ = φ′(f)·(df/dxᵢ) ,  df/dxᵢ = 0 ⇒ d(φ∘f)/dxᵢ = 0
Df(x*) = 0 ⇒ D(φ∘f)(x*) = 0
b)
d²(φ∘f)/dxᵢdxⱼ = d/dxⱼ( φ′(f)·df/dxᵢ ) = φ″(f)·(df/dxᵢ)(df/dxⱼ) + φ′(f)·d²f/dxᵢdxⱼ
df/dxᵢ = 0 ⇒ d²(φ∘f)/dxᵢdxⱼ = φ′(f)·d²f/dxᵢdxⱼ
Df(x*) = 0 ⇒ D²(φ∘f)(x*) = φ′(f(x*))·D²f(x*)
and a negative definite matrix is still so if multiplied by a positive constant. □
10. Let us check the conditions on principal minors (see e.g. Hal R. Varian, 1992,
“Microeconomic Analysis”, Norton, pp. 500-501).
D²f(x) = ( f_{x₁x₁} … f_{x₁xₙ} ; ⋮ ⋱ ⋮ ; f_{x₁xₙ} … f_{xₙxₙ} )
is positive definite if:
1) f_{x₁x₁} > 0,
2) f_{x₁x₁}·f_{x₂x₂} − f²_{x₁x₂} > 0,
. . .
n) |D²f(x)| > 0;
D²g(x) = ( g_{x₁x₁} … g_{x₁xₙ} ; ⋮ ⋱ ⋮ ; g_{x₁xₙ} … g_{xₙxₙ} ) = D²(−f)(x) = ( −f_{x₁x₁} … −f_{x₁xₙ} ; ⋮ ⋱ ⋮ ; −f_{x₁xₙ} … −f_{xₙxₙ} )
is negative definite if:
1) −f_{x₁x₁} < 0,
2) (−f_{x₁x₁})·(−f_{x₂x₂}) − (−f_{x₁x₂})² > 0,
. . .
n) |D²(−f)(x)| < 0 if n is odd, |D²(−f)(x)| > 0 if n is even;
the i-th principal minor is a polynomial where all the terms have order i,
and it is easy to check that odd principal minors change sign when all the entries
change sign, while even ones keep it,
hence D²f(x) positive definite ⇒ D²g(x) = D²(−f)(x) negative definite. □
Chapter 5 (pp. 112-144): Equality Constraints.
1. a)
L(x, y, λ) = x² − y² + λ(x² + y² − 1)
L_x = 2x + 2λx = 0 ⇒ λ = −1 ∨ x = 0
L_y = −2y + 2λy = 0 ⇒ λ = 1 ∨ y = 0
L_λ = x² + y² − 1 = 0 ⇒ { x = 0 ⇒ y = ±1, λ = 1 ; y = 0 ⇒ x = ±1, λ = −1 }
when λ = 1 we have a minimum, when λ = −1 a maximum;
b)
f(x) = x² − (1 − x²) = 2x² − 1 ⇒ { min for x = 0 ; max for x = ±∞ }
the solution is different from (a) because the right substitution is y ≡ √(1 − x²),
admissible only for x ∈ [−1, 1]. □
2. a) Substituting y ≡ 1 − x:
f(x) = x³ + (1 − x)³ = 3x² − 3x + 1 ⇒ max for x = ±∞
b)
L(x, y, λ) = x³ + y³ + λ(x + y − 1)
L_x = 3x² + λ = 0 ⇒ λ = −3x²
L_y = 3y² + λ = 0 ⇒ λ = −3y²
⇒ x = ±y
L_λ = x + y − 1 ; with x = y ⇒ x = y = 1/2
x = y = 1/2 is the unique local minimum, as can be checked in (a). □
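A numerical check of the substitution and of the critical point (a sketch, not part of the original solution):

```python
def f(x):
    # objective along the constraint: x^3 + y^3 with y = 1 - x
    return x**3 + (1 - x)**3

def f_sub(x):
    return 3*x**2 - 3*x + 1

# the two expressions agree on a grid of points
assert all(abs(f(k / 10) - f_sub(k / 10)) < 1e-9 for k in range(-20, 21))
x_star = 0.5                      # the unique critical point: 6x - 3 = 0
assert f(x_star) < f(0.4) and f(x_star) < f(0.6)   # a local minimum
print(f(x_star))  # 0.25
```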
3. a)
L(x, y, λ) = xy + λ(x² + y² − 2a²)
L_x = y + 2λx = 0 ⇒ (y = 0 ∧ λ = 0) ∨ y = −2λx
L_y = x + 2λy = 0 ⇒ (x = 0 ∧ λ = 0) ∨ x = −2λy
L_λ = x² + y² − 2a² = 0 ⇒ { x = 0 ⇒ y = ±√2·a ; y = 0 ⇒ x = ±√2·a ; y = −2λx ∧ x = −2λy }
(0, ±√2·a) and (±√2·a, 0) are saddle points, because f(x, y) can be positive or
negative in any ball around them;
y = −2λx ∧ x = −2λy imply:
λ = ±1/2, x = ±y, and x = ±a;
the sign of x does not matter: when x = y we have a maximum, when x = −y a
minimum.
b) substitute x̂ ≡ 1/x, ŷ ≡ 1/y and â ≡ 1/a:
L(x̂, ŷ, λ) = x̂ + ŷ + λ(x̂² + ŷ² − â²)
L_x̂ = 1 + 2λx̂ = 0 ⇒ x̂ = −1/(2λ)
L_ŷ = 1 + 2λŷ = 0 ⇒ ŷ = −1/(2λ) = x̂
L_λ = x̂² + ŷ² − â² = 0 ⇒ x̂ = ŷ = ±(√2/2)·â
for x̂ and ŷ negative we have a minimum, otherwise a maximum.
c)
L(x, y, z, λ) = x + y + z + λ(x⁻¹ + y⁻¹ + z⁻¹ − 1)
L_x = 1 − λx⁻² = 0 ⇒ x = ±√λ
L_y = 1 − λy⁻² = 0 ⇒ y = ±√λ
L_z = 1 − λz⁻² = 0 ⇒ z = ±√λ
we have a minimum when they are all negative (x = y = z = −3) and a maximum
when all positive (x = y = z = 3).
d) substitute xy = 8 − (x + y)z = 8 − (5 − z)z:
f(z) = z(8 − (5 − z)z) = z³ − 5z² + 8z
f_z = 3z² − 10z + 8 = 0 ⇒ z = (5 ± √(25 − 24))/3 = { 2 , 4/3 }
as z → ±∞, f(z) → ±∞,
f_zz = 6z − 10 is negative for z = 4/3 (local maximum)
and positive for z = 2 (local minimum);
when z = 4/3, x + y = 5 − z = 11/3 and xy = 8 − (x + y)z = 28/9 ⇒ one of them is
also 4/3 and the other is 7/3,
when z = 2, x + y = 5 − z = 3 and xy = 8 − (x + y)z = 2
⇒ one is also 2 and the other is 1;
e) substituting y ≡ 16/x:
f(x) = x + 16/x ⇒ f_x = 1 − 16/x² = 0 ⇒ x = ±4
when x = 4 and y = 4 we have a local minimum; maxima are unbounded for x → 0
and for x → ±∞;
f) substituting z ≡ 6 − x and y ≡ 2x:
f(x) = x² + 4x − (6 − x)² = 16x − 36 ,
f(x) is a linear function with no maxima nor minima. □
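For part (d), the critical points of the substituted cubic and the recovered x, y can be checked numerically (a sketch, not part of the original solution):

```python
import math

fp = lambda z: 3*z**2 - 10*z + 8               # derivative of z^3 - 5z^2 + 8z
roots = sorted([(10 - 2) / 6, (10 + 2) / 6])   # quadratic formula, sqrt(100 - 96) = 2
assert all(abs(fp(z)) < 1e-12 for z in roots)

# back out x and y for z = 4/3: x + y = 11/3 and x*y = 28/9
s = 5 - roots[0]
p = 8 - s * roots[0]
disc = math.sqrt(s * s - 4 * p)
x, y = (s + disc) / 2, (s - disc) / 2
assert abs(x - 7/3) < 1e-6 and abs(y - 4/3) < 1e-6
print(round(x, 4), round(y, 4))  # 2.3333 1.3333
```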
4. Actually a lemniscate is (x² + y²)² = x² − y², while (x² − y²)² = x² + y² identifies
only the point (0, 0).
[Figure: a lemniscate, (x² + y²)² − x² + y² = 0]
In the lemniscate x + y is maximized, by symmetry, in the positive quadrant,
in the point where the tangent of the explicit function y = f(x) is −1;
for x and y positive the explicit function becomes:
(x² + y²)² = x² − y²
x⁴ + 2x²y² + y⁴ − x² + y² = 0
y⁴ + (2x² + 1)y² + x⁴ − x² = 0
y² = ( −2x² − 1 + √(4x⁴ + 4x² + 1 − 4x⁴ + 4x²) )/2
y = f(x) = √( ( −2x² − 1 + √(8x² + 1) )/2 )
computing f′(x) and finding where it is −1 we find the arg max = (x*, y*);
(−x*, −y*) will be the arg min. □
5. a) (x − 1)³ = y² implies x ≥ 1,
f(x) = x³ − 2x² + 3x − 1 ⇒ f_x = 3x² − 4x + 3, which is always positive for x ≥ 1;
since f(x) is increasing, min f(x) = f(1) = 1 (y = 0);
b) the derivative of the constraint, D((x − 1)³ − y²) = (3x² − 6x + 3, −2y)′,
is (0, 0)′ in (1, 0) and in no other point satisfying the constraint,
hence the rank condition in the Theorem of Lagrange is violated. □
6. a)
L(x, λ) = c′x + (1/2)x′Dx + λ′(Ax − b)
= Σᵢ₌₁ⁿ cᵢxᵢ + (1/2) Σᵢ₌₁ⁿ ( Σⱼ₌₁ⁿ xⱼDⱼᵢ ) xᵢ + Σᵢ₌₁ᵐ λᵢ( ( Σⱼ₌₁ⁿ Aᵢⱼxⱼ ) − bᵢ )
L_{xᵢ} = cᵢ + Dᵢᵢxᵢ + (1/2) Σ_{j=1, j≠i}ⁿ xⱼDⱼᵢ + (1/2) Σ_{j=1, j≠i}ⁿ xⱼDᵢⱼ + Σⱼ₌₁ᵐ λⱼAⱼᵢ
= cᵢ + Σⱼ₌₁ⁿ xⱼDᵢⱼ + Σⱼ₌₁ᵐ λⱼAⱼᵢ (using the symmetry of D)
L_{λᵢ} = ( Σⱼ₌₁ⁿ Aᵢⱼxⱼ ) − bᵢ
b) . . .
7. The constraint is the normalization ‖x‖ = 1; the system is:
f(x) = x′Ax = Σᵢ₌₁ⁿ ( Σⱼ₌₁ⁿ xⱼAⱼᵢ ) xᵢ
f_{xᵢ} = 2 Σⱼ₌₁ⁿ xⱼAⱼᵢ
x*ᵢ = − ( Σ_{j=1, j≠i}ⁿ x*ⱼAⱼᵢ ) / Aᵢᵢ
x* = − ( 0 , A₂₁/A₁₁ , … , Aₙ₁/A₁₁ ; A₁₂/A₂₂ , 0 , … , Aₙ₂/A₂₂ ; ⋮ ⋱ ⋮ ; A₁ₙ/Aₙₙ , A₂ₙ/Aₙₙ , … , 0 ) · x* ≡ B · x*  such that ‖x*‖ = 1
for any eigenvector of the square matrix B, its two normalizations (one the opposite
of the other) are critical points of the problem;
since D²f(x) = 2 ( A₁₁ … Aₙ₁ ; ⋮ ⋱ ⋮ ; A₁ₙ … Aₙₙ ) is constant ∀ x,
all critical points are maxima, minima or saddles, according to whether A is positive
definite, negative definite or neither. □
8. a)
[Figure: the stock s(t) over days, a periodic sawtooth between 0 and x]
The function s(t) quantifying the stock is periodic of period T = x·(dt/dI),
and in this period the average stock is ∫₀ᵀ s(t) dt / T = (x·T/2)/T = x/2;
in the long run this will be the total average;
b)
L(x, n, λ) = C_h·(x/2) + C₀·n + λ(nx − A)
L_x = C_h/2 + λn = 0
L_n = C₀ + λx = 0
⇒ n = −C_h/(2λ) ∧ x = −C₀/λ
L_λ = nx − A = 0 ⇒ C_h·C₀/(2λ²) = A ⇒ λ = ±√( C_h·C₀/(2A) )
x and n negative have no meaning, so λ must be negative. □
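With hypothetical numbers for C_h, C₀ and A (they are not given in the exercise), the solution can be checked to satisfy the constraint and to match the classical economic-order-quantity formula:

```python
import math

# hypothetical numbers: holding cost, ordering cost, total demand
C_h, C_0, A = 2.0, 50.0, 400.0

lam = -math.sqrt(C_h * C_0 / (2 * A))   # the negative root of the last equation
x = -C_0 / lam                          # lot size
n = -C_h / (2 * lam)                    # number of orders

assert x > 0 and n > 0
assert abs(n * x - A) < 1e-9                            # constraint n*x = A
assert abs(x - math.sqrt(2 * A * C_0 / C_h)) < 1e-9     # classical EOQ lot size
print(round(x, 2), round(n, 2))  # 141.42 2.83
```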
9. The condition for the equality constraint to suffice is that u(x₁, x₂) is nondecreasing;
this happens for α ≥ 1 ∧ β ≥ 1;
if otherwise one of them, say α, is less than 1,
the problem is unbounded at lim_{x₁→0} x₁^α + x₂^β = +∞. □
10.
min w₁x₁ + w₂x₂ s.t. (x₁, x₂) ∈ X ≡ {(x₁, x₂) ∈ ℝ²₊ | x₁² + x₂² ≥ 1}
a)
[Figure: the set X = {x₁² + x₂² ≥ 1} in ℝ²₊, with a price line of slope −w₁/w₂]
if w₁ < w₂ it is clear from the graph that (1, 0) is the cheapest point in X;
similarly, if w₂ < w₁, the cheapest point is (0, 1);
if w₁ = w₂ they both cost w₁ = w₂;
b) if the nonnegativity constraints are ignored:
min_{x₁²+x₂²≥1} w₁x₁ + w₂x₂ = lim_{(x₁,x₂)→(−∞,−∞)} w₁x₁ + w₂x₂ = −∞
if w₁ and w₂ are positive; similarly for (x₁, x₂) → (+∞, +∞) if they are negative. □
11. [x^{1/2} is not defined if x < 0. . . ]
max x^{1/2} + y^{1/2} s.t. px + y = 1
L(x, y, λ) = x^{1/2} + y^{1/2} + λ(px + y − 1)
L_x = (1/2)x^{−1/2} + λp = 0
L_y = (1/2)y^{−1/2} + λ = 0
⇒ x = (−2λp)⁻² ∧ y = (−2λ)⁻²
L_λ = px + y − 1 = 0 ⇒ p(−2λp)⁻² + (−2λ)⁻² = 1
p/(4λ²p²) + 1/(4λ²) = (p² + p)/(4λ²p²) = 1 ⇒ λ = ±√(p² + p)/(2p)
the sign of λ does not matter to identify x* = 1/(p² + p) and y* = p²/(p² + p),
the unique critical point, with both positive components;
(x*)^{1/2} + (y*)^{1/2} = (1 + p)/√(p² + p) = √((1 + p)/p) is greater e.g. than
0^{1/2} + 1^{1/2} = 1,
and being the unique critical point it is a maximum. □
Chapter 6 (pp. 145-171): Inequality Constraints.
1. Since the function is increasing in both variables, we can search for maxima on the
boundary;
substituting y = √(1 − x²) (which means x ∈ [0, 1] ⇒ y ∈ [0, 1]):
f(x) = ln x + ln √(1 − x²) = ln x + (1/2)ln(1 − x²)
⇒ f_x = 1/x − (1/2)·2x/(1 − x²) = (1 − 2x²)/((1 − x²)x) = 0 ⇒ x = √2/2 ⇒ y = √2/2
which is the argument of the maximum. □
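A quick grid check that the maximizer of the substituted objective sits at √2/2 (a sketch, not part of the original solution):

```python
import math

def f(x):
    # substituted objective: ln x + ln sqrt(1 - x^2), x in (0, 1)
    return math.log(x) + 0.5 * math.log(1 - x**2)

x_star = math.sqrt(2) / 2
grid = [0.01 * k for k in range(1, 100)]
best = max(grid, key=f)
assert abs(best - x_star) < 0.01      # the grid maximizer sits next to sqrt(2)/2
print(round(x_star, 4))  # 0.7071
```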
2. The function is increasing, so we search for maxima on the boundary;
we can substitute with polar coordinates y ≡ √x·sin ρ, z ≡ √x·cos ρ:
f(ρ) = √x·(p_y·sin ρ + p_z·cos ρ) ⇒ f_ρ = √x·(p_y·cos ρ − p_z·sin ρ) = 0
⇒ sin ρ/cos ρ = p_y/p_z ⇒ y/z = p_y/p_z
economically speaking, the marginal rate of substitution equals the price ratio,
sin ρ = ±p_y/√(p_y² + p_z²) ∧ cos ρ = ±p_z/√(p_y² + p_z²)
y = ±√x·p_y/√(p_y² + p_z²) ∧ z = ±√x·p_z/√(p_y² + p_z²)
( √x·p_y/√(p_y² + p_z²) , √x·p_z/√(p_y² + p_z²) ) is a maximum and
( −√x·p_y/√(p_y² + p_z²) , −√x·p_z/√(p_y² + p_z²) ) is a minimum. □
3. a) We can search for maxima on the boundary determined by I:
max x₁x₂x₃ s.t. x₁ ∈ [0, 1], x₂ ∈ [2, 4], x₃ = 4 − x₁ − x₂
max f(x₁, x₂) = 4x₁x₂ − x₁²x₂ − x₁x₂² s.t. x₁ ∈ [0, 1], x₂ ∈ [2, 4]
considering only x₁,
f(x₁) = 4x₁x₂ − x₁²x₂ − x₁x₂² has a maximum for x₁ = (4x₂ − x₂²)/(2x₂) = 2 − (1/2)x₂,
by symmetry a maximum is at x₁ = x₂ = 4/3, which is however not feasible;
nevertheless, for x₁ ∈ [0, 1] the best value for x₂ ∈ [2, 4] is 2,
and we get then x₁ = 1 and x₃ = 1;
b)
max x₁x₂x₃ s.t. x₁ ∈ [0, 1], x₂ ∈ [2, 4], x₃ = (6 − x₁ − 2x₂)/3
max f(x₁, x₂) = 2x₁x₂ − (1/3)x₁²x₂ − (2/3)x₁x₂² s.t. x₁ ∈ [0, 1], x₂ ∈ [2, 4]
f(x₁) has a maximum when x₁ = (2x₂ − (2/3)x₂²)/((2/3)x₂) = 3 − x₂,
f(x₂) when x₂ = (2x₁ − (1/3)x₁²)/((4/3)x₁) = 3/2 − (1/4)x₁,
the two together give x₁ = 3 − 3/2 + (1/4)x₁ ⇒ x₁ = 2 and x₂ = 1, not feasible;
nevertheless, for x₁ ∈ [0, 1] the best value for x₂ ∈ [2, 4] is again 2,
hence x₁ is again 1 and x₃ = 1. □
4. The argument of the square root must be positive; the function is increasing in all
variables, so we can use the Lagrangean method:
L(x, λ) = Σ_{t=1}^T 2^{−t}·√(x_t) + λ( Σ_{t=1}^T x_t − 1 )
L_{xᵢ} = 2^{−i−1}·xᵢ^{−1/2} + λ
if λ = 0 all the xᵢ are 0 (minimum);
otherwise, for i < j, i, j ∈ {1, . . . T}:
2^{−i−1}·xᵢ^{−1/2} = 2^{−j−1}·xⱼ^{−1/2} ⇒ (xᵢ/xⱼ)^{1/2} = 2^{j+1}/2^{i+1} = 2^{j−i} ⇒ xᵢ/xⱼ = 2^{2(j−i)}
the system becomes x₁ = x₁, x₂ = (1/4)x₁, . . . x_T = (1/2^{2(T−1)})x₁;
since they must sum to 1:
x₁ = 1/Σᵢ₌₀^{T−1} 4^{−i} , x₂ = (1/4)/Σᵢ₌₀^{T−1} 4^{−i} , . . . x_T = (1/4^{T−1})/Σᵢ₌₀^{T−1} 4^{−i}
as T increases the sum quickly converges to 1/(1 − 1/4) = 4/3. □
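The geometric allocation x_t ∝ 4^{−(t−1)} can be checked, for a small T, to exhaust the budget and to equalize the first-order conditions (a sketch, not part of the original solution):

```python
T = 5
raw = [4.0 ** -t for t in range(T)]     # x_t proportional to 4^{-(t-1)}
total = sum(raw)
x = [w / total for w in raw]

assert abs(sum(x) - 1.0) < 1e-12        # budget exhausted
# FOC: derivative of 2^{-(t+1)} * sqrt(x) is 2^{-(t+2)} * x^{-1/2} (zero-based t),
# and it must be the same constant for every period
g = [2.0 ** -(t + 2) * x[t] ** -0.5 for t in range(T)]
assert max(g) - min(g) < 1e-12
print([round(v, 4) for v in x])  # [0.7507, 0.1877, 0.0469, 0.0117, 0.0029]
```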
5. a)
min w₁x₁ + w₂x₂ s.t. x₁x₂ = ȳ², x₁ ≥ 1, x₂ > 0
the feasible set is closed but unbounded as x₁ → +∞;
b) substituting x₂ = ȳ²/x₁ we get:
f(x₁) = w₁x₁ + w₂ȳ²/x₁
f′(x₁) = w₁ − w₂ȳ²/x₁² = 0 ⇒ x₁ = +√(w₂/w₁)·ȳ (the positive root, since x₁ ≥ 1)
f″(x₁) = 2w₂ȳ²/x₁³ > 0 since x₁ ≥ 1
if √(w₂/w₁)·ȳ ≥ 1 this is the solution, and x₂ = √(w₁/w₂)·ȳ,
otherwise x₁ = 1 is, and x₂ = ȳ²;
in both cases ∃ λ satisfying the Kuhn-Tucker conditions for
L(x₁, x₂, λ₁, λ₂) = w₁x₁ + w₂x₂ + λ₁(ȳ² − x₁x₂) + λ₂(x₁ − 1) :
for ( √(w₂/w₁)·ȳ , √(w₁/w₂)·ȳ ):
L_{x₂} = w₂ − λ₁x₁ = 0 ⇒ λ₁ = √(w₁w₂)/ȳ
L_{x₁} = w₁ − λ₁x₂ + λ₂ = 0 ⇒ λ₂ = 0
for (1, ȳ²):
L_{x₂} = w₂ − λ₁x₁ = 0 ⇒ λ₁ = w₂
L_{x₁} = w₁ − λ₁x₂ + λ₂ = 0 ⇒ λ₂ = w₂ȳ² − w₁. □
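With hypothetical values of w₁, w₂ and ȳ (chosen so that the interior case applies), the first-order solution can be verified (a sketch, not part of the original solution):

```python
import math

w1, w2, ybar = 1.0, 4.0, 3.0        # hypothetical parameter values

def cost(x1):
    # substitute x2 = ybar^2 / x1 into w1*x1 + w2*x2
    return w1 * x1 + w2 * ybar ** 2 / x1

x1_star = math.sqrt(w2 / w1) * ybar       # interior solution, feasible since >= 1
x2_star = ybar ** 2 / x1_star
assert x1_star >= 1
assert abs(x2_star - math.sqrt(w1 / w2) * ybar) < 1e-12
# nearby feasible points are no cheaper
assert all(cost(x1_star) <= cost(x1_star + d) for d in (-0.5, -0.1, 0.1, 0.5))
print(x1_star, x2_star)  # 6.0 1.5
```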
6.
max_{(x₁,x₂)∈ℝ²₊} f(x₁, x₂) = max_{(x₁,x₂)∈ℝ²₊} p₁x₁^{1/2} + p₂x₁^{1/2}x₂^{1/3} − w₁x₁ − w₂x₂
substitute x ≡ x₁^{1/2} and y ≡ x₂^{1/3}; the problem becomes
max_{(x,y)∈ℝ²₊} f(x, y) = max_{(x,y)∈ℝ²₊} x(p₁ + p₂y) − w₁x² − w₂y³
f_x = p₁ + p₂y − 2w₁x = 0 ⇒ x = (p₁ + p₂y)/(2w₁)
f_y = xp₂ − 3w₂y² = p₂p₁/(2w₁) + p₂²y/(2w₁) − 3w₂y² = 0
⇒ y = ( p₂²/(2w₁) ± √( (p₂²/(2w₁))² + 6w₂p₂p₁/w₁ ) ) / (6w₂)
only the positive root is admissible;
for p₁ = p₂ = 1 and w₁ = w₂ = 2 we have:
y = ( 1/4 + √(1/16 + 6) ) / 12 = (1 + √97)/48 ⇒ x = (1 + y)/4 = (49 + √97)/192
it is a maximum, because D²f(x, y) = ( −2w₁ , p₂ ; p₂ , −6w₂y ) = ( −4 , 1 ; 1 , −12y )
is negative definite for every y > 1/48,
and it is a global one for positive values because it is the only critical point. □
7. a) Substituting x ≡ x₁^{1/2} and y ≡ x₂, considering that utility is increasing in both
variables, the problem is:
max_{(x,y)∈ℝ²₊} f(x, y) = x + x²y such that 4x² + 5y = 100
substituting y = 20 − (4/5)x², the problem becomes:
max_{x∈ℝ₊} f(x) = x + 20x² − (4/5)x⁴
f_x = 1 + 40x − (16/5)x³
with numerical methods it is possible to calculate that the only positive x for which
f_x = 0 is x* ≃ 3.5480, where f(x*) ≃ 128.5418;
since f_xx = 40 − (48/5)x² < 40 − (48/5)·5 < 0 around x* (where x² > 5), it is a maximum;
we obtain x₁* ≃ 12.5883 and x₂* ≃ 9.9294;
b) buying the coupon the problem is:
max_{(x,y)∈ℝ²₊} f_a(x, y) = x + x²y such that 3x² + 5y = 100 − a
which becomes:
max_{x∈ℝ₊} f_a(x) = x + ((100 − a)/5)x² − (3/5)x⁴
f′_a = 1 + ((200 − 2a)/5)x − (12/5)x³
the value of a that makes the choice indifferent is the one for which max f_a(x) ≃ 128.5418
(as without the coupon), where f′_a = 0;
for lower values of a the coupon is clearly desirable. □
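The numerical root mentioned above can be reproduced with a simple bisection (a sketch, not part of the original solution):

```python
def fp(x):
    # derivative of f(x) = x + 20x^2 - (4/5)x^4
    return 1 + 40 * x - 16 / 5 * x ** 3

lo, hi = 3.0, 4.0          # fp(3) > 0 > fp(4): a root lies in between
for _ in range(60):        # bisection
    mid = (lo + hi) / 2
    if fp(lo) * fp(mid) <= 0:
        hi = mid
    else:
        lo = mid
x_star = (lo + hi) / 2
f_star = x_star + 20 * x_star ** 2 - 4 / 5 * x_star ** 4
assert abs(x_star - 3.5480) < 1e-3
assert abs(f_star - 128.5418) < 1e-2
print(round(x_star, 4), round(f_star, 4))
```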
8. a)
max u(f, e, l) s.t. l ∈ [0, H], with u_f > 0, u_e > 0, u_l < 0
if α is the share of income spent on food, f = αwl/p and e = (1 − α)wl/q:
max u( αwl/p, (1 − α)wl/q, l ) s.t. l ∈ [0, H], α ∈ [0, 1], u_f > 0, u_e > 0, u_l < 0 ;
b)
L(l, α, λ) = u( αwl/p, (1 − α)wl/q, l ) + λ₁l + λ₂(H − l) + λ₃α + λ₄(1 − α)
L_l = du/dl + λ₁ − λ₂
L_α = du/dα + λ₃ − λ₄
plus the constraints;
c) the problem is
max_{l∈[0,16], α∈[0,1]} f(l, α) = (α·3l)^{1/3}·((1 − α)·3l)^{1/3} − l² = α^{1/3}(1 − α)^{1/3}(3l)^{2/3} − l²
we can decompose the two-variable function:
f(l, α) = g(α)(3l)^{2/3} − l² , where g(α) ≡ α^{1/3}(1 − α)^{1/3}
since (3l)^{2/3} is always positive for l ∈ (0, 16],
and g(α) is always positive for α ∈ (0, 1),
g(α) maximizes alone for α = 1/2, and the objective becomes:
f(l) ≡ 64(3l)^{2/3} − l² ⇒ f_l = 128·(3l)^{−1/3} − 2l = 0
⇒ l* = 64·3^{−1/3}·(l*)^{−1/3} ⇒ (l*)^{4/3} ≃ 64·0.6934 ⇒ l* ≃ 17.1938
f_ll = −128·(3l)^{−4/3} − 2 ← always negative
l* is a maximum but is not feasible;
however, since the function is concave, f(l) is increasing in [0, 16]:
the agent maximizes by working 16 hours (the model does not include the time for
leisure in the utility) and splitting the resources between the two commodities. □
9.
max x₁^{1/3} + min{x₂, x₃} s.t. p₁x₁ + p₂x₂ + p₃x₃ ≤ I
in principle the Weierstrass theorem applies but not Kuhn-Tucker, because min is
continuous but not differentiable.
The cheapest way to maximize min{x₂, x₃} is however to set x₂ = x₃; the problem
becomes:
max x₁^{1/3} + x₂ s.t. p₁x₁ + (p₂ + p₃)x₂ ≤ I
and now Kuhn-Tucker applies too. □
10. a)
max p·f(L* + l) − w₁L* − w₂l s.t. l ≥ 0, with f ∈ C¹ concave ⇒ f_l decreasing
b)
L(l, λ) = p·f(L* + l) − w₁L* − w₂l + λl
L_l = p·f_l(L* + l) − w₂ + λ
L_λ = l
l = 0 and λ = w₂ − p·f_l(L* + l);
c) p·f(L* + l) − w₁L* − w₂l is maximized once (by concavity) where p·f_l(L* + l) = w₂;
when this happens for l ≤ 0, the maximum is (by chance) the Lagrangean point l = 0,
when instead it happens for l > 0, the maximum is not on the boundary. □
11. a)
max p_y·x₁^{1/4}x₂^{1/4} − p₁(x₁ − K₁) − p₂(x₂ − K₂) s.t. x₁ ≥ −K₁, x₂ ≥ −K₂
L(x₁, x₂, λ₁, λ₂) = p_y·x₁^{1/4}x₂^{1/4} − p₁(x₁ − K₁) − p₂(x₂ − K₂) + λ₁(K₁ + x₁) + λ₂(K₂ + x₂)
L_{x₁} = (1/4)p_y·x₁^{−3/4}x₂^{1/4} − p₁ + λ₁
L_{x₂} = (1/4)p_y·x₁^{1/4}x₂^{−3/4} − p₂ + λ₂
L_{λ₁} = K₁ + x₁
L_{λ₂} = K₂ + x₂
b) the unconstrained maximum of f(x₁, x₂) = x₁^{1/4}x₂^{1/4} − x₁ − x₂ + 4 is at:
f_{x₁} = (1/4)x₁^{−3/4}x₂^{1/4} − 1 = 0 ⇒ x₁^{3/4} = (1/4)x₂^{1/4}
f_{x₂} = (1/4)x₁^{1/4}x₂^{−3/4} − 1 = 0 ⇒ x₂^{3/4} = (1/4)x₁^{1/4}
⇒ x₁ = x₂ = (1/4)² = 1/16
the bounds are respected:
the firm sells most of x₁ and buys only 1/16 units of x₂ to produce 1/4 units of y;
c) is analogous to (b) since the problem is symmetric. □
12. a)
max p_y·(x₁(x₂ + x₃)) − w′x
L(x, λ) = p_y·(x₁(x₂ + x₃)) − w′x + λ′x
L_{x₁} = p_y(x₂ + x₃) − w₁ + λ₁
L_{xᵢ} = p_y·x₁ − wᵢ + λᵢ , i ∈ {2, 3}
L_{λᵢ} = xᵢ
x₁ = (w₂ − λ₂)/p_y = (w₃ − λ₃)/p_y ⇒ λ₃ − λ₂ = w₃ − w₂
x₂ + x₃ = (w₁ − λ₁)/p_y
b) [w₄ ???] x ∈ ℝ³₊ and λ ∈ ℝ³₊ are not pinned down by the equations;
c) the problem can be solved considering the cheapest between x₂ and x₃;
suppose it is x₂, the problem becomes:
max p_y·x₁x₂ − w₁x₁ − w₂x₂
which has critical point (w₂/p_y, w₁/p_y), but its Hessian ( 0 , p_y ; p_y , 0 ) is not
negative semidefinite.
The problem maximizes at (+∞, +∞) for any choice of p_y ∈ ℝ₊ and w ∈ ℝ³₊. □