
Calculus of Several Variables

Recall that the derivative of a function of one variable at $x_0$ is
$$\frac{df}{dx}(x_0) = \lim_{h \to 0} \frac{f(x_0 + h) - f(x_0)}{h}$$
It describes how the dependent variable changes when the independent variable changes by a small amount. For a multivariable function, we change one independent variable at a time, keeping all the other independent variables constant. The variation brought about by the change of one variable is described by the partial derivative.

Definition Let $f: \mathbb{R}^n \to \mathbb{R}^1$. Then for each variable $x_i$ at each point $\mathbf{x}^0 = (x_1^0, x_2^0, \dots, x_n^0)$ in the domain of $f$, the partial derivative is
$$\frac{\partial f}{\partial x_i}(x_1^0, x_2^0, \dots, x_n^0) = \lim_{h \to 0} \frac{f(x_1^0, \dots, x_i^0 + h, \dots, x_n^0) - f(x_1^0, \dots, x_i^0, \dots, x_n^0)}{h}$$
if this limit exists. Only the $i$th variable changes; the others are treated as constants. The partial derivative is usually written as $\frac{\partial f}{\partial x_i}$. Other common notations include $\partial f / \partial x_i$, $f_i$, $f_{x_i}$ and $D_i f$.

Examples:
(1) Consider the function $f(x, y) = 3x^2 y^2 + 4x^3 y + 7y$. Then
$$\frac{\partial f}{\partial x} = 6xy^2 + 12x^2 y, \qquad \frac{\partial f}{\partial y} = 6x^2 y + 4x^3 + 7$$
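These partial derivatives can be checked numerically with a difference quotient; the sketch below holds one variable fixed while bumping the other (the evaluation point and step size are arbitrary choices):

```python
# f(x, y) = 3x^2 y^2 + 4x^3 y + 7y from example (1)
def f(x, y):
    return 3*x**2*y**2 + 4*x**3*y + 7*y

# Analytic partial derivatives computed in the text
def f_x(x, y):
    return 6*x*y**2 + 12*x**2*y

def f_y(x, y):
    return 6*x**2*y + 4*x**3 + 7

def partial(g, i, point, h=1e-6):
    """Approximate the i-th partial derivative of g at `point`
    by the quotient (g(x + h e_i) - g(x)) / h."""
    bumped = list(point)
    bumped[i] += h
    return (g(*bumped) - g(*point)) / h

x0, y0 = 1.5, -2.0
approx_fx = partial(f, 0, (x0, y0))  # close to f_x(1.5, -2) = -18
approx_fy = partial(f, 1, (x0, y0))  # close to f_y(1.5, -2) = -6.5
```

Only the bumped coordinate changes in each quotient, mirroring the definition above.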
(2) Consider the production function $Q = F(K, L)$, where $K$ is the amount of capital input and $L$ is the amount of labor input.
• The partial derivative $\frac{\partial F}{\partial K}$ is called the marginal product of capital; it estimates the change in output due to a one-unit increase in capital.
• The partial derivative $\frac{\partial F}{\partial L}$ is called the marginal product of labor; it estimates the change in output due to a one-unit increase in labor.
• If capital increases by $\Delta K$, then output will increase by approximately
$$\Delta Q \approx \frac{\partial F}{\partial K}(K, L)\,\Delta K$$
• If labor increases by $\Delta L$, then output will increase by approximately
$$\Delta Q \approx \frac{\partial F}{\partial L}(K, L)\,\Delta L$$
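As a concrete illustration, the approximation $\Delta Q \approx F_K \Delta K$ can be tested numerically; the Cobb-Douglas form and the input levels below are hypothetical choices, not part of the text:

```python
# Hypothetical Cobb-Douglas production function F(K, L) = K^0.3 * L^0.7
def F(K, L):
    return K**0.3 * L**0.7

def marginal_product_K(K, L):
    # dF/dK = 0.3 * K^(-0.7) * L^0.7
    return 0.3 * K**(-0.7) * L**0.7

K, L, dK = 100.0, 50.0, 1.0
actual = F(K + dK, L) - F(K, L)       # true change in output
approx = marginal_product_K(K, L) * dK  # marginal-product approximation
```

For a one-unit change in capital the two values agree to roughly three decimal places here; the approximation degrades as $\Delta K$ grows.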
(3) (Elasticity) Let $Q_1 = Q_1(P_1, P_2, I)$ represent the demand for good 1 in terms of prices and income.
• The own price elasticity of demand is
$$\varepsilon_1 = \frac{\%\ \text{change in demand}}{\%\ \text{change in own price}} = \frac{\Delta Q_1 / Q_1}{\Delta P_1 / P_1} = \frac{P_1}{Q_1}\frac{\Delta Q_1}{\Delta P_1} = \frac{P_1}{Q_1}\frac{\partial Q_1}{\partial P_1}$$
• The cross price elasticity of demand is
$$\varepsilon_{Q_1, P_2} = \frac{\%\ \text{change in demand for good 1}}{\%\ \text{change in price of good 2}} = \frac{P_2}{Q_1}\frac{\partial Q_1}{\partial P_2}$$
• The income elasticity of demand is
$$\varepsilon_{Q_1, I} = \frac{\%\ \text{change in demand}}{\%\ \text{change in income}} = \frac{I}{Q_1}\frac{\partial Q_1}{\partial I}$$
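A useful special case: for a log-linear demand function the elasticities are exactly the exponents. The demand form and parameter values below are hypothetical, chosen so the finite-difference elasticities can be checked against known answers:

```python
# Hypothetical log-linear demand: Q1 = A * P1^a * P2^b * I^c.
# For this form the three elasticities equal the exponents a, b, c.
A, a, b, c = 10.0, -1.5, 0.4, 0.8

def Q1(P1, P2, I):
    return A * P1**a * P2**b * I**c

def elasticity(demand, args, i, h=1e-6):
    """(x_i / Q) * dQ/dx_i, with the partial taken by finite difference."""
    base = demand(*args)
    bumped = list(args)
    bumped[i] += h
    dQ = (demand(*bumped) - base) / h
    return args[i] / base * dQ

point = (2.0, 3.0, 100.0)            # (P1, P2, I)
own_price = elasticity(Q1, point, 0)   # close to a = -1.5
cross_price = elasticity(Q1, point, 1)  # close to b = 0.4
income = elasticity(Q1, point, 2)      # close to c = 0.8
```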

The Total Derivative


Consider a function $F(x, y)$ of two variables. Given a point $(x^*, y^*)$, the total variation of the function when $x$ and $y$ vary simultaneously can be approximated by
$$F(x^* + \Delta x, y^*) - F(x^*, y^*) \approx \frac{\partial F}{\partial x}(x^*, y^*)\,\Delta x$$
$$F(x^*, y^* + \Delta y) - F(x^*, y^*) \approx \frac{\partial F}{\partial y}(x^*, y^*)\,\Delta y$$
$$F(x^* + \Delta x, y^* + \Delta y) - F(x^*, y^*) \approx \frac{\partial F}{\partial x}(x^*, y^*)\,\Delta x + \frac{\partial F}{\partial y}(x^*, y^*)\,\Delta y$$

Notation Sometimes we use the notations $dx$, $dy$, $dF$:
$$dx = \Delta x, \quad dy = \Delta y, \quad dF = \frac{\partial F}{\partial x}(x^*, y^*)\,dx + \frac{\partial F}{\partial y}(x^*, y^*)\,dy$$

The above expression for $dF$ is called the total differential; it is a linear approximation to the change $\Delta F$.

Exercise 𝐹 = 𝑥 2 𝑙𝑛𝑦, ∆𝐹 =?
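One way to check an answer to this exercise numerically: compute the total differential $dF = 2x \ln(y)\,dx + (x^2/y)\,dy$ and compare it with the actual change for a small step. The evaluation point and step sizes below are arbitrary choices:

```python
import math

# F(x, y) = x^2 ln y; its total differential is
# dF = 2x ln(y) dx + (x^2 / y) dy
def F(x, y):
    return x**2 * math.log(y)

def dF(x, y, dx, dy):
    return 2*x*math.log(y)*dx + (x**2 / y)*dy

x0, y0 = 2.0, 3.0
dx, dy = 0.01, -0.02
actual = F(x0 + dx, y0 + dy) - F(x0, y0)  # true change Delta F
approx = dF(x0, y0, dx, dy)               # total differential
```

The two values differ only by second-order terms in $(dx, dy)$.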

Jacobian Derivative
For a function $F(x_1, \dots, x_n)$ of $n$ variables, at a given point $\mathbf{x}^* = (x_1^*, \dots, x_n^*)$, the total differential is
$$dF = \frac{\partial F}{\partial x_1}(\mathbf{x}^*)\,dx_1 + \cdots + \frac{\partial F}{\partial x_n}(\mathbf{x}^*)\,dx_n$$
which can be viewed as a linear function of the $dx_i$ for $i = 1, \dots, n$ with coefficients $\frac{\partial F}{\partial x_i}(\mathbf{x}^*)$, $i = 1, \dots, n$, respectively. It is a good approximation to the actual change
$$\Delta F = F(\mathbf{x}^* + \Delta\mathbf{x}) - F(\mathbf{x}^*) \approx dF$$
The derivative of $F$ at $\mathbf{x}^*$ is represented by a vector
$$DF(\mathbf{x}^*) = DF_{\mathbf{x}^*} = \left( \frac{\partial F}{\partial x_1}(\mathbf{x}^*) \ \cdots \ \frac{\partial F}{\partial x_n}(\mathbf{x}^*) \right)$$
It is sometimes called the Jacobian derivative of $F$ at $\mathbf{x}^*$.

Chain Rule I

Definition A function $f: \mathbb{R}^n \to \mathbb{R}^1$ is continuously differentiable (or $C^1$) on an open set $U \subseteq \mathbb{R}^n$ if and only if for each $i$, $(\partial f / \partial x_i)(x)$ exists for all $x$ in $U$ and is continuous on $U$.

Chain Rule I: If $\mathbf{x}(t) = (x_1(t), x_2(t), \dots, x_n(t))$ is a $C^1$ curve on an interval about $t_0$ and $f$ is a $C^1$ function on a ball around $\mathbf{x}(t_0)$, then
$$g(t) = f(x_1(t), x_2(t), \dots, x_n(t))$$
is a $C^1$ function at $t_0$ and
$$\frac{dg}{dt}(t_0) = \frac{\partial f}{\partial x_1}(\mathbf{x}(t_0))\,x_1'(t_0) + \frac{\partial f}{\partial x_2}(\mathbf{x}(t_0))\,x_2'(t_0) + \cdots + \frac{\partial f}{\partial x_n}(\mathbf{x}(t_0))\,x_n'(t_0)$$
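Chain Rule I can be verified numerically: differentiate $g(t) = f(\mathbf{x}(t))$ directly by a difference quotient and compare with the chain-rule sum. The function $f$ and the curve below are hypothetical examples:

```python
import math

# f(x1, x2) = x1^2 * x2, along the hypothetical curve x1(t) = cos t, x2(t) = t^2
def g(t):
    return math.cos(t)**2 * t**2

def g_prime_chain(t):
    # (df/dx1) x1'(t) + (df/dx2) x2'(t)
    x1, x2 = math.cos(t), t**2
    return (2*x1*x2) * (-math.sin(t)) + (x1**2) * (2*t)

t0, h = 0.7, 1e-6
numeric = (g(t0 + h) - g(t0 - h)) / (2*h)  # central difference for g'(t0)
```

The central difference and the chain-rule value agree to many digits.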

Directional Derivatives
Given a point $\mathbf{x}^* = (x_1^*, x_2^*, \dots, x_n^*)$ and a vector $\mathbf{v} = (v_1, v_2, \dots, v_n)$, the line through $\mathbf{x}^*$ in the direction $\mathbf{v}$ is a curve and can be written as
$$\mathbf{x}(t) = \mathbf{x}^* + t\mathbf{v} = (x_1^* + t v_1, x_2^* + t v_2, \dots, x_n^* + t v_n)$$
Let $F$ be a real-valued function defined on $\mathbb{R}^n$. Evaluate the function along the line $\mathbf{x}(t)$:
$$g(t) \equiv F(\mathbf{x}(t)) = F(x_1^* + t v_1, x_2^* + t v_2, \dots, x_n^* + t v_n)$$
Then use Chain Rule I to take the derivative of $g$ at $t = 0$:
$$g'(0) = \frac{\partial F}{\partial x_1}(\mathbf{x}^*) v_1 + \frac{\partial F}{\partial x_2}(\mathbf{x}^*) v_2 + \cdots + \frac{\partial F}{\partial x_n}(\mathbf{x}^*) v_n
= \left( \frac{\partial F}{\partial x_1}(\mathbf{x}^*), \frac{\partial F}{\partial x_2}(\mathbf{x}^*), \dots, \frac{\partial F}{\partial x_n}(\mathbf{x}^*) \right) \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix}
= DF(\mathbf{x}^*) \cdot \mathbf{v}$$
This expression is called the derivative of $F$ at $\mathbf{x}^*$ in the direction $\mathbf{v}$.

Given the point $\mathbf{x}^*$ and a direction $\mathbf{v}$, if the independent variable moves a very small step from $\mathbf{x}^*$ in the direction $\mathbf{v}$ to $\mathbf{x}^* + (\Delta t)\mathbf{v}$, the change in the function value can be approximated by
$$F(\mathbf{x}^* + (\Delta t)\mathbf{v}) - F(\mathbf{x}^*) \approx (DF(\mathbf{x}^*) \cdot \mathbf{v})\,\Delta t$$
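A short numerical sketch of this approximation (the function $F$, the point $\mathbf{x}^*$, and the direction $\mathbf{v}$ below are hypothetical choices):

```python
# Directional derivative of F(x, y) = x^2 + 3xy at x* = (1, 2)
# in the direction v = (0.6, 0.8), a unit vector
def F(x, y):
    return x**2 + 3*x*y

def DF(x, y):
    return (2*x + 3*y, 3*x)  # (dF/dx, dF/dy)

xs, v = (1.0, 2.0), (0.6, 0.8)
directional = sum(d*vi for d, vi in zip(DF(*xs), v))  # DF(x*) . v = 7.2

dt = 1e-6  # small step along v
step = (F(xs[0] + dt*v[0], xs[1] + dt*v[1]) - F(*xs)) / dt
```

The per-unit-step change `step` matches the dot product `directional` up to second-order error.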

Notice that for the directions $\mathbf{e}_i$, $i = 1, \dots, n$, parallel to the $x_i$-axes,
$$DF(\mathbf{x}^*) \cdot \mathbf{e}_1 = \left( \frac{\partial F}{\partial x_1}(\mathbf{x}^*), \frac{\partial F}{\partial x_2}(\mathbf{x}^*), \dots, \frac{\partial F}{\partial x_n}(\mathbf{x}^*) \right) \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix} = \frac{\partial F}{\partial x_1}(\mathbf{x}^*)$$
$$DF(\mathbf{x}^*) \cdot \mathbf{e}_2 = \left( \frac{\partial F}{\partial x_1}(\mathbf{x}^*), \frac{\partial F}{\partial x_2}(\mathbf{x}^*), \dots, \frac{\partial F}{\partial x_n}(\mathbf{x}^*) \right) \begin{pmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{pmatrix} = \frac{\partial F}{\partial x_2}(\mathbf{x}^*)$$
$$\vdots$$
$$DF(\mathbf{x}^*) \cdot \mathbf{e}_n = \left( \frac{\partial F}{\partial x_1}(\mathbf{x}^*), \frac{\partial F}{\partial x_2}(\mathbf{x}^*), \dots, \frac{\partial F}{\partial x_n}(\mathbf{x}^*) \right) \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{pmatrix} = \frac{\partial F}{\partial x_n}(\mathbf{x}^*)$$
The directional derivative in the $x_i$-direction is merely the partial derivative with respect to $x_i$.

The Gradient Vector

The derivative of $y = F(x_1, x_2, \dots, x_n)$ at $\mathbf{x}^*$ can be written as a vector
$$DF_{\mathbf{x}^*} = \left( \frac{\partial F}{\partial x_1}(\mathbf{x}^*), \frac{\partial F}{\partial x_2}(\mathbf{x}^*), \dots, \frac{\partial F}{\partial x_n}(\mathbf{x}^*) \right)$$
Usually what matters is each individual entry of this vector; the vector form is for notational simplicity. Sometimes, however, we are also interested in the direction of this vector. When we consider its direction, it is usually written as a column vector
$$\begin{pmatrix} \frac{\partial F}{\partial x_1}(\mathbf{x}^*) \\ \frac{\partial F}{\partial x_2}(\mathbf{x}^*) \\ \vdots \\ \frac{\partial F}{\partial x_n}(\mathbf{x}^*) \end{pmatrix}$$
We think of this column vector as a vector in $\mathbb{R}^n$ with its tail at $\mathbf{x}^*$. We write it as $\nabla F(\mathbf{x}^*)$ or $\operatorname{grad} F(\mathbf{x}^*)$ and call it the gradient or gradient vector of $F$ at $\mathbf{x}^*$.

The length and direction of the gradient vector are significant. Notice that the directional derivative of $F$ in the direction $\mathbf{v}$ can be written as the dot product of $\nabla F(\mathbf{x}^*)$ and $\mathbf{v}$: the quantity $\nabla F(\mathbf{x}^*) \cdot \mathbf{v}$ measures the rate at which $F$ rises or falls as one moves from $\mathbf{x}^*$ in the direction $\mathbf{v}$. A natural question is: in what direction does $F$ increase most rapidly? Since our focus is on the direction and
$$\nabla F(\mathbf{x}^*) \cdot \mathbf{v} = \|\nabla F(\mathbf{x}^*)\| \|\mathbf{v}\| \cos\theta$$
where $\theta$ is the angle between $\nabla F(\mathbf{x}^*)$ and $\mathbf{v}$, we normalize $\|\mathbf{v}\| = 1$. Then the angle $\theta$ determines the rate of change. Note that $-1 \le \cos\theta \le 1$; when $\cos\theta = 1$, that is, when $\theta = 0°$, $\nabla F(\mathbf{x}^*) \cdot \mathbf{v}$ is largest.

For the change of the function in a direction $\mathbf{v}$ with $\|\mathbf{v}\| = 1$,
$$\frac{F(\mathbf{x}^* + (\Delta t)\mathbf{v}) - F(\mathbf{x}^*)}{\Delta t} \approx DF(\mathbf{x}^*) \cdot \mathbf{v} = \nabla F(\mathbf{x}^*) \cdot \mathbf{v} = \|\nabla F(\mathbf{x}^*)\| \|\mathbf{v}\| \cos\theta = \|\nabla F(\mathbf{x}^*)\| \cos\theta$$

Theorem Let $F: \mathbb{R}^n \to \mathbb{R}^1$ be a $C^1$ function. At any point $\mathbf{x}$ in the domain of $F$ at which $\nabla F(\mathbf{x}) \neq 0$, the gradient vector $\nabla F(\mathbf{x})$ points in the direction in which $F$ increases most rapidly, and $-\nabla F(\mathbf{x})$ points in the direction in which $F$ decreases most rapidly.
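The theorem can be sketched numerically: sample many unit directions from a point and confirm that none beats the normalized gradient direction. The function and point below are hypothetical choices:

```python
import math

# Hypothetical F(x, y) = x^2 + 2y^2; gradient at (1, 1) is (2, 4)
def F(x, y):
    return x**2 + 2*y**2

grad = (2.0, 4.0)
norm = math.hypot(*grad)                  # ||grad F|| = sqrt(20)
g_dir = (grad[0]/norm, grad[1]/norm)      # unit vector along the gradient

dt = 1e-4
def rate(v):
    """Rate of increase of F from (1, 1) in unit direction v."""
    return (F(1 + dt*v[0], 1 + dt*v[1]) - F(1, 1)) / dt

# Sample unit directions at 1-degree spacing; none should beat the gradient
angles = [2*math.pi*k/360 for k in range(360)]
best = max(rate((math.cos(a), math.sin(a))) for a in angles)
grad_rate = rate(g_dir)                   # close to ||grad F||
```

Note also that the rate along the gradient direction equals $\|\nabla F\|$, consistent with $\cos\theta = 1$.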

Functions from $\mathbb{R}^n$ to $\mathbb{R}^m$

Suppose we are studying
$$F = (f_1, f_2, \dots, f_m): \mathbb{R}^n \to \mathbb{R}^m$$
at a specific point $\mathbf{x}^* = (x_1^*, x_2^*, \dots, x_n^*)$ and we want to use approximation by differentials to estimate the effect of a change at $\mathbf{x}^*$ by $\Delta\mathbf{x} = (\Delta x_1, \Delta x_2, \dots, \Delta x_n)$. We first apply the results for real-valued functions to each component function $f_i$:
$$f_1(\mathbf{x}^* + \Delta\mathbf{x}) - f_1(\mathbf{x}^*) \approx \frac{\partial f_1}{\partial x_1}(\mathbf{x}^*)\,\Delta x_1 + \frac{\partial f_1}{\partial x_2}(\mathbf{x}^*)\,\Delta x_2 + \cdots + \frac{\partial f_1}{\partial x_n}(\mathbf{x}^*)\,\Delta x_n$$
$$f_2(\mathbf{x}^* + \Delta\mathbf{x}) - f_2(\mathbf{x}^*) \approx \frac{\partial f_2}{\partial x_1}(\mathbf{x}^*)\,\Delta x_1 + \frac{\partial f_2}{\partial x_2}(\mathbf{x}^*)\,\Delta x_2 + \cdots + \frac{\partial f_2}{\partial x_n}(\mathbf{x}^*)\,\Delta x_n$$
$$\vdots$$
$$f_m(\mathbf{x}^* + \Delta\mathbf{x}) - f_m(\mathbf{x}^*) \approx \frac{\partial f_m}{\partial x_1}(\mathbf{x}^*)\,\Delta x_1 + \frac{\partial f_m}{\partial x_2}(\mathbf{x}^*)\,\Delta x_2 + \cdots + \frac{\partial f_m}{\partial x_n}(\mathbf{x}^*)\,\Delta x_n$$
Then combine these results into matrix form:
$$F(\mathbf{x}^* + \Delta\mathbf{x}) - F(\mathbf{x}^*) \approx \begin{pmatrix} \frac{\partial f_1}{\partial x_1}(\mathbf{x}^*) & \frac{\partial f_1}{\partial x_2}(\mathbf{x}^*) & \cdots & \frac{\partial f_1}{\partial x_n}(\mathbf{x}^*) \\ \frac{\partial f_2}{\partial x_1}(\mathbf{x}^*) & \frac{\partial f_2}{\partial x_2}(\mathbf{x}^*) & \cdots & \frac{\partial f_2}{\partial x_n}(\mathbf{x}^*) \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial f_m}{\partial x_1}(\mathbf{x}^*) & \frac{\partial f_m}{\partial x_2}(\mathbf{x}^*) & \cdots & \frac{\partial f_m}{\partial x_n}(\mathbf{x}^*) \end{pmatrix} \begin{pmatrix} \Delta x_1 \\ \Delta x_2 \\ \vdots \\ \Delta x_n \end{pmatrix}$$
This expression describes the linear approximation of $F$ at $\mathbf{x}^*$, and the matrix
$$DF(\mathbf{x}^*) = DF_{\mathbf{x}^*} = \begin{pmatrix} \frac{\partial f_1}{\partial x_1}(\mathbf{x}^*) & \frac{\partial f_1}{\partial x_2}(\mathbf{x}^*) & \cdots & \frac{\partial f_1}{\partial x_n}(\mathbf{x}^*) \\ \frac{\partial f_2}{\partial x_1}(\mathbf{x}^*) & \frac{\partial f_2}{\partial x_2}(\mathbf{x}^*) & \cdots & \frac{\partial f_2}{\partial x_n}(\mathbf{x}^*) \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial f_m}{\partial x_1}(\mathbf{x}^*) & \frac{\partial f_m}{\partial x_2}(\mathbf{x}^*) & \cdots & \frac{\partial f_m}{\partial x_n}(\mathbf{x}^*) \end{pmatrix}$$
is called the derivative or the Jacobian derivative of $F$ at $\mathbf{x}^*$.
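The Jacobian matrix can be built numerically one column at a time, bumping each input variable in turn. A sketch for a hypothetical map $F: \mathbb{R}^2 \to \mathbb{R}^3$:

```python
# Numerical Jacobian of F(x1, x2) = (x1*x2, x1 + x2, x1^2) at x* = (2, 3)
def F(x):
    x1, x2 = x
    return [x1*x2, x1 + x2, x1**2]

def jacobian(F, x, h=1e-6):
    """m x n matrix of forward-difference partial derivatives;
    column j holds dF/dx_j."""
    base = F(x)
    m, n = len(base), len(x)
    J = [[0.0]*n for _ in range(m)]
    for j in range(n):
        bumped = list(x)
        bumped[j] += h
        col = F(bumped)
        for i in range(m):
            J[i][j] = (col[i] - base[i]) / h
    return J

J = jacobian(F, [2.0, 3.0])
# Analytic Jacobian: [[x2, x1], [1, 1], [2*x1, 0]] = [[3, 2], [1, 1], [4, 0]]
```

Each row is the derivative $Df_i(\mathbf{x}^*)$ of one component function, as in the construction above.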

The Chain Rule II Let $F: \mathbb{R}^n \to \mathbb{R}^m$ and $\mathbf{a}: \mathbb{R} \to \mathbb{R}^n$ be $C^1$ functions. Then the composite function $g(t) = F(\mathbf{a}(t))$ is a $C^1$ function from $\mathbb{R}^1$ to $\mathbb{R}^m$ and
$$g_i'(t) = \frac{\partial f_i}{\partial x_1}(\mathbf{a}(t))\,a_1'(t) + \frac{\partial f_i}{\partial x_2}(\mathbf{a}(t))\,a_2'(t) + \cdots + \frac{\partial f_i}{\partial x_n}(\mathbf{a}(t))\,a_n'(t) = Df_i(\mathbf{a}(t)) \cdot \mathbf{a}'(t)$$
Putting all the component equations together, we obtain the vector equation
$$g'(t) = D(F \circ \mathbf{a})(t) = DF(\mathbf{a}(t))\,\mathbf{a}'(t)$$

The Chain Rule III Let $F: \mathbb{R}^n \to \mathbb{R}^m$ and $A: \mathbb{R}^s \to \mathbb{R}^n$ be $C^1$ functions. Let $\mathbf{s}^* \in \mathbb{R}^s$ and $\mathbf{x}^* = A(\mathbf{s}^*) \in \mathbb{R}^n$. Consider the composite function
$$H = F \circ A: \mathbb{R}^s \to \mathbb{R}^m$$
Let $DF(\mathbf{x}^*)$ be the $m \times n$ Jacobian matrix of the partial derivatives of $F$ at $\mathbf{x}^*$, and let $DA(\mathbf{s}^*)$ be the $n \times s$ Jacobian matrix of the partial derivatives of $A$ at $\mathbf{s}^*$. The Jacobian matrix $DH(\mathbf{s}^*)$ is given by the matrix product of the Jacobians:
$$DH(\mathbf{s}^*) = D(F \circ A)(\mathbf{s}^*) = DF(\mathbf{x}^*)\,DA(\mathbf{s}^*)$$
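Chain Rule III can be checked numerically by comparing the finite-difference Jacobian of $H = F \circ A$ against the product of the Jacobians of $F$ and $A$. The two maps below (both $\mathbb{R}^2 \to \mathbb{R}^2$) are hypothetical examples:

```python
# Chain Rule III check: H = F . A, DH(s*) = DF(A(s*)) DA(s*)
def A(s):
    s1, s2 = s
    return [s1 + s2, s1 * s2]

def F(x):
    x1, x2 = x
    return [x1**2, x1 - x2]

def H(s):
    return F(A(s))

def jac(G, p, h=1e-6):
    """Forward-difference Jacobian of G at p."""
    base = G(p)
    J = [[0.0]*len(p) for _ in base]
    for j in range(len(p)):
        q = list(p); q[j] += h
        col = G(q)
        for i in range(len(base)):
            J[i][j] = (col[i] - base[i]) / h
    return J

def matmul(P, Q):
    return [[sum(P[i][k]*Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

s_star = [1.0, 2.0]
lhs = jac(H, s_star)                             # DH(s*) directly
rhs = matmul(jac(F, A(s_star)), jac(A, s_star))  # DF(x*) DA(s*)
```

Up to finite-difference error, `lhs` and `rhs` agree entry by entry.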

Higher Order Derivatives

The partial derivative 𝜕𝑓⁄𝜕𝑥𝑖 of a function 𝑦 = 𝑓(𝑥1 , 𝑥2 , … , 𝑥𝑛 ) is itself a function

of 𝑛 variables. We can continue taking partial derivatives of these partial derivatives

and obtain higher order partial derivatives.

Some concepts:

• If for each $i$,
$$\frac{\partial f}{\partial x_i}(\mathbf{x}^*) = \lim_{h \to 0} \frac{f(x_1^*, \dots, x_i^* + h, \dots, x_n^*) - f(x_1^*, \dots, x_i^*, \dots, x_n^*)}{h}$$
exists, we say $f$ is differentiable at $\mathbf{x}^*$.

• If these $n$ partial derivative functions are continuous at a point $\mathbf{x}^*$, we say that $f$ is continuously differentiable at $\mathbf{x}^*$, or $C^1$ at $\mathbf{x}^*$.

• If all these $n$ partial derivative functions are themselves differentiable on an open region $J$ of $\mathbb{R}^n$, we say $f$ is twice differentiable on $J$. For example,
$$\frac{\partial}{\partial x_j}\left( \frac{\partial f}{\partial x_i} \right)$$
is called the $x_i x_j$-second order partial derivative of $f$. It is usually written as
$$\frac{\partial^2 f}{\partial x_j \partial x_i}$$
The $x_i x_i$-derivative is usually written as $\frac{\partial^2 f}{\partial x_i^2}$ instead of $\frac{\partial^2 f}{\partial x_i \partial x_i}$. Terms of the form $\frac{\partial^2 f}{\partial x_j \partial x_i}$ with $i \neq j$ are called cross partial derivatives or mixed partial derivatives.

Second Order Derivatives and Hessians

A real-valued function of $n$ variables has $n^2$ second order partial derivatives. It is natural to arrange these $n^2$ partial derivatives into an $n \times n$ matrix whose $(i, j)$th entry is $\frac{\partial^2 f}{\partial x_j \partial x_i}$. This matrix is called the Hessian or Hessian matrix of $f$ and is written as $D^2 f(\mathbf{x})$ or $D^2 f_{\mathbf{x}}$:
$$D^2 f(\mathbf{x}) = \begin{pmatrix} \frac{\partial^2 f}{\partial x_1^2} & \frac{\partial^2 f}{\partial x_2 \partial x_1} & \cdots & \frac{\partial^2 f}{\partial x_n \partial x_1} \\ \frac{\partial^2 f}{\partial x_1 \partial x_2} & \frac{\partial^2 f}{\partial x_2^2} & \cdots & \frac{\partial^2 f}{\partial x_n \partial x_2} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial^2 f}{\partial x_1 \partial x_n} & \frac{\partial^2 f}{\partial x_2 \partial x_n} & \cdots & \frac{\partial^2 f}{\partial x_n^2} \end{pmatrix}$$
If all these $n^2$ second order partial derivatives exist and are themselves continuous functions of $(x_1, x_2, \dots, x_n)$, we say that $f$ is twice continuously differentiable, or $C^2$.

Young's Theorem

Theorem Suppose that $y = f(x_1, x_2, \dots, x_n)$ is $C^2$ on an open region $J$ in $\mathbb{R}^n$. Then for all $\mathbf{x}$ in $J$ and for each pair of indices $i, j$,
$$\frac{\partial^2 f}{\partial x_j \partial x_i} = \frac{\partial^2 f}{\partial x_i \partial x_j}$$
Hence if a function is $C^2$, by Young's theorem its Hessian matrix is symmetric.
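The symmetry of cross partials can be illustrated numerically: compute $f_{xy}$ as the $y$-derivative of $f_x$ and $f_{yx}$ as the $x$-derivative of $f_y$, and compare both with the analytic value. The $C^2$ function and evaluation point below are hypothetical choices:

```python
import math

# Hypothetical C^2 function f(x, y) = x^3 y + sin(xy); its cross partials are
# f_xy = f_yx = 3x^2 + cos(xy) - xy sin(xy)
def f(x, y):
    return x**3 * y + math.sin(x * y)

h = 1e-5

def f_x(x, y):  # central difference in x
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def f_y(x, y):  # central difference in y
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

def f_xy(x, y):  # d/dy of f_x
    return (f_x(x, y + h) - f_x(x, y - h)) / (2 * h)

def f_yx(x, y):  # d/dx of f_y
    return (f_y(x + h, y) - f_y(x - h, y)) / (2 * h)

x0, y0 = 0.8, 1.3
cross1, cross2 = f_xy(x0, y0), f_yx(x0, y0)
analytic = 3*x0**2 + math.cos(x0*y0) - x0*y0*math.sin(x0*y0)
```

Both orders of differentiation give the same value, so the $(1,2)$ and $(2,1)$ entries of the Hessian coincide.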
