Huijuan Li
Department of Electrical Engineering and Computer Science
University of Tennessee, Knoxville, TN 37921 USA
(Dated: September 28, 2008)
This paper presents an introduction to the Lagrange multiplier method, a basic mathematical tool for the constrained optimization of differentiable functions, especially for nonlinear constrained optimization. An application in the field of power systems economic operation illustrates how the method is used.
Optimization problems, which seek to minimize or maximize a real function, play an important role in the real world. They can be classified into unconstrained optimization problems and constrained optimization problems. Many practical problems in science, engineering, economics, and even everyday life can be formulated as constrained optimization problems, such as minimizing the energy of a particle in physics[1] or maximizing the profit of investments in economics.[2]

In unconstrained problems, stationary-point theory gives the necessary condition for finding the extreme points of the objective function f(x1, · · · , xn). The stationary points are the points where the gradient ∇f is zero, that is, where each of the partial derivatives is zero. All the variables in f(x1, · · · , xn) are independent, so they can be set arbitrarily to seek the extreme of f. However, in constrained optimization problems the variables can no longer be chosen arbitrarily: they are restricted to the feasible region, the set of points satisfying the constraints. A constrained optimization problem can be formulated in the standard form:[3]

min f(x1, · · · , xn)   (1)

Subject to: G(x1, · · · , xn) = 0   (2)

            H(x1, · · · , xn) ≤ 0   (3)

where G and H are function vectors.

Substitution is an intuitive way to deal with such problems, but it applies only to equality-constrained optimization problems and fails in most nonlinear constrained problems, where it is difficult to obtain explicit expressions for the variables to be eliminated from the objective function. The Lagrange multipliers method, named after Joseph Louis Lagrange, provides an alternative for constrained nonlinear optimization problems; it can handle both equality and inequality constraints.

In this paper, the rule for the Lagrange multipliers is presented first, and then its application to the field of power systems economic operation is introduced.

THE LAGRANGE MULTIPLIERS METHOD

In this section, the Lagrange multipliers method for nonlinear optimization problems with only equality constraints is discussed first; the mathematical proof and a geometric explanation are presented. Then the method is extended to cover inequality constraints.

Without the inequality constraints, the standard form of the nonlinear optimization problem can be formulated as:

min f(x1, · · · , xn)   (4)

Subject to: G(x1, · · · , xn) = 0   (5)

where G = [G1(x1, · · · , xn), · · · , Gk(x1, · · · , xn)]^T is the constraint function vector.

The Lagrange function F is constructed as:[4]

F(X, λ) = f(X) − λG(X)   (6)

where X = [x1, · · · , xn] is the variable vector and λ = [λ1, · · · , λk]; λ1, · · · , λk are called Lagrange multipliers.

The extreme points of f and the Lagrange multipliers λ satisfy:

∇F = 0   (7)

that is:

∂f/∂xi − Σ_{m=1}^{k} λm ∂Gm/∂xi = 0,   i = 1, . . . , n   (8)

and

G(x1, · · · , xn) = 0   (9)

The Lagrange multipliers method thus defines the necessary conditions for constrained nonlinear optimization problems.

Mathematical Proof for Lagrange Multipliers Method

The proof is illustrated on the nonlinear optimization problem (10)-(12), which has four variables and two equality constraints, Φ = 0 and Ψ = 0.
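As a quick sanity check of the stationarity conditions (7)-(9), consider the toy problem of minimizing f = x² + y² subject to x + y − 1 = 0. Because f is quadratic and the constraint is linear, condition (8) together with (9) forms a linear system. The following sketch (Python with numpy; both the example problem and the code are illustrative additions, not part of the original paper) solves it:

```python
import numpy as np

# Toy instance of conditions (8)-(9):
#   minimize f = x^2 + y^2  subject to  G = x + y - 1 = 0
# Stationarity of F = f - lam*G gives:
#   dF/dx: 2x - lam = 0
#   dF/dy: 2y - lam = 0
#   G:     x + y    = 1
A = np.array([[2.0, 0.0, -1.0],
              [0.0, 2.0, -1.0],
              [1.0, 1.0,  0.0]])
b = np.array([0.0, 0.0, 1.0])
x, y, lam = np.linalg.solve(A, b)
print(x, y, lam)  # 0.5 0.5 1.0
```

Since this f is convex and the constraint is linear, the unique stationary point (x, y) = (0.5, 0.5) with λ = 1 is in fact the constrained minimum, which matches the symmetry of the problem.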
f_x − λΦ_x − µΨ_x + (f_z − λΦ_z − µΨ_z) ∂z/∂x + (f_t − λΦ_t − µΨ_t) ∂t/∂x = 0   (20)

Hence, by the definition of λ and µ, we get

f_x − λΦ_x − µΨ_x = 0   (21)

Similarly, the corresponding conditions for the remaining variables are obtained.

Extension to Inequality Constraints

The Lagrange multipliers method also covers the case of inequality constraints, as in (3). In the feasible region, H(x1, · · · , xn) = 0 or H(x1, · · · , xn) < 0. When Hi = 0, Hi is said to be active; otherwise Hi is inactive. The augmented Lagrange function is formulated as:[5]

F(X, λ, µ) = f(X) − λG(X) − µH(X)

−∇f is the descending direction of f, and when Hi is active, this direction points out of the feasible region and towards the forbidden side, which means ∇Hi > 0. This is not the solution direction. We can enforce µi ≤ 0 to keep the seeking direction in the feasible region.

When extended to cover the inequality constraints, the rule for the Lagrange multipliers method can be generalized as:

∇f(X) − Σ_{i=1}^{k} λi ∇Gi(X) − Σ_{j=1}^{m} µj ∇Hj(X) = 0   (25)

µi ≤ 0,   i = 1, · · · , m   (26)

µi Hi = 0,   i = 1, · · · , m   (27)

G(X) = 0   (28)

In summary, for inequality constraints we add them to the Lagrange function just as if they were equality constraints, except that we require µi ≤ 0, and when Hi ≠ 0, µi = 0. This situation is compactly expressed by (27).

APPLICATION TO THE POWER SYSTEMS ECONOMIC OPERATION

Each generator output xi has the unit MW. Since W = J/s and the cost to produce 1 J has the unit $, power multiplied by unit cost has the unit $/s, equivalently $/hr. Hence, the cost fi has the unit $/hr.

The first step in determining the optimal scheduling of the generators is to express the problem in mathematical form. The optimization statement is:

min: f = f1 + f2 + f3 = x1 + 0.0625x1² + x2 + 0.0125x2² + x3 + 0.0250x3²   (32)

subject to: G = x1 + x2 + x3 − 952 = 0   (33)

The corresponding Lagrange function is:

F = x1 + 0.0625x1² + x2 + 0.0125x2² + x3 + 0.0250x3² − λ(x1 + x2 + x3 − 952)   (34)

Setting ∇F = 0 yields the following set of linear equations:

| 0.125  0      0     −1 | | x1 |   | −1  |
| 0      0.025  0     −1 | | x2 | = | −1  |
| 0      0      0.05  −1 | | x3 |   | −1  |
| 1      1      1      0 | | λ  |   | 952 |   (35)

Solving (35) yields x1 = 112 MW, x2 = 560 MW, x3 = 280 MW, and λ = 15.
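Because the Lagrange function (34) is quadratic, the conditions ∇F = 0 are linear and the system (35) can be solved directly. A short numerical check (Python with numpy; the code is an illustrative addition, not part of the original paper):

```python
import numpy as np

# Linear system (35) from setting the gradient of the Lagrange
# function (34) to zero: three stationarity rows plus the
# demand constraint x1 + x2 + x3 = 952 MW.
A = np.array([[0.125, 0.0,   0.0,  -1.0],
              [0.0,   0.025, 0.0,  -1.0],
              [0.0,   0.0,   0.05, -1.0],
              [1.0,   1.0,   1.0,   0.0]])
b = np.array([-1.0, -1.0, -1.0, 952.0])
x1, x2, x3, lam = np.linalg.solve(A, b)
print(x1, x2, x3, lam)  # 112.0 560.0 280.0 15.0
```

The outputs sum to the 952 MW demand, and λ = 15 is the common incremental cost: each generator is loaded to the point where dfi/dxi = λ.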
The Lagrange multipliers method is capable of dealing with both equality-constrained and inequality-constrained nonlinear optimization problems. Many computational programming methods, such as the barrier and interior point methods and the penalty and augmented Lagrange methods,[5] have been developed from the basic rules of the Lagrange multipliers method. The Lagrange multipliers method and its extensions are widely applied in science, engineering, economics, and everyday life.

[2] N. Schofield, Mathematical Methods in Economics and Social Choice (Springer, 2003), 1st ed.
[3] D. P. Bertsekas, Constrained Optimization and Lagrange Multiplier Methods (Academic Press, 1982), 1st ed.
[4] R. Courant, Differential and Integral Calculus (Interscience Publishers, 1937), 1st ed.
[5] D. P. Bertsekas, Nonlinear Programming (Athena Scientific, 1999), 2nd ed.
[6] M. Crow, Computational Methods for Electric Power Systems (CRC, 2003), 1st ed.