

Introduction

* The method of Lagrange multipliers finds the stationary points of a constrained optimization problem; the feasible stationary points are compared, and the best among them is the global optimum.

* The method applies directly to problems with equality constraints. Inequality constraints have to be converted to equality constraints first.

* A stationary point may be a maximum, a minimum, an inflection point, or a saddle point of the function.

[Figure: types of stationary points — maximum, minimum, inflection, saddle point. Source: http://www.math.okstate.edu/~yqwang/teaching/math4553_spring09/demo/minmax1.png]

Max Z = F(X1, X2, X3, …, Xn)

s.t.

g1(X1, X2, X3, …, Xn) = b1
g2(X1, X2, X3, …, Xn) = b2
g3(X1, X2, X3, …, Xn) = b3
:
gm(X1, X2, X3, …, Xn) = bm

For the method of Lagrange multipliers, the objective function, F, and the constraints, g1 to gm, should be continuous and differentiable. The problem should also be bounded, i.e., the optimal value of Z should not be infinity or negative infinity. By definition, b1 to bm are constants.

Why are these conditions necessary?

The Lagrangian

The first step is to convert the problem from a constrained one to an unconstrained one. This is done by multiplying each constraint with a Lagrange multiplier, λi, and subtracting the result from the objective function. The modified objective function is called the Lagrangian. For a maximization problem:

L = F − Σ λi (gi − bi), summed over i = 1 to m

Each Lagrange multiplier, λi, can be thought of as the unit penalty incurred when that constraint is violated.

Stationary points of the Lagrangian, L, can now be determined by setting the following n + m partial derivatives to zero:

dL/dX1 = 0
dL/dX2 = 0
:
dL/dXn = 0
dL/dλ1 = 0
dL/dλ2 = 0
:
dL/dλm = 0

The multipliers λ1 to λm are additional variables that are introduced with the construction of the Lagrangian.

Solving the partial derivatives as a series of simultaneous equations gives a set of stationary points. Each stationary point refers to a unique combination of X1, X2, …, Xn and λ1, λ2, …, λm values.

One or more of the stationary points may be infeasible, i.e., they are in violation of at least one constraint. Once determined, those that are feasible are compared, and the one giving the maximum objective function value is the global optimum.

For minimization problems, the procedure is the same except that the Lagrangian, L, is defined as:

L = F + λ1(g1 − b1) + λ2(g2 − b2) + … + λm(gm − bm)

L = F + Σ λi (gi − bi), summed over i = 1 to m

It is now a + instead of a − in front of each multiplier. The feasible stationary point giving the least (instead of the maximum) objective function value is then the global optimum.

Does it really matter whether we define L with negatives or positives in front of the λi?

Consider the following constrained optimization problem:

MAX Z = X³ − 4X + Y² + 5

s.t.

X + Y = 4
X = 0

To apply the method of Lagrange multipliers, modify the objective function to the following, converting the problem from a constrained problem to an unconstrained problem:

L = X³ − 4X + Y² + 5 − λ1(X + Y − 4) − λ2 X

To find the stationary points of L:

dL/dX = 3X² − 4 − λ1 − λ2 = 0   (1)
dL/dY = 2Y − λ1 = 0   (2)
dL/dλ1 = −X − Y + 4 = 0   (3)
dL/dλ2 = −X = 0   (4)

Substituting (4) into (3) gives: X = 0, Y = 4

Substituting that into (2) gives: λ1 = 8

And that with X and Y into (1) gives: λ2 = −12

Because of the equality constraints, it so happens that this particular problem has just one stationary point. Therefore, there is no need to compare stationary points, and the stationary point identified is the optimum.
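As a quick sanity check (a sketch in Python, not part of the original slides; the names `grad_L`, `lam1`, and `lam2` are this sketch's own, with `lam` standing in for λ), the point X = 0, Y = 4, λ1 = 8, λ2 = −12 can be verified against equations (1)–(4):

```python
# Verify that (X, Y, lam1, lam2) = (0, 4, 8, -12) makes all four partial
# derivatives of L = X^3 - 4X + Y^2 + 5 - lam1*(X + Y - 4) - lam2*X vanish.

def grad_L(X, Y, lam1, lam2):
    dX = 3 * X**2 - 4 - lam1 - lam2   # eqn (1)
    dY = 2 * Y - lam1                 # eqn (2)
    dlam1 = -X - Y + 4                # eqn (3)
    dlam2 = -X                        # eqn (4)
    return (dX, dY, dlam1, dlam2)

g = grad_L(0.0, 4.0, 8.0, -12.0)
Z = 0.0**3 - 4 * 0.0 + 4.0**2 + 5    # objective at the stationary point
print(g, Z)   # all four derivatives are 0; Z = 21
```

If any component of `g` were non-zero, the candidate point would not be stationary.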

Interpretation of Lagrange multipliers, λ1 = 8 and λ2 = −12

Principle 1:

When the Lagrange multiplier associated with a constraint is non-zero, the constraint is binding. When the multiplier is zero, the constraint is non-binding. Both λ1 and λ2 are non-zero; therefore, both constraints are binding.

Principle 2:

The Lagrange multiplier represents the improvement in the objective function, Z, with respect to a change in the RHS of the corresponding constraint. For a maximization problem, improvement means an increase; for a minimization problem, a decrease.

λ1 = 8 means that if the RHS of the first constraint were increased by 1 unit, the objective function value at the optimum would increase (since this is a maximization problem) by 8 units.

λ2 = −12 means that if the RHS of the second constraint were increased by 1 unit, the objective function value at the optimum would decrease by 12 units.

Let us test out Principle 2 by solving and re-solving the problem for different values of the RHS of the constraints. To save time, we will use Solver in Excel.

[Table: Solver results for each constraint (Constraint 1, λ1 = 8; Constraint 2, λ2 = −12), with columns: RHS of Constraint, Change in RHS, Objective Z, Change in Z, ΔZ/ΔRHS.]

Does it make sense now why it is important how the signs in front of the Lagrange multipliers in the Lagrangian, L, are defined depending on whether the problem is a maximization or a minimization problem?
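Principle 2 can also be checked in a few lines of code instead of Solver (a sketch in Python; `Z_opt` is this sketch's own name). Because both constraints of the example are equalities, they pin down the solution point, so the optimal Z can be written directly as a function of the two RHS values b1 and b2:

```python
def Z_opt(b1, b2):
    # The constraints X + Y = b1 and X = b2 together fix the solution point.
    X = b2
    Y = b1 - b2
    return X**3 - 4 * X + Y**2 + 5

db = 1e-6   # small perturbation of the RHS
sens1 = (Z_opt(4 + db, 0) - Z_opt(4, 0)) / db   # ~  8.0, matching lam1 =  8
sens2 = (Z_opt(4, db) - Z_opt(4, 0)) / db       # ~ -12.0, matching lam2 = -12
print(round(sens1, 3), round(sens2, 3))
```

The finite-difference sensitivities reproduce the multipliers, which is exactly what the Solver table on the previous slide demonstrates.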

Consider the following constrained optimization problem:

MIN Z = X³ − 4X + Y² + 5

s.t.

X + Y = 4
X >= 0
Y <= 10

To apply the method of Lagrange multipliers, the constraints are first converted to equality constraints. To do that, slack and surplus variables are added at the appropriate places:

X + Y = 4 (already an equality)
X >= 0  --->  X − S1² = 0
Y <= 10 --->  Y + S2² = 10

L = X³ − 4X + Y² + 5 + λ1(X + Y − 4) + λ2(X − S1²) + λ3(Y + S2² − 10)

To find the stationary points of the Lagrangian:

dL/dX = 3X² − 4 + λ1 + λ2 = 0   (5)
dL/dY = 2Y + λ1 + λ3 = 0   (6)
dL/dλ1 = X + Y − 4 = 0   (7)
dL/dλ2 = X − S1² = 0   (8)
dL/dλ3 = Y + S2² − 10 = 0   (9)
dL/dS1 = −2 λ2 S1 = 0   --->   λ2 = 0 or S1 = 0
dL/dS2 = 2 λ3 S2 = 0   --->   λ3 = 0 or S2 = 0

Possible solution 1: λ2 = 0 and λ3 = 0

Eqn (6): λ1 = −2Y
Eqn (5): 3X² − 4 − 2Y = 0   (10)
Eqn (10) + 2 × Eqn (7):   3X² + 2X − 12 = 0   --->   X = 1.694 or X = −2.361

The root X = −2.361 is discarded since Eqn (8) would give S1² = X < 0. Taking X = 1.694:

--->   X = 1.694, Y = 2.306
--->   λ1 = −4.612, λ2 = 0, λ3 = 0
--->   S1² = 1.694, S2² = 7.694
--->   Z = 8.403

Possible solution 2: λ2 = 0 and S2 = 0

Eqn (9): Y = 10
Eqn (7): X = −6
Eqn (8): S1² = −6

This gives the second stationary point, but this point is infeasible since S1² is negative.

Possible solution 3: S1 = 0 and λ3 = 0

Eqn (8): X = 0
Eqn (7): Y = 4

This gives the third stationary point:

--->   X = 0, Y = 4
--->   λ1 = −8, λ2 = 12, λ3 = 0
--->   S1² = 0, S2² = 6
--->   Z = 21

Possible solution 4: S1 = 0 and S2 = 0

Eqn (8): X = 0
Eqn (9): Y = 10

This gives the fourth stationary point, but this point is infeasible since it violates Eqn (7).
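The four-case enumeration above can be collected into a short script (a sketch in Python; the per-case values are the hand-derived ones from equations (5)–(9), and feasibility means S1² = X >= 0, S2² = 10 − Y >= 0, and X + Y = 4):

```python
import math

def Z(X, Y):
    return X**3 - 4 * X + Y**2 + 5

candidates = []

# Case 1 (lam2 = 0, lam3 = 0): roots of 3X^2 + 2X - 12 = 0, with Y = 4 - X
for X in [(-2 + math.sqrt(148)) / 6, (-2 - math.sqrt(148)) / 6]:
    Y = 4 - X
    if X >= 0 and Y <= 10:        # keep only feasible roots (S1^2, S2^2 >= 0)
        candidates.append((X, Y))

# Case 2 (lam2 = 0, S2 = 0): X = -6, Y = 10 -> infeasible (S1^2 = X < 0)
# Case 3 (S1 = 0, lam3 = 0): X = 0, Y = 4  -> feasible
candidates.append((0.0, 4.0))
# Case 4 (S1 = 0, S2 = 0):   X = 0, Y = 10 -> violates X + Y = 4

best = min(candidates, key=lambda p: Z(*p))
print(best, Z(*best))   # X ~ 1.694, Y ~ 2.306, Z ~ 8.403
```

Running this confirms that of the two feasible stationary points, the one from Case 1 gives the smaller Z and is therefore the optimum of the minimization problem.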

Of the four possible solutions, two give feasible stationary points. Before going further, let us have a graphical look at where those stationary points lie. What pattern do you observe?

[Figure: contour plot of Z in the X–Y plane (contours at Z = 8.4, 15, and 20), with the constraints X + Y = 4, X >= 0, and Y <= 10 drawn and the stationary points marked.]

Of the stationary points, which is the optimum? Which constraints are binding or non-binding? Graphically? From the Lagrange multipliers? From the slack and surplus variables?

Interpretation of Lagrange multipliers, λ1 = −4.612, λ2 = 0, λ3 = 0

As before, let us test the Lagrange multipliers by solving the problem multiple times for different values of the RHS of the constraints.

[Table: Solver results for each constraint (Constraint 1, λ1 = −4.612; Constraint 2, λ2 = 0; Constraint 3, λ3 = 0), with columns: RHS of Constraint, Change in RHS, Objective Z, Change in Z, ΔZ/ΔRHS.]

In-Class Exercise

There are two polluters, the first discharging R1 kg of waste into the atmosphere every day and the second R2 kg. The profits of the polluters increase with R1 and R2:

Profit of polluter 1 (million $) = 10 R1²
Profit of polluter 2 (million $) = 5 R2²

Use the method of Lagrange multipliers to find the values of R1 and R2 that give the maximum total profit. Note that regulations restrict the sum of R1 and R2 to be no greater than 5 kg.

Based on your results, estimate the increase in profit you would expect if the restriction on the sum of R1 and R2 were made less strict and increased by 0.1 kg.
