
Practice Problem

Classify each of the following functions as strictly convex, convex, concave, or strictly concave.
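The specific functions to be classified are not reproduced in this text. As a generic illustration of the four categories (not the original problem set):
• f(x) = x² is strictly convex
• f(x) = |x| is convex, but not strictly convex
• f(x) = ln x (for x > 0) is strictly concave
• f(x) = ax + b is both convex and concave, but neither strictly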

Nonlinear programming (NLP)


Introduction
• A nonlinear program is similar to a linear program in that it is composed of an objective function, general constraints, and variable bounds. The difference is that a nonlinear program includes at least one nonlinear function, which could be the objective function or some or all of the constraints.

• We often encounter problems that cannot be solved by LP algorithms.


• Solving nonlinear programming models is generally much more complicated than solving linear programming models, and the difficulty depends on the type of nonlinearity. For example (small illustrative problems are sketched below):
• The objective is linear but the constraints are nonlinear
• Nonlinear objective and linear constraints
• Nonlinear objective and nonlinear constraints
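The following small problems are generic illustrations of these three classes (they are not taken from the original slides):
• Linear objective, nonlinear constraint: minimize x + y subject to x² + y² ≤ 4
• Nonlinear objective, linear constraint: minimize x² + y² subject to x + y ≥ 1
• Nonlinear objective and nonlinear constraint: minimize exp(x) + y² subject to x·y = 1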
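In general form (a standard statement, reconstructed here because the original display is not reproduced in this text), a nonlinear program is written:
minimize f(x)
subject to ai ≤ gi(x) ≤ bi, i = 1, . . . , m
and lj ≤ xj ≤ uj, j = 1, . . . , n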

x is a vector of n decision variables (x1, . . . , xn), f is the objective function, and the gi are constraint functions.
The ai and bi are specified lower and upper bounds on the constraint functions, with ai ≤ bi, and lj and uj are lower and upper bounds on the variables, with lj ≤ uj.
If ai = −∞ and bi = +∞, the constraint gi places no restriction on the solution.
Similarly, lj = uj corresponds to a variable xj whose value is fixed, and lj = −∞ with uj = +∞ corresponds to a free variable.

The problem is nonlinear if one or more of the functions f, g1, g2, . . . , gm are nonlinear.
Methods of Solving NLP

• Lagrange multiplier method
• Direct substitution
• Quadratic programming
• Penalty function and augmented Lagrangian method
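Two of these ideas can be sketched briefly; the sketches below assume a single equality constraint h(x) = 0 and are generic illustrations, not material from the original slides.
• Direct substitution: use the constraint to eliminate a variable. For example, to minimize x² + y² subject to x + y = 1, substitute y = 1 − x and minimize x² + (1 − x)², which gives x = y = 1/2.
• Penalty function: replace the constrained problem "minimize f(x) subject to h(x) = 0" by the unconstrained problem "minimize f(x) + ρ·h(x)²" for a large penalty parameter ρ > 0; the augmented Lagrangian method adds this penalty term to the Lagrangian rather than to f alone.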


Lagrangian Multiplier Method
Example: Consider the following constrained optimization
problem


At a constrained optimum x∗, the gradient of the objective function ∇f(x∗) is orthogonal to the tangent plane of the constraint surface at x∗. The constraint gradient ∇h(x∗) is always orthogonal to this tangent plane as well; hence ∇f(x∗) and ∇h(x∗) are collinear, that is, they lie along the same line (pointing in the same or in opposite directions, depending on the problem). This means the two vectors must be scalar multiples of each other.
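In symbols, this collinearity condition is commonly written as: there exists a scalar λ such that ∇f(x∗) = λ ∇h(x∗).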
Lagrangian Multiplier Method


We now introduce a new function L(x, λ) called the Lagrangian function:
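Assuming the usual convention for an equality-constrained problem (optimize f(x) subject to h(x) = 0), which is not shown explicitly in this text, the Lagrangian is
L(x, λ) = f(x) − λ·h(x)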

Then the condition that the gradient of the Lagrangian function with respect to x, evaluated at (x∗, λ), is zero, together with the feasibility condition h(x∗) = 0, constitutes the first-order necessary conditions for optimality. The scalar λ is called a Lagrange multiplier.
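Written out under the same convention as above, these first-order necessary conditions are:
∇xL(x∗, λ) = ∇f(x∗) − λ ∇h(x∗) = 0 (stationarity)
h(x∗) = 0 (feasibility)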
Lagrangian Multiplier Method

Using the necessary conditions to find the optimum of the above example



Setting the first partial derivatives of L with respect to x and λ to zero and solving the resulting equations gives the optimum.
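The original example and its worked solution are not reproduced in this text. As a generic illustration of the procedure only, consider: minimize f(x, y) = x² + 2y² subject to h(x, y) = x + y − 3 = 0. The Lagrangian is
L(x, y, λ) = x² + 2y² − λ(x + y − 3)
∂L/∂x = 2x − λ = 0, ∂L/∂y = 4y − λ = 0, ∂L/∂λ = −(x + y − 3) = 0
The first two equations give x = λ/2 and y = λ/4; the constraint then gives 3λ/4 = 3, so λ = 4, x = 2, y = 1, and the minimum value is f(2, 1) = 6.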

Exercise


Maximize f(x, y) = xy
subject to h(x, y) = x + y − 12 = 0,
using the Lagrangian multiplier method.

Answer
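The original answer slide is not reproduced in this text; the following worked solution is consistent with the stated exercise. The Lagrangian is
L(x, y, λ) = xy − λ(x + y − 12)
∂L/∂x = y − λ = 0, ∂L/∂y = x − λ = 0, ∂L/∂λ = −(x + y − 12) = 0
The first two equations give x = y = λ; the constraint then gives 2λ = 12, so x = y = 6, λ = 6, and the maximum value is f(6, 6) = 36.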


Thanks!

Comments & Suggestions