
©CUKeL OPERATIONS RESEARCH

PRESENTATION 5
Introduction to non-linear programming

Learning Outcomes
By the end of this topic you should be able to:
• Define a non-linear program

• Describe local and global solutions

• Optimize a non-linear program with at least a single constraint
©KJM

1. Introduction
Nonlinear programming deals with finding a solution that optimizes an objective function, possibly subject to nonlinear constraints on the decision variables. It differs from linear programming in that the functions defining the objective or the constraints may be nonlinear functions of the decision variables rather than linear.
CUK, Leading promoter of Digital Learning!
Nonlinear programming models are much more difficult to formulate and solve because nonlinear functions can take on such a wide variety of functional forms.
In one general form, the nonlinear programming problem is to find x = (x1 , x2 , . . . , xn ) so as to
Maximize f (x),
subject to
gi (x) ≤ bi , for i = 1, 2, . . . , m,
and
x ≥ 0,
where f (x) and the gi (x) are given functions of the n decision variables.
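To make this general form concrete, the following sketch (not from the notes; the objective, constraint, and all numbers are invented for illustration) encodes one f and one gi and finds the best feasible point by a crude grid search:

```python
def f(x):                         # hypothetical objective to maximize
    return 4 * x[0] - x[0] ** 2 + 3 * x[1] - x[1] ** 2

g = [lambda x: x[0] + 2 * x[1]]   # constraint functions g_i(x)
b = [4]                           # right-hand sides b_i

def is_feasible(x, tol=1e-9):
    # feasible if every g_i(x) <= b_i (within a float tolerance) and x >= 0
    return all(gi(x) <= bi + tol for gi, bi in zip(g, b)) and min(x) >= 0

# Crude grid search over a 0.1-spaced grid of the feasible region.
best = max(
    ((i / 10, j / 10) for i in range(41) for j in range(41)
     if is_feasible((i / 10, j / 10))),
    key=f,
)
print(best, f(best))
```

Grid search is only a teaching device here; realistic problems need the algorithms discussed later in these notes.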
There are many different types of nonlinear programming problems, depending on the characteristics of the f (x) and gi (x) functions, and different algorithms are used for the different types. For certain types where the functions have simple forms, problems can be solved relatively efficiently, but in other cases solving even small problems is a real challenge.

1.1. Local and global solutions


One of the reasons why it is difficult to solve nonlinear programs is the problem of distinguishing a local optimum from a global optimum. The information usually available is:

• the point x itself



• the value of the objective function at x

• the value of the constraints at x

• the gradient at x (the value of the 1st derivative of f (x) at x)


• the Hessian matrix at x (the value of the 2nd derivative of f (x) at x)

With this information, we can tell when we are at a local minimum or maximum point, but it is not possible to know whether a different and even better minimum or maximum point exists elsewhere.

Definition: Local solution/optimum
A feasible point that has a better value of the objective function than any other feasible point in a small neighbourhood around it (a small ball of radius ε).

Definition: Global solution/optimum
A feasible point that has a better value of the objective function than any other feasible point anywhere in the feasible region.
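These definitions can be illustrated with a small experiment (the function, step size, and starting points below are invented for illustration): a one-variable function with two minima, where a local descent method reports different answers depending on where it starts.

```python
# f has two minima: a global one near x = -1 and a local one near x = +1.
# Gradient descent from different starts reaches different minima, showing
# why local information alone cannot certify a global optimum.

def f(x):
    return (x * x - 1) ** 2 + 0.3 * x

def fprime(x):
    return 4 * x * (x * x - 1) + 0.3

def descend(x, step=0.01, iters=5000):
    for _ in range(iters):
        x -= step * fprime(x)
    return x

left = descend(-2.0)    # converges near x = -1 (the global minimum)
right = descend(2.0)    # converges near x = +1 (only a local minimum)
print(left, f(left), right, f(right))
```

Both returned points are local optima, but only the left one is global: f(left) is strictly smaller than f(right).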
Figure 5.1: Illustration

1.2. Single variable unconstrained local nonlinear optimization
Here we consider the case in which the objective function is to be optimized with no constraints. If the function is differentiable, we can apply basic calculus to identify a local minimum or maximum as follows:
+ Determine df (x)/dx and d²f (x)/dx²; stationary points are the solutions of df (x)/dx = 0.
+ If d²f (x)/dx² = f''(x) > 0 at the stationary point, then this stationary point is said to be a minimum.
+ If d²f (x)/dx² = f''(x) < 0 at the stationary point, then this stationary point is said to be a maximum. (Here we have used f (x) = y.)
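The second-derivative test above can be sketched in code with numerical derivatives (the helper names and the sample function are invented for illustration, not from the notes):

```python
# Central-difference approximations of f'(x) and f''(x), used to classify
# a stationary point as a minimum or a maximum.

def d1(f, x, h=1e-5):
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(f, x, h=1e-4):
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

def classify(f, x, tol=1e-4):
    """Classify a stationary point x of f (assumes f'(x) is ~0)."""
    assert abs(d1(f, x)) < tol, "x is not a stationary point"
    curv = d2(f, x)
    if curv > tol:
        return "minimum"
    if curv < -tol:
        return "maximum"
    return "inconclusive (higher derivatives needed)"

f = lambda x: x ** 3 - 3 * x      # stationary points at x = +1 and x = -1
print(classify(f, 1.0))           # f''(1) = 6 > 0, so a minimum
print(classify(f, -1.0))          # f''(-1) = -6 < 0, so a maximum
```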
[Figure: sketch of a minimum point — dy/dx = 0 at the point, d²y/dx² > 0, with the slope dy/dx changing sign from negative to positive — and of a maximum point — dy/dx = 0 at the point, d²y/dx² < 0, with the slope changing sign from positive to negative.]
[Figure: sketch of a point of inflexion — dy/dx = 0 at the point, dy/dx > 0 on both sides, and the first nonvanishing higher derivative satisfies dᵏy/dxᵏ ≠ 0 for some k > 2.]

On computers, the implementation is done via numerical search methods: the computer works with values of the function, as opposed to algebraic manipulation, and searches for the best solution through rapid substitution and evaluation of the function according to some rules. Some of the well-known methods are:
1. Bisection
2. Fibonacci
3. Gradient ascent or descent
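As one illustration of the first method, bisection can be used to locate a stationary point by bracketing a sign change of the derivative (a sketch under that assumption; the sample quadratic is invented for illustration and is not prescribed by the notes):

```python
def bisect_stationary(fprime, lo, hi, tol=1e-8):
    """Bisect on the sign of f' to find a stationary point.

    Requires fprime(lo) and fprime(hi) to have opposite signs.
    """
    assert fprime(lo) * fprime(hi) < 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if fprime(lo) * fprime(mid) <= 0:
            hi = mid        # sign change is in [lo, mid]
        else:
            lo = mid        # sign change is in [mid, hi]
    return (lo + hi) / 2

# Sample concave quadratic f(x) = -0.6x^2 + 13x, so f'(x) = -1.2x + 13
# and the maximizer is x = 13/1.2.
x_star = bisect_stationary(lambda x: -1.2 * x + 13, 0.0, 20.0)
print(x_star)
```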

The following is a brief explanation of the gradient descent algorithm. Let f have continuous first partial derivatives. The gradient ∇f (x) is, according to our conventions, defined as an n-dimensional row vector; for convenience we also use its transpose, an n-dimensional column vector. The method of steepest descent is defined by the iterative algorithm

xk+1 = xk − αk ∇f (xk )

where αk is a nonnegative scalar minimizing f (xk − αk ∇f (xk )).
In words, from the point xk we search along the direction of the negative gradient −∇f (xk ) to a minimum point on this line; this minimum point is taken to be xk+1 . The iteration stops when f attains the required (local) minimum value.
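The iteration can be sketched as follows (a minimal sketch only: the numerical gradient, the golden-section line search for αk, and the test function are all invented for illustration; a textbook implementation would use analytic gradients):

```python
def grad(f, x, h=1e-6):
    """Numerical gradient of f at x via central differences."""
    g = []
    for i in range(len(x)):
        xp = list(x); xm = list(x)
        xp[i] += h; xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def line_min(phi, a=0.0, b=1.0, iters=60):
    """Golden-section search for the minimizer of phi on [a, b]."""
    r = (5 ** 0.5 - 1) / 2
    for _ in range(iters):
        c, d = b - r * (b - a), a + r * (b - a)
        if phi(c) < phi(d):
            b = d
        else:
            a = c
    return (a + b) / 2

def steepest_descent(f, x, iters=50):
    for _ in range(iters):
        g = grad(f, x)
        # choose alpha_k >= 0 minimizing f along the negative gradient
        phi = lambda a: f([xi - a * gi for xi, gi in zip(x, g)])
        alpha = line_min(phi)
        x = [xi - alpha * gi for xi, gi in zip(x, g)]
    return x

f = lambda x: (x[0] - 1) ** 2 + 2 * (x[1] + 2) ** 2
print(steepest_descent(f, [5.0, 5.0]))   # approaches the minimizer (1, -2)
```

The line search is restricted to [0, 1] here, which happens to contain the exact step for this well-conditioned quadratic; a production implementation would bracket the step size adaptively.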
2. Single variable constrained local nonlinear optimization
Example.
The Riverwood Paneling Company makes two kinds of wood paneling, Colonial and Western. The company has developed the following nonlinear programming model to determine the optimal number of sheets of Colonial paneling (x1 ) and Western paneling (x2 ) to produce to maximize profit, subject to a labor constraint:
Maximize Z = 4x1 − 0.1x1² + 5x2 − 0.2x2²
subject to
x1 + 2x2 = 40 (hours)
Determine the optimal solution to this nonlinear programming model.
Solution: The first step in the substitution method is to solve the constraint equation for one variable in terms of the other. We will arbitrarily decide to solve for x1 , as follows:
x1 = 40 − 2x2
Now wherever x1 appears in the nonlinear objective function, we substitute this expression to get
Z = 4(40 − 2x2 ) − 0.1(40 − 2x2 )² + 5x2 − 0.2x2²
= −0.6x2² + 13x2
This is now an unconstrained optimization function, and we can solve it by differentiating and setting the derivative equal to zero:
dZ/dx2 = −1.2x2 + 13 = 0, leading to x2 ≈ 10.8 sheets of Western paneling.
To determine x1 we substitute x2 into the constraint equation:
x1 + 2(10.8) = 40, leading to x1 ≈ 18.4 sheets of Colonial paneling.
Substituting these values into the original objective function gives the total profit:
Profit = 4(18.4) − 0.1(18.4²) + 5(10.8) − 0.2(10.8²) ≈ 70.42
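The arithmetic in this example can be checked numerically (a verification sketch, not part of the original notes):

```python
# Substitute x1 = 40 - 2*x2 into the objective; the reduced objective is
# Z(x2) = -0.6*x2**2 + 13*x2, so dZ/dx2 = -1.2*x2 + 13 = 0 at the optimum.

def profit(x1, x2):
    return 4 * x1 - 0.1 * x1 ** 2 + 5 * x2 - 0.2 * x2 ** 2

x2 = 13 / 1.2            # = 10.833..., rounded to 10.8 in the text
x1 = 40 - 2 * x2         # = 18.333..., rounded to 18.4 in the text
print(x1, x2, profit(x1, x2))   # profit is about 70.42
```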
Exercise 1. The Pottery Company has developed the following nonlinear programming model to determine the optimal number of bowls (x1 ) and mugs (x2 ) to produce each day:
Maximize Z = 7x1 − 0.3x1² + 8x2 − 0.4x2²
subject to
4x1 + 5x2 = 100 (constraint on hours)
Determine the optimal solution to this nonlinear programming model using the substitution method.

Exercise 2. The Evergreen Fertilizer Company produces two types of fertilizer, Fastgro and Super Two. The company has developed the following nonlinear programming model to determine the optimal number of bags of Fastgro (x1 ) and Super Two (x2 ) that it must produce each day to maximize profit, given a constraint on available potassium:
Maximize Z = 30x1 − 2x1² + 25x2 − 0.5x2²
subject to
3x1 + 6x2 = 300
Determine the optimal solution to this nonlinear programming model using the substitution method.
Note that this method of substitution is easy only when the functions are simple (i.e., the highest order of a variable is a power of two), there are only two variables, and the single constraint is an equation, as in each example above. The method becomes very difficult if the constraint becomes complex. An alternative solution approach that is not quite as restricted is the method of Lagrange multipliers.

3. Integer Programming
An integer program is an optimization program in which some or all of the variables are restricted to be integers. Problems in which this is the case are called integer programs (IPs), and the subject of solving such programs is called integer programming (also referred to by the initials IP).
An integer linear program in canonical form is expressed as:

maximize cᵀx
subject to Ax ≤ b,
x ≥ 0 and integer,

where c and b are vectors and A is a matrix, all of whose entries are integers.
Examples include the transportation problem and the assignment problem, among many others.
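For very small instances, the canonical form can even be solved by brute-force enumeration (a sketch only; the data below are invented for illustration, and real integer programs require methods such as branch-and-bound or cutting planes):

```python
from itertools import product

# Tiny integer linear program: maximize c^T x subject to Ax <= b,
# x >= 0 and integer. All data are hypothetical.
c = [3, 2]                 # objective coefficients
A = [[1, 1], [2, 1]]       # constraint matrix
b = [4, 6]                 # right-hand sides

def feasible(x):
    return all(sum(a * v for a, v in zip(row, x)) <= bi
               for row, bi in zip(A, b))

# Enumerate integer points in a box large enough to contain every
# feasible solution (practical only for very small problems).
best = max((x for x in product(range(7), repeat=2) if feasible(x)),
           key=lambda x: sum(ci * v for ci, v in zip(c, x)))
print(best)
```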
