
OPTIMIZATION

Outline

The Derivatives of Vector Functions

The Chain Rule for Vector Functions

1 The Derivatives of Vector Functions

1.1 Derivative of a Vector with Respect to a Vector

1.2 Derivative of a Scalar with Respect to a Vector: if y is a scalar and x is a vector variable, then ∂y/∂x is also called the gradient of y with respect to x, denoted ∇y.

1.3 Derivative of a Vector with Respect to a Scalar

Example 1. Given x = (x1, x2, x3)' and y = (y1, y2)' with y1 = x1^2 - x2 and y2 = x3^2 + 3*x2, find ∂y/∂x.

In Matlab:
  >> syms x1 x2 x3 real;
  >> y1=x1^2-x2;
  >> y2=x3^2+3*x2;
  >> J = jacobian([y1,y2], [x1 x2 x3])
  J =
  [ 2*x1, -1,    0]
  [    0,  3, 2*x3]
  >> J'
  ans =
  [ 2*x1,    0]
  [   -1,    3]
  [    0, 2*x3]
Note: Matlab defines the derivatives as the transposes of those given in this lecture.
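For readers without Matlab, the same Jacobian can be reproduced with SymPy; the following is an illustrative Python port of the session above, not part of the lecture:

```python
# SymPy equivalent of the Matlab jacobian() session above (illustrative port).
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
y1 = x1**2 - x2
y2 = x3**2 + 3*x2

# Like Matlab, SymPy puts one function per row, so this is the
# transpose of the derivative dy/dx as defined in this lecture.
J = sp.Matrix([y1, y2]).jacobian([x1, x2, x3])
dy_dx = J.T   # the lecture's convention
```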

Some useful vector derivative formulas. Writing
  Cx = [ Σt C1t xt ; Σt C2t xt ; ... ; Σt Cnt xt ],
we obtain
  ∂(x'C)/∂x = C
  ∂(Cx)/∂x = [ c11 c21 ... cn1 ; c12 c22 ... cn2 ; ... ; c1n c2n ... cnn ] = C'   (homework)
  ∂(x'x)/∂x = 2x

Important Property of the Quadratic Form x'Cx:
  ∂(x'Cx)/∂x = (C + C')x.
Proof: x'Cx = Σi xi (Σj Cij xj), so
  ∂(x'Cx)/∂xk = Σj Ckj xj + Σi xi Cik,
and stacking these over k gives
  ∂(x'Cx)/∂x = Cx + C'x = (C + C')x.
If C is symmetric, ∂(x'Cx)/∂x = 2Cx.
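The identity can be verified symbolically; a small sketch for a generic 3x3 matrix C (chosen only for illustration):

```python
# Symbolic check that d(x'Cx)/dx = (C + C')x for a generic 3x3 matrix C.
import sympy as sp

x = sp.Matrix(sp.symbols('x1 x2 x3', real=True))
C = sp.Matrix(3, 3, sp.symbols('c11 c12 c13 c21 c22 c23 c31 c32 c33', real=True))

q = (x.T * C * x)[0, 0]                         # the scalar quadratic form
grad = sp.Matrix([sp.diff(q, xi) for xi in x])  # gradient d q / d x
rhs = (C + C.T) * x
```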

2 The Chain Rule for Vector Functions

Let z = z(y), where z is a function of y, which is in turn a function of x. Each entry of the matrix ∂z/∂x may be expanded with the scalar chain rule; collecting the entries, we can write the whole matrix as a product of the two Jacobian matrices.

The Chain Rule for Vector Functions (cont.)

Then, on transposing both sides, we finally obtain
  ∂z/∂x = (∂y/∂x)(∂z/∂y).
This is the chain rule for vectors (different from the conventional chain rule of calculus: the chain of matrices builds toward the left).

Example 2. x, y are as in Example 1 and z is a function of y defined as
  z1 = y1^2 - 2*y2,  z2 = y2^2 - y1,  z3 = y1^2 + y2^2,  z4 = 2*y1 + y2.
We have (columns ordered z1, z2, z3, z4; rows y1, y2)
  ∂z/∂y = [ 2*y1   -1     2*y1    2 ]
          [  -2    2*y2   2*y2    1 ]
Therefore, with ∂y/∂x from Example 1,
  ∂z/∂x = (∂y/∂x)(∂z/∂y)
        = [ 2*x1   0  ]            [ 4*x1*y1    -2*x1      4*x1*y1      4*x1 ]
          [  -1    3  ] (∂z/∂y) =  [ -2*y1-6    1+6*y2     -2*y1+6*y2   1    ]
          [  0    2*x3]            [ -4*x3      4*x3*y2    4*x3*y2      2*x3 ]

In Matlab:
  >> z1=y1^2-2*y2; z2=y2^2-y1; z3=y1^2+y2^2; z4=2*y1+y2;
  >> Jzx=jacobian([z1, z2, z3, z4],[x1 x2 x3])
  Jzx =
  [ 4*(x1^2-x2)*x1,       -2*x1^2+2*x2-6,            -4*x3]
  [          -2*x1,       6*x3^2+18*x2+1, 4*(x3^2+3*x2)*x3]
  [ 4*(x1^2-x2)*x1, -2*x1^2+20*x2+6*x3^2, 4*(x3^2+3*x2)*x3]
  [           4*x1,                    1,             2*x3]
  >> Jzx'
  ans =
  [ 4*(x1^2-x2)*x1,            -2*x1,       4*(x1^2-x2)*x1, 4*x1]
  [ -2*x1^2+2*x2-6,   6*x3^2+18*x2+1, -2*x1^2+20*x2+6*x3^2,    1]
  [          -4*x3, 4*(x3^2+3*x2)*x3,     4*(x3^2+3*x2)*x3, 2*x3]
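The chain rule of Example 2 can likewise be verified symbolically; a sketch in SymPy (an assumed Python port, not the lecture's code):

```python
# Verify dz/dx = (dy/dx)(dz/dy) for Examples 1 and 2.
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
y1, y2 = sp.symbols('y1 y2', real=True)

y_of_x = sp.Matrix([x1**2 - x2, x3**2 + 3*x2])
z_of_y = sp.Matrix([y1**2 - 2*y2, y2**2 - y1, y1**2 + y2**2, 2*y1 + y2])

# Lecture convention: derivatives are transposes of the Matlab-style Jacobians.
dy_dx = y_of_x.jacobian([x1, x2, x3]).T                    # 3 x 2
dz_dy = z_of_y.jacobian([y1, y2]).T                        # 2 x 4
sub = {y1: y_of_x[0], y2: y_of_x[1]}

dz_dx_chain = dy_dx * dz_dy.subs(sub)                      # chain builds to the left
dz_dx_direct = z_of_y.subs(sub).jacobian([x1, x2, x3]).T   # differentiate directly
```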

Outline
 Unconstrained Optimization
   Functions of One Variable
    o General Ideas of Optimization
    o First and Second Order Conditions
    o Local vs. Global Extremum
   Functions of Several Variables
    o First and Second Order Conditions
    o Local vs. Global Extremum
 Constrained Optimization
   Kuhn-Tucker Conditions
   Sensitivity Analysis
   Second Order Conditions

Unconstrained Optimization An unconstrained optimization problem is one where you only have to be concerned with the objective function you are trying to optimize. .  None of the variables in the objective function are constrained. An objective function is a function that you are trying to optimize.

General Ideas of Optimization
There are two ways of examining optimization:
 Maximization (example: maximize profit): you are looking for the highest point on the function.
 Minimization (example: minimize cost): you are looking for the lowest point on the function.
Maximizing f(x) is equivalent to minimizing -f(x).

Graphical Representation of a Maximum
[plot of y = f(x) = -x^2 + 8x, which peaks at y = 16 when x = 4 and returns to zero at x = 8]

Questions Regarding the Maximum
(Note: x* denotes the point where the function attains its maximum.)
What is the sign of f'(x) when x < x*? What is the sign of f'(x) when x > x*? What is f'(x) when x = x*?
Definition: a point x* on a function is said to be a critical point if f'(x*) = 0. This is the first order condition for x* to be a maximum/minimum.

Second Order Conditions
If x* is a critical point of the function f(x), can we decide whether it is a max, a min, or neither? Yes: examine the second derivative of f(x) at x*, f''(x*).
 x* is a maximum of f(x) if f''(x*) < 0.
 x* is a minimum of f(x) if f''(x*) > 0.
 x* can be a maximum, a minimum, or neither if f''(x*) = 0.

An Example of f''(x*) = 0
Suppose y = f(x) = x^3; then f'(x) = 3x^2 and f''(x) = 6x. This implies that x* = 0 and f''(x* = 0) = 0. Here x* = 0 is a saddle point: the point is neither a maximum nor a minimum.

Example of Using First and Second Order Conditions
Suppose you have the following function: f(x) = x^3 - 6x^2 + 9x. Then the first order condition to find the critical points is
  f'(x) = 3x^2 - 12x + 9 = 0.
This implies that the critical points are at x = 1 and x = 3.
[plot of f(x) for roughly -0.5 <= x <= 4]

Example of Using First and Second Order Conditions (cont.)
The next step is to determine whether the critical points are maxima or minima. These can be found by using the second order condition, with f''(x) = 6x - 12 = 6(x - 2).
 Testing x = 1 implies f''(1) = 6(1 - 2) = -6 < 0; hence at x = 1 we have a maximum.
 Testing x = 3 implies f''(3) = 6(3 - 2) = 6 > 0; hence at x = 3 we have a minimum.
Are these the ultimate maximum and minimum of the function f(x)?
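The same computation can be scripted; a sketch in SymPy (not part of the original slides):

```python
# Find and classify the critical points of f(x) = x^3 - 6x^2 + 9x.
import sympy as sp

x = sp.symbols('x', real=True)
f = x**3 - 6*x**2 + 9*x

crit = sorted(sp.solve(sp.diff(f, x), x))   # roots of f'(x) = 3x^2 - 12x + 9
f2 = sp.diff(f, x, 2)                       # f''(x) = 6x - 12
signs = [f2.subs(x, c) for c in crit]       # negative -> max, positive -> min
```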

Local vs. Global Maxima/Minima
A local maximum is a point x* such that f(x*) >= f(x) for all x in some open interval containing x*, and a local minimum is a point x* such that f(x*) <= f(x) for all x in some open interval containing x*.
A global maximum is a point x* such that f(x*) >= f(x) for all x in the domain of f, and a global minimum is a point x* such that f(x*) <= f(x) for all x in the domain of f.
For the previous example, f(x) → ∞ as x → ∞ and f(x) → -∞ as x → -∞, so neither critical point is a global max or min of f(x).

Local vs. Global Maxima/Minima (cont.)
When f''(x) >= 0 for all x, i.e., f(x) is a convex function, the local minimum x* is the global minimum of f(x).
When f''(x) <= 0 for all x, i.e., f(x) is a concave function, the local maximum x* is the global maximum of f(x).

Conditions for a Minimum or a Maximum Value of a Function of Several Variables
Correspondingly, for a function f(x) of several independent variables x:
1. Calculate ∇f(x) and set it to zero. Solve the equation set to get a solution vector x*.
2. Calculate ∇²f(x) and evaluate it at x*.
3. Inspect the Hessian matrix at the point x*, H(x*) = ∇²f(x*).

Hessian Matrix of f(x)
Let f(x) be a C² function of n variables. Then
  H(x) = ∇²f(x) = [ ∂²f/∂x1∂x1  ...  ∂²f/∂x1∂xn ]
                  [     ...     ...      ...     ]
                  [ ∂²f/∂xn∂x1  ...  ∂²f/∂xn∂xn ]
Since cross-partials are equal for a C² function, H(x) is a symmetric matrix.

Conditions for a Minimum or a Maximum Value of a Function of Several Variables (cont.)
Let f(x) be a C² function in Rⁿ. Suppose that x* is a critical point of f(x), i.e., ∇f(x*) = 0.
1. If the Hessian H(x*) is a negative definite matrix, then x* is a local maximum of f(x).
2. If the Hessian H(x*) is a positive definite matrix, then x* is a local minimum of f(x).
3. If the Hessian H(x*) is an indefinite matrix, then x* is neither a local maximum nor a local minimum of f(x).

Example
Find the local maxs and mins of f(x, y) = x³ - y³ + 9xy.
Firstly, compute the first order partial derivatives (i.e., the gradient of f(x, y)) and set them to zero:
  ∇f = [ ∂f/∂x ; ∂f/∂y ] = [ 3x² + 9y ; -3y² + 9x ] = 0
⇒ the critical points (x*, y*) are (0, 0) and (3, -3).

Example (Cont.)
We now compute the Hessian of f(x, y):
  ∇²f(x, y) = [ 6x    9 ]
              [  9  -6y ]
The first order leading principal minor is 6x and the second order leading principal minor is -36xy - 81.
 At (0, 0), these two minors are 0 and -81. Since the second order leading principal minor is negative, the Hessian is indefinite, so (0, 0) is a saddle of f(x, y), i.e., neither a max nor a min.
 At (3, -3), these two minors are 18 and 243, respectively. So the Hessian is positive definite and (3, -3) is a local min of f(x, y).
 Is (3, -3) a global min?
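A quick symbolic check of this example (a sketch, not the lecture's code):

```python
# Classify the critical points of f(x, y) = x^3 - y^3 + 9xy via the Hessian.
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = x**3 - y**3 + 9*x*y

grad = [sp.diff(f, v) for v in (x, y)]
crit = sp.solve(grad, [x, y], dict=True)   # includes (0, 0) and (3, -3)
H = sp.hessian(f, (x, y))                  # [[6x, 9], [9, -6y]]

m1 = H[0, 0].subs({x: 3, y: -3})           # first leading principal minor: 18
m2 = H.det().subs({x: 3, y: -3})           # second leading principal minor: 243
```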

Global Maxima and Minima of a Function of Several Variables
Let f(x) be a C² function in Rⁿ.
When f(x) is a convex function, i.e., ∇²f(x) is positive semidefinite for all x, and ∇f(x*) = 0, then x* is a global min of f(x).
When f(x) is a concave function, i.e., ∇²f(x) is negative semidefinite for all x, and ∇f(x*) = 0, then x* is a global max of f(x).

Example (Discriminating Monopolist)
 A monopolist producing a single output has two types of customers. If it produces q1 units for type 1, then these customers are willing to pay a price of 50 - 5q1 per unit. If it produces q2 units for type 2, then these customers are willing to pay a price of 100 - 10q2 per unit.
 The monopolist's cost of manufacturing q units of output is 90 + 20q.
 In order to maximize profits, how much should the monopolist produce for each market?
 Profit is: f(q1, q2) = q1(50 - 5q1) + q2(100 - 10q2) - (90 + 20(q1 + q2)).
The critical points are given by
  ∂f/∂q1 = 50 - 10q1 - 20 = 0 ⇒ q1 = 3;  ∂f/∂q2 = 100 - 20q2 - 20 = 0 ⇒ q2 = 4.
Since ∂²f/∂q1² = -10, ∂²f/∂q2² = -20 and ∂²f/∂q1∂q2 = ∂²f/∂q2∂q1 = 0, ∇²f is negative definite, so (3, 4) is the profit-maximizing supply plan.
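The first and second order conditions can be checked symbolically (an illustrative sketch):

```python
# Solve the discriminating monopolist's first order conditions.
import sympy as sp

q1, q2 = sp.symbols('q1 q2', real=True)
profit = q1*(50 - 5*q1) + q2*(100 - 10*q2) - (90 + 20*(q1 + q2))

sol = sp.solve([sp.diff(profit, q1), sp.diff(profit, q2)], [q1, q2])
H = sp.hessian(profit, (q1, q2))   # constant: diag(-10, -20), negative definite
```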

Constrained Optimization Examples: Individuals maximizing utility will be subject to a budget constraint Firms maximising output will be subject to a cost constraint The function we want to maximize/minimize is called the objective function The restriction is called the constraint .

Constrained Optimization (General Form)
A general mixed constrained multi-dimensional maximization problem is
  max f(x) = f(x1, ..., xn)
subject to
  g1(x1, ..., xn) <= b1, g2(x1, ..., xn) <= b2, ..., gk(x1, ..., xn) <= bk,
  h1(x1, ..., xn) = c1, h2(x1, ..., xn) = c2, ..., hm(x1, ..., xn) = cm.

Constrained Optimization (Lagrangian Form)
The Lagrangian approach is to associate a Lagrange multiplier λi with the i-th inequality constraint and μi with the i-th equality constraint. We then form the Lagrangian
  L(x1, ..., xn, λ1, ..., λk, μ1, ..., μm)
    = f(x1, ..., xn) - Σ_{i=1}^{k} λi [gi(x1, ..., xn) - bi] - Σ_{i=1}^{m} μi [hi(x1, ..., xn) - ci].

i  1. k . then. i  1.  * . K . there exists multipliers 1* . i  1.  * ) f ( x* ) k * gi ( x* ) m * hi ( x* )    i   i  0. k gi ( x* )  bi . m i*  g i ( x* )  bi   0.K . n x j x j i 1 x j i 1 x j hi ( x* )  ci .K .Constrained Optimization (Kuhn- Tucker Conditions) If x * is a local maximum of f on the constraint set defined by the k inequalities and m equalities.K . k* .K .L satisfying . 1* . k i*  0. m* L( x* . i  1. j  1.L .

i  1. k That is to say if i*  0 then gi (x * )  bi if gi (x * )  bi then i*  0 .Constrained Optimization (Kuhn- Tucker Conditions) The first set of KT conditions generalizes the unconstrained critical point condition The second set of KT conditions says that x needs to satisfy the equality constraints The third set of KT conditions is i*  g i ( x* )  bi   0.K .

Constrained Optimization (Kuhn-Tucker Conditions)
This can be interpreted as follows: additional units of the resource bi only have value if the available units are used fully in the optimal solution, i.e., if the constraint is binding. If the constraint is not binding, it makes no difference to the optimal solution, and λi* = 0. Finally, note that increasing bi enlarges the feasible region and therefore cannot decrease the optimal objective value; hence λi* >= 0 for all i.

x  0. (8)x  0. y  0. (8) . y (3)x 2  y 2  4  0 (4)1x  0. (6)1  0. (7)2  0. (5)2y  0. x L (2)  2y  2 y  2  0.Example Form the Lagrangian max x  y 2 L=x-y2   (x 2  y 2  4)  1x  2y . 9 y  0. subject to The first order conditions become: x y 4 2 2 L (1)  1  2 x  1  0.

1  1  0    0 and x  0.) By (1). 2  2y (1   ).   . by (2). y .Example (cont. from (5)  y  0. 4 4 . or both are positive. by (4). from (4).  . 2 )  (2. 1  0. So. since 1  0. (x . 1  0. 2  0.0). 1 . By (3) and (8)  x=2. 1  1  2 x . 1 1 by (1).0.0. since 1    0  either both y and 2 are zero. .

Sensitivity Analysis
We notice that
  L(x*, λ*, μ*) = f(x*) - λ*'(g(x*) - b) - μ*'(h(x*) - c) = f(x*).
What happens to the optimal solution value if the right-hand side of constraint i is changed by a small amount, say Δbi or Δci? It changes by approximately λi* Δbi or μi* Δci: λi* is the shadow price of the i-th inequality constraint and μi* is the shadow price of the i-th equality constraint.

Sensitivity Analysis (Example)
 In the previous example, if we change the first constraint to x² + y² <= 3.9, then we predict that the new optimal value would be 2 + (1/4)(-0.1) = 1.975.
 If we compute the problem with this new constraint, x - y² = √3.9 ≈ 1.9748.
 If, instead, we change the second constraint from x >= 0 to x >= 0.1, we do not change the solution or the optimum value, since μ1* = 0.

Utility Maximization Example
The utility derived from exercise (X) and watching movies (M) is described by the function
  U(X, M) = 100 - e^(-2X) - e^(-M)
Four hours per day are available to watch movies and exercise. Our Lagrangian function is
  L(X, M, λ) = 100 - e^(-2X) - e^(-M) - λ(X + M - 4)
First Order Conditions:
  L_X = 2e^(-2X) - λ = 0
  L_M = e^(-M) - λ = 0
  L_λ = -(X + M - 4) = 0

Utility Max Example Continued

First Order Conditions:
  (1) L_X = 2e^(-2X) - λ = 0,  (2) L_M = e^(-M) - λ = 0,  (3) L_λ = -(X + M - 4) = 0
From (2) we get that λ = e^(-M). Substituting into (1), we get
  2e^(-2X) - e^(-M) = 0.
Solving (3) for M and substituting, we get
  2e^(-2X) - e^(-(4-X)) = 0  ⇒  ln(2) - 2X = -(4 - X)  ⇒  3X = ln(2) + 4.
Therefore
  X* = (ln(2) + 4)/3  and  M* = (8 - ln(2))/3.
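The closed-form answer can be cross-checked numerically; a sketch with SciPy (not part of the slides):

```python
# Maximize U(X, M) with M = 4 - X substituted in, then compare with the
# closed-form X* = (ln 2 + 4) / 3 from the derivation above.
import math
from scipy.optimize import minimize_scalar

U = lambda X: 100 - math.exp(-2*X) - math.exp(-(4 - X))
res = minimize_scalar(lambda X: -U(X), bounds=(0, 4), method='bounded')

X_star = (math.log(2) + 4) / 3
M_star = 4 - X_star                  # = (8 - ln 2) / 3
```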


Numerical Methods for Optimization

Recall: with optimization, we are seeking f'(x) = 0.
[plot of f(x): at the maximum, f'(x) = 0 and f''(x) < 0; at the minimum, f'(x) = 0 and f''(x) > 0]

One Dimension Unconstrained Optimization (Example)
Find the maximum of f(x) = 2 sin x - x²/10.
This amounts to the root problem
  f'(x) = 2 cos x - x/5 = 0,
and the second order condition f''(x*) = -2 sin x* - 1/5 < 0 is satisfied at the root.

One Dimension Unconstrained Optimization (Example)
 We can solve f'(x) = 0 by bisection with initial interval [1, 2], by Newton's method with initial point 1, or by the secant method with initial points 1 and 2, as presented in Topic 3.
[plot of y = 2 sin x - x²/10]
 We can also solve it in Matlab:
  >> f=@(x) 2*cos(x)-1/5*x;
  >> fzero(f,[1,2])
  ans = 1.4276
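An equivalent root-finding run in Python (a sketch using SciPy's bracketing solver as a stand-in for Matlab's fzero):

```python
# Solve f'(x) = 2 cos x - x/5 = 0 on the bracket [1, 2].
import math
from scipy.optimize import brentq

fprime = lambda x: 2*math.cos(x) - x/5
root = brentq(fprime, 1, 2)          # sign change on [1, 2] guarantees a root
```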

Objectives: Using the Optimization Toolbox in Matlab to
 solve unconstrained optimization with multiple variables
 solve linear programming problems
 solve quadratic programming problems (for example: optimal portfolio)
 solve nonlinear optimization with constraints
Mostly, we will focus on minimization in this topic: max f(x) is equivalent to min -f(x).

Linear Programming/Quadratic Programming/Nonlinear Programming
If f(x) and the constraints are linear, we have linear programming. If f(x) is quadratic and the constraints are linear, we have quadratic programming. If f(x) is not linear or quadratic, and/or the constraints are nonlinear, we have nonlinear programming.

Recall the Optimality Conditions for Multiple Variables
• Unconstrained minimization problem: min f(x1, x2, ..., xn)
• Optimality condition: x* is a local minimum if ∇f(x*) = 0 and ∇²f(x*) is positive definite.
• Example: min f(x) = e^(x1+x2-1) + e^(x1-x2-1) + e^(-x1-1)
    ∇f(x) = [ e^(x1+x2-1) + e^(x1-x2-1) - e^(-x1-1) ;
              e^(x1+x2-1) - e^(x1-x2-1) ]
• What values of x make ∇f(x) = 0?

Unconstrained Optimization with Multiple Variables in Matlab
Step 1: Write an M-file objfun.m and save it under the work path of Matlab:
  function f=objfun(x)
  f=exp(x(1)+x(2)-1)+exp(x(1)-x(2)-1)+exp(-x(1)-1);
Step 2: Type >> optimtool in the command window to open the optimization toolbox.

Unconstrained Optimization with Multiple Variables in Matlab (cont.)
 We use the function fminunc to solve the unconstrained optimization problem "objfun".
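A Python stand-in for the fminunc run (an illustration; SciPy's BFGS plays the role of the quasi-Newton method fminunc uses):

```python
# Minimize f(x) = exp(x1+x2-1) + exp(x1-x2-1) + exp(-x1-1) by quasi-Newton (BFGS).
import math
from scipy.optimize import minimize

def objfun(x):
    return (math.exp(x[0] + x[1] - 1) + math.exp(x[0] - x[1] - 1)
            + math.exp(-x[0] - 1))

res = minimize(objfun, x0=[-1.0, 1.0], method='BFGS')
# The exact minimizer is x = (-ln(2)/2, 0) with minimum value 2*sqrt(2)/e.
```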

Quasi-Newton Method is an Algorithm Used in Function fminunc
One variable case: f'(x_k + Δx) ≈ f'(x_k) + f''(x_k) Δx = 0  ⇒  Δx = -f'(x_k)/f''(x_k).
Multiple variable case: Δx = -[∇²f(x_k)]⁻¹ ∇f(x_k).

Recall the Algorithm of Newton's Method
 Newton's method may fail if the Hessian is not positive definite.

Quasi-Newton Methods
Replace the Hessian with some positive definite matrix H.
 The function "fminunc" uses the BFGS (Broyden, Fletcher, Goldfarb and Shanno) Hessian update in the quasi-Newton algorithm. The standard BFGS update (with s_k = x_{k+1} - x_k and y_k = ∇f(x_{k+1}) - ∇f(x_k)) is
  H_{k+1} = H_k + (y_k y_k')/(y_k' s_k) - (H_k s_k s_k' H_k)/(s_k' H_k s_k).

Linear Programming
 Both the objective function and the constraints are linear.
 Example: maximizing profit or minimizing cost.
 Objective function: Max or Min Z = c1x1 + c2x2 + ... + cnxn, where cj = payoff of each unit of the j-th activity and xj = magnitude of the j-th activity.
 The constraints can be represented by ai1x1 + ai2x2 + ... + ainxn <= bi, where aij = amount of the i-th resource that is consumed for each unit of the j-th activity and bi = amount of the i-th resource available.
 Finally, we add the constraint that all activities have a positive value: xi >= 0.

Example
                              Product
Resource                    Regular   Premium   Resource Availability
Raw gas (m³/tonne)             7        11              77
Production time (hr/tonne)    10         8             120
Storage (tonne)                9         6
Profit (/tonne)              150       175
x1 = amount of regular, x2 = amount of premium; Total Profit = 150x1 + 175x2.
Maximize Z = 150x1 + 175x2 (objective function)
subject to
  7x1 + 11x2 <= 77   (material constraint)
  10x1 + 8x2 <= 120  (time constraint)
  x1 <= 9            (storage constraint)
  x2 <= 6            (storage constraint)
  x1, x2 >= 0        (positivity constraint)
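The same LP can be solved numerically; a sketch with SciPy's linprog (a Python stand-in, not part of the original slides):

```python
# Maximize Z = 150*x1 + 175*x2 subject to the gas-processing constraints.
from scipy.optimize import linprog

res = linprog(
    c=[-150, -175],                 # linprog minimizes, so negate the profits
    A_ub=[[7, 11], [10, 8]],        # material and time constraints
    b_ub=[77, 120],
    bounds=[(0, 9), (0, 6)],        # storage and positivity constraints
)
x1_star, x2_star = res.x
profit = -res.fun
```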

x2  0 0 0 2 4 6 8 10 12 14 x1 . Graphical Solution (1) 7x1 + 11x2  77 →x2  -7/11 x1 +7 Constraint 1 16 (2) 10x1 + 8x2  120 14 Constraint 2 → x2  -5/4x1 + 15 12 Constraint 3 10 Constraint 4 (3) x1  9 x2 8 6 (4) x2  6 4 2 (5) x1.

Graphical Solution
Now we need to add the objective function to the plot. Start with Z = 0 (0 = 150x1 + 175x2) and Z = 500 (500 = 150x1 + 175x2); the contours Z = 1200 and Z = 1550 are still in the feasible region. The optimum is at x1* = 9, x2* = 14/11 ≈ 1.3.

Linear Programming in Matlab
Example:
 Step 1: Type >> optimtool in the command window to open the optimization toolbox.
 Step 2: Define the matrices A, Aeq and the vectors f, b, lb, ub.

Linear Programming in Matlab (Example)
 File -> Export to Workspace exports the results, including lambda, etc.

Quadratic Programming in Matlab
 Step 1: Type >> optimtool in the command window to open the optimization toolbox.
 Step 2: Define the matrices H, A and the vectors f, b.

Quadratic Programming in Matlab (Example: Portfolio Optimization) .

Quadratic Programming in Matlab (quadprog)
H = [ 0.005900944  0.004488271  0.001224849
      0.004488271  0.017087987  0.003298885
      0.001224849  0.003298885  0.063000818 ]
f = [ -0.008; -0.026; -0.074 ]
A = [ 1 1 1; -1 0 0; 0 -1 0; 0 0 -1 ]
b = [ 1000; -50; 0; 0 ]
The function 'quadprog' uses an active set strategy. The first phase involves the calculation of a feasible point. The second phase involves the generation of an iterative sequence of feasible points that converge to the solution.

Nonlinear Programming in Matlab (Constrained Nonlinear Optimization): Formulation

Nonlinear Programming in Matlab (Example)
Find x that solves
  min f(x) = e^(x1) (4x1² + 2x2² + 4x1x2 + 2x2 + 1)
subject to x1x2 - x1 - x2 <= -1.5 and x1x2 >= -10.
 Step 1: Write an M-file objfunc.m for the objective function:
  function f=objfunc(x)
  f=exp(x(1))*(4*x(1)^2+2*x(2)^2+4*x(1)*x(2)+2*x(2)+1);
 Step 2: Write an M-file confun.m for the constraints:
  function [c, ceq]=confun(x)
  %Nonlinear inequality constraints
  c=[1.5+x(1)*x(2)-x(1)-x(2); -x(1)*x(2)-10];
  %Nonlinear equality constraints
  ceq=[];
 Step 3: >> optimtool to open the optimization toolbox.

Nonlinear Programming in Matlab (Example) .
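A Python analogue of this constrained example (an illustration; the inequality constraints are rewritten in SciPy's g(x) >= 0 form):

```python
# min exp(x1)*(4*x1^2 + 2*x2^2 + 4*x1*x2 + 2*x2 + 1)
# s.t. x1*x2 - x1 - x2 <= -1.5  and  x1*x2 >= -10
import math
from scipy.optimize import minimize

def objfunc(x):
    return math.exp(x[0]) * (4*x[0]**2 + 2*x[1]**2 + 4*x[0]*x[1] + 2*x[1] + 1)

cons = [
    {'type': 'ineq', 'fun': lambda x: -(1.5 + x[0]*x[1] - x[0] - x[1])},
    {'type': 'ineq', 'fun': lambda x: x[0]*x[1] + 10},
]
res = minimize(objfunc, x0=[-1.0, 1.0], method='SLSQP', constraints=cons)
```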

Sequential Quadratic Programming is an Algorithm Used in Function 'fmincon' (Basic Idea)
 min f(x) subject to g_i(x) = 0, i = 1, ..., m_e, and g_i(x) <= 0, i = m_e + 1, ..., m.
 The basic idea is analogous to Newton's method for unconstrained optimization. In unconstrained optimization, only the objective function must be approximated; in the NLP, both the objective and the constraints must be modeled.
 A sequential quadratic programming method uses a quadratic model for the objective and a linear model of the constraints (i.e., it solves a quadratic program at each iteration).

Lecture Outline
 Model Formulation
 Graphical Solution Method
 Linear Programming Model Solution
 Solving Linear Programming Problems with Excel
 Sensitivity Analysis
Copyright 2006 John Wiley & Sons, Inc. Supplement 13-88

Linear Programming (LP)
 A model consisting of linear relationships representing a firm's objective and resource constraints.
 LP is a mathematical modeling technique used to determine a level of operational activity in order to achieve an objective, subject to restrictions called constraints.

Types of LP

Types of LP (cont.)

Types of LP (cont.)

LP Model Formulation
 Decision variables: mathematical symbols representing levels of activity of an operation.
 Objective function: a linear relationship reflecting the objective of an operation. The most frequent objective of business firms is to maximize profit; the most frequent objective of individual operational units (such as a production or packaging department) is to minimize cost.
 Constraint: a linear relationship representing a restriction on decision making.

LP Model Formulation (cont.)
Max/min z = c1x1 + c2x2 + ... + cnxn
subject to:
  a11x1 + a12x2 + ... + a1nxn (<=, =, >=) b1
  a21x1 + a22x2 + ... + a2nxn (<=, =, >=) b2
   :
  am1x1 + am2x2 + ... + amnxn (<=, =, >=) bm
where xj = decision variables, bi = constraint levels, cj = objective function coefficients, aij = constraint coefficients.

LP Model: Example
RESOURCE REQUIREMENTS
PRODUCT   Labor (hr/unit)   Clay (lb/unit)   Revenue ($/unit)
Bowl            1                 4                 40
Mug             2                 3                 50
There are 40 hours of labor and 120 pounds of clay available each day.
Decision variables: x1 = number of bowls to produce, x2 = number of mugs to produce.

x2 0 Solution is x1 = 24 bowls x2 = 8 mugs Revenue = $1. Inc. Supplement 13-96 .360 Copyright 2006 John Wiley & Sons. LP Formulation: Example Maximize Z = $40 x1 + 50 x2 Subject to x1 + 2x2 40 hr (labor constraint) 4x1 + 3x2 120 lb (clay constraint) x1 .

Graphical Solution Method
1. Plot the model constraints on a set of coordinates in a plane.
2. Identify the feasible solution space on the graph, where all constraints are satisfied simultaneously.
3. Plot the objective function to find the point on the boundary of this space that maximizes (or minimizes) the value of the objective function.

Graphical Solution: Example
[plot of 4x1 + 3x2 <= 120 lb and x1 + 2x2 <= 40 hr in the (x1, x2) plane; the area common to both constraints is the feasible region]

Computing Optimal Values
  x1 + 2x2 = 40
  4x1 + 3x2 = 120
Multiplying the first equation by 4 gives 4x1 + 8x2 = 160; subtracting the second gives 5x2 = 40, so x2 = 8, and x1 + 2(8) = 40 gives x1 = 24.
  Z = $40(24) + $50(8) = $1,360
[plot showing the optimal point at the intersection of the labor and clay constraints]

000 x2 =8 mugs Z = $1. Inc.200 10 – B 0– | | | C| 10 20 30 40 x1 Copyright 2006 John Wiley & Sons.360 x1 = 30 bowls 30 – x2 =0 mugs 20 – A Z = $1. Extreme Corner Points x1 = 0 bowls x2 x2 =20 mugs x1 = 224 bowls 40 – Z = $1. Supplement 13-100 .

Objective Function
With the objective function Z = 70x1 + 20x2 instead, the optimal point becomes x1 = 30 bowls, x2 = 0 mugs, Z = $2,100.
[plot of the same feasible region with the new objective contour]

Minimization Problem
CHEMICAL CONTRIBUTION
Brand       Nitrogen (lb/bag)   Phosphate (lb/bag)
Gro-plus           2                   4
Crop-fast          4                   3
Minimize Z = $6x1 + $3x2
subject to
  2x1 + 4x2 >= 16 lb of nitrogen
  4x1 + 3x2 >= 24 lb of phosphate
  x1, x2 >= 0

Graphical Solution
[plot of the feasible region for the minimization problem, with objective Z = 6x1 + 3x2 and corner points A, B, C]
Optimal point A: x1 = 0 bags of Gro-plus, x2 = 8 bags of Crop-fast, Z = $24
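The fertilizer minimization can also be checked numerically; a sketch with linprog (the >= constraints are negated into <= form):

```python
# Minimize Z = 6*x1 + 3*x2 s.t. 2x1 + 4x2 >= 16 and 4x1 + 3x2 >= 24.
from scipy.optimize import linprog

res = linprog(
    c=[6, 3],
    A_ub=[[-2, -4], [-4, -3]],       # multiply the >= rows by -1
    b_ub=[-16, -24],
    bounds=[(0, None), (0, None)],
)
```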

Simplex Method
 A mathematical procedure for solving linear programming problems according to a set of steps.
 Slack variables are added to <= constraints to represent unused resources:
   x1 + 2x2 + s1 = 40 hours of labor
   4x1 + 3x2 + s2 = 120 lb of clay
 Surplus variables are subtracted from >= constraints to represent excess above a resource requirement. For example,
   2x1 + 4x2 >= 16 is transformed into 2x1 + 4x2 - s1 = 16.
 Slack/surplus variables have a 0 coefficient in the objective function:
   Z = $40x1 + $50x2 + 0s1 + 0s2

Solution Points with Slack Variables

Solution Points with Surplus Variables

Solving LP Problems with Excel
Click on "Tools" to invoke "Solver."
 Objective function and constraint-usage formulas: =C6*B10+D6*B11, =C7*B10+D7*B11, =E6-F6, =E7-F7
 Decision variables: bowls (x1) = B10, mugs (x2) = B11

Solving LP Problems with Excel (cont.)
After all parameters and constraints have been input, click on "Solve." The constraints C6*B10+D6*B11 <= 40 and C7*B10+D7*B11 <= 120 are entered by clicking on "Add."

Solving LP Problems with Excel (cont.)

Sensitivity Analysis

Sensitivity Range for Labor Hours

Sensitivity Range for Bowls

Copyright 2006 John Wiley & Sons, Inc. All rights reserved. Reproduction or translation of this work beyond that permitted in Section 117 of the 1976 United States Copyright Act without express permission of the copyright owner is unlawful. Request for further information should be addressed to the Permission Department, John Wiley & Sons, Inc. The purchaser may make back-up copies for his/her own use only and not for distribution or resale. The Publisher assumes no responsibility for errors, omissions, or damages caused by the use of these programs or from the use of the information herein.


Using Solver for Non-Linear Programming (NLP)

NLP with Solver Requires Microsoft Excel Requires Premium Solver. . We’ll use our EOQ model as an example. Requires a spreadsheet model that needs to be optimized. which is located on your student disk.

The Model
MIN: DC + (D/Q)S + (Q/2)Ci
Subject to: Q >= 1 (note the nonlinear objective!)
D = annual demand, C = box purchase cost, S = order cost, i = inventory carrying rate, Q = quantity ordered

The Model
Make sure the cell formulas are correct:
MIN: DC + (D/Q)S + (Q/2)Ci
D = annual demand, C = box purchase cost, S = order cost, i = inventory carrying rate, Q = quantity ordered
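The EOQ objective can be minimized numerically; a sketch with hypothetical parameter values (D, C, S, i below are made-up numbers, not from the text):

```python
# Minimize total cost DC + (D/Q)S + (Q/2)Ci over Q, and compare with the
# classical closed-form EOQ, Q* = sqrt(2DS / (Ci)).
import math
from scipy.optimize import minimize_scalar

D, C, S, i = 1000, 20, 10, 0.10   # hypothetical demand, unit cost, order cost, carrying rate

total_cost = lambda Q: D*C + (D/Q)*S + (Q/2)*C*i
res = minimize_scalar(total_cost, bounds=(1, 10_000), method='bounded')

eoq = math.sqrt(2*D*S / (C*i))    # closed-form optimum for comparison
```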

Set solver parameters Recall Green cells are The blue the cell unknowns contains the objective function Red cells contain the constraints .

Set solver parameters
Green cells are the unknowns; the blue cell contains the objective function. Red cells contain the constraints, but the only constraint here is non-negativity, which is handled in the Solver dialogue, so delete the formula.

NLP with Solver
Select Tools > Add-ins and make sure the Solver Add-in is checked (click on the check box if it isn't). Click OK. If the Solver Add-In is not showing at all, plan on working in the lab, Zone 1.

NLP with Solver
Select the Tools > Solver menu item. If the Standard Solver window appears, click the Premium button.

NLP with Solver
In the Premium Solver window, set the solution method to Standard GRG Nonlinear.

NLP with Solver
I. Click the Options button.
II. When the Solver Options window appears, you can include a non-negativity constraint by checking Assume Non-Negative.
III. Click OK.

NLP with Solver
Objective function
Select Max or Min
Unknown

NLP with Solver

NLP with Solver
Try Waner 13.2, Example 1, pg 783. Focus on using Premium Solver NLP to get the same answer.

1. Create spreadsheet

2. Enter objective function formula

3. Set up Solver
 "Set cell" points to the objective function cell.
 "By changing variable cells" points to the unknown cell.
 Constraints have the "used" value to the left of the comparison and the "available" value to the right.

4. Check for Solver "gotchas"
 Using Premium Solver if needed
 Min selected
 Standard GRG Nonlinear selected

4. Check for Solver "gotchas"
 Under Options, Assume Non-Negative selected

4. Check for Solver "gotchas"
 In the spreadsheet, you have tried multiple starting points

NLP with Solver Answer is 100. . as shown in your text.