
Optimization

Assoc. Prof. Dr. Ahmed A. Megahed


awafa971@gmail.com
Optimization
➢ In the design, construction, and maintenance of any engineering system, engineers
have to make many technological (and managerial) decisions at several stages.
The goal is either to minimize the effort required or to maximize the desired
benefit. Either goal must be expressed as a function of certain decision
variables, optimizing over which yields a better (if not the best) result.
➢ Some practical instances of the use of optimization are: (a) minimizing material
volume (and/or maximizing stiffness) when constructing structures such as
over-bridges, (b) optimizing the shape of an automobile body to minimize
aerodynamic drag, (c) deciding which route to take to a new location in the
city, and many more.
➢ Numerical implementation of optimization is usually an iterative procedure
wherein at every step the design variables are updated when a better goal
value is achieved.
➢ Methods of optimization can be classified in numerous ways depending on the
number of variables, constraints, their nature (linear or nonlinear), and the
nature of solution (generic or problem-specific). We briefly review some generic
methods for single-variable and multi-variable optimization.
Optimization
➢ Classical Optimization
• The necessary condition for optimality of a function f(x) in a single variable x is well
established: equating the first derivative f′(x) = df(x)/dx to zero yields the
locations of zero slope, i.e. the optima. Such locations correspond to sites where the
function changes its trend of monotonic increase (at maxima) or decrease (at minima).

[Figure: two plots of f(x) versus x. At a maximum x*, f′(x*) = 0 and the slope decreases
as x increases, so f″(x*) < 0; at a minimum x*, f′(x*) = 0 and the slope increases as x
increases, so f″(x*) > 0.]
• Further, the sign of the second derivative f″(x) conveys, as a sufficiency condition, the
nature of the optimum at a location of zero slope. At a maximum we would expect the
slope to start decreasing as x increases, and vice versa at a minimum; that is, the
rate of change of slope d²f(x)/dx² < 0 at a maximum and > 0 at a minimum.
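These two conditions are easy to check numerically. Below is a minimal sketch in Python (not part of the original notes) that estimates the first and second derivatives by central finite differences for a hypothetical test function and classifies a stationary point:

```python
# Minimal numerical check of the optimality conditions (sketch, not from the notes).
def d1(f, x, h=1e-5):
    """Central-difference estimate of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(f, x, h=1e-4):
    """Central-difference estimate of f''(x)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

f = lambda x: (x - 2) ** 2 + 1   # hypothetical test function with a minimum at x* = 2

assert abs(d1(f, 2.0)) < 1e-6    # necessary condition: f'(x*) = 0
assert d2(f, 2.0) > 0            # sufficiency: f''(x*) > 0, so x* is a minimum
print("x* = 2 is a minimum")
```

The same two estimates would report f″(x*) < 0 at a maximum.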
Optimization
Classification

➢ Single-variable optimization
- Direct (search) methods – do not use derivatives of the objective function
- Gradient-based methods
➢ Multivariable optimization
- Unconstrained (based on Taylor-series expansion) – various search methods
- Constrained – both cases apply single-variable / multivariable searches repeatedly
• Linear programming (objective function and constraints are linear)
• Nonlinear programming
- Sequential Quadratic Programming (SQP)
➢ Non-traditional optimization
- Genetic algorithms (GA)
- Artificial neural networks (ANNs), etc.
Optimization

Classical optimization techniques

Single-variable optimization

• Useful in finding the optimum solutions of continuous and differentiable functions.

• These methods are analytical and make use of the techniques of differential
calculus in locating the optimum points.

• Since some practical problems involve objective functions that are not
continuous and/or differentiable, the classical optimization techniques have
limited scope in practical applications.
Optimization Single variable optimization

• A function of one variable f(x) has a relative or local minimum at x = x* if
f(x*) ≤ f(x* + h) for all sufficiently small positive and negative values of h.

• A point x* is called a relative or local maximum if f(x*) ≥ f(x* + h) for all
values of h sufficiently close to zero.

[Figure: a curve with several local minima and maxima; the lowest of the local
minima is the global minimum.]
Optimization Single variable optimization

• A function f(x) is said to have a global or absolute minimum at x* if
f(x*) ≤ f(x) for all x in the domain over which f(x) is defined, and not just
for all x close to x*.

• Similarly, a point x* will be a global maximum of f(x) if f(x*) ≥ f(x) for
all x in the domain.
Optimization Single variable optimization

Necessary condition

• If a function f(x) is defined in the interval a ≤ x ≤ b and has a relative
minimum at x = x*, where a < x* < b, and if the derivative df(x)/dx = f′(x)
exists as a finite number at x = x*, then f′(x*) = 0.

• The theorem does not say that the function necessarily will have a minimum
or maximum at every point where the derivative is zero, e.g. f′(x) = 0 at
x = 0 for the function shown in the figure; however, this point is neither a
minimum nor a maximum. In general, a point x* at which f′(x*) = 0 is called a
stationary point.
Optimization Single variable optimization

Necessary condition

• The theorem does not say what happens if a minimum or a maximum occurs at a
point x* where the derivative fails to exist. For example, in the figure,

lim(h→0) [f(x* + h) − f(x*)] / h = m⁺ (positive) or m⁻ (negative)

depending on whether h approaches zero through positive or negative values,
respectively. Unless the numbers m⁺ and m⁻ are equal, the derivative f′(x*)
does not exist. If f′(x*) does not exist, the theorem is not applicable.
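As an illustration (not from the notes), the one-sided difference quotients m⁺ and m⁻ can be estimated numerically for a hypothetical kinked function such as f(x) = |x − 1|:

```python
# One-sided difference quotients at a kink (hypothetical function, sketch only).
f = lambda x: abs(x - 1)          # kink at x* = 1
x_star, h = 1.0, 1e-6

m_plus = (f(x_star + h) - f(x_star)) / h       # h -> 0 through positive values
m_minus = (f(x_star - h) - f(x_star)) / (-h)   # h -> 0 through negative values

# m+ = +1 and m- = -1 are unequal, so f'(x*) does not exist at the kink.
print(m_plus, m_minus)
```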
Optimization Single variable optimization

Sufficient condition

• Let f′(x*) = f″(x*) = … = f⁽ⁿ⁻¹⁾(x*) = 0, but f⁽ⁿ⁾(x*) ≠ 0. Then f(x*) is:
– A minimum value of f(x) if f⁽ⁿ⁾(x*) > 0 and n is even
– A maximum value of f(x) if f⁽ⁿ⁾(x*) < 0 and n is even
– Neither a minimum nor a maximum if n is odd
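For polynomials this higher-order derivative test can be carried out exactly, since differentiating a coefficient list is mechanical. A sketch in plain Python (the coefficient representation and helper names are my own):

```python
def deriv(coeffs):
    """Differentiate a polynomial given as [a0, a1, a2, ...] (a0 + a1*x + ...)."""
    return [k * a for k, a in enumerate(coeffs)][1:]

def value(coeffs, x):
    return sum(a * x**k for k, a in enumerate(coeffs))

def classify(coeffs, x_star):
    """n-th derivative test at a stationary point x_star of the polynomial."""
    d, n = deriv(coeffs), 1
    while any(d) and value(d, x_star) == 0:
        d, n = deriv(d), n + 1      # keep differentiating until nonzero at x_star
    v = value(d, x_star) if d else 0
    if n % 2 == 1 or v == 0:
        return "neither"            # n odd: inflection-type stationary point
    return "minimum" if v > 0 else "maximum"

print(classify([0, 0, 0, 0, 1], 0))  # f(x) = x^4: minimum at 0 (n = 4)
print(classify([0, 0, 0, 1], 0))     # f(x) = x^3: neither (n = 3)
```

x⁴ needs the fourth derivative (n = 4, even, positive) to reveal the minimum, while x³ stops at the odd n = 3, giving an inflection point.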
Optimization Single variable optimization

Example

Determine the maximum and minimum values of the function:

f(x) = 12x⁵ − 45x⁴ + 40x³ + 5

Solution: Since f′(x) = 60(x⁴ − 3x³ + 2x²) = 60x²(x − 1)(x − 2),

f′(x) = 0 at x = 0, x = 1, and x = 2.

The second derivative is:

f″(x) = 60(4x³ − 9x² + 4x)

At x = 1, f″(x) = −60 and hence x = 1 is a relative maximum. Therefore,
fmax = f(x = 1) = 12.

At x = 2, f″(x) = 240 and hence x = 2 is a relative minimum. Therefore,
fmin = f(x = 2) = −11.
Optimization Single variable optimization

Example

Solution cont'd:

At x = 0, f″(x) = 0 and hence we must investigate the next derivative:

f‴(x) = 60(12x² − 18x + 4) = 240 at x = 0

Since f‴(x) ≠ 0 at x = 0, x = 0 is neither a maximum nor a minimum; it is an
inflection point.
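The derivative values quoted in this example are easy to verify with a few lines of Python:

```python
# Numerical check of the worked example f(x) = 12x^5 - 45x^4 + 40x^3 + 5.
f    = lambda x: 12 * x**5 - 45 * x**4 + 40 * x**3 + 5
fp   = lambda x: 60 * (x**4 - 3 * x**3 + 2 * x**2)    # f'(x)
fpp  = lambda x: 60 * (4 * x**3 - 9 * x**2 + 4 * x)   # f''(x)
fppp = lambda x: 60 * (12 * x**2 - 18 * x + 4)        # f'''(x)

assert fp(0) == fp(1) == fp(2) == 0     # stationary points at x = 0, 1, 2
assert fpp(1) == -60 and f(1) == 12     # relative maximum, f_max = 12
assert fpp(2) == 240 and f(2) == -11    # relative minimum, f_min = -11
assert fpp(0) == 0 and fppp(0) == 240   # x = 0: inflection point
print("all derivative values confirmed")
```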
Optimization Linear Programming

Multivariable Optimization
Linear Programming (LP)

- In mathematics, linear programming (LP) is a technique for optimization of a


linear objective function, subject to linear equality and linear inequality
constraints.

- Linear programming determines the way to achieve the best outcome (such as
maximum profit or lowest cost) in a given mathematical model, given some
list of requirements represented as linear relationships.

Optimization Linear Programming

Mathematical formulation of Linear Programming model:


Step 1
- Study the given situation
- Find the key decision to be made
- Identify the decision variables of the problem
Step 2
- Formulate the objective function to be optimized
Step 3
- Formulate the constraints of the problem
Step 4
- Add non-negativity restrictions or constraints
The objective function, the set of constraints and the non-negativity restrictions
together form an LP model.

Optimization Linear Programming

HOW TO SOLVE LP PROBLEMS

• Graphical Solution

• Simplex Method for Standard form LP


– Geometric Concepts
– Setting up and Algebra
– Algebraic solution of Simplex

Optimization Linear Programming

Prototype example

• The Wyndor Glass Co. produces high-quality glass products, including windows and
glass doors. It has three plants.

• Plant 1 produces aluminum frames
• Plant 2 produces wood frames
• Plant 3 produces the glass and assembles the products.

• The company has decided to produce two new products.

• Product 1: An 8-foot glass door with aluminum framing


• Product 2: A 4 × 6 foot double-hung, wood-framed window

• Each product will be produced in batches of 20. The production rate is defined as the
number of batches produced per week.

• The company wants to know what the production rate should be in order to maximize
their total profit, subject to the restriction imposed by the limited production capacities
available in the 3 plants.

Optimization Linear Programming

To get the answer, we need to collect the following data:

(a) Number of hours of production time available per week in each plant for
these two new products. (Most of the time in the 3 plants is already
committed to current products, so the available capacity for the 2 new
products is quite limited).

• Number of hours of production time available per week in Plant 1 for the
new products: 4
• Number of hours of production time available per week in Plant 2 for the
new products: 12
• Number of hours of production time available per week in Plant 3 for the
new products: 18

Optimization Linear Programming

(b) Number of hours of production time used in each plant for each batch produced
of each new product. (Product 1 requires some of the production capacity in
Plants 1 and 3, but none in Plant 2. Product 2 needs only Plants 2 and 3.)

Production time (hours) per batch:

            Product 1    Product 2
Plant 1         1            0
Plant 2         0            2
Plant 3         3            2
Optimization Linear Programming

(c) Profit per batch produced of each new product.

Profit per batch produced of Product 1: $3,000


Profit per batch produced of Product 2: $5,000

Optimization Linear Programming

Formulation as a Linear Programming Problem

To formulate the LP model for this problem, let

x1 = number of batches of product 1 produced per week
x2 = number of batches of product 2 produced per week
Z = total profit (in thousands of dollars) from producing the two new products.

Thus, x1 and x2 are the decision variables for the model. Using the data
collected above, we obtain:

Maximize Z = 3x1 + 5x2

subject to
x1 ≤ 4
2x2 ≤ 12
3x1 + 2x2 ≤ 18
and x1 ≥ 0, x2 ≥ 0

Optimization Linear Programming

➢ Graphical Solution
The Wyndor Glass Co. example is used to illustrate the graphical solution.

[Figure: left, the shaded area shows the values of (x1, x2) allowed by
x1 ≥ 0, x2 ≥ 0, x1 ≤ 4; right, the shaded area shows the values of (x1, x2)
allowed by all the constraints, called the feasible region.]
Optimization Linear Programming

The values of (x1, x2) that maximize 3x1 + 5x2 are (2, 6).

Optimization Linear Programming

Common Terminology for LP Model

Objective Function:
The function being maximized or minimized is called the objective
function.

Constraint:
The restrictions of LP Model are referred to as constraints.
The first m constraints in the previous model are sometimes
called functional constraints.
The restrictions xj ≥ 0 are called nonnegativity constraints.

Optimization Linear Programming

Infeasible Solution:
An infeasible solution is a solution for which at least one constraint is
violated.

Feasible Solution:
A feasible solution is a solution for which all the constraints are satisfied.

Feasible Region:
The feasible region is the collection of all feasible solutions.

(A feasible solution is located in the feasible region; an infeasible solution
lies outside it.)
Optimization Linear Programming

Common Terminology for LP Model


No Feasible Solutions:
It is possible for a problem to have no feasible solutions.

An Example:
The Wyndor Glass Co. problem would have no feasible solutions if the constraint
3x1 + 5x2 ≥ 50 were added to the problem. In this case, there is no feasible
region.
Optimization Linear Programming

Common Terminology for LP Model


Optimal Solution:
An optimal solution is a feasible solution that has the maximum or minimum
value of the objective function.

Multiple Optimal Solutions:
It is possible to have more than one optimal solution.

An Example:
The Wyndor Glass Co. problem would have multiple optimal solutions if the
objective function were changed to Z = 3x1 + 2x2.
Optimization Linear Programming

Common Terminology for LP Model


Unbounded Objective:
If the constraints do not prevent improving the value of the objective function
indefinitely in the favorable direction, the LP model is said to have an
unbounded objective.

An Example:
The Wyndor Glass Co. problem would have no optimal solution if the only
functional constraint were x1 ≤ 4, because x2 could then be increased
indefinitely in the feasible region without ever reaching the maximum value of
Z = 3x1 + 2x2.
Optimization Linear Programming

Common Terminology for LP Model

Corner-Point Feasible (CPF) Solution:
A corner-point feasible (CPF) solution is a solution that lies at a corner of
the feasible region.

[Figure: the five dots are the five CPF solutions for the Wyndor Glass Co.
problem.]
Optimization Linear Programming

Common Terminology for LP Model


Relationship between optimal solutions and CPF solutions:
Consider any linear programming problem with feasible solutions and a bounded
feasible region. The problem must possess CPF solutions and at least one
optimal solution. Furthermore, the best CPF solution must be an optimal
solution. Therefore, if a problem has exactly one optimal solution, it must be
a CPF solution. If the problem has multiple optimal solutions, at least two
must be CPF solutions.

[Figure: the prototype model has exactly one optimal solution,
(x1, x2) = (2, 6), which is a CPF solution; the modified problem has multiple
optimal solutions, two of which, (2, 6) and (4, 3), are CPF solutions.]
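The statement above can be illustrated by brute force: intersect every pair of constraint boundary lines, keep the feasible intersection points (the CPF solutions), and pick the best one. A minimal plain-Python sketch for the Wyndor model (the constraint representation and helper names are my own):

```python
from itertools import combinations

# Wyndor constraints as (a, b, c) meaning a*x1 + b*x2 <= c,
# including the nonnegativity constraints -x1 <= 0 and -x2 <= 0.
cons = [(1, 0, 4), (0, 2, 12), (3, 2, 18), (-1, 0, 0), (0, -1, 0)]

def corner(c1, c2):
    """Intersection of two boundary lines by Cramer's rule, or None if parallel."""
    a1, b1, r1 = c1
    a2, b2, r2 = c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(p, tol=1e-9):
    return all(a * p[0] + b * p[1] <= c + tol for a, b, c in cons)

pts = {corner(c1, c2) for c1, c2 in combinations(cons, 2)}
cpf = sorted(p for p in pts if p is not None and feasible(p))

# The best CPF solution is an optimal solution of the LP.
best = max(cpf, key=lambda p: 3 * p[0] + 5 * p[1])
print(cpf)   # the five corner points of the feasible region
print(best)  # (2.0, 6.0), with Z = 36
```

This recovers exactly the five CPF solutions of the prototype problem, and the best of them, (2, 6), is the optimal solution, as the theorem guarantees for a bounded feasible region.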
