
Process Optimization
Professor Dr. Sebastian Engell
Process Dynamics and Operations Group

4. Multidimensional Optimization
4.1 Unconstrained Problems
$$\min_{x \in \mathbb{R}^n} f(x)$$

[Figure: contour lines of f(x) in the (x_1, x_2) plane with the minimum x* and value f(x*)]
Taylor series expansion of f(x):

$$f(x) \approx f\!\left(x^{(0)}\right) + \nabla f^T\!\left(x^{(0)}\right)\left(x - x^{(0)}\right) + \frac{1}{2}\left(x - x^{(0)}\right)^T H\!\left(x^{(0)}\right)\left(x - x^{(0)}\right) + \ldots$$

Gradient:
$$\nabla f\!\left(x^{(0)}\right) = \left.\begin{bmatrix} \dfrac{\partial f}{\partial x_1} \\ \vdots \\ \dfrac{\partial f}{\partial x_n} \end{bmatrix}\right|_{x^{(0)}}$$

Hessian:
$$H\!\left(x^{(0)}\right) = \left.\begin{bmatrix} \dfrac{\partial^2 f}{\partial x_1^2} & \cdots & \dfrac{\partial^2 f}{\partial x_1 \partial x_n} \\ \vdots & & \vdots \\ \dfrac{\partial^2 f}{\partial x_n \partial x_1} & \cdots & \dfrac{\partial^2 f}{\partial x_n^2} \end{bmatrix}\right|_{x^{(0)}}$$
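A minimal numerical illustration of these definitions (not from the slides), using a hypothetical quadratic test function; for a quadratic f the second-order Taylor expansion is exact, so the expansion reproduces f(x) without error:

```python
# Sketch: gradient, Hessian, and second-order Taylor expansion for a
# hypothetical test function f(x) = x1^2 + 2*x2^2 + x1*x2 (an assumption).
import numpy as np

def f(x):
    return x[0]**2 + 2*x[1]**2 + x[0]*x[1]

def grad_f(x):
    # analytical gradient [df/dx1, df/dx2]
    return np.array([2*x[0] + x[1], 4*x[1] + x[0]])

def hess_f(x):
    # analytical Hessian (constant, since f is quadratic)
    return np.array([[2.0, 1.0], [1.0, 4.0]])

x0 = np.array([1.0, -1.0])
dx = np.array([0.1, 0.2])

# second-order Taylor expansion around x0
taylor = f(x0) + grad_f(x0) @ dx + 0.5 * dx @ hess_f(x0) @ dx
print(taylor, f(x0 + dx))  # identical here, because f is quadratic
```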


Analytical Solution

necessary condition for a minimum: $\nabla f(x^*) = 0$

sufficient condition: $H(x^*)$ is positive definite (all eigenvalues are positive)

The analytical solution is only possible if:

- f(x) is twice differentiable,
- the set of (nonlinear) equations $\nabla f(x^*) = 0$ can be solved for x*.

otherwise: use numerical optimization methods


(again: direct and indirect methods)
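A sketch of the analytical route for a case where it works (a hypothetical quadratic cost, not from the slides): the stationarity condition is a linear system, and the sufficient condition is checked via the eigenvalues of the Hessian:

```python
# Sketch: analytical solution of f(x) = 0.5 x^T A x - b^T x (A, b assumed):
# grad f(x) = A x - b = 0  =>  x* = A^{-1} b; Hessian of f is A.
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 4.0]])
b = np.array([1.0, -1.0])

x_star = np.linalg.solve(A, b)      # necessary condition: grad f(x*) = 0
eigvals = np.linalg.eigvalsh(A)     # eigenvalues of the (symmetric) Hessian
assert np.all(eigvals > 0)          # sufficient condition: H positive definite
print("x* =", x_star, "eigenvalues of H:", eigvals)
```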


Direct Methods
Univariate search:
- fix n-1 variables, search for the optimum of the remaining one;
- alternate between the coordinates

Simplex method:
- evaluate f at the vertices of a simplex; reflect the vertex with the maximum f-value;
- reduce the triangle size when x* is contained

[Figure: univariate search steps x(0), x(1) and simplex reflections in the (x_1, x_2) plane, approaching x*]

Alternative methods:
- Nelder-Mead method: simplex with expanding/contracting polyhedra
- Grid-based approaches
- Random search
- Method of conjugate directions (Powell's method)

[Textbook by Edgar, Himmelblau, and Lasdon: Optimization of Chemical Processes]


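A minimal usage sketch of a direct method in practice: SciPy's Nelder-Mead implementation on the Rosenbrock test function (the function and starting point are a conventional choice, not taken from the slides):

```python
# Sketch: derivative-free simplex search (Nelder-Mead) via SciPy,
# applied to the Rosenbrock test function (assumed example problem).
import numpy as np
from scipy.optimize import minimize, rosen

x0 = np.array([-1.2, 1.0])
res = minimize(rosen, x0, method="Nelder-Mead")  # uses only f-evaluations
print(res.x, res.nfev)  # approximate x* = [1, 1]; number of f-evaluations
```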


Indirect Methods
Abstract scheme:
(a) choose an initial point x(0); step index: i = 0
(b) determine a search direction d(i)
(c) determine a stepsize s(i):
    either just choose a constant s(i), or by line search in direction d(i):
    $$\min_{s > 0} f\!\left(x^{(i)} + s\, d^{(i)}\right)$$
    result: $x^{(i+1)} = x^{(i)} + s^{(i)} d^{(i)}$
(d) stop if $\left\|\nabla f\!\left(x^{(i+1)}\right)\right\| < \varepsilon$, else go to (b) with i = i + 1

To go in the direction of decreasing cost, the search direction must satisfy:
$$d^{(i)T}\, \nabla f\!\left(x^{(i)}\right) < 0$$

[Figure: descent steps x(0), x(1), x(2) with search direction d(0) and stepsize s(0) in the (x_1, x_2) plane, approaching x*]


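The abstract scheme translates almost line by line into code. A sketch of steps (a)-(d) with a simple backtracking line search; the test function, the tolerance eps, and the Armijo constant 1e-4 are assumptions, not from the slides:

```python
# Sketch of the abstract scheme (a)-(d): generic descent loop;
# `direction` supplies d(i) and must satisfy d^T grad f < 0.
import numpy as np

def descent(f, grad_f, x0, direction, eps=1e-6, max_iter=1000):
    x = x0
    for i in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < eps:                  # (d) stopping criterion
            break
        d = direction(x, g)                          # (b) search direction
        s = 1.0                                      # (c) backtracking line search
        while f(x + s*d) > f(x) + 1e-4*s*(d @ g):    # Armijo sufficient decrease
            s *= 0.5
        x = x + s*d                                  # x(i+1) = x(i) + s(i) d(i)
    return x

# usage with the steepest-descent direction on a hypothetical quadratic
f = lambda x: x[0]**2 + 2*x[1]**2
grad_f = lambda x: np.array([2*x[0], 4*x[1]])
print(descent(f, grad_f, np.array([1.0, 1.0]), lambda x, g: -g))
```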


Steepest Descent Method


Determination of the search direction: opposite to the gradient:
$$d^{(i)} = -\nabla f\!\left(x^{(i)}\right)$$
(orthogonal to the lines of equal cost)

[Figure: contour lines f = constant with search direction d(i) and stepsize s(i)]

+ converges quickly to a neighborhood of x*
+ robust with respect to the choice of x(0)
- slow convergence near x* (since $\nabla f \to 0$)
- the result must be checked ($H(x^*) > 0$)


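A sketch illustrating the slow convergence near x*: steepest descent on a hypothetical quadratic f(x) = 0.5 x^T A x with a moderately ill-conditioned A; for quadratics the exact line-search stepsize s = (g^T g)/(g^T A g) is available in closed form, yet the iterates still zigzag:

```python
# Sketch: steepest descent d = -grad f with exact line search on
# f(x) = 0.5 x^T A x (A assumed, condition number 10).
import numpy as np

A = np.array([[1.0, 0.0], [0.0, 10.0]])
x = np.array([10.0, 1.0])
for i in range(200):
    g = A @ x                          # gradient of 0.5 x^T A x
    if np.linalg.norm(g) < 1e-8:
        break
    s = (g @ g) / (g @ A @ g)          # exact line-search stepsize
    x = x - s * g                      # step opposite to the gradient
print(i, x)  # on the order of a hundred iterations: slow near x*
```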


Newton Method
as in the scalar case: Taylor series approximation of f(x) and use of the necessary condition:
$$x^{(i+1)} = x^{(i)} - H^{-1}\!\left(x^{(i)}\right) \nabla f\!\left(x^{(i)}\right)$$
with stepsize:
$$x^{(i+1)} = x^{(i)} - s^{(i)} H^{-1}\!\left(x^{(i)}\right) \nabla f\!\left(x^{(i)}\right)$$
search direction: $d^{(i)} = -H^{-1}\!\left(x^{(i)}\right) \nabla f\!\left(x^{(i)}\right)$

The cost decreases in direction d(i) if H(x(i)) is positive definite.


+ quadratic convergence
- sensitive with respect to the choice of x(0)
- high computational costs due to matrix inversion in each step
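A sketch of the Newton iteration on a hypothetical non-quadratic test function (an assumption, not from the slides). Rather than inverting H explicitly, the direction is obtained by solving the linear system H d = -grad f, which is cheaper and numerically sounder than forming H^{-1}:

```python
# Sketch: Newton's method with full step s = 1 on the assumed
# test function f(x) = x1^2 + x1^4 + x2^2 (minimum at the origin).
import numpy as np

def grad_f(x):
    return np.array([2*x[0] + 4*x[0]**3, 2*x[1]])

def hess_f(x):
    return np.array([[2 + 12*x[0]**2, 0.0], [0.0, 2.0]])

x = np.array([2.0, 1.0])
for i in range(20):
    g = grad_f(x)
    if np.linalg.norm(g) < 1e-10:
        break
    d = np.linalg.solve(hess_f(x), -g)  # Newton direction: H d = -grad f
    x = x + d                           # full Newton step
print(i, x)  # converges in a handful of iterations (quadratic convergence)
```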


Quasi-Newton Method (1)


Idea: reduce the cost of determining d(i) by using symmetric approximations G(i) of H^{-1}(x(i))

Scheme:
$$x^{(i+1)} = x^{(i)} + s^{(i)} d^{(i)} \quad \text{with} \quad d^{(i)} = -G^{(i)}\, \nabla f\!\left(x^{(i)}\right)$$
and G(i) from:
$$\Delta x^{(i-1)} = x^{(i)} - x^{(i-1)}, \qquad y^{(i-1)} = \nabla f\!\left(x^{(i)}\right) - \nabla f\!\left(x^{(i-1)}\right), \qquad G^{(0)} = I$$

DFP (Davidon, Fletcher, Powell) formula (with $\Delta x = \Delta x^{(i-1)}$, $y = y^{(i-1)}$, $G = G^{(i-1)}$):
$$G^{(i)} = G + \frac{\Delta x\, \Delta x^T}{\Delta x^T y} - \frac{G\, y\, y^T G}{y^T G\, y}$$

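A sketch of a single DFP update as given above; the Hessian H and the step used to exercise it are hypothetical. For a quadratic f the gradient difference satisfies y = H Δx, and the updated G obeys the secant condition G y = Δx:

```python
# Sketch: one DFP update of the inverse-Hessian approximation G.
import numpy as np

def dfp_update(G, dx, y):
    # G: current symmetric approximation of H^{-1}; dx, y as on the slide
    Gy = G @ y
    return G + np.outer(dx, dx) / (dx @ y) - np.outer(Gy, Gy) / (y @ Gy)

# usage on an assumed quadratic: y = H dx, starting from G(0) = I
H = np.array([[2.0, 1.0], [1.0, 4.0]])
dx = np.array([0.3, -0.1])
y = H @ dx
G_new = dfp_update(np.eye(2), dx, y)
print(np.allclose(G_new @ y, dx))  # True: secant condition holds
```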


Quasi-Newton Method (2)


BFGS (Broyden, Fletcher, Goldfarb, Shanno) formula (with $\Delta x = \Delta x^{(i-1)}$, $y = y^{(i-1)}$, $G = G^{(i-1)}$):
$$G^{(i)} = G + \left(1 + \frac{y^T G\, y}{\Delta x^T y}\right) \frac{\Delta x\, \Delta x^T}{\Delta x^T y} - \frac{\Delta x\, y^T G + G\, y\, \Delta x^T}{\Delta x^T y}$$

Properties:
+ convergence between linear and quadratic
+ smaller effort than for the Newton method

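In practice the BFGS update is available off the shelf; a usage sketch with SciPy on the Rosenbrock test function (a conventional choice, not from the slides):

```python
# Sketch: quasi-Newton (BFGS) via SciPy, with the analytical gradient
# supplied; only gradients are needed, no Hessian.
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

res = minimize(rosen, np.array([-1.2, 1.0]), jac=rosen_der, method="BFGS")
print(res.x, res.nit)  # approximate x* = [1, 1]; number of iterations
```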


Conjugate Gradient Method


Idea: choose linearly independent and conjugate directions, i.e.:
$$d^{(i-1)T}\, Q\, d^{(i)} = 0$$
with a positive definite Q

Scheme:
$$d^{(0)} = -\nabla f\!\left(x^{(0)}\right), \qquad d^{(i)} = -\nabla f\!\left(x^{(i)}\right) + \frac{\nabla f^T\!\left(x^{(i)}\right)\, \nabla f\!\left(x^{(i)}\right)}{\nabla f^T\!\left(x^{(i-1)}\right)\, \nabla f\!\left(x^{(i-1)}\right)}\, d^{(i-1)}$$

+ good convergence; effort slightly higher than for steepest descent, but much more efficient

Line search step:
$$\min_{s > 0} f\!\left(x^{(i)} + s\, d^{(i)}\right)$$
numerical solution of:
$$\nabla f^T\!\left(x^{(i)} + s^{(i)} d^{(i)}\right) d^{(i)} = 0$$


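A sketch of the scheme above (the Fletcher-Reeves form of the update factor) with a simple backtracking line search in place of the exact one; the test function, tolerances, and the steepest-descent restart safeguard are assumptions, not from the slides:

```python
# Sketch: conjugate gradient method (Fletcher-Reeves) for a general f.
import numpy as np

def conjugate_gradient(f, grad_f, x, eps=1e-8, max_iter=200):
    g = grad_f(x)
    d = -g                                    # d(0) = -grad f(x(0))
    for i in range(max_iter):
        if np.linalg.norm(g) < eps:
            break
        s = 1.0                               # line search: min_{s>0} f(x + s d)
        while f(x + s*d) > f(x) + 1e-4*s*(d @ g):
            s *= 0.5
        x = x + s*d
        g_new = grad_f(x)
        beta = (g_new @ g_new) / (g @ g)      # Fletcher-Reeves factor
        d = -g_new + beta*d
        if d @ g_new >= 0:                    # safeguard: restart if not a
            d = -g_new                        # descent direction (an addition)
        g = g_new
    return x

# usage on a hypothetical quadratic
f = lambda x: x[0]**2 + 2*x[1]**2 + x[0]*x[1]
grad_f = lambda x: np.array([2*x[0] + x[1], 4*x[1] + x[0]])
print(conjugate_gradient(f, grad_f, np.array([3.0, -2.0])))
```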
