
Optimum Design

in Mechanical Engineering


Course topics:
1. Local optimization (with constraints) using the Matlab Optimization Toolbox
2. Shape and topology optimization using Optistruct




Reference:
1. Rao, S. S., Engineering Optimization: Theory and Practice, 3rd Edition, New York, Wiley, 2006. (Call number: TA342 R291e 1996)

Phases of engineering practice:
1. Design
2. Analysis
3. Fabrication
4. Test
5. Research and development

The design process:
1. System needs and objective
2. System specifications
3. Preliminary design
4. Detailed design (maximize system worth; minimize cost)
5. Prototype system fabrication
6. System testing
Final design

Conventional detailed design proceeds by trial and error, guided by experience. Optimum design replaces this with a systematic search for the design that is both (1) efficient and (2) cost-effective — the optimum design.

Optimization example: a beam of rectangular cross section
The objective (cost) function is the material mass:

  min f(b, h) = ρ b h L        (1)

where
  b = beam width
  h = beam depth
  L = beam length
  ρ = material density
and b and h are the design variables, subject to:

  b > 0                                  (2)
  h > 0                                  (3)
  h = 3b                                 (4)
  Von Mises stress ≤ σ_ut / SF           (5)
  deflection ≤ 2 mm                      (6)

Equations (2)-(6) are the constraint equations.


Components of an optimization problem:
1. Objective function (or cost function)
2. Design variables
3. Constraint equations
   - explicit, implicit, or mixed
   - equality or inequality constraints

Types of structural optimization:
1. Size (parametric) optimization — the dimensions (e.g. thicknesses, cross sections) are the design variables
2. Topology optimization — the distribution of material inside the design domain is optimized
3. Shape optimization — the shape of the structural boundary is optimized
4. Topography optimization — the bead (stamp) pattern of a shell structure is optimized

Optimization example: helical spring
A load of 5,000 lb is carried on 4 springs, so each spring carries

  F = 5000/4 = 1250 lb

Design variables:
(1) Wire diameter d
(2) Spring (coil) diameter D
(3) Number of coils N

Requirements:
1. Deflection not more than 0.1 (in.)
2. Shear stress not more than 10,000 psi
3. Natural frequency not less than 100 Hz
where k = spring rate, δ = deflection, fn = natural frequency.

Data:
  G = shear modulus = 12 x 10^6 psi
  w = weight density = 0.3 lb/in^3
  Ks = shear stress correction factor = 1.05
  F = load per spring

Workshop 1 — Optimization
For each given problem, identify the design variables, the objective function, and the constraints:
1. A container of volume V
2. A cylinder with height h and radius r

Single variable optimization without constraint
Find x that minimizes f(x).

1. First order necessary condition: if f(x) has a (local) relative minimum at x = x*, then

  f'(x*) = (df/dx)(x*) = 0        (1)

Condition (1) holds not only at a relative minimum but also at a relative maximum and at a saddle (inflection) point.

Proof
Let x* be a local minimum of f(x). For any x near x*, define
  d = x − x*  and  Δf = f(x) − f(x*) ≥ 0        (a)
where the inequality holds because x* is a local minimum.
Expanding f(x) in a Taylor series about x*:

  f(x) = f(x*) + f'(x*) d + (1/2) f''(x*) d² + R (remainder)

or

  Δf = f(x) − f(x*) = f'(x*) d + (1/2) f''(x*) d² + R        (b)

For small d the first-order term dominates (b). Since d may be either positive or negative while (a) requires Δf ≥ 0, we must have f'(x*) = 0, which is condition (1).

2. Sufficient condition: f(x) has a local minimum at x = x* if

  f''(x*) = (d²f/dx²)(x*) > 0        (2)

The sufficient condition for a local maximum is

  f''(x*) = (d²f/dx²)(x*) < 0        (3)

If

  f''(x*) = (d²f/dx²)(x*) = 0        (4)

the test is inconclusive, and higher-order derivatives (3rd, 4th, ...) must be examined.

Proof
Let x* be a stationary point of f(x), so f'(x*) = 0. Then (b) reduces to

  Δf = (1/2) f''(x*) d² + R        (c)

Since d² > 0 for d ≠ 0, condition (a) — Δf ≥ 0 near a local minimum — is guaranteed when f''(x*) > 0, which proves (2). If f''(x*) = 0, the sign of Δf is decided by the higher-order terms, and the behaviour of f near x* must be examined further.

(Figure: sign of f' along the curve — f' > 0 where f increases, f' < 0 where f decreases, f' = 0 at stationary points; at a stationary point f'' < 0 indicates a maximum, f'' > 0 a minimum, and f'' = 0 an inflection.)

Single variable optimization — example
Find the extrema of f(x) = 12x^5 − 45x^4 + 40x^3 + 5.

  f'(x) = 60(x^4 − 3x^3 + 2x^2) = 60 x^2 (x − 1)(x − 2) = 0
  → x* = 0, 1, 2

  f''(x) = 60(4x^3 − 9x^2 + 4x)

At x = 1: f''(1) = −60 < 0, so x = 1 is a local maximum, fmax = f(1) = 12.
At x = 2: f''(2) = 240 > 0, so x = 2 is a local minimum, fmin = f(2) = −11.
At x = 0: f''(0) = 0, so the second-derivative test is inconclusive. Checking the third derivative, f'''(x) = 60(12x^2 − 18x + 4), so f'''(0) = 240 ≠ 0 and x = 0 is neither a minimum nor a maximum — it is a saddle (inflection) point.
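The classification above can be checked numerically; the following Python sketch (not part of the lecture) evaluates f, f', and f'' at the three stationary points.

```python
# Check of the worked example: f(x) = 12x^5 - 45x^4 + 40x^3 + 5 has
# stationary points at x = 0, 1, 2; the sign of f'' classifies them.

def f(x):
    return 12*x**5 - 45*x**4 + 40*x**3 + 5

def fp(x):   # f'(x) = 60 x^2 (x - 1)(x - 2)
    return 60*(x**4 - 3*x**3 + 2*x**2)

def fpp(x):  # f''(x)
    return 60*(4*x**3 - 9*x**2 + 4*x)

for x in (0, 1, 2):
    assert fp(x) == 0                 # stationary points
print(fpp(1), f(1))   # -60, 12  -> local maximum
print(fpp(2), f(2))   # 240, -11 -> local minimum
print(fpp(0))         # 0        -> second-derivative test inconclusive
```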

Workshop 2 — Optimization
Find and classify the stationary points of

  f(x) = 4x^3 − 18x^2 + 27x − 7

Optimum Design
in Mechanical Engineering
Lecture 2


Multi-variable optimization without constraints

Min f(x1, x2, ..., xn),  X = [x1, x2, ..., xn]ᵀ

Taylor series expansion of f about X*:

  f(X) = f(X*) + df(X*) + (1/2!) d²f(X*) + (1/3!) d³f(X*) + ... + (1/n!) dⁿf(X*) + Rn(X*, h)

where h = X − X*.

For n = 3 the second-order term (r = 2) is the quadratic form

  d²f = {h1 h2 h3} [ ∂²f/∂x1²    ∂²f/∂x1∂x2   ∂²f/∂x1∂x3 ] {h1}
                   [ ∂²f/∂x2∂x1  ∂²f/∂x2²     ∂²f/∂x2∂x3 ] {h2} = hᵀ J h |X=X*
                   [ ∂²f/∂x3∂x1  ∂²f/∂x3∂x2   ∂²f/∂x3²   ] {h3}

so, truncating the Taylor series at second order,

  f(X) = f(X*) + ∇fᵀ h |X=X* + (1/2) hᵀ J h |X=X*

where J|X=X* is the Hessian matrix.

The gradient vector is

  c = ∇f = [∂f/∂x1, ∂f/∂x2, ..., ∂f/∂xn]ᵀ

Example: find the gradient of f(x) = (x1 − 1)² + (x2 − 1)² at x* = (1.8, 1.6).
(The unconstrained minimum of f is at x = (1, 1).)

  f(x*) = (1.8 − 1)² + (1.6 − 1)² = 1
  ∂f/∂x1 (1.8, 1.6) = 2(1.8 − 1) = 1.6
  ∂f/∂x2 (1.8, 1.6) = 2(1.6 − 1) = 1.2
  c = {1.6, 1.2}ᵀ

Example: find the Hessian J of

  f(x) = x1³ + x2³ + 2x1² + 3x2² − x1x2 + 2x1 + 4x2

at the point x* = (1, 2).

  ∂f/∂x1 = 3x1² + 4x1 − x2 + 2
  ∂f/∂x2 = 3x2² + 6x2 − x1 + 4
  ∂²f/∂x1² = 6x1 + 4,  ∂²f/∂x1∂x2 = ∂²f/∂x2∂x1 = −1,  ∂²f/∂x2² = 6x2 + 6

  J(1, 2) = [ 6(1)+4    −1     ] = [ 10  −1 ]
            [ −1        6(2)+6 ]   [ −1  18 ]
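As a quick sanity check (not in the slides), the Hessian just derived can be reproduced by central finite differences:

```python
# Finite-difference check that the Hessian of
# f = x1^3 + x2^3 + 2 x1^2 + 3 x2^2 - x1 x2 + 2 x1 + 4 x2
# at (1, 2) is [[10, -1], [-1, 18]].

def f(x1, x2):
    return x1**3 + x2**3 + 2*x1**2 + 3*x2**2 - x1*x2 + 2*x1 + 4*x2

def hessian(x1, x2, h=1e-3):
    # central second differences (exact here up to rounding, since f is cubic)
    d11 = (f(x1+h, x2) - 2*f(x1, x2) + f(x1-h, x2)) / h**2
    d22 = (f(x1, x2+h) - 2*f(x1, x2) + f(x1, x2-h)) / h**2
    d12 = (f(x1+h, x2+h) - f(x1+h, x2-h)
           - f(x1-h, x2+h) + f(x1-h, x2-h)) / (4*h**2)
    return [[d11, d12], [d12, d22]]

J = hessian(1.0, 2.0)   # close to [[10, -1], [-1, 18]]
```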


Conditions for a local minimum of f(X):
1. First order necessary condition: ∇f(X*) = 0
2. Sufficient condition — the nature of the stationary point follows from the Hessian J:
   1) Relative minimum: J is a positive definite matrix
   2) Relative maximum: J is a negative definite matrix
   3) Saddle point: J is neither positive nor negative definite, i.e. indefinite


Definiteness of a matrix A — test by determinants (leading principal minors Aj):
1. Positive definite: A1, A2, ..., An are all > 0
2. Negative definite: the signs alternate, i.e. (−1)^j Aj > 0 for j = 1, 2, 3, ..., n
3. Positive semidefinite: all Aj ≥ 0, with at least one Aj = 0
4. Negative semidefinite: (−1)^j Aj ≥ 0 for j = 1, 2, 3, ..., n, with at least one Aj = 0
5. Indefinite: none of cases 1-4 applies (some Aj violates the sign pattern)


Test by eigenvalues (Ax = λx, λ = eigenvalue of A):
1. Positive definite: λ1, λ2, ..., λn all > 0
2. Negative definite: all λj < 0
3. Positive semidefinite: all λj ≥ 0, with at least one λj = 0
4. Negative semidefinite: all λj ≤ 0, with at least one λj = 0
5. Indefinite: some λj > 0 and some λj < 0


Example: check whether the matrix A is positive definite, where

  A = [ −1   1   0 ]
      [  1  −1   0 ]
      [  0   0  −1 ]

Determinant test:
  A1 = −1,  A2 = det[ −1 1; 1 −1 ] = 0,  A3 = det A = 0
Because A2 = A3 = 0, the principal minors cannot distinguish between the semidefinite cases, so we turn to the eigenvalues.

Eigenvalue test: det(A − λI) = 0 gives

  (−1 − λ)[(−1 − λ)² − 1] = −(1 + λ) λ (λ + 2) = 0
  λ1 = −2,  λ2 = −1,  λ3 = 0

All eigenvalues are ≤ 0 and one is zero, so A is negative semidefinite.

Nature of stationary points
1) Hessian J positive definite: quadratic form hᵀJh > 0, eigenvalues λi > 0 → local minimum
2) Hessian J negative definite: hᵀJh < 0, λi < 0 → local maximum
3) Hessian J indefinite: hᵀJh can be positive or negative, λi of mixed sign → saddle point
4) Hessian J positive semi-definite: hᵀJh ≥ 0, λi ≥ 0 (J singular!) → valley
5) Hessian J negative semi-definite: hᵀJh ≤ 0, λi ≤ 0 (J singular!) → ridge

Stationary point nature summary

  hᵀJh   | Definiteness of J  | Nature of X*
  -------|--------------------|-------------
  > 0    | Positive d.        | Minimum
  ≥ 0    | Positive semi-d.   | Valley
  <> 0   | Indefinite         | Saddle point
  ≤ 0    | Negative semi-d.   | Ridge
  < 0    | Negative d.        | Maximum

Global optimality
Optimality conditions for the unconstrained problem:
- First order necessity: ∇f(x*) = 0 (stationary point)
- Second order sufficiency: J positive definite at x*
These optimality conditions are only valid locally: they certify that x* is a local minimum.
When can we be sure x* is a global minimum?

Convex functions
Convex function: any line segment connecting two points on the graph lies above the graph (or on it).
Equivalent: tangent lines/planes stay below the graph.
J positive (semi-)definite ⇒ f locally convex (proof by Taylor approximation).

Convex domains
Convex set: a set S is convex if for every two points x1, x2 in S, the connecting line segment also lies completely inside S.

Convexity and global optimality
If the objective f is a (strictly) convex function and the feasible domain is a convex set (automatically true for unconstrained optimization), then a stationary point is the (unique) global minimum.

Special case: f, g, h all linear → linear programming.
More general class: convex optimization.

Example
Quadratic functions f(x) = (1/2) xᵀAx + bᵀx + c with A positive definite are strictly convex:

  f(x) = (1/2) {x1 x2} [ 3  1 ] {x1} + {−1  4} {x1} + 2
                       [ 1  2 ] {x2}          {x2}

The stationary point, from ∇f = Ax + b = 0, is (1.2, −2.6); by convexity it must be the unique global optimum.
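For this 2x2 case the stationary point can be verified with a few lines of Python (a sketch, not part of the slides): solving Ax = −b by Cramer's rule recovers (1.2, −2.6).

```python
# Stationary point of f(x) = 1/2 x^T A x + b^T x + c: solve A x = -b.
A = [[3.0, 1.0],
     [1.0, 2.0]]
b = [-1.0, 4.0]

det = A[0][0]*A[1][1] - A[0][1]*A[1][0]          # = 5 > 0 (and A11 = 3 > 0,
                                                 # so A is positive definite)
x1 = ((-b[0])*A[1][1] - A[0][1]*(-b[1])) / det   # Cramer's rule
x2 = (A[0][0]*(-b[1]) - (-b[0])*A[1][0]) / det
# (x1, x2) = (1.2, -2.6)
```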

Workshop 1 — Multivariable Optimization
Find the stationary points of the given functions and classify them using the Hessian matrix.

Multi-variable optimization with equality constraints

Min f(x1, x2, ..., xn)
Subject to gj(X) = 0, j = 1, 2, ..., m
where X = [x1, x2, ..., xn]ᵀ, with n design variables and m < n constraints.

Lagrange Multiplier method
Define the Lagrange function

  L(x1, x2, ..., xn, λ1, λ2, ..., λm) = f(X) + λ1 g1(X) + λ2 g2(X) + ... + λm gm(X)

where λj = Lagrange multiplier for gj(X).

Necessary conditions for a local minimum subject to the constraints:

  ∂L/∂xi = ∂f/∂xi + Σ(j=1..m) λj ∂gj/∂xi = 0,  i = 1, 2, ..., n    (1)
  ∂L/∂λj = gj(X) = 0,  j = 1, 2, ..., m                            (2)

Solving (1) and (2) simultaneously gives the candidate point and multipliers

  X* = [x1*, x2*, ..., xn*]ᵀ,  λ* = [λ1*, λ2*, ..., λm*]ᵀ

X* is then a relative constrained minimum of f(X), and λ* carries the sensitivity of the optimum to the constraints (used in the sufficient condition).

Sufficient condition: examine the curvature of L at X* restricted to directions z that satisfy the linearized constraints — an (n − m)-dimensional subspace of perturbations about X*. If this restricted curvature is positive for every such z, X* is a constrained local minimum.


Multivariable optimization with equality constraints — example

A box made from a given amount of material, A0 = 24, is to enclose the maximum volume; x1 is the base dimension and x2 the height.

Maximize f(x1, x2) = x1² x2
Subject to g(x1, x2) = 2x1² + 2x1x2 − 24 = 0

Lagrange function:

  L(x1, x2, λ) = x1² x2 + λ(2x1² + 2x1x2 − 24)

Necessary conditions:

  ∂L/∂x1 = 2x1x2 + λ(4x1 + 2x2) = 0    (E1)
  ∂L/∂x2 = x1² + 2λx1 = 0              (E2)
  ∂L/∂λ = 2x1² + 2x1x2 − 24 = 0        (E3)

From (E2), λ = −x1/2; substituting into (E1) gives x2 = 2x1 (E4), and (E3) then yields x1 = 2, x2 = 4, λ* = −1, f* = 16.
The multiplier measures the sensitivity to the available material A0: df*/dA0 = −λ* = 1, i.e. one extra unit of A0 raises the optimum volume by about 1.

Workshop 2 — Multivariable Optimization with equality constraints

Minimize f(x1, x2) = 12000 / (x1 x2²)
Subject to g(x1, x2) = x1² + x2² − 400 = 0

Answer:
  x1* = 20/√3,  x2* = 20 √(2/3),  λ* = 27√3 / 3200
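A short Python check (not in the slides) confirms that the stated answer satisfies the Lagrange necessary conditions ∂L/∂x1 = ∂L/∂x2 = 0 and g = 0:

```python
import math

# Workshop 2 answer
x1 = 20 / math.sqrt(3)
x2 = 20 * math.sqrt(2.0 / 3.0)
lam = 27 * math.sqrt(3) / 3200

# L = 12000/(x1*x2^2) + lam*(x1^2 + x2^2 - 400)
dLdx1 = -12000 / (x1**2 * x2**2) + 2 * lam * x1
dLdx2 = -24000 / (x1 * x2**3) + 2 * lam * x2
g = x1**2 + x2**2 - 400
# dLdx1, dLdx2 and g all vanish (to rounding)
```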

Optimum Design
in Mechanical Engineering
Lecture 3


Multi-variable optimization with inequality constraints

Minimize f(x1, x2, ..., xn)
Subject to gj(X) ≤ 0, j = 1, 2, ..., m
where X = [x1, x2, ..., xn]ᵀ = design variable vector, with n variables and m < n constraints.



Example — Multivariable optimization with inequality constraints

Minimize f(x1, x2) = (x1 − 1.5)² + (x2 − 1.5)²
Subject to
  g1(x1, x2) = x1 + x2 − 2 ≤ 0
  g2(x1, x2) = −x1 ≤ 0
  g3(x1, x2) = −x2 ≤ 0

Workshop 1 — Multivariable Optimization with inequality constraints
Find x1, x2 to
Minimize f(x1, x2) = x1 + x2
Subject to
  x2 ≥ 150
  x1 + x2 ≥ 450
  4x1 + x2 ≤ 900
  x1 ≥ 0, x2 ≥ 0

Each inequality constraint is converted to an equality constraint by adding a slack variable yj²:

Minimize f(x1, x2, ..., xn)
Subject to Gj(X, Y) = gj(X) + yj² = 0,  j = 1, 2, ..., m
where Y = [y1, y2, ..., ym]ᵀ = slack variable vector (the square yj² keeps the slack nonnegative).

Lagrange function:

  L(X, Y, λ) = f(X) + Σ(j=1..m) λj Gj(X, Y)

where λ = [λ1, λ2, ..., λm]ᵀ = Lagrange multiplier vector.

Stationary points satisfy the necessary conditions

  ∂L/∂xi (X, Y, λ) = ∂f/∂xi (X) + Σ(j=1..m) λj ∂gj/∂xi = 0,  i = 1, 2, ..., n   (1)
  ∂L/∂λj (X, Y, λ) = Gj(X, Y) = gj(X) + yj² = 0,  j = 1, 2, ..., m              (2)
  ∂L/∂yj (X, Y, λ) = 2 λj yj = 0,  j = 1, 2, ..., m                              (3)

Equations (1), (2), and (3) are solved simultaneously.

Solving necessary conditions (1)-(3) gives

  X* = [x1*, ..., xn*]ᵀ,  λ* = [λ1*, ..., λm*]ᵀ,  Y* = [y1*, ..., ym*]ᵀ

Condition (2) shows that yj² = −gj(X), so yj² ≥ 0 corresponds to a feasible point.
Condition (3) then leaves two possibilities for each constraint:
- λj = 0: the constraint gj is inactive (yj² > 0, i.e. gj < 0);
- yj = 0: the constraint gj is active (gj = 0, behaving like an equality constraint) with λj ≥ 0.

Thus, from the necessary conditions, each constraint gj is either active (yj = 0 and gj = 0) or inactive (gj < 0). The inactive constraints — those with λj = 0 — have no influence on the location of the minimum point.

Kuhn-Tucker necessary conditions (when the set of active constraints is known):

  ∂f/∂xi (X) + Σ(j∈J1) λj ∂gj/∂xi = 0,  i = 1, 2, ..., n    (1)
  λj > 0,  j ∈ J1                                            (2)

where J = J1 (active constraints) ∪ J2 (inactive constraints).

Kuhn-Tucker necessary conditions (when the active set is not known):

  ∂f/∂xi (X) + Σ(j=1..m) λj ∂gj/∂xi = 0,  i = 1, 2, ..., n    (1)
  λj gj(X) = 0,  j = 1, 2, ..., m                              (2)
  gj ≤ 0,  j = 1, 2, ..., m                                    (3)
  λj ≥ 0,  j = 1, 2, ..., m                                    (4)


Example — Multivariable optimization with inequality constraints (necessary conditions)

Minimize f(x1, x2) = (x1 − 1.5)² + (x2 − 1.5)²
Subject to g(x1, x2) = x1 + x2 − 2 ≤ 0

Lagrange function (with slack variable y):

  L = (x1 − 1.5)² + (x2 − 1.5)² + λ(x1 + x2 − 2 + y²)

Necessary conditions:

  ∂L/∂x1 = 2(x1 − 1.5) + λ = 0     (a)
  ∂L/∂x2 = 2(x2 − 1.5) + λ = 0     (b)
  ∂L/∂λ = x1 + x2 − 2 + y² = 0     (c)
  ∂L/∂y = 2λy = 0                   (d)

Four equations in four unknowns; condition (d) splits the solution into two cases.

Case 1: y = 0 (constraint active)
  1) (a) and (b) give x1 = x2
  2) with y = 0 and x1 = x2, (c) gives x2 = 1, x1 = 1
  3) substituting x1 = 1 in (a) gives λ = 1

Case 2: λ = 0 (constraint inactive)
  1) (a) and (b) give x1 = x2 = 1.5
  2) then (c) requires 3 − 2 + y1² = 0, i.e. y1² = −1, which is impossible

Therefore Case 1 is the solution: X* = (1, 1), λ = 1, f(X*) = 0.5. The constraint is active at the optimum.
A i

Multi-variable optimization with equality and inequality constraints

Minimize f(x1, x2, ..., xn)
Subject to
  gj(X) ≤ 0,  j = 1, 2, ..., m
  hk(X) = 0,  k = 1, 2, ..., p
where X = [x1, x2, ..., xn]ᵀ = design variable vector.

The inequality constraints are again converted to equality constraints with slack variables yj²:

Minimize f(x1, x2, ..., xn)
Subject to
  Gj(X, Y) = gj(X) + yj² = 0,  j = 1, 2, ..., m
  hk(X) = 0,  k = 1, 2, ..., p
where Y = [y1, y2, ..., ym]ᵀ = slack variable vector.

Lagrange function:

  L(X, Y, λ, ν) = f(X) + Σ(j=1..m) λj Gj(X, Y) + Σ(k=1..p) νk hk(X)

where λ = [λ1, ..., λm]ᵀ and ν = [ν1, ..., νp]ᵀ are the Lagrange multipliers of Gj and hk.
Stationary points satisfy the necessary conditions

  ∂L/∂xi = ∂f/∂xi (X) + Σ(j=1..m) λj ∂gj/∂xi + Σ(k=1..p) νk ∂hk/∂xi = 0,  i = 1, 2, ..., n   (1)
  ∂L/∂λj = Gj(X, Y) = gj(X) + yj² = 0,  j = 1, 2, ..., m                                      (2)
  ∂L/∂νk = hk(X) = 0,  k = 1, 2, ..., p                                                       (3)
  ∂L/∂yj = 2 λj yj = 0,  j = 1, 2, ..., m                                                     (4)

This gives n + 2m + p equations; condition (4) treats the inequality constraints case by case, as before.


Example — Multivariable optimization with equality and inequality constraints

Maximize f(x1, x2) = x1 x2
Subject to
  g(x1, x2) = x1 − 4 ≤ 0
  h(x1, x2) = x1 + x2 − 10 = 0

Lagrange function:

  L(X, Y, λ, ν) = x1 x2 + λ1(x1 − 4 + y1²) + ν1(x1 + x2 − 10)

Necessary conditions:

  ∂L/∂x1 = x2 + λ1 + ν1 = 0        (a)
  ∂L/∂x2 = x1 + ν1 = 0             (b)
  ∂L/∂λ1 = x1 − 4 + y1² = 0        (c)
  ∂L/∂ν1 = x1 + x2 − 10 = 0        (d)
  ∂L/∂y1 = 2 λ1 y1 = 0             (e)

Condition (e) gives two cases.

Case 1: y1 = 0 (inequality active)
  1) (c) gives x1 = 4
  2) then (d) gives x2 = 6, and (b) gives ν1 = −4
  3) with x1, x2, and ν1 known, (a) gives λ1 = −2

Case 2: λ1 = 0 (inequality inactive)
  1) (a) and (b) give x1 = x2 = −ν1
  2) (d) then gives x1 = x2 = 5, ν1 = −5
  3) but (c) requires 5 − 4 + y1² = 0, i.e. y1² = −1, which is impossible

Therefore Case 1 is the solution: X* = (4, 6), λ1 = −2, ν1 = −4, f(X*) = 24; the inequality constraint is active at the optimum.
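Again the Case 1 candidate can be verified numerically (sketch, not in the slides):

```python
# Check that X* = (4, 6), lam1 = -2, nu1 = -4 satisfies the stationarity
# conditions (a)-(d) for max x1*x2  s.t.  x1 - 4 <= 0, x1 + x2 - 10 = 0.
x1, x2, lam1, nu1 = 4.0, 6.0, -2.0, -4.0

a = x2 + lam1 + nu1      # dL/dx1
b = x1 + nu1             # dL/dx2
g = x1 - 4               # inequality constraint (active -> 0)
h = x1 + x2 - 10         # equality constraint
f = x1 * x2              # optimum value 24
```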

Workshop 2 — Multivariable Optimization with equality and inequality constraints
Find the candidate points x1, x2 (and verify them with the sufficient condition):

Minimize f(x1, x2) = (x1 − 2)² + (x2 − 1)²
Subject to
  x1 + x2 ≤ 2
  x2 ≥ x1

Kuhn-Tucker necessary conditions (with equality constraints; λj > 0 only for active gj):

  ∂f/∂xi (X) + Σ(j=1..m) λj ∂gj/∂xi + Σ(k=1..p) νk ∂hk/∂xi = 0,  i = 1, 2, ..., n   (1)
  Gj(X, Y) = gj(X) + yj² = 0  and  gj(X) ≤ 0,  j = 1, 2, ..., m                      (2), (3)
  hk(X) = 0,  k = 1, 2, ..., p                                                       (4)
  2 λj yj = 0,  j = 1, 2, ..., m                                                     (5)
  λj ≥ 0,  j = 1, 2, ..., m;  νk is not restricted in sign                           (6)

Geometric meaning of Kuhn-Tucker condition (1): rearranged as

  −∂f/∂xi (X) = Σ(j=1..m) λj ∂gj/∂xi + Σ(k=1..p) νk ∂hk/∂xi,  i = 1, 2, ..., n   (1)

the negative gradient of the objective is a combination of the gradients of the active constraints (with λj ≥ 0), so no feasible descent direction exists at the optimum.

Optimum Design
in Mechanical Engineering
Lecture 4


Matlab
The Matlab Optimization Toolbox provides (among others):

  Type of minimization                             | Standard form for solution by Matlab      | Command in Matlab
  -------------------------------------------------|-------------------------------------------|------------------
  Function of one variable (scalar minimization)   | Find x to minimize f(x) with x1 < x < x2  | fminbnd
  Unconstrained minimization, several variables    | Find x to minimize f(x)                   | fminunc or fminsearch
  Minimization of several variables with constraints | Find x to minimize f(x) subject to c(x) ≤ 0, ceq(x) = 0, [A]x ≤ b, [Aeq]x = beq, l ≤ x ≤ u | fmincon

Example 1 — One variable minimization

Minimize f(x) = 2 − 4x + e^x
Subject to −10 ≤ x ≤ 10

Solve with Matlab (fminbnd).
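A Python counterpart (a sketch; the lecture uses Matlab's fminbnd): f(x) = 2 − 4x + e^x is convex, so its minimizer on [−10, 10] solves f'(x) = e^x − 4 = 0, i.e. x* = ln 4 ≈ 1.386, which bisection on f' locates quickly.

```python
import math

fp = lambda x: math.exp(x) - 4.0      # f'(x) for f(x) = 2 - 4x + e^x
a, b = -10.0, 10.0
for _ in range(60):                   # bisection: f' < 0 left of x*, > 0 right
    m = (a + b) / 2
    if fp(m) > 0:
        b = m
    else:
        a = m
xstar = (a + b) / 2                   # -> ln 4 ≈ 1.3863
```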

Example 2 — Multi-variable unconstrained minimization

Minimize f(x) = 100(x2 − x1²)² + (1 − x1)²   (Rosenbrock's function)

Solve with Matlab from the starting point (−1.2, 1.0), using:
1. fminsearch — Nelder-Mead simplex method
2. fminunc — BFGS (default), Hessian-based methods, or steepest descent

Both require an objective function file.
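In Python, scipy's `minimize` with `method="Nelder-Mead"` plays the role of fminsearch (a sketch assuming scipy is available; not part of the lecture):

```python
from scipy.optimize import minimize

# Rosenbrock's function, started from (-1.2, 1.0) as in the lecture
rosen = lambda x: 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2
res = minimize(rosen, x0=[-1.2, 1.0], method="Nelder-Mead",
               options={"xatol": 1e-9, "fatol": 1e-9})
# The minimum is at (1, 1) with f = 0.
```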

Example 3 — Multi-variable constrained minimization

Minimize f(x) = (x1 − 10)³ + (x2 − 20)³
Subject to
  13 ≤ x1 ≤ 100
  0 ≤ x2 ≤ 100
  g1 = 100 − (x1 − 5)² − (x2 − 5)² ≤ 0
  g2 = −82.81 + (x1 − 6)² + (x2 − 5)² ≤ 0

Solve with Matlab (fmincon) from the starting point (20.1, 5.84); fmincon requires an objective function file and a constraint file for the nonlinear equality and inequality constraints.

Workshop 1 — Multivariable Optimization with inequality constraints
Find x1, x2 to
Minimize f(x1, x2) = (x1 − 2)² + (x2 − 1)²
Subject to
  x1 + x2 ≤ 2
  x2 ≥ x1

Example — Multivariable Optimization with equality and inequality constraints
Data: rod diameters d1, d2 = 0.01 (m); density ρ = 7,850 kg/m³; allowable stress σ(allow) = 1,000 MPa.
Solution

Real-life practical engineering design

Step 1: Problem statement
Step 2: Data and information collection
Step 3: Identification/definition of design variables
Step 4: Identification of a criterion to be optimized
Step 5: Identification of constraints
Step 6: Solving the problem

Example 1 — Design of coil springs

Step 1: Problem statement: Coil springs are used in numerous practical applications. Detailed methods for analyzing and designing such mechanical components have been developed over the years (e.g., Spotts, 1953; Wahl, 1963; Shigley, 1977; Haug and Arora, 1979). The purpose of this project is to design a minimum mass spring (shown in Figure) to carry a given axial load (called a tension-compression spring) without material failure and while satisfying two performance requirements: the spring must deflect by at least Δ (in.) and the frequency of surge waves must not be less than ω0 (Hertz, Hz).

Step 2: Data and information collection
To formulate the problem of designing a coil spring, the following notation is defined:
  Deflection along the axis of the spring δ, in.
  Mean coil diameter D, in.; wire diameter d, in.
  Number of active coils N; gravitational constant g = 386 in./s²
  Frequency of surge waves ω, Hz
Let the material properties be given as
  Weight density of spring material γ = 0.285 lb/in.³
  Shear modulus G = (1.15 x 10⁷) lb/in.²
  Mass density of material ρ (= γ/g) = (7.38342 x 10⁻⁴) lb-s²/in.⁴
  Allowable shear stress τa = 80,000 lb/in.²

Step 2: Data and information collection (cont.)
Other data for the problem are given as
  Number of inactive coils Q = 2
  Applied load P = 10 lb
  Minimum spring deflection Δ = 0.5 in.
  Lower limit on surge wave frequency ω0 = 100 Hz
  Limit on outer diameter of the coil Do = 1.5 in.
The design equations for the spring are given as
  Load-deflection equation: P = K δ           (a)
  Spring constant: K = d⁴ G / (8 D³ N)        (b)

Step 2: Data and information collection (cont.)
  Shear stress: τ = 8 k P D / (π d³)                                          (c)
  Wahl stress concentration factor: k = (4D − d) / (4(D − d)) + 0.615 d / D   (d)
  Frequency of surge waves: ω = (d / (2 π N D²)) √(G / (2 ρ))                 (e)

Step 3: Identification/definition of design variables
The three design variables for the problem are defined as
  d = wire diameter, in.
  D = mean coil diameter, in.
  N = number of active coils
Step 4: Identification of a criterion to be optimized
The problem is to minimize the mass of the spring, given as volume x mass density:

  Mass = (1/4) π² (N + Q) D d² ρ

Step 5: Identification of constraints
The constraints for the problem are defined as
  Shear stress constraint: τ ≤ τa
  Constraint on frequency of surge waves: ω ≥ ω0
  Diameter constraint: D + d ≤ D0
  Deflection constraint: δ ≥ Δ
  Explicit bounds on design variables:
    dmin ≤ d ≤ dmax
    Dmin ≤ D ≤ Dmax
    Nmin ≤ N ≤ Nmax
The formulation is completed by writing the objective and all constraints in terms of the design variables d, D, and N.

Example 2 — Design of power screw

Step 1: Problem statement: A power screw, having double square threads, is to be designed to lift a load with maximum efficiency (see figure below). Formulate the optimization problem by treating the major diameter (d), the number of threads per inch (N), and the mean diameter of the collar (dc) as design variables.

Step 2: Data and information collection:
  Load, P = 1,500 lb
  Coefficient of friction of threads in nut, μ = 0.08
  Coefficient of friction of thrust collar in bearing, μc = 0.08
  Height of the screw, h = 6 in.
  Material is steel and Young's modulus, E = 30 x 10⁶ psi
  Permissible shear stress, τmax = 12,000 psi
  Permissible compressive stress, σmax = 20,000 psi

Step 2: Data and information collection: (cont.) The equations are as follows.
(1) The efficiency of the screw (e) is

  e = Work out / Work in

where Work out = P L and

  Work in = 2 π T,  T = (P dp / 2) (π μ dp + L) / (π dp − μ L) + (1/2) μc P dc

where T is the torque, L is the lead, dp is the pitch diameter, and dc is the collar diameter.

Step 2: Data and information collection: (cont.)
(2) The tensile area is

  At = (π / 4) ((dp + dr) / 2)²

where the pitch diameter (dp) and root diameter (dr) are given by

  dp = d − (0.649519 / N),  dr = d − (1.299038 / N)

and d denotes the major diameter of the screw.
The direct compressive stress in the screw (σ) is P / At.
(3) The self-locking condition is π μ dp ≥ L.

Step 3: Identification/definition of design variables
The three design variables for the problem are defined as
  d = major diameter, in.
  dc = mean diameter of the collar, in.
  N = number of threads per in.
Step 4: Identification of a criterion to be optimized
The problem is to maximize the screw efficiency (e):

  e = P L / [ 2π ( (P dp / 2) (π μ dp + L) / (π dp − μ L) + (1/2) μc P dc ) ]

Step 5: Identification of constraints
The constraints for the problem are defined as
(1) Direct compressive stress (σ) must be less than σmax.
(2) Direct compressive stress (σ) must be less than the buckling stress σb.
(3) Shearing of screw threads at minor diameter (dr) in the nut must be avoided.
(4) Shearing of the nut at major diameter of the screw must be avoided.
(5) Bearing stress in the threads must be less than σmax.
(6) Shear stress in the screw due to the applied torque (τ) must be less than τmax.
(7) Design variable values must be positive.

Step 5: Identification of constraints (cont.)
The constraint equations are
(1) The direct compressive stress on the screw:

  g1 = P / At − σmax ≤ 0

(2) The buckling constraint:

  g2 = P / At − σb ≤ 0

where the buckling stress (σb) is given by (assuming pin ends for the screw)

  σb = π² E I / (h² At),  I = π dp⁴ / 64

(3) The constraint on shearing of the screw threads:

  g3 = P / (π dr (h / 2)) − τmax ≤ 0

Step 5: Identification of constraints (cont.)
(4) The constraint on the shearing of the nut threads:

  g4 = P / (π d (h / 2)) − τmax ≤ 0

(5) The constraint on the bearing stress in the threads:

  g5 = P / [ (π / 4)(d² − dr²) (h / (2 p)) ] − σmax ≤ 0

where p is the pitch of the threads.
(6) The constraint on the shear stress in the screw due to the applied torque:

  g6 = T r / J = 16 T / (π dr³) − τmax ≤ 0

(7) The constraint for self-locking:

  g7 = L − π μ dp ≤ 0

(8) Nonnegativity conditions for the design variables:

  g8 = −d ≤ 0,  g9 = −dc ≤ 0,  g10 = −N ≤ 0

Workshop 2 — Multivariable Optimization with equality and inequality constraints
Problem statement: Columns are used as structural members in many practical applications. Many times such members are subjected to eccentric loads, such as in a jib crane. The problem is to design a minimum mass tubular column that is subjected to an eccentric load, as shown in Figure. The cross section of the column is a hollow circular tube with R and t as the mean radius and wall thickness, respectively.

(Figure: Eccentric column)

Optimum Design
in Mechanical Engineering
Lecture 5


Workshop — Multivariable Optimization with equality and inequality constraints (Solution)
Problem statement: Columns are used as structural members in many practical applications. Many times such members are subjected to eccentric loads, such as in a jib crane. The problem is to design a minimum mass tubular column that is subjected to an eccentric load, as shown in Figure. The cross section of the column is a hollow circular tube with R and t as the mean radius and wall thickness, respectively.

(Figure: Eccentric column)

Identification of design variables:
  R = mean radius of the tube (m)
  t = wall thickness (m)
Identification of the criterion to be optimized:
The objective is to minimize the mass of the column, which is given as

  f(x) = ρ L A = (7850)(5)(2 π R t) kg                  (1)

Identification of constraints:
  Stress constraint: σ ≤ σa                              (2)
  Buckling load constraint: P ≤ Pcr                      (3)
  Deflection constraint: δ ≤ Δ                           (4)
  Radius/thickness constraint: R/t ≤ 50                  (5)
  Bounds on the variables: 0.01 ≤ R ≤ 1, 0.005 ≤ t ≤ 0.2

Solution: Let us rename the variables and other parameters for Matlab as follows:

  x1 = R,  x2 = t
  c = x1 + x2/2,  e = 0.02 x1
  A = 2 π x1 x2,  I = π x1³ x2,  k² = I/A = x1²/2

Solution (cont.): Therefore, the optimization problem is stated in the standard form as follows:
minimize

  f(x) = 2 π (5)(7850) x1 x2

subject to

  g1(x) = (P / (2π x1 x2)) [ 1 + (2(0.02)(x1 + 0.5 x2)/x1) sec( (L√2/x1) √(P/(2π E x1 x2)) ) ] / σa − 1 ≤ 0
  g2(x) = 1 − π³ E x1³ x2 / (4 L² P) ≤ 0
  g3(x) = 0.02 x1 [ sec( L √(P / (π E x1³ x2)) ) − 1 ] / Δ − 1 ≤ 0
  g4(x) = x1 / (50 x2) − 1 ≤ 0
  0.01 ≤ x1 ≤ 1,  0.005 ≤ x2 ≤ 0.2

Matlab files
% File name = column_opt.m (Main file, starting here)
clear all
% Set options
options = optimset('LargeScale', 'off', 'TolCon', 1e-8, 'TolX', 1e-8);
% Set the lower and upper bounds for design variables
Lb = [0.01 0.005]; Ub = [1 0.2];
% Set initial design
x0 = [1 0.2];
% Invoke the constrained optimization routine, fmincon
[x, FunVal, ExitFlag, Output] = ...
    fmincon(@column_objf, x0, [], [], [], [], Lb, Ub, @column_conf, options);
x
FunVal
ExitFlag

Answer: x = (0.0537, 0.0050), FunVal = 66.1922

Matlab files (cont.)
% File name = column_objf.m (Objective function description)
% Column design
function f = column_objf(x)
% Rename design variables
x1 = x(1); x2 = x(2);
% Set input parameters
L = 5.0;    % length of column (m)
rho = 7850; % density (kg/m^3)
f = 2*pi*L*rho*x1*x2; % mass of the column

Matlab files (cont.)
% File name = column_conf.m (Constraint description)
% Column design
function [g, h] = column_conf(x)
x1 = x(1); x2 = x(2);
% Set input parameters
P = 50000;    % loading (N)
E = 210e9;    % Young's modulus (Pa)
L = 5.0;      % length of the column (m)
Sy = 250e6;   % allowable stress (Pa)
Delta = 0.25; % allowable deflection (m)
% Inequality constraints
g(1) = P/(2*pi*x1*x2)*(1 + ...
    2*0.02*(x1+x2/2)/x1*sec( 5*sqrt(2)/x1*sqrt(P/E/(2*pi*x1*x2)) ) )/Sy - 1;
g(2) = 1 - pi^3*E*x1^3*x2/4/L^2/P;
g(3) = 0.02*x1*( sec( L*sqrt( P/(pi*E*x1^3*x2) ) ) - 1 )/Delta - 1;
g(4) = x1/x2/50 - 1;
% Equality constraint (none)
h = [];


Elimination methods, for a unimodal function (one with a single minimum in the interval; Figure 1):
1. Fixed step size search
2. Accelerated step size search
3. Exhaustive search
4. Dichotomous search
5. Interval halving method

Fixed step size search
1. Start with an initial guess point x1
2. Evaluate f1 = f(x1)
3. Choose a step size s and set x2 = x1 + s
4. Evaluate f2 = f(x2)
5. If f2 < f1, keep stepping in the same direction through x3, x4, ..., xi = x1 + (i − 1)s, until f(xi) > f(xi−1)
6. The minimum then lies between xi−2 and xi; take xi−1 as the answer (the accuracy is limited by the step size s)
7. If instead f2 > f1 in step 4, reverse the search direction: x−2, x−3, ..., x−j = x1 − (j − 1)s
8. If f2 = f1, the minimum lies between x1 and x2
9. If both f−2 and f2 exceed f1, the minimum lies between x−2 and x2

Accelerated step size search
The step size is doubled at each move, so the increments from x1 are s, 2s, 4s, ...; once the function value rises, the minimum is bracketed between xi−1 and xi, and the search can be restarted inside that interval with a smaller step.

Example: minimize f(x) = x(x − 1.5) starting at x = 0 with an initial step of 0.05.
At x1 = 0, f1 = 0. In the reverse direction, x−2 = −0.05 gives f−2 = 0.0775.
Since f−2 > f1 and the function is unimodal, the minimum lies in the +x direction, so the accelerated search proceeds with steps 0.05, 0.1, 0.2, ... until the function value increases, which brackets the minimum.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% Accelerated step size search
%%% 1D Optimization without constraint
%%% By Asst. Prof. Dr. Monsak Pimsarn
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
clear all;
s(1) = 0.0; %% Initial step size
x(1) = 0;   %% Starting point
i = 1;
Diff = -1;
f(i) = objf(x(i));
fprintf(1, '------------------------------------\n');
fprintf(1, ' i     s      xi = x1+s   fi \n');
fprintf(1, '------------------------------------\n');
fprintf(1, '%d  %4.3f  %3.2f  %5.4f\n', i, s(i), x(i), f(i));

while Diff < 0,
    i = i+1;
    if i == 2
        s(i) = 0.05;
    else
        s(i) = 2*s(i-1);
    end
    x(i) = x(1) + s(i);
    f(i) = objf(x(i));
    fprintf(1, '%d  %4.3f  %3.2f  %5.4f\n', i, s(i), x(i), f(i));
    Diff = f(i) - f(i-1);
end

Workshop 1
Find the minimum of the function

  f(x) = 0.65 − 0.75/(1 + x²) − 0.65 x tan⁻¹(1/x)

using the following methods:
(a) Unrestricted search with a fixed step size of 0.1 from the starting point 0.0
(b) Unrestricted search with an accelerated step size using an initial step size of 0.1 and starting point of 0.0

Exhaustive search
The function is evaluated at n equally spaced points inside the interval (xs, xf), dividing it into n + 1 subintervals (figure: n = 8). L0 = xf − xs is the initial interval of uncertainty.

Exhaustive search (cont.)
The final interval of uncertainty is Ln = 2 L0 / (n + 1), where n is the number of experiments.

Example: find the minimum of f(x) = x(x − 1.5) in (0, 1.0) to within 10% of the interval.
Taking the midpoint of Ln as the answer, the accuracy is (1/2) Ln/L0 = 1/(n + 1) ≤ 1/10, which requires n ≥ 9; take n = 9.
Evaluating at x = 0.1, 0.2, ..., 0.9 gives f7 = f(0.7) = f8 = f(0.8), so the minimum lies in (0.7, 0.8) and x* ≈ 0.75.
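The exhaustive search above is a few lines of Python (a sketch, not from the slides):

```python
# Exhaustive search: f(x) = x(x - 1.5) on (0, 1) with n = 9 interior points.
f = lambda x: x * (x - 1.5)
xs, xf, n = 0.0, 1.0, 9

pts = [xs + (xf - xs) * i / (n + 1) for i in range(1, n + 1)]  # 0.1 ... 0.9
fs = [f(x) for x in pts]
i = fs.index(min(fs))
lo, hi = pts[i - 1], pts[i + 1]   # bracketing interval around the best point
xstar = pts[i]                    # close to the exact minimizer 0.75
```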

Dichotomous search
Two experiments x1 and x2 are placed a small distance δ apart, straddling the midpoint of the interval (xs, xf); comparing f1 and f2 shows which end of the interval can be discarded (the part beyond x2, or below x1, cannot contain the minimum of a unimodal function).

Dichotomous search (cont.)
The pair of experiments is placed at

  x1 = L0/2 − δ/2,  x2 = L0/2 + δ/2

so the remaining interval after one pair is L0/2 + δ/2.
After n experiments (n even, applied in pairs) the final interval of uncertainty is

  Ln = L0 / 2^(n/2) + δ (1 − 1/2^(n/2))

Dichotomous search (cont.)
Example: find the minimum of f(x) = x(x − 1.5) in (0, 1.0) to within 10% of the interval, using δ = 0.001.

  Ln/L0 = 1/2^(n/2) + (δ/L0)(1 − 1/2^(n/2))

Taking the midpoint of Ln as the answer, the requirement is

  (1/2)[ 1/2^(n/2) + (δ/L0)(1 − 1/2^(n/2)) ] ≤ 1/10,
  i.e.  1/2^(n/2) + (δ/L0)(1 − 1/2^(n/2)) ≤ 1/5

Dichotomous search (cont.)
With δ = 0.001 and L0 = 1:

  1/2^(n/2) + (1/1000)(1 − 1/2^(n/2)) ≤ 1/5
  (999/1000)(1/2^(n/2)) ≤ 1/5 − 1/1000 = 199/1000
  2^(n/2) ≥ 999/199 ≈ 5.0

so n/2 ≥ log2(5.0) ≈ 2.32, and the smallest even n is n = 6.

Dichotomous search (cont.)
Pair 1 (about the midpoint 0.5): f1 > f2, so the minimum lies in (0.4995, 1.0)
Pair 2 (about the midpoint 0.74975): f3 > f4, so the minimum lies in (0.74925, 1.0)
Pair 3 (about the midpoint 0.874625): f6 > f5, so the minimum lies in (0.74925, 0.875125) = (x3, x6)
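The three pairs of experiments can be reproduced in Python (sketch, not from the slides); the final interval matches (0.74925, 0.875125):

```python
# Dichotomous search on f(x) = x(x - 1.5) over (0, 1),
# delta = 0.001, n = 6 experiments (3 pairs).
f = lambda x: x * (x - 1.5)
a, b, delta = 0.0, 1.0, 0.001

for _ in range(3):                     # n/2 = 3 pairs of experiments
    mid = (a + b) / 2
    x1, x2 = mid - delta / 2, mid + delta / 2
    if f(x1) > f(x2):
        a = x1                         # minimum lies in (x1, b)
    else:
        b = x2                         # minimum lies in (a, x2)
# final interval (a, b) = (0.74925, 0.875125)
```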

Interval halving method
Three experiments divide the interval into four parts: the midpoint x0 and the quarter points x1 and x2, placed at x0 ± L0/4.
1. Divide L0 = [a, b] using the midpoint x0 and the quarter points x1 = x0 − L0/4 and x2 = x0 + L0/4
2. Evaluate f1 = f(x1), f2 = f(x2), and f0 = f(x0)
3. (a) If f2 > f0 > f1, delete (x0, b): set x1 → x0 and x0 → b; go to step 4
   (b) If f2 < f0 < f1, delete (a, x0): set x2 → x0 and x0 → a; go to step 4
   (c) If f1 > f0 and f0 < f2, delete (a, x1) and (x2, b): set x1 → a and x2 → b; go to step 4
4. Set L = b − a; if L is small enough, stop; otherwise set L0 = L and return to step 1
Half of the interval is deleted at each stage, so after n experiments (n ≥ 3, odd)

  Ln = (1/2)^((n − 1)/2) L0

Interval halving method (cont.)
Example: find the minimum of f(x) = x(x − 1.5) in (0, 1.0) to within 10% of the interval.
Taking the midpoint as the answer, (1/2) Ln/L0 ≤ 1/10 requires

  (1/2)^((n − 1)/2) ≤ 1/5,  or  2^((n − 1)/2) ≥ 5

With L0 = 1 this gives n = 7 (the next odd number).

Stage 1 (x0 = 0.5, x1 = 0.25, x2 = 0.75): f1 > f0 > f2, so delete (a, x0) = (0, 0.5); x2 → x0, x0 → a: L3 = [0.5, 1]
Stage 2 (x0 = 0.75, x1 = 0.625, x2 = 0.875): f1 > f0 < f2, so delete (a, x1) = (0.5, 0.625) and (x2, b) = (0.875, 1); x1 → a, x2 → b: L5 = [0.625, 0.875]
Stage 3 (x0 = 0.75, x1 = 0.6875, x2 = 0.8125): f1 > f0 < f2, so delete (0.625, 0.6875) and (0.8125, 0.875); x1 → a, x2 → b: L7 = [0.6875, 0.8125]
The midpoint of L7, x = 0.75, coincides with the exact solution.
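The three stages above translate directly into Python (a sketch, not from the slides):

```python
# Interval halving on f(x) = x(x - 1.5) over (0, 1), 3 stages (n = 7).
f = lambda x: x * (x - 1.5)
a, b = 0.0, 1.0

for _ in range(3):                     # each stage halves the interval
    L = b - a
    x0 = a + L / 2
    x1, x2 = a + L / 4, b - L / 4
    f0, f1, f2 = f(x0), f(x1), f(x2)
    if f2 > f0 > f1:                   # minimum in (a, x0)
        b = x0
    elif f2 < f0 < f1:                 # minimum in (x0, b)
        a = x0
    else:                              # f1 > f0 < f2: minimum in (x1, x2)
        a, b = x1, x2
# final interval [a, b] = [0.6875, 0.8125], midpoint 0.75
```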

Workshop 2
Find the minimum of the function

  f(x) = 0.65 − 0.75/(1 + x²) − 0.65 x tan⁻¹(1/x)

using the following methods:
(a) Dichotomous search method in the interval (0, 3) to achieve an accuracy of within 5% of the exact value using a value of δ = 0.0001
(b) Interval halving method in the interval (0, 3) to achieve an accuracy of within 5% of the exact value

Optimum Design
in Mechanical Engineering
Lecture 6


Elimination methods (cont.):
5. Interval halving method
6. Golden section search


Golden section search
Divide the interval L0 = [a, b] with two interior points x1 and x2:

  x1 = a + (1 − r) L0        (1)
  x2 = a + r L0              (2)

The ratio r is chosen so that each reduced interval is divided in the same proportion as the original:

  L0 / (r L0) = r L0 / ((1 − r) L0)

which gives 1 − r = r², or r² + r − 1 = 0, so

  r = (−1 + √5)/2 = 0.618  and  1/r = 1.618

Golden section search (cont.)
The number 1.618 is the golden ratio, known since antiquity (attributed to the sculptor Phidias, ca. 440 BC, and studied by Pythagoras, 560-480 BC); successive Fibonacci ratios such as 2:3, 5:8, 8:13, 89:144 approximate it.

Golden section search (cont.)
Start with L0 = [0, 1] and place the two experiments

  x1 = a + (1 − r) L0 = 0 + (1 − 0.618)(1 − 0) = 0.382
  x2 = a + r L0 = 0 + 0.618(1) = 0.618

and evaluate f1 and f2 at x1 and x2.

Case 1: f1 > f2 — the minimum cannot lie in [a, x1]; keep [x1, b].
Set a = x1, L1 = [a, b], and place new points

  x1 = a + (1 − r) L1 = 0.382 + (1 − 0.618)(1 − 0.382) = 0.618
  x2 = a + r L1 = 0.382 + 0.618(1 − 0.382) = 0.764

The new x1 equals the old x2, so only one new function evaluation is needed.

Case 2: f1 < f2 — the minimum cannot lie in [x2, b]; keep [a, x2].
Set b = x2, L1 = [a, b], and place new points

  x1 = a + (1 − r) L1 = 0 + (1 − 0.618)(0.618 − 0) = 0.236
  x2 = a + r L1 = 0 + 0.618(0.618 − 0) = 0.382

The new x2 equals the old x1.

Golden section search algorithm:
1. Choose a convergence tolerance ε
2. Set L0 = [a, b]
3. Set k = 1
4. If |a − b| ≤ ε, stop and take x* = (a + b)/2
5. Place the two interior points

   Lk−1 = b − a
   x1 = a + 0.382 Lk−1
   x2 = a + 0.618 Lk−1

6. Evaluate f(x1) and f(x2)
7. If f(x1) > f(x2), keep [x1, b]: set a = x1, k = k + 1, and go to step 4
8. If f(x1) < f(x2), keep [a, x2]: set b = x2, k = k + 1, and go to step 4
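The algorithm above translates directly into Python (a sketch, not from the slides), here run to a tight tolerance on the function used in the following example:

```python
import math

# Golden section search on f(x) = 0.65 - 0.75/(1+x^2) - 0.65 x atan(1/x)
# over (0, 3), following steps 4-8 above.
def f(x):
    return 0.65 - 0.75 / (1 + x**2) - 0.65 * x * math.atan(1 / x)

a, b = 0.0, 3.0
eps = 1e-6
while abs(b - a) > eps:
    L = b - a
    x1, x2 = a + 0.382 * L, a + 0.618 * L
    if f(x1) > f(x2):
        a = x1                        # keep [x1, b]
    else:
        b = x2                        # keep [a, x2]
xstar = (a + b) / 2                   # about 0.481
```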

Golden Section Search: Example
Minimize

    f(x) = 0.65 - 0.75/(1 + x²) - 0.65 x tan⁻¹(1/x)

on the interval (0, 3.0), stopping after k = 5 iterations.

k = 1: L0 = b - a = 3 - 0 = 3
    x1 = a + 0.382 L0 = 0.382 (3) = 1.1460
    x2 = a + 0.618 L0 = 0.618 (3) = 1.8540
    f1 = f(x1) = -0.208654,   f2 = f(x2) = -0.115124

Since f2 > f1, discard [x2, 3.0]: b = x2 = 1.8540.
k = 2: L1 = b - a = 1.8540. The old x1 becomes the new x2; only the new x1 needs evaluating:
    x1 = a + 0.382 L1 = 0.382 (1.854) = 0.7083,   f1 = f(x1) = -0.288943

Since f2 > f1, discard [x2, 1.8540] = [1.1460, 1.8540]: b = x2 = 1.1460.
k = 3: L2 = b - a = 1.1460. The old x1 becomes the new x2 = 0.7083, and the new x1 = 0.382 (1.1460) = 0.4378.

Since f2 > f1, discard [x2, 1.1460] = [0.7083, 1.1460]: b = x2 = 0.7083.
k = 4: L3 = b - a = 0.7083. The old x1 becomes the new x2 = 0.4378, and
    x1 = a + 0.382 L3 = 0.382 (0.7083) = 0.2706,   f1 = f(x1) = -0.278434

Now f1 > f2, so discard [0, x1] = [0, 0.2706]: a = x1 = 0.2706.
k = 5 (final iteration): L4 = b - a = 0.7083 - 0.2706 = 0.4377. The old x2 becomes the new x1 = 0.4378, and
    x2 = a + 0.618 L4 = 0.2706 + 0.618 (0.4377) = 0.5411,   f2 = f(x2) = -0.308234

Since f2 > f1, discard [x2, b] = [0.5411, 0.7083]: b = x2 = 0.5411.

Result: x* ≈ (a + b)/2 = (0.2706 + 0.5411)/2 = 0.4059, with interval of uncertainty |a - b| = 0.2705. For comparison, Matlab's fminbnd gives x*(Matlab) = 0.4809.
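The tabulated function values can be spot-checked by coding the example's objective directly (a stdlib-only Python sketch):

```python
import math

def f(x):
    # objective function of the example above
    return 0.65 - 0.75 / (1 + x**2) - 0.65 * x * math.atan(1 / x)

# (x, f(x)) pairs tabulated in the iterations above
table = [(1.1460, -0.208654), (1.8540, -0.115124), (0.7083, -0.288943),
         (0.2706, -0.278434), (0.5411, -0.308234)]
for x, fx in table:
    assert abs(f(x) - fx) < 1e-3, (x, f(x))
```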

Golden Section Search: Matlab
The same problem can be solved with Matlab's built-in fminbnd:
Step 1: Objective function (objfun.m)
function f = objfun(x)
% objective function of the example
f = 0.65 - (0.75/(1 + x^2)) - 0.65*x*atan(1/x);
Step 2: Driver script (ExampleFile.m) calling fminbnd
clear all
options = optimset('LargeScale','off');
[x,fval] = fminbnd(@objfun,0,0.5,options);
x
fval

Workshop 1
Use the golden section search to minimize

    f(x) = x(x - 1.5)

on the interval (0, 1), carrying out k = 5 iterations, and compare the result with the exact solution.

Multivariable Optimization Without Constraints
Two gradient-based methods will be studied:
1. Steepest descent method
2. Conjugate gradient method

General Iterative Scheme
Starting from an initial design x(0), the design is updated iteratively:

    x(k+1) = x(k) + Δx(k),   k = 0, 1, 2, ...

or, component by component,

    xi(k+1) = xi(k) + Δxi(k),   i = 1 to n,  k = 0, 1, 2, ...

where i indexes the design variables and k the iterations. The design change is written as

    Δx(k) = αk d(k)

where αk = step size (a positive scalar) and d(k) = search direction. The update is repeated until a convergence criterion is satisfied.
General Algorithm
1. Choose a starting design x(0); set k = 0.
2. Compute a search direction d(k).
3. Check the convergence criterion; if satisfied, stop.
4. Compute a step size αk along d(k).
5. Update the design, x(k+1) = x(k) + αk d(k); set k = k + 1 and go to step 2.

The pair (αk, d(k)) must reduce the function value at every iteration:

    f(x(k+1)) < f(x(k)),   i.e.   f(x(k) + αk d(k)) < f(x(k))

Expanding the left side in a Taylor series about x(k):

    f(x(k)) + αk (c(k) · d(k)) < f(x(k))

where

    c(k) = ∇f(x(k)) = gradient of f at x(k)

so that

    αk (c(k) · d(k)) < 0

Since the step size αk is positive, d(k) reduces f only if

    c(k) · d(k) < 0

that is, the angle between the gradient and d(k) lies between 90° and 270°. Such a d(k) is called a descent (downhill) direction.

Example: For

    f(x) = x1² - x1x2 + 2x2² - 2x1 + e^(x1+x2)

check whether d = (1, 2) at the point (0, 0) is a descent direction.

d = (1, 2) is a descent direction only if c · d < 0. The gradient is

    c = ∇f(x) = (2x1 - x2 - 2 + e^(x1+x2), -x1 + 4x2 + e^(x1+x2)) = (-1, 1) at (0, 0)

    c · d = (-1, 1) · (1, 2) = -1 + 2 = 1 > 0

Therefore d = (1, 2) is not a descent direction.
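The same check can be reproduced in code (a Python sketch; the gradient is entered analytically and the function names are mine):

```python
import math

def grad_f(x1, x2):
    # gradient of f = x1^2 - x1*x2 + 2*x2^2 - 2*x1 + exp(x1 + x2)
    e = math.exp(x1 + x2)
    return (2 * x1 - x2 - 2 + e, -x1 + 4 * x2 + e)

c = grad_f(0.0, 0.0)                    # (-1.0, 1.0)
d = (1.0, 2.0)
c_dot_d = c[0] * d[0] + c[1] * d[1]     # -1 + 2 = 1 > 0: not a descent direction
```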

Step Size Determination
Once the search direction d(k) is fixed, finding x(k+1) reduces to a one-variable minimization in the step size α:

    minimize f(α) = f(x(k) + α d(k))

Example: For

    f(x) = 3x1² + 2x1x2 + 2x2² + 7

find the step size αk that minimizes f along d(k) = (-1, -1) from the point (1, 2).

At x(k) = (1, 2), f(x(k)) = 22. First check that d(k) = (-1, -1) is a descent direction:

    c(k) = ∇f = (6x1 + 2x2, 2x1 + 4x2) = (10, 10) at (1, 2)
    c(k) · d(k) = -10 - 10 = -20 < 0, so it is.

Express x(k+1) in terms of α:

    x(k+1) = x(k) + α d(k) = (1, 2) + α(-1, -1),   i.e.   x1(k+1) = 1 - α,  x2(k+1) = 2 - α

    f(x(k+1)) = 3(1 - α)² + 2(1 - α)(2 - α) + 2(2 - α)² + 7
    f(α) = 7α² - 20α + 22

Applying the necessary and sufficient conditions:

    df/dα = 14α - 20 = 0,   αk = 10/7
    d²f/dα² = 14 > 0   (a minimum)

With αk = 10/7 and d(k) = (-1, -1):

    x(k+1) = (1, 2) + (10/7)(-1, -1) = (-3/7, 4/7)
    f(x(k+1)) = 54/7 ≈ 7.71 < f(x(k)) = 22

In general f(α) is not a simple closed-form expression, and a numerical method such as the golden section search is used to find αk.
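The analytic step size can be cross-checked numerically (a Python sketch; a bare-bones golden section search is used, assuming the minimizing step lies in [0, 2]):

```python
import math

def f(x1, x2):
    return 3 * x1**2 + 2 * x1 * x2 + 2 * x2**2 + 7

def f_alpha(a):
    # f along the line x(k) + a*d(k) = (1, 2) + a*(-1, -1), i.e. 7a^2 - 20a + 22
    return f(1 - a, 2 - a)

# golden section search for the step size on [0, 2]
r = (math.sqrt(5) - 1) / 2
a, b = 0.0, 2.0
while b - a > 1e-8:
    x1, x2 = a + (1 - r) * (b - a), a + r * (b - a)
    if f_alpha(x1) > f_alpha(x2):
        a = x1
    else:
        b = x2
alpha = (a + b) / 2          # numerically ~ 10/7, the analytic answer
```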

Steepest Descent Method
The search direction is the negative of the gradient:

    d(k) = -c(k) = -∇f(x(k))

Algorithm:
1. Choose a starting design x(0), a convergence tolerance ε > 0 (convergence criterion), and set k = 0.
2. Compute c(k) = ∇f(x(k)).
3. If ||c(k)|| < ε, stop: x(k) = x*.
4. Set the search direction d(k) = -c(k).
5. Compute the step size αk that minimizes f(x(k) + α d(k)).
6. Update x(k+1) = x(k) + αk d(k); set k = k + 1 and go to step 2.

Example: Minimize

    f(x) = x1² - 2x1x2 + x2²

by the steepest descent method, starting from (1, 0).

1. x(0) = (1, 0)
2. c(0) = (2x1 - 2x2, 2x2 - 2x1) = (2, -2)
3. ||c(0)|| = 2.83 ≠ 0
4. d(0) = -c(0) = (-2, 2)
5. f(x(0) + α d(0)) = (1 - 4α)²; the necessary and sufficient conditions give α0 = 1/4.
6. x(1) = x(0) + α0 d(0) = (0.5, 0.5); k = 0 + 1 = 1. Returning to step 2:

    c(1) = (2x1 - 2x2, 2x2 - 2x1) = (0, 0),   ||c(1)|| < ε

so x(1) = (0.5, 0.5) is the optimum, with f(x(1)) = 0.
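The algorithm can be sketched as follows (Python; the line search assumes the optimal step lies in [0, 2], which holds for this example, and all names are mine):

```python
import math

def line_search(phi, a=0.0, b=2.0, tol=1e-9):
    """Golden section search for the step size (minimum assumed in [a, b])."""
    r = (math.sqrt(5) - 1) / 2
    while b - a > tol:
        x1, x2 = a + (1 - r) * (b - a), a + r * (b - a)
        if phi(x1) > phi(x2):
            a = x1
        else:
            b = x2
    return (a + b) / 2

def steepest_descent(f, grad, x0, eps=1e-6, max_iter=100):
    x = list(x0)
    for _ in range(max_iter):
        c = grad(x)
        if math.sqrt(sum(ci * ci for ci in c)) < eps:    # step 3: ||c|| < eps
            break
        d = [-ci for ci in c]                            # step 4: d(k) = -c(k)
        t = line_search(lambda t: f([xi + t * di for xi, di in zip(x, d)]))
        x = [xi + t * di for xi, di in zip(x, d)]        # step 6: update
    return x

f = lambda x: x[0]**2 - 2 * x[0] * x[1] + x[1]**2
grad = lambda x: (2 * x[0] - 2 * x[1], 2 * x[1] - 2 * x[0])
x = steepest_descent(f, grad, (1.0, 0.0))               # -> (0.5, 0.5)
```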

Workshop 2
Minimize

    f(x) = x1² + 2x2² - 4x1 - 2x1x2

by the steepest descent method, starting from (1, 1); carry out 2 iterations (up to k = 3).

Optimum Design
in Mechanical Engineering
Lecture 7

Topics:
1. Steepest descent method (review)
2. Conjugate gradient method

Conjugate Gradient Method
Due to Fletcher and Reeves (1964), the method is a small modification of the steepest descent method that converges much faster. Each new direction is the steepest descent direction deflected by the previous direction:

    d(k) = -c(k) + βk d(k-1);   βk = (||c(k)|| / ||c(k-1)||)²

The very first step is a pure steepest descent step.

Conjugate Gradient Method (cont.)
(Figure: the steepest descent iterates zig-zag toward the optimum, while the conjugate gradient iterates reach it in far fewer steps.)

Algorithm:
1. Take one steepest descent step from x(0) to obtain x(1); set k = 1.
2. Compute c(k) = ∇f(x(k)).
3. If ||c(k)|| < ε, stop: x(k) = x*.
4. Compute the search direction

    d(k) = -c(k) + βk d(k-1);   βk = (||c(k)|| / ||c(k-1)||)²

5. Compute the step size αk that minimizes f(x(k) + α d(k)).
6. Update x(k+1) = x(k) + αk d(k); set k = k + 1 and go to step 2.


Example: Minimize

    f(x) = x1² + 2x2² + 2x3² + 2x1x2 + 2x2x3

by the conjugate gradient method, starting from (2, 4, 10), for 2 iterations.

1. Steepest descent step to obtain x(1):
1.1 x(0) = (2, 4, 10), f(x(0)) = 332.0
1.2 c(0) = (2x1 + 2x2, 4x2 + 2x1 + 2x3, 4x3 + 2x2) = (12, 40, 48)
1.3 ||c(0)|| = 63.6 > ε = 0.005
1.4 d(0) = -c(0) = (-12, -40, -48)
1.5 Minimizing f(x(0) + α d(0)) gives α0 = 0.1587
1.6 x(1) = x(0) + α0 d(0) = (0.0956, -2.348, 2.381); set k = 0 + 1 = 1 and go to step 2.
2. c(1) = (-4.5, -4.438, 4.828), f(x(1)) = 10.75
3. ||c(1)|| = 7.952 > ε, so continue with step 4.
4. β1 = (||c(1)|| / ||c(0)||)² = (7.952/63.6)² = 0.015633

    d(1) = -c(1) + β1 d(0) = (4.500, 4.438, -4.828) + 0.015633 (-12, -40, -48) = (4.31241, 3.81268, -5.57838)

5. Minimizing f(x(1) + α d(1)) gives α1 = 0.3156
6. x(2) = x(1) + α1 d(1) = (0.0956, -2.348, 2.381) + 0.3156 (4.31241, 3.81268, -5.57838) = (1.4566, -1.1447, 0.6205)
Set k = 1 + 1 = 2 and go to step 2:
2. c(2) = (0.6238, -0.4246, 0.1926)
3. ||c(2)|| = 0.7788 > ε, so continue with step 4.
4. β2 = (||c(2)|| / ||c(1)||)² = (0.7788/7.952)² = 0.0096
The iterations continue and converge to the exact optimum x* = (0, 0, 0), f(x*) = 0.


Homework
Solve the following problem by both the steepest descent and the conjugate gradient methods and compare their convergence:
Minimize f(X) = 50(x2 - x1²)² + (2 - x1)²
X0 = (5, 5), ε = 0.005. Answer: X* = (2, 4), f(X*) = 0
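A sketch of the Fletcher-Reeves iteration for the worked example above (Python; because f is the quadratic (1/2)XᵀAX, the exact step size αk = -c·d / dᵀAd replaces the numerical line search, and all names are mine):

```python
import math

# f(x) = x1^2 + 2*x2^2 + 2*x3^2 + 2*x1*x2 + 2*x2*x3 = (1/2) x^T A x,
# so the gradient is c = A x.
A = [[2, 2, 0], [2, 4, 2], [0, 2, 4]]

def matvec(M, v):
    return [sum(m * vi for m, vi in zip(row, v)) for row in M]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def fletcher_reeves(x, eps=0.005, max_iter=50):
    c = matvec(A, x)                       # c(k): gradient at x(k)
    d = [-ci for ci in c]                  # first direction: steepest descent
    for _ in range(max_iter):
        if math.sqrt(dot(c, c)) < eps:     # convergence test ||c|| < eps
            break
        alpha = -dot(c, d) / dot(d, matvec(A, d))   # exact step for a quadratic
        x = [xi + alpha * di for xi, di in zip(x, d)]
        c_new = matvec(A, x)
        beta = dot(c_new, c_new) / dot(c, c)        # (||c(k)||/||c(k-1)||)^2
        d = [-ci + beta * di for ci, di in zip(c_new, d)]
        c = c_new
    return x

x_star = fletcher_reeves([2.0, 4.0, 10.0])   # converges to (0, 0, 0)
```

For a quadratic in n variables the method reaches the optimum in at most n such steps (here n = 3).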

Workshop 1
Minimize

    f(x) = x1 - x2 + 2x1² + 2x1x2 + x2²

by the conjugate gradient method, starting from (0, 0).
Answer: x* = (-1, 1.5), f(x*) = -1.25, reached at x(3) = x*.

Constrained Optimization: Penalty Function Methods
The unconstrained methods above stop wherever the search direction vanishes (d(k) = 0), so constraints must be handled separately. A standard approach is the penalty function method, in two variants:
1) Interior penalty function method
2) Exterior penalty function method

Multi-variable optimization
with equality and inequality constraints
Minimize

    f(x1, x2, ..., xn)

Subject to

    gj(X) ≤ 0,   j = 1, 2, ..., m
    hk(X) = 0,   k = 1, 2, ..., p

where X = (x1, x2, ..., xn)ᵀ is the design variable vector.

Penalty function method
The penalty function method converts the constrained problem into a sequence of unconstrained problems. There are 2 variants:
1. Interior penalty function method
2. Exterior penalty function method

1. Interior penalty function method
For the inequality-constrained problem
Find X which minimizes f(X)
Subject to gj(X) ≤ 0, j = 1, 2, 3, ..., m
define the penalty (barrier) function

    φ = φ(X, rk) = f(X) - rk Σ(j=1..m) 1/gj(X)

where rk = penalty parameter, rk > 0. Inside the feasible region every gj < 0, so φ > f(X).

1. Interior penalty function method (cont.)
Example: Find X = {x1} which minimizes f(X) = x1
Subject to

    g1(X) = -x1 ≤ 0

The penalty function is

    φk = φ(X, rk) = x1 - rk/(-x1) = x1 + rk/x1

As the penalty parameter decreases (rk > rk+1), the unconstrained minima of φk approach the constrained optimum from the interior of the feasible design space.

1. Interior penalty function method (cont.)
Algorithm:
1. Choose a starting point X1 that satisfies all the constraints (i.e. gj(X) < 0, j = 1, 2, 3, ..., m), an initial penalty parameter r1 > 0, and set k = 1.
2. Minimize φ(X, rk) by an unconstrained method to obtain X*k.
3. Test whether X*k is the optimum of the original problem; if so, stop, otherwise go to step 4. Typical convergence criteria:

    |f(X*k) - f(X*k-1)| / |f(X*k)| ≤ ε1   or   ||X*k - X*k-1|| = [Δx1² + Δx2² + ... + Δxn²]^(1/2) ≤ ε2

4. Choose the next penalty parameter rk+1 = c rk, with c < 1 (e.g. 0.1, 0.2 or 0.5).
5. Set k = k + 1, take X1 = X*k as the new starting point, and go to step 2.

1. Interior penalty function method (cont.)
Example:
Minimize

    f(x1, x2) = (1/3)(x1 + 1)³ + x2

Subject to

    g1(x1, x2) = -x1 + 1 ≤ 0
    g2(x1, x2) = -x2 ≤ 0

The penalty function is

    φ(X, r) = (1/3)(x1 + 1)³ + x2 - r [1/(-x1 + 1)] - r [1/(-x2)]
            = (1/3)(x1 + 1)³ + x2 + r/(x1 - 1) + r/x2

The first-order necessary conditions give

    ∂φ/∂x1 = (x1 + 1)² - r/(x1 - 1)² = 0,   that is,   (x1 + 1)² (x1 - 1)² = r
    ∂φ/∂x2 = 1 - r/x2² = 0,   that is,   x2² = r

These equations give

    x1*(r) = (r^(1/2) + 1)^(1/2),   x2*(r) = r^(1/2)

and

    φmin(r) = (1/3)[(r^(1/2) + 1)^(1/2) + 1]³ + 2r^(1/2) + r/[(r^(1/2) + 1)^(1/2) - 1]

Taking the limit as r → 0:

    fmin = lim(r→0) φmin(r) = 8/3
    x1* = lim(r→0) x1*(r) = 1
    x2* = lim(r→0) x2*(r) = 0
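The limiting process can be checked numerically from the closed-form minimizer derived above (a Python sketch; function names are mine):

```python
def interior_minimizer(r):
    # closed-form unconstrained minimizer of phi(X, r) derived above
    x1 = (r ** 0.5 + 1) ** 0.5
    x2 = r ** 0.5
    return x1, x2

def f(x1, x2):
    return (x1 + 1) ** 3 / 3 + x2

# as r -> 0 the minima move toward the constrained optimum (1, 0) from inside
trail = [interior_minimizer(r) for r in (1.0, 1e-2, 1e-4, 1e-12)]
x1, x2 = trail[-1]
```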

1. Interior penalty function method (cont.)
Normalization of constraints
Constraints of very different magnitudes should be normalized, otherwise one of them dominates the penalty term. Consider a deflection constraint and a stress constraint:

    g1(X) = δ(X) - δmax ≤ 0      (1)
    g2(X) = σ(X) - σmax ≤ 0      (2)

where δmax = 0.5 in and σmax = 20,000 psi.
At a design X1 the values might be g1 = -0.2 and g2 = -10,000, i.e. g2 is about 1 x 10⁴ times g1, so g2 would dominate the penalty term. Normalizing,

    g1(X) = δ(X)/δmax - 1 ≤ 0    (1)
    g2(X) = σ(X)/σmax - 1 ≤ 0    (2)

puts both constraints on a comparable scale.

2. Exterior penalty function method

    φ = φ(X, rk) = f(X) + rk Σ(j=1..m) ⟨gj(X)⟩^q

where rk = penalty parameter, rk > 0, and q > 0 (commonly q = 2). The bracket operator on gj(X) is

    ⟨gj(X)⟩ = max[gj(X), 0]
            = gj(X),   gj(X) > 0 (constraint is violated)
            = 0,       gj(X) ≤ 0 (constraint is satisfied)

2. Exterior penalty function method (cont.)
Example: Find X = {x1} which minimizes f(X) = x1
Subject to

    g1(X) = -x1 ≤ 0

The penalty function is

    φk = φ(X, rk) = x1 + rk [max(-x1, 0)]²

As the penalty parameter increases (rk < rk+1), the unconstrained minima of φk approach the constrained optimum from outside (the exterior of) the feasible design space.

2. Exterior penalty function method (cont.)
Algorithm:
1. Choose a starting point X1 (it need not be feasible), an initial penalty parameter r1 > 0, and set k = 1.
2. Minimize φ(X, rk) by an unconstrained method to obtain X*k.
3. Test whether X*k is the optimum of the original problem; if so, stop, otherwise go to step 4. Typical convergence criteria:

    |f(X*k) - f(X*k-1)| / |f(X*k)| ≤ ε1   or   ||X*k - X*k-1|| = [Δx1² + Δx2² + ... + Δxn²]^(1/2) ≤ ε2

4. Choose the next penalty parameter rk+1 = c rk, with c > 1.
5. Set k = k + 1, take X1 = X*k as the new starting point, and go to step 2.

2. Exterior penalty function method (cont.)
Example:
Minimize

    f(x1, x2) = (1/3)(x1 + 1)³ + x2

Subject to

    g1(x1, x2) = -x1 + 1 ≤ 0
    g2(x1, x2) = -x2 ≤ 0

The penalty function is

    φ(X, r) = (1/3)(x1 + 1)³ + x2 + r [max(0, -x1 + 1)]² + r [max(0, -x2)]²

The first-order necessary conditions:

    ∂φ/∂x1 = (x1 + 1)² - 2r [max(0, 1 - x1)] = 0
    ∂φ/∂x2 = 1 - 2r [max(0, -x2)] = 0

These equations can be written as

    min[(x1 + 1)², (x1 + 1)² - 2r(1 - x1)] = 0      (E1)
    min[1, 1 + 2r x2] = 0                           (E2)

In Eq. (E1), if (x1 + 1)² = 0, x1 = -1 (this violates the constraint), and if (x1 + 1)² - 2r(1 - x1) = 0,

    x1 = -1 - r + (r² + 4r)^(1/2)

In Eq. (E2), the only possibility is that 1 + 2r x2 = 0, x2 = -1/(2r). Thus,

    x1*(r) = -1 - r + r(1 + 4/r)^(1/2),   x2*(r) = -1/(2r)

Taking the limit as r → ∞:

    fmin = lim(r→∞) φmin(r) = 8/3
    x1* = lim(r→∞) x1*(r) = 1
    x2* = lim(r→∞) x2*(r) = 0
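The exterior limit can be checked the same way (a Python sketch of the closed forms above; function names are mine):

```python
def exterior_minimizer(r):
    # closed-form unconstrained minimizer of phi(X, r) derived above
    x1 = -1 - r + (r ** 2 + 4 * r) ** 0.5
    x2 = -1 / (2 * r)
    return x1, x2

def f(x1, x2):
    return (x1 + 1) ** 3 / 3 + x2

# as r grows, the minima approach the optimum (1, 0) from the infeasible side
x1, x2 = exterior_minimizer(1e6)
```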


Both equality and inequality constraints
Minimize

    f(x1, x2, ..., xn)

Subject to

    gj(X) ≤ 0,   j = 1, 2, ..., m
    lj(X) = 0,   j = 1, 2, ..., p

1. Interior penalty function method: the penalty function becomes

    φ = φ(X, rk) = f(X) - rk Σ(j=1..m) 1/gj(X) + (1/rk^(1/2)) Σ(j=1..p) lj²(X)

2. Exterior penalty function method: the penalty function becomes

    φ = φ(X, rk) = f(X) + rk Σ(j=1..m) ⟨gj(X)⟩² + rk Σ(j=1..p) lj²(X)

Workshop 2
Minimize

    f(x1) = 2x1

Subject to

    2 ≤ x1 ≤ 10

using 1. the interior penalty function method and 2. the exterior penalty function method (take rk = 2).

Workshop 3
Minimize

    f(x1) = x1² - 2x1 - 1

Subject to

    1 - x1 ≤ 0

using the interior penalty function method.


Term Project
Group project, based on a problem taken from a textbook or a paper. Requirements:
1. ...
2. Solution with Matlab
3. ...
4. ...
5. ...

Optimum Design
in Mechanical Engineering
Lecture 8

Finite Element-Based Optimization
1. Size (Parameter) optimization
2. Topology optimization
3. Topography optimization
4. Shape optimization

Size (Parametric) Optimization Example (Boada, et al., 2007)
An FEM model built in ANSYS, with eight thicknesses t1, t2, t3, t4, t5, t6, t7, t8 as the design variables, is coupled to a genetic algorithm optimizer. The objective and constraint functions are implicit: they are available only through the FEM analysis.

Topology Optimization Example (Kaya, et al., 2010)
A clutch fork that had failed in service (after 53,500 cycles) was redesigned using topology and shape optimization driven by the FEM results; the redesign reduced the mass by 24% and the stress by 9%.

Topology optimization workflow:
FEM model → topology optimization → size optimization
Topology optimization first determines the material layout; size optimization then refines the remaining dimensions (a 2-stage process).

Topography Optimization Example
Stiffening bead pattern of an oil pan.

Shape Optimization Example (W. Annicchiarico, M. Cerrolaza, 2001)
Shape optimization of the component achieved a reduction of about 50%.

Optimum Design
in Mechanical Engineering
Lecture 9
Multi-Objective Optimization

Multi-objective optimization (MOO)
The problems treated so far minimized a single objective function (SO). Many practical design problems, however, involve several conflicting objectives at once (multi-objective optimization, MOO). In MOO there is generally no single design that is best in every objective, so two questions arise: when is one design better than another, and is there a single best design at all?


Example: choosing a plane ticket, rated on two objectives to be minimized, travel time and price:

    Ticket   Travel Time (hrs)   Price ($)
    A        10                  1700
    B        -                   2000
    C        -                   1800
    D        7.5                 2300
    E        -                   2200

Comparing A and B, neither is clearly better: one is cheaper, the other faster. But C is both faster and cheaper than B, so C dominates B; likewise E dominates D. Tickets A, C and E cannot be ranked against one another without a trade-off decision: together they form the non-dominated solution set.


(Figure: "Plane Ticket Options", a scatter plot of Price ($) against Flight Time (hrs) over the feasible region; tickets D and B plot above C and A, showing that they are dominated.)

Pareto front
The set of non-dominated solutions is called the Pareto front, after the economist Vilfredo Pareto, and the corresponding designs form the Pareto optimal set.
(Example from economics: allocating production between two goods, Y priced at $0.80 and Z priced at $2.)


( )

A f1
6.72 B f2
6.0 (feasible region)


A B


1. (weighted sum method)

2. ( )
A, B, 1 2

(feasible region)

D fi iti off PPareto


Definition
t optimality
ti lit


Pareto optimal (1)

maximize
Pareto optimality

minimize

Pareto optimality




Example: seven candidate tire designs, rated on a comfort index and a wear-resistance index (both to be maximized):

    Design                  1     2     3     4     5     6     7
    Comfort index           10    8     5     5.5   4     3.5   3
    Wear-resistance index   2     2.25  2.5   4     4.5   6.5   8

Which designs are dominated? Only design 3: design 4 is better on both indices (comfort 5.5 > 5 and wear resistance 4 > 2.5). The remaining six designs form the Pareto curve.
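The dominance test is easy to automate; the sketch below (Python, helper names are mine) filters the table's designs, treating both indices as maximized:

```python
def dominates(a, b):
    """True if design a dominates design b (both objectives to be maximized)."""
    return (all(ai >= bi for ai, bi in zip(a, b))
            and any(ai > bi for ai, bi in zip(a, b)))

# (comfort index, wear-resistance index) for designs 1-7 from the table above
designs = {1: (10, 2), 2: (8, 2.25), 3: (5, 2.5), 4: (5.5, 4),
           5: (4, 4.5), 6: (3.5, 6.5), 7: (3, 8)}

non_dominated = [i for i, a in designs.items()
                 if not any(dominates(b, a) for j, b in designs.items() if j != i)]
# -> [1, 2, 4, 5, 6, 7]: only design 3 is dominated (by design 4)
```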


Generation of the Pareto Curve
The Pareto curve is traced by repeatedly solving scalarized single-objective problems, most commonly with the weighted sum method (varying the weights from run to run).

Selecting a Single Best Compromise Solution
Three common approaches:
1. The weighted sum method
2. The min-max (Min-Max) approach, available in Matlab as the fminimax function
3. Goal programming