Purdue e-Pubs
International Compressor Engineering Conference, School of Mechanical Engineering
1974

Ragsdell, K. M., "An Introduction to Optimization Methods for Engineering Design" (1974). International Compressor Engineering Conference. Paper 150.
http://docs.lib.purdue.edu/icec/150

This document has been made available through Purdue e-Pubs, a service of the Purdue University Libraries. Please contact epubs@purdue.edu for additional information.
Complete proceedings may be acquired in print and on CD-ROM directly from the Ray W. Herrick Laboratories at https://engineering.purdue.edu/Herrick/Events/orderlit.html
AN INTRODUCTION TO OPTIMIZATION METHODS FOR ENGINEERING DESIGN

K. M. Ragsdell
Assistant Professor
School of Mechanical Engineering
Purdue University
West Lafayette, Indiana 47907
ABSTRACT
straints or objectives. Optimization allows us to treat this process more formally, and with increased opportunity for success. Optimization is truly a natural design activity [2].
NONLINEAR PROGRAMMING

Nonlinear programming is a misnomer. A nonlinear program is not a computer program, although digital computers play an important role in the area. A nonlinear program is a formal design situation, much like those discussed earlier, where the objective and constraint functions are nonlinear in the design variables. Formally then, we define a nonlinear program with the following notation:

MINIMIZE:

    F(x)    (1)

SUBJECT TO,

    φ_k(x) ≥ 0,  k = 1, 2, 3, ..., K    (2)

AND,

    ψ_l(x) = 0,  l = 1, 2, 3, ..., L    (3)

where

x = a column vector of design variables, x_1, x_2, x_3, ..., x_N, where N is the number of design variables.

F(x) = a nonlinear scalar function of the design variables called the objective function.

φ_k(x) = K nonlinear constraint functions. These functions delimit regions in the design space, because of the inequalities.

ψ_l(x) = L nonlinear constraint functions. These functions vastly reduce the number of candidate designs, because they require specific combinations of the design variables.

We shall call the solution to this nonlinear program x^M, where M signifies the number of steps or iterations required to achieve the solution from a prescribed starting point, x^0. This paper very briefly examines several useful methods or algorithms for finding x^M. But before the methods, let us solidify the formulation by example.

A Simple Example

A Rectangular Storage Bin. Consider the open-top rectangular storage bin shown in Figure 2. We wish to design the bin, that is, choose the design variables a, b, and c such that a prescribed volume can be stored with minimum cost. Let us assume that cost is surface area. The nonlinear program is:

Figure 2: Open-Top Rectangular Storage Bin

MINIMIZE:

    F(x) = 2x_2(x_1 + x_3) + x_1 x_3;  x = [a, b, c]^T,  x ∈ R^3    (4)

SUBJECT TO,

    φ_k(x) ≥ 0,  k = 1, 2, 3    (5)

AND,

    ψ_1(x) = V* - x_1 x_2 x_3 = 0    (6)

where in this case the φ_k's are called non-negativity restraints. We see that N = 3, K = 3, L = 1, and V* is the prescribed volume which the solution design must store. An interesting variation on this problem occurs in the design of grain storage bins. The stored material has a prescribed angle of repose, θ, and the bin sides are allowed to slant to introduce an additional degree of design freedom.

METHODS FOR THE SINGLE VARIABLE CASE

Problems without constraints have been given considerable attention in the past. This is due in part to the fact that unconstrained problems are significantly easier to handle than the constrained variety. Later we shall see that the constrained problem can be reformulated as a sequence of unconstrained problems; hence the contemporary need for unconstrained methods. Similarly, many of these unconstrained methods call for a one-dimensional search in N-space. This, coupled with the fact that some very interesting problems can be formulated with a single degree of design freedom, motivates the need for dependable one-dimensional methods.

Bounding the Minimum

Let us seek the minimum, x*, of a function F(x) of a single variable, x, as shown in Figure 3.
Figure 3: Searching Along a Line

In most useful methods there are two distinct phases to the search; a bounding phase and a refinement phase. In the bounding phase, which we discuss here, two values of x, a and b, are sought which bracket the minimum point x*. The search is started at x_0, assuming

    x_0 < x*    (8)

and

    F(x_0) > F(x*).    (9)

Increment x,

    x_1 = x_0 + Δx_1    (10)

and check if the minimum is being approached;

    F(x_1) < F(x_0).    (11)

If (11) is not satisfied, either Δx_1 is too large or (8) is violated and we are looking in the wrong direction. Assume then that Δx_1 is sufficiently small and (11) is satisfied. Now let Δx_2 = 2Δx_1 to get

    x_2 = x_1 + Δx_2.    (12)

Now test to see if the minimum has been bounded, when:

    F(x_2) > F(x_1).    (13)

If this is true, we may stop with the knowledge that the minimum is bounded between x_0 and x_2. If (13) is violated, generate an additional point x_3,

    x_3 = x_2 + Δx_3    (14)

with Δx_3 = 2Δx_2, and test this point;

    F(x_3) > F(x_2).    (15)

In Figure 3 we see that x_3 would be the last point generated, and x* is bounded between x_1 and x_3. Recognize that only three points must be handled at a time, and recall the procedure in the following algorithmic form.

Bounding a Local Minimum

1. given x_0.
2. calculate F(x_0) = F0.
3. let Δx_1 = a small value.
4. let x_1 = x_0 + Δx_1.
5. calculate F(x_1) = F1.
6. is F1 < F0?
   yes: go to 7.
   no: let Δx_1 = Δx_1/2 and go to 4.
7. let Δx_2 = 2Δx_1.
8. let x_2 = x_1 + Δx_2.
9. calculate F(x_2) = F2.
10. is F2 < F1?
    yes: let F0 = F1, F1 = F2, x_0 = x_1, x_1 = x_2, Δx_1 = Δx_2, and go to 7.
    no: stop, x* has been bounded between x_0 and x_2.

Remember that this algorithm assumes that Δx_1 is taken in the correct direction from x_0, and a local, not global, minimum is sought.

Figure 4: A Bounded Minimum Point
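The bounding procedure translates directly into code. The sketch below is not from the paper; the test function F(x) = (x - 3)^2, the starting point, and the initial step are assumptions chosen for illustration:

```python
# Sketch of the "Bounding a Local Minimum" algorithm above.  The test
# function and all numeric values are assumptions for illustration.

def bound_minimum(F, x0, dx=0.1, shrink_limit=60):
    """Return (lo, hi) bracketing a local minimum of F in the +dx direction."""
    F0 = F(x0)
    for _ in range(shrink_limit):      # steps 3-6: halve dx until downhill
        x1 = x0 + dx
        F1 = F(x1)
        if F1 < F0:
            break
        dx /= 2.0
    else:
        raise ValueError("no descent found: wrong direction or bad start")
    while True:                        # steps 7-10: double the step each pass
        dx *= 2.0
        x2 = x1 + dx
        F2 = F(x2)
        if F2 >= F1:                   # the function turned upward: bounded
            return x0, x2
        x0, F0, x1, F1 = x1, F1, x2, F2    # shift the three-point window

lo, hi = bound_minimum(lambda x: (x - 3.0) ** 2, 0.0)
print(lo < 3.0 < hi)   # -> True
```

As in the text, only three points are held at any time, and the resulting bracket is for a local minimum in the chosen direction.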
Exhaustive Search [3,4]

Let us assume that x* is bounded from above and below as shown in Figure 4. That is, we are certain that the minimum of F(x), x*, is in the interval a to b;

    a ≤ x* ≤ b    (16)

Therefore, we have an original interval of uncertainty of b - a;

    IU(0) = b - a    (17)

Now we wish to reduce this interval of uncertainty to a tolerable level. To do so let us sample F(x) at N equally spaced interior points; a < x_i < b, i = 1, 2, 3, ..., N, with point spacing

    H = (b - a)/(N + 1)    (18)

and examine the value of F(x_i) at each. The minimum must lie within two point spacings of the best sample, so the new interval of uncertainty is

    IU(N) = 2(b - a)/(N + 1)    (19)

The fractional reduction of the original interval is;

    FR(N) = IU(N)/IU(0)    (20)

therefore

    FR(N) = {2(b - a)/(N + 1)} {1/(b - a)} = 2/(N + 1)    (21)

and for a given FR(N), N is:

    N = {2 - FR(N)}/FR(N)    (22)

Let us say that we wish to find x* ± .01. Then IU(N) = .02 and

    N = 100 IU(0) - 1    (23)

Therefore, if IU(0) = 2.0 we see that

    N = 199    (24)

to locate x* with a tolerance of .01 using the exhaustive search scheme. We can do better than this, as we see in the next section.

A Sequential Search: Interval Halving

We see that the exhaustive search scheme causes us to waste much time in unpromising regions of the original interval. A better strategy is to execute a sequence of increasingly refined searches in the most promising interval at each stage. This is the basis of the family of sequential search methods, of which interval halving is a member.

Let us adopt as a new strategy a sequence of M stages containing N sample points. At the first stage the function will be sampled at N equally spaced points and a new interval of uncertainty will be equal to 2H, where H is the original point spacing. At the second stage only this 2H interval will be considered further, with N additional sample points, and so on. The fractional reduction in the interval of uncertainty after M such stages is:

    FR(N)_M = ∏_{m=1}^{M} IU(N)_m / IU(N)_{m-1}    (25)

where IU(N)_0 = IU(0) as before. Therefore we see,

    FR(N)_M = {2/(N + 1)}^M    (26)

where

    η = MN    (27)

is the total number of function evaluations, so

    FR(N)_M = {2/(N + 1)}^{η/N}    (28)

and

    η = N ln(1/FR(N)_M) / ln[(N + 1)/2]    (29)

Now what is the value of N which will make η a minimum for a given value of FR(N)_M? Since N must be an integer, N = 3 is the best policy at each stage. So

    FR(3) = 2/(3 + 1) = 1/2    (30)

Hence, the name interval halving. Notice that something very interesting happens when interval halving is employed; the center function evaluation is saved at all stages except the first. Treating the first stage differently;

    η' = η + 1    (31)

where η' = total number of function evaluations, η = 2M, and M = number of stages, so

    η' = 2M + 1    (32)

and

    FR(3)_η' = (1/2)^{(η'-1)/2}    (33)

which gives

    η' = 1 + 2 ln(1/FR(3)_η') / ln 2    (34)
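Interval halving is short to implement. The sketch below is not from the paper; the unimodal test function and the interval are assumptions. With IU(0) = 2 and a final interval of .02 it spends the roughly 15 evaluations predicted by (34), against the 199 of the exhaustive scheme:

```python
# Interval halving: three points at a time, with the center evaluation
# reused at every stage after the first.  Test function assumed.

def interval_halving(F, a, b, tol):
    """Shrink [a, b] until b - a <= tol; return (center, evaluation count)."""
    xm = 0.5 * (a + b)
    Fm = F(xm)
    evals = 1
    while (b - a) > tol:
        x1 = 0.5 * (a + xm)            # quarter points of the current interval
        x2 = 0.5 * (xm + b)
        F1, F2 = F(x1), F(x2)
        evals += 2
        if F1 < Fm:
            b, xm, Fm = xm, x1, F1     # minimum lies in [a, old center]
        elif F2 < Fm:
            a, xm, Fm = xm, x2, F2     # minimum lies in [old center, b]
        else:
            a, b = x1, x2              # minimum lies in [x1, x2]; center kept
    return xm, evals

x_star, evals = interval_halving(lambda x: (x - 0.3) ** 2, -1.0, 1.0, 0.02)
print(evals, abs(x_star - 0.3) < 0.02)   # -> 15 True
```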
Therefore, to achieve a fractional reduction of .01 as before, η' ≅ 15 function evaluations will be required using the interval halving scheme, which compares quite favorably with the 199 required before. Additionally, the logic of interval halving is easily automated, because we deal with only three points at a time.

METHODS FOR THE UNCONSTRAINED CASE

Now let us consider the situation where there are N design variables and no constraints, so K = L = 0 in (2) and (3). The situation becomes more complex, and much of the thinking of the previous section does not directly extrapolate. Almost all methods have two phases; choice of a direction in N-space and minimization in this direction, choice of a new direction, etc. We shall discuss here a few methods which have proven to be dependable.

The univariate search then proceeds to the next variable, x_2, in similar fashion,

    (38)

and (35) is applied. This process continues until all variables have been incremented, after which a pattern direction is established. One trial is performed in the pattern direction, as shown in the figure, before returning to the univariate searching. The process terminates when no progress can be made for even small Δx's. The simplicity of the method speaks for itself.

Cauchy [6]

Cauchy's method is often called the method of steepest descent, since it employs the local gradient in the move logic. Strictly speaking, an entire class of methods exists employing the local gradient. Consider a family of trial points, x̄:

    x̄ = x - α g(x)    (39)

where g(x) is the local gradient of F and α is a positive step-length parameter.
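The family (39) is easy to sketch with a fixed step length α; the quadratic test function, its gradient, and the step size below are assumptions for illustration (a practical code would instead minimize along the gradient direction at each step):

```python
# Minimal sketch of eq. (39), x_bar = x - a*g(x), iterated with fixed a.

def steepest_descent(grad, x, a=0.1, steps=100):
    """Repeatedly step against the local gradient (Cauchy's method)."""
    for _ in range(steps):
        g = grad(x)
        x = [xi - a * gi for xi, gi in zip(x, g)]
    return x

# F(x) = (x1 - 1)^2 + 2*(x2 + 2)^2 has its minimum at (1, -2)
grad_F = lambda x: [2.0 * (x[0] - 1.0), 4.0 * (x[1] + 2.0)]
x_min = steepest_descent(grad_F, [5.0, 5.0])
print([round(v, 3) for v in x_min])   # -> [1.0, -2.0]
```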
Searching a Line in N-Space

The previous and following methods use a common move prescription in N-space,

    x̄ = x - α P(x)    (41)

where P(x) is the direction of search, and is different for each method. Each method calls for a single variable search along a line in the N-dimensional design space defined by P(x). Using the previous discussion the following algorithm is useful:

Line Search in N-Space

1. set α = 0.
2. calculate F(x̄(α)) = F0.
3. set α = a small value.
4. let x̄ = x - α P(x).
5. calculate F(x̄(α)) = F1.
6. is F1 < F0?
   yes: go to 7.
   no: let α = α/2 and go to 4.
7. set α = 2α.
8. let x̄ = x - α P(x).
9. calculate F(x̄(α)) = F2.
10. is F2 < F1?
    yes: let F0 = F1, F1 = F2, and go to 7.
    no: α* has been bounded; begin interval halving.

Fletcher-Reeves [7] and Davidon-Fletcher-Powell [8]

These methods use the move prescription given in (41), with the search direction P(x) given below. In the Fletcher-Reeves method the search direction is chosen to maintain conjugacy of the sequence of directions:

    P(x̄) = g(x̄) + {g(x̄)^T g(x̄) / g(x)^T g(x)} P(x)    (45)

where we consider three contiguous points in the search, and P(x̄) is built from the current gradient and the previous search direction. This method possesses the property called quadratic convergence. Given a generalized quadratic function:

    F(x) = a + b^T x + (1/2) x^T H x    (46)

where

a = a scalar,
b = a column vector of constants,
H = an N x N matrix of constants,

the Fletcher-Reeves method will find the solution, x^M, in N steps or less from an arbitrary starting point, x^0.

The Davidon-Fletcher-Powell method changes the search direction by updating the matrix A:

    Ā = A + (δx δx^T)/(δx^T δg) - (A δg δg^T A)/(δg^T A δg)    (47)

where

A = the metric at the last step,
δx = the previous step = x̄ - x,
δg = the gradient change = g(x̄) - g(x).

The method begins with A as any positive definite matrix, such as the identity matrix, I. It is interesting to note that for a quadratic function with A = I at the start, the variable metric method performs exactly as the conjugate gradient method. Also, if A = H^-1 at the start, one step convergence is achieved for a quadratic function, because the Newton method is generated. For a nonquadratic function A approaches H^-1 in a finite number of steps. Restarting is necessary when the positive definiteness of the A matrix is violated. The method is quadratically convergent. A disadvantage is the need to carry a full N x N matrix throughout the search.

In Miele's memory gradient method the move prescription involves the previous step, δx = x̄ - x, and α and β are free parameters to be found at each step. A closer look at conjugate gradient algorithms than space will allow is necessary to see the connection with Miele's method. Briefly, there are really two free parameters at each point in the conjugate gradient method. One of these
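The quadratic-convergence claim for Fletcher-Reeves can be checked on a two-variable function of the form (46). The sketch below is not from the paper; H, b, the starting point, and the closed-form step length for quadratics are assumptions:

```python
# Fletcher-Reeves directions, eq. (45), on F(x) = 0.5*x'Hx - b'x, a quadratic
# of the form (46) with gradient g(x) = Hx - b.  With exact line minimization
# the solution of Hx = b is reached in N = 2 steps.

def fletcher_reeves_quadratic(H, b, x, n_steps):
    mat_vec = lambda M, v: [sum(M[i][j] * v[j] for j in range(len(v)))
                            for i in range(len(v))]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    g = [gi - bi for gi, bi in zip(mat_vec(H, x), b)]
    P = g[:]                                        # first direction: gradient
    for _ in range(n_steps):
        alpha = dot(g, g) / dot(P, mat_vec(H, P))   # exact step on a quadratic
        x = [xi - alpha * Pi for xi, Pi in zip(x, P)]
        g_new = [gi - bi for gi, bi in zip(mat_vec(H, x), b)]
        beta = dot(g_new, g_new) / dot(g, g)        # the ratio in eq. (45)
        P = [gi + beta * Pi for gi, Pi in zip(g_new, P)]
        g = g_new
    return x

H = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x_sol = fletcher_reeves_quadratic(H, b, [0.0, 0.0], 2)   # N = 2 steps
print([round(v, 6) for v in x_sol])   # -> [0.090909, 0.636364]
```

Two steps reproduce the exact solution x = [1/11, 7/11] of Hx = b, consistent with the N-step claim.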
parameters is selected assuming a quadratic objective function. The other parameter, α, is found directly to help accommodate the nonquadratic effects. In the memory gradient method, no such concession is made to quadratic objective functions. Published numerical results indicate an increase in the rate of convergence over the Fletcher-Reeves method, especially in the initial stages of the search. The move prescription is very simple, but requires a two dimensional search at each generic point, x. This two dimensional search is a major practical disadvantage of the method.

METHODS FOR THE CONSTRAINED CASE

In the previous discussion we have seen that in most practical engineering design problems the variables are constrained in some way. These constraints normally appear in one of two forms; inequalities or equalities, as given in (2) and (3). Many very practical design problems contain variables which must take on discrete values, and other problems possess variables which exhibit randomness. These two latter complications are of tremendous practical importance, but will be ignored here. We consider here three fundamentally different procedures for handling the constrained problem.

The Penalty Function [9]

Consider the following function

    P(x) = F(x) + Ω(x, R, φ, ψ)    (49)

which we call a generalized weighted penalty function. Ω is the penalty term, and is constructed so that feasible points (points which satisfy the constraints) are given favored treatment. Also we wish to choose Ω such that the unconstrained minimum of P(x) can be easily found, and has a solution which is related to the solution to the constrained problem. Choosing Ω is very much an art, and much work remains to be done in this area. A penalty term which has proven to be useful is:

    Ω(x, R, φ, ψ) = R^2 Σ_{k=1}^{K} {1/φ_k(x)} + (1/R) Σ_{l=1}^{L} {ψ_l(x)}^2    (50)

where R, the penalty parameter, is chosen as a small positive value, say 1.0, to start, and decreased toward zero after each unconstrained minimization. We assume that the starting point, x_0, is feasible with respect to the inequality constraints, and notice that the weight of the inequality violations is decreased, and the weight of the equality violations is increased, after each minimization phase. The major advantage of the penalty approach is the ease of usage. We have taken a relatively complex problem and broken it down into a sequence of more simple unconstrained problems, which can be solved with the previously discussed methods.

Currently a great deal of effort is being expended in the development of penalty functions [10, 11], and this author expects methods to appear soon which will be vastly superior to existing methods, especially for large problems.

The Griffith and Stewart Method [12]

This method is formulated to take advantage of the very powerful linear programming methods currently available, such as the Revised Simplex method [13]. Basically the method works as follows. Given a feasible starting point, x_0, expand each of the functions, F(x), φ_k(x), k = 1, 2, 3, ..., K, ψ_l(x), l = 1, 2, 3, ..., L, about the point x_0, retaining only linear terms. Solve the linear program which results, then expand and linearize about the new point, etc. In order to obtain any success with the method, the changes in the design variables must be limited at each phase. The Revised Simplex Method will return a point which resides on at least two of the linearized constraints, and which may not be in the neighborhood of the current point. Therefore, the method must be forced to take small steps. The method has proven to be effective for problems where the objective and constraint functions are nearly linear. The method will often be very slow, or not converge at all, for very nonlinear problems.

The Constrained Derivative Method [14, 15]

This method was developed to handle the nonlinear problem with equality constraints, but can be used for the full problem with the introduction of slack variables in each of the inequality constraints such that they appear as transformed equality constraints. This amounts to a bookkeeping difficulty which we will not consider further here.

Divide the design variables, x, into two classes called the state and decision variables;
    x = [y, z]^T    (51)

where the decision variables are completely free, and the state variables are slaves used to satisfy the constraints. That is, whenever a decision variable is changed all state variables are adjusted to maintain feasibility. Now, the constrained derivative (also known as the reduced gradient) is the rate of change of the objective function with respect to small changes in the decision variables, with the state variables adjusted to maintain feasibility. The strategy then is to find a set of decision variables which make the constrained derivative zero. Any of the unconstrained methods can be used to achieve this goal. The constrained derivative method is probably the most powerful method currently known for the nonlinear programming problem.

REFERENCES

3. Mischke, C. R., An Introduction to Computer-Aided Design, Prentice-Hall, 1968.

4. Wilde, D. J., Optimum Seeking Methods, Prentice-Hall, 1964.

5. Hooke, R. and Jeeves, T. A., "Direct Search Solution of Numerical and Statistical Problems," JACM, 8(2), 212-229, 1961.

6. Cauchy, A., "Methode generale pour la resolution des systemes d'equations simultanees," Comptes Rendus de L'Academie des Sciences, 25, 536-538, 1847.

7. Fletcher, R. and Reeves, C. M., "Function Minimization by Conjugate Gradients," Computer Journal, vol. 7, no. 2, 1964.
Methods for Engineering Design," ASME paper no. 73-DET-17, June, 1973.

18. Ragsdell, K. M., National Science Foundation: Research Initiation Grant, GK-42162, "Optimization Techniques for Engineering Design," April, 1974.