
FAIPA_SAND: An Interior Point Algorithm for Simultaneous ANalysis and Design Optimization

José Herskovits*, Paulo Mappa* and Lionel Juillen**

*COPPE / Federal University of Rio de Janeiro, Mechanical Engineering Program, Caixa Postal 68503, 21945-970 Rio de Janeiro, Brazil. e-mail: jose@com.ufrj.br, Web page: http://www.pem.ufrj.br/prof/jose
**RENAULT, Research Service, 1 Av. du Golf, 78288 Guyancourt cedex, France. e-mail: Lionel.Juillen@renault.fr

1. Abstract
In the classical approach to Engineering Design Optimization, a Mathematical Program that depends on the design variables is solved. That is, the objective function and the constraints depend exclusively on the design variables. Thus, the state equation that represents the system to be designed must be solved at each iteration of the optimization process. The simultaneous analysis and optimal design technique (SAND) consists of adding the state variables to the design variables and including the state equation as additional equality constraints [6,7,8,9]. In this way, the state equation is solved at the same time as the optimization problem. We present a new algorithm for SAND optimization that solves the enlarged problem very efficiently and takes advantage of numerical tools normally included in Engineering Analysis software. It is an extension of the Feasible Arc Interior Point Algorithm, FAIPA.

2. Keywords
Design Optimization, Nonlinear Programming, Numerical Optimization, Engineering Design.

3. Introduction
We consider the Optimal Design of Engineering Systems represented by a State Equation $e(x,u) = 0$, where $e \in R^r$. The equation depends on the parameters $x \in R^n$, which we call design variables, $u \in R^r$ being the state variables. The classical model for this problem can be represented by the Nonlinear Program

$$\begin{array}{ll} \underset{x}{\text{Minimize}} & f(x, u(x)) \\ \text{subject to} & g(x, u(x)) \le 0 \\ \text{and} & h(x, u(x)) = 0, \end{array} \qquad (1)$$

where $u(x)$ solves the state equation for $x$, $f$ is the Objective Function, and $g \in R^m$ and $h \in R^p$ are the inequality and the equality constraints respectively. We assume that $f$, $g$ and $h$ are continuous, as well as their first derivatives.
Problem (1) is solved iteratively and, at each iteration, the state equation must be solved and the sensitivity of the state variables must be computed. If the solution of the state equation is itself iterative, the whole process can be very expensive. The simultaneous analysis and optimal design technique (SAND) consists of adding the state variables to the design variables and including the state equation as additional equality constraints. Then, the state equation is solved at the same time as the optimization problem. This is very advantageous in the case of nonlinear systems but, on the other hand, the size of the Mathematical Program is greatly increased. The Nonlinear Program for SAND Optimization is stated as follows:

$$\begin{array}{ll} \underset{x,\,u}{\text{Minimize}} & f(x,u) \\ \text{subject to} & g(x,u) \le 0, \\ & h(x,u) = 0 \\ \text{and} & e(x,u) = 0. \end{array} \qquad (2)$$

We present a new Nonlinear Programming algorithm for SAND optimization that solves the enlarged problem very efficiently and takes advantage of numerical tools normally included in Engineering Analysis software. It is an extension of the Feasible Arc Interior Point Algorithm, FAIPA [1,2,3].
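To make the two formulations concrete, the sketch below solves an assumed toy instance (one design variable, one state variable) both ways. The model, the numbers, and scipy's SLSQP solver are all illustrative stand-ins of ours, not the paper's algorithm.

```python
import numpy as np
from scipy.optimize import brentq, minimize

# Toy model (assumed): state equation e(x, u) = x*u - 1 = 0,
# objective f = x (say, "mass"), inequality g = u - 0.5 <= 0 (say, "compliance").

# Classical (nested) approach: solve the state equation at every constraint call.
def u_of_x(x):
    # A real analysis code would iterate here; brentq stands in for that loop.
    return brentq(lambda u: x * u - 1.0, 1e-6, 1e6)

nested = minimize(lambda x: x[0], x0=[1.0],
                  constraints=[{'type': 'ineq',
                                'fun': lambda x: 0.5 - u_of_x(x[0])}],
                  bounds=[(0.1, None)], method='SLSQP')

# SAND approach: optimize z = (x, u) jointly; e(x, u) = 0 is just a constraint.
sand = minimize(lambda z: z[0], x0=[1.0, 1.0],
                constraints=[{'type': 'eq',   'fun': lambda z: z[0]*z[1] - 1.0},
                             {'type': 'ineq', 'fun': lambda z: 0.5 - z[1]}],
                bounds=[(0.1, None), (None, None)], method='SLSQP')

print(nested.x, sand.x)   # both approach x* = 2, u* = 0.5
```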

FAIPA iterates in the primal and dual variables of the optimization problem to solve the Karush-Kuhn-Tucker optimality conditions. Given an initial interior point, it defines a sequence of interior points with the objective monotonically reduced. At each point, a feasible descent arc is obtained and an inexact line search is done along it. At each of these iterations, to compute the feasible arc, FAIPA solves three linear systems with the same matrix. There is a classical quasi-Newton version of FAIPA and also a Limited Memory quasi-Newton algorithm. In the present contribution we describe a technique that reduces the size of the linear systems and of the quasi-Newton matrix to the same magnitudes as in classical design optimization. The present method can be considered a Reduced Newton-like Algorithm. In general, reduced algorithms require feasibility of the equality constraints at each iterate. This means that, at each iteration, feasibility must be restored with an iterative procedure. That procedure is avoided in the present method.

4. FAIPA - The Feasible Arc Interior Point Algorithm
Consider now the standard Nonlinear Program:

$$\begin{array}{ll} \underset{x}{\text{Minimize}} & f(x) \\ \text{subject to} & g(x) \le 0 \\ \text{and} & h(x) = 0, \end{array} \qquad (3)$$

where $x \in R^n$, $g \in R^m$ and $h \in R^p$. The Feasible Arc Interior Point Algorithm requires an initial estimate of $x$ in the interior of the feasible region defined by the inequality constraints, and generates a sequence of points also in the interior of this set. At each iteration, FAIPA defines an arc that is feasible with respect to the inequality constraints and descent with respect to the objective, or another appropriate function. That is, we can walk along the feasible arc reducing the objective while remaining feasible. When only inequality constraints are considered, FAIPA reduces the objective at each iteration. In the complete problem, an increase of the objective may be necessary in order to have the equalities satisfied. In this contribution we consider a quasi-Newton version of FAIPA.

The Algorithm:
Parameters: $\varphi > 0$, $\xi \in (0,1)$ and $r > 0$, $r \in R^p$.
Data: Initialize $x$ feasible (interior with respect to the inequality constraints), $\lambda > 0$ and $B \in R^{n \times n}$ symmetric and positive definite.

Step 1. Computation of the direction $d$
Compute $(d_0, \lambda_0, \mu_0)$ and $(d_1, \lambda_1, \mu_1)$ by solving the linear systems

$$\begin{array}{rcl} B d_0 + \nabla g(x)\,\lambda_0 + \nabla h(x)\,\mu_0 &=& -\nabla f(x), \\ \Lambda\,\nabla g^t(x)\, d_0 + G(x)\,\lambda_0 &=& 0, \\ \nabla h^t(x)\, d_0 &=& -h(x) \end{array}$$

and

$$\begin{array}{rcl} B d_1 + \nabla g(x)\,\lambda_1 + \nabla h(x)\,\mu_1 &=& 0, \\ \Lambda\,\nabla g^t(x)\, d_1 + G(x)\,\lambda_1 &=& -\lambda, \\ \nabla h^t(x)\, d_1 &=& 0, \end{array}$$

where $\Lambda = \text{diag}(\lambda)$ and $G(x) = \text{diag}[g(x)]$. If $d_0 = 0$, stop. If $r_i \le \text{sgn}[h_i(x)]\,\mu_{0i}$ for some $i$, take $r_i > \text{sgn}[h_i(x)]\,\mu_{0i}$, $i = 1, 2, \ldots, p$, and take $\phi(x,r) = f(x) + r^t\,\text{sgn}[h(x)]\,h(x)$.
If $d_1^t \nabla\phi(x,r) > 0$, take

$$\rho = \inf\left\{ \varphi\,\|d_0\|^2;\ (\xi - 1)\, d_0^t \nabla\phi(x,r) \,/\, d_1^t \nabla\phi(x,r) \right\},$$

else

$$\rho = \varphi\,\|d_0\|^2.$$

Set

$$d = d_0 + \rho\, d_1.$$
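A minimal dense-algebra sketch of Step 1, with names and the calling convention assumed by us: both systems share one coefficient matrix, so a single factorization serves two right-hand sides. For brevity, the deflection bound uses $\nabla f$ in place of $\nabla\phi(x,r)$, which is exact in the inequality-only case.

```python
import numpy as np

def faipa_step1(B, df, g, Jg, h, Jh, lam, phi_par=1.0, xi=0.7):
    """Step 1 of FAIPA: two linear systems with the same matrix.
    Jg = grad g^t (m x n), Jh = grad h^t (p x n)."""
    n, m, p = len(df), len(g), len(h)
    K = np.block([
        [B,                 Jg.T,             Jh.T],
        [np.diag(lam) @ Jg, np.diag(g),       np.zeros((m, p))],
        [Jh,                np.zeros((p, m)), np.zeros((p, p))],
    ])
    rhs = np.column_stack([
        np.concatenate([-df, np.zeros(m), -h]),            # (d0, lam0, mu0)
        np.concatenate([np.zeros(n), -lam, np.zeros(p)]),  # (d1, lam1, mu1)
    ])
    sol = np.linalg.solve(K, rhs)        # one factorization, two right-hand sides
    d0, d1 = sol[:n, 0], sol[:n, 1]
    # deflection bound rho keeps d a descent direction of the merit function
    if d1 @ df > 0:
        rho = min(phi_par * (d0 @ d0), (xi - 1.0) * (d0 @ df) / (d1 @ df))
    else:
        rho = phi_par * (d0 @ d0)
    return d0 + rho * d1, sol[n:n + m, 0]   # d and lambda_0
```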

Step 2. Computation of the descent feasible arc
Take $\tilde\omega^I_i = g_i(x + d) - g_i(x) - \nabla g_i^t\, d$, $i = 1, \ldots, m$, and $\tilde\omega^E_i = h_i(x + d) - h_i(x) - \nabla h_i^t\, d$, $i = 1, \ldots, p$. Compute $\tilde d$, $\tilde\lambda$ and $\tilde\mu$ by solving the following linear system:

$$\begin{array}{rcl} B \tilde d + \nabla g(x)\,\tilde\lambda + \nabla h(x)\,\tilde\mu &=& 0, \\ \Lambda\,\nabla g^t(x)\,\tilde d + G(x)\,\tilde\lambda &=& -\Lambda\,\tilde\omega^I, \\ \nabla h^t(x)\,\tilde d &=& -\tilde\omega^E. \end{array}$$

Step 3. Curvilinear search
Find a step $t$ satisfying a given constrained line search criterion on the auxiliary function $\phi(x,r)$ and such that

$$g_i(x + t d + t^2 \tilde d) < 0 \ \text{ if } \lambda_i \ge 0, \quad \text{or} \quad g_i(x + t d + t^2 \tilde d) < g_i(x) \ \text{ otherwise.}$$

Step 4. Updates
Set $x := x + t d + t^2 \tilde d$ and define new values for $\lambda > 0$, $r > 0$ and $B$ symmetric and positive definite. Go back to Step 1.
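Steps 2-3 can be sketched as a simple backtracking loop along the arc. The acceptance rule below (Armijo decrease of the merit function plus the feasibility test quoted above) is our simplification of the paper's richer criterion, and all names are assumed.

```python
def restoring_terms(gfun, Jg, x, d):
    # omega-tilde of Step 2: second-order constraint variation along d
    return gfun(x + d) - gfun(x) - Jg @ d

def arc_search(phi, gfun, x, d, d_tilde, lam, slope, eta=0.1, nu=0.7):
    """Backtrack t on the arc x(t) = x + t*d + t^2*d_tilde (Step 3).
    slope = directional derivative of the merit function phi along d."""
    phi0, g0 = phi(x), gfun(x)
    t = 1.0
    while t > 1e-12:
        xt = x + t * d + t * t * d_tilde
        gt = gfun(xt)
        feasible = all(gt[i] < (0.0 if lam[i] >= 0.0 else g0[i])
                       for i in range(len(g0)))
        if feasible and phi(xt) <= phi0 + eta * t * slope:
            return t, xt
        t *= nu
    raise RuntimeError("no acceptable step found")
```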

5. FAIPA_SAND Algorithm
To simplify the presentation, we consider the SAND optimization problem with only inequality constraints:

$$\begin{array}{ll} \underset{x,\,u}{\text{Minimize}} & f(x,u) \\ \text{subject to} & g(x,u) \le 0 \\ \text{and} & e(x,u) = 0. \end{array} \qquad (4)$$

When applied to this problem, FAIPA solves the linear systems

$$\begin{bmatrix} B_{xx} & B_{xu} & \nabla_x g(x,u) & \nabla_x e(x,u) \\ B_{ux} & B_{uu} & \nabla_u g(x,u) & \nabla_u e(x,u) \\ \Lambda \nabla_x g^t(x,u) & \Lambda \nabla_u g^t(x,u) & G(x,u) & 0 \\ \nabla_x e^t(x,u) & \nabla_u e^t(x,u) & 0 & 0 \end{bmatrix} \begin{bmatrix} d_{0x} & d_{1x} & \tilde d_x \\ d_{0u} & d_{1u} & \tilde d_u \\ \lambda_0 & \lambda_1 & \tilde\lambda \\ \mu_0 & \mu_1 & \tilde\mu \end{bmatrix} = \begin{bmatrix} -\nabla_x f(x,u) & 0 & 0 \\ -\nabla_u f(x,u) & 0 & 0 \\ 0 & -\lambda & -\Lambda\tilde\omega^I \\ -e(x,u) & 0 & -\tilde\omega^E \end{bmatrix}, \qquad (5)$$

where $G(x,u) = \text{diag}[g(x,u)]$ and the three right-hand sides correspond to the systems for $(d_0, \lambda_0, \mu_0)$, $(d_1, \lambda_1, \mu_1)$ and $(\tilde d, \tilde\lambda, \tilde\mu)$ of Section 4, with $e$ playing the role of $h$.

In general, the number of degrees of freedom of the model is much larger than the number of design variables. In consequence, the size of the systems above and of the quasi-Newton matrix is greatly increased in the SAND approach. In this contribution we present a new technique that eliminates the state variables and the state equations from the linear systems and also reduces the quasi-Newton matrix to the size of the design variables.
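For concreteness, a dense sketch (argument names assumed by us) of the coefficient matrix of (5): it is of order $n + r + m + r$, against $n + m + p$ in the nested formulation, which is exactly the size growth the reduction below removes.

```python
import numpy as np

def sand_matrix(Bxx, Bxu, Bux, Buu, Jgx, Jgu, Jex, Jeu, g, lam):
    """Coefficient matrix of the systems (5).
    Jgx = grad_x g^t (m x n), Jgu = grad_u g^t (m x r),
    Jex = grad_x e^t (r x n), Jeu = grad_u e^t (r x r)."""
    n, r, m = Bxx.shape[0], Jeu.shape[0], len(g)
    Lam, G = np.diag(lam), np.diag(g)
    return np.block([
        [Bxx,        Bxu,        Jgx.T,            Jex.T],
        [Bux,        Buu,        Jgu.T,            Jeu.T],
        [Lam @ Jgx,  Lam @ Jgu,  G,                np.zeros((m, r))],
        [Jex,        Jeu,        np.zeros((r, m)), np.zeros((r, r))],
    ])
```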

Now we call

$$B = \begin{bmatrix} B_{xx} & B_{xu} \\ B_{ux} & B_{uu} \end{bmatrix}, \qquad (6)$$

$$\delta u = -\nabla_u e^{-t}(x,u)\, e(x,u) \qquad (7)$$

and

$$Du = \nabla_u e^{-t}(x,u)\, \nabla_x e^t(x,u). \qquad (8)$$

From the first system of (5), we get

$$\mu_0 = -\nabla_u e^{-t}(x,u) \left[ \nabla_u f(x,u) + B_{ux}\, d_{0x} + B_{uu}\, d_{0u} \right]$$

and

$$d_{0u} = \delta u - [Du]\, d_{0x}. \qquad (9)$$
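Computing $\delta u$ and $Du$ requires only solves with $\nabla_u e^t$, the tangent matrix the analysis code already factorizes, so one factorization serves both (7) and (8). A dense sketch with assumed names follows; a real implementation would call the analysis software's sparse solver instead of scipy's LU.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def eliminate_state(Jeu, Jex, e_val, d0x=None):
    """Eqs. (7)-(9): Jeu = grad_u e^t (r x r), Jex = grad_x e^t (r x n)."""
    lu = lu_factor(Jeu)                 # factor the analysis tangent matrix once
    delta_u = -lu_solve(lu, e_val)      # (7)
    Du = lu_solve(lu, Jex)              # (8), solved column by column
    if d0x is None:
        return delta_u, Du
    return delta_u, Du, delta_u - Du @ d0x   # (9): d0u
```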

Then, we can eliminate the state equation and the corresponding Lagrange multipliers from the first system in Eq. (5). If we define

$$M = \begin{bmatrix} I & -[Du]^t \end{bmatrix}, \qquad (10)$$

$$\bar B = M B M^t, \qquad (11)$$

$$I_x = \begin{bmatrix} I & 0 \end{bmatrix} \in R^{n \times (n+r)}, \qquad I_u = \begin{bmatrix} 0 & I \end{bmatrix} \in R^{r \times (n+r)}, \qquad (12)$$

$$B_{uu} = I_u B I_u^t \qquad (13)$$

and

$$B_{xu} = I_x B I_u^t, \qquad (14)$$

we can write the first system of Eq. (5) as

$$\begin{bmatrix} \bar B & \nabla_x g - [Du]^t \nabla_u g \\ \Lambda \left( \nabla_x g - [Du]^t \nabla_u g \right)^t & G \end{bmatrix} \begin{bmatrix} d_{0x} \\ \lambda_0 \end{bmatrix} = \begin{bmatrix} b \\ -\Lambda\, \nabla_u g^t\, \delta u \end{bmatrix}, \qquad (15)$$

where

$$b = -\nabla_x f(x,u) + [Du]^t\, \nabla_u f(x,u) - \left\{ I_x - [Du]^t I_u \right\} B I_u^t\, \delta u. \qquad (16)$$

Note that $I_x - [Du]^t I_u = M$, so the last term of (16) is $M B I_u^t\, \delta u$.

The present approach is effective only if $b$ and $\bar B$ are computed without storing $B$. Existing Reduced Algorithms restore the state equation at each iteration; then the third term on the right side of Eq. (16) is null, since $\delta u = 0$. We present a formulation that avoids this procedure, which is equivalent to solving the state equation at each iteration. Our method also avoids the storage of the quasi-Newton matrix when evaluating $\bar B$ in Eq. (15). With this object, we use the limited memory representation of quasi-Newton matrices.

5.1 Using the limited memory technique [4]
Let

$$s_k = x_{k+1} - x_k \quad \text{and} \quad y_k = \nabla f(x_{k+1}) - \nabla f(x_k). \qquad (17)$$

The direct BFGS update formula is given by

$$B_{k+1} = B_k - \frac{B_k s_k s_k^t B_k}{s_k^t B_k s_k} + \frac{y_k y_k^t}{y_k^t s_k}. \qquad (18)$$

Considering the $q$ pairs $\{s_i, y_i\}$, $i = k-q, \ldots, k-1$, we define

$$Y_k = [y_{k-q}, \ldots, y_{k-1}] \quad \text{and} \quad S_k = [s_{k-q}, \ldots, s_{k-1}].$$


Thus, taking $B_{k-q} = I$, we can write the direct BFGS update formula as

$$B_k = I - \begin{bmatrix} Y_k & S_k \end{bmatrix} \begin{bmatrix} -D_k^{1/2} & D_k^{-1/2} L_k^t \\ 0 & J_k^t \end{bmatrix}^{-1} \begin{bmatrix} D_k^{1/2} & 0 \\ -L_k D_k^{-1/2} & J_k \end{bmatrix}^{-1} \begin{bmatrix} Y_k^t \\ S_k^t \end{bmatrix}, \qquad (19)$$

where

$$D_k = \text{diag}[s_{k-q}^t y_{k-q}, \ldots, s_{k-1}^t y_{k-1}],$$

$$(L_k)_{ij} = \begin{cases} s_{k-q-1+i}^t\, y_{k-q-1+j} & \text{if } i > j, \\ 0 & \text{otherwise}, \end{cases}$$

and $J_k$ is the lower triangular matrix that satisfies $J_k J_k^t = S_k^t S_k + L_k D_k^{-1} L_k^t$. We can prove that $J_k$ exists and is nonsingular.

Then, given a vector $v$, we can evaluate the product $B_k v$ from Eq. (19), without storing $B_k$, by means of the following procedure:

i. Update $S_k$, $Y_k$ and compute $L_k$, $S_k^t S_k$ and $D_k$.
ii. Compute the Cholesky factorization $J_k J_k^t$ of $S_k^t S_k + L_k D_k^{-1} L_k^t$.
iii. Compute $p = \begin{bmatrix} Y_k^t v \\ S_k^t v \end{bmatrix}$.
iv. Perform a forward and then a backward solve to obtain

$$q = \begin{bmatrix} -D_k^{1/2} & D_k^{-1/2} L_k^t \\ 0 & J_k^t \end{bmatrix}^{-1} \begin{bmatrix} D_k^{1/2} & 0 \\ -L_k D_k^{-1/2} & J_k \end{bmatrix}^{-1} p.$$

v. Compute $B_k v = v - \begin{bmatrix} Y_k & S_k \end{bmatrix} q$.
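A self-contained sketch of steps i-v, assuming $B_{k-q} = I$ and positive curvature $s_i^t y_i > 0$ for every stored pair; class and variable names are ours. Only $q \times q$ factorizations and products with $S_k$ and $Y_k$ appear, so $B_k$ is never formed.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

class CompactBFGS:
    """Product v -> B_k v via the compact representation (19)."""
    def __init__(self, S, Y):
        self.S, self.Y = S, Y                 # n x q arrays: columns s_i, y_i
        SY = S.T @ Y
        self.d = np.sqrt(np.diag(SY))         # sqrt of D_k diagonal (s_i^t y_i > 0)
        self.L = np.tril(SY, k=-1)            # (L_k)_ij = s_i^t y_j for i > j
        # step ii: Cholesky J J^t of S^t S + L D^{-1} L^t  (q x q)
        A = S.T @ S + (self.L / self.d**2) @ self.L.T
        self.J = cholesky(A, lower=True)

    def matvec(self, v):
        S, Y, L, J, d = self.S, self.Y, self.L, self.J, self.d
        qn = len(d)
        p = np.concatenate([Y.T @ v, S.T @ v])   # step iii
        # step iv: forward solve with [[D^1/2, 0], [-L D^-1/2, J]] ...
        w1 = p[:qn] / d
        w2 = solve_triangular(J, p[qn:] + L @ (w1 / d), lower=True)
        # ... then backward solve with [[-D^1/2, D^-1/2 L^t], [0, J^t]]
        z2 = solve_triangular(J.T, w2, lower=False)
        z1 = ((L.T @ z2) / d - w1) / d
        return v - Y @ z1 - S @ z2               # step v
```

As a quick check, with a single stored pair ($q = 1$) the routine reproduces one BFGS update of the identity; in practice $q$ stays small (typically 5 to 20), so all factorizations remain tiny.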

We use this procedure to evaluate $b$ without needing to restore the equilibrium at each iteration. Given vectors $u$ and $v$, we can also use Eq. (19) to evaluate the product $u^t B_k v$:

$$u^t B_k v = u^t v - u^t W_k^t \begin{bmatrix} -D_k^{1/2} & D_k^{-1/2} L_k^t \\ 0 & J_k^t \end{bmatrix}^{-1} \begin{bmatrix} D_k^{1/2} & 0 \\ -L_k D_k^{-1/2} & J_k \end{bmatrix}^{-1} W_k v, \qquad (21)$$

where $W_k = \begin{bmatrix} Y_k^t \\ S_k^t \end{bmatrix}$. Now we denote each element of the matrix $\bar B$ by $\bar B_{ij}$ and each row of the matrix $M$ by $M_i$. Then we write the matrix $\bar B$ from Eq. (11) as

$$\bar B_{ij} = M_i B M_j^t. \qquad (22)$$
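Combining (22) with the operator above, the reduced matrix $\bar B = M B M^t$ can be assembled column by column without ever storing $B$. A sketch continuing the CompactBFGS example (names assumed, numpy imported as np):

```python
def reduced_matrix(bfgs, Du):
    """B_bar = M B M^t of Eq. (11), with M = [I  -Du^t] as in Eq. (10).
    bfgs: CompactBFGS over the (n+r)-dimensional variables (x, u)."""
    r, n = Du.shape
    Mt = np.vstack([np.eye(n), -Du])     # M^t, shape (n+r) x n
    BMt = np.column_stack([bfgs.matvec(Mt[:, j]) for j in range(n)])
    return Mt.T @ BMt                    # n x n reduced quasi-Newton matrix
```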

Thus, we can evaluate $\bar B$ without storing $B$ by means of (21). The second and third of the systems (5) produce the same reduced system, with the same matrix and with right-hand sides computed in a similar way as $b$.

6. Conclusions
We obtained a new algorithm that avoids restoring the equilibrium at each iteration and that reduces the size of the linear systems and of the quasi-Newton matrix to the same magnitudes as in classical design optimization. This formulation makes SAND optimization of nonlinear engineering systems very efficient when compared with the classical approach. Moreover, the solvers of commercial simulation codes can be employed. These solvers take advantage of the structure of the linear systems and, in general, are very efficient.

7. References
1. Herskovits J and Santos G. Feasible Arc Interior Point Algorithms for Nonlinear Optimization, Fourth World Congress on Computational Mechanics (in CD-ROM), Buenos Aires, Argentina, June-July 1998.
2. Herskovits J. A View on Nonlinear Optimization, pp. 71-116 in "Advances in Structural Optimization", J. Herskovits (Ed.), Kluwer Academic Publishers, Holland, June 1995.
3. Herskovits J. A Feasible Directions Interior Point Technique for Nonlinear Optimization, JOTA - Journal of Optimization Theory and Applications, Vol. 99, No. 1, pp. 121-146, October 1998.

4. Byrd R H, Nocedal J and Schnabel R B. Representation of Quasi-Newton Matrices and their Use in Limited Memory Methods, Technical Report CU-CS-612-92, University of Colorado, Boulder, CO, 1992.
5. Luenberger D G. Linear and Nonlinear Programming, 2nd edition, Addison-Wesley, 1984.
6. Leontiev A and Herskovits J. Interior Point Techniques for Optimal Control of Variational Inequality, Structural Optimization Research Journal, Vol. 14, No. 2/3, pp. 101-107, October 1997.
7. Herskovits J, Dias G P, Santos G and Mota Soares C M. Shape Structural Optimization with an Interior Point Mathematical Programming Algorithm, Structural and Multidisciplinary Optimization, pp. 107-115, October 2000.
8. Dias G, Herskovits J and Rochinha F. Simultaneous Shape Optimization and Nonlinear Analysis of Elastic Solids, Fourth World Congress on Computational Mechanics (in CD-ROM), Buenos Aires, Argentina, June-July 1998.
9. Herskovits J, Leontiev A, Dias G and Santos G. Contact Shape Optimization: A Mathematical Programming Approach, Applied Mechanics in the Americas, Vol. 6, pp. 385-388, AAM and ABCM, Rio de Janeiro, Brazil, 4-8 January 1999. Sixth Pan-American Congress of Applied Mechanics and Eighth International Conference on Dynamic Problems in Mechanics.
