
20th European Symposium on Computer Aided Process Engineering – ESCAPE20

S. Pierucci and G. Buzzi Ferraris (Editors)


© 2010 Elsevier B.V. All rights reserved.

A particle swarm optimization for solving NLP/MINLP process synthesis problems
Ruben Ruiz-Femenia and Jose A. Caballero
Department of Chemical Engineering, University of Alicante, Ap. 99, E-03080 Alicante,
Spain. Ruben.Ruiz@ua.es

Abstract
A hybrid particle swarm optimization (PSO) algorithm that includes two alternative
gradient-based methods for handling constraints is proposed to solve process
synthesis and design problems involving continuous and binary variables and
equality and inequality constraints (mixed-integer non-linear programming, MINLP,
problems). The first constraint-handling method uses the Newton-Raphson (NR)
algorithm, and the second transforms the search for the feasible region into a
constrained simulation problem (CSP). The efficiency of both hybrid PSO algorithms
has been tested and compared with that of the original PSO method. Both hybrid
algorithms achieve the global optimum of a small planning problem chosen as a case
study.
Keywords: Process synthesis, Mixed-Integer Nonlinear Programming, Global
Optimization, Particle Swarm Optimization

1. Introduction
Many systems in chemical engineering are challenging for optimization methods
because they are highly non-linear, multivariable and involve non-convex functions
(Banga & Seider, 1996). Under these conditions, conventional equation-solving and
gradient-based local optimization algorithms offer no guarantee of finding global
solutions. In fact, they may be ill-suited for numerical calculations owing to the
presence of trivial solutions, local optima, or divergence after a significant
number of iterations. Stochastic optimization methods have proven highly useful
because they require neither continuity nor other assumptions about the
optimization problem, are reliable and efficient, and can cope with multiple local
minima. Particle Swarm Optimization (PSO) is a stochastic population-based
method for solving global optimization problems (Kennedy & Eberhart, 1995). PSO can
be easily implemented and it is computationally inexpensive, since its memory and CPU
speed requirements are low (Parsopoulos & Vrahatis, 2002). Furthermore, it does not
require gradient information of the objective function, can escape local optima, and
is well suited to non-linear problems. The original PSO was developed for
continuous optimization problems; however, many practical chemical engineering
problems are formulated as combinatorial optimization problems. The discrete binary
version of PSO developed by Kennedy and Eberhart (1997) can be applied in that case.

2. Problem Statement
A large number of process synthesis, design and control problems in chemical
engineering can be modeled as mixed integer nonlinear programming (MINLP)
problems. The general statement of a MINLP is:

$$\begin{aligned}
\min \quad & f(x, y) && (1) \\
\text{s.t.} \quad & h(x, y) = 0 && (2) \\
& g(x, y) \le 0 && (3) \\
& x^{L} \le x \le x^{U} && (4)
\end{aligned}$$

where $x \in \mathbb{R}^{n}$, $y \in \{0,1\}^{m}$, $f: \mathbb{R}^{n+m} \to \mathbb{R}$, $g: \mathbb{R}^{n+m} \to \mathbb{R}^{p}$ and $h: \mathbb{R}^{n+m} \to \mathbb{R}^{q}$.

In chemical engineering, the equality constraints, Eq. (2), are usually the
steady-state mass, energy and momentum balances, and the inequality constraints,
Eq. (3), are related to design and process specifications.

3. Solution Approach
3.1. The original particle swarm optimization algorithm
The PSO algorithm is inspired by the social behavior of bird flocking and fish
schooling, and by swarm theory. PSO shares many similarities with other
evolutionary computation techniques such as genetic algorithms (GA): the system is
initialized with a population of random solutions and searches for optima by
updating generations. However, unlike GA, PSO has no evolution operators such as
crossover and mutation. In PSO, the individuals, called particles, fly through a
multidimensional search space. Each particle is regarded as a point in a
D-dimensional space that adjusts its position according to its own experience and
that of other particles, making use of the best positions encountered by itself and
by the swarm.
In recent years, PSO has been successfully applied in many research and application
areas. One of the reasons that makes PSO so attractive is that it has few
parameters to adjust. For each particle $i$, the position vector $x_i$ is updated
according to:

$$x_i^{k+1} = x_i^k + v_i^{k+1} \qquad (5)$$

and the pseudo-velocity vector $v_i$ is calculated in the following manner:

$$v_i^{k+1} = w^k v_i^k + c_1 r_1 \left( p_i^k - x_i^k \right) + c_2 r_2 \left( p_k^{\text{global}} - x_i^k \right) \qquad (6)$$

The particle's new velocity $v_i^{k+1}$ is calculated from its previous velocity
$v_i^k$ and the distances of its current position $x_i^k$ from its own best
position $p_i^k$ and the group's best position $p_k^{\text{global}}$. The index $k$
indicates a pseudo-time increment; $r_1$ and $r_2$ are two uniform random vectors
whose elements lie between 0 and 1; $c_1$ and $c_2$ are two positive constants,
called the cognitive and social parameters, respectively; and $w$ is the inertia
weight.
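As a minimal illustration (a NumPy sketch rather than the authors' Matlab code; the function name and the common defaults $c_1 = c_2 = 2$ are our own assumptions, not the paper's settings), one swarm update of Eqs. (5)-(6) can be written as:

```python
import numpy as np

def pso_update(x, v, p_best, p_global, w, c1=2.0, c2=2.0, rng=None):
    """One update of Eqs. (5)-(6) for a whole swarm.

    x, v, p_best: (n_particles, dim) arrays; p_global: (dim,) array.
    """
    rng = rng or np.random.default_rng()
    r1 = rng.random(x.shape)   # uniform random vectors, elements in [0, 1]
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (p_global - x)  # Eq. (6)
    return x + v_new, v_new                                            # Eq. (5)
```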
In the discrete binary version of PSO (Kennedy & Eberhart, 1997), each component
$l$ of the vector $y_i$, which contains the binary variables associated with
particle $i$, is updated according to:

$$y_l^{i,k} = \begin{cases} 1 & \text{if } r_{3,l} \le \operatorname{sig}(v_l^{i,k}) \\ 0 & \text{if } r_{3,l} > \operatorname{sig}(v_l^{i,k}) \end{cases} \qquad (7)$$

where $r_3$ is a vector of random numbers between 0 and 1, and
$\operatorname{sig}(x)$ is the sigmoid function, commonly used in neural networks:

$$\operatorname{sig}(x) = \frac{1}{1 + e^{-x}} \qquad (8)$$
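A corresponding sketch of the binary update of Eqs. (7)-(8) (again an illustrative NumPy version, not the authors' implementation):

```python
import numpy as np

def binary_pso_update(v, rng=None):
    """Discrete update of Eq. (7): bit l is set to 1 with probability sig(v_l).

    v: array of pseudo-velocities for the binary components.
    """
    rng = rng or np.random.default_rng()
    sig = 1.0 / (1.0 + np.exp(-v))      # sigmoid function of Eq. (8)
    r3 = rng.random(v.shape)            # random thresholds in [0, 1]
    return (r3 <= sig).astype(int)
```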
The original PSO algorithm can obtain more accurate results faster than other
stochastic methods, but it is less successful at solving MINLP problems because it
handles the constraints by penalizing infeasible solutions.
3.2. Hybrid optimization algorithm for solving non-convex NLP/MINLP problems with
constraints
We propose a hybrid optimization algorithm built from the association of global and
local optimization methods. The basic idea of this approach is to merge the fast
convergence of gradient-based optimization methods with the wide exploration
ability of population-based ones. To achieve this, we propose a PSO algorithm for
continuous and binary variables with two alternative deterministic methods for
handling constraints: the Newton-Raphson method (HPSO-NR) and a method that solves
the constraints as a simulation problem (HPSO-CSP). The algorithm with the two
gradient-based methods is illustrated in Figure 1. Both gradient-based methods
first require decomposing the continuous and binary variables into dependent
($x_D$, $y_D$) and independent ($x_I$, $y_I$) variables. Then, at each iteration
$k$ of the PSO algorithm, the values of the independent variables are fixed
($x_I^k$, $y_I^k$) and the inner constraint-solving method starts to iterate.
Performing a few iterations ($p$) of this method is sufficient to approximate the
feasible region defined by the constraints. This modification reduces the
computation time, especially during the first iterations of the PSO method, when
the solution of the MINLP problem is far from the global optimum and searching for
the exact solution of the system defined by the constraints can be very
time-consuming. After $p$ iterations of the inner method, the constraints are
solved for the continuous dependent variables $x_D^{k,p}$, and the PSO algorithm
continues searching for new positions for all the particles. Both the hybrid PSO
algorithm using Newton-Raphson for handling constraints (HPSO-NR) and the hybrid
PSO algorithm solving the constraints as a simulation problem (HPSO-CSP) have been
implemented in vectorized Matlab® code.
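The following toy example sketches this outer/inner structure for a purely continuous problem (the binary update of Eq. (7) and the slack variables are omitted for brevity; the problem, minimizing $(x_1-1)^2 + (x_2-2)^2$ subject to $x_1 + x_2 - 2 = 0$ with $x_2$ treated as the dependent variable, is our own illustration and not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x1, x2):                        # toy objective, not from the paper
    return (x1 - 1.0) ** 2 + (x2 - 2.0) ** 2

def inner_solve(x1, x2, p=3):
    """p Newton-Raphson iterations on h(x1, x2) = x1 + x2 - 2 = 0 for the
    dependent variable x2 (dh/dx2 = 1), with x1 fixed by the PSO."""
    for _ in range(p):
        x2 = x2 - (x1 + x2 - 2.0)
    return x2

n, iters = 20, 50
x = rng.uniform(-5.0, 5.0, n)         # independent variable x1, one per particle
v = np.zeros(n)
xd = np.zeros(n)                      # dependent variable x2
p_best, p_best_f = x.copy(), np.full(n, np.inf)
g_best, g_best_f = 0.0, np.inf
for k in range(iters):
    w = 0.9 - 0.7 * k / iters         # inertia decreasing from 0.9 to 0.2
    xd = inner_solve(x, xd)           # approximately restore feasibility
    fx = f(x, xd)
    improved = fx < p_best_f
    p_best[improved], p_best_f[improved] = x[improved], fx[improved]
    if fx.min() < g_best_f:
        g_best, g_best_f = x[fx.argmin()], fx.min()
    r1, r2 = rng.random(n), rng.random(n)
    v = w * v + 2.0 * r1 * (p_best - x) + 2.0 * r2 * (g_best - x)  # Eq. (6)
    x = x + v                                                      # Eq. (5)
print(g_best, inner_solve(g_best, 0.0))   # expect about x1 = 0.5, x2 = 1.5
```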
3.2.1. Newton-Raphson for handling constraints (NR)
The first iterative deterministic method to handle the constraints employs $p$
iterations of the Newton-Raphson algorithm (Quarteroni et al., 2007) to find the
numerical solution of the system formed by the equality constraints, Eq. (2), and
the new equality constraints obtained by introducing slack variables ($x_S$) into
the original inequality constraints, Eq. (3):

$$\left. \begin{aligned} h_i(x, y) &= 0 \\ g_j(x, y) + x_{S,j}^2 &= 0 \end{aligned} \right\} \;\rightarrow\; H\!\left([x, x_S], y\right) = 0 \qquad (9)$$

By solving the system defined by Eq. (9), new values of the continuous dependent
variables ($x_D^{k,p}$) are obtained, which satisfy
$H\!\left([x_D^{k,p}, x_I^k], y^k\right) \approx 0$.
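A sketch of this inner step on a hypothetical two-variable system (our own toy example; a forward-difference Jacobian stands in here for the complex-step approximation of Section 3.2.3):

```python
import numpy as np

def newton_p_iterations(H, z0, p=5, eps=1e-7):
    """Run p Newton-Raphson iterations on H(z) = 0, Eq. (9), where z stacks
    the dependent continuous variables x_D and the slack variables x_S."""
    z = np.asarray(z0, dtype=float)
    for _ in range(p):
        Hz = H(z)
        J = np.empty((Hz.size, z.size))
        for j in range(z.size):        # forward-difference Jacobian column
            dz = np.zeros_like(z)
            dz[j] = eps
            J[:, j] = (H(z + dz) - Hz) / eps
        z = z - np.linalg.solve(J, Hz)
    return z

# Toy system with x1 fixed by the PSO at 0.5: equality x1 + x2 - 2 = 0 and
# inequality x2 - 2 <= 0 rewritten as x2 - 2 + s**2 = 0 with slack s.
x1 = 0.5
H = lambda z: np.array([x1 + z[0] - 2.0, z[0] - 2.0 + z[1] ** 2])
print(newton_p_iterations(H, [1.0, 1.0]))   # x2 -> 1.5, s -> sqrt(0.5)
```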

Figure 1. Flow chart of the hybrid optimization algorithm for solving MINLP.

3.2.2. Constraints solved as a constrained simulation problem (CSP)


The second deterministic method uses the algorithm proposed by Bullard and Biegler
(1991) to solve the simulation problem defined by the equality constraints,
Eq. (2), subject to the inequality constraints, Eq. (3), and the variable bounds,
Eq. (4), which is obtained by removing the objective function, Eq. (1), from the
original optimization problem, Eqs. (1)-(4). According to Bullard and Biegler
(1991), this constrained simulation problem (CSP) can be written, by adding
auxiliary variables and constructing a new objective function, as:

$$\begin{aligned} \min_{x,\, p_i,\, n_i,\, s_j} \quad & \sum_i \left( p_i + n_i \right) + \sum_j s_j && (10) \\ \text{s.t.} \quad & h(x) = p - n && (11) \\ & g(x) \le s && (12) \\ & x^L \le x \le x^U, \quad p_i,\, n_i,\, s_j \ge 0 \end{aligned}$$
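On the same toy system as above, the reformulation of Eqs. (10)-(12) can be sketched as follows. Note that Bullard and Biegler (1991) solve this problem with iterated linear programming; purely for illustration, a generic NLP solver is used here, and all names are our own:

```python
import numpy as np
from scipy.optimize import minimize

# z = [x2, p, n, s]: p and n absorb the equality residual and s the
# inequality violation, so a zero objective certifies a feasible point.
x1 = 0.5                                 # independent variable fixed by the PSO
obj = lambda z: z[1] + z[2] + z[3]       # Eq. (10)
cons = [
    {"type": "eq",   "fun": lambda z: x1 + z[0] - 2.0 - (z[1] - z[2])},  # Eq. (11)
    {"type": "ineq", "fun": lambda z: z[3] - (z[0] - 2.0)},              # Eq. (12)
]
bounds = [(0.0, 5.0), (0.0, None), (0.0, None), (0.0, None)]
res = minimize(obj, x0=np.ones(4), method="SLSQP", bounds=bounds, constraints=cons)
print(res.x[0], res.fun)                 # x2 -> 1.5 with objective -> 0
```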

3.2.3. Calculation of the Jacobian Matrix


Both methods for handling the constraints require the calculation of the Jacobian
matrix, whose explicit computation is often very expensive. To bypass this problem,
we use an approximation based on complex variables (Squire & Trapp, 1998). In
matrix notation, each column $j$ of the Jacobian matrix $J$ can be written as:

$$\left( J_H \right)_j \approx \frac{\operatorname{Im}\!\left[ H(x + i h_j e_j) \right]}{h_j}, \qquad i = \sqrt{-1} \qquad (13)$$

where $\operatorname{Im}[H]$ denotes the imaginary part of the function $H$, $e_j$
is the $j$-th vector of the canonical basis of $\mathbb{R}^n$, and $h_j > 0$ are
increments chosen appropriately at each step of the iteration.
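A sketch of Eq. (13) in NumPy (illustrative; the paper's implementation is in Matlab). The key property is that no subtraction of nearby values occurs, so the step $h$ can be taken extremely small without round-off error, provided $H$ accepts complex arguments:

```python
import numpy as np

def complex_step_jacobian(H, x, h=1e-20):
    """Complex-step approximation of the Jacobian of H at x, Eq. (13)."""
    x = np.asarray(x, dtype=complex)
    Hx = np.atleast_1d(H(x))
    J = np.empty((Hx.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += 1j * h                  # step i*h along the basis vector e_j
        J[:, j] = np.imag(H(xp)) / h     # Im[H(x + i*h*e_j)] / h
    return J

# Check against a Jacobian known in closed form: [[x2, x1], [cos(x1), 0]].
H = lambda x: np.array([x[0] * x[1], np.sin(x[0])])
print(complex_step_jacobian(H, [1.0, 2.0]))
```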

4. Case study
The computational performance of the two hybrid PSO algorithms has been
investigated using a small planning problem taken from Kocis and Grossmann (1987).
Figure 2 shows the superstructure, which contains several alternatives for
obtaining product C. The objective is to maximize the profit, considering that the
desired product C is produced from chemical B, which can either be purchased from
the market as a raw material or obtained as an intermediate from another raw
material, A. There are two alternative paths for producing B from A. The model for
the superstructure can be formulated as a MINLP problem. It contains three binary
variables, eight continuous variables, five equality constraints (two non-linear),
three inequality constraints and two upper bounds:
$$\begin{aligned} \min \quad z = {} & \tfrac{7}{2} y_1 + y_2 + \tfrac{3}{2} y_3 + 7 b_1 + b_2 + \tfrac{6}{5} b_3 + \tfrac{9}{5} a - 11 c \\ \text{s.t.} \quad & \left. \begin{aligned} b_2 - \ln(1 + a_2) &= 0 \\ b_3 - \tfrac{6}{5} \ln(1 + a_3) &= 0 \\ c - \tfrac{9}{10}\, b &= 0 \\ -b + b_1 + b_2 + b_3 &= 0 \\ a - a_2 - a_3 &= 0 \end{aligned} \right\} \qquad \begin{aligned} b - 5 y_1 &\le 0 \\ a_2 - 5 y_2 &\le 0 \\ a_3 - 5 y_3 &\le 0 \end{aligned} \qquad (14) \\ & y \in \{0,1\}^3, \quad c \le 1, \quad b_2 \le 5, \quad a, a_2, a_3, b, b_1, b_2, b_3, c \ge 0 \end{aligned}$$
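As a quick consistency check of the model as transcribed above (reading, e.g., the third equality as $c - 0.9\,b = 0$), the optimal solution reported in Section 5 can be substituted into Eq. (14):

```python
from math import log

# Reported optimum (see Section 5), substituted into Eq. (14) as reconstructed.
y1, y2, y3 = 1, 0, 1
a, a2, a3 = 1.52420440, 0.0, 1.52420440
b, b1, b2, b3, c = 1.11111111, 0.0, 0.0, 1.11111111, 1.00000000

z = 3.5*y1 + y2 + 1.5*y3 + 7*b1 + b2 + 1.2*b3 + 1.8*a - 11*c
print(z)                                              # about -1.9231
print(b3 - 1.2*log(1 + a3), c - 0.9*b, a - a2 - a3)   # residuals near zero
```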

Figure 2. Superstructure of the case study.

5. Results and discussion


The best result from each hybrid PSO algorithm has been compared with the global
optimum (obtained by solving Eq. (14) in GAMS 23.2 with the BARON solver):

$$\begin{pmatrix} y_1, y_2, y_3 \\ a, a_2, a_3, b, b_1, b_2, b_3, c \\ z \end{pmatrix} = \begin{pmatrix} 1, 0, 1 \\ 1.52420440,\; 0,\; 1.52420440,\; 1.11111111,\; 0,\; 0,\; 1.11111111,\; 1.00000000 \\ -1.92309874 \end{pmatrix}$$
Each algorithm has been tested by performing 100 runs. For each run, the swarm size
has been set to 50 particles and the maximum number of iterations to 200. The
inertia weight $w$ has been gradually decreased from 0.9 to 0.2. For the inner
constraint-handling method, the number of iterations has not been kept constant; it
increases from 2 to 10 in order to reduce the computation time for solving the
constraints during the first iterations of the outer PSO algorithm, when the
solution is far from the global optimum.
Table 1 compares the best solutions obtained by the original PSO, the hybrid PSO
with the Newton-Raphson method for handling constraints (HPSO-NR) and the hybrid
PSO handling the constraints as a simulation problem (HPSO-CSP). In Table 1, NFE
and NRC denote, respectively, the average number of objective function evaluations
(the product of the population size and the number of iterations) and the
percentage of the 100 runs that converged to the global optimum. We observe that
the two hybrid PSO algorithms achieve solutions very close to the global optimum
(HPSO-CSP reaches the exact solution), whereas the original PSO cannot converge to
the optimal solution. The difference in convergence capability between HPSO-NR and
HPSO-CSP is slight: over the 100 executions, HPSO-NR converged to the global
optimum in 88% of the runs, while HPSO-CSP converged in 84%. Regarding
computational expense, HPSO-CSP demands more computation time than HPSO-NR: the
average time for one run is 2 minutes for HPSO-CSP, but only 45 seconds for HPSO-NR
(on a 3.0 GHz Intel Pentium 4 processor with 1 GB RAM).
Table 1. Results using the different algorithms.

                 Optimum       Original PSO   HPSO-NR       HPSO-CSP
NFE              -             10000          4500          3800
NRC (%)          -             -              88            84
Best solution    -1.92309874   -1.20914622    -1.92308776   -1.92309874

6. Conclusions and future work


The two hybrid PSO algorithms developed are able to solve an optimization problem
containing continuous and binary variables and equality and inequality constraints,
in contrast to the original PSO, which cannot converge to the optimal solution. The
computation time demanded by HPSO-CSP is higher than that of the HPSO-NR algorithm.
Future work will investigate how to implement the hybrid PSO algorithms using
parallel computing in order to significantly decrease the computation time. The aim
is to design a method efficient enough for large-scale, complicated real-world
applications.
7. Acknowledgements
The authors gratefully acknowledge the financial support from the “Ministerio de
Ciencia e Innovación” of Spain under project CTQ2009-14420-C02-02.
References
Banga, J. R., & Seider, W. D. (1996). Global optimization of chemical processes
using stochastic algorithms. In C. A. Floudas & P. M. Pardalos (Eds.), State of the
Art in Global Optimization (pp. 563-583). Kluwer Academic Publishers.
Bullard, L. G., & Biegler, L. T. (1991). Computers & Chemical Engineering, 15,
239-254.
Kennedy, J., & Eberhart, R. (1995). Particle swarm optimization. In Proceedings of
the IEEE International Conference on Neural Networks (Vol. 4, pp. 1942-1948).
Kennedy, J., & Eberhart, R. C. (1997). A discrete binary version of the particle
swarm algorithm. In Proceedings of the IEEE International Conference on Systems,
Man, and Cybernetics (Vol. 5, pp. 4104-4108). Orlando, FL, USA.
Kocis, G. R., & Grossmann, I. E. (1987). Industrial and Engineering Chemistry
Research, 26, 1869-1880.
Parsopoulos, K. E., & Vrahatis, M. N. (2002). Natural Computing, 1, 235-306.
Quarteroni, A., Sacco, R., & Saleri, F. (2007). Numerical Mathematics (2nd ed.).
Berlin: Springer.
Squire, W., & Trapp, G. (1998). SIAM Review, 40, 110-112.
