Abstract
A hybrid particle swarm optimization algorithm which includes two alternative
gradient-based methods for handling constraints has been proposed to solve process
synthesis and design problems which involve continuous and binary variables and
equality and inequality constraints (a mixed integer non-linear programming problem,
MINLP). The first method for handling constraints uses the Newton-Raphson algorithm
(NR) and the other method transforms the problem of finding the feasible region into a
constrained simulation problem (CSP). The efficiency of both hybrid PSO algorithms
has been tested and compared with the original PSO method. The two hybrid algorithms
are able to achieve the global optimum for a small planning problem chosen as a case
study.
Keywords: Process synthesis, Mixed-Integer Non-Linear Programming, Global
Optimization, Particle Swarm Optimization
1. Introduction
Many systems in chemical engineering are challenging for optimization methods
because they are highly non-linear, multivariable and involve non-convex functions
(Banga & Seider, 1996). Under these conditions, the conventional equation-solving and
local optimization gradient-based algorithms do not provide a guarantee of obtaining
global solutions. In fact, they may not be suited for numerical calculations due to the
presence of trivial solutions, local optima, or divergences that occur after a significant
amount of iterations. The stochastic optimization methods have been proven to be
highly useful because they do not require continuity nor other assumptions about the
optimization problem, are reliable and efficient, and can cope with multiple local
minima. The Particle Swarm Optimization (PSO) is a stochastic population-based
method for solving global optimization problems (Kennedy & Eberhart, 1995). PSO can
be easily implemented and it is computationally inexpensive, since its memory and CPU
speed requirements are low (Parsopoulos & Vrahatis, 2002). Furthermore, it does not
require gradient information of the objective function, is able to avoid local optima and
is very well suited to non-linear problems. The original PSO was developed for
continuous optimization problems. However, many practical chemical engineering
problems are formulated as combinatorial optimization problems. A discrete binary
version of PSO, developed by Kennedy and Eberhart (1997), can be applied in that case.
2. Problem Statement
A large number of process synthesis, design and control problems in chemical
engineering can be modeled as mixed integer nonlinear programming (MINLP)
problems. The general statement of a MINLP is:
R. Ruiz-Femenia et al.
min  f(x, y)                (1)
s.t. h(x, y) = 0            (2)
     g(x, y) ≤ 0            (3)
     x_L ≤ x ≤ x_U          (4)
where x ∈ R^n, y ∈ {0, 1}^m, f: R^{n+m} → R, g: R^{n+m} → R^p, h: R^{n+m} → R^q.
In chemical engineering the equality constraints, Eq. (2), are usually taken to be the
mass, energy and momentum balances in the steady state, and the inequality constraints,
Eq. (3), are related to the design and process specifications.
3. Solution Approach
3.1. The original particle swarm optimization algorithm
The PSO algorithm is inspired by the social behavior of bird flocking, fish schooling
and swarm theory. PSO shares many similarities with other evolutionary computation techniques
such as genetic algorithms (GA). The system is initialized with a population of random
solutions and searches for optima by updating generations. However, unlike GA, PSO
has no evolution operators such as crossover and mutation. In PSO, the individuals
called particles fly through a multidimensional search space. Each particle is regarded
as a point in a D-dimensional space that adjusts its position according to its own
experience and experience of other particles, making use of the best position
encountered by itself and by the swarm.
In recent years, PSO has been successfully applied in many research and application
areas. One of the reasons that makes PSO so attractive is that there are few parameters to
adjust. For each particle i, the velocity vector v_i is updated according to:

v_i^{k+1} = w^k v_i^k + c1 r1 (p_i^k − x_i^k) + c2 r2 (p_global^k − x_i^k)    (6)
The particle’s new velocity v_i^{k+1} is calculated from its previous velocity v_i^k and
the distances of its current position x_i^k from its own best position p_i^k and from the
group’s best position p_global^k. The index k indicates a pseudo-time increment;
r1 and r2 are two uniform random vectors whose elements lie between 0 and 1; c1 and
c2 are two positive constants, called the cognitive and social parameters, respectively;
and w is the inertia weight.
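Equation (6), together with the standard PSO position update x_i^{k+1} = x_i^k + v_i^{k+1}, can be sketched in a few lines. This is a minimal NumPy sketch, not the paper's implementation; the parameter values w = 0.7 and c1 = c2 = 1.5 are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_update(x, v, p_best, p_global, w=0.7, c1=1.5, c2=1.5):
    """One PSO step per Eq. (6): new velocity from inertia, cognitive and
    social terms, then the standard position update."""
    r1 = rng.random(x.shape)  # uniform random vector in [0, 1)
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (p_global - x)
    x_new = x + v_new
    return x_new, v_new
```

Note that when a particle sits exactly at both its own and the swarm's best position, the update reduces to pure inertia, v_new = w v.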
In the discrete binary version of PSO (Kennedy & Eberhart, 1997), each component l of
the vector y_i, which contains the binary variables associated with particle i, is updated
according to:

y_l^{i,k} = 1  if r_{3,l} ≤ sig(v_l^{i,k});    y_l^{i,k} = 0  if r_{3,l} > sig(v_l^{i,k})    (7)
A particle swarm optimization for solving NLP/MINLP process synthesis problem
where r3 is a vector of random numbers between 0 and 1, and sig(x ) is the sigmoid
function, commonly used in neural networks:
sig(x) = 1 / (1 + e^{−x})    (8)
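Equations (7)–(8) amount to a stochastic threshold test on the sigmoid of each velocity component. A minimal sketch (the function names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(v):
    """Eq. (8): logistic function mapping velocities into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-v))

def binary_update(v):
    """Eq. (7): component l is set to 1 when r3,l <= sig(v_l), else 0."""
    r3 = rng.random(v.shape)  # uniform random vector in [0, 1)
    return (r3 <= sigmoid(v)).astype(int)
```

Large positive velocities drive sig(v) toward 1, so the corresponding binary variable is almost surely set to 1; large negative velocities have the opposite effect.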
The original PSO algorithm can obtain more accurate results faster than other
stochastic methods, but it is less successful in solving MINLP problems, because the
original PSO handles the constraints only by penalizing infeasible solutions.
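The penalization of infeasible solutions mentioned above can take several forms; a common one is a quadratic penalty on the constraint residuals. The sketch below is an illustrative stand-in, not the paper's formulation; the weight mu and the quadratic form are our assumptions.

```python
import numpy as np

def penalized_objective(f, h, g, x, y, mu=1.0e3):
    """Quadratic penalty: equality residuals are squared, inequality
    residuals are squared only where they are violated (g > 0)."""
    hv = np.atleast_1d(h(x, y)).astype(float)
    gv = np.atleast_1d(g(x, y)).astype(float)
    return f(x, y) + mu * (np.sum(hv**2) + np.sum(np.maximum(gv, 0.0)**2))
```

The penalized value coincides with f at feasible points and grows rapidly with the violation, which is what steers the unconstrained PSO away from infeasible regions, at the cost of a weight mu that must be tuned.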
3.2. Hybrid optimization algorithm for solving non-convex NLP/MINLP problems with
constraints
We propose a hybrid optimization algorithm built up with the association of global and
local optimization methods. The basic idea of this computer-assisted approach is to
merge the fast convergence properties of gradient-based optimization methods with the
wide exploration ability of population-based ones. To achieve this, we propose a PSO
algorithm for continuous and binary variables with two deterministic and alternative
methods for handling constraints: the Newton-Raphson method (HPSO-NR) and a
method for solving constraints as a simulation problem (HPSO-CSP). The algorithm
with the two gradient-based methods is illustrated in Figure 1. Both gradient-based
methods first require decomposing the continuous and binary variables into
dependent (xD, yD) and independent (xI, yI) variables. Then, for each iteration k of the
PSO algorithm, the values of the independent variables are fixed (x_I^k, y_I^k) and the inner
method for solving the constraints starts to iterate. Performing a few iterations (p) of this
method is sufficient to estimate the feasible region defined by the constraints.
This modification reduces the computation time, especially for the first iterations of the
PSO method, when the solution of the MINLP problem is far from the global optimum
and searching for the exact solution of the system defined by the constraints can be very
time-consuming. After p iterations of the inner method, the constraints are solved for the
continuous dependent variables x_D^{k,p}, and the PSO algorithm continues searching for new
positions for all the particles. The hybrid PSO algorithm using Newton-Raphson
(HPSO-NR) for handling constraints and the hybrid PSO algorithm solving the constraints
as a simulation problem (HPSO-CSP) have both been implemented using vectorization in
Matlab® software.
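The outer/inner structure described above can be summarized in a short skeleton. This is a hypothetical sketch of one outer iteration; the data layout, dictionary keys and function names are ours, not the paper's.

```python
def hybrid_outer_step(particles, p, inner_solve, objective):
    """One outer iteration of the hybrid scheme: for each particle, fix the
    independents (xI, yI), run p iterations of the inner gradient-based
    method to update the dependents xD, then evaluate the objective at
    the (approximately) feasible point."""
    for part in particles:
        part["xD"] = inner_solve(part["xI"], part["yI"], part["xD"], p)
        part["f"] = objective(part["xI"], part["xD"], part["yI"])
    return particles
```

The key design choice is that `inner_solve` is not run to full convergence: only p iterations, so the early, far-from-optimal outer iterations stay cheap.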
3.2.1. Newton-Raphson for handling constraints (NR)
The first iterative deterministic method to handle the constraints employs p iterations
of the Newton-Raphson algorithm (Quarteroni et al., 2007) to find the numerical
solution of the system built from the equality constraints, Eq. (2), and the new equality
constraints obtained by introducing slack variables (x_S) into the original
inequality constraints, Eq. (3):
h_i(x, y) = 0
g_j(x, y) + x_{S,j}^2 = 0    →    H([x, x_S], y) = 0    (9)

By solving the system defined by Eq. (9), new values for the continuous dependent
variables (x_D^{k,p}) are obtained, which satisfy H([x_D^{k,p}, x_I^k], y^k) ≈ 0.
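As an illustration of solving a system of the form of Eq. (9) for the dependents with the independents held fixed, the sketch below applies plain Newton-Raphson with a finite-difference Jacobian to a toy system; the toy system itself, the tolerances and the step size are our assumptions.

```python
import numpy as np

def newton_raphson(F, u0, p=10, tol=1e-10, eps=1e-7):
    """Run up to p Newton-Raphson iterations on F(u) = 0, building the
    Jacobian column by column with forward finite differences."""
    u = np.asarray(u0, dtype=float)
    for _ in range(p):
        Fu = F(u)
        J = np.empty((u.size, u.size))
        for j in range(u.size):
            du = np.zeros(u.size)
            du[j] = eps
            J[:, j] = (F(u + du) - Fu) / eps
        u = u - np.linalg.solve(J, Fu)  # full Newton step
        if np.linalg.norm(F(u)) < tol:
            break
    return u

# Toy instance of Eq. (9): with the independent variable x1 fixed at 0.5,
# solve h = x1**2 + x2 - 2 = 0 and g + s**2 = (x1 - 1) + s**2 = 0
# for the dependent variable x2 and the slack s.
x1 = 0.5
F = lambda u: np.array([x1**2 + u[0] - 2.0, (x1 - 1.0) + u[1]**2])
sol = newton_raphson(F, [0.0, 1.0])  # sol[0] -> x2, sol[1] -> slack s
```

Here the slack equation forces g = −s² ≤ 0, so a converged solution of the square system automatically satisfies the original inequality.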
Figure 1. Flow chart of the hybrid optimization algorithm for solving MINLP.
4. Case study
The computational performance of the two hybrid PSO algorithms has been investigated
using a small planning problem. This problem is taken from (Kocis & Grossmann,
1987). Figure 2 shows the superstructure that contains several alternatives for obtaining
product C. The objective is to maximize the profit, considering that the desirable product
C is produced from chemical B, which can either be purchased as raw material from the
market or obtained as an intermediate from another raw material, A. There are two
alternative paths for producing B from A. The model for the superstructure can be
formulated as a MINLP problem. It
contains three binary variables, five continuous variables, five equality constraints (two
non-linear), three inequality constraints and two upper bounds:
min z = 7/2 y1 + y2 + 3/2 y3 + 7 b1 + b2 + 6/5 b3 + 9/5 a − 11 c

s.t.  b2 − ln(1 + a2) = 0
      b3 − 6/5 ln(1 + a3) = 0
      c − 9 b = 0
      −b + b1 + b2 + b3 = 0                    (14)
      a − a2 − a3 = 0
      b1 − 5 y1 ≤ 0
      a2 − 5 y2 ≤ 0
      a3 − 5 y3 ≤ 0

y ∈ {0, 1}^3,  c ≤ 1,  b2 ≤ 5,  a, a2, a3, b, b1, b2, b3, c ≥ 0
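For reference, the objective and equality constraints of the model, with the coefficients as transcribed above, can be evaluated directly; a residual function of this kind is what the inner NR/CSP methods operate on. The function names and argument ordering are ours.

```python
import math

def objective(y1, y2, y3, b1, b2, b3, a, c):
    """Objective of the planning model (minimization form): 7/2 y1 + y2
    + 3/2 y3 + 7 b1 + b2 + 6/5 b3 + 9/5 a - 11 c."""
    return 3.5*y1 + y2 + 1.5*y3 + 7*b1 + b2 + 1.2*b3 + 1.8*a - 11*c

def equality_residuals(a, a2, a3, b, b1, b2, b3, c):
    """Residuals of the five equality constraints (two non-linear)."""
    return [
        b2 - math.log(1 + a2),        # conversion via path 2
        b3 - 1.2 * math.log(1 + a3),  # conversion via path 3
        c - 9*b,                      # B-to-C relation as transcribed
        -b + b1 + b2 + b3,            # total B balance
        a - a2 - a3,                  # total A balance
    ]
```

A point is feasible for the equalities exactly when all five residuals vanish, which is the condition the Newton-Raphson inner method drives toward.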
The number of iterations of the inner method, p, is increased from 2 to 10 in order to
reduce the computation time for solving the constraints during the first iterations of the
outer PSO algorithm, when the solution is far from the global optimum.
Table 1 presents a comparison of the best solution obtained by the original PSO, hybrid
PSO with the Newton-Raphson method for handling constraints (HPSO-NR) and hybrid
PSO handling the constraints as a simulation problem (HPSO-CSP). In Table 1, NFE and
NRC represent, respectively, the average number of objective function evaluations (the
product of the population size and the number of iterations) and the percentage of the
100 runs that converged to a global optimum. We observe that the two hybrid PSO
algorithms achieve solutions very close to the global optimum (HPSO-CSP reaches the
exact solution), whereas the original PSO cannot converge to the optimal solution. The
difference in convergence capability between HPSO-NR and HPSO-CSP is slight: over
the 100 executions, HPSO-NR converged to a global optimum in 88% of the runs, while
HPSO-CSP converged in 84%. Regarding computational expense,
the HPSO-CSP demands more computation time than the HPSO-NR. While the average
time for one run is 2 minutes for the HPSO-CSP, it reduces to 45 seconds for the
HPSO-NR (on a 3.0 GHz 1 GB RAM Intel Pentium 4 processor).
Table 1. Results using different algorithms.

                 Optimum       Original PSO   HPSO-NR       HPSO-CSP
NFE              -             10000          4500          3800
NRC (%)          -             -              88            84
Best solution    -1.92309874   -1.20914622    -1.92308776   -1.92309874