You are on page 1of 26

Author’s Accepted Manuscript

Multi-objective reliability redundancy allocation in
an interval environment using particle swarm
optimization

Enze Zhang, Qingwei Chen

www.elsevier.com/locate/ress

PII: S0951-8320(15)00272-0
DOI: http://dx.doi.org/10.1016/j.ress.2015.09.008
Reference: RESS5408
To appear in: Reliability Engineering and System Safety
Received date: 26 March 2015
Revised date: 1 September 2015
Accepted date: 10 September 2015
Cite this article as: Enze Zhang and Qingwei Chen, Multi-objective reliability
redundancy allocation in an interval environment using particle swarm
o p t i m i z a t i o n , Reliability Engineering and System Safety,
http://dx.doi.org/10.1016/j.ress.2015.09.008
This is a PDF file of an unedited manuscript that has been accepted for
publication. As a service to our customers we are providing this early version of
the manuscript. The manuscript will undergo copyediting, typesetting, and
review of the resulting galley proof before it is published in its final citable form.
Please note that during the production process errors may be discovered which
could affect the content, and all legal disclaimers that apply to the journal pertain.

1

Multi-objective reliability redundancy allocation in an interval environment
using particle swarm optimization

Enze Zhang*, Qingwei Chen

School of Automation, Nanjing University of Science and Technology, Nanjing 210094, China
*Corresponding author. Tel.: +8615850577123.
E-mail addresses: yzzez8986@gmail.com (E. Zhang).

Abstract
Most of the existing works addressing reliability redundancy allocation problems are based on the assumption of

fixed reliabilities of components. In real-life situations, however, the reliabilities of individual components may be

imprecise, most often given as intervals, under different operating or environmental conditions. This paper deals

with reliability redundancy allocation problems modeled in an interval environment. An interval multi-objective

optimization problem is formulated from the original crisp one, where system reliability and cost are

simultaneously considered. To render the multi-objective particle swarm optimization (MOPSO) algorithm capable

of dealing with interval multi-objective optimization problems, a dominance relation for interval-valued functions

is defined with the help of our newly proposed order relations of interval-valued numbers. Then, the crowding

distance is extended to the multi-objective interval-valued case. Finally, the effectiveness of the proposed approach

has been demonstrated through two numerical examples and a case study of supervisory control and data

acquisition (SCADA) system in water resource management.

Keywords: Reliability optimization; Multi-objective optimization; Particle swarm optimization; Interval number

1. Introduction

The utilization of redundancy plays an important role in enhancing the reliability of a system.

The redundancy allocation problem (RAP) involves the selection of components and a

system-level design configuration to simultaneously optimize some objective functions, such as

system reliability, cost and weight, given certain design constraints. The incorporation of

redundant components improves system reliability, but can also increase system cost, weight, etc.

Thus, a RAP frequently encounters a trade-off between maximization of system reliability and

minimization of system cost and weight.

Traditionally, the RAP has been solved as a single objective optimization problem with the

goal to maximize the system reliability subject to several constraints such as cost, weight, etc.

Various methodologies have been proposed to handle it, e.g., dynamic programming [1], integer

if not impossible. the probability distributions or membership functions of the parameters is required to be known in priori. which makes the multi-objective formulations of the reliability optimization problem quite natural. bacterial-inspired evolutionary algorithm [7]. bee colony algorithm [8]. interval programming [31-33] and robust optimization [28. 2 programming [2]. In the stochastic or fuzzy approaches. [22] proposed a practical approach combining bare-bones multi-objective particle swarm optimization (MOPSO) and sensitivity-based clustering for solving multi-objective RAPs. fuzzy programming [25-27]. The causes may be improper storage facilities. branch and bound [4]. Many studies have demonstrated the effectiveness of the heuristic/meta-heuristic algorithms in solving multi-objective RAPs. reliability optimization incorporating uncertainty has become the subject of many research efforts in the area of reliability engineering over the last decade. most of the research studies are based on the assumption of fixed reliabilities of components. Soltani [34] presented a comprehensive review on reliability optimization with both deterministic and non-deterministic parameters. Khalili-Damghani et al. Tabu search [9]. Actually. however. To deal with uncertainty in system reliability problems. it is not an easy task to grasp this information in real engineering applications. Moreover. and heuristic/meta-heuristic approaches such as genetic algorithm [5]. the vagueness of human judgment and other factors relating to the environment. Unfortunately. swarm optimization [10-13]. the reliabilities of individual components are often imprecise under different operating and environmental conditions. In spite of the existence of diverse studies that address reliability optimization problems.29] are frequently employed. [15] recently proposed a decision support system for solving multi-objective RAPs. stochastic programming [23. variable neighborhood search [6]. column generation [3]. Multi-objective approaches for the RAP can be found in [15-22]. and hybrid algorithm [14]. An alternative solution to this problem is to use .24]. Zhang et al. Multiple objectives are often considered simultaneously in practical problems concerning with system design. to produce different components with exactly identical reliabilities. In real-life situations. and thus the issue is subject to uncertainty. Safari [20] proposed a variant of the non-dominated sorting genetic algorithm (NSGA-II) to solve a novel mathematical model for multi-objective RAPs. the vagaries of manufacturing processes makes it difficult.

[44] considered a multi-objective RAP with both fixed and interval-valued system parameters using entropy as a constraint for the system stability. an interval signifies the extent of tolerance or a region the parameter can possibly take [35. [43] considered a RAP with choices of active and cold strategy where the component’s time to failure follows an Erlang distribution and formulated it through a Min-Max regret method. the problem was formulated by Yokota et al. To deal with imprecision. This paper provides an alternative way to solve multi-objective reliability redundancy allocation problems. Bhunia et al. Some researchers have addressed reliability optimization problems by considering interval-valued component reliabilities (see.. As a coefficient. Soltani et al.38] and Taguchi et al.g. Sadjadi et al. three numerical examples are solved . Gupta et al. Despite their efforts. Then the crowding distance measure is extended to handle imprecise objective functions. More recently. Feizollahi et al. [37. and then converted the same into unconstrained single objective ones. and a multi-objective optimization approach for obtaining the trade-off curve of RAPs in an interval environment is still rather lacking. [39. Roy et al. modeled in an interval environment. [32] solved chance constrained reliability stochastic optimization problem with resource constraints considering the reliability of each component as an interval number. Our motivation is to handle the interval multi-objective case as-is and apply the particle swarm optimization to obtain the tradeoff curve. e. [31] formulated the problem as constrained multi-objective optimization problems with the help of their proposed order relations of interval-valued numbers.36]. Finally. In the early years. which will be used to compare imprecise individuals. Sahoo et al.37-45]).40] as a nonlinear integer programming problem and solved by using a GA. [33] formulated the RAP with interval reliabilities as an unconstrained integer programming problem with interval coefficients and developed an advanced GA for solving it. all these studies fail to find the trade-off front of the problem since the problem is either formulated as or converted into a single objective optimization problem. [31-33. 3 an interval-valued number to represent the imprecise number. [41] proposed a robust deviation framework to deal with the RAP with interval component reliabilities. the imprecise Pareto dominance is defined. [42] presented a redundancy allocation model with choices of a redundancy strategy and component type and solved it using an interval programming approach. Multi-objective reliability optimization problems with interval-valued objectives have been formulated and then particle swarm optimization has been applied to solve the problem under a number of constraints.

M : yi (x1 ) £ yi (x 2 )  x1 is strictly better than x2 in at least one objective. $i Î{1. y2 (x).e.2. If so. Therefore. M } : . Note that because y (x) is a vector. x2 . and this criterion is Pareto dominance. A general interval multi-objective optimization problem Very often real-world problems have several conflicting objectives. it may not adequately represent the problem being faced. Multi-objective optimization is concerned with the minimization of y that can be the subject of a number of inequality and equality constraints: min y(x) = ( y1 (x). The rest of the paper is organized as follows. it is necessary to establish certain criteria to determine what is considered as an optimal solution.. Two numerical examples and a case study are presented in Section 5 to demonstrate the effectiveness of the proposed algorithm. x1 is said to dominate x 2 (denoted by x1 ≺x 2 ).t. P (1) h j (x) = 0. 2.2. i = 1.e. j = 1. Section 3 describes the formulation of the interval RAP while the proposed algorithm for solving the interval optimization problem is discussed in detail in Section 4. there is no unique solution to this problem. A general interval multi-objective optimization problem is introduced in Section 2. gi (x) £ 0. y M (x))T s.….….…. in a minimization problem. there is a vector of objectives involving M ( 2) conflicting objective functions that must be traded off in some way.…. for feasible solutions x1 and x 2 . Considering multiple objectives often gives better ideas of the task.…. xn )T belong to the feasible region S  R which is n formed by constraint functions.. i. if any of the components of y (x) are competing.2. Finally. i. 4 and the trade-off relationship between reliability and cost performance is analyzed.…. Without loss of generality.Q where the decision vectors x = (x1 . the conclusions and suggestions for future research are provided in Section 6. "i = 1. Even though many of them can be reduced to a matter of a single objective.2. if and only if both of the following conditions are true:  x1 is no worse than x 2 in all objectives.

5 yi (x1 )  yi (x2 ) The above relationship is clearly suitable for precise objective values.2.2. The typical structure of a series-parallel system is illustrated in Fig. The ith subsystem consists of xi components arranged in parallel. i = 1. cost. In an interval environment.ckR ].2. j = 1.c2 . Problem formulation The RAP pertains to a system of n subsystems arranged in series.c) = ( y1 (x.…. y2 (x.c). ckL and ckR are the lower and upper bounds of the kth component. M) are the ith interval-valued objective function.…. represented as multidimensional intervals [50] (see Fig.2. yiR (x)] (i = 1. P (2) h j (x) = 0. k = 1.c). Normal and imprecise objective values 3. but also increases system cost and weight.Q where c = (c1 .¼. K gi (x) £ 0. as will be discussed later.…. The problem arises then regarding how to optimally allocate redundant components. yi (x) = [ yiL (x). The use of redundant components improves system reliability.cK )T are the interval-valued parameter. 1). . 2. but when it comes to objective functions that are imprecise. ck = [ckL . weight and other characteristics.c))T s. Each component potentially varies in reliability. y M (x.….t. 1.…. the normal Pareto dominance relation cease to work and need to be extended. the original problem (1) is converted into an interval multi-objective optimization problem as follows: min y(x. y2 y2 y  y" max(y )   1R   y2 R  y y" y y  min(y )   1L  y'  y2 L  y' y1 y1 Fig.

n) that will maximize the system reliability and minimize the system cost under given constraints. i = 1.…. in which the reliabilities and costs of components are interval-valued. the RAP is modeled in an interval environment as a multi-objective optimization problem. RSR (x) ùû i=1 min éëCSL . operating and failure. i.CSR ùû (4) s.t.….  All components are assumed to have binary states. j = 1. CSR ] are the interval-valued system reliability and system cost . the system reliability RS equals to the product of the interval-valued reliability of all subsystems (see [31-33. n RS (x)   RSL (x).1. The mathematical formulation of the problem is given below: n max Õ éë RSL (x).e.36] for the definition of the multiplication operator of interval numbers).2. A series-parallel system configuration 3.n where [ RSL .2.…. RSR (x) (3) i 1 where RSL (x)  [1  (1  riL ) xi ] and RSR (x)  [1  (1  riR ) xi ] . 6 1 1 1 2 2 2 x1 x2 xn Fig. RSR ] and [CSL .2. The problem is to determine the number of redundant components xi (i = 1.35. Assuming that the component type of the ith subsystem is identical. i.m.  Failures of component are mutually statistically independent.  The system will not be damaged when a component failure occurs. xi ÎZ + .e. 3. Formulation of the interval RAP In this paper. g j (x) £ 0.2.  All components are assumed to be non-reparable. Assumptions The basic assumptions for the RAP in an interval environment are as follows:  Reliability and cost of each component are imprecise and interval-valued. 2.

Interval A can also be denoted by A = ac . The second one defines strict and fuzzy preference ordering from a pessimistic decision maker’s point of view. The first one describes an optimistic decision maker’s preference index. 2 2 An extensive research on comparing and ranking two interval numbers can be found in [36] which gives two approaches of comparing two interval numbers. a dominance relation therefore has to be defined for comparison of individuals. a new partial order relation that compares intervals inside a single objective dimension is defined. To overcome this issue.aR ] = { a : aL £ a £ aR . modifications are necessary in those steps where imprecise solutions are compared. ac  (aL  aR ) and aw  (aR  aL ) . i. then A  [a. Let A  [aL . the Pareto dominance relation is not suitable for interval-valued objective functions.1. Thus. Extending Pareto dominance to the interval-valued case. the crowding distance metric is extended to handle interval objective functions.1.. respectively. we have proposed a general definition of order relations without extra knowledge about the underlying distributions or the decision maker’s preferences. Regarding this. a] is a real number. In the following subsections a dominance relation for interval-valued objective functions is defined. Dominance relation for interval objective functions As previously mentioned. we face the difficulty of comparison between two interval numbers.aw . aw and B  [bL . 4. 4. aR ]  ac . This approach is somewhat subjective since the degree of pessimism of decision makers should be specified beforehand.1.e. Order relation of interval numbers An interval number is defined as A = [aL . Proposed approach To develop an interval MOPSO algorithm for Problem (4). Then. If aL  aR . bR ]  bc . then for ac  bc .a ÎR} where aL and aR are the left and right limits on the real line R . 4. bw be two intervals. where ac and aw are the mid-point 1 1 and half-width of interval A . 7 respectively. we define an order relation  IM for which A  IM B implies A is superior to or greater than B : .

a novel crowding distance extending the previous one need to be developed. It is to be noted that when A and B are incomparable using  IM .2.x ) = m=1 . it is considered . d ( f m (xi ). (ii) if aw  bw . 4. f m (xi )  [ fmL (xi ). Imprecise Pareto dominance Applying  IM . However. Crowding distance for interval objective functions To produce well-distributed Pareto fronts. f mR (x j )] denote the interval-valued objective functions of xi and x j for the mth objective function.2. (6) i j V (x i ) +V (x j ) +1 where M indicates the dimension of objective functions. f m (x j )) D(x . IP-MOPSO employs the crowding distance to estimate the diversity of nondominated solutions. 8 (i) if aw  bw . f m (x j )) denotes the distance between f m (xi ) and f m (x j ) which can be calculated as follows: . and thus. the global best position of each particle is selected from the external repository with respect to the diversity of nondominated solutions. the distance between them can be calculated as follows: å M d( f m (x i ). f mR (xi )] and f m (x j )  [ f mL (x j ). Clearly the order relation  IM is reflexive and transitive but not symmetric.1. a relation ≺IP extending the standard Pareto dominance is proposed for the multi-objective interval-valued case. Considering two optimal solutions xi and x j . then A IM B  aL  bL and . then A  IM B should hold. the crowding distance first introduced in [46] is not suitable for interval-valued objective functions. the imprecise Pareto dominance relation ≺IP can be defined as: (5) 4. Assuming the m th objective function of solution xi and x j are denoted as ym (xi ) and ym (x j ) respectively.

During the flight. f m (x j ))  (7) 2 V (xi ) and V (x j ) denote the volumes of their corresponding hyper-cuboid. Particle swarm optimization Particle swarm optimization (PSO). the crowding distance of xi .f m (x j ) | (9) m=1 Accordingly the crowding distance of xi will have the following form: 1 M CD(xi )   | f m (x j )  f m (x k ) | 2 m 1 (10) which is consistent with the normal one for the non-interval case. is the best position found so far by itself. Assuming the two nearest points x j and x k . denoted by CD(xi ) . x j ) and D(xi . 4. while the global best position.1. then the distance between xi and x j will take the following form: M D(x i .x j ) = å | f m (x i ) . Each particle represents a potential solution of an optimization problem. It can be observed from Eq. 9 ( f mL (xi )  f mL (x j ))2  ( f mR (xi )  f mR (x j ))2 d ( f m (xi ). a swarm consists of a set of particles that fly around in a multi-dimensional search space. known as pbesti . first introduced by Kenney and Eberhart in 1995 [47]. making use of the best position they encounter. From this point of view. xk ) CD(xi )  D(xi . the crowding distance introduced in [46] is a special case of the one defined here. the personal best position. The IP-MOPSO algorithm for interval RAPs 4. is a stochastic global optimization technique inspired by the paradigm of birds flocking. each particle adjusts its position according to the experience of itself and other neighboring particles. In the PSO. x j )  D(xi . (8) that if the interval objectives represented as intervals regress to points. Considering the i th particle in the swarm at iteration t . is defined as the sum of D(xi . known as gbest .3. is the global best position found so far by neighbors of this . xk ) (8) It is worth noting that the solutions that lie in the boundary of each dimension of the objective space are assigned an infinite distance value.3.

The Big-M . r1 and r2 are random numbers between 0 and 1. We adopt in IP-MOPSO the time-varying inertia weight proposed by Shi and Eberhart [48]. 10 particle. Then. and Tmax is the maximum number of iterations.2. and w is the inertia weight used to control the impact of the previous velocities on the current one.3. Here. (12).3. c2i and c2 f are the initial and final values of c2 . Constraints handling approach Since problem (4) is a constrained optimization problem with interval-valued objectives. which is defined as follows: Tmax  t w  ( w1  w2 )  w2 (12) Tmax where w1 and w2 are the initial and final values of the inertia weight respectively. respectively. t is the current iteration number. influencing the particle’s ability of exploration in the search space. t and Tmax have the same definition as in Eq. This can be carried out by linearly changing the coefficients c1 and c2 through the time as suggested in [50]: t c1  (c1 f  c1i )  c1i (13) Tmax t c2  (c2 f  c2i )  c2i (14) Tmax where c1i and c1 f are the initial and final values of c1 . a constraint-handling scheme needs to be incorporated to deal with the constraints. and its position and velocity are denoted by xit and vit . the position and velocity of this particle at iteration t  1 can be expressed using the following equations: vit 1  wvit  c1r1 ( pbesti  xit )  c2 r2 ( gbest  xit ) (11) xit 1  xit  vit 1 where c1 and c2 are the acceleration coefficients. 4. 4.3. time-varying acceleration coefficients are applied with the idea of balancing the ability of exploration and exploitation. Besides using this inertia weight scheme. which control the influence of gbest and pbesti on the search process. Dynamic inertia weight and acceleration coefficient It is found that the performance of a PSO algorithm could be improved by linearly varying the inertia weight w [48].

 Step 5: Update the personal best position of the particles.2. Description of the proposed IP-MOPSO algorithm The proposed algorithm.  Step 4: Evaluate each of the particles and store the nondominated ones in the external repository Rep applying ≺IP relation. Clearly problem (15) is an unconstrained optimization problem with interval objectives. A mutation operator with action range covered narrowed over time is adopted to prevent a premature convergence to a false Pareto front [22].  Step 3: Update the velocity and position of the particles according to Eq. (11). g j (x) £ 0. Evaluate each of the particles using Eq. 4.  M ]. which is calculated using Eq. f R ] (15) s. if x  S .4. M ] . say M .3.  Step 2: Randomly select the global best position from the top 10% of solutions in Rep with the crowding distance.….…. otherwise. j = 1. Let S = {x : g j (x) £ 0. j = 1. (16) and store the nondominated ones in the external repository Rep applying ≺IP relation.2. (10). which can be written in an interval form [M . between its .0]. Let us consider the constrained optimization problem Maximize [f L . which converts the constrained optimization problem with interval-valued objective functions into an unconstrained one by penalizing the unfeasible solutions a large positive number. where  ( x)    f (x)  [ M . IP-MOPSO.m} be the feasible space. The personal best position is set as the particle itself.t. 11 penalty method introduced in [33] is considered in the IP-MOPSO algorithm. then the transformed optimization problem is as follows: Maximize fˆ (x)  f (x)   (x) (16) [0.m. which adopts the imprecise Pareto relation and extended crowding distance.  Step 1: Initialize the population of particles. is implemented as follows. For every particle.

//*initialize the population Pop0 *// 4: ENDFOR 5: F ( Pop0 )  Evaluate( Pop0 ) //*Evaluate each of the particles in Pop0 *// 6: Rep  GetNDParticles( Pop0 ) //*Store the nondominated particles in the repository using ≺IP *// 7: WHILE t  Tmax DO 8: FOR i  1 to nPop 9: gbesti  GetGbest( ) 10: pbesti  GetPbest( ) 11: xi  UpParticle( xi .. pbesti )  Initialize( ). whenever it gets full.e. those particles located in less populated regions of objective space. are kept in the repository. Since the size of the repository is limited. the dominating one. gbesti . will be the new personal best position. i. pbesti ) 12: xi  Mutate( xi ) 13: END FOR 14: F ( Popt 1 )  Evaluate( Popt 1 ) 15: NDPaticles  GetNDParticles( Popt 1 ) 16: Rep  GetNDParticles(Rep  NDPaticles) //*Update the external repository*// 17: IF | Rep | nRep THEN Rep  Prune( Rep) //* | Rep | : number of particles in Rep *// 18: t  t 1 19: END WHILE . N r solutions with greater crowding distances. gbesti . If the maximum number of cycles is reached. Otherwise. 12 current position and the previous personal best position. The IP-MOPSO can be outlined using the pseudo-code below: Algorithm IP-MOPSO 1: t 0 2: FOR i  1 to nPop 3: ( xi . go to Step 2.  Step 6: Update the external repository based on the crowding distance suitable for interval-valued objectives. the algorithm is terminated. if any. and any position randomly otherwise. This update consists of inserting all the currently nondominated solutions into the repository and eliminating the dominated ones.  Step 7: Increase the iteration counter.

b2  200. an archive size of 15 and 50 iterations.6] Pi 1 2 3 4 2 Wi 7 8 8 6 9 b1  110. The problem was solved using the developed algorithm. c2 .0.5089] to [105. The values of the parameters considered in the third example are selected from a real case study. system cost ranges from [52. Two numerical examples Example 1: An example problem taken from [33] is solved in the first example.63. (17) 5 g1 ( x)   Pi xi2  b1  0 i 1 5 g 2 ( x)  Wi  xi exp( xi / 4)   b2  0 i 1 and xi (i  1. Pi .5) being a nonnegative integer.6] [6.1.85] [0. Correspondingly.76] Ci [6. 0.3181] to [0. 84.0.0. The values of c1 .74. with a population size of 30.78. 0. Table 1 Data for example 1 i 1 2 3 4 5 ri [0.0.82] [0. b1 and b2 are listed in Table 1.8] [5.5326. 0.91] [0. Wi . 2.87. .2657. Table 2 presents 15 of the solutions found with their respective system reliability and cost. 4.5.0. Numerical examples Three examples are considered to illustrate and validate the performance of the proposed techniques for solving multi-objective RAPs with interval-valued reliabilities of components.9448.1  riL  i  x i 1   5 Minimize CSL .84. where the values of Ci .8] [3. For each subsystem. CSR    CiL .9132]. 5 Maximize  RSL .5.66] [0.4 respectively. the component type available to choose from is identical. 13 5. 5. RSR    1  1  riR . w1 and w2 have been taken as 2. IP-MOPSO. The range of system reliability is from [0.t. CiR  xi  exp( xi / 4)  i 1 s.9 and 0.9] [3.8839. 0. which consists of five subsystems connected in series.3.

3] [0.0.1.4085. x's Reliability Cost 1 [1. The results obtained are compared to the ones from [33].8148.0.8969] 5 [2.1. This Pareto front provides decision-makers different options for system implementation. 3# is the same as solution no.3.1.8069.8124] 13 [1.2657.7115] [81.0.9448.7731].0.1.168.2.5089] 2 [3.1. Table 2 Results obtained by IP-MOPSO of Example 1 Solution no. Solution no.2.4655] [64.0.5286] [64.1.84.7303. 13.2.3] [0.1.165.2] [0. where it can be seen that IP-MOPSO performs better.2] [0. 2 obtained by IP-MOPSO.0851] 9 [2.115.2] [0.2.3.168.1.1.104.1.1.2.2] [0.5184.2.8494] [92.0.8365] 11 [1.2] [0.132.2.7336.8954] [101.2.1.2.3784.2.2.3] [0.1] [0.135.7729.2.6971] 4 [2.2.120.2.9132] [105.7208.148.9206. we observe that solution no.2.2.3181] [52.1.2.1.2] [0. 4 and 13 shown in boldface.6147] 15 [2. 2# is dominated by solution no.120.5596. From Table 2 and 3.9491.8948] Table 3 Results obtained in [33] of Example 1 Solution no.2.9448.2172] 7 [1.1.3348.2.0.0.3945] [56.8502.1.0. These ranges assure the extent of the generated Pareto front.7731] 3 [1.2.2. 3 further shows solutions obtained by IP-MOPSO and from [33].3] [0.0.1.0027] 10 [2.113.2.3254.3.92.8888] [97.4625.9089. Solution no. Fig.0.8853] 12 [2.6125] 3# [3.1.1351.9794] 8 [2.0.1676] 14 [2.3] [0.7819] [83.9132] [105.3] [0.6238] [73.2] [0.5326.2.4300] [60.5310] [73.5019.8839.3.0.143.8182] [88.7731] .157.5399.124.6324.0.6125] 2# [1.6799] [77.1.4864.0.103.3] [0.3.2.8148.0030.2] [0.6267. which is indicated in italic type.0.5175] [71.2. 1# is obviously dominated by solution no.8839.0269] 6 [2.3183.100.2.2.0.3] [0.0971.4587.0.6663.1. 14 168. x's Reliability Cost 1# [2.8629.3. which are listed in Table 3.1.5762] [68.2] [0.

95] [8.68] [1.90] [5.88.0. Table 4 Component choices of example 2 Component type j Subsystem i 1 2 3 rij cij rij cij rij cij 1 [0.1.71] [1.i } where xij is the decision variable denoting the number of the jth components used in subsystem i .74.72] [2.0.3] [0.3 i=1 xij Î{0.Õ éë1.73] [1.93.10] [0. mi is the number of available component types for each subsystem i .67] [1. four and five types of components in each subsystem. "i = 1.0. RSR ùû = Õ ê1.0.96.71.i is the user-defined maximum number of components used in subsystem i which is eight in this case. Comparison of results obtained by both algorithms Example 2: The second example considers a system consisting of three subsystems.1.0.0.11] 2 [0.8 0.7] [0.92] [5.CSR ùû = å å [CijL .85.0.0.2.66. 3. 3 é mi xij ù Maximize éë RSL .7] 3 [0.76] [2.0.4] [0.5 0.5] 4 [0.rijR .0.90] [5. Table 4 defines the component choices for each subsystem.13] [0.69.71.73] [3.rijL ùû ú i=1 ë j=1 û 3 mi Minimize éëCSL .3] [0.65.4] [0.95.70.97] [9. 15 180 160 140 120 Cost 100 80 60 40 0.87] [2.i .0.2.nmax.0.3 0. The maximum number of components is eight in each subsystem.2 0.3] [0. with an option of five.3] .88.0.6 0.….90.9 1 Reliability Fig.4 0. nmax.t.7] [0.0.7 0. respectively.CijR ]xij i=1 j=1 s. (18) mi 1 £ å xij £ nmax.98] [11.4] 5 [0.

5 presents the Pareto optimal front generated by trade-off points with the mid-point of intervals representing objective functions.9 1 Average reliability Fig. which leads to the highest cost. 4. Configuration A . point B as labeled.5 0. This means that passing from . To better illustrate the trade-off relationship between reliabilities and costs. and 100 generations. As observed. configuration C represents the design configuration of reliability maximization. 16 200 180 160 140 120 Cost 100 80 60 40 20 0 0. IP-MOPSO was run considering a population size of 50. as labeled in Fig.8 0.6 0. a high increment of reliability is gained if the selection of solutions passes from point A to point B than when it changes from point B to C .4 0. provides the most economically benign system configuration. there is one obvious turning point on the Pareto frontier. 5.9 1 Reliability Fig.7 0.7 0. Pareto optimal front obtained of Example 2 180 C 160 140 System reliability maximization Average cost 120 100 80 60 System cost 40 minimization B 20 A 0 0. Each point on this frontier represents an optimal design configuration with different system reliabilities and costs. Fig.4 0. On the contrary.6 0. Fig.8 0.5 0. 4 shows a sample population plot of IP-MOPSO. which neglects the requirement on reliabilities. Pareto optimal front generated by trade-off points For this case. where it can be observed that the solutions are quite diverse and form a broad Pareto optimal front. 5. Moreover.

0.0.1.999999961.1.1.5. j = 1. The expectation value of the objective vector is used for optimization and thus the algorithm becomes a normal one. Table 5 Solutions corresponding to three trade-off points of Example 2 Sol.0.960482384.0.3.0.0. B and C .0.1. i ¹ j} and n is the number of nondominated j solutions.0. Hypervolume metric ( S ) : S metric is introduced by Limbourg et al.5.9] B [0.352444000] [3.6.2. Results were compared using the following four performance metrics.0. which has also interval form [50]. Once a design point is selected. 0.3] [0. 17 solution A to B brings a small increment in the system cost but a high improvement in the system reliability. It measures the volume that X dominates restricted by a reference point y ref . which handles imprecision by assuming a uniform distribution inside the objective vector.0.0.323334000.1] [0.n. to reflect the approximation performance of a nondominated set X .….0. Both algorithms were run for 100 generations with a population size of 50.0.0. Now a natural question comes: Does the imprecision Pareto dominance based MOPSO perform better than the standard distribution assuming one? To investigate this point. Spacing metric ( SP ) : To evaluate the distribution of solutions throughout the found Pareto front.193] A decision-maker can pick up any point from the Pareto front according to their specific design criteria or interest. D is the mean value of all Di .0.999999993] [147. as well as the respective reliability and cost values.x j ) | i.0.0] [0.0.0. A value of zero for this measure indicates that all the .2.969989817] [15. we introduce a metric that is an extension of the spacing metric originally proposed in [51]. Points x's Reliability Cost A [0. we choose DA-MOPSO (Distribution assuming MOPSO) for comparison purpose.1.0.2.1. Table 5 illustrates the solutions representing the design configurations corresponding to points A .35] C [0.0. the system configuration behind it can be obtained.2.0.1.0.0. The spacing or SP metric can be defined as: 1 n   D  Di  2 SP  (19) n  1 i 1 where Di = min{D(x i . 0.

8 0. IP-MOPSO — 0. C ( X .Y ) = (20) |Y | where |  | denotes the number of members of the solution set. Mean Std.0036 0. Applying the ≺IP relation.7 0. it can be defined as: |{y ÎY | $x Î X . 6 indicate that .6 S 0. C SP I C (IP-*. Y )  1 indicates that all solutions of Y are dominated by or equal to some solutions of X . 6.4 Best case: IP-MOPSO Best case: DA-MOPSO 0. Two-set coverage metric (C ) : In order to evaluate the closeness of the Pareto front found to the true one that is unknown in advance.2 0 20 40 60 80 100 Iteration Fig. x ≺IP y Ú x = y}| C( X . IP-*) Mean Std. Comparison results of S metric Detailed plots of S metric over the optimization run as shown in Fig. Table 6 Comparison results of the metric C. we extend the two-set coverage (C ) metric originally proposed in [52] to the imprecise case. 18 nondominated solutions found are equidistantly spaced. Imprecision metric ( I ) : Another performance considered for the imprecise case is the amount of uncertainty in a population.4706 — 0.0038 DA-MOPSO 0.5 0.0025 0. DA-*) C(DA-*. SP and I. This metric takes a pair of nondominated solution sets X and Y as inputs.0157 0.2613 0. This might be measured using the I metric [50] as the added volume of all solutions in the front.0072  represents MOPSO 0.3 Worsst case: IP-MOPSO Worst case: DA-MOPSO 0.3709 0.0243 0.1812 0.

3 i=1 xij Î{0. RSR ùû = Õ ê1. .1. The C metric values in Table 6 emphasize this observation since more than 26.i . nearly 47.rijL ùû ú i=1 ë j=1 û 3 mi Minimize éëCSL . A real case study of SCADA system Next.t. Modems are responsible for communication between stations and the main control center. which work serially. 5. mi is the number of available component types for each subsystem i .1% solutions generated by IP-MOPSO are dominated by those of DA-MOPSO. five types of redundant components can be used. Servers are used for acquiring information from FEP and sending them to SCADA software [19].2. the proposed approach for solving interval-valued multi-objective RAP is illustrated through the design of SCADA system in water resource management. modems.i is the user-defined maximum number of components used in subsystem i which is five in this case. 19 IP-MOPSO seems to have a better convergence performance than DA-MOPSO. "i = 1. The conclusion to draw from all these observations is that the imprecise Pareto dominance based MOPSO (IP-MOPSO) is an interesting alternative to the distribution assuming one (DA-MOPSO).2.1.CSR ùû = å å [CijL . The SCADA system considered has three main sub-systems..1% of the solutions obtained by DA-MOPSO are dominated by those of IP-MOPSO.i } where xij is the decision variable denoting the number of the jth components used in subsystem i .…. FEPs collect data from stations via a proper protocol. (18) mi 1 £ å xij £ nmax. i.Õ éë1. These results indicate that the IP-MOPSO algorithm performs better than DA-MOPSO in terms of diversity and imprecision. In each subsystem.2. nmax. and servers. the mean and standard deviation of the SP and I results over 30 independent simulation runs are given in Table 6. Besides. The maximum number of components is five in each subsystem.e.nmax. FEPs. 3 é mi xij ù Maximize éë RSL . However.rijR .CijR ]xij i=1 j=1 s.

918] [441.833.8 Reliability 0.612] [1764.4 0.4.204] [0.0.5] 5 [0.408] [0.0. 7.714] [78.765] [83.0.7 0.833.0.3.86. as presented in Table 7. with an option of 5 types of components in each subsystem.867] [2009.0. 7.735.9506.510] [0.1836] 3 [0.535.882.9894] [514. This integer-encoding scheme can achieve the mapping from the particle representation to the RAP to be solved in a convenient way.2218. which consists of 3 subsystems.867] [147.816] [392. where the number of redundant components of the corresponding type is selected as the encoded elements.2 percent.784.816] [98. Each particle contains a 15-bit integer coded string. The structure of a particle is illustrated in Fig.2150] 4 [0.2091] An integer-coded scheme is adopted.5.102] [0.9 0. where xij represents the number of redundant component of type j in subsystem i. 8. and considered here with a variation of +/.357] [0.969] [1960.735.0. 20 The reliabilities and costs of all the components are selected from the case study in [19].6] [0.0.3 2000 4000 6000 8000 10000 12000 14000 16000 Cost Fig.918] [2050.0.918] [196.5.0.931.2040] 2 [0.0.686. Encoding scheme for each particle 1 0.931.969] [490. x11 x12 x15 x21 x25 x31 x35 subsystem 1 subsystem 2 subsystem 3 Servers FEPs Modems Fig.931.784.0. Several elements compose a particle representing a candidate solution of the RAP problem.5] [0.81.765] [343.588.0.6 0.153] [0.969] [2131. Pareto optimal front obtained by IP-MOPSO .0.882.5 0. Table 7 Component choices of example 3 Component Subsystem i type j 1 (Server) 2 (FEP) 3 (Modem) rij cij rij cij rij cij 1 [0.459] [0.0.0.7] [0.882.

The system reliability value varies from [0. 21 1 0. We observe from Fig. it is evident that the solutions obtained using IP-MOPSO are better distributed through the tradeoff curve.5 0.9 0. is used for comparison purpose.3566] to [0. However. Pareto optimal front obtained by IP*-MOPSO The Pareto optimal front generated by IP-MOPSO is shown in Fig. with the corresponding total cost varying from [2325. 2234. 1 0.4 0.99994403. 9.6. To tell them apart. with all the other parts of the algorithm exactly the same as in IP-MOPSO.8 Reliability 0. Pareto front with a lower bound of 0. 0.99998972].3162. the MOPSO algorithm adopting the Pareto relation > IP is named IP*-MOPSO. a well-known Pareto relation > IP proposed in [50].4] to [13254. 13796].6 0.85 0. which means that IP-MOPSO performs better in terms of diversity of solutions. adopting the order relation > IN . 8 and 9 that a wide range of values is covered by solutions of both algorithms. Fig.8 2000 4000 6000 8000 10000 12000 14000 Cost Fig.7 0.9 0. where it can be seen that the solutions obtained are quite diverse and form a broad optimal set.3 2000 4000 6000 8000 10000 12000 14000 16000 Cost Fig. 8.95 Reliability 0. 0. 10. To further show the superiority of the proposed order relation for interval-valued numbers and the Pareto dominance relation. 8 shows the Pareto optimal front obtained by IP*-MOPSO.8 for reliability .

9951. those where a small improvement of one objective would lead to a large deterioration in at least one other objective [18.9990] [6639. which captures all possible types of system design under certain design criteria and conditions and thus provides a decision maker a full set of design configurations for implementation. System configuration diagram Reliability Cost 2 1 5 19 2 1 [0. which are plotted in Fig.6910. i.9971. 0. is proposed for solving it. A Pareto frontier can be obtained. In this case.9956. Table 8 illustrates three solutions from this region representing system design configurations..7803] 5 2 1 1 2 2 5 13 2 2 [0.9993] [7497. To further find the good compromises. 10. this paper could be extended in four directions: .0. 0.25]. The “knee” region is formed by the most interesting solutions of the Pareto front. Conclusion The reliability redundancy allocation problem with interval-valued reliability of each component has been discussed here.6] 5 2 2 3 2 4 2 5 1 43 5 [0.9998] [7673.e.5] 5 1 5 5 5 A lower bound of 0.4. with their respective interval-valued reliability and cost. 22 Table 8 Example system design configurations from the “knee” region Sol.8 for reliability is set to identify the solutions more likely to be chosen by decision-makers. For future research. where the reliability and cost of a system are simultaneously considered and properly traded off. No. A multi-objective particle swarm optimization algorithm. solutions from the “knee” region are those where a small improvement of reliability would lead to a huge increase of cost.5.7986. solutions from the “knee” region are considered. which adopts the imprecise Pareto relation and extended crowding distance. 6.

prefer designs with a high reliability assured by a low variability.130:132-9. Reliability redundancy allocation: An improved realization for nonconvex nonlinear programming problems. Châtelet E. Chen YC. an important issue remains in the simultaneous optimization of system reliability. Redundancy allocation for series-parallel systems using integer linear programming. Thus. [4] Ha C. a first future research direction is to propose a crowding distance which can reflect the spatial information well and works better when the number of objectives increases. Kuo W. Acknowledgement The authors would like to thank the editor. Researchers have rarely considered variability involved in the reliability optimization. circular. 3. the model can be extended to allow component mixing that makes the model more complicated. Redundancy allocation of series-parallel systems using a variable neighborhood search algorithm. Hamadani AZ. the associate editor and the anonymous reviewers for their insightful comments and suggestions that help us to improve the quality of the paper. When the number of objective functions is more than two. The choice of multi-state and multi-choice components.288:174-93. the proposed crowding distance may result in losing spatial information. [8] Yeh W C. IEEE Trans Reliab 2010. decision makers. 4. This work is supported by the National Natural Science Foundation of China under Grant No. Hence. Therefore.57:507-16.92(3):323-31. 61333008. [2] Billionnet A. However.38(11):1465-73. IEEE Trans Reliab 2008. complex.59:706-17. Reliab Eng Syst Saf 2014. Hsieh TJ. Chu C. [7] Hsieh TJ. Solving reliability redundancy allocation problems using an artificial bee colony algorithm. [3] Zia L. Inf Sci 2014. Reliability optimization of series-parallel systems with mixed redundancy strategy in subsystems. as well as standby redundancy strategy can be considered. 23 1. Reliab Eng Syst Saf 2007. Eur J Oper Res 2006. References [1] Yalaoui A. and bridge. . [5] Ardakan MA.171(1):24-38. another future research direction is investigating these different systems. [6] Liang YC. 2. Comput Oper Res 2011. IEEE Trans Reliab 2005. Hierarchical redundancy allocation for multi-level reliability systems employing a bacterial-inspired evolutionary algorithm. and their associated variance. Also. system cost. A new dynamic programming method for reliability & redundancy allocation in a parallel-series system.54(2):254-61. Coit DW. generally risk-averse. It should be noted that the general idea presented here could also be applied to other system structures such as k-out-of-n. Redundancy allocation for series-parallel systems using a column generation approach.

[22] Zhang EZ. [18] Dolatshahi-Zand A. Liao HT. Chinnam RB.33(3):335-47. Orthogonal simplified swarm optimization for the series-parallel redundancy allocation problem with a mix of components. IEEE Trans Reliab 2003.111:154-63.94:1585-92. Feyzollahi H.232(2): 275-84. Chakraborty D. Song C. Gao L. Sahoo L. Stochastic programming models for general redundancy-optimization problems. 2009. A Decision Support System for Solving Multi‐Objective Redundancy Allocation Problems.66(4):1115-24. [34] Soltani R. IEEE Trans Reliab 2011. Appl Math Comput 2010. Tangramvong S. [14] Kanagaraj G. [33] Gupta RK. [21] Khalili-Damghani K. A new multi-objective particle swarm optimization method for solving reliability redundancy allocation problems. Li S. Roy D. Sharifi M. Comput Ind Eng 2015. Soltani R. Wu YF. Ahmed S. [23] Zhao R. Proceedings of the Institution of Mechanical Engineers. Tabu search for the redundancy allocation problem of homogenous series-parallel multi-state systems. Part O: Journal of Risk and Reliability 2014.80:33-44.64(2):799-806. [32] Bhunia AK. [26] Garg H. Watada J. [28] Feizollahi MJ. Ouyang H. Murat A. [11] Huang CL.119(1):129-38. Reliability optimization of binary state non-repairable systems: A state of the art survey.142:221-30.62(1):152-60.68:13-22.108:10-20.52(2):181-91. The robust redundancy allocation problem in series-parallel systems with budgeted uncertainty. Reliab Eng Syst Saf 2009. Abtahi AR. Reliab Eng Syst Saf 2014. A knowledge-based archive multi-objective simulated annealing algorithm to optimize series-parallel system with choice of redundancy strategies. [19] Li ZJ. Vishwakarma Y.216(3):929-39. Shahriari MR. Reliab Eng Syst Saf 2014. Bi-objective optimization of the reliability-redundancy allocation problem for series-parallel system. Bhunia AK. [24] Tekiner-Mogulkoc H.93(8):1257-72. [15] Khalili-Damghani K. Knowledge-Based Systems 2014. J Comput Appl Math. IEEE Trans Reliab 2015. [29] Feizollahi MJ. The robust cold standby redundancy allocation in series-parallel systems with budgeted uncertainty. [35] Sengupta A. Sadjadi SJ. Tavana M.127:65-76. Fuzzy Sets Syst 2001 . A hybrid cuckoo search and genetic algorithm for reliability-redundancy allocation problems.111(2):58-75. Reliab Eng Syst Saf 2008. Solving the redundancy allocation problem with multiple strategy choices using a new simplified particle swarm optimization. Comput Ind Eng 2014. Sharma SP. Modelling redundancy allocation for a fuzzy random parallel-series system. Pal TK. Dynamic analysis and reliability assessment of structures with uncertain-but-bounded parameters under stochastic process excitations. Reliab Eng Syst Saf 2013. A practical approach for solving multi-objective reliability redundancy allocation problems using extended bare-bones particle swarm optimization. Nourelfath M. [31] Sahoo L. [12] Wang Y. Int J Ind Eng Comput 2014. Comput Ind Eng 2013.133:11-21. Multi-objective reliability optimization of series-parallel systems with a choice of redundancy strategies. [27] Soltani R.232(2):539-57.60(3):667-74. A GA based penalty function technique for solving constrained redundancy allocation problem of series system with interval valued reliability of components. Reliab Eng Syst Saf 2015. [30] Do DM. J Manuf Syst 2014.132:46-59. Chen QW. Bhunia AK.64:1-12. . Coit DW. Comput Ind Eng 2012. 24 [9] Ouzineb M. Hajipour V. Gendreau M. Genetic algorithm based multi-objective reliability optimization in interval environment. A PSO algorithm for constrained redundancy allocation in multi-state systems with bridge topology. [10] Yeh WC. Rani M. System reliability optimization considering uncertainty: Minimization of the coefficient of variation for series-parallel systems.5(3):339-64. Reliab Eng Syst Saf 2013. IEEE Trans Reliab 2014. [13] Kong X. A particle-based simplified swarm optimization algorithm for reliability redundancy allocation problems. [17] Zaretalab A. [16] Cao D. Gao W. Reliab Eng Syst Saf 2015. [20] Safari J. Design of SCADA water resource management control center by a bi-objective redundancy allocation problem and particle swarm optimization.144:147-58. Reliab Eng Syst Saf 2012. Kapur PK. Reliability optimization through robust redundancy allocation models with choice of component type under fuzziness.30(8):1249-62. Reliability stochastic optimization for a series system wi th interval component reliability via genetic algorithm. Tavana M. Modarres M. Abtahi AR. A two-stage approach for multi-objective decision making with applications to system reliability optimization. J Comput Appl Math 2009. Reliab Eng Syst Saf 2015. Interpretation of inequality constraints involving interval coefficients and a solution to interval linear programming. [25] Wang S. Qual Reliab Eng Int 2014. Liu B. Khalili-Damghani K. Roy D. Coit D W.228(5):449-59. Li L. Efficient exact optimization of multi-objective redundancy allocation problems in series-parallel systems.63(1):239-50. Jawahar N. Ponnambalam SG.


 [52] Zitzler E. Comput Ind Eng 1997. Li Y. [41] Feizollahi MJ. A fast and elitist multiobjective genetic algorithm: NSGA-II. et al. Watson HC. Kim CE. [51] Schott JR. [47] Kennedy J.228(3):254-64. DC. Interval programming for the redundancy allocation with choices of redundancy strategy and component type under uncertainty: Erlang time to failure distribution. IEEE Trans Reliab 2012. Comparison of multiobjective evolutionary algorithms: Empirical results.p.61(4):957-65. Thiele L. Yokota T. Mahapatra BS.79:204-13.  The crowding distance metric is extended to handle imprecise objective functions. Proceedings of the Institution of Mechanical Engineers.41:6147-60. Optimal design problem of system reliability with interval coefficient using improved genetic algorithms. 459-66. The robust deviation redundancy allocation problem with interval component reliabilities. Highlights  We model the reliability redundancy allocation problem in an interval environment. 2005. Minimum-Maximum regret redundancy allocation with the choice of redundancy strategy and multiple choice of component type under uncertainty. Sadjadi SJ. Gen M. An optimization algorithm for imprecise multi-objective problem functions. Eberhart RC. Washington. Meyarivan T. Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients.6(2):182-97. [44] Roy P. [38] Yokota T. [50] Limbourg P. Method for solving nonlinear goal programming with interval coeffic ients using genetic algorithm. Comput Ind Eng 1995. A genetic algorithm for interval nonlinear integer programming problem. Gen M. Empirical study of particle swarm optimization. Piscataway. [48] Shi Y.8(3):240-55. A method for interval 0–1 nonlinear programming problem using a genetic algorithm. Particle swarm optimization. In: Proceedings of the 1999 Congress on Evolutionary Computation. Sadjadi SJ. In: Proceedings of the 2005 IEEE Congress on Evolutionary Computation.29(1):531-5. [49] Ratnaweera A. Mahapatra GS. [43] Sadjadi SJ. Fault tolerant design using single and multicriteria genetic algorithm optimization. Appl Math Comput 2014. In: Proceedings of the 4th IEEE International conference on neural networks. Deb K.244:413-21. p. Exp Syst Appl 2014. Comput Ind Eng 2015 . [42] Soltani R.  A dominance relation for interval-valued multi-objective functions is defined. Gen M.  We apply the particle swarm optimization directly on the interval values. . Tavakkoli-Moghaddam R.31(3):913-7. IEEE Trans Evol Comput 2004. Pal TK. Pratap A. 1999. Evol Comput 2000. Agarwal S. Eur J Oper Res 2000.8(2):173-95.33(3):597-600. Soltani R. 1945-50.127(1):28-43. 1942-8. Entropy based region reducing genetic algorithm for reliability redundancy allocation in interval environment. Eberhart RC. Modarres M. Comput Ind Eng 1999. [45] Soltani R. Halgamuge S. Aponte DES. 25 [36] Sengupta A. [46] Deb K. p. Part O: Journal of Risk and Reliability 2014. [39] Taguchi T. Ida K. IEEE Trans Evol Comput 2002.37(1):145-9. 1995. [37] Yokota T. 1995. Comput Ind Eng 1996. Tavakkoli-Moghaddam R. NJ. Cambridge: Massachusetts Institute of Technology. On comparing interval numbers. Taguchi T. Robust cold standby redundancy allocation for nonrepairable series-parallel systems through Min-Max regret formulation and Benders’ decomposition method. Edinburgh. Roy PK. [40] Taguchi T.