A Concentration-Based Artificial Immune Network For Multi-Objective Optimization, Coelho
1 Introduction
Although Immune Network principles have been very popular in the AIS literature, their computational implementations are generally restricted to the negative response of immune cells. Moreover, such behavior is usually implemented as periodic comparisons, performed during runtime, among all the cells in the population, followed by the elimination of the worst subset (the cells with the lowest affinity with the antigens) from those that are closer than a given threshold to each other [11]. cob-aiNet, on the other hand, introduced a distinct approach, implementing the Immune Network mechanisms partially based on the 2D shape-space immune network model of Hart et al. [13].
In this section, the overall structure of the cob-aiNet[MO] algorithm is presented, with emphasis on the mechanisms and operators devoted to the multi-objective scenario. Readers are referred to [5] for a full description of cob-aiNet, including a local search operator that has been discarded here.
The main steps of cob-aiNet[MO] are given in Alg. 1. As can be seen, the overall structure of this algorithm is similar to that of other immune network-inspired algorithms from the literature [11], except for the inclusion of the concentration update steps, the novel concentration-based suppression mechanism, and a selection operator that also performs insertion. There are also some differences in the remaining steps, which will be highlighted in what follows.
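Based only on the steps named above (cloning and mutation, the concentration update, the concentration-based suppression, and the selection operator that also performs insertion), the overall loop of Alg. 1 can be sketched in pseudocode; every helper name here is a hypothetical placeholder, not the authors' implementation:

```
population ← initialize(nAB antibodies, initial concentration C0)
for t = 1 .. max_iterations:
    clones ← clone_and_mutate(population)              # nC_min .. nC_max clones per cell
    population ← select_with_insertion(population, clones)
    update_concentrations(population)                  # concentration update steps
    population ← concentration_suppression(population) # removes cells whose concentration vanishes (assumed)
return non_dominated(population)
```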
f_i^{Ab}(t) =
\begin{cases}
\dfrac{\sum_{j \in J} C_t^j \left[ \sigma_s - d(i,j) \right]}{\sum_{j \in J} C_t^j} & \text{if } J \neq \emptyset \\
0 & \text{otherwise,}
\end{cases}
\qquad (1)
where f_i^{Ab}(t) is the total affinity between antibody i and the remaining antibodies at iteration t, J is the set of antibodies that are better than i and lie within a radius σ_s (defined by the user) of i, C_t^k is the concentration of antibody k at iteration t, and d(k,l) is the Euclidean distance between antibodies k and l.
From Eq. 1, it is possible to notice that only antibodies that are both closer to i than the given threshold and better than i (f_j^{Ag}(t) > f_i^{Ag}(t)) are considered in the calculation of the total affinity f_i^{Ab}(t), and their contribution is proportional to how close they are to i, weighted by their concentration. As will be seen in the next section, the affinity f_i^{Ab}(t) among antibodies may lead to a decrease in the concentration of antibody i when i is in the vicinity of better cells, and this decrease is proportional to the concentration of such cells.
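Eq. 1 translates directly into code. The sketch below assumes a list-based population with parallel concentration and fitness arrays; the names are ours (f^Ag is abbreviated as `fitness`), not the authors' implementation:

```python
import math

def antibody_affinity(i, population, concentrations, fitness, sigma_s):
    """Sketch of Eq. 1: total affinity f_i^Ab(t) of antibody i with the
    better antibodies lying inside the suppression radius sigma_s."""
    def dist(a, b):  # Euclidean distance d(i, j)
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # J: antibodies better than i and closer to i than sigma_s
    J = [j for j in range(len(population))
         if j != i
         and fitness[j] > fitness[i]
         and dist(population[i], population[j]) < sigma_s]
    if not J:  # J is empty: no better neighbors, affinity is zero
        return 0.0
    # concentration-weighted closeness, normalized by total concentration
    num = sum(concentrations[j] * (sigma_s - dist(population[i], population[j]))
              for j in J)
    den = sum(concentrations[j] for j in J)
    return num / den
```

Note that the closer a better neighbor is to i (and the higher its concentration), the larger its contribution, matching the suppression behavior described above.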
3 Experimental Results
In this section, the benchmark problems and the methodology adopted in the
experimental analysis will be presented, together with a thorough discussion of
the obtained results.
3.1 Benchmark Problems
3.2 Methodology
For all the problems studied here, cob-aiNet[MO] was run with the following parameters: initial population size n_AB = 100; minimum and maximum number of clones nC_min = 3 and nC_max = 7, respectively; initial concentration C_0 = 0.5; maximum population size max_AB = 250 cells for problems Deb & Tiwari and Two-on-One 4, and max_AB = 150 cells for problems EBN and Lamé Supersphere; β_i = 2.50 and β_f = 0.001 for Lamé Supersphere and β_i = 1.00 and β_f = 0.001 for the other problems; and, finally, neighborhood threshold σ_s = 0.060, 0.004, 0.150 and 0.070 for problems Deb & Tiwari, Two-on-One 4, EBN and Lamé Supersphere, respectively.
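For convenience, the settings above can be grouped into problem-independent and problem-specific parameters; the key names in this sketch are ours, not identifiers from the original code:

```python
# Illustrative grouping of the cob-aiNet[MO] parameter settings listed above.
COB_AINET_MO_PARAMS = {
    "common": {
        "n_AB": 100,    # initial population size
        "nC_min": 3,    # minimum number of clones
        "nC_max": 7,    # maximum number of clones
        "C0": 0.5,      # initial concentration
    },
    # per problem: (max_AB, beta_i, beta_f, sigma_s)
    "per_problem": {
        "Deb & Tiwari":     (250, 1.00, 0.001, 0.060),
        "Two-on-One 4":     (250, 1.00, 0.001, 0.004),
        "EBN":              (150, 1.00, 0.001, 0.150),
        "Lame Supersphere": (150, 2.50, 0.001, 0.070),
    },
}
```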
The parameters of NSGA-II and omni-optimizer were the same for all the
problems: population size equal to 100; mutation probability equal to 1/n,
where n is the dimension of the problem; crossover probability equal to 1.0; and
crossover and mutation distribution index equal to 20. For the omni-optimizer, the parameter δ of the ε-dominance concept was set to δ = 0.001.
The KP1 algorithm was evaluated with: population size equal to 100; muta-
tion probability equal to 1/n, where n is the dimension of the problem; crossover
probability equal to 0.80; and crossover and mutation distribution index also
equal to 20.
Finally, VIS was evaluated with the following parameters: initial population
size equal to 200; number of clones per cell equal to 5; percentage of random
cells equal to 20; 5 iterations between two suppression steps; and α equal to 2.5
for Lamé Supersphere and 1.0 for the other problems.
The average and standard deviation of the results obtained by each algorithm for each problem are given in Tab. 1. Results for which the null hypothesis² can be rejected with significance 0.05 are marked with “*”. To calculate the hypervolume of each solution set, the reference points for problems Deb & Tiwari, EBN, Two-on-One 4 and Lamé Supersphere were [0, 0], [1, 1], [21, 10] and [1, 1], respectively.
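For reference, the hypervolume of a two-dimensional non-dominated front with respect to such a reference point can be computed with a simple sweep; this is a generic sketch assuming minimization, not the routine used in the paper:

```python
def hypervolume_2d(front, ref):
    """Hypervolume (to be maximized) of a 2-D front under minimization,
    w.r.t. reference point `ref`; points not dominating `ref` contribute 0."""
    # keep only points strictly better than the reference in both objectives
    pts = sorted(p for p in front if p[0] < ref[0] and p[1] < ref[1])
    hv = 0.0
    best_f2 = ref[1]
    # sweep in increasing f1; each point adds the rectangle it newly dominates
    for f1, f2 in pts:
        if f2 < best_f2:
            hv += (ref[0] - f1) * (best_f2 - f2)
            best_f2 = f2
    return hv
```

With the reference point [0, 0] used for Deb & Tiwari, for instance, only solutions with both objectives negative contribute to the metric.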
Considering the hypervolume metric, it is possible to notice from Tab. 1 that
cob-aiNet[MO] presented the best average hypervolume for problem Two-on-
One 4 (although it is not possible to reject the null hypothesis with significance
0.05 when comparing cob-aiNet[MO] and omni-optimizer), while the best results
for problems Deb & Tiwari, EBN and Lamé Supersphere were obtained by KP1.
Although cob-aiNet[MO] presented the best results w.r.t. the hypervolume only for problem Two-on-One 4, it is possible to see from Fig. 1 (graphical representation, in the objective space, of the final set of non-dominated solutions) that cob-aiNet[MO] obtained good results for all the problems, unlike the other techniques, which presented degraded performance in some of the problems (e.g. omni-optimizer in problem EBN, and VIS in problems EBN and Two-on-One 4). The results shown in Fig. 1 correspond to the repetition that led to the best hypervolume for each algorithm.
With respect to the spacing metric, cob-aiNet[MO] obtained the best results
for the Lamé Supersphere, while NSGA-II was the best one in Deb & Tiwari ’s
problem (although its results were not significantly different from those of cob-
aiNet[MO]). For the other two problems, omni-optimizer obtained final sets of
non-dominated solutions with the best spacing. However, it is important to notice that the spacing metric only indicates how well spaced the non-dominated solutions obtained by a given algorithm are, not whether these solutions present a maximum spread. This explains the victory of the omni-optimizer for problems EBN and Two-on-One 4, even though it was not able to cover the whole Pareto front in most of its executions for these problems, as illustrated in Fig. 1.

² In the Wilcoxon Rank Sum test, the null hypothesis states that the data in two vectors x and y are independent samples from identical continuous distributions with equal medians.

Table 1. Average ± std. deviation of the hypervolume, spacing, hypercube-based diversity and overall rank. The best results are marked in bold. In the first column, “D&T” corresponds to the Deb & Tiwari problem, “2-on-1” to Two-on-One 4 and “Lamé” to Lamé Supersphere. Results for which the null hypothesis (when compared to cob-aiNet[MO]) can be rejected with significance 0.05 are marked with “*”.
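The significance test described in the footnote can be sketched with the usual normal approximation to the rank-sum statistic (without a tie-variance correction); this is an illustrative implementation, not the exact routine used by the authors:

```python
import math

def rank_sum_p_value(x, y):
    """Two-sided Wilcoxon rank-sum test via the normal approximation
    (ties receive average ranks; no tie correction of the variance)."""
    n1, n2 = len(x), len(y)
    pooled = sorted([(v, 0) for v in x] + [(v, 1) for v in y])
    values = [v for v, _ in pooled]
    rank_sum_x = 0.0
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and values[j] == values[i]:
            j += 1  # group of tied values occupies positions i .. j-1
        avg_rank = (i + 1 + j) / 2.0  # average of 1-based ranks i+1 .. j
        for k in range(i, j):
            if pooled[k][1] == 0:  # sample from x
                rank_sum_x += avg_rank
        i = j
    mu = n1 * (n1 + n2 + 1) / 2.0                       # mean of the rank sum
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)   # its standard deviation
    z = (rank_sum_x - mu) / sigma
    # two-sided p-value from the standard normal CDF
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
```

The null hypothesis is rejected at significance 0.05 whenever the returned p-value falls below 0.05.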
Now focusing on the diversity of the final sets of solutions in the decision
space, evaluated here by the hypercube-based diversity metric, it is possible
to observe in the results shown in Tab. 1 that, as expected, cob-aiNet[MO]
was the algorithm that presented the best diversity maintenance capabilities,
obtaining significantly better results in problems Deb & Tiwari, EBN and Lamé
Supersphere. For problem Two-on-One 4, KP1 (which was proposed with the same goal as cob-aiNet[MO]: diversity in the decision space) obtained the best average results.
The omni-optimizer presents diversity stimulation mechanisms devoted to the decision space, but it was not able to obtain the best results w.r.t. diversity in any of the problems, although its results for Two-on-One 4 were not significantly different from those of cob-aiNet[MO]. To visually illustrate the difference between cob-aiNet[MO]'s and omni-optimizer's capability of finding and maintaining diversity in the decision space, Fig. 2 shows the final distribution of the solutions obtained by both algorithms in Deb & Tiwari's problem. The results shown in Fig. 2 were obtained in the repetitions that led to the highest hypercube-based diversity.
[Fig. 1 panels: columns correspond to cob-aiNet[MO], NSGA-II, omni-optimizer, VIS and KP1; axes f1 × f2.]
Fig. 1. Representation, in the objective space, of the final set of non-dominated solu-
tions obtained in the repetition that led to the highest hypervolume for each algorithm.
The rows correspond to problems Deb & Tiwari (first on the top), EBN, Two-on-One
4 and Lamé Supersphere, respectively.
Fig. 2. Representation, in the decision space, of the final set of non-dominated solutions obtained by cob-aiNet[MO] (upper right triangle) and omni-optimizer (lower left triangle) for Deb & Tiwari's problem. These plots correspond to the final populations in the repetitions that led to the highest hypercube-based diversity.
Table 2. Average ± std. deviation of the computational time (in seconds) required by cob-aiNet[MO] and NSGA-II. Both algorithms were written in Microsoft's Visual C# .NET. The experiments were performed on a machine with an Intel Core2 Duo T7500 (2.2 GHz) processor, 3 GB of RAM, Windows 7 and .NET Framework 4.
Acknowledgements
The authors would like to thank CAPES and CNPq for the financial support.
References
1. Bersini, H.: Revisiting idiotypic immune networks. In: Advances in Artificial Life.
LNCS, vol. 2801, pp. 164–174. Springer (2003)
2. Burnet, F.M.: Clonal selection and after. In: Bell, G.I., Perelson, A.S., Pimbley Jr.,
G.H. (eds.) Theoretical Immunology, pp. 63–85. Marcel Dekker Inc. (1978)
3. de Castro, L.N., Timmis, J.: Artificial Immune Systems: a New Computational
Intelligence Approach. Springer Verlag (2002)
4. Chan, K.P., Ray, T.: An evolutionary algorithm to maintain diversity in the para-
metric and the objective space. In: Proc. of the 2005 Intl. Conference on Compu-
tational Intelligence, Robotics and Autonomous Systems – CIRAS 2005 (2005)
5. Coelho, G.P., Von Zuben, F.J.: A concentration-based artificial immune network
for continuous optimization. In: Proc. of the 2010 IEEE International Conference
on Evolutionary Computation – CEC 2010. pp. 108–115 (2010)
6. Coello Coello, C.A., Lamont, G.B., Van Veldhuizen, D.A.: Evolutionary Algorithms
for Solving Multi-Objective Problems. Springer, New York, USA, 2nd. edn. (2007)
7. Deb, K., Pratap, A., Agarwal, S., Meyarivan, T.: A fast and elitist multiobjec-
tive genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation
6(2), 182–197 (2002)
8. Deb, K., Tiwari, S.: Omni-optimizer: A procedure for single and multi-objective
optimization. In: Coello Coello, C.A., Hernández-Aguirre, A., Zitzler, E. (eds.)
Evolutionary Multi-Criterion Optimization - EMO 2005. LNCS, vol. 3410, pp. 47–
61. Springer (2005)
9. Emmerich, M., Beume, N., Naujoks, B.: An EMO algorithm using the hypervolume
measure as selection criterion. In: Coello Coello, C.A., Hernández-Aguirre, A.,
Zitzler, E. (eds.) Evolutionary Multi-Criterion Optimization - EMO 2005. LNCS,
vol. 3410, pp. 62–76. Springer (2005)
10. Emmerich, M.T.M., Deutz, A.H.: Test problems based on Lamé superspheres. In:
Obayashi, S., Deb, K., Poloni, C., Hiroyasu, T., Murata, T. (eds.) Evolutionary
Multi-Criterion Optimization - EMO 2007. LNCS, vol. 4403, pp. 922–936. Springer
(2007)
11. de França, F.O., Coelho, G.P., Castro, P.A.D., Von Zuben, F.J.: Conceptual and
practical aspects of the aiNet family of algorithms. Intl. Journal of Natural Com-
puting Research 1(1), 1–35 (2010)
12. Freschi, F., Repetto, M.: VIS: An artificial immune network for multi-objective
optimization. Engineering Optimization 38(8), 975–996 (2006)
13. Hart, E., Bersini, H., Santos, F.C.: How affinity influences tolerance in an idiotypic
network. Journal of Theoretical Biology 249(3), 422–436 (2007)
14. Jerne, N.K.: Towards a network theory of the immune system. Annales
d’Immunologie 125(1-2), 373–389 (1974)
15. Moore, D.S., McCabe, G.P., Craig, B.: Introduction to the Practice of Statistics.
W. H. Freeman, 6th. edn. (2007)
16. Preuss, M., Naujoks, B., Rudolph, G.: Pareto set and EMOA behavior for simple
multimodal multiobjective functions. In: Runarsson, T., Beyer, H.G., Burke, E.,
Merelo-Guervós, J., Whitley, L., Yao, X. (eds.) Parallel Problem Solving from
Nature - PPSN IX. LNCS, vol. 4193, pp. 513–522. Springer (2006)
17. Rudolph, G., Naujoks, B., Preuss, M.: Capabilities of EMOA to detect and preserve
equivalent Pareto subsets. In: Obayashi, S., Deb, K., Poloni, C., Hiroyasu, T.,
Murata, T. (eds.) Evolutionary Multi-Criterion Optimization - EMO 2007. LNCS,
vol. 4403, pp. 36–50. Springer (2007)
18. Shir, O.M., Preuss, M., Naujoks, B., Emmerich, M.: Enhancing decision space di-
versity in evolutionary multiobjective algorithms. In: Ehrgott, M., Fonseca, C.M.,
Gandibleux, X., Hao, J.K., Sevaux, M. (eds.) Evolutionary Multi-Criterion Opti-
mization - EMO 2009. LNCS, vol. 5467, pp. 95–109. Springer (2009)
19. Srinivas, N., Deb, K.: Multiobjective optimization using nondominated sorting in
genetic algorithms. Evolutionary Computation 2(3), 221–248 (1994)
20. Toffolo, A., Benini, E.: Genetic diversity as an objective in multi-objective evolu-
tionary algorithms. Evolutionary Computation 11(2), 151–167 (2003)
21. Ulrich, T., Bader, J., Zitzler, E.: Integrating decision space diversity into
hypervolume-based multiobjective search. In: Proc. of the 2010 Genetic and Evo-
lutionary Computation Conference – GECCO 2010 (2010)
22. Yildiz, A.R.: A new design optimization framework based on immune algorithm
and Taguchi’s method. Computers in Industry 60(8), 613–620 (2009)
23. Zhou, A., Zhang, Q., Jin, Y.: Approximating the set of Pareto-optimal solutions in
both the decision and objective spaces by an estimation of distribution algorithm.
IEEE Transactions on Evolutionary Computation 13(5), 1167–1189 (2009)
24. Zitzler, E., Laumanns, M., Thiele, L.: SPEA2: Improving the strength Pareto evo-
lutionary algorithm. Tech. rep., ETH, TIK, Zurich, Switzerland (2001)