[Figure: objective-space plot illustrating the trade-off between two objectives, Price and Luxury]
Multi-objective Problem (ctd.)
Mapping: R^d → F^n (decision space to objective space)
Reference: S. Dehuri, A. Ghosh, and S.-B. Cho, “Particle Swarm Optimized Polynomial Neural Network
for Classification: A Multi-objective View”, International Journal of Intelligent Defence Support Systems,
vol. 1, no. 3, pp. 225-253, 2008.
Concept of Domination
A solution x1 dominates another solution x2 if
both conditions 1 and 2 are true:
1. The solution x1 is no worse than x2 in all
objectives, i.e., fj(x1) <= fj(x2) for all j
(assuming minimization).
2. The solution x1 is strictly better than x2 in
at least one objective, i.e., fj(x1) < fj(x2) for
at least one j.
Properties of Dominance
Reflexive: the dominance relation is not reflexive, since a solution cannot
dominate itself (condition 2, strict improvement in at least one objective, fails).
Symmetric: the dominance relation is also not symmetric: if x1 dominates x2,
then x2 cannot dominate x1.
Transitive: the dominance relation is transitive: if x1 dominates x2 and x2
dominates x3, then x1 dominates x3.
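The two dominance conditions translate directly into code. A minimal Python sketch, assuming minimization (the function name and the sample vectors are illustrative choices, not from the slides):

```python
def dominates(f_x1, f_x2):
    """True if objective vector f_x1 dominates f_x2 (minimization assumed).

    Condition 1: f_x1 is no worse than f_x2 in every objective.
    Condition 2: f_x1 is strictly better in at least one objective.
    """
    no_worse = all(a <= b for a, b in zip(f_x1, f_x2))
    strictly_better = any(a < b for a, b in zip(f_x1, f_x2))
    return no_worse and strictly_better

# Not reflexive: a solution never dominates itself (condition 2 fails).
assert not dominates((1.0, 2.0), (1.0, 2.0))
# Not symmetric: (1,2) dominates (2,3), but not the other way round.
assert dominates((1.0, 2.0), (2.0, 3.0)) and not dominates((2.0, 3.0), (1.0, 2.0))
# Transitive: (1,2) dom (2,3) and (2,3) dom (3,4) imply (1,2) dom (3,4).
assert dominates((2.0, 3.0), (3.0, 4.0)) and dominates((1.0, 2.0), (3.0, 4.0))
```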
Examples of MOP
Minimization Problem:
1. Minimize f1(x) = x1,
   f2(x) = (1 + x2) / x1
   Domain: 0.1 <= x1 <= 1, 0 <= x2 <= 5
[Figure: objective space (f1 vs. f2) for the minimization example]
Examples (ctd.)
Maximize f1(x) = x1,
 f2(x) = 1 + x2 - x1^2
 Domain: 0 <= x1 <= 1, 0 <= x2 <= 3
[Figure: objective space for the maximization example]
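A brute-force way to explore such examples is to sample the domain and keep only the non-dominated objective vectors. A Python sketch for the minimization example above (the sample count and seed are arbitrary choices):

```python
import random

def dominates(f1, f2):
    """f1 dominates f2 (minimization): no worse everywhere, better somewhere."""
    return (all(a <= b for a, b in zip(f1, f2))
            and any(a < b for a, b in zip(f1, f2)))

# Sample the minimization example: f1 = x1, f2 = (1 + x2)/x1,
# with 0.1 <= x1 <= 1 and 0 <= x2 <= 5.
rnd = random.Random(0)
objs = []
for _ in range(500):
    x1, x2 = rnd.uniform(0.1, 1.0), rnd.uniform(0.0, 5.0)
    objs.append((x1, (1 + x2) / x1))

# Keep only non-dominated points: an approximation of the Pareto front
# (the true front has x2 = 0, i.e. f2 = 1/f1).
front = [f for f in objs if not any(dominates(g, f) for g in objs if g is not f)]

# The result is non-empty and mutually non-dominated.
assert front
assert not any(dominates(a, b) for a in front for b in front if a is not b)
```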
Approaches to MOPs
1) Weighted Sum Approach
2) Lexicographic Approach
3) Pareto Approach
Weighted Sum Approach
Optimize
F(x) = Σ_{m=1}^{M} w_m · f_m(x),
where w_m ∈ [0, 1] and Σ_{m=1}^{M} w_m = 1.
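As a sketch, the weighted-sum approach can be applied to the earlier minimization example (f1 = x1, f2 = (1 + x2)/x1); a grid search is used for clarity, and the grid resolution is an arbitrary choice:

```python
def weighted_sum_optimum(w1, w2, steps=200):
    """Minimize F = w1*f1 + w2*f2 over a grid of the domain
    0.1 <= x1 <= 1, 0 <= x2 <= 5 (minimization example)."""
    best = None
    for i in range(steps + 1):
        x1 = 0.1 + (1.0 - 0.1) * i / steps
        for j in range(steps + 1):
            x2 = 5.0 * j / steps
            F = w1 * x1 + w2 * (1 + x2) / x1
            if best is None or F < best[0]:
                best = (F, x1, x2)
    return best

# Sweeping the weights traces out different Pareto-optimal solutions;
# for this problem every optimum has x2 = 0 (f2 increases with x2).
for w1 in (0.1, 0.5, 0.9):
    F, x1, x2 = weighted_sum_optimum(w1, 1 - w1)
    assert x2 == 0.0
```

Note that a weighted sum cannot reach Pareto-optimal solutions lying on non-convex parts of the front, which is one of the criticisms of the classical methods reviewed later in these slides.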
Lexicographic Approach
• In the lexicographic approach, different
priorities are assigned to different objectives,
and then the objectives are optimized in order
of their priority.
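The priority-ordered procedure can be sketched on the earlier minimization example: first minimize the higher-priority objective f1 = x1, then minimize f2 = (1 + x2)/x1 among solutions that are (near-)optimal in f1. The grid search and the tolerance `tol` are illustrative assumptions:

```python
# Grid over the domain 0.1 <= x1 <= 1, 0 <= x2 <= 5.
grid = [(0.1 + 0.9 * i / 100, 5.0 * j / 100)
        for i in range(101) for j in range(101)]
f1 = {p: p[0] for p in grid}
f2 = {p: (1 + p[1]) / p[0] for p in grid}

# Step 1: optimize the highest-priority objective.
best_f1 = min(f1.values())
# Step 2: restrict to (near-)optimal f1 solutions, then optimize f2.
tol = 1e-12
candidates = [p for p in grid if f1[p] <= best_f1 + tol]
best = min(candidates, key=lambda p: f2[p])

assert abs(best_f1 - 0.1) < 1e-12   # f1 = x1 is minimized at x1 = 0.1
assert abs(f2[best] - 10.0) < 1e-9  # then f2 = (1 + 0)/0.1 = 10
```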
Review of the Classical Methods
1. Only one Pareto optimal solution can be
expected to be found in one simulation run of
a classical algorithm.
2. Not all Pareto optimal solutions can be found
by some algorithms in non-convex MOOPs.
3. All algorithms require some problem
knowledge, such as suitable weights, epsilon,
or target values, etc.
Pareto Approach from EA Domain
VEGA (Vector Evaluated Genetic Algorithm), contributed by David Schaffer in 1984.
VOES (Vector Optimized Evolution Strategy), contributed by Frank Kursawe in 1990.
MOGA (Multi-objective GA), introduced by Fonseca and Fleming in 1993.
NSGA (Non-dominated Sorting GA), introduced by Srinivas and Deb in 1994.
NPGA (Niched-Pareto Genetic Algorithm), introduced by Horn et al. in 1994.
PPES (Predator-Prey Evolution Strategy), introduced by Laumanns et al. in 1998.
DSGA (Distributed Sharing GA), introduced by Hiroyasu et al. in 1999.
DRLA (Distributed Reinforcement Learning Approach), introduced by Mariano
and Morales in 2000.
Nash GA, introduced by Sefrioui and Periaux in 2000, motivated by a
game-theoretic approach.
REMOEA (Rudolph's Elitist MOEA), introduced by Rudolph in 2001.
NSGA-II, by Deb et al. in 2000, and so on…
Potential Research Directions
• MOEA in Data Mining [-1]
• MOEA in real time task scheduling [0]
• MOEA for Ground-water Contamination [1]
• MOEA for Land-use Management [2]
More about MOGA
Please Visit:
KANGAL-Kanpur Genetic Algorithm Laboratory
(Prof. Kalyanmoy Deb)
CINVESTAV, Mexico (Prof. Carlos A. Coello Coello)
Particle Swarm Optimization
• A new Paradigm of Swarm Intelligence
– What is Swarm Intelligence (SI)?
– Examples from nature
– Origins and Inspirations of SI
What is a Swarm?
• Collection of interacting agents (Soft/Hardware).
– Agents (Soft/Hardware):
• Individuals that belong to a group (but are not necessarily
identical).
• They contribute to and benefit from the group.
• They can recognize, communicate, and/or interact with each
other.
• The instinctive perception of swarms is a group of
agents in motion – but that does not always have to
be the case.
• A swarm is better understood if thought of as
agents exhibiting a collective behavior.
Example of Swarms in Nature
• Classic Example: Swarm of Wasps/Bees
• Can be extended to other similar systems:
– Ant colony
• Agents: ants
– Flock of birds
• Agents: birds
– Traffic
• Agents: cars
– Crowd
• Agents: humans
– Immune system
• Agents: cells and molecules
Beginnings of Swarm Intelligence
[Figure: particle update, combining the current velocity v with attraction toward gBest]
Pseudo code
Initialize;
while (not terminated)
{   t = t + 1
    for i = 1:N                      // for each particle
    {
        Vi(t) = Vi(t-1) + c1*rand()*(Pi - Xi(t-1))
                        + c2*rand()*(Pg - Xi(t-1))
        Xi(t) = Xi(t-1) + Vi(t)
        Fitness_i(t) = f(Xi(t))
        if needed, update Pi and Pg
    } // end for i
}   // end while
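The pseudocode can be turned into a runnable sketch. This Python version follows the update rule above but adds an inertia weight w (Shi and Eberhart's later refinement, not shown in the slide) because the raw update tends to diverge; the sphere test function and all parameter values are illustrative assumptions:

```python
import random

def pso(f, dim, n_particles=30, iters=300, w=0.729, c1=1.49445, c2=1.49445,
        lo=-5.0, hi=5.0, seed=1):
    """Global-best PSO (minimization), following the pseudocode above."""
    rnd = random.Random(seed)
    X = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                       # personal bests Pi
    pbest = [f(x) for x in X]
    g = min(range(n_particles), key=pbest.__getitem__)
    G, gbest = P[g][:], pbest[g]                # global best Pg

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Vi(t) = w*Vi(t-1) + c1*rand()*(Pi - Xi) + c2*rand()*(Pg - Xi)
                V[i][d] = (w * V[i][d]
                           + c1 * rnd.random() * (P[i][d] - X[i][d])
                           + c2 * rnd.random() * (G[d] - X[i][d]))
                X[i][d] += V[i][d]              # Xi(t) = Xi(t-1) + Vi(t)
            fit = f(X[i])
            if fit < pbest[i]:                  # update Pi if needed
                P[i], pbest[i] = X[i][:], fit
                if fit < gbest:                 # update Pg if needed
                    G, gbest = X[i][:], fit
    return G, gbest

# Example: minimize the sphere function, optimum 0 at the origin.
best_x, best_f = pso(lambda x: sum(v * v for v in x), dim=2)
assert best_f < 1e-2
```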
PSO vs. GA
• Similarity
– Both algorithms start with a randomly generated population.
– Both use fitness values to evaluate the population.
– Both update the population and search for the optimum with randomized
techniques.
– Neither system guarantees success.
• Dissimilarity
– However, unlike GA, PSO has no evolution operators such as crossover
and mutation.
– In PSO, the potential solutions, called particles, fly through the problem
space by following the current optimum particles.
– Particles update themselves with the internal velocity.
– They also have memory, which is important to the algorithm.
• Advantages
– PSO is easy to implement, and there are few parameters to adjust.
– Compared with GA, all the particles tend to converge to the best solution
quickly, even in the local version, in most cases.
Our Contribution towards PSO
[1]Mishra, B.B., and Dehuri, S., “A Novel Stranger Sociometry
Particle Swarm Optimization (S2PSO)”, ICFAI Journal of
Computer Science, vol. 1, no. 1, 2007.
[2]Dehuri, S., “An Empirical Study of Particle Swarm
Optimization for Cluster Analysis”, ICFAI Journal of
Information Technology, 2007.
[3] Dehuri, S., Ghosh, A., and Mall, R., “Particles with Age for
Data Clustering”, Proceedings of the International Conference on
Information Technology, Dec. 18-21, Bhubaneswar, 2006.
[4] Dehuri, S., and Rath, B. K., “gbest Multi-swarm for Multi-
objective Rule Mining”, Proceedings of the National Conference on
Advance Computing, March 22-23, Tezpur University, 2007.
PSO for MOP
Three main issues to be considered when extending PSO to multi-
objective optimization are:
1. How to select the gbest (leader) when there is no single best solution
but a set of non-dominated ones.
2. How to retain the non-dominated solutions found during the search
(e.g., in an external archive).
3. How to maintain diversity so the swarm does not collapse onto a
single region of the Pareto front.
[Figure: growth of GA, PSO and ACO for MOP, shown as number of publications per year, 1999-2008]
Algorithm: MOPSO
• INITIALIZE the Swarm.
• EVALUATE the fitness of each particle of the swarm.
• EX_ARCHIVE = SELECT the non-dominated solutions from the Swarm.
• t = 0.
• REPEAT
•   FOR each particle
•     SELECT the gbest
•     UPDATE the velocity
•     UPDATE the position
•     MUTATION /* optional */
•     EVALUATE the particle
•     UPDATE the pbest
•   END FOR
•   UPDATE the EX_ARCHIVE with the gbests.
•   t = t + 1
• UNTIL (t > MAXIMUM_ITERATIONS)
• Report the results in the EX_ARCHIVE.
Dehuri, S., and Cho, S.-B., “Multi-criterion Pareto based particle swarm optimized polynomial
neural network for classification: A review and state-of-the-art”, Computer Science Review,
Elsevier, vol. 3, no. 1, pp. 19-40, 2009.
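The EX_ARCHIVE update step above amounts to a non-dominated insertion: a candidate enters the archive only if no archive member dominates it, and it evicts any members it dominates. A minimal Python sketch (minimization; the function names are assumptions, not from the slides):

```python
def dominates(f1, f2):
    """f1 dominates f2 (minimization)."""
    return (all(a <= b for a, b in zip(f1, f2))
            and any(a < b for a, b in zip(f1, f2)))

def update_archive(archive, candidate):
    """Insert a candidate objective vector into the external archive.

    Keeps the archive mutually non-dominated: a dominated candidate is
    discarded, and an accepted candidate removes everything it dominates.
    """
    if any(dominates(a, candidate) for a in archive):
        return archive                      # candidate is dominated: discard
    return [a for a in archive if not dominates(candidate, a)] + [candidate]

archive = []
for f in [(3, 3), (1, 4), (2, 2), (5, 5), (4, 1)]:
    archive = update_archive(archive, f)
# (3,3) is evicted by (2,2); (5,5) is rejected; the rest survive.
assert sorted(archive) == [(1, 4), (2, 2), (4, 1)]
```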
A Few Contributions…
• Parsopoulos and Vrahatis [a]
• Baumgartner et al. [b]
• Hu and Eberhart [c]
• Parsopoulos et al. [d]
• Chow and Tsui [e]
• Moore and Chapman [f]
• Ray and Liew [g]
• Fieldsend and Singh [h]
• Coello et al. [i]
and so on…
References
[-1] A. Ghosh, S. Dehuri, and S. Ghosh, Multi-objective Evolutionary
Algorithms for KDD, Springer-Verlag, 2008.
[0] J. Oh and C. Wu, “Genetic Algorithms based real-time task scheduling
with multiple goals”, The Journal of Systems and Software, vol. 71, pp.
245-258, 2004.
[1] R. Farmani, et al., “An Evolutionary Bayesian Belief Network
Methodology for Optimum Management of Groundwater
Contamination”, Environmental Modelling and Software, vol. 24,
pp. 303-310, 2009.
[2] D. Dutta, et al., “Multi-objective Evolutionary Algorithms for Land-
Use Management Problem”, International Journal of Computational
Intelligence Research, vol. 3, no. 4, pp. 371-384, 2007.
[3] V. Chankong and Y. Y. Haimes, Multi-objective Decision Making
Theory and Methodology, New York: North-Holland, 1983.
References
[a]Konstantinos E. Parsopoulos and Michael N. Vrahatis. Particle swarm
optimization method in multiobjective problems. In Proceedings of the 2002
ACM Symposium on Applied Computing (SAC’2002), pages 603–607,
Madrid, Spain, 2002. ACM Press.
[b]U. Baumgartner, Ch. Magele, and W. Renhart. Pareto optimality and particle
swarm optimization. IEEE Transactions on Magnetics, 40(2):1172–1175,
March 2004.
[c]Xiaohui Hu and Russell Eberhart. Multiobjective optimization using
dynamic neighborhood particle swarm optimization. In Congress on
Evolutionary Computation (CEC’2002), volume 2, pages 1677–1681,
Piscataway, New Jersey, May 2002. IEEE Service Center.
[d]Konstantinos E. Parsopoulos, Dimitris K. Tasoulis, and Michael N. Vrahatis.
Multiobjective optimization using parallel vector evaluated particle swarm
optimization. In Proceedings of the IASTED International Conference on
Artificial Intelligence and Applications (AIA 2004), volume 2, pages 823–
828, Innsbruck, Austria, February 2004. ACTA Press.
References
[e]Chi-kin Chow and Hung-tat Tsui. Autonomous agent response learning by a
multi-species particle swarm optimization. In Congress on Evolutionary
Computation (CEC’2004), volume 1, pages 778–785, Portland, Oregon,
USA, June 2004. IEEE Service Center.
[f]Jacqueline Moore and Richard Chapman. Application of particle swarm to
multiobjective optimization. Department of Computer Science and
Software Engineering, Auburn University, 1999.
[g]Tapabrata Ray and K.M. Liew. A swarm metaphor for multiobjective design
optimization. Engineering Optimization, 34(2):141–153, March 2002.
[h]Jonathan E. Fieldsend and Sameer Singh. A multiobjective algorithm based
upon particle swarm optimisation, an efficient data structure and turbulence.
In Proceedings of the 2002 U.K. Workshop on Computational Intelligence,
pages 37–44, Birmingham, UK, September 2002.
[i]Carlos A. Coello Coello and Maximino Salazar Lechuga. MOPSO: A
proposal for multiple objective particle swarm optimization. In Congress on
Evolutionary Computation (CEC’2002), volume 2, pages 1051–1056,
Piscataway, New Jersey, May 2002. IEEE Service Center.