
Parallelization of Particle Swarm Optimization using Message Passing Interfaces Applied to Jamming Resource Allocation Problems
Yee Ming Chen and Jhang-Guo Chen

Abstract: Time requirements for solving complex large-scale engineering problems can be substantially reduced by using parallel computation. Motivated by a computationally demanding weapon-target allocation (WTA) problem, we introduce a parallel implementation of a stochastic population-based global optimizer, the particle swarm optimization (PSO) algorithm, as a means of obtaining increased computational throughput. The parallelization has been carried out on one of the simplest and most flexible optimization algorithms, namely PSO with digital pheromones. PSO is a stochastic population-based global optimizer: the initial population may be given random values, and convergence is achieved over subsequent iterations. We propose the use of message passing interfaces (MPI) for the parallelization of the synchronous version of PSO. In this approach, the initial population is divided among the processors chosen at run time. The parallelization of PSO with digital pheromones is detailed, and its performance and characteristics are demonstrated on the jamming resource allocation problem as an example.


Index Terms: Particle swarm algorithm, Message passing interfaces, Optimal allocation



1 INTRODUCTION
Modern battle forces generally include many different platforms, e.g., ships, planes, helicopters, etc. In military operations research, an important problem is the so-called weapon-target allocation (WTA) problem, i.e., allocating defensive weapon resources to threatening targets in order to protect valuable defended assets. Radar jamming resources are the most important EW (Electronic Warfare) resources, so research on the radar jamming allocation problem is of great significance [1]. The optimal allocation of radar jamming resources distributed over these platforms is therefore an important research issue [2]. Moreover, parallel computing across multiple platforms is under development to help the Electronic Warfare Commander (EWC) utilize these resources effectively [3].

The radar jamming allocation problem is an optimization problem of high military relevance. The need for obtaining solutions in real time is often overlooked in the existing literature [4], yet a radar jamming allocation algorithm must meet real-time requirements. The available centralized algorithm carries a high computational cost and cannot guarantee good real-time performance. Moreover, modern radar jammers with multi-target jamming capability are distributed over different platforms, so the algorithm should directly handle cases in which one platform jams multiple radars and multiple platforms jam one radar. To take advantage of these advances, communication layers such as the Message Passing Interface (MPI) have been used to develop parallel optimization algorithms, the most popular being gradient-based, genetic (GA), and particle swarm optimization (PSO) algorithms. In this study, we present a parallel PSO algorithm in which, with the aid of digital pheromones, members of a swarm can better communicate with each other to improve search performance. To explore this idea, a swarm is deployed in the design space across different processors. Through an additional processor, each part of the swarm can communicate with the others. While digital pheromones aid communication within a swarm, the developed parallelization model facilitates communication between multiple swarms, resulting in improved search accuracy and efficiency. The development of this method and results from solving a large-scale radar jamming allocation problem are presented.
2 RELATED WORK
Several research works have been carried out on the WTA problem. Traditional methods such as branch and


- Corresponding author: Yee Ming Chen is with the Department of Industrial Engineering and Management, Yuan Ze University, Taoyuan, Taiwan.




bound, divide and conquer, and dynamic programming give the global optimum, but they are often too time consuming or do not apply to typical real-world problems. Traditional optimization methods are deterministic, fast, and give exact answers, but they often tend to get stuck in local optima. Moreover, for NP-hard problems the exponential time complexity of traditional search algorithms cannot be reduced to polynomial. Consequently, another approach is needed when the traditional methods cannot be applied. Modern heuristics help in such situations: they are general-purpose optimization algorithms whose efficiency and applicability are not tied to any specific problem domain. Available heuristics include Simulated Annealing algorithms [5], Genetic Algorithms [6], and Ant Colony algorithms [7]. A hybrid strategy has also been proposed that uses a Hill Climbing algorithm as a local search method along with Particle Swarm Optimization; Hill Climbing heuristics, however, have the problem of getting trapped in local optima. Common to the papers on ACO, GA, and PSO described above is that they combine the metaheuristics with local search algorithms, and that real-time requirements are not taken into consideration (in many cases, the algorithms are allowed to run for hours). What distinguishes this paper from earlier research is therefore the detailed study of how real-time requirements affect the performance of the implemented algorithms, and the fact that no additional local search is applied. In the work presented here, the algorithm components are chosen so as to minimize the number of computations needed in each iteration, due to the tight time constraints.


3. PROBLEM DEFINITION

Assume there is a naval battle force made up of m platforms. Each platform has a radar jammer with multi-target jamming capability, that is, the ith platform can jam n radars simultaneously with its radar jammer. We denote by $A_{(i,j)}$ the event that target $j$ is not jammed by jammer $i$, and by $p(A_{(i,j)})$ its probability. Write

$$Q_{(i,j)} = 1 - p(A_{(i,j)}) \quad \text{(target } j \text{ is jammed by jammer } i\text{)}.$$

To label any particular allocation, we introduce binary decision variables and a corresponding action space:

$$x_{ij} = \begin{cases} 1, & \text{jammer } i \text{ is assigned to target } j \\ 0, & \text{otherwise.} \end{cases}$$

Finally, to construct an objective function, we consider the so-called leakage probability [7] of target $j$ under a given radar jammer assignment,

$$\prod_{i=1}^{m} \left(1 - Q_{(i,j)}\right)^{x_{ij}},$$

and, in other words, the overall jamming blanket ratio for target $j$ is

$$1 - \prod_{i=1}^{m} \left(1 - Q_{(i,j)}\right)^{x_{ij}}.$$

Now the constrained optimization problem reads

$$\max \; E = \sum_{j=1}^{n} \left[ 1 - \prod_{i=1}^{m} \left(1 - Q_{(i,j)}\right)^{x_{ij}} \right] \tag{1}$$

$$\text{s.t.} \quad \sum_{i=1}^{m} x_{ij} \ge 1, \quad j = 1, 2, \ldots, n \tag{2}$$

$$\sum_{j=1}^{n} x_{ij} \le C_i, \quad i = 1, 2, \ldots, m \tag{3}$$

$$x_{ij} \in \{0, 1\}, \tag{4}$$

where $C_i$ is the threshold (i.e., loading) for each radar jammer $i$.
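To make the formulation concrete, the sketch below evaluates the objective of Eq. (1) and checks constraints (2)-(4) for a candidate assignment matrix. It is a minimal NumPy illustration based on the reconstruction above; the function and variable names are illustrative assumptions, not part of the original paper.

```python
import numpy as np

def jamming_objective(x, Q):
    """Eq. (1): x[i, j] = 1 if jammer i is assigned to radar j,
    Q[i, j] = probability that jammer i jams radar j."""
    leakage = np.prod((1.0 - Q) ** x, axis=0)  # prod_i (1 - Q_ij)^x_ij per target j
    return float(np.sum(1.0 - leakage))        # sum_j [1 - leakage_j]

def is_feasible(x, C):
    """Constraints (2)-(4): every radar jammed by at least one jammer,
    jammer i assigned to at most C[i] radars, and x binary."""
    is_binary = np.all((x == 0) | (x == 1))    # Eq. (4)
    covered = np.all(x.sum(axis=0) >= 1)       # Eq. (2)
    within_load = np.all(x.sum(axis=1) <= C)   # Eq. (3)
    return bool(is_binary and covered and within_load)

# Small example with m = 3 jammers and n = 4 targets (arbitrary numbers).
Q = np.array([[0.6, 0.4, 0.7, 0.5],
              [0.5, 0.8, 0.3, 0.6],
              [0.7, 0.5, 0.6, 0.4]])
x = np.array([[1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 1, 0, 1]])
print(jamming_objective(x, Q), is_feasible(x, C=np.array([2, 2, 3])))
```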

4. PARALLELIZATION METHOD IN PSO

PSO is based on a simplified model of the social behavior exhibited by swarming insects, birds, and fish [8]. In this analogy, a swarm member (particle) uses information from its own past behavior (its best previous location, pBest) and from the behavior of the rest of the swarm (the overall best particle, gBest) to determine suitable food or nesting locations (local and global optima). The algorithm iteratively updates the search direction of the swarm as it propagates towards the optimum. Equations (5) and (6) define the mathematical simulation of this behavior.


$$v_{Iter+1,i}[\,] = W_{Iter}\, v_{Iter,i}[\,] + C_1\, \mathrm{rand}_{p,i}^{\,Iter+1}(\,)\left(pBest_i[\,] - X_i[\,]\right) + C_2\, \mathrm{rand}_{g,i}^{\,Iter+1}(\,)\left(gBest_i[\,] - X_i[\,]\right) + C_3\, \mathrm{rand}_{T,i}^{\,Iter+1}(\,)\left(P_i[\,] - X_i[\,]\right) \tag{5}$$

$$X_{Iter+1,i}[\,] = X_{Iter,i}[\,] + v_{Iter+1,i}[\,] \tag{6}$$

Here $C_3$ is a user-defined confidence parameter for the pheromone component of the velocity vector, similar to $C_1$ and $C_2$ in a basic PSO.
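As a reading aid, the following minimal sketch implements the update of Eqs. (5) and (6) for one particle. The inertia weight and confidence parameter values shown are illustrative defaults, not values from the paper; with C3 = 0 the update reduces to the basic PSO rule.

```python
import numpy as np

rng = np.random.default_rng(1)

def update_particle(x, v, p_best, g_best, p_pheromone,
                    w=0.7, c1=1.5, c2=1.5, c3=1.0):
    """Eq. (5): velocity with pBest, gBest, and pheromone attraction terms.
    Eq. (6): position update."""
    r1, r2, r3 = rng.random(3)                # rand_p, rand_g, rand_T in [0, 1)
    v_new = (w * v
             + c1 * r1 * (p_best - x)
             + c2 * r2 * (g_best - x)
             + c3 * r3 * (p_pheromone - x))   # Eq. (5)
    return x + v_new, v_new                   # Eq. (6)
```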
In a traditional PSO algorithm, the swarm movement obtains design space information from only two components, pBest and gBest [9]. When coupled with an additional pheromone component, the swarm is essentially presented with more information for design space exploration and has the potential to reach the global optimum faster. One thought is to increase the overall swarm size. However, this causes increased swarm activity that is counterproductive to converging on the optimum. On the other hand, multiple swarms of manageable size can each autonomously move toward the optimum, potentially increasing swarm efficiency. This, however, comes with a marked increase in computational costs.

Parallelization offers a promising alternative to address this issue. Communication between the swarms on multiple processors substantially improves the chances of finding an optimum. Each processor (a) performs random swarm generation and (b) fitness value evaluation and digital pheromone update, as follows:


$$P_{Iter,ij} = P_{Iter,ij} + E_{ij} \tag{7}$$

$$P_{Iter+1,ij} = P_{Iter,ij} \cdot (\text{decay factor}), \quad \text{decay factor} \in (0,1) \tag{8a}$$

$$P_{Iter+1,ij} = \begin{cases} P_{Iter+1,ij}, & P_{Iter+1,ij} \le 1 \\ 1, & P_{Iter+1,ij} > 1 \end{cases} \tag{8b}$$
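The sketch below applies Eqs. (7), (8a), and (8b) to an array of pheromone levels: deposit the released pheromones, decay by a factor in (0, 1), and cap the result at 1. The vectorized form and the default decay factor are assumptions for illustration, not values from the paper.

```python
import numpy as np

def update_pheromones(P, E, decay_factor=0.95):
    """P and E are arrays of pheromone levels and released pheromones."""
    P = P + E                     # Eq. (7): deposit
    P = P * decay_factor          # Eq. (8a): decay, factor in (0, 1)
    return np.minimum(P, 1.0)     # Eq. (8b): cap levels at 1
```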

Each processor then performs (c) velocity vector calculation, (d) calculation and storage of pBest[] and gBest[], (e) particle position update X[], and (f) digital pheromone calculation; (g) a convergence check is performed after the digital pheromone and gBest[] information have been broadcast.

The pheromone processor gathers a list of the pheromones released by all participating optimization processors. These are then merged and, for iteration numbers greater than one, decayed appropriately. Additionally, the pheromone processor ranks the gBest[] candidates, gathering all gBest[] values to find the actual gBest[] of all the swarms.

Since the final pheromone list and the actual gBest[] information take up only a tiny amount of memory, their broadcast to all optimization processors does not use a significant amount of network bandwidth. A parallelization strategy has thus been developed in which a swarm is deployed across multiple processors. Figure 1 shows a flow chart of the developed method. The number of optimization processors is a function of the desired number of swarm members on each processor.

Figure 1 PSO with digital pheromone parallel implementation
flowchart


5. PARALLEL PSO USING MESSAGE PASSING INTERFACES
Running PSO in a parallel processing mode can easily be implemented by using the language-independent communication protocol MPI. Libraries containing useful functions are available for different programming languages such as FORTRAN, C, and C++ [10]. In this research, in order to parallelize PSO, the most basic master-slave mode is adopted, where one processor is allocated only for communication with the slave processors (Figure 2).


Figure 2 Parallel PSO in master-slave mode

In the present work the particles are distributed over the given number of processors, ensuring that each processor carries an almost equal load, so that the results from the processors arrive at almost the same time; this also saves time overall. Once the processors have calculated the best results among their respective particles, the final optimum result of a particular iteration is obtained and reflected back to all the processors; both steps are carried out by a single MPI function [11], MPI_Allreduce(). Also, if a value from one processor (called the root processor) needs to be distributed among all processors, the MPI_Scatter() function may be used.
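As an illustration of this exchange, the fragment below uses mpi4py (a Python MPI binding chosen here only for brevity; the paper's implementation runs on MPICH2) to combine the per-processor best fitness values with an allreduce and broadcast the winning gBest position. The array sizes and names such as local_positions are assumptions, and the fitness values are placeholders for the Eq. (1) objective.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each process owns an equal share of the swarm (illustrative sizes).
local_positions = np.random.rand(25, 10)   # 25 particles, 10 design variables
local_fitness = np.random.rand(25)         # placeholder for the Eq. (1) objective

# Best particle on this process.
i_best = int(np.argmax(local_fitness))
local_best = float(local_fitness[i_best])

# MPI_Allreduce: every process learns the best fitness over all processes.
global_best = comm.allreduce(local_best, op=MPI.MAX)

# The lowest-ranked process holding the global best broadcasts its position.
owner = comm.allreduce(rank if local_best == global_best else comm.Get_size(),
                       op=MPI.MIN)
gbest_position = comm.bcast(local_positions[i_best] if rank == owner else None,
                            root=owner)
```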
In this work, all computations are performed with MPICH2 (ver. 1.4.0). The cluster configuration uses an Intel(R) Core(TM)2 Quad CPU Q8400. The parallel PSO scheme is implemented in MPI processes as illustrated in Figure 3.




Figure 3 Parallel PSO scheme implementation in MPI processes


6. EXPERIMENTAL RESULTS

The parallel platform used was a master processor with an Intel(R) Core(TM)2 Quad CPU Q8400 and slave processors with Intel(R) Core(TM)2 Duo CPU E8400, communicating through the Message Passing Interface (MPI) protocol. The experiments were done for a WTA instance of size 10000 x 10000, representing the numbers of radar jammers and targets. The number of processors ranged from 1 to 8. Communication between the areas labeled K on the left side of Figure 3 was done in MPI. Each configuration was evaluated using two measures: speedup and efficiency [12].
Speedup is defined for each number of processors n as the ratio of the elapsed time when executing a program on a single processor (the single processor execution time) to the execution time when n processors are available. In the notation that we shall use throughout this paper,

$$S(n) = \frac{T_1}{T_n} \tag{9}$$

Efficiency is defined as the average utilization of the n allocated processors. Ignoring I/O, the efficiency of a single processor system is 1. Speedup in this case is of course 1. In general, the relationship between efficiency and speedup is given by

$$E(n) = \frac{S(n)}{n} \tag{10}$$
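For example, Eqs. (9) and (10) can be computed directly from measured run times; the timings below are hypothetical placeholders, not the measured results.

```python
def speedup(t_1, t_n):
    return t_1 / t_n               # Eq. (9)

def efficiency(t_1, t_n, n):
    return speedup(t_1, t_n) / n   # Eq. (10)

# Hypothetical example: 120 s on one processor, 20 s on eight processors.
print(speedup(120.0, 20.0))        # 6.0
print(efficiency(120.0, 20.0, 8))  # 0.75
```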

Figure 4 shows the efficiency charts for the experiment problem on four and eight processors.





Figure 4 Parallel efficiency measure across 4 and 8 cores

As is evident from Figure 4, the parallel efficiency values ranged from 0.63 to 0.97 across 4 and 8 processors. For this problem, this means that the communication load worsened as the number of processes increased from 1 to 8.




Figure 5 Parallel efficiency measure across 6 and 8 cores

Figure 5 shows that the parallel efficiency remained reasonably good when eight processors were used; for the same number of processors, increasing the number of cores improved the efficiency. MPI implementations allow the programmer to run an application using an arbitrary number of processes and processors; the number of processes may be less than, equal to, or greater than the number of processors. It is common to develop parallel applications with a small number of processes on a single processor, so that the application becomes more fully developed and stable.

7. CONCLUSIONS

This paper presents the development of an MPI model for parallelizing PSO. We studied the aspects of the WTA problem that affect parallel program performance, especially for MPI applications, and give some recommendations to follow in order to achieve reasonable performance. The nature of the problem is one of the most important factors that affect parallel program speedup. Since communication between processors is a major drawback of parallel programming that cannot be removed completely, a fairly large problem size has to be used in order to observe a reduction in time as the number of processors increases. This is typically the case in real-world optimization problems.




ACKNOWLEDGEMENTS

This research work was sponsored by the National Sci-
ence Council, R.O.C., under project number NSC100-
2221-E-155-071.


REFERENCES

[1] Y. Sheen, Y.-G. Chen, and X. Li, Research on optimal distribution of radar jamming resource based on zero-one programming, Acta Armamentarii, vol. 28, no. 5, pp. 528-532, 2007.
[2] T. Wang and Y. Li, Research on the optimization assignment algorithms for jamming resources in land based air defense countermeasure system, Journal of CAEIT, vol. 3, no. 5, pp. 441-445, 2008.
[3] C. F. Kuo, C. S. Shih, and T. W. Kuo, Resource allocation framework for distributed real-time end-to-end tasks, Proceedings of the 12th International Conference on Parallel and Distributed Systems, vol. 1, pp. 115-122, 2006.
[4] S. P. Lloyd and H. S. Witsenhausen, Weapon allocation is NP-complete, IEEE Summer Simulation Conference, Reno, Nevada, pp. 453-471, 1986.
[5] I. H. Osman, Metastrategy simulated annealing and tabu search algorithms for the vehicle routing problem, Annals of Operations Research, vol. 41, no. 4, pp. 421-451, 1993.
[6] A. S. Wu, S. Jin, K.-C. Lin, and G. Schiavone, Incremental genetic algorithm approach to multiprocessor scheduling, IEEE Transactions on Parallel and Distributed Systems, 2004.
[7] J. Kennedy and R. C. Eberhart, Particle swarm optimization, Proceedings of the IEEE International Conference on Neural Networks, vol. 4, pp. 1942-1948, 1995.
[8] V. Kalivarapu, J. Foo, and E. Winer, A parallel implementation of particle swarm optimization using digital pheromones, 11th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, AIAA-2006-6908-694, Portsmouth, VA, September 2006.
[9] R. Graham, Static multi-processor scheduling with ant colony optimization and local search, Master of Science thesis, University of Edinburgh, 2003.
[10] J. M. Squyres, Processes, processors, and MPI, Cluster World, MPI Mechanic, vol. 1, no. 2, pp. 8-11, January 2004.
[11] W. Gropp, MPICH2: A new start for MPI implementations, in Recent Advances in PVM and MPI: 9th European PVM/MPI Users' Group Meeting, Linz, Austria, Oct. 2002.
[12] A. Karwande, X. Yuan, and D. K. Lowenthal, CC-MPI: A compiled communication capable MPI prototype for Ethernet switched clusters, Journal of Parallel and Distributed Computing, vol. 65, no. 10, pp. 1123-1133, 2005.






Yee Ming Chen is a professor in the Department of In-
dustrial Engineering and Management at Yuan Ze Uni-
versity, where he carries out basic and applied research
in agent-based computing. His current research inter-
ests include soft computing, supply chain management,
and system diagnosis/prognosis.






Jhang-Guo Chen received the B.S. degree from the Department of Industrial Engineering and Management at Yuan Ze University, where he has worked ever since as a research assistant developing a grid computing system. Since 2010, he has also been studying for a Master's degree at Yuan Ze University. His research interests include soft computing, parallel computing, and 3D graphics rendering.
