
Applied Soft Computing 25 (2014) 530–534


Swarm intelligence based techniques for digital filter design


Archana Sarangi a, Shubhendu Kumar Sarangi a, Sasmita Kumari Padhy a, Siba Prasada Panigrahi b,∗, Bijay Ketan Panigrahi c

a ITER, SOA University, Bhubaneswar, India
b CVRCE, Bhubaneswar, India
c IIT, Delhi, India

Article info

Article history: Received 15 September 2011; Received in revised form 12 May 2013; Accepted 2 June 2013; Available online 14 June 2013.
Keywords: Digital filter design; Particle Swarm Optimization; IIR filter.

Abstract

This paper deals with the problem of digital IIR filter design. Two novel modifications are proposed to Particle Swarm Optimization and validated through a novel application to IIR filter design. The first modification is based on quantum mechanics and is shown to yield better performance. The second modification accounts for the time-dependent character of the constriction factor. Extensive simulation results validate the superior performance of the proposed algorithms.

© 2013 Elsevier B.V. All rights reserved.

Introduction

Studies by Fang et al. [1] confirm the findings of the researchers in [2–13]. This work can be treated as an extension of the work done in [1]. For ease of reading, we have reproduced that work as the first part of this article, which paves the way for the next part.
In the design of IIR filters, the error surface is in most cases non-quadratic and multi-modal [2]. On a multi-modal error surface, the global minimum can be reached only by a global optimization technique. The bio-inspired Genetic Algorithm (GA) [2] and its hybrids [3–5] have been used for digital IIR filter design. The disadvantages of GA (i.e., lack of good local search ability and premature convergence) invited Simulated Annealing (SA) [6] for use in digital IIR filter design. However, standard SA is very slow and requires repeated evaluations of the cost function to converge to the global minimum. Hence, Differential Evolution (DE) [7] also finds scope in digital IIR filter design, but DE is sensitive to the choice of its control parameters.

Particle Swarm Optimization (PSO) [8,9] has also been used for digital IIR filter design. PSO is easy to implement and outperforms GA in many applications. But, as the particles in PSO search only a finite sampling space, PSO may easily be trapped in local optima. One useful modification of PSO is Quantum-behaved Particle Swarm Optimization (QPSO) [10–13], which combines the merits of quantum mechanics and PSO.

∗ Corresponding author. Tel.: +91 9437262878.
E-mail address: siba panigrahy15@rediffmail.com (S.P. Panigrahi).
1568-4946/$ – see front matter © 2013 Elsevier B.V. All rights reserved.
http://dx.doi.org/10.1016/j.asoc.2013.06.001

The problem of digital IIR filter design is a minimization problem that can be solved by PSO and QPSO. The filter parameters are real-coded as a particle, and the swarm then represents all the candidate solutions.

In [1], Fang et al. proposed a modification to the standard version of QPSO. This modification, QPSO-M, introduces a random vector into QPSO in order to enhance its randomness and global search ability. In this paper we propose a modification to PSO in which the constriction factor is allowed to change at each iteration with a linearly decreasing K. The importance and utility of the proposed modifications are validated by a novel application to IIR filter design. Three filter design examples are tested and compared with PSO and QPSO.

The rest of the paper is organized as follows: the filter design problem is outlined in section Problem statement. PSO, QPSO and the proposed modifications are discussed in section Methodologies, followed by simulation examples in section Simulation results. Finally, the paper is concluded in section Conclusion.
Problem statement

In this paper, we use the same system structure as that used in [1], reproduced in this section. The IIR filter model is taken to be the same as the autoregressive moving average (ARMA) model, defined by the difference equation [3]:

$$y(k) + \sum_{i=1}^{M} b_i\, y(k-i) = \sum_{i=0}^{L} a_i\, x(k-i) \qquad (1)$$

where x(k) represents the input to the filter and y(k) the output from the filter. M (≥ L) is the order of the filter, and $a_i$ and $b_i$ are the adjustable coefficients. The transfer function of this model takes the following general form:

$$H(z) = \frac{A(z)}{1 + B(z)} = \frac{\sum_{i=0}^{L} a_i z^{-i}}{1 + \sum_{i=1}^{M} b_i z^{-i}} \qquad (2)$$
The problem of IIR filter design can be considered an optimization problem in which the mean square error (MSE) is taken as the cost function. Hence, the cost function is:

$$J(\theta) = E[e^2(k)] = E\left[\{d(k) - y(k)\}^2\right] \qquad (3)$$

Here, d(k) is the desired response of the filter and e(k) = d(k) − y(k) is the error signal. Using the two coefficient sets $\{a_i\}_{i=0}^{L}$ and $\{b_i\}_{i=1}^{M}$, we form the composite weight vector of the filter, defined as:
$$\theta = [a_0, \ldots, a_L, b_1, \ldots, b_M] \qquad (4)$$

The objective is to minimize the MSE (3) through adjustment of $\theta$. Because of the difficulty of realizing the ensemble operation, the cost function (3) can be replaced by the time-averaged cost function:

$$J(\theta) = \frac{1}{N}\sum_{k=1}^{N} e^2(k) \qquad (5)$$

Here, N represents the number of samples used.
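The cost (5) can be evaluated for a candidate coefficient vector θ by running the difference equation (1) and averaging the squared error. A minimal Python sketch of this evaluation (function names are ours, not from the paper):

```python
import numpy as np

def iir_output(theta, x, L, M):
    """Run the difference equation (1):
    y(k) = sum_i a_i x(k-i) - sum_i b_i y(k-i),
    where theta = [a_0..a_L, b_1..b_M] is the weight vector of Eq. (4)."""
    a, b = theta[:L + 1], theta[L + 1:]
    y = np.zeros(len(x))
    for k in range(len(x)):
        acc = sum(a[i] * x[k - i] for i in range(L + 1) if k - i >= 0)
        acc -= sum(b[i - 1] * y[k - i] for i in range(1, M + 1) if k - i >= 0)
        y[k] = acc
    return y

def cost(theta, x, d, L, M):
    """Time-averaged MSE of Eq. (5): J = (1/N) * sum_k e^2(k)."""
    e = d - iir_output(theta, x, L, M)
    return float(np.mean(e ** 2))
```

Any of the swarm algorithms below can then treat `cost` as the fitness function of a particle.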

Methodologies

This section discusses the optimization algorithms used in this paper for the problem of digital IIR filter design.

QPSO
PSO [14,15] is a global search technique based on the social behavior of animals. In PSO, each particle is a potential solution to a problem. In M-dimensional space, the position of particle i is defined as $X_i = (x_{i1}, \ldots, x_{iM})$ and its velocity as $V_i = (v_{i1}, \ldots, v_{iM})$. Particles remember their own previous best positions. The velocity and position update rules for particle i at the (k+1)th iteration are:

$$v_{ij}^{k+1} = w\,v_{ij}^k + c_1 r_{1j}^k (P_{ij}^k - x_{ij}^k) + c_2 r_{2j}^k (P_{gj}^k - x_{ij}^k), \quad r_{1j}^k, r_{2j}^k \sim U(0,1)$$
$$x_{ij}^{k+1} = x_{ij}^k + v_{ij}^{k+1} \qquad (6)$$

Here, $c_1$ (cognitive) and $c_2$ (social) are two positive constants that control the relative proportions of cognition and social interaction. The vector $P_i = (p_{i1}, \ldots, p_{iM})$ is the previous best position (the position giving the best fitness value) of particle i, called pbest, and the vector $P_g = (p_{g1}, \ldots, p_{gM})$ is the best position discovered by the entire population, called gbest. Parameter w is the inertia weight; the optimal strategy to control it is to set it initially to 0.9 and reduce it linearly to 0.4 [14].
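As an illustration of the update rule (6) with the linearly decreasing inertia weight, the following is a minimal Python sketch (our own illustration, not the authors' implementation; the sphere function stands in for the filter cost):

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(cost, dim, n_particles=20, iters=200, c1=2.0, c2=2.0):
    """Basic PSO, Eq. (6), with inertia weight w decreasing
    linearly from 0.9 to 0.4 over the iterations [14]."""
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))   # positions
    v = np.zeros((n_particles, dim))                 # velocities
    pbest = x.copy()
    pbest_f = np.array([cost(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for k in range(iters):
        w = 0.9 - (0.9 - 0.4) * k / (iters - 1)      # inertia schedule
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Eq. (6)
        x = x + v
        f = np.array([cost(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, float(pbest_f.min())
```

In the filter design problem, `cost` would be the time-averaged MSE of Eq. (5) evaluated at a candidate coefficient vector.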
QPSO is derived from the fundamental theory of particle swarms and has the merits of quantum mechanics. In the QPSO algorithm with M particles in D-dimensional space, the position of particle i at the (k+1)th iteration is updated by:

$$x_{ij}^{k+1} = p_{ij}^k \pm \alpha\,\bigl|MP_j^k - x_{ij}^k\bigr|\,\ln\!\left(\frac{1}{u_{ij}^k}\right), \quad u_{ij}^k \sim U(0,1) \qquad (7)$$

$$MP^k = (MP_1^k, \ldots, MP_D^k) = \left(\frac{1}{M}\sum_{i=1}^{M} P_{i1}^k, \ldots, \frac{1}{M}\sum_{i=1}^{M} P_{iD}^k\right) \qquad (8)$$

$$p_{ij}^k = \varphi_{ij}^k\, P_{ij}^k + (1 - \varphi_{ij}^k)\, P_{gj}^k, \quad \varphi_{ij}^k \sim U(0,1) \qquad (9)$$

Here, the parameter $\alpha$ is called the contraction–expansion (CE) coefficient. $P_i$ and $P_g$ have the same meanings as in PSO. MP is called the Mean Best Position, defined as the mean of the pbest positions of all particles.

The search space of QPSO is wider than that of PSO because of the introduction of the exponential distribution of positions. Also, unlike PSO, where each particle converges to the global best position independently, an improvement is introduced by inserting the Mean Best Position into QPSO. Here, a particle cannot converge to the global best position without considering its colleagues, because the distance between the current position and MP determines the position distribution of the particle for the next iteration.
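One iteration of the QPSO update (7)–(9) can be sketched as follows (an illustrative reading of the equations, not the authors' code; `alpha` is the CE coefficient):

```python
import numpy as np

rng = np.random.default_rng(1)

def qpso_step(x, pbest, gbest, alpha):
    """One QPSO position update following Eqs. (7)-(9).
    x, pbest: (n_particles, dim) arrays; gbest: (dim,) array."""
    n, dim = x.shape
    mp = pbest.mean(axis=0)                     # Mean Best Position, Eq. (8)
    phi = rng.random((n, dim))
    p = phi * pbest + (1.0 - phi) * gbest       # local attractor, Eq. (9)
    u = rng.random((n, dim))
    sign = np.where(rng.random((n, dim)) < 0.5, 1.0, -1.0)  # random +/- in (7)
    return p + sign * alpha * np.abs(mp - x) * np.log(1.0 / u)  # Eq. (7)
```

Fitness evaluation, pbest/gbest bookkeeping and the CE schedule (α decreasing from 1 to 0.5, per Table 1) wrap around this step exactly as in the PSO loop.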
Modified QPSO

This section reproduces the modification proposed in [1]. Though QPSO has better global search ability than PSO, it may still fall into premature convergence [16], as do other evolutionary algorithms (such as GA and PSO) in multi-modal optimization. This results in a great loss of performance and yields sub-optimal solutions. In QPSO, in a larger search space, diversity loss of the whole population is also inevitable due to the collectiveness. From Eq. (7), it is evident that when $|MP_j^k - x_{ij}^k|$ is very small, the search space becomes small and $x_{ij}$ cannot move to a new position in the subsequent iterations. The explorative power of the particles is then lost, and the evolution process may stagnate. This may even occur at an early stage, when $|MP_j^k - x_{ij}^k|$ is zero. Also, in the latter stage, the loss of diversity in $|MP_j^k - x_{ij}^k|$ is a problem to be considered. To prevent this undesirable behavior, we construct a random vector that replaces $|MP_j^k - x_{ij}^k|$ with a certain probability CR, according to the difference between two positional coordinates that are re-randomized in the problem space. The position updating equation then becomes:

$$x_{ij}^{k+1} = p_{ij}^k \pm \alpha\,\bigl|mu_{x_k} - mu_{x_s}\bigr|\,\ln\!\left(\frac{1}{u_{ij}^k}\right) \qquad (10)$$

where $mu_{x_k}$ and $mu_{x_s}$ are two random particles generated in the problem space. Because of the introduction of the random vector, the particles may leave the current position and be located in a new search domain.

The modified version of QPSO is termed here QPSO-M. The procedure used in this paper is outlined as:

Step 1: Initialize the particles with random positions and set the control parameter CR.
Step 2: For k = 1 to the maximum iteration, execute the following steps.
Step 3: Calculate the mean best position MP among the particles.
Step 4: For each particle, compute its fitness value $f\{x_i(k)\}$. If $f\{x_i(k)\} < f\{P_i(k)\}$, then $P_i(k) = x_i(k)$.
Step 5: Select the gbest position $P_g(k)$ among the particles.
Step 6: Generate a random number RN in the range [0, 1].
Step 7: If RN < CR, update the position according to (7)–(9); else, according to (8) and (10).
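The CR test of Step 7 makes the mutated update (10) a drop-in variant of the QPSO move. A per-coordinate sketch (names and the placement of the CE coefficient are our reading of [1], not verbatim from it):

```python
import numpy as np

rng = np.random.default_rng(2)

def qpso_m_move(x_ij, p_ij, mp_j, alpha, cr, lo=-1.0, hi=1.0):
    """One-coordinate QPSO-M update: Eq. (7) when RN < CR,
    otherwise the mutated Eq. (10), where |mu_xk - mu_xs| comes from
    two re-randomized points in the search range [lo, hi]."""
    u = rng.random()
    sign = 1.0 if rng.random() < 0.5 else -1.0
    if rng.random() < cr:                        # Step 7: RN < CR -> Eq. (7)
        spread = abs(mp_j - x_ij)
    else:                                        # else -> Eq. (10)
        mu_xk, mu_xs = rng.uniform(lo, hi, 2)    # two random particles
        spread = abs(mu_xk - mu_xs)
    return p_ij + sign * alpha * spread * np.log(1.0 / u)
```

With CR = 0.8 (Table 1), roughly one coordinate update in five is re-randomized, which keeps the spread term away from zero even when a particle sits on MP.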


Proposed modification with time-dependent constriction factor (PSO-LD)

Recent works in [17,18] indicate that the use of a constriction factor may be necessary to ensure convergence of PSO. A simplified method of incorporating a constriction factor is illustrated in [17], where the performance of PSO using an inertia weight was compared with that using a constriction factor. It was concluded that the best approach is to use a constriction factor while limiting the maximum velocity $v_{max}$ to the dynamic range of the variable $x_{max}$ in each dimension; it was also shown in [17] that this approach provides performance superior to any similar technique reported in the literature.

Constriction factor-based PSO [9] motivated this work to propose a new modification: a time-dependent, linearly decreasing constriction factor K, unlike the usual fixed K. Here, K is adjusted at each iteration following the recursion:

$$K_n = K_{min} + (K_{max} - K_{min})\,\frac{m - n}{m - 1}$$

where m and n respectively denote the maximum number of iterations and the current iteration. Note that the use of a constriction factor ensures computational stability of the PSO algorithm. It has a stabilizing effect on the swarm, which therefore calls for a lower value of the constriction factor as the search progresses.
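The linearly decreasing schedule $K_n = K_{min} + (K_{max} - K_{min})(m - n)/(m - 1)$ can be sketched as follows; the endpoint values K_max = 0.9 and K_min = 0.4 are illustrative assumptions, not values taken from the paper:

```python
def constriction(n, m, k_min=0.4, k_max=0.9):
    """Linearly decreasing constriction factor:
    K_n = K_min + (K_max - K_min) * (m - n) / (m - 1),
    where m is the maximum iteration number and n = 1..m the current one.
    The endpoints k_min and k_max are illustrative assumptions."""
    return k_min + (k_max - k_min) * (m - n) / (m - 1)

# In PSO-LD, K_n would scale the velocity update of Eq. (6) at iteration n,
# in place of a fixed constriction factor, e.g.:
#   v = constriction(n, m) * (v + c1*r1*(pbest - x) + c2*r2*(gbest - x))
```

The schedule starts at K_max for n = 1 and reaches K_min at n = m, giving the swarm large early moves and progressively stronger damping.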

Simulation results

The performance of the above methodologies was studied through simulations. With QPSO as reference, simulations were carried out for three different cases. One hundred Monte Carlo simulations were undertaken for each case. The simulation parameters are given in Table 1.

Table 1
Simulation parameters.

  PSO:              w = 0.9–0.4;  c1, c2 = 2
  QPSO:             α = 1–0.5
  QPSO-M & PSO-LD:  α = 1–0.5;  CR = 0.8

Case 1: second-order system and first-order IIR filter

In this case, i.e., a second-order system identified with a first-order IIR filter, the problem is one of falling into local minima. To test the effectiveness of the proposed algorithms, we have chosen the same plant as that discussed in [6], having numerator coefficients {0.05, 0.4} and denominator coefficients {1, 1.1314, 0.25}. The objective is to find the coefficients a and b of the first-order filter H(z) = a/(1 + bz⁻¹). Random Gaussian noise with zero mean and unit variance was chosen as the system input x(k). The error surface has a global minimum at {a, b} = {0.312, 0.816} and a local minimum at {a, b} = {0.117, 0.576}. For all the algorithms, the search space chosen was (1). The fixed initial positions are chosen as {0.117, 0.576}, {0.8, 0} and {0.9, 0.9} [6]. A comparison among the various algorithms in hitting the global and local minimum is outlined in Table 2; these results were obtained from 100 random simulations with randomly chosen initial positions.

Table 2
Filter design in example 1 (randomly chosen initial positions): number of hits.

            Global minimum {0.312, 0.816}   Local minimum {0.117, 0.557}
  PSO                  27                            73
  QPSO                 92                             8
  QPSO-M              100                             0
  PSO-LD              100                             0

Table 3
Mean values and standard deviations of the filter coefficients over 100 random runs in example 1 (randomly chosen initial positions).

            a (mean)   b (mean)   std(a)    std(b)    CPU time (s)
  PSO       0.263      0.6125     0.2084    0.5222    7.843
  QPSO      0.2754     0.7926     0.1164    0.3891    2.780
  QPSO-M    0.3192     0.8162     0.0075    0.0077    2.308
  PSO-LD    0.3181     0.8171     0.0045    0.0011    2.630
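The Case 1 identification setup can be reproduced as follows (a sketch under our assumptions: helper names are ours, and the coefficient signs follow the reduced-order benchmark as usually stated in [6], since signs are not preserved above):

```python
import numpy as np

rng = np.random.default_rng(3)

# Plant from [6] (signs per the commonly cited benchmark):
#   H(z) = (0.05 - 0.4 z^-1) / (1 - 1.1314 z^-1 + 0.25 z^-2)
num = [0.05, -0.4]
den = [1.0, -1.1314, 0.25]

def plant_output(x):
    """Desired response d(k) generated by the second-order plant."""
    d = np.zeros(len(x))
    for k in range(len(x)):
        acc = sum(num[i] * x[k - i] for i in range(2) if k - i >= 0)
        acc -= sum(den[i] * d[k - i] for i in range(1, 3) if k - i >= 0)
        d[k] = acc
    return d

def model_cost(a, b, x, d):
    """MSE of the first-order model H(z) = a / (1 + b z^-1)."""
    y = np.zeros(len(x))
    for k in range(len(x)):
        y[k] = a * x[k] - (b * y[k - 1] if k >= 1 else 0.0)
    return float(np.mean((d - y) ** 2))

x = rng.standard_normal(500)   # zero-mean, unit-variance Gaussian input
d = plant_output(x)
```

Any of the four algorithms can then minimize `model_cost` over the two-dimensional search space for (a, b).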

Table 4
Mean values of the filter coefficients over 100 runs in example 1 (four fixed initial positions); each entry is (a, b).

  Initial position  {0.117, 0.576}    {0.8, 0}           {0.9, 0.9}         {0.9, 0.9}        CPU time (s)
  PSO               (0.113, 0.524)    (0.0361, 0.0526)   (0.1697, 0.4768)   (0.263, 0.6125)   7.47
  QPSO              (0.114, 0.519)    (0.77, 0.027)      (0.9, 0.9)         (0.9, 0.9)        2.65
  QPSO-M            (0.116, 0.571)    (0.81, 0.009)      (0.89, 0.9066)     (0.91, 0.907)     2.298
  PSO-LD            (0.117, 0.573)    (0.83, 0.012)      (0.91, 0.9062)     (0.901, 0.903)    3.13

Table 5
Mean values and standard deviations of the filter coefficients over 100 random runs in example 2 (randomly chosen initial positions); entries are mean (std).

            a0                a1                 b1                b2                CPU time (s)
  PSO       0.3313 (0.1092)   0.0586 (0.12054)   0.545 (0.2354)    0.3892 (0.3034)   72.37
  QPSO      0.3909 (0.0138)   0.0769 (0.0162)    0.2187 (0.5796)   0.0168 (0.0148)   48.45
  QPSO-M    0.3948 (0.0134)   0.0742 (0.0195)    0.2230 (0.5739)   0.02 (0.0155)     44.071
  PSO-LD    0.48 (0.021)      0.076 (0.02)       0.24 (0.56)       0.024 (0.019)     46.03

Table 6
Mean values and standard deviations of the filter coefficients over 100 random runs in example 3 (randomly chosen initial positions); entries are mean (std).

            a0                   a1                    a2                   b1                   b2                    b3                   CPU time (s)
  PSO       0.1697 (0.1051)      0.3237 (0.1364)       0.3622 (0.1829)      0.3285 (0.4288)      0.2493 (0.1974)       0.088 (0.1618)       9.526
  QPSO      0.2 (3.523×10⁻³)     0.398 (4.286×10⁻³)    0.499 (4.088×10⁻³)   0.599 (7.186×10⁻³)   0.25 (7.969×10⁻³)     0.199 (7.389×10⁻³)   4.523
  QPSO-M    0.199 (1.4×10⁻⁴)     0.399 (2.045×10⁻⁴)    0.499 (1.6×10⁻⁴)     0.599 (2.55×10⁻⁴)    0.249 (2.816×10⁻⁴)    0.2 (2.72×10⁻⁴)      3.998
  PSO-LD    0.2 (1.4×10⁻⁴)       0.2 (2.045×10⁻⁴)      0.5 (1.6×10⁻⁴)       0.6 (2.55×10⁻⁴)      0.25 (2.816×10⁻⁴)     0.22 (2.72×10⁻⁴)     4.12


It is evident from the table that PSO and QPSO jump into the global minimum valley in more cases and converge to the global minimum, but they may also jump into, and then converge to, the local minimum. QPSO-M and PSO-LD, however, converge to the global minimum in almost all cases.

Tables 3 and 4 give the mean values of the filter coefficients, the standard deviations, and the simulation run time for each algorithm. It is evident from Table 3 that QPSO-M finds the global minimum with the smallest standard deviations among the compared algorithms. It is evident from Table 4 that QPSO-M and PSO-LD can jump out of any of the fixed initial positions and find the global minimum, while the other algorithms all remain trapped at these fixed initial positions. Fig. 1 illustrates the coefficient learning curves and the convergence behaviors of the algorithms for this design case. The results shown are averaged over 100 random runs with randomly chosen initial positions.
Case 2: third-order system and second-order IIR filter

When the plant is a third-order system and the filter is a second-order IIR filter, the error surface of the cost function becomes multi-modal because of the reduced-order filter. We have chosen the plant with numerator coefficients {0.3, 0.4, 0.5} and denominator coefficients {1, 1.24, 0.5, 0.1}. The problem reduces to finding the coefficients {a0, a1, b1, b2} of the filter with transfer function H(z) = (a0 + a1 z⁻¹)/(1 + b1 z⁻¹ + b2 z⁻²). A uniformly distributed white sequence in the range (0.5) is taken as the input x(k), and the SNR was kept fixed at 30 dB. Results for case 2 are illustrated in Table 5, which gives the mean best values and standard deviations of the filter coefficients, obtained by averaging over 100 random runs with randomly chosen initial positions, as in case 1. Fig. 2(a) depicts the convergence behaviors of the different algorithms for case 2. It is evident from Table 5 that the mean best values produced by PSO, QPSO and QPSO-M are approximate, while those produced by PSO-LD are exact. The convergence speed of QPSO and QPSO-M is faster than that of the other algorithms, as seen from Fig. 2(a).
Case 3: third-order system and third-order IIR filter

When the plant is a third-order system and the filter is a third-order IIR filter, the error surface of the cost function becomes uni-modal because the filter has the same order. We have chosen the plant with numerator coefficients {0.2, 0.4, 0.5} and denominator coefficients {1, 0.6, 0.25, 0.2}. The problem reduces to finding the coefficients {a0, a1, a2, b1, b2, b3} of the filter with transfer function H(z) = (a0 + a1 z⁻¹ + a2 z⁻²)/(1 + b1 z⁻¹ + b2 z⁻² + b3 z⁻³). We have chosen the same input as in case 1. The best solution is located at {0.2, 0.4, 0.5, 0.6, 0.25, 0.2}. The mean best values and standard deviations of the filter coefficients for case 3 are provided in Table 6, averaged over 100 random runs with randomly chosen initial positions. Convergence behaviors for case 3, averaged over 100 random runs, are shown in Fig. 2(b). As evident from Table 6, the filter coefficients found by QPSO are located almost exactly at the best solution, with standard deviations smaller than those of PSO. QPSO-M and PSO-LD are respectively the most and the second most robust of the compared algorithms. It is seen in Fig. 2(b) that the convergence speeds of QPSO, QPSO-M and PSO-LD are much faster than that of PSO. Computation times in MATLAB for the different strategies in the three examples are outlined in the tables above. QPSO-M and PSO-LD generally require additional operations and variables per iteration, which can increase computation time.

Fig. 1. (a) Coefficient a and (b) coefficient b learning curves for example 1, and (c) comparison of convergence behaviors for example 1.


Fig. 2. Comparison of convergence behaviors for (a) example 2 and (b) example 3.

From the above three examples, QPSO, QPSO-M and PSO-LD have shown stronger search abilities on both the multi-modal problem and the uni-modal one. QPSO, QPSO-M and PSO-LD outperform PSO in convergence speed, robustness and quality of the final solutions [19].

Conclusion

This paper introduced two novel modifications to PSO and its variant QPSO, termed QPSO-M and PSO-LD. QPSO-M enhances randomness by modifying the update equation of QPSO, replacing a part of the update equation with a random vector with a certain probability. PSO-LD introduces a time-dependent, linearly decreasing constriction factor. QPSO, QPSO-M and PSO-LD were used in the design of digital IIR filters for the purpose of system identification. Experimental results have shown that the performance of QPSO, QPSO-M and PSO-LD is superior to that of PSO in the digital IIR filter design problem, and that they can be efficient tools for filter design.

References

[1] W. Fang, J. Sun, W. Xu, A new mutated quantum-behaved particle swarm optimizer for digital IIR filter design, EURASIP J. Adv. Signal Process. (2009), Article ID 367465.
[2] V.J. Manoj, E. Elias, Design of multiplier-less nonuniform filter bank transmultiplexer using genetic algorithm, Signal Process. 89 (11) (2009) 2274–2285.
[3] J.-T. Tsai, W.-H. Ho, J.-H. Chou, Design of two-dimensional IIR digital structure-specified filters by using an improved genetic algorithm, Expert Syst. Appl. 36 (3) (2009) 6928–6934 (Part 2).
[4] C.-W. Tsai, C.-H. Huang, C.-L. Lin, Structure-specified IIR filter and control design using real structured genetic algorithm, Appl. Soft Comput. 9 (4) (2009) 1285–1295.
[5] Y.-P. Chang, C. Low, S.-Y. Hung, Integrated feasible direction method and genetic algorithm for optimal planning of harmonic filters with uncertainty conditions, Expert Syst. Appl. 36 (2) (2009) 3946–3955 (Part 2).
[6] S. Chen, R. Istepanian, B.L. Luk, Digital IIR filter design using adaptive simulated annealing, Digital Signal Process. 11 (3) (2001) 241–251.
[7] N. Karaboga, Digital IIR filter design using differential evolution algorithm, EURASIP J. Appl. Signal Process. 2005 (8) (2005) 1269–1276.
[8] D.J. Krusienski, W.K. Jenkins, Adaptive filtering via particle swarm optimization, in: Proceedings of the Conference Record of the Asilomar Conference on Signals, Systems and Computers, vol. 1, November 2003, pp. 571–575.
[9] Y. Gao, Y. Li, H. Qian, The design of IIR digital filter based on chaos particle swarm optimization algorithm, in: Proceedings of the 2nd International Conference on Genetic and Evolutionary Computing (WGEC'08), Jingzhou, Hubei, September 2008, pp. 303–306.
[10] J. Sun, B. Feng, W. Xu, Particle swarm optimization with particles having quantum behavior, in: Proceedings of the Congress on Evolutionary Computation (CEC'04), vol. 1, Portland, OR, USA, June 2004, pp. 325–331.
[11] J. Sun, W. Xu, B. Feng, A global search strategy of quantum-behaved particle swarm optimization, in: Proceedings of the IEEE Conference on Cybernetics and Intelligent Systems, 2004, pp. 111–116.
[12] J.I. Ababneh, M.H. Bataineh, Linear phase FIR filter design using particle swarm optimization and genetic algorithms, Digital Signal Process. 18 (4) (2008).
[13] A. Sarangi, R.K. Mahapatra, S.P. Panigrahi, DEPSO and PSO-QI in digital filter design, Expert Syst. Appl. 38 (2011) 10966–10973.
[14] J. Kennedy, R. Eberhart, Particle swarm optimization, in: Proceedings of the IEEE International Conference on Neural Networks, vol. 4, Perth, Australia, November–December 1995, pp. 1942–1948.
[15] Y. Shi, R. Eberhart, A modified particle swarm optimizer, in: Proceedings of the IEEE Conference on Evolutionary Computation (ICEC'98), Anchorage, AK, USA, May 1998, pp. 69–73.
[16] L.D.S. Coelho, A quantum particle swarm optimizer with chaotic mutation operator, Chaos Solitons Fractals 37 (5) (2008) 1409–1418.
[17] R.C. Eberhart, Y. Shi, Comparing inertia weights and constriction factors in particle swarm optimization, in: Proceedings of the 2000 Congress on Evolutionary Computation, 2000, pp. 84–88.
[18] M. Clerc, J. Kennedy, The particle swarm: explosion, stability, and convergence in a multidimensional complex space, IEEE Trans. Evol. Comput. 6 (2002) 58–73.
[19] S. Haykin, Adaptive Filter Theory, 4th ed., Prentice Hall, Englewood Cliffs, NJ, USA, 2001.
