Article info
Article history:
Received 15 September 2011
Received in revised form 12 May 2013
Accepted 2 June 2013
Available online 14 June 2013
Keywords:
Digital filter design
Particle Swarm Optimization
IIR filter
Abstract
This paper deals with the problem of digital IIR filter design. Two novel modifications to Particle Swarm Optimization are proposed and validated through a novel application to IIR filter design. The first modification is based on quantum mechanics and is shown to yield better performance. The second modification accounts for the time-dependent character of the constriction factor. Extensive simulation results validate the superior performance of the proposed algorithms.
© 2013 Elsevier B.V. All rights reserved.
Introduction
The study by Fang et al. [1] confirms the findings of the researchers of [2-13]. This work can be treated as an extension of the work done in [1]. For ease of reading, we have reproduced that work as the first part of this article, which paves the way for the next part of the article.
In the problem of IIR filter design, the error surface is in most cases non-quadratic and multi-modal [2]. On a multi-modal error surface, the global minimum can be reached by a global optimization technique. The bio-inspired Genetic Algorithm (GA) [2] and its hybrids [3-5] have been used for digital IIR filter design. The disadvantages of GA (i.e., lack of good local search ability and premature convergence) invited Simulated Annealing (SA) [6] for use in digital IIR filter design. However, standard SA is very slow and requires repeated evaluations of the cost function to converge to the global minimum. Hence, Differential Evolution (DE) [7] finds a scope in digital IIR filter design. The problem with the DE algorithm, however, is that it is sensitive to the choice of its control parameters.
Particle Swarm Optimization (PSO) [8,9] has also been used for digital IIR filter design. PSO is easy to implement and outperforms GA in many applications. However, as the particles in PSO only search in a finite sampling space, PSO may easily be trapped in local optima. One useful modification of PSO is Quantum-Behaved Particle Swarm Optimization (QPSO) [10-13], which uses the merits of quantum mechanics and PSO. An IIR filter is described by the difference equation:
y(k) + \sum_{i=1}^{M} b_i \, y(k-i) = \sum_{i=0}^{L} a_i \, x(k-i)    (1)
where x(k) represents the input to the filter and y(k) the output from the filter, M (>= L) is the order of the filter, and a_i and b_i are the adjustable coefficients. The transfer function of this model takes the following general form:
H(z) = \frac{A(z)}{1 + B(z)} = \frac{\sum_{i=0}^{L} a_i z^{-i}}{1 + \sum_{i=1}^{M} b_i z^{-i}}    (2)
The problem of IIR filter design can be considered as an optimization problem in which the mean square error (MSE) is taken as the cost function. Hence, the cost function is:
J(w) = E[e^2(k)]    (3)

Here, d(k) is the desired response of the filter and e(k) = d(k) - y(k) is the error signal. Using the two sets of coefficients, {a_i}_{i=0}^{L} and {b_i}_{i=1}^{M}, we can form the composite weight vector of the filter, defined as:

w = [a_0, \ldots, a_L, b_1, \ldots, b_M]    (4)

In practice, the expectation in Eq. (3) is replaced by the time average over N samples:

J(w) = \frac{1}{N} \sum_{k=1}^{N} e^2(k)    (5)
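As a concrete illustration, the filter model and cost of Eqs. (1)-(5) can be sketched in Python. This is a minimal sketch; the function names and the NumPy-based layout are our own, not from the paper.

```python
import numpy as np

def iir_output(a, b, x):
    """Output of the IIR model of Eq. (1):
    y(k) = sum_{i=0..L} a_i x(k-i) - sum_{i=1..M} b_i y(k-i).
    `a` holds a_0..a_L and `b` holds b_1..b_M."""
    y = np.zeros(len(x))
    for k in range(len(x)):
        acc = sum(a[i] * x[k - i] for i in range(len(a)) if k - i >= 0)
        acc -= sum(b[i - 1] * y[k - i] for i in range(1, len(b) + 1) if k - i >= 0)
        y[k] = acc
    return y

def mse_cost(w, L, x, d):
    """Time-averaged MSE of Eq. (5) for the composite weight
    vector w = [a_0..a_L, b_1..b_M] of Eq. (4)."""
    a, b = w[: L + 1], w[L + 1 :]
    e = d - iir_output(a, b, x)
    return np.mean(e ** 2)
```

Evaluating `mse_cost` at the coefficients that generated the desired response d(k) returns (numerically) zero, which is the property the optimizers below exploit.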
Methodologies

In standard PSO [14,15], the velocity of particle i along dimension j is updated as:

v_{ij}^{k+1} = \omega v_{ij}^{k} + c_1 r_{1j}^{k} (P_{ij}^{k} - x_{ij}^{k}) + c_2 r_{2j}^{k} (P_{gj}^{k} - x_{ij}^{k}), \quad r_{1j}^{k}, r_{2j}^{k} \sim U(0, 1)    (6)

with the position update x_{ij}^{k+1} = x_{ij}^{k} + v_{ij}^{k+1}. In QPSO [10-13], the position of each particle is instead updated as:

x_{ij}^{k+1} = p_{ij}^{k} \pm \alpha \, |MP_{j}^{k} - x_{ij}^{k}| \, \ln(1/u_{ij}^{k}), \quad u_{ij}^{k} \sim U(0, 1)    (7)

where the Mean Best Position MP is defined over the M particles as:

MP^{k} = (MP_{1}^{k}, \ldots, MP_{D}^{k}) = \left( \frac{1}{M} \sum_{i=1}^{M} P_{i1}^{k}, \ldots, \frac{1}{M} \sum_{i=1}^{M} P_{iD}^{k} \right)    (8)

and the local attractor of each particle is:

p_{ij}^{k} = \varphi_{ij}^{k} P_{ij}^{k} + (1 - \varphi_{ij}^{k}) P_{gj}^{k}, \quad \varphi_{ij}^{k} \sim U(0, 1)    (9)
Here, the parameter \alpha is called the contraction-expansion (CE) coefficient. P_i and P_g have the same meanings as in PSO. MP is called the Mean Best Position, which is defined as the mean of the pbest positions of all particles.
The search space of QPSO is wider than that of PSO because of the introduction of the exponential distribution of positions. Also, unlike PSO, where each particle converges to the global best position independently, an improvement has been introduced by inserting the Mean Best Position into QPSO. Here, a particle cannot converge to the global best position without considering its colleagues, because the distance between the current position and MP determines the position distribution of the particle for the next iteration.
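One iteration of the QPSO position update described above can be sketched as follows. This is a minimal NumPy sketch; the function name `qpso_step`, the array shapes, and the fixed random generator are assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def qpso_step(X, P, Pg, alpha):
    """One QPSO position update (Eqs. (7)-(9)).
    X: (M, D) current positions, P: (M, D) pbest positions,
    Pg: (D,) gbest position, alpha: contraction-expansion coefficient."""
    M, D = X.shape
    MP = P.mean(axis=0)                       # Mean Best Position, Eq. (8)
    phi = rng.uniform(size=(M, D))
    p = phi * P + (1.0 - phi) * Pg            # local attractors, Eq. (9)
    u = rng.uniform(size=(M, D))
    sign = np.where(rng.uniform(size=(M, D)) < 0.5, -1.0, 1.0)
    # Eq. (7): x <- p +/- alpha * |MP - x| * ln(1/u)
    return p + sign * alpha * np.abs(MP - X) * np.log(1.0 / u)
```

Note that with alpha = 0 each particle lands exactly on its local attractor p, a convex combination of pbest and gbest; the ln(1/u) term is what spreads the swarm around those attractors.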
Modified QPSO

This section reproduces the modification proposed in [1]. Though QPSO has better global search ability than PSO, it may still fall into premature convergence [16], like other evolutionary algorithms (e.g., GA and PSO) in multi-modal optimization. This results in a great loss of performance and yields sub-optimal solutions. In QPSO, diversity loss of the whole population in a larger search space is also inevitable because of its collectiveness. From the QPSO position update, it is evident that when |MP_j - x_{ij}| is very small, the search space becomes small and x_{ij} cannot move to a new position in the subsequent iterations. The explorative power of the particles is then lost, and the evolution process may stagnate. This may even occur at an early stage, when |MP_j - x_{ij}| is zero. Also, in the later stages, the loss of diversity in |MP_j - x_{ij}| is a problem to be considered. To prevent this undesirable difficulty, we construct a random vector that replaces |MP_j - x_{ij}| with a certain probability CR, using the difference between two positional coordinates that are re-randomized in the problem space; the position updating equation then becomes:

x_{ij}^{k+1} = p_{ij}^{k} \pm \alpha \, |x_{mj}^{k} - x_{sj}^{k}| \, \ln(1/u_{ij}^{k}), \quad u_{ij}^{k} \sim U(0, 1)    (10)

where x_{mj}^{k} and x_{sj}^{k} are the two coordinates re-randomized in the problem space.
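The mutated update of Eq. (10) can be sketched by extending the plain QPSO step: with probability CR, the spread term |MP_j - x_{ij}| is swapped for the distance between two coordinates re-drawn uniformly from the search box. The function name `qpso_m_step` and the box parameters `lo`, `hi` are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def qpso_m_step(X, P, Pg, alpha, CR, lo, hi):
    """QPSO-M update: Eq. (7) with the |MP - x| spread replaced,
    with probability CR, by |x_m - x_s| of Eq. (10), where x_m and
    x_s are re-randomized in the search box [lo, hi]."""
    M, D = X.shape
    MP = P.mean(axis=0)
    phi = rng.uniform(size=(M, D))
    p = phi * P + (1.0 - phi) * Pg            # local attractors, Eq. (9)
    u = rng.uniform(size=(M, D))
    sign = np.where(rng.uniform(size=(M, D)) < 0.5, -1.0, 1.0)
    spread = np.abs(MP - X)                   # default spread of Eq. (7)
    xm = rng.uniform(lo, hi, size=(M, D))     # re-randomized coordinates
    xs = rng.uniform(lo, hi, size=(M, D))
    mutate = rng.uniform(size=(M, D)) < CR    # mutation mask, prob. CR
    spread = np.where(mutate, np.abs(xm - xs), spread)
    return p + sign * alpha * spread * np.log(1.0 / u)
```

The mutation keeps the spread term away from zero even when the swarm has collapsed onto MP, which is exactly the stagnation scenario discussed above.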
Table 1
Simulation parameters.

PSO:     c1, c2 = 2; inertia weight decreased linearly from 0.9 to 0.4
QPSO:    alpha decreased linearly from 1 to 0.5
QPSO-M:  alpha decreased linearly from 1 to 0.5; CR = 0.8
Table 2
Filter design in example 1 (randomly chosen initial positions): number of hits.

Algorithm   Global minimum (0.312, 0.816)   Local minimum (0.117, 0.557)
PSO         27                              73
QPSO        92                              8
QPSO-M      100                             0
PSO-LD      100                             0
Table 3
Mean values and standard deviations of the filter coefficients of 100 random runs in example 1 (with randomly chosen initial positions).

Algorithm   a (mean)   b (mean)   a (std)   b (std)   CPU time (s)
PSO         0.263      0.6125     0.2084    0.5222    7.843
QPSO        0.2754     0.7926     0.1164    0.3891    2.780
QPSO-M      0.3192     0.8162     0.0075    0.0077    2.308
PSO-LD      0.3181     0.8171     0.0045    0.0011    2.630
A simplified method of incorporating a constriction factor is illustrated in [17]. In [17], the performance of PSO using an inertia weight was compared with that of PSO using a constriction factor. It was concluded that the best approach is to use a constriction factor while limiting the maximum velocity v_max to the dynamic range of the variable x_max in each dimension. It was also shown in [17] that this approach provides performance superior to any similar technique reported in the literature.
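For reference, the Clerc-Kennedy constriction coefficient of [17,18] and the corresponding velocity update can be sketched as follows; the function names and the scalar (single-dimension) form are our own simplifications.

```python
import math

def constriction_factor(c1, c2):
    """Clerc-Kennedy constriction coefficient:
    chi = 2 / |2 - phi - sqrt(phi^2 - 4 phi)|, requiring phi = c1 + c2 > 4."""
    phi = c1 + c2
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

def pso_velocity(v, x, pbest, gbest, c1=2.05, c2=2.05, r1=0.5, r2=0.5):
    """Constriction-factor velocity update for one coordinate:
    v <- chi * (v + c1 r1 (pbest - x) + c2 r2 (gbest - x))."""
    chi = constriction_factor(c1, c2)
    return chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))
```

With the common choice c1 = c2 = 2.05 (phi = 4.1), chi is about 0.7298, which damps the velocities and guarantees convergence without an explicit v_max clamp.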
The constriction factor-based PSO [9] motivated this work to propose a new modification, achieved by introducing a time-dependent (linearly decreasing) constriction factor; the resulting algorithm is referred to as PSO-LD.

Case 1: second-order system and first-order IIR filter

When the plant is a second-order system and the filter is a first-order IIR filter, the problem is one of falling into local minima. To test the effectiveness of the proposed algorithms, we have chosen a plant same as that discussed in [6], having numerator coefficients {0.05, -0.4} and denominator coefficients {1, -1.1314, 0.25}. The objective is to find the coefficients a and b of the first-order filter H(z) = a/(1 + b z^{-1}). A random Gaussian noise with zero mean and unit variance was chosen as the system input, x(k). The error surface has a global minimum at {a, b} = {0.312, 0.816} and a local minimum at {a, b} = {0.117, 0.576}. For all the five algorithms, the search space chosen was (-1, 1). The fixed initial positions are chosen as {0.117, 0.576}, {0.8, 0} and {0.9, 0.9} [6]. A comparison among the various algorithms in hitting the global and the local minimum is outlined in Table 2. These results were found from 100 random simulations with randomly chosen initial positions. It is evident from the
Table 4
Mean values of the filter coefficients of 100 random runs in example 1 (with four fixed initial positions).

Fixed initial position   PSO (a, b)        QPSO (a, b)    QPSO-M (a, b)   PSO-LD (a, b)
0.117, 0.576             0.113, 0.524      0.114, 0.519   0.116, 0.571    0.117, 0.573
0.8, 0                   0.0361, 0.0526    0.77, 0.027    0.81, 0.009     0.83, 0.012
0.9, 0.9                 0.1697, 0.4768    0.9, 0.9       0.89, 0.9066    0.91, 0.9062
0.9, 0.9                 0.263, 0.6125     0.9, 0.9       0.91, 0.907     0.901, 0.903
CPU time (s)             7.47              2.65           2.298           3.13
Table 5
Mean values and standard deviations of the filter coefficients of 100 random runs in example 2 (with randomly chosen initial positions).

Algorithm   a0 (std)           a1 (std)          b1 (std)          b2 (std)          CPU time (s)
PSO         0.3313 (0.12054)   0.1092 (0.545)    0.0586 (0.3892)   0.2354 (0.3034)   72.37
QPSO        0.3909 (0.0162)    0.0138 (0.2187)   0.0769 (0.0168)   0.5796 (0.0148)   48.45
QPSO-M      0.3948 (0.0195)    0.0134 (0.2230)   0.0742 (0.02)     0.5739 (0.0155)   44.071
PSO-LD      0.48 (0.02)        0.021 (0.24)      0.076 (0.024)     0.56 (0.019)      46.03
Table 6
Mean values and standard deviations of the filter coefficients of 100 random runs in example 3 (with randomly chosen initial positions).

Algorithm   a0 (std)             a1 (std)              a2 (std)             b1 (std)             b2 (std)              b3 (std)             CPU time (s)
PSO         0.1697 (0.1051)      0.3237 (0.1364)       0.3622 (0.1829)      0.3285 (0.4288)      0.2493 (0.1974)       0.088 (0.1618)       9.526
QPSO        0.2 (3.523x10^-3)    0.398 (4.286x10^-3)   0.499 (4.088x10^-3)  0.599 (7.186x10^-3)  0.25 (7.969x10^-3)    0.199 (7.389x10^-3)  4.523
QPSO-M      0.199 (1.4x10^-4)    0.399 (2.045x10^-4)   0.499 (1.6x10^-4)    0.599 (2.55x10^-4)   0.249 (2.816x10^-4)   0.2 (2.72x10^-4)     3.998
PSO-LD      0.2 (1.4x10^-4)      0.2 (2.045x10^-4)     0.5 (1.6x10^-4)      0.6 (2.55x10^-4)     0.25 (2.816x10^-4)    0.22 (2.72x10^-4)    4.12
table that PSO and QPSO jump to the global minimum valley in most cases and converge to the global minimum, but they may also jump to the local minimum valley and then converge to the local minimum. However, QPSO-M and PSO-LD have the ability to converge to the global minimum in almost all cases.
In Tables 3 and 4, we report the mean values of the filter coefficients along with the standard deviations and the simulation run time for each of the algorithms. It is evident from Table 3 that QPSO-M finds the global minimum with the least standard deviations compared with the other algorithms. It is evident from Table 4 that QPSO-M and PSO-LD can jump out of any of the settled fixed initial positions and find the global minimum, while the other algorithms all remain trapped near these fixed initial positions. Fig. 1 illustrates the coefficient learning curves and convergence behaviors of the algorithms for this design case. The results shown are averaged over 100 random runs with randomly chosen initial positions.
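The example-1 setup can be reproduced approximately as follows. This is a sketch under stated assumptions: the minus signs on the plant coefficients (stripped in the printed text) and the helper names `plant_output` and `model_mse` are ours, and the minima are evaluated at the coordinates as printed.

```python
import numpy as np

rng = np.random.default_rng(0)

def plant_output(x):
    """Second-order plant of example 1 (signs assumed):
    d(k) = 0.05 x(k) - 0.4 x(k-1) + 1.1314 d(k-1) - 0.25 d(k-2)."""
    d = np.zeros(len(x))
    for k in range(len(x)):
        d[k] = 0.05 * x[k]
        if k >= 1:
            d[k] += -0.4 * x[k - 1] + 1.1314 * d[k - 1]
        if k >= 2:
            d[k] -= 0.25 * d[k - 2]
    return d

def model_mse(a, b, x, d):
    """MSE of the first-order model H(z) = a / (1 + b z^-1)."""
    y = np.zeros(len(x))
    for k in range(len(x)):
        y[k] = a * x[k] - (b * y[k - 1] if k >= 1 else 0.0)
    return np.mean((d - y) ** 2)

x = rng.standard_normal(2000)     # zero-mean, unit-variance Gaussian input
d = plant_output(x)
# cost at the two reported minima (sign conventions assumed, values as printed)
m_global = model_mse(0.312, -0.816, x, d)
m_local = model_mse(-0.117, -0.576, x, d)
```

Sweeping `model_mse` over a grid of (a, b) reproduces the two-valley error surface that Table 2 summarizes.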
Case 2: third-order system and second-order IIR filter

When the plant is a third-order system and the filter is a second-order IIR filter, the error surface of the cost function becomes multi-modal because of the reduced-order filter. We have chosen the plant having numerator coefficients {0.3, 0.4, 0.5} and denominator coefficients {1, 1.24, 0.5, 0.1}. The problem now reduces to that of finding the coefficients {a0, a1, b1, b2} of the filter with transfer function H(z) = (a0 + a1 z^{-1})/(1 + b1 z^{-1} + b2 z^{-2}). A uniformly distributed white sequence in the range (-0.5, 0.5) is taken as the input x(k), and the SNR was kept fixed at 30 dB.
Results of the various algorithms for case 2 are given in Table 5, which reports the mean best values and standard deviations of the filter coefficients. These results are obtained by averaging over 100 random runs with randomly chosen initial positions, as in case 1. Fig. 2(a) depicts the convergence behaviors of the different algorithms for case 2. It is evident from Table 5 that the mean best values produced by PSO, QPSO and QPSO-M are approximate, whereas those produced by PSO-LD are exact. The convergence speed of QPSO and QPSO-M is faster than that of the other algorithms, as seen from Fig. 2(a).
Case 3: third-order system and third-order IIR filter

When the plant is a third-order system and the filter is a third-order IIR filter, the error surface of the cost function becomes uni-modal because the filter has the same order as the plant. We have chosen the plant having numerator coefficients {0.2, 0.4, 0.5} and denominator coefficients {1, 0.6, 0.25, 0.2}. The problem now reduces to that of finding the coefficients {a0, a1, a2, b1, b2, b3} of the filter with transfer function H(z) = (a0 + a1 z^{-1} + a2 z^{-2})/(1 + b1 z^{-1} + b2 z^{-2} + b3 z^{-3}). We have chosen the same input as in case 1. The best solution is located at {0.2, 0.4, 0.5, 0.6, 0.25, 0.2}. The mean best values and standard deviations of the filter coefficients in case 3 are provided in Table 6, which gives results averaged over 100 random runs with randomly chosen initial positions. Convergence behaviors for case 3, averaged over 100 random runs, are shown in Fig. 2(b).
As evident from Table 6, the filter coefficients found by QPSO-M are located almost exactly at the best solution, and the standard deviation is smaller than that yielded by any other algorithm. QPSO-M and PSO-LD are the most and the second most robust algorithms compared with the two others. It is seen in Fig. 2(b) that the convergence speeds of QPSO, QPSO-M and PSO-LD are much faster than that of PSO. Computation times in MATLAB for the different strategies in the three examples are outlined in Tables 3-6. It is seen that the PSO and QPSO iterations are simpler, whereas QPSO-M and PSO-LD generally require additional operations and variables, which increases the computation per iteration.
Fig. 1. (a) Coefficient a and (b) coefficient b learning curves for example 1, and (c) comparison of convergence behaviors for example 1.
Fig. 2. Comparison of convergence behaviors for (a) example 2 and (b) example 3.
References

[1] W. Fang, J. Sun, W. Xu, A new mutated quantum-behaved particle swarm optimizer for digital IIR filter design, EURASIP J. Adv. Signal Process. (2009), Article ID 367465.
[2] V.J. Manoj, E. Elias, Design of multiplier-less nonuniform filter bank transmultiplexer using genetic algorithm, Signal Process. 89 (11) (2009) 2274-2285.
[3] J.-T. Tsai, W.-H. Ho, J.-H. Chou, Design of two-dimensional IIR digital structure-specified filters by using an improved genetic algorithm, Expert Syst. Appl. 36 (3) (2009) 6928-6934 (Part 2).
[4] C.-W. Tsai, C.-H. Huang, C.-L. Lin, Structure-specified IIR filter and control design using real structured genetic algorithm, Appl. Soft Comput. 9 (4) (2009) 1285-1295.
[5] Y.-P. Chang, C. Low, S.-Y. Hung, Integrated feasible direction method and genetic algorithm for optimal planning of harmonic filters with uncertainty conditions, Expert Syst. Appl. 36 (2) (2009) 3946-3955 (Part 2).
[6] S. Chen, R. Istepanian, B.L. Luk, Digital IIR filter design using adaptive simulated annealing, Digital Signal Process. 11 (3) (2001) 241-251.
[7] N. Karaboga, Digital IIR filter design using differential evolution algorithm, EURASIP J. Appl. Signal Process. 2005 (8) (2005) 1269-1276.
[8] D.J. Krusienski, W.K. Jenkins, Adaptive filtering via particle swarm optimization, in: Proceedings of the Conference Record of the Asilomar Conference on Signals, Systems and Computers, vol. 1, November 2003, pp. 571-575.
[9] Y. Gao, Y. Li, H. Qian, The design of IIR digital filter based on chaos particle swarm optimization algorithm, in: Proceedings of the 2nd International Conference on Genetic and Evolutionary Computing (WGEC 08), Jingzhou, Hubei, September 2008, pp. 303-306.
[10] J. Sun, B. Feng, W. Xu, Particle swarm optimization with particles having quantum behavior, in: Proceedings of the Congress on Evolutionary Computation (CEC 04), vol. 1, Portland, OR, USA, June 2004, pp. 325-331.
[11] J. Sun, W. Xu, B. Feng, A global search strategy of quantum-behaved particle swarm optimization, in: Proceedings of the IEEE Conference on Cybernetics and Intelligent Systems, 2004, pp. 111-116.
[12] J.I. Ababneh, M.H. Bataineh, Linear phase FIR filter design using particle swarm optimization and genetic algorithms, Digital Signal Process. 18 (4) (2008).
[13] A. Sarangi, R.K. Mahapatra, S.P. Panigrahi, DEPSO and PSO-QI in digital filter design, Expert Syst. Appl. 38 (2011) 10966-10973.
[14] J. Kennedy, R. Eberhart, Particle swarm optimization, in: Proceedings of the IEEE International Conference on Neural Networks, vol. 4, Perth, Australia, November-December 1995, pp. 1942-1948.
[15] Y. Shi, R. Eberhart, A modified particle swarm optimizer, in: Proceedings of the IEEE Conference on Evolutionary Computation (ICEC 98), Anchorage, AK, USA, May 1998, pp. 69-73.
[16] L.D.S. Coelho, A quantum particle swarm optimizer with chaotic mutation operator, Chaos Solitons Fractals 37 (5) (2008) 1409-1418.
[17] R.C. Eberhart, Y. Shi, Comparing inertia weights and constriction factors in particle swarm optimization, in: Proceedings of the 2000 Congress on Evolutionary Computation, 2000, pp. 84-88.
[18] M. Clerc, J. Kennedy, The particle swarm: explosion, stability, and convergence in a multidimensional complex space, IEEE Trans. Evol. Comput. 6 (2002) 58-73.
[19] S. Haykin, Adaptive Filter Theory, 4th ed., Prentice Hall, Englewood Cliffs, NJ, USA, 2001.