2 Digital Filters
Marcio G. Siqueira
Cisco Systems, Sunnyvale, California, USA

Paulo S.R. Diniz
Program of Electrical Engineering, Federal University of Rio de Janeiro, Rio de Janeiro, Brazil

2.1 Introduction
2.2 Digital Signal Processing Systems
2.3 Sampling of Analog Signals
    2.3.1 Sampling Theorem
2.4 Digital Filters and Linear Systems
    2.4.1 Linear Time-Invariant Systems • 2.4.2 Difference Equation
2.5 Finite Impulse Response (FIR) Filters
2.6 Infinite Impulse Response Filters
2.7 Digital Filter Realizations
    2.7.1 FIR Filter Structures • 2.7.2 IIR Filter Realizations
2.8 FIR Filter Approximation Methods
    2.8.1 Window Method • 2.8.2 Saramäki Window
2.9 FIR Filter Design by Optimization
    2.9.1 Problem Formulation • 2.9.2 Chebyshev Method • 2.9.3 Weighted Least-Squares Method • 2.9.4 Minimax Filter Design Employing WLS • 2.9.5 The WLS-Chebyshev Method
2.10 IIR Filter Approximations
    2.10.1 Analog Filter Transformation Methods • 2.10.2 Bilinear Transformation by Pascal Matrix
2.11 Quantization in Digital Filters
    2.11.1 Coefficient Quantization • 2.11.2 Quantization Noise • 2.11.3 Overflow Limit Cycles • 2.11.4 Granularity Limit Cycles
2.12 Real-Time Implementation of Digital Filters
2.13 Conclusion
References

2.1 Introduction
The rapid development of integrated circuit technology has led to very powerful digital machines able to perform a very large number of computations in a very short time. In addition, digital machines are flexible, reliable, reproducible, and relatively cheap. As a consequence, several signal processing tasks originally performed in the analog domain have been moved to the digital domain; indeed, several signal processing tools are feasible only in the digital domain. Most real-life signals are continuous functions of one or more independent variables, such as time, temperature, and position, so it would be natural to represent and manipulate these signals in their original analog domain. In this chapter, we consider time as the single independent variable. If sources of
Copyright © 2005 by Academic Press. All rights of reproduction in any form reserved.

information are originally continuous in time, which is usually the case, and they occupy a limited range of frequencies, it is possible to transform continuous-time signals into discrete-time signals without losing any significant information. By interpolation, the discrete-time signal can be mapped back into the original continuous-time signal. It is worth mentioning that some signals are originally discrete in time, such as the monthly earnings of a worker or the average yearly temperature of a city. The current technological trend is to sample and quantize signals as early as possible so that most operations are performed in the digital domain. The sampling process consists of transforming a continuous signal into a discrete signal (a sequence). This transformation is necessary because digital machines are suited to manipulating sequences of quantized numbers at high speed. Therefore, quantization

of the continuous-amplitude samples generates digital signals tailored to be processed by digital machines. A digital filter is one of the basic building blocks in digital signal processing systems. Digital filtering consists of mapping a discrete-time sequence into another discrete-time sequence that highlights the desired information while reducing the importance of the undesired information. Digital filters are present in various digital signal processing applications related to speech, audio, image, video, and multirate processing systems as well as in communication systems, CD players, digital radio, television, and control systems. Digital filters can have either fixed (Antoniou, 1993; Jackson, 1996; Oppenheim and Schafer, 1989; Mitra, 2001; Diniz, 2002) or adaptive coefficients (Diniz, 2002). In this chapter, we will describe:

• The most widely used types of digital filter transfer functions;
• How to design these transfer functions (i.e., how to calculate the coefficients of digital filter transfer functions);
• How to map a transfer function into a digital filter structure;
• The main concerns in the actual implementation in a finite-precision digital machine; and
• The current trends in the hardware and software implementations of digital filters.

In particular, we focus on digital filters that implement linear systems represented by difference equations. We will show how to design digital filters based on magnitude specifications. In addition, we present efficient digital filter structures that are often required for implementation with minimal cost, taking into consideration that real-time implementation leads to nonlinear distortions caused by the finite word length representation of internal signals and coefficients. Finally, the most widely used approaches to implementing a digital filter are briefly discussed.

2.2 Digital Signal Processing Systems

A typical digital signal processing (DSP) system comprises the following modules, shown in Figure 2.1:

• Analog-to-digital (A/D) converter: Assuming that the input signal is band-limited to frequency components below half of the sampling frequency, the A/D converter obtains signal samples at equally spaced time intervals and converts the level of these samples into a numeric representation that can be used by a digital filter.
• Digital filter: The digital filter uses the discrete-time numeric representation obtained by the A/D converter to perform arithmetic operations that will lead to the filtered output signal.
• Digital-to-analog (D/A) converter: The D/A converter converts the digital filter output into analog samples that are equally spaced in time.
• Low-pass filter: The low-pass filter converts the analog samples of the digital filter output into a continuous-time signal.

2.3 Sampling of Analog Signals

Figure 2.2 shows the mechanism behind sampling of continuous-time signals and its effect on the frequency representation. Figure 2.2(A) shows a continuous signal x(t) multiplied in the time domain by a train of unitary impulses x_i(t). The resulting signal from the multiplication operation is x*(t), which contains samples of x(t). Figure 2.2(B) shows the frequency representation of the original signal x(t) before sampling. Figure 2.2(C) shows the spectrum of the impulse train x_i(t). Finally, Figure 2.3 shows the spectrum of the sampled signal x*(t). According to the convolution theorem, the Fourier transform of the sampled signal x*(t) should be the convolution, in the frequency domain, of the Fourier transform of the impulse train and the Fourier transform of the original signal before sampling. Therefore, it can be shown that:

X_D(e^{j\omega}) = \frac{1}{T} \sum_{l=-\infty}^{\infty} X\left(\frac{j\omega}{T} + \frac{j 2\pi l}{T}\right).    (2.1)

From the above equation and from Figure 2.2, it can be seen that the spectrum of the sampled signal is repeated at every \omega_s = 2\pi/T interval. An important consequence of this spectrum repetition is the sampling theorem shown next.

2.3.1 Sampling Theorem

If a continuous signal is band-limited [i.e., X(j\omega_a) = 0 for |\omega_a| \geq \omega_c], then x(t) can be reconstructed from its samples x(n), -\infty < n < \infty, if \omega_s > 2\omega_c, where \omega_s is the sampling frequency in radians per second.
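As a quick numerical sketch of the theorem's converse (the tone and sampling frequencies below are arbitrary illustrative values, not taken from the text): two sinusoids whose frequencies differ by exactly the sampling frequency produce identical sample sequences, which is why components above \omega_s/2 must be removed before sampling.

```python
import numpy as np

fs = 1000.0                        # sampling frequency in Hz (example value)
T = 1.0 / fs                       # sampling interval
n = np.arange(50)                  # sample indices
f1 = 60.0                          # tone below fs/2
f2 = f1 + fs                       # tone above fs/2 that aliases onto f1
x1 = np.cos(2 * np.pi * f1 * n * T)
x2 = np.cos(2 * np.pi * f2 * n * T)
assert np.allclose(x1, x2)         # the two tones are indistinguishable
```

Since cos(2π(f₁ + f_s)nT) = cos(2πf₁nT + 2πn), the samples coincide exactly.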

[Figure 2.1 shows the signal path x(t) → sample-and-hold/A/D converter → x(n) → digital filter → y(n) → D/A converter → y*(n) → low-pass filter → y(t).]

FIGURE 2.1 Architecture of a Complete DSP System Used to Filter an Analog Signal

FIGURE 2.2 Sampling of Continuous-Time Signals: (A) Signal Sampling; (B) Signal Spectrum; (C) Impulse Train Spectrum

Figure 2.4 shows sampling effects on a continuous-time signal. Figure 2.4(A) shows the continuous-time signal spectrum, and Figures 2.4(B), 2.4(C), and 2.4(D) show the sampled signal spectra for the cases \omega_c = \omega_s/2, \omega_c > \omega_s/2, and \omega_c < \omega_s/2, respectively. It can be seen from these pictures that only Figure 2.4(D) does not produce distortion in the frequencies between -\omega_c and \omega_c. Figure 2.4(C) shows that when the sampled signal has frequency components above \omega_s/2 (half of the sampling frequency), the sampled signal spectrum is distorted in the frequencies around \omega_s/2. In the case of Figure 2.4(B), no distortion occurred only because the signal had no power at \omega_s/2.¹ This distortion is known as aliasing and is undesirable in most cases. As a consequence, only the case of Figure 2.4(D) can be used to recover the spectrum of the original continuous signal shown in Figure 2.4(A). The recovery of the continuous signal spectrum from the sampled signal can theoretically be accomplished by using an analog filter with flat frequency response that retains only the components between -\omega_s/2 and \omega_s/2 of the spectrum in Figure 2.4(A). The frequency response of the analog filter with these characteristics is shown in Figure 2.5. The impulse response of this filter is:

h(t) = \frac{T \sin(\omega_{LP} t)}{\pi t}.    (2.2)

In equation 2.2, \omega_{LP} is the cutoff frequency of the designed low-pass filter; \omega_{LP} must be chosen to guarantee no aliasing (i.e., \omega_c \leq \omega_{LP} \leq \omega_s/2). The impulse response in the above equation is depicted in Figure 2.5. It is easy to verify that a causal filter with this exact impulse response cannot be implemented. In practice, it is possible to design analog filters that are good approximations of the frequency response in Figure 2.5.

2.4 Digital Filters and Linear Systems

An example of a discrete-time signal is shown in Figure 2.6. In a digital filter, the input signal x(n) is a sequence of numbers indexed by the integer n that can assume only a finite number of amplitude values. Such a sequence might originate from a continuous-time signal x(t) by periodically sampling it at the time instants t = nT, where T is the sampling interval. The output sequence y(n) arises by applying x(n) to the input of the digital filter, with the relationship between x(n) and y(n) represented by the operator \mathcal{F} as:

y(n) = \mathcal{F}[x(n)].    (2.3)

FIGURE 2.3 Spectrum of Sampled Signal

¹ In the case of Figure 2.4(B), distortion could occur only at \omega_s/2.

FIGURE 2.4 Sampling Effects: (A) Continuous-Time Signal Spectrum; (B) Sampled Signal Spectrum, \omega_c = \omega_s/2; (C) Sampled Signal Spectrum, \omega_c > \omega_s/2; (D) Sampled Signal Spectrum, \omega_c < \omega_s/2

2.4.1 Linear Time-Invariant Systems

The main classes of digital filters are the linear, time-invariant (LTI), and causal filters. A linear digital filter responds to a weighted sum of input signals with the same weighted sum of the corresponding individual responses; that is:

\mathcal{F}[\alpha_1 x_1(n) + \alpha_2 x_2(n)] = \alpha_1 \mathcal{F}[x_1(n)] + \alpha_2 \mathcal{F}[x_2(n)],    (2.4)

for any sequences x_1(n) and x_2(n) and for any arbitrary constants \alpha_1 and \alpha_2. A digital filter is said to be time-invariant when its response to an input sequence remains the same irrespective of the time instant at which the input is applied to the filter. That is, if \mathcal{F}[x(n)] = y(n), then:

\mathcal{F}[x(n - n_0)] = y(n - n_0),    (2.5)

for all integers n and n_0. A causal digital filter is one whose response does not anticipate the behavior of the excitation signal. Therefore, for any two input sequences x_1(n) and x_2(n) such that x_1(n) = x_2(n) for n \leq n_0, the corresponding responses of the digital filter are identical for n \leq n_0; that is:

\mathcal{F}[x_1(n)] = \mathcal{F}[x_2(n)], \quad \text{for } n \leq n_0.    (2.6)

An initially relaxed linear time-invariant digital filter is characterized by its response to the unit sample or impulse sequence \delta(n). The filter response when excited by such a sequence is denoted by h(n) and is referred to as the impulse response of the digital filter. Observe that if the digital filter is causal, then h(n) = 0 for n < 0. An arbitrary input sequence

FIGURE 2.5 Ideal Low-Pass Filter

FIGURE 2.6 Discrete-Time Signal Representation

can be expressed as a sum of delayed and weighted impulse sequences:

x(n) = \sum_{k=-\infty}^{\infty} x(k)\, \delta(n-k),    (2.7)

and the response of an LTI digital filter to x(n) can then be expressed as:

y(n) = \mathcal{F}\left[\sum_{k=-\infty}^{\infty} x(k)\, \delta(n-k)\right] = \sum_{k=-\infty}^{\infty} x(k)\, \mathcal{F}[\delta(n-k)] = \sum_{k=-\infty}^{\infty} x(k)\, h(n-k) = x(n) * h(n).    (2.8)

The summation in the last two forms of the above expression, called the convolution sum, relates the output sequence of a digital filter to its impulse response h(n) and to the input sequence x(n). Equation 2.8 can be rewritten as:

y(n) = \sum_{k=-\infty}^{\infty} h(k)\, x(n-k).    (2.9)

Defining the z-transform of a sequence x(n) as:

X(z) = \sum_{n=-\infty}^{\infty} x(n)\, z^{-n},    (2.10)

the transfer function of a digital filter is the ratio of the z-transform of the output sequence to the z-transform of the input signal:

H(z) = \frac{Y(z)}{X(z)}.    (2.11)

Taking the z-transform of both sides of the convolution expression of equation 2.9 yields:

\sum_{n=-\infty}^{\infty} y(n)\, z^{-n} = \sum_{n=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} h(k)\, x(n-k)\, z^{-n},    (2.12)

and substituting variables (l = n - k):

\sum_{n=-\infty}^{\infty} y(n)\, z^{-n} = \sum_{k=-\infty}^{\infty} h(k)\, z^{-k} \sum_{l=-\infty}^{\infty} x(l)\, z^{-l}.    (2.13)

The following relation among the z-transforms of the output Y(z), the input X(z), and the impulse response H(z) of a digital filter is thus obtained:

Y(z) = H(z)X(z).    (2.14)

Hence, the transfer function of an LTI digital filter is the z-transform of its impulse response.

2.4.2 Difference Equation

A general digital filter can be described by the following difference equation:

\sum_{i=0}^{N} a_i\, y(n-i) - \sum_{l=0}^{M} b_l\, x(n-l) = 0.    (2.15)
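The convolution sum of equation 2.9 can be checked numerically with a short sketch (h and x below are arbitrary example sequences, not from the text; numpy's built-in convolution serves as a cross-check):

```python
import numpy as np

h = np.array([1.0, 0.5, 0.25])             # impulse response (example)
x = np.array([1.0, -1.0, 2.0, 0.0, 1.0])   # input sequence (example)
N = len(x) + len(h) - 1                    # length of the nonzero output
y = np.zeros(N)
for n in range(N):
    for k in range(len(x)):                # y(n) = sum_k x(k) h(n - k)
        if 0 <= n - k < len(h):
            y[n] += x[k] * h[n - k]
assert np.allclose(y, np.convolve(x, h))   # matches numpy's convolution
```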

Most digital filters can be described by difference equations that are suitable for implementation in digital machines. It is important to guarantee that the difference equation represents a system that is linear, time-invariant, and causal. This is guaranteed if the auxiliary conditions of the underlying difference equation correspond to its initial conditions and the system is initially relaxed.

Nonrecursive digital filters compute the output from input samples only. In general, they can be described by equation 2.15 with a_0 = 1 and a_i = 0 for i = 1, \ldots, N:

y(n) = \sum_{l=0}^{M} b_l\, x(n-l).    (2.16)

It can easily be shown that these filters have a finite impulse response, and they are therefore known as FIR filters. For a_0 = 1, equation 2.15 can be rewritten as:

y(n) = -\sum_{i=1}^{N} a_i\, y(n-i) + \sum_{l=0}^{M} b_l\, x(n-l).    (2.17)

It can be shown that these filters have, in most cases, an infinite impulse response [i.e., y(n) \neq 0 as n \to \infty], and they are therefore known as IIR filters.

2.5 Finite Impulse Response (FIR) Filters

The general transfer function of a finite impulse response (FIR) filter is given by:

H(z) = \sum_{l=0}^{M} b_l\, z^{-l} = H_0\, z^{-M} \prod_{l=1}^{M} (z - z_l).    (2.18)

A filter with the transfer function in equation 2.18 will always be stable because it has no poles outside the unit circle. FIR filters can be designed to have linear phase, a desirable feature in a number of signal processing applications (e.g., image processing). FIR filters can be designed by using optimization packages or by using approximations based on different types of windows. The main drawback is that, to satisfy demanding magnitude specifications, an FIR filter requires a relatively high number of multiplications, additions, and storage elements. These facts make FIR filters potentially more expensive than IIR filters in applications where the number of arithmetic operations or storage elements is expensive or limited. However, FIR filters are widely used because they are suitable for designing linear-phase filters. An FIR filter has linear phase if and only if its impulse response is symmetric or antisymmetric; that is:

h(n) = \pm h(M - n).

Let us consider in detail a particular case of linear-phase FIR filters. For M even:

H(z) = \sum_{n=0}^{M/2-1} h(n)\, z^{-n} + h\!\left(\frac{M}{2}\right) z^{-M/2} + \sum_{n=M/2+1}^{M} h(n)\, z^{-n}.    (2.19)

Evaluating equation 2.19 (M even) on the unit circle, z = e^{j\omega}, we have:

H(e^{j\omega}) = \sum_{n=0}^{M/2-1} h(n)\, e^{-j\omega n} + h\!\left(\frac{M}{2}\right) e^{-j\omega M/2} + \sum_{n=M/2+1}^{M} h(n)\, e^{-j\omega n}.

Considering the case of a symmetric impulse response [i.e., h(n) = h(M - n) for n = 0, 1, \ldots, M/2], we have:

H(e^{j\omega}) = e^{-j\omega M/2} \left[ h\!\left(\frac{M}{2}\right) + \sum_{n=0}^{M/2-1} 2h(n) \cos\!\left[\omega\!\left(n - \frac{M}{2}\right)\right] \right].

By replacing n by (M/2 - m), the equation above becomes:

H(e^{j\omega}) = e^{-j\omega M/2} \left[ h\!\left(\frac{M}{2}\right) + \sum_{m=1}^{M/2} 2h\!\left(\frac{M}{2} - m\right) \cos(\omega m) \right].

Defining a_0 = h(M/2) and a_m = 2h(M/2 - m), for m = 1, 2, \ldots, M/2, we have:

H(e^{j\omega}) = e^{-j\omega M/2} \sum_{m=0}^{M/2} a_m \cos(\omega m).    (2.20)

In summary, there are four possible cases of FIR filters with linear phase:

• Case A: M Even and Symmetric Impulse Response
  (1) Characteristics: h(n) = h(M - n) for n = 0, 1, \ldots, M/2; H(e^{j\omega}) = e^{-j\omega M/2} \sum_{m=0}^{M/2} a_m \cos(\omega m), where a_0 = h(M/2) and a_m = 2h(M/2 - m) for m = 1, 2, \ldots, M/2.
  (2) Phase: \Theta(\omega) = -\omega M/2.
  (3) Group delay: \tau = -\partial\Theta(\omega)/\partial\omega = M/2.
• Case B: M Odd and Symmetric Impulse Response
  (1) Characteristics: h(n) = h(M - n) for n = 0, 1, \ldots, (M/2 - 0.5); H(e^{j\omega}) = e^{-j\omega M/2} \sum_{m=1}^{M/2+0.5} b_m \cos[\omega(m - \frac{1}{2})], where b_m = 2h(M/2 + 0.5 - m) for m = 1, 2, \ldots, (M/2 + 0.5).
  (2) Phase: \Theta(\omega) = -\omega M/2.
  (3) Group delay: \tau = M/2.

• Case C: M Even and Antisymmetric Impulse Response
  (1) Characteristics: h(n) = -h(M - n) for n = 0, 1, \ldots, (M/2 - 1), and h(M/2) = 0; H(e^{j\omega}) = e^{-j(\omega M/2 - \pi/2)} \sum_{m=1}^{M/2} c_m \sin(\omega m), where c_m = 2h(M/2 - m) for m = 1, 2, \ldots, M/2.
  (2) Phase: \Theta(\omega) = -\omega M/2 + \pi/2.
  (3) Group delay: \tau = M/2.
• Case D: M Odd and Antisymmetric Impulse Response
  (1) Characteristics: h(n) = -h(M - n) for n = 0, 1, \ldots, (M/2 - 0.5); H(e^{j\omega}) = e^{-j(\omega M/2 - \pi/2)} \sum_{m=1}^{M/2+0.5} d_m \sin[\omega(m - \frac{1}{2})], where d_m = 2h(M/2 + 0.5 - m) for m = 1, 2, \ldots, (M/2 + 0.5).
  (2) Phase: \Theta(\omega) = -\omega M/2 + \pi/2.
  (3) Group delay: \tau = M/2.

As an illustration, Figure 2.7 depicts typical impulse responses for the four linear-phase FIR filter cases.

Equation 2.19 for linear-phase FIR filters can be rewritten as:

H(z) = z^{-M/2} \sum_{l=0}^{L} g_l \left(z^{l} \pm z^{-l}\right),    (2.21)

where the coefficients g_l are related to h(l), with L = M/2 - 0.5 for M odd and L = M/2 for M even. If there is a zero of H(z) at z_0, then z_0^{-1} is also a zero of H(z). This means that all complex zeros not located on the unit circle occur in conjugate and reciprocal quadruples, zeros on the real axis outside the unit circle occur in reciprocal pairs, and zeros on the unit circle occur in conjugate pairs. It is possible to verify that the case B linear-phase transfer function has a zero at \omega = \pi, so high-pass and band-stop filters cannot be approximated using this case. For case C, there are zeros at \omega = \pi and \omega = 0; therefore, this case is not suitable for low-pass, high-pass, or band-stop filter designs. Case D has a zero at \omega = 0 and is not suitable for low-pass and band-stop designs.
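Both the linear-phase property and the zero-location property above can be verified numerically (a sketch with an arbitrary symmetric coefficient vector, not code from the text):

```python
import numpy as np

# Case A example: M even, symmetric h(n) = h(M - n).
h = np.array([2.0, -3.0, 5.0, -3.0, 2.0])   # arbitrary symmetric taps
M = len(h) - 1

# Linear phase: H(e^{jw}) e^{jwM/2} must be purely real (phase -wM/2).
w = np.linspace(0, np.pi, 64)
H = np.array([np.sum(h * np.exp(-1j * wi * np.arange(M + 1))) for wi in w])
assert np.allclose((H * np.exp(1j * w * M / 2)).imag, 0.0, atol=1e-10)

# Zeros: if z0 is a zero of H(z), so is 1/z0 (reciprocal pairs/quadruples).
zeros = np.roots(h)
for z0 in zeros:
    assert np.any(np.isclose(zeros, 1.0 / z0))
```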

FIGURE 2.7 Linear-Phase FIR Filter: Typical Impulse Response: (A) Case A; (B) Case B; (C) Case C; (D) Case D

2.6 Infinite Impulse Response Filters

The general transfer function for infinite impulse response (IIR) filters is:

H(z) = \frac{Y(z)}{X(z)} = \frac{\sum_{l=0}^{M} b_l\, z^{-l}}{1 + \sum_{i=1}^{N} a_i\, z^{-i}}.    (2.22)

The above equation can be rewritten in the following alternative forms:

H(z) = H_0 \frac{\prod_{l=1}^{M} (1 - z^{-1} z_l)}{\prod_{i=1}^{N} (1 - z^{-1} p_i)} = H_0\, z^{N-M} \frac{\prod_{l=1}^{M} (z - z_l)}{\prod_{i=1}^{N} (z - p_i)}.    (2.23)

The above equations show that, unlike FIR filters, IIR filters have poles located at p_i. Therefore, |p_i| < 1 is required to guarantee the stability of the IIR transfer function.
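The stability condition |p_i| < 1 can be checked numerically by finding the roots of the denominator polynomial (the coefficients below are an arbitrary stable example):

```python
import numpy as np

# Denominator 1 - 0.9 z^-1 + 0.2 z^-2, i.e., z^2 - 0.9 z + 0.2 in z:
# factors as (z - 0.5)(z - 0.4), so the poles are 0.5 and 0.4.
a = np.array([1.0, -0.9, 0.2])      # example stable denominator
poles = np.roots(a)                 # poles p_i of H(z)
assert np.all(np.abs(poles) < 1.0)  # all inside the unit circle: stable
```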


2.7 Digital Filter Realizations
From equation 2.15, it can be seen that digital filters use three basic types of operations:

• Delay: Used to store previous samples of the input and output signals, as well as internal states of the filter.
• Multiplier: Used to implement the multiplication operations of a particular digital filter structure; also used to implement divisions.
• Adder: Used to implement the additions and subtractions of the digital filter.

2.7.1 FIR Filter Structures

General FIR filter structures are shown in Figures 2.8 and 2.9. Figure 2.8 shows the direct-form nonrecursive structure, while Figure 2.9 depicts an alternative transposed² structure. These structures are equivalent when implemented in software. Since the impulse responses of linear-phase FIR filters are either symmetric or antisymmetric [i.e., h(n) = \pm h(M - n)], this property can be exploited to reduce the overall number of multiplications of the FIR filter realization. Figure 2.10 shows the resulting realizations for the case of symmetric h(n); for the antisymmetric case, the structures are straightforward to obtain.

FIGURE 2.8 Direct-Form Nonrecursive Structure

FIGURE 2.9 Alternative-Form Nonrecursive Structure

FIGURE 2.10 Linear-Phase FIR Filter Realization with Symmetric Impulse Response: (A) Odd Order; (B) Even Order

² This structure is characterized by the branches being reversed, such that the sinks turn into summations and the summations become distribution nodes.

2.7.2 IIR Filter Realizations

A general IIR transfer function can be written as in equation 2.22. The numerator of this transfer function can be implemented by using an FIR filter, whereas the denominator entails the use of a recursive structure. The cascade of these realizations for the numerator and denominator is shown in Figure 2.11. If the recursive part is implemented first, the realization in Figure 2.11 can be transformed into the structure in Figure 2.12 for the case N = M. The transpose of the structure in Figure 2.12 is shown in Figure 2.13. In general, IIR transfer functions can be implemented as a cascade or as a summation of lower-order IIR structures. If an IIR transfer function is implemented as a cascade of m second-order sections, it is possible to write:

H(z) = \prod_{k=1}^{m} \frac{\gamma_{0k} + \gamma_{1k} z^{-1} + \gamma_{2k} z^{-2}}{1 + m_{1k} z^{-1} + m_{2k} z^{-2}} = \prod_{k=1}^{m} \frac{\gamma_{0k} z^{2} + \gamma_{1k} z + \gamma_{2k}}{z^{2} + m_{1k} z + m_{2k}} = H_0 \prod_{k=1}^{m} \frac{z^{2} + \gamma'_{1k} z + \gamma'_{2k}}{z^{2} + m_{1k} z + m_{2k}} = \prod_{k=1}^{m} H_k(z).    (2.24)

FIGURE 2.11 Recursive Structure in Direct Form

FIGURE 2.12 Direct-Form Structure for N = M

FIGURE 2.13 Alternative Direct-Form Structure for N = M

FIGURE 2.14 Realization with Second-Order Sections: (A) Cascade; (B) Parallel

Equation 2.24 is depicted in Figure 2.14(A). If an IIR transfer function is instead implemented as the addition of m lower-order sections, it is possible to write:

H(z) = \sum_{k=1}^{m} \frac{\gamma^{p}_{0k} z^{2} + \gamma^{p}_{1k} z + \gamma^{p}_{2k}}{z^{2} + m_{1k} z + m_{2k}} = h_0 + \sum_{k=1}^{m} \frac{\gamma'^{p}_{1k} z + \gamma'^{p}_{2k}}{z^{2} + m_{1k} z + m_{2k}}.    (2.25)

Equation 2.25 is depicted in Figure 2.14(B). In general, for implementations with high signal-to-quantization-noise ratios, second-order sections are preferred. Different types of structures can be used for the second-order sections, such as the direct-form structures shown in Figures 2.15(A) and 2.15(B) or alternative biquadratic structures like state-space sections (Diniz and Antoniou, 1986).
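A cascade of second-order sections as in equation 2.24 can be sketched with plain difference equations (a minimal, unoptimized sketch; the tuple layout for the coefficients is an assumption made here for illustration, not notation from the text):

```python
import numpy as np

def sos_filter(sections, x):
    """Filter x through a cascade of second-order sections.

    Each section is a tuple (g0, g1, g2, m1, m2) standing for
    (g0 + g1 z^-1 + g2 z^-2) / (1 + m1 z^-1 + m2 z^-2)."""
    y = np.asarray(x, dtype=float)
    for g0, g1, g2, m1, m2 in sections:
        out = np.zeros_like(y)
        for n in range(len(y)):
            out[n] = g0 * y[n]
            if n >= 1:
                out[n] += g1 * y[n - 1] - m1 * out[n - 1]
            if n >= 2:
                out[n] += g2 * y[n - 2] - m2 * out[n - 2]
        y = out                  # output of one section feeds the next
    return y

# single section y(n) = x(n) + 0.5 y(n-1): impulse response 1, 0.5, 0.25, ...
y = sos_filter([(1.0, 0.0, 0.0, -0.5, 0.0)], [1.0, 0.0, 0.0, 0.0])
assert np.allclose(y, [1.0, 0.5, 0.25, 0.125])
```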

FIGURE 2.15 Basic Second-Order Sections: (A) Type 1; (B) Type 2

2.8 FIR Filter Approximation Methods

There are three basic approximation methods for FIR filters satisfying given specifications. The first, the window method, is very simple and well established but generally leads to transfer functions of higher order than those obtained by optimization methods. The second, known as the minimax method, calculates a transfer function of minimum order satisfying prescribed specifications; it is iterative and available in most filter design packages. The third, known as the weighted least squares–Chebyshev³ (WLS-Chebyshev) method, is reliable but not widely known. The WLS-Chebyshev method is particularly suitable for multirate systems because the resulting transfer functions can have decreasing energy in a prescribed range of frequencies.

³ The WLS-Chebyshev method is also known as the peak-constrained method.

2.8.1 Window Method

The window method starts by obtaining the impulse response of ideal prototype filters. These responses for standard filters are shown in Figure 2.16. In general, the impulse response of a desired filter is calculated according to:

h(n) = \frac{1}{2\pi} \int_{-\pi}^{\pi} H(e^{j\omega})\, e^{j\omega n}\, d\omega.    (2.26)

FIGURE 2.16 Ideal Frequency Responses: (A) Low-Pass; (B) Band-Pass; (C) Band-Reject; (D) High-Pass
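As a sketch of equation 2.26 for the low-pass prototype of Figure 2.16(A): the integral evaluates to the standard closed form h(n) = sin(\omega_c n)/(\pi n), with h(0) = \omega_c/\pi (a well-known result stated here, not derived in the text so far); truncating it to |n| \leq M/2 is the rectangular-window design. The order and cutoff below are arbitrary example values.

```python
import numpy as np

M = 50                         # filter order, even (example value)
wc = 0.4 * np.pi               # cutoff frequency (example value)
n = np.arange(-M // 2, M // 2 + 1)
# np.sinc(x) = sin(pi x)/(pi x), so this equals sin(wc n)/(pi n),
# and it handles n = 0 safely, giving wc/pi.
h = (wc / np.pi) * np.sinc(wc * n / np.pi)
assert np.allclose(h, h[::-1])        # symmetric taps: linear phase
assert abs(np.sum(h) - 1.0) < 0.1     # DC gain close to unity
```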

In the case of a band-pass filter, the prototype filter can be described by the following frequency response:

|H(e^{j\omega})| = \begin{cases} 0, & 0 \leq |\omega| < \omega_{c1} \\ 1, & \omega_{c1} \leq |\omega| \leq \omega_{c2} \\ 0, & \omega_{c2} < |\omega| \leq \pi \end{cases}    (2.27)

Using the inverse Fourier transform, the impulse response of the filter with the response in equation 2.27 can be obtained as follows:

h(n) = \frac{1}{2\pi} \int_{-\pi}^{\pi} H(e^{j\omega})\, e^{j\omega n}\, d\omega = \frac{1}{\pi} \int_{\omega_{c1}}^{\omega_{c2}} \cos(\omega n)\, d\omega = \frac{1}{\pi n} \left[\sin(\omega_{c2} n) - \sin(\omega_{c1} n)\right],    (2.28)

for n \neq 0; for n = 0, h(0) = \frac{1}{\pi}(\omega_{c2} - \omega_{c1}). Different window sequences can be multiplied by h(n) to limit its length, generating the impulse response of the designed FIR filter as follows:

h'(n) = h(n)\, w(n).    (2.29)

In equation 2.29, w(n) is the window sequence, and h'(n) is the impulse response of the obtained filter. The rectangular window consists of truncating the sequence h(n); that is, w(n) = 1 for n = -M/2, -M/2 + 1, \ldots, 0, \ldots, M/2 - 1, M/2. The causal transfer function is given by:

H(z) = z^{-M/2} \sum_{n=-M/2}^{M/2} h'(n)\, z^{-n}.    (2.37)

2.8.2 Saramäki Window

The abrupt truncation of the impulse response performed by the rectangular window leads to oscillations at frequencies close to the resulting filter band edges. Satisfying prescribed specifications is difficult because these ripples have uncontrollable maximum magnitude. As alternatives, there are more flexible windows incorporating design parameters that allow a trade-off between the transition bandwidth and the ripple magnitude (Saramäki, 1992). The frequency response of the unscaled Saramäki window is known to be:

W(\omega) = \sum_{n=-M/2}^{M/2} \tilde{w}(n)\, e^{-j\omega n} = 1 + \sum_{k=1}^{M/2} 2\, T_k[\gamma \cos\omega + (\gamma - 1)] = \frac{\sin\left[\frac{M+1}{2} \cos^{-1}\{\gamma \cos\omega + (\gamma - 1)\}\right]}{\sin\left[\frac{1}{2} \cos^{-1}\{\gamma \cos\omega + (\gamma - 1)\}\right]},    (2.30)

where T_k is the kth-degree Chebyshev polynomial and:

\gamma = \frac{1 + \cos[2\pi/(M+1)]}{1 + \cos[2\beta\pi/(M+1)]}.    (2.31)

In the above equation, \beta is a variable parameter that adjusts the Saramäki window main-lobe width, given by 4\beta\pi/(M+1). In the special case \beta = 1, the Saramäki window becomes identical to the rectangular window. The desired normalized window function is:

w(n) = \tilde{w}(n)/\tilde{w}(0),    (2.32)

for -M/2 \leq n \leq M/2, so that w(0) = 1. The unscaled coefficients \tilde{w}(n) can be expressed as:

\tilde{w}(n) = v_0(n) + 2 \sum_{k=1}^{M/2} v_k(n),    (2.33)

where the v_k(n) can be calculated according to the recursive equations below:

v_0(n) = \begin{cases} 1, & n = 0 \\ 0, & \text{otherwise} \end{cases}    (2.34)

v_1(n) = \begin{cases} \gamma - 1, & n = 0 \\ \gamma/2, & |n| = 1 \\ 0, & \text{otherwise} \end{cases}    (2.35)

v_k(n) = \begin{cases} 2(\gamma - 1)\, v_{k-1}(n) + \gamma\left[v_{k-1}(n-1) + v_{k-1}(n+1)\right] - v_{k-2}(n), & -k \leq n \leq k \\ 0, & \text{otherwise} \end{cases}    (2.36)

Example
Design an FIR filter satisfying the following specifications: A_p = 0.2 dB, A_r = 32 dB, \omega_p = 2\pi \times 3400 rad/s, \omega_r = 2\pi \times 4600 rad/s, and \omega_s = 2\pi \times 20{,}000 rad/s.

Solution
Figure 2.17 shows the different responses obtained by using the rectangular and Saramäki windows on a low-pass filter. It can be seen that the Saramäki window shows improvements with respect to the rectangular window response. In fact, the rectangular window required a high order of 248; to satisfy the specifications, the transition band had to be reduced due to the high ripples close to the band edges. The Saramäki window required only order 48 to meet the specifications.
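The recursion of equations 2.34 through 2.36 can be sketched directly (a minimal implementation; the orders and \beta values used in the checks are arbitrary, and \beta = 1 should reproduce the rectangular window as stated above):

```python
import numpy as np

def saramaki_window(M, beta):
    """Saramaki window of length M+1 (M even), built from the
    recursion of equations 2.34-2.36 and normalized per 2.32."""
    gamma = (1 + np.cos(2 * np.pi / (M + 1))) / (1 + np.cos(2 * beta * np.pi / (M + 1)))
    half = M // 2                 # position n is stored at index n + half
    v_prev = np.zeros(M + 1)      # v_{k-2}
    v_curr = np.zeros(M + 1)      # v_{k-1}
    v_prev[half] = 1.0            # v_0(n) = delta(n), equation 2.34
    w = v_prev.copy()             # accumulates v_0 + 2 * sum_k v_k (2.33)
    v_curr[half] = gamma - 1      # v_1(n), equation 2.35
    v_curr[half - 1] = v_curr[half + 1] = gamma / 2
    w += 2 * v_curr
    for k in range(2, half + 1):  # equation 2.36
        v_next = 2 * (gamma - 1) * v_curr - v_prev
        v_next[1:] += gamma * v_curr[:-1]    # v_{k-1}(n-1) term
        v_next[:-1] += gamma * v_curr[1:]    # v_{k-1}(n+1) term
        w += 2 * v_next
        v_prev, v_curr = v_curr, v_next
    return w / w[half]            # normalize so that w(0) = 1

# beta = 1 reduces to the rectangular window
assert np.allclose(saramaki_window(8, 1.0), 1.0)
```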

2.9 FIR Filter Design by Optimization

In the minimax approach, our aim is to design filters that minimize the peak (maximum) of the error with respect to

FIGURE 2.17 Low-Pass Filter Using the Window Method: (A) Rectangular Window; (B) Saramäki Window

desired characteristics, without directly considering the error energy. On the other hand, the least-squares approach aims to minimize the error energy. As can be observed in Figure 2.18, there are solutions between these two extreme types of objective functions that provide a good trade-off between peak error and error energy. In this section, we present a very flexible approach to designing FIR digital filters satisfying prescribed specifications in terms of the maximum deviation in the passband and in part of the stopband, while minimizing the energy in the stopband frequency range. This type of approximation is very useful in modern DSP systems involving subsystems with different sampling periods. The approximation method discussed here includes the least-squares and minimax approaches as special cases.

FIGURE 2.18 Minimax Versus Least Squares (peak error versus error energy; the ideal error pair is (0, 0))

2.9.1 Problem Formulation

Consider a nonrecursive filter of length M + 1 described by the transfer function:

H(z) = Σ_{n=0}^{M} h(n)z⁻ⁿ,  (2.38)

assuming that ωs = 2π. The frequency response of such a filter is then given by:

H(e^{jω}) = Σ_{n=0}^{M} h(n)e^{−jωn} = e^{jΘ(ω)}Ĥ(ω),  (2.39)

where Θ(ω) and Ĥ(ω) are the phase and magnitude responses of H(e^{jω}), respectively, defined as:

Θ(ω) = tan⁻¹ { Im[H(e^{jω})] / Re[H(e^{jω})] }.  (2.40)

Ĥ(ω) = |H(e^{jω})|.  (2.41)

Assume that the phase response Θ(ω) is linear in ω, that M is even, and that h(n) is symmetrical. The cases of M odd and/or h(n) antisymmetrical can be easily derived based on the proposed formulation. The frequency response of such a filter thus becomes:

H(e^{jω}) = e^{−jMω/2} Σ_{n=0}^{M/2} a_n cos(ωn),  (2.42)

with a₀ = h(M/2) and a_n = 2h(M/2 − n) for n = 1, ..., M/2.

The desired frequency response is given by e^{−jMω/2}D(ω), and W(ω) is a strictly positive weighting function. The weighted error function E(ω) is defined in the frequency domain as:

E(ω) = W(ω)[D(ω) − Ĥ(ω)].  (2.43)

The approximation problem for FIR linear-phase digital filters is equivalent to the minimization of some objective function of E(ω) in such a way that |E(ω)| ≤ δ. Then:

|Ĥ(ω) − D(ω)| ≤ δ/W(ω).  (2.44)

A good discrete approximation of E(ω) can be obtained by evaluating the weighted error function on a dense frequency grid with 0 ≤ ω_i ≤ π for i = 1, ..., L(M + 1). For practical purposes, for a filter of length M + 1, we use 8 ≤ L ≤ 16. Points placed in the transition band are disregarded, whereas the remaining frequencies should be linearly redistributed in the passband and stopband, including their corresponding edges. Thus, the following vector equation results:

e = W(d − Ub),  (2.45)

where:

e = [E(ω₁) E(ω₂) ... E(ω_{L̄(M+1)})]ᵀ.  (2.46)

W = diag[W(ω₁) W(ω₂) ... W(ω_{L̄(M+1)})].  (2.47)

d = [D(ω₁) D(ω₂) ... D(ω_{L̄(M+1)})]ᵀ.  (2.48)

U = [ 1  cos(ω₁)  cos(2ω₁)  ...  cos((M/2)ω₁)
      1  cos(ω₂)  cos(2ω₂)  ...  cos((M/2)ω₂)
      ⋮
      1  cos(ω_{L̄(M+1)})  cos(2ω_{L̄(M+1)})  ...  cos((M/2)ω_{L̄(M+1)}) ].  (2.49)

b = [a₀ a₁ ... a_{M/2}]ᵀ.  (2.50)

In these equations, L̄ < L because the transition frequencies were discarded. A low-pass filter specification includes δp as the passband maximum ripple and δr as the stopband minimum attenuation, with ωp and ωr being the passband and stopband edges, respectively. Based on these values, the following can be defined:

DB_p = 20 log₁₀ [(1 + δp)/(1 − δp)] [dB].  (2.51)

DB_r = 20 log₁₀ (δr) [dB].  (2.52)

The design of a low-pass digital filter using either the minimax method or the WLS approach is performed by choosing the ideal response and weight functions, respectively, as:

D(ω) = { 1,  0 ≤ ω ≤ ωp.
         0,  ωr ≤ ω ≤ π.  (2.53)

W(ω) = { 1,      0 ≤ ω ≤ ωp.
         δp/δr,  ωr ≤ ω ≤ π.  (2.54)

Other types of transfer functions can be similarly defined.

2.9.2 Chebyshev Method

Chebyshev filter design consists of the minimization of the maximum absolute value of E(ω) with respect to the set of filter coefficients:

||E(ω)||_∞ = min_b max_{0≤ω≤π} [W(ω)|D(ω) − Ĥ(ω)|].  (2.55)

Using equations 2.46 to 2.50, the Chebyshev method attempts to minimize:

||E(ω)||_∞ ≈ min_b max_{0≤ω_i≤π} |W[d − Ub]|,  (2.56)

at the discrete set of frequencies. The Chebyshev method minimizes:

DB_δ = 20 log₁₀ (δ) [dB], where δ = max[δp, δr].  (2.57)

2.9.3 Weighted Least-Squares Method

The weighted least-squares (WLS) approach minimizes:

||E(ω)||₂² = (1/π) ∫₀^π |E(ω)|² dω = (1/π) ∫₀^π W²(ω)|D(ω) − Ĥ(ω)|² dω.  (2.58)

This objective function at a discrete set of frequencies is estimated by:

||E(ω)||₂² ≈ eᵀe,  (2.59)

the minimization of which is achieved with:

b* = (UᵀW²U)⁻¹UᵀW²d.  (2.60)

The WLS objective is to maximize the passband-to-stopband ratio (PSR) of energies:

PSR = 10 log₁₀ [ ∫₀^{ωp} |Ĥ(ω)|² dω / ∫_{ωr}^{π} |Ĥ(ω)|² dω ] [dB].  (2.61)

2.9.4 Minimax Filter Design Employing WLS

This section describes a scheme that performs Chebyshev approximation as the limit of a special sequence of weighted least-squares approximations. The algorithm is implemented by a series of WLS approximations using a varying weight matrix W_k, whose elements are calculated by:

W_{k+1}(ω) = W_k(ω)B_k(ω),  (2.62)

where:

B_k(ω) = |E_k(ω)|.  (2.63)

This approach entails increasing the weight where the error is larger. The convergence of this algorithm is slow because usually 10 to 15 WLS designs are required in practice to approximate the Chebyshev solution. An efficiently accelerated algorithm was presented by Lim et al. (1992), characterized by a weight matrix W_k whose elements are recursively updated by:

W_{k+1}(ω) = W_k(ω)B̂_k(ω),  (2.64)

where B̂_k(ω) is the envelope function of B_k(ω) formed by a set of piecewise linear segments that start and end at consecutive extremes of B_k(ω). The band edges are considered extreme frequencies, and the edges from different bands are not connected. By denoting the extreme frequencies at a particular iteration k as ω̃_l, for l = 1, 2, ..., the envelope function is formed as:

B̂_k(ω) = [(ω − ω̃_l)B_k(ω̃_{l+1}) + (ω̃_{l+1} − ω)B_k(ω̃_l)] / (ω̃_{l+1} − ω̃_l),  ω̃_l ≤ ω ≤ ω̃_{l+1}.  (2.65)

Figure 2.19 depicts typical cases of the absolute value of the error function (dash-dotted curve) used by the algorithm of equation 2.62 to update the weighting function; the figure also depicts a corresponding envelope (solid curve) used by the algorithm of equation 2.65.

2.9.5 The WLS-Chebyshev Method

Comparing the adjustments used by the algorithms described by equations 2.62 through 2.65 and illustrated by Figure 2.19 with the piecewise-constant weight function used by the WLS method, it is possible to derive a very simple approach for designing digital filters with some compromise between minimax and WLS constraints (Diniz and Netto, 1999). This approach consists of a modification of the weight-function updating procedure in such a way that it becomes constant after a particular extreme of the stopband of B_k(ω):
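The WLS solution of equation 2.60 can be sketched in Python by assembling the frequency grid, the matrices U and W, and the normal equations explicitly. This is an illustrative implementation, not the authors' code; the design parameters in the usage below are arbitrary.

```python
import math

def wls_lowpass(M, wp, wr, Wstop, grid=40):
    """Even-order linear-phase low-pass FIR via weighted least squares
    (equation 2.60): b* = (U^T W^2 U)^{-1} U^T W^2 d, with unknowns
    a_0 ... a_{M/2} of equation 2.42."""
    # Grid: passband [0, wp] with D = 1, W = 1; stopband [wr, pi] with D = 0, W = Wstop.
    freqs, D, W = [], [], []
    for i in range(grid):
        freqs.append(wp * i / (grid - 1)); D.append(1.0); W.append(1.0)
    for i in range(grid):
        freqs.append(wr + (math.pi - wr) * i / (grid - 1)); D.append(0.0); W.append(Wstop)
    ncoef = M // 2 + 1
    # Accumulate the normal equations A b = c, A = U^T W^2 U, c = U^T W^2 d.
    A = [[0.0] * ncoef for _ in range(ncoef)]
    c = [0.0] * ncoef
    for w, d, wt in zip(freqs, D, W):
        row = [math.cos(n * w) for n in range(ncoef)]
        w2 = wt * wt
        for j in range(ncoef):
            c[j] += w2 * d * row[j]
            for k in range(ncoef):
                A[j][k] += w2 * row[j] * row[k]
    return solve(A, c)

def solve(A, c):
    """Gaussian elimination with partial pivoting."""
    n = len(c)
    Mx = [row[:] + [ci] for row, ci in zip(A, c)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(Mx[r][col]))
        Mx[col], Mx[piv] = Mx[piv], Mx[col]
        for r in range(col + 1, n):
            f = Mx[r][col] / Mx[col][col]
            for k in range(col, n + 1):
                Mx[r][k] -= f * Mx[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (Mx[r][n] - sum(Mx[r][k] * x[k] for k in range(r + 1, n))) / Mx[r][r]
    return x

def amplitude(a, w):
    """Zero-phase amplitude of equation 2.42."""
    return sum(an * math.cos(n * w) for n, an in enumerate(a))
```

A call such as `wls_lowpass(20, 0.4 * math.pi, 0.6 * math.pi, 2.0)` returns the eleven a_n coefficients of an order-20 design with a mildly emphasized stopband.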

W_{k+1}(ω) = W_k(ω)B̃_k(ω).  (2.66)

For the algorithm of equation 2.65, B̃_k(ω) is given by:

B̃_k(ω) = { B̂_k(ω),    0 ≤ ω < ω_L.
            B̂_k(ω_L),  ω_L ≤ ω ≤ π.  (2.67)

The ω_L is the Lth extreme value of the stopband of B(ω) = |E(ω)|. The passband values of B(ω) and B̂(ω) are left unchanged in equation 2.67 to preserve the equiripple property of the minimax method, although it does not have to be that way. The parameter L is the single design parameter of the proposed scheme. Choosing L = 1 makes the new scheme similar to an equiripple-passband WLS design, whereas choosing L as large as possible (i.e., making ω_L = π) returns us to the original minimax algorithms. The computational complexity of WLS-based algorithms, like the ones described here, is of the order of (M + 1)³, where M + 1 is the length of the filter. This complexity can be greatly reduced by taking advantage of the Toeplitz-plus-Hankel internal structure of the matrix (UᵀW²U) in equation 2.60 and by using an efficient grid scheme to minimize the number of frequency values. These simplifications make the computational complexity of WLS-based algorithms comparable to that of the classic minimax approach (Antoniou, 1993). The WLS-based methods, however, have the additional advantage of being easily coded into computer routines.

FIGURE 2.19 Typical Absolute Error Function B(ω) (Dash-Dotted Line) and Corresponding Envelope B̂(ω) (Solid Curve). The B̃(ω) corresponds to the WLS-Chebyshev method (darker dotted curve).

Example
Design FIR filters using the WLS-Chebyshev method satisfying the following specifications: Ap = 0.02 dB, Ar = 32 dB, ωp = 2π × 3400 rad/sec, ωr = 2π × 4600 rad/sec, ωs = 2π × 20,000 rad/sec. These are specifications established by industry standards for prefiltering speech signals before these signals are subsampled to 8 kHz.

Solution
This example applied to the functions seen in Figure 2.19 is depicted in Figure 2.20. The results are for the least-squares and minimax approaches and for the WLS-Chebyshev filter, where ω_L was chosen as the fourth extreme in the filter's stopband. The order of all filters is 40. Compared with the filters designed by the window methods, the WLS, WLS-Chebyshev, and minimax filters are much more economical. The reader should note that the passband ripple specified here is 10 times smaller than the value used in the window example.
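The reweighting idea of equations 2.62 and 2.63 can be illustrated on the simplest possible approximation problem: fitting a single constant to three samples. This is essentially Lawson's classic algorithm; the sample values and the normalization step are our own choices for the sketch.

```python
def minimax_by_reweighting(xs, iters=100):
    """Iteratively reweighted least squares for fitting one constant c
    to the samples xs.  In the spirit of equations 2.62-2.63, weights
    are multiplied by the absolute error each pass, so high-error samples
    gain influence and the fit drifts from the least-squares answer
    toward the minimax (midrange) answer."""
    w = [1.0 / len(xs)] * len(xs)
    c = 0.0
    for _ in range(iters):
        c = sum(wi * x for wi, x in zip(w, xs)) / sum(w)  # weighted LS fit
        w = [wi * abs(x - c) for wi, x in zip(w, xs)]     # reweight by |error|
        s = sum(w)
        w = [wi / s for wi in w]                          # normalize
    return c
```

For the samples {0, 1, 10}, the least-squares fit is the mean 11/3, whereas the minimax fit is the midrange 5; the iteration moves from the former toward the latter, mirroring the slow convergence noted for equation 2.62.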

FIGURE 2.20 Frequency Responses of the WLS (Dotted Line), WLS-Chebyshev (Dash-Dotted Line), and Minimax (Solid Line) Filters

2.10 IIR Filter Approximations
IIR filters are usually designed by using analog filter approximations that are easy to apply and are well established. IIR filters can also be designed by using optimization methods. It is known, however, that simultaneous approximation of magnitude and phase is a difficult problem, and no reliable iterative design methods are available. The use of optimization methods in IIR filter design is more common in equalization problems when specifications for both magnitude and phase are given.

2.10.1 Analog Filter Transformation Methods

There are two popular methods for designing digital filters starting from analog approximations: the impulse invariance method and the bilinear transformation method. These methods are discussed in the following sections.

Impulse Invariance Method
In the impulse invariance method, a digital filter is designed so that its impulse response is close to the sampled analog impulse response. This method is useful for low-pass and some band-pass filter approximations. In general:

h_d(n) = T h_a(nT),  (2.68)

where h_d(n) is the digital filter impulse response and h_a(t) is the analog filter impulse response with desired specifications. The above equation implies:

H_d(z) = Σ_{k=1}^{N} T r_k z / (z − e^{p_k T}),  (2.69)

where r_k and p_k are the residues and poles of the partial-fraction expansion of the analog transfer function approximation.

Bilinear Transformation Method
The bilinear transformation method is based on applying the transformation s = 2(z − 1)/[T(z + 1)] to the designed analog transfer function, so that:

H_d(z) = H_a(s)|_{s = 2(z−1)/[T(z+1)]}.  (2.70)

As a consequence, the bilinear transformation maps the analog frequencies into the digital frequencies as follows:

ω_a = (2/T) tan(ωT/2).  (2.71)

Therefore, to design a digital filter with predefined specifications, it is necessary to first design an analog transfer function with adjusted specifications (prewarped) according to the above equation. Figure 2.21(A) shows how the frequencies in the digital filter are mapped into the designed analog filter frequencies. Figure 2.21(B) shows how the frequency response is affected by the bilinear transformation frequency mapping. The prewarped specifications of the low-pass analog filter prototype are given by:

ω_ap = (2/T) tan(ωp T/2).  (2.72)

ω_ar = (2/T) tan(ωr T/2).  (2.73)

We can apply prewarping to as many frequencies of interest as desired. These frequencies are given by ω_i for i = 1, 2, ..., n, such that the analog filter specifications are written as:

ω_ai = (2/T) tan(ω_i T/2), for i = 1, 2, ..., n.  (2.74)

FIGURE 2.21 Bilinear Transformation Warping Effects: (A) Analog and Digital Frequencies; (B) Bilinear Transformation Effects

Design Procedure
• Prewarp the prescribed frequency specifications ω_i to obtain ω_ai.
• Generate H_a(s), satisfying the specifications at the frequencies ω_ai.
• Generate H_d(z) by replacing s by 2(z − 1)/[T(z + 1)] in H_a(s).

2.10.2 Bilinear Transformation by Pascal Matrix

Consider an analog transfer function H(s) shown below:

H(s) = Σ_{l=0}^{N} b_{N−l} s^l / Σ_{i=0}^{N} a_{N−i} s^i.  (2.75)

When the bilinear transformation in equation 2.70 is applied to the transfer function shown above, the resulting transfer function in the z-domain is shown below:

H(z) = Σ_{l=0}^{N} B_{N−l} z^l / Σ_{i=0}^{N} A_{N−i} z^i.  (2.76)

It can be shown (Pšenička et al., 2002) that it is possible to create a transformation that maps the coefficients â_i into A_i and b̂_i into B_i as shown below:

A = P_N × â.  (2.77)

B = P_N × b̂.  (2.78)

The A and B are the vectors of the z-domain coefficients A_i and B_i defined by:

A = [A_N A_{N−1} A_{N−2} ... A_0]ᵀ, B = [B_N B_{N−1} B_{N−2} ... B_0]ᵀ.  (2.79)

The vectors â and b̂ contain the analog coefficients a_i and b_i scaled by powers of T/2 as follows:

â = [a_0(T/2) a_1(T/2)² ... a_N(T/2)^{N+1}]ᵀ.  (2.80)

b̂ = [b_0(T/2) b_1(T/2)² ... b_N(T/2)^{N+1}]ᵀ.  (2.81)

It can be shown that P_N is a Pascal matrix with the following properties:
• All elements in the first row are ones.
• The elements of the last column can be computed by:

P_{i,N+1} = (−1)^{i−1} N! / [(N − i + 1)!(i − 1)!],  (2.82)

where i = 1, 2, ..., N + 1.
• The remaining elements can be calculated by:

P_{i,j} = P_{i−1,j} + P_{i−1,j+1} + P_{i,j+1},  (2.83)

where i = 2, 3, ..., N, N + 1 and j = N, N − 1, ..., 2, 1.

Example
Design an elliptic filter satisfying the following specifications: Ap = 0.02 dB, Ar = 32 dB, ωp = 2π × 3400 rad/sec, ωr = 2π × 4600 rad/sec, and ωs = 2π × 20,000 rad/sec.

Solution

According to equations 2.72 and 2.73, it is possible to calculate ω_ap and ω_ar as follows:

ω_ap = 5.9140 × 10⁻⁵ rad/sec.
ω_ar = 8.8162 × 10⁻⁵ rad/sec.

An analog elliptic filter that meets the above specifications for ω_ap and ω_ar has the following transfer function:

H(s) = (6.8799 × 10⁻⁶s⁴ + 1.4569 × 10⁻²⁶s³ + 1.1041 × 10⁻¹³s² − 1.5498 × 10⁻³⁴s + 3.9597 × 10⁻²²) / (s⁵ + 8.6175 × 10⁻⁵s⁴ + 8.6791 × 10⁻⁹s³ + 4.3333 × 10⁻¹³s² + 1.6932 × 10⁻¹⁷s + 3.9597 × 10⁻²²).

In this case, the vectors â and b̂ are as follows:

â = 10⁻²³ [0.0040 0.0169 0.0433 0.0868 0.0862 0.1000]ᵀ.

b̂ = [3.9597 × 10⁻²⁶, −1.5498 × 10⁻⁴², 1.1041 × 10⁻²⁵, 1.4569 × 10⁻⁴², 6.8799 × 10⁻²⁶, 0]ᵀ,

and the matrix P₅ is as written here:

P₅ = [  1    1    1    1    1    1
        5    3    1   −1   −3   −5
       10    2   −2   −2    2   10
       10   −2   −2    2    2  −10
        5   −3    1    1   −3    5
        1   −1    1   −1    1   −1 ]

The resulting transfer function H(z) is as follows:

H(z) = (0.2188 + 0.1020z⁻¹ + 0.3127z⁻² + 0.3127z⁻³ + 0.1020z⁻⁴ + 0.2188z⁻⁵) / [10 (0.3372 − 0.7314z⁻¹ + 0.9856z⁻² − 0.7350z⁻³ + 0.3406z⁻⁴ − 0.0703z⁻⁵)].

The magnitude response for the transfer function above is shown in Figure 2.22.

2.11 Quantization in Digital Filters

Quantization errors in digital filters can be classified as:
• Round-off errors derived from internal signals that are quantized before or after additions;
• Deviations in the filter response due to finite word length representation of multiplier coefficients; and
• Errors due to representation of the input signal with a set of discrete levels.

A general digital filter structure with quantizers before delay elements can be represented as in Figure 2.23, with the quantizers implementing rounding for the granular quantization and saturation arithmetic for the overflow nonlinearity. The criterion to choose a digital filter structure for a given application entails evaluating known structures with respect to the effects of finite word length arithmetic and choosing the most suitable one.

FIGURE 2.22 Magnitude Response of the Designed Low-Pass Elliptic Digital Filter
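Looking back at Section 2.10.2, the Pascal-matrix rules of equations 2.82 and 2.83, together with the prewarping of equation 2.71, can be sketched in Python as follows; the function names are ours, and the code reproduces the P₅ matrix of the example.

```python
import math

def prewarp(omega, T):
    """Prewarped analog frequency for the bilinear transformation
    (equation 2.71): omega_a = (2/T) tan(omega*T/2)."""
    return (2.0 / T) * math.tan(omega * T / 2.0)

def pascal_matrix(N):
    """(N+1)x(N+1) Pascal matrix built from equations 2.82 and 2.83:
    first row all ones, last column signed binomial coefficients, and
    the remaining entries filled right-to-left, top-to-bottom."""
    P = [[0] * (N + 1) for _ in range(N + 1)]
    for j in range(N + 1):
        P[0][j] = 1                          # first row: all ones
    for i in range(1, N + 1):
        P[i][N] = (-1) ** i * math.comb(N, i)  # last column (equation 2.82)
    for i in range(1, N + 1):                # recurrence (equation 2.83)
        for j in range(N - 1, -1, -1):
            P[i][j] = P[i - 1][j] + P[i - 1][j + 1] + P[i][j + 1]
    return P
```

`pascal_matrix(5)` yields exactly the P₅ shown in the example; for small ωT the prewarped frequency reduces to ω, as expected from the tangent's Taylor expansion.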

FIGURE 2.23 Digital Filter Including Quantizers at the Delay Inputs
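A minimal model of the quantizers in Figure 2.23 — rounding for the granular quantization plus saturation arithmetic for the overflow nonlinearity — can be sketched as follows (an illustrative helper, not a fixed-point library):

```python
def quantize(x, b):
    """Round x to b fractional bits (step q = 2**-b), then apply
    saturation arithmetic: out-of-range values are replaced by the
    extreme representable number of the same sign."""
    q = 2.0 ** -b
    v = round(x / q) * q          # rounding (granular quantization)
    return max(-1.0, min(1.0 - q, v))  # saturation (overflow nonlinearity)
```

With b = 3 the step is 0.125, so 0.3 rounds to 0.25, while 1.7 saturates to the largest positive value 0.875.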

2.11.1 Coefficient Quantization
The approximation methods described in the previous sections generate digital filter coefficients with high accuracy. After coefficient quantization, the frequency response of the realized digital filter deviates from the ideal response and may eventually fail to meet the prescribed specifications. Because the sensitivity of the filter response to coefficient quantization varies with the structure, the development of low-sensitivity digital filter realizations has raised significant interest (Antoniou, 1993; Diniz et al., 2002). A common procedure is to design the digital filter with infinite coefficient word length satisfying tighter specifications than required, to quantize the coefficients, and then to check whether the prescribed specifications are still met.
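For an FIR filter, the response deviation caused by coefficient quantization can be bounded without sweeping frequencies, since |H(e^{jω}) − H_q(e^{jω})| ≤ Σ_n |h(n) − h_q(n)|. A sketch with illustrative names:

```python
def quantize_coeffs(h, b):
    """Round each coefficient to b fractional bits (step q = 2**-b)."""
    q = 2.0 ** -b
    return [round(c / q) * q for c in h]

def response_deviation_bound(h, hq):
    """Worst-case deviation of an FIR frequency response over all
    frequencies: |H(e^jw) - Hq(e^jw)| <= sum_n |h(n) - hq(n)|."""
    return sum(abs(a - c) for a, c in zip(h, hq))
```

Since each rounded coefficient is off by at most q/2, the bound is at most (M + 1)q/2 for a length-(M + 1) filter, which makes the trade-off between word length and response accuracy explicit.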
2.11.2 Quantization Noise

The discussion here concentrates on fixed-point implementations. In fixed-point arithmetic, a number with modulus less than one can be represented as follows:

x = b₀.b₁b₂b₃ ... b_b,  (2.84)

where b₀ is the sign bit and where b₁b₂b₃ ... b_b represent the modulus using a binary code. For digital filtering, the most widely used binary code is the two's complement representation, where for positive numbers b₀ = 0 and for negative numbers b₀ = 1. The fractionary part of the number, called x₂ here, is represented as:

x₂ = { x,        if b₀ = 0.
       2 − |x|,  if b₀ = 1.  (2.85)

A finite word length multiplier can be modeled in terms of an ideal multiplier followed by a single noise source e(n), as shown in Figure 2.24. For product quantization performed by rounding and for signal levels throughout the filter much larger than the quantization step q = 2⁻ᵇ, it can be shown that the power spectral density of the noise source e_i(n) is given by:

P_{e_i}(z) = q²/12 = 2⁻²ᵇ/12.  (2.86)

In this case, e_i(n) represents a zero-mean white noise process. We can consider that, in practice, e_i(n) and e_k(n + l) are statistically independent for any value of n or l (for i ≠ k). As a result, the contributions of different noise sources can be taken into consideration separately by using the principle of superposition. The power spectral density of the output noise, in a fixed-point digital-filter implementation, is given by:

P_y(z) = σ_e² Σ_{i=1}^{K} G_i(z)G_i(z⁻¹),  (2.87)

where P_{e_i}(e^{jω}) = σ_e² for all i, and each G_i(z) is the transfer function from the ith multiplier output to the output of the filter, as shown in Figure 2.25. The word length, including sign, is b + 1 bits, and K is the number of multipliers of the filter.

FIGURE 2.24 Model for the Noise Generated after a Multiplication

2.11.3 Overflow Limit Cycles

Overflow nonlinearities influence the most significant bits of the signal and cause severe distortion. An overflow can give rise to self-sustained, high-amplitude oscillations known as overflow limit cycles. Digital filters that are free of zero-input limit cycles are also free of overflow oscillations if the overflow nonlinearities are implemented with saturation arithmetic, that is, by replacing the number in overflow by a number with the same sign and with the maximum magnitude that fits the available word length.

When there is an input signal applied to a digital filter, overflow might occur. As a result, input signal scaling is required to reduce the probability of overflow to an acceptable level. Ideally, signal scaling should be applied to ensure that the probability of overflow is the same at each internal node of the digital filter. This way, the signal-to-noise ratio is maximized in fixed-point implementations.

In two's complement arithmetic, the addition of more than two numbers will be correct, independently of the order in which they are added, even if overflow occurs in a partial summation, as long as the overall sum is within the available range to represent the numbers. As a result, a simplified scaling technique can be used in which only the multiplier inputs require scaling. To perform scaling, a multiplier is used at the input of the filter section, as illustrated in Figure 2.25. It is possible to show that the signal at the multiplier input is given by:

x_i(n) = (1/2πj) ∮_c X_i(z)z^{n−1} dz = (1/2π) ∫₀^{2π} F_i(e^{jω})X(e^{jω})e^{jωn} dω,  (2.88)

where c is the convergence region common to F_i(z) and X(z).

FIGURE 2.25 Digital Filter Including Scaling and Noise Transfer Functions

The constant λ is usually calculated by using the L_p norm of the transfer function from the filter input to the multiplier input, F_i(z), depending on the known properties of the input signal. The L_p norm of F_i(z) is defined as:

||F_i(e^{jω})||_p = [ (1/2π) ∫₀^{2π} |F_i(e^{jω})|^p dω ]^{1/p},  (2.89)

for each p ≥ 1, such that ∫₀^{2π} |F_i(e^{jω})|^p dω < ∞. In general, the following inequality is valid:

|x_i(n)| ≤ ||F_i||_p ||X||_q,  (1/p + 1/q = 1),  (2.90)

for p, q = 1, 2, and ∞. The scaling guarantees that the magnitudes of the multiplier inputs are bounded by a number M_max when |x(n)| ≤ M_max. Then, to ensure that all multiplier inputs are bounded by M_max, we must choose λ as follows:

λ = 1 / Max{||F₁||_p, ..., ||F_i||_p, ..., ||F_K||_p},  (2.91)

which means that:

||λF_i(e^{jω})||_p ≤ 1, for ||X(e^{jω})||_q ≤ M_max.  (2.92)

The K is the number of multipliers in the filter. The norm p is usually chosen to be infinity or 2. The L_∞ norm is used for input signals that have some dominating frequency component, whereas the L₂ norm is more suitable for a random input signal. Scaling coefficients can be implemented by simple shift operations provided they satisfy the overflow constraints.

In the case of modular realizations, such as cascade or parallel realizations of digital filters, optimum scaling is accomplished by applying one scaling multiplier per section. As an illustration, we present the equation to compute the scaling factor for the cascade realization with direct-form second-order sections:

λ_i = 1 / || Π_{j=1}^{i} H_j(z)F_i(z) ||_p,  (2.93)

where:

F_i(z) = z / (z² + m_{1i}z + m_{2i}).

The noise power spectral density is computed as:

P_y(z) = σ_e² [ 3 + (3/λ₁²) Π_{i=1}^{m} H_i(z)H_i(z⁻¹) + 5 Σ_{j=2}^{m} (1/λ_j²) Π_{i=j}^{m} H_i(z)H_i(z⁻¹) ],  (2.94)
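Equations 2.86 and 2.87 can be exercised numerically for a hypothetical noise transfer function G(z) = 1/(1 − az⁻¹); the assumption that all K sources see the same G(z) is a simplification made only for this sketch.

```python
def noise_gain(a, terms=200):
    """Energy of the impulse response of G(z) = 1/(1 - a z^-1),
    i.e., the sum of g(n)^2 with g(n) = a^n (closed form 1/(1 - a^2)
    for |a| < 1); the truncated sum converges quickly."""
    return sum((a ** n) ** 2 for n in range(terms))

def output_noise_variance(b, K, a):
    """Equations 2.86-2.87: K rounding sources, each of variance
    2^(-2b)/12, all shaped by the same G(z) (a simplifying assumption)."""
    sigma_e2 = 2.0 ** (-2 * b) / 12.0
    return K * sigma_e2 * noise_gain(a)
```

Each extra bit of word length divides the output noise variance by four, while a pole closer to the unit circle (a → 1) amplifies it through the 1/(1 − a²) factor — the trade-off that motivates the scaling and section-ordering rules above.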

whereas the output noise variance is given by:

σ_o² = σ_e² [ 3 + (3/λ₁²) || Π_{i=1}^{m} H_i(e^{jω}) ||₂² + 5 Σ_{j=2}^{m} (1/λ_j²) || Π_{i=j}^{m} H_i(e^{jω}) ||₂² ].  (2.95)

As a design rule, the pairing of poles and zeros is performed as explained here: poles closer to the unit circle pair with the zeros closest to them, such that ||H_i(z)||_p is minimized for p = 2 or p = ∞. For ordering, we define the following:

P_i = ||H_i(z)||_∞ / ||H_i(z)||₂.  (2.96)

For L₂ scaling, we order the sections such that P_i is decreasing. For L_∞ scaling, P_i should be increasing.

2.11.4 Granularity Limit Cycles

The quantization noise signals become highly correlated from sample to sample and from source to source when signal levels in a digital filter become constant or very low, at least for short periods of time. This correlation can cause autonomous oscillations called granularity limit cycles.

In recursive digital filters implemented with rounding, magnitude truncation,⁴ and other types of quantization, limit-cycle oscillations might occur. In many applications, the presence of limit cycles can be harmful. Therefore, it is desirable to eliminate limit cycles or to keep their amplitude bounds low. If magnitude truncation is used to quantize particular signals in some filter structures, it can be shown that it is possible to eliminate zero-input limit cycles. As a consequence, these digital filters are free of overflow limit cycles when overflow nonlinearities, such as saturation arithmetic, are used. In general, the referred methodology can be applied to the following classes of structures:

• State-space structures: Cascade and parallel realizations of second-order state-space structures include design constraints to control nonlinear oscillations (Diniz and Antoniou, 1986).
• Wave digital filters: These filters emulate doubly terminated lossless filters and have inherent stability under linear conditions as well as in the nonlinear case where the signals are subjected to quantization (Fettweis, 1986).
• Lattice realizations: Modular structures allowing easy limit-cycle elimination (Gray and Markel, 1975).

⁴ Truncation here means that the magnitude of the number is reduced, which leads to a decrease of its energy.

2.12 Real-Time Implementation of Digital Filters

There are many distinct means to implement a digital filter. A detailed description of these implementation methods is beyond the scope of this chapter; the interested reader can refer to Wanhammar (1999) and Diniz (2002) and the references therein.

The most straightforward way to implement digital filters relies on general-purpose computers by programming their central processing units (CPUs) to execute the operations related to a particular digital filter structure. This type of implementation is very flexible because it consists of writing software, allowing fast prototyping and testing. This solution, however, might not be acceptable in applications requiring high processing speed, fast data input/output interfaces, or large-scale production.

Efficient software implementations of digital filters are usually based on special-purpose CPUs known as digital signal processors (DSPs). These processors are capable of implementing sum-of-products operations, also referred to as multiply-and-accumulate (MAC) operations, in a very efficient manner.

Another implementation alternative is to employ programmable logic devices (PLDs) that include a large number of logic functions on a single chip. An advanced version of the PLD is the field-programmable gate array (FPGA). An FPGA is an array of logic macro-cells that are interconnected through a number of communication channels configured in horizontal and vertical directions. FPGAs allow the system designer to configure very complex digital logic that can implement digital signal processing tasks at low cost with reduced power consumption. Usually, high-level software tools are available for the design of digital systems using FPGAs.

Special-purpose hardware is also an option for implementing a digital filter. Hardware implementations consist of designing, and possibly integrating, a digital circuit with logical gates to perform the basic operating blocks inherent to any digital filter structure, namely multiplications, additions, and storage elements. Multiplications and additions can be implemented using bit-serial or bit-parallel architectures. In general, hardware implementations of digital filters are less flexible than software (CPU-based) implementations. Special-purpose hardware, however, is usually necessary when the high cost of development is offset by large production volumes and a DSP-based solution is too expensive or is incapable of meeting the sampling frequency specifications.

Consider a heuristic related to the computational complexity of digital filters in a full custom design (Dempster, 1995). Figure 2.26 shows an intuitive plot of complexity as a function of filter order for all digital filter implementations meant to satisfy a given set of specifications. The complexity of a digital filter is measured in terms of the number of bits used in the coefficients and the number of multiplier operations.⁵ For a prescribed set of frequency specifications and a particular filter structure, a certain order N₀ may be capable of meeting the specifications with infinite precision. For filters with an order lower than N₀, it is not possible to meet the prescribed specifications. Infinite precision translates into very high complexity because an infinite number of bits is required. If a higher order is used, the number of bits necessary for the multiplier coefficients may be reduced, implying lower complexity. The order N₁ in Figure 2.26 is the order of the implementation with the lowest possible complexity. Orders higher than N₁ will necessarily imply higher complexity. The heuristic introduced here, when applied to the case of FIR filter designs using direct-form realizations, leads to complexity curves that are very flat around their minimum point, indicating that almost minimum complexity can be achieved for a wide range of values of the filter order. On the other hand, for IIR filters, the complexity curve is rather sharp, implying that a more careful choice of the filter order should be made.

FIGURE 2.26 Complexity of a Digital Filter (any implementation meeting the specifications lies in the region above the curve; the slope is set by the minimum feasible word length)

⁵ For fixed-point implementations, computational complexity is mainly determined by the number of adders used in the implementation of the multipliers.

2.13 Conclusion

In this chapter, the main steps for the design and implementation of linear time-invariant digital filters were introduced. These filters are very important building blocks for signal processing systems implemented in the discrete-time domain. In particular, the widely available digital technology allows the implementation of very fast and sophisticated filters in a cheap and reliable manner. As a result, digital filters are found in numerous commercial products such as audio systems, biomedical equipment, digital radio, and TV, just to mention a few.

References

Antoniou, A. (1993). Digital filters: Analysis, design, and applications. (2d ed.). New York: McGraw-Hill.
Dempster, A. (1995). Digital filter design for low-complexity implementation. Ph.D. Thesis. University of Cambridge, Cambridge, UK.
Diniz, P.S.R. (2002). Adaptive filtering: Algorithms and practical implementation. (2d ed.). Boston, MA: Kluwer Academic.
Diniz, P.S.R., and Antoniou, A. (1986). More economical state-space digital filter structures which are free of constant-input limit cycles. IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP-34, 807-815.
Diniz, P.S.R., da Silva, E.A.B., and Netto, S.L. (2002). Digital signal processing: System analysis and design. Cambridge, UK: Cambridge University Press.
Diniz, P.S.R., and Netto, S.L. (1999). On WLS-Chebyshev FIR digital filters. Journal of Circuits, Systems, and Computers 9, 155-168.
Fettweis, A. (1986). Wave digital filters: Theory and practice. Proceedings of the IEEE 74, 270-327.
Gray, A.H. Jr., and Markel, J.D. (1975). A normalized digital filter structure. IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP-23, 268-277.
Jackson, L.B. (1996). Digital filters and signal processing. (3rd ed.). Boston, MA: Kluwer Academic.
Lim, Y.C., Lee, J.H., Chen, C.K., and Yang, R.H. (1992). A weighted least-squares algorithm for quasi-equiripple FIR and IIR digital filter design. IEEE Transactions on Signal Processing 40, 551-558.
Mitra, S.K. (2001). Digital signal processing: A computer-based approach. (2d ed.). New York: McGraw-Hill.
Oppenheim, A.V., and Schafer, R.W. (1989). Discrete-time signal processing. Englewood Cliffs, NJ: Prentice Hall.
Pšenička, B., García-Ugalde, F., and Herrera-Camacho, A. (2002). The bilinear Z transform by Pascal matrix and its application in the design of digital filters. IEEE Signal Processing Letters 9, 368-370.
Saramäki, T. (1992). Finite-impulse response filter design. In S.K. Mitra and J.F. Kaiser (Eds.), Handbook of digital signal processing. New York: John Wiley & Sons.
Wanhammar, L. (1999). DSP integrated circuits. New York: Academic Press.