SYNTHESIS OF CIRCUITS
Course contents:
1) Introduction
2) Passive circuits I - Realization of 1-ports.
3) Passive circuits II - Realization of 2-ports.
4) Active filters.
5) Digital filters.
Detailed content:
Passive Circuits
Strategy of filter design
System functions, realizability. Hurwitz polynomials
Passive One-port networks
Cauer synthesis procedure
Passive Two-port networks
Prototype filters
Impedance scaling
Frequency scaling
Transient response
Two-port reactive circuit synthesis
Butterworth and Chebyshev approximations
Frequency transformations
Active circuits
Operational amplifiers
Active two-terminal networks
Impedance converters and inverters, gyrator, negative elements
Active filters
2nd order sections. Sallen-Key circuit, Universal Active-RC Biquad
Digital filters
Recursive and non-recursive filters. Aliasing. Windows.
Digital transfer function, Z-transform
Design of non-recursive filters
Design of recursive filters
Synthesis (Part 2) page 2
Suggested Bibliography
W-K. Chen, “Passive and Active Filters – Theory and Implementations”, Wiley, 1986.
M.E. van Valkenburg, “Analog Filter Design”, HRW, 1982. (Good for the active-filters part of the course.)
F.F. Kuo, “Network Analysis and Synthesis”, Wiley, 1966. (Useful for passive network synthesis.)
R.W. Hamming, “Digital Filters”, Prentice Hall, 1989. (Useful as an introduction to digital filters.)
G.C. Temes and J.W. LaPatra, “Circuit Synthesis and Design”, McGraw-Hill, 1977.
A. Budak, “Passive and Active Network Analysis and Synthesis”, Houghton Mifflin, 1974.
R. Bracewell, “The Fourier Transform and Its Applications”, McGraw-Hill, 1978.
INTRODUCTION
What is synthesis?
Let’s start with analysis, which seems simpler to understand. A typical “analysis problem” is represented as: given an input signal and a given circuit, find the output.
With synthesis the situation is far less clear. In this case we are given the input and the required output, and must find the circuit.
In synthesis there are far too many design approaches to enumerate here, and plenty of scope for anyone to invent new ones. It is a less clearly defined problem, and so there are many ways to approach a solution. Judgement is needed as well as calculation. It is (arguably at least) more difficult, more open-ended and more interesting than analysis. Yet analysis remains vital – to test design ideas before putting them into practice.
For example, a circuit may be required to select a band of frequencies, and at the same time it may be required to have a restricted transient response. This could occur in a radar system where bursts of oscillation are used, and the response to one burst must die away before the next pulse is received.
Usually the requirements are in conflict, and a synthesis procedure consists of finding a compromise. In this example, the better the discrimination made in the frequency domain, the worse the transient response.
[Diagrams: time-domain and frequency-domain statements of the synthesis problem – INPUT (given) and OUTPUT (given), with the CIRCUIT to be found (required); in the frequency-domain version the required response is centred on a frequency f0.]
[Diagram: a pulse train INPUT (given) passes through a CABLE (given) and emerges distorted; an EQUALIZER (required) must restore the OUTPUT pulse train (given).]
The diagram is trying to convey the idea that a pulse train should ideally be
transmitted through the cable undistorted, but the cable characteristics are such that
different frequency components are transmitted with different velocities so that they
arrive at different times giving a distorted waveform (dispersion). The equalizer is a
circuit whose characteristics are the inverse of the cable characteristics - leading, we
hope, to a faithful pulse train at the receiving end.
[Block diagram: Tuner → i.f. Filter → i.f. Amp. Figure: attenuation (dB) mask over 28–46 MHz and group-delay (ns) mask.]
Tolerance masks for the amplitude response and for the group-delay of a television i.f.
filter. The figure also shows a typical response (amplitude and phase) satisfying the
requirements.
FILTERS (linear)
– Analogue: passive or active
– Digital
Passive circuits:
Components: These are restricted to the normal L, C and R electrical components (lumped or distributed), for use in filters, equalizers, etc.
Limited memory: Another restriction is that of severely limited memory: the energy-dissipation time constants place restrictions on how much data from the past can influence the output at t = 0, i.e. the present.
Active Circuits
Components: Active circuits include an energy source, such as an amplifier, as well as resistors and capacitors. Inductors can be, and usually are, avoided.
Active circuits are not constrained to be stable – e.g. they can have an exponentially growing response (within certain limits). This means that poles and zeros of immittance functions can be located anywhere in the complex s-plane.
For example, an amplifier of gain A0 with a feedback network β leads to the frequency response:
V2/V1 = A0/(1 − βA0)
where β is, for instance, a twin-T network using only resistors and capacitors. Q ≈ A0/4 for this circuit.
Active circuits can also simulate negative resistance, capacitance and inductance. A negative resistance can be used to “cancel” losses and, in the same way, a negative capacitance can be used to “cancel out” unwanted capacitance – and this is better than cancelling with an inductor (tuning for resonance) because the latter would only work at one frequency. Negative R, L and C can be simulated with an active circuit which presents Z1 = −Z2 at one port when the other port is terminated in Z2 (a negative impedance converter).
Another useful device is an active circuit which “looks like an inductor” if the other port is terminated with a capacitor: Z1 = 1/Z2 (a gyrator). It can effectively simulate inductors with capacitors – very convenient for integrated circuits.
Digital Filters
Digital filters are algorithms for digital computers or digital circuits. Digital filtering is any linear operation performed on data which have been sampled at equally spaced intervals. It includes smoothing (averaging), integrating, separating signals (filtering) and predicting.
Examples are the Fourier transform (or Fast Fourier Transform), important in signal-processing work, Simpson’s rule or the trapezoidal rule for integration, the central-difference formula for numerical differentiation, etc. All of these can be regarded as digital filters.
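The point that ordinary numerical operations are digital filters can be made concrete with a short sketch (the weights and sample values below are illustrative only, not from the notes):

```python
def fir_filter(x, h):
    """Non-recursive (FIR) digital filter: y[n] = sum_k h[k] * x[n-k]."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, hk in enumerate(h):
            if n - k >= 0:
                acc += hk * x[n - k]
        y.append(acc)
    return y

# A 2-point average is a (crude) low-pass smoothing filter:
print(fir_filter([1, 1, 1, 1], [0.5, 0.5]))   # [0.5, 1.0, 1.0, 1.0]

# The trapezoidal rule of integration is a *recursive* digital filter:
# y[n] = y[n-1] + (T/2) * (x[n] + x[n-1])
def trapezoid(x, T=1.0):
    y = [0.0]
    for n in range(1, len(x)):
        y.append(y[-1] + 0.5 * T * (x[n] + x[n - 1]))
    return y

print(trapezoid([0, 1, 2, 3]))   # [0.0, 0.5, 2.0, 4.5]
```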
We can also find equivalent digital filters to analogue filters. Low-pass and other
analogue filters have their digital counterparts. But digital filters have additionally
some special properties which make them well suited for digital communication
systems, especially when large distances are involved. It is then that analogue systems
are at a particular disadvantage because the attenuation continuously degrades the
signal-to-noise ratio. In digital systems, the signal is completely regenerated at intervals
and then re-transmitted with no loss of information.
There are no impedance-matching problems in the digital domain. Also, two or
more digital filters can have genuinely identical characteristics. These filters are also
programmable so that their characteristics can be changed easily and rapidly - even
almost continuously if needed.
Digital filters can have long memories if required. Initial conditions, far from dying away, can be stored indefinitely if needed, and the accuracy can be made arbitrarily high, limited only by the word length and rounding/truncation errors in the computer.
In some applications they can also “see the future”. For many data-processing applications the information is all available on magnetic tape (or some other form of computer storage) before the calculations begin. The filter algorithm can then be written so as to take data from after, as well as before, the sampling instant for which the output is being calculated. Thus digital filters are not limited by causality or stability.
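A minimal sketch of a filter that “sees the future”: with the whole record stored, the output at index n may average samples on both sides (the window width is an arbitrary choice here):

```python
def centered_average(x, half_width=1):
    """Noncausal smoother: y[n] averages samples before AND after n.
    Only possible when the whole record is already in storage."""
    y = []
    N = len(x)
    for n in range(N):
        lo = max(0, n - half_width)
        hi = min(N, n + half_width + 1)
        window = x[lo:hi]
        y.append(sum(window) / len(window))
    return y

print(centered_average([0, 3, 0]))   # [1.5, 1.0, 1.5]
```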
The only types of elements allowed are R, L and C (lumped or distributed). The
constraints of stability and causality lead to mathematical conditions on the form of Z(s)
and Y(s).
U1/I1 is a driving-point impedance (voltage and current defined at the same port).
For example, we might have H(s) = Y(s) = I(s)/U(s) = 3 + 4s, in which case the realization is simple: a conductance of 3 S (a 1/3 Ω resistor) in parallel with a 4 F capacitor.
Synthesis of 2-ports:
The corresponding synthesis problem for a 2-port is transfer function design (filter
design). A transfer function is a system function for which the variables are defined at
different ports. e.g. if I1(s) is the excitation (input) and U2(s) is the response (output), the
system function to design is the transfer impedance:
Z21(s) = U2(s)/I1(s)
The most important aspect of transfer function synthesis is filter design. For
example, an ideal amplitude response for a low-pass filter is:
[Figure: the ideal “brick-wall” low-pass amplitude response H(ω) and realizable approximations.]
As this “ideal” response is not realizable, this leads to the need for finding the
“best” approximation.
A network with a finite number of elements (R, L, C) has a driving-point immittance of the form:
Z(s) (or Y(s)) = (an s^n + an−1 s^(n−1) + … + a1 s + a0) / (bm s^m + bm−1 s^(m−1) + … + b1 s + b0)   (1)
Also, due to the constraints imposed by causality and stability, Z(s) (or Y(s)) must be what is called a positive real (p.r.) function, i.e.:
(2) Z(s) is real when s is real;
(3) Re{Z(s)} ≥ 0 whenever Re{s} ≥ 0.
These conditions ((1), (2) and (3)) form necessary and sufficient conditions for the realizability of Z(s) (or Y(s)) with R, L and C components only.
The above set of conditions is very concise and constitutes an elegant way of defining realizability, but unfortunately it is not very useful from a practical point of view. To test whether a rational function is p.r. requires analysing its behaviour for all values of s with Re{s} ≥ 0, which is not simple.
It is often convenient to write the function in factorised form:
Z(s) = K (s − z1)(s − z2) ⋯ (s − zn) / [(s − p1)(s − p2) ⋯ (s − pm)]   (4)
Additionally, the sum of two p.r. functions is also p.r. (though not necessarily their difference), and if f(s) is p.r., so is 1/f(s). Perhaps the simplest property to verify is that if a p.r. function is written in the form (1), then all coefficients must be real and positive. Also, unless the numerator or denominator is an even or an odd polynomial, no coefficient of an intermediate power can be missing.
There is some redundancy in the above set but again, conditions I, IV, V and VI
constitute a set of necessary and sufficient conditions and this expanded set is much
easier to verify.
If we write P(s) = m(s) + n(s), where m(s) is the even part of P(s) and n(s) is the odd part, then the continued-fraction expansion of the rational function
R(s) = m(s)/n(s) = q1(s) + 1/(q2(s) + 1/(… + 1/qn(s)))
has only positive coefficients qi.
(Of course we can get the same result if we factorise the function into form (4), where the position of each pole and zero is evident.)
To check condition V we need a partial-fraction decomposition, after which we can check the residues. Finally, condition VI is simply tested by examining the form of F(jω). All this constitutes a rigorous test of realizability. However, the easiest way to assess the realizability of a function is simply to try to realize it.
Examples
a) Z(s) = (as² + b)/s = as + b/s: clearly realizable as a combination of L and C. The zeros are at ±j√(b/a) and the poles at 0 and ∞.
e) Z(s) = (s² + 1)/s³: NO, condition V is not satisfied (the pole at the origin is not simple).
Cauer synthesis:
The procedure is based on the continued-fraction expansion of the quotient of polynomials and leads to networks in ladder form.
Foster synthesis:
This is based on a partial-fraction expansion; in the case of an impedance, the resultant network is then a series connection of elements (or of blocks formed by the parallel connection of two elements).
Cauer Synthesis
We start by analysing a ladder network in which each box represents R, sL or sC, or their corresponding reciprocals: series branches Z1, Z3, Z5 and shunt branches Y2, Y4, Y6, with Z(s) the impedance seen at the input.
By inspection, we have:
Z(s) = Z1 + 1/(Y2 + 1/(Z3 + 1/(Y4 + 1/(Z5 + 1/Y6))))
where the quotients are alternately impedances and admittances, moving from left to right into the ladder network. Since an impedance was specified, the first term must be an impedance, and the ladder starts with a series element. If the admittance were required in the form of a continued fraction, the first term would be an admittance and the ladder network would start with a shunt element.
Example
Given the (realizable) impedance function Z(s) = (s⁴ + 4s² + 3)/(s³ + 2s), express it in the form of a continued fraction and so derive the ladder network, giving all component values.
The first point to note is that there can be more than one continued fraction (and consequently more than one ladder network) for the required Z(s). Additionally, some decompositions in continued-fraction form may not correspond to a physically realizable network.
Dividing in decreasing powers of s:
Z(s) = (s⁴ + 4s² + 3)/(s³ + 2s) = Z1 + Z'2 = s + (2s² + 3)/(s³ + 2s),  so Z1 = s (it must be an impedance!)
Now, Y'2 = 1/Z'2 = (s³ + 2s)/(2s² + 3) = Y2 + Y'3 = s/2 + (s/2)/(2s² + 3),  so Y2 = s/2
and again, Z'3 = 1/Y'3 = (2s² + 3)/(s/2) = 4s + 6/s,  so Z3 = 4s and Y4 = s/6.
The complete expansion is therefore:
Z(s) = s + 1/(s/2 + 1/(4s + 1/(s/6)))
i.e. a ladder with a series inductor of 1 H, a shunt capacitor of 1/2 F, a series inductor of 4 H and a shunt capacitor of 1/6 F.
Alternatively, dividing in increasing powers of s:
Z(s) = (3 + 4s² + s⁴)/(2s + s³) = 3/(2s) + ((5/2)s² + s⁴)/(2s + s³), etc.,
which leads to:
Z(s) = 3/(2s) + 1/(4/(5s) + 1/(25/(2s) + 1/(1/(5s))))
i.e. a series capacitor of 2/3 F, a shunt inductor of 5/4 H, a series capacitor of 2/25 F and a shunt inductor of 5 H.
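The divide-and-invert steps of the Cauer expansion are easy to mechanize with exact rational arithmetic. A sketch (mine, not from the notes) assuming, as in this LC example, that the numerator degree exceeds the denominator degree by one at every step:

```python
from fractions import Fraction

def cauer1(num, den):
    """Cauer continued-fraction expansion of an LC immittance.

    num, den: polynomial coefficients, highest power of s first, with
    deg(num) = deg(den) + 1 at every step.  Returns quotients k1, k2, ...
    meaning Z(s) = k1*s + 1/(k2*s + 1/(k3*s + ...)).
    """
    num = [Fraction(c) for c in num]
    den = [Fraction(c) for c in den]
    quotients = []
    while den and any(den):
        k = num[0] / den[0]          # leading-coefficient ratio -> term k*s
        quotients.append(k)
        rem = num[:]                 # remainder = num - (k*s)*den
        for i in range(len(den)):
            rem[i] -= k * den[i]
        rem = rem[1:]                # the top power cancels exactly
        while rem and rem[0] == 0:   # drop any vanished leading terms
            rem = rem[1:]
        num, den = den, rem          # invert for the next quotient
    return quotients

# The worked example Z(s) = (s^4 + 4s^2 + 3)/(s^3 + 2s):
# quotients 1, 1/2, 4, 1/6, i.e. series L = 1 H, shunt C = 1/2 F,
# series L = 4 H, shunt C = 1/6 F.
print(cauer1([1, 0, 4, 0, 3], [1, 0, 2, 0]))
```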
Given Z(s) = F(s), we can as before try two ways, taking numerator and denominator in decreasing or in increasing order of the powers of s. Also, we can invert the function, taking instead Y(s) = 1/F(s) (which of course corresponds to the same d.p. impedance as before), and try again with the two orderings of the powers of s.
If we consider the function Y(s) = (6s³ + 4s)/(6s² + 1) and attempt the expansion in increasing powers of s, the first division gives
(4s + 6s³)/(1 + 6s²) = 4s + (−18s³)/(1 + 6s²)
and the negative coefficient shows that this particular expansion does not correspond to a realizable network. Expanding instead Z(s) = 1/Y(s) = (6s² + 1)/(6s³ + 4s) in increasing powers of s does succeed:
Z(s) = 1/(4s) + 1/(8/(9s) + 1/(3/(4s)))
i.e. a series capacitor of 4 F, a shunt inductor of 9/8 H and a series capacitor of 4/3 F; so does the expansion of Y(s) in decreasing powers.
Note that we then have two possible realizations, both with identical response.
Exercise
Find ladder network(s) corresponding to: Z(s) = (6s³ + 8s² + 4s + 4)/(6s² + 8s + 1)
L–C Realizability conditions:
1) Z(s) or Y(s) is the ratio of an even to an odd polynomial, or vice versa.
2) All poles and zeros lie on the jω axis.
3) All poles and zeros are simple and occur in conjugate pairs.
4) The residues at the poles, lim s→pi {(s − pi) Z(s)}, must be real and positive.
5) The highest powers of s (and the lowest) in the numerator and denominator must differ by exactly 1.
6) There must be either a pole or a zero at the origin. (Obviously, the same can be said of infinity.)
There is visibly a lot of redundancy in the above constraints. A simpler and more
concise statement comes from considering Z(s) (or Y(s)) in a factorised form.
Examples
(s² + 1)/(s(s² + 4)) is OK (type 3);  s(s² + 1)/(s² + 4) is not OK (its poles and zeros do not alternate along the jω axis).
The simplest way to display LC realizability is to say that, in the pole/zero-and-K representation of equation (4), K is real and positive, and the pole/zero distribution along the jω axis is one of the following 4 possibilities (same numbering as above):
[Figure: the four possible jω-axis pole/zero distributions, cases 1–4.]
Exercise
Use the Cauer method to derive the following circuit (element values 2/3 and 1/3) from its immittance function.
Important Note: In the synthesis of these cases it is often necessary to rearrange the terms of the polynomials during the process, in order to keep the quotients positive (i.e. realizable).
In the case of circuits containing resistances, another elementary synthesis procedure (apart from the removal of poles at the origin and at infinity) is the removal of min Re{Z(jω)}.
Suppose Z(s) is a function whose real part on the jω axis has a minimum value K1 at ω = ω1:
[Figure: Re{Z(jω)} vs ω, with minimum K1 at ω = ω1; Z is split into a removable part Z1 and a remainder Z2.]
We have already seen the case where Z1 corresponds to a pole at the origin (a/s) or at infinity (as). We can also extract a constant of value K, provided that K ≤ K1, since in that case the remainder will still satisfy the condition Re{Z2(jω)} ≥ 0.
Example
Y(s) = (7s + 2)/(2s + 4) is a p.r. function, with a zero at s = −2/7 and a pole at s = −2:
Re{Y(jω)} = (4 + 7ω²)/(8 + 2ω²)
and the minimum, of value 1/2, occurs at ω = 0.
We can then remove Y1 = 1/2, and the remainder is still p.r. (i.e. realizable):
Y2(s) = Y(s) − 1/2 = 3s/(s + 2),  so  Z2(s) = (s + 2)/(3s) = 1/3 + 2/(3s)
and the complete network is a 2 Ω resistor in parallel with a branch formed by a 1/3 Ω resistor in series with a 3/2 F capacitor.
Problem 1
Determine which of the following functions are realizable as driving point
impedances. Find a ladder circuit realization whenever possible.
a) (s² + s + 4)/(s + 5)
b) (s⁵ + 20s³ + 64s)/(s⁴ + 10s² + 9)
c) (s² + 1)/(s + 1)²
d) (s⁴ + 2s³ + 3s² + 2s + 1)/(s⁴ + 2s³ + 2s²)
e) (s² + 7s + 12)/(s² + 3s + 2)
[Figure: ideal transmission characteristic – 100% transmission up to the cut-off frequency fc, none beyond.]
Impedance Scaling
If we start with a ‘normalized’ filter designed with easy numbers: to change the “impedance level” from 1 Ω to, say, R0 Ω, just multiply all individual impedances by R0. The impedance scaling factor is then kL = R0.
Frequency Scaling:
If we now want the cutoff at a frequency ω0 instead of 1 rad/sec, we want each component to have the same behaviour at ω = ω0 as it had at 1 rad/sec; that is, we need the impedance values at the new frequency to be the same as they were at 1 rad/sec. So the frequency scaling factor is kf = ω0, and each inductance becomes L/kf and each capacitance C/kf, while resistances are unchanged.
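Combining the two scalings, each element value follows a simple rule: L' = (kL/kf)·L, C' = C/(kL·kf), R' = kL·R. A sketch (the element list and target values are only an example):

```python
def scale_elements(elements, kL=1.0, kf=1.0):
    """Impedance-scale by kL and frequency-scale by kf.
    elements: list of (kind, value) with kind in {'R', 'L', 'C'}."""
    rules = {
        'R': lambda v: kL * v,         # impedances all multiply by kL
        'L': lambda v: kL * v / kf,    # keeps w0*L' equal to kL*(1*L)
        'C': lambda v: v / (kL * kf),  # keeps 1/(w0*C') equal to kL/(1*C)
    }
    return [(kind, rules[kind](v)) for kind, v in elements]

# A 1-ohm, 1-rad/s prototype moved to 50 ohm with cutoff 10^6 rad/s:
print(scale_elements([('L', 1.0), ('C', 0.5)], kL=50.0, kf=1e6))
# -> [('L', 5e-05), ('C', 1e-08)]
```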
Another form of specifying a filter is using the so called “Insertion Loss” function:
[Figure: network with load; insertion loss (dB) vs ω, with cut-off at ωc.]
The cut-off frequency is the frequency at which some specified insertion loss is
obtained e.g. 3 dB. The 3-dB point is the most common choice but it is not universal.
Again in this case the practical specifications are given in terms of restrictions to
the behaviour of this function.
The Chebyshev approximation gives the maximum rate of fall-off in the stop-band for a maximum allowed ripple in the pass-band.
3) Elliptic filter: for a given cutoff frequency fc and given ripple levels in the pass- and stop-bands, maximize the slope ∂V/∂ω at the specified fc.
Another target is the ideal delay response H(s) = K e^(−sT), which is approximated by linear-phase (Bessel) filters.
[Figure: amplitude responses of third order filters (n = 3) – Chebyshev with 1 dB ripple and Butterworth, magnitude in dB vs frequency, 0.1 to 3.]
[Figure: amplitude responses of a Butterworth filter of order 3 (solid line) and an Optimum filter of order 3 (dashed line), with an inset detailing the pass-band.]
1. Rise time The rise time tR of the step response is defined here as the time required
for the step response to rise from 10% to 90% of the final value as shown
in the figure.
2. Ringing Ringing is an oscillatory transient occurring in the response of the filter
to a sudden change in input (such as a step). A measure of the ringing in a
step response is given by its settling time.
3. Settling time This is the time ts beyond which the step response does not differ from the final value by more than, say, ±2%, as shown in the figure.
4. Delay time, tD Delay time is the time which the step response requires to reach
50% of its final value.
5. Overshoot. The overshoot in the step response is the difference between the peak
value and the final value, expressed as a percentage of the final value.
[Figure: normalized step response showing rise time tR (10% to 90%), delay time tD (50%), overshoot and settling time ts. Figures of merit for step response.]
Most of the filter characteristics mentioned earlier are related to the frequency
response, in particular, to the bandwidth and to the phase linearity. The rise time and
the delay time are closely related to each other but have little connection with
overshoot.
We will now examine qualitatively these relationships.
Rise time and bandwidth have an inverse relationship in a filter: the wider the bandwidth, the smaller the rise time, and vice versa. This can be understood by noting that the limited performance of a filter at high frequencies slows down the increase of the output in response to an abrupt step at the input, causing a long rise time. It has been found experimentally that the relationship is very nearly inversely proportional, so that we can write:
TR × (Bandwidth) = constant
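For a single-pole RC low-pass the constant can be found exactly: the step response is 1 − e^(−t/RC), so tR = RC·ln 9 ≈ 2.2·RC, while the 3-dB bandwidth is 1/(2πRC) Hz, giving tR × BW = ln 9/(2π) ≈ 0.35. A quick check (the time constant is arbitrary):

```python
import math

def rc_rise_time(tau):
    """10%-90% rise time of the RC step response 1 - exp(-t/tau)."""
    t10 = -tau * math.log(1 - 0.1)   # time to reach 10% of final value
    t90 = -tau * math.log(1 - 0.9)   # time to reach 90% of final value
    return t90 - t10

tau = 1e-3                            # arbitrary time constant, seconds
bw_hz = 1 / (2 * math.pi * tau)       # 3-dB bandwidth in Hz
print(round(rc_rise_time(tau) * bw_hz, 3))   # -> 0.35, independent of tau
```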
[Figure: comparison of a peaked response and a simple RC response, magnitude vs frequency (log scale).]
The step responses of Butterworth filters of orders n = 3, 7 and 10 are shown in the next figure. Note that as n increases, the overshoot increases. This is because the higher-order Butterworth filters have flatter magnitude characteristics (i.e. there is more gain at higher frequencies than in the lower-order filters), although the response is never of the “peaked” type. Ringing is due to a sharp cutoff in the filter magnitude response, and we can see that it also increases when the order increases.
[Figure: step responses of Butterworth filters of orders n = 3, 7 and 10, normalized amplitude vs time.]
[Figure: step responses of Butterworth, Chebyshev and Bessel filters, all of order n = 3, normalized amplitude vs time.]
Comparison of filter transient response
The last figure compares the step response of a Bessel (linear phase) filter of order
n = 3 to the response of an n = 3 Chebyshev filter with 1-dB ripple. Rise times cannot be
compared because the bandwidths have not been adjusted to be equal. However, we
can compare their ringing and settling times. The Chebyshev filter has a sharper cutoff,
and therefore, more ringing and longer settling time than the Bessel filter. Note also
that the overshoot of the Bessel filter is negligible. This is characteristic of this class of
filters.
The decision of which filter is best depends strongly upon the particular situation. In certain applications, such as the transmission of music, phase is not important. In these cases the sharpness of the cutoff may be the dominant factor, so that the Chebyshev or the Optimum filter is preferred to the others.
If we are dealing with pulse transmission instead, the requirement of the system is
usually that the output sequence has approximately the same shape as the input
sequence, except for a time delay of T = T2 – T1, as shown in the next figure. It is clear
that a filter with a long rise time is not suitable, because the pulses would “smear” over
each other as shown in the figure. The same can be said for long settling times. Since a
pulse transmission system must have linear phase to ensure undistorted harmonic
reconstruction at the receiver, the best filter for these systems is one with linear phase
and small rise and settling times.
[Figure: (a) a system with short rise and settling times reproduces the input pulse train; (b) a system with long rise and settling times smears the pulses into each other.]
The standard frequency responses, with frequency increasing to the right:
Low-pass:  pass | stop
High-pass: stop | pass
Band-pass: stop | pass | stop
Band-stop: pass | stop | pass
There are (at least) two ways of starting with some desired response and then trying to realize it systematically.
1) The “Darlington procedure” starts with a desired insertion-loss function of frequency and then finds a reactive two-port giving this function.
2) The procedure we will follow is similar, but instead of the insertion-loss function it takes a transfer function – effectively, the ratio of output to input voltage or current.
For a two-port described by U = Z I:
Z11 = U1/I1 (I2 = 0),  Z12 = U1/I2 (I1 = 0)
Z21 = U2/I1 (I2 = 0),  Z22 = U2/I2 (I1 = 0)
Z11 and Z22 are driving-point immittances, as they are concerned with a single port (with the other pair of terminals open-circuited). Z12 and Z21 are transfer impedances, being concerned with voltages and currents at different ports.
If one was concerned with filter design where the load was effectively an infinite
impedance (open circuit), the above should be the starting equations; but two-ports are
usually designed to give a specified response with a real, resistive load.
U2 = Z21 I1 + Z22 I2 = –R I2
so that: CT(s) = I2/I1 = −Z21/(R + Z22)
Starting now with the admittance matrix I = Y U would similarly give:
UT(s) = U2/U1 = −Y21/(G + Y22),  where G = 1/R
Normalizing the termination to R = G = 1:
UT(s) = −Y21/(1 + Y22)  or  CT(s) = −Z21/(1 + Z22)
Suppose the desired transfer function is given as a ratio of polynomials:
CT(s) = P(s)/(E(s) + O(s))
where the polynomial in the denominator has been split up into its even part E(s) and odd part O(s), and P(s) is either even or odd. The problem now is how to rearrange this equation into the form of the previous equation.
The answer is to divide both numerator and denominator by either E(s) or O(s), the choice being such that the resultant numerator is an odd function of s – which is necessary. In other words, we have to choose the part with the opposite parity to P(s).
The denominator now becomes either 1 + O(s)/E(s) or 1 + E(s)/O(s), which corresponds to the form of the denominator of the equation at the beginning. From this last expression we can easily separate the form of Z22, which is the d.p.i. of a reactive circuit (the open-circuit output impedance seen earlier) and so can be routinely realized.
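The even/odd bookkeeping is easy to mechanize. A sketch (the coefficient-list convention and helper name are mine), using the denominator of the example that follows:

```python
def split_even_odd(coeffs):
    """Split a polynomial (coefficients, highest power first) into its
    even part E(s) and odd part O(s), returned in the same convention."""
    n = len(coeffs)
    even, odd = [0] * n, [0] * n
    for i, c in enumerate(coeffs):
        power = n - 1 - i
        if power % 2 == 0:
            even[i] = c
        else:
            odd[i] = c
    return even, odd

# Denominator s^3 + 3s^2 + 4s + 2:
E, O = split_even_odd([1, 3, 4, 2])
print(E)   # [0, 3, 0, 2]  i.e. 3s^2 + 2
print(O)   # [1, 0, 4, 0]  i.e. s^3 + 4s
```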
Example
Suppose we want to realize the current-transfer function:
CT(s) = 2/(s³ + 3s² + 4s + 2) = −Z21/(1 + Z22)
Comparing this with the previous equations, we see that P(s) = 2 is even in s, and so numerator and denominator must be divided by the odd part of the denominator, (s³ + 4s). We then have:
−Z21 = 2/(s³ + 4s)  and  Z22 = (3s² + 2)/(s³ + 4s)
There is still one last problem to solve in order to realize this Z22. Using the Euclid algorithm there are various possibilities, and so we may find more than one solution at this stage. (Note that since we are synthesizing Z22, the first elements to emerge are those closest to the output terminals.)
Expanding 1/Z22 in decreasing powers of s gives:
Z22 = 1/(s/3 + 1/(9s/10 + 1/(5s/3)))
i.e., starting from the output port, a shunt capacitor of 1/3 F, a series inductor of 9/10 H and a shunt capacitor of 5/3 F.
Expanding instead in increasing powers of s gives:
Z22 = 1/(2s) + 1/(8/(5s) + 1/(5/(2s)))
i.e. a series capacitor of 2 F, a shunt inductor of 5/8 H and a series capacitor of 2/5 F.
Two possibilities seem to emerge, but these are the open-circuit impedances looking into the output terminals of the two-port, which should therefore look like:
[Circuits: left – from port 1 (A–B): shunt C = 5/3 F, series L = 9/10 H, shunt C = 1/3 F, with a 1 Ω load across the output terminals C–D; right – series C = 2/5 F, shunt L = 5/8 H, series C = 2 F, again loaded with 1 Ω across C–D.]
Here we have attached the 1 Ω resistive load to the C and D terminals of the circuits, so that we have indeed realized Z22. Points A and B are the only terminals left to become port 1 of the two-port.
The right-hand circuit above blocks d.c. (all its transmission zeros are at the origin), while the left-hand circuit is low-pass. The desired current-transfer function satisfies CT(0) = 2/2 = 1, which is consistent only with the left-hand circuit and not with the other.
So, in the end, only one circuit emerges successfully from the two possibilities.
We now need to consider how to find sensible transfer functions from the filter
specifications.
Suppose the required amplitude response is specified as:
|I2/I1|² = |CT(jω)|² = K²/(1 + Pn(ω²)) = h(−ω²)   (1)
Pn(ω²) is a polynomial of the form a1ω² + a2ω⁴ + … + anω^(2n), and K is the d.c. “gain”. This clearly corresponds to a low-pass, since all “zeros of transmission” are at s = ∞.
Now we want to find CT(s). We first note that:
CT(s)·CT(−s) = h(s²), where h(s²) is obtained from h(−ω²) by the substitution −ω² = s².
We observe that the roots of CT(−s) are the negatives of those of CT(s). Then, from h(s²) we can separate CT(s) from CT(−s) simply by selecting the roots that lie in the left-hand half-plane. This can be seen clearly with an example.
Butterworth Filters:
If we consider a Butterworth, or maximally flat, response, as many derivatives as possible of h(−ω²) must vanish at ω = 0. This is equivalent to asking for as many derivatives as possible of Pn(ω²) with respect to ω² to vanish at ω = 0; consequently, the first (n−1) coefficients of Pn must be zero and the polynomial reduces to anω^(2n), giving:
|CT(jω)|² = 1/(1 + anω^(2n))
With an = 1 the roots all lie on the unit circle and are symmetrically distributed with respect to the jω axis.
We can then separate the factors, keeping the left-half-plane pair s2, s3 of the roots s1…s4 on the unit circle:
CT(s) = ±1/((s − s2)(s − s3))
For n = 2 we obtain:
CT(s) = ±1/(s² + √2 s + 1)
and the two signs give the two possible realizations of:
Z22 = (s² + 1)/(√2 s) = s/√2 + 1/(√2 s)
i.e. an inductor of 1/√2 H in series with a capacitor of √2 F.
For n = 3:
CT(jω)·CT(−jω) = 1/(1 + ω⁶) = 1/(1 − (−ω²)³)
CT(s)·CT(−s) = 1/(1 − s⁶) = 1/[(1 + 2s + 2s² + s³)(1 − 2s + 2s² − s³)]
In general:
CT(s)·CT(−s) = 1/(Bn(s)·Bn(−s))  and  CT(s) = 1/Bn(s)
The synthesis approach used above can be applied to Butterworth filters of any order; the appropriate factorisation (the corresponding Butterworth polynomial Bn(s) = Σ ai s^i, i = 0…n, with a0 = 1) is given in the following table:
n   a1      a2      a3      a4      a5      a6      a7      a8
1   1
2   1.414   1
3   2       2       1
4   2.613   3.414   2.613   1
5   3.236   5.236   5.236   3.236   1
6   3.864   7.464   9.141   7.464   3.864   1
7   4.494   10.103  14.606  14.606  10.103  4.494   1
8   5.126   13.138  21.848  25.691  21.848  13.138  5.126   1
[Figure: Butterworth amplitude responses for n = 3, 5 and 7, magnitude in dB vs frequency.] The curves show how, with the 3-dB point at unit frequency, the response approaches the rectangular “ideal” as n increases, falling off like 1/ωⁿ (20n dB per decade, or 6n dB per octave).
But the step responses shown earlier reveal the price of this increasing “sharpness” in the frequency response: we see rise time, delay time, settling time, overshoot and ringing all getting worse.
Example:
Find an expression for the minimum order n of a normalized Butterworth low-pass filter having an attenuation of at least αs dB at a frequency ωs outside the passband.
We can define the attenuation α as α = −20 log|T| dB. Then, with the 3-dB point at ω = 1:
10^(α/10) = 1 + ω^(2n)  ⇒  ω^(2n) = 10^(α/10) − 1  ⇒  2n log ω = log(10^(α/10) − 1)
and finally,
n = log(10^(αs/10) − 1) / (2 log ωs)
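As a numerical sketch of this formula (the specification values are just an example):

```python
import math

def butterworth_min_order(alpha_s_db, omega_s):
    """Minimum order of a normalized (3 dB at omega = 1) Butterworth
    low-pass filter giving at least alpha_s dB of attenuation at omega_s."""
    n_exact = (math.log10(10 ** (alpha_s_db / 10) - 1)
               / (2 * math.log10(omega_s)))
    return math.ceil(n_exact)        # round up to a realizable integer order

# At least 20 dB of attenuation one octave above cutoff:
print(butterworth_min_order(20, 2))   # -> 4
```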
Problem 2
a) Consider the general form of a Butterworth frequency response:
|T(ω)|² = 1/(1 + (ω/ωc)^(2n))
Find an expression for the minimum value of n (the order of the filter) necessary to satisfy specifications of the following form, where α = −20 log10|T| dB:
αmax: maximum attenuation allowed in the passband (up to ωp);
αmin: minimum attenuation required in the stopband (beyond ωs).
b) Given the values αmax = 0.5 dB, ωp = 10⁵ rad/sec, αmin = 20 dB, ωs = 2×10⁵ rad/sec, find the minimum value of n (rounded up to an integer) to satisfy this specification. Then use the given values of ωs and αmin, with the calculated order of the filter, to determine the 3-dB cutoff frequency ωc.
Problem 3
Similarly to the case of Butterworth filters, the Chebyshev response comes from:
CT(s)·CT(−s) = 1/(1 + ε²Cn²(ω))
where the Cn(ω) are the Chebyshev polynomials and ε is a parameter used to control the amount of ripple in the passband.
• |Cn(ω)| ≤ 1 for |ω| ≤ 1, while |Cn(ω)| > 1 for |ω| > 1.
In general we can write Cn(ω) = cos(n cos⁻¹ω); the first few polynomials are:
C0(ω) = 1
C1(ω) = ω
C2(ω) = 2ω² – 1
C3(ω) = 4ω³ – 3ω
C4(ω) = 8ω⁴ – 8ω² + 1
C5(ω) = 16ω⁵ – 20ω³ + 5ω
C6(ω) = 32ω⁶ – 48ω⁴ + 18ω² – 1
C7(ω) = 64ω⁷ – 112ω⁵ + 56ω³ – 7ω
C8(ω) = 128ω⁸ – 256ω⁶ + 160ω⁴ – 32ω² + 1
C9(ω) = 256ω⁹ – 576ω⁷ + 432ω⁵ – 120ω³ + 9ω
C10(ω) = 512ω¹⁰ – 1280ω⁸ + 1120ω⁶ – 400ω⁴ + 50ω² – 1
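The whole list follows from the recurrence C_{n+1}(ω) = 2ω·Cn(ω) − C_{n−1}(ω), a standard consequence of the cosine form. A sketch:

```python
import math

def chebyshev_C(n, w):
    """Evaluate Cn(w) by the recurrence C_{n+1} = 2*w*C_n - C_{n-1},
    valid for all real w (unlike the cos/acos form)."""
    c_prev, c = 1.0, w               # C0 = 1, C1 = w
    if n == 0:
        return c_prev
    for _ in range(n - 1):
        c_prev, c = c, 2 * w * c - c_prev
    return c

print(chebyshev_C(3, 2.0))           # -> 26.0  (= 4*8 - 3*2)
# Inside |w| <= 1 it agrees with cos(n*acos(w)):
assert abs(chebyshev_C(5, 0.3) - math.cos(5 * math.acos(0.3))) < 1e-12
```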
Consider, for example, the third order polynomial C3, which oscillates between −1 and +1 for |ω| ≤ 1. Squaring and scaling it, ε²C3²(ω) oscillates between 0 and ε² in the passband. Forming 1 + ε²C3²(ω) gives a function oscillating between 1 and 1 + ε², and taking the reciprocal we finally get:
|CT|² = 1/(1 + ε²C3²(ω))
which ripples between 1/(1 + ε²) and 1 in the passband and falls off monotonically for ω > 1.
We saw earlier that the poles in the Butterworth response are all located on the unit circle. In the case of Chebyshev filters the poles lie on an ellipse, as shown in the figure, where we can compare the positions of the roots for Butterworth and Chebyshev responses of order 6.
Problem 4
CT(s) = 1/D(s), with the denominator D(s) given in factored form:
D(s) = (s² + b11 s + b01) ⋯ (s² + b1,n/2 s + b0,n/2)  for n = 2, 4, 6, …
D(s) = (s + b0)(s² + b11 s + b01) ⋯  for n = 3, 5, …
The tabulated factors are:
0.5 dB equal-ripple (Chebyshev):
n = 2: (s² + 1.425624 s + 1.516203)
n = 3: (s + 0.626457)(s² + 0.626456 s + 1.142448)
n = 4: (s² + 0.350706 s + 1.063519)(s² + 0.846680 s + 0.356412)
n = 5: (s + 0.362320)(s² + 0.223926 s + 1.035784)(s² + 0.586245 s + 0.476767)
n = 6: (s² + 0.155300 s + 1.023023)(s² + 0.424288 s + 0.590136)(s² + 0.579588 s + 0.156997)
1 dB equal-ripple (Chebyshev):
n = 2: (s² + 1.097734 s + 1.102510)
n = 3: (s + 0.494171)(s² + 0.494171 s + 0.994205)
n = 4: (s² + 0.279072 s + 0.986505)(s² + 0.673739 s + 0.279398)
n = 5: (s + 0.289493)(s² + 0.178917 s + 0.988315)(s² + 0.468410 s + 0.429298)
n = 6: (s² + 0.124362 s + 0.990733)(s² + 0.339763 s + 0.557720)(s² + 0.464125 s + 0.124707)
Frequency transformations
As mentioned before, a low-pass prototype can be converted to any other type of
frequency response using an appropriate frequency transformation.
We already introduced frequency scaling by which a frequency ω is transformed
to a new frequency ω’ by:
ω’ = kf ω
For example, the impedance of the inductor L’ at the new frequency ω’ is the same as
that of the inductor L at the old frequency ω.
If we write, in general, the old frequency as a transformation of the new one, ω = T(ω’) or s = T(s’), for what functions T(s’) can the corresponding relations still apply? The old immittances become:
sL = T(s’)·L,  sC = T(s’)·C,  R = R’   (2)
We can see that the only condition is that T(s’)L and T(s’)C, or indeed simply T(s’), must be realizable as a d.p.i.
Low-pass to high-pass: with T(s’) = 1/s’,
sL = (1/s’)·L, i.e. a capacitor C1 = 1/L
sC = (1/s’)·C, i.e. an inductor L1 = 1/C
R = R’
[Circuits: the low-pass ladder realized earlier (series L = 9/10 H) and its high-pass transform (series C = 10/9 F).]
The circuit on the left (the low-pass filter obtained earlier) has an insertion loss and I2/I1 at frequency ω identical to those of the circuit on the right at frequency 1/ω.
Note that one can follow the s’ = 1/s transformation by a frequency scaling, or do both in one step with the transformation s = T(s’) = K/s’.
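A sketch of the element bookkeeping for this transformation (the (kind, value) representation is my own convention):

```python
def lowpass_to_highpass(elements, w0=1.0):
    """Apply s = w0/s' to a normalized low-pass prototype:
    each L becomes a capacitor 1/(w0*L), each C an inductor 1/(w0*C);
    resistors are unchanged."""
    out = []
    for kind, v in elements:
        if kind == 'L':
            out.append(('C', 1.0 / (w0 * v)))
        elif kind == 'C':
            out.append(('L', 1.0 / (w0 * v)))
        else:
            out.append((kind, v))
    return out

# The series inductor of 9/10 H becomes a series capacitor of 10/9 F:
print(lowpass_to_highpass([('C', 5/3), ('L', 9/10), ('C', 1/3)]))
```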
For the low-pass to band-pass transformation we take:
s = T(s’) = s’/Δω + ω0²/(Δω·s’) = (s’² + ω0²)/(Δω·s’)
where Δω = ω2 − ω1 is the bandwidth and ω0² = ω1ω2.
[Figure: the low-pass response around ω = 0 maps to a band-pass response between ω1 and ω2, centred on ω0.]
Each inductor transforms as:
sL = T(s’)·L = s’L/Δω + ω0²L/(Δω·s’)
i.e. a series connection of an inductor L/Δω and a capacitor Δω/(ω0²L); and each capacitor as:
sC = T(s’)·C = s’C/Δω + ω0²C/(Δω·s’)
(as an admittance), i.e. a parallel connection of a capacitor C/Δω and an inductor Δω/(ω0²C).
In this way the prototype low-pass filter of Chebyshev response (for 1 dB ripple), with element values 1.509, 1.333 and 1.012 and a 1 Ω load, becomes a band-pass filter with series arms 1.509/Δω and Δω/(1.509ω0²), etc. (Exercise: find the corresponding element values.)
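The band-pass element mappings can be sketched the same way (again the tuple representation is mine); note that every transformed branch resonates at ω0:

```python
import math

def lowpass_to_bandpass(elements, w0, dw):
    """Apply s = (s'^2 + w0^2)/(dw*s') to a low-pass prototype:
    each L becomes a series (L/dw, dw/(w0^2*L)) L-C branch and
    each C a parallel (dw/(w0^2*C), C/dw) L-C pair."""
    out = []
    for kind, v in elements:
        if kind == 'L':
            out.append(('series LC', (v / dw, dw / (w0 ** 2 * v))))
        elif kind == 'C':
            out.append(('parallel LC', (dw / (w0 ** 2 * v), v / dw)))
        else:
            out.append((kind, v))
    return out

bp = lowpass_to_bandpass([('L', 2.0), ('C', 0.5)], w0=1.0, dw=0.1)
for kind, (L, C) in bp:
    # each transformed branch resonates at w0 = 1/sqrt(L*C)
    print(kind, round(1 / math.sqrt(L * C), 6))
```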
Similarly, for the low-pass to band-stop transformation:
s = T(s’) = Δω/(s’ + ω0²/s’) = Δω·s’/(s’² + ω0²), where again Δω = ω2 − ω1 and ω0² = ω1ω2.
[Figure: the low-pass response maps to a band-stop response between ω1 and ω2.]
Each inductor transforms as:
sL = T(s’)·L = Δω·s’L/(s’² + ω0²)
i.e. a parallel connection of an inductor LΔω/ω0² and a capacitor 1/(LΔω); and each capacitor as:
sC = T(s’)·C = Δω·s’C/(s’² + ω0²)
(as an admittance), i.e. a series connection of a capacitor CΔω/ω0² and an inductor 1/(CΔω).
Richards’ transformation
Let’s now try the transformation s = T(s’) = tanh(s’), or, using a normalizing factor ω0, s = tanh(s’/ω0). The question now is: what circuit component has a d.p.i. of this form? On the jω’ axis,
s = tanh(jω’/ω0) = j·tan(ω’/ω0)
A transmission line of characteristic impedance Zc, terminated in an impedance Zl, has the input impedance:
Z = Zc (Zl + jZc·tanθ)/(Zc + jZl·tanθ)
where θ is the electrical length. A short-circuited piece with Zc = 1 and length θ = βl = ω’/ω0 therefore has the impedance:
j·tan(ω’/ω0)
Under this transformation, then, an inductor L becomes a short-circuited stub of characteristic impedance Zc = L, and a capacitor C becomes an open-circuited stub of characteristic admittance Yc = C.
Both pieces of transmission line, or “stubs”, have the same Zc in this case: 0.707 Ω, or 70.7 Ω if we want a load of 100 Ω.
The curves showing the corresponding transformation of the frequency response are:
[Figure: the low-pass response maps periodically onto ω’/ω0, with pass and stop regions repeating with period π.]
The resulting filter looks dual-purpose, containing both band-pass and band-stop regions; but the maximally flat characteristic at d.c. of the low-pass prototype strictly transforms to ω’/ω0 = π, 2π, 3π, etc.
This transformation is widely used at microwave frequencies, say 0.5 to 30 GHz. The circuit above is not very convenient as it stands, and in practice some further transformations are used (Kuroda transformations and “unit elements”) so that only shunt elements are needed (because that is more convenient in planar microwave circuits).
This transformation allows us to add transmission lines to the set of possible
elements for realizing passive filters, –that is, to L, C, and R.
These filters are very easy to make in microstrip, – a sort of printed-circuit board
for use at microwave frequencies.
[Cross-section of microstrip: a conducting strip on a dielectric layer over a ground plane.]
For instance, the mask necessary for a 3-element filter would look like:
[Layout: microstrip pattern with input and output lines and three stub sections.]
The 3 free variables allow us to choose the desired third order response
(Chebyshev, Butterworth, or any other).
Problem 5
Starting from the Butterworth approximation for |CT(jω)|², find the function CT(s) corresponding to a third order low-pass filter and realize it as a ladder circuit.
Apply the low-pass to high-pass transformation to find a normalized third order high-pass filter.
These could be used as part of a 2-loudspeaker design as in the figure.
Verify that the impedance looking into the two circuits in series is 1 Ω at all frequencies.
Finally, scale the two filters to provide for loudspeakers of 8 Ω and a cross-over frequency of 5000/π Hz. The latter, taken as the cutoff frequency of each filter, provides 3 dB of attenuation to each speaker.
low-pass 'woofer'
high-pass 'tweeter'
Problem 6
H(s) = 15/(s³ + 6s² + 15s + 15)