
FEE 422: Telecommunications and Electroacoustics B

(45 Hours)
0. Introduction to Digital Modulation
1. Pulse Modulation:
1.1. Pulse Code Modulation (PCM)
1.2. Differential Pulse Code Modulation (DPCM)
1.3. Delta Modulation (DM)
1.4. Pulse Width Modulation (PWM)
1.5. Pulse Position Modulation (PPM)
1.6. Pulse Amplitude Modulation (PAM)
1.7. Quantization Noise in PCM
1.8. SNR in PCM
1.9. Baseband Systems.

2. Fundamentals of Vibration:
2.1. Vibration and the Acoustic Wave equation.
2.2. Transverse and Longitudinal Vibrations.
2.3. Vibrations of Plates and Membranes.
2.4. Propagation of Acoustic Waves:
2.5. Transmission of Acoustic Waves.
2.6. Dissipation of Acoustic Energy in Fluids
2.7. Radiation and Reception of Acoustic Waves
2.8. Noise and Speech.

3. Transduction:
3.1. Electromechanical Analogues
3.2. Canonical Equations.
3.3. Transmitters
3.4. Loudspeakers.
3.5. Loudspeaker Cabinets
3.6. Receivers
3.7. Microphones.
Books
 Taub and Schilling, Principles of Communication Systems, McGraw-Hill, New York, 1971
 Simon Haykin, Communication Systems, 4th Edition, John Wiley & Sons, 2000
 Bernard Sklar, Digital Communications: Fundamentals and Applications, Prentice Hall, 2000
 David R. Smith, Digital Transmission Systems, 2nd Edition, Chapman and Hall, 1993
 Lawrence E. Kinsler, Fundamentals of Acoustics, 4th Edition, John Wiley & Sons, 2000
FEE 422: Telecommunications and Electro-Acoustics B V. K. Oduol Page - 1
Modulation: Amplitude Shift Keying (ASK)



Spectrum (frequency domain)

Baseband Signal Vd(f)

Carrier Signal: v_c(t) = cos(2π f_c t)

Digital Signal (a square wave of amplitude A and fundamental frequency f_0):

    v_d(t) = A/2 + (2A/π)[cos(2π f_0 t) − (1/3)cos(2π(3f_0)t) + (1/5)cos(2π(5f_0)t) − …]

ASK Modulated Signal: v_ASK(t) = v_d(t) v_c(t)

    v_ASK(t) = (A/2)cos(2π f_c t)
             + (2A/π)[cos(2π f_0 t)cos(2π f_c t) − (1/3)cos(2π(3f_0)t)cos(2π f_c t)
             + (1/5)cos(2π(5f_0)t)cos(2π f_c t) − …]

Useful Trigonometric Identity: cos α cos β = (1/2)[cos(α − β) + cos(α + β)]

    v_ASK(t) = (A/2)cos(2π f_c t)
             + (A/π)[cos(2π(f_c − f_0)t) + cos(2π(f_c + f_0)t)]
             − (A/3π)[cos(2π(f_c − 3f_0)t) + cos(2π(f_c + 3f_0)t)]
             + (A/5π)[cos(2π(f_c − 5f_0)t) + cos(2π(f_c + 5f_0)t)] − …
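The product-to-sum expansion used in the ASK derivation can be checked numerically. This short sketch (the tone and carrier frequencies are arbitrary illustrative choices) verifies one product term:

```python
import numpy as np

# Numerically verify the product-to-sum step:
# cos(2*pi*f0*t) * cos(2*pi*fc*t)
#   = 0.5*[cos(2*pi*(fc - f0)*t) + cos(2*pi*(fc + f0)*t)]
f0, fc = 1.0e3, 10.0e3            # illustrative tone and carrier frequencies
t = np.linspace(0.0, 2e-3, 4001)  # 2 ms of time samples

lhs = np.cos(2 * np.pi * f0 * t) * np.cos(2 * np.pi * fc * t)
rhs = 0.5 * (np.cos(2 * np.pi * (fc - f0) * t)
             + np.cos(2 * np.pi * (fc + f0) * t))

max_error = np.max(np.abs(lhs - rhs))
print(max_error)  # ~1e-16: the two sides agree to machine precision
```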

+++++++++++++++++++++++++++++++++++++++++++++++
Frequency Shift Keying - FSK:
In a frequency shift keyed transmitter the frequency is shifted by the message.
Although there could be more than two frequencies involved in an FSK signal, this
presentation uses a binary bit stream, and so only two frequencies are involved. The
word 'keyed' suggests that the message is of the 'on-off' (mark-space) variety, such as
one (historically) generated by a Morse key or, more likely in the present context, a
binary bit stream.

f2 Corresponds to MARK
f1 Corresponds to SPACE

Generation of FSK
Conceptually, the transmitter could consist of two oscillators (at frequencies f1 and f2),
with only one being connected to the output at any one time. This is shown in block
diagram form below.



Unless there are special relationships between the two oscillator frequencies and the bit
clock frequency, there will be abrupt phase discontinuities in the output waveform
during transitions of the message.

The practice is for the tones f1 and f2 to bear special inter-relationships, and to be
integer multiples of the bit rate fS. This leads to the possibility of continuous phase,
which offers advantages, especially with respect to bandwidth control.
Alternatively the frequency of a single oscillator (VCO) can be switched between two
values, thus guaranteeing continuous phase - CPFSK.

Bandpass FSK Demodulator


The Mark and Space filters in the figure are bandpass filters that are centered on the
Mark and Space frequencies respectively.
In the simplest form, the information bits from an FSK signal are demodulated by
subtracting the amplitude of the detected Space component from the amplitude of the
detected Mark component.
 When the difference is greater than zero (the threshold level), a Mark is
assumed to be sent
 And when the difference is less than zero, a Space is assumed to be sent.
This process of transforming an FSK waveform into a binary level is often called slicing.
For an unbiased slicer, both filters are designed to have identical noise bandwidth.
Alternatively, the detectors of an FSK demodulator can also operate on baseband
signals, as illustrated below:



Baseband FSK Demodulator
The input signal is mixed (multiplied) with Mark and Space local oscillators to form two
baseband signals, which are then filtered by identical lowpass filters.

The baseband approach is easily adaptable to moving Mark and Space frequencies,
without needing to change data filters.
Different data filters are still needed when the data rate changes.
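A minimal numerical sketch of this baseband demodulator follows. All frequencies, the bit rate, and the use of a simple integrate-and-dump average as the "lowpass filter" are illustrative assumptions, not values from the text:

```python
import numpy as np

# Baseband FSK demodulation sketch: mix with Mark/Space oscillators,
# lowpass (here: average over the bit), then slice Mark minus Space.
fs_rate = 100_000               # simulation sample rate, samples/s
Tb = 1e-3                       # bit duration
f_space, f_mark = 2_000, 4_000  # Space = f1, Mark = f2 (integer cycles per bit)
bits = [1, 0, 1, 1, 0]

n = int(Tb * fs_rate)
t = np.arange(n) / fs_rate
# Transmitter: pick the Mark or Space tone for each bit.
tx = np.concatenate([np.cos(2 * np.pi * (f_mark if b else f_space) * t)
                     for b in bits])

rx_bits = []
for k in range(len(bits)):
    seg = tx[k * n:(k + 1) * n]
    # Mix with each local oscillator, then "lowpass filter" by
    # averaging over the bit interval (a crude integrate-and-dump).
    mark_level = np.mean(seg * np.cos(2 * np.pi * f_mark * t))
    space_level = np.mean(seg * np.cos(2 * np.pi * f_space * t))
    # Slicer: difference against a zero threshold.
    rx_bits.append(1 if mark_level - space_level > 0 else 0)

print(rx_bits)  # recovers [1, 0, 1, 1, 0]
```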

In the interval [0, Tb], let

    s1(t) = A cos(2π f1 t)
    s2(t) = A cos(2π f2 t)

be the two FSK signals, with f1 Tb = m and f2 Tb = n (m and n are integers).

Define the energies E1 and E2 according to

    E1 = ∫_0^Tb s1²(t) dt = (A²/2) ∫_0^Tb [1 + cos(4π f1 t)] dt = A² Tb / 2

    E2 = ∫_0^Tb s2²(t) dt = (A²/2) ∫_0^Tb [1 + cos(4π f2 t)] dt = A² Tb / 2

Define the correlation coefficient ρ as

    ρ = (1/√(E1 E2)) ∫_0^Tb s1(t) s2(t) dt = (2/Tb) ∫_0^Tb cos(2π f1 t) cos(2π f2 t) dt
Useful Trigonometric Identity: cos α cos β = (1/2)[cos(α − β) + cos(α + β)]

    ρ = (1/Tb) ∫_0^Tb cos(2π(f2 − f1)t) dt + (1/Tb) ∫_0^Tb cos(2π(f2 + f1)t) dt

      = sin(2π(f2 − f1)Tb) / (2π(f2 − f1)Tb) + sin(2π(f2 + f1)Tb) / (2π(f2 + f1)Tb)
Usually either (f2 + f1)Tb = k, an integer (chosen by design), or the sum
(f2 + f1) is much larger than the difference (f2 − f1). The result of either choice
is that the sum term is negligible, leaving the correlation coefficient to be given
as

    ρ = sin(2π(f2 − f1)Tb) / (2π(f2 − f1)Tb) = sin(2π Δf Tb) / (2π Δf Tb)

where Δf = f2 − f1 is the frequency separation between the two signals.

The modulation index h of the FSK system is then defined as h = Δf · Tb = (f2 − f1)Tb.

That is,

    ρ = sin(2π(f2 − f1)Tb) / (2π(f2 − f1)Tb) = sin(2πh) / (2πh)

Minimum Shift Keying


Notice that the correlation coefficient is zero for several values of the modulation
index h. When the correlation coefficient is zero, the two signals are said to be
orthogonal. The smallest value of the modulation index h that gives orthogonal signaling
gives what is referred to as Minimum Shift Keying (MSK), which is also called Fast
FSK. This value of the modulation index is h = 0.5.
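The sinc-shaped correlation coefficient can be verified numerically. This sketch (f1 and Tb are illustrative, with f1·Tb an integer as assumed in the derivation) compares direct integration with the closed form for a few values of h:

```python
import numpy as np

# Check rho = sin(2*pi*h)/(2*pi*h) against direct numerical integration
# of the normalized correlation of the two FSK tones.
def rho_numeric(f1, f2, Tb, n=200_000):
    # Midpoint-rule approximation of (2/Tb) * integral_0^Tb s1*s2 dt
    t = (np.arange(n) + 0.5) * (Tb / n)
    return 2.0 * np.mean(np.cos(2 * np.pi * f1 * t) * np.cos(2 * np.pi * f2 * t))

Tb = 1e-3
f1 = 10_000.0                    # f1*Tb = 10 cycles per bit (integer)
rhos = {}
for h in (0.25, 0.5, 0.75):      # modulation index h = (f2 - f1)*Tb
    f2 = f1 + h / Tb
    rhos[h] = (rho_numeric(f1, f2, Tb), np.sin(2 * np.pi * h) / (2 * np.pi * h))

print(rhos)
# At h = 0.5 both values are ~0: the smallest index giving orthogonality (MSK).
```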
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Phase Shift Keying – PSK
(Binary PSK is considered here – BPSK)

Implementation of PSK

Quadrature PSK (QPSK)


 To increase the bit rate, 2 or more bits can be coded onto one signal element.
 In QPSK, the bit stream is split so that every two incoming bits are used to select
the phase of the PSK carrier. One carrier frequency is phase shifted 90° from
the other - in quadrature.

 The two PSKed signals are then added to produce one of 4 signal elements. L = 4
here.



Example
Find the bandwidth for a signal transmitting at 12 Mbps using QPSK. The value of
d = 0.
Solution
For QPSK, 2 bits are carried by one signal element (one symbol). This means that
r = 2. So the signal rate (baud rate) is S = N × (1/r) = 6 Mbaud. With a value of
d = 0, we have B = S = 6 MHz.
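The arithmetic of this example can be expressed as a tiny script (the symbols N, r, S, d and B follow the text above):

```python
import math

# Bandwidth of the QPSK example: N = 12 Mbps, r bits per symbol, d = 0.
N = 12e6                 # bit rate (bits per second)
L = 4                    # QPSK uses 4 signal elements
r = math.log2(L)         # bits per symbol -> 2
S = N / r                # signal (baud) rate
d = 0
B = (1 + d) * S          # bandwidth for d = 0
print(S, B)  # 6000000.0 6000000.0  (6 Mbaud, 6 MHz)
```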



Constellation Diagrams
 A constellation diagram helps us to define the amplitude and phase of a signal
when we are using two carriers, one in quadrature with the other.
 The X-axis represents the in-phase carrier and the Y-axis represents the
quadrature carrier.

Example Constellations

+++++++++++++++++++++++++++++++++++++++++++++++++++++++
In binary signaling, the modulator produces one of two distinct signals in response to one bit of
source data at a time.



where

    Q(x) = (1/√(2π)) ∫_x^∞ e^(−y²/2) dy



+++++++++++++++++++++++++++++++++++++++++++++++++++++++

(example with Orthogonal signaling)



(OOK)



Q function and the Complementary Error Function

Instead of the Q function, sometimes the complementary error function erfc is used:

    erfc(x) = (2/√π) ∫_x^∞ e^(−z²) dz

Recalling the definition of the Q function,

    Q(x) = (1/√(2π)) ∫_x^∞ e^(−y²/2) dy,

it can be shown by the change of variable z = y/√2 that the two functions are related by

    Q(x) = (1/2) erfc(x/√2)    or    erfc(w) = 2 Q(w√2)
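This relation can be confirmed with the standard library's math.erfc and a direct numerical evaluation of the Q-function integral (the midpoint-rule evaluator and its truncation point are implementation choices, not from the text):

```python
import math

# Check Q(x) = 0.5*erfc(x/sqrt(2)) against numerical integration.
def Q_from_erfc(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def Q_numeric(x, upper=10.0, n=200_000):
    # Midpoint rule for (1/sqrt(2*pi)) * integral from x to "infinity"
    # (truncated at `upper`, where the tail is negligible).
    h = (upper - x) / n
    total = sum(math.exp(-0.5 * (x + (i + 0.5) * h) ** 2) for i in range(n))
    return total * h / math.sqrt(2.0 * math.pi)

vals = {x: (Q_from_erfc(x), Q_numeric(x)) for x in (0.0, 1.0, 2.0)}
print(vals)
# Q(0) = 0.5, Q(1) ~ 0.1587, Q(2) ~ 0.0228; both evaluations agree.
```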
Formatting and Transmission of Baseband Signals
Baseband Signal – a signal whose spectrum extends from (or near) DC up to some finite
value, usually less than a few MHz, is called a baseband or low-pass signal. Such a signal is
implied whenever we use any of the terms:
 Information
 Message, or
 Data.
For the transmission of baseband signals by a digital communication system:
 The information is formatted so that it is represented by digital symbols.
 Then, pulse waveforms are assigned that represent these symbols; this step is
referred to as pulse modulation or baseband modulation.
Baseband signals are not appropriate for propagation through many transmission media.
Bandpass Signals
 Bandpass signals have their spectral content clustered in a band of frequencies away
from dc, near a value called the carrier frequency.
 Bandpass signals may be obtained from baseband signals by shifting the baseband
spectrum
 Bandpass signals may be more appropriate for propagation through transmission media.
Formatting
The first essential signal processing step, formatting, makes the source signal compatible
with digital processing.
Transmit formatting is a transformation from source information to digital symbols (in the
receive chain, formatting is the reverse transformation).
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Though many message sources are inherently digital in nature, two of the most common
message sources, audio and video, are analog, i.e., they produce continuous time signals.
To make analog messages amenable to digital transmission, sampling, quantization and
encoding are required.
Sampling: How many samples per second are needed to exactly represent the signal and how
to reconstruct the analog message from the samples?
Quantization: To represent the sample value by a digital symbol chosen from a finite set.
What is the choice of a discrete set of amplitudes to represent the continuous range of
possible amplitudes and how to measure the distortion due to quantization?
Encoding: Map the quantized signal sample into a string of digital, typically binary, symbols.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The main formatting topics are:
 Character coding,
 Sampling,
 Quantization, and
 Pulse Code Modulation (PCM).



Data already in a digital format would bypass the formatting function.
Textual information is transformed into binary digits by use of a coder.
Analogue information is formatted using three separate processes:
sampling, quantization, and coding.
In all cases, the formatting step results in a sequence of binary digits.
These digits are to be transmitted through a baseband channel, such as a:
 pair of wires or
 coaxial cable.

However, no channel can be used for the transmission of binary digits without first
transforming the digits to waveforms that are compatible with the channel. For baseband
channels, compatible waveforms are pulses.
A baseband signal may come from one of several sources. Prior to transmission, these signals
need to be converted to a form suitable for the transmission medium.

Figure 1.1 Transmission of Baseband Signals.

In Figure 1.1,
 The conversion from binary digits to pulse waveforms takes place in the block labelled
waveform encoder, also called a baseband modulator.
 The output of the waveform encoder is typically a sequence of pulses with
characteristics that correspond to the binary digits being sent.
 After transmission through the channel,
o The received waveforms are detected to produce an estimate of the
transmitted digits, and then
o The final step, (reverse) formatting recovers an estimate of the source
information.



Ideal (or Impulse) Sampling
Ts is the period of the impulse train, also
referred to as the sampling period.
The inverse of the sampling period, fs = 1/Ts, is
the sampling frequency or sampling rate.
It is intuitive that the higher the sampling rate,
the more accurate the representation of m(t) by ms(t).
What is the minimum sampling rate for the sampled version ms(t) to exactly represent the
original analog signal m(t)?

Spectrum of the Sampled Waveform

If the bandwidth of m(t) is limited to W Hertz, m(t) can be completely recovered from ms(t)
by an ideal lowpass filter of bandwidth W, provided fs ≥ 2W.
When fs < 2W (under-sampling), the copies of M(f) overlap and it is not possible to recover
m(t) by filtering ⇒ aliasing.
Bandlimited Interpolation
Example of Band−limited Signal Reconstruction (Interpolation)

Reconstruction of m(t)



Sampling Theorem
A signal having no frequency components above W Hertz is completely described by specifying
the values of the signal at periodic time instants that are separated by at most 1/(2W) seconds.

The condition fs ≥ 2W is known as the Nyquist criterion, the sampling rate fs = 2W is called the
Nyquist rate, and its reciprocal is called the Nyquist interval.
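The sampling theorem can be illustrated with sinc interpolation. This sketch (the message, its bandwidth and the finite sample window are all assumed values) reconstructs a bandlimited signal between its samples:

```python
import numpy as np

# Bandlimited (sinc) interpolation: reconstruct a W-bandlimited signal
# from samples taken at fs >= 2W.
W = 100.0                    # highest frequency in m(t), Hz
fs = 4 * W                   # sampling rate, comfortably above Nyquist 2W
Ts = 1.0 / fs

def m(t):
    # Example bandlimited message: two tones below W
    return np.sin(2 * np.pi * 60.0 * t) + 0.5 * np.cos(2 * np.pi * 95.0 * t)

k = np.arange(-400, 401)     # sample indices (finite window approximation)
samples = m(k * Ts)

def reconstruct(t):
    # m(t) = sum_k m(kTs) * sinc((t - kTs)/Ts); np.sinc(x) = sin(pi x)/(pi x)
    return np.sum(samples * np.sinc((t - k * Ts) / Ts))

t0 = 0.01234                 # an instant between sample points
print(m(t0), reconstruct(t0))  # the two values agree closely
```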
Ideal sampling is not practical ⇒ Need practical sampling methods.

Natural Sampling
In the figure shown, h(t) = 1 for 0 ≤ t ≤ τ,
and h(t) = 0 otherwise.
The pulse train p  t  is also known as the gating
waveform.
Natural sampling requires only an on/off gate.

Illustration of Natural Sampling



Signal Reconstruction in Natural Sampling
Write the periodic pulse train p(t) (pulse width τ, period Ts) in a Fourier series as:

    p(t) = Σ_{n=−∞}^{∞} c_n e^(j2πn fs t),    c_n = (τ/Ts) sinc(n fs τ) e^(−jπn fs τ)

The sampled waveform and its Fourier transform are then

    ms(t) = m(t) p(t) = Σ_{n=−∞}^{∞} c_n m(t) e^(j2πn fs t)   ⇔   Ms(f) = Σ_{n=−∞}^{∞} c_n M(f − n fs)

The original signal m(t) can still be reconstructed using a lowpass filter as long as the Nyquist
criterion is satisfied.

Flat-Top Sampling
Flat-top sampling is the most popular sampling method and involves two simple operations:
sample and hold.



Equalization
It is not possible to reconstruct m(t) using a lowpass filter alone, even when the Nyquist criterion is
satisfied.
The distortion due to H(f) can be corrected by connecting an equalizer in cascade with the
lowpass reconstruction filter.
Ideally, the amplitude response of the equalizer is

    |H_eq(f)| ∝ (π f T) / sin(π f T)

PULSE MODULATION
Pulse modulation techniques are still analog modulation. For digital communications of an analog
source, quantization of sampled values is needed.

Pulse modulation is the process of transmitting signals in the form of pulses (discontinuous
signals) by using special techniques, such as
• Pulse Amplitude Modulation (PAM)
• Pulse Width Modulation (PWM)
• Pulse Position Modulation (PPM)
• Pulse Code Modulation (PCM)



PULSE MODULATION

Analog Pulse Modulation                Digital Pulse Modulation

Pulse Amplitude Modulation (PAM)       Pulse Code Modulation (PCM)
Pulse Width Modulation (PWM)           Differential PCM (DPCM)
Pulse Position Modulation (PPM)        Delta Modulation (DM)

Pulse Width Modulation (PWM):


Alternative names: Pulse Length Modulation (PLM), Pulse Duration Modulation (PDM).
In pulse width modulation, the amplitude is maintained constant but the width (duration or
length) of each pulse is varied in accordance with the instantaneous value of the analog signal.
The negative side of the signal is brought to the positive side by adding a fixed DC voltage.

(Figure: an analog signal, the timing pulses, and the resulting PWM and PPM waveforms)



Generation of PWM and PPM
One method of generating a PWM signal is to present the analog signal and a sawtooth
signal as inputs to a comparator. The comparator output is the PWM signal.

Using a monostable multivibrator with the PWM signal as its input produces a PPM
signal.
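The comparator method just described can be sketched as follows (the sample rate, sawtooth frequency and message are illustrative assumptions):

```python
import numpy as np

# PWM generation by comparator: output is HIGH while the (DC-shifted)
# analog signal exceeds a repeating sawtooth sweep.
fs = 100_000                    # simulation sample rate
f_saw = 1_000                   # sawtooth (carrier) frequency
f_msg = 50                      # message frequency
t = np.arange(2000) / fs        # 20 ms = one message period

message = 0.5 + 0.4 * np.sin(2 * np.pi * f_msg * t)  # shifted into [0.1, 0.9]
sawtooth = (t * f_saw) % 1.0                         # ramps 0 -> 1 each period

pwm = (message > sawtooth).astype(int)               # the comparator

# The duty cycle within each sawtooth period tracks the message amplitude.
n = fs // f_saw
duty = [pwm[i:i + n].mean() for i in range(0, len(pwm), n)]
print(min(duty), max(duty))     # roughly 0.1 ... 0.9, following the message
```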

Pulse Time Modulation (PTM)

PWM and PPM are sometimes referred to collectively as Pulse Time Modulation (PTM).

In place of the sawtooth waveform, other waveforms can be used, as shown below.
The inputs to the comparator can be any one of the three combinations shown below;
the output will be a PWM signal.
The PPM signal is obtained from the PWM signal by using the monostable multivibrator
as above.

r(t) = the analog signal, c(t) = sawtooth signal (as above)
r(t) = the analog signal, c(t) = inverted sawtooth signal
r(t) = the analog signal, c(t) = triangular signal

Alternatively, as shown below, the PWM signal can be generated by:

1. First sampling the analog signal to produce a PAM signal,
2. Adding a triangular pulse sequence to the PAM samples, and
3. Using a comparator to compare the sum in 2 to a reference, which produces the PWM signal.



Representation of Pulse-Time Modulated Signals
Representation of the PWM signal is somewhat complicated, and will not be considered here.
However, since in the PPM signal the position of a pulse relative to its un-modulated time of
occurrence is varied in accordance with the message signal, the PPM signal can be represented
as

    s(t) = Σ_{k=−∞}^{∞} p(t − kTs − kp m(kTs))

where kp is the time (or position) sensitivity parameter (in sec/volt), and the pulse p(t)
satisfies the condition that p(t) = 0 when t < 0 and t > Ts, with kp |m(t)|_max < Ts/2.
Demodulating the PPM Signal
In the PPM sequence the kth pulse is centred at kTs + kp m(kTs), which is estimated as t_k.
Therefore, setting t_k = kTs + kp m(kTs) gives

    m(kTs) = (t_k − kTs) / kp

Since all the quantities on the RHS are known, it is possible to obtain the sequence of sample
values m(kTs). Then an interpolating filter (actually a low-pass filter) is used to obtain an
estimate of the analog signal m(t).
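The encode/decode relation above can be sketched directly. Ts, kp and the sample values below are illustrative, chosen so that kp·|m|_max < Ts/2 as required:

```python
import numpy as np

# PPM sketch: pulse positions t_k = k*Ts + kp*m(k*Ts),
# recovered at the receiver as m(k*Ts) = (t_k - k*Ts)/kp.
Ts = 1e-3                       # sampling period (assumed)
kp = 1e-4                       # position sensitivity, sec/volt (assumed)
k = np.arange(8)
m_samples = np.array([0.3, -0.7, 1.2, 0.0, -1.5, 2.0, 0.8, -0.2])

t_k = k * Ts + kp * m_samples   # transmitter: place each pulse
m_hat = (t_k - k * Ts) / kp     # receiver: invert the position shift

print(np.max(np.abs(m_hat - m_samples)))  # ~0: samples recovered exactly
```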
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Pulse Amplitude Modulation (PAM)


When an analog signal is sampled, the result is a sequence of pulses of differing amplitudes, in
which the envelope of the pulses follows (is equal to) the original analogue signal. In other
words, the result is a pulse amplitude modulated (PAM) signal.

PAM transmission does not improve the noise performance over baseband modulation, but allows
multiplexing, i.e., sharing the same transmission media by different sources.
The multiplexing advantage offered by PAM comes at the expense of a larger transmission
bandwidth

(Figure: an analog signal and the corresponding amplitude-modulated pulses)



The amplitudes of regularly spaced pulses are varied in proportion to the corresponding sample
values of a continuous message signal. Two operations are involved in the generation of the PAM
signal:
 Instantaneous sampling of the message signal m(t) every Ts seconds,
 Lengthening the duration of each sample, so that it occupies some finite duration T.

Representation of PAM Signals


The PAM signal s(t) obtained from the analog signal m(t) is as shown.

The PAM signal can be represented as

    s(t) = Σ_{k=−∞}^{∞} m(kTs) h(t − kTs)

where h(t) is a standard rectangular pulse, given by

    h(t) = rect((t − T/2)/T) = { 1,   0 < t < T
                                 1/2, t = 0, t = T
                                 0,   t < 0, t > T }

You may have seen this represented previously as

    h(t) = rect((t − T/2)/T) = { 1, 0 ≤ t ≤ T
                                 0, otherwise }

The instantaneously sampled version of m(t) is

    m_δ(t) = Σ_{k=−∞}^{∞} m(kTs) δ(t − kTs)



This can be modified to correspond to the PAM signal by the "hold" operation, which is
equivalent to passing the instantaneously sampled signal through the "hold" filter:

    m_δ(t) * h(t) = ∫_{−∞}^{∞} m_δ(τ) h(t − τ) dτ = ∫_{−∞}^{∞} [ Σ_{k=−∞}^{∞} m(kTs) δ(τ − kTs) ] h(t − τ) dτ

The order of summation and integration can be interchanged to give

    m_δ(t) * h(t) = Σ_{k=−∞}^{∞} m(kTs) ∫_{−∞}^{∞} δ(τ − kTs) h(t − τ) dτ

Noting that

    ∫_{−∞}^{∞} δ(τ − kTs) h(t − τ) dτ = h(t − kTs)

gives

    m_δ(t) * h(t) = Σ_{k=−∞}^{∞} m(kTs) h(t − kTs)

which agrees with the form of the PAM signal s(t) given above.
Therefore

    s(t) = m_δ(t) * h(t),   with   M_δ(f) = fs Σ_{k=−∞}^{∞} M(f − k fs)

so that

    S(f) = M_δ(f) H(f) = H(f) fs Σ_{k=−∞}^{∞} M(f − k fs)
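The "hold" operation can be sketched numerically: convolving the impulse-sampled sequence with a rectangular pulse of width T reproduces the flat-top PAM waveform. All parameter values below are illustrative:

```python
import numpy as np

# Flat-top PAM as m_delta(t) * h(t): impulses at the sample instants
# convolved with a rectangular "hold" pulse of duration T <= Ts.
fs_sim = 100_000                # simulation grid rate
Ts = 1e-3                       # sampling period
T = 0.5e-3                      # pulse (hold) duration
t = np.arange(1000) / fs_sim    # 10 ms of simulation time

m = np.sin(2 * np.pi * 200 * t)             # message
impulses = np.zeros_like(t)
step = int(Ts * fs_sim)
impulses[::step] = m[::step]                # m_delta: samples on the grid

h = np.ones(int(T * fs_sim))                # rectangular "hold" pulse
pam = np.convolve(impulses, h)[:len(t)]     # m_delta * h

# During each pulse the PAM waveform holds the sample value m(kTs).
k = 3
hold_region = pam[k * step : k * step + len(h)]
print(np.allclose(hold_region, m[k * step]))  # True
```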

Aperture Effect and its Equalization


Aperture effect
The distortion caused by the use of pulse-amplitude modulation to transmit an analog
information-bearing signal
Equalizer
Decreases the in-band loss of the reconstruction filter as the frequency increases.
Ideally, the amplitude response of the equalizer is

    |H_eq(f)| ∝ (π f T) / sin(π f T)

The amount of equalization needed in practice is usually small. For example, for a low duty cycle
T/Ts ≤ 0.1 the amount of distortion is less than 0.5 per cent, and no equalization is needed.
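The 0.5 per cent figure can be checked directly from the (sin x)/x spectrum of the flat-top pulse, evaluated at the band edge f = fs/2:

```python
import math

# Aperture-effect roll-off for duty cycle T/Ts = 0.1 at f = fs/2.
duty = 0.1                    # T / Ts
x = math.pi * (duty / 2)      # pi*f*T at f = fs/2 equals pi*duty/2
rolloff = math.sin(x) / x     # |H(f)| / |H(0)| for the flat-top pulse
distortion = 1.0 - rolloff
print(distortion * 100)       # ~0.41 per cent, i.e. less than 0.5%
```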

The transmission of PAM signals imposes rather stringent requirements on the magnitude and
phase of the channel, due to the relatively narrow pulses required.
Furthermore, the noise performance of PAM transmission can never be better than ordinary
baseband signal transmission. Accordingly, for transmission over long distances, PAM
remains a means of message processing for time-division multiplexing, after which
conversion to other forms of pulse modulation is subsequently performed.

The Transition from Analog to Digital Communications Systems


In completing the transition from analog to digital communication systems, it is noteworthy to
list some merits of digital communication systems over the analog counterparts
Merits of Digital Communication:
1. Performance
 Digital modulation is easier to receive, since the receiver has only to detect whether
the pulse is LOW or HIGH.
 While both analog and digital signals can be corrupted in transit, with digital signals the
original signal can be reproduced accurately, despite having been corrupted.
 Digital modulation permits the use of regenerative repeaters which, when placed along the
transmission path at short enough distances, can practically eliminate the degrading
effects of channel noise and signal distortion.
 The digital signals can be cleaned up at each regenerator stage to restore the quality,
and amplified by the regenerators.
2. Ruggedness
 A digital communication system can be designed to withstand the effects of channel
noise and signal distortion
3. Reliability
Digital signals can be made highly reliable by exploiting powerful error-control coding
techniques.
4. Security
 Analog signals can be received by anyone with a suitable receiver, but digital signals can
be coded so that only the intended recipient can receive them. They can be
made highly secure by exploiting powerful encryption algorithms.
5. Efficiency
 Digital systems are inherently more efficient than analog communication systems in the trade-off
between transmission bandwidth and signal-to-noise ratio.
6. System integration
 It is easier to integrate digitized analog signals with digital computer data
PULSE CODE MODULATION (PCM)
 Pulse code modulation (PCM) is the name given to the class of baseband signals
obtained from the quantized PAM signals by encoding each quantized sample into a
digital word.
Steps - Sampling
- Quantization
- Coding
 The source information is sampled and quantized to one of Q levels;
 Then each quantized sample is digitally encoded into an N-bit codeword, where
N = log2Q.
 For baseband transmission, the codeword bits will be transformed to pulse waveforms.
 The essential features of binary PCM are shown in Figure 1.2.
PULSE CODE MODULATION (PCM) SYSTEM
(Figure 1.2: an analog signal m(t) passes through a Sampler, Quantizer and Encoder — together
the analog-to-digital converter — producing a digitally encoded, quantized PAM signal; after
the communication channel, a Quantizer, Decoder and Filter recover m'(t).)

Figure 1.2 A PCM communication System.

Quantization transforms m(nTs) into a discrete amplitude m̂(nTs) taken from a finite set.
If the spacing between two adjacent amplitude levels is sufficiently small, then m̂(nTs) can
be made practically indistinguishable from m(nTs). There is always a loss of information
associated with the quantization process, no matter how fine one may choose the finite set



of the amplitudes ⇒ Not possible to completely recover the sampled signal from the
quantized signal.
Quantization Process
Amplitude quantization
The process of transforming the sample amplitude m(kTs) of a baseband signal m(t) at time
t = kTs into a discrete amplitude v(kTs) taken from a finite set of possible levels.
    I_k : m_(k−1) < m ≤ m_k,    k = 1, 2, …, M

(Figure: mid-tread and mid-riser quantizer characteristics)

The Encoder
A PCM communication system is represented in Figure 1.2.
The analogue signal m(t):
 is sampled, and then
 these samples are subjected to the operation of quantization.
 The quantized samples are applied to an encoder.
 The encoder responds to each such sample by the generation of a unique and identifiable
binary pulse (or binary level) pattern.
In the examples of Figures 1.3 and 1.4 (below), the pulse pattern happens to have a numerical
significance which is the same as the order assigned to the quantized levels. However, this
feature is not essential; we could have assigned any pulse pattern to any level.
(Figure 1.3: an analog input A(t) is sampled by pulses from a sampling pulse generator driven by
a digital clock; the encoder maps quantization levels 0–7, carrying binary codes 000–111, to a
serial PCM output. The four samples shown encode to the digital signal 011 110 101 100.)

Figure 1.3 A 3-Bit PCM System Showing Analogue to 3-Bit Digital Conversion
The combination of the quantizer and encoder in the dashed box of Figure 1.2 is called an
analog-to-digital converter, usually abbreviated A/D converter.

In commercially available A/D converters there is normally no sharp distinction
between that portion of the electronic circuitry used to do the quantizing and
that portion used to accomplish the encoding.

The output of the A/D converter is a digitally encoded signal, which is the signal transmitted
over the communications channel in a PCM system.

The Decoder
When the digitally encoded signal arrives at the receiver (or repeater), the first operation to
be performed is the separation of the signal from the noise which has been added during the
transmission along the channel.
Such an operation is again an operation of re-quantization; hence the first block in the receiver
in Figure 1.2 is termed a quantizer
A feature which eases the burden on this quantizer is that for each pulse interval it has only to
make the relatively simple decision of whether a pulse has or has not been received or which of
two voltage levels has occurred.
Suppose the quantized sample pulses had been transmitted instead, rather than the
binary-encoded codes for such samples. Then this quantizer would have had to yield, in
each pulse interval, not a simple yes or no decision, but rather a more complicated determination
about which of the many possible levels had been received.
In the examples of Figures 1.3 and 1.4, if a quantized PAM signal had been transmitted, the
receiver quantizer would have to decide which of the levels 0 to 7 was transmitted, while with a
binary PCM signal the quantizer need only distinguish between two possible levels.
The relative reliability of the yes or no decision in PCM over the multivalued decision
required for quantized PAM constitutes an important advantage for PCM.



The decoder, also called a digital-to-analog (D/A) converter, performs the inverse operation
of the encoder. The decoder output is the sequence of quantized multilevel sample pulses.

The quantized PAM signal is now reconstituted. It is then filtered to reject any frequency
components lying outside of the baseband. The final output signal m'(t) is identical with the
input m(t) except for quantization noise and the occasional error in yes-no decision making at
the receiver due to the presence of channel noise.

Example 1.2:
 An analog signal, x(t), is limited in its excursions to the range - 4 to + 4 V.
 The step size between quantization levels has been set at 1V.
 8 quantization levels are employed (0,1,2, …, 7);
 These are located at -3.5, -2.5, . . . , +3.5 V.

 Assign:
 The code number 0 to the level at -3.5 V,
 The code number 1 to the level at -2.5 V,
 The code number 2 to the level at -1.5 V,
 The code number 3 to the level at -0.5 V,
 The code number 4 to the level at +0.5 V, and so on,
 until the level at +3.5 V, which is assigned the code number 7.
 Each code number has its representation in binary form, ranging from 000 for
code number 0 to 111 for code number 7.
 The ordinate in Figure 1.4 is labelled with quantization levels and their code numbers.
 Each sample of the analogue signal is assigned to the quantization level closest to the
value of the sample.
 Beneath the analogue waveform, x(t), are four representations of x(t), as follows:
 the natural sample values,
 the quantized sample values,
 the code numbers, and
 the PCM sequence.

Figure 1.4 Natural samples, quantized samples, and pulse code modulation. [Taub and Schilling,
Principles of Communication Systems, McGraw-Hill, New York, 1971]

Note that in the example of Figure 1.4, each sample is represented by a 3 bit codeword.
For Q levels of quantization, each sample is digitally encoded into an N-bit codeword, where
N = log2Q.
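A minimal sketch of this encoding, using the levels of Example 1.2 (the helper function pcm_encode is hypothetical, introduced only for illustration):

```python
# 3-bit PCM encoding with the levels of Example 1.2:
# range -4..+4 V, step 1 V, levels at -3.5, -2.5, ..., +3.5 V,
# natural binary code numbers 0..7.
def pcm_encode(sample, v=4.0, n_bits=3):
    q_levels = 2 ** n_bits                 # Q = 8
    step = 2 * v / q_levels                # 1 V
    # Code number of the nearest level (clamped to the valid range)
    code = int((sample + v) / step)
    code = min(max(code, 0), q_levels - 1)
    return format(code, "0{}b".format(n_bits))

for x in (-3.7, -0.2, 0.3, 3.9):
    print(x, pcm_encode(x))
# -3.7 -> 000, -0.2 -> 011, 0.3 -> 100, 3.9 -> 111
```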
A PCM signal is obtained from the quantized PAM signal by encoding each quantized sample to a
digital codeword. In binary PCM each quantized sample is digitally encoded into an N-bit binary
codeword, where N = log2Q.
Binary digits of a PCM signal can be transmitted using many efficient modulation schemes.
There are several mappings:
 Natural binary coding (NBC),
 Gray mapping,
 Foldover binary coding (FBC), etc.
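The first two mappings can be illustrated with a short sketch (Python, not part of the notes); in the Gray mapping, adjacent levels differ in exactly one bit, which limits the damage done by a single channel bit error:

```python
# Natural binary vs. Gray mapping for Q = 8 levels (illustrative sketch,
# not from the notes): adjacent Gray codewords differ in exactly one bit.
def natural_binary(k, n_bits):
    return format(k, f"0{n_bits}b")

def gray(k, n_bits):
    """Gray mapping: k XOR (k >> 1)."""
    return format(k ^ (k >> 1), f"0{n_bits}b")

for k in range(8):
    print(k, natural_binary(k, 3), gray(k, 3))
```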

Quantization Error

The signal m(t) is quantized to the nearest level m_k, so the instantaneous error is e = m(t) - m_k, which satisfies -Δ/2 ≤ e ≤ Δ/2. With M intervals of step size Δ, the peak-to-peak range of the signal is MΔ.
Figure 1.5 (a) Voltage excursions for a signal m(t). The range is divided into M intervals of size . The
quantization levels are located at the centre of the interval. (b) The error voltage e(t) as a function of
the instantaneous value of the signal m(t).



Let p(m)dm be the probability that m(t) is in the range m - dm/2 to m + dm/2.
Then the mean-square quantization error is

\overline{e^2} = \int_{m_1-\Delta/2}^{m_1+\Delta/2} (m-m_1)^2\,p(m)\,dm + \int_{m_2-\Delta/2}^{m_2+\Delta/2} (m-m_2)^2\,p(m)\,dm + \int_{m_3-\Delta/2}^{m_3+\Delta/2} (m-m_3)^2\,p(m)\,dm + \cdots

In general, the probability density function p(m) of the message signal m(t) will not be constant.
But when the number (Q) of quantization levels is large, so that the step size  is small in
comparison with the peak-to-peak range of the message signal, it is reasonable to make the
approximation that p(m) is constant within each quantization interval.

Then in each interval set p(m) = p(k), k = 1, 2, ..., Q, remove each p(k) from inside the integral, and make the substitution x = m - m_k, k = 1, 2, ..., Q:

\overline{e^2} = \left[p(1)+p(2)+p(3)+\cdots\right]\int_{-\Delta/2}^{\Delta/2} x^2\,dx = \left[p(1)+p(2)+p(3)+\cdots\right]\frac{\Delta^3}{12} = \left[p(1)\Delta+p(2)\Delta+p(3)\Delta+\cdots\right]\frac{\Delta^2}{12}

But since p(k)Δ is the probability that the signal voltage m(t) will be in the kth quantization interval (k = 1, 2, ..., Q), the sum p(1)Δ + p(2)Δ + p(3)Δ + ⋯ + p(Q)Δ = 1.

Therefore, the mean-square quantization error is

\overline{e^2} = \frac{\Delta^2}{12}
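The Δ²/12 result can be confirmed numerically; a small Monte Carlo sketch (illustrative, not from the notes):

```python
# Monte Carlo check that the mean-square quantization error approaches
# Delta^2 / 12 (illustrative sketch, not from the notes).
import random

def msqe_uniform(delta, n_samples=200_000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        m = rng.uniform(-4.0, 4.0)                    # uniform signal value
        mk = (round(m / delta - 0.5) + 0.5) * delta   # nearest quantizer level
        total += (m - mk) ** 2
    return total / n_samples

delta = 0.5
print(msqe_uniform(delta), delta ** 2 / 12)   # the two agree closely
```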
Signal-to-Quantization Noise Power Ratio
Suppose the analogue signal m(t) has a voltage range from -V to +V volts, and that each interval is occupied with equal probability; that is, suppose that the signal is uniformly distributed. The probability density function of the signal is then 1/(2V), and the normalized average power of the input signal is

S_i = \overline{m^2(t)} = \int_{-V}^{V} m^2\,\frac{1}{2V}\,dm = \frac{V^2}{3}

Recall from above that the quantization noise power is given by N_Q = Δ²/12.

If the number of quantization levels is Q, then QΔ = 2V, so V = QΔ/2. Thus the signal-to-quantization-noise power ratio is

\frac{S_i}{N_Q} = \frac{(Q\Delta/2)^2/3}{\Delta^2/12} = Q^2

For Q quantization levels, the number N of bits needed is N = log₂Q. Equivalently, Q = 2^N, so that

\frac{S_i}{N_Q} = 2^{2N} \qquad \text{(for a uniformly distributed signal)}

In decibels,

\left(\frac{S_i}{N_Q}\right)_{dB} = 10\log_{10}\!\left(2^{2N}\right) \approx 6N



The expression S_i/N_Q = 2^{2N} (and its decibel form) applies to a signal that is uniformly distributed, that is, when the levels are occupied with equal probabilities.

In voice communications we use N = 8, corresponding to Q = 256 quantization levels, so that S_i/N_Q = 48 dB.

The Crest Factor

When the signal is not uniformly distributed, and the number Q of quantization levels is sufficiently large, the quantization noise in each interval can still be taken to be uniformly distributed, giving N_Q = Δ²/12, while the signal has an RMS value that differs from the maximum value. Under these conditions, it is necessary to define the crest factor F:

F = \frac{\text{peak value of the signal}}{\text{rms value of the signal}} = \frac{S_{MAX}}{S_{RMS}}

With S_i = S_{RMS}^2, N_Q = Δ²/12 and Δ = 2S_{MAX}/Q, the signal-to-quantization noise ratio above becomes

\frac{S_i}{N_Q} = \frac{S_{RMS}^2}{\Delta^2/12} = \frac{3Q^2}{\left(S_{MAX}/S_{RMS}\right)^2} = \frac{3Q^2}{F^2}

where F = S_{MAX}/S_{RMS} is the crest factor of the signal. The ratio V/σ is sometimes called the loading factor (designated by Λ later when discussing overload distortion).

In decibels,

\left(\frac{S_i}{N_Q}\right)_{dB} = 10\log_{10}\!\left(\frac{3Q^2}{F^2}\right) = 20\log_{10}Q - 20\log_{10}F + 4.77 = 6.02N - 20\log_{10}F + 4.77
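A quick sketch of this formula (assumed helper, not from the notes); note that a full-scale sinusoid has crest factor F = √2, which recovers the familiar 6.02N + 1.76 dB:

```python
# S/Nq = 3*Q^2 / F^2 in decibels (sketch of the formula above).
import math

def snr_db(n_bits, crest_factor):
    q = 2 ** n_bits
    return 10 * math.log10(3 * q * q / crest_factor ** 2)

# a full-scale sinusoid has F = sqrt(2), recovering 6.02*N + 1.76 dB
print(snr_db(8, math.sqrt(2)))
print(snr_db(8, 1.0))   # uniformly distributed signal: 6.02*N + 4.77 dB
```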

Example 1.3
A sinusoid with maximum voltage (amplitude) V/2 is to be encoded with a PCM coder having a range of V
volts. Derive an expression for the signal-to-quantization noise (distortion) ratio and determine the
number of quantization bits required to provide an S/NQ of at least 40 dB.



Solution:
For a sinusoid, S_i = A²/2 = (V/2)²/2 = V²/8.
Also N_Q = Δ²/12 (where Δ = step size).
With Q levels and N bits, Q = 2^N, and Δ = 2(V/2)/Q = V/2^N, so Δ² = V²/2^{2N}.
Therefore S_i/N_Q = (V²/8)/(Δ²/12) = (3/2)(V²/Δ²) = (3/2)·2^{2N}.
In decibels, (S_i/N_Q)_{dB} = 6N + 1.76 dB.
For 40 dB: 6N + 1.76 ≥ 40, so 6N ≥ 38.24, or N ≥ 38.24/6 = 6.37.
The minimum value of N is 7 bits.

Example 1.4: Optimal Quantizer Design

Consider a signal confined to [0, 1] whose density p_b(m) takes the value 3 on [0, 1/4] and 1 on (1/4, 1] (the centroid conditions below are unchanged by an overall scaling of p_b). Design a two-level quantizer: one threshold d_1 and two reconstruction levels r_1 and r_2.

Each reconstruction level is the centroid of its interval:

r_1 = \frac{\int_0^{d_1} m\,p_b(m)\,dm}{\int_0^{d_1} p_b(m)\,dm} = \frac{3\int_0^{1/4} m\,dm + \int_{1/4}^{d_1} m\,dm}{3\int_0^{1/4} dm + \int_{1/4}^{d_1} dm} = \frac{1+8d_1^2}{8+16d_1}

r_2 = \frac{\int_{d_1}^{1} m\,dm}{\int_{d_1}^{1} dm} = \frac{(1-d_1^2)/2}{1-d_1} = \frac{1+d_1}{2}

and the threshold lies midway between the two levels:

d_1 = \frac{r_1+r_2}{2}

Substituting for r_1 and r_2 and simplifying gives

4d_1^2 + d_1 - \frac{5}{4} = 0 \quad\Rightarrow\quad d_1 = 0.4478,\ \ r_1 = 0.1717,\ \ r_2 = 0.7239.
Max-Lloyd quantizer
Max [1] and Lloyd [2] independently designed quantizers minimizing the mean square quantization error (MSQE) and developed tables for inputs governed by standard distribution functions such as:
 Gamma,
 Laplacian,
 Gaussian,
 Rayleigh, and
 Uniform.
Types of Quantizers
 Uniform – A uniform quantizer is one with equally-spaced quantization levels (i.e. the step size is the same for all levels).
 Non-Uniform – A non-uniform quantizer is one with unequally-spaced quantization levels (i.e. the step size differs between some or all levels).
[1] J. Max, "Quantizing for minimum distortion," IRE Trans. Inform. Theory, Vol. IT-6, pp. 16-21, Mar. 1960.
[2] S. P. Lloyd, "Least squares quantization in PCM," IEEE Trans. Inform. Theory, Vol. IT-28, pp. 129-137, 1982.



 Midtreader – A midtreader is a quantizer in which there is a level at zero.
 Midriser – A midriser is a quantizer in which the input-output characteristic exhibits a transition at zero, with the result that there is no quantization level at zero.
Notice that these types are not all mutually exclusive:

Midtreader (MT) Midriser (MR)


Uniform (U) U-MT U-MR
Non-Uniform (NU) NU-MT NU-MR
Yet another classification can be memoryless, or with memory. The former assumes that each
sample is quantized independently; the latter takes into account the previous samples. We will
limit ourselves here to memoryless quantizers.
[Figure: input-output characteristics of a midtreader quantizer (level at zero) and a midriser quantizer (no level at zero).]

Quantizer Design
Given the range of the input x as a_L to a_U and the number of output levels as Q, design the quantizer so that the MSQE is minimized:

\text{MSQE} = \varepsilon = E\left[(x-\hat{x})^2\right] = \int_{a_L}^{a_U} (x-\hat{x})^2\,p(x)\,dx

When the number of output levels Q is very large, the quantizer distortion can be approximated as follows. Assuming that p(x) is constant over each level, p(x) ≈ p(r_l),



Q 1

 x  rl 2 p( x)dx
d l 1
 
dl
l 0

 
This is minimized by setting, the derivatives to zero:  0 and 0
rk d k
  
x  rk 1 2 p( x)dx   x  rk 1 2 p( x)dx
dk d k 1

d k d k  d k 1 dk 
Evaluating to zero, we obtain
d k  rk 1 2 p(d k )  d k  rk 2 p(d k )  0
which yields
dk  rk 1   dk  rk 
Since d k  rk 1   0 and d k  rk   0 , of the two solutions, only the following is valid:

dk  rk 1   dk  rk 
and so
rk  rk 1
dk 
2
Also

\frac{\partial}{\partial r_k}\int_{d_k}^{d_{k+1}} (x-r_k)^2\,p(x)\,dx = -2\int_{d_k}^{d_{k+1}} (x-r_k)\,p(x)\,dx = 0

Hence

r_k = \frac{\int_{d_k}^{d_{k+1}} x\,p(x)\,dx}{\int_{d_k}^{d_{k+1}} p(x)\,dx}

This says that each output level r_k is the centroid of p(x) over its quantization interval [d_k, d_{k+1}].

The solution is not in closed form: to find the decision levels d_k we must know the r_k, and vice versa. However, by iterative techniques both the d_k and the r_k can be determined.
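One such iterative scheme alternates the two necessary conditions until they stop moving; a minimal numerical sketch (assumed implementation, not from the notes, using midpoint-rule integration and, as a test case, the piecewise density of Example 1.4):

```python
# Lloyd-Max iteration (assumed numerical sketch, not from the notes):
# alternate the midpoint and centroid conditions until convergence.
def lloyd_max(pdf, a, b, q, iters=200, grid=2000):
    """Return (thresholds d, levels r) for a q-level quantizer on [a, b].

    pdf need not be normalized: both conditions are scale-invariant."""
    r = [a + (k + 0.5) * (b - a) / q for k in range(q)]   # uniform start
    d = [a] + [0.0] * (q - 1) + [b]
    for _ in range(iters):
        for k in range(1, q):
            d[k] = 0.5 * (r[k - 1] + r[k])   # threshold midway between levels
        for k in range(q):                   # level = centroid of its interval
            lo, hi = d[k], d[k + 1]
            h = (hi - lo) / grid
            num = den = 0.0
            for i in range(grid):            # midpoint-rule integration
                x = lo + (i + 0.5) * h
                num += x * pdf(x) * h
                den += pdf(x) * h
            r[k] = num / den
    return d, r

# two-level design for the density of Example 1.4 (3 on [0, 1/4], 1 above)
d, r = lloyd_max(lambda x: 3.0 if x <= 0.25 else 1.0, 0.0, 1.0, 2)
print(d[1], r)   # close to d1 = 0.4478, r1 = 0.1717, r2 = 0.7239
```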

If the signal is uniformly distributed (or the number of levels Q is large), p(x) can be taken as constant in each interval, and we have

\frac{\partial\varepsilon}{\partial r_k} = p(r_k)\,\frac{\partial}{\partial r_k}\int_{d_k}^{d_{k+1}} (x-r_k)^2\,dx = \frac{p(r_k)}{3}\,\frac{\partial}{\partial r_k}\left[(d_{k+1}-r_k)^3 + (r_k-d_k)^3\right]

Equating the derivative to zero, we have

(d_{k+1}-r_k)^2 - (d_k-r_k)^2 = 0

From this we obtain (d_{k+1}-r_k) = -(d_k-r_k), and so

r_k = \frac{d_{k+1}+d_k}{2}

Each reconstruction level r_k is midway between its two adjacent decision levels d_k and d_{k+1}.

Overload Distortion
Overload distortion results when the input signal exceeds the outermost quantizer levels (-V, V). Notice that -V = d_0 and V = d_Q. For such signals, we may rewrite the mean square quantization error as

\overline{e^2} = \int_{-\infty}^{d_0} (x-d_0)^2\,p(x)\,dx + \sum_{l=0}^{Q-1}\int_{d_l}^{d_{l+1}} (x-r_l)^2\,p(x)\,dx + \int_{d_Q}^{\infty} (x-d_Q)^2\,p(x)\,dx

where the first and the last integrals represent the overload distortion terms.

To compute the mean square error due to overload distortion (D_0), the input signal pdf must be specified. First, let us assume the pdf to be symmetric, so that the overload distortion terms are equal and can be combined to give

D_0 = 2\int_{d_Q}^{\infty} \left(x-d_Q\right)^2\,p(x)\,dx

If the input signal has a noise-like characteristic, then a Gaussian pdf can be assumed, described by

p(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\,e^{-x^2/2\sigma^2}

An important example of signals that are closely described by a Gaussian pdf, by virtue of the central limit theorem, is an FDM multichannel signal.

Speech statistics are often modelled by the Laplacian pdf, given by

p(x) = \frac{1}{\sqrt{2}\,\sigma}\,e^{-\sqrt{2}\,|x|/\sigma}

where σ is the root mean square (rms) value of the input signal and σ² is the average signal power.

For large , performance is bounded by a quantization noise asymptote given by Q2 = 22N,


where N is the number of bits used to represent each level in the quantizer.
Code lengths of n = 2, 4, 6, 8, and 10 are shown. These curves also illustrate that by increasing
the code length by 1 bit, the S/D ratio is improved by 6 dB.

 2 2
D0  
  

 V  2  e

 x2 / 2
 2
dx   VeV / 2
  
2 2

  V/
 
for the Gaussian input, and
2 /
D0   2eV for the Laplacian input.

The ratio V/ is known as the loading factor.


Loading factor,  = V/

A typical value is  = 4, resulting in what is known


as 4 loading.
For 4 loading, the quantizer range 2V = 8.
The figure plots the signal-to-distortion (S/D)
ratio for linear PCM versus the loading factor 
for a Gaussian input signal.
Linear PCM S/Dq Performance for Gaussian
For small , performance is limited by an Signal Input
asymptotic bound due to amplitude overload given
by D0 (above).
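The overload terms can be evaluated numerically; a short sketch (assumptions: σ normalized to 1, so the loading factor equals V; the Gaussian case is written as the exact value of the tail integral using erfc):

```python
# Overload distortion D0 versus the loading factor (sketch; sigma is
# normalized to 1, so the loading factor lf equals V/sigma = V).
# The Gaussian case is the exact value of 2*int_V^inf (x-V)^2 p(x) dx.
import math

def d0_gaussian(lf):
    return ((1 + lf * lf) * math.erfc(lf / math.sqrt(2))
            - math.sqrt(2 / math.pi) * lf * math.exp(-lf * lf / 2))

def d0_laplacian(lf):
    return math.exp(-math.sqrt(2) * lf)   # sigma^2 * exp(-sqrt(2)*V/sigma)

for lf in (2, 4, 6):
    print(lf, d0_gaussian(lf), d0_laplacian(lf))
```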

Effect of Transmission Errors


Transmission errors cause inversion of received bits resulting in another form of distortion in
the decoded analog signal. The distortion power due to decoded bit errors can be computed by
using the mean square value of the decoded bit errors.
Assuming the use of a linear (i.e. uniform) quantizer, each sample s(iT) is transmitted as a weighted binary code:

s(iT) = \sum_{j=1}^{N} b_{ij}\,2^{\,j-1}\,\Delta

and the reconstructed sample can be written as

\hat{s}(iT) = \sum_{j=1}^{N} c_{ij}\,2^{\,j-1}\,\Delta

where N = the number of bits in the code, b_{ij} = the transmitted bits, c_{ij} = the received bits, and Δ = the quantization step size. It is noted that the c_{ij} may contain errors.
An error in the jth bit will cause an error in the sample value of ε_j = 2^{j-1}Δ. If each error occurs with probability p_j, then the error has a mean square value of

\overline{e^2} = \sum_{j=1}^{N} \varepsilon_j^2\,p_j = \Delta^2\sum_{j=1}^{N} 4^{\,j-1}\,p_j

If the bit errors are assumed to be independent, then p_j = P_e for each of the bits in the code, so that

\overline{e^2} = \Delta^2 P_e \sum_{j=1}^{N} 4^{\,j-1} = \Delta^2 P_e\,\frac{4^N-1}{3}

which for N > 3 can be approximated as

\overline{e^2} = N_e \approx \frac{\Delta^2 P_e\,4^N}{3}

The overall signal-to-noise ratio is then

\frac{S}{N_Q+N_e} = \frac{S/N_Q}{1+N_e/N_Q}

where S is the signal power, N_Q the quantization noise power, and N_e the thermal-noise (decoding error) power.

Substituting N_e/N_Q = (\Delta^2 P_e 4^N/3)/(\Delta^2/12) = 4P_e\,2^{2N} and S/N_Q = 2^{2N} gives

\left(\frac{S}{N}\right)_{PCM} = \frac{2^{2N}}{1+4P_e\,2^{2N}}

for the signal-to-noise ratio of a PCM system.
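A sketch of this expression and its two regimes (illustrative helper, not from the notes):

```python
# Sketch of (S/N)_PCM = 2^(2N) / (1 + 4*Pe*2^(2N)) and its two regimes.
import math

def snr_pcm_db(n_bits, pe):
    s = 2 ** (2 * n_bits)
    return 10 * math.log10(s / (1 + 4 * pe * s))

print(snr_pcm_db(8, 1e-9))   # quantization-noise limited (near 48 dB)
print(snr_pcm_db(8, 1e-4))   # thermal-noise limited
```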

In this expression, the probability P_e of bit error (bit error rate) depends on the modulation scheme used. For example, for PSK and FSK the expressions for P_e are given below in terms of the signal parameters.



As shown in the Figure, there are three S
  dB
regions consider:  N out
50

These consist of two extreme cases 48


– weak signals
46
(thermal noise dominates), and
- strong signals
44 Quantization Noise Limited
Region
(dominated by quantization noise), 42

in addition to an 40
36

- intermediate region. 38
where the signal is neither weak nor Thermal Noise Limited
36 Region
strong, in which the PCM signal-to-noise
ratio is in between the above two 34

extremes. 32

30

S
20 21 22 23 24 25 26 27 28   dB
 N  in

Input-Output SNR characteristic for a PCM


System with N=8 bits

Weak input signals: when the input signal is very weak (low input signal-to-noise ratio), the probability of bit error P_e is high, and the 1 in the denominator can be ignored, giving

\left(\frac{S}{N}\right)_{PCM} = \frac{1}{4P_e}

that is, thermal noise dominates the system.

Strong input signals: when the input signal is very strong, the bit error probability P_e in the denominator can be ignored, giving

\left(\frac{S}{N}\right)_{PCM} = 2^{2N}

that is, quantization noise dominates the system.

In the signal-to-noise ratio given above, the probability P_e of bit error (bit error rate) depends on the modulation scheme used. For example, for PSK and FSK it is given respectively by

PSK: P_e = \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{\frac{E_b}{N_0}}\right)

FSK: P_e = \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{0.6\,\frac{E_b}{N_0}}\right)



Let the symbol energy be E and the symbol duration be T_S = 1/(2f_M), with f_M being the maximum frequency component in the signal. If each symbol is encoded in N bits, then E_b = E/N = (S_i T_S)/N, where S_i is the power in the input signal. The above expressions then become

PSK: P_e = \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{\frac{E_b}{N_0}}\right) = \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{\frac{1}{2N}\cdot\frac{S_i}{N_0 f_M}}\right) = \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{\frac{1}{2N}\cdot\frac{E}{T_S f_M N_0}}\right)

FSK: P_e = \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{0.6\,\frac{E_b}{N_0}}\right) = \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{\frac{0.3}{N}\cdot\frac{S_i}{N_0 f_M}}\right) = \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{\frac{0.3}{N}\cdot\frac{E}{T_S f_M N_0}}\right)
\left(\frac{S}{N}\right)_{PCM} = \frac{2^{2N}}{1+4P_e\,2^{2N}}

Since 10\log_{10}(0.6) = -2.2 dB, it follows that for weak signals the SNR curves for PSK and FSK will be about 2.2 dB apart, as shown below. For strong signals, the two expressions are the same.

[Figure: (SNR)out in dB versus (SNR)in for direct/PSK and for FSK; in the thermal-noise-limited region the two curves are separated by about 2.2 dB.]
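The two error-rate expressions can be compared directly; a short sketch (not from the notes) showing that shifting Eb/N0 by the factor 1/0.6 (about 2.2 dB) makes the FSK curve match the PSK curve:

```python
# Pe for PSK and FSK as quoted above (sketch).
import math

def pe_psk(ebn0):
    """Eb/N0 given as a linear ratio, not in dB."""
    return 0.5 * math.erfc(math.sqrt(ebn0))

def pe_fsk(ebn0):
    return 0.5 * math.erfc(math.sqrt(0.6 * ebn0))

for db in (6, 8, 10):
    ebn0 = 10 ** (db / 10)
    print(db, pe_psk(ebn0), pe_fsk(ebn0))
```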
Problems
P.1.1 A signal to be quantized has a range normalized to 1 and a probability density function p(x) = 1 -
|x| with -1  x  1.
(a) Find the quantizer step size and levels for a uniform quantizer with eight levels.
(b) Find the eight levels for a nonuniform quantizer necessary to make the quantizer levels
equiprobable.
(c) Plot the compressor characteristic for part (b).

P.1.2 In a compact-disc (CD) digital audio system, 16-bit linear PCM is used with a sampling frequency
of 44.1 kHz for each of two stereo channels.
(a) What is the resulting data rate?
(b) What is the maximum frequency allowed on the input signal?
(c) What is the maximum S/Dq ratio in dB?
(d) If music has a loading factor  = 20, find the average S/Dq in dB.
(e) If the total playing time of the CD is 70 minutes, find the total number of bits stored on the
disc. Assume that error correction coding, synchronization, and other overhead bits make up one
half of the total capacity of the disc with the remaining one-half dedicated to PCM bits.
P.1.3 The bandwidth of a TV video plus audio signal is 4.5 MHz. This signal is to be converted into linear
PCM with 1024 quantizing levels. The sampling rate is to be 20 percent above the Nyquist rate.
(a) Determine the resulting bit rate.
(b) Determine the S/Dq if the quantizer loading factor is  = 6.

P.1.4 A 12-bit linear PCM coder is to be used with an analog signal in the range of 10 volts.
(a) Find the size of the quantizing step.
(b) Find the mean square error of the quantizing distortion.
(c) Find the S/Dq for a full-scale sinusoidal input.
(d) Find the S/Dq for a 1 volt sinusoidal input.

P.1.1 A signal to be quantized has a range normalized to 1 and a probability density function p(x) = 1 -
|x| with -1  x  1.
(a) Find the quantizer step size and levels for a uniform quantizer with eight levels.
Solution:
For a uniform quantizer, the levels r_k and the thresholds d_k are equally spaced.

Step size: Δ = range / no. of levels = 2/8 = 1/4.

The first level is at -1 + Δ/2 = -1 + 1/8 = -7/8; the last level is at 1 - Δ/2 = 7/8. The other levels are spaced a step size apart, so the levels are

-7/8, -5/8, -3/8, -1/8, +1/8, +3/8, +5/8, +7/8

[Figure: staircase input-output characteristic of the eight-level uniform quantizer, with thresholds d_0, ..., d_8 spaced Δ = 2V/Q = 2/8 = 1/4 apart and output levels r_0 = -7/8 through r_7 = +7/8.]

(b) Find the eight levels for a nonuniform quantizer necessary to make the quantizer levels equiprobable.

Solution: We need

P_j = \int_{d_j}^{d_{j+1}} p_X(x)\,dx = \frac{1}{8}, \qquad j = 0, 1, 2, \ldots, 7

For positive x (j = 4, 5, 6, 7):

P_j = \int_{d_j}^{d_{j+1}} (1-x)\,dx = \frac{1}{2}\left[(1-d_j)^2 - (1-d_{j+1})^2\right] = \frac{1}{8}

so that

(d_{j+1}-1)^2 = (d_j-1)^2 - \frac{1}{4}

By symmetry d_4 = 0, so (d_5-1)^2 = 1 - 1/4 = 3/4. Taking the smaller root gives d_5 = 1 - \frac{\sqrt{3}}{2} = 0.133975.

(d_6-1)^2 = 3/4 - 1/4 = 1/2; taking the smaller root gives d_6 = 1 - \frac{\sqrt{2}}{2} = 0.292893.

(d_7-1)^2 = 1/2 - 1/4 = 1/4; taking the smaller root gives d_7 = 1 - \frac{1}{2} = 0.5000.

(d_8-1)^2 = 1/4 - 1/4 = 0, so d_8 = 1 (as expected).

By symmetry: d_0 = -d_8 = -1, d_1 = -d_7 = -0.5000, d_2 = -d_6 = -0.292893, d_3 = -d_5 = -0.133975.
The reconstruction levels are taken half-way between the threshold values:

Threshold Values      Reconstruction Levels
d_8 = 1               r_7 = 0.7500
d_7 = 0.5000          r_6 = 0.3964
d_6 = 0.2929          r_5 = 0.2135
d_5 = 0.1340          r_4 = 0.0670
d_4 = 0               r_3 = -0.0670
d_3 = -0.1340         r_2 = -0.2135
d_2 = -0.2929         r_1 = -0.3964
d_1 = -0.5000         r_0 = -0.7500
d_0 = -1

(c) Plot the compressor characteristic for part (b) (see right column of Table above).

P.1.2 In a compact-disc (CD) digital audio system, 16-bit linear PCM is used with a sampling frequency
of 44.1 kHz for each of two stereo channels.
(a) What is the resulting data rate?
Data Rate = (2 Channels)x (44,100 symbols/sec per Channel) x (16 bits/symbol)
= 1,411,200 bit/s = 1.4112 Mbit/s

(b) What is the maximum frequency allowed on the input signal?


The sampling Rate = 44.1 kHz
By Nyquist Theorem , the Maximum frequency = Half of this
= 22.05 kHz
(c) What is the maximum S/Dq ratio in dB?
The maximum S/Dq in PCM is 2^{2N}; with N = 16, 10 log₁₀(2^{32}) = 96.33 dB.

(d) If music has a loading factor Λ = 20, find the average S/Dq in dB.

Here the step size Δ is given by Δ = 2V/Q = 2Λσ/2^N, where N is the number of bits and V = Λσ.
The average signal power is S = σ².
The average distortion is

D_q = N_Q = \frac{\Delta^2}{12} = \frac{\Lambda^2\sigma^2}{3\cdot 2^{2N}}

The signal-to-distortion ratio is then

\frac{S}{D_q} = \frac{3\cdot 2^{2N}}{\Lambda^2}

With N = 16 and Λ = 20 this becomes

\left(\frac{S}{D_q}\right)_{dB} = 10\log_{10}\!\left(\frac{3\cdot 2^{32}}{400}\right) = 75.08\ \text{dB}
(e) If the total playing time of the CD is 70 minutes, find the total number of bits stored on the disc.
Assume that error correction coding, synchronization, and other overhead bits make up one half of the
total capacity of the disc with the remaining one-half dedicated to PCM bits.
From (a), the PCM data rate = 1,411,200 bit/s.
The overhead doubles this rate to 2 × 1,411,200 bit/s = 2,822,400 bit/s.
In 70 minutes, data stored = (70 min) × (60 s/min) × (2,822,400 bit/s)
= 11,854,080,000 bits = 11.854 Gbit
= 1.38 GB (1 GB = 2^30 bytes = 1,073,741,824 bytes = 8,589,934,592 bits)
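The arithmetic can be checked in a couple of lines (illustrative sketch, not from the notes):

```python
# Arithmetic behind P.1.2 (sketch).
fs, bits, channels = 44_100, 16, 2
rate = fs * bits * channels            # PCM data rate in bit/s
total = 2 * rate * 70 * 60             # 70 minutes, half of disc is overhead
gigabytes = total / 2 ** 33            # 1 GB = 2^30 bytes = 2^33 bits
print(rate, total, gigabytes)
```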
P.1.3 The bandwidth of a TV video plus audio signal is 4.5 MHz. This signal is to be converted into linear
PCM with 1024 quantizing levels. The sampling rate is to be 20 percent above the Nyquist rate.
(a) Determine the resulting bit rate.
The sampling rate = 2 × 4.5 MHz × 1.20 = 10.8 × 10^6 samples per second.
Each sample takes log₂(1024) = 10 bits.
The bit rate is then (10 bits/sample) × (10.8 × 10^6 samples/s) = 1.08 × 10^8 bit/s = 108 Mbit/s.

(b) Determine the S/Dq if the quantizer loading factor is Λ = 6.
The average signal power is S = σ², and the average distortion is D_q = N_Q = Δ²/12 = Λ²σ²/(3·2^{2N}), so the signal-to-distortion ratio is S/D_q = 3·2^{2N}/Λ².
With N = 10 and Λ = 6 this becomes

\left(\frac{S}{D_q}\right)_{dB} = 10\log_{10}\!\left(\frac{3\cdot 2^{20}}{36}\right) = 49.41\ \text{dB}
P.1.4 A 12-bit linear PCM coder is to be used with an analog signal in the range of 10 volts.
(a) Find the size of the quantizing step.
The number of levels is Q = 2^{12} = 4096.
The step is Δ = 2V/Q = 20/4096 = 5/1024 ≈ 0.00488 V.

(b) Find the mean square error of the quantizing distortion.
The mean square error is D_q = N_Q = Δ²/12 = (5/1024)²/12 = 1.987 × 10⁻⁶.

(c) Find the S/Dq for a full-scale sinusoidal input.
For a full-scale sinusoidal input, S = V²/2 = 10²/2 = 50.
(S/D_q)_{dB} = 10 log₁₀(50 / 1.987×10⁻⁶) = 74.01 dB.

(d) Find the S/Dq for a 1-volt sinusoidal input.
For a 1-volt sinusoidal input, S = 1²/2 = 0.5.
(S/D_q)_{dB} = 10 log₁₀(0.5 / 1.987×10⁻⁶) = 54.01 dB.

Companding
For real signals, such as speech and video, a linear (i.e. uniform) quantizer is not the optimum choice in the sense of achieving minimum mean square error; yet the uniform quantizer is simple to design.

One option is to retain the linear quantizer but transform the input signal by preceding the quantizer with a compressor. The receiver incorporates an expander that provides the inverse characteristic of the compressor. This technique is called companding.

Companding Technique

Thus, companding is the process whereby:


 The transmitter first compresses an input signal and then quantizes it
using a uniform quantizer (linear quantizer) for transmission;
 The receiver uses a linear decoder, and then follows this with an
expander, with an inverse characteristic of the compressor.

Logarithmic Companding
For any signal, the goal is to provide a constant S/Dq ratio over all the dynamic range of the
signal.

For speech signals, logarithmic compression is used.



Two types of compression are used:
 μ-Law in North America and Japan,
 A-Law in Europe and the rest of the world.

μ-Law Companding
 For μ = 0 there is no compression, and the result is a linear quantizer.
 μ = 255 is used for most applications of speech processing.

F(x) = \mathrm{sgn}(x)\,\frac{\ln\!\left(1+\mu\,|x|/x_{max}\right)}{\ln(1+\mu)}, \qquad 0 \le \frac{|x|}{x_{max}} \le 1

A-Law Companding
 For A = 1 there is no compression, and the result is a linear quantizer.
 A = 87.6 is used for most applications of speech processing.

F(x) = \mathrm{sgn}(x)\,\frac{A\,|x|/x_{max}}{1+\ln A}, \qquad 0 \le \frac{|x|}{x_{max}} \le \frac{1}{A}

F(x) = \mathrm{sgn}(x)\,\frac{1+\ln\!\left(A\,|x|/x_{max}\right)}{1+\ln A}, \qquad \frac{1}{A} \le \frac{|x|}{x_{max}} \le 1

Other values of μ and A give the compression characteristics shown in the following figures.

[Figure: μ-law and A-law compression characteristics for several values of μ and A.]

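A μ-law compressor and its inverse can be written in a few lines; a sketch (assumptions: μ = 255 and the signal normalized so x_max = 1):

```python
# mu-law compressor / expander pair (sketch; mu = 255, signal normalized
# so that xmax = 1).
import math

MU = 255.0

def compress(x):
    """F(x) = sgn(x) * ln(1 + mu*|x|) / ln(1 + mu), for |x| <= 1."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def expand(y):
    """Inverse characteristic of the compressor."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

x = 0.5
print(compress(x), expand(compress(x)))   # the round trip recovers x
```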


Compander Implementation

[Figure: piecewise linear segment approximation of the μ = 255 companding characteristic.]

Example 1.4: For  = 255, 16-segment companded PCM, determine the code word that
represents a 5-volt signal if the encoder is designed for a 10-volt input range.
 What output voltage will be observed at the PCM decoder?
 What is the resulting quantizing error?
Solution
Since a 5-volt signal is half the maximum input value, the corresponding PCM amplitude is
represented by (1/2) 8159 = 4080



From the -Law table ( = 255), the code is found to be 01110000 (decimal 112)
The corresponding decoder output is
x112  x113
y112   4191
2
 4191 
(i) The associated output value is  (10 volts)  5.14 volts
 8159 
(ii) The quantization error is therefore 5.14 – 5.00 = 0.14 volts

SNRq of Non-Uniform Quantizers

When Q ≫ 1, Δ and Δℓ are small, so f_m(m) may be taken as a constant f_m(mℓ) over Δℓ, with mℓ at the midpoint of the ℓth quantization region. Since Q ≫ 1, the summation over the quantization regions can then be approximated by an integral.
SNRq of μ-law Compander

If μ ≫ 1, the dependence of SNRq on the message's characteristics is very small, and SNRq can be approximated as

\mathrm{SNR}_q \approx \frac{3Q^2}{\left[\ln(1+\mu)\right]^2}

For the practical values μ = 255 and Q = 256, SNRq = 38.1 dB.
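This number is easy to verify (one-line sketch, not from the notes):

```python
# Check of SNRq ~ 3*Q^2 / [ln(1+mu)]^2 for mu = 255, Q = 256 (sketch).
import math

snr_db = 10 * math.log10(3 * 256 ** 2 / math.log(1 + 255) ** 2)
print(snr_db)   # about 38.1 dB
```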


[Figure: SNRq performance of an 8-bit quantizer for a Gaussian-distributed message.]

The performance at larger input power levels can be forfeited to obtain a performance that remains robust over a wide range of input levels.

[Figure: SNRq with an 8-bit μ-law quantizer (Q = 256, μ = 255). The result is insensitive to variations in input signal power and also insensitive to the actual pdf model, both desirable properties.]

DIFFERENTIAL PULSE-CODE MODULATION (DPCM)


Most message signals (e.g., voice or video) exhibit a high degree of correlation between
successive samples. Redundancy can be exploited to obtain a better SNRq for a given Q, or
conversely for a specified SNRq the number of levels Q can be reduced.

In Differential PCM, the differences between successive samples are PCM-encoded and
transmitted.

If such differences are transmitted, then simply adding up (accumulating) these changes at the receiver generates a waveform identical in form to the original.

At t = kT, write the sample x(kT) simply as x(k).

We may well anticipate that the differences x(k) - x(k-1) will be smaller than the sample
values themselves. Hence fewer levels will be required to quantize the difference than are
required to quantize x(k) and correspondingly, fewer bits will be needed to encode the levels.



For example, suppose that x(k) extends over a range V_H - V_L and that, using PCM, x(k) is encoded using 2^8 = 256 levels. Then the step size is Δ = (V_H - V_L)/2^8, that is, V_H - V_L = 256Δ.
If, however, the difference signal x(k) - x(k-1) extends only over the range ±2Δ, then the quantization levels needed are at ±0.5Δ and ±1.5Δ.
There are now only four levels, and two bits per sample difference are adequate.
Basic Principle of Differential PCM (DPCM).

Practical Problems
Practically, there are times when the differences may turn out to be very large. In that case,
there are at least two options:
1. One option is to increase the sampling rate to make the differences smaller
(closely-spaced samples will be near in amplitude). But this necessarily increases the bit
rate, thereby requiring more bandwidth.

2. Another option is to use a predictor that uses the previous values of the differences to
predict the next value for the difference signal, and this is then quantized and encoded.

Linear Predictor
If an analogue signal x(t) is oversampled (several times the Nyquist rate), the samples exhibit
high correlation, and this can be used in designing a predictor.

Given the m previous samples x(n-1), x(n-2), ..., x(n-m),



the linear predictor constructs an estimate x̂(n) by a linear combination of the samples:

\hat{x}(n) = a_1 x(n-1) + a_2 x(n-2) + a_3 x(n-3) + \cdots + a_m x(n-m) = \sum_{i=1}^{m} a_i\,x(n-i)

The quantity to be minimised is the mean square error (MSE), and the minimisation is done with respect to the coefficients a_k, k = 1, 2, 3, ..., m:

\varepsilon = E\left[\left(x(n)-\hat{x}(n)\right)^2\right] = E\left[x^2(n)\right] - 2\,E\left[x(n)\hat{x}(n)\right] + E\left[\hat{x}^2(n)\right]

Substituting for x̂(n), we can write this as

\varepsilon = E\left[x^2(n)\right] - 2\sum_{i=1}^{m} a_i\,E\left[x(n)x(n-i)\right] + \sum_{i=1}^{m}\sum_{j=1}^{m} a_i a_j\,E\left[x(n-i)x(n-j)\right]

Setting a to zero, we obtain
k
m

 a E x(n  k ) x(n  i)  E x(n) x(n  k )


i 1
i for k = 1,2,3, …, m

These are known as the Yule-Walker Equations or the Normal Equations


Define r k  Ex(n) x(n  k ) for a process that is at least wide sense stationary (WSS).
Then the Yule-Walker equations become:
m

a r
i 1
i k i  rk for k  1, 2, 3, ,m

In matrix form, these equations can be written as

\begin{pmatrix} r_0 & r_1 & \cdots & r_{m-1} \\ r_1 & r_0 & \cdots & r_{m-2} \\ \vdots & & \ddots & \vdots \\ r_{m-1} & r_{m-2} & \cdots & r_0 \end{pmatrix} \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_m \end{pmatrix} = \begin{pmatrix} r_1 \\ r_2 \\ \vdots \\ r_m \end{pmatrix}
The coefficient matrix is a Toeplitz matrix, which is another way of saying that the terms along each diagonal are equal. It is named after Otto Toeplitz, who first published work on matrices of this form (accordingly, the name begins with a capital T).

[Figure: Differential PCM (DPCM). Transmitter: the input s(t) is sampled; the predictor output is subtracted from each sample s_i to form the prediction error e_i, which is quantized to e_i + q_i and sent over the digital channel, while the predictor operates on the reconstructed samples s_i + q_i. Receiver: the received quantized differences are accumulated through an identical predictor loop and low-pass filtered to give the output s(t) + q(t).]

Levinson-Durbin’s Solution to the Yule-Walker Equations


Independently, Levinson and Durbin developed recursive solutions to the normal [Yule-Walker]
equations. These have been combined into what is now called the Levinson-Durbin algorithm. It
is a recursive method, which assumes that we have a solution to a lower order prediction
problem, and expresses the next order solution in terms of the known solution.

For example, if m = 1, the solution is a_{11} = r_1/r_0.

The residual error ε_1 after the prediction is

\varepsilon_1 = E\left[\left(x(n)-\hat{x}(n)\right)^2\right] = E\left[\left(x(n)-a_{11}x(n-1)\right)^2\right] = r_0 - 2a_{11}r_1 + a_{11}^2 r_0

From above, it is already established that a_{11} = r_1/r_0, i.e. r_1 = a_{11}r_0. Therefore r_1 can be eliminated to give

\varepsilon_1 = r_0 - 2a_{11}\left(a_{11}r_0\right) + a_{11}^2 r_0 = r_0\left(1-a_{11}^2\right)
Exercise 1.1
(a) For m = 2, show that the Yule-Walker equations are given in matrix form as

\begin{pmatrix} r_0 & r_1 \\ r_1 & r_0 \end{pmatrix}\begin{pmatrix} a_{21} \\ a_{22} \end{pmatrix} = \begin{pmatrix} r_1 \\ r_2 \end{pmatrix}

(b) By applying Cramer's rule, or otherwise, show that the solutions for the predictor coefficients a_{21} and a_{22} are

a_{21} = \frac{a_{11}\left(r_0-r_2\right)}{\varepsilon_1}, \qquad a_{22} = \frac{r_2 - r_0\,a_{11}^2}{\varepsilon_1}

(c) Show that the residual error ε_2 is given by

\varepsilon_2 = r_0\left(1-a_{11}^2\right)\left(1-a_{22}^2\right)
For the general solution, we define the three vectors a_m, r_m, and r_m^B as

a_m = \begin{pmatrix} a_{m1} \\ a_{m2} \\ \vdots \\ a_{mm} \end{pmatrix}, \qquad r_m = \begin{pmatrix} r_1 \\ r_2 \\ \vdots \\ r_m \end{pmatrix}, \qquad r_m^B = \begin{pmatrix} r_m \\ r_{m-1} \\ \vdots \\ r_1 \end{pmatrix}

We observe that the vector r_m^B is just r_m written backwards.
The recursive method is

m = 1:
  k1 = r1/r0
  a11 = k1
  ε1 = r0[1 − k1²]

For m = 1, 2, 3, …:
  k_{m+1} = (r_{m+1} − a_m^T r_m^B)/(r0 − a_m^T r_m)    (the denominator equals ε_m)
  ε_{m+1} = ε_m[1 − k_{m+1}²]
  a_{m+1,j} = a_{m,j} − k_{m+1} a_{m,m+1−j},  j = 1, 2, …, m
  a_{m+1,m+1} = k_{m+1}

Levinson-Durbin Algorithm
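The recursion above translates directly into code. The following Python sketch (function and variable names are my own) solves the normal equations for a given predictor order:

```python
def levinson_durbin(r, order):
    """Recursively solve the Yule-Walker equations R_m a_m = r_m.

    r     : autocorrelation values [r0, r1, ..., r_order]
    order : predictor order m
    Returns (a, ks, eps): the predictor coefficients a_{m,1..m}, the
    reflection coefficients k_1..k_m, and the residual error eps_m.
    """
    a, ks = [], []
    eps = r[0]                                     # eps_0 = r0
    for m in range(1, order + 1):
        # k_m = (r_m - a^T r^B) / eps_{m-1}
        k = (r[m] - sum(a[j] * r[m - 1 - j] for j in range(m - 1))) / eps
        # a_{m,j} = a_{m-1,j} - k_m a_{m-1,m-j}, plus the new tap a_{m,m} = k_m
        a = [a[j] - k * a[m - 2 - j] for j in range(m - 1)] + [k]
        eps *= 1.0 - k * k                         # eps_m = (1 - k_m^2) eps_{m-1}
        ks.append(k)
    return a, ks, eps

a3, k, eps3 = levinson_durbin([3.0, 2.0, 1.0, 0.5, 0.25], 3)
# a3 -> [0.8125, -0.25, 0.0625], eps3 -> 1.59375 (up to rounding)
```

Note that the loop divides by eps rather than recomputing r0 − a^T r, using the fact that the two are equal.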

Example: Given a signal with the autocorrelation function r  [3, 2, 1, 0.5, 0.25] , use the
Levinson-Durbin algorithm to construct a suitable linear predictor.



Solution:
k1 = r1/r0 = 2/3 = 0.6667
a11 = k1 = 0.6667
ε1 = r0[1 − k1²] = 3[1 − 0.6667²] = 1.6667
--------
k2 = (r2 − a11 r1)/ε1 = (1 − 0.6667·2)/1.6667 = −0.2000
a21 = a11 − k2 a11 = a11(1 − k2) = 0.6667(1 + 0.2000) = 0.8000
a22 = k2 = −0.2000
ε2 = ε1[1 − k2²] = 1.6667[1 − (−0.2)²] = 1.6000
--------
k3 = (r3 − a21 r2 − a22 r1)/ε2 = (0.5 − 0.8·1 + 0.2·2)/1.6 = 0.0625
a31 = a21 − k3 a22 = 0.8 − 0.0625(−0.2) = 0.8125
a32 = a22 − k3 a21 = −0.2 − 0.0625(0.8) = −0.2500
a33 = k3 = 0.0625
ε3 = ε2[1 − k3²] = 1.6[1 − 0.0625²] = 1.59375
--------
k4 = (r4 − a31 r3 − a32 r2 − a33 r1)/ε3
   = (0.25 − 0.8125·0.5 + 0.25·1 − 0.0625·2)/1.59375 = −0.0196
a41 = a31 − k4 a33 = 0.8125 + 0.0196(0.0625) = 0.8137
a42 = a32 − k4 a32 = −0.25(1 + 0.0196) = −0.2549
a43 = a33 − k4 a31 = 0.0625 + 0.0196(0.8125) = 0.0784
a44 = k4 = −0.0196
ε4 = ε3[1 − k4²] = 1.59375[1 − 0.0196²] = 1.59314



The order of the predictor is determined when there is no significant improvement in
going from one order to the next. The improvement is indicated by the reduction in
prediction error variance. In the current example, there is no significant difference in
going from order-3 to order-4. Therefore a predictor of order-3 will be sufficient.

Below is the figure of a third-order linear predictor, in which the order of the filter has
been suppressed in the subscripts for the filter coefficients.

(Figure: Third-Order Linear Predictor. A tapped delay line xn → z⁻¹ → xn−1 → z⁻¹ → xn−2 → z⁻¹ → xn−3; the taps are weighted by a1, a2, a3 and summed to form the estimate x̂n.)

The table below shows the recursion results for the autocorrelation sequence [3, 2, 1, 0.5, 0.25]:

m = 1: k1 = 0.6667;  a11 = 0.6667;  ε1 = 1.6667
m = 2: k2 = −0.2000; a21 = 0.8000, a22 = −0.2000;  ε2 = 1.6000
m = 3: k3 = 0.0625;  a31 = 0.8125, a32 = −0.2500, a33 = 0.0625;  ε3 = 1.59375
m = 4: k4 = −0.0196; a41 = 0.8137, a42 = −0.2549, a43 = 0.0784, a44 = −0.0196;  ε4 = 1.59314

Since there is no significant improvement in going from order 3 to order 4, a third-order
predictor is sufficient: a1 = 0.8125, a2 = −0.2500, a3 = 0.0625.
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

The Levinson-Durbin Algorithm - Derivation

In matrix form, these equations can be written as

        [ r0       r1      …   r_{m−1} ]
R_m  =  [ r1       r0      …   r_{m−2} ]
        [  ⋮                ⋱    ⋮     ]
        [ r_{m−1}  …   r1       r0     ]

with a_m, r_m, and r_m^B as defined above.

The normal equations can then be written as R_m a_m = r_m
For the next order m+1, the first m coefficients of the new solution are collected as

a_{m+1}(m) = [a_{m+1,1}, a_{m+1,2}, …, a_{m+1,m}]^T

Then the equations R_{m+1} a_{m+1} = r_{m+1} can be written in partitioned form as

[ R_m        r_m^B ] [ a_{m+1}(m)   ]   [ r_m     ]
[ (r_m^B)^T  r0    ] [ a_{m+1,m+1} ] = [ r_{m+1} ]

from which we obtain two equations:

1. A matrix-vector equation: R_m a_{m+1}(m) + a_{m+1,m+1} r_m^B = r_m

2. A scalar equation: (r_m^B)^T a_{m+1}(m) + r0 a_{m+1,m+1} = r_{m+1}

Multiply the first equation by the inverse of R_m to obtain

3. a_{m+1}(m) + a_{m+1,m+1} a_m^B = a_m,  i.e.  a_{m+1}(m) = a_m − a_{m+1,m+1} a_m^B

where a_m^B is the vector a_m written backwards (for a symmetric Toeplitz matrix, R_m^{−1} r_m^B = a_m^B).

Substitute equation (3) into equation (2) to obtain

a_{m+1,m+1} = [r_{m+1} − (r_m^B)^T a_m] / [r0 − (r_m^B)^T a_m^B]

Since (r_m^B)^T a_m^B = r_m^T a_m = a_m^T r_m and (r_m^B)^T a_m = a_m^T r_m^B, this is equivalently written as

4. a_{m+1,m+1} = (r_{m+1} − a_m^T r_m^B) / (r0 − a_m^T r_m)

Equations 3 and 4 are the key equations in the algorithm:
knowing a_m and r_m, use r_{m+1} in equation 4 to determine a_{m+1,m+1} = k_{m+1}, then
substitute in equation 3 to obtain

5. a_{m+1,j} = a_{m,j} − k_{m+1} a_{m,m−j+1}   for j = 1, 2, …, m

Mean Square Error (MSE)

From the estimate x̂(n) = Σ_{i=1}^{m} a_{m,i} x(n−i), the mean square error (MSE) is given by

ε_m = E{[x(n) − x̂(n)]²} = E{[x(n) − Σ_{i=1}^{m} a_{m,i} x(n−i)]²}

That is

ε_m = E{x²(n)} − 2 Σ_{i=1}^{m} a_i E{x(n−i)x(n)} + Σ_{i=1}^{m} Σ_{j=1}^{m} a_i a_j E{x(n−j)x(n−i)}

ε_m = r0 − 2 Σ_{i=1}^{m} a_i r_i + Σ_{i=1}^{m} Σ_{j=1}^{m} a_i a_j r_{i−j}

ε_m = r0 − 2 a_m^T r_m + a_m^T R_m a_m

Since R_m a_m = r_m, the above gives the MSE as ε_m = r0 − a_m^T r_m.

That is, in component form, the MSE is given by ε_m = r0 − Σ_{i=1}^{m} a_{m,i} r_i

Recursive Form of the MSE

ε_m = r0 − Σ_{i=1}^{m} a_{m,i} r_i = r0 − Σ_{i=1}^{m−1} a_{m,i} r_i − a_{m,m} r_m

Recall a_{m,j} = a_{m−1,j} − k_m a_{m−1,m−j} for j = 1, 2, …, m−1

and a_{m,m} = k_m = (r_m − a_{m−1}^T r_{m−1}^B)/(r0 − a_{m−1}^T r_{m−1}) = (r_m − a_{m−1}^T r_{m−1}^B)/ε_{m−1}

Therefore the MSE becomes

ε_m = r0 − Σ_{i=1}^{m−1} (a_{m−1,i} − k_m a_{m−1,m−i}) r_i − a_{m,m} r_m

ε_m = (r0 − Σ_{i=1}^{m−1} a_{m−1,i} r_i) + k_m Σ_{i=1}^{m−1} a_{m−1,m−i} r_i − a_{m,m} r_m    (recall a_{m,m} = k_m)

ε_m = (r0 − Σ_{i=1}^{m−1} a_{m−1,i} r_i) − k_m (r_m − Σ_{i=1}^{m−1} a_{m−1,m−i} r_i)

The first term and the second term in round brackets are, respectively

r0 − Σ_{i=1}^{m−1} a_{m−1,i} r_i = ε_{m−1}   and   r_m − Σ_{i=1}^{m−1} a_{m−1,m−i} r_i = k_m ε_{m−1}

Therefore the MSE is alternatively obtained as ε_m = ε_{m−1} − k_m² ε_{m−1}, which is written more
compactly as

ε_m = (1 − k_m²) ε_{m−1}

This leads to

ε_m = (1 − k_m²)(1 − k_{m−1}²) ⋯ (1 − k_1²) ε_0

Now ε_0 = r0 and k_1 = r1/r0.

Since the MSE must decrease as the order of the prediction filter is increased, it is necessary
that the magnitude of the "reflection coefficients" k_j be less than unity; that is, |k_j| < 1 for
j = 1, 2, …, m.
Example: The table below lists the autocorrelation values r(k) of a signal for k = 0, 1, …, 19.
Use the Levinson-Durbin algorithm to construct a suitable linear predictor for this signal.

Autocorrelation Values for a Signal r(k)

 k    r(k)       k    r(k)
 0    5.75      10    2.10
 1    3.70      11    1.85
 2    0.80      12    0.90
 3   -1.90      13   -0.30
 4   -3.15      14   -1.50
 5   -3.35      15   -1.40
 6   -2.70      16   -0.45
 7   -1.10      17    0.25
 8    0.70      18    0.55
 9    2.00      19    0.15

(A plot of r(k) versus k accompanies the table.)
Solution:
k1 = r1/r0 = 3.70/5.75 = 0.64348
a11 = k1 = 0.64348
ε1 = r0[1 − k1²] = 5.75[1 − 0.64348²] = 3.36913
--------
k2 = (r2 − a11 r1)/ε1 = (0.80 − 0.64348·3.70)/3.36913 = −0.46922
a21 = a11 − k2 a11 = a11(1 − k2) = 0.64348(1 + 0.46922) = 0.94541
a22 = k2 = −0.46922
ε2 = ε1[1 − k2²] = 3.36913[1 − (−0.46922)²] = 2.62736
--------
k3 = (r3 − a21 r2 − a22 r1)/ε2
   = (−1.90 − 0.94541·0.80 + 0.46922·3.70)/2.62736 = −0.35024
a31 = a21 − k3 a22 = 0.94541 − (−0.35024)(−0.46922) = 0.78107
a32 = a22 − k3 a21 = −0.46922 + 0.35024·0.94541 = −0.13810
a33 = k3 = −0.35024
ε3 = ε2[1 − k3²] = 2.62736[1 − (−0.35024)²] = 2.30506
--------
k4 = (r4 − a31 r3 − a32 r2 − a33 r1)/ε3
   = (−3.15 + 0.78107·1.90 + 0.13810·0.80 + 0.35024·3.70)/2.30506 = −0.11262
a41 = a31 − k4 a33 = 0.78107 − (−0.11262)(−0.35024) = 0.74163
a42 = a32 − k4 a32 = a32(1 − k4) = −0.13810·1.11262 = −0.15365
a43 = a33 − k4 a31 = −0.35024 + 0.11262·0.78107 = −0.26228
a44 = k4 = −0.11262
ε4 = ε3[1 − k4²] = 2.30506[1 − (−0.11262)²] = 2.27582
--------
k5 = (r5 − a41 r4 − a42 r3 − a43 r2 − a44 r1)/ε4
   = (−3.35 + 0.74163·3.15 − 0.15365·1.90 + 0.26228·0.80 + 0.11262·3.70)/2.27582 = −0.29849
a51 = a41 − k5 a44 = 0.74163 − (−0.29849)(−0.11262) = 0.70801
a52 = a42 − k5 a43 = −0.15365 − (−0.29849)(−0.26228) = −0.23194
a53 = a43 − k5 a42 = −0.26228 − (−0.29849)(−0.15365) = −0.30814
a54 = a44 − k5 a41 = −0.11262 + 0.29849·0.74163 = 0.10875
a55 = k5 = −0.29849
ε5 = ε4[1 − k5²] = 2.27582[1 − (−0.29849)²] = 2.07306

The recursion can be continued in the same way; the predictor order is chosen at the point
where ε_m stops decreasing significantly.

Delta Modulation
When the sampling rate is sufficiently high, the DPCM samples can be represented by just one
bit, indicating "up or down" (above or below the prediction); the DPCM is then termed delta
modulation.



(Figure: Delta modulation waveform. A staircase with step size Δ and sample spacing Ts tracks the analog input; a slope-overload region and a quantization-noise region are indicated. The transmitted sequence for the waveform shown is 1 1 1 1 1 0 0 1 0.)

The low-pass filter in the receiver smoothes the staircase signal to recover the analog signal.

Two types of Distortion


 Slope Overload:
 Granular Noise (Quantizing Noise)

If the slope of the input signal exceeds the slope of the staircase (Δ/Ts), the resulting error is
known as slope overload distortion.

For an input slope less than Δ/Ts, the errors are a form of quantizing distortion known as
granular noise.



For a fixed sampling rate, optimum S/D performance is obtained by selecting the step size to
minimize the sum of slope overload and quantizing distortion (granular noise):

 With too small a step size, slope overload distortion will dominate;

 With too large a step size, quantizing distortion (granular noise) will dominate.
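This trade-off is easy to see in a few lines of code. The sketch below is a toy one-bit encoder (the test signal and parameter values are illustrative, not taken from the notes): it tracks a sampled input with a fixed-step staircase.

```python
import math

def delta_modulate(x, step):
    """One-bit DM: transmit 1 if the input is above the staircase, else 0."""
    bits, staircase, xhat = [], [], 0.0
    for sample in x:
        bit = 1 if sample >= xhat else 0
        xhat += step if bit else -step      # accumulator in the feedback path
        bits.append(bit)
        staircase.append(xhat)
    return bits, staircase

# A slow sinusoid sampled well above its Nyquist rate (assumed parameters).
fs, f0, step = 8000.0, 100.0, 0.1
x = [math.sin(2 * math.pi * f0 * n / fs) for n in range(200)]
bits, staircase = delta_modulate(x, step)
```

Here the maximum input change per sample, 2πf0/fs ≈ 0.079, is below the step size 0.1, so the staircase tracks the input with only granular noise; shrinking the step well below 0.079 would push the encoder into slope overload on the steep parts of the sinusoid.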

Adaptive Delta Modulator

The step size is adapted to minimize slope overload: it is increased for runs of three or more
consecutive like bits, and decreased when the bit pattern alternates.

The biggest design problems for discrete ADM are:


 the choice of step sizes
 the number of allowed sizes,
 the gain per size, and
 the logic required to control changes in step sizes.



Baseband Signal Receiver
The block diagram of a receiver for a binary-coded signal is shown below. The noise is assumed
to be white Gaussian noise (white noise has a constant power spectral density over the entire
frequency axis).

(Figure: Integrate-and-dump receiver. The input s(t), a rectangular pulse of height V over 0 ≤ t ≤ T, plus white noise n(t) with Gn(f) = N0/2, drives an RC integrator. A sample switch SW2 reads v0(T) at t = T, after which a dump switch SW1 discharges the capacitor C before the next bit.)

Receiver for a Binary Coded Signal

The integrator output sampled at t = T is

v0(T) = (1/RC) ∫₀ᵀ [s(t) + n(t)] dt

Setting the time constant RC = τ, we write the sample voltage due to the signal as

s0(T) = (1/τ) ∫₀ᵀ s(t) dt = (1/τ) ∫₀ᵀ V dt = VT/τ
The sample voltage due to the noise is

n0(T) = (1/τ) ∫₀ᵀ n(t) dt

The sampled noise voltage n0(T) is a random variable, as opposed to n(t), which is a stochastic
process (i.e., a random process). The variance of n0(T) is found as follows:

σ0² = E{n0²(T)} = (1/τ²) ∫₀ᵀ ∫₀ᵀ E{n(t1)n(t2)} dt1 dt2 = (1/τ²) ∫₀ᵀ ∫₀ᵀ (N0/2) δ(t1 − t2) dt1 dt2

where we have used the fact that the noise is white. Finally,

σ0² = N0T/(2τ²)

We would like the output signal voltage to be as large as possible in comparison with the noise
voltage. Hence a figure of merit of interest is the signal-to-noise ratio

s0²(T)/E{n0²(T)} = (V²T²/τ²)/(N0T/(2τ²)) = 2V²T/N0 = 2Es/N0

(Figure: (a) the signal output and (b) the noise output of the integrator.)

 Note that the signal-to-noise ratio increases with increasing bit duration T and that it
depends on V²T, which is the normalized bit energy of the signal.

 Therefore, a bit represented by a narrow, high-amplitude signal and one represented by a
wide, low-amplitude signal are equally effective, provided V²T is kept constant.



 The integrator filters the signal and the noise such that the signal voltage increases
linearly with time, while the standard deviation (rms value) of the noise increases more
slowly, as √T. Thus, the integrator enhances the signal relative to the noise, and this
enhancement increases with time.
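The result s0²(T)/σ0² = 2Es/N0 can be checked numerically. The sketch below (assumed parameter values; white noise is approximated by independent Gaussian samples of variance N0/(2Δt)) simulates the integrate-and-dump output over many trials and estimates its SNR:

```python
import random

random.seed(1)
V, T, N0 = 1.0, 1.0, 0.5       # so 2*Es/N0 = 2*V*V*T/N0 = 4
tau, steps, trials = 1.0, 200, 5000
dt = T / steps
outs = []
for _ in range(trials):
    acc = 0.0
    for _ in range(steps):
        # discrete white-noise sample: variance N0/(2*dt)
        n = random.gauss(0.0, (N0 / (2 * dt)) ** 0.5)
        acc += (V + n) * dt
    outs.append(acc / tau)     # v0(T) = (1/tau) * integral of s(t) + n(t)

mean = sum(outs) / trials                          # ~ V*T/tau = 1
var = sum((v - mean) ** 2 for v in outs) / trials  # ~ N0*T/(2*tau*tau) = 0.25
snr = mean * mean / var                            # ~ 2*Es/N0 = 4
```

With these values the estimated SNR comes out close to the predicted 2Es/N0 = 4.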

PROBABILITY OF ERROR
The error probability Pe for the integrate-and-dump receiver above is found as follows.
We assume that the sampled noise n0(T) is Gaussian, with probability density

f(x) = [1/√(2πσ0²)] exp(−x²/(2σ0²))

where σ0² is the noise variance (already determined above). An error will occur either when the
signal transmitted was positive and the noise component is less than −VT/τ, or when the
transmitted signal is negative and the noise level is higher than +VT/τ. The latter event has
probability represented by the area under the tail of the pdf curve, as shown:

Pe = ∫ f(x) dx (from VT/τ to ∞) = [1/√(2πσ0²)] ∫ exp(−x²/(2σ0²)) dx (from VT/τ to ∞)

Defining u = x/(σ0√2), the expression for Pe may be rewritten as

Pe = (1/√π) ∫ exp(−u²) du (from √(V²T/N0) to ∞) = (1/2) erfc(√(V²T/N0))

Noting that Es = V²T is the signal energy of a bit, we have V²T/N0 = Es/N0, so we can write Pe as

Pe = (1/2) erfc(√(Es/N0))

where the complementary error function erfc(x) is defined as

erfc(x) = (2/√π) ∫ₓ^∞ exp(−u²) du

Because of symmetry, the same expression holds for either transmitted polarity, so the above is
the bit error probability.
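The expression evaluates directly with the standard-library erfc:

```python
import math

def bit_error_probability(es_over_n0):
    """Pe = (1/2) erfc(sqrt(Es/N0)) for the integrate-and-dump receiver."""
    return 0.5 * math.erfc(math.sqrt(es_over_n0))

# bit_error_probability(0.0) -> 0.5 (pure guessing)
# bit_error_probability(4.0) -> about 2.34e-3
```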

Schwarz's Inequality

|∫ X(f)Y(f) df|² ≤ ∫ |X(f)|² df · ∫ |Y(f)|² df

where equality holds if and only if X(f) = K Y*(f) for some constant K.

The Optimum Filter

(Figure: A filter receives s1(t) or s2(t) plus white noise n(t) with Gn(f) = N0/2; its output is sampled at t = T, giving v0(T) = s01(T) + n0(T) or s02(T) + n0(T).)

If we wish to distinguish between s1(t) and s2(t), then we form the difference signal

p(t) = s1(t) − s2(t)

If the input signal to the system is this signal p(t), then the output will be

p0(t) = s01(t) − s02(t)

Let the respective Fourier transforms be P(f) and P0(f). If H(f) is the transfer function of the
filter, then P0(f) = H(f)P(f), and

p0(T) = ∫ P0(f) e^{j2πfT} df = ∫ H(f)P(f) e^{j2πfT} df
The output noise n0(t) has power spectral density given by

Gn0(f) = |H(f)|² Gn(f)

By Parseval's theorem,

σ0² = ∫ Gn0(f) df = ∫ |H(f)|² Gn(f) df

The signal-to-noise ratio is then

p0²(T)/σ0² = |∫ H(f)P(f) e^{j2πfT} df|² / ∫ |H(f)|² Gn(f) df

Before applying Schwarz's inequality, we let X(f) = √Gn(f) · H(f) and
Y(f) = P(f) e^{j2πfT}/√Gn(f). Then

p0²(T)/σ0² ≤ ∫ |Y(f)|² df = ∫ [|P(f)|²/Gn(f)] df
When the equal sign applies, we have the transfer function of the optimum filter as

H(f) = K [P*(f)/Gn(f)] e^{−j2πfT}

and the maximum signal-to-noise ratio is

[p0²(T)/σ0²]max = ∫ [|P(f)|²/Gn(f)] df

White Noise: The Matched Filter

When the noise is white, the power spectral density Gn(f) is constant, and is given by

Gn(f) = N0/2

and the filter that gives the maximum signal-to-noise ratio is called the Matched Filter. Its
transfer function is given by

H(f) = (2K/N0) P*(f) e^{−j2πfT}

The impulse response of this filter is then obtained as the inverse Fourier transform

h(t) = F⁻¹{H(f)} = (2K/N0) ∫ P*(f) e^{−j2πfT} e^{j2πft} df = (2K/N0) ∫ P*(f) e^{j2πf(t−T)} df

A physically realisable filter will have a real impulse response, h(t) = h*(t), and so we must have

h(t) = (2K/N0) p(T − t)

This is the filter that is matched to the pulse p(t). In our example, we started with
p(t) = s1(t) − s2(t), so in this case the matched filter impulse response is

h(t) = (2K/N0) [s1(T − t) − s2(T − t)]

(Figure) The signals:

(a) s1(t),
(b) s2(t),
(c) p(t) = s1(t) − s2(t),
(d) p(t) rotated about the axis t = 0, and
(e) the waveform in (d) translated to the right by amount T, to create the waveform p(T − t).

A filter with impulse response h(t) = λ p(T − t), where λ is a scalar constant, is said to be
matched to the waveform p(t). Other examples of the matched filter are given below.

(Figure: Signals and the impulse responses of the filters matched to them. For each signal s(t) of duration T, the matched filter is the time-reversed, delayed signal:
s1(t): h1(t) = s1(T − t)
s2(t): h2(t) = s2(T − t)
s3(t): h3(t) = s3(T − t)
s4(t), of duration 3T/2: h4(t) = s4(1.5T − t).)
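In discrete time the same construction is simply time reversal. The sketch below (a made-up pulse shape, chosen only for illustration) convolves a pulse with its matched filter h[n] = p[N−1−n]; the output sampled at n = N−1 equals the pulse energy, and no other sampling instant does better:

```python
def matched_filter_output(p):
    """Convolve pulse p with its matched filter h[n] = p[N-1-n]."""
    N = len(p)
    h = p[::-1]                                   # time-reversed pulse
    return [sum(h[k] * p[n - k] for k in range(N) if 0 <= n - k < N)
            for n in range(2 * N - 1)]

p = [1.0, 1.0, -1.0, 1.0]                         # arbitrary pulse shape
y = matched_filter_output(p)
# y[len(p) - 1] == sum(v * v for v in p) == 4.0, the peak of the output
```

The output is the autocorrelation of the pulse, which peaks at the correct sampling instant by the Schwarz inequality.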

Continuing with the example: the maximum signal-to-noise ratio is

[p0²(T)/σ0²]max = (2/N0) ∫ |P(f)|² df

At this point, we use Parseval's theorem:

∫ |P(f)|² df = ∫ |p(t)|² dt    (Parseval's theorem)

and, also noting that p(t) = s1(t) − s2(t) for 0 ≤ t ≤ T, we write

[p0²(T)/σ0²]max = (2/N0) ∫₀ᵀ [s1(t) − s2(t)]² dt

and expanding the squared term, we obtain:

[p0²(T)/σ0²]max = (2/N0) [∫₀ᵀ s1²(t) dt − 2 ∫₀ᵀ s1(t)s2(t) dt + ∫₀ᵀ s2²(t) dt]
               = (2/N0) (Es1 − 2Es12 + Es2)

where Es1 and Es2 are, respectively, the energies in s1 and s2, while Es12 is the energy due to the
correlation between s1(t) and s2(t). We define the correlation coefficient between the signals
as

ρ12 = [1/√(Es1 Es2)] ∫₀ᵀ s1(t)s2(t) dt

When the signals have the same energy, Es1 = Es2 = Es, we can write the maximum signal-to-
noise ratio as

[p0²(T)/σ0²]max = (4Es/N0)(1 − ρ12)

where −1 ≤ ρ12 ≤ 1.

Binary Orthogonal Signalling

Orthogonal signalling is a type of signalling for which there is zero correlation between the
signals; that is, ρ12 = 0. The maximum signal-to-noise ratio will be

[p0²(T)/σ0²]max = 4Es/N0

Binary Antipodal Signalling

The maximum signal-to-noise ratio is largest when the correlation coefficient is −1, that
is, when ρ12 = −1. This occurs when s2(t) = −s1(t). Binary signalling in which one signal is the
negative of the other is referred to as binary antipodal signalling (or simply antipodal
signalling). The maximum signal-to-noise ratio becomes

[p0²(T)/σ0²]max = 8Es/N0

Thus binary antipodal signalling is 3 dB better than orthogonal signalling (double the SNR).



(Plots: the pulse p(t), the matched-filter impulse response h(t), and the resulting filter output p0(t).)
Binary Coding Waveforms


Binary coding waveforms simply condition binary signals for transmission. This signal
conditioning provides a square wave characteristic suitable for direct transmission over cable.

Nonreturn-to-Zero (NRZ)
With nonreturn-to-zero (NRZ), the signal level is held constant at one of two voltages for the
duration of the bit interval. If the two allowed voltages are 0 and V, the NRZ waveform is said
to be unipolar, because it has only one polarity. This signal has a nonzero dc component at one-
half the positive voltage, assuming equally likely 1's and 0's. A polar NRZ signal uses two
polarities, ±V, and thus provides a zero dc component.
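The dc claim is easy to verify empirically. The sketch below (random equiprobable bits; the bit count and voltage are illustrative) compares the mean level of unipolar and polar NRZ:

```python
import random

random.seed(7)
V = 1.0
bits = [random.randint(0, 1) for _ in range(10000)]
unipolar = [V if b else 0.0 for b in bits]     # levels 0 and V
polar = [V if b else -V for b in bits]         # levels +V and -V
dc_unipolar = sum(unipolar) / len(unipolar)    # ~ V/2
dc_polar = sum(polar) / len(polar)             # ~ 0
```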



NRZ(L) – the voltage level of the signal indicates the value of the bit. The assignment of bit
values 0 and 1 to voltage levels can be arbitrary for NRZ(L), but the usual convention is to
assign 1 to the higher voltage level and 0 to the lower voltage level. NRZ(L) coding is the most
common mode of NRZ transmission, due to the simplicity of the transmitter and receiver
circuitry: the coder/decoder consists of a simple line driver and receiver.

NRZ(M) – a level change is used to indicate a mark (that is, a 1) and no level change for a space
(that is, a 0).

NRZ(S) – similar, except that the level change is used to indicate a space (a 0). Both of these
formats are examples of the general class NRZ(I), also called conditioned NRZ, in which level
inversion is used to indicate one kind of binary digit.



Advantage of NRZ(I)
The chief advantage of NRZ(I) over NRZ(L) is its immunity to polarity reversals, since the data
are coded by the presence or absence of a transition rather than the presence or absence of a
pulse.
Because a long string of like bits produces no data transitions, which would result in poor clock
recovery performance, either the binary signal must be pre-coded to eliminate such strings of
1's and 0's or a separate timing line must be transmitted with the NRZ signal.



Characteristics of NRZ(I) Waveforms
Return-to-Zero (RZ)
With return-to-zero (RZ), the signal level representing bit value 1 lasts for the first half of the
bit interval, after which the signal returns to the reference level (0) for the remaining half of
the bit interval.
A 0 is indicated by no change, with the signal remaining at the reference level.
Its chief advantage lies in the increased transitions vis-a-vis NRZ and the resulting
improvement in timing (clock) recovery.
Note that a string of 0's results in no signal transitions, a potential problem for timing recovery
circuits unless these signals are eliminated by pre-coding.

Characteristics of Return-to-Zero (RZ) Waveforms



Diphase
Diphase (also called biphase, split-phase, and Manchester) is a method of two-level coding where

f1(t) = +V for 0 ≤ t < T/2 and −V for T/2 ≤ t < T
f2(t) = −f1(t)

This code can be generated from NRZ(L) by EXCLUSIVE-OR or MOD-2 ADD logic, as shown in
the figure, if we assume 1's are transmitted as +V and 0's as −V.

Data recovery is accomplished by the same logic employed by the coder.

Characteristics of Diphase Coding

Advantage of diphase.
From the diphase waveforms shown in the figure, it is readily apparent that the transition
density is increased over NRZ(L), thus providing improved timing recovery at the receiver;
this is a significant advantage of diphase.
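A minimal diphase coder can be sketched as follows, assuming the convention above that a 1 maps to (+V, −V) and a 0 to (−V, +V) within the bit interval:

```python
def diphase_encode(bits, v=1.0):
    """Manchester/diphase: each bit becomes two half-bit levels with a
    guaranteed mid-bit transition (assumed polarity convention)."""
    out = []
    for b in bits:
        out += [v, -v] if b else [-v, v]
    return out

w = diphase_encode([1, 0, 1])   # -> [1.0, -1.0, -1.0, 1.0, 1.0, -1.0]
```

Every bit contributes a mid-bit transition, which is exactly the improved timing content noted above, and each bit interval integrates to zero, so the code is dc-free.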

Bipolar or Alternate Mark Inversion (AMI)


In bipolar or alternate mark inversion (AMI) coding, binary data are coded with three amplitude
levels: 0 and ±V.

Binary 0's are always coded as level 0; binary 1's are coded as +V or −V, where the polarity
alternates with every occurrence of a 1. Bipolar coding results in a zero dc component, a
desirable condition for baseband transmission.

As shown in the following figure, bipolar representations may be NRZ (100 percent duty cycle)
or RZ (50 percent duty cycle).
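The alternating-polarity rule amounts to a one-state encoder. A sketch of the NRZ (100 percent duty cycle) form:

```python
def ami_encode(bits, v=1.0):
    """AMI: 0 -> level 0; 1 -> +V or -V, alternating with each mark."""
    out, polarity = [], v
    for b in bits:
        if b:
            out.append(polarity)
            polarity = -polarity    # alternate mark inversion
        else:
            out.append(0.0)
    return out

levels = ami_encode([1, 0, 1, 1, 0, 1])   # -> [1.0, 0.0, -1.0, 1.0, 0.0, -1.0]
```

Decoding back to NRZ(L) is just full-wave rectification, i.e. taking the absolute value of each level.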



Bipolar Coding Waveforms

Power Spectral Density of Binary Codes


The power spectral density of a baseband code describes two important transmission
characteristics: required bandwidth and spectrum shaping.

 Bandwidth
o The bandwidth available in a transmission channel is described by its frequency
response, which typically indicates limits at the high or low end.

o Further bandwidth restriction may be imposed by a need to stack additional signals


into a given channel.

 Spectrum shaping
o Spectrum shaping can help minimize interference from other signals or noise.

o Conversely, shaping of the signal spectrum can allow other signals to be added
above or below the signal bandwidth.

Power Spectral Density


Derivation of the power spectral density is facilitated by starting with the autocorrelation
function, which is defined for a random process x(t) as

RX(τ) = E{ x(t) x(t + τ) }

where E{·} represents the expected value or mean. The power spectral density describes the
distribution of power versus frequency and is given by the Fourier transform of the
autocorrelation function:

SX(f) = ∫ RX(τ) e^{−j2πfτ} dτ    (Wiener-Khintchin theorem)

The following figure indicates a coder/decoder block diagram and waveforms for bipolar signals.
The bipolar signal is generated from NRZ by use of a 1-bit counter that controls the AND gates
to enforce the alternate polarity rule. Recovery of NRZ(L) from bipolar is accomplished by
simple full-wave rectification.

Characteristics of Bipolar Coding

Advantages of Bipolar Transmission:


 Since a data transition is guaranteed with each binary-1, the clock recovery performance
of bipolar is improved over that of NRZ.

 An error detection capability results from the property of alternate mark inversion.
Consecutive positive amplitudes without an intervening negative amplitude (and vice versa)
are a bipolar violation and indicate that a transmission error has occurred.



o This property allows on-line performance monitoring at a repeater or receiver
without disturbing the data.

The advantages of bipolar transmission have made it a popular choice, for example, by
AT&T for T1 carrier systems that use 50 percent duty cycle bipolar.

Problem with Bipolar Transmission


 Although clock recovery performance of bipolar is improved over that of NRZ, a long
string of 0's produces no transitions in the bipolar signal, which can cause difficulty in
clock recovery.

Solution:
 Replace the string of 0's with a special sequence that contains intentional bipolar
violations, to create additional transitions that improve timing recovery.

 This "filling" sequence must be recognized and replaced by the original string of zeros at
the receiver.

 A commonly used bipolar coding scheme for eliminating strings of zeros is Bipolar N-
Zero Substitution (BNZS)
Bipolar N-Zero Substitution (BNZS)
Bipolar N-Zero Substitution (BNZS) replaces all strings of N 0's with a special N-bit sequence
containing at least one bipolar violation.

All BNZS formats are dc free and retain the balanced feature of AMI, which is achieved by
forcing the substitution patterns to have an equal number of positive and negative pulses.

These substitution rules can also be described by use of the following notation:

 B represents a normal bipolar pulse that conforms to the alternating polarity rule,
 V represents a bipolar violation, and
 0 represents no pulse.

B3ZS
Thus, in the B3ZS code, each block of three consecutive 0's is replaced by B0V or 00V; the
choice of B0V or 00V is made so that the number of B pulses (that is, 1's) between consecutive
V pulses is odd.

B6ZS
B6ZS replaces six consecutive 0's with the sequence 0VB0VB.



B8ZS
B8ZS replaces each block of eight consecutive 0's with 000VB0VB, such that bipolar violations
occur in the fourth and seventh bit positions of the substitution.
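The substitution logic can be sketched at the symbol level as follows (+1/−1 are marks, 0 is no pulse; the pattern polarity convention follows the description above, and a preceding positive mark is assumed when the stream starts with zeros):

```python
def b8zs_encode(bits):
    """AMI with B8ZS zero substitution: eight 0's -> 000VB0VB."""
    out, polarity, zeros = [], 1, 0   # polarity of the next normal mark
    for b in bits:
        if b:
            out.append(polarity)
            polarity = -polarity
            zeros = 0
        else:
            out.append(0)
            zeros += 1
            if zeros == 8:
                last = -polarity      # polarity of the most recent mark
                # V repeats 'last' (a violation); B alternates legally
                out[-8:] = [0, 0, 0, last, -last, 0, -last, last]
                polarity = -last      # the pattern ends on a mark of 'last'
                zeros = 0
    return out

b8zs_encode([1] + [0] * 8)   # -> [1, 0, 0, 0, 1, -1, 0, -1, 1]
```

In the output, the violations sit in the fourth and seventh positions of the eight-symbol substitution, and the pattern contains two positive and two negative pulses, preserving the dc balance of AMI.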

Bipolar Coding Waveforms

