
Module – 1: Sampling & Quantization

1.1 Introduction to Digital Communication System:


Communication is the process of transmitting information through various means: conveying the content of a message (speech, signals, pulses, etc.) from one node to another. Examples of communication between two points include a telephone conversation, accessing an Internet website from a home or office computer, or tuning in to a TV or radio station. One may transmit the natural (analog) signal directly, with or without modulation (baseband transmission). In that case, all the processors/systems used for transmission are analog processors built from discrete analog elements such as op-amps, transistors, resistors, inductors, and capacitors. This is an analog communication system.
On the other hand, a system that deals with digital data and digitally pre-processed signals is a digital communication system. Here, digital processors and accessories such as stored-program-controlled processors, software codes, and digital memory play important roles in signal transmission and reception. Since all naturally occurring signals are analog, overhead hardware for analog-to-digital conversion and digital-to-analog conversion is needed at the transmitter and receiver ends, respectively. Despite the necessity of these extra blocks for analog-to-digital conversion (ADC) and digital-to-analog conversion (DAC), a digital communication system is preferred over an analog one for the following reasons:
Noise immunity: While transmitting digital information, formatting is essential, i.e., in binary transmission two corresponding analog amplitude values are assigned to logic zero and logic one. The noise immunity of such formatted digital information is greater than that of an analog signal.
Memory: Analog signals are generally stored in devices like magnetic tapes and floppy disks. Storing analog signals requires many magnetic tapes, and these are easily affected by magnetic and other mechanical and physical phenomena. Digital information, on the other hand, is stored in devices like CDs and registers. For example, in a D flip-flop the output can be retained for many years without any external power; when the output is needed, a trigger clock pulse is supplied to the flip-flop and it delivers the output. The lifetime of digital memory is also longer than that of analog memory.
System Re-configurability: One of the most significant advantages of digital systems is their
ease of system re-configurability. As for an example, let us consider an analog low pass filter.
To convert it to a high pass filter, we will have to remove the components from the circuit
and replace them with other appropriate components. This is a tiresome job if the system is a
complex one. On the other hand, to convert a digital low pass filter into a high pass filter, we
can easily change the digital filter transfer function [H(z)] just by changing some coefficients.
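As a concrete illustration of this re-configurability argument, the sketch below (in Python, not from the text; the filter and helper names are our own) converts a low-pass FIR filter into a high-pass one purely by sign-flipping its coefficients, a technique known as spectral inversion:

```python
def moving_average_lowpass(num_taps):
    """A simple low-pass FIR filter: an N-tap moving average."""
    return [1.0 / num_taps] * num_taps

def spectral_inversion(h):
    """Flip the sign of every other tap: shifts the frequency response
    by fs/2, so a low-pass characteristic becomes high-pass."""
    return [(-1) ** n * c for n, c in enumerate(h)]

def dc_gain(h):
    """Frequency response at f = 0: the plain sum of the coefficients."""
    return sum(h)

def gain_at_nyquist(h):
    """Frequency response at f = fs/2: the alternating sum."""
    return sum((-1) ** n * c for n, c in enumerate(h))

lp = moving_average_lowpass(4)     # passes DC, rejects fs/2
hp = spectral_inversion(lp)        # same hardware, new coefficients

print(dc_gain(lp), gain_at_nyquist(lp))   # 1.0 0.0  (low-pass)
print(dc_gain(hp), gain_at_nyquist(hp))   # 0.0 1.0  (high-pass)
```

No components are swapped out; only the coefficient values of H(z) change, which is exactly the point of the argument above.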
Aging: Aging refers to the gradual degradation of a system as it grows older, and it affects digital systems far less. In analog systems, the output may change after a few years due to aging of discrete analog components such as diodes: over time, the cut-in voltage of a diode increases slowly and unsteadily, causing fluctuations in system performance. In digital systems, errors due to aging are absent.
1.2 Elements of Digital Communication System:
Consider the block diagram of a digital communication link depicted in Figure 1.1. Let us
now briefly discuss the roles of the blocks shown in the figure.

[Transmitter: Information source and input transducer → Formatter → Source encoder → Channel encoder → Baseband processor / Bandpass modulator → Channel]
[Receiver: Channel → Baseband decoder / Bandpass demodulator → Channel decoder → Source decoder → Deformatter → Output transducer]
Figure 1.1 Block diagram of a digital communication system

In a digital communication system, the input signal should be in digital form so that digital
signal processing techniques can be employed on these signals. The electrical signal at the
output of the transducer needs to be converted into a sequence of digital signals. The block
performing this task is typically the second block of the digital communication system and is
commonly known as the formatter. The output of the formatter is in digital form. If the output of the information source is already digital, the formatter is not needed; hence, a data communication system between computers does not have a formatter. However, if the source of information is a keyboard or typewriter connected to the computer, then a formatting block is required to convert characters (which are in discrete but not digital form) into digital signals.
To represent these digital signals with as few digits as possible, depending on the information content of the messages, a coding system can be employed that minimises the required number of digits. This process is called source encoding, and the block performing it is known as the source encoder. The source encoder compresses the total number of digits of a message signal for transmission.
To combat noise in the communication channel, some redundancy is deliberately
introduced in the message. This is done by the channel encoder block.
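A minimal sketch of this idea (our own illustration, not a scheme from the text): the simplest channel code, a 3× repetition code, adds deliberate redundancy so that a single bit flip per triple can be corrected by majority vote at the receiver.

```python
def channel_encode(bits):
    """Channel encoder: repeat each information bit three times."""
    return [b for bit in bits for b in (bit, bit, bit)]

def channel_decode(coded):
    """Channel decoder: majority vote over each received triple."""
    return [1 if sum(coded[i:i + 3]) >= 2 else 0
            for i in range(0, len(coded), 3)]

message = [1, 0, 1, 1]
transmitted = channel_encode(message)   # 12 coded bits carry 4 info bits
received = transmitted[:]
received[4] ^= 1                        # the channel flips one bit

print(channel_decode(received) == message)  # True: the flip is corrected
```

The price of this protection is a threefold bandwidth expansion; practical channel codes achieve the same protection far more efficiently.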
In low-speed wired transmission, the channel-encoded signal is generally not modulated; transmission takes place at baseband. However, line coding is used for proper detection at the receiver, not to combat noise and interference. Some pulse shaping is done separately to combat interference, and special filters are employed in the receiver to combat noise. Collectively, these blocks are called the baseband processor. This is the case in fixed telephony and data storage systems.
For transmission of high-speed digital data (e.g., computer communication systems), however, the digital signal needs to be modulated, i.e., frequency-translated. The primary purpose of the bandpass modulator is to map the digital signal onto high-frequency analog signal waveforms. A performance measure of the modulator is spectral efficiency: the number of bits sent per second for every Hz of channel bandwidth. The purpose of modulation is to increase the spectral efficiency as much as possible. Needless to say, if the bandpass modulator block is present, the baseband processor block is not required; therefore, the two are shown as mutually exclusive blocks.
In the communication channel, the transmitted signal gets corrupted by random noise. The noise comes from various sources: the electronic devices implementing the channel (thermal noise, shot noise), man-made disturbances (automobile noise, electromagnetic interference from other electronic equipment, etc.), or natural sources (atmospheric noise, lightning discharges during thunderstorms, radiation from space falling within the electromagnetic spectrum).
At the receiver, the bandpass demodulator processes the channel-corrupted transmitted waveform and maps it back to a sequence of numbers representing the estimate of the transmitted data sequence. In the baseband case, the task of converting the line-coded pulse waveform back to the transmitted data sequence is carried out by the baseband decoder block.
This sequence of numbers representing the data sequence is passed to the channel
decoder, which attempts to reconstruct the original information sequence (source encoded)
from the knowledge of channel encoding algorithm.
The performance measure of the demodulator and decoder is the frequency of bit errors (the Bit Error Rate, BER) in the decoded sequence. BER depends on the channel coding characteristics, the type of analog signal used in transmission at the modulator, the transmission power, the channel characteristics (i.e., amount of noise, nature of interference), and the method of demodulation and decoding.
The source decoder estimates the digital signal from the information sequence. The difference between this estimate and the original digital signal is the distortion introduced by the digital communication system.
If the original information source was not in digital data form and the output of the
receiver needs to be in the original form of information, a deformatter block is needed to
convert back the digital data to either discrete form (like keyboard character) or analog form
(say, speech signal).
The output transducer converts the estimate of the digital signal (in discrete or analog form) into an analog non-electrical signal, if an analog output is needed. In a data communication system, however, both the input signal and the reconstructed signal are in digital form, so an output transducer may not always be present.

1.3 Communication Channel (Medium) classification


The communication channel is the physical medium between transmitter and receiver; any device that links the transmitter to the receiver can be called a channel. A channel may be "wired", carrying an electrical signal, as in a telephone wire, TV cable, or Ethernet cable. A wired channel may also carry other forms of signal, such as an optical fiber carrying modulated light beams. "Wireless" channels are also possible: for example, the underwater ocean channel carrying acoustic waves for sea exploration, or free space carrying electromagnetic waves. A channel can also link a co-located transmitter and receiver, as in data storage media such as magnetic tape, magnetic disks, and optical disks, where data is stored and retrieved. Storing and retrieving data requires techniques like encoding and source coding that employ the same communication principles used in other types of communication channels.
As different media are made of different materials with different electrical properties and configurations, they support different frequency bands of operation. If the signal bandwidth lies within this range of frequencies, the channel can pass the signal. In the analog communication days, after pre-processing at the transmitter (mainly modulation), the main worry was whether the signal was transferable through the channel. The scenario has changed with the advent of digital communication: a more important question is how fast the signal can be transmitted over the channel. We will show throughout this book that the job of an encoding and/or modulating scheme is to squeeze as much bit transmission rate as possible out of a channel of finite bandwidth. The more bit transmission rate it can extract from a given amount of bandwidth, the more spectrally efficient the encoding/modulation process is. So, channels can be classified according to the maximum bit transmission rate they can handle. We want to emphasise, however, that with the invention of new technologies this classification can change dramatically, and today's lower-bit-rate channel may move up in transmission speed tomorrow. An example is the ordinary copper telephone line. In the early days of telephony it was classified as a channel supporting 20 kHz of analog speech. With the advent of PCM, it started supporting 64 kbps speech transmission. Then came ISDN, and it started supporting 256 kbps. Nowadays, with DSL technology, particularly very-high-speed DSL (VDSL), it can support data rates of 52 Mbps.

Table 1.1: Classification of communication channels

Channel                    | Type     | Bit rate / Bandwidth               | Repeater distance              | Application
---------------------------|----------|------------------------------------|--------------------------------|---------------------------------------------
Unshielded twisted pair    | Wireline | 64 kbps – 1 Gbps                   | Few km                         | Short-haul PSTN, LAN
Coaxial cable              | Wireline | Few hundred Mbps                   | Few km                         | Cable TV, LAN
Optical fiber              | Wireline | Few Gbps                           | Few tens of km                 | Long-haul PSTN, LAN
Free space broadcast       | Wireless | Few hundred kHz to few hundred MHz | No repeater                    | Broadcast radio/TV
Free space cellular        | Wireless | 1 – 2 GHz                          | No repeater up to base station | Mobile telephony, SMS, WLL
Wireless LAN               | Wireless | Up to 11 Mbps                      | No repeater up to access point | Wi-Fi, Bluetooth
Terrestrial microwave link | Wireless | 2 – 40 GHz                         | Every 10 – 100 km              | Long-haul PSTN, video transmission from playground to studio in a live telecast
Satellite                  | Wireless | 4/6 GHz, 12/14 GHz                 | Several thousand km            | Transcontinental telephony, cable TV broadcast, DTH, VSAT, GPS
Infrared                   | Wireless | Few THz                            | No repeater                    | Short-distance LOS, like a TV remote
Signal attenuation in a channel is usually a function of the distance the signal traverses through the channel. To keep the signal strength at the receiver sufficient for detection, repeaters are placed at regular intervals along the channel; the inter-repeater distance is an important parameter of the channel.
Apart from data rate, another interesting feature of a communication channel is its compatibility with a particular signal or application; some channels are more suitable than others for linking a particular type of signal to its receiver.
Table 1.1 classifies channels in terms of the parameters described above. Where the transmission mode is still primarily analog, we quote the bandwidth, instead of the data rate, as the performance measure of the channel.

1.4 Performance Measure


We have already seen that a communication channel has a finite bandwidth; this is called the bandwidth constraint of the communication system. The success of telecommunication systems in the last century generated huge demand from the user community for various telecommunication services, which required accommodating many users, and hence many channels, within the finite bandwidth of the channel. For example, a long-distance telephone provider like VSNL (Videsh Sanchar Nigam Ltd., India) would always prefer a digital system requiring less bandwidth to send one person's voice over systems requiring more transmission bandwidth. The same objective holds for wireless systems; indeed, for wireless systems, decreasing transmission bandwidth is a question of survival in the marketplace. All wireless systems use the same free space for communication, so there would be a tremendous amount of interference were it not regulated. To this end, the International Telecommunication Union (ITU) allocates various portions of the usable electromagnetic frequency band (usually called spectrum) to different communication applications (usually called services). Because spectrum is limited, it costs huge sums of money to buy: cellular service providers have spent as much as $1 billion for every MHz of PCS (Personal Communication System) bandwidth. Naturally, communication techniques delivering more data per unit bandwidth help systems in the marketplace cater to more subscribers. So, the first performance measure of a digital communication system is spectral efficiency: how much data rate is supported by a unit of bandwidth against the bandwidth constraints of the system. Usually, coding and modulation techniques are employed to keep improving the spectral efficiency until other constraints force the designer to settle for a particular value.
The channel is the first block in the communication system where a signal is exposed to signals from other sources. It is here that the signal gets contaminated by several undesired waveforms, e.g., noise and interference, which are nothing but signals from varied sources. The channel behaves like an electrical device toward the transmitted signal, so it introduces amplitude and/or phase distortion. If the channel gives the signal more than one path to the receiver, a distortion called multipath distortion may creep in. The net effect of all these degradations is detection errors. In digital communication systems, the performance measure of these errors is the BER. BER can be improved by resorting to the following four techniques:
• Increasing transmitted signal power
• Improving frequency filtering techniques
• Modulation and demodulation techniques
• Coding and decoding techniques

All four techniques have physical constraints limiting their performance. Every piece of electronic equipment has a power-handling capability, which limits the signal power that can be transmitted; this is called the power constraint of the system. Also, due to the bandwidth constraint of the communication system, filtering cannot be improved indefinitely. Hence the bandwidth constraint, together with the power constraint, determines the maximum data transmission rate achievable over a channel. It is the job of any particular modulation and/or coding scheme to optimise the BER performance (the power constraint) achievable by the system for a given transmission bandwidth (the bandwidth constraint).

1.5 Bandwidth
Many important theorems of communication and information theory are based on the assumption of strictly band-limited channels, meaning that no signal power whatsoever is allowed outside the defined band. However, Fourier analysis tells us that strictly band-limited signals are not realisable, because they imply signals of infinite duration. On the other hand, time-limited signals are realisable, but their Fourier transforms contain significant energy at quite high frequencies. So the definition of bandwidth is not universal; it depends on the application.
All bandwidth criteria share the attempt to specify a measure of the width, W, of a non-negative real-valued spectral density defined for all possible frequencies. Figure 1.2 shows different definitions of bandwidth. A typical rectangular bandpass digital pulse with time duration T and carrier frequency f_c has the spectrum

G_x(f) = T \left[ \frac{\sin \pi (f - f_c) T}{\pi (f - f_c) T} \right]^2    (1.1)
In the case of a digital data sequence, we speak of the PSD of the data, which is the PSD of a random sequence of the rectangular pulses just defined. The plot consists of a main lobe and smaller side lobes. The general shape of the plot is valid for most digital formats; some formats, however, do not have well-defined lobes. The definitions of bandwidth most relevant to digital communication systems are:
Half-power Bandwidth This is the interval between the frequencies at which G_x(f) has dropped to half power, i.e., 3 dB below the peak value.
Noise-equivalent Bandwidth W_N is defined as W_N = P_x / G_x(f_c), where P_x is the total signal power over all frequencies and G_x(f_c) is the value of the maximum spectral component. For a bandpass signal, the spectral maximum occurs at the carrier frequency.
Fractional Power Containment Bandwidth The band containing 99% of the signal power; exactly 0.5% of the total signal power resides above the band and 0.5% below it. This definition is accepted by the FCC.
Bounded Power Spectral Density A popular method of specifying bandwidth is to state that everywhere outside the specified band, G_x(f) must have fallen at least to a certain stated level below that found at the band centre; typical attenuation levels are 35 or 50 dB.
Absolute Bandwidth This is the interval between frequencies outside of which the spectrum is zero. It is a useful way to define an ideal system; however, for all realisable waveforms, the absolute bandwidth is infinite.
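These definitions can be checked numerically against the sinc²-shaped spectrum of Eq. (1.1). The sketch below (our own numerical check, with an assumed baseband case f_c = 0 and pulse duration T = 1 ms) estimates the half-power and noise-equivalent bandwidths; the familiar results are roughly 0.886/T and 1/T respectively.

```python
import math

T = 1e-3  # assumed pulse duration of 1 ms, baseband case (f_c = 0)

def G(f):
    """Spectrum of Eq. (1.1): T * [sin(pi f T) / (pi f T)]^2."""
    if f == 0.0:
        return T
    x = math.pi * f * T
    return T * (math.sin(x) / x) ** 2

# Half-power bandwidth: bisect on [0, 1/(2T)] for the frequency where
# G(f) falls to half its peak value G(0) = T, then double (two-sided).
lo, hi = 0.0, 1.0 / (2 * T)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if G(mid) > T / 2:
        lo = mid
    else:
        hi = mid
half_power_bw = 2 * lo

# Noise-equivalent bandwidth W_N = P_x / G(f_c): midpoint-rule integral
# of G over +-500/T, where essentially all of the power lies.
df = 10.0
Px = sum(G(-5e5 + (k + 0.5) * df) * df for k in range(100_000))
W_N = Px / G(0.0)

print(round(half_power_bw * T, 3))  # 0.886 -> half-power BW ~ 0.886/T
print(round(W_N * T, 2))            # 1.0   -> noise-equivalent BW = 1/T
```

Note how the two criteria give different numbers for the same spectrum, which is exactly why the definition in use must always be stated.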

Figure 1.2: Various bandwidth definitions of digital signal (a) Half–Power (b) Null–to–null (c) 99% of
power (d) 35 dB

1.6 Sampling
The sampling process is an operation that is basic to digital signal processing and digital communications. Through the use of sampling, an analog signal is converted into a corresponding sequence of samples that are usually spaced uniformly in time. Clearly, for such a procedure to have practical utility, the sampling rate must be chosen properly, so that the sequence of samples uniquely defines the original analog signal.

g (t ) g ( t )

t t
0 0
Ts

(a) (b)
Figure 1.3: The sampling process (a) Analog signal (b) Instantaneously sampled version of the analog
signal

Let g_δ(t) denote the signal obtained by individually weighting the elements of a periodic sequence of delta functions, spaced T_s seconds apart, by the sequence of numbers {g(nT_s)}, as shown in Figure 1.3(b):

g_\delta(t) = \sum_{n=-\infty}^{\infty} g(nT_s)\, \delta(t - nT_s)    (1.2)

From the sifting property of the delta function, we have

g(nT_s)\, \delta(t - nT_s) = g(t)\, \delta(t - nT_s)

Hence, we may rewrite Eq. (1.2) in the equivalent form

g_\delta(t) = g(t) \sum_{n=-\infty}^{\infty} \delta(t - nT_s) = g(t)\, \delta_{T_s}(t)

where δ_{T_s}(t) is the Dirac comb or ideal sampling function. Let G(f) and G_δ(f) denote the Fourier transforms of g(t) and g_δ(t), respectively. The Fourier transform of δ_{T_s}(t) is

F[\delta_{T_s}(t)] = f_s \sum_{m=-\infty}^{\infty} \delta(f - m f_s)

where F[·] signifies the Fourier transform operation and f_s = 1/T_s is the sampling rate. Thus, transforming Eq. (1.2) into the frequency domain, we obtain

G_\delta(f) = G(f) * \left[ f_s \sum_{m=-\infty}^{\infty} \delta(f - m f_s) \right]

where * indicates convolution. Interchanging the order of convolution and summation,

G_\delta(f) = f_s \sum_{m=-\infty}^{\infty} \left[ G(f) * \delta(f - m f_s) \right]

From the properties of the delta function, the convolution of G(f) with δ(f − m f_s) equals G(f − m f_s). Hence we may simplify the above as

G_\delta(f) = f_s \sum_{m=-\infty}^{\infty} G(f - m f_s)

Separating out the m = 0 term, G_δ(f) may also be expressed as

G_\delta(f) = f_s G(f) + f_s \sum_{m=-\infty,\, m \neq 0}^{\infty} G(f - m f_s)    (1.3)

Let us consider an analog signal g(t) that is strictly band-limited, i.e.,

G(f) = 0 for |f| ≥ W

For this signal, we plot the spectrum of G_δ(f) for three cases. In Case – I, f_s = 2W, and the spectrum of G_δ(f) is as shown below:

Figure 1.4: (a) Spectrum of a strictly band–limited signal g(t) (b) Spectrum of the sampled version of g(t)
for a sampling period Ts = 1/2W

The sampling rate of 2W samples per second, for a signal bandwidth of W hertz, is called the
Nyquist rate and its reciprocal 1/2W (measured in seconds) is called the Nyquist interval.
Case – II: fs < 2W, then the spectrum of Gδ( f ) can be shown as:

Figure 1.5: (a) Spectrum of a signal (b) Spectrum of an under-sampled version of the signal exhibiting the
aliasing phenomenon

This case is called under-sampling, and it results in aliasing. Aliasing refers to the phenomenon of a high-frequency component in the spectrum of a signal seemingly taking on the identity of a lower frequency in the spectrum of its sampled version, as illustrated in Figure 1.5.
To combat the effect of aliasing in practice, we may use two corrective measures, as
described here:
1. Prior to sampling, a low-pass anti-aliasing filter is used to attenuate those high-frequency components of the signal that are not essential to the information being conveyed.
2. The filtered signal is sampled at a rate slightly higher than the Nyquist rate, which also eases recovery of the original signal from its sampled version.
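The aliasing phenomenon can be demonstrated numerically (a sketch with assumed example frequencies, not from the text): a 7 kHz tone sampled at 10 kHz, i.e. below its Nyquist rate of 14 kHz, yields sample-for-sample the same sequence as a 3 kHz tone, since 7 kHz folds down to f_s − f = 3 kHz.

```python
import math

fs = 10_000.0                        # sampling rate, Hz (below 2 * 7 kHz)
f_high, f_alias = 7_000.0, 3_000.0   # fs - f_high = 3 kHz

# Sample both tones at the same instants n / fs.
samples_high = [math.cos(2 * math.pi * f_high * n / fs) for n in range(50)]
samples_alias = [math.cos(2 * math.pi * f_alias * n / fs) for n in range(50)]

max_diff = max(abs(a - b) for a, b in zip(samples_high, samples_alias))
print(max_diff < 1e-9)  # True: the sample sequences are identical
```

After sampling, no receiver can tell the two tones apart; the 7 kHz component has irreversibly "taken on the identity" of 3 kHz, which is why the anti-aliasing filter must act before the sampler.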
Case – III: fs > 2W. Consider the example of a
message signal that has been anti-alias (low-pass) filtered, resulting in the spectrum shown in
Figure 1.6(a). The corresponding spectrum of the instantaneously sampled version of the
signal is shown in Figure 1.6(b), assuming a sampling rate higher than the Nyquist rate.
According to Figure 1.6(b), we readily see that the design of the reconstruction filter may be
specified as follows (see Figure 1.6 (c)):

Figure 1.6: (a) Anti-alias filtered spectrum of an information-bearing signal (b) Spectrum of
instantaneously sampled version of the signal, assuming the use of a sampling rate greater than
the Nyquist rate (c) magnitude response of reconstruction filter

Based on their nature, sampling techniques are classified as impulse sampling, natural sampling, and flat-top sampling.
Impulse Sampling: Impulse sampling is performed by multiplying the input signal g(t) with an impulse train \sum_{n=-\infty}^{\infty} \delta(t - nT_s) of period T_s. Here, the weight of each impulse follows the amplitude of the input signal g(t) at that instant. The output of the sampler is

g_\delta(t) = g(t) \sum_{n=-\infty}^{\infty} \delta(t - nT_s) = \sum_{n=-\infty}^{\infty} g(nT_s)\, \delta(t - nT_s)    (1.4)

This kind of sampling is already shown in Figure 1.3(b).

Natural Sampling: Let an arbitrary analog signal g(t) be applied to a switching circuit (shown in Figure 1.7) controlled by a sampling function c(t) that consists of an infinite succession of rectangular pulses of amplitude A and duration T, occurring with period Ts. The output of the switching circuit is denoted s(t). The waveforms of g(t), c(t), and s(t) are illustrated in parts (a), (b), and (c) of Figure 1.8, respectively. We see that the switching operation merely extracts from the analog signal g(t) successive portions of predetermined duration T, taken regularly at the rate fs = 1/Ts. Accordingly, the sampled signal consists of a sequence of positive and negative pulses, as in Figure 1.8(c). Mathematically, the sampled signal s(t) is obtained by multiplying the sampling function (pulse train) c(t) with the input signal g(t).

Figure 1.7: Switching circuit

Figure 1.8: (a) Analog signal (b) Sampled function (c) Sampled signal
Flat-top Sampling: Consider next the situation where the analog signal g(t) is sampled instantaneously at the rate fs = 1/Ts, and the duration of each sample is lengthened to T, as illustrated in Figure 1.9(c). (A practical reason for intentionally lengthening the duration of each pulse is to reduce bandwidth.) A simple flat-top sampled signal can be generated by a sample-and-hold circuit. Using s(t) to denote the sequence of flat-top pulses generated in this way, we may write

s(t) = \sum_{n=-\infty}^{\infty} g(nT_s)\, h(t - nT_s)    (1.5)

where h(t) is a rectangular pulse of unit amplitude and duration T, as shown in Figure 1.9 (b).

Figure 1.9: (a) Instantaneously sampled signal (b) Rectangular Pulse (c) Flat top sampled signal
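A toy sketch of Eq. (1.5) in Python (our own illustration; the ramp input and the parameter values are arbitrary assumptions): each sample g(nTs) is held flat for a pulse width tau ≤ Ts, producing the staircase-with-gaps waveform of Figure 1.9(c).

```python
def flat_top_sample(g, Ts, tau, t):
    """Value at time t of the flat-top sampled version of g.

    Returns g(n*Ts) if t lies within tau seconds after the n-th sampling
    instant (the held pulse), and 0 in the gap between pulses (tau <= Ts).
    """
    n = int(t // Ts)                 # index of the most recent sample
    return g(n * Ts) if (t - n * Ts) < tau else 0.0

g = lambda t: 2.0 * t                # hypothetical ramp input, for illustration
Ts, tau = 0.1, 0.04                  # 10 Hz sampling, 40 ms hold time

print(flat_top_sample(g, Ts, tau, 0.23))  # 0.4  (holds g(0.2) = 0.4)
print(flat_top_sample(g, Ts, tau, 0.27))  # 0.0  (gap between pulses)
```

Setting tau equal to Ts turns this into the classic zero-order hold of a sample-and-hold circuit, with no gaps between pulses.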
1.7 Quantization:
A continuous signal, such as voice, has a continuous range of amplitudes, and therefore its samples have a continuous amplitude range. In other words, within the finite amplitude range of the signal, we find an infinite number of amplitude levels. In fact, it is not necessary to transmit the exact amplitudes of the samples. Any human sense (the eye or the ear), as the ultimate receiver, can detect only finite intensity differences. This means that the original continuous signal may be approximated by a signal constructed of discrete amplitudes selected on a minimum-error basis from an available set. The existence of a finite number of discrete amplitude levels is a basic condition of pulse-code modulation. Clearly, if we assign the discrete amplitude levels with sufficiently close spacing, we may make the approximated signal practically indistinguishable from the original continuous signal.
Amplitude quantization is defined as the process of transforming the sample
amplitudes m(nTs) of a message signal m(t) at time t = nTs into a discrete amplitude v(nTs)
taken from a finite set of possible amplitudes. We assume that the quantization process is
memory-less and instantaneous, which means that the transformation at time t = nTs is not
affected by earlier or later samples of the message signal.

Figure 1.10: Description of a memory-less quantizer

When dealing with a memory-less quantizer, we may simplify the notation by dropping the
time index. We may thus use the symbol m in place of m(nTs), as indicated in the block
diagram of a quantizer shown in Figure 1.10 (a). Then, as shown in Figure 1.10 (b), the signal
amplitude m is specified by the index k if it lies inside the partition cell

I k : mk  m  mk 1 , k  1,2,..., L   (1.6)

where L is the total number of amplitude levels used in the quantizer. The discrete amplitudes
mk, k = 1, 2, …, L, at the quantizer input are called decision levels or decision thresholds. At
the quantizer output, the index k is transformed into an amplitude vk that represents all
amplitudes of the cell Ik. These discrete amplitudes vk, k = 1, 2, …, L, are called
representation levels or reconstruction levels, and the spacing between two adjacent
representation levels is called a quantum or step size. Thus, the quantizer output v equals vk if
the input signal m belongs to the interval Ik. The mapping (see Figure 1.10 (a)) v = g(m) is the
quantizer characteristic, which is a staircase function by definition.
Quantizers can be of uniform or non-uniform type. In a uniform quantizer, the
representation levels are uniformly spaced; otherwise, the quantizer is non-uniform. The
quantizer characteristic can also be of the midtread or midrise type. Figure 1.11 (a) shows the input-output characteristic of a uniform quantizer of the midtread type, so called because the origin lies in the middle of a tread of the staircase-like graph. Figure 1.11 (b) shows the corresponding input-output characteristic of a uniform quantizer of the midrise type, in which the origin lies in the middle of a rising segment of the staircase-like graph. Note that both the midtread and midrise types of uniform quantizers illustrated in Figure 1.11 are symmetric about the origin.
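The two characteristics can be sketched in a few lines (our own illustration, assuming step size delta): midtread has a representation level at zero, midrise does not.

```python
import math

def midtread(m, delta):
    """Uniform midtread quantizer: levels at 0, +-delta, +-2*delta, ...
    Equivalent to delta * round(m / delta)."""
    return delta * math.floor(m / delta + 0.5)

def midrise(m, delta):
    """Uniform midrise quantizer: levels at +-delta/2, +-3*delta/2, ...
    There is no level at zero; the origin falls on a rise."""
    return delta * (math.floor(m / delta) + 0.5)

delta = 1.0
print(midtread(0.2, delta))   # 0.0  -> zero is a representation level
print(midrise(0.2, delta))    # 0.5  -> nearest level is +delta/2
print(midtread(0.7, delta))   # 1.0
print(midrise(-0.2, delta))   # -0.5
```

For a small input that dithers around zero, midtread outputs a constant 0 while midrise toggles between +delta/2 and −delta/2, which is why the choice matters for idle-channel noise.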

(a) (b)
Figure 1.11: Two types of quantization: (a) mid-thread and (b) mid-rise

1.7.1 Signal to quantization noise ratio:

Quantization introduces an error defined as the difference between the input signal m and the output signal v; this error is called quantization noise. Figure 1.12 illustrates a typical variation of the quantization error as a function of time, assuming a uniform quantizer of the midtread type.

Figure 1.12: Illustration of the quantization process


Let the quantizer input m be the sample value of a zero mean random variable M. (If
the input has a nonzero mean, we can always remove it by subtracting the mean from the
input and then adding it back after quantization). A quantizer g(.) maps the input random
variable M of continuous amplitude into a discrete random variable V; their respective sample
values m and v are related by v = g(m). Let the quantization error be denoted by the random
variable Q of sample value q. We may thus write

q=m–v (1.7)

or, correspondingly,

Q=M–V (1.8)

With the input M having zero mean, and the quantizer assumed to be symmetric as in Figure
1.11, it follows that the quantizer output V and therefore the quantization error Q, will also
have zero mean. Thus for a particular statistical characterization of quantizer in terms of
output signal to (quantization) noise ratio, we need only find the mean-square value of the
quantization error Q.
Consider then an input m of continuous amplitude in the range (–Vmax, Vmax).
Assuming a uniform quantizer of the midrise type illustrated in Figure 1.11 (b), we find that
the step-size of the quantizer is given by

\Delta = \frac{2 V_{\max}}{L}    (1.9)

where L is the total number of representation levels. For a uniform quantizer, the quantization error Q has sample values bounded by −Δ/2 ≤ q ≤ Δ/2. If the step size Δ is sufficiently small (i.e., the number of representation levels L is sufficiently large), it is reasonable to assume that the quantization error Q is a uniformly distributed random variable, and that the interfering effect of the quantization noise on the quantizer input is similar to that of thermal noise. We may thus express the probability density function of the quantization error Q as follows:

1  
 ,  q
f Q ( q)    2 2 (1.10)
 0, otherwise

For this to be true, we must ensure that the incoming signal does not overload the quantizer.

The area under the probability density function of a uniformly distributed random variable equals unity, because the total probability must be unity. So the probability density function of the quantization error Q has magnitude 1/Δ over the interval −Δ/2 to Δ/2, as shown in Figure 1.13.
Figure 1.13: Probability density function of Quantization error Q

Then, with the mean of the quantization error being zero, its variance σQ² is the same as the
mean-square value:

σQ² = E[Q²] = ∫_{–Δ/2}^{Δ/2} q² fQ(q) dq (1.11)

(Note: The power of a random variable x equals its mean-square value E[x²]. Noise usually has
zero mean µ and variance σn², where σn² = E[(n – µ)²] = E[n²] when µ = 0; so the noise power
equals its variance.) Substituting Eq. (1.10) into Eq. (1.11), we get

σQ² = (1/Δ) ∫_{–Δ/2}^{Δ/2} q² dq = Δ²/12 (1.12)

Let n denote the number of bits per sample used in the construction of the binary code. Then
the number of quantization levels is L = 2^n, or equivalently,

n = log2 L (1.13)

Hence the step size given in Eq. (1.9) is:

Δ = 2Vmax / 2^n (1.14)

Thus the use of Eq. (1.14) in Eq. (1.12) yields

σQ² = (1/3) Vmax² 2^(–2n) (1.15)

Let P denote the average power of the message signal m(t). We may then express the output
signal to noise ratio of a uniform quantizer as

P  3P 
( SNR )O    2  22 n (1.16)
 2
Q  Vmax 
Eq. (1.16) shows that the output signal to noise ratio of the quantizer increases exponentially
with increasing number of bits per sample, n. Recognizing that an increase in n requires a
proportionate increase in the channel (transmission) bandwidth BT , we thus see that use of a
binary code for the representation of a message signal (as in pulse code modulation) provides
a more efficient method than either frequency modulation (FM) or pulse position modulation
(PPM) for the trade-off of increased channel bandwidth for improved noise performance. In
making this statement, we presume that the FM and PPM systems are limited by receiver
noise, whereas the binary-coded modulation system is limited by quantization noise.
Consider the special case of a full-load sinusoidal modulating signal of amplitude Am,
which utilizes all the representation levels provided. The average signal power is (assuming a
load of 1 ohm):

P = Arms²/R = (Am/√2)²/R = Am²/2 (1.17)

The total range of the quantizer input is 2 Am, because the modulating signal swings between
– Am and Am. We may therefore set Vmax = Am, in which case the use of Eq. (1.15) yields the
average power (variance) of the quantization noise as

σQ² = (1/3) Am² 2^(–2n) (1.18)

The output signal to noise ratio of a uniform quantizer, for a full-load test tone, is

(SNR)O = (Am²/2) / ((1/3) Am² 2^(–2n)) = (3/2) 2^(2n) (1.19)

Expressing signal to noise ratio in decibels, we get

10 log10 ( SNR )O  1.8  6n (1.20)

If n = 2, (SNR)dB is 13.8 dB; if n = 3, it is 19.8 dB; and if n = 4, it is 25.8 dB. So, each
additional bit used to represent a sample improves the output SNR by 6 dB.
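The 6 dB-per-bit rule in Eq. (1.20) can be checked numerically. The sketch below is our own (the midrise quantizer helper is an assumption, not from the text): it quantizes a full-load sine wave and compares the measured output SNR against 1.8 + 6n.

```python
import numpy as np

# Uniform midrise quantizer over (-Vmax, Vmax); a sketch, names are ours.
def midrise_quantize(m, Vmax, n_bits):
    delta = 2 * Vmax / 2**n_bits
    v = delta * (np.floor(m / delta) + 0.5)
    return np.clip(v, -Vmax + delta / 2, Vmax - delta / 2)

t = np.linspace(0, 1, 1_000_000, endpoint=False)
x = np.sin(2 * np.pi * 7 * t)        # full-load tone: Am = Vmax = 1

for n in (2, 3, 4, 8):
    q = x - midrise_quantize(x, 1.0, n)
    measured = 10 * np.log10(np.mean(x**2) / np.mean(q**2))
    print(n, round(measured, 1), 1.8 + 6 * n)   # measured vs Eq. (1.20)
```

For small n the uniform-error assumption is rough, so the measured value deviates slightly from the formula; the agreement tightens as n grows.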

1.7.2 Non-uniform Quantization:


Speech communication is a very important and specialized area of digital communications.
Human speech is characterized by a unique statistical property, as illustrated in Figure 1.14. The
abscissa represents speech signal magnitudes, normalized to the root-mean-square (rms)
values, and the ordinate is probability. For most voice communication channels, very low
speech volumes predominate and large amplitude values are relatively rare. That means 50%
of the time, the voltage characterizing detected speech energy is less than one-fourth of the
rms value and only 15% of the time does the voltage exceed the rms value. As the
quantization noise depends on step size, a uniform quantizer would be wasteful for speech
signals, because many of the quantizing steps would rarely be used.
In a system that uses equally spaced quantization levels, the quantization noise is the
same for all signal magnitudes. Therefore, with uniform quantization, the SNR is worse for
low-level signals than for high-level signals. However, non-uniform quantization can provide
fine quantization of weak signals and coarse quantization for the strong signals. Thus in the
case of non-uniform quantization, quantization noise can be made proportional to signal size.
The effect is to improve the overall SNR by reducing the noise for the predominant weak
signals, at the expense of an increase in noise for the rarely occurring strong signals.
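The SNR penalty that uniform quantization imposes on weak signals can be seen numerically. In this sketch (our own illustration; the helper functions are assumptions, not from the text), the same 8-bit uniform quantizer is applied to tones of decreasing amplitude:

```python
import numpy as np

# Uniform midrise quantizer over (-Vmax, Vmax); a sketch, names are ours.
def midrise_quantize(m, Vmax, n_bits):
    delta = 2 * Vmax / 2**n_bits
    v = delta * (np.floor(m / delta) + 0.5)
    return np.clip(v, -Vmax + delta / 2, Vmax - delta / 2)

def tone_snr_db(amplitude, n_bits=8):
    # Measure output SNR for a sine tone of the given amplitude,
    # quantized by a fixed quantizer designed for Vmax = 1.
    t = np.linspace(0, 1, 200_000, endpoint=False)
    x = amplitude * np.sin(2 * np.pi * 50 * t)
    q = x - midrise_quantize(x, 1.0, n_bits)
    return 10 * np.log10(np.mean(x**2) / np.mean(q**2))

for a in (1.0, 0.25, 0.05):          # full load, -12 dB, -26 dB tones
    print(a, round(tone_snr_db(a), 1))
```

The quantization noise power stays near Δ²/12 regardless of the input level, so the measured SNR falls roughly decibel-for-decibel with the signal level, which is exactly what companding is meant to counter.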

Figure 1.14: Statistical distribution of single-talker speech signal magnitudes

One way of achieving non-uniform quantization is to use a non-uniform quantizer
characteristic, shown in Figure 1.15 (a). But this type of quantizer, with its varying step-size, is
difficult to implement. More often, non-uniform quantization is achieved by first distorting
the original signal with a logarithmic compression characteristic, as shown in Figure 1.15 (b),
and then using a uniform quantizer. For small-magnitude signals the compression
characteristic has a much steeper slope than for large-magnitude signals. Thus, a given signal
change at small magnitudes will carry the uniform quantizer through more steps than the
same change at large magnitudes. The compression characteristic effectively changes the
distribution of the input signal magnitudes so that there is not a preponderance of
low-magnitude signals at the output of the compressor. After compression, the distorted
signal is used as the input to a uniform (linear) quantizer characteristic, shown in Figure
1.15 (c). At the receiver, an inverse compression characteristic, called expansion, is applied
so that the overall transmission is not distorted. The processing pair (compression and
expansion) is usually referred to as companding.

Figure 1.15: (a) Non-uniform quantizer characteristic (b) Compression characteristic (c) Uniform
quantizer characteristic

The early PCM systems implemented a smooth logarithmic compression function.
Today, most PCM systems use a piecewise linear approximation to the logarithmic
compression characteristic. America and Europe both agreed on the need for a compander in
voice telephony systems, but could not agree on the details. Hence, two logarithmic
compression laws have been standardized: America and Japan use the µ-law compander,
while Europe and the rest of the world's national and international systems use the A-law.
Figure 1.16 shows that both the µ-law and A-law transfer functions are logarithmic. The
µ-law characteristic is defined by:

1  x  x
y  y max ln  1   , 0  1  (1.21)
ln(1   )  xmax  xmax

where µ is a positive constant, x and y represent the input and output voltages, and xmax and ymax
are the maximum positive excursions of the input and output voltages, respectively. The
compression characteristic is shown in Figure 1.16 (a) for several values of µ. In North
America, the standard value for µ is 255. Notice that µ = 0 corresponds to uniform
quantization. Similarly, the A-law characteristic is defined as:

y = ymax · A(|x|/xmax) / (1 + ln A),           0 ≤ |x|/xmax ≤ 1/A
y = ymax · [1 + ln(A|x|/xmax)] / (1 + ln A),   1/A ≤ |x|/xmax ≤ 1     (1.22)

where A is a positive constant and x and y are as defined in Eq. (1.21). The A-law
compression characteristic is shown in Figure 1.16 (b) for several values of A. A standard
value for A is 87.6.
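Both companding laws follow directly from Eqs. (1.21) and (1.22). The sketch below is our own (function names are assumptions, and the signals are normalized so that xmax = ymax = 1): it implements µ-law compression with its inverse (expansion) and A-law compression, and verifies that expansion undoes compression.

```python
import numpy as np

# Sketch of Eqs. (1.21) and (1.22) with xmax = ymax = 1 (function names ours).
def mu_law_compress(x, mu=255.0):
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def mu_law_expand(y, mu=255.0):
    # Inverse of the compressor: |x| = ((1 + mu)**|y| - 1) / mu
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(mu)) / mu

def a_law_compress(x, A=87.6):
    ax = np.abs(x)
    small = A * ax / (1 + np.log(A))
    # Guard the log argument so the branch not selected never sees log(0).
    large = (1 + np.log(np.maximum(A * ax, 1e-12))) / (1 + np.log(A))
    return np.sign(x) * np.where(ax < 1 / A, small, large)

x = np.linspace(-1, 1, 101)
print(np.max(np.abs(mu_law_expand(mu_law_compress(x)) - x)))  # ~0: round trip
```

In a complete compander the expansion at the receiver mirrors the compression exactly, which is why the round-trip error here is only floating-point noise.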
Figure 1.16: Compression characteristics (a) µ-law characteristics (b) A-law characteristics
