In a digital communication system, the input signal should be in digital form so that digital
signal processing techniques can be employed on it. The electrical signal at the output of the
transducer therefore needs to be converted into a sequence of digital signals. The block
performing this task is typically the second block of the digital communication system and is
commonly known as the formatter. The output signal of the formatter is in digital form. If the
output of the information source is already digital, the formatter is not needed; hence, a data
communication system between computers does not have a formatter. However, if the
source of information in such a case is a keyboard or typewriter connected to the computer, then
the formatting block is required to convert characters (which are in discrete form but not in
digital form) into digital signals.
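The formatting step just described can be sketched in code. The example below is a minimal illustration, not a standard formatter implementation: it maps keyboard characters (discrete but not digital) to a 7-bit ASCII bit stream and back, and the function names are hypothetical.

```python
# Minimal sketch of the formatting step: keyboard characters (discrete
# but not digital) are mapped to a digital bit sequence using 7-bit ASCII.

def format_chars(text):
    """Convert each character to its 7-bit ASCII representation."""
    return ''.join(format(ord(ch), '07b') for ch in text)

def deformat_bits(bits):
    """Inverse operation: recover the characters from the bit stream."""
    groups = [bits[i:i + 7] for i in range(0, len(bits), 7)]
    return ''.join(chr(int(b, 2)) for b in groups)

bits = format_chars("Hi")   # 'H' = 72 -> 1001000, 'i' = 105 -> 1101001
```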
To represent these digital signals by as few digits as possible, depending on the
information content of the messages, a coding system can be employed that minimises the
required number of digits. This process is called source encoding, and the block
performing this task is known as the source encoder. The source encoder compresses the
total number of digits of a message signal for transmission.
To combat noise in the communication channel, some redundancy is deliberately
introduced in the message. This is done by the channel encoder block.
In low-speed wired transmission, the channel-encoded signal is generally not
modulated; transmission takes place at baseband. Line coding is used here for proper
detection at the receiver, not to combat noise and interference. Pulse shaping alone is
used to combat interference, and special filters are also employed in the receiver to
combat noise. All of these are collectively called the baseband processor. This is the case in fixed
telephony and data storage systems.
However, for transmission of high-speed digital data (e.g., a computer communication
system), the digital signal needs to be modulated, i.e., frequency translated. The primary
purpose of the bandpass modulator is to map the digital signal onto high-frequency analog
signal waveforms. A performance measure of the modulator is spectral efficiency, which is
the number of bits sent per second for every hertz of channel bandwidth. The purpose of
modulation is to make the spectral efficiency as high as possible. Needless to say, if the
bandpass modulator block is present, the baseband processor block is not required.
Therefore, these two blocks are shown as mutually exclusive blocks.
In the communication channel, the transmitted signal gets corrupted by random
noise. The noise comes from various sources: from the electronic devices implementing the
channel (thermal noise, shot noise), from man-made disturbances (automobile ignition noise,
electromagnetic interference from other electronic equipment, etc.), or from natural sources
(atmospheric noise, lightning discharges during thunderstorms, radiation from space
falling within the electromagnetic spectrum).
At the receiver, the bandpass demodulator block processes the channel-corrupted
transmitted waveform and maps it back to a sequence of numbers that represents the
estimate of the transmitted data sequence. In the baseband case, the task of converting the
line-coded pulse waveform back to the transmitted data sequence is carried out by the baseband
decoder block.
This sequence of numbers representing the data sequence is passed to the channel
decoder, which attempts to reconstruct the original (source-encoded) information sequence
from knowledge of the channel encoding algorithm.
The performance measure of the demodulator and decoder is the frequency of bit errors
(the Bit Error Rate, BER) in the decoded sequence. BER depends on the channel coding
characteristics, the type of analog signal used for transmission at the modulator, the transmission
power, the channel characteristics (i.e., the amount of noise and the nature of interference), and the
method of demodulation and decoding.
The source decoder estimates the digital signal from the information sequence. The
difference between this estimate and the original digital signal is the distortion introduced by the
digital communication system.
If the original information source was not in digital data form and the output of the
receiver needs to be in the original form of information, a deformatter block is needed to
convert the digital data back to either discrete form (like keyboard characters) or analog form
(say, a speech signal).
The output transducer converts the estimate of the digital signal (either in discrete form or
analog form) into an analog non-electrical signal, if an analog output is needed. However, in a data
communication system, the input signal and the reconstructed signal are both in digital form; so,
an output transducer may not always be present in a digital data communication system.
All these four techniques have physical constraints limiting their performance. Each piece of
electronic equipment has a power-handling capability, which limits the signal power that can
be transmitted. This is called the “power constraint of the system.” Also, due to the bandwidth
constraints of the communication system, filtering cannot be improved indefinitely. Hence
the bandwidth constraint together with the power constraint determines the maximum data
transmission rate that can be achieved over a channel. It is the job of any particular modulation
and/or coding scheme to optimise the BER performance (power constraint) that is
achievable by the system for a given transmission bandwidth (bandwidth constraint).
1.5 Bandwidth
Many important theorems of communication and information theory are based on the
assumption of strictly band-limited channels, which means that no signal power whatsoever is
allowed outside the defined band. However, Fourier analysis tells us that
strictly band-limited signals are not realisable, because they imply signals of infinite
duration. Time-limited signals, on the other hand, are realisable, but their Fourier transforms
contain significant energy at quite high frequencies. Hence no single definition of bandwidth is
universal; the appropriate definition depends on the application.
All bandwidth criteria have in common the attempt to specify a measure of the width,
W, of a non-negative real-valued spectral density defined for all possible frequencies. The
figure shows different definitions of bandwidth. A typical rectangular bandpass digital pulse
with time duration T and carrier frequency fc has the spectrum

G_x(f) = T \left[ \frac{\sin \pi (f - f_c) T}{\pi (f - f_c) T} \right]^2 \qquad (1.1)
In the case of a digital data sequence, we speak of the PSD of the data, which is the PSD of a
random sequence of the rectangular pulses just defined. The plot consists of a main lobe
and smaller side lobes. The general shape of the plot is valid for most digital formats; some
formats, however, do not have well-defined lobes. The various definitions of bandwidth
relevant to digital communication systems are:
Half-power Bandwidth This is the interval between the frequencies at which Gx(f) has dropped
to half power, i.e., 3 dB below the peak value.
Noise-equivalent Bandwidth WN is defined as WN = Px/Gx(fc), where Px is the total signal
power over all frequencies and Gx(fc) is the value of the maximum spectral component. For a
bandpass signal, the spectral maximum occurs at the carrier frequency.
Fractional Power Containment Bandwidth The band is defined such that it contains 99% of
the total signal power; exactly 0.5% of the signal power resides above the band and 0.5%
below it. This definition is adopted by the FCC.
Bounded Power Spectral Density A popular method is to require that,
everywhere outside the specified band, Gx(f) must have fallen at least to a certain stated level
below its value at the band centre. Typical attenuation levels might be 35 or 50 dB.
Absolute Bandwidth This is the interval between frequencies outside of which the spectrum is
zero. It is a useful way to define an ideal system; however, for all realisable waveforms, the
absolute bandwidth is infinite.
Figure 1.2: Various bandwidth definitions of digital signal (a) Half–Power (b) Null–to–null (c) 99% of
power (d) 35 dB
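Two of these bandwidth measures can be evaluated numerically for the sinc-squared spectrum of Eq. (1.1). The sketch below is a rough numerical illustration, with arbitrary assumed values for T and fc:

```python
# Numerical sketch of two bandwidth measures for the sinc^2 spectrum of
# Eq. (1.1); T and fc below are arbitrary illustrative values.
import math

T, fc = 1.0, 10.0          # pulse duration (s) and carrier frequency (Hz)

def Gx(f):
    """Sinc-squared spectrum of a rectangular bandpass pulse, Eq. (1.1)."""
    x = math.pi * (f - fc) * T
    return T if x == 0 else T * (math.sin(x) / x) ** 2

# Half-power (3 dB) bandwidth: scan outward from fc until Gx falls to
# half its peak value, then double the offset (spectrum is symmetric).
f = fc
while Gx(f) > Gx(fc) / 2:
    f += 1e-5
W_3dB = 2 * (f - fc)        # ~0.88/T for this spectrum

# Noise-equivalent bandwidth W_N = Px / Gx(fc), with Px obtained by
# crude numerical integration of the PSD over +/- 20 Hz around fc.
df = 1e-4
Px = sum(Gx(fc - 20 + k * df) * df for k in range(int(40 / df)))
W_N = Px / Gx(fc)           # ~1/T
```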
1.6 Sampling
The sampling process is an operation that is basic to digital signal processing and digital
communications. Through the sampling process, an analog signal is converted into a
corresponding sequence of samples that are usually spaced uniformly in time. Clearly, for
such a procedure to have practical utility, it is necessary that we choose the sampling rate
properly, so that the sequence of samples uniquely defines the original analog signal.
Figure 1.3: The sampling process (a) Analog signal (b) Instantaneously sampled version of the analog
signal
g_\delta(t) = \sum_{n=-\infty}^{\infty} g(nT_s)\, \delta(t - nT_s) = g(t)\, \delta_{T_s}(t) \qquad (1.2)

where \delta_{T_s}(t) is the Dirac comb or ideal sampling function. Let G(f) and G_\delta(f) denote the Fourier
transforms of g(t) and g_\delta(t), respectively. The Fourier transform of \delta_{T_s}(t) is

F[\delta_{T_s}(t)] = f_s \sum_{m=-\infty}^{\infty} \delta(f - m f_s)
where F[.] signifies the Fourier transform operation, and fs = 1/Ts is the sampling rate. Thus,
transforming Eq. (1.2) into the frequency domain, we obtain

G_\delta(f) = G(f) * f_s \sum_{m=-\infty}^{\infty} \delta(f - m f_s) = f_s \sum_{m=-\infty}^{\infty} G(f) * \delta(f - m f_s)

From the properties of the delta function, we find that the convolution of G(f) and δ(f – mfs)
equals G(f – mfs). Hence we may simplify the above equations as

G_\delta(f) = f_s \sum_{m=-\infty}^{\infty} G(f - m f_s) = f_s G(f) + f_s \sum_{m \neq 0} G(f - m f_s) \qquad (1.3)
Let us consider an analog signal g(t) that is strictly band-limited, that is,

G(f) = 0 for |f| ≥ W \qquad (1.4)

For this signal, we plot the spectrum of Gδ(f) for three cases. In Case – I,
fs = 2W, and the spectrum of Gδ(f) can be shown as:
Figure 1.4: (a) Spectrum of a strictly band–limited signal g(t) (b) Spectrum of the sampled version of g(t)
for a sampling period Ts = 1/2W
The sampling rate of 2W samples per second, for a signal bandwidth of W hertz, is called the
Nyquist rate and its reciprocal 1/2W (measured in seconds) is called the Nyquist interval.
Case – II: fs < 2W, then the spectrum of Gδ( f ) can be shown as:
Figure 1.5: (a) Spectrum of a signal (b) Spectrum of an under-sampled version of the signal exhibiting the
aliasing phenomenon
This case is called under-sampling, which results in aliasing. Aliasing refers to the
phenomenon of a high-frequency component in the spectrum of the signal seemingly taking
on the identity of a lower frequency in the spectrum of its sampled version, as illustrated in
Figure 1.5.
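The aliasing phenomenon can be demonstrated numerically: a tone above the Nyquist band, once sampled, produces exactly the same sample values as a lower-frequency tone. The frequencies below are illustrative choices.

```python
# Sketch of aliasing: a 7 Hz tone sampled at 8 Hz (below its Nyquist
# rate of 14 Hz) yields the same samples as a 1 Hz tone, since
# |7 - 8| = 1 Hz. All values here are illustrative.
import math

fs = 8.0                       # sampling rate (Hz); Nyquist band is 0-4 Hz
f_high, f_alias = 7.0, 1.0     # 7 Hz takes on the identity of 1 Hz

n = range(16)
x_high  = [math.cos(2 * math.pi * f_high  * k / fs) for k in n]
x_alias = [math.cos(2 * math.pi * f_alias * k / fs) for k in n]

# The two sampled sequences are indistinguishable.
max_diff = max(abs(a - b) for a, b in zip(x_high, x_alias))
```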
To combat the effects of aliasing in practice, we may use two corrective measures, as
described here:
1. Prior to sampling, a low-pass anti-aliasing filter is used to attenuate those high-
frequency components of the signal that are not essential to the information being
conveyed by the signal.
2. The filtered signal is sampled at a rate slightly higher than the Nyquist rate, which
eases the recovery of the original signal from its sampled version.
Case – III: fs > 2W, i.e., over-sampling. Consider the example of a
message signal that has been anti-alias (low-pass) filtered, resulting in the spectrum shown in
Figure 1.6(a). The corresponding spectrum of the instantaneously sampled version of the
signal is shown in Figure 1.6(b), assuming a sampling rate higher than the Nyquist rate.
From Figure 1.6(b), we readily see that the gaps between adjacent spectral replicas allow the
design of the reconstruction filter to be specified as shown in Figure 1.6(c):
Figure 1.6: (a) Anti-alias filtered spectrum of an information-bearing signal (b) Spectrum of
instantaneously sampled version of the signal, assuming the use of a sampling rate greater than
the Nyquist rate (c) magnitude response of reconstruction filter
Based on their nature, sampling techniques are classified as impulse sampling, natural
sampling and flat-top sampling.
Impulse Sampling: Impulse sampling is performed by multiplying the input signal g(t) by an
impulse train \sum_{n} \delta(t - nT_s) of period Ts. Here, the amplitude of each impulse changes with
the instantaneous value of g(t).
Natural Sampling: Let an arbitrary analog signal g(t) be applied to a switching circuit
(shown in Figure 1.7) controlled by a sampling function c(t) that consists of an infinite
succession of rectangular pulses of amplitude A and duration T, occurring with period Ts.
The output of the switching circuit is denoted s(t). The waveforms of g(t), c(t) and s(t) are
illustrated in parts (a), (b) and (c) of Figure 1.8, respectively. We see that the switching
operation merely extracts from the analog signal g(t) successive portions of predetermined
duration T, taken regularly at the rate fs = 1/Ts. Accordingly, the sampled signal consists of a
sequence of positive and negative pulses, as in Figure 1.8 (c). Mathematically, the sampled
signal s(t) is obtained by multiplying the sampling function (pulse train) c(t) and the input
signal g(t).
Figure 1.8: (a) Analog signal (b) Sampled function (c) Sampled signal
Flat-top Sampling: Consider next the situation where the analog signal g(t) is sampled
instantaneously at the rate fs = 1/Ts, and the duration of each sample is lengthened to T, as
illustrated in Figure 1.9 (c). (A practical reason for intentionally lengthening the duration of
each pulse is to reduce bandwidth.) A flat-top sampled signal can be generated by the
use of a sample-and-hold circuit. Using s(t) to denote the sequence of flat-top pulses generated
in this way, we may write

s(t) = \sum_{n=-\infty}^{\infty} g(nT_s)\, h(t - nT_s) \qquad (1.5)
where h(t) is a rectangular pulse of unit amplitude and duration T, as shown in Figure 1.9 (b).
Figure 1.9: (a) Instantaneously sampled signal (b) Rectangular Pulse (c) Flat top sampled signal
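The sample-and-hold operation of Eq. (1.5) can be sketched as follows; the signal and the values of Ts and T are illustrative assumptions, with h(t) realised implicitly as a unit pulse of width T.

```python
# Minimal sample-and-hold sketch of flat-top sampling, Eq. (1.5):
# each sample g(nTs) is held at a constant level for duration T < Ts.
import math

Ts, T = 0.1, 0.04            # sampling period and pulse duration (assumed)

def g(t):
    """Example analog signal: a 1 Hz sine wave."""
    return math.sin(2 * math.pi * t)

def flat_top(t):
    """s(t) = sum_n g(nTs) h(t - nTs), with h a unit pulse of width T."""
    n = math.floor(t / Ts)                 # most recent sampling instant
    return g(n * Ts) if (t - n * Ts) < T else 0.0

# Within each pulse the output stays flat at the sample value g(nTs);
# between pulses it is zero.
```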
1.7 Quantization
A continuous signal, such as voice, has a continuous range of amplitudes, and therefore its
samples have a continuous amplitude range. In other words, within the finite amplitude range
of the signal, we find an infinite number of amplitude levels. It is, in fact, not necessary to
transmit the exact amplitudes of the samples. Any human sense (the eye or the ear), as the
ultimate receiver, can detect only finite intensity differences. This means that the original
continuous signal may be approximated by a signal constructed of discrete amplitudes
selected on a minimum-error basis from an available set. The existence of a finite number of
discrete amplitude levels is a basic condition of pulse-code modulation. Clearly, if we assign
the discrete amplitude levels with sufficiently close spacing, we may make the approximated
signal practically indistinguishable from the original continuous signal.
Amplitude quantization is defined as the process of transforming the sample
amplitudes m(nTs) of a message signal m(t) at time t = nTs into a discrete amplitude v(nTs)
taken from a finite set of possible amplitudes. We assume that the quantization process is
memory-less and instantaneous, which means that the transformation at time t = nTs is not
affected by earlier or later samples of the message signal.
When dealing with a memory-less quantizer, we may simplify the notation by dropping the
time index. We may thus use the symbol m in place of m(nTs), as indicated in the block
diagram of a quantizer shown in Figure 1.10 (a). Then, as shown in Figure 1.10 (b), the signal
amplitude m is specified by the index k if it lies inside the partition cell

I_k : \{ m_k < m \le m_{k+1} \}, \quad k = 1, 2, \ldots, L \qquad (1.6)

where L is the total number of amplitude levels used in the quantizer. The discrete amplitudes
mk, k = 1, 2, …, L, at the quantizer input are called decision levels or decision thresholds. At
the quantizer output, the index k is transformed into an amplitude vk that represents all
amplitudes of the cell Ik. These discrete amplitudes vk, k = 1, 2, …, L, are called
representation levels or reconstruction levels, and the spacing between two adjacent
representation levels is called a quantum or step size. Thus, the quantizer output v equals vk if
the input signal m belongs to the interval Ik. The mapping (see Figure 1.10 (a)) v = g(m) is the
quantizer characteristic, which is a staircase function by definition.
Quantizers can be of uniform or non-uniform type. In a uniform quantizer, the
representation levels are uniformly spaced; otherwise, the quantizer is non-uniform. The
quantizer characteristic can also be of midtread or midrise type. Figure 1.11 (a) shows the
input-output characteristic of a uniform quantizer of the midtread type, which is so called
because the origin lies in the middle of a tread of the staircase-like graph. Figure 1.11 (b)
shows the corresponding input-output characteristic of a uniform quantizer of the midrise
type, in which the origin lies in the middle of a rising part of the staircase-like graph. Note that both
the midtread and midrise types of uniform quantizers illustrated in Figure 1.11 are symmetric
about the origin.
Figure 1.11: Two types of quantization: (a) mid-thread and (b) mid-rise
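The mapping from input amplitude to representation level can be sketched for a uniform midrise quantizer as follows; Vmax and the number of levels L are illustrative choices.

```python
# Sketch of a memoryless uniform midrise quantizer: the input amplitude m
# is mapped to the representation level (cell midpoint) of the partition
# cell it falls in. Vmax and L are illustrative values.
import math

Vmax, L = 1.0, 8                 # input range (-Vmax, Vmax), L levels
delta = 2 * Vmax / L             # step size (2*Vmax)/L

def quantize(m):
    """Return the midrise representation level for input amplitude m."""
    k = math.floor(m / delta)              # index of the partition cell
    k = max(-L // 2, min(L // 2 - 1, k))   # clip to avoid overload
    return (k + 0.5) * delta               # cell midpoint = v_k

v = quantize(0.3)        # falls in cell [0.25, 0.5) -> level 0.375
q = 0.3 - v              # quantization error, |q| <= delta/2
```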
The use of quantization introduces an error, defined as the difference between the input signal
m and the output signal v:

q = m - v \qquad (1.7)

or, correspondingly, in terms of random variables,

Q = M - V \qquad (1.8)

This error is called quantization noise. Figure 1.12 illustrates a typical variation of the
quantization error as a function of time, assuming the use of a uniform quantizer of the
midtread type.
With the input M having zero mean, and the quantizer assumed to be symmetric as in Figure
1.11, it follows that the quantizer output V, and therefore the quantization error Q, will also
have zero mean. Thus, for a statistical characterization of the quantizer in terms of its
output signal-to-(quantization-)noise ratio, we need only find the mean-square value of the
quantization error Q.
Consider then an input m of continuous amplitude in the range (–Vmax, Vmax).
Assuming a uniform quantizer of the midrise type illustrated in Figure 1.11 (b), we find that
the step size of the quantizer is given by

\Delta = \frac{2 V_{\max}}{L} \qquad (1.9)
where L is the total number of representation levels. For a uniform quantizer, the quantization
error Q will have its samples values bounded by –Δ/2 ≤ q ≤ Δ/2. If the step size is sufficiently
small (i.e., the number of representation levels L is sufficiently large), it is reasonable to
assume that the quantization error Q is a uniformly distributed random variable, and the
interfering effect of the quantization noise on the quantizer input is similar to that of thermal
noise. We may thus express the probability density function of the quantization error Q as
follows:
f_Q(q) = \begin{cases} \dfrac{1}{\Delta}, & -\dfrac{\Delta}{2} \le q \le \dfrac{\Delta}{2} \\ 0, & \text{otherwise} \end{cases} \qquad (1.10)
For this to be true, we must ensure that the incoming signal does not overload the quantizer.
Figure 1.13: Probability density function of Quantization error Q
Then, with the mean of the quantization error being zero, its variance \sigma_Q^2 is the same as the
mean-square value:

\sigma_Q^2 = E[Q^2] = \int_{-\Delta/2}^{\Delta/2} q^2 f_Q(q)\, dq \qquad (1.11)

(Note: the power of a random variable x equals its mean-square value E[x^2]. Usually, the
noise n has zero mean \mu and variance \sigma_n^2, where \sigma_n^2 = E[(n - \mu)^2] = E[n^2]; so the
noise power equals its variance.) Substituting Eq. (1.10) into Eq. (1.11), we get

\sigma_Q^2 = \frac{1}{\Delta} \int_{-\Delta/2}^{\Delta/2} q^2\, dq = \frac{\Delta^2}{12} \qquad (1.12)
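The Δ²/12 result of Eq. (1.12) can be checked numerically by sweeping a uniform quantizer with a dense ramp of inputs and averaging the squared error; the parameter values below are illustrative.

```python
# Numerical check of Eq. (1.12): for a small step size, the quantization-
# error variance approaches delta^2 / 12. A deterministic ramp of inputs
# stands in for a uniformly distributed input.
import math

Vmax, L = 1.0, 256
delta = 2 * Vmax / L

def quantize(m):
    """Uniform midrise quantizer over (-Vmax, Vmax)."""
    k = max(-L // 2, min(L // 2 - 1, math.floor(m / delta)))
    return (k + 0.5) * delta

# Sweep the full input range and average the squared error q = m - v.
N = 100000
ms = [-Vmax + 2 * Vmax * (i + 0.5) / N for i in range(N)]
var_q = sum((m - quantize(m)) ** 2 for m in ms) / N

# var_q should be close to the theoretical value delta**2 / 12
```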
Let n denote the number of bits per sample used in the construction of the binary code. Then
the number of quantization levels is L = 2^n, or equivalently,

n = \log_2 L \qquad (1.13)

Hence the step size becomes

\Delta = \frac{2 V_{\max}}{2^n} \qquad (1.14)

and the quantization-noise variance is

\sigma_Q^2 = \frac{1}{3} V_{\max}^2\, 2^{-2n} \qquad (1.15)
Let P denote the average power of the message signal m(t). We may then express the output
signal-to-noise ratio of a uniform quantizer as

(SNR)_O = \frac{P}{\sigma_Q^2} = \frac{3P}{V_{\max}^2}\, 2^{2n} \qquad (1.16)
Eq. (1.16) shows that the output signal to noise ratio of the quantizer increases exponentially
with increasing number of bits per sample, n. Recognizing that an increase in n requires a
proportionate increase in the channel (transmission) bandwidth BT , we thus see that use of a
binary code for the representation of a message signal (as in pulse code modulation) provides
a more efficient method than either frequency modulation (FM) or pulse position modulation
(PPM) for the trade-off of increased channel bandwidth for improved noise performance. In
making this statement, we presume that the FM and PPM systems are limited by receiver
noise, whereas the binary-coded modulation system is limited by quantization noise.
Consider the special case of a full-load sinusoidal modulating signal of amplitude Am,
which utilizes all the representation levels provided. The average signal power is (assuming a
load R of 1 ohm)

P = \frac{A_{\mathrm{rms}}^2}{R} = \frac{(A_m/\sqrt{2})^2}{1} = \frac{A_m^2}{2} \qquad (1.17)
The total range of the quantizer input is 2 Am, because the modulating signal swings between
– Am and Am. We may therefore set Vmax = Am, in which case the use of Eq. (1.15) yields the
average power (variance) of the quantization noise as

\sigma_Q^2 = \frac{1}{3} A_m^2\, 2^{-2n} \qquad (1.18)
The output signal-to-noise ratio of a uniform quantizer, for a full-load test tone, is therefore

(SNR)_O = \frac{A_m^2/2}{(1/3)\, A_m^2\, 2^{-2n}} = \frac{3}{2}\, 2^{2n} \qquad (1.19)
Figure 1.15: (a) Non-uniform quantizer characteristic (b) Compression characteristic (c) Uniform
quantizer characteristic
The µ-law compression characteristic is defined as

y = y_{\max} \frac{\ln\left(1 + \mu\, |x|/x_{\max}\right)}{\ln(1 + \mu)}, \quad 0 \le \frac{|x|}{x_{\max}} \le 1 \qquad (1.21)
where µ is a positive constant, x and y represent the input and output voltages, and xmax and ymax
are the maximum positive excursions of the input and output voltages, respectively. The
compression characteristic is shown in Figure 1.16 (a) for several values of µ. In North
America, the standard value for µ is 255. Notice that µ = 0 corresponds to uniform
quantization. Similarly, the A-law characteristic is defined as
y = \begin{cases} y_{\max} \dfrac{A\, |x|/x_{\max}}{1 + \ln A}, & 0 \le \dfrac{|x|}{x_{\max}} \le \dfrac{1}{A} \\[2ex] y_{\max} \dfrac{1 + \ln\left(A\, |x|/x_{\max}\right)}{1 + \ln A}, & \dfrac{1}{A} \le \dfrac{|x|}{x_{\max}} \le 1 \end{cases} \qquad (1.22)
where A is a positive constant and x and y are as defined in Eq. (1.21). The A-law
compression characteristic is shown in Figure 1.16 (b) for several values of A. A standard
value for A is 87.6.
Figure 1.16: Compression characteristics (a) µ-law characteristics (b) A-law characteristics
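The µ-law characteristic of Eq. (1.21), together with its inverse (the expander), can be sketched as follows, assuming normalized voltages (xmax = ymax = 1) and the North American standard µ = 255; this illustrates the compression characteristic only, not a full codec.

```python
# Sketch of the mu-law compressor of Eq. (1.21) and its inverse expander,
# with mu = 255 and normalized voltages (xmax = ymax = 1 assumed).
import math

MU = 255.0

def mu_compress(x):
    """y = sign(x) * ln(1 + mu*|x|) / ln(1 + mu), for |x| <= 1."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_expand(y):
    """Inverse mapping: x = sign(y) * ((1 + mu)^|y| - 1) / mu."""
    return math.copysign(((1 + MU) ** abs(y) - 1) / MU, y)

# Weak signals are boosted before uniform quantization, strong signals
# compressed; mu = 0 would reduce this to the identity (uniform) mapping.
y = mu_compress(0.01)        # ~0.23: weak input occupies more of the range
```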