Visit : www.EasyEngineeering.net
Prediction filtering and DPCM - Delta Modulation - ADPCM & ADM principles - Linear Predictive Coding.
Properties of line codes - Power spectral density of unipolar / polar RZ & NRZ - Bipolar NRZ - Manchester - ISI - Nyquist criterion for distortionless transmission - Pulse shaping - Correlative coding - M-ary schemes - Eye pattern - Equalization
Geometric representation of signals - Generation, detection, PSD & BER of coherent BPSK, BFSK & QPSK - QAM - Carrier synchronization - Structure of non-coherent receivers - Principle of DPSK.
Channel coding theorem - Linear block codes - Hamming codes - Cyclic codes - Convolutional codes - Viterbi decoder
TOTAL: 45 PERIODS
TEXT BOOK:
1. S. Haykin, "Digital Communications", John Wiley, 2005.
REFERENCES:
1. B. Sklar, "Digital Communication: Fundamentals and Applications", 2nd Edition, Pearson Education, 2009.
2. B. P. Lathi, "Modern Digital and Analog Communication Systems", 3rd Edition, Oxford University Press, 2007.
3. H. P. Hsu, Schaum's Outline Series, "Analog and Digital Communications", TMH, 2006.
4. J. G. Proakis, "Digital Communications", 4th Edition, Tata McGraw-Hill, 2001.
SAMPLING:
A message signal may originate from a digital or an analog source. If the message signal is analog in nature, it has to be converted into digital form before it can be transmitted by digital means. The process by which a continuous-time signal is converted into a discrete-time signal is called sampling. The sampling operation is performed in accordance with the sampling theorem.
Statement: Part I - If a signal x(t) does not contain any frequency component beyond W Hz, then the signal is completely described by its instantaneous uniform samples with sampling interval (or period) Ts < 1/(2W) sec.
Part II - The signal x(t) can be accurately reconstructed (recovered) from the set of uniform instantaneous samples by passing the samples sequentially through an ideal (brick-wall) lowpass filter with bandwidth B, where W ≤ B < fs – W and fs = 1/Ts.
The instantaneously sampled version of x(t) may be written as
xs(t) = x(t) . Σ δ(t – nTs) = Σ x(nTs) δ(t – nTs)  (sum over all integer n) ----- 1.2
where x(nTs) = x(t)⎢t = nTs, δ(t) is a unit impulse (singularity function) and n is an integer. That is, the continuous-time signal x(t) is multiplied by an (ideal) impulse train to obtain the sample set {x(nTs)}.
Now, let X(f) denote the Fourier transform of x(t), i.e. X(f) = F{x(t)}.
From the theory of the Fourier transform, the transform of the impulse train Σ δ(t – nTs) is
F{Σ δ(t – nTs)} = (1/Ts) . Σ δ(f – n/Ts) = fs . Σ δ(f – nfs) ----- 1.3
If Xs(f) denotes the Fourier transform of the energy signal xs(t), we can write, using Eq. 1.2, Eq. 1.3 and the convolution property (multiplication in time corresponds to convolution in frequency),
Xs(f) = X(f) * [fs . Σ δ(f – nfs)] = fs . Σ X(f – nfs) ----- 1.4
Eq. 1.4 makes it easy to interpret Parts I and II of the sampling theorem as stated above and also helps to appreciate their practical implications. Note that while writing Eq. 1.4 we assumed that x(t) is an energy signal, so that its Fourier transform exists. With this setting, if x(t) has no appreciable frequency component above W Hz and if fs > 2W, then Eq. 1.4 implies that Xs(f), the Fourier transform of the sampled signal xs(t), consists of an infinite number of replicas of X(f), centered at the discrete frequencies n·fs, –∞ < n < ∞, and scaled by the constant fs = 1/Ts.
Fig. 1.2.1 indicates that the bandwidth of the instantaneously sampled wave xs(t) is infinite, while the spectrum of x(t) appears within xs(t) in a periodic manner, centered at the discrete frequency values n·fs. Part I of the sampling theorem concerns the condition fs > 2W, i.e. (fs – W) > W and (–fs + W) < –W. As seen from Fig. 1.2.1, when this condition is satisfied, the replicas of X(f) centered at f = 0 and f = ±fs do not overlap, and hence the spectrum of x(t) is present in xs(t) without any distortion. This implies that xs(t), the appropriately sampled version of x(t), contains all the information in x(t) and thus represents x(t).
The second part suggests a method of recovering x(t) from its sampled version xs(t) by using an ideal lowpass filter. As indicated by the dotted lines in Fig. 1.2.1, an ideal lowpass filter (with a brick-wall type response) of bandwidth W ≤ B < (fs – W), when fed with xs(t), will pass the portion of Xs(f) centered at f = 0 and reject all its replicas at f = n·fs for n ≠ 0. This implies that the shape of the continuous-time signal x(t) is retained at the output of the ideal filter.
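Part II of the theorem can be checked numerically. The sketch below uses assumed example values (a 50 Hz tone sampled at fs = 400 Hz > 2W) and reconstructs the signal between sample instants by sinc interpolation, which is exactly the ideal brick-wall lowpass filtering described above; the helper name `reconstruct` is illustrative, not from any library.

```python
import numpy as np

# Assumed parameters: sample a 50 Hz tone at fs = 400 Hz (> 2W) and
# reconstruct it by ideal sinc interpolation (Part II of the theorem).
W = 50.0          # highest frequency in x(t), Hz
fs = 400.0        # sampling rate, satisfies fs > 2W
Ts = 1.0 / fs

n = np.arange(0, 64)                        # sample indices
samples = np.sin(2 * np.pi * W * n * Ts)    # x(nTs)

def reconstruct(t, samples, Ts):
    """Ideal reconstruction: x(t) = sum_n x(nTs) * sinc((t - nTs)/Ts)."""
    k = np.arange(len(samples))
    return np.sum(samples * np.sinc((t - k * Ts) / Ts))

# Evaluate between two sample instants, away from the edges where the
# finite sum truncates the ideal (infinite) interpolation.
t = 32.5 * Ts
x_true = np.sin(2 * np.pi * W * t)
x_hat = reconstruct(t, samples, Ts)
print(abs(x_true - x_hat))  # small: the reconstruction is nearly exact
```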
Quantization:
The process of transforming the sampled amplitude values of a message signal into a discrete set of amplitude values is referred to as quantization.
A quantizer is memoryless in that the quantizer output is determined only by the value of the corresponding input sample, independently of earlier analog samples applied to the input.
Types of Quantizers:
1. Uniform Quantizer
2. Non- Uniform Quantizer
Uniform Quantizer: In the uniform type, the quantization levels are uniformly spaced, whereas in the non-uniform type the spacing between the levels is unequal and the relation is usually logarithmic.
In the staircase-like transfer characteristic, the origin lies in the middle of a tread portion in the mid-tread type, whereas the origin lies in the middle of a rise portion in the mid-rise type.
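The two characteristics can be sketched in a few lines. This is a minimal illustration with an assumed step size Δ = 0.25; the function names `mid_tread` and `mid_rise` are ours, not from any library.

```python
import numpy as np

# Uniform mid-tread and mid-rise quantizers with step size delta.

def mid_tread(x, delta):
    # Origin at the middle of a tread: input 0 maps to output level 0.
    return delta * np.round(x / delta)

def mid_rise(x, delta):
    # Origin at the middle of a riser: input 0 maps to level +delta/2.
    return delta * (np.floor(x / delta) + 0.5)

x = np.array([-0.30, -0.10, 0.0, 0.10, 0.30])
print(mid_tread(x, 0.25))  # output levels are integer multiples of delta
print(mid_rise(x, 0.25))   # output levels are odd multiples of delta/2
```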
"The quantization process introduces an error defined as the difference between the input signal x(t) and the output signal y(t). This error is called quantization noise."
Quantization noise is produced at the transmitter end of a PCM system by rounding off sample values of an analog baseband signal to the nearest permissible representation levels of the quantizer. As such, quantization noise differs from channel noise in that it is signal-dependent.
Let Δ be the step size of the quantizer and L be the total number of quantization levels. The quantization levels are 0, ±Δ, ±2Δ, ±3Δ, . . . The quantization error Q is a random variable whose sample values are bounded by –Δ/2 < q < Δ/2. If Δ is small, the quantization error can be assumed to be a uniformly distributed random variable.
Let x be the quantizer input and y the quantizer output. Then
y = Q(x),
which is a staircase function of the mid-tread or mid-rise type, whichever is of interest.
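The uniform-error assumption leads to the well-known noise variance Δ²/12, which a quick simulation can confirm (assumed values: Δ = 0.01, input uniform on (–1, 1)):

```python
import numpy as np

# Check that for a small step size the quantization error is roughly
# uniform on (-delta/2, delta/2), so its variance is delta**2 / 12.
rng = np.random.default_rng(0)
delta = 0.01
x = rng.uniform(-1.0, 1.0, 200_000)     # input samples
q = delta * np.round(x / delta)         # mid-tread uniform quantizer
err = q - x                             # quantization error
print(err.var(), delta**2 / 12)         # the two agree closely
```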
In a non-uniform quantizer the step size varies. The use of a non-uniform quantizer is equivalent to passing the baseband signal through a compressor and then applying the compressed signal to a uniform quantizer. The resultant signal is then transmitted.
Advantages of non-uniform quantization:
1. Higher average signal-to-quantization-noise power ratio than the uniform quantizer when the signal pdf is non-uniform, which is the case in many practical situations.
2. The RMS value of the quantizer noise of a non-uniform quantizer is substantially proportional to the sampled value, and hence the effect of the quantizer noise is reduced.
Each quantized signal level q(nTs) is then binary encoded. The encoder converts the input signal into a v-digit binary word. The receiver starts by reshaping the received pulses, removing the noise, and then converts the binary bits back to analog form. The received samples are then filtered by a lowpass filter with cut-off frequency fc = fm.
Let the quantizer use v binary digits to represent each level. Then the number of levels that can be represented by v digits is q = 2^v.
The number of bits per second is given by: (number of bits per second) = (number of bits per sample) × (number of samples per second) = v (bits per sample) × fs (samples per second).
The number of bits per second is also called the signaling rate of PCM:
signaling rate = v·fs
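As a worked example (assumed values: a 4 kHz message sampled at the Nyquist rate with v = 8 bits per sample):

```python
# PCM design numbers for a signal bandlimited to fm = 4 kHz, sampled at
# the Nyquist rate and quantized with v = 8 bits per sample.
fm = 4000          # message bandwidth, Hz
fs = 2 * fm        # sampling rate (Nyquist rate), samples/s
v = 8              # bits per sample
q = 2 ** v         # number of quantization levels, q = 2^v
rate = v * fs      # signaling rate, bits/s
print(q, rate)     # 256 levels, 64000 bit/s
```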
Errors are introduced in the signal because of the quantization process. This error is called the quantization error:
ε = xq(nTs) – x(nTs)
Let the input signal x(nTs) have an amplitude in the range –xmax to xmax, so the total amplitude range is xmax – (–xmax) = 2xmax. If this amplitude range is divided into q quantization levels, then the step size is Δ = 2xmax/q. If the minimum and maximum values are –1 and 1 (xmax = 1, –xmax = –1), then Δ = 2/q.
Let the normalized signal power be P. Then the signal-to-quantization-noise ratio is
S/N = P/(Δ²/12) = 3q²P = 3·2^(2v)·P  (for xmax = 1).
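For a full-scale sinusoid (P = 1/2) this gives the familiar rule (S/N) in dB ≈ 6.02v + 1.76. A quick numerical check, with v = 8 assumed:

```python
import numpy as np

# Empirically check S/N = 3 * 2**(2*v) * P for a full-scale sinusoid
# (P = 1/2), which gives the 6.02*v + 1.76 dB rule.
v = 8
q = 2 ** v
delta = 2.0 / q                        # step size with xmax = 1
t = np.arange(100_000)
x = np.sin(2 * np.pi * 0.01234 * t)    # normalized sinusoid, P ~ 0.5
xq = delta * np.round(x / delta)       # uniform mid-tread quantizer
noise = xq - x
snr_db = 10 * np.log10(x.var() / noise.var())
print(snr_db)                          # close to 6.02*8 + 1.76 = 49.9 dB
```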
Advantages of PCM
1. Effect of noise is reduced.
2. PCM permits the use of pulse regeneration.
3. Multiplexing of various PCM signals is possible.
LINEAR PREDICTION:
Linear prediction is a mathematical operation where future values of a discrete time signal
are estimated as a linear function of previous samples.
In digital signal processing, linear prediction is often called linear predictive coding (LPC)
and can thus be viewed as a subset of filter theory.
Filter design is the process of designing a signal processing filter that satisfies a set of
requirements, some of which are contradictory. The purpose is to find a realization of the
filter that meets each of the requirements to a sufficient degree to make it useful.
The filter design process can be described as an optimization problem where each
requirement contributes to an error function which should be minimized. Certain parts of the
design process can be automated, but normally an experienced electrical engineer is needed
to get a good result.
Optimization is the selection of a best element (with regard to some criterion) from some set of available alternatives.
LPC starts with the assumption that a speech signal is produced by a buzzer at the end of a tube (voiced sounds), with occasional added hissing and popping sounds. Although apparently crude, this model is actually a close approximation of the reality of speech production. The glottis (the space between the vocal folds) produces the buzz, which is characterized by its intensity (loudness) and frequency (pitch). The vocal tract (the throat and mouth) forms the tube, which is characterized by its resonances; these give rise to formants, or enhanced frequency bands, in the sound produced. Hisses and pops are generated by the action of the tongue, lips and throat during sibilants and plosives.
LPC analyzes the speech signal by estimating the formants, removing their effects from the
speech signal, and estimating the intensity and frequency of the remaining buzz. The process
of removing the formants is called inverse filtering, and the remaining signal after the
subtraction of the filtered modeled signal is called the residue.
The numbers which describe the intensity and frequency of the buzz, the formants, and the
residue signal, can be stored or transmitted somewhere else. LPC synthesizes the speech
signal by reversing the process: use the buzz parameters and the residue to create a source
signal, use the formants to create a filter (which represents the tube), and run the source
through the filter, resulting in speech.
Because speech signals vary with time, this process is done on short chunks of the speech
signal, which are called frames; generally 30 to 50 frames per second give intelligible speech
with good compression.
It is one of the most powerful speech analysis techniques, and one of the most useful
methods for encoding good quality speech at a low bit rate and provides extremely
accurate estimates of speech parameters.
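The core of linear prediction, estimating each sample as a weighted sum of the previous p samples, can be sketched with a least-squares fit. The second-order coefficients and noise level below are assumed example values:

```python
import numpy as np

# Linear prediction in its basic form: predict x[n] from the previous
# p samples, with weights chosen by least squares (normal equations).
rng = np.random.default_rng(1)
# Synthesize a signal from a known 2nd-order recursion plus small noise.
a_true = np.array([1.5, -0.7])
x = np.zeros(5000)
for n in range(2, len(x)):
    x[n] = a_true[0] * x[n-1] + a_true[1] * x[n-2] \
           + 0.01 * rng.standard_normal()

p = 2
# Prediction matrix: row for time n holds [x[n-1], x[n-2]].
X = np.column_stack([x[p - 1 - k : len(x) - 1 - k] for k in range(p)])
a_hat, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
print(a_hat)   # close to the true coefficients [1.5, -0.7]
```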
The prediction error is
e(nTs) = x(nTs) – x^(nTs),
where x^(nTs) is the prediction for the unquantized sample x(nTs). This predicted value is produced by a predictor whose input consists of a quantized version of the input signal x(nTs). By encoding the quantizer output in this method, we obtain a modified version of PCM called differential pulse code modulation (DPCM).
Quantizer output,
v(nTs) = Q[e(nTs)]
= e(nTs) + q(nTs) ---- (3.32)
Predictor input is the sum of quantizer output and predictor output,
u(nTs) = x^(nTs) + v(nTs) ---- (3.33)
Substituting Eq. (3.32) into Eq. (3.33),
u(nTs) = x^(nTs) + e(nTs) + q(nTs) ----(3.34)
Since e(nTs) = x(nTs) – x^(nTs), this reduces to
u(nTs) = x(nTs) + q(nTs) ----(3.35)
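Equations (3.32) to (3.35) can be exercised with a minimal first-order DPCM loop. The predictor here is the simplest assumed choice (predict the current sample as the previous reconstructed sample), and the step size is also an assumed value:

```python
import numpy as np

# First-order DPCM encoder: predictor = previous reconstructed sample.
def quantize(e, delta):
    return delta * np.round(e / delta)

def dpcm_encode(x, delta):
    v = np.zeros_like(x)      # quantized prediction errors (transmitted)
    u = np.zeros_like(x)      # reconstructed samples (predictor memory)
    pred = 0.0
    for n in range(len(x)):
        e = x[n] - pred               # prediction error
        v[n] = quantize(e, delta)     # v = e + q        (Eq. 3.32)
        u[n] = pred + v[n]            # u = x_hat + v    (Eq. 3.33)
        pred = u[n]                   # next prediction is u(nTs)
    return v, u

t = np.arange(200)
x = np.sin(2 * np.pi * t / 50)
v, u = dpcm_encode(x, delta=0.02)
# Eq. 3.35: u(nTs) = x(nTs) + q(nTs), so |u - x| <= delta/2 everywhere.
print(np.max(np.abs(u - x)))
```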
The receiver consists of a decoder to reconstruct the quantized error signal. The quantized version of the original input is reconstructed from the decoder output using the same predictor as used in the transmitter. In the absence of noise, the encoded signal at the receiver input is identical to the encoded signal at the transmitter output. Correspondingly, the receiver output is equal to u(nTs), which differs from the input x(nTs) only by the quantization error q(nTs).
DM provides a staircase approximation to an oversampled version of the input baseband signal. The difference between the input and the approximation is quantized into only two levels, namely ±δ, corresponding to positive and negative differences, respectively. Thus, if the approximation falls below the signal at any sampling epoch, it is increased by δ; if the approximation lies above the signal, it is decreased by δ. Provided that the signal does not change too rapidly from sample to sample, the staircase approximation remains within ±δ of the input signal. The symbol δ denotes the absolute value of the two representation levels of the one-bit quantizer used in DM.
Let the input signal be x(t) and the staircase approximation to it be u(t).
In the receiver, the staircase approximation u(t) is reconstructed by passing the incoming sequence of positive and negative pulses through an accumulator in a manner similar to that used in the transmitter. The out-of-band quantization noise in the high-frequency staircase waveform u(t) is rejected by passing it through a lowpass filter with a bandwidth equal to the original signal bandwidth. Delta modulation offers two unique features:
1. No need for word framing, because of the one-bit codeword.
2. Simple design for both transmitter and receiver.
Disadvantages of DM: Delta modulation systems are subject to two types of quantization error: (1) slope-overload distortion, and (2) granular noise.
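A minimal delta modulator makes both mechanisms easy to explore. The step δ and the test tone below are assumed values chosen so the slope condition (maximum input slope per sample not exceeding δ) is met:

```python
import numpy as np

# Basic delta modulator: the staircase moves by +delta or -delta each
# sample depending on the sign of the error. If the input slope exceeds
# delta per sample, the staircase cannot keep up (slope overload); if
# the input is nearly flat, u(t) hunts around it (granular noise).
def delta_modulate(x, delta):
    u = np.zeros(len(x))                        # staircase approximation
    bits = np.zeros(len(x), dtype=int)          # transmitted one-bit code
    approx = 0.0
    for n in range(len(x)):
        bits[n] = 1 if x[n] >= approx else 0    # one-bit quantizer
        approx += delta if bits[n] else -delta  # accumulator
        u[n] = approx
    return bits, u

t = np.arange(500)
x = np.sin(2 * np.pi * t / 250)   # max slope per sample ~ 0.025 < delta
bits, u = delta_modulate(x, delta=0.05)
print(np.max(np.abs(u - x)))      # small when the slope condition holds
```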
PROPERTIES OF LINE CODES
DC component: Eliminating the DC energy from the signal's power spectrum enables the transmitter to be AC-coupled. Magnetic recording systems, or systems using transformer coupling, are less sensitive to low-frequency signal components, and low-frequency components may be lost if the presence of DC or near-DC spectral content in the code itself is significant.
Self-synchronization: Any digital communication system requires bit synchronization, and a coherent detector additionally requires carrier synchronization. For example, the Manchester code has a transition at the middle of every bit interval irrespective of whether a 1 or a 0 is being sent; this guarantees that the transmitted waveform provides a clocking signal at the bit level.
Error detection: Some codes, such as duobinary, provide a means of detecting data errors without introducing additional error-detection bits into the data sequence.
Bandwidth compression: Some codes, such as multilevel codes, increase the efficiency of bandwidth utilization by allowing a reduction in the bandwidth required for a given data rate, so that more information is transmitted per unit bandwidth.
DIFFERENTIAL ENCODING
This technique is useful because it allows the polarity of a differentially encoded waveform to be inverted without affecting the data detection, which matters in communication systems where the waveform may experience an unintended polarity inversion.
NOISE IMMUNITY
For the same transmitted energy, some codes produce fewer bit detection errors than others in the presence of noise. For example, the NRZ waveforms have better noise performance than the RZ type.
TRANSPARENCY
A line code should be designed so that the receiver does not go out of synchronization for any sequence of data symbols. A code is not transparent if, for some sequence of symbols, the clock is lost.
Another advantage is getting rid of DC shifts. The DC component in a line code is called the bias or the DC coefficient. Unfortunately, most long-distance communication channels cannot transport a DC component, which is why most line codes try to eliminate the DC component before transmission on the channel. Such codes are called DC-balanced, zero-DC, zero-bias, or DC-equalized. Some types of line encoding in common use nowadays are unipolar, polar, bipolar, Manchester, MLT-3 and duobinary encoding. These codes are explained here:
1. Unipolar (Unipolar NRZ and Unipolar RZ):
Unipolar is the simplest line coding scheme possible. It has the advantage of being compatible with TTL logic. Unipolar coding uses a positive rectangular pulse p(t) to represent binary 1, and the absence of a pulse (i.e., zero voltage) to represent binary 0. Two possibilities for the pulse p(t) exist: a Non-Return-to-Zero (NRZ) rectangular pulse and a Return-to-Zero (RZ) rectangular pulse. The difference between unipolar NRZ and unipolar RZ codes is that the rectangular pulse in NRZ stays at a positive value (e.g., +5 V) for the full duration of the logic 1 bit, while the pulse in RZ drops from +5 V to 0 V in the middle of the bit time. A drawback of unipolar (RZ and NRZ) is that its average value is not zero, which means it creates a significant DC component at the receiver (see the impulse at zero frequency in the corresponding power spectral density (PSD) of this line code).
2. Polar (Polar NRZ and Polar RZ): In polar coding, binary 1 is represented by a pulse p(t) and binary 0 by the negative pulse –p(t). It can be shown that polar signals have more power than unipolar signals, and hence better SNR at the receiver. In fact, polar NRZ signals have more power than polar RZ signals. The drawback of polar NRZ, however, is that it lacks clock information, especially when a long sequence of 0s or 1s is transmitted.
Non-Return-to-Zero, Inverted (NRZI): NRZI is a variant of Polar NRZ. In NRZI there are
two possible pulses, p(t) and –p(t). A transition from one pulse to the other happens if the bit
being transmitted is logic 1, and no transition happens if the bit being transmitted is a logic 0.
This is the code used on compact discs (CD), USB ports, and on fiber-based Fast Ethernet at
100-Mbit/s.
Manchester encoding:
In Manchester code each bit of data is signified by at least one transition. Manchester
encoding is therefore considered to be self-clocking, which means that accurate clock
recovery from a data stream is possible. In addition, the DC component of the encoded signal
is zero. Although transitions allow the signal to be self-clocking, it carries significant
overhead as there is a need for essentially twice the bandwidth of a simple NRZ or NRZI
encoding.
• In the unipolar format, most of the signal power is centered around the origin, and power is wasted due to the DC component that is present.
• In the polar format, most of the signal power is centered around the origin, and it is simple to implement.
• The bipolar format does not have a DC component and does not demand more bandwidth, but its power requirement is double that of the other formats.
• The Manchester format does not have a DC component and provides proper clocking.
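The formats compared above can be generated directly. The amplitudes and the Manchester polarity convention below are assumed choices:

```python
import numpy as np

# Generate unipolar NRZ, polar NRZ and Manchester waveforms at
# 2 samples per bit (enough to show the mid-bit Manchester transition).
A = 1.0  # assumed pulse amplitude

def unipolar_nrz(bits):
    return np.repeat([A if b else 0.0 for b in bits], 2)

def polar_nrz(bits):
    return np.repeat([A if b else -A for b in bits], 2)

def manchester(bits):
    # Convention assumed here: 1 -> high-to-low, 0 -> low-to-high.
    out = []
    for b in bits:
        out += [A, -A] if b else [-A, A]
    return np.array(out)

bits = [1, 0, 1, 1]
print(unipolar_nrz(bits))   # nonzero mean: the DC component
print(polar_nrz(bits))
print(manchester(bits))     # zero mean: no DC component
```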
Nyquist proposed a condition for pulses p(t) to have zero ISI when transmitted through a channel with sufficient bandwidth to allow the spectrum of the transmitted signal to pass. Nyquist proposed that a zero-ISI pulse p(t) must satisfy the condition
p(nTb) = 1 for n = 0, and p(nTb) = 0 for n ≠ 0.
A pulse that satisfies the above condition at multiples of the bit period Tb will result in zero ISI if the whole spectrum of that signal is received. The reason these zero-ISI pulses (also called Nyquist-criterion pulses) cause no ISI is that each pulse equals 1 at the sampling instant at the center of its own bit and equals zero at the instants where the other pulses are centered.
In fact, there are many pulses that satisfy these conditions. For example, any square pulse that occupies the time period –Tb to Tb, or any part of it (it must be zero at –Tb and Tb), will satisfy the above condition. Also, any triangular waveform (Δ function) with a width less than 2Tb will satisfy the condition. A sinc function with zeros at t = Tb, 2Tb, 3Tb, … will also satisfy the condition. The problem with the sinc function is that it extends over a very long period of time, resulting in a lot of processing to generate it. The square pulse requires a lot of bandwidth to be transmitted. The triangular pulse is restricted in time but has a relatively large bandwidth.
There is a set of pulses known as raised-cosine pulses that satisfy the Nyquist criterion and require only slightly more bandwidth than the sinc pulse (which requires the minimum possible bandwidth).
where ωb is the bit frequency in rad/s (ωb = 2π/Tb), and ωx is called the excess bandwidth; it defines how much bandwidth is required above the minimum bandwidth needed when using a sinc pulse. The excess bandwidth ωx for this type of pulse is restricted to the range 0 ≤ ωx ≤ ωb/2.
We can easily verify that when ωx = 0 the above spectrum becomes a rect function, and therefore the pulse p(t) becomes the usual sinc function. For ωx = ωb/2, the spectrum is similar to a sinc function but decays (drops to zero) much faster than the sinc, extending over only 2 or 3 bit periods on each side. The price for having a pulse that is short in time is that it requires a larger bandwidth than the sinc function (twice as much for ωx = ωb/2). Sketches of the pulses and their spectra for the two extreme cases ωx = ωb/2 and ωx = 0 are shown below.
The roll-off factor r specifies the ratio of the extra bandwidth required for these pulses to the minimum bandwidth required by the sinc function.
The main objective is to study the effect of ISI when digital data is transmitted through a band-limited channel, and the solution for overcoming the degradation of the waveform by properly shaping the pulse.
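A raised-cosine pulse with roll-off r can be evaluated from its standard time-domain closed form (assumed here; used for 0 < r ≤ 1) and checked against the zero-ISI condition at the bit instants:

```python
import numpy as np

# Time-domain raised-cosine pulse with roll-off factor r (r = 0 gives
# the sinc pulse; r = 1 the fastest-decaying pulse with twice the
# minimum bandwidth). np.sinc is the normalized sinc sin(pi x)/(pi x).
def raised_cosine(t, Tb, r):
    t = np.asarray(t, dtype=float)
    sinc_part = np.sinc(t / Tb)
    denom = 1.0 - (2.0 * r * t / Tb) ** 2
    # Guard the removable singularity at |t| = Tb/(2r); the limit there
    # is (pi/4) * sinc(1/(2r)).
    denom = np.where(np.abs(denom) < 1e-12, np.nan, denom)
    p = sinc_part * np.cos(np.pi * r * t / Tb) / denom
    return np.nan_to_num(p, nan=(np.pi / 4) * np.sinc(1.0 / (2.0 * r)))

Tb = 1.0
t = np.arange(-4, 5) * Tb            # the bit-rate sampling instants
print(raised_cosine(t, Tb, r=0.5))   # 1 at t = 0, 0 at t = n*Tb (n != 0)
```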
The effect of a sequence of pulses transmitted through the channel is shown in the figure. The spreading of each pulse is greater than the symbol duration; as a result, adjacent pulses interfere. That is, the pulses get smeared, and the tail of a smeared pulse enters adjacent symbol intervals, making it difficult to decide which pulse was actually transmitted. First, let us look at the different formats for transmitting digital data. In baseband transmission, the best way is to map digits or symbols into pulse waveforms; such a waveform is generally termed a line code.
EYE PATTERN
The quality of digital transmission systems is evaluated using the bit error rate. Degradation of quality occurs in each process: modulation, transmission, and detection. The eye pattern is an experimental method that contains all the information concerning the degradation of quality; therefore, careful analysis of the eye pattern is important in analyzing the degradation mechanism.
• Eye patterns can be observed using an oscilloscope. The received wave is applied to the vertical deflection plates of the oscilloscope, and a sawtooth wave at a rate equal to the transmitted symbol rate is applied to the horizontal deflection plates; the resulting display is called an eye pattern, as it resembles a human eye.
• The interior region of the eye pattern is called the eye opening.
We get a superposition of successive symbol intervals to produce the eye pattern, as shown below.
• The width of the eye opening defines the time interval over which the received wave can be sampled without error from ISI.
• The optimum sampling time corresponds to the maximum eye opening.
• The height of the eye opening at a specified sampling time is a measure of the margin over channel noise.
The sensitivity of the system to timing error is determined by the rate of closure of the eye as the sampling time is varied. Any nonlinear transmission distortion would reveal itself in an asymmetric or squinted eye. When the effect of ISI is excessive, traces from the upper portion of the eye pattern cross traces from the lower portion, with the result that the eye is completely closed.
EQUALISING FILTER
Adaptive equalization
• An equalizer is a filter that compensates for the dispersive effects of a channel. An adaptive equalizer can adjust its coefficients continuously during the transmission of data.
• It consists of a tapped-delay-line filter with a set of delay elements, a set of adjustable multipliers connected to the delay-line taps, and a summer for adding the multiplier outputs.
Ci is the weight of the i-th tap, the total number of taps is M, and the tap spacing is equal to the symbol duration T of the transmitted signal. In a conventional FIR filter the tap weights are constant, and a particular designed response is obtained. In the adaptive equalizer the Ci's are variable and are adjusted by an algorithm.
Mechanism of adaptation
Training mode
A known sequence d(nT) is transmitted, and a synchronized version of it is generated in the receiver and applied to the adaptive equalizer. The training sequence is typically a maximal-length PN sequence, because it has large average power and hence large SNR; the resulting (impulse) response sequence is observed by measuring the filter outputs at the sampling instants. The difference between the resulting response y(nT) and the desired response d(nT) is the error signal, which is used to estimate the direction in which the filter coefficients should be adjusted by the optimization algorithm.
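One standard algorithm for this training-mode adaptation is LMS (least mean squares). A minimal sketch, with assumed channel taps, step size μ and decision delay:

```python
import numpy as np

# LMS training of a tapped-delay-line equalizer. The update
# c <- c + mu * e * buf nudges the tap weights in the direction that
# reduces the instantaneous squared error between y(nT) and d(nT).
rng = np.random.default_rng(2)
channel = np.array([0.1, 1.0, 0.3])      # dispersive channel (mild ISI)
d = rng.choice([-1.0, 1.0], 5000)        # known training sequence d(nT)
x = np.convolve(d, channel)              # received equalizer input

M, mu, delay = 11, 0.01, 6               # taps, step size, decision delay
c = np.zeros(M)                          # adjustable tap weights Ci
sq_err = []
for n in range(M - 1, len(d)):
    buf = x[n - M + 1 : n + 1][::-1]     # delay-line contents, newest first
    y = c @ buf                          # summer output y(nT)
    e = d[n - delay] - y                 # error against desired response
    c += mu * e * buf                    # LMS coefficient update
    sq_err.append(e * e)

print(np.mean(sq_err[:200]), np.mean(sq_err[-200:]))  # error shrinks
```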
Basis Vectors
The set of basis vectors {e1, e2, …, en} of a space is chosen to be complete, i.e. to span the vector space: any vector a can be expressed as a linear combination of these vectors.
Orthonormal basis
In an n-dimensional space, we can have at most n basis vectors.
Signal Space
Basic idea: if a signal can be represented by an n-tuple, then it can be treated in much the same way as an n-dimensional vector.
Let φ1(t), φ2(t), …, φn(t) be n signals, and consider a signal x(t) that can be written as
x(t) = Σ (i = 1 to n) xi·φi(t).
If every signal can be written as above, the φi(t) are basis functions and we have an n-dimensional signal space.
Orthonormal Basis
Consider a set of M signals (M-ary symbols) {si(t), i = 1, 2, …, M} with finite energy, each expressible as a linear combination of N orthonormal basis functions φj(t):
si(t) = Σ (j = 1 to N) sij·φj(t),
where N ≤ M is the dimension of the signal space and the φj(t) are called the orthonormal basis functions.
Now we can represent a signal si(t) as a column vector whose elements are the scalar coefficients sij, j = 1, 2, …, N, with
sij = ∫ si(t)·φj(t) dt.
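The expansion coefficients are inner products, which can be approximated numerically. The basis below (quadrature carriers over one symbol, an assumed standard choice) illustrates both the orthonormality check and the projection:

```python
import numpy as np

# Represent signals by their coefficients on an orthonormal basis.
# The integral s_ij = integral of s_i(t) * phi_j(t) dt is approximated
# by a finely sampled sum.
T = 1.0
fc = 4.0                                  # whole carrier cycles per symbol
t = np.linspace(0.0, T, 10_000, endpoint=False)
dt = t[1] - t[0]

phi1 = np.sqrt(2.0 / T) * np.cos(2 * np.pi * fc * t)
phi2 = np.sqrt(2.0 / T) * np.sin(2 * np.pi * fc * t)

def coeff(s, phi):
    return np.sum(s * phi) * dt           # numerical inner product

# Orthonormality of the basis: <phi1,phi1>=1, <phi2,phi2>=1, <phi1,phi2>=0.
print(coeff(phi1, phi1), coeff(phi2, phi2), coeff(phi1, phi2))

# A signal at 45 degrees projects equally onto both basis functions.
s = np.sqrt(2.0 / T) * np.cos(2 * np.pi * fc * t - np.pi / 4)
print(coeff(s, phi1), coeff(s, phi2))     # both close to 1/sqrt(2)
```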
(i) Generation
To generate the BPSK signal, we build on the fact that the BPSK signal is a special case of DSB-SC modulation. Specifically, we use a product modulator consisting of two components:
(i) Non-return-to-zero level encoder, whereby the input binary data sequence is encoded in polar form, with symbols 1 and 0 represented by constant amplitudes of opposite polarity.
(ii) Product modulator, which multiplies the level-encoded binary wave by the sinusoidal carrier to produce the BPSK signal. The timing pulses used to generate the level-encoded binary wave and the sinusoidal carrier wave are usually, but not necessarily, extracted from a common master clock.
(ii) Detection
To detect the original binary sequence of 1s and 0s, the BPSK signal at the channel output is applied to a receiver that consists of four sections:
(a) Product modulator, which is also supplied with a locally generated reference signal that is a replica of the carrier wave.
(b) Low-pass filter, designed to remove the double-frequency components of the product-modulator output (i.e., the components centered on twice the carrier frequency) and pass the zero-frequency components.
(c) Sampler, which uniformly samples the output of the low-pass filter once per bit interval; the local clock governing the operation of the sampler is synchronized with the clock responsible for bit timing in the transmitter.
(d) Decision-making device, which compares the sampled value of the low-pass filter's output to an externally supplied threshold once per bit interval. If the threshold is exceeded, the device decides in favor of symbol 1; otherwise, it decides in favor of symbol 0.
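The generation and the four receiver sections can be simulated end to end. The carrier frequency, samples per bit, and the integrate-and-dump stage standing in for the low-pass filter plus sampler are assumed simplifications:

```python
import numpy as np

# Sampled BPSK generation and coherent detection.
fc, ns = 8.0, 1000                   # carrier cycles per bit, samples/bit
bits = np.array([1, 0, 0, 1, 1])
t = np.arange(len(bits) * ns) / ns   # time axis in units of Tb

level = np.repeat(2 * bits - 1, ns)  # NRZ level encoder: bits -> +/-1
carrier = np.cos(2 * np.pi * fc * t)
s = level * carrier                  # product modulator output (BPSK)

# Receiver: multiply by the coherent reference, then average each bit
# interval (integrate-and-dump = low-pass filter + sampler).
mixed = s * carrier                              # section (a)
per_bit = mixed.reshape(len(bits), ns).mean(axis=1)  # sections (b), (c)
decisions = (per_bit > 0).astype(int)            # section (d), threshold 0
print(decisions)                                 # recovers the bits
```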
FSK Transmitter
FSK receiver
FSK Transmitter
FSK receiver
A binary FSK transmitter is as shown: the incoming binary data sequence is applied to an on-off level encoder. The output of the encoder is a constant positive level for symbol 1 and 0 volts for symbol 0. When we have symbol 1, the upper channel is switched on with oscillator frequency f1; for symbol 0, because of the inverter, the lower channel is switched on with oscillator frequency f2. These two frequencies are combined using an adder circuit and then transmitted. The transmitted signal is the required BFSK signal. The detector consists of two correlators. The incoming noisy BFSK signal x(t) is common to both correlators, and the coherent reference signals φ1(t) and φ2(t) are supplied to the upper and lower correlators, respectively.
The correlator outputs are then subtracted, one from the other, resulting in a random variable l = x1 – x2. The output l is compared with a threshold of zero volts.
FSK Bandwidth:
• Applications
– On voice-grade lines, used at up to 1200 bps
– Used for high-frequency (3 to 30 MHz) radio transmission
– Used at higher frequencies on LANs that use coaxial cable
Therefore a binary FSK system has a two-dimensional signal space with two messages s1(t) and s2(t) [N = 2, M = 2]; they are represented as,
In a sense, QPSK is an expanded version of binary PSK, in which a symbol consists of two bits and two orthonormal basis functions are used. A group of two bits is often called a "dibit", so four dibits are possible. Each symbol carries the same energy. Let E be the energy per symbol and T the symbol duration, T = 2Tb, where Tb is the duration of one bit.
In a QPSK system, the information carried by the transmitted signal is contained in the phase.
QPSK Receiver:
The QPSK receiver consists of a pair of correlators with a common input, supplied with a locally generated pair of coherent reference signals φ1(t) and φ2(t), as shown in fig. (b). The correlator outputs x1 and x2, produced in response to the received signal x(t), are each compared with a threshold value of zero.
Probability of error:-
A QPSK system is in fact equivalent to two coherent binary PSK systems working in parallel
and using carriers that are in-phase and quadrature. The in-phase channel output x1 and the Q-
channel output x2 may be viewed as the individual outputs of the two coherent binary PSK
systems. Thus the two binary PSK systems may be characterized as follows.
The bit errors in the I-channel and Q-channel of the QPSK system are statistically independent. The I-channel makes a decision on one of the two bits constituting a symbol (dibit) of the QPSK signal, and the Q-channel takes care of the other bit.
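Because each channel behaves as an independent coherent BPSK link, the per-bit error probability is Pb = Q(√(2 Eb/N0)). A quick numerical evaluation of this standard formula (the 10 dB operating point is just an example value):

```python
from math import erfc, sqrt

def q_func(x):
    """Gaussian Q-function: Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * erfc(x / sqrt(2))

def qpsk_bit_error_prob(eb_n0):
    """Per-bit error probability of coherent QPSK, same as BPSK
    per channel: Pb = Q(sqrt(2 * Eb/N0)); eb_n0 is a linear ratio."""
    return q_func(sqrt(2 * eb_n0))

# usage: evaluate at Eb/N0 = 10 dB
print(qpsk_bit_error_prob(10 ** (10 / 10)))  # about 3.9e-6
```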
As an example of QAM, 12 different phases are combined with two different amplitudes. Since only 4 of the phase angles take both amplitudes (the remaining 8 have a single amplitude), there are a total of 8 + 4·2 = 16 combinations. With 16 signal combinations, each baud carries 4 bits of information (2^4 = 16). QAM combines ASK and PSK so that each signal corresponds to multiple bits, using more phases than amplitudes. The minimum bandwidth requirement of QAM is the same as that of ASK or PSK.
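The bits-per-baud figure follows directly from the base-2 logarithm of the number of signal combinations; a trivial illustrative check:

```python
from math import log2

def bits_per_symbol(m):
    """Bits carried per baud for an M-ary constellation: log2(M)."""
    return int(log2(m))

print(bits_per_symbol(16))  # 4 bits per baud for 16-QAM (2**4 = 16)
print(bits_per_symbol(4))   # 2 bits per baud for QPSK (dibits)
```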
Hamming Weight: The Hamming weight of a codeword is the number of 1s it contains; the minimum weight of a code is the smallest number of 1s in any valid nonzero codeword. This minimum is 3 among the 10 codewords we have chosen (the ones in the white space).
where dmin is the minimum Hamming distance of the codewords. For a code with dmin = 3, we can detect both 1- and 2-bit errors. So we want a code set with as large a Hamming distance as possible, since this directly affects our ability to detect errors. The number of errors that we can correct is given by t = ⌊(dmin − 1)/2⌋; for dmin = 3, t = 1.
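These quantities are easy to compute. The sketch below is an illustrative check; the example codewords are made up, not the 10 from the table above.

```python
def hamming_weight(cw):
    """Number of 1s in a codeword given as a bit string."""
    return cw.count("1")

def hamming_distance(a, b):
    """Number of bit positions in which two codewords differ."""
    return sum(x != y for x, y in zip(a, b))

def correctable_errors(d_min):
    """t = floor((d_min - 1) / 2) errors can be corrected."""
    return (d_min - 1) // 2

print(hamming_weight("1011000"))               # 3
print(hamming_distance("1011000", "0110001"))  # 4
print(correctable_errors(3))                   # 1: detect 2 errors, correct 1
```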
Creating block codes: Block codes are specified by (n, k). The code takes k information bits and computes (n − k) parity bits from the code generator matrix. Most block codes are systematic, in that the information bits remain unchanged, with the parity bits attached either to the front or to the back of the information sequence.
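A minimal sketch of systematic encoding with G = [I | P], where the parity submatrix P below is a hypothetical example for a (7,4) code (the actual P depends on the code chosen): the k information bits pass through unchanged and n − k = 3 parity bits are appended.

```python
import numpy as np

# Assumed (hypothetical) parity submatrix for a (7,4) systematic code.
P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 1, 1],
              [1, 0, 1]])
G = np.hstack([np.eye(4, dtype=int), P])  # generator matrix G = [I_k | P]

def encode(m):
    """Systematic block encoding X = M G over GF(2)."""
    return (np.array(m) @ G) % 2  # modulo-2 arithmetic

print(encode([1, 0, 1, 1]))  # [1 0 1 1 1 0 0]
```

The first four output bits reproduce the message; the last three are the parity bits computed from P.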
Reed-Solomon Codes: Reed-Solomon (R-S) codes form an important sub-class of the family of Bose-Chaudhuri-Hocquenghem (BCH) codes and are very powerful linear non-binary block codes capable of correcting multiple random as well as burst errors. An important feature is that the generator polynomial and the code symbols are derived from the same finite field, which helps reduce the complexity and the number of computations involved in their implementation. A large number of R-S codes are available with different code rates.
An R-S code is described by a generator polynomial g(X) and other usual important code parameters such as the number of message symbols per block (k), the number of code symbols per block (n), the maximum number of erroneous symbols (t) that can surely be corrected per block of received symbols, and the designed minimum symbol Hamming distance (d). A parity-check polynomial h(X) of order k also plays a role in designing the code. The symbol X used in the polynomials is an indeterminate, which usually implies a unit amount of delay.
For positive integers m and t, a primitive (n, k, t) R-S code is defined as follows:
– Number of encoded symbols per block: n = 2^m − 1
– Number of message symbols per block: k
– Code rate: R = k/n
– Number of parity symbols per block: n − k = 2t
– Minimum symbol Hamming distance per block: d = 2t + 1
It can be noted that the block length n of an (n, k, t) R-S code is bounded by the corresponding finite field GF(2^m). Moreover, as n − k = 2t, an (n, k, t) R-S code has optimum error-correcting capability.
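The parameter relations above can be captured in a few lines. The RS(255, 223) example used below is a widely deployed code with m = 8 and t = 16.

```python
def rs_parameters(m, t):
    """Parameters of a primitive (n, k, t) Reed-Solomon code over GF(2^m)."""
    n = 2 ** m - 1          # encoded symbols per block
    k = n - 2 * t           # message symbols per block, since n - k = 2t
    d = 2 * t + 1           # minimum symbol Hamming distance
    return n, k, d

# usage: the popular RS(255, 223) code has m = 8, t = 16
print(rs_parameters(8, 16))  # (255, 223, 33)
```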
Convolutional codes:
• Convolutional codes are widely used as channel codes in practical communication systems for error correction.
• The encoded bits depend on the current k input bits and a few past input bits.
• The main decoding strategy for convolutional codes is based on the widely used Viterbi algorithm.
• Convolutional codes are commonly described using two parameters: the code rate and the constraint length. The code rate, k/n, is the ratio of the number of bits into the convolutional encoder (k) to the number of channel symbols output by the convolutional encoder (n) in a given encoder cycle.
• The constraint length parameter, K, denotes the "length" of the convolutional encoder, i.e. how many k-bit stages are available to feed the combinatorial logic that produces the output symbols. Closely related to K is the parameter m, which can be thought of as the memory length of the encoder.
A simple convolutional encoder is shown below (fig.). The information bits are fed in small groups of k bits at a time to a shift register. The output encoded bits are obtained by modulo-2 addition (EXCLUSIVE-OR operation) of the input information bits and the contents of the shift registers, which hold a few previous information bits.
The operation of a convolutional encoder can be explained in several equivalent ways, namely by
a) state diagram representation.
b) tree diagram representation.
c) trellis diagram representation.
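All three representations describe the same shift-register machine. As a concrete instance, the sketch below implements a rate-1/2, K = 3 encoder with the commonly used generators 111 and 101; these generators are an assumption for illustration, not necessarily those of the figure.

```python
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Rate-1/2 convolutional encoder sketch (assumed generators 111
    and 101, constraint length K = 3). Each input bit yields two output
    bits by modulo-2 addition of the input and the shift-register bits."""
    sr = [0, 0]  # two memory stages (m = K - 1), initially cleared
    out = []
    for b in bits:
        window = [b] + sr  # current bit followed by past two bits
        out.append(sum(w * g for w, g in zip(window, g1)) % 2)
        out.append(sum(w * g for w, g in zip(window, g2)) % 2)
        sr = [b] + sr[:-1]  # shift the register
    return out

print(conv_encode([1, 0, 1, 1]))  # [1, 1, 1, 0, 0, 0, 0, 1]
```

Each input bit influences three consecutive output pairs, which is exactly the branching behavior the state, tree, and trellis diagrams depict.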
STATE DIAGRAM:
• H (7,4)
• Transmission vector x
Error Correction:
1. Consider a single-error-correcting (7,4) linear code and the corresponding decoding table.
2. Find the (7,4) linear systematic block code word corresponding to 1101. Assume a suitable generator matrix.
Solution:
Let
n = 7, k = 4
q = n − k = 3
C = [0 1 0]
The complete code word is X = {M : C} = {1 1 0 1 0 1 0}
The parity-check matrix is H = [P^T : I]
3. Determine the generator polynomial g(X) for a (7,4) cyclic code and find the code vectors for the data vectors 1010, 1111 and 1000.
n = 7, k = 4
q = n − k = 3
X = MG
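For reference, g(X) = 1 + X + X^3 is a standard choice of generator for a (7,4) cyclic code (it divides X^7 + 1). The sketch below performs non-systematic encoding c(X) = m(X) g(X) over GF(2), with polynomial coefficients listed lowest degree first; whether the exercise intends systematic or non-systematic encoding is an assumption here.

```python
def poly_mul_mod2(a, b):
    """Multiply two GF(2) polynomials given as coefficient lists
    (lowest degree first), reducing coefficients modulo 2."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

g = [1, 1, 0, 1]  # g(X) = 1 + X + X^3, assumed generator polynomial

def cyclic_encode(m):
    """Non-systematic cyclic encoding: c(X) = m(X) * g(X), padded to n = 7."""
    c = poly_mul_mod2(m, g)
    return c + [0] * (7 - len(c))

# usage: data vector 1010 read as m(X) = 1 + X^2 (lowest degree first)
print(cyclic_encode([1, 0, 1, 0]))  # [1, 1, 1, 0, 0, 1, 0]
```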
4. Assume a (2,1) convolutional coder with constraint length 6. Draw the tree diagram, state diagram and trellis diagram for the assumed coder.
5. Design a block code for a message block of size eight that can correct single errors.
6. Briefly discuss the various error control codes and explain in detail with one example for convolutional codes.
Convolutional codes
k = number of bits shifted into the encoder at one time (k = 1 is usually used)
n = number of encoder output bits corresponding to the k information bits
r = k/n = code rate
K = constraint length (encoder memory)
Each encoded bit is a function of the present input bits and the past ones.
Generator Sequence
Convolutional Codes
An Example (rate = 1/2 with K = 2)
Encoding Process
Basic concept
• Generate the code trellis at the decoder.
• The decoder penetrates the code trellis level by level in search of the transmitted code sequence.
• At each level of the trellis, the decoder computes and compares the metrics of all the partial paths entering a node.
• The decoder stores the partial path with the larger metric and eliminates all the other partial paths. The stored partial path is called the survivor.
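The survivor-selection loop can be sketched as below for a rate-1/2, K = 3 encoder with assumed generators 111 and 101. This sketch uses a Hamming-distance metric, so the survivor at each node is the path with the smaller distance (equivalent to keeping the larger correlation metric).

```python
def viterbi_decode(rx, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Hard-decision Viterbi sketch for a rate-1/2, K = 3 code with
    assumed generators 111 and 101. At each trellis level, only the
    best path entering each state (the survivor) is kept."""
    n_states = 4  # state = the two memory bits (newest, older)
    INF = float("inf")
    metric = [0] + [INF] * (n_states - 1)   # encoder starts in state 00
    paths = [[] for _ in range(n_states)]
    for i in range(0, len(rx), 2):
        r = rx[i:i + 2]                      # received symbol pair
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue                     # state not yet reachable
            s1, s0 = (s >> 1) & 1, s & 1
            for b in (0, 1):                 # hypothesized input bit
                o1 = (b * g1[0] + s1 * g1[1] + s0 * g1[2]) % 2
                o2 = (b * g2[0] + s1 * g2[1] + s0 * g2[2]) % 2
                ns = (b << 1) | s1           # next state after the shift
                m = metric[s] + (o1 != r[0]) + (o2 != r[1])
                if m < new_metric[ns]:       # keep only the survivor
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[min(range(n_states), key=lambda s: metric[s])]

# usage: decode the noiseless encoding of 1 0 1 1 (symbol pairs 11 10 00 01)
print(viterbi_decode([1, 1, 1, 0, 0, 0, 0, 1]))  # [1, 0, 1, 1]
```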