
Code No: 56027 R09

JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY HYDERABAD


B. Tech III Year II Semester Examinations, April - 2018
DIGITAL SIGNAL PROCESSING
(Electronics and Communication Engineering)
Time: 3 hours Max. Marks: 75
Answer any five questions
All questions carry equal marks

1.a) Check whether the following systems are causal or not: [7M]
i) y(n) = x(n) + (1/2) x(n−2) ii) y(n) = x(−2n).
i) Given y(n) = x(n) + (1/2) x(n−2)
The output at any time n depends only on the present input x(n) and the past input x(n−2).
Therefore, the system is causal.
ii) Given y(n) = x(−2n)
For n = −1, y(−1) = x(2)
For n = 0, y(0) = x(0)
For n = 1, y(1) = x(−2)
For negative values of n, the output depends on future values of the input.
Therefore, the system is non-causal.

There are many advantages of digital communication over analog communication, such as:
 Digital signals are less affected by distortion, noise, and interference.
 Digital circuits are more reliable.
 Digital circuits are easy to design and cheaper than analog circuits.
 The hardware implementation in digital circuits, is more flexible than analog.
 The occurrence of cross-talk is very rare in digital communication.
 Signal processing functions such as encryption and compression are employed in digital
circuits to maintain the secrecy of the information.
 The probability of error occurrence is reduced by employing error detecting and error
correcting codes.
 Spread spectrum technique is used to avoid signal jamming.
 Combining digital signals using Time Division Multiplexing (TDM) is easier than
combining analog signals using Frequency Division Multiplexing (FDM).

b) Explain the bandwidth-S/N trade off. [3M]

The Shannon-Hartley theorem states that the channel capacity is given by


C = B log2 (1 + S/N).
where C is the capacity in bits per second, B is the bandwidth of the channel in Hertz, and S/N is
the signal-to-noise ratio.
 As the bandwidth of the channel increases, it is possible to make faster changes in the
information signal, thereby increasing the information rate.
 As S/N increases, one can increase the information rate while still preventing errors due to noise.

 For no noise, S/N → ∞ and an infinite information rate is possible irrespective of bandwidth.
Thus, we may trade off bandwidth for SNR.
For example, if S/N = 7 and B = 4 kHz, then the channel capacity is C = 12 × 10³ bits/s. If the
SNR increases to S/N = 15 and B is decreased to 3 kHz, the channel capacity remains the same.
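The trade-off in this example can be checked numerically; a minimal sketch (the function name is illustrative):

```python
import math

def channel_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley capacity in bits/s: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# S/N = 7,  B = 4 kHz -> C = 4000 * log2(8)  = 12000 bits/s
c1 = channel_capacity(4000, 7)
# S/N = 15, B = 3 kHz -> C = 3000 * log2(16) = 12000 bits/s
c2 = channel_capacity(3000, 15)
```

Halving-and-doubling in the right proportions leaves C unchanged, which is exactly the bandwidth-for-SNR trade.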
c) Define phase locked loop. [2M]

A phase-locked loop or phase lock loop (PLL) is a control system that generates an output signal
whose phase is related to the phase of an input signal. There are several different types of PLL;
the simplest is an electronic circuit consisting of a variable frequency oscillator and a phase
detector in a feedback loop. The oscillator generates a periodic signal, and the phase detector
compares the phase of that signal with the phase of the input periodic signal, adjusting the
oscillator to keep the phases matched.

d) Mention various digital modulation techniques. [3M]


Modulation Classification:
Modulation is the process of converting data into electrical signals optimized for transmission.
Modulation techniques are roughly divided into two types: Continuous wave modulation, Pulse
modulation.

Digital modulation involves transmission of binary signals (0 and 1). This method is divided into
single-carrier modulation, in which one carrier occupies the entire bandwidth (e.g., ASK, PSK,
FSK, DPSK, and higher-order formats such as QPSK, 8-PSK, 16-PSK, and QAM), and
multicarrier schemes that modulate and transmit different data on multiple carriers (e.g., OFDM).
e) Define entropy. [2M]
Entropy can be defined as a measure of the average information content per source symbol. It is
given by

H = − Σi pi logb(pi)

where pi is the probability of occurrence of symbol number i from a given stream of symbols
and b is the base of the logarithm used. Hence, this is also called Shannon's entropy.
The amount of uncertainty remaining about the channel input after observing the channel output
is called conditional entropy. It is denoted by H(X/Y).

f) Write the steps involved in Shannon Fano coding. [3M]

The steps of the algorithm are as follows:


1. Create a list of probabilities or frequency counts for the given set of symbols so that the
relative frequency of occurrence of each symbol is known.
2. Sort the list of symbols in decreasing order of probability, the most probable ones to the left
and least probable to the right.
3. Split the list into two parts, with the total probability of both the parts being as close to each
other as possible.
4. Assign the value 0 to the left part and 1 to the right part.
5. Repeat the steps 3 and 4 for each part, until all the symbols are split into individual
subgroups.
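The steps above can be sketched in Python (a minimal recursive implementation; the symbol set and probabilities below are illustrative):

```python
def shannon_fano(symbols):
    """symbols: list of (symbol, probability). Returns dict symbol -> code string."""
    codes = {sym: "" for sym, _ in symbols}

    def split(group):
        if len(group) <= 1:
            return
        total = sum(p for _, p in group)
        # Step 3: find the split where the left part's probability is closest to half
        run, best_i, best_diff = 0.0, 0, float("inf")
        for i, (_, p) in enumerate(group[:-1]):
            run += p
            diff = abs(2 * run - total)
            if diff < best_diff:
                best_diff, best_i = diff, i
        left, right = group[:best_i + 1], group[best_i + 1:]
        # Step 4: assign 0 to the left part and 1 to the right part
        for sym, _ in left:
            codes[sym] += "0"
        for sym, _ in right:
            codes[sym] += "1"
        # Step 5: repeat on each part
        split(left)
        split(right)

    # Steps 1-2: sort symbols in decreasing order of probability
    split(sorted(symbols, key=lambda sp: sp[1], reverse=True))
    return codes

src = [("A", 0.4), ("B", 0.2), ("C", 0.2), ("D", 0.1), ("E", 0.1)]
codes = shannon_fano(src)
```

More probable symbols receive shorter codewords, and the resulting code is prefix-free.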

g) What are block codes? [2]

In coding theory, block codes are a large and important family of error-correcting codes that
encode data in blocks. A block code acts on a block of k bits of input data to produce n bits of
output data, where n is greater than k. The transmitter adds n − k redundant bits. The
ratio k/n is the code rate; it is denoted by r, and r < 1.
Examples of block codes are Reed–Solomon codes, Hamming codes, Hadamard codes,
Expander codes, Golay codes, and Reed–Muller codes.

h) Write the capabilities of linear block codes. [3]


In order to determine the error-detecting and correcting capabilities of linear block codes, we
first need to define the Hamming weight of a codeword, the Hamming distance between two
codewords, and the minimum distance dmin of a block code.
The Hamming weight of a codeword C is defined as the number of non-zero elements (i.e., 1s)
in the codeword.
The Hamming distance between two codewords is defined as the number of elements in which
they differ.
The minimum distance dmin of a linear block code is the smallest Hamming distance between any
two different codewords, and it is equal to the minimum Hamming weight of the non-zero
codewords in the code.
A linear block code of minimum distance dmin can detect up to t errors if and only if dmin ≥ t + 1,
and correct up to t errors if and only if dmin ≥ 2t + 1.
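These definitions are easy to verify in code; a small sketch using the (3,1) repetition code as an assumed example:

```python
def hamming_weight(cw):
    """Number of non-zero elements (1s) in the codeword."""
    return sum(bit == "1" for bit in cw)

def hamming_distance(a, b):
    """Number of positions in which two codewords differ."""
    return sum(x != y for x, y in zip(a, b))

# A small linear block code: the (3,1) repetition code {000, 111}
code = ["000", "111"]
d_min = min(hamming_distance(a, b) for a in code for b in code if a != b)
detect = d_min - 1          # detects up to t errors where d_min >= t + 1
correct = (d_min - 1) // 2  # corrects up to t errors where d_min >= 2t + 1
```

For this code d_min = 3, so it detects up to 2 errors and corrects 1, consistent with the two bounds above.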
i) Define spread spectrum. [2M]
Spread spectrum is a technique used for transmitting radio or telecommunications signals by
spreading the transmitted signal to occupy the entire frequency spectrum available for
transmission.
The advantages of spectrum spreading include noise reduction, security and resistance to
jamming and interception.
Frequency-hopping spread spectrum (FHSS), direct-sequence spread spectrum (DSSS), time-
hopping spread spectrum (THSS), chirp spread spectrum (CSS) are types of Spread spectrum
techniques.

j) What is meant by PN sequence? [3M]


Pseudo-Noise (PN) sequences are commonly used to generate noise that is approximately
"white". They have applications in scrambling, cryptography, and spread-spectrum
communications, and are also commonly referred to as Pseudo-Random Binary Sequences
(PRBS). They are very widely used in communication systems.
The PN Sequence Generator block generates a sequence of pseudorandom binary numbers using
a linear-feedback shift register (LFSR). Pseudo noise sequences are typically used for
pseudorandom scrambling, and in direct-sequence spread-spectrum systems.
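A PN sequence generator can be sketched with a Fibonacci LFSR (the register length and tap positions below are illustrative; x⁴ + x³ + 1 is a primitive polynomial, giving the maximal period 2⁴ − 1 = 15):

```python
def lfsr_sequence(seed, taps, length):
    """Fibonacci LFSR. seed: initial register bits; taps: indices of the
    stages XORed together to form the feedback bit."""
    state = list(seed)
    out = []
    for _ in range(length):
        out.append(state[-1])          # output the last stage
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]      # shift right, insert feedback on the left
    return out

# 4-stage LFSR with feedback polynomial x^4 + x^3 + 1 -> period 15
seq = lfsr_sequence([1, 0, 0, 0], taps=[3, 2], length=30)
```

A maximal-length sequence repeats every 2ⁿ − 1 bits and is nearly balanced (2ⁿ⁻¹ ones per period), which is why it looks noise-like.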
2.a) Describe Delta modulation with a neat diagram. [6M]
Delta Modulation is a simplified form of DPCM technique, also viewed as 1-bit DPCM scheme.
As the sampling interval is reduced, the signal correlation will be higher.

Fig: Delta Modulator

 x(nTs) = over sampled input


 ep(nTs) = summer output and quantizer input
 eq(nTs) = quantizer output = v(nTs)
 xˆ(nTs) = output of delay circuit
 u(nTs) = input of delay circuit

The predictor circuit in DPCM is replaced by a simple delay circuit in DM


Using these notations, now we shall try to figure out the process of delta modulation.
ep(nTs) = x(nTs) − xˆ(nTs) ---------equation 1

=x(nTs) − u([n−1]Ts)
=x(nTs) − [xˆ[[n−1]Ts] + v[[n−1]Ts]] ---------equation 2
Further,
v(nTs) = eq(nTs) = Δ · sgn[ep(nTs)] ---------equation 3

u(nTs) = xˆ(nTs) + eq(nTs)


Where,
 xˆ(nTs) = the previous value of the delay circuit
eq(nTs) = quantizer output = v(nTs)
Hence,
u(nTs) = u([n−1]Ts) + v(nTs) ---------equation 4

That is, the present input of the delay unit = the previous output of the delay unit + the present
quantizer output.
Assuming a zero initial condition of the accumulator, the accumulated version of the DM output
is

u(nTs) = Σ (j = 1 to n) v(jTs) ---------equation 5

Now, note that xˆ(nTs) = u([n−1]Ts), so

xˆ(nTs) = Σ (j = 1 to n−1) v(jTs) ---------equation 6

That is, the delay unit output is the accumulator output lagging by one sample.


From equations 5 & 6, we get a possible structure for the demodulator.
A Stair-case approximated waveform will be the output of the delta modulator with the step-size
as delta (Δ).
Delta Demodulator
The delta demodulator comprises a low pass filter, a summer, and a delay circuit. The
predictor circuit is eliminated here and hence no assumed input is given to the demodulator.
Following is the diagram for delta demodulator.
Fig: Delta Demodulator.
From the above diagram, we have the notations as −
vˆ(nTs) is the input sample
uˆ(nTs) is the summer output
x¯(nTs) is the delayed output
A binary sequence will be given as an input to the demodulator. The stair-case approximated
output is given to the LPF.
Low pass filter is used for many reasons, but the prominent reason is noise elimination for out-
of-band signals.
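The modulator/demodulator loop described by equations 1–4 can be simulated; a minimal sketch (step size and test signal are illustrative):

```python
import math

def delta_modulate(samples, step):
    """1-bit DM: transmit the sign of (input - staircase approximation)."""
    approx, bits, stair = 0.0, [], []
    for x in samples:
        bit = 1 if x >= approx else 0       # quantized error sign, eq 3
        approx += step if bit else -step    # accumulator / delay loop, eq 4
        bits.append(bit)
        stair.append(approx)
    return bits, stair

def delta_demodulate(bits, step):
    """Receiver accumulator; an LPF would normally smooth this staircase."""
    approx, out = 0.0, []
    for bit in bits:
        approx += step if bit else -step
        out.append(approx)
    return out

sig = [math.sin(2 * math.pi * n / 64) for n in range(64)]   # oversampled sine
bits, stair = delta_modulate(sig, step=0.15)
recovered = delta_demodulate(bits, step=0.15)
```

The receiver reconstructs exactly the transmitter's staircase from the 1-bit stream, which is the point of the structure: only the sign bits travel over the channel.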

2.b) Define PCM and explain. [4M]


Modulation is the process of varying one or more parameters of a carrier signal in accordance
with the instantaneous values of the message signal.
In Pulse Code Modulation, the message signal is represented by a sequence of coded pulses.
Following is the block diagram of PCM which represents the basic elements of both the
transmitter and the receiver sections.

Fig: PCM Transmitter & Receiver.


The transmitter section of a Pulse Code Modulator circuit consists of Sampling,
Quantizing and Encoding, which are performed in the analog-to-digital converter section. The
low pass filter prior to sampling prevents aliasing of the message signal.
The basic operations in the receiver section are regeneration of impaired signals,
decoding, and reconstruction of the quantized pulse train.

Fig: PCM-Waveform

Low Pass Filter:


This filter eliminates the high frequency components present in the input analog signal which is
greater than the highest frequency of the message signal, to avoid aliasing of the message signal.
Sampler:
This is the technique which helps to collect the sample data at instantaneous values of message
signal, so as to reconstruct the original signal. The sampling rate must be greater than twice the
highest frequency component W of the message signal, in accordance with the sampling
theorem.
Quantizer:
Quantizing is the process of mapping each sampled amplitude to the nearest of a finite set of
discrete levels, which limits the number of bits needed to represent each sample.
Encoder:
The digitization of the analog signal is done by the encoder. It designates each quantized level
by a binary code. The sampling done here is the sample-and-hold process. These three sections
(LPF, Sampler, and Quantizer) act as an analog-to-digital converter. Encoding minimizes the
bandwidth used.
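The quantizer and encoder stages can be sketched as a uniform quantizer with binary codewords (bit depth and input range below are illustrative):

```python
def pcm_encode(samples, n_bits, vmin=-1.0, vmax=1.0):
    """Uniform quantizer + encoder: map each sample to one of 2^n levels."""
    levels = 2 ** n_bits
    step = (vmax - vmin) / levels
    words = []
    for x in samples:
        idx = int((x - vmin) / step)
        idx = max(0, min(levels - 1, idx))   # clip to the quantizer range
        words.append(format(idx, f"0{n_bits}b"))
    return words

def pcm_decode(words, n_bits, vmin=-1.0, vmax=1.0):
    """Map each codeword back to the mid-point of its quantization interval."""
    step = (vmax - vmin) / 2 ** n_bits
    return [vmin + (int(w, 2) + 0.5) * step for w in words]

codes = pcm_encode([-0.9, -0.1, 0.3, 0.8], n_bits=3)
decoded = pcm_decode(codes, n_bits=3)
```

With 3 bits there are 8 levels of width 0.25, so the reconstruction error of any in-range sample is at most half a step (0.125).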
Regenerative Repeater:
This section increases the signal strength. The output of the channel also has one regenerative
repeater circuit, to compensate the signal loss and reconstruct the signal, and also to increase its
strength.
Decoder:
The decoder circuit decodes the pulse coded waveform to reproduce the original signal. This
circuit acts as the demodulator.
Reconstruction Filter:
After the digital-to-analog conversion is done by the regenerative circuit and the decoder, a low-
pass filter is employed, called as the reconstruction filter to get back the original signal.
Hence, the Pulse Code Modulator circuit digitizes the given analog signal, codes it and samples
it, and then transmits it in an analog form. This whole process is repeated in a reverse pattern to
obtain the original signal.
3. a) Explain adaptive delta modulation concept in detail. [4M]
This modulation is a refined form of delta modulation, introduced to overcome the granular
noise and slope overload errors that occur in delta modulation.
Adaptive Delta Modulation is similar to delta modulation except that the step size varies
according to the input signal, whereas it is a fixed value in delta modulation.

Fig: Adaptive-Delta-Modulation-Transmitter.

The transmitter circuit consists of a summer, quantizer, Delay circuit, and a logic circuit for step
size control. The baseband signal X(nTs) is given as input to the circuit. The feedback circuit
present in the transmitter is an Integrator. The integrator generates the staircase approximation of
the previous sample.

At the summer circuit, the difference between the present sample and staircase approximation of
previous sample e(nTs) is calculated. This error signal is passed to the quantizer, where a
quantized value is generated. The step size control block controls the step size of the next
approximation based on either the quantized value is high or low. The quantized signal is given
as output.

At the receiver end Demodulation takes place. The receiver has two parts. First part is the step
size control. Here the received signal is passed through a logic step size control block, where the
step size is produced from each incoming bit. Step size is decided based on present and previous
input. In the second part of the receiver, the accumulator circuit recreates the staircase signal.
This waveform is then applied to a low pass filter which smoothens the waveform and recreates
the original signal.
Fig: Adaptive-Delta-Modulation-Waveform
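The step-size control logic can be sketched as follows (the adaptation factor k and the grow-on-repeated-bits rule are illustrative assumptions; practical controllers use similar rules):

```python
def adaptive_delta_modulate(samples, base_step=0.05, k=1.5):
    """ADM sketch: the step grows when successive bits repeat (steep slope,
    slope-overload region) and shrinks when they alternate (granular region)."""
    approx, step, prev_bit = 0.0, base_step, None
    bits, stair = [], []
    for x in samples:
        bit = 1 if x >= approx else 0
        if prev_bit is not None:
            step = step * k if bit == prev_bit else max(base_step, step / k)
        approx += step if bit else -step
        prev_bit = bit
        bits.append(bit)
        stair.append(approx)
    return bits, stair

# A steep ramp that fixed-step DM (step 0.05) could never follow
ramp = [0.3 * n for n in range(15)]
bits, stair = adaptive_delta_modulate(ramp)
```

With a fixed 0.05 step the approximation could rise at most 0.75 over 15 samples; the adaptive step lets the staircase stay close to the 4.2 endpoint, which is precisely the slope-overload fix.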

3.b) Explain the concept of companding. [4M]


Definition: Companding is a technique for achieving non-uniform quantization. The word is
formed by combining "compressing" and "expanding". Companding is done in order to
improve the SNR of weak signals.

If the characteristics of the quantizer are non-linear, the step size is not constant but varies with
the input level; this is known as non-uniform quantization.

As we know, in non-uniform quantization the step size varies according to the signal level. If
the signal level is low, the step size is small, so the quantization noise for weak signals is also
low.

So, in order to maintain proper signal to quantization noise ratio, the step size must be variable
according to the signal level. Thus in order to achieve non-uniform quantization the process of
companding is used.

Fig: Companding Model


The companding model consists of a compressor, a uniform quantizer and an expander.
Initially at the transmitting end the signal is compressed and further at the receiving end the
compressed signal is expanded in order to have the original signal. Initially at the transmitting
end, the signal is first provided to the compressor. The compressor unit amplifies the low value
or weak signal in order to increase the signal level of the applied input signal.
While if the input signal is a high level signal or strong signal then compressor attenuates that
signal before providing it to the uniform quantizer present in the model. This is done in order to
have an appropriate signal level as the input to the uniform quantizer. We know a high
amplitude signal needs more bandwidth and also is more likely to distort.
The operation performed by this block is known as compression thus the unit is called
compressor. The output of the compressor is provided to uniform quantizer where the
quantization of the applied signal is performed. At the receiver end, the output of the uniform
quantizer is fed to the expander. It performs the reverse of the process executed by the
compressor. This unit when receives a low value signal then it attenuates it. While if a strong
signal is achieved, then the expander amplifies it. This is done in order to achieve the originally
transmitted signal at the output.
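The compressor/expander pair described above is standardized as μ-law (and A-law) companding; a sketch of μ-law with μ = 255, the value used in North American PCM telephony:

```python
import math

def mu_compress(x, mu=255.0):
    """mu-law compressor: boosts weak inputs, attenuates strong ones (|x| <= 1)."""
    return math.copysign(math.log(1 + mu * abs(x)) / math.log(1 + mu), x)

def mu_expand(y, mu=255.0):
    """Inverse mu-law (expander), recovering the original amplitude."""
    return math.copysign(((1 + mu) ** abs(y) - 1) / mu, y)

weak = mu_compress(0.01)    # a weak input takes a large share of the range
strong = mu_compress(0.8)
```

Because compression and expansion are exact inverses, the cascade is transparent; what changes is how the uniform quantizer's levels are distributed over the input amplitudes.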

4.a) With a neat block diagram describe the ASK modulator. [5M]
ASK is a type of Amplitude Modulation which represents the binary data in the form of
variations in the amplitude of a signal.
The binary signal when ASK modulated, gives a zero value for Low input while it gives
the carrier output for High input.
The following figure represents ASK modulated waveform along with its input.
ASK Modulator:
The ASK modulator block diagram comprises the carrier signal generator, the binary sequence
from the message signal, and the band-limited filter. Following is the block diagram of the ASK
Modulator.

Fig: ASK Modulator.


The carrier generator, sends a continuous high-frequency carrier. The binary sequence from the
message signal makes the unipolar input to be either High or Low. The high signal closes the
switch, allowing a carrier wave. Hence, the output will be the carrier signal at high input. When
there is low input, the switch opens, allowing no voltage to appear. Hence, the output will be
low.
ASK Demodulator
There are two types of ASK Demodulation techniques. They are −

 Asynchronous ASK Demodulation/detection


 Synchronous ASK Demodulation/detection
Asynchronous ASK Demodulator
The Asynchronous ASK detector consists of a half-wave rectifier, a low pass filter, and a
comparator. Following is the block diagram for the same.
The modulated ASK signal is given to the half-wave rectifier, which delivers a positive half
output. The low pass filter suppresses the higher frequencies and gives an envelope detected
output from which the comparator delivers a digital output.
Synchronous ASK Demodulator
Synchronous ASK detector consists of a Square law detector, low pass filter, a comparator, and
a voltage limiter. Following is the block diagram for the same.

The ASK modulated input signal is given to the Square law detector. A square law detector is
one whose output voltage is proportional to the square of the amplitude modulated input voltage.
The low pass filter minimizes the higher frequencies. The comparator and the voltage limiter
help to get a clean digital output.
4.b) Draw and explain coherent PSK detection process. [5M]
PSK is the digital modulation technique in which the phase of the carrier signal is changed by
varying the sine and cosine inputs at a particular time. The PSK technique is widely used for
wireless LANs, biometrics, and contactless operations, along with RFID and Bluetooth
communications.
By recovering the band-limited message signal, with the help of the mixer circuit and the band
pass filter, the first stage of demodulation gets completed. The base band signal which is band
limited is obtained and this signal is used to regenerate the binary message bit stream.
In the next stage of demodulation, the bit clock rate is needed at the detector circuit to produce
the original binary message signal. If the bit rate is a sub-multiple of the carrier frequency, then
the bit clock regeneration is simplified. To make the circuit easily understandable, a decision-
making circuit may also be inserted at the 2nd stage of detection.

5.a) Explain the process of FSK detection using PLL. [5M]


A phase-locked loop or phase lock loop (PLL) is a control system that generates an output signal
whose phase is related to the phase of an input signal. PLL FM detectors can easily be made
from the variety of phase locked loop integrated circuits that are available, and as a result, PLL
FM demodulators are found in many types of radio equipment ranging from broadcast receivers
to high performance communications equipment.
The PLL technology eliminates the costly RF transformers needed for circuits like the ratio FM
detector and the Foster Seeley circuit.
The working of a PLL FM demodulator is very easy to understand. The input FM signal and the
output of the VCO is applied to the phase detector circuit. The output of the phase detector is
filtered using a low pass filter, the amplifier and then used for controlling the VCO.
When there is no carrier modulation and the input FM signal is in the center of the pass band
(i.e. carrier wave only) the VCO’s tune line voltage will be at the center position. When
deviation in carrier frequency occurs (that means modulation occurs) the VCO frequency
follows the input signal in order to keep the loop in lock. As a result, the tune line voltage to the
VCO varies, and this variation is proportional to the modulation of the FM carrier wave.
This voltage variation is filtered and amplified in order to get the demodulated signal. Since FSK
is simply FM with two discrete carrier frequencies, the same loop recovers a two-level baseband
voltage, which a comparator then slices to regenerate the binary data.

5.b) Draw and explain DPSK technique. [5M]


In Differential Phase Shift Keying the phase of the modulated signal is shifted relative to the
previous signal element. No reference signal is considered here. The signal phase follows the
high or low state of the previous element. This DPSK technique doesn’t need a reference
oscillator.
DPSK Modulator
DPSK is a variant of BPSK in which there is no reference phase signal. Here, the transmitted
signal itself can be used as a reference signal. Following is the diagram of the DPSK Modulator.
DPSK encodes two distinct signals, i.e., the carrier and the modulating signal with 180° phase
shift each. The serial data input is given to the XNOR gate and the output is again fed back to
the other input through 1-bit delay. The output of the XNOR gate along with the carrier signal is
given to the balance modulator, to produce the DPSK modulated signal.
DPSK Demodulator
In DPSK demodulator, the phase of the reversed bit is compared with the phase of the previous
bit. Following is the block diagram of DPSK demodulator.

From the above figure, it is evident that the balance modulator is given the DPSK signal along
with 1-bit delay input. That signal is made to confine to lower frequencies with the help of LPF.
Then it is passed to a shaper circuit, which is a comparator or a Schmitt trigger circuit, to
recover the original binary data as the output.
The following figure represents the model waveform of DPSK.

It is seen from the above figure that if the data bit is Low, i.e., 0, the phase of the signal is not
reversed but continues as it was. If the data bit is High, i.e., 1, the phase of the signal is
reversed, as with NRZI (invert on 1), a form of differential encoding.
If we observe the above waveform, we can say that the High state represents an M in the
modulating signal and the Low state represents a W in the modulating signal.
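The XNOR-with-1-bit-delay logic of the modulator and demodulator can be sketched as follows (the initial reference bit is an assumption; any fixed start bit works):

```python
def dpsk_encode(data_bits):
    """Differential encoding with an XNOR gate and a 1-bit delay:
    next encoded bit = XNOR(data bit, previous encoded bit)."""
    encoded = [1]                               # assumed reference start bit
    for b in data_bits:
        encoded.append(1 - (b ^ encoded[-1]))   # XNOR
    return encoded

def dpsk_decode(encoded):
    """Compare each bit's phase with the previous one (XNOR again)."""
    return [1 - (a ^ b) for a, b in zip(encoded, encoded[1:])]

data = [1, 0, 1, 1, 0, 0, 1]
enc = dpsk_encode(data)
```

Because XNOR is its own inverse relative to the previous bit, decoding never needs an absolute phase reference: only the change (or lack of change) between consecutive symbols carries information.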
6.a) Describe the operating principle of baseband signal receiver. [5M]
A baseband signal or lowpass signal is a signal that can include frequencies that are very near
zero, by comparison with its highest frequency.
A baseband bandwidth is equal to the highest frequency of a signal or system, or an upper bound
on such frequencies, for example the upper cut-off frequency of a low-pass filter.
A basic problem that often arises in the study of communication systems is that of detecting a
pulse transmitted over a channel that is corrupted by channel noise. This problem is solved by
Matched filter Receiver.

Fig: Matched Filter Receiver

In signal processing, a matched filter is obtained by correlating a known delayed signal, with an
unknown signal to detect the presence of the known delayed signal in the unknown signal. This
is equivalent to convolving the unknown signal with a conjugated time-reversed version of the
template. The matched filter is the optimal linear filter for maximizing the signal-to-noise ratio
(SNR) in the presence of additive stochastic noise.

Matched filters are commonly used in radar, in which a known signal is sent out, and the
reflected signal is examined for common elements of the out-going signal. Pulse compression is
an example of matched filtering. It is so called because the impulse response is matched to input
pulse signals.
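Matched filtering by correlation can be sketched as follows (the template and noise-free received samples are illustrative):

```python
def matched_filter(received, template):
    """Correlate the received samples with the known pulse; equivalently,
    convolve with the time-reversed template."""
    n, m = len(received), len(template)
    return [sum(received[i + j] * template[j] for j in range(m))
            for i in range(n - m + 1)]

template = [1.0, 1.0, -1.0, 1.0]                   # known transmitted pulse
received = [0.0, 0.0, 1.0, 1.0, -1.0, 1.0, 0.0]    # pulse buried at offset 2
out = matched_filter(received, template)
peak = out.index(max(out))
```

The correlation output peaks exactly where the template aligns with its copy in the received signal, which is how radar pulse compression locates an echo.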
6.b) Explain the conditional entropy and redundancy for information. [5M]
Entropy:
Entropy can be defined as a measure of the average information content per source
symbol. Claude Shannon, the "father of information theory", provided a formula for it as

H = − Σi pi logb(pi)

where pi is the probability of occurrence of symbol number i from a given stream of symbols
and b is the base of the logarithm used. Hence, this is also called Shannon's entropy.
Conditional Entropy:
The amount of uncertainty remaining about the channel input after observing the channel output,
is called as Conditional Entropy. It is denoted by H(x∣y).

The conditional entropy quantifies the amount of information needed to describe the outcome of


a random variable Y given that the value of another random variable X is known.
The conditional entropy of Y given X is defined as

H(Y/X) = − Σx,y p(x, y) log2 p(y/x)

Properties:
1. Conditional entropy equals zero:
H(Y/X) = 0 if and only if the value of Y is completely determined by the value of X.
2. Conditional entropy of independent random variables:
H(Y/X) = H(Y) and H(X/Y) = H(X) if and only if Y and X are independent variables.
3. Chain rule:
H(Y/X) = H(X,Y) − H(X)
4. Bayes' rule:
H(Y/X) = H(X/Y) − H(X) + H(Y)

5. H(Y/X) ≤ H(Y)

6. H(X,Y) = H(Y/X)+ H(X/Y)+I(X;Y)

7. H(X,Y) = H(X)+H(Y)- I(X;Y)


Where I(X;Y) is mutual information between X and Y.
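The chain rule H(Y/X) = H(X,Y) − H(X) can be checked numerically from any joint distribution; a small sketch:

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a probability list."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def conditional_entropy(joint):
    """H(Y|X) = H(X,Y) - H(X) for a joint pmf given as {(x, y): p}."""
    h_xy = entropy(list(joint.values()))
    px = {}
    for (x, _), p in joint.items():
        px[x] = px.get(x, 0.0) + p
    return h_xy - entropy(list(px.values()))

# X, Y independent fair bits: H(Y|X) = H(Y) = 1 bit (property 2)
independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
# Y completely determined by X: H(Y|X) = 0 (property 1)
deterministic = {(0, 0): 0.5, (1, 1): 0.5}
```

The two test distributions exercise properties 1 and 2 from the list above.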

7.a) Explain with an example about variable length coding method. [5M]
Variable length code:
In coding theory, a variable-length code is a code which maps source symbols to a variable
number of bits.
Variable-length codes can allow sources to be compressed and decompressed with zero error
(lossless data compression) and still be read back symbol by symbol. With the right coding
strategy an independent and identically-distributed source may be compressed almost arbitrarily
close to its entropy.
Some examples of well-known variable-length coding strategies are Huffman coding, Lempel–
Ziv coding, arithmetic coding, and context-adaptive variable-length coding.
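As an example, a Huffman code (one of the variable-length strategies listed above) can be built with a heap; the symbol probabilities below are illustrative:

```python
import heapq
from itertools import count

def huffman_codes(freqs):
    """Build a Huffman code for {symbol: probability}."""
    tiebreak = count()                      # keeps heap comparisons well-defined
    heap = [(p, next(tiebreak), sym) for sym, p in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:                    # repeatedly merge the two rarest nodes
        p1, _, left = heapq.heappop(heap)
        p2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (p1 + p2, next(tiebreak), (left, right)))
    codes = {}
    def walk(node, prefix):                 # read codewords off the merge tree
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix or "0"
    walk(heap[0][2], "")
    return codes

probs = {"A": 0.5, "B": 0.25, "C": 0.125, "D": 0.125}
codes = huffman_codes(probs)
avg = sum(p * len(codes[s]) for s, p in probs.items())
```

For these dyadic probabilities the average codeword length (1.75 bits) exactly equals the source entropy, illustrating compression "arbitrarily close to entropy" for an i.i.d. source.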
7.b) Explain the pulse shaping concept to achieve optimum transmissions. [5M]
Need for Pulse shaping:
Transmitting a signal at high modulation rate through a band-limited channel can create
intersymbol interference. As the modulation rate increases, the signal's bandwidth increases.
When the signal's bandwidth becomes larger than the channel bandwidth, the channel starts to
introduce distortion to the signal. This distortion usually manifests itself as intersymbol
interference.
In electronics and telecommunications, pulse shaping is the process of changing the waveform
of transmitted pulses. Its purpose is to make the transmitted signal better suited to its purpose or
the communication channel, typically by limiting the effective bandwidth of the transmission.
By filtering the transmitted pulses this way, the intersymbol interference caused by the channel
can be kept in control. In RF communication, pulse shaping is essential for making the signal fit
in its frequency band. Typically pulse shaping occurs after line coding and modulation.
Pulse shaping filters:
Not every filter can be used as a pulse shaping filter. The filter itself must not introduce
intersymbol interference, so it needs to satisfy certain criteria. The Nyquist ISI criterion is a
commonly used criterion for evaluation, because it relates the frequency spectrum of the
transmitted signal to intersymbol interference.
Examples of pulse shaping filters that are commonly found in communication systems are:
 Sinc shaped filter
 Raised-cosine filter
 Gaussian filter

Sender side pulse shaping is often combined with a receiver side matched filter to achieve
optimum tolerance for noise in the system. In this case the pulse shaping is equally distributed
between the sender and receiver filters. The filters' amplitude responses are thus pointwise
square roots of the system filters.
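The raised-cosine filter listed above can be sketched directly from its impulse-response formula; sampling it at the symbol instants confirms the Nyquist ISI criterion (the roll-off value is illustrative):

```python
import math

def raised_cosine(t, beta, T=1.0):
    """Raised-cosine impulse response h(t); beta is the roll-off factor."""
    if abs(t) < 1e-12:
        return 1.0
    if beta > 0 and abs(abs(t) - T / (2 * beta)) < 1e-12:
        # removable singularity of the closed-form expression
        return (math.pi / 4) * math.sin(math.pi / (2 * beta)) / (math.pi / (2 * beta))
    sinc = math.sin(math.pi * t / T) / (math.pi * t / T)
    return sinc * math.cos(math.pi * beta * t / T) / (1 - (2 * beta * t / T) ** 2)

# Nyquist ISI criterion: h(0) = 1 and h(kT) = 0 at every other symbol instant
taps = [raised_cosine(k, beta=0.35) for k in range(-4, 5)]
```

Because the response passes through zero at all non-zero symbol instants, neighboring pulses contribute nothing at each sampling point, which is exactly why this filter avoids intersymbol interference.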
8.a) Decode a convolution code using state diagram. [5M]
Convolutional Code Encoder Structure:
A convolutional code introduces redundant bits into the data stream through the use of linear shift
registers as shown in Figure below.
Fig: Convolution Encoder.
The information bits are input into shift registers and the output encoded bits are obtained by modulo-2
addition of the input information bits and the contents of the shift registers.
The code rate r for a convolutional code is defined as r = k/n.
Encoder Representations.
The Convolution encoder can be represented in several different but equivalent ways. They are

1. Generator Representation
2. Tree Diagram Representation
3. State Diagram Representation
4. Trellis Diagram Representation

The two generator vectors for the encoder in above Figure are g1 = [111] and g2 = [101] where the
subscripts 1 and 2 denote the corresponding output terminals.

State Diagram Representation:


The state diagram shows the state information of a convolutional encoder. The state information of a
convolutional encoder is stored in the shift registers. Below Figure shows the state diagram of the
encoder.

Fig: State diagram

In the state diagram, the state information of the encoder is shown in the circles. Each new input
information bit causes a transition from one state to another. The path information between the states,
denoted as x/c, represents input information bit x and output encoded bits c. It is customary to begin
convolutional encoding from the all zero state.

For example, the input information sequence x = {1011} (begin from the all zero state) leads to the state
transition sequence s = {10, 01, 10, 11} and produces the output encoded sequence c = {11, 10, 00, 01}.
Figure below shows the path taken through the state diagram for the given example.
Figure: The state transitions (path) for input information sequence {1011}
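The worked example can be verified in code; a sketch of the encoder with g1 = [111] and g2 = [101]:

```python
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Rate-1/2 convolutional encoder with generators g1 = [111], g2 = [101].
    The state is the two previous input bits held in the shift register."""
    s1 = s2 = 0                                   # begin from the all-zero state
    out = []
    for b in bits:
        c1 = (b * g1[0]) ^ (s1 * g1[1]) ^ (s2 * g1[2])   # modulo-2 additions
        c2 = (b * g2[0]) ^ (s1 * g2[1]) ^ (s2 * g2[2])
        out.append((c1, c2))
        s1, s2 = b, s1                            # state transition
    return out

# Input {1011} from the all-zero state -> encoded {11, 10, 00, 01}
encoded = conv_encode([1, 0, 1, 1])
```

The output matches the state-transition sequence s = {10, 01, 10, 11} traced through the state diagram above.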

8.b) Explain viterbi algorithm. [5M]

The Viterbi algorithm is named after Andrew Viterbi, who proposed it in 1967 as a decoding
algorithm for convolutional codes over noisy digital communication links. The Viterbi algorithm
is a dynamic programming algorithm for finding the most likely sequence of hidden states called
the Viterbi path that results in a sequence of observed events. The algorithm has found universal
application in decoding the convolutional codes.

The Viterbi Decoder:

The decoding algorithm uses two metrics: the branch metric (BM) and the path metric (PM). The
branch metric is a measure of the “distance” between what was transmitted and what was
received, and is defined for each arc in the trellis. In hard decision decoding, where we are given
a sequence of digitized parity bits, the branch metric is the Hamming distance between the
expected parity bits and the received ones.

Figure: The branch metric for hard decision decoding.


In this example, the receiver gets the parity bits 00.

An example is shown in Figure above, where the received bits are 00. For each state transition,
the number on the arc shows the branch metric for that transition. Two of the branch metrics are
0, corresponding to the only states and transitions where the corresponding Hamming distance is
0. The other non-zero branch metrics correspond to cases when there are bit errors. The path
metric is a value associated with a state in the trellis (i.e., a value associated with each node). For
hard decision decoding, it corresponds to the Hamming distance over the most likely path from
the initial state to the current state in the trellis. By “most likely”, we mean the path with smallest
Hamming distance between the initial state and the current state, measured over all possible
paths between the two states. The path with the smallest Hamming distance minimizes the total
number of bit errors, and is most likely when the BER is low.

The key insight in the Viterbi algorithm is that the receiver can compute the path metric
for a (state, time) pair incrementally using the path metrics of previously computed states
and the branch metrics.

Metric: the discrepancy between the received signal y and the decoded signal at a particular
node.

Surviving path: the path of the decoded signal with minimum metric.

Finding the Most Likely Path:


1. Add the branch metric to the path metric for the old state.
2. Compare the sums for paths arriving at the new state.
3. Select the path with the smallest value, breaking ties arbitrarily. This path corresponds to the
one with fewest errors.
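The add-compare-select steps above can be sketched in Python. This is a minimal hard-decision decoder for the common rate-1/2, constraint-length-3 convolutional code with generators 7 and 5 (octal); the code choice and the test message are illustrative assumptions, not taken from the question.

```python
from itertools import product

G = [0b111, 0b101]          # generator taps (7, 5 octal), an assumed example

def encode_bit(state, bit):
    """Return (parity bits, next state) for one input bit."""
    reg = (bit << 2) | state                  # shift the new bit into the register
    parity = tuple(bin(reg & g).count("1") % 2 for g in G)
    return parity, reg >> 1                   # next state keeps the two newest bits

def viterbi_decode(received):
    """received: list of (p0, p1) hard-decision parity pairs."""
    INF = float("inf")
    pm = {0: 0}                               # path metric per state; start in state 0
    paths = {0: []}
    for r in received:
        new_pm, new_paths = {}, {}
        for state, bit in product(pm, (0, 1)):
            parity, nxt = encode_bit(state, bit)
            # branch metric = Hamming distance between expected and received bits
            bm = sum(a != b for a, b in zip(parity, r))
            cand = pm[state] + bm             # 1. add
            if cand < new_pm.get(nxt, INF):   # 2. compare, 3. select
                new_pm[nxt] = cand
                new_paths[nxt] = paths[state] + [bit]
        pm, paths = new_pm, new_paths
    best = min(pm, key=pm.get)                # survivor with the smallest path metric
    return paths[best]

# Encode 1011, flip one parity bit, and decode.
msg = [1, 0, 1, 1]
state, tx = 0, []
for b in msg:
    p, state = encode_bit(state, b)
    tx.append(p)
tx[1] = (1 - tx[1][0], tx[1][1])              # introduce a single bit error
print(viterbi_decode(tx))                     # recovers [1, 0, 1, 1]
```

Even with one corrupted parity bit, the survivor path with the smallest accumulated Hamming distance still corresponds to the transmitted message.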

9.a) Explain how to calculate syndrome in cyclic codes. [5M]

In coding theory, a cyclic code is a block code in which every circular shift of a codeword yields
another codeword. Cyclic codes are error-correcting codes with algebraic properties that make
error detection and correction efficient.

If 00010111 is a valid codeword, applying a right circular shift gives the string 10001011. If the code
is cyclic, then 10001011 is again a valid codeword. In general, applying a right circular shift moves
the least significant bit (LSB) to the leftmost position, so that it becomes the most significant bit
(MSB); the other positions are shifted by 1 to the right.
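The shift example above can be checked with a few lines of Python; the length-3 even-weight code used to illustrate closure under shifts is an assumed toy example.

```python
def right_shift(w):
    """Right circular shift: the LSB becomes the MSB."""
    return w[-1] + w[:-1]

print(right_shift("00010111"))        # "10001011", as in the text

# A tiny cyclic code: the length-3 even-weight code (assumed example).
code = {"000", "011", "101", "110"}
print(all(right_shift(c) in code for c in code))   # True: closed under shifts
```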
The syndrome computation may be performed by shift registers similar in form to that used for
encoding.

To elaborate, let us consider a systematic cyclic code and let us represent the received code vector R
by the polynomial R(x). In general, R = C + E, where C is the transmitted code word and E is the
error vector. Hence, we have

R(x) = C(x) + E(x)

= D(x) g(x) + E(x) -------Equation 1


where D(x) is the message polynomial.

Now, suppose we divide R(x) by the generator polynomial g(x). This division will yield

R(x)/g(x) = Q(x) + S(x)/g(x)

or, equivalently,

R(x) = Q(x) g(x) + S(x) -------Equation 2

The remainder S(x) is a polynomial of degree less than or equal to n − k − 1. If we combine Equation 1
with Equation 2, we obtain

E(x) = [D(x) + Q(x)] g(x) + S(x)

This relationship illustrates that the remainder S(x) obtained from dividing R(x) by g(x) depends only
on the error polynomial E(x), and, hence, S(x) is simply the syndrome associated with the error
pattern E. Therefore,

R(x) = Q(x) g(x) + S(x)

where S(x) is the syndrome polynomial of degree less than or equal to n − k − 1. If g(x) divides
R(x) exactly, then S(x) = 0 and the received word is decoded as Ĉ = R.

The division of R(x) by the generator polynomial g(x) may be carried out by means of a shift
register.
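The division of R(x) by g(x) can be sketched in Python with polynomials stored as integers (bit i holding the coefficient of x^i). The generator g(x) = x^3 + x + 1 of the Hamming (7,4) code and the specific codeword are assumed for illustration.

```python
def gf2_mod(dividend, divisor):
    """Remainder of binary polynomial division; polynomials are stored
    as integers with bit i holding the coefficient of x^i."""
    while dividend.bit_length() >= divisor.bit_length():
        shift = dividend.bit_length() - divisor.bit_length()
        dividend ^= divisor << shift          # GF(2) subtraction is XOR
    return dividend

g = 0b1011              # g(x) = x^3 + x + 1, the Hamming (7,4) generator

c = 0b1001110           # a valid codeword: divisible by g(x)
e = 0b0010000           # single-bit error pattern E(x) = x^4
r = c ^ e               # received word R = C + E (mod-2 addition)

print(gf2_mod(c, g))                          # 0: codewords leave no remainder
print(gf2_mod(r, g) == gf2_mod(e, g))         # True: syndrome depends only on E
```

As Equation 1 and Equation 2 show, the code part D(x) g(x) of R(x) divides out exactly, so the remainder is determined solely by the error polynomial.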
9.b) Explain the error detection and error correction capabilities of linear block codes. [5M]
In order to determine the error-detecting and correcting capabilities of linear block codes, we
first need to define the Hamming weight of a codeword, the Hamming distance between two
codewords, and the minimum distance dmin of a block code.
The Hamming weight of a codeword C is defined as the number of non-zero elements (i.e., 1s) in
the codeword.
The Hamming distance between two codewords is defined as the number of elements in which
they differ.
The minimum distance dmin of a linear block code is the smallest Hamming distance between any
two different codewords, and it is equal to the minimum Hamming weight of the non-zero
codewords in the code.
A linear block code of minimum distance dmin can detect up to t errors if and only if dmin ≥ t + 1,
and can correct up to t errors if and only if dmin ≥ 2t + 1.
A noisy channel introduces an error pattern:

Let c be the transmitted codeword and r the received word, where r = c + e.

Here w(e) is the number of errors that occurred during transmission, and e is called the error
pattern.
For a code with minimum distance dmin, any two distinct codewords differ in at least dmin
places.
No error pattern of weight dmin − 1 or less can change one codeword into another.
Any error pattern of size at most dmin − 1 can be detected.
Some error patterns of size dmin or larger can be detected.
If c + e ∈ C and e ≠ 0, the error cannot be detected. Because the code is linear, this happens
exactly when the error pattern e is itself a non-zero codeword. So, for an (n, k) code, there are
exactly 2^n − 2^k error patterns that can be detected, and the 2^k − 1 non-zero codewords are
the undetectable error patterns.
Error detection fails only when the error pattern is identical to a non-zero codeword; in that
case c + e ∈ C and the error goes unnoticed.
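These bounds can be illustrated with a short Python sketch; the (3,1) repetition code is an assumed toy example, not one from the question.

```python
from itertools import product

# Minimum distance of a small code and the resulting detection/correction
# bounds, using the (3,1) repetition code (assumed example).
codewords = ["000", "111"]

def hamming(a, b):
    """Number of positions in which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

dmin = min(hamming(a, b)
           for a, b in product(codewords, repeat=2) if a != b)

detect = dmin - 1            # detects up to dmin - 1 errors
correct = (dmin - 1) // 2    # corrects up to floor((dmin - 1) / 2) errors

print(dmin, detect, correct)     # 3 2 1
```

With dmin = 3 the repetition code detects any 2 errors and corrects any single error, matching the conditions dmin ≥ t + 1 and dmin ≥ 2t + 1.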

10.a) With a neat sketch explain the DSSS concept. [5M]


Direct Sequence Spread Spectrum:
DSSS, direct sequence spread spectrum, is a form of spread spectrum transmission which uses
spreading codes to spread the signal out over a wider bandwidth than would normally be
required.

Fig: DSSS Modulation & Demodulation.


When transmitting a DSSS spread spectrum signal, the required data signal is multiplied with what is
known as a spreading or chip code data stream. The resulting data stream has a higher data rate than the
data itself. Often the data is multiplied using the XOR (exclusive OR) function.
Each bit in the spreading sequence is called a chip, and it is much shorter than an information bit.
The spreading sequence, or chip sequence, has the same rate as the final output of the spreading
multiplier. This rate is called the chip rate and is often measured in Mchips/s (megachips per
second).
The baseband data stream is then modulated onto a carrier; in this way the overall signal is
spread over a much wider bandwidth than if the data had simply been modulated onto the carrier,
because signals with high data rates occupy wider bandwidths than those with low data rates.
To decode the signal and recover the original data, the received signal is first demodulated from
the carrier to reconstitute the high-speed data stream. This stream is multiplied with the spreading
code to regenerate the original data. Only the data that was generated with the same spreading
code is recovered; all data generated from different spreading codes is ignored.
The use of direct sequence spread spectrum is a powerful principle and has many advantages.
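The spreading and despreading described above can be sketched at the bit level in Python; the 4-chip spreading code is an assumed example.

```python
# Bit-level DSSS sketch: each data bit is XORed with every chip of a
# short spreading code, raising the rate by the spreading factor.
chips = [1, 0, 1, 1]                         # hypothetical spreading code

def spread(bits):
    """Multiply (XOR) each data bit with the chip sequence."""
    return [b ^ c for b in bits for c in chips]

def despread(chip_stream):
    """XOR with the same code; every chip position yields the data bit."""
    out = []
    for i in range(0, len(chip_stream), len(chips)):
        block = chip_stream[i:i + len(chips)]
        out.append(block[0] ^ chips[0])      # any chip position works
    return out

data = [1, 0, 1]
tx = spread(data)                            # chip rate = 4x the bit rate
print(despread(tx) == data)                  # True: original data recovered
```

A receiver that applies a different chip sequence would not cancel the spreading, which is exactly why only the matching code regenerates the data.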

10.b) What is the necessity of synchronization in spread spectrum systems? [5M]


Code synchronization is the process of achieving and maintaining proper alignment between the
reference code in a spread spectrum receiver and the spreading sequence that has been used in
the transmitter to spread the information bits. Usually, code synchronization is achieved in two
stages: a) code acquisition and b) code tracking. Acquisition is the process of initially attaining
coarse alignment (typically within ± half of the chip duration), while tracking ensures that fine
alignment within a chip duration is maintained.

Code acquisition:
Initially, neither the phase nor the frequency of the spreading code of the received signal is
known to the receiver. During acquisition, the receiver tries to find this phase and frequency.
Code acquisition is essentially a search through an uncertainty region, which may be
one-dimensional (in time alone) or two-dimensional (in time and frequency, if there is drift in
the carrier frequency due to the Doppler effect, etc.), until the correct code phase is found. The
uncertainty region is divided into a number of cells. In the one-dimensional case, one cell may be
as small as a fraction of a PN chip interval.

A basic operation, often carried out during the process of code acquisition is the correlation of
the incoming spread signal with a locally generated version of the spreading code sequence
(obtained by multiplying the incoming and local codes and accumulating the result) and
comparing the correlator output with a set threshold to decide whether the codes are in phase or
not.
The output of the local code generator is a time-shifted version p(t + tau) of the code sequence
used by the transmitter. The bandpass filter (BPF) is centered at the carrier frequency of the
received signal and has a bandwidth equal to the data bandwidth. It filters the product of the
received signal and the locally generated code. The output of the integrator is proportional to the
autocorrelation function of the code, with offset tau. Then the integrator output is compared
against a threshold V_T. If the signal is below the threshold, the phase or the frequency of the
locally generated code will be increased to the next cell. If the signal is above the threshold, then
the correct code phase and frequency have been found and the receiver will start tracking.
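The serial search over cells can be sketched in Python; the bipolar code, the true offset, and the threshold value are illustrative assumptions.

```python
# Serial-search acquisition sketch: correlate the received chip stream
# with each shifted replica of the local code and compare against a
# threshold.
code = [1, -1, 1, 1, -1, 1, -1, -1]           # bipolar spreading sequence

def acquire(rx, code, threshold):
    """Return the first cell (code phase) whose correlation crosses
    the threshold, or None if no cell does."""
    for cell in range(len(code)):
        local = code[cell:] + code[:cell]     # shifted local replica
        if sum(a * b for a, b in zip(rx, local)) >= threshold:
            return cell                       # acquired: start tracking
    return None

true_offset = 3
rx = code[true_offset:] + code[:true_offset]  # received, phase unknown

print(acquire(rx, code, len(code) - 1))       # finds offset 3
```

Only the correctly aligned replica produces a large correlation peak; misaligned cells give small values that stay below the threshold, so the search advances to the next cell.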

Tracking:
Several tracking techniques exist. One well known system is the delay-lock code-tracking loop,
normally abbreviated to DLL. Its operation will be explained in the following example.

Figure: Block diagram of a DLL.

The received signal is divided (by a constant factor of two to remove BPSK modulation) and
then multiplied with an early (half-chip early) and a late (half-chip late) version of the locally
generated code. Both signals are then bandpass filtered; this gives the "early" and "late"
correlation signals. After filtering, these correlation signals are envelope-detected, and the
outputs of the envelope detectors are subtracted from each other. This results in an error signal
(see Figure) which is proportional to

|Rc(τ + Tc/2)| − |Rc(τ − Tc/2)|
After filtering, this error signal can be used to control a voltage-controlled oscillator which
drives the local code generator. Its output chip rate will increase when the local code lags the
received code and decrease when the local code leads the received one.
11.a) Explain the CDMA technique in detail. [5M]

CDMA stands for Code Division Multiple Access. CDMA is a key technology used in several 3G
wireless standards, for instance WCDMA (Wideband Code Division Multiple Access).

Fundamentally, CDMA is a multiple access technology: CDMA systems allow users to transmit
data simultaneously over the same frequency band.

CDMA is a form of direct sequence spread spectrum communication, in which the transmitted data
is coded at a much higher rate than the data rate. In spread spectrum, the signal bandwidth is much
larger than strictly necessary; this supports multi-user access, and the large bandwidth limits the
interference between users. Multi-user access is achieved with pseudo-random codes: a
pseudo-random code is generated by a special coding generator to separate the different users.

CDMA uses codes for multiple access: different users are allocated different codes for
transmission over the radio channel. This can be explained with a simple two-user example.

Consider a CDMA scenario with 2 users, user 0 and user 1, and allocate the following codes:

c0 = [1, 1, 1, 1] for user 0 and c1 = [1, −1, −1, 1] for user 1

Each code contains 4 elements; these elements are known as chips. Further, let

a0 = symbol of user 0

a1 = symbol of user 1

Each user's symbol is multiplied by the corresponding code, and the two resulting signals are
added. Multiplying the symbol of each user with the respective code gives
a0 × c0 = a0 × [1, 1, 1, 1] = [a0, a0, a0, a0]
a1 × c1 = a1 × [1, −1, −1, 1] = [a1, −a1, −a1, a1]

The combined signal at the Base Station is generated by adding these two signals:

sum = a0 × c0 + a1 × c1

= [a0, a0, a0, a0] + [a1, −a1, −a1, a1]

= [a0 + a1, a0 − a1, a0 − a1, a0 + a1]

This is the combined signal for the two users, user 0 and user 1. It is transmitted over the
common radio channel, and from this sum signal each user has to extract his own symbol: user 0
extracts a0 and user 1 extracts a1.

By correlating the received signal with c0, the code of user 0 (an element-by-element
multiplication with the code followed by a sum), user 0 extracts 4a0, which is a scaled version of
his symbol a0, from the common signal transmitted over the radio channel. Similarly, user 1
correlates with his code c1 and extracts his symbol a1.
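The two-user example above can be verified numerically; the BPSK symbol values a0 = 1 and a1 = −1 are assumed for illustration.

```python
# The two-user CDMA example from the text, verified numerically.
c0 = [1, 1, 1, 1]                 # code of user 0
c1 = [1, -1, -1, 1]               # code of user 1

a0, a1 = 1, -1                    # assumed BPSK symbols of user 0 and user 1

# Spread each symbol with its code and form the combined (sum) signal.
combined = [a0 * x + a1 * y for x, y in zip(c0, c1)]

def correlate(signal, code):
    """Element-by-element multiplication followed by a sum."""
    return sum(s * c for s, c in zip(signal, code))

print(correlate(combined, c0))    # 4*a0 = 4: user 0 recovers his symbol
print(correlate(combined, c1))    # 4*a1 = -4: user 1 recovers his symbol
```

The recovery works because c0 and c1 are orthogonal (their inner product is 0), so each correlation cancels the other user's contribution entirely.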
11.b) Describe the FHSS concept briefly. [5M]
In a frequency hopping (FH) system, the frequency changes from chip to chip. An example FH
signal is shown in Fig below.

Fig. Illustration of the principle of frequency hopping.
Frequency hopping systems can be divided into fast-hop and slow-hop systems. In a fast-hop FH
system the hopping rate is greater than the message bit rate, while in a slow-hop system the
hopping rate is smaller than the message bit rate; the two types behave quite differently. The FH
receiver is usually non-coherent. A typical non-coherent receiver architecture is represented in
the Fig below.

The incoming signal is multiplied by the signal from a PN generator identical to the one at the
transmitter. The resulting signal from the mixer is a binary FSK signal, which is then
demodulated in the usual way. Error correction is then applied in order to recover the original
signal. Timing synchronization is accomplished through the use of early-late gates, which control
the clock frequency.
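The hop-sequence generation can be sketched in Python, with a seeded random generator standing in for the PN generator; the channel list and seed are illustrative assumptions.

```python
import random

# FHSS sketch: a seeded generator (standing in for the PN generator)
# selects the carrier frequency for each hop; a receiver with the same
# seed reproduces the identical hop sequence and stays synchronized.
channels = [902.0, 904.0, 906.0, 908.0, 910.0]    # carrier frequencies, MHz

def hop_sequence(seed, n_hops):
    rng = random.Random(seed)                     # deterministic PN stand-in
    return [rng.choice(channels) for _ in range(n_hops)]

tx_hops = hop_sequence(42, 6)                     # transmitter's hop pattern
rx_hops = hop_sequence(42, 6)                     # receiver's local copy
print(tx_hops == rx_hops)                         # True: sequences match
```

A receiver with a different seed would produce an uncorrelated hop pattern and never dwell on the right channel, which is what makes the signal hard to intercept or jam.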
