1.a) Check whether the following systems are causal or not: [7M]
i) y(n) = x(n) + 1/(2 x(n−2)) ii) y(n) = x(−2n).
i) Given y(n) = x(n) + 1/(2 x(n−2))
For any n, the output depends only on the present input x(n) and the past input x(n−2).
Therefore, the system is causal.
ii) Given y(n) = x(–2n)
For n = -1, y(-1) = x(2)
For n = 0, y(0) = x(0)
For n = 1, y(1) = x(-2)
For negative values of n, the output depends on the future values of input.
Therefore, the system is non-causal.
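The same check can be sketched in code (an illustrative aid, not part of the standard answer): for y(n) = x(−2n), we list which input sample each output sample needs and flag any future dependence.

```python
# Causality check sketch: a system is causal iff y(n) depends only on x(k) with k <= n.
def needed_input_index(n):
    """For the system y(n) = x(-2n), the output at time n reads x(-2n)."""
    return -2 * n

for n in [-2, -1, 0, 1, 2]:
    k = needed_input_index(n)
    status = "past/present sample" if k <= n else "FUTURE sample -> non-causal"
    print("y({}) needs x({}): {}".format(n, k, status))
```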
There are many advantages of digital communication over analog communication, such as:
The effect of distortion, noise, and interference is much less on digital signals.
Digital circuits are more reliable.
Digital circuits are easy to design and cheaper than analog circuits.
The hardware implementation of digital circuits is more flexible than analog.
The occurrence of cross-talk is very rare in digital communication.
Signal processing functions such as encryption and compression are employed in digital
circuits to maintain the secrecy of the information.
The probability of error occurrence is reduced by employing error detecting and error
correcting codes.
The spread spectrum technique is used to avoid signal jamming.
Combining digital signals using Time Division Multiplexing (TDM) is easier than
combining analog signals using Frequency Division Multiplexing (FDM).
For no noise, S/N → ∞ and an infinite information rate is possible irrespective of bandwidth.
Thus, we may trade off bandwidth for SNR.
For example, if S/N = 7 and B = 4 kHz, then the channel capacity is C = B log2(1 + S/N) = 12 × 10³ bits/s.
If the SNR increases to S/N = 15 and B is decreased to 3 kHz, the channel capacity remains the same.
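The bandwidth-for-SNR trade-off can be checked with a short sketch of the Shannon-Hartley formula:

```python
import math

def channel_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley capacity C = B * log2(1 + S/N) in bits/s."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# The two cases from the text give the same capacity.
c1 = channel_capacity(4000, 7)    # B = 4 kHz, S/N = 7
c2 = channel_capacity(3000, 15)   # B = 3 kHz, S/N = 15
print(c1, c2)  # 12000.0 12000.0
```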
c) Define phase locked loop. [2M]
A phase-locked loop or phase lock loop (PLL) is a control system that generates an output signal
whose phase is related to the phase of an input signal. There are several different types of PLL;
the simplest is an electronic circuit consisting of a variable frequency oscillator and a phase
detector in a feedback loop. The oscillator generates a periodic signal, and the phase detector
compares the phase of that signal with the phase of the input periodic signal, adjusting the
oscillator to keep the phases matched.
Digital modulation involves transmission of binary signals (0 and 1). This method is divided into
single carrier modulation, in which the carrier occupies the entire bandwidth (e.g., ASK, PSK,
FSK, DPSK), and multicarrier schemes that modulate and transmit different data on multiple
carriers (e.g., QPSK, 8-PSK, 16-PSK, and QAM).
e) Define entropy. [2M]
Entropy can be defined as a measure of the average information content per source symbol.
In coding theory, block codes are a large and important family of error-correcting codes that
encode data in blocks. A block code acts on a block of k bits of input data to produce n bits of
output data, where n is greater than k. The transmitter adds n−k redundant bits. The
ratio k/n is the code rate; it is denoted by r, and r < 1.
Examples of block codes are Reed–Solomon codes, Hamming codes, Hadamard codes,
Expander codes, Golay codes, and Reed–Muller codes.
The error in the delta-modulated signal is
ep(nTs) = x(nTs) − xˆ(nTs) ---------equation 1
= x(nTs) − u([n−1]Ts)
= x(nTs) − [xˆ([n−1]Ts) + v([n−1]Ts)] ---------equation 2
Further, the quantizer output is
v(nTs) = eq(nTs) = S · sig[ep(nTs)] ---------equation 3
Which means, the present input of the delay unit = the previous output of the delay
unit + the present quantizer output, i.e.,
u(nTs) = u([n−1]Ts) + v(nTs) ---------equation 4
Assuming a zero initial condition of the accumulator,
u(nTs) = S Σ sig[ep(jTs)] = Σ v(jTs), with the sums taken over j = 1 to n ---------equation 5
and the delay unit output (the staircase approximation) is
xˆ(nTs) = u([n−1]Ts) ---------equation 6
Fig: PCM-Waveform
Fig: Adaptive-Delta-Modulation-Transmitter.
The transmitter circuit consists of a summer, quantizer, Delay circuit, and a logic circuit for step
size control. The baseband signal X(nTs) is given as input to the circuit. The feedback circuit
present in the transmitter is an Integrator. The integrator generates the staircase approximation of
the previous sample.
At the summer circuit, the difference between the present sample and staircase approximation of
previous sample e(nTs) is calculated. This error signal is passed to the quantizer, where a
quantized value is generated. The step size control block controls the step size of the next
approximation based on whether the quantized value is high or low. The quantized signal is given
as output.
At the receiver end, demodulation takes place. The receiver has two parts. The first part is the step
size control. Here the received signal is passed through a logic step size control block, where the
step size is produced from each incoming bit. Step size is decided based on present and previous
input. In the second part of the receiver, the accumulator circuit recreates the staircase signal.
This waveform is then applied to a low pass filter which smoothens the waveform and recreates
the original signal.
Fig: Adaptive-Delta-Modulation-Waveform
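The accumulator equations above can be sketched as a linear delta modulator with a fixed step (an adaptive version would let the step-size control block scale the step; the sample values and step are assumed for illustration):

```python
import math

def delta_modulate(x, step):
    """Linear delta modulation: one bit per sample.
    v(n) = +step if x(n) > u(n-1) else -step;  u(n) = u(n-1) + v(n)."""
    u_prev = 0.0
    bits, staircase = [], []
    for sample in x:
        v = step if sample > u_prev else -step
        bits.append(1 if v > 0 else 0)
        u_prev += v            # accumulator output = staircase approximation
        staircase.append(u_prev)
    return bits, staircase

# Demodulation is the same accumulator followed by a low-pass filter.
x = [math.sin(2 * math.pi * n / 32) for n in range(64)]
bits, approx = delta_modulate(x, step=0.2)
```

With the step chosen slightly above the maximum per-sample slope of the input, the staircase tracks the sine without slope overload.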
If the characteristics of the quantizer are non-linear, the step size is variable rather than
constant; this is known as non-uniform quantization.
As we know, in non-uniform quantization the step size varies according to the signal level. If the
signal level is low, the step size will be small. So the step size will be low for a weak signal, and
thus the quantization noise will also be low.
So, in order to maintain proper signal to quantization noise ratio, the step size must be variable
according to the signal level. Thus in order to achieve non-uniform quantization the process of
companding is used.
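Companding can be sketched with the μ-law characteristic (μ = 255 is the standard value; the sample amplitude is an assumed illustration):

```python
import math

MU = 255.0  # standard mu value for mu-law companding

def mu_law_compress(x):
    """mu-law compressor: weak signals are boosted before a uniform quantizer,
    so the overall effect is non-uniform quantization. |x| <= 1 assumed."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_law_expand(y):
    """Inverse (expander) characteristic used at the receiver."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

x = 0.01                      # weak signal level
y = mu_law_compress(x)        # mapped to a much larger level before quantizing
print(round(y, 3), round(mu_law_expand(y), 3))
```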
4.a) With a neat block diagram describe the ASK modulator. [5M]
ASK is a type of Amplitude Modulation which represents the binary data in the form of
variations in the amplitude of a signal.
When ASK modulated, the binary signal gives a zero output for a Low input, while it gives
the carrier output for a High input.
The following figure represents ASK modulated waveform along with its input.
ASK Modulator:
The ASK modulator block diagram comprises the carrier signal generator, the binary sequence
from the message signal and the band-limited filter. Following is the block diagram of the ASK
Modulator.
ASK Demodulator:
The ASK modulated input signal is given to the square law detector. A square law detector is
one whose output voltage is proportional to the square of the amplitude modulated input voltage.
The low pass filter minimizes the higher frequencies. The comparator and the voltage limiter
help to get a clean digital output.
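The on-off keying behaviour can be sketched as follows (sample counts and carrier frequency are assumed; the band-limiting filter of the block diagram is omitted for brevity):

```python
import math

def ask_modulate(bits, samples_per_bit=32, cycles_per_bit=4):
    """On-off keying: transmit the carrier for a 1, nothing for a 0."""
    wave = []
    for bit in bits:
        for n in range(samples_per_bit):
            carrier = math.cos(2 * math.pi * cycles_per_bit * n / samples_per_bit)
            wave.append(carrier if bit else 0.0)
    return wave

wave = ask_modulate([1, 0, 1])
# The middle bit interval is silent; the outer intervals carry the carrier.
```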
4.b) Draw and explain coherent PSK detection process. [5M]
PSK is the digital modulation technique in which the phase of the carrier signal is changed by
varying the sine and cosine inputs at a particular time. PSK technique is widely used for wireless
LANs, biometrics, contactless operations, along with RFID and Bluetooth communications.
By recovering the band-limited message signal, with the help of the mixer circuit and the band
pass filter, the first stage of demodulation gets completed. The base band signal which is band
limited is obtained and this signal is used to regenerate the binary message bit stream.
In the next stage of demodulation, the bit clock rate is needed at the detector circuit to produce
the original binary message signal. If the bit rate is a sub-multiple of the carrier frequency, then
the bit clock regeneration is simplified. To make the circuit easily understandable, a decision-
making circuit may also be inserted at the 2nd stage of detection.
From the above figure, it is evident that the balanced modulator is given the DPSK signal along
with 1-bit delay input. That signal is made to confine to lower frequencies with the help of LPF.
Then it is passed to a shaper circuit, which is a comparator or a Schmitt trigger circuit, to
recover the original binary data as the output.
The following figure represents the model waveform of DPSK.
It is seen from the above figure that, if the data bit is Low i.e., 0, then the phase of the signal is
not reversed, but continued as it was. If the data is a High i.e., 1, then the phase of the signal is
reversed, as with NRZI (invert on 1), a form of differential encoding.
If we observe the above waveform, we can say that the High state represents an M in the
modulating signal and the Low state represents a W in the modulating signal.
6.a) Describe the operating principle of baseband signal receiver. [5M]
A baseband signal or lowpass signal is a signal that can include frequencies that are very near
zero, by comparison with its highest frequency.
A baseband bandwidth is equal to the highest frequency of a signal or system, or an upper bound
on such frequencies, for example the upper cut-off frequency of a low-pass filter.
A basic problem that often arises in the study of communication systems is that of detecting a
pulse transmitted over a channel that is corrupted by channel noise. This problem is solved by the
matched filter receiver.
In signal processing, a matched filter is obtained by correlating a known delayed signal, with an
unknown signal to detect the presence of the known delayed signal in the unknown signal. This
is equivalent to convolving the unknown signal with a conjugated time-reversed version of the
template. The matched filter is the optimal linear filter for maximizing the signal-to-noise ratio
(SNR) in the presence of additive stochastic noise.
Matched filters are commonly used in radar, in which a known signal is sent out, and the
reflected signal is examined for common elements of the out-going signal. Pulse compression is
an example of matched filtering. It is so called because the impulse response is matched to input
pulse signals.
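The correlation view of the matched filter can be sketched numerically (the pulse shape and noise level are assumed for illustration):

```python
import random

random.seed(1)

def matched_filter_output(received, template):
    """Correlate the received samples with the known pulse template
    (equivalent to convolving with the time-reversed template)."""
    return sum(r * t for r, t in zip(received, template))

pulse = [1.0, 1.0, 1.0, 1.0, -1.0, -1.0, -1.0, -1.0]   # known transmitted pulse
noise = [random.gauss(0, 0.5) for _ in pulse]

present = [p + n for p, n in zip(pulse, noise)]        # pulse buried in noise
absent = noise                                          # noise alone

# The correlator outputs differ by the pulse energy (8 here), which is what
# lets the receiver decide whether the pulse is present.
print(matched_filter_output(present, pulse), matched_filter_output(absent, pulse))
```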
6.b) Explain the conditional entropy and redundancy for information. [5M]
Entropy:
Entropy can be defined as a measure of the average information content per source
symbol. Claude Shannon, the “father of Information Theory”, provided a formula for it as
H = − Σ p_i log2(p_i) bits/symbol, where p_i is the probability of the i-th source symbol.
Properties:
1. Conditional entropy equals zero.
H(Y/X) = 0 if and only if the value of Y is completely determined by the value of X.
2. Conditional entropy of independent random variables
H(Y/X) = H(Y) & H(X/Y) = H(X) if and only if Y and X are independent variables
3. Chain rule
H(Y/X) = H(X,Y) − H(X)
4. Bayes' rule
H(Y/X) = H(X/Y) − H(X) + H(Y)
5. H(Y/X) ≤ H(Y)
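The chain rule and property 5 can be verified on a small assumed joint distribution:

```python
import math

# Assumed joint distribution p(x, y) for a toy pair of binary variables.
p_xy = {(0, 0): 0.5, (0, 1): 0.25, (1, 0): 0.125, (1, 1): 0.125}

def H(probs):
    """Shannon entropy in bits: H = -sum p log2(p)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

H_XY = H(p_xy.values())
H_X = H([sum(p for (x, y), p in p_xy.items() if x == xv) for xv in (0, 1)])
H_Y_given_X = H_XY - H_X           # chain rule: H(Y/X) = H(X,Y) - H(X)
print(round(H_Y_given_X, 4))       # 0.9387
```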
7.a) Explain with an example about variable length coding method. [5M]
Variable length code:
In coding theory, a variable-length code is a code which maps source symbols to a variable
number of bits.
Variable-length codes can allow sources to be compressed and decompressed with zero error
(lossless data compression) and still be read back symbol by symbol. With the right coding
strategy an independent and identically-distributed source may be compressed almost arbitrarily
close to its entropy.
Some examples of well-known variable-length coding strategies are Huffman coding, Lempel–
Ziv coding, arithmetic coding, and context-adaptive variable-length coding.
Example:
Solution:
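As an illustration of Huffman coding, one of the variable-length strategies listed above (the four-symbol source and its probabilities are assumed, not the original worked example):

```python
import heapq

def huffman_code(freqs):
    """Build a Huffman code for a {symbol: probability} source."""
    heap = [[p, i, {s: ""}] for i, (s, p) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)        # two least probable subtrees
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, [p1 + p2, count, merged])
        count += 1
    return heap[0][2]

# Assumed source with dyadic probabilities, so the average length hits entropy.
code = huffman_code({"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125})
lengths = {s: len(c) for s, c in code.items()}
print(lengths)  # lengths 1, 2, 3, 3 -> average 1.75 bits = source entropy
```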
7.b) Explain the pulse shaping concept to achieve optimum transmissions. [5M]
Need for Pulse shaping:
Transmitting a signal at high modulation rate through a band-limited channel can create
intersymbol interference. As the modulation rate increases, the signal's bandwidth increases.
When the signal's bandwidth becomes larger than the channel bandwidth, the channel starts to
introduce distortion to the signal. This distortion usually manifests itself as intersymbol
interference.
In electronics and telecommunications, pulse shaping is the process of changing the waveform
of transmitted pulses. Its purpose is to make the transmitted signal better suited to its purpose or
the communication channel, typically by limiting the effective bandwidth of the transmission.
By filtering the transmitted pulses this way, the intersymbol interference caused by the channel
can be kept in control. In RF communication, pulse shaping is essential for making the signal fit
in its frequency band. Typically pulse shaping occurs after line coding and before modulation.
Pulse shaping filters:
Not every filter can be used as a pulse shaping filter. The filter itself must not introduce
intersymbol interference; it needs to satisfy certain criteria. The Nyquist ISI criterion is a
commonly used criterion for evaluation, because it relates the frequency spectrum of the
transmitter signal to intersymbol interference.
Examples of pulse shaping filters that are commonly found in communication systems are:
Sinc shaped filter
Raised-cosine filter
Gaussian filter
Sender side pulse shaping is often combined with a receiver side matched filter to achieve
optimum tolerance for noise in the system. In this case the pulse shaping is equally distributed
between the sender and receiver filters. The filters' amplitude responses are thus pointwise
square roots of the system filters.
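The Nyquist ISI criterion can be checked numerically for the raised-cosine filter (the closed-form impulse response is standard; the symbol period and roll-off value are assumed for illustration):

```python
import math

def raised_cosine(t, T=1.0, alpha=0.5):
    """Raised-cosine impulse response; T is the symbol period, alpha the
    roll-off factor (assumed 0 < alpha <= 1)."""
    if abs(t) < 1e-12:
        return 1.0
    if abs(abs(t) - T / (2 * alpha)) < 1e-12:
        # Limit value at the two singular points t = +/- T/(2*alpha)
        x = 1.0 / (2 * alpha)
        return (math.pi / 4) * (math.sin(math.pi * x) / (math.pi * x))
    sinc = math.sin(math.pi * t / T) / (math.pi * t / T)
    return sinc * math.cos(math.pi * alpha * t / T) / (1 - (2 * alpha * t / T) ** 2)

# Nyquist ISI criterion: h(0) = 1 and h(kT) = 0 at every other symbol instant,
# so neighbouring symbols contribute nothing at the sampling times.
samples = [raised_cosine(k * 1.0) for k in range(-4, 5)]
```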
8.a) Decode a convolution code using state diagram. [5M]
Convolutional Code Encoder Structure:
A convolutional code introduces redundant bits into the data stream through the use of linear shift
registers as shown in Figure below.
Fig: Convolution Encoder.
The information bits are input into shift registers and the output encoded bits are obtained by modulo-2
addition of the input information bits and the contents of the shift registers.
The code rate r for a convolutional code is defined as r = k/n.
Encoder Representations.
The convolutional encoder can be represented in several different but equivalent ways. They are
1. Generator Representation
2. Tree Diagram Representation
3. State Diagram Representation
4. Trellis Diagram Representation
The two generator vectors for the encoder in above Figure are g1 = [111] and g2 = [101] where the
subscripts 1 and 2 denote the corresponding output terminals.
In the state diagram, the state information of the encoder is shown in the circles. Each new input
information bit causes a transition from one state to another. The path information between the states,
denoted as x/c, represents input information bit x and output encoded bits c. It is customary to begin
convolutional encoding from the all zero state.
For example, the input information sequence x = {1011} (begin from the all zero state) leads to the state
transition sequence s = {10, 01, 10, 11} and produces the output encoded sequence c = {11, 10, 00, 01}.
Figure below shows the path taken through the state diagram for the given example.
Figure: The state transitions (path) for input information sequence {1011}
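The state-transition example can be checked with a short sketch of the encoder with g1 = [111] and g2 = [101] (the output-bit ordering follows the example above):

```python
def conv_encode(bits):
    """Rate-1/2 convolutional encoder with g1 = [111], g2 = [101].
    The state (s1, s2) holds the two previous input bits."""
    s1 = s2 = 0
    out = []
    for u in bits:
        out.append((u ^ s1 ^ s2, u ^ s2))   # the two modulo-2 adder outputs
        s1, s2 = u, s1                      # shift-register update
    return out

encoded = conv_encode([1, 0, 1, 1])
print(encoded)  # [(1, 1), (1, 0), (0, 0), (0, 1)] as in the example
```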
The Viterbi algorithm is named after Andrew Viterbi, who proposed it in 1967 as a decoding
algorithm for convolutional codes over noisy digital communication links. The Viterbi algorithm
is a dynamic programming algorithm for finding the most likely sequence of hidden states called
the Viterbi path that results in a sequence of observed events. The algorithm has found universal
application in decoding the convolutional codes.
The decoding algorithm uses two metrics: the branch metric (BM) and the path metric (PM). The
branch metric is a measure of the “distance” between what was transmitted and what was
received, and is defined for each arc in the trellis. In hard decision decoding, where we are given
a sequence of digitized parity bits, the branch metric is the Hamming distance between the
expected parity bits and the received ones.
An example is shown in Figure above, where the received bits are 00. For each state transition,
the number on the arc shows the branch metric for that transition. Two of the branch metrics are
0, corresponding to the only states and transitions where the corresponding Hamming distance is
0. The other non-zero branch metrics correspond to cases when there are bit errors. The path
metric is a value associated with a state in the trellis (i.e., a value associated with each node). For
hard decision decoding, it corresponds to the Hamming distance over the most likely path from
the initial state to the current state in the trellis. By “most likely”, we mean the path with smallest
Hamming distance between the initial state and the current state, measured over all possible
paths between the two states. The path with the smallest Hamming distance minimizes the total
number of bit errors, and is most likely when the BER is low.
The key insight in the Viterbi algorithm is that the receiver can compute the path metric
for a (state, time) pair incrementally using the path metrics of previously computed states
and the branch metrics.
Metric: it is the discrepancy between the received signal y and the decoded signal at a particular node.
Surviving path: this is the path of the decoded signal with minimum metric.
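A hard-decision Viterbi decoder for the rate-1/2 code above (g1 = [111], g2 = [101]) can be sketched as follows; the state packing and traceback structure are implementation choices, not from the original answer:

```python
def viterbi_decode(pairs):
    """Hard-decision Viterbi decoding for the rate-1/2 code g1=[111], g2=[101].
    State = (s1, s2) packed as 2*s1 + s2; branch metric = Hamming distance."""
    INF = float("inf")
    pm = [0, INF, INF, INF]                 # begin from the all-zero state
    history = []
    for r1, r2 in pairs:
        new_pm = [INF] * 4
        pred = [None] * 4
        for state in range(4):
            if pm[state] == INF:
                continue
            s1, s2 = state >> 1, state & 1
            for u in (0, 1):
                c1, c2 = u ^ s1 ^ s2, u ^ s2      # expected encoder output
                bm = (c1 != r1) + (c2 != r2)      # Hamming branch metric
                nxt = (u << 1) | s1
                if pm[state] + bm < new_pm[nxt]:  # keep the surviving path
                    new_pm[nxt] = pm[state] + bm
                    pred[nxt] = (state, u)
        pm = new_pm
        history.append(pred)
    state = min(range(4), key=lambda s: pm[s])    # best final path metric
    bits = []
    for pred in reversed(history):                # traceback
        state, u = pred[state]
        bits.append(u)
    return bits[::-1]

print(viterbi_decode([(1, 1), (1, 0), (0, 0), (0, 1)]))  # [1, 0, 1, 1]
```

Flipping one received bit still decodes correctly, since the surviving path with the smallest total Hamming distance is unchanged.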
In coding theory, a cyclic code is a block code in which the circular shifts of each codeword give
another word that belongs to the code. They are error-correcting codes that have algebraic properties
that are convenient for efficient error detection and correction.
If 00010111 is a valid codeword, applying a right circular shift gives the string 10001011. If the code
is cyclic, then 10001011 is again a valid codeword. In general, applying a right circular shift moves
the least significant bit (LSB) to the leftmost position, so that it becomes the most significant bit
(MSB); the other positions are shifted by 1 to the right.
The syndrome computation may be performed by shift registers similar in form to that used for
encoding.
To elaborate, let us consider a systematic cyclic code and let us represent the received code vector R
by the polynomial R(x). In general, R = C + E, where C is the transmitted code word and E is the
error vector. Hence, we have
R(x) = C(x) + E(x) ---------equation 1
Now, suppose we divide R(x) by the generator polynomial g(x). This division will yield
R(x) = q(x) g(x) + S(x)
or, equivalently,
S(x) = R(x) + q(x) g(x) (modulo-2 arithmetic) ---------equation 2
The remainder S(x) is a polynomial of degree less than or equal to n−k−1. If we combine Equation 1
with Equation 2, we obtain
S(x) = E(x) + [q(x) + m(x)] g(x), since C(x) = m(x) g(x)
This relationship illustrates that the remainder S(x) obtained from dividing R(x) by g(x) depends only
on the error polynomial E(x), and, hence, S(x) is simply the syndrome associated with the error
pattern E: it is the remainder of dividing E(x) by g(x), a polynomial of degree at most
n−k−1. If g(x) divides R(x) exactly, then S(x) = 0 and the received decoded word is Ĉ = R.
The division of R(x) by the generator polynomial g(x) may be carried out by means of a shift
register.
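The shift-register division can be sketched in software; the generator g(x) = x³ + x + 1 and the example codeword are assumed for illustration:

```python
def poly_mod(dividend, generator):
    """Remainder of GF(2) polynomial division; polynomials are bit lists, MSB first."""
    r = list(dividend)
    for i in range(len(r) - len(generator) + 1):
        if r[i]:
            for j, g_bit in enumerate(generator):
                r[i + j] ^= g_bit
    return r[-(len(generator) - 1):]        # remainder of degree <= n-k-1

# Assumed (7,4) cyclic code with generator g(x) = x^3 + x + 1 -> [1, 0, 1, 1].
g = [1, 0, 1, 1]
codeword = [1, 0, 1, 0, 0, 1, 1]            # a valid codeword (m(x) = x^3 + 1)
error =    [0, 0, 0, 0, 1, 0, 0]            # single-bit error pattern E
received = [c ^ e for c, e in zip(codeword, error)]

print(poly_mod(codeword, g))                 # [0, 0, 0]: S(x) = 0 for a codeword
print(poly_mod(received, g), poly_mod(error, g))  # equal: S(x) depends only on E(x)
```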
9.b) Explain the error detection and error correction capabilities of linear block codes. [5M]
In order to determine the error-detecting and correcting capabilities of linear block codes, we
first need to define the Hamming weight of a codeword, the Hamming distance between two
codewords, and the minimum distance dmin of a block code.
The Hamming weight of a codeword C is defined as the number of non-zero elements (i.e., 1s) in
the codeword.
The Hamming distance between two codewords is defined as the number of elements in which
they differ.
The minimum distance dmin of a linear block code is the smallest Hamming distance between any
two different codewords, and it is equal to the minimum Hamming weight of the non-zero
codewords in the code.
A linear block code of minimum distance dmin can detect up to t errors if and only if dmin ≥ t+1 and
correct up to t errors if and only if dmin≥2t+1.
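These definitions can be exercised on the (7,4) Hamming code (the generator matrix is an assumed standard form, not given in the answer):

```python
from itertools import product

# Assumed generator matrix of the (7,4) Hamming code in systematic form [I | P].
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def encode(msg):
    """Codeword = msg * G over GF(2)."""
    return [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]

codewords = [encode(m) for m in product([0, 1], repeat=4)]
# For a linear code, dmin equals the minimum Hamming weight of the
# non-zero codewords.
dmin = min(sum(c) for c in codewords if any(c))
t_detect = dmin - 1            # detects up to dmin - 1 errors
t_correct = (dmin - 1) // 2    # corrects up to floor((dmin - 1)/2) errors
print(dmin, t_detect, t_correct)  # 3 2 1
```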
A noisy channel introduces an error pattern E, so that the received vector is R = C + E.
Code acquisition:
Initially, neither the phase nor the frequency of the spreading code of the received signal are
known to the receiver. During acquisition, the receiver tries to find this phase and frequency.
Code acquisition is basically a process of searching through an uncertainty region, which may
be one-dimensional (e.g., in time alone) or two-dimensional (in time and frequency, if there is
drift in carrier frequency due to the Doppler effect, etc.), until the correct code phase is found. The
uncertainty region is divided into a number of cells. In the one-dimensional case, one cell may be
as small as a fraction of a PN chip interval.
A basic operation, often carried out during the process of code acquisition is the correlation of
the incoming spread signal with a locally generated version of the spreading code sequence
(obtained by multiplying the incoming and local codes and accumulating the result) and
comparing the correlator output with a set threshold to decide whether the codes are in phase or
not.
The output of the local code generator is a time-shifted version p(t + τ) of the code sequence
used by the transmitter. The bandpass filter (BPF) is centered at the carrier frequency of the
received signal and has a bandwidth equal to the data bandwidth. It filters the product of the
received signal and the locally generated code. The output of the integrator is proportional to the
autocorrelation function of the code, with offset τ. Then the integrator output is compared
against a threshold V_T. If the signal is below the threshold, the phase or the frequency of the
locally generated code will be increased to the next cell. If the signal is above the threshold, then
the correct code phase and frequency have been found and the receiver will start tracking.
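The serial search over phase cells can be sketched as follows (the 7-chip m-sequence, the true phase, and the threshold are assumed for illustration):

```python
# Serial-search acquisition sketch: slide the local code over every phase cell
# and compare the correlation against a threshold.
pn = [1, 1, 1, -1, -1, 1, -1]            # 7-chip m-sequence in +/-1 form

def correlate(offset):
    """Correlation of the received code (true phase = 3) with the local code
    generated at phase `offset` (one full period accumulated)."""
    received = pn[3:] + pn[:3]           # incoming signal, circularly shifted
    local = pn[offset:] + pn[:offset]
    return sum(r * l for r, l in zip(received, local))

threshold = 5                            # between the peak (7) and off-peak (-1)
found = [off for off in range(7) if correlate(off) >= threshold]
print(found)  # [3]: the search declares lock only at the correct phase cell
```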
Tracking:
Several tracking techniques exist. One well known system is the delay-lock code-tracking loop,
normally abbreviated to DLL. Its operation will be explained in the following example.
The received signal is divided (by a constant factor of two to remove BPSK modulation) and
then multiplied with an early (half-chip early) and a late (half-chip late) version of the locally
generated code. Both signals are then bandpass filtered. This gives the "early" and "late" correlation
signals. After filtering, these correlation signals are envelope-detected. Then, the outputs of the
envelope detectors are subtracted from each other. This results in an error signal (see Figure)
which is proportional to
| Rc(τ + Tc/2) | − | Rc(τ − Tc/2) |
After filtering, this error signal can be used to control a voltage-controlled oscillator which
drives the local code generator. Its output chip rate will increase when the local code lags the
received code and decrease when the local code leads the received one.
11.a) Explain the CDMA technique in detail. [5M]
CDMA stands for Code Division Multiple Access. CDMA is a key technology which is used in
several 3G wireless standards. For instance, CDMA is used in wireless standards such as WCDMA,
which is Wideband Code Division Multiple Access.
Basically, CDMA is a multiple access technology. CDMA systems allow users to simultaneously
transmit data over the same frequency band.
CDMA is a form of a Direct Sequence Spread Spectrum (where the transmitted data is coded at a very
high frequency) communication. In spread spectrum, the transmitted signal bandwidth is much larger
than necessary, in order to support multi-user access. In addition, the large bandwidth ensures that
interference from other users does not occur. Multi-user access is achieved using a pseudo-random
code is generated using a special coding generator to separate different users.
CDMA uses codes for Multiple Access and different users are allocated different codes for
transmission over the radio channel. This can be explained with the aid of a simple example.
Consider a CDMA scenario with 2 users; call them user 0 and user 1. Allocate the following
codes: code c0 for user 0, denoted by the vector [1 1 1 1] (its 4 elements are also known as
chips), and code c1 for user 1, equal to [1 −1 −1 1]. So we have 2 users and 2 codes,
c0 = [1 1 1 1] and c1 = [1 −1 −1 1]
a0 = symbol of user 0
a1 = symbol of user 1
We now multiply the symbol of each user by the corresponding code and then add the two
resulting signals. Therefore, we have
a0 × c0 = a0 × [1 1 1 1] = [a0, a0, a0, a0]
a1 × c1 = a1 × [1 −1 −1 1] = [a1, −a1, −a1, a1]
The combined signal is generated at the Base Station by adding these 2 signals:
sum = a0 × c0 + a1 × c1
This is the combined signal corresponding to the 2 users, user 0 and user 1.
The sum signal is transmitted over the common radio channel, and from this sum signal each user
has to extract his own symbol: user 0 has to extract symbol a0, and user 1 has to extract symbol a1.
By performing the correlation operation with c0, the code of user 0, we extract 4·a0, which is
simply a scaled version of the symbol a0. So user 0 is able to extract his symbol a0 from the
common signal transmitted over the radio channel. The correlation is an element-by-element
multiplication with the code followed by a sum. Similarly, user 1 can correlate with his code c1
and extract his symbol a1.
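The two-user example can be verified numerically (the symbol values a0 and a1 are assumed):

```python
c0 = [1, 1, 1, 1]       # code of user 0
c1 = [1, -1, -1, 1]     # code of user 1 (orthogonal to c0: their dot product is 0)

a0, a1 = 1, -1          # assumed symbols of the two users

# Base station forms the combined signal: a0*c0 + a1*c1
combined = [a0 * x + a1 * y for x, y in zip(c0, c1)]

def despread(signal, code):
    """Correlation: element-by-element multiplication with the code, then sum."""
    return sum(s * c for s, c in zip(signal, code))

print(despread(combined, c0))  # 4*a0: user 0 recovers a scaled copy of a0
print(despread(combined, c1))  # 4*a1: user 1 recovers a scaled copy of a1
```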
11.b) Describe the FHSS concept briefly. [5M]
In a frequency hopping (FH) system, the frequency changes from chip to chip. An example FH
signal is shown in Fig below.
Fig. Illustration of the principle of frequency hopping
Frequency hopping systems can be divided into fast-hop or slow-hop. A fast-hop FH system is
the kind in which hopping rate is greater than the message bit rate and in the slow-hop system
the hopping rate is smaller than the message bit rate. This distinction is made because the
receiver design differs considerably between these two FH types. The FH receiver is usually non-
coherent. A typical non-coherent receiver architecture is represented in Fig below.
The incoming signal is multiplied by the signal from a PN generator identical to the one at the
transmitter. The resulting signal from the mixer is a binary FSK signal, which is then demodulated
in a "regular" way.
Error correction is then applied in order to recover the original signal. The timing
synchronization is accomplished through the use of early-late gates, which control the clock
frequency.
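The PN-driven hop-sequence generation can be sketched as follows (the LFSR polynomial, channel count, base frequency, and spacing are all assumed for illustration):

```python
def lfsr_hops(seed=0b101, n_hops=10):
    """3-bit LFSR with polynomial x^3 + x + 1 (an assumed choice) driving the
    frequency synthesizer; it cycles through all 7 non-zero states."""
    state = seed
    hops = []
    for _ in range(n_hops):
        hops.append(state)                    # current 3-bit word picks the channel
        bit = ((state >> 2) ^ state) & 1      # feedback from taps 3 and 1
        state = ((state << 1) | bit) & 0b111
    return hops

# Map each 3-bit word to a carrier slot (assumed base frequency and spacing).
base_hz, spacing_hz = 2.400e9, 1.0e6
freqs = [base_hz + h * spacing_hz for h in lfsr_hops()]
print(lfsr_hops(n_hops=7))  # each of the 7 non-zero channels is visited once
```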