Introduction to Digital
Communications System
[Figure: Digitization in the telephone network. Local loops terminate at central office switches, where A/D conversion (digitization) takes place; T1/E1 facilities with transmission equipment (regenerators) interconnect central offices through a SONET/SDH multiplexer (MUX) and link to a Mobile Switching Center.]
Basic Digital Communication Nomenclature
Nomenclature Examples
Messages, Characters, and Symbols
Typical Digital Communications System
[Figure: Typical digital communications transceiver. TX chain: Format → Source Encoding → Encryption → Channel Encoding → Interleaving → Multiplexing → Modulation → Frequency Spreading → Multiple Access → RF/PA, producing the waveform si(t) sent into the channel; other sources join at the multiplexing stage. RX chain (from RF/IF): Multiple Access → Frequency Despreading → Demodulation → Demultiplexing → Deinterleaving → Channel Decoding → Decryption → Source Decoding → Format, recovering the estimates ŝi(t) and m̂i. The digital input/output is a bit stream mi; Synchronization spans both chains.]
Wireless Information Transmission System Lab.
Format
[Transceiver block diagram repeated; the Format stage is highlighted.]
Formatting and Baseband Transmission
10
Sampling Theorem
11
Sampling Theorem
Sampling Theorem: A bandlimited signal having no
spectral components above fm hertz can be determined
uniquely by values sampled at uniform intervals of Ts
seconds, where
Ts ≤ 1/(2·fm), or sampling rate fs ≥ 2·fm
In sample-and-hold operation, a switch and storage
mechanism form a sequence of samples of the
continuous input waveform. The output of the sampling
process is called pulse amplitude modulation (PAM).
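A quick numeric check of the theorem, as a sketch: a 3 Hz tone sampled at only 4 Hz (below the Nyquist rate 2·fm = 6 Hz) produces exactly the same samples as a 1 Hz alias, so the two tones cannot be told apart.

```python
import numpy as np

fm, fs = 3.0, 4.0                 # 3 Hz tone, sampled at only 4 Hz (< 2*fm)
n = np.arange(16)
t = n / fs
x = np.cos(2 * np.pi * fm * t)               # samples of the 3 Hz tone
alias = np.cos(2 * np.pi * abs(fm - fs) * t)  # a 1 Hz tone, |fm - fs|
print(np.allclose(x, alias))  # True: the sample sequences are identical
```

Sampling at fs ≥ 6 Hz instead would make the two sequences differ.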
12
Sampling Theorem
Xs(f) = X(f) ∗ Xδ(f) = (1/Ts) · Σ_{n=−∞}^{∞} X(f − n·fs)
13
Spectra for Various Sampling Rates
14
Natural Sampling
15
Pulse Code Modulation (PCM)
16
Example of Constructing PCM Sequence
17
Uniform and Non-uniform Quantization
18
Statistical Distribution of Single-Talker
Speech Amplitudes
For 50% of the time, the speech voltage is less than ¼ of the RMS value.
Only 15% of the time does the voltage exceed the RMS value.
Typical voice signal dynamic range is 40 dB.
19
Problems with Linear Quantization
20
Implementation of Non-linear Quantizer
21
Companding Characteristics
In North America: μ-law compression:

y = ymax · sgn(x) · ln[1 + μ·(|x|/xmax)] / ln(1 + μ)

where
sgn(x) = +1 for x ≥ 0, −1 for x < 0

In Europe: A-law compression:

y = ymax · sgn(x) · A·(|x|/xmax) / (1 + ln A),             for 0 < |x|/xmax ≤ 1/A
y = ymax · sgn(x) · {1 + ln[A·(|x|/xmax)]} / (1 + ln A),   for 1/A < |x|/xmax ≤ 1
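As a sketch, the μ-law compressor above can be written directly (normalized to ymax = 1; the default μ = 255 matches the North American standard):

```python
import numpy as np

def mu_law_compress(x, x_max=1.0, mu=255):
    """mu-law compressor: y = sgn(x) * ln(1 + mu*|x|/x_max) / ln(1 + mu)."""
    return np.sign(x) * np.log1p(mu * np.abs(x) / x_max) / np.log1p(mu)

# Small amplitudes get the most gain: a 0.01 (about -40 dB) input
# is lifted to roughly 0.23 of full scale before uniform quantization.
print(round(float(mu_law_compress(0.01)), 3))
```

This is exactly why a non-uniform quantizer preserves the 40 dB dynamic range of speech with far fewer bits than a linear one.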
22
Compression Characteristics
Standard values are μ = 255 and A = 87.6.
23
Wireless Information Transmission System Lab.
Source Coding
[Transceiver block diagram repeated; the Source Encoding/Decoding stages are highlighted.]
Source Coding
Source coding deals with the task of forming efficient
descriptions of information sources.
For discrete sources, the ability to form reduced data
rate descriptions is related to the information content
and the statistical correlation among the source
symbols.
For analog sources, the ability to form reduced-data-rate descriptions, subject to a fixed fidelity criterion, is related to the amplitude distribution and the temporal correlation of the source waveforms.
26
Huffman Coding
The Huffman code is a source code whose average word length approaches the fundamental limit set by the entropy of a discrete memoryless source.
27
Huffman Encoding Algorithm
1. The source symbols are listed in order of decreasing
probability. The two source symbols of lowest
probability are assigned a 0 and a 1.
2. These two source symbols are regarded as being
combined into a new source symbol with probability
equal to the sum of the two original probabilities. The
probability of the new symbol is placed in the list in
accordance with its value.
3. The procedure is repeated until we are left with a final
list of source statistics of only two for which a 0 and a 1
are assigned.
4. The code for each (original) source symbol is found by
working backward and tracing the sequence of 0s and 1s
assigned to that symbol as well as its successors.
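The four steps above can be sketched with a priority queue (a minimal implementation; the symbol names and probabilities follow the example on the next slide):

```python
import heapq
from itertools import count

def huffman(probs):
    """Build a Huffman code for {symbol: probability}: repeatedly merge
    the two least-probable entries (steps 1-3), accumulating the 0/1
    labels that are read back out as code words (step 4)."""
    tie = count()  # tie-breaker so heapq never has to compare dicts
    heap = [(p, next(tie), {s: ""}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)  # lowest probability  -> bit 0
        p1, _, c1 = heapq.heappop(heap)  # next lowest         -> bit 1
        merged = {s: "0" + w for s, w in c0.items()}
        merged.update({s: "1" + w for s, w in c1.items()})
        heapq.heappush(heap, (p0 + p1, next(tie), merged))
    return heap[0][2]

probs = {"S0": 0.4, "S1": 0.2, "S2": 0.2, "S3": 0.1, "S4": 0.1}
code = huffman(probs)
avg_len = sum(probs[s] * len(w) for s, w in code.items())
print(round(avg_len, 2))  # 2.2 bits/symbol for this source
```

Ties in the merge order can produce different (but equally optimal) trees; the average word length is the same either way.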
28
Example of Huffman Coding
Symbol | Probability | Code Word
S0 | 0.4 | 00
S1 | 0.2 | 10
S2 | 0.2 | 11
S3 | 0.1 | 010
S4 | 0.1 | 011
[Figure: the successive merging stages (Stage 1 through Stage 4) of the Huffman procedure.]
Speech Encoding
31
Differential PCM (DPCM)
32
Delta Modulation (DM)
Delta modulation is a one-bit DPCM.
Advantage: bit compression.
Disadvantage: slope overload.
33
Speech Coding Objective
Reduce the number of bits that must be transmitted, thereby lowering the required bandwidth.
34
Speech Properties
Voiced Sound
Arises in the generation of vowels and the latter portions of some consonants.
Displays a long-term repetitive pattern whose period is the pitch interval.
Pulse-like waveform.
Unvoiced Sound
Arises in the pronunciation of certain consonants such as “s”, “f”, “p”, “j”, “x”, etc.
Noise-like waveform.
35
Categories of Speech Encoding
Waveform Encoding
Treats voice as an analog signal and does not exploit the properties of speech.
36
Linear Predictive Coder (LPC)
37
Multi-Pulse Linear Predictive Coder
(MP-LPC)
38
Regular Pulse Excited Long Term Prediction
Coder (RPE-LPT)
39
Code-Excited Linear Predictive (CELP)
40
Speech Coder Complexity
41
Speech Processing for GSM
Channel Coding
[Transceiver block diagram repeated; the Channel Encoding/Decoding stages are highlighted.]
Channel Coding
Error-detecting coding: the capability of detecting errors so that retransmission (or dropping of the frame) can be requested.
Cyclic Redundancy Code (CRC)
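To illustrate how a CRC is computed, here is a sketch of the underlying GF(2) polynomial division with a small hypothetical generator (x³ + x + 1); real systems use longer standardized polynomials such as the WCDMA ones listed later.

```python
def crc_remainder(bits, poly_bits):
    """Systematic CRC: append len(poly)-1 zeros to the message, divide by
    the generator polynomial over GF(2), and use the remainder as the
    check bits appended by the transmitter."""
    n_check = len(poly_bits) - 1
    reg = list(bits) + [0] * n_check
    for i in range(len(bits)):
        if reg[i]:                       # leading 1 -> subtract (XOR) divisor
            for j, p in enumerate(poly_bits):
                reg[i + j] ^= p
    return reg[-n_check:]

# Hypothetical example: generator x^3 + x + 1 (bits 1011), message 1101
check = crc_remainder([1, 1, 0, 1], [1, 0, 1, 1])
print(check)  # [0, 0, 1] -> transmit 1101 001
```

The receiver repeats the division over the full received word; a nonzero remainder flags an error.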
46
Linear Block Codes
A subset S of Vn is a subspace if
The all-zero vector is in S
The sum of any two vectors in S is also in S.
Example of S: V0 = 0000, V1 = 0101, V2 = 1010, V3 = 1111
49
Reducing Encoding Complexity
Key feature of linear block codes: the 2^k code vectors form a k-dimensional subspace of all n-tuples.
Example: k = 3, 2^k = 8, n = 6: a (6, 3) code

Message | Code Word
000 | 000000
100 | 110100
010 | 011010
110 | 101110
001 | 101001
101 | 011101
011 | 110011
111 | 000111

These eight code words form a 3-dimensional subspace of the vector space of all 6-tuples.
50
Reducing Encoding Complexity
It is possible to find a set of k linearly independent n-tuples v1, v2, ..., vk such that each n-tuple of the subspace is a linear combination of v1, v2, ..., vk.
51
Generator Matrix
G = [v1; v2; ...; vk] =
[ v11 v12 ... v1n ]
[ v21 v22 ... v2n ]
[  .   .  ...  .  ]
[ vk1 vk2 ... vkn ]   = k × n Generator Matrix

The 2^k code vectors can be described by a set of k linearly independent code vectors.
Let m = [m1, m2, ..., mk] be a message.
The code word corresponding to message m is obtained by:
u = mG = [m1 m2 ... mk] · [v1; v2; ...; vk]
52
Generator Matrix
Storage is greatly reduced: the encoder needs to store only the k rows of G instead of the 2^k code vectors of the code.
For example:
Let G = [v1; v2; v3] =
[ 1 1 0 1 0 0 ]
[ 0 1 1 0 1 0 ]
[ 1 0 1 0 0 1 ]   and m = [1 1 0]

Then
u = [1 1 0]·G = 1·v1 + 1·v2 + 0·v3 = [110100] + [011010] = [1 0 1 1 1 0], the code vector for m = [110]
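The matrix encoding above can be checked numerically; a small sketch using NumPy, with the mod-2 reduction supplying the GF(2) arithmetic:

```python
import numpy as np

# Generator matrix of the (6, 3) code from the slide
G = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

m = np.array([1, 1, 0])
u = m @ G % 2          # codeword = mG over GF(2)
print(u)               # [1 0 1 1 1 0]
```

Running all eight messages through `m @ G % 2` reproduces the full code-word table of the previous slide.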
53
Systematic Code
54
Parity Check Matrix
H = [h1; h2; ...; h(n−k)], where row i is [h_i1 h_i2 ... h_in].
For a code word u = (u1, u2, ..., un):
u·H^T = 0, i.e., u1·h_i1 + u2·h_i2 + ... + un·h_in = 0 for every i = 1, 2, ..., n − k.
u is a code word generated by matrix G if and only if u·H^T = 0.
55
Parity Check Matrix and Syndrome
In a systematic code with G = [P_{k×r} | I_{k×k}], the parity check matrix is H = [I_{r×r} | P^T_{r×k}], where r = n − k.

Received vector r = u + e   (code vector plus error vector)

Syndrome s = r·H^T:
  s = 0  if r is a code vector
  s ≠ 0  otherwise
56
Example of Syndrome Test
G =
[ 1 1 0 | 1 0 0 ]
[ 0 1 1 | 0 1 0 ]   = [P | I_k]
[ 1 0 1 | 0 0 1 ]

H = [I_{n−k} | P^T] =
[ 1 0 0 1 0 1 ]
[ 0 1 0 1 1 0 ]
[ 0 0 1 0 1 1 ]

The 6-tuple 1 0 1 1 1 0 is the code vector corresponding to the message 1 1 0:
s = u·H^T = [1 0 1 1 1 0]·H^T = [0 0 0]

Compute the syndrome for the non-code-vector 0 0 1 1 1 0:
s = [0 0 1 1 1 0]·H^T = [1 0 0]
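The syndrome test is one matrix product; a sketch reproducing both results of this slide:

```python
import numpy as np

# Parity-check matrix H = [I_{n-k} | P^T] of the (6, 3) code
H = np.array([[1, 0, 0, 1, 0, 1],
              [0, 1, 0, 1, 1, 0],
              [0, 0, 1, 0, 1, 1]])

def syndrome(r):
    """s = r * H^T over GF(2); zero iff r is a code vector."""
    return r @ H.T % 2

print(syndrome(np.array([1, 0, 1, 1, 1, 0])))  # [0 0 0] -> valid codeword
print(syndrome(np.array([0, 0, 1, 1, 1, 0])))  # [1 0 0] -> error detected
```

The nonzero syndrome [1 0 0] equals the first column of H, which identifies the flipped first bit.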
57
Weight and Distance of Binary Vectors
58
Minimum Distance of a Linear Code
The set of all code vectors of a linear code form a
subspace of the n-tuple space.
If u and v are 2 code vectors, then u+v must also be a
code vector.
Therefore, the distance d(u,v) between two code vectors equals the weight of a third code vector:
d(u,v) = w(u+v) = w(z), where z = u+v is itself a code vector.
Thus, the minimum distance of a linear code equals the minimum weight of its nonzero code vectors.
A code with minimum distance dmin can be shown to correct ⌊(dmin−1)/2⌋ erroneous bits and detect (dmin−1) erroneous bits.
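For a small code, dmin can be found by exhaustive search over the nonzero code words; a sketch for the (6, 3) code of the earlier slides:

```python
import numpy as np
from itertools import product

G = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

# Minimum distance of a linear code = minimum weight over nonzero codewords
weights = [int((np.array(m) @ G % 2).sum())
           for m in product([0, 1], repeat=3) if any(m)]
d_min = min(weights)
print(d_min)  # 3 -> corrects floor((3-1)/2) = 1 error, detects 2
```

This matches the dmin = 3 quoted on the next slide.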
59
Example of Minimum Distance
dmin=3
60
Example of Error Correction and Detection
Capability
For code vectors u and v with d(u, v) = dmin = 7:
t_max = ⌊(dmin − 1)/2⌋ = 3 : Error Correcting Strength
61
Convolutional Code Structure
[Figure: Convolutional encoder — K cascaded stages of k-bit shift registers; n modulo-2 adders tap the register contents to form the n output bits.]
62
Convolutional Code
Convolutional codes
k = number of bits shifted into the encoder at one time
  k = 1 is usually used.
n = number of encoder output bits corresponding to the k information bits
r = k/n = code rate
K = constraint length (encoder memory)
Each encoded bit is a function of the present input bits and the past input bits stored in the encoder memory.
63
Generator Sequence
u v
r0 r1 r2
u v
r0 r1 r2 r3
g 0( 2 ) = 1, g1( 2 ) = 1, g 2( 2 ) = 1, g 3( 2 ) = 0, and g 4( 2 ) = 1 .
0 00 00 00 0(01)
1 00 10 11 01 10
1(00)
0 01 00 11
1 01 10 00 0(10) 1(10)
0 10 01 01 11
1 10 11 10
0 11 01 10
1(01)
1 11 11 01
State Diagram
65
Trellis Diagram Representation
[Figure: Trellis diagram for the rate-1/2, K = 3 encoder — the four states 00, 01, 10, 11 repeated level by level, with branches labeled input(output): 0(00), 1(11), 0(11), 1(00), 0(01), 1(10), 0(10), 1(01).]
Trellis termination: K − 1 tail bits with value 0 are usually appended to the end of the input sequence.
66
Encoding Process
Input: 1 0 1 1 1 0 0
Output: 11 01 00 10 01 10 11
[Figure: the same trellis with the encoder's path for input 1 0 1 1 1 0 0 highlighted: 00 → 10 → 01 → 10 → 11 → 11 → 01 → 00.]
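The encoding walk above can be reproduced in a few lines. This is a sketch; the generator taps (101 and 111, in that output order) are inferred from the slide's state table and output sequence rather than stated explicitly on the slide.

```python
def conv_encode(bits, K=3, g=(0b101, 0b111)):
    """Rate-1/2 feed-forward convolutional encoder, constraint length K.
    Each input bit enters the MSB of a K-bit register; each generator
    selects the taps XORed together for one output bit."""
    state = 0                                  # K-1 = 2 memory bits
    out = []
    for b in bits:
        reg = (b << (K - 1)) | state           # current input + memory
        out += [bin(reg & gi).count("1") % 2 for gi in g]
        state = reg >> 1                       # shift register advances
    return out

msg = [1, 0, 1, 1, 1, 0, 0]   # 5 info bits + K-1 = 2 zero tail bits
enc = conv_encode(msg)
pairs = ["".join(map(str, enc[i:i+2])) for i in range(0, len(enc), 2)]
print(" ".join(pairs))  # 11 01 00 10 01 10 11
```

The output matches the slide's sequence, including the two tail-bit pairs that terminate the trellis.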
67
Viterbi Decoding Algorithm
Maximum Likelihood (ML) decoding rule: given the received sequence r, the decoder selects the code sequence d that minimizes the distance d(d, r).
68
Viterbi Decoding Algorithm
Basic concept
Generate the code trellis at the decoder
The decoder penetrates through the code trellis level by level in
search for the transmitted code sequence
At each level of the trellis, the decoder computes and
compares the metrics of all the partial paths entering a node
The decoder stores the partial path with the best metric (e.g., the smallest accumulated distance) and eliminates all the other partial paths. The stored partial path is called the survivor.
69
Viterbi Decoding Process
Transmitted: 11 01 00 10 01 10 11
Received:    11 11 00 10 01 11 11   (two channel bit errors)
[Figure sequence (seven slides): the decoder advances through the trellis one level at a time, accumulating the Hamming distance between each branch label and the received pair. At every node only the entering partial path with the smallest metric is kept as the survivor. After the final level the best surviving path has metric 2; tracing it back gives the decision 11 01 00 10 01 10 11, so both channel errors are corrected.]
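A minimal hard-decision Viterbi decoder makes the survivor bookkeeping of these slides concrete. This is a sketch for the rate-1/2, K = 3 code used in the example; the generator taps (101, 111) are inferred from the trellis labels.

```python
def viterbi_decode(received_pairs, K=3, g=(0b101, 0b111)):
    """Hard-decision Viterbi decoder (Hamming-distance metric).
    Starts in state 0; returns (ML input bits, final path metric)."""
    n_states = 1 << (K - 1)
    INF = float("inf")
    metric = [0] + [INF] * (n_states - 1)
    paths = [[] for _ in range(n_states)]
    for r in received_pairs:
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << (K - 1)) | s
                out = [bin(reg & gi).count("1") % 2 for gi in g]
                dist = sum(o != ri for o, ri in zip(out, r))
                ns = reg >> 1
                if metric[s] + dist < new_metric[ns]:
                    new_metric[ns] = metric[s] + dist   # keep the survivor
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(n_states), key=lambda s: metric[s])
    return paths[best], metric[best]

rx = [(1,1), (1,1), (0,0), (1,0), (0,1), (1,1), (1,1)]  # two bit errors
bits, m = viterbi_decode(rx)
print(bits, m)  # [1, 0, 1, 1, 1, 0, 0] 2
```

The decoder recovers the original input 1011100 with final metric 2, exactly the survivor traced in the figure sequence.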
Channel Coding in GSM
78
Channel Coding in IS-54/136
79
Turbo Codes Basic Concepts
Turbo coding uses parallel concatenation of two
recursive systematic convolutional codes joined through
an interleaver.
Information bits are encoded block by block.
Turbo codes use iterative decoding techniques.
A soft-output decoder is necessary for iterative decoding.
Turbo codes can approach the Shannon limit.
80
Turbo Codes Encoder - An Example
[Figure: Turbo encoder — the input X(t) is passed through systematically and into the first recursive systematic convolutional encoder, which produces parity Y(t); an interleaved copy X'(t) drives the second encoder, which produces parity Y'(t).]
When the switch is placed in the lower position, the tail bits are fed back and the trellis is terminated.
81
Turbo Codes Encoding Example
A systematic convolutional encoder with memory 2; the dotted line is the termination path. Test sequence: 1011.
[Figure sequence (seven slides): the encoder steps through its state diagram (states 00, 01, 10, 11), emitting the systematic bit X0 and the parity bit X1 at each step, and is then driven back to state 00 for termination. Outputs read off the slides:
X0: 1, 0, 1, 1, 0, 1, 0
X1: 1, 1, 0, 0, 1, 1, 0]

Turbo Codes Encoding Example
[Figure: the complete turbo encoder — the first constituent encoder produces X1 from the input; the interleaved systematic bits (X0 interleaved) drive a second, identical constituent encoder, which produces parity X2.]
The second encoder input is the interleaved data.
Output sequence: X0, X1, X2, X0, X1, X2, X0, X1, X2, ...
91
CRC in WCDMA
gCRC24(D) = D^24 + D^23 + D^6 + D^5 + D + 1
gCRC16(D) = D^16 + D^12 + D^5 + 1
gCRC12(D) = D^12 + D^11 + D^3 + D^2 + D + 1
gCRC8(D)  = D^8 + D^7 + D^4 + D^3 + D + 1
92
Channel Coding Adopted in WCDMA
Convolutional coding: rates 1/3 and 1/2
Turbo coding: rate 1/3 (CPCH, DCH, DSCH, FACH)
No coding
93
Convolutional Coding in WCDMA
Input → D D D D D D D D   (8 delay elements; constraint length K = 9)
(a) Rate 1/2 convolutional coder: Output 0 with G0 = 561 (octal); Output 1 with G1 = 753 (octal).
(b) Rate 1/3 convolutional coder: Output 0 with G0 = 557 (octal); Output 1 with G1 = 663 (octal); Output 2 with G2 = 711 (octal).
94
Turbo Coder in WCDMA
[Figure: WCDMA turbo coder — the systematic bit x_k passes straight to the output; a first constituent encoder (three delay elements, D D D) produces one parity stream, while the turbo code internal interleaver feeds the interleaved sequence x'_k to a second, identical constituent encoder, which produces the parity output z'_k.]
Wireless Information Transmission System Lab.
Interleaving
[Transceiver block diagram repeated; the Interleaving/Deinterleaving stages are highlighted.]
Bursty Error in Fading Channel
98
Interleaving Mechanism (1/2)
[Figure: Bit interleaver — the input stream x is written into a j × n array of shift registers under the Write clock, and the output stream y is read out under the Read clock.]
Interleaving Mechanism (2/2)
Conceptually, the WRITE clock places the bit stream
x by the row while the REA clock takes the bit stream
y by the column:
⎡ a11 a12 . . . a1n ⎤
⎢a a 22 . . . a 2 n ⎥⎥
⎢ 21
⎢ . . . . . . ⎥
⎢ ⎥
⎢ . . . . . . ⎥
⎢ . . . . . . ⎥
⎢ ⎥
⎢⎣ a j1 a j2 . . . a jn ⎥⎦
Bit stream at the output of the bit interleaver:
y = (a11 a21 ... a j1 a12 a22 ... a j 2 ... a1n a2 n ... a jn )
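The row-write/column-read operation above is a one-liner; a sketch with position labels standing in for bits:

```python
def block_interleave(bits, rows, cols):
    """Write row by row into a rows x cols array, read column by column."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

x = list(range(12))               # 12 'bits' labeled by their position
y = block_interleave(x, rows=3, cols=4)
print(y)  # [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]
```

Note that any burst of up to `rows` consecutive channel errors in y lands in `rows` different rows, so after deinterleaving the errors are spread at least `cols` positions apart.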
100
Burst Error Protection with Interleaver
101
Wireless Information Transmission System Lab.
Modulation
[Transceiver block diagram repeated; the Modulation/Demodulation stages are highlighted.]
Modulation
Digital Modulation: digital symbols are transformed into
waveforms that are compatible with the characteristics of the
channel.
In baseband modulation, these waveforms are pulses.
In bandpass modulation, the desired information signal modulates a sinusoid called a carrier. For radio transmission, the carrier is converted into an electromagnetic (EM) wave.
Why modulation?
Antenna size must be comparable to the wavelength, so direct baseband transmission over the air is not practical.
Modulation also lets several different signals share a single channel.
104
PCM Waveform Representations
105
PCM Waveform Representations
PCM waveforms are also called line codes.
Digital baseband signals often use line codes to shape the spectral characteristics of the pulse train.
NRZ-L. Bi-φ-S.
NRZ-M. Dicode-NRZ.
NRZ-S. Dicode-RZ.
Unipolar-RZ. Delay Mode.
Polar-RZ. 4B3T.
Bi-φ-L. Multi-level.
Bi-φ-M. … etc.
106
PCM Waveform : NRZ-L
[Waveform figure for bit pattern 1 0 1 1 0 0 0 1 1 0 1, levels +E/0/−E.]
107
PCM Waveform : NRZ-M
[Waveform figure for bit pattern 1 0 1 1 0 0 0 1 1 0 1, levels +E/0/−E.]
108
PCM Waveform : NRZ-S
[Waveform figure for bit pattern 1 0 1 1 0 0 0 1 1 0 1, levels +E/0/−E.]
109
PCM Waveform : Unipolar-RZ
[Waveform figure for bit pattern 1 0 1 1 0 0 0 1 1 0 1, levels +E/0/−E.]
Unipolar - RZ
“One” is represented by a half-bit width pulse.
“Zero” is represented by a no pulse condition.
110
PCM Waveform : Polar-RZ
[Waveform figure for bit pattern 1 0 1 1 0 0 0 1 1 0 1, levels +E/0/−E.]
Polar - RZ
“One” and “Zero” are represented by opposite
level polar pulses that are one half-bit in width.
111
PCM Waveform : Bi-φ-L
[Waveform figures for bit pattern 1 0 1 1 0 0 0 1 1 0 1, levels +E/0/−E: Bi-φ-L and the following Bi-φ and dicode formats (slide headings lost in extraction).]
Dicode Non-Return-to-Zero
A “One” to “Zero” or “Zero” to “One” changes polarity.
Otherwise, a “Zero” is sent.
115
PCM Waveform : Dicode - RZ
[Waveform figure for bit pattern 1 0 1 1 0 0 0 1 1 0 1, levels +E/0/−E.]
Dicode Return-to-Zero
A “One” to “Zero” or “Zero” to “One” transition produces
a half duration polarity change.
Otherwise, a “Zero” is sent.
116
PCM Waveform : Delay Mode
[Waveform figure for bit pattern 1 0 1 1 0 0 0 1 1 0 1, levels +E/0/−E.]
Delay Mode (Miller Code)
A “One” is represented by a transition at the midpoint of
the bit interval.
A “Zero” is represented by no transition, unless it is followed by another zero, in which case a transition is placed at the end of the bit period of the first zero.
117
PCM Waveform : 4B3T
118
PCM Waveform : 4B3T
Ternary words in the middle column are balanced in
their DC content.
Code words from the first and third columns are selected
alternately to maintain DC balance.
If more positive pulses than negative pulses have been
transmitted, column 1 is selected.
Notice that the all-zeros code word is not used.
119
PCM Waveform : Multilevel Transmission
120
Criteria for Selecting PCM Waveform
121
Spectral Densities of Various PCM Waveforms
122
Linear Modulation Techniques
Digital modulation techniques may be broadly classified as linear
and nonlinear.
In linear modulation techniques, the amplitude of the transmitted
signal, s(t), varies linearly with the modulating digital signal, m(t).
Linear modulation techniques are bandwidth efficient, though
they must be transmitted using linear RF amplifiers which have
poor power efficiency.
Using power efficient nonlinear amplifiers leads to the
regeneration of filtered sidelobes which can cause severe adjacent
channel interference, and results in the loss of all the spectral
efficiency gained by linear modulation.
Clever ways have been developed to get around these difficulties:
QPSK, OQPSK, π/4-QPSK.
123
Digital Modulations
125
Extended Modulated Signals – M-FSK
Example: 16-FSK
Every 4 bits is encoded as: A ⋅ cos(ω j t ) j = 1,2,…,16
Gray Coding.
126
Extended Modulated Signals – M-PSK
Example: 16-PSK
Every 4 bits is encoded as: A ⋅ sin(ω t + θ j ) j = 1, 2,… ,16
Gray Coding.
128
Binary Phase Shift Keying (BPSK)
In BPSK, the phase of a constant amplitude carrier signal is
switched between two values according to the two possible
signals m1 and m2 corresponding to binary 1 and 0. Normally,
the two phases are separated by 180°.
sBPSK(t) = √(2Eb/Tb) · m(t) · cos(2π·fc·t + θc),   0 ≤ t ≤ Tb
         = Re{ gBPSK(t) · exp(j2π·fc·t) }

gBPSK(t) = √(2Eb/Tb) · m(t) · e^{jθc}
  ⇒  P_gBPSK(f) = 2Eb · [sin(π·f·Tb) / (π·f·Tb)]²

PBPSK(f) = (Eb/2) · { [sin(π(f − fc)Tb) / (π(f − fc)Tb)]² + [sin(π(−f − fc)Tb) / (π(−f − fc)Tb)]² }
129
Power Spectral Density (PSD) of a BPSK
Signal.
130
BPSK Receiver
BPSK uses coherent or synchronous demodulation,
which requires that information about the phase and
frequency of the carrier be available at the receiver.
If a low level pilot carrier signal is transmitted along
with the BPSK signal, then the carrier phase and
frequency may be recovered at the receiver using a
phase locked loop (PLL).
If no pilot carrier is transmitted, a Costas loop or
squaring loop may be used to synthesize the carrier
phase and frequency from the received BPSK signal.
131
BPSK Receiver with Carrier Recovery
Circuits
132
Operations of BPSK Receiver with Carrier
Recovery Circuits
133
Differential Phase Shift Keying (DPSK)
Differential PSK is a noncoherent form of phase shift keying
which avoids the need for a coherent reference signal at the
receiver.
d k = mk ⊕ d k −1
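The differential rule d_k = m_k ⊕ d_{k−1} is easy to demonstrate, including why DPSK survives the 180° carrier-phase ambiguity that defeats coherent BPSK without carrier recovery:

```python
def dpsk_encode(msg, d0=1):
    """Differential encoding d_k = m_k XOR d_{k-1} (d0 is a reference bit)."""
    d = [d0]
    for m in msg:
        d.append(m ^ d[-1])
    return d

def dpsk_decode(d):
    """Recover m_k by comparing adjacent symbols; no absolute
    carrier phase reference is needed."""
    return [d[k] ^ d[k - 1] for k in range(1, len(d))]

msg = [1, 0, 1, 1, 0]
assert dpsk_decode(dpsk_encode(msg)) == msg
# Even if the channel inverts every symbol (180-degree phase ambiguity):
flipped = [1 - b for b in dpsk_encode(msg)]
print(dpsk_decode(flipped))  # [1, 0, 1, 1, 0] -- still correct
```

Because only symbol-to-symbol *changes* carry information, a global inversion of all symbols cancels out in the decoder.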
134
Block Diagram of DPSK Receiver
135
Quadrature Phase Shift Keying (QPSK)
136
Spectrum of QPSK Signals
PQPSK(f) = Eb · { [sin(π(f − fc)Ts) / (π(f − fc)Ts)]² + [sin(π(−f − fc)Ts) / (π(−f − fc)Ts)]² }
137
Block Diagram of a QPSK Transmitter
138
Block Diagram of a QPSK Receiver
139
Offset QPSK (OQPSK)
For QPSK, the occasional phase shift of π radians can cause the signal envelope to pass through zero for just an instant.
The amplification of the zero-crossings brings back the filtered
sidelobes since the fidelity of the signal at small voltage levels is
lost in transmission.
To prevent the regeneration of sidelobes and spectral widening, it
is imperative that QPSK signals that use pulse shaping be
amplified only using linear amplifiers, which are less efficient.
A modified form of QPSK, called offset QPSK (OQPSK) or
staggered QPSK is less susceptible to these deleterious effects
and supports more efficient amplification.
OQPSK ensures there are fewer baseband signal transitions.
Spectrum of an OQPSK signal is identical to that of QPSK.
140
Offset QPSK (OQPSK)
The time offset waveforms that are applied to the in-phase and
quadrature arms of an OQPSK modulator. Notice that a half-
symbol offset is used.
141
π/4-DQPSK
142
Generic π/4-DQPSK Transmitter
143
π/4-DQPSK Baseband Differential
Detector
144
Detection of Binary Signals in Gaussian
Noise
145
Digital Demodulation Techniques
146
Correlation Demodulator
147
Matched Filter Demodulator
148
Inter-Symbol Interference (ISI)
149
Inter Symbol Interference (ISI)
Inter-Symbol Interference (ISI) arises because of
imperfections in the overall frequency response of the
system. When a short pulse of duration Tb seconds is
transmitted through a band-limited system, the
frequency components constituting the input pulse
are differentially attenuated and differentially delayed
by the system. Consequently, the pulse appearing at
the output of the system is dispersed over an interval
longer than Tb seconds, thereby resulting in inter-
symbol interference.
Even in the absence of noise, imperfect filtering and
system bandwidth constraints lead to ISI.
150
Nyquist Channels for Zero ISI
The Nyquist channel is not physically realizable since it
dictates a rectangular bandwidth characteristic and an infinite
time delay.
Detection process would be very sensitive to small timing
errors.
Solution: Raised Cosine Filter.
151
Raised Cosine Filter
W0 = 1 / (2T)
Excess Bandwidth: W − W0
Roll-Off Factor: r = (W − W0) / W0
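The raised-cosine spectrum built from these definitions can be sketched as follows (a sketch assuming roll-off r > 0; the function and variable names are illustrative):

```python
import numpy as np

def raised_cosine_H(f, T, r):
    """Raised-cosine spectrum with roll-off r: flat out to (1-r)*W0,
    half-cosine taper to W = (1+r)*W0, zero beyond. W0 = 1/(2T) is the
    minimum (Nyquist) bandwidth."""
    W0 = 1.0 / (2.0 * T)
    f = np.abs(f)
    H = np.zeros_like(f)
    H[f <= W0 * (1 - r)] = 1.0
    roll = (f > W0 * (1 - r)) & (f <= W0 * (1 + r))
    H[roll] = 0.5 * (1 + np.cos(np.pi / (2 * r * W0) * (f[roll] - W0 * (1 - r))))
    return H

T, r = 1.0, 0.5
f = np.array([0.0, 0.5, 0.75])   # DC, the Nyquist edge W0, and W
print(raised_cosine_H(f, T, r))  # [1.  0.5 0. ]
```

At f = W0 the response is always 1/2 regardless of r, which is the symmetry that guarantees zero ISI at the sampling instants.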
152
Raised Cosine Filter Characteristics
153
Raised Cosine Filter Characteristics
154
Equalization
In practical systems, the frequency response of the channel is not known in advance, so the receiver cannot be designed beforehand to compensate for the ISI.
The filter that handles ISI at the receiver therefore contains adjustable parameters that are tuned to the channel characteristics.
The process of correcting the channel-induced distortion
is called equalization.
155
Equalization
156
Introduction to RAKE Receiver
Multiple versions of the transmitted signal are seen at
the receiver through the propagation channels.
CDMA spreading codes have very low correlation between successive chips.
If multipath components are delayed in time by more than a chip duration, they appear as uncorrelated noise at a CDMA receiver.
Consequently, equalization is NOT necessary; the multipath components can instead be combined coherently (the RAKE principle).
Introduction to RAKE Receiver
158
Maximum Ratio Combining (MRC)
MRC weights: G_l = A_l · e^{−jθ_l} (coherent combining).
[Figure: L receiver branches weighted by G_1, G_2, ..., G_L and summed; channel estimation supplies the weights. MRC gives the best performance among combining schemes.]
Maximum Ratio Combining (MRC)
Received envelope: r_L = Σ_{l=1}^{L} G_l · r_l

Total noise power: σ_n² = Σ_{l=1}^{L} |G_l|² · σ_{n,l}²

SNR: SNR_L = r_L² / (2σ_n²) = |Σ_l G_l · r_l|² / (2 · Σ_l |G_l|² σ_{n,l}²)

Since Σ_l G_l · r_l = Σ_l (G_l σ_{n,l}) · (r_l / σ_{n,l}), the Cauchy–Schwarz inequality gives

|Σ_l G_l · r_l|² ≤ (Σ_l |G_l σ_{n,l}|²) · (Σ_l |r_l / σ_{n,l}|²)

so that

SNR_L ≤ (1/2) · Σ_{l=1}^{L} r_l² / σ_{n,l}² = Σ_{l=1}^{L} SNR_l

with equality when G_l σ_{n,l} = k · r_l* / σ_{n,l}, i.e., G_l ∝ r_l* / σ_{n,l}².

⇒ Output SNR = sum of the SNRs from all branches when G_l ∝ r_l*.
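A quick numerical check of the MRC result: with weights G_l ∝ h_l*/σ_l², the combined SNR equals the sum of the branch SNRs. This is a sketch with random complex branch gains (the constant factor 1/2 in the SNR definition cancels on both sides and is omitted).

```python
import numpy as np

rng = np.random.default_rng(0)
L = 4
h = rng.normal(size=L) + 1j * rng.normal(size=L)  # branch gains, assumed known
sigma2 = np.full(L, 0.5)                          # noise power per branch

branch_snr = np.abs(h) ** 2 / sigma2
G = np.conj(h) / sigma2          # MRC weights: conjugate gain / noise power
mrc_snr = np.abs(np.sum(G * h)) ** 2 / np.sum(np.abs(G) ** 2 * sigma2)

print(np.isclose(mrc_snr, branch_snr.sum()))  # True: branch SNRs add up
```

Any other choice of weights (e.g., equal-gain combining with unequal branch noise) gives a strictly smaller combined SNR, per the Cauchy–Schwarz bound above.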
Example of RAKE Receiver Structure
162
Advantages of RAKE Receiver
163