
Performance of Soft-Decision Decoding

c => transmitted sequence of bits
r => received sequence at the demodulator output
c_jm => bit at the m-th position of the j-th branch of the transmitted sequence
r_jm => demodulator output at the m-th position of the j-th branch of the received sequence
In hard-decision decoding each received value is quantized to 0 or 1; for soft-decision decoding the unquantized demodulator output is

$$ r_{jm} = \sqrt{\varepsilon_c}\,(2c_{jm} - 1) + n_{jm} $$

ε_c => transmitted signal energy per coded bit
n_jm => noise introduced by the channel
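As a minimal illustrative sketch (my own, not part of the original notes), the soft demodulator outputs of this model can be generated with NumPy, assuming antipodal signaling with energy ε_c per coded bit and AWGN of variance N0/2:

```python
import numpy as np

def soft_outputs(code_bits, eps_c, N0, seed=0):
    """Soft demodulator outputs r_jm = sqrt(eps_c) * (2*c_jm - 1) + n_jm,
    with AWGN n_jm ~ N(0, N0/2)."""
    rng = np.random.default_rng(seed)
    c = np.asarray(code_bits, dtype=float)
    noise = rng.normal(0.0, np.sqrt(N0 / 2.0), size=c.shape)
    return np.sqrt(eps_c) * (2.0 * c - 1.0) + noise

# Example: all-zero code sequence (c_jm = 0 for all j, m), eps_c = 1, N0 = 0.5
r = soft_outputs(np.zeros(10, dtype=int), eps_c=1.0, N0=0.5)
print(r)
```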
Branch metric µ_j^(i) for the j-th branch of the i-th path:

$$ \mu_j^{(i)} = \log P\!\left(Y_j \mid C_j^{(i)}\right) $$

Path metric PM^(i) for the i-th path of B branches through the trellis:

$$ PM^{(i)} = \sum_{j=1}^{B} \mu_j^{(i)} $$

The demodulator output is described statistically by the probability density function

$$ p\!\left(r_{jm} \mid c_{jm}^{(i)}\right) = \frac{1}{\sqrt{2\pi}\,\sigma}\,\exp\!\left(-\frac{\left(r_{jm} - \sqrt{\varepsilon_c}\,(2c_{jm}^{(i)} - 1)\right)^2}{2\sigma^2}\right) $$
where σ² = N0/2 is the variance of the noise; here n_jm ~ N(0, N0/2).
Neglecting the terms that are common to all branch metrics, the branch metric simplifies to

$$ \mu_j^{(i)} = \sum_{m=1}^{n} r_{jm}\,(2c_{jm}^{(i)} - 1) $$

The correlation metric CM^(i) thus obtained is

$$ CM^{(i)} = \sum_{j=1}^{B} \mu_j^{(i)} = \sum_{j=1}^{B} \sum_{m=1}^{n} r_{jm}\,(2c_{jm}^{(i)} - 1) $$
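As an illustrative sketch (the function name and array shapes are my own assumptions, not from the notes), the correlation metric of any candidate code sequence can be computed directly from the soft outputs:

```python
import numpy as np

def correlation_metric(r, candidate_bits):
    """CM^(i): sum over all branches and positions of r_jm * (2*c_jm - 1).
    `r` and `candidate_bits` are arrays of the same shape (B branches x n bits)."""
    c = np.asarray(candidate_bits, dtype=float)
    return float(np.sum(np.asarray(r) * (2.0 * c - 1.0)))

# The Viterbi decoder keeps, at each trellis node, the path with the largest CM.
```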

In deriving the probability of error, we assume that the all-zero sequence is transmitted, and we determine the probability of error in deciding in favor of another sequence.
Let i = 0 denote the all-zero path, so c_jm^(0) = 0 ∀ {j, m}.

$$ CM^{(0)} = \sum_{j=1}^{B} \sum_{m=1}^{n} r_{jm}\,(2c_{jm}^{(0)} - 1) $$

$$ CM^{(0)} = \sum_{j=1}^{B} \sum_{m=1}^{n} \left(\sqrt{\varepsilon_c}\,(2(0) - 1) + n_{jm}\right)(2(0) - 1) $$

$$ CM^{(0)} = \sum_{j=1}^{B} \sum_{m=1}^{n} \left(-\sqrt{\varepsilon_c} + n_{jm}\right)(-1) $$

$$ CM^{(0)} = \sqrt{\varepsilon_c}\, nB - \sum_{j=1}^{B} \sum_{m=1}^{n} n_{jm} $$

P2(d) => first-event error probability: the probability of deciding in favor of an incorrect path that merges with the correct path for the first time.
Consider two paths: the i-th path and the all-zero path (i = 0).
For a correct decision we need CM^(i) < CM^(0); an error occurs when CM^(i) ≥ CM^(0).

$$ \therefore\ P_2(d) = P\!\left(CM^{(i)} \ge CM^{(0)}\right) = P\!\left(CM^{(i)} - CM^{(0)} \ge 0\right) $$

 Now,
$$ CM^{(0)} = \sum_{j=1}^{B} \sum_{m=1}^{n} r_{jm}\,(2c_{jm}^{(0)} - 1), \qquad CM^{(i)} = \sum_{j=1}^{B} \sum_{m=1}^{n} r_{jm}\,(2c_{jm}^{(i)} - 1) $$

Substituting these into the expression above,

$$ P_2(d) = P\!\left(\sum_{j=1}^{B} \sum_{m=1}^{n} r_{jm}\,(2c_{jm}^{(i)} - 1) - \sum_{j=1}^{B} \sum_{m=1}^{n} r_{jm}\,(2c_{jm}^{(0)} - 1) \ge 0\right) = P\!\left(2 \sum_{j=1}^{B} \sum_{m=1}^{n} r_{jm}\,\left(c_{jm}^{(i)} - c_{jm}^{(0)}\right) \ge 0\right) $$

Suppose the two paths differ in exactly d bit positions; in every other position c_jm^(i) − c_jm^(0) = 0, so those terms vanish and the expression reduces to

$$ P_2(d) = P\!\left(\sum_{l=1}^{d} r_l' \ge 0\right) $$
where the index l runs over the d positions in which the two paths differ, and {r_l'} denotes the corresponding received values.
For the all-zero path we took c_jm = 0 ∀ {j, m}.
Expectation:

$$ E\{r_l'\} = E\!\left\{\sqrt{\varepsilon_c}\,(2\cdot 0 - 1) + n_{jm}\right\} = E\!\left\{-\sqrt{\varepsilon_c}\right\} + E\{n_{jm}\} = -\sqrt{\varepsilon_c} $$

Variance:

$$ \operatorname{var}\{r_l'\} = \operatorname{var}\!\left\{-\sqrt{\varepsilon_c} + n_{jm}\right\} = 0 + \operatorname{var}\{n_{jm}\} = \frac{N_0}{2} $$
r_l' ~ N(−√ε_c, N0/2), i.e. the r_l' are independent, identically distributed Gaussian random variables.
Since a sum of independent Gaussian random variables is itself Gaussian,

$$ \sum_{l=1}^{d} r_l' \ \sim\ N\!\left(-d\sqrt{\varepsilon_c},\ \frac{dN_0}{2}\right) $$

We can convert this to a standard normal variable Z ~ N(0, 1):

$$ P\!\left(\sum_{l=1}^{d} r_l' \ge 0\right) = P\!\left(\frac{\sum_{l=1}^{d} r_l' + d\sqrt{\varepsilon_c}}{\sqrt{dN_0/2}} \ \ge\ \frac{d\sqrt{\varepsilon_c}}{\sqrt{dN_0/2}}\right) = P\!\left(Z \ge \sqrt{\frac{2\varepsilon_c d}{N_0}}\right) $$

$$ P\!\left(Z \ge \sqrt{\frac{2\varepsilon_c d}{N_0}}\right) = \int_{\sqrt{2\varepsilon_c d / N_0}}^{\infty} \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-z^2/2}\, dz $$

Here σ = 1 and µ = 0, so

$$ P\!\left(Z \ge \sqrt{\frac{2\varepsilon_c d}{N_0}}\right) = \int_{\sqrt{2\varepsilon_c d / N_0}}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-z^2/2}\, dz $$

$$ \Rightarrow\ P_2(d) = Q\!\left(\sqrt{\frac{2\varepsilon_c d}{N_0}}\right), \qquad \because\ Q(x) = \int_{x}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-u^2/2}\, du $$
Now, the SNR per bit is γ_b = ε_b / N_0, and ε_b = ε_c / R_c, so

$$ \gamma_b = \frac{\varepsilon_b}{N_0} = \frac{\varepsilon_c}{R_c N_0} $$

ε_b => energy per information bit
ε_c => transmitted symbol energy
R_c => code rate

$$ P_2(d) = Q\!\left(\sqrt{2\gamma_b R_c d}\right) \qquad \ldots \text{equation (1)} $$
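As a quick numerical sketch (my own illustration, not part of the notes), equation (1) can be evaluated with the Q function expressed through the complementary error function, Q(x) = ½ erfc(x/√2):

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def P2_soft(d, gamma_b, Rc):
    """Pairwise (first-event) error probability of equation (1):
    P2(d) = Q(sqrt(2 * gamma_b * Rc * d))."""
    return Q(math.sqrt(2.0 * gamma_b * Rc * d))

# Example: d = 5, gamma_b = 4 (about 6 dB), rate-1/2 code
print(P2_soft(5, gamma_b=4.0, Rc=0.5))
```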

There can be multiple incorrect paths that merge with the all-zero path, so we sum the pairwise error probabilities over all possible path distances. This gives an upper bound on the first-event error probability:

$$ P_e \le \sum_{d = d_{\text{free}}}^{\infty} a_d\, P_2(d) $$

a_d => number of paths at distance d from the all-zero path

Substituting P_2(d) = Q(√(2γ_b R_c d)),

$$ P_e \le \sum_{d = d_{\text{free}}}^{\infty} a_d\, Q\!\left(\sqrt{2\gamma_b R_c d}\right) \qquad \ldots \text{equation (2)} $$
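The bound of equation (2) can be evaluated numerically once the distance spectrum {a_d} of a particular code is known. A minimal sketch, reusing P2_soft from the previous snippet; the spectrum values below are placeholders for illustration only, not those of any specific code:

```python
def union_bound_soft(spectrum, gamma_b, Rc):
    """Upper bound of equation (2): Pe <= sum_d a_d * Q(sqrt(2*gamma_b*Rc*d)).
    `spectrum` maps distance d -> number of paths a_d at that distance."""
    return sum(a_d * P2_soft(d, gamma_b, Rc) for d, a_d in spectrum.items())

# Placeholder spectrum {d: a_d}; replace with the actual code's distance spectrum.
example_spectrum = {5: 1, 6: 2, 7: 4, 8: 8}
print(union_bound_soft(example_spectrum, gamma_b=4.0, Rc=0.5))
```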

Using the upper bound Q(x) ≤ e^{−x²/2} on the Q function,

$$ Q\!\left(\sqrt{2\gamma_b R_c d}\right) \le e^{-\gamma_b R_c d} $$
Replacing the Q function with this upper bound in equation (2):

$$ P_e \le \sum_{d = d_{\text{free}}}^{\infty} a_d\, e^{-\gamma_b R_c d} $$

Writing D = e^{−γ_b R_c}, this becomes

$$ P_e \le \sum_{d = d_{\text{free}}}^{\infty} a_d\, D^d, \qquad \text{where } D = e^{-\gamma_b R_c} $$
Now,

$$ \sum_{d = d_{\text{free}}}^{\infty} a_d\, D^d = T(D) $$

T(D) => transfer function of the code

$$ \therefore\ P_e \le T(D), \qquad \text{where } D = e^{-\gamma_b R_c} $$

Now we need to determine the bit-error probability.

Let us take the transfer function T(D, N), where the exponent of N indicates the number of bit errors made in selecting an incorrect path that merges with the all-zero path at some node B:

$$ T(D, N) = \sum_{d = d_{\text{free}}}^{\infty} a_d\, D^d\, N^{f(d)} $$

f(d) => exponent of N as a function of d

If we multiply each P_2(d) by the number of incorrectly decoded information bits f(d) and sum over all possible paths, we obtain the bit-error probability P_b. The weights B_d follow from taking the derivative of T(D, N) with respect to N and setting N = 1:

$$ \left.\frac{dT(D, N)}{dN}\right|_{N=1} = \sum_{d = d_{\text{free}}}^{\infty} a_d\, f(d)\, D^d = \sum_{d = d_{\text{free}}}^{\infty} B_d\, D^d, \qquad \text{where } B_d = a_d\, f(d) $$
Now consider the bit-error probability P_b. For k = 1 it is, by definition, upper bounded by

$$ P_b < \sum_{d = d_{\text{free}}}^{\infty} a_d\, f(d)\, P_2(d) = \sum_{d = d_{\text{free}}}^{\infty} B_d\, P_2(d) $$

$$ P_b < \sum_{d = d_{\text{free}}}^{\infty} B_d\, Q\!\left(\sqrt{2\gamma_b R_c d}\right) \qquad \ldots \text{equation (3)} $$

Now, since Q(√(2γ_b R_c d)) ≤ D^d,

$$ P_b < \sum_{d = d_{\text{free}}}^{\infty} B_d\, D^d = \left.\frac{dT(D, N)}{dN}\right|_{N=1} $$
Thus the bit-error probability is upper bounded by the derivative of the transfer function with respect to N:

$$ P_b < \left.\frac{dT(D, N)}{dN}\right|_{N=1,\ D = e^{-\gamma_b R_c}} $$
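To make this concrete, here is a small SymPy sketch. It assumes the transfer function T(D, N) = D⁵N/(1 − 2DN), the form commonly quoted for a rate-1/2, constraint-length-3 convolutional code (taken as an illustrative example here, not derived in these notes), and evaluates the bound at N = 1, D = e^{−γ_b R_c}:

```python
import sympy as sp

D, N, gamma_b = sp.symbols('D N gamma_b', positive=True)
Rc = sp.Rational(1, 2)

# Assumed example transfer function: T(D, N) = D^5 * N / (1 - 2*D*N)
T = D**5 * N / (1 - 2 * D * N)

# Bit-error bound: dT/dN evaluated at N = 1, then D = exp(-gamma_b * Rc)
dT_dN = sp.diff(T, N).subs(N, 1)
Pb_bound = dT_dN.subs(D, sp.exp(-gamma_b * Rc))

print(sp.simplify(dT_dN))                    # equals D**5 / (1 - 2*D)**2
print(Pb_bound.subs(gamma_b, 4.0).evalf())   # numeric bound at gamma_b = 4 (about 6 dB)
```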
Performance of Hard-Decision Decoding

In hard-decision decoding of the convolutional code, the metrics in the Viterbi algorithm are the Hamming distances between the received sequence and the surviving sequences in the trellis.
Here p is the crossover (bit-flip) probability of the binary symmetric channel.
The all-zero path is assumed to be transmitted, and we determine the first-event error probability P_2(d).
We compare the all-zero path with an incorrect path that merges with it at some node B and is at Hamming distance d from it.
When d is odd:

$$ P_2(d) = \sum_{k=(d+1)/2}^{d} \binom{d}{k}\, p^k (1-p)^{d-k} $$

From the Hamming-distance argument we can only correct up to t_c = ⌊(d−1)/2⌋ errors (the error-correction capacity), so any number of errors greater than t_c cannot be corrected and produces an error event.
When d is even:

$$ P_2(d) = \sum_{k=d/2+1}^{d} \binom{d}{k}\, p^k (1-p)^{d-k} \;+\; \frac{1}{2}\binom{d}{d/2}\, p^{d/2} (1-p)^{d/2} $$

Here the second term covers the tie case in which exactly half of the d bits are in error, so the two paths have equal metrics; the factor 1/2 accounts for the tie being resolved by a random choice.
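The two cases can be combined into one small helper. A sketch of my own, assuming the tie is broken at random as described above:

```python
from math import comb

def P2_hard(d, p):
    """Pairwise error probability between two paths at Hamming distance d
    on a BSC with crossover probability p (hard-decision Viterbi decoding)."""
    if d % 2 == 1:  # odd d: error if more than half of the d bits are flipped
        return sum(comb(d, k) * p**k * (1 - p)**(d - k) for k in range((d + 1) // 2, d + 1))
    # even d: strictly more than half flipped, plus half of the tie probability
    tail = sum(comb(d, k) * p**k * (1 - p)**(d - k) for k in range(d // 2 + 1, d + 1))
    return tail + 0.5 * comb(d, d // 2) * p**(d // 2) * (1 - p)**(d // 2)

print(P2_hard(5, 0.01), P2_hard(6, 0.01))
```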
To simplify, we upper-bound these expressions. For odd d,

$$ P_2(d) = \sum_{k=(d+1)/2}^{d} \binom{d}{k}\, p^k (1-p)^{d-k} $$

For k > d/2 (and p < 1/2), each term satisfies p^k (1−p)^{d−k} ≤ p^{d/2} (1−p)^{d/2}, so

$$ P_2(d) \le \sum_{k=(d+1)/2}^{d} \binom{d}{k}\, p^{d/2} (1-p)^{d/2} $$

$$ = p^{d/2} (1-p)^{d/2} \sum_{k=(d+1)/2}^{d} \binom{d}{k} $$

Extending the summation to start from k = 0 can only increase it, and Σ_{k=0}^{d} C(d, k) = 2^d, so

$$ P_2(d) \le p^{d/2} (1-p)^{d/2} \sum_{k=0}^{d} \binom{d}{k} = 2^d\, p^{d/2} (1-p)^{d/2} $$
So finally we obtain the simplified bound

$$ P_2(d) < [4p(1-p)]^{d/2} $$

The same result is obtained when d is even.
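A quick numerical check of this bound against the exact expression (reusing P2_hard from the sketch above):

```python
# Compare the exact pairwise error probability with the simplified bound
p = 0.01
for d in range(5, 11):
    exact = P2_hard(d, p)
    bound = (4 * p * (1 - p)) ** (d / 2)
    print(d, exact, bound, exact < bound)
```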
Now, summing over all the first-event error probabilities, we get the overall first-event error probability P_e:

$$ P_e < \sum_{d = d_{\text{free}}}^{\infty} a_d\, [4p(1-p)]^{d/2} $$

In terms of the transfer function T(D), this is

$$ P_e < T(D), \qquad \text{where } D = \sqrt{4p(1-p)} $$
As before, let us take the transfer function T(D, N), where the exponent of N indicates the number of bit errors made in selecting an incorrect path that merges with the all-zero path at some node B:

$$ T(D, N) = \sum_{d = d_{\text{free}}}^{\infty} a_d\, D^d\, N^{f(d)} $$

f(d) => exponent of N as a function of d

Taking the derivative of T(D, N) with respect to N and then setting N = 1:

$$ \left.\frac{dT(D, N)}{dN}\right|_{N=1} = \sum_{d = d_{\text{free}}}^{\infty} a_d\, f(d)\, D^d = \sum_{d = d_{\text{free}}}^{\infty} B_d\, D^d, \qquad \text{where } B_d = a_d\, f(d) $$
A useful measure of the performance of a convolutional code is the bit-error probability P_b. For k = 1 it is upper bounded by

$$ P_b < \sum_{d = d_{\text{free}}}^{\infty} B_d\, P_2(d) $$

Thus the bit-error probability is upper bounded by the derivative of the transfer function with respect to N:

$$ P_b < \left.\frac{dT(D, N)}{dN}\right|_{N=1,\ D = \sqrt{4p(1-p)}} $$
