$x_1 = \int_0^{T_b} x(t)\,\phi_1(t)\,dt$
$f_{X_1}(x_1 \mid 0) = \frac{1}{\sqrt{\pi N_0}}\exp\!\left[-\frac{1}{N_0}\left(x_1 - s_{21}\right)^2\right] = \frac{1}{\sqrt{\pi N_0}}\exp\!\left[-\frac{1}{N_0}\left(x_1 + \sqrt{E_b}\right)^2\right]$
Take the basis functions
$\phi_i(t) = \begin{cases} \sqrt{\dfrac{2}{T_b}}\cos(2\pi f_i t), & 0 \le t \le T_b \\ 0, & \text{elsewhere} \end{cases}$
Express the signals in terms of the basis functions:
$\mathbf{s}_1 = \begin{bmatrix} \sqrt{E_b} \\ 0 \end{bmatrix}, \qquad \mathbf{s}_2 = \begin{bmatrix} 0 \\ \sqrt{E_b} \end{bmatrix}$
Figure: Signal-space diagram for the binary FSK system. The diagram also includes two insets showing example waveforms of the two signals $s_1(t)$ and $s_2(t)$.
Figure: Block diagrams for (a) the binary FSK transmitter and (b) the coherent binary FSK receiver
$P_e = \frac{1}{2}\operatorname{erfc}\!\left(\sqrt{\frac{E_b}{2N_0}}\right)$
$P_e^{\mathrm{BPSK}} = \frac{1}{2}\operatorname{erfc}\!\left(\sqrt{\frac{E_b}{N_0}}\right), \qquad P_e^{\mathrm{BFSK}} = \frac{1}{2}\operatorname{erfc}\!\left(\sqrt{\frac{E_b}{2N_0}}\right)$
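The two formulas above can be compared directly in a few lines. This is a minimal sketch in Python; the Eb/N0 value is an assumed example, not from the text.

```python
import math

def pe_bpsk(ebn0):
    """Pe of coherent BPSK: (1/2) erfc(sqrt(Eb/N0))."""
    return 0.5 * math.erfc(math.sqrt(ebn0))

def pe_bfsk(ebn0):
    """Pe of coherent binary FSK: (1/2) erfc(sqrt(Eb/(2*N0)))."""
    return 0.5 * math.erfc(math.sqrt(ebn0 / 2.0))

ebn0 = 4.0  # example Eb/N0 as a linear ratio (assumed value)
# BFSK needs exactly twice the Eb/N0 (3 dB more) to match BPSK's error rate:
assert math.isclose(pe_bfsk(2.0 * ebn0), pe_bpsk(ebn0))
assert pe_bpsk(ebn0) < pe_bfsk(ebn0)
```

The assertions confirm the 3 dB penalty of coherent BFSK relative to BPSK that the two expressions imply.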
$\int_0^T \phi_i(t)\,\phi_j(t)\,dt = \delta_{ij} = \begin{cases} 1, & i = j \\ 0, & i \ne j \end{cases}$
$s_i(t) = \sum_{j=1}^{N} s_{ij}\,\phi_j(t), \quad 0 \le t \le T, \quad i = 1,2,\ldots,M$
$s_{ij} = \int_0^T s_i(t)\,\phi_j(t)\,dt, \quad i = 1,2,\ldots,M, \quad j = 1,2,\ldots,N$
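The expansion and its coefficient formula can be checked numerically. A minimal sketch, assuming an example two-function trigonometric basis and a sample signal (both chosen here for illustration, not from the text):

```python
import numpy as np

n = 20000
t = np.linspace(0.0, 1.0, n, endpoint=False)
dt = 1.0 / n
# An orthonormal basis on [0, 1] (assumed example):
phi1 = np.sqrt(2) * np.cos(2 * np.pi * t)
phi2 = np.sqrt(2) * np.sin(2 * np.pi * t)

s = 3.0 * phi1 - 1.0 * phi2          # example signal lying in the span of the basis

# s_ij = integral of s(t) * phi_j(t) dt, approximated by a Riemann sum
c1 = np.sum(s * phi1) * dt
c2 = np.sum(s * phi2) * dt
s_hat = c1 * phi1 + c2 * phi2        # reconstruction from the coefficients

print(round(c1, 6), round(c2, 6))    # recovers the original coefficients 3 and -1
```

Projecting onto each basis function recovers exactly the coefficients used to build the signal, and the reconstruction matches the original waveform.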
Solution
We let $v_1(t) = s_1(t)$ and compute
$\phi_1(t) = \frac{v_1(t)}{\|v_1\|} = 1, \quad 0 \le t \le 1$
Next, we compute $\langle s_2, \phi_1 \rangle = \int_0^1 1 \cdot \cos(2\pi t)\,dt = 0$
and we set $v_2(t) = s_2(t) - \langle s_2, \phi_1 \rangle\,\phi_1(t) = \cos(2\pi t), \quad 0 \le t \le 1$
The second orthonormal function is found from $\phi_2(t) = \dfrac{v_2(t)}{\|v_2\|} = \sqrt{2}\cos(2\pi t), \quad 0 \le t \le 1$
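The worked Gram-Schmidt steps can be replayed numerically on a discretized grid. A minimal sketch, assuming $s_1(t)=1$ and $s_2(t)=\cos(2\pi t)$ as in the example:

```python
import numpy as np

n = 100000
t = np.linspace(0.0, 1.0, n, endpoint=False)
dt = 1.0 / n
inner = lambda f, g: np.sum(f * g) * dt      # <f, g> = integral of f(t) g(t) dt

s1 = np.ones_like(t)                          # s1(t) = 1
s2 = np.cos(2 * np.pi * t)                    # s2(t) = cos(2*pi*t)

v1 = s1
phi1 = v1 / np.sqrt(inner(v1, v1))            # phi1(t) = 1
v2 = s2 - inner(s2, phi1) * phi1              # <s2, phi1> = 0, so v2 = s2
phi2 = v2 / np.sqrt(inner(v2, v2))            # phi2(t) = sqrt(2) cos(2*pi*t)

# Verify orthonormality: unit norms, zero cross-correlation (up to grid error)
print(inner(phi1, phi1), inner(phi2, phi2), inner(phi1, phi2))
```

The printed inner products are 1, 1, and approximately 0, confirming that the two functions form an orthonormal pair.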
$y_j(t) = \int_{-\infty}^{\infty} x(\tau)\,h_j(t - \tau)\,d\tau$
If we set $h_j(t) = \phi_j(T - t)$ (a matched filter), then
$y_j(t) = \int_{-\infty}^{\infty} x(\tau)\,\phi_j(T - t + \tau)\,d\tau$
Sampling this output at time $t = T$:
$y_j(T) = \int_{-\infty}^{\infty} x(\tau)\,\phi_j(\tau)\,d\tau$
The result is the same as the correlator output.
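The matched-filter/correlator equivalence is easy to demonstrate in discrete time: convolving with the time-reversed basis function and sampling at the end of the interval gives exactly the inner product. A minimal sketch with assumed random example waveforms:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 512
x = rng.standard_normal(N)        # received waveform samples (example data)
phi = rng.standard_normal(N)      # basis-function samples (any shape works)

h = phi[::-1]                     # matched filter: h[k] = phi[N-1-k], i.e. phi(T - t)
y = np.convolve(x, h)             # filter output for all sample times
matched_out = y[N - 1]            # sample at "t = T"
correlator_out = np.dot(x, phi)   # sum of x[k] phi[k], the correlator output

print(np.isclose(matched_out, correlator_out))  # → True
```

The two numbers agree to machine precision, mirroring the continuous-time derivation above.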
$E_i = \sum_{j=1}^{N} s_{ij}^2 = \|\mathbf{s}_i\|^2$
When the priors $p_k$ are all equal, the resulting decision rule is called the maximum-likelihood (ML) rule.
For an AWGN channel,
$l(m_i) = -\frac{1}{N_0}\sum_{j=1}^{N}\left(x_j - s_{ij}\right)^2, \quad i = 1,2,\ldots,M$
which attains its maximum when $\sum_{j=1}^{N}\left(x_j - s_{ij}\right)^2$ is minimum, i.e., when the Euclidean distance is minimum.
In practice,
$\sum_{j=1}^{N}\left(x_j - s_{kj}\right)^2 = \sum_{j=1}^{N} x_j^2 - 2\sum_{j=1}^{N} x_j s_{kj} + \sum_{j=1}^{N} s_{kj}^2$
The observation vector $\mathbf{x}$ lies in region $Z_i$ if $\sum_{j=1}^{N} x_j s_{kj} - \frac{1}{2}E_k$ is maximum for $k = i$.
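The equivalence between the correlation metric and the minimum-distance rule follows directly from the expansion above, since $\sum_j x_j^2$ does not depend on $k$. A minimal numerical sketch, with assumed random example signal vectors:

```python
import numpy as np

# Verify that maximizing  sum_j x_j s_kj - E_k/2  selects the same symbol as
# minimizing the Euclidean distance ||x - s_k||.
rng = np.random.default_rng(1)
M, N = 4, 2
S = rng.standard_normal((M, N))            # M example signal vectors (rows), unequal energies
x = S[2] + 0.1 * rng.standard_normal(N)    # noisy observation (example)

E = np.sum(S**2, axis=1)                   # symbol energies E_k = sum_j s_kj^2
metric = S @ x - E / 2.0                   # correlation metric for each k
dist2 = np.sum((x - S)**2, axis=1)         # squared Euclidean distances

assert np.argmax(metric) == np.argmin(dist2)
```

Because the energy-correction term $E_k/2$ is included, the rule stays correct even when the $M$ signals have unequal energies.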
Figure: Illustrating the partitioning of the observation space into decision regions
for the case when 𝑁 = 2 and 𝑀 = 4; it is assumed that the 𝑀 transmitted symbols are equally likely
$P_e = 1 - \frac{1}{M}\sum_{i=1}^{M}\int_{Z_i} f_{\mathbf{X}}\left(\mathbf{x} \mid m_i\right)d\mathbf{x}$
$s_i(t) = \sum_{j=1}^{N} s_{ij}\,\phi_j(t), \quad 0 \le t < T$
The received waveform is
$y(t) = \sum_{j=1}^{N}\left(s_{ij} + n_j\right)\phi_j(t) + n_r(t) = \sum_{j=1}^{N} r_j\,\phi_j(t) + n_r(t)$
where $r_j = s_{ij} + n_j$ and $n_r(t) = n(t) - \sum_{j=1}^{N} n_j\,\phi_j(t)$
$H_1: \; y(t) = \sqrt{2E/T}\left[G_1\cos(\omega_1 t) + G_2\sin(\omega_1 t)\right] + n(t)$
$H_2: \; y(t) = \sqrt{2E/T}\left[G_1\cos(\omega_2 t) + G_2\sin(\omega_2 t)\right] + n(t), \quad 0 \le t \le T$
$\phi_1(t) = \sqrt{2/T}\cos(\omega_1 t)$
$\phi_2(t) = \sqrt{2/T}\sin(\omega_1 t)$
$\phi_3(t) = \sqrt{2/T}\cos(\omega_2 t)$
$\phi_4(t) = \sqrt{2/T}\sin(\omega_2 t)$
$0 \le t \le T$
$s_i(t) = A\cos\!\left(2\pi\left[f_c + (i-1)\Delta f\right]t\right), \quad 0 \le t \le T_s$
where $\Delta f = \dfrac{m}{2T_s}$, with $m$ an integer.
Applying the Gram-Schmidt procedure, we choose
$v_1(t) = s_1(t) = A\cos(2\pi f_c t), \quad 0 \le t \le T_s$
$\|v_1\|^2 = \int_0^{T_s} A^2\cos^2(2\pi f_c t)\,dt = \frac{A^2 T_s}{2}$
$\phi_1(t) = \frac{v_1(t)}{\|v_1\|} = \sqrt{\frac{2}{T_s}}\cos(2\pi f_c t), \quad 0 \le t \le T_s$
It can be shown that $\langle s_2, \phi_1 \rangle = 0$ if $\Delta f = \dfrac{m}{2T_s}$, so that the second orthonormal function is
$\phi_2(t) = \sqrt{\dfrac{2}{T_s}}\cos\!\left(2\pi(f_c + \Delta f)t\right), \quad 0 \le t \le T_s$
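The orthogonality condition $\Delta f = m/(2T_s)$ can be checked numerically. A minimal sketch, assuming example values for $f_c$ and $T_s$ (chosen so that $f_c T_s$ is an integer):

```python
import numpy as np

Ts = 1e-3                       # symbol duration (assumed example)
fc = 10e3                       # carrier frequency (assumed example)
n = 200000
t = np.linspace(0.0, Ts, n, endpoint=False)
dt = Ts / n

ratios = []
for m in (1, 2, 3):
    df = m / (2 * Ts)           # tone spacing satisfying the orthogonality condition
    tone1 = np.cos(2 * np.pi * fc * t)
    tone2 = np.cos(2 * np.pi * (fc + df) * t)
    cross = np.sum(tone1 * tone2) * dt     # integral of s1 s2 over the symbol
    energy = np.sum(tone1**2) * dt         # tone energy, about Ts/2
    ratios.append(abs(cross) / energy)     # normalized cross-correlation

print([r < 1e-4 for r in ratios])          # each spacing gives (near-)zero overlap
```

For every integer $m$ the normalized cross-correlation is negligible (limited only by the discretization), confirming that $m/(2T_s)$ is the minimum spacing that keeps the FSK tones orthogonal.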
Proceeding similarly yields the $M - 2$ remaining orthonormal functions up to $\phi_M(t)$. Thus the number of orthonormal functions is the same as the number of possible signals, and each signal can be written in terms of its orthonormal function as
$s_i(t) = \sqrt{E_s}\,\phi_i(t)$
$Z_i = \int_0^{T_s} y(t)\,\phi_i(t)\,dt, \quad \text{where } y(t) = s_i(t) + n(t)$
If $s_l(t)$ is transmitted, the decision rule becomes
$d_l^2 = \sum_{j=1}^{M}\left(Z_j - \sqrt{E_s}\,\delta_{lj}\right)^2 = \text{minimum over } l = 1,2,\ldots,M$
Taking the square root and writing the sum out, this can be expressed as
$d_l = \sqrt{Z_1^2 + Z_2^2 + \cdots + \left(Z_l - \sqrt{E_s}\right)^2 + \cdots + Z_M^2} = \text{minimum}$
$d_l^2 = \sum_{j=1}^{M} Z_j^2 + E_s - 2\sqrt{E_s}\,Z_l = \text{minimum}$
Since the sum over $j$ and the term $E_s$ are independent of $l$, $d_l^2$ can be minimized with respect to $l$ by choosing as the possible transmitted signal the one that maximizes the last term; that is, the decision rule becomes: choose the possible transmitted signal $s_l(t)$ such that
$\sqrt{E_s}\,Z_l = \text{maximum}, \quad \text{or} \quad Z_l = \int_0^{T_s} y(t)\,\phi_l(t)\,dt = \text{maximum with respect to } l$
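The final rule is a bank of correlators followed by a maximum selector. A minimal end-to-end sketch, assuming example values for $f_c$, $T_s$, $E_s$, and the noise level (none of which come from the text):

```python
import numpy as np

rng = np.random.default_rng(42)
M, n = 4, 1000
Ts = 1.0                            # symbol duration (assumed example)
fc = 10.0                           # carrier frequency (assumed example)
t = np.linspace(0.0, Ts, n, endpoint=False)
dt = Ts / n

# Orthonormal MFSK tones: phi_l(t) = sqrt(2/Ts) cos(2*pi*(fc + l*df)*t), df = 1/(2*Ts)
phis = np.array([np.sqrt(2 / Ts) * np.cos(2 * np.pi * (fc + l / (2 * Ts)) * t)
                 for l in range(M)])

Es = 1.0                            # symbol energy (assumed)
sent = 2                            # index of the transmitted symbol
y = np.sqrt(Es) * phis[sent] + 0.2 * rng.standard_normal(n)   # received waveform

Z = phis @ y * dt                   # correlator bank: Z_l = integral of y(t) phi_l(t) dt
decision = int(np.argmax(Z))        # choose the l that maximizes Z_l
print(decision)                     # → 2
```

Only the correlator matched to the transmitted tone accumulates the signal energy, so the maximum selector recovers the transmitted symbol despite the noise.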