p_M(m) = 1 − p if m = 0, and p_M(m) = p if m = 1.
• A binary symmetric channel acts on an input bit to produce an output
bit, by flipping the input with “crossover” probability e, or
transmitting it correctly with probability 1 − e.
P(error) = P(M̂ ≠ M).
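To make this channel model concrete, here is a minimal sketch in Python (our own addition; the function name bsc and the NumPy-based implementation are illustrative, not part of the problem set):

```python
import numpy as np

def bsc(x, eps, rng=None):
    """Send a 0/1 array through a BSC with crossover probability eps."""
    if rng is None:
        rng = np.random.default_rng()
    flips = (rng.random(x.shape) < eps).astype(x.dtype)  # 1 where the bit flips
    return x ^ flips  # XOR implements the flip

# Example: send ten zero bits through a fairly noisy channel.
print(bsc(np.zeros(10, dtype=int), eps=0.3))
```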
(a) No encoding:
The simplest encoder sends each message bit directly through the
channel, i.e. X1 = X = M. Then, a reasonable decoder is to use
the channel output directly as the message estimate: M̂ = Y1 = Y.
What is the probability of error in this case?
(b) Repetition code with majority decoding rule:
The next thing we attempt is to send each message bit n > 1
times through this channel. On the decoder end, we do what seems
natural: decide 0 when there are more 0s than 1s, and decide 1
otherwise. This is a “majority” decoding rule, and we can write it as
follows (making the dependence of M̂ on the channel outputs
explicit):

M̂(y1, …, yn) = 0 if y1 + ⋯ + yn < n/2, and 1 if y1 + ⋯ + yn ≥ n/2.
Analytical results:
i. Find an expression for the probability of error as a function of e, p,
and n. [Hint: First use the total probability rule to divide the
problem into solving P(error | M = 0) and P(error | M = 1), as we did
in the lecture.]
ii. Choose p = 0.5, e = 0.3 and use your favorite computational
method to make a plot of P(error) versus n, for n = 1, …, 15.
iii. Is the majority decoding rule optimal (lowest P(error)) for all p?
[Hint: What is the best decoding rule if p = 0?]
Simulation:
i. Set p = 0.5, e = 0.3, and generate a message of length 20.
ii. Encode your message with n = 3.
iii. Transmit the message, decode it, and write down the value of the
message error rate (the ratio of bits in error to the total number of
message bits).
iv. Repeat Step iii several times, and average all the message error
rates that you obtain. How does this compare to your analytical
expression for P(error) at this value of n?
v. Repeat Steps iii and iv for n = 5, 10, 15, and compare the respective
average message error rates to the corresponding analytical P(error).
(A code sketch of these steps appears after this list.)
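A hedged sketch of Steps i–v (our own code; the variable names and the choice of 1000 trials are illustrative, not from the problem set):

```python
import numpy as np

rng = np.random.default_rng(0)
p, eps, msg_len = 0.5, 0.3, 20

def avg_error_rate(n, trials=1000):
    """Average message error rate of the n-fold repetition code."""
    rates = []
    for _ in range(trials):
        m = (rng.random(msg_len) < p).astype(int)       # Step i: random message
        x = np.repeat(m, n)                             # Step ii: repetition encoding
        y = x ^ (rng.random(x.size) < eps).astype(int)  # transmit through the BSC
        # Majority decoding: decide 1 when at least half of the n copies are 1.
        m_hat = (y.reshape(msg_len, n).sum(axis=1) >= n / 2).astype(int)
        rates.append(np.mean(m_hat != m))               # Step iii: error rate
    return np.mean(rates)                               # Step iv: average

for n in (3, 5, 10, 15):                                # Step v
    print(n, avg_error_rate(n))
```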
(c) Repetition code with maximum a posteriori (MAP) rule:
In Part (b), majority decoding was chosen almost arbitrarily, and we have
alluded to the fact that it might not be the best choice in some cases.
Here, we claim that the probability of error is in fact minimized when the
decoding rule is as follows (with N0 and N1 the numbers of 0s and 1s
among y1, …, yn):

M̂(y1, …, yn) = 0 if e^{N1}(1 − e)^{N0}(1 − p) ≥ e^{N0}(1 − e)^{N1}·p, and 1 otherwise.
[Tree diagram: branches for the events W > 1/4 and W < 1/4, labelled A and B, shown alongside the PDF f_W(w).]
Then, using the PDFs given in the question, we can determine two
probabilities that are clearly relevant for this question and use them as
branch labels for the tree. Finally, Bayes's Rule gives the desired
conditional probability.
3. (a) We know that the total length of the red intervals on the edge is two
times that of the black intervals. Since the ball is equally likely to fall at
any position on the edge, the probability of falling in a red interval is 2/3.
(c) Since the ball is equally likely to fall on any point of the edge, we can
see it is twice as likely to land on red as on black.
(d) The total gain (or loss), T, equals the sum of all the Xi, i.e. T = X1
+ X2 + ⋯ + Xn. Since all the Xi's are independent of each other, and
they have the same Gaussian distribution, the sum will also be a
Gaussian, with mean n times the mean of X1 and variance n times the
variance of X1. A quick numerical check of this fact follows.
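A quick numerical check of this closure property (our own addition; the values of mu, sigma, and n are illustrative, not from the problem):

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n, trials = 1.0, 2.0, 10, 200_000   # illustrative values
T = rng.normal(mu, sigma, size=(trials, n)).sum(axis=1)
print(T.mean(), n * mu)          # sample mean vs. n*mu
print(T.var(), n * sigma**2)     # sample variance vs. n*sigma^2
```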
4. (a) Y = g(X) = sin(πX/2). Because g(x) is a monotonic function for
−1 < x < 1, we can define an inverse function h(y) = (2/π) arcsin y and
use the PDF formula given in lecture:

f_Y(y) = f_X(h(y)) · |dh/dy|
       = f_X((2/π) arcsin y) · (2/π) · 1/√(1 − y²)
       = (1/2) · (2/π) · 1/√(1 − y²)
       = 1/(π√(1 − y²)).

The final answer is:

f_Y(y) = 0 for y < −1,
f_Y(y) = 1/(π√(1 − y²)) for −1 ≤ y < 1,
f_Y(y) = 0 for 1 ≤ y.
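As a sanity check (our own addition, assuming as in the reconstruction above that X is uniform on (−1, 1) and Y = sin(πX/2)), a histogram of simulated Y values should match the derived density:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, 1_000_000)                 # X uniform on (-1, 1)
y = np.sin(np.pi * x / 2)                         # Y = g(X)
hist, edges = np.histogram(y, bins=50, range=(-0.9, 0.9), density=True)
centers = (edges[:-1] + edges[1:]) / 2
analytic = 1 / (np.pi * np.sqrt(1 - centers**2))  # the derived f_Y
print(np.abs(hist - analytic).max())              # should be close to 0
```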
[Figure: plot of y = g(x) for −1 < x < 1 with the solution points a, b, c, d of g(x) = y marked on the x axis, above a plot of the PDF of x.]
F_Y(y) = P(a ≤ X ≤ b) + P(c ≤ X ≤ d)
       = 2(b − a)(0.5)
       = (1/π) arcsin y + 1/2.
For 0 ≤ y ≤ 1, consider the following
diagram:
[Figure: y = sin(2πx) for −1 < x < 1 with the points d, a, b, c marked on the x axis, above a plot of the PDF of x.]
(a) No encoding:
P(error) = P(M̂ = 1, M = 0) + P(M̂ = 0, M = 1)
         = P(Y = 1, X = 0) + P(Y = 0, X = 1)
         = P(Y = 1 | X = 0)P(X = 0) + P(Y = 0 | X = 1)P(X = 1)
         = e(1 − p) + ep = e.
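A brief Monte Carlo check of this result (our own addition; the variable names and the use of NumPy are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
p, eps, trials = 0.5, 0.3, 1_000_000
m = (rng.random(trials) < p).astype(int)          # message bits M
y = m ^ (rng.random(trials) < eps).astype(int)    # X = M sent through the BSC
print(np.mean(y != m))                            # should be close to eps
```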
(b) Repetition encoding:
i. Similar to the previous case, the errors can be attributed to the
events M̂ = 1, M = 0 and M̂ = 0, M = 1. Assuming we
encode each bit by repeating it n = 2m + 1 times, m = 0, 1, …, we
get

P(error) = P(M̂ = 1, M = 0) + P(M̂ = 0, M = 1)
         = P(Y1 + Y2 + ⋯ + Yn ≥ m + 1 | M = 0)(1 − p) + P(Y1 + Y2 + ⋯ + Yn ≤ m | M = 1)p
         = P(at least (m + 1) errors | M = 0)(1 − p) + P(at least (m + 1) errors | M = 1)p
         = Σ_{k=m+1}^{n} (n choose k) e^k (1 − e)^{n−k},

since both conditional probabilities equal the same binomial tail sum, so the
prior weights (1 − p) and p add up to 1 and drop out.
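This tail sum is easy to evaluate numerically; a short sketch (our own code, using e = 0.3 from part ii):

```python
from math import comb

def p_error(n, eps):
    """Binomial tail sum for the n = 2m + 1 repetition code."""
    m = (n - 1) // 2
    return sum(comb(n, k) * eps**k * (1 - eps)**(n - k)
               for k in range(m + 1, n + 1))

for n in range(1, 16, 2):   # odd n, as assumed in the derivation
    print(n, p_error(n, 0.3))
```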
[Plots: P(error) versus n for n = 1, …, 15 with e = 0.3 and p = 0.5; the error probability decreases as n grows.]
The MAP decision rule compares the two posterior probabilities. Writing
N0 and N1 for the numbers of 0s and 1s among y1, …, yn, it reads

M̂(y1, …, yn) = 0 if e^{N1}(1 − e)^{N0}(1 − p) ≥ e^{N0}(1 − e)^{N1}·p, and 1 otherwise.

For p = 0.5 the prior factors cancel, and the test expression can be
rewritten as (1 − e)^{N0−N1} ≥ e^{N0−N1}. If we assume that
e ≤ 0.5, then (1 − e) ≥ e. Therefore, we can rewrite the test as

M̂(y1, …, yn) = 0 if N0 > N1, and 1 otherwise,

which is exactly the majority rule of part (b).
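A minimal sketch of this decoder (our own code; map_decode is an illustrative name, not from the solution):

```python
def map_decode(y, eps, p):
    """MAP estimate of the message bit from repeated observations y."""
    n1 = sum(y)            # number of received 1s
    n0 = len(y) - n1       # number of received 0s
    post0 = eps**n1 * (1 - eps)**n0 * (1 - p)   # proportional to P(M=0 | y)
    post1 = eps**n0 * (1 - eps)**n1 * p         # proportional to P(M=1 | y)
    return 0 if post0 >= post1 else 1

print(map_decode([1, 0, 0], eps=0.3, p=0.5))    # 0: reduces to majority
print(map_decode([1, 0, 0], eps=0.3, p=0.99))   # 1: a strong prior overrides
```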
iii. In this case, we see that the decision rule, and hence the
probability of error, depends upon the a priori probabilities p
and 1 − p. In particular, if we consider the degenerate cases
p = 0 and p = 1 again, we see from the rule that we always decide
in favor of M̂ = 0 or M̂ = 1 respectively, irrespective of the
received bits. This is in contrast to majority decoding, which
still has to count the number of 1s in the output.
iv. Here we resort to a more intuitive proof of the statement that also
illustrates the concept of “risk” in decision making (a
mathematically rigorous proof may be found in 6.011 and
6.432). Consider the case when we have no received data and
make the decision based entirely on the prior probabilities
P(M = 0) = 1 − p and P(M = 1) = p. If we decide M̂ = 1, then we
have probability 1 − p of being wrong. Similarly, if we choose
M̂ = 0, we have probability p of being wrong. We choose M̂ to
minimize the risk of being wrong, or equivalently to maximize
the probability of being right. Thus we choose

M̂ = arg max_{M ∈ {0,1}} P(X = M).

Once the channel outputs are observed, the same reasoning applied
to the posterior probabilities gives

M̂ = arg max_{M ∈ {0,1}} P(X = M | Y).
E[X1 + ⋯ + Xn | Sn = sn] = E[X1 | Sn = sn] + ⋯ + E[Xn | Sn = sn].

Because the Xi's are identically distributed, we have the following
relationship:

E[Xi | Sn = sn] = E[Xj | Sn = sn], for any 1 ≤ i ≤ n, 1 ≤ j ≤ n.

Therefore,

E[X1 + ⋯ + Xn | Sn = sn] = nE[X1 | Sn = sn] = sn ⇒ E[X1 | Sn = sn] = sn/n.
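A small Monte Carlo illustration of this identity (our own addition, using i.i.d. dice as a stand-in for a generic distribution; the values n = 4 and s = 14 are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
n, s = 4, 14
rolls = rng.integers(1, 7, size=(500_000, n))    # i.i.d. fair dice
cond = rolls[rolls.sum(axis=1) == s]             # keep outcomes with S_n = s
print(cond[:, 0].mean(), s / n)                  # both close to 3.5
```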