
The Mathematics of Entanglement - Summer 2013

27 May, 2013

Exercise Sheet 1

Exercise 1
How can we realize a POVM $(Q_i)$ as a projective measurement on a larger Hilbert space?
Solution: Let $H$ be a Hilbert space with POVM $(Q_i)_{i=1}^n$. We can embed $H$ into $H^n$, the direct sum of $n$ copies of the original Hilbert space, by sending a state $\psi$ to $(\psi, \dots, \psi)$. If we equip $H^n$ with the inner product
\[ \langle (\phi_1, \dots, \phi_n), (\psi_1, \dots, \psi_n) \rangle = \sum_i \langle \phi_i | Q_i | \psi_i \rangle \]

(which is positive semidefinite since the $Q_i \geq 0$), then this embedding preserves the norm: indeed,
\[ \langle (\psi, \dots, \psi), (\psi, \dots, \psi) \rangle = \sum_i \langle \psi | Q_i | \psi \rangle = \langle \psi | \psi \rangle \]

(since $\sum_i Q_i = \mathbb{1}$). Now consider the projective measurement $(P_j)$, where $P_j$ is the projector onto the $j$-th summand in $H^n$. The associated probabilities are
\[ \langle (\psi, \dots, \psi) | P_j | (\psi, \dots, \psi) \rangle = \langle \psi | Q_j | \psi \rangle. \]
Thus we seem to have found a larger Hilbert space on which we can realize the POVM $(Q_j)$ as a projective measurement.
Now, the above reasoning is slightly flawed, since $\langle \cdot, \cdot \rangle$ does not necessarily define an inner product: there can be nonzero vectors on which it vanishes. However, these vectors form a subspace $N = \{ (\psi_i) : \langle (\psi_1, \dots, \psi_n), (\psi_1, \dots, \psi_n) \rangle = 0 \}$, and we can fix the argument by replacing $H^n$ with the quotient $\tilde{H} = H^n / N$.
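As a side illustration (not part of the original solution), the construction can be made concrete numerically: mapping $\psi \mapsto (\sqrt{Q_1}\psi, \dots, \sqrt{Q_n}\psi)$ into $H^n$ with the standard inner product reproduces the modified inner product above, so $\langle \psi | Q_j | \psi \rangle$ should equal the squared norm of the $j$-th block. The minimal numpy sketch below checks this for a simple three-outcome qubit POVM (an arbitrary example choice).

```python
import numpy as np

def psd_sqrt(Q):
    """Matrix square root of a positive semidefinite matrix via eigendecomposition."""
    w, U = np.linalg.eigh(Q)
    return U @ np.diag(np.sqrt(np.clip(w, 0, None))) @ U.conj().T

# Example qubit POVM: Q_i = (2/3)|phi_i><phi_i| for three real unit vectors 120 degrees apart.
angles = [0.0, 2 * np.pi / 3, 4 * np.pi / 3]
states = [np.array([np.cos(t), np.sin(t)]) for t in angles]
povm = [(2 / 3) * np.outer(v, v) for v in states]
assert np.allclose(sum(povm), np.eye(2))            # completeness: sum_i Q_i = 1

# Isometry psi |-> (sqrt(Q_1) psi, ..., sqrt(Q_n) psi) into the direct sum H^n.
V = np.vstack([psd_sqrt(Q) for Q in povm])          # shape (3*2, 2)
assert np.allclose(V.conj().T @ V, np.eye(2))       # the embedding preserves the norm

psi = np.array([0.6, 0.8])                          # a normalized test state
for j, Q in enumerate(povm):
    block = (V @ psi)[2 * j : 2 * (j + 1)]          # component in the j-th summand
    print(np.vdot(block, block).real, psi @ Q @ psi)  # both equal <psi|Q_j|psi>
```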
Exercise 2
Let $\Sigma = \{1, \dots, d\}$ be an alphabet, and $p(x)$ a probability distribution on $\Sigma$. Let $X_1, X_2, \dots$ be i.i.d. random variables, each with distribution $p(x)$. In the lecture, typical sets were defined by
\[ T_{p,n,\delta} = \Big\{ (x_1, \dots, x_n) : \Big| -\tfrac{1}{n} \log p^n(x_1, \dots, x_n) - H(p) \Big| \leq \delta \Big\}. \]
1. Show that $P((X_1, \dots, X_n) \in T_{p,n,\delta}) \to 1$ as $n \to \infty$.
Hint: Use Chebyshev's inequality.
Solution:
\[ P((X_1, \dots, X_n) \in T_{p,n,\delta}) = P\Big( \Big| -\tfrac{1}{n} \log p^n(X_1, \dots, X_n) - H(p) \Big| \leq \delta \Big) = P\Big( \Big| \underbrace{-\tfrac{1}{n} \sum_{i=1}^n \log p(X_i)}_{=Z} - H(p) \Big| \leq \delta \Big). \]

The expectation of the random variable $Z$ is equal to the entropy,
\[ E(Z) = E(-\log p(X_i)) = H(p), \]
because the $X_i$ are all distributed according to the distribution $p(x)$. Moreover, since the $X_i$ are independent, its variance is given by
\[ \mathrm{Var}(Z) = \frac{1}{n^2} \sum_{i=1}^n \mathrm{Var}\big(\log p(X_i)\big) = \frac{1}{n} \mathrm{Var}\big(\log p(X_1)\big). \]

Using Chebyshev's inequality, we find that
\[ P(T_{p,n,\delta}) = 1 - P\big( |Z - H(p)| > \delta \big) \geq 1 - \frac{\mathrm{Var}(Z)}{\delta^2} = 1 - \frac{\mathrm{Var}(\log p(X_1))}{n \delta^2} = 1 - O(1/n) \to 1 \]
as $n \to \infty$ (for fixed $p$ and $\delta$). (One can further show, although it is not necessary, that $\mathrm{Var}(\log p(X_1)) \leq \log^2(d)$.)
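As a quick numerical illustration (not part of the original sheet), one can estimate $P(T_{p,n,\delta})$ by sampling and compare it with the Chebyshev lower bound derived above; the distribution $p = (0.7, 0.2, 0.1)$ and $\delta = 0.1$ below are arbitrary example choices.

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.7, 0.2, 0.1])                # example distribution on d = 3 symbols
delta = 0.1
H = -np.sum(p * np.log2(p))                  # entropy H(p), in bits
var = np.sum(p * np.log2(p) ** 2) - H ** 2   # Var(log2 p(X_1))

for n in [10, 100, 1000, 10000]:
    # Draw 1000 length-n strings and compute Z = -(1/n) sum_i log2 p(X_i) for each.
    samples = rng.choice(len(p), size=(1000, n), p=p)
    Z = -np.log2(p[samples]).mean(axis=1)
    estimate = np.mean(np.abs(Z - H) <= delta)     # Monte Carlo estimate of P(T_{p,n,delta})
    bound = max(0.0, 1 - var / (n * delta ** 2))   # Chebyshev lower bound from the solution
    print(n, round(float(estimate), 3), round(bound, 3))
```

Both columns tend to 1 as $n$ grows, with the sampled estimate typically converging faster than the (loose) Chebyshev bound.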
2. Show that the entropy of the source is the optimal compression rate. That is, show that we cannot compress to $nR$ bits for $R < H(p)$ in such a way that the error goes to zero as $n \to \infty$.
Hint: Pretend first that all strings are typical, and that the scheme uses exactly $nR$ bits.
Solution: Suppose that we have a (deterministic) compression scheme that uses $nR$ bits, where $R < H(p)$. Denote by $C_n : \Sigma^n \to \{1, \dots, 2^{nR}\}$ the compressor, by $D_n : \{1, \dots, 2^{nR}\} \to \Sigma^n$ the decompressor, and by $A_n = \{ \vec{x} : \vec{x} = D_n(C_n(\vec{x})) \}$ the set of strings that can be compressed correctly. Note that $A_n$ has no more than $2^{nR}$ elements. The probability of success of the compression scheme is given by
\[ p_{\mathrm{success}} = P\big( \vec{X} = D_n(C_n(\vec{X})) \big) = P(\vec{X} \in A_n), \]
where $\vec{X} = (X_1, \dots, X_n)$.
Now,
\[ P(\vec{X} \in A_n) = P(\vec{X} \in A_n \cap T_{p,n,\delta}) + P(\vec{X} \in A_n \cap T_{p,n,\delta}^c) \leq P(\vec{X} \in A_n \cap T_{p,n,\delta}) + P(\vec{X} \in T_{p,n,\delta}^c). \]
For any fixed choice of $\delta$, the second summand on the right-hand side converges to zero as $n \to \infty$ (by the previous part). Now consider the first summand: the set $A_n \cap T_{p,n,\delta}$ has at most $2^{nR}$ elements, since this is even true for $A_n$. Moreover, since all its elements are typical, each satisfies $p^n(\vec{x}) \leq 2^{-n(H(p)-\delta)}$. It follows that
\[ P(\vec{X} \in A_n \cap T_{p,n,\delta}) \leq 2^{nR} \cdot 2^{-n(H(p)-\delta)} = 2^{n(R - H(p) + \delta)}, \]
which converges to zero if we fix a $\delta$ such that $R < H(p) - \delta$. Thus the probability of success of the compression scheme goes to zero as $n \to \infty$.
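To get a feel for how fast this bound decays (a numerical aside, not part of the original sheet), one can tabulate $2^{n(R - H(p) + \delta)}$ for a concrete source; the values $p = (0.7, 0.2, 0.1)$, $R = 1.0$ and $\delta = 0.05$ below are arbitrary example choices satisfying $R < H(p) - \delta$.

```python
import numpy as np

p = np.array([0.7, 0.2, 0.1])             # example source, H(p) is about 1.16 bits
H = -np.sum(p * np.log2(p))
R, delta = 1.0, 0.05                      # compression rate below H(p) - delta

for n in [10, 100, 500, 1000]:
    bound = 2.0 ** (n * (R - H + delta))  # upper bound on P(X in A_n ∩ T_{p,n,delta})
    print(n, bound)                       # decays exponentially in n
```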
