Aeysha Khalique
SNS-NUST
Table of contents
1 Overview
2 Classical Information
3 Classical Data Compression
4 Quantum Information
5 Quantum Data Compression
6 Accessible Information
7 Entanglement Measurement
8 Information Transfer
Entropy
Information
Information is a measure of our a priori ignorance.
Consider A = {A, B} ; k = 2 ; n = 4
p₁ = 3/4 ; p₂ = 1/4
Quantifying Entropy
Question: What is the probability of getting a particular sequence, e.g.
AAAB?
P = (3/4) × (3/4) × (3/4) × (1/4) ≈ 0.105   (1)
We process this probability: if the sequence is long, the product of fractions, and with it P, becomes very small. We therefore convert products into sums by taking the logarithm:
log(ab) = log a + log b (2)
−log₂ P = −log₂(3/4) − log₂(3/4) − log₂(3/4) − log₂(1/4)   (3)
Take the average over the n = 4 letters:
−(1/4) log₂ P = −(3/4) log₂(3/4) − (1/4) log₂(1/4) = −Σ_{i=1}^{2} pᵢ log₂ pᵢ   (4)
Shannon Entropy: H = −Σ_{i=1}^{k} pᵢ log pᵢ
Shannon Entropy
AAAB: p = 3/4        ABBB: p = 1/4
H = −(3/4) log(3/4) − (1/4) log(1/4) = 0.81
AABB: p = 1/2
H = −(1/2) log(1/2) − (1/2) log(1/2) = 1
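These values are easy to check numerically. A minimal Python sketch (the helper name shannon_entropy is illustrative, not from the slides):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum_i p_i log2 p_i, skipping zero probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

for seq in ["AAAB", "ABBB", "AABB"]:
    p = seq.count("A") / len(seq)        # empirical probability of letter A
    print(seq, round(shannon_entropy([p, 1 - p]), 2))
# AAAB 0.81, ABBB 0.81, AABB 1.0
```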
Shannon Entropy
[Figure: Shannon entropy H of a binary source versus p; H rises from 0 at p = 0 to its maximum of 1 at p = 0.5 and falls back to 0 at p = 1.]
Classical coding
Question:
How much can a message be compressed without losing information?
A = {a₁, a₂, a₃, a₄} ; k = 4
Simplest coding
Using two bits for each letter:
a₁ → 00
a₂ → 01
a₃ → 10
a₄ → 11   (7)
On average, two bits per letter.
Each letter is identifiable:
a₁a₂a₁ → 000100   (8)
The Shannon entropy for letter probabilities p₁ = 1/2, p₂ = 1/4, p₃ = p₄ = 1/8 is
H = −(1/2) log(1/2) − (1/4) log(1/4) − (1/8) log(1/8) − (1/8) log(1/8)
  = 1.75 = n̄   (13)
“The real birth of modern information theory can be traced to the publication in 1948 of Claude Shannon’s “The
Mathematical Theory of Communication” in the Bell System Technical Journal.” (Encyclopedia Britannica)
log N = log n! − log Π_{i=1}^{k} (npᵢ)! = log n! − Σ_{i=1}^{k} log(npᵢ)!   (16)
N ≈ 2^{nH}   (17)
Example:
If the total number of objects is 8, then 2³ = 8 and we need 3 bits to represent the 8 different objects:
000, 001, 010, 100, 110, 101, 011, 111
No. of bits required = log₂ 8 = 3
For N equiprobable typical sequences with
N ≈ 2^{nH}   (18)
we therefore need log₂ N ≈ nH bits.
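Eqs. (16)–(18) can be checked numerically: for n = 100 letters with p = (3/4, 1/4), the logarithm of the multinomial count of typical sequences sits close to nH (the two agree up to corrections of order log n). A sketch:

```python
import math

def log2_multinomial(n, counts):
    """log2 of n! / prod_i (counts_i!), via lgamma to avoid huge integers."""
    lg = math.lgamma(n + 1) - sum(math.lgamma(c + 1) for c in counts)
    return lg / math.log(2)

n, probs = 100, (3/4, 1/4)
counts = [round(n * p) for p in probs]          # the typical composition (75, 25)
H = -sum(p * math.log2(p) for p in probs)
print(log2_multinomial(n, counts))              # ~77.7
print(n * H)                                    # ~81.1, so log2 N ~ nH
```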
For four equiprobable letters (pᵢ = 1/4):
H = −4 × (1/4) log(1/4) = 2   (21)
No compression! 2 bits are needed to encode each letter
If we apply the same code as Eq. (10):
n̄ = Σ_{i=1}^{4} pᵢ lᵢ = (1/4)(1 + 2 + 3 + 3) = 2.25 > 2   (22)
Worse!
For the same code as in Eq. (10) and
p₁ = 0.9 ; p₂ = 0.05 ; p₃ = 0.025 ; p₄ = 0.025   (23)
H = 0.62 ; n̄ = 1.15
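Eqs. (13), (22) and (23) can be verified together. The code of Eq. (10) itself is not part of this extract, so the sketch below only assumes the codeword lengths (1, 2, 3, 3) that those equations imply:

```python
import math

lengths = [1, 2, 3, 3]   # codeword lengths implied by Eqs. (22)-(23); the
                         # concrete code of Eq. (10) is assumed, not shown here

def H(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def n_bar(probs):
    return sum(p * l for p, l in zip(probs, lengths))

for probs in [(1/2, 1/4, 1/8, 1/8),        # matched: n_bar = H = 1.75
              (1/4, 1/4, 1/4, 1/4),        # uniform: n_bar = 2.25 > H = 2
              (0.9, 0.05, 0.025, 0.025)]:  # skewed:  n_bar = 1.15, H ~ 0.62
    print(round(H(probs), 2), n_bar(probs))
```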
Huffman Code
A = {0, 1} ; p0 = 3/4 ; p1 = 1/4 ; 4-letter words
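The slide only states the setup; as a sketch, the standard Huffman construction (repeatedly merge the two least probable nodes) applied to the 16 four-letter blocks gives an average length per letter close to H(3/4) ≈ 0.81, the compression limit:

```python
import heapq, itertools

p = {"0": 3/4, "1": 1/4}
# Probabilities of the 16 four-letter blocks
blocks = {"".join(w): p[w[0]] * p[w[1]] * p[w[2]] * p[w[3]]
          for w in itertools.product("01", repeat=4)}

# Heap entries: (probability, tiebreak, {block: codeword-so-far})
heap = [(q, i, {b: ""}) for i, (b, q) in enumerate(blocks.items())]
heapq.heapify(heap)
count = len(heap)
while len(heap) > 1:
    q0, _, c0 = heapq.heappop(heap)
    q1, _, c1 = heapq.heappop(heap)
    merged = {b: "0" + w for b, w in c0.items()}    # prepend one bit per merge
    merged.update({b: "1" + w for b, w in c1.items()})
    heapq.heappush(heap, (q0 + q1, count, merged))
    count += 1
code = heap[0][2]

avg = sum(blocks[b] * len(code[b]) for b in blocks) / 4
print(round(avg, 3))   # ~0.82 bits per letter, vs. 1 bit without coding
```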
In the eigenbasis of ρ, with eigenvalues λ₁, …, λ_k, the matrix ρ log ρ is diagonal, so

S(ρ) = −Tr(ρ log ρ) = −Tr diag(λ₁ log λ₁, λ₂ log λ₂, …, λ_k log λ_k)

S(ρ) = −Σ_{i=1}^{k} λᵢ log λᵢ = H(λ₁, λ₂, …, λ_k)   (29)
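Eq. (29) translates directly into code: diagonalize ρ and take the Shannon entropy of its eigenvalues. A minimal sketch:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), from the eigenvalues of the Hermitian rho."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]               # 0 log 0 = 0 by convention
    return float(-np.sum(lam * np.log2(lam)))

rho = np.diag([0.5, 0.25, 0.125, 0.125])
print(von_neumann_entropy(rho))          # 1.75 = H(1/2, 1/4, 1/8, 1/8)
```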
|ẽ₀⟩ = cos θ |0⟩ + sin θ |1⟩
|ẽ₁⟩ = sin θ |0⟩ + cos θ |1⟩   (33)

ρ = p₀ |ẽ₀⟩⟨ẽ₀| + p₁ |ẽ₁⟩⟨ẽ₁|
  = p₀ [cos²θ |0⟩⟨0| + sin²θ |1⟩⟨1| + cos θ sin θ (|0⟩⟨1| + |1⟩⟨0|)]
  + p₁ [sin²θ |0⟩⟨0| + cos²θ |1⟩⟨1| + cos θ sin θ (|0⟩⟨1| + |1⟩⟨0|)]   (34)
With p₀ + p₁ = 1 ; p₀ = p ; p₁ = 1 − p:
ρ = [p cos²θ + (1 − p) sin²θ] |0⟩⟨0| + [p sin²θ + (1 − p) cos²θ] |1⟩⟨1| + cos θ sin θ (|0⟩⟨1| + |1⟩⟨0|)
In matrix form:
ρ = ( sin²θ + p cos 2θ      cos θ sin θ       )
    ( cos θ sin θ           cos²θ − p cos 2θ )   (37)
Eigenvalues:
λ± = (1/2) [1 ± √(1 − 4p(1 − p) cos² 2θ)]   (38)
|ẽ₀⟩ = cos θ |0⟩ + sin θ |1⟩ ; |ẽ₁⟩ = sin θ |0⟩ + cos θ |1⟩   (40)
Case 1: θ = 0: |ẽ₀⟩ = |0⟩ ; |ẽ₁⟩ = |1⟩ ; λ₊ = p ; λ₋ = 1 − p
Classical case: the states are orthogonal and S(ρ) = H(p₀, p₁)
θ = π/4: |ẽ₀⟩ = |ẽ₁⟩ ; λ₊ = 1 ; λ₋ = 0
ρ is a pure state; ignorance is zero and S(ρ) = 0
Cases 2–5: θ = 0.2π/4, 0.4π/4, 0.6π/4, 0.8π/4
As θ increases, S(ρ) decreases:
the overlap ⟨ẽ₀|ẽ₁⟩ = 2 sin θ cos θ = sin 2θ increases, so the similarity between the states increases and the a priori ignorance decreases
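A short numerical check of this example: build ρ(p, θ) from Eq. (34), compare its numerically computed eigenvalues with the closed form (38), and watch S(ρ) fall from H(p) at θ = 0 to 0 at θ = π/4. A sketch with illustrative helper names:

```python
import numpy as np

def rho(p, theta):
    """Density matrix of the ensemble {|e0>, |e1>; p, 1 - p}, Eq. (34)."""
    e0 = np.array([np.cos(theta), np.sin(theta)])
    e1 = np.array([np.sin(theta), np.cos(theta)])
    return p * np.outer(e0, e0) + (1 - p) * np.outer(e1, e1)

def S(r):
    lam = np.linalg.eigvalsh(r)
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log2(lam)))

p = 0.5
for k in range(6):                       # theta = 0, 0.2, ..., 1.0 times pi/4
    th = 0.2 * k * np.pi / 4
    lam_plus = 0.5 * (1 + np.sqrt(1 - 4 * p * (1 - p) * np.cos(2 * th) ** 2))
    print(round(th, 3), round(S(rho(p, th)), 3), round(lam_plus, 3))
# S(rho) decreases from H(1/2) = 1 to 0 as the states become identical, and
# lam_plus matches the largest eigenvalue returned by eigvalsh.
```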
A = {|ψ₀⟩, |ψ₁⟩}
Alice wishes to send an n-qubit message, drawing qubits |ψ₀⟩ and |ψ₁⟩ with probabilities p and (1 − p). The message
|Ψ_K⟩ = |ψ_{k₁}⟩ ⊗ |ψ_{k₂}⟩ ⊗ ⋯ ⊗ |ψ_{kₙ}⟩   (44)
is specified by K = {k₁, …, kₙ} with kᵢ ∈ {0, 1}
|ψ₀⟩ = |ẽ₀⟩ and |ψ₁⟩ = |ẽ₁⟩ are non-orthogonal:
|ẽ₀⟩ = cos θ |0⟩ + sin θ |1⟩
|ẽ₁⟩ = sin θ |0⟩ + cos θ |1⟩   (45)
Example 1: contd
Average fidelity: F̄ = Σ_K p_K F_K = p cos² 2θ + sin² 2θ
Classical fidelity: if the states were {|0⟩, |1⟩} (θ = 0), then F_{c,K} = 1 if the reference state is correct and F_{c,K} = 0 if it is wrong:
F_{c,0} = F_{c,2} = 1 and F_{c,1} = F_{c,3} = 0
Average classical fidelity: F̄_c = p
Case 1: θ = 0, F̄ = F̄_c = p
Cases 2 to 5: θ = 0.2π/4 to 0.8π/4, F̄ > F̄_c, since F̄ − F̄_c = (1 − p) sin² 2θ ≥ 0
θ = π/4: |ẽ₀⟩ = |ẽ₁⟩, F̄ = 1
No information is transferred, as the states cannot be distinguished by any measurement; Bob already knows the message.
Accessible Information
Some Preliminaries I
H(X) quantifies the a priori ignorance per letter of the receiver, before any message is received. One needs to send nH bits to completely specify a particular message of n letters.
Let X and Y be the random variables associated with:
the letters generated by Alice ≡ X
Bob's measurement outcomes ≡ Y
Bob measures and, on the basis of the outcome, updates his information about what Alice has sent.
After he learns Y, Bayes' rule allows him to update his knowledge of X:
p(x|y) p(y) = p(y|x) p(x) ≡ p(x, y)   (63)
p(x|y) = p(y|x) p(x) / p(y)   (64)
p(x) is known from the a priori probabilities of the ensemble, p(y|x) is known from the measurement process, and p(y) = Σ_x p(y|x) p(x)
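Eqs. (63)–(64) in code, with made-up numbers purely for illustration:

```python
p_x = {0: 0.75, 1: 0.25}               # a priori letter probabilities p(x)
p_y_given_x = {0: {0: 0.9, 1: 0.1},    # p(y|x), fixed by the measurement
               1: {0: 0.2, 1: 0.8}}

def p_y(y):
    return sum(p_y_given_x[x][y] * p_x[x] for x in p_x)

def p_x_given_y(x, y):
    """Bayes' rule, Eq. (64)."""
    return p_y_given_x[x][y] * p_x[x] / p_y(y)

print(round(p_x_given_y(0, 0), 3))   # ~0.931: y = 0 sharpens Bob's belief in x = 0
```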
Similarly:
I(X; Y) ≡ Mutual Information
I(X; Y) is symmetric under exchange of X and Y: one finds out as much about X by learning Y as about Y by learning X
Learning Y can never reduce knowledge about X, so I(X; Y) ≥ 0
If X and Y are completely uncorrelated, then p(x, y) = p(x) p(y) and I(X; Y) = 0
Accessible Information
Letters are chosen from the ensemble E = {ρ₁, …, ρ_k ; p₁, …, p_k}.
Alice codes a message in non-orthogonal quantum states and sends it to Bob.
Question: How much information can Bob gain about the message by performing measurements on the quantum states received?
The accessible information is the maximum of I(X; Y) over all possible measurement schemes {F_y}:
Acc(E) = max_{F_y} I(X; Y)   (71)
The upper bound on the accessible information is the Holevo bound:
I(X; Y) ≤ χ(E) = S(ρ) − Σᵢ pᵢ S(ρᵢ)   (72)
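For the running two-state example the ensemble states are pure, so S(ρᵢ) = 0 and the Holevo bound reduces to S(ρ). A sketch:

```python
import numpy as np

def S(r):
    lam = np.linalg.eigvalsh(r)
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log2(lam)))

def holevo(states, probs):
    """chi(E) = S(rho) - sum_i p_i S(rho_i) for an ensemble of density matrices."""
    rho = sum(p * r for p, r in zip(probs, states))
    return S(rho) - sum(p * S(r) for p, r in zip(probs, states))

theta, p = 0.2 * np.pi / 4, 0.5
e0 = np.array([np.cos(theta), np.sin(theta)])
e1 = np.array([np.sin(theta), np.cos(theta)])
print(holevo([np.outer(e0, e0), np.outer(e1, e1)], [p, 1 - p]))
# ~0.93 here: less than 1 classical bit is accessible per non-orthogonal qubit
```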
Example 1: Contd
Performing Measurement
Bob performs a projective measurement along n̂:
F₀ = (1/2)(I + n̂·σ) ; F₁ = (1/2)(I − n̂·σ)   (79)
e.g. measurement along the z-axis ⟹ n̂ = (0, 0, 1)
We can take the states on the Bloch sphere as
ρ₀ = (1/2)(I + r̂₀·σ) ; ρ₁ = (1/2)(I + r̂₁·σ)   (80)
Example 1: Contd
Accessible Information Calculation
First find the mutual information, then maximize it over all measurements:
I(X; Y) = H(X) + H(Y) − H(X, Y)
        = −Σ_x p(x) log p(x) − Σ_y p(y) log p(y) + Σ_{x,y} p(x, y) log p(x, y)   (81)
Using p(x) = Σ_y p(x, y) and p(y) = Σ_x p(x, y):
I(X; Y) = −Σ_{x,y} p(x, y) [log p(x) + log p(y) − log p(x, y)]
        = −Σ_{x,y} p(x, y) log [p(x) p(y) / p(x, y)]   (82)
For Bob's projective measurement along a direction at angle θ̄ in the x–z plane of the Bloch sphere, the conditional probabilities are
p(0|0) = (1/2)(1 + cos(2θ − θ̄)) ; p(1|0) = (1/2)(1 − cos(2θ − θ̄))
p(0|1) = (1/2)(1 − cos(2θ + θ̄)) ; p(1|1) = (1/2)(1 + cos(2θ + θ̄))   (85)
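Combining Eqs. (82) and (85), the accessible information can be estimated by brute force: scan the measurement angle θ̄ and keep the largest I(X; Y). A sketch, assuming (as the notation suggests) that θ̄ parameterizes Bob's measurement direction:

```python
import numpy as np

def I_xy(p, theta, theta_bar):
    """Mutual information of Eq. (82) with the conditionals of Eq. (85)."""
    px = np.array([p, 1 - p])
    c_m, c_p = np.cos(2 * theta - theta_bar), np.cos(2 * theta + theta_bar)
    pyx = np.array([[0.5 * (1 + c_m), 0.5 * (1 - c_m)],    # p(y|x=0)
                    [0.5 * (1 - c_p), 0.5 * (1 + c_p)]])   # p(y|x=1)
    pxy = px[:, None] * pyx                                # joint p(x, y)
    py = pxy.sum(axis=0)
    mask = pxy > 1e-12
    return float(np.sum(pxy[mask] * np.log2(pxy[mask] / (px[:, None] * py)[mask])))

theta, p = 0.2 * np.pi / 4, 0.5
angles = np.linspace(0, np.pi, 2001)
best = max(angles, key=lambda tb: I_xy(p, theta, tb))
print(I_xy(p, theta, best))   # brute-force estimate of Acc(E), below the Holevo bound
```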
For a separable state there is only one term in the Schmidt decomposition, and since Σᵢ λᵢ = 1, that single λ = 1 and
E = S(ρ_A) = S(ρ_B) = H(1) = 0
For a maximally entangled state: |Ψ_AB⟩ = (1/√2)(|0_A 0_B⟩ + |1_A 1_B⟩), already in Schmidt decomposition, λ₁ = λ₂ = 1/2 and
E = S(ρ_A) = S(ρ_B) = H(1/2) = 1
For a partially entangled state: |Ψ_AB⟩ = (3/5) |0_A ↑_B⟩ + (4/5) |1_A ↓_B⟩, already in Schmidt decomposition, λ₁ = 9/25 ; λ₂ = 16/25 and
E = S(ρ_A) = S(ρ_B) = H(9/25) = 0.942
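All three cases follow from one routine: reshape the bipartite state into a d_A × d_B matrix, read off the Schmidt coefficients as its singular values, and take the entropy of their squares. A sketch:

```python
import numpy as np

def entanglement_entropy(psi):
    """E = S(rho_A) from the Schmidt coefficients (singular values) of psi."""
    lam = np.linalg.svd(psi, compute_uv=False) ** 2   # Schmidt weights lambda_i
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log2(lam)))

bell = np.array([[1, 0], [0, 1]]) / np.sqrt(2)        # (|00> + |11>)/sqrt(2)
partial = np.array([[3/5, 0], [0, 4/5]])              # (3/5)|0 up> + (4/5)|1 down>
print(entanglement_entropy(bell))      # 1.0
print(entanglement_entropy(partial))   # ~0.942 = H(9/25)
```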
Renormalizing CB:
|Ψ⟩_ACB = √(α⁴ + β⁴) |0⟩_A [ α²/√(α⁴ + β⁴) |00⟩ + β²/√(α⁴ + β⁴) |11⟩ ]_CB
        + √(2α²β²) |1⟩_A (1/√2)(|01⟩ + |10⟩)_CB   (101)
Alice makes a measurement on A in the basis {|0⟩, |1⟩}. She gets the state |1⟩ with probability 2α²β², and CB is then left in a maximally entangled state.
If she starts with N pairs, she will get 2Nα²β² entangled pairs.
For the remaining N(1 − 2α²β²) pairs, she does the whole process again.
Alice has to announce her result to Bob (classical communication) to extract the successful pairs.
This entanglement was already present in the initial state, whose fidelity with the maximally entangled state is
F = |⟨φ⁺|Ψ_AB⟩|² = (1/2)(α + β)²
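A quick numerical check of the yield and fidelity claims; α = √3/2, β = 1/2 are example amplitudes chosen here, not taken from the slides:

```python
import numpy as np

alpha, beta = np.sqrt(3) / 2, 0.5        # any alpha^2 + beta^2 = 1 will do

p_success = 2 * alpha**2 * beta**2       # Alice finds |1>_A; CB maximally entangled
N = 1000
print(N * p_success)                     # ~375 concentrated pairs expected
print(0.5 * (alpha + beta)**2)           # initial fidelity F ~ 0.933
```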