Quantum Information Theory

Aeysha Khalique
SNS-NUST

April 30, 2020


Table of contents

1 Overview

2 Classical Information

3 Classical Data Compression

4 Quantum Information

5 Quantum Data Compression

6 Accessible Information

7 Entanglement Measurement


Information Transfer

Information transfer should be done:

Efficiently
    The channel should be used as little as possible −→ Coding
Reliably
    The receiver should get the same message as sent by the sender −→ No errors −→ Error correction
Secretly
    Quantum Dense Coding ✓
    Quantum Teleportation ✓
    Entanglement Swapping ✓
    QKD ✓


Efficient Information Transfer: Overview

Classical Information Transfer
    Quantifying classical information: how much information is gained from a message
    Relating Shannon entropy to information gain
    Data Compression: Shannon noiseless coding theorem

Quantum Information Transfer
    Quantifying quantum information: how much information is gained from a message
    Relating von Neumann entropy to information gain
    Data Compression: Schumacher quantum noiseless coding theorem


Entropy and Information

Entropy

                  Ice      Water     Steam
Entropy:          Low      Medium    High
Avail. states:    Small    Large     Largest

(Images: https://bit.ly/34nw1sG , https://bit.ly/2RmyFcG , https://bit.ly/2VjQmv0)


Entropy and Information

Information
Information is a measure of our a priori ignorance.
Consider A = {A, B} ; k = 2 letters ; n = 4-letter messages:

p₁ = 1, p₂ = 0: only AAAA
    Low entropy −→ high knowledge −→ low information gain
p₁ = 3/4, p₂ = 1/4: AAAB, AABA, ABAA, BAAA
    Medium entropy −→ medium knowledge −→ medium information gain
p₁ = 1/2, p₂ = 1/2: AABB, ABAB, BAAB, BABA, BBAA, ABBA
    High entropy −→ low knowledge −→ high information gain

Quantifying Entropy
Question: What is the probability of getting the particular sequence AAAB?

    P = (3/4) × (3/4) × (3/4) × (1/4) ≈ 0.105    (1)

We process this probability. If k is too large, the fractions are too small and so is P. We convert products to sums by taking the log:

    log(ab) = log a + log b    (2)

    −log₂ P = −log₂(3/4) − log₂(3/4) − log₂(3/4) − log₂(1/4)    (3)

Taking the average over the n = 4 letters:

    −(1/4) log₂ P = −(3/4) log₂(3/4) − (1/4) log₂(1/4) = −Σᵢ₌₁² pᵢ log pᵢ    (4)

Shannon Entropy: H = −Σᵢ₌₁ᵏ pᵢ log pᵢ
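A minimal numerical sketch of this definition (Python; the function name is our own choice):

```python
import math

def shannon_entropy(probs):
    """H = -sum_i p_i log2 p_i, with the convention 0*log(0) = 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# The ensemble above: p1 = 3/4, p2 = 1/4.
print(shannon_entropy([3/4, 1/4]))  # ~0.811 bits per letter
```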

Shannon Entropy

For the binary case:

    H = −p log p − (1 − p) log(1 − p)    (5)

By convention, 0 log₂ 0 = 0.

AAAA (p = 1) or BBBB (p = 0):
    H = −1 log 1 − 0 log 0 = 0
AAAB (p = 3/4) or ABBB (p = 1/4):
    H = −(3/4) log(3/4) − (1/4) log(1/4) = 0.81
AABB (p = 1/2):
    H = −(1/2) log(1/2) − (1/2) log(1/2) = 1


Shannon Entropy

H = −p log p − (1 − p) log(1 − p) (6)

[Plot: binary entropy H(p) versus p, rising from 0 at p = 0 to its maximum H = 1 at p = 1/2 and falling back to 0 at p = 1.]

H does not depend on the value of the random variable; it depends only on p.

Classical coding
Question: How much can a message be compressed without losing information?

A = {a₁, a₂, a₃, a₄} ; k = 4

Simplest coding, using two bits for each letter:

    a₁ → 00
    a₂ → 01
    a₃ → 10
    a₄ → 11    (7)

On average two bits per letter, and each letter is identifiable:

    a₁a₂a₁ → 000100    (8)


Efficient classical coding: Morse Code

Telegraph code: Morse code, 1848

Three symbols are used (dot, dash, and pause).

The aim is to use the channel as little as possible.

More probable letters are encoded as shorter strings.


Efficient Classical coding


A = {a₁, a₂, a₃, a₄} ; k = 4
a priori information:

    p₁ = 1/2 ; p₂ = 1/4 ; p₃ = 1/8 ; p₄ = 1/8    (9)

Use fewer bits for the more probable letters:

    a₁ → 0
    a₂ → 10
    a₃ → 110
    a₄ → 111    (10)

Average number of bits to send one coded letter:

    n̄ = Σᵢ pᵢ lᵢ = (1/2) × 1 + (1/4) × 2 + (1/8) × 3 + (1/8) × 3 = 1.75    (11)
This is the best we can do!
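A quick numerical check of this claim (variable names are illustrative):

```python
import math

# The code a1 -> 0, a2 -> 10, a3 -> 110, a4 -> 111 from Eq. (10).
probs   = [1/2, 1/4, 1/8, 1/8]
lengths = [1, 2, 3, 3]

n_bar = sum(p * l for p, l in zip(probs, lengths))   # average code length
H     = -sum(p * math.log2(p) for p in probs)        # Shannon entropy
print(n_bar, H)  # 1.75 1.75 -- this code meets the Shannon bound exactly
```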

Classical Data Compression


Question: How much can a message be compressed without losing information?

A = {a₁, a₂, a₃, a₄} ; k = 4
a priori information:

    p₁ = 1/2 ; p₂ = 1/4 ; p₃ = 1/8 ; p₄ = 1/8    (12)

The Shannon entropy is

    H = −(1/2) log(1/2) − (1/4) log(1/4) − (1/8) log(1/8) − (1/8) log(1/8)
      = 1.75 = n̄    (13)

If Alice has to send Bob n letters taken from an alphabet A of k letters, each occurring with a priori probability pᵢ, she can reliably send her message using nH bits.

Shannon Noiseless Coding Theorem

Given a message in which the letters have been chosen independently from the ensemble A = {a₁ · · · aₖ} with a priori probabilities {p₁ · · · pₖ}, there exists, asymptotically in the length of the message, an optimal and reliable code compressing the message to H(p₁ · · · pₖ) bits per letter.

"The real birth of modern information theory can be traced to the publication in 1948 of Claude Shannon's 'A Mathematical Theory of Communication' in the Bell System Technical Journal." (Encyclopedia Britannica)


Proof: Shannon Noiseless Coding Theorem I


A = {a₁ · · · aₖ} with a priori probabilities {p₁ · · · pₖ}

Typical sequence: an n-letter message from A containing exactly np₁ letters a₁, np₂ letters a₂, and so on.

The number of such sequences is

    N = n! / Πᵢ₌₁ᵏ (npᵢ)!    (14)

Examples for n = 4 (all sequences below are typical):

    p₁ = 1, p₂ = 0:        AAAA                                  N = 4!/(4! 0!) = 1
    p₁ = 3/4, p₂ = 1/4:    AAAB, AABA, ABAA, BAAA                N = 4!/(3! 1!) = 4
    p₁ = 1/2, p₂ = 1/2:    AABB, ABAB, BAAB, BABA, BBAA, ABBA    N = 4!/(2! 2!) = 6

Proof: Shannon Noiseless Coding Theorem II

The number of such sequences is

    N = n! / Πᵢ₌₁ᵏ (npᵢ)!    (15)

    log N = log n! − log Πᵢ₌₁ᵏ (npᵢ)! = log n! − Σᵢ₌₁ᵏ log(npᵢ)!    (16)

Stirling's formula: log m! = m log m − m/ln 2 + O(log m). Applying it (the linear terms cancel since Σᵢ npᵢ = n),

    log N ≈ n log n − Σᵢ npᵢ log(npᵢ) = −n Σᵢ pᵢ log pᵢ = nH

so that

    N ≈ 2^(nH)    (17)
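A short sketch comparing the exact count with the Stirling estimate (n and the probabilities are illustrative; log-gamma avoids overflowing factorials):

```python
import math

n = 1000
p = [3/4, 1/4]                     # n*p_i must be integers here
H = -sum(q * math.log2(q) for q in p)

# log2 N for N = n! / prod (n p_i)!, computed via log-gamma.
log2_N = (math.lgamma(n + 1)
          - sum(math.lgamma(n * q + 1) for q in p)) / math.log(2)
print(log2_N, n * H)  # ~806 vs ~811: log2 N = nH up to O(log n) corrections
```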


Proof: Shannon Noiseless Coding Theorem III

Example: if the total number of objects is 8, then 2³ = 8 and we need 3 bits to represent the 8 different objects:

    000, 001, 010, 100, 110, 101, 011, 111

Number of bits required = log 2³ = 3.

For N equiprobable typical sequences with

    N ≈ 2^(nH)    (18)

the minimum number of bits required to identify which one of these sequences actually occurred is

    log N = log(2^(nH)) = nH    (19)


Shannon Noiseless Coding Theorem: Examples


A = {a₁, a₂, a₃, a₄} ; k = 4, equiprobable:

    p₁ = 1/4 ; p₂ = 1/4 ; p₃ = 1/4 ; p₄ = 1/4    (20)

    H = −4 × (1/4) log(1/4) = 2    (21)

No compression! 2 bits are needed to encode each letter.

If we apply the same code as Eq. (10):

    n̄ = Σᵢ₌₁⁴ pᵢ lᵢ = (1/4)(1 + 2 + 3 + 3) = 2.25 > 2    (22)

Worse!

For the same code as in Eq. (10) and

    p₁ = 0.9 ; p₂ = 0.05 ; p₃ = 0.025 ; p₄ = 0.025    (23)

    H = 0.62 ; n̄ = 1.15

Huffman Code
A = {0, 1} ; p₀ = 3/4 ; p₁ = 1/4 ; 4-letter words

2⁴ = 16 possible words, with probabilities P₁ = p₀⁴ ; P₂ = p₀³p₁ ; · · · ; P₁₆ = p₁⁴

Maximum compression:
    4H(1/4) = 4(−(1/4) log(1/4) − (3/4) log(3/4)) = 4 × 0.8112 = 3.25

Compression by Huffman code: n̄ = Σᵢ₌₁¹⁶ Pᵢ lᵢ = 3.27. Close!
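A sketch of the Huffman construction for this example (heap-based; all identifiers are our own choices):

```python
import heapq
from itertools import product

p = {'0': 3/4, '1': 1/4}
prob = {''.join(w): p[w[0]] * p[w[1]] * p[w[2]] * p[w[3]]
        for w in product('01', repeat=4)}          # the 16 four-letter words

# Min-heap of (probability, tiebreaker, subtree); repeatedly merge the two
# least probable subtrees, as Huffman's algorithm prescribes.
heap = [(pr, i, w) for i, (w, pr) in enumerate(prob.items())]
heapq.heapify(heap)
uid = len(heap)
while len(heap) > 1:
    pa, _, a = heapq.heappop(heap)
    pb, _, b = heapq.heappop(heap)
    heapq.heappush(heap, (pa + pb, uid, (a, b)))
    uid += 1

code = {}
def assign(node, prefix=''):
    if isinstance(node, str):                      # leaf: one of the 16 words
        code[node] = prefix
    else:
        assign(node[0], prefix + '0')
        assign(node[1], prefix + '1')
assign(heap[0][2])

n_bar = sum(prob[w] * len(code[w]) for w in prob)
print(n_bar)  # ~3.27 bits per word, close to the bound 4*H(1/4) ~ 3.25
```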

von Neumann Entropy: Quantifying Quantum Information


The von Neumann entropy of a state ρ is defined as

    S(ρ) = −Tr(ρ log ρ)    (24)

Problem: Alice has an alphabet A = {ρ₁, ρ₂, · · · , ρₖ} ; {p₁, p₂, · · · , pₖ}, whose letters are pure or mixed quantum states ρᵢ occurring with probability pᵢ.

Alice chooses a letter randomly from the ensemble and sends it to Bob.

Bob just knows {ρᵢ, pᵢ}, so for him the state of each letter is the mixed state

    ρ = Σᵢ₌₁ᵏ pᵢ ρᵢ    (25)

Its von Neumann entropy is

    S(ρ) = −Tr(ρ log ρ)    (26)

Calculating von Neumann Entropy


ρ is diagonal in its eigenbasis:

    ρ = diag(λ₁, λ₂, · · · , λₖ)    (27)

For a diagonal matrix ρ,

    log ρ = diag(log λ₁, log λ₂, · · · , log λₖ)    (28)

so ρ log ρ = diag(λ₁ log λ₁, · · · , λₖ log λₖ) and

    S(ρ) = −Tr(ρ log ρ) = −Σᵢ₌₁ᵏ λᵢ log λᵢ = H(λ₁, λ₂, · · · , λₖ)    (29)
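A minimal sketch of this computation (assumes the input is a valid density matrix, i.e. Hermitian, positive semidefinite, unit trace):

```python
import numpy as np

def S(rho):
    """Von Neumann entropy in bits, from the eigenvalues of rho."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]          # convention: 0 log 0 = 0
    return float(-(lam * np.log2(lam)).sum())

print(S(np.diag([0.5, 0.5])))       # maximally mixed qubit: 1.0
print(S(np.diag([1.0, 0.0])))       # pure state: 0.0
```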

von Neumann Entropy S(ρ): Properties

S(ρ) is the average information gain per quantum state.

S(ρ) depends only on the eigenvalues. Eigenvalues are basis independent, and so is S(ρ): S(ρ) = S(U†ρU).

For a pure state, only one λᵢ = 1 and all the others are zero, so

    S(ρ) = −1 log 1 = 0    (30)

Ignorance is zero in this case: Bob knows which state is sent.

For an N-dimensional Hilbert space, 0 ≤ S(ρ) ≤ log N:
    The minimum value of S(ρ) is 0, because 0 ≤ λᵢ ≤ 1, so −λᵢ log λᵢ ≥ 0.
    S(ρ) = H(λ₁, λ₂, · · · , λₖ) is maximum when H is maximum, which occurs when all λᵢ are equal, i.e. λᵢ = 1/N ∀ i; then S(ρ) = −log(1/N) = log N.

von Neumann Entropy: Example 1: Two orthogonal states


A = {ρ₀, ρ₁} ; {p₀, p₁}, where ρ₀ and ρ₁ are orthogonal.

ρ₀ and ρ₁ are orthogonal and thus form a basis for a qubit. We can take them to be ρ₀ = |0⟩⟨0| and ρ₁ = |1⟩⟨1|:

    ρ = p₀|0⟩⟨0| + p₁|1⟩⟨1|    (31)

      = ( p₀  0  )
        ( 0   p₁ )    (32)

ρ is already diagonal, with eigenvalues p₀ and p₁:

    S(ρ) = H(p₀, p₁) = −p₀ log p₀ − p₁ log p₁

The von Neumann entropy is the same as the Shannon entropy for two letters with probabilities p₀ and p₁, i.e. the same as if the states were classical.

This is because the states are orthogonal and are exactly distinguishable.

von Neumann Entropy: Example 2: Two non-orthogonal states

A = {ρ₀, ρ₁} ; {p₀, p₁}, where ρ₀ and ρ₁ are non-orthogonal:
ρ₀ = |0̃⟩⟨0̃| and ρ₁ = |1̃⟩⟨1̃|, with

    |0̃⟩ = cos θ |0⟩ + sin θ |1⟩
    |1̃⟩ = sin θ |0⟩ + cos θ |1⟩    (33)


von Neumann Entropy: Example 2: contd.

    ρ = p₀|0̃⟩⟨0̃| + p₁|1̃⟩⟨1̃|
      = p₀[cos²θ |0⟩⟨0| + sin²θ |1⟩⟨1| + cos θ sin θ (|0⟩⟨1| + |1⟩⟨0|)]
      + p₁[sin²θ |0⟩⟨0| + cos²θ |1⟩⟨1| + cos θ sin θ (|0⟩⟨1| + |1⟩⟨0|)]    (34)

With p₀ + p₁ = 1 ; p₀ = p ; p₁ = 1 − p:

    ρ = [p cos²θ + (1 − p) sin²θ] |0⟩⟨0| + [p sin²θ + (1 − p) cos²θ] |1⟩⟨1|
      + cos θ sin θ (|0⟩⟨1| + |1⟩⟨0|)    (35)

    ρ = [sin²θ + p cos 2θ] |0⟩⟨0| + [cos²θ − p cos 2θ] |1⟩⟨1|
      + cos θ sin θ (|0⟩⟨1| + |1⟩⟨0|)    (36)


von Neumann Entropy: Example 2: contd.

    ρ = ( sin²θ + p cos 2θ    cos θ sin θ        )
        ( cos θ sin θ         cos²θ − p cos 2θ   )    (37)

Eigenvalues:

    λ± = (1/2) [1 ± √(1 − 4p(1 − p) cos²2θ)]    (38)

von Neumann entropy:

    S(ρ) = −λ₊ log λ₊ − λ₋ log λ₋    (39)
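A numerical check of Eq. (38) (the values of θ and p are illustrative):

```python
import numpy as np

theta, p = 0.3, 0.7
ket0 = np.array([np.cos(theta), np.sin(theta)])      # |0~>
ket1 = np.array([np.sin(theta), np.cos(theta)])      # |1~>
rho  = p * np.outer(ket0, ket0) + (1 - p) * np.outer(ket1, ket1)

disc = np.sqrt(1 - 4 * p * (1 - p) * np.cos(2 * theta) ** 2)
lam_closed = np.array([(1 - disc) / 2, (1 + disc) / 2])
print(np.linalg.eigvalsh(rho))       # numerical eigenvalues, ascending
print(lam_closed)                    # closed form of Eq. (38): should match
```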


von Neumann Entropy: Example 2: contd

    |0̃⟩ = cos θ |0⟩ + sin θ |1⟩ ; |1̃⟩ = sin θ |0⟩ + cos θ |1⟩    (40)

Case 1: θ = 0: |0̃⟩ = |0⟩ ; |1̃⟩ = |1⟩ ; λ₊ = p ; λ₋ = 1 − p
    Classical case: the states are orthogonal and S(ρ) = H(p₀, p₁)
θ = π/4: |0̃⟩ = |1̃⟩ ; λ₊ = 1 ; λ₋ = 0
    ρ is a pure state; ignorance is zero and S(ρ) = 0
Cases 2–5: θ = 0.2π/4 ; 0.4π/4 ; 0.6π/4 ; 0.8π/4
    As θ increases, S(ρ) decreases: the overlap ⟨0̃|1̃⟩ = 2 sin θ cos θ = sin 2θ increases, so the similarity between the states increases and the a priori ignorance decreases.

Quantum Data Compression

A = {|ψ₁⟩, |ψ₂⟩, · · · , |ψₖ⟩} with probabilities {p₁, p₂, · · · , pₖ}

Alice sends n qubits to Bob. Each qubit that Bob receives is in the mixed state

    ρ = Σᵢ₌₁ᵏ pᵢ |ψᵢ⟩⟨ψᵢ|    (41)

The n qubits are in the state ρ⊗ⁿ.

We wish to argue that, for n large, this density matrix has nearly all of its support on a subspace of the full Hilbert space of the messages, where the dimension of this subspace asymptotically approaches 2^(nS(ρ)) (Schumacher's noiseless quantum coding theorem).

Schumacher’s Noiseless Coding Theorem


The state ρ of each qubit, written in its eigenbasis, is

    ρ = Σᵢ₌₁ᵏ pᵢ |ψᵢ⟩⟨ψᵢ| = Σᵢ₌₁ᵏ λᵢ |aᵢ⟩⟨aᵢ|    (42)

The eigenstates |aᵢ⟩ are orthogonal, just like the letters of a classical message.

A message will be

    M = |b₁⟩ ⊗ · · · ⊗ |bₙ⟩  s.t.  |bⱼ⟩ ∈ {|aᵢ⟩}    (43)

with probability λ equal to the product of the corresponding λᵢ's,
e.g. for n = 3 and M = |a₃⟩|a₆⟩|a₁⟩ the probability is λ = λ₃λ₆λ₁.

The typical subspace is the subspace spanned by the eigenvectors of ρ⊗ⁿ satisfying −(1/n) log λ ≈ S(ρ).

The dimension of this subspace (the number of such eigenvectors) is ≈ 2^(nS(ρ)).

We need to be able to identify which eigenvector was sent to us, so we need nS(ρ) qubits to encode our message; a numerical illustration follows below.
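A rough numerical illustration of this counting (the eigenvalues λ±, n, and ε are illustrative choices):

```python
import math

# An eigenvector of rho^(tensor n) with k factors lam_m has eigenvalue
# lam_p**(n-k) * lam_m**k and multiplicity C(n, k). Collect the eigenvectors
# whose rate -(1/n) log2(eigenvalue) lies within eps of S(rho).
lam_p, lam_m = 0.85, 0.15
S = -(lam_p * math.log2(lam_p) + lam_m * math.log2(lam_m))

n, eps = 400, 0.1
mass, dim = 0.0, 0
for k in range(n + 1):
    rate = -((n - k) * math.log2(lam_p) + k * math.log2(lam_m)) / n
    if abs(rate - S) < eps:
        mass += math.comb(n, k) * lam_p ** (n - k) * lam_m ** k
        dim += math.comb(n, k)

# mass -> 1 as n grows, and log2(dim)/n stays within eps of S: dim ~ 2^(n S).
print(f"S = {S:.3f}, typical mass = {mass:.3f}, log2(dim)/n = {math.log2(dim)/n:.3f}")
```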

How is Compression Done: Problem

A = {|ψ₀⟩, |ψ₁⟩}

Alice wishes to send an n-qubit message, drawing qubits |ψ₀⟩ and |ψ₁⟩ with probabilities p and (1 − p). The message

    |Ψ_K⟩ = |ψ_{k₁}⟩ ⊗ |ψ_{k₂}⟩ ⊗ · · · ⊗ |ψ_{kₙ}⟩    (44)

is specified by K = {k₁, · · · , kₙ} with kᵢ ∈ {0, 1}.

|ψ₀⟩ = |0̃⟩ and |ψ₁⟩ = |1̃⟩ are non-orthogonal:

    |0̃⟩ = cos θ |0⟩ + sin θ |1⟩
    |1̃⟩ = sin θ |0⟩ + cos θ |1⟩    (45)


How is Compression Done: Alice’s Part


The message belongs to the Hilbert space H⊗ⁿ with dimension 2ⁿ.

Each qubit in the message is in the state

    ρ = p |0̃⟩⟨0̃| + (1 − p) |1̃⟩⟨1̃|    (46)

Alice writes this state in its eigenbasis and then constructs the typical subspace.

Alice decomposes her message |Ψ_K⟩ into |τ_K⟩ ∈ H_typ and |τ_K^⊥⟩ ∈ H_atyp:

    |Ψ_K⟩ = α_K |τ_K⟩ + β_K |τ_K^⊥⟩    (47)

Alice performs a measurement on her message; if it belongs to the typical subspace, she encodes it and sends it. She requires nS(ρ) qubits to encode it.

If it belongs to the atypical subspace, she substitutes it with some reference state |R⟩ from the typical subspace, encodes it using nS(ρ) qubits, and sends it.

How is Compression Done: Bob’s Part


Bob decodes the nS(ρ) qubits received from Alice and gets the state

    ρ̃_K = |α_K|² |τ_K⟩⟨τ_K| + |β_K|² |R⟩⟨R|    (48)

The reliability of the message is checked by the fidelity

    F_K = ⟨Ψ_K| ρ̃_K |Ψ_K⟩    (49)

F_K = 1 when ρ̃_K = |Ψ_K⟩⟨Ψ_K| ; F_K = 0 when ρ̃_K is orthogonal to |Ψ_K⟩⟨Ψ_K|.

The average fidelity is obtained by averaging over all possible messages:

    F̄ = Σ_K p_K F_K ,  p_K ≡ probability of occurrence of message |Ψ_K⟩
      = Σ_K p_K ⟨Ψ_K| (|α_K|² |τ_K⟩⟨τ_K| + |β_K|² |R⟩⟨R|) |Ψ_K⟩
      = Σ_K p_K |α_K|⁴ + Σ_K p_K |β_K|² |⟨Ψ_K|R⟩|²    (50)

How is Compression Done: Analysis

Why can’t Alice just send the classical sequence K = {ki , · · · , kn }


with ki ∈ {0, 1}, as it uniquely determines |ΨK i
In fact for non-orthogonal states compression is more than that of
classical message

S(ρ) = H(p) only for θ = 0, which is for orthogonal states


For θ > 0, S(ρ) < H(p) so compression nS(ρ) is more
The price for this extra compression: Bob reliably receives the
information but he doesn’t know what he has received. He cannot
distinguish between two non orthogonal states reliably

Quantum Data Compression


Example 1: Two-qubit message

Alice has to send a two-qubit message.

Each qubit is drawn from A = {|0̃⟩, |1̃⟩} with probabilities p and 1 − p.

Alice can afford to send only one qubit.

She sends the first qubit in its original state; Bob takes the second qubit to be in some reference state, say |R⟩ = |0̃⟩:

    |0̃0̃⟩ → |0̃0̃⟩        |0̃1̃⟩ → |0̃0̃⟩
    |1̃0̃⟩ → |1̃0̃⟩        |1̃1̃⟩ → |1̃0̃⟩


Example 1: contd

Fidelity of Bob's guess:

    Average fidelity F̄ = Σ_K p_K F_K = p cos²2θ + sin²2θ

Classical fidelity: if the states were {|0⟩, |1⟩} (θ = 0), then F_{c,K} = 1 if the reference is correct and F_{c,K} = 0 if it is wrong:
F_{c,0} = F_{c,2} = 1 and F_{c,1} = F_{c,3} = 0.

    Average classical fidelity F̄_c = p
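A short numpy sketch verifying the quoted average fidelity (θ and p are illustrative):

```python
import numpy as np

theta, p = 0.3, 0.7
k0 = np.array([np.cos(theta), np.sin(theta)])        # |0~>
k1 = np.array([np.sin(theta), np.cos(theta)])        # |1~>

F_bar = 0.0
for a, pa in ((k0, p), (k1, 1 - p)):
    for b, pb in ((k0, p), (k1, 1 - p)):
        sent    = np.kron(a, b)                      # two-qubit message
        decoded = np.kron(a, k0)                     # second qubit guessed as |0~>
        F_bar  += pa * pb * abs(sent @ decoded) ** 2
print(F_bar, p * np.cos(2 * theta) ** 2 + np.sin(2 * theta) ** 2)  # equal
```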


Example 1: contd: Average Fidelity

Average fidelity of Bob's guess:

Case 1: θ = 0: F̄ = F̄_c = p
Cases 2 to 5: θ = 0.2π/4 to 0.8π/4: F̄ > F̄_c
θ = π/4: |0̃⟩ = |1̃⟩ and F̄ = 1
    No information is transferred, as the states cannot be distinguished by any measurement; Bob already knows the message.

Example 2: Three qubit message


Alice has to send a three-qubit message.

Each qubit is drawn from A = {|0̃⟩, |1̃⟩} with probabilities p ≥ 1/2 and 1 − p.

Alice can afford to send only two qubits.

Each qubit in the message is in the state

    ρ = p |0̃⟩⟨0̃| + (1 − p) |1̃⟩⟨1̃|    (51)

with eigenvalues

    λ± = (1/2) [1 ± √(1 − 4p(1 − p) cos²2θ)]    (52)

and eigenvectors

    |±⟩ = (1/√N±) ( λ± + p cos 2θ − cos²θ )
                  ( sin θ cos θ           )    (53)

    N± = (λ± + p cos 2θ − cos²θ)² + cos²θ sin²θ

Example 2: Three qubit message (contd)


The 8 possible messages |Ψ_K⟩ are

    |Ψ₀⟩ = |0̃0̃0̃⟩ , |Ψ₁⟩ = |0̃0̃1̃⟩ , · · · , |Ψ₇⟩ = |1̃1̃1̃⟩    (54)

The eigenstates of ρ⊗³ are |χ_J⟩:

    |χ₀⟩ = |+++⟩ , |χ₁⟩ = |++−⟩ , · · · , |χ₇⟩ = |−−−⟩    (55)

with probabilities λ = λ₊λ₊λ₊ , λ₊λ₊λ₋ , · · · , λ₋λ₋λ₋, respectively.

Alice writes the message in the eigenbasis of the 3-qubit Hilbert space:

    |Ψ_K⟩ = Σ_{J=0}^{7} |χ_J⟩⟨χ_J|Ψ_K⟩ = Σ_{J=0}^{7} C_{KJ} |χ_J⟩    (56)

where C_{KJ} = ⟨χ_J|Ψ_K⟩.

Example 2: Three-qubit message (contd): Identifying the Typical Subspace

λ₊ > λ₋ for p > 1/2 =⇒ the weight λ₊ of |+⟩ exceeds the weight λ₋ of |−⟩.

The most likely (typical) subspace is spanned by the most likely states
    {|χ₀⟩ = |+++⟩ , |χ₁⟩ = |++−⟩ , |χ₂⟩ = |+−+⟩ , |χ₄⟩ = |−++⟩}
The unlikely (atypical) subspace is spanned by
    {|χ₃⟩ = |+−−⟩ , |χ₅⟩ = |−+−⟩ , |χ₆⟩ = |−−+⟩ , |χ₇⟩ = |−−−⟩}

Alice decomposes the state into a component |τ_K⟩ along the typical subspace and a component |τ_K^⊥⟩ along the atypical subspace:

    |Ψ_K⟩ = C_{K0}|χ₀⟩ + C_{K1}|χ₁⟩ + C_{K2}|χ₂⟩ + C_{K4}|χ₄⟩
          + C_{K3}|χ₃⟩ + C_{K5}|χ₅⟩ + C_{K6}|χ₆⟩ + C_{K7}|χ₇⟩
          = α_K (C_{K0}|χ₀⟩ + C_{K1}|χ₁⟩ + C_{K2}|χ₂⟩ + C_{K4}|χ₄⟩)/α_K
          + β_K (C_{K3}|χ₃⟩ + C_{K5}|χ₅⟩ + C_{K6}|χ₆⟩ + C_{K7}|χ₇⟩)/β_K
          = α_K |τ_K⟩ + β_K |τ_K^⊥⟩    (57)

where α_K = √(|C_{K0}|² + |C_{K1}|² + |C_{K2}|² + |C_{K4}|²) and β_K = √(|C_{K3}|² + |C_{K5}|² + |C_{K6}|² + |C_{K7}|²).

Example 2: Three-qubit message (contd): Alice's Coding

Alice applies a unitary transformation U that maps the basis states of the typical subspace to |i₁i₂0⟩ and those of the atypical subspace to |i₁i₂1⟩, with i₁, i₂ ∈ {0, 1}, e.g.

    U|χ₀⟩ = |000⟩ , U|χ₁⟩ = |110⟩ , U|χ₂⟩ = |010⟩ , U|χ₄⟩ = |100⟩
    U|χ₃⟩ = |001⟩ , U|χ₅⟩ = |111⟩ , U|χ₆⟩ = |011⟩ , U|χ₇⟩ = |101⟩

She then measures her third qubit.

If it comes out |0⟩, her state |Ψ_K⟩ is projected onto the typical subspace and she sends the first two qubits, in state |i₁⟩|i₂⟩.

If it comes out |1⟩, her state |Ψ_K⟩ is projected onto the atypical subspace and she sends Bob the first two qubits of U|R⟩, where |R⟩ is a reference state from the typical subspace; one can take |R⟩ = |χ₀⟩, the most likely state.

Bob appends to the two qubits received an ancillary qubit prepared in the state |0⟩. He then applies U⁻¹ to these three qubits and ends up with

    ρ̃_K = |α_K|² |τ_K⟩⟨τ_K| + |β_K|² |R⟩⟨R|    (58)


Example 2: Three qubit message (contd): Fidelity


The average fidelity is

    F̄ = Σ_K p_K |α_K|⁴ + Σ_K p_K |β_K|² |⟨Ψ_K|R⟩|²    (59)

The classical fidelity, for θ = 0, is

    F̄_c = p³ + 3p²(1 − p)    (60)

p = 1/2 =⇒ F̄_c = 1/2, but F̄ > 1/2, because our a priori ignorance is smaller.

For θ = π/4, F̄ = 1, but there is no information gain.

Holevo Information: Compression for Mixed States

The Schumacher theorem gives the compressibility of an ensemble of pure states A = {|ψ₁⟩, |ψ₂⟩, · · · , |ψₖ⟩}.

What is the compressibility of an ensemble of mixed states A = {ρ₁, ρ₂, · · · , ρₖ}?

S(ρ) is not the answer for mixed states.

Example: suppose there is only one mixed state in the ensemble, A = {ρ₀}:
    For the mixed state ρ₀, S(ρ₀) > 0.
    It is always chosen, with probability p₀ = 1.
    The state of n letters is ρ₀⊗ⁿ.
    No information is gained: Bob can reconstruct the message without receiving anything from Alice.
    The message can be compressed to zero bits per letter, which is less than S(ρ₀) > 0.


Holevo Information: Compression for Mixed States

Mutually orthogonal pure states have entropy S(ρ) = H(X), where X = {p₁, · · · , pₖ}.

Consider an ensemble A of mutually orthogonal mixed states, such that Tr(ρᵢρⱼ) = 0 for i ≠ j.

For any mixed state of A we can append a system B such that the combined state |φₓ⟩_AB is pure and Tr_B(|φₓ⟩_AB⟨φₓ|_AB) = ρ_A. So we can compress a message |φ₁⟩_AB |φ₂⟩_AB · · · |φₙ⟩_AB to H(X) bits per letter. Bob can then trace out B and retrieve the message.

So we need a quantity that reduces to H(X) for mutually orthogonal mixed states and to S(ρ) for pure states.


Holevo Information: Quantifying


    ρ = Σᵢ₌₁ᵏ pᵢ ρᵢ

    S(ρ) = −Tr(ρ log ρ) = −Tr[(Σᵢ pᵢρᵢ) log(Σᵢ pᵢρᵢ)]    (61)

ρ is block diagonal for mutually orthogonal ρᵢ, so

    S(ρ) = −Σᵢ Tr[pᵢρᵢ log(pᵢρᵢ)]
         = −Σᵢ Tr[pᵢρᵢ (log pᵢ + log ρᵢ)]
         = −Σᵢ pᵢ log pᵢ Tr ρᵢ − Σᵢ pᵢ Tr(ρᵢ log ρᵢ)
         = −Σᵢ pᵢ log pᵢ − Σᵢ pᵢ Tr(ρᵢ log ρᵢ)
         = H(p₁, · · · , pₖ) + Σᵢ pᵢ S(ρᵢ)    (62)

Holevo Information: Quantifying


S(ρ) = H(p₁, · · · , pₖ) + Σᵢ pᵢ S(ρᵢ)
=⇒ H(p₁, · · · , pₖ) = S(ρ) − Σᵢ pᵢ S(ρᵢ)

The quantity S(ρ) − Σᵢ pᵢ S(ρᵢ):
    equals H(p) for mutually orthogonal mixed states;
    equals S(ρ) for an ensemble of pure states, since S(ρᵢ) = 0;
    equals zero if there is only one mixed state with p₀ = 1, since then Σᵢ pᵢ S(ρᵢ) = S(ρ).

Thus it is a good measure of information:

    χ(E) ≡ S(ρ) − Σᵢ pᵢ S(ρᵢ)    (the Holevo information)

of the ensemble E = {ρ₁, · · · , ρₖ ; p₁, · · · , pₖ}.

χ(E) depends not just on ρ but also on how ρ is realized.

χ(E) tells us how much, on average, the von Neumann entropy of an ensemble is reduced when we know which preparation was chosen.

In general, high-fidelity compression to fewer than χ qubits per letter is not possible. A numerical sketch follows below.
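A minimal sketch of the Holevo information of an ensemble (function names are our own):

```python
import numpy as np

def S(rho):
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-(lam * np.log2(lam)).sum())

def holevo(states, probs):
    """chi(E) = S(sum_i p_i rho_i) - sum_i p_i S(rho_i)."""
    rho = sum(p * r for p, r in zip(probs, states))
    return S(rho) - sum(p * S(r) for p, r in zip(probs, states))

# Two orthogonal pure states: chi = H(1/2) = 1 bit.
print(holevo([np.diag([1.0, 0.0]), np.diag([0.0, 1.0])], [0.5, 0.5]))  # 1.0
# A single mixed state chosen with certainty: chi = 0.
print(holevo([np.diag([0.5, 0.5])], [1.0]))                            # 0.0
```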

Accessible Information

Letters are chosen from A = {a₁, · · · , aₖ} with probabilities {p₁, · · · , pₖ}.

Alice encodes a message in non-orthogonal quantum states and sends it to Bob.

Question: How much information can Bob gain about the message by performing measurements on the quantum states received?

This is a non-trivial problem, because Bob cannot perfectly distinguish between non-orthogonal states.


Some Preliminaries I
H(X) quantifies the a priori ignorance per letter of the receiver, before any message is received. Alice needs to send nH bits to completely specify a particular message of n letters.

Let the random variables associated with the letters generated by Alice be X, and with Bob's measurement outcomes be Y.

Bob measures and, on the basis of the outcome, updates his information about what Alice has sent.

After he learns Y, Bayes' rule allows him to update his knowledge of X:

    p(x|y) p(y) = p(y|x) p(x) ≡ p(x, y)    (63)

    p(x|y) = p(y|x) p(x) / p(y)    (64)

p(x) is known from the a priori probabilities of the ensemble, p(y|x) is known from the measurement process, and p(y) = Σₓ p(y|x) p(x).

Some Preliminaries II (contd)

Because of the new knowledge, Bob is now less ignorant about X: given the y's he has received, the encoding could have used fewer bits, nH(X|Y), where

    H(X|Y) = ⟨−log p(x|y)⟩    (Conditional Entropy)    (65)

From p(x|y) = p(x, y)/p(y):

    H(X|Y) = H(X, Y) − H(Y)    (66)

Similarly,

    H(Y|X) = ⟨−log p(y|x)⟩ = H(X, Y) − H(X)    (67)


Some Preliminaries III (contd)

The information about X that Bob gains when he learns Y is quantified by how much the number of bits per letter needed to specify X is reduced when Y is known:

    I(X;Y) ≡ H(X) − H(X|Y)    (68)
            = H(X) + H(Y) − H(X, Y)    (69)
            = H(Y) − H(Y|X)    (70)

I(X;Y) is the Mutual Information.

I(X;Y) is symmetric under exchange of X and Y: one finds out as much about X by learning Y as about Y by learning X.

Learning Y can never reduce knowledge about X, so I(X;Y) ≥ 0.

If X and Y are completely uncorrelated, then p(x, y) = p(x)p(y) and I(X;Y) = 0. A computational sketch follows below.
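A minimal sketch computing I(X;Y) from a joint distribution (the example tables are illustrative):

```python
import numpy as np

def H(p):
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def mutual_information(pxy):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) for a joint table pxy[x, y]."""
    pxy = np.asarray(pxy, dtype=float)
    return H(pxy.sum(axis=1)) + H(pxy.sum(axis=0)) - H(pxy)

print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))      # perfectly correlated: 1
print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))  # independent: 0
```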

Accessible Information
Letters are chosen from E = {ρ₁, · · · , ρₖ ; p₁, · · · , pₖ}.

Alice encodes a message in non-orthogonal quantum states and sends it to Bob.

Question: How much information can Bob gain about the message by performing measurements on the quantum states received?

The accessible information is the maximum of I(X;Y) over all possible measurement schemes:

    Acc(E) = Max_{F_y} I(X;Y)    (71)

The upper bound on the accessible information is the Holevo bound:

    I(X;Y) ≤ χ(E) = S(ρ) − Σᵢ pᵢ S(ρᵢ)    (72)


Accessible Information: Orthogonal States


Alice sends a message by selecting states from an ensemble of orthogonal pure states E = {|ψ₁⟩, |ψ₂⟩, · · · , |ψₖ⟩ ; p₁, p₂, · · · , pₖ}.

Bob, knowing A, knows he can perfectly distinguish between the states by an orthogonal measurement:

    F_y = |ψ_y⟩⟨ψ_y|    (73)

The conditional probability is

    p(y|x) = Tr(F_y ρₓ) = Tr(|ψ_y⟩⟨ψ_y| |ψₓ⟩⟨ψₓ|) = δ_{x,y}    (74–76)

So H(X|Y) = 0 and hence

    Acc(E) = I(X;Y) = H(X) = S(ρ)    (77)

which saturates the Holevo bound χ(E) = S(ρ).

Accessible Information: Example 1


Two Non-Orthogonal States

Alice sends a message by selecting states from an ensemble of non-orthogonal pure states E = {|0̃⟩, |1̃⟩ ; p, 1 − p}.

What we know:
    ρ₀ = |0̃⟩⟨0̃| ; ρ₁ = |1̃⟩⟨1̃| ; ρ = pρ₀ + (1 − p)ρ₁
    Pure states, so S(ρ₀) = S(ρ₁) = 0
    Holevo information: χ(E) = S(ρ)
    Mutual information: I(X;Y) ≤ χ(E) =⇒ I(X;Y) ≤ S(ρ)
    For θ ≠ 0, S(ρ) < H(X), so I(X;Y) < H(X)

Bob performs a measurement, and for it

    I(X;Y) = H(X) − H(X|Y) = H(X) + H(Y) − H(X, Y)

    Acc(E) = Max_{F_y} I(X;Y)    (78)


Example 1: Contd: Performing the Measurement

Bob performs a projective measurement along n̂:

    F₀ = (1/2)(I + n̂ · σ) ; F₁ = (1/2)(I − n̂ · σ)    (79)

e.g. a measurement along the z-axis has n̂ = (0, 0, 1).

We can write the states on the Bloch sphere as

    ρ₀ = (1/2)(I + r̂₀ · σ) ; ρ₁ = (1/2)(I + r̂₁ · σ)    (80)

with

    r̂₀ = (sin 2θ, 0, cos 2θ)
    r̂₁ = (sin 2θ, 0, −cos 2θ)
    n̂ = (sin θ̄, 0, cos θ̄)


Example 1: Contd: Accessible Information Calculation

First find the mutual information, then maximize it over all measurements:

    I(X;Y) = H(X) + H(Y) − H(X, Y)
           = −Σₓ p(x) log p(x) − Σ_y p(y) log p(y) + Σ_{x,y} p(x, y) log p(x, y)    (81)

Using p(x) = Σ_y p(x, y) and p(y) = Σₓ p(x, y):

    I(X;Y) = −Σ_{x,y} p(x, y) [log p(x) + log p(y) − log p(x, y)]
           = −Σ_{x,y} p(x, y) log [p(x) p(y) / p(x, y)]    (82)

with p(x, y) = p(x) p(y|x).



Example 1: Contd: Probabilities

p(y|x) = Tr(F_y ρₓ) is the probability that the measurement outcome is y when the state was ρₓ, with x, y ∈ {0, 1}:

    p(0|0) = Tr(F₀ρ₀) = (1/2)(1 + r̂₀ · n̂) ; p(1|0) = Tr(F₁ρ₀) = (1/2)(1 − r̂₀ · n̂)
    p(0|1) = Tr(F₀ρ₁) = (1/2)(1 + r̂₁ · n̂) ; p(1|1) = Tr(F₁ρ₁) = (1/2)(1 − r̂₁ · n̂)    (83)

    r̂₀ · n̂ = sin 2θ sin θ̄ + cos 2θ cos θ̄ = cos(2θ − θ̄)
    r̂₁ · n̂ = sin 2θ sin θ̄ − cos 2θ cos θ̄ = −cos(2θ + θ̄)    (84)

so that

    p(0|0) = (1/2)(1 + cos(2θ − θ̄)) ; p(1|0) = (1/2)(1 − cos(2θ − θ̄))
    p(0|1) = (1/2)(1 − cos(2θ + θ̄)) ; p(1|1) = (1/2)(1 + cos(2θ + θ̄))    (85)

Example 1: Contd: Probabilities

p(x): p(X = 0) = p and p(X = 1) = 1 − p

p(x, y) = p(x) p(y|x), so

    p(0, 0) = (1/2) p (1 + cos(2θ − θ̄)) ; p(0, 1) = (1/2) p (1 − cos(2θ − θ̄))
    p(1, 0) = (1/2)(1 − p)(1 − cos(2θ + θ̄)) ; p(1, 1) = (1/2)(1 − p)(1 + cos(2θ + θ̄))    (86)

p(y) = Σₓ p(x, y), so

    p(Y = 0) = p(0, 0) + p(1, 0) = (1/2)[1 + p cos(2θ − θ̄) − (1 − p) cos(2θ + θ̄)]
    p(Y = 1) = p(0, 1) + p(1, 1) = (1/2)[1 − p cos(2θ − θ̄) + (1 − p) cos(2θ + θ̄)]    (87)


Example 1 Contd: Accessible Information


θ = π/10 ; p = 0.8

Max_θ̄ I(θ̄) = 0.4 at θ̄ = 0.14π, so the accessible information is Acc(E) = 0.4.

The Holevo bound is χ(E) = S(ρ):

    S(ρ) = −λ₊ log λ₊ − λ₋ log λ₋ = 0.526

    λ± = (1/2) [1 ± √(1 − 4p(1 − p) cos²2θ)]    (88)

The classical bound is H(X) = −p log p − (1 − p) log(1 − p) = 0.722.

    Acc(E) < S(ρ) < H(X)
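A short sketch reproducing these numbers (the scan over θ̄ is our own implementation choice):

```python
import numpy as np

def H(p):
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

theta, p = np.pi / 10, 0.8
best = (0.0, 0.0)
for tb in np.linspace(0, np.pi, 2001):
    p00 = (1 + np.cos(2 * theta - tb)) / 2      # p(y=0|x=0), Eq. (85)
    p01 = (1 - np.cos(2 * theta + tb)) / 2      # p(y=0|x=1), Eq. (85)
    pxy = np.array([[p * p00,       p * (1 - p00)],
                    [(1 - p) * p01, (1 - p) * (1 - p01)]])
    I = H(pxy.sum(axis=1)) + H(pxy.sum(axis=0)) - H(pxy)
    best = max(best, (I, tb))
print(best[0], best[1] / np.pi)  # ~0.40 at theta_bar ~ 0.14 pi
```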

Accessible Information: Example 2


Three Non-Orthogonal States

Alice sends a message by selecting states from an ensemble of non-orthogonal pure states E = {|φ₀⟩, |φ₁⟩, |φ₂⟩ ; p₀, p₁, p₂}:

    |φ₀⟩ = |0⟩
    |φ₁⟩ = cos θ |0⟩ + sin θ |1⟩
    |φ₂⟩ = cos θ |0⟩ − sin θ |1⟩

with p₀ = p₁ = p₂ = 1/3 and θ = 2π/3:

    ρ₀ = ( 1  0 )    ρ₁ = (1/4) (  1   −√3 )    ρ₂ = (1/4) (  1   √3 )
         ( 0  0 )               ( −√3    3 )               ( √3    3 )    (89)

    ρ = p₀ρ₀ + p₁ρ₁ + p₂ρ₂ = (1/2) I , so S(ρ) = 1

Pure states, so S(ρ₀) = S(ρ₁) = S(ρ₂) = 0.

The Holevo bound on the mutual information is I(X;Y) ≤ S(ρ) = 1.

Example 2: Three Non-Orthogonal States (contd): Non-Projective Measurement

The dimensionality of the Hilbert space is 2, but three measurement outcomes are needed.

In projective measurements, the probability of an outcome is

    pᵢ = Tr(Fᵢ ρ Fᵢ)    (90)
       = Tr(Fᵢ² ρ)      (91)
       = Tr(Fᵢ ρ)       (92)

with Fᵢ = Fᵢ² and Σᵢ Fᵢ = 1.

But we need not use an orthogonal measurement: allowing Fᵢ ≠ Fᵢ², we can stop at Eq. (91). We only need measurement operators Πᵢ, corresponding to Fᵢ², such that Σᵢ Πᵢ = 1; the Πᵢ need not be orthogonal.

Such a measurement is called a Positive Operator Valued Measurement (POVM).

Example 2: Three Non-Orthogonal States (contd): Non-Projective Measurement

We choose operators as elements of the POVM such that Σᵢ Πᵢ = I:

    Π₀ = (1/2) ( 1   r  )    Π₁ = (1/2) (  1  −r )    Π₂ = ( 0     0    )
               ( r   r² )               ( −r   r² )         ( 0  1 − r² )

p(y|x) = Tr(Π_y ρₓ), with x, y = 0, 1, 2

p(x): p(X = 0) = p(X = 1) = p(X = 2) = 1/3

p(x, y) = (1/3) p(y|x)

p(y) = Σₓ p(x, y), giving

    p(Y = 0) = p(Y = 1) = (1/4)(1 + r²) ; p(Y = 2) = (1/2)(1 − r²)

Example 2: Three Non-Orthogonal States (Contd)

Max_r I(r) = 0.585 at r = 0.577 ≈ 1/√3

S(ρ) = 1, and the classical bound is H(X) = log₂ 3 = 1.585.

    Acc(E) < S(ρ) < H(X)
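A sketch reproducing this maximum by scanning r (the grid is our own choice):

```python
import numpy as np

def H(p):
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

th = 2 * np.pi / 3
kets = [np.array([1.0, 0.0]),
        np.array([np.cos(th),  np.sin(th)]),
        np.array([np.cos(th), -np.sin(th)])]
rhos = [np.outer(k, k) for k in kets]            # the three trine states

best = (0.0, 0.0)
for r in np.linspace(0.01, 0.99, 981):
    Pi = [0.5 * np.array([[1,  r], [ r, r * r]]),
          0.5 * np.array([[1, -r], [-r, r * r]]),
          np.array([[0.0, 0.0], [0.0, 1 - r * r]])]
    pxy = np.array([[np.trace(P @ rho) / 3 for P in Pi] for rho in rhos])
    I = H(pxy.sum(axis=1)) + H(pxy.sum(axis=0)) - H(pxy)
    best = max(best, (I, r))
print(best)  # ~(0.585, 0.577): Acc(E) = log2(3/2) at r = 1/sqrt(3)
```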


Von Neumann Entropy as Entanglement Measure


von Neumann entropy is a good measure of entanglement for pure bipartite states |Ψ_AB⟩.

S(ρ) = 0 for a pure state; S(ρ) = 1 for a totally mixed qubit state.

For a separable pure state |Ψ_AB⟩ = |ψ_A⟩ ⊗ |φ_B⟩, the state of system A alone is

    ρ_A = Tr_B(|Ψ_AB⟩⟨Ψ_AB|) = |ψ_A⟩⟨ψ_A|    (93)

a pure state, so E = S(ρ_A) = 0.

For the maximally entangled state |Ψ_AB⟩ = (1/√2)(|0_A 0_B⟩ + |1_A 1_B⟩):

    ρ_A = Tr_B(|Ψ_AB⟩⟨Ψ_AB|) = (1/2)|0⟩⟨0| + (1/2)|1⟩⟨1|    (94)

a totally mixed state, so E = S(ρ_A) = 1.

Von Neumann Entropy as Entanglement Measure


E = S(ρ_A) = S(ρ_B) = H(λᵢ), and 0 ≤ E ≤ 1 for qubits.

A pure bipartite state can always be written in the Schmidt decomposition, with A and B sharing common eigenvalues:

    |Ψ_AB⟩ = Σᵢ √λᵢ |vᵢ_A⟩ |uᵢ_B⟩    (95)

For a separable state there is only one term in the Schmidt decomposition, and since Σᵢ λᵢ = 1, that single λ = 1 and E = S(ρ_A) = S(ρ_B) = H(1) = 0.

For the maximally entangled state |Ψ_AB⟩ = (1/√2)(|0_A 0_B⟩ + |1_A 1_B⟩), already in Schmidt form, λ₁ = λ₂ = 1/2 and E = S(ρ_A) = S(ρ_B) = H(1/2) = 1.

For the partially entangled state |Ψ_AB⟩ = (3/5)|0_A ↑_B⟩ + (4/5)|1_A ↓_B⟩, already in Schmidt form, λ₁ = 9/25 ; λ₂ = 16/25 and E = S(ρ_A) = S(ρ_B) = H(9/25) = 0.942.
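A minimal sketch of E = S(ρ_A) via the partial trace (the amplitude-matrix bookkeeping is our own choice):

```python
import numpy as np

def S(rho):
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-(lam * np.log2(lam)).sum())

# |Psi_AB> = (3/5)|00> + (4/5)|11>, stored as an amplitude matrix psi[a, b];
# rho_A = psi psi^dagger implements the partial trace over B.
psi = np.array([[3/5, 0.0],
                [0.0, 4/5]])
rho_A = psi @ psi.conj().T
print(S(rho_A))  # ~0.942 = H(9/25)
```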

Entanglement Measure: Interpretation

For the partially entangled state |Ψ_AB⟩ = (3/5)|0_A ↑_B⟩ + (4/5)|1_A ↓_B⟩:
    E = S(ρ_A) = S(ρ_B) = H(9/25) = 0.942

If we have 1000 copies of |Ψ_AB⟩, we can extract 942 Bell pairs (Distillable Entanglement ≡ E_D).

942 Bell pairs are needed to create 1000 copies of |Ψ_AB⟩ (Entanglement of Formation ≡ E_F).

E_F ≥ E_D: you cannot get more entanglement out than you put in!

For pure states, E_F = E_D.


Distilling Entanglement; Entanglement Concentration


We start with a partially entangled state |Ψ_AB⟩ = α|00⟩_AB + β|11⟩_AB and want to extract maximally entangled Bell pairs.

Alice knows the coefficients α and β. She prepares an ancilla in the state

    |φ⟩_C = α|0⟩_C + β|1⟩_C    (96)

The combined state of CAB is

    |Ψ⟩_CAB = α²|000⟩ + αβ|011⟩ + αβ|100⟩ + β²|111⟩    (97)

Alice performs a CNOT operation on her qubits, with C as control and A as target:

    |Ψ⟩_CAB = α²|000⟩ + αβ|011⟩ + αβ|110⟩ + β²|101⟩    (98)

Writing the state in the order ACB:

    |Ψ⟩_ACB = α²|000⟩ + αβ|101⟩ + αβ|110⟩ + β²|011⟩    (99)

Distilling Entanglement; Entanglement Concentration (contd)

    |Ψ⟩_ACB = |0⟩_A (α²|00⟩ + β²|11⟩)_CB + αβ |1⟩_A (|01⟩ + |10⟩)_CB    (100)

Renormalizing the CB parts:

    |Ψ⟩_ACB = √(α⁴ + β⁴) |0⟩_A [ (α²/√(α⁴ + β⁴)) |00⟩ + (β²/√(α⁴ + β⁴)) |11⟩ ]_CB
            + √(2α²β²) |1⟩_A (1/√2)(|01⟩ + |10⟩)_CB    (101)

Alice makes a measurement on A in the basis {|0⟩, |1⟩}. She gets |1⟩ with probability 2α²β², in which case CB is left in a maximally entangled state.

If she starts with N pairs, she will get 2Nα²β² entangled pairs.

For the remaining N(1 − 2α²β²) pairs, she repeats the whole process.

Alice has to announce her result to Bob (classical communication) to extract the successful pairs.

This entanglement was already present in the initial state:

    F = |⟨φ⁺|Ψ_AB⟩|² = (1/2)(α + β)²
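A small simulation of this protocol (α, β are illustrative; the qubit ordering C, A, B and the index conventions are our own bookkeeping):

```python
import numpy as np

a, b = np.sqrt(0.3), np.sqrt(0.7)        # alpha, beta with alpha^2 + beta^2 = 1

# State of Eq. (97) in the order C, A, B; basis index = 4c + 2a + b.
psi = np.zeros(8)
psi[0b000] = a * a
psi[0b011] = a * b
psi[0b100] = a * b
psi[0b111] = b * b

# CNOT with control C (bit 2) and target A (bit 1): flip A whenever C = 1.
out = np.zeros(8)
for i in range(8):
    j = i ^ 0b010 if i & 0b100 else i
    out[j] = psi[i]

# Project qubit A onto |1>: keep the amplitudes whose A bit is set.
mask = np.array([(i >> 1) & 1 for i in range(8)], dtype=bool)
prob1 = np.sum(out[mask] ** 2)
print(prob1, 2 * a**2 * b**2)            # success probability: both 0.42

cb = out[mask] / np.sqrt(prob1)          # post-measurement state of C, B
print(cb)                                # ~(0, 1, 1, 0)/sqrt(2) = (|01>+|10>)/sqrt(2)
```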