Maël Le Treust
ETIS, CNRS UMR 8051, ENSEA, Université de Cergy-Pontoise,
6, avenue du Ponceau,
95014 CERGY-PONTOISE CEDEX,
FRANCE
Email: mael.le-treust@ensea.fr
Abstract—Correlation between the channel state and the source symbol is under investigation for a joint source-channel coding problem. We investigate simultaneously the lossless transmission of information and the empirical coordination of channel inputs with the symbols of source and states. Empirical coordination is achievable if the sequences of source symbols, channel states, channel inputs and channel outputs are jointly typical for a target joint probability distribution. We characterize the joint distributions that are achievable under the lossless decoding constraint. The performance of the coordination is evaluated by an objective function. For example, we determine the minimal distortion between symbols of source and channel inputs for lossless decoding. We show that the correlation between source and channel state improves the feasibility of the transmission.
Index Terms—Shannon Theory, State-Dependent Channels, Joint Source-Channel Coding, Empirical Coordination, Empirical Distribution of Symbols, Non-Causal Encoding/Decoding.
I. INTRODUCTION
State-dependent channels with state information at the encoder have been widely studied since the publication of Gelfand and Pinsker's article [1]. The authors characterize the channel capacity in the general case, and Costa evaluates it for channels with additive white Gaussian noise in [2]. Interestingly, the additional noise does not lower the channel capacity, as long as it is observed by the encoder. In this setting, the encoder has non-causal state information, i.e., it observes the whole sequence of channel states. This hypothesis is relevant for problems such as computer memory with defects [3], in which the encoder observes the sequence of defects, or digital watermarking [4], since the sequence of states is a watermarked image/data. Non-causal state information is also appropriate for the problem of empirical coordination [5], since the sequence of source symbols acts as channel states. More recently, the notion of "state amplification" was introduced in [6]. In this framework, the decoder is also interested in acquiring some information about the channel state. The amount of such information is measured by the "uncertainty reduction rate". The authors characterize the optimal trade-off region between reliable data rate and uncertainty reduction rate. As a special case, they consider that the encoder only transmits the sequence of channel states.
[Fig. 1 diagram: source sequences (U^n, S^n), encoder C, channel input X^n, channel output Y^n, decoder D, output \hat{U}^n.]
Fig. 1. Channel state S is correlated with the information source U. Encoder C and decoder D implement a coding scheme such that the sequences of symbols (U^n, S^n, X^n, Y^n, \hat{U}^n) \in A^{*n}_\epsilon(Q) are jointly typical for the target probability distribution Q(u, s, x, y, \hat{u}) and the decoding is lossless.
of the decoder \hat{u}^n \in U^n. We assume the sets U, S, X, Y are discrete and U^n denotes the n-time cartesian product of the set U. Notation \Delta(X) stands for the set of probability distributions P(X) over X. The variational distance between probabilities Q and P is denoted by ||Q - P||_v = 1/2 \sum_{x \in X} |Q(x) - P(x)|. Notation 1(\hat{u}|u) denotes the indicator function, that equals 1 if \hat{u} = u and 0 otherwise. The Markov chain property is denoted by Y -- X -- U. The source and the channel are memoryless:

P_{us}^{\otimes n}(u^n, s^n) = \prod_{i=1}^n P_{us}(u_i, s_i),        (1)
T^{\otimes n}(y^n | x^n, s^n) = \prod_{i=1}^n T(y_i | x_i, s_i).      (2)

The empirical distribution Q^n \in \Delta(U \times S \times X \times Y \times U) of the sequences (u^n, s^n, x^n, y^n, \hat{u}^n) is defined by:

Q^n(u, s, x, y, \hat{u}) = N(u, s, x, y, \hat{u} | u^n, s^n, x^n, y^n, \hat{u}^n) / n,
    \forall (u, s, x, y, \hat{u}) \in U \times S \times X \times Y \times U.     (5)
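These definitions translate directly into code. Below is a minimal Python sketch (the function names are mine, not the paper's) of the variational distance and of the joint empirical distribution of equation (5), with distributions represented as dictionaries:

```python
from collections import Counter

def variational_distance(Q, P):
    """||Q - P||_v = 1/2 * sum_x |Q(x) - P(x)|, distributions as dicts."""
    support = set(Q) | set(P)
    return 0.5 * sum(abs(Q.get(x, 0.0) - P.get(x, 0.0)) for x in support)

def empirical_distribution(*sequences):
    """Joint empirical distribution of tuples of symbols:
    N(u, s, ... | u^n, s^n, ...) / n, as in equation (5)."""
    n = len(sequences[0])
    return {sym: c / n for sym, c in Counter(zip(*sequences)).items()}

Q = empirical_distribution([0, 1, 1, 0], [0, 1, 0, 0])
# Q == {(0, 0): 0.5, (1, 1): 0.25, (1, 0): 0.25}
```

Empirical coordination then amounts to requiring that such an empirical distribution stays within variational distance epsilon of the target distribution.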
III. LOSSLESS TRANSMISSION OF THE SOURCE

The Markov chain Y -- (X, S) -- U holds, where Q is the set of distributions Q(u, s, w, x, y, \hat{u}) \in \Delta(U \times S \times W \times X \times Y \times U) with auxiliary random variable W that satisfies:

\sum_{w \in W} Q(u, s, x, w, y, \hat{u}) = P_{us}(u, s) Q(x|u, s) T(y|x, s) 1(\hat{u}|u),
Y -- (X, S) -- (U, W).

For the setting with side information Z, Q is the set of distributions Q(u, s, z, x, w, y, \hat{u}) \in \Delta(U \times S \times Z \times W \times X \times Y \times U) with auxiliary random variable W that satisfies:

\sum_{w \in W} Q(u, s, z, x, w, y, \hat{u}) = P_{usz}(u, s, z) Q(x|u, s) T(y|x, s) 1(\hat{u}|u),
Z -- (U, S) -- (X, Y, W).
D* = min_{Q_{x|us} \in A} E[ d(U, X) ],        (10)

Equation (10) is a direct consequence of Theorem III.1, see also [5]. Consider a smaller distortion level D < D*. The corresponding transition probability Q_{x|us} \notin A is not achievable: there is no code c(n) such that X^n is coordinated, i.e. jointly typical, with (U^n, S^n) for the target probability distribution P_{us} Q_{x|us}. Hence, the distortion level D is not achievable.

[Figure: state-dependent channel with input/output symbols {0, 1, 2} and transition probabilities p0, ..., p5, depending on the state S = 0 or S = 1.]
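For small alphabets, the minimization over Q(x|u, s) in (10) can be brute-forced. The Python sketch below (helper names are mine) deliberately ignores the set A of achievable conditionals from Theorem III.1, so it only lower-bounds the constrained minimum D*; since the objective is linear in Q(x|u, s), it is enough to search over deterministic maps x = f(u, s):

```python
import itertools

def expected_distortion(P_us, f, d):
    """E[d(U, X)] when X = f(U, S) and (U, S) ~ P_us (a dict {(u, s): prob})."""
    return sum(p * d(u, f[(u, s)]) for (u, s), p in P_us.items())

def min_distortion(P_us, X_alphabet, d):
    """Brute-force minimum of E[d(U, X)] over deterministic maps x = f(u, s).
    NOTE: the coordination constraint set A of Theorem III.1 is ignored,
    so this only lower-bounds the constrained minimum D* of equation (10)."""
    pairs = list(P_us)
    best = float('inf')
    for choice in itertools.product(X_alphabet, repeat=len(pairs)):
        f = dict(zip(pairs, choice))
        best = min(best, expected_distortion(P_us, f, d))
    return best
```

A full solver would additionally intersect the search with the information constraint defining A.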
[Fig. 2 diagram: sequences (U^n, S^n, Z^n) drawn from P_{usz}; encoder input (U^n, S^n), channel input X^n, channel output Y^n, decoder output \hat{U}^n.]

IV. COMPARISON WITH PREVIOUS WORK

The information constraint of Corollary III.4 is compared with the one of previous work:

max_{Q_{xw|us}} [ I(U, W; Y) - I(W; S|U) ] - H(U)
    \geq max_{Q_{xw|s}} [ I(W; Y) - I(W; S) ] - H(U) - min_{Q_{x|s}} I(U; Y).    (17)
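The information quantities entering these constraints are easy to evaluate numerically for finite alphabets. A Python sketch (helper names are mine) computing entropy, mutual information and conditional mutual information from a joint distribution over tuples:

```python
import math

def H(p):
    """Entropy in bits of a distribution {symbol: prob}."""
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

def marginal(joint, axes):
    """Marginalize a joint distribution over tuples onto the given axes."""
    out = {}
    for sym, q in joint.items():
        key = tuple(sym[a] for a in axes)
        out[key] = out.get(key, 0.0) + q
    return out

def mutual_information(joint, axes_a, axes_b):
    """I(A; B) in bits from a joint distribution over tuples."""
    pa, pb = marginal(joint, axes_a), marginal(joint, axes_b)
    pab = marginal(joint, axes_a + axes_b)
    k = len(axes_a)
    return sum(q * math.log2(q / (pa[ab[:k]] * pb[ab[k:]]))
               for ab, q in pab.items() if q > 0)

def conditional_mi(joint, axes_a, axes_b, axes_c):
    """I(A; B | C) = I(A; B, C) - I(A; C) by the chain rule."""
    return (mutual_information(joint, axes_a, axes_b + axes_c)
            - mutual_information(joint, axes_a, axes_c))
```

For instance, with a joint Q over (u, s, w, x, y) in axis order (0, 1, 2, 3, 4), the quantity I(U, W; Y) is `mutual_information(Q, (0, 2), (4,))` and I(W; S|U) is `conditional_mi(Q, (2,), (1,), (0,))`.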
P_{us}(u, s) |   S = 0     |   S = 1
U = 0        | (1+\epsilon)/4 | (1-\epsilon)/4
U = 1        | (1-\epsilon)/4 | (1+\epsilon)/4

Fig. 4. Joint probability distribution P_{us}(u, s). If \epsilon = 0, the random variables U and S are independent. If \epsilon = 1, both random variables are equal: U = S. The marginal probability distributions on U and S are always equal to the uniform probability distribution (0.5, 0.5).
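The properties stated in the caption of Fig. 4 can be checked numerically; a small Python sketch (the function name is mine):

```python
def P_us(eps):
    """Joint distribution of Fig. 4: diagonal entries (1 + eps)/4,
    off-diagonal entries (1 - eps)/4, for eps in [0, 1]."""
    return {(0, 0): (1 + eps) / 4, (0, 1): (1 - eps) / 4,
            (1, 0): (1 - eps) / 4, (1, 1): (1 + eps) / 4}

# eps = 0: U and S independent; eps = 1: U = S almost surely.
P = P_us(0.5)
assert abs(sum(P.values()) - 1.0) < 1e-12          # valid distribution
assert abs(P[(0, 0)] + P[(0, 1)] - 0.5) < 1e-12    # uniform marginal on U
```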
[Fig. 5 plot: "Information Constraint" on the y-axis (0 to 0.4), for \epsilon = 0.99; x-axis from 0 to 0.14.]
Fig. 5. Upper bound (18) on (16) is smaller than the lower bound (19) on the information constraint (14) of Corollary III.4.
max [ I(X; Y|S) ] - H(U),                        (18)
max [ I(X; Y|U) - I(X; S|U) ] - H(U|Y).          (19)

[Fig. 6 plot: "Information Constraint" versus \epsilon, for \epsilon between 0.9 and 0.99; curves (16) and (14).]
Fig. 6. Upper bound (18) on (16) does not depend on \epsilon since the correlation between U and S is not taken into account.
VI. CONCLUSION

We consider a state-dependent channel in which the channel state is correlated with the source symbol. We investigate simultaneously the lossless transmission of information and the empirical coordination of channel inputs with the symbols of source and states.

APPENDIX A
SKETCH OF ACHIEVABILITY OF THEOREM III.1
The rates (R_M, R_L) of the code satisfy:

R_M \geq H(U) + \epsilon,                       (20)
R_L \geq I(W; S|U) + \epsilon,                  (21)
R_M + R_L \leq I(U, W; Y) - \epsilon.           (22)

The error event E = 1 occurs if the indexed codeword satisfies

U^n(m) \neq U^n,                                (23)

and E = 0 otherwise. The expected probabilities of the following error events, taken over the random code c, are evaluated:

E_c[ P( \forall l \in M_L, W^n(m, l) \notin A^{*n}_\epsilon(U^n(m), S^n) ) ],                    (24)
E_c[ P( \exists (m', l') \neq (m, l) s.t. (U^n(m'), W^n(m', l')) \in A^{*n}_\epsilon(Y^n) ) ],   (25)
E_c[ P( \exists m' \neq m s.t. (U^n(m'), W^n(m', l)) \in A^{*n}_\epsilon(Y^n) ) ].               (26)
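Under this reconstruction of (20)-(22), the existence of a valid rate pair (R_M, R_L) reduces to a single inequality; a tiny Python check (the inequality directions follow my reading of the garbled source, and the numeric values in the example are hypothetical):

```python
def rates_feasible(H_U, I_WS_given_U, I_UW_Y, eps=0.0):
    """Check whether a rate pair (R_M, R_L) satisfying the reconstructed
    conditions (20)-(22) exists:
        R_M >= H_U + eps, R_L >= I_WS_given_U + eps,
        R_M + R_L <= I_UW_Y - eps.
    Feasible iff the two lower bounds fit under the sum constraint."""
    return H_U + I_WS_given_U + 3 * eps <= I_UW_Y

# Hypothetical values: H(U) = 1 bit, I(W;S|U) = 0.2 bit.
assert rates_feasible(1.0, 0.2, 1.5)        # room for both rates
assert not rates_feasible(1.0, 0.2, 1.1)    # sum constraint violated
```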
SKETCH OF CONVERSE OF THEOREM III.1

n H(U)
  = H(U^n)                                                                          (27)
  \leq H(U^n | E = 0) + n\epsilon                                                   (28)
  = I(U^n; Y^n | E = 0) + H(U^n | Y^n, E = 0) + n\epsilon                           (29)
  \leq I(U^n; Y^n | E = 0) + n\epsilon_2                                            (30)
  = \sum_{i=1}^n I(U^n; Y_i | Y^{i-1}, E = 0) + n\epsilon_2                         (31)
  \leq \sum_{i=1}^n I(U^n, Y^{i-1}; Y_i | E = 0) + n\epsilon_2                      (32)
  = \sum_{i=1}^n I(U^n, Y^{i-1}, S_{i+1}^n; Y_i | E = 0)
      - \sum_{i=1}^n I(S_{i+1}^n; Y_i | U^n, Y^{i-1}, E = 0) + n\epsilon_2          (33)
  = \sum_{i=1}^n I(U^n, Y^{i-1}, S_{i+1}^n; Y_i | E = 0)
      - \sum_{i=1}^n I(S_i; Y^{i-1} | U^n, S_{i+1}^n, E = 0) + n\epsilon_2          (34)
  \leq \sum_{i=1}^n I(U_i, U^{i-1}, U_{i+1}^n, Y^{i-1}, S_{i+1}^n; Y_i | E = 0)
      - \sum_{i=1}^n I(S_i; U^{i-1}, U_{i+1}^n, Y^{i-1}, S_{i+1}^n | U_i, E = 0) + n\epsilon_3   (35)
  = \sum_{i=1}^n I(U_i, W_i; Y_i | E = 0) - \sum_{i=1}^n I(W_i; S_i | U_i, E = 0) + n\epsilon_3  (36)
  \leq n max_{Q \in Q} [ I(U, W; Y) - I(W; S|U) ] + n\epsilon_4.                    (37)

Equation (34) follows from the Csiszár sum identity, and (36) from the identification of the auxiliary random variable W_i = (U^{i-1}, U_{i+1}^n, Y^{i-1}, S_{i+1}^n).
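Step (34) relies on the Csiszár sum identity, which holds for an arbitrary joint distribution of the sequences. A Python sketch (all helper names are mine) verifying it numerically on a random joint distribution over binary sequences of length n = 3:

```python
import itertools
import math
import random

def mi(joint, f_a, f_b, f_c):
    """I(A; B | C) in bits, where A = f_a(w), B = f_b(w), C = f_c(w)
    and w ranges over the support of the joint distribution `joint`."""
    def m(f):
        out = {}
        for w, p in joint.items():
            out[f(w)] = out.get(f(w), 0.0) + p
        return out
    pabc = m(lambda w: (f_a(w), f_b(w), f_c(w)))
    pac, pbc, pc = m(lambda w: (f_a(w), f_c(w))), m(lambda w: (f_b(w), f_c(w))), m(f_c)
    return sum(p * math.log2(p * pc[c] / (pac[(a, c)] * pbc[(b, c)]))
               for (a, b, c), p in pabc.items() if p > 0)

# Random joint distribution over (Y_1..Y_n, S_1..S_n), binary alphabets.
random.seed(0)
n = 3
words = list(itertools.product([0, 1], repeat=2 * n))
weights = [random.random() for _ in words]
total = sum(weights)
joint = {w: x / total for w, x in zip(words, weights)}

def Y(w, i): return w[i]        # Y_{i+1} (0-based index i)
def S(w, i): return w[n + i]    # S_{i+1} (0-based index i)

# Csiszar sum identity:
#   sum_i I(S_{i+1}^n; Y_i | Y^{i-1}) == sum_i I(Y^{i-1}; S_i | S_{i+1}^n)
lhs = sum(mi(joint,
             lambda w, i=i: tuple(S(w, j) for j in range(i + 1, n)),
             lambda w, i=i: Y(w, i),
             lambda w, i=i: tuple(Y(w, j) for j in range(i)))
          for i in range(n))
rhs = sum(mi(joint,
             lambda w, i=i: tuple(Y(w, j) for j in range(i)),
             lambda w, i=i: S(w, i),
             lambda w, i=i: tuple(S(w, j) for j in range(i + 1, n)))
          for i in range(n))
assert abs(lhs - rhs) < 1e-9
```

The identity holds with equality for every joint distribution, which is why the check passes for an arbitrarily drawn random joint.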