Index Coding With Side Information
by
SHUBHAM GIRDHAR
1311041
CERTIFICATE
Certified that the summer project report Index Coding with Side Information is the bonafide work of Shubham Girdhar, Roll No. 1311041, School of Mathematical Sciences, National Institute of Science Education and Research, Bhubaneswar, carried out under my supervision during 20-May-2016 to 7-July-2016.
Place
Date
SUPERVISOR
Dr. Prasad Krishnan
Assistant Professor
SPCRC
International Institute Of Information Technology
ACKNOWLEDGEMENTS
To my family, your constant love and support during good times and bad gets me through. Thank you for believing in me. To my friends, I am glad to have every one of you. Thank you for your loyalty and integrity. And there
are many other people from outside mathematics to whom I owe a debt of gratitude,
too numerous to mention here.
To all the staff of the School of Mathematical Sciences, NISER, thank you for fostering my interest. I would particularly like to thank the Head of the Department, Dr. Anil Karn, for providing this opportunity. I am also thankful to Dr. Prasad Krishnan, an extraordinary gentleman who supervised my project. I consider myself very fortunate to have had this experience, and I have come to enjoy our meetings immensely. As a chapter closes, I trust that your door will remain open.
Thank you
Shubham Girdhar
Abstract
This report studies the index coding problem with side information. Tools from information theory and matroid theory are employed to understand index coding problems with near-extreme rates, and an attempt is made to translate graph-theoretic results into matroid-theoretic ones. The dual of the problem, which leads to generalized locally repairable codes, is also studied, and a new dual IC problem is proposed.
Contents

1 Pre-requisites
  1.1 Information theory
  1.2 Matroid theory
2 Index coding problem Setup
3 Outer Bounds
4 A class of Index Coding Problems with rate 1/2
5 Bounding optimal rate of ICSI
  5.1 Introduction
  5.2 Index Coding with side information
  5.3 Digraphs with min-rank one less than the order
6 Generalized Locally Repairable Codes (GLRC)
7 Our contribution
  7.1 Relation between matroid theory and GLRC
  7.2 Results for digraphs with β(G) = 1
1 Pre-requisites
1.1 Information theory
Entropy
Let X be a discrete random variable with alphabet X, and let p(x) = Pr{X = x}, x ∈ X, be its probability mass function. The entropy of X is defined as

H(X) = − Σ_{x∈X} p(x) log p(x).
Lemma 1. H(X) ≥ 0.
Proof: 0 ≤ p(x) ≤ 1 implies that log(1/p(x)) ≥ 0 for every x, and hence H(X) = Σ_x p(x) log(1/p(x)) ≥ 0.
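As a quick numerical illustration of this definition and of Lemma 1 (a sketch of ours, not part of the report), the entropy of a few simple distributions can be computed directly; every term p(x) log(1/p(x)) is non-negative, exactly as in the proof above.

```python
import math

def entropy(pmf):
    """H(X) = sum of p(x) * log2(1/p(x)); each term is >= 0 since 0 <= p <= 1.
    Terms with p(x) = 0 contribute nothing (0 * log(1/0) is taken as 0)."""
    return sum(p * math.log2(1 / p) for p in pmf if p > 0)

print(entropy([0.5, 0.5]))                 # fair coin: 1.0 bit
print(entropy([1.0]))                      # deterministic variable: 0.0 bits
print(entropy([0.25, 0.25, 0.25, 0.25]))   # uniform over 4 outcomes: 2.0 bits
```

The uniform distribution maximises entropy for a fixed alphabet size, consistent with the last line.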
Definition 2. (Matroid) A matroid M is a pair (E, I), where E is a finite ground set and I is a collection of subsets of E (the independent sets) such that:

1. ∅ ∈ I.
2. If I1 ∈ I and I2 ⊆ I1, then I2 ∈ I.
3. For any I1, I2 ∈ I such that |I1| < |I2|, there exists some e ∈ I2 \ I1 such that I1 ∪ {e} ∈ I.
Definition 3. A subset of the ground set E that is not independent is called dependent. A maximal independent set is called a basis for the matroid. A circuit in a matroid is a minimal dependent set.

The dependent sets, the bases, or the circuits of a matroid can characterize the matroid completely. For instance, one may define a matroid as:
Definition 4. A matroid M is a pair (E, B), where E is a finite set as before and B is a collection of subsets of E (the bases) such that:

1. B is non-empty.
2. (Basis exchange property) If B1, B2 ∈ B and e ∈ B1 \ B2, then there exists f ∈ B2 \ B1 such that (B1 \ {e}) ∪ {f} ∈ B.
It follows from the basis exchange property that no member of B can be a proper
subset of another.
Rank function
One of the basic results of matroid theory, directly analogous to a similar theorem about bases in linear algebra, is that any two bases of a matroid M have the same number of elements. This number is called the rank of the matroid M. Let A ⊆ E; then a matroid on A can be defined by considering a subset of A independent iff it is independent in the matroid M. Thus, we can define the rank of any subset of E. The rank of A is given by the rank function r(A), which maps subsets of E to non-negative integers. The matroid can then be defined through its rank function as follows:
Example. For instance, if E is a finite set of vectors over a field, then the subsets of E that are linearly independent form the independent sets of a matroid on E.
Dual of a matroid
If M is a finite matroid, we can define the dual matroid M* by taking the same ground set and calling a set a basis in M* if and only if its complement is a basis in M, i.e.,

M* = (E, B*), where B* = {E \ B | B ∈ B}.
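The defining property B* = {E \ B | B ∈ B} is easy to check computationally. The sketch below is our own illustration (the uniform matroid U_{1,3} is an assumed example, not one from the report): it lists the dual bases of a small matroid.

```python
def dual_bases(ground, bases):
    """Bases of the dual matroid M*: complements of the bases of M."""
    return {frozenset(ground - b) for b in bases}

# Uniform matroid U_{1,3}: ground set {1, 2, 3}, every singleton is a basis.
E = frozenset({1, 2, 3})
B = {frozenset({1}), frozenset({2}), frozenset({3})}

Bstar = dual_bases(E, B)
print(sorted(sorted(b) for b in Bstar))  # [[1, 2], [1, 3], [2, 3]]
# Ranks satisfy r(M) + r(M*) = |E|: here 1 + 2 = 3.
```

Since bases all have the same size, the dual bases do too, which is consistent with the rank identity noted in the comment.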
2 Index coding problem Setup

Consider the following Index Coding (IC) problem. There are n messages in the system, x1, x2, ..., xn, where xj ∈ {0, 1}^tj for j ∈ [n] and some tj. There are n receivers, where receiver j wants to obtain message xj and knows a subset of the messages a priori, denoted by x(Aj) for some Aj ⊆ [n] \ {j}. For simplicity, we will refer to j as the wanted message and to Aj as the side information of receiver j, respectively. Any instance of this problem can be specified by a side information graph G with n nodes, in which a directed edge i → j represents that receiver j has message i as side information (i ∈ Aj). Here [n] denotes the set {1, 2, ..., n}, and the set of all non-empty subsets of [n] is denoted by N.
The main difference in the system model compared to the centralised index coding problem is in the server setup. Instead of a single server which contains all messages, there are 2^n − 1 servers. For each J ∈ N, there is a server that contains all messages xj, j ∈ J, and the capacity of the broadcast link connecting server J to all receivers is denoted by CJ. Hence, we assume that there are 2^n − 1 ideal bit pipes to the receivers with arbitrary link capacities. If CJ = 1 only for J = [n] and is zero otherwise, we recover the centralised index coding problem. A special normalised symmetric case is where CJ = 1 for all J ∈ N. Server J sends a sequence yJ ∈ {0, 1}^uJ, for some uJ, to all receivers, which is a function of the messages at that server.
Figure 1: Distributed Index Coding for n=3, source: [2]
Based on the side information Aj and the received bits yJ ∈ {0, 1}^uJ from all servers, receiver j finds an estimate x̂j of the message xj. We say that a rate-capacity tuple (R, C) = ((Rj, j ∈ [n]), (CJ, J ∈ N)) is achievable if there exists r such that

Rj ≤ tj / r  and  CJ ≥ uJ / r,  for all j ∈ [n], J ∈ N.

For a given C, the capacity region C of this index coding problem is the closure of the set of achievable rate tuples R = (R1, R2, ..., Rn).
3 Outer Bounds

We generalize the polymatroidal outer bound for the centralised index coding problem, as done in [2].
If the rate-capacity tuple (R, C) is achievable, then for every T ⊆ [n] there exists a set function fT on the subsets of T such that

Rj ≤ fT(Bj ∪ {j}) − fT(Bj), j ∈ T,

where Bj = Aj ∩ T, and fT satisfies

1. fT(∅) = 0,
2. fT(T) = Σ_{J : J ∩ T ≠ ∅} CJ,
3. fT(A) ≤ fT(B) for all A ⊆ B ⊆ T,
4. fT(A ∩ B) + fT(A ∪ B) ≤ fT(A) + fT(B), for all A, B ⊆ T.

In particular, this implies

Σ_{j ∈ S} Rj ≤ Σ_{J : J ∩ T ≠ ∅} CJ,

for all S ⊆ T for which the subgraph of G induced by S does not contain a directed cycle.
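In the centralised special case (CJ = 1 only for J = [n]), the acyclic-subset condition says Σ_{j∈S} Rj ≤ 1 for every induced acyclic vertex set S, so the maximum acyclic induced subgraph (MAIS) limits the symmetric rate. The following brute-force sketch is ours (feasible only for tiny graphs) and computes the MAIS size directly:

```python
from itertools import combinations

def is_acyclic(vertices, edges):
    """Kahn's algorithm on the subgraph induced by `vertices`."""
    verts = set(vertices)
    adj = {v: [w for (u, w) in edges if u == v and w in verts] for v in verts}
    indeg = {v: 0 for v in verts}
    for v in verts:
        for w in adj[v]:
            indeg[w] += 1
    stack = [v for v in verts if indeg[v] == 0]
    seen = 0
    while stack:
        v = stack.pop()
        seen += 1
        for w in adj[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                stack.append(w)
    return seen == len(verts)

def mais(n, edges):
    """Maximum acyclic induced subgraph size, by exhaustive search."""
    for k in range(n, 0, -1):
        for S in combinations(range(n), k):
            if is_acyclic(S, edges):
                return k
    return 0

# Directed 3-cycle: any 2 vertices induce an acyclic subgraph, all 3 do not.
print(mais(3, [(0, 1), (1, 2), (2, 0)]))  # 2
```

For the directed 3-cycle this gives a symmetric-rate upper bound of 1/2 in the centralised setting.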
4 A class of Index Coding Problems with rate 1/2

Throughout this section, we use the following notation. Let [1 : m] denote {1, 2, ..., m}. For a set of vectors A, sp(A) denotes their span. For a vector space V, dim(V) denotes its dimension. An arbitrary finite field is denoted by F. A vector from the
m-dimensional vector space F^m is said to be picked at random if it is selected according to the uniform distribution on F^m.

Formally, the index coding problem (over some field F) consists of a broadcast channel which can carry symbols from F, along with the following:

• A set of T receivers.
• A source which has messages W = {Wi : i ∈ [1 : n]}, each of which is modelled as a vector over F.
• For each receiver j, a set D(j) ⊆ W denoting the set of messages demanded by receiver j.
• For each receiver j, a set S(j) ⊆ W \ D(j) denoting the set of side-information messages available at the jth receiver.
Definition 6. (Index code of symmetric rate R). An index code of symmetric rate R consists of an encoding function E at the source, which maps the n messages (each a vector of LR symbols over F) to a length-L codeword over F, along with decoding functions Dj at the receivers j ∈ [1 : T], mapping the received codeword and the side-information messages to the demanded messages D(j), i.e., Dj(E(W1, ..., Wn), S(j)) = D(j).
Remark 1. We could in general have different rates for different messages, but in this section we restrict our attention to symmetric rates. Therefore, any rate referred to in this section is the symmetric rate.
Definition 7. (Achievable rates and rate R feasibility). For a given index coding problem, a rate R is said to be achievable if there exists an index code of rate R, and the index coding problem is then said to be rate R feasible.
Definition 8. (Scalar index codes and linear index codes). If a rate R = 1/L is
achievable, the associated index code is a scalar index code of length L. If the encoding
and decoding functions are linear, then we have a linear index code.
If we have a linear index code of rate R, then we can represent the encoding function as follows:

E(W1, W2, ..., Wn) = Σ_{i=1}^{n} Vi Wi,

where each Vi is an L × LR matrix with elements from F. In scalar linear index coding, we have LR = 1. Finding a scalar linear index code of length L is equivalent to finding an assignment of these L-length vectors Vi to the n messages such that the receivers can all decode their demanded messages, i.e.,

Dj(Σ_{i=1}^{n} Vi Wi, S(j)) = D(j), ∀ j ∈ [1 : T].
Remark 2. We restrict our attention to scalar linear index codes for the rest of this section. However, we believe that our results can be extended to vector linear index codes as well.
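As a minimal concrete sketch of a scalar linear index code (our own illustration, not an example from the report): for three receivers forming a clique, where receiver i demands Wi and has the other two messages as side information, the single broadcast W1 + W2 + W3 over F2 is a valid length-1 scalar linear code.

```python
import itertools

# Scalar linear index code over F_2 (L = 1) for a 3-receiver clique:
# receiver i demands x[i] and knows the other two messages, so the
# single broadcast x0 ^ x1 ^ x2 serves everyone.
def encode(x):
    return x[0] ^ x[1] ^ x[2]

def decode(i, y, side):
    out = y
    for v in side.values():
        out ^= v              # cancel the known messages from the broadcast
    return out

for x in itertools.product([0, 1], repeat=3):
    y = encode(x)
    for i in range(3):
        side = {j: x[j] for j in range(3) if j != i}
        assert decode(i, y, side) == x[i]
print("one transmission satisfies all three receivers")
```

Here the assigned vectors are all Vi = [1], and each receiver's interfering set is empty, so the decoding condition is trivially satisfied.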
Definition 9. (Interfering sets and messages, conflicts). For some receiver j and for some message Wk ∈ D(j), let Interfk(j) = W \ ({Wk} ∪ S(j)) denote the set of messages (other than Wk) not available at receiver j. The sets Interfk(j), ∀k, are called the interfering sets at receiver j. If receiver j does not demand message Wk, then we define Interfk(j) = ∅. If a message Wi is not available at a receiver j demanding at least one message Wk ≠ Wi, then Wi is said to interfere at receiver j.

For a set of vertices A ⊆ W, let VE(A) denote the vector space spanned by the vectors assigned to the messages in A under the specific encoding function E.
Definition 10. (Resolved conflicts). For a given assignment of vectors to the messages (or equivalently, for a given encoding function E), we say that the conflicts within a subset W′ ⊆ W are resolved if

Vk ∉ VE(Interfk(j) ∩ W′), for every receiver j and every message Wk ∈ W′ demanded at j,   (2)

where Vk is the vector assigned to Wk under the encoding function E. If (2) holds for W′ = W, then all the conflicts in the given index coding problem are said to be resolved.
Lemma 2. For any encoding function E, successful decoding at the receivers is possible if and only if Vk ∉ VE(Interfk(j)) for every receiver j and every message Wk ∈ D(j).

If we can find an assignment of L-length vectors Vi to the messages Wi such that the condition in Lemma 2 is satisfied, then these vectors naturally define an index code of length L for the given index coding problem.
Definition 11. (Alignment graph and alignment sets). In the alignment graph, the vertices are the messages, and two messages Wi and Wj are connected by an edge (an alignment edge) if they both interfere at the same receiver. A connected component of the alignment graph is called an alignment set.
It is easy to see that the alignment sets define a partition of the alignment graph. Also, the messages in Interfk(j), for all messages k at all receivers j, are fully connected in the alignment graph.
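Since the alignment sets are exactly the connected components of the alignment graph, they can be computed with a union-find pass over the interfering sets. The following sketch is ours; the message indices and interfering sets are made up for illustration.

```python
def alignment_sets(n_messages, interfering_sets):
    """Connected components of the alignment graph: messages are adjacent
    when they lie in a common interfering set (they interfere at the
    same receiver), so each interfering set is merged into one component."""
    parent = list(range(n_messages))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path compression
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    for s in interfering_sets:
        s = list(s)
        for x in s[1:]:
            union(s[0], x)

    comps = {}
    for m in range(n_messages):
        comps.setdefault(find(m), []).append(m)
    return sorted(comps.values())

# Messages 0..4; sets {0,1} and {1,2} chain 0,1,2 together; 3 and 4 are alone.
print(alignment_sets(5, [{0, 1}, {1, 2}, {3}]))  # [[0, 1, 2], [3], [4]]
```

The partition property stated above corresponds to each message appearing in exactly one returned component.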
Definition 12. (Conflict graph). In the conflict graph, Wi and Wj are connected by an edge (called a conflict edge) if Wi is not available at a receiver demanding Wj, or Wj is not available at a receiver demanding Wi.

For any receiver j demanding any message Wk, Wk and Interfk(j) are connected by a hyperedge, which is denoted by {Wk, Interfk(j)}.
Lemma 3. Suppose two index coding problems, denoted by I1 and I2 , are modelled
by the same conflict hypergraph. Then any index coding solution for I1 is an index
coding solution for I2 .
Definition 14. (Internal conflict). A conflict between two messages within an alignment set is called an internal conflict.
Theorem 3. An index coding problem is rate 1/2 feasible iff there are no internal
conflicts.
Proof: Corresponding to any vertex Wk in the alignment graph, let Align(k) denote the alignment set it belongs to (this is unique, as the alignment sets partition the alignment graph). We first note that in any (scalar linear) index coding scheme for the given problem, all the vertices must be assigned non-zero vectors (a zero vector cannot be assigned to any message, as this would mean that the message cannot be decoded by any receiver).
If part: Suppose that there are no internal conflicts. We assume a large field F. For each alignment set, we independently generate a random 2 × 1 vector over F and assign it to the vertices of the alignment set. Because of the random generation, we can assume that any assigned vector is non-zero and any two assigned vectors are linearly independent with high probability. Let E denote the associated encoding function and Vk denote the vector assigned to vertex Wk. Since there are no internal conflicts, we only have to check conflicts between alignment sets. For any vertex Wk and any receiver j demanding Wk, the set Interfk(j) lies in a unique alignment set (because all the messages in Interfk(j) must be in the same alignment set), and this alignment set is different from Align(k) because there are no internal conflicts. Since any two alignment sets get independent vectors with high probability, we have that Vk ∉ VE(Interfk(j)), and the same argument holds for all receivers j and all messages Wk. Hence this assignment of vectors ensures successful decoding by Lemma 2.
Only if part: Suppose, on the contrary, that two messages k′ and k within the same alignment set Align(k) are in conflict. Because k′ and k are part of the same alignment set, there is a path from k′ to k given by an ordered set {k′, i1, ..., iN−1, k}, such that every adjacent pair of elements belongs to the interfering set of some receiver.

In some assignment corresponding to a rate 1/2 solution, let Vk′, Vi1, ..., Vk be the non-zero vectors assigned to the vertices {k′, i1, ..., iN−1, k}. We define the sets Ul = {Vil−1, Vil}, l ∈ [1 : N], where i0 = k′ and iN = k.

Suppose dim(sp(Ul)) = 2 for some l. Then, at the receiver at which the corresponding pair of messages interferes, successful decoding requires assigned vectors of length at least 3, i.e., the rate can be at most 1/3.

Therefore, for a rate 1/2 index coding assignment, every Ul should span a space of dimension 1, ∀ l ∈ [1 : N]. By Lemma 4, we should thus have dim(sp(∪l∈[1:N] Ul)) = 1. However, k′ and k are in conflict, which means that they should be assigned linearly independent vectors, i.e., dim(sp({Vk′, Vk})) = 2, which means that dim(sp(∪l∈[1:N] Ul)) > 1. This is a contradiction, and thus any internal conflict forces the rate below 1/2. This concludes the proof.
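The "if" direction above can be simulated directly: draw one random 2 × 1 vector per alignment set over a large prime field and check the decoding conditions of Lemma 2. The toy instance below is our own assumption (three messages, each its own alignment set, singleton interfering sets), which has no internal conflicts and is therefore rate 1/2 feasible.

```python
import random

# Toy instance with no internal conflicts: receiver j demands message j and
# its interfering set is a single message from a *different* alignment set.
interf = {1: [3], 2: [1], 3: [2]}
align_set = {1: 0, 2: 1, 3: 2}   # every message is its own alignment set here
p = 101                          # stands in for the proof's "large field F"

def independent(u, v):
    # two 2x1 vectors over F_p are linearly independent iff det != 0
    return (u[0] * v[1] - u[1] * v[0]) % p != 0

rng = random.Random(0)
ok = False
for _ in range(100):  # succeeds on the first draw with high probability
    vec = {s: (rng.randrange(p), rng.randrange(p))
           for s in set(align_set.values())}
    V = {m: vec[align_set[m]] for m in align_set}
    ok = all(independent(V[j], V[i]) for j in interf for i in interf[j])
    if ok:
        break
print("rate-1/2 decoding conditions hold:", ok)
```

The retry loop mirrors the "with high probability" argument: a single random draw almost always already satisfies Vk ∉ sp(Interfk(j)).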
5 Bounding optimal rate of ICSI

5.1 Introduction

Since its introduction, the problem of index coding has been generalized in a number of directions. It is a problem that has aroused much interest in recent years; from the theoretical perspective, its equivalence to network coding has established it as an important area of network information theory.
In this section we introduce some notation and definitions required for the subsequent subsections. We will assume that q is a power of a prime p, i.e., q = p^l. For any positive integer n, we define [n] := {1, 2, ..., n}. Let Fq denote the finite field with q elements and Fq^{n×t} denote the set of all n × t matrices over Fq.
5.2 Index Coding with side information

Definition 16. (Paths and circuits). A path in a digraph D is a sequence of vertices (u1, ..., uk) such that (ui, ui+1) ∈ E for all i ∈ [k − 1]. If a path is closed, i.e., (uk, u1) ∈ E, then it is called a circuit.

A (di)graph is called acyclic if it contains no circuits.
Let ν(D) be the circuit packing number of D, namely, the maximum number of vertex-disjoint circuits in D.
Lemma 5. An I(X, f)-IC of length N over Fq has a linear encoding map if and only if there exists a matrix L ∈ Fq^{N×n} such that for each i ∈ [m] there exists a vector u^(i) ∈ Fq^n satisfying

Support(u^(i)) ⊆ Xi.   (3)
Theorem 4. Let I(X, f) be an instance of the ICSI problem and let H be its hypergraph. Then the optimal length of a q-ary I(X, f)-IC is minrkq(H).
All the users forming a clique in the side information digraph can be simultaneously satisfied by transmitting the sum of their packets. This idea shows that the number of cliques required to cover all the vertices of the graph (the clique cover number) is an achievable upper bound. An acyclic (di)graph has min-rank equal to its order, and for any induced subgraph G′ of a graph G we have

minrkq(G′) ≤ minrkq(G).

Indeed, if M is a matrix that fits G, the sub-matrix of M restricted to the rows and columns indexed by the vertices in V(G′) is a matrix that fits G′. These two results together imply that the order of any acyclic induced subgraph of G is a lower bound on minrkq(G).
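Both facts can be checked by brute force for tiny digraphs: enumerate every matrix that fits G over F2 and take the minimum rank. The sketch below is ours (we use the convention that entry M[i][j] is free exactly when receiver i has message j as side information).

```python
from itertools import product

def rank_gf2(matrix):
    """Rank of a 0/1 matrix over GF(2) by Gaussian elimination."""
    rows = [list(r) for r in matrix]
    rank = 0
    for col in range(len(rows[0]) if rows else 0):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def minrank_gf2(n, knows):
    """min rank over GF(2) of matrices M with M[i][i] = 1 and M[i][j]
    free iff (i, j) in `knows` (receiver i has message j as side info)."""
    best = n
    for bits in product([0, 1], repeat=len(knows)):
        M = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
        for (i, j), b in zip(knows, bits):
            M[i][j] = b
        best = min(best, rank_gf2(M))
    return best

print(minrank_gf2(3, []))                        # acyclic: equals the order, 3
print(minrank_gf2(3, [(0, 1), (1, 2), (2, 0)]))  # directed 3-cycle: 2
```

The second line illustrates the circuit saving discussed next: a length-3 circuit drops the min-rank from 3 to 2.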
Instead of covering with cliques, one can cover the vertices with circuits. This is based on the observation that a circuit of length k in the side-information (di)graph G requires at most k − 1 transmissions to satisfy the demands of the corresponding receivers. This gives

minrkq(G) ≤ n − ν(G),

where ν(G) is the circuit packing number of G.
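The k − 1 transmissions for a length-k circuit can be taken as the XORs of consecutive messages: each receiver cancels its known neighbour, and the first receiver uses the telescoping sum. This sketch is ours, under the assumed convention that on the circuit x0 → x1 → ... → x(k−1) → x0, receiver i demands x[i] and has the previous message x[i−1] as side information.

```python
def decode(i, known_prev, tx):
    """Recover x[i] from x[i-1] and the k-1 broadcasts tx[j] = x[j]^x[j+1]."""
    if i == 0:
        # x[k-1] XOR (x0^x1) ^ ... ^ (x[k-2]^x[k-1]) telescopes to x[0]
        acc = known_prev
        for t in tx:
            acc ^= t
        return acc
    return known_prev ^ tx[i - 1]  # x[i-1] ^ (x[i-1]^x[i]) = x[i]

k = 5
x = [0, 1, 1, 0, 1]                            # example message bits
tx = [x[j] ^ x[j + 1] for j in range(k - 1)]   # only k - 1 transmissions
for i in range(k):
    assert decode(i, x[(i - 1) % k], tx) == x[i]
print("k =", k, "demands met with", len(tx), "transmissions")
```

Packing ν(G) disjoint circuits and applying this saving to each one yields exactly the bound minrkq(G) ≤ n − ν(G).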
There is also the partition multicast scheme, which outperforms the circuit-packing bound.
Theorem 6. Let G be a graph of order n. Then minrkq(G) ≤ n − min_{v∈V} degO(v), for any q > n.
5.3 Digraphs with min-rank one less than the order

Let G be a digraph and let β(G) denote the minimum number of vertices that must be removed from G in order to make it acyclic.
Lemma 6. Let G = (V, E) be a directed graph of order n such that there exist i1, i2 ∈ V with

1. (i1, i2) ∈ E and (i2, i1) ∉ E,
2. degO(i1) = 1.

Let G′ be the digraph of order n − 1 obtained from G by contracting the arc (i1, i2). Then minrkq(G) = minrkq(G′) + 1, for any q.
Proof: Let M = (mi,j) be a matrix that fits G of minimum rank. We may assume that i1 = 1 and i2 = 2, so that the first row of M is

M1 = (1, α, 0, ..., 0)

for some α ∈ Fq. If α = 0, then it is easy to check that deleting the first row and the first column of M gives a matrix that fits G′. For each vertex i ∈ V \ {1}, label the corresponding vertex in V′ by i − 1. Then construct the (n − 1) × (n − 1) matrix M′ whose ith row is obtained from the (i + 1)th row of M in the following way:
Conversely, let M′ = (m′i,j) be a matrix that fits G′ having rank minrkq(G′), and suppose the rows M′1, M′2, ..., M′minrkq(G′) are linearly independent. Let I = {j | (j, 1) ∈ E}. Set

Mi = (0, m′i−1,1, m′i−1,2, ..., m′i−1,n−1),

for i ∈ ([n] \ I) ∪ {2, ..., minrkq(G′) + 1}. For i > minrkq(G′) + 1 we have that

M′i−1 = Σ_{r=1}^{minrkq(G′)} λr M′r,

where the λr are the coefficients in the linear combination of M′i−1 with respect to the first minrkq(G′) rows of M′, and λ = Σ_{r∈I} λr − 1; if i ∉ I we set λ = 1.
Lemma 7. ([4]) Let G be a directed graph of order n such that β(G) = 2. Then minrkq(G) = n − 2, for any q > n.

Proof: Since n − β(G) ≤ minrkq(G), we need only prove that minrkq(G) ≤ n − 2.
We may suppose without loss of generality that there does not exist i ∈ V with out-degree less than 1; otherwise we can delete the node i and consider the induced subgraph. Any time that we reduce the graph G by an appropriate arc contraction, we obtain G′ with β(G′) = 2 and ν(G′) = 1. In fact, any time that we reduce the graph we only shorten the circuits that pass through the node that we delete, and we do not create any new circuit, because the out-degree of the deleted node is 1.

At the point that Lemma 6 is no longer applicable, there are two possible cases. The last case is not possible: in fact, if we consider the circuit C = (i1, i2), from β(G′) = 2 we have that there exists a circuit C′ which remains after deleting i2. Then C′ does not pass through i1, otherwise it would have to pass through i2. Then C and C′ are disjoint, but this is not possible because ν(G′) = 1.
Therefore, reducing G we obtain G′ with k fewer nodes, in which every node has out-degree at least 2.
Corollary 2. ([4]) Let G be a graph of order n and let q > n. Then minrkq(G) = n − 1 if and only if β(G) = 1.
6 Generalized Locally Repairable Codes (GLRC)

We will define a vector linear Index Code (IC) and a vector linear Generalized Locally Repairable Code (GLRC), and obtain a duality between GLRC and IC.
Definition 19. An index coding problem instance is given by n distinct messages xi, i ∈ [n], demanded by n users, where user i has as side information the messages indexed by Si ⊆ [n] and i ∉ Si. This is represented by a directed side information graph G(V, E), where each vertex represents a user and a directed edge from i to j is present if j ∈ Si.
For ease of notation, let x = [x1^T x2^T ... xn^T]^T. The objective is to design a suitable transmission scheme such that each user decodes its desired packet from the encoded transmissions and the side information packets available to it. Formally, a vector linear index code, which represents a linear transmission scheme, is defined as follows:
Definition 20. A valid (Σ, p, n, k) vector linear index code, for an index coding problem on G(V, E), is a collection of k linear encoding vectors vi ∈ Σ^{pn×1} spanning a subspace C ⊆ Σ^{pn} of dimension k such that, from the k broadcast transmissions vi^T x, all users are able to decode their respective packets using their side information via linear decoding. In other words, there are decoding functions φi with φi({vi^T x}_{i=1}^{k}, {xj}_{j∈Si}) = xi, ∀i, which are linear in all the arguments (in all the sub-symbols belonging to Σ).
The broadcast rate of the index code is given by k/p, since every channel use consists of p symbols from the alphabet Σ. The total number of transmissions is k in terms of the alphabet Σ, whereas the total number of transmissions needed if side information is not present is np. The index code C has the generator matrix V = [v1 v2 ... vk]^T, so that y = Vx is the vector containing the k encoded transmissions corresponding to the index code C. The complementary index coding problem is essentially the same as the index coding problem, except that the objective is to maximize the number of transmissions saved. The number of saved transmissions is (np − k). The complementary index code rate is given by (n − k/p), since log|Σ| bits are transmitted every channel use.
Definition 21. A (Σ, p, n, k) vector linear generalized locally repairable code (GLRC) is specified by coding vectors gij, 1 ≤ i ≤ n, 1 ≤ j ≤ p, where gij is the coding vector that determines the (i, j)th code sub-symbol.
The following duality result is proved in [3].

Theorem 7. C is a valid index code for the side information graph G iff C⊥ is a valid GLRC when G is taken as a recoverability graph.
Proof: Let the side information set of user i be Si. If C is a valid index code, then there exists a vector linear decoding function φi with φi(y, {xj}_{j∈Si}) = xi. This is true for all message vectors x with y = Vx. Let w be any vector such that y = Vw, and let x represent the actual message vector (of all n users); then z = x − w satisfies Vz = 0, i.e., z ∈ C⊥. The last step uses the linearity of φi. The decoding should work even when w is the actual message vector. Hence, φi(y, {wj}_{j∈Si}) = wi, and subtracting the two decoding identities gives φi(0, {zj}_{j∈Si}) = zi.
Since φi is linear, this implies that every sub-symbol of the ith code super-symbol is linearly dependent on the code sub-symbols indexed by Si for the dual code C⊥, since z ∈ C⊥. Hence, the dual code is a valid GLRC, proving one direction.
To prove the other direction, let us assume that for every i, 1 ≤ i ≤ n, there exists a linear local recovery function for the ith super-symbol of any codeword of C⊥ from the sub-symbols indexed by Si. For the index coding problem, let x be the message vector not known to the users prior to receiving the encoded transmissions, and let y = Vx. Given y, from the previous part of the proof, we know that x = w + z for some z ∈ C⊥, where w is a fixed vector with y = Vw that is known to all users. Each user can then recover its desired packet from the side information set Si and the encoded transmission y, for every message vector x.
We again note that the choice of w is arbitrary: for every y, the users have to pick some w such that y = Vw. Since the forward map is linear, the inverse one-to-one map ψ⁻¹(y) determining w can be made linear by fixing ψ⁻¹(ei) for all unit vectors e1, ..., ek; the linearity of the forward map then determines a candidate pre-image for all vectors y, i.e., ψ⁻¹(y) = Σ_{i=1}^{k} yi ψ⁻¹(ei). Therefore, if the φi are all linear in all the sub-symbol arguments, then the decoding functions for the index coding problem are also linear. This completes the proof.
7 Our contribution
7.1 Relation between matroid theory and GLRC
Let there be one source with n messages and n receivers. Let V = V1 ⊕ · · · ⊕ Vn ⊆ F^{kn}. This can be interpreted as saying that the demand vectors at each receiver can be recovered with the help of the side information present at that receiver. This is the same result we obtained in Section 6; the only difference is that here we have used a matroid-theoretic approach.
7.2 Results for digraphs with β(G) = 1

Lemma 8. Let G be a digraph such that β(G) = 1. Then, for a minrank solution, the column vector in the dual matrix corresponding to a node whose removal makes G acyclic cannot be zero.
Proof: Let T(G) be the set of nodes of G such that removing any node in T(G) makes G acyclic, and let v ∈ T(G). Consider an outgoing neighbour of v, say v1, and its side information set, say SIv1. The set SIv1 contains v and some other nodes. All the nodes in SIv1 which are not part of any cycle in G have zero column vectors in the dual matrix and are thus not demanded in the dual problem. Therefore, without loss of generality, SIv1 and subsequent side information sets can be assumed to contain only the nodes which are part of some cycle in G. Let x ∈ SIv1 \ {v} and consider the side information set of x, which is SIx. This set cannot contain v1, because if it did then the cycle x → v1 → x would remain after removing v from G, which is not possible. Now, take some x1 ∈ SIx \ {v} and consider its side information set SIx1. This set cannot contain v1 or x because, if it did, then the possible cycles x → x1 → x and x1 → x → v1 → x1 would survive without v, which is not possible. Now, take some x2 ∈ SIx1 \ {v} and continue the process. At each step our side information set is getting smaller, and as the number of nodes is finite, there will be a node, say xn ∈ SIxn−1 \ {v}, such that SIxn contains only v. It cannot be empty, as xn is part of some cycle in G. Let us denote xn by w, which is different from v. Hence, the column vector of w in the dual matrix depends only on the column vector of v, by the condition r∗(SI) = r∗(SI ∪ {d}).
Consider an outgoing neighbour of w, say w1, and its side information set, say SIw1. The set SIw1 contains w and some other nodes (it may or may not contain v). All the nodes in SIw1 which are not part of any cycle in G have zero column vectors in the dual matrix and are thus not demanded in the dual problem. Therefore, without loss of generality, SIw1 and subsequent side information sets can be assumed to contain only the nodes which are part of some cycle in G. Let x ∈ SIw1 \ {w, v} and consider the side information set of x, which is SIx. This set cannot contain w1, because if it did then the cycle x → w1 → x would remain after removing v from G, which is not possible. Now, take some x1 ∈ SIx \ {w, v} and consider its side information set SIx1. This set cannot contain w1 or x because, if it did, then the possible cycles x → x1 → x and x1 → x → w1 → x1 would survive without v, which is not possible. Now, take some x2 ∈ SIx1 \ {w, v} and continue the process. At each step our side information set is getting smaller, and as the number of nodes is finite, there will be a node, say xn ∈ SIxn−1 \ {w, v}, whose side information set contains only w and v; denote this node by y. By the condition r∗(SI) = r∗(SI ∪ {d}), the column vector of y in the dual matrix depends only on the column vectors of w and v; and since the column vector of w itself depends only on that of v, the column vector of y depends only on the column vector of v in the dual matrix.
Continuing this way, we get a set, say S = {w, y, ...}, such that each node in S is distinct, is part of some cycle in G, and is side information for some node. As there are only finitely many nodes, S is a finite set.
The only remaining nodes are the ones which are part of a cycle and have no side information other than v, i.e., outgoing neighbours of v with no other nodes as side information. Let u be any such node. Then the only possible cycle in this case is v → u → v. Here, clearly, the column vector of u in the dual matrix depends on the column vector of v.
Theorem 8. Let G be a digraph such that β(G) = 1. Then there exists at least one cycle, say C, in G such that the column vectors in the dual matrix corresponding to the messages in C are all non-zero for a minrank solution.
Proof: Let v be as given in the previous lemma; hence the column vector corresponding to v in the dual matrix is non-zero. We may assume that every node has some side information, because a node without any side information has a zero column vector in the dual matrix. Now, since the column vector of v in the dual matrix is non-zero, v appears in the side information set of some outgoing neighbour v1, and in turn there is some v2 in the SI of v1 such that the column vector of v2 in the dual matrix is non-zero. Continuing this way, since there is a finite number of nodes, this sequence forms a cycle, which we denote by C. This C has to pass through v, or else removing v from G would not make G acyclic. Therefore, for every node in C, the corresponding column vector in the dual matrix is non-zero.
Corollary 3. Let G be a digraph such that β(G) = 1 and let T(G) be the set defined in the proof of Lemma 8. Then the column vector corresponding to every node in T(G) is non-zero in the dual matrix, for a minrank solution.

Proof: We know that n − β(G) ≤ minrkq(G) and minrkq(G) ≤ n − ν(G). So n − 1 ≤ minrkq(G).
References

[1] Prasad Krishnan and V. Lalitha, A class of index coding problems with rate 1/3.

[2] Parastoo Sadeghi, Fatemeh Arbabjolfaei and Young-Han Kim, Distributed index coding, arXiv, Apr. 2016, https://arxiv.org/abs/1604.03204

[3] http://arxiv.org/abs/1402.3895

[4] Eimear Byrne and Marco Calderini, Bounding the optimal rate of the ICSI and ICCSI problem, arXiv, Apr. 2016, https://arxiv.org/abs/1604.05991

[5] Son Hoang Dau, Vitaly Skachek, and Yeow Meng Chee, On the security of index coding with side information, IEEE Transactions on Information Theory, 58(6):3975–3988, 2012.

[6] Thomas M. Cover and Joy A. Thomas, Elements of Information Theory, 2nd ed., Wiley, Hoboken, New Jersey, 2006.