
Raymond W. Yeung

Network Coding Research Centre, The Chinese University of Hong Kong, N.T., Hong Kong. Email: whyeung@ie.cuhk.edu.hk

Ning Cai

Department of Information Engineering, The Chinese University of Hong Kong, N.T., Hong Kong. Email: cai@mathematik.uni-bielefeld.de

Abstract— This paper discusses the relation between network coding, (classical) algebraic coding, and network error correction. In the first part, we clarify the relation between network coding and algebraic coding. By showing that the Singleton bound in algebraic coding theory is a special case of the Max-flow Min-cut bound in network coding theory, we formally establish that linear multicast and its stronger versions are network generalizations of a maximum distance separable (MDS) code. In the second part, we first give an overview of network error correction, a paradigm for error correction on networks which can be regarded as an extension of classical point-to-point error correction. Then, by means of an example, we show that an upper bound in terms of classical error-correcting codes is not tight even for a simple class of networks called regular networks. This illustrates the complexity involved in the construction of network error-correcting codes.

I. INTRODUCTION

The concept of network coding was introduced for satellite communication networks in [2] and fully developed in [3], where the term "network coding" was coined and the advantage of network coding over routing was demonstrated. The main result in [3], namely a characterization of the maximum rate at which information generated at a single source node can be multicast, can be regarded as the Max-flow Min-cut theorem for network information flow. An algorithm for constructing linear network codes that achieve the Max-flow Min-cut bound was devised in [5]. Subsequently, a more transparent proof of the existence of such linear network codes was given in [6]. For further references on the subject, we refer the reader to the Network Coding Homepage [10] and the tutorial [7].

Inspired by network coding, network error correction has been introduced in [4] as a paradigm for error correction on networks which can be regarded as an extension of classical point-to-point error correction. Specifically, the results in [4], [8], [9] are network generalizations of the fundamental bounds in classical algebraic coding theory. In this paper, we discuss the relation between network coding, algebraic coding, and network error correction. The rest of the paper is organized as follows. In Section II, we first establish that a linear network code achieving the Max-flow Min-cut bound is a network generalization of a maximum distance separable (MDS) code in classical algebraic coding [1]. This clarifies the relation between network coding and classical algebraic coding. In Section III, upon giving an overview of network error correction, we illustrate the complexity involved in the construction of network error-correcting codes by means of an example. Concluding remarks are given in Section IV.

II. THE SINGLETON BOUND AND MDS CODES

Consider the network in Fig. 1. In this network, there are three layers of nodes. The top layer consists of the source node s, the middle layer consists of n nodes each connected to node s, and the bottom layer consists of (n choose r) nodes, each connected to a distinct r-subset of the nodes on the middle layer. We call this network an (n, r) combination network, or simply an (n, r) network, where r <= n. Assume that a message consisting of k information symbols taken from a finite field F is generated at the source node s, and that each channel can transmit one symbol in F in the specified direction. A k-dimensional linear network code on a given network is qualified as a linear multicast [7] if every non-source node v with

maxflow(v) >= k,    (1)

where maxflow(v) denotes the value of a maximum flow from s to v, can decode the source message. Note that, by the Max-flow Min-cut theorem, (1) is a necessary condition for any node to be able to decode the source message.

Consider an (n, k) classical linear block code with minimum distance d and regard it as a linear network code on the (n, n - d + 1) network as follows. The code takes the k source message symbols as input and outputs n symbols, each being transmitted on one of the n outgoing channels of node s. For each node on the middle layer, since there is only one input channel, we assume without loss of generality that the symbol received is replicated and transmitted on each outgoing channel. Since the code has minimum distance d, each node on the bottom layer can decode the source message by accessing a subset of n - d + 1 of the nodes on the middle layer (corresponding to d - 1 erasures). On the other hand, the non-source nodes in the network with maximum flow at least equal to k are simply the nodes on the bottom layer, each with maximum flow n - d + 1. Since each of them can decode the source message, it follows from (1) that

n - d + 1 >= k,    (2)

or

d <= n - k + 1,    (3)

which is precisely the Singleton bound for classical linear block codes [1]. Thus the Singleton bound is a special case of the Max-flow Min-cut theorem. A classical linear block code achieving tightness in the Singleton bound is called a maximum distance separable (MDS) code [1]. From the foregoing, we conclude that an (n, k) classical linear block code with minimum distance d = n - k + 1, i.e., an MDS code, is a k-dimensional linear multicast on the (n, k) network. More generally, it is readily seen that an (n, k) classical linear block code with minimum distance d is a k-dimensional linear multicast on the (n, r) network for all r >= n - d + 1. Thus the existence of MDS codes corresponds, in the more general paradigm of network coding, to the existence of linear multicasts and their stronger versions. In [7], linear broadcast, linear dispersion, and generic linear network code are defined as linear network codes possessing stronger properties than linear multicast: a linear multicast, broadcast, or dispersion achieves tightness in the Max-flow Min-cut theorem to a different extent. These stronger linear network codes are useful for various applications, and they can all be regarded as network generalizations of an MDS code. This has been discussed in great detail in [7].

Fig. 1. An (n, r) combination network.
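The correspondence between an MDS code and a linear multicast on the combination network can be checked numerically. The following sketch (illustrative only; the field GF(7) and the parameters n = 4, k = 2 are our own choices, not from the paper) encodes a message with a Reed-Solomon code, which is MDS, and verifies that every bottom-layer node of the (n, k) combination network, seeing a distinct k-subset of the middle layer, recovers the message by Lagrange-style interpolation:

```python
from itertools import combinations

P = 7  # prime field GF(7); toy parameters, not from the paper

def rs_encode(msg, n):
    """Evaluate the message polynomial at points 0..n-1 (a Reed-Solomon
    code, which is MDS: minimum distance d = n - k + 1)."""
    return [sum(c * pow(x, i, P) for i, c in enumerate(msg)) % P
            for x in range(n)]

def interpolate(points, k):
    """Recover the k message coefficients from any k evaluation points --
    this is what a bottom-layer node does (Gaussian elimination on the
    k x k Vandermonde system over GF(7))."""
    xs, ys = zip(*points)
    rows = [[pow(x, i, P) for i in range(k)] + [y] for x, y in zip(xs, ys)]
    for col in range(k):
        piv = next(r for r in range(col, k) if rows[r][col] % P)
        rows[col], rows[piv] = rows[piv], rows[col]
        inv = pow(rows[col][col], P - 2, P)          # Fermat inverse
        rows[col] = [v * inv % P for v in rows[col]]
        for r in range(k):
            if r != col and rows[r][col]:
                f = rows[r][col]
                rows[r] = [(a - f * b) % P for a, b in zip(rows[r], rows[col])]
    return [rows[i][k] for i in range(k)]

n, k = 4, 2
msg = [3, 5]                  # k information symbols at the source node s
codeword = rs_encode(msg, n)  # one symbol per middle-layer node
# Every bottom-layer node of the (n, k) combination network accesses a
# distinct k-subset of the middle layer -- and each one decodes the message.
for subset in combinations(range(n), k):
    received = [(x, codeword[x]) for x in subset]
    assert interpolate(received, k) == msg
print("all", len(list(combinations(range(n), k))), "sink nodes decode", msg)
```

Here every sink sees only k = n - d + 1 symbols, matching the erasure argument above: the minimum distance d = 3 tolerates d - 1 = 2 erasures.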
III. NETWORK ERROR CORRECTION

Inspired by network coding, network error correction has been introduced in [4] for multicasting a source message to a set of sink nodes on a network when the communication channels are not error-free. The usual approach in existing networks, namely link-by-link error correction, is a special case of network error correction.

Specifically, network generalizations of the Hamming bound, the Singleton bound, and the Gilbert-Varshamov bound in classical algebraic coding have been obtained in [8], [9]. In particular, the tightness of the Singleton bound is preserved in the network setting, meaning that linear network codes are asymptotically optimal. We refer the reader to [8], [9] for the details. In this section, we discuss an upper bound obtained in [8] which is given in terms of bounds defined for classical error-correcting codes. By means of an example, we will show that this bound is not tight even for a simple class of networks called regular networks. This illustrates the complexity involved in the construction of network error-correcting codes.

Let us first describe the setup of network error correction. An acyclic communication network is represented by a directed acyclic graph G = (V, E), where V is the node set and E is the channel set, and multiple channels between a pair of nodes are allowed. On each channel, one symbol from a code alphabet F can be transmitted in the specified direction. A message taken from a source alphabet X is generated at the source node s, which is to be multicast to a set of sink nodes U. A network code on G is defined in the usual way (see for example [8]); for a network code, the symbol transmitted on channel e when the message is x is denoted by f_e(x).

Definition 1: A network code on G is t-error-correcting if it can correct all t'-errors for t' <= t, i.e., if the total number of errors in the network is at most t, then the source message can be recovered by all the sink nodes in U.

Since G is acyclic, it naturally defines a partial order on the channel set E. Two channels e and e' are said to be incompatible if there exists no path either from e to e' or from e' to e. A set of channels A is called an antichain if the channels in A are pairwise incompatible.

Definition 2: For a partition {V1, V2} of the node set V with s in V1 and a sink node u in V2, the set of channels from V1 to V2 is a cut between s and u; such a cut is a regular cut if its members form an antichain.

Definition 3: An acyclic network is regular if every cut between the source node s and a sink node is a regular cut.

For a t-error-correcting network code on a given network, we are naturally interested in the maximum possible value of |X|, the size of the source alphabet. The following theorem renders an upper bound on |X|.

Theorem 1: [8] Let a t-error-correcting code for an acyclic network with source alphabet X and code alphabet F be given. i) If R is a regular cut between the source node s and a sink node u, then the set of all possible vectors transmitted across R forms a classical t-error-correcting code with alphabet F, and consequently |X| <= A(m_u, t); ii) in the case that the code is linear, |X| <= A_L(m_u, t). Here m_u is the minimum volume of a regular cut between s and u, and A(m, t) and A_L(m, t) are the size of an optimal classical |F|-ary t-error-correcting code of length m and the size of an optimal classical linear |F|-ary t-error-correcting code of length m, respectively.

The upper bound rendered in the above theorem is in terms of bounds defined for classical error-correcting codes. Since the errors occurring at the channels across any cut in a regular network do not interfere with each other (because the channels form an antichain), one may conjecture that this upper bound on |X| is generally tight for regular networks. The following example, however, shows the contrary.
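The quantities A(m, t) in Theorem 1 can be found by exhaustive search for tiny lengths. The sketch below (illustrative only; brute force is exponential in m) computes the size of an optimal binary code of length m with minimum distance d, where a t-error-correcting code needs d >= 2t + 1:

```python
from itertools import combinations, product

def hamming(a, b):
    """Hamming distance between two binary tuples."""
    return sum(x != y for x, y in zip(a, b))

def max_code_size(m, d):
    """Size of an optimal binary code of length m with minimum distance d,
    i.e. the quantity A(m, t) for d = 2t + 1 (exhaustive search)."""
    words = list(product([0, 1], repeat=m))
    for size in range(len(words), 0, -1):
        for code in combinations(words, size):
            if all(hamming(a, b) >= d for a, b in combinations(code, 2)):
                return size
    return 0

# A 1-error-correcting code needs minimum distance 2t + 1 = 3.
print(max_code_size(3, 3))  # 2: the (3, 1) repetition code {000, 111}
print(max_code_size(2, 3))  # 1: no binary (2, 1) code corrects one error
```

The value 2 for length 3 agrees with the classical Hamming bound, 2^3 / (1 + 3) = 2; the values 2 and 1 are exactly the ones used in Example 1 below.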

Example 1: Consider the network in Fig. 2, in which the source node s multicasts to two sink nodes u1 and u2. Every cut between s and a sink node forms an antichain, so the network is regular, and it is easy to verify that the minimum volume of a regular cut between s and each sink node is 3. In light of the existence of a classical binary 1-error-correcting (3, 1) code (of size 2), if the bounds in Theorem 1 were tight, then there would exist a binary 1-error-correcting network code

f_e, e in E,    (4)

that multicasts a message from the binary source alphabet {0, 1}.

We observe that, for a particular network code, a channel can be removed if its encoding function can take only one value, because such a channel does not convey any information. For the network in Fig. 2, if the encoding function of any channel can take only one value, then by removing that channel from the network, we find a sink node such that the minimum cut between the source node and this sink node is reduced to 2. This contradicts Theorem 1 because of the nonexistence of a binary (2, 1) code that can correct 1 error. This means that the encoding functions of all the channels must take two values. Moreover, the encoding function of a channel whose input node has in-degree one must be a bijection, so we may assume without loss of generality that it is the identity function. The only encoding function left to be chosen is that of the channel f, whose first and second arguments are the outputs of the two channels entering its input node. We will show that there is no way to choose this function such that the code is able to correct 1 error.

Assume that the network code in (4) is 1-error-correcting; we will show that this leads to a contradiction. Consider in turn a regular cut between s and u1 and a regular cut between s and u2, and compare the vectors of symbols transmitted across these cuts when the source message is 0 or 1 and a single error occurs at one of the channels crossing the cut (by symmetry, one can exchange the roles of 0 and 1 componentwise, which halves the cases to be checked). For every choice of the encoding function of f, some such vector arising from source message 0 coincides with one arising from source message 1, so that the corresponding sink node cannot distinguish the source messages 0 and 1.

Fig. 2. The network for Example 1.
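The contradiction argument enumerates all messages and all single-channel error patterns, which is exactly how Definition 1 can be checked mechanically on small instances. A minimal sketch on an assumed toy topology (three parallel channels from s to one sink carrying the (3, 1) repetition code; this is not the network of Fig. 2):

```python
from itertools import product

# Assumed toy network: three parallel channels s -> u, each carrying one
# bit of the (3, 1) repetition code.
def encode(message):
    return [message] * 3

def decode(received):
    return int(sum(received) >= 2)   # majority vote at the sink

def is_1_error_correcting():
    """Check Definition 1 by brute force: for every message and every
    error pattern affecting at most t = 1 channel, the sink must recover
    the message."""
    for message in (0, 1):
        sent = encode(message)
        for errs in product((0, 1), repeat=3):
            if sum(errs) > 1:        # more than t channels in error
                continue
            received = [b ^ e for b, e in zip(sent, errs)]
            if decode(received) != message:
                return False
    return True

print(is_1_error_correcting())       # True
```

On the parallel-channel network the check succeeds because the three channels form a regular cut and the repetition code has minimum distance 3; the point of Example 1 is that on the network of Fig. 2 the analogous exhaustive check fails for every admissible choice of encoding functions.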

By Theorem 1(i), the set of all possible vectors transmitted across a regular cut between s and a sink node forms a classical 1-error-correcting code, so its minimum distance must be at least 3, a contradiction. Therefore, the assumption that the network code in (4) is 1-error-correcting is incorrect, and we conclude that there exists no binary 1-error-correcting network code for this network that can transmit 1 bit. This in turn shows that the upper bound in Theorem 1 is not tight.

IV. CONCLUDING REMARKS

We have clarified the relation between network coding and algebraic coding. We have also given an overview of network error correction, a paradigm for error correction on networks and an extension of classical point-to-point error correction, and discussed the complexity involved in the construction of network error-correcting codes.

ACKNOWLEDGMENT

The work of Raymond W. Yeung was partially supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (RGC Ref. No. CUHK4214/03E).

REFERENCES

[1] R. C. Singleton, "Maximum distance Q-nary codes," IEEE Trans. Inform. Theory, IT-10: 116-118, 1964.
[2] R. W. Yeung and Z. Zhang, "Distributed source coding for satellite communications," IEEE Trans. Inform. Theory, IT-45: 1111-1120, 1999.
[3] R. Ahlswede, N. Cai, S.-Y. R. Li, and R. W. Yeung, "Network information flow," IEEE Trans. Inform. Theory, IT-46: 1204-1216, 2000.
[4] N. Cai and R. W. Yeung, "Network coding and error correction," IEEE Information Theory Workshop, Bangalore, India, Oct 20-25, 2002.
[5] S.-Y. R. Li, R. W. Yeung, and N. Cai, "Linear network coding," IEEE Trans. Inform. Theory, IT-49: 371-381, 2003.
[6] R. Koetter and M. Médard, "An algebraic approach to network coding," IEEE/ACM Transactions on Networking, vol. 11, 782-795, 2003.
[7] R. W. Yeung, S.-Y. R. Li, N. Cai, and Z. Zhang, "Theory of network coding," to appear in Foundations and Trends in Communications and Information Theory.
[8] R. W. Yeung and N. Cai, "Network error correction, Part I: Basic concepts and upper bounds," submitted to Communications in Information and Systems, http://www.ims.cuhk.edu.hk/~cis
[9] N. Cai and R. W. Yeung, "Network error correction, Part II: Lower bounds," submitted to Communications in Information and Systems, http://www.ims.cuhk.edu.hk/~cis
[10] Network Coding Homepage, http://www.networkcoding.info
