VI. CONCLUSION

We have investigated super channels consisting of renewal inner channels and constrained codes. Three sets of recursions representing six widely used modulation codes have been developed to obtain the error-free runs of such super channels. It should be noted that these recursions representing the error-free runs of the super channels are fundamentally exact expressions, irrespective of the renewal/nonrenewal nature of the super channel.

We have furthermore experimentally investigated the hypothesis that Fritchman partitioned Markov chains can also be used to model the super channels, thus implicitly assuming that the latter can be modeled with adequate numerical precision by renewal processes. While we are unable to analytically prove this hypothesis, in all the experimental investigations, we found that Fritchman partitioned …

…between 1/4 and 7/9, and complete memory (M = k), as well as 12 new linear block codes. Most of the new codes found have maximum free distance.

Index Terms-Combinatorial optimization, unit-memory codes, column selection problem, column matching problem.

Manuscript received May 3, 1990; revised March 12, 1992. This work was supported by Conselho Nacional de Desenvolvimento Científico e Tecnológico-CNPq, Brazil, under Grant 301416/85-0. This work was presented in part at the 4th Joint Swedish-Soviet International Workshop on Information Theory, Gotland, Sweden, August 27-September 1, 1989.
The authors are with the Faculty of Electrical Engineering, State University of Campinas-UNICAMP, 13081-970, Campinas, SP, Brazil.
IEEE Log Number 9207884.

I. INTRODUCTION

It is known that unit-memory convolutional codes (UMC) may have a better error correcting capability than the usual multimemory convolutional codes with the same rate and number of elements [1]-[3]. However, each UMC is defined from a larger number of elements than the corresponding multimemory code; and even for a small number of memory elements, finding these codes by exhaustive search may be practically impossible. As a consequence,
few complete memory UMC's with maximum free distance are known.

This correspondence introduces a UMC design method based on mathematical optimization models with the purpose of providing a systematic way of finding these codes. To allow the use of standard optimization techniques, we looked for equivalent models with desirable properties, such as convexity and linearity in the real field. It is shown that the problem can be decomposed into two subproblems which can be formulated as standard optimization problems with integer variables. However, even in standard form, the complexity in solving these optimization problems is large because their variables are constrained to integer values (NP-type problems [7]).

To reduce the search effort, we propose two local search algorithms, also called heuristic algorithms, that are very efficient in finding good codes. The first algorithm employed in the search for good codes is concerned with the column types selection problem (Section III). All linear block codes (BC) used in the generation of the (n, k) UMC's came from this algorithm. To show the importance of this algorithm, we tabulate 6 new linear block codes: (32,8,13), (60,8,27), (73,8,33), (76,8,35), (43,9,17), and (46,9,19). Since these codes have odd Hamming distances, the corresponding extended codes (33,8,14), (61,8,28), (74,8,34), (77,8,36), (44,9,18), and (47,9,20) are easily determined by the extension method [12, Ch. 1, pp. 27-28].

The second algorithm is concerned with the column types matching problem (Section IV). This is essentially an inherent problem of linear trellis codes, which in particular has to do with unit-memory codes. Computational results show that these algorithms provide (n, k) UMC's with maximum free distance for a fairly wide range of rates and code dimensions. These successful results indicate that this modeling approach may be a powerful tool to code design.

A time-invariant (n, k) UMC, with input x(t) = (x1, x2, ..., xk) and output y(t) = (y1, y2, ..., yn) at discrete time t, is represented by the encoding rule

y(t) = x(t)G0 + x(t-1)G1, with x(t) = 0 for t < 0.   (1)

The code is completely defined by the k x n matrices G0 and G1. Applying the polynomial transform to (1), we obtain

y(D) = x(D)G(D),   (2)

where

G(D) = G0 + D G1.   (3)

We call x(t) and y(t) the input and output bytes, respectively. For the purpose of code design, a codeword of length m is defined as a sequence of m output bytes or, equivalently, a path in the trellis generated by the encoder, starting in the zero state and returning to the zero state for the first time after m input bytes. If a given path in the trellis returns to the zero state more than once, we assume that at least two codewords are generated. For the sake of simplicity, we consider binary encoders with M memory elements, such that M = k, where k is the number of input bits. The encoder states are numbered from 0 to N = 2^k - 1. We use dfree to designate the free distance of a specific (n, k) UMC, while d̂free is the maximum free distance of all the (n, k) UMC's. Of course, d̂free is not known in advance; but since we found that in many cases it is equal to a known upper bound, we also use d̃free to indicate the estimate of the maximum free distance used by the search algorithms.

The distance profile of the UMC is the sequence (ρ2, ρ3, ..., ρm, ...) with the minimum weight of all codewords of length m (called extended row distance profile, and denoted d̂m in [2]). Thommesen and Justesen [2] show that the set of error patterns which can always be corrected with Viterbi decoding depends on the distance profile. Let us also define u_k(i) as the k-dimensional column vector with elements equal to the binary representation of i. For example, u_4(13) = [1101]^t. These vectors, with elements from GF(2), will be used as columns in the generator matrices, G0 and G1, and called generator matrices column types.

We define, from higher to lower priority, the goals to be achieved in the search for good codes as:
• catastrophic codes elimination;
• free distance maximization;
• minimization of the number of trellis paths corresponding to dfree;
• maximization of distance profile.

These goals are based on the assumption that maximum likelihood decoding (Viterbi algorithm, [9]) will be used (sequential decoding may require different priorities). The last goal needs a more rigorous definition because the distance profile is a sequence of numbers, and it may not be possible to maximize the value of all elements of the sequence. This definition is presented later where it can be more clearly understood.

B. Decomposition Strategy

A necessary condition for a UMC to have dfree = d̂free is given by the following lemma.

Lemma 1: For a UMC with generator matrix G(D) = G0 + D G1, the codewords of length 2 (paths leaving the zero state in one transition and returning to the zero state in the next transition) are equal to the codewords of a BC with generator matrix [G0 G1]. Hence, the free distance of the UMC cannot be greater than the minimum distance of this block code. □

An immediate consequence of this lemma is that d̂free is upper bounded by the maximum minimum-distance of all linear (2n, k) BC's, as tabulated by Verhoeff [10]. In fact, d̂free is quite frequently equal to this upper bound. This lemma also indicates a relationship between the (2n, k) BC's and the (n, k) UMC's: we can constrain the search for optimal generator matrices G0 and G1 by using the generator matrices of BC's with minimum distance equal to or greater than d̃free, as shown below.

Note that while the minimum distance of a BC does not depend on how the columns are allocated in the generator matrix, the distance properties of a convolutional code do depend on the pairwise combination of the columns of G0 and G1, which in turn defines the columns of G(D). For example, if the generator matrix of the (10,4) BC is formed with the column types {ti} shown below, then its minimum distance is 4:

t1 = [1000]^t, t2 = [0100]^t, t3 = [1100]^t, t4 = [1010]^t, t5 = [0110]^t, t6 = [1110]^t, t7 = [1001]^t, t8 = [1101]^t, t9 = [1011]^t, t10 = [1111]^t.

If these ten column types are matched in five ordered pairs, such as

(t1, t9), (t2, t4), (t5, t6), (t7, t3), (t8, t10),
1102 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 39, NO. 3, MAY 1993
then these pairs may form the following generator matrix of a (5,4) UMC with dfree = d̂free = 4:

G(D) = [1+D   D    D   1+D  1+D]
       [ 0    1   1+D   D   1+D]
       [ D    D   1+D   0    D ]
       [ D    0    0    1   1+D]

We can explore these facts and decompose the UMC design problem into two subproblems:
1. find the sets of generator matrix columns that form distinct (2n, k) BC's with minimum distance greater than or equal to d̃free (column types selection problem);
2. use these sets of columns to find the pairwise combination of the columns in G0 and G1 that forms a UMC with dfree = d̃free (column types matching problem).

For example, if k = 3, we have

M = [0 0 0 1 1 1 1]
    [0 1 1 0 0 1 1]
    [1 0 1 0 1 0 1].

For this case, the vector c = (c1, c2, c3, c4, c5, c6, c7) is such that ci = 1 for 1 ≤ i ≤ 7, and M is the parity-check matrix of the (7,4) Hamming code. Note that matrix M contains every one of the column types, whereas its transpose, M^t, contains all the nonzero input bytes. Also note that the elements of the product M^t ⊙ M, taken with GF(2) scalar products, are the single digit results of all the binary scalar products that form the codewords. The N x N matrix H defined by H = M^t ⊙ M can be used to calculate the vector of Hamming weights w, as stated by the following theorem [5], [6].
Theorem 1 (MacDonald): A list of the Hamming weights of all nonzero codewords of a block code represented by the vector c is equal to the components of the N-dimensional vector w, which can be calculated using real numbers arithmetic from

w = Hc.   (4)

Proof: The element Hij is the result of the scalar product over GF(2) between the ith input byte u(i) and the jth column type. Hence, if the generator matrix has cj columns of type j, then the real product Hij cj is the contribution of these columns to the ith codeword weight. Summing all these terms up results in (4). □

We can use this result to search for codes which have no codeword with weight less than d̃free. Formally, c should satisfy the constraints

Hc ≥ d̃free 1,   (5)

where 1 is the N-dimensional vector of all ones and the vector inequality represents term-by-term inequalities.

This linear set of constraints allows the formulation of the design problem as a standard optimization problem. The objective function to be minimized should add a penalty for each occurrence of a codeword with weight violating constraints (5), and when all those constraints are satisfied, it should count the number of codewords with minimum weight.

If we aim to find the optimum UMC, then we have to search for all BC's in the column types selection problem, which still may be impractical. On the other hand, if we aim at finding good UMC codes then we may search only for a few BC generator matrices. In fact, all our computational results were obtained using at most six different block codes for each UMC. The advantage is that already known BC's can be used or, when searching for new BC's, the computational effort to find the minimum distance of a (2n, k) BC is much smaller than the effort to find the free distance of the UMC.

When the minimum distance of the block code is equal to d̃free, that is, ρ2 = d̃free and ρm > d̃free, m ≥ 3, m an integer, the BC and the corresponding UMC have the same number of minimum weight codewords. Hence, in some cases, we can minimize the number of minimum weight UMC trellis paths simply by choosing a BC with the smallest number of minimum-weight codewords.

There are three main reasons to look for a new method to design block codes: the known code may not have the minimum number of codewords with minimum weight; only one good code is known while we should try several codes to construct the UMC; and we wanted to test the efficiency of combinatorial optimization to solve this kind of problem. Actually, the proposed algorithm provided new BC's that improved the lower bounds on Verhoeff's minimum distance table [10] and was efficient in finding more than one code with the desired minimum distance. It turned out to be a simple, versatile, and practical tool.

If we set d̃free as a parameter, then one function that can be used in the objective function is
TABLE I
SOME PARAMETERS OF THE NEW UNIT-MEMORY CODES
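Theorem 1 lends itself to a direct numerical check. The sketch below is an illustration, not code from the paper: it builds M for k = 3, forms H from the GF(2) scalar products, and evaluates w = Hc for c = (1, ..., 1), so the generated code is the (7,3) simplex code, whose nonzero codewords all have weight 2^(k-1) = 4.

```python
# Numerical check of Theorem 1 (illustration): k = 3, one column of each
# nonzero type, so the generated code is the (7,3) simplex code.
k = 3
N = 2 ** k - 1  # number of nonzero input bytes / column types

def u(i):
    """k-dimensional binary representation of i (most significant bit first)."""
    return [(i >> (k - 1 - b)) & 1 for b in range(k)]

# H[i][j]: scalar product over GF(2) of the ith input byte and jth column type
H = [[sum(a * b for a, b in zip(u(i), u(j))) % 2 for j in range(1, N + 1)]
     for i in range(1, N + 1)]

c = [1] * N                      # generator matrix uses each column type once
w = [sum(H[i][j] * c[j] for j in range(N)) for i in range(N)]  # w = Hc, (4)
print(w)  # each nonzero simplex codeword has weight 2**(k-1) = 4
```

Note that the weights come out in ordinary integer arithmetic, exactly as the theorem's "real numbers arithmetic" phrasing suggests.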
B. Local Search Algorithm

Let the vector function δ(i,j) = (δ1(i,j), δ2(i,j), ..., δN(i,j)) be defined for i ≠ j by

δp(i,j) = -1 if p = i,  +1 if p = j,  0 otherwise.   (11)

If a solution vector c is feasible to Problem (P1) [it satisfies constraints (9) and (10)], and if ci > 0, then each vector δ(i,j) represents the least change that maintains the feasibility of c + δ(i,j). It may be interpreted as a column type substitution in the solution set (column type j in and i out). These vectors can be used in a local search to detect the steepest descent feasible direction in the neighborhood of a given solution, as detailed in Algorithm I.

Algorithm I-Column Types Selection Problem:
1) choose an initial solution c satisfying constraints (9) and (10); a random selection can provide fairly good initial solutions;
2) find i ∉ I such that ci ≥ 1 and j ∉ O that minimize the objective function of Problem (P1): π(c + δ(i,j));
3) update the solution vector c ← c + δ(i,j) and the tabu lists I and O; save c if it is better than the best solution already found;
4) if a "good solution" c is found, satisfying at least constraint (5), then stop Algorithm I; otherwise return to step 2.

In the last step of Algorithm I, we have used the rather vague term "good solution" to allow considering the number of codewords with minimum weight. Algorithm I can be interpreted as a progressive enhancement of a BC via column types substitutions. It is expected to give better results than, for example, substitution of rows or single elements of the generator matrix because it is based on the properties of Problem (P1).
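The move structure of Algorithm I can be sketched as follows. This is a toy illustration, not the authors' implementation: constraints (9)-(10) and the objective π are not fully recoverable from the text, so the sketch assumes (9) fixes the total number of selected columns, (10) requires nonnegative integer cj, and π penalizes weight deficits below a target distance d_t before counting minimum-weight codewords.

```python
import random

# Toy sketch of Algorithm I (column types selection) with tabu lists.
# Assumptions: (9) is read as sum(c) == n, (10) as nonnegative integer c_j,
# and the surrogate objective pi() penalizes codeword weights below d_t.
k, n, d_t = 3, 7, 4
N = 2 ** k - 1

def vec(i):  # binary representation of i as a k-bit vector
    return [(i >> (k - 1 - b)) & 1 for b in range(k)]

H = [[sum(a * b for a, b in zip(vec(i), vec(j))) % 2 for j in range(1, N + 1)]
     for i in range(1, N + 1)]  # GF(2) scalar products, as in Theorem 1

def pi(c):  # surrogate objective: infeasibility penalty, then path count
    w = [sum(h * cj for h, cj in zip(row, c)) for row in H]
    deficit = sum(d_t - wi for wi in w if wi < d_t)
    return 1000 * deficit if deficit else w.count(min(w))

def algorithm_I(L=2, iters=100, seed=1):
    rng = random.Random(seed)
    c = [0] * N
    for _ in range(n):
        c[rng.randrange(N)] += 1          # random initial solution, sum == n
    tabu_in, tabu_out = [], []            # lists I and O (last L moves)
    best, best_val = c[:], pi(c)
    for _ in range(iters):
        moves = [(i, j) for i in range(N) if c[i] >= 1 and i not in tabu_in
                 for j in range(N) if j != i and j not in tabu_out]
        if not moves:
            break
        def after(m):  # c + delta(i, j): type m[0] out, type m[1] in
            return [ck - (p == m[0]) + (p == m[1]) for p, ck in enumerate(c)]
        i, j = min(moves, key=lambda m: pi(after(m)))
        c = after((i, j))
        tabu_in, tabu_out = (tabu_in + [j])[-L:], (tabu_out + [i])[-L:]
        if pi(c) < best_val:
            best, best_val = c[:], pi(c)
    return best, best_val

best, val = algorithm_I()
print(best, val)
```

Every move preserves the column count, so feasibility of (9) is maintained exactly as the δ(i,j) substitution in (11) intends.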
Due to constraints (10), Problem (P1) is not convex. So, a steepest descent search algorithm can be trapped by the local minima in the objective function, without ever reaching the global minimum. But there are methods that can be used in the search algorithm to overcome this problem. We have chosen "tabu lists" [8]. The objective of these lists is to avoid cycling in a nonmonotonic search. Two "tabu" lists I and O are defined, that is, I and O contain the last L column types taken "in" and "out" of the set of selected column types, respectively. (The length of the lists, L, was chosen between …)

IV. THE COLUMN TYPES MATCHING PROBLEM

A. Problem Formulation

From the solution provided by Algorithm I, we can construct the set {t1, t2, ..., t2n}, where each ti corresponds to a selected column type. The variables of the matching problem can be represented by a 2n x 2n matrix A such that
TABLE II
GENERATOR MATRIX ROWS (IN HEXADECIMAL NOTATION)

k  n   G0
       G1

5  7   43 21 13 09 06
       25 5C 51 33 3A
5  8   87 46 27 15 0B
       4C AD 31 A6 FF
5  9   107 08B 045 02F 01E
       1DC 039 077 143 0F2
5  11  43C 21B 119 0B7 06A
       6B2 1F6 56B 32A 317
5  12  82B 46E 279 10E 0B4
       063 B2C CB8 E9B 050
5  20  8780B 441F0 262B6 14F1E 0D7AB
       5047E EC781 472E7 24F6C 9614B
6  9   107 085 045 026 017 00B
       040 15A 0AE 1E8 194 18B
6  10  200 10C 08B 046 02F 015
       226 0ED 279 1F1 10B 1DE
6  11  40E 215 100 096 059 03C
       6E5 3FC 273 503 46E 169
6  13  107F 0878 0427 0253 014B 0090
       14CF 1D25 0756 13A6 06F8 0B0F
6  17  10713 08540 046F2 0279C 016A8 00A6B
       1C768 1E0BB 0958E 13F02 1280E 1333A
6  18  20BB5 100C5 086EC 04E96 02ACB 01B5C
       32574 2781F 3B1CE 29BB1 176A1 0FE48
6  24  839E7E 434AE0 233045 1100BB 0B0731 05BBA6
       239287 E82C90 2F6F0A B67802 0091FC 585A3B
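Each Table II entry can be unpacked into G0 and G1 and checked against Lemma 1. The sketch below is an illustration (the trellis search is a plain dynamic program, not the authors' code): it decodes the (5,7) entry and computes its distance profile; by Lemma 1, ρ2 must equal the minimum distance of the block code [G0 G1].

```python
from itertools import product

# Illustration: unpack the (5,7) Table II entry and run a trellis search.
k, n = 5, 7
G0_hex = ["43", "21", "13", "09", "06"]
G1_hex = ["25", "5C", "51", "33", "3A"]

def rows(hx):  # each hex value packs one n-bit row of a generator matrix
    return [[(int(h, 16) >> (n - 1 - b)) & 1 for b in range(n)] for h in hx]

G0, G1 = rows(G0_hex), rows(G1_hex)
zero = (0,) * k
msgs = [x for x in product((0, 1), repeat=k) if x != zero]

def wt(x, s):  # weight of the output byte y(t) = x(t)G0 + x(t-1)G1 over GF(2)
    return sum((sum(x[a] * G0[a][b] for a in range(k)) +
                sum(s[a] * G1[a][b] for a in range(k))) % 2 for b in range(n))

def profile(m_max):
    best = {x: wt(x, zero) for x in msgs}  # paths that just left the zero state
    prof = []
    for _ in range(2, m_max + 1):
        # close each path with a zero input byte, giving a length-m codeword
        prof.append(min(w + wt(zero, s) for s, w in best.items()))
        # extend with nonzero bytes only (a zero byte would end the path)
        best = {x: min(w + wt(x, s) for s, w in best.items()) for x in msgs}
    return prof

prof = profile(6)                                    # (rho_2, ..., rho_6)
d_bc = min(wt(x, zero) + wt(zero, x) for x in msgs)  # min distance of [G0 G1]
print(prof, d_bc)
assert prof[0] == d_bc  # Lemma 1: length-2 codewords are the BC codewords
```

The state of the encoder is simply the previous input byte, which is what makes this first-return search over the UMC trellis so compact.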
…evaluation function that adds a penalty whenever ρm(A) < rm. A generalization of function (6) can be defined as

for i, j = 1, 2, ..., 2n.   (15)

Constraints (14) state that each column type cannot be simultaneously in matrices G0 and G1, and should be matched to only one

Fig. 1. Calculation of Problem (P2) objective function.

number of paths with minimum weight as a byproduct. Equation (13) gives the exact definition to the expression "maximization of the distance profile."

As an example of the application of this objective function, let us consider Fig. 1, with the result of the Viterbi algorithm applied to the code generated by a particular matching. The dark lines represent
TABLE III
GENERATOR MATRIX ROWS (IN HEXADECIMAL NOTATION)

k  n   G0
       G1

paths in the trellis corresponding to the codewords of minimum weight. Observe that there are two paths with length 2 and weight 5, and three paths with length 3 and weight 6 (ρ2(A) = 5, ρ3(A) = 6, η2(A) = 2, η3(A) = 3). Using r2 = 7, r3 = 8, a2 = 10, a3 = 2, β = 2, and S = 3, we have

e(A) = 10 x 2 x (7 - 5)^2 + 2 x 3 x (8 - 6)^2 = 104.

The reference distance profile sequence {rm} can be defined to impose the condition that each BC obtained by truncating the UMC should satisfy the Gilbert-Varshamov lower bound [3]. Defining w0 as the mean growth of the distance profile, that is, w0 = [ρS(A) - ρ2(A)]/(S - 2), we want

w0 > n H^{-1}(1 - k/n) = GB,   (16)

where H^{-1} is the inverse binary entropy function.

The reference distance profile sequence {rm} can be initially chosen as rm = d̂free + (m - 2) GB. The set of weights {am} should form a decreasing sequence, e.g., am = 2^{-m}. The best values depend on the specific code and usually can be found after a few trials. This choice is not critical: small variations in the distance profile will not degrade the code performance.

B. Local Search Algorithm

A feasible solution from Problem (P1) makes it possible to obtain matrices G0 and G1. The minimum change that keeps the feasibility is interchanging the position of two columns in these matrices. This process can be used as the elementary operation required to detect the steepest descent direction in Problem (P2) as in the following local search algorithm.

Algorithm II-Column Types Matching Problem:
1) build up matrices G0 and G1 with the column types provided by Algorithm I from a random matching;
2) for each column i = 1, 2, ..., 2n, find the column j ≠ i that, if interchanged with column i, will minimize the objective function (14), that is, e(A);
3) if the objective function decreases, then interchange columns i and j and go back to step 2, otherwise go to step 4;
4) if a "good" solution is found (at least dfree = d̃free) then stop, else go to step 1 to restart the search from another initial solution.

Note that to reduce the search effort, only a subset of all possible column interchanges is tried in each iteration, and the method to avoid local minima is to restart with a new initial solution. If the reference distance profile is properly chosen and the number S is large enough, this search algorithm will naturally eliminate the catastrophic
TABLE IV
GENERATOR MATRIX COLUMNS OF THE NEW LINEAR BLOCK CODES
8 32 13 12-13 1 2 4 8 16 32 64 128 28 31
41 49 59 62 67 77 90 102 112 147
149 167 178 180 185 198 209 219 220 233
236 254
8 60 27 26-28 1 2 4 8 16 32 64 128 13 23
25 26 35 37 49 59 61 69 71 73
77 79 81 90 91 92 99 100 111 112
115 117 119 123 133 143 147 152 155 173
174 177 191 195 197 203 204 206 208 209
215 226 229 231 233 236 243 249 250 253
8 73 33 32-34 1 2 4 8 16 32 64 128 11 13
21 22 26 27 37 40 47 50 55 60
71 75 76 80 82 85 88 95 96 102
105 106 110 113 115 11� 121 126 127 129
132 133 135 139 147 148 150 152 156 163
167 169 173 174 176 181 186 191 193 194
200 207 210 219 221 222 227 228 230 236
240 249 255
8 76 35 34-36 1 2 4 8 16 32 64 128 13 17
23 26 29 30 38 44 45 47 50 51
52 53 57 59 63 70 82 83 85 91
93 94 99 100 101 106 111 112 118 119
120 124 133 134 138 141 142 145 150 158
159 160 164 167 171 174 182 184 188 194
199 200 201 203 209 212 218 221 222 223
224 227 236 244 247 255
9 43 17 16-18 1 2 4 8 16 32 64 128 256 21
30 37 46 66 95 107 114 124 133 147
156 174 184 200 203 205 244 278 311 327
334 371 372 376 385 414 423 424 429 436
464 471 481
9 46 19 18-20 1 2 4 8 16 32 64 128 256 14
21 56 62 104 115 122 131 167 185 190
220 235 246 267 284 303 311 322 328 345
350 357 361 364 388 402 409 416 429 435
447 451 461 494 500 504
codes, due to their poor distance profile. Now the term "good" is used to evaluate the number of minimum weight paths and the distance profile growth.

V. COMPUTATIONAL RESULTS

Algorithms I and II were programmed and run on a PC AT microcomputer. To show the efficacy of the new design method, we have tabulated 33 new codes. We believe this sample of codes gives a good indication of the potentialities of the new method. Some codes which have free distance equal to previously published codes (UMC or multimemory) are not listed, but there are some exceptions which are included, and the reasons for that are discussed ahead. Table I contains some parameters of the new codes, viz.

k, n   number of input and output bits, respectively
UB     upper bound on dfree (from [11])
dfree  free distance of the new UMC
dm     free distance of the optimum multimemory code
ηm     number of minimum weight paths
w0     mean growth of the distance profile
GB     Gilbert-Varshamov lower bound on acceptable w0
γ = R dfree/2   asymptotic coding gain (AWGN channel and hard decoding).

Tables II and III present for each new (n, k) UMC the rows in hexadecimal notation of the corresponding generator matrices. All BC's used were obtained from Algorithm I. In most of the cases, the maximum minimum-distance is achieved, and Algorithm II was able to search for a UMC with dfree equal to the maximum minimum-distance of the block code (2n, k). The only exceptions are the (8,5), (9,7), (12,8), and (20,8) codes. The last two codes, (12,8) and (20,8), may achieve the maximum dfree since no (24,8) and (40,8) BC's with minimum distance equal to the corresponding upper bounds are known. All (8,5) UMC's found with dfree = 8 were catastrophic; and the code (9,7) may be considered a special case due to its high rate. The (32,8) BC improves the lower bound on Verhoeff's table (updated version [11]), and the corresponding UMC has higher dfree than its quasi-cyclic version [3]. Note that all new codes satisfy inequality (16).

Some codes that duplicate previous results are as follows.
• The code (14,7) is very similar to the code found by Justesen et al. [3]; it is included to show that good distance profiles can be found with Algorithm II. The Justesen et al. code distance profile is 12, 12, 13, 15, 18, 19, 22, ..., and the code found by Algorithm II has distance profile 12, 12, 13, 15, 17, 19, 21.
• Codes (20,5), (18,6), and (24,6) have the same free distance as the unit-memory codes found by Lee [1]. The differences are
TABLE V
MATRIX P COLUMNS OF Horig OF THE NEW LINEAR BLOCK CODES

k  n  d  dH   Matrix P Columns (4 bit byte decimal representation)
8 33 14 13-14 7 4 12 14 13 3 0 0
11 14 15 13 12 12 3 0
1 13 4 8 11 6 14 1
12 6 13 0 11 7 0 15
11 5 4 9 11 8 7 15
4 1 11 15 9 7 15 15
8 61 28 27-28 14 5 12 11 7 0 0 0
15 9 4 1 3 15 0 0
15 2 14 9 8 8 7 0
14 5 12 13 3 0 15 0
10 10 5 12 12 3 15 0
11 10 9 8 7 15 15 0
15 13 11 5 12 12 12 3
11 10 1 7 14 1 0 15
7 11 10 10 6 14 1 15
12 5 11 7 0 0 15 15
6 3 2 0 14 1 15 15
14 4 13 3 0 15 15 15
13 10 1 7 15 15 15 15
8 74 34 33-34 14 9 7 12 3 0 0 0
6 12 2 13 12 3 0 0
10 14 11 9 7 15 0 0
12 12 10 6 1 0 15 0
5 9 5 3 15 0 15 0
2 5 4 3 0 15 15 0
6 11 9 8 7 15 15 0
11 6 6 14 14 14 14 1
7 3 14 1 0 0 0 15
8 10 6 1 15 0 0 15
7 6 10 9 8 7 0 15
9 4 13 12 3 15 0 15
6 13 4 12 12 12 3 15
5 7 4 13 3 0 15 15
10 6 13 12 12 3 15 15
1 8 12 5 3 15 15 15
8 8 8 8 8 8 8 8
that while Lee's (18,6) code has 45 minimum-weight paths, the new code has only 2, and the distance profile growth rates of Lee's (20,5) and (24,6) codes do not satisfy (16).

To illustrate the computational effort required by the algorithms, a (24,8) UMC, with 384 0-1 design variables, was found after roughly five hours of CPU time, which can be considered an upper bound on the allowed search time. However, the computational effort was highly dependent on the code and optimization objectives. In some cases, the time required to enhance the code's distance profile or number of codewords with minimum distance was much higher than the time required to find a code with maximum dfree. The time required to solve the column types selection problem is usually smaller than the time to solve the matching problem, but for some low rate codes it was greater.

Tables IV and V present the new linear block codes found by Algorithm I. Particularly, Table IV presents the generator matrix columns in decimal representation, whereas Table V presents matrix P columns, in decimal representation for each 4 bit byte, of

Horig = [P I],

where Horig is the parity-check matrix of the original code, and I is the identity matrix when the extension method [12, Ch. 1, pp. 27-28] is employed.

For the sake of understanding the notation, let us consider the first column of matrix P of the linear block code (74,8,34), that is, (14,6,10,12,5,2,6,11,7,8,7,9,6,5,10,1,8)^t. This represents the binary column (1110 0110 1010 1100 0101 0010 0110 1011 0111 1000 0111 1001 0110 0101 1010 0001 1000)^t. Note that the rightmost 3 bits "0" must be disregarded in order to match the right number of rows of Horig.

VI. CONCLUSION

We have presented a new method to design good unit-memory convolutional codes. We have shown how the original problem can be decomposed into the column types selection and matching problems. From this decomposition, efficient algorithms were devised to solve the corresponding problems. As a result, we were able to find new UMC's, most of them with maximum dfree.

From the computational results, we believe that, with further research, this methodology can be enhanced with the inclusion of algebraic properties of the UMC and with the adaptation of more powerful combinatorial optimization methods to the particularities of the problems, leading to more efficient methods to design optimal UMC's. The techniques used to obtain the optimization problems are not very restrictive, hence there is the interesting possibility of adapting the method to the design of linear unit-memory codes over q-ary fields or rings, and using the Hamming, Lee, or Euclidean distance to measure its performance.
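Table IV's column notation can be decoded mechanically. The sketch below is illustrative: it rebuilds the generator matrix of the (32,8,13) code from its decimal column values (each value is one 8-bit column, the first eight forming an identity block) and computes the minimum distance by brute force; the text reports d = 13 for this code.

```python
from itertools import product

# Illustration: rebuild the (32,8,13) generator matrix from its Table IV
# entry; each decimal value is one 8-bit column, MSB on the top row.
cols = [1, 2, 4, 8, 16, 32, 64, 128, 28, 31,
        41, 49, 59, 62, 67, 77, 90, 102, 112, 147,
        149, 167, 178, 180, 185, 198, 209, 219, 220, 233,
        236, 254]
k, n = 8, 32

G = [[(c >> (k - 1 - r)) & 1 for c in cols] for r in range(k)]  # k x n matrix

# brute-force minimum distance over all 2**k - 1 nonzero messages;
# the text reports d = 13 for this code
d = min(sum(sum(x[r] * G[r][c] for r in range(k)) % 2 for c in range(n))
        for x in product((0, 1), repeat=k) if any(x))
print("minimum distance:", d)
```

Because the first eight columns are the distinct powers of two, every nonzero message already produces a nonzero pattern on the identity block, so the matrix has full rank by construction.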
TABLE V
(CONTINUED)
Matrix P Columns
k  n  d  dH   (4 bit byte decimal representation)
8 77 36 35-36 14 3 10 9 7 0 0 0
8 6 15 13 12 3 0 0
13 7 12 12 3 15 0 0
7 1 12 3 15 15 0 0
9 15 12 8 11 8 7 0
14 5 11 7 15 0 15 0
10 9 6 1 0 15 15 0
9 11 11 8 7 15 15 0
2 1 7 12 12 12 12 3
5 10 6 14 1 0 0 15
2 14 14 6 14 1 0 15
6 7 13 3 0 15 0 15
0 9 10 6 14 14 1 15
11 9 8 7 0 0 15 15
9 2 5 3 15 0 15 15
5 13 12 12 12 3 15 15
3 3 15 9 7 15 15 15
9 44 18 17-18 10 5 15 15 12 3 0 0 0
6 15 4 6 5 3 15 0 0
6 2 13 9 11 8 8 7 0
1 9 8 15 4 12 3 15 0
9 3 15 8 7 5 12 12 3
10 14 13 4 3 3 15 0 15
5 3 3 10 10 9 8 7 15
4 0 6 12 3 14 1 15 15
12 8 8 0 8 4 12 12 12
9 47 20 19-20 4 9 13 11 7 3 0 0 0
5 7 0 10 6 14 14 0
12 10 11 7 7 14 1 15 0
10 14 5 11 5 12 12 12 3
12 14 12 9 4 12 3 0 15
11 4 6 13 12 3 15 0 15
1 2 12 9 3 8 8 7 15
7 3 5 5 3 15 0 15 15
12 10 7 6 1 3 15 15 15
0 0 0 8 8 8 8 8 8
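The byte-packed notation of Table V can be checked against the worked (74,8,34) example given in the text. A short illustrative sketch:

```python
# Illustration: decode the first column of matrix P of the (74,8,34) code
# from its 4-bit-byte decimal notation (values as given in the text).
col_bytes = [14, 6, 10, 12, 5, 2, 6, 11, 7, 8, 7, 9, 6, 5, 10, 1, 8]

bits = [int(b) for v in col_bytes for b in format(v, "04b")]
print("".join(map(str, bits)))
# 17 bytes give 68 bits; per the text, the rightmost 3 zero bits are padding,
# leaving the 65 rows of Horig (the parity-check matrix of the (73,8) code).
assert len(bits) == 68 and bits[-3:] == [0, 0, 0]
```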
ACKNOWLEDGMENT

The authors would like to thank the referees for their detailed comments and valuable suggestions which improved the presentation of this work.

REFERENCES

[10] T. Verhoeff, "An updated table of minimum-distance bounds for binary linear codes," IEEE Trans. Inform. Theory, vol. IT-33, pp. 665-680, Sept. 1987.
[11] T. Verhoeff, "An updated table of minimum-distance bounds for binary linear codes," Dept. Math. Comput. Sci., Eindhoven Univ. Technol., Jan. 1989.
[12] F. J. MacWilliams and N. J. A. Sloane, The Theory of Error-Correcting Codes. Amsterdam, The Netherlands: North-Holland, 1977.