Multimedia
by
A THESIS
IN
COMPUTER SCIENCE
MASTER OF SCIENCE
Approved
Accepted
May, 2001
ACKNOWLEDGEMENTS
I would like to express my sincere gratitude to Dr. Gopal Lakhani, for his constant
guidance and support during the course of this research and my graduate study. Without
his ideas and advice, this thesis would not have been possible. I indeed feel indebted to
him for the time and effort he has put into my thesis. I wish to express my thanks to Dr. Larry Pyeatt and Dr. Mohammad Maqusi for agreeing to be on my thesis committee and for their valuable advice and suggestions. Also, I wish to express my gratitude to the Department of Computer Science, Texas Tech University, for providing me with the facilities to carry out my research work, and to the TxTEC consortium for funding my research. Finally, I thank my roommates, Raj, Srinivas and Senthil, who have been like a family to me.
TABLE OF CONTENTS
ACKNOWLEDGEMENTS ii
ABSTRACT vi
CHAPTER
1. INTRODUCTION 1
2.2.3 Concealment
3.1 Objective 16
3.2 Approach 16
4.1 Objective 23
5.1 Objective 41
5.3.3 Comparison of Decoded Erroneous Images
REFERENCES 61
ABSTRACT
Most data compression schemes split the input signal into blocks and then
produce a variable length code for each block. Since variable length codes are highly
sensitive to channel errors that may occur during transmission, synchronization code
words are often inserted between blocks to provide occasional resynchronization, thereby giving increased resilience to random and burst errors while maintaining high compression. This report proposes improvements to the EREC algorithm to increase its error resiliency. The objective is to pack code blocks in fixed-size slots, so that in the presence of bit errors, more data (blocks) can be recovered than is possible with the original EREC algorithm.
CHAPTER 1
INTRODUCTION
This thesis presents a study of coding of multimedia streams for transmission over channels of very limited bandwidth. The study concentrates on how to organize the data so that in case of transmission errors, the erroneous data, which cannot be decoded, is reduced, and skipping the erroneous data has limited effect on reconstruction of the original input. This thesis does not address any specific method for recovery of lost data. The objective is to decode as much input as possible without adding extra redundancy to the data.

The thesis is organized into a number of chapters. This chapter provides a brief introduction to transmission errors and their effect on coded data. The second chapter introduces error resilience coding. It also gives a brief survey of the literature on various approaches to error resilient coding. The third chapter presents the Error Resilient Entropy Coding (EREC) algorithm of [1] in detail and discusses its advantages and limitations. The fourth chapter discusses the crux of this thesis, the Bi-directional Error Resilient Entropy Coding (BiEREC) algorithm. The fifth chapter gives the results of some simulations carried out on BiEREC and EREC using real-life image data. Conclusions and possible extensions are listed in the sixth chapter.
1.2 Multimedia Data
Multimedia data in its raw form requires large storage capacity and transmission bandwidth. To reduce the volume of the data, raw data is compressed before storage or transmission.
Most compression schemes work in two phases. Initially, the data is grouped into separate segments or blocks. The objective of the first phase is to model the input based on certain characteristics inherent to the data and then extract redundancy as the data is read. The output of this phase is a stream of abstract symbols, where each symbol represents a part of the original input stream. The purpose of the second phase is to encode each symbol using a bit string, the length of which is based on the frequency of occurrence of the symbol. This phase is called entropy coding. For example, in the well-known JPEG [4] image compression system, the input image is first divided into blocks of 8x8 pixels. During the first phase, each block is transformed using the discrete cosine transformation (DCT), and a stream consisting of the DCT coefficients of each block is output. During the second phase, the DCT coefficients are coded using entropy coding: symbols occurring more frequently are coded using fewer bits, and symbols occurring less frequently are coded using more bits. This is because the information content of a frequently occurring symbol is smaller than the information content of a symbol appearing less frequently. There are many entropy-coding techniques; Huffman coding and Arithmetic coding are the most common. Huffman coding reduces the average code length of the symbols of an alphabet set. The Huffman encoder produces an optimal length code in case the probability of occurrence of each symbol is an integral power of 1/2. A Huffman code table can be built in the following manner:
1. List each symbol as a leaf node along with its probability of occurrence.
2. Successively combine the two symbols of the lowest probability to form a new composite symbol. Add this symbol as a new node of a binary tree. This node denotes the probability of all nodes in the sub-tree. Eventually we will build a binary tree with a single root.
3. Trace a path to each leaf, keeping track of the direction taken at each node; the sequence of directions gives the symbol's code.
For a given probability/frequency distribution, there are many possible sets of Huffman codes, but the total compressed length of the coded input will be the same.
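As an illustration, the three steps above can be sketched in Python. The symbol probabilities below are assumed for demonstration only, so the resulting codes may differ from those of Table 1.1 depending on how ties are broken:

```python
import heapq
from itertools import count

def huffman_codes(freqs):
    """Build a Huffman code table by repeatedly merging the two least
    probable nodes, then tracing root-to-leaf paths for the codes."""
    tiebreak = count()  # keeps heap comparisons well-defined on ties
    # heap entries: (probability, tiebreak, tree); a tree is a symbol
    # (leaf) or a (left, right) pair (internal node)
    heap = [(p, next(tiebreak), sym) for sym, p in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, t1 = heapq.heappop(heap)
        p2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (p1 + p2, next(tiebreak), (t1, t2)))
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):        # internal node
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:                              # leaf: a symbol
            codes[tree] = prefix or "0"
    walk(heap[0][2], "")
    return codes

# hypothetical probabilities, chosen so that A is the most frequent
freqs = {"A": 0.4, "B": 0.15, "C": 0.15, "D": 0.15, "E": 0.08, "F": 0.07}
codes = huffman_codes(freqs)
```

Whatever the tie-breaking, the resulting code is prefix-free and assigns the most frequent symbol a code no longer than any other.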
[Figure: the Huffman code tree for the example alphabet, yielding the codes A = 0, C = 100, B = 101, D = 111, F = 1100, E = 1101.]
Instead of using the Huffman codes, which are of variable length, another choice is to use fixed length binary codes. Table 1.1 shows the fixed and variable length code sets.

Table 1.1. Huffman and fixed length codes.
Symbol  Huffman code  Fixed length code
A       0             000
B       101           001
C       100           010
D       111           011
E       1101          100
F       1100          101
Let ACBFEDA be the input to be coded. Its Huffman coding will generate 1+3+3+4+4+3+1 = 19 bits. If the same input were coded using the given fixed length codes, the number of bits required would be 21 bits. The saving due to the use of entropy coding is 2 bits. For large data sets, the savings are generally quite substantial. Therefore, variable length coding (VLC) is used widely for coding of data to be transmitted over network channels.
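The two bit counts can be checked with a short sketch using the code sets of Table 1.1:

```python
# Code sets of Table 1.1
HUFFMAN = {"A": "0", "B": "101", "C": "100", "D": "111", "E": "1101", "F": "1100"}
FIXED = {"A": "000", "B": "001", "C": "010", "D": "011", "E": "100", "F": "101"}

def encoded_length(symbols, table):
    # total number of bits needed to code the symbol sequence
    return sum(len(table[s]) for s in symbols)

vlc_bits = encoded_length("ACBFEDA", HUFFMAN)   # 19 bits
flc_bits = encoded_length("ACBFEDA", FIXED)     # 21 bits
```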
However, VLC is not suitable for use over channels susceptible to errors, because if a single bit error occurs during transmission (a 0 changes to 1 or vice versa, perhaps due to noise), it may be very difficult to locate the presence of the error and recover the original data. To demonstrate this problem, let us consider transmission of the above sequence, ACBFEDA, coded using the Huffman codes of Table 1.1. Let the code of the symbol E be changed to 1111 due to a bit error. In this case, the decoder would receive the bit sequence 0 100 101 1100 1111 111 0. It will be able to decode the initial symbols A, C, B, F correctly, but when it reads 111 (the first three 1s of the corrupted segment), it would interpret it as D. It would then read the next sequence of three 1s and interpret it again as D. Finally, the decoder is left with bits that do not form a valid Huffman code for any symbol, and it would fail. The reason is that the decoder has no way of knowing the location of the error and how far it has decoded correctly. It also cannot skip a code segment and continue to decode thereafter, because there are no such segment boundaries. This may result in the decoder throwing away a large amount of correct data, all due to a single bit error. However, this would not be the case if fixed length coding is used, since any bit error can affect only one symbol and the decoder can skip a known number of bits and resume decoding at the next symbol boundary.
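This failure mode can be reproduced with a small sketch. The greedy decoder below is an illustration (greedy matching is exact for a prefix-free code), and the bit flip is placed inside E's code word as in the example:

```python
# Huffman codes of Table 1.1
CODES = {"A": "0", "B": "101", "C": "100", "D": "111", "E": "1101", "F": "1100"}
DECODE = {code: sym for sym, code in CODES.items()}

def encode(symbols):
    return "".join(CODES[s] for s in symbols)

def decode(bits):
    """Greedy prefix decoding. Returns (symbols, ok); ok is False when
    leftover bits do not form a complete code word."""
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in DECODE:
            out.append(DECODE[buf])
            buf = ""
    return out, buf == ""

clean = encode("ACBFEDA")                        # 19 bits
i = len(encode("ACBF"))                          # E's code starts here
corrupt = clean[:i + 2] + "1" + clean[i + 3:]    # E: 1101 -> 1111
```

Decoding the corrupted stream recovers A, C, B, F, then misreads the run of 1s as two Ds and fails on the leftover bits; the symbol E is lost entirely.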
A multimedia data stream comprises two types of data: one describes the input, such as the pixels of a picture, and the other describes control parameters, such as image size and display rate. The stream is organized in several layers, with the top layer describing the most abstract information and the lower layers describing input dependent information. For example, an MPEG [5] stream is organized in 5 layers, with each layer describing different sets of parameters, which are needed to reconstruct the original image sequence.
[Figure: the layers of an MPEG stream, including the slice layer and the macroblock layer.]
An MPEG stream can be divided into segments so that control segments and data segments alternate; a data segment consists of variable-length code blocks. As shown in Figure 1.2, the image data blocks constitute the bottom-most layer of the MPEG stream. [5] is a comprehensive reference on the MPEG standard.
During the transmission of a coded data stream, a transmission error can occur anywhere. It may be in the control segments or in the data segments. Since the format of the control segments is fixed, the control information can be coded using some error-correcting codes to recover the information in case of any transmission error. Adding a limited amount of error correction code to a control segment is tolerable, as its size is far smaller than a data segment. Furthermore, it is also necessary for the decoder to decode all control segments completely and correctly in order to be able to decode the data segments correctly. Using the same error correction coding method for data segments may not be practical, because it may increase the total code size significantly and hence affect the bit rate, which must be maintained to get the desired compression. Furthermore, in the case of wireless transmission, the bandwidth may not be available to transmit the additional code. The goal, therefore, is to increase the error resiliency of the data part without increasing any overhead.
Wireline communication involves transmission over guided media such as thick-net, whereas wireless transmission involves transmission through the air. Several constraints of wireless communication do not hold for wireline communication. The most important constraint is the limited bandwidth of a wireless channel. Multimedia data, especially video, require large bandwidth, which a wireless channel may not be able to provide. The types of transmission errors, such as single-bit errors, burst errors and intermittent loss of connection, are basically the same in both wireline and wireless communication; however, because of the higher noise level in a wireless channel, the error rate is much higher. In this thesis, we concentrate on the effects on VLCs due to a single-bit error, though our algorithms are not limited to this case. Any discussion on the cause of these errors is beyond the scope of this report.

Traditionally, forward error correcting codes (FEC) are used to protect data. Most of these schemes are not very attractive for wireless communication, because adding FEC codes in a worst-case design would add a prohibitive amount of additional code and thus cause redundancy in the data. An alternate solution is to re-transmit erroneous data, but this solution is also not practical in most situations, because it incurs additional delays. Therefore, error resilience coding, discussed in the next chapter, is a very attractive solution. This thesis concentrates on improving the error resilience of entropy-coded data.
CHAPTER 2
Since the capacity of wireless channels is limited, it may not be feasible to protect all kinds of information in a multimedia stream using error-correcting codes. The reason is that error-correcting codes add a significant amount of data, which may drop the data rate and thus significantly affect the visual presentation of the stream at the receiving end. Therefore, error resilient coding is needed for transmission of audio and video data, which can tolerate a limited amount of transmission error in the data. The overall objective is to provide error protection, using a limited amount of additional code, in such a way that in the face of errors, the maximum amount of data is recovered. This objective is realized by coding different segments (e.g., in MPEG [5]) differently and by organizing data in a manner such that in the presence of errors, the receiver is able to skip potentially erroneous data and restart at a later stage. The advantage of this approach is that it allows for graceful degradation of transmission quality as the error rate increases in the channel.
2.2.1 Resynchronization and Localization
Channel errors can cause loss of synchronization at the receiver end. This loss could be due to insertion and/or deletion of bits, which could be due to instability of the clock. Bit errors can also cause loss of synchronization if variable length codes are used. When compression schemes with memory are used, i.e., those compression schemes in which the codes generated for new symbols depend upon the previous symbols, errors can propagate well beyond the point at which they occur.

One way of introducing resilience to this type of error is to localize the effect of the error. This is usually done by periodically inserting some unique synchronization code words in the stream, as in MPEG-2 [5] and H.263 [6]. In order to differentiate them from data, the synchronization code words are generally quite long, and therefore, should be used sparingly. For compression schemes with memory, synchronization codes alone may not be sufficient, and the coding scheme may have to be periodically restarted. For example, MPEG-2 encodes a frame fully (intra mode) at regular intervals.
The above solution is sufficient to prevent catastrophic losses for all types of channel errors mentioned above. Certain measures have also been proposed to target errors caused by the use of variable-length codes. As shown in section 1.3.2, the use of variable length codes can lead to propagation of errors, because an error in a codeword may prevent the decoder from detecting a synchronization marker. A simple solution to prevent this failure, possibly at the cost of coding efficiency, is to use fixed length code words. Another way, which is more elegant and provides re-synchronization, is to use the EREC algorithm [1]. The objective is to localize the effects of errors and still not reduce coding efficiency.
In image and video data streams, some segments may be more important than others. For example, a loss of the information on the quantization levels and motion vectors is more damaging than the loss of information on DCT coefficients. Even among DCT coefficients, the DC coefficient and lower frequency coefficients contribute more to the reconstruction of the image than the higher frequency coefficients. Hence, if the capacity of the channel is limited, different levels of protection should be accorded as per the importance of the data. One such approach is layered coding, where the bit stream is divided into several layers. The quality of received video improves as data from successively higher layers are received and used to reconstruct video frames. The data transmitted as part of lower layers is usually more important and is given a higher level of protection.
2.2.3 Concealment
When any code incurs a transmission error, entire 8 x 8 pixel blocks may have to be dropped from the decoded image. A good approximation of these lost blocks can be obtained by using data already available at the receiver. The simplest method would be to interpolate a lost block from its correctly decoded neighboring blocks.
2.3 Summary of the Literature Survey
Apart from the error resilience coding approaches discussed in the previous sections, various other methods have also been studied in the literature. In [2], Wee Sun Lee et al. discuss a method that uses H.261 [25-26] error protected headers as synchronization flags as well. This avoids the need for an explicit synchronization code word. In the same paper, they discuss various other means to code the start codes in an H.261 code stream, so that they also function as parity bits. In [11], Soon et al. propose a scheme to improve the robustness of an H.263 video coder by combining source and channel coding [12-14] to selectively protect error sensitive bits in the encoded video bit stream. References [15-24] discuss numerous other approaches to the same problem.

Almost all the approaches discussed are either specific to a standard like H.261 or require addition of code to the data stream. In the next chapter, we present the approach discussed in [1], which is general and can be adapted to any compression scheme.
CHAPTER 3
3.1 Objective
This chapter describes the Error Resilient Entropy Code (EREC) algorithm presented in [1]. This algorithm is the basis of the research described in the next two chapters. EREC provides error resilience to a stream of variable-length code blocks, each containing an arbitrary number of symbols coded using an encoder that generates VLCs. The resiliency is achieved without adding any additional data to the stream, so that the entropy of the coded output is not affected. It also has the property of graceful degradation in the face of increased channel errors, i.e., the amount of data discarded during decoding due to transmission errors increases only with an increase in the number of transmission errors.
3.2 Approach
The EREC algorithm can be cascaded with any entropy encoder as a second pass. The function of this pass is to reorganize the variable length code blocks generated by the native encoder so that the location of the beginning of the code of each block in the bit stream can be determined from the block number. The main advantage is that in case the decoder fails to decode a block, it can skip a known number of code bits and resume decoding at the beginning of the next block. To realize this approach, EREC packs data in a structure described below.
The EREC follows a block coding strategy. Each block is a prefix code, which means that in the absence of any channel errors, the decoder can detect the end of a block without any additional information.

The EREC structure is composed of N slots of lengths s1, s2, ..., sN bits, giving a total length of T = s1 + s2 + ... + sN bits to transmit. It is assumed that both the encoder and the decoder know the slot lengths. The structure is used to pack N variable-length blocks of data, each of length bi, provided the total data to be coded does not exceed the total T bits available:

R = T - (b1 + b2 + ... + bN) >= 0.    (3.1)

The difference R (between the total space available and the total data to be coded) represents the amount of space that may go unused for every N blocks. In order to minimize this unused space, T should be chosen for each structure separately and transmitted as highly protected side information. The slot lengths si can be predefined as a function of T and N and, therefore, they need not be transmitted for each structure. (Note that an EREC structure need not have slots of equal size.) One method of computing the slot sizes si from the given value of T could be as follows. Let N be a multiple of some fixed integer, say p, i.e., N = p * N'. Let k = T/N denote the average size of slots. Then floor(T/N) + (m-1)/p <= k <= floor(T/N) + m/p for some integer 1 <= m <= p. Among every p consecutive slots, the first m of them are of size ceil(T/N) and the next (p - m) are of size floor(T/N).
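A simplified variant of this computation (placing all the ceil(T/N)-bit slots first rather than interleaving them p-periodically, an illustrative assumption) can be sketched as:

```python
def slot_sizes(T, N):
    # Split T bits into N slots whose sizes differ by at most one bit:
    # the first (T mod N) slots get ceil(T/N) bits, the rest floor(T/N),
    # so the whole structure is derivable from T and N alone.
    q, r = divmod(T, N)
    return [q + 1] * r + [q] * (N - r)

sizes = slot_sizes(100, 6)
```

Since both sides can recompute the slot sizes from T and N, only T needs to be sent as highly protected side information.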
For larger inputs, it is necessary to split the input into several groups of N blocks, each of which is packed into its own EREC structure.
The EREC places the variable-length blocks of data into the EREC structure by using a bit reorganization algorithm that relies only on the ability to determine the end of each block. Thus, in the absence of channel errors, the decoder can exactly follow the encoder's packing steps. Figure 3.1 shows a sample example of the operation of the EREC reorganization algorithm for a sequence of blocks of lengths 10, 9, 4, 3, 9, and 6 bits, and 6 equal length slots of si = 7 bits.
The algorithm is iterative and packs data into slots in successive stages. In the first stage, the algorithm places each block of data Bi into the corresponding code slot Si. Starting from the beginning of each block, all or as many bits as possible are placed within the corresponding slot. Thus, blocks with a number of bits equal to that available in the corresponding slot, i.e., bi = si, are coded completely, leaving the slot full. Blocks with bi < si are coded completely, leaving the slot with si - bi bits of space unused. Blocks with bi > si have si bits coded to fill the slot, leaving bi - si bits remaining to be coded. For example, in Figure 3.1, blocks 1, 2, and 5 have bi > si, and blocks 3, 4, and 6 have bi < si. Thus, at the end of stage 1, slots 1, 2, and 5 are full, whereas slots 3, 4, and 6 have unused space.
[Figure 3.1: Operation of the EREC packing algorithm on six blocks and six slots, showing the state of the slots after stages 1, 2, 3, and 6.]
In subsequent stages of the algorithm, each block with data still to be coded searches for slots with unused space. In stage n, block i searches slot Sk, k = i + phi_n (modulo N), where phi_n is a predefined offset sequence. If the slot Sk searched by a block has some space available, then all or as many bits as possible of Bi are placed within that slot. In Figure 3.1, the offset sequence is 0, 1, 2, 3, 4, 5. Thus, in stage 2, the offset is 1, and each block which is yet to be packed completely searches the following slot. Figure 3.1(b) shows that by the end of stage 2, one bit from block 5 has been placed in the space remaining in slot 6, and two bits of block 2 have been placed in slot 3.

Provided 3.1 is satisfied, which is true for the given example, there is enough space to pack the data of all blocks, and all the bits can be placed within N stages of the algorithm. Figure 3.1(d) shows that by the end of stage 6, all the data has been packed.
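The packing procedure can be simulated by tracking only bit counts (an illustrative sketch, not the implementation used in the thesis):

```python
def erec_pack(lengths, slot_size, offsets=None):
    """Simulate EREC packing: stage 1 puts each block into its own slot;
    in stage n, block i tries slot (i + phi_n) mod N. Returns the number
    of still-unpacked bits per block after all N stages."""
    N = len(lengths)
    offsets = offsets or list(range(N))   # 0, 1, 2, ..., N-1 as in [1]
    free = [slot_size] * N                # unused bits in each slot
    left = list(lengths)                  # bits of each block still to pack
    for phi in offsets:
        for i in range(N):
            k = (i + phi) % N
            moved = min(left[i], free[k])
            left[i] -= moved
            free[k] -= moved
    return left

# the example of Figure 3.1: blocks of 10, 9, 4, 3, 9, 6 bits, slots of 7
remaining = erec_pack([10, 9, 4, 3, 9, 6], 7)
```

Within a stage, the slots searched by distinct blocks are distinct, so no two blocks compete for the same space; for the example, all bits are placed by stage 6, as in Figure 3.1(d).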
The decoder can follow this algorithm by decoding each variable-length block of data as it goes along until it detects the end of each block. At that point, it knows that the remaining contents of the slot correspond to data from other blocks packed at a later stage of the algorithm. In the absence of any channel errors, the decoder can obtain the original blocks exactly.

A decoder that can handle channel errors can be constructed as follows. While decoding a slot, if the decoder hits a bit sequence which is not a valid code or a prefix of a code, it detects an error. It marks the block B and slot S as erroneous and skips to the next block, because the starting location of the next block is known. In subsequent stages, if it determines that a block B' is continued into S, i.e., at stage k, S = B' + phi_k (modulo N), it marks B' as erroneous as well.
The following describes the error handling in EREC:
1. In EREC, if there is an erroneous block within a slot, all other blocks within the slot are also marked as erroneous, since the EOB of the erroneous block may not be detected and the decoder cannot locate the start of the next block.

The EREC algorithm packs N blocks of data in N slots, so that little or no space is wasted. As a result, it may have to fragment some larger-than-average blocks into more than one slot, and it may take up to N stages to pack all blocks. If a slot is determined to have incurred a transmission error during a stage, many blocks which may not even have been packed in that slot may be declared erroneous and discarded.
Figure 3.2 shows the placement of 4 blocks in 4 slots, with blocks 1 and 2 each packed in two parts. Suppose a transmission error in slot 3 causes the decoder to declare slot 3 erroneous. When, in stage 3, the decoder tries to search for block 1 in slot 3, it finds that the slot is erroneous and declares block 1 to be erroneous as well. This is because the decoder has no way to determine that block 1 was not packed in slot 3.
Thus, an error in a block could propagate to other blocks yet to be decoded completely. In the worst case, because of a single bit error, up to N/2 blocks may be declared erroneous. If larger slots are used, fewer blocks get fragmented, and it should take fewer stages to pack all the blocks. This is the motivation for the improvements presented in the next chapter.
CHAPTER 4
4.1 Objective
The main aim of our research was to find ways of improving the EREC algorithm in view of its limitations discussed in section 3.3. Our study led to the development of Bi-directional Error Resilient Entropy Coding (BiEREC), which improves the error resiliency of EREC without adding any overhead/redundancy and without changing the basic idea behind the EREC algorithm. The improvements described in this chapter aim to decrease the fragmentation of blocks between slots so that if a block is corrupted due to a transmission error, the error is localized and fewer blocks are discarded than in the EREC algorithm. The idea behind these improvements is that for the same amount of error during transmission, less data should be lost when compared to the data loss that would occur in EREC, and hence BiEREC should show an increase in error resiliency.

The next chapter presents experimental results that show that BiEREC performs better than EREC.
4.2 Improvements to EREC
This section introduces the three improvements we have incorporated into the
EREC algorithm.
In the EREC algorithm, as explained in the previous chapter, blocks are coded and data is packed in the forward direction, moving from left to right, so that the first code bit of a block always falls at the left boundary of the slot. Since the slot size is known, in case of a bit error the decoder can skip the current slot and start decoding at the left boundary of the next slot, which is the first bit of the next block. We observed that the left boundary of a block is the same as the right boundary of the previous slot. Therefore, the decoder can skip the same way to the right boundary of the current slot and start decoding. For this idea to work, the data must be placed in the backward direction, starting from the right end of the slot. This idea is the basis of the BiEREC algorithm presented in this chapter. Here the blocks are packed in both directions, i.e., the first code block is placed starting at the left slot boundary and the second code block is placed from the right slot boundary, moving in the backward direction. Any space left in the center of the slot is used to pack overflowing bits of other blocks in later stages.

In order to accommodate two blocks in a slot, the slot size is set to twice the average block size. The main advantage of doubling the slot size is that it increases the probability that at least one block gets completely packed in a slot, thereby reducing the fragmentation of blocks.
In variable-length coded images and/or video frames, the sizes of code blocks vary significantly, because some image blocks contain many image details and thus more code is needed to describe them than other blocks. Typically, a sequence of longer blocks is followed by shorter blocks. If the blocks are packed in First In-First Out (FIFO) order, more blocks are likely to fit within one slot in one part of the input image, while not even a single block fits in the slot in another part of the image. This increases fragmentation. To reduce fragmentation, we pair two blocks which are some fixed distance apart in the block sequence. This should increase the probability of both blocks being packed completely in a slot. This is realized by dividing the block sequence into subsequences of blocks of length N and pairing corresponding blocks of consecutive subsequences.

In the EREC algorithm, in stage i, the k-th block searches the slot (k+i) mod N to see if it contains any space for the continuation of the k-th block (see section 3.2). Therefore, it takes N stages to pack an EREC structure of N slots.
Further, because slots are scanned in sequential order when locating free space, the EREC algorithm ends up not finding any free space in several stages, at least during the later stages, when fragmentation is higher. In BiEREC, we have tried to take advantage of the way blocks are coded in a typical image or video frame. As stated in the previous section, in a typical frame a sequence of longer blocks is followed by shorter blocks. So, in the initial stages of the packing of blocks, the probability of finding free space should be higher in slots some distance apart rather than in slots in the immediate vicinity. Hence the search order in BiEREC has been modified and is not linear as in EREC.
In BiEREC, the total slot space is initially divided into two halves, and space for overflowing blocks in the first half is searched in the second half, and vice versa. In the second stage, each of the two initial halves is further divided into two halves, and the search alternates between them in the same manner. By searching in this order, the number of stages in which BiEREC is either unable to pack fragments of any blocks, or able to pack only a little, is reduced. This in turn reduces the fragmentation of blocks.
BiEREC extends the basic idea of EREC, i.e., it also reorganizes the variable length code blocks generated by the native encoder, so that the location of the beginning of the code of each block in the bit stream can be determined from the block number. To realize this approach, BiEREC, similar to EREC, packs the coded data stream in a sequence of BiEREC structures.
The BiEREC structure is composed of N slots {Si}, where N is a power of 2, and si is the size of slot Si. Let T = s1 + s2 + ... + sN denote the total size of the structure. Each BiEREC structure is used to pack and transmit 2N variable-length blocks {Bi} of data. Let bi denote the size of block Bi. Then

R = T - (b1 + b2 + ... + b2N) >= 0.    (4.1)

R represents the extra space left unused in a BiEREC structure. In order to minimize this empty space, T should be chosen for each BiEREC structure separately. The slot lengths si can be predefined as a function of T and N, and can be obtained from T. The method used for computation of T is as follows. First, the next 2N coded blocks are obtained from the native encoder, and B, the sum of their code sizes, is obtained. One choice is to set si = ceil(B/N) for each slot. This choice may lead to a wastage of up to N-1 bits per BiEREC structure. A better choice is to choose slot sizes so that consecutive slots differ by at most 1 bit, i.e., |si - si-1| <= 1. We followed this choice. The decoder must know T in order to be able to locate the boundaries of slots. Hence, the value of T is coded separately and transmitted as highly protected side information.
The BiEREC algorithm places the variable-length blocks of data into the BiEREC structure using a bit reorganization algorithm that relies only on the ability to determine the end of each block. Thus, in the absence of channel errors, the decoder can exactly follow the encoder's packing steps.
Like EREC, the BiEREC algorithm is iterative. In each stage, given a slot, it tries to pack any unpacked part of a corresponding block. Initially, the next 2N blocks of the coded data are arranged in pairs of blocks (Bi, BN+i), denoted by pi, 1 <= i <= N. Bi is called the left block and BN+i is called the right block of the pair. In the first stage, the algorithm places the blocks of the pair pi into the corresponding slot Si. Starting with the beginning of the left block Bi of each pair pi, all or as many bits of Bi as possible are placed, with the placement starting from the left boundary of the slot Si and going in the forward direction. Then, starting with the beginning of each right block BN+i, all or as many bits as possible are placed, with the placement starting from the right boundary of Si and going in the backward direction. Thus, in case bi + bN+i = si, blocks Bi and BN+i are packed completely and slot Si is full. If (bi + bN+i) < si, the blocks are packed completely, leaving the slot Si with si - (bi + bN+i) bits of space unused. If (bi + bN+i) > si, the slot Si is full and (bi + bN+i) - si bits remain to be packed in later stages.
In subsequent stages, the algorithm searches for slots with space for blocks yet to be fully packed. In stage n, it searches slot i + phi_n (modulo N) for the blocks of pair pi, where phi_n is a predefined offset sequence. In BiEREC, the offset sequence is different from the cyclic sequence used in EREC. If N is the number of slots (N is a power of 2), the offset sequence is 0, N/2, N/4, 3N/4, N/8, 3N/8, 5N/8, 7N/8, N/16, 3N/16, 5N/16, 7N/16, 9N/16, 11N/16, 13N/16, 15N/16, and so on, ending at N-1. The advantage of this search order is given in section 4.2.3. If the searched slot has some space available, then all or as many code bits as possible are placed in that slot. Among the two blocks of a pair, the priority of packing, i.e., which block gets to be packed first, alternates between the two blocks
for each successive stage of the iteration. In the first stage, the left block of a pair is tried first, and in any remaining space, the right block is packed. In the next stage, the right block is tried first and then the left block. The direction of packing remains the same, i.e., the left block is packed in the forward direction, going from left to right, and the right block of the pair in the backward direction, going from right to left.

Provided 4.1 is satisfied, there is enough space to pack the data of all blocks in every BiEREC structure, and all the bits can be placed within N stages of the algorithm.
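The offset sequence above can be generated as follows (an illustrative sketch; for N = 4 it yields 0, 2, 1, 3):

```python
def bierec_offsets(N):
    # Offset sequence 0, N/2, N/4, 3N/4, N/8, 3N/8, 5N/8, 7N/8, ...:
    # level d contributes the odd multiples of N/d in increasing order,
    # ending at N - 1. N must be a power of 2.
    seq = [0]
    d = 2
    while d <= N:
        seq.extend(odd * N // d for odd in range(1, d, 2))
        d *= 2
    return seq
```

The sequence is a permutation of 0, ..., N-1, so every slot is probed exactly once per pair; early stages probe slots far from a pair's own slot, where free space is more likely.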
We illustrate the above procedure with an example. Figure 4.1 shows a sample of the operation of the reorganization algorithm for eight blocks of code sizes 11, 9, 4, 3, 9, 6, 10, and 4 bits, placed in 4 slots (N = 4) of length si = 14 bits each. The blocks are represented by the pairs (11,9), (9,6), (4,10) and (3,4). Since N = 4, the offset sequence is 0, 2, 1, 3.

During the first stage, the algorithm packs the left blocks B1, B2, B3, B4 starting at the left boundary and then attempts to pack the right blocks B5, B6, B7, B8 in the remaining spaces of the four slots. At the end of stage 1 (Figure 4.1a), slots 1, 2 and 3 are completely filled and slot 4 has 7 bits of unused space. The numbers of bits that remain to be packed of the four pairs are (0,6), (0,1), (0,0) and (0,0). Note that (0,0) denotes a fully packed pair.
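Stage 1 of this example can be verified with a small sketch that tracks only bit counts (an illustration, not the thesis's implementation):

```python
def bierec_stage1(pairs, slot_size):
    """Stage 1 of BiEREC: for each pair (left, right), pack the left
    block forward from the left slot boundary and the right block
    backward from the right boundary. Returns the unpacked bits of
    each pair and the free space left in each slot."""
    remaining, free = [], []
    for bl, br in pairs:
        used_l = min(bl, slot_size)            # left block, forward
        used_r = min(br, slot_size - used_l)   # right block, backward
        remaining.append((bl - used_l, br - used_r))
        free.append(slot_size - used_l - used_r)
    return remaining, free

pairs = [(11, 9), (9, 6), (4, 10), (3, 4)]   # the example of Figure 4.1
remaining, free = bierec_stage1(pairs, 14)
```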
[Figure 4.1: Operation of the BiEREC packing algorithm on four pairs of blocks and four slots, showing the structure after each of the four stages.]
In the second stage, the algorithm attempts to pack pairs p1 and p2, with remaining lengths (0,6) and (0,1), in slots S3 and S4, respectively (the other two pairs are already fully packed). It packs the right block of each pair first and hence succeeds in packing the right block of p2 in slot S4 (see Figure 4.1b). At the end of stage 2, the numbers of bits to be packed of the four pairs are (0,6), (0,0), (0,0) and (0,0).

In the third stage, the algorithm attempts to pack p1 in slot S2, since the offset is 1. It checks the left block of the pair first and then the right block. No data is packed in this stage. In the fourth stage, the algorithm attempts to pack p1 in slot S4, since the offset is 3. It packs the right block of p1 first. The final structure after packing is shown in Figure 4.1d.
The decoder can follow this algorithm while decoding the variable-length block data, proceeding until it detects the end of each block. At that point, it knows that the remaining contents of the slot belong to other blocks, placed in some later stage of the BiEREC algorithm. To explain the above in a bit more detail: in the first stage, starting from the left boundary of each slot, the decoder decodes until it reaches an EOB marker or hits the end of the slot. If EOB is reached, the decoder knows that it has completely decoded the block. Then it starts decoding from the right boundary of that slot until it reaches either the EOB of the right block of the current pair or the point where the decoder had stopped decoding from the left direction. If it reaches EOB, then the right block is completely decoded; else, the block has an overflow in some other slots. For the input of Figure 4.1, at the end of stage 1, it would extract the following partial segments of the four pairs: (11,3), (9,5), (4,10) and (3,4). In the subsequent stages, each pair with data still to be decoded continues searching for slots not fully decoded (i.e., slots with space remaining). The decoder uses the same offset sequence as the encoder. If the slot searched by a pair has some space available, then the decoder starts decoding. As in the encoder, the decoder also alternates its direction of decoding for each stage: so, in stage 2, it reads from the right first and then proceeds from the left.
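The stage-1 decode of a single slot can be sketched as follows. This is our own illustration: it operates on lists of symbols with an explicit 'EOB' marker, standing in for bit-level Huffman decoding, and the names are ours.

```python
EOB = 'EOB'

def decode_slot_stage1(slot):
    """Read the left block forward from the left boundary; the right block
    was packed backwards, so read it leftward from the right boundary,
    stopping where the forward pass stopped."""
    left, right = [], []
    left_done = right_done = False
    i = 0
    while i < len(slot):                   # forward pass: left block
        left.append(slot[i])
        i += 1
        if left[-1] == EOB:
            left_done = True
            break
    if left_done:                          # backward pass: right block
        j = len(slot) - 1
        while j >= i:
            right.append(slot[j])
            j -= 1
            if right[-1] == EOB:
                right_done = True
                break
    return left, left_done, right, right_done
```

A block whose EOB is not found before the passes meet has overflowed into another slot, exactly the situation the later stages resolve.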
In the absence of any transmission errors, the decoder can obtain the original block codes used by the encoder for bit organization. A decoder that can handle channel errors can be constructed as follows; the idea is similar to the one used for the EREC decoder (see section 3.2). While decoding a slot, if the decoder hits a bit sequence that is not a valid code or a prefix of a code, it detects an error. It marks the current block b erroneous, and tries to decode reading in the opposite direction from the other end of the slot. If this block is also detected to be erroneous, it marks the current slot s as erroneous, skips the rest of the slot and begins with the next block, since the starting location of the next block is known. In subsequent stages, if it determines that a block b' is continued into slot s, i.e., at stage k, s = b' + φ_k (modulo N), it declares b' to be erroneous as well.
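The slot-membership test in this rule is plain modular arithmetic; a sketch (names ours, offsets supplied by the caller):

```python
def tainted_at(bad_slot, pending_pairs, offsets, n_slots):
    """For each pending pair p, find the stages at which its residual search
    visits the erroneous slot: at stage k, p looks in (p + offsets[k]) % n_slots.
    Blocks of such pairs are declared erroneous as well."""
    hits = set()
    for k, off in enumerate(offsets, start=1):
        for p in pending_pairs:
            if (p + off) % n_slots == bad_slot:
                hits.add((k, p))
    return hits
```

With the example offset sequence 0, 2, 1, 3 and slot 4 (index 3) erroneous, pair 2 is tainted at stage 2 and pair 1 at stage 4.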
The main data structure of the BiEREC algorithm represents the BiEREC structure. One choice is to use a two-dimensional array of size N x M, where N is the number of slots and M is a pre-chosen integer that denotes the maximum number of fragments of blocks that can be packed in any slot. Each row represents a slot and each column a possible fragment of some block. Another choice for the BiEREC structure is to use an array of linked lists; here, a node is added to the linked list for each fragment of a block placed in a slot. We chose the 2D array as our data structure because the array implementation is faster, and computational overhead is a constraint for streaming multimedia services over wireless channels.

There are two arrays, Left_error[N] and Right_error[N], each of which indicates whether the corresponding slot had an error while decoding from the left or the right direction, respectively.
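These choices can be sketched as follows; the sizes and the fragment-record format are illustrative, not taken from the thesis code.

```python
N, M = 4, 8        # N slots per structure; M = pre-chosen max fragments per slot

# 2D fragment table: cell [n][m] records the m-th fragment placed in slot n,
# e.g. as a (block_id, side, n_bits) tuple; None marks an unused cell
fragments = [[None] * M for _ in range(N)]

# per-direction error flags, one per slot
NO_ERROR, ERROR = 0, 1
Left_error = [NO_ERROR] * N
Right_error = [NO_ERROR] * N
```

The fixed-size table costs some memory for rarely used columns, but every access is O(1), which matches the stated preference for low computational overhead.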
(Note that these are only representative of the data structures used in the actual code.)

3. Calculate the offset sequence for the given N and store it in offset_array[N].
...
7. if (Left_error[offset_slot_index] = NO_ERROR) and (s[offset_slot_index] = NOT_ERRONEOUS)
       put the block into the corresponding pair, with its position in the pair
   Else
       ...
   if (Right_error[offset_slot_index] = NO_ERROR) and (s[offset_slot_index] = NOT_ERRONEOUS)
       put the block into the corresponding pair, with its position in the pair
   Else
       ...
...
12. Perform steps 5-10 until all slots are completely decoded.
From the above algorithm, the following can be implied about error handling:

1. In BiEREC, if there are erroneous left and right blocks, all the blocks with fragments between these erroneous blocks are also declared erroneous, because the EOB of the erroneous blocks cannot be identified. Also, the slot is set to be erroneous. This observation holds whenever the decoder detects two errors in a slot, one while decoding data in the forward direction and the other while decoding in the backward direction.

2. Although it is possible that the slot might not have had any room for the current block, and that the block might have been packed in another slot in a later stage, the decoder knows only that a fragment of the block was placed in an erroneous slot. Hence it has to be declared erroneous.
To summarize, we pack data in slots from both directions, interleave the blocks, and perform the search for free space in a different order. We provide in the next chapter some metrics to measure the improvements, along with the results of the experiments. Here we illustrate the difference using the same example given in Figure 4.1. In Figure 4.2, we show the packing of the same 8 blocks using the EREC algorithm. Note that the number of slots is now 8, one per block.
[Figure 4.2: EREC packing of the same eight blocks into 8 slots of 7 bits each]
Figure 4.2 shows the final slot structure after completely packing the same blocks as in Figure 4.1. EREC takes 8 stages. We now show that BiEREC indeed reduces the fragmentation of blocks, which directly translates into higher error resiliency. The values show that the BiEREC method of packing does reduce fragmentation.
[Figure 4.3: the BiEREC and EREC structures of the example, each with a bit error at position 14]
Let us now compare BiEREC with EREC in terms of data loss, i.e., blocks affected, due to a single bit error. Assume that an error has occurred at bit position 14 during transmission, for both the BiEREC and EREC structures, as shown in Figure 4.3. Note that the structures in Figure 4.3 are the same as in the previous examples of Figure 4.1 and Figure 4.2. Now we shall do a stage-by-stage decoding of the structures and find out the blocks affected in each.
Table 4.1 shows the stage-by-stage decoding of the BiEREC structure of Figure 4.3. In the first stage, we find that all the blocks except 5 and 6 are completely decoded. Also, the decoder finds that block 5 is in error and so declares it to be erroneous. It also declares slot 1 to be erroneous (see sections 4.3, 4.4, 4.5). In the next stage, it completely decodes block 6, because slot 4 is still valid. The erroneous block 5 searches in slot 3 in the same stage, but since that slot is completely decoded, it does not get affected. At the end of 4 stages, we find from Table 4.1 that all the blocks except block 5 are decoded completely.
[Table 4.1: stage-by-stage decoding of the BiEREC structure of Figure 4.3]
Table 4.2 shows the stage-by-stage decoding for the EREC structure. In the first stage, the decoder finds block 2 to be erroneous and hence declares slot 2 to be erroneous (see Chapter 3). In the second stage, block 1 tries to search in slot 2. Since slot 2 is erroneous, block 1 also gets affected. From Table 4.2, we can see that at the end of 8 stages, blocks 1, 2 and 5 are affected due to the bit error at position 14. So, using this simple comparison, we can say that BiEREC loses fewer blocks than EREC for the same error.
As a side note, suppose the bit error was at position 28. In this case, we can see that BiEREC loses blocks 5 and 6, whereas for EREC, only block 5 gets affected. So, even though BiEREC loses fewer blocks than EREC in most cases (see section 5.3.2), there may be a few situations where EREC gives a better result than BiEREC.
CHAPTER 5
5.1 Objective
The objective of this chapter is to measure the improvements rendered by BiEREC over the EREC algorithm. Two approaches are applied. One approach is to compute statistics such as (1) the reduction in the number of blocks packed in multiple slots, (2) the reduction in the separation between multiple fragments of a block, and (3) the average number of blocks that may have to be discarded in case of a bit error. The other approach is to introduce random errors in the bit stream and measure the resulting loss of data.
During the course of this thesis work, separate programs that simulate the encoding and decoding of the EREC and BiEREC algorithms were developed. In order to examine the effectiveness of the two coding methods, JPEG compressed images were used as input.
To briefly explain the JPEG compression algorithm: first the image is split into blocks of 8 x 8 pixels. These blocks are transformed using an eight-point DCT applied separably in the horizontal and vertical directions. The 8 x 8 transform coefficients are quantized using a scalar quantizer; the quality of the coded picture and the compression ratio can be varied by adjusting the step size of the quantizer. The 8 x 8 quantized coefficients are scanned in a zigzag order from low frequencies to high frequencies. The nonzero coefficients of each block are run-length coded to produce data words that consist of a run-length and a level. These data words, together with an EOB word, are coded as variable-length code words using Huffman coding (see section 1.2). To know more about the JPEG standard, see [7] and [4]. Due to both the presence of the EOB code word and the prefix nature of Huffman coding, each variable-length block is suitable for EREC-style packing, in the sense that channel errors affect the current and following data only.
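The zigzag scan and run-length step described above can be sketched as follows. This is a simplified illustration of those two stages only, not the full JPEG entropy coder; the Huffman stage is omitted and the names are ours.

```python
def zigzag_indices(n=8):
    """Cell order of the zigzag scan over an n x n block, from low to high
    frequency: walk the anti-diagonals, alternating direction."""
    return sorted(((i, j) for i in range(n) for j in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else p[1]))

def run_length(coeffs):
    """(zero-run, level) data words for the nonzero coefficients, terminated
    by an EOB marker; trailing zeros are covered by EOB."""
    last = max((k for k, c in enumerate(coeffs) if c), default=-1)
    out, run = [], 0
    for c in coeffs[:last + 1]:
        if c == 0:
            run += 1
        else:
            out.append((run, c))
            run = 0
    out.append('EOB')
    return out
```

For example, the coefficient sequence 5, 0, 0, 3 followed by zeros becomes the words (0,5), (2,3), EOB, which would then be Huffman coded.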
We used two well-known images as test input; these are shown in Figure 5.1 (Gold) and Figure 5.2 (Lena). Gold is highly 'blocky', meaning that the sizes of its block codes vary widely. If the image is scanned from top to bottom, the block sequence is such that some consecutive blocks have code lengths higher than the average and some later blocks have lengths lower than the average. This helps in testing the advantage of the interleaving feature of the BiEREC algorithm. Lena was chosen because it is widely used to report image coding results.
[Figure 5.1: Gold test image]
[Figure 5.2: Lena test image]
Parameters of the EREC and BiEREC structures are basically the same, except that there are N slots in EREC and N/2 slots in BiEREC; however, both pack the same N blocks per structure. For example, if the number of slots is 256 for BiEREC, then it is 512 for EREC, so that the number of blocks per structure is the same.
For the simulations, we considered only the data part of the JPEG compressed data and ignored the JPEG headers that contain the control information. Our assumption is that the header is protected by error-correcting codes, as it is necessary for the header to be correctly decoded in order to decode the data stream that follows it.
This section reports the results of two simulations; one simulates the BiEREC packing algorithm and the other the EREC algorithm. Three sets of results are given. The objective of the first set is to measure the extent of fragmentation of each block. The second set measures the effect of propagation of an error, and the third set shows the effect of errors on decoded images.
5.3.1 Measurement of Fragmentation

This metric measures the separation between fragments of blocks that result from packing the block codes into BiEREC/EREC structures. The packing-offset-quotient (poq) is defined for the EREC structure first, due to its simplicity. Consider the t-th EREC structure packed during coding of an image. Let B_i denote a block of length b_i, and S_i a slot of length s_i, i = 0, 1, ..., N-1. Let the encoder divide B_i into n_i fragments and pack the k-th fragment, B_ik, in slot S_ik, k = 1, 2, ..., n_i, n_i >= 1. Let b_ik denote the starting position of the k-th fragment in slot S_ik. The sequence i_k, k = 1, 2, ..., n_i, is a subsequence of i, i+1, ..., N-1, 0, 1, ..., i-1, considering that the search for free space is circular. Define d_ik as the separation between two consecutive fragments, B_i(k-1) and B_ik: it is the sum of the sizes of all slots between i_(k-1) and i_k, exclusive, plus the offset b_ik:

    d_ik = Σ_{j = i_(k-1)+1}^{i_k - 1} s_j + b_ik
Let us consider the example shown in Figure 5.2. Block 5 in the example has 3 segments, in slots 4, 5 and 6. The segment in slot 5 is the initial placement and the segment in slot 4 is the last one placed.

Let

    d_i = Σ_{k=1}^{n_i} d_ik,  and  D^t = Σ_{i=0}^{N-1} d_i,

so that D^t is the total separation of fragments over all blocks of that structure. Let

    S^t = Σ_{i=0}^{N-1} s_i,

i.e., S^t denotes the size of the t-th EREC structure in number of bits. The poq of the t-th structure is D^t / S^t, and the poq of an image is the average of these ratios over t = 1, ..., m, where m = number of blocks in the image file / N. In a few words, the poq of an image is the summation of the offsets of the fragments of every block from the previous placement of its fragments, averaged over the whole image.
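The definition translates directly into code; the following sketch is ours, and the bookkeeping format for fragment placements is an assumption, not the thesis' own representation.

```python
def poq(fragments, slot_sizes):
    """poq of one structure.  fragments maps each block to its placements
    [(slot_index, start_offset), ...] in packing order; for each consecutive
    pair of placements we add the sizes of the slots skipped in between
    (circular search order) plus the start offset b_ik in the target slot,
    then divide by the structure size in bits."""
    n = len(slot_sizes)
    D = 0
    for places in fragments.values():
        for (prev_slot, _), (slot, b) in zip(places, places[1:]):
            j = (prev_slot + 1) % n
            while j != slot:               # slots strictly between, exclusive
                D += slot_sizes[j]
                j = (j + 1) % n
            D += b
    return D / sum(slot_sizes)
```

For instance, with eight 7-bit slots, a block placed first in slot 0 and continued in slot 2 at offset 3 contributes d = s_1 + 3 = 10 to D.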
The reason for taking the separation between two consecutive fragments of a block as the basis of this metric is that the EREC decoder declares the remaining part of a block erroneous if an error is detected in any slot S_j, where i_(k-1) < j < i_k, in which fragment B_ik could be. In this case, the rest of the block B_i is not extracted from the coded stream and this part of B_i is not used in the reconstruction.
The above definition of poq can be extended to the BiEREC algorithm. For the EREC algorithm, all the offsets are measured from the left boundary of slots. In the BiEREC algorithm, N blocks (where N is the number of slots) are packed in the forward direction, and hence the displacement of their fragments is measured from the left boundary. The other N blocks are packed in the backward direction, and hence their offsets are measured from the right boundary.
We will take an example to show the difference. First we calculate the poq of the structure given in Figure 5.2; it was packed using the EREC algorithm. There are 8 slots, each of size 7. The numbers of fragments and their offsets for each block are as follows:

Table 5.1 Offsets of fragments for EREC

The sum D for this structure is 67, and the ratio of D to the size of the structure is 67 / (8 x 7) = 1.19.
Now we calculate the poq for the structure (which has the same set of blocks as in Figure 5.2) in Figure 5.1, which was constructed using the BiEREC algorithm (Table 5.2). The slot size is 14 and the number of slots is 4. The sum D for this structure is 54, and the ratio of D to the size of the structure is 54 / (14 x 4) = 0.96. From the definition of poq, we can infer that the higher the poq, the higher the fragmentation of blocks.
Table 5.3 shows the poq of EREC and BiEREC for the test images. The final column in the table shows the ratio of the poq of BiEREC to the poq of EREC. This ratio is far smaller than 1 in every case, which clearly indicates that the BiEREC packing method results in far less fragmentation of blocks. Hence, the output of the BiEREC encoder is less susceptible to propagation of errors. This was the goal in developing the BiEREC algorithm.
Figure 5.3 presents two graphs to show the distribution of poq over consecutive structures of an image. The graphs plot the poq of each structure, considering the output of both encoders. Note that the poq of a structure depends strongly on the characteristics of the image blocks packed in it: if the variation in the code size of consecutive blocks is large, fragmentation increases and hence so does the poq of the structure. Another point to be noted is that a BiEREC/EREC structure is the basic unit of transmission, and the packing should be efficient not only for the whole image but also for each individual structure. The graph in Figure 5.3 shows that, irrespective of the distribution of data in a structure, the BiEREC packing method results in a smaller poq for each structure. Another observation we can make from these graphs is that the variance in poq is very large for EREC, whereas it is stable for BiEREC. This stability is due to the interleaving of blocks implemented in BiEREC.
[Figure 5.3: poq per BiEREC/EREC structure number, for both encoders]
5.3.2 Measurement of Error Propagation
When a bit error is detected in a slot, it may affect every block a part of which is placed in that slot. More specifically, let a bit error occur in slot S_k, and let B_j be a block that is yet to be fully extracted, i.e., the encoder had searched S_{j+1}, S_{j+2}, ..., S_{k-1} for free space without success; then B_j is affected as well. We simulated transmission errors over a wireless channel by inserting random errors into the data stream. For this simulation, a BER was assumed and, based on the total code size, the number of bit errors was computed. The locations of the bit errors were randomly generated. It was assumed that the decoder would fail to process erroneous blocks from the position of the error bits. Note that the error bit may be in any fragment of a block, if the block was fragmented. The objective of the simulation was to compute the number of blocks affected and the percentage of total bits lost due to transmission errors. Affected blocks are those in which a bit error was found and which were found to be completely unrecoverable.
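The error-injection step can be sketched as follows; this is our own illustration of the stated procedure, with names and the fixed seed being assumptions.

```python
import random

def inject_errors(n_bits, ber, seed=0):
    """Pick random bit-error locations: the error count is derived from the
    total code size and the assumed BER, and locations are uniform."""
    rng = random.Random(seed)
    n_errors = max(1, round(n_bits * ber))
    return sorted(rng.sample(range(n_bits), n_errors))
```

For a 10 000-bit stream at a BER of 0.001, this yields 10 distinct error positions.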
The results of the simulation are given in Table 5.4 for different BERs. The value of N for EREC is 256.
Table 5.4 Propagation of Errors for various BER
The results given in Table 5.4 show that a small number of bit errors may affect a large number of blocks in an EREC structure. However, the number of affected blocks in a BiEREC structure is less than half that of EREC. To elaborate on this observation, consider the BER of 0.001 (i.e., 1 per 1000 bits). For EREC, N is 256; assuming an average slot size of 80 bits and that the random errors are far apart, a BER of 0.001 implies that about 20 of the 256 slots of an EREC structure are erroneous, i.e., about 2/25 of the structure. The results show that about 1/3 of the Gold blocks and 1/4 of the Lena blocks are affected due to these errors. It means that the number of blocks affected is 4-6 times the number of slots in error. The results given in the last column of Table 5.4 show that error propagation affects at least twice as many blocks when EREC is used compared to BiEREC.
The reason for the reduction in error propagation for BiEREC is that it packs a greater portion of each block per slot, and hence fewer blocks are affected.
Instead of considering a fixed number of errors per structure as per the BER, next we report the results of experiments with a given number of errors over the whole Lena image. In Figure 5.4, the percentage of bits lost is plotted for various numbers of errors. The number of blocks per structure is 256. The graph shows that the percentage of bits lost increases with the number of errors; however, the increase for BiEREC is substantially less than that for EREC. These graphs depict the error resilience of the two methods. Consider the case where the number of errors = 16. With the number of blocks in an EREC/BiEREC structure = 256 and the total number of blocks in Lena = 4096, there is one bit error per structure. In the ideal situation, one bit error should affect only one block of the structure, i.e., it should affect only a 1/4096 part (i.e., about 0.025%) of the total coded image. With 16 errors, in the ideal situation, the percentage of bits affected should be about 0.4%. The plot for EREC shows that the percentage of bits affected is 6.25, which is roughly 16 times 0.4. However, the same for BiEREC is 1.2, which is only about 3 times. In summary, BiEREC substantially reduces the loss caused by error propagation.
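The ideal-case figures quoted above check out numerically; this quick verification is ours, not from the thesis.

```python
total_blocks = 4096      # 512 x 512 image of 8 x 8 blocks
n_errors = 16
ideal_pct = 100 * n_errors / total_blocks   # one block lost per error
# 16 / 4096 = 0.390625 %, quoted as roughly 0.4 %
print(round(ideal_pct, 2))
# EREC loses 6.25 % of the bits, about 16 times the ideal loss
print(round(6.25 / ideal_pct, 1))
# BiEREC loses 1.2 %, about 3 times the ideal loss
print(round(1.2 / ideal_pct, 1))
```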
[Figure 5.4: percentage of bits lost vs. number of errors, for BiEREC and EREC]
We examine error propagation further by plotting a graph that shows error propagation between successive stages of processing of the structure during decoding. Recall from section 4.3 that the decoder is iterative. In each iteration, say i, it decodes each slot s_j, 1 <= i, j <= N. If a slot is erroneous, the decoder declares erroneous those blocks that are packed in the slot. Also, in the succeeding stages, all the slots that are searched by the erroneous blocks are declared erroneous, and so on.
Figure 5.5 presents the percentage of blocks affected in each stage, due to the errors detected in the previous stage. The results are presented for Lena with the number of blocks = 256.
[Figure 5.5: percentage of blocks affected per decoding stage, for BiEREC and EREC]
The plot of each graph can be divided into two parts: in the first part, it drops very fast, and in the second part, it remains essentially flat. The first part shows that error propagation reduces significantly within the initial stages. The reason is that most blocks can be packed in the first few stages; after that, most slots are full, and therefore any block not yet fully packed must find space in a faraway slot, which is determined in a much later stage. Figure 5.5 shows that error propagation dies out much earlier for BiEREC. In other words, BiEREC packs the residual of a block in the vicinity of its initial placement.
5.3.3 Comparison of Decoded Erroneous Images
This section demonstrates the visual effect of transmission errors on the Lena image. The JPEG code of the image was packed by each of the EREC and BiEREC algorithms, and 20 bit errors were introduced randomly into the packed data. Each error caused a decoding error of a certain DCT coefficient of some JPEG block. In the blocks in which an error was detected, all the DCT coefficients were truncated to zero, since the exact values could not be recovered.
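The concealment step can be sketched as follows; the function name and data layout are ours, and the inverse DCT itself is omitted.

```python
def conceal(block_coeffs, erroneous):
    """Zero every DCT coefficient of each block flagged erroneous, as done
    for the visual comparison; untouched blocks are copied through."""
    return [[0] * len(c) if i in erroneous else list(c)
            for i, c in enumerate(block_coeffs)]
```

After the inverse transform and level shift, an all-zero coefficient block renders as a flat mid-grey square, which is what makes the affected regions visible in the figures below.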
Lena obtained from the simulation of EREC is shown in Figure 5.6. The figure shows distortion in several consecutive blocks: a single error in a slot can affect blocks that were not fully packed in some previous slots, i.e., the error propagates across slots.

Lena obtained from the simulation of BiEREC is shown in Figure 5.7. Notice that the number of consecutive blocks affected by a single error is very small. This confirms the reduced error propagation of BiEREC.
Figure 5.6 Lena recovered using EREC
[Figure 5.7: Lena recovered using BiEREC]
In Figure 5.6, we see predominantly horizontally consecutive blocks affected, whereas in Figure 5.7, vertically consecutive blocks are affected. Note that this 512 x 512 image was encoded by coding rows of 64 blocks of 8 x 8 pixels each. Since the number of blocks per structure is 256, each structure spans four scan rows of blocks. In BiEREC, since blocks are interleaved during packing, a block in the next scan row of the image may be packed along with the present block into one slot. It means that blocks i and i + 128 were probably packed in a single slot by the BiEREC encoder, and i, i + 64, i + 128, i + 192 were placed in stage 2, and so on. So, due to propagation of error between blocks packed in a single structure, two blocks that lie above each other in the image may both be affected, which produces the vertical pattern.
CHAPTER 6
CONCLUSIONS AND FUTURE WORK
6.1 Conclusions
This thesis presented a study of the coding of multimedia streams for transmission over wireless channels. It reports some improvements to increase the error resiliency of the data. This research was motivated by the work of Kingsbury et al. [1], described as EREC, which provides a method of packing variable-length codes so that they can be transmitted without adding any redundant data, such as synchronization code words, to the actual data.
Our research extends the basic idea of EREC to overcome the limitations stated in section 3.3. The main limitation of EREC is that it causes excessive fragmentation of blocks, which increases the propagation of errors. The improvements suggested in this thesis aim to reduce the fragmentation of blocks. The motivation behind each of these improvements is explained in section 4.2.
6.2 Contribution of this Research
The main contribution of this research is a bidirectional, interleaved packing of block codes, which helps in reducing the propagation of errors and hence improves error resilience. We compared the poq of the BiEREC and EREC encoders; the experiments (Table 6.1) show that the BiEREC method packs efficiently, not only for a whole image but also for individual BiEREC structures.
In section 5.2.2, we measured the amount of data lost at different BERs for both the BiEREC and EREC decoders, and found that the number of blocks lost with BiEREC is always less than that with EREC by a factor of 2 or more. We also plotted a graph showing the stage-by-stage propagation of errors for both BiEREC and EREC decoding of the Lena image. In section 5.2.3, we compared two erroneous images, one decoded using EREC and the other using BiEREC, to measure the visual effect, and found that the picture decoded using BiEREC has far fewer losses than the one decoded using EREC.
In summary, the proposed extension of EREC, named BiEREC, does achieve the stated objective of increased error resiliency.
6.3 Future Work
1. The results presented in Chapter 5 are for still images. The input to the BiEREC encoder/decoder could be extended to video bit streams.
REFERENCES

[1] David W. Redmill and Nick G. Kingsbury: "The EREC: An Error-Resilient Technique for Coding Variable-Length Blocks of Data," IEEE Trans. Image Processing, 5(4), pp. 564-574, April 1996.

[2] Wee Sun Lee, Mark R. Pickering, Michael R. Frater, and John F. Arnold: "Error Resilience in Video and Multiplexing Layers for Very Low Bit Rate Video Coding Systems," IEEE Journal on Selected Areas in Communications, 15(9), pp. 1764-1774, December 1997.

[3] Bernd Girod and Niko Farber: "Feedback-Based Error Control for Mobile Video Transmission," Proceedings of the IEEE, 87(10), pp. 1707-1723, October 1999.

[4] International Organization for Standardization (ISO): "JPEG digital compression and coding of continuous-tone images," Draft ISO 10918, 1991.

[5] ISO/IEC International Standard 13818-2, Generic Coding of Moving Pictures and Associated Audio Information: Video, 1995.

[6] ITU-T Recommendation H.263, Video Coding for Low Bit Rate Communication.

[7] Mark Nelson, The Data Compression Book, Redwood City, CA: M&T Books, 1991.

[8] Khalid Sayood, Introduction to Data Compression, San Francisco, CA: Morgan Kaufmann Publishers, 1996.

[9] Guy Cote, Berna Erol, Michael Gallant and Faouzi Kossentini: "H.263+: Video Coding at Low Bit Rates," IEEE Trans. Circuits and Systems for Video Technology, 8(7), pp. 849-865, November 1998.

[10] Wen-Jeng Chu and Jin-Jang Leou: "Detection and Concealment of Transmission Errors in H.261 Images," IEEE Trans. Circuits and Systems for Video Technology, 8(1), pp. 74-83, February 1998.

[11] W.P.L. Soon and W.C. Wong: "Robust H.263 Portable Video Coding," Int. Conf. on Information, Communications and Signal Processing, ICICS '97, pp. 306-310, September 1997.

[12] Man Young Rhee, CDMA Cellular Mobile Communications and Network Security, Upper Saddle River, NJ: Prentice Hall PTR, 1998.

[13] Andrew J. Viterbi, Principles of Spread Spectrum Communications, Reading, MA: Addison-Wesley, 1995.

[15] J.C. Maxted and J.P. Robinson: "Error Recovery for Variable-Length Codes," IEEE Trans. Inform. Theory, IT-31, pp. 794-801, 1985.

[16] T.J. Ferguson and J.H. Rabinowitz: "Self-Synchronizing Huffman Codes," IEEE Trans. Inform. Theory, 30, pp. 687-693, 1984.

[17] B.L. Montgomery and J. Abrahams: "Synchronization of Binary Source Codes," IEEE Trans. Inform. Theory, IT-32, pp. 849-854, 1986.

[18] W.M. Lam and A.R. Reibman: "Self-Synchronizing Variable-Length Codes for Image Transmission," in Proc. Int. Conf. Acoustics, Speech, and Signal Processing, 1992, pp. III-477-III-480.

[19] N. MacDonald: "Transmission of Compressed Video over Radio Links," SPIE Visual Communications and Image Processing, pp. 1484-1488, 1992.

[20] N.T. Cheng and N.G. Kingsbury: "The ERPC: An Efficient Error-Resilient Technique for Encoding Positional Information of Sparse Data," IEEE Trans. Commun., COM-40, pp. 140-148, 1992.

[21] N.T. Cheng: "Error-Resilient Video Coding for Noisy Channels," PhD thesis, Cambridge Univ. Eng. Dept., 1991.

[22] D.W. Redmill and N.G. Kingsbury: "Improving the Error Resilience of Entropy Coded Video," in Image Processing: Theory and Applications, Proc. IPTA 93, San Remo, Italy, June 14-16, 1993.

[23] D.W. Redmill: "Image and Video Coding for Noisy Channels," PhD thesis, Cambridge Univ. Eng. Dept., 1994.

[24] D.W. Redmill and N.G. Kingsbury: "Still Image Coding for Noisy Channels," in IEEE Int. Conf. Image Processing, vol. 1, Austin, TX, 1994, pp. 95-99.

[25] M. Liou: "Overview of the p x 64 kbit/s Video Coding Standard," Commun. ACM, 34(4), pp. 59-63, 1993.
PERMISSION TO COPY

In presenting this thesis in partial fulfillment of the requirements for a master's degree at Texas Tech University or Texas Tech University Health Sciences Center, I agree that the Library and my major department shall make it freely available for research purposes. Permission to copy this thesis for scholarly purposes may be granted by the Director of the Library or my major professor. It is understood that any copying or publication of this thesis for financial gain shall not be allowed without my further written permission and that any user may be liable for copyright infringement.