
Multimedia Systems

Chapter 7: Data Compression (2)


Outline
• Entropy Encoding
– Arithmetic Coding
• Predictive Coding
– Lossless Predictive Coding
• Differential Coding
– Lossy Predictive Coding
• Differential Pulse Code Modulation (DPCM)
• Delta Modulation (DM)
Entropy Encoding: Arithmetic Coding
• Initial idea introduced in 1948 by Shannon
– Many researchers worked on this idea
– Modern arithmetic coding can be attributed to Pasco (1976) and Rissanen and Langdon (1979)
• Arithmetic coding treats the whole message as one unit
– In practice, the input data is usually broken up into chunks to avoid error propagation
Entropy Encoding: Arithmetic Coding
• A message is represented by a half-open interval [a, b), where a, b ∈ ℝ
– General idea of encoding
• Map the message into a half-open interval [a, b)
• Find a binary fractional number with minimum length that belongs to this interval; this will be the encoded message
– Initially, [a, b) = [0, 1)
Entropy Encoding: Arithmetic Coding
– When the message becomes longer, the length of
the interval shortens and the # of bits needed to
represent the interval increases
• Coding Algorithm
Algorithm ArithmeticCoding
// Input:  symbol: input stream of the message
//         terminator: terminator symbol
//         Low[] and High[]: all symbols' ranges
// Output: binary fractional code of the message
low = 0; high = 1; range = 1;
do {
    get(symbol);
    high = low + range * High(symbol);
    low = low + range * Low(symbol);
    range = high - low;
} while (symbol != terminator);
return CodeWord(low, high);
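A minimal Python sketch of this encoding loop (an illustration, not the lecture's reference code). The ranges table mapping each symbol to its [Low, High) subinterval is assumed to be given, and plain floats limit it to short messages:

def arithmetic_encode(message, ranges, terminator='$'):
    # Narrow [low, high) by each symbol's subinterval, as in the algorithm above.
    low, high = 0.0, 1.0
    for symbol in message:                     # message must end with the terminator
        rng = high - low
        high = low + rng * ranges[symbol][1]   # High(symbol)
        low = low + rng * ranges[symbol][0]    # Low(symbol)
        if symbol == terminator:
            break
    return low, high                           # any fraction in [low, high) encodes the message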
• Binary code generation
Algorithm CodeWord
// Input:  low and high
// Output: binary fractional code
code = 0; k = 1;
while (value(code) < low) {
    assign 1 to the k-th binary fraction bit;
    if (value(code) > high)
        replace the k-th bit by 0;
    k++;
}
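The bit-generation step, sketched in Python under a slightly stricter comparison (the value is kept strictly below high so the result stays inside the half-open interval):

def codeword(low, high):
    # Grow the binary fraction 0.b1b2b3... one bit at a time, never letting its
    # value reach high, until it enters the interval [low, high).
    bits, value, weight = [], 0.0, 0.5
    while value < low:
        if value + weight < high:    # setting the k-th bit to 1 keeps us below high
            bits.append(1)
            value += weight
        else:                        # otherwise leave the k-th bit at 0
            bits.append(0)
        weight /= 2
    return bits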
• Example: Assume S = {A, B, C, D, E, F, $}, where $ is the terminator symbol. In addition, assume the following probabilities for each character:
– Pr(A) = 0.2
– Pr(B) = 0.1
– Pr(C) = 0.2
– Pr(D) = 0.05
– Pr(E) = 0.3
– Pr(F) = 0.05
– Pr($) = 0.1
Generate the fractional binary code of the message CAEE$.
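As a sketch of how the pieces fit together, the cumulative table below assigns the subintervals in the order A, B, C, D, E, F, $ (the ordering is an assumption; any fixed order shared with the decoder works):

ranges = {'A': (0.00, 0.20), 'B': (0.20, 0.30), 'C': (0.30, 0.50),
          'D': (0.50, 0.55), 'E': (0.55, 0.85), 'F': (0.85, 0.90),
          '$': (0.90, 1.00)}

low, high = arithmetic_encode('CAEE$', ranges)
print(low, high)              # roughly [0.33184, 0.3322) with this symbol order
print(codeword(low, high))    # shortest binary fraction inside that interval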
• It can be proven that \lceil \log_2 (1 / \prod_i p_i) \rceil is the upper bound on the number of bits needed to encode a message
– In our case, the maximum is equal to 12
– When the length of the message increases, the range decreases and the upper bound value increases
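– Check: for CAEE$, \prod_i p_i = 0.2 \times 0.2 \times 0.3 \times 0.3 \times 0.1 = 0.00036, so the bound is \lceil \log_2(1/0.00036) \rceil = \lceil 11.44 \rceil = 12 bits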

• Generally, arithmetic coding outperforms Huffman coding
– Treats the whole message as one unit vs. an integral number of bits to code each character in Huffman coding
– Redo the previous example CAEE$ using Huffman coding and notice how many bits are required to code this message

• Decoding Algorithm
Algorithm ArithmeticDecoding
// Input:  code: binary code
//         Low[] and High[]: all symbols' ranges
// Output: the decoded message
value = convert2decimal(code);
do {
    find a symbol s so that Low(s) <= value < High(s);
    output s;
    low = Low(s); high = High(s); range = high - low;
    value = (value - low) / range;
} while (s is not the terminator symbol);
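A matching Python sketch of the decoder (same assumed ranges table as in the encoder sketch above):

def arithmetic_decode(value, ranges, terminator='$'):
    # Repeatedly find the symbol whose [Low, High) subinterval contains value,
    # output it, then rescale value back into [0, 1) and continue.
    out = []
    while True:
        symbol = next(s for s, (lo, hi) in ranges.items() if lo <= value < hi)
        out.append(symbol)
        if symbol == terminator:
            return ''.join(out)
        lo, hi = ranges[symbol]
        value = (value - lo) / (hi - lo)

# e.g. arithmetic_decode(0.332, ranges) recovers 'CAEE$' with the table above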
• Example
Predictive Coding
• Predictive coding simply means transmitting
differences
– Predict the next sample as being equal to the
current sample
• More complex prediction schemes can be used
– Instead of sending the current sample, send the
error involved in the previous assumption
Predictive Coding: Why?
• The idea of forming differences is to make
the histogram of sample values more peaked.
– In this case, what happens to the entropy?
– As a result, which is better to compress?
Lossless Predictive Coding
• Formally, define the integer signal as the set of values f_n. Then, we predict values \hat{f}_n and compute the error e_n as follows:

    \hat{f}_n = \sum_{k=1}^{t} a_{n-k} f_{n-k}

    e_n = f_n - \hat{f}_n
– when t = 1, we get simple differential coding: the prediction is based on the previous sample alone
– Usually, t is between 2 and 4 (in this case it is called a
linear predictor)
– We might need to have a truncating or rounding
operation following the prediction computation
Lossless Predictive Coding: Example
• Consider the following predictor:

    \hat{f}_n = \lfloor (f_{n-1} + f_{n-2}) / 2 \rfloor

    e_n = f_n - \hat{f}_n

Show how to code the following sequence:

    f_1, f_2, f_3, f_4, f_5 = 21, 22, 27, 25, 22
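A small Python sketch of this example; the start-up convention is an assumption (f_1 is sent unchanged and f_2 is predicted from f_1 alone):

def predictive_encode(f):
    # Predictor f^_n = floor((f_{n-1} + f_{n-2}) / 2); the returned errors e_n
    # are what would actually be entropy-coded and transmitted.
    errors = [f[0]]                       # f_1 sent as-is (assumed convention)
    for n in range(1, len(f)):
        pred = f[0] if n == 1 else (f[n-1] + f[n-2]) // 2
        errors.append(f[n] - pred)
    return errors

print(predictive_encode([21, 22, 27, 25, 22]))   # -> [21, 1, 6, 1, -4]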


Lossless Predictive Coding
• Examples in the Image Compression Domain
– Differential Coding
– Lossless JPEG
Lossy Predictive Coding: DPCM
• DPCM = Differential Pulse Code
Modulation
– Form the prediction \hat{f}_n
– Form the error e_n
– Quantize the error
Lossy Predictive Coding: DPCM
• The distortion is the average squared error
– To illustrate the quality of a compression scheme, diagrams of distortion vs. the number of bit levels used are usually shown
– Quantization used
• Uniform
• Lloyd-Max
– Does better than "Uniform"
Lossy Predictive Coding: DPCM
• Example

    \hat{f}_n = \lfloor (\tilde{f}_{n-1} + \tilde{f}_{n-2}) / 2 \rfloor

    e_n = f_n - \hat{f}_n

    \tilde{e}_n = Q[e_n] = 16 \lfloor (255 + e_n) / 16 \rfloor - 256 + 8

    \tilde{f}_n = \hat{f}_n + \tilde{e}_n

Show how to code the following sequence:

    f_1, f_2, f_3, f_4, f_5 = 130, 150, 140, 200, 230
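A Python sketch of this DPCM example; the start-up convention is again an assumption (the first two samples are sent unquantized, so encoder and decoder both start from \tilde{f}_1 = f_1 and \tilde{f}_2 = f_2):

def dpcm_encode(f):
    # Prediction uses the *reconstructed* values ~f_n, so the encoder and the
    # decoder stay in lock-step despite the lossy quantizer.
    recon = list(f[:2])                              # ~f_1, ~f_2 (start-up assumption)
    q_errors = []
    for n in range(2, len(f)):
        pred = (recon[n-1] + recon[n-2]) // 2        # f^_n
        e = f[n] - pred                              # e_n
        e_q = 16 * ((255 + e) // 16) - 256 + 8       # ~e_n = Q[e_n]
        q_errors.append(e_q)
        recon.append(pred + e_q)                     # ~f_n
    return q_errors, recon

q_errors, recon = dpcm_encode([130, 150, 140, 200, 230])
print(q_errors)   # quantized errors for f_3..f_5: [-8, 56, 72] under this convention
print(recon)      # reconstruction: [130, 150, 132, 197, 236]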
Lossy Predictive Coding
• DM (Delta Modulation) is a simplified
version of DPCM that is used as a quick
analog-to-digital converter.
– Note that the prediction simply involves a delay
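A minimal Delta Modulation sketch (the 1-bit quantizer and the fixed step size are assumptions for illustration; the predictor is just the previous reconstructed value, i.e. a one-sample delay):

def delta_modulate(samples, step=4):
    # Transmit one bit per sample: did the signal go up or down relative to
    # the running reconstruction?
    recon = [samples[0]]                  # the decoder is assumed to know f_1
    bits = []
    for x in samples[1:]:
        up = x >= recon[-1]               # sign of the prediction error
        bits.append(1 if up else 0)
        recon.append(recon[-1] + (step if up else -step))
    return bits, recon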
