
EC3021D

Information Theory and Coding


Coding Assignment Report

Submitted By:
Simmon Mathew Shaji
Roll No: B200932EC
Class: EC04
Question 1
Implement Huffman and Lempel-Ziv encoders that accept text files as the input and
decoders that give the text file back. (Implement 4 Matlab functions: Huffman encoder,
Huffman decoder, Lempel-Ziv encoder, and Lempel-Ziv decoder). You may have to convert
characters to ASCII and then ASCII to binary before encoding and convert back after
decoding. Try this on your own text file and give the compression ratio in each case.

Huffman

Theory
Huffman coding is a lossless data compression algorithm that is
commonly used in various applications such as data transmission, file
compression, and image compression. The algorithm was invented by
David A. Huffman in 1952 and is based on the concept of entropy
encoding.

The theory behind Huffman coding involves the creation of a binary tree
structure called a Huffman tree, which represents the frequency
distribution of characters or symbols in a given input message. The goal
is to assign shorter codes to more frequently occurring characters and
longer codes to less frequently occurring characters, thus achieving a
more efficient compression of the input message.

The Huffman tree is constructed using a bottom-up approach, starting
with the individual characters in the input message and gradually
merging them into larger groups based on their frequency of occurrence.
This is done by repeatedly selecting the two nodes with the smallest
frequency values and merging them into a new node, whose frequency is
the sum of the two nodes' frequencies. This process continues until all
the nodes have been merged into a single root node, which represents
the entire input message.
Once the Huffman tree is constructed, the next step is to assign codes
to each character based on their position in the tree. The code for each
character is generated by tracing a path from the root of the tree to
the leaf node that represents that character. Each time the path
descends to the left child of a node, a binary 0 is added to the code,
and each time it descends to the right child, a binary 1 is added. The
resulting codes are guaranteed to be unique and prefix-free, meaning
that no code is a prefix of another code, which ensures that the code
can be unambiguously decoded.
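The merge-and-label procedure described above can be sketched in a few lines. The following is an illustrative Python sketch (not part of the submitted MATLAB code, and `huffman_codes` is a hypothetical helper name): it keeps a min-heap of subtrees, merges the two lightest, and prepends a bit to every code in the merged subtrees.

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Classic Huffman construction: repeatedly merge the two lightest subtrees."""
    freq = Counter(text)
    if len(freq) == 1:  # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    # Each heap entry: (weight, tiebreak id, {symbol: code_so_far})
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        # Descending into the lighter subtree adds a 0, the heavier adds a 1
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

codes = huffman_codes("abracadabra")
```

For "abracadabra", the most frequent symbol 'a' receives a 1-bit code while the rarer symbols receive 3-bit codes, and no codeword is a prefix of another.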

The efficiency of Huffman coding depends on the frequency distribution
of the characters in the input message. If the message contains a small
number of frequently occurring characters, the compression ratio can be
very high. Conversely, if the message contains a large number of unique
characters or if the frequency distribution is uniform, the compression
ratio will be lower.
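This dependence on the frequency distribution is quantified by Shannon entropy: the average Huffman code length L satisfies H &lt;= L &lt; H + 1 bits per symbol, where H is the entropy of the source. A small illustrative Python helper (an assumption for this report, not part of the submitted code) computes the empirical entropy of a text:

```python
from collections import Counter
from math import log2

def entropy(text):
    """Shannon entropy (bits/symbol) of the empirical symbol distribution."""
    n = len(text)
    return -sum((c / n) * log2(c / n) for c in Counter(text).values())

# A uniform distribution gives the maximum entropy for its alphabet size:
# entropy("abcd") is 2.0 bits/symbol, while the skewed entropy("aaab") is lower.
```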

Overall, Huffman coding provides a simple yet effective way to compress
data by exploiting the redundancy in the input message. Its ability to
generate optimal codes based on the frequency distribution of the
characters makes it a popular choice for various applications where
efficient data compression is required.
Example:

Code And Implementation (Matlab)


Huffman Table:
Code:
function Table = huffman_table(vector)
% Find unique characters from vector
U = unique(vector);

% Find frequency and probability of each unique character
f = histc(vector, U);
P = [U.' (f/length(vector)).'];

% Sort symbols by descending probability and assign codewords
% (a simplified, unary-style prefix code rather than a full Huffman tree)
Q = sortrows(P, 2, 'descend');
Depth = 0;
for i = 1:size(U, 2)-1
    code = repmat('1', [1, Depth]);
    H{i, 1} = Q(i, 1);
    H{i, 2} = string(code + string(0));
    Depth = Depth + 1;
end
H{size(U, 2), 1} = Q(size(U, 2), 1);
H{size(U, 2), 2} = repmat('1', [1, Depth]);

% Combine unique characters and their binary codes into a cell array
Table = H;
end

Implementation:

The purpose of this code is to create a Huffman table, which is a lookup table
that maps characters in a given input vector to their corresponding binary
codes. This is useful for data compression, as Huffman coding is a popular
technique for lossless data compression.

The function takes in an input vector, which is assumed to be a sequence of
characters. The first step is to find the unique characters in the input vector
using the unique function. This creates a vector U that contains each unique
character exactly once.

Next, the frequency of each unique character is calculated using the histc
function. The output f is a vector that contains the count of each unique
character in the input vector.

The probability of each unique character is then calculated by dividing its
frequency by the total length of the input vector. This creates a matrix P
where the first column contains the unique characters, and the second column
contains their corresponding probabilities.

The next step is to build the code table, which assigns a binary codeword to
each unique character. The classical Huffman algorithm does this by repeatedly
combining the two symbols with the lowest probabilities into a single node
until all nodes are merged into one root. The code here takes a simplified
route: it sorts the matrix P in descending order of probability using the
sortrows function and assigns codewords directly from that ordering.

In the loop, the i-th most probable character receives the codeword made of
i-1 ones followed by a single zero ('0', '10', '110', ...), and the least
probable character receives a codeword of all ones. Each character and its
codeword are stored in the cell array H. This unary-style code is prefix-free,
so it decodes unambiguously, although it is in general slightly longer than a
true Huffman code.

Finally, the unique characters and their corresponding binary codes are
combined into a single cell array Table, which is returned as the output of the
function.

Huffman Encoder:
Code
function encoded = huffman_encoder(text, Table)
% Encode text using Huffman coding

% Convert text to ASCII code
ascii = double(text);

% Look up binary codes in Huffman table
n = length(ascii);
binary_codes = strings(1, n);
for i = 1:n
    binary_codes(i) = get_code(Table, ascii(i));
end

% Concatenate binary codes into a single string
encoded = join(binary_codes, '');

end
Implementation:
This code implements a Huffman encoder, which takes in a text string and a
Huffman table and outputs the binary code for the input text using the
Huffman coding technique.

The first step in the function is to convert the input text to ASCII code using
the double function.

Next, a loop is used to look up the binary code for each ASCII character in the
Huffman table using the get_code function. The resulting binary codes are
stored in a string array.

Finally, the binary codes are concatenated into a single string using the join
function and returned as the output of the function.

Huffman Decoder:
Code
function decoded = huffman_decode(encoded, Table)
% Initialize variables
decoded = "";
current_code = "";

% Loop through each bit of the encoded string
for i = 1:length(char(encoded))
    % Add the current bit to the current code
    c = extractBetween(encoded, i, i);
    current_code = strcat(current_code, c);

    % Check if the current code matches a binary code in the Huffman table
    for j = 1:size(Table, 1)
        if strcmp(current_code, Table{j, 2})
            % Convert the ASCII code back to a character and append it
            % (char() is needed because the table stores numeric ASCII values)
            decoded = decoded + char(Table{j, 1});

            % Reset the current code
            current_code = "";

            % Exit the inner loop
            break;
        end
    end
end
end

Implementation:
This code implements a Huffman decoder, which takes in an encoded binary
string and a Huffman table, and outputs the corresponding decoded text
string.

The function starts by initializing two variables: decoded, which will store the
output text string, and current_code, which will store the current binary code
being decoded.

Next, the function loops through each bit of the encoded binary string using a
for loop. For each bit, the bit is extracted from the encoded string using the
extractBetween function, and then concatenated onto the end of the
current_code variable.

Then, the function checks whether the current binary code matches any
binary code in the Huffman table using a nested for loop. For each row of the
Huffman table, the function checks whether the current_code matches the
binary code in the second column of the table using the strcmp function. If a
match is found, the corresponding character in the first column of the table is
added to the decoded string using the + operator, the current_code variable is
reset to an empty string, and the inner loop is exited using the break
statement.

This process continues until all bits of the encoded binary string have been
processed. At that point, the decoded string contains the corresponding text
string for the input binary code, which is returned as the output of the
function.
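Because the code is prefix-free, the inner scan over the whole table can be replaced by a single hash lookup per accumulated prefix. A Python sketch of the same idea (the list-of-(symbol, code)-pairs format of `table` and the name `huffman_decode_fast` are assumptions for illustration; the MATLAB version uses a cell array):

```python
def huffman_decode_fast(encoded, table):
    """Decode with one dictionary lookup per accumulated bit prefix."""
    lookup = {code: sym for sym, code in table}
    decoded, current = [], ""
    for bit in encoded:
        current += bit
        # Prefix-freeness guarantees a match can be consumed immediately:
        # no other codeword can begin with it
        if current in lookup:
            decoded.append(lookup[current])
            current = ""
    return "".join(decoded)

table = [("a", "0"), ("b", "10"), ("c", "11")]
# "010110" decodes to "abca"
```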

Running Code:
Code:
% Read input text file
filename = 'test.txt';
fileID = fopen(filename, 'r');
text = fscanf(fileID, '%c');
fclose(fileID);

% Create Huffman table from input text
Table = huffman_table(text);

% Encode input text using Huffman coding
encoded = huffman_encoder(text, Table)

% Decoding the Huffman-encoded bit-stream
decoded = huffman_decode(encoded, Table)

% Compression ratio calculation
original_size = 8 * length(text)
compressed_size = length(char(encoded))
compression_ratio = original_size / compressed_size

Implementation:

This code reads in an input text file using the fopen, fscanf, and fclose
functions and stores the contents of the file in the text variable.

Next, the code generates a Huffman table for the input text using the
huffman_table function.

Then, the input text is encoded using Huffman coding using the
huffman_encoder function, and the resulting encoded binary string is stored in
the encoded variable.

The code then decodes the encoded binary string using the huffman_decode
function, which should output the original input text.

Finally, the code calculates the compression ratio of the Huffman encoding
scheme by dividing the original size of the input text (in bits) by the size of the
encoded binary string (also in bits). The compression ratio is stored in the
compression_ratio variable.

Input Text:

Output:
Lempel-Ziv
Theory:
Lempel-Ziv (LZ) coding refers to a family of lossless data compression
algorithms that use a dictionary-based approach to eliminate redundancy in
the input data. The basic idea behind LZ coding is to replace repeated
occurrences of substrings in the input data with references to a dictionary
that contains previously seen substrings.

There are two main versions of the LZ algorithm: LZ77 and LZ78. The LZ77
algorithm uses a sliding window approach to look for matches between the
current position in the input data and the previously seen data, while the
LZ78 algorithm uses a dictionary-based approach where each new substring is
assigned a unique code, and the codes for previously seen substrings are used
to represent repeated occurrences of those substrings.

Code and Implementation (Python)


Encoder:
Code:
def lzEncode(message):
    dict = {}
    code = 256
    p = message[0]
    output = ""

    for i in range(1, len(message)):
        pc = p + message[i]
        if pc in dict:
            p = pc
        else:
            if len(p) == 1:
                output += bin(ord(p))[2:].zfill(9)
            else:
                output += bin(dict[p])[2:].zfill(9)

            dict[pc] = code
            code += 1
            p = message[i]

    if len(p) == 1:
        output += bin(ord(p))[2:].zfill(9)
    else:
        output += bin(dict[p])[2:].zfill(9)

    return output
Implementation:
The lzEncode function implements an LZW-style dictionary coder (a member of
the LZ78 family) to compress a given input message. The function works by
initializing an empty dictionary dict and a code counter code with the value
256, so that codes 0-255 remain available for literal characters.
The variable p is initialized to the first character of the message. The function
then iterates over the remaining characters of the message and generates
codes for each substring that has not been seen before.

For each new substring pc encountered, the function checks if pc is already
present in the dictionary dict. If pc is present, p is set to pc. Otherwise, the
function checks the length of the current substring p. If p has a length of 1,
the binary representation of the ASCII code for p is appended to the output
string output. Otherwise, the code for p is looked up in the dictionary dict and
appended to output.

If a new substring pc is encountered that has not been seen before, it is added
to the dictionary dict with a unique code code, and code is incremented.
Finally, the function appends the code for the last substring p to the output
string and returns it.
Overall, the lzEncode function compresses the input message by replacing
repeated substrings with codes that refer to a dictionary that contains
previously seen substrings. This approach effectively reduces the redundancy in
the input data, resulting in a more compact representation.
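The dictionary growth is easiest to see on a tiny input. The following trace helper (`lz_trace` is a hypothetical name introduced here for illustration) mirrors the loop of lzEncode but returns the integer codes and the final dictionary instead of a 9-bit string:

```python
def lz_trace(message):
    """Replay the encoder loop, returning the emitted codes and the dictionary."""
    table, code, p, emitted = {}, 256, message[0], []
    for ch in message[1:]:
        pc = p + ch
        if pc in table:
            p = pc                      # extend the current phrase
        else:
            emitted.append(ord(p) if len(p) == 1 else table[p])
            table[pc] = code            # register the new phrase
            code += 1
            p = ch
    emitted.append(ord(p) if len(p) == 1 else table[p])
    return emitted, table

codes, table = lz_trace("ababab")
# codes == [97, 98, 256, 256]; table == {'ab': 256, 'ba': 257, 'aba': 258}
```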

Decoder:
Code:
def lzDecode(encodedMessage):
    dict = {}
    code = 256
    # Split the bitstream into fixed-width 9-bit codes
    eMsgVect = [
        int(encodedMessage[i : i + 9], 2)
        for i in range(0, len(encodedMessage), 9)
    ]
    output = ""
    curr = eMsgVect[0]
    for i in range(1, len(eMsgVect)):
        next = eMsgVect[i]
        # for current (codes below 256 are literal characters)
        if curr >= 256:
            c = dict[curr]
        else:
            c = chr(curr)
        # for next: take the first character of its string; if next is not
        # in the dictionary yet (the cScSc corner case), the new string is
        # known to start with the first character of c
        if next >= 256:
            n = dict[next][0] if next in dict else c[0]
        else:
            n = chr(next)
        combo = c + n
        dict[code] = combo
        code += 1
        output += c
        curr = next
    # Append the full string of the final code, not just its first character
    if curr >= 256:
        output += dict[curr]
    else:
        output += chr(curr)
    return output

Implementation:
The function starts by initializing a dictionary dict that will store the mapping
between codes and their corresponding sequences of characters. It also
initializes the code variable code to 256, which is the starting point for
assigning new codes.
Then, the binary string is converted into a list of integers by reading 9 bits
at a time with a list comprehension. The output variable output is initialized
as an empty string.

The function then loops through the list of integers, processing one code at a
time. The variable curr is set to the first code in the list, and the loop begins
at index 1. For each code, the variable next is set to the next code in the list.

The variables c and n hold the text represented by the codes curr and next,
respectively. If a code is less than 256, it is a literal, so the corresponding
character is obtained using the chr() function. If a code is greater than or
equal to 256, it represents a previously seen sequence of characters, which is
looked up in the dictionary. One corner case needs care: the encoder can emit
a code that the decoder has not yet added to its dictionary (the classic
cScSc pattern in LZW); in that situation the new string is known to begin with
the first character of c.

The string c and the character n are combined into a new sequence combo,
which is added to the dictionary with the next available code, and code is
then incremented.

The text represented by curr is appended to the output variable output, and
curr is set to next. The loop continues until all codes have been processed.

Finally, the full string represented by the last code in the list is appended
to output, and the function returns the uncompressed message as a string.

Runner Code:
Code:
# Import necessary libraries
import os

# Define input file name
input_file = "test.txt"

# Read input text file
with open(input_file, "r") as f:
    text = f.read()

# Encode input text using Lempel-Ziv (LZ78/LZW-style) coding
encoded = lzEncode(text)

# Decode the encoded message
decoded = lzDecode(encoded)

# Calculate compression ratio
original_size = os.path.getsize(input_file) * 8  # in bits
compressed_size = len(encoded)
compression_ratio = original_size / compressed_size

# Print results
print("Original text: \n", text)
print("Encoded message: \n", encoded)
print("Decoded message: \n", decoded)
print("Original size:\n", original_size)
print("Encoded size:\n", compressed_size)
print("Compression ratio: \n", compression_ratio)

Implementation:
The runner code first imports the necessary libraries; in this case, only the os
module is needed. It then defines the input file name as "test.txt" and reads
the input text file using a with statement. The text is encoded using the
lzEncode() function and stored in the variable encoded. The encoded message
is then decoded using the lzDecode() function and stored in the variable
decoded.

After that, the original size of the input file is calculated using the
os.path.getsize() function and multiplied by 8 to get the size in bits. The size
of the encoded message is determined using the len() function and stored in
the variable compressed_size. Finally, the compression ratio is calculated by
dividing the original size by the compressed size.

The results are then printed to the console using print() statements. The
original text is printed followed by the encoded and decoded messages. The
original and encoded size are also printed along with the compression ratio.
The compression ratio represents the amount of compression achieved by the
encoding method used. A higher compression ratio indicates better
compression.

Input Text:
Output:

Question 2
Channel encode the Huffman encoder output in Question 1 using a (7,4)-Hamming
code (take 4 bits at a time and encode). It should then generate an encoded
bit-stream. Develop a program for the decoder also (you can take a syndrome
decoder). Now, perform the following tasks.
(a) Write an error generator module that takes in a bit stream and outputs
another bit stream after inverting every bit with probability p, i.e., the
probability of a bit error is p. The value of p can be fixed to a small value
(less than 0.1) and should be varied later.
(b) Pass the Hamming encoded bit-stream through the above mentioned module and
then decode the received words using the decoder block.
(c) Using the Huffman decoder, retrieve the text file back. Vary p and inspect
the contents of the text file retrieved and write your conclusion.

(a)&(b)

Hamming Code:
Theory:
Hamming code is a technique used to detect and correct errors in data transmission. It
adds extra bits to the data that are used to identify and correct errors that may occur
during transmission.

A (7,4)-Hamming code is a specific type of Hamming code that adds 3 parity
bits to a 4-bit data word, resulting in a 7-bit code. These parity bits are
calculated based on the positions of the bits in the code, such that any
single-bit error in transmission can be detected and corrected.
In practice, the (7,4)-Hamming code is described by a generator matrix: each
4-bit data word is multiplied by the generator matrix (modulo 2) to produce a
7-bit codeword whose three parity bits are linear combinations of the data
bits, and the resulting codeword is transmitted.

At the receiving end, the parity checks are recomputed and any single-bit
error is corrected by flipping the affected bit. If more than one bit is
flipped, the (7,4) code cannot correct the block; the non-zero syndrome then
points at the wrong position, so the decoder may even introduce a further
error.

Overall, the Hamming code and (7,4)-Hamming code are important tools in ensuring the
accuracy of data transmission in a variety of contexts, from computer networks to
digital communication systems.

The generator matrix for a (7,4) code is given as:
[1, 0, 0, 0, 1, 0, 1]
[0, 1, 0, 0, 1, 1, 1]
[0, 0, 1, 0, 1, 1, 0]
[0, 0, 0, 1, 0, 1, 1]
And its corresponding parity matrix is:
[1, 1, 1, 0, 1, 0, 0]
[0, 1, 1, 1, 0, 1, 0]
[1, 1, 0, 1, 0, 0, 1]
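These two matrices can be sanity-checked in a few lines: every codeword generated by G must have a zero syndrome under H, and a single flipped bit produces a syndrome equal to the corresponding column of H. A quick NumPy sketch (illustrative, using the matrices above):

```python
import numpy as np

G = np.array([[1, 0, 0, 0, 1, 0, 1],
              [0, 1, 0, 0, 1, 1, 1],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 0, 1, 1]])
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [0, 1, 1, 1, 0, 1, 0],
              [1, 1, 0, 1, 0, 0, 1]])

# Every valid codeword has a zero syndrome: G @ H.T must vanish mod 2
assert not (G @ H.T % 2).any()

msg = np.array([1, 0, 1, 1])
codeword = msg @ G % 2          # systematic: the first 4 bits are the message

received = codeword.copy()
received[2] ^= 1                # flip one bit
syndrome = received @ H.T % 2
# The syndrome equals column 2 of H, pinpointing the flipped bit
assert (syndrome == H[:, 2]).all()
```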

Coding and Implementation:

Hamming Encoder:
Code:
import numpy as np

def hamming_encode(message_bitstream):
    # Convert message_bitstream to an n x 4 array of integers
    n = len(message_bitstream) // 4
    message_array = np.zeros((n, 4), dtype=int)
    for i in range(n):
        message_array[i] = np.array([int(b)
                                     for b in message_bitstream[4*i:4*(i+1)]])

    # Hamming code generator matrix
    G = np.array([
        [1, 0, 0, 0, 1, 0, 1],
        [0, 1, 0, 0, 1, 1, 1],
        [0, 0, 1, 0, 1, 1, 0],
        [0, 0, 0, 1, 0, 1, 1]
    ], dtype=int)

    # Encode message_array using the Hamming code
    encoded_array = np.zeros((n, 7), dtype=int)
    for i in range(n):
        encoded_array[i] = message_array[i] @ G % 2

    # Convert encoded_array to a bitstream
    encoded_bitstream = ''.join(str(b) for b in encoded_array.flatten())

    return encoded_bitstream

Implementation:
The function hamming_encode encodes a message bitstream using a (7,4)
Hamming code. Here's how it works:
1. The input message_bitstream is first converted to an array of integers.
This is done by dividing the bitstream into groups of 4 bits (since this is
a (7,4) code) and turning each group into a row of four 0/1 integers.
2. The generator matrix G for the (7,4) Hamming code is then created.
This matrix is a 4x7 matrix and is hard-coded into the function.
3. The message array is then encoded using the Hamming code. This is
done by multiplying each 4-bit message by the generator matrix G
using matrix multiplication (i.e., the @ operator in Python) and taking
the result modulo 2. This gives a 7-bit codeword for each 4-bit message.
4. Finally, the encoded array is converted back to a bitstream by flattening
it and joining the resulting bits into a string.
The output of the function is the encoded bitstream.

Hamming Decoder:
Code
def hamming_decode(encoded_bitstream):
    # Convert encoded_bitstream to an n x 7 array of integers
    n = len(encoded_bitstream) // 7
    encoded_array = np.zeros((n, 7), dtype=int)
    for i in range(n):
        encoded_array[i] = np.array([int(b)
                                     for b in encoded_bitstream[7*i:7*(i+1)]])

    # Hamming code parity check matrix
    H = np.array([
        [1, 1, 1, 0, 1, 0, 0],
        [0, 1, 1, 1, 0, 1, 0],
        [1, 1, 0, 1, 0, 0, 1]
    ], dtype=int)

    # Decode encoded_array using syndrome decoding
    decoded_array = np.zeros((n, 4), dtype=int)
    for i in range(n):
        # Calculate the syndrome vector
        s = encoded_array[i] @ H.T % 2
        # A non-zero syndrome equals the column of H at the error position
        if np.any(s):
            match = np.where((H.T == s).all(axis=1))[0]
            if match.size:
                # Flip the bit at the error position
                encoded_array[i][match[0]] ^= 1
        # Extract the message bits from the corrected codeword
        decoded_array[i] = encoded_array[i][:4]

    # Convert decoded_array to a bitstream
    decoded_bitstream = ''.join(str(b) for b in decoded_array.flatten())

    return decoded_bitstream

Implementation:
This is the implementation of the Hamming decoding process. It takes the
encoded bitstream as input and returns the decoded bitstream.
The function first converts the encoded bitstream into an array of integers,
with each row of the array representing a codeword. It then generates the
parity check matrix H for the (7,4) Hamming code.
Next, the function decodes each codeword in the array. It first calculates the
syndrome vector s by multiplying the codeword with the transpose of the
parity check matrix and taking the result modulo 2. If the syndrome vector is
non-zero, there is an error in the codeword. For a single-bit error, the
syndrome equals the column of H at the error position, so the function
locates the error by matching the syndrome against the columns of H and then
flips the bit at that position.
Finally, the function extracts the message bits from the corrected codeword
and converts the decoded array to bitstream. The decoded bitstream is
returned as output.

Error Generator module

Code:
import random

# Define the error_generator function that takes a bitstream and a
# probability p as inputs
def error_generator(bitstream, p):
    # Convert bitstream to array of integers
    bitarray = np.array([int(b) for b in bitstream])
    # Initialize an empty array for output
    output = np.zeros(len(bitarray), dtype=int)
    # Loop through each bit in the bitarray
    for i in range(len(bitarray)):
        # Generate a random number between 0 and 1
        r = random.random()
        # If the random number is less than p, invert the bit
        if r < p:
            output[i] = 1 - bitarray[i]
        # Otherwise keep the bit as it is
        else:
            output[i] = bitarray[i]
    # Convert the output array back to a bitstream
    output_bitstream = ''.join(str(b) for b in output)
    # Return output_bitstream
    return output_bitstream

Implementation:
This code defines a function called error_generator that takes two inputs:
bitstream, a string of bits (0's and 1's), and p, a probability between 0 and 1.
The function generates a random bit error for each bit in the input bitstream
with probability p and returns the resulting bitstream.
To do this, the function first converts the input bitstream to a numpy array of
integers. It then initializes an empty numpy array called output with the same
length as the input bitstream. The function then loops through each bit in the
input bitstream and generates a random number between 0 and 1 using the
random.random() function. If this random number is less than p, the function
inverts the corresponding bit and stores the result in the output array.
Otherwise, it stores the bit as it is in the output array.
Finally, the function converts the output array back to a bitstream and
returns it.
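Since numpy is already in use, the per-bit Python loop can also be vectorized: draw one uniform random number per bit and XOR the bits with the resulting flip mask. A sketch of this alternative (the name `error_generator_vec` and the optional `rng` parameter are assumptions introduced here, not part of the submitted code):

```python
import numpy as np

def error_generator_vec(bitstream, p, rng=None):
    """Binary symmetric channel: flip each bit independently with probability p."""
    rng = rng or np.random.default_rng()
    bits = np.array(list(bitstream), dtype=int)
    # One uniform draw per bit; draws below p mark the positions to flip
    flips = (rng.random(bits.size) < p).astype(int)
    return ''.join(map(str, bits ^ flips))
```

With p = 0 the output equals the input, and with p = 1 every bit is inverted, which makes the function easy to spot-check.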

Runner Code:
Code:
with open('encoded.txt', 'r') as f:
    encoded_bitstream = f.read()
print('huffman message:', encoded_bitstream)

# Check if the length of the encoded bitstream is a multiple of 4
if len(encoded_bitstream) % 4 != 0:
    # Calculate the number of zeros to append
    zeros_to_append = 4 - (len(encoded_bitstream) % 4)
    # Append zeros to the encoded bitstream
    encoded_bitstream += '0' * zeros_to_append
    # Print a message indicating the change
    print('Appended', zeros_to_append, 'zeros to the encoded bitstream.')

# Encode the bitstream using the hamming_encode function
hamming_encoded_bitstream = hamming_encode(encoded_bitstream)

# Add errors to the encoded bitstream
error_bitstream = error_generator(hamming_encoded_bitstream, 0.001)

# Print the Hamming encoded bitstream
print('Hamming encoded bitstream:', hamming_encoded_bitstream)

# Print the Hamming encoded bitstream with errors
print("the error bitstream is:", error_bitstream)

# Decode the corrupted bitstream using the hamming_decode function
decoded_bitstream = hamming_decode(error_bitstream)

# Print the decoded bitstream
print('Decoded bitstream:', decoded_bitstream)

# Write the decoded bitstream to huffman_decoded.txt
with open('huffman_decoded.txt', 'w') as f:
    f.write(decoded_bitstream)
print('Decoded bitstream saved to huffman_decoded.txt')

Implementation
This code reads in an encoded bitstream from a file named 'encoded.txt' and
performs Hamming code encoding and decoding while adding errors. It first
checks if the length of the encoded bitstream is a multiple of 4 and appends
zeros to the end if necessary to make it a multiple of 4. Then, it calls the
hamming_encode function to encode the bitstream using Hamming code. Next,
it calls the error_generator function to add errors to the encoded bitstream
with a probability of 0.001. The hamming encoded bitstream with errors is
printed to the console.
Then, the code calls the hamming_decode function to decode the hamming
encoded bitstream with errors. The decoded bitstream is printed to the
console. Finally, the decoded bitstream is written to a file named
'huffman_decoded.txt' using the write method of the file object. A message is
printed to the console to indicate the success of the write operation.
Output:

(c)
Huffman decoding of Hamming decoded text
Code (Matlab):
% Read input text file
filename = 'test.txt';
fileID = fopen(filename, 'r');
text = fscanf(fileID, '%c');
fclose(fileID);

% Create Huffman table from input text
Table = huffman_table(text);

% Encode input text using Huffman coding
encoded = huffman_encoder(text, Table)

% Open the huffman_decoded.txt file and read the contents as a character vector
filetext = fileread('huffman_decoded.txt');
huffman_decode(filetext, Table)

Implementation:
This code reads a text file named "test.txt" and stores its contents in a variable
called "text". It then creates a Huffman table for the input text using a
function called "huffman_table" and stores it in a variable called "Table". Next,
the input text is encoded using Huffman coding with the help of another
function called "huffman_encoder" and the encoded bitstream is stored in a
variable called "encoded".
Afterwards, the code reads the contents of a file named "huffman_decoded.txt"
and stores it in a variable called "filetext". Finally, it decodes the contents of
"huffman_decoded.txt" using the Huffman table "Table" with the help of a
function called "huffman_decode".
To get the output for different probabilities of error, we change the value of
p passed to the error generator module.
Input Text for Huffman encoder:

Output:
For p=0.001

Hamming Decoded Output

Huffman Decoded Output:

For p=0.01
Hamming Decoded Output

Huffman Decoded Output:


For p=0.1
Hamming Decoded Output

Huffman Decoded Output:

Conclusion
As we increase p, the final decoded text becomes less accurate. This is
because a (7,4) Hamming code can correct only a single error per 7-bit block.
If the probability of error is high enough that more than one bit in a block
is flipped, the decoder cannot correct the errors, leading to incorrect
decoding. As a result, increasing the probability of error beyond a certain
threshold produces unrecoverable errors, and the decoded text differs from the
original. Here, for p = 0.001 we see almost no errors, for p = 0.01 we see a
few errors, and for p = 0.1 the decoded text is unreadable. It is therefore
important to operate at an error probability that ensures reliable decoding
while minimizing the error-correction overhead.
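This trend can be quantified: a (7,4) block decodes correctly only when at most one of its 7 bits flips, so the residual block-error probability is 1 - (1-p)^7 - 7p(1-p)^6. A short sketch (the helper name `block_error_prob` is introduced here for illustration) evaluating it at the three values of p used above:

```python
def block_error_prob(p):
    """P(more than one flip in a 7-bit block), i.e. uncorrectable by (7,4) Hamming."""
    correctable = (1 - p) ** 7 + 7 * p * (1 - p) ** 6
    return 1 - correctable

for p in (0.001, 0.01, 0.1):
    # roughly 2.1e-05, 2.0e-03 and 0.15 respectively
    print(p, block_error_prob(p))
```

These numbers match the observations: at p = 0.001 only about one block in fifty thousand is corrupted, while at p = 0.1 roughly 15% of blocks fail, which is more than enough to make the Huffman-decoded text unreadable.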
