
Error Detection and Correction with Advanced Encryption Standard Cryptography for Secured Communication

Abstract - With technology scaling, on-chip memories in a die suffer bit errors because of single-event or multiple-cell upsets caused by environmental factors such as cosmic radiation, alpha and neutron particles, or extreme temperatures in space, leading to data corruption. Error detection and correction (ECC) techniques recognize and rectify the corrupted data over the communication channel. In this paper, an advanced two-dimensional error correction code based on divide-symbol is proposed to mitigate radiation-induced MCUs in memory for space applications. For encoding, the data bits, diagonal bits, parity bits and check bits are computed by XOR operations. To recover the data, an XOR operation is again performed between the stored encoded bits and the recalculated encoded bits; after this analysis, verification, selection and correction take place. A Diagonal Hamming based multi-bit error detection and correction technique is proposed that identifies one bit error per row. The proposed scheme is combined with data security using Advanced Encryption Standard cryptography, described in Verilog HDL, and simulated and synthesized using Xilinx tools.

Index Terms—Space applications, Diagonal Hamming, multi-bit error correction, random bit errors and correction techniques.

1. INTRODUCTION

Binary information is stored in a storage space called memory. This binary data is held in metal-oxide-semiconductor memory cells on a silicon integrated circuit memory chip. A memory cell is a combination of transistors and a capacitor, where a charged capacitor is read as 1 and a discharged capacitor as 0; each cell stores a single bit. Errors, whether temporary or permanent, arise in these memory cells and need to be eliminated. Single-bit error correction is the most commonly used technique and can correct at most one bit. Since technology is scaling rapidly, the probability of multiple errors is growing [1]; the Diagonal Hamming method enables efficient correction of such errors in memories. Memory is divided into SRAM, DRAM, ROM, PROM, EPROM, EEPROM and flash memory. The main advantages of semiconductor memory are ease of use, low cost, and high bit density per square micrometer. Temporary errors, classified as soft or transient errors, are caused by fluctuations in voltage levels or by external radiation; these errors are very common in memories. Permanent errors are caused by defects introduced during the manufacturing process or by heavy radiation.

For detection and correction of these soft errors, various methods have been proposed. In this paper, memory bits affected by errors are recognized and rectified using a Diagonal Hamming based multi-bit error detection and correction technique. The aim of this method is to detect up to 8-bit errors effectively and correct them. Section II gives an overview of other coding techniques and discusses their advantages and disadvantages.
II. RELATED WORKS

Information in the form of binary digits is stored in a space called memory. Errors occur in memory due to voltage fluctuations, the manufacturing process, or very high radiation. Many coding techniques exist for error detection and correction, capable of correcting from single to multiple bit errors. Coding methods such as HVD, HVDD, HVPDH, DMC, MDMC and 3D parity check with Hamming are used to correct bits in memory. The 3-dimensional parity check is used for checking and correcting errors in the message bits, while Hamming is used for correcting the parity bits. In the HVPDH method the number of parity bits is reduced and hence reliability is increased. The HVD method is used for correcting soft-error bits with low power consumption; its parity code uses four different directions in a block of data. HVDD methods can correct bit patterns in the form of an equilateral triangle or any irregular triangle, but cannot be used for parallelogram structures. In the DMC method a decimal algorithm is used for identifying and correcting errors; its main drawback is that it uses more redundant bits to ensure memory reliability. MDMC uses fewer redundant bits and employs reconfigurable Array Exclusive-OR logic for decimal addition. With pipelined MDMC, time is saved since multiple stages of work are done at the same time. Similar methods can be used in on-chip and off-chip networks; there too, increasing the parity bits improves the reliability of communication. Many schemes used for network communication can also be applied to semiconductor memories.

The internet of things (IoT) interlinks digital machines, computing devices, physical objects and human beings, providing a unique identifier for each of them through the internet via clouds. Nowadays eco-friendly telecommunication technologies have become a mandate in the wireless network field. One major problem is the power usage of these wireless communication networks, which has a serious environmental impact. Networks must be able to transfer data with high accuracy and reliability, so to retain the reliability of these networks, errors must be detected and corrected with great efficiency. Many error detecting and correcting codes are available, but most of them confront challenges such as unreliable wireless links, the broadcast nature of wireless transmissions, interference, frequent topology changes, and other effects of wireless channels. These are encountered when high data rate services, high throughput, etc. are incorporated.

Several error detection and correction codes, such as the Hamming code, Hsiao code, Revering et al. code and CA-based error detecting and correcting codes, have been introduced in the literature. Each of these codes has its own advantage as the channel coding scheme of a specific communication system, and there is now a wide spread of research works based on these codes. Hamming, Hsiao and Revering codes are well-known binary linear codes which offer single error correction-double error detection (SEC-DED) capability: they can correct a single error and detect a double error. Their delay, power consumption and area requirements are lower compared to other existing codes. Cha and Yoon proposed a technique to design ECC processing circuits based on a SEC-DED code for memories, minimizing the area complexity of the ECC processing circuits. On the other hand, the error correction ability of the low density parity check (LDPC) code depends on the codeword length: the decoder gives better performance with a larger codeword and its parity-check matrices, but a higher codeword length needs larger memory and a complex decoding architecture. LDPC codes are unable to correct errors if the number of errors exceeds the error correction capability of the decoder, regardless of the number of iterations.
These codes have lower complexity and lower memory requirements compared to other existing LDPC codes and Reed Solomon (RS) codes. The LCPC (9, 4), LCPC (8, 3) and LCPC (7, 3) codes do not need any iteration in the decoding process, which may reduce the overall circuit complexity. The algorithms, flowcharts, error patterns and their syndrome values are presented. The codes of Alabady et al. do not fully satisfy single error correction and double error detection functionality in some cases; besides this limitation, there are some mistakes in the flowchart, tables and figures which are rectified. Ming et al. proposed a SEC-DED-DAEC code to diminish noise sources in memories. These existing codes require more area, power and delay. To overcome these challenges, this paper aims to develop a new fault-tolerant channel coding technique.

The main contributions of this paper are as follows: i) A new technique is proposed to construct the parity check matrices (H) for SEC-DED and SEC-DED-DAEC codes. ii) The same H-matrix is employed to build the SEC-DED and SEC-DED-DAEC codes. iii) The SEC-DED and SEC-DED-DAEC codes with message lengths of 4, 8 and 16 bits have been designed and implemented for WSN applications. iv) The proposed designs are fast and power efficient compared to existing related designs. v) These codes have simpler encoding and decoding processes and lower memory size. The remainder of this paper is organized as follows.

1.1. Motivation and Problem Statement

Coding theory is concerned with providing reliability and trustworthiness in every communication system over a noisy channel. In most communication systems, error correction codes are used to find and correct possible bit changes, for example in wireless phones. In any environment, environmental interference, physical fault noise, electromagnetic radiation and other kinds of noise and disturbance in the communication system can corrupt messages: an error in the received codeword (message), or a bit change, might happen during a transmission of data. So the data can be corrupted during transmission and dispatch from the transmitter (sender) to the receiver: affected by noise, the input data or generated codeword is no longer the same as the received codeword or output data. An error or bit change in the data bits flips the real value of the bit(s) from 0 to 1 or vice versa. The following simple figure indicates the base structure of a communication system and shows how the binary signal can be affected by noise or other effects during transmission over the noisy communication channel.

[Figure: base structure of a communication system over a noisy channel.]

Generally, there are three types of errors that can corrupt data in transmission from the sender to the recipient:
 Single bit error: the error is called a single bit error when a bit change, or an error, happens in one bit of the whole data sequence.
 Multiple bit errors: the occurring errors are called multiple bit errors if there is a bit change or an error in two or more bits of the sequence of forwarded data.
 Burst errors: if errors or bit changes happen on a set of bits in the transmitted codeword, a burst error has occurred. The burst error is measured from the first changed bit to the last changed bit. These types of errors will be discussed in detail in chapter three of this report.

1.2. The General Idea

The main method is to use redundancy (parity) to recover messages that suffer an error during transmission over a noisy channel. As a simple example, an error can happen in human language, in both verbal and written communication. For instance, suppose that while reading a sentence we encounter the string "miscake": there is an error, a wrong word, in the sentence, and it must be found in which word the mistake occurred and then corrected. So two important things must be achieved here: error detection and error correction. The principles used to achieve these goals are, first, that in the English language the string "miscake" is not an accepted word, so at this point it is obvious that an error has occurred; secondly, the word "miscake" is closest to the real and correct word "mistake" in English, so "mistake" is the closest and best word that can be used instead of the wrong one [4]. This shows how redundancy can be useful in the example of human language; the goal of this project is to show how computers can use some of the same principles to achieve error detection and error correction in digital communication.
To convey the idea of correction by using redundancy in digital communication, it is first necessary to model the main scheme, which contains two main parts called encoding and decoding, as in the following figure; in the next parts all of these parts will be explained in detail and implemented in the Verilog language.

[Figure 2: encoding and decoding scheme: Source, Encoder, Channel, Received Message, Decoder, Receiver.]

According to figure 2, the code is first generated by a source; then, in the encoding part, the parity (redundancy) bits are added by the encoder to the data bits sent from the source. After that, the generated codeword, which is a combination of data bits and parity bits, is transmitted to the receiver side. During transmission, an error or bit change might happen in the communication channel over the produced codeword. At the end, the corrupted bits must be detected and corrected by the decoder on the receiver side.

Chapter 2. Literature Review

2.1. Historical Background of the Hamming Code

Coding theory is a proper subset of information theory, but the concepts of the two theories are quite different. The subject began with a seminal paper presented by Claude Shannon in the middle of the 20th century, in which he demonstrated that good codes exist; his assertions, however, were probabilistic. In 1947, Dr. Hamming introduced and invented the first method and generation of error correction code, called the Hamming code, which is capable of correcting one error in a block of received binary symbols. After that, Dr. Hamming published a paper in the Bell technical journal on error detection and error correction codes. Around 1960, other methods for error detection and correction were introduced: for example, the BCH code was invented by Bose, Chaudhuri and Hocquenghem, the name being a combination of the surnames of its initial inventors. Another error detection and correction code presented in 1960 was the Reed Solomon (RS) code, invented by Irving Reed and Gustave Solomon; this method was later developed further with more powerful computers and more efficient decoding algorithms.

2.2. Types of Hamming Code

There are three significant types of Hamming codes:
1. Standard Hamming code,
2. Extended Hamming code,
3. Extended Hamming product code.
In this research, the standard Hamming code is used and implemented.

2.2.1. Standard Hamming Code

The standard Hamming code depends on a minimum Hamming distance: to design it, a minimum Hamming distance of three between any two codewords is needed. For instance, a standard Hamming code with dimension H (7, 4) encodes four data bits into 7 bits by appending 3 redundancy bits [10]. In the next chapter, this kind of Hamming code will be explained in detail.

2.2.2. Extended Hamming Code

In this type of Hamming code (EH), the minimum distance is increased by one over the standard form, so the minimum distance is equal to four between every two codewords. The main frame and structure of the Hamming code is the same for both the binary and the extended Hamming code. In fact, in the extended Hamming code an extra bit is added to the redundant (parity) bits that allows the decoder to distinguish between single and double bit errors. On the receiver side, the decoder can then correct a single bit error and at the same time detect (but not correct) double bit errors; and if the decoder makes no attempt to correct the single bit error, it can even detect up to three errors. Continuing the example above, the extended Hamming code is defined as H (8, 4): there are 4 parity bits over 8 total bits and 4 data bits.
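The single-versus-double error behaviour described above can be sketched in Python (the paper's implementation language is Verilog; this model and its helper names are illustrative assumptions, not taken from the paper):

```python
def encode_h84(d):
    """Encode 4 data bits as an extended Hamming H(8,4) codeword.
    Positions 1..7 hold the standard H(7,4) codeword (parity at 1, 2, 4);
    position 0 holds the extra overall-parity bit of the extended code."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4              # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4              # covers positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4              # covers positions 4, 5, 6, 7
    cw = [p1, p2, d1, p4, d2, d3, d4]
    overall = 0
    for b in cw:
        overall ^= b
    return [overall] + cw

def decode_h84(cw):
    """Return (status, data): 'ok', 'corrected' (single error fixed),
    or 'double' (detected but not correctable)."""
    overall = 0
    for b in cw:
        overall ^= b
    c = cw[1:]                     # positions 1..7
    s = 0                          # syndrome = XOR of positions of 1-bits
    for pos in range(1, 8):
        if c[pos - 1]:
            s ^= pos
    if s == 0 and overall == 0:
        status = 'ok'
    elif overall == 1:             # odd number of flips -> single error
        status = 'corrected'
        if s:                      # s == 0 would mean the overall bit itself
            c[s - 1] ^= 1
    else:                          # syndrome set but overall parity even
        status = 'double'
    return status, [c[2], c[4], c[5], c[6]]
```

With one flipped bit the decoder corrects; with two flips the overall parity stays even while the syndrome is nonzero, which is exactly the detect-but-not-correct case described above.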
2.2.3. Extended Hamming Product Codes

Extended Hamming product codes are built from the extended Hamming code (EH), together with new algorithms and results. The most important issue for both researchers and designers is a low error rate as the criterion for evaluating the efficiency of a channel coding scheme. There are two remarkable values, BER (bit error rate) and FER (frame error rate), for which modern digital radio links are designed, because wireless multimedia applications keep expanding and low error rates are an important issue for these applications. Two further important coding schemes have performance very close to the Shannon limit: turbo codes and low-density parity check codes. Product codes can be competitive with these; moreover, this kind of code can be used to implement fast parallel decoders. Turbo codes are a special technique of coding theory used to provide reliable communication over a noisy communication channel. They are a special class of FEC (forward error correction) codes and are used in 3G and 4G mobile communications, for instance in LTE. Turbo codes gain their considerable efficiency and performance with relatively low-complexity encoding and decoding algorithms: they can attain bit-error-rate levels around 10^-5 at code rates close to the capacity with acceptable decoding complexity. The most important key to the success of the algorithm is a decoding algorithm called soft-in soft-out.

In this context, the main goal is to prepare a complete set of techniques and analytical methods to characterize the low-error-rate performance of the extended Hamming product codes. For the analytical approximation, three important parameters must be evaluated for the extended Hamming (EH) product codes: the code performance at low error rates, the exact knowledge of the code minimum distance, and its multiplicities.

2.2.3.1. Binary Linear Codes

Linear codes are defined over special alphabets Σ which are finite fields. Throughout, F_q denotes the finite field with q elements (q is a prime power and F_q is {0, 1, . . . , q − 1}) [16]. If the required field is Σ and C ⊂ Σ^n is a subspace of Σ^n, then C is called a linear code. Because C is a subspace, there is a basis c1, c2, . . . , ck (where k is the dimension of the subspace), and each generated codeword can be expressed as a linear combination of these basis vectors; the basis vectors can be written as the columns of an n×k matrix, called the G matrix (generator matrix).
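As an illustration of "codeword = linear combination of basis columns", here is a minimal GF(2) sketch. The matrix below is the standard generator of the (7, 4) Hamming code with parity at positions 1, 2 and 4; it is an assumed example, not taken from this report:

```python
# Generator matrix of the (7,4) Hamming code, one coordinate per row,
# following the n x k ("basis vectors as columns") convention used above.
G = [
    [1, 1, 0, 1],   # position 1: p1 = d1 + d2 + d4
    [1, 0, 1, 1],   # position 2: p2 = d1 + d3 + d4
    [1, 0, 0, 0],   # position 3: d1
    [0, 1, 1, 1],   # position 4: p4 = d2 + d3 + d4
    [0, 1, 0, 0],   # position 5: d2
    [0, 0, 1, 0],   # position 6: d3
    [0, 0, 0, 1],   # position 7: d4
]

def encode(u):
    """Codeword c = G.u over GF(2): a linear combination of the basis columns."""
    return [sum(g * b for g, b in zip(row, u)) % 2 for row in G]
```

Linearity means the encoding of a sum of information words is the XOR of their encodings, which is what makes C a subspace rather than an arbitrary subset.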
For a binary linear code written in the form C (n, k), the following values are defined: there are k data (information) bits and r parity bits, and n is the total number of bits of a codeword, where n = k + r. The Hamming weight w_H(.) of a vector is the number of its nonzero symbols. With u = (u1, u2, . . . , uk) and c = (c1, c2, . . . , cn), a systematic codeword is c = (u | P), where the first k bits are the data bits generated by u and P is the vector containing the r parity bits, placed after the first k data bits.

A. Weight Enumerating Functions (WEF)

The weight enumerating function admits the most complete description of the weight structure of a code. (The analysis of turbo codes, for instance, builds on the weight enumerating functions of the component codes, e.g. via a traditional trellis search.) There are three distinguished weight enumerating functions for a code. The WEF is given by the formula

W_C(y) = Σ_{c∈C} y^{w_H(c)} = Σ_{i=0}^{n} A_i y^i,

where A_i is the number of codewords whose weight is w_H(c) = i. Another significant formula is IW_C(y), called the information weight enumerating function, written with slightly changed parameters in comparison to the formula above:

IW_C(y) = Σ_{c∈C} w_H(u) y^{w_H(c)} = Σ_{i=0}^{n} W_i y^i.

In this formula W_i is the information (data) multiplicity; it is used instead of A_i because it is the sum of the Hamming weights w_H(u) of the data frames u whose codewords have weight w_H(c) = i.

The third significant function is the IOWEF (input-output weight enumerating function), written as

IOW_C(x, y, X, Y) = Σ_{c∈C} x^{k−w_H(u)} y^{w_H(u)} X^{r−w_H(P)} Y^{w_H(P)}.

Counting the codewords c = (u | P) with w_H(u) = w and w_H(P) = p, whose number is A_{w,p}, the IOWEF can be rewritten as [16]

IOW_C(x, y, X, Y) = Σ_{w=0}^{k} Σ_{p=0}^{r} A_{w,p} x^{k−w} y^{w} X^{r−p} Y^{p},

so that A_i = Σ_{w+p=i} A_{w,p} and W_i = Σ_{w+p=i} w A_{w,p}.
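For a concrete instance of W_C(y), the spectrum A_i of a small code can be enumerated by brute force. The sketch below uses the (7, 4) Hamming code with parity at positions 1, 2, 4 (an assumed example layout, consistent with the encoding described later in this report):

```python
from itertools import product

def encode(u):
    """Parity equations of the (7,4) Hamming code (parity at positions 1, 2, 4)."""
    d1, d2, d3, d4 = u
    return [d1 ^ d2 ^ d4, d1 ^ d3 ^ d4, d1, d2 ^ d3 ^ d4, d2, d3, d4]

def weight_enumerator(n=7):
    """A_i = number of codewords of Hamming weight i, i.e. the coefficients
    of W_C(y) = sum_i A_i * y**i."""
    A = [0] * (n + 1)
    for u in product([0, 1], repeat=4):    # all 2^k information words
        A[sum(encode(list(u)))] += 1
    return A
```

For the (7, 4) Hamming code this yields the well-known W_C(y) = 1 + 7y^3 + 7y^4 + y^7, confirming the minimum distance d_min = 3.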
B. BER and FER Performance for Maximum Likelihood Decoding

The bit error rate (BER) is the number of bits containing errors per unit time; the bit error ratio is calculated as the number of bits containing an error divided by the total number of bits transferred over a communication channel during the time interval (the result can be expressed as a percentage). The frame error rate (FER) is the ratio of errors in the received data frames; it can be used for evaluating the quality of the signal connection. If the FER is very high, there are many errors among the whole of the received bits, and in this situation the connection can be rejected and dropped.

This part lays the main foundation for the evaluation of analytical code performance at low error rates as a function of SNR. Consider a binary linear code C (n, k) transmitted with a binary antipodal constellation, for example 2-PAM or Gray-labeled 4-PSK, over the additive white Gaussian noise channel. The BER and FER performance of maximum likelihood decoding then corresponds to a specific ratio between the energy of the data bits and the noise spectral density:

FER ≤ Σ_{i=d_min}^{n} (1/2) A_i erfc( sqrt( i (k/n) (E_b/N_0) ) ),

BER ≤ Σ_{i=d_min}^{n} (1/2) (W_i / k) erfc( sqrt( i (k/n) (E_b/N_0) ) ),

where E_b is the energy of each data (information) bit and N_0 is the noise spectral density; their ratio E_b/N_0 is the significant quantity governing the BER and FER performance of maximum likelihood decoding.

2.2.4. Extended Hamming Code vs. Extended Hamming Product Code

3. Theoretical Description of the Hamming Code

3.1. Description of the Hamming Code

A Hamming code is a simple and frequently used method of error detection and error correction. A Hamming code always checks all the bits that exist in the codeword (in fact, it checks every bit of a codeword, from the first bit to the last available bit). Exactly for this reason the Hamming code is called a linear code for error detection and correction: it can detect at most two simultaneous bit errors and is capable of correcting only a single bit error (it can correct just one bit error). The method works by appending special bits called parity or redundancy bits: several of these extra bits are inserted among the data bits at the desired positions, and the number of these extra bits depends on the number of data bits.
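The two union bounds above can be evaluated numerically. A sketch for the (7, 4) Hamming code follows; the helper names and the choice of example code are assumptions for illustration:

```python
import math
from itertools import product

def encode(u):
    """(7,4) Hamming code, parity at positions 1, 2, 4."""
    d1, d2, d3, d4 = u
    return [d1 ^ d2 ^ d4, d1 ^ d3 ^ d4, d1, d2 ^ d3 ^ d4, d2, d3, d4]

def distance_spectrum(k=4):
    """A_i (codeword multiplicities) and W_i (information multiplicities)."""
    n = 7
    A = [0] * (n + 1)
    W = [0] * (n + 1)
    for u in product([0, 1], repeat=k):
        i = sum(encode(list(u)))
        A[i] += 1
        W[i] += sum(u)          # accumulate w_H(u) for codewords of weight i
    return A, W

def union_bounds(ebn0, n=7, k=4, dmin=3):
    """FER and BER union bounds for ML decoding at a given linear Eb/N0."""
    A, W = distance_spectrum(k)
    rate = k / n
    fer = sum(0.5 * A[i] * math.erfc(math.sqrt(i * rate * ebn0))
              for i in range(dmin, n + 1))
    ber = sum(0.5 * (W[i] / k) * math.erfc(math.sqrt(i * rate * ebn0))
              for i in range(dmin, n + 1))
    return fer, ber
```

Since erfc is decreasing, both bounds fall monotonically as E_b/N_0 grows, and the BER bound never exceeds the FER bound because W_i ≤ k A_i.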
The Hamming code was initially introduced, for example, as the Hamming (7, 4) construction: the total number of bits is equal to seven, and four data bits are encoded into seven bits by adding three parity bits. The bit positions are numbered starting from one, 1, 2, 3, 4, 5, 6, 7, or in binary 001, 010, 011, 100, 101, 110, 111. The parity bits are placed at the positions that are powers of two: 1, 2, 4, etc., and the data bits are located at the remaining positions, which in this example are 3, 5, 6, 7. Each data bit is contained in the calculation of two or more parity bits. In particular, parity bit one (r1) is calculated from the bits whose positions have the least significant bit set: 1, 3, 5 and 7 (or in binary 001, 011, 101 and 111). Parity bit two (r2, at index two, or 10 in binary) is calculated from the bits whose positions have the second least significant bit set: 2, 3, 6, 7 (or 010, 011, 110, 111). Parity bit three (r3, located at position four, or 100 in binary) is calculated from the bits whose positions have the third least significant bit set: 4, 5, 6 and 7 (or 100, 101, 110, 111). The code sends the message bits padded with the specific parity (redundancy) bits in the form H (n, k), where n is the block size, the whole number of bits, and k is the number of data bits in the generated codeword. The whole procedure required for adding parity bits to the data bits to encode and create a proper codeword will be explained in the next parts of this report. The following table indicates all the possible Hamming codes:

[Table 1: possible Hamming codes; for r parity bits, total bits n = 2^r − 1 and data bits k = n − r.]

According to table 1, with r parity bits the code can cover bits from 1 up to 2^r − 1, which is called the total bits (or n); the number of parity bits must also be more than one (r > 1). For each dimension of the Hamming code there is a rate, equal to the number of data bits divided by the number of total bits (rate = k/n), whose result is always less than 1. Another important factor is the overhead factor, calculated as the total bits divided by the data bits (overhead factor = n/k); its result is always more than one.
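The relations behind table 1 can be computed directly (a minimal sketch; the function name is illustrative):

```python
def hamming_params(r):
    """Parameters of the Hamming code with r parity bits (requires r > 1):
    total bits n = 2**r - 1, data bits k = n - r, plus rate and overhead."""
    n = 2 ** r - 1
    k = n - r
    return n, k, k / n, n / k   # (n, k, rate = k/n, overhead factor = n/k)

# Example rows of the table: r=2 -> (3, 1), r=3 -> (7, 4),
# r=4 -> (15, 11), r=5 -> (31, 26).
```

As the text states, the rate is always below 1 and the overhead factor always above 1; both approach 1 as r grows, which is why longer Hamming codes are more efficient.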
3.2. An Alternative Description of the Hamming Code

The following figure indicates another description of the Hamming code. There are three nested circles, corresponding to the three parity equations defining the [7, 4] Hamming code, and there are seven areas among these circles, corresponding to the seven bits of the final codeword. Graphically, each parity bit covers only its adjacent bit positions, which can be shared with other parity bits in the figure: for instance, parity bit 1 (r1) covers data bits k1, k2, k4; parity bit 2 (r2) covers data bits k1, k3, k4; and parity bit 3 (r3) covers data bits k2, k3, k4. The number of circles can be extended for each dimension of the Hamming code. When a single bit error happens during a transmission process, this error is fixed by flipping that single bit in the proper area, corresponding to the position where the error happened.

[Figure: three nested circles illustrating the parity equations of the [7, 4] Hamming code.]

3.3. Theoretical Description of the Encoding Part of the Hamming Code

The Hamming code is a specific linear code which can correct just one error by adding r parity bits to the k data bits generated by the source, giving a codeword of length n = k + r. The number of data bits is k = 2^r − r − 1, which generates a codeword of length 2^r − 1 [1] [6] [7]. The general algorithm for the encoding part of the Hamming code is described in the following:

1. The r parity (redundancy) bits are combined with the k data bits to create a proper codeword containing r + k bits, so the bit position sequence runs from position number 1 to r + k. In fact, the decoding part on the receiver side, which contains the detection and correction steps for an occurred error, needs these extra bits: the parity (redundant) bits are sent along with the data bits, added by the sender (by encoding), and always removed by the receiver on the receiver side (by decoding) [23].
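Step 1 can be sketched as follows: reserve the power-of-two positions for parity and lay the data bits into the remaining positions (a minimal illustration; the function name and None placeholders are assumptions):

```python
def layout(k_bits):
    """Build a Hamming codeword skeleton for the given data bits: positions
    that are powers of two are reserved for parity (None here), the remaining
    positions receive the data bits in their original order."""
    r = 0
    while (1 << r) < len(k_bits) + r + 1:   # smallest r with 2^r >= k + r + 1
        r += 1
    n = len(k_bits) + r
    data = iter(k_bits)
    cw = []
    for pos in range(1, n + 1):
        # pos & (pos - 1) == 0 exactly when pos is a power of two
        cw.append(None if pos & (pos - 1) == 0 else next(data))
    return cw   # cw[i] holds the bit at 1-indexed position i + 1
```

For k = 4 data bits this reserves positions 1, 2 and 4 and yields the familiar 7-bit frame; for k = 11 it yields a 15-bit frame with four parity positions.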

But how are the data bits and parity bits merged into a codeword? The parity bit positions are always numbered in powers of two; these positions are reserved and prepared for the parity bits, and the other bit positions are related to the data bits [23]. The first parity bit is located at bit position 2^0 = 1, the second parity bit at bit position 2^1 = 2, the third parity bit at bit position 2^2 = 4, and so on. In the following simple table, the position of each parity bit (r) is presented in a grey-colored cell; these cells are reserved for the parity purpose. After that, the data bits (k) are copied to the other free, remaining positions, where nothing was reserved before; the data bits are appended in the same order as they appear in the main data word generated by the source.

[Table: codeword positions, with the parity positions (powers of two) shaded grey.]

So the transmitter adds parity bits through a process that creates a relationship between the parity bits and specified data bits. The receiver will then check the two sets of bits to detect and correct the errors, which will be explained in the decoding part [23]; the following figure presents the main idea of the coding used in the Hamming code.

2. The value of each parity bit is calculated by the XOR operation of some combination of the data bits [24]. The combinations of data bits are indicated in the table below, which shows all parity bits and data bits and the positions that should be included in the calculation according to the rules of the encoding part of the Hamming code. For example, to calculate the value of parity bit number one, called r1, the XOR calculation must begin from r1, then check one bit and skip one bit, repeating until the last bit that exists in the sequence of bits in the codeword.

[Table: combinations of data bits entering each parity bit.]

The result of an XOR operation is zero if the two bits are the same; otherwise, if the two bits are different, the result is one (according to the following table). The XOR operation in table 4 indicates the odd function: if the number of 1's among the variables X and Y is an odd number, the result of XORing these two single bits is equal to one; otherwise, if the number of 1's is even, the result is zero.

[Table 4: truth table of the XOR operation.]
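Step 2's XOR rule generalizes cleanly: the parity bit at position 2^j is the XOR of every other position whose binary index has bit j set; for r1 (j = 0) this is exactly the "check one, skip one" pattern described above. A sketch, with illustrative names (the paper's own implementation is in Verilog):

```python
def hamming_encode(data):
    """Encode data bits into a Hamming codeword: data goes to the
    non-power-of-two positions, then each parity bit at position 2^j is the
    XOR of all other positions whose index has bit j set."""
    r = 0
    while (1 << r) < len(data) + r + 1:
        r += 1
    n = len(data) + r
    cw = [0] * (n + 1)                # 1-indexed; cw[0] unused
    it = iter(data)
    for pos in range(1, n + 1):
        if pos & (pos - 1) != 0:      # not a power of two -> data position
            cw[pos] = next(it)
    for j in range(r):                # fill each parity position
        p = 1 << j
        for pos in range(1, n + 1):
            if pos != p and pos & p:  # position's index has bit j set
                cw[p] ^= cw[pos]
    return cw[1:]
```

For data bits 1, 0, 1, 1 this reproduces the H(7, 4) codeword with parity values p1 = 0, p2 = 1, p4 = 0 interleaved at positions 1, 2 and 4.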

4. Diagonal Hamming Code

A. Design of the Diagonal Hamming Method for Memory

The proposed design of the Diagonal Hamming based multi-bit error detection and correction technique for memory is shown in Fig. 1. Using this approach of diagonal Hamming bits, errors in the message saved in the memory can be recognized and rectified. In the encoding technique, the message bits are given as input to the Diagonal Hamming encoder and the Hamming bits are calculated. The message and Hamming bits (32 + 24 bits) are saved in the memory after the encoding technique. Errors occurring in the message bits saved in the memory can then be recognized and rectified in the decoding technique.

The message is represented in the form of an m x n matrix. The grouping of the message bits is depicted in Fig. 2. The encoder generates the Hamming bits: they are obtained by grouping the message bits and calculating over each group with the help of the Hamming code. The message bits are patterned as shown in Fig. 3. Eight diagonals are considered in this Diagonal Hamming method, and each diagonal consists of 4 message bits.

B. Design of the Diagonal Hamming Encoder

As an example, a message of 32 bits is considered in the proposed method. The message bits are grouped as shown in Fig. 3 in the specified directions. The first diagonal consists of m3[7], m3[5], m2[6], m1[7]; the second diagonal has m3[6], m2[7], m0[1], m1[0]; the third diagonal has m0[0], m0[2], m1[1], m2[0]; the fourth diagonal has m3[4], m2[5], m1[6], m0[7]; the fifth diagonal has m3[3], m2[4], m1[5], m0[6]; the sixth diagonal has m3[2], m2[3], m1[4], m0[5]; the seventh diagonal has m3[1], m2[2], m1[3], m0[4]; and the eighth diagonal has m3[0], m2[1], m1[2], m0[3]. Each one of the diagonals has four message bits. For the respective groups, the Hamming bits are calculated as shown in Fig. 4. The Hamming bits are denoted by the arrays R1, R2, R3, R4, R5, R6, R7, R8, and each array consists of 3 bits. In the encoder these Hamming bits are calculated for the message, giving 24 Hamming bits in total for the 32-bit message. The Hamming bits are calculated as given in equations (1)-(24):
: For the first row: Hamming Decoder The message bits which
R1[1] = m1[7] ⊕ m2[6] ⊕ m3[7]; are encoded and kept in memory as a matrix
(1) R1[2] = m1[7] ⊕ m3[5] ⊕ m3[7]; (2) as shown in Fig. 4, and are given as input to
R1[3] = m2[6] ⊕ m3[5] ⊕ m3[7]; (3) For the decoder. The decoder now segregates
the second row: R2[1] = m1[0] ⊕ m0[1] ⊕ message and hamming bits and it
m3[6]; (4) R2[2] = m1[0] ⊕ m2[7] ⊕ recalculates the hamming bits and evaluates
m3[6]; (5) R2[3] = m0[1] ⊕ m2[7] ⊕ syndrome bits. The syndrome bits are
m3[6]; (6) For the third row: R3[1] = m2[0] evaluated using the equations given in (25)-
⊕ m1[1] ⊕ m0[0]; (7) R3[2] = m2[0] ⊕ (48) : For first row S1[1] = R1[1] ⊕ m1[7]
m0[2] ⊕ m0[0]; (8) R3[3] = m1[1] ⊕ ⊕ m2[6] ⊕ m3[7]; (25) S1[2] = R1[2] ⊕
m0[2] ⊕ m0[0]; (9) For the fourth row: m1[7] ⊕ m3[5] ⊕ m3[7]; (26) S1[3] =
R4[1] = m0[7] ⊕ m1[6] ⊕ m3[4]; (10) R1[3] ⊕ m2[6] ⊕ m3[5] ⊕ m3[7]; (27) For
R4[2] = m0[7] ⊕ m2[5] ⊕ m3[4]; (11) the second row: S2[1] = R2[1] ⊕ m1[0] ⊕
R4[3] = m1[6] ⊕ m2[5] ⊕ m3[4]; (12) For m0[1] ⊕ m3[6]; (28) S2[2] = R2[2] ⊕
the fifth row: R5[1] = m0[6] ⊕ m1[5] ⊕ m1[0] ⊕ m2[7] ⊕ m3[6]; (29) S2[3] =
m3[3]; (13) R5[2] = m0[6] ⊕ m2[4] ⊕ R2[3] ⊕ m0[1] ⊕ m2[7] ⊕ m3[6]; (30) For
m3[3]; (14) R5[3] = m1[5] ⊕ m2[4] ⊕ the third row: S3[1] = R3[1] ⊕ m2[0] ⊕
m3[3]; (15) For the sixth row: R6[1] = m1[1] ⊕ m0[0]; (31) S3[2] = R3[2] ⊕
m0[5] ⊕ m1[4] ⊕ m3[2]; (16) R6[2] = m2[0] ⊕ m0[2] ⊕ m0[0]; (32) S3[3] =
m0[5] ⊕ m2[3] ⊕ m3[2]; (17) R6[3] = R3[3] ⊕ m1[1] ⊕ m0[2] ⊕ m0[0]; (33) For
m1[4] ⊕ m2[3] ⊕ m3[2]; (18) For the the fourth row: S4[1] = R4[1] ⊕ m0[7] ⊕
seventh row: R7[1] = m0[4] ⊕ m1[3] ⊕ m1[6] ⊕ m3[4]; (34) S4[2] = R4[2] ⊕
m0[7] ⊕ m2[5] ⊕ m3[4]; (35) S4[3] = (49) After the position is calculated, the
R4[3] ⊕ m1[6] ⊕ m2[5] ⊕ m3[4]; (36) For error corrector negates the bit corresponding
the fifth row: S5[1] = R5[1] ⊕ m0[6] ⊕ to that position to correct the data bit. This
m1[5] ⊕ m3[3]; (37) S5[2] = R5[2] ⊕ process is done till all the corrupted bits are
m0[6] ⊕ m2[4] ⊕ m3[3]; (38) S5[3] = corrected. Now the output of the decoder
R5[3] ⊕ m1[5] ⊕ m2[4] ⊕ m3[3]; (39) For will be the form of Fig. 2.
the sixth row: S6[1] = R6[1] ⊕ m0[5] ⊕
m1[4] ⊕ m3[2]; (40) S6[2] = R6[2] ⊕
m0[5] ⊕ m2[3] ⊕ m3[2]; (41) S6[3] =
R6[3] ⊕ m1[4] ⊕ m2[3] ⊕ m3[2]; (42) For
the seventh row: S7[1] = R7[1] ⊕ m0[4] ⊕
m1[3] ⊕ m3[1]; (43) S7[2] = R7[2] ⊕
m0[4] ⊕ m2[2] ⊕ m3[1]; (44) S7[3] =
SIMULATIONS AND
R7[3] ⊕ m1[3] ⊕ m2[2] ⊕ m3[1]; (45) For
RESULTS
the eight row: S8[1] = R8[1] ⊕ m0[3] ⊕
m1[2] ⊕ m3[0]; (46) S8[2] = R8[2] ⊕
m0[3] ⊕ m2[1] ⊕ m3[0]; (47) S8[3] =
R8[3] ⊕ m1[2] ⊕ m2[1] ⊕ m3[0]; (48) If
all the syndrome bits are equal to zero, then
it represents that the message bits are not
corrupted and if anyone of the syndrome bits
Fig 1 : 32bit input for Diagonal hamming
in non-zero, then it represents the message grouped in 4 msg signals , mo,m1,m2m,m3
bit(s) are corrupted. These corrupted bits
need correction so, the message bits are sent
to the error correction part. In the correcting
of error part, the location of error is
identified by doing the following
calculations: Suppose if the error is in the
third row of the message organization, then
the error position is calculated as: (S3[3] ∗ Fig 2 : diagonal Arrangement

(22 )) + (S3[2] ∗ (21 )) + (S3[0] ∗ (20 ));


Fig. 3: Positioning in the form of Hamming
codes with grouped message data

The inputs given to this circuit are m0, m1,
m2, m3, and the outputs we get are the
parity bits in a separate order and final_out.
R1, R2, R3, R4, R5, R6, R7, R8 are the
parity bits.

32-bit Diagonal Hamming Encoding:
final_out = parity bits + msg_input
          = 8 × 3 bits (8 parity arrays, each of 3 bits) + 32 bits
          = 56 bits

Error Corrector at the Receiver Side (Decoder)

Fig.: Block diagram for the Diagonal Hamming encoder

Fig.: Block diagram for the decoder

Fig.: Encoding and decoding block

In this block, the 32-bit input is grouped
into 8-bit msg_data, i.e., m0, m1, m2, m3.
The output is R_m0, R_m1, R_m2, R_m3
(the received message). The output also
indicates whether any error occurred when
matching the encoded and decoded data.

Fig.: Output waveform for the error corrector

Diagonal Hamming Code (32 bit) with Cryptography

The output we get from the encoding block
is final_out = 00b73789678f2a. This is sent
to the receiver side. Suppose the data
arrives at the receiver with some errors;
consider in = 01b63709778e2b as the input
for the error corrector (decoding) block. At
the output, the errors are corrected and we
get exactly the data that was sent at the
transmitter side: out = 00b73789678f2a.
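The worked example can be sanity-checked numerically: XORing the transmitted encoder output with the corrupted word received at the decoder locates the flipped bits. This is an illustrative check in Python (the bit numbering below is plain LSB-first over the 56-bit word, not tied to the m0..m3 grouping); it shows six bits flipped across six different bytes, so this is genuinely a multi-bit error pattern.

```python
# XOR the transmitted and received 56-bit words to find the injected errors.
sent     = 0x00b73789678f2a   # final_out from the encoding block
received = 0x01b63709778e2b   # corrupted input to the error corrector

diff = sent ^ received
flipped = [i for i in range(56) if (diff >> i) & 1]   # LSB-first positions
print(len(flipped), "bits flipped at positions", flipped)
```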

Encoding and Decoding in One Module:

Here, the error correction codes are
combined with the Advanced Encryption
Standard (AES). The encoded output is the
input to AES encryption using a secured
key. The encrypted output is forwarded to
the decrypt block with the same secured
key, and the decrypted output is given to
the decoding block, which decodes the
input without any error, so the required
data is retrieved at the receiver side.
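The data flow of this combined module (encode, then encrypt, then decrypt, then decode) can be sketched as below. This is not the paper's AES core: a simple keyed-XOR pad (derived with `hashlib`) stands in for the encrypt/decrypt blocks purely to show the pipeline, exploiting the same property AES provides, namely that decrypting with the same secured key restores the encoder output exactly.

```python
# Sketch of the pipeline: encode -> encrypt -> decrypt -> decode.
# A keyed XOR stream stands in for AES here; it is NOT AES and NOT secure.
import hashlib

def keystream_xor(data: bytes, key: bytes) -> bytes:
    # Stand-in cipher: XOR with a key-derived pad (symmetric, like AES
    # in the sense that the same key inverts the operation).
    pad = hashlib.sha256(key).digest()
    return bytes(b ^ pad[i % len(pad)] for i, b in enumerate(data))

key = b"shared secured key"
encoded = bytes.fromhex("00b73789678f2a")      # 56-bit encoder output

ciphertext = keystream_xor(encoded, key)        # "encrypt" stage
recovered  = keystream_xor(ciphertext, key)     # "decrypt" stage, same key
assert recovered == encoded                     # decoder then gets clean input
```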
Fig.: ECC with Cryptography output waveforms

Advantages:
1. This design can correct multiple bit errors.
2. This design also provides security for data in communication systems.

Conclusion

The Diagonal Hamming coding method is
proposed in this paper, and the main idea
behind this work is to reduce the maximum
number of errors during transmission of bits
in memory. The Diagonal Hamming method
identifies and corrects up to a maximum of
8-bit errors in a row for a 32-bit input. The
Diagonal Hamming method reduces area
and delay by 91.76 percent and 84.18
percent respectively. A large amount of
error correction is possible using this
Diagonal Hamming method.

REFERENCES

[1] Paromita Raha, M. Vinodhini and N.S. Murty, "Horizontal Vertical Parity and Diagonal Hamming Based Soft Error Detection and Correction of Memories," in Proc. International Conference on Computer Communication and Informatics, 2017.
[2] S. Asai, "Semiconductor Memory Trends," in Proc. of the IEEE, Vol. 74, 1986.
[3] Shivani Tambatkar, Siddharth Narayana Menon, Sudarshan V., M. Vinodhini and N.S. Murty, "Error Detection and Correction in Semiconductor Memories using 3D Parity Check Code with Hamming Code," in Proc. International Conference on Communication and Signal Processing, 2017.
[4] Varinder Singh and Narinder Sharma, "A Review on Various Error Detection and Correction Methods Used in Communication," American International Journal of Research in Science, Technology, Engineering and Mathematics, pp. 252-257, 2015.
[5] Mostafa Kishani, Hamid R. Zarandi, Hossein Pedram, Alireza Tajary, Mohsen Raji and Behnam Ghavami, "HVD: Horizontal-Vertical-Diagonal Error Detecting and Correcting Code to Protect Against Soft Errors," Design Automation for Embedded Systems Journal, Vol. 15, No. 3, pp. 289-310, May 2011.
[6] S. Vijayalakshmi and V. Nagarajan, "Design and Implementation of Low Power High-Efficient Transceiver for Body Channel Communications," Springer Journal of Medical Systems, Feb. 2019.
[7] Md. Shamimur Rahman, Muhammad Sheikh Sadi, Sakib Ahammed and Jan Jurjens, "Soft Error Tolerance using Horizontal-Vertical Double-Bit Diagonal Parity Method," in Proc. 2nd IEEE International Conference on Electrical Engineering and Information and Communication Technology (ICEEICT), 21-23 May 2015.
[8] Jing Guo, Liyi Xiao, Zhigang Mao and Qiang Zhao, "Enhanced Memory Reliability against Multiple Cell Upsets Using Decimal Matrix Code," IEEE Transactions on Very Large Scale Integration (VLSI) Systems, Vol. 22, Issue 1, pp. 127-135, Jan. 2014.
[9] Ahilan A. and Deepa P., "Modified Decimal Matrix Codes in FPGA Configuration Memory for Multiple Bit Upsets," in Proc. International Conference on Computer Communication and Informatics (ICCCI), pp. 1-5, Jan. 2015.
[10] M. Vinodhini and N.S. Murty, "Reliable Low Power NoC Interconnect," Microprocessors and Microsystems - Embedded Hardware Design, Vol. 57, March 2018.
[11] U. Sai Himaja, M. Vinodhini and N.S. Murty, "Multi-Bit Low Redundancy Error Control with Parity Sharing for NoC Interconnects," in Proc. International Conference on Communication and Electronics Systems, Oct. 2018.
