
5 – More codes

This week’s lecture


Advance videos:
➢ 5.1 Hadamard matrices
➢ 5.2 Describing Hadamard codes
➢ 5.3 Hadamard code application – Mariner 9
➢ 5.4 Introduction to polynomial codes
➢ 5.5 More on polynomial codes
On campus:
➢ 5.6 Cyclic codes
➢ 5.7 Other codes
➢ 5.8 Conclusion to coding part of module
5.1 – Hadamard codes
Introduction
Hadamard codes
➢ Previously we have looked at Hamming codes which had an
excellent rate
➢ In fact they were optimal in terms of rate
➢ But they weren’t that great at correcting errors – essentially
they could only correct one error
➢ In this session, we will look at a different type of code (based
upon matrices) which is almost the opposite – great at error
correction but a very poor rate!
Hadamard codes
➢ The supplementary notes that go with this lecture summarise
the important points
➢ In the lecture slides I will tend to go beyond the notes and give
further explanation, and some of the practical applications
➢ So you can use the lecture notes for the basics but the lecture
slides for a deeper understanding – well that is the idea
anyway!
Hadamard
➢ The whole of this topic concerns the name Hadamard
➢ Jacques Hadamard (1865-1963) was a French mathematician
who made numerous contributions to number theory in
particular
➢ The construction of Hadamard codes is named after him, even
though he did not invent the code himself, as it relies on his
principles
➢ (Image in public domain)
Hadamard matrices
➢ Before we get into the codes, we need to set the foundations
➢ A Hadamard matrix is a specific type of matrix
➢ When considered mathematically, it is a matrix consisting of
1s and −𝟏s, such that any two rows are orthogonal – that
means they have dot product zero when considered as vectors
Hadamard matrices
➢ This essentially implies that any two rows differ in exactly half
of their positions
➢ Where they differ the dot product will contribute −𝟏 and
where they are the same the dot product will contribute 1
➢ We want them to cancel out to make zero – so we need the 1s
and −𝟏s to balance
➢ So we need them to differ in exactly half of the possible places
Hadamard matrices
➢ Such matrices are very interesting mathematically and lead to
several open questions
➢ All Hadamard matrices are assumed square
➢ It is clear that every Hadamard matrix (apart from the trivial
𝟏 × 𝟏 case) must be of size 𝒏 × 𝒏 where n is even - since we
need to split the entries into two equal halves
Hadamard conjecture
➢ The Hadamard conjecture states that a Hadamard matrix exists
for every multiple of 4, so (if the conjecture is true) there is a
Hadamard matrix of size 𝟒𝒌 × 𝟒𝒌 for every k
➢ This is surprisingly hard! And has not yet been resolved.
➢ So far (AFAIK in January 2024 – see Nick Higham’s 2020 post at
https://nhigham.com/2020/04/10/what-is-a-hadamard-matrix/)
we still haven’t found a Hadamard matrix of size 𝟔𝟔𝟖 × 𝟔𝟔𝟖.
➢ That seems very small!
➢ 𝟒𝟐𝟖 × 𝟒𝟐𝟖 was shown to exist only in 2005 by Hadi Kharaghani
and Behruz Tayfeh-Rezaie
Hadamard matrices
➢ We will focus only on those matrices of size 𝟐𝒌 × 𝟐𝒌 for some k
➢ These definitely exist – we will give a construction for them
➢ However, we want to use them in a somewhat different form
more suitable for coding
➢ Why are Hadamard matrices as they stand (with 1 and −𝟏)
not really suitable for coding?
Hadamard matrices
➢ We are dealing with binary codes, so we want to have 0 and 1
➢ So we will define essentially the same principle for a
Hadamard code, but instead of 1s and -1s we will use 1 and 0
➢ The rest of the concept can stay exactly the same (two rows
differ in exactly half of the positions)
Hadamard matrices
➢ Hence, we make the following definition, which is what we
will be referring to when we use the phrase Hadamard matrix
from now on:
A Hadamard matrix is a square matrix consisting
only of 0s and 1s where between any two rows,
there are exactly half of the columns where the two
entries are equal and the other half of the columns
where the two entries are different
Hadamard matrices
➢ An example of a 4 x 4 Hadamard matrix is below – check for
yourself that this satisfies the required property!

1 0 1 0
1 1 1 1
1 0 0 1
1 1 0 0
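As an illustrative sketch of my own (not part of the module materials – the function name is made up), you can check the required property with a few lines of Python: every pair of rows must agree in exactly half of the columns.

```python
def is_hadamard(rows):
    # every pair of rows must agree in exactly half of the columns
    n = len(rows)
    for i in range(n):
        for j in range(i + 1, n):
            agreements = sum(a == b for a, b in zip(rows[i], rows[j]))
            if agreements != n // 2:
                return False
    return True

H4 = [[1, 0, 1, 0],
      [1, 1, 1, 1],
      [1, 0, 0, 1],
      [1, 1, 0, 0]]
print(is_hadamard(H4))   # True
```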
Sylvester’s construction
➢ The most widely used method for constructing Hadamard
matrices is Sylvester’s construction
➢ James Joseph Sylvester (1814-1897) was a 19th century
English mathematician who made great strides in matrix
theory, number theory and combinatorics
➢ His construction produces Hadamard matrices of size 𝟐𝒏 in a
recursive manner
➢ Image: public domain
Sylvester’s construction
➢ Given a matrix of binary numbers M we will denote by 𝑴′ the
“complement” of M – that is the matrix obtained by flipping
all the numbers 0s and 1s (so all the 0s become 1s and vice
versa)
➢ Let H be a Hadamard matrix
➢ Then the matrix formed by writing

      𝑯   𝑯
      𝑯   𝑯′

is also a Hadamard matrix
Sylvester’s construction
➢ This notation means to effectively copy H into the top left etc
parts of the new matrix, so the new matrix is twice the size – if
the original matrix was 𝟐𝒏 × 𝟐𝒏 then the new matrix is 𝟐𝒏+𝟏 ×
𝟐𝒏+𝟏
➢ This is Sylvester’s construction
➢ Most Hadamard matrices used in coding are those formed via
Sylvester’s construction
Sylvester’s construction
➢ Let’s create the first few as examples
➢ We will use the notation 𝑯𝒏 to denote the
𝒏 × 𝒏 Hadamard matrix created by this construction
➢ The case 𝑯𝟏 is trivial!
➢ This is just the 𝟏 × 𝟏 matrix (1) – this is essentially our “base
case” in the definition of these matrices
Sylvester’s construction
➢ To find 𝑯𝟐 we write the matrix

      𝑯𝟏   𝑯𝟏
      𝑯𝟏   𝑯𝟏′

➢ Since 𝑯𝟏 is just (1), and so 𝑯𝟏′ is just (0), this gives

      𝑯𝟐 = 1 1
           1 0
➢ Verify that this does satisfy the condition for a Hadamard
matrix – the rows agree in half (one) of the positions and
disagree in the other half
Sylvester’s construction
➢ Then we can use 𝑯𝟐 to find 𝑯𝟒 similarly – fill in 𝑯𝟐 into three “corners”
and 𝑯𝟐′ into the bottom right “corner” to get the 4 × 4 matrix 𝑯𝟒

      1 1 1 1
      1 0 1 0
      1 1 0 0
      1 0 0 1
➢ Again, check this is a Hadamard matrix!
Sylvester’s construction
➢ Similarly, you can create the matrix 𝑯𝟖 from this, and so on
for all powers of 2
1 1 1 1 1 1 1 1
1 0 1 0 1 0 1 0
1 1 0 0 1 1 0 0
1 0 0 1 1 0 0 1
1 1 1 1 0 0 0 0
1 0 1 0 0 1 0 1
1 1 0 0 0 0 1 1
1 0 0 1 0 1 1 0
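The recursive doubling step can be sketched in Python (my own illustration, not from the notes; the helper name is made up). It reproduces the matrices on these slides.

```python
def sylvester(k):
    """Build the 2**k x 2**k binary Hadamard matrix recursively."""
    H = [[1]]                                            # base case H1 = (1)
    for _ in range(k):
        comp = [[1 - bit for bit in row] for row in H]   # the complement H'
        # new matrix: H in three corners, H' in the bottom right
        H = [row + row for row in H] + [row + c for row, c in zip(H, comp)]
    return H

for row in sylvester(3):   # reproduces the 8 x 8 matrix above
    print("".join(str(bit) for bit in row))
```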
Hadamard codes
➢ In the next section we’ll use these matrices to construct error-
correcting codes
© 2024 Steve Lakin, Tony Mann and the University of Greenwich
The moral rights of the authors have been asserted
5.2 – Hadamard codes
So far
➢ We have defined Hadamard matrices and shown how they can
be constructed
➢ We’ll now investigate the codes associated with the Hadamard
matrices created by Sylvester’s construction
Sylvester’s construction of 𝑯𝟖
➢ In the last video we created the Hadamard matrix 𝑯𝟖

1 1 1 1 1 1 1 1
1 0 1 0 1 0 1 0
1 1 0 0 1 1 0 0
1 0 0 1 1 0 0 1
1 1 1 1 0 0 0 0
1 0 1 0 0 1 0 1
1 1 0 0 0 0 1 1
1 0 0 1 0 1 1 0
➢ And similarly for other square matrices with n rows where n is a power of 2
Hadamard codes
➢ The transmitted codewords in a Hadamard code
are the rows of a Hadamard matrix, together with
their “opposites” obtained by flipping the 1s and
0s
➢ So the Hadamard code with block length 8 has
codewords 11111111, 10101010, 11001100,
10011001, 11110000, 10100101, 11000011 and
10010110 (the rows of the matrix) and their
“opposites” 00000000, 01010101, 00110011,
01100110, 00001111, 01011010, 00111100,
01101001, so there are 16 codewords
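As a sketch of my own (not from the notes), we can build these 16 codewords from the matrix rows and confirm there are 16 of them, along with the distance between the closest pair.

```python
rows = ["11111111", "10101010", "11001100", "10011001",
        "11110000", "10100101", "11000011", "10010110"]
flip = str.maketrans("01", "10")
codewords = rows + [r.translate(flip) for r in rows]   # rows plus their "opposites"

dist = lambda a, b: sum(x != y for x, y in zip(a, b))
print(len(codewords))                                                    # 16
print(min(dist(a, b) for a in codewords for b in codewords if a != b))   # 4
```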
Hadamard codes
➢ The main benefit of doing this is the very large distance of this
code
➢ Since all rows differ in half of their elements, so do all these
codewords (think about why the opposites do as well)
➢ The only ones that differ in not exactly half of the positions
are 00000000 and 11111111, which differ completely
Linearity
➢ It is possible to show that Hadamard codes created in this
fashion (using Sylvester’s construction) are linear
➢ I won’t prove this, but you can check it for some of the codes
in the tutorial
➢ Note that Hadamard codes created from Hadamard matrices
obtained in some other form may not be linear in general
Generator matrix
➢ Because the code is linear, it can be described by a generator
matrix
Generator matrix
➢ Let us take the Hadamard code corresponding to the 𝟖 ×
𝟖 Hadamard matrix, with 16 codewords
➢ This means that our original messages are 4 bits long, so
there are 16 of them (all possible binary numbers, 0000,
0001 etc)
➢ Our generator matrix will have 8 rows so our codewords
have 8 bits
➢ And each row will have four bits, with first bit 1 and the
rest of the row being, in turn, 000, 001, 010, 011 etc (the
lexicographic order)
Generator matrix
➢ So, our generator matrix G is:
1 0 0 0
1 0 0 1
1 0 1 0
1 0 1 1
1 1 0 0
1 1 0 1
1 1 1 0
1 1 1 1
Generator matrix
➢ For example, this transforms 1011 as follows:
      1 0 0 0               1
      1 0 0 1               0
      1 0 1 0     1         0
      1 0 1 1     0         1
                      =
      1 1 0 0     1         1
      1 1 0 1     1         0
      1 1 1 0               0
      1 1 1 1               1
Generator matrix
➢ Hence, 1011 is transmitted as 10011001
➢ Check the list of codewords we created before and verify that
this is indeed a valid codeword available to us
➢ You can do more of these in the tutorial exercises
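As an illustration (my own sketch, not from the notes), the encoding is just the matrix–vector product above with every entry reduced (mod 2).

```python
G = [[1, 0, 0, 0], [1, 0, 0, 1], [1, 0, 1, 0], [1, 0, 1, 1],
     [1, 1, 0, 0], [1, 1, 0, 1], [1, 1, 1, 0], [1, 1, 1, 1]]

def encode(msg):
    # multiply G by the 4-bit message, reducing every entry (mod 2)
    return [sum(g * b for g, b in zip(row, msg)) % 2 for row in G]

print(encode([1, 0, 1, 1]))   # [1, 0, 0, 1, 1, 0, 0, 1] i.e. 10011001
```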
Decoding
➢ In terms of the transmission, the remaining task is to receive
the transmitted codeword at the other end, check it, and turn
it back into the original message
➢ The idea of a parity check matrix doesn’t really work here –
what we are doing isn’t parity checking
➢ So checking the received string for integrity is not as simple as
with Hamming codes
Decoding
➢ Because there are only a limited number of codewords, it’s
common to do this in a fairly straightforward way: just see
which valid codeword the received string is closest to
➢ You can adapt this further, with a range of techniques but for
the purposes of what we are doing here, let’s just assume the
received codeword has been corrected and received safely at
the other end
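A minimal sketch of this nearest-codeword idea (my own illustration, not from the notes; the helper names are made up): build the table of all 16 codewords, then pick the one at minimum Hamming distance from the received string.

```python
from itertools import product

G = [[1, 0, 0, 0], [1, 0, 0, 1], [1, 0, 1, 0], [1, 0, 1, 1],
     [1, 1, 0, 0], [1, 1, 0, 1], [1, 1, 1, 0], [1, 1, 1, 1]]
encode = lambda m: tuple(sum(g * b for g, b in zip(row, m)) % 2 for row in G)

# table of all 16 codewords and the messages they came from
table = {encode(m): list(m) for m in product([0, 1], repeat=4)}

def decode(received):
    # nearest-codeword decoding: pick the valid codeword at minimum
    # Hamming distance from the received string
    nearest = min(table, key=lambda c: sum(a != b for a, b in zip(c, received)))
    return table[nearest]

print(decode([0, 0, 0, 1, 1, 0, 0, 1]))   # 10011001 with its first bit flipped -> [1, 0, 1, 1]
```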
Decoding
➢ So we just need to extract the original message
➢ Again, this is not as easy as with Hamming
codes, as the message doesn’t lie naturally in
the bits in our case here
➢ But I will show a recovery matrix for the 8-bit
Hadamard code 𝑯𝟖
Decoding
➢ Here is a recovery matrix

1 0 0 0 0 0 0 0
1 0 0 0 1 0 0 0
1 0 1 0 0 0 0 0
1 1 0 0 0 0 0 0
Decoding
➢ For example, receiving 10011001 (as in the example earlier),
we do indeed get 1011:
                                  1
                                  0
      1 0 0 0 0 0 0 0             0           1
      1 0 0 0 1 0 0 0             1           0
                                        =
      1 0 1 0 0 0 0 0             1           1
      1 1 0 0 0 0 0 0             0           1
                                  0
                                  1
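As a sketch (mine, not from the notes), applying this recovery matrix is again a matrix–vector product (mod 2): each message bit is an XOR of selected codeword bits.

```python
R = [[1, 0, 0, 0, 0, 0, 0, 0],
     [1, 0, 0, 0, 1, 0, 0, 0],
     [1, 0, 1, 0, 0, 0, 0, 0],
     [1, 1, 0, 0, 0, 0, 0, 0]]

def recover(codeword):
    # each message bit is a (mod 2) sum, i.e. XOR, of selected codeword bits
    return [sum(r * c for r, c in zip(row, codeword)) % 2 for row in R]

print(recover([1, 0, 0, 1, 1, 0, 0, 1]))   # [1, 0, 1, 1]
```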
Decoding – an aside
➢ This matrix is not unique because the coding process means we
can calculate each bit of the message in different ways from the
codeword (because of the redundancy which gives the code its
error-correcting capability)
Error correction
➢ Note the main feature of Hadamard codes is their error-
correction ability
➢ Recall that this basically means that we can detect any number of
errors up to one less than the distance, and correct any number of
errors fewer than half the distance
➢ Taking any two codewords with a block length of 𝟐𝒌−𝟏 bits,
they will differ in half of these bits and so the distance is 𝟐𝒌−𝟐
➢ As k gets large, this tends towards infinity
Code notation
➢ So these have exceptionally good error-correcting properties –
compare with Hamming codes which had a distance 3
➢ Also, since we know the message length, block length and
distance, we can describe this code using the code notation we
used before
➢ For message length k we use the 𝟐𝒌−𝟏 × 𝟐𝒌−𝟏 Hadamard matrix
➢ Hadamard codes are (𝟐𝒌−𝟏, 𝒌, 𝟐𝒌−𝟐)₂ codes – recall what this
notation means!
Rate
➢ In contrast to their excellent error correcting abilities,
Hadamard codes have a very poor rate (especially compared
to Hamming codes which, you can recall, were optimal)
➢ In general, the message length is k and the block length is
𝟐𝒌−𝟏, and so this has rate 𝒌/𝟐𝒌−𝟏, which is very poor – as k
becomes large this tends to zero
Comparison
➢ Essentially Hamming codes and Hadamard codes are polar
opposites of each other!

➢ Hamming codes have a very high rate but very low error-correcting capabilities
➢ Hadamard codes have a very low rate but very high error-correcting capabilities
Comparison
➢ So which is “better”?
➢ If speed and efficiency is the most important thing, Hamming
codes are “better”
➢ If the communication channel is unreliable, the ability to
correct errors is most important, so Hadamard codes are
“better”
➢ As with most practical applications, which you need depends
on your circumstances!
5.3 – Hadamard codes application
Mariner 9
Mariner 9
➢ To illustrate a real practical use of Hadamard matrices, let’s
briefly look at the Mariner 9 space probe
➢ This was launched in 1971 with the intention of surveying the
surface of Mars
➢ Upon its arrival at Mars in late 1971, it became the first
man-made satellite to orbit another planet
Mariner 9

➢ Image: NASA (public domain, Wikimedia Commons)


Mariner 9
➢ The pictures being sent back from such a long distance were
obviously likely to be corrupted: there would be a significant
amount of noise in this transmission
➢ So we need a good error-correcting code to send the digitised
pictures so that we can recover the original picture correcting
for all the noise – a perfect application for Hadamard codes!
Mariner 9
➢ The code used was a (32, 6, 16) Hadamard code
➢ This meant the block size (codeword length) was 32 bits
➢ The original messages were 6 bits long – each of these
corresponded to a pixel of the picture; this allowed us to
represent 𝟐𝟔 = 𝟔𝟒 values, so 64 shades (using greyscale)
➢ This meant it could correct up to 7 errors (up to, but not
including, half of the distance 16) in every 32 bit block – pretty
powerful!
Mariner 9
➢ You might wonder why we didn’t use a longer code
➢ But the technology was not at a level where we could store
and send much longer blocks than this – there was only room
for a small processor
➢ The messages were decoded quickly using techniques such as
Fast Fourier Transforms – we won’t go into the details here
Mariner 9
➢ The pictures
received were
pretty good – this
is an example
➢ (Image NASA, public
domain, from Wikimedia
Commons)
Mariner 9
➢ This is a useful illustration of codes in practice
➢ In this situation, the quality of the image is more important
than the efficiency, so error-correction is the main priority,
especially in such a noisy communication channel
➢ Hadamard codes are perfect for applications like this!
Mariner 9
➢ In total over 7000 images were received, mapping pretty
much the entire surface of Mars
➢ This was a huge leap on from any previous images obtained
➢ Mariner 9 was switched off in 1972 but continued to orbit
Mars. NASA predicted in 2011 that it would burn up or crash
into Mars around 2022. We don’t know whether it has done
so or is still orbiting the planet.
(https://en.wikipedia.org/wiki/Mariner_9)
Summary
➢ Remember – different codes are useful in different
circumstances!
➢ Hadamard codes have different strengths to Hamming codes,
for example
➢ We will look next at polynomial codes
5.4 - Polynomial codes: Introduction
Linear codes
➢ Last time we looked at linear codes and gave a few definitions
➢ This time we will expand this idea and look at particular
categories of linear codes
➢ These are polynomial and, further, cyclic codes
➢ These are very commonly used in practice!
Polynomial codes
➢ Recall that we can regard codewords as polynomials
➢ We do everything (mod 2) in binary
➢ For example, the codeword 101011 could be seen to
correspond to the polynomial
𝟏𝒙𝟓 + 𝟎𝒙𝟒 + 𝟏𝒙𝟑 + 𝟎𝒙𝟐 + 𝟏𝒙 + 𝟏
This would obviously just be written as
𝒙𝟓 + 𝒙𝟑 + 𝒙 + 𝟏
Polynomial codes
➢ You can manipulate these polynomials just as usual – all you
have to remember is we are doing everything (mod 2)
➢ For example, what are the following?
(𝒙 + 𝟏)(𝒙𝟐 + 𝟏)
(𝒙 + 𝟏)𝟑
(𝒙𝟐 + 𝒙 + 𝟏)(𝒙 + 𝟏)
➢ Express these as codewords as well!
Polynomial codes
➢ (𝒙 + 𝟏)(𝒙𝟐 + 𝟏) = 𝒙𝟑 + 𝒙𝟐 + 𝒙 + 𝟏 which
corresponds to the codeword 1111
➢ (𝒙 + 𝟏)𝟑 would be 𝒙𝟑 + 𝟑𝒙𝟐 + 𝟑𝒙 + 𝟏 in normal
calculations, which is 𝒙𝟑 + 𝒙𝟐 + 𝒙 + 𝟏 (mod 2)
➢ This again corresponds to the codeword 1111
➢ (𝒙𝟐 + 𝒙 + 𝟏)(𝒙 + 𝟏) is 𝒙𝟑 + 𝟐𝒙𝟐 + 𝟐𝒙 + 𝟏 in normal
calculations, which is 𝒙𝟑 + 𝟏 (mod 2)
➢ This corresponds to the codeword 1001
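Since coefficients add (mod 2), these products can be computed with shifts and XORs. A small sketch of my own (not from the notes; holding each polynomial as an integer bitmask is an assumption for illustration):

```python
def poly_mul(a, b):
    """Multiply two (mod 2) polynomials held as integer bitmasks,
    where bit i is the coefficient of x**i; adding coefficients is XOR."""
    result, i = 0, 0
    while b >> i:
        if (b >> i) & 1:
            result ^= a << i
        i += 1
    return result

print(format(poly_mul(0b11, 0b101), "b"))   # (x + 1)(x**2 + 1)     -> 1111
print(format(poly_mul(0b111, 0b11), "b"))   # (x**2 + x + 1)(x + 1) -> 1001
```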
Polynomial codes
➢ We can create a code using these ideas
➢ Let us take a generator polynomial
➢ Then consider all possible multiples of this polynomial that
give rise to an answer of degree less than or equal to some
specified degree n
➢ These codewords we generate using this method will be our
polynomial code
Polynomial codes
➢ Let’s see an example of this, we’ll take this in a fair amount of
detail
➢ We will take our generator polynomial to be 𝒙𝟐 + 𝒙 + 𝟏
➢ Let us specify that our codewords should be of length 5
➢ That means we need all multiples of 𝒙𝟐 + 𝒙 + 𝟏
up to and including anything of degree 4
Polynomial codes
➢ We will work through each possibility – we will just multiply
the polynomial by a possibility and write down what we get as
a codeword
➢ Start by multiplying by 0
➢ This clearly just gives zero as the answer
➢ Hence, as a codeword we just have 00000
➢ This is the first of our codewords
Polynomial codes
➢ Next, consider multiplying by 1
➢ This clearly just gives the same polynomial
𝒙𝟐 + 𝒙 + 𝟏 as the answer
We can write this as 𝟎𝒙𝟒 + 𝟎𝒙𝟑 + 𝟏𝒙𝟐 + 𝟏𝒙 + 𝟏
➢ Note that there are no 𝒙𝟒 or 𝒙𝟑 terms, so those bits are 0
➢ Hence, as a codeword we have 00111
➢ This is our second codeword
Polynomial codes
➢ Next, consider multiplying by 𝒙
➢ This gives the polynomial 𝒙𝟑 + 𝒙𝟐 + 𝒙 as the answer
➢ We can write this as 𝟎𝒙𝟒 + 𝟏𝒙𝟑 + 𝟏𝒙𝟐 + 𝟏𝒙 + 𝟎
➢ Hence, as a codeword we have 01110
➢ This is our third codeword
➢ Hopefully you are getting the idea now – let’s keep
going!
Polynomial codes
➢ Next, consider multiplying by 𝒙 + 𝟏
➢ We need to work out (𝒙 + 𝟏)(𝒙𝟐 + 𝒙 + 𝟏)
➢ This gives 𝒙𝟑 + 𝟐𝒙𝟐 + 𝟐𝒙 + 𝟏 in normal calculations, which is
𝒙𝟑 + 𝟏 (mod 2)
➢ We can write this as 𝟎𝒙𝟒 + 𝟏𝒙𝟑 + 𝟎𝒙𝟐 + 𝟎𝒙 + 𝟏
➢ Hence, as a codeword we have 01001
➢ This is our fourth codeword
Polynomial codes
➢ We just need to continue systematically, taking multiples – it’s
better to be systematic!
➢ So next, consider multiplying by 𝒙𝟐
➢ This, fairly obviously, gives 𝒙𝟒 + 𝒙𝟑 + 𝒙𝟐
➢ We can write this as 𝟏𝒙𝟒 + 𝟏𝒙𝟑 + 𝟏𝒙𝟐 + 𝟎𝒙 + 𝟎
➢ Hence, as a codeword we have 11100
➢ This is our fifth codeword
Polynomial codes
➢ Systematically, it is probably easiest to consider multiplying by
𝒙𝟐 + 𝟏 next
➢ So we need to work out (𝒙𝟐 + 𝟏)(𝒙𝟐 + 𝒙 + 𝟏)
➢ This gives 𝒙𝟒 + 𝒙𝟑 + 𝟐𝒙𝟐 + 𝒙 + 𝟏 in normal calculations,
which is 𝒙𝟒 + 𝒙𝟑 + 𝒙 + 𝟏 (mod 2)
➢ We can write this as 𝟏𝒙𝟒 + 𝟏𝒙𝟑 + 𝟎𝒙𝟐 + 𝟏𝒙 + 𝟏
➢ Hence, as a codeword we have 11011
➢ This is our sixth codeword
Polynomial codes
➢ Let’s consider multiplying by 𝒙𝟐 + 𝒙 next
➢ So we need to work out (𝒙𝟐 + 𝒙)(𝒙𝟐 + 𝒙 + 𝟏)
➢ This gives 𝒙𝟒 + 𝟐𝒙𝟑 + 𝟐𝒙𝟐 + 𝒙 in normal calculations,
which is 𝒙𝟒 + 𝒙 (mod 2)
➢ We can write this as 𝟏𝒙𝟒 + 𝟎𝒙𝟑 + 𝟎𝒙𝟐 + 𝟏𝒙 + 𝟎
➢ Hence, as a codeword we have 10010
➢ This is our seventh codeword
Polynomial codes
➢ The last one to consider is multiplying by
𝒙𝟐 + 𝒙 + 𝟏
➢ So we need (𝒙𝟐 + 𝒙 + 𝟏)(𝒙𝟐 + 𝒙 + 𝟏)
➢ This gives 𝒙𝟒 + 𝟐𝒙𝟑 + 𝟑𝒙𝟐 + 𝟐𝒙 + 𝟏 in normal
calculations, which is 𝒙𝟒 + 𝒙𝟐 + 𝟏 (mod 2)
➢ We can write this as 𝟏𝒙𝟒 + 𝟎𝒙𝟑 + 𝟏𝒙𝟐 + 𝟎𝒙 + 𝟏
➢ Hence, as a codeword we have 10101
➢ This is our eighth codeword
Polynomial codes
➢ We have now exhausted all the possible multiples
➢ We’ve considered, systematically, every multiple that will give
us a polynomial up to 𝒙𝟒 – any further multiple will take us
past this and the limit of our code
➢ So we have our eight codewords:
00000, 00111, 01110, 01001, 11100, 11011, 10010, 10101
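The whole construction above can be sketched in a few lines (my own illustration, not from the notes), holding each polynomial as an integer bitmask and multiplying the generator by every polynomial of degree at most 2.

```python
def poly_mul(a, b):
    # (mod 2) polynomial product; bit i of each int is the coefficient of x**i
    result, i = 0, 0
    while b >> i:
        if (b >> i) & 1:
            result ^= a << i
        i += 1
    return result

gen = 0b111   # generator polynomial x**2 + x + 1
# multiply by every polynomial of degree <= 2 (the integers 0..7)
codewords = [format(poly_mul(gen, m), "05b") for m in range(8)]
print(codewords)   # the eight codewords above, in the same order
```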
Polynomial codes
➢ So, we have eight different 5-bit codewords available for us to
use
➢ Hence, we can code eight different things
➢ Where does the number 8 come from?
➢ How many codewords in general will we create from a system
like this?
➢ Think about it for a moment before we explore this!
5.5 - Polynomial codes continued
Polynomial codes
➢ We introduced the idea of a polynomial code in the previous
section and asked how many codewords such a code will have
➢ Note that by taking an appropriate multiple, we can get any
combination of the coefficients of the powers of 𝒙𝟒 , 𝒙𝟑 and 𝒙𝟐
➢ So, there are two possibilities for each of these three coefficients,
either 0 or 1
➢ Hence we have 𝟐𝟑 = 8 possibilities for multiples, and hence 8
possible codewords
➢ How does this generalise?
Polynomial codes
➢ In general suppose we generate codewords of length n from a
generating polynomial of degree m
➢ Then we need to generate all multiples of this generating
polynomial, which means we have choices (either 0 or 1) for
each of the n – m coefficients of the polynomials to multiply
by
➢ Hence we will be able to create 𝟐𝒏−𝒎 possible codewords in
general
Polynomial codes
➢ Another important thing to note here is that because we have
every possibility for these coefficients, then we can make a
nice easy mapping from our original messages to this new
codeword set
➢ Have a look at the first three digits of each of the codewords –
what can you notice?
Polynomial codes
➢ We can map the numbers 0 to 7 just using the first three
symbols as the actual data:
Number Binary Code
0 000 00000
1 001 00111
2 010 01001
3 011 01110
4 100 10010
5 101 10101
6 110 11011
7 111 11100
Polynomial codes
➢ This means that the remaining two digits (in this case) are
parity checks
➢ To perform the parity check, we can just divide the generator
polynomial into the string we receive and check that it
genuinely is a multiple of the generator polynomial, in which
case the string is a valid codeword
➢ If not, then we know something went wrong
➢ Let’s see an example
Polynomial codes
➢ All valid codewords are multiples of the generating polynomial
➢ So to test whether a received string was correctly transmitted, we
see if it is divisible without remainder by the generating polynomial
➢ Mathematically, we would use long division to divide what we get by
the generator polynomial
➢ If we get the remainder to be zero then we know that the codeword
is a multiple of the generator polynomial
➢ This suggests that it was received correctly without error
Polynomial codes
➢ However in practice we wouldn’t do the actual long division,
but rather do it all by XOR
➢ You can check that the method is exactly the same as doing
normal long division (remember we are doing everything
(mod 2))
➢ Let’s consider receiving the codeword 11011, and remember
that our generator polynomial is 𝒙𝟐 + 𝒙 + 𝟏, which
corresponds to 111
Polynomial codes
➢ To divide 111 into 11011, start from the first occurrence of 1 in
the codeword and write 111 underneath it:

1 1 0 1 1
1 1 1

➢ Now take the XOR of these bits, and copy the rest of the row
down
Polynomial codes
➢ This gives

1 1 0 1 1
1 1 1
0 0 1 1 1
Polynomial codes
➢ Now do the same again, writing 111
underneath the next occurrence of 1:
1 1 0 1 1
1 1 1
0 0 1 1 1
1 1 1
➢ Again take the XOR of these bits
Polynomial codes
➢ We just get all zeros
1 1 0 1 1
1 1 1
0 0 1 1 1
1 1 1
0 0 0 0 0
➢ Since we have all zeros, then this is a valid
codeword
Polynomial codes
➢ Note that this procedure is exactly the long division of one
polynomial into another (we don’t need the quotient so we
can ignore it)
➢ You could if you wish do it like that, but I find it easier to just
deal with the 0 and 1 and XOR, and that’s how a computer
would naturally do it
➢ What happens if we get an invalid string?
Polynomial codes
➢ Let us suppose we receive the string 10011 instead. The
calculation gives:
1 0 0 1 1
1 1 1
0 1 1 1 1
1 1 1
0 0 0 0 1
Polynomial codes
➢ This time we haven’t got all zeros as the answer, so this means
this received string is invalid and there has been an error
➢ In terms of polynomials, this means that the received string is
not a multiple of the generator polynomial (so it has a
remainder)
➢ Hence it is not one of our defined codewords
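This XOR division is easy to sketch in code (my own illustration, not from the notes), again holding each polynomial as an integer bitmask: line the generator up under the leading 1 and XOR, until what remains is shorter than the generator.

```python
def remainder(received, gen):
    # repeatedly XOR the generator, left-aligned with the leading 1,
    # until what is left is shorter than the generator
    while received.bit_length() >= gen.bit_length():
        received ^= gen << (received.bit_length() - gen.bit_length())
    return received

print(remainder(0b11011, 0b111))   # 0 -> valid codeword
print(remainder(0b10011, 0b111))   # 1 -> invalid, an error occurred
```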
Polynomial codes
➢ You might notice a very strong similarity
between this technique and a cyclic
redundancy check, as we discussed previously
➢ As mentioned then, the checks in a cyclic
redundancy check can be defined in terms of
polynomials, and essentially the procedure is
defined as the long division of one polynomial
into another
➢ Though this is the formal definition, in practice
the XOR technique is more natural to use
Polynomial codes are linear: 1
➢ Note that all polynomial codes are linear
➢ Informally:
➢ If you have two valid codewords then they are
both multiples of the generator polynomial
➢ Hence, their sum (mod 2) (or XOR) is also going to
be a multiple of the generator polynomial (as the
generator polynomial is a common factor of both)
➢ Hence the combination of these two codewords is
another codeword, so this is a linear code
Polynomial codes are linear: 2
➢ More formally: suppose that we have a
polynomial code with generator polynomial
g(x). To show that the code is linear, we have
to show that the sum of any two codewords is
also an element of the code.
➢ So suppose we have two codewords given by
the polynomials m(x) and n(x).
Polynomial codes are linear: 3
➢ Since this is a polynomial code generated by g(x), we can write
both m(x) and n(x) as multiples of g(x):
m(x) = r(x) g(x)
n(x) = s(x) g(x)
where r and s are polynomials in x.
Polynomial codes are linear: 4
➢ Then
m(x) + n(x) = (r(x) + s(x)) g(x)
and r(x) + s(x) is a polynomial in x of appropriate degree, so
m(x) + n(x) is a multiple of g(x) and therefore the codeword
corresponding to m(x) + n(x) is a member of our code.
So we have shown that the sum of any two codewords is
another codeword, and therefore our code is linear.
Polynomial codes
➢ Because they are linear, it is easy to work out the distance of
the code
➢ Recall that in a linear code, this is just the smallest weight of
the non-zero codewords (so just count the number of 1s they
contain)
➢ Checking this for our example gives the distance as 2
➢ Note that this is much easier than checking all possible pairs
to work out the distance!
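As a quick sketch of my own (not from the notes), the minimum-weight check for our example code is essentially one line:

```python
codewords = ["00000", "00111", "01110", "01001",
             "11100", "11011", "10010", "10101"]
# for a linear code the distance is the smallest weight of a non-zero codeword
distance = min(w.count("1") for w in codewords if w != "00000")
print(distance)   # 2
```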
Polynomial codes
➢ So, the code in this example has distance 2
➢ This means that it can detect any single error but it will not in
general be able to correct this error
➢ This isn’t really that powerful, but we only used a small
codeword length and polynomial, so we wouldn’t really
expect to do any better than this!
Polynomial codes
➢ Many of the codes we are going to see will be polynomial
codes
➢ They are easy to create and analyse, and have useful error-
detecting properties
➢ It’s also useful to see the link between binary numbers and
the more abstract theory of polynomials over a certain
modulus – maths and coding really tie together here!
5.6 Cyclic codes
Cyclic codes
➢ Fundamentally, a cyclic code is one where any cycle of a
codeword is also a codeword
➢ By a cycle, we mean shifting every symbol by a certain
amount, “wrapping round” when we get to the end
➢ For example, a cyclic shift of 1110000 might be 0111000
(moving everything one place to the right) or 0011100
(moving everything two places), and so on
Cyclic codes
➢ However, we normally reserve the term cyclic code to refer
only to those codes that are cyclic and linear
➢ Given that linear codes are essentially all we wish to concern
ourselves with for now, this is a sensible definition to make
➢ So, you can read cyclic code to mean cyclic linear code
Cyclic codes
➢ For example, consider the code given by the four codewords
{000, 110, 011, 101}
➢ Verify for yourself that this is cyclic and linear
➢ Note that it contains the zero codeword
➢ Codes like {000, 110, 011, 100} and
{000, 110, 011, 111} are not linear or cyclic – check this
yourself!
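A quick sketch of my own (not from the notes; the helper names are made up) to verify both properties for these examples:

```python
def cyclic_shift(w):
    # move every symbol one place to the right, wrapping round
    return w[-1] + w[:-1]

def is_cyclic(code):
    return all(cyclic_shift(w) in code for w in code)

def is_linear(code):
    xor = lambda a, b: "".join(str(int(x) ^ int(y)) for x, y in zip(a, b))
    return all(xor(a, b) in code for a in code for b in code)

C = {"000", "110", "011", "101"}
print(is_cyclic(C), is_linear(C))   # True True
```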
Cyclic codes
➢ Cyclic codes can also be expressed in terms of polynomials
➢ This is a very useful association to be able to make – it will also
mean that all cyclic codes are polynomial codes
➢ I won’t prove a lot of the theory here but I’ll try to give you an
indication as to where the ideas come from
Cyclic codes
➢ Consider a general codeword in a cyclic code, let’s say
𝒂𝒏−𝟏 𝒂𝒏−𝟐 … 𝒂𝟎
➢ We choose the subscripts like this because this codeword
corresponds to the polynomial
𝒂𝒏−𝟏 𝒙𝒏−𝟏 + 𝒂𝒏−𝟐 𝒙𝒏−𝟐 … + 𝒂𝟏 𝒙 + 𝒂𝟎
➢ Now, the definition of a cyclic code means that we need all
cyclic permutations of this codeword to also be codewords
Cyclic codes
➢ In fact, it is enough to just have 𝒂𝒏−𝟐 𝒂𝒏−𝟑 … 𝒂𝟎 𝒂𝒏−𝟏
➢ This is a single shift to the left
➢ Any other shift is just a combination of these single left shifts
➢ So, we need to have the polynomial
𝒂𝒏−𝟐 𝒙𝒏−𝟏 + 𝒂𝒏−𝟑 𝒙𝒏−𝟐 … + 𝒂𝟎 𝒙 + 𝒂𝒏−𝟏
as well
➢ How do we get from one polynomial to the other?
Cyclic codes
➢ If we take the original polynomial and multiply it by x then we
get the polynomial
𝒂𝒏−𝟏 𝒙𝒏 + 𝒂𝒏−𝟐 𝒙𝒏−𝟏 … + 𝒂𝟏 𝒙𝟐 + 𝒂𝟎 𝒙
➢ If we then subtract 𝒂𝒏−𝟏 (𝒙𝒏 −𝟏) then we have
𝒂𝒏−𝟐 𝒙𝒏−𝟏 + 𝒂𝒏−𝟑 𝒙𝒏−𝟐 … + 𝒂𝟎 𝒙 + 𝒂𝒏−𝟏
which is precisely the new codeword we wanted
➢ If 𝒄𝟏 is the first polynomial, and 𝒄𝟐 the second, then 𝒄𝟐 =
𝒙𝒄𝟏 − 𝒂𝒏−𝟏 (𝒙𝒏 −𝟏)
Cyclic codes
➢ Now, recall that cyclic codes are polynomial (we won’t prove
this)
➢ Since the code is polynomial, then it can be created via
multiples of a generator polynomial
➢ Let’s say the generator polynomial is g(x)
➢ Then the two codewords are both multiples of this – lets say
𝒄𝟏 =g(x)r(x) and 𝒄𝟐 =g(x)s(x)
Cyclic codes
➢ Then, using 𝒄𝟐 = 𝒙𝒄𝟏 − 𝒂𝒏−𝟏 (𝒙𝒏 −𝟏), we have
g(x)s(x) = xg(x)r(x) − 𝒂𝒏−𝟏 (𝒙𝒏 −𝟏). This rearranges as
𝒂𝒏−𝟏 (𝒙𝒏 −𝟏) = g(x)(xr(x) − s(x))
➢ What this means is that 𝒙𝒏 − 𝟏 must also be a multiple of
g(x), and hence g(x) is a “factor” of 𝒙𝒏 − 𝟏
Cyclic codes
➢ Hence, we can state the following:
A cyclic code where the codewords are of
length n is a polynomial code with a generator
polynomial that divides exactly into 𝒙𝒏 − 𝟏
➢ Note that this is in general – for binary codes such as we are
using, then 𝒙𝒏 − 𝟏 is just the same as 𝒙𝒏 + 𝟏 , so we might
prefer to use this polynomial instead for our codes
Cyclic codes
➢ Let’s see an example
➢ We will investigate all possible cyclic codes of length 3 - you
can investigate further in the tutorial questions
➢ What we need to do is find the “factors” of
𝒙𝟑 + 𝟏 (I’ll use the + for convenience)
➢ This will be achieved by doing what is known as reducing a
polynomial to its irreducible factors
Cyclic codes
➢ This means to express it as a product of polynomials that
cannot be broken down any more
➢ Unfortunately, this is a very difficult problem in general
➢ You can compare it to the problem of finding the prime factors
of a number, which is also believed to be computationally very hard
Cyclic codes
➢ Although there are criteria to search a little faster,
fundamentally you will just have to try some factors and see
what happens, just as you would if trying to factorise a
number
➢ One thing to note is that all our factors are going to end with a
+ 1, otherwise they would have a factor of x and x is clearly
not a factor of 𝒙𝟑 + 𝟏
Cyclic codes
➢ By investigation, or polynomial long division, you can try
factors to see if they work
➢ One thing to note is that (x + 1) will always be a factor
➢ This follows from the fact that in general (not just binary),
𝒙𝒏 − 𝟏 is the same as
(𝒙 − 𝟏)(𝒙𝒏−𝟏 + 𝒙𝒏−𝟐 + ⋯ + 𝒙 + 𝟏)
➢ Check this by multiplying out!
Cyclic codes
➢ Because we are using binary, then this says that for our
polynomial 𝒙𝟑 + 𝟏 , we can write it as 𝒙𝟑 + 𝟏 =
(𝒙 + 𝟏)(𝒙𝟐 + 𝒙 + 𝟏)
➢ Now, the polynomial 𝒙𝟐 + 𝒙 + 𝟏 cannot be broken down any further
➢ The only possible factors would be x or x + 1 and neither of
these work (check!)
➢ So 𝒙𝟐 + 𝒙 + 𝟏 is irreducible
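You can also check the factorisation by machine – a tiny sketch of my own (not from the notes), multiplying the factors back together with the same bitmask representation as before:

```python
def poly_mul(a, b):
    # (mod 2) polynomial product on integer bitmasks (bit i = coefficient of x**i)
    result, i = 0, 0
    while b >> i:
        if (b >> i) & 1:
            result ^= a << i
        i += 1
    return result

# (x + 1)(x**2 + x + 1) = x**3 + 1 (mod 2): 0b11 * 0b111 = 0b1001
print(poly_mul(0b11, 0b111) == 0b1001)   # True
```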
Cyclic codes
➢ There are two irreducible factors here
➢ That means there are four possible factors overall (you can
take any combination of them)
➢ Hence there are four possible generator polynomials
➢ Therefore there are four possible cyclic codes of length 3
Cyclic codes
➢ The four possible generator polynomials are:
𝟏
(𝒙 + 𝟏)
(𝒙𝟐 + 𝒙 + 𝟏)
(𝒙 + 𝟏)(𝒙𝟐 + 𝒙 + 𝟏)
➢ Let us take each of these in turn and see what codes they give
us
Cyclic codes
➢ Remember, we multiply the generator polynomial by every
possible polynomial that will give us a polynomial of degree of
up to 2 (since we want codes of length 3)
➢ The case where the generator polynomial is 1 is simply all
possible polynomials of degree 2 or less (since we can
multiply by anything we like)
Cyclic codes
➢ Hence the polynomials created are just all polynomials 0, 1, 𝒙,
𝒙 + 𝟏, 𝒙𝟐 , 𝒙𝟐 + 𝟏,
𝒙𝟐 + 𝒙, and 𝒙𝟐 + 𝒙 + 𝟏
➢ These correspond to the codewords 000, 001, 010, 011, 100,
101, 110, 111
➢ Since this consists of all possible codewords, it is obviously
cyclic, since any cyclic permutation has to be in this list!
Cyclic codes
➢ Note that the distance of this code is just 1
➢ That means it does not have any error-detecting or any
correcting properties
➢ It is a trivial case – one which will obviously always appear for
any length
➢ But just because it is trivial, it still has to exist and so has to be
included!
Cyclic codes
➢ I’ll take the other trivial case next, that is where our generator
polynomial is
(𝒙 + 𝟏)(𝒙𝟐 + 𝒙 + 𝟏)
➢ This, because it is how we factorised 𝒙𝟑 + 𝟏, is just 𝒙𝟑 + 𝟏,
which has degree 3
➢ This is too high (we only need degree 2 or less)
➢ Hence the only thing we can multiply it by is the zero
polynomial 0
Cyclic codes
➢ This simply gives us the zero codeword 000
➢ That’s all this code contains
➢ It is technically cyclic, but is a very trivial case – one codeword
isn’t very useful!
➢ Again, this will always be a possible cyclic code for any length
➢ Trivial (and essentially useless in practice), but has to be
defined!
Cyclic codes
➢ The next easiest one to deal with is the generator polynomial
𝒙𝟐 + 𝒙 + 𝟏
➢ As this is already of degree 2, then the only polynomials we
can possibly multiply it by are 0 and 1
➢ This gives the codewords 000 and 111
➢ Note that this is essentially a repetition code as we discussed
earlier, and can detect two errors and correct one as it has
distance 3
Cyclic codes
➢ Finally, we have the generator polynomial 𝒙 + 𝟏
➢ There are four things we can multiply this by to get a
polynomial of degree 2 or less
➢ Namely, these are 0, 1, 𝒙 and 𝒙 + 𝟏
➢ These give the polynomials 0, 𝒙 + 𝟏, 𝒙𝟐 + 𝒙
and 𝒙𝟐 +𝟏 respectively (only the last one needs any
calculation, remember we are doing everything (mod 2)!)
Cyclic codes
➢ This corresponds to the codewords 000, 011, 110 and 101
respectively
➢ This is what we saw before – you can verify it is cyclic (you
need to check it is linear as well)
➢ Remember, this has distance 2, so can detect single bit errors,
but does not have any correcting abilities
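The whole construction for this generator can be sketched as follows (`polymul_gf2` is again a helper name of my own; bit strings are written with the highest-degree coefficient first, to match the codewords above):

```python
def polymul_gf2(a, b):
    # GF(2) polynomial multiplication on coefficient lists (constant first)
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

gen = [1, 1]  # the generator polynomial x + 1
codewords = set()
for m0 in (0, 1):        # all message polynomials m0 + m1*x
    for m1 in (0, 1):
        c = polymul_gf2(gen, [m0, m1])
        c = (c + [0, 0, 0])[:3]  # pad to length 3
        # write highest-degree coefficient first, as in the slides
        codewords.add(''.join(str(bit) for bit in reversed(c)))
print(sorted(codewords))  # ['000', '011', '101', '110']
```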
Cyclic codes
➢ One other thing to note is that this is essentially a parity check
code
➢ It consists of all codewords with an even number of 1s
➢ You will see more of this, where the codes we create fit with
some of the other concepts defined!
Cyclic codes
➢ These are the only four possible cyclic codes of length 3
➢ Note that they contain 1, 2, 4 and 8 elements – this follows
the pattern of the number of codewords defined earlier as a
polynomial code, which of course these are
➢ There are no others – for example, {000, 001, 010, 100} is not
cyclic as it is not linear
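Both membership checks can be automated; here is a minimal sketch (helper names are my own). Note that `is_cyclic_shifts` below only tests closure under cyclic shifts – a true cyclic code must be linear as well, which is exactly why the last example fails:

```python
def is_cyclic_shifts(code):
    # Closure under cyclic shifts: shifting any codeword stays in the code
    return all(w[1:] + w[0] in code for w in code)

def is_linear(code):
    # Binary linearity: the XOR of any two codewords is also a codeword
    xor = lambda a, b: ''.join(str(int(x) ^ int(y)) for x, y in zip(a, b))
    return all(xor(a, b) in code for a in code for b in code)

parity = {'000', '011', '110', '101'}
print(is_cyclic_shifts(parity), is_linear(parity))   # True True

bad = {'000', '001', '010', '100'}
print(is_cyclic_shifts(bad), is_linear(bad))  # True False: not a cyclic code
```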
Summary
➢ Almost all codes used in practice are linear
➢ Most are polynomial
➢ Many are cyclic
➢ Aspects of pure mathematics (in terms of polynomial
equations over a finite field) link perfectly well together with
the coding ideas we have discussed here!
© 2024 Steve Lakin, Tony Mann
and the University of Greenwich
The moral rights of the authors
have been asserted
6.6 – Reed-Muller Codes
Sophisticated modern codes
➢ In the remaining part of the “coding” section of this module
we will look briefly at more sophisticated families of error
correcting codes
➢ This is a very brief overview – you should be aware of these
codes and their strengths but we will not cover their
implementation in any detail
➢ You should also appreciate that some of the codes we have
already discussed appear in these families
Reed-Muller codes
➢ The Reed-Muller codes are a very wide class of codes that are
commonly used
➢ As we will see, they include the likes of the Hadamard codes,
along with many others
➢ They are named after Irving Reed (1923-2012) and David
Muller (1924-2008), both influential people in the field of
coding
➢ The idea was first formed in the 1950s and is still relevant
today
Reed-Muller codes
➢ Reed-Muller codes are formally based on the idea of
polynomial codes
➢ They are denoted by RM(r, m) where r is the order of the code
and the length of the blocks (transmitted codewords) is 𝟐𝒎
➢ The distance of RM(r, m) is 𝟐𝒎−𝒓
➢ RM(r, m) has message length k, where
k = C(m, 0) + C(m, 1) + ⋯ + C(m, r)
Aside – what is C(m, i)?
➢ It is the number of ways of choosing i objects from a set of m,
when we don’t care about the order in which they are chosen –
the binomial coefficient “m choose i”
➢ So
C(m, i) = m! / (i! (m − i)!)
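In Python, `math.comb` computes binomial coefficients directly, so the message length of RM(r, m) can be sketched as follows (`rm_k` is a name I have made up for illustration):

```python
from math import comb

def rm_k(r, m):
    # Message length k of RM(r, m): the sum of C(m, i) for i = 0..r
    return sum(comb(m, i) for i in range(r + 1))

# e.g. RM(1, 3): k = C(3, 0) + C(3, 1) = 1 + 3
print(rm_k(1, 3))  # 4
```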
RM(0, m)
➢ The Reed-Muller codes RM(0, m) mean that we have an initial
set of messages of length 1 – so we are just transmitting either
0 or 1
➢ The length is 𝟐𝒎
➢ This is defined by just repeating the appropriate symbol 𝟐𝒎
times
➢ This is simply a repetition code as we have seen before!
RM(0, m)
➢ Note that the distance between the two possible transmitted
codewords is 𝟐𝒎 , since they differ in every position
➢ Hence, in general, these Reed-Muller codes RM(0, m) can be
written (using our code notation) as (𝟐𝒎, 𝟏, 𝟐𝒎)₂ codes
➢ This means they have rate 𝟏/𝟐𝒎
➢ What happens to this as m gets large?
RM(m, m)
➢ At the other end of the spectrum, the
RM(m, m) codes are where we simply transmit the message
as it stands
➢ That is, the original message is 𝟐𝒎 bits long and the
transmitted message is just the same, so the identical 𝟐𝒎 bits
long message
➢ This obviously has no error correction facility – we just send
the message
RM(m, m)
➢ Since the codeword set consists of all binary words of the
particular length, then this code just has distance 1 as there
are codewords differing in only one place
➢ Hence this can be described as a (𝟐𝒎, 𝟐𝒎, 𝟏)₂ code using our
usual notation
➢ Therefore these codes have rate 𝟐𝒎/𝟐𝒎 = 𝟏
➢ These RM(m, m) codes are rather trivial!
RM(m – 1, m)
➢ The RM(m – 1, m) codes are simply parity codes
➢ That is, we just ensure that all of our transmitted codewords
have even parity (so contain an even number of 1s), so there
is effectively one parity bit added
➢ Again, a type of code we have already discussed!
RM(m – 1, m)
➢ Since the block length is 𝟐𝒎 which includes one parity bit, the
original message length is just 𝟐𝒎 − 𝟏
➢ The distance is 2 since the parity bit ensures that any two
distinct codewords differ in at least two places
➢ Hence this is a (𝟐𝒎, 𝟐𝒎 − 𝟏, 𝟐)₂ code
➢ It has rate (𝟐𝒎 − 𝟏)/𝟐𝒎, which tends to 1 as m gets large
RM(1, m)
➢ The RM(1, m) codes are exactly the Hadamard codes that we
discussed last week
➢ They give a (𝟐𝒎, 𝒎 + 𝟏, 𝟐𝒎−𝟏)₂ Hadamard code
➢ So they have low rate but excellent error-correction abilities!
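All the special cases above drop out of the general parameters n = 2^m, k = Σ C(m, i) and d = 2^(m−r); a small sketch checking this (`rm_params` is my own name):

```python
from math import comb

def rm_params(r, m):
    # (n, k, d) of RM(r, m): length 2^m, message length sum of C(m, i)
    # for i = 0..r, minimum distance 2^(m - r)
    n = 2 ** m
    k = sum(comb(m, i) for i in range(r + 1))
    d = 2 ** (m - r)
    return n, k, d

m = 4
print(rm_params(0, m))      # repetition code:  (16, 1, 16)
print(rm_params(1, m))      # Hadamard code:    (16, 5, 8)
print(rm_params(m - 1, m))  # parity code:      (16, 15, 2)
print(rm_params(m, m))      # no redundancy:    (16, 16, 1)
```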
RM(m – 2, m)
➢ Those codes in the class RM(m – 2, m) correspond to the
extended Hamming codes
➢ Recall that these are just the Hamming codes, but with an
extra parity bit added to the end
➢ For example, RM(1, 3) is an (8, 4, 4) code, which is the
extended version of the Hamming(7, 4) code
➢ Remember these have optimal rate for their parameters
Reed-Muller codes
➢ Some practical uses of Reed-Muller codes include satellite
transmissions and wireless technology
➢ Although their rate drops as they get larger, they are often
also used as a “building block” for more advanced codes
➢ These can combine error correction and rate effectively and
due to this, are especially useful in small portable devices
6.7 – BCH Codes
Reed-Solomon Codes
BCH codes
➢ BCH codes are a class of cyclic codes which are very heavily used in
modern applications
➢ They were invented by Alexis Hocquenghem (1908-1990) in 1959,
while Raj Chandra Bose (1901-1987) and Dijen K. Ray-Chaudhuri
(born 1933) also created them independently in 1960
➢ The name BCH is taken from the initials of their surnames
(Bose, Chaudhuri, Hocquenghem)
➢ (Images from Wikimedia Commons. Image of Ray-Chaudhuri is copyright-free. Image of Bose is fair use for
educational purposes)
BCH codes
➢ The main feature of BCH codes is that they can be defined to
any given error correction level and so we have a great deal of
flexibility
➢ Formally, for any integer 𝒎 ≥ 𝟑 and 𝒕 < 𝟐𝒎−𝟏 there exists a
BCH code (n, k, d) satisfying:
𝒏 = 𝟐𝒎 − 𝟏
𝒏 − 𝒌 ≤ 𝒎𝒕
𝒅 ≥ 𝟐𝒕 + 𝟏
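These guarantees can be turned into a tiny parameter calculator (`bch_bounds` is my own name; the true k and d of a particular BCH code may be better than these bounds):

```python
def bch_bounds(m, t):
    # Guaranteed parameters of a binary BCH code for m >= 3, t < 2^(m-1):
    # length n = 2^m - 1, at most m*t parity bits (so k >= n - m*t),
    # and designed distance d >= 2t + 1 (corrects at least t errors).
    assert m >= 3 and t < 2 ** (m - 1)
    n = 2 ** m - 1
    k_min = n - m * t
    d_min = 2 * t + 1
    return n, k_min, d_min

# e.g. m = 4, t = 2: a length-15 code correcting at least 2 errors
print(bch_bounds(4, 2))  # (15, 7, 5)
```

The last line matches the classical BCH(15, 7) code, which has distance exactly 5.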
BCH codes
➢ The point about all this is that we can set our BCH code to
have whatever distance we like, and still create it – that makes
it highly suitable for use in practical applications which might
need a variety of good rates and error correcting abilities
➢ While the construction (which I won’t discuss!) seems tricky
for us, it’s relatively easy to do on a computer!
BCH codes
➢ Because of their potentially high error-correcting abilities, BCH
codes are often very good at dealing with burst errors
➢ Recall that this is a sequence of errors, perhaps caused by a
burst of noise in the transmission channel, or a scratch or
damage to the storage device
➢ These are common errors in practice!
BCH codes
➢ In addition, mathematically BCH codes are not too hard to
define, and encode and decode (at least practically, if not
conceptually!)
➢ Techniques such as the Extended Euclidean Algorithm, or
more advanced methods such as the Peterson–Gorenstein–
Zierler algorithm can be efficiently programmed in a fairly
small amount of memory, which makes these codes highly
attractive for practical use
BCH codes
➢ The fact they can be efficiently programmed means that they
are very useful in portable devices
➢ And the fact that they can detect burst errors as well means
that they have found widespread use in things such as CDs,
DVDs, disk drives and 2D barcodes, where burst errors (e.g.
scratches or damage) can often occur
BCH codes
➢ They also have found many applications as part of audio-
visual codecs for playing digital files on devices, in satellite
transmissions and more
➢ Again, this is because of their ability to correct errors and the
ease of programming into very small hardware
➢ Many common codes nowadays are BCH codes!
Reed-Solomon codes
➢ A particular type of BCH code is known as a Reed-Solomon
code
➢ It was actually created in a different way by Irving Reed and
Gustave Solomon (1930-1996) in 1960, but is now usually
viewed via the BCH construction
➢ It is often used in practice as it is quite simple to find an
algorithm to encode and decode it
Reed-Solomon codes
➢ The design of CDs used Reed-Solomon codes, and was one of
the first mass-produced applications of such types of code
➢ QR codes (that we mentioned at the start of the course) also
use Reed-Solomon codes as their way of encoding the data –
this means if part of the QR code is damaged it can still often
be read!
Reed-Solomon codes
➢ They are also heavily used in space exploration
➢ Voyager spacecraft used Reed-Solomon codes to encode their
images to be transmitted, and they are part of NASA’s Mars
Exploration projects
➢ Remember all this – these codes are all around you in lots of
different applications!
6.9 – Conclusion
Further codes
➢ As well as using such codes as we have designed, modern
systems often use a combination of code approaches
➢ Some types of codes that we won’t discuss here but are used
a lot in practice are convolutional codes and, more recently,
turbo codes
➢ These ideas often mean that we can get very close to the
theoretical limit for transmission!
Further codes
➢ Remember – this is a very new and modern area of
mathematics!
➢ New codes and ideas are still in development and research –
as we rely more and more on digital data, codes have to get
better and better to keep up!
➢ Such recent concepts as turbo codes (developed in the 1990s)
are used, for example, in 3G and 4G mobile networks
Summary
➢ In the course of the last six weeks, we have looked from the
beginning of coding to modern methods used in our
communication-intensive world
➢ The theory of codes such as Huffman codes, Hamming codes
and Hadamard codes has proved influential in the
development of this area as it has had to progress so quickly
Summary
➢ In six weeks, obviously we have not been able to delve into all
the mathematics behind some of the codes
➢ But hopefully you have gained an appreciation as to the
theory and practice of what is now a very important part of
our modern world
➢ Think about how often codes are in things you use and you’ll
realise how important they are!
Coming next in the module
➢ We look at cryptography – the science of keeping
communications secret
➢ First we look at the history of the topic, to introduce the key
ideas
➢ So we begin with examples of cryptography from ancient
times to the Second World War