3 - Perfect Secrecy
May, 2020
Contents
Cipher models
One-Time Pad (OTP)
Quantifying Information
6/12/2020 2
Encryption
How do we design ciphers?
(Diagram: encryption under key 𝐾𝐸, decryption under key 𝐾𝐷.)
Attacker’s Capabilities (Cryptanalysis)
Eve wants to somehow get information about the secret key.
Attack models
▪ Ciphertext-only attack
▪ Known-plaintext attack
▪ Chosen-plaintext attack
o Eve has temporary access to the encryption machine. She can choose
the plaintext and obtain the corresponding ciphertext
▪ Chosen-ciphertext attack
o Eve has temporary access to the decryption machine. She can choose
the ciphertext and obtain the corresponding plaintext
Cipher Models
(What are the goals of the design?)
▪ Computational Security: "My cipher can withstand all attacks with
complexity less than 2^2048"; "The best attacker with the best
computational resources would take 3 centuries to attack my cipher."
▪ Provable Security (hardness relative to a tough problem): "If my
cipher can be broken, then large numbers can be factored easily."
▪ Unconditional (Perfect) Security: "My cipher is secure against all
attacks irrespective of the attacker's power. I can prove this!"
Encryption
For a given key 𝑘, the encryption function 𝑒𝑘 defines an injective
mapping between the plaintext set 𝑃 and the ciphertext set 𝐶
Alice picks a plaintext 𝑥 ∈ 𝑃 and encrypts it to obtain a ciphertext 𝑦 ∈ 𝐶
Plaintext Distribution
▪ Let 𝑿 be a discrete random variable over the set 𝑃
▪ Alice chooses 𝑥 from 𝑃 based on some probability distribution
o Let Pr[𝑿 = 𝑥] be the probability that 𝑥 is chosen
o This probability may depend on the language
Plaintext set
Pr[𝑿 = 𝑎] = 1/2
Pr[𝑿 = 𝑏] = 1/3
Pr[𝑿 = 𝑐] = 1/6
Key Distribution
▪ Alice and Bob agree upon a key 𝑘 chosen from a key set 𝐾
▪ Let 𝑲 be a random variable denoting this choice
Keyspace
Pr 𝐾 = 𝑘1 = 3/4
Pr 𝐾 = 𝑘2 = 1/4
Ciphertext Distribution
Let 𝒀 be a discrete random variable over the set 𝐶
The probability of obtaining a particular ciphertext 𝑦
depends on the plaintext and key probabilities
Pr[𝒀 = 𝑦] = Σ𝑘 Pr[𝑲 = 𝑘] Pr[𝑿 = 𝑑𝑘(𝑦)]
For the running example:
Pr[𝒀 = 𝑃] = Pr[𝑘1] × Pr[𝑐] + Pr[𝑘2] × Pr[𝑐] = 3/4 × 1/6 + 1/4 × 1/6 = 1/6
Pr[𝒀 = 𝑄] = Pr[𝑘1] × Pr[𝑏] + Pr[𝑘2] × Pr[𝑎] = 3/4 × 1/3 + 1/4 × 1/2 = 3/8
Pr[𝒀 = 𝑅] = Pr[𝑘1] × Pr[𝑎] + Pr[𝑘2] × Pr[𝑏] = 3/4 × 1/2 + 1/4 × 1/3 = 11/24
Note: Pr[𝒀 = 𝑃] + Pr[𝒀 = 𝑄] + Pr[𝒀 = 𝑅] = 1
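These sums can be checked numerically with exact fractions. A minimal sketch in Python; the encryption table below is an assumption inferred from the example's Pr[y|x] values (the original figure with the mapping is not reproduced here):

```python
from fractions import Fraction as F

# Encryption table inferred from the example's Pr[y|x] values (assumption):
# k1 maps a->R, b->Q, c->P; k2 maps a->Q, b->R, c->P.
enc = {'k1': {'a': 'R', 'b': 'Q', 'c': 'P'},
       'k2': {'a': 'Q', 'b': 'R', 'c': 'P'}}
pr_x = {'a': F(1, 2), 'b': F(1, 3), 'c': F(1, 6)}
pr_k = {'k1': F(3, 4), 'k2': F(1, 4)}

# Pr[Y = y] = sum over keys k of Pr[K = k] * Pr[X = d_k(y)]
pr_y = {}
for k, table in enc.items():
    for x, y in table.items():
        pr_y[y] = pr_y.get(y, F(0)) + pr_k[k] * pr_x[x]

assert pr_y['P'] == F(1, 6)
assert pr_y['Q'] == F(3, 8)
assert pr_y['R'] == F(11, 24)
assert sum(pr_y.values()) == 1   # the ciphertext probabilities sum to 1
```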
A posteriori Probabilities
How to compute the attacker’s a posteriori probabilities?
Pr[𝑿 = 𝑥 | 𝒀 = 𝑦]
▪ Bayes' theorem:
Pr[𝑥|𝑦] = Pr[𝑥] × Pr[𝑦|𝑥] / Pr[𝑦]
Pr[y|x]
Pr[𝑃|𝑎] = 0, Pr[𝑃|𝑏] = 0, Pr[𝑃|𝑐] = 1
Pr[𝑄|𝑎] = Pr[𝑘2] = 1/4, Pr[𝑄|𝑏] = Pr[𝑘1] = 3/4, Pr[𝑄|𝑐] = 0
Pr[𝑅|𝑎] = Pr[𝑘1] = 3/4, Pr[𝑅|𝑏] = Pr[𝑘2] = 1/4, Pr[𝑅|𝑐] = 0
Keyspace
Pr 𝐾 = 𝑘1 = 3/4
Pr 𝐾 = 𝑘2 = 1/4
Computing A Posteriori Probabilities
Pr[𝑥|𝑦] = Pr[𝑥] × Pr[𝑦|𝑥] / Pr[𝑦]
Plaintext: Pr[𝑿 = 𝑎] = 1/2, Pr[𝑿 = 𝑏] = 1/3, Pr[𝑿 = 𝑐] = 1/6
Ciphertext: Pr[𝒀 = 𝑃] = 1/6, Pr[𝒀 = 𝑄] = 3/8, Pr[𝒀 = 𝑅] = 11/24
Pr[y|x]: Pr[𝑃|𝑎] = 0, Pr[𝑃|𝑏] = 0, Pr[𝑃|𝑐] = 1;
Pr[𝑄|𝑎] = 1/4, Pr[𝑄|𝑏] = 3/4, Pr[𝑄|𝑐] = 0;
Pr[𝑅|𝑎] = 3/4, Pr[𝑅|𝑏] = 1/4, Pr[𝑅|𝑐] = 0
Result:
Pr[𝑎|𝑃] = 0, Pr[𝑏|𝑃] = 0, Pr[𝑐|𝑃] = 1
Pr[𝑎|𝑄] = 1/3, Pr[𝑏|𝑄] = 2/3, Pr[𝑐|𝑄] = 0
Pr[𝑎|𝑅] = 9/11, Pr[𝑏|𝑅] = 2/11, Pr[𝑐|𝑅] = 0
If the attacker sees ciphertext 𝑷, then she knows the plaintext was 𝒄
If the attacker sees ciphertext 𝑹, then she knows that 𝒂 is the most likely plaintext
Not a good encryption mechanism!!
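The full Bayes computation can be automated; a minimal sketch with exact fractions, using the same encryption table inferred from the example's Pr[y|x] values (an assumption, since the original figure is not reproduced here):

```python
from fractions import Fraction as F

# Encryption table inferred from the example's Pr[y|x] values (assumption):
# k1 maps a->R, b->Q, c->P; k2 maps a->Q, b->R, c->P.
enc = {'k1': {'a': 'R', 'b': 'Q', 'c': 'P'},
       'k2': {'a': 'Q', 'b': 'R', 'c': 'P'}}
pr_x = {'a': F(1, 2), 'b': F(1, 3), 'c': F(1, 6)}
pr_k = {'k1': F(3, 4), 'k2': F(1, 4)}

# Joint distribution Pr[X = x, Y = y]
joint = {}
for k, table in enc.items():
    for x, y in table.items():
        joint[(x, y)] = joint.get((x, y), F(0)) + pr_k[k] * pr_x[x]

# Marginal Pr[Y = y], then Bayes: Pr[x|y] = Pr[x, y] / Pr[y]
pr_y = {}
for (x, y), p in joint.items():
    pr_y[y] = pr_y.get(y, F(0)) + p
post = {(x, y): p / pr_y[y] for (x, y), p in joint.items()}

assert post[('c', 'P')] == 1          # seeing P reveals the plaintext was c
assert post[('a', 'R')] == F(9, 11)   # a is the most likely plaintext given R
assert post[('b', 'Q')] == F(2, 3)
```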
Perfect Secrecy
Perfect secrecy is achieved when the a posteriori probabilities equal
the a priori probabilities:
Pr[𝑥|𝑦] = Pr[𝑥]
Perfect Secrecy Example
Find the a posteriori probabilities for the following scheme
Verify that it is perfectly secret
Plaintext
Pr[𝑿 = 𝑎] = 1/2
Pr[𝑿 = 𝑏] = 1/3
Pr[𝑿 = 𝑐] = 1/6
Keyspace
Pr 𝐾 = 𝑘1 = 1/3
Pr 𝐾 = 𝑘2 = 1/3
Pr 𝐾 = 𝑘3 = 1/3
Observations on Perfect Secrecy
Perfect secrecy iff
Pr[𝒀 = 𝑦 | 𝑿 = 𝑥] = Pr[𝒀 = 𝑦]
▪ Follows from Bayes' theorem
Perfect indistinguishability:
▪ ∀𝑥1, 𝑥2 ∈ 𝑃: Pr[𝒀 = 𝑦 | 𝑿 = 𝑥1] = Pr[𝒀 = 𝑦 | 𝑿 = 𝑥2]
Shift Cipher with a Twist
Plaintext set: 𝑃 = 0,1,2,3, … , 25
Ciphertext set: 𝐶 = 0,1,2,3, … , 25
Keyspace: 𝐾 = 0,1,2,3, … , 25
Encryption Rule: 𝑒𝑘(𝑥) = (𝑥 + 𝑘) mod 26
Decryption Rule: 𝑑𝑘(𝑦) = (𝑦 − 𝑘) mod 26
where 𝑘 ∈ 𝐾, 𝑥 ∈ 𝑃, and 𝑦 ∈ 𝐶
The Twist:
(1) the key changes after every encryption
(2) keys are picked with uniform probability
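A minimal sketch of the twisted shift cipher; `secrets.randbelow` stands in for the uniformly random key source (an implementation choice, not part of the original slides):

```python
import secrets

def encrypt(x):
    """Encrypt one letter (0..25) with a fresh uniform key (the 'twist')."""
    k = secrets.randbelow(26)        # new key for every single encryption
    return (x + k) % 26, k

def decrypt(y, k):
    return (y - k) % 26

x = 7                                # plaintext letter 'h'
y, k = encrypt(x)
assert decrypt(y, k) == x

# For a fixed plaintext, each ciphertext is reached by exactly one key,
# so a uniform key makes the ciphertext uniform over all 26 values.
assert sorted((x + k) % 26 for k in range(26)) == list(range(26))
```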
The Twisted Shift Cipher is Perfectly Secure
Pr[𝒀 = 𝑦] = Σ𝐾∈ℤ26 Pr[𝑲 = 𝐾] Pr[𝑿 = 𝑑𝐾(𝑦)]
= (1/26) Σ𝐾∈ℤ26 Pr[𝑿 = 𝑦 − 𝐾]
= 1/26
Pr[𝑦|𝑥] = Pr[𝑲 = (𝑦 − 𝑥) mod 26] = 1/26
Hence, by Bayes' theorem:
Pr[𝑥|𝑦] = Pr[𝑥] Pr[𝑦|𝑥] / Pr[𝑦] = Pr[𝑥] × (1/26) / (1/26) = Pr[𝑥]
Shannon’s Theorem
If |𝑲| = |𝑪| = |𝑷|, then the system provides perfect secrecy iff
1. every key is used with equal probability 1/|𝑲|, and
2. for every 𝒙 ∈ 𝑷 and 𝒚 ∈ 𝑪, there exists a unique key 𝒌 ∈ 𝑲 such that
𝒆𝒌(𝒙) = 𝒚
Intuition:
Every 𝑦 ∈ 𝐶 can result from any of the possible plaintexts 𝑥
Since |𝐾| = |𝑃|, there is exactly one key mapping each plaintext to 𝑦
Since each key is equiprobable, each of these mappings is equally probable
One-Time Pad (Vernam's Cipher)
The plaintext, key, and ciphertext are all bit strings of the same
length 𝐿, and the key is XORed with the plaintext:
Encryption: 𝑥 ⊕ 𝑘 = 𝑦
Decryption: 𝑦 ⊕ 𝑘 = 𝑥
One Time Pad is Perfectly Secure
Proof using indistinguishability:
Pr[𝒀 = 𝑦 | 𝑿 = 𝑥] = Pr[𝑲 = 𝑘 | 𝑿 = 𝑥], where 𝑘 = 𝑥 ⊕ 𝑦 is the unique key with 𝑥 ⊕ 𝑘 = 𝑦
= Pr[𝑲 = 𝑘] = 1/2^𝐿
Therefore, ∀𝑥1, 𝑥2 ∈ 𝑃:
Pr[𝒀 = 𝑦 | 𝑿 = 𝑥1] = 1/2^𝐿 = Pr[𝒀 = 𝑦 | 𝑿 = 𝑥2]
This implies perfect indistinguishability, independent of the plaintext
distribution
Limitations of Perfect Secrecy
Key must be at least as long as the message
▪ Limits applicability if messages are long
Key must be changed for every encryption
▪ If the same key is used twice, then an adversary can compute the
exclusive-or (XOR) of the messages:
𝑥1 ⊕ 𝑘 = 𝑦1
𝑥2 ⊕ 𝑘 = 𝑦2
𝑥1 ⊕ 𝑥2 = 𝑦1 ⊕ 𝑦2
The attacker can then do language analysis to determine 𝑥1 and 𝑥2
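The two-time-pad leak above is easy to demonstrate; a small sketch in Python (the sample messages are made up for illustration, and `secrets` supplies the uniform key):

```python
import secrets

L = 16                               # message length in bytes
key = secrets.token_bytes(L)         # uniformly random one-time key

def otp(data, k):
    """XOR data with key; the same operation encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, k))

x1, x2 = b"attack at dawn!!", b"retreat at once!"   # both 16 bytes
y1, y2 = otp(x1, key), otp(x2, key)

assert otp(y1, key) == x1            # decryption recovers the plaintext

# Key reuse leaks information: XOR of ciphertexts = XOR of plaintexts
assert otp(y1, y2) == otp(x1, x2)
```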
Ciphers in Practice
Perfect secrecy is difficult to achieve in practice
Computational security rather than perfect secrecy:
instead, we use a crypto-scheme that cannot be broken in
reasonable time with reasonable success
This means
▪ Security is only achieved against adversaries that run in polynomial
time
▪ Attackers can potentially succeed, but only with a very small probability
(attackers need to be very lucky to succeed)
A Metric to Quantify Information
There is one letter missing in each of these words. Can you find the
letter so that the words make sense?
Metric to Quantify Information
The information content of a symbol occurring with probability 𝑝𝑖 is
log2(1/𝑝𝑖)
A higher probability indicates lesser information content. (Claude Shannon)
Metric to Quantify Information
To find the average information content of a language
find weighted sum as follows
Σ𝑖=1..𝑛 𝑝𝑖 log2(1/𝑝𝑖)
Metric to Quantify Information
To find the average information content of a language
find weighted sum as follows
Call this term the Entropy:
𝐻(𝑋) = Σ𝑖=1..𝑛 𝑝𝑖 log2(1/𝑝𝑖)
Entropy provides the average number of bits needed to represent
letters in the language.
Entropy of English and other languages:
Contemporary English: 4.03 bits
Shakespeare: 4.106 bits
German: 4.08 bits
French: 4.00 bits
Italian: 3.98 bits
Spanish: 3.98 bits
Maximum entropy occurs when each letter is equally likely (i.e. with probability 1/26):
the maximum entropy is −log2(1/26) = log2 26 ≈ 4.7
Entropy of the Weather Forecast
Weather Forecast: "Tomorrow the weather will be ________"
M1: Sunny (with probability 0.05)
M2: Cloudy (with probability 0.15)
M3: Light Rain (with probability 0.70)
M4: Heavy Rain (with probability 0.10)
𝐻(𝐹𝑜𝑟𝑒𝑐𝑎𝑠𝑡) = Σ𝑖=1..𝑛 𝑝𝑖 log2(1/𝑝𝑖)
= −(0.05 log2 0.05 + 0.15 log2 0.15 + 0.7 log2 0.7 + 0.1 log2 0.1)
= 1.319 bits
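The forecast entropy and the 4.7-bit maximum for 26 equally likely letters can both be reproduced with a few lines of plain Python:

```python
from math import log2

def entropy(probs):
    """H(X) = sum of p_i * log2(1/p_i), skipping zero-probability symbols."""
    return sum(p * log2(1 / p) for p in probs if p > 0)

forecast = [0.05, 0.15, 0.70, 0.10]
print(round(entropy(forecast), 3))        # 1.319

# A uniform distribution over 26 letters gives the maximum, log2(26)
print(round(entropy([1 / 26] * 26), 1))   # 4.7
```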
Entropy & Uncertainty
Alice thinks of a number (0 or 1)
The choice is denoted by a discrete random variable X.
Uncertainty
Let's assume Eve knows this probability distribution
If Pr[𝑿 = 1] = 1 and Pr[𝑿 = 0] = 0
▪ Then Eve can determine 𝑋 with 100% accuracy
If Pr[𝑿 = 0] = 0.75 and Pr[𝑿 = 1] = 0.25
▪ Eve will guess 𝑋 as 0, and gets it right 75% of the time
If Pr[𝑿 = 0] = Pr[𝑿 = 1] = 0.5
▪ Eve's guess is no better than a uniformly random guess; she gets it
right half the time
Entropy (Quantifying Information)
Suppose we consider a discrete random variable 𝑋 taking values from the
set {𝑥1, 𝑥2, 𝑥3, …, 𝑥𝑛}, each symbol occurring with probability
𝑝1, 𝑝2, 𝑝3, …, 𝑝𝑛.
Entropy is defined as the minimum number of bits (on average)
required to represent a string from this set:
𝐻(𝑋) = Σ𝑖=1..𝑛 𝑝𝑖 log2(1/𝑝𝑖)
where 𝑝𝑖 is the probability that the 𝑖th symbol occurs, and log2(1/𝑝𝑖)
is the number of bits to encode the 𝑖th symbol.
What is the Entropy of X?
Pr[𝑿 = 0] = 𝑝 and Pr[𝑿 = 1] = 1 − 𝑝
𝐻(𝑋) = −𝑝 log2 𝑝 − (1 − 𝑝) log2(1 − 𝑝)
𝐻(𝑋)|𝑝=0 = 0, 𝐻(𝑋)|𝑝=1 = 0, 𝐻(𝑋)|𝑝=0.5 = 1
(using lim𝑝→0 𝑝 log 𝑝 = 0)
(Plot: 𝐻(𝑋) versus 𝑝 rises from 0 at 𝑝 = 0 to a maximum of 1 bit at 𝑝 = 0.5, then falls back to 0 at 𝑝 = 1.)
Properties of 𝐻(𝑋)
If 𝑋 is a random variable, which takes on values {1,2,3, … , 𝑛}
with probabilities 𝑝1 , 𝑝2 , 𝑝3 , … , 𝑝𝑛 then
1. 𝐻 𝑋 ≤ log 2 𝑛
2. When 𝑝1 = 𝑝2 = 𝑝3 = ⋯ = 𝑝𝑛 = 1Τ𝑛 then 𝐻 𝑋 = log 2 𝑛
Entropy and Coding
Entropy quantifies information content.
"Can we encode a message 𝑀 in such a way that the average
length is as short as possible, hopefully equal to 𝐻(𝑀)?"
Huffman codes:
allocate more bits to the least probable symbols
allocate fewer bits to the most probable symbols
Example
𝑆 = {𝐴, 𝐵, 𝐶, 𝐷} are 4 symbols
Probability of occurrence: 𝑃(𝐴) = 1/8, 𝑃(𝐵) = 1/2, 𝑃(𝐶) = 1/8, 𝑃(𝐷) = 1/4
Encoding: A: 111, B: 0, C: 110, D: 10
Decode this: 1101010111
Example: Average Length & Entropy
𝑆 = {𝐴, 𝐵, 𝐶, 𝐷} with 𝑃(𝐴) = 1/8, 𝑃(𝐵) = 1/2, 𝑃(𝐶) = 1/8, 𝑃(𝐷) = 1/4
Encoding: A: 111, B: 0, C: 110, D: 10
Average length of the Huffman code:
3 × 𝑃(𝐴) + 1 × 𝑃(𝐵) + 3 × 𝑃(𝐶) + 2 × 𝑃(𝐷) = 1.75
Entropy:
𝐻(𝑆) = (1/8) log2 8 + (1/2) log2 2 + (1/8) log2 8 + (1/4) log2 4 = 1.75
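Both numbers, and the earlier decoding exercise, can be checked in a few lines; the greedy scan works because the code is prefix-free:

```python
from math import log2

p = {'A': 1/8, 'B': 1/2, 'C': 1/8, 'D': 1/4}
code = {'A': '111', 'B': '0', 'C': '110', 'D': '10'}

avg_len = sum(p[s] * len(code[s]) for s in p)
H = sum(pi * log2(1 / pi) for pi in p.values())
assert avg_len == H == 1.75       # Huffman meets the entropy bound here

# Decoding '1101010111': scan greedily, emitting on each codeword match
inv = {v: k for k, v in code.items()}
buf, out = '', []
for bit in '1101010111':
    buf += bit
    if buf in inv:
        out.append(inv[buf])
        buf = ''
print(''.join(out))               # CDDA
```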
Example (Entropy Considering One Letter)
Consider a language with 26 letters of the set 𝑆 =
{𝑠1 , 𝑠2 , 𝑠3 , … , 𝑠26 }. Suppose the language is characterized by
the following probabilities. What is the language entropy?
1 1 Maximum Entropy
𝑃 𝑆1 = , 𝑃 𝑆2 =
2 4 𝑅 = log 26 = 4.7
1
𝑃 𝑆𝑖 = for 𝑖 = 3,4, … , 10
64
1
𝑃 𝑆𝑖 = for 𝑖 = 11,12, … , 26
128
Language Entropy
26
1
1
𝑟1 = 𝐻 𝑆 = 𝑃(𝑆𝑖 ) log
𝑃(𝑆𝑖 )
𝑖=1
1 1 1 1
= log 2 + log 4 + 8 log 64 + 16 log 128
2 4 64 128
1 1 6 7
= + + + = 2.625
2 2 8 8
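A quick numeric check of this sum (the probabilities are all powers of two, so the result is exact in floating point):

```python
from math import log2

# P(s1) = 1/2, P(s2) = 1/4, eight letters at 1/64, sixteen at 1/128
probs = [1/2, 1/4] + [1/64] * 8 + [1/128] * 16
assert sum(probs) == 1            # a valid probability distribution

r1 = sum(p * log2(1 / p) for p in probs)
print(r1)                         # 2.625
```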
Example (Entropy Considering Two Letters)
In the set 𝑆 = {𝑠1, 𝑠2, 𝑠3, …, 𝑠26}, suppose the digram
probabilities are as below. What is the entropy?
Conditional probabilities:
𝑃(𝑠𝑖+1|𝑠𝑖) = 𝑃(𝑠𝑖+2|𝑠𝑖) = 1/2 for 𝑖 = 1, …, 24
𝑃(𝑠26|𝑠25) = 𝑃(𝑠1|𝑠25) = 𝑃(𝑠1|𝑠26) = 𝑃(𝑠2|𝑠26) = 1/2
all other probabilities are 0
Joint probabilities:
𝑃(𝑠1, 𝑠2) = 𝑃(𝑠2|𝑠1) × 𝑃(𝑠1) = 1/4; 𝑃(𝑠1, 𝑠3) = 𝑃(𝑠3|𝑠1) × 𝑃(𝑠1) = 1/4
𝑃(𝑠2, 𝑠3) = 𝑃(𝑠3|𝑠2) × 𝑃(𝑠2) = 1/8; 𝑃(𝑠2, 𝑠4) = 𝑃(𝑠4|𝑠2) × 𝑃(𝑠2) = 1/8
𝑃(𝑠𝑖, 𝑠𝑖+1) = 𝑃(𝑠𝑖, 𝑠𝑖+2) = 1/128 for 𝑖 = 3, 4, …, 10
𝑃(𝑠𝑖, 𝑠𝑖+1) = 𝑃(𝑠𝑖, 𝑠𝑖+2) = 1/256 for 𝑖 = 11, 12, …, 24
𝑃(𝑠25, 𝑠26) = 𝑃(𝑠25, 𝑠1) = 𝑃(𝑠26, 𝑠1) = 𝑃(𝑠26, 𝑠2) = 1/256
Redundancy in Languages
𝐻 𝑆 = 2.625
𝐻 𝑆 (2) = 3.625
𝐻 𝑆 (2) − 𝐻 𝑆 = 1
This means that, given the first letter, we can obtain the
second one using only one bit;
i.e. if we know the first letter, then there are
only 2 equally probable candidates for the second.
Measuring the Redundancy in a Language
Let 𝑆 be the set of letters in a language (e.g. 𝑆 = {𝐴, 𝐵, 𝐶, 𝐷})
𝑆^𝑘 = 𝑆 × 𝑆 × ⋯ × 𝑆 (𝑘 times) is a set representing messages of
length 𝑘
Let 𝑆^(𝑘) be a random variable over 𝑆^𝑘; its per-letter rate is
𝑟𝑘 = 𝐻(𝑆^(𝑘))/𝑘
Measuring the Redundancy in a Language
Absolute Rate (𝑅): The maximum amount of information per
character in a language
▪ The absolute rate of a language with letter set 𝑆 is 𝑅 = log2 |𝑆|
▪ For English, |𝑆| = 26, therefore 𝑅 ≈ 4.7 bits/letter
Redundancy of a language is 𝐷 = 𝑅 − 𝑟𝑘
▪ For English, when 𝑟𝑘 = 1, 𝐷 = 3.7 → around 79% redundant (3.7/4.7 ≈ 0.79)
Conditional Entropy
Suppose 𝑋 and 𝑌 are two discrete random variables; then the
conditional entropy is defined as
𝐻(𝑋|𝑌) = Σ𝑦 𝑝(𝑦) Σ𝑥 𝑝(𝑥|𝑦) log2(1/𝑝(𝑥|𝑦))
= Σ𝑥 Σ𝑦 𝑝(𝑥, 𝑦) log2(𝑝(𝑦)/𝑝(𝑥, 𝑦))
Conditional entropy means …
▪ the remaining uncertainty about 𝑋 given 𝑌
▪ 𝐻(𝑋|𝑌) ≤ 𝐻(𝑋), with equality when 𝑋 and 𝑌 are independent
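A numeric sanity check of these two properties, using the double-sum form of 𝐻(𝑋|𝑌); the small joint distributions below are made up for illustration:

```python
from math import log2

def H(probs):
    return sum(p * log2(1 / p) for p in probs if p > 0)

def H_cond(joint):
    """H(X|Y) = sum over (x, y) of p(x, y) * log2(p(y) / p(x, y))."""
    p_y = {}
    for (x, y), p in joint.items():
        p_y[y] = p_y.get(y, 0) + p
    return sum(p * log2(p_y[y] / p) for (x, y), p in joint.items() if p > 0)

# A correlated pair: knowing Y removes some uncertainty about X
joint = {(0, 0): 1/2, (0, 1): 1/4, (1, 1): 1/4}
assert H_cond(joint) <= H([3/4, 1/4])     # H(X|Y) <= H(X)

# An independent pair: conditioning on Y changes nothing
indep = {(x, y): 0.25 for x in (0, 1) for y in (0, 1)}
assert abs(H_cond(indep) - H([0.5, 0.5])) < 1e-12
```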
Entropy and Encryption
(Diagram: a message 𝑚 from the 𝑀^𝑛 distribution is encrypted by E under a key 𝑘 from the 𝐾 distribution, producing a ciphertext 𝑐 with the 𝐶^𝑛 distribution; 𝑛 is the length of the message/ciphertext.)
Key Equivocation:
▪ If the attacker can view 𝑛 ciphertexts, what is her uncertainty about the key?
𝐻(𝐾|𝐶^(𝑛)) = Σ𝑐∈𝐶^𝑛 Σ𝑘∈𝐾 𝑝(𝑐) 𝑝(𝑘|𝑐) log2(1/𝑝(𝑘|𝑐))
Unicity Distance
𝐻(𝐾|𝐶^(𝑛)) = Σ𝑐∈𝐶^𝑛 Σ𝑘∈𝐾 𝑝(𝑐) 𝑝(𝑘|𝑐) log2(1/𝑝(𝑘|𝑐))
The unicity distance is the smallest 𝑛 for which this key equivocation
approaches 0, i.e. the amount of ciphertext needed before the key is
essentially determined.
Unicity Distance & Classical Ciphers
Product Ciphers
Consider a cryptosystem where 𝑃 = 𝐶 (this is an
endomorphic system)
▪ Thus the ciphertext set and the plaintext set are the same
Combine two ciphering schemes to build a product cipher.
Given 2 endomorphic cryptosystems
𝑆1: (𝑃, 𝑃, 𝐾1, 𝐸1, 𝐷1)
𝑆2: (𝑃, 𝑃, 𝐾2, 𝐸2, 𝐷2)
the resultant product cipher is
𝑆1 × 𝑆2: (𝑃, 𝑃, 𝐾1 × 𝐾2, 𝐸, 𝐷)
with resultant key space 𝐾1 × 𝐾2
(Diagram: 𝑃 → E1 under 𝐾1 → 𝐶1 = 𝑃2 → E2 under 𝐾2 → 𝐶, with combined key 𝐾1 ∥ 𝐾2.)
Product Ciphers
For the endomorphic cryptosystems,
𝑆1: 𝑥 = 𝑑𝐾1(𝑒𝐾1(𝑥))
𝑆2: 𝑥 = 𝑑𝐾2(𝑒𝐾2(𝑥))
The resultant product cipher 𝑆1 × 𝑆2 has
𝑒(𝐾1,𝐾2)(𝑥) = 𝑒𝐾2(𝑒𝐾1(𝑥)) (the ciphertext of the first cipher is fed
as input to the second cipher)
𝑑(𝐾1,𝐾2)(𝑦) = 𝑑𝐾1(𝑑𝐾2(𝑦)) (decryption applies the inverses in the
reverse order)
Resultant key space: 𝐾1 × 𝐾2
Affine Cipher is a Product Cipher
Multiplicative cipher 𝑀: 𝑦 = 𝑎𝑥 mod 26; Shift cipher 𝑆: 𝑦 = (𝑥 + 𝑏) mod 26,
over 𝑃 = 𝐶 = {0, 1, 2, …, 25}
Affine Cipher = 𝑀 × 𝑆: 𝑦 = (𝑎𝑥 + 𝑏) mod 26
Is 𝑺 × 𝑴 the same as the Affine Cipher?
𝑆 × 𝑀: 𝑦 = 𝑎(𝑥 + 𝑏) mod 26
= (𝑎𝑥 + 𝑎𝑏) mod 26
Key is (𝑏, 𝑎)
𝑎𝑏 mod 26 is some 𝑏′ such that 𝑏 = 𝑎^(−1) 𝑏′ mod 26
This can be represented as an affine cipher:
𝑦 = (𝑎𝑥 + 𝑏′) mod 26
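An exhaustive check that shifting then multiplying is again an affine cipher; the values 𝑎 = 5, 𝑏 = 3 are arbitrary choices for illustration (𝑎 must be coprime with 26 to be invertible):

```python
def affine(x, a, b):
    return (a * x + b) % 26

a, b = 5, 3                       # example key; gcd(a, 26) = 1

for x in range(26):
    s_then_m = (a * ((x + b) % 26)) % 26      # shift first, then multiply
    b_prime = (a * b) % 26                     # b' = a*b mod 26
    assert s_then_m == affine(x, a, b_prime)   # matches affine with key (a, b')
```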
Idempotent Ciphers
If 𝑆1: (𝑃, 𝑃, 𝐾, 𝐸1, 𝐷1) is an endomorphic cipher,
then it is possible to construct product ciphers of the form
𝑆1 × 𝑆1, denoted 𝑆^2: (𝑃, 𝑃, 𝐾 × 𝐾, 𝐸, 𝐷)
If 𝑆^2 = 𝑆, then the cipher is called an idempotent cipher
Iterative Cipher
An 𝑛-fold product 𝑆 × 𝑆 × ⋯ × 𝑆 (𝑛 times) = 𝑆^𝑛 is an
iterative cipher.
All modern block ciphers, like DES, 3-DES, and AES, are iterative,
non-idempotent product ciphers.