Itai Dinur · Shlomi Dolev
Sachin Lodha (Eds.)
Cyber Security
LNCS 10879
Cryptography and
Machine Learning
Second International Symposium, CSCML 2018
Beer Sheva, Israel, June 21–22, 2018
Proceedings
Lecture Notes in Computer Science 10879
Commenced Publication in 1973
Founding and Former Series Editors:
Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen
Editorial Board
David Hutchison
Lancaster University, Lancaster, UK
Takeo Kanade
Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler
University of Surrey, Guildford, UK
Jon M. Kleinberg
Cornell University, Ithaca, NY, USA
Friedemann Mattern
ETH Zurich, Zurich, Switzerland
John C. Mitchell
Stanford University, Stanford, CA, USA
Moni Naor
Weizmann Institute of Science, Rehovot, Israel
C. Pandu Rangan
Indian Institute of Technology Madras, Chennai, India
Bernhard Steffen
TU Dortmund University, Dortmund, Germany
Demetri Terzopoulos
University of California, Los Angeles, CA, USA
Doug Tygar
University of California, Berkeley, CA, USA
Gerhard Weikum
Max Planck Institute for Informatics, Saarbrücken, Germany
More information about this series at http://www.springer.com/series/7410
Editors
Itai Dinur
Ben-Gurion University of the Negev
Beer Sheva, Israel
Shlomi Dolev
Ben-Gurion University of the Negev
Beer Sheva, Israel
Sachin Lodha
Tata Consultancy Services (India)
Chennai, Tamil Nadu, India
This Springer imprint is published by the registered company Springer International Publishing AG
part of Springer Nature
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Organizing Committee
General Chairs
Shlomi Dolev Ben-Gurion University of the Negev, Israel
Sachin Lodha Tata Consultancy Services, India
Program Chairs
Sitaram Chamarty Tata Consultancy Services, India
Itai Dinur Ben-Gurion University of the Negev, Israel
Organizing Chair
Timi Budai Ben-Gurion University of the Negev, Israel
Program Committee
Yehuda Afek Tel Aviv University, Israel
Adi Akavia Tel Aviv Yaffo Academic College, Israel
Amir Averbuch Tel Aviv University, Israel
Roberto Baldoni Università di Roma La Sapienza, Italy
Michael Ben-Or Hebrew University, Israel
Anat Bremler-Barr IDC Herzliya, Israel
Ran Canetti Boston University, USA and Tel Aviv University, Israel
Itai Dinur (Chair) Ben-Gurion University, Israel
Karim ElDefrawy SRI International, USA
Bezalel Gavish Southern Methodist University, USA
Yossi Gilad Boston University and MIT, USA
Niv Gilboa Ben-Gurion University, Israel
Shafi Goldwasser Weizmann Institute of Science and MIT, USA
Ehud Gudes Ben-Gurion University, Israel
Shay Gueron University of Haifa, Israel
Carmit Hazay Bar-Ilan University, Israel
Danny Hendler Ben-Gurion University, Israel
Amir Herzberg Bar-Ilan University, Israel
Stratis Ioannidis Northeastern University, USA
Gene Itkis MIT Lincoln Laboratory, USA
Nathan Keller Bar-Ilan University, Israel
Sarit Kraus Bar-Ilan University, Israel
Mark Last Ben-Gurion University, Israel
Guy Leshem Ashkelon Academic College, Israel
Ximing Li South China Agricultural University, China
Yin Li Fudan University, China
Michael Luby Qualcomm, USA
Avi Mendelson Technion, Israel
Aikaterini Mitrokosta Chalmers University of Technology, Sweden
Benny Pinkas Bar-Ilan University, Israel
Ely Porat Bar-Ilan University, Israel
Dan Raz Technion, Israel
Christian Riess FAU, Germany
Elad M. Schiller Chalmers University of Technology, Sweden
Galina Schwartz UC Berkeley, USA
Gil Segev Hebrew University, Israel
Haya Shulman Fraunhofer SIT, Germany
Paul Spirakis Univ. of Liverpool, UK and Univ. of Patras, Greece
Naftali Tishby Hebrew University, Israel
Ari Trachtenberg Boston University, USA
Philippas Tsigas Chalmers University of Technology, Sweden
Doug Tygar UC Berkeley, USA
Kalyan Veeramachaneni MIT LIDS, USA
Michael Waidner Fraunhofer SIT and TU Darmstadt, Germany
Rebecca Wright Rutgers University, USA
Moti Yung Columbia University and Snapchat, USA
Additional Reviewers
Eran Amar
Mai Ben Adar-Bessos
Rotem Hemo
Yu Zhang
1 Short Description
In lay terms, the proposed approach can be described as follows. The concept
described in this paper encodes the optical information at the "photon" level,
before it is sampled and converted into "digital electrons" of information.
Since the encryption is performed in the analog domain, it can be added in
parallel to all existing digital encryption techniques, which are completely
orthogonal to our scheme and can only strengthen the proposed concept.
Conventional optical sensors capture only the intensity, not the phase, of
the arriving wavefront; therefore the very process of capturing the
information and converting the analog photon into a digital electron that can
be processed already destroys a large portion of the information needed for
decoding. The analog encoding we apply at the photon level manipulates the
phase and the sampling scheme in both the time domain and the Fourier domain,
such that in both domains the signal is lowered below the noise level. The
information signal thus lies below the noise existing in the system and is
invisible to an intruder. When the intruder attempts to capture the analog
"photon" and convert it into a digital one in order to check whether encrypted
information is present, the sampling process, which destroys the phase
(capturing only intensity) and adds quantization noise, completely erases the
information hidden below the noise. A more elaborate description, including
the theoretical mathematical background as well as numerical and experimental
validation, can be found in Refs. [1-4].
If we now adopt a more mathematical description, then the proposed stealthy
communication system is illustrated in Fig. 1.
© Springer International Publishing AG, part of Springer Nature 2018
I. Dinur et al. (Eds.): CSCML 2018, LNCS 10879, pp. 1–5, 2018.
https://doi.org/10.1007/978-3-319-94147-9_1
2 T. Yeminy et al.
[Fig. 1. (a) Schematic system configuration; (b) the constructed experimental
setup, comprising a laser source, a modulator, a filter, amplifiers, a DAC,
an ADC, and a receiver.]
Figure 1(a) shows the schematic system configuration, and Fig. 1(b) shows the
constructed experimental setup. The temporal phase of the pulses in the
transmitted pulse sequence is encrypted in order to reduce the signal's power
spectral density (PSD) below the noise level. The PSD reduction occurs because
each pulse in the sequence carries a different phase; hence, the pulses add
incoherently in the frequency domain. The spectral amplitude of the signal is
then deliberately spread wide, which makes it possible to transmit a signal
with low PSD (lowering the signal below the noise level in the frequency
domain).
The spectral spreading is achieved by optically sampling the signal (this can
be implemented with an optical amplitude modulator), which generates replicas
of the signal in the frequency domain. The spectral phase of the sampled
signal is subsequently encrypted. The spectral phase encryption has two goals:
the first is to spread the signal below the noise level in the time domain;
the second is to prevent signal detection in the frequency domain through
coherent addition of the various spectral replicas of the signal.
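The incoherent-addition argument behind the PSD reduction can be illustrated numerically. The following is a minimal NumPy sketch, not the actual optical system: pulse count, slot size, and the unit-pulse model are arbitrary toy choices. It compares the peak PSD of a pulse train whose pulses share one phase (coherent addition, sharp spectral lines) against the same train with random per-pulse phases (incoherent addition, flattened spectrum).

```python
import numpy as np

rng = np.random.default_rng(0)

def peak_psd(phases, samples_per_pulse=8):
    """Pulse train with one unit pulse per slot carrying the given
    per-pulse phase; returns the peak of its power spectral density."""
    x = np.zeros(len(phases) * samples_per_pulse, dtype=complex)
    x[::samples_per_pulse] = np.exp(1j * np.asarray(phases))
    return (np.abs(np.fft.fft(x)) ** 2 / x.size).max()

n = 256
coherent = peak_psd(np.zeros(n))                    # identical pulse phases
encrypted = peak_psd(rng.uniform(0, 2 * np.pi, n))  # random (encrypted) phases

# Identical phases add coherently and concentrate power in sharp spectral
# lines; random per-pulse phases add incoherently and flatten the spectrum,
# so the spectral peak drops well below the coherent one.
print(coherent / encrypted)
```

The ratio printed is the factor by which phase encryption alone lowers the spectral peak in this toy model; in the paper's system this is what pushes the PSD below the noise floor.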
Optical Cryptography for Cyber Secured and Stealthy Fiber-Optic 3
[Fig. 2. Estimated performance as a function of the transmission bandwidth
[GHz]. (a) SNR [dB] after decoding (curves: theory, measurement,
eavesdropper). (b) BER after decoding (curves: theory, calculated from SNR,
measured, eavesdropper).]
Only an authorized user having the temporal and spectral encrypting phases in hand
can raise the signal above the noise level and detect it. The encrypted signal is then
amplified and sent to the receiver.
At the receiver, the spectral phase of the signal is decrypted and the signal
is subsequently detected coherently. All the spectral replicas of the signal
are then folded to the baseband by means of electrical sampling; the PSD of
the signal is thereby reconstructed and, in turn, the signal-to-noise ratio
(SNR) is improved. This is achieved by coherently adding all the signal's
spectral replicas at the baseband (so the signal is reinforced), whereas the
spectral replicas of the noise add incoherently (and consequently average to
a low value). A matched filter is then applied and the temporal phase of the
signal is decrypted. Thus, the signal is raised above the noise level in both
the time and frequency domains. A decision circuit is subsequently used to
recover the original transmitted symbol sequence.
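The SNR gain from folding replicas can be sketched with a toy statistical model (a hedged analogy, not the optical receiver itself: replica count, BPSK-like symbols, and independent Gaussian noise per replica are illustrative assumptions). Averaging N replicas keeps the common signal intact while the independent noise averages down, so the SNR grows roughly linearly in N:

```python
import numpy as np

rng = np.random.default_rng(1)

def snr_after_folding(n_replicas, n_symbols=4000, noise_sigma=1.0):
    """Each replica carries the same unit-power symbols plus independent
    noise. Averaging the replicas adds the signal coherently and the noise
    incoherently, so the SNR improves by about a factor of n_replicas."""
    symbols = rng.choice([-1.0, 1.0], size=n_symbols)   # BPSK-like symbols
    noise = noise_sigma * rng.standard_normal((n_replicas, n_symbols))
    folded = (symbols + noise).mean(axis=0)
    residual_noise_power = np.mean((folded - symbols) ** 2)
    return 1.0 / residual_noise_power   # signal power is 1

snr_1 = snr_after_folding(1)
snr_16 = snr_after_folding(16)
print(snr_1, snr_16)   # the second is roughly 16x the first
```

This is why a wider transmission bandwidth, which provides more spectral replicas, improves the authorized user's SNR and BER in Fig. 2.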
It should be noted that an eavesdropper cannot decrypt the transmitted
signal, since using wrong temporal and spectral decrypting phases does not
raise the signal above the noise level. Hence, the eavesdropper experiences
very low SNR and a high bit error rate (BER), and is unable to reveal the
transmission's existence.
[Fig. 3. Decoded signal, authorized user and eavesdropper. (a) Original
noiseless pulse sequence and authorized user's noisy decoded pulse sequence.
(b) Original noiseless pulse sequence and eavesdropper's noisy decoded pulse
sequence. (c) Authorized user's pulse sequence and noise power spectral
density. (d) Eavesdropper's pulse sequence and noise power spectral density.]
The SNR and BER performance of an authorized user and of an eavesdropper are
shown in Figs. 2(a) and (b), respectively, as a function of the communication
system's transmission bandwidth. (A higher transmission bandwidth requires
higher transmission power, but also yields better SNR and BER performance,
since more spectral replicas of the signal are added coherently at the
baseband while the noise adds incoherently.)
Figure 3 shows the original and decrypted signals in the time and frequency
domains for an authorized user and for an eavesdropper. It is clearly seen
that while the authorized user properly decrypts the received signal, the
eavesdropper fails to reveal the signal, which remains below the noise level
in both the time and frequency domains due to the use of incorrect temporal
and spectral decrypting phases. The parameters used for the simulations
presented in the figures are common in optical communication systems.
Our stealthy fiber-optic communication system can also cope efficiently with
jamming. When an opponent tries to jam our secure transmitted signal by
occupying its temporal and spectral domains with a high-power signal, the
jamming signal is lowered below the noise level by the decryption module, due
to its spreading in the time and frequency domains, whereas our secure
transmitted signal is properly decrypted and raised above the noise level,
since it carries the right encrypting temporal and spectral phases.
The resources required for our proposed encryption method are a standard
fiber-optic communication system, optical amplitude modulators (to implement
the optical sampling), temporal phase modulators, and spectral phase
modulators. All of the above can easily be integrated into a given photonic
communication link.
2 Conclusions
This short paper gives an insight into a new form of analog photonic
encryption that can strengthen the existing encryption concepts operating on
top of the digital electrons of information. The analog photon of information
is lowered below the noise level in both the time and the Fourier domains by
performing temporal and spectral phase encoding and by properly sampling the
signal in the time domain, such that the encoding phases uniquely
redistribute the energy of the signal over both the time and spectral domains
and lower it below the noise level, making it as invisible and undetectable
as possible. Due to these properties, the proposed scheme has high
applicability in commercial cyber-based configurations [5].
References
1. Yeminy, T., Sadot, D., Zalevsky, Z.: Spectral and temporal stealthy
fiber-optic communication using sampling and phase encoding. Opt. Express 19,
20182–20198 (2011)
2. Yeminy, T., Sadot, D., Zalevsky, Z.: Sampling impairments influence over
stealthy fiber-optic signal decryption. Opt. Commun. 291, 193–201 (2013)
3. Wohlgemuth, E., Yoffe, Y., Yeminy, T., Zalevsky, Z., Sadot, D.:
Demonstration of coherent stealthy and encrypted transmission for data center
interconnection. Opt. Express 26, 7638–7645 (2018)
4. Wohlgemuth, E., Yeminy, T., Zalevsky, Z., Sadot, D.: Experimental
demonstration of encryption and steganography in optical fiber
communications. In: Proceedings of the European Conference on Optical
Communication (ECOC 2017), Gothenburg, Sweden, 17–21 September 2017
5. Sadot, D., Zalevsky, Z., Yeminy, T.: Spectral and temporal stealthy fiber
optic communication using sampling and phase encoding detection systems. US
Patent 9288045; EP2735111A1
Efficient Construction of the Kite
Generator Revisited
1 Introduction
with 2^k leaves, to support the herding attack [8]. Blackburn et al. [4]
point out an inaccuracy in Kelsey-Kohno's analysis and fix it, resulting in
an increased time complexity. Later work presented new algorithms for more
efficient constructions of the diamond structure [10,14].
A different second pre-image attack is based on the kite generator. This is a
long-message (2^k blocks) second pre-image attack due to Andreeva et al. [1]
on the Merkle-Damgård structure and its dithered variant [13]. The kite
generator is a strongly connected directed graph over 2^{n−k} chaining
values, such that for every two chaining values a_1, a_2 covered by the kite
generator there exists a sequence of message blocks of almost any desired
length that connects a_1 to a_2. Their analysis claims that the kite
generator's construction takes about 2^{n+1} compression function calls.
We start this paper by pointing out an inaccuracy in their construction,
based on some theorems from the field of Galton-Watson branching processes:
we show that the resulting graph, using the original construction, is not
strongly connected and is therefore unusable in the online phase. We proceed
by offering a corrected analysis showing that the construction of the kite
generator takes about (n − k) · 2^n compression function calls.
We then show a completely different method to build the kite generator. This
new method allows constructing the kite generator in 2^n compression function
calls, i.e., half the time of the original (inaccurate) claim. Finally, we
adapt all these issues to the dithered variant of the Merkle-Damgård
structure.
This paper is organized as follows: Sect. 2 gives notations and definitions
used in this paper. In Sect. 3 we quickly recall Andreeva et al.’s second pre-
image attack, and most importantly, the construction of the kite generator. We
identify and analyze the real complexity of constructing a usable kite generator in
Sect. 4. We introduce a new method for constructing kite generators in Sect. 5.
We treat the analysis of the kite generator and the new construction in the
case of dithered Merkle-Damgård in Sect. 6. Finally, we conclude the paper in
Sect. 7.
1. Padding step1
(a) Concatenate '1' at the end of the message.
(b) Let b be the number of bits in the message, and let ℓ be the number of
bits used to encode the message length in bits.2 Pad a sequence of
0 ≤ k < m zeros, such that b + 1 + k + ℓ ≡ 0 (mod m).
(c) Append to the message the original message length in bits, encoded in
ℓ bits.
2. Divide the message into blocks of m bits, so that if the length of the
padded message is L · m then
M = m_0 || m_1 || ... || m_{L−1}.
3. The iterative process starts with a constant IV, denoted by h_{−1}, which
is updated in every iteration according to the appropriate message block m_i
(for 0 ≤ i ≤ L − 1) into a new chaining value:
h_i = f(h_{i−1}, m_i),
MDH^f(M) = h_{L−1}.
[Figure: the Merkle-Damgård chain — IV = h_{−1} is fed with m_0 into f to
give h_0, then successively with m_1, ..., m_{L−1} to give
h_{L−1} = MDH^f(M).]
1
We describe here the standard padding step done in many real hash functions
such as MD5 and SHA1. Other variants of this step exist, all aiming to
achieve prefix-freeness.
2
It is common to set 2^ℓ − 1 as the maximal length of a message.
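The padding and chaining steps above can be sketched in a few lines of Python. This is a hedged toy model, not a real hash: truncated SHA-256 stands in for the compression function f, the block size and length field are illustrative, and the '1' bit is appended at byte granularity (0x80).

```python
import hashlib

BLOCK_BITS = 64   # toy block size m (real designs use 512 or more)
LEN_BITS = 64     # bits used to encode the message length (the role of ℓ)

def pad(message: bytes) -> bytes:
    """Merkle-Damgård strengthening: append a '1' bit (0x80 at byte
    granularity), then zeros, then the original bit length, so the total
    is a multiple of BLOCK_BITS."""
    bit_len = 8 * len(message)
    padded = message + b"\x80"
    while (8 * len(padded) + LEN_BITS) % BLOCK_BITS != 0:
        padded += b"\x00"
    return padded + bit_len.to_bytes(LEN_BITS // 8, "big")

def compress(h: bytes, block: bytes) -> bytes:
    """Toy compression function f: truncated SHA-256 of h || block."""
    return hashlib.sha256(h + block).digest()[:8]

def md_hash(message: bytes, iv: bytes = b"\x00" * 8) -> bytes:
    """Iterate f over the padded blocks, starting from the constant IV."""
    h = iv
    padded = pad(message)
    for i in range(0, len(padded), BLOCK_BITS // 8):
        h = compress(h, padded[i:i + BLOCK_BITS // 8])
    return h

print(md_hash(b"hello").hex())
```

The chaining values h_0, h_1, ... that this loop passes through are exactly the values the kite-generator attack targets.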
Merkle [11] and Damgård [5] proved that if the compression function is
collision-resistant, then the whole structure (when the padded message
includes the original message length) is also collision-resistant. Although
the Merkle-Damgård structure is believed to be secure also against second
pre-image attacks, in practice it is not [1,2,6,9].
1. Let Z_0 := 1.
2. For each (i, t) ∈ N × N, let X_i^t be a random variable following a fixed
probability distribution P : N ∪ {0} → [0, 1] with expected value μ < ∞.
3. Define inductively:
∀t ∈ N : Z_t := Σ_{1 ≤ i ≤ Z_{t−1}} X_i^t.
The random variable X_i^t represents the number of offspring produced by the
i'th element (if there is one) of the Z_{t−1} elements from time t − 1.
A central issue in the theory of branching processes is ultimate extinction,
i.e., the event that Z_t = 0 for some t. One can see that E(Z_t) = μ^t.
Still, even for μ ≥ 1, as long as Pr[X_i^t = 0] > 0, ultimate extinction is
an event with positive probability. To study such events of ultimate
extinction we need to study their probability, given by
lim_{t→∞} Pr[Z_t = 0].
In [3] Athreya and Ney show that the probability of ultimate extinction is
the smallest fixed point x ∈ [0, 1] of P's probability generating function
f_P(x). For example, if X_i^t ∼ Poi(λ), then the probability of ultimate
extinction is the smallest solution x ∈ [0, 1] of e^{λ(x−1)} = x.
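The smallest fixed point can be computed by simple iteration from 0 (the iterates increase monotonically toward the smallest root). A minimal sketch, with the Poisson generating function as the example:

```python
import math

def extinction_probability(pgf, iters=200):
    """Smallest fixed point in [0, 1] of the offspring probability
    generating function, obtained by iterating x <- pgf(x) from x = 0."""
    x = 0.0
    for _ in range(iters):
        x = pgf(x)
    return x

def poisson_pgf(lam):
    """Generating function of Poi(lam): E[x^X] = exp(lam * (x - 1))."""
    return lambda x: math.exp(lam * (x - 1))

q = extinction_probability(poisson_pgf(2.0))
print(round(q, 4))   # matches the ~0.2032 quoted later for Poi(2)
```

The iteration converges geometrically here because the derivative of the generating function at the smallest root is below 1.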
In [1] Andreeva et al. suggest a method to generate second pre-images for
long messages of 2^k blocks. Using an expensive precomputation of 2^{n+1}
compression function calls, the online complexity of their attack is
max(O(2^k), O(2^{(n−k)/2})) time and O(2^{n−k}) memory.
10 O. Dunkelman and A. Weizman
The Online Phase. In the online phase, given a long message M, the adversary
computes H(M) and finds, with high probability, a chaining value h_i, for
n − k < i < 2^k, such that h_i ∈ A. Now the adversary creates, using the kite
generator, a sequence of i message blocks, starting from the IV, that leads
to h_i. It is done in the following steps:
The concatenation s_1 || s_2 || s_3 is a sequence of i message blocks that
leads from the IV to h_i, as desired. Now the adversary creates a second
pre-image:
blocks m_{a,1}, m_{a,2} such that f(a, m_{a,1}), f(a, m_{a,2}) ∈ A. To do so,
he generates 2 · 2^k message blocks, each of which leads to one of the
2^{n−k} chaining values of A with probability 2^{−k}. Hence, he is expected
to find two such message blocks, and the total complexity is about
2 · 2^k · 2^{n−k} = 2^{n+1} compression function calls.3
The Online Complexity. First of all, the memory used to store the kite
generator is O(2^{n−k}). Second, the online phase has two steps:
1. The adversary computes M's chaining values to find a common chaining value
with the kite generator's chaining values. This step requires O(2^k)
compression function calls.
2. The adversary finds a collision between the two lists described in
Sect. 3.1. Since each list contains about 2^{(n−k)/2} chaining values, this
step requires O(2^{(n−k)/2}) time and memory.4
Thus, the online time complexity is
max(O(2^k), O(2^{(n−k)/2})),
and the memory complexity is
O(2^{n−k}).
3
Note that using this method d_out(a) follows a Poi(2) distribution, and about
13% of the chaining values are expected to have d_out(a) = 0. To solve this
issue, it is possible to generate for each chaining value as many message
blocks as needed to find two out-edges. Now, the average time complexity
needed for a chaining value a is 2^{k+1}. The actual running time for a given
chaining value is the sum of two geometric random variables with mean 2^k
each. Hence, the total running time is the sum of 2^{n−k+1} geometric random
variables X_i ∼ Geo(2^{−k}). Since
Σ_{i=1}^{2^{n−k+1}} (X_i − 1) ∼ NB(2^{n−k+1}, 1 − 2^{−k}),
we have
Σ_{i=1}^{2^{n−k+1}} X_i ∼ 2^{n−k+1} + NB(2^{n−k+1}, 1 − 2^{−k}).
Therefore,
E[Σ_{i=1}^{2^{n−k+1}} X_i] = 2^{n−k+1} + (1 − 2^{−k}) · 2^{n−k+1} / 2^{−k}
= 2^{n+1},
with a standard deviation of
√(2^{n−k+1} · (1 − 2^{−k})) / 2^{−k} ≤ 2^{(n+k+1)/2}.
4
Andreeva et al. [1] note that it is possible to find the common chaining value by a
more sophisticated algorithm which requires the same time but negligible additional
memory, using memoryless collision finding. Our findings affect these variants as
well.
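Memoryless collision finding of the kind footnote 4 refers to is classically done with cycle finding on the iterated function. A hedged toy sketch (Floyd's tortoise-and-hare on a deliberately tiny 3-byte range so a collision appears quickly; the real attack would run this on the compression function itself):

```python
import hashlib

def f(x: bytes) -> bytes:
    """Toy 3-byte 'compression function'; the small range makes
    collisions reachable in a quick demonstration."""
    return hashlib.sha256(x).digest()[:3]

def floyd_collision(start: bytes):
    """Memoryless collision search via Floyd's cycle finding on the
    sequence x, f(x), f(f(x)), ...; O(1) memory instead of a big table.
    Assumes the start point is outside the range of f (here it is: the
    seed is 4 bytes, outputs are 3), so the rho has a nonempty tail."""
    tortoise, hare = f(start), f(f(start))
    while tortoise != hare:            # phase 1: detect the cycle
        tortoise, hare = f(tortoise), f(f(hare))
    tortoise = start
    while f(tortoise) != f(hare):      # phase 2: walk to the colliding pair
        tortoise, hare = f(tortoise), f(hare)
    return tortoise, hare              # distinct inputs, same f-output

a, b = floyd_collision(b"seed")
print(a.hex(), b.hex())
```

Phase 2 stops one step before the cycle entry, so the two returned inputs are distinct predecessors of the same value.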
As described in Sect. 3.1, for constructing the kite generator Andreeva et
al. [1] suggest finding, from each chaining value of A, two message blocks,
each of which leads to a chaining value of A. They claim that since
∀a ∈ A : E[d_in(a)] = 2, the resulting graph is strongly connected.
We agree that, because d_out(a) = 2, indeed ∀a ∈ A : E[d_in(a)] = 2, but we
claim that their conclusion, that the resulting graph is strongly connected,
is wrong. The actual distribution of d_in(a) can be approximated by
d_in(a) ∼ Poi(2), as the number of entering edges follows a Poisson
distribution with a mean value of 2. Hence, for each chaining value a ∈ A:
Pr[d_in(a) = 0] = e^{−2} · 2^0 / 0! = e^{−2}.
Thus, about 2^{n−k} · e^{−2} (≈ 13%) of the chaining values in the kite
generator are expected to have d_in(a) = 0. Obviously, the resulting graph is
not strongly connected.
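The e^{−2} ≈ 13.5% figure is easy to reproduce by simulation. A minimal sketch with an arbitrary toy universe size standing in for the 2^{n−k} values of A:

```python
import math
import random

random.seed(7)

N = 1 << 16                 # toy stand-in for the 2^(n-k) values of A
in_degree = [0] * N

# Every chaining value gets exactly two out-edges to uniformly chosen
# targets, mimicking "two message blocks per chaining value"; in-degrees
# are then approximately Poi(2)-distributed.
for node in range(N):
    for _ in range(2):
        in_degree[random.randrange(N)] += 1

frac_no_in_edge = sum(1 for d in in_degree if d == 0) / N
print(frac_no_in_edge)      # close to e^-2, i.e. ~0.1353
```

Every node counted here is one the online phase can never walk into, which is exactly why the graph fails to be strongly connected.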
Moreover, there are more h_i's in the kite generator for which the attack
fails. The construction of the "backwards" tree from h_i is a branching
process with Poi(2) offspring (see Definition 4). Therefore, ultimate
extinction of the branching process means that h_i cannot be connected to,
and the online phase fails. According to the branching process theorems [3],
the probability of ultimate extinction in a branching process with offspring
distributed according to a distribution P is the smallest fixed point
x ∈ [0, 1] of the probability generating function of P. In our case the
distribution is Poi(2), and the generating function is f(x) = e^{2(x−1)}.
Hence, the probability of ultimate extinction is the smallest solution
x ∈ [0, 1] of e^{2(x−1)} = x. Using numerical computation we get x ≈ 0.2032.
This means that in about 20% of the cases the "backwards" tree is limited. We
note that usually this extinction happens very quickly; for example, about
85% of the "extinct" h_i die out in one or zero steps (i.e., their backwards
tree has depth at most 1).
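The ~20% extinction rate can also be checked by directly simulating the branching process. A hedged sketch (trial count, generation limit, and the survival cap are arbitrary toy parameters; a population that reaches the cap with mean offspring above 1 survives with overwhelming probability):

```python
import numpy as np

rng = np.random.default_rng(3)

def goes_extinct(lam=2.0, max_generations=60, cap=10_000):
    """One Galton-Watson process with Poi(lam) offspring per individual;
    returns True iff the line dies out before escaping past `cap`."""
    z = 1
    for _ in range(max_generations):
        if z == 0:
            return True
        if z > cap:
            return False
        z = int(rng.poisson(lam, size=z).sum())
    return z == 0

trials = 2000
frac_extinct = sum(goes_extinct() for _ in range(trials)) / trials
print(frac_extinct)   # should hover around the ~0.2032 quoted in the text
```

Tracking the generation at which extinction occurs in this simulation also reproduces the observation that most extinct lines die within the first step or two.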
To fix this problem we need E[d_in(a)] = n − k, and then the expected number
of chaining values a ∈ A with d_in(a) = 0 is 2^{n−k} · e^{−(n−k)} ≪ 1. The
naive approach is to generate from each chaining value a ∈ A, n − k message
blocks, m_{a,1}, m_{a,2}, ..., m_{a,n−k}, for which f(a, m_{a,i}) ∈ A \ {a}.
Using this approach, the complexity of the precomputation increases to
(n − k) · 2^n compression function calls.
A different approach for fixing the problem is to increase the kite generator
by adding vertices such that the intersection between a message of length 2^k
and the kite generator is sufficiently large (i.e., there are several joint
chaining values). Hence, even if some of the pairs of joint chaining values
fail to connect through the kite generator, a sufficient number of pairs do
connect. This approach does not increase the precomputation time beyond
2^{n+1} (as the additional vertices in the kite generator reduce the "cost"
of connecting any vertex). At the same time, it increases the memory
complexity of the attack. We do not provide a full analysis of this approach,
given the improved attack of Sect. 5, which does not require additional
memory and enjoys a smaller time complexity.
In the construction described in Sect. 3.1, the set A of chaining values is
chosen arbitrarily. We now suggest a new method for choosing the chaining
values in order to optimize the complexity of constructing a kite generator.
Our main idea is to define the set A iteratively, in a manner that ensures
that d_in(a) ≥ 1 (for all chaining values but one, the IV).
1. Let L_0 := {h_0 = IV}.
2. Set the second layer L_1 := {h_1 = f(IV, m_1), h_2 = f(IV, m_2)}.
3. Continue by the same method to set the subsequent layers, so that every
chaining value a of a layer is obtained from some chaining value b of the
previous layer and a message block m_b with
f(b, m_b) = a,
5
It is not necessary to use only two different message blocks in the setting,
but it is possible, since they are used for different chaining values.
6
With high probability we expect some collisions in A. This can easily be
handled during the construction: if a chaining value f(h_i, m_j) has already
been generated, try replacement message blocks one by one until a new
chaining value is reached. It is easy to see that the additional time
complexity is negligible.
i.e.,
d_in(a) ≥ 1.
The case d_in(IV) = 0 is not problematic, since we need the IV in the kite
generator only as the source of the random walk, which uses only out-edges.
In addition, in this method of constructing A, each chaining value a ≠ IV
satisfies d_in(a) ∼ 1 + Poi(1). This implies that
∀a ≠ IV : Pr[d_in(a) = 0] = 0, and therefore the probability of ultimate
extinction in the branching process defined by the backwards tree is 0.
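The layered idea can be sketched in code. This is a hedged toy model of the construction, not the attack at real scale: truncated SHA-256 stands in for f, counters stand in for message blocks, and the depth is tiny. Its only purpose is to show that every non-IV chaining value enters the set with in-degree at least 1 by construction.

```python
import hashlib

def f(h: bytes, m: bytes) -> bytes:
    """Toy compression function (truncated SHA-256), a stand-in for f."""
    return hashlib.sha256(h + m).digest()[:8]

def build_kite_layers(iv: bytes, depth: int):
    """Layered construction: L_0 = {IV}; every node in layer i spawns two
    children in layer i+1, so each non-IV value has d_in >= 1 by
    construction. Collisions are skipped by retrying with a fresh message
    block, as in footnote 6."""
    layers = [[iv]]
    seen = {iv}
    counter = 0
    for _ in range(depth):
        nxt = []
        for h in layers[-1]:
            for _ in range(2):
                while True:          # retry until a fresh chaining value
                    m = counter.to_bytes(8, "big")
                    counter += 1
                    child = f(h, m)
                    if child not in seen:
                        break
                seen.add(child)
                nxt.append(child)
        layers.append(nxt)
    return layers

layers = build_kite_layers(b"\x00" * 8, depth=6)
print([len(L) for L in layers])   # layer i holds 2^i chaining values
```

In the real construction the layering stops at L_{n−k−1} and a final step connects the last layer back into A; here only the layer-growth part is modeled.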
Analysis. Steps 1–4 generate arbitrary message blocks for each reached
chaining value until about 2^{n−k} chaining values are reached. They require
about 2^{n−k} compression function calls. Step 5, of finding two out-edges
from each chaining value reached in the last layer L_{n−k−1}, requires about
2 · 2^{n−k−1} · 2^k = 2^n compression function calls.7 Thus, the
precomputation complexity is about
2^n + 2^{n−k} ≈ 2^n
compression function calls. This means that our method not only ensures that
the resulting graph is strongly connected, but is also more efficient than
the original method.
2 · |B| · 2^n
(n − k) · |B| · 2^n
Analysis. The complexity of the first step, of constructing the kite
generator using one symbol, is similar to the one in Sect. 5, i.e., about
2^{n−1} compression function calls (using the improved method). The
complexity of the second step, of completing the kite generator for the
remaining symbols, is about (n − k) · (|B| − 1) · 2^n compression function
calls. Thus, the total complexity of the precomputation is about
((n − k) · (|B| − 1) + 0.5) · 2^n
compression function calls.
6.4 Improvement II
In Sect. 6.3 we adapted our new method while requiring that the probability
of ultimate extinction in the construction of the "backwards" tree be zero.
We now show that, by allowing a small probability of ultimate extinction,
denoted p, we can reduce the complexity as follows. We look for a λ(p) such
that the probability of ultimate extinction in a branching process with
Poi(λ(p)) offspring is p. According to the branching process theorems [3] we
need
e^{λ(p) · (p−1)} = p,
which implies
λ(p) = ln(p) / (p − 1).
For example, λ(0.01) ≈ 4.65 and λ(0.001) ≈ 6.91. This means that in the
construction described in Sect. 6.3 we can replace the second step, of
looking for n − k message blocks per symbol for each chaining value, by
looking for such a constant number. Thus, for p = 0.001, the complexity of
the precomputation is reduced to about
(6.91 · (|B| − 1) + 0.5) · 2^n
compression function calls.
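The closed form for λ(p) and the two quoted values can be checked directly:

```python
import math

def offspring_rate(p: float) -> float:
    """lambda(p) such that a branching process with Poi(lambda(p))
    offspring goes extinct with probability exactly p: x = p must solve
    e^{lambda (x - 1)} = x, giving lambda = ln(p) / (p - 1)."""
    return math.log(p) / (p - 1)

for p in (0.01, 0.001):
    lam = offspring_rate(p)
    # sanity check: p is indeed a fixed point of the generating function
    assert abs(math.exp(lam * (p - 1)) - p) < 1e-12
    print(p, round(lam, 2))   # ~4.65 for p = 0.01, ~6.91 for p = 0.001
```

Note the trade-off this quantifies: tolerating a 0.1% failure probability replaces the n − k blocks per value by a constant of about 6.91.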
7 Summary
As a concluding discussion, we note that when the kite generator has 2^{n−k}
chaining values and the message is of length 2^k blocks, one should expect
the kite generator to contain one of the message's chaining values with
probability 63%, which translates to about a 50% success rate. The way to fix
this issue is trivial: increase the size of the kite generator. Multiplying
the number of nodes in the kite generator by a factor of 2 reduces the
probability of disjoint sets of chaining values from 1/e ≈ 37% to merely
1/e^2 ≈ 13.5%, and this rate can be further reduced to as small a probability
as the adversary wishes.8 However, when the kite generator is not strongly
connected, as our analysis shows, the success probability of the original
attack is upper bounded by 80%, no matter how large the kite generator is
taken to be.
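The 37% and 13.5% figures follow from the Poisson approximation for the number of intersections: with a kite of s · 2^{n−k} values and 2^k message chaining values, the expected number of common values is s, so the sets are disjoint with probability about e^{−s}. A one-line sketch (the scaling factor s is the only input):

```python
import math

def disjoint_probability(size_factor: float) -> float:
    """P[kite and message share no chaining value] ~ exp(-E[#common
    values]); the expectation equals size_factor when the baseline kite
    holds 2^(n-k) values against a 2^k-block message."""
    return math.exp(-size_factor)

for s in (1, 2, 4):
    print(s, round(disjoint_probability(s), 3))   # 0.368, 0.135, 0.018
```

Growing the kite shrinks the disjointness probability exponentially, but, as the text stresses, it cannot lift the 80% ceiling caused by the missing strong connectivity.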
To conclude, in this work we pointed out an inaccuracy in the analysis of the
construction of the kite generator suggested by Andreeva et al. [1]: the kite
generator is not strongly connected, and thus the online phase fails with
probability at least 20%. We showed that fixing the inaccuracy increases the
complexity of the construction phase by a factor of (n − k)/2. We then
suggested a
8
This issue also arises in the online phase, when the adversary looks for
common chaining values between the two lists described in Sect. 3.1. The fix
is similar: increase the size of these lists accordingly.
Table 1. The number of compression function calls required to construct a
kite generator.

Method                | Merkle-Damgård | Dithered Merkle-Damgård
Andreeva et al. [1] a | 2^{n+1}        | |B| · 2^{n+1}
Our fixed analysis    | (n−k) · 2^n    | (n−k) · |B| · 2^n
Our new method        | 2^n            | ((n−k) · (|B|−1) + 1) · 2^n
Improvement I         | 2^{n−1}        | ((n−k) · (|B|−1) + 0.5) · 2^n
Improvement II        | Not relevant   | (6.91 · (|B|−1) + 0.5) · 2^n

a — Andreeva et al.'s analysis is inaccurate.
new method to optimize the construction of the kite generator that is both
correct and more efficient than the original method. Finally, we adapted the
fixed analysis and our new method to the dithered Merkle-Damgård
construction.
For comparison, we present in Table 1 the number of required compression
function calls to construct a kite generator, using the different methods, for the
Merkle-Damgård structure and its dithered variant.
References
1. Andreeva, E., Bouillaguet, C., Dunkelman, O., Fouque, P.-A., Hoch, J., Kelsey,
J., Shamir, A., Zimmer, S.: New second-preimage attacks on hash functions. J.
Cryptol. 29(4), 657–696 (2016)
2. Andreeva, E., Bouillaguet, C., Fouque, P.-A., Hoch, J.J., Kelsey, J., Shamir, A.,
Zimmer, S.: Second preimage attacks on dithered hash functions. In: Smart, N.
(ed.) EUROCRYPT 2008. LNCS, vol. 4965, pp. 270–288. Springer, Heidelberg
(2008). https://doi.org/10.1007/978-3-540-78967-3 16
3. Athreya, K.B., Ney, P.E.: Dover books on mathematics. In: Branching Processes,
pp. 1–8. Dover Publications, New York (2004). Chap. 1
4. Blackburn, S.R., Stinson, D.R., Upadhyay, J.: On the complexity of the herding
attack and some related attacks on hash functions. Des. Codes Crypt. 64(1–2),
171–193 (2012)
5. Damgård, I.B.: A design principle for hash functions. In: Brassard, G. (ed.)
CRYPTO 1989. LNCS, vol. 435, pp. 416–427. Springer, New York (1990). https://
doi.org/10.1007/0-387-34805-0 39
6. Dean, R.D.: Formal aspects of mobile code security. Ph.D. thesis, Princeton Uni-
versity, Princeton (1999)
7. Joux, A.: Multicollisions in iterated hash functions. Application to cascaded constructions. In: Franklin, M. (ed.) CRYPTO 2004. LNCS, vol. 3152, pp. 306–316. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-28628-8_19
A COUSIN OF THE STRAWBERRY
Fig. 245
Now I will tell you what this difference is; and I want you to try and
understand it clearly, so that you will be able to explain it to others,
for I doubt if the grown-up people could give any better answers than
you. I think your fathers and mothers will be both surprised and
pleased when you show them some summer day how truly different
are these two berries.
You remember that in the strawberry we saw plainly that it was the
flat flower cushion which swelled into the ripe strawberry,—the
cushion which was quite hidden by the many pistils; and though
these pistils were scattered thickly all over the ripe, red fruit, these
little pistils with their seedboxes were too small and dry to add flavor
or richness to the berry.
But if we watch the growth of this blackberry, we see that things
are different.
Fig. 246
We see that the pistils of this fruit do not remain small and dry, as
with the strawberry. No, indeed! their little seedboxes grow bigger
and juicier every day, and they turn from green to red and from red to
black. They do not remain hard to the touch, but become so soft that
a slight pressure will bruise them and stain your fingers purple. And
we enjoy eating the full-grown blackberry (Fig. 249) because a
quantity of these juicy seedboxes are so packed upon the juicy
flower cushion that together they make a delicious mouthful (Figs.
247, 248).
The flower cushion of the blackberry is long and narrow, not broad
and flat like that of the strawberry.
So do not forget that in the strawberry we enjoy eating the ripened
flower cushion, while in the blackberry the juicy seedboxes give to
the fruit more of its size and flavor than does the flower cushion.
ANOTHER COUSIN
Fig. 250
Here we see a branch from the raspberry bush (Fig. 250). How is
the raspberry unlike both strawberry and blackberry? Let us place
side by side these three berries (Figs. 251, 252, 253).
Fig. 254
Fig. 255
I hope you will remember how these three berries differ one from
another.
Why the blossoms of these three plants grow into berries in three
different ways, we do not know; but our time has been well spent if
we remember that they do change in these three ways.
The more we see and question and learn, the more pleasure we
shall find in our own lives, and the better able we shall be to make
life pleasant for others.
PEA BLOSSOMS AND PEAS
THE Pea family is a large one, and it is worth our while to find out
what plan it uses in flower building.
Let us look at a pea blossom and see of what parts it is made up.
“There is the green cup, or calyx,” you say.
Yes, that is plain enough. It is cut up into five little leaves.
“And there is a circle of flower leaves, which makes the corolla.”
Let us pull apart both calyx and corolla, and place the separate
leaves as in the picture (Fig. 256).
The five smaller leaves, the ones marked ca, are the green of the
calyx.
The five larger ones, marked co, belong to the corolla. These, you
notice, are not all alike. The upper one is much the largest.
Fig. 256
As I told you, the two lower leaves of the corolla are joined so as
to form a sort of pocket (Fig. 257). Now, surely, a pocket is meant to
hold something. So take a pin and slit open this pocket. As the two
sides spring apart, out flies some golden pollen, and we see that the
little pocket is far from empty. It holds ten stamens and one pistil.
If you look at these carefully (Fig. 256), you see that one stamen
stands alone, while the other nine have grown together, forming a
tube which is slit down one side. This tube clings to the lower part of
the pistil.
Now, if you pull this tube away, what do you see?
You see a little, green, oblong object, do you not (Fig. 258)?
And what is it? Do you not recognize it?
Fig. 258
Why, it is a baby pea pod. Within it lie the tiny green seeds (Fig.
259) which are only waiting for the fresh touch of life from a pollen
grain to grow bigger and bigger till they become the full-grown seeds
of the pea plant,—the peas that we find so good to eat when they
are cooked for dinner.
Fig. 259
So, after all, the building plan of the pea blossom is nothing but the
old-fashioned one which reads
1. Calyx.
2. Corolla.
3. Stamens.
4. Pistil.
Had I not told you to do so, I wonder if you would have been bright
enough to pull apart the little pocket and discover the stamens and
pistil.
What do you think about this?
THE CLOVER’S TRICK
HERE you see the bees buzzing about the pretty pink clover
heads,—the sweet-smelling clover that grows so thickly in the
fields of early summer.
Can you tell me what plan the clover uses in flower building?
You will not find this easy to do. Indeed, it is hardly possible, for
the clover plays you a trick which you will not be able to discover
without help.
You believe, do you not, that you are looking at a single flower
when you look at a clover head?
Well, you are doing nothing of the sort. You are looking at a great
many little clover flowers which are so closely packed that they make
the pink, sweet-scented ball which we have been taught to call the
clover blossom.
It is incorrect to speak of so many flowers as one; and whenever
we say, “This is a clover blossom,” really we ought to say, “These are
clover blossoms.” We might just as well take a lock of hair—a lock
made up of ever so many hairs—and say, “This is a hair.” Now, you
all know it would not be correct to do this, and no more is it correct to
call a bunch of clover blossoms “a blossom.” But as most people do
not understand this, undoubtedly the mistake will continue to be
made.
Fig. 260 shows you one little flower taken out of the ball-like clover
head.
Can you think of any good reason why so many of these little
flowers should be crowded together in a head?
What would happen if each little blossom grew quite alone?
Fig. 260
Why, it would look so small that the bee could hardly see it. And
sweetly though the whole clover head smells, the fragrance of a
single flower would be so slight that it would hardly serve as an
invitation to step in for refreshments.
So it would seem that the clover plant does wisely in making one
good-sized bunch out of many tiny flowers, for in this way the bees
are persuaded to carry their pollen from one blossom to another.
The moral of the clover story is this: Be very careful before you
insist that you hold in your hand or see in the picture only one flower.
MORE TRICKS
Fig. 261
Here you would have been quite mistaken; for instead of one large
flower, the picture shows you a number of tiny blossoms, so closely
packed, and so surrounded by the four white leaves, that they look
like only one blossom.
Try to get a branch from the dogwood tree (only be sure to break it
off where it will not be missed), and pull apart what looks so much
like one large flower.
First pull off the four white leaves. Then you will have left a bunch
of tiny greenish blossoms. Look at one of these through a magnifying
glass. If eyes and glass are both good, you will see a very small
calyx, a corolla made up of four little flower leaves, four mites of
stamens, and a tiny pistil,—a perfect little flower where you never
would have guessed it.
But all by themselves they would never be noticed: so a number of
them club together, surrounding themselves with the showy leaves
which light up our spring woods.
In Fig. 262 you see the flower cluster of the hobblebush.
The hobblebush has still another way of attracting attention to its
blossoms. It surrounds a cluster of those flowers which have
stamens and pistils, and so are ready to do their proper work in the
world, with a few large blossoms which have neither stamens nor
pistils, but which are made up chiefly of a showy white corolla. These
striking blossoms serve to call attention to their smaller but more
useful sisters.
Fig. 262
THERE is one plant (Fig. 264) which you city children ought to
know almost as well as the country children. In the back yards
and in the little squares of grass which front the street, it sends up its
shining stars; and as for the parks, they look as if some generous
fairy had scattered gold coins all over their green lawns.
Fig. 264
Now, what is this flower which is not too shy to bring its brightness
and beauty into the very heart of the crowded city?
It is the dandelion, of course. You all know, or ought to know, this
plucky little plant, which holds up its smiling face wherever it gets a
chance.
And now, I am sure, you will be surprised to learn that this
dandelion, which you have known and played with all your lives, is
among those mischievous flowers which are laughing at you in their
sleeves, and that regularly it has played you its “April fool;” for, like
the dogwood and the clover, this so-called dandelion is not a single
flower.
No, what you call a dandelion is a bunch made up of a great many
tiny blossoms.
If you pull to pieces a dandelion head, you will find a quantity of
little yellow straps. Each little strap is a perfect flower.
Now, if you had been asked for the building plan of the dandelion,
you would have looked for the calyx, and you would have thought
you had found it in the green cup which holds the yellow straps.
And when you were looking for the corolla, perhaps you would
have said, “Well, all these yellow things must be the flower leaves of
the corolla.”
But when you began your hunt for stamens and pistils, you would
have been badly puzzled; and no wonder, for these are hidden away
inside the yellow straps, the tiny flowers of the dandelion.
So remember that when you cannot find the stamens and pistils
within what you take to be the single flower, you will do well to stop
and ask yourself, “Can this be one of the plants which plays tricks,
and puts a lot of little flowers together in such a way as to make us
think that they are one big flower?”
THE LARGEST PLANT FAMILY IN THE WORLD
Fig. 265
Now, if I had asked you some time ago for the building plan of the
daisy, I think you would have told me that the arrangement of little
green leaves underneath the flower head made up the calyx, and
naturally you would have believed the white leaves above to have
formed the corolla; and the chances are that the yellow center would
have seemed to be a quantity of stamens. As for the seed holders,
you might have said, “Oh, well! I suppose they are hidden away
somewhere among all these stamens.”
It would not have been at all strange or stupid if you had answered
my question in this way.
I know of no plant which dresses up its flowers more cleverly, and
cheats the public more successfully, than this innocent-looking daisy;
for not only does it deceive boys and girls, but many of the grown-up
people who love flowers, and who think they know something about
them, never guess how they have been fooled by the daisy. Indeed,
some of them will hardly believe you when you tell them that when
they pick what they call a daisy, they pick not one, but a great many
flowers; and they are still more surprised when they learn that not
only the yellow center of the daisy is composed of a quantity of little
tube-shaped blossoms, but that what they take to be a circle of
narrow white flower leaves is really a circle of flowers, each white
strap being a separate blossom.
Fig. 266
I dare not try to tell you how many separate blossoms you would
find if you picked to pieces a daisy and counted all its flowers,—all
the yellow ones in the center, and all the white outside ones,—but
you would find a surprisingly large number.