Finally, E1 and V1 are considered as the first block of the ciphertext, referred to as C1 = {V1, E1}. Next, V' is compared with V to verify data integrity and authentication. In general, at the end of the data encryption/decryption stages, the Cipher Block Chaining-Message Authentication Code (CBC-MAC) is prepared (or examined if one already exists) to ensure data integrity; a generic sketch of this chaining is given below.
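The following is a minimal sketch of textbook CBC-MAC chaining, shown only to make the integrity check concrete; enc_block is a hypothetical stand-in for the cipher's keyed block encryption, and [1] may prepare the tag differently.

    def cbc_mac(blocks, enc_block):
        # Textbook CBC-MAC: fold every message block into a running tag.
        # blocks: equal-length byte strings; enc_block: keyed block encryption.
        tag = bytes(len(blocks[0]))
        for m in blocks:
            # XOR the block into the previous tag, then encrypt the result.
            tag = enc_block(bytes(a ^ b for a, b in zip(tag, m)))
        return tag

The receiver recomputes the tag over the decrypted blocks and compares it with the received one to detect tampering.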
4) RRNN and Cipher Design. The RRNN (shown in Fig. 1) uses the forward dynamics equations (3) to (6) to generate the ciphertext and the MAC. The output of the network forward dynamics is computed as

    Y_j(n+1) = \varphi\Big( \sum_{i \in A \cup B} w_{ji}(n) U_i(n) \Big), \quad j \in B,    (10)

where \varphi is a nonlinear activation function and the variable w_{ji} represents the synaptic weight of neuron j. In (10), U_i(n) is the input vector to the RRNN, defined as in [2]:

    U_i(n) = X_i(n) \text{ if } i \in A, \qquad U_i(n) = Y_i(n) \text{ if } i \in B,    (11)

where A denotes the set of indices i for which X_i(n) is an external input, and B denotes the set of indices i for which U_i(n) is the output Y_i(n) of a neuron. Furthermore, the term representing the argument of the nonlinear activation function in (10) is the neuron internal activity function V_j defined in (3).

To define the initial values for the weights w_{ji}(0), a set of uniformly distributed random numbers is chosen. Next, the dynamic process for updating the network weights in real time is defined by means of the following triple-indexed variable:

    \pi^j_{kl}(n+1) = \varphi'(V_j(n)) \Big[ \sum_{i \in B} w_{ji}(n) \pi^i_{kl}(n) + \delta_{kj} U_l(n) \Big],    (12)

where j \in B, k \in B, l \in A \cup B, and \varphi'(\cdot) is the derivative of the nonlinear activation function. In (12), \delta_{kj} is the Kronecker delta, which equals one when k = j and zero otherwise. The triple-indexed variable is initialized such that \pi^j_{kl}(0) = 0.

The variable in (12) is used to update the RRNN weights as follows:

    \Delta w_{kl}(n) = \eta \sum_{j \in B} E_j(n) \pi^j_{kl}(n),    (13)

where \Delta w_{kl} denotes the update to the weight w_{kl} and the parameter \eta refers to the learning rate of the network. In (13), the error function E_j at time instant n is computed as

    E_j(n) = M_j(n) - Y_j(n).    (14)

Finally, the weight w_{kl} is updated according to the following:

    w_{kl}(n+1) = w_{kl}(n) + \Delta w_{kl}(n).    (15)

Both forward and backward dynamics vary in time to ensure that the learning procedure of the RRNN has the capability to detect temporal patterns in the training data. Consequently, the cipher can prepare the MAC so as to maintain both data integrity and data authentication. A minimal code sketch of these dynamics follows.
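The sketch below implements (10)-(15) in Python/NumPy. The network sizes, the tanh activation, and the learning-rate value are illustrative assumptions; only the update equations follow the text above.

    import numpy as np

    rng = np.random.default_rng(0)
    nA, nB = 4, 3                               # |A| external inputs, |B| neurons (assumed sizes)
    W = rng.uniform(-0.5, 0.5, (nB, nA + nB))   # w_ji(0): uniformly distributed random numbers
    p = np.zeros((nB, nB, nA + nB))             # triple-indexed pi^j_kl, initialized to zero
    y = np.zeros(nB)                            # network outputs Y_i(n)
    eta = 0.05                                  # learning rate eta (assumed value)

    phi = np.tanh                               # nonlinear activation phi (assumed choice)
    dphi = lambda v: 1.0 - np.tanh(v) ** 2      # its derivative phi'

    def rrnn_step(x, m):
        """One real-time step: forward pass (10)-(11), weight update (12)-(15).
        x: external inputs X_i(n); m: training targets M_j(n)."""
        global W, p, y
        u = np.concatenate([x, y])              # U_i(n): X_i for i in A, Y_i for i in B  (11)
        v = W @ u                               # internal activities V_j(n), cf. (3)
        e = m - y                               # errors E_j(n) = M_j(n) - Y_j(n)  (14)
        dW = eta * np.einsum('j,jkl->kl', e, p) # weight changes Delta w_kl(n)  (13)
        kron = np.zeros_like(p)
        for k in range(nB):
            kron[k, k, :] = u                   # delta_kj U_l(n) term of (12)
        p = dphi(v)[:, None, None] * (np.einsum('ji,ikl->jkl', W[:, nA:], p) + kron)  # (12)
        W += dW                                 # w_kl(n+1) = w_kl(n) + Delta w_kl(n)  (15)
        y = phi(v)                              # outputs Y_j(n+1)  (10)
        return y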
B. Security of the Cipher

Given the large and growing set of mathematical tools useful in cryptanalysis, security analysis of any newly designed cipher is very important. A complete description of the security analysis of the neural network-based symmetric cipher can be found in [1]; here only the main issues are highlighted.

In the key extension stage, the weight distribution of the hidden layers relies on knowledge of the training data (i.e., the original secret key) along with the architecture of the network and the learning algorithm [2]. Therefore, it is not feasible for a cryptanalyst to analyze the extended key. By changing the length of the secret key and the dimension or hierarchy of the hidden layers, the user can adjust the security level accordingly. A major advantage of this cipher is its capability to relax the constraints imposed on the length of the secret key.
The design of the cipher provides resistance against ciphertext-only and known-ciphertext attacks, as well as high security in data transfer. The ciphertext contains the intermediary output of the hidden layer (a compressed image of the plaintext) and the error (the difference between the network output and the plaintext). Without knowledge of the weight matrix and of the output of the network, it is computationally infeasible to obtain the plaintext from the ciphertext alone; the toy round sketched below illustrates this structure.
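As a toy illustration of the ciphertext structure {V, E} (the two-layer network below is a hypothetical stand-in for the key-trained halves of the RRNN, written F1 and F2 as in Section III), consider:

    import numpy as np

    rng = np.random.default_rng(1)
    W1 = rng.uniform(-0.5, 0.5, (3, 4))   # assumed hidden-layer weights (key-dependent in the real cipher)
    W2 = rng.uniform(-0.5, 0.5, (4, 3))   # assumed output-layer weights

    F1 = lambda m: np.tanh(W1 @ m)        # hidden layer: compressed image of the plaintext block
    F2 = lambda v: np.tanh(W2 @ v)        # output layer

    def encrypt_block(m):
        v = F1(m)                         # intermediary hidden-layer output V
        e = m - F2(v)                     # error E: plaintext minus network output, cf. (14)
        return v, e                       # ciphertext block C = {V, E}

    def decrypt_block(v, e):
        return F2(v) + e                  # a holder of the trained weights recovers M exactly

    m = rng.uniform(-1, 1, 4)
    assert np.allclose(decrypt_block(*encrypt_block(m)), m)

Without W1 and W2, the pair (V, E) exposes only a compressed image and a residual, which is the basis of the infeasibility claim above.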
Also, the cipher is resistant to attacks against the MAC and against the data encryption scheme itself. In this regard, a self-adaptive procedure (to update the network learning rate) was introduced to maintain a satisfactory level of security for the cipher. This procedure requires very little tuning of the cipher parameters and can help accommodate different application requirements.

A simulation program in Matlab was used to validate the cipher [1]. Two simulation experiments were carried out to analyze the effect of the learning rate on the network, while in the third experiment the effect of the self-adaptive algorithm for updating the network's learning rate was investigated. The results showed that when the learning rate is set to a large value, the learning procedure can be prevented from converging: a small mismatch between the output and the learning target then has a dramatic effect on the weight update process and hence causes the forward propagation to be unstable, i.e., chaotic. This chaotic oscillation of the learning behavior can be induced deliberately in order to provide the desired data security.
The control of the learning rate value was achieved by the self-adaptive function of the symmetric cipher [1]. The self-adaptive function is a necessary component for resisting possible cryptanalysis attacks. It detects the trend of the learning procedure by monitoring the mean squared error (MSE) performance function, and then adjusts the learning rate by a Multiplicative-Increase Gradual-Decrease (MIGD) method, sketched below. The self-adaptive procedure should be performed at the conclusion of each training epoch in both the encryption and decryption procedures. Its critical value can guarantee that the learning procedure will not settle at a stable point. At the same time, it helps maintain the learning rate close to the maximum allowable value so that the learning trajectory remains closely related to the training data.
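Reference [1] does not spell out the MIGD rule here, so the following is only a plausible sketch; the trend test, factors, and cap are all assumptions.

    def migd_update(eta, mse_now, mse_prev, up=2.0, down=0.9, eta_max=1.0):
        # Multiplicative-Increase Gradual-Decrease, called once per training epoch.
        if mse_now < mse_prev:
            return min(eta * up, eta_max)   # error trend falling: increase eta multiplicatively
        return eta * down                   # error trend rising: back off gradually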
Although it has been argued that the cipher is secure and capable of providing both highly secure data encryption and data integrity services, the next section presents a chosen plaintext attack which can break the cipher.
III. CHOSEN PLAINTEXT ATTACK

A chosen plaintext attack is an attack in which the attacker is capable of choosing arbitrary plaintexts, feeding them into the cipher, and analyzing the resulting ciphertexts. In this section, a chosen plaintext attack is proposed, and it is proved mathematically that the cipher can be broken by this attack. As described in Section II, plaintext messages have the form M = {M_1, M_2, M_3, ..., M_n} (a sequence of n blocks) and the corresponding ciphertext blocks have the form {C_1, C_2, ..., C_n}, where C_1 = {V_1, E_1}. Now the following steps can be considered for the attack:
First Step:
The enemy selects {M_1}, {M_2}, ..., {M_k}, a sequence of plaintext messages, each containing exactly one block. Then, for j = 1, ..., k,

    X_j = M_j,    (16)
    V_j = F_1(X_j),    (17)
    Y_j = F_2(V_j),    (18)
    E_j = M_j - Y_j,    (19)
    C_j = {V_j, E_j}.    (20)

Then the enemy computes Y_j = M_j - E_j. The function F_2 is the second part of the neural network after the first training with the secret key S(X, Y). She has a number of equations

    Y_j = F_2(V_j),  j = 1, ..., k.    (21)

Also she knows Y_j and V_j, so she can organize a neural network which approximates F_2 (an equivalent neural network), according to Section II, 4) and [1], [12], [13]; see the sketch below.
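A sketch of this step in Python (the scikit-learn regressor and its settings are assumptions; any universal approximator in the sense of [12], [13] would serve):

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def approximate_map(inputs, targets):
        # Fit an "equivalent neural network" to observed input/output pairs.
        model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000)
        model.fit(np.asarray(inputs), np.asarray(targets))
        return model.predict            # callable approximation of the map

    # The V_j come from the ciphertexts C_j = {V_j, E_j}, and Y_j = M_j - E_j
    # from the chosen plaintexts, giving training pairs for (21)
    # (V_list and Y_list are hypothetical names):
    # F2_hat = approximate_map(V_list, Y_list)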
Second Step:
Consider the following sequence of plaintext messages: {M_1, M_{2,1}}, {M_1, M_{2,2}}, ..., {M_1, M_{2,k}}; each message contains exactly two blocks, and the first block is fixed as M_1. The corresponding sequence of ciphertext messages is given by

    {C_1, C_{2,j}},  j = 1, ..., k.    (22)

After the computation of C_1, the neural network will be trained for one epoch. Hence, after the encryption of the first block (M_1 to C_1), the functions F_1 and F_2 will have changed due to this one epoch of training; call the new functions F_1' and F_2'. Then

    X_{2,j} = (Y_1, M_{2,j}),    (23)
    V_{2,j} = F_1'(X_{2,j}),    (24)
    Y_{2,j} = F_2'(V_{2,j}),    (25)
    E_{2,j} = M_{2,j} - Y_{2,j},    (26)

and the enemy computes Y_{2,j} from M_{2,j} and C_{2,j}. In a similar way to the first step, she can approximate F_1' and F_2' with neural networks (see the usage sketch below). If all the plaintext messages have M_1 as the first block, then she can encrypt them using F_2, F_1', and F_2', since she can approximate F_2, F_1', and F_2' with neural networks, according to [1], [12], [13]. Therefore she owns a substitute of the original key.
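The second step reuses the hypothetical approximate_map helper from the first-step sketch:

    def second_step_models(X2_list, V2_list, Y2_list):
        # X2_list[j] = (Y_1, M_2j) per (23); V2_list and Y2_list are derived
        # from the ciphertexts C_2j as V_2j and M_2j - E_2j, respectively.
        F1p_hat = approximate_map(X2_list, V2_list)   # approximates F_1'
        F2p_hat = approximate_map(V2_list, Y2_list)   # approximates F_2'
        return F1p_hat, F2p_hat                       # components of the substitute key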
IV. CONCLUSION

It has been shown that symmetric ciphers based on RRNNs are not completely secure and can be broken by a chosen plaintext attack.

REFERENCES
[1] M. Arvandi, S. Wu, A. Sadeghian, W. W. Melek, and I. Woungang, "Symmetric Cipher Design Using Recurrent Neural Networks," Proc. 2006 Int'l Joint Conf. on Neural Networks, pp. 2039-2046, 2006.
[2] S. Haykin, Neural Networks: A Comprehensive Foundation, 2nd ed., Macmillan College Publishing Company, 1994, ch. 6, 13.
[3] L. Yee and C. De Silva, "Application of Multilayer Perceptron Networks in Public Key Cryptography," Proc. 2002 Int'l Joint Conf. on Neural Networks, vol. 2, pp. 1439-1443, 2002.
[4] L. Yee and C. De Silva, "Application of Multilayer Perceptron Networks as a One-Way Hash Function," Proc. 2002 Int'l Joint Conf. on Neural Networks, vol. 2, pp. 1459-1462, 2002.
[5] L. Yee and C. De Silva, "Application of Multilayer Perceptron Networks in Symmetric Block Ciphers," Proc. 2002 Int'l Joint Conf. on Neural Networks, vol. 2, pp. 1455-1458, 2002.
[6] G. C. Meletiou, D. K. Tasoulis, and M. N. Vrahatis, "A First Study of the Neural Network Approach in the RSA Cryptosystem," Proc. 7th IASTED Int'l Conf. on Artificial Intelligence and Soft Computing, 2002.
[7] I. Kanter, W. Kinzel, and E. Kanter, "Secure Exchange of Information by Synchronization of Neural Networks," Europhys. Lett., vol. 57, p. 141, 2002.
[8] T. Zhou, X. Liao, and Y. Chen, "A Novel Symmetric Cryptography Based on Chaotic Signal Generator and a Clipped Neural Network," Proc. 2004 Int'l Symp. on Neural Networks, pp. 639-644, 2004.
[9] A. Klimov, A. Mityaguine, and A. Shamir, "Analysis of Neural Cryptography," Proc. AsiaCrypt 2002, Springer-Verlag, pp. 288-298, 2002.
[10] C. Li, S. Li, X. Liao, and G. Chen, "Chosen-Plaintext Cryptanalysis of a Clipped-Neural-Network-Based Chaotic Cipher," Proc. 2005 Int'l Symp. on Neural Networks, pp. 630-636, 2005.
[11] S. Wu, "A Block Cipher Design Using Recurrent Neural Networks," M.Eng. dissertation, Dept. of Electrical and Computer Engineering, Ryerson University, Toronto, ON, Canada, 2003.
[12] A. Pincus, "Approximation Theory of the MLP Model in Neural Networks," Acta Numerica, Cambridge University Press, pp. 143-195, 1999.
[13] H. White, "Connectionist Nonparametric Regression: Multilayer Feedforward Networks Can Learn Arbitrary Mappings," Neural Networks, vol. 2, pp. 359-366, 1989.