Long Cheng
Qingshan Liu
Andrey Ronzhin (Eds.)
LNCS 9719
Advances in
Neural Networks – ISNN 2016
13th International Symposium
on Neural Networks, ISNN 2016
St. Petersburg, Russia, July 6–8, 2016, Proceedings
Lecture Notes in Computer Science 9719
Commenced Publication in 1973
Founding and Former Series Editors:
Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen
Editorial Board
David Hutchison
Lancaster University, Lancaster, UK
Takeo Kanade
Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler
University of Surrey, Guildford, UK
Jon M. Kleinberg
Cornell University, Ithaca, NY, USA
Friedemann Mattern
ETH Zurich, Zürich, Switzerland
John C. Mitchell
Stanford University, Stanford, CA, USA
Moni Naor
Weizmann Institute of Science, Rehovot, Israel
C. Pandu Rangan
Indian Institute of Technology, Madras, India
Bernhard Steffen
TU Dortmund University, Dortmund, Germany
Demetri Terzopoulos
University of California, Los Angeles, CA, USA
Doug Tygar
University of California, Berkeley, CA, USA
Gerhard Weikum
Max Planck Institute for Informatics, Saarbrücken, Germany
More information about this series at http://www.springer.com/series/7407
Editors

Long Cheng
The Chinese Academy of Sciences, Beijing, China

Qingshan Liu
Huazhong University of Science and Technology, Wuhan, Hubei, China

Andrey Ronzhin
SPIIRAS, St. Petersburg, Russia
This volume of Lecture Notes in Computer Science constitutes the proceedings of the
13th International Symposium on Neural Networks (ISNN 2016) held during July 6–8,
2016, in Saint Petersburg, Russia. Building on the success of the previous events,
ISNN has become a well-established series of popular and high-quality conferences on
neural networks and their applications. This year the symposium was held for the
second time outside China, in Saint Petersburg, a very beautiful city in Russia. As
usual, it achieved great success.
ISNN aims to provide a high-level international forum for scientists, engineers, educators, and students to gather, present, and discuss the latest progress in neural network research and applications in diverse areas. It encouraged open discussion, disagreement, criticism, and debate, and we think this is the right way to push the field forward.
This year, we received 104 submissions from about 291 authors in 40 countries and regions. Based on rigorous peer reviews by the Program Committee members and reviewers, 84 high-quality papers were selected for publication in the LNCS proceedings. These papers cover many topics of neural network-related research, including intelligent control, neurodynamic analysis, memristive neurodynamics, computer vision, signal processing, machine learning, and optimization.
Many organizations and volunteers made great contributions toward the success of
this symposium. We would like to express our sincere gratitude to the City University
of Hong Kong, the St. Petersburg Institute for Informatics and Automation, Russian
Academy of Sciences, the IEEE Hong Kong Section (CIS Chapter), the International
Neural Network Society, the Asia Pacific Neural Network Society, and the Russian
Neural Networks Society for their technical co-sponsorship. We would also like to
sincerely thank all the committee members for all their great efforts and time in
organizing the symposium. Special thanks go to the Program Committee members and
reviewers whose insightful reviews and timely feedback ensured the high quality of the
accepted papers and the smooth flow of the symposium. We would also like to thank
Springer for their cooperation in publishing the proceedings in the prestigious Lecture
Notes in Computer Science series. Finally, we would like to thank all the speakers,
authors, and participants for their support.
General Chairs
Tatiana V. Chernigovskaya St. Petersburg State University, Saint Petersburg, Russia
Jun Wang City University of Hong Kong, Hong Kong, SAR China
Rafael Yusupov St. Petersburg Institute for Informatics and Automation,
Russian Academy of Sciences, Saint Petersburg, Russia
Advisory Chairs
Vladimir Cherkassky University of Minnesota, Minneapolis, USA
Boris Kryzhanovsky Russian Academy of Sciences, Moscow, Russia
Danil Prokhorov Toyota Motor Corporation, Ann Arbor, USA
Steering Chairs
Haibo He University of Rhode Island, Kingston, USA
Derong Liu University of Illinois-Chicago, Chicago, USA
Program Chairs
Long Cheng Institute of Automation, Chinese Academy of Sciences,
Beijing, China
Qingshan Liu Huazhong University of Science and Technology, Wuhan,
China
Andrey Ronzhin St. Petersburg Institute for Informatics and Automation,
Russian Academy of Sciences, Saint Petersburg, Russia
Publicity Chairs
Huaguang Zhang Northeastern University, Shenyang, China
Jun Zhang Sun Yat-sen University, Guangzhou, China
Li Zhang Chinese University of Hong Kong, Hong Kong,
SAR China
Publications Chairs
Zhenyuan Guo Hunan University, Changsha, China
Biao Luo Institute of Automation, Chinese Academy of Sciences,
Beijing, China
Sitian Qin Harbin Institute of Technology at Weihai, Weihai, China
Registration Chairs
Yiu-Ming Cheung Hong Kong Baptist University, Hong Kong, SAR China
Shenshen Gu Shanghai University, Shanghai, China
Daniel W.C. Ho City University of Hong Kong, Hong Kong, SAR China
Program Committee
Jose Aguilar Universidad de Los Andes, Venezuela
Plamen Angelov Lancaster University, UK
Daniel Araújo Universidade Federal do Rio Grande do Norte, Brazil
Laxmidhar Behera Indian Institute of Technology Kanpur, India
Pascal Bouvry University of Luxembourg, Luxembourg
Salim Bouzerdoum University of Wollongong, Australia
Chien-Lung Chan Yuan Ze University, Taiwan
Jonathan Chan King Mongkut’s University of Technology Thonburi,
Thailand
Andrey Chechulin St. Petersburg Institute for Informatics and Automation,
Russia
Ke Chen Tampere University of Technology, Finland
Yuehui Chen University of Jinan, China
Zengqiang Chen Nankai University, China
Long Cheng Chinese Academy of Sciences, China
Peng Cheng Zhejiang University, China
Zhenbo Cheng ZheJiang University of Technology, China
Optimal Real-Time Price in Smart Grid via Recurrent Neural Network . . . . . 152
Haisha Niu, Zhanshan Wang, Zhenwei Liu, and Yingwei Zhang
Intelligent Control
Language Models with RNNs for Rescoring Hypotheses of Russian ASR . . . 418
Irina Kipyatkova and Alexey Karpov
Long Exposure Point Spread Function Modeling with Gaussian Processes . . . 540
Ping Guo, Jian Yu, and Qian Yin
Evolutionary Computation
Conversion from Rate Code to Temporal Code – Crucial Role of Inhibition . . . 665
Mikhail V. Kiselev
1 Introduction
Steganography and steganalysis can be considered two sides of the same game; they progress through competition with each other. Steganalysis is the counter-art to steganography, and its goal is to detect whether seemingly innocent objects such as images hide a message. Many steganography models have been proposed, such as LSB (least significant bit) replacement and LSB matching, STC (syndrome-trellis coding), HUGO (Highly Undetectable steGO), JSteg, OutGuess, F5, nsF5, MBS, YASS, MOD, and so on [1–3]. Steganalysis targeting a specific steganography method has attracted much research [4, 5]. Universal steganalysis is a difficult problem; some research has been done on the topic [6–8], but how to improve steganalysis performance is still a difficult issue.
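As a concrete illustration of the simplest scheme named above, LSB replacement overwrites each cover pixel's least significant bit with one message bit. The sketch below is illustrative only; the function names are ours and are not tied to any specific tool:

```python
def lsb_embed(pixels, bits):
    """LSB replacement: overwrite each pixel's least significant bit
    with one message bit (pixels and bits are plain integer lists)."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def lsb_extract(pixels, n):
    """Read the message back from the first n stego pixels."""
    return [p & 1 for p in pixels[:n]]

stego = lsb_embed([100, 101, 102, 103], [1, 0, 1, 1])
assert lsb_extract(stego, 4) == [1, 0, 1, 1]
```

Because only the lowest bit changes, each pixel value moves by at most 1, which is why LSB replacement is visually imperceptible yet statistically detectable.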
With the development of information technology, data scales are becoming larger and larger, and big data technology has developed quickly in recent years. Making full use of the information in large-scale training data to improve steganalysis performance is the development trend [9]. Most classic classification methods are impractical in the face of big data. Efficient parallel algorithms and implementation techniques are the key to meeting the scalability and performance requirements entailed in such large-scale data mining analyses. Many parallel algorithms are implemented using different parallelization techniques, such as threads, MPI, MapReduce, and mash-up or workflow technologies, yielding different performance and usability characteristics [9]. MapReduce is a cloud technology developed from the data analysis model of the information retrieval field. Several MapReduce architectures have been developed; iterative MapReduce based on in-memory computing is the most efficient architecture for dealing with large-scale computing problems. Commonly used iterative MapReduce frameworks are Spark and Twister [9, 10]. In this paper, a parallel SVM model is proposed to deal with the large-scale image steganalysis problem. Its efficiency is illustrated with a practical experiment.
The rest of this paper is organized as follows. In Sect. 2, image feature extraction based on the wavelet transform is introduced. The iterative MapReduce architecture is presented in Sect. 3, and the parallel SVM model based on iterative MapReduce in Sect. 4. The image steganalysis procedure based on MapReduce is described in detail in Sect. 5. A practical example is analyzed in Sect. 6 to illustrate the efficiency of the proposed method.
where j denotes the level of decomposition and the superscripts denote the horizontal, vertical, and diagonal components, respectively. H^T and G^T denote the conjugate transpose matrices of H and G.

$$ p(f_k) = \frac{|H(f_k)|}{\sum_{j=-N/2}^{N/2} |H(f_j)|} \qquad (3) $$
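A minimal sketch of the normalization in Eq. (3), assuming H(f_k) is the discrete Fourier transform of the feature signal (e.g., a wavelet sub-band histogram); the function name is ours:

```python
import numpy as np

def spectrum_feature(signal):
    """Normalized amplitude spectrum: p(f_k) = |H(f_k)| / sum_j |H(f_j)|,
    so the components form a probability-like distribution over frequencies."""
    mag = np.abs(np.fft.fft(signal))
    return mag / mag.sum()

p = spectrum_feature(np.sin(np.linspace(0.0, 8.0 * np.pi, 64)))
assert abs(p.sum() - 1.0) < 1e-9  # normalized to unit sum
```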
3 Iterative MapReduce
There are many parallel algorithms with simple iterative structures. Most of them can
be found in the domains such as data clustering, dimension reduction, link analysis,
machine learning, and computer vision. These algorithms can be implemented with
iterative MapReduce computation. Professor Fox's group developed the first iterative MapReduce computation model, Twister. It has several components: the main MapReduce job, Map jobs, Reduce jobs, and a combine job. Twister's programming model is illustrated in Fig. 1.
Twister is a distributed in-memory MapReduce runtime optimized for iterative MapReduce computations. It reads data from the local disks of the worker nodes and handles the intermediate data in the distributed memory of the worker nodes. All communication and data transfers are handled via a publish/subscribe messaging infrastructure. During configuration, the client assigns MapReduce methods to the job, prepares KeyValue pairs, and prepares static data for MapReduce tasks through the partition file if required. Map daemons operate on computation nodes, loading the Map classes and starting them as Map workers. During initialization, Map workers load static data from the local disk according to records in the partition file and cache the data into memory. Most computation tasks defined by the users are executed in the Map workers. Reduce daemons also operate on computation nodes; the Reduce jobs depend on the computation results of the Map jobs. The communication between daemons is through messages. The combine job collects the MapReduce results.
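The Twister-style control flow described above (static data cached per partition, with only a small variable state circulating between iterations) can be sketched in a single process. All names here are ours, not Twister's API:

```python
from functools import reduce

def iterative_mapreduce(partitions, state, map_fn, reduce_fn,
                        combine_fn, converged, max_iter=100):
    """Skeleton of an iterative MapReduce job: Map over cached partitions,
    Reduce the partial results, Combine into the next iteration's state."""
    for _ in range(max_iter):
        mapped = [map_fn(part, state) for part in partitions]  # Map workers
        reduced = reduce(reduce_fn, mapped)                    # Reduce job
        new_state = combine_fn(reduced)                        # combine job
        if converged(state, new_state):
            return new_state
        state = new_state
    return state

# Toy usage: a global mean computed from (sum, count) partials per partition.
parts = [[1.0, 2.0], [3.0, 4.0]]
mean = iterative_mapreduce(
    parts, 0.0,
    map_fn=lambda part, s: (sum(part), len(part)),
    reduce_fn=lambda a, b: (a[0] + b[0], a[1] + b[1]),
    combine_fn=lambda t: t[0] / t[1],
    converged=lambda old, new: abs(old - new) < 1e-12,
)
assert abs(mean - 2.5) < 1e-12
```

The key point the text makes is that the partitions stay cached in worker memory across iterations; only `state` is re-broadcast, which is what makes the iterative variant efficient.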
6 Z. Sun et al.
$k(x_i, x_j) = \phi(x_i)^T \phi(x_j)$. The classification SVM can be implemented by solving the following problems:

$$ \min_{w,\,\xi_i,\,b} \; \frac{1}{2}\|w\|^2 + C \sum_i \xi_i \qquad (4) $$
$$ \text{s.t.} \quad y_i(\Phi^T(x_i) w + b) \ge 1 - \xi_i, \quad \xi_i \ge 0, \quad \forall i = 1, 2, \ldots, n $$

$$ \min_{\alpha} \; \frac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j k(x_i, x_j) - \sum_{i=1}^{l} \alpha_i \qquad (5) $$
$$ \text{s.t.} \quad y^T \alpha = 0, \quad 0 \le \alpha_i \le C, \quad i = 1, 2, \ldots, l $$
If the filters in the first few layers are efficient in extracting the support vectors, then the largest optimization, that of the last layer, has to handle only a few more vectors than the number of actual support vectors. Therefore, the training set of each sub-problem is much smaller than that of the whole problem when the support vectors are a small subset of the training vectors.
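One layer of the cascade idea above can be sketched with scikit-learn's SVC (assuming scikit-learn is available; the function name and the contiguous split strategy are ours): sub-SVMs are trained on partitions, only their support vectors are kept, and the final SVM is trained on the much smaller union:

```python
import numpy as np
from sklearn.svm import SVC

def cascade_svm(X, y, n_splits=4, C=1.0):
    """One cascade layer: train a sub-SVM per partition, keep only its
    support vectors, then train the final SVM on their union."""
    sv_X, sv_y = [], []
    for Xi, yi in zip(np.array_split(X, n_splits), np.array_split(y, n_splits)):
        sub = SVC(kernel="rbf", C=C).fit(Xi, yi)
        sv_X.append(Xi[sub.support_])  # this chunk's support vectors
        sv_y.append(yi[sub.support_])
    return SVC(kernel="rbf", C=C).fit(np.concatenate(sv_X), np.concatenate(sv_y))

# Toy usage: two classes separated by the sign of the first coordinate.
rng = np.random.RandomState(0)
X = rng.randn(200, 2)
y = (X[:, 0] > 0.0).astype(int)
model = cascade_svm(X, y)
```

In a MapReduce setting each partition's sub-SVM would run in a Map worker and the final fit in the Reduce/combine stage; this sketch only shows the data flow, not the distribution.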
5 Image Steganalysis
where TP (true positive) is the number of positive samples estimated as positive, TN (true negative) is the number of negative samples estimated as negative, FP (false positive) is the number of negative samples estimated as positive, and FN (false negative) is the number of positive samples estimated as negative.
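The four counts defined above combine into the usual detection accuracy, (TP + TN) / (TP + TN + FP + FN). A small helper (names ours) makes the definitions concrete:

```python
def steganalysis_metrics(y_true, y_pred):
    """Confusion-matrix counts (1 = stego/positive, 0 = cover/negative)
    and the accuracy (TP + TN) / (TP + TN + FP + FN)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    acc = (tp + tn) / (tp + tn + fp + fn)
    return {"TP": tp, "TN": tn, "FP": fp, "FN": fn, "accuracy": acc}

m = steganalysis_metrics([1, 1, 0, 0], [1, 0, 0, 1])
assert m["accuracy"] == 0.5
```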
6 Example
7 Conclusions
Image steganalysis plays an increasingly important role in our information society, and how to improve steganalysis performance is a research focus of information security. Parallel image steganalysis based on MapReduce can make full use of the information in large-scale training samples to build a more precise steganalysis classifier. The experimental results show that the proposed method is efficient in dealing with large-scale image steganalysis problems. Big data technology will be the development trend for resolving steganalysis problems.
Acknowledgments. This work is supported by the National Science Foundation (No. 61472230), the National Natural Science Foundation of China (Grant No. 61402271), the Natural Science Foundation of Shandong Province (Grant Nos. ZR2015JL023 and ZR2015FL025), the Shandong Science and Technology Development Plan (Grant No. J15LN54), and the Key R&D Program of Shandong Province (Grant No. 2015GGX101012).
Large Scale Image Steganalysis Based on MapReduce 11
References
1. Chanu, Y.J., Tuithung, T., Manglem, S.K.: A short survey on image steganography and
steganalysis techniques. In: 3rd National Conference on Emerging Trends and Applications
in Computer Science, pp. 52–55 (2012)
2. Bilal, I., Roj, M.S., Kumar, R., Mishra, P.K.: Recent advancement in audio steganography.
In: 2014 International Conference on Parallel, Distributed and Grid Computing, pp. 402–405
(2014)
3. Mazurczyk, W., Caviglione, L.: Steganography in modern smartphones and mitigation
techniques. IEEE Commun. Surv. Tutorials 17(1), 334–357 (2015)
4. Khosravirad, S.R., Eghlidos, T., Ghaemmaghami, S.: Closure of sets: a statistically
hypersensitive system for steganalysis of least significant bit embedding. IET Signal
Process. 5(4), 379–389 (2011)
5. Tang, M.W., Fan, M.Y., Wang, G.W.: An extential steganalysis of information hiding for
F5. In: 2nd International Workshop on Intelligent Systems and Applications, pp. 1–4 (2010)
6. Zhang, Z., Hu, D.H., Yang, Y., Su, B.: A universal digital image steganalysis method based
on sparse representation. In: 9th International Conference on Computational Intelligence and
Security, pp. 437–441 (2013)
7. Gul, G., Kurugollu, F.: SVD-based universal spatial domain image steganalysis. IEEE
Trans. Inf. Forensics Secur. 5(2), 349–353 (2010)
8. Davidson, J., Jalan, J.: Steganalysis using partially ordered Markov models. In: Böhme, R.,
Fong, P.W.L., Safavi-Naini, R. (eds.) IH 2010. LNCS, vol. 6387, pp. 118–132. Springer,
Heidelberg (2010)
9. Dai, Z.H., Xiong, Q., Peng, Y., Gao, H.H.: Research on the large scale image steganalysis technology based on cloud computing and BP neural network. In: The 8th International Conference on Intelligent Information Hiding and Multimedia Signal Processing, pp. 415–419 (2012)
10. Fox, G.C., Bae, S.H.: Parallel data mining from multicore to cloudy grids. In: High
Performance Computing and Grids workshop, pp. 311–340 (2008)
11. Ekanayake, J., Li, H.: Twister: a runtime for iterative MapReduce. In: The First International
Workshop on MapReduce and Its Applications of ACM HPDC, pp. 810–818 (2010)
12. Islam, N.S., Wasi-ur-Rahman, M., Lu, X.Y., Shankar, D., Panda, D.K.: Performance
characterization and acceleration of in-memory file systems for Hadoop and Spark
applications on HPC clusters. In: IEEE International Conference on Big Data, pp. 243–252
(2015)
13. Huang, C., Gao, J.J., Xuan, G.R., Shi, Y.Q.: Steganalysis based on moments in the frequency domain of wavelet sub-band's histogram for JPEG images. Comput. Eng. Appl. 30, 95–97 (2006)