Long Cheng
Qingshan Liu
Andrey Ronzhin (Eds.)
LNCS 9719

Advances in
Neural Networks – ISNN 2016
13th International Symposium
on Neural Networks, ISNN 2016
St. Petersburg, Russia, July 6–8, 2016, Proceedings

Lecture Notes in Computer Science 9719
Commenced Publication in 1973
Founding and Former Series Editors:
Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen

Editorial Board
David Hutchison
Lancaster University, Lancaster, UK
Takeo Kanade
Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler
University of Surrey, Guildford, UK
Jon M. Kleinberg
Cornell University, Ithaca, NY, USA
Friedemann Mattern
ETH Zurich, Zürich, Switzerland
John C. Mitchell
Stanford University, Stanford, CA, USA
Moni Naor
Weizmann Institute of Science, Rehovot, Israel
C. Pandu Rangan
Indian Institute of Technology, Madras, India
Bernhard Steffen
TU Dortmund University, Dortmund, Germany
Demetri Terzopoulos
University of California, Los Angeles, CA, USA
Doug Tygar
University of California, Berkeley, CA, USA
Gerhard Weikum
Max Planck Institute for Informatics, Saarbrücken, Germany
More information about this series at http://www.springer.com/series/7407
Editors
Long Cheng
The Chinese Academy of Sciences, Beijing, China
Qingshan Liu
Huazhong University of Science and Technology, Wuhan, Hubei, China
Andrey Ronzhin
SPIIRAS, St. Petersburg, Russia

ISSN 0302-9743 ISSN 1611-3349 (electronic)


Lecture Notes in Computer Science
ISBN 978-3-319-40662-6 ISBN 978-3-319-40663-3 (eBook)
DOI 10.1007/978-3-319-40663-3

Library of Congress Control Number: 2016941302

LNCS Sublibrary: SL1 – Theoretical Computer Science and General Issues

© Springer International Publishing Switzerland 2016


This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the
material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation,
broadcasting, reproduction on microfilms or in any other physical way, and transmission or information
storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now
known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book are
believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors
give a warranty, express or implied, with respect to the material contained herein or for any errors or
omissions that may have been made.

Printed on acid-free paper

This Springer imprint is published by Springer Nature


The registered company is Springer International Publishing AG Switzerland
Preface

This volume of Lecture Notes in Computer Science constitutes the proceedings of the
13th International Symposium on Neural Networks (ISNN 2016) held during July 6–8,
2016, in Saint Petersburg, Russia. Building on the success of the previous events,
ISNN has become a well-established series of popular and high-quality conferences on
neural networks and their applications. This year the symposium was held for the
second time outside China, in Saint Petersburg, a very beautiful city in Russia. As
usual, it achieved great success.
ISNN aims to provide a high-level international forum for scientists, engineers,
educators, and students to gather, present, and discuss the latest progress
in neural network research and applications in diverse areas. The symposium
encouraged open discussion, disagreement, criticism, and debate, which we believe
is the right way to push the field forward.
This year, we received 104 submissions from about 291 authors in 40 countries and
regions. Based on rigorous peer reviews by the Program Committee members and
reviewers, 84 high-quality papers were selected for publication in the LNCS
proceedings. These papers cover many topics of neural network-related research,
including intelligent control, neurodynamic analysis, memristive neurodynamics,
computer vision, signal processing, machine learning, and optimization.
Many organizations and volunteers made great contributions toward the success of
this symposium. We would like to express our sincere gratitude to the City University
of Hong Kong, the St. Petersburg Institute for Informatics and Automation, Russian
Academy of Sciences, the IEEE Hong Kong Section (CIS Chapter), the International
Neural Network Society, the Asia Pacific Neural Network Society, and the Russian
Neural Networks Society for their technical co-sponsorship. We would also like to
sincerely thank all the committee members for all their great efforts and time in
organizing the symposium. Special thanks go to the Program Committee members and
reviewers whose insightful reviews and timely feedback ensured the high quality of the
accepted papers and the smooth flow of the symposium. We would also like to thank
Springer for their cooperation in publishing the proceedings in the prestigious Lecture
Notes in Computer Science series. Finally, we would like to thank all the speakers,
authors, and participants for their support.

April 2016 Long Cheng


Qingshan Liu
Andrey Ronzhin
Organization

General Chairs
Tatiana V. Chernigovskaya St. Petersburg State University, Saint Petersburg, Russia
Jun Wang City University of Hong Kong, Hong Kong, SAR China
Rafael Yusupov St. Petersburg Institute for Informatics and Automation,
Russian Academy of Sciences, Saint Petersburg, Russia

Advisory Chairs
Vladimir Cherkassky University of Minnesota, Minneapolis, USA
Boris Kryzhanovsky Russian Academy of Sciences, Moscow, Russia
Danil Prokhorov Toyota Motor Corporation, Ann Arbor, USA

Steering Chairs
Haibo He University of Rhode Island, Kingston, USA
Derong Liu University of Illinois-Chicago, Chicago, USA

Organizing Committee Chair


Alexey Potapov St. Petersburg State University, Saint Petersburg, Russia

Program Chairs
Long Cheng Institute of Automation, Chinese Academy of Sciences,
Beijing, China
Qingshan Liu Huazhong University of Science and Technology, Wuhan,
China
Andrey Ronzhin St. Petersburg Institute for Informatics and Automation,
Russian Academy of Sciences, Saint Petersburg, Russia

Special Sessions Chairs


Jinde Cao Southeast University, Nanjing, China
Min Han Dalian University of Technology, Dalian, China
Xiaolin Hu Tsinghua University, Beijing, China

Publicity Chairs
Huaguang Zhang Northeastern University, Shenyang, China
Jun Zhang Sun Yat-sen University, Guangzhou, China
Li Zhang Chinese University of Hong Kong, Hong Kong,
SAR China

Publications Chairs
Zhenyuan Guo Hunan University, Changsha, China
Biao Luo Institute of Automation, Chinese Academy of Sciences,
Beijing, China
Sitian Qin Harbin Institute of Technology at Weihai, Weihai, China

Registration Chairs
Yiu-Ming Cheung Hong Kong Baptist University, Hong Kong, SAR China
Shenshen Gu Shanghai University, Shanghai, China
Daniel W.C. Ho City University of Hong Kong, Hong Kong, SAR China

Local Arrangements Chairs


Alexey Karpov St. Petersburg Institute for Informatics and Automation,
Russian Academy of Sciences, Saint Petersburg, Russia
Natalia P. Nesmeyanova St. Petersburg State University, Saint Petersburg, Russia

Program Committee
Jose Aguilar Universidad de Los Andes, Venezuela
Plamen Angelov Lancaster University, UK
Daniel Araújo Universidade Federal do Rio Grande do Norte, Brazil
Laxmidhar Behera Indian Institute of Technology Kanpur, India
Pascal Bouvry University of Luxembourg, Luxembourg
Salim Bouzerdoum University of Wollongong, Australia
Chien-Lung Chan Yuan Ze University, Taiwan
Jonathan Chan King Mongkut’s University of Technology Thonburi,
Thailand
Andrey Chechulin St. Petersburg Institute for Informatics and Automation,
Russia
Ke Chen Tampere University of Technology, Finland
Yuehui Chen University of Jinan, China
Zengqiang Chen Nankai University, China
Long Cheng Chinese Academy of Sciences, China
Peng Cheng Zhejiang University, China
Zhenbo Cheng Zhejiang University of Technology, China

Vladimir Cherkassky University of Minnesota, USA


Zheru Chi Hong Kong Polytechnic University, Hong Kong,
SAR China
Fengyu Cong Dalian University of Technology, China
José Alfredo Ferreira Costa Universidade Federal do Rio Grande do Norte, Brazil
Ruxandra Liana Costea Polytechnic University of Bucharest, Romania
Francisco De A.T. De Carvalho Federal University of Pernambuco, Brazil
Mingcong Deng Tokyo University of Agriculture and Technology, Japan
Habib Dhahri University of Sfax, Tunisia
Qiulei Dong Chinese Academy of Sciences, China
Andries Engelbrecht University of Pretoria, South Africa
Jianchao Fan National Marine Environmental Monitoring Center, China
Mauro Forti Università di Siena, Italy
Wai-Keung Fung Robert Gordon University, UK
Wenyin Gong China University of Geosciences, China
Shenshen Gu Shanghai University, China
Chengan Guo Dalian University of Technology, China
Ping Guo Beijing Normal University, China
Zhishan Guo University of North Carolina at Chapel Hill, USA
Honggui Han Beijing University of Technology, China
Ran He National Laboratory of Pattern Recognition, China
Xing He Chongqing University, China
Iliya Hodashinsky TUSUR University, Russia
Tzung-Pei Hong National University of Kaohsiung, Taiwan
Zhongsheng Hou Beijing Jiaotong University, China
Bill Howell Natural Resources Canada, Canada
Jinglu Hu Waseda University, Japan
Danchi Jiang University of Tasmania, Australia
Haijun Jiang Xinjiang University, China
Min Jiang Xiamen University, China
Yaochu Jin University of Surrey, UK
Kostas Karatzas Aristotle University, Greece
Alexey Karpov St. Petersburg Institute for Informatics and Automation,
Russia
Sungshin Kim Pusan National University, Korea
Irina Kipyatkova St. Petersburg Institute for Informatics and Automation,
Russia
Evgeniy Kostyuchenko Tomsk State University of Control Systems
and Radioelectronics, Russia
Georgios Kouroupetroglou National and Kapodistrian University of Athens, Greece
Chiman Kwan Signal Processing, Inc., USA
Chengdong Li Shandong Jianzhu University, China
Chuandong Li Chongqing University, China

Gang Li Deakin University, Australia


Hongyi Li Bohai University, China
Jianmin Li Tsinghua University, China
Miqing Li Brunel University, UK
Shuai Li Hong Kong Polytechnic University, Hong Kong,
SAR China
Tieshan Li Dalian Maritime University, China
Yangmin Li University of Macau, Macau, SAR China
Yongjie Li University of Electronic Science and Technology of China,
China
Jie Lian Dalian University of Technology, China
Hualou Liang Drexel University, USA
Alan Wee-Chung Liew Griffith University, Australia
Chih-Min Lin Yuan Ze University, Taiwan
Huaping Liu Tsinghua University, China
Ju Liu Shandong University, China
Lianqing Liu Chinese Academy of Sciences, China
Qingshan Liu Huazhong University of Science and Technology, China
Xingwen Liu Southwest University for Nationalities of China, China
Wenlian Lu Fudan University, China
Jinwen Ma Peking University, China
Deyuan Meng Beihang University, China
Felix Pasila Petra Christian University, Indonesia
Zhouhua Peng Dalian Maritime University, China
Irina Perfilieva University of Ostrava, Czech Republic
Leonid Perlovsky US Air Force Research Laboratory, USA
Vladimir Red’Ko Russian Academy of Sciences, Russia
Andrey Ronzhin St. Petersburg Institute for Informatics and Automation,
Russia
Manuel Roveri Politecnico di Milano, Italy
Juha Röning University of Oulu, Finland
Alireza Sadeghian Ryerson University, Canada
Igor Saenko St. Petersburg Institute for Informatics and Automation,
Russia
Md. Shahjahan Khulna University of Engineering and Technology,
Bangladesh
Bo Shen Donghua University, China
Qiankun Song Chongqing Jiaotong University, China
Stefano Squartini Università Politecnica delle Marche, Italy
Lev Stankevich St. Petersburg Polytechnic University, Russia
Jian Sun Beijing Institute of Technology, China
Ning Sun Nankai University, China
Zhanquan Sun Shandong Computer Science Center, China
Manchun Tan Jinan University, China
Qing Tao Chinese Academy of Sciences, China
Christos Tjortjis International Hellenic University, Greece

Feng Wan University of Macau, Macau, SAR China


Dan Wang Dalian Maritime University, China
Ding Wang Chinese Academy of Sciences, China
Dong Wang Dalian University of Technology, China
Hanlei Wang Beijing Institute of Control Engineering, China
Jun Wang City University of Hong Kong, Hong Kong, SAR China
Yong Wang Central South University, China
Yunpeng Wang Beijing Institute of Control Engineering, China
Guanghui Wen Southeast University, China
Huai-Ning Wu Beijing University of Aeronautics and Astronautics, China
Tao Xiang Chongqing University, China
Lin Xiao Jishou University, China
Rui Xu Hohai University, China
Chenguang Yang University of Plymouth, UK
Qinmin Yang Zhejiang University, China
Yingjie Yang De Montfort University, UK
Mao Ye University of Electronic Science and Technology of China,
China
Jianqiang Yi Chinese Academy of Sciences, China
Wen Yu Cinvestav, Mexico
Yang Yu Nanjing University, China
Ales Zamuda University of Maribor, Slovenia
Yi Zeng Chinese Academy of Sciences, China
Jie Zhang Newcastle University, UK
Jinhui Zhang Beijing University of Chemical Technology, China
Mengjie Zhang Victoria University of Wellington, New Zealand
Xuebo Zhang Nankai University, China
Dongya Zhao China University of Petroleum, China
Jun Zhao Dalian University of Technology, China
Liang Zhao University of São Paulo, Brazil
Contents

Signal and Image Processing

Large Scale Image Steganalysis Based on MapReduce. . . . . . . . . . . . . . . . . 3


Zhanquan Sun, Huifen Huang, and Feng Li

Edge Detection Using Convolutional Neural Network . . . . . . . . . . . . . . . . . 12


Ruohui Wang

Spectral-spatial Classification of Hyperspectral Image Based on Locality


Preserving Discriminant Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Min Han, Chengkun Zhang, and Jun Wang

Individual Independent Component Analysis on EEG: Event-Related


Responses Vs. Difference Wave of Deviant and Standard Responses . . . . . . . 30
Tiantian Yang, Fengyu Cong, Zheng Chang, Youyi Liu,
Tapani Ristaniemi, and Hong Li

Parallel Classification of Large Aerospace Images by the Multi-alternative


Discrete Accumulation Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Vladimir I. Vorobiev, Elena L. Evnevich, and Dmitriy K. Levonevskiy

Instantaneous Wavelet Correlation of Spectral Integrals Related to Records


from Different EEG Channels. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Sergey V. Bozhokin and Irina B. Suslova

The Theory of Information Images: Modeling of Communicative


Interactions of Individuals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Alexandr Y. Petukhov and Sofia A. Polevaya

A Two-Stage Channel Selection Model for Classifying EEG Activities


of Young Adults with Internet Addiction . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Wenjie Li, Ling Zou, Tiantong Zhou, Changming Wang,
and Jiongru Zhou

Text-independent Speaker Recognition Using Radial Basis Function


Network. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Anton Yakovenko and Galina Malychina

Usage of DNN in Speaker Recognition: Advantages and Problems . . . . . . . . 82


Oleg Kudashev, Sergey Novoselov, Timur Pekhovsky,
Konstantin Simonchik, and Galina Lavrentyeva

Boosted Inductive Matrix Completion for Image Tagging. . . . . . . . . . . . . . . 92


Yuqing Hou

Neurological Classifier Committee Based on Artificial Neural Networks


and Support Vector Machine for Single-Trial EEG Signal Decoding . . . . . . . 100
Konstantin Sonkin, Lev Stankevich, Yulia Khomenko,
Zhanna Nagornova, Natalia Shemyakina, Alexandra Koval,
and Dmitry Perets

Calculation of Analogs for the Largest Lyapunov Exponents for Acoustic


Data by Means of Artificial Neural Networks . . . . . . . . . . . . . . . . . . . . . . . 108
German A. Chernykh, Yuri A. Kuperin, Ludmila A. Dmitrieva,
and Angelina A. Navleva

Robust Acoustic Emotion Recognition Based on Cascaded Normalization


and Extreme Learning Machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
Heysem Kaya, Alexey A. Karpov, and Albert Ali Salah

Dynamical Behaviors of Recurrent Neural Networks

Matrix-Valued Hopfield Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . 127


Călin-Adrian Popa

Synchronization of Coupled Neural Networks with Nodes of Different


Dimensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Manchun Tan and Desheng Xu

Asymptotic Behaviors for Non-autonomous Difference Neural Networks


with Impulses and Delays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
Shujun Long and Bing Li

Optimal Real-Time Price in Smart Grid via Recurrent Neural Network . . . . . 152
Haisha Niu, Zhanshan Wang, Zhenwei Liu, and Yingwei Zhang

Exponential Stability of Anti-periodic Solution of Cohen-Grossberg Neural


Networks with Mixed Delays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
Sitian Qin, Yongyi Tan, and Fuqiang Wang

Stability of Complex-Valued Cohen-Grossberg Neural Networks


with Time-Varying Delays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
Zhenjiang Zhao and Qiankun Song

Space-Time Structures of Recurrent Neural Networks with


Controlled Synapses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
Vasiliy Osipov

A Practical Simulator of Associative Intellectual Machine . . . . . . . . . . . . . . 185


Sergey Baranov

Hopfield Network with Interneuronal Connections Based on


Memristor Bridges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
Mikhail S. Tarkov

Two-Dimensional Fast Orthogonal Neural Networks . . . . . . . . . . . . . . . . . . 204


A. Yu. Dorogov

Existence of Periodic Solutions to Non-autonomous Delay


Cohen-Grossberg Neural Networks with Impulses on Time Scales . . . . . . . . 211
Zhouhong Li

Intelligent Control

Improved Direct Finite-control-set Model Predictive Control Strategy with


Delay Compensation and Simplified Computational Approach for Active
Front-end Rectifiers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
Xing Liu, Dan Wang, and Zhouhua Peng

Distributed Tracking Control of Uncertain Multiple Manipulators Under


Switching Topologies Using Neural Networks . . . . . . . . . . . . . . . . . . . . . . 233
Long Cheng, Ming Cheng, Hongnian Yu, Lu Deng,
and Zeng-Guang Hou

A Novel Emergency Braking Method with Payload Swing Suppression


for Overhead Crane Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
He Chen, Yongchun Fang, and Ning Sun

Neural Network Approximation Based Multi-dimensional Active Control


of Regenerative Chatter in Micro-milling . . . . . . . . . . . . . . . . . . . . . . . . . . 250
Xiaoli Liu, Chun-Yi Su, and Zhijun Li

A Distributed Delay Consensus of Multi-Agent Systems with Nonlinear


Dynamics in Directed Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
Li Qiu, Liuxiao Guo, Jia Liu, and Yongqing Yang

Discrete-Time Two-Player Zero-Sum Games for Nonlinear Systems


Using Iterative Adaptive Dynamic Programming . . . . . . . . . . . . . . . . . . . . . 269
Qinglai Wei and Derong Liu

Neural Network Technique in Boundary Value Problems for Ordinary


Differential Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
Elena M. Budkina, Evgenii B. Kuznetsov, Tatiana V. Lazovskaya,
Sergey S. Leonov, Dmitriy A. Tarkhov, and Alexander N. Vasilyev

Transmission Synchronization Control of Multiple Non-identical Coupled


Chaotic Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
Xiangyong Chen, Jinde Cao, Jianlong Qiu, and Chengdong Yang

Pneumatic Manipulator with Neural Network Control . . . . . . . . . . . . . . . . . 292


Anton Aliseychik, Igor Orlov, Vladimir Pavlovsky,
Alexey Podoprosvetov, Marina Shishova, and Vladimir Smolin

Dynamic Noise Reduction in the System Measuring Efficiency of Light


Emitting Diodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
Galina Malykhina and Yuri Grodetskiy

Neural Network Technique in Some Inverse Problems of


Mathematical Physics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
Vladimir I. Gorbachenko, Tatiana V. Lazovskaya, Dmitriy A. Tarkhov,
Alexander N. Vasilyev, and Maxim V. Zhukov

The Model of the Robot’s Hierarchical Behavioral Control System . . . . . . . . 317


A.V. Bakhshiev and F.V. Gundelakh

Object Trajectory Association Rules for Tracking Trailer Boat


in Low-frame-rate Videos. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
Jing Zhao, Shaoning Pang, Bruce Hartill,
and Abdolhossein Sarrafzadeh

Learning Time-optimal Anti-swing Trajectories for Overhead


Crane Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
Xuebo Zhang, Ruijie Xue, Yimin Yang, Long Cheng, and Yongchun Fang

Attitude Estimation for UAV with Low-Cost IMU/ADS Based on


Adaptive-Gain Complementary Filter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
Lingling Wang, Li Fu, Xiaoguang Hu, and Guofeng Zhang

Hot-Redundancy CPCI Measurement and Control System Based on


Probabilistic Neural Networks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
Dan Li, Xiaoguang Hu, Guofeng Zhang, and Haibin Duan

Individually Adapted Neural Network for Pilot’s Final Approach


Actions Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
Veniamin Evdokimenkov, Roman Kim, Mikhail Krasilshchikov,
and German Sebrjakov

Clustering, Classification, Modeling, and Forecasting

On Neurochemical Aspects of Agent-Based Memory Model. . . . . . . . . . . . . 375


Alexandr A. Ezhov, Andrei G. Khromov, and Svetlana S. Terentyeva

Intelligent Route Choice Model for Passengers’ Movement


in Subway Stations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
Eric Wai Ming Lee and Michelle Ching Wa Li

A Gaussian Kernel-based Clustering Algorithm with Automatic


Hyper-parameters Computation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
Francisco de A.T. de Carvalho, Marcelo R.P. Ferreira,
and Eduardo C. Simões

Network Intrusion Detection with Bat Algorithm for Synchronization


of Feature Selection and Support Vector Machines . . . . . . . . . . . . . . . . . . . 401
Chunying Cheng, Lanying Bao, and Chunhua Bao

Motion Detection in Asymmetric Neural Networks . . . . . . . . . . . . . . . . . . . 409


Naohiro Ishii, Toshinori Deguchi, Masashi Kawaguchi,
and Hiroshi Sasaki

Language Models with RNNs for Rescoring Hypotheses of Russian ASR . . . 418
Irina Kipyatkova and Alexey Karpov

User-Level Twitter Sentiment Analysis with a Hybrid Approach . . . . . . . . . . 426


Meng Joo Er, Fan Liu, Ning Wang, Yong Zhang,
and Mahardhika Pratama

The Development of a Nonlinear Curve Fitter Using RBF Neural Networks


with Hybrid Neurons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
Michael M. Li

Networks of Coupled Oscillators for Cluster Analysis: Overview


and Application Prospects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444
Andrei Novikov and Elena Benderskaya

Day-Ahead Electricity Price Forecasting Using WT, MI and LSSVM


Optimized by Modified ABC Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . 454
H. Shayeghi, A. Ghasemi, and M. Moradzadeh

Categorization in Intentional Theory of Concepts . . . . . . . . . . . . . . . . . . . . 465


Dmitry Zaitsev and Natalia Zaitseva

A Novel Incremental Class Learning Technique for Multi-class


Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474
Meng Joo Er, Vijaya Krishna Yalavarthi, Ning Wang,
and Rajasekar Venkatesan

Basis Functions Comparative Analysis in Consecutive Data Smoothing


Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482
F.D. Tarasenko and D.A. Tarkhov

FIR as Classifier in the Presence of Imbalanced Data . . . . . . . . . . . . . . . . . 490


Solmaz Bagherpour, Àngela Nebot, and Francisco Mugica

Neural Network System for Monitoring State of a High-Speed


Fiber-Optical Linear Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497
I.A. Saitov, O.O. Basov, A.I. Motienko, S.I. Saitov, M.M. Bizin,
and V. Yu. Budkov

Pattern Classification with the Probabilistic Neural Networks Based


on Orthogonal Series Kernel. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 505
Andrey V. Savchenko

Neural Network Methods for Construction of Sociodynamic


Models Hierarchy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 513
Ekaterina A. Blagoveshchenskaya, Aleksandra I. Dashkina,
Tatiana V. Lazovskaya, Viktoria V. Ryabukhina, and Dmitriy A. Tarkhov

Application of Hybrid Neural Networks for Monitoring and Forecasting


Computer Networks States . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
Igor Saenko, Fadey Skorik, and Igor Kotenko

Multiclass Ensemble of One-against-all SVM Classifiers . . . . . . . . . . . . . . . 531


Catarina Silva and Bernardete Ribeiro

Long Exposure Point Spread Function Modeling with Gaussian Processes . . . 540
Ping Guo, Jian Yu, and Qian Yin

Neural Network Technique for Processes Modeling in Porous Catalyst


and Chemical Reactor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 547
Tatiana A. Shemyakina, Dmitriy A. Tarkhov, and Alexander N. Vasilyev

Fine-Grained Real Estate Estimation Based on Mixture Models . . . . . . . . . . 555


Peng Ji, Xin Xin, and Ping Guo

Intellectual Analysis System of Big Industrial Data for Quality


Management of Polymer Films . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 565
Tamara Chistyakova, Mikhail Teterin, Alexander Razygraev,
and Christian Kohlert

Some Ideas of Informational Deep Neural Networks Structural


Organization. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 573
Vladimir Smolin

Meshfree Computational Algorithms Based on Normalized Radial


Basis Functions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
Alexander N. Vasilyev, Ilya S. Kolbin, and Dmitry L. Reviznikov

Evolutionary Computation

An Experimental Assessment of Hybrid Genetic-Simulated Annealing


Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 595
Cong Jin and Jinan Liu

When Neural Network Computation Meets Evolutionary Computation:


A Survey . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 603
Zonggan Chen, Zhihui Zhan, Wen Shi, Weineng Chen, and Jun Zhang

New Adaptive Feature Vector Construction Procedure for Speaker Emotion


Recognition Based on Wavelet Transform and Genetic Algorithm . . . . . . . . . 613
Alexander M. Soroka, Pavel E. Kovalets, and Igor E. Kheidorov

Integration of Bayesian Classifier and Perceptron for Problem Identification


on Dynamics Signature Using a Genetic Algorithm for the Identification
Threshold Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 620
Evgeny Kostyuchenko, Mihail Gurakov, Egor Krivonosov,
Maxim Tomyshev, Roman Mescheryakov, and Ilya Hodashinskiy

Cognition Computation and Spiking Neural Networks

Tracking Based on Unit-Linking Pulse Coupled Neural Network Image


Icon and Particle Filter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 631
Hang Liu and Xiaodong Gu

Quaternion Spike Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640


Luis Lechuga-Gutiérrez and Eduardo Bayro-Corrochano

Vector-Matrix Models of Pulse Neuron for Digital Signal Processing . . . . . . 647


Vladimir Bondarev

About RP-neuron Models of Aggregating Type . . . . . . . . . . . . . . . . . . . . . 657


Zaur Shibzukhov and Denis Cherednikov

Conversion from Rate Code to Temporal Code – Crucial Role of Inhibition . . . 665
Mikhail V. Kiselev

Analysis of Oscillations in the Brain During Sensory Stimulation:


Cross-Frequency Relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 673
Elena Astasheva, Maksim Astashev, and Valentina Kitchigina

Memristor-Based Neuromorphic System with Content Addressable


Memory Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 681
Yidong Zhu, Xiao Wang, Tingwen Huang, and Zhigang Zeng

Detailed Structure of the Cortical Magnetic Response to Words . . . . . . . . . . 691


V.L. Vvedensky and A. Yu. Nikolayeva

Pre-coding & Testing Technique for Interfacing Neural Networks


Associative Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 698
Fayçal Saffih, Wan Abdullah, and Zainol Ibrahim

The Peculiarities of Perceptual Set in Sensorimotor Illusions . . . . . . . . . . . . 706


Valeria Karpinskaia, Vsevolod Lyakhovetskii, Viktor Allakhverdov,
and Yuri Shilov

Generalized Truth Values: From Logic to the Applications in Cognitive


Sciences. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 712
Oleg Grigoriev

Modeling of Cognitive Evolution: Agent-Based Investigations in


Cognitive Science . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 720
Vladimir G. Red’ko

Usage of Language Particularities for Semantic Map Construction: Affixes


in Russian Language . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 731
Anita Balandina, Artyom Chernyshov, Valentin Klimov,
and Anastasiya Kostkina

Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 739


Signal and Image Processing
Large Scale Image Steganalysis Based on MapReduce

Zhanquan Sun¹,², Huifen Huang², and Feng Li³

¹ Shandong Provincial Key Laboratory of Computer Network,
Shandong Computer Science Center (National Supercomputing Center in Jinan),
Jinan, Shandong, China
sunzhq@sdas.org
² Shandong Yingcai University, Jinan, Shandong, China
³ Department of History, College of Liberal Arts, Shanghai University,
Shanghai, China

Abstract. Steganalysis is the counterpart of steganography: its goal is to
detect whether seemingly innocent objects, such as images, hide a message.
Image steganalysis is an important research issue in the information security
field. With the development of steganography techniques, steganalysis becomes
more and more difficult. Much research has addressed improving the performance
of image steganalysis; based on current research, large-scale training sets are
a feasible means of doing so, but classic classifiers cannot handle image
steganalysis at that scale. In this paper, a parallel support vector machine
based on MapReduce is used to build the steganalysis classifier from a
large-scale training set. The efficiency of the proposed method is illustrated
with an experimental analysis.

Keywords: Image steganalysis · MapReduce · Support vector machines · Feature extraction

1 Introduction

Steganography and steganalysis can be considered opposite sides of a game,
each progressing through competition with the other. Steganalysis aims to
detect whether seemingly innocent objects, such as images, hide a message.
Many steganography models have been proposed, such as LSB (least significant
bit) replacement and LSB matching, STC (syndrome-trellis coding), HUGO (Highly
Undetectable Steganography), JSteg, OutGuess, F5, nsF5, MBS, YASS, MOD, and so
on [1–3]. Much research has targeted steganalysis of specific steganography
methods [4, 5]. Universal steganalysis is a more difficult problem; some
research has been done on the topic [6–8], but how to improve steganalysis
performance remains a difficult issue.
With the development of information technology, data scale becomes larger and
larger, and big data technology has developed quickly in recent years. Making
full use of the information in large-scale training data to improve
steganalysis performance is the
© Springer International Publishing Switzerland 2016


L. Cheng et al. (Eds.): ISNN 2016, LNCS 9719, pp. 3–11, 2016.
DOI: 10.1007/978-3-319-40663-3_1

development trend [9]. Most classic classification methods are impractical in
the face of big data. Efficient parallel algorithms and implementation
techniques are the key to meeting the scalability and performance requirements
entailed in such large-scale data mining analyses. Many parallel algorithms are
implemented using different parallelization techniques, such as threads, MPI,
MapReduce, and mash-up or workflow technologies, yielding different performance
and usability characteristics [9]. MapReduce is a cloud technology that grew
out of the data analysis model of the information retrieval field. Several
MapReduce architectures have been developed; iterative MapReduce based on
in-memory computing is the most efficient architecture for large-scale
computing problems. Commonly used iterative MapReduce frameworks are Spark and
Twister [9, 10]. In this paper, a parallel SVM model is proposed to deal with
the large-scale image steganalysis problem. Its efficiency is illustrated with
a practical experiment.
The rest of this paper is organized as follows. Image feature extraction based
on the wavelet transform is introduced in the remainder of this section. The
iterative MapReduce architecture is presented in Sect. 2, and the parallel SVM
model based on iterative MapReduce in Sect. 3. The image steganalysis procedure
based on MapReduce is described in detail in Sect. 4, and a practical example
is analyzed in Sect. 5 to illustrate the efficiency of the proposed method.

1.1 Discrete Wavelet Transform


A wavelet series is a representation of a square-integrable function by a
certain orthonormal series generated by a wavelet; it is a multi-resolution
analysis tool. Applying an L-level wavelet decomposition to an image produces
multiple sub-band images. Let the initial image be denoted C_0, let the filter
matrix of the wavelet coefficients \psi(x) be denoted G, and let the filter
matrix of the scaling coefficients \varphi(x) be denoted H. The wavelet
decomposition algorithm is as follows.
C_{j+1} = H C_j H^T
D^h_{j+1} = G C_j H^T
D^v_{j+1} = H C_j G^T
D^d_{j+1} = G C_j G^T        (1)

where j denotes the decomposition level; D^h, D^v, and D^d denote the
horizontal, vertical, and diagonal detail components, respectively; and H^T
and G^T denote the conjugate transposes of H and G.
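As an illustration, the scheme of Eq. (1) can be sketched in Python for the simplest orthonormal wavelet, the Haar wavelet; the filter matrices H and G are applied implicitly through pairwise sums and differences along rows and columns. This is only a sketch under that assumption, not the filter bank used in the paper.

```python
def haar_dwt2(c):
    """One level of the 2-D Haar wavelet decomposition of Eq. (1):
    returns C_{j+1} = H C H^T and the detail sub-bands
    D^h = G C H^T, D^v = H C G^T, D^d = G C G^T."""
    s = 2 ** -0.5  # orthonormal Haar filter coefficient 1/sqrt(2)

    def filt_rows(m, low):
        # Apply the low-pass (H) or high-pass (G) Haar filter along rows,
        # i.e. right-multiply by H^T or G^T.
        sign = 1 if low else -1
        return [[s * (row[i] + sign * row[i + 1])
                 for i in range(0, len(row), 2)] for row in m]

    def t(m):
        return [list(col) for col in zip(*m)]

    lo = filt_rows(c, True)               # C H^T
    hi = filt_rows(c, False)              # C G^T
    c_next = t(filt_rows(t(lo), True))    # H C H^T
    d_h = t(filt_rows(t(lo), False))      # G C H^T
    d_v = t(filt_rows(t(hi), True))       # H C G^T
    d_d = t(filt_rows(t(hi), False))      # G C G^T
    return c_next, d_h, d_v, d_d
```

Because the filters are orthonormal, the total energy (sum of squared coefficients) of the four sub-bands equals that of the input image.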

1.2 Histogram Moment in Frequency Domain


The histogram moment is defined as the following equation.
M_n = \sum_{k=-N/2}^{N/2} |f_k|^n p(f_k)        (2)

where n denotes the moment order and N is the number of histogram bins in the
horizontal direction. f_k is the k-th frequency of the Fourier transform, where
k = -N/2, ..., -1, 0, 1, ..., N/2, and p(f_k) is the distribution of the
histogram amplitudes after the Fourier transform, i.e.

p(f_k) = \frac{|H(f_k)|}{\sum_{j=-N/2}^{N/2} |H(f_j)|}        (3)

where |H(f_k)| is the k-th amplitude of the histogram h(x_k) after the Fourier transform.
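Equations (2)–(3) can be computed directly from a histogram; the sketch below uses the standard N-point DFT with centred frequencies f_k = k/N as an illustrative convention (the paper does not fix the frequency normalization).

```python
import cmath

def histogram_moments(hist, orders=(1, 2, 3)):
    """Frequency-domain histogram moments of Eqs. (2)-(3).

    hist: histogram values h(x_k). The DFT amplitudes |H(f_k)| are
    normalized into a distribution p(f_k), and the n-th moment is
    sum_k |f_k|^n * p(f_k) over centred frequencies f_k."""
    n_bins = len(hist)
    # DFT amplitudes |H(f_k)| for k = 0 .. N-1
    amps = []
    for k in range(n_bins):
        acc = sum(h * cmath.exp(-2j * cmath.pi * k * x / n_bins)
                  for x, h in enumerate(hist))
        amps.append(abs(acc))
    # shift k to centred frequencies in [-1/2, 1/2)
    freqs = [(k - n_bins) / n_bins if k >= n_bins // 2 else k / n_bins
             for k in range(n_bins)]
    total = sum(amps)
    probs = [a / total for a in amps]
    return [sum(abs(f) ** n * p for f, p in zip(freqs, probs))
            for n in orders]
```

For a constant histogram all spectral mass sits at f = 0, so every moment is zero; sharper histogram changes push mass to higher |f_k| and raise the moments.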

1.3 Image Feature Extraction


Given a JPEG image, apply the DCT (discrete cosine transform) to it; the
resulting DCT coefficients are the object to be processed.
Then apply the discrete wavelet transform. In this paper a 3-level wavelet
decomposition is adopted, which yields 12 sub-band images. Calculate the
histograms of the 12 sub-band images and of the image itself, apply the
Fourier transform to the histograms, and calculate the 3-order histogram
moments according to Eq. (2). In total we obtain 39 features.
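The count of 39 features follows from simple bookkeeping, under the paper's convention that each of the 3 decomposition levels contributes 4 sub-band images:

```python
def feature_count(levels=3, moment_orders=3):
    """Number of features: one histogram per sub-band image plus one for
    the image itself, and `moment_orders` moments per histogram."""
    subbands = 4 * levels  # 3 levels x 4 sub-bands each = 12 sub-band images
    return (subbands + 1) * moment_orders

print(feature_count())  # 39
```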

2 Iterative MapReduce

Many parallel algorithms have simple iterative structures; most can be found
in domains such as data clustering, dimension reduction, link analysis,
machine learning, and computer vision. Such algorithms can be implemented with
iterative MapReduce computation. Professor Fox's group developed the first
iterative MapReduce computation model, Twister [11]. It has several
components: the MapReduce main job, the Map job, the Reduce job, and the
combine job. Twister's programming model is depicted in Fig. 1.
Twister is a distributed in-memory MapReduce runtime optimized for iterative
MapReduce computations. It reads data from local disks of the worker nodes and
handles the intermediate data in distributed memory of the worker nodes. All com-
munication and data transfers are handled via a publish/subscribe messaging infras-
tructure. During configuration, the client assigns MapReduce methods to the job,
prepares KeyValue pairs and prepares static data for MapReduce tasks through the
partition file if required. Map daemons operate on computation nodes, loading the Map
classes and starting them as Map workers. During initialization, Map workers load
static data from the local disk according to records in the partition file and cache the
data into memory. Most computation tasks defined by the users are executed in the
Map workers. Reduce daemons operate on computation nodes. The reduce jobs depend
on the computation results of Map jobs. The communication between daemons is
through messages. The combine job collects the MapReduce results.
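The iterative pattern described above can be sketched as a toy driver loop. The example below simulates the Twister structure in plain Python with a 1-D k-means update: the partitions play the role of static data cached on the workers, and only the small dynamic state is re-broadcast each iteration. It is a didactic sketch, not Twister's actual API.

```python
def iterative_mapreduce(partitions, centroids, max_iter=20):
    """Toy Twister-style iterative MapReduce driver (1-D k-means).

    `partitions` is the static data, loaded once and kept cached;
    `centroids` is the small dynamic state sent to map tasks each round."""
    for _ in range(max_iter):
        # Map phase: each cached partition emits (centroid_id, (sum, count)).
        emitted = []
        for part in partitions:
            for x in part:
                idx = min(range(len(centroids)),
                          key=lambda i: abs(x - centroids[i]))
                emitted.append((idx, (x, 1)))
        # Reduce phase: aggregate the partial sums per key.
        acc = {}
        for idx, (s, c) in emitted:
            ts, tc = acc.get(idx, (0.0, 0))
            acc[idx] = (ts + s, tc + c)
        # Combine phase: the client builds the next dynamic state.
        new_centroids = [acc[i][0] / acc[i][1] if i in acc else centroids[i]
                        for i in range(len(centroids))]
        if new_centroids == centroids:  # converged: stop iterating
            return centroids
        centroids = new_centroids
    return centroids
```

The key property this models is that the bulky data never moves between iterations; only the centroids (here, two floats) cross the network each round.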
6 Z. Sun et al.

Fig. 1. Program model of Twister.

3 Parallel SVM Based on Iterative MapReduce


3.1 SVM
SVM first maps the input points into a high-dimensional feature space with a
nonlinear mapping function \Phi and then carries out linear classification or
regression in that space. Linear classification or regression in the
high-dimensional feature space corresponds to nonlinear classification or
regression in the low-dimensional input space. The general SVM can be
described as follows.
Let the l training samples be T = {(x_1, y_1), ..., (x_l, y_l)}, where
x_i \in R^n and y_i \in {-1, 1} (classification) or y_i \in R (regression),
i = 1, 2, ..., l. The kernel function induced by the nonlinear mapping is
k(x_i, x_j) = \Phi(x_i) \cdot \Phi(x_j). Classification SVM can be implemented
by solving the following optimization problem.
\min_{w, \xi_i, b} \; \frac{1}{2} \|w\|^2 + C \sum_i \xi_i
\text{s.t.} \; y_i(\Phi^T(x_i) w + b) \ge 1 - \xi_i, \quad \forall i = 1, 2, ..., n
\xi_i \ge 0, \quad \forall i = 1, 2, ..., n        (4)

By introducing Lagrangian multipliers, the optimization problem can be
transformed into its dual problem.

\min_{\alpha} \; \frac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j k(x_i, x_j) - \sum_{i=1}^{l} \alpha_i
\text{s.t.} \; y^T \alpha = 0
0 \le \alpha_i \le C, \quad i = 1, 2, ..., l        (5)

After obtaining the optimal solution \alpha^*, b^*, the following decision
function is used to determine which class a sample belongs to.
f(x) = \mathrm{sgn}\left( \sum_{i=1}^{l} y_i \alpha_i^* K(x_i, x) + b^* \right)        (6)
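Equation (6) reads directly as code. In the sketch below, the RBF kernel and the toy parameter values are illustrative assumptions, not values from the paper; a real deployment would take the support vectors, multipliers, and bias from the trained solver.

```python
import math

def rbf_kernel(x, z, gamma=0.5):
    # Gaussian (RBF) kernel; gamma is an illustrative choice.
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def svm_decide(x, support_vectors, labels, alphas, b, kernel=rbf_kernel):
    """Decision function of Eq. (6): the sign of the kernel expansion
    over the support vectors plus the bias term b."""
    total = sum(a * y * kernel(sv, x)
                for sv, y, a in zip(support_vectors, labels, alphas)) + b
    return 1 if total >= 0 else -1

# Toy model with two support vectors of opposite class.
svs, ys, alphas, b = [[0.0, 0.0], [2.0, 2.0]], [-1, 1], [1.0, 1.0], 0.0
```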

The classification precision of the SVM model is calculated as

\text{Accuracy} = \frac{\#\text{correctly predicted data}}{\#\text{total testing data}} \times 100\%

3.2 Parallel SVM Based on Twister


The parallel SVM is based on the cascade SVM model. SVM training is realized
through partial SVMs, with each sub-SVM acting as a filter. This makes it
straightforward to drive partial solutions towards the global optimum, while
alternative techniques may optimize criteria that are not directly relevant
for finding the global solution. Through the parallel SVM model, a large-scale
data optimization problem can be divided into independent, smaller
optimizations. The support vectors of earlier sub-SVMs are used as the input
of later sub-SVMs, and the sub-SVMs are combined into one final SVM in
hierarchical fashion. The parallel SVM training process is depicted in Fig. 2.
In this architecture, the support-vector sets of two SVMs are combined into
one set and input to a new SVM. The process continues until only one set of
vectors is left, so a single SVM never has to deal with the whole training set.

Fig. 2. Training flow of parallel SVM.

If the filters in the first few layers are efficient at extracting the support
vectors, then the largest optimization, that of the last layer, has to handle
only a few more vectors than the number of actual support vectors. When the
support vectors are a small subset of the training vectors, the training set
of each sub-problem is therefore much smaller than that of the whole problem.
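The merge-and-retrain data flow of the cascade can be made concrete with a sketch. Here `toy_svs` is a crude stand-in for a real SVM solver on 1-D labeled points (it keeps only the point of each class nearest to the other class); it exists purely to show how support-vector sets propagate up the cascade, not to model SVM training itself.

```python
def toy_svs(samples):
    """Stand-in for a sub-SVM trainer on 1-D labeled points (x, y).
    Keeps, from each class, the point nearest to the other class -- a
    crude proxy for the support vectors a real solver would return."""
    pos = [x for x, y in samples if y == 1]
    neg = [x for x, y in samples if y == -1]
    if not pos or not neg:
        return list(samples)

    def nearest(group, other):
        return min(group, key=lambda p: min(abs(p - q) for q in other))

    return [(nearest(pos, neg), 1), (nearest(neg, pos), -1)]

def cascade_train(partitions, train=toy_svs):
    """Cascade training: train sub-SVMs on the data partitions, then
    repeatedly merge pairs of support-vector sets and retrain until a
    single set is left (the final SVM of the last layer)."""
    layer = [train(p) for p in partitions]
    while len(layer) > 1:
        layer = [train(layer[i] + layer[i + 1]) if i + 1 < len(layer)
                 else layer[i]
                 for i in range(0, len(layer), 2)]
    return layer[0]
```

Note that after the first layer, each trainer only ever sees the retained vectors of two children, never the full training set, which is exactly the property the cascade exploits.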

4 Image Steganalysis

Large-scale image steganalysis can be summarized as follows.

Step 1: Sample generation
Collect large-scale raw images from available resources and resize them to
the same size.
Step 2: Steganography
Hide information in the resized images with different steganography methods,
such as F5, OutGuess, and so on. The resized images and the steganography
images are combined to generate the sample images.
Step 3: Feature extraction
Apply the feature extraction introduced in Sect. 1.3 to the sample images to
obtain the features of all samples. They are divided into 2 parts: one part is
taken as training samples and the others as test samples.

Step 4: Classifier training
The training samples are input to the parallel SVM introduced in Sect. 3. The
test samples are used to test the performance of the trained classifier.
Step 5: Performance evaluation
The performance of the steganalysis classifier is evaluated with the following
4 indices.
(1) TPR (True Positive Rate)

TPR ¼ TP=ðTP þ FNÞ ð7Þ

(2) TNR (True Negative Rate)

TNR ¼ TN=ðTN þ FPÞ ð8Þ

(3) FPR (False Positive Rate)

FPR ¼ FP=ðFP þ TNÞ ð9Þ

(4) FNR (False Negative Rate)

FNR ¼ FN=ðFN þ TPÞ ð10Þ

where TP (true positives) is the number of positive samples estimated as
positive, TN (true negatives) is the number of negative samples estimated as
negative, FP (false positives) is the number of negative samples estimated as
positive, and FN (false negatives) is the number of positive samples estimated
as negative.
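The four indices of Eqs. (7)–(10) can be computed together from the true and predicted labels; the convention below (1 = stego/positive, 0 = cover/negative) is an illustrative assumption:

```python
def steganalysis_metrics(y_true, y_pred):
    """Evaluation indices of Eqs. (7)-(10).
    Label 1 marks a stego (positive) sample, 0 a cover (negative) one."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {"TPR": tp / (tp + fn), "TNR": tn / (tn + fp),
            "FPR": fp / (fp + tn), "FNR": fn / (fn + tp)}
```

By construction TPR + FNR = 1 and TNR + FPR = 1, which is a useful sanity check when reading result tables.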

5 Example

5.1 Data Source


We downloaded the ImageNet database from http://image-net.org; it contains 1.3
million images in total, all in JPEG format. We randomly selected 50,000
images for this study and first resized them to 256 × 256. Steganography was
applied with the LSB method to 25,000 images and with the F5 method to the
other 25,000. The cover images and the stego images together give the 100,000
samples.

5.2 Feature Extraction


Applying the feature extraction method to the 100,000 images yields 100,000
vectors of 39 dimensions; 75,000 vectors are selected as training samples and
the rest as test samples.

5.3 Steganalysis Identification


The example is analyzed on the Shandong Cloud platform. Twister 0.9 is
deployed on each computational node, and ActiveMQ is used as the message
broker. Each virtual machine runs Ubuntu Linux on a 3 GHz Intel Xeon processor
with 4 GB RAM. The 75,000 training vectors are partitioned into 4, 2, and 1
sections and trained with the parallel SVM introduced in Sect. 3 on 4, 2, and
1 computational nodes, respectively. The 25,000 test samples are used to test
the performance of the different training models. The test results are listed
in Table 1.

Table 1. Steganalysis results for different partitions.


Computational node number Training time(s) TPR TNR
1 12032 0.981 0.972
2 72003 0.978 0.971
4 48098 0.978 0.972

5.4 Result Comparison


The above results show that the parallel classifier based on MapReduce can
process large-scale training samples, and that large-scale training samples
improve the classifier's performance. For the same training samples, more
computational nodes can improve the training speed while the detection
performance remains similar. The method is therefore efficient for large-scale
image steganalysis problems.

6 Conclusions

Image steganalysis plays an increasingly important role in the information
society, and improving steganalysis performance is a research focus of
information security. Parallel image steganalysis based on MapReduce can make
full use of the information in large-scale training samples to build a more
precise steganalysis classifier. The experimental results show that the
proposed method is efficient for large-scale image steganalysis problems. Big
data technology will be a development trend for solving steganalysis problems.

Acknowledgments. This work is supported by the national science foundation (No. 61472230),
National Natural Science Foundation of China (Grant No. 61402271), the Natural Science
Foundation of Shandong Province (Grant No. ZR2015JL023 and Grant No. ZR2015FL025),
Shandong science and technology development plan (Grant No. J15LN54), Key R & D program
in Shandong Province (Grant No. 2015GGX101012).

References
1. Chanu, Y.J., Tuithung, T., Manglem, S.K.: A short survey on image steganography and
steganalysis techniques. In: 3rd National Conference on Emerging Trends and Applications
in Computer Science, pp. 52–55 (2012)
2. Bilal, I., Roj, M.S., Kumar, R., Mishra, P.K.: Recent advancement in audio steganography.
In: 2014 International Conference on Parallel, Distributed and Grid Computing, pp. 402–405
(2014)
3. Mazurczyk, W., Caviglione, L.: Steganography in modern smartphones and mitigation
techniques. IEEE Commun. Surv. Tutorials 17(1), 334–357 (2015)
4. Khosravirad, S.R., Eghlidos, T., Ghaemmaghami, S.: Closure of sets: a statistically
hypersensitive system for steganalysis of least significant bit embedding. IET Signal
Process. 5(4), 379–389 (2011)
5. Tang, M.W., Fan, M.Y., Wang, G.W.: An extential steganalysis of information hiding for
F5. In: 2nd International Workshop on Intelligent Systems and Applications, pp. 1–4 (2010)
6. Zhang, Z., Hu, D.H., Yang, Y., Su, B.: A universal digital image steganalysis method based
on sparse representation. In: 9th International Conference on Computational Intelligence and
Security, pp. 437–441 (2013)
7. Gul, G., Kurugollu, F.: SVD-based universal spatial domain image steganalysis. IEEE
Trans. Inf. Forensics Secur. 5(2), 349–353 (2010)
8. Davidson, J., Jalan, J.: Steganalysis using partially ordered markov models. In: Böhme, R.,
Fong, P.W.L., Safavi-Naini, R. (eds.) IH 2010. LNCS, vol. 6387, pp. 118–132. Springer,
Heidelberg (2010)
9. Dai, Z.H., Xiong, Q., Peng, Y., Gao, H.H.: Research on the large scale image steganalysis
technology based on cloud computing and BP neutral network. In: The 8th International
Conference on Intelligent Information Hiding and Multimedia Signal Processing, pp. 415–
419 (2012)
10. Fox, G.C., Bae, S.H.: Parallel data mining from multicore to cloudy grids. In: High
Performance Computing and Grids workshop, pp. 311–340 (2008)
11. Ekanayake, J., Li, H.: Twister: a runtime for iterative MapReduce. In: The First International
Workshop on MapReduce and Its Applications of ACM HPDC, pp. 810–818 (2010)
12. Islam, N.S., Wasi-ur-Rahman, M., Lu, X.Y., Shankar, D., Panda, D.K.: Performance
characterization and acceleration of in-memory file systems for Hadoop and Spark
applications on HPC clusters. In: IEEE International Conference on Big Data, pp. 243–252
(2015)
13. Huang, C., Gao, J.J., Xuan, G.R., Shi, Y.Q.: Steganalysis based on moments in the
frequency domain of wavelet sub- band’s histogram for JPEG images. Comput. Eng. Appl.
30, 95–97 (2006)
Another random document with
no related content on Scribd:
*** END OF THE PROJECT GUTENBERG EBOOK THE BEAST OF
BOREDOM ***

Updated editions will replace the previous one—the old editions will
be renamed.

Creating the works from print editions not protected by U.S.


copyright law means that no one owns a United States copyright in
these works, so the Foundation (and you!) can copy and distribute it
in the United States without permission and without paying copyright
royalties. Special rules, set forth in the General Terms of Use part of
this license, apply to copying and distributing Project Gutenberg™
electronic works to protect the PROJECT GUTENBERG™ concept
and trademark. Project Gutenberg is a registered trademark, and
may not be used if you charge for an eBook, except by following the
terms of the trademark license, including paying royalties for use of
the Project Gutenberg trademark. If you do not charge anything for
copies of this eBook, complying with the trademark license is very
easy. You may use this eBook for nearly any purpose such as
creation of derivative works, reports, performances and research.
Project Gutenberg eBooks may be modified and printed and given
away—you may do practically ANYTHING in the United States with
eBooks not protected by U.S. copyright law. Redistribution is subject
to the trademark license, especially commercial redistribution.

START: FULL LICENSE


THE FULL PROJECT GUTENBERG LICENSE
PLEASE READ THIS BEFORE YOU DISTRIBUTE OR USE THIS WORK

To protect the Project Gutenberg™ mission of promoting the free


distribution of electronic works, by using or distributing this work (or
any other work associated in any way with the phrase “Project
Gutenberg”), you agree to comply with all the terms of the Full
Project Gutenberg™ License available with this file or online at
www.gutenberg.org/license.

Section 1. General Terms of Use and


Redistributing Project Gutenberg™
electronic works
1.A. By reading or using any part of this Project Gutenberg™
electronic work, you indicate that you have read, understand, agree
to and accept all the terms of this license and intellectual property
(trademark/copyright) agreement. If you do not agree to abide by all
the terms of this agreement, you must cease using and return or
destroy all copies of Project Gutenberg™ electronic works in your
possession. If you paid a fee for obtaining a copy of or access to a
Project Gutenberg™ electronic work and you do not agree to be
bound by the terms of this agreement, you may obtain a refund from
the person or entity to whom you paid the fee as set forth in
paragraph 1.E.8.

1.B. “Project Gutenberg” is a registered trademark. It may only be


used on or associated in any way with an electronic work by people
who agree to be bound by the terms of this agreement. There are a
few things that you can do with most Project Gutenberg™ electronic
works even without complying with the full terms of this agreement.
See paragraph 1.C below. There are a lot of things you can do with
Project Gutenberg™ electronic works if you follow the terms of this
agreement and help preserve free future access to Project
Gutenberg™ electronic works. See paragraph 1.E below.
1.C. The Project Gutenberg Literary Archive Foundation (“the
Foundation” or PGLAF), owns a compilation copyright in the
collection of Project Gutenberg™ electronic works. Nearly all the
individual works in the collection are in the public domain in the
United States. If an individual work is unprotected by copyright law in
the United States and you are located in the United States, we do
not claim a right to prevent you from copying, distributing,
performing, displaying or creating derivative works based on the
work as long as all references to Project Gutenberg are removed. Of
course, we hope that you will support the Project Gutenberg™
mission of promoting free access to electronic works by freely
sharing Project Gutenberg™ works in compliance with the terms of
this agreement for keeping the Project Gutenberg™ name
associated with the work. You can easily comply with the terms of
this agreement by keeping this work in the same format with its
attached full Project Gutenberg™ License when you share it without
charge with others.

1.D. The copyright laws of the place where you are located also
govern what you can do with this work. Copyright laws in most
countries are in a constant state of change. If you are outside the
United States, check the laws of your country in addition to the terms
of this agreement before downloading, copying, displaying,
performing, distributing or creating derivative works based on this
work or any other Project Gutenberg™ work. The Foundation makes
no representations concerning the copyright status of any work in
any country other than the United States.

1.E. Unless you have removed all references to Project Gutenberg:

1.E.1. The following sentence, with active links to, or other


immediate access to, the full Project Gutenberg™ License must
appear prominently whenever any copy of a Project Gutenberg™
work (any work on which the phrase “Project Gutenberg” appears, or
with which the phrase “Project Gutenberg” is associated) is
accessed, displayed, performed, viewed, copied or distributed:
This eBook is for the use of anyone anywhere in the United
States and most other parts of the world at no cost and with
almost no restrictions whatsoever. You may copy it, give it away
or re-use it under the terms of the Project Gutenberg License
included with this eBook or online at www.gutenberg.org. If you
are not located in the United States, you will have to check the
laws of the country where you are located before using this
eBook.

1.E.2. If an individual Project Gutenberg™ electronic work is derived


from texts not protected by U.S. copyright law (does not contain a
notice indicating that it is posted with permission of the copyright
holder), the work can be copied and distributed to anyone in the
United States without paying any fees or charges. If you are
redistributing or providing access to a work with the phrase “Project
Gutenberg” associated with or appearing on the work, you must
comply either with the requirements of paragraphs 1.E.1 through
1.E.7 or obtain permission for the use of the work and the Project
Gutenberg™ trademark as set forth in paragraphs 1.E.8 or 1.E.9.

1.E.3. If an individual Project Gutenberg™ electronic work is posted


with the permission of the copyright holder, your use and distribution
must comply with both paragraphs 1.E.1 through 1.E.7 and any
additional terms imposed by the copyright holder. Additional terms
will be linked to the Project Gutenberg™ License for all works posted
with the permission of the copyright holder found at the beginning of
this work.

1.E.4. Do not unlink or detach or remove the full Project


Gutenberg™ License terms from this work, or any files containing a
part of this work or any other work associated with Project
Gutenberg™.

1.E.5. Do not copy, display, perform, distribute or redistribute this


electronic work, or any part of this electronic work, without
prominently displaying the sentence set forth in paragraph 1.E.1 with
active links or immediate access to the full terms of the Project
Gutenberg™ License.
1.E.6. You may convert to and distribute this work in any binary,
compressed, marked up, nonproprietary or proprietary form,
including any word processing or hypertext form. However, if you
provide access to or distribute copies of a Project Gutenberg™ work
in a format other than “Plain Vanilla ASCII” or other format used in
the official version posted on the official Project Gutenberg™ website
(www.gutenberg.org), you must, at no additional cost, fee or expense
to the user, provide a copy, a means of exporting a copy, or a means
of obtaining a copy upon request, of the work in its original “Plain
Vanilla ASCII” or other form. Any alternate format must include the
full Project Gutenberg™ License as specified in paragraph 1.E.1.

1.E.7. Do not charge a fee for access to, viewing, displaying,


performing, copying or distributing any Project Gutenberg™ works
unless you comply with paragraph 1.E.8 or 1.E.9.

1.E.8. You may charge a reasonable fee for copies of or providing


access to or distributing Project Gutenberg™ electronic works
provided that:

• You pay a royalty fee of 20% of the gross profits you derive from
the use of Project Gutenberg™ works calculated using the
method you already use to calculate your applicable taxes. The
fee is owed to the owner of the Project Gutenberg™ trademark,
but he has agreed to donate royalties under this paragraph to
the Project Gutenberg Literary Archive Foundation. Royalty
payments must be paid within 60 days following each date on
which you prepare (or are legally required to prepare) your
periodic tax returns. Royalty payments should be clearly marked
as such and sent to the Project Gutenberg Literary Archive
Foundation at the address specified in Section 4, “Information
about donations to the Project Gutenberg Literary Archive
Foundation.”

• You provide a full refund of any money paid by a user who


notifies you in writing (or by e-mail) within 30 days of receipt that
s/he does not agree to the terms of the full Project Gutenberg™
License. You must require such a user to return or destroy all
copies of the works possessed in a physical medium and
discontinue all use of and all access to other copies of Project
Gutenberg™ works.

• You provide, in accordance with paragraph 1.F.3, a full refund of


any money paid for a work or a replacement copy, if a defect in
the electronic work is discovered and reported to you within 90
days of receipt of the work.

• You comply with all other terms of this agreement for free
distribution of Project Gutenberg™ works.

1.E.9. If you wish to charge a fee or distribute a Project Gutenberg™


electronic work or group of works on different terms than are set
forth in this agreement, you must obtain permission in writing from
the Project Gutenberg Literary Archive Foundation, the manager of
the Project Gutenberg™ trademark. Contact the Foundation as set
forth in Section 3 below.

1.F.

1.F.1. Project Gutenberg volunteers and employees expend


considerable effort to identify, do copyright research on, transcribe
and proofread works not protected by U.S. copyright law in creating
the Project Gutenberg™ collection. Despite these efforts, Project
Gutenberg™ electronic works, and the medium on which they may
be stored, may contain “Defects,” such as, but not limited to,
incomplete, inaccurate or corrupt data, transcription errors, a
copyright or other intellectual property infringement, a defective or
damaged disk or other medium, a computer virus, or computer
codes that damage or cannot be read by your equipment.

1.F.2. LIMITED WARRANTY, DISCLAIMER OF DAMAGES - Except


for the “Right of Replacement or Refund” described in paragraph
1.F.3, the Project Gutenberg Literary Archive Foundation, the owner
of the Project Gutenberg™ trademark, and any other party
distributing a Project Gutenberg™ electronic work under this
agreement, disclaim all liability to you for damages, costs and
expenses, including legal fees. YOU AGREE THAT YOU HAVE NO
REMEDIES FOR NEGLIGENCE, STRICT LIABILITY, BREACH OF
WARRANTY OR BREACH OF CONTRACT EXCEPT THOSE
PROVIDED IN PARAGRAPH 1.F.3. YOU AGREE THAT THE
FOUNDATION, THE TRADEMARK OWNER, AND ANY
DISTRIBUTOR UNDER THIS AGREEMENT WILL NOT BE LIABLE
TO YOU FOR ACTUAL, DIRECT, INDIRECT, CONSEQUENTIAL,
PUNITIVE OR INCIDENTAL DAMAGES EVEN IF YOU GIVE
NOTICE OF THE POSSIBILITY OF SUCH DAMAGE.

1.F.3. LIMITED RIGHT OF REPLACEMENT OR REFUND - If you


discover a defect in this electronic work within 90 days of receiving it,
you can receive a refund of the money (if any) you paid for it by
sending a written explanation to the person you received the work
from. If you received the work on a physical medium, you must
return the medium with your written explanation. The person or entity
that provided you with the defective work may elect to provide a
replacement copy in lieu of a refund. If you received the work
electronically, the person or entity providing it to you may choose to
give you a second opportunity to receive the work electronically in
lieu of a refund. If the second copy is also defective, you may
demand a refund in writing without further opportunities to fix the
problem.

1.F.4. Except for the limited right of replacement or refund set forth in
paragraph 1.F.3, this work is provided to you ‘AS-IS’, WITH NO
OTHER WARRANTIES OF ANY KIND, EXPRESS OR IMPLIED,
INCLUDING BUT NOT LIMITED TO WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR ANY PURPOSE.

1.F.5. Some states do not allow disclaimers of certain implied
warranties or the exclusion or limitation of certain types of damages.
If any disclaimer or limitation set forth in this agreement violates the
law of the state applicable to this agreement, the agreement shall be
interpreted to make the maximum disclaimer or limitation permitted
by the applicable state law. The invalidity or unenforceability of any
provision of this agreement shall not void the remaining provisions.

1.F.6. INDEMNITY - You agree to indemnify and hold the
Foundation, the trademark owner, any agent or employee of the
Foundation, anyone providing copies of Project Gutenberg™
electronic works in accordance with this agreement, and any
volunteers associated with the production, promotion and distribution
of Project Gutenberg™ electronic works, harmless from all liability,
costs and expenses, including legal fees, that arise directly or
indirectly from any of the following which you do or cause to occur:
(a) distribution of this or any Project Gutenberg™ work, (b)
alteration, modification, or additions or deletions to any Project
Gutenberg™ work, and (c) any Defect you cause.

Section 2. Information about the Mission of Project Gutenberg™

Project Gutenberg™ is synonymous with the free distribution of
electronic works in formats readable by the widest variety of
computers including obsolete, old, middle-aged and new computers.
It exists because of the efforts of hundreds of volunteers and
donations from people in all walks of life.

Volunteers and financial support to provide volunteers with the
assistance they need are critical to reaching Project Gutenberg™’s
goals and ensuring that the Project Gutenberg™ collection will
remain freely available for generations to come. In 2001, the Project
Gutenberg Literary Archive Foundation was created to provide a
secure and permanent future for Project Gutenberg™ and future
generations. To learn more about the Project Gutenberg Literary
Archive Foundation and how your efforts and donations can help,
see Sections 3 and 4 and the Foundation information page at
www.gutenberg.org.

Section 3. Information about the Project Gutenberg Literary Archive Foundation

The Project Gutenberg Literary Archive Foundation is a non-profit
501(c)(3) educational corporation organized under the laws of the
state of Mississippi and granted tax exempt status by the Internal
Revenue Service. The Foundation’s EIN or federal tax identification
number is 64-6221541. Contributions to the Project Gutenberg
Literary Archive Foundation are tax deductible to the full extent
permitted by U.S. federal laws and your state’s laws.

The Foundation’s business office is located at 809 North 1500 West,
Salt Lake City, UT 84116, (801) 596-1887. Email contact links and up
to date contact information can be found at the Foundation’s website
and official page at www.gutenberg.org/contact

Section 4. Information about Donations to the Project Gutenberg Literary Archive Foundation

Project Gutenberg™ depends upon and cannot survive without
widespread public support and donations to carry out its mission of
increasing the number of public domain and licensed works that can
be freely distributed in machine-readable form accessible by the
widest array of equipment including outdated equipment. Many small
donations ($1 to $5,000) are particularly important to maintaining tax
exempt status with the IRS.

The Foundation is committed to complying with the laws regulating
charities and charitable donations in all 50 states of the United
States. Compliance requirements are not uniform and it takes a
considerable effort, much paperwork and many fees to meet and
keep up with these requirements. We do not solicit donations in
locations where we have not received written confirmation of
compliance. To SEND DONATIONS or determine the status of
compliance for any particular state visit www.gutenberg.org/donate.

While we cannot and do not solicit contributions from states where
we have not met the solicitation requirements, we know of no
prohibition against accepting unsolicited donations from donors in
such states who approach us with offers to donate.

International donations are gratefully accepted, but we cannot make
any statements concerning tax treatment of donations received from
outside the United States. U.S. laws alone swamp our small staff.

Please check the Project Gutenberg web pages for current donation
methods and addresses. Donations are accepted in a number of
other ways including checks, online payments and credit card
donations. To donate, please visit: www.gutenberg.org/donate.

Section 5. General Information About Project Gutenberg™ electronic works

Professor Michael S. Hart was the originator of the Project
Gutenberg™ concept of a library of electronic works that could be
freely shared with anyone. For forty years, he produced and
distributed Project Gutenberg™ eBooks with only a loose network of
volunteer support.

Project Gutenberg™ eBooks are often created from several printed
editions, all of which are confirmed as not protected by copyright in
the U.S. unless a copyright notice is included. Thus, we do not
necessarily keep eBooks in compliance with any particular paper
edition.

Most people start at our website which has the main PG search
facility: www.gutenberg.org.

This website includes information about Project Gutenberg™,
including how to make donations to the Project Gutenberg Literary
Archive Foundation, how to help produce our new eBooks, and how
to subscribe to our email newsletter to hear about new eBooks.