Stefan Wermter Cornelius Weber
Wlodzislaw Duch Timo Honkela
Petia Koprinkova-Hristova Sven Magg
Günther Palm Alessandro E.P. Villa (Eds.)

LNCS 8681

Artificial Neural Networks
and Machine Learning –
ICANN 2014

24th International Conference on Artificial Neural Networks
Hamburg, Germany, September 15–19, 2014
Proceedings
Lecture Notes in Computer Science 8681
Commenced Publication in 1973
Founding and Former Series Editors:
Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen

Editorial Board
David Hutchison
Lancaster University, UK
Takeo Kanade
Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler
University of Surrey, Guildford, UK
Jon M. Kleinberg
Cornell University, Ithaca, NY, USA
Alfred Kobsa
University of California, Irvine, CA, USA
Friedemann Mattern
ETH Zurich, Switzerland
John C. Mitchell
Stanford University, CA, USA
Moni Naor
Weizmann Institute of Science, Rehovot, Israel
Oscar Nierstrasz
University of Bern, Switzerland
C. Pandu Rangan
Indian Institute of Technology, Madras, India
Bernhard Steffen
TU Dortmund University, Germany
Demetri Terzopoulos
University of California, Los Angeles, CA, USA
Doug Tygar
University of California, Berkeley, CA, USA
Gerhard Weikum
Max Planck Institute for Informatics, Saarbruecken, Germany
Stefan Wermter Cornelius Weber
Wlodzislaw Duch Timo Honkela
Petia Koprinkova-Hristova Sven Magg
Günther Palm Alessandro E.P. Villa (Eds.)

Artificial Neural Networks
and Machine Learning –
ICANN 2014

24th International Conference
on Artificial Neural Networks
Hamburg, Germany, September 15–19, 2014
Proceedings
Volume Editors
Stefan Wermter
Cornelius Weber
Sven Magg
University of Hamburg, Hamburg, Germany
E-mail: {wermter, weber, magg}@informatik.uni-hamburg.de
Wlodzislaw Duch
Nicolaus Copernicus University, Torun, Poland
E-mail: wduch@is.umk.pl
Timo Honkela
University of Helsinki, Helsinki, Finland
E-mail: timo.honkela@helsinki.fi
Petia Koprinkova-Hristova
Bulgarian Academy of Sciences, Sofia, Bulgaria
E-mail: pkoprinkova@bas.bg
Günther Palm
University of Ulm, Ulm, Germany
E-mail: guenther.palm@uni-ulm.de
Alessandro E.P. Villa
University of Lausanne, Lausanne, Switzerland
E-mail: alessandro.villa@unil.ch

ISSN 0302-9743 e-ISSN 1611-3349


ISBN 978-3-319-11178-0 e-ISBN 978-3-319-11179-7
DOI 10.1007/978-3-319-11179-7
Springer Cham Heidelberg New York Dordrecht London
Library of Congress Control Number: 2014947446
LNCS Sublibrary: SL 1 – Theoretical Computer Science and General Issues
© Springer International Publishing Switzerland 2014
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of
the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation,
broadcasting, reproduction on microfilms or in any other physical way, and transmission or information
storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology
now known or hereafter developed. Exempted from this legal reservation are brief excerpts in connection
with reviews or scholarly analysis or material supplied specifically for the purpose of being entered and
executed on a computer system, for exclusive use by the purchaser of the work. Duplication of this publication
or parts thereof is permitted only under the provisions of the Copyright Law of the Publisher’s location,
in its current version, and permission for use must always be obtained from Springer. Permissions for use
may be obtained through RightsLink at the Copyright Clearance Center. Violations are liable to prosecution
under the respective Copyright Law.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
While the advice and information in this book are believed to be true and accurate at the date of publication,
neither the authors nor the editors nor the publisher can accept any legal responsibility for any errors or
omissions that may be made. The publisher makes no warranty, express or implied, with respect to the
material contained herein.
Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India
Printed on acid-free paper
Springer is part of Springer Science+Business Media (www.springer.com)
Preface

The International Conference on Artificial Neural Networks (ICANN) is the annual
flagship conference of the European Neural Network Society (ENNS). Its
wide scope in neural networks ranges from machine learning algorithms to models
of real nervous systems. ICANN aims at bringing together researchers from dif-
ferent research fields, such as computer science, neuroscience, cognitive science,
and engineering. Further aims are to address new challenges, share solutions,
and discuss future research directions toward developing more intelligent artifi-
cial systems and increasing our understanding of neural and cognitive processes
in the brain.
The ICANN series of conferences was initiated in 1991 and soon became the
major European conference in its field, with experts coming from several conti-
nents. The 24th ICANN was held during 15–19 September 2014 at the University
of Hamburg. The hosts were the University of Hamburg and its Knowledge Tech-
nology Institute (http://www.informatik.uni-hamburg.de/WTM/).
The conference attracted contributions from among the most internation-
ally established researchers in the neural network community. The six keynote
speakers in 2014 covered a wide spectrum: Christopher M. Bishop, expert in
machine learning; Jun Tani, expert in recurrent neural networks; Paul F.M.J.
Verschure, expert in autonomous systems; Yann LeCun, expert in neural vision;
Barbara Hammer, expert in computational intelligence; Kevin N. Gurney, expert
in computational neuroscience. We also acknowledge support from the Körber
Foundation for a special session on “Human-Machine Interaction”.
A total of 173 papers was submitted to the ICANN 2014 conference. A large
Program Committee, including accepted authors from recent ICANN conferences,
performed altogether 744 reviews, delivering an average of 4.3 reviews per paper.
This helped to obtain a reliable evaluation score for each paper, which was computed
by the Springer Online Conference Service (OCS) by averaging the reviewers' ratings
and taking into account the reviewers' confidences.

[Figure: histogram of the number of papers (# Papers) over the OCS score (about 0.5 to 4), distinguishing accepted from rejected papers]
Papers were sorted with respect to their scores (see figure) and 108 papers with
a score of 2.0 or higher were accepted. Furthermore, the multiple professional
reviews delivered valuable feedback to all authors.
The conference program featured 24 sessions, which contained 3 talks each,
and which were arranged in 2 parallel tracks. There were 2 poster sessions with
33 posters and 2 live demonstrations of research results. Talks and posters were
categorized into topical areas, providing the titles for the conference sessions
and for the chapters in this proceedings volume. Chapters are ordered roughly
in chronological order of the conference sessions.

We would like to thank all the participants for their contribution to the
conference program and for their contribution to these proceedings. Many thanks
go to the local organizers for their support and hospitality. We also express our
sincere thanks to all active reviewers for their assistance in the review procedures
and their valuable comments and recommendations.

July 2014 Stefan Wermter


Cornelius Weber
Wlodzislaw Duch
Timo Honkela
Petia Koprinkova-Hristova
Sven Magg
Günther Palm
Alessandro E.P. Villa
Organization

General Chair
Stefan Wermter Hamburg, Germany

Program Co-chairs
Wlodzislaw Duch Torun, Poland, ENNS Past-President
Timo Honkela Helsinki, Finland
Petia Koprinkova-Hristova Sofia, Bulgaria
Günther Palm Ulm, Germany
Alessandro E.P. Villa Lausanne, Switzerland, ENNS President
Cornelius Weber Hamburg, Germany

Local Organizing Committee Chairs (Hamburg, Germany)


Sven Magg
Johannes Bauer
Jorge Dávila-Chacón
Stefan Heinrich
Doreen Jirak
Katja Kösters
Erik Strahl

Program Committee and Reviewers


Shigeo Abe Kobe, Hyogo, Japan
Ajith Abraham Auburn Washington, USA
Antti Airola Turku, Finland
Frederic Alexandre Nancy, France
Luı́s Alexandre Covilhã, Portugal
Vedat Ali Özkan Ankara, Turkey
Cesare Alippi Milano, Italy
Parimala Alva Hertfordshire, UK
Mauricio Alvarez Pereira, Colombia
Boudour Ammar Cherif Sfax, Tunisia
Peter Andras Newcastle, UK
Plamen Angelov Lancaster, UK
Bruno Apolloni Milano, Italy
Ricardo Araujo Pernambucano, Brazil

Botond Attila Bócsi Cluj-Napoca, Romania


Nabiha Azizi Annaba, Algeria
George Azzopardi Groningen, The Netherlands
Upul Bandara Kelaniya, Sri Lanka
Pablo Barros Pernambuco, Brazil
Ute Bauer-Wersing Frankfurt, Germany
Benjamin Bedregal Rio Grande, Brazil
Sven Behnke Bonn, Germany
Ulysses Bernardet Vancouver, Canada
Byron Bezerra Pernambuco, Brazil
Monica Bianchini Siena, Italy
Marcin Blachnik Silesian, Poland
Martin Bogdan Leipzig, Germany
Sander Bohte Amsterdam, The Netherlands
Abdelkrim Boukabou Jijel, Algeria
Antônio Braga Minas Gerais, Brazil
Ivo Bukovsky Prague, Czech Republic
Nathan Burles York, UK
Thomas Burwick Frankfurt, Germany
Roberto Calandra Darmstadt, Germany
Francesco Camastra Naples, Italy
Angelo Cangelosi Plymouth, UK
Anne Canuto Natal, Brazil
Snaider Carrillo Belfast, UK
Marta Castellano Osnabrück, Germany
Antonio Chella Palermo, Italy
Farouk Chérif Sousse, Tunisia
KyungHyun Cho Espoo, Finland
Jorg Conradt München, Germany
Paulo Cortez Braga, Portugal
Nanyi Cui London, UK
Robertas Damaševicius Kaunas, Lithuania
Jorge Dávila Chacón University of Hamburg, Germany
Andre de Carvalho Sao Paulo, Brazil
Isaac de Lima Oliveira Filho Rio Grande, Brazil
Giseli de Sousa Hertfordshire, UK
Berat Denizdurduran Lausanne, Switzerland
Alessandro Di Nuovo Plymouth, UK
Morgado Dias Funchal, Portugal
Uri Dinur Beer-Sheva, Israel
Nhat-Quang Doan Paris, France
Simona Doboli New York, USA
Sergey Dolenko Moscow, Russia
Manuel J. Domı́nguez-Morales Seville, Spain

Jose Dorronsoro Madrid, Spain


Hiroshi Dozono Saga, Japan
Ivo Draganov Sofia, Bulgaria
Jan Drugowitsch Geneva, Switzerland
Wlodzislaw Duch Torun, Poland
Boris Duran Skövde, Sweden
Mark Elshaw Coventry, UK
Tolga Ensari Istanbul, Turkey
Péter Érdi Budapest, Hungary
Igor Farkas Bratislava, Slovak Republic
Bruno Fernandes UPernambuco, Brazil
Flora Ferreira Braga, Portugal
Maurizio Fiasché Milano, Italy
Maurizio Filippone Glasgow, UK
Mauro Gaggero Genoa, Italy
Christophe Garcia Lyon, France
Petia Georgieva Aveiro, Portugal
Marc-Oliver Gewaltig Lausanne, Switzerland
Alessandro Ghio Genoa, Italy
Andrej Gisbrecht Bielefeld, Germany
Michele Giugliano Antwerpen, Belgium
Giorgio Gnecco Lucca, Italy
André R. Gonçalves Campinas São Paulo, Brazil
Perry Groot Nijmegen, The Netherlands
André Grüning Guildford, UK
Beata Grzyb Plymouth, UK
German Gutierrez Madrid, Spain
Tatiana V. Guy Prague, Czech Republic
Barbara Hammer Bielefeld, Germany
Kazuyuki Hara Tokyo, Japan
Yoichi Hayashi Tokyo, Japan
Stefan Heinrich Hamburg, Germany
Claus Hilgetag Hamburg, Germany
Xavier Hinaut Lyon, France
Hideitsu Hino Tsukuba, Japan
Kim-Fong Ho Kuala Lumpur, Malaysia
Zeng-Guang Hou Beijing, China
Emmanouil Hourdakis Crete, Greece
Xiaolin Hu Beijing, China
Amir Hussain Stirling, UK
Christian Huyck Middlesex, UK
Aapo Hyvarinen Helsinki, Finland
Lazaros Iliadis Orestiada, Greece
Hunor S. Jakab Cluj-Napoca, Romania

Liangxiao Jiang Wuhan, China


Jenia Jitsev Köln, Germany
Vacius Jusas Kaunas, Lithuania
Ryotaro Kamimura Tokyo, Japan
Juho Kannala Oulu, Finland
Iakov Karandashev Moscow, Russia
Juha Karhunen Espoo, Finland
Nikola Kasabov Auckland, New Zealand
Enkelejda Kasneci Tübingen, Germany
Gjergji Kasneci Potsdam, Germany
Yohannes Kassahun Bremen, Germany
Naimul Mefraz Khan Toronto, Canada
Abbas Khosravi Melbourne, Australia
DaeEun Kim Seoul, South Korea
Edson Kitani Sao Paulo, Brazil
Katsunori Kitano Shiga, Japan
Jyri Kivinen Edinburgh, UK
Jens Kleesiek Heidelberg, Germany
Tibor Kmet Nitra, Slovakia
Mario Koeppen Kawazu, Japan
Denis Kolev Lancaster, Russia
Stefanos Kollias Athens, Greece
Ekaterina Komendantskaya Dundee, UK
Peter König Osnabrück, Germany
Petia Koprinkova-Hristova Sofia, Bulgaria
Irena Koprinska Sydney, Australia
Pavel Kromer Ostrava, Czech Republic
Vera Kurkova Prague, Czech Republic
Giancarlo La Camera New York, USA
Jong-Seok Lee Yonsei University, South Korea
Grégoire Lefebvre Orange Labs, France
Diego Liberati Milano, Italy
Aristidis Likas Ioannina, Greece
Priscila Lima Rio de Janeiro, Brazil
Alejandro Linares-Barranco Seville, Spain
Jindong Liu London, UK
Oliver Lomp Bochum, Germany
Matthew Luciw Lugano, Switzerland
Marcin Luckner Warsaw, Poland
Jörg Lücke Oldenburg, Germany
Teresa Ludermir Recife, Brazil
Zhiyuan Luo London, UK
Jesus M. Lopez Valencia, Spain

Magomed Malsagov Moscow, Russia


Franck Mamalet Orange Labs, France
Agata Manolova Sofia, Bulgaria
Juan Manuel Moreno Madrid, Spain
Vitor Manuel Dinis Pereira Lisbon, Portugal
Francesco Marcelloni Pisa, Italy
Thomas Martinetz Lübeck, Germany
Jonathan Masci Lugano, Switzerland
Tomasz Maszczyk Torun, Poland
Yoshitatsu Matsuda Tokyo, Japan
Maurizio Mattia Rome, Italy
Michal Matuszak Torun, Poland
Stefano Melacci Siena, Italy
Bjoern Menze Zurich, Switzerland
Norbert Michael Mayer Minxiong, Taiwan
Alessio Micheli Pisa, Italy
Kazushi Mimura Hiroshima, Japan
Valeri Mladenov Sofia, Bulgaria
Adrian Moise Ploiesti, Romania
Francesco C. Morabito Reggio Calabria, Italy
Tetsuro Morimura Tokyo, Japan
Kendi Muchungi Guildford, UK
Alberto Muñoz Madrid, Spain
Shinichi Nakajima Tokyo, Japan
Ryohei Nakano Kasugai, Aichi, Japan
Roman Neruda Prague, Czech Republic
Nikolay Neshov Sofia, Bulgaria
Shun Nishide Kyoto University, Japan
Jarley Nobrega Pernambuco, Brazil
Klaus Obermayer Berlin, Germany
Erkki Oja Espoo, Finland
Hiroshi Okamoto Wako, Saitama, Japan
Simon O’Keefe York, UK
Adriano Oliveira Recife, Brazil
Ayad Omar Reims, France
Luca Oneto Genova, Italy
Mohamed Oubbati Ulm, Germany
Stefanos Ougiaroglou Thessaloniki, Greece
Seiichi Ozawa Kobe, Hyogo, Japan
Joonas Paalasmaa Helsinki, Finland
Carlos Padilha Rio Grande do Sul, Brazil
Tapio Pahikkala Turku, Finland
Günther Palm Ulm, Germany
Christo Panchev Sunderland, UK

Charis Papadopoulos Ioannina, Greece


Eli Parviainen Espoo, Finland
Jaakko Peltonen Espoo, Finland
Tonatiuh Pena Centeno Greifswald, Germany
Ramin Pichevar Los Angeles, USA
Gadi Pinkas Or Yehuda, Israel
Gordon Pipa Osnabrück, Germany
Vincenzo Piuri Milano, Italy
Paavo Pylkkäanen Skövde, Sweden & Helsinki, Finland
Tapani Raiko Espoo, Finland
Felix Reinhart Bielefeld, Germany
Marina Resta Genova, Italy
Jaldert Rombouts Amsterdam, The Netherlands
Nicolas Rougier Bordeaux, France
Manuel Roveri Milano, Italy
Alessandro Rozza Napoli, Italy
Araceli Sanchis Madrid, Spain
Yulia Sandamirskaya Bochum, Germany
Marcello Sanguineti Genova, Italy
Roberto Santana University of the Basque Country, Spain
Jorge Santos Porto, Portugal
Yasuomi Sato Kyushu Institute of Technology, Japan
Hannes Schulz University Bonn, Germany
Friedhelm Schwenker Ulm, Germany
Basabdatta Sen Bhattacharya Lincoln, UK
Neslihan Serap Sengor Istanbul, Turkey
Sohan Seth Espoo, Finland
Denis Sheynikhovich Paris, France
Hiroshi Shimodaira Edinburgh, UK
Keem Siah Yap Pahang, Malaysia
Jiri Sima Prague, Czech Republic
Leslie Smith Stirling, UK
Alessandro Sperduti Padova, Italy
Michael Spratling King’s College London, UK
Martin Spüler Tübingen, Germany
Volker Steuber Hertfordshire, UK
Johan Suykens Leuven, Belgium
Martin Takác Bratislava, Slovakia
Jarno M.A. Tanskanen Tampere, Finland
Anastasios Tefas Thessaloniki, Greece
Margarita Terziyska Plovdiv, Bulgaria
Yancho Todorov Sofia, Bulgaria
Michel Tokic University of Ulm, Germany
Jochen Triesch Frankfurt, Germany
Nandita Tripathi Sunderland, UK
Francesco Trovo Milano, Italy

Athanasios Tsadiras Thessaloniki, Greece


Nikolaos Tziortziotis Ioannina, Greece
Jean-Philippe Urban Mulhouse Cedex, France
Tatiana V. Guy Prague, Czech Republic
Giorgio Valentini Milano, Italy
Sergio Varona-Moya Marbella, Spain
Eleni Vasilaki Sheffield, UK
Tommi Vatanen Espoo, Finland
Michel Verleysen Louvain-la-Neuve, France
Alessandro E.P. Villa Lausanne, Switzerland
Nathalie Villa-Vialaneix Paris, France
Shinji Watanabe Tokyo, Japan
Cornelius Weber Hamburg, Germany
Roseli Wedemann Rio de Janeiro, Brazil
Hui Wei Shanghai, China
Thomas Wennekers Plymouth, UK
Heiko Wersing Bielefeld, Germany
Tadeusz Wieczorek Silesian, Poland
Janet Wiles Brisbane, Australia
Rolf Würtz Bochum, Germany
Francisco Zamora-Martinez Valencia, Spain
Rafal Zdunek Wroclaw, Poland
He Zhang Espoo, Finland
John Z. Zhang Lethbridge, Canada
Junpei Zhong Hamburg, Germany
Tom Ziemke Skövde, Sweden
Table of Contents

Recurrent Networks

Sequence Learning
Dynamic Cortex Memory: Enhancing Recurrent Neural Networks
for Gradient-Based Sequence Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Sebastian Otte, Marcus Liwicki, and Andreas Zell
Learning and Recognition of Multiple Fluctuating Temporal Patterns
Using S-CTRNN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Shingo Murata, Hiroaki Arie, Tetsuya Ogata, Jun Tani,
and Shigeki Sugano
Regularized Recurrent Neural Networks for Data Efficient Dual-Task
Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Sigurd Spieckermann, Siegmund Düll, Steffen Udluft,
and Thomas Runkler

Echo State Networks


On-line Training of ESN and IP Tuning Effect . . . . . . . . . . . . . . . . . . . . . . . 25
Petia Koprinkova-Hristova
An Incremental Approach to Language Acquisition:
Thematic Role Assignment with Echo State Networks . . . . . . . . . . . . . . . . 33
Xavier Hinaut and Stefan Wermter
Memory Capacity of Input-Driven Echo State Networks at the Edge
of Chaos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Peter Barančok and Igor Farkaš
Adaptive Critical Reservoirs with Power Law Forgetting of Unexpected
Input Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Norbert Michael Mayer

Recurrent Network Theory


Interactive Evolving Recurrent Neural Networks Are Super-Turing
Universal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Jérémie Cabessa and Alessandro E.P. Villa
Attractor Metadynamics in Adapting Neural Networks . . . . . . . . . . . . . . . 65
Claudius Gros, Mathias Linkerhand, and Valentin Walther

Basic Feature Quantities of Digital Spike Maps . . . . . . . . . . . . . . . . . . . . . . 73


Hiroki Yamaoka, Narutoshi Horimoto, and Toshimichi Saito

Competitive Learning and Self-Organisation


Discriminative Fast Soft Competitive Learning . . . . . . . . . . . . . . . . . . . . . . . 81
Frank-Michael Schleif

Human Action Recognition with Hierarchical Growing Neural Gas


Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
German Ignacio Parisi, Cornelius Weber, and Stefan Wermter

Real-Time Anomaly Detection with a Growing Neural Gas . . . . . . . . . . . . 97


Nicolai Waniek, Simon Bremer, and Jörg Conradt

Classification with Reject Option Using the Self-Organizing Map . . . . . . . 105


Ricardo Sousa, Ajalmar R. da Rocha Neto, Jaime S. Cardoso,
and Guilherme A. Barreto

Clustering and Classification


A Non-parametric Maximum Entropy Clustering . . . . . . . . . . . . . . . . . . . . . 113
Hideitsu Hino and Noboru Murata

Instance Selection Using Two Phase Collaborative Neighbor


Representation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
Fadi Dornaika and I. Kamal Aldine

Global Metric Learning by Gradient Descent . . . . . . . . . . . . . . . . . . . . . . . . 129


Jens Hocke and Thomas Martinetz

Leaving Local Optima in Unsupervised Kernel Regression . . . . . . . . . . . . . 137


Daniel Lückehe and Oliver Kramer

Trees and Graphs


An Algorithm for Directed Graph Estimation . . . . . . . . . . . . . . . . . . . . . . . . 145
Hideitsu Hino, Atsushi Noda, Masami Tatsuno, Shotaro Akaho,
and Noboru Murata

Merging Strategy for Local Model Networks Based on the Lolimot


Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
Torsten Fischer and Oliver Nelles

Factor Graph Inference Engine on the SpiNNaker Neural Computing


System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
Indar Sugiarto and Jörg Conradt

High-Dimensional Binary Pattern Classification by Scalar Neural


Network Tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
Vladimir Kryzhanovsky, Magomed Malsagov,
Juan Antonio Clares Tomas, and Irina Zhelavskaya

Human-Machine Interaction
Human Activity Recognition on Smartphones with Awareness of Basic
Activities and Postural Transitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
Jorge-Luis Reyes-Ortiz, Luca Oneto, Alessandro Ghio, Albert Samá,
Davide Anguita, and Xavier Parra

sNN-LDS: Spatio-temporal Non-negative Sparse Coding for Human


Action Recognition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
Thomas Guthier, Adrian Šošić, Volker Willert, and Julian Eggert

Interactive Language Understanding with Multiple Timescale Recurrent


Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
Stefan Heinrich and Stefan Wermter

A Neural Dynamic Architecture Resolves Phrases about Spatial


Relations in Visual Scenes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
Mathis Richter, Jonas Lins, Sebastian Schneegans,
and Gregor Schöner

Chinese Image Character Recognition Using DNN and Machine


Simulated Training Samples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
Jinfeng Bai, Zhineng Chen, Bailan Feng, and Bo Xu

Polyphonic Music Generation by Modeling Temporal Dependencies


Using a RNN-DBN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
Kratarth Goel, Raunaq Vohra, and J.K. Sahoo

On Improving the Classification Capability of Reservoir Computing


for Arabic Speech Recognition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
Abdulrahman Alalshekmubarak and Leslie S. Smith

Neural Network Based Data Fusion for Hand Pose Recognition


with Multiple ToF Sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
Thomas Kopinski, Alexander Gepperth, Stefan Geisler,
and Uwe Handmann

Sparse Single-Hidden Layer Feedforward Network for Mapping Natural


Language Questions to SQL Queries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
Issam Hadj Laradji, Lahouari Ghouti, Faisal Saleh,
and Musab A. AlTurki

Towards Context-Dependence Eye Movements Prediction in Smart


Meeting Rooms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
Redwan Abdo A. Mohammed, Lars Schwabe, and Oliver Staadt

Deep Networks
Variational EM Learning of DSBNs with Conditional Deep Boltzmann
Machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
Xing Zhang and Siwei Lyu

Improving Deep Neural Network Performance by Reusing Features


Trained with Transductive Transference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
Chetak Kandaswamy, Luı́s M. Silva, Luı́s A. Alexandre,
Jorge M. Santos, and Joaquim Marques de Sá

From Maxout to Channel-Out:


Encoding Information on Sparse Pathways . . . . . . . . . . . . . . . . . . . . . . . . . . 273
Qi Wang and Joseph JaJa

Minimizing Computation in Convolutional Neural Networks . . . . . . . . . . . 281


Jason Cong and Bingjun Xiao

One-Shot Learning with Feedback for Multi-layered Convolutional


Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
Kunihiko Fukushima

Theory

Optimization
Row-Action Projections for Nonnegative Matrix Factorization . . . . . . . . . 299
Rafal Zdunek

Structure Perturbation Optimization for Hopfield-Type Neural


Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
Gang Yang, Xirong Li, Jieping Xu, and Qin Jin

Complex-Valued Multilayer Perceptron Search Utilizing Singular


Regions of Complex-Valued Parameter Space . . . . . . . . . . . . . . . . . . . . . . . . 315
Seiya Satoh and Ryohei Nakano

Layered Networks
Mix-Matrix Transformation Method for Max-Cut Problem . . . . . . . . . . . . 323
Iakov Karandashev and Boris Kryzhanovsky

Complexity of Shallow Networks Representing Functions with Large


Variations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
Věra Kůrková and Marcello Sanguineti
Visualizing Hierarchical Representation in a Multilayered Restricted
RBF Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
Pitoyo Hartono, Paul Hollensen, and Thomas Trappenberg

Reinforcement Learning and Action


Contingent Features for Reinforcement Learning . . . . . . . . . . . . . . . . . . . . . 347
Nathan Sprague
A Non-stationary Infinite Partially-Observable Markov Decision
Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
Sotirios P. Chatzis and Dimitrios Kosmopoulos
Tool-Body Assimilation Model Based on Body Babbling
and a Neuro-Dynamical System for Motion Generation . . . . . . . . . . . . . . . 363
Kuniyuki Takahashi, Tetsuya Ogata, Hadi Tjandra, Shingo Murata,
Hiroaki Arie, and Shigeki Sugano
A Gaussian Process Reinforcement Learning Algorithm with
Adaptability and Minimal Tuning Requirements . . . . . . . . . . . . . . . . . . . . . 371
Jonathan Strahl, Timo Honkela, and Paul Wagner
Sensorimotor Control Learning Using a New Adaptive Spiking
Neuro-Fuzzy Machine, Spike-IDS and STDP . . . . . . . . . . . . . . . . . . . . . . . . . 379
Mohsen Firouzi, Saeed Bagheri Shouraki, and Jörg Conradt
Model-Based Identification of EEG Markers for Learning Opportunities
in an Associative Learning Task with Delayed Feedback . . . . . . . . . . . . . . . 387
Felix Putze, Daniel V. Holt, Tanja Schultz, and Joachim Funke

Vision

Detection and Recognition


Structured Prediction for Object Detection in Deep Neural Networks . . . 395
Hannes Schulz and Sven Behnke
A Multichannel Convolutional Neural Network for Hand Posture
Recognition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
Pablo Barros, Sven Magg, Cornelius Weber, and Stefan Wermter
A Two-Stage Classifier Architecture for Detecting Objects
under Real-World Occlusion Patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
Marvin Struwe, Stephan Hasler, and Ute Bauer-Wersing

Towards Sparsity and Selectivity: Bayesian Learning of Restricted


Boltzmann Machine for Early Visual Features . . . . . . . . . . . . . . . . . . . . . . . 419
Hanchen Xiong, Sandor Szedmak, Antonio Rodrı́guez-Sánchez,
and Justus Piater

Invariances and Shape Recovery


Online Learning of Invariant Object Recognition in a Hierarchical
Neural Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
Markus Leßmann and Rolf P. Würtz

Incorporating Scale Invariance into the Cellular Associative Neural


Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 435
Nathan Burles, Simon O’Keefe, and James Austin

Shape from Shading by Model Inclusive Learning with Simultaneously


Estimating Reflection Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443
Yasuaki Kuroe and Hajimu Kawakami

Attention and Pose Estimation


Instance-Based Object Recognition with Simultaneous Pose Estimation
Using Keypoint Maps and Neural Dynamics . . . . . . . . . . . . . . . . . . . . . . . . . 451
Oliver Lomp, Kasim Terzić, Christian Faubel, J.M.H. du Buf,
and Gregor Schöner

How Visual Attention and Suppression Facilitate Object


Recognition? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 459
Frederik Beuth, Amirhossein Jamalian, and Fred H. Hamker

Analysis of Neural Circuit for Visual Attention Using Lognormally


Distributed Input . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 467
Yoshihiro Nagano, Norifumi Watanabe, and Atsushi Aoyama

Supervised Learning

Ensembles
Dynamic Ensemble Selection and Instantaneous Pruning for Regression
Used in Signal Calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475
Kaushala Dias and Terry Windeatt

Global and Local Rejection Option in Multi–classification Task . . . . . . . . 483


Marcin Luckner

Comparative Study of Accuracies on the Family of the Recursive-Rule


Extraction Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 491
Yoichi Hayashi, Yuki Tanaka, Shota Fujisawa, and Tomoki Izawa

Improving the Convergence Property of Soft Committee Machines


by Replacing Derivative with Truncated Gaussian Function . . . . . . . . . . . 499
Kazuyuki Hara and Kentaro Katahira

Regression
Fast Sensitivity-Based Training of BP-Networks . . . . . . . . . . . . . . . . . . . . . 507
Iveta Mrázová and Zuzana Petřı́čková

Learning Anisotropic RBF Kernels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 515


Fabio Aiolli and Michele Donini

Empowering Imbalanced Data in Supervised Learning:


A Semi-supervised Learning Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523
Bassam A. Almogahed and Ioannis A. Kakadiaris

A Geometrical Approach for Parameter Selection of Radial Basis


Functions Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531
Luiz C.B. Torres, André P. Lemos, Cristiano L. Castro,
and Antônio P. Braga

Sampling Hidden Parameters from Oracle Distribution . . . . . . . . . . . . . . . . 539


Sho Sonoda and Noboru Murata

Incremental Input Variable Selection by Block Addition and Block


Deletion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 547
Shigeo Abe

Classification
A CFS-Based Feature Weighting Approach to Naive Bayes Text
Classifiers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 555
Shasha Wang, Liangxiao Jiang, and Chaoqun Li

Local Rejection Strategies for Learning Vector Quantization . . . . . . . . . . . 563


Lydia Fischer, Barbara Hammer, and Heiko Wersing

Efficient Adaptation of Structure Metrics in Prototype-Based


Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 571
Bassam Mokbel, Benjamin Paassen, and Barbara Hammer

Improved Adaline Networks for Robust Pattern Classification . . . . . . . . . . 579


César Lincoln C. Mattos, José Daniel Alencar Santos,
and Guilherme A. Barreto

Learning under Concept Drift with Support Vector Machines . . . . . . . . . . 587


Omar Ayad
Two Subspace-Based Kernel Local Discriminant Embedding . . . . . . . . . . . 595
Fadi Dornaika and Alireza Bosaghzadeh

Dynamical Models and Time Series


Coupling Gaussian Process Dynamical Models with Product-of-Experts
Kernels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 603
Dmytro Velychko, Dominik Endres, Nick Taubert,
and Martin A. Giese
A Deep Dynamic Binary Neural Network and Its Application to Matrix
Converters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 611
Jungo Moriyasu and Toshimichi Saito
Improving Humanoid Robot Speech Recognition with Sound Source
Localisation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 619
Jorge Dávila-Chacón, Johannes Twiefel, Jindong Liu,
and Stefan Wermter
Control of UPOs of Unknown Chaotic Systems via ANN . . . . . . . . . . . . . . 627
Amine M. Khelifa and Abdelkrim Boukabou
Event-Based Visual Data Sets for Prediction Tasks in Spiking Neural
Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 635
Tingting (Amy) Gibson, Scott Heath, Robert P. Quinn,
Alexia H. Lee, Joshua T. Arnold, Tharun S. Sonti, Andrew Whalley,
George P. Shannon, Brian T. Song, James A. Henderson, and
Janet Wiles
Modeling of Chaotic Time Series by Interval Type-2 NEO-Fuzzy Neural
Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 643
Yancho Todorov and Margarita Terziyska

Neuroscience

Cortical Models
Excitation/Inhibition Patterns in a System of Coupled Cortical
Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 651
Daniel Malagarriga, Alessandro E.P. Villa,
Jordi Garcı́a-Ojalvo, and Antonio J. Pons
Self-generated Off-line Memory Reprocessing Strongly Improves
Generalization in a Hierarchical Recurrent Neural Network . . . . . . . . . . . . 659
Jenia Jitsev

Lateral Inhibition Pyramidal Neural Networks Designed by Particle


Swarm Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 667
Alessandra M. Soares, Bruno J.T. Fernandes,
and Carmelo J.A. Bastos-Filho

Bio-mimetic Path Integration Using a Self Organizing Population


of Grid Cells . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 675
Ankur Sinha and Jack Jianguo Wang

Learning Spatial Transformations Using Structured Gain-Field


Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 683
Jan Kneissler and Martin V. Butz

Line Attractors and Neural Fields


Flexible Cue Integration by Line Attraction Dynamics and Divisive
Normalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 691
Mohsen Firouzi, Stefan Glasauer, and Jörg Conradt

Learning to Look: A Dynamic Neural Fields Architecture for Gaze


Shift Generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 699
Christian Bell, Tobias Storck, and Yulia Sandamirskaya

Skeleton Model for the Neurodynamics of Visual Action


Representations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 707
Martin A. Giese

Latency-Based Probabilistic Information Processing in Recurrent


Neural Hierarchies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 715
Alexander Gepperth and Mathieu Lefort

Spiking and Single Cell Models


Factors Influencing Polychronous Group Sustainability as a Model
of Working Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 723
Panagiotis Ioannou, Matthew Casey, and André Grüning

Pre- and Postsynaptic Properties Regulate Synaptic Competition


through Spike-Timing-Dependent Plasticity . . . . . . . . . . . . . . . . . . . . . . . . . 733
Hana Ito and Katsunori Kitano

Location-Dependent Dendritic Computation in a Modeled Striatal


Projection Neuron . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 741
Youwei Zheng, Lars Schwabe, and Joshua L. Plotkin

Classifying Spike Patterns by Reward-Modulated STDP . . . . . . . . . . . . . . 749


Brian Gardner, Ioana Sporea, and André Grüning

Applications

Users and Social Technologies


Quantifying the Effect of Meaning Variation in Survey Analysis . . . . . . . . 757
Henri Sintonen, Juha Raitio, and Timo Honkela
Discovery of Spatio-Temporal Patterns from Foursquare
by Diffusion-type Estimation and ICA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 765
Yoshitatsu Matsuda, Kazunori Yamaguchi, and Ken-ichiro Nishioka
Content-Boosted Restricted Boltzmann Machine
for Recommendation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 773
Yongqi Liu, Qiuli Tong, Zhao Du, and Lantao Hu
Financial Self-Organizing Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 781
Marina Resta

Technical Systems
RatSLAM on Humanoids - A Bio-Inspired SLAM Model Adapted
to a Humanoid Robot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 789
Stefan Müller, Cornelius Weber, and Stefan Wermter
Precise Wind Power Prediction with SVM Ensemble Regression . . . . . . . . 797
Justin Heinermann and Oliver Kramer
Neural Network Approaches to Solution of the Inverse Problem
of Identification and Determination of Partial Concentrations of Salts
in Multi-component Water Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 805
Sergey Dolenko, Sergey Burikov, Tatiana Dolenko,
Alexander Efitorov, Kirill Gushchin, and Igor Persiantsev
Lateral Inhibition Pyramidal Neural Network for Detection of Optical
Defocus (Zernike Z5 ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 813
Bruno J.T. Fernandes and Diego Rativa
Development of a Dynamically Extendable SpiNNaker Chip Computing
Module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 821
Rui Araújo, Nicolai Waniek, and Jörg Conradt
The Importance of Physiological Noise Regression in High Temporal
Resolution fMRI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 829
Norman Scheel, Catie Chang, and Amir Madany Mamlouk
Development of Automated Diagnostic System for Skin Cancer:
Performance Analysis of Neural Network Learning Algorithms
for Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 837
Ammara Masood, Adel Ali Al-Jumaily, and Tariq Adnan

Demonstrations
Entrepreneurship Support Based on Mixed Bio-Artificial Neural
Network Simulator (ESBBANN) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 845
Eugenio M. Fedriani and Manuel Chaves-Maza

Live Demonstration: Real-Time Motor Rotation Frequency Detection


by Spike-Based Visual and Auditory Sensory Fusion on AER and
FPGA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 847
Antonio Rios-Navarro, Angel Jimenez-Fernandez,
Elena Cerezuela-Escudero, Manuel Rivas,
Gabriel Jimenez-Moreno, and Alejandro Linares-Barranco

Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 849


Dynamic Cortex Memory:
Enhancing Recurrent Neural Networks
for Gradient-Based Sequence Learning

Sebastian Otte¹, Marcus Liwicki², and Andreas Zell¹

¹ Cognitive Systems Group, University of Tübingen, Tübingen, Germany
² German Research Center for Artificial Intelligence, Kaiserslautern, Germany

Abstract. In this paper a novel recurrent neural network (RNN) model for
gradient-based sequence learning is introduced. The presented dynamic cortex
memory (DCM) is an extension of the well-known long short term memory (LSTM)
model. The main innovation of the DCM is the enhancement of the inner interplay
of the gates and the error carousel due to several new and trainable connections.
These connections enable a direct signal transfer from the gates to one another.
With this novel enhancement the networks are able to converge faster during
training with back-propagation through time (BPTT) than LSTM under the same
training conditions. Furthermore, DCMs yield better generalization results than
LSTMs. This behaviour is shown for different supervised problem scenarios,
including storing precise values, adding and learning a context-sensitive grammar.

Keywords: Dynamic Cortex Memory (DCM), Recurrent Neural Networks (RNN),
Neural Networks, Long Short Term Memory (LSTM).

1 Introduction

Almost two decades ago, Hochreiter et al. [6] introduced the long short term
memory (LSTM) as a novel learning architecture to solve hard long time lag
problems, such as context-sensitive grammars. The key idea of that work was
the introduction of multiplicative units, which simulate the behavior of gates
to set, read, and reset [2] the internal state of the memory cell. The internal
state is provided by a self-connected linear cell referred to as the constant error
carousel (CEC). LSTM networks have been successfully applied to various tasks, such as
speech [4] and handwriting recognition [5], optical character recognition [10], or
OCT-based lung tumor detection [9].
While in the original work some connections between the gates existed (though
not in the same structured manner as proposed in this paper), they have been
removed in later work because of training issues [3]. Note that earlier LSTM
variants were trained using a truncated gradient, while it is more common today
to use the full gradient computed with back-propagation through time (BPTT).
However, we think that there is great potential for speeding up the training and
improving the generalization abilities of the LSTM if specific gate connections
are available.

S. Wermter et al. (Eds.): ICANN 2014, LNCS 8681, pp. 1–8, 2014.
© Springer International Publishing Switzerland 2014

The main contribution of this paper is to investigate the importance of the
gate connections and to propose a novel LSTM structure called dynamic cortex
memory (DCM). The DCM contains connections between all three gates and
therefore enables collaborative learning. We show that these networks are
able to train faster and generalize better than traditional LSTM networks on
diverse problems. Our experiments include precise value storing, adding, and
learning a context-sensitive grammar.
Notably, the importance of the connections has also been investigated recently
in [1], where the whole structure of the network evolves during training.
The authors of [1] have shown that in their experiments the mutated structures
work better for formal language experiments than the common LSTM structure.
We have rerun the experiments and experienced similar results when using the
standard training parameters (see Section 3.3). However, a simple change of the
training parameters leads to significantly better generalization abilities. Thus,
LSTM and DCM generalize perfectly for such problems.

2 Dynamic Cortex Memory


During LSTM training the gates learn to handle the inner state, i.e., to write,
forget, and read it. Hereby, the gates may, at least partially, learn redundant
information. This might lead to long training times, as similar behavior has to
be learned at several gates.
The main motivation of our work is the idea of enabling the gates to share
information, such that they can learn commonly important patterns collectively
and, due to the reduced learning effort, learn particularly important patterns
earlier and, perhaps, even better. The DCM model is based on the LSTM and
establishes an inter-gate communication infrastructure by adding new and trainable
connections. In particular, a circular connection pattern connecting the gates with
one another (which we call the cortex) and a self-recurrent connection for each of
the gates (which can be seen as a very local gate state) are added. An illustration
of our DCM model is presented in Fig. 1.

2.1 Forward Pass


In the following the equations for the forward pass of a single DCM block D are
given. D may contain multiple CECs indexed by c. We assume that the neural
network consists of an input layer I, a recurrent hidden layer H (containing D),
and an output layer K. All layers are fully connected. Further, υj denotes the
input, ϕj the activation function and xj the activation of a unit j. fc gives the
input squashing function and gc the output squashing function of a particular
inner cell c. Note that the order of the equations given for the forward pass, as
well as for the backward pass in the next section, is essential to correctly compute
the activations and the gradient, respectively. Let all $x_j^t$ and $s_c^t$ at time
$t = 0$ be defined as 0.
[Figure 1: block diagram of a single DCM block: an input gate ι, a forget gate φ, and an output gate ω surround the CEC; f_c and g_c squash the cell input and output at multiplicative (Π) junctions, with connections to the input and output of the surrounding network]

Fig. 1. Illustration of the Dynamic Cortex Memory. Like in the LSTM model one or
more CECs are surrounded by three gating neurons. Each gate is linked with each
other gate in both directions (cortex) and has a self-recurrent connection (gate state).

Input Gate:
$$\upsilon_\iota^t = \sum_{i \in I} w_{i\iota}\, x_i^t + \sum_{h \in H} w_{h\iota}\, x_h^{t-1} + \sum_{c \in D} w_{c\iota}\, s_c^{t-1} + w_{\iota\iota}\, x_\iota^{t-1} + w_{\phi\iota}\, x_\phi^{t-1} + w_{\omega\iota}\, x_\omega^{t-1} \qquad (1)$$
$$x_\iota^t = \varphi_\iota(\upsilon_\iota^t) \qquad (2)$$

Forget Gate:
$$\upsilon_\phi^t = \sum_{i \in I} w_{i\phi}\, x_i^t + \sum_{h \in H} w_{h\phi}\, x_h^{t-1} + \sum_{c \in D} w_{c\phi}\, s_c^{t-1} + w_{\iota\phi}\, x_\iota^t + w_{\phi\phi}\, x_\phi^{t-1} + w_{\omega\phi}\, x_\omega^{t-1} \qquad (3)$$
$$x_\phi^t = \varphi_\phi(\upsilon_\phi^t) \qquad (4)$$

Cell Input:
$$\upsilon_c^t = \sum_{i \in I} w_{ic}\, x_i^t + \sum_{h \in H} w_{hc}\, x_h^{t-1} \qquad (5)$$

Cell State:
$$s_c^t = x_\iota^t\, f_c(\upsilon_c^t) + x_\phi^t\, s_c^{t-1} \qquad (6)$$

Output Gate:
$$\upsilon_\omega^t = \sum_{i \in I} w_{i\omega}\, x_i^t + \sum_{h \in H} w_{h\omega}\, x_h^{t-1} + \sum_{c \in D} w_{c\omega}\, s_c^{t-1} + w_{\iota\omega}\, x_\iota^t + w_{\phi\omega}\, x_\phi^t + w_{\omega\omega}\, x_\omega^{t-1} \qquad (7)$$
$$x_\omega^t = \varphi_\omega(\upsilon_\omega^t) \qquad (8)$$

Cell Output:
$$x_c^t = x_\omega^t\, g_c(s_c^t) \qquad (9)$$
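
As an illustration, a minimal NumPy sketch of one forward step of a single DCM block with one CEC, following Eqs. (1)–(9), is given below. It is only a sketch under assumptions: the weight names, the use of the logistic function for the gate activations, and the use of tanh for the squashing functions f_c and g_c are illustrative choices, not prescribed by the text.

# Minimal sketch of one DCM forward step (Eqs. 1-9), under illustrative assumptions.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dcm_forward_step(x_in, x_h_prev, s_prev, g_prev, W):
    """x_in: input activations x_i^t; x_h_prev: hidden activations x_h^{t-1};
    s_prev: cell state s_c^{t-1}; g_prev: previous gate activations ('iota',
    'phi', 'omega'); W: dict of weight vectors/scalars (names are assumptions)."""
    # Input gate, Eqs. (1)-(2): inputs, hidden state, peephole, previous gate values.
    u_iota = (W['i_iota'] @ x_in + W['h_iota'] @ x_h_prev + W['c_iota'] * s_prev
              + W['iota_iota'] * g_prev['iota'] + W['phi_iota'] * g_prev['phi']
              + W['omega_iota'] * g_prev['omega'])
    x_iota = sigmoid(u_iota)
    # Forget gate, Eqs. (3)-(4): already uses the current input gate activation.
    u_phi = (W['i_phi'] @ x_in + W['h_phi'] @ x_h_prev + W['c_phi'] * s_prev
             + W['iota_phi'] * x_iota + W['phi_phi'] * g_prev['phi']
             + W['omega_phi'] * g_prev['omega'])
    x_phi = sigmoid(u_phi)
    # Cell input and cell state update, Eqs. (5)-(6); f_c assumed to be tanh.
    u_c = W['i_c'] @ x_in + W['h_c'] @ x_h_prev
    s = x_iota * np.tanh(u_c) + x_phi * s_prev
    # Output gate, Eqs. (7)-(8): uses the current iota and phi activations.
    u_omega = (W['i_omega'] @ x_in + W['h_omega'] @ x_h_prev + W['c_omega'] * s_prev
               + W['iota_omega'] * x_iota + W['phi_omega'] * x_phi
               + W['omega_omega'] * g_prev['omega'])
    x_omega = sigmoid(u_omega)
    # Cell output, Eq. (9); g_c assumed to be tanh.
    x_c = x_omega * np.tanh(s)
    return x_c, s, {'iota': x_iota, 'phi': x_phi, 'omega': x_omega}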

2.2 Backward Pass


Here the equations for computing the gradient are given. Let further all $\delta_j^t$,
$\epsilon_c^t$ and $\zeta_c^t$ at time $t = T + 1$ be 0. Additionally, the following
helper definitions are used:
$$\epsilon_c^t \stackrel{\mathrm{def}}{=} \frac{\partial E}{\partial x_c^t}, \qquad \zeta_c^t \stackrel{\mathrm{def}}{=} \frac{\partial E}{\partial s_c^t}. \qquad (10)$$
Cell Output:
$$\epsilon_c^t = \sum_{k \in K} w_{ck}\, \delta_k^t + \sum_{h \in H} w_{ch}\, \delta_h^{t+1} \qquad (11)$$

Output Gate:
$$\delta_\omega^t = \varphi_\omega'(\upsilon_\omega^t) \Big( \sum_{c \in D} g_c(s_c^t)\, \epsilon_c^t + w_{\omega\iota}\, \delta_\iota^{t+1} + w_{\omega\phi}\, \delta_\phi^{t+1} + w_{\omega\omega}\, \delta_\omega^{t+1} \Big) \qquad (12)$$

Cell State:
$$\zeta_c^t = x_\omega^t\, g_c'(s_c^t)\, \epsilon_c^t + x_\phi^{t+1}\, \zeta_c^{t+1} + w_{c\iota}\, \delta_\iota^{t+1} + w_{c\phi}\, \delta_\phi^{t+1} + w_{c\omega}\, \delta_\omega^t \qquad (13)$$

Cell Input:
$$\delta_c^t = x_\iota^t\, f_c'(\upsilon_c^t)\, \zeta_c^t \qquad (14)$$

Forget Gate:
$$\delta_\phi^t = \varphi_\phi'(\upsilon_\phi^t) \Big( \sum_{c \in D} s_c^{t-1}\, \zeta_c^t + w_{\phi\iota}\, \delta_\iota^{t+1} + w_{\phi\phi}\, \delta_\phi^{t+1} + w_{\phi\omega}\, \delta_\omega^t \Big) \qquad (15)$$

Input Gate:
$$\delta_\iota^t = \varphi_\iota'(\upsilon_\iota^t) \Big( \sum_{c \in D} f_c(\upsilon_c^t)\, \zeta_c^t + w_{\iota\iota}\, \delta_\iota^{t+1} + w_{\iota\phi}\, \delta_\phi^t + w_{\iota\omega}\, \delta_\omega^t \Big) \qquad (16)$$
3 Experiments
In this section DCMs are compared with LSTMs on three different sequential
problems with long-term dependencies. In order to provide a fair comparison, our
general procedure for the experiments was to first look for a training configuration
that works well for the LSTM and then use exactly the same configuration
for the DCM.

3.1 Storing
In this relatively simple experiment the task is just to learn to memorize a value
at a marked position within a sequence and to keep the value until the end of the
sequence. Note that just keeping a constant value for several time steps (> 10)
and reproducing it precisely afterwards is difficult for classical RNNs.
A training sample is generated as follows. Given is a sequence over $\mathbb{R}^2$ of
length $T/2 + \mathrm{rand}(T/2)$, whereby the components of a vector at a certain
time step $t$ are denoted by $u_1^t$ and $u_2^t$. All $u_1^t$ are randomly chosen
from $[0, 1]$. One sequence index $\hat{t}$ is randomly selected and the value of
$u_2^{\hat{t}}$ is set to 1, while all other $u_2^t$ with $t \neq \hat{t}$ are 0.
The target value of the sequence is then $u_1^{\hat{t}}$.
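
For concreteness, a generator for such training samples might look as follows. This is a minimal sketch, not the authors' code; the uniform sampling of $u_1^t$, the integer-valued lengths, and the NumPy random interface are assumptions.

# Illustrative generator for the storing task described above (assumptions noted).
import numpy as np

def make_storing_sample(T=60, rng=None):
    rng = rng or np.random.default_rng()
    length = T // 2 + rng.integers(0, T // 2 + 1)   # length T/2 + rand(T/2)
    u = np.zeros((length, 2))
    u[:, 0] = rng.uniform(0.0, 1.0, size=length)    # u_1^t drawn from [0, 1]
    marked = rng.integers(0, length)                # the randomly selected index t_hat
    u[marked, 1] = 1.0                              # marker channel u_2 set to 1
    target = u[marked, 0]                           # value the network must store
    return u, target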
The tested architectures are in particular a standard LSTM, a DCM, a DCM
with no cortex and a DCM with no gate state. Each consists of two input units,
only one self-recurrent hidden block and one non-linear output unit. The training
was performed with gradient descent and a momentum term, whereby the gradient
was computed with BPTT, using 6,000 training samples generated with T = 60
and 200 training epochs.
Fig. 2 depicts the typical training behavior of all four architectures for the
given learning problem. The corresponding generalization results are given in
Table 1. By adding the gate states (DCM with no cortex), the network behaves
turbulently in the very early phase of the training. But afterwards, it converges
slightly faster than the LSTM. However, despite the lower final training error,

[Figure: training error (MSE) over 200 training epochs for DCM, DCM (no gate state), DCM (no cortex), and LSTM; the vertical axis spans roughly 0.0016 to 0.04]

Fig. 2. Training convergence of different memory cell architectures for storing

Table 1. Generalization results for the storing experiment

Network          Training error   Generalization results
                                  T = 60    T = 100   T = 1,000
LSTM             0.007            99.3 %    96.4 %    67.4 %
DCM (no cort)    0.006            99.6 %    95.6 %    60.0 %
DCM (no gs)      0.003            100 %     100 %     72 %
DCM              0.002            100 %     100 %     72.2 %
Mackle (Pauline Bradford). THE VOICE IN THE
DESERT.
Marsh (Richard). THE SEEN AND THE UNSEEN.
GARNERED.
A METAMORPHOSIS.
MARVELS AND MYSTERIES.
BOTH SIDES OF THE VEIL.
Mayall (J. W.). THE CYNIC AND THE SYREN.
Monkhouse (Allan). LOVE IN A LIFE.
Moore (Arthur). THE KNIGHT PUNCTILIOUS.
Nesbit (Mrs. Bland). THE LITERARY SENSE.
Norris (W. E.). AN OCTAVE.
Oliphant (Mrs.). THE LADY’S WALK.
SIR ROBERT’S FORTUNE.
THE TWO MARY’S.
Penny (Mrs. Frank). A MIXED MARAGE.
Phillpotts (Eden). THE STRIKING HOURS.
FANCY FREE.
Pryce (Richard). TIME AND THE WOMAN.
Randall (J.). AUNT BETHIA’S BUTTON.
Raymond (Walter). FORTUNE’S DARLING.
Rayner (Olive Pratt). ROSALBA.
Rhys (Grace). THE DIVERTED VILLAGE
Rickert (Edith). OUT OF THE CYPRESS SWAMP.
Roberton (M. H.). A GALLANT QUAKER.
Saunders (Marshall). ROSE À CHARLITTE.
Sergeant (Adeline). ACCUSED AND ACCUSER.
BARBARA’S MONEY.
THE ENTHUSIAST.
A GREAT LADY.
THE LOVE THAT OVERCAME.
THE MASTER OF BEECHWOOD.
UNDER SUSPICION.
THE YELLOW DIAMOND.
Shannon (W. F.). JIM TWELVES.
Strain (E. H.). ELMSLIE’S DRAG NET.
Stringer (Arthur). THE SILVER POPPY.
Stuart (Esmè). CHRISTALLA.
Sutherland (Duchess of). ONE HOUR AND THE NEXT.
Swan (Annie). LOVE GROWN COLD.
Swift (Benjamin). SORDON.
Tanqueray (Mrs. B. M.). THE ROYAL QUAKER.
Trafford-Taunton (Mrs. E. W.). SILENT DOMINION.
Upward (Allen). ATHELSTANE FORD.
Waineman (Paul). A HEROINE FROM FINLAND.
Watson (H. B. Marriott). THE SKIRTS OF HAPPY
CHANCE.
‘Zack.’ TALES OF DUNSTABLE WEIR.

Books for Boys and Girls


Illustrated. Crown 8vo. 3s. 6d.
The Getting Well of Dorothy. By Mrs. W. K. Clifford.
Second Edition.
The Icelander’s Sword. By S. Baring-Gould.
Only a Guard-Room Dog. By Edith E. Cuthell.
The Doctor of the Juliet. By Harry Collingwood.
Little Peter. By Lucas Malet. Second Edition.
Master Rockafellar’s Voyage. By W. Clark Russell. Third
Edition.
The Secret of Madame de Monluc. By the Author of “Mdlle.
Mori.”
Syd Belton: Or, the Boy who would not go to Sea. By G.
Manville Fenn.
The Red Grange. By Mrs. Molesworth.
A Girl of the People. By L. T. Meade. Second Edition.
Hepsy Gipsy. By L. T. Meade. 2s. 6d.
The Honourable Miss. By L. T. Meade. Second Edition.
There was once a Prince. By Mrs. M. E. Mann.
When Arnold comes Home. By Mrs. M. E. Mann.

The Novels of Alexandre Dumas


Price 6d. Double Volumes, 1s.
The Three Musketeers. With a long Introduction by Andrew
Lang. Double volume.
The Prince of Thieves. Second Edition.
Robin Hood. A Sequel to the above.
The Corsican Brothers.
Georges.
Crop-Eared Jacquot; Jane; Etc.
Twenty Years After. Double volume.
Amaury.
The Castle of Eppstein.
The Snowball, and Sultanetta.
Cecile; or, The Wedding Gown.
Acté.
The Black Tulip.
The Vicomte de Bragelonne.
Part I. Louise de la Vallière. Double Volume.
Part II. The Man in the Iron Mask. Double Volume.
The Convict’s Son.
The Wolf-Leader.
Nanon; or, The Women’s War. Double volume.
Pauline; Murat; and Pascal Bruno.
The Adventures of Captain Pamphile.
Fernande.
Gabriel Lambert.
Catherine Blum.
The Chevalier D’Harmental. Double volume.
Sylvandire.
The Fencing Master.
The Reminiscences of Antony.
Conscience.
Pere La Ruine.
Henri of Navarre. The second part of Queen Margot.
The Great Massacre. The first part of Queen Margot.
The Wild Duck Shooter.

Illustrated Edition.
Demy 8vo. Cloth.
The Three Musketeers. Illustrated in Colour by Frank
Adams. 2s. 6d.
The Prince of Thieves. Illustrated in Colour by Frank
Adams. 2s.
Robin Hood the Outlaw. Illustrated in Colour by Frank
Adams. 2s.
The Corsican Brothers. Illustrated in Colour by A. M.
M’Lellan. 1s. 6d.
The Wolf-Leader. Illustrated in Colour by Frank Adams.
1s. 6d.
Georges. Illustrated in Colour by Munro Orr. 2s.
Twenty Years After. Illustrated in Colour by Frank
Adams. 3s.
Amaury. Illustrated in Colour by Gordon Browne. 2s.
The Snowball, and Sultanetta. Illustrated in Colour by
Frank Adams. 2s.
The Vicomte de Bragelonne. Illustrated in Colour by
Frank Adams.
Part I. Louise de la Vallière. 3s.
Part II. The Man in the Iron Mask. 3s.
Crop-Eared Jacquot; Jane; Etc. Illustrated in Colour by
Gordon Browne. 2s.
The Castle of Eppstein. Illustrated in Colour by Stewart
Orr. 1s. 6d.
Acté. Illustrated in Colour by Gordon Browne. 1s. 6d.
Cecile; or, The Wedding Gown. Illustrated in Colour by
D. Murray Smith. 1s. 6d.
The Adventures of Captain Pamphile. Illustrated in
Colour by Frank Adams. 1s. 6d.

Methuen’s Sixpenny Books


Austen (Jane). PRIDE AND PREJUDICE.
Bagot (Richard). A ROMAN MYSTERY.
Balfour (Andrew). BY STROKE OF SWORD.
Baring-Gould (S.). FURZE BLOOM.
CHEAP JACK ZITA.
KITTY ALONE. URITH.
THE BROOM SQUIRE.
IN THE ROAR OF THE SEA.
NOÉMI.
A BOOK OF FAIRY TALES. Illustrated.
LITTLE TU’PENNY.
THE FROBISHERS.
Barr (Robert). JENNIE BAXTER, JOURNALIST.
IN THE MIDST OF ALARMS.
THE COUNTERS TEKLA.
THE MUTABLE MANY.
Benson (E. P.). DODO.
Brontë (Charlotte). SHIRLEY.
Brownell (C. L.). THE HEART OF JAPAN.
Burton (J. Bloundelle). ACROSS THE SALT SEAS.
Caffyn (Mrs)., (‘Iota’). ANNE MAULEVERER.
*Capes (Bernard). THE LAKE OF WINE.
Clifford (Mrs. W. K.). A FLASH OF SUMMER.
MRS. KEITH’S CRIME.
Connell (F. Norreys). THE NIGGER KNIGHTS.
Corbett (Julian). A BUSINESS IN GREAT WATERS.
Croker (Mrs. B. M.). PEGGY OF THE BARTONS.
A STATE SECRET.
ANGEL.
JOHANNA.
Dante (Alighieri). THE VISION OF DANTE (CARY).
Doyle (A. Conan). ROUND THE RED LAMP.
Duncan (Sara Jeannette). A VOYAGE OF
CONSOLATION.
THOSE DELIGHTFUL AMERICANS.
Eliot (George). THE MILL ON THE FLOSS.
Findlater (Jane H.). THE GREEN GRAVES OF
BALGOWRIE.
Gallon (Tom). RICKERBY’S FOLLY.
Gaskell (Mrs.). CRANFORD.
MARY BARTON.
NORTH AND SOUTH.
Gerard (Dorothea). HOLY MATRIMONY.
THE CONQUEST OF LONDON.
MADE OF MONEY.
Gissing (George). THE TOWN TRAVELLER.
THE CROWN OF LIFE.
Glanville (Ernest). THE INCA’S TREASURE.
THE KLOOF BRIDE.
Gleig (Charles). HUNTER’S CRUISE.
Grimm (The Brothers). GRIMM’S FAIRY TALES.
Illustrated.
Hope (Anthony). A MAN OF MARK.
A CHANGE OF AIR.
THE CHRONICLES OF COUNT ANTONIO.
PHROSO.
THE DOLLY DIALOGUES.
Hornung (E. W.). DEAD MEN TELL NO TALES.
Ingraham (J. H.). THE THRONE OF DAVID.
Le Queux (W.). THE HUNCHBACK OF WESTMINSTER.
Levett-Yeats (S. K.). THE TRAITOR’S WAY.
Linton (E. Lynn). THE TRUE HISTORY OF JOSHUA
DAVIDSON.
Lyall (Edna). DERRICK VAUGHAN.
Malet (Lucas). THE CARISSIMA.
A COUNSEL OF PERFECTION.
Mann (Mrs. M. E.). MRS. PETER HOWARD.
A LOST ESTATE.
THE CEDAR STAR.
Marchmont (A. W.). MISER HOADLEY’S SECRET.
A MOMENT’S ERROR.
Marryat (Captain). PETER SIMPLE.
JACOB FAITHFUL.
Marsh (Richard). THE TWICKENHAM PEERAGE.
THE GODDESS.
THE JOSS.
Mason (A. E. W.). CLEMENTINA.
Mathers (Helen). HONEY.
GRIFF OF GRIFFITHSCOURT.
SAM’S SWEETHEART.
Meade (Mrs. L. T.). DRIFT.
Mitford (Bertram). THE SIGN OF THE SPIDER.
Montresor (F. F.). THE ALIEN.
Moore (Arthur). THE GAY DECEIVERS.
Morrison (Arthur). THE HOLE IN THE WALL.
Nesbit (E.). THE RED HOUSE.
Norris (W. E.). HIS GRACE.
GILES INGILBY.
THE CREDIT OF THE COUNTY.
LORD LEONARD.
MATTHEW AUSTIN.
CLARISSA FURIOSA.
Oliphant (Mrs.). THE LADY’S WALK.
SIR ROBERTS FORTUNE.
THE PRODIGALS.
Oppenheim (E. Phillips). MASTER OF MEN.
Parker (Gilbert). THE POMP OF THE LAVILETTES.
WHEN VALMOND CAME TO PONTIAC.
THE TRAIL OF THE SWORD.
Pemberton (Max). THE FOOTSTEPS OF A THRONE.
I CROWN THEE KING.
Phillpotts (Eden). THE HUMAN BOY.
CHILDREN OF THE MIST.
Ridge (W. Pett). A SON OF THE STATE.
LOST PROPERTY.
GEORGE AND THE GENERAL.
Russell (W. Clark). A MARRIAGE AT SEA.
ABANDONED.
MY DANISH SWEETHEART.
Sergeant (Adeline). THE MASTER OF BEECHWOOD.
BARBARA’S MONEY.
THE YELLOW DIAMOND.
Surtees (R. S.). HANDLEY CROSS. Illustrated.
MR. SPONGE’S SPORTING TOUR. Illustrated.
ASK MAMMA. Illustrated.
Valentine (Major E. S.). VELDT AND LAAGER.
Walford (Mrs. L. B.). MR. SMITH.
THE BABY’S GRANDMOTHER.
Wallace (General Lew). BEN-HUR.
THE FAIR GOD.
Watson (H. B. Marriot). THE ADVENTURERS.
Weekes (A. B.). PRISONERS OF WAR.
Wells (H. G.). THE STOLEN BACILLUS.
White (Percy). A PASSIONATE PILGRIM.
TRANSCRIBER’S NOTES
1. Silently corrected obvious typographical errors and
variations in spelling.
2. Retained archaic, non-standard, and uncertain spellings
as printed.
3. Re-indexed footnotes using numbers.
4. The music files are the music transcriber’s interpretation
of the printed notation and are placed in the public
domain.
*** END OF THE PROJECT GUTENBERG EBOOK A YEAR IN
RUSSIA ***

Updated editions will replace the previous one—the old editions


will be renamed.

Creating the works from print editions not protected by U.S.


copyright law means that no one owns a United States copyright
in these works, so the Foundation (and you!) can copy and
distribute it in the United States without permission and without
paying copyright royalties. Special rules, set forth in the General
Terms of Use part of this license, apply to copying and
distributing Project Gutenberg™ electronic works to protect the
PROJECT GUTENBERG™ concept and trademark. Project
Gutenberg is a registered trademark, and may not be used if
you charge for an eBook, except by following the terms of the
trademark license, including paying royalties for use of the
Project Gutenberg trademark. If you do not charge anything for
copies of this eBook, complying with the trademark license is
very easy. You may use this eBook for nearly any purpose such
as creation of derivative works, reports, performances and
research. Project Gutenberg eBooks may be modified and
printed and given away—you may do practically ANYTHING in
the United States with eBooks not protected by U.S. copyright
law. Redistribution is subject to the trademark license, especially
commercial redistribution.

START: FULL LICENSE


THE FULL PROJECT GUTENBERG LICENSE
PLEASE READ THIS BEFORE YOU DISTRIBUTE OR USE THIS WORK

To protect the Project Gutenberg™ mission of promoting the


free distribution of electronic works, by using or distributing this
work (or any other work associated in any way with the phrase
“Project Gutenberg”), you agree to comply with all the terms of
the Full Project Gutenberg™ License available with this file or
online at www.gutenberg.org/license.

Section 1. General Terms of Use and


Redistributing Project Gutenberg™
electronic works
1.A. By reading or using any part of this Project Gutenberg™
electronic work, you indicate that you have read, understand,
agree to and accept all the terms of this license and intellectual
property (trademark/copyright) agreement. If you do not agree to
abide by all the terms of this agreement, you must cease using
and return or destroy all copies of Project Gutenberg™
electronic works in your possession. If you paid a fee for
obtaining a copy of or access to a Project Gutenberg™
electronic work and you do not agree to be bound by the terms
of this agreement, you may obtain a refund from the person or
entity to whom you paid the fee as set forth in paragraph 1.E.8.

1.B. “Project Gutenberg” is a registered trademark. It may only


be used on or associated in any way with an electronic work by
people who agree to be bound by the terms of this agreement.
There are a few things that you can do with most Project
Gutenberg™ electronic works even without complying with the
full terms of this agreement. See paragraph 1.C below. There
are a lot of things you can do with Project Gutenberg™
electronic works if you follow the terms of this agreement and
help preserve free future access to Project Gutenberg™
electronic works. See paragraph 1.E below.

You might also like