
Studies in Computational Intelligence 856

Boris Kryzhanovsky
Witali Dunin-Barkowski
Vladimir Redko
Yury Tiumentsev
Editors

Advances in Neural
Computation, Machine
Learning, and
Cognitive Research III
Selected Papers from the XXI
International Conference on
Neuroinformatics, October 7–11, 2019,
Dolgoprudny, Moscow Region, Russia
Studies in Computational Intelligence

Volume 856

Series Editor
Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland
The series “Studies in Computational Intelligence” (SCI) publishes new develop-
ments and advances in the various areas of computational intelligence—quickly and
with a high quality. The intent is to cover the theory, applications, and design
methods of computational intelligence, as embedded in the fields of engineering,
computer science, physics and life sciences, as well as the methodologies behind
them. The series contains monographs, lecture notes and edited volumes in
computational intelligence spanning the areas of neural networks, connectionist
systems, genetic algorithms, evolutionary computation, artificial intelligence,
cellular automata, self-organizing systems, soft computing, fuzzy systems, and
hybrid intelligent systems. Of particular value to both the contributors and the
readership are the short publication timeframe and the world-wide distribution,
which enable both wide and rapid dissemination of research output.
The books of this series are submitted to indexing to Web of Science,
EI-Compendex, DBLP, SCOPUS, Google Scholar and Springerlink.

More information about this series at http://www.springer.com/series/7092


Boris Kryzhanovsky • Witali Dunin-Barkowski •
Vladimir Redko • Yury Tiumentsev

Editors

Advances in Neural
Computation, Machine
Learning, and Cognitive
Research III
Selected Papers from the XXI International
Conference on Neuroinformatics,
October 7–11, 2019, Dolgoprudny,
Moscow Region, Russia

Editors

Boris Kryzhanovsky
Scientific Research Institute for System Analysis of Russian Academy of Sciences
Moscow, Russia

Witali Dunin-Barkowski
Scientific Research Institute for System Analysis of Russian Academy of Sciences
Moscow, Russia

Vladimir Redko
Scientific Research Institute for System Analysis of Russian Academy of Sciences
Moscow, Russia

Yury Tiumentsev
Moscow Aviation Institute (National Research University)
Moscow, Russia
ISSN 1860-949X    ISSN 1860-9503 (electronic)
Studies in Computational Intelligence
ISBN 978-3-030-30424-9    ISBN 978-3-030-30425-6 (eBook)
https://doi.org/10.1007/978-3-030-30425-6
© Springer Nature Switzerland AG 2020
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission
or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt from
the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, expressed or implied, with respect to the material contained
herein or for any errors or omissions that may have been made. The publisher remains neutral with regard
to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface

The international conference “Neuroinformatics” is an annual multidisciplinary scientific forum dedicated to the theory and applications of artificial neural networks, the problems of neuroscience and biophysics systems, artificial intelligence, adaptive behavior, and cognitive studies.
The scope of the conference is wide, ranging from the theory of artificial neural networks, machine learning algorithms, and evolutionary programming to neuroimaging and neurobiology.
Main topics of the conference cover theoretical and applied research from the
following fields:
Neurobiology and neurobionics: cognitive studies, neural excitability, cellular
mechanisms, cognition and behavior, learning and memory, motivation and emotion,
bioinformatics, adaptive behavior and evolutionary modeling, brain–computer
interface;
Neural networks: neurocomputing and learning, paradigms and architectures,
biological foundations, computational neuroscience, neurodynamics, neuroinformatics,
deep learning networks, neuro-fuzzy systems, hybrid intelligent systems;
Machine learning: pattern recognition, Bayesian networks, kernel methods,
generative models, information theoretic learning, reinforcement learning, relational
learning, dynamical models, classification and clustering algorithms,
self-organizing systems;
Applications: medicine, signal processing, control, simulation, robotics, hardware
implementations, security, finance and business, data mining, natural language
processing, image processing, and computer vision.
More than 100 reports were presented at the Neuroinformatics 2019 Conference. Of these, 50 papers, including 3 invited papers, were selected; the corresponding articles were prepared and published in this volume.

Boris Kryzhanovsky
Witali Dunin-Barkowski
Vladimir Redko
Yury Tiumentsev
Organization

Editorial Board
Boris Kryzhanovsky Scientific Research Institute for System Analysis
of Russian Academy of Sciences
Witali Dunin-Barkowski Scientific Research Institute for System Analysis
of Russian Academy of Sciences
Vladimir Red’ko Scientific Research Institute for System Analysis
of Russian Academy of Sciences
Yury Tiumentsev Moscow Aviation Institute
(National Research University)

Advisory Board

Prof. Alexander N. Gorban (Tentative Chair of the International Advisory Board)
Department of Mathematics
University of Leicester
Email: ag153@le.ac.uk
Homepage: http://www.math.le.ac.uk/people/ag153/homepage/
Google scholar profile:
http://scholar.google.co.uk/citations?user=D8XkcCIAAAAJ&hl=en
Tel. +44 116 223 14 33
Address: Department of Mathematics
University of Leicester
Leicester LE1 7RH
UK
Prof. Nicola Kasabov
Professor of Computer Science and Director KEDRI
Phone: +64 9 921 9506
Email: nkasabov@aut.ac.nz
http://www.kedri.info


Physical Address:
KEDRI
Auckland University of Technology
AUT Tower, Level 7
Corner Rutland and Wakefield Street
Auckland
Postal Address:
KEDRI
Auckland University of Technology
Private Bag 92006
Auckland 1142
New Zealand
Prof. Jun Wang, PhD, FIEEE, FIAPR
Chair Professor of Computational Intelligence
Department of Computer Science
City University of Hong Kong
Kowloon Tong, Kowloon, Hong Kong
+852 34429701 (tel.)
+852-34420503 (fax)
jwang.cs@cityu.edu.hk

Program Committee of the XXI International Conference “Neuroinformatics-2019”

General Chair

Vedyakhin A. A. Sberbank and Moscow Institute of Physics and Technology,
Dolgoprudny, Moscow Region

Co-chairs

Kryzhanovskiy Boris Scientific Research Institute for System Analysis, Moscow
Dunin-Barkowski Witali Scientific Research Institute for System Analysis, Moscow
Gorban Alexander Nikolaevich University of Leicester, Great Britain

Program Committee

Ajith Abraham Machine Intelligence Research Labs (MIR Labs),
Scientific Network for Innovation and Research Excellence, Washington, USA
Anokhin Konstantin National Research Centre “Kurchatov Institute,”
Moscow
Baidyk Tatiana The National Autonomous University of Mexico,
Mexico
Balaban Pavel Institute of Higher Nervous Activity
and Neurophysiology of RAS, Moscow
Borisyuk Roman Plymouth University, UK
Burtsev Mikhail National Research Centre “Kurchatov Institute,”
Moscow
Cangelosi Angelo Plymouth University, UK
Chizhov Anton Ioffe Physical Technical Institute, Russian
Academy of Sciences, St. Petersburg
Dolenko Sergey Skobeltsyn Institute of Nuclear Physics,
Lomonosov Moscow State University
Dolev Shlomi Ben-Gurion University of the Negev, Israel
Dosovitskiy Alexey Albert-Ludwigs-Universität, Freiburg, Germany
Dudkin Alexander United Institute of Informatics Problems, Minsk,
Belarus
Ezhov Alexander State Research Center of Russian Federation
“Troitsk Institute for Innovation and Fusion
Research,” Moscow
Frolov Alexander Institute of Higher Nervous Activity
and Neurophysiology of RAS, Moscow
Golovko Vladimir Brest State Technical University, Belarus
Hayashi Yoichi Meiji University, Kawasaki, Japan
Husek Dusan Institute of Computer Science, Czech Republic
Ivanitsky Alexey Institute of Higher Nervous Activity
and Neurophysiology of RAS, Moscow
Izhikevich Eugene Brain Corporation, San Diego, USA
Jankowski Stanislaw Warsaw University of Technology, Poland
Kaganov Yuri Bauman Moscow State Technical University
Kazanovich Yakov Institute of Mathematical Problems of Biology
of RAS, Pushchino, Moscow Region
Kecman Vojislav Virginia Commonwealth University, USA
Kernbach Serge Cybertronica Research, Research Center
of Advanced Robotics and Environmental
Science, Stuttgart, Germany
Koprinkova-Hristova Petia Institute of Information and Communication
Technologies, Bulgaria

Kussul Ernst The National Autonomous University of Mexico, Mexico
Litinsky Leonid Scientific Research Institute for System Analysis,
Moscow
Makarenko Nikolay The Central Astronomical Observatory
of the Russian Academy of Sciences
at Pulkovo, Saint Petersburg
Mishulina Olga National Research Nuclear University (MEPhI),
Moscow
Narynov Sergazy Alem Research, Almaty, Kazakhstan
Nechaev Yuri Honored Scientist of the Russian Federation,
Academician of the Russian Academy
of Natural Sciences, St. Petersburg
Pareja-Flores Cristobal Complutense University of Madrid, Spain
Prokhorov Danil Toyota Research Institute of North America,
USA
Red’ko Vladimir Scientific Research Institute for System Analysis
of Russian Academy of Sciences, Moscow
Rudakov Konstantin Dorodnicyn Computing Centre of RAS, Moscow
Rutkowski Leszek Czestochowa University of Technology, Poland
Samarin Anatoly A. B. Kogan Research Institute
for Neurocybernetics Southern Federal
University, Rostov-on-Don
Samsonovich Alexei George Mason University, USA
Sandamirskaya Yulia Institute of Neuroinformatics, UZH/ETHZ,
Switzerland
Shumskiy Sergey P. N. Lebedev Physical Institute of the Russian
Academy of Sciences, Moscow
Sirota Anton Ludwig Maximilian University of Munich,
Germany
Snasel Vaclav Technical University Ostrava, Czech Republic
Terekhov Serge JSC Svyaznoy Logistics, Moscow
Tikidji-Hamburyan Ruben Louisiana State University, USA
Tiumentsev Yury Moscow Aviation Institute
(National Research University)
Trofimov Alexander National Research Nuclear University (MEPhI),
Moscow
Tsodyks Misha Weizmann Institute of Science, Rehovot, Israel
Tsoy Yury Institut Pasteur Korea, Republic of Korea
Ushakov Vadim National Research Centre “Kurchatov Institute,”
Moscow
Velichkovsky Boris National Research Centre “Kurchatov Institute,”
Moscow
Vvedensky Viktor National Research Centre “Kurchatov Institute,”
Moscow

Yakhno Vladimir The Institute of Applied Physics of the Russian
Academy of Sciences, Nizhny Novgorod
Zhdanov Alexander Lebedev Institute of Precision Mechanics
and Computer Engineering, Russian Academy
of Sciences, Moscow
Contents

Invited Papers
Deep Learning a Single Photo Voxel Model Prediction from Real
and Synthetic Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Vladimir V. Kniaz, Peter V. Moshkantsev, and Vladimir A. Mizginov
Tensor Train Neural Networks in Retail Operations . . . . . . . . . . . . . . . 17
Serge A. Terekhov
Semi-empirical Neural Network Based Modeling and Identification
of Controlled Dynamical Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Yury Tiumentsev and Mikhail Egorchev

Artificial Intelligence
Photovoltaic System Control Model on the Basis of a Modified
Fuzzy Neural Net . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Ekaterina A. Engel and Nikita E. Engel
Impact of Assistive Control on Operator Behavior Under High
Operational Load . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Mikhail Kopeliovich, Evgeny Kozubenko, Mikhail Kashcheev,
Dmitry Shaposhnikov, and Mikhail Petrushan
Hierarchical Actor-Critic with Hindsight for Mobile Robot
with Continuous State Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Staroverov Aleksey and Aleksandr I. Panov
The Hybrid Intelligent Information System for Music Classification . . . 71
Aleksandr Stikharnyi, Alexey Orekhov, Ark Andreev,
and Yuriy Gapanyuk
The Hybrid Intelligent Information System for Poems Generation . . . . 78
Maria Taran, Georgiy Revunkov, and Yuriy Gapanyuk


Cognitive Sciences and Brain-Computer Interface, Adaptive Behavior
and Evolutionary Simulation
Is Information Density a Reliable Universal Predictor of Eye
Movement Patterns in Silent Reading? . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Valeriia A. Demareva and Yu. A. Edeleva
Bistable Perception of Ambiguous Images – Analytical Model . . . . . . . . 95
Evgeny Meilikov and Rimma Farzetdinova
Video-Computer Technology of Real Time Vehicle Driver
Fatigue Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
Y. R. Muratov, M. B. Nikiforov, A. S. Tarasov, and A. M. Skachkov
Consistency Across Functional Connectivity Methods and Graph
Topological Properties in EEG Sensor Space . . . . . . . . . . . . . . . . . . . . . 116
Anton A. Pashkov and Ivan S. Dakhtin
Evolutionary Minimization of Spin Glass Energy . . . . . . . . . . . . . . . . . . 124
Vladimir G. Red’ko and Galina A. Beskhlebnova
Comparison of Two Models of a Transparent
Competitive Economy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Zarema B. Sokhova and Vladimir G. Red’ko
Spectral Parameters of Heart Rate Variability as Indicators
of the System Mismatch During Solving Moral Dilemmas . . . . . . . . . . . 138
I. M. Sozinova, K. R. Arutyunova, and Yu. I. Alexandrov
The Role of Brain Stem Structures in the Vegetative Reactions
Based on fMRI Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
Vadim L. Ushakov, Vyacheslav A. Orlov, Yuri I. Kholodny,
Sergey I. Kartashov, Denis G. Malakhov, and Mikhail V. Kovalchuk
Ordering of Words by the Spoken Word Recognition Time . . . . . . . . . 151
Victor Vvedensky, Konstantin Gurtovoy, Mikhail Sokolov,
and Mikhail Matveev

Neurobiology and Neurobionics


A Novel Avoidance Test Setup: Device and Exemplary Tasks . . . . . . . . 159
Alexandra I. Bulava, Sergey V. Volkov, and Yuri I. Alexandrov
Direction Selectivity Model Based on Lagged
and Nonlagged Neurons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
Anton V. Chizhov, Elena G. Yakimova, and Elena Y. Smirnova
Wavelet and Recurrence Analysis of EEG Patterns of Subjects
with Panic Attacks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
Olga E. Dick

Two Delay-Coupled Neurons with a Relay Nonlinearity . . . . . . . . . . 181
Sergey D. Glyzin and Margarita M. Preobrazhenskaia
Brain Extracellular Matrix Impact on Neuronal Firing Reliability
and Spike-Timing Jitter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
Maiya A. Rozhnova, Victor B. Kazantsev, and Evgeniya V. Pankratova
Contribution of the Dorsal and Ventral Visual Streams
to the Control of Grasping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
Irina A. Smirnitskaya

Deep Learning
The Simple Approach to Multi-label Image Classification Using
Transfer Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
Yuriy S. Fedorenko
Application of Deep Neural Network for the Vision System
of Mobile Service Robot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
Nikolay Filatov, Vladislav Vlasenko, Ivan Fomin,
and Aleksandr Bakhshiev
Research on Convolutional Neural Network for Object
Classification in Outdoor Video Surveillance System . . . . . . . . . . . . . . . 221
I. S. Fomin and A. V. Bakhshiev
Post-training Quantization of Deep Neural Network Weights . . . . . . . . 230
E. M. Khayrov, M. Yu. Malsagov, and I. M. Karandashev
Deep-Learning Approach for McIntosh-Based Classification
of Solar Active Regions Using HMI and MDI Images . . . . . . . . . . . . 239
Irina Knyazeva, Andrey Rybintsev, Timur Ohinko,
and Nikolay Makarenko
Deep Learning for ECG Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . 246
Viktor Moskalenko, Nikolai Zolotykh, and Grigory Osipov
Competitive Maximization of Neuronal Activity in Convolutional
Recurrent Spiking Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
Dmitry Nekhaev and Vyacheslav Demin
A Method of Choosing a Pre-trained Convolutional Neural
Network for Transfer Learning in Image Classification Problems . . . . . 263
Alexander G. Trofimov and Anastasia A. Bogatyreva
The Usage of Grayscale or Color Images for Facial Expression
Recognition with Deep Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . 271
Dmitry A. Yudin, Alexandr V. Dolzhenko, and Ekaterina O. Kapustina

Applications of Neural Networks


Use of Wavelet Neural Networks to Solve Inverse Problems
in Spectroscopy of Multi-component Solutions . . . . . . . . . . . . . . . . . . . . 285
Alexander Efitorov, Sergey Dolenko, Tatiana Dolenko, Kirill Laptinskiy,
and Sergey Burikov
Automated Determination of Forest-Vegetation Characteristics
with the Use of a Neural Network of Deep Learning . . . . . . . . . . . . . . . 295
Daria A. Eroshenkova, Valeri I. Terekhov, Dmitry R. Khusnetdinov,
and Sergey I. Chumachenko
Depth Mapping Method Based on Stereo Pairs . . . . . . . . . . . . . . . . . . . 303
Vasiliy E. Gai, Igor V. Polyakov, and Olga V. Andreeva
Semantic Segmentation of Images Obtained by Remote Sensing
of the Earth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
Dmitry M. Igonin and Yury V. Tiumentsev
Diagnostics of Water-Ethanol Solutions by Raman Spectra
with Artificial Neural Networks: Methods to Improve Resilience
of the Solution to Distortions of Spectra . . . . . . . . . . . . . . . . . . . . . . . . . 319
Igor Isaev, Sergey Burikov, Tatiana Dolenko, Kirill Laptinskiy,
and Sergey Dolenko
Metaphorical Modeling of Resistor Elements . . . . . . . . . . . . . . . . . . . . . 326
Vladimir B. Kotov, Alexandr N. Palagushkin, and Fedor A. Yudkin
Semi-empirical Neural Network Models of Hypersonic Vehicle
3D-Motion Represented by Index 2 DAE . . . . . . . . . . . . . . . . . . . . . . . . 335
Dmitry S. Kozlov and Yury V. Tiumentsev
Style Transfer with Adaptation to the Central Objects of the Scene . . . 342
Alexey Schekalev and Victor Kitov
The Construction of the Approximate Solution of the Chemical
Reactor Problem Using the Feedforward Multilayer
Neural Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
Dmitriy A. Tarkhov and Alexander N. Vasilyev
Linear Prediction Algorithms for Lossless Audio Data Compression . . . 359
L. S. Telyatnikov and I. M. Karandashev

Neural Network Theory, Concepts and Architectures


Approach to Forecasting Behaviour of Dynamic System Beyond
Borders of Education . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
A. A. Brynza and M. O. Korlyakova

Towards Automatic Manipulation of Arbitrary Structures
in Connectivist Paradigm with Tensor Product Variable Binding . . . . . 375
Alexander V. Demidovskij
Astrocytes Organize Associative Memory . . . . . . . . . . . . . . . . . . . . . . . . 384
Susan Yu. Gordleeva, Yulia A. Lotareva, Mikhail I. Krivonosov,
Alexey A. Zaikin, Mikhail V. Ivanchenko, and Alexander N. Gorban
Team of Neural Networks to Detect the Type of Ignition . . . . . . . . . . . . 392
Alena Guseva and Galina Malykhina
Chaotic Spiking Neural Network Connectivity Configuration
Leading to Memory Mechanism Formation . . . . . . . . . . . . . . . . . . . . . . 398
Mikhail Kiselev
The Large-Scale Symmetry Learning Applying Pavlov Principle . . . . . . 405
Alexander E. Lebedev, Kseniya P. Solovyeva,
and Witali L. Dunin-Barkowski
Bimodal Coalitions and Neural Networks . . . . . . . . . . . . . . . . . . . . . . . 412
Leonid Litinskii and Inna Kaganowa
Building Neural Network Synapses Based on Binary Memristors . . . . . 420
Mikhail S. Tarkov

Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427


Invited Papers
Deep Learning a Single Photo Voxel Model Prediction
from Real and Synthetic Images

Vladimir V. Kniaz 1,2, Peter V. Moshkantsev 1,3, and Vladimir A. Mizginov 1

1 State Research Institute of Aviation Systems (GosNIIAS), Moscow, Russia
{vl.kniaz,vl.mizginov}@gosniias.ru, petermosh79@gmail.com
2 Moscow Institute of Physics and Technology (MIPT), Moscow, Russia
3 Moscow Aviation Institute, Moscow, Russia

Abstract. Reconstruction of a 3D model from a single image is challenging. Nevertheless, recent advances in deep learning methods have demonstrated exciting progress toward single-view 3D object reconstruction. However, successful training of a deep learning model requires an extensive dataset with pairs of geometrically aligned 3D models and color images. While manual dataset collection using photogrammetry or laser scanning is challenging, 3D modeling provides a promising method for data generation. Still, a deep model should be able to generalize from synthetic to real data. In this paper, we evaluate the impact of the synthetic data in the dataset on the performance of the trained model. We use the recently proposed Z-GAN model as a starting point for our research. The Z-GAN model leverages generative adversarial training and a frustum voxel model to provide state-of-the-art results in single-view voxel model prediction. We generated a new dataset with 2k synthetic color images and voxel models. We train the Z-GAN model on synthetic, real, and mixed images, and compare the performance of the trained models on real and synthetic images. We provide a qualitative and quantitative evaluation in terms of the Intersection over Union between the ground truth and predicted voxel models. The evaluation demonstrates that the model trained only on the synthetic data fails to generalize to real color images. Nevertheless, a combination of synthetic and real data improves the performance of the trained model. We made our training dataset publicly available (http://www.zefirus.org/SyntheticVoxels).

Keywords: Generative adversarial networks · Deep learning · Voxel model prediction · 3D object reconstruction

1 Introduction

Prediction of a 3D model from an image requires an estimation of the camera pose relative to the object and reconstruction of the object’s shape. While traditional multi-view stereo approaches [22,23,25] provide a robust solution for 3D reconstruction, prediction of a 3D model from a monocular camera is required in such applications as mobile robotics, augmented reality for smartphones, and reconstruction of lost cultural heritage [21]. Single-image 3D reconstruction is ambiguous. Firstly, a single image does not provide enough data to estimate the distance to the object’s surface. Secondly, back surfaces are not visible in a single photo. Therefore, a priori knowledge about the object’s shape is required for an accurate single-view reconstruction.

Fig. 1. Results of our image-to-voxel translation based on a generative adversarial network (GAN) and a frustum voxel model. Input color image (left). Ground truth frustum voxel model slices colored as a depth map (middle). The voxel model output (right).

© Springer Nature Switzerland AG 2020
B. Kryzhanovsky et al. (Eds.): NEUROINFORMATICS 2019, SCI 856, pp. 3–16, 2020.
https://doi.org/10.1007/978-3-030-30425-6_1
Recent advances in deep learning methods demonstrated impressive progress in single-view 3D reconstruction [13,35,41,43]. Modern voxel model prediction methods fall into two categories: object-centered and view-centered [35]. Object-centered methods [13,41] predict the same voxel model for any camera pose relative to an object. They aim to recognize the object class in the input photo and to predict its voxel model in the object-centered coordinate system. For example, an object-centered method will generate the same voxel model for a front-facing car and the same car captured from the rear.

In contrast to object-centered methods, view-centered models provide different outputs for different camera poses. They aim to generate a voxel model of the object in the camera’s coordinate system. While a training dataset for an object-centered method requires only a single voxel model for all images of a single object class, each image in the training dataset for a view-centered approach requires a geometrically aligned voxel model. Thus, generation of a view-centered dataset is challenging. Nevertheless, view-centered methods generally outperform object-centered methods [24,35].
A research project has recently been started by the authors with the aim of developing a low-cost driver assistance system with a monocular camera. An efficient training dataset generation technique is required to train a single-view 3D reconstruction model successfully. The technique should provide means for modeling various traffic and weather conditions.

Recently, a new kind of view-centered 3D object representation was proposed [24]. It is commonly called a frustum voxel model (fruxel model). Unlike ordinary voxel models with cubic elements, fruxel models have trapezium-shaped elements that represent slices of the camera’s frustum. Each fruxel is aligned with a pixel of the input color image (see Fig. 1). Fruxel models facilitate robust training of a view-centered model, as the contour alignment between the input image and the fruxel model is preserved.
To the best of our knowledge, there are no results in the literature regarding view-centered voxel model dataset generation using synthetic images and 3D modeling. In this paper, we explore the impact of the synthetic data on the performance of a view-centered model. We use the recently proposed generative adversarial model Z-GAN [24] as a starting point for our research. We prepared an extensive SyntheticVoxels dataset with 2k synthetic images of three object classes and corresponding ground truth fruxel models. We made our dataset publicly available. We compare the performance of the Z-GAN model trained on real, synthetic, and mixed data.

The results of joint training on the synthetic and real data are encouraging and show that synthetic data allows the model to generalize to previously unseen objects. The developed view-centered dataset generation technique allows modeling of challenging 3D object configurations and traffic situations that cannot be reconstructed online using laser scanning or similar approaches.
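The Intersection over Union used for the quantitative evaluation reduces, for voxel (or fruxel) occupancy grids, to the ratio of overlapping to combined occupied cells. A minimal sketch of that metric, assuming boolean occupancy arrays of identical shape:

```python
import numpy as np

def voxel_iou(pred, target):
    """Intersection over Union of two boolean voxel (or fruxel) occupancy
    grids of identical shape. Illustrative sketch, not the paper's code."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:                      # both grids empty: perfect agreement
        return 1.0
    intersection = np.logical_and(pred, target).sum()
    return float(intersection) / float(union)

a = np.zeros((4, 4, 4), dtype=bool); a[0:2] = True  # slices 0-1 occupied (32 cells)
b = np.zeros((4, 4, 4), dtype=bool); b[1:3] = True  # slices 1-2 occupied (32 cells)
print(voxel_iou(a, b))  # 16 shared cells / 48 combined = 0.333...
```

IoU of 1.0 means the predicted and ground-truth models coincide exactly; 0.0 means they share no occupied cells.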

2 Related Work

Generative Adversarial Networks. Development of a new type of neural networks known as Generative Adversarial Networks (GANs) [14] made it possible to provide a mapping from a random noise vector to a domain of desired outputs (e.g., images, voxel models). GANs have received a lot of scholarly attention in recent years. These networks provide inspiring results in such tasks as image-to-image translation [20] and voxel model generation [42].

Single-Photo 3D Model Reconstruction. Accurate 3D reconstruction is challenging if only a single color image is used as an input. This problem has been studied intensively [10,31,32], and several new methods that leverage deep learning have been proposed recently [7,13,19,33,35,39,41,42,45]. Although some methods were proposed for predicting unobserved voxels from a single depth map [12,37,46–48], predicting the voxel model of a complex scene from a single color (RGB) image is more ambiguous: the 3D shape of an object should be known for the method to perform accurately. Therefore, the solution proceeds in two steps: object recognition and 3D shape reconstruction.
In [13] a deep learning method for single-image voxel model reconstruction was proposed. The method leverages an auto-encoder architecture for voxel model prediction and showed encouraging results, but the resolution of the model was only 20 × 20 × 20 elements. A combined method for 3D model reconstruction was proposed in [7]. In [33] a new voxel decoder architecture was proposed that uses voxel tube and shape layers to increase the resulting voxel model resolution. A comparison of surface-based and volumetric 3D model prediction is performed in [35].
6 V. V. Kniaz et al.

Methods that leverage a latent space for 3D shape synthesis were developed recently [5,13,42]. Wu et al. proposed a GAN model for voxel model generation (3D-GAN) [42], which made it possible to predict models with a resolution of 64 × 64 × 64 elements from a randomly sampled noise vector. The developed method was used for single-image 3D reconstruction using an approach proposed in [13]. Although 3D-GAN increased the number of elements in the model compared to [13], the generalization ability of this method was low, especially for previously unseen objects.

3D Shape Datasets. Several 3D shape datasets were designed for deep learning [6,27,38,44]. Semantic segmentation was performed for the Pascal VOC dataset [11] to align a set of CAD models with color photos; the extended dataset was named Pascal 3D+ [44]. However, the models trained with this dataset showed only a rough match between a 3D model and a photo. The ShapeNet dataset [6] was used to solve the problem of 3D shape recognition and prediction. However, ShapeNet provides only synthetic images, and exact reconstruction of the model using a single image is possible only with synthetic data. Hinterstoisser et al. generated the large Linemod dataset [15] with aligned RGB-D data; the Linemod dataset was used intensively for training 6D pose estimation algorithms [1–4,8,17,18,26,28,30,36,40]. In [16] a large dataset for 6D pose estimation of texture-less objects was developed. The MVTec ITODD dataset [9] addresses the challenging problem of 6D pose prediction in industrial applications.

3 Method
The aim of the present research is to compare the performance of a single-photo voxel model prediction method trained on synthetic, real, and mixed data. In our research we use the generative adversarial network Z-GAN [24], which performs color image-to-voxel model translation. The Z-GAN model uses a special kind of voxel model that is aligned with the input image.
While a depth map represents only the distances from a given viewpoint to the object surface, a voxel model includes information about the entire 3D scene. The proposed frustum voxel model combines features of a depth map and a voxel model. We use a hypothesis made in [41] as the starting point for our research. To provide the aligned voxel model, we combine the depth map representation with a voxel grid. We term the resulting 3D model a Frustum Voxel model (Fruxel model).

3.1 Frustum Voxel Model

The main idea of the fruxel model is to provide a precise alignment of voxel slices with the contours of a color image. Such alignment can be achieved with a common voxel model only if the camera has an orthographic projection and its optical axis coincides with the Z-axis of the voxel model. As the camera frustum does not correspond to cubic voxel elements, we use sections of a pyramid instead.

The fruxel model representation provides multiple advantages. Firstly, each XY slice of the model is aligned with some contours on the corresponding color photo (some parts of them can be invisible). Secondly, a fruxel model encodes the shape of both visible and invisible surfaces; hence, unlike the depth map, it contains complete information about the 3D shapes. In other words, the fruxel model imitates perspective space. It is important to note that all slices of the fruxel model have the same number of fruxel elements (e.g., 128 × 128 × 1).
A fruxel model is characterized by the following set of parameters $\{z_n, z_f, d, \alpha\}$, where $z_n$ is the distance to the near clipping plane, $z_f$ is the distance to the far clipping plane, $d$ is the number of frustum slices, and $\alpha$ is the field of view of the camera.
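To make the parameterization concrete, here is a minimal Python sketch (the function name and layout are ours, not the authors'; the parameter values match those used later in Sect. 4.1) that derives the slice thickness and the cutting-plane positions from the parameter set:

```python
import math

def fruxel_grid(z_near, z_far, d, fov_deg):
    """Derive per-slice geometry of a fruxel model {z_n, z_f, d, alpha}."""
    s_z = (z_far - z_near) / d                         # slice thickness along Z
    planes = [z_near + i * s_z for i in range(d + 1)]  # d slices -> d + 1 planes
    # The lateral extent of a slice grows linearly with its depth,
    # which is why fruxel elements are trapezium-shaped:
    half_width = [z * math.tan(math.radians(fov_deg) / 2.0) for z in planes]
    return s_z, planes, half_width

s_z, planes, _ = fruxel_grid(z_near=2.0, z_far=12.0, d=128, fov_deg=40.0)
# s_z == 0.078125; cutting planes run from 2.0 to 12.0
```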
A fruxel model is a special kind of voxel model optimized for the training of conditional adversarial networks. However, a fruxel model can be converted into three common data types: (1) a voxel model, (2) a depth map, (3) an object annotation.
A voxel model can be generated from the fruxel model by scaling each consecutive layer slice by the coefficient $k$ defined as:

$$k = \frac{z_n}{z_n + s_z}, \tag{1}$$

where $s_z = \frac{z_f - z_n}{d}$ is the size of the fruxel element along the Z-axis.
To generate a depth map $P$ from the fruxel model, we multiply the indices of the frontmost non-empty elements by the step $s_z$:

$$P(x, y) = \operatorname{argmin}_i \left[ F(x, y, i) = 1 \right] \cdot s_z + z_n, \tag{2}$$

where $P(x, y)$ is an element of the depth map and $F(x, y, i)$ is the element of the fruxel model at slice $i$ with coordinates $(x, y)$.
An object annotation is equal to the product of all elements with given $(x, y)$ coordinates:

$$A(x, y) = \prod_{i=0}^{d} F(x, y, i). \tag{3}$$
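A minimal NumPy sketch of the conversions in Eqs. (2) and (3), assuming the fruxel model F is stored as a binary occupancy array of shape w × h × d (the function names are ours, not from the paper):

```python
import numpy as np

def fruxel_to_depth(F, z_near, s_z):
    """Eq. (2): index of the frontmost occupied fruxel times s_z, plus z_n."""
    occupied = F.astype(bool)
    first = np.argmax(occupied, axis=-1)      # first slice where F(x, y, i) = 1
    depth = first * s_z + z_near
    depth[~occupied.any(axis=-1)] = np.inf    # columns with no surface at all
    return depth

def fruxel_to_annotation(F):
    """Eq. (3): product of all elements along the frustum axis."""
    return np.prod(F, axis=-1)
```

Note that `argmax` over a boolean array returns the first occupied index, which matches the argmin-over-occupancy formulation of Eq. (2); empty columns are marked explicitly since they have no surface.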

3.2 Conditional Adversarial Networks

Generative adversarial networks generate a signal $\hat{B}$ for a given random noise vector $z$, $G : z \to \hat{B}$ [14,20]. A conditional GAN transforms an input image $A$ and the vector $z$ to an output $\hat{B}$, $G : \{A, z\} \to \hat{B}$. The input $A$ can be an image that is transformed by the generator network $G$. The discriminator network $D$ is trained to distinguish "real" signals from the target domain $B$ from the "fakes" $\hat{B}$ produced by the generator. Both networks are trained simultaneously. The discriminator provides the adversarial loss that forces the generator to produce "fakes" $\hat{B}$ that cannot be distinguished from the "real" signal $B$.
We train a generator $G : \{A\} \to \hat{B}$ to synthesize a fruxel model $\hat{B} \in \mathbb{R}^{w \times h \times d}$ conditioned by a color image $A \in \mathbb{R}^{w \times h \times 3}$.
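The two adversarial loss terms can be written down compactly; below is a framework-agnostic NumPy sketch (abstracting away the actual networks G and D, which output occupancy volumes and realness scores respectively; this is an illustration, not the authors' code):

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """Negated GAN objective for D: maximize log D(A, B) + log(1 - D(A, G(A)))."""
    return -(np.log(d_real) + np.log(1.0 - d_fake)).mean()

def generator_loss(d_fake):
    """Non-saturating loss for G: minimize -log D(A, G(A))."""
    return -np.log(d_fake).mean()
```

At the equilibrium point where D outputs 0.5 everywhere, the discriminator loss equals 2·log 2, the classic GAN saddle-point value.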

Fig. 2. The architecture of the generator: eight 4 × 4 conv2D encoder layers (feature maps from 256 × 256 × 3 down to 1 × 1 × 512) followed by 3D deconvolution (deconv3D) decoder layers (volumes up to 128 × 128 × 128), with "copy inflate" skip connections between encoder feature maps and decoder volumes.

3.3 Z-GAN Framework

We use the pix2pix framework [20] as a base to develop our Z-GAN model. We keep the encoder part of the generator unchanged and replace the 2D deconvolution layers of the decoder with 3D deconvolution layers to encode the correlation between neighboring slices along the Z-axis.
We keep the skip connections between the layers of the same depth that were proposed in the U-Net model [34]. We believe that skip connections help to transfer high-frequency components of the input image to the high-frequency components of the 3D shape.

3.4 Z-GAN Model

The main idea of our volumetric generator G is to use the correspondence between silhouettes in a color image and slices of a fruxel model. The original U-Net generator leverages skip connections between convolutional and deconvolutional layers of the same depth to transfer fine details from the source to the target domain effectively.
We made two contributions to the original U-Net model. Firstly, we replaced the 2D deconvolutional filters with 3D deconvolutional filters. Secondly, we modified the skip connections to provide the correspondence between the shapes of 2D and 3D features. The outputs of the 2D convolutional filters in the left (encoder) side of our generator are tensors $F_{2D} \in \mathbb{R}^{w \times h \times c}$, where $w$, $h$ are the width and height of a feature map and $c$ is the number of channels. The outputs of the 3D deconvolutional filters in the right (decoder) side are tensors $F_{3D} \in \mathbb{R}^{w \times h \times d \times c}$.

Fig. 3. Synthetic dataset generation technique: (a) virtual camera, (b) slice of
fruxel model, (c) cutting plane, (d) low-poly 3D model, (e) synthetic color image.

We use $d$ copies of each channel of $F_{2D}$ to fill the third dimension of $F_{3D}$. We term this operation "copy inflate". The architecture of the generator is presented in Fig. 2.
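The "copy inflate" operation is easy to state precisely; a minimal NumPy sketch (our own illustration of the idea, using a channel-first layout):

```python
import numpy as np

def copy_inflate(f2d, d):
    """Replicate each channel of a 2D feature map d times along a new depth
    axis, so F2D of shape (c, h, w) can be fused with F3D of shape
    (c, d, h, w) in a skip connection."""
    return np.repeat(f2d[:, np.newaxis, :, :], d, axis=1)

f2d = np.arange(6.0).reshape(1, 2, 3)   # one channel, 2 x 3 feature map
f3d = copy_inflate(f2d, 4)              # -> shape (1, 4, 2, 3)
```

Every depth slice of the inflated tensor is an identical copy of the 2D feature map, which is what lets a 2D silhouette constrain all slices of the 3D volume.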

3.5 Synthetic Dataset Generation Technique

We developed a synthetic dataset generation technique to create our SyntheticVoxels dataset (see Fig. 3). We use low-poly 3D models of objects both to render realistic synthetic images and to generate frustum voxel models, and 360° panoramic textures to provide a variety of realistic backgrounds. For each object, we sample random points on a hemisphere around the object and use them as virtual camera (a) locations. For each frame, we point the camera's optical axis at the object, select a random background texture, and randomly select the color of the object's texture.
When camera locations and background textures are prepared for all frames, we perform dataset generation in two passes. Firstly, we render a synthetic color image. Secondly, we move a cutting plane object (c) normal to the camera optical axis from the distance $z_n$ to the distance $z_f$ of the target fruxel model with the step $s_z$. Therefore, for each synthetic color image, we render $d$ slices of the fruxel model. We use a Boolean intersection between the cutting plane and the low-poly 3D model to obtain all slices (b) of the fruxel model. This approach allows us to keep the contours in the color images (e) and slices (b) geometrically aligned. We stack all $d$ slices along the camera's optical axis to obtain the resulting fruxel model with dimensions $w \times h \times d$.
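Stripped of the Blender-specific rendering, the slicing-and-stacking step can be sketched as follows (`render_slice` is a hypothetical stand-in for rendering the Boolean intersection at a given cutting-plane distance; the function names are ours, not part of the authors' pipeline):

```python
import numpy as np

def build_fruxel_model(render_slice, z_near, z_far, d):
    """Move the cutting plane from z_n to z_f with step s_z and stack the
    d rendered binary slices along the camera axis into a w x h x d volume."""
    s_z = (z_far - z_near) / d
    slices = [render_slice(z_near + i * s_z) for i in range(d)]
    return np.stack(slices, axis=-1)
```

In the actual pipeline the per-slice render would come from the Blender Python API; here any callable that returns a binary w × h mask can be plugged in.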
We generate our dataset using the Blender 3D creation suite. We automate the background and object color randomization, the camera movement, and the cutting-plane movement using the Blender Python API. We use an additional ground plane to provide realistic object shadows; we render the plane with shadows separately and use alpha compositing to obtain the final synthetic image.

Fig. 4. Examples of color images (off-road vehicle class) and corresponding fruxel models from our SyntheticVoxels dataset. Fruxel models are presented as depth maps in pseudo-colors.
SyntheticVoxels Dataset. Examples of synthetic images with ground truth
fruxel models from our SyntheticVoxels dataset are presented in Figs. 4 and 5.
The dataset includes images and fruxel models for four object classes: car, truck,
off-road vehicle, and van.

4 Experiments
4.1 Network Training
Our Z-GAN framework was trained on the VoxelCity [24] and SyntheticVoxels datasets using the PyTorch library [29]. We use independent test splits of the SyntheticVoxels and VoxelCity datasets for evaluation, with fruxel model parameters $\{z_n = 2, z_f = 12, d = 128, \alpha = 40°\}$. The training was performed using an NVIDIA 1080 Ti GPU and took 20 hours for the whole framework. For network optimization, we use minibatch stochastic gradient descent with the Adam solver. We set the learning rate to 0.0002 with momentum parameters $\beta_1 = 0.5$, $\beta_2 = 0.999$, similar to [20].
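These optimizer settings map directly onto a PyTorch configuration; the fragment below records them (the dictionary layout is our sketch, not the authors' released code; in PyTorch the equivalent call would be `torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))`):

```python
# Hyperparameters from Sect. 4.1; the dict itself is illustrative.
train_config = {
    "optimizer": "Adam",       # minibatch SGD with an Adam solver
    "lr": 2e-4,                # learning rate 0.0002
    "betas": (0.5, 0.999),     # momentum parameters beta1, beta2, as in [20]
}
```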

4.2 Qualitative Evaluation

We show results of single-view voxel model generation in Figs. 6 and 7. We use three object classes: car, off-road vehicle, and van. The Z-GAN model trained only on synthetic data fails to generalize to real images; nevertheless, it successfully predicts realistic fruxel models for the synthetic input. The real data from the VoxelCity dataset [24] contains images of only nine models of cars. Therefore, the Z-GAN model trained only on real data fails to predict fruxel models for cars with a new 3D shape or color. The Z-GAN model trained on the union of real and synthetic data produces voxel models of complex objects with fine details.

Fig. 5. Examples of color images (car and van classes) and corresponding fruxel models from our SyntheticVoxels dataset. Fruxel models are presented as depth maps in pseudo-colors.

Fig. 6. Qualitative evaluation on synthetic images from the SyntheticVoxels dataset (columns: input, ground truth, and Z-GAN trained on real, synthetic, and real+synthetic data; rows: car and off-road vehicle examples). Fruxel models are presented as depth maps in pseudo-colors.

Fig. 7. Qualitative evaluation on real images from the VoxelCity dataset (columns: input, ground truth, and Z-GAN trained on real, synthetic, and real+synthetic data; rows: car examples). Fruxel models are presented as depth maps in pseudo-colors.

4.3 Quantitative Evaluation

We present the results of the quantitative evaluation in terms of Intersection over Union (IoU) in Table 1. The Z-GAN model predicts the probability $p$ of each element of the fruxel model being occupied by an object. We use the threshold $p > 0.99$ to compare a predicted fruxel model with the ground-truth model. The Z-GAN model trained on synthetic and real data provides the best IoU for all object classes except the van class. Most of the images for the van class in our SyntheticVoxels dataset do not provide backgrounds similar to those of the van in the VoxelCity dataset; we believe this is the reason for the slightly lower performance on the van class. Nevertheless, the Z-GAN model trained on synthetic and real data provides the highest mean IoU.
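The IoU used in Table 1 can be computed as below (a sketch under the stated $p > 0.99$ threshold; the function name is ours):

```python
import numpy as np

def voxel_iou(pred_prob, gt, threshold=0.99):
    """Intersection over Union between a predicted occupancy-probability
    volume and a binary ground-truth fruxel model."""
    pred = pred_prob > threshold
    gt = gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0                      # both volumes empty: perfect match
    return np.logical_and(pred, gt).sum() / union
```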

Table 1. IoU metric for different object classes for the Z-GAN model trained on real, synthetic, and mixed data.

Method                  | car  | van  | off-road vehicle | mean
------------------------|------|------|------------------|------
Z-GAN synthetic         | 0.06 | 0.15 | 0.07             | 0.34
Z-GAN real              | 0.71 | 0.84 | 0.53             | 0.73
Z-GAN real + synthetic  | 0.76 | 0.79 | 0.79             | 0.78

5 Conclusions

We demonstrated that augmenting the dataset with synthetic data improves the performance of an image-to-frustum voxel model translation method. While methods trained on purely synthetic data fail to generalize to real images, joint training on synthetic and real images allows our model to achieve higher IoU and to generalize to previously unseen objects. Our main observation is that the variety of background textures aids the model's generalization ability. In our experiments, we use the Z-GAN generative adversarial network. To train the Z-GAN model, we generated a new SyntheticVoxels dataset with 2k synthetic images of three object classes and view-centered frustum voxel models. We developed a technique for the automatic generation of a view-centered dataset using low-poly 3D models and 360° panoramic background textures. Our technique and dataset can be used to train single-view 3D reconstruction models. The Z-GAN model trained on our SyntheticVoxels dataset achieves state-of-the-art results in single-photo voxel model prediction.

Acknowledgments. The reported study was funded by the Russian Foundation for Basic Research (RFBR), project No. 17-29-04410, and by the Russian Science Foundation (RSF), research project No. 19-11-11008.

References
1. Balntas, V., Doumanoglou, A., Sahin, C., Sock, J., Kouskouridas, R., Kim, T.:
Pose guided RGBD feature learning for 3d object pose estimation. In: IEEE Inter-
national Conference on Computer Vision, ICCV 2017, Venice, Italy, 22–29 October
2017, pp. 3876–3884 (2017). https://doi.org/10.1109/ICCV.2017.416
2. Balntas, V., Doumanoglou, A., Sahin, C., Sock, J., Kouskouridas, R., Kim, T.K.:
Pose guided RGBD feature learning for 3D object pose estimation. In: The IEEE
International Conference on Computer Vision (ICCV) (2017)
3. Brachmann, E., Krull, A., Nowozin, S., Shotton, J., Michel, F., Gumhold, S.,
Rother, C.: DSAC - differentiable RANSAC for camera localization. In: The IEEE
Conference on Computer Vision and Pattern Recognition (CVPR) (2017)
4. Brachmann, E., Rother, C.: Learning less is more - 6d camera localization via
3d surface regression. In: The IEEE Conference on Computer Vision and Pattern
Recognition (CVPR) (2018)
5. Brock, A., Lim, T., Ritchie, J., Weston, N.: Generative and discriminative voxel
modeling with convolutional neural networks, pp. 1–9 (2016). https://nips.cc/
Conferences/2016. Workshop contribution; Neural Information Processing Con-
ference : 3D Deep Learning, NIPS, 05–12 Dec 2016
6. Chang, A.X., Funkhouser, T.A., Guibas, L.J., Hanrahan, P., Huang, Q.X., Li, Z., Savarese, S., Savva, M., Song, S., Su, H., Xiao, J., Yi, L., Yu, F.: ShapeNet: an information-rich 3D model repository (2015). CoRR arXiv:abs/1512.03012
7. Choy, C.B., Xu, D., Gwak, J., Chen, K., Savarese, S.: 3d-r2n2: a unified approach
for single and multi-view 3d object reconstruction. In: Proceedings of the European
Conference on Computer Vision (ECCV) (2016)
8. Doumanoglou, A., Kouskouridas, R., Malassiotis, S., Kim, T.: Recovering 6d object
pose and predicting next-best-view in the crowd. In: 2016 IEEE Conference on
Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA,
27–30 June 2016, pp. 3583–3592 (2016). https://doi.org/10.1109/CVPR.2016.390
9. Drost, B., Ulrich, M., Bergmann, P., Hartinger, P., Steger, C.: Introducing MVTec ITODD - a dataset for 3D object recognition in industry. In: The IEEE International Conference on Computer Vision (ICCV) Workshops (2017)
10. El-Hakim, S.: A flexible approach to 3d reconstruction from single images. In: ACM
SIGGRAPH, vol. 1, pp. 12–17 (2001)
11. Everingham, M., Van Gool, L., Williams, C.K.I., Winn, J., Zisserman, A.: The
pascal visual object classes (VOC) challenge. Int. J. Comput. Vis. 88(2), 303–338
(2009)
12. Firman, M., Mac Aodha, O., Julier, S., Brostow, G.J.: Structured prediction of
unobserved voxels from a single depth image. In: The IEEE Conference on Com-
puter Vision and Pattern Recognition (CVPR) (2016)
13. Girdhar, R., Fouhey, D.F., Rodriguez, M., Gupta, A.: Learning a predictable and
generative vector representation for objects, chap. 34, pp. 702–722. Springer, Cham
(2016)
14. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair,
S., Courville, A., Bengio, Y.: Generative adversarial nets. In: Advances in Neural
Information Processing Systems, pp. 2672–2680 (2014)
15. Hinterstoisser, S., Lepetit, V., Ilic, S., Holzer, S., Bradski, G., Konolige, K., Navab,
N.: Model based training, detection and pose estimation of texture-less 3d objects
in heavily cluttered scenes. In: Asian Conference on Computer Vision, pp. 548–562.
Springer, Heidelberg (2012)
to be a clergyman’s son.”
“I am often glad that I belong to a religion whose priests do not
marry,” said the doctor. “Let me get you Lee’s papers.”
They made but a small bundle and most of them were bills,
unreceipted. Vickers drew out one with an American stamp. It was
dated Hilltop, Connecticut. Vickers read:
“My Dear Son: I enclose the money you desire for your
journey home, which Nellie and I have managed to save
during the last three months. I can hardly realize that I am to
see you again after almost ten years.”
Vickers looked up. “Why, the poor beggar,” he said, “he was just
going home after ten years. I call that hard luck.” And then his eye lit
on the date of the letter, which was many months old. “By Jove, no.
He took the old man’s money and blew it in, instead. Isn’t that the
limit? But who is Nellie?”
The doctor shrugged his shoulders, and Vickers returned to the
perusal of the papers. “Bills, bills, notes, letters from women. I seem
to recognize that hand, but no matter. Ah, here is another from
home. Ten years old, too.”
The writing was feminine, neat, and childish.
“Dear Bob,” it said, “if you left home on my account, you
need not have gone.
“Your affectionate cousin,
“Nellie.”
There was a moment’s silence. A feeling of envy swept over
Vickers. The mere sight of an American stamp made him homesick;
the mail from the States never brought him anything; and yet
somewhere at home there was a girl who would write like that to a
worthless creature like Lee.
“They were using those stamps when I was at home,” he said
reminiscently, “but they don’t use them any more.”
“Indeed,” said the doctor, without very much interest.
“Ten years ago, just fancy it,” Vickers went on, turning the letter
over. “And he did not go back. I would have, in his place. If I had an
affectionate cousin Nellie—I have always been rather fond of the
name Nellie. Can you understand his not going?”
“We do not understand the Anglo-Saxon, nor pretend to,” returned
the doctor. “You know very well, Don Luis, you all seem strangely
cold to us.”
“Cold!” cried Vickers, with a laugh; “well, I never was accused of
that before. Wait till you see my letter to Nellie: for of course it will be
to Nellie that I shall write. Or no, I can’t, for I’m not sure of the last
name. No. I’ll write the old man after all. ‘Dear Sir: It is my task to
communicate a piece of news which must necessarily give you pain.’
(I wish I knew how much the old boy would really care.) ‘Your son
expired yesterday in the performance of the bravest action that it has
ever been my good fortune to see, or hear tell of. As you probably
know, Mr. Lee held a position of some responsibility in the railroad.’
(It is a responsibility to keep the bar.) ‘Yesterday we were all standing
about after working hours’ (I wonder when Lee’s began), ‘when a
dispute arose between two of the men. In these hot climes tempers
are easily roused, and words too quickly lead to blows, and blows to
weapons. We all saw it, and all stood hesitating, when your son
stepped forward and flung himself between the two. I grieve to say
that he paid for his nobility with his life. It may be some satisfaction
to you to know, my dear sir, that one of the boys whose life he saved,
for both were hardly full grown, was the only son of a widowed
mother.’ We could not make them both only sons of widowed
mothers, could we? When are you going to bury him?”
“To-morrow.”
“Let me chip in for the funeral. We’ll have it handsome while we
are about it. I must not stay now. Give me the letters, and I’ll get it off
by to-morrow’s steamer. I’ll make it a good one, but I need time. And
I have a report to write for the President, on the progress of my
troop. Have you seen them? Don’t they do me credit?”
Doctor Nuñez looked at him gravely, as he stooped his head and
passed out into the sunlight. As he was gathering up the reins, the
older man said suddenly,
“Don Luis, would you be very much of a Yankee if I offered you a
piece of advice?”
“Very much of a Yankee? I don’t understand. I should be very
uncommonly grateful. Your advice is rare. What is it? To give up
whiskey?”
“No, but to give up Cortez. He is in bad odor with the President.”
“Oh, I know, I know, but if I changed my friends in order to choose
adherents of the administration—! However, I am an administration
man. I am almost in the army.”
“Not always the safest place to be.”
“Oh, Cortez is all right, Doctor. You don’t do Cortez justice.”
“On the contrary,” said the doctor, “I do him full justice. I do him the
justice of thinking him a very brilliant man,—but I do not walk about
arm in arm with him in broad daylight. Is he coming to the party this
evening?”
“I expect him.”
“You could not put him off?”
“Hardly. He brings the phonograph to amuse the señoritas. Now,
come, Doctor, you would not cut me off from the only man in the
country who owns a talking-machine?”
The doctor sighed. “I knew you would be a Yankee,” he said, and
turned and walked into the house, while Vickers rode away,
resuming his song about his indifference as to the fit of his boots.
Vickers’s house was on the slopes of the hills, and a steep little
white adobe stairway led up to it. The house itself was a blue-green
color, and though from the outside it presented an appearance of
size, it was literally a hollow mockery, for the interior was taken up
with a square garden, with tiled walks, and innumerable sweet-
smelling flowers. Round the inner piazza or corridor there were
arches, and in these Vickers had hung orchids, of which he was
something of a fancier. In the central arch was a huge gilded
birdcage in which dangled a large bright-colored macaw.
“You beauty,” said Vickers, stopping for an instant as he crossed
the hallway.
The macaw hunched his shoulders, shifted his feet on the perch,
and said stridently,
“Dame la pata.”
“You betcher life!” said Vickers, thrusting his finger between the
bars. The two shook hands solemnly, and Vickers went on his way to
the dining-room, shouting at the top of a loud voice,
“Ascencion, almuerzo.”
An instant later he was being served with coffee, eggs, and a
broiled chicken by an old woman, small, bent, wrinkled, but plainly
possessed of the fullest vitality.
“And what are you going to give us for supper to-night?” Vickers
asked, with his mouth full.
With some sniffing, and a good deal of subterranean grunting,
Ascencion replied that she did not know what to give los Americanos
unless it were half an ox.
“Ah, but the lovely señoritas,” said Vickers.
A fresh outburst of grunting was the reply. “Ah, the Señorita
Rosita. I have already had a visit from her this morning. She comes
straight into my kitchen,” said the old woman. “She expects to live
there some day.”
“In the kitchen, Ascencion!” said her employer. “You talk as if she
were a rat.”
“Oh, you will see. The Señor Don Papa,—he goes about saying
that he will marry his daughter to none but foreigners,—that they
make the best husbands.”
“So they do.”
“Oh, very well, very well, if you are satisfied. It makes no
difference to me. It is all the same to me that every one says this is a
betrothal party, and the niña does not deny it.”