Chrisina Jayne
Lazaros Iliadis (Eds.)
Engineering Applications
of Neural Networks
17th International Conference, EANN 2016
Aberdeen, UK, September 2–5, 2016
Proceedings
Communications
in Computer and Information Science 629
Commenced Publication in 2007
Founding and Former Series Editors:
Alfredo Cuzzocrea, Dominik Ślęzak, and Xiaokang Yang
Editorial Board
Simone Diniz Junqueira Barbosa
Pontifical Catholic University of Rio de Janeiro (PUC-Rio),
Rio de Janeiro, Brazil
Phoebe Chen
La Trobe University, Melbourne, Australia
Xiaoyong Du
Renmin University of China, Beijing, China
Joaquim Filipe
Polytechnic Institute of Setúbal, Setúbal, Portugal
Orhun Kara
TÜBİTAK BİLGEM and Middle East Technical University, Ankara, Turkey
Igor Kotenko
St. Petersburg Institute for Informatics and Automation of the Russian
Academy of Sciences, St. Petersburg, Russia
Ting Liu
Harbin Institute of Technology (HIT), Harbin, China
Krishna M. Sivalingam
Indian Institute of Technology Madras, Chennai, India
Takashi Washio
Osaka University, Osaka, Japan
More information about this series at http://www.springer.com/series/7899
Editors
Chrisina Jayne, Robert Gordon University, Aberdeen, UK
Lazaros Iliadis, Lab of Forest Informatics (FiLAB), Democritus University of Thrace, Orestiada, Greece
General Chair
Chrisina Jayne, Robert Gordon University, UK
Advisory Chair
Nikola Kasabov, Auckland University of Technology, New Zealand
Program Chairs
Chrisina Jayne, Robert Gordon University, UK
Lazaros Iliadis, Democritus University of Thrace, Greece
Program Committee
A. Canuto Federal University of Rio Grande do Norte, Brazil
A. Petrovski Robert Gordon University, UK
B. Beliczynski Institute of Control and Industrial Electronics, Poland
D. Coufal Czech Academy of Sciences
D. Pérez University of Oviedo, Spain
E. Kyriacou Frederick University, Cyprus
H. Leopold Austrian Institute of Technology GmbH, Austria
I. Bukovsky Czech Technical University in Prague, Czech Republic
J.F. De Canete Rodriguez University of Malaga, Spain
K.L. Kermanidis Ionian University, Greece
K. Margaritis University of Macedonia, Greece
M. Holena Academy of Sciences of the Czech Republic
M. Fiasche Politecnico di Milano, Italy
M. Trovati Derby University, UK
N. Wiratunga Robert Gordon University, UK
P. Hajek University of Pardubice, Czech Republic
P. Kumpulainen Tampere University of Technology, Finland
S. Massie Robert Gordon University, UK
V. Kurkova Czech Academy of Sciences
Z. Ding Hewlett Packard Enterprise, USA
Supporting Organizations
Semi-supervised Modeling
Classification Applications
Clustering Applications
Elastic Net Application: Case Study to Find Solutions for the TSP in a Beowulf Cluster Architecture
Marcos Lévano and Andrea Albornoz
Predictive Model for Detecting MQ2 Gases Using Fuzzy Logic on IoT Devices
Catalina Hernández, Sergio Villagrán, and Paulo Gaona
Time-Series Prediction
Learning-Algorithms
Short Papers
Tutorials
1 Introduction
One of the important aspects of artificial intelligence is the ability of autonomous
agents to behave effectively and realistically in a given task. There is a rising
demand for applications in which agents can act and make decisions similar to
human behavior in order to achieve a goal. Imitation learning is a paradigm in
which an agent learns how to behave by observing demonstrations of correct
© Springer International Publishing Switzerland 2016
C. Jayne and L. Iliadis (Eds.): EANN 2016, CCIS 629, pp. 3–17, 2016.
DOI: 10.1007/978-3-319-44188-7_1
4 A. Hussein et al.
the policy with a limited number of queried instances. Once trained, the agent is
able to extract features from the scene and predict actions in real time. We
conduct our experiments on a benchmark testbed that makes it straightforward to
replicate our results and compare with other approaches.
Benchmark environments are useful tools for evaluating intelligent agents.
A few benchmarks are available for 2D tasks, such as [3,15,25], and are being
increasingly employed in the literature. 3D environments, however, have not been
as widely explored, although they provide a closer simulation of real robotic
applications. We use mash-simulator [19] as our testbed to facilitate the
evaluation and comparison of learning methods. It is also convenient for extending
the experiments to different navigation tasks within the same framework.
In the next section we review related work. Section 3 describes the proposed
methods. Section 4 details our experiments and results. Finally we present our
conclusions and discuss future steps in Sect. 5.
2 Related Work
2.1 Navigation
Navigation tasks have been of interest in AI in general, and in imitation learning
specifically, from an early stage. Sammut et al. [29] provide an early example
of an aircraft learning autonomous flight from demonstrations provided via
remote control. Later research tackles more elaborate navigation problems,
including obstacles and objects of interest. Chernova et al. [7] use Gaussian mixture
models to teach a robot to navigate through a maze. The robot is fitted with
an IR sensor to provide information about the proximity of obstacles. This data,
coupled with input from a teacher controlling the robot, is used to learn a policy.
The robot is then able to make a decision to execute one of four motion
primitives (unit actions) based on its sensory readings. In [10] the robot uses a laser
sensor to detect and recognize objects of interest. A policy is learned to predict
subgoals associated with the detected objects rather than directly predicting
the motion primitives. Such sensing methods provide an abstract view of the
environment, but can’t convey visual details that might be needed for intelligent
agents to mimic human behavior. [22] use neural networks to learn a policy for
driving a car in a racing game using features extracted from the game engine (such
as the position of the car relative to the track). Driving is a complex task compared
to other navigation problems due to the complexity of the possible actions. The
outputs of the neural network in [22] are high DOF low level actions. However,
the features extracted from the game engine to train the policy would be dif-
ficult to extract in the real world. Advances in computational resources have
prompted the use of visual data over simpler sensory data. Visual sensors provide
detailed information about the agent's surroundings and are suitable for use in
real-world applications. In [28] a policy for a racing game is learned from visual
data. Demonstrations are provided by capturing the game's video stream and the
controller input. The raw frames (downsampled), without extracting engineered
features, are used as input to train a neural network.
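The raw-pixels-to-actions setup described above is, at its core, supervised learning. The sketch below illustrates that idea with a deliberately minimal linear softmax policy; the frame size, action set, and learning rate are illustrative stand-ins, not details taken from [28]:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins: 100 downsampled grayscale frames (16x16)
# paired with one of 3 demonstrated actions (e.g. left/straight/right).
frames = rng.random((100, 16, 16))
actions = rng.integers(0, 3, size=100)

X = frames.reshape(len(frames), -1)   # raw pixels, no engineered features
W = np.zeros((X.shape[1], 3))         # linear policy weights
b = np.zeros(3)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Plain gradient descent on the cross-entropy between predicted and
# demonstrated actions: the core of learning a policy by imitation.
for _ in range(200):
    p = softmax(X @ W + b)
    p[np.arange(len(actions)), actions] -= 1.0   # dL/dlogits
    W -= 0.1 * (X.T @ p) / len(X)
    b -= 0.1 * p.mean(axis=0)

policy_action = np.argmax(X[:1] @ W + b)   # predict an action for a frame
```

In practice a convolutional network replaces the linear map, but the demonstration-matching loss is the same.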
Deep learning methods are highly effective in problems that don’t have estab-
lished sets of engineered features. CNNs have been used with great success to
extract features from images. In recent studies [20,21] CNNs are coupled with
reinforcement learning to learn several Atari games. A sequence of raw frames is
used as input to the network and trial and error is used to learn a policy. Trial
and error methods such as reinforcement learning have been extensively used
to learn policies for intelligent agents [16]. However, providing demonstrations
of correct behavior can greatly expedite the learning rate. Moreover, learning
through trial and error can lead the agent to learn a way of performing the
task that doesn't seem natural or intuitive to a human observer. In [12] learning
from demonstrations is applied to the same Atari benchmark. A supervised
network is used to train a policy using samples from a high-performing but
non-real-time agent. This approach is reported to outperform agents that learn from
scratch through reinforcement learning. Other examples of using deep learning
to play games include learning the game of Go using supervised convolutional
networks [9] and a combination of supervised and reinforcement learning [33].
These examples all focus on learning 2D games that have a fixed view. However
in real applications, visual sensors would capture 3D scenes, and the sensors
would most likely be mounted on the agent which means it is unrealistic to have
a fixed view of the entire scene at all times.
In [18] a robot is trained to perform a number of object manipulation tasks.
First a trajectory is learned using reinforcement learning with the position of
the objects and targets known to the robot. These trajectories then serve as
demonstrations to train a supervised convolutional neural network. In this case no
demonstrations need to be provided by a teacher. However, this approach
requires expert knowledge for the initial setup of the reinforcement learning
phase. Compared to related work that employs deep learning to teach an
intelligent agent, this is a realistic application implemented with a physical robot.
However, the features are extracted from a set scene with small variations. This
is different from applications where the agent moves and turns around, completely
altering its view in the process.
1 Introduction
Motion has been one of the main cues studied in Computer Vision and Pattern
Recognition. As stated by David Marr [1]: “Motion pervades the visual world, a
circumstance that has not failed to influence substantially the process of evolution.
The study of visual motion is the study of how information about only the
organization of movement in an image can be used to make inferences about the
structure and the movement of the outside world”.
In many cases motion enables three-dimensional perception (consider for example
the counter-rotating cylinders effect described by Ullman [2]) and, in its simplest form,
explains the unrivaled ability of humans to perform scene segmentation and object
tracking [3].
Object tracking can be defined as the estimation of the trajectory of an object in the
image plane as it moves around a scene. Errors in tracking are often due to abrupt
changes in object motion, changes in the object's appearance, non-rigidity of the
object, occlusions and non-linear camera motion [4]. The robustness of the representation of target
appearance, against these and other unpredictable events, is crucial to successfully track
objects over time. Interestingly, assumptions are often made to constrain the tracking
problem within the context of a particular application.
Recent tracking algorithms are classified into two major categories, based on the
learning strategy adopted: generative and discriminative methods. Generative methods
describe the target appearance by a statistical model estimated from the previous frames.
To maintain the integrity of the target appearance model, various approaches have
been proposed, including sparse representation [5, 8, 9] and on-line density
estimation [10]. On the other hand, discriminative methods [11, 13] directly
implement classifiers to discriminate the target from the surrounding background.
Several learning algorithms have been adopted, including on-line boosting [13],
multiple instance learning [11], structured support vector machines [12] and
random forests [14, 15]. These approaches are often limited by the adoption of
hand-crafted features for target representation, such as iconic templates, Haar-like
features, histograms and others, which may not generalize well to handle the
challenges arising in video sequences from everyday-life scenes.
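The discriminative recipe above, a classifier updated on-line with the current target patch as a positive example and surrounding background patches as negatives, can be sketched as follows. The feature vectors are random stand-ins and the model is a plain logistic regression; no specific cited tracker is implemented here:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 64                  # illustrative patch-feature dimension
w = np.zeros(dim)         # linear discriminative model

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def online_update(w, feats, labels, lr=0.1):
    """One SGD pass: labels are 1 for target patches, 0 for background."""
    for x, y in zip(feats, labels):
        w = w + lr * (y - sigmoid(w @ x)) * x
    return w

# Per frame: one positive sample at the tracked location, a few negatives
# sampled from the surrounding background (random stand-ins here).
for frame in range(20):
    target = rng.normal(1.0, 0.5, dim)       # stand-in target features
    negs = rng.normal(-1.0, 0.5, (4, dim))   # stand-in background features
    feats = np.vstack([target[None, :], negs])
    labels = np.array([1, 0, 0, 0, 0])
    w = online_update(w, feats, labels)

score = sigmoid(w @ rng.normal(1.0, 0.5, dim))  # target-like input scores high
```

At test time the candidate window with the highest classifier score is taken as the new target location.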
In this paper, an original method is proposed for robust visual tracking, based on a
combination of image prediction and weighted correlation matching techniques. Image
prediction is based on a novel recurrent neural network which can be easily generalized
to track any visual pattern in dynamic scenes. The proposed approach is derived from
both Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs).
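As a rough illustration of the correlation-matching half of such a pipeline, the sketch below implements weighted normalized cross-correlation and an exhaustive search for the best-matching window. The uniform default weighting and the brute-force search are simplifying assumptions, not the authors' method:

```python
import numpy as np

def ncc(patch, template, weights=None):
    """Weighted normalized cross-correlation between equal-size patches."""
    if weights is None:
        weights = np.ones_like(template)
    p = patch - np.average(patch, weights=weights)
    t = template - np.average(template, weights=weights)
    num = np.sum(weights * p * t)
    den = np.sqrt(np.sum(weights * p * p) * np.sum(weights * t * t))
    return num / den if den > 0 else 0.0

def match(frame, template, weights=None):
    """Slide the template over the frame; return the best-scoring location."""
    th, tw = template.shape
    H, W = frame.shape
    best, best_xy = -np.inf, (0, 0)
    for y in range(H - th + 1):
        for x in range(W - tw + 1):
            s = ncc(frame[y:y + th, x:x + tw], template, weights)
            if s > best:
                best, best_xy = s, (y, x)
    return best_xy, best
```

A real tracker would restrict the search to a window around the predicted position rather than scanning the whole frame.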
A Recurrent Neural Network (RNN) is an artificial neural network with feedback
connections between nodes, with the capability to model dynamic systems [16].
The Elman neural network, also known as the Simple Recurrent Network (SRN), is
a partially recurrent neural network first proposed by Elman [17]. Because of the
context neurons and the local recurrent connections between the context layer and
the hidden layer, the Elman network has several dynamic advantages over a static
neural network. Training and convergence of SRNs usually take a long time, which
makes them impractical in time-critical applications [10] and/or when dealing with
high-resolution images. Therefore, to efficiently process high-resolution images, a
compromise between the representation power and the dimension of the network
must be sought.
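A minimal forward pass of an Elman SRN makes the role of the context layer concrete: the hidden state at each step depends on the current input and on a copy of the previous hidden state. Dimensions and weights below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

n_in, n_hidden, n_out = 4, 8, 2   # illustrative dimensions

W_xh = rng.standard_normal((n_hidden, n_in)) * 0.1
W_ch = rng.standard_normal((n_hidden, n_hidden)) * 0.1  # context -> hidden feedback
W_hy = rng.standard_normal((n_out, n_hidden)) * 0.1
context = np.zeros(n_hidden)      # context units: copy of previous hidden state

def srn_step(x, context):
    """One Elman/SRN step: hidden state mixes input and previous hidden state."""
    h = np.tanh(W_xh @ x + W_ch @ context)
    y = W_hy @ h
    return y, h                   # h becomes the next context

outputs = []
for t in range(5):                # unroll over a short input sequence
    x = rng.standard_normal(n_in)
    y, context = srn_step(x, context)
    outputs.append(y)
```

The recurrence through `W_ch` is what gives the network memory of past inputs, and also what makes training slow compared with a purely feed-forward network.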
A CNN is a feed-forward artificial neural network where individual neurons are
arranged to respond to overlapping regions in the visual field. A CNN consists of
multiple layers of small neuron collections taking as input small overlapping areas of
the image. The outputs of each layer are tiled and overlapping to better represent the
original image. This feature ensures a reasonable invariance to planar translation on the
image plane [18]. Due to their representation power, Convolutional Neural Networks
have recently attracted considerable attention in the Computer Vision community [7],
particularly for image- and video-based recognition. However, only a few attempts can
be found in the literature to employ CNNs for visual tracking. One reason is that
off-line classifiers require a model of the object's class. On the other hand, performing
on-line learning based on CNNs is not straightforward, due to the large network size and
which a sliding element containing a fulminate strikes. The sliding
block carries a small charge of black powder which is set off by the
fulminate, thus igniting the train which leads to the high explosive
charge detonator. Were this sliding block left free to slide back and
forth at all times it would be unsafe to transport the fuze, as it might
be set off by accident. There must therefore be some means of
holding it safely away from the anvil until it is desired to detonate the
charge. There are thus two conflicting conditions to be met: safety
during transportation and sensitiveness at the point of departure. It
may not be understood at first why sensitiveness at the point of
departure should be a condition to be met. Suffice it to say that all
fuzes are designed to arm at discharge or soon after leaving the
bore for they must be ready to act at any time after leaving the
muzzle. Were they to be safe during flight they might be so safe that
the remaining velocity would not be sufficient to set them off. All
fuzes are designed to arm as we say either during travel through the
bore or immediately after.
Methods of Arming.
Spring method.—Let us suppose that after our projectile has
started on its way the sliding block is free to move within a cavity, at
the forward end of which is the anvil. If the projectile comes to a
sudden stop, or even a sudden reduction of velocity, the block, if
unrestrained, will, according to the principle of inertia, keep on going
till something stops it. The something in this case is the anvil, and the
fulminate cap is set off. But it is not so simple. For while the projectile
is in flight it is acted upon by air resistance and slows down, but
the block in the cavity of the head is not subjected to this resistance.
It therefore gains on the projectile, or creeps forward in the cavity,
unless restrained, as it is, by a spring. Now one more point and this
type of fuze is complete. We supposed that our block was free to
slide. For safety's sake it is pinned to the cavity. Again we call upon
inertia, this time to break the pin so as to leave the block free to slide. The
strength of the pin is calculated so that the force of inertia of the
mass of the block is greater than the resistance of the safety pin;
when the projectile starts, the pin breaks and the spring forces the
block to the rear of the cavity until the sudden stop of the projectile
permits the block to slide forward as explained. Such a fuze requires
a comparatively high initial velocity and is not adapted to howitzers
using low muzzle velocities.
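The design constraint described above (the pin must hold during handling but break at firing setback) can be checked with a back-of-the-envelope inertia calculation. All numbers below are illustrative assumptions, not real fuze data:

```python
# Rough feasibility check for a shear-pin arming design (illustrative
# numbers only): the pin must survive handling but break at firing setback.

g = 9.81                    # m/s^2
block_mass = 0.02           # kg, mass of the sliding block (assumption)
setback_accel = 10_000 * g  # setback on firing is on the order of thousands of g
handling_accel = 50 * g     # accidental jolt during transport (assumption)

setback_force = block_mass * setback_accel      # inertia force at firing
handling_force = block_mass * handling_accel    # inertia force from rough handling

pin_break_force = 200.0     # N, chosen pin shear strength (assumption)

arms_on_firing = setback_force > pin_break_force     # pin breaks when fired
safe_in_transit = handling_force < pin_break_force   # pin holds in transport
```

With these numbers the setback force is roughly 2000 N against a 10 N handling force, so a pin rated between the two satisfies both conditions with a wide margin.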
There are three other methods in use to arm the fuze: inertia of a
sleeve; centrifugal force; and the powder pellet system, that is,
combustion of a grain of powder that holds the sliding block
away from the anvil by means of an arm resting against the unburned
powder grain. These are more sensitive than the type described.
In the first system, a sleeve fitting around the plunger carrying
the cap slides to the rear by inertia when the projectile starts, and two
clips engage in notches on the plunger body, making the sleeve and
plunger thereafter move as one body. They are held together by
a plunger spring which, before arming, held the plunger away from the
anvil. The safety spring held the sleeve and plunger away from the
anvil and, after arming, prevents forward creeping by the plunger and
sleeve, now locked together. Upon striking, the plunger and sleeve
move forward as one body and the cap strikes the anvil.
In centrifugal systems the primer plunger is kept safely away
from the anvil by locks which are kept in place by springs. When the
rotational velocity reaches a certain point, the force of the springs is
overcome by centrifugal force, the locks are thrown aside or
opened, and the plunger is free to move forward on impact.
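The arming threshold of such a centrifugal lock follows from equating the retaining-spring force with the centrifugal force, F = m * omega^2 * r. With illustrative (not historical) numbers:

```python
import math

# At what spin rate does centrifugal force on the lock overcome its
# retaining spring? All values are illustrative assumptions.
lock_mass = 0.005      # kg, mass of one centrifugal lock
radius = 0.01          # m, lock distance from the spin axis
spring_force = 20.0    # N, retaining-spring preload

# Arming threshold: m * omega^2 * r = F_spring
omega_arm = math.sqrt(spring_force / (lock_mass * radius))  # rad/s
rpm_arm = omega_arm * 60 / (2 * math.pi)
```

With these numbers the lock opens at roughly 6,000 rpm, below typical rifled-projectile spin rates, so the fuze would arm in flight but stay locked during handling.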
In the powder pellet system (the one largely used by the
Germans) there is a well or channel filled with compressed powder.
This is set off by a fulminate cap which is fired by inertia, a small
plunger-anvil striking the cap. When the powder is consumed it
leaves a channel into which an arm attached to the sliding block
carrying the igniting fulminate for the charge may slide, thus
permitting the block to slide forward to the anvil fixed in the forward
part of the cavity. The block is held from creeping forward, after the
compressed powder is burned, by a safety spring, thus insuring an
impact sufficiently hard to set off the cap.
Heretofore in our service the fulminating cap has been fixed and
the plunger has carried the anvil, or, as we call it, the firing pin. Such is
now the system in our base detonating fuzes and in our combination
fuze.
The new point detonating fuzes are patterned after the French and
are practically French fuzes.
Fuzes Classification.
Fuzes are classified as:
(a) Percussion, if the fuze acts on impact, producing a low order of
explosion.
(b) Time, if it acts in the air at a certain point of the trajectory.
(c) Combination, if it is able to act either in the air or upon impact.
(d) Detonating, if it contains a fulminate which will bring about
detonation upon impact.
The detonator may be separate or incorporated in the fuze. For
the 75-mm gun and the 155-mm howitzer it forms a part of the fuze.
Many fuzes are armed on set-back. An exception to this is the long
detonating fuze, Mark III, which is armed by the unrolling of a brass
spiral holding together two half rings made of steel, so fitted as to
prevent the anvil and the head of the fuze from getting close
together. The spiral unrolls when the rotational velocity of the
projectile reaches a certain speed, thus drawing away the two steel
rings and arming the fuze.
DETONATING FUZE—MARK-III.
DETONATING FUZE—MARK-V.
Fuze Tables.
Tables showing American and French fuzes to be used by our
Field Artillery, with information concerning markings, color, time of
delay, size of fuze, etc.
DETONATING FUZES.

Fuze                    | Time of delay | Color                              | Size  | Corresponding to        | Cannon
MK I                    | 2/100         | White head                         | Short | Russian 3GT             | 3" gun, for target practice only
M II (non-delay)        | 2/100         |                                    |       |                         | 8", 9.2"
MK II (non-delay)       | 2/100         | White top                          | Short |                         | 204-mm
MK II (short delay)     | 5/100         | Black top                          | Short | Modified                | Gun and Howitzer
M II (long delay)       | 15/100        | Black head                         | Short | Russian                 |
MK III (Supersensitive) | zero          | No color                           | Long  | French IAL              | 75 G; 3.8" G and H; 4.7" G and H; 6" H; 155 H; all gas shells
MK IV (non-delay)       | 2/100         | White top                          | Short | French 24/31 SR (99-15) | Howitzer only
MK IV (short delay)     | 5/100         | Black top                          | Short | French 24/31 AR (99-15) | Howitzer only
MK IV (long delay)      | 15/100        | Black top, violet detonator socket | Short | French 24/31 SR (99-15) | Howitzer only
MK V (non-delay)        | 2/100         | White top                          | Short | French 24/31 SR (99-08) | All guns, but not Howitzers
MK V (short delay)      | 5/100         | Black top                          | Short | French 24/31 AR (99-08) | All guns, but not Howitzers
MK VII (non-delay)      | 2/100         | White                              | Short |                         | 6" T. M.
MK VII (long delay)     | 20/100        | Black top, violet detonator socket | Short |                         | 6" T. M.
COMBINATION FUZES.

Fuze                      | Total time burning (sec.) | Corresponding French type      | On what projectile used  | By what cannon fired    | Wt. of fuze
21 s/comb. F.A., 1907 M.  | 21                        | 22/31M 1897, 24 sec. MKi.      | Com. Shrapnel            | All 3" and 75-mm guns   | 1¼ lbs.
21 s/comb. F.A., 1915 AA. | 21                        | 22/31M 1916, 24 sec. MKi.      | Com. Shrapnel            | All 3" and 75-mm guns   | 1¼ lbs.
31 s/comb. F.A. 1915      | 31                        | 30/55M 1889, 40 sec.           | Com. Shrapnel            | 4.7" gun                | 2 lbs.
45 s/comb. F.A. 1907 M.   | 45                        | 30/55M 1889, 40 sec. MKi.      | Com. Shrapnel            | 155 How.                | Same as above
                          |                           | 30/55M 1913, 40 sec. MKiii AA. | C. S. Shell, AA Shrapnel | 4.7" gun, anti-aircraft |