An Incremental Growing Neural Network and its Application to Robot Control
A. Carlevarino, R. Martinotti, G. Metta and G. Sandini
Lira Lab – DIST – University of Genova
Via Opera Pia, 13 – 16145 Genova, Italy
E-mail: pasa@lira.dist.unige.it
Abstract
This paper describes a novel network model, which is able to control its growth on the basis of the approximation requests. Two classes of self-tuning neural models are considered, namely Growing Neural Gas (GNG) and SoftMax function networks. We combined the two models into a new one: hence the name GNG-Soft networks. The resulting model is characterized by the effectiveness of the GNG in distributing the units within the input space and the approximation properties of SoftMax functions. We devised a method to estimate the approximation error in an incremental fashion. This measure has been used to tune the network growth rate. Results showing the performance of the network in a real-world robotic experiment are reported.
Introduction
This paper describes an incremental algorithm that includes aspects of both the Growing Neural Gas (GNG) (Fritzke, 1994) and the SoftMax (Morasso & Sanguineti, 1995) networks. The proposed model retains the effectiveness of the GNG in distributing the units within the input space, and the "good" approximation properties of the SoftMax basis functions. Furthermore, a mechanism to control the growth rate, based on the estimate of the approximation error, has been included. In the following sections, we briefly describe the peculiarities of the proposed algorithm, and present some experimental results in the context of sensori-motor mapping, where the network has been used to approximate an inverse model. The inverse model is employed in a robot control schema.
Growing Neural Gas and SoftMax Function Networks
Growing Neural Gas is an unsupervised network model, which learns topologies (Fritzke, 1995). A set of units connected by edges is distributed in the input space (a subset of $\mathbb{R}^N$) with an incremental mechanism, which tends to minimize the mean distortion error. GNG networks can be effectively combined with different types of basis functions in order to approximate a given input-output distribution. As cited above, Morasso and coworkers, for instance, proposed a normalized Gaussian function (SoftMax) for their self-organizing network model. The SoftMax functions have the following analytic expression:

$$U_i(x, c_i) = \frac{G(x - c_i)}{\sum_j G(x - c_j)}, \qquad G(x) = e^{-\frac{x^2}{2\sigma^2}} \qquad (1)$$
$c_i$ is the center of the activation function, where $U_i$ has its maximum. Benaim and Tomasini (Benaim & Tomasini, 1991) proposed a Hebbian learning rule for the optimal placement of units:

$$\Delta c_i = \eta_1 \,(x - c_i)\, U_i(x, c_i) \qquad (2)$$
 
where $x \in \mathbb{R}^N$ is an input pattern and $\eta_1$ the learning rate. Indeed, a SoftMax network can learn a smooth non-linear mapping $z = z(x)$. The "reconstruction" formula is:

$$z(x) = \sum_i v_i \, U_i(x, c_i) \qquad (3)$$

where the parameters $v_i$ are the weights of the output layer and $z \in \mathbb{R}^M$. In particular, considering an approximation case, this formula can be interpreted as a minimum variance estimator (Specht, 1990). The learning rule is:

$$\Delta v_i = \eta_2 \,(z - v_i)\, U_i(x, c_i) \qquad (4)$$

The normalizing factor in $U_i$ can be assimilated to a lateral inhibition mechanism (Morasso & Sanguineti, 1995).
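To make the above concrete, the following is a minimal sketch of a SoftMax function network with the updates of equations (1)-(4). It assumes a scalar output, a fixed Gaussian width sigma, and illustrative learning rates; the class and variable names are ours, not the authors'.

```python
import numpy as np

class SoftMaxNet:
    """Minimal sketch of a SoftMax (normalized Gaussian) function network."""

    def __init__(self, centers, sigma=0.1, eta1=0.05, eta2=0.1):
        self.c = np.asarray(centers, dtype=float)   # unit centers c_i, shape (K, N)
        self.v = np.zeros(len(self.c))              # output weights v_i (scalar output assumed)
        self.sigma, self.eta1, self.eta2 = sigma, eta1, eta2

    def activations(self, x):
        # Equation (1): U_i(x, c_i) = G(x - c_i) / sum_j G(x - c_j),
        # with G(x) = exp(-||x||^2 / (2 sigma^2)).
        g = np.exp(-np.sum((np.asarray(x) - self.c) ** 2, axis=1) / (2.0 * self.sigma ** 2))
        return g / (g.sum() + 1e-12)                # normalization acts as lateral inhibition

    def predict(self, x):
        # Equation (3): z(x) = sum_i v_i U_i(x, c_i).
        return self.activations(x) @ self.v

    def update(self, x, z):
        # Equations (2) and (4): move centers toward the input and output
        # weights toward the target, both gated by the unit activations.
        u = self.activations(x)
        self.c += self.eta1 * u[:, None] * (np.asarray(x) - self.c)  # Delta c_i = eta1 (x - c_i) U_i
        self.v += self.eta2 * u * (z - self.v)                       # Delta v_i = eta2 (z - v_i) U_i
```

In use, units placed at some initial centers would be trained online by calling update(x, z) on each incoming sample.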
GngSoft Algorithm
It is now evident how to combine the two previously illustrated self-organizing models to obtain an incremental and plastic network. Moreover, as mentioned before, we inserted a control mechanism based on an online estimate of the approximation error. The global error is estimated from the "sample error" as follows:

$$e_k = \psi_1\, e_{k-1} + (1 - \psi_1)\, g(\zeta, \xi) \qquad (5)$$

and,

$$\dot{e}_k = \psi_2\, \dot{e}_{k-1} + (1 - \psi_2)\,(e_k - e_{k-1}) \qquad (6)$$
The behavior of $e$ resembles that of the instantaneous error $g(\zeta, \xi)$, although with a sort of memory. In fact, it is a low-pass filtered version of the raw error signal. $\psi_i$, $i = 1, 2$, are positive constants, which determine the filters' cut-off frequency. $\dot{e}$ is an estimate of the derivative of the error. The online measures have been used to block the insertion of new units according to the following conditions:

$$e < \mathrm{threshold}_1 \qquad (7)$$

and,

$$\dot{e} > \mathrm{threshold}_2 \qquad (8)$$

The derivative of the error was checked, and insertion was allowed if, and only if, it was approximately zero – i.e. the error itself does not decrease anymore. Figure 1 shows a typical result using the proposed estimation formulas.
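The growth-control loop described above can be summarized in code. The following is a minimal sketch of the filters in equations (5) and (6) and the insertion gate of conditions (7) and (8); the class name, the concrete parameter values, and the use of an absolute value on the derivative in the gating test are our own illustrative assumptions, not taken from the paper.

```python
class ErrorMonitor:
    """Sketch of the incremental error estimate and unit-insertion gate."""

    def __init__(self, psi1=0.999, psi2=0.99, threshold1=0.05, threshold2=1e-3):
        self.psi1, self.psi2 = psi1, psi2
        self.threshold1, self.threshold2 = threshold1, threshold2
        self.e = 0.0        # low-pass filtered error estimate, equation (5)
        self.e_dot = 0.0    # filtered estimate of its derivative, equation (6)

    def step(self, sample_error):
        e_prev = self.e
        # Equation (5): e_k = psi1 * e_{k-1} + (1 - psi1) * g(zeta, xi)
        self.e = self.psi1 * self.e + (1.0 - self.psi1) * sample_error
        # Equation (6): e_dot_k = psi2 * e_dot_{k-1} + (1 - psi2) * (e_k - e_{k-1})
        self.e_dot = self.psi2 * self.e_dot + (1.0 - self.psi2) * (self.e - e_prev)

    def insertion_allowed(self):
        # One reading of conditions (7)-(8): growth stops once the filtered error
        # falls below threshold1, and a new unit is added only when the derivative
        # is approximately zero, i.e. the error has stopped decreasing.
        return self.e >= self.threshold1 and abs(self.e_dot) <= self.threshold2
```

In use, step() would be called with the raw sample error after each training pattern, and a new unit would be inserted only when insertion_allowed() returns True.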
Experimental results
The proposed algorithm (GngSoft network) has been used in two different test-beds. The first experiment concerned a typical vector quantization task, whose goal was to build an appropriate code in a 16- and 64-dimensional space. The second experiment regarded the acquisition of an inverse model in the context of sensori-motor coordination.
 
 
Figure 1: Some experimental results concerning the proposed error estimation formula. The network was trained for 10000 steps. Upper row: (a) the estimated error during the training (the solid horizontal line indicates the threshold), (b) the estimated derivative (according to equation (6)). Lower row: (c) the error estimated independently over a test set four times denser than the training set. $\psi_1$ and $\psi_2$ were 0.999 and 0.99 respectively.
A classical vector quantization experiment has been performed in order to assess the performance of the network in high dimensional spaces – i.e. the ability to map a big input space. The training data were built from images, which were divided in 4×4 and 8×8 blocks and eventually generated a 16- and 64-dimensional training space respectively (a sketch of this block extraction is given below). The training set was created from one image only (512×512 pixels), while the network testing was carried out on three different images (see (Ridella, Rovetta, & Zunino, 1998)). Figure 2 shows the mean square error versus the number of units for the 16- (left) and 64-dimensional (right) case. In figure 3, it is important to note that the CPU time grows only linearly with respect to the number of neurons.

The setup employed for the second experiment consists of the robot head shown in figure 4. The head kinematics allows independent vergence (both cameras can move independently along a vertical axis) and a common tilt motion. Furthermore the neck is capable of a tilt and pan
Figure 2: Mean square error vs. the number of neurons for the 16- (left) and 64-dimensional (right) case respectively.
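As a rough illustration of how the training vectors for the vector quantization test could be built (the paper does not give the details), here is a sketch that splits a grey-level image into 4×4 or 8×8 blocks; the non-overlapping tiling and the [0, 1] scaling are assumptions.

```python
import numpy as np

def image_to_blocks(image, block=4):
    # Split a 2-D grey-level image into non-overlapping block x block tiles and
    # flatten each tile into one training vector (16-D for 4x4, 64-D for 8x8).
    h, w = image.shape
    vectors = [image[r:r + block, c:c + block].ravel()
               for r in range(0, h - block + 1, block)
               for c in range(0, w - block + 1, block)]
    return np.asarray(vectors, dtype=float) / 255.0   # assumed [0, 1] scaling

# A 512x512 image yields 16384 vectors of dimension 16 (4x4 blocks),
# or 4096 vectors of dimension 64 (8x8 blocks).
```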
