www.elsevier.com/locate/neunet
Abstract

We have been studying a system of many harmonic oscillators (neurons) interacting via a chaotic force since 2002. Each harmonic oscillator is driven by a chaotic force whose bifurcation parameter is modulated by the position of the harmonic oscillator. Moreover, a system of mutually coupled chaotic neural networks was investigated. Different patterns were stored in each network, and the associative memory problem was discussed in these networks. Each network can retrieve the pattern stored in the other network.
On the other hand, we have been developing new mechanisms and functions for a humanoid robot with the ability to express emotions and communicate with humans in a human-like manner. We introduced a mental model which consists of the mental space, the mood, the equations of emotion, the robot personality, the need model, the consciousness model and the behavior model. This type of mental model was implemented in Emotion Expression Humanoid Robot WE-4RII (Waseda Eye No. 4 Refined II).
In this paper, an associative memory model using mutually coupled chaotic neural networks is proposed for retrieving the optimum memory (recognition) in response to a stimulus. We implemented this model in Emotion Expression Humanoid Robot WE-4RII.
© 2005 Elsevier Ltd. All rights reserved.
the pattern. Each network retrieves only a single pattern. However, we found that each network can retrieve the pattern stored in the other network if they are coupled; that is, a Virtual Attractor corresponding to the pattern stored in the other network appeared in each network. We describe the model of mutually coupled chaotic neural networks in Section 2 and Emotion Expression Humanoid Robot WE-4RII in Section 3. The relations between human memory and mood, and between human performance and activation level, are shown in Section 4. Subsequently, we develop a co-associative memory model using mutually coupled chaotic neural networks in Section 5. This model is applied to WE-4RII in Section 6.

On the other hand, robots have become indispensable in human life. The most popular robots are of the industrial kind, with various functions such as assembly and conveyance. However, operators have to define their behavior through very complex processes and methods. We hope that personal robots which are active in joint work and community life with humans will become popular in the future. Such personal robots must adapt to their partners and the environment and communicate naturally with humans. Therefore, new mechanisms and functions were developed in order to realize natural communication by expressing emotions, behaviors and personality in a human-like manner. Sony Corporation developed the entertainment humanoid QRIO. It can walk autonomously using information from the CCD cameras on its head and control its behavior using the homeostasis regulation mechanism (Fujita et al., 2003). In addition, it realized behavior module selection and motion modulation by emotion in the Emotionally GrOunded (EGO) Architecture for autonomous robots (Sawada et al., 2004).

We developed the WE-3 (Waseda Eye No. 3) series, which achieved coordinated head-eye motion with V.O.R. (Vestibulo-Ocular Reflex), depth perception using the angle of convergence between the two eyes, adjustment to the brightness of an object with the eyelids, and four sensations: visual, auditory, cutaneous and olfactory. In addition, we produced emotional expressions and various kinds of behavior with the Emotion Expression Humanoid Robot WE-4 (Waseda Eye No. 4) series, which has a face, neck, lungs, waist, 9-DOF emotion expression humanoid arms and the humanoid robot hands RCH-1 (RoboCasa Hand No. 1) (Miwa et al., 2004; Roccella et al., 2004). In addition, a mental model for humanoid robots has been under development from both robotic and psychological perspectives in order to realize human-like motion. We introduced a mental space with three independent parameters, the mood, the second-order equations of emotion, the robot personality (Miwa et al., 2003a), the need model (Miwa et al., 2003b), the consciousness model and the behavior model.

However, the previous robot had just a single kind of recognition in response to a stimulus and showed behavior corresponding to this recognition. Humans recognize a stimulus and generate behavior depending on their mood at that time. We considered that human-like recognition could be realized by developing a memory model that retrieves an optimum memory (recognition) in response to a stimulus. Therefore, we proposed a co-associative memory model using mutually coupled chaotic neural networks and applied this model to Emotion Expression Humanoid Robot WE-4RII (Waseda Eye No. 4 Refined II) (Itoh et al., in press).

2. Neural network model

2.1. Chaotic neuron model

First, we discuss a neuron model that behaves as a simple harmonic oscillator driven by a chaotic force. If the position of the harmonic oscillator at time t is denoted by x(t), the time evolution of the internal state of the neuron is governed by:

ẍ(t) + k ẋ(t) + ω₀² x(t) = f(t),   (1)

where k and ω₀ are the damping constant and the eigenfrequency, respectively, and f(t) is the input from the surrounding neurons. We assumed that f(t) was given by a chaotic force, whose amplitude changes chaotically over the time interval τ:

f(t) = (K/√τ) y(n)   for nτ ≤ t < (n+1)τ   (n = 0, 1, 2, …),   (2)

here K is the magnitude of the chaotic force f(t). The factor 1/√τ is required to obtain a finite diffusion constant in the small-τ limit. In Eq. (2), y(n) denotes the output of the neuron and represents the nth iterate of a map. As an example of this map, we employ the Logistic map:

y(n+1) = r(n)(0.5 − y(n))(0.5 + y(n)) − 0.5   (−0.5 ≤ y(n) ≤ 0.5),   (3)

where r(n) is the bifurcation parameter and y(n) changes chaotically or periodically according to the bifurcation parameter r(n). Since the chaotic output changes stepwise over the time interval τ, we can solve x(t) formally as a function of y(n) and then observe the system stroboscopically over the time interval τ. We denote the internal state and velocity at time t = nτ by x(n) and v(n), respectively. If we solve Eq. (1), we obtain the recurrent relation:

x(n+1) = [1 − ((μ/ω) sin ωτ + cos ωτ) a] K y(n)/(ω₀²√τ)
        + ((μ/ω) sin ωτ + cos ωτ) a x(n) + ((sin ωτ)/ω) a v(n),   (4)
668 K. Itoh et al. / Neural Networks 18 (2005) 666–673
v(n+1) = ((sin ωτ)/ω) a K y(n)/√τ − (ω₀²/ω) a x(n) sin ωτ
        − ((μ/ω) sin ωτ − cos ωτ) a v(n),   (5)

here μ, a and ω are defined by:

a = e^(−μτ),   μ = k/2,   ω = √(ω₀² − μ²).   (6)

If we write ω as:

ω = 2π/(Tτ),   (7)

then T denotes the period of the harmonic oscillator in units of the time interval τ. We proposed a model in which the bifurcation parameter is modulated by the internal state of the neuron as:

r(n) = 4 − b + b cos²(βx(n))   (0 ≤ b ≤ 4),   (8)

where b and β are the control parameters. Since the bifurcation parameter is modulated by the internal state of the neuron as in Eq. (8), chaos is controlled by the neuron itself. Therefore, a new type of feedback mechanism is included in this model. The internal state x(n) determines the bifurcation parameter r(n), which, in turn, determines the dynamics of the chaotic output y(n). The chaotic output then affects the dynamics of the neuron. Thus, the dynamics of the chaotic output is changed by the system itself.

2.2. One neural network

The neural network consists of many harmonic oscillators (neurons) interacting via a chaotic force. We denote the internal state of neuron i at time t = nτ by x_i(n) (i = 1, 2, …, N). The time evolution of the system is defined by:

x_i(n+1) = [1 − ((μ/ω) sin ωτ + cos ωτ) a] K h_i(n)/(ω₀²√τ)
         + ((μ/ω) sin ωτ + cos ωτ) a x_i(n) + ((sin ωτ)/ω) a v_i(n),   (9)

v_i(n+1) = ((sin ωτ)/ω) a K h_i(n)/√τ − (ω₀²/ω) a x_i(n) sin ωτ
         − ((μ/ω) sin ωτ − cos ωτ) a v_i(n),   (10)

y_i(n+1) = r_i(n)(0.5 − y_i(n))(0.5 + y_i(n)) − 0.5,   (11)

r_i(n) = 4 − b + b cos²(βx_i(n)),   (12)

h_i(n) = Σ_{j=1}^{N} W_{i,j} y_j(n),   (13)

here W_{i,j} is the coupling constant from neuron j to neuron i. It is clear from Eqs. (9)–(13) that each neuron is driven by the chaotic force y_i(n) of the surrounding neurons and itself, whose bifurcation parameter r_i(n) is modulated by the internal state x_i(n). To store patterns in the network, we use the Hebb rule to determine W_{i,j} as:

W_{i,j} = (1/N) Σ_{s=1}^{P} ξ_i^s ξ_j^s,   (14)

where ξ^s denotes the stored pattern vector and ξ_i^s takes +1 or −1. The self-coupling constant is equal to 1; W_{i,i} = 1 from Eq. (14). If we put T = 2.0, the neural network can retrieve the original and reverse patterns alternately; that is, this neural network performs retrieval very well.

2.3. Mutually coupled neural networks

In this paper, we study a system where two neural networks (Networks A and B) modeled according to Eqs. (9)–(12) are coupled mutually. We denote the internal state and the velocity of neuron i in Network α (α = A or B) at time t = nτ by x_i^α(n) and v_i^α(n), respectively. As a way to couple the two neural networks, we connected the neurons at the same location in the two networks. That is, neuron i in Network A (B) was connected with neuron i in Network B (A), whose coupling constant is given by Z_i^{A,B} (Z_i^{B,A}), as shown in Fig. 1. Therefore, h_i^α(n) is defined as in Eq. (15) instead of Eq. (13):

h_i^A(n) = Σ_{j=1}^{N} W_{i,j}^A y_j^A(n) + Z_i^{A,B} y_i^B(n),
h_i^B(n) = Σ_{j=1}^{N} W_{i,j}^B y_j^B(n) + Z_i^{B,A} y_i^A(n).   (15)

If the two networks are not coupled and a different pattern is stored in each network, the energy of each network has only one minimum, corresponding to the pattern. Each network retrieves only one pattern. In this model, however, each network can retrieve the pattern stored in the other network if they are coupled.
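As a concrete illustration, the update rules of Sections 2.1–2.3 can be sketched in a few lines of NumPy. The sketch below uses the parameter and initial values reported in Section 6 (K = 14.0, μ = 0.05, T = 2.0, b = 0.05, τ = 0.1, β = 1.2, x_i(0) = 25.0, v_i(0) = 0.0, y_i(0) = 0.2); the network size, the stored patterns and the coupling values Z_i^{A,B}, Z_i^{B,A} are illustrative assumptions, not the paper's settings.

```python
import numpy as np

N = 16
rng = np.random.default_rng(0)
xi_A = rng.choice([-1.0, 1.0], size=N)  # pattern stored in Network A (assumed)
xi_B = rng.choice([-1.0, 1.0], size=N)  # pattern stored in Network B (assumed)

# Hebb rule, Eq. (14), with one pattern per network (P = 1).
W_A = np.outer(xi_A, xi_A) / N
W_B = np.outer(xi_B, xi_B) / N

Z_AB = np.full(N, 30.0)  # Z_i^{A,B}, illustrative value
Z_BA = np.full(N, 0.0)   # Z_i^{B,A}, illustrative value

# Parameter values from Section 6.
K, mu, T, b, tau, beta = 14.0, 0.05, 2.0, 0.05, 0.1, 1.2
omega = 2.0 * np.pi / (T * tau)      # Eq. (7)
omega0 = np.sqrt(omega**2 + mu**2)   # eigenfrequency, inverted from Eq. (6)
a = np.exp(-mu * tau)
c = (mu / omega) * np.sin(omega * tau) + np.cos(omega * tau)
s = np.sin(omega * tau) / omega

def step(x, v, y_own, y_other, W, Z):
    """One stroboscopic update of a whole network: Eqs. (9)-(12) and (15)."""
    h = W @ y_own + Z * y_other                      # Eq. (15)
    xp = K * h / (omega0**2 * np.sqrt(tau))          # static response to the force
    x1 = (1.0 - a * c) * xp + a * c * x + a * s * v  # Eq. (9)
    v1 = (a * s * K * h / np.sqrt(tau)               # Eq. (10)
          - a * (omega0**2 / omega) * np.sin(omega * tau) * x
          - a * ((mu / omega) * np.sin(omega * tau) - np.cos(omega * tau)) * v)
    r = 4.0 - b + b * np.cos(beta * x) ** 2          # Eq. (12), uses x(n)
    y1 = r * (0.5 - y_own) * (0.5 + y_own) - 0.5     # Eq. (11)
    return x1, v1, y1

# Initial values from Section 6.
xA = np.full(N, 25.0); vA = np.zeros(N); yA = np.full(N, 0.2)
xB = np.full(N, 25.0); vB = np.zeros(N); yB = np.full(N, 0.2)
for n in range(500):
    newA = step(xA, vA, yA, yB, W_A, Z_AB)
    newB = step(xB, vB, yB, yA, W_B, Z_BA)
    xA, vA, yA = newA
    xB, vB, yB = newB
```

One can then monitor, for example, the overlap between sign(xA) and the pattern ξ^B over time to look for the Virtual Attractor behaviour described in Section 2.3; no particular retrieval outcome is claimed for the assumed coupling values above.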
3. Emotion Expression Humanoid Robot WE-4RII

3.1. Mechanical hardware

Fig. 2 presents the hardware overview of Emotion Expression Humanoid Robot WE-4RII (Waseda Eye No. 4 Refined II) developed in 2004. It has 59 DOFs for expressing motions and emotions, as shown in Table 1, and four of the five human senses: visual, auditory, olfactory and tactile senses for detecting external stimuli. The hand is the Humanoid Robot Hand RCH-1 (RoboCasa Hand No. 1), which was designed and developed by the ARTS Lab, Scuola Superiore Sant'Anna, in order to express not only emotional expressions but also active behavior.

Table 1

Part      DOF
Neck        4
Eyes        3
Eyelids     6
Eyebrows    8
Lips        4
Jaw         1
Lung        1
Waist       2
Arms       18
Hands      12
Total      59

3.2. Previous mental model

The mental model was developed with a 3D mental space consisting of pleasantness, activation and certainty, the mood, the second-order equations of emotion, the robot personality (Miwa et al., 2003a), the need model in order to realize not only passive motion but also active motion (Miwa et al., 2003b), the consciousness model in order to clarify the object of the robot's behavior, and the behavior model in order to generate various kinds of behavior.

The 3D mental space is shown in Fig. 3. The Emotion Vector E was defined in the mental space as the robot's mental state:

E = (E_P, E_A, E_C),   (16)

where E_P is the pleasantness component of the emotion, E_A is the activation component and E_C is the certainty component. Seven different emotions, namely happiness, anger, surprise, sadness, fear, disgust and neutral, are mapped into the space as shown in Fig. 4. The robot determines its emotion depending on the emotional region traversed by the Emotion Vector E.

The mental state is affected not only by the emotion but also by the mood. Therefore, the Mood Vector M, consisting of a pleasantness component and an activation component, was introduced.

Fig. 2. Emotion expression humanoid robot WE-4RII.
Fig. 3. 3D mental space and emotion vector E to define the robot mental state.
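The determination of the emotion from the Emotion Vector can be illustrated with a small sketch. All numeric region boundaries below are invented placeholders (the paper defines the actual emotional regions geometrically in Fig. 4); only the interface, a vector E = (E_P, E_A, E_C) mapped to one of the seven emotions, follows the text.

```python
# Hypothetical sketch: map Emotion Vector E = (E_P, E_A, E_C) to one of the
# seven emotions. Every boundary value here is an assumption made for
# illustration only; the paper's regions are defined in the 3D mental space.
def determine_emotion(E):
    E_P, E_A, E_C = E
    if abs(E_P) + abs(E_A) + abs(E_C) < 1000.0:  # assumed 'neutral' region near the origin
        return "neutral"
    if E_P > 0:                                   # pleasant half-space
        return "surprise" if E_C < 0 else "happiness"
    if E_A > 0:                                   # unpleasant and aroused
        return "anger" if E_C > 0 else "fear"
    return "sadness" if E_C > 0 else "disgust"    # unpleasant and calm
```

For instance, a strongly pleasant, aroused and certain state such as E = (5000, 3000, 2000) falls into the assumed 'happiness' region of this sketch.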
Fig. 5. Number of retrieving times of each memory against Z_i^{A,B}: (a) Z_i^{B,A} = 0.0; (b) Z_i^{B,A} = −4.0.
Fig. 6. Relation between performance and activation level.
Fig. 7. New mental model with co-associative memory model for humanoid robot control.

which the robot recognizes 'Red' as 'Red' itself was defined. Accordingly, human-like recognition is realized.

On the other hand, we controlled the time required for memory retrieval depending on the activation component of the robot emotion in order to realize the relation between performance and activation level. A robot with a suitable activation level can retrieve a memory corresponding to its mood. However, the robot needs time for memory retrieval if the activation component E_A of the emotion becomes too high or too low. The relation between performance and activation level was assumed as shown in Fig. 6. Moreover, we considered that the wrong memory is sometimes retrieved for a too high or too low activation level. Fig. 5(b) shows the number of times Network A retrieved each memory for Z_i^{B,A} = −4.0. Network A mostly retrieved 'Tomato' in the smaller half of the Z_i^{A,B} range and 'Apple' in the larger half. However, it sometimes retrieved another memory. Therefore, we defined Z_i^{B,A} as equal to 0.0 for a suitable activation level and to −4.0 for a too high or too low activation level. Fig. 7 shows a new mental model for humanoid robots, including a co-associative memory model.

6. Experimental result

We evaluated the new co-associative memory model through implementation in Emotion Expression Humanoid Robot WE-4RII. We set the parameter values as K = 14.0, μ = 0.05, T = 2.0, b = 0.05, τ = 0.1 and β = 1.2, and the initial values as x_i(0) = 25.0, v_i(0) = 0.0 and y_i(0) = 0.2. Figs. 8 and 9 show the time evolution of the robot emotion, the robot mood and the retrieved memory.

At first, we describe the results of mood state-dependency and mood congruency. In Fig. 8(a), we showed the red ball to the robot. Since the robot started to feel pleasant when stroked, it retrieved the pleasant memory 'Apple' and exhibited the 'Happiness' emotional expression. The pleasantness level E_P became higher once 'Apple' was retrieved. Next, the robot became unpleasant by recognizing ammonia. The robot retrieved the unpleasant memory 'Tomato' and exhibited the 'Disgust' emotional expression. E_P dropped when retrieving 'Tomato'. In Fig. 8(b), the robot started feeling unpleasant after being hit. Therefore, it retrieved the unpleasant memory 'Tomato'. When the robot moved considerably, it felt hungry. Then it retrieved the pleasant memory 'Apple' in spite of the unpleasant mood due to
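The switching rule for the backward coupling constant stated above can be written as a one-line policy. The band limits delimiting a 'suitable' activation level are assumptions for illustration; the paper only distinguishes a suitable level from a too high or too low one, with Z_i^{B,A} = 0.0 and −4.0, respectively.

```python
# Sketch of the stated rule: Z_i^{B,A} is 0.0 for a suitable activation
# level and -4.0 for a too high or too low one. The band limits (low, high)
# are assumed values, not taken from the paper.
def backward_coupling(E_A, low=-3000.0, high=3000.0):
    return 0.0 if low <= E_A <= high else -4.0
```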
Fig. 8. Experimental results of mood state-dependency and mood congruency by implementing in WE-4RII.
Fig. 9. Experimental results of the relation between performance and activation level by implementing in WE-4RII.
hunger. The robot takes the apple and mimics eating it. If the robot's hunger is satisfied by breathing in its favorite smell, the robot becomes happy.

Next, we considered the relation between performance and activation level. Fig. 9(a) shows the result for a medium activation level and Fig. 9(b) shows the result for an excessively high activation level. It is clear from Fig. 9(a) that it took only about 2 s for the robot from looking at the red ball until retrieving 'Tomato'. On the other hand, in Fig. 9(b) the robot needed time for memory retrieval, since the activation level became very high on receiving the strong stimulus 'Hit'. It took about 9 s. In addition, the robot retrieved the wrong memory 'Apple', in spite of an unpleasant mood. Thus, we confirmed the ability of the robot to recognize a stimulus depending on the mood and appetite and to show optimum behavior. In addition, the retrieval time changed according to the activation level, and the robot sometimes made a wrong recognition at an overly high or low activation level.

7. Conclusions

We proposed a co-associative memory model in order to realize human-like recognition. We used mutually coupled chaotic neural networks for this model, which consist of many harmonic oscillators (neurons) interacting via a chaotic force. This memory model was implemented in Emotion Expression Humanoid Robot WE-4RII. We confirmed the ability of the robot to retrieve the optimum memory according to the mood and appetite in response to a stimulus. Moreover, the time taken to retrieve a memory was controlled by the activation level. The robot could retrieve the memory corresponding to its mood at the time for a medium activation level. However, it sometimes retrieved the wrong memory for an overly high or low activation level. Consequently, the robot shows optimum behavior according to the mood, appetite and activation level.

In this paper, the robot's mood was divided into the two cases of pleasantness and unpleasantness. However, we consider that it is necessary for natural communication with humans to express the robot's mood continuously. In addition, we will study a method to retrieve far more memories.
Acknowledgements

The authors would like to express their thanks to Okino Industries LTD, OSADA ELECTRIC CO. LTD, SHARP CORPORATION, Sony Corporation, Tomy Company LTD and ZMP INC. for their financial support for HRI. The authors would also like to thank the Italian Ministry of Foreign Affairs, General Directorate for Cultural Promotion and Cooperation, for its support to the establishment of the ROBOCASA laboratory and for the realization of the two artificial hands. In addition, this research was supported by a Grant-in-Aid for the WABOT-HOUSE Project by Gifu Prefecture. Finally, the authors would like to express thanks to the ARTS Lab, NTT Docomo, SolidWorks Corp., the Advanced Research Institute for Science and Engineering of Waseda University, and Prof. Hiroshi Kimura for their support to our research.

References

Fujita, M., Kuroki, Y., Ishida, T., & Doi, T. (2003). Autonomous behavior control architecture of entertainment humanoid robot SDR-4X. Proceedings of the IROS2003, (pp. 960–967).
Hebb, D. O. (1975). Koudougaku Nyumon (in Japanese). Kinokuniya Syoten. (pp. 257–262).
Hopfield, J. J. (1984). Neurons with graded response have collective computational properties like those of two-state neurons. Proceedings of the National Academy of Sciences USA, 81, 3088–3092.
Hopfield, J. J., & Tank, D. W. (1985). Neural computation of decisions in optimization problems. Biological Cybernetics, 52, 141–152.
Itoh, K., & Shimizu, T. (2000a). New type of feed-back mechanism with bifurcation parameter modulation. American Institute of Physics Conference Proceedings, 519, 649–651.
Itoh, K., & Shimizu, T. (2000b). Cooperative phenomena between the chaotic force and the harmonic oscillator. Proceedings of NOLTA2000, (pp. 645–648).
Itoh, K., & Shimizu, T. (2002). The virtual attractor in mutually coupled networks. Journal of the Korean Physical Society, 40(6), 1018–1022.
Itoh, K., Miwa, H., Nukariya, Y., Zecca, M., Takanobu, H., Dario, P., & Takanishi, A. (in press). New memory model for humanoid robots—introduction of co-associative memory using mutually coupled chaotic neural networks. Proceedings of the IJCNN2005.
Miwa, H., Okuchi, T., Itoh, K., Takanobu, H., & Takanishi, A. (2003a). A new mental model for humanoid robots for human friendly communication—introduction of learning system, mood vector and second order equations of emotion. Proceedings of the ICRA2003, (pp. 3588–3593).
Miwa, H., Itoh, K., Ito, D., Takanobu, H., & Takanishi, A. (2003b). Introduction of the need model for humanoid robots to generate active behavior. Proceedings of the IROS2003, (pp. 1400–1406).
Miwa, H., Itoh, K., Matsumoto, M., Zecca, M., Takanobu, H., Roccella, S., Carrozza, C. M., Dario, P., & Takanishi, A. (2004). Effective emotional expressions with emotion expression humanoid robot WE-4RII. Proceedings of the IROS2004, (pp. 2203–2208).
Roccella, S., Carrozza, C. M., Cappiello, G., Dario, P., Cabibhan, J., Zecca, M., Miwa, H., Itoh, K., Matsumoto, M., & Takanishi, A. (2004). Design, fabrication and preliminary results of a novel anthropomorphic hand for humanoid robotics: RCH-1. Proceedings of the IROS2004, (pp. 266–271).
Sawada, T., Takagi, T., & Fujita, M. (2004). Behavior selection and motion modulation in emotionally grounded architecture for QRIO SDR-4X II. Proceedings of the IROS2004, (pp. 2514–2519).
Shimizu, T. (1998). Chaotic Brownian network. Physica A, 256, 163–177.
Takano, Y. (1995). Ninchi Shinrigaku 2 Kioku (in Japanese). Tokyo Daigaku Shuppan Kai. (pp. 11–13, 240–241).