

Social Robot Partners: Still Sci-fi?

Kadir Firat Uyanik

KOVAN Research Lab.

Dept. of Computer Eng.
Middle East Technical Univ.
Ankara, Turkey

Abstract—Designing an artificial human has always been one of the most exciting dreams of humankind. It has attracted many scientists, engineers and inquisitive people throughout the history of technology. Particularly in the last decade, many roboticists have shifted their fields of interest from robotic manipulation and navigation to humanoid science (e.g. human-robot interaction, social robots, robot learning). Although computational power, sensor technology and production techniques have advanced greatly, the world is still waiting for the first heartbeat of a robot that can recognize itself and its environment, walk around without falling over, communicate with people, do daily-life tasks for and with people, and learn how to behave properly in unanticipated situations. It is obvious that robotics still has a long way to go. The question is, "How complicated can it really be?"

Fig. 1. First programmable humanoid robotic system
I. INTRODUCTION

A. Historical Notes

Artificial humans, human-shaped mechanisms and human-like automata are nothing new for mankind. Greek myths, such as those of Hephaestus and Talos, tell of golden robots and bronze human machines. The Chinese artificer Yan Shi designed a mechanical handiwork [1] that was able to sing and act, around 1000 BC. In the eighth century, the Muslim alchemist Jabir ibn Hayyan (latinized as Geber) gave recipes for artificial slave humans in his Book of Stones, in pursuit of the ultimate goal, takwin.1 Ebu'l Iz (Al-Jazari) is known as the creator of the first programmable humanoid robot, in 1206 [2]. His mechanism was a programmable drum machine consisting of four automatic musicians in a boat floating on a lake, built to entertain guests during royal drinking parties. The melody of the music was changed by moving pegs, in what may be called programming. According to Charles B. Fowler, "more than fifty facial and body actions can be generated during each musical selection" [3]. Leonardo Da Vinci designed a humanoid automaton in 1495. Leonardo's robot was capable of humanlike movements such as sitting up and moving its arms, neck and anatomically correct jaw. Late in the 1700s, Wolfgang von Kempelen built the Turk, a chess-playing humanoid automaton controlled by a human hidden inside its cabinet. In the same century, Jacques de Vaucanson built The Flute Player, a life-size figure of a shepherd, and The Tambourine Player. Pierre Jaquet-Droz, his son and Jean-Frederic Leschot built the Musician, the Drawer and the Writer, which were controlled by operators so as to realize some basic tasks, such as playing an instrument, drawing a woman's picture and writing texts up to 40 letters long.

1 The act of takwin is an emulation of the divine, creative and life-giving powers of God.

Throughout years of study, humanoid robots have become more and more complicated. After the middle of the 20th century, many theoretical models of biped locomotion were suggested and the first active anthropomorphic exoskeleton was built [6] at Waseda University in Tokyo. During the 1990s there were many humanoid robots, like the famous ASIMO (Advanced Step in Innovative MObility), able to walk on two legs and even run fairly well, or the robot Cog, designed by Rodney Brooks at MIT, which is intended to emulate human
thought and learn how to behave by experiencing the world as we humans do. Today, the robotics community tries to make robots more social, more dexterous and more mobile, or in short, much more humanlike.

Fig. 2. Left: Leonardo's robot, a knight. Right: Reconstruction of the Turk, a chess-playing humanoid robot controlled by a human operator

B. Converging to human day by day

Sixty-five years ago, there was only one working computer in the world, a computer that, to debug a program, you had to open up and walk inside (see "the first computer bug" in the collections of the US Naval Historical Center). More strangely, people of that time confidently predicted that the United States would only ever need six of these machines, which is certainly not the case today.

Just 40 years after the first computer, we got robots, at least in factories, where the environment is well structured and the working space is fully under control. Nevertheless, RoboCup2 aims to build a team of fully autonomous humanoid robot soccer players that, complying with the official FIFA rules, shall win a soccer game against the winner of the most recent World Cup by 2050. This may imply that we can have robot partners, companions and assistants in 40 years, just as we have computers, laptops and PDAs today.

2 One of the most well-known annual robotics competitions, started in 1997.

The rest of this article examines how scientists cope with the issues related to the humanlikeness of robots, mainly in terms of appearance and intelligence.

Information technology has made remarkable progress recently. The internet, networking and communication have advanced, and the forms of communication and social life have changed considerably. Until now, robots have been to the oceans [27] and volcanoes [28]. They have become helicopters performing inverted flight [10], and have even been to Mars [29]. As a next step, they will enter probably the most sophisticated environment of all, our living rooms (!). They should not only act on the physical objects around them, but also interact with people. They should be capable of doing things not only for us, but also with us, which necessitates generating proper actions in unanticipated situations and understanding human beliefs, desires and physical actions. Hence, those robots should set up human-centric communication, move around environments particularly designed for people, make sense of what they see, hear and touch, and learn to do things in a social manner.

A. Appearance and Interactive Behaviors

People try to make animals, plants or even inanimate objects talk, walk, see and think, or in a way pretend to behave like a human. There are many examples of this attempt in movies (iRobot, Artificial Intelligence, Transformers, Wall-e, Short Circuit etc.), in cartoons (Irona in Richie Rich, Rosie in The Jetsons), in toy designs and advertisements, but, more seriously, in robotics science.

Fig. 3. Ishiguro and his android twin

There are several concerns about the humanlikeness of robots, namely appearance and behavior. To tackle these problems, two different approaches are necessary. One is from a robotics point of view, that is, building humanlike robots based on knowledge from cognitive science. The other is from cognitive science, which uses robots to verify hypotheses so as to understand humans. This interdisciplinary framework is called android science [9] by many Japanese roboticists, like
Prof. Hiroshi Ishiguro, whose lab has child- and adult-sized androids, including his electromechanical twin, shown in figure 3. This robot, Geminoid [11], is not able to walk or create complicated movements autonomously. It is teleoperated by Ishiguro, since AI technology is inadequate, for now, to create humanlike conversations. As can be seen in figure 4, the captured sound and lip movements are encoded and transmitted over the internet to the Geminoid server. The server maintains the state of the conversation and generates the necessary outputs by evaluating the incoming data packets and the state of the conversation. It also generates unconscious behaviors such as breathing, blinking and other hand and head movements.

Fig. 4. Overall system and data flow in the Geminoid system [12].

Ishiguro investigates the following questions by developing this robot:
• How do we define humanlikeness?
• What do human existence and presence mean?
• How does the recognition mechanism work in the human brain?
• Is intelligence or long-term communication the crucial factor in overcoming the uncanny valley?

The uncanny valley is a hypothesis introduced by Masahiro Mori in 1970 [13]. It describes the revulsion felt by human observers when robots, or other things resembling humans, act almost, but not entirely, like actual humans. To bridge this valley, a robot's behaviors and communication capabilities should be as familiar as possible to humans. An interaction that increases familiarity and makes communication smoother is called social interaction [14]. Although it may seem unnecessary, humans chat with each other to accomplish tasks. This interaction may not have an explicit purpose of information exchange, but it serves as the basis of smooth communication. That is why humans will tend to prefer more familiar, in a way more social, robots as their partners among robots having identical functionalities.

Fig. 5. Simplified version of the figure in [13].

A communication robot should have capabilities that androids don't have yet. Firstly, such a robot should be self-contained in terms of its actuation mechanisms, which makes communication more effective. This means there should be no wires or other links that prevent the robot from moving around, and no communication mechanisms, like speakers or microphones, outside of the robot's body, which again limit its travel area. Haptic communication is also important; it makes communication more familiar, and requires touch sensors on the body of the robot.

Communication robots are supposed to serve in various kinds of informational tasks (guide, information booth personnel, shop assistant etc.). This requires enhanced communication skills, like managing turn-taking behavior and performing appropriate listening behavior. During conversation, people switch between different participant roles, such as speaker and addressee. However, it is fairly possible that there will also be other side participants, as well as non-participating bystanders and over-hearers. Although communication robots are still not capable of recognizing speech robustly and generating speech adaptively, it has been shown that gaze behaviors play an important role in establishing and maintaining those conversational roles [15].

Fig. 6. The robot Robovie is managing a conversation (adapted from B. Mutlu et al. 2009)

B. Intelligence and Learning

A humanoid robot should be able to adapt itself to dynamically changing circumstances, and it should also be a quick learner to be useful in human-populated
environments. The degree of intelligence of a robot is generally understood as how successful it is in one or several tasks. For an intelligent robot, achieving a goal or accomplishing a task depends on its perception, decision-making and actuation capabilities. Although actuation is not totally solved yet, as the robots ASIMO (which has zero-moment-point based control and a non-regenerative, highly inefficient actuation system) and Petman [16] (the first robot moving dynamically like a real person, with its heel-toe walking pattern, yet powered by a combustion engine that is improper for indoor environments) show, perception and learning are even less promising.

1) Perception: Humanoid robots should be aware of themselves, and they should also get the necessary information from the outside world to behave successfully. Today, self-awareness can be mimicked by using several sensors: motor encoders, force and tactile sensors, potentiometers and the like provide proprioceptive information; gyroscope-accelerometer couples are used to get information about posture alignment; microphones provide auditory information. Stereo cameras and other superhuman sensors (infrared range cameras or ultrasonic range finders) are used as vision sensors.

The problem is, robots never "understand" what they sense. They pretend as if they sensed, by utilizing algorithms that are merely the interpretations of the roboticists. Unfortunately, scientists still do not know exactly how the human brain interprets electrical signals, which are similar to the numerical values that robots obtain from their sensors.

Robot vision is one of the major problems in perception. An example is the grasping of novel objects seen for the first time. Stereo cameras are not good enough if the objects are textureless or transparent. Time-of-flight range cameras are low-resolution sensors, and laser range finders need too much time to scan in 3D. Although stereopsis or 3D reconstruction works well, only the visible portions of the object can be reconstructed. One solution is not to try to obtain a whole 3D representation of the object, but to learn how to use partial shape information to find an optimum grasping point [17], [18], [19] by computing and evaluating several features, such as contact area, contact symmetry, force closure and so on.

Another problem with objects is the understanding of their permanence. Human infants obtain knowledge of their environment by interacting with objects. One of the milestones in developing this ability is learning the permanence of objects, or the conception of physical causality: knowing the continuity of the existence of an object even when it is occluded by other objects. This requires extracting information about an object depending on the state of its environment. Recently, a model of a situation-dependent predictor was proposed in [20]. This model consists of four major modules (attention, environment, predictor selector and motion predictor), which are briefly explained in figure 7.

Fig. 7. A model of physical causality perception. The attention module extracts the geometrical information of the object and its surroundings. In the environment module this information is self-organized by a Restricted Boltzmann Machine type network [21], and the next position of the object is calculated by the prediction module based on the current state of the object and its environment [20].

Not only visual recognition but also auditory recognition has similar problems. One of the major problems is the discrimination of the sound source of interest (SoI), or the detection of the sound source location (SSL). Although there are several methods to detect the SSL, such as receiver operating characteristic analysis [30] or time-delay-of-arrival based approaches [31], speech recognition and mood detection are still crucial problems for a human communication partner.

Researchers, in a way, neglect these problems so as to deal with higher-level ones, by utilizing teleoperation systems in which the perception ability is distributed to the environment: a multi-infrared-range-camera setup (a motion capture system) to obtain more complete information about the object of interest, piezoelectric pressure sensors in the floor to locate communication partners (e.g. addressees or bystanders during a conversation), multiple microphones to locate the SoI, or remote control panels to control some of the higher-level behaviors of the robot, as in the robot Geminoid (see figure 4) and NASA's Robonaut [22].

2) Learning: Social robot partners are supposed to work in environments designed in a human-centered manner. That is, those robots will come up against highly changing circumstances, and they should adapt their capabilities according to those changes and add new skills to their repertoire quickly. Today, robots suffer from the computational complexity of perception algorithms, long task-learning phases, and low generalizability of learned behaviors between different agents and between different tasks for the same agent. To deal with these problems, scientists have proposed several techniques.

a) Reinforcement Learning: In reinforcement learning (RL), a robot is rewarded or punished according to the results of its interactions with the environment. Learning is done by finding a policy of actions that maximizes the subsequent reward. If we define accomplishing a task as properly generated action sequences, a robot learning to achieve a goal actually learns what to do next in a particular state.

RL has been successfully implemented on different platforms, such as an autonomous helicopter which learns to fly inverted [10], a robot soccer team which learns to keep the ball away from opponent robots [32], or a humanoid robot which learns how to play air hockey against a human opponent [33]. One difficulty with RL is that the state-action space can be very large (which slows down the learning process and decreases the generalizability of the learned tasks), a usual case in highly anthropomorphic robots with high-degree-of-freedom body kinematics. A solution to this problem is manually defining or hard-coding some parts of the task to be learned. For instance, in Atkeson's work on air-hockey playing, primitive behaviors are manually given to the system, which decreases the state-action space and helps the system converge to the optimum action policy much quicker.
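The reward-driven loop described above can be sketched with tabular Q-learning, one standard RL algorithm. The corridor world, reward and parameters below are illustrative assumptions, not taken from the cited studies.

```python
import random

# Toy corridor world: states 0..4, goal at state 4 (an illustrative assumption).
# Actions: 0 = step left, 1 = step right. Reward +1 on reaching the goal.
N_STATES, GOAL = 5, 4
ACTIONS = (0, 1)

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def q_learn(episodes=300, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # tabular state-action values
    for _ in range(episodes):
        s, done = rng.randrange(N_STATES - 1), False  # exploring starts
        while not done:
            # epsilon-greedy: mostly exploit the current estimates, sometimes explore
            a = rng.choice(ACTIONS) if rng.random() < eps else max(ACTIONS, key=lambda x: q[s][x])
            s2, r, done = step(s, a)
            # temporal-difference update toward reward plus discounted future value
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learn()
policy = [max(ACTIONS, key=lambda a: q[s][a]) for s in range(N_STATES)]
print(policy[:4])  # the learned greedy policy steps right, toward the goal, in every non-goal state
```

The size of this table is the state-action space; hard-coding primitive behaviors, as in the air-hockey work, shrinks exactly this table and so speeds up convergence.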

b) Affordance Learning: J.J. Gibson introduced the concept of affordances, emphasizing the relationships between an organism and its environment [26]. Gibson claims that each action needs only the relevant perceptual features for its execution, which can be supplied by dedicated filters, running concurrently, that extract certain cues from the environment. This results in an immense perceptual economy. He also mentions that an affordance is relative to the organism. For instance, a bowling ball is liftable for an adult, yet it is not for a little child.

This concept has been studied by various research groups, commonly in terms of learning the consequences of a particular action [24] or learning invariant properties of environments that afford a certain action [25]. According to the representation given in figure 8, affordances can be used to estimate the outcomes of actions, to plan actions to accomplish a task, or to recognize objects and the actions of others. This representation has been applied to various problems, such as directly grounding symbolic planning operators in continuous sensory-motor experiences [35], reaching goal-directed behaviors from primitive behaviors by learning the effects of actions on different objects [36], learning how to grasp novel objects by learning local visual descriptors of good grasping points [37], or learning the traversability affordance [38]. In the affordance study of Sahin et al., an affordance is formalized as a nested triple

(effect, (entity, behavior))

where entity represents the initial state of the environment (directly perceived by the agent) before the robot performs the action, behavior is the means by which the agent interacts with the entity, and effect represents the perceptual change of the entity (including the object of interest) after the behavior is applied. For instance, a robot can encode the relationship between a black can and the action it applied to this object as

(lifted, (black-can, lift-with-right-hand))

In addition, if the same agent applies the same behavior to a different can, let it be yellow, and obtains the same effect, then it will generalize its representation as

(lifted, (can, lift-with-right-hand))

Here, the perception of the color of the object (a can in this case) loses its importance when the behavior lift-with-right-hand is to be realized, which is an example of perceptual economy. Hence, a robot learning via the affordance schema does not try to extract an object model to plan actions upon; instead, it obtains its own representation of the world in terms of several features, including shape, orientation, color and many other relevant factors. The robot's experiences with objects are categorized (e.g. via support vector machines) so as to build higher-level symbols of the world.

Fig. 8. Encoding affordances as relationships between actions, objects and effects [34].

c) Social Learning: There are several useful mechanisms for transferring knowledge between agents (biological, computational or robotic autonomous systems), such as social learning, behavior matching, imitation [7] and programming by demonstration [23]. For instance, humans rely on imitation or observational learning in social interaction, mostly to broaden their behavior repertoire, coordinate interactional characteristics, and ground the understanding of others' behaviors in their own experience.

Psychologists have proposed different theories about how imitation (i.e. social learning) occurs in the human infant. Three of them are active intermodal mapping (e.g. Meltzoff and Moore, 1983, 1994, 1997), associative sequence learning (Heyes, 2001, 2005; Heyes and Ray, 2000) and the theory of goal-directed imitation (Wohlschläger et al., 2003). These theories explain how matching behaviors are generated and how the correspondence problem is bridged. The correspondence problem [8] is a crucial problem in imitation, which shows up when the imitator agent tries to find and execute a sequence of actions, using its own embodiment, that was generated by a demonstrator possibly having a dissimilar embodiment.

In robotics, one difficulty is the perception of the counterpart. To overcome this problem, motion capture systems are generally used to sense the movement of the counterpart. However, obtaining information about the motion of the counterpart is inadequate; this data should also be mapped to the robot's frame of reference. At this stage, the problem of "what to imitate" emerges. There are studies enabling robots to perceive relevant aspects of the counterpart's movements.
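The (effect, (entity, behavior)) triples and the color-dropping generalization described earlier can be sketched as follows. The feature names and the can-lifting scenario are illustrative assumptions in the spirit of the formalism, not code from the cited studies.

```python
# An affordance as a nested triple: (effect, (entity, behavior)), where the
# entity is the perceived initial state, kept here as a dict of features.

def generalize(experiences):
    """Keep only the entity features shared by every experience that produced
    the same (effect, behavior) pair; irrelevant features (e.g. color) drop out."""
    groups = {}
    for effect, (entity, behavior) in experiences:
        key = (effect, behavior)
        if key not in groups:
            groups[key] = dict(entity)
        else:
            groups[key] = {f: v for f, v in groups[key].items() if entity.get(f) == v}
    return [(effect, (entity, behavior)) for (effect, behavior), entity in groups.items()]

experiences = [
    ("lifted", ({"shape": "can", "color": "black"}, "lift-with-right-hand")),
    ("lifted", ({"shape": "can", "color": "yellow"}, "lift-with-right-hand")),
]
generalized = generalize(experiences)
print(generalized)
# color differed across the two cans, so only shape survives:
# [('lifted', ({'shape': 'can'}, 'lift-with-right-hand'))]
```

Dropping the features that do not predict the effect is a tiny instance of the perceptual economy discussed above.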

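As noted above, motion captured from a demonstrator must be mapped into the robot's own frame of reference before it can be imitated. At its simplest this is a rigid-body coordinate transform; a minimal 2-D sketch, with an entirely hypothetical robot pose and trajectory:

```python
import math

def to_robot_frame(points, robot_pose):
    """Transform 2-D points from the world (motion-capture) frame into the
    robot's own frame, given the robot's pose (x, y, heading) in the world."""
    rx, ry, theta = robot_pose
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    out = []
    for px, py in points:
        dx, dy = px - rx, py - ry  # translate so the robot sits at the origin
        # rotate by -theta so the robot's heading becomes the +x axis
        out.append((cos_t * dx + sin_t * dy, -sin_t * dx + cos_t * dy))
    return out

# Hypothetical demonstrator hand trajectory seen by a motion-capture system
trajectory = [(2.0, 1.0), (2.0, 2.0)]
# Robot standing at (1, 1), facing along the world +y axis
robot_frame_traj = to_robot_frame(trajectory, (1.0, 1.0, math.pi / 2))
print(robot_frame_traj)  # both points now lie one unit to the robot's right
```

This only relocates the data; deciding which aspects of the relocated trajectory to reproduce is the "what to imitate" problem discussed next.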
For instance, Breazeal and Scassellati's work [39] on the robots Cog and Kismet includes the detection of human faces and eyes, and the following of a human's gaze direction. Those robots are also capable of recognizing human facial expressions and emotional vocalizations. Billard and Schaal's work [40] also shows how to segment relevant actions, that is, the starting and finishing instants of the action to be matched.

Fig. 9. The problem of producing a behavior that matches an observed one is due to the coding that represents observed and executed movements. Things get more and more complicated if the agents have different body kinematics. The picture is from the book Robot Programming by Demonstration, by Sylvain Calinon.

Inferring the goal of the demonstrator is another difficulty. Currently, researchers set goals by hand. For example, Alissandrakis et al.'s work [41] shows how a robot can be told to imitate at the 'path', 'trajectory' or 'end point' level, which correspond to imitation of the whole action, of sub-goals only, or of the goal only, respectively. On the other hand, Billard et al.'s work [42] shows how a robot can infer the demonstrator's goal. Their robot extracts the invariants across demonstrations (e.g. moving several different boxes by using the left hand). The robot starts to copy this behavior at a coarse level, by replicating the whole trajectory or path of the action; then it extracts the crucial parts of the movement and tries to reach the same results by using the actions that it already knows.

III. CONCLUSION

Considering four decades of research and eminently promising results, social robot partners are not a matter of science fiction anymore. Due to its interdisciplinary nature, robotics benefits from advancements in the social sciences and engineering, which results in a growing community and rapidly accumulating knowledge. Today, many researchers believe that robotics is at the edge of a revolution, like computers in the 1980s. Although their predictions have stronger groundings than Marvin Minsky's1 (a former head of the AI Lab of MIT), various problems related to perception, control and learning still make us suspicious about having that dream robot which is able to adapt itself to our highly dynamic environment, understand and learn what we say, show, do and even think, in order to set up an intuitive communication.

1 "In three to eight years we will have a machine with the general intelligence of an average human being. I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight. At that point, the machine will begin to educate itself with fantastic speed. In a few months it will be at genius level, and a few months after that its powers will be incalculable."

REFERENCES

[1] C. Cheng-Yih, A Re-examination of the Physics of Motion, Acoustics, Astronomy and Scientific Thoughts, p. 11, Hong Kong University Press, 1996
[2] N. Gunalan, Islamic Automation: A Reading of al-Jazari's The Book of Knowledge of Ingenious Mechanical Devices, in Media Art Histories, edited by Oliver Grau, Cambridge (Mass.): MIT Press, 2007, pp. 163-178
[3] Based on Prof. N. Sharkey's work, A 13th Century Programmable Robot, The University of Sheffield.
[4] W.C. Chittick, The Sufi Path of Knowledge: Ibn al-'Arabi's Metaphysics of Imagination, State University of New York Press, 1989, p. 183
[5] G. Wood, Living Dolls: A Magical History of the Quest for Mechanical Life
[6] M. Vukobratovic, Legged Locomotion Robots and Anthropomorphic Mechanisms, Mihailo Pupin Institute, Belgrade, 1975
[7] C.L. Nehaniv and K. Dautenhahn (Eds.), Imitation and Social Learning in Robots, Humans and Animals, Cambridge University Press, 2007
[8] C.L. Nehaniv and K. Dautenhahn, The Correspondence Problem, MIT Press, 2002
[9] H. Ishiguro, Android Science: Toward a New Cross-Interdisciplinary Framework, Stresa, Italy, July 25-26, 2005, pp. 1-6
[10] Ng A., Coates A., Diel M., Autonomous Inverted Helicopter Flight via Reinforcement Learning, International Symposium on Experimental Robotics, 2004, pp. 1-10
[11] Ishiguro H., Tele-operated Android of an Existent Person, Humanoid Robots: New Developments, 2007, pp. 2-4
[12] Ishiguro H., Building Artificial Humans to Understand Humans, Artificial Organs, 2007, pp. 133-142
[13] Mori M., The Uncanny Valley, Energy, vol. 7, no. 4, pp. 33-35, 1970
[14] Mitsunaga N., Miyashita T., Ishiguro H., Kiyoshi K., Hagita N., Robovie-IV: A Communication Robot Interacting with People Daily in an Office, IROS, 2006
[15] Mutlu B., Shiwa T., Kanda T., Ishiguro H., Hagita N., Footing in Human-Robot Conversations: How Robots Might Shape Participant Roles Using Gaze Cues, 4th ACM/IEEE Conference on Human-Robot Interaction, vol. 2, 2009
[16] Boston Dynamics, Petman
[17] Saxena A., Driemeyer J., Kearns J., Osondu C., Ng A.Y., Learning to Grasp Novel Objects Using Vision, International Symposium on Experimental Robotics, 2006
[18] Saxena A., Driemeyer J., Kearns J., Ng A.Y., Robotic Grasping of Novel Objects, Neural Information Processing Systems (NIPS 19), vol. 19, 2007
[19] Saxena A., Wong L.L., Ng A.Y., Learning Grasp Strategies with Partial Shape Information, AAAI, 2008

[20] Ogino M., Fujita T., Fuke S., Asada M., Learning of Situation-Dependent Prediction toward Acquiring Physical Causality, EpiRob
[21] Hinton G.E., Osindero S., Teh Y., A Fast Learning Algorithm for Deep Belief Nets, Neural Computation, 18:1527-1554, 2006
[22] Ambrose R., Aldridge H., Askew R., Robonaut: NASA's Space Humanoid, IEEE Intelligent Systems, 2000, 15(4):57-63
[23] Cypher A. (Ed.), Watch What I Do: Programming by Demonstration, MIT Press, 1993
[24] Stoytchev A., Behavior-Grounded Representation of Tool Affordances, ICRA, 2005
[25] MacDorman K., Responding to Affordances: Learning and Projecting a Sensorimotor Mapping, ICRA, 2000
[26] Gibson J.J., The Theory of Affordances, in R. Shaw & J. Bransford (Eds.), Perceiving, Acting, and Knowing: Toward an Ecological Psychology, Hillsdale, NJ: Lawrence Erlbaum, pp. 67-82, 1977
[27] Michael V. Jakuba, Modeling and Control of an Autonomous Underwater Vehicle with Combined Foil/Thruster Actuators, M.S. thesis
[28] G. Muscato, D. Caltabiano, S. Guccione, D. Longo, M. Coltelli, A. Cristaldi, E. Pecora, V. Sacco, P. Sim, G.S. Virk, P. Briole, A. Semerano, T. White, ROBOVOLC: A Robot for Volcano Exploration, Results of First Test Campaigns, Industrial Robot: An International Journal, 2003, vol. 30/3, pp. 231-242
[29] Huntsberger T.L., Rodriguez G., Schenker P.S., Robotics Challenges for Robotic and Human Mars Exploration, 2000
[30] D.M. Green and J.M. Swets, Signal Detection Theory and Psychophysics, New York: John Wiley and Sons Inc., 1966
[31] Valin J.M., Michaud F., Rouat J., Letourneau D., Robust Sound Source Localization Using a Microphone Array on a Mobile Robot, International Conference on Intelligent Robots and Systems, pp. 1228-1233, 2003
[32] Stone P., Sutton R.S., Park F., Scaling Reinforcement Learning toward RoboCup Soccer, Machine Learning, 2001, pp. 537-544
[33] Bentivegna D.C., Atkeson C.G., Learning from Observation Using Primitives, ICRA, 2001
[34] Montesano L., Lopes M., Bernardino A., Learning Object Affordances: From Sensory-Motor Coordination to Imitation, IEEE Transactions on Robotics, 2008
[35] Ugur E., Oztop E., Sahin E., Learning Object Affordances for Planning, ICRA, 2009
[36] Dogar M., Cakmak M., Ugur E., Sahin E., From Primitive Behaviors to Goal-Directed Behavior Using Affordances, IEEE/RSJ, 2007, pp. 729-734
[37] Montesano L., Lopes M., Learning Affordance Visual Descriptors for Grasping, ICDL, 2009
[38] Ugur E., Sahin E., A Case Study for Learning and Perceiving Affordances in Robots, Ecological Psychology, 2009, pp. 1-27
[39] Breazeal C., Scassellati B., Challenges in Building Robots That Imitate People, Imitation in Animals and Artifacts, 2002
[40] Billard A., Schaal S., Robust Learning of Arm Trajectories through Human Demonstration, International Conference on Intelligent Robots and Systems, 2001
[41] Alissandrakis A., Nehaniv C.L., Dautenhahn K., Imitation with ALICE: Learning to Imitate Corresponding Actions across Dissimilar Embodiments, IEEE Transactions on Systems, Man, and Cybernetics, vol. 32, no. 4, July 2002
[42] Billard A., Epars Y., Cheng G., Schaal S., Discovering Imitation Strategies through Categorization of Multi-Dimensional Data, International Conference on Intelligent Robots and Systems, IEEE