CHAPTER 1.
INTRODUCTION[1]
Currently, various types of robots, such as intelligent service robots and entertainment robots, are in various stages of development. One of the key issues for these robots is human-robot interaction (HRI). For successful HRI, it is desirable for a robot to recognize the user's facial expressions and pose, as well as their gestures and voice, and to interact accordingly.
The “neurotic robot”, the newest in this series, comes with the aim of intelligence as powerful as the human brain. Scientists are programming robots to be more 'neurotic' in order to help them make smarter, more human-like decisions. Jeff Krichmar, professor of cognitive science at the University of California, Irvine, is experimenting with building neurotic robots that exhibit signs of obsessive-compulsive disorder, just like humans, or are afraid of open spaces. He does this by making a robot act like a mouse in a cage.
The aim is to make the robot brain more like the human brain, which has incredible flexibility and adaptability; any artificial system is far more brittle than biology. If you put a rodent in a room that is open and unfamiliar, it will hug the walls and hide until it becomes comfortable, and only then will it move across the room. In one of Krichmar's experiments, the robot was so anxious that it would never cross the room. The approach is to prepare mathematical models of the brain or of a cognitive system, implement them in software, and let that software become the controller for the robot.
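As a purely illustrative sketch (hypothetical thresholds, not Krichmar's actual model), such a controller can be caricatured with an 'anxiety' variable, standing in for a neuromodulator, that decays as the robot habituates to an unfamiliar room:

```python
# Hypothetical 'neurotic' controller sketch: an anxiety level gates the
# behaviour, and habituation lowers it each time step spent safely.
def neurotic_policy(anxiety):
    """Choose a behaviour based on the current anxiety level."""
    if anxiety > 0.7:
        return "hide"         # too anxious: stay near a wall and wait
    elif anxiety > 0.3:
        return "hug_walls"    # cautious: move, but only along the walls
    return "cross_room"       # comfortable: explore the open space

def habituate(anxiety, decay=0.9):
    """Anxiety decays with each safe time step in the room."""
    return anxiety * decay

anxiety = 1.0
behaviours = []
for _ in range(30):
    behaviours.append(neurotic_policy(anxiety))
    anxiety = habituate(anxiety)
```

Like the rodent, this toy agent first hides, then hugs the walls, and only crosses the room once its anxiety has habituated away.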
Modern robots can solve math problems, play chess and even read and respond to
some human emotions. However, when they try to perform other basic human tasks like
walking, running, carrying on a conversation or recognizing basic objects in their
environment, their abilities fall short.
CHAPTER 2.
TYPES OF ROBOTS[7]
Some have suggested that if there were nanobots which could reproduce, the Earth would turn into "grey goo", while others argue that this hypothetical outcome is nonsense.
CHAPTER 3.
NEUROTIC ROBOTS[2]
CHAPTER 4.
EMOTIONS: FROM BRAIN TO ROBOT[3]
The first concerns the ‘external’ aspect of emotions; the second, ‘internal’ aspects. In
animals, these aspects have co-evolved. How might they enter robot design? Both robots
and animals need to survive and perform efficiently within their ‘ecological niche’ and, in
each case, patterns of coordination will greatly influence the suite of relevant emotions (if
such are indeed needed) and the means whereby they are communicated.
An emotional signal may carry little information in itself (for example, a call for alarm), but its impact on others is high. Moreover, neurobiology shows that simplified but high-impact information is communicated between brain areas through the very different ‘vocabulary’ of neuromodulation.
The similarity in facial expressions between certain animals and humans prompted
classic evolutionary analyses, which support the view that mammals (at least) have
emotions (although not necessarily the same as human emotions), and work reviewed
below explores their (neuro) biological underpinnings. What of robots? Robots are
mechanical devices with silicon ‘brains’, not products of biological evolution. But as we
better understand biological systems we will extract ‘brain operating principles’ that do
not depend on the physical medium in which they are implemented. These principles
might then be instantiated for particular robotics architectures to the point where we
might choose to speak of robot-emotions.
4.2 Neuromodulation
Neuromodulation refers to the action on nerve cells of endogenous substances
called neuromodulators. These are released by a few specialized brain nuclei that have
somewhat diffuse projections throughout the brain and receive inputs from brain areas
that are involved at all levels of behaviour from reflexes to cognition. Each
neuromodulator typically activates specific families of receptors in neuronal membranes.
The receipt of its own neuromodulator by a receptor has very specific effects on the
neuron at various time scales, from a few milliseconds to minutes and hours. Each neuron
has its own mixture of receptors, depending on where it is located in the brain.
4.2.1 Dopamine
In the mammalian brain, dopamine appears to play a major role in motor
activation, appetitive motivation, reward processing and cellular plasticity, and might be
important in emotion. Dopamine is contained in two main pathways that ascend from the
midbrain to innervate many cortical regions. Dopamine neurons in the monkey have been observed to fire in response to predicted rewards. Moreover, dopamine receptors are essential for the
ability of prefrontal networks to hold neural representations in memory and use them to
guide adaptive behaviour. Therefore, dopamine plays essential roles all the way from
‘basic’ motivational systems to working memory systems essential for linking emotion,
cognition and consciousness.
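The observation that dopamine neurons fire to predicted rewards is commonly modelled with the temporal-difference (TD) prediction error of reinforcement learning. The following sketch is purely illustrative (a standard textbook TD rule with made-up values, not something taken from the cited work):

```python
# Minimal TD-learning sketch: the prediction error `delta` is a common
# computational model of phasic dopamine firing. All values are hypothetical.
def td_update(V, s, s_next, r, alpha=0.1, gamma=0.9):
    """One temporal-difference update; returns the prediction error."""
    target = r + (gamma * V[s_next] if s_next is not None else 0.0)
    delta = target - V[s]          # dopamine-like reward prediction error
    V[s] += alpha * delta
    return delta

# Two states: a cue (0) that reliably precedes a rewarded state (1).
V = [0.0, 0.0]
for _ in range(500):
    td_update(V, 0, 1, r=0.0)                  # cue -> pre-reward state
    last_delta = td_update(V, 1, None, r=1.0)  # reward delivered, episode ends
```

After training, the cue state's value approaches gamma times the reward and the error at reward delivery shrinks towards zero, mirroring how dopamine responses transfer from the reward itself to the predictive cue.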
4.2.2 Serotonin
Serotonin has been implicated in behavioural state regulation and arousal, motor
pattern generation, sleep, learning and plasticity, food intake, mood and social behaviour.
The cell bodies of serotonergic systems are found in midbrain and pontine regions in the
mammalian brain and have extensive descending and ascending projections. Serotonin
plays a crucial role in the modulation of aggression and in agonistic social interactions in
many animals. In crustaceans, serotonin plays a specific role in social status and
aggression; in primates, with the system’s expansive development and innervation of the
cerebral cortex, serotonin has come to play a much broader role in cognitive and
emotional regulation, particularly control of negative mood or affect. The serotonin
system is the target of many widely used anti-depressant drugs.
4.2.3 Opioids
The opioids, which include endorphins, enkephalins and dynorphins, are found
particularly within regions involved in emotional regulation, responses to pain and stress,
endocrine regulation and food intake. Increased opioid function is associated with
positive affective states such as relief of pain, and feelings of euphoria, wellbeing or
relaxation. Activation of opioid receptors promotes maternal behaviour in mothers and
attachment behaviour and social play in juveniles. Separation distress, exhibited by
archetypal behaviours and calls in most mammals and birds, is reduced by opiate agonists
and increased by opiate antagonists in many species. Opiates can also reduce or eliminate
the physical sensation induced by a painful stimulus, as well as the negative emotional
state it induces. The opioid and dopamine systems are two major targets of common drugs of abuse.
CHAPTER 5.
MOTION PLANNING[5]
Progress in motion planning algorithms has enabled general and flexible solutions
for slowly moving robots, but we believe that in order to quickly and efficiently traverse
very difficult terrain, extending these algorithms to dynamic gaits is essential. In this
work we present progress towards achieving agile locomotion over rough terrain using
the LittleDog robot.
Fig. 5.1(a) LittleDog robot and a corresponding five-link planar model
Fig. 5.1(b) LittleDog robot with the geometric shape of the limbs and body illustrated
A modified form of the Rapidly-exploring Random Tree (RRT) planning framework is used to quickly find feasible motion plans for bounding over rough terrain. The principal advantage of the RRT is that it respects the kinematic and dynamic constraints which exist in the system; however, for high-dimensional robots the planning can be prohibitively slow. New sampling approaches improve the RRT's efficiency. The dimensionality of the system is addressed by biasing the search in a low-dimensional task space. A second strategy uses reachability guidance as a heuristic to encourage the RRT to explore in directions that are most likely to successfully expand the tree into previously unexplored regions of state space. This allows the RRT to incorporate smooth motion primitives and quickly find plans despite the challenging differential constraints introduced by the robot's underactuated dynamics. The planner operates on a carefully designed model of the robot dynamics which includes the subtleties of motor saturations and ground interactions.
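The core RRT loop is simple to state. The following toy 2-D sketch (obstacle-free, without the task-space biasing or reachability guidance described above, and with a made-up 10 x 10 workspace) shows its sample/nearest-neighbour/extend structure:

```python
# Toy 2-D RRT sketch: repeatedly sample a random point, find the nearest
# tree node, and step a fixed increment towards the sample; stop when a
# node lands near the goal. Workspace bounds and parameters are hypothetical.
import math
import random

def rrt(start, goal, step=0.5, iters=5000, goal_tol=0.5, seed=0):
    rng = random.Random(seed)
    nodes = [start]
    parent = {start: None}
    for _ in range(iters):
        sample = (rng.uniform(0.0, 10.0), rng.uniform(0.0, 10.0))
        near = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d == 0.0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        nodes.append(new)
        parent[new] = near
        if math.dist(new, goal) < goal_tol:
            path = [new]                  # walk parent pointers back to start
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
    return None                           # no plan found within the budget

path = rrt((0.0, 0.0), (9.0, 9.0))
```

In the real planner the "extend" step integrates the robot's dynamics under a motion primitive rather than drawing a straight line, which is how the kinematic and dynamic constraints are respected.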
The use of static stability for planning allows one to ignore velocities, which
halves the size of the state space, and constrains the system to be fully actuated, which
greatly simplifies the planning problem. Statically stable motions are, however, inherently
conservative (technically a robot is truly statically stable only when it is not moving).
This constraint can be relaxed by using dynamic stability criteria (see Pratt and Tedrake
(2005) for review of various metrics). These metrics can be used either for gait generation
by the motion planner, or as part of a feedback control strategy. One popular stability
metric requires the centre of pressure, or the Zero Moment Point (ZMP), to be within the
support polygon defined by the convex hull of the feet contacts on the ground. While the
ZMP is regulated to remain within the support polygon, the robot is guaranteed not to roll
over any edge of the support polygon. In this case, the remaining degrees of freedom can
be controlled as if the system is fully actuated using standard feedback control techniques
applied to fully actuated systems. Such approaches have been successfully demonstrated
for gait generation and execution on humanoid platforms such as the Honda Asimo
(Sakagami et al. 2002; Hirose and Ogawa 2007), and the HRP series of walking robots
(Kaneko et al. 2004). Lower dimensional “lumped” models of the robot can be used to
simplify the differential equations that define ZMP.
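The ZMP criterion itself reduces to a point-in-polygon test. The following sketch (an illustration under assumed planar contact points, not code from the cited papers) builds the support polygon as the convex hull of the foot contacts and checks whether the ZMP lies inside it:

```python
# Hedged sketch of the ZMP stability check: the zero-moment point must lie
# inside the support polygon, taken as the convex hull of the foot contacts.
def cross(o, a, b):
    """2-D cross product of vectors o->a and o->b."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(pts):
    """Andrew's monotone chain; returns the hull counter-clockwise."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def zmp_stable(zmp, contacts):
    """True if the ZMP lies inside (or on) the support polygon."""
    hull = convex_hull(contacts)
    n = len(hull)
    return all(cross(hull[i], hull[(i + 1) % n], zmp) >= 0 for i in range(n))

feet = [(0.0, 0.0), (0.3, 0.0), (0.0, 0.4), (0.3, 0.4)]  # hypothetical contacts
zmp_stable((0.15, 0.2), feet)  # True: ZMP inside the support rectangle
zmp_stable((0.50, 0.2), feet)  # False: ZMP outside, the robot would tip
```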
Rough terrain can also be negotiated without an explicit plan, through reflexive control algorithms that react to the terrain as it is encountered. Recent applications of these strategies, for example on the RHex robot (Altendorfer et al. 2001) and BigDog (Raibert et al. 2008), have produced impressive results, but these systems do not take into account knowledge about upcoming terrain.
Feedback control of underactuated "dynamic walking" bipeds has recently been approached using a variety of control methods, including virtual holonomic constraints (Chevallereau et al. 2003; Westervelt et al. 2003, 2007), with which impressive results have been demonstrated for a single limit-cycle gait over flat terrain. Here, an alternative method is used, based on the combination of transverse linearization and time-varying linear control techniques (Shiriaev et al. 2008; Manchester et al. 2009; Manchester 2010). This allows one to stabilize more general motions; however, a nominal trajectory is required in advance, so this feedback controller must be paired with a motion planning algorithm which takes into account information about the environment.
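Stabilizing a nominal trajectory with time-varying linear control amounts, in the simplest setting, to a time-varying LQR. As a hedged illustration only (a generic textbook Riccati recursion on a toy double integrator, not the paper's actual controller), the backward pass that produces the time-varying gains can be sketched as:

```python
# Hypothetical time-varying LQR sketch: a backward Riccati pass over
# linearizations (A_k, B_k) of the dynamics about a nominal trajectory
# yields time-varying feedback gains K_k.
import numpy as np

def tvlqr(A_list, B_list, Q, R, Qf):
    """Backward Riccati recursion; returns gains K_0..K_{N-1}."""
    P = Qf
    gains = []
    for A, B in zip(reversed(A_list), reversed(B_list)):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]

# Toy nominal model: a double integrator, time-invariant for simplicity.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
N = 50
gains = tvlqr([A] * N, [B] * N, Q=np.eye(2), R=np.eye(1), Qf=np.eye(2))

# An initial perturbation about the trajectory decays under u_k = -K_k x_k.
x = np.array([1.0, 0.0])
for K in gains:
    x = A @ x - B @ (K @ x)
```

In the transverse-linearization setting, the same machinery is applied to the linearized dynamics transverse to the orbit, so that deviations from the planned motion, rather than from a fixed point, are driven towards zero.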
Fig. 5.3 Example of a hip trajectory, showing the position command (thin dashed red), the motor model prediction (solid magenta) and the actual encoder reading (thick dashed blue)
Fig. 5.4 Illustration of bounding up a step, with the centre-of-mass trajectories indicated: nominal, open loop with perturbations, and stabilized with perturbations
CHAPTER 6.
LOCALIZATION SYSTEM[6]
The user localization system is based on the fusion of visual information and sound source localization, implemented on a social robot. One of the main requisites for natural interaction, whether human-human or human-robot, is an adequate spatial situation between the interlocutors: being orientated and situated at the right distance during the conversation in order to have a satisfactory communicative process. The social robot uses a complete multimodal dialog system which manages the user-robot interaction during the communicative process; one of its main components is the user localization system presented here. To determine the most suitable placement of the robot in relation to the user, a proxemics study of human-robot interaction is required. The study was made with two groups of users: children aged between 8 and 17, and adults. Finally, experimental results with the proposed multimodal dialog system are presented.
In order to localize sound sources, according to the works presented in the literature, there are two main approaches. One of them is to study the amplitude differences generated by a sound source among the microphones (or ears) used to perceive the signal. This is the method used in this work: the volume differences between the microphones are compared in order to determine the angular direction of the sound source. The microphone closest to the sound source should receive the signal with the greatest amplitude in comparison with those received by the rest of the microphones. It must be said that the accuracy of this method is greatly influenced by the reflection of the sound signal off objects present in the environment, such as walls and furniture. The second method consists of analysing the phase differences produced between the different signals received by each of the microphones from the same sound source. This method is based on the idea that the same signal generated by the sound source will be perceived by the closest microphone before the rest of them. The accuracy of this second method depends on the size and relative position of the microphone array: if the microphones are very close to each other, all of them will receive almost the same signal.
There is a third method, less widespread owing to its complexity, that needs only one microphone to localize the sound source. It analyses the differences in the spectrum produced by the same sound source from different positions relative to the microphone.
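As a toy illustration (hypothetical readings and geometry, not the robot's actual implementation), the first two approaches can be sketched as follows: the amplitude method picks the bearing of the loudest microphone, while the time-difference method converts an inter-microphone delay into an angle.

```python
# Toy sketches of the two main localization approaches described above.
import math

def rms(samples):
    """Root-mean-square level of a signal segment."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def loudest_direction(levels, mic_angles_deg):
    """Amplitude method: bearing of the microphone with the highest level."""
    i = max(range(len(levels)), key=lambda k: levels[k])
    return mic_angles_deg[i]

def tdoa_angle(delta_t, mic_distance, c=343.0):
    """Time-difference method: bearing (degrees) from the inter-mic delay."""
    ratio = max(-1.0, min(1.0, c * delta_t / mic_distance))
    return math.degrees(math.asin(ratio))

# Hypothetical RMS readings from four microphones at 0, 90, 180, 270 degrees:
bearing = loudest_direction([0.12, 0.47, 0.20, 0.09], [0, 90, 180, 270])
# A 0.437 ms delay across a 0.3 m microphone pair corresponds to ~30 degrees:
angle = tdoa_angle(0.15 / 343.0, 0.3)
```

The clamp inside `tdoa_angle` reflects the physical limit noted above: when the microphones are very close together, the measurable delay (and hence the usable angular resolution) shrinks.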
6.3 Age
Analysing the collected data shows that age also influences the interaction distance. On average, in interactions with children from 8 to 10 years old the normal distance to the robot is greater than 2 m; children over 10, and adults, decrease the interaction distance to about 1 m. Children under 10 years old feel more intimidated by the robot than older children, whereas children over 10 are more curious and try to interact more closely with the robot. The adults, having already interacted with the robot on other occasions, feel more comfortable.
6.4 Personality
Personality is a factor that influences proxemics, as shown in previous studies. In the experiments it was observed that, when a group of children interacts with the robot, those situated closest to it are the most extroverted ones. The shy ones, on the contrary, tend to keep a certain distance from the robot, always looking at their teacher. The most extroverted children interact with the robot more enthusiastically, trying to catch its attention over the rest of the children. Measuring children's personality is obviously difficult, but in relation to HRI the degree of shyness seems related to the interaction distance and its duration.
6.5 Gender
Another factor that could influence proxemics is the gender of the user. It seems that women prefer to be in front of the robot and men at the side. However, the studies could not corroborate this statement, since no significant differences between the boys' and girls' behaviours were obtained. It was also observed that a child who was reluctant to interact alone with the robot (staying far from it) approached the robot, Maggie, when more classmates were included in the interaction.
CHAPTER 7.
ADVANTAGES, DISADVANTAGES AND APPLICATIONS
7.1 Advantages
1. Works under extreme conditions
2. Good problem-solving ability
3. Constant and continuous work can be done
4. Performs tasks faster, more effectively and more accurately than humans
5. Can capture moments far too fast for the human eye; for example, the ATLAS detector in the LHC project can capture ~600,000 frames per second, while we see at about 60
6. Most robots are automatic, so they can work by themselves without any human interference
7.2 Disadvantages
1. Causes unemployment
2. Consumes more power
3. Needs good maintenance
4. Costly
5. Complex
7.3 Applications
Humanoid robots may be used in:
1. Space research centres
2. Medical field
3. Industries
4. Military
5. Education field
6. Chemical research centres
7. Programming & Logic solving
8. Mining industry
9. Entertainment field
10. Service provider field
DEPARTMENT OF MECHANICAL ENGINEERING, JCE BELAGAVI.
CHAPTER 8.
CONCLUSION
In order to localize the user, the robot first computes the user's position using the sound source localization system, which makes use of eight microphones.
In the near future, the authors expect to try a similar planning algorithm on a dynamic biped to achieve walking over rough terrain, and on a forklift operating in a highly constrained environment.
CHAPTER 9.
REFERENCES
1. www.wikipedia.com
2. Myung-Ho Ju and Hang-Bong Kang, "Emotional Interaction with a Robot Using Facial Expressions, Face Pose and Hand Gestures", regular paper, received 31 May 2012, accepted 16 Jul 2012.
3. Michael A. Arbib and Jean-Marc Fellous, "Emotions: from brain to robot", Computer Science, Neuroscience and USC Brain Project, University of Southern California, Los Angeles, CA 90089-2520, USA.
4. Leila Takayama and Caroline Pantofaru, "Influences on Proxemic Behaviors in Human-Robot Interaction".
5. Alexander Shkolnik, Michael Levashov, Ian R. Manchester and Russ Tedrake, "Bounding on Rough Terrain with the LittleDog Robot", The International Journal of Robotics Research, published online 7 December 2010.
6. F. Alonso-Martín, Javi F. Gorostiza, María Malfaz and Miguel A. Salichs, "User Localization During Human-Robot Interaction", Robotics Lab, Universidad Carlos III de Madrid, Av. de la Universidad 30, 28911 Leganés, Madrid, Spain.
7. www.google.co.in