Geminoid Studies
Science and Technologies for Humanlike Teleoperated Androids

Editors
Hiroshi Ishiguro, Osaka University, Toyonaka, Osaka, Japan
Fabio Dalla Libera, Osaka University, Toyonaka, Osaka, Japan

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd., part of Springer Nature. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore.
Preface
The first part of this book surveys the technologies involved in building a very
humanlike robot, or android. The appearance, movements, and perceptions of such
an android must mimic those of a human, and thus require unique developmental
techniques. Chapters 1 and 2 describe how to develop a very humanlike robot. First,
the android must have a very humanlike appearance. Body-part molds are made
from a real person using shape-memory foam. Then, soft silicone skins are made for
the entire body using these molds. Covering the entire body with soft material is a
unique feature that is quite different from that employed in existing humanoid robots.
The soft skin not only provides a humanlike texture for the body surface, but also
allows safe physical interaction, that is, humanlike interpersonal interactions
between real people and the android. The humanlike movements are produced by
pneumatic actuators that are driven by highly compressed air. The high compress-
ibility of air provides very flexible motions in the android joints without software
control. This also makes the human–android interactions more humanlike and safer.
To generate humanlike body movements in the android, the motions of a human
performer are measured by means of a motion capture system, and the measured
movements are copied to the android. People strongly expect humanlike responsive
behaviors when interacting with the android, because it looks very similar to a
human. It is, therefore, necessary for the android's perceptual system to obtain
various kinds of information to produce rich interactions. However, cameras and
microphones embedded in the android's body are not sufficient, as the range and
type of observations are restricted. To overcome this limitation, a sensor network is
integrated with the android. Multiple types of sensors (such as cameras, microphones,
floor sensors, and laser range finders) are placed in the environment surrounding the android, thus
endowing the environment with perceptive capabilities. This method allows the
android to store a large amount of information on human activities and obtain
contextual information. The mental state of the android can be defined based on
contextual information, allowing humanlike responsive behavior to be produced. In
the process of making the android more humanlike, we require knowledge about
human behavior and interpersonal cognition; this is a novel approach to studying
human nature by developing the android. We call this research framework android science.

Later chapters describe rule-based methods for generating natural android motions, for example on the basis of the relationship between head motion and dialogue acts. Issues regarding eye gaze
control during head motion control are also discussed in Chap. 7. Evaluation results
show that the rule-based methods can generate more natural motions than con-
ventional methods of directly mapping the original motions to the android.
Almost fifty years ago, Mori proposed the hypothesis of a non-monotonic
relation between how humanlike an artificial device appears and people’s fondness
toward it. As a device becomes similar to our body in its appearance, our affinity
gradually rises. However, when the device acquires a certain level of similarity to a
real human or human body parts, people suddenly start to feel a strong negative
impression toward it. Mori called this sudden change in trend the “Bukimi no Tani”
(valley of uncanniness). The hypothesis is made persuasive by the examples he
presented, which compared a very humanlike myoelectric hand with a wooden hand
created by a Buddhist image sculptor. Mori's hypothesis has attracted interest in various
fields, including human–robot interaction research, and has resulted in a large
number of studies. Despite considerable effort, however, no clear evidence has yet
been found to prove Mori's hypothesis. One reason for this is the limitations of the
methods used in most studies. Still photographs, such as morphed figures between a
real human face and an artificial face, or video recordings of animated characters
and robots have mainly been used for testing. In his original paper, Mori described
his hypothesis in terms of people’s interactions with an artificial device, but few
studies have considered such an interactive situation. Another possible reason is the
technical limitation that very humanlike artificial devices with interactive capabil-
ities, such as robots, are hard to realize.
Now, how does this “uncanny valley” response of mankind affect the design of
robots intended to interact with people in socially functional ways, as we do with
other humans? The main purpose of developing a humanoid robot is interaction
with people. If the task of a robot is just to fold laundry or clean rooms, the
robot need not interact with people, and humanlike behavior is not required.
Humanlike robots become necessary only when helping people with more complicated
tasks that require dialogue through verbal and nonverbal communication, as among
humans. Humanlike appearance and behavior are important because we are
fine-tuned to recognize and understand detailed intentions from observing human
behavior. Past studies have found evidence that people react differently toward
machine-looking entities and humans. People may learn to infer the "intentions" of
entities such as dogs, horses, or factory machines, much as they do those of humans,
but only after long-term experience and training. It is much easier
for us to understand the intention of other humans. Therefore, the appearance
(including behavior) of robots is one of the most important factors in designing
socially interactive robots. However, given the “uncanny valley” response, how
should we design a robot’s appearance?
While the mechanism behind the “uncanny valley” remains unclear, the basic
system behind such human responses is gradually being revealed by using androids
as a device for exploring human nature. In the following chapters, Shimada et al.
hypothesize that lateral inhibition, as seen in neuronal responses, may be one cause
of the uncanny valley response (Chap. 8).
Operators of a teleoperated android have been observed to unconsciously adapt their own behavior to the robot. They talk slowly in order to synchronize with the robot's lip motion and
make small movements to mimic those of the robot. Some operators even feel as if
they themselves have been touched when others touch the teleoperated android.
This illusion, the Body Ownership Transfer (BOT) effect of teleoperated robots, is
due to the synchronization between the act of teleoperating the robot and the visual
feedback of the robot’s motion. If BOT can be induced merely by operation without
haptic feedback, then applications such as highly immersive teleoperation interfaces
and prosthetic hands/bodies that can be used as real body parts could be developed.
In the following chapters, initial studies on BOT are introduced. Starting from
the exploration of BOT and its basic nature (Chap. 18), various aspects of BOT are
examined. How BOT occurs under different teleoperation interfaces (Chap. 19),
how social situations influence BOT (Chap. 20), and the minimal requirements for
BOT to occur are investigated through neuropsychological experiments (Chap. 21).
It is shown that the will to control the robot (agency) and seeing the robot in motion
(visual feedback) are sufficient for BOT, and thus the operator’s sense of propri-
oception is not required.
An interesting conjecture arises here: When the operator feels that the android’s
body is his/her own, will a feedback phenomenon occur? For example, when the
android’s facial expression changes on its own, without being controlled by the
operator, will the operator’s emotion be affected? In the past, psychologists studied
a phenomenon known as the “facial feedback hypothesis,” which is based on
William James's famous idea that the awareness of bodily changes activates emotion.
If feedback from the teleoperated robot could affect the operator, we may be able to
implement a new device that can support the regulation of one’s physical or psy-
chological activities. The final two chapters explore some ambitious trials on reg-
ulating people through the act of teleoperating android robots. Chapter 22 describes
an attempt to regulate people’s emotion through facial expression feedback from
android robots, while Chap. 23 presents experimental results that indicate the neural
activity of the operator can be enhanced through teleoperating android robots with a
brain–computer interface.
Most previous studies have focused on laboratory-based experiments. In this
book, however, we introduce field studies that exemplify the kind of roles androids
might take in the real world. These studies demonstrate how people perceive
androids when faced with them in everyday situations. Androids have a similar
appearance to humans but, at the same time, can be recognized as robots. Therefore,
androids are expected to have a presence similar to that of humans in the real
world. Moreover, androids have the potential to exert a different influence from that
of actual humans. In this part, we introduce several field experiments that were conducted
in four different situations (a café, hospital, department store, and stage play). In the
first experiment, an android copy of Prof. Ishiguro, named Geminoid HI-2, was
installed in a café at Ars Electronica in Linz, Austria (Chaps. 24 and 25). The
results show that people treated the android as a social entity, not merely a physical
object. In Chap. 26, we describe a study on how a bystander android influences
human behavior in a hospital. This experiment shows that when the android nods
and smiles in synchronization with the patient, the patient's impressions of the
doctor are positively enhanced. As a third field experiment, the android tried to sell
a cashmere sweater to customers in a department store. The results show that 43
sweaters were sold over 10 days. This indicates that the android could operate as a
salesperson in a store (Chaps. 27 and 28). Finally, in Chaps. 29 and 30, we
investigate how people perceive an android that reads poetry as a stage play. In this
experiment, we create a stage play in collaboration with an artist and ask the
audience about their impressions of how the android performs as an actor.
We conclude this preface by thanking the Japan Society for the Promotion of
Science (JSPS) for supporting this work through the KAKENHI Grant-in-Aid for
Scientific Research (S) Number 25220004.
Abstract To develop a robot that has a humanlike presence, the robot must be given
a very humanlike appearance and behavior, and a sense of perception that enables it
to communicate with humans. We have developed an android robot called “Repliee
Q2” that closely resembles human beings; however, sensors mounted on its body
are not sufficient to allow humanlike communication with respect to factors such as
the sensing range and spatial resolution. To overcome this issue, we endowed the
environment surrounding the android with perceptive capabilities by embedding it
with a variety of sensors. This sensor network provides the android with humanlike
perception by constantly and extensively monitoring human activities in a less obvi-
ous manner. This paper reports on an android system that is integrated with a sensor
network system embedded in the environment. A human–android interaction exper-
iment shows that the integrated system provides relatively humanlike interaction.
1.1 Introduction
In recent years, there has been much research and development toward intelligent
robots that are capable of interacting with humans in daily life [3, 4]. However, even
if robots were able to deal with various tasks, mechanical-looking robots are not
acceptable for tasks that require a humanlike presence. This is because our living
space is designed for humans, and the presence of other people provides a kind of
assurance to individuals. For a robot to have a humanlike presence, it needs to be
recognized as a person, behave like a human, and be able to communicate naturally
with people.

This chapter is a modified version of previously published papers [1, 2], edited to be comprehensive and to fit the context of this book.

T. Chikaraishi ⋅ H. Ishiguro
Graduate School of Engineering, Osaka University, Osaka, Japan

T. Minato (✉)
Asada Project, ERATO, Japan Science and Technology Agency, Osaka, Japan
e-mail: minato@atr.jp

H. Ishiguro
ATR Intelligent Robotics and Communication Laboratories, Kyoto, Japan
Apart from the engineering aspects, the development of such a robot will con-
tribute to studies on the principles of human communication, because improving
the robot's ability to communicate with people step-by-step will gradually elucidate
the communication functions of human beings.
Turning now to interpersonal communication, there are various communication
channels, such as language, paralanguage (e.g., tone and voice patterns), physi-
cal motion (e.g., gazing and gestures), interpersonal distance, and clothes [5]. To
develop a robot that can naturally communicate with people through these commu-
nication channels, the following points are required:
∙ The robot must have a very humanlike appearance and behavior.
∙ The robot must have a rich sense of perception.
In this study, we use an android called “Repliee Q2” (Fig. 1.1) that closely resem-
bles a human and can produce humanlike motions. To give the android percep-
tive capabilities, we implement voice recognition, gesture recognition with multiple
omnidirectional cameras, and human tracking with floor sensors.
This section presents the hardware organization and software architecture (Figs. 1.2
and 1.3). The system must process a large amount of data from a variety of sensors
to achieve humanlike perception. Therefore, the sensor processing is distributed to
a number of computers. The android's motion modules contain datasets of desired
values for each degree of freedom (DOF), and the recognition results trigger the
execution of motions in the android. Briefly, this involves the following steps, sketched in code after the list:
(1) Obtain recognition results from computers where sensor processes are running.
(2) Use the recognition results to select the motion module’s ID from the scenario.
(3) Execute the android’s motion and play a voice file.
(4) Return to step (1).
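This loop maps naturally onto a small dispatch program. The following Python sketch is an illustration only; the class names, module IDs, and polling interface are hypothetical and do not reproduce the actual Repliee Q2 software.

```python
# A minimal sketch of the sense-select-act loop above. All names (SensorHost,
# Android, the scenario rules) are hypothetical illustrations.
import time

class SensorHost:
    """Stub for a computer running one distributed sensor process."""
    def __init__(self, name, result):
        self.name, self.result = name, result
    def poll(self):
        return self.result  # step (1): latest recognition result

class Android:
    """Stub for the android control computer."""
    def execute_motion(self, module_id):
        print(f"executing motion module {module_id}")  # step (3): send DOF targets
    def play_voice(self, module_id):
        print(f"playing voice for {module_id}")        # step (3): play voice file

def select_module(scenario, results):
    """Step (2): map recognition results to a motion-module ID."""
    return scenario.get(results.get("gesture"), "idle")

hosts = [SensorHost("gesture_pc", {"gesture": "bow"})]
scenario = {"bow": "return_bow", "wave": "wave_back"}
android = Android()

for _ in range(3):  # step (4): in practice, loop indefinitely
    results = {}
    for h in hosts:
        results.update(h.poll())             # step (1)
    module_id = select_module(scenario, results)  # step (2)
    android.execute_motion(module_id)        # step (3)
    android.play_voice(module_id)
    time.sleep(0.1)
```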
Repeating these steps determines the context of the situation in order to execute
human–android communication. Determining the context of the situation has the
additional advantage that the system can refine recognition candidates. For example,
in a greeting situation, the system can restrict greetings to such candidates as bowing,
shaking hands, or waving a hand, and exclude other candidates. In addition, because
these modules are described in XML, they are easily created by people without pro-
gramming skills (Fig. 1.4). Combining various sensors has another advantage in that
recognition by one sensor can help another type of sensor. For example, position
data from floor sensors can help identify the region of interest for camera images
to recognize a gesture, and gesture recognition can help voice recognition to refine
recognition candidates.
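As an illustration of both points, the sketch below shows a hypothetical XML scenario module and how a known situation (here, a greeting) prunes the recognition candidates. The tag and attribute names are invented for illustration; Fig. 1.4 shows the actual format.

```python
# A hypothetical XML scenario module plus the candidate refinement the text
# describes. The schema below is invented, not the format of Fig. 1.4.
import xml.etree.ElementTree as ET

SCENARIO_XML = """
<scenario situation="greeting">
  <rule gesture="bow"       module="return_bow" voice="hello.wav"/>
  <rule gesture="handshake" module="shake_hand" voice="nice_to_meet.wav"/>
  <rule gesture="wave"      module="wave_back"  voice="hi.wav"/>
</scenario>
"""

def load_rules(xml_text):
    root = ET.fromstring(xml_text)
    rules = {r.get("gesture"): (r.get("module"), r.get("voice"))
             for r in root.findall("rule")}
    return root.get("situation"), rules

def refine_candidates(all_candidates, rules):
    # In a known situation, exclude gestures the scenario does not expect,
    # which reduces misrecognition.
    return [c for c in all_candidates if c in rules]

situation, rules = load_rules(SCENARIO_XML)
print(refine_candidates(["bow", "point", "wave", "sit"], rules))  # ['bow', 'wave']
```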
Repliee Q2 was modeled on an actual person and constructed with medical silicone
rubber. Hence, it has a very humanlike appearance and artificial skin that is as soft
as human skin. It has 42 air actuators in its body: nine in each arm, two in each
hand, four in the waist, three in the eyes, one in the eyebrows, one in the eyelids,
one in the cheek, seven in the mouth, and three in
the neck. These enable Repliee Q2 to generate rich facial expressions (Fig. 1.5). The
face, neck, arms, and hands are covered with silicone skin, and 42 highly sensitive
tactile sensors are mounted beneath the skin.
(a) Pneumatic controller and air compressor: The reactions of the actuators to
external force are very natural because the air acts as a damper. This achieves
much safer interaction with humans than other actuators, such as hydraulic
actuators and DC servomotors. The actuators can also reproduce the unconscious
breathing movements of the chest silently, because the air compressor can be located
far from the android.
Seven computers (Pentium IV 3 GHz) are used to process the sensor data.
(a) Floor tactile sensor units: Floor tactile sensor units (Vstone VS-SS-SF55) are
used for human tracking (Fig. 1.6). The floor sensor units consist of densely arranged
tactile sensors (at 5-mm intervals) that detect human positions from the pressure of
body weight. Covering the floor of a room with these units enables human tracking.
The specifications of the sensor units are as follows. Each floor sensor unit measures
500 × 500 × 15 mm, with detection blocks measuring 100 mm × 100 mm and a
sensitivity of 200–250 g/cm². The resolution of 10 × 10 cm is enough to detect human
positions, because most adult feet are 20–30 cm long. In this study, 4 × 4 units
(4 m²) are placed in front of the android in the region where we expect human–
android communication to occur.
The floor sensors are connected through a serial port. After processing the data
from the connection, an android control computer obtains the data via TCP/IP. A
gesture recognition computer and voice recognition computer also obtain the data
to improve their recognition accuracy. When a person walks, he or she activates
multiple detection blocks that cause a many-to-one problem for tracking. Therefore,
we utilize the floor-sensor-based human tracking method of Murakita et al., which
is based on Markov chain Monte Carlo (MCMC) sampling [10].
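Given the figures above (100 mm × 100 mm detection blocks, 4 × 4 units covering 2 m × 2 m), converting active blocks into candidate positions is simple arithmetic; the difficult part is the many-to-one association, for which the system uses the MCMC method of [10]. The sketch below deliberately substitutes a naive greedy grouping for that step, purely to make the data flow concrete.

```python
# Converting active floor-sensor detection blocks into position estimates,
# using the dimensions given above. The grouping step is a naive stand-in,
# not the MCMC tracker of Murakita et al. [10].

BLOCK_SIZE_M = 0.10  # each detection block is 100 mm x 100 mm

def block_to_xy(row, col):
    """Center of detection block (row, col) in meters, origin at one corner."""
    return ((col + 0.5) * BLOCK_SIZE_M, (row + 0.5) * BLOCK_SIZE_M)

def estimate_positions(active_blocks, merge_dist=0.4):
    """Greedily merge nearby active blocks (one walker activates several
    blocks at once) and return one centroid per group."""
    groups = []
    for rc in active_blocks:
        x, y = block_to_xy(*rc)
        for g in groups:
            gx = sum(p[0] for p in g) / len(g)
            gy = sum(p[1] for p in g) / len(g)
            if abs(x - gx) + abs(y - gy) < merge_dist:
                g.append((x, y))
                break
        else:
            groups.append([(x, y)])
    return [(sum(p[0] for p in g) / len(g),
             sum(p[1] for p in g) / len(g)) for g in groups]

# Two feet of one person standing near block (3, 4):
print(estimate_positions([(3, 4), (3, 5)]))  # one centroid, roughly (0.50, 0.35)
```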
(b) Omnidirectional cameras: Eight omnidirectional cameras (Vstone VS-C42N-
TR) are embedded into the environment to recognize human gestures, including
“bowing” and “hands up.” The cameras are connected through an image capture
board. The specifications of the omnidirectional cameras include an imaging angle
of 15° above the horizontal plane and 60° below the horizontal plane, with 768 ×
494 effective pixels.
In this research, we utilize VAMBAM (view- and motion-based aspect models
for distributed omnidirectional vision systems) by Nishimura et al. [11, 12], which
enables the system to recognize human gestures independently of the person's
location relative to the cameras. However, the method applies background subtraction
to determine the region of interest, which is not robust against optical noise. For
this reason, we use human position data from floor sensors to define the detection
region in the images from the omnidirectional cameras.
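The following sketch illustrates this cross-sensor assistance: a position from the floor sensors is converted into an angular window of the panoramic image, bounding where gesture detection must look. The camera pose and the assumption that azimuth maps linearly onto image columns are hypothetical, not the actual calibration.

```python
# A sketch of how a floor-sensor position can bound the region of interest
# in an omnidirectional image. Geometry and camera placement are illustrative.
import math

IMAGE_WIDTH = 768  # effective horizontal pixels of the panoramic unwrapping

def roi_columns(person_xy, camera_xy, window_deg=30.0):
    """Return (left, right) pixel columns likely to contain the person,
    assuming azimuth 0..360 degrees maps linearly onto image columns."""
    dx = person_xy[0] - camera_xy[0]
    dy = person_xy[1] - camera_xy[1]
    azimuth = math.degrees(math.atan2(dy, dx)) % 360.0
    px_per_deg = IMAGE_WIDTH / 360.0
    center = azimuth * px_per_deg
    half = (window_deg / 2.0) * px_per_deg
    return int(center - half) % IMAGE_WIDTH, int(center + half) % IMAGE_WIDTH

print(roi_columns(person_xy=(1.0, 1.0), camera_xy=(0.0, 0.0)))  # window around 45 degrees
```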
(c) Microphone array: Eight non-directional microphones mounted in the lower part
of the omnidirectional cameras are used for voice recognition. The microphone-
equipped omnidirectional cameras are embedded into the environment to obtain
voices regardless of the speaker's position.
We verified that the android system integrated with the sensor network is effective for
natural interaction by developing the waiting behavior of the android. We assumed
a scene in which the android waits while another person stands nearby, as shown in
Fig. 1.12. Generally, an individual’s motions change according to his or her mental
state, even when simply waiting (remaining in one place). For example, the frequency
of eye blinking increases, breathing is stronger, and gazing becomes unstable when
a person feels stressed. We assume that if the android's motion changes according to
sensor information, people will attribute mental states to the android, resulting
in a more humanlike impression. Additionally, the android should not simply
stand still, because a past study has found that a perfectly still android has a less
humanlike appearance [15]. To define the scenario, we set up the same situation as
in Fig. 1.12, but replaced the android with a person (subject B). We investigated the
mental state of subject B while recording the motion of the other person (subject A)
using the sensor network (the tactile sensor data of subject B was simulated from
the observations). We classified the sensor data on subject A’s motion relative to
subject B’s mental states. We also prepared the android’s motion modules to make
them similar to the motions of individuals with the same mental states as subject B.
We then defined the scenario using the motion modules and the sensor data pattern.
In Sect. 1.5, the android behavior for specific scenarios is compared with that of an
android with a randomly selected motion module.
Table 1.2 Classification of the feelings of subject B toward subject A. Subject B was later replaced with the android in the experiment

Position ID of A | Behavior of A           | Feeling of B toward A                   | Mood of B
2, 5             | Stare at B              | Unpleasant                              | Disgust
8                | Stare at B              | Unpleasant + oppressive feeling         | Disgust
11               | Stare at B              | Very unpleasant + oppressive feeling    | Disgust
1–6              | Not stare + walk around | Not so concerned about the action of A  | Acceptance
1, 3, 4, 6       | Stare at B              | Harassed                                | Fear
7–12             | Not stare + walk around | Anxiety                                 | Apprehension
8, 11            | Look somewhere          | Anxiety                                 | Apprehension
1–12             | Stare + walk around     | Anxiety                                 | Apprehension
7, 9, 10, 12     | Stare at B              | Anxiety                                 | Apprehension
1–7, 9, 10, 12   | Look somewhere          | Worry about where A is looking          | Interest
(a) Disgust: Subject B reported feeling that subject A's behavior might be harmful
to them; that is, the emotional function of “rejection” was activated. In this case,
subject B's emotion is “disgust.”
(b) Acceptance: Subject B reported that they did not care because subject A did
not care about them. This implies that subject B accepted the situation; that is, the
emotional function “affiliation” was activated. In this case, subject B’s emotion is
“acceptance.”
(c) Fear: Subject B reported that they felt mental distress from being stared at by
subject A. This implies that subject B wanted to protect themselves from suffering;
that is, the emotional function is one of “protection.” In this case, subject B’s emotion
is “fear.”
(d) Apprehension: The emotion “fear” occurred because subject B reported that they
felt mental distress from being stared at or being too close to subject A. Moreover,
they reported that they worried about subject A’s intentions. This implies that the
emotional function of “exploration” was also activated and the emotion of “anticipa-
tion” occurred. As a result of this mixture, the emotion of “apprehension” occurred.
(e) Interest: The emotion of “anticipation” occurred because subject B reported being
worried about what subject A was looking at. However, they did not feel disgust; that
is, they accepted it (the emotion was “acceptance”). As a result of this mixture, the
emotion of “interest” occurred.
In this situation, a person’s mental state is defined according to the other per-
son’s motion and position. The sensor network that we developed obtains this kind
of information more robustly than on-body sensors.
Based on the above analysis, we defined a process to recognize the behavior of sub-
ject A (see Fig. 1.11). The first step is to check for the presence of a person (subject
A) within the area covered by the floor sensors. If subject A is in the area, the sensor
network obtains her/his position. Next, the movement of subject A is checked. For
example, if subject A does not move, the face direction (i.e., looking direction) is
measured. Furthermore, if subject A is moving toward subject B (i.e., the android),
the duration of her/his gaze is measured, and the behavior of subject A is specified
accordingly. “Neutral” in Fig. 1.11 denotes that subject A does nothing. The android
system is also able to detect when it is touched and the direction of a sound source;
however, no behavior requiring these sensing capabilities was observed in
the above experiment. The android’s mental state is determined by the recognized
behavior according to Table 1.2 (where subject B is replaced with the android).
In this scenario, the system repeats the following steps (a code sketch follows the list).
(1) Recognize a person’s behavior.
(2) Determine the mental state of the android according to Table 1.2.
(3) Select and execute the motion module according to the procedure of Fig. 1.11.
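Step (2) amounts to a lookup into Table 1.2. A minimal sketch encoding a few of its rows follows; the behavior labels are invented identifiers.

```python
# Step (2) as a table lookup: (behavior of A, position ID of A) -> mood of
# the android. Only some rows of Table 1.2 are encoded; first match wins.
TABLE_1_2 = {
    ("stare",          frozenset({2, 5, 8, 11})):                     "disgust",
    ("not_stare_walk", frozenset(range(1, 7))):                       "acceptance",
    ("stare",          frozenset({1, 3, 4, 6})):                      "fear",
    ("not_stare_walk", frozenset(range(7, 13))):                      "apprehension",
    ("look_somewhere", frozenset({1, 2, 3, 4, 5, 6, 7, 9, 10, 12})):  "interest",
}

def mood_of_android(behavior, position_id, default="neutral"):
    for (b, positions), mood in TABLE_1_2.items():
        if b == behavior and position_id in positions:
            return mood
    return default

print(mood_of_android("stare", 5))           # disgust
print(mood_of_android("not_stare_walk", 9))  # apprehension
```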
The remainder of this section describes the android motion features to be considered
when creating motion modules. We intuitively designed a motion module for each
mental state while referring to the motions of subject B in the above experiment. In
the following description, the frequency of motion is stated relative to the situation
in which nobody is around the android (hereafter, the normal situation).
The motion of subject B when feeling disgust had the following common features.
The motion modules of the android for this emotion were designed by taking these
features into consideration.
∙ Does not make eye contact with the person.
∙ Bends backward to create some distance from the person.
∙ Increases blinking rate.
The motions for fear, apprehension, and interest had the following common fea-
tures. The motion modules for these emotions were designed by taking these features
into consideration.
∙ The frequency of looking at the person is high, but no staring occurs.
∙ The frequency of blinking is high.
∙ The movement of the eyes is quick.
The motion modules for acceptance were designed by taking the following fea-
tures into consideration.
∙ The frequency of looking at the person is low, and no staring occurs.
The frequency of looking at the person, blinking, and eye movement for each
emotion was designed in the following order:
normal = acceptance < interest < apprehension < disgust < fear
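One way to realize this ordering in the motion modules is to scale the baseline frequencies by a per-state multiplier, as in the sketch below; the numeric values are illustrative, and only their ordering comes from the design above.

```python
# Per-state multipliers on the baseline ("normal") frequencies of glancing,
# blinking, and eye movement. The numbers are illustrative; only their
# ordering is taken from the text.
FREQ_SCALE = {
    "normal": 1.0,
    "acceptance": 1.0,    # equal to normal
    "interest": 1.5,
    "apprehension": 2.0,
    "disgust": 2.5,
    "fear": 3.0,
}

def blink_interval(state, base_interval_s=4.0):
    """Mean seconds between blinks: higher arousal -> more frequent blinking."""
    return base_interval_s / FREQ_SCALE[state]

assert blink_interval("fear") < blink_interval("disgust") < blink_interval("interest")
```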
1.5 Evaluation
We conducted an experiment to evaluate the humanlikeness of the behaviors designed
for the androids. Hereafter, the android that follows the above scenario is called the
scenario android and the android with randomly selected motion is called the ran-
dom android. A subject interacts with both androids in the experiment. The subject
was not informed of the difference between the two androids. To counterbalance the
order of the interaction, we designed two conditions:
Condition (1) The subject interacts with the random android for 3 min (first ses-
sion) and then interacts with the scenario android for 3 min (second session) after
a 1 min break.
Condition (2) The subject interacts with the scenario android for 3 min (first ses-
sion) and then interacts with the random android for 3 min (second session) after
a 1 min break.
1.5.2 Results
The averaged scores are presented in Table 1.3. To consider the order of interaction,
the averaged scores of the first and second sessions are given separately.
(a) A t-test revealed a significant difference between the random android in the
first session (R1) and the scenario android in the first session (S1) (p < 0.05).
(b) A t-test revealed no significant difference between the random android in the
second session (R2) and the scenario android in the second session (S2) (p = 0.452).
From result (a), the humanlikeness of the scenario android is significantly greater
than that of the random android. This means that the android's behavior in response
to the subject's motion made the android more humanlike. Moreover, the difference
between results (a) and (b) suggests that there is an order effect.
To check this order effect, we compared the scores with respect to the order.
(c) A t-test revealed a marginally significant difference between the random
androids in the first and second sessions (p < 0.1).
(d) A t-test revealed no significant difference between the scenario androids in the
first and second sessions (p = 0.438).
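For reference, comparisons such as (a)–(d) can be computed with a standard two-sample t-test, as sketched below. The score arrays are invented placeholders rather than the data of Table 1.3, and the choice between a paired and an independent test depends on the counterbalanced design; an independent test is shown.

```python
# How comparisons like (a)-(d) can be computed. The score arrays below are
# invented placeholders, not the data of Table 1.3.
from scipy import stats

r1 = [3.1, 2.8, 3.5, 2.9, 3.0]  # humanlikeness scores, random android, 1st session
s1 = [4.2, 4.0, 3.8, 4.5, 4.1]  # humanlikeness scores, scenario android, 1st session

t, p = stats.ttest_ind(r1, s1)  # independent two-sample t-test
print(f"t = {t:.2f}, p = {p:.3f}")
```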