Leila Takayama
Human-Robot Interaction
Research Scientist
Scientific Principles
• Hypothesis testing
• Observable, empirical, and measurable evidence
• Reliable
• Reproducible and falsifiable
Variable-based Research
X → Y (manipulate a variable X, measure its effect on an outcome Y; e.g., who assembled the robot → degree of self extension)
Task Game: Procedure
• Goal: collect the most points in 10 minutes
• Bombs sometimes explode when touched
• Bomb detonations deduct 30 seconds (timing sketched below)
• Number of bombs and their timing experimentally controlled
• Post-task questionnaire
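To make the timing rules concrete, here is a minimal sketch of the task clock; the constants, names, and structure are illustrative assumptions, not from the study materials.

```python
# Minimal sketch of the task-game clock (names/values are assumptions).
TASK_SECONDS = 10 * 60   # 10-minute session
BOMB_PENALTY = 30        # seconds deducted per bomb detonation

def remaining_time(elapsed_seconds: int, detonations: int) -> int:
    """Seconds left after elapsed play time and bomb penalties."""
    return max(0, TASK_SECONDS - elapsed_seconds - detonations * BOMB_PENALTY)
```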
Assembler Manipulation
• Manipulating the assembler
  – Self: built a robot, then operated that same robot
  – Other: built a robot; participants were told they needed to operate a different, identical robot
Robot Form Manipulation
Measures: Self extension
Trait overlap
• Personality similarity of self and other
• Galinsky and Moskowitz: overlap in concepts of self and human other
• Kiesler and Kiesler: self extension into objects
[Diagram: overlapping “Self” and “Other” circles]
Measures: Self extension
Determining trait overlap
• Thirty-item modified Wiggins personality test
  – Completed by participants about themselves before the task
  – Completed by participants about the robot after the task
• Per-item deltas calculated and summed into an index (Cronbach’s α=.86); see the sketch after this list
• Smaller scores indicate greater overlap of concepts of self and robot
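A minimal sketch of that index, assuming the per-item “delta” is an absolute difference (the function and variable names are illustrative, not from the paper):

```python
import numpy as np

def trait_overlap_index(self_ratings, robot_ratings):
    """Sum of per-item absolute deltas between the 30 pre-task ratings
    of the self and the 30 post-task ratings of the robot.
    Smaller values indicate greater overlap of self and robot concepts."""
    deltas = np.abs(np.asarray(self_ratings, dtype=float)
                    - np.asarray(robot_ratings, dtype=float))
    return float(deltas.sum())
```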
Measures: Self extension
Self reports
• 10-point scales asking about “the device you guided through the minefield”
• Robot control (α=.83)
  – Who was more responsible for your general performance on this task?
  – Who had more control over your general performance on this task?
• Sense of team
  – “I felt that the robot and I were a team.”
Measures: Robot personality
• Robot friendliness
  – Nine-item index (α=.90)
  – cheerful, enthusiastic, extroverted
• Robot integrity
  – Five-item index (α=.73)
  – honest, reliable, trustworthy
• Robot malice
  – Five-item index (α=.74)
  – dishonest, unkind, harsh
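Each of these indices reports Cronbach’s α, the standard internal-consistency statistic for multi-item scales. A self-contained sketch of its computation (the example matrix below is placeholder data, not from the study):

```python
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach's alpha for a (participants x items) rating matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]
    item_variances = ratings.var(axis=0, ddof=1)
    total_variance = ratings.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Placeholder example: 5 participants rating a 3-item index on 10-point scales.
example = np.array([[8, 7, 9], [3, 4, 3], [6, 6, 7], [9, 8, 9], [2, 3, 2]])
print(round(cronbach_alpha(example), 2))
```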
Results: Self extension
Trait overlap
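The results chart itself is not reproduced here. As a sketch only, a between-condition comparison of the overlap index could be run as follows, with placeholder numbers standing in for the study’s data:

```python
import numpy as np
from scipy import stats

# Placeholder indices (NOT the study's data): one trait-overlap score
# per participant; smaller = more self extension into the robot.
overlap_self_built = np.array([11, 9, 14, 10, 12, 13])    # operated own robot
overlap_other_built = np.array([17, 15, 20, 16, 18, 19])  # "another's" robot

t, p = stats.ttest_ind(overlap_self_built, overlap_other_built)
print(f"t = {t:.2f}, p = {p:.3f}")
```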
Summary of Results
H3. People will self-extend more into a robot they assemble than into a robot assembled by another.
Design Implications
Goal-specific guidelines
Design Implications
• When self extension is desired
  – Tele-operated robots as media and as human representations
    • Medical care, remote therapy
  – Non-humanoid form
  – Promote pre-mission interaction
    • Assembly, customization
Design Implications
• When self extension is undesirable
  – Robots in hostile environments where failure is likely
    • Search and rescue
  – Humanoid form
  – Minimize pre-mission interaction
    • Identical but different robots: change the robot’s name
    • Altered robots: change voice, appearance
Limitations and Next Steps
• Broader population
• Outside the lab
• Using other robots
• Long-term interactions
• Long-term effects
• Balancing the needs of people operating and encountering the robot
Study 2
Disagreeing Robots
Why would a robot ever disagree with a person?
Research Questions
• What influences an interface’s point of interaction? Body location? Voice location?
• (How) do politeness strategies from human-human interaction inform human-computer interaction?
Design Questions
• What influences a robot’s point of interaction?
• Where should speakers be placed?
• (How) can computer agents influence human decisions, using effective politeness strategies?
Hypotheses
H1. People will change their decisions more often when the robot disagrees with them than when it always agrees with them, even with identical substantive content.
H2. People will feel more similar to (H2a) and more positively toward (H2b) the agreeing robot than the disagreeing one.
H3. A disagreeing voice coming from a separate control box will be more acceptable than a disagreeing voice coming from the robotic body.
Study Design (N=40)
Between-participants 2×2:
• Voice location: on robot vs. in box
• Disagreement rate: 0% vs. 60%
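A minimal sketch of that between-participants assignment; the balancing scheme and all names here are illustrative assumptions, not the study’s actual procedure.

```python
import itertools
import random

voice_locations = ["on robot", "in box"]
disagree_rates = [0.0, 0.6]  # fraction of trials on which the robot disagrees
cells = list(itertools.product(voice_locations, disagree_rates))  # 2x2 = 4 cells

participants = list(range(1, 41))  # N = 40
random.shuffle(participants)
# Balanced assignment: 10 participants per cell, one condition each.
assignment = {p: cells[i % 4] for i, p in enumerate(participants)}
```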
Attitudes
• Perceived agreeableness of robot
(2 items, Cronbach’s α=.69)
• Perceived similarity of robot to self
(4 items, Cronbach’s α=.94)
• Liking of the robot
(8 items, Cronbach’s α=.75)
Results:
• Perceived robot agreeableness
• Perceived similarity to robot
• People changed their minds
• People like disagreement to come from elsewhere
Checking against hypotheses
H1. People will change their decisions more often when the robot disagrees with them than when it always agrees with them, even with identical substantive content.
H2. People will feel more similar to (H2a) and more positively toward (H2b) the agreeing robot than the disagreeing one.
H3. A disagreeing voice coming from a separate control box will be more acceptable than a disagreeing voice coming from the robotic body.
Theory-oriented Interpretations
• Politeness: distancing
• Disembodiment
• Perceived source
  – Two separate agents: thinker and doer
  – Single distributed agent
Design-oriented Implications
www.willowgarage.com
Thanks!