Conference Paper · August 2015


DOI: 10.1007/978-3-319-21383-5_102



Measuring Trust of Autonomous Vehicles:
A Development and Validation Study

David Garcia, Christine Kreutzer(✉), Karla Badillo-Urquiola, and Mustapha Mouloua

Psychology Department, University of Central Florida, Orlando, USA
{david.garcia,mustapha.mouloua}@ucf.edu, christine_kreutzer@knights.ucf.edu,
kbadillo@ist.ucf.edu

Abstract. Recent advances in technology have improved the ability of vehicles
to act autonomously, thereby enabling the implementation of these systems into
the lives of the everyday consumer. For example, in the past three years nearly
every major vehicle manufacturer, supplier, and technology company has
announced projects involving autonomous vehicles (AVs). While the notion of
AVs has long been popular within the military, the urgency to make them
commonplace has gathered pace as companies outside the auto industry have
illustrated the feasibility and benefits that AVs offer. However, in order to
predict user adoption of these autonomous features, attitudes towards them must
be understood. Thus, the purpose of the present in-progress study is to develop
and validate a scale to quantify trust towards autonomous vehicles. Upon the
completion of data collection, the data will be subjected to a factor analysis. It is
hypothesized that the scale ratings will converge to a single underlying
dimension. It is also hypothesized that there will be differences in trust among
the levels of vehicle autonomy.

Keywords: Autonomous vehicles · Trust · Unmanned vehicles · Robotics

1 Introduction
1.1 Background
As vehicles with autonomous features become standard in today’s market, so too does
our need to understand the intricate role human trust plays in the operation of these
vehicles. Certainly, human trust towards AVs has become a salient issue in
human-robot interaction literature. For example, when an autonomous car has
anthropomorphized features, humans are more likely to trust the vehicle [1]. However,
previous research has indicated that humans are poor at monitoring automated systems
[2–5]. Despite numerous advances in technology, autonomous systems still remain
prone to automation failures [6, 7]. In addition to technical problems, there are a
number of human factors design issues facing AV designers, such as displayed
information, situation awareness, level of training and experience, control design,
support from backup personnel or systems, data-link delays, and cognitive load
limitations [3].
© Springer International Publishing Switzerland 2015
C. Stephanidis (Ed.): HCII 2015 Posters, Part II, CCIS 529, pp. 610–615, 2015.
DOI: 10.1007/978-3-319-21383-5_102

1.2 Levels of Vehicle Automation


The National Highway Traffic Safety Administration (NHTSA) [8] defines five levels
of vehicle autonomy:
• No-Automation (Level 0): there are no autonomous features in the vehicle. The
driver controls all aspects of the vehicle at all times.
• Function-Specific Automation (Level 1): this level of autonomy includes vehicles
with one or more specific automated control functions, such as pre-charged
brakes or cruise control.
• Combined Function Automation (Level 2): vehicles at this level have at least two
principal functions designed to work together to relieve the operator of
controlling those functions. Level 2 is where the human begins to lessen his or
her role as an operator and to take on the role of a supervisor.
• Limited Self-Driving Automation (Level 3): the vehicle enables the driver to
cede complete control of safety-critical functions under certain conditions. The
driver must still be available to resume manual control of the vehicle.
• Full Self-Driving Automation (Level 4): the vehicle is designed to monitor
roadway conditions and perform functions critical to safety for the duration
of a trip.

1.3 The Current Study


Despite the proliferation of autonomous features in the automotive market, and the
potential design and safety issues associated with them, researchers have not yet
explored attitudes towards these features. For example, how comfortable are people
with a car that can park itself versus one that can pick them up and take them to a
destination? Our research seeks to bridge this gap by putting forth a validated
measure of these new constructs. More specifically, the purpose of this in-progress
study is to explore the factor structure underlying a novel scale aimed at
quantifying trust towards autonomous vehicles. It is hypothesized that scale ratings will
converge to a single underlying dimension. It is also hypothesized that trust ratings will
differ between each level of autonomy.

2 Methods

2.1 Participants
A total of 400 participants from the University of Central Florida will be recruited for
participation in this study. Participants can sign up for this study via SONA, the
university’s online research participant pool. Extra credit will be awarded in exchange
for participation.

2.2 Materials
Vignettes reflecting the five levels of vehicle autonomy identified by the National
Highway Traffic Safety Administration [8] will be presented. Each vignette describes
the features of the corresponding level of autonomy as follows:
• Level 0. The driver is in complete and sole control of the primary vehicle controls
(brake, steering, throttle, and motive power) at all times, and is solely responsible
for monitoring the roadway and for safe operation of all vehicle controls. This
vehicle may have certain driver support/convenience systems, but these do not have
control authority over steering, braking, or throttle. Examples include systems that
provide only warnings (e.g., forward collision warning, lane departure warning,
blind spot monitoring) as well as systems providing automated secondary controls
such as wipers, headlights, turn signals, and hazard lights.
• Level 1. Automation at this level involves one or more specific control functions; if
multiple functions are automated, they operate independently from each other. The
driver has overall control, and is solely responsible for safe operation, but can
choose to cede limited authority over a primary control (as in adaptive cruise
control), the vehicle can automatically assume limited authority over a primary
control (as in electronic stability control), or the automated system can provide
added control to aid the driver in certain normal driving or crash-imminent situa-
tions (e.g., dynamic brake support in emergencies). The vehicle may have multiple
capabilities combining individual driver support and crash avoidance technologies,
but does not replace driver vigilance and does not assume driving responsibility
from the driver. The vehicle’s automated system may assist or augment the driver in
operating one of the primary controls – either steering or braking/throttle controls
(but not both). As a result, there is no combination of vehicle control systems
working in unison that enables the driver to be disengaged from physically oper-
ating the vehicle by having his or her hands off the steering wheel AND feet off the
pedals at the same time. Examples of function-specific automation systems include:
cruise control, automatic braking, and lane keeping.
• Level 2. This level involves automation of at least two primary control functions
designed to work in unison to relieve the driver of control of those functions.
Vehicles at this level of automation can utilize shared authority when the driver
cedes active primary control in certain limited driving situations. The driver is still
responsible for monitoring the roadway and safe operation and is expected to be
available for control at all times and on short notice. The system can relinquish
control with no advance warning and the driver must be ready to control the vehicle
safely. An example of this would be “smart,” or adaptive, cruise control, as seen in
some newly released cars.
• Level 3. Vehicles at this level of automation enable the driver to cede full control of
all safety-critical functions under certain traffic or environmental conditions and in
those conditions to rely heavily on the vehicle to monitor for changes in those
conditions requiring transition back to driver control. The driver is expected to be
available for occasional control, but with sufficiently comfortable transition time.
The vehicle is designed to ensure safe operation during the automated driving
mode. An example would be an automated or self-driving car that can determine


when the system is no longer able to support automation, such as an approaching
construction area, and then signals to the driver to reengage in the driving task,
providing the driver with an appropriate amount of transition time to safely regain
manual control.
• Level 4. The vehicle is designed to perform all safety-critical driving functions and
monitor roadway conditions for an entire trip. Such a design anticipates that the
driver will provide destination or navigation input, but is not expected to be
available for control at any time during the trip. This includes both occupied and
unoccupied vehicles. By design, safe operation rests solely on the automated
vehicle system. An example of this would be Google’s self-driving car, which uses
four radars, a laser guidance system, a traffic-light-detecting camera, GPS, an
inertial measurement unit, and a wheel encoder (to determine the vehicle’s location)
to successfully navigate complex city settings. As the cost of the equipment used
by Google's self-driving car suggests, it may take some time before such vehicles
become affordable enough for consumer use.
Respondents will rate the extent to which they agree with a variety of statements
relating to each autonomous level. These items were generated based on the facets of
trust identified within the HRI literature [9–12]. Examples from items on this scale
include: ‘I believe that this type of vehicle would be reliable’, ‘I believe that my
interactions with this type of vehicle would be predictable’, and ‘I would trust this type
of vehicle for my everyday travel’. Responses will be rated on a 5-point Likert scale.

2.3 Design and Procedure


After reading and agreeing to the terms outlined in the informed consent, participants
will complete the scale online through Qualtrics. The study will utilize a
within-subjects design. All participants will complete each scale corresponding to each
vignette. The order in which participants receive each vignette will be randomized. The
vignettes will not include the corresponding autonomous level.
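The randomized presentation described above can be sketched as follows. This is an illustrative sketch only: the vignette texts, the `presentation_order` helper, and the simulated participants are placeholders, not the study's actual Qualtrics implementation.

```python
import random

# Hypothetical vignette texts keyed by NHTSA level; labels are never shown to participants.
vignettes = {level: f"Vignette describing Level {level} features" for level in range(5)}

def presentation_order(rng: random.Random) -> list:
    """Return an independently shuffled presentation order of the five vignettes."""
    order = list(vignettes)
    rng.shuffle(order)
    return [vignettes[level] for level in order]

# Each participant sees all five vignettes (within-subjects design), in a fresh random order.
rng = random.Random(42)
participant_orders = [presentation_order(rng) for _ in range(3)]
```

Shuffling independently per participant, rather than using a fixed rotation, distributes any order effects evenly across the five vignettes.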

3 Expected Results

Upon the completion of data collection, the data will be subjected to a factor analysis to
reveal the underlying factor structure of the experimental scale. The goal of factor
analysis is to condense a larger set of variables to a smaller set of factors, which account
for a sizeable proportion of variability within the items. Thus, it is desirable to have a
few factors that account for a large portion of the variance. It is anticipated that scale
ratings will converge to support one underlying factor.
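As an illustration of this analysis, the sketch below simulates Likert responses driven by a single latent trust factor and applies the Kaiser (eigenvalue > 1) criterion to the item correlation matrix. The item count, loadings, and noise level are hypothetical and are not study data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_respondents, n_items = 400, 10

# Simulate a single latent "trust" dimension driving every scale item.
latent = rng.normal(size=(n_respondents, 1))
loadings = rng.uniform(0.6, 0.9, size=(1, n_items))   # hypothetical item loadings
raw = 3 + latent @ loadings + rng.normal(scale=0.5, size=(n_respondents, n_items))
ratings = np.clip(np.rint(raw), 1, 5)                  # 5-point Likert responses

# Eigenvalues of the item correlation matrix, sorted largest first.
corr = np.corrcoef(ratings, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]

# Kaiser criterion: retain factors whose eigenvalue exceeds 1.
n_factors = int((eigvals > 1.0).sum())
```

When a single factor truly underlies the items, the first eigenvalue absorbs most of the shared variance and the remainder fall below 1, matching the one-factor hypothesis.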
A one-way analysis of variance (ANOVA) will also be conducted to examine
differences in trust among the five levels of autonomy. It is anticipated that there will be
differences in trust depending on the autonomous level. More specifically, we antici-
pate that trust will attenuate with higher levels of autonomy.
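The planned comparison can be sketched with a one-way ANOVA on simulated ratings. The per-level means below simply encode the attenuation hypothesis and are not empirical values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical mean trust ratings for NHTSA Levels 0-4 (trust decreasing with autonomy).
level_means = [4.2, 4.0, 3.6, 3.2, 2.8]
groups = [rng.normal(loc=m, scale=0.6, size=400) for m in level_means]

# One-way ANOVA testing for a difference in mean trust across the five levels.
f_stat, p_value = stats.f_oneway(*groups)
```

A significant F statistic here would only establish that some levels differ; follow-up pairwise comparisons would be needed to locate which levels drive the effect.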

4 Discussion

The aim of the present ongoing investigation is to examine the factor structure of an
experimental metric designed to quantify attitudes towards different levels of autonomy
in vehicles. In particular, this will allow us to identify the factors underlying trust
towards autonomy. In addition, this study will identify differences in trust among the
levels of autonomous vehicles.
The technological capacities of vehicles have vastly increased in recent years,
leading to the advancement of both the functional capability and autonomy of current
systems. With these advancements, autonomous features have become increasingly
prevalent within everyday vehicles. This has led to a transition of the human role from
an operator to that of a supervising member, assistant, or even bystander. As such, the
intricacies of interaction have changed to where consumers must place increasing
amounts of trust in these technological features. Thus, an individual's trust in the
system plays a prominent role in the success of any interaction, and therefore in the
future use of the vehicle.
Despite the growing presence of autonomous vehicles, researchers
have not examined attitudes towards various autonomous features. This is problematic,
as vehicle manufacturers and government agencies responsible for vehicle regulations
must understand whether consumers are receptive to these recent and ongoing advancements.
The deployment of AVs today is less about technological capabilities and more about
the ability of stakeholders to implement such vehicles into an everyday environment.
One barrier to successful deployment may be a lack of consumer trust. Thus, our study
will contribute a validated means of quantifying trust towards autonomous
features in vehicles. Additionally, our study seeks to identify how consumers feel
about different autonomous features. More specifically, we hope to identify how
trusting individuals are of different autonomous features that are gaining popularity.
Thus, the results of our study will have implications for vehicle manufacturers, as better
understanding their consumers will help them to design more desirable vehicles,
thereby increasing profit and user adoption.
We believe that the validation of our measure will also provide fruitful avenues for
future research. According to Schaefer [9], individuals’ mental models change as trust
changes from pre- to post-interaction with a robot. Future work should examine if a
similar relationship exists within the context of autonomous vehicles. That is, future
research should examine how operating vehicles of varying autonomy changes indi-
viduals’ degree of trust and thereby, mental models. Moreover, our measure could be
utilized to quantify these changes in trust from pre- to post-interaction. An additional
avenue for future work is related to trust as it applies to expansion and transition of the
human role. The human element is often overlooked or even forgotten during the
design and development process [9]. Thus, future work should be conducted to further
understand the differences in trust perceptions between individuals as it applies to this
process.

References
1. Waytz, A., Heafner, J., Epley, N.: The mind in the machine: anthropomorphism increases
trust in an autonomous vehicle. J. Exp. Soc. Psychol. 52, 113–117 (2014)
2. Hancock, P., Mouloua, M., Gilson, R., Szalma, J., Oron-Gilad, T.: Provocation: is the UAV
control ratio the right question? Ergon. Des. 15(1), 7 (2007)
3. Mouloua, M., Gilson, R., Hancock, P.: Human-centered design of unmanned aerial vehicles.
Ergon. Des.: Q. Hum. Factors Appl. 11(1), 6–11 (2003)
4. Mouloua, M., Parasuraman, R.: Human Performance in Automated Systems: Recent
Research and Trends. Erlbaum, Hillsdale, NJ (1994)
5. Parasuraman, R., Riley, V.: Humans and automation: use, misuse, disuse, abuse. Hum.
Factors 39(2), 220–253 (1997)
6. Wiener, E.L., Nagel, D.C. (eds.): Human Factors in Aviation. Gulf Professional Publishing,
Houston (1989)
7. Mouloua, M., Koonce, J.: Human-Automation Interaction: Research and Practice. Erlbaum,
Hillsdale, NJ (1997)
8. National Highway Traffic Safety Administration: Preliminary statement of policy concerning
automated vehicles. Washington, DC (2013)
9. Schaefer, K.E.: The perception and measurement of human-robot trust. Doctoral
dissertation, University of Central Florida, Orlando, FL (2013)
10. Joosse, M., Sardar, A., Lohse, M., Evers, V.: BEHAVE-II: the revised set of measures to
assess users' attitudinal and behavioral responses to a social robot. Int. J. Soc. Robot. 5(3),
379–388 (2013)
11. Parasuraman, R., Sheridan, T.B., Wickens, C.D.: A model for types and levels of human
interaction with automation. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 30(3),
286–297 (2000)
12. Yagoda, R.E., Gillan, D.J.: You want me to trust a ROBOT? The development of a human–
robot interaction trust scale. Int. J. Soc. Robot. 4(3), 235–248 (2012)
