International Journal of Hospitality Management 110 (2023) 103437

The dark side of artificial intelligence in service: The “watching-eye” effect and privacy concerns

Yaou Hu a,*, Hyounae (Kelly) Min b

a Jinan University, Guangzhou, Guangdong, China
b The Collins College of Hospitality Management, California State Polytechnic University Pomona, CA, USA

Keywords: Artificial intelligence; Service robots; Watching-eye effect; Privacy concerns; Uneasiness

Abstract: The potential privacy issues associated with artificial intelligence (AI) in service delivery require careful attention but remain understudied. Through the lens of the watching-eye effect, this research examines the impact of AI on customers’ uneasiness through the mediation of privacy concerns. Study 1 confirms the watching “camera eye” effect of AI devices. Moreover, it identifies the service setting as a contextual boundary condition, wherein this effect holds in a private setting but not in a public setting. Study 2 further indicates that when there is a built-in camera, the “physical eye” of an AI device matters; humanoid AI devices trigger stronger privacy concerns than nonhumanoid AI devices and tablets, leading to greater uneasiness. This impact affects both genders but is more pronounced among women. This research extends the AI literature in service and business ethics and offers insight into managing personal information and privacy issues effectively.

1. Introduction

Artificial intelligence (AI) has gained extensive attention from scholars, practitioners, and the general public due to its seemingly promising benefits (Davenport et al., 2020). As a collection of technologies, such as sensors, voice recognition, robotics, automation, and intelligent learning (Fan et al., 2022; Huang and Rust, 2018), AI can offer customers entertaining, efficient, and personalized services as well as social interaction (Grewal et al., 2021). These devices have introduced, and will continue to bring, profound changes to the service landscape, especially in the hospitality and tourism industry.

Given this potential, a growing number of service practitioners are deploying AI devices in numerous settings. For instance, in-room voice-based digital assistants have been deployed in hotels such as Aloft and Wynn Resorts Las Vegas (Buhalis and Moldavska, 2021); the Marriott Hotel group introduced the Mario robot receptionist, and the Mandarin Oriental in Las Vegas employs the Pepper robot to engage with customers (Choi et al., 2021). At the same time, research institutes and AI designers are endeavoring to advance AI devices’ technological features. In one case, human–computer interaction researchers are developing a social robot to interact with guests and serve as an in-room companion (Nakanishi et al., 2019).

As the technological capabilities of AI rise, so do the public’s privacy and security concerns (Ioannou and Tussyadiah, 2021; Tussyadiah et al., 2019). Customers’ personal information, including biometric data, identifiers, behavior logs, and biographic details (Ioannou et al., 2021), can easily be collected and stored when interacting with AI devices, sometimes even without customers’ knowledge (Manikonda et al., 2018). What worries customers most is that such information may be used and distributed in unauthorized ways or be susceptible to security breaches (Davenport et al., 2020). A recent incident lends weight to this apprehension: Henn-na Hotel in Japan, known as the first robot-staffed hotel, modified its in-room robots to block hackers after apologizing for neglecting guests’ privacy and security (Hertzfeld, 2019).

Despite the hospitality and tourism industry’s relative vulnerability to privacy violations (Tussyadiah et al., 2019), the potential dark side of AI has scarcely been studied (Fu et al., 2022; Ioannou et al., 2020). Privacy issues in service settings also often vary by context (e.g., online vs. on-site; Ioannou et al., 2020; Morosan, 2019) and by person (e.g., Araujo et al., 2020). Customers’ privacy concerns and their subsequent AI perceptions and behavior may depend on the interplay between AI devices’ features, the service setting, and individual traits (e.g., Araujo et al., 2020). It is thus critical for academia and the industry to understand the dynamics of customers’ privacy concerns amid recent breakthroughs in AI (Manikonda et al., 2018) while accounting for situational and personal characteristics (Tussyadiah et al., 2019).

* Corresponding author. E-mail address: yaouhu@outlook.com (Y. Hu).

https://doi.org/10.1016/j.ijhm.2023.103437
Received 22 May 2022; Received in revised form 28 December 2022; Accepted 30 January 2023; Available online 10 February 2023.

To bridge this knowledge gap in the literature, the present research considers customers’ privacy concerns when interacting with AI devices through the lens of the “watching-eye” effect. Extending the theory of the watching-eye effect (Esmark et al., 2017; Haley and Fessler, 2005), this work suggests that, as a form of social presence (van Doorn et al., 2017), AI devices may make customers feel like they are being watched and in turn spark privacy concerns and a sense of unease. The “watching eye” can take two forms in AI devices: the camera eye (i.e., built-in cameras) and the physical eye (i.e., eyes in AI devices’ appearance). Responding to calls to explore people’s privacy concerns in different contexts (e.g., Tussyadiah et al., 2019; Yoganathan et al., 2021), this research also tests the boundary conditions of such an effect, both context-specific (i.e., the service setting) and individual-specific (i.e., the customer’s gender). Two empirical studies were carried out to address these aims. Study 1 examined the effects of the watching “camera eye” in an AI device on customers’ privacy concerns and uneasiness. Study 1 also identified the service setting (public vs. private) as a contextual factor of this effect. Building on these findings, Study 2 investigated the impact of the watching “physical eye” in an AI device (i.e., the device’s appearance). Customer gender was taken as a critical boundary condition of this effect.

This research makes insightful contributions to the emerging AI literature in service and business ethics. It enriches the understanding of the watching-eye effect when customers interact with AI devices in service settings and uncovers the mechanism driving customers’ privacy concerns. It further unveils the boundary conditions of this watching-eye effect. Moreover, it adds insights to business ethics research regarding AI. Scholars have provided the conceptual foundation for the key debates in AI ethics (Ivanov and Umbrello, 2021); they are calling for research addressing the emerging ethical concerns of AI (Haenlein et al., 2022) and for designing AI devices that ameliorate such ethical tensions (Ivanov and Umbrello, 2021). This research responds to this call by providing meaningful implications to help service practitioners and AI device designers manage privacy issues, mitigate potential risks, and increase customers’ confidence. The results could also help customers make informed decisions about their privacy when using AI devices.

2. Literature review and hypothesis development

2.1. Artificial intelligence in service and privacy concerns

AI, which consists of a family of technologies that acquire, process, analyze, and return helpful information (Grewal et al., 2021), is often embodied in machines or devices that can sense, understand, learn, and exhibit certain aspects of human intelligence (Huang and Rust, 2018). Based on whether an AI device has a virtual presence or a physical embodiment (Tung and Law, 2017), AI-powered devices can take a variety of forms. AI devices common in service settings include virtual bots, digital assistants embedded in smartphones or tablets (e.g., Apple’s Siri) or standalone devices (e.g., Amazon’s Alexa), and nonhumanoid and humanoid service robots (e.g., Buhalis and Moldavska, 2021). These devices have been deployed in multiple settings to deliver an array of services and to satisfy customers’ needs, ranging from simple tasks such as check-in and luggage transportation to advanced capabilities such as social interaction and companionship (Hu et al., 2020; Huang and Rust, 2018). The application of AI in the service industry is generally viewed with optimism (Grewal et al., 2021).

Along with the ongoing development of AI and associated technologies, the potential dark side of AI has gained researchers’ attention. Privacy is a crucial ethical issue in the era of AI (Grewal et al., 2021; Ivanov and Umbrello, 2021). Data privacy has long been a prominent topic in consumer research and information studies (e.g., Bleier et al., 2020; Xu et al., 2011), and AI and related innovations have birthed new challenges (e.g., Bleier et al., 2020; Haenlein et al., 2022; Ivanov and Umbrello, 2021; Ebbers et al., 2021). AI and related technologies rely heavily on customers’ personal information (Ebbers et al., 2021). A range of sensitive information collected while people travel and stay at service facilities (e.g., identity details, biometric data, and behavioral logs) might be susceptible to data breaches, identity theft, and hacking (Tussyadiah et al., 2019).

Such risks may increase customers’ privacy concerns, undermine their trust in AI, and reduce their acceptance of AI technologies, leading to negative consequences for AI adoption and management (Martin and Murphy, 2017). The design features of AI devices (Ebbers et al., 2021; Manikonda et al., 2018) have been shown to influence customers’ privacy perceptions as well. Scholars have hence advocated for more research on the interplay of AI design features, service contextual factors (Tussyadiah et al., 2019), and customers’ characteristics (Ioannou et al., 2020) in shaping customers’ privacy perceptions. Building upon prior work, this research addresses customers’ privacy concerns based on the dynamics of AI design features, service settings, and gender against the theoretical backdrop of the watching-eye effect.

2.2. The watching-eye effect

Traditionally, the watching-eye effect applies to situations where individuals are watched by social others (Haley and Fessler, 2005). When feeling as though one is being watched, the person being observed may modify their behavior in accordance with social rules (e.g., acting altruistically) because they care about how others perceive them (Pfattheicher and Keller, 2015). Beyond the original context of being watched by other people, studies have extended the watching-eye effect to subtle and artificial surveillance cues such as the image of eyes (Haley and Fessler, 2005) and cameras (Tussyadiah and Miller, 2019). In other words, simply being in an environment where stylized eyes are present evokes a sense of being seen.

People may feel uncomfortable (i.e., uneasy) when another social entity is watching them because of perceived privacy invasion. For example, when a store employee gazes at a shopper, the customer may feel watched and as though their privacy control is lost (Esmark et al., 2017). This effect might also hold in service encounters with AI devices: owing to their intelligent and interactive features, these devices could be seen as a social presence or social entities rather than simply machines (Fan et al., 2022; Tussyadiah and Miller, 2019; van Doorn et al., 2017). When customers view an AI device with “eyes” as another social entity that occupies a shared space, they may feel watched (Tussyadiah and Miller, 2019) and thus experience privacy concerns and uneasiness. Based on AI devices’ design features, this research suggests that the watching eye(s) of AI devices can assume two forms: the built-in “camera eye” and the physical eye(s) displayed in devices’ appearance (e.g., Choi et al., 2021; Nakanishi et al., 2019).

2.2.1. The built-in “camera eye”

In hospitality and tourism contexts, AI devices and associated technologies (e.g., digital assistants and robots) are often equipped with cameras and sensors to facilitate customized service delivery and to optimize customers’ experiences (e.g., Nakanishi et al., 2019). For instance, facial recognition has been widely used across service settings (Leong, 2019). More AI-powered tablets, digital assistants, and robots are now being embedded with video cameras (e.g., Jackson and Orebaugh, 2018; Jia et al., 2021).

Despite the best intentions of AI device developers and the practitioners who adopt such technologies, customers may feel uncomfortable being exposed to these cameras (Caine et al., 2012). People may feel like they are under constant surveillance and that their every move is being recorded (Leong, 2019). In addition, because data captured by these cameras might be used without customers’ knowledge and can be susceptible to hacking, the mere display of cameras may arouse privacy concerns. Furthermore, customers’ perceptions matter. Customers may not always be able to identify the cameras installed in an AI device simply by looking at it (Lee et al., 2011). Occasionally, a device’s design can be deceptive (Danaher, 2020); some customers may not recognize the “eyes” of the AI device whereas others remain acutely aware of the camera. Thus, customers’ perceptions of cameras affect their privacy concerns and subsequent responses (Lee et al., 2011). When customers perceive a built-in camera in an AI device, they may feel watched and have privacy concerns (Jackson and Orebaugh, 2018), provoking feelings of unease. The first hypothesis is proposed accordingly:

H1. Customers’ perceptions of a built-in camera in AI devices affect their uneasiness through the mediation of privacy concerns.

2.2.2. The boundary condition of service setting

AI devices are deployed across service contexts. There are certain public settings in which these devices are relatively more acceptable to customers; that is, although customers are aware of devices’ cameras, they tend to feel less concerned about privacy and more comfortable using these devices. For example, some AI devices use cameras with facial recognition systems at airports and train stations (Naudé, 2020) to enable customers to purchase tickets and complete self-check-in. This convenience renders the service process more efficient and has helped to minimize human interaction during the COVID-19 pandemic. Similar self-check-in devices with facial recognition can also be seen at hotels’ front desks (Morosan, 2019).

By contrast, customers’ behavioral information becomes more personal and sensitive in relatively private service settings such as hotel rooms or private restaurant rooms. In such places, AI devices equipped with cameras or simply “listening” features can make people uncomfortable due to privacy concerns (Manikonda et al., 2018). A notable example is the deployment of Amazon Echo in hotel rooms. With a camera, speaker, and AI technology, Echo has sparked strong privacy concerns and unease among guests because it might make their in-room data vulnerable to attackers (Jackson and Orebaugh, 2018). The service setting (public vs. private) might therefore play a role in the watching-camera-eye effect of AI devices; compared with a public setting, this effect may induce greater privacy concerns and subsequent uneasiness for customers in a private setting. The following hypotheses are hence put forth:

H2. The service setting (public vs. private) moderates the effect of customers’ perceptions of a built-in camera in an AI device on privacy concerns; the effect on privacy concerns is stronger in a private setting than in a public setting.

H3. The service setting (public vs. private) moderates the mediation effect of privacy concerns on the relationship between customers’ perceptions of a built-in camera and customers’ uneasiness.

2.2.3. The physical eye (appearance of AI devices)

Among popular AI-powered devices (e.g., tablets or kiosks, digital assistants, and service robots) in the service industry, some possess anthropomorphic (i.e., human-like) features such as a physical appearance, facial expressions, gestures, and human voices (e.g., Jia et al., 2021). Physical appearance is one of the most prominent features of AI anthropomorphism (Blut et al., 2021). This research thus focuses on the watching-eye effect of AI devices’ physical appearance (especially “eyes”).

Anthropomorphic design cues can influence customers’ perceptions in different ways (Blut et al., 2021). Some researchers have discovered that anthropomorphism positively affects human–AI interaction (e.g., Yang et al., 2021), while others have revealed that it can compromise customers’ perceptions (Choi et al., 2021). Service settings play a key part in these mixed effects (Blut et al., 2021; Yang et al., 2021). Customers’ behavioral information is especially pertinent in private service settings. The more anthropomorphic an AI device appears, the higher its perceived social presence (Damiano and Dumouchel, 2018). Customers in turn have a stronger sense of being watched (Tussyadiah and Miller, 2019) and increased anxiety, discomfort, and tension (Yoganathan et al., 2021). This logic is congruent with the watching-eye effect. Customers may feel as though an AI device with a human-like appearance and eyes (e.g., a humanoid) is constantly staring at them in a private service setting, compared with AI devices with no eyes (e.g., a tablet or nonhumanoid device). They then may become more uneasy staying in the service setting because of heightened privacy concerns, as postulated:

H4. In a private service setting, the appearance of an AI device with a built-in camera (tablet vs. nonhumanoid vs. humanoid) affects customers’ uneasiness through the mediation of privacy concerns.

2.2.4. The boundary condition of customer gender

Gender differences in privacy concerns and affiliated effects on people’s behavior have been identified in the literature on technology usage (e.g., Garbarino and Strahilevitz, 2004; Hoy and Milne, 2010; Sheehan, 1999; Wu et al., 2012). Studies have demonstrated that, in terms of online information sharing, women express stronger privacy concerns than men (e.g., Fogel and Nehmad, 2009; Hoy and Milne, 2010; Sheehan, 1999). Further, when facing a potential threat to privacy, customers of each gender deal with it differently: men are more apt to be confrontational in reacting to privacy concerns, whereas women tend to display less proactive behavior (Sheehan, 1999).

Regarding the impacts of privacy concerns on individuals’ psychological responses, a study of visually impaired customers’ concerns about camera-based assistive applications revealed that women were more uncomfortable than men (Akter et al., 2020). A similar effect was found for teenagers (Youn and Hall, 2008). The possibility of a privacy breach concerns both genders, although women perceive higher risks and severity associated with the potential consequences of a privacy violation (Garbarino and Strahilevitz, 2004; Hoy and Milne, 2010). It is reasonable to anticipate that, in the AI era, women may feel more uneasy than men when encountering privacy concerns about a “staring” AI device. Stated formally:

H5. Customer gender (male vs. female) moderates the effect of privacy concerns on customers’ uneasiness, such that the effect of privacy concerns on customers’ uneasiness is stronger for women than for men.

H6. Customer gender (male vs. female) moderates the mediation effect of privacy concerns on the relationship between the appearance of AI devices (tablet vs. nonhumanoid vs. humanoid) and customers’ uneasiness.

Fig. 1 illustrates the theoretical model guiding this research. To test the hypotheses, Study 1 examined the “customers’ perceptions of a built-in camera → privacy concerns → uneasiness” relationship and the moderating role of service setting. Based on the findings of Study 1, Study 2 further tested the “type of AI device → privacy concerns → uneasiness” relationship, taking the effect of customer gender as a boundary condition.

Fig. 1. Theoretical research model.

3. Study 1

3.1. Methods

3.1.1. Study design and procedures

A scenario-based quasi-experimental design was adopted. Data were collected on Qualtrics via an online survey questionnaire. The respondents were adults from the United States, randomly assigned to one of the two experimental conditions through Qualtrics professional services. Respondents were asked to read a scenario in which they imagined traveling alone to a new city and staying at a fictional hotel (X). The service setting was manipulated (public hotel lobby vs. private hotel room). In the public setting condition, respondents were asked to imagine entering the hotel lobby and noticing an AI robot concierge (shown in an image) on the counter. This portion of the survey read, “You are traveling alone to a new city. You made a three-night reservation at Hotel X. When you enter the hotel lobby, you notice that there is an artificial intelligent robot on the counter. It is a voice-activated concierge. It can interact with you via multiple sensors. It greets you with a voice and offers to create a highly personalized, interactive, and intelligent experience for you.”
In the private setting condition, respondents were asked to imagine themselves entering their hotel room and noticing an artificial intelligent robot companion (shown in an image) on their nightstand: “You are traveling alone to a new city. You made a three-night reservation at Hotel X. When you enter your room, you notice that there is an artificial intelligent robot on the nightstand beside your bed. It is a voice-activated in-room companion. It can interact with you via multiple sensors. It greets you with a voice and offers to create a highly personalized, interactive, and intelligent guest room for you.” Respondents indicated their perceptions of a built-in camera by answering the following yes-or-no question: “In the scenario that you read, do you think the robot has a built-in camera?”¹ Respondents then responded to questions regarding this study’s main constructs, completed a realism check, and provided their demographic information.

¹ This study measured customers’ perceptions of a built-in camera rather than the actual existence of a camera for several reasons. Since the use of AI devices in the service context is relatively new, we conducted a pilot study (n = 69, Mage = 36.75, SD = 13.59) to explore which variable provides the more meaningful contribution. To control for the potential impact of AI devices’ anthropomorphism, two robot designs were used: Amazon’s Alexa (nonhumanoid) and Marriott’s Mario (humanoid). The pilot study showed respondents the same images of AI devices as in the main study (Alexa and Mario). Among pilot study respondents, 30 had previously interacted with AI devices in service encounters; of them, more than half (63.33%) reported that the service providers seldom explicitly stated whether the AI devices had a built-in camera. As such, more often than not in real service settings, customers are not informed of whether an AI device has a camera; they must judge for themselves. Moreover, even when respondents in the pilot study were told that the AI devices were not equipped with cameras, they did not feel assured across settings (1 = not at all assured, 7 = extremely assured) [Mlobby = 3.87, SD = 1.83, t(68) = −14.21, p < 0.001; Mroom = 2.86, SD = 1.90, t(68) = −18.17, p < 0.001]. Thus, the main study focused on customers’ perceptions of a built-in camera rather than on the actual existence of a camera.

3.1.2. Measures

Measurement items for this study were adapted from the literature and modified to fit the study context (see detailed items in the Appendix). Specifically, privacy concerns were assessed using nine items adapted from Ng et al. (2020), scored on a 5-point Likert-type scale (1 = strongly disagree, 5 = strongly agree). Uneasiness was evaluated with four items adapted from Piçarra and Giger (2018) using a 5-point scale (1 = strongly disagree, 5 = strongly agree). Disposition to value privacy (DTVP) was measured with three items from Xu et al. (2011), rated on a 5-point Likert-type scale (1 = strongly disagree, 5 = strongly agree). Previous privacy experience (PEXP) was measured with three items from Xu et al. (2011) on a 5-point Likert-type scale (1 = never, 5 = very often).

3.1.3. Sample characteristics and realism check

To ensure data quality, respondents agreed at the beginning of the survey to answer all questions as accurately as possible. An attention-check question asking respondents to select a certain answer (e.g., neutral) was presented next. Surveys from respondents who did not answer correctly were discarded. After data screening, 322 valid questionnaires remained (npublic = 160, nprivate = 162). Among all respondents, 50.9% were men, 76.4% were Caucasian, and 41.9% were married. Respondents’ mean age was 40.15 years (SD = 14.20), ranging from 18 to 70. The scenarios’ realism was verified based on the item “I think this scenario is realistic” (1 = strongly disagree, 5 = strongly agree). Respondents perceived the scenarios as realistic [M = 3.71, SD = 1.19, t(321) = 10.71, p < .001].

3.2. Results

3.2.1. Data normality, measurement validity, and reliability

All continuous variables’ data distributions were evaluated first. The mean values of measurement items ranged from 2.49 to 4.08; standard deviations ranged from .98 to 1.38. The skewness values of items spanned between −1.08 and .41, and kurtosis values were between −1.09 and .38. The items were thus normally distributed. The construct validity and reliability of multi-item constructs were then calculated. The overall measurement model had an excellent fit to the data (χ² = 414.63, df = 146, χ²/df = 2.84, CFI = .94, RMSEA = .08, SRMR = .06). The main constructs had acceptable average variance extracted (AVE) values (range: .51–.84). Moreover, all items had significant loadings on their respective constructs with acceptable coefficients. These results reflected convergent validity. In addition, the correlation between the main constructs of interest in this study (i.e., privacy concerns and uneasiness) was significant (r = .67, p < .001) but lower than the square roots of both constructs’ AVE values, indicating discriminant validity. Lastly, all constructs had composite reliability values larger than .70 (see Appendix), denoting acceptable reliability.


3.2.2. Results of mediation model

A mediation analysis was conducted via Hayes PROCESS Macro Model 4 (Hayes, 2017). The construct of customers’ perceptions of a built-in camera was dummy coded (no = 0, yes = 1) and entered as the independent variable, uneasiness served as the dependent variable, and privacy concerns represented the mediator. Customers’ age, gender, DTVP, PEXP, and AI devices’ anthropomorphism (no = 0, yes = 1) were used as covariates. While controlling for covariates, customers’ perceptions of a built-in camera had a significant positive effect on privacy concerns [b = .43, t(315) = 3.71, p < .01]. Privacy concerns had a significant positive effect on uneasiness [b = .77, t(314) = 12.39, p < .01]. When controlling for covariates and privacy concerns, customers’ perceptions of a built-in camera had no significant effect on uneasiness [b = −.05, t(314) = −.34, p > .05]. The analysis also revealed a significant indirect effect: customers’ perceptions of a built-in camera → privacy concerns → uneasiness (a × b = .33, 95% confidence interval [CI]: [.16, .53]). Therefore, privacy concerns fully mediated the impact of customers’ perceptions of a built-in camera on uneasiness. H1 was supported as a result.
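For readers unfamiliar with PROCESS Model 4, the indirect effect a × b is the product of the X → M coefficient and the M → Y coefficient (controlling for X), with a percentile bootstrap confidence interval around the product. The Python sketch below mimics that logic on simulated data; the variable names, effect sizes, and the omission of the study’s covariates are illustrative assumptions, not the actual data or output.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 322
camera = rng.integers(0, 2, n)                 # perceived built-in camera (0/1)
privacy = 0.43 * camera + rng.normal(size=n)   # mediator; slope echoes Study 1's a path
uneasy = 0.77 * privacy - 0.05 * camera + rng.normal(size=n)  # outcome

def indirect_effect(x, m, y):
    # a: X -> M
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]
    # b: M -> Y, controlling for X
    b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]
    return a * b

# Percentile bootstrap CI for a*b (5,000 resamples is a common PROCESS default)
boot = []
for _ in range(5000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(camera[idx], privacy[idx], uneasy[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"a*b = {indirect_effect(camera, privacy, uneasy):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```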
3.2.3. Moderating role of service setting

A moderated mediation analysis was conducted using Hayes PROCESS Macro Model 7 (Hayes, 2017) to test H2 and H3. Service setting (public = 0, private = 1) was entered as a moderator in the previous mediation model. Results are displayed in Fig. 2. The interaction term between customers’ perceptions of a built-in camera and service setting had a significant positive effect on privacy concerns [b = .45, t(313) = 2.01, p < .05]; as such, H2 was supported. Specifically, when the service setting was private, customers’ perceptions of a built-in camera had a significant impact on privacy concerns [b = .65, t(313) = 4.23, p < .01]. However, when the service setting was public, customers’ perceptions of a built-in camera had no significant effect on privacy concerns [b = .19, t(313) = 1.15, p > .05].

Findings also revealed a significant index of moderated mediation (index = .35, 95% CI: [.02, .72]). The conditional indirect effect analysis further indicated that, when the service setting was private, customers’ perceptions of a built-in camera had a significant indirect effect on uneasiness through privacy concerns (a × b = .50, 95% CI: [.25, .79]). This indirect effect became non-significant when the service setting was public (a × b = .15, 95% CI: [−.07, .37]), supporting H3.

Fig. 2. Results of Study 1. Notes: Values are unstandardized regression coefficients. Customers’ perceptions of a built-in camera and service setting were dummy coded. * p < .05, ** p < .01. The dashed line indicates a non-significant path.
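Because Model 7 places the moderator on the X → M path, each conditional indirect effect is the setting-specific a multiplied by the common b, and the index of moderated mediation is the interaction coefficient multiplied by b. The arithmetic check below, assuming the coefficients reported above, reproduces the reported values within rounding; it is a verification sketch, not PROCESS output.

```python
# PROCESS Model 7 logic: W moderates X -> M; the M -> Y path b is common.
b = 0.77                           # privacy concerns -> uneasiness (Study 1)
a_private, a_public = 0.65, 0.19   # conditional X -> M effects by service setting
a_interaction = 0.45               # X x W interaction coefficient on M

print(f"indirect effect (private) = {a_private * b:.2f}")        # ~.50, as reported
print(f"indirect effect (public)  = {a_public * b:.2f}")         # ~.15, as reported
print(f"index of moderated mediation = {a_interaction * b:.2f}") # ~.35, as reported
```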
4. Study 2

4.1. Methods

4.1.1. Study design and procedures

Data for Study 2 were gathered on Qualtrics via an online survey questionnaire using a scenario-based quasi-experimental design. The respondents were adults from the United States, randomly assigned to one of the three experimental conditions through Qualtrics professional services. The scenarios were inspired by Tussyadiah and Miller’s (2019) work. Respondents were asked to read a scenario about imagining traveling alone to a new city and staying at a fictional hotel (X). The appearance of the AI device (i.e., tablet, nonhumanoid, or humanoid) was manipulated by differing the scenarios’ descriptions and images. The image of the tablet was a typical image of a tablet with no brand indicated to control for potential confounding effects. The nonhumanoid condition used an image of Amazon’s Alexa, and the humanoid condition used an image of Marriott’s Mario robot.

In the scenario, respondents were asked to imagine themselves entering their hotel room and noticing an artificial intelligent device (also shown in images) on their nightstand. The scenario indicated that the device was a voice-activated in-room companion that could interact with customers via a camera and sensors. The scenario read, “You are traveling alone to a new city. You made a three-night reservation at Hotel X. When you enter your room, you notice that there is an artificial intelligent tablet/robot on the nightstand beside your bed. It is a voice-activated in-room companion. It can interact with you via a camera and multiple sensors. It greets you with a voice and offers to create a highly personalized, interactive, and intelligent guest room for you.” Respondents then answered questions regarding the study’s main constructs, a realism check, and personal demographics.

4.1.2. Measures, sample characteristics, and realism check

Study 2 used the same measurements for the main constructs (i.e., privacy concerns, DTVP, and PEXP) as in Study 1, plus measures of intention to stay in the hotel for additional analysis. Items related to respondents’ uneasiness staying in the room were modified slightly for the in-room service setting (e.g., “If I stayed at the hotel room, I would feel worried”; “If I stayed at the hotel room, I would feel uncomfortable”). Two attention-check questions were integrated throughout the survey, asking respondents to select a certain answer (e.g., strongly disagree). Respondents who failed any of the attention checks were screened out. Ultimately, 298 questionnaires were valid (ntablet = 95, nnonhumanoid = 107, nhumanoid = 96). Slightly less than half of all respondents (47.7%) were men, 81.5% were Caucasian, and 41.6% were married. Their mean age was 41.82 years (SD = 15.15; range: 18–85). Respondents perceived the scenarios as realistic [M = 3.60, SD = 1.12, t(297) = 39.97, p < .001].

4.2. Results

4.2.1. Data normality, measurement validity, and reliability

The mean values of variables ranged from 2.41 to 4.14; standard deviations ranged from .97 to 1.32. The skewness values ranged from −1.22 to .41, and kurtosis values were between −.77 and .97. These values were all within an acceptable range, indicating data normality. Construct validity and reliability were measured next. The overall measurement model showed an excellent fit to the data (χ² = 396.76, df = 146, χ²/df = 2.72, CFI = .95, RMSEA = .08, SRMR = .06). Constructs’ AVE values were .70 (privacy concerns), .84 (uneasiness), .58 (DTVP), and .43 (PEXP). All items had significant loadings on their corresponding constructs. Every item, except for one related to PEXP, had a loading higher than .50. These results indicate convergent validity. All constructs had composite reliability values larger than .70 except for PEXP, denoting adequate reliability. Because PEXP was used as a control variable and not one of the main constructs of interest in this study, the measurements for PEXP remained unchanged. In addition, the correlations between constructs were all significant but lower than the square root of each construct’s AVE (see Appendix), indicating discriminant validity.

4.2.2. Results of the mediation model

A mediation analysis was conducted using Hayes’s PROCESS Macro Model 4 (Hayes, 2017). The appearance of AI devices (tablet vs. nonhumanoid vs. humanoid) was sequentially coded into two variables, X1 and X2. Specifically, the tablet group was coded as X1 = 0 and X2 = 0; the nonhumanoid group was coded as X1 = 1 and X2 = 0; and the humanoid group was coded as X1 = 1 and X2 = 1. This coding method allowed subsequent analysis to sequentially compare the nonhumanoid group with the tablet group and the humanoid group with the nonhumanoid group. Then, X1 and X2 were entered as independent variables. Uneasiness was entered as the dependent variable, and privacy concerns were used as the mediator. Customers’ age, DTVP, and PEXP were used as covariates.

Results revealed that, while controlling for covariates, X1 (nonhumanoid vs. tablet) had no significant effect on privacy concerns [b = −.08, t(292) = −.64, p > .05], but X2 (humanoid vs. nonhumanoid) had a significant positive impact on such concerns [b = .40, t(292) = 3.21, p < .01]. Privacy concerns had a significant positive effect on uneasiness [b = .85, t(291) = 15.38, p < .01]. When controlling for the covariates and privacy concerns, neither X1 [b = −.14, t(291) = −1.15, p > .05] nor X2 [b = .10, t(291) = .82, p > .05] had a significant effect on uneasiness. The analysis also uncovered a significant indirect effect for the X2 → privacy concerns → uneasiness path (a × b = .34, 95% CI: [.14, .56]) but not for the X1 → privacy concerns → uneasiness path (a × b = −.07, 95% CI: [−.30, .15]). Therefore, privacy concerns partially mediated the effect of the appearance of AI devices on uneasiness.² H4 was partially supported.

² To further reveal the potential behavioral outcome of the proposed effects, we conducted an additional analysis using “intention to stay in the hotel” as the consequence variable of the proposed model (i.e., besides the proposed mediational model, adding three direct paths from the appearance of AI devices, privacy concerns, and uneasiness of staying in the room to intention to stay in the hotel). Intention to stay in the hotel was measured with three items (i.e., I am willing to stay at a hotel room like this in the future; I would recommend others to stay at hotel rooms like this; I plan to stay at a hotel room like this in the future; Cronbach’s alpha = 0.92). Results show that intention to stay in the hotel is a consequence of uneasiness [b = −.68, t(290) = −13.60, p < .01]; the direct paths from the appearance of AI devices and privacy concerns to intention to stay in the hotel are non-significant. Moreover, the analysis revealed a significant indirect effect for the X2 → privacy concerns → uneasiness → intention to stay in the hotel path (indirect effect = −.23, 95% CI: [−.38, −.10]) but not for the X1 → privacy concerns → uneasiness → intention to stay in the hotel path (indirect effect = .05, 95% CI: [−.10, .20]). Since (1) this is not the main focus of this study, (2) the direct paths to behavioral intention have been established in prior studies, and (3) it is ideal to maintain a parsimonious model, this additional analysis was not incorporated into the main model.
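The sequential coding scheme above can be expressed as a small lookup table; with it, the X1 slope contrasts the nonhumanoid group with the tablet group, and the X2 slope contrasts the humanoid group with the nonhumanoid group. A brief illustration on hypothetical data (pandas assumed; not the study’s dataset):

```python
import pandas as pd

# Sequential (ordinal) coding: X1 = 1 for any robot-like device beyond a tablet,
# X2 = 1 only for the humanoid, so each regression coefficient becomes an
# adjacent-group contrast rather than a comparison against a single baseline.
codes = {"tablet": (0, 0), "nonhumanoid": (1, 0), "humanoid": (1, 1)}

conditions = pd.Series(["tablet", "nonhumanoid", "humanoid", "nonhumanoid"])
design = pd.DataFrame([codes[c] for c in conditions], columns=["X1", "X2"])
print(design)
# X1's slope: nonhumanoid vs. tablet; X2's slope: humanoid vs. nonhumanoid.
```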
4.2.3. Moderating role of customer gender

A moderated mediation model was tested using Model 14 of the PROCESS Macro (Hayes, 2017); results are presented in Table 1 and Fig. 3. Customer gender was dummy coded (male = 0, female = 1) and entered as the moderator. Customer gender significantly moderated the relationship between privacy concerns and uneasiness [b = .31, t(289) = 3.10, p < .01]. The conditional effect of privacy concerns on uneasiness depending on customer gender is depicted in Fig. 4. For women [b = .99, t(289) = 13.28, p < .01], the impact of privacy concerns on uneasiness was much stronger than for men [b = .69, t(289) = 9.51, p < .01]. H5 was hence supported.

Fig. 3. Results of Study 2. Notes: Values are unstandardized regression coefficients. The appearance of AI devices was sequentially coded, and customer gender was dummy coded. * p < .05, ** p < .01.

Fig. 4. Conditional effect of privacy concerns on uneasiness based on customer gender.

Table 1
Results of regression analysis (dependent variables in columns).

Variables | Privacy concerns (mediation model) | Uneasiness (mediation model) | Uneasiness (moderated mediation model)
Constant | 1.61** | −.15** | 3.04**
X1 (nonhumanoid vs. tablet) | −.08 | −.14 | −.13
X2 (humanoid vs. nonhumanoid) | .40** | .10 | .07
Customer gender (male = 0, female = 1) | | | .22*
Privacy concerns × Customer gender | | | .31**
Privacy concerns | | .85** | .85**
Age | .01* | −.002 | .0001
DTVP | .37** | .10 | .11
PEXP | .23** | .05 | .09
F test | F(5, 292) = 14.48** | F(6, 291) = 55.30** | F(8, 289) = 45.04**
R² | .20** | .53** | .55**

Notes: X1 and X2 are sequentially coded variables. Tablet condition: X1 = 0, X2 = 0; nonhumanoid condition: X1 = 1, X2 = 0; humanoid condition: X1 = 1, X2 = 1. Values are unstandardized regression coefficients. * p < .05, ** p < .01.

Moderated mediation indices and conditional indirect effects were assessed to test H6 (Table 2). When comparing a nonhumanoid device with a tablet, the index of moderated mediation was non-significant (index = −.02, 95% CI: [−.11, .06]). The indirect effects were non-significant for both female and male customers. When comparing humanoid and nonhumanoid devices, the index of moderated mediation was significant (index = .12, 95% CI: [.03, .25]). The conditional indirect effects based on customer gender were thus significantly different. For men, compared with a nonhumanoid device, a humanoid device had a significant indirect effect through privacy concerns on uneasiness (a × b = .28, 95% CI: [.11, .45]). This indirect effect became even stronger for women (a × b = .40, 95% CI: [.16, .65]). Accordingly, H6 was partially supported.

Table 2
Conditional indirect effects of the moderated mediation model.

Path (Appearance of AI devices → Privacy concerns → Uneasiness) | Men | Women | Index of moderated mediation
X1 (nonhumanoid vs. tablet) | a × b = −.06, 95% CI: [−.25, .12] | a × b = −.08, 95% CI: [−.34, .18] | Index = −.02, 95% CI: [−.11, .06]
X2 (humanoid vs. nonhumanoid) | a × b = .28*, 95% CI: [.11, .45] | a × b = .40*, 95% CI: [.16, .65] | Index = .12*, 95% CI: [.03, .25]

Notes: X1 and X2 are sequentially coded variables. Tablet condition: X1 = 0, X2 = 0; nonhumanoid condition: X1 = 1, X2 = 0; humanoid condition: X1 = 1, X2 = 1. Values are unstandardized regression coefficients. * p < .05, ** p < .01.
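Model 14 instead places the moderator on the M → Y path, so each gender’s conditional indirect effect is a multiplied by the gender-specific b, and the index of moderated mediation is a multiplied by the interaction coefficient. Assuming the coefficients reported above, the following check reproduces Table 2’s X2 row within rounding; again, a verification sketch rather than PROCESS output.

```python
# PROCESS Model 14 logic: W (gender) moderates M -> Y; the X2 -> M path is a.
a_x2 = 0.40                    # humanoid vs. nonhumanoid -> privacy concerns
b_men, b_women = 0.69, 0.99    # conditional privacy concerns -> uneasiness effects
b_interaction = 0.31           # privacy concerns x gender interaction on uneasiness

print(f"indirect effect, men   = {a_x2 * b_men:.2f}")                # ~.28, as in Table 2
print(f"indirect effect, women = {a_x2 * b_women:.2f}")              # ~.40, as in Table 2
print(f"index of moderated mediation = {a_x2 * b_interaction:.2f}")  # ~.12, as reported
```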
5. General discussion

5.1. Theoretical contributions

The findings of this research provide several theoretical contributions. First, this research contributes to the AI ethics literature by demonstrating that the design features of AI devices, rather than the mere existence of AI devices, can cause ethical issues such as invasion of privacy. Although close customer-employee interactions are the basis of the service industry (Zeithaml et al., 1985), and AI and robots have replaced some of the intimate roles of employees, ethical concerns from the use of AI and robots in service organizations have been overlooked (Ivanov and Umbrello, 2021). Particularly, although a few prior studies subscribing to AI ethics have raised potential ethical issues such as privacy and surveillance (e.g., Haenlein et al., 2022; Ivanov and Umbrello, 2021), limited empirical evidence has been provided. Expanding upon the watching-eye effect, this research is one of the first to provide empirical evidence that AI devices can be perceived as the “eyes” of others that can invade one’s privacy. The watching-eye effect indicates that a sense of being watched can influence one’s perceptions and behavior (Pfattheicher and Keller, 2015). Studies have shown that AI devices can be perceived as social entities rather than simply machines, suggesting that people can feel “seen” by these devices (Fan et al., 2022; Tussyadiah and Miller, 2019; van Doorn et al., 2017). This study found that the mere existence of AI devices does not make people feel uncomfortable; rather, when people perceive the “eyes” of these devices, they become concerned that their privacy might be invaded.

Further, this study implies a boundary condition based on which the impact of customers’ perceptions of the camera may vary. The positive effect of a perceived camera on privacy concerns did not manifest in the public context; even customers who perceived the “eyes” of AI devices did not report heightened privacy concerns when in a public place such as a hotel lobby. However, customers in private settings such as hotel rooms felt uncomfortable about their privacy. This discrepancy might have emerged due to the nature of public and private spaces.

Next, this study expands the literature on AI in service settings. The swift development of technology and AI has amplified the need to understand the roles of AI devices such as service robots (e.g., Jia et al., 2021; Tussyadiah and Miller, 2019). While related theoretical development is noticeable, most studies have focused on the positive effects of robots and other AI devices (e.g., Grewal et al., 2021; Yoganathan et al., 2021). The current research asserts that AI device usage is not uniformly positive. If not carefully utilized, AI devices can make customers feel uneasy. The findings of this research suggest that human-like characteristics (i.e., having a camera to observe customers or having a human-like appearance) make customers feel uneasy and concerned about their privacy in private service settings. Interestingly, the differences between a device with minimal AI functions (e.g., a tablet) and one with more advanced functions (e.g., a voice-activated personal assistant such as Amazon Echo) were not readily apparent. Anthropomorphic characteristics, not device functionality, thus engender privacy concerns and cause customers to feel uneasy.

Finally, this research extends knowledge of AI in service by pinpointing gender as a viable trait that influences the effect of the dark side of AI. Individual attributes have been well studied as significant moderating factors influencing the impacts of AI devices on customers’ behavior and emotions (e.g., Chi et al., 2022; Fogel and Nehmad, 2009). However, demographic characteristics are rarely deemed impactful when considering the roles of AI devices. This study reveals that demographic characteristics such as gender significantly shape customers’ perceptions and emotions, especially when viewing AI through a negative lens. In particular, compared to their male counterparts, female customers are more likely to feel uncomfortable with more human-like AI devices due to increased privacy concerns. These findings align well with prior work theorizing that women are more susceptible to privacy issues than men (Fogel and Nehmad, 2009; Hoy and Milne, 2010; Sheehan, 1999).
5.2. Practical implications

The findings of this research also provide practical implications for the service industry. First, results suggest that caution must be exercised when service organizations adopt AI devices. While AI devices resembling humans can boost customers’ satisfaction and enjoyment (Grewal et al., 2021; Hu et al., 2021), these devices may concern customers if used in private spaces. Thus, nonhumanoid devices should be used in private settings, and humanoid devices with anthropomorphic characteristics should be introduced in public spaces. For instance, healthcare services should use fewer humanoid devices, such as kiosks, in individual treatment rooms, while more humanoid devices should be used in public spaces, such as waiting rooms. In this way, customers are less likely to feel monitored, or to feel that their privacy is being violated, while receiving medical services. As another example, hotels should put human-like robots in the front office but place tablets in hotel rooms. Doing so will allow hotels to enjoy the benefits of these tools while avoiding the devices’ dark side.

Next, service organizations should tailor their services based on gender. Results indicate that privacy concerns elicit more intense negative emotions among women than among men. That is, when perceiving AI devices as privacy threats, female customers (vs. their male counterparts) feel more uneasy. To alleviate the possible effects of negative emotions, employees can notify customers about AI devices. For example, customers can be told at check-in about the benefits of AI devices but reassured that, if they wish, such devices can be disabled or removed from private spaces, such as a hotel room, private banking service space, and/or medical treatment room. This tactic represents a sound marketing strategy to promote AI devices. Service organizations should also train employees to tell customers (especially women) about the opportunity to disable AI devices.

Last, findings convey that service organizations using AI devices should attend to the design and location of such devices. Scholars have recommended using human-like service robots to foster positive outcomes such as customer satisfaction, purchase intention, and loyalty (e.g., Grewal et al., 2021; Hu et al., 2021). However, this research suggests that AI devices could have adverse effects if customers perceive these devices as social entities that may threaten their privacy. Additionally, this negative impact is stronger in private settings. Service organizations should design their AI devices to have less human-like features in such environments.

5.3. Limitations and future research

Like any study, this research is subject to limitations. First, the scenarios described a customer traveling alone, yet many travelers stay in a hotel with companions. Some customers may not perceive AI robots as a threat to privacy when sharing a hotel room due to the presence of other people; others may, on the contrary, perceive AI robots as an even greater threat in this case. Future studies could investigate the moderating roles of party size, customers’ relationships with their travel companions, and the nature of travel (e.g., business trip vs. leisure trip) on the effects of AI devices. Moreover, customers’ individual characteristics, such as personal innovativeness (e.g., Jackson et al., 2013) and previous experience (e.g., Hu, 2021), might be important factors that affect their perceptions of and behaviors toward technological advancements such as AI devices. Future studies are encouraged to consider these individual characteristics when examining AI’s effects.

Next, this research used a scenario-based experimental design with pictures describing the appearance of AI devices. Although scenario-based experiments are common (e.g., Choi et al., 2021; Hu et al., 2021), research manipulation in this type of study may be overemphasized. A field study and/or video experiments could further validate the proposed theoretical model. Finally, this study used the hotel context to represent the service organization. While hospitality organizations such as hotels and restaurants are well-recognized service organizations in prior studies (e.g., Yang et al., 2021; Yoganathan et al., 2021), future studies should include other service organizations, such as hospitals, senior living facilities, and banks, to further investigate watching-eye effects.

Declaration of interest statement

None.

Data Availability

Data will be made available on request.

Acknowledgment

This research was supported by the National Social Science Fund of China [to Yaou Hu, grant number 22CJY045].
References

Akter, T., Dosono, B., Ahmed, T., Kapadia, A., Semaan, B., 2020. I am uncomfortable sharing what I can’t see: privacy concerns of the visually impaired with camera based assistive applications. 29th USENIX Secur. Symp. (USENIX Secur. 20), 1929–1948.
Araujo, T., Helberger, N., Kruikemeier, S., de Vreese, C.H., 2020. In AI we trust? Perceptions about automated decision-making by artificial intelligence. AI Soc. 35 (3), 611–623.
Bleier, A., Goldfarb, A., Tucker, C., 2020. Consumer privacy and the future of data-based innovation and marketing. Int. J. Res. Mark. 37 (3), 466–480.
Blut, M., Wang, C., Wünderlich, N.V., Brock, C., 2021. Understanding anthropomorphism in service provision: a meta-analysis of physical robots, chatbots, and other AI. J. Acad. Mark. Sci. 49 (4), 632–658.
Buhalis, D., Moldavska, I., 2021. In-room voice-based AI digital assistants transforming on-site hotel services and guests’ experiences. In: Information and Communication Technologies in Tourism 2021. Springer, Cham, pp. 30–44.
Caine, K., Šabanovic, S., Carter, M., 2012. The effect of monitoring by cameras and robots on the privacy enhancing behaviors of older adults. Proc. Seventh Annu. ACM/IEEE Int. Conf. Hum.-Robot Interact., 343–350.
Chi, O.H., Gursoy, D., Chi, C.G., 2022. Tourists’ attitudes toward the use of artificially intelligent (AI) devices in tourism service delivery: moderating role of service value seeking. J. Travel Res. 61 (1), 170–185.
Choi, S., Mattila, A.S., Bolton, L.E., 2021. To err is human(-oid): how do consumers react to robot service failure and recovery? J. Serv. Res. 24 (3), 354–371.
Damiano, L., Dumouchel, P., 2018. Anthropomorphism in human–robot co-evolution. Front. Psychol. 9, 468.
Danaher, J., 2020. Robot betrayal: a guide to the ethics of robotic deception. Ethics Inf. Technol. 22 (2), 117–128.
Davenport, T., Guha, A., Grewal, D., Bressgott, T., 2020. How artificial intelligence will change the future of marketing. J. Acad. Mark. Sci. 48 (1), 24–42.
Ebbers, F., Zibuschka, J., Zimmermann, C., Hinz, O., 2021. User preferences for privacy features in digital assistants. Electron. Mark. 31 (2), 411–426.
Esmark, C.L., Noble, S.M., Breazeale, M.J., 2017. I’ll be watching you: shoppers’ reactions to perceptions of being watched by employees. J. Retail. 93 (3), 336–349.
Fan, A., Lu, Z., Mao, Z.E., 2022. To talk or to touch: unraveling consumer responses to two types of hotel in-room technology. Int. J. Hosp. Manag. 101, 103112.
Fogel, J., Nehmad, E., 2009. Internet social network communities: risk taking, trust, and privacy concerns. Comput. Hum. Behav. 25 (1), 153–160.
Fu, S., Zheng, X., Wong, I.A., 2022. The perils of hotel technology: the robot usage resistance model. Int. J. Hosp. Manag. 102, 103174.
Garbarino, E., Strahilevitz, M., 2004. Gender differences in the perceived risk of buying online and the effects of receiving a site recommendation. J. Bus. Res. 57 (7), 768–775.
Grewal, D., Guha, A., Satornino, C.B., Schweiger, E.B., 2021. Artificial intelligence: the light and the darkness. J. Bus. Res. 136, 229–236.
Haenlein, M., Huang, M.H., Kaplan, A., 2022. Guest editorial: business ethics in the era of artificial intelligence. J. Bus. Ethics, 1–3.
Haley, K.J., Fessler, D.M., 2005. Nobody’s watching?: subtle cues affect generosity in an anonymous economic game. Evol. Hum. Behav. 26 (3), 245–256.
Hayes, A.F., 2017. Introduction to Mediation, Moderation, and Conditional Process Analysis: A Regression-Based Approach. Guilford Publications.
Hertzfeld, E., 2019. Japanese hotel’s robots modified to prevent security risk. Hotel Manag. Retrieved from https://www.hotelmanagement.net/tech/japanese-hotel-bedside-robots-hacked.
Hoy, M.G., Milne, G., 2010. Gender differences in privacy-related measures for young adult Facebook users. J. Interact. Advert. 10 (2), 28–45.
Hu, Y., Min, H., Su, N., 2021. How sincere is an apology? Recovery satisfaction in a robot service failure context. J. Hosp. Tour. Res. 45 (6), 1022–1043.
Huang, M.H., Rust, R.T., 2018. Artificial intelligence in service. J. Serv. Res. 21 (2), 155–172.
Ioannou, A., Tussyadiah, I., 2021. Privacy and surveillance attitudes during health crises: acceptance of surveillance and privacy protection behaviours. Technol. Soc. 67, 101774.
Ioannou, A., Tussyadiah, I., Lu, Y., 2020. Privacy concerns and disclosure of biometric and behavioral data for travel. Int. J. Inf. Manag. 54, 102122.
Ioannou, A., Tussyadiah, I., Miller, G., 2021. That’s private! Understanding travelers’ privacy concerns and online data disclosure. J. Travel Res. 60 (7), 1510–1526.
Ivanov, S.H., Umbrello, S., 2021. The ethics of artificial intelligence and robotisation in tourism and hospitality–a conceptual framework and research agenda. J. Smart Tour. 1 (4), 9–18.
Jackson, C., Orebaugh, A., 2018. A study of security and privacy issues associated with the Amazon Echo. Int. J. Internet Things Cyber-Assur. 1 (1), 91–100.
Jackson, J.D., Mun, Y.Y., Park, J.S., 2013. An empirical test of three mediation models for the relationship between personal innovativeness and user acceptance of technology. Inf. Manag. 50 (4), 154–161.
Jia, J.W., Chung, N., Hwang, J., 2021. Assessing the hotel service robot interaction on tourists’ behaviour: the role of anthropomorphism. Ind. Manag. Data Syst.
Lee, M.K., Tang, K.P., Forlizzi, J., Kiesler, S., 2011. Understanding users’ perception of privacy in human-robot interaction. In: 2011 6th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, pp. 181–182.
Leong, B., 2019. Facial recognition and the future of privacy: I always feel like… somebody’s watching me. Bull. At. Sci. 75 (3), 109–115.
Manikonda, L., Deotale, A., Kambhampati, S., 2018. What’s up with privacy? User preferences and privacy concerns in intelligent personal assistants. In: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pp. 229–235.
Martin, K.D., Murphy, P.E., 2017. The role of data privacy in marketing. J. Acad. Mark. Sci. 45 (2), 135–155.
Morosan, C., 2019. Disclosing facial images to create a consumer’s profile: a privacy calculus perspective of hotel facial recognition systems. Int. J. Contemp. Hosp. Manag.
Nakanishi, J., Kuramoto, I., Baba, J., Ogawa, K., Yoshikawa, Y., Ishiguro, H., 2019. Soliloquising social robot in a hotel room. Proc. 31st Aust. Conf. Hum.-Comput. Interact., 21–29.
Naudé, W., 2020. Artificial intelligence vs COVID-19: limitations, constraints and pitfalls. AI Soc. 35 (3), 761–765.
Ng, M., Coopamootoo, K.P., Toreini, E., Aitken, M., Elliot, K., van Moorsel, A., 2020. Simulating the effects of social presence on trust, privacy concerns & usage intentions in automated bots for finance. 2020 IEEE Eur. Symp. Secur. Priv. Workshops, 190–199.
Pfattheicher, S., Keller, J., 2015. The watching eyes phenomenon: the role of a sense of being seen and public self-awareness. Eur. J. Soc. Psychol. 45 (5), 560–566.
Piçarra, N., Giger, J.C., 2018. Predicting intention to work with social robots at anticipation stage: assessing the role of behavioral desire and anticipated emotions. Comput. Hum. Behav. 86, 129–146.
Sheehan, K.B., 1999. An investigation of gender differences in online privacy concerns and resultant behaviors. J. Interact. Mark. 13 (4), 24–38.
Tung, V.W.S., Law, R., 2017. The potential for tourism and hospitality experience research in human-robot interactions. Int. J. Contemp. Hosp. Manag.
Tussyadiah, I., Miller, G., 2019. Nudged by a robot: responses to agency and feedback. Ann. Tour. Res. 78, 102752.
Tussyadiah, I., Li, S., Miller, G., 2019. Privacy protection in tourism: where we are and where we should be heading for. Inf. Commun. Technol. Tour. 2019, 278–290.
van Doorn, J., Mende, M., Noble, S.M., Hulland, J., Ostrom, A.L., Grewal, D., Petersen, J.A., 2017. Domo arigato Mr. Roboto: emergence of automated social presence in organizational frontlines and customers’ service experiences. J. Serv. Res. 20 (1), 43–58.
Wu, K.W., Huang, S.Y., Yen, D.C., Popova, I., 2012. The effect of online privacy policy on consumer privacy concern and trust. Comput. Hum. Behav. 28 (3), 889–897.
Xu, H., Dinev, T., Smith, J., Hart, P., 2011. Information privacy concerns: linking individual perceptions with institutional privacy assurances. J. Assoc. Inf. Syst. 12 (12), 798–824.
Yang, Y., Liu, Y., Lv, X., Ai, J., Li, Y., 2021. Anthropomorphism and customers’ willingness to use artificial intelligence service agents. J. Hosp. Mark. Manag., 1–23.
Yoganathan, V., Osburg, V.S., Kunz, W.H., Toporowski, W., 2021. Check-in at the Robo-desk: effects of automated social presence on social cognition and service implications. Tour. Manag. 85, 104309.
Youn, S., Hall, K., 2008. Gender and online privacy among teens: risk perception, privacy concerns, and protection behaviors. Cyberpsychol. Behav. 11 (6), 763–765.
Zeithaml, V.A., Parasuraman, A., Berry, L.L., 1985. Problems and strategies in services marketing. J. Mark. 49 (2), 33–46.
