
Tourism Management 100 (2024) 104835


Emotional expression by artificial intelligence chatbots to improve customer satisfaction: Underlying mechanism and boundary conditions

Junbo Zhang a, Qi Chen b, Jiandong Lu a, Xiaolei Wang c, Luning Liu a,*, Yuqiang Feng a

a School of Management, Harbin Institute of Technology, Harbin, Heilongjiang, 150001, PR China
b School of Economics and Management, Dalian University of Technology, Dalian, Liaoning, 116081, PR China
c School of Information Technology and Management, University of International Business and Economics, Beijing, 100029, PR China

ARTICLE INFO

Keywords: Chatbot; Human-computer interaction; Expectancy violations theory; Emotional expressions; Customer service; Customer satisfaction

ABSTRACT

Artificial intelligence chatbots have invaded the tourism industry owing to their low cost and high efficiency. However, the influence of chatbots' emotional expressions on service outcomes has not received much attention from researchers. Drawing upon expectancy violations theory, we explored how emotional expressions of chatbots affect customer satisfaction using three experiments in the context of tourist attraction recommendations. Chatbots' expressions of concern for customers can improve customer satisfaction by reducing expectancy violations. In particular, customers' goal orientation, the human-likeness of chatbots' avatars, and the relationship type between customers and chatbots can moderate the negative relationship between emotional expression and expectancy violation. These findings advance research on the emotional expressions of chatbots and provide critical insights for deploying chatbots in customer service in the tourism industry.

1. Introduction

Significant advances in technologies such as machine learning and artificial intelligence (AI) have increased the penetration of chatbots as conversational agents (CAs). Chatbots are relatively inexpensive to implement and respond quickly to real-time customer messages (Adam, Wessel, & Benlian, 2021; M. Li, Yin, Qiu, & Bai, 2021). AI-enabled chatbots are transforming the operating pattern of tourism companies (Orden-Mejía & Huertas, 2022; Pillai & Sivathanu, 2020; Samala, Katkam, Bellamkonda, & Rodriguez, 2020). An increasing number of tourism companies use chatbots frequently in their customer services, ranging from simple airline reservations to complex personalized travel recommendations, to ease the complexity of the travel consulting process (FlowXO, 2022; L. Li, Yin, et al., 2021; Shi, Gong, & Gursoy, 2021; Orden-Mejía & Huertas, 2022). Chatbots make interactions with tourists more flexible, innovative, and fun (de Kervenoael, Hasan, Schwob, & Goh, 2020). It is predicted that 95% of online service interactions will occur via AI chatbots by 2028 (Chong, Yu, Keeling, & de Ruyter, 2021).

In contrast to this ongoing trend of AI chatbot services, some tourism companies hesitate to get on the AI train due to concerns about the service effectiveness of AI chatbots. It is observed that customers usually prefer communications with humans to AI chatbot interactions (Fan, Lu, Mao, & Eddie, 2022; van Esch et al., 2022), as they consider AI chatbots to lack problem-solving capability and emotional experience (Chong et al., 2021; van Esch et al., 2022; H. Kim, So, & Wirtz, 2022). Improving customer satisfaction with chatbot services thus becomes a critical challenge for tourism companies (Fan et al., 2022; Orden-Mejía & Huertas, 2022).

Chatbots are increasingly influential in solving real-world problems (e.g., ChatGPT); however, they often fail to deliver a satisfying emotional customer experience and violate customer expectations of affective interactions (Becker, Efendić, & Odekerken-Schröder, 2022; Kim, Jiang, Duhachek, Lee, & Garvey, 2022; Hildebrand & Bergner, 2021; van Esch et al., 2022; Zhou, Fei, He, & Yang, 2022). Whether chatbots' emotional expressions can improve customer satisfaction, and to which emotions chatbots should respond during conversations, remain unclear (Jiang et al., 2022) despite the growing research attention given to chatbots' emotional anthropomorphism (Becker et al., 2022; Miao, Kozlenkova, Wang, Xie, & Palmatier, 2022; van Esch et al., 2022; X. Wang, Jiang, Han, & Qiu, 2022).

Customers always expect to receive concern from service staff (Gorry & Westbrook, 2011). Concern is an indispensable cornerstone for building affective trust with customers (Akhoondnejad, 2016; I. P. Tussyadiah & Park, 2018). When switching to the AI chatbot service

* Corresponding author.
E-mail address: liuluning@hit.edu.cn (L. Liu).

https://doi.org/10.1016/j.tourman.2023.104835
Received 2 March 2023; Received in revised form 15 July 2023; Accepted 25 August 2023
Available online 11 September 2023
0261-5177/© 2023 Elsevier Ltd. All rights reserved.

scenario, we consider that emotional expressions of concern from chatbots will likely compensate for customers' perceptions of expectation violations arising from a poor emotional experience during interactions, thus improving customer satisfaction. However, this conjecture has not been examined by prior research. Moreover, customer expectations toward emotional interaction may be contingent on characteristics of both sides of the interaction, i.e., customer traits and chatbot features, which deserve further exploration. Therefore, we focus on chatbots' emotional expressions of concern in customer services, and particularly seek to answer the following research questions:

RQ1. How do chatbots' emotional expressions of concern affect customer expectations and satisfaction?

RQ2. How may the influence of emotional concern vary with customer traits and the designed features of chatbots?

This study draws upon expectancy violations theory (EVT) and uses three scenario-based online experiments to explore the above research questions. We found that chatbots' emotional expressions of concern improved customer satisfaction via reduced expectancy violation. The negative impact of emotional expressions on expectancy violation can be mitigated when the customer has low process-oriented or high outcome-oriented goals, the human-likeness of chatbots' avatars is low, and the relationship type between chatbots and customers is 'friend'.

Our findings make several significant contributions to the literature. First, this paper studies the influence of emotion expressed by AI chatbots on customer satisfaction in tourism services. Second, we identified expectancy violation as an underlying mechanism through which emotional expressions of chatbots affect customer satisfaction. Third, we advance the study of boundary conditions for chatbots in tourism by highlighting an unstudied and essential dimension.

2. Literature review and theoretical background

2.1. Chatbots in tourism services

Chatbots refer to software applications designed for interacting with humans using natural written language (Rapp, Curti, & Boldi, 2021). They communicate with users virtually through text thanks to advanced AI technology. Chatbots are gaining attention and popularity in the tourism industry (Cai, Li, & Law, 2022). Almost all major travel companies (e.g., Ctrip, Qunar, and Priceline) have deployed chatbots to meet customer needs on their websites or applications. These chatbots can provide customer service, including customer booking, travel planning, pre-trip consultation, post-trip customer support, and personalized recommendations for tourist attractions (Pillai & Sivathanu, 2020). Thanks to their robust capabilities and cost-effectiveness, chatbots reduce a company's human labor burden and improve operational efficiency (Han, Yin, & Zhang, 2022).

Regarding service robots in tourism services, most studies focus on offline physical robots that provide customer services, including cleaning rooms (Hoang & Tran, 2022), delivering luggage (H. Kim, So, & Wirtz, 2022), serving food (Liu, Wan, et al., 2022), welcoming customers (Hou, Zhang, & Li, 2021), and check-ins (Hou et al., 2021; Yoganathan, Osburg, Kunz, W, & Toporowski, 2021). By contrast, there is little research on chatbots that broadly provide online customer service (Cai et al., 2022; Orden-Mejía & Huertas, 2022; Jiménez-Barreto, Rubio, & Molinillo, 2021). There are many differences between chatbots and physical robots in the tourism industry, even though both are service robots (Wirtz et al., 2018). For example, a physical robot is a physical representation, while a chatbot is a virtual representation (Wirtz et al., 2018). The physical shape of service robots directly affects users' interactions with them (de Kervenoael et al., 2020; Schuetzler, Giboney, Grimes, & Nunamaker, 2018). Compared to traditional services, travel customers create substantial demand for self-service technology represented by chatbots (Samala et al., 2020). Therefore, exploring the influence of chatbots on customer experience in tourism services is essential for research and practice (Tung & Law, 2017).

2.2. The impact of chatbots on customer experience

Current chatbot research in the tourism industry has focused on factors influencing chatbot adoption and improving customer satisfaction. We summarize the main research findings about chatbots in recent years in Table 1. Based on the service robot acceptance model (Wirtz et al., 2018), we classify these drivers as functional elements, social-emotional elements, and relational elements.

Functional elements are primarily productivity-related factors. Scholars in the tourism industry have identified ease of use, usefulness, performance expectancy, media richness, competence, accessibility, informativeness, and reliability as the primary functional elements that drive user adoption of chatbots and satisfaction (Lei, Shen, & Ye, 2021; Liu, Yi, Shannon, & Wan, 2022; Melián-González, Gutiérrez-Taño, & Bulchand-Gidumal, 2021; Pillai & Sivathanu, 2020; Yoon & Yu, 2022).

Social-emotional elements represent the social interactions between users and robots, such as perceived humanness and perceived social presence. The primary social-emotional elements influencing the adoption and customer satisfaction of chatbots in the tourism industry include perceived social presence, anthropomorphism, interactivity, and empathy (Cai et al., 2022; Lei et al., 2021; M. A. Orden-Mejía & Huertas, 2022).

Relational elements are the outcomes associated with relational bonds. Scholars have identified trust, privacy risk, perceived warmth, perceived cuteness, and perceived need for relatedness as the primary relational elements driving user adoption and satisfaction (Jiménez-Barreto et al., 2021; Liu, Yi, et al., 2022; Lv, Luo, Liang, Liu, & Li, 2022; Pillai & Sivathanu, 2020; Zhang et al., 2022).

Two research opportunities remain. First, in the tourism industry, chatbots are replacing human employees in providing online customer services (Melián-González & Bulchand-Gidumal, 2020); however, the impact of their emotional expressions on customer satisfaction has not received much attention. In service encounters, emotional expressions from human employees to customers (e.g., smiling, greeting, and thanking) are critical determinants of the customer service experience and customer relationships (H. J. Kim, 2008; Z. Wang et al., 2017). Moreover, emotional chatbots are becoming increasingly popular and influencing customer service outcomes (Han et al., 2022).

Emotional expressions are "outwardly perceptible clues suggesting the presence of an emotional state in the expresser" (van Kleef & Côté, 2022, p. 631). Because emotional expression plays an essential role in regulating interpersonal relationships, it can be utilized in human-computer interactions to make computers more human-like and create a more enjoyable interaction experience for users (Lopatovska & Arapakis, 2011). Although chatbots cannot have real emotions (Wirtz et al., 2018), they can simulate emotions through verbal cues (e.g., words) or non-verbal cues (e.g., emoticons, pictures) (Seeger, Pfeiffer, & Heinzl, 2021). Based on the Computers as Social Actors (CASA) paradigm, users tend to treat computer systems exhibiting human social cues like humans and respond socially to them (Moon, 2000; Nass & Lee, 2001). Thus, emotional expressions by chatbots can trigger human nature, which leads users to perceive them as human and behave socially in interactions with them (Chin & Yi, 2022).

Second, prior studies about the influence of chatbots on customer experience in tourism have mainly focused on the moderating effects of human-related and context-related factors, ignoring the moderating roles of chatbot design features. Previous research has shown that customer gender (Zhang et al., 2022), technology anxiety (L. Li, Yin, et al., 2021), and stickiness to traditional service agents (Pillai & Sivathanu, 2020) moderate chatbot use. For context, Liu, Yi, Shannon, and Wan (2022) found that the impact of chatbot appearance on customers' perceived trust can vary with service contexts. Lv et al. (2022) found that the cuteness effect of chatbots is moderated by time pressure


Table 1
Review of the research on chatbots.

| Author | Methodology | Independent variable | Underlying mechanism | Boundary condition (Human / Chatbot / Context) | Dependent variable |
|---|---|---|---|---|---|
| Pillai and Sivathanu (2020) | Interviews; Survey | Perceived ease of use, Perceived usefulness, Perceived trust, Technological anxiety, Perceived intelligence, Anthropomorphism | Adoption intention of chatbots | Human: Stickiness to traditional travel agents | Actual usage of chatbots |
| Jiménez-Barreto et al. (2021) | Interviews; Survey | Self-determined interaction | Customer experience, Attitude toward the chatbot | – | Satisfaction |
| Lei et al. (2021) | Survey | Media richness, Social presence | Task attraction, Trust, Social attraction | – | Reuse intention |
| Li, Lee, Emokpae, and Yang (2021) | Survey | Understandability, Reliability, Responsiveness, Assurance, Interactivity | Post-use confirmation, Satisfaction | Human: Technology anxiety | Use continuance |
| Melián-González et al. (2021) | Survey | Performance expectancy, Effort expectancy, Social influence, Hedonism, Habit, Inconvenience, Anthropomorphism, Automation, Perceived innovativeness | – | – | Chatbot usage intention |
| Lv et al. (2022) | Experimental | Cuteness of chatbots | Tenderness, Performance expectancy | Context: Severity, Time pressure | Tolerance of service failure |
| Zhang et al. (2022) | Survey | Performance expectancy, Effort expectancy, Social influence, Hedonic motivation, Habit, Privacy risk, Time risk, Anthropomorphism, Personalization | – | Human: Gender difference | Continuance intention |
| Cai et al. (2022) | Interviews; Experimental | Anthropomorphism | Trustworthiness, Intelligence, Enjoyment | – | Usage intentions |
| Liu, Stella, et al. (2022) | Experimental | Chatbot appearance | Trust | Context: Service context (hedonic vs. utilitarian) | Intention to use |
| Orden-Mejía and Huertas (2022a) | Experimental | Informativeness, Accessibility, Empathy | User satisfaction | – | Destination image formation |
| Orden-Mejía and Huertas (2022b) | Experimental | Informativeness, Accessibility, Interactivity, Empathy | – | – | Chatbot user satisfaction |
| Yoon and Yu (2022) | Survey | Findable, Useable, Desirable, Valuable, Accessible | Attitudes | – | Utilization intention |

and service failure severity. The effect of a chatbot's system design on users is a synergistic effect of the chatbot's individual system features. Therefore, it is essential to understand the interaction between the design features of chatbots to reveal the complex relationship between individual design features and user perceptions (Seeger et al., 2021).

2.3. Expectancy violations theory (EVT)

EVT provides an appropriate theoretical lens for understanding the impact of emotional expressions by chatbots on customer satisfaction. EVT concerns how people form and react to expectations about the interaction process (Burgoon & Jones, 1976). EVT assumes that people enter interactions with expectations for human or non-human verbal or non-verbal behavior (Burgoon, 1993; Burgoon et al., 2016). Communication expectancies are "cognitions about the anticipated communicative behavior of specific others" (Burgoon & Walther, 1990, p. 236). These expectations are violated when someone does not behave as expected (Afifi & Metts, 1998). Once expectations are violated, people shift their attention to the meaning of the violation. Based on the meaning of the violating behavior, an expectation violation can be perceived as positive or negative, which in turn affects the communication process and attitudes (Burgoon et al., 2016). EVT holds that expectation violations tend to have a positive impact when they are positive, i.e., things are better than expected; conversely, negative expectation violations, i.e., things are not as good as expected, can negatively affect various outcomes, including satisfaction and purchase intention.

EVT is appropriate as a theoretical basis for our study. First, EVT was developed initially to explain expectations related to communication, which distinguishes it from other expectation paradigms (Burgoon, 1993). Expectations are prominent in interpersonal emotional communication. Second, EVT has been extended to human-robot interaction to explain how robot-related expectations and violations affect user-robot interactions and evaluations (Burgoon et al., 2016; Crolic, Thomaz, Hadi, & Stephen, 2022). Expectations are essential to chatbots expressing emotions because general attitudes about whether AI should express emotions will likely vary (Rapp et al., 2021; Urakami, Moore, Sutthithatip, & Park, 2019). Third, EVT may be preferable to other expectation paradigms, such as expectation confirmation, regarding counterintuitive predictions (Burgoon et al., 2016).

According to EVT (Burgoon, 1993), two types of factors influence expectations: communicator and relationship. Communicator characteristics comprise participative actors' features, such as demographics, appearance, and personality. Relationship factors refer to the relationship of communicators to one another, including familiarity and similarity (Burgoon, 1993). In our research context, we argue that customers generally have affective expectations for chatbots. When chatbots express appropriate emotions, customers' negative expectancy violation decreases (even up to a positive expectancy violation), and expectancy violation, in turn, affects customer satisfaction. Moreover, communicator (i.e., customer and chatbot) characteristics and relationships influence such communication expectations. For the communicator characteristics, we considered the goal orientation of customers and chatbot avatars; for the relationship characteristics, we considered the relationship type between customers and chatbots (assistant vs. friend).


2.4. Emotional expressions and customer satisfaction

Customer satisfaction reflects a customer's attitude toward the service interaction they have just experienced and is an essential performance indicator of customer service encounters (Barger & Grandey, 2006). As chatbots gradually replace human employees in customer service, service satisfaction will likely be reduced despite lower labor costs and increased efficiency for companies (Ruan & Mezei, 2022; Zhao, Cui, Hu, Dai, & Zhou, 2022). Part of the reason is that the proliferation of chatbots has made the mutual concern between human employees and customers disappear (Kozinets & Gretzel, 2021). Service research has repeatedly shown that an employee's ability to manage their emotions (especially displaying positive emotions) is often correlated with satisfaction (Z. Wang et al., 2017; Delcourt, Gremler, van Riel, & van Birgelen, 2013). According to affect infusion theory, an individual's emotional state can influence social judgment (Forgas, 1995). If the service provider expresses concern for the customer through emotions, this emotional capacity produces a positive emotional state in the customer and makes the customer perceive higher satisfaction with the service encounter (Delcourt et al., 2013). Furthermore, based on the CASA paradigm (Moon, 2000; Nass & Lee, 2001), the effect of emotional expressions on satisfaction should be similar for anthropomorphic chatbots and human employees. Therefore, hypothesis 1 is proposed:

H1. Emotional expressions of concern by chatbots enhance customer satisfaction.

2.5. The mediation effect of expectancy violations

Previous research has shown that customers have competency-related efficiency expectations toward chatbots (Crolic et al., 2022; Lv et al., 2022). We believe that customers also have affective expectations for chatbots. First, customers generally resist services provided by AI-based chatbots as opposed to human employees, owing not only to the poor problem-solving ability of AI but also to the low emotional experience that AI creates with customers (Kyung & Kwon, 2022; Longoni, Bonezzi, & Morewedge, 2019; Zhao et al., 2022; Zhou et al., 2022). Second, customers have affective expectations of human employees, primarily that employees display positive emotions (e.g., service with a smile) (Houston, Grandey, & Sawyer, 2018) and recognize customer emotions in specific contexts (e.g., service recovery) (Becker et al., 2022). According to CASA (Moon, 2000; Nass & Lee, 2001), the expectations for anthropomorphic chatbots should be similar to those for humans. Third, according to self-determination theory (Ryan & Deci, 2020), individuals require relatedness when interacting with technology. Relatedness is the need for individuals to connect with others (i.e., to care and be cared for). When chatbots meet this need, it positively influences the customer's service evaluation (Jiménez-Barreto et al., 2021). Customers thus have affective expectations of chatbots and expect chatbots to show concern for them. Therefore, customers' expectation violations are reduced when chatbots express emotional concern. Expectancy violation has been widely shown to affect customer satisfaction (Oliver, 1980; Oliver & Swan, 1989; Zeithaml, Parasuraman, & Berry, 1990). Hypotheses 2 and 3 are proposed:

H2. Emotional expressions of concern by chatbots diminish expectancy violation.

H3. Expectancy violation mediates the positive relationship between emotional expressions of concern by chatbots and customer satisfaction.

2.6. The moderating role of customers' goal orientation

According to EVT (Burgoon, 1993), communicator characteristics affect expectations. We first considered the moderating effect of customer goal orientation. Consumers evaluate the services provided by service providers in terms of process quality and outcome quality (Gronroos, 1984). While both are essential components of service assessment, different consumers have varying consumption goals (Iacobucci & Ostrom, 1993). Process-oriented and outcome-oriented goals shift the customer's attention to different judgment dimensions of the service evaluation (Güntürkün, Haumann, & Mikolon, 2020).

Highly process-oriented customers pay close attention to the intangible aspects of service delivery, including the interactive atmosphere and social functions (de Ruyter & Wetzels, 1998; Kirmani & Campbell, 2004). They focus on whether the service provider is committed to building a good relationship with them and providing an enjoyable and satisfying process (de Ruyter & Wetzels, 1998). When interacting with chatbots, customers with high process-oriented goals pay more attention to the social functions and atmosphere created by chatbots; therefore, they have more affective expectations of chatbots. Customers with low process-oriented goals do not focus on the emotional experience chatbots offer, so they have fewer affective expectations of chatbots. Low process-oriented customers have similar perceptions of expectations being violated regardless of whether the chatbot expresses emotions.

Highly outcome-oriented customers focus on the tangible aspects of service delivery, including task functionality and the core deliverables of the service (de Ruyter & Wetzels, 1998; Kirmani & Campbell, 2004). Customers with high outcome-oriented goals concentrate on whether the service provider is competent in delivering a satisfactory result (de Ruyter & Wetzels, 1998). In customer interaction with chatbots, customers with high outcome-oriented goals pay more attention to a chatbot's ability to solve their problems than to the warmth introduced by emotional expressions. Customers with high outcome-oriented goals have fewer affective expectations of chatbots; conversely, customers with low outcome-oriented goals have more affective expectations. This means high outcome-oriented customers have similar perceptions of expectations being violated regardless of whether the chatbot expresses emotions. Therefore, hypothesis 4 is proposed:

H4. Customer goal orientation (process orientation and outcome orientation) moderates the negative relationship between emotional expressions by chatbots and expectancy violations. (a) Low process-oriented goals (vs. high process-oriented goals) attenuate the negative relationship between emotional expressions by chatbots and expectancy violations. (b) High outcome-oriented goals (vs. low outcome-oriented goals) attenuate the negative relationship between emotional expressions by chatbots and expectancy violations.

2.7. The moderating role of the human-likeness of chatbots' avatars

Chatbots act as communicators, and their appearance characteristics (i.e., avatars) are the moderating factors we consider. In recent years, avatars have been defined as "digital entities with anthropomorphic appearance, controlled by a human or software, that are able to interact" (Miao et al., 2022, p. 71). For chatbots, avatars are static, graphical representations of the chatbot (Diederich, Brendel, Morana, & Kolbe, 2022). As an essential anthropomorphic visual cue for chatbots (Seeger et al., 2021), the avatar influences user perceptions and evaluations of chatbots. For example, the gender, dress, and ethnicity of avatars can influence perceived authenticity and engagement (Esmark Jones, Hancock, Kazandjian, & Voorhees, 2022); the human-likeness of the avatar can influence repurchase intentions (Fota, Wagner, Roeding, & Schramm-Klein, 2022); and the familiarity of avatars can moderate the relationship between humanization and eeriness (S. W. Song & Shin, 2022). In particular, avatars that are more human-like in form may elicit higher social expectations from users (Esmark Jones et al., 2022). The reason is that the customer notices the avatar of a chatbot before formally interacting with it (Kull, Romero, & Monahan, 2021). The initial impression of a chatbot directly influences subsequent expectations. Human-like avatars may allow people to perceive chatbots


as having higher cognitive ability and emotional intelligence (Miao et al., 2022), both of which are essential for acceptance by consumers (X. Song, Xu, & Zhao, 2022). More human-like avatars appear to increase pre-interaction user expectations of chatbots' efficiency performance (Crolic et al., 2022). Therefore, we argue that, in addition to efficiency performance expectations, users will have higher affective expectations for more human-like avatars before interaction. We can infer that when avatars are more human-like, chatbots without emotional expression can lead to more significant expectancy violations. When avatars are highly human-like, customers' expectancy violation is higher if chatbots do not express emotions; when chatbots do express emotions, expectancy violation is reduced more. Therefore, hypothesis 5 is proposed:

H5. The human-likeness of chatbot avatars moderates the negative relationship between emotional expressions by chatbots and expectancy violation; this negative relationship is stronger under high human-likeness of avatars than under low human-likeness of avatars.

2.8. The moderating role of the relationship type between customers and chatbots

According to EVT (Burgoon, 1993), the relationship between communicators is an essential factor influencing expectations. In customer interaction with chatbots, we consider the relationship type (assistant vs. friend) between customers and chatbots as the final moderating factor. In providing services to customers, the role played by the company influences the consumer's reaction (H. C. Kim & Kramer, 2015). As company representatives, chatbots can play different roles and influence the user experience (Youn & Jin, 2021). When chatbots act as 'assistants', they are conceptualized as useful machines that help humans accomplish tasks; therefore, the 'competence' dimension of chatbots is more prominent. Chatbots in the role of 'friends' are conceptualized as providing emotional support to users and acting as trustworthy personal companions; therefore, the 'sincerity' dimension is more prominent (Dautenhahn, 2007; Sundar, Jung, Waddell, & Kim, 2017). The 'friend' and 'assistant' labels have been shown to elicit different positive evaluations of technology from users. For example, Sundar et al. (2017) predicted that the 'assistant' label might trigger the 'helper' heuristic, causing users to evaluate positively the technology that helps them accomplish their tasks; similarly, the 'friend' label may trigger the 'social presence' heuristic, causing users to evaluate positively the technology that accompanies them. The relationship type between consumers and chatbots can influence consumer perceptions of brand personality (competence vs. sincerity) (Youn & Jin, 2021). This relationship type can also moderate perceived body image's effect on self-esteem and purchase behavior (Ameen, Cheah, & Kumar, 2022). In our study context, when chatbots act as 'friends', customers focus on their emotional support features and have more affective expectations: when such chatbots do not provide emotional support, customers experience a more significant expectancy violation, whereas when they do, expectancy violation is much lower. Conversely, when chatbots act as 'assistants', customers are primarily concerned with whether the chatbots can help them accomplish their tasks. In this case, whether chatbots provide emotional support does not significantly affect the customer's expectancy violation. Therefore, hypothesis 6 is proposed:

H6. The type of relationship between customers and chatbots (friend vs. assistant) moderates the negative relationship between emotional expressions and expectancy violations. The assistant (vs. friend) relationship type attenuates the negative relationship between emotional expressions and expectancy violations.

Our research model is presented in Fig. 1.

3. Study context and overview of studies

To understand how emotional expressions of concern by chatbots affect customer expectations and satisfaction, we conducted three scenario-based online experiments. We first created a freely accessible online travel service website (see Appendix A). Although the travel website appears to provide customers with airline reservations, hotel bookings, and travel tips, none of these functions were available. We added an AI-powered chatbot to this travel website. The function of this chatbot is to recommend tourist attractions customized to customer preferences. We developed this chatbot using Google's AI development platform, Dialogflow (Essentials Edition), which provides a powerful natural language understanding engine to process and interpret natural language input. We integrated this chatbot into the main page of our travel website to provide customers with recommendations for tourist attractions. The entire conversational interaction included three steps. First, the chatbot asks the customer which city they want to visit. Then, the chatbot asks the customer about their travel situation and preferences, including the number of travelers, fare requirements, and likes and dislikes. Finally, the chatbot recommends a tourist attraction in the city based on the customer's expressed preferences.

We conducted three studies on this travel website to test different parts of the theoretical framework. Study 1 provided preliminary evidence for the effect of emotional expressions of concern on expectancy violation and customer satisfaction (H1 and H2) and validated potential mechanisms influencing customer satisfaction (H3). We validated the

Fig. 1. Research model.

5
J. Zhang et al. Tourism Management 100 (2024) 104835

moderating role of customer characteristics (i.e., customer's goal orientation) through Study 1 (H4). We also examined the moderating effect of the human-likeness of the chatbot avatar on expectancy violation through Study 2 (H5). Finally, Study 3 examined the moderating effect of the relationship type between customers and chatbots on expectancy violations (H6). Table 2 provides an overview of the empirical research.

Table 2
Overview of the empirical research.

Study 1. Objectives: testing the main effect of emotional expression, the mediation of expectancy violation, and the moderation of goal orientation. Manipulation: emotional expression. Moderator: goal orientation. Hypotheses tested: H1, H2, H3, H4.
Study 2. Objectives: testing the main effect of emotional expression and the moderation of human-likeness of avatars. Manipulation: emotional expression and human-likeness of avatars. Moderator: human-likeness of avatars. Hypotheses tested: H1, H2, H5.
Study 3. Objectives: testing the main effect of emotional expression and the moderation of relationship type. Manipulation: emotional expression and relationship type. Moderator: relationship type. Hypotheses tested: H1, H2, H6.

4. Study 1

Study 1 examined (1) whether emotional expressions of concern by chatbots can reduce expectancy violation and increase customer satisfaction, (2) whether expectancy violation mediates the effect of the emotional expressions on customer satisfaction, and (3) the moderating effect of goal orientation. We expected that when customers are more process-oriented or less outcome-oriented, expectancy violations will be significantly lower; when customers are less process-oriented or more outcome-oriented, there will be no significant difference in expectancy violations.

4.1. Manipulation stimuli

To manipulate the emotional expressions, we selected emotions expressing concern (e.g., I'm worried, I feel pity, I'm concerned, or I'm afraid). These emotions were expressed in text and emoticons, and the timing of the expression was the chatbot's reply to the customer when they stated their preferences. As computer-mediated communication, online text communication is as emotionally involved as face-to-face communication (Derks, Fischer, & Bos, 2008). Emoticons can be applied to express and reinforce emotions (Walther & D'Addario, 2001). We followed the experimental treatment of Yin, Bond, and Zhang (2021). For the group without emotional expressions (control group), the chatbot expressed only factual or suggested information; for the group with emotional expressions (treatment group), the chatbot additionally expressed its emotion directly using text at the beginning of the reply and reinforced the emotion using a corresponding emoji at the end. A portion of the stimulus material for the control and treatment groups is shown in Table 3.

Table 3
Conversational emotion stimuli in Studies 1, 2, and 3.

1. Control: "The picnic should ensure hygiene, and try not to bring food that is easily spoiled. Dietary problems will affect your subsequent travel experience." Emotional expression of concern: "The picnic should ensure hygiene, and try not to bring food that is easily spoiled. I'm worried that dietary problems will affect your subsequent travel experience."
2. Control: "There are very few taxis around the scenic spot at night. It is difficult for you to get a regular taxi." Emotional expression of concern: "There are very few taxis around the scenic spot at night. I am afraid it is difficult for you to get a regular taxi."
3. Control: "If you buy tiger food to feed the tiger, keep your distance. You must keep yourself safe." Emotional expression of concern: "If you buy tiger food to feed the tiger, keep your distance. I'm concerned about whether you can keep yourself safe."
4. Control: "Russian cabaret shows are only available in winter and summer; you may not be able to go." Emotional expression of concern: "Russian cabaret shows are only available in winter and summer; it is a pity that you may not be able to go."
5. Control: "The space in the museum is not very large and there are many people. It will affect your viewing experience." Emotional expression of concern: "The space in the museum is not very large and there are many people. I am worried that it will affect your viewing experience."
Note: each emotional reply also ended with a corresponding emoji reinforcing the expressed emotion.

4.2. Design and procedure

Experiment 1 used a between-subjects design (emotional expression: control vs. emotion). We recruited 141 participants (66.7% female, M age = 25.48) from Credamo, a research platform in China providing paid data collection services. We used G*Power (α = 0.05, power = 0.80) to determine the minimum sample size (Zhou et al., 2022). This value was 128; therefore, 141 samples were sufficient. Participants were randomly assigned to one of the two conditions (emotion vs. control). The demographic profile of the participants in the three studies is shown in Appendix B.

First, all participants were informed that the travel website was a niche travel website in China that provides a recommendation service for tourist attractions. Participants were then told they would visit the travel site and interact with a chatbot that could recommend a tailored tourist attraction (see Appendix C). At this point, we showed all participants the avatar of the chatbot with which they were about to interact (see Appendix D). In the following step, participants indicated their pre-interaction affective expectations regarding the chatbot's upcoming performance on six seven-point Likert items ('I expect the chatbot to: be sensitive to my emotions and feelings; understand my emotions from the conversation; understand my emotional state; express appropriate emotions; show emotions that conform to the norms of expression; express acceptable emotions'; α = 0.87; adapted from van Kleef & Côté, 2007; Wong & Law, 2002). After the interaction was completed, participants assessed the chatbot's post-interaction emotional performance on six seven-point Likert items corresponding to the pre-interaction items ('I felt the chatbot: was sensitive to my emotions and feelings; understood my emotions from the conversation; understood my emotional state; expressed appropriate emotions; showed emotions that conform to the norms of expression; expressed acceptable emotions'; α = 0.90). To check the success of our manipulation, participants in both conditions answered a manipulation check item ('In your opinion, to what extent did the chatbot use emotions to express concern for you during the previous conversation?'). Finally, participants rated their customer satisfaction (α = 0.83) (Barger & Grandey, 2006) and their belief about computer emotion. Details of all measurements are shown in Appendix E.

4.3. Results

To measure expectancy violation, we computed each participant's expectancy violation by subtracting the post-interaction assessment average score from the pre-interaction expectation average score (Crolic et al., 2022; Madden, Little, & Dolich, 1979). Pre-interaction


expectations for all studies are presented in Appendix F. We first conducted a manipulation check to confirm the validity of the emotional expression manipulation. An independent-samples t-test showed that participants perceived the chatbot to express more emotions in the emotion condition than in the control condition (M emotion = 5.87, standard deviation [SD] emotion = 1.469, M control = 4.40, SD control = 1.444, t = −6.007, p < 0.001). These results suggest that our manipulation was successful.

To test the effect of the chatbot's emotional expression on customer satisfaction and expectancy violation, we conducted two one-way analyses of covariance (ANCOVA). We used emotional expression by chatbots as the independent variable; customer satisfaction and expectancy violation as the dependent variables; and age, gender, experience, error (whether there were errors during the conversation), and belief about computer emotion (the degree to which the customer believed the computer could have its own emotions) as covariates. The results showed significant main effects for both models. Compared with the control condition, customer satisfaction was significantly higher in the emotion condition (M emotion = 6.220, SD emotion = 1.030, M control = 5.671, SD control = 0.756, F(1, 139) = 10.04, p < 0.01). Expectancy violation was significantly lower in the emotion condition (M emotion = 0.115, SD emotion = 0.696, M control = 0.452, SD control = 0.805, F(1, 139) = 5.804, p < 0.05). These findings support H1 and H2.

To examine the mediating effect of expectancy violation between emotional expressions of concern and customer satisfaction, we conducted mediation analyses using PROCESS (model 4) with 5000 bootstrap samples (Hayes, 2017). Emotional expression was the independent variable, customer satisfaction the dependent variable, expectancy violation the mediator, and age, gender, experience, error, and belief about computer emotion the covariates. The results revealed a significant positive indirect effect of emotional expressions on customer satisfaction via expectancy violation (effect = 0.103, BootSE = 0.070, 95% confidence interval [CI] [0.0043, 0.2711]). This finding supports H3.

To determine whether the interaction between emotional expressions by chatbots and customer goal orientation would affect customer satisfaction through expectancy violation, we conducted a moderated mediation analysis using PROCESS (model 7) with 5000 resamples (Hayes, 2017). The independent variable was the emotional expressions of concern, the mediating variable was expectancy violation, and the dependent variable was customer satisfaction.

When process orientation was used as the moderator, there was a significant indirect effect on expectancy violation (index = 0.075, BootSE = 0.054, 95% CI [0.0001, 0.0173]). The interaction between emotional expressions and process orientation significantly impacted expectancy violation (effect = −0.224, t = −2.090, p = 0.038 < 0.05) (Fig. 2). For customers with a higher process orientation (effect = 0.195, BootSE = 0.195, 95% CI [0.0222, 0.4494]), emotional expressions significantly decreased expectancy violation (β = −0.581, t = −3.246, p < 0.01), with significant mediation through expectancy violation. For customers with a lower process orientation (effect = 0.020, BootSE = 0.080, 95% CI [−0.1137, 0.2200]), emotional expressions had no effect on expectancy violation (β = −0.059, t = −0.335, p = 0.74), with insignificant mediation through expectancy violation. These findings support H4(a).

Next, when outcome orientation was used as the moderator, there was a significant indirect effect on expectancy violation (index = −0.065, BootSE = 0.045, 95% CI [−0.1695, −0.0110]). The interaction between emotional expressions and outcome orientation significantly impacted expectancy violation (β = 0.193, t = 2.090, p = 0.039 < 0.05) (Fig. 3). For customers with a lower outcome orientation (effect = 0.216, BootSE = 0.130, 95% CI [0.0246, 0.5191]), emotional expressions significantly decreased expectancy violation (β = −0.644, t = −3.100, p < 0.01), with significant mediation through expectancy violation. For customers with a higher outcome orientation (effect = 0.020, BootSE = 0.068, 95% CI [−0.1066, 0.1708]), emotional expressions had no effect on expectancy violation (effect = −0.058, t = −0.335, p = 0.74), with insignificant mediation through expectancy violation. This finding supports H4(b). The results of the mediation and moderated mediation analyses of all studies are presented in Appendix G.

Fig. 2. Interaction effects of emotional expressions and process orientation on expectancy violation (Study 1).

Fig. 3. Interaction effects of emotional expressions and outcome orientation on expectancy violation (Study 1).

4.4. Discussion

Study 1 suggests that the emotional expression of concern can increase customer satisfaction, and that reduced expectancy violation is the underlying mechanism. No significant difference in expectancy violation occurred when customers were low process-oriented (versus high process-oriented) or high outcome-oriented (versus low outcome-oriented). Thus, H1, H2, H3, H4(a), and H4(b) are supported.

5. Study 2

Study 2 was mainly conducted to test the moderating effect of the human-likeness of chatbot avatars. We expected expectancy violation to decrease when the chatbot avatar was less human-like, and to decrease further when the chatbot avatar was more human-like.

5.1. Pretest

We pretested avatars to select two with a significant difference in humanoid degree and to test whether a more humanoid avatar would significantly affect affective expectations. Eighty-nine participants from the Credamo platform completed the pretest and received monetary rewards. All participants were randomly assigned to one of two conditions: (1) a high human-like avatar condition and (2) a low human-like


avatar condition. Each condition showed a picture of a chatbot avatar (see Appendix D). We asked participants to assume that they would interact with the chatbot and to assess the degree to which the avatar was human-like, as well as their pre-interaction affective expectations. The degree of human-likeness of the avatar was measured by asking, "Please rate the degree to which the chatbot avatar is human-like"; pre-interaction affective expectations were measured using an adapted scale consistent with Study 1. An independent-samples t-test revealed a significant difference in the degree of avatar human-likeness between the two conditions (M high = 6.50, SD high = 0.804, M low = 3.19, SD low = 1.583, t = −12.206, p < 0.001). Participants had significantly higher affective expectations of the chatbot in the high human-like avatar condition than in the low human-like avatar condition (M high = 5.84, SD high = 0.651, M low = 5.15, SD low = 1.171, t = −3.391, p = 0.012 < 0.05).

5.2. Main study design and procedure

Study 2 used a 2 × 2 (emotional expression: control vs. emotion; avatar human-likeness: high vs. low) between-subjects design. We recruited 183 participants (59% female, M age = 26.12) from the Credamo platform. We used G*Power (α = 0.05, power = 0.80) (Zhou et al., 2022) to determine the minimum sample size. This value was 179; therefore, 183 samples were sufficient. Participants were randomly assigned to one of four conditions. The manipulated stimuli for emotion in Study 2 were the same as in Study 1.

Consistent with Study 1, all participants were informed that the travel website was a niche travel website in China that could provide a recommendation service for tourist attractions. Participants were then told that they were going to visit the travel website and interact with the chatbot to get a suitable tourist attraction. Similarly, we presented participants with the avatar (human-likeness: high vs. low) of the chatbot with which they were about to interact. Finally, participants completed a questionnaire that included pre-interaction affective expectations (α = 0.86) (van Kleef & Côté, 2007; Wong & Law, 2002), post-interaction emotional performance (α = 0.90), customer satisfaction (α = 0.83) (Barger & Grandey, 2006), and manipulation check items about the emotional expressions and the avatar.

5.3. Results

As in Study 1, we computed each participant's expectancy violation by subtracting the post-interaction assessment average score from the pre-interaction expectation average score (Crolic et al., 2022; Madden et al., 1979). We first performed a manipulation check to confirm the validity of the emotional expression and avatar manipulations. An independent-samples t-test showed that participants perceived the chatbot to express more emotions in the emotion condition than in the control condition (M emotion = 6.04, SD emotion = 1.492, M control = 4.56, SD control = 1.276, t = −7.254, p < 0.001), and that the avatar in the high human-like condition was perceived as more human-like than that in the low human-like condition (M high = 6.58, SD high = 0.788, M low = 2.63, SD low = 1.582, t = −21.41, p < 0.001). These findings suggest that our manipulations were successful.

A two-way ANCOVA was conducted with the chatbot's emotional expressions and the human-likeness of the chatbot avatar as independent variables, customer satisfaction as the dependent variable, and age, gender, experience, errors, and belief about computer emotions as covariates. The results indicated that only the main effect of emotional expression was significant. Compared with the control condition, customer satisfaction was significantly higher in the emotion condition (M emotion = 6.165, SD emotion = 0.655, M control = 5.837, SD control = 0.821, F(1, 181) = 4.951, p < 0.05).

A second two-way ANCOVA was conducted with the emotional expressions by chatbots and the human-likeness of chatbot avatars as independent variables, expectancy violation as the dependent variable, and age, gender, experience, errors, and belief about computer emotions as covariates. The main effect of emotional expression was significant. Expectancy violation was significantly lower in the emotion condition (M emotion = −0.142, SD emotion = 0.735, M control = 0.426, SD control = 0.963, F(1, 181) = 17.41, p < 0.01). The test also revealed a significant interaction between the emotional expressions and the human-likeness of the avatar on expectancy violation [F(1, 179) = 2.835, p = 0.094 < 0.1] (Fig. 4). When the human-likeness of the avatar was low, expectancy violation was significantly lower in the emotion condition [M emotion = −0.007, SD emotion = 0.802, M control = 0.270, SD control = 0.828, F(1, 89) = 3.854, p = 0.053 < 0.1]. When the human-likeness of the avatar was high, expectancy violation was even lower in the emotion condition [M emotion = −0.216, SD emotion = 0.663, M control = 0.582, SD control = 1.068, F(1, 90) = 18.701, p < 0.001].

Fig. 4. Influences of emotional expressions and human-likeness of the avatar on expectancy violation (Study 2).

To determine whether the interaction between emotional expressions and the human-likeness of the chatbot's avatar affects customer satisfaction through expectancy violation, we conducted a moderated mediation analysis using PROCESS (model 7) (Hayes, 2017), in which the emotional expressions of concern were used as the independent variable, expectancy violation as the mediator, and customer satisfaction as the dependent variable. When the human-likeness of the avatar served as the moderator, the results revealed a significant indirect effect of expectancy violation (index = 0.053, BootSE = 0.036, 90% CI [0.0011, 0.1169]). When the human-likeness of the avatar was low (effect = 0.040, BootSE = 0.025, 90% CI [0.0041, 0.0839]), emotional expression significantly decreased expectancy violation (β = −0.158, t = −1.790, p < 0.1), with significant mediation through expectancy violation. When the human-likeness of the avatar was high (effect = 0.094, BootSE = 0.034, 95% CI [0.0298, 0.1621]), emotional expressions significantly decreased expectancy violation (β = −0.368, t = −4.150, p < 0.001), with significant mediation through expectancy violation. These findings support H5.

5.4. Discussion

The results of Study 2 demonstrate that emotional expressions by chatbots can improve customer satisfaction and reduce expectancy violation. Moreover, compared to the low human-like avatar, expectancy violation was reduced more when the human-likeness of the avatar was high. Thus, H1, H2, and H5 are supported.

6. Study 3

The purpose of Study 3 was to examine the moderating effect of the relationship type between customers and chatbots. We expected that


expectancy violations would be significantly lower when the relationship type is 'friend', and that when the relationship type is 'assistant', there would be no significant difference in expectancy violation.

6.1. Manipulation stimuli

The manipulation stimuli for emotional expressions were the same as in Study 1. We added a manipulation to examine the impact of the relationship type between customers and chatbots. Following the approach of Youn and Jin (2021), we used different self-introductions and conversational styles of chatbots to highlight their different identities as a "friend" or an "assistant". Specifically, in the two manipulation groups, the chatbot directly introduced itself as a "friend" or an "assistant" when making an introduction. Moreover, the chatbot talked to participants in a colloquial, informal style when playing the "friend" role (e.g., showing willingness to listen to the customer and using words like "let's" to close the distance with the customer). In contrast, the chatbot talked to participants in an official, formal style (e.g., showing its goal and effort to help the customer) when playing the "assistant" role. Appendix C shows the manipulation process in more detail.

6.2. Design and procedure

Study 3 used a 2 × 2 between-subjects design (emotional expression: control vs. emotion; relationship type: friend vs. assistant). We recruited 185 participants (61% female, M age = 25.81) from the Credamo platform. We used G*Power (α = 0.05, power = 0.80) (Zhou et al., 2022) to determine the minimum sample size. This value was 179; therefore, 185 samples were sufficient. Participants were randomly assigned to one of four conditions.

The procedure of Study 3 was similar to that of Study 1. All participants were informed that the travel website was a niche travel website in China that could provide tourist attraction recommendation services. Next, the chatbot was introduced as a friend or an assistant. Participants then indicated their pre-interaction affective expectations (α = 0.87; adapted from van Kleef & Côté, 2007; Wong & Law, 2002) based on the presented avatar and interacted with the chatbot. Finally, participants completed questionnaires that included post-interaction emotional performance (α = 0.89), customer satisfaction (α = 0.79) (Barger & Grandey, 2006), and a manipulation check item about emotional expression. We also asked participants how much they considered the chatbot a friend and an assistant using the following two manipulation check items (Youn & Jin, 2021): (1) to what extent would you consider the chatbot a friend? and (2) to what extent would you consider the chatbot an assistant?

6.3. Results

As in the previous two studies, we computed each participant's expectancy violation by subtracting the post-interaction assessment average score from the pre-interaction expectation average score (Crolic et al., 2022; Madden et al., 1979). First, we conducted a manipulation check to confirm the validity of the manipulation regarding emotional expressions. An independent-samples t-test showed that participants perceived the chatbot to express more emotions in the emotion condition than in the control condition (M emotion = 6.13, SD emotion = 1.150, M control = 4.33, SD control = 1.715, t = −8.360, p < 0.001). Next, we performed a manipulation check to confirm the validity of the manipulation regarding the relationship type. An independent-samples t-test revealed a significant difference between the friend and assistant conditions (M friend = 5.27, SD friend = 1.146, M assistant = 4.76, SD assistant = 1.500, t = −2.640, p < 0.01 for the first question; M friend = 5.29, SD friend = 1.188, M assistant = 4.70, SD assistant = 1.538, t = 2.895, p = 0.019 < 0.05 for the second question). Therefore, all manipulations were successful.

A two-way ANCOVA was conducted with the emotional expressions by chatbots and the relationship type as independent variables, customer satisfaction as the dependent variable, and age, gender, experience, errors, and belief about computer emotions as covariates. Only the main effect of emotional expression was significant. Customer satisfaction was significantly higher in the emotion condition (M emotion = 6.174, SD emotion = 0.660, M control = 5.942, SD control = 0.691, F(1, 183) = 5.416, p < 0.05).

A second two-way ANCOVA was conducted with the emotional expressions by chatbots and the relationship type as independent variables, expectancy violation as the dependent variable, and the same covariates. The results indicated that the main effect of emotional expression was significant. Compared with the control condition, expectancy violation was significantly lower in the emotion condition (M emotion = −0.092, SD emotion = 0.667, M control = 0.276, SD control = 0.786, F(1, 183) = 9.494, p < 0.01). The results also indicated a significant interaction between the emotional expressions and the relationship type on expectancy violation [F(1, 181) = 3.937, p = 0.049 < 0.05] (Fig. 5). When the relationship type was assistant, expectancy violation was significantly lower in the emotion condition [M emotion = −0.195, SD emotion = 0.711, M control = 0.376, SD control = 0.681, F(1, 92) = 15.801, p < 0.001]. When the relationship type was friend, there was no significant difference in expectancy violation between the emotion condition and the control condition [M emotion = 0.015, SD emotion = 0.607, M control = 0.174, SD control = 0.876, F(1, 89) = 1.009, p = 0.318].

To determine whether the interaction between emotional expressions by chatbots and the relationship type affects customer satisfaction through expectancy violation, we conducted a moderated mediation analysis using PROCESS (model 7) with 5000 resamples (Hayes, 2017), with the emotional expressions of concern as the independent variable, customer satisfaction as the dependent variable, expectancy violation as the mediator, and the relationship type as the moderator. There was a significant indirect effect of expectancy violation (index = −0.084, BootSE = 0.054, 95% CI [−0.2075, −0.0001]). When the relationship type was 'assistant' (effect = 0.107, BootSE = 0.051, 95% CI [0.0261, 0.2253]), emotional expressions significantly decreased expectancy violation (β = −0.533, t = −3.578, p < 0.001), with significant mediation through expectancy violation. When the relationship type was 'friend' (effect = 0.023, BootSE = 0.036, 95% CI [−0.0411, 0.1051]), emotional expressions had no effect on expectancy violation (β = −0.114, t = −0.754, p = 0.45), with insignificant mediation through expectancy violation. Although the moderated mediation results of Study 3 were significant, the experimental results were precisely the opposite of our hypothesis. Therefore, H6 is refuted.

Fig. 5. Influences of emotional expressions and relationship type on expectancy violation (Study 3).
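The scoring and comparison logic used throughout the three studies (per-participant difference scores of pre-interaction minus post-interaction averages, compared across conditions) can be sketched as follows. This is an illustrative sketch only, not the authors' analysis code: the helper names and the sample numbers are hypothetical, and the reported analyses relied on ANCOVA and the PROCESS macro (Hayes, 2017) rather than the plain Welch's t-test shown here.

```python
from statistics import mean, variance
import math

def expectancy_violation(pre_items, post_items):
    """Difference score: mean of the six pre-interaction expectation items
    minus mean of the six post-interaction assessment items. Positive
    values mean the chatbot fell short of expectations."""
    return mean(pre_items) - mean(post_items)

def welch_t(group_a, group_b):
    """Welch's independent-samples t statistic (unequal variances),
    the kind of two-group comparison used for manipulation checks."""
    n_a, n_b = len(group_a), len(group_b)
    se = math.sqrt(variance(group_a) / n_a + variance(group_b) / n_b)
    return (mean(group_a) - mean(group_b)) / se

# Hypothetical difference scores for two small groups (made-up numbers,
# not the study data): emotion-condition violations cluster near zero,
# control-condition violations are larger and positive.
ev_emotion = [-0.2, 0.0, 0.3, 0.1, -0.1, 0.4]
ev_control = [0.5, 0.6, 0.2, 0.8, 0.3, 0.4]

t_stat = welch_t(ev_emotion, ev_control)  # negative: emotion < control
```

In the studies themselves, these difference scores then enter the ANCOVA and PROCESS models as the expectancy-violation variable; the sketch reproduces only the scoring step and the two-sample comparison.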


To investigate why Study 3 refuted the hypothesis, we conducted independent-samples t-tests to compare the pre-interaction and post-interaction affective expectations of the friend and assistant conditions. The results showed that the pre-interaction affective expectations in the friend condition were significantly higher than those in the assistant condition (M friend = 5.603, SD friend = 0.806, M assistant = 5.359, SD assistant = 0.854, t = −1.990, p = 0.048 < 0.05); however, the post-interaction emotional performance in the friend condition was also higher than that in the assistant condition (M friend = 5.500, SD friend = 0.747, M assistant = 5.270, SD assistant = 0.986, t = −1.795, p = 0.074 < 0.1). We then compared the post-interaction emotional performance in the friend and assistant conditions when chatbots did not express emotions. The results of the independent-samples t-test showed that the post-interaction emotional performance in the friend condition was significantly higher than that in the assistant condition (M friend = 5.300, SD friend = 0.643, M assistant = 4.940, SD assistant = 0.982, t = −2.102, p = 0.039 < 0.05). The results of the pre-interaction expectations and post-interaction performance for the four conditions are presented in Appendix H. Therefore, we believe that the results of Study 3 refuted the hypothesis because, even when chatbots do not use text and emoticons to express emotions directly, they make customers perceive more emotions through their social-oriented interactions. Previous literature has shown that a social-oriented communication style results in emotion-based thought processes (Zhou et al., 2022).

Table 4
Summary of results.

H1: Emotional expressions of concern by chatbots enhance customer satisfaction. Test result: supported. Main finding: chatbots with emotional concern lead to higher individual satisfaction.
H2: Emotional expressions of concern by chatbots diminish expectancy violation. Test result: supported. Main finding: chatbots expressing emotional concern are less likely to violate individual expectations.
H3: Expectancy violation mediates the relationship between emotional expressions and customer satisfaction. Test result: supported. Main finding: the underlying mechanism by which emotional concern increases individual satisfaction is expectancy violation.
H4(a): Customers with low process-oriented goals attenuate the negative relationship between emotional expressions and expectancy violations. Test result: supported. Main finding: whether or not the chatbot expressed emotional concern, customers with low process orientation perceived expectancy violations similarly.
H4(b): Customers with high outcome-oriented goals attenuate the negative relationship between emotional expressions and expectancy violations. Test result: supported. Main finding: whether or not the chatbot expressed emotional concern, customers with high outcome orientation perceived expectancy violations similarly.
H5: Higher human-likeness of chatbot avatars makes the negative relationship between emotional expressions and expectancy violations more robust. Test result: supported. Main finding: higher human-likeness of chatbot avatars makes the negative relationship between emotional expressions and expectancy violations more robust.
H6: The relationship type of assistants (vs. friends) attenuates the negative relationship between emotional expressions and expectancy violations. Test result: not supported. Main finding: the relationship type of friends (vs. assistants) attenuates the negative relationship between emotional expressions and expectancy violations.

6.4. Discussion

Study 3 also demonstrates that chatbots' emotional expressions of concern can increase customer satisfaction and decrease expectancy violation. Moreover, when the relationship type between chatbots and customers is 'friend' (as opposed to 'assistant'), the customer has higher affective expectations of the chatbot. Interestingly, however, the relationship type of friend (vs. assistant) significantly mitigated the negative relationship between emotional expressions and expectancy violation.

7. General discussion

7.1. Discussion of results

From the perspective of EVT, we explored how chatbots' emotional expressions of concern affect customer satisfaction during service encounters. The experimental results of the three studies are summarized in Table 4. The experimental results support most of our hypotheses and provide insights into the mechanism through which chatbots' emotional expressions affect customer satisfaction. In Study 1, we demonstrated that chatbots' emotional expression of concern increased customer satisfaction compared to no emotional expression and demonstrated the mediating role of expectancy violation in the main effect. We also found that customer characteristics (i.e., goal orientation) moderate the relationship between emotional expression and ex-

7.2. Research contribution

Our paper makes several essential theoretical contributions. First, our paper contributes significantly to studying chatbots' emotions in the tourism industry by exploring the effect of emotional expressions of concern by chatbots on customer satisfaction. Because chatbots are revolutionizing the traditional paradigm of travel customer service (Pillai & Sivathanu, 2020; Samala et al., 2020), it is crucial to improve their design to enhance the customer service experience (I. Tussyadiah, 2020). Previous research in the tourism industry identified many factors influencing customer experience with chatbots, including cuteness, appearance, and anthropomorphism (Lv et al., 2022; Cai et al., 2022; Liu, Yi, et al., 2022). However, the impact of their emotional expressions on customer satisfaction remained unclear. For the first time, we empirically tested the consequences of emotional expressions by chatbots in a tourism context and demonstrated that chatbots could express concern for customers through emotions, increasing customer satisfac-
pectancy violation. Emotional expression does not significantly affect tion. This research also yields essential insights into the effectiveness of
expectancy violation when process orientation is low or outcome AI in expressing emotions. Emotional AI is a promising direction for
orientation is high. Study 2 also demonstrated that the emotional improving customer experience (Becker et al., 2022; van Esch et al.,
expression of concern increases customer satisfaction and reduces ex­ 2022). Nevertheless, it remains controversial whether AI can provide
pectancy violation. Specifically, we demonstrated that the chatbot’s emotional services (Wirtz et al., 2018). For instance, Soderlund, Oikar­
avatar moderates the relationship between emotional expression and inen, and Tan (2021) and Han et al. (2022) demonstrated that AI agents
expectancy violation. When the avatar is less human-like, emotional expressing positive emotions were effective and ineffective, respec­
expressions reduce expectancy violations less. Study 3 replicated the tively, in improving customer experience. Our three experiment studies
relationships among emotional expressions by chatbots, expectancy demonstrated that chatbots could display concern through emotions,
violation, and customer satisfaction. The negative relationship between thus improving customer satisfaction. Therefore, in addition to the
emotional expression and expectancy violation was robust when environment and the way emotions are expressed (Han et al., 2022), the
changing the relationship type between customers and chatbots from reason for the emergence of this contradictory finding is likely to be the
‘friend’ to ‘assistant’. difference in the type of emotional expression These findings point to an
essential research direction for subsequent AI emotion research, i.e.,
scholars should pay more attention to the emotions that AI expresses.
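The mediated relationship discussed above (emotional expression → expectancy violation → customer satisfaction) is typically tested with a percentile-bootstrap estimate of the indirect effect (Hayes, 2017). The sketch below illustrates that procedure on simulated data; the coefficients, sample size, and variable names are invented for illustration and are not the estimates reported in this paper.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 400

# Simulated experiment: binary emotional-expression manipulation,
# expectancy violation as mediator, satisfaction as outcome.
# All true coefficients below are illustrative, not the paper's estimates.
emotion = rng.integers(0, 2, n).astype(float)
violation = -0.3 * emotion + rng.normal(0, 1, n)                        # a path
satisfaction = 0.35 * emotion - 0.34 * violation + rng.normal(0, 1, n)  # c' and b paths

def indirect_effect(e, m, y):
    """Estimate a*b from two OLS fits: m ~ e, then y ~ e + m."""
    a = np.linalg.lstsq(np.column_stack([np.ones_like(e), e]), m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([np.ones_like(e), e, m]), y, rcond=None)[0][2]
    return a * b

# Percentile bootstrap of the indirect effect
boot = np.empty(2000)
for i in range(2000):
    idx = rng.integers(0, n, n)
    boot[i] = indirect_effect(emotion[idx], violation[idx], satisfaction[idx])
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
print(f"95% bootstrap CI for the indirect effect: [{ci_low:.3f}, {ci_high:.3f}]")
```

A bootstrap confidence interval excluding zero supports mediation; PROCESS-style macros automate the same resampling logic.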


Second, we advanced the study of boundary conditions for chatbots in tourism by considering the moderating effects of the customer's goal orientation, the human-likeness of chatbot avatars, and the relationship type between chatbots and customers. In human-chatbot interactions, user perceptions and interaction outcomes are often influenced by the combination of the human, the chatbot, and the context (Diederich et al., 2022). Some chatbot studies in the tourism industry consider human-related and context-related factors as boundary conditions (Zhang et al., 2022; Pillai & Sivathanu, 2020; Liu, Yi, et al., 2022; L. Li, Yin, et al., 2021; Lv et al., 2022); yet, little consideration is given to the interaction of different design features of chatbots. A particular design feature of chatbots often does not appear in isolation (Feine, Gnewuch, Morana, & Maedche, 2019); therefore, the interaction between individual design features is essential for customer impact (Diederich et al., 2022; Go & Sundar, 2019). This study demonstrated that emotional expressions by chatbots could reduce customers' expectancy violations. Nevertheless, we do not consider this relationship identical in all conditions. We considered one human characteristic (i.e., customer goal orientation) and two chatbot design characteristics (i.e., human-likeness of avatar and relationship type) as boundary conditions. For customer goal orientation, previous research focused on scenarios where the service provider is human (Güntürkün et al., 2020); we examined for the first time the role of this customer characteristic in human-chatbot interactions and confirmed its importance in the emotional expression effect of chatbots. Specifically, customers with a high process or low outcome orientation prefer the emotional expression of chatbots. For avatars, previous studies have shown an association with customers' efficiency expectations of chatbots but lack evidence of their association with affective expectations (Crolic et al., 2022). For the first time, we propose and examine their relationship with customers' affective expectations, i.e., the more human-like the avatar is, the higher the customer's affective expectations, thereby complementing the avatar literature. For relationship types, previous research has focused on their impact on customer perceptions of social interaction (Youn & Jin, 2021), but the understanding of their association with expectations is very limited. We complement the relationship type literature in human-chatbot interaction by demonstrating for the first time that customers have higher affective expectations for friend (vs. assistant) relationship types. It appears reasonable that affective expectations would vary across boundary conditions because many factors between communicators influence expectations.

Third, our paper advances the study of expectations in human-chatbot interactions by revealing the influence of emotional expressions of chatbots on expectancy violations. The confirmation of expectations is directly related to customer service outcomes (Oliver, 1980). Some scholars have emphasized the important role of expectations during communications with chatbots (Miao et al., 2022; Rapp et al., 2021; Rheu, Shin, Peng, & Huh-Yoo, 2020). Nevertheless, very little research attention has been paid to the classification, activation, and formation of expectations in human-chatbot interactions. This limitation has led to fragmented research and a lack of coherence among chatbot studies (Rapp et al., 2021). In particular, there has been a call for more research on the roles of chatbots' emotional intelligence in shaping customer expectations (Miao et al., 2022). This research responds to that call and provides insights into the activation, formation, and impact of affective expectations. We demonstrated that customers have affective expectations of chatbots. This expectation is a potential mechanism mediating the emotional expressions of chatbots and customer satisfaction. In particular, we show that such expectations are more likely to be activated under the high human-likeness of the avatar and the relationship type of friend. The customer's goal orientation plays an essential role in forming affective expectations. Such affective expectations are almost non-existent for customers with high-outcome or low-process orientations.

7.3. Contribution to practice

Our work also provides contributions to practice. First, our research points to chatbot design directions for chatbot developers in travel service companies. Our research shows that customer satisfaction will increase if the chatbot expresses its concern for customers using emotions. Therefore, companies should improve chatbot conversational skills and care for customers using appropriate emotional expressions, especially for chatbots that act as travel recommenders. Chatbots acting as virtual advisers may handle customer issues that are relatively professional and complex (Al-Natour, Benbasat, & Cenfetelli, 2021), such that the customer's demand for emotional experience may be more robust. Regardless of the role, the agent should ensure that the emotions expressed conform to the conventions and unwritten rules of human dialogue to avoid causing discomfort to customers (McDuff & Czerwinski, 2018).

Second, our research provides a guide for managers to manage customer expectations. According to our study, expectancy violation mediates the impact of emotional expressions on customer satisfaction. Characteristics of chatbots and customers, as well as relationship type, moderate the relationship between emotional expressions and expectancy violation. Therefore, if managers consider adding emotions to chatbot design, they must carefully consider how to design avatars and conversation styles to manage customer expectations. First, when chatbots are designed to act as assistants, it may be beneficial for chatbots to express emotions. However, this seems to be at odds with the assistant role, as assistants primarily focus on problem-solving skills rather than emotional support. Nevertheless, we still suggest that chatbot designers can make assistant chatbots express appropriate emotions to meet customers' affective expectations. When chatbots are designed to be friends, the socially oriented interaction style makes expressing emotions unnecessary, although customers still have affective expectations. Second, when the avatar has a high degree of human-likeness, it is strongly recommended to make chatbots provide some emotional support. Although an avatar is a simple design element, it can affect a customer's initial affective expectations. Finally, we suggest that managers of travel companies can try to grasp the customer's goal orientation. Machine learning algorithms may be an effective way to infer customers' personality traits. For customers with a communication history, machine learning can directly predict the customer's goal orientation based on the previous communication history. For new customers with no communication history, machine learning algorithms can gradually infer goal orientation from the user's responses in the current session. Chatbots can thus infer customer goal orientation and adjust their emotional expression behavior based on user responses over time. It is essential to match the emotional expression of chatbots with other features of their design, to manage user expectations and avoid the Uncanny Valley.

7.4. Limitations and future research

This research has several main limitations. First, we could not determine the external validity of our findings. All three experimental contexts were tourist attraction recommendations, and no other tourism scenarios tested the robustness of our conclusions. Contextual and situational factors might amplify the differences between humans and AI and moderate the relationship between humans and AI systems (Rzepka & Berger, 2018). Moreover, according to EVT (Burgoon, 1993), contextual factors are inherently essential in influencing expectations: for example, privacy or task-oriented scenarios in the tourism industry might have different impacts on expectancy violations. Therefore, we recommend that the interventions of our experiment be replicated in different contexts.

Second, due to the limitation of the definition of levels and measurement for moderator variables, it is difficult to draw general conclusions regarding these effects. Single-item measures may impact our findings for goal orientation (process orientation and outcome


orientation). We encourage future research to validate our findings with comprehensive scales measuring goal orientation. For the degree of avatar human-likeness, we chose only high and low options. We expect that more complex and nuanced divisions of the degree of avatar anthropomorphism (e.g., low vs. medium vs. high) can be used in future studies to provide a deeper understanding of this variable.

Third, there is a limitation in the age of the experiment participants. Compared with Generation Z, Generation X users have a worse attitude toward chatbots (Maar, Besson, & Kefi, 2023). Since Generation X customers are more exposed to chatbots, and scenes of chatbots expressing emotions often appear in science fiction movies (Ghotbi & Ho, 2021), Generation X customers may have more affective expectations for chatbots. Therefore, we encourage future expansion of the age range to replicate the current study.

Finally, we used only Dialogflow as the chatbot development tool in our experiments. Although Dialogflow is a popular AI development platform equipped with powerful natural language processing capabilities, using the same development tool in all three study contexts may limit the generalizability of our findings.

Credit authors statement

Junbo Zhang: Conceptualization, Data collection, Formal analysis, Writing – original draft, Writing – review & editing. Qi Chen: Conceptualization, Writing – original draft, Writing – review & editing. Jiandong Lu: Methodology, Data collection, Writing – original draft. Xiaolei Wang: Methodology, Writing – review & editing. Luning Liu: Conceptualization, Supervision, Writing – original draft. Yuqiang Feng: Conceptualization, Supervision.

Impact statement

Despite the rise of chatbots in the tourism industry, customers still prefer to interact with humans. This paper provides an important guide for tourism companies on how to deploy customer service chatbots. By revealing the positive effects of emotional expressions, this study suggests that travel companies may consider allowing chatbots to express emotional concern to improve customer satisfaction. This implementation is ethical because it is what customers expect. However, these affective expectations vary with customer traits and the designed features of chatbots. Specifically, tourism companies should deploy customer service chatbots with a focus on customer goal orientation, chatbot avatars, and conversational style to manage customer expectations. Our findings also make an important and unique contribution to the literature on chatbots in the tourism industry by demonstrating that chatbots can be as effective as human employees in providing emotional services in service encounters.

Declaration of competing interest

We declare that there is no potential conflict of interest.

Acknowledgments

This research was supported by grants from the National Natural Science Foundation of China (72202037, 72101045, 72034001, 71974044), the Fundamental Research Funds for the Central Universities in UIBE (21QN01), the Fundamental Research Funds in DUT (DUT22RW102), the Heilongjiang Provincial Natural Science Foundation of China (YQ2020G004), and the Fundamental Research Funds for the Central Universities (HIT.OCEF.2022054 and HIT.HSS.DZ201905).
Central Universities (HIT.OCEF.2022054 and HIT.HSS.DZ201905).

Appendix I. Supplementary data

Supplementary data to this article can be found online at https://doi.org/10.1016/j.tourman.2023.104835.

Appendix A. Travel website designed for the experiment


Appendix B. Demographic information of the participants

Study 1 (N = 141) Study 2 (N = 183) Study 3 (N = 185)

Gender
Male 33.3 41.0 38.9
Female 66.7 59.0 61.1
Age
18–29 76.6 72.7 77.8
30–39 23.4 27.3 22.2
Prior experience with chatbots
1 0 0.5 0
2 0.7 2.2 1.1
3 7.1 6.6 5.9
4 9.9 10.4 9.7
5 45.4 45.9 41.6
6 24.8 30.6 30.8
7 12.1 3.8 10.8
Note. For prior experience with chatbots, 1 equals never and 7 equals very often, with intermediate degrees in between.

Appendix C. Scenarios used in Study 1, 2, 3

Study 1, 2

Study 3.

(a) Friend condition


(b) Assistant condition

Appendix D. The chatbot avatar used in Study 1, 2, 3

Study 1, 3.

Study 2.


(a) Low human-like avatar (b) High human-like avatar

Appendix E. Measurements used in Study 1, 2, 3

Affective expectations (van Kleef & Côté, 2007; Wong & Law, 2002); scale: 1 = Strongly disagree, 7 = Strongly agree
- I expect the chatbot to be sensitive to my emotions and feelings.
- I expect the chatbot to understand my emotions from the conversation.
- I expect the chatbot to understand my emotional state.
- I expect the chatbot to express appropriate emotions.
- I expect the chatbot to show emotions that conform to the norms of expression.
- I expect the chatbot to express acceptable emotions.

Customer satisfaction (Barger & Grandey, 2006); scale: 1 = Strongly disagree, 7 = Strongly agree
- I am satisfied with the customer service chatbot's advice.
- I am satisfied with the way the customer service chatbot treated me.
- I am satisfied with the overall interaction with the customer service chatbot.

Goal orientation (Güntürkün et al., 2020); scale: 1 = Strongly disagree, 7 = Strongly agree
- It is important to me that there is a positive atmosphere.
- It is important to me that things work out in the end.

Appendix F. The results of pre-interaction affective expectation in Study 1, 2 and 3

Study 1 Study 2 Study 3

Mean 5.50 5.46 5.46


Standard Deviation 0.812 0.816 0.874
Minimum 2.83 1.67 2.00
Maximum 6.67 6.67 7.00
Observations 141 141 141

Appendix G. The results of the mediation and moderated mediation in study 1, 2 and 3

Independent variable Dependent variable

Study 1 Study 2

Mediation model Moderated mediation Moderated mediation Moderated mediation Moderated mediation
model 1 model 2 model 3 model 4

Expectancy Customer Expectancy Customer Expectancy Customer Expectancy Customer Expectancy Customer
violation satisfaction violation satisfaction violation satisfaction violation satisfaction violation satisfaction




Emotion − 0.306** 0.352*** − 0.320** 0.352*** − 0.339*** 0.352*** − 0.158* 0.047 − 0.533*** 0.053
(0.127) (0.141) (0.125) (0.141) (0.129) (0.141) (0.089) (0.051) (0.149) (0.087)
Process-orientation 0.205***
(0.078)
Outcome-orientation − 0.133*
(0.080)
Avatar 0.312*
anthropomorphic (0.177)
Relationship type − 0.130
(0.151)
Emotion x Process- − 0.224**
orientation (H 4 (0.107)
(a))
Emotion x Outcome- 0.193**
orientation (H 4 (0.092)
(b))
Emotion x Avatar − 0.209*
anthropomorphic (0.124)
(H 5)
Emotion x 0.419**
Relationship type (0.211)
(H 6)
Expectancy violation − 0.336*** − 0.336*** − 0.336*** − 0.255*** − 0.201***
(0.094) (0.094) (0.094) (0.058) (0.059)
Gender 0.260* 0.385** 0.219 0.385** 0.240* 0.385** − 0.049 0.059 0.166 0.011
(0.138) (0.152) (0.137) (0.152) (0.138) (0.152) (0.131) (0.101) (0.112) (0.088)
Age − 0.000 0.009 0.000 0.009 − 0.000 0.009 0.012 − 0.016* 0.016 0.006
(0.010) (0.011) (0.010) (0.011) (0.010) (0.011) (0.012) (0.009) (0.010) (0.008)
Experience 0.092 0.067 0.078 0.067 0.106* 0.067 0.083 0.127** − 0.023 0.165***
(0.062) (0.068) (0.061) (0.068) (0.062) (0.068) (0.063) (0.049) (0.053) (0.042)
Error 0.299 − 0.145 0.317* − 0.145 0.301 − 0.145 0.315* − 0.179 0.419* − 0.321
(0.184) (0.201) (0.182) (0.201) (0.185) (0.201) (0.178) (0.139) (0.216) (0.173)
Belief − 0.103** 0.151*** − 0.113** 0.151*** − 0.095* 0.151*** − 0.123*** 0.130*** − 0.083** 0.149***
(0.050) (0.055) (0.051) (0.055) (0.051) (0.055) (0.045) (0.036) (0.041) (0.032)
Observations 141 141 141 183 185
F test 2.85** 8.63*** 3.09*** 8.63*** 2.72*** 8.63*** 4.56*** 10.58*** 3.45*** 12.81***
R2 0.113** 0.312*** 0.158*** 0.312*** 0.142*** 0.312*** 0.173*** 0.298*** 0.136*** 0.336***
Note: Process orientation and Outcome-orientation were mean-centered. Standard errors are in parentheses. *p < 0.1, **p < 0.05, ***p < 0.01.
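The moderated first-stage models in this table regress expectancy violation on the emotion manipulation, a mean-centered moderator, and their product term. The sketch below reproduces that specification in miniature on simulated data; the coefficients, sample size, and moderator values are invented for illustration and do not correspond to the estimates above.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000

# Simulated first-stage data; all true coefficients are illustrative.
emotion = rng.integers(0, 2, n).astype(float)      # emotional expression (0/1)
process = rng.normal(5.0, 1.0, n)                  # process orientation, 7-point style
process_c = process - process.mean()               # mean-centred, as in the table note
violation = (1.0 - 0.3 * emotion + 0.2 * process_c
             - 0.22 * emotion * process_c + rng.normal(0, 1, n))

# OLS with the Emotion x Process-orientation interaction term
X = np.column_stack([np.ones(n), emotion, process_c, emotion * process_c])
beta, *_ = np.linalg.lstsq(X, violation, rcond=None)
labels = ["const", "emotion", "process_c", "emotion x process_c"]
print({k: round(float(v), 3) for k, v in zip(labels, beta)})
```

A negative interaction coefficient here mirrors the H4(a) pattern: the effect of emotional expression on expectancy violation strengthens as process orientation rises.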

Appendix H. Comparison of the results of pre-interaction affective expectation and post-interaction affective performance by groups in
Study 3

Friend (F) Assistant (A)

Control (C) Emotion (E) Control (C) Emotion (E)

Pre-interaction Mean 5.48 5.71 5.32 5.4


Std 0.96 0.68 0.77 0.93
P-value of T-test FC vs FE FC vs AC AC vs AE FE vs AE
0.16 0.38 0.62 0.07
Post-interaction Mean 5.3 5.70 4.94 5.6
Std 0.64 0.8 0.98 0.88
P-value of T-test FC vs FE FC vs AC AC vs AE FE vs AE
0.009 0.04 0.001 0.55
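The cell comparisons above rely on two-sample t-tests. Below is a minimal Welch (unequal-variance) t-test sketch on synthetic ratings whose means and SDs loosely echo the FC vs. AC post-interaction cells; the data and group sizes are invented, so the statistic will not reproduce the values in the table.

```python
import numpy as np

def welch_t(x, y):
    """Welch's two-sample t statistic with Welch-Satterthwaite degrees of freedom."""
    nx, ny = len(x), len(y)
    vx, vy = x.var(ddof=1), y.var(ddof=1)
    se2 = vx / nx + vy / ny
    t = (x.mean() - y.mean()) / np.sqrt(se2)
    df = se2 ** 2 / ((vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    return t, df

rng = np.random.default_rng(0)
friend_control = rng.normal(5.30, 0.64, 200)     # synthetic, echoes the FC cell
assistant_control = rng.normal(4.94, 0.98, 200)  # synthetic, echoes the AC cell
t, df = welch_t(friend_control, assistant_control)
print(f"t = {t:.2f}, df = {df:.1f}")
```

As a rough guide, |t| above about 2 with these degrees of freedom corresponds to p < 0.05 (two-tailed).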

References

Adam, M., Wessel, M., & Benlian, A. (2021). AI-based chatbots in customer service and their effects on user compliance. Electronic Markets, 31(2), 427–445.
Afifi, W. A., & Metts, S. (1998). Characteristics and consequences of expectation violations in close relationships. Journal of Social and Personal Relationships, 15(3), 365–392.
Akhoondnejad, A. (2016). Tourist loyalty to a local cultural event: The case of Turkmen handicrafts festival. Tourism Management, 52, 468–477.
Al-Natour, S., Benbasat, I., & Cenfetelli, R. (2021). Designing online virtual advisors to encourage customer self-disclosure: A theoretical model and an empirical test. Journal of Management Information Systems, 38(3), 798–827.
Ameen, N., Cheah, J. H., & Kumar, S. (2022). It's all part of the customer journey: The impact of augmented reality, chatbots, and social media on the body image and self-esteem of Generation Z female consumers. Psychology and Marketing, 39(11), 2110–2129.
Barger, P. B., & Grandey, A. A. (2006). Service with a smile and encounter satisfaction: Emotional contagion and appraisal mechanisms. Academy of Management Journal, 49(6), 1229–1238.


Becker, M., Efendić, E., & Odekerken-Schröder, G. (2022). Emotional communication by service robots: A research agenda. Journal of Service Management, 33(4–5), 675–687.
Burgoon, J. K. (1993). Interpersonal expectations, expectancy violations, and emotional communication. Journal of Language and Social Psychology, 12(1–2), 30–48.
Burgoon, J. K., Bonito, J. A., Lowry, P. B., Humpherys, S. L., Moody, G. D., Gaskin, J. E., et al. (2016). Application of Expectancy Violations Theory to communication with and judgments about embodied agents during a decision-making task. International Journal of Human-Computer Studies, 91, 24–36.
Burgoon, J. K., & Jones, S. B. (1976). Toward a theory of personal space expectations and their violations. Human Communication Research, 2(2), 131–146.
Burgoon, J. K., & Walther, J. B. (1990). Nonverbal expectancies and the evaluative consequences of violations. Human Communication Research, 17(2), 232–265.
Cai, D., Li, H., & Law, R. (2022). Anthropomorphism and OTA chatbot adoption: A mixed methods study. Journal of Travel & Tourism Marketing, 39(2), 228–255.
Chin, H. J., & Yi, M. Y. (2022). Voices that care differently: Understanding the effectiveness of a conversational agent with an alternative empathy orientation and emotional expressivity in mitigating verbal abuse. International Journal of Human-Computer Interaction, 38(12), 1–15.
Chong, T., Yu, T., Keeling, D. I., & de Ruyter, K. (2021). AI-chatbots on the services frontline addressing the challenges and opportunities of agency. Journal of Retailing and Consumer Services, 63, Article 102735. https://doi.org/10.1016/j.jretconser.2021.102735
Crolic, C., Thomaz, F., Hadi, R., & Stephen, A. T. (2022). Blame the bot: Anthropomorphism and anger in customer–chatbot interactions. Journal of Marketing, 86(1), 132–148.
Dautenhahn, K. (2007). Socially intelligent robots: Dimensions of human-robot interaction. Philosophical Transactions of the Royal Society B: Biological Sciences, 362(1480), 679–704.
Delcourt, C., Gremler, D. D., van Riel, A. C. R., & van Birgelen, M. (2013). Effects of perceived employee emotional competence on customer satisfaction and loyalty: The mediating role of rapport. Journal of Service Management, 24(1), 5–24.
Derks, D., Fischer, A. H., & Bos, A. E. R. (2008). The role of emotion in computer-mediated communication: A review. Computers in Human Behavior, 24(3), 766–785.
Diederich, S., Brendel, A. B., Morana, S., & Kolbe, L. (2022). On the design of and interaction with conversational agents: An organizing and assessing review of human-computer interaction research. Journal of the Association for Information Systems, 23(1), 96–138.
van Esch, P., Cui, Y. (Gina), Das, G., Jain, S. P., & Wirtz, J. (2022). Tourists and AI: A political ideology perspective. Annals of Tourism Research, 97. https://doi.org/10.1016/j.annals.2022.103471
Esmark Jones, C. L., Hancock, T., Kazandjian, B., & Voorhees, C. M. (2022). Engaging the Avatar: The effects of authenticity signals during chat-based service recoveries. Journal of Business Research, 144, 703–716.
Fan, A., Lu, Z., & Mao, Z. (Eddie) (2022). To talk or to touch: Unraveling consumer responses to two types of hotel in-room technology. International Journal of Hospitality Management, 101, Article 103112. https://doi.org/10.1016/j.ijhm.2021.103112
Feine, J., Gnewuch, U., Morana, S., & Maedche, A. (2019). A taxonomy of social cues for conversational agents. International Journal of Human-Computer Studies, 132, 138–161.
FlowXO. (2022). Industries that are winning with chatbots. Retrieved 13 February 2023 from: https://flowxo.com/industries-that-are-winning-with-chatbots/.
Forgas, J. P. (1995). Mood and judgment: The affect infusion model (AIM). Psychological Bulletin, 117(1), 39–66.
Fota, A., Wagner, K., Roeding, T., & Schramm-Klein, H. (2022). "Help! I have a problem" – differences between a humanlike and robot-like chatbot avatar in complaint management. HICSS 2022 Proceedings, 4273–4282. https://doi.org/10.24251/hicss.2022.522
Ghotbi, N., & Ho, M. T. (2021). Moral awareness of college students regarding artificial intelligence. Asian Bioethics Review, 13(4), 421–433.
Gorry, G. A., & Westbrook, R. A. (2011). Once more, with feeling: Empathy and technology in customer care. Business Horizons, 54(2), 125–134.
Go, E., & Sundar, S. S. (2019). Humanizing chatbots: The effects of visual, identity and conversational cues on humanness perceptions. Computers in Human Behavior, 97, 304–316.
Gronroos, C. (1984). A service quality model and its marketing implications. European Journal of Marketing, 18(4), 36–44.
Güntürkün, P., Haumann, T., & Mikolon, S. (2020). Disentangling the differential roles of warmth and competence judgments in customer-service provider relationships. Journal of Service Research, 23(4), 476–503.
Han, E., Yin, D., & Zhang, H. (2022). Bots with feelings: Should AI agents express positive emotion in customer service? Information Systems Research, 1–16. https://doi.org/10.1287/isre.2022.1179
Hayes, A. F. (2017). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach. New York: Guilford Publications.
Hildebrand, C., & Bergner, A. (2021). Conversational robo advisors as surrogates of trust: Onboarding experience, firm perception, and consumer financial decision making. Journal of the Academy of Marketing Science, 49(4), 659–676.
Hoang, C., & Tran, H. A. (2022). Robot cleaners in tourism venues: The importance of robot-environment fit on consumer evaluation of venue cleanliness. Tourism Management, 93, Article 104611. https://doi.org/10.1016/j.tourman.2022.104611
Houston, L., Grandey, A. A., & Sawyer, K. (2018). Who cares if "service with a smile" is
Hou, Y., Zhang, K., & Li, G. (2021). Service robots or human staff: How social crowding shapes tourist preferences. Tourism Management, 83, Article 104242. https://doi.org/10.1016/j.tourman.2020.104242
Iacobucci, D., & Ostrom, A. (1993). Gender differences in the impact of core and relational aspects of services on the evaluation of service encounters. Journal of Consumer Psychology, 2(3), 257–286.
Jiang, C., Zhang, C., Ji, Y., Hu, Z., Zhan, Z., & Yang, G. (2022). An affective chatbot with controlled specific emotion expression. Science China Information Sciences, 65(10), 1–18.
Jiménez-Barreto, J., Rubio, N., & Molinillo, S. (2021). "Find a flight for me, Oscar!" Motivational customer experiences with chatbots. International Journal of Contemporary Hospitality Management, 33(11), 3860–3882.
de Kervenoael, R., Hasan, R., Schwob, A., & Goh, E. (2020). Leveraging human-robot interaction in hospitality services: Incorporating the role of perceived value, empathy, and information sharing into visitors' intentions to use social robots. Tourism Management, 78, Article 104042.
Kim, H. J. (2008). Hotel service providers' emotional labor: The antecedents and effects on burnout. International Journal of Hospitality Management, 27(2), 151–161.
Kim, T. W., Jiang, L., Duhachek, A., Lee, H., & Garvey, A. (2022a). Do you mind if I ask you a personal question? How AI service agents alter consumer self-disclosure. Journal of Service Research, 25(4), 649–666.
Kim, H., So, K. K. F., & Wirtz, J. (2022b). Service robots: Applying social exchange theory to better understand human–robot interactions. Tourism Management, 92, Article 104537. https://doi.org/10.1016/j.tourman.2022.104537
Kim, H. C., & Kramer, T. (2015). Do materialists prefer the "brand-as-servant"? The interactive effect of anthropomorphized brand roles and materialism on consumer responses. Journal of Consumer Research, 42(2), 284–299.
Kirmani, A., & Campbell, M. C. (2004). Goal seeker and persuasion sentry: How consumer targets respond to interpersonal marketing persuasion. Journal of Consumer Research, 31(3), 573–582.
Kozinets, R. V., & Gretzel, U. (2021). Commentary: Artificial intelligence: The marketer's dilemma. Journal of Marketing, 85(1), 156–159.
Kull, A. J., Romero, M., & Monahan, L. (2021). How may I help you? Driving brand engagement through the warmth of an initial chatbot message. Journal of Business Research, 135, 840–850.
Kyung, N., & Kwon, H. E. (2022). Rationally trust, but emotionally? The roles of cognitive and affective trust in laypeople's acceptance of AI for preventive care operations. Production and Operations Management. https://doi.org/10.1111/poms.13785
Lei, S. I., Shen, H., & Ye, S. (2021). A comparison between chatbot and human service: Customer perception and reuse intention. International Journal of Contemporary Hospitality Management, 33(11), 3977–3995.
Li, L., Lee, K. Y., Emokpae, E., & Yang, S. B. (2021). What makes you continuously use chatbot services? Evidence from Chinese online travel agencies. Electronic Markets, 31(3), 575–599.
Liu, X. (Stella), Wan, L. C., & Yi, X. (Shannon) (2022). Humanoid versus non-humanoid robots: How mortality salience shapes preference for robot services under the COVID-19 pandemic? Annals of Tourism Research, 94, Article 103383. https://doi.org/10.1016/j.annals.2022.103383
Liu, X. (Stella), Yi, X. (Shannon), & Wan, L. C. (2022). Friendly or competent? The effects of perception of robot appearance and service context on usage intention. Annals of Tourism Research, 92, Article 103324. https://doi.org/10.1016/j.annals.2021.103324
Li, M., Yin, D., Qiu, H., & Bai, B. (2021). A systematic review of AI technology-based service encounters: Implications for hospitality and tourism operations. International Journal of Hospitality Management, 95, Article 102930. https://doi.org/10.1016/j.ijhm.2021.102930
Longoni, C., Bonezzi, A., & Morewedge, C. K. (2019). Resistance to medical artificial intelligence. Journal of Consumer Research, 46(4), 629–650.
Lopatovska, I., & Arapakis, I. (2011). Theories, methods and current research on emotions in library and information science, information retrieval and human-computer interaction. Information Processing & Management, 47(4), 575–592.
Lv, X., Luo, J., Liang, Y., Liu, Y., & Li, C. (2022). Is cuteness irresistible? The impact of cuteness on customers' intentions to use AI applications. Tourism Management, 90, Article 104472. https://doi.org/10.1016/j.tourman.2021.104472
Maar, D., Besson, E., & Kefi, H. (2023). Fostering positive customer attitudes and usage intentions for scheduling services via chatbots. Journal of Service Management, 34(2), 208–230. https://doi.org/10.1108/JOSM-06-2021-0237
Madden, C. S., Little, E. L., & Dolich, I. J. (1979). A temporal model of consumer s/d concepts as net expectations and performance evaluations. In New dimensions of consumer satisfaction and complaining behavior (pp. 79–82). Indiana: School of Business, Indiana University Bloomington.
McDuff, D., & Czerwinski, M. (2018). Designing emotionally sentient agents. Communications of the ACM, 61(12), 74–83.
Melián-González, S., & Bulchand-Gidumal, J. (2020). Employment in tourism: The jaws of the snake in the hotel industry. Tourism Management, 80, Article 104123. https://doi.org/10.1016/j.tourman.2020.104123
Melián-González, S., Gutiérrez-Taño, D., & Bulchand-Gidumal, J. (2021). Predicting the intentions to use chatbots for travel and tourism. Current Issues in Tourism, 24(2), 192–210.
Miao, F., Kozlenkova, I. V., Wang, H., Xie, T., & Palmatier, R. W. (2022). An emerging theory of avatar marketing. Journal of Marketing, 86(1), 67–90.
authentic? An expectancy-based model of customer race and differential service Moon, Y. (2000). Intimate exchanges: Using computers to elicit self-disclosure from
reactions. Organizational Behavior and Human Decision Processes, 144, 85–96. consumers. Journal of Consumer Research, 26(4), 323–339.

Nass, C., & Lee, K. M. (2001). Does computer-synthesized speech manifest personality? Experimental tests of recognition, similarity-attraction, and consistency-attraction. Journal of Experimental Psychology: Applied, 7(3), 171–181.
Oliver, R. L. (1980). A cognitive model of the antecedents and consequences of satisfaction decisions. Journal of Marketing Research, 17(4), 460–469.
Oliver, R. L., & Swan, J. E. (1989). Equity and disconfirmation perceptions as influences on merchant and product satisfaction. Journal of Consumer Research, 16(3), 372–383.
Orden-Mejía, M. A., & Huertas, A. (2022a). Tourist interaction and satisfaction with the chatbot evokes pre-visit destination image formation? A case study. Anatolia, 1–15.
Orden-Mejía, M., & Huertas, A. (2022b). Analysis of the attributes of smart tourism technologies in destination chatbots that influence tourist satisfaction. Current Issues in Tourism, 25(17), 2854–2869.
Pillai, R., & Sivathanu, B. (2020). Adoption of AI-based chatbots for hospitality and tourism. International Journal of Contemporary Hospitality Management, 32(10), 3199–3266.
Rapp, A., Curti, L., & Boldi, A. (2021). The human side of human-chatbot interaction: A systematic literature review of ten years of research on text-based chatbots. International Journal of Human-Computer Studies, 151, Article 102630. https://doi.org/10.1016/j.ijhcs.2021.102630
Rheu, M., Shin, J. Y., Peng, W., & Huh-Yoo, J. (2020). Systematic review: Trust-building factors and implications for conversational agent design. International Journal of Human-Computer Interaction, 37(1), 81–96.
Ruan, Y., & Mezei, J. (2022). When do AI chatbots lead to higher customer satisfaction than human frontline employees in online shopping assistance? Considering product attribute type. Journal of Retailing and Consumer Services, 68, Article 103059. https://doi.org/10.1016/j.jretconser.2022.103059
de Ruyter, K., & Wetzels, M. (1998). On the complex nature of patient evaluations of general practice service. Journal of Economic Psychology, 19(5), 565–590.
Ryan, R. M., & Deci, E. L. (2020). Intrinsic and extrinsic motivation from a self-determination theory perspective: Definitions, theory, practices, and future directions. Contemporary Educational Psychology, 61, Article 101860. https://doi.org/10.1016/j.cedpsych.2020.101860
Rzepka, C., & Berger, B. (2018). User interaction with AI-enabled systems: A systematic review of IS research. ICIS 2018 Proceedings, 1–17.
Samala, N., Katkam, B. S., Bellamkonda, R. S., & Rodriguez, R. V. (2020). Impact of AI and robotics in the tourism sector: A critical insight. Journal of Tourism Futures, 8(1), 73–87.
Schuetzler, R. M., Giboney, J. S., Grimes, G. M., & Nunamaker, J. F. (2018). The influence of conversational agent embodiment and conversational relevance on socially desirable responding. Decision Support Systems, 114, 94–102.
Seeger, A. M., Pfeiffer, J., & Heinzl, A. (2021). Texting with humanlike conversational agents: Designing for anthropomorphism. Journal of the Association for Information Systems, 22(4), 931–967.
Shi, S., Gong, Y., & Gursoy, D. (2021). Antecedents of trust and adoption intention toward artificially intelligent recommendation systems in travel planning: A heuristic–systematic model. Journal of Travel Research, 60(8), 1714–1734.
Soderlund, M., Oikarinen, E. L., & Tan, T. M. (2021). The happy virtual agent and its impact on the human customer in the service encounter. Journal of Retailing and Consumer Services, 59, Article 102401. https://doi.org/10.1016/j.jretconser.2020.102401
Song, S. W., & Shin, M. (2022). Uncanny Valley effects on chatbot trust, purchase intention, and adoption intention in the context of E-commerce: The moderating role of avatar familiarity. International Journal of Human-Computer Interaction. https://doi.org/10.1080/10447318.2022.2121038
Song, X., Xu, B., & Zhao, Z. (2022). Can people experience romantic love for artificial intelligence? An empirical study of intelligent assistants. Information and Management, 59(2), Article 103595. https://doi.org/10.1016/j.im.2022.103595
Sundar, S. S., Jung, E. H., Waddell, T. F., & Kim, K. J. (2017). Cheery companions or serious assistants? Role and demeanor congruity as predictors of robot attraction and use intentions among senior citizens. International Journal of Human-Computer Studies, 97, 88–97.
Tung, V. W. S., & Law, R. (2017). The potential for tourism and hospitality experience research in human-robot interactions. International Journal of Contemporary Hospitality Management, 29(10), 2498–2513.
Tussyadiah, I. (2020). A review of research into automation in tourism: Launching the Annals of Tourism Research curated collection on artificial intelligence and robotics in tourism. Annals of Tourism Research, 81, Article 102883. https://doi.org/10.1016/j.annals.2020.102883
Tussyadiah, I. P., & Park, S. (2018). When guests trust hosts for their words: Host description and trust in sharing economy. Tourism Management, 67, 261–272.
Urakami, J., Moore, B. A., Sutthithatip, S., & Park, S. (2019). Users' perception of empathic expressions by an advanced intelligent system. In Proceedings of the 7th international conference on human-agent interaction. https://doi.org/10.1145/3349537.3351895
van Kleef, G. A., & Côté, S. (2007). Expressing anger in conflict: When it helps and when it hurts. Journal of Applied Psychology, 92(6), 1557.
Van Kleef, G. A., & Côté, S. (2022). The social effects of emotions. Annual Review of Psychology, 73, 629–658.
Walther, J. B., & D'Addario, K. P. (2001). The impacts of emoticons on message interpretation in computer-mediated communication. Social Science Computer Review, 19(3), 324–347.
Wang, X., Jiang, M., Han, W., & Qiu, L. (2022). Do emotions sell? The impact of emotional expressions on sales in the space-sharing economy. Production and Operations Management, 31(1), 65–82.
Wang, Z., Singh, S. N., Jessica Li, Y., Mishra, S., Ambrose, M., & Biernat, M. (2017). Effects of employees' positive affective displays on customer loyalty intentions: An emotions-as-social-information perspective. Academy of Management Journal, 60(1), 109–129.
Wirtz, J., Patterson, P. G., Kunz, W. H., Gruber, T., Lu, V. N., Paluch, S., et al. (2018). Brave new world: Service robots in the frontline. Journal of Service Management, 29(5), 907–931.
Wong, C. S., & Law, K. S. (2002). The effects of leader and follower emotional intelligence on performance and attitude: An exploratory study. The Leadership Quarterly, 13(3), 97–128.
Yin, D., Bond, S. D., & Zhang, H. (2021). Anger in consumer reviews: Unhelpful but persuasive? MIS Quarterly, 45(3), 1059–1086.
Yoganathan, V., Osburg, V. S., Kunz, W. H., & Toporowski, W. (2021). Check-in at the Robo-desk: Effects of automated social presence on social cognition and service implications. Tourism Management, 85. https://doi.org/10.1016/j.tourman.2021.104309
Yoon, J., & Yu, H. (2022). Impact of customer experience on attitude and utilization intention of a restaurant-menu curation chatbot service. Journal of Hospitality and Tourism Technology, 13(3). https://doi.org/10.1108/JHTT-03-2021-0089
Youn, S., & Jin, S. V. (2021). "In A.I. we trust?" The effects of parasocial interaction and technopian versus luddite ideological views on chatbot-based customer relationship management in the emerging "feeling economy". Computers in Human Behavior, 119, Article 106721. https://doi.org/10.1016/j.chb.2021.106721
Zeithaml, V. A., Parasuraman, A., & Berry, L. L. (1990). Delivering quality service: Balancing customer perceptions and expectations. New York: Simon and Schuster.
Zhang, B., Zhu, Y., Deng, J., Zheng, W., Liu, Y., Wang, C., et al. (2022). "I am here to assist your tourism": Predicting continuance intention to use AI-based chatbots for tourism. Does gender really matter? International Journal of Human-Computer Interaction, 1–17.
Zhao, T., Cui, J., Hu, J., Dai, Y., & Zhou, Y. (2022). Is artificial intelligence customer service satisfactory? Insights based on microblog data and user interviews. Cyberpsychology, Behavior, and Social Networking, 25(2), 110–117.
Zhou, Y., Fei, Z., He, Y., & Yang, Z. (2022). How human–chatbot interaction impairs charitable giving: The role of moral judgment. Journal of Business Ethics, 178(3), 849–865.

Junbo Zhang (zhangjunbo@hit.edu.cn) is a doctoral student in information systems in the School of Management at the Harbin Institute of Technology (HIT). He received his bachelor's degree from that university. He is interested in human-computer interaction and data mining.

Qi Chen (chenqidut@dlut.edu.cn) is an assistant professor in the Department of Management Science and Engineering in the School of Business and Management at Dalian University of Technology (DUT), China. She received her Ph.D., M.S. and B.S. in Management Science and Engineering from Harbin Institute of Technology (HIT) in China. Her research interests are in the areas of security, privacy and trust, management information systems, and social media. She has published articles in journals and conferences such as Internet Research, International Journal of Information Management, Hawaii International Conference on System Sciences (HICSS) and Pacific Asia Conference on Information Systems (PACIS).

Jiandong Lu (lujiandong@hit.edu.cn) is a Ph.D. candidate in the Department of Management Science and Engineering in the School of Management at Harbin Institute of Technology (HIT). His research primarily focuses on digital transformation and servitization, organizational resilience, and consumer behavior in tourism. His work has been published in Information & Management, International Journal of Hospitality Management, Information Technology & People, International Journal of Information Management, Computers & Industrial Engineering, and others.

Xiaolei Wang (wangxiaolei@uibe.edu.cn) is an Assistant Professor in the School of Information Technology & Management at University of International Business & Economics (UIBE), China. Her research primarily focuses on customer participation, knowledge contribution, and organizational resilience. Her research has been published in European Journal of Information Systems, Information & Management, International Journal of Hospitality Management, International Journal of Information Management and others.

Luning Liu (liuluning@hit.edu.cn) is a Professor and Department Chair in the Department of Public Administration in the School of Management at Harbin Institute of Technology (HIT), China. He received his Ph.D. in management science and engineering at that university. His research primarily focuses on big data analytics, e-government, and e-commerce. His work has been published in European Journal of Information Systems, Government Information Quarterly, International Journal of Information Management, Information Systems Frontiers, Computers in Human Behavior, Information Technology for Development, Industrial Management & Data Systems, Telecommunications Policy, and others.

Yuqiang Feng (fengyq@hit.edu.cn) is a Professor in the Department of Management Science and Engineering in the School of Management at Harbin Institute of Technology (HIT), China. She holds an MS and a PhD in management science and engineering from Harbin Institute of Technology. Her research focuses on e-commerce, business intelligence, big data, intelligent community, and industrial internet. Her research has been published in European Journal of Information Systems, International Journal of Information Management, Information Systems Frontiers, Information & Management, Computers in Human Behavior, Scientometrics, and others.