
Technology in Society 75 (2023) 102362


Are users willing to embrace ChatGPT? Exploring the factors on the acceptance of chatbots from the perspective of AIDUA framework
Xiaoyue Ma *, 1, Yudi Huo
School of Journalism and New Media, Xi’an Jiaotong University, No. 28, Xianning West Road, Xi’an, Shaanxi, 710049, PR China

ARTICLE INFO

Keywords:
Chatbot
AIDUA
Cognitive appraisal theory
Novelty value
Perceived humanness

ABSTRACT

As a rapidly emerging generative AI chatbot, ChatGPT has garnered unprecedented global attention for its advanced AI-based text generation capabilities. However, the issue of ChatGPT acceptance requires further investigation. Prior studies on chatbot acceptance primarily focused on traditional technology acceptance models (TAMs) and did not consider the intelligence features of AI technology. Based on the AI device use acceptance (AIDUA) model and cognitive appraisal theory (CAT), this study proposed a research model to investigate the acceptance of ChatGPT. Participants with experience using ChatGPT were invited to take part in the survey. A total of 500 valid questionnaires were collected through the Credamo survey platform. Our findings reveal compelling associations: social influence, novelty value, and humanness positively correlate with performance expectations, while hedonic motivation, novelty value, and humanness negatively correlate with effort expectations. Both performance and effort expectations contribute to cognitive attitudes. Age, as a control variable, exhibits a significant negative impact on the willingness to reject ChatGPT. Notably, this study expands the current AIDUA framework within chatbot contexts by incorporating perspectives on novelty value, perceived humanness, and cognitive attitudes to examine chatbot acceptance. These insights offer practical implications for the design and development of AI-based chatbots, contributing to the evolving landscape of AI technology acceptance.

1. Introduction

Currently, the development of artificial intelligence-generated content (AIGC) has received widespread attention and gained popularity worldwide, which refers to the fact that users can use artificial intelligence (AI) to create content (e.g., images, text, and videos) automatically according to their personalized requirements [1]. AIGC was considered one of the most advanced technologies, but it only came into the public view and impressed people with its powerful capabilities with the advent of ChatGPT. To be specific, ChatGPT (https://openai.com/blog/chatgpt) is a highly capable AI chatbot and has attracted unprecedented attention for its ability to understand complex and diverse human languages, generating personalized, human-like responses [2,3]. Notably, it serves as the first accessible AI user interface for the general public [2]. Compared with the previous chatbots, ChatGPT has made substantial advancements in addressing limitations related to personalized responses [4,5], the richness of answers [6–8], and conversational coherence [9,10]. The remarkable capabilities of ChatGPT make it a valuable tool in various applications and industries [11].

With the increasing discussion of ChatGPT in various walks of life, recent studies have demonstrated its potential and challenges in scientific writing [12], the healthcare industry [13,14], educational applications [15–18], intelligent vehicles [19], and other fields. However, these studies have concentrated on the application of ChatGPT in specific domains, and very little attention has been paid to individuals' use of ChatGPT. Currently, its developers, OpenAI, have made ChatGPT free to use and easily accessible to people who do not have technical expertise [20]. More importantly, ChatGPT is a milestone in the field of AI text generation, making users feel for the first time that their lives are close to AI. However, there remains a paucity of evidence on the ordinary users' acceptance of ChatGPT and the factors contributing to its acceptance. We are curious whether users' demonstrated interest in ChatGPT aligns with their actual willingness to embrace it. Consequently, this study aims to investigate users' willingness to accept or reject ChatGPT and the factors influencing their evaluations.

* Corresponding author.
E-mail addresses: xyma_mail@163.com (X. Ma), huoyudi2022@163.com (Y. Huo).
1
Present/permanent address: School of Journalism and New Media, Xi’an Jiaotong University, No. 28, Xianning West Road, Xi’an, Shaanxi, 710,049, P.R. China.

https://doi.org/10.1016/j.techsoc.2023.102362
Received 9 May 2023; Received in revised form 21 August 2023; Accepted 9 September 2023
Available online 14 September 2023
0160-791X/© 2023 Elsevier Ltd. All rights reserved.

Existing studies on the acceptance of chatbots are mainly empirical research based primarily on the theoretical foundations of the technology acceptance model (TAM) and the unified theory of acceptance and use of technology (UTAUT). However, these models did not examine AI characteristics or the coexistent acceptance and refusal of AI [21]; thus, this study attempts to use the latest technology acceptance model: the AI device use acceptance (AIDUA) model ([22]; discussed in Section 3.1). Despite the advancement and reliability of the AIDUA model, it has several limitations, such as a lack of cognitive attitudes [23–25]. Therefore, this study proposes a new research model to explain users' acceptance of ChatGPT and investigates whether the AIDUA model is still applicable in the context of ChatGPT.

Compared with the original AIDUA model, this proposed model introduces new factors. First, cognitive attitudes, which are important factors influencing users' behavioral intentions toward AI [23–25], were incorporated into the model, as they are not reflected in the AIDUA model. In addition, a humanlike appearance or the capability to simulate human emotions and behavior is not the core characteristic of ChatGPT; thus, the antecedent of anthropomorphism was replaced by perceived humanness, which embodies a more accurate description of ChatGPT's characteristics. Moreover, the unique capabilities of ChatGPT set it apart from previous AI chatbots [26,27], which shows great novelty. Therefore, we introduced another new variable, novelty value, which has been proven to have an impact on the acceptance of technology (e.g., Refs. [28,29]). Age, gender, education, type of use, and occupation were also used as control variables. To verify the proposed model, 500 valid questionnaires were gathered from the Credamo platform (https://www.credamo.com/home.html#/). We employed the structural equation model approach for model testing.

The innovative aspects of this study are as follows: First, this paper contributes to the literature on chatbot acceptance behavior by investigating both acceptance and objections to ChatGPT and validates the AIDUA model in the new context. Second, as opposed to previous research that primarily focused on emotion as the ultimate antecedent of users' willingness to accept AI devices, this study predicts user behavior by combining cognitive and affective attitudes. Third, this research includes two new concepts for predicting chatbot acceptance, which further develop and extend the AIDUA model. Novelty value complements the technological change embodied in ChatGPT, and perceived humanness reflects the enhanced conversational capabilities of ChatGPT.

The subsequent sections of the article are structured as follows. An overview of studies on chatbots and AI acceptance is provided in the literature review section. The following section explores the theoretical context, introducing the AIDUA model, cognitive appraisal theory (CAT), and hypothesis formulation. The research methodology is then presented and discussed in light of the existing theoretical and empirical evidence. In conclusion, we discuss the significance of our findings, the management and theoretical ramifications of our work, and its limitations and potential future scopes.

2. Literature review

2.1. Chatbots

Chatbots are intelligent systems developed utilizing either rule-based or self-learning (AI) techniques [30] to converse with people through synthetic voice or text mediated by a digital interface for entertainment or information retrieval [31–33]. Due to the advancement of AI, chatbots are becoming more common in e-commerce [34–36], tourism [37,38], mental health [39,40], educational research [41], and other service scenarios.

The user research on chatbots mainly focuses on customer service, which can be divided into two aspects in general. On the one hand, prior literature focused on the topic of chatbots themselves, including users' satisfaction with chatbots [42,43], attitude [44], acceptance [45], intention to continue using [34,46], loyalty [42,47], intelligent experience [48], customer service performance [49], trust [50,51], engagement, and user behavior [52]. Among them, the impact of chatbot anthropomorphism [53], social existence [54], humor [55,56], empathy [57], and other characteristics on user perception are mainly considered. For example, chatbot characteristics like competence, credibility, anthropomorphism, and informativeness enhance users' trust and thus increase purchase intentions [58]. In addition, some studies have considered the impact of individual factors on chatbot usage, such as increased social interaction anxiety and obsessive chatting with the chatbot [59]. The individual characteristic of tolerating ambiguity may affect the human–chatbot interaction experience [50], and users consider chatbots to be more effective when the clarity of their wants is greater [40]. Other studies, however, extend to the topics of chatbot services, including purchase intention, brand reputation, shopping experience satisfaction [60], customer–brand relationship [61], customer–brand engagement [62], brand attitude [34], brand intimacy [63], and human-AI relationship [64,65]. For instance, chatbots respond enthusiastically and competently to initial messages, which makes users feel closer to the brand and increases their brand engagement [66].

ChatGPT, developed by the commercial company OpenAI, is an innovative AI chatbot based on a large language model [67]. It is an autonomous machine-learning system that, after training on extensive text data, can produce advanced and seemingly intelligent writing [68]. However, what distinguishes ChatGPT from every other model ever released is its capability to engage in ongoing interactions with users, respond to user inputs, and provide conversational responses. Due to ChatGPT's most distinctive features, many software developers, creative authors, scholars/teachers, and songwriters have used ChatGPT to create computer software and apps, text, academic articles, and song lyrics [69].

However, ChatGPT also poses threats, including black-box algorithms, discrimination, biases, copyright violation, plagiarism, manufactured and unauthentic language, fake news, privacy and security concerns, and an increased probability of cheating on assignments [70–75]. For instance, because this technique generally reproduces text without accurately crediting the sources or authors, researchers employing it may fail to give credit to earlier work, accidentally plagiarizing a large number of unfamiliar documents, and possibly even giving away their ideas [68]. Recently, Elon Musk and 1,122 other people have signed an open letter calling for a six-month pause on AI program development. Meanwhile, the Italian Personal Data Protection Agency announced a temporary ban on ChatGPT and investigated its alleged violation of data collection rules. ChatGPT has generally sparked debate and reignited longstanding worries about the future of AI.

2.2. Acceptance of AI

Studies on the acceptance of chatbots have mainly examined user attitudes and willingness based on technical and social characteristics. From a technical acceptance perspective, these studies are mainly based on the TAM and UTAUT models. Based on the theory of TAM, [76] discovered that attitudes are influenced by perceived usefulness, ease of use, enjoyment, risk, price consciousness, and individual creativity. [77] revealed that the acceptability of chatbots was positively affected by utilitarian criteria, such as "truth of discourse" and "perceived utility," as well as hedonic factors, such as "felt delighted." Pet owners' perceptions of correctness, completeness, usability, and convenience had a significant positive effect on their satisfaction with a chatbot providing pet disease counseling [78]. Based on the theory of UTAUT, perceptions of chatbot performance, trust in bank chatbot services, and the bank's reputation have a positive impact on user satisfaction [79]. Perceived intelligence and anthropomorphism play a major role in shaping attitudes and sustaining chatbot-based service usage [80]. Users were found to place high value on perceived expertise, responsiveness, and security [81].


Besides, the use and satisfaction theory [77], perceived risk theory [37], diffusion of innovation theory [62,76,82], expectations confirmation theory [79,83], the user acceptance of technology model [84], and so on have been used to explore users' attitudes towards chatbots and their willingness to accept them.

From the perspective of social characteristics, [85] summarized three social characteristics conducive to the interactions between chatbots and humans: conversational intelligence, social intelligence, and anthropomorphism. The informal conversational style of chatbots generates a sense of parasocial engagement, which increases chatbot usage intention and brand attitude [34]. Theories such as computer-mediated communication [86], social exchange theory [52], and social response theory [87] are used to explore the social characteristics of chatbots.

2.3. Motivation

Notably, ChatGPT represents a milestone in AI text generation, bringing users closer to AI in their daily lives. In this context, it is important to comprehend how users perceive and evaluate ChatGPT in practical usage scenarios. However, existing research on ChatGPT mainly focuses on specific fields, and the ordinary users' willingness to accept ChatGPT is still not clear. In addition, previous studies on the acceptance of chatbots were mainly based on traditional technology models such as TAM and lacked investigation into the characteristics of AI technologies. Therefore, to better understand users' behavioral intentions toward ChatGPT, this paper aims to investigate user evaluation and behavioral intention toward ChatGPT usage based on the AIDUA model.

3. Theoretical background and hypothesis development

3.1. An AI acceptance framework: AIDUA

Previous studies on the acceptance of chatbots were mainly based on the TAM model, UTAUT, UTAUT2, and the information system model for verification. Although existing technology acceptance models partially explain the mechanism of consumers' intentions with AI gadgets, they were initially designed to examine the acceptance of non-AI technology [22,88,89]. New theoretical advances have emerged in the field of AI, including the AIDUA model, which explains users' readiness to accept the usage of AI equipment in service. Rather than relying on classic models (e.g., TPB and TAM) that define acceptance as the absence of refusal, [90] indicated that acceptance and rejection may co-exist. Consequently, this study will utilize the AIDUA framework to investigate the acceptance of ChatGPT among current users.

Before users determine whether to accept AI devices, there are three steps in the generation of acceptance: primary evaluation, secondary evaluation, and the outcome phase [22]. In the preliminary appraisal stage, individuals will initially assess the importance and relevance of utilizing AI devices during service encounters, taking into account three variables: social influence, hedonic motivation, and anthropomorphism. In detail, social influence and hedonic motivation are favorably associated with performance expectations, while anthropomorphism is positively associated with effort expectancy. During the second evaluation phase, performance and expected effort are important antecedents of consumer feelings. In the outcome stage, users' attitudes toward using AI devices will determine their willingness to accept AI devices and their opposition to their use during service interactions. The AIDUA theory has so far been validated in artificially intelligent robotic devices in the hospitality service setting [91], AI devices [92], autonomous vehicles in travel and tourism [93], and AI hospitality robots [90]. Moreover, there is no literature using this model in chatbot acceptance studies. One of our research objectives is to test whether this model can be extended to chatbot usage scenarios. Therefore, we proposed the first question:

RQ1. What are the behavioral intentions of users toward ChatGPT?

3.2. Cognitive appraisal theory

Cognitive appraisal theory [94] is a credible psychological explanation for why and how individuals respond to external stimuli [24], and it has emerged as a valuable lens for explaining reactions toward information systems (IS) [95]. This evaluation, which incorporates cognitive and emotive elements conducive to different coping mechanisms, manifests through attitudes [96]. However, prior research employing CAT in information systems has primarily emphasized affective attitudes [22]. In addition, several studies have demonstrated the necessity of balancing cognitive and emotional attitudes when researching technologies [96].

Netizens appear to have an ambivalent attitude toward ChatGPT, while cognitive attitude is a comprehensive assessment of AI's skills [96]. Therefore, to obtain a comprehensive understanding of user attitudes toward ChatGPT, it is necessary to include cognitive attitudes in the model. Therefore, we proposed the second research question:

RQ2. Do cognitive and emotional attitudes influence the acceptance of ChatGPT?

3.3. Hypothesis development

3.3.1. Social influence
Social influence refers to "the degree to which an individual believes that important others believe he/she should adopt a new technology" [97]. Before adopting a new technology, individuals consider the opinions of their friends and family and are less inclined to adopt it if others' opinions are negative [98]. In this study, social influence is defined as both mass media influence and interpersonal influence [99]. It has also been discovered that social influence will inspire users to use chatbots since it profoundly affects users' evaluations of performance and expected effort [100]. The potential mechanism for this impact is that users obtain more information about the technology from significant others, which reduces their perceptions of uncertainty [101,102]. Since the release of ChatGPT, tech celebrities have been complimentary about it in succession. Musk once commented that ChatGPT was "scary good" and "has illustrated to people just how advanced AI has become" [103]. Bill Gates claimed that ChatGPT will "change our world" [104]. Therefore, influenced by celebrities' opinions on social media and surrounding ChatGPT users, potential users will receive more specific information about the usage of ChatGPT, thereby having higher expectations of its performance. Thus, we hypothesize as follows:

H1. Social influence positively influences users' performance expectations of ChatGPT.

3.3.2. Hedonic motivation
Hedonic motivation is "the fun or pleasure derived from using a technology" ([105], p. 161). That is, individuals with a hedonic motivation enjoy the overall interaction experience and value playfulness and fun when engaging with the new technology. A previous study found that users with hedonic motivation for using AI devices would benefit from having their demands for novelty and enjoyment met [106]. The user interface of ChatGPT allows fluent interactions and various types of conversations, resulting in users perceiving it as enjoyable and entertaining [107]. Particularly, given that ChatGPT is still a new technology, users are curious and often ask ChatGPT some whimsical or nonsensical questions that they would never ask a human [108]. For users, ChatGPT's capabilities are reflected in conversational skills. Correspondingly, if users have a pleasant experience during the interaction, that will strengthen their beliefs that ChatGPT is easy to use and can help accomplish tasks. Consequently, we put forth the following hypotheses:

H2a. Hedonic motivation positively improves ChatGPT users' performance expectations.

H2b. Hedonic motivation detrimentally affects the effort expectations of ChatGPT users.

3.3.3. Novelty value
Novelty value is a newly introduced variable that assesses the extent to which a product is perceived distinctively from others due to its freshness and originality [109], which is an important characteristic of any new technology [110]. Previous studies have revealed that novelty value is a key belief about technology innovations and plays an instrumental role in the acceptance of innovative technology [110]. In the context of chatbots, it has been proved that a novelty effect is at play when it comes to interactions between users and chatbots [111]. Furthermore, when users perceive the novelty value of technology, they will achieve tasks in an enjoyable manner [28], which positively affects both utilitarian and hedonistic values toward technology [112]. In light of the fact that ChatGPT is disruptive AI in the realm of natural language processing, the novelty value of ChatGPT will help to attract and maintain users' attention [113] and reduce their psychological resistance to new technologies [114]. Thus, the following hypotheses were formulated:

H3a. Novelty value positively influences users' performance expectations of ChatGPT.

H3b. Novelty value negatively influences users' effort expectations of ChatGPT.

3.3.4. Perceived humanness
In the AIDUA model, anthropomorphism is the antecedent that increases users' effort expectations of AI facilities. Specifically, anthropomorphism refers to the degree to which an item possesses humanlike qualities, such as human appearance, self-awareness, and mood [115]. On the one hand, ChatGPT is a text-based chatbot with no anthropomorphic appearance; on the other hand, although social media praised ChatGPT for its high level of anthropomorphism, users were aware that anthropomorphism was a technological advancement and not an actual self-awareness [116]. Considering the interactive characteristics of ChatGPT, such as admitting its errors, challenging incorrect premises, and rejecting inappropriate requests [117], we propose that the concept of perceived humanness is more appropriate than anthropomorphism as the antecedent.

The concept of humanness is essential to research on human–chatbot interaction [118,119]. As discussed in the literature review [118], humanness is comprised of the formal features of contact, such as syntax and language, and humanlike conversational approaches. People consider a chatbot with superior conversational skills to be more humanlike and engaging [120]. In chatbot-related research, user happiness is highly influenced by the conversational quality of AI bots, including their capacity to comprehend humanness, perceptions of contingency, and human-like responses [42]. Although ChatGPT is merely an AI application, it has the capacity to generate responses that closely resemble human writing [121]. In this way, ChatGPT's responses and answers are full of humanness, which enables users to increase their performance expectations for ChatGPT. The humanlike responses are considerate and friendly to users, making interactions with ChatGPT easy and enjoyable. The following hypotheses are formulated based on the discussions:

H4a. Perceived humanness positively influences users' performance expectations of ChatGPT.

H4b. Perceived humanness negatively influences users' effort expectations of ChatGPT.

3.3.5. Influence of performance expectancy on attitudes
Performance expectancy can be characterized as the extent to which users believe that using ChatGPT will aid them in completing a certain activity [97]. In the context of ChatGPT, this refers to users' perceptions of how ChatGPT will help them in various tasks, such as answering questions, generating content, or providing assistance in daily activities. According to the human–computer interaction literature, the utilitarian aspects that increase user productivity significantly impact technology adoption [105]. This means that users are more likely to accept ChatGPT if they perceive it as a valuable tool that can enhance their efficiency and productivity. Moreover, rational choice theory posits that individual judgments regarding cost and benefit evaluations that maximize overall utility are typically reasonable (Atkins and Kim, 2012). In other terms, clients typically choose the ideal shopping technique to obtain the greatest advantage [122]. Users' performance expectations pertain to their anticipation of service dependability and consistency [22,123]. Consequently, we postulate as follows:

H5a. Performance expectancy positively influences users' cognitive attitudes.

H5b. Performance expectancy positively influences users' affective attitudes.

3.3.6. Influence of effort expectancy on attitudes
Effort expectancy refers to "the degree of ease associated with users' use of technology" ([105], p. 159). In the context of ChatGPT, this concept relates to how users perceive the ease of interacting with and utilizing ChatGPT for various tasks. Users' perceptions of effort expectations play a crucial role in their intention to accept new technology. This view was supported by the AI literature indicating the emergence of negative emotions when AI systems cause communication issues and require additional cognitive work due to their complicated design [22]. Prior studies have demonstrated that effort anticipation has a substantial and favorable effect on sightseers' perceptions of AI in service provision [124] and AI-based robotic devices [91]. In a separate study, [125] revealed that effort expectation has the greatest positive impact on users' AI experiences. This means that users are more likely to have a positive experience if they find it easy to communicate with ChatGPT. Therefore, we propose the following hypotheses:

H6a. Effort expectancy negatively influences users' cognitive attitudes.

H6b. Effort expectancy negatively influences users' affective attitudes.

3.3.7. Influence of affective and cognitive attitudes on willingness to accept chatbots
Cognitive and affective evaluations are distinct aspects of attitude theory [126]. Cognitive appraisals correspond to the utilitarian side of attitude, whereas affective appraisals are judgments based on sentiments, emotions, and gut reactions that individuals experience in relation to an attractive object, alluding to the hedonic aspect of attitude [46,127]. Thus, cognitive appraisals might relate to users' perceptions of how effectively ChatGPT can help them with tasks or provide information, while affective appraisals might involve the pleasure or satisfaction users derive from interacting with ChatGPT.

[91] revealed that users' emotional judgment of these technologies mostly influences their acceptance of AI devices [128] and their behavioral intentions [129]. It has also been shown that affective evaluation, which encompasses the passionate and hedonic aspects of social learning, plays a crucial role in elucidating users' perspectives and attitudes toward the source. In addition, the pre-adoption evaluation of AI in enterprises revealed that both emotional and cognitive attitudes are connected with the staff's reaction to the technology [96]. Thus, we hypothesize:

H7a. Affective appraisals are positively associated with users' willingness to accept ChatGPT.

H7b. Affective appraisals negatively affect users' objections to ChatGPT.

H8a. Cognitive appraisals are positively associated with users' willingness to accept ChatGPT.

H8b. Cognitive appraisals are negatively associated with users' objections to ChatGPT.

Hence, our third research question examines the impact of these factors on users' evaluation of ChatGPT. To conclude, based on the AIDUA model and CAT, this study presents a research design for investigating user acceptance of ChatGPT (see Fig. 1). This research model hypothesizes that social influence and eight other factors will influence users' behavior toward ChatGPT. To further investigate the potential influence of demographic variables on users' willingness to accept or reject ChatGPT, we have included age, gender, education level, and usage type as control variables.

Fig. 1. Proposed model.

4. Methodology

To collect data for this study, a questionnaire survey was administered. The final questionnaire contained two sections: demographic information and ChatGPT perceptions. All variables and their dimensions were adapted from prior literature.

To analyze the acquired data, we evaluated the accuracy of the measurements and executed a structural equation model. Pretests of the questionnaire were undertaken to identify and rectify any flaws or issues that might develop during the data collection procedure. We first collected 160 valid questionnaires on the WeChat platform. During the formal questionnaire distribution, we chose the Credamo data platform (https://www.credamo.com/home.html#/). After excluding the incomplete questionnaires, 500 valid questionnaires were finally adopted. The data were analyzed using SPSS 26.0 (IBM SPSS Statistics) and AMOS 24.0 (Analysis of Moment Structures).

4.1. Questionnaire design

The questionnaire was separated into two components: demographic information and perceptions of ChatGPT. In the first portion of the questionnaire, we incorporated the respondents' demographic information (such as age, gender, education, and occupation) and the ChatGPT usage information of the respondents (e.g., the types of functions you most often use and the version of ChatGPT).

The second section of the questionnaire contains the constructs for the perception of ChatGPT. The scales of this survey were adapted from well-established literature (see Table 1). The social influence and hedonic motivation constructs were adapted from Ref. [22]; novelty value from Ref. [112]; and perceived humanness from Ref. [42]. The items for measuring performance expectancy and effort expectations were adapted from Ref. [105]. Adapted from Voss et al. (2003), both the cognitive and affective attitude constructs consist of four items. The acceptance of ChatGPT with three items and the objection to ChatGPT with four items were adapted from Ref. [22]. These items were rated on a 7-point Likert-type scale anchored by 1 for "Strongly Disagree" and 7 for "Strongly Agree." Age, gender, education, type of use, and occupation were used as control variables in this study.

4.2. Data collection

From March 19 to March 25, 2023, we distributed online questionnaires. Due to the multiple versions of ChatGPT, especially the release of ChatGPT-4, the survey included the question, "Which version of ChatGPT have you used?" Credamo (https://www.credamo.com/home.html#/) has been confirmed as the official online platform for questionnaire distribution.


It allows for the rejection of 30% of invalid questionnaires, thereby increasing the efficiency of the data review.

The study targeted respondents who had used ChatGPT. We set up two filter questions at the beginning of the questionnaire, "Have you ever used ChatGPT?" and "Please describe one specific situation in which you used ChatGPT," to exclude respondents who did not meet our requirements. We also used attention filters (e.g., "please select disagree," "please select strongly agree," etc.) in the middle and at the end of the questionnaire to check whether respondents paid considerable attention while completing the questionnaire (Yu et al., 2018). After excluding users who did not meet the requirements and did not pay attention, 500 valid questionnaires were collected.

Table 1
Measurements for constructs.

Social Influence (adapted from [22]):
  People who influence my behavior would want me to utilize ChatGPT.
  People whose opinions I value would prefer that I utilize ChatGPT.
  People who are important to me would encourage me to utilize it.
  People in my social networks who would utilize AI devices have more prestige than those who don't.
Hedonic Motivation (adapted from [22]):
  I have fun interacting with ChatGPT.
  Interacting with ChatGPT is fun.
  Interaction with ChatGPT is enjoyable.
Novelty Value (adapted from [112]):
  I found using ChatGPT to be a novel experience.
  Using ChatGPT is new and refreshing.
  Using ChatGPT satisfied my curiosity.
  ChatGPT made me feel like I was exploring a new world.
Perceived Humanness (adapted from [42]):
  ChatGPT's responses feel natural.
  ChatGPT has a humanlike response.
  ChatGPT's responses do not feel machine-like.
  ChatGPT reacts in a very human way.
Performance Expectancy (adapted from [105]):
  I would find using ChatGPT useful in daily life or work.
  Using ChatGPT would help me accomplish things more quickly.
  Using ChatGPT has increased my productivity.
  ChatGPT would increase my chances of achieving things that are important to me.
Effort Expectancy (adapted from [105]):
  Learning how to use ChatGPT would be easy for me.
  My interaction with ChatGPT would be clear and understandable.
  I would find using ChatGPT easy.
  It would be easy for me to become skillful using ChatGPT.
Cognitive Attitudes (adapted from [97]):
  Using ChatGPT is effective.
  Using ChatGPT is helpful.
  ChatGPT is practical.
  ChatGPT is valuable.
Affective Attitudes (adapted from [97]):
  Using ChatGPT is: Happy; Positive; Pleasing; Satisfactory.
Willingness to Accept (adapted from [22]):
  I am willing to receive ChatGPT.
  I will feel happy to interact with ChatGPT.
  I am likely to interact with ChatGPT.
Objection to Use (adapted from [22]):
  The information is processed in a less humanized manner.
  The existing problems with ChatGPT make me take a wait-and-see approach to ChatGPT.
  I do not plan to continue using ChatGPT.
  I prefer human contact in service transactions.

4.3. Data description

There were a total of 500 valid responses, of which 42.6% were male and 57.4% were female. The demographic characteristics of the respondents are presented in Table 2. More than 80% of respondents were younger than 35, and 77.2% held a bachelor's degree. Interactive conversations, information searches, and text production are the most frequently used ChatGPT functions. In addition, over 80% of the participants used the ChatGPT-3.5 and ChatGPT-4 versions.

Table 2
Demographics of the respondents (n = 500).

Gender: Male 213 (42.6%); Female 287 (57.4%)
Age: 19–25 188 (37.6%); 26–35 231 (46.2%); 36–45 60 (12%); 46–55 15 (3%); 56–65 6 (1.2%)
Education: Senior high school or below 15 (3%); Undergraduate degree 386 (77.2%); Master's degree 87 (17.4%); Doctor's degree 12 (2.4%)
Types of functions most often used: Information search 134 (26.8%); Text production 117 (23.4%); Syntax check 13 (2.6%); Debug code 43 (8.6%); Language translation 49 (9.8%); Interactive conversations 144 (28.8%)
Version of ChatGPT: ChatGPT1 in 2018 5 (1%); ChatGPT2 in 2019 13 (2.6%); ChatGPT3 in 2020 46 (9.2%); ChatGPT3.5 in 2022 273 (54.6%); ChatGPT4 in 2023 163 (32.6%)
Occupation: IT/hardware and software services/e-commerce/Internet operations 187 (37.4%); Household appliance 91 (18.2%); Banking/insurance/securities/investment banking/venture funds 43 (8.6%); Electronic technology/semiconductor/integrated circuit 10 (2%); Catering/entertainment/tourism/hotel/lifestyle services 10 (2%); Advertising/public relations/media/art 17 (3.4%); Communications/telecom operations/network equipment/value-added services 37 (7.4%); Medical/nursing/health/hygiene 15 (3%); Accounting/auditing 9 (1.8%); Manufacturing industry 22 (4.4%); Other sectors 59 (11.8%)

5. Results

5.1. Measurement model

AMOS 24.0 and SPSS 26.0 were utilized to evaluate the model fit of the measurement model. χ2/df = 1.689, RMSEA = 0.037, CFI = 0.949, IFI = 0.949, and TLI = 0.944 indicate that the data are well represented by the model. Table 3 displays the test results for reliability and convergent validity. Internal consistency is indicated by Cronbach's alpha values greater than 0.70 for all constructs [130]. Each construct's composite reliability (CR) value exceeds the benchmark value of 0.7, and the average variance extracted (AVE) for each construct is greater than 0.5, which is the suggested threshold for convergent validity. According to the approach of Ref. [131], discriminant validity is ensured, as Table 4 reveals that the square root of the AVE of each construct is greater than its correlations with other constructs. Furthermore, applying the heterotrait-monotrait (HTMT) ratio approach [132], all HTMT values of the constructs (as shown in Table 5) were found to be below the conservative cut-off value of 0.85, demonstrating satisfactory discriminant validity.

Table 3
Reliability and validity results.
Constructs Items Mean S.D. Loadings Cronbach’s α CR AVE

Social Influence SI1 5.33 1.058 0.808*** 0.830 0.852 0.592


SI2 5.53 1.104 0.761***
SI3 5.61 1.150 0.766***
SI4 5.61 1.117 0.741***
Hedonic Motivation HM1 5.91 0.883 0.840*** 0.849 0.857 0.666
HM2 6.25 0.869 0.822***
HM3 6.18 0.830 0.785***
Novelty Value NV1 6.01 0.967 0.796*** 0.850 0.850 0.587
NV2 6.12 1.019 0.753***
NV3 6.10 0.967 0.721***
NV4 6.04 1.006 0.793***
Perceived Humanness PH1 5.85 0.971 0.771*** 0.847 0.846 0.579
PH2 5.69 0.976 0.738***
PH3 5.34 0.900 0.767 ***
PH4 5.62 0.940 0.767***
Performance Expectancy PE1 5.97 0.896 0.758 *** 0.839 0.837 0.560
PE2 6.12 0.996 0.766***
PE3 6.03 1.015 0.750***
PE4 5.81 0.973 0.726***
Effort Expectancy EE1 5.98 0.927 0.744*** 0.841 0.842 0.571
EE2 6.12 0.911 0.732***
EE3 6.01 0.943 0.779***
EE4 5.96 0.915 0.768***
Cognitive Attitudes CA1 6.08 0.831 0.712 *** 0.808 0.812 0.519
CA2 6.32 0.796 0.749***
CA3 6.25 0.771 0.724***
CA4 6.21 0.868 0.695***
Affective Attitudes AA1 6.21 0.957 0.824*** 0.833 0.870 0.625
AA2 6.21 0.953 0.789***
AA3 6.21 0.972 0.761***
AA4 6.21 0.971 0.791***
Willingness to Accept WA1 6.25 0.724 0.723*** 0.823 0.823 0.606
WA2 6.39 0.766 0.747***
WA3 6.46 0.627 0.858***
Objection to Use OU1 6.46 0.627 0.782*** 0.815 0.836 0.561
OU2 6.46 0.627 0.801***
OU3 6.46 0.627 0.680***
OU4 6.46 0.627 0.728***

Notes: SI = social influence; HM = hedonic motivation; NV = novelty value; HU = humanness; PE = performance expectancy; EE = effort expectancy; CA= Cognitive
Attitudes; AA = Affective Attitudes; W = willingness to accept the use of AI devices; O = objection to the use of AI devices; CR = composite reliability; AVE = average
variance extracted; ***p < 0.001.
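The reliability indices reported in Table 3 follow directly from the standardized loadings, so they can be re-derived by hand. The minimal sketch below (plain Python, written for illustration rather than taken from the authors' analysis) uses the Social Influence loadings from Table 3; the outputs agree with the reported CR of 0.852 and AVE of 0.592 up to rounding.

```python
# Illustrative check of composite reliability (CR) and average variance
# extracted (AVE) from standardized factor loadings (values from Table 3).
def composite_reliability(loadings):
    # CR = (sum of loadings)^2 / [(sum of loadings)^2 + sum of error variances]
    total = sum(loadings)
    error_variance = sum(1 - l ** 2 for l in loadings)
    return total ** 2 / (total ** 2 + error_variance)

def average_variance_extracted(loadings):
    # AVE = mean of the squared standardized loadings
    return sum(l ** 2 for l in loadings) / len(loadings)

social_influence = [0.808, 0.761, 0.766, 0.741]  # SI1-SI4 loadings from Table 3

print(round(composite_reliability(social_influence), 3))       # ~0.853 (Table 3 reports 0.852)
print(round(average_variance_extracted(social_influence), 3))  # ~0.592
```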

Table 4
Construct correlations.
SI HM NV PH PE EE CA AA WA OU

SI 0.770
HM 0.471 0.816
NV 0.405 0.421 0.766
PH 0.470 0.458 0.732 0.761
PE 0.493 0.444 0.755 0.754 0.718
EE 0.360 0.435 0.684 0.634 0.620 0.733
CA 0.288 0.289 0.501 0.479 0.556 0.507 0.717
AA 0.277 0.157 0.285 0.277 0.344 0.228 0.195 0.790
WA 0.212 0.212 0.368 0.352 0.411 0.367 0.700 0.225 0.749
OU − 0.166 − 0.159 − 0.283 − 0.273 − 0.328 − 0.255 − 0.372 − 0.570 − 0.306 0.778

Notes: The square roots of AVE values for each construct itself are in bold; SI = social influence; HM = hedonic motivation; NV = novelty value; HU = humanness; PE =
performance expectancy; EE = effort expectancy; CA= Cognitive Attitudes; AA = Affective Attitudes; W = willingness to accept the use of ChatGPT; O = objection to
the use of ChatGPT.
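The Fornell-Larcker comparison described in Section 5.1 can be checked mechanically against Table 4: the square root of each construct's AVE (the bold diagonal) must exceed that construct's correlations with all other constructs. A small illustrative sketch over an excerpt of Table 4 (again, not the authors' script) is shown below.

```python
import numpy as np

# Fornell-Larcker check on an excerpt of Table 4: sqrt(AVE) on the diagonal
# must exceed the construct's correlations with every other construct.
constructs = ["SI", "HM", "NV", "PH"]
sqrt_ave = np.array([0.770, 0.816, 0.766, 0.761])   # diagonal values from Table 4
corr = np.array([                                    # inter-construct correlations from Table 4
    [1.000, 0.471, 0.405, 0.470],
    [0.471, 1.000, 0.421, 0.458],
    [0.405, 0.421, 1.000, 0.732],
    [0.470, 0.458, 0.732, 1.000],
])

for i, name in enumerate(constructs):
    others = np.abs(np.delete(corr[i], i))
    status = "satisfied" if sqrt_ave[i] > others.max() else "violated"
    print(f"{name}: sqrt(AVE) = {sqrt_ave[i]:.3f}, max correlation = {others.max():.3f} -> {status}")
```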

5.2. Structural model

As depicted in Table 6, the statistical analysis, which includes both analysis of variance (ANOVA) and t-tests, offered valuable insights into the impact of various control variables on users' behavioral intentions toward ChatGPT. Apart from age, which influenced users' willingness to reject ChatGPT (F/t = 5.813, p < 0.01), the other demographic variables showed no significant effects on users' behavioral intentions toward ChatGPT. Consequently, in the process of fitting the structural model, we included the age variable.

Fig. 2 depicts the standardized path coefficient and path significance for each path. Table 6 presents the results of the hypothesis tests. Social influence positively influences performance expectancy (β = 0.097; p = 0.011), and H1 is verified. Hedonic motivation is negatively associated with effort expectancy (β = −0.139, p = 0.005), while there is no substantial effect on performance expectancy (β = 0.014, p = 0.761).
7
X. Ma and Y. Huo Technology in Society 75 (2023) 102362

Table 5
HTMT of all constructs.
HTMT SI O HM NV HU PE EE CA AA AC

SI –
O 0.365 –
HM 0.467 0.323 –
NV 0.401 0.332 0.429 –
HU 0.468 0.459 0.455 0.733 –
PE 0.476 0.426 0.423 0.816 0.782 –
EE 0.372 0.260 0.439 0.689 0.629 0.638 –
CA 0.394 0.395 0.379 0.415 0.482 0.541 0.497 –
AA 0.285 0.600 0.295 0.265 0.375 0.285 0.206 0.281 –
AC 0.303 0.310 0.340 0.345 0.357 0.410 0.444 0.687 0.268 –

Notes: SI = social influence; HM = hedonic motivation; NV = novelty value; HU = humanness; PE = performance expectancy; EE = effort expectancy; CA= Cognitive
Attitudes; AA = Affective Attitudes; W = willingness to accept the use of ChatGPT; O = objection to the use of ChatGPT.

Consequently, H2b is supported, and H2a is rejected. Novelty value has a positive impact on performance expectancy (β = 0.447; p < 0.001) and a negative effect on effort expectancy (β = −0.402; p < 0.001); H3a and H3b are supported. Similarly, perceived humanness positively influences performance expectations (β = 0.389; p < 0.001) and negatively affects effort expectations (β = −0.243; p < 0.001); H4a and H4b are supported.

The results demonstrate that performance expectancy positively affects cognitive attitudes (β = 0.346; p < 0.001) and affective attitudes (β = 0.287; p < 0.001); H5a and H5b are supported. In addition, effort expectancy negatively influences cognitive attitudes (β = −0.237; p < 0.001), and H6a is supported. However, the effect of effort expectancy on affective attitudes is negligible (β = −0.022; p = 0.738), and H6b is rejected. Age has a significant negative impact on users' willingness to refuse ChatGPT (β = −0.435; p = 0.001).

Both cognitive and affective attitudes increased the willingness to accept ChatGPT, supporting H7a and H8a. Specifically, cognitive attitudes were the most influential factor in increasing the willingness to accept ChatGPT (β = 0.505, p < 0.001). Nonetheless, affective attitudes had a much smaller impact on willingness to accept (β = 0.068, p = 0.048). Moreover, both cognitive and affective attitudes are negatively associated with objection to ChatGPT (β = −0.435, p < 0.001; β = −0.836, p < 0.001). As a result, both H7b and H8b are supported.

Table 6
Hypotheses and their significance.

Hypothesis | Path | Path coefficient (β) | p-value | Result
H1 | SI→PE | 0.109 | 0.013 | Supported
H2a | HM→PE | 0.012 | 0.784 | Rejected
H2b | HM→EE | −0.135 | 0.005 | Supported
H3a | NV→PE | 0.493 | *** | Supported
H3b | NV→EE | −0.460 | *** | Supported
H4a | HU→PE | 0.374 | *** | Supported
H4b | HU→EE | −0.231 | *** | Supported
H5a | PE→CA | 0.384 | *** | Supported
H5b | PE→AA | 0.314 | *** | Supported
H6a | EE→CA | −0.127 | *** | Supported
H6b | EE→AA | −0.267 | 0.713 | Rejected
H7a | CA→W | 0.684 | *** | Supported
H7b | CA→O | −0.257 | 0.004 | Supported
H8a | AA→W | 0.094 | 0.043 | Supported
H8b | AA→O | −0.552 | *** | Supported
Control variable | Age→O | −0.435 | 0.001 | Supported

Notes: SI = social influence; HM = hedonic motivation; NV = novelty value; HU = humanness; PE = performance expectancy; EE = effort expectancy; CA = cognitive attitudes; AA = affective attitudes; W = willingness to accept the use of AI devices; O = objection to the use of AI devices.
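To make the estimation step concrete, the hypothesized model in Fig. 1 can be written in lavaan-style syntax and estimated with a structural equation modeling package. The published analysis was run in AMOS 24.0; the sketch below merely illustrates, assuming the third-party Python package semopy and a DataFrame of item scores named as in Table 3, how the measurement and structural parts corresponding to H1-H8 could be specified. Control variables and fit evaluation are omitted for brevity.

```python
import pandas as pd
from semopy import Model  # assumed third-party SEM package with lavaan-like syntax

MODEL_DESC = """
# Measurement model (one line per latent construct; item names as in Table 3)
SI =~ SI1 + SI2 + SI3 + SI4
HM =~ HM1 + HM2 + HM3
NV =~ NV1 + NV2 + NV3 + NV4
PH =~ PH1 + PH2 + PH3 + PH4
PE =~ PE1 + PE2 + PE3 + PE4
EE =~ EE1 + EE2 + EE3 + EE4
CA =~ CA1 + CA2 + CA3 + CA4
AA =~ AA1 + AA2 + AA3 + AA4
WA =~ WA1 + WA2 + WA3
OU =~ OU1 + OU2 + OU3 + OU4

# Structural paths corresponding to the hypotheses
PE ~ SI + HM + NV + PH     # H1, H2a, H3a, H4a
EE ~ HM + NV + PH          # H2b, H3b, H4b
CA ~ PE + EE               # H5a, H6a
AA ~ PE + EE               # H5b, H6b
WA ~ CA + AA               # acceptance outcomes
OU ~ CA + AA               # objection outcomes
"""

def estimate(data: pd.DataFrame):
    model = Model(MODEL_DESC)
    model.fit(data)          # maximum-likelihood estimation by default
    return model.inspect()   # parameter estimates, standard errors, p-values

# Hypothetical usage: estimates = estimate(pd.read_csv("survey_items.csv"))
```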

Fig. 2. Results of structural model testing. (*p < 0.05; **p < 0.01; ***p < 0.001; n. s.: not significant).


The results of the test of the hypotheses are shown in Table 6. Fig. 2 depicts the structural model with standardized coefficient outcomes.

6. Discussion

6.1. New findings about antecedents in the AIDUA model

The current findings of the study partially support the AIDUA model. Consistent with the prior study [22], the results confirm the expected relationships between social influence and performance expectancy and between hedonic motivation and effort expectancy. However, the new antecedents (novelty value and perceived humanness) demonstrated stronger effects than the original factors (social influence and hedonic motivation) on the next-stage evaluation, indicating that the adjustments we made to the AIDUA model based on the context of ChatGPT are appropriate.

It is a special discovery that the indirect impact of novelty value on user acceptance intention is far greater than that of social influence. This means that users attracted through the novel characteristics of ChatGPT are more likely to accept it and continuously use ChatGPT than those attracted through celebrity endorsements or social groups (i.e., family, friends). When users are exposed to massive amounts of information in the Internet environment, the impact of social groups on users in real life is weakened; on the contrary, the novelty of the information obtained has a greater impact on them. Besides, the weak effect of social influence may be due to the fact that ChatGPT is used in spontaneous user behavior, and its role may be more pronounced in organizational scenarios.

Moreover, the effect of novelty value can have a much better influence on acceptance intention in highly intelligent chatbots [133] and drive users to use ChatGPT to the fullest extent [29]. The underlying implication is that for most advanced technologies, users concentrate more on the actual creativity of the technology than on how it is portrayed in the mass media or the mouths of important people. More importantly, the findings revealed no correlation between hedonic motivation and performance expectancy. Given that ChatGPT is a functional chatbot, the impact of utilitarian factors may be more substantial than hedonic factors on performance expectancy [77].

In addition, perceived humanness has a huge effect on users' evaluations of performance and expectations of effort. This result is notable for several reasons. First, the concept of humanness centers on chatbots' ability to accurately comprehend user questions or commands and respond naturally, like humans [42], instead of the concept of anthropomorphism in the original AIDUA model. In combination with the discussion on the humanness of chatbots in the previous literature, ChatGPT has reflected a high level of contingency in message exchanges [5], understanding humanness, response humanness [42], and other aspects of humanness. Second, this is consistent with recent research on chatbot humanness, which suggests that humanness can increase user trust, resulting in a better user experience [134], and that perceived humanness has a substantial impact on users' propensity to interact with chatbots [135]. Moreover, the focus on ChatGPT's humanness eliminates the uncanny valley effect of anthropomorphism and decreases users' perceptions of the amount of effort required to use the chatbot.

6.2. Cognitive attitudes as the mediating factor of evaluation behavior

This study confirmed that performance expectancy influences users' cognitive attitudes and affective attitudes, while effort expectancy only impacts cognitive perspectives, highlighting the utilitarian nature of users' evaluation behavior. Furthermore, the influence of effort expectancy on cognitive attitudes is weaker than that of performance expectancy. In light of ChatGPT's user-friendliness, as AI products become more convenient, the perceived performance of future AI devices will play a greater role in determining the value of AI device use for users.

This study resonates with previous studies [96,46,136] by highlighting the significant mediation function of both types of attitudes between evaluations and conduct intentions. However, the prior AIDUA paradigm focused primarily on users' acceptance of AI robotic devices across service settings [90,91]. In these AI service scenarios, the availability of human service alternatives effectively avoids service failure when an AI chatbot cannot serve users well [47]. However, in the case of a powerful chatbot service such as ChatGPT, it is clear to users that human services cannot be an alternative; that is, with the exception of chatbot researchers, human customer service cannot provide additional help. Therefore, it is necessary to evaluate its usefulness from a utilitarian perspective. Besides, in accordance with the social learning hypothesis, the cognitive dimension enhances an individual's entire functionality, and the emotional dimension simultaneously develops a unique sensibility [126]. Users expect to receive emotional communication and social support from ChatGPT during their interactions.

6.3. Acceptance and objection co-exist in chatbot acceptance

Our results underline the effect of cognitive and affective attitudes in understanding users' acceptance of and objection to ChatGPT. The current study found that cognitive attitudes are the key determinant of user acceptance of ChatGPT. Both positive cognitive attitudes and affective attitudes can reduce the willingness to reject ChatGPT. However, in contrast to the prior studies about emotion [91,96], the affective attitude was found to be almost negligible in explaining the acceptance behaviors. This finding suggests that the influence of attitudes on behavioral intention is probably interrelated with the technology application scenarios. In scenarios such as tourism services and social companionship, users are more influenced by emotional attitudes. A congruence between the attitude components that motivate conduct and those that are emphasized by cognitive processes would strengthen the influence of attitude on behavior [137]. Given that ChatGPT is an informative chatbot, users value its utilitarian function, which explains why cognitive attitude plays a decisive role in their willingness to accept ChatGPT. In addition, cognitive attitudes reflect an examination of the technology effectiveness of ChatGPT; that is, the users' positive cognitive perceptions toward chatbots in turn affect their attitudes [44].

Moreover, although affective attitudes have little effect on users' acceptance intention, they can significantly reduce users' rejection of ChatGPT. This influence seems to put users in an ambiguous position between acceptance and rejection of the new technology. The equivocal effect of attitudes toward AI can be traced to the absence of a distinction between thinking and emotion [96]. The results prove that it is not enough to investigate the user's behavior only through emotional attitudes. Users with only positive emotions are still in the wait-and-see stage of the new technology.

In contrast to previous studies, which often suggest that younger generations are more inclined to readily accept and adapt to emerging technologies, our research reveals a unique finding. Young users in our study did not exhibit a pronounced willingness to accept ChatGPT; however, they also did not express a strong willingness to reject it. This observation implies that users, particularly younger ones, may be more inclined to provide new technologies with an opportunity for acceptance and evaluation.

7. Implications

7.1. Theoretical implications

This study provides substantial contributions to the literature: (1) The AIDUA framework is extended with the antecedent factors (novelty value and perceived humanness) in the context of ChatGPT. (2) As proposed by the CAT, this study stresses the necessity of examining cognitive and emotional attitudes simultaneously when investigating users' behavioral intention toward ChatGPT.


First, it contributes to the development of studies on the AIDUA model. To date, almost no study has examined the model's validity in the context of AI chatbots [138]. This study has extended the AIDUA model with the antecedents of novelty value and perceived humanness in the context of ChatGPT, which is currently the most advanced AI chatbot. The antecedent of anthropomorphism in previous studies was conceptualized as humanlike qualities such as a name or avatar [139] or voice conversations [80], which is not suitable or accurate for describing the characteristics of ChatGPT. Thus, this study introduces a new factor, perceived humanness, which provides more nuanced findings about the model. Besides, we compared the effects of all the antecedents, and the results showed that another new factor, novelty value, had the most significant impact on the expectancy about ChatGPT. Previous studies have provided evidence that novelty value refers to new, unique, personalized, novel content and experience [140]. We further emphasized the findings to examine innovation in the acceptance of new technology products. The feasibility of the new factors was validated in this study, which promotes the progression of the original model in the new context.

In addition, the study has emphasized the role of both cognitive attitudes and affective attitudes in users' behavioral intentions, which provides a complete explanation of how users' intentions for using ChatGPT are developed. This study introduces cognitive attitudes into the framework of the AIDUA model, and they have been proven to be an indispensable part of the evaluation. Therefore, this study suggests that future studies using the AIDUA model to examine behavioral intention toward AI devices incorporate cognitive attitudes into the model. The finding on affective attitude reversed previous results on emotion, as it offered little promotion of the willingness to accept ChatGPT. Nonetheless, an affective attitude toward ChatGPT implies potential human–chatbot relationships, which can be further explored in studies of emotional attachment [141], psychological dependence [142], and so on. Moreover, the intensity of users' cognitive and affective attitudes probably influences the chatbot support type of ChatGPT [143], such as "friend vs assistant," or the extent to which users can develop human–chatbot friendships [111]. Therefore, this study believes that it is necessary to investigate and compare cognitive and affective attitudes in future research scenarios on the willingness to accept chatbots.

7.2. Practical implications

The findings of the current study have a number of practical implications for future practice.

First, this study confirms that novelty value and humanness are crucial factors in determining the acceptance of chatbots. ChatGPT avoids the fear of users regarding anthropomorphic chatbots in terms of vision and consciousness in previous studies and instead develops an intelligent tool that approximates human expression and comprehension from the perspective of the function of conversational communication. This necessitates that R&D personnel grasp the degree of "humanization" reasonably. Further efforts can be made from the perspective of functions such as the ability to understand dialogue in the future.

Second, this article provides new strategies for marketers to promote new technology products. From the first evaluation to the final acceptance of ChatGPT, novelty value has been proven to be much more effective than social influence. Thus, providing fresh content and unprecedented experiences is more attractive than celebrity endorsements when promoting new technology products. With the emergence of numerous new products and technologies, identifying the unique, innovative features that distinguish one product from others can help marketers and advertisers attract the target audience. This also includes novel peripheral information that is less related to important products'

a desire and reluctance to utilize ChatGPT. For ChatGPT, which represents a major advancement in AI, the average user's perspective is likely to be complex. On the one hand, ChatGPT liberates people from monotonous and repetitive work; on the other hand, it brings innovation to working methods, a revolution in knowledge learning, career anxiety, data leakage, and other technical concerns. Therefore, the advancement of technology should also incorporate ethical and humanistic considerations. AI developers and policymakers should formulate policies and establish AI governance systems to guide and regulate the correct viewing and use of AI, such as ChatGPT.

The observed trend of younger individuals displaying a lower willingness to reject ChatGPT, while older individuals exhibit a higher likelihood of rejecting it, carries significant practical implications. First, it underscores the significance of adopting a human-centered approach in the design and promotion of artificial intelligence (AI) technology. It highlights the necessity of conducting targeted surveys among older groups to understand their concerns about ChatGPT. Such research endeavors facilitate technology developers in tailoring various aspects of the technology, including interface design and response mechanisms, to align with the preferences and requirements of elderly users. Second, older people might need extra help, training resources, or more user-friendly interfaces to feel comfortable with new technology. By considering these requirements, we can foster a more inclusive and accommodating environment for the adoption of emerging technologies like ChatGPT, ensuring that they are accessible and beneficial to a wider demographic.

8. Conclusion

8.1. Summary

This study developed a new theoretical model to explain users' willingness to accept chatbots based on the AIDUA model and CAT. The results indicate that in the primary appraisal stage, social influence, novelty value, and humanness positively affect individuals' performance expectancy perceptions of ChatGPT. Novelty value, hedonic motivation, and humanness negatively impact the effort expectancy of ChatGPT. Performance and effort expectancy influenced users' cognitive attitudes during the second evaluation stage, whereas only performance expectations influenced users' affective attitudes. Ultimately, cognitive attitudes are the main prerequisites for users to adopt ChatGPT, whereas emotional attitudes significantly diminish their propensity to reject ChatGPT.

This study contributes to chatbot acceptance research and expands the AIDUA model by capturing its characteristics in the ChatGPT scenario. The research findings enable us to comprehend the primary factors influencing user attitudes, thereby facilitating chatbot adoption.

8.2. Limitations and future research

In different usage scenarios, users' perceptions of and acceptance behavior toward ChatGPT may be inconsistent. Currently, ChatGPT is still in its early stages of adoption and popularity, and users may be more tolerant of the technology effectiveness of chatbots. Our research indicates that the most popular ChatGPT function is interactive communication. On the contrary, issues such as insufficient content accuracy and inability to perform advanced logic processing should be resolved with the ChatGPT function upgrade. Therefore, future research is required to evaluate user acceptance of ChatGPT in certain usage scenarios.

In addition, beyond the added novelty value and humanness, users' willingness to accept the use of AI gadgets may be substantially influenced by the perceived accuracy of the information and other considerations. Future research attempts could incorporate new items from
core characters, which has been proven to evoke positive feelings about different individuals or organizations. Furthermore, this study exclu­
the products [144]. sively explored the willingness of users with ChatGPT experience to
In addition, this study demonstrated that users are likely to have both accept the technology. The generalizability of these findings to a broader


The generalizability of these findings to a broader population requires careful consideration. Therefore, future research should aim to incorporate a more diverse sample, encompassing individuals who have not yet engaged with the technology. This approach would enable a comprehensive comparative analysis of both user and non-user perspectives.

Author statement

Xiaoyue Ma: Conceptualization, Methodology, Reviewing and Editing. Yudi Huo: Data curation, Writing – Original draft preparation.

Data availability

Data will be made available on request.

Acknowledgements

This work was supported by the National Natural Science Foundation of China (72174164) and the National Social Science Foundation of China (Major program) (21&ZD320).

References

[1] T. Wu, S. He, J. Liu, S. Sun, K. Liu, Q.L. Han, Y. Tang, A brief overview of ChatGPT: the history, status quo and potential future development, IEEE/CAA J. Automat. Sinica 10 (5) (2023) 1122–1136.
[2] W.M. Lim, A. Gunasekara, J.L. Pallant, J.I. Pallant, E. Pechenkina, Generative AI and the future of education: ragnarök or reformation? A paradoxical perspective from management educators, Int. J. Manag. Educ. 21 (2) (2023), 100790.
[3] D. Milmo, ChatGPT reaches 100 million users two months after launch, The Guardian, 2023, February 2. Available online: https://www.theguardian.com/technology/2023/feb/02/chatgpt-100-million-users-open-ai-fastest-growing-app. (Accessed 18 June 2023).
[4] X. Luo, S. Tong, Z. Fang, Z. Qu, Frontiers: machines vs. humans: the impact of artificial intelligence chatbot disclosure on customer purchases, Market. Sci. 38 (6) (2019) 937–947.
[5] E. Go, S.S. Sundar, Humanizing chatbots: the effects of visual, identity and conversational cues on humanness perceptions, Comput. Hum. Behav. 97 (2019) 304–316.
[6] A. Janssen, L. Grützner, M.H. Breitner, Why do chatbots fail? A critical success factors analysis, in: International Conference on Information Systems (ICIS), 2021.
[7] O. Beran, An attitude towards an artificial soul? Responses to the "Nazi Chatbot", Philos. Investig. 41 (1) (2018) 42–69.
[8] A. Følstad, M. Skjuve, Chatbots for customer service: user experience and motivation, in: Proceedings of the 1st International Conference on Conversational User Interfaces, 2019, pp. 1–9.
[9] Y. Shen, L. Heacock, J. Elias, K.D. Hentel, B. Reig, G. Shih, L. Moy, ChatGPT and other large language models are double-edged swords, Radiology 307 (2) (2023), e230163.
[10] T. Chong, T. Yu, D.I. Keeling, K. de Ruyter, AI-chatbots on the services frontline addressing the challenges and opportunities of agency, J. Retail. Custom. Serv. 63 (2021), 102735.
[11] Y. Shahsavar, A. Choudhury, User intentions to use ChatGPT for self-diagnosis and health-related purposes: cross-sectional survey study, JMIR Human Factors 10 (1) (2023), e47564.
[12] S. Altmäe, A. Sola-Leyva, A. Salumets, Artificial intelligence in scientific writing: a friend or a foe? Reprod. Biomed. Online 47 (1) (2023) 3–9.
[13] S.B. Patel, K. Lam, ChatGPT: the future of discharge summaries? Lancet Digit. Health 5 (3) (2023) e107–e108.
[14] M. Cascella, J. Montomoli, V. Bellini, E. Bignami, Evaluating the feasibility of ChatGPT in healthcare: an analysis of multiple clinical and research scenarios, J. Med. Syst. 47 (1) (2023) 33.
[15] E. Kasneci, K. Seßler, S. Küchemann, M. Bannert, D. Dementieva, F. Fischer, G. Kasneci, ChatGPT for good? On opportunities and challenges of large language models for education, Learn. Indiv. Differ. 103 (2023), 102274.
[16] M.C. Keiper, ChatGPT in practice: increasing event planning efficiency through artificial intelligence, J. Hospit. Leisure Sports Tourism Educ. 33 (2023), 100454.
[17] F.J. García-Peñalvo, The Perception of Artificial Intelligence in Educational Contexts after the Launch of ChatGPT: Disruption or Panic?, 2023.
[18] G. Liu, C. Ma, Measuring EFL learners' use of ChatGPT in informal digital learning of English based on the technology acceptance model, Innovat. Lang. Learn. Teach. (2023) 1–14.
[19] H. Du, S. Teng, H. Chen, J. Ma, X. Wang, C. Gou, F.Y. Wang, Chat with ChatGPT on intelligent vehicles: an IEEE TIV perspective, IEEE Transactions on Intelligent Vehicles, 2023.
[20] N. Editorials, Tools such as ChatGPT threaten transparent science; here are our ground rules for their use, Nature 613 (2023) 612.
[21] S. Kelly, S.A. Kaye, O. Oviedo-Trespalacios, What Factors Contribute to Acceptance of Artificial Intelligence? A Systematic Review, Telematics and Informatics, 2022, 101925.
[22] D. Gursoy, O.H. Chi, L. Lu, R. Nunkoo, Customers acceptance of artificially intelligent (AI) device use in service delivery, Int. J. Inf. Manag. 49 (2019) 157–169.
[23] S. Lee, S. Ha, R. Widdows, Consumer responses to high-technology products: product attributes, cognition, and emotions, J. Bus. Res. 64 (11) (2011) 1195–1200.
[24] Y. Suseno, C. Chang, M. Hudik, E.S. Fang, Beliefs, anxiety and change readiness for artificial intelligence adoption among human resource managers: the moderating role of high-performance work systems, Int. J. Hum. Resour. Manag. 33 (6) (2022) 1209–1236.
[25] W. Huo, X. Yuan, X. Li, W. Luo, J. Xie, B. Shi, Increasing acceptance of medical AI: the role of medical staff participation in AI development, Int. J. Med. Inf. 175 (2023), 105073.
[26] E. Agathokleous, C.J. Saitanis, C. Fang, Z. Yu, Use of ChatGPT: what does it mean for biology and environmental science? Sci. Total Environ. 888 (2023), 164154.
[27] S. Liu, A.P. Wright, B.L. Patterson, J.P. Wanderer, R.W. Turer, S.D. Nelson, A. Wright, Using AI-generated suggestions from ChatGPT to optimize clinical decision support, J. Am. Med. Inf. Assoc. 30 (7) (2023) 1237–1245.
[28] S. Adapa, S.M. Fazal-e-Hasan, S.B. Makam, M.M. Azeem, G. Mortimer, Examining the antecedents and consequences of perceived shopping value through smart retail technology, J. Retail. Custom. Serv. 52 (2020), 101901.
[29] R. Hasan, R. Shams, M. Rahman, Customer trust and perceived risk for voice-controlled artificial intelligence: the case of Siri, J. Bus. Res. 131 (2021) 591–597.
[30] V. Taecharungroj, "What can ChatGPT do?" Analyzing early reactions to the innovative AI chatbot on Twitter, Big Data Cognit. Comput. 7 (1) (2023) 35.
[31] M. Dahiya, A tool of conversation: chatbot, Int. J. Comput. Sci. Eng. 5 (5) (2017) 158–161.
[32] B.A. Shawar, E.S. Atwell, Using corpora in machine-learning chatbot systems, Int. J. Corpus Linguist. 10 (4) (2005) 489–516.
[33] C. Crolic, F. Thomaz, R. Hadi, A.T. Stephen, Blame the bot: anthropomorphism and anger in customer–chatbot interactions, J. Mark. 86 (1) (2022) 132–148.
[34] M. Li, R. Wang, Chatbots in e-commerce: the effect of chatbot language style on customers' continuance usage intention and attitude toward brand, J. Retail. Custom. Serv. 71 (2023), 103209.
[35] X. Cheng, Y. Bao, A. Zarifis, W. Gong, J. Mou, Exploring customers' response to text-based chatbots in e-commerce: the moderating role of task complexity and chatbot disclosure, Internet Res. 32 (2) (2021) 496–517.
[36] S. Dube, E-Commerce Chatbots - Using Chatbots Customer Support to Improve eCommerce Conversion Rate, 2020. https://www.invespcro.com/blog/ecommerce-chatbots/.
[37] B. Zhang, Y. Zhu, J. Deng, W. Zheng, Y. Liu, C. Wang, R. Zeng, "I Am here to assist your tourism": predicting continuance intention to use AI-based chatbots for tourism. Does gender really matter? Int. J. Hum. Comput. Interact. (2022) 1–17.
[38] R. Pillai, B. Sivathanu, Adoption of AI-based chatbots for hospitality and tourism, Int. J. Contemp. Hospit. Manag. 32 (10) (2020) 3199–3226.
[39] E. Kang, Y.A. Kang, Counseling chatbot design: the effect of anthropomorphic chatbot characteristics on user self-disclosure and companionship, Int. J. Hum. Comput. Interact. (2023) 1–15.
[40] Y. Zhu, R. Wang, C. Pu, "I am chatbot, your virtual mental health adviser." What drives citizens' satisfaction and continuance intention toward mental health chatbots during the COVID-19 pandemic? An empirical study in China, Digital Health 8 (2022), 20552076221090031.
[41] K. Ryong, D. Lee, J.G. Lee, Chatbot's complementary motivation support in developing study plan of E-learning English lecture, Int. J. Hum. Comput. Interact. (2023) 1–15.
[42] C.L. Hsu, J.C.C. Lin, Understanding the user satisfaction and loyalty of customer service chatbots, J. Retail. Custom. Serv. 71 (2023), 103211.
[43] A.M. Baabdullah, A.A. Alalwan, R.S. Algharabat, B. Metri, N.P. Rana, Virtual agents and flow experience: an empirical examination of AI-powered chatbots, Technol. Forecast. Soc. Change 181 (2022), 121772.
[44] X. Lin, B. Shao, X. Wang, Employees' perceptions of chatbots in B2B marketing: affordances vs. disaffordances, Ind. Market. Manag. 101 (2022) 45–56.
[45] Y. Zhu, J. Zhang, J. Wu, Y. Liu, AI is better when I'm sure: the influence of certainty of needs on customers' acceptance of AI chatbots, J. Bus. Res. 150 (2022) 642–652.
[46] S.V. Jin, S. Youn, Social presence and imagery processing as predictors of chatbot continuance intention in human-AI-interaction, Int. J. Hum. Comput. Interact. (2022) 1–13.
[47] Q. Chen, Y. Lu, Y. Gong, J. Xiong, Can AI Chatbots Help Retain Customers? Impact of AI Service Quality on Customer Loyalty, Internet Research, 2023 (ahead-of-print).
[48] H. Fan, W. Gao, B. Han, Are AI chatbots a cure-all? The relative effectiveness of chatbot ambidexterity in crafting hedonic and cognitive smart experiences, J. Bus. Res. 156 (2023), 113526.
[49] X. Wang, X. Lin, B. Shao, How does artificial intelligence create business agility? Evidence from chatbots, Int. J. Inf. Manag. 66 (2022), 102535.
[50] Y. Jiang, X. Yang, T. Zheng, Make chatbots more adaptive: dual pathways linking humanlike cues and tailored response to trust in interactions with chatbots, Comput. Hum. Behav. 138 (2023), 107485.
[51] S.W. Song, M. Shin, Uncanny Valley effects on chatbot trust, purchase intention, and adoption intention in the context of E-commerce: the moderating role of avatar familiarity, Int. J. Hum. Comput. Interact. (2022) 1–16.


[52] H. Jiang, Y. Cheng, J. Yang, S. Gao, AI-powered chatbot communication with customers: dialogic interactions, satisfaction, engagement, and customer behavior, Comput. Hum. Behav. 134 (2022), 107329.
[53] M. Cheng, X. Li, J. Xu, Promoting healthcare workers' adoption intention of artificial-intelligence-assisted diagnosis and treatment: the chain mediation of social influence and human–computer trust, Int. J. Environ. Res. Publ. Health 19 (20) (2022), 13311.
[54] J. Lee, D. Lee, J.G. Lee, Influence of rapport and social presence with an AI psychotherapy chatbot on customers' self-disclosure, Int. J. Hum. Comput. Interact. (2022) 1–12.
[55] H. Shin, I. Bunosso, L.R. Levine, The influence of chatbot humour on customer evaluations of services, Int. J. Consum. Stud. 47 (2) (2022) 545–562.
[56] E.A. Croes, M.L. Antheunis, M.B. Goudbeek, N.W. Wildman, "I am in your computer while we talk to each other" A content analysis on the use of language-based strategies by humans and a social chatbot in initial human-chatbot interactions, Int. J. Hum. Comput. Interact. (2022) 1–19.
[57] Q. Jiang, Y. Zhang, W. Pian, Chatbot as an emergency exist: mediated empathy for resilience via human-AI interaction during the COVID-19 pandemic, Inf. Process. Manag. 59 (6) (2022), 103074.
[58] C. Yen, M.C. Chiang, Trust me, if you can: a study on the factors that influence consumers' purchase intention triggered by chatbots based on brain image evidence and self-reported assessments, Behav. Inf. Technol. 40 (11) (2021) 1177–1194.
[59] F. Ali, Q. Zhang, M.Z. Tauni, K. Shahzad, Social chatbot: my friend in my distress, Int. J. Hum. Comput. Interact. (2023) 1–11.
[60] E. Konya-Baumbach, M. Biller, S. von Janda, Someone out there? A study on the social presence of anthropomorphized chatbots, Comput. Hum. Behav. 139 (2023), 107513.
[61] Y. Cheng, H. Jiang, Customer–brand relationship in the era of artificial intelligence: understanding the role of chatbot marketing efforts, J. Prod. Brand Manag. 31 (2) (2022) 252–264.
[62] H. Hari, R. Iyer, B. Sampat, Customer brand engagement through chatbots on bank websites–Examining the antecedents and consequences, Int. J. Hum. Comput. Interact. 38 (13) (2022) 1212–1227.
[63] J.S.E. Lin, L. Wu, Examining the psychological process of developing customer-brand relationships through strategic use of social media brand chatbots, Comput. Hum. Behav. 140 (2023), 107488.
[64] P.B. Brandtzaeg, M. Skjuve, A. Følstad, My AI friend: how customers of a social chatbot understand their human–AI friendship, Hum. Commun. Res. 48 (3) (2022) 404–429.
[65] J. Chatterjee, N. Dethlefs, This new conversational AI model can be your friend, philosopher, and guide... and even your worst enemy, Patterns 4 (1) (2023), 100676.
[66] A.J. Kull, M. Romero, L. Monahan, How may I help you? Driving brand engagement through the warmth of an initial chatbot message, J. Bus. Res. 135 (2021) 840–850.
[67] OpenAI, "DALL.E2", 2023. Available at: https://openai.com/dall-e-2/.
[68] E.A. van Dis, J. Bollen, W. Zuidema, R. van Rooij, C.L. Bockting, ChatGPT: five priorities for research, Nature 614 (7947) (2023) 224–226.
[69] Y.K. Dwivedi, N. Kshetri, L. Hughes, E.L. Slade, A. Jeyaraj, A.K. Kar, R. Wright, "So what if ChatGPT wrote it?" Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy, Int. J. Inf. Manag. 71 (2023), 102642.
[70] J. Paul, A. Ueno, C. Dennis, ChatGPT and consumers: benefits, pitfalls and future research agenda, Int. J. Consum. Stud. 47 (4) (2023) 1213–1225.
[71] C. Lee, J. Kim, J.S. Lim, How does fact-check labeling impact the evaluations of inadvertently placed brand ads? Soc. Sci. J. (2023) 1–17.
[72] H. Ibrahim, R. Asim, F. Zaffar, T. Rahwan, Y. Zaki, Rethinking homework in the age of artificial intelligence, IEEE Intell. Syst. 38 (2) (2023) 24–27; iiMedia, China's Chatbot Satisfaction Report in 2021, 2021. https://www.iimedia.cn/c400/81565.html.
[73] A. Tlili, B. Shehata, M.A. Adarkwah, A. Bozkurt, D.T. Hickey, R. Huang, B. Agyemang, What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education, Smart Learn. Environ. 10 (1) (2023) 15.
[74] The Guardian, New York City schools ban AI chatbot ChatGPT, 2023. Available online: https://www.theguardian.com/us-news/2023/jan/06/new-york-city-schools-ban-ai-chatbot-chatgpt.
[75] F. Rahimi, A.T.B. Abadi, ChatGPT and publication ethics, Arch. Med. Res. 54 (3) (2023) 272–274.
[76] D.L. Kasilingam, Understanding the attitude and intention to use smartphone chatbots for shopping, Technol. Soc. 62 (2020), 101280.
[77] A. Rese, L. Ganster, D. Baier, Chatbots in retailers' customer communication: how to measure their acceptance? J. Retail. Custom. Serv. 56 (2020), 102176.
[78] D.H. Huang, H.E. Chueh, Chatbot usage intention analysis: veterinary consultation, J. Innovat. Knowled. 6 (3) (2021) 135–144.
[79] B.A. Eren, Determinants of customer satisfaction in chatbot use: evidence from a banking application in Turkey, Int. J. Bank Market. 39 (2) (2021) 294–311.
[80] J. Balakrishnan, S.S. Abed, P. Jones, The role of meta-UTAUT factors, perceived anthropomorphism, perceived intelligence, and social self-efficacy in chatbot-based services? Technol. Forecast. Soc. Change 180 (2022), 121692.
[81] E. Mogaji, J. Balakrishnan, A.C. Nwoba, N.P. Nguyen, Emerging-market consumers' interactions with banking chatbots, Telematics Inf. 65 (2021), 101711.
[82] A. Kwangsawad, A. Jattamart, Overcoming customer innovation resistance to the sustainable adoption of chatbot services: a community-enterprise perspective in Thailand, J. Innovat. Knowled. 7 (3) (2022), 100211.
[83] M. Ashfaq, J. Yun, S. Yu, S.M.C. Loureiro, I, Chatbot: modeling the determinants of customers' satisfaction and continuance intention of AI-powered service agents, Telematics Inf. 54 (2020), 101473.
[84] B. Zarouali, E. Van den Broeck, M. Walrave, K. Poels, Predicting customer responses to a chatbot on Facebook, Cyberpsychol., Behav. Soc. Netw. 21 (8) (2018) 491–497.
[85] A.P. Chaves, M.A. Gerosa, How should my chatbot interact? A survey on social characteristics in human–chatbot interaction design, Int. J. Hum. Comput. Interact. 37 (8) (2021) 729–758.
[86] S.I. Lei, H. Shen, S. Ye, A comparison between chatbot and human service: customer perception and reuse intention, Int. J. Contemp. Hospit. Manag. 33 (11) (2021) 3977–3995.
[87] S.Y. Huang, C.J. Lee, Predicting continuance intention to fintech chatbot, Comput. Hum. Behav. 129 (2022), 107027.
[88] J. Im, M. Hancer, What fosters favorable attitudes toward using travel mobile applications? J. Hospit. Mark. Manag. 26 (4) (2017) 361–377.
[89] B. Lee, D.A. Cranage, Causal attributions and overall blame of self-service technology (SST) failure: different from service failures by employee and policy, J. Hospit. Mark. Manag. 27 (1) (2018) 61–84.
[90] O.H. Chi, C.G. Chi, D. Gursoy, R. Nunkoo, Customers' acceptance of artificially intelligent service robots: the influence of trust and culture, Int. J. Inf. Manag. 70 (2023), 102623.
[91] H. Lin, O.H. Chi, D. Gursoy, Antecedents of customers' acceptance of artificially intelligent robotic device use in hospitality services, J. Hospit. Market. Manag. 29 (5) (2020) 530–549.
[92] V. Vitezić, M. Perić, Artificial intelligence acceptance in services: connecting with Generation Z, Serv. Ind. J. 41 (13–14) (2021) 926–946.
[93] M.A. Ribeiro, D. Gursoy, O.H. Chi, Customer acceptance of autonomous vehicles in travel and tourism, J. Travel Res. 61 (3) (2022) 620–636.
[94] R.S. Lazarus, S. Folkman, Stress, Appraisal, and Coping, Springer Publishing Company, 1984.
[95] S. Paluch, S. Tuzovic, H.F. Holz, A. Kies, M. Jörling, "My colleague is a robot"–exploring frontline employees' willingness to work with collaborative service robots, J. Serv. Manag. 33 (2) (2022) 363–388.
[96] Y.T. Chiu, Y.Q. Zhu, J. Corbett, In the hearts and minds of employees: a model of pre-adoptive appraisal toward artificial intelligence in organizations, Int. J. Inf. Manag. 60 (2021), 102379.
[97] V. Venkatesh, M.G. Morris, G.B. Davis, F.D. Davis, User acceptance of information technology: toward a unified view, MIS Q. (2003) 425–478.
[98] P. He, S. Lovo, M. Veronesi, Social networks and renewable energy technology adoption: empirical evidence from biogas adoption in China, Energy Econ. 106 (2022), 105789.
[99] T.T. Wei, G. Marthandan, A.Y.L. Chong, K.B. Ooi, S. Arumugam, What drives Malaysian m-commerce adoption? An empirical analysis, Ind. Manag. Data Syst. 109 (3) (2009) 370–388.
[100] S. Sharma, N. Islam, G. Singh, A. Dhir, Why do retail customers adopt artificial intelligence (AI) based autonomous decision-making systems? IEEE Trans. Eng. Manag. 3 (115-121) (2022) 1–17.
[101] X. Cheng, X. Zhang, J. Cohen, J. Mou, Human vs. AI: understanding the impact of anthropomorphism on customer response to chatbots from the perspective of trust and relationship norms, Inf. Process. Manag. 59 (3) (2022), 102940.
[102] A. Oldeweme, J. Märtins, D. Westmattelmann, G. Schewe, The role of transparency, trust, and social influence on uncertainty reduction in times of pandemics: empirical study on the adoption of COVID-19 tracing apps, J. Med. Internet Res. 23 (2) (2021), e25893.
[103] R. Browne, Elon Musk, who co-founded firm behind ChatGPT, warns A.I. is 'one of the biggest risks' to civilization, Consumer News and Business Channel, 2023, March 6. Available online: https://www.cnbc.com/2023/02/15/elon-musk-co-founder-of-chatgpt-creator-openai-warns-of-ai-society-risk.html.
[104] S. Bhaimiya, Bill Gates said ChatGPT will 'change our world' by making the workplace more efficient, Insider, 2023, February 2. Available online: https://www.businessinsider.com/bill-gates-chatgpt-says-will-change-our-world-interview-2023-2.
[105] V. Venkatesh, J.Y. Thong, X. Xu, Customer acceptance and use of information technology: extending the unified theory of acceptance and use of technology, MIS Q. (2012) 157–178.
[106] L.K. Fryer, M. Ainley, A. Thompson, A. Gibson, Z. Sherlock, Stimulating and sustaining interest in a language course: an experimental comparison of Chatbot and Human task partners, Comput. Hum. Behav. 75 (2017) 461–468.
[107] A. Strzelecki, To use or not to use ChatGPT in higher education? A study of students' acceptance and use of technology, Interact. Learn. Environ. (2023) 1–14.
[108] S. Melián-González, D. Gutiérrez-Taño, J. Bulchand-Gidumal, Predicting the intentions to use chatbots for travel and tourism, Curr. Issues Tourism 24 (2) (2021) 192–210.
[109] S. Im, S. Bhat, Y. Lee, Consumer perceptions of product creativity, coolness, value and attitude, J. Bus. Res. 68 (1) (2015) 166–172.
[110] J.D. Wells, D.E. Campbell, J.S. Valacich, M. Featherman, The effect of perceived novelty on the adoption of information technology innovations: a risk/reward perspective, Decis. Sci. J. 41 (4) (2010) 813–843.
[111] E.A. Croes, M.L. Antheunis, Can we be friends with Mitsuku? A longitudinal study on the process of relationship formation between humans and a social chatbot, J. Soc. Pers. Relat. 38 (1) (2021) 279–300.
[112] H. Karjaluoto, A.A. Shaikh, H. Saarijärvi, S. Saraniemi, How perceived value drives the use of mobile financial services apps, Int. J. Inf. Manag. 47 (2019) 252–261.


[113] A. Luqman, X. Cao, A. Ali, A. Masood, L. Yu, Empirical investigation of Facebook discontinues usage intentions based on SOR paradigm, Comput. Hum. Behav. 70 (2017) 544–555.
[114] L. Xie, X. Liu, D. Li, The mechanism of value cocreation in robotic services: customer inspiration from robotic service novelty, J. Hospit. Market. Manag. 31 (8) (2022) 962–983.
[115] H.Y. Kim, A.L. McGill, Minions for the rich? Financial status changes how consumers see products with anthropomorphic features, J. Consumer Res. 45 (2) (2018) 429–450.
[116] S. Li, A.M. Peluso, J. Duan, Why do we prefer humans to artificial intelligence in telemarketing? A mind perception explanation, J. Retail. Custom. Serv. 70 (2023), 103139.
[117] OpenAI, Introducing ChatGPT, 2023. Available online: https://openai.com/blog/chatgpt.
[118] A. Rapp, L. Curti, A. Boldi, The human side of human-chatbot interaction: a systematic literature review of ten years of research on text-based chatbots, Int. J. Hum. Comput. Stud. 151 (2021), 102630.
[119] L. Lu, C. McDonald, T. Kelleher, S. Lee, Y.J. Chung, S. Mueller, C.A. Yue, Measuring customer-perceived humanness of online organizational agents, Comput. Hum. Behav. 128 (2022), 107092.
[120] R.M. Schuetzler, G.M. Grimes, J. Scott Giboney, The impact of chatbot conversational skill on engagement and perceived humanness, J. Manag. Inf. Syst. 37 (3) (2020) 875–900.
[121] B. Foroughi, M.G. Senali, M. Iranmanesh, A. Khanfar, M. Ghobakhloo, N. Annamalai, B. Naghmeh-Abbaspour, Determinants of intention to use ChatGPT for educational purposes: findings from PLS-SEM and fsQCA, Int. J. Hum. Comput. Interact. (2023) 1–20.
[122] E.C.X. Aw, G.W.H. Tan, T.H. Cham, R. Raman, K.B. Ooi, Alexa, what's on my shopping list? Transforming customer experience with digital voice assistants, Technol. Forecast. Soc. Change 180 (2022), 121711.
[123] X. Lv, J. Luo, Y. Liang, Y. Liu, C. Li, Is cuteness irresistible? The impact of cuteness on customers' intentions to use AI applications, Tourism Manag. 90 (2022), 104472.
[124] O.H. Chi, D. Gursoy, C.G. Chi, Tourists' attitudes toward the use of artificially intelligent (AI) devices in tourism service delivery: moderating role of service value seeking, J. Trav. Res. 61 (1) (2022) 170–185.
[125] E. Moriuchi, An empirical study on anthropomorphism and engagement with disembodied AIs and customers' re-use behavior, Psychol. Market. 38 (1) (2021) 21–42.
[126] A. Chen, Y. Lu, B. Wang, Customers' purchase decision-making process in social commerce: a social learning perspective, Int. J. Inf. Manag. 37 (6) (2017) 627–638.
[127] Y. Lee, A.N. Chen, V. Ilie, Can online wait be managed? The effect of filler interfaces and presentation modes on perceived waiting time online, MIS Q. (2012) 365–394.
[128] S.V. Jin, S. Youn, "They bought it, therefore I will buy it": the effects of peer customers' conversion as sales performance and entrepreneurial sellers' number of followers as relationship performance in mobile social commerce, Comput. Hum. Behav. 131 (2022), 107212.
[129] D. Le, M. Pratt, Y. Wang, N. Scott, G. Lohmann, How to win the customer's heart? Exploring appraisal determinants of customer pre-consumption emotions, Int. J. Hospit. Manag. 88 (2020), 102542.
[130] J.C. Nunnally, I.H. Bernstein, Psychometric Theory, McGraw-Hill, Inc, USA, 1994.
[131] C. Fornell, D.F. Larcker, Evaluating structural equation models with unobservable variables and measurement error, J. Market. Res. 18 (1) (1981) 39–50.
[132] J. Henseler, C.M. Ringle, M. Sarstedt, A new criterion for assessing discriminant validity in variance-based structural equation modeling, J. Acad. Market. Sci. 43 (2015) 115–135.
[133] Y. Wang, Q. Kang, S. Zhou, Y. Dong, J. Liu, The impact of service robots in retail: exploring the effect of novelty priming on consumer behavior, J. Retailing Consum. Serv. 68 (2022), 103002.
[134] D. Shin, The effects of explainability and causability on perception, trust, and acceptance: implications for explainable AI, Int. J. Hum. Comput. Stud. 146 (2021), 102551.
[135] D. Shin, The perception of humanness in conversational journalism: an algorithmic information-processing perspective, New Media Soc. 24 (12) (2022) 2680–2704.
[136] J.N. Choi, S.Y. Sung, K. Lee, D.S. Cho, Balancing cognition and emotion: innovation implementation as a function of cognitive appraisal and emotional reactions toward innovation, J. Organ. Behav. 32 (1) (2011) 107–124.
[137] Z.J. Wang, K.Q. Chan, J.J. Chen, A. Chen, F. Wang, Differential impact of affective and cognitive attributes on preference under deliberation and distraction, Front. Psychol. 549 (2015).
[138] W.B. Kim, H.J. Hur, What makes people feel empathy for AI chatbots? Assessing the role of competence and warmth, Int. J. Hum. Comput. Interact. (2023) 1–14.
[139] C. Crolic, F. Thomaz, R. Hadi, A.T. Stephen, Blame the bot: anthropomorphism and anger in customer–chatbot interactions, J. Market. 86 (1) (2022) 132–148.
[140] G. McLean, A. Wilson, Shopping in the digital world: examining customer engagement through augmented reality mobile applications, Comput. Hum. Behav. 101 (2019) 210–224.
[141] I. Pentina, T. Hancock, T. Xie, Exploring relationship development with social chatbots: a mixed-method study of replika, Comput. Hum. Behav. 140 (2023), 107600.
[142] T. Xie, I. Pentina, T. Hancock, Friend, mentor, lover: does chatbot engagement lead to psychological dependence? J. Serv. Manag. 34 (4) (2023) 806–828.
[143] N. Ameen, J.H. Cheah, S. Kumar, It's all part of the customer journey: the impact of augmented reality, chatbots, and social media on the body image and self-esteem of Generation Z female consumers, Psychol. Market. 39 (11) (2022) 2110–2129.
[144] J. Lee, H. Kim, How to survive in advertisement flooding: the effects of schema–product congruity and attribute relevance on advertisement attitude, J. Consum. Behav. 21 (2) (2022) 214–230.

