To cite this article: Thomas M. Brill, Laura Munoz & Richard J. Miller (2019): Siri, Alexa, and other
digital assistants: a study of customer satisfaction with artificial intelligence applications, Journal
of Marketing Management, DOI: 10.1080/0267257X.2019.1687571
To link to this article: https://doi.org/10.1080/0267257X.2019.1687571
Siri, Alexa, and Other Digital Assistants: A Study of Customer Satisfaction
with Artificial Intelligence Applications
aGupta College of Business, University of Dallas, Irving, Texas, USA; bGupta College of Business, University of Dallas, Irving, Texas, USA; cGupta College of Business, University of Dallas, Irving, Texas, USA
Thomas M. Brill, DBA, is an adjunct professor at the University of Dallas. He is also a marketing
consultant with extensive industry experience in the areas of marketing strategy, marketing
analytics, artificial intelligence and digital marketing.
Laura Munoz, Ph.D., is an associate professor of marketing at the University of Dallas. She
publishes in the areas of professional selling, entrepreneurship, and pedagogical issues.
Siri, Alexa, and Other Digital Assistants: A Study of Customer Satisfaction
with Artificial Intelligence Applications
Abstract
Digital assistants (e.g., Apple’s Siri, Amazon’s Alexa, Google’s Google Assistant, and
Microsoft’s Cortana) are highly complex and advanced artificial intelligence (AI) based
technologies. Individuals can use digital assistants to perform basic personal task management
functions as well as for more advanced capabilities and connected-device integrations. Yet,
the functional and topical use of a digital assistant tends to vary by individual. Thus, this study
reflects the contextual experiences of the respondents. At present, there is little empirical
evidence of customer satisfaction with digital assistants. PLS-SEM was used to analyze 244
survey responses to examine this research gap. While the majority of the sample consisted of
users of Siri (72%), other digital assistants (28%) were also identified and included. The
results confirmed that expectations and confirmation of expectations have a positive and significant influence on customer satisfaction with digital assistants. From a managerial perspective, this study provides evidence that customer expectations are being satisfied
through the digital assistant interaction experience. Thus, as firms begin to accelerate the
integration of digital assistants into their operations, firms must help customers properly
define what to expect from the firm’s interactive experience. After all, consumers have
Introduction
Siri, Alexa, and other digital assistants are rapidly being embraced by consumers. These are dynamic systems possessing the ability to learn customer preferences (Kumar, Dixit, Javalgi, &
Dass, 2016). These systems “use inputs such as the user’s voice, vision (images), and contextual
recommendations, and performing actions” (Hauswald et al., 2015, p. 223). Juniper Research estimates that approximately 3.25 billion digital assistants are in use worldwide today and projects approximately 8 billion to be in use by 2023, a compound annual growth rate (CAGR) of 25.4% (Moar, 2019, February).
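As a rough sanity check on these figures (a minimal sketch, assuming a 2019 base year and a 2023 horizon; the numbers are those quoted above, not additional data):

# Illustrative check of the installed-base growth arithmetic cited from Juniper Research.
base_2019 = 3.25e9        # digital assistants in use today (2019), per the text
cagr = 0.254              # 25.4% compound annual growth rate
years = 4                 # 2019 -> 2023
projected_2023 = base_2019 * (1 + cagr) ** years
print(f"{projected_2023 / 1e9:.2f} billion")   # ~8.0 billion, consistent with the cited projection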
Canbek and Mutlu (2016) have identified that consumers are rapidly embracing digital
assistants, including those offered by the current market leaders (e.g., Apple’s Siri, Amazon’s
Alexa, Google’s Google Assistant, and Microsoft’s Cortana). Digital assistants (a.k.a., intelligent
virtual assistants or intelligent personal assistants) offer a diverse range of benefits to consumers, such as personalized information content that is delivered to the user on a real-time basis, with a high degree of reliability and
convenience (Baier, Rese, & Röglinger, 2018; Wise, VanBoskirk, & Liu, 2016). Even though
digital assistants are highly complex and advanced technologies, the functional and topical use of
a digital assistant tends to vary by individual (Kumar et al., 2016). Individuals can use digital
assistants to perform basic personal task management functions (e.g., to manage calendars and
appointments, send text messages, make phone calls, and obtain driving directions) as well as
for more advanced capabilities and connected-device integrations (e.g., home automation,
“Fueled by interacting with the likes of Siri and Alexa, it’s no surprise that Gartner
predicts that by 2020, customers will manage 85% of their relationship with an enterprise without
interacting with a human” (Peart, 2018). Accordingly, businesses are embracing digital assistants
either as standalone units or integrating them into enterprise platforms. Many times, these
platform integrations are referred to as chatbots or conversational agents (Moar, 2019, February).
This technology is also proving to be a disruptive platform application offering substantive value
to companies (Kumar et al., 2016). Coupling digital assistants with other AI technologies offers
the potential to transform companies by creating more efficient business processes, automating
complex tasks, and improving the customer service experience (Koehler, 2016).
Businesses have begun integrating this technology into their operations with the
expectation of achieving significant productivity gains (Baier et al., 2018; Bittner, Oeste-Reiß, &
management, and banking automation. In addition, this technology is being integrated into
collaborative work practices, such as for interactions between conversational agents and humans
for call center agent support. Considering the significant investment firms are making in digital
assistant technology and the re-design of core production and customer service processes for this
technology, corroboration is needed that customers indeed trust and are satisfied with this
technology (Mithas & Rust, 2016; Pappas, 2016). Therefore, it is imperative to study the degree
to which there is an alignment of digital assistant user expectations with the perceptions of the digital assistant's performance.
Recent advancements in machine learning and deep learning capabilities associated with
digital assistants allow for data-driven discoveries involving previously hidden patterns,
correlations, and other revealing personal interactivity insights (Alpaydin, 2014). For some users,
concern exists that the data may be misused or abused (Bhatt, 2014). This concern is among the
topics included in the White House and the President’s Council of Advisors on Science and
Technology (PCAST) report encouraging research on the implications of big data on privacy
(White House, 2014, May). While the technological benefits to consumers are suggested to be
manifold, businesses face a substantial risk of abandoned investment or brand injury if consumers
lack trust in the ability of either the firm or the technology to protect the privacy of personal
information (Pappas, 2016). History has demonstrated that people can become emotionally
dependent on technology (Karapanos, 2013). While firms strive to build solutions that
demonstrate wide-scale adoption, ongoing loyalty, and, in some sense, a user dependency upon
their technology platform and ecosystem, negative outcomes (c.f. Facebook data privacy
scandals) can result from undervaluing trust and privacy considerations. Thus, it is also
imperative to study to what extent the cognitive considerations associated with information
privacy concerns and perceived trust offer a moderating influence on the expectations confirmation theory relationships.
“The topic of customer satisfaction has always been of great interest to marketing
practitioners and academics” (Kumar, 2016, p. 108). Further, Rego, Morgan, and Fornell (2013,
p. 2) state that “customer satisfaction has long been a central construct in the consumer behavior,
marketing strategy, and theoretical and empirical modeling research streams in marketing.” Yet,
there is little empirical evidence of customer satisfaction with digital assistants. The dearth of
research on this topic presents opportunities to provide clarity and insights to firms as they pursue
ongoing programs involving digital assistants. To address this business question, this study used expectation-confirmation theory (ECT) as its foundational theory to better understand how user satisfaction judgments are formed.
While the majority of the sample consisted of users of Siri (72%), other digital assistants
(28%) were also identified and included. This study advances our understanding of the theoretical foundations for customer satisfaction as related to digital assistants. This study affirms the role of the expectations confirmation process in the customer
satisfaction evaluation. Further, it provides insights that allow managers to understand the drivers
and the degree of customer satisfaction with digital assistants. This study also provides
recommendations as to where management should focus its priorities in order to assist users in
In the remaining sections, we review the literature and the theoretical foundation
supporting the study and develop the research hypotheses. Next, we describe the research methodology and present the results. Finally, we discuss the implications for theory and practice, limitations, and directions for further research.
Theoretical framework
Satisfaction is both a central concept and a topic of extensive research interest throughout
the fields of psychology, marketing, management, and information systems (Kumar, 2016; Mouri,
Bindroo, & Ganesh, 2015; Oliver, 1977; Tam, 2004). Despite the many definitions of customer
satisfaction in extant literature, ECT continues to be a primary theoretical lens used for defining
customer satisfaction (Caruana, La Rocca, & Snehota, 2016). Its structure emulates the dynamic nature of the satisfaction judgment.
In describing ECT, Morgeson (2013) suggested that "satisfaction judgments are formed
through a cognitive process relating prior expectations to perceived performance and the
aligns with assertions that the ECT framework evaluates satisfaction through two processes: the
through the comparison process (Oliver, Balakrishnan, & Barry, 1994). Thus, this theory
resulting from expectations and perceived performance. This rational behavior is mediated
through the positive or negative confirmation between expectations and perceived performance
(Bhattacherjee, 2001). These relationships are reflected in the core ECT model, which comprises the three core antecedent constructs for customer satisfaction: expectations, perceived performance, and confirmation of expectations.
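The core ECT logic can be summarised schematically as follows (a simplified linear rendering for exposition only; the study itself estimates these relationships with PLS-SEM rather than through this exact equation):

\[
\text{Confirmation} = f\!\left(\text{Perceived performance} - \text{Expectations}\right), \qquad
\text{Satisfaction} = \beta_1\,\text{Expectations} + \beta_2\,\text{Perceived performance} + \beta_3\,\text{Confirmation} + \varepsilon .
\]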
Customer satisfaction
and provides a level of competitive advantage (Akbari, Salehi, & Samadi, 2015; Minta, 2018).
widely diverse dimensions and applications (e.g., offering value, quality, and loyalty to
of customer satisfaction (Giese, 2000). Zhao, Lu, Zhang, and Chau (2012) posited that the
definitional differences can be attributed to the dynamic, complex, and context-specific nature of
the construct. This study adopts the definition which is espoused by Oliver (2014): “satisfaction is
the consumer's fulfillment response. It is a judgment that a product/service feature, or the product or service itself, provided (or is providing) a pleasurable level of consumption-related fulfillment, including levels of under- or over-fulfillment” (p. 8). Similarly, if the level of fulfillment was
judged by the consumer to be unpleasant, then the individual would be dissatisfied due to the
dissonance between expected levels of satisfaction and the level of satisfaction experienced
(Hasan & Nasreen, 2014). This definition represents a consumer's summary or overall fulfillment response.
Pleasurable fulfillment response implies that pleasure is either increased or the amount of
pain is reduced. However, there is no assertion that this increase in pleasurable response matches
the need of the individual. Over-fulfillment represents a measure of unexpected pleasurable
outcome would also be dissatisfaction (Oliver, 2014; Oliver, Rust, & Varki, 1997).
Expectations
Expectations reflect what consumers believe they should or will receive through the performance of a product or service (Lankton,
McKnight, & Thatcher, 2014; Oliver, 1980, 1981). This judgment is prior to the comparison
of performance (Oliver & DeSarbo, 1988). Oliver (2014) defined expectations as the
consumer's goal, usually in the form of a valenced reaction such as good or bad
performance" (p. 22). These expectations represent the probability of occurrence (i.e.,
reflective of the individual's desires and needs) and the evaluation of the occurrence (e.g.,
reference against which performance judgments can be made (Lankton et al., 2014).
construct (Kim, Ferrin, & Rao, 2009). Expectations are context-specific and inherently contain a
level of abstraction. Some individuals focus on what they expect to receive in the form of attribute
performance; others are more concerned about receiving macro performance outcomes such as
value and quality. Both scenarios involve predicted outcomes and anticipated satisfaction. Yet,
the differences in anticipated satisfaction highlight the challenges with defining expectations
(Oliver, 2014). For example, if the studies are limited to product or service attributes, then the
researcher risks omitting the intensity of the consumer's level of desire. Prior literature acknowledged
that consumers have different levels of desire through the establishment of expectation zones
(Oliver, 1980; Parasuraman, Berry, & Zeithaml, 1991). These zones reflect the inherent level of tolerance, bounded at the upper end by “ideal” and at the lower end by “minimally acceptable” level of
expectations (Teas & DeCarlo, 2004). In general, the upper boundary is associated with
excellence or superiority (Oliver, 2014). If expectation levels are established too high, then the
probability of performance being less than the floor of the zone of tolerance is high. Such a
scenario is likely to result in negative disconfirmation. Similarly, if the levels of expectation are
established too low, then the probability of performance being greater than the ceiling of the zone
of tolerance is high. Such a scenario is likely to result in positive disconfirmation (Teas &
DeCarlo, 2004).
Expectations are generally not static but represent a dynamic compilation of experience,
knowledge, and desires. Initial expectations can be updated during consumption as a component of the comparison judgments against performance. This updated expectation then becomes the reference for subsequent judgments.
Perceived performance
Performance has been demonstrated through two types: objective performance and
perceived performance (Venkatesh, Morris, Davis, & Davis, 2003). Objective performance
represents the actual performance level of the product or service. Perceived performance
represents a subjective assessment. It refers to the individual's cognitive perceptions about the
performance of a product’s attributes, levels of attributes, or outcomes (Spreng & Olshavsky,
1993). Perceived performance has generally been used as a reference point against which
Since most individuals do not have access to objective performance information for digital
assistants, this study is based on perceived performance. As such, this study will adopt the Spreng
and Olshavsky (1993) definition that performance is an individual's cognitive perception about the performance of a product's attributes, levels of attributes, or outcomes.
Confirmation of expectations
There are two types of confirmation: objective confirmation and subjective confirmation. Objective confirmation represents the discrepancy between expectations and objective performance (Olshavsky & Miller, 1972). Subjective confirmation represents the discrepancy
between expectations and perceived performance (Yi, 1990). This study will use perceived performance instead of objective performance, as the objective performance results for digital assistants are not
readily available to most users. Consistent with this direction, this study adopts the definition
offered by Jiang and Klein (2009) that confirmation of expectations is the "difference between a
Dissecting this definition reveals three separate elements for confirmation: the event, the
probability of occurrence, and the desirability or undesirability of the performance event (Oliver,
2014). Confirmation of expectations occurs when low and high probability performance standards
do or do not occur, as expected. Positive disconfirmation occurs when low probability desirable
‘high performance’ occurs and/or high probability undesirable ‘low performance’ does not occur.
As depicted in the zone of tolerance in Figure 1, this experience is at the top of the range in the
level of expectations. Whereas, negative disconfirmation is at the bottom of the range in the level
of expectations. Negative disconfirmation occurs when high probability desirable ‘high
performance’ does not occur and/or low probability undesirable ‘low performance’ occurs
(Oliver, 2014).
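The comparison logic described above can be illustrated with a small sketch (the function and tolerance band are hypothetical; the study measures subjective confirmation with survey scales rather than computing it this way):

# Classify an experience relative to the expectation reference point on a shared rating scale.
def confirmation_judgment(expectation: float, performance: float, tolerance: float = 0.5) -> str:
    gap = performance - expectation
    if gap > tolerance:
        return "positive disconfirmation"   # better than expected (top of the range)
    if gap < -tolerance:
        return "negative disconfirmation"   # worse than expected (bottom of the range)
    return "confirmation"                   # performance roughly matches expectations

print(confirmation_judgment(expectation=6.0, performance=4.5))  # negative disconfirmation
print(confirmation_judgment(expectation=4.0, performance=6.0))  # positive disconfirmation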
Perceived trust
Trust has been conceptualized in many ways, but the underlying concept of trust involves a willingness to have the confidence to overcome perceptions of uncertainty and risk in order to engage in “trust-related behaviors” (McKnight et al., 2002). Perceived trust assumes that other people will respond in a predictable way (Luhmann &
Schorr, 1979) as described in the trust-building process. This study will adopt a definition of
perceived trust based on the customer's subjective trust beliefs established in the trust-building
process. It is focused on the customer's perceptions of a specific vendor or product attributes such as competence, benevolence, and integrity.
Competence
Competence is centered around the belief in the trustee’s ability to do what the trustor
expects (Venkatesh, Thong, Chan, Hu, & Brown, 2011). Within the context of AI technologies
and digital assistants, the trustee is expected to fulfill the trustor’s needs for reliable and
personalized information content on a real-time basis. Further, the trustee has the appropriate
Benevolence
Benevolence reflects the belief that the trustee will act in the trustor’s interests rather than
making such interests subservient to those of the trustee (Venkatesh et al., 2011). For AI
technologies and digital assistants, the trustee is expected to be accountable to the trustor. While
providing new knowledge and innovation, care should be exercised to guard against unnecessary
Integrity
Integrity focuses on the belief that the trustee will be honest and keep its promise
(Venkatesh et al., 2011). Within the context of AI technologies and digital assistants, the trustee is
expected to appropriately secure any personal information of the trustor and only use this
information in a manner consistent with the agreed-on terms of service. Further, the trustee will
follow ethically sound principles and will allow users to also have control over decisions
involving their information unless the user defers to the decisions offered through machine
intelligence.
While the potential of digital assistants is promising, the road to adoption of this
technology is not without challenge. It must be recognized that this technology creates a rich
digital data footprint which contains a plethora of personal and behavioral data as users integrate
digital assistants into their everyday life (Belanger & Xu, 2015). Recent advancements in machine
learning allow for data-driven discoveries of previously hidden patterns, correlations, and other
revealing personal insights (Belanger & Xu, 2015). For some users, concern exists that the data
may be misused or abused (Bhatt, 2014). Smith, Milberg, and Burke (1996, p. 169) described
use of personal information; external unauthorized secondary use of personal information; errors
Individuals are increasingly challenged with managing the complex trade-offs of
technology innovation with risks of information privacy concerns (Acquisti, Brandimarte, &
Loewenstein, 2015). AI technologies and digital assistants are not immune to these concerns
(Belanger & Xu, 2015). These concerns have been directly linked to cognitive decision-making
by customers in terms of vendor selection and technology usage (Zimmer, Arsal, Al-Marzouq, &
Grover, 2010). Therefore, consumers with higher privacy concerns will perceive greater risks when interacting with digital assistants.
Hypotheses
The research model, as shown in Figure 2, illustrates the study's hypotheses directly
assistants (confirmation), and customer satisfaction with digital assistants (customer satisfaction)
from the core ECT model. Additional constructs were added for perceived trust and information privacy concerns. Expectations have been directly linked to customer satisfaction (Oliver et al., 1997) through an expectations assimilation effect. This effect occurs if the
individual views that there is a disparity between performance and expectations for a product or
service. If this disparity is small, then the individual's perceptions of performance may be
assimilated toward one’s expectations to reduce dissonance (Olshavsky & Miller, 1972). This
expectations assimilation effect is more likely to occur when expectations are stronger and more
salient than performance information (Einhorn & Hogarth, 1978). For more established products
and services with which users have an extensive experience history, the expectations should
increase in both accuracy and confidence. Thus, this effect is based on a strong and stable
knowledge base which generally is consistent with the product's perceived performance (Johnson,
Expectations and perceived performance are among the core constructs of the ECT (Oliver
et al., 1994). Perceived performance represents an individual's subjective assessment about the
1993). The ECT model establishes a positive relationship between these constructs as
expectations establish a point of reference or norm against which performance judgments can be
made (Lankton et al., 2014; Oliver, 2014). Based on these arguments, we propose:
H2. The expectations of digital assistants will be positively related to the perceived performance of digital assistants.
The ECT framework posits that expectations and perceived performance are antecedents
to confirmation (Spreng & Page, 2003). Confirmation is the consumer’s judgment of the
Klein, 2009). When performance exceeds expectations, it offers a positive effect on confirmation
(Jack & Powers, 2013). Conversely, when performance is worse than expectations, it offers a
negative effect on confirmation (Bhattacherjee, 2001; Oliver, 1980). Based on these arguments,
we propose:
H3. The perceived performance of digital assistants will be positively related to the confirmation of expectations of digital assistants.
Per ECT, expectations influence confirmation through the confirmation judgment (Oliver
et al., 1994). This influence reflects a positive relationship between expectations and confirmation
(Lankton & McKnight, 2012; Venkatesh et al., 2011) and is referred to as the halo effect (Oliver
et al., 1997) or confirmation bias (Nickerson, 1998). The halo effect occurs when users “see what
they want to see”. With it, users with high expectations will only see high outcomes, which are
also better than expected outcomes. Users with low expectations will only see low outcomes,
which are also worse than expected outcomes, thus creating a positive relationship between expectations and confirmation of expectations. Based on these arguments, we propose:
H4. The expectations of digital assistants will be positively related to the confirmation of expectations of digital assistants.
The ECT framework posits that perceived performance is among the antecedents to the confirmation comparison (Spreng & Page, 2003). However, a positive direct link between perceived
performance and customer satisfaction has also been identified (Anderson & Sullivan,
1993; Churchill Jr. & Surprenant, 1982). This direct linkage reflects the performance
assimilation effect (LaTour & Peat, 1979). Since perceived performance involves the
their expectation anchor rather than the performance perception. Thus, perceived
viewed through the lens of new product experience, this performance assimilation effect
may be reflective of new learnings or insights gained following the use of the product (Tse
1980). The contrast effect was originally identified in social psychology and states that people
tend to exaggerate the positive disconfirmation or negative disconfirmation judgment (Tse &
Individual beliefs provide the foundation for a customer’s perception of trust. Because this
foundation is not based on hard facts, trust can be fragile and subjective (Yannopoulou, Koronis,
& Elliott, 2011). This trust enables individuals to overcome perceptions of uncertainty and risk
and engage in "trust-related behaviors" with web-enabled technologies (McKnight et al., 2002). If
users perceive a high level of trust, then the associated risk perceptions would be reduced (Kim,
H7. The perceived trust will positively moderate the relationship between confirmation of expectations and customer satisfaction with digital assistants.
Many users worry about the vulnerability of their personal data and the possibility of it being compromised or
misused (Acquisti et al., 2015). Individuals are increasingly challenged with managing the
complex trade-offs of technology innovation with the risks of information privacy concerns
(Acquisti, Taylor, & Wagman, 2016). Various theories and information privacy topics are
routinely cited in nearly all recent marketing research involving user behaviors of technology,
social media and/or web-based applications (Rafiq, Fulford, & Lu, 2013). Based on these
arguments, we propose:
H8. Information privacy concerns will negatively moderate the relationship between confirmation of expectations and customer satisfaction with digital assistants.
Research methodology
A random, self-selection sampling of adults (i.e., age 18 and older) in the United States
was solicited through email and social media platforms (e.g., Facebook and LinkedIn). Recipients
were encouraged to share the survey solicitation across their respective networks. This approach
Digital assistant use cases assessing features, functionality or applications were not
included for evaluation by respondents when completing the survey. Additionally, respondents
were not asked to identify their digital assistant(s) and other demographic information until after
they had completed the survey. Thus, respondents used their own contextual experiences for
completing the survey. Participants who completed the survey and provided a valid contact email
Two-hundred and sixty responses were received, but sixteen cases were disqualified due
to incomplete responses. Thus, the sample contains the responses of 244 participants of which
49% (121) were men and 51% (123) were women. Since the general availability of overall digital
assistant user statistics is fragmented, the age distribution of smartphone users was used as a
proxy (Moar, 2019, February) to comparatively examine the age distribution of the study sample.
Statista.com (2019) reports that approximately 58% of smartphone users were at least 55 years old in 2018. This study sample identified that 67% of the survey responses reported being 55 years
of age or younger. Thus, we believe that our sample is generally comparable to the age distribution of smartphone users.
Respondents indicated that their usage of digital assistants was largely concentrated on the
top three brands: Apple’s Siri (72%), Amazon’s Alexa (13%), and Google’s Google Assistant
(5%). Other responses represented only 10%. Other sample characteristics for ethnicity, age,
income, education, and experience with digital assistants are displayed in Table 1.
Measures
The study results are based upon a cross-sectional survey-based field study. All constructs
were evaluated using existing measures. See Appendix A for all scale items. The measurement
items for each construct in the model were based on a 7-point Likert type scale except for
customer satisfaction (which used a 7-point semantic differential scale). All items were adapted
from the extant literature to maximize the validity and reliability of the measurement model.
Customer satisfaction used the scale from Spreng, MacKenzie, and Olshavsky (1996).
Expectations and confirmation of expectations were adapted from the perceived usefulness scales (Davis, 1989; Davis, Bagozzi, & Warshaw, 1989). Effectively, expectations
measured the pre-use (current) respondent mindset of expectations of digital assistants while
confirmation of expectations assessed the post-use expectations of digital assistants. This dynamic
interaction assessment then becomes the basis of expectations for the subsequent use of digital
assistants.
Perceived trust used the cognitive and emotional trust scale (Xiao & Benbasat, 2002). In
alignment with the recommendation of McKnight et al. (2002), Benbasat and Wang (2005)
presented perceived trust as a reflective second-order construct. It was comprised of the three first-order constructs of competence, benevolence, and integrity. Information privacy concerns is also presented as a reflective second-order construct (Malhotra, Kim, & Agarwal,
2004). It used the global information privacy concerns scale (Malhotra et al., 2004; Smith et al.,
1996) and the perceived privacy protection scale (Chen, Han, & Yu, 1996; Kim, Ferrin, & Rao,
2008).
Control variables
Given its relatively early lifecycle, little information is available identifying the impact of control variables on digital assistant research. We assume, however, that certain control variables used in website research would be similarly
relevant to this study. The categorical variables of gender, age, education, income, and experience
have previously been shown to influence relationships in ECT and other acceptance research
Data analysis
Using the guidelines offered by Hair, Hult, Ringle, and Sarstedt (2017), PLS-SEM
(SmartPLS 3.2.6) was selected to analyze the 244 survey responses. The selection criteria
reflected the facts that the research goal focused on predicting the key target construct of
customer satisfaction; the study used a formative model; the structural model is complex; and the data are not assumed to be normally distributed.
The model estimation used the default algorithm settings (i.e., path weighting scheme, a
maximum of 300 iterations, factor weighting scheme, a stop criterion of 0.0000001 (or 1 × 10⁻⁷),
and equal indicator weights) recommended (Dijkstra & Henseler, 2015; Lohmöller, 2013). It
required eight iterations of the algorithm, far less than the maximum number of 300 iterations
(Ringle, Wende, & Becker, 2015). All 244 cases within the current survey data were sourced to
draw 5000 random subsamples (Sarstedt, Henseler, & Ringle, 2011) for the consistent PLS
bootstrapping analysis.
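The resampling step can be sketched as follows (illustrative only: a simple correlation on synthetic data stands in for a PLSc path estimate, and a percentile interval stands in for the BCa interval produced by SmartPLS):

import numpy as np

rng = np.random.default_rng(42)
n_cases, n_boot = 244, 5000                       # all cases resampled 5,000 times, as in the text
x = rng.normal(size=n_cases)                      # placeholder scores for two latent variables
y = 0.6 * x + rng.normal(scale=0.8, size=n_cases)

estimates = np.empty(n_boot)
for b in range(n_boot):
    idx = rng.integers(0, n_cases, size=n_cases)  # draw a subsample with replacement
    estimates[b] = np.corrcoef(x[idx], y[idx])[0, 1]

lo, hi = np.percentile(estimates, [2.5, 97.5])
print(f"bootstrap 95% interval: [{lo:.3f}, {hi:.3f}]")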
Results
Measurement model
The measurement model was first examined to assess its internal consistency reliability,
indicator reliability, convergent validity, and discriminant validity (Hair, Ringle, & Sarstedt,
2011). Internal consistency reliability was confirmed as the Cronbach’s alpha and the composite
reliability scores for all reflective indicators were above the recommended score of .70 (Bagozzi
& Yi, 1988; Bernstein & Nunnally, 1994) and lower than the upper limit of .95 (Diamantopoulos,
Sarstedt, Fuchs, Wilczynski, & Kaiser, 2012; Hair et al., 2017). The lone construct (Integrity)
falling below the ideal floor threshold was deemed to have an acceptable score since it exceeds
.60 (Hair et al., 2017). Convergent validity was confirmed as 21 of the 26 reflective indicators
have outer loadings above the threshold level of .70 (Hulland, 1999; Nunnally, 1978). Each of the
remaining 5 indicators was lower than the .70 threshold, but they were sufficiently large enough
for continued use as their removal would not have increased composite reliability (Hair et al.,
2011). Each indicator was confirmed to be statistically significant (p < .001). In addition, each of
the constructs had AVE scores equal to or greater than the .50 threshold (Bagozzi & Yi, 1988;
Hair et al., 2017) or acceptable composite validity (Bagozzi & Yi, 1988; Chin, 1998). These results are summarized in Table 2.
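For reference, the reliability statistics reported in Table 2 follow the standard formulas, sketched here with hypothetical standardized loadings (the actual loadings are in the table):

import numpy as np

loadings = np.array([0.82, 0.78, 0.85, 0.74])     # illustrative reflective indicator loadings
error_var = 1 - loadings**2                        # indicator error variances

composite_reliability = loadings.sum()**2 / (loadings.sum()**2 + error_var.sum())
ave = np.mean(loadings**2)                         # average variance extracted

print(f"composite reliability: {composite_reliability:.3f}")  # compare against the .70/.95 bounds
print(f"AVE: {ave:.3f}")                                       # compare against the .50 threshold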
Using the repeated indicator approach, the formative measured constructs were confirmed
through the significance and relevance of indicator weights, convergent validity, and collinearity
(Sarstedt, Ringle, Smith, Reams, & Hair, 2014). The assessment results confirmed that the
formative indicator weights were approximately equal, had statistical significance at the α = .05 level, and were substantially above zero, indicating an acceptable construct relationship (Hair et al., 2017).
[Table 3 near here]
A discriminant validity issue was identified involving the loadings of perceived performance on expectations. The cross-loadings and the Fornell and Larcker (1981)
criterion, shown in Table 4, were assessed for insights and possible remedies for this issue.
However, since the Fornell and Larcker (1981) criterion and cross-loadings are insufficiently
sensitive to detect problems with discriminant validity, the heterotrait-monotrait ratio (HTMT)
was used (Henseler, Ringle, & Sarstedt, 2015). Essentially, “the HTMT approach is an estimate of
what the true correlation between two constructs would be if they were perfectly measured (i.e., if they were perfectly reliable)”.
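A minimal sketch of the HTMT computation for two constructs is shown below (indicator assignments and data are invented for illustration; SmartPLS reports HTMT directly):

import numpy as np

def htmt(corr, items_a, items_b):
    # mean heterotrait-heteromethod correlation divided by the geometric mean of the
    # average monotrait-heteromethod correlations of the two indicator blocks
    hetero = corr[np.ix_(items_a, items_b)].mean()
    def monotrait_mean(items):
        block = corr[np.ix_(items, items)]
        return block[~np.eye(len(items), dtype=bool)].mean()
    return hetero / np.sqrt(monotrait_mean(items_a) * monotrait_mean(items_b))

rng = np.random.default_rng(0)
n = 244
f1 = rng.normal(size=n)
f2 = 0.6 * f1 + rng.normal(scale=0.8, size=n)           # two correlated latent constructs
items = np.column_stack([0.8 * f1[:, None] + rng.normal(scale=0.6, size=(n, 3)),
                         0.8 * f2[:, None] + rng.normal(scale=0.6, size=(n, 3))])
corr = np.corrcoef(items, rowvar=False)
print(f"HTMT: {htmt(corr, [0, 1, 2], [3, 4, 5]):.3f}")  # compare against the .85 threshold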
Unfortunately, this issue with perceived performance could not be remedied. The decision
as to which construct to remove from the study was based upon comparative explanatory power
model analyses. The first analysis retained the expectations construct and removed the perceived
performance construct. It yielded a higher explanatory power result than that of the second
analysis. The second analysis retained the perceived performance construct and removed the
expectations construct. Thus, the perceived performance construct was removed from the study to
enable validation of discriminant validity. By removing this construct, hypotheses H2, H3, and
H5 could no longer be assessed. Following these changes, discriminant validity was confirmed,
see Table 5, as each of the remaining reflectively measured constructs was determined as being
significantly less than a conservative threshold value of .85 (Henseler, Hubona, & Ray, 2016).
Additionally, a review of the upper bounds of the 95% confidence interval similarly confirmed
Prior to assessing the structural model, assessments were conducted for multicollinearity,
goodness-of-fit, endogeneity bias, and common method variance. For multicollinearity, the formative
indicator variance inflation factors (VIFs) were calculated for all latent variables and confirmed to
be less than the 5.0 threshold (Kock, 2015). The goodness-of-fit was also determined as being
adequate as the bootstrapped standardized root mean square residual (SRMR) value of .033 was
well under the most conservative threshold of .05 (Byrne, 2013). Endogeneity bias was also
reviewed via the control variable approach (Hult et al., 2018). None of the control variables
demonstrated f2 values exceeding 0.006. Thus, any possible endogeneity bias effects were deemed
to be insignificant.
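The collinearity check can be sketched as follows (hypothetical data; VIF = 1 / (1 − R²) from regressing each indicator on the remaining indicators):

import numpy as np

def vif(X):
    X = (X - X.mean(axis=0)) / X.std(axis=0)             # standardize columns
    out = []
    for j in range(X.shape[1]):
        y, others = X[:, j], np.delete(X, j, axis=1)
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        r2 = 1 - (y - others @ beta).var() / y.var()
        out.append(1 / (1 - r2))
    return np.array(out)

rng = np.random.default_rng(1)
X = rng.normal(size=(244, 4))
X[:, 3] += 0.4 * X[:, 0]                                  # introduce mild collinearity
print(np.round(vif(X), 2))                                # flag any value above the 5.0 threshold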
Common method variance (CMV) was assessed as the data came from single respondents in a
one-time survey and might influence relationships within the PLS path model. For this study, we
used a common method construct approach for this assessment (Podsakoff, MacKenzie, Lee, &
Podsakoff, 2003). This assessment involved comparing PLS-SEM results with the common
construct to the results without the common construct. Unique iterations were performed for each
of the antecedent constructs. The results showed no significant differences in the estimated
structural model path coefficients. Such results suggest that common method bias is not an issue in this study.
Structural model
The structural model assessment substantiated the reliability and validity of the PLS path
modeling results using the PLSc bootstrapping approach with the BCa advanced setting (Hair et
al., 2017). The empirical analysis provided support for most of the hypothesized cause-effect
relationships depicted in the model. The presented theoretical concept explained 79.7% of user
satisfaction with digital assistants. For marketing studies, R² values of .75, .50, and .25 can be described as substantial, moderate, and weak, respectively (Hair et al., 2011). Given these thresholds, the R² value of customer satisfaction was assessed as being substantial in predictive power and statistically significant. The results for each hypothesis are displayed in
Table 6.
Additional assessments were conducted for effect size (f² and q²) and predictive relevance (Q²). Expectations had a positive and significant effect on customer satisfaction (β = .291, p = .008), so H1 is supported. However, this path relationship only demonstrated a small f² effect size (0.110) and was considered to have weak predictive relevance (q² = .069). Expectations also had a positive and significant effect on the confirmation of expectations, so H4 is supported. This path relationship demonstrated a large f² effect size (2.177) with strong predictive relevance (q² = .908). Confirmation of expectations had a positive and significant effect on customer satisfaction, supporting H6. This path relationship demonstrated a medium f² effect size (0.276) with weak predictive relevance (q² = .100). The
direct effect of perceived trust on customer satisfaction was evaluated as being a positive and significant effect (β = .193, p = .000). This path relationship demonstrated a small f² effect size (0.091) with weak predictive relevance (q² = .053). The moderating effect of perceived trust on the relationship between confirmation of expectations and customer satisfaction was assessed as being a positive but non-significant effect (β = .049, p = .171). The f² effect size (0.091) was determined to be small with negligible predictive relevance (q² = .053). Thus, the moderating effect results indicate that H7 was not supported. The direct effect of information privacy concerns on customer satisfaction was evaluated as being a negative and significant effect (β = -.114, p = .005) on the outcome variable. This path relationship demonstrated a small f² effect size (0.058) with weak predictive relevance (q² = .017). The moderating effect of information privacy concerns on the relationship between confirmation of expectations and customer satisfaction was assessed as being a negative and non-significant effect (β = -.056, p = .053). The f² effect size
(0.024) was determined to be small with negligible predictive relevance (q² = .000). Thus, the moderating effect results indicate that H8 was not supported.
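For reference, the effect sizes quoted above follow the conventional definitions (Hair et al., 2017), where “included/excluded” refers to estimating the model with and without the predictor in question:

\[
f^2 = \frac{R^2_{\text{included}} - R^2_{\text{excluded}}}{1 - R^2_{\text{included}}}, \qquad
q^2 = \frac{Q^2_{\text{included}} - Q^2_{\text{excluded}}}{1 - Q^2_{\text{included}}},
\]

with values of roughly .02, .15, and .35 read as small, medium, and large effects.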
The importance-performance map analysis (IPMA) reflects the relative importance of constructs in explaining other constructs in the structural model, based on the total effect of the relationship between these two constructs. These analysis results identify the
determinants with a relatively high importance and relatively low performance for a particular
endogenous construct (Hock, Ringle, & Sarstedt, 2010; Ringle & Sarstedt, 2016). The importance
score reflects the unstandardized total effects (i.e., direct and indirect effect) for each predictor
variable (Slack, 1994). The performance score was calculated using the latent variable scores for
the unstandardized outer weights for each construct upon the target construct of customer
satisfaction. These scores were then rescaled on a 0 - 100 performance score for that construct
(Hock et al., 2010; Ringle & Sarstedt, 2016). Since these paths have already been found to be
significant within this study, “the goal is to identify predecessors that have a relatively high
importance for the target construct (i.e. those that have a strong total effect), but also have a
relatively low performance (i.e. low average latent variable scores)” (Ringle & Sarstedt, 2016, p.
1866). The importance and performance scores are more easily interpreted once displayed on an importance-performance map (Figure 3).
Axes representing the mean importance score (0.312) and the mean performance score
(60.787) for all constructs were also displayed to create a four-quadrant grid. Based largely on the
grid interpretation taxonomy of Martilla and James (1977), constructs in the lower right area
reside in Quadrant I (i.e. above average importance and below-average performance) and are of
highest priority to achieve improvement. This area is followed by the upper right (Quadrant II),
lower left (Quadrant III) and, finally, the upper left area (Quadrant IV). Using this approach,
IPMA results provide managerial insights by mapping the latent variable values by importance
and performance dimensions (Hock et al., 2010; Ringle & Sarstedt, 2016). The relative scores for
Insights can also be achieved by splitting the sample into two meaningful sub-groups and
comparing the IPMA results between the two groups. The calculation of importance and
performance scores for each sub-group is the same as described above. The analysis goal is also
similar to that described above. However, insights and recommendations sometimes emerge for
one sub-group as compared to the other. The sub-group results are displayed in Table 8. Results
discussion for each of the variables and some of the relevant sub-groups follows.
Confirmation of expectations (Confirm)
Confirmation of expectations is located in Quadrant I, and is a ‘high priority’ for managerial action as it requires immediate attention for
improvement. In comparison with the other constructs, the performance score of confirm (59.053)
is slightly below average while the importance score (0.413) is above average. Based on these
scores, the overall performance score can increase by 0.413 for each unit of positive movement in
the performance score. As an example, if management sought to increase the performance score
from 59.053 to 60.292, programs would need to generate three units of positive movement as
driven by the importance score (3 units of movement × 0.413 importance score = 1.239 increase in the performance score).
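A small sketch of the IPMA mechanics follows (the rescaling function and raw scores are illustrative; the importance and performance figures simply reuse the values quoted above for the confirmation construct):

import numpy as np

def rescale_0_100(scores):
    # map latent variable scores onto the 0-100 performance scale used by the IPMA
    return (scores - scores.min()) / (scores.max() - scores.min()) * 100

raw = np.array([2.1, 4.5, 5.8, 6.4, 3.9])          # hypothetical latent variable scores
print(np.round(rescale_0_100(raw), 1))

importance, performance = 0.413, 59.053            # figures quoted in the text for Confirm
print(round(performance + 3 * importance, 3))      # 3 units x 0.413 = 1.239 -> 60.292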
While there is room for significant improvement in the overall performance of this
construct, the highest priority programs should be focused on specific sub-groups. Programs
focused on sub-groups having large gaps (e.g., greater than 10%) in importance scores and
meaningful performance gaps (e.g., greater than 5%) could produce attractive gains in the overall performance score.
Expectations (Expect)
performance score (0.650). These scores position this construct in Quadrant II and represent
expectations performance score. However, management would likely gain higher overall portfolio
returns by prioritizing programs for sub-groups having combined importance and performance
each reflect these criteria. This approach is consistent with the quadrant strategies of investing in
high priority (Quadrant I) sub-groups and sustaining investment in Quadrant II sub-groups. Any
Perceived trust (Trust)
Perceived trust has a below-average importance score and a below-average performance score relative to the mean performance score (60.787). These scores position this construct in Quadrant III and represent
priority 3. Its location suggests that managerial actions for improvement are classified as ‘low
priority’. Given its lower total effect score, contributions to improving the performance score are
limited. Thus, improvement efforts (i.e., at either the construct-level or the sub-group level)
Information privacy concerns (Privacy)
Information privacy concerns is located in Quadrant IV of the grid and represents priority
4. This construct has the highest performance score (66.572) among all constructs but the lowest
importance score (-0.081). The negative sign associated with importance indicates that the total
effects are not significantly different from zero. Given the low total effects, it offers little
strategy. Thus, management should exercise caution before devoting a significant focus towards
customer satisfaction improvement programs in this category. Such programs may be viewed as
wasted investments due to the lack of positive movement in the performance score.
Discussion of results
In this research, we examined the alignment of digital assistant user expectations with the
perceptions of digital assistant performance towards customer satisfaction. The model explained
79.7% of the variance in customer satisfaction with digital assistants and was assessed as being
both substantial in predictive power and statistically significant. In addition, we examined if the
cognitive considerations associated with information privacy concerns and perceived trust offer a
moderating influence on the expectations confirmation theory relationships. While these cognitive
considerations were confirmed, the effects were evaluated as being of limited predictive value.
The first hypothesis predicted that expectations would be positively related to customer
satisfaction. This relationship was confirmed suggesting that for some users, customer satisfaction
with digital assistants is evaluated through the assimilation effect associated with the user's
expectations. This assimilation effect is generally reinforced through media and expert reviews of
digital assistants. However, the results of this study also show the assimilation effect to be
generally small and to have a non-significant overall effect. These results depict a smaller
segment of users evaluating digital assistants through the assimilation effect. Instead, most users appear to evaluate digital assistants through the confirmation of their expectations. Hypothesis 4 predicted that expectations would be positively related to the confirmation of expectations. This relationship was confirmed, suggesting that user expectations of digital assistants were quite high and possessed a significant effect. Given the level of
advertising and promotions for digital assistants and the rapid level of product adoption, this
result is not unexpected. However, this relationship is so high that it is quite possible that
expectations may be too high. If so, then these expectations may be approaching the halo effect
zone. Generally, if a firm's product enters this halo effect zone, further product adoption will likely be at risk.
Hypothesis 6 predicted that confirmation of expectations would be positively related to customer satisfaction. This relationship was confirmed suggesting that for many users, customer
satisfaction with digital assistants was evaluated through the contrast effect associated with
confirming the user's expectations. As previously discussed, most users appear to want
confirmation that their expectations are being met. The study results confirmed that users want
Hypothesis 7 predicted that perceived trust would positively moderate the relationship
between confirmation of expectations and customer satisfaction. This study confirmed that the
higher a respondent’s perceived trust, the stronger the relationship between customer satisfaction
and confirmation of expectations. Yet, based on the study results, both the direct and moderating
effects of perceived trust have little to no predictive value. On the surface, such a finding might
lead managers to believe that there is no need to focus upon this construct. However, most of the
digital assistants identified by the survey respondents were associated with strong brands. Given
the strength of these brands, it is possible that perceived trust was heavily weighted towards
institutional trust and may have been already factored into the user’s expectations in the form of
brand satisfaction. Managers must continue to reaffirm the principles of trust with customers in
every interaction. Future studies should explore the influences of brand satisfaction and other
trust-building elements.
Hypothesis 8 predicted that information privacy concerns would negatively moderate the relationship between confirmation of expectations and customer satisfaction. This study confirmed that higher levels of information privacy concerns weaken the relationship between
confirmation of expectations and customer satisfaction. Yet, like perceived trust, both the direct
and moderating effects of information privacy concerns have little to no predictive value. If
managers interpret these findings to be an indication that only limited focus is required for this
construct, then they are likely introducing significant risk to the firm. The lack of predictive value
identified in the study may be due to high levels of brand satisfaction and may have already been
factored into the user's expectations. Thus, given the magnitude of potential risk, managers must continue to make information privacy a priority.
This study offers multiple contributions to the gaps in the literature. First, to the best of
our knowledge, this study is one of the first attempts to empirically examine the theoretical
foundations for customer satisfaction (Oliver, 1980, 1981) as related to a new AI technology
platform. In the past, this focus has been conveyed in studies associated with the introduction of
new technologies. Research, however, has yet to explore this focal point for AI technologies due to the relative infancy of their adoption.
Customer satisfaction has long been a focal point of extant marketing and information
technology literature (Kumar, 2016; Rego et al., 2013). Thus, the second contribution extends the
research on ECT relationships with these fields of research. In particular, this study extends the
findings of Bhattacherjee and Lin (2015) and Lankton et al. (2014). Each of these studies provides
supporting linkage of different marketing and technology applications to customer satisfaction
Lastly, with the explosive growth of digital and advanced technology capabilities,
information privacy and trust implications (Jain, Gyanchandani, & Khare, 2016) represent
important topics within the disciplines of marketing and IT. To the best of our knowledge, this
study is one of the first attempts to empirically examine the influences of trust and information privacy concerns on customer satisfaction with digital assistants.
Individuals can use digital assistants to perform basic personal task management functions
as well as for more advanced capabilities and connected-device integrations. Yet, the functional
and topical use of a digital assistant tends to vary by individual (Kumar et al., 2016). Accordingly,
at a minimum, these findings provide insights that allow managers to understand the drivers and the degree of customer satisfaction with digital assistants as this technology continues on its maturation path. Harmeling, Moffett, Arnold, and Carlson (2017) offer a theory
of customer engagement marketing which suggests that opportunities exist for firms to use
diverse engagement marketing toolsets (i.e., amplification tools, connective tools, feedback tools,
and creative tools), to gain valuable insights from customers. Digital assistants serve as a
significant data-capture point for many of these analytical tools. Firms must properly monitor and
evaluate the entirety of their engagement marketing strategy to ensure that this technology is
aligned with evolving customer value needs and driving customer satisfaction (Ranjan & Read,
2016). This study provides evidence that customer expectations are being met and satisfied
through the digital assistant interaction experience. Thus, this technology can serve as a critical component of the firm's engagement marketing strategy.
As firms begin to accelerate the integration of digital assistants into their operations,
management commitment must be given to ensuring that customers are aware of new skill
capabilities. Since no common product standards yet exist, it is highly likely that some firms will
prioritize the deployment of certain skills that may differ from other firms. As a result, the
possibility of customer confusion and frustration may be high. Accordingly, management should
focus priorities on assisting users to become aware of new skills and provide relevant examples of
how the application skills can be used to meet user needs (Baier et al., 2018). Such action is
consistent with the importance and performance scores associated with expectations and
confirmation of expectations. By doing so, users will gain a greater understanding of how digital
assistants can provide newer relevant information and efficiently perform important tasks for
them.
Perceptions of trust
Digital assistants represent one of the most visible applications involving an AI-based tech
stack and are a key disrupter for digital transformation. Cohn and Wolfe (2017) found that 75% of
consumers would readily share their personal information with brands they trust. Perhaps this
explains why this same study also identified the top five authentic and trustworthy firms to be Amazon, Apple, Microsoft, Google, and PayPal. Interestingly, Facebook ranked only 92nd, likely
due to residual negative fallout from some past trust-compromising product features and policies.
Facebook’s ranking tends to illustrate the fact that size and dominance do not guarantee consumer
trust.
Individual beliefs provide the foundation for a customer’s perception of trust. Because this
foundation is not based on hard facts, trust can be fragile and subjective (Yannopoulou et al.,
2011). Managers must recognize that trust is an important performance item for their brand.
However, the IPMA trust effects are more significant for certain sub-groups: 57% higher for Gen
X and older, 32% higher for females than males, and 72% higher for higher-income users than
lower/middle-income users. Similarly, the length of experience with digital assistants does matter.
Trust effects are 17% higher for users with greater than 2 years of experience than those with less
experience. Such findings suggest that managers should establish gender-specific programs,
communications, and user experiences that focus upon important trust topics associated with their
products and services. In addition, managers must fight the temptation of catering programs to
higher-income users. While they may have higher discretionary spending, the adverse effects of
trust erosion are more impactful for the low/middle-income group. Finally, managers may view
that longer-tenured digital assistant users might require less attention than less-tenured users.
Such a viewpoint could lead to disastrous outcomes. Longer-tenured users typically have
significantly different foundational experiences not shared by others. As a result, these users
likely have different needs than those of newer users. Thus, managers need to design unique
educational programs, product communications, and user experiences which align with the user’s
product utilization and knowledge level. Further research is needed to more fully understand the impacts of trust in AI environments.
Information privacy concerns
Many users worry about the vulnerability of their personal data and the possibility of it being compromised or
misused. In general, consumer confidence that firms are respecting individual data privacy rights
is rapidly eroding (Fitzgerald, 2019). In fact, California’s digital privacy law (California
Consumer Privacy Act, A.B. 375) becomes effective in 2020. This law “affords California
residents an array of new rights, starting with the right to be informed about what kinds of
personal data companies have collected and why it was collected” (Ghosh, 2018, p. 1). Other jurisdictions are expected to follow with similar legislation.
While managers use customer data to increase personalization and improve marketing
effectiveness (Schumann, von Wangenheim, & Groene, 2014), firms must understand and
acknowledge the potentially severe dimensions and unintended consequences of misused personal
data (Martin, Borah, & Palmatier, 2017). Users want and expect that the personal information
collected by a digital assistant is confidential, protected and used within the parameters that they
approved. Protecting user information privacy is necessary for expanding the integration of digital assistants. Managers should establish programs, communications, and user experiences which focus on important privacy topics
associated with their products and services. The focus should not be limited to higher-income
groups nor to less-tenured users. Certainly, this topic is complex and far-reaching. Thus, further
research is needed to more fully understand the impacts of information privacy concerns in AI
environments.
Limitations and future research
While customer satisfaction is a critical business goal, it is not (by itself) the endpoint of the customer journey nor the culmination of
relevant consumer-related outcomes. Given the rapid development of new skill capabilities for
digital assistants and the anticipated explosive adoption rate of these devices, an abundance of
opportunities exists for future research to examine more and different dimensions of digital
assistants (e.g., diverse sample compositions, specific user preferences, and generational usage
differences). Following are some specific examples of opportunities for future research.
Smartphone-based digital assistants.
The study survey did not focus on assessments of digital assistant use cases involving
features, functionality or applications. However, among the instructions was: “If you've ever
pushed the microphone button on your cellphone to ask for driving directions, then you've likely
used a digital assistant.” Given this statement, the mindset of some respondents may have been
influenced to solely consider smartphone-based digital assistants. Future studies should strive for broader coverage of digital assistant device types.
Continuation intention
The study sample largely consisted of current (and continuing) users of digital assistants
plus a small group of never users. Users who have discontinued use of digital assistants were not
included due to the limitations of identifying and contacting such participants. It is reasonable to
assume that the inclusion of such sample participants would likely lower the overall satisfaction
relationship scores while unmasking other predictor variables. Future studies could explore the
dimensions of commitment and continuation intention of existing users as well as why prior users
avoid negative performance traps associated with homogeneous program designs. Instead,
programs and resources can be developed and supported that complement these differences. It is
equally important to gain insights from former users, as it is important to understand why these
users no longer use the digital assistant. Pairing learning from both current users and former users
might yield important future managerial actions that will reinforce user loyalty, highlight
competitive vulnerabilities, or direct new product features or capabilities for digital assistants. To
the extent that the antecedents and resources used to optimize each type of commitment are likely to vary among digital assistant users, managers will need to have a differentiated and relevant engagement strategy.
Brand satisfaction
While the instrument invited respondents to identify any digital assistant used, only those
associated with established and strong brands were identified (e.g., Apple's Siri, Amazon's Alexa,
Google's Google Assistant, and Microsoft's Cortana). These brands have strong images with
established perceptions of trust and respect for individual privacy. In addition, these firms have not
suffered major, unrepaired reputational damage associated with large data breaches. Thus, future
research should explore whether brand satisfaction impacts expectations, trust, and privacy
concerns for digital assistants.
Influence of self-efficacy
Prior research has linked customer self-efficacy in service participation to customer
satisfaction and productivity gains (Yim, Chan, & Lam, 2012). Self-efficacy beliefs and feelings of
pride can influence how users perceive their mastery of a digital assistant. Thus, it is reasonable to
believe that such beliefs can substantively impact satisfaction assessments for digital assistants.
Future research should assess the relative importance of self-efficacy in the adoption and continued
use of digital assistants.
Longitudinal study
The cross-sectional data collected for this study reflected only one point in time. Rarely
can such a snapshot fully capture the dynamic and interactive nature of many relationship
variables. Any assumption that these study results are reflective of future generations of digital
assistants would be speculative at best. Given the rapid and continuous advancements in digital
assistants and the underlying AI technologies, this current study cannot be viewed as a predictor
of where this technology may be headed. Nor can this current study explicitly predict customer
behavior and evolving expectations. New developments in technology can change consumer
behavior and habits. While it is sometimes difficult to directly identify the immediate impact of
technology changes, over time these changes become more visible as the technology adoption rate
increases. It is reasonable to expect similar changes associated with digital assistants. Future
studies should endeavor to collect longitudinal data to provide a fuller view of the stability of
customer expectations and customer satisfaction perceptions involving digital assistants over
time.
Conclusion
Customer satisfaction has long been a focal point of extant marketing and information
technology literature. This study advances our understanding of the theoretical foundations for
customer satisfaction in the context of AI-based digital assistants. Given the relative infancy of
current digital assistant adoption and utilization, there is limited empirical work directly related to
the consumer experience and customer satisfaction. This study affirmed the role of the expectations
confirmation process in the customer satisfaction evaluation. Further, it provides insights that allow
managers to understand the drivers and the degree of customer satisfaction with digital assistants
and how these elements can influence customer satisfaction evaluations.
Figure 1. Expectation zones and levels of expectation: levels range from desired, needed, and adequate down to a minimum-tolerance (low) level; performance below the "unacceptable" floor (floor effect) results in negative disconfirmation.
Figure 2. Research model.
Figure 3. Importance-performance map for customer satisfaction.
Table 1. Sample Characteristics (n = 244)
$150k or above: 61 (25%)
Table 2. Results Summary for Reflective Measurements
Columns: Latent Variable, Indicators, Indicator Loadings, t Statistic, AVE (convergent validity), Composite Reliability, Cronbach's Alpha (internal consistency reliability)
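For readers who wish to reproduce these statistics from raw data, the following minimal sketch (not the authors' code; the satisfaction items, loadings, and 7-point responses shown are hypothetical) computes Cronbach's alpha, composite reliability, and AVE in Python:

import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha from an (n_respondents x n_items) array."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def composite_reliability(loadings):
    """Composite reliability (rho_c) from standardized outer loadings."""
    loadings = np.asarray(loadings, dtype=float)
    errors = 1 - loadings ** 2                  # indicator error variances
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + errors.sum())

def ave(loadings):
    """Average variance extracted from standardized outer loadings."""
    loadings = np.asarray(loadings, dtype=float)
    return np.mean(loadings ** 2)

# Hypothetical example: four satisfaction items (S1-S4) on an assumed 7-point scale.
sat_items = np.array([[6, 6, 5, 6],
                      [7, 7, 6, 7],
                      [5, 4, 5, 5],
                      [6, 5, 6, 6]])
sat_loadings = [0.91, 0.89, 0.90, 0.90]         # illustrative loadings, not the study's

print(cronbach_alpha(sat_items))
print(composite_reliability(sat_loadings))      # compare with the Composite Reliability column
print(ave(sat_loadings))                        # compare with the AVE column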
Table 3. Results Summary for Formative Measurements
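Formative measurement assessment in PLS-SEM typically examines indicator collinearity through variance inflation factors (VIFs) alongside the significance of outer weights. A minimal sketch of the VIF calculation, using a hypothetical indicator matrix rather than the study's data, follows:

import numpy as np

def vif(indicators):
    """VIF for each formative indicator: regress it on the remaining indicators."""
    X = np.asarray(indicators, dtype=float)
    n, k = X.shape
    vifs = []
    for j in range(k):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        others = np.column_stack([np.ones(n), others])   # add intercept
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1 - resid.var() / y.var()
        vifs.append(1.0 / (1.0 - r2))
    return vifs

# Hypothetical indicator matrix (rows = respondents, columns = formative indicators).
X = np.random.default_rng(1).normal(size=(244, 3))
print(vif(X))   # values well below 5 suggest collinearity is not a concern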
Table 4. Correlation Matrix (Fornell-Larcker Criterion)
Confirm Sat Expect Ben Comp Int GenPv PercPv Privacy Trust
(1) (2) (3) (4) (5) (6) (7) (8) (9) (10)
Confirmation of Expectations (Confirm) 0.888
Customer Satisfaction (Sat) 0.835 0.901
Expectations (Expect) 0.828 0.828 0.874
Benevolence (Ben) 0.473 0.519 0.475 0.688
Competency (Comp) 0.791 0.826 0.766 0.797 0.753
Integrity (Int) 0.614 0.648 0.622 0.837 0.822 0.604
General Privacy Concerns (GenPv) 0.085 -0.046 0.000 0.140 0.086 -0.152 0.750
Perceived Privacy (PercPv) -0.030 -0.148 -0.075 0.019 -0.030 -0.345 0.853 0.842
Information Privacy Concerns (Privacy) 0.023 -0.107 -0.043 0.076 0.022 -0.271 1.078 1.054 0.761
Perceived Trust (Trust) 0.695 0.736 0.686 1.056 1.072 1.097 0.042 -0.107 -0.042 0.635
Age 0.053 0.125 0.134 0.108 0.111 0.089 0.216 0.064 0.137 0.113
Education -0.063 -0.012 -0.033 -0.033 -0.041 -0.065 0.081 0.183 0.144 -0.048
Ethnicity -0.051 -0.031 -0.042 -0.021 -0.025 -0.060 0.020 0.141 0.091 -0.035
Experience -0.042 -0.032 -0.021 0.012 -0.008 -0.041 0.014 0.019 0.018 -0.011
Gender -0.087 -0.047 -0.037 -0.054 -0.003 0.054 -0.153 -0.126 -0.145 -0.003
Income 0.069 0.178 0.183 0.100 0.125 0.132 0.079 0.026 0.052 0.128
Composite Reliability (CR) 0.918 0.945 0.928 0.729 0.797 0.627 0.792 0.879 0.891 0.856
Average Variance Extracted (AVE) 0.788 0.812 0.763 0.473 0.567 0.365 0.562 0.709 N/A N/A
Mean 0.917 0.945 0.927 0.728 0.796 0.626 0.792 0.879 0.891 0.856
Standard Deviation (SD) 0.012 0.007 0.009 0.037 0.025 0.046 0.028 0.018 0.013 0.016
Notes:
1. Square roots of the AVEs are on the diagonal; construct correlations are below the diagonal
2. AVEs of formative indicators are not applicable.
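The Fornell-Larcker criterion in Table 4 requires that the square root of each construct's AVE (the diagonal) exceed that construct's correlations with all other constructs. A minimal sketch of that check, using hypothetical values rather than the study's data, follows:

import numpy as np

def fornell_larcker_ok(corr, ave):
    """Check sqrt(AVE) of each construct against its inter-construct correlations."""
    corr = np.asarray(corr, dtype=float)
    sqrt_ave = np.sqrt(np.asarray(ave, dtype=float))
    ok = True
    for i in range(corr.shape[0]):
        off_diag = np.delete(np.abs(corr[i]), i)
        if sqrt_ave[i] <= off_diag.max():
            ok = False
    return ok

# Hypothetical three-construct example.
corr = np.array([[1.00, 0.83, 0.70],
                 [0.83, 1.00, 0.65],
                 [0.70, 0.65, 1.00]])
ave = [0.79, 0.81, 0.76]
print(fornell_larcker_ok(corr, ave))   # True if discriminant validity holds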
Table 5. Discriminant Validity (Heterotrait-Monotrait Ratio of Correlations)
Perceived Privacy (PercPv) 0.070 0.146 0.074 0.110 0.083 0.343 0.848
[0.020, 0.091] [0.060, 0.257] [0.024, 0.117] [0.028, 0.154] [0.036, 0.111] [0.202, 0.463] [0.751, 0.924]
Standard Deviation (SD) 0.012 0.007 0.009 0.037 0.025 0.046 0.028 0.018
Note: The values in the brackets represent the lower and the upper bounds of the 95% confidence interval; p < .05
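The heterotrait-monotrait (HTMT) ratio reported in Table 5 is the mean of the item correlations across two constructs divided by the geometric mean of the average item correlations within each construct; values well below the 0.85 (or 0.90) threshold support discriminant validity. A minimal sketch with hypothetical item scores (not the study's data) follows:

import numpy as np

def htmt(items_a, items_b):
    """HTMT ratio for two reflective constructs measured by item matrices."""
    A = np.asarray(items_a, dtype=float)
    B = np.asarray(items_b, dtype=float)
    full = np.corrcoef(np.hstack([A, B]), rowvar=False)
    ka, kb = A.shape[1], B.shape[1]
    hetero = full[:ka, ka:]                              # between-construct item correlations
    mono_a = full[:ka, :ka][np.triu_indices(ka, k=1)]    # within-construct A correlations
    mono_b = full[ka:, ka:][np.triu_indices(kb, k=1)]    # within-construct B correlations
    return hetero.mean() / np.sqrt(mono_a.mean() * mono_b.mean())

# Hypothetical item scores for two constructs (n = 244 respondents).
rng = np.random.default_rng(7)
trait = rng.normal(size=(244, 1))
items_a = trait + rng.normal(scale=0.6, size=(244, 3))
items_b = 0.5 * trait + rng.normal(scale=0.8, size=(244, 4))
print(htmt(items_a, items_b))   # compare against the 0.85 / 0.90 thresholds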
Table 6. Significance Testing Results of the Structural Path Coefficients
Columns: Hypothesis, Structural Path, Path Coefficient (β), t Statistic, p Value, 95% Confidence Interval, f² Effect Size, q² Effect Size, Hypothesis Result
H1 Expectation → Customer Satisfaction: β = .291, t = 2.665, p = .008**, CI [.067, .494], f² = 0.110, q² = .069, Supported
H4 Expectation → Confirmation of Expectations: β = .828, t = 29.800, p = .000*, CI [.770, .879], f² = 2.177, q² = .908, Supported
H6 Confirmation of Expectations → Customer Satisfaction: β = .454, t = 4.223, p = .000*, CI [.248, .668], f² = 0.276, q² = .100, Supported
Post Hoc Analysis: Direct Effect of Perceived Trust → Customer Satisfaction: β = .193, t = 3.645, p = .000*, CI [-.194, -.037], f² = 0.091, q² = .053, --
H7 Moderating Effect of Perceived Trust on Confirmation of Expectations → Customer Satisfaction: β = .049, t = 1.368, p = .171, CI [-.021, .122], f² = 0.010, q² = .000, Not Supported
Post Hoc Analysis: Direct Effect of Information Privacy Concerns → Customer Satisfaction: β = -.114, t = 2.827, p = .005**, CI [-.215, -.035], f² = 0.058, q² = .017, --
H8 Moderating Effect of Information Privacy Concerns on Confirmation of Expectations → Customer Satisfaction: β = -.056, t = 1.934, p = .053, CI [-.118, -.004], f² = 0.024, q² = .000, Not Supported
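The t statistics and confidence intervals in Table 6 come from a bootstrapping routine with 5,000 resamples. As a simplified, hedged illustration of the resampling idea (a percentile bootstrap of a single standardized path estimated from hypothetical data, not the BCa procedure or the SmartPLS implementation used in the study), consider:

import numpy as np

def bootstrap_path(x, y, n_boot=5000, seed=42):
    """Percentile bootstrap CI for the standardized slope of y on x."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    coefs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)                  # resample respondents with replacement
        xb, yb = x[idx], y[idx]
        xb = (xb - xb.mean()) / xb.std()
        yb = (yb - yb.mean()) / yb.std()
        coefs[b] = np.polyfit(xb, yb, 1)[0]          # standardized path estimate
    return coefs.mean(), np.percentile(coefs, [2.5, 97.5])

# Hypothetical expectations -> satisfaction scores for 244 respondents.
rng = np.random.default_rng(0)
expectations = rng.normal(size=244)
satisfaction = 0.8 * expectations + rng.normal(scale=0.6, size=244)
est, ci = bootstrap_path(expectations, satisfaction)
print(est, ci)   # a confidence interval excluding zero indicates a significant path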
Table 7. Importance-Performance Variable Matrix for Customer Satisfaction
Note: The Importance element denotes unstandardized effects (i.e., direct and indirect
effects). Significance testing uses the bootstrapping routine with 5,000 samples and no sign
changes for determining the 95 percent BCa confidence intervals.
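In an importance-performance map analysis, each predecessor construct's unstandardized total effect on customer satisfaction (importance) is paired with its average latent variable score rescaled to a 0-100 range (performance). The sketch below illustrates the rescaling and pairing step; the total effects, latent scores, and assumed 1-7 response scale are hypothetical, not the study's values:

import numpy as np

def performance_score(latent_scores, scale_min=1, scale_max=7):
    """Rescale latent variable scores (assumed 1-7 response scale) to 0-100."""
    s = np.asarray(latent_scores, dtype=float)
    return 100 * (s.mean() - scale_min) / (scale_max - scale_min)

# Hypothetical total effects on customer satisfaction and raw latent scores.
total_effects = {"Confirm": 0.41, "Expect": 0.62, "Trust": 0.24, "Privacy": -0.11}
rng = np.random.default_rng(3)
latent_scores = {name: rng.uniform(2, 7, 244) for name in total_effects}

for name, importance in total_effects.items():
    print(name, round(importance, 2), round(performance_score(latent_scores[name]), 1))
# Constructs with high importance but comparatively low performance are the
# priority candidates for improvement (the quadrant logic used in Tables 7 and 8).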
Table 8. Importance-Performance Sub-Group Matrix for Customer Satisfaction
Columns: Variable, Element, (a) Gen Z + Millennials (n = 88), (b) Gen X and older (n = 156), (c) Difference ((a - b) / b), (d) Male (n = 121), (e) Female (n = 123), (f) Difference ((d - e) / e), (g) Bachelor's degree or less (n = 151), (h) Advanced degree (n = 93), (i) Difference ((g - h) / h)
Confirm Importance 0.472 0.357 32% 0.440 0.365 21% 0.424 0.463 -8%
Performance 57.768 59.763 -3% 60.496 57.662 5% 59.752 57.910 3%
Quadrant I I I I I I
Expect Importance 0.470* 0.716 -34% 0.618 0.607 2% 0.662 0.555 19%
Performance 58.219* 63.840 -9% 62.473 61.157 2% 62.493 60.821 3%
Quadrant I II II II II II
Trust Importance 0.306 0.195 57% 0.207 0.305 -32% 0.237 0.316 -25%
Performance 55.391 61.065 -9% 59.284 58.757 1% 59.537 58.176 2%
Quadrant III III III III III III
Privacy Importance -0.177 -0.086 106% -0.074 -0.138 -46% -0.118 -0.127 -7%
Performance 63.642 66.428 -4% 68.483 62.413 10% 64.206 67.400 -5%
Quadrant IV IV IV IV IV IV
Columns: Construct, Element, (a) Low/Middle income (< $80k) (n = 87), (b) Higher-income (> $80k) (n = 157), (c) Difference ((a - b) / b), (d) Experience ≤ 2 Years (n = 143), (e) Experience > 2 Years (n = 101), (f) Difference ((d - e) / e), (g) Siri (n = 175), (h) Non-Siri (n = 69), (i) Difference ((g - h) / h)
Confirm Importance 0.406 0.412 -1% 0.373 0.469 -20% 0.430 0.331 30%
Performance 56.807 65.841 -14% 57.639 61.056 -6% 59.377 58.209 2%
Quadrant I II I I I I
Expect Importance 0.490* 0.662 -26% 0.613* 0.690 -11% 0.619 0.689* -10%
Performance 56.281* 64.898 -13% 59.035* 65.774 -10% 62.419 60.342* 3%
Quadrant I II I II II I
Trust Importance 0.345 0.201 72% 0.210 0.254 -17% 0.254 0.157 62%
Performance 56.304 60.296 -7% 58.203 60.172 -3% 58.070 61.423 -5%
Quadrant I III III III III III
Privacy Importance -0.232 -0.064 263% -0.065 -0.180 -64% -0.093 -0.179 -48%
Performance 64.670 63.655 2% 66.364 64.091 4% 64.896 66.760 -3%
Quadrant IV IV IV IV IV IV
Notes: The Importance element denotes unstandardized effects (i.e., direct and indirect effects) of the relationship between the variable and customer satisfaction. Significance testing
uses the bootstrapping routine with 5,000 samples and no sign changes for determining the 95 percent BCa confidence intervals. The bold values indicate the highest total effect
(importance) and highest performance value of the exogenous driver constructs. * Priority programs for expectations performance improvement.
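The Difference columns in Table 8 report the relative difference (a - b) / b between paired subgroup values. A one-line check against the first reported contrast (Confirm importance, Gen Z + Millennials versus Gen X and older):

def relative_difference(a, b):
    """Relative difference used in Table 8: (a - b) / b, reported as a percentage."""
    return (a - b) / b * 100

print(round(relative_difference(0.472, 0.357)))   # ~32, matching the reported 32%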
References
Acquisti, A., Brandimarte, L., & Loewenstein, G. (2015). Privacy and human behavior
in the age of information. Science, 347, 509-514.
Acquisti, A., Taylor, C., & Wagman, L. (2016). The economics of privacy. Journal of
Economic Literature, 54(2), 442-492.
Akbari, M., Salehi, K., & Samadi, M. (2015). Brand heritage and word of mouth: The
mediating role of brand personality, product involvement and customer
satisfaction. Journal of Marketing Management, 3(1), 83-90.
Alpaydin, E. (2014). Introduction to Machine Learning. Cambridge, Massachusetts:
MIT Press.
Anderson, E. W., & Sullivan, M. W. (1993). The antecedents and consequences of
customer satisfaction for firms. Marketing Science, 12(2), 125-143.
Bagozzi, R. P., & Yi, Y. (1988). On the evaluation of structural equation models.
Journal of the Academy of Marketing Science, 16(1), 74-94.
Baier, D., Rese, A., & Röglinger, M. (2018, December). Conversational User
Interfaces for Online Shops? A Categorization of Use Cases. Paper presented at
the 39th International Conference on Information Systems (ICIS), San
Francisco, USA.
Belanger, F., & Xu, H. (2015). The role of information systems research in shaping the
future of information privacy. Information Systems Journal, 25(6), 573-578.
Benbasat, I., & Wang, W. (2005). Trust in and adoption of online recommendation
agents. Journal of the Association for Information Systems, 6(3), 72-101.
Bernstein, I. H., & Nunnally, J. C. (1994). Psychometric theory. New York: McGraw-Hill.
Oliva, T. A., Oliver, R. L., & MacMillan, I. C. (1992). A catastrophe model for
developing service satisfaction strategies. Journal of Marketing, 56, 83-95.
Bhatt, A. (2014). Consumer attitude towards online shopping in selected regions of
Gujarat. Journal of Marketing Management, 2(2), 29-56.
Bhattacherjee, A. (2001). Understanding information systems continuance: An
expectation-confirmation model. MIS Quarterly, 25(3), 351-370.
Bhattacherjee, A., & Lin, C. P. (2015). A unified model of IT continuance: Three
complementary perspectives and crossover effects. European Journal of
Information Systems, 24(4), 364-373.
Bittner, E., Oeste-Reiß, S., & Leimeister, J. M. (2019). Where is the bot in our team?
Toward a taxonomy of design option combinations for conversational agents in
collaborative work. Paper presented at the Proceedings of the 52nd Hawaii
International Conference on System Sciences.
Byrne, B. M. (2013). Structural Equation Modeling With EQS: Basic Concepts,
Applications, and Programming (2 ed.). New York, NY: Routledge.
Canbek, N. G., & Mutlu, M. E. (2016). On the track of artificial intelligence: Learning
with intelligent personal assistants. Journal of Human Sciences, 13(1), 592-601.
Caruana, A., La Rocca, A., & Snehota, I. (2016). Learner satisfaction in marketing
simulation games: Antecedents and influencers. Journal of Marketing
Education, 38(2), 107-118.
Chen, M.-S., Han, J., & Yu, P. S. (1996). Data mining: An overview from a database
perspective. IEEE Transactions on Knowledge and Data Engineering, 8(6), 866-
883.
Chin, W. W. (1998). The partial least squares approach to structural equation modeling.
Modern Methods for Business Research, 295(2), 295-336.
Churchill Jr., G. A., & Surprenant, C. (1982). An investigation into the determinants of
customer satisfaction. Journal Of Marketing Research, 19(4), 491-504.
Cohn & Wolfe. (2017). 2017 Cohn & Wolfe Authentic Brands Study. Authentic 100.
Retrieved from http://www.authentic100.com
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of
information technology. MIS Quarterly, 13(3), 319-340.
Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer
technology: A comparison of two theoretical models. Management Science,
35(8), 982-1003.
Diamantopoulos, A., Sarstedt, M., Fuchs, C., Wilczynski, P., & Kaiser, S. (2012).
Guidelines for choosing between multi-item and single-item scales for construct
measurement: A predictive validity perspective. Journal of the Academy of
Marketing Science, 40(3), 434-449.
Dijkstra, T. K., & Henseler, J. (2015). Consistent partial least squares path modeling.
MIS Quarterly, 39(2), 297-316.
Einhorn, H. J., & Hogarth, R. M. (1978). Confidence in judgment: Persistence of the
illusion of validity. Psychological Review, 85(5), 395-416.
Festinger, L. (1957). A Theory of Cognitive Dissonance. Stanford, CA: Stanford
University.
Fitzgerald, K. (2019, January 16). In the 'Opt-In' Data Economy, Consumer
Confidence Is Key. Forbes. Retrieved from
https://www.forbes.com/sites/forbestechcouncil/2019/01/16/in-the-opt-in-data-
economy-consumer-confidence-is-key/#6202570c6634.
Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with
unobservable variables and measurement error. Journal Of Marketing Research,
18(1), 39-50.
Ghosh, D. (2018, July 11). What you need to know about California's new data privacy
law. Harvard Business Review Digital Articles, pp. 2-4. Retrieved from
https://hbr.org/2018/07/what-you-need-to-know-about-californias-new-data-
privacy-law.
Giese, J. L., & Cote, J. A. (2000). Defining consumer satisfaction. Academy of
Marketing Science Review, 2000(1), 1-24.
Hair, J. F., Hult, G. T. M., Ringle, C., & Sarstedt, M. (2017). A Primer on Partial Least
Squares Structural Equation Modeling (PLS-SEM). Thousand Oaks, CA: Sage
Publications.
Hair, J. F., Ringle, C. M., & Sarstedt, M. (2011). PLS-SEM: Indeed a silver bullet.
Journal of Marketing Theory and Practice, 19(2), 139-152.
Harmeling, C. M., Moffett, J. W., Arnold, M. J., & Carlson, B. D. (2017). Toward a
theory of customer engagement marketing. Journal of the Academy of Marketing
Science, 45(3), 312-335.
Hasan, U., & Nasreen, R. (2014). The empirical study of relationship between post
purchase dissonance and consumer behaviour. Journal of Marketing
Management, 2(2), 65-77.
Hauswald, J., Laurenzano, M. A., Zhang, Y., Li, C., Rovinski, A., Khurana, A.,
Dreslinski, R. G., Mudge, T., Petrucci, V., & Tang, L. (2015). Sirius: An open
end-to-end voice and vision personal assistant and its implications for future
warehouse scale computers. Paper presented at the ACM SIGPLAN Notices.
Henseler, J., Hubona, G., & Ray, P. A. (2016). Using PLS path modeling in new
technology research: Updated guidelines. Industrial Management & Data
Systems, 116(1), 2-20.
Henseler, J., Ringle, C. M., & Sarstedt, M. (2015). A new criterion for assessing
discriminant validity in variance-based structural equation modeling. Journal of
the Academy of Marketing Science, 43(1), 115-135.
Hock, C., Ringle, C. M., & Sarstedt, M. (2010). Management of multi-purpose
stadiums: Importance and performance measurement of service interfaces.
International Journal of Services Technology and Management, 14(2-3), 188-
207.
Holloway, R. J. (1967). An experiment on consumer dissonance. The Journal of
Marketing, 31(1), 39-43.
Hulland, J. (1999). Use of partial least squares (PLS) in strategic management research:
A review of four recent studies. Strategic management journal, 20(2), 195-204.
Hult, G. T. M., Hair, J. F., Proksch, D., Sarstedt, M., Pinkwart, A., & Ringle, C. M.
(2018). Addressing Endogeneity in International Marketing Applications of
Partial Least Squares Structural Equation Modeling. Journal of International
Marketing, 26(3), 1-21.
Jack, E. P., & Powers, T. L. (2013). Shopping behaviour and satisfaction outcomes.
Journal of Marketing Management, 29(13-14), 1609-1630.
Jain, P., Gyanchandani, M., & Khare, N. (2016). Big data privacy: A technological
perspective and review. Journal of Big Data, 3(1), 1-25.
Jiang, J. J., & Klein, G. (2009). Expectation confirmation theory: Capitalizing on
descriptive power. In K. Klinger (Ed.), Handbook of research on contemporary
theoretical models in information systems (pp. 384-401). Hershey, PA: IGI
Global.
Johnson, M. D., & Fornell, C. (1991). A framework for comparing customer satisfaction
across individuals and product categories. Journal of Economic Psychology, 12,
267-286.
Karapanos, E. (2013). User experience over time. Berlin Heidelberg: Springer.
Kim, D. J., Ferrin, D. L., & Rao, H. R. (2008). A trust-based consumer decision-making
model in electronic commerce: The role of trust, perceived risk, and their
antecedents. Decision Support Systems, 44(2), 544-564.
Kim, D. J., Ferrin, D. L., & Rao, H. R. (2009). Trust and satisfaction, two stepping
stones for successful e-commerce relationships: A longitudinal exploration.
Information Systems Research, 20(2), 237-257.
Kim, H. W., Xu, Y., & Gupta, S. (2012). Which is more important in Internet shopping,
perceived price or trust? Electronic Commerce Research and Applications,
11(3), 241-252.
Kock, N. (2015). A note on how to conduct a factor-based PLS-SEM analysis.
International Journal of e-Collaboration, 11(3), 1-9.
Koehler, J. (2016, April 15). Business Process Innovation with Artificial Intelligence -
Benefits and Operational Risks. Lucerne University.
Komiak, S. X., & Benbasat, I. (2006). The effects of personalization and familiarity on
trust and adoption of recommendation agents. MIS Quarterly, 30(4), 941-960.
Kumar, V. (2016). Introduction: Is customer satisfaction (ir) relevant as a metric?
Journal of Marketing, 80(5), 108-109.
Kumar, V., Dixit, A., Javalgi, R. R. G., & Dass, M. (2016). Research framework,
strategies, and applications of intelligent agent technologies (IATs) in
marketing. Journal of the Academy of Marketing Science, 44(1), 24-45.
Lankton, N. K., & McKnight, D. H. (2012). Examining two expectation disconfirmation
theory models: Assimilation and asymmetry effects. Journal of the Association
for Information Systems, 13(2), 88-115.
Lankton, N. K., McKnight, D. H., & Thatcher, J. B. (2014). Incorporating trust-in-
technology into expectation disconfirmation theory. The Journal of Strategic
Information Systems, 23(2), 128-145.
LaTour, S. A., & Peat, N. C. (1979). Conceptual and methodological issues in consumer
satisfaction research. NA-Advances in Consumer Research, 6(1), 431-437.
Lohmöller, J.-B. (2013). Latent variable path modeling with partial least squares.
Berlin Heidelberg: Springer Science & Business Media.
Luhmann, N., & Schorr, K. E. (1979). Reflexionsprobleme im Erziehungssystem (Vol.
740). Stuttgart: Klett-Cotta.
Malhotra, N. K., Kim, S. S., & Agarwal, J. (2004). Internet users' information privacy
concerns (IUIPC): The construct, the scale, and a causal model. Information
Systems Research, 15(4), 336-355.
Martilla, J. A., & James, J. C. (1977). Importance-Performance Analysis. Journal of
Marketing, 41(1), 77-79.
Martin, K. D., Borah, A., & Palmatier, R. W. (2017). Data privacy: Effects on customer
and firm performance. Journal of Marketing, 81(1), 36-58.
McKnight, D. H., Choudhury, V., & Kacmar, C. (2002). Developing and validating
trust measures for e-commerce: An integrative typology. Information Systems
Research, 13, 334-359.
Minta, Y. (2018). Link between satisfaction and customer loyalty in the insurance
industry: Moderating effect of trust and commitment. Journal of Marketing
Management, 6(2), 25-33.
Mithas, S., & Rust, R. T. (2016). How information technology strategy and investments
influence firm performance: Conjecture and empirical evidence. MIS Quarterly,
40(1), 223-246.
Moar, J. (2019, February). The Digital Assistants of Tomorrow. Retrieved from
https://www.juniperresearch.com/document-library/white-papers/the-digital-
assistants-of-tomorrow
Morgeson, F. V. (2013). Expectations, disconfirmation, and citizen satisfaction with the
US federal government: Testing and expanding the model. Journal of Public
Administration Research and Theory, 23(2), 289-305.
Mouri, N., Bindroo, V., & Ganesh, J. (2015). Do retail alliances enhance customer
experience? Examining the relationship between alliance value and customer
satisfaction with the alliance. Journal of Marketing Management, 31(11-12),
1231-1254.
Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises.
Review of general psychology, 2(2), 175-220.
Nunnally, J. C. (1978). Psychometric theory. New York: McGraw-Hill.
Oliver, R. L. (1977). Effect of expectation and disconfirmation on postexposure product
evaluations: An alternative interpretation. Journal Of Applied Psychology, 62(4),
480-486.
Oliver, R. L. (1980). A cognitive model of the antecedents and consequences of
satisfaction decisions. Journal Of Marketing Research, 17(4), 460-469.
Oliver, R. L. (1981). Measurement and evaluation of satisfaction processes in retail
settings. Journal of Retailing, 57(3), 25-48.
Oliver, R. L. (2014). Satisfaction: A Behavioral Perspective on the Consumer. (2 ed.).
New York, NY: Routledge.
Oliver, R. L., Balakrishnan, P. S., & Barry, B. (1994). Outcome satisfaction in
negotiation: A test of expectancy disconfirmation. Organizational Behavior and
Human Decision Processes, 60(2), 252-275.
Oliver, R. L., & DeSarbo, W. S. (1988). Response determinants in satisfaction
judgments. Journal Of Consumer Research, 14(4), 495-507.
Oliver, R. L., Rust, R. T., & Varki, S. (1997). Customer delight: Foundations, findings,
and managerial insight. Journal of Retailing, 73(3), 311-336.
Olshavsky, R. W., & Miller, J. A. (1972). Consumer expectations, product performance,
and perceived product quality. Journal Of Marketing Research, 9(1), 19-21.
Pappas, N. (2016). Marketing strategies, perceived risks, and consumer trust in online
buying behaviour. Journal of Retailing and Consumer Services, 29, 92-103.
Parasuraman, A., Berry, L. L., & Zeithaml, V. A. (1991). Refinement and reassessment
of the SERVQUAL scale. Journal of Retailing, 67(4), 420-450.
Peart, A. (2018, July 25). Conversational ai platforms demand is growing. Artificial
Solutions. Retrieved from https://www.artificial-
solutions.com/blog/conversational-ai-platforms-demand-grows.
Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., & Podsakoff, N. P. (2003). Common
method biases in behavioral research: A critical review of the literature and
recommended remedies. Journal Of Applied Psychology, 88(5), 879.
Rafiq, M., Fulford, H., & Lu, X. (2013). Building customer loyalty in online retailing:
The role of relationship quality. Journal of Marketing Management, 29(3-4),
494-517.
Ranjan, K. R., & Read, S. (2016). Value co-creation: Concept and measurement.
Journal of the Academy of Marketing Science, 44(3), 290-315.
Rego, L. L., Morgan, N. A., & Fornell, C. (2013). Reexamining the market share–
customer satisfaction relationship. Journal of Marketing, 77(5), 1-20.
Ringle, C. M., & Sarstedt, M. (2016). Gain more insight from your PLS-SEM results:
The importance-performance map analysis. Industrial Management & Data
Systems, 116(9), 1865-1886.
Ringle, C. M., Wende, S., & Becker, J.-M. (2015). SmartPLS 3 [Computer software].
Sarstedt, M., Henseler, J., & Ringle, C. M. (2011). Multigroup analysis in partial least
squares (PLS) path modeling: Alternative methods and empirical results. In
Measurement and Research Methods in International Marketing (Vol. 22, pp.
195-218). Emerald Group Publishing Limited.
Sarstedt, M., Ringle, C. M., Smith, D., Reams, R., & Hair, J. F. (2014). Partial least
squares structural equation modeling (PLS-SEM): A useful tool for family
business researchers. Journal of Family Business Strategy, 5(1), 105-115.
Schumann, J. H., von Wangenheim, F., & Groene, N. (2014). Targeted online
advertising: Using reciprocity appeals to increase acceptance among users of
free web services. Journal of Marketing, 78(1), 59-75.
Slack, N. (1994). The importance-performance matrix as a determinant of improvement
priority. International Journal of Operations & Production Management, 14(5),
59-75.
Smith, H. J., Milberg, S. J., & Burke, S. J. (1996). Information privacy: measuring
individuals' concerns about organizational practices. MIS Quarterly, 20(2), 167-
196.
Spreng, R. A., MacKenzie, S. B., & Olshavsky, R. W. (1996). A reexamination of the
determinants of consumer satisfaction. The Journal of Marketing, 60(3), 15-32.
Spreng, R. A., & Olshavsky, R. W. (1993). A desires-as-standard model of consumer
satisfaction: Implications for measuring satisfaction. Journal of the Academy of
Marketing Science, 21(3), 169-177.
Spreng, R. A., & Page, T. J. (2003). A test of alternative measures of disconfirmation.
Decision Sciences, 34(1), 31-62.
Statista.com. (2019). US smartphone users by age group. Statista.com. Retrieved from
https://www.statista.com/statistics/805090/us-smartphone-users-by-age-group/
Tam, J. L. M. (2004). Customer satisfaction, service quality and perceived value: An
integrative model. Journal of Marketing Management, 20(7-8), 897-917.
Teas, R. K., & DeCarlo, T. E. (2004). An examination and extension of the zone-of-
tolerance model: A comparison to performance-based models of perceived
quality. Journal of Service Research, 6(3), 272-286.
Tse, D. K., & Wilton, P. C. (1988). Models of consumer satisfaction formation: An
extension. Journal Of Marketing Research, 25(2), 204-212.
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of
information technology: Toward a unified view. MIS Quarterly, 27(3), 425-478.
Venkatesh, V., Thong, J. Y. L., Chan, F. K. Y., Hu, P. J. H., & Brown, S. A. (2011).
Extending the two-stage information systems continuance model: incorporating
UTAUT predictors and the role of context. Information Systems Journal, 21(6),
527-555.
White House. (2014, May). Big data and privacy: A technological perspective.
Washington, DC: Executive Office of the President, President’s Council of
Advisors on Science and Technology Retrieved from
http://www.whitehouse.gov/sites/default/files/microsites/ostp/PCAST/pcast_big
_data_and_privacy_-_may_2014.pdf.
Wise, J., VanBoskirk, S., & Liu, S. (2016, September 12). The rise of intelligent agents.
Forrester.com. Retrieved from
https://www.forrester.com/report/The+Rise+Of+Intelligent+Agents/-/E-
RES128047#figure1.
Xiao, S., & Benbasat, I. (2002). The Impact of Internalization and Familiarity on Trust
and Adoption of Recommendation Agents. Paper presented at the AIS SIG-HCI.
Yannopoulou, N., Koronis, E., & Elliott, R. (2011). Media amplification of a brand
crisis and its effect on brand trust. Journal of Marketing Management, 27(5-6),
530-546.
Yi, Y. (1990). A critical review of consumer satisfaction. Review of Marketing, 4(1),
68-123.
Yim, C. K., Chan, K. W., & Lam, S. S. (2012). Do customers and employees enjoy
service participation? Synergistic effects of self-and other-efficacy. Journal of
Marketing, 76(6), 121-140.
Zhao, L., Lu, Y., Zhang, L., & Chau, P. Y. K. (2012). Assessing the effects of service
quality and justice on customer satisfaction and the continuance intention of
mobile value-added services: An empirical test of a multidimensional model.
Decision Support Systems, 52(3), 645-656.
Zimmer, J. C., Arsal, R. E., Al-Marzouq, M., & Grover, V. (2010). Investigating online
information disclosure: Effects of information relevance, trust and risk.
Information & Management, 47(2), 115-123.
APPENDIX A
CONSTRUCT SCALES
Item Description (Measurement scale anchors)
Customer Satisfaction
S1 Overall, how satisfied are you with your digital assistant? (Extremely displeased - Extremely pleased)
S2 Overall, how satisfied are you with your digital assistant? (Extremely frustrated - Extremely contented)
S3 Overall, how satisfied are you with your digital assistant? (Extremely miserable - Extremely delighted)
S4 Overall, how satisfied are you with your digital assistant? (Extremely dissatisfied - Extremely satisfied)
Expectations
Usefulness
EU1 Based on my experience so far, I expect that my digital assistant will increase my productivity. (Strongly disagree - Strongly agree)
EU2 Based on my experience so far, I expect that my digital assistant will improve my performance. (Strongly disagree - Strongly agree)
EU3 Based on my experience so far, I expect that my digital assistant will enhance my effectiveness. (Strongly disagree - Strongly agree)
EU4 Based on my experience so far, I expect that my digital assistant will be useful. (Strongly disagree - Strongly agree)
EU5 Based on my experience so far, I expect that my digital assistant will allow me to complete tasks more quickly. (Strongly disagree - Strongly agree)
EU6 Based on my experience so far, I expect that my digital assistant will make it easier to complete my tasks. (Strongly disagree - Strongly agree)
Perceived Performance
Usefulness
PU1 Based on my experience with my digital assistant, it increased my productivity. (Strongly disagree - Strongly agree)
PU2 Based on my experience with my digital assistant, it improved my performance. (Strongly disagree - Strongly agree)
PU3 Based on my experience with my digital assistant, it enhanced my effectiveness. (Strongly disagree - Strongly agree)
PU4 Based on my experience with my digital assistant, it was useful. (Strongly disagree - Strongly agree)
PU5 Based on my experience with my digital assistant, it allowed me to complete tasks more quickly. (Strongly disagree - Strongly agree)
PU6 Based on my experience with my digital assistant, my tasks were easier to complete. (Strongly disagree - Strongly agree)
Confirmation of Expectations
Usefulness
CU1 My increased productivity due to my digital assistant was ___ than expected. (Much worse than expected - Much better than expected)
CU2 My improved performance due to my digital assistant was ___ than expected. (Much worse than expected - Much better than expected)
CU3 My enhanced effectiveness due to my digital assistant was ___ than expected. (Much worse than expected - Much better than expected)
CU4 The usefulness of my digital assistant was ___ than expected. (Much worse than expected - Much better than expected)
CU5 My ability to complete tasks more quickly with my digital assistant was ___ than expected. (Much worse than expected - Much better than expected)
CU6 The ease with which I complete my tasks with my digital assistant was ___ than expected. (Much worse than expected - Much better than expected)
Perceived Trust
Competence
TC1 My digital assistant is like a real expert in providing answers. (Strongly disagree - Strongly agree)
TC2 My digital assistant has the expertise to understand my needs and preferences. (Strongly disagree - Strongly agree)
TC3 My digital assistant can understand my needs and preferences. (Strongly disagree - Strongly agree)
TC4 My digital assistant had good knowledge about the questions and subjects that I am interested in. (Strongly disagree - Strongly agree)
TC5 My digital assistant matches my needs to the information available. (Strongly disagree - Strongly agree)
Benevolence
TB1 My digital assistant puts my interests first. (Strongly disagree - Strongly agree)
TB2 My digital assistant keeps my interests in mind. (Strongly disagree - Strongly agree)
TB3 My digital assistant wants to understand my needs and preferences. (Strongly disagree - Strongly agree)
TB4 My digital assistant helps me know more about the topic of my inquiry. (Strongly disagree - Strongly agree)
Integrity
TI1 My digital assistant provides unbiased information and recommendations. (Strongly disagree - Strongly agree)
TI2 My digital assistant provides honest answers. (Strongly disagree - Strongly agree)
TI3 I consider my digital assistant to possess integrity. (Strongly disagree - Strongly agree)
TI4 My digital assistant is not linked to a specific company, so it is unbiased. (Strongly disagree - Strongly agree)
Information Privacy Concerns
General Privacy Concerns
PC1 Compared to others, I am more sensitive about the way online companies handle my personal information. (Strongly disagree - Strongly agree)
PC2 To me, it is most important to keep my privacy intact from online companies. (Strongly disagree - Strongly agree)
PC3 I am concerned about threats to my personal privacy today. (Strongly disagree - Strongly agree)
PC4 I believe other people are too much concerned with online privacy issues. (Strongly disagree - Strongly agree)
PC5 I am concerned about threats to my personal privacy today. (Strongly disagree - Strongly agree)
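These scale items are the indicators behind the constructs analyzed in Tables 2-8. As a hedged illustration only (the response format is assumed here to be a 7-point scale, and the scores shown are hypothetical), a simple composite score for a construct can be formed by averaging a respondent's item responses:

# Hypothetical responses for one respondent; keys follow the appendix item codes.
responses = {
    "S1": 6, "S2": 6, "S3": 5, "S4": 6,                  # customer satisfaction items
    "TC1": 4, "TC2": 5, "TC3": 5, "TC4": 6, "TC5": 5,    # trust: competence items
}

def construct_score(responses, prefix):
    """Average the items whose code starts with the given construct prefix."""
    items = [v for k, v in responses.items() if k.startswith(prefix)]
    return sum(items) / len(items)

print(construct_score(responses, "S"))    # satisfaction composite
print(construct_score(responses, "TC"))   # trust-competence composite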