
Telematics and Informatics 47 (2020) 101324

Technology acceptance theories and factors influencing artificial intelligence-based intelligent products
Kwonsang Sohn, Ohbyung Kwon

School of Management, Kyung Hee University, Kyungheedae-ro 26, Dongdaemun-gu, Seoul, South Korea

ARTICLE INFO

Keywords:
AI-based intelligent products
Technology adoption
Purchase intention
Technology acceptance theory
Decomposition analysis

ABSTRACT

The rapid growth of artificial intelligence (AI) technology has prompted the development of AI-based intelligent products. Accordingly, various technology acceptance theories have been used to explain acceptance of these products. This comparative study determines which models best explain consumer acceptance of AI-based intelligent products and which factors have the greatest impact on purchase intention. We assessed the utility of the Technology Acceptance Model (TAM), the Theory of Planned Behavior (TPB), the Unified Theory of Acceptance and Use of Technology (UTAUT), and the Value-based Adoption Model (VAM) using data collected from a survey sample of 378 respondents, modeling user acceptance in terms of behavioral intention to use AI-based intelligent products. In addition, we employed decomposition analysis to compare each factor included in these models in terms of its influence on purchase intention. We found that the VAM performed best in modeling user acceptance. Among the various factors, enjoyment was found to influence user purchase intention the most, followed by subjective norms. The findings of this study confirm that acceptance of highly innovative products with minimal practical value, such as AI-based intelligent products, is influenced more by interest in the technology itself than by utilitarian aspects.

1. Introduction

Recent advancements in computing capabilities have brought about rapid growth in artificial intelligence (AI) technologies such
as natural language processing, voice recognition, and machine learning. Unsurprisingly, interest in intelligent products based on AI
technologies is also increasing. Intelligent products are physical objects with the intelligence to take autonomous action and make
decisions based on interactions with the environment (González García et al., 2017). Intelligent products can be classified as innovative IT products; therefore, an understanding of the factors affecting the behavioral intention to use AI-based intelligent
products can begin with an understanding of prior research on the user adoption of innovative products.
Most studies of the use of innovative products are based on the Technology Acceptance Model (TAM), Theory of Planned Behavior
(TPB), and Unified Theory of Acceptance and Use of Technology (UTAUT) (Groß, 2015). The TAM has been widely used to explain
intention to use in various fields such as intelligent healthcare systems (Chen et al., 2017; Hsieh, 2015), internet-based intelligent
systems (Changchit, 2003), web 3.0-based intelligent learning environments (Cabada et al., 2018), intelligent advertising systems
(Aguilar and Garcia, 2018), and intelligent robots (Liang and Lee, 2017). The TPB and UTAUT have also been utilized in analyses of
intelligent games (Lim, 2003; Hamari and Koivisto, 2013) and in research on agent-based systems (Zhang and Zhang, 2007), intelligent healthcare systems (Fan et al., 2018), intelligent learning systems (Fernández-Llamas et al., 2018; Roll et al., 2018), and


Corresponding author.
E-mail addresses: miroo1215@khu.ac.kr (K. Sohn), obkwon@khu.ac.kr (O. Kwon).

https://doi.org/10.1016/j.tele.2019.101324

Received 1 April 2019; Received in revised form 22 July 2019; Accepted 6 December 2019; Available online 16 December 2019

0736-5853/ © 2019 Elsevier Ltd. All rights reserved.



recommender advertising systems (Oechslein et al., 2014).


Although some variables in acceptance theory are similar from one model to the next (e.g., perceived usefulness in the TAM and
performance expectancy in the UTAUT, perceived ease of use in the TAM and effort expectancy in the UTAUT) and many are applied
in similar ways (e.g., in learning systems, healthcare settings, advertising recommendations, etc.), consistency among these various
systems is limited. There is currently no objective consensus within technology acceptance research on which model performs best in each field, so empirical studies that compare these models in terms of their ability to explain acceptance phenomena are needed. Recently, an interesting comparative study examined which model better explains driver acceptance of driving assistance systems: the TAM, the TPB, or the UTAUT (Rahman et al., 2017). However, such a comparison has yet to be conducted in
other fields, including intelligent products. It is very important for product developers and corporate investors to predict the impact
of various factors on consumers' acceptance of AI-based intelligent products at the time of diffusion. Understanding which models
best explain acceptance phenomena and which factors influence purchase of AI-based intelligent products may be helpful in this
regard.
Accordingly, the purposes of this study are: first, to compare technology acceptance theories in terms of acceptance of AI-based intelligent products; second, to investigate the factors influencing technology acceptance and how they affect purchase intention; and third, to verify differences among the factors affecting purchase intention of AI-based intelligent products.

2. Theoretical foundations

The IS community has adopted the TAM, TPB, UTAUT, and Value-based Adoption Model (VAM) to shed light on the acceptance of
intelligent products and services. First, the TAM, which was developed from the Theory of Reasoned Action (TRA) to improve
understanding of user acceptance of information systems (Davis, 1985, 1989, Fig. 1a), is the most widely used model to explain the
behavior of consumers regarding technology adoption (Lee et al., 2003). The TAM has been used in studies of the acceptance of
various types of information technology (Kim and Shin, 2015; Karahanna and Straub, 1999; Subramanian, 1994; Adams et al., 1992)
and is known as a robust acceptance theory (Hendrickson et al., 1993; Segars and Grover, 1993). For example, the TAM was used to
explain the acceptance of wearable devices (e.g., the smart watch) (Chuah et al., 2016; Yang et al., 2016; Kim and Shin, 2015),
business intelligence systems (Wang, 2016), intelligent health monitoring systems (Tseng et al., 2013), intelligent tourism (Venkatesh
and Davis, 2000), smart in-store technology (Kim et al., 2017), the smartphone credit card (Ooi and Tan, 2016), and many others.
Next, the TPB extends the TRA: it retains the constructs of attitude and subjective norms and introduces perceived behavioral control to explain intention to act (Ajzen, 1991, Fig. 1b). Subjective norms capture the importance of social influences on acceptance that affect individual behavior (Ajzen and Fishbein, 1973). Perceived behavioral control, which also affects individual behavior, represents the perceived ease of accessing and carrying out the behavior (Ajzen, 1985), as opposed to perceived ease of use in the TAM, which refers to ease of operation of the system itself. The TPB was developed at about the same time as the TAM; these models
have been compared in terms of utility for evaluation of acceptance intention. It has been shown that results using these models differ
according to measurement object and method (Mathieson, 1991; Taylor and Todd, 1995). The TPB has been used in research on the acceptance of innovative products to examine exogenous factors such as social influence, and similar exogenous factors have been examined for intelligent products in scholarly work. For example, the TPB was used to explain acceptance of
wearable devices (Basoglu et al., 2017; Lunney et al., 2016), smart home services (Yang et al., 2017), mobile services (Yang and Jolly,
2009), health cloud systems (Hsieh, 2015), and intelligent transport systems (Larue et al., 2015; Thorhauge et al., 2016).
The UTAUT was developed by redefining representative technology acceptance theories, such as the TRA, TAM, and TPB, from an
integrated perspective (Venkatesh et al., 2003). The main factors included in the UTAUT are performance expectancy, effort expectancy, social influence, and facilitating conditions. It considers individual perspectives and the influence of social and environmental factors on technology use (Fig. 1c). The UTAUT has been used to explain acceptance of intelligent healthcare systems (Fan et al., 2018;
Hsieh, 2016; Gao et al., 2015), wearable devices (Adapa et al., 2018; Gu et al., 2016), recommender systems (Wang et al., 2015;
Oechslein et al., 2014), virtual reality education systems (Setiawan et al., 2019), and other intelligent systems (Williams et al., 2015).
Finally, the VAM was proposed as an alternative model because the TAM failed to consider many of the effects of exogenous
variables in explaining intention to use new information and communication technologies (ICT), including mobile internet. From the
cost-benefit perspective, the VAM retained the technical characteristics (usefulness, technicality) of existing technology acceptance
theories, adding enjoyment and perceived fee (Kim et al., 2007, Fig. 1d). Unlike other models, the VAM includes perceived value as a mediating factor in individual decision-making; this value is estimated as a trade-off between the perceived benefits and sacrifices of the technology. The VAM has been used to
explain the acceptance of IPTV (Lin et al., 2012), mobile payments (Mallat, 2007), and IoT services (Kim et al., 2017).
However, few studies clearly explain why they select a specific acceptance model over any other, nor do they evaluate its ability
to describe acceptance phenomena related to intelligent products. Furthermore, though intelligent products have various subclassifications, no comparison study of which factors affect acceptance has been conducted.

3. Method

3.1. Research models

To understand acceptance of AI-based intelligent products, we utilized a method to compare the four most frequently used
models, the TAM, TPB, UTAUT, and VAM, based on the method proposed in a study of advanced driver assistance systems (Rahman
et al., 2017). Moreover, we compared the magnitude of the influence between factors explaining purchase intention. The constructs


Fig. 1. IS adoption models: (a) TAM (Davis, 1989); (b) TPB (Ajzen, 1991); (c) UTAUT (Venkatesh et al., 2003); (d) VAM (Kim et al., 2007).

presented in the models are as follows:

- TAM (Davis, 1989)


1. Perceived usefulness (PU) and perceived ease of use (PEoU) are significant predictors of behavioral intention (BI) (model:
BI = PU + PEoU).
2. PU mediates the effect of PEoU on BI; however, the mediation is not complete. In other words, PEoU significantly affects BI, above
and beyond the influence of PU.
- TPB (Ajzen, 1991)
1. Attitude toward behavior (A), subjective norms (SN), and perceived behavioral control (PBC) are significant predictors of BI (model: BI = A + SN + PBC).
- UTAUT (Venkatesh et al., 2003)
1. Performance expectancy (PE), effort expectancy (EE), and social influence (SI) are significant predictors of BI (model:
BI = PE + EE + SI).
2. Social influence is related to subjective norms.
- VAM (Kim et al., 2007)
1. Usefulness (U), enjoyment (Enj), technicality (Tech), and perceived fee (PF) are significant predictors of perceived value (PV)
(model: PV = U + Enj + Tech + PF).
2. Technicality (Tech) and PF significantly and negatively affect PV.
3. PV mediates the effects of U, Enj, Tech, and PF on BI; however, the mediation is not complete. In this study, we see that U, Enj,
Tech, and PF can affect BI directly.
4. Usefulness is related to PU.
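
To make the comparison concrete, the following sketch (not the authors' code) encodes the four specifications above as regression formulas and fits each one by ordinary least squares, reporting adjusted R2 as in Section 4.1. Construct scores are assumed to be item averages stored in a pandas DataFrame; the column names are hypothetical placeholders.

```python
# Minimal sketch: encode each model's simplified regression form and compare
# adjusted R-squared. Column names (PU, PEoU, A, SN, PBC, PE, EE, SI, U, Enj,
# Tech, PF, BI) are assumed construct-level scores, not the authors' variables.
import pandas as pd
import statsmodels.formula.api as smf

MODEL_SPECS = {
    "TAM":   "BI ~ PU + PEoU",
    "TPB":   "BI ~ A + SN + PBC",
    "UTAUT": "BI ~ PE + EE + SI",
    "VAM":   "BI ~ U + Enj + Tech + PF",  # simplified; the VAM also routes effects through PV
}

def compare_models(df: pd.DataFrame) -> pd.DataFrame:
    """Fit each specification by OLS and report adjusted R-squared."""
    rows = []
    for name, formula in MODEL_SPECS.items():
        fit = smf.ols(formula, data=df).fit()
        rows.append({"model": name, "adj_R2": round(fit.rsquared_adj, 3)})
    return pd.DataFrame(rows).sort_values("adj_R2", ascending=False)
```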

3.2. Data collection

Before the survey, experts in AI products and technology acceptance theories reviewed the questionnaire; after several revisions, they confirmed that it was well constructed. This validation exercise was completed before the questionnaire was used to test the research model. We then conducted an online survey of 849 people through a professional survey organization over three weeks in October 2018. To measure BI to use and intention to purchase (PI) intelligent products, we targeted those interested in using
products such as the smart speaker, voice assistant services, and AI-based home appliances. These three products were selected for
three reasons: because they all involve voice recognition; according to a survey report published in June 2017 by the Korean
Consumer Agency, they were all already commercialized as of the second half of 2017; and they represent a distinct distribution
pattern representative of AI-based products. Out of 849 respondents, 414 (48.8%) completed the survey. Insincere responses and
those with standardized residual values exceeding 2 in each model were eliminated as outliers (Berggren et al., 2008). Finally, data
for 378 respondents (262 actual and 116 potential users) were used in the analysis. The demographic distribution of the respondents
is provided in Table 1.

3.3. Procedure

The survey began with a functional description and examples of intelligent products to aid respondents' understanding. Items
from the TAM, TPB, UTAUT, and VAM related to BI and purchase intention (PI) were measured on a 7-point Likert scale. All
measurement items used were developed through a thorough search of the literature; in total, 54 items were measured (Table 2).
Items with similar concepts such as usefulness (TAM and VAM) and subjective norms/social influences (TPB and UTAUT) were
common to each model. Additionally, the purpose of this study was to compare the influence of independent variables that directly
affect BI. Therefore, the moderating variables (age, gender, experience, voluntary use) used in the UTAUT were excluded.

3.4. Reliability and validity of constructs

To investigate the convergent validity of the measurement items used in this study, a confirmatory factor analysis was conducted.
Table 3 summarizes the statistics related to the measurement items and constructs. Convergent validity was assessed by examining
the factor loadings for each item, the composite reliability (CR), and the average variance extracted (AVE) for each construct. Items
with factor loadings of < 0.5, indicating potential problems with their validity, were deleted from the factor analysis. In total, ten
items were removed, including one perceived ease of use item (PEoU3), four attitude items (A1, A3, A6, A7), one perceived behavioral control item (PBC3), one subjective norms item (SN4), one effort expectancy item (EE3), one enjoyment item (Enj4), and one perceived fee item (PF3).

Table 1
Demographic characteristics of respondents.
Characteristics Actual users Potential users %

Gender
Male 125 51 46.6
Female 137 65 53.4
Age (years)
20–29 71 20 24.1
30–39 74 20 24.9
40–49 66 32 25.9
50+ 51 44 25.1
Education
High school 27 29 14.8
College or university student 23 7 7.9
Graduate college or university 185 72 68.0
Advanced degree 27 8 9.3

Respondents (n = 378).


Table 2
Measurement items and their sources.

Perceived Ease of Use (Venkatesh and Davis, 2000)
PEoU1 Using the AI product would be easy
PEoU2 Interaction with the AI product would be clear and understandable
PEoU3(R) I would find the AI product difficult to use
PEoU4 I would find it easy to get the AI product to do what I want it to do

Perceived Usefulness (Venkatesh and Davis, 2000)
PU1 Using the AI product would improve my daily work performance
PU2 Using the AI product would help my daily work
PU3 Using the AI product would enhance effectiveness in my daily work
PU4 I would find the AI product useful in my daily work

Attitude (Rahman et al., 2017)
A1 Use of the AI product in everyday life would be bad/good
A2 Use of the AI product in everyday life would be useless/useful
A3(R) Use of the AI product in everyday life would be desirable/undesirable
A4 Use of the AI product in everyday life would be ineffective/effective
A5 Use of the AI product in everyday life would be unpleasant/pleasant
A6 Use of the AI product in everyday life would be irritating/likeable
A7(R) Use of the AI product in everyday life would be helpful/worthless

Subjective Norms/Social Influence (Yang and Jolly, 2009; Venkatesh et al., 2003)
SN1 People who influence my behavior would think that I should use the AI product
SN2 People who are important to me would think that I should use the AI product
SN3 People around me will take a positive view of me using the AI product
SN4(R) People around me would think that I should not use the AI product

Perceived Behavioral Control (Taylor and Todd, 1995; Rahman et al., 2017)
PBC1 Using the AI product is entirely within my control
PBC2 I have enough ability to use the AI product
PBC3(R) I do not have the knowledge necessary to use the AI product
PBC4 I have the resources, knowledge, and ability to use the AI product

Performance Expectancy (Venkatesh et al., 2003)
PE1 Using the AI product would improve my work performance
PE2 Using the AI product would be helpful in my work
PE3 Using the AI product would enhance the effectiveness of my work
PE4 Using the AI product would improve my work performance

Effort Expectancy (Venkatesh et al., 2003)
EE1 Interaction with the AI product would be clear and understandable
EE2 It would be easy for me to become skillful at using the AI product
EE3(R) I would find the AI product not easy to use
EE4 Learning to operate the AI product would be easy for me

Enjoyment (Lin et al., 2012; Agarwal and Karahanna, 2000)
Enj1 I would have fun interacting with the AI product
Enj2 Using the AI product would provide me with a lot of enjoyment
Enj3 I would enjoy using the AI product
Enj4(R) Using the AI product would bore me

Perceived Fee (Kim et al., 2007; Voss et al., 1998)
PF1 The fee that I would have to pay for use of the AI product is too high
PF2 The fee that I would have to pay for use of the AI product is reasonable
PF3(R) I am pleased with the fee that I would have to pay for use of the AI product

Technicality (Kim et al., 2007; Davis, 1989)
Tech1 It would not be easy to use the AI product technically
Tech2 It would not be easy to operate the AI product
Tech3 It would take quite some time to get familiar with the AI product
Tech4 It looks a little difficult to use the AI product

Perceived Value (Kim et al., 2007; Sirdeshmukh et al., 2002)
PV1 Compared to the fee I would need to pay, the AI product offers value for money
PV2 Compared to the effort I would need to put in, the AI product is beneficial to me
PV3 Compared to the time I would need to spend, the AI product is worthwhile to me
PV4 Overall, the AI product delivers good value

Behavioral Intention (Rahman et al., 2017)
BI1 I intend to use the AI product in the future
BI2 I intend to use the AI product frequently
BI3 I intend to recommend that other people use the AI product
BI4 I intend to buy the AI product in the future

Purchase Intention (Davis et al., 1989)
PI1 I intend to purchase the AI product in the future
PI2 When I purchase a product/service, the AI product will be considered first
PI3 I intend to recommend the AI product to people around me
PI4 I plan to purchase the AI product soon

Table 3
Factor loadings of indicator variables.
Model Construct Items Factor loading Cronbach’s alpha C.R AVE

TAM Perceived Usefulness PU_1 0.865 0.928 0.931 0.770
PU_2 0.852
PU_3 0.897
PU_4 0.878
Perceived Ease of Use PEoU_1 0.859 0.874 0.877 0.705
PEoU_2 0.879
PEoU_4 0.772
TPB Attitude A_2 0.736 0.814 0.754 0.507
A_4 0.858
A_5 0.735
Subjective Norms SN_1 0.893 0.868 0.885 0.721
SN_2 0.892
SN_3 0.716
Perceived Behavioral Control PBC_1 0.804 0.819 0.823 0.611
PBC_2 0.886
PBC_4 0.657
UTAUT Performance Expectancy PE_1 0.895 0.941 0.940 0.796
PE_2 0.885
PE_3 0.889
PE_4 0.909
Social Influence SN_1 0.896 0.868 0.885 0.721
SN_2 0.889
SN_3 0.717
Effort Expectancy EE_1 0.816 0.844 0.835 0.628
EE_2 0.809
EE_4 0.776
VAM Usefulness PU_1 0.872 0.928 0.931 0.771
PU_2 0.854
PU_3 0.890
PU_4 0.877
Enjoyment Enj_1 0.860 0.927 0.935 0.828
Enj_2 0.929
Enj_3 0.914
Technicality Tech_1 0.742 0.923 0.883 0.654
Tech_2 0.897
Tech_3 0.891
Tech_4 0.937
Perceived Fee PF_1 0.976 0.870 0.844 0.733
PF_2 0.789
Perceived Value PV_1 0.856 0.941 0.940 0.797
PV_2 0.907
PV_3 0.914
PV_4 0.898
Behavioral Intention BI_1 0.840 0.897 0.910 0.718
BI_2 0.849
BI_3 0.761
BI_4 0.870

In addition, Cronbach's α was computed for the retained items to verify the internal consistency of the factors; the reliability of the items used in the analysis was confirmed by a Cronbach's α coefficient of 0.6 or higher for all factors.
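
For illustration, the sketch below (not the authors' code) shows how the reliability and convergent validity statistics reported in Table 3 can be computed from raw item scores and standardized factor loadings; the example loadings are the Perceived Usefulness values in Table 3, and all variable names are illustrative.

```python
# Minimal sketch of Cronbach's alpha, composite reliability (CR), and average
# variance extracted (AVE). `items` is a DataFrame of the raw scores of the
# items belonging to one construct; `loads` are standardized factor loadings.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

def composite_reliability(loads: np.ndarray) -> float:
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    return loads.sum() ** 2 / (loads.sum() ** 2 + (1 - loads ** 2).sum())

def average_variance_extracted(loads: np.ndarray) -> float:
    """AVE = mean of the squared standardized loadings."""
    return (loads ** 2).mean()

# Example: Perceived Usefulness loadings from Table 3 give CR ~ 0.93, AVE ~ 0.76.
pu_loads = np.array([0.865, 0.852, 0.897, 0.878])
print(composite_reliability(pu_loads), average_variance_extracted(pu_loads))
```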


Table 4
Comparison of model fit between models.
Model χ2 Q (χ2/df) GFI NFI TLI PGFI

TAM 77.891 1.900 0.964 0.975 0.984 0.599
TPB 184.312 3.124 0.931 0.939 0.944 0.603
UTAUT 287.008 4.042 0.900 0.932 0.933 0.609
VAM 354.157 2.035 0.917 0.951 0.969 0.690

Note: χ2 (chi-square), df (degrees of freedom), GFI (goodness of fit index), NFI (normed fit index), TLI (Tucker-Lewis index), PGFI (parsimony goodness of fit index).

3.5. Common method bias

Because all measurement items related to technology acceptance in this study were collected with a single online survey, the results may be biased by temporary changes in respondents' mood, repeated measurement of similar items, or associative effects. Given the large number of items included in this study, we therefore conducted Harman's single-factor test for common method variance based on the protocol of Podsakoff and Organ (1986). Analysis of all the factors showed that the single extracted factor explained 36.418% of the variance, which is below the commonly used 50% threshold, indicating no serious problem related to common method bias.
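
A minimal sketch of Harman's single-factor test is given below. The paper's analysis was presumably run in a statistical package such as SPSS; this open-source approximation assumes the raw item scores are available in a DataFrame and uses the factor_analyzer package.

```python
# Minimal sketch: fit a single unrotated factor to all survey items and check
# how much of the total variance it explains. `items_df` is an assumed pandas
# DataFrame with one column per measurement item.
from factor_analyzer import FactorAnalyzer

def harman_single_factor(items_df):
    fa = FactorAnalyzer(n_factors=1, rotation=None)
    fa.fit(items_df)
    _, proportion, _ = fa.get_factor_variance()  # (SS loadings, proportion, cumulative)
    return proportion[0]                         # ~0.36 here; > 0.50 would signal a problem
```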

4. Results

4.1. Comparison between models

Structural equation modelling (SEM) was conducted using Amos 24 for each model (the TAM, TPB, UTAUT, and VAM) to verify
their utility in evaluating BI to use AI-based intelligent products. The results for the TAM showed that PU (β = 0.354, p < 0.01) and
PEoU (β = 0.558, p < 0.01) had significant effects on BI with an explained variance of 63.0% (R2 = 0.630). The results for the TPB
showed that attitude (β = 0.202, p < 0.01), perceived behavioral control (β = 0.323, p < 0.01), and subjective norms (β = 0.474,
p < 0.01) had significant effects on BI and that TPB explained 66.8% of the variance (R2 = 0.668). The results for the UTAUT
showed that performance expectancy (β = 0.303, p < 0.01), effort expectancy (β = 0.315, p < 0.01), and social influence
(β = 0.396, p < 0.01) had significant effects on BI, and the UTAUT explained 71.6% of the variance (R2 = 0.716). The results for
the VAM showed that enjoyment (β = 0.631, p < 0.01), technicality (β = -0.132, p < 0.01), and PV (β = 0.255, p < 0.01) had
significant effects on BI and that the VAM explained 77.2% of the variance (R2 = 0.772).
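
The structural models above were estimated with Amos 24. As an open-source illustration only, the sketch below specifies the VAM in the lavaan-style syntax of the semopy package, using the item names of Tables 2 and 3 as assumed column names of the survey DataFrame.

```python
# Minimal sketch (not the authors' AMOS setup): measurement and structural
# parts of the VAM, with direct paths to BI in addition to mediation via PV,
# as stated in Section 3.1.
import pandas as pd
import semopy

VAM_DESC = """
Usefulness =~ PU_1 + PU_2 + PU_3 + PU_4
Enjoyment =~ Enj_1 + Enj_2 + Enj_3
Technicality =~ Tech_1 + Tech_2 + Tech_3 + Tech_4
PerceivedFee =~ PF_1 + PF_2
PerceivedValue =~ PV_1 + PV_2 + PV_3 + PV_4
BehavioralIntention =~ BI_1 + BI_2 + BI_3 + BI_4
PerceivedValue ~ Usefulness + Enjoyment + Technicality + PerceivedFee
BehavioralIntention ~ PerceivedValue + Usefulness + Enjoyment + Technicality + PerceivedFee
"""

def fit_vam(df: pd.DataFrame):
    """df: one column per survey item (PU_1 ... BI_4)."""
    model = semopy.Model(VAM_DESC)
    model.fit(df)
    return model.inspect(), semopy.calc_stats(model)  # path estimates, fit statistics
```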
Model fit tests were performed for each model, the results of which are summarized in Table 4. The validation tests showed that all models were significant (p < 0.001), with GFI, NFI, and TLI values all above 0.9, indicating good model fit. However, these goodness-of-fit indices only indicate how well a research model reflects the data or how suitable it is relative to a null model; it is therefore difficult to statistically verify differences in descriptive power between independent models. To alleviate this problem, we employed ordinary least squares (OLS) regression in SPSS 24, together with Hotelling's T2 test, to compare the models.
Hotelling’s T2 test was employed to verify non-independent correlations among the models for BI. Hotelling’s T2 test has been
mainly used in studies of quality control (Williams et al., 2006; Faraz and Moghadam, 2009); it has rarely been used to compare
models explaining acceptance phenomena. The results showed that the comparisons between the TAM and TPB, the TPB and UTAUT, and the UTAUT and VAM were statistically significant (Fig. 2). In the end, we verified that the VAM (adj. R2 = 0.679) is the best model to explain BI and that the TAM (adj. R2 = 0.483) had the lowest utility.
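
The paper does not spell out the exact form of this computation. One standard formulation, sketched below purely as an assumption, is the classic Hotelling t-test for two dependent, overlapping correlations, applied here to the correlations of each model's fitted BI scores with observed BI on the same sample.

```python
# Hedged sketch: compare corr(BI, BI_hat_A) with corr(BI, BI_hat_B) when both
# predictions come from the same respondents (hence the correlations overlap
# in the observed BI variable and are not independent).
import numpy as np
from scipy import stats

def hotelling_dependent_r(bi, bi_hat_a, bi_hat_b):
    n = len(bi)
    r12 = np.corrcoef(bi, bi_hat_a)[0, 1]        # model A prediction vs observed BI
    r13 = np.corrcoef(bi, bi_hat_b)[0, 1]        # model B prediction vs observed BI
    r23 = np.corrcoef(bi_hat_a, bi_hat_b)[0, 1]  # overlap between the two predictions
    det = 1 - r12**2 - r13**2 - r23**2 + 2 * r12 * r13 * r23  # determinant of the correlation matrix
    t = (r12 - r13) * np.sqrt((n - 3) * (1 + r23) / (2 * det))
    p = 2 * stats.t.sf(abs(t), df=n - 3)         # two-sided p-value
    return t, p
```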

4.2. Decomposition analysis

In this study, decomposition analysis was applied for the first time in IS research to investigate the proportional influence of factors in models that include purchase intention.

Fig. 2. Results of comparison between models.

In this approach, regression analysis is conducted using unstandardized regression coefficients that relate changes in the independent variables to the dependent variable (Hou and Loh, 2016). In other words, the effect of an increase in an independent variable on the dependent variable can be decomposed into the gradient values of the explanatory variables, so that differences in importance among the variables can be verified in proportion to one another. Decomposition analysis is used
differently in each field of application and by each researcher (Adomian, 1988; Ang, 2004). In this study, we employed the methodology suggested by Hou and Loh (2016), which was developed in the finance field to decompose the relationship between idiosyncratic volatility and subsequent stock returns. The decomposition analysis utilized in this study has a single key variable that influences the dependent
variable to be measured. It consists of the candidate variables that explain the key variable. In this study, intention to purchase AI-
based intelligent products was set as the dependent variable, and BI was set as the key variable that has the greatest influence on PI.

4.2.1. Reliability and validity of constructs in the decomposition analysis


First, an integrated factor analysis was performed based on conceptual similarities among factors that significantly affect BI. Items
with communalities and factor loadings < 0.5, which may indicate problems with their validity, were deleted. Seven factors were
extracted: usefulness (PU and PE), ease of use (PEoU, PBC, EE), technicality (reverse items of PEoU, PBC, and EE, plus Technicality), subjective norms, enjoyment, PF, and PV. For usefulness, the concept of PE (“The degree to which an individual believes that using the system will help
him or her to attain gains in job performance”, Venkatesh et al., 2003) originated from PU (“The degree to which a person believes
that using a particular system would enhance his or her job performance”, Davis, 1989). Therefore, PU and PE were incorporated into
one factor. In the case of ease of use, PEoU (“The degree to which a person believes that using a particular system would be free of
effort”, Davis, 1989), PBC (“Perceived ease or difficulty performing the behavior”, Ajzen, 1991), and EE (“Degree of ease associated
with the use of the system”, Venkatesh et al., 2003) were incorporated into one factor because of their conceptual similarity. Finally,
the reverse items of each of PEoU, PBC, and EE were related to difficulties in use; therefore, they were conflated into one factor:
Technicality. To verify the internal consistency of these factors, Cronbach's α test was conducted on the extracted items, and the
reliability of all items was verified by a coefficient greater than 0.6 (Table 5).

4.2.2. Analysis of influence ratio between variables


Our decomposition analysis begins with the assumption that BI is a key variable that affects PI. The regression equation is as
follows:
PI = α1 + γ ·BI + ε1 (1)
The equation that represents the effect of explanatory variables (candidates) in the TAM, TPB, UTAUT, and VAM on BI can be
expressed in the following ways (Table 6):
BI = α2 + δ·candidate + ε2 (2)

PI = α1 + γ (α2 + δ·candidate + ε2) + ε1 (3)


In this equation, γ is the gradient, i.e., the covariance between BI and PI divided by the variance of BI. These equations can also be expressed as follows:
γ = Cov(α2 + δ·candidate + ε2, PI)/Var[BI] = Cov(α2 + ε2, PI)/Var[BI] + Cov(δ·candidate, PI)/Var[BI] (4)
In Eq. (4), the intercept α2 and residual ε2 are components that do not explain the dependent variable, and δ·candidate is the component of the explanatory variables that does explain the dependent variable. Thus, we can investigate how much each explanatory variable affects BI, expressed as a percentage of the total explanation of the PI variable. The results of the decomposition analysis are shown in Fig. 3.
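
A minimal sketch of this two-stage decomposition (not the authors' code) is shown below. It assumes construct scores in a pandas DataFrame with hypothetical column names and returns each candidate's share of the total effect, mirroring Eq. (4).

```python
# Minimal sketch following Hou and Loh (2016): regress PI on BI, regress BI on
# the candidate factors (Table 6), then attribute a share of Cov(BI, PI) to
# each candidate. Column names are assumed placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

CANDIDATES = ["Usefulness", "EaseOfUse", "Technicality", "Attitude",
              "SubjectiveNorms", "Enjoyment", "PerceivedFee", "PerceivedValue"]

def decomposition_shares(df: pd.DataFrame) -> pd.Series:
    # Stage 1 (Eq. 1): PI = a1 + gamma*BI + e1, so gamma = Cov(BI, PI) / Var(BI)
    cov_bi_pi = np.cov(df["BI"], df["PI"])[0, 1]

    # Stage 2 (Eq. 2, estimated jointly as in Table 6): BI on all candidates
    X = sm.add_constant(df[CANDIDATES])
    delta = sm.OLS(df["BI"], X).fit().params

    # Eq. (4): each candidate's share is Cov(delta_i*candidate_i, PI) / Cov(BI, PI)
    shares = {c: np.cov(delta[c] * df[c], df["PI"])[0, 1] / cov_bi_pi for c in CANDIDATES}
    return pd.Series(shares) * 100  # percentages; the remainder is the unexplained part
```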
The results showed that 84.95% of the PI variable could be explained by the overall explanatory factors, and the proportion of

Table 5
Internal consistency and correlations between constructs.
Usefulness Ease of Use Technicality Attitude Subjective Norms Enjoyment Perceived Fee Perceived Value Behavioral Intention Purchase Intention

Usefulness 0.958
Ease of Use 0.500** 0.921
Technicality −0.079 −0.484** 0.923
Attitude 0.457** 0.424** −0.188** 0.845
Subjective Norms 0.520** 0.570** −0.134** 0.368** 0.868
Enjoyment 0.652** 0.626** −0.190** 0.455** 0.603** 0.927
Perceived Fee 0.080 −0.082 0.096 −0.019 −0.140** −0.001 0.687
Perceived Value 0.677** 0.485** 0.005 0.406** 0.548** 0.600** −0.127* 0.941
Behavioral Intention 0.625** 0.675** −0.247** 0.466** 0.687** 0.787** −0.102* 0.637** 0.897
Purchase Intention 0.584** 0.526** −0.123* 0.382** 0.576** 0.636** −0.096* 0.677** 0.731** 0.892

Note: Internal consistency (Cronbach’s α) statistics are on the diagonal.


*Correlation is significant at the 0.05 level (1-tailed).
**Correlation is significant at the 0.01 level (2-tailed).


Table 6
Multiple linear regression results of decomposition analysis.
Dependent Variable Independent Variable B SE β Adj. R2

Behavioral Intention Usefulness 0.047 0.039 0.050 0.730**
Ease of Use 0.157 0.044 0.154**
Technicality −0.035 0.024 −0.049
Attitude 0.033 0.025 0.042
Subjective Norms 0.207 0.035 0.221**
Enjoyment 0.378 0.038 0.414**
Perceived Fee −0.036 0.026 −0.040
Perceived Value 0.119 0.036 0.137**
Purchase Intention Behavioral Intention 0.701 0.034 0.731** 0.533**

**p < 0.01, *p < 0.05.

Fig. 3. Proportional influence of explanatory variables.

enjoyment was largest (36.12%). Next, the proportions of subjective norms (17.43%) and PV (12.68%) were similar, and the proportions of PF (0.52%) and technicality (0.83%), which had a negative influence on PI, were very low.

4.3. Comparison between categories of AI-based intelligent products

Further analysis was conducted to verify differences in the proportional effects of the explanatory factors on PI between product categories. Data were divided into the categories of AI-based intelligent products that respondents indicated they were interested in using
in future. Analysis results by category (Fig. 4) showed that the overall ability of the explanatory factors to explain the variance for the
PI was 94.68% for the smart speaker, 81.39% for the voice assistant services, and 91.34% for AI-based home appliances. The
proportional influence of the explanatory factors differed slightly for each category.
A one-way ANOVA test and Tukey’s HSD test were conducted to verify statistical differences in explanatory factors according to
categories of AI-based intelligent products (Table 7).
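
For reference, a minimal sketch of this category comparison (not the authors' SPSS procedure) is shown below; it assumes a DataFrame with one construct-score column per factor and a hypothetical "category" column holding the product type.

```python
# Minimal sketch: one-way ANOVA across product categories for a given construct,
# followed by Tukey's HSD post-hoc pairwise comparison, as reported in Table 7.
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def compare_categories(df: pd.DataFrame, construct: str):
    groups = [g[construct].values for _, g in df.groupby("category")]
    f_stat, p_value = stats.f_oneway(*groups)            # one-way ANOVA
    tukey = pairwise_tukeyhsd(endog=df[construct],        # post-hoc pairwise test
                              groups=df["category"], alpha=0.05)
    return f_stat, p_value, tukey.summary()
```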
The results showed that the average value for the technicality variable in AI-based home appliances (3.652) was significantly
higher than that for the smart speaker (3.392). This means that smart speakers were perceived to be less complex to use than home appliances, which currently offer a variety of AI-based functions; smart speakers are popular because they can perform all of their functions in response to voice commands. In addition, the value for the PF variable in AI-based home appliances (4.815) was higher than that for
the smart speaker (4.496). This result may be based on users' perceptions of the high cost of applying AI technology to home
appliances such as refrigerators and air conditioners.

5. Discussion

5.1. Main findings

First, to assess the utility of technology acceptance theories and determine the best model for predicting behavioral intention to
use AI-based intelligent products, we compared four models and their factors: the TAM, TPB, UTAUT, and VAM. This comparison
confirmed that the VAM was the best predictor in the context of AI-based intelligent products based on the adjusted R2 value,
followed by the UTAUT, TPB, and TAM. Differences in the predictive ability between models stemmed from the relative influence of
factors in each model; enjoyment was particularly crucial in identifying the VAM as better than the others. Although several factors in


Fig. 4. Differences in influence by category.

Table 7
Results of testing for differences in explanatory factors by category.
Variable Smart speaker Voice assistant Home appliances F (α) Welch’s T test (α) Tukey’s HSD
(n = 185)(a) (n = 71)(b) (n = 209)(c)
Mean (SD) Mean (SD) Mean (SD)

Usefulness 5.183 (0.926) 5.275 (0.878) 5.304 (0.856) 0.937 (0.393) 0.913 (0.403) –
Ease of Use 5.207 (0.768) 5.180 (0.866) 5.059 (0.800) 1.820 (0.163) 1.849 (0.160) –
Technicality 3.392 (1.058) 3.641 (1.183) 3.652 (1.100) 3.081 (0.047*) 3.155 (0.045*) c > a
Attitude 5.477 (0.953) 5.349 (1.020) 5.480 (0.937) 0.551 (0.577) 0.494 (0.611) –
Subjective Norms 4.984 (0.881) 5.146 (1.003) 5.030 (0.847) 0.858 (0.425) 0.719 (0.489) –
Enjoyment 5.359 (0.869) 5.235 (1.010) 5.359 (0.885) 0.569 (0.566) 0.467 (0.628) –
Perceived Fee 4.496 (0.863) 4.639 (0.856) 4.815 (0.908) 6.452 (0.002**) 6.389 (0.002**) c > a
Perceived Value 4.862 (0.963) 4.856 (1.001) 4.923 (0.908) 0.258 (0.773) 0.260 (0.772) –

**p < 0.01, *p < 0.05.

the subjective norms (=SI) category were similar, the UTAUT performed better than the TPB because the influence of attitude was minor. In the case of the TAM, although ease of use was highly influential, its performance was the lowest because only two factors, PU and PEoU, explained BI.
Second, we conducted a decomposition analysis to investigate the proportional influence of factors included in the technology acceptance models on BI to use AI-based intelligent products. The results of the analysis confirmed an overwhelmingly large proportion for enjoyment, followed by subjective norms, PV, and ease of use. These results may be due to the fact that the diffusion of AI-
based intelligent products is moving from the innovation stage to the early adoption stage (Gartner, 2017), which is characterized by
public curiosity to acquire knowledge of new technology. Acquisition of knowledge about a new technology has been shown to
correlate with enjoyment (Perlovsky et al., 2010). In addition, despite their technical limitations, these products have been shown to


provide utilitarian value. However, at this point, in the context of AI-based intelligent products, the hedonic aspects of seeking new
experiences and curiosity about technology are more important than utilitarian aspects (i.e., weighing the benefits that can be gained
against the costs).
Subjective norms had the next largest proportion, which indicates that AI technology is a highly interesting technology from a
social point of view, but it still lacks practical use experience, so potential users remain affected by other people’s opinions in their
decision-making to adopt AI-based intelligent products. In other words, while AI technology is being developed at a rapid pace in
combination with a variety of products, it is still in the early stages of social acceptance, so the influence of others still plays an
important role in building trust in these products (Li et al., 2008). PV also had a large proportion, indicating high user expectations
about the performance and value of AI-based intelligent products. This interpretation is also supported by the positive simulation
results of a recent study (André et al., 2018). The context is similar to that of expectancy-value theory (Wigfield and Eccles, 2000),
which explains the psychological motivation for believing in and expecting a good performance as a crucial element for adoption.
Ease of use had a proportion similar to that of PV. Prior technology adoption studies have tended to overlook the importance of PEoU, but a relative increase in the influence of ease of use has been observed for intelligent products, which have developed very quickly and in a variety of forms (Meyer et al., 2009).
The results of the decomposition analysis by category are interesting. The categories of AI-based intelligent products are diverse
because AI technology has been applied in various market sectors. The results of the decomposition analysis in this study confirmed
that users considered the characteristics of the products with which AI is combined rather than the AI technology itself (except in terms of enjoyment). For
the smart speaker, the influence of subjective norms (20.26%) was greatest as a result of the recent rapid growth in the market and
the increased number of users. Information provided by experienced users is crucial to potential users in terms of PI. In addition, the
influence of ease of use (19.59%) was also large; this result is attributed to the characteristics of the smart speaker, such as voice
recognition and connection with various products or services. Voice assistant services are also quite familiar to smartphone users.
When sufficient user experience has been accumulated, users become more aware of the utility of voice assistant services; therefore,
the influence of usefulness (18.20%) increases. For recently-released AI-based home appliances, PV (22.66%) was the most influential
factor, reflecting the expectation that AI technology offers higher value compared with the products users have experienced previously. This makes
sense because home appliances are frequently used in daily life.

5.2. Theoretical implications

This study has important theoretical implications for future studies on the acceptance of AI-based intelligent products. First, this
study presents information to aid in model selection to explain technology acceptance phenomena. While there are many studies
demonstrating the superiority of the TAM in evaluating IT acceptance (Ukpabi and Karjaluoto, 2017; Marangunić and Granić, 2015;
Wallace and Sheetz, 2014), the pace of innovation is rapidly accelerating beyond the abilities of this model. Therefore, it is necessary
to consider other suitable models according to the patterns of development and diffusion of current innovations (e.g., AI, IoT, AR/VR)
rather than explaining current technology acceptance phenomena by adding new explanatory factors to this existing model. In the
case of AI-based intelligent products, the VAM was assessed as the best model to explain adoption, whereas the predictive ability of
the TAM was the lowest. These results suggest that the TAM, which is most often employed in the study of technology adoption, may
not be the best model to explain emerging new technologies, particularly AI-based intelligent products.
Second, this study enhances our understanding of the cognitive structure of people interested in AI-based intelligent products.
Until now, few studies have examined the acceptance of intelligent products, which are usually combined with widely used ICT
products. In this study, we confirmed that enjoyment is more crucial than usefulness in the context of acceptance of intelligent
products or services, which differs from the results of previous research (Wu and Chen, 2017; Ooi and Tan, 2016; Renko and
Druzijanic, 2014) that focused primarily on the influence of usefulness. This suggests that AI technology still has a technological
limitation in terms of utility and that users approach it based on curiosity about new technologies rather than feelings about its
usefulness.
Third, we employed a new methodology, decomposition analysis, to explain technology acceptance phenomena in an IS field.
Unlike prior research methods such as multiple regression or PLS, which have only analyzed relative differences in the significance
and influence of causality between factors, quantifying and comparing differences in influence among factors is a novel and informative approach. This study provides a foundation for practical utilization of this methodology, allowing researchers to go beyond
theoretical implications. This new perspective may be utilized in various fields as well as technology adoption research in the future.

5.3. Practical implications

This study has some practical implications for the diffusion of AI-based intelligent products. First, the results indicate that enjoyment is a crucial factor influencing PI in the context of AI-based intelligent products, unlike other products, which has implications
for product design and advertising. Different factors affect acceptance of technology in the IS field, and consumers of AI-based
intelligent products have an interest in the technology itself rather than considering its usefulness. For designers of these products, it
is necessary to consider features, designs, and content that will arouse people's interest. However, if additional features are added
simply to pique interest, consumers who do not feel the need for AI-based intelligent products may refrain from purchasing.
Therefore, product development strategies must also consider the product’s usefulness before full-scale diffusion is attempted.
Second, to promote purchase of AI-based intelligent products in a social atmosphere characterized by diversity of opinion regarding AI technology, product designers can actively leverage the influence of subjective norms, which were verified in this study to


affect PI of potential consumers. The finding that enjoyment and subjective norms have more impact on acceptance of AI-based
intelligent products than usefulness can be explained using the Innovation Resistance Model (Ram, 1987). From the perspective of
this model, consumers may be resistant to innovation because of negative feelings such as fear, uncertainty, and doubt; this is
certainly true in the context of changes that AI-based intelligent products will bring (Rogers, 2003). Therefore, it is necessary to
develop and present a positive scenario in which AI-based intelligent products may be used and to publicize the successes of experienced users. It is also important to establish a positive image for these products, which are still in the early adoption stage.
Third, AI-based intelligent products fit into categories that differ in terms of user experience and market spread; therefore,
different factors should be considered in each category. Product development should be prioritized accordingly, and user satisfaction
and market share of competing products in the same category should also be considered. For example, in 2017, Amazon Echo sold 31
million units and Google Home sold 14 million units, and Amazon accounted for 51.8% of the smart speaker market (Business Wire,
2017). Amazon has a market advantage because of its pioneer status in the smart speaker market, and consumers appreciate the ease
of use of Amazon Echo, which connects more than 12,000 devices of 2000 brands as compared to Google Home, which connects only
5000. Home appliances are most frequently used in daily life, but consumer experience of using products with AI technology has not
accumulated yet. In this particular area, the impact of technicality and PF was confirmed to be significant, indicating that barriers to
purchase do exist. Therefore, highlighting price differentiation and ease of use may help to grow the AI-based home appliances
market. Based on the results of this study, factors that affect PI in each category can be used as indicators for the direction of
development, making products more competitive in the market and increasing consumer satisfaction.

5.4. Limitations

This study has some limitations. First, generalization of the results regarding acceptance of AI-based intelligent products may be
limited because the data used in this study were collected only in South Korea. Further studies in various countries will allow more
generalized conclusions on the acceptance of AI-based intelligent products. In addition, we did not consider robots in this study.
Despite the potentially large market, robots are not currently available for individual purchase. Although they are definitely part of
the AI technology world, they were not included because the characteristics of robots vary considerably depending on their application, ranging from social service to industrial. In addition, the AI technology included in this study featured not only voice recognition, but also various other technologies, consideration of which was beyond the scope of this study. For technologies related to
virtual reality and augmented reality, few virtual-reality systems are available for purchase by individuals; therefore, they also were
not considered in this study. We call for future research on consumer behavior regarding adoption of AI technologies in areas other
than the three considered in this paper. Next, high correlations were found among the factors employed in this study. Thus, additional
variables must be identified that may affect intention to purchase AI-based intelligent products through a technology acceptance theory with more independent constructs. Finally, although decomposition analysis, as borrowed from the financial sector, was applied in this study,
it is a very formulaic mechanism. Therefore, further verification is required to ensure the suitability of survey data for use with
decomposition analysis.

6. Conclusion

AI-based intelligent products will be developed in more diverse ways and evaluated by consumers more frequently as AI technology evolves. However, the development of this technology and its application to various fields are not enough to ensure consumer use and discovery of the potential benefits it provides. Therefore, advance knowledge of success factors related to AI-based intelligent products is necessary from the planning stage.
Because of the potential for AI technology to change society, AI-based intelligent products will have a great impact on life. In this
study, we assessed the utility of the TAM, TPB, UTAUT, and VAM for evaluation of AI-based intelligent products to promote their
diffusion. For this, we employed Hotelling's T2 test and decomposition analysis to investigate BI of users. The results indicated that
the VAM is best in terms of predictability and that enjoyment has the greatest influence. The results of this study indicate that people are
interested in these products but that more effort must be made to enhance their usefulness in future. Future studies may seek to
identify external factors other than those utilized in the TAM, TPB, UTAUT, and VAM and investigate what factors affect usefulness.
These factors can then be used to expand the model that best explains acceptance of AI-based intelligent products.

Acknowledgment

This work was supported by the Ministry of Education of the Republic of Korea and the National Research Foundation of Korea
(NRF-2017S1A3A2066740).

References

Adams, D.A., Nelson, R.R., Todd, P.A., 1992. Perceived usefulness, ease of use, and usage of information technology: a replication. MIS Q 16 (2), 227–247.
Adapa, A., Nah, F.F.H., Hall, R.H., Siau, K., Smith, S.N., 2018. Factors influencing the adoption of smart wearable devices. Int. J. Hum. Comput. Interact. 34 (5),
399–409.
Adomian, G., 1988. A review of the decomposition method in applied mathematics. J. Math. Anal. Appl. 135 (2), 501–544.
Agarwal, R., Karahanna, E., 2000. Time flies when you're having fun: cognitive absorption and beliefs about information technology usage. MIS Q. 24 (4), 665–694.
Aguilar, J., Garcia, G., 2018. An adaptive intelligent management system of advertising for social networks: A case study of Facebook. IEEE Trans. Comput. Soc. Syst. 5


(1), 20–32.
Ajzen, I., 1985. From intentions to actions: a theory of planned behavior. In Action Control. Springer, Berlin, Heidelberg, pp. 11–39.
Ajzen, I., 1991. The theory of planned behavior. Organ. Behav. Hum. Decis. Process. 50 (2), 179–211.
Ajzen, I., Fishbein, M., 1973. Attitudinal and normative variables as predictors of specific behavior. J. Pers. Soc. Psychol. 27 (1), 41–57.
André, Q., Carmon, Z., Wertenbroch, K., Crum, A., Frank, D., Goldstein, W., Huber, J., Van Boven, L., Weber, B., Yang, H., 2018. Consumer choice and autonomy in the
age of artificial intelligence and big data. Cust. Need Solut. 5, 28–37.
Ang, B.W., 2004. Decomposition analysis for policymaking in energy: which is the preferred method? Energy Policy 32 (9), 1131–1139.
Basoglu, N., Ok, A.E., Daim, T.U., 2017. What will it take to adopt smart glasses: a consumer choice based review? Technol. Soc. 50, 50–56.
Berggren, N., Elinder, M., Jordahl, H., 2008. Trust and growth: a shaky relationship. Empir. Econ. 35 (2), 251–274.
Business wire, 2017. Global Smart Speaker Vendor & OS Shipment and Installed Base Market Share by Region: Q4 2017. Available from: https://www.businesswire.
com/news/home/20180227006077/en/Strategy-Analytics-Explosive-Growth-Smart-Speakers-Continues.
Cabada, R.Z., Estrada, M.L.B., Hernández, F.G., Bustillos, R.O., Reyes-García, C.A., 2018. An affective and web 3.0-based learning environment for a programming
language. Telemat. Inform. 35 (3), 611–628.
Changchit, C., 2003. An investigation into the feasibility of using an Internet-based intelligent system to facilitate knowledge transfer. J. Comput. Inf. Syst. 43 (4),
91–99.
Chen, Y., Le, D., Yumak, Z., Pu, P., 2017. EHR: A sensing technology readiness model for lifestyle changes. Mobile Netw. Appl. 22 (3), 478–492.
Chuah, S.H.W., Rauschnabel, P.A., Krey, N., Nguyen, B., Ramayah, T., Lade, S., 2016. Wearable technologies: the role of usefulness and visibility in smartwatch
adoption. Comput. Hum. Behav. 65, 276–284.
Davis, F.D., 1985. A technology acceptance model for empirically testing new end-user information systems: Theory and results. Doctoral dissertation. Massachusetts
Institute of Technology.
Davis, F.D., 1989. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 13 (3), 319–339.
Davis, F.D., Bagozzi, R.P., Warshaw, P.R., 1989. User acceptance of computer technology: A comparison of two theoretical models. Manage. Sci. 35 (8), 982–1003.
Fan, W., Liu, J., Zhu, S., Pardalos, P.M., 2018. Investigating the impacting factors for the healthcare professionals to adopt artificial intelligence-based medical
diagnosis support system (AIMDSS). Ann. Oper. Res. 1–26.
Faraz, A., Moghadam, M.B., 2009. Hotelling’s T 2 control chart with two adaptive sample sizes. Qual. Quant. 43 (6), 903–912.
Fernández-Llamas, C., Conde, M.A., Rodríguez-Lera, F.J., Rodríguez-Sedano, F.J., García, F., 2018. May I teach you? Students' behavior when lectured by robotic vs.
human teachers. Comput. Hum. Behav. 80, 460–469.
Gao, Y., Li, H., Luo, Y., 2015. An empirical study of wearable technology acceptance in healthcare. Ind. Manage. Data Syst. 115 (9), 1704–1723.
Gartner, 2017. Gartner’s 2017 Hype Cycle for Artificial Intelligence. Available from: https://www.gartner.com/doc/3770467/hype-cycle-artificial-intelligence.
González García, C., Meana Llorián, D., Pelayo García-Bustelo, B.C., Cueva Lovelle, J.M., 2017. A review about smart objects, sensors, and actuators. Int. J. Interact
Multimedia Artif Intell.
Groß, M., 2015. Mobile shopping: A classification framework and literature review. Int. J. Retail Distrib. Manag. 43 (3), 221–241.
Gu, Z., Wei, J., Xu, F., 2016. An empirical study on factors influencing consumers' initial trust in wearable commerce. J. Comput. Inf. Syst. 56 (1), 79–85.
Hamari, J., Koivisto, J., 2013, June. Social motivations to use gamification: An empirical study of gamifying exercise. In ECIS (Vol. 105).
Hendrickson, A.R., Massey, P.D., Cronan, T.P., 1993. On the test-retest reliability of perceived usefulness and perceived ease of use scales. MIS Q. 17 (2), 227–230.
Hou, K., Loh, R.K., 2016. Have we solved the idiosyncratic volatility puzzle? J. Financ. Econ. 121 (1), 167–194.
Hsieh, P.J., 2015. Healthcare professionals’ use of health clouds: integrating technology acceptance and status quo bias perspectives. Int. J. Med. Inform. 84 (7),
512–523.
Hsieh, P.J., 2016. An empirical investigation of patients’ acceptance and resistance toward the health cloud: The dual factor perspective. Comput. Hum. Behav. 63,
959–969.
Larue, G.S., Rakotonirainy, A., Haworth, N.L., Darvell, M., 2015. Assessing driver acceptance of Intelligent Transport Systems in the context of railway level crossings.
Transp. Res. Pt. F-Traffic Psychol. Behav. 30, 1–13.
Lee, Y., Kozar, K.A., Larsen, K.R., 2003. The technology acceptance model: Past, present, and future. Commun. Assoc. Inf. Syst. 12, 752–780.
Li, X., Hess, T.J., Valacich, J.S., 2008. Why do we trust new technology? A study of initial trust formation with organizational information systems. J. Strateg. Inf. Syst.
17 (1), 39–71.
Liang, Y., Lee, S.A., 2017. Fear of autonomous robots and artificial intelligence: Evidence from national representative data with probability sampling. Int. J. Soc.
Robot. 9 (3), 379–384.
Lim, J., 2003. A conceptual framework on the adoption of negotiation support systems. Inf. Softw. Technol. 45 (8), 469–477.
Lin, T.C., Wu, S., Hsu, J.S.C., Chou, Y.C., 2012. The integration of value-based adoption and expectation–confirmation models: An example of IPTV continuance
intention. Decis. Support Syst. 54 (1), 63–75.
Lunney, A., Cunningham, N.R., Eastin, M.S., 2016. Wearable fitness technology: a structural investigation into acceptance and perceived fitness outcomes. Comput.
Hum. Behav. 65, 114–120.
Mallat, N., 2007. Exploring consumer adoption of mobile payments–A qualitative study. J. Strateg. Inf. Syst. 16 (4), 413–432.
Marangunić, N., Granić, A., 2015. Technology acceptance model: a literature review from 1986 to 2013. Univers. Access Inf. Soc. 14 (1), 81–95.
Mathieson, K., 1991. Predicting user intentions: Comparing the technology acceptance model with the theory of planned behavior. Inf. Syst. Res. 2 (3), 173–191.
Meyer, G.G., Främling, K., Holmström, J., 2009. Intelligent products: a survey. Comput. Ind. 60 (3), 137–148.
Karahanna, E., Straub, D.W., 1999. The psychological origins of perceived usefulness and ease-of-use. Inf. Manage. 35 (4), 237–250.
Kim, K.J., Shin, D.H., 2015. An acceptance model for smart watches: Implications for the adoption of future wearable technology. Internet Res. 25 (4), 527–541.
Kim, H.W., Chan, H.C., Gupta, S., 2007. Value-based adoption of mobile Internet: An empirical investigation. Decis. Support Syst. 43 (1), 111–126.
Kim, Y., Park, Y., Choi, J., 2017. A study on the adoption of IoT smart home service: Using value-based adoption model. Total Qual. Manag. Bus. 28 (9–10),
1149–1165.
Oechslein, O., Fleischmann, M., Hess, T., 2014, January. An application of UTAUT2 on social recommender systems: Incorporating social information for performance
expectancy. In System Sciences (HICSS), 2014 47th Hawaii International Conference on (pp. 3297–3306). IEEE.
Ooi, K.B., Tan, G.W.H., 2016. Mobile technology acceptance model: An investigation using mobile users to explore smartphone credit card. Expert Syst. Appl. 59,
33–46.
Perlovsky, L.I., Bonniot-Cabanac, M.C., Cabanac, M., 2010, July. Curiosity and pleasure. In Neural Networks (IJCNN), The 2010 International Joint Conference on (pp.
1–3). IEEE.
Podsakoff, P.M., Organ, D.W., 1986. Self-reports in organizational research: Problems and prospects. J. Manag. 12 (4), 531–544.
Rahman, M.M., Lesch, M.F., Horrey, W.J., Strawderman, L., 2017. Assessing the utility of TAM, TPB, and UTAUT for advanced driver assistance systems. Accid. Anal.
Prev. 108, 361–373.
Ram, S., 1987. A model of innovation resistance. Adv. Consumer Res. 14 (1), 208–212.
Renko, S., Druzijanic, M., 2014. Perceived usefulness of innovative technology in retailing: Consumers' and retailers' point of view. J. Retail. Consumer Serv. 21 (5),
836–843.
Rogers, E.M., 2003. Diffusion of Innovations, (5th ed.),. The Free Press, New York, NY.
Roll, I., Russell, D.M., Gašević, D., 2018. Learning at scale. Int. J. Artif. Intell. Educ. 1–7.
Segars, A.H., Grover, V., 1993. Re-examining perceived ease of use and usefulness: a confirmatory factor analysis. MIS Q. 17 (4), 517–525.
Setiawan, A., Agiwahyuanto, F., Arsiwi, P., 2019. A virtual reality teaching simulation for exercise during pregnancy. Int. J. Emerg. Technol. Learn. 14 (1), 34–48.
Sirdeshmukh, D., Singh, J., Sabol, B., 2002. Consumer trust, value, and loyalty in relational exchanges. J. Mark. 66 (1), 15–37.
Subramanian, G.H., 1994. A replication of perceived usefulness and perceived ease of use measurement. Decis. Sci. 25 (5–6), 863–874.


Taylor, S., Todd, P.A., 1995. Understanding information technology usage: A test of competing models. Inf. Syst. Res. 6 (2), 144–176.
Thorhauge, M., Haustein, S., Cherchi, E., 2016. Accounting for the Theory of Planned Behaviour in departure time choice. Transp. Res. Pt. F-Traffic Psychol. Behav. 38,
94–105.
Tseng, K.C., Hsu, C.L., Chuang, Y.H., 2013. Designing an intelligent health monitoring system and exploring user acceptance for the elderly. J. Med. Syst. 37 (6), 9967.
https://doi.org/10.1007/s10916-013-9967-y.
Ukpabi, D.C., Karjaluoto, H., 2017. Consumers’ acceptance of information and communications technology in tourism: a review. Telemat. Inform. 34 (5), 618–644.
Venkatesh, V., Davis, F.D., 2000. A theoretical extension of the technology acceptance model: four longitudinal field studies. Manage. Sci. 46 (2), 186–204.
Venkatesh, V., Morris, M.G., Davis, G.B., Davis, F.D., 2003. User acceptance of information technology: Toward a unified view. MIS Q. 27 (3), 425–478.
Voss, G.B., Parasuraman, A., Grewal, D., 1998. The roles of price, performance, and expectations in determining satisfaction in service exchanges. J. Mark. 62 (4),
46–61.
Wallace, L.G., Sheetz, S.D., 2014. The adoption of software measures: a technology acceptance model (TAM) perspective. Inf. Manage. 51 (2), 249–259.
Wang, C.H., 2016. A novel approach to conduct the importance-satisfaction analysis for acquiring typical user groups in business-intelligence systems. Comput. Hum.
Behav. 54, 673–681.
Wang, Y.Y., Luse, A., Townsend, A.M., Mennecke, B.E., 2015. Understanding the moderating roles of types of recommender systems and products on customer
behavioral intention to use recommender systems. Inf. Syst. E-Bus. Manag. 13 (4), 769–799.
Wigfield, A., Eccles, J.S., 2000. Expectancy–value theory of achievement motivation. Contemp. Educ. Psychol. 25 (1), 68–81.
Williams, J.D., Woodall, W.H., Birch, J.B., Sullivan, J.H., 2006. Distribution of Hotelling's T 2 statistic based on the successive differences estimator. J. Qual. Technol.
38 (3), 217–229.
Williams, M.D., Rana, N.P., Dwivedi, Y.K., 2015. The unified theory of acceptance and use of technology (UTAUT): a literature review. J. Enterp. Inf. Manage. 28 (3),
443–488.
Wu, B., Chen, X., 2017. Continuance intention to use MOOCs: Integrating the technology acceptance model (TAM) and task technology fit (TTF) model. Comput. Hum.
Behav. 67, 221–232.
Yang, H., Lee, H., Zo, H., 2017. User acceptance of smart home services: an extension of the theory of planned behavior. Ind. Manage. Data Syst. 117 (1), 68–89.
Yang, H., Yu, J., Zo, H., Choi, M., 2016. User acceptance of wearable devices: an extended perspective of perceived value. Telemat. Inform. 33 (2), 256–269.
Yang, K., Jolly, L.D., 2009. The effects of consumer perceived value and subjective norm on mobile data service adoption between American and Korean consumers. J.
Retail. Consumer Serv. 16 (6), 502–508.
Zhang, T., Zhang, D., 2007. Agent-based simulation of consumer purchase decision-making and the decoy effect. J. Bus. Res. 60 (8), 912–922.
