Abstract
The main purpose of this study is to develop and validate a multidimensional
instrument for measuring mobile learning systems success (MLSS) based on
previous research. This study defines the construct of MLSS, develops a generic
MLSS instrument with desirable psychometric properties, and explores the
instrument’s theoretical and practical applications. By analyzing data from a
calibration sample (n = 241) and a validation sample (n = 209), this study
proposes a 6-factor, 25-item MLSS instrument. This empirically validated
instrument will be useful to researchers in developing and testing mobile
learning theories, as well as to educators in understanding MLSS from students’
perspectives and promoting the use of mobile learning systems.
Keywords
mobile learning, information systems success, scale development, measurement
1 Department of Distribution Management, National Taichung University of Science and Technology, Taiwan
2 Department of Information Management, National Changhua University of Education, Taiwan
3 Department of Business Administration, Jilin University, China
4 Department of Management, Leadership, and Information Systems, Le Moyne College, NY, USA
Corresponding Author:
Yi-Shun Wang, Department of Information Management, National Changhua University of Education,
No. 2, Shi-da Road, Changhua 500, Taiwan.
Email: yswang@cc.ncue.edu.tw
2 Journal of Educational Computing Research 0(0)
Introduction
Recently, mobile learning has become increasingly important in the educational
context because of the rapid advance and popularity of wireless communication
and mobile technologies (e.g., Cheon, Lee, Crooks, & Song, 2012; Evans, 2008;
Huang & Chiu, 2014, 2015; Kukulska-Hulme & Traxler, 2005; Shen, Wang, Gao,
Novak, & Tang, 2009; Wang & Shen, 2012). Numerous studies on the use of mobile
and wireless communication technologies in education have indicated that these
technologies can complement and add value to existing learning models, such as
the social constructivist theory of learning with technology (Brown & Campione,
1996) and conversation theory (Pask, 1975). Mobile learning originates from
distance education and, given the characteristics of mobile technology,
represents the next step after e-learning (Lin, Lin, Yeh, & Wang, 2016).
E-learning typically took learning away from the classroom, whereas mobile
learning takes learning away from a fixed location (Park, Nam, & Cha, 2012).
Mobile learning refers to the delivery of learning to students anytime and
anywhere through the use of mobile devices (e.g., personal digital assistants,
cellular phones, or portable computers) (Alrasheedi, Capretz, & Raza, 2016).
With the use of mobile devices, students can interact with educational resources
while away from their normal place of learning (Hwang & Tsai, 2011; Kim,
2006; Shen, Wang, & Pan, 2008). There is growing recognition in mobile learning
studies that e-learning systems using mobile devices can improve educational
efficacy for students and e-learners (Chu, Hwang, & Tseng, 2010; Shih, Chuang,
& Hwang, 2010). Thus, mobile learning is becoming progressively more
significant and will play a vital role in the rapidly growing e-learning
market. While a considerable amount of research has been conducted
on mobile learning systems (e.g., Cheon et al., 2012; Wang, Wu, & Wang, 2009),
little research has been carried out to address the conceptualization and
measurement of mobile learning systems success (MLSS) within higher education
institutions (HEIs; Gikas & Grant, 2013). Therefore, assessing the
effectiveness of mobile learning systems is a critical issue in both practice
and research. To address this concern, mobile learning researchers and
practitioners need dependable ways to measure the success or effectiveness of
mobile learning systems. Following DeLone and McLean’s (2003) conceptual model
of information system (IS) success, this study evaluates a mobile learning
system implementation through the conceptualization and empirical measurement
of an MLSS construct.
It is noted that the success of mobile learning systems cannot be evaluated
using a single proxy construct (e.g., user satisfaction) or a single-item scale (e.g.,
overall success). The measure of MLSS needs to incorporate different aspects of
the MLSS construct if it is to be a useful diagnostic instrument. To assess the
extent and specific nature of MLSS, different dimensions of the MLSS construct
need to be identified and measured.
Lin et al. 3
Theoretical Foundations
Mobile learning is a specific type of e-learning that uses mobile technology
(Cheon et al., 2012; Evans, 2008). Compared with traditional lectures,
e-learning allows learners to choose (within constraints) when, where, and how
they study. By using mobile technologies and devices with the characteristics
of portability, instant connectivity, and context sensitivity (Cheon et al.,
2012), mobile learning not only inherits the advantages of e-learning but also
allows learners to vary their study location and to study ‘‘on the move,’’
through which they can experience a unique learning mode (Traxler, 2010; Wang &
Higgins, 2006).
Because increasing demands on their time lead learners to study in their lunch
breaks or other free time, in addition to conventional lecture theatres or the
library, a mobile learning system makes it easier for learners to study when
and where they want by using mobile technology and devices to transport their
learning materials (Ting, 2012). Because mobile devices have become ubiquitous
on college campuses, various mobile learning initiatives have been applied in
higher education. The advanced hardware (e.g., cameras) and wide availability
of software (e.g., apps) on mobile devices provide mobile learning systems with
more capabilities to organize, manipulate, and generate information for
teaching and learning (Keskin & Metcalf, 2011). Today, mobile learning is
becoming increasingly prevalent in higher education.
DeLone and McLean (2003) updated their original IS success model by (a) adding
‘‘intention to use’’ as an alternative measure to ‘‘use’’ in order to cover both
volitional and non-volitional usage contexts, (b) adding ‘‘service quality’’
measures as a new dimension of the IS success model, and (c) grouping all the
‘‘impact’’ measures into a single impact or benefit category called ‘‘net
benefits.’’
The updated IS success model consists of six dimensions: (a) information
quality, (b) system quality, (c) service quality, (d) use or intention to use,
(e) user satisfaction, and (f) net benefits. Because use is voluntary or
quasivoluntary in Internet applications, system usage or ‘‘intention to use’’
is still considered an important measure of IS success in the updated IS
success model and continues to be used as a dependent variable in a number of
empirical studies (e.g., Liao, Huang, & Wang, 2015; Wang, 2008). DeLone and
McLean (2003) suggest that their updated IS success model can be adapted to the
measurement challenges of the new Internet world (e.g., e-learning). Thus, this
study adopted DeLone and McLean’s (2003) IS success model as a theoretical
framework to develop an instrument for assessing the success of mobile learning
systems in the higher education institutions context.
Finally, net benefits are usually proposed as the ultimate IS success measure
in previous IS success models (e.g., DeLone & McLean, 2003; Seddon, 1997).
However, Seddon (1997) explicitly argues that different stakeholders may have
different opinions as to what constitutes a benefit to them.
Thus, when measuring the net benefits of an IS, researchers need to define clearly
and carefully the stakeholders and context in which the net benefits are to be
measured (DeLone & McLean, 2003; Seddon, 1997). Accordingly, this study
focuses mainly on the perspective of the student and uses the six updated IS
success dimensions—information quality, system quality, service quality, system
use, user satisfaction, and net benefits—to develop and validate a measurement
model of MLSS.
Research Methodology
Generation of Scale Items
Operationally, MLSS can be considered a summation of different success measures
of a mobile learning system. There are many potential measurement items for the
MLSS construct. A review of the literature on IS success, website
success, IS performance, user information satisfaction, e-learner satisfaction,
web user satisfaction, system use, website quality, IS service quality, IS benefits,
and e-learning systems success (e.g., Aladwani & Palvia, 2002; Bailey & Pearson,
1983; Barnes & Vidgen, 2000; Chen, Soliman, Mao, & Frolick, 2000; DeLone &
McLean, 2003; Doll & Torkzadeh, 1988, 1998; Downing, 1999; Etezadi-Amoli &
Farhoomand, 1996; Heo & Han, 2003; Ives, Olson, & Baroudi, 1983; Kettinger
& Lee, 1994; Liu & Arnett, 2000; McKinney, Yoon, & Zahedi, 2002; Mirani &
Lederer, 1998; Muylle, Moenaert, & Despontin, 2004; Palvia, 1996; Rai et al.,
2002; Saarinen, 1996; Segars & Grover, 1998; Wang, 2003; Wang & Chiu, 2011;
Wang, Li, Li, & Wang, 2014; Wang, Li, Lin, & Shih, 2014; Wang, Tang, &
Tang, 2001; Wang & Tang, 2003; Wang, Wang, & Shee, 2007) provided 40 items
representing the six dimensions underlying the MLSS construct, and these were
used to form the initial pool of items with face validity for the MLSS scale. To
make sure that no important attributes or items were omitted, experience
surveys and personal interviews regarding MLSS were conducted with the
assistance of two MLS designers, two university teachers, and five mobile
learning system users. To address content validity, this panel was asked to
review the initial item list of the MLSS scale, and they recommended
eliminating 15 items because of redundancy. After careful examination of the
results of the experience surveys and interviews, the remaining 25 items were
further adjusted to make their wording as precise as possible and were then
taken to constitute the initially proposed scale for MLSS measurement.
An initial MLSS instrument involving 25 items (as shown in the Appendix), along
with two global measures (i.e., perceived overall performance and perceived
overall success), was developed using a 7-point Likert-type scale ranging from
strongly disagree to strongly agree. A 7-point Likert-type scale was adopted to
be consistent with previous e-learning systems success measures (e.g., Wang
et al., 2007; Wang, Li, Li, et al., 2014), providing a basis for cross-study
comparison. The global measures can be used to analyze the criterion-related
validity of the instrument and to measure the overall MLSS prior to detailed
analysis. In addition to the MLSS measuring items, the questionnaire contains
demographic questions used for describing the sample.
Scale Development
Item Analysis and Reliability Estimates
The 25-item instrument (with the two global items excluded) was refined by
analyzing the pooled data; that is, the data collected from different universities
were considered together. Because the primary purpose of this study was to
develop a standardized instrument with desirable psychometric properties for
measuring MLSS, the pooling of the sample data was considered appropriate
and justified. The first step in purifying the instrument was to calculate the
coefficient alpha and the item-to-total correlations used to delete garbage
items (Cronbach, 1951). To avoid spurious part-whole correlation, the criterion
used in this study for determining whether to delete an item was the item’s
corrected item-to-total correlation. An iterative sequence of computing
Cronbach’s alpha coefficients and item-to-total correlations was executed for
each MLSS dimension. Items with corrected item-to-total correlations below a
threshold value of 0.4 were eliminated (Palvia, 1996). Because each item’s cor-
rected item-to-total correlation was above 0.4, no item was eliminated in this
stage. The 25-item MLSS instrument has internal consistency reliability
(Cronbach’s alpha) of 0.856.
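The purification procedure above (coefficient alpha together with corrected item-to-total correlations and the 0.4 cutoff) can be sketched as follows; the simulated responses and the single 4-item dimension are illustrative assumptions, not the study's actual calibration data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha: k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def corrected_item_total(items: np.ndarray) -> np.ndarray:
    """Correlate each item with the sum of the REMAINING items, avoiding the
    spurious part-whole correlation mentioned in the text."""
    total = items.sum(axis=1)
    return np.array([
        np.corrcoef(items[:, i], total - items[:, i])[0, 1]
        for i in range(items.shape[1])
    ])

# Illustrative data: 241 simulated respondents answering one 4-item dimension
# on a 7-point scale (the real calibration data are not reproduced here).
rng = np.random.default_rng(0)
latent = rng.normal(4, 1, size=(241, 1))
responses = np.clip(np.round(latent + rng.normal(0, 0.8, size=(241, 4))), 1, 7)

alpha = cronbach_alpha(responses)
r_it = corrected_item_total(responses)
keep = r_it >= 0.4  # items below the 0.4 threshold would be eliminated
```

In the iterative sequence described in the text, alpha and the corrected correlations would be recomputed per dimension after each deletion until all remaining items clear the threshold.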
The six factors underlying the proposed MLSS instrument are consistent with
those of DeLone and McLean’s (2003) IS success model used in this study to
conceptualize MLSS.
Table 1 summarizes the factor loadings for the 25-item MLSS instrument. The
significant loading of all the items on the single factor indicates convergent
validity, while the fact that no cross-loading items were found supports the
discriminant validity of the instrument.
Reliability
Reliability was evaluated by assessing the internal consistency of the items rep-
resenting each factor using Cronbach’s alpha. The 25-item instrument had a very
high reliability of 0.856, exceeding the acceptable standard of 0.70 suggested by
Nunnally (1978). The reliability of each factor was as follows: knowledge
or information quality = 0.830, system quality = 0.819, service quality = 0.876,
system use = 0.856, user satisfaction = 0.792, and net benefits = 0.817.
Content Validity
The MLSS instrument met the requirements of reliability and had a consistent
factor structure. However, while high reliability and internal consistency are
necessary conditions for a scale’s construct validity (the extent to which a
scale fully and unambiguously captures the underlying, unobservable construct
it is intended to measure), they are not sufficient (Nunnally, 1978). The basic
qualitative criterion concerning construct validity is content validity. Content
validity implies that the scale considers all aspects of the construct being
measured. Churchill (1979, p. 70) contends that ‘‘specifying the domain of
the construct, generating items that exhaust the domain, and subsequently
purifying the resulting scale should produce a measure which is content
or face valid and reliable.’’ Therefore, the rigorous procedures described
earlier for conceptualizing the MLSS construct, generating items, and purifying
the MLSS measures suggest that the MLSS scale has acceptable content validity.
Criterion-Related Validity
Criterion-related validity was assessed by the correlation between the total
scores on the instrument (sum for 25 items) and the criterion measure (sum of
the global items). Previous studies usually used the sum of the global items as the
criterion measure (e.g., Palvia, 1996; Wang, 2003). In this study,
criterion-related validity refers to concurrent validity: the total scores on
the MLSS instrument and the score on the criterion are measured at the same
time. A positive relationship is expected between the total score and the
criterion if the instrument is capable of measuring the MLSS construct.
Table 1. Factor Loadings and Reliabilities for the 25-Item MLSS Instrument
(columns: Dimensions/item; Reliability; Factor loading of each item on the
factor it loaded onto; The dimension to which the items belong).
Inter-factor correlations:
Factor 1   1.000
Factor 2   0.148*  1.000
Factor 3   0.214*  0.258*  1.000
Factor 4   0.160*  0.326*  0.428*  1.000
Factor 5   0.166*  0.123*  0.126*  0.303*  1.000
Factor 6  -0.034   0.244*  0.115*  0.128*  0.151*  1.000
*p < .05.
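The concurrent validity check described under Criterion-Related Validity (correlating the summed 25-item score with the sum of the two global items) can be sketched as follows; the simulated responses are illustrative stand-ins for the actual questionnaire data:

```python
import numpy as np

# Illustrative stand-ins for the validation-sample responses (n = 209); a
# latent "success" level drives both the 25 MLSS items and the 2 global items.
rng = np.random.default_rng(1)
n = 209
success = rng.normal(0, 1, n)
mlss_items = success[:, None] + rng.normal(0, 1, (n, 25))
global_items = success[:, None] + rng.normal(0, 1, (n, 2))

total_score = mlss_items.sum(axis=1)   # sum of the 25 MLSS items
criterion = global_items.sum(axis=1)   # sum of the two global items (Q26, Q27)

# A substantial positive correlation supports concurrent (criterion-related) validity.
r = np.corrcoef(total_score, criterion)[0, 1]
```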
(Anderson & Gerbing, 1988; MacCallum, 1986; Marsh & Hocevar, 1985; Segars
& Grover, 1993). Using the validation sample (n = 209), this study attempts to
establish strong construct validity of the MLSS measure, including its
convergent validity and discriminant validity, which can be tested directly by
a higher order CFA (Marsh & Hocevar, 1988).
The CFA of the MLSS instrument requires a prior designation of plausible
factor patterns from previous theoretical or empirical work. Based on logic,
theory, and previous studies (Doll, Raghunathan, Lim, & Gupta, 1995;
Doll, Xia, & Torkzadeh, 1994), two plausible alternative models of factor
structure were proposed and were then explicitly tested statistically against the
sample data.
Model 1 hypothesizes that the six first-order factors are correlated with each
other. More specifically, Model 1 hypothesized a priori that (a) responses to the
MLSS could be explained by six factors, (b) each item would have a nonzero
loading on the factor it was designed to measure, and zero loadings on all other
factors, (c) the six factors would be correlated, and (d) measurement error terms
would be uncorrelated (Byrne, 1998, p. 137).
Model 2 hypothesizes six first-order factors and one second-order MLSS
factor. As Tanaka and Huba (1984) contend, if first-order factors are correlated,
the common variance shared by the first-order factors may be statistically
‘‘caused’’ by a single second-order factor. Thus, testing for the existence of a
single second-order MLSS construct is considered appropriate. As such, Model
2 hypothesized a priori that: (a) responses to the MLSS could be explained by six
first-order factors and one second-order factor (i.e., MLSS), (b) each item would
have a nonzero loading on the first-order factor it was designed to measure, and
zero loadings on the other five first-order factors; (c) error terms associated with
each item would be uncorrelated, and (d) covariation among the six first-order
factors would be explained fully by their regression on the second-order factor
(Byrne, 1998, p. 170).
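As a concrete sketch, the two competing factor structures can be written in the lavaan-style model syntax accepted by SEM packages such as lavaan (R) or semopy (Python); the factor names and the assignment of the 25 items to dimensions below are illustrative assumptions, not the paper's actual item mapping:

```python
# Hypothetical lavaan-style CFA specifications (item-to-factor assignment assumed).

# Model 1: six correlated first-order factors; the factors covary freely.
model1 = """
InfoQuality    =~ q1 + q2 + q3 + q4
SystemQuality  =~ q5 + q6 + q7 + q8
ServiceQuality =~ q9 + q10 + q11 + q12
SystemUse      =~ q13 + q14 + q15 + q16
Satisfaction   =~ q17 + q18 + q19 + q20
NetBenefits    =~ q21 + q22 + q23 + q24 + q25
"""

# Model 2: the same six first-order factors plus a single second-order MLSS
# factor that alone accounts for the covariation among them.
model2 = model1 + """
MLSS =~ InfoQuality + SystemQuality + ServiceQuality + SystemUse + Satisfaction + NetBenefits
"""
```

Fitting both specifications to the validation data and comparing their fit would mirror the comparison reported in Table 3.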
Based on the absolute and incremental indices of goodness-of-fit reported in
Table 3, this study concludes that Model 1 fits the sample data fairly well. Model
2 provides acceptable model-data fit, as evidenced by absolute and relative indi-
ces. Importantly, the second-order factor model is merely explaining the covari-
ation among the first-order factors in a more parsimonious way (i.e., one that
requires fewer degrees of freedom; Segars & Grover, 1998). Consequently, even
when the higher-order model (Model 2) is able to explain the factor covariations,
the goodness-of-fit of the higher order model can never be better than the cor-
responding first-order model (Model 1). As expected, Model 2’s goodness-of-fit
indices (see Table 3) are slightly lower than its first-order counterpart (Model 1).
As theorized, MLSS is a higher order phenomenon that is evidenced through high
performance across multiple dimensions. In Model 1, the observed correlations
among the six first-order factors suggest that effectiveness in MLSS is an
aggregate of the six factors. The baseline model (Model 1) for testing the
existence of an MLSS construct implies that the six first-order factors are
associated but not governed by a common latent phenomenon. The alternative
model (Model 2) posits a second-order MLSS construct governing the correlations
among the six factors. The target (T) coefficient
[T = χ2(baseline model)/χ2(alternative model)] was used to test for the
existence of a higher order MLSS construct (Marsh & Hocevar, 1985).

Table 3. Summary of the Global Fit Indices for the Validation Sample.

Model                              GFI    RMSEA   χ2/df        AGFI   NFI    CFI    SRMR
Model 1: First-order model with    0.907  0.020   281.91/260   0.884  0.889  0.990  0.031
  six correlated factors
Model 2: First-order model with    0.904  0.019   289.46/269   0.885  0.886  0.991  0.035
  six factors and one
  second-order factor

Note. GFI = goodness of fit index; RMSEA = root mean square error of approximation;
AGFI = adjusted goodness of fit index; NFI = normed fit index; CFI = comparative fit
index; SRMR = standardized root mean square residual.
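Using the χ2 values reported in Table 3 for the two models, the target coefficient works out as follows (a simple arithmetic check, not part of the original analysis):

```python
# chi-square values taken from Table 3 (validation sample).
chisq_model1 = 281.91  # baseline: six correlated first-order factors
chisq_model2 = 289.46  # alternative: second-order MLSS factor added

# T = chi-square(baseline model) / chi-square(alternative model)
T = chisq_model1 / chisq_model2
print(round(T, 2))  # -> 0.97
```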
The calculated target coefficient between Models 1 and 2 is a high 0.97,
suggesting that the addition of the second-order factor does not significantly
increase χ2. This value provides reasonable evidence of a second-order MLSS
construct. Ninety-seven percent of the variation in the six first-order factors
in Model 1 is explained by Model 2’s MLSS construct. Since Model 2 represents a
more parsimonious representation of the observed covariances, it should be
accepted over Model 1 as a truer representation of the MLSS
model structure. Additionally, the observed item loadings and standardized
structure coefficients of Model 1 are described in Table 3. Consequently, the
6-factor, 25-item MLSS instrument demonstrated adequate convergent validity
and discriminant validity.
Criterion
Q26. As a whole, the performance of the mobile learning system is good.
Q27. As a whole, the mobile learning system is successful.
Funding
The authors disclosed receipt of the following financial support for the research,
authorship, and/or publication of this article: This research was substantially supported
by the Ministry of Science and Technology (MOST) of Taiwan under grant number
MOST 101-2511-S-025-001-MY2.
Note
1. Nomological validity is a form of construct validity. It is the degree to which a con-
struct behaves as it should within a system of related constructs (the nomological
network; Liu, Li, & Zhu, 2012).
References
Aladwani, A. M., & Palvia, P. C. (2002). Developing and validating an instrument for
measuring user-perceived web quality. Information & Management, 39(6), 467–476.
Alrasheedi, M., Capretz, L. F., & Raza, A. (2015). A systematic review of the critical
factors for success of mobile learning in higher education (university students’ per-
spective). Journal of Educational Computing Research, 52(2), 257–276.
Alrasheedi, M., Capretz, L. F., & Raza, A. (2016). Management’s perspective on critical
success factors affecting mobile learning in higher education institutions—An empir-
ical study. Journal of Educational Computing Research, 54(2), 253–274.
Anderson, J. C., & Gerbing, D. W. (1988). Structural equation modeling in practice: A
review and recommended two-step approach. Psychological Bulletin, 103(3), 411–423.
Bailey, J. E., & Pearson, S. W. (1983). Development of a tool for measuring and analyzing
computer user satisfaction. Management Science, 29(5), 530–545.
Barnes, S. J., & Vidgen, R. T. (2000, July). WebQual: An exploration of Web site quality.
Proceedings of the Eighth European Conference on Information Systems, Vienna,
298–305.
Brown, A., & Campione, J. (1996). Psychological theory and design of innovative learn-
ing environments: On procedures, principles, and systems. In L. Schauble & R. Glaser
(Eds.), Innovations in learning: New environments for education. Mahwah, NJ:
Erlbaum.
Byrne, B. M. (1998). Structural equation modeling with LISREL, PRELIS, and SIMPLIS:
Basic concepts, applications, and programming. Mahwah, NJ: Lawrence Erlbaum
Associates.
Chen, L., Soliman, K. S., Mao, E., & Frolick, M. N. (2000). Measuring user satisfaction
with data warehouses: An exploratory study. Information & Management, 37(3),
103–110.
Cheon, J., Lee, S., Crooks, S. M., & Song, J. (2012). An investigation of mobile learning
readiness in higher education based on the theory of planned behavior. Computers &
Education, 59(3), 1054–1064.
Chu, H. C., Hwang, G. J., & Tseng, J. C. R. (2010). An innovative approach for develop-
ing and employing electronic libraries to support context-aware ubiquitous learning.
The Electronic Library, 28(6), 873–890.
Churchill, G. A. (1979). A paradigm for developing better measures of marketing con-
structs. Journal of Marketing Research, 16(1), 64–73.
Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests.
Psychometrika, 16(3), 297–334.
Cudeck, R., & Browne, M. W. (1983). Cross validation of covariance structures.
Multivariate Behavioral Research, 18(2), 147–167.
DeLone, W. H., & McLean, E. R. (1992). Information systems success: The quest for the
dependent variable. Information Systems Research, 3(1), 60–95.
DeLone, W. H., & McLean, E. R. (2003). The DeLone and McLean model of informa-
tion systems success: A ten-year update. Journal of Management Information Systems,
19(4), 9–30.
Doll, W. J., Raghunathan, T. S., Lim, J. U., & Gupta, Y. P. (1995). A confirmatory
factor analysis of the user information satisfaction instrument. Information System
Research, 6(2), 177–189.
Doll, W. J., & Torkzadeh, G. (1988). The measurement of end-user computing satisfaction.
MIS Quarterly, 12(2), 259–274.
Doll, W. J., & Torkzadeh, G. (1998). Developing a multidimensional measure of system-use
in an organizational context. Information & Management, 33(4), 171–185.
Doll, W. J., Xia, W., & Torkzadeh, G. (1994). A confirmatory factor analysis of the
end-user computing satisfaction instrument. MIS Quarterly, 18(4), 453–461.
Downing, C. E. (1999). System usage behavior as proxy for user satisfaction: An empir-
ical investigation. Information & Management, 35(4), 203–216.
Etezadi-Amoli, J., & Farhoomand, A. F. (1996). A structural model of end user
computing satisfaction and user performance. Information & Management, 30(2),
65–73.
Evans, C. (2008). The effectiveness of m-learning in the form of podcast revision lectures
in higher education. Computers & Education, 50(2), 491–498.
Galletta, D. F., & Lederer, A. L. (1989). Some cautions on the measurement of user
information satisfaction. Decision Sciences, 20(3), 419–438.
Gikas, J., & Grant, M. M. (2013). Mobile computing devices in higher education: Student
perspectives on learning with cellphones, smartphones & social media. Internet and
Higher Education, 19, 18–26.
Hair, J. F., Anderson, R. E., Tatham, R. L., & Black, W. C. (1998). Multivariate data
analysis. Upper Saddle River, NJ: Prentice-Hall.
Heo, J., & Han, I. (2003). Performance measure of information systems (IS) in evolving
computing environments: An empirical investigation. Information & Management,
40(4), 243–256.
Huang, Y.-M., & Chiu, P.-S. (2014). The effectiveness of a meaningful learning-based
evaluation model for context-aware mobile learning. British Journal of Educational
Technology, 46(2), 437–447.
Huang, Y.-M., & Chiu, P.-S. (2015). The effectiveness of the meaningful learning-based
evaluation for different achieving students in a ubiquitous learning context. Computers
& Education, 87, 243–253.
Hwang, G.-J., & Tsai, C.-C. (2011). Research trends in mobile and ubiquitous learning: A
review of publications in selected journals from 2001 to 2010. British Journal of
Educational Technology, 42(4), E65–E70.
Iacobucci, D., Saldanha, N., & Deng, X. (2007). A meditation on mediation: Evidence
that structural equations models perform better than regressions. Journal of Consumer
Psychology, 17(2), 139–153.
Ives, B., Olson, M. H., & Baroudi, J. J. (1983). The measurement of user information
satisfaction. Communications of the ACM, 26(10), 785–793.
Keskin, N. O., & Metcalf, D. (2011). The current perspectives, theories and practice of
mobile learning. The Turkish Online Journal of Educational Technology, 10(2),
202–208.
Kettinger, W. J., & Lee, C. C. (1994). Perceived service quality and user satisfaction with
the information services function. Decision Sciences, 25(5), 737–766.
Kim, J. Y. (2006). A survey on mobile-assisted language learning. Modern English
Education, 7(2), 57–69.
Kukulska-Hulme, A., & Traxler, J. (2005). Mobile learning: A handbook for educators and
trainers. London, England: Routledge.
Liao, Y.-W., Huang, Y.-M., & Wang, Y.-S. (2015). Factors affecting students’ continued
usage intention toward business simulation games: An empirical study. Journal of
Educational Computing Research, 53(2), 260–283.
Lin, H.-H., Lin, S., Yeh, C.-H., & Wang, Y.-S. (2016). Measuring mobile learning readi-
ness: Scale development and validation. Internet Research, 26(1), 265–287.
Liu, C., & Arnett, K. P. (2000). Exploring the factors associated with web site success in
the context of electronic commerce. Information & Management, 38(1), 23–33.
Liu, L., Li, C., & Zhu, D. (2012). A new approach to testing nomological validity and its
application to a second-order measurement model of trust. Journal of the Association
for Information Systems, 13(12), Article 4.
MacCallum, R. (1986). Specification searches in covariance structure modeling.
Psychological Bulletin, 100(1), 107–120.
Marsh, H. W., & Hocevar, D. (1985). Application of confirmatory factor analysis to the
study of self-concept: First- and higher-order factor models and their invariance across
groups. Psychological Bulletin, 97(3), 562–582.
Marsh, H. W., & Hocevar, D. (1988). A new more powerful approach to multitrait-
multimethod analysis: Application of second-order confirmatory analysis. Journal of
Applied Psychology, 73(1), 107–117.
McKinney, V., Yoon, K., & Zahedi, F. M. (2002). The measurement of web-customer
satisfaction: An expectation and disconfirmation approach. Information Systems
Research, 13(3), 296–315.
Mirani, R., & Lederer, A. L. (1998). An instrument for assessing the organizational
benefits of IS projects. Decision Sciences, 29(4), 803–838.
Muylle, S., Moenaert, R., & Despontin, M. (2004). The conceptualization and empirical
validation of web site user satisfaction. Information & Management, 41(5), 543–560.
Myers, B. L., Kappelman, L. A., & Prybutok, V. R. (1997). A comprehensive model for
assessing the quality and productivity of the information systems function: Toward a
theory for information systems assessment. Information Resources Management
Journal, 10(1), 6–25.
Nunnally, J. C. (1978). Psychometric theory (2nd ed.). New York, NY: McGraw-Hill.
Palvia, P. C. (1996). A model and instrument for measuring small business user satisfac-
tion with information technology. Information & Management, 31(3), 151–163.
Park, S. Y., Nam, M.-W., & Cha, S.-B. (2012). University students’ behavioral intention
to use mobile learning: evaluating the technology acceptance model. British Journal of
Educational Technology, 43(4), 592–605.
Pask, G. (1975). Minds and media in education and entertainment: Some theoretical
comments illustrated by the design and operation of a system for exteriorizing and
manipulating individual theses. In R. Trappl & G. Pask (Eds.), Progress in cybernetics
and system research. Washington, DC: Hemisphere.
Price, J. L., & Mueller, C. W. (1986). Handbook of organizational measurement.
Marshfield, MA: Pitman Publishing, Inc.
Rai, A., Lang, S. S., & Welker, R. B. (2002). Assessing the validity of IS success models:
An empirical test and theoretical analysis. Information Systems Research, 13(1), 50–69.
Saarinen, T. (1996). An expanded instrument for evaluating information systems success.
Information & Management, 31(2), 103–118.
Seddon, P. B. (1997). A respecification and extension of the DeLone and McLean model
of IS success. Information Systems Research, 8(3), 240–253.
Segars, A. H., & Grover, V. (1993). Re-examining ease of use and usefulness: A con-
firmatory factor analysis. MIS Quarterly, 17(4), 517–527.
Segars, A. H., & Grover, V. (1998). Strategic information systems planning success: An
investigation of the construct and its measurement. MIS Quarterly, 22(2), 139–163.
Shen, R. M., Wang, M. J., Gao, W. P., Novak, D., & Tang, L. (2009). Mobile learning in
a large blended computer science classroom: System function, pedagogies, and their
impact on learning. IEEE Transactions on Education, 52(4), 538–546.
Shen, R. M., Wang, M. J., & Pan, X. Y. (2008). Increasing interactivity in blended
classrooms through a cutting-edge mobile learning system. British Journal of
Educational Technology, 39(6), 1073–1086.
Shih, J. L., Chuang, C. W., & Hwang, G. J. (2010). An inquiry-based mobile learning
approach to enhancing social science learning effectiveness. Educational Technology &
Society, 13(4), 50–62.
Straub, D. W. (1989). Validating instruments in MIS research. MIS Quarterly, 13(2),
147–169.
Tanaka, J. S., & Huba, G. J. (1984). Confirmatory hierarchical factor analysis of
psychological distress measures. Journal of Personality and Social Psychology, 46(3),
621–635.
Ting, Y. L. (2012). The pitfalls of mobile devices in learning: A different view and impli-
cations for pedagogical design. Journal of Educational Computing Research, 46(2),
119–134.
Traxler, J. (2010). Sustaining mobile learning and its institutions. International Journal of
Mobile and Blended Learning, 2(4), 58–65.
Wang, H.-C., & Chiu, Y.-F. (2011). Assessing e-learning 2.0 system success. Computers &
Education, 57(2), 1790–1800.
Wang, M., & Shen, R. (2012). Message design for mobile learning: Learning theories,
human cognition and design principles. British Journal of Educational Technology,
43(4), 561–575.
Wang, S., & Higgins, M. (2006). Limitations of mobile phone learning. The JALT CALL
Journal, 2(1), 3–14.
Wang, Y.-S. (2003). Assessment of learner satisfaction with asynchronous electronic
learning systems. Information & Management, 41(1), 75–86.
Wang, Y.-S. (2008). Assessing e-commerce systems success: A respecification and valid-
ation of the DeLone and McLean model of IS success. Information Systems Journal,
18(5), 529–557.
Wang, Y. S., & Liao, Y. W. (2008). Understanding individual adoption of mobile book-
ing service: An empirical investigation. CyberPsychology & Behavior, 11(5), 603–605.
Wang, Y.-S., Li, C.-R., Lin, H.-H., & Shih, Y.-W. (2014). The measurement and dimen-
sionality of e-learning blog satisfaction: Two-stage development and validation.
Internet Research, 24(5), 546–565.
Wang, Y.-S., Li, H.-T., Li, C.-R., & Wang, C. (2014). A model for assessing blog-based
learning systems success. Online Information Review, 38(7), 969–990.
Wang, Y.-S., Wang, H.-Y., & Shee, D. Y. (2007). Measuring e-learning systems success in
an organizational context: Scale development and validation. Computers in Human
Behavior, 23(4), 1792–1808.
Wang, Y.-S., & Tang, T.-I. (2003). Assessing customer perceptions of web site service
quality in digital marketing environments. Journal of End User Computing, 15(3), 14–31.
Wang, Y.-S., Tang, T.-I., & Tang, J.-T. D. (2001). An instrument for measuring customer
satisfaction toward web sites that market digital products and services. Journal of
Electronic Commerce Research, 2(3), 89–102.
Wang, Y.-S., Wu, M.-C., & Wang, H.-Y. (2009). Investigating the determinants and age
and gender differences in the acceptance of mobile learning. British Journal of
Educational Technology, 40(1), 92–118.
Author Biographies
Hsin-Hui Lin is a professor in the department of Distribution Management at
National Taichung University of Science and Technology, Taiwan. She received
her PhD in Business Administration from National Taiwan University of
Science and Technology. Her current research interests include electronic or
mobile commerce, management education, electronic or mobile learning, service
marketing, and customer relationship management. Her work has been pub-
lished in academic journals such as Academy of Management Learning and
Education, British Journal of Educational Technology, Information Systems
Journal, Information & Management, International Journal of Information
Management, Computers in Human Behavior, Internet Research, Journal of
Service Theory and Practice, Service Business, Service Industries Journal,
International Journal of Human-Computer Interaction, Journal of Global
Information Management, and Journal of Electronic Commerce Research.
PhD in MIS from National Chengchi University, Taiwan. His current research
interests include IS success models, management education, Internet entrepre-
neurship education, electronic or mobile learning, customer relationship man-
agement, knowledge management, and IT/IS adoption strategies. He has
published papers in journals such as Academy of Management Learning
and Education, Information Systems Journal, Information & Management,
International Journal of Information Management, Government Information
Quarterly, Computers & Education, British Journal of Educational Technology,
among others. He is currently serving as a Project Reexamination Committee
Member and a Convener of the Management Education SIG for the research
field of Applied Science Education in the Ministry of Science and Technology of
Taiwan.