
Article

The Measurement and Dimensionality of Mobile Learning Systems Success: Two-Stage Development and Validation

Journal of Educational Computing Research
0(0) 1–22
© The Author(s) 2016
Reprints and permissions: sagepub.com/journalsPermissions.nav
DOI: 10.1177/0735633116671324
jec.sagepub.com

Hsin-Hui Lin¹, Yi-Shun Wang², Ci-Rong Li³, Ying-Wei Shih², and Shin-Jeng Lin⁴

Abstract
The main purpose of this study is to develop and validate a multidimensional
instrument for measuring mobile learning systems success (MLSS) based on
previous research. This study defines the construct of MLSS, develops a generic
MLSS instrument with desirable psychometric properties, and explores the instru-
ment’s theoretical and practical applications. By analyzing data from a calibration
sample (n = 241) and a validation sample (n = 209), this study proposes a 6-factor,
25-item MLSS instrument. This empirically validated instrument will be useful
to researchers in developing and testing mobile learning theories, as well as to
educators in understanding MLSS from the students’ perspective and promoting the
use of mobile learning systems.

Keywords
mobile learning, information systems success, scale development, measurement

¹Department of Distribution Management, National Taichung University of Science and Technology, Taiwan
²Department of Information Management, National Changhua University of Education, Taiwan
³Department of Business Administration, Jilin University, China
⁴Department of Management, Leadership, and Information Systems, Le Moyne College, NY, USA
Corresponding Author:
Yi-Shun Wang, Department of Information Management, National Changhua University of Education,
No. 2, Shi-da Road, Changhua 500, Taiwan.
Email: yswang@cc.ncue.edu.tw

Introduction
Recently, mobile learning has become increasingly important in the educational
context because of the rapid advance and popularity of wireless communication
and mobile technologies (e.g., Cheon, Lee, Crooks, & Song, 2012;
Evans, 2008; Huang & Chiu, 2014, 2015; Kukulska-Hulme & Traxler, 2005;
Shen, Wang, Gao, Novak, & Tang, 2009; Wang & Shen, 2012). Numerous
studies on the use of mobile and wireless communication technologies in
education have indicated that these technologies can complement and add
value to existing learning models, such as the social constructivist theory of
learning with technology (Brown & Campione, 1996) and conversation theory
(Pask, 1975). Mobile learning originates from distance education and, enabled
by the characteristics of mobile technology, is the follow-up to e-learning
(Lin, Lin, Yeh, & Wang, 2016). E-learning took learning away from the
classroom, whereas mobile learning takes learning away from any fixed
location (Park, Nam, & Cha, 2012).
Mobile learning refers to the delivery of learning to students anytime and
anywhere through the use of mobile devices (e.g., personal digital assistants,
cellular phones, or portable computers) (Alrasheedi, Capretz, & Raza, 2016).
With the use of mobile devices, students can interact with educational resources
while away from their normal place of learning (Hwang & Tsai, 2011; Kim,
2006; Shen, Wang, & Pan, 2008). Recent mobile learning studies recognize that
e-learning systems using mobile devices can improve educational efficacy for
students and e-learners (Chu, Hwang, & Tseng, 2010; Shih, Chuang, & Hwang,
2010). Thus, mobile learning is becoming progressively more significant and
will play a vital role in the rapidly growing e-learning market. While a
considerable amount of research has been conducted
on mobile learning systems (e.g., Cheon et al., 2012; Wang, Wu, & Wang, 2009),
little research has been carried out to address the conceptualization and meas-
urement of mobile learning systems success (MLSS) within higher education
institutions (HEIs; Gikas & Grant, 2013). Therefore, methods of assessing the
effectiveness of mobile learning systems are a critical issue in both practice and
research. To address this concern, mobile learning researchers and practitioners
need dependable ways to measure the success or effectiveness of mobile learning
systems. Following DeLone and McLean’s (2003) conceptual model of informa-
tion system (IS) success, this study evaluates a mobile learning system imple-
mentation through the conceptualization and empirical measurement of an MLSS
construct.
It is noted that the success of mobile learning systems cannot be evaluated
using a single proxy construct (e.g., user satisfaction) or a single-item scale (e.g.,
overall success). The measure of MLSS needs to incorporate different aspects of
the MLSS construct if it is to be a useful diagnostic instrument. To assess the
extent and specific nature of MLSS, the different dimensions of the MLSS
construct must be defined both conceptually and operationally. Doing so enables
researchers and practitioners to do the following: (a) capture multiple aspects
of MLSS that may be subsumed within a single scale, (b) provide insight into the
nature of interrelationships among MLSS dimensions, (c) provide a more accurate
diagnostic tool to assess the success of a mobile learning system, and (d) employ it in
the postimplementation phase as an evaluation mechanism to assess whether the
anticipated outcomes and benefits of mobile learning systems are realized. By
using a well-validated instrument, mobile learning designers can better design
and justify their activities, especially if they devote a significant portion of their
resources to these activities. Additionally, until such a scale is developed, the
varying measures of MLSS will inhibit the generalizability and accumulation of
research findings.
The remainder of this article is organized as follows. In the next section, this
study conceptualizes the MLSS construct and establishes its theoretical founda-
tion. This is followed by descriptions of the research methods used in scale item
generation and data collection. Then, based on the scale development procedure
suggested by Churchill (1979), this study presents the results of purifying the
MLSS scale, identifying the factor structure of the scale, and examining the
scale’s reliability, content validity, criterion-related validity, convergent validity,
discriminant validity, and nomological validity.¹ Finally, the managerial impli-
cations and directions for future research are discussed.

Theoretical Foundations
Mobile learning is a specific type of e-learning delivered through mobile technology
(Cheon et al., 2012; Evans, 2008). Compared with traditional lectures, e-learning
allows learners to choose (within constraints) when, where, and how they study.
Through using mobile technology and devices that embrace these characteristics
of portability, instant connectivity, and context sensitivity (Cheon et al., 2012),
mobile learning not only inherits the advantages of e-learning but also
allows learners to vary their study location and to study ‘‘on the move,’’ through
which they can experience a unique learning mode (Traxler, 2010; Wang &
Higgins, 2006).
Because increasing demands on time lead learners to study in their
lunch breaks or other free time, in addition to conventional lecture theatres or
the library, a mobile learning system makes it easier for learners to study when
and where they want by using mobile technology and devices to transport their
learning materials (Ting, 2012). Because mobile devices have become ubiquitous
on college campuses, various mobile learning initiatives have been applied in
higher education. The advanced hardware (e.g., cameras) and various software
(e.g., apps) available on mobile devices can provide mobile learning systems with
more capabilities to organize, manipulate, and generate information for teaching
and learning (Keskin & Metcalf, 2011). Today, mobile learning is becoming
increasingly popular with university students in America, Europe, and Asia.
Many universities are beginning to provide students with mobile learning sys-
tems. Therefore, research on methods of assessing the effectiveness of mobile
learning systems is necessary for these systems to be successful
(Alrasheedi, Capretz, & Raza, 2015).
Because a mobile learning system is not only a special type of e-learning but
also a special type of IS, this study follows prior IS success studies
to establish the theoretical foundation and conceptualization of an MLSS con-
struct. One of the more powerful IS success models is the model proposed by
DeLone and McLean (1992), which suggests a systematic combination of indi-
vidual measures from IS success categories (Heo & Han, 2003; Myers,
Kappelman, & Prybutok, 1997). In DeLone and McLean’s (1992) work, their
comprehensive review of different IS success measures concludes with a model of
interrelationships between six IS success variable categories, including system
quality, information quality, IS use, user satisfaction, individual impact,
and organizational impact. The model makes two important contributions to
the understanding of IS success. First, it provides a scheme for categorizing
the multitude of IS success measures that have been used in the literature, and
second, it suggests a model of temporal and causal interdependencies between
the categories (Rai, Lang, & Welker, 2002; Seddon, 1997).
After the publication of DeLone and McLean’s (1992) IS success model,
researchers began proposing modifications to the original IS success
model (e.g., Rai et al., 2002; Seddon, 1997). Seddon (1997) suggests that the
inclusion of both process and causal explanations in DeLone and McLean’s
(1992) model leads to so many potentially confusing meanings that the value
of the model is diminished. He identified three distinct models intermingled in
DeLone and McLean’s (1992) model, each reflecting a different interpretation of
IS use. The first model is a process model of IS success that describes the
sequence of events relating to an IS, the second model is a representation of
the behavior that manifests as a result of IS success, and the third is a variance
model of IS success. Additionally, because Seddon (1997) claims that IS use is a
behavior, not a success measure, he respecifies the DeLone and McLean (1992)
model by replacing IS use with perceived usefulness, which serves as a general
perceptual measure of the net benefits of IS use, to adapt his model to both
volitional and non-volitional usage contexts.
In response, DeLone and McLean (2003) argue that Seddon’s reformulation
of their 1992 model into two partial variance models (i.e., IS success model and
partial behavioral model of IS use) unduly complicates the success model and
defeats the intent of the original model, although they agree with Seddon’s
premise that the combination of variance and process explanations of IS success
in one model can be confusing. Thus, DeLone and McLean (2003) present an
updated IS success model by (a) using system usage or the alternative ‘‘intention to
use’’ as an important measure of IS success to adapt their model to both
volitional and non-volitional usage contexts, (b) adding ‘‘service quality’’ meas-
ures as a new dimension of IS success model, and (c) grouping all the ‘‘impact’’
measures into a single impact or benefit category called ‘‘net benefits.’’
The updated IS success model consists of six dimensions: (a) information
quality, (b) system quality, (c) service quality, (d) use or intention to use,
(e) user satisfaction, and (f) net benefits. Because use is voluntary or quasivoluntary
in Internet applications, system usage or ‘‘intention to use’’ is still
considered an important measure of IS success in the updated IS success
model and continues to be used as a dependent variable in a number of
empirical studies (e.g., Liao, Huang, & Wang, 2015; Wang, 2008). DeLone and
McLean (2003) suggest that their updated IS success model can be adapted to
the measurement challenges of the new Internet world (e.g., e-learning). Thus,
this study adopted DeLone and McLean’s (2003) IS success model as a theor-
etical framework to develop an instrument for assessing the success of mobile
learning systems in the higher education institutions context.
Finally, net benefits are usually proposed to serve as an ultimate IS success
measure in the previous IS success models (e.g., DeLone & McLean, 2003;
Seddon, 1997). However, Seddon (1997) explicitly argues that different stake-
holders may have different opinions as to what constitutes a benefit to them.
Thus, when measuring the net benefits of an IS, researchers need to define clearly
and carefully the stakeholders and context in which the net benefits are to be
measured (DeLone & McLean, 2003; Seddon, 1997). Accordingly, this study
focuses mainly on the perspective of the student and uses the six updated IS
success dimensions—information quality, system quality, service quality, system
use, user satisfaction, and net benefits—to develop and validate a measurement
model of MLSS.

Research Methodology
Generation of Scale Items
Operationally, MLSS can be considered a summation of different success
measures of a mobile learning system. There are several potential measurement
items for the MLSS construct. A review of the literature on IS success, website
success, IS performance, user information satisfaction, e-learner satisfaction,
web user satisfaction, system use, website quality, IS service quality, IS benefits,
and e-learning systems success (e.g., Aladwani & Palvia, 2002; Bailey & Pearson,
1983; Barnes & Vidgen, 2000; Chen, Soliman, Mao, & Frolick, 2000; DeLone &
McLean, 2003; Doll & Torkzadeh, 1988, 1998; Downing, 1999; Etezadi-Amoli &
Farhoomand, 1996; Heo & Han, 2003; Ives, Olson, & Baroudi, 1983; Kettinger
& Lee, 1994; Liu & Arnett, 2000; McKinney, Yoon, & Zahedi, 2002; Mirani &
Lederer, 1998; Muylle, Moenaert, & Despontin, 2004; Palvia, 1996; Rai et al.,
2002; Saarinen, 1996; Segars & Grover, 1998; Wang, 2003; Wang & Chiu, 2011;
Wang, Li, Li, & Wang, 2014; Wang, Li, Lin, & Shih, 2014; Wang, Tang, &
Tang, 2001; Wang & Tang, 2003; Wang, Wang, & Shee, 2007) provided 40 items
representing the six dimensions underlying the MLSS construct, and these were
used to form the initial pool of items with face validity for the MLSS scale. To
make sure that no important attributes or items were omitted, experience sur-
veys and personal interviews regarding MLSS were conducted with the assistance
of two mobile learning system designers, two university teachers, and five mobile
learning system users. To address content validity, this panel was asked to review the
initial item list of the MLSS scale, and they recommended eliminating 15 items
because of redundancy. After careful examination of the result of the experience
surveys and interviews, the remaining 25 items were further adjusted to make
their wording as precise as possible and were then considered to constitute an
initially proposed scale for the MLSS measurement.
An initial MLSS instrument involving 25 items (as shown in the Appendix),
with the two global measures (i.e., perceived overall performance and perceived
overall success), was developed using a 7-point Likert-type scale, ranging from
strongly disagree to strongly agree. A 7-point Likert-type scale was adopted to
be consistent with previous e-learning systems success measures (e.g., Wang
et al., 2007; Wang, Li, Li, et al., 2014), providing a basis for cross-study research.
The global measures can be used to analyze the criterion-related validity of the
instrument and to measure the overall MLSS prior to detailed analysis. In add-
ition to the MLSS measuring items, the questionnaire contains demographic
questions used for describing the sample.

Sample and Procedure


To enhance the generalizability of the results, this study gathered a sample of
450 students from seven universities in Taiwan using a convenience sampling
approach. Willing participants were first introduced to the definition of mobile
learning systems. Students were then asked whether they had ever used a mobile
learning system provided by their college; if they replied in the affirmative, they
were asked to participate in the survey. Respondents then self-administered the ques-
tionnaire and were asked to circle the response that best described their level of
agreement with the statements. In terms of the respondents, 51.3% were male
and 48.7% were female. The age distributions of the respondents were as fol-
lows: under 20 (3.7%), 21–30 (66.4%), 31–40 (14.9%), and over 40 (15.0%). The
respondents had an average of 9.3 years of computer experience (SD = 1.98) and
7.2 years of Internet experience (SD = 2.13). The complete sample was then
randomly split into calibration (n = 241) and validation (n = 209) samples
(Anderson & Gerbing, 1988; Churchill, 1979; Cudeck & Browne, 1983;
Iacobucci, Saldanha, & Deng, 2007). The calibration sample was used to develop
the initial MLSS scale while the validation sample was used to validate the
scale’s dimensionality and psychometric properties.
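
To make the two-stage procedure concrete, the following minimal Python sketch illustrates such a random calibration/validation split. The DataFrame name, column labels, and random seed are illustrative assumptions, not details taken from the original study.

```python
import numpy as np
import pandas as pd

# Hypothetical response matrix: 450 respondents x 27 items (Q1-Q25 plus the
# two global criterion items, Q26-Q27); the values are random placeholders.
rng = np.random.default_rng(2016)
responses = pd.DataFrame(
    rng.integers(1, 8, size=(450, 27)),        # 7-point Likert responses
    columns=[f"Q{i}" for i in range(1, 28)],
)

# Randomly split the complete sample into calibration (n = 241) and
# validation (n = 209) subsamples, as described in the text.
shuffled = responses.sample(frac=1, random_state=2016).reset_index(drop=True)
calibration, validation = shuffled.iloc[:241], shuffled.iloc[241:]
print(len(calibration), len(validation))       # 241 209
```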

Scale Development
Item Analysis and Reliability Estimates
The 25-item instrument (with the two global items excluded) was refined by
analyzing the pooled data; that is, the data collected from different universities
were considered together. Because the primary purpose of this study was to
develop a standardized instrument with desirable psychometric properties for
measuring MLSS, the pooling of the sample data was considered appropriate
and justified. The first step in purifying the instrument was to calculate the
coefficient alpha and the item-to-total correlations used to delete garbage
items (Cronbach, 1951). To avoid spurious part-whole correlation, the criterion
used in this study for determining whether to delete an item was the item’s
corrected item-to-total correlation. An iterative sequence of computing
Cronbach’s alpha coefficients and item-to-total correlations was executed for
each MLSS dimension. Items with corrected item-to-total correlations below a
threshold value of 0.4 were eliminated (Palvia, 1996). Because each item’s cor-
rected item-to-total correlation was above 0.4, no item was eliminated in this
stage. The 25-item MLSS instrument has internal consistency reliability
(Cronbach’s alpha) of 0.856.
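
As an illustration of this purification criterion, the sketch below computes Cronbach's alpha and corrected item-to-total correlations and iteratively drops items falling below the 0.4 cutoff. It is a plausible reconstruction of the procedure described, not the authors' code; `calibration` is the hypothetical sample from the earlier sketch.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k/(k-1)) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

def corrected_item_total(items: pd.DataFrame) -> pd.Series:
    """Correlate each item with the sum of the *other* items, which avoids the
    spurious part-whole inflation of a plain item-to-total correlation."""
    return pd.Series({c: items[c].corr(items.drop(columns=c).sum(axis=1))
                      for c in items.columns})

def purify(items: pd.DataFrame, threshold: float = 0.4) -> pd.DataFrame:
    """Iteratively drop the weakest item until every corrected item-to-total
    correlation meets the 0.4 cutoff used in the text (Palvia, 1996)."""
    while items.shape[1] > 2:
        r = corrected_item_total(items)
        if r.min() >= threshold:
            break
        items = items.drop(columns=r.idxmin())
    return items

# Example for one hypothesized dimension (item assignments per the Appendix).
info_quality = purify(calibration[["Q1", "Q2", "Q3", "Q4"]])
print(cronbach_alpha(info_quality))
```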

Identifying the Factor Structure of the MLSS Instrument


An exploratory factor analysis was conducted to further examine the factor
structure of the 25-item instrument. Before identifying the factor structure of
the MLSS construct using factor analysis, a chi-square value of 2557.08 and
significance level of .001 were obtained using Bartlett’s sphericity test, which
suggests that the intercorrelation matrix contains sufficient common variance
to make factor analysis worthwhile. The calibration sample of 241 responses was
examined using a principal components factor analysis as the extraction tech-
nique, and varimax as the orthogonal rotation method. To improve the unidi-
mensionality or convergent validity and discriminant validity (Price & Mueller,
1986) of the instrument through exploratory factor analysis, four commonly
employed decision rules (Hair, Anderson, Tatham, & Black, 1998; Straub,
1989) were applied to identify the factors underlying the MLSS construct:
(a) using a minimum eigenvalue of 1 as a cutoff value for extraction, (b) deleting
items with factor loadings less than 0.5 on all factors or greater than 0.5 on two
or more factors, (c) requiring a simple factor structure, and (d) excluding single-item
factors from the standpoint of parsimony. An iterative sequence of factor ana-
lysis was executed. Fortunately, none of the items were deleted in this phase. At
the end of the factor analysis procedure, this study obtained a six-factor, 25-item
instrument. These six factors were interpreted as knowledge or information
quality, system quality, service quality, system use, user satisfaction, and net
benefits, explaining 66.53% of the variance in the data set. The six factors of
the proposed MLSS instrument are consistent with those of DeLone and
McLean’s (2003) IS success model used in this study to conceptualize MLSS.
Table 1 summarizes the factor loadings for the 25-item MLSS instrument. The
significant loading of all the items on the single factor indicates convergent
validity, while the fact that no cross-loading items were found supports the
discriminant validity of the instrument.
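
A hedged sketch of this exploratory step follows, using the third-party factor_analyzer package (a tooling assumption; any routine offering principal-components extraction with varimax rotation would serve) and the hypothetical `calibration` data introduced earlier.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity

items = calibration[[f"Q{i}" for i in range(1, 26)]]    # the 25 MLSS items

# Bartlett's sphericity test: does the intercorrelation matrix contain enough
# common variance to make factor analysis worthwhile?
chi_square, p_value = calculate_bartlett_sphericity(items)

# Principal-components extraction with orthogonal varimax rotation.
fa = FactorAnalyzer(n_factors=6, method="principal", rotation="varimax")
fa.fit(items)

loadings = pd.DataFrame(fa.loadings_, index=items.columns)
eigenvalues, _ = fa.get_eigenvalues()                    # retain eigenvalues > 1

# Apply the deletion rules from the text: flag items loading below 0.5 on all
# factors, or above 0.5 on two or more factors (cross-loading).
weak = loadings.abs().max(axis=1) < 0.5
cross = (loadings.abs() > 0.5).sum(axis=1) >= 2
flagged_items = loadings.index[weak | cross].tolist()    # candidates for deletion
```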

Reliability
Reliability was evaluated by assessing the internal consistency of the items rep-
resenting each factor using Cronbach’s alpha. The 25-item instrument had a
high overall reliability of 0.856, exceeding the acceptable standard of 0.70 suggested by
Nunnally (1978). The reliability of each factor was as follows: knowledge
or information quality = 0.830, system quality = 0.819, service quality = 0.876,
system use = 0.856, user satisfaction = 0.792, and net benefits = 0.817.

Content Validity
The MLSS instrument met the requirements of reliability and had a consistent
factor structure. However, while high reliability and internal consistency are
necessary conditions for a scale’s construct validity (the extent to which a
scale fully and unambiguously captures the underlying, unobservable construct
it is intended to measure), they are not sufficient (Nunnally, 1978). The basic
qualitative criterion concerning construct validity is content validity. Content
validity implies that the scale considers all aspects of the construct being
measured. Churchill (1979, p. 70) contends that ‘‘specifying the domain of
the construct, generating items that exhaust the domain, and subsequently
purifying the resulting scale should produce a measure which is content
or face valid and reliable.’’ Therefore, the rigorous procedures described
earlier used in conceptualizing the MLSS construct, generating items and pur-
ifying the MLSS measures suggest that the MLSS scale has acceptable content
validity.

Criterion-Related Validity
Criterion-related validity was assessed by the correlation between the total
scores on the instrument (sum for 25 items) and the criterion measure (sum of
the global items). Previous studies usually used the sum of the global items as the
criterion measure (e.g., Palvia, 1996; Wang, 2003). Criterion-related validity
refers to concurrent validity in this study where the total scores on the MLSS
instrument and the score on the criterion are measured at the same time.
A positive relationship is expected between the total score and the criterion if
the instrument is capable of measuring the MLSS construct.

Table 1. Summary of Results From the Scale Purification.

Knowledge/information quality (reliability = 0.830)
  Q1. The mobile learning system provides information that is exactly what you need. (Factor 1; loading = 0.820)
  Q2. The mobile learning system provides information you need at the right time. (Factor 1; loading = 0.820)
  Q3. You feel the output of the mobile learning system is reliable. (Factor 1; loading = 0.805)
  Q4. The mobile learning system provides information that is easy to understand. (Factor 1; loading = 0.765)
System quality (reliability = 0.819)
  Q5. The mobile learning system provides interactive features between users and system. (Factor 2; loading = 0.675)
  Q6. The mobile learning system is user-friendly. (Factor 2; loading = 0.780)
  Q7. The mobile learning system provides high availability. (Factor 2; loading = 0.774)
  Q8. The mobile learning system is easy to use. (Factor 2; loading = 0.782)
  Q9. The mobile learning system has attractive features to appeal to the users. (Factor 2; loading = 0.691)
Service quality (reliability = 0.876)
  Q10. When you have a problem, the mobile learning system service shows a sincere interest in solving it. (Factor 3; loading = 0.756)
  Q11. The mobile learning system service is always willing to help you. (Factor 3; loading = 0.808)
  Q12. The mobile learning system service gives you individual attention. (Factor 3; loading = 0.805)
  Q13. The mobile learning system service understands your specific needs. (Factor 3; loading = 0.788)
  Q14. The mobile learning system staff provides high availability for consultation. (Factor 3; loading = 0.719)
  Q15. The mobile learning system provides a proper level of on-line assistance and explanation. (Factor 3; loading = 0.756)
System use (reliability = 0.856)
  Q16. You use the mobile learning system to record information and knowledge you learned. (Factor 4; loading = 0.848)
  Q17. You use the mobile learning system to interact with other learners. (Factor 4; loading = 0.771)
  Q18. You use the mobile learning system to share your information and knowledge. (Factor 4; loading = 0.796)
  Q19. You depend upon the mobile learning system. (Factor 4; loading = 0.739)
User satisfaction (reliability = 0.792)
  Q20. You are satisfied with this mobile learning system. (Factor 5; loading = 0.844)
  Q21. You are satisfied with the efficiency of the mobile learning system. (Factor 5; loading = 0.823)
  Q22. You are satisfied with the effectiveness of the mobile learning system. (Factor 5; loading = 0.797)
Net benefits (reliability = 0.817)
  Q23. Using the mobile learning system improves my learning efficiency. (Factor 6; loading = 0.836)
  Q24. The mobile learning system helps you improve your learning performance. (Factor 6; loading = 0.849)
  Q25. The mobile learning system helps you think through learning or working problems. (Factor 6; loading = 0.839)

Note. Each loading is the item's loading on the factor to which it belongs; loadings on dimensions to which items did not belong were all less than 0.5. Factor 1 = knowledge/information quality; Factor 2 = system quality; Factor 3 = service quality; Factor 4 = system use; Factor 5 = user satisfaction; Factor 6 = net benefits.
The 25-item instrument has a criterion-related validity of 0.583 (significant at
the .01 level), representing acceptable criterion-related validity (Wang, 2003).
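
Operationally, this check reduces to a Pearson correlation between the summed 25 items and the summed global items. A minimal sketch, with item names following the Appendix and everything else assumed:

```python
from scipy import stats

total_score = calibration[[f"Q{i}" for i in range(1, 26)]].sum(axis=1)
criterion = calibration[["Q26", "Q27"]].sum(axis=1)   # the two global items

# Concurrent criterion-related validity: both measures taken at the same time.
r, p = stats.pearsonr(total_score, criterion)
print(f"criterion-related validity r = {r:.3f}, p = {p:.4f}")
```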

Discriminant and Convergent Validity


While the previous factor analysis preliminarily demonstrated discriminant
and convergent validity, this study further used the correlation matrix
approach to evaluate both forms of validity for the MLSS instrument.
Convergent validity tests whether the correlations between measures of the
same factor are different from zero and large enough to warrant further inves-
tigation of discriminant validity. The smallest within-factor correlations are
knowledge or information quality = 0.49, system quality = 0.39, service
quality = 0.49, system use = 0.54, user satisfaction = 0.53, and net benefits = 0.58.
These correlations are significantly higher than zero (p < .05) and large
enough to proceed with discriminant validity analysis (Doll & Torkzadeh,
1988; Palvia, 1996; Wang, 2003).
Discriminant validity was assessed with the confidence interval approach rec-
ommended by Anderson and Gerbing (1988). Table 2 indicates the correlations
among the six dimensions. The six dimensions correlated significantly with each
other, implying the existence of a common MLSS construct governing the six
dimensions. However, the confidence intervals constructed around the pairwise
correlations between the six factors do not contain the value of 1.00. These
results support the discriminant validity of the MLSS instrument.
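
One way to operationalize this check is sketched below: a confidence interval around a pairwise factor correlation, verified not to contain 1.00. Note that Anderson and Gerbing (1988) build the interval from the standard error of the CFA estimate; the Fisher-z interval used here is an illustrative stand-in, and the sample size assumes the calibration sample.

```python
import numpy as np
from scipy import stats

def correlation_ci(r: float, n: int, level: float = 0.95) -> tuple[float, float]:
    """Fisher-z confidence interval for a Pearson correlation r based on n cases."""
    z = np.arctanh(r)                       # Fisher z-transform of r
    se = 1.0 / np.sqrt(n - 3)               # standard error of z
    crit = stats.norm.ppf(0.5 + level / 2)  # two-sided normal critical value
    return float(np.tanh(z - crit * se)), float(np.tanh(z + crit * se))

# Largest inter-factor correlation in Table 2 (system use vs. service quality).
lo, hi = correlation_ci(0.428, n=241)
print(hi < 1.0)   # True: the interval excludes 1.00, supporting discriminant validity
```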

Scale Validation: Data-to-Model Fit


The advantages of applying confirmatory factor analysis (CFA), as compared
with classical approaches such as common factor analysis and multitrait-multimethod
analysis, to determine construct validity are widely recognized
(Anderson & Gerbing, 1988; MacCallum, 1986; Marsh & Hocevar, 1985; Segars
& Grover, 1993).

Table 2. Correlations Among Mobile Learning System Success Dimensions.

                          1        2        3        4        5        6
1. Knowledge quality    1.000
2. System quality       0.148*   1.000
3. Service quality      0.214*   0.258*   1.000
4. System use           0.160*   0.326*   0.428*   1.000
5. User satisfaction    0.166*   0.123*   0.126*   0.303*   1.000
6. Net benefits        -0.034    0.244*   0.115*   0.128*   0.151*   1.000

*p < .05.
Using the validation sample (n = 209), this study attempts to
establish strong construct validity of the MLSS measure, including its convergent
validity and discriminant validity, which can be directly tested by a higher order
CFA (Marsh & Hocevar, 1988).
The CFA of the MLSS instrument requires a prior designation of plausible
factor patterns from previous theoretical or empirical work. Based on logic,
theory, and previous studies (Doll, Raghunathan, Lim, & Gupta, 1995;
Doll, Xia, & Torkzadeh, 1994), two plausible alternative models of factor
structure were proposed and were then explicitly tested statistically against the
sample data.
Model 1 hypothesizes that the six first-order factors are correlated with each
other. More specifically, Model 1 hypothesized a priori that (a) responses to the
MLSS could be explained by six factors, (b) each item would have a nonzero
loading on the factor it was designed to measure, and zero loadings on all other
factors, (c) the six factors would be correlated, and (d) measurement error terms
would be uncorrelated (Byrne, 1998, p. 137).
Model 2 hypothesizes six first-order factors and one second-order MLSS
factor. As Tanaka and Huba (1984) contend, if first-order factors are correlated,
the common variance shared by the first-order factors may be statistically
‘‘caused’’ by a single second-order factor. Thus, testing for the existence of a
single second-order MLSS construct is considered appropriate. As such, Model
2 hypothesized a priori that: (a) responses to the MLSS could be explained by six
first-order factors and one second-order factor (i.e., MLSS), (b) each item would
have a nonzero loading on the first-order factor it was designed to measure, and
zero loadings on the other five first-order factors; (c) error terms associated with
each item would be uncorrelated, and (d) covariation among the six first-order
factors would be explained fully by their regression on the second-order factor
(Byrne, 1998, p. 170).
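
The two competing specifications can be written in lavaan-style syntax and fitted, for example, with the Python semopy package. The sketch below is an assumed operationalization (item-to-factor assignments follow Table 1, and `validation` is the hypothetical n = 209 subsample), not the authors' actual analysis.

```python
import semopy

# Model 1: six correlated first-order factors (factor covariances are free by
# default, matching hypothesis (c)).
MODEL1 = """
InfoQual =~ Q1 + Q2 + Q3 + Q4
SysQual  =~ Q5 + Q6 + Q7 + Q8 + Q9
ServQual =~ Q10 + Q11 + Q12 + Q13 + Q14 + Q15
SysUse   =~ Q16 + Q17 + Q18 + Q19
UserSat  =~ Q20 + Q21 + Q22
NetBen   =~ Q23 + Q24 + Q25
"""

# Model 2: the same six first-order factors plus one second-order MLSS factor
# whose regressions fully explain the first-order covariation.
MODEL2 = MODEL1 + """
MLSS =~ InfoQual + SysQual + ServQual + SysUse + UserSat + NetBen
"""

m1, m2 = semopy.Model(MODEL1), semopy.Model(MODEL2)
m1.fit(validation)
m2.fit(validation)

stats1, stats2 = semopy.calc_stats(m1), semopy.calc_stats(m2)
# Target coefficient discussed below: T = chi2(first-order) / chi2(second-order).
T = float(stats1["chi2"].iloc[0]) / float(stats2["chi2"].iloc[0])
print(stats1[["chi2", "DoF", "CFI", "RMSEA"]])
print(f"target coefficient T = {T:.2f}")
```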
Based on the absolute and incremental indices of goodness-of-fit reported in
Table 3, this study concludes that Model 1 fits the sample data fairly well. Model
2 provides acceptable model-data fit, as evidenced by absolute and relative indi-
ces. Importantly, the second-order factor model is merely explaining the covari-
ation among the first-order factors in a more parsimonious way (i.e., one that
requires fewer estimated parameters; Segars & Grover, 1998). Consequently, even
when the higher-order model (Model 2) is able to explain the factor covariations,
the goodness-of-fit of the higher order model can never be better than the cor-
responding first-order model (Model 1). As expected, Model 2’s goodness-of-fit
indices (see Table 3) are slightly lower than its first-order counterpart (Model 1).
As theorized, MLSS is a higher order phenomenon that is evidenced through
high performance across multiple dimensions. In Model 1, the observed correlations
among the first-order factors suggest that effectiveness in MLSS is an
aggregate of the six factors.

Table 3. Summary of the Global Fit Indices for the Validation Sample.

Model                                                    GFI    RMSEA  χ²/df        AGFI   NFI    CFI    SRMR
Model 1: First-order model with six correlated factors   0.907  0.020  281.91/260   0.884  0.889  0.990  0.031
Model 2: First-order model with six factors and one
  second-order factor                                    0.904  0.019  289.46/269   0.885  0.886  0.991  0.035

Note. GFI = goodness of fit index; RMSEA = root mean square error of approximation; AGFI = adjusted
goodness of fit index; NFI = normed fit index; CFI = comparative fit index; SRMR = standardized root
mean square residual.

The baseline model (Model 1) for testing the existence of an MLSS construct implies that the six first-order
factors are associated but not governed by a common latent phenomenon. The
alternative model (Model 2) posits a second-order MLSS construct governing
the correlations among the six factors. The target (T) coefficient
[T = χ²(baseline model)/χ²(alternative model)] was used to test for the existence
of a higher order MLSS construct (Marsh & Hocevar, 1985).
The calculated target coefficient between Models 1 and 2 is a high
0.97, suggesting that the addition of the second-order factor does not significantly
increase χ². This value provides reasonable evidence of a second-order
MLSS construct. Ninety-seven percent of the variations in the six first-order
factors in Model 1 are explained by Model 2’s MLSS construct. Since Model
2 represents a more parsimonious representation of observed covariances, it
should be accepted over Model 1 as a truer representation of the MLSS
model structure. Additionally, the observed item loadings and standardized
structure coefficients of Model 1 are described in Table 3. Consequently, the
6-factor, 25-item MLSS instrument demonstrated adequate convergent validity
and discriminant validity.

Implications for Research and Practice


User satisfaction has frequently been used as a measure of IS success in past
studies, including studies of wire-based Internet applications (McKinney et al., 2002;
Muylle et al., 2004; Wang et al., 2007), asynchronous e-learning systems
(Wang, 2003), and mobile commerce systems (Wang & Liao, 2008). However,
according to the previous IS success literature, IS success or effectiveness is a
multidimensional construct, which indicates that it cannot be evaluated using a
single proxy construct such as user satisfaction or system use. This study
therefore conceptualized the con-
struct of MLSS, provided empirical validation of the construct and its underlying
dimensionality, and developed a standardized instrument with desirable
psychometric properties for measuring MLSS. The validated 25-item MLSS instrument
consists of six factors: knowledge or information quality, system quality, service
quality, system use, user satisfaction, and net benefits.
On the other hand, DeLone and McLean (2003) not only emphasize that IS
success is a multidimensional and interdependent construct but also argue that
it is necessary to study the interrelationships among those dimensions. However,
this study did not investigate the causal relationships between the six factors of
MLSS. Hence, using the proposed MLSS instrument, future research efforts
should explore and test the causal relationships among knowledge or informa-
tion quality, system quality, service quality, user satisfaction, system use, and
net benefits constructs within the boundary of mobile learning from students’
perspectives. The findings can provide more insights into how to implement
successful mobile learning systems within the context of HEIs such as univer-
sities. Based on DeLone and McLean’s (2003) model, information quality,
system quality, and service quality influence both user satisfaction and system
use, which, in turn, influence net benefits. Therefore, it is important for educa-
tors and mobile learning systems developers in HEIs to periodically assess the
success of mobile learning systems using the proposed MLSS instrument, and
take corrective actions for improvement of mobile learning quality, satisfaction,
usage, and effectiveness. The multiple-item MLSS instrument with good reliabil-
ity and validity can also provide researchers with a tool for measuring the MLSS
dimensions, and a basis for explaining, justifying, and comparing differences
among the results.
In addition, the results provide several implications for the development
and management of mobile learning systems. First, the MLSS instrument not
only can be utilized to assess the success of mobile learning systems from learners’
perspectives but also can provide fast and early feedback to HEIs.
The 25-item MLSS instrument that emerged was demonstrated to produce accept-
able reliability estimates, and the empirical evidence supported its con-
tent validity, criterion-related (concurrent) validity, discriminant validity, and
convergent validity. Thus, the concise MLSS instrument with good reliability
and validity provides an effective tool for educators and mobile learning managers
and system developers to understand users’ MLSS periodically and take corrective
actions for improvement of system satisfaction, usage, and effectiveness.
Second, because of the cross-sample validation of this MLSS instrument, the
instrument can be used to compare the effectiveness of different mobile learning
systems on specific factors, such as knowledge or information quality, system
quality, service quality, system use, user satisfaction, and net benefits. If an
organization finds itself lacking in any of these dimensions, then it may do a
more detailed analysis and take the necessary corrective actions. This MLSS
instrument was designed to be applicable across a broad spectrum of mobile
learning systems and to provide a common framework for comparative analysis.
The framework, when necessary, can be adapted or supplemented to fit the
specific practical needs of a particular mobile learning environment.
Finally, this empirically validated MLSS instrument emphasizes the importance
of adopting a multidimensional analytical approach in the context of mobile
learning systems. Establishing strategies to improve only one success variable is
incomplete if the effects of the others are not considered.
The results of this study encourage mobile learning managers to include the
measures of knowledge or information quality, system quality, service quality,
system use, user satisfaction, and net benefits into their present evaluation tech-
niques of MLSS.

Conclusions and Limitations


Based on the previous research on IS success (e.g., DeLone & McLean, 2003),
this study has conceptually defined the domain of the MLSS construct, oper-
ationally designed the initial MLSS item list, and empirically validated the gen-
eral MLSS instrument. The generality of this proposed instrument provides
adequate reliability and validity for comparative analysis of the results across
various mobile learning systems and mobile learning organizations. Thus, the
proposed MLSS instrument is of value not only to educators and systems
developers responsible for the implementation and utilization of mobile learning
systems but also to researchers interested in investigating the model of mobile
learning system success.
This research has some limitations that could be addressed by future research,
even though the generic MLSS instrument was developed and validated by using
a rigorous procedure. First, due to the use of a nonrandom sample of volunteers,
this study may have a risk of sampling bias in terms of the representativeness,
which may limit the generalizability of the study results beyond the study
sample. Consequently, future researchers should employ random sampling and
include other nationalities and geographical areas besides Taiwan.
Second, future research should evaluate the test–retest reliability of the MLSS
instrument. Measures of reliability not only include internal consistency, generally
evaluated by coefficient alpha, but also test–retest reliability, which
examines the stability of a scale over time. To establish the reliability of an
instrument, it is necessary to evaluate its test–retest reliability (Galletta &
Lederer, 1989). Therefore, future research should further use the test–retest cor-
relation method to investigate the stability of the MLSS instrument, including
short- and long-range stability.
Finally, traditional Chinese was used in the stages of item generation, per-
sonal interviews, and data collection. However, the final form of the MLSS
instrument was expressed in English. Thus, cultural and language differences
are important concerns in the study of scale development. Future research
could validate the proposed instrument in different languages.

Appendix A. Measurement of Mobile Learning System Success—Items Used in
the Exploratory Factor Analysis
Q1. The mobile learning system provides information that is exactly what you
need.
Q2. The mobile learning system provides information you need at the right time.
Q3. You feel the output of the mobile learning system is reliable.
Q4. The mobile learning system provides information that is easy to understand.
Q5. The mobile learning system provides interactive features between users and
system.
Q6. The mobile learning system is user-friendly.
Q7. The mobile learning system provides high availability.
Q8. The mobile learning system is easy to use.
Q9. The mobile learning system has attractive features to appeal to the users.
Q10. When you have a problem, the mobile learning system service shows a
sincere interest in solving it.
Q11. The mobile learning system service is always willing to help you.
Q12. The mobile learning system service gives you individual attention.
Q13. The mobile learning system service understands your specific needs.
Q14. The mobile learning system staff provides high availability for consultation.
Q15. The mobile learning system provides a proper level of on-line assistance
and explanation.
Q16. You use the mobile learning system to record information and knowledge
you learned.
Q17. You use the mobile learning system to interact with other learners.
Q18. You use the mobile learning system to share your information and knowledge.
Q19. You depend upon the mobile learning system.
Q20. You are satisfied with this mobile learning system.
Q21. You are satisfied with the efficiency of the mobile learning system.
Q22. You are satisfied with the effectiveness of the mobile learning system.
Q23. Using the mobile learning system improves my learning efficiency.
Q24. The mobile learning system helps you improve your learning performance.
Q25. The mobile learning system helps you think through learning or working
problems.

Criterion
Q26. As a whole, the performance of the mobile learning system is good.
Q27. As a whole, the mobile learning system is successful.

Declaration of Conflicting Interests


The authors declared no potential conflicts of interest with respect to the research,
authorship, and/or publication of this article.

Funding
The authors disclosed receipt of the following financial support for the research,
authorship, and/or publication of this article: This research was substantially supported
by the Ministry of Science and Technology (MOST) of Taiwan under grant number
MOST 101-2511-S-025-001-MY2.

Note
1. Nomological validity is a form of construct validity. It is the degree to which a con-
struct behaves as it should within a system of related constructs (the nomological
network; Liu, Li, & Zhu, 2012).

References
Aladwani, A. M., & Palvia, P. C. (2002). Developing and validating an instrument for
measuring user-perceived web quality. Information & Management, 39(6), 467–476.
Alrasheedi, M., Capretz, L. F., & Raza, A. (2015). A systematic review of the critical
factors for success of mobile learning in higher education (university students’ per-
spective). Journal of Educational Computing Research, 52(2), 257–276.
Alrasheedi, M., Capretz, L. F., & Raza, A. (2016). Management’s perspective on critical
success factors affecting mobile learning in higher education institutions—An empir-
ical study. Journal of Educational Computing Research, 54(2), 253–274.
Anderson, J. C., & Gerbing, D. W. (1988). Structural equation modeling in practice: A
review and recommended two-step approach. Psychological Bulletin, 103(3), 411–423.
Bailey, J. E., & Pearson, S. W. (1983). Development of a tool for measuring and analyzing
computer user satisfaction. Management Science, 29(5), 530–545.
Barnes, S. J., & Vidgen, R. T. (2000, July). WebQual: An exploration of Web site quality.
Proceedings of the Eighth European Conference on Information Systems, Vienna,
298–305.
Brown, A., & Campione, J. (1996). Psychological theory and design of innovative learn-
ing environments: On procedures, principles, and systems. In L. Schauble & R. Glaser
(Eds.), Innovations in learning: New environments for education. Mahwah, NJ:
Erlbaum.
Byrne, B. M. (1998). Structural equation modeling with LISREL, PRELIS, and SIMPLIS:
Basic concepts, applications, and programming. Mahwah, NJ: Lawrence Erlbaum
Associates.
Chen, L., Soliman, K. S., Mao, E., & Frolick, M. N. (2000). Measuring user satisfaction
with data warehouses: An exploratory study. Information & Management, 37(3),
103–110.
Cheon, J., Lee, S., Crooks, S. M., & Song, J. (2012). An investigation of mobile learning
readiness in higher education based on the theory of planned behavior. Computers &
Education, 59(3), 1054–1064.
Chu, H. C., Hwang, G. J., & Tseng, J. C. R. (2010). An innovative approach for develop-
ing and employing electronic libraries to support context-aware ubiquitous learning.
The Electronic Library, 28(6), 873–890.
Churchill, G. A. (1979). A paradigm for developing better measures of marketing con-
structs. Journal of Marketing Research, 16(1), 64–73.
Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests.
Psychometrika, 16(3), 297–334.
Cudeck, R., & Browne, M. W. (1983). Cross validation of covariance structures.
Multivariate Behavioral Research, 18(2), 147–167.
DeLone, W. H., & McLean, E. R. (1992). Information systems success: The quest for the
dependent variable. Information Systems Research, 3(1), 60–95.
DeLone, W. H., & McLean, E. R. (2003). The DeLone and McLean model of informa-
tion systems success: A ten-year update. Journal of Management Information Systems,
19(4), 9–30.
Doll, W. J., Raghunathan, T. S., Lim, J. U., & Gupta, Y. P. (1995). A confirmatory
factor analysis of the user information satisfaction instrument. Information System
Research, 6(2), 177–189.
Doll, W. J., & Torkzadeh, G. (1988). The measurement of end-user computing satisfac-
tion. MIS Quarterly, 12(2), 259–274.
Doll, W. J., & Torkzadeh, G. (1998). Developing a multidimensional measure of system-
use in an organizational context. Information & Management, 33(4), 171–185.
Doll, W. J., Xia, W., & Torkzadeh, G. (1994). A confirmatory factor analysis of the
end-user computing satisfaction instrument. MIS Quarterly, 18(4), 453–461.
Downing, C. E. (1999). System usage behavior as proxy for user satisfaction: An empir-
ical investigation. Information & Management, 35(4), 203–216.
Etezadi-Amoli, J., & Farhoomand, A. F. (1996). A structural model of end user
computing satisfaction and user performance. Information & Management, 30(2),
65–73.
Evans, C. (2008). The effectiveness of m-learning in the form of podcast revision lectures
in higher education. Computers & Education, 50(2), 491–498.
Galletta, D. F., & Lederer, A. L. (1989). Some cautions on the measurement of user
information satisfaction. Decision Sciences, 20(3), 419–438.
Gikas, J., & Grant, M. M. (2013). Mobile computing devices in higher education: Student
perspectives on learning with cellphones, smartphones & social media. Internet and
Higher Education, 19, 18–26.
Hair, J. F., Anderson, R. E., Tatham, R. L., & Black, W. C. (1998). Multivariate data
analysis. Upper Saddle River, NJ: Prentice-Hall.
Heo, J., & Han, I. (2003). Performance measure of information systems (IS) in evolving
computing environments: An empirical investigation. Information & Management,
40(4), 243–256.
Huang, Y.-M., & Chiu, P.-S. (2014). The effectiveness of a meaningful learning-based
evaluation model for context-aware mobile learning. British Journal of Educational
Technology, 46(2), 437–447.
Huang, Y.-M., & Chiu, P.-S. (2015). The effectiveness of the meaningful learning-based
evaluation for different achieving students in a ubiquitous learning context. Computers
& Education, 87, 243–253.
Hwang, G.-J., & Tsai, C.-C. (2011). Research trends in mobile and ubiquitous learning: A
review of publications in selected journals from 2001 to 2010. British Journal of
Educational Technology, 42(4), 65–70.
Iacobucci, D., Saldanha, N., & Deng, X. (2007). A meditation on mediation: Evidence
that structural equations models perform better than regressions. Journal of Consumer
Psychology, 17(2), 139–153.
Ives, B., Olson, M. H., & Baroudi, J. J. (1983). The measurement of user information
satisfaction. Communications of the ACM, 26(10), 785–793.
Keskin, N. O., & Metcalf, D. (2011). The current perspectives, theories and practice of
mobile learning. The Turkish Online Journal of Educational Technology, 10(2),
202–208.
Kettinger, W. J., & Lee, C. C. (1994). Perceived service quality and user satisfaction with
the information services function. Decision Sciences, 25(5), 737–766.
Kim, J. Y. (2006). A survey on mobile-assisted language learning. Modern English
Education, 7(2), 57–69.
Kukulska-Hulme, A., & Traxler, J. (2005). Mobile learning: A handbook for educators and
trainers. London, England: Routledge.
Liao, Y.-W., Huang, Y.-M., & Wang, Y.-S. (2015). Factors affecting students’ continued
usage intention toward business simulation games: An empirical study. Journal of
Educational Computing Research, 53(2), 260–283.
Lin, H.-H., Lin, S., Yeh, C.-H., & Wang, Y.-S. (2016). Measuring mobile learning readi-
ness: Scale development and validation. Internet Research, 26(1), 265–287.
Liu, C., & Arnett, K. P. (2000). Exploring the factors associated with web site success in
the context of electronic commerce. Information & Management, 38(1), 23–33.
Liu, L., Li, C., & Zhu, D. (2012). A new approach to testing nomological validity and its
application to a second-order measurement model of trust. Journal of the Association
for Information Systems, 13(12), Article 4.
MacCallum, R. (1986). Specification searches in covariance structure modeling.
Psychological Bulletin, 100(1), 107–120.
Marsh, H. W., & Hocevar, D. (1985). Application of confirmatory factor analysis to the
study of self-concept: First- and higher-order factor models and their invariance across
groups. Psychological Bulletin, 97(3), 562–582.
Marsh, H. W., & Hocevar, D. (1988). A new more powerful approach to multitrait-
multimethod analysis: Application of second-order confirmatory analysis. Journal of
Applied Psychology, 73(1), 107–117.
McKinney, V., Yoon, K., & Zahedi, F. M. (2002). The measurement of web-customer
satisfaction: An expectation and disconfirmation approach. Information Systems
Research, 13(3), 296–315.
Mirani, R., & Lederer, A. L. (1998). An instrument for assessing the organizational
benefits of IS projects. Decision Sciences, 29(4), 803–838.
Muylle, S., Moenaert, R., & Despontin, M. (2004). The conceptualization and empirical
validation of web site user satisfaction. Information & Management, 41(5), 543–560.
Myers, B. L., Kappelman, L. A., & Prybutok, V. R. (1997). A comprehensive model for
assessing the quality and productivity of the information systems function: Toward a
theory for information systems assessment. Information Resources Management
Journal, 10(1), 6–25.
Nunnally, J. C. (1978). Psychometric theory (2nd ed.). New York, NY: McGraw-Hill.
Palvia, P. C. (1996). A model and instrument for measuring small business user satisfac-
tion with information technology. Information & Management, 31(3), 151–163.
Park, S. Y., Nam, M.-W., & Cha, S.-B. (2012). University students’ behavioral intention
to use mobile learning: evaluating the technology acceptance model. British Journal of
Educational Technology, 43(4), 592–605.
Pask, G. (1975). Minds and media in education and entertainment: Some theoretical
comments illustrated by the design and operation of a system for exteriorizing and
manipulating individual theses. In R. Trappl & G. Pask (Eds.), Progress in cybernetics
and system research. Washington, DC: Hemisphere.
Price, J. L., & Mueller, C. W. (1986). Handbook of organizational measurement.
Marshfield, MA: Pitman Publishing, Inc.
Rai, A., Lang, S. S., & Welker, R. B. (2002). Assessing the validity of IS success models:
An empirical test and theoretical analysis. Information Systems Research, 13(1), 50–69.
Saarinen, T. (1996). An expanded instrument for evaluating information systems success.
Information & Management, 31(2), 103–118.
Seddon, P. B. (1997). A respecification and extension of the DeLone and McLean model
of IS success. Information Systems Research, 8(3), 240–253.
Segars, A. H., & Grover, V. (1993). Re-examining ease of use and usefulness: A con-
firmatory factor analysis. MIS Quarterly, 17(4), 517–527.
Segars, A. H., & Grover, V. (1998). Strategic information systems planning success: An
investigation of the construct and its measurement. MIS Quarterly, 22(2), 139–163.
Shen, R. M., Wang, M. J., Gao, W. P., Novak, D., & Tang, L. (2009). Mobile learning in
a large blended computer science classroom: System function, pedagogies, and their
impact on learning. IEEE Transactions on Education, 52(4), 538–546.
Shen, R. M., Wang, M. J., & Pan, X. Y. (2008). Increasing interactivity in blended
classrooms through a cutting-edge mobile learning system. British Journal of
Educational Technology, 39(6), 1073–1086.
Shih, J. L., Chuang, C. W., & Hwang, G. J. (2010). An inquiry-based mobile learning
approach to enhancing social science learning effectiveness. Educational Technology &
Society, 13(4), 50–62.
Straub, D. W. (1989). Validating instruments in MIS research. MIS Quarterly, 13(2),
147–169.
Tanaka, J. S., & Huba, G. J. (1984). Confirmatory hierarchical factor analysis of psy-
chological distress measures. Journal of Personality and Social Psychology, 46(3),
621–635.
Ting, Y. L. (2012). The pitfalls of mobile devices in learning: A different view and impli-
cations for pedagogical design. Journal of Educational Computing Research, 46(2),
119–134.
Traxler, J. (2010). Sustaining mobile learning and its institutions. International Journal of
Mobile and Blended Learning, 2(4), 58–65.
Wang, H.-C., & Chiu, Y.-F. (2011). Assessing e-learning 2.0 system success. Computers &
Education, 57(2), 1790–1800.
Wang, M., & Shen, R. (2012). Message design for mobile learning: Learning theories,
human cognition and design principles. British Journal of Educational Technology,
43(4), 561–575.
Wang, S., & Higgins, M. (2006). Limitations of mobile phone learning. The JALT CALL
Journal, 2(1), 3–14.
Wang, Y.-S. (2003). Assessment of learner satisfaction with asynchronous electronic
learning systems. Information & Management, 41(1), 75–86.
Wang, Y.-S. (2008). Assessing e-commerce systems success: A respecification and valid-
ation of the DeLone and McLean model of IS success. Information Systems Journal,
18(5), 529–557.
Wang, Y. S., & Liao, Y. W. (2008). Understanding individual adoption of mobile book-
ing service: An empirical investigation. CyberPsychology & Behavior, 11(5), 603–605.
Wang, Y.-S., Li, C.-R., Lin, H.-H., & Shih, Y.-W. (2014). The measurement and dimen-
sionality of e-learning blog satisfaction: Two-stage development and validation.
Internet Research, 24(5), 546–565.
Wang, Y.-S., Li, H.-T., Li, C.-R., & Wang, C. (2014). A model for assessing blog-based
learning systems success. Online Information Review, 38(7), 969–990.
Wang, Y.-S., Wang, H.-Y., & Shee, D. Y. (2007). Measuring e-learning systems success in
an organizational context: Scale development and validation. Computers in Human
Behavior, 23(4), 1792–1808.
Wang, Y.-S., & Tang, T.-I. (2003). Assessing customer perceptions of web sites service
quality in digital marketing environments. Journal of End User Computing, 15(3), 14–31.
Wang, Y.-S., Tang, T.-I., & Tang, J.-T. D. (2001). An instrument for measuring customer
satisfaction toward web sites that market digital products and services. Journal of
Electronic Commerce Research, 2(3), 89–102.
Wang, Y.-S., Wu, M.-C., & Wang, H.-Y. (2009). Investigating the determinants and age
and gender differences in the acceptance of mobile learning. British Journal of
Educational Technology, 40(1), 92–118.

Author Biographies
Hsin-Hui Lin is a professor in the department of Distribution Management at
National Taichung University of Science and Technology, Taiwan. She received
her PhD in Business Administration from National Taiwan University of
Science and Technology. Her current research interests include electronic or
mobile commerce, management education, electronic or mobile learning, service
marketing, and customer relationship management. Her work has been pub-
lished in academic journals such as Academy of Management Learning and
Education, British Journal of Educational Technology, Information Systems
Journal, Information & Management, International Journal of Information
Management, Computers in Human Behavior, Internet Research, Journal of
Service Theory and Practice, Service Business, Service Industries Journal,
International Journal of Human-Computer Interaction, Journal of Global
Information Management, and Journal of Electronic Commerce Research.

Yi-Shun Wang is a professor in the department of Information Management
at the National Changhua University of Education, Taiwan. He received his
PhD in MIS from National Chengchi University, Taiwan. His current research
interests include IS success models, management education, Internet entrepre-
neurship education, electronic or mobile learning, customer relationship man-
agement, knowledge management, and IT/IS adoption strategies. He has
published papers in journals such as Academy of Management Learning
and Education, Information Systems Journal, Information & Management,
International Journal of Information Management, Government Information
Quarterly, Computers & Education, British Journal of Educational Technology,
among others. He is currently serving as a Project Reexamination Committee
Member and a Convener of the Management Education SIG for the research
field of Applied Science Education in the Ministry of Science and Technology of
Taiwan.

Ci-Rong Li is an associate professor of Management at Jilin University, China.
He received his PhD in Business Administration from National Dong Hwa
University, Taiwan. His current research interests focus on business education,
e-learning, product quality and innovativeness, competitive strategy, and prod-
uct development team. He has published in such journals as Industrial Marketing
Management, Academy of Management Learning and Education, Tourism
Management, Internet Research, Online Information Review, Thinking Skills
and Creativity, European Journal of Marketing, Management Decision, and
Quality & Quantity.

Ying-Wei Shih is a professor in the department of Information Management at
National Changhua University of Education, Taiwan. He received his PhD in
Management Information Systems from National Chengchi University, Taiwan.
His current research interests focus mainly on e-commerce, e-learning, and vir-
tual community. He has published papers in Computers & Education, Computers
in Human Behavior, International Journal of Information Management,
International Journal of Mobile Communications, Journal of Computer
Information Systems, Journal of Global IT Management, and Journal of
Electronic Commerce Research.

Shin-jeng Lin is an associate professor in the department of Management,
Leadership, and Information Systems at Le Moyne College, USA. He received
his PhD in Information Science from Rutgers University, USA. His current
research interests include information seeking process, design and evaluation
of interactive user interfaces, and acceptance and adoption of innovative tech-
nologies. He has published in journals such as Journal of American Society for
Information Science and Technology, Computers & Education, Academy of
Management Learning and Education, Internet Research, Information
Processing and Management, and Journal of Computer Information Systems.
