Computers & Education 52 (2009) 496–508


A task–technology fit view of learning management system impact


Tanya J. McGill a,*, Jane E. Klobas b,c
a School of Information Technology, Murdoch University, South St., Murdoch 6150 WA, Australia
b Carlo F. Dondena Centre for Research on Social Dynamics, Bocconi University, Italy
c UWA Business School, University of Western Australia, P.O. Box 1164, Nedlands 6909 WA, Australia

Article history:
Received 19 February 2008
Received in revised form 8 October 2008
Accepted 14 October 2008

Keywords:
Interactive learning environments
Learning management systems
Task–technology fit
Technology-to-performance chain
E-learning

Abstract

Learning management systems (LMSs) are very widely used in higher education. However, much of the research on LMSs has had a technology focus or has been limited to studies of adoption. In order to take advantage of the potential associated with LMSs, research that addresses the role of LMSs in learning success is needed. Task–technology fit is one factor that has been shown to influence both the use of information systems and their performance impacts. The study described in this paper used the technology-to-performance chain as a framework to address the question of how task–technology fit influences the performance impacts of LMSs. The results provide strong support for the importance of task–technology fit, which influenced perceived impact on learning both directly and indirectly via level of utilization. Whilst task–technology fit had a strong influence on perceived impact of the LMS on learning, it only had a weak impact on outcomes in terms of student grades. Contrary to expectations, facilitating conditions and common social norms did not play a role in the performance impacts of LMSs. However, instructor norms had a significant effect on perceived impact on learning via LMS utilization.

© 2008 Elsevier Ltd. All rights reserved.

1. Introduction

One of the most significant developments in the use of information technology (IT) in universities in the last decade has been the adop-
tion of learning management systems (LMSs) to support the teaching and learning process (Coates, James, & Baldwin, 2005). A LMS is an
information system that facilitates e-learning. (We will use e-learning in this paper as a generic term to refer to IT-supported learning,
rather than similar terms such as online learning, web-based learning, distributed learning and technology-mediated learning.) LMSs pro-
cess, store and disseminate educational material and support administration and communication associated with teaching and learning.
The terms virtual learning environment (VLE) and e-learning environment are also commonly used to describe this type of information
system. LMSs are usually implemented on a large-scale across an entire university, faculty, or school, and then adopted by teachers
who use them in a variety of ways to support course management and student learning.
LMSs are very widely used in higher education. For example, in 2005, 95% of all higher education institutions in the UK were using LMSs
(Browne, Jenkins, & Walker, 2006). However, despite this ubiquity of LMS use, there has not been widespread change in pedagogic practice
to take advantage of the functionality afforded by LMSs (Becker & Jokivirta, 2007; Collis & van der Wende, 2002). Consistent with this, there
has been very little analysis of the impact of LMSs on teaching or learning (Coates et al., 2005). Indeed, much of the research on LMSs has
had a technology focus or has been limited to studies of adoption, while little research has placed the technology in its learning context
(Alavi & Leidner, 2001; Wang, Wang, & Shee, 2007).
In order to understand the impact of LMSs and take advantage of their potential, research that addresses the role of LMSs in learning is
needed. In addition to research investigating factors that influence the use of LMSs, research on the factors that influence the impacts of
LMS use on student learning is required. Task–technology fit is one factor that has been shown to influence both the use of information
systems and their performance impacts (Goodhue & Thompson, 1995). This paper considers the role of task–technology fit in LMS success,
and addresses the question of how task–technology fit influences the student performance impacts of LMSs.

* Corresponding author. Tel.: +61 8 93602798; fax: +61 8 93602941.


E-mail addresses: T.Mcgill@murdoch.edu.au (T.J. McGill), jane.klobas@unibocconi.it (J.E. Klobas).

0360-1315/$ - see front matter © 2008 Elsevier Ltd. All rights reserved.
doi:10.1016/j.compedu.2008.10.002

2. Literature review

2.1. E-learning success research

Much of the early research about e-learning consisted of descriptions of LMS implementations. These descriptions were sometimes en-
hanced by evaluations of the outcomes of the use of the e-learning environments, sometimes in conjunction with a comparison to the out-
comes of traditional face to face teaching. This research has considered a range of outcomes in a variety of e-learning contexts. For example,
Piccoli, Ahmad, and Ives (2001) compared learning in an LMS environment to learning from face to face teaching in the context of basic IT
skills training. They found that, while there were no significant differences in performance between students enrolled in the two environ-
ments, the e-learning students reported higher computer self-efficacy and were less satisfied with the learning process. By contrast, in sim-
ilar kinds of studies, Zhang, Zhao, Zhou, and Nunamaker (2004) reported improved academic outcomes for e-learning students, and Chou
and Liu (2005) reported that students using their e-learning environment showed improved learning performance and satisfaction. The
diversity of results in these studies suggests that, not just the LMS, but also the wider context in which e-learning takes place is an impor-
tant factor in e-learning success.
The other major focus of LMS research has been the adoption and continuance of use of LMSs by students. This research has been largely
based around the technology acceptance model (TAM) (Davis, Bagozzi, & Warshaw, 1989) and related models such as TAM2 (Venkatesh &
Davis, 2000) and the unified theory of acceptance and use of technology (UTAUT) (Venkatesh, Morris, Davis, & Davis, 2003). In typical re-
search based on these models, van Raaij and Schepers (2008) explored the differences between individual students in the level of accep-
tance and use of a LMS using a conceptual model that draws from TAM, TAM2, and UTAUT; and Pituch and Lee (2006) observed that,
although factors such as perceived usefulness influenced LMS use, the strongest influence on student use was system characteristics. Fol-
lowing a review of the acceptance literature, Selim (2007) identified eight critical success factors for acceptance of e-learning, as perceived
by students: instructor’s attitude, instructor’s teaching style, student motivation, student technical competency, student–student interac-
tion, ease of access to the technology, infrastructure reliability, and university support.
Expectation confirmation theory (Oliver, 1980) has also been employed to explain LMS use. For example Hayashi, Chen, Ryan, and Wu
(2004) showed that perceived usefulness and satisfaction directly influenced continuance of use in a LMS context, that satisfaction was
influenced by perceived usefulness, and that both perceived usefulness and satisfaction were positively associated with confirmation of
expectations of the system. Roca, Chiu, and Martínez (2006) combined technology acceptance (TAM) and expectation confirmation theory.
Like Hayashi et al. they observed that students’ continuance intention is determined by satisfaction, which in turn is influenced by per-
ceived usefulness and confirmation. They also found that service quality, system quality, perceived ease of use and cognitive absorption
influenced satisfaction.
The use studies suggest a range of factors that might influence use of LMSs and e-learning systems, but they do not consider how these
factors, or use itself, are associated with learning. Indeed, few studies of LMSs or e-learning have gone beyond use to explore factors asso-
ciated with learning. Almost all of these studies have been conducted in the context of online collaborative learning. Swan (2001) inves-
tigated factors affecting perceived learning and found that clarity of design, interaction with instructors, and active discussion significantly
influenced student perceptions. Arbaugh and Benbunan-Fich (2007) focused on the role of interaction in e-learning and found that, while
collaborative environments were associated with higher levels of learner–learner and learner–system interaction, only learner–instructor
and learner–system interaction were significantly associated with increased perceived learning. Klobas and Haddow (2000) showed that
students not only perceived that they learned more, their teachers observed that they learned more, the more they participated in collab-
orative (learner–learner) activities. The focus on collaborative learning in these studies cannot necessarily be generalized to other forms of
e-learning.
Although LMSs offer functions that might be used to support collaborative learning, very few courses actually use these, and collabo-
rative learning theorists claim that learning outcomes from online collaborative learning cannot be generalized to situations where e-learn-
ing is used for material distribution or even to support unguided student interaction (Lipponen, Hakkarainen, & Paavola, 2004; Rudestam &
Schoenholtz-Read, 2002). In any case, not all students appear to respond positively to collaborative learning. Hornik, Johnson, and Wu
(2007) observed that, where there was a gap between a student’s preferred approach to learning and the approach implemented in a
LMS the learner participated less in online discussion, was less satisfied with the course, and performance was reduced.
Thus, LMS research is characterized by a diversity of studies conducted in a wide range of contexts on a variety of outcome variables
using a variety of different explanatory variables and models. As Coates et al. (2005) pointed out, it is difficult, if not impossible, to gener-
alize from this research. The problem seems particularly acute when we try to understand the relationship between the context in which
learning occurs, LMS use, and learning outcomes.

2.2. Task–technology fit

To gain further understanding of the factors that influence learning outcomes in a LMS environment, it may be useful to conduct re-
search within the context of models that have shown promise in predicting information systems success. Goodhue and Thompson’s
(1995) technology-to-performance chain (TPC) is one such model.
Goodhue and Thompson (1995) proposed that an explanation of information systems success needs to recognize both the task for which
the technology is used and the fit between the task and the technology. They define task–technology fit as ‘the degree to which a technol-
ogy assists an individual in performing his or her portfolio of tasks’ (p. 216). In the case of student use of a LMS, task–technology fit refers to
the ability of the LMS to support students in the range of learning activities they engage in, whilst accommodating the variety of student
abilities. These activities can include communicating with instructors and other students, accessing learning materials and undertaking
interactive activities such as quizzes.
Goodhue and Thompson developed the TPC as a model to help users and organizations understand and make more effective use of IT. The TPC combines insights from research on user attitudes as predictors of use ("utilization" in the TPC) with the notion of task–technology fit as a predictor of performance. As can be seen from Fig. 1 below, the model proposes that task–technology fit is a function of task characteristics, technology characteristics, and individual characteristics. Task–technology fit in turn both directly influences performance, and indirectly influences utilization via precursors of utilization such as expected consequences of use, attitude towards use, social norms, habit and facilitating conditions. Utilization is also proposed to directly influence performance. The basic argument is that for a technology to have a positive impact on individual performance, the technology must fit with the tasks it is supposed to support, and it has to be used.

Fig. 1. The technology-to-performance chain (Goodhue and Thompson, 1995).
The impact of task–technology fit has been investigated using parts of the TPC in various domains. Goodhue and Thompson (1995) ini-
tially tested a subset of the model using participants from a transport company and an insurance company, and found strong support for
the influence of task–technology fit on performance as well some support for the influence of system characteristics and task character-
istics on task–technology fit.
Other domains in which parts of the model have been tested include software development (Dishaw & Strong, 1998), managerial deci-
sion making (Goodhue, Klein, & March, 2000) and health care (Pendharkar, Rodger, & Khosrow-Pour, 2001). The most comprehensive test of
the model to date is Staples and Seddon’s (2004) study which considered use of a library cataloguing system by staff and use of spreadsheet
and word processing software by students. Staples and Seddon found strong support for the impact of task–technology fit on performance,
as well as on attitudes and beliefs about use. But the influence of level of utilization on performance was less clear.
The role of task–technology fit has not yet been investigated in the e-learning domain. Given the need for rigorous research on factors
that influence the success of LMSs, and the relevance of the TPC, it could provide a valuable framework for research.

3. Research questions

This paper considers the role of task–technology fit in LMS success, and uses the TPC to address the question of how task–technology fit
influences the performance impacts of LMSs. The primary research question investigated in this study was

 How does task–technology fit influence the performance impacts of LMSs for students?

Fig. 2. Initial model, showing the hypothesized paths H1–H9 linking task–technology fit, the precursors of utilization (expected consequences of LMS use, attitude towards LMS use, social norms and facilitating conditions), LMS utilization, perceived impact on learning and student grades.

Consistent with the TPC and previous research relating to it, the relationships described below were initially hypothesized in order to
answer the research question. Fig. 2 illustrates the initial model for the study.
The TPC proposes that task–technology fit has a positive influence on expected consequences of use (i.e. the better the task–technology
fit the more positive the anticipated consequences of use of a system). In the context of LMS use, the consequences that students might
anticipate could include being able to accomplish their study more quickly and easily, and improving their performance. However, Good-
hue and Thompson did not test the path between task–technology fit and expected consequences of use in their original study (Goodhue &
Thompson, 1995) and nor did Goodhue, Littlefield, and Straub (1997) in a subsequent related study. Rather, they assumed that the rela-
tionship existed and tested a direct path from task–technology fit to utilization. Staples and Seddon (2004) tested the relationship and
found that task–technology fit had a positive influence on expected consequences of use. It was therefore hypothesized that the relation-
ship would be exhibited in the context of LMS use

H1: Task–technology fit will positively influence expected consequences of LMS use.

An attitude is a positive or negative evaluation of an object or behavior (Fishbein & Ajzen, 1975). Fishbein and Ajzen (1975) argue that
attitudes towards objects do not strongly predict specific behaviors towards the objects; rather it is the attitude towards the specific behav-
ior that determines whether the behavior is performed. Hence attitude towards use of LMSs, rather than attitude towards LMSs, is of inter-
est in this study. Goodhue and Thompson (1995) did not propose a direct association between task–technology fit and attitude towards use
in their original model, and Goodhue (1997) has argued that task–technology fit operates primarily through changes to the expected con-
sequences of use. However, in their 2004 test of the TPC, Staples and Seddon (2004) tested the relationship between task–technology fit
and attitude towards use and found that task–technology fit significantly influenced attitude towards use when use was mandatory, but
not when use was optional. With LMSs becoming increasingly embedded in teaching and learning, student use is tending towards man-
datory. Thus the following hypothesis was proposed

H2: Task–technology fit will positively influence attitude towards LMS use.

Triandis (1971) introduced the role of expected consequences of use in influencing behavior. Whilst Goodhue and colleagues did not test
this relationship (Goodhue et al., 1997; Goodhue & Thompson, 1995), Thompson, Higgins, and Howell (1991) found that expected conse-
quences of use have a strong influence on utilization and Staples and Seddon (2004) found an effect when usage was voluntary. It was thus
hypothesized that

H3: Expected consequences of LMS use will positively influence LMS utilization.

In the TPC, attitude towards use of the system is proposed as a predictor of utilization (Goodhue & Thompson, 1995). Previous research
on this relationship has been mixed. Whilst Chang and Cheung (2001) found that attitude to use influenced intention to use the WWW at
work, Staples and Seddon (2004) did not find a relationship in either of the two domains they studied: use of a library cataloguing system
by staff and use of spreadsheet and word processing software by students. Despite this uncertainty about the role of attitude, consistent
with Goodhue and Thompson’s inclusion of the path in the TPC it was hypothesized that

H4: Attitude towards LMS use will influence LMS utilization.

Social norms refer to users’ beliefs as to whether most other people who are important to them want them to perform a behavior. In the
case of student use of LMSs, the other people might include academics, other students, family and friends. The role of social norms in infor-
mation systems success has been investigated with mixed results. Some authors have found that it influences utilization (Venkatesh & Da-
vis, 2000). However, others such as Dishaw and Strong (1999) have found that social norms do not influence intention to use. Venkatesh
and Davis (2000) argued that the influence of social norms is restricted to mandatory environments and, consistent with this, Staples and
Seddon (2004) found that social norms influenced utilization when usage was mandatory but not when it was voluntary. There has been
little investigation of the role of social norms in e-learning but given the uncertainty of the role of social norms in the success of systems in
general, it was considered important to investigate it in this study. Thus the following hypothesis was proposed

H5: Social norms will positively influence LMS utilization.

Various conditions relating to support for system use (such as ease of access to the system, relationship of the user with support staff,
etc.) can influence use and performance. The importance of facilitating conditions is reflected in DeLone and McLean’s (2003) addition of
service quality to their updated model of information systems success, and several authors have commented on the importance of support
in ensuring the success of e-learning (Sumner & Hostetler, 1999; Williams, 2002). Although Staples and Seddon (2004) did not find that
facilitating conditions influenced use, a positive effect was found in Chang and Cheung’s (2001) study. In the e-learning domain, Ngai, Poon,
and Chan’s (2007) study of LMS adoption found that facilitating conditions had a strong indirect effect on attitude toward use. Consistent
with the TPC it was therefore hypothesized that

H6: Facilitating conditions will positively influence LMS utilization.

Performance impact refers to the effect of the system on the behavior of the user or the outcomes for the user. The influential role of
task–technology fit on performance is a key component of the TPC, and its role has been confirmed in numerous studies by Goodhue and
colleagues and others (D’Ambra & Wilson, 2004; Goodhue, 1995; Goodhue et al., 2000; Goodhue et al., 1997; Goodhue & Thompson, 1995;
Staples & Seddon, 2004). The impacts most commonly considered in information systems success research relate to management perfor-
500 T.J. McGill, J.E. Klobas / Computers & Education 52 (2009) 496–508

mance and decision making (DeLone & McLean, 1992), but in the LMS domain, performance impact can relate to impacts on academic re-
sults or student perceptions of learning success, among others. Measures such as perceived performance impact are commonly used as
surrogates for ‘actual’ performance impact (DeLone & McLean, 1992). However, the relationship between perceptions of performance im-
pact and objective measures of impact is acknowledged to be complex (Ballantine et al., 1998; DeLone & McLean, 2003; Shayo, Guthrie, &
Igbaria, 1999) and Goodhue et al. (2000) noted that there has been little research that explicitly tests the link between user evaluations and
objectively measured performance. A study by McGill and Klobas (2008) which explicitly tested the relationship between perceived indi-
vidual impact of user developed spreadsheets and objectively measured decision performance found that although user satisfaction had a
significant positive effect on both individual impact and perceived individual impact, there was not a significant relationship between
them. Therefore in this study both perceptions of learning success and outcomes in terms of student grades were considered. It was
hypothesized that

H7: Task–technology fit will positively influence LMS performance impacts.


 H7a: Task–technology fit will positively influence perceived impact on learning.
 H7b: Task–technology fit will positively influence student grades.

The positive influence of utilization on performance is also a key component of the TPC. Although Staples and Seddon (2004) did not find
an association between level of utilization and performance, Goodhue and colleagues (Goodhue et al., 1997; Goodhue & Thompson, 1995)
found support for the relationship, as did D’Ambra and Wilson (2004). It was therefore hypothesized that

H8: LMS utilization will positively influence perceived impact on learning.

From a cognitive point of view, perceived impact on learning might be considered to be a kind of "forethought" in which the student
anticipates their learning performance outcome. Some forms of forethought, including outcome expectancy, can influence actual perfor-
mance (Bandura, 1997). If students’ perceptions of the impact of LMS on their learning act in this way, then they will positively influence
actual performance measured as grades. Thus in this study, perceived impact on learning is conceptualized as an antecedent to student
grades, such that a student’s results are in part influenced by their perception of the impact of the LMS on their learning. It was therefore
hypothesized that

H9: Perceived impact on learning will positively influence student grades.

4. Research methodology

4.1. Participants

WebCT has been one of the most commonly used types of LMS (Browne et al., 2006; Yip, 2004) and was the LMS considered in this
study. The participants in this study consisted of students from an Australian university who were using WebCT in their studies.

4.2. Procedure

The study was conducted approximately half way through a semester. Students enrolled in 17 different undergraduate degrees were
targeted to give a broad range of both levels of use and pedagogies. Participants were initially contacted via email and invited to participate
in the study by clicking on a link to complete a questionnaire on the web. The questionnaire took approximately 10 min to complete. Com-
pletion of the questionnaire was voluntary and all responses anonymous.

4.3. Measurement

Items to measure the constructs of interest were developed for the LMS domain using instruments from previous research on the TPC as
a starting point (e.g. Goodhue & Thompson, 1995; Hartwick & Barki, 1994; Staples & Seddon, 2004), with new items being developed as
needed. The questionnaire was pilot tested (along with the online completion process) by nine students and slight changes made to clarify
questions.
The questionnaire consisted of two main sections. The first section asked questions about the participants and their previous training
and experience with computers, the Internet and WebCT. Given that several relationships in the TPC appear to be influenced by the extent
to which system use is mandatory (Staples & Seddon, 2004; Venkatesh & Davis, 2000), participants were also asked to provide their per-
ceptions of the degree of mandatoriness of their use of WebCT by indicating their agreement with the statement ‘I am required to use Web-
CT for my studies’ on 7 point Likert scale labeled from ‘strongly disagree’ to ‘strongly agree’.
The second section of the questionnaire asked questions about the participants’ perceptions of WebCT and its role in their academic
success. The constructs measured in the second section are described below and the items used to measure the constructs are listed in
the Appendix.
Task–technology fit was measured with a multi-faceted measure. The aspects of task–technology fit considered (and the source of items used to measure them) were work compatibility (two items from Moore and Benbasat (1991)), ease of use (three items from Doll and Torkzadeh (1988)), ease of learning (three items from Staples and Seddon (2004)), and information quality (five items from Doll and Torkzadeh (1988)). The 13 items were measured on a 7 point Likert scale labeled either from 'strongly disagree' to 'strongly agree' or from 'never' to 'always'.

Attitude toward LMS use was measured using four items. The items are based on items used by Hartwick and Barki (1994), Taylor and
Todd (1995) and Davis et al. (1989) and use 5 point semantic differential scales.
Expected consequences of LMS use was measured using eight of the 10 items used by Staples and Seddon (2004). These items were devel-
oped initially by Davis (1989) and Moore and Benbasat (1991). The items were measured on a 7 point Likert scale labeled from ‘strongly
disagree’ to ‘strongly agree’.
Social norms were measured using four items adapted to the LMS context from Hartwick and Barki (1994). The items were measured on
a 7 point Likert scale labeled from ‘strongly disagree’ to ‘strongly agree’. See Section 4.4.1 for a further discussion of this construct.
Facilitating conditions was measured using five items. The items are based on items used by Baroudi and Orlikowski (1988), Thompson,
Higgins, and Howell (1994) and Taylor and Todd (1995). The items were measured on a 7 point Likert scale labeled from ‘strongly disagree’
to ‘strongly agree’.
LMS utilization was measured using four items. Participants were asked how many hours a week they used WebCT, and how many hours
per week they intended to use WebCT over the rest of the semester. Participants were also asked to indicate their use and intended use of
WebCT on a 5 point scale ranging from (1) 'light' to (5) 'heavy'.
LMS performance impacts were measured in two ways. Student perceptions of the impact of WebCT on their learning in general were
measured using three items based on those developed by Goodhue and Thompson (1995). The items were measured on a 7 point Likert
scale labeled from ‘strongly disagree’ to ‘strongly agree’. This aspect of performance impact was characterized as perceived impact on learn-
ing. As recommended by Staples and Seddon (2004) and van Raaij and Schepers (2008) an objective measure of performance impact was
also sought. This was obtained by asking participants what percentage they had received for their last test, exam or assignment. This aspect
of performance impact was characterized as student grades.

4.4. Data analysis

The relationships in the model were tested using partial least squares (PLS). PLS provides an alternative estimation approach to tradi-
tional structural equation modeling (SEM). A two-step approach commonly used in SEM techniques was used to evaluate model fit. The
approach involves first testing the fit and construct validity of the proposed measurement model and then, once a satisfactory measure-
ment model is obtained, the measurement model is "fixed" when the structural model is estimated (Hair, Black, Babin, Anderson, & Tatham, 2006). SmartPLS version 2.0 was used to assess the measurement model and the structural model.

4.4.1. Measurement model


The measurement model was assessed in terms of: individual item loadings, reliability of measures, convergent validity and discrim-
inant validity. All items loaded significantly on their latent construct (p < 0.05) and exceeded the minimum threshold of 0.4 recommended
by Hulland (1999). Reliability was assessed using composite reliability and Cronbach’s alpha. All multi-item constructs except social norms
met the guidelines for composite reliability greater than 0.70 (Hair et al., 2006) and Cronbach’s alpha greater than 0.70 (Nunnally, 1978).
Convergent validity was assessed using average variance extracted. All multi-item constructs met the guideline of average variance ex-
tracted greater than 0.50 (Hair et al., 2006).
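To make these criteria concrete, the sketch below computes the three statistics named above (composite reliability, Cronbach's alpha and average variance extracted) from standardized loadings and item scores. It is a minimal illustration of the standard formulas, not the authors' SmartPLS procedure; the loadings and simulated responses are hypothetical.

import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = loadings.sum() ** 2
    return s / (s + (1 - loadings ** 2).sum())

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    return (loadings ** 2).mean()

# Hypothetical standardized loadings for a three-item construct.
lam = np.array([0.85, 0.90, 0.88])
print(composite_reliability(lam))        # ~0.91, above the 0.70 guideline
print(average_variance_extracted(lam))   # ~0.77, above the 0.50 guideline

# Simulated responses: one latent factor plus noise, 267 respondents.
rng = np.random.default_rng(0)
latent = rng.normal(size=(267, 1))
items = latent + rng.normal(scale=0.5, size=(267, 3))
print(cronbach_alpha(items))             # well above the 0.70 guideline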
Further investigation of the measurement of social norms indicated that the item ‘My lecturer/tutor thinks it is important for me to use
WebCT’ reflected a slightly different construct than did the other three items. This construct was named instructor norms, and a decision
was made to test its influence separately from common social norms which was defined as students’ beliefs as to whether friends and family
want them to use the LMS.
For satisfactory discriminant validity each item should load more highly on its own construct than on other constructs. In addition, the
average variance shared between a construct and its measures should be greater than the variance shared by the construct and any other
constructs in the model (Chin, 1998). Two items measuring expected consequences of LMS use and two items measuring task–technology
fit loaded too heavily on other constructs so were dropped.
Table 1 provides a summary of the reliability and convergent validity of the final scales used in the study. Table 2 provides the construct
inter-correlations and the square root of average variance extracted for each construct (on the diagonal). In all cases the square root
of average variance extracted exceeds the corresponding construct inter-correlations thereby demonstrating discriminant validity (Chin,
1998).
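The diagonal comparison just described is the criterion attributed to Chin (1998) above (often called the Fornell–Larcker criterion). A minimal sketch of the check follows, with a hypothetical correlation matrix and AVE values; it is illustrative only and does not reproduce Table 2.

import numpy as np

def discriminant_validity_ok(corr, ave):
    """True if each construct's sqrt(AVE) exceeds its correlations with
    every other construct in the model (Fornell-Larcker criterion)."""
    sqrt_ave = np.sqrt(ave)
    for i in range(corr.shape[0]):
        others = np.delete(np.abs(corr[i]), i)
        if sqrt_ave[i] <= others.max():
            return False
    return True

# Hypothetical: three constructs, their inter-correlations and AVEs.
corr = np.array([[1.00, 0.58, 0.60],
                 [0.58, 1.00, 0.71],
                 [0.60, 0.71, 1.00]])
ave = np.array([0.65, 0.82, 0.83])
print(discriminant_validity_ok(corr, ave))  # True: each sqrt(AVE) > correlations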

Table 1
Summary of measurement scales

Construct Composite reliability Cronbach's alpha Average variance extracted
Task–technology fit 0.95 0.95 0.65
Attitude towards LMS use 0.94 0.91 0.80
Expected consequences of LMS use 0.97 0.96 0.82
Social norms 0.88 0.80 0.70
Instructor norms NA* NA* NA*
Facilitating conditions 0.86 0.82 0.51
LMS utilization 0.84 0.78 0.58
Perceived impact on learning 0.94 0.90 0.83
Student grades NA* NA* NA*
* Single item measure.

Table 2
Discriminant validity (square root of average variance extracted on the diagonal)

Construct 1 2 3 4 5 6 7 8 9
1. Task–technology fit 0.807
2. Attitude towards LMS use 0.780 0.892
3. Expected consequences of LMS use 0.579 0.713 0.908
4. Social norms 0.285 0.354 0.422 0.840
5. Instructor norms 0.081 0.105 0.134 0.198 NA
6. Facilitating conditions 0.666 0.628 0.527 0.342 0.093 0.712
7. LMS utilization 0.246 0.354 0.318 0.248 0.338 0.218 0.759
8. Perceived impact on learning 0.602 0.771 0.792 0.490 0.140 0.556 0.433 0.913
9. Student grades 0.122 0.144 0.101 0.012 0.056 0.008 0.058 0.072 NA

4.4.2. Structural model

Consistent with the change to the conceptualization of social norms to differentiate between common social norms and instructor norms as discussed above, H5 was replaced with the following two sub-hypotheses:

H5a: Common social norms will positively influence LMS utilization.

H5b: Instructor norms will positively influence LMS utilization.

Fig. 3. Model tested in the study (the initial model with H5 split into H5a, common social norms, and H5b, instructor norms).

The structural model was updated to reflect this change. Fig. 3 shows the final structural model. Two criteria were used to assess struc-
tural model quality: the statistical significance of estimated model coefficients and the ability of the model to explain the variance in the
dependent variables. If the TPC is a valid representation of LMS impact, all proposed relationships in the model should be significant. The
bootstrapping technique implemented in SmartPLS 2.0 was used to evaluate the significance of these hypothesized relationships. The R² of the structural equations for the dependent variables provides an estimate of variance explained (Hair et al., 2006), and therefore an indi-
cation of the success of the model in explaining these variables.
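As a rough illustration of the bootstrapping idea (not the SmartPLS algorithm itself), the sketch below resamples respondents with replacement and forms a t-statistic for a single path. A standardized simple-regression slope stands in for the PLS path estimate, and the data are simulated; the variable names and effect size are assumptions for illustration only.

import numpy as np

rng = np.random.default_rng(42)

def path_coefficient(x, y):
    """Standardized simple-regression slope, a stand-in for a PLS path."""
    zx = (x - x.mean()) / x.std(ddof=1)
    zy = (y - y.mean()) / y.std(ddof=1)
    return float(np.polyfit(zx, zy, 1)[0])

def bootstrap_path(x, y, n_boot=5000):
    """Resample respondents with replacement; return estimate, SE and t."""
    n = len(x)
    estimate = path_coefficient(x, y)
    boots = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)
        boots[b] = path_coefficient(x[idx], y[idx])
    se = boots.std(ddof=1)
    return estimate, se, estimate / se  # |t| > ~1.96 suggests p < 0.05

# Simulated sample of 267 respondents (matching the study's sample size).
ttf = rng.normal(loc=5.0, scale=1.0, size=267)        # task-technology fit
impact = 0.5 * ttf + rng.normal(scale=1.0, size=267)  # perceived impact
print(bootstrap_path(ttf, impact))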

5. Results

A total of 267 students (73.7% females and 26.3% males) participated in the study. Student ages ranged from a minimum of 17 to a max-
imum of 59 (with an average age of 28 years). Whilst being essentially a convenience sample, the participants covered a broad spectrum of
IT experience and training. They had a wide range of levels of usage of WebCT with the average length of use being 3.47 semesters (with a
minimum of less than a full semester, and a maximum of 12 semesters). Participants were also asked to provide their perceptions of the
degree of mandatoriness of their use of WebCT. The majority of students perceived that they had little choice about whether to use WebCT
with over 61% agreeing or strongly agreeing with the statement ‘I am required to use WebCT for my studies’. Table 3 provides a summary of
the background of the participants.

Table 3
Participant background information

N Mean S.D. Minimum Maximum


Age (years) 269 28.01 9.98 17 59
Years of computer experience 265 11.84 4.79 1 25
Years of Internet experience 266 7.08 2.87 1 20
Perceived IT skill (out of 7) 269 5.25 1.20 2 7
Semesters using WebCT 268 3.47 2.02 <1 12
Degree of mandatoriness of use (out of 7) 269 5.61 1.59 1 7
Fig. 4. Structural model results. Standardized path coefficients: task–technology fit → expected consequences of LMS use 0.579*** (R² = 0.336); task–technology fit → attitude towards LMS use 0.780*** (R² = 0.608); task–technology fit → perceived impact on learning 0.527*** and task–technology fit → student grades 0.123*; attitude towards LMS use → LMS utilization 0.254**; instructor norms → LMS utilization 0.288***; LMS utilization → perceived impact on learning 0.303*** (R² = 0.448). Non-significant paths: expected consequences of LMS use → utilization 0.086, social norms → utilization 0.079, facilitating conditions → utilization −0.041 (utilization R² = 0.224), and perceived impact on learning → student grades −0.01 (R² = 0.015).

Fig. 4 shows the standardized coefficients for each hypothesized path in the model and the R² for each dependent variable. Seven of the
11 hypotheses were supported. Task–technology fit had a significant positive effect on both expected consequences of LMS use and attitude
towards LMS use, thus hypotheses H1 and H2 were supported. Contrary to expectations, expected consequences of LMS use did not influ-
ence LMS utilization in this study, therefore hypothesis H3 was not supported. As hypothesized, attitude towards LMS use had a significant
positive influence on LMS utilization. Therefore hypothesis H4 was supported.
Although social norms was not found to influence LMS utilization in this study, instructor norms had a significant positive influence on
LMS utilization, so hypothesis H5a was not supported, but hypothesis H5b was. The final precursor of utilization that was considered was
facilitating conditions. Facilitating conditions was not found to influence LMS utilization, so hypothesis H6 was not supported.
Task–technology fit had a strong positive influence on perceived impact on learning and a weak positive influence on student grades,
thus both hypotheses H7a and H7b were supported. LMS utilization also positively influenced perceived impact on learning, thus hypoth-
esis H8 was also supported. However, perceived impact on learning did not influence student grades; therefore hypothesis H9 was not
supported.
The ability of the model to explain the variance in the dependent variables was the second criterion used to evaluate the model. The R² values, reported in Fig. 4, measure this ability. The model
explained 44.8% of the variability in perceived impact on learning, but only 1.5% of the variability in student grades. The variability in
LMS utilization and the precursors of use were also of interest. The model accounted for 22.4% of the variability of LMS utilization,
60.8% of the variance in attitude towards LMS use, and 33.6% of the variance in expected consequences of LMS use.

6. Discussion and conclusion

The study described in this paper investigated the role of task–technology fit in LMS success, and addressed the question of how task–
technology fit influences the performance impacts of LMSs. As proposed, task–technology fit was found to play an important role in the
utilization and success of LMSs. The TPC (Goodhue & Thompson, 1995) was used as the framework for the study, and support was found
for the usefulness of the model in the e-learning context.

6.1. Influence of task–technology fit on precursors of utilization

As proposed in the TPC (Goodhue & Thompson, 1995), task–technology fit had a significant positive effect on precursors of LMS utili-
zation, in this study characterized as expected consequences of use and attitudes to use. The influence on expected consequences of use is
consistent with previous research on this relationship (Staples & Seddon, 2004). Although Goodhue and Thompson (1995) did not propose
a relationship between task–technology fit and attitude towards use in their original model, Staples and Seddon found that task–technol-
ogy fit influenced attitude towards use when use was mandatory, but not when use was optional. In this study, task–technology fit was
found to have a significant positive effect on attitude towards LMS use. As students in the study believed that they had little choice about
whether to use the LMS, this result is consistent with the findings of Staples and Seddon.

6.2. Role of precursors of utilization

6.2.1. Expected consequences of LMS use and attitudes to LMS use


As hypothesized, attitude towards LMS use had an influence on level of LMS utilization. This result is consistent with that of Chang and
Cheung (2001) who found that attitude to use influenced intention to use in their study of WWW use at work. However, in Ngai et al.’s
(2007) study of WebCT adoption, attitude to using WebCT was not found to influence use. Similarly, Staples and Seddon (2004) did not find
a relationship in either of the two domains they studied, one of which was use of spreadsheet and word processing software by students.
Given that attitude to use has not appeared to play a role in these previous studies involving students, this finding requires further
investigation.
It is worth considering the findings on the influence of attitudes to use on LMS utilization in relation to the findings on expected con-
sequences of LMS use. Contrary to initial expectations, and Goodhue’s (1997) argument that task–technology fit acts primarily by affecting
expected consequences of use, expected consequences of LMS use did not influence LMS utilization in this study. One explanation may be
that, given the student perceptions of mandatoriness of use, the results are consistent with Staples and Seddon (2004) who found an effect
when usage was voluntary, but not when use was mandatory. An alternative explanation might be found in the high correlation (.71) be-
tween expected consequences of use and attitudes to use in this study. The high correlation between these variables, along with the var-
iation in results of other task–technology fit research that includes them as influences on utilization, suggests that attitudes to use and
expected consequences of use might jointly influence utilization.
Post hoc analysis using two alternative structural models was undertaken to test how the relationship between attitudes to use and
expected consequences might affect utilization. Expected consequences of LMS use were removed from the first model, leaving attitudes
to LMS use as the only one of the two variables to influence utilization. Attitudes to LMS use was omitted from the second model, leaving
expected consequences of LMS use as the only one of the two variables to influence utilization. Both of these models explained the same
amount of variance in utilization (24%). Given the bootstrapping process used in PLS, this value is virtually identical to the 22% explained by
the full model. The similar explanatory ability of the three models suggests that expected consequences and attitudes jointly influence uti-
lization. Fishbein and Ajzen’s work on the relationship between expectations, beliefs, attitudes and behavior (Ajzen, 1991; Fishbein, 1972;
Fishbein & Ajzen, 1975), which builds on Triandis’s (1971) work on expected consequences, suggests that the path from task–technology fit
to utilization might well be through expected consequences to attitudes to utilization. Additional research is needed to test this relation-
ship in the e-learning domain.

6.2.2. Social norms


In the initial conceptualization of the research, individuals who might influence students’ beliefs about LMS use were considered to in-
clude instructors, friends and family, and the construct social norms was conceptualized to include influences of all these people. As a result
of the measurement model analysis, reconceptualization was required, and hence instructor norms were modeled separately from com-
mon social norms (i.e. family, friends etc). Common social norms were not found to influence LMS utilization in this study.
Previous research on the role of social norms in influencing intention has had mixed results and very little has been done in the e-learn-
ing domain. Venkatesh and Davis (2000) found that social norms significantly affect intention directly only when usage is mandatory and
experience is in the early stages. Whilst use tended towards mandatory in this study, many participants had used the LMS extensively, so
they were not in the early stages of experience with it. Thus, there is partial consistency with Venkatesh and Davis (2000). The result is
consistent with van Raaij and Schepers (2008) who found that social norms had no effect on use of a LMS. In their study, use of the
LMS was mandatory, and participants had been using the LMS extensively for 3 months.
Although common social norms were not found to influence LMS utilization, instructor norms had a positive influence on LMS utiliza-
tion. A strong belief that lecturers and tutors think it is important for students to use the LMS led to increased utilization. Whilst the role of
instructor norms in influencing utilization does not appear to have been previously studied, this finding is consistent with previous re-
search that has investigated how student perceptions of instructor attitudes towards technology in teaching and learning influence student
satisfaction with learning (Chyung & Vachon, 2005; Selim, 2007; Sun, Tsai, Finger, Chen, & Yeh, 2008; Webster & Hackley, 1997).
Together, these results raise a question about the measurement of social norms in the LMS context. Theories of the normative influences
on intention and behavior refer to salient or relevant norms, norms that are important to the individual in the context of the behavior of
interest (Ajzen, 1991; Ajzen, 2002). From this point of view, it is not surprising that we observed that, while instructor norms exert an influ-
ence on students who are required to use an LMS for their course, friends and family do not. The challenge is therefore to identify social
norms that are likely to influence LMS utilization. Given the results of other researchers’ studies that show that student–student interaction
and collaborative uses of LMS affect student performance, one possible salient social norm is fellow students or peers. Further research on
normative influences on utilization of LMS should incorporate these and any other salient social norms that may apply in the context of the
study.

6.2.3. Facilitating conditions


The final precursor of utilization that was considered was facilitating conditions. Facilitating conditions have been considered to play an
important role in ensuring the success of e-learning (Sumner & Hostetler, 1999; Williams, 2002), and Wang et al. (2007) included measures
of facilitating conditions in their recent scale to measure e-learning systems success. However, facilitating conditions were not found to
influence LMS utilization in this study. This result is consistent with the results of Staples and Seddon (2004) and of Al-Gahtani, Hubona,
and Wang (2007). These results are also consistent with Chiu, Chiu, and Chang’s (2007) finding that facilitating conditions did not influence
student satisfaction with web-based learning, but do not seem consistent with Selim’s (2007) observation that students associate acces-
sibility, infrastructure quality, and support (all factors measured in this study as facilitating conditions) with their use of LMS. One reason
why facilitating conditions may not have played a role in this study is that the LMS was well established and stable, and students had rel-
atively high levels of experience with it. It would be interesting to further explore the role of facilitating conditions in a wider range of
environments.
An alternative explanation is suggested by the moderately high correlation between facilitating conditions and expected consequences of LMS use (0.53) and attitudes to LMS use (0.63). Facilitating conditions may influence utilization by an indirect path, through expected con-
sequences and attitudes to use, rather than directly as proposed in the TPC and tested in this study. The possibility of an indirect influence
should be considered in future research.

6.3. Influences on LMS performance impacts

Consistent with the TPC and research by Goodhue and colleagues (Goodhue et al., 1997; Goodhue & Thompson, 1995), and in contrast to
Staples and Seddon’s (2004) findings, LMS utilization positively influenced perceived impact on learning in this study. Thus, increased use
of a LMS can lead to increases in perceptions of learning.
The role of task–technology fit in directly influencing performance is a key element of the TPC, and has been confirmed in studies in
other domains by Goodhue and colleagues and others (D’Ambra & Wilson, 2004; Goodhue, 1995; Goodhue et al., 2000; Goodhue et al.,
1997; Goodhue & Thompson, 1995; Staples & Seddon, 2004). While task–technology fit was found to influence perceptions of learning,
it had only a small (but significant) effect on student grades. Thus while its role in LMS success has been confirmed by this study, it is
important to note that the effect of the LMS on perceived impact on learning was moderately strong while its effect on student grades
was weak. Also of relevance here is the lack of relationship between perceived impact on learning and student grades. The students in
the study evidently felt that the LMS was contributing to their learning, yet this was not reflected in their grades. A possible explanation
for this could be mismatches between what students perceive they should learn, and what is actually tested.
The relationship between perceptions of performance impact and objective measures of impact is acknowledged to be complex (Ball-
antine et al., 1998; DeLone & McLean, 2003; McGill & Klobas, 2008; Shayo et al., 1999) and students may have a variety of goals for their
learning of which grades are just one. The items to measure perceived learning impact were general, intended to be compatible with a
range of goals. The relationship between task–technology fit and perceived learning impact suggests that improved task–technology fit
may provide efficiency and effectiveness benefits regardless of whether the student achieves higher grades. In addition, the slight improve-
ment in grades confirms the complexity of the relationship between use of technology and perceived and objective measures of impact, and supports van Raaij and Schepers' (2008) call for the use of grades to 'enrich analysis and add theoretical rigor'.

6.4. The TPC and LMS success

The TPC was used in this study to address the question of how task–technology fit influences the performance impacts of LMSs. The
seven hypothesized paths that were supported in this test of the TPC suggest that task–technology fit plays an important role in influencing
LMS success as perceived by students – both via precursors of utilization, and through direct effects on learning. Task–technology fit both
indirectly influences LMS utilization via attitude toward LMS use and directly influences learning impacts. Utilization also directly
influences perceived learning impacts. Instructor norms were also shown to influence utilization and hence perceived performance
impacts.
The model accounted for almost 45% of the variability in perceived impact on learning, and as discussed by Staples and Seddon (2004)
most of the explanatory power of the model comes from task–technology fit. Whilst task–technology fit significantly influenced student
grades, the effect was weak and only a very small proportion of the variability in grades was explained. Student results are influenced
by a myriad of other factors including ability, personal goals (Yi & Im, 2004), learning style (Graff, 2003), and beliefs about the nature of learning (Jacobson & Spiro, 1995). In particular, achievement of higher grades is not necessarily the goal of all students. Students may perceive that
the LMS has influenced the efficiency of their learning, or the ease of their studies, but not hope to take advantage of that to increase their
grades. Future research should consider the role of individual motivational beliefs on LMS outcomes. Thus, this study makes an important
contribution by highlighting the role that task–technology fit may have in influencing student performance with LMSs. Task–technology fit
is clearly essential for successful use of LMSs and future research should further investigate the role of task–technology fit in LMS success.
This study also makes a significant contribution by acknowledging the role that student perceptions of instructors' beliefs about the importance of using an LMS play in the success of the LMS. If instructors have doubts about the value of LMSs in their teaching, this can, perhaps unwittingly, have a negative impact on student outcomes.

Appendix

Items used to measure constructs

Note: items marked with * were dropped from the final analysis after measurement model development.

Task–technology fit

WebCT fits well with the way I like to study*.
WebCT is compatible with all aspects of my study*.
WebCT is easy to use.
WebCT is user friendly.
It is easy to get WebCT to do what I want it to do.
WebCT is easy to learn.
It is easy for me to become more skilful at using WebCT.
New features of WebCT are easy to learn.
Do you think the output from WebCT is presented in a useful format?
Is the information from WebCT accurate?
Does WebCT provide you with up-to-date information?
Do you get the information you need in time?
Does WebCT provide output that seems to be just about exactly what you need?

Expected consequences of LMS use

Using WebCT will help me to accomplish my study more quickly.
Using WebCT will improve my performance in units.
Using WebCT will increase my productivity.
Using WebCT will enhance my effectiveness in my program of study.
Using WebCT will make it easier to complete my learning tasks.
Using WebCT will give me greater control over my learning tasks.
Overall, I think that WebCT will be useful in my studies*.
Using WebCT will improve the quality of my learning*.

Attitude towards LMS use

Using WebCT in my studies/teaching is: unpleasant ... pleasant.
My frequent use of WebCT is: bad ... good.
Using WebCT a lot in my studies is: awful ... great.
All things considered, my using WebCT in my studies is: harmful ... beneficial.

Social norms

My friends think it is important for me to use WebCT.
My family thinks it is important for me to use WebCT.
People respect you if you use WebCT.

Instructor norms

My lecturer/tutor thinks it is important for me to use WebCT.

Facilitating conditions

The support staff make it easy to use WebCT.
WebCT support is always available when I want it.
Training on how to use WebCT is available to me.
A specific person (or group) is available for assistance with WebCT difficulties.
I can always access a computer to use WebCT when I need it.
Downloading learning materials is fast.

Utilization

On average, how many hours per week do you use WebCT during semester?
How many hours a week do you expect to use WebCT (for the rest of semester)?
Your usage of WebCT so far this semester is: light ... heavy.
Your expected use of WebCT for the rest of semester is: light ... heavy.

Perceived impact on learning

WebCT has a large positive impact on my effectiveness and productivity as a student.
WebCT is an important and valuable aid to me in my studies.
I learn better with WebCT than without it.

Student grades

 What percentage did you receive for your last test, exam or assignment in a unit that uses WebCT?

References

Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50, 179–211.
Ajzen, I., (2002). Constructing a TpB questionnaire: Conceptual and methodological considerations. <http://www-unix.oit.umass.edu/~aizen/pdf/tpb.measurement.pdf>
Retrieved 08.08.07.

Alavi, M., & Leidner, D. E. (2001). Research commentary: Technology-mediated learning – A call for greater depth and breadth of research. Information Systems Research, 12(1),
1–10.
Al-Gahtani, S. S., Hubona, G. S., & Wang, J. (2007). Information technology (IT) in Saudi Arabia: Culture and the acceptance and use of IT. Information & Management, 44(8),
681–691.
Arbaugh, J. B., & Benbunan-Fich, R. (2007). The importance of participant interaction in online environments. Decision Support Systems, 43(3), 853–865.
Ballantine, J., Bonner, M., Levy, M., Martin, A., Munro, I., & Powell, P. L. (1998). Developing a 3-D model of information systems success. In E. J. Garrity & G. L. Sanders (Eds.),
Information systems success measurement (pp. 46–59). Hershey, PA: Idea Group Publishing.
Bandura, A. (1997). Self-efficacy: The exercise of control. New York: W.H. Freeman.
Baroudi, J. J., & Orlikowski, W. J. (1988). A short form measure of user satisfaction and notes on use. Journal of Management Information Systems, 4(4), 44–59.
Becker, R., & Jokivirta, L. (2007). Online learning in universities: Selected data from the 2006 Observatory survey – November 2007. The Observatory on Borderless Higher
Education (OBHE). [Electronic Version] <http://www.obhe.ac.uk> Retrieved 21.09.07.
Browne, T., Jenkins, M., & Walker, R. (2006). A longitudinal perspective regarding the use of VLEs by higher education institutions in the United Kingdom. Interactive Learning
Environments, 14(2), 177–192.
Chang, M. K., & Cheung, W. (2001). Determinants of the intention to use Internet/WWW at work: A confirmatory study. Information & Management, 39(1), 1–14.
Chin, W. W. (1998). The partial least squares approach to structural equation modelling. In G. A. Marcoulides (Ed.), Modern methods for business research (pp. 295–336).
Mahwah, NJ: Lawrence Erlbaum Associates.
Chiu, C. M., Chiu, C. S., & Chang, H. C. (2007). Examining the integrated influence of fairness and quality on learners’ satisfaction and Web-based learning continuance
intention. Information Systems Journal, 17(3), 271–287.
Chou, S.-W., & Liu, C.-H. (2005). Learning effectiveness in a web-based virtual learning environment: A learner control perspective. Journal of Computer Assisted Learning, 21(1),
65–76.
Chyung, S. Y., & Vachon, M. (2005). An investigation of the profiles of satisfying and dissatisfying factors in e-learning. Performance Improvement Quarterly, 18(2), 97–114.
Coates, H., James, R., & Baldwin, G. (2005). A critical examination of the effects of learning management systems on university teaching and learning. Tertiary Education and
Management, 11(1), 19–36.
Collis, B., & van der Wende, M. (2002). Models of technology and change in higher education: An international comparative survey on the current and future use of ICT in
higher education. [Electronic Version] <http://www.utwente.nl/cheps/documenten/ictrapport.pdf> Retrieved 11.02.08.
D’Ambra, J., & Wilson, C. S. (2004). Use of the World Wide Web for international travel: Integrating the construct of uncertainty in information seeking and the task–
technology fit (TTF) model. Journal of the American Society for Information Science and Technology, 55(8), 731–742.
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–339.
Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer technology: A comparison of two theoretical models. Management Science, 35(8), 982–1003.
DeLone, W. H., & McLean, E. R. (1992). Information systems success: The quest for the dependent variable. Information Systems Research, 3(1), 60–95.
DeLone, W. H., & McLean, E. R. (2003). The DeLone and McLean model of information systems success: A ten-year update. Journal of Management Information Systems, 19(4),
9–30.
Dishaw, M. T., & Strong, D. M. (1998). Assessing software maintenance tool utilization using task–technology fit and fitness-for-use models. Journal of Software Maintenance:
Research and Practice, 10(3), 151–179.
Dishaw, M. T., & Strong, D. M. (1999). Extending the technology acceptance model with task–technology fit constructs. Information & Management, 36(1), 9–21.
Doll, W. J., & Torkzadeh, G. (1988). The measurement of end-user computing satisfaction. MIS Quarterly, 12(2), 259–274.
Fishbein, M. (1972). The search for attitudinal–behavioral consistency. In J. B. Cohen (Ed.), Behavioral science foundations of consumer behavior (pp. 245–252). New York: Free
Press.
Fishbein, M., & Ajzen, I. (1975). Belief, attitude, intention and behavior: An introduction to theory and research. Reading, MA: Addison-Wesley.
Goodhue, D. (1995). Understanding user evaluations of information systems. Management Science, 41(12), 1827–1844.
Goodhue, D. (1997). The model underlying the measurement of the impacts of the IIC on the end-users. Journal of the American Society for Information Science, 48(5), 449–453.
Goodhue, D., Klein, B. D., & March, S. T. (2000). User evaluations of IS as surrogates for objective performance. Information & Management, 38(2), 87–101.
Goodhue, D., Littlefield, R., & Straub, D. W. (1997). The measurement of the impacts of the IIC on the end-users: The survey. Journal of the American Society for Information
Science, 48(5), 454–465.
Goodhue, D., & Thompson, R. L. (1995). Task–technology fit and individual performance. MIS Quarterly, 19(2), 213–236.
Graff, M. (2003). Learning from Web-based instructional systems and cognitive style. British Journal of Educational Technology, 34(4), 407–418.
Hair, J. F., Black, W. C., Babin, B. J., Anderson, R. E., & Tatham, R. L. (2006). Multivariate data analysis. New Jersey: Prentice-Hall.
Hartwick, J., & Barki, H. (1994). Explaining the role of user participation in information system use. Management Science, 40(4), 440–465.
Hayashi, A., Chen, C., Ryan, T., & Wu, J. (2004). The role of social presence and moderating role of computer self efficacy in predicting the continuance usage of e-learning
systems. Journal of Information Systems Education, 15(2), 139–154.
Hornik, S., Johnson, R. D., & Wu, Y. (2007). When technology does not support learning: Conflicts between epistemological beliefs and technology support in virtual learning
environments. Journal of Organizational and End User Computing, 19(2), 23–46.
Hulland, J. (1999). Use of partial least squares (PLS) in strategic management research: A review of four recent studies. Strategic Management Journal, 20(2), 195–204.
Jacobson, M. J., & Spiro, R. J. (1995). Hypertext learning environments, cognitive flexibility, and the transfer of complex knowledge: An empirical investigation. Journal of
Educational Computing Research, 12(4), 301–333.
Klobas, J. E., & Haddow, G. (2000). Evaluating the impact of computer-supported international collaborative teamwork in business education [Electronic Version]. International
Journal of Educational Technology, 2. <http://smi.curtin.edu.au/ijet/v2n1/klobas/> Retrieved 10.02.08.
Lipponen, L., Hakkarainen, K., & Paavola, S. (2004). Practices and orientations of CSCL. In J. Strijbos, P. A. Kirschner, R. L. Martens, & P. Dillenbourg (Eds.), What we know about
CSCL and implementing it in higher education (pp. 31–50). Norwell, MA: Kluwer Academic Publishers.
McGill, T., & Klobas, J. (2008). User developed application success: Sources and effects of involvement. Behaviour and Information Technology, 27(5), 407–422.
Moore, G. C., & Benbasat, I. (1991). Development of an instrument to measure the perceptions of adopting an information technology innovation. Information Systems Research,
2(3), 192–222.
Ngai, E. W. T., Poon, J. K. L., & Chan, Y. H. C. (2007). Empirical examination of the adoption of WebCT using TAM. Computers & Education, 48(2), 250–267.
Nunnally, J. C. (1978). Psychometric theory. New York: McGraw-Hill.
Oliver, R. L. (1980). A cognitive model of the antecedents and consequences of satisfaction decisions. Journal of Marketing Research, 17(4), 460–469.
Pendharkar, P. C., Rodger, J. A., & Khosrow-Pour, M. (2001). Development and testing of an instrument for measuring the user evaluations of information technology in health
care. Journal of Computer Information Systems, 41(4), 84–89.
Piccoli, G., Ahmad, R., & Ives, B. (2001). Web-based virtual learning environments: A research framework and a preliminary assessment of effectiveness in basic IT skills
training. MIS Quarterly, 25(4), 401–427.
Pituch, K. A., & Lee, Y.-k. (2006). The influence of system characteristics on e-learning use. Computers & Education, 47(2), 222–244.
Roca, J. C., Chiu, C.-M., & Martínez, F. J. (2006). Understanding e-learning continuance intention: An extension of the technology acceptance model. International Journal of
Human–Computer Studies, 64(8), 683–696.
Rudestam, K. E., & Schoenholtz-Read, J. (2002). The coming of age of adult education. In K. E. Rudestam & J. Schoenholtz-Read (Eds.), Handbook of online learning: Innovations in
higher education and corporate training (pp. 3–28). Thousand Oaks, CA: Sage.
Selim, H. M. (2007). Critical success factors for e-learning acceptance: Confirmatory factor models. Computers & Education, 49(2), 396–413.
Shayo, C., Guthrie, R., & Igbaria, M. (1999). Exploring the measurement of end user computing success. Journal of End User Computing, 11(1), 5–14.
Staples, D. S., & Seddon, P. (2004). Testing the technology-to-performance chain model. Journal of Organizational and End User Computing, 16(4), 17–36.
Sumner, M., & Hostetler, D. (1999). Factors influencing the adoption of technology in teaching. Journal of Computer Information Systems, 40(1), 81–87.
Sun, P., Tsai, R., Finger, G., Chen, Y., & Yeh, D. (2008). What drives a successful e-Learning? An empirical investigation of the critical factors influencing learner satisfaction.
Computers & Education, 50(4), 1284–1303.
Swan, K. (2001). Virtual interaction: Design factors affecting student satisfaction and perceived learning in asynchronous online courses. Distance Education, 22(2), 306–332.
Taylor, S., & Todd, P. A. (1995). Understanding information technology usage: A test of competing models. Information Systems Research, 6(2), 144–176.
Thompson, R. L., Higgins, C. A., & Howell, J. M. (1991). Personal computing: Toward a conceptual model of utilization. MIS Quarterly, 15(1), 125–143.
Thompson, R. L., Higgins, C. A., & Howell, J. M. (1994). Influence of experience on personal computer utilization: Testing a conceptual model. Journal of Management
Information Systems, 11(1), 167–187.
Triandis, H. C. (1971). Attitude and attitude change. New York, NY: John Wiley and Sons.
van Raaij, E. M., & Schepers, J. J. L. (2008). The acceptance and use of a virtual learning environment in China. Computers & Education, 50(3), 838–852.
Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the technology acceptance model: Four longitudinal field studies. Management Science, 46(2), 186–204.
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478.
Wang, Y.-S., Wang, H.-Y., & Shee, D. Y. (2007). Measuring e-learning systems success in an organizational context: Scale development and validation. Computers in Human
Behavior, 23(4), 1792–1808.
Webster, J., & Hackley, P. (1997). Teaching effectiveness in technology-mediated distance learning. Academy of Management Journal, 40(6), 1282–1309.
Williams, P. (2002). The learning web: The development, implementation and evaluation of Internet-based undergraduate materials for the teaching of key skills. Active
Learning in Higher Education, 3(1), 40–53.
Yi, M. Y., & Im, K. S. (2004). Predicting computer task performance: Personal goal and self-efficacy. Journal of Organizational and End User Computing, 16(2), 20–37.
Yip, M. C. W. (2004). Using WebCT to teach courses online. British Journal of Educational Technology, 35(4), 497–501.
Zhang, D., Zhao, J. L., Zhou, L., & Nunamaker, J. F. (2004). Can e-learning replace classroom teaching? Communications of the ACM, 47(5), 75–79.
