
IADIS International Conference e-Learning 2008

A PROPOSED THEORETICAL MODEL FOR EVALUATING E-LEARNING

Brenda Mallinson, Norman Nyawo


Rhodes University

ABSTRACT
The deployment of e-learning offers an opportunity to build the skills required for the 21st century knowledge-based
economy. It is important to be able to evaluate various e-learning systems and analyse their efficacy. The focus of this
paper is to investigate the area of e-learning evaluation in order to discover or formulate a framework or model that
would assist the successful evaluation of e-learning in Higher Education Institutions (HEIs). The manner in which
organisations currently implement e-learning evaluation is investigated. This paper critically assesses four current models
and determines how applicable they are to HEIs. Finally, the various perspectives are synthesised and inform the creation
of a new theoretical model for the implementation of successful e-learning evaluation. The proposed model attempts to
address the identified shortcomings, and is suggested for use as a guideline for evaluating e-learning in HEIs.

KEYWORDS
Evaluation, E-Learning, Higher Education Institutions

1. INTRODUCTION
Rosenberg (2006) redefines e-learning as “the use of Internet technologies to create and deliver a rich
learning environment that includes a broad array of instruction and information resources and solutions, the
goal of which is to enhance individual and organizational performance”. Active learning strategies placing
the student at the heart of the education process can now be supported by a range of media deployed by an
HEI. Mostert and Hodgkinson-Williams (2006) report that the high level of hardware/software availability, together with pervasive Internet access, is reflected in the growing prevalence of e-learning in HEIs.
An important goal of e-learning is that it should be equivalent to or better than learning provided by
conventional methods such as classroom-based instruction (Leung, 2003) and, as such, justify the return on
investment (ROI). Although there has been a significant increase in the use of e-learning in mainstream
education, very little research has been conducted to justify its use (Aivazidis et al., 2006), and the evaluation
of e-learning solutions is only partially resolved (Voigt and Swatman, 2004). HEIs considering the use of e-
learning are increasingly aware of the need for quality in both the development and implementation of their
online solutions, and evaluation of these systems will promote quality maintenance.
This study investigates how e-learning is or should be evaluated in HEIs in order to ascertain whether
their various e-learning technologies are providing them with a positive ROI. Current research on e-learning
evaluation, the purpose of evaluation, the motivation for evaluating e-learning systems, and the reasons why
some institutions may not want to evaluate their systems are investigated. Existing evaluation models are
examined, and an approach to e-learning evaluation that is designed to deal with all stages of the e-learning
cycle is shown. Finally, a new theoretical model is proposed to promote the effective evaluation of e-
learning. It is suggested that e-learning takes place in a social context and therefore any evaluation methods
and their impact on outcomes should take the surrounding constraints into consideration.


2. THE PURPOSE OF, MOTIVATION FOR, AND OBJECTIONS TO E-LEARNING EVALUATION
The field of evaluation of e-learning systems has become contentious due to competing models and
conflicting paradigms. A crucial question is the reason for evaluation. E-learning projects have recorded
some spectacular successes, perhaps because only successful projects tend to be reported. Evaluation of an e-
learning system refers to evaluation of the entire online interactive curriculum; it should entail a systematic
process that judges the worth of an online educational programme via quantitative and/or qualitative data
analysis, consistent with the evaluation criteria (Maudsley, 2001) and should aim to improve students’
experience and achievements. Maudsley (2001) also states that the main reason to evaluate e-learning is to
stimulate and maintain renewal of e-learning systems.
Horton (2001) describes evaluation as literally assigning a value to something: that is, determining the
level of quality. Evaluation provides the instructor with the ability to measure needs and assess the results of
learning activity (Hricko and Howell, 2006). There are always many variables at work in addition to those
directly affected by the project itself (Stake, 1991). Evaluation is also used to judge the effectiveness of a
project, where the aim is to discover the degree to which a project has reached its goals (Garvin-Doxas and
Barker, 2004). Stake (1991) notes that evaluation is used to check whether a product conforms to its
specifications. Gathering feedback from others and spending time reflecting on it are critical to increase the
level of understanding of the teaching practice (Hricko and Howell, 2006).
Voigt and Swatman (2004) emphasise the importance of e-learning evaluation and believe it is crucial to
understanding the effectiveness of e-learning in a business or academic environment. Evaluation should be
simple, flexible, reliable and economical. Evaluation needs to be embedded in the overall teaching process
(Khan, 2001). Well planned evaluation is crucial to ensure the consistency and quality needed for a
successful e-learning programme. E-learning in a social context is an open system as it interacts with the
environment and therefore any evaluation of methods must take into consideration the surrounding
constraints and underlying assumptions (Voigt and Swatman, 2004). They state that the disadvantage of
integrating context into e-learning evaluations is that the evaluation increases in complexity and could
impede evaluation if: the scope of the evaluation does not correspond to the resources available; and/or the
formal processes governing how empirical enquiries should be executed are overstated; and/or the
implications of taking action in real settings are ignored. Evaluations should show recommendations on how
to proceed, which will limit the range of views and possible explanations. Reasons to evaluate e-learning
systems include:
• Justify investments in education: evaluation of e-learning systems may be used as evidence of
whether the technology is profitable to the institution. Evaluation may help convince top executives
of the institution that e-learning is beneficial (Horton, 2001).
• Encourage learning: the process of evaluation encourages the learners to work harder as their results
will be monitored. The evaluation process is usually more important than the data gathered. The
process is more that of fact finding than data gathering (Reeves and Hedberg, 2003).
• Support decision making: evaluation can effect this by placing a value on action alternatives during
formative and summative evaluation (Voigt and Swatman, 2004).
• Accountability of the responsible participants: the evaluation process helps reveal whether the
individuals, departments and facilitators responsible for implementing and using the interactive
systems are delivering the promised results (Horton, 2001).
• Improve training or learning quality: depending on the various models used for evaluation, the
evaluation process may reflect the quality and effectiveness of the learning material (Reeves and
Hedberg, 2003) and identify areas needing improvement.
• Inform future training plans and strategy: e-learning is a fast-paced technology that requires prior
training and strategic planning before implementation. Evaluation of current e-learning systems will
help top executives make strategic decisions that are well informed.
Although the following points outline reasons for resistance to e-learning evaluation, the existing
objections to performing evaluation may be countered by a carefully crafted evaluation plan (Horton, 2001).
Resistance may also stem from the inter-related sources of management and leadership, the information
technology department, and the training department (Van Dam, 2004).


• Evaluation is expensive and difficult: some organisations may lack the proper budget, skills and time
to evaluate their e-learning systems effectively (Horton, 2001). For smaller organisations or
institutions, evaluation may result in budget and time over-runs.
• Evaluation is political: the notion of evaluation often results in personnel feeling some discomfort
and even organisational paranoia. Instructors who use traditional teaching methods may feel threatened if evaluation compares their methods to e-learning systems (Horton, 2001).
• Credibility of e-learning: the launch of questionable e-learning courseware combined with some less
successful e-learning implementations has bruised the image of e-learning and critics use this to
discredit the necessity of evaluation (Van Dam, 2004).
No single model was found that can be used for the evaluation of e-learning in Higher Education Institutions. The following models have been adapted by various authors in an attempt to formulate a suitable e-learning evaluation model. The two main schools of thought are one that follows traditional Kirkpatrick-inspired views and another that follows a systematic approach to e-learning evaluation.

3. MODIFIED KIRKPATRICK MODELS


Many professionals turn to Kirkpatrick's model of four ordered, structured levels because it has become an industry standard for evaluation. Most evaluations take a layered approach using the basic model of: Level 1 – Response (Reaction); Level 2 – Learning; Level 3 – Performance (Behaviour); and Level 4 – Results (Horton, 2001). The first model examined is Van Dam's (2004) expanded Kirkpatrick model, which inserts two new levels. Level 0 (Participation) was added as e-learning participation has evolved and become an important factor in evaluation; participation can be measured by counting the number of hits on the website, downloads, live plays, orders, unique users, live e-learning attendance and overall usage. The additional Level 3 (Job Application) ascertains whether the acquired skills were later used, and is related to Level 0 (Participation).
Table 1. Van Dam's modified Kirkpatrick evaluation model (Van Dam, 2004)

Level 0 – Participation: Focuses on the level of participation and interaction with the application.
Level 1 – Response (Reaction): Was the course liked by students? Was it completed? This level gauges the learners' satisfaction with the training program.
Level 2 – Learning: Did the students gain any knowledge or skills? This level verifies improvement in skill, acquisition of knowledge, or positive change in attitude.
Level 3 – Job Application: Did they use it? This level ascertains whether the acquired skills were later used.
Level 4 – Performance (Behaviour): Did the course improve student performance? This level determines the impact of training on behaviour, on-the-job performance and application of learned skill.
Level 5 – Results: Was there a good ROI for the institution? This level ascertains whether the training program achieved or impacted desired end-results.
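
To make the level structure concrete, the sketch below represents Van Dam's six levels and the Level 0 usage counters listed above as simple Python structures. This is purely illustrative: the class, field and function names are hypothetical and are not part of Van Dam's (2004) model.

```python
from dataclasses import dataclass
from enum import IntEnum


class EvaluationLevel(IntEnum):
    """Van Dam's (2004) six evaluation levels, extending Kirkpatrick."""
    PARTICIPATION = 0    # interaction with the application
    RESPONSE = 1         # learner reaction / satisfaction
    LEARNING = 2         # knowledge or skills gained
    JOB_APPLICATION = 3  # were the acquired skills later used?
    PERFORMANCE = 4      # impact on behaviour and on-the-job performance
    RESULTS = 5          # end results / return on investment


@dataclass
class ParticipationMetrics:
    """Level-0 counters named after the usage indicators listed above
    (hits, downloads, live plays, unique users, live attendance)."""
    page_hits: int = 0
    downloads: int = 0
    live_plays: int = 0
    unique_users: int = 0
    live_attendance: int = 0

    def summary(self) -> dict:
        return {
            "level": EvaluationLevel.PARTICIPATION.name,
            "total_interactions": self.page_hits + self.downloads
                                  + self.live_plays + self.live_attendance,
            "unique_users": self.unique_users,
        }


# Example: aggregate one term's (hypothetical) usage figures into a Level-0 snapshot.
metrics = ParticipationMetrics(page_hits=12500, downloads=840, live_plays=310,
                               unique_users=275, live_attendance=190)
print(metrics.summary())
```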

The second model is the result of Beal (2007) proposing that the ADDIE model (Analysis, Design,
Development, Implementation, and Evaluation) be used in conjunction with Kirkpatrick’s model. The most
widely used methodology for developing new education and training programs is called Instructional
Systems Design (ISD). This approach provides a step-by-step system for the evaluation of students' needs,
the design and development of training materials, and the evaluation of the effectiveness of the training
intervention. Almost all ISD models are based on the generic ADDIE model (Beal 2007). Each step has an
outcome that feeds the subsequent step and the five phases represent a dynamic, flexible guideline for
building effective training and performance support tools. Usually, evaluation design for e-learning only takes place at the end of the development process, when ideally it should take place from the beginning. The Evaluator's Project Report Summary (Table 2), which integrates the ADDIE model with Kirkpatrick's model, illustrates how useful evaluation at each phase can be and shows that it should not be left to the 'Steps' suggested by Kirkpatrick alone. The most important linking phase is the Evaluation phase, which focuses on how well participants have mastered the learning content and on the effectiveness of the training programme or application.


Table 2. The Evaluator’s Project Report Summary (Beal, 2007)


Analysis
• Kirk Four (ROI): What are the business challenges, financial and competitive goals?
• Kirk Three (On-the-Job): What performance supports overall goals on the job? What performance problems or obstacles make it difficult to support the goals?
• Kirk Two (Tests): What exactly do top performers do that supports overall goals and which can be observed and measured?
• Kirk One (Feedback): How can employees best learn how to perform in ways that support overall business goals?
Design
• Kirk Four (ROI): How to show connection with business goals throughout the program?
• Kirk Three (On-the-Job): If employees forget details, what job aid or other resources can they use as a reminder?
• Kirk Two (Tests): What is an effective sequence for teaching the concepts, information, skills and attitudes needed in order to achieve performance goals?
• Kirk One (Feedback): What are some alternatives to abstract, sometimes boring, slide lectures?
Development
• Kirk Four (ROI): Is there a clear connection between e-Learning content and the need to support department and organizational goals?
• Kirk Three (On-the-Job): Does the e-Learning program have the look and feel of real world challenges so participants learn what they need to do on the job?
• Kirk Two (Tests): Are key concepts and skills presented, demonstrated, practiced and reviewed … not just lectured about?
• Kirk One (Feedback): Are the program activities and materials what participants want as well as need?
Implementation
• Kirk Four (ROI): Is there organizational support for the e-Learning at the executive level?
• Kirk Three (On-the-Job): Is it job related?
• Kirk Two (Tests): Are performance objectives and learning content in sync?
• Kirk One (Feedback): Is it user friendly?
Evaluation
• Kirk Four (ROI): Return on investment in e-Learning, based on business goals.
• Kirk Three (On-the-Job): 360 interviews or on-the-job assignments, etc. to check achievement of performance goals.
• Kirk Two (Tests): Paper and pencil tests or observed or scored activities (role play, simulation, presentations).
• Kirk One (Feedback): Reaction and Feedback Questionnaire with ratings and comments (Smile sheets).

4. SYSTEMATIC EVALUATION MODELS


The third model was formulated by Lanzilotti et al. (2006). Their e-Learning Systematic Evaluation (eLSE)
model breaks away from the traditional Kirkpatrick framework. The eLSE methodology combines a specific
inspection technique with user testing. This approach to the evaluation of e-learning is more focused on
practical aspects, and should be used when performing a formative evaluation.
The inspection described in this model aims at giving an evaluator who may not have wide experience in
evaluating e-learning systems the tools to perform accurate evaluations. This methodology guides beginner
evaluators through a step-by-step process to conduct their evaluation. The methodology is based on the use of
evaluation patterns called Abstract Tasks (ATs), which precisely describe the activities to be performed
during the evaluation (Lanzilotti et al., 2006). This methodology suggests a flow of activities whose main result is a systematic combination of inspection and user-based evaluation.
Table 3. The E-Learning Systematic Evaluation (eLSE) model (Lanzilotti et al., 2006)

Preparatory Phase:
This is the initial phase of the model and it is only performed once for each analysis dimension. In this phase the definition of
a specific set of Abstract Tasks (ATs) must be carried out. This is done in order to create a conceptual framework against which the
subsequent evaluations can be compared.
This phase consists of: Identification of guidelines to be considered; and a Definition of a library of Abstract Tasks.
An AT is a description of what an evaluator has to do when inspecting an application. The AT shows the evaluator what to look for
in the application. It is a cross-reference list that can be used by the evaluator to judge the application. The AT can be used as
a pattern, allowing the evaluator to re-use the knowledge. An AT will possess the following items: AT Classification Code and
Title; Focus of Action; Intent; Activity Description; and Output.
Execution Phase:
This is performed each time a specific application must be evaluated, and consists of inspection performed by the evaluators. If needed, the inspection can be followed by user testing sessions involving real users. At the end, the evaluators must provide formative and subjective feedback to the designers and developers. There are two major inspection types in this phase:
a. Systematic inspection: During this the evaluator uses the ATs to analyse the application rigorously and produce a report.
b. User-based evaluation: This is conducted only when there is a disagreement among the evaluators on some inspection findings.
This aims at giving the designers and developers some feedback about the application.
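
As an illustration of the Abstract Task items listed above (classification code and title, focus of action, intent, activity description, and output), the following sketch models an AT and a simple inspection report in Python. The structures and the example AT are hypothetical; eLSE itself does not prescribe any particular implementation.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class AbstractTask:
    """One eLSE evaluation pattern, holding the items Lanzilotti et al. (2006)
    list for an Abstract Task. Field names are illustrative."""
    classification_code: str   # AT classification code
    title: str                 # AT title
    focus_of_action: str       # part of the application being inspected
    intent: str                # what the inspection is trying to establish
    activity_description: str  # activities the evaluator has to perform
    output: str                # expected form of the evaluator's findings


@dataclass
class InspectionReport:
    """Collects the findings a systematic inspection produces."""
    application: str
    findings: List[str] = field(default_factory=list)

    def record(self, task: AbstractTask, finding: str) -> None:
        self.findings.append(f"[{task.classification_code}] {task.title}: {finding}")


# Example usage with a hypothetical AT for course navigation.
navigation_at = AbstractTask(
    classification_code="NAV-01",
    title="Course navigation consistency",
    focus_of_action="Navigation bar and breadcrumb trail",
    intent="Check that learners can always locate their position in the course",
    activity_description="Traverse three lessons and note any dead ends",
    output="List of navigation problems with severity ratings",
)
report = InspectionReport(application="Sample LMS course")
report.record(navigation_at, "Breadcrumb disappears inside quiz pages")
print(report.findings)
```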


The fourth model is Voigt and Swatman’s (2004) suggested use of Fricke’s model, which includes nine
evaluation forms that consider a variety of prescriptive and descriptive research questions. Fricke’s model is
designed to deal with both the stages of the e-learning system life cycle and a variety of learning
environments (Voigt and Swatman, 2004). This model also emphasises the importance of context when
evaluating e-learning systems. E-learning in a social context is an open system, in that the system influences
the environment and vice versa, making it vulnerable to a number of external/internal contextual forces.
It is clear that any context-situated learning research must first define what should be evaluated and where
context comes into play. Fricke’s model established a popular framework for the design and evaluation of
multimedia-based instruction. Fricke identified five evaluation categories: Instructional conditions; Instructional outcomes; Instructional methods; Assumptions; and General considerations. The last two categories, in particular, help to integrate contextual information into evaluation design: 'Assumptions' help to clarify the norms and values underlying the evaluation design, and 'General considerations' describe the non-scientific nature of evaluations (Voigt and Swatman, 2004). The model suggests that evaluation be seen as an
ongoing process in the quest for transparency and better decision quality (Table 4).
Table 4. Fricke’s Evaluation Criteria: Contextual Variables (Voigt and Swatman, 2004)

C1 Learner's previous knowledge, attitudes and experiences
C2 Content to be learned
C3 Instructional outcomes
C4 Instructional methods
C5 Instructional settings
C6 Implicit learning and instructional theories
C7 Explicit learning and instructional theories
C8 Priorities of learning outcomes
C9 Financial resources and skills available
C10 Political guidelines
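
A minimal sketch of how Fricke's contextual variables could be used as an evaluation-design checklist is given below; the helper function and the example plan are hypothetical and only illustrate the idea of checking an evaluation design against C1 to C10.

```python
# Fricke's contextual variables (C1-C10) as a checklist; the helper below is
# an illustrative sketch, not part of Voigt and Swatman's (2004) framework.
FRICKE_VARIABLES = {
    "C1": "Learner's previous knowledge, attitudes and experiences",
    "C2": "Content to be learned",
    "C3": "Instructional outcomes",
    "C4": "Instructional methods",
    "C5": "Instructional settings",
    "C6": "Implicit learning and instructional theories",
    "C7": "Explicit learning and instructional theories",
    "C8": "Priorities of learning outcomes",
    "C9": "Financial resources and skills available",
    "C10": "Political guidelines",
}


def unaddressed_variables(evaluation_plan: dict) -> list:
    """Return the contextual variables an evaluation plan has not documented."""
    return [f"{code}: {desc}" for code, desc in FRICKE_VARIABLES.items()
            if code not in evaluation_plan]


# Example: a plan that only documents learner background, content and resources.
plan = {"C1": "First-year accounting students", "C2": "Introductory spreadsheets",
        "C9": "Two evaluators, one semester"}
print(unaddressed_variables(plan))
```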

5. CRITICAL ANALYSIS OF THE CURRENT MODELS


Van Dam’s (2004) adapted Kirkpatrick model is a good summary of the important steps that should be
included in any evaluation process of a training programme or application. The two extra levels added by
Van Dam (2004) make the model more applicable to various contexts. However, the model is caught in the
trap of adhering to Kirkpatrick’s generic model, which neglects the idea that systematic approaches are used
to design and develop these e-learning systems; thus it is important to take into consideration the systems
design model (Reeves and Hedberg, 2003). Jochems et al. (2004) highlight that Kirkpatrick’s model is partial
and has to be revised conceptually to be applicable, particularly in e-learning environments.
Beal’s (2007) integrated model is a good evaluation framework for e-learning systems as it takes into
account the systematic approach to evaluation. Reeves and Hedberg (2003) highlight how important the ISD
approach is for developing and evaluating education and training programmes. As the model implies some sort of iteration, it allows for a more thorough evaluation process that can guide evaluators towards a more detailed, systems-based approach. This approach is aligned with current best practice and
educational standards. A disadvantage of this model is that it fails to ask the crucial questions regarding the
experience that the users gained from the training application, or how well the training helped them perform
on the job. These questions could be addressed by the use of Kirkpatrick’s guidelines (Jochems et al., 2004).
The most important part of any evaluation model is to query the effectiveness of the evaluation process. This
is not expressed clearly in the ADDIE model, which has been criticised by some as being too systematic: too
linear, too inflexible, too constraining, and even too time-consuming to implement. As an alternative, there
are a variety of systemic design models that emphasise a more holistic, iterative approach to development.
Rather than developing the instruction in phases, the entire development team works together from the start
to rapidly build modules that can be tested with the student audience, and then revised based on their
feedback. Although this approach to development has many advantages when it comes to the creation of e-
learning, there are practical challenges in the management of resources. Frequently, training programmes
must be developed under a fixed and often limited budget and schedule. While it is easy to allocate people
and time to each step in the ADDIE model, it is harder to plan deliverables when there are no distinct steps.
The eLSE model focuses on user testing and obtaining direct user feedback to ascertain whether the
training application is effective. Evaluation patterns are created that can be used repeatedly, standardising the
whole procedure. The breaking down of the evaluation into a systematic inspection and a user-based
evaluation allows the evaluator the opportunity to examine all angles. The main advantage of this model is that it couples inspection and user testing to make the evaluation more practical and reliable, while still keeping it cost-effective. Each of the processes starts with a basic inspection and identifies areas that could pose problems. Following this, user testing is conducted. In addition, the use of Abstract Tasks (ATs) specifically defined for the evaluation of e-learning systems gives the evaluator a more detailed and practical evaluation.
Fricke's framework provides a rough idea of the categories we need to examine, and it recognises that
evaluation occurs at different points in the design process, which is consistent with the systemic nature of an
evolutionary learning process. The model is well structured and gives evaluators a good idea as to what, how,
and why they should go about the evaluation process (Reeves and Hedberg, 2003). The key to any good
evaluation framework is the ability to direct the process step by step and ensure that particular standards are
followed and maintained to ensure uniformity (Jochems et al., 2004). This model also highlights the role of
contextual variables in the whole evaluation process, and how they play a major role in the evaluation
outcomes. The model has been criticised for not conforming to the guidelines that have been set by
Kirkpatrick regarding evaluation (Reeves and Hedberg, 2003). This is primarily because the model is closely
linked with the ADDIE model, which focuses solely on a systematic approach to evaluation.

6. PROPOSED THEORETICAL MODEL


In the absence of a single model that addresses all the issues concerning the evaluation of e-learning, certain
important aspects of the current models will be integrated while attempting to define a new model that will
ensure the effective evaluation of e-learning. In addition, the proposed model is informed by the issues
already discussed in order to formulate a coherent set of critical success factors that have been modelled to
build a framework which can be used to evaluate e-learning training applications in HEIs. The proposed
model (Figure 1) should be considered as an iterative approach, starting with Analysis all the way through to
Evaluation and executing these phases for all the ‘Levels’ in the model. It is important to note that throughout
the evaluation process the model can be affected by external contextual and organisational variables.
The model draws upon Van Dam’s (2004) adapted and extended Kirkpatrick levels of evaluation. The
two additional levels are important as they add further significant dimensions to the evaluation framework.
The six levels (Participation, Response, Learning, Job Application, Performance and Results) are critical to
any evaluation framework of a training system, and thus provide the backbone of the proposed model.
The model is represented in a spiral structure with Analysis, Design, Development and Implementation
on the respective axes to denote what will be taking place in a particular timeframe. This systems approach to
evaluation is borrowed from Beal’s (2007) model which highlights that when dealing with e-learning training
systems it is important that you also take a systematic approach to the process. The spiral structure depicts
how the process is iterative and continuous. All the levels of evaluation can be performed repeatedly, which
becomes critical as variables and the situation can change for each iteration.
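
The following sketch illustrates the intended spiral control flow: each iteration passes through the four phases on the axes, and evaluation at all six levels can be repeated on every turn. The function and callback are hypothetical; they only make the iterative structure explicit and do not prescribe specific evaluation instruments.

```python
# Illustrative sketch of the proposed model's spiral: every iteration walks the
# four ADDIE-style phases and may revisit any of the six evaluation levels.
# Phase and level names follow the paper; the control flow is hypothetical.
ADDIE_PHASES = ["Analysis", "Design", "Development", "Implementation"]
EVALUATION_LEVELS = ["Participation", "Response", "Learning",
                     "Job Application", "Performance", "Results"]


def run_spiral(iterations: int, evaluate) -> list:
    """Run the spiral for a number of iterations.

    `evaluate` is a callback taking (iteration, phase, level) and returning a
    finding string, standing in for whatever instrument the evaluator chooses
    (questionnaire, test, usage log, ROI calculation, ...).
    """
    findings = []
    for i in range(1, iterations + 1):
        for phase in ADDIE_PHASES:           # one turn of the spiral
            for level in EVALUATION_LEVELS:  # evaluation can run at every level
                findings.append(evaluate(i, phase, level))
        # contextual and organisational variables may have changed, so the next
        # turn of the spiral re-assesses rather than assuming stability
    return findings


# Example: a trivial callback that simply records where evaluation took place.
log = run_spiral(2, lambda i, p, l: f"iteration {i}: {p} / {l}")
print(len(log), "evaluation points;", log[0])
```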
Contextual variables: in order to ascertain whether the training is working as intended, it is important to
measure contextual variables that may affect the evaluation process. For example, the lack of executive
support can undermine even the most effectively designed and delivered training program. An evaluation
should measure more than reaction, learning, behaviour and results. These original four levels of Kirkpatrick
are outcomes based and do not take into account the process leading to the results.
Organisational variables: evaluation will differ from institution to institution as each has its own set of
core values. In addition to having a standard set of values that the institution follows, members will generally be influenced by how their managers behave, as values are passed down from the top. An important
organisational factor usually overlooked by the models is networking. Building effective networks is related
to the four outcomes of Kirkpatrick’s model but it does not happen automatically. Although most institutions
experience bureaucratic barriers that block communication, networks can be used to break these barriers.
The four prominent models regarding the evaluation of e-learning have all weathered well, but these models appear to limit thinking and possibly the ability to conduct customised and meaningful evaluations. The error that most trainers, evaluators or users of these models make when trying to apply them is to apply the 'Steps' without taking the time to assess their needs and resources, or to determine
how they will apply their results. The proposed model attempts to address these shortcomings and brings
together critical aspects from the various models to generate an integrated model.


Figure 1. Proposed Model for E-Learning Evaluation


In order to understand the new model (Figure 1), it is important to re-examine some misconceptions embodied in the current models and to comment on how the proposed model directly improves on them.
Misconception 1: Level 4 of Kirkpatrick’s model is superior. Kirkpatrick’s Levels 1 to 4 measure
different aspects but level 4 is often described as a ‘higher’ level of evaluation. There is the view that level 4
is the pinnacle of the model as it is concerned with ROI and results.
Misconception 2: Level 3 is difficult to measure. Many measures are not appropriate or not sensitive
enough to detect changes in learners’ behaviours. It is difficult to ask the correct questions and obtain
accurate, truthful responses from people. Human behaviour is generally difficult to measure, thus the
measurement methods are not 100% reliable.
Misconception 3: Evaluation equals effectiveness. This is not necessarily true; the evaluation should focus on the learning aspect (Level 2) of the subject, while effectiveness focuses on whether training has produced the intended results (Levels 3 and 4). Evaluation and effectiveness are linked, but they should not necessarily be arranged in a continuum as they are in Kirkpatrick's model.
Misconception 4: The waterfall approach is the most suitable method. This approach has its own disadvantages, such as the lack of a clean division between phases in the life cycle: not all the problems related to a phase are resolved during that phase; instead, they are carried over to the next phase and need to be resolved there, consuming much of the next phase's time. The proposed model thus uses a spiral approach in its systematic evaluation so that issues are not carried over into the next phase but are resolved in further iterations.
Misconception 5: External variables are not relevant to the evaluation process. Most of the traditional evaluation models overlook the impact of external variables on the evaluation process. The proposed model caters for these variables, which can have a tremendous impact on the evaluation process and influence the ability of users to use the e-learning application effectively.
The proposed model (Figure 1) attempts to take into account all these misconceptions, critical factors and the basic levels of evaluation, combining them into a new, improved model. An important aspect of the proposed
model is that it is not a standardised, pre-packaged process. Not all options can be formulated in a model, but
the main aspects can be highlighted and it is up to the evaluator to ensure all possible options are specified in
the evaluation. Most importantly, guidance on how to interpret results from the study should be provided.

7. CONCLUSION
Faced with the proliferation of e-learning materials, modules, courses and programmes, instructors and
learners need to ascertain which ones will best meet their needs. E-learning developers need to adopt
systematic processes in order to address the requirements of teachers and learners and ensure the quality of
their offerings. The evaluation of e-learning is a crucial issue for researchers attempting to understand the
impact and effectiveness of e-learning in an academic environment.
The proposed theoretical model presents a systematic process that can be used to assist with the effective
evaluation of e-learning. It incorporates Kirkpatrick’s model as a guideline and, most importantly, it adheres
to the conventional software development life cycle. Evaluation is a basic component of good innovation
design and implementation. The use of this model to evaluate e-learning remains subject to the evaluator's judgement.
The theoretical model need only serve as a guide when evaluating e-learning and it is by no means ideal; in every situation, issues must be considered contextually. For this reason, further testing and analysis of the model is required to determine its value and contribution to subsequent initiatives in differing contexts.
Ultimately the proposed theoretical model is intended to act as a platform from which to increase the
successful evaluation of e-learning systems in HEIs.

REFERENCES
Aivazidis, C., Lazaridou, M. and Hellden, G. (2006) A Comparison Between a Traditional and Online Environmental Program. The Journal of Environmental Education. 37, 4: 45-54.
Beal, T. (2007) A.D.D.I.E. Meets the Kirkpatrick Four: A 3-Act Play. The E-learning Guild Research. 1, 1: 1-12.
Garvin-Doxas, K. and Barker, L. (2004) Broadening DLESE. Unpublished Annual Report for DLESE. Alliance for Technology, Learning, and Society Evaluation and Research Group, University of Colorado, Colorado.
Horton, W. (2001) Evaluating E-Learning. American Society for Training and Development, Alexandria.
Hricko, M. and Howell, S. (2006) Online Assessment and Measurement. Idea Group Inc., London.
Jochems, W., Van Merrienboer, J. and Koper, R. (2004) Integrated E-learning: Implications for Pedagogy, Technology and Organisation. RoutledgeFalmer, New York.
Khan, B. (2001) Web-Based Training. Educational Technology Publications, New Jersey.
Lanzilotti, R., Ardito, C., Costabile, M. and Angeli, A. (2006) A Systematic Approach to the e-Learning Systems Evaluation. School of Informatics. 9, 4: 42-53.
Leung, H. (2003) Evaluating the Effectiveness of E-learning. Computer Science Education. 13, 2: 123-136.
Maudsley, G. (2001) What issues are raised by evaluating problem-based undergraduate medical curricula? Making healthy connections across the literature. Journal of Evaluation in Clinical Practice. 7, 3: 311-324.
Mostert, M. and Hodgkinson-Williams, C. (2006) Using ICTs in Teaching and Learning: A Survey of Academic Staff and Students at Rhodes University. Unpublished report, Rhodes University.
Reeves, T. and Hedberg, J. (2003) Interactive Learning Systems Evaluation. Educational Technology Publications, New Jersey.
Rosenberg, M.J. (2006) Beyond E-Learning: Approaches and Technologies to Enhance Organizational Knowledge, Learning and Performance. McGraw-Hill.
Stake, S. (1991) The purpose of evaluation. [Online] [Accessed 12 May 2007] Available at http://www.cerlim.ac.uk/projects/efx/toolkit/evalpurpose.html.
Van Dam, N. (2004) The E-learning Fieldbook. McGraw-Hill, New York.
Voigt, C. and Swatman, P. (2004) Contextual e-learning evaluation: a preliminary framework. Journal of Educational Media. 29, 3: 175-187.
