All content following this page was uploaded by Brenda Justine Mallinson on 25 February 2015.
ABSTRACT
The deployment of e-learning offers an opportunity to build the skills required for the 21st century knowledge-based
economy. It is important to be able to evaluate various e-learning systems and analyse their efficacy. The focus of this
paper is to investigate the area of e-learning evaluation in order to discover or formulate a framework or model that
would assist the successful evaluation of e-learning in Higher Education Institutions (HEIs). The manner in which
organisations currently implement e-learning evaluation is investigated. This paper critically assesses four current models
and determines how applicable they are to HEIs. Finally, the various perspectives are synthesised and inform the creation
of a new theoretical model for the implementation of successful e-learning evaluation. The proposed model attempts to
address the identified shortcomings, and is suggested for use as a guideline for evaluating e-learning in HEIs.
KEYWORDS
Evaluation, E-Learning, Higher Education Institutions
1. INTRODUCTION
Rosenberg (2006) redefines e-learning as “the use of Internet technologies to create and deliver a rich
learning environment that includes a broad array of instruction and information resources and solutions, the
goal of which is to enhance individual and organizational performance”. Active learning strategies placing
the student at the heart of the education process can now be supported by a range of media deployed by an
HEI. Mostert and Hodgkinson-Williams (2006) report that the high level of hardware/software availability,
together with pervasive Internet access, are reflected in the growing prevalence of e-learning in HEIs.
An important goal of e-learning is that it should be equivalent to or better than learning provided by
conventional methods such as classroom-based instruction (Leung, 2003) and, as such, justify the return on
investment (ROI). Although there has been a significant increase in the use of e-learning in mainstream
education, very little research has been conducted to justify its use (Aivazidis et al., 2006), and the evaluation
of e-learning solutions is only partially resolved (Voigt and Swatman, 2004). HEIs considering the use of e-
learning are increasingly aware of the need for quality in both the development and implementation of their
online solutions, and evaluation of these systems will promote quality maintenance.
This study investigates how e-learning is or should be evaluated in HEIs in order to ascertain whether
their various e-learning technologies are providing them with a positive ROI. Current research on e-learning
evaluation, the purpose of evaluation, the motivation for evaluating e-learning systems, and the reasons why
some institutions may not want to evaluate their systems are investigated. Existing evaluation models are
examined, and an approach to e-learning evaluation that is designed to deal with all stages of the e-learning
cycle is shown. Finally, a new theoretical model is proposed to promote the effective evaluation of e-
learning. It is suggested that e-learning takes place in a social context and therefore any evaluation methods
and their impact on outcomes should take the surrounding constraints into consideration.
ISBN: 978-972-8924-58-4 © 2008 IADIS
IADIS International Conference e-Learning 2008
• Evaluation is expensive and difficult: some organisations may lack the proper budget, skills and time
to evaluate their e-learning systems effectively (Horton, 2001). For smaller organisations or
institutions, evaluation may result in budget and time over-runs.
• Evaluation is political: the notion of evaluation often results in personnel feeling some discomfort
and even organisational paranoia. Instructors who use traditional teaching methods may feel
threatened if evaluation compares their methods to e-learning systems (Horton, 2001).
• Credibility of e-learning: the launch of questionable e-learning courseware combined with some less
successful e-learning implementations has bruised the image of e-learning and critics use this to
discredit the necessity of evaluation (Van Dam, 2004).
It was found that there is no single model that can be used for the evaluation of e-learning
in Higher Education Institutions. The following models have been adapted by various authors in an attempt
to formulate a suitable e-learning evaluation model. The two main schools of thought are one that follows the
traditional Kirkpatrick-inspired views and one that follows a systematic approach to e-learning.
Level 1, Response (Reaction): Was the course liked by students? Was it completed? This level gauges the learners' satisfaction with the training program.
Level 2, Learning: Did the students gain any knowledge or skills? This level verifies improvement in skill, acquisition of knowledge, or positive change in attitude.
Level 3, Job Application: Did they use it? This level ascertains whether the acquired skills were later used.
Level 4, Performance (Behaviour): Did the course improve student performance? This level determines the impact of training on behaviour, on-the-job performance and application of learned skill.
Level 5, Results: Was there a good ROI for the institution? This level ascertains whether the training program achieved or impacted desired end-results.
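The five levels above can be read as a checklist of guiding questions, each deeper level presupposing the ones before it. The following sketch illustrates this as a simple data structure; the structure and function names are our own illustration, not part of Kirkpatrick's formulation.

```python
# Illustrative sketch: the five evaluation levels as a planning checklist.
KIRKPATRICK_LEVELS = {
    1: ("Response (Reaction)", "Was the course liked by students? Was it completed?"),
    2: ("Learning", "Did the students gain any knowledge or skills?"),
    3: ("Job Application", "Were the acquired skills later used?"),
    4: ("Performance (Behaviour)", "Did the course improve student performance?"),
    5: ("Results", "Was there a good ROI for the institution?"),
}

def evaluation_plan(up_to_level: int) -> list[str]:
    """Return the guiding questions for every level up to a chosen depth."""
    return [f"Level {n} {name}: {q}"
            for n, (name, q) in KIRKPATRICK_LEVELS.items() if n <= up_to_level]

for line in evaluation_plan(2):
    print(line)
```

An evaluator who only gathers satisfaction surveys stops at `evaluation_plan(1)`; a full ROI study would cover all five levels.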
The second model is the result of Beal (2007) proposing that the ADDIE model (Analysis, Design,
Development, Implementation, and Evaluation) be used in conjunction with Kirkpatrick’s model. The most
widely used methodology for developing new education and training programs is called Instructional
Systems Design (ISD). This approach provides a step-by-step system for the evaluation of students' needs,
the design and development of training materials, and the evaluation of the effectiveness of the training
intervention. Almost all ISD models are based on the generic ADDIE model (Beal 2007). Each step has an
outcome that feeds the subsequent step and the five phases represent a dynamic, flexible guideline for
building effective training and performance support tools. Usually evaluation design for e-learning only takes
place at the end of the development process when ideally it should take place at the beginning. The
Evaluator’s Project Report Summary (Table 2), which integrates the ADDIE model with Kirkpatrick’s
model, is a projection of how useful variant evaluation can be and that it should not be left to the ‘Steps’
suggested by Kirkpatrick. The most important link phase is the Evaluation phase that focuses on how well
participants have mastered the learning content and the effectiveness of the training programme or
application.
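Since each ADDIE phase produces an outcome that feeds the next, while evaluation ideally accompanies every phase rather than arriving only at the end, the cycle can be sketched as a small pipeline. The phase outcomes and log messages below are invented for illustration only.

```python
# Illustrative sketch of the ADDIE cycle: each phase's outcome feeds the next,
# and formative evaluation accompanies every phase instead of only the last one.
ADDIE_PHASES = ["Analysis", "Design", "Development", "Implementation", "Evaluation"]

def run_addie(initial_needs: str) -> list[str]:
    """Thread each phase's outcome into the next phase, logging evaluation at every step."""
    log, outcome = [], initial_needs
    for phase in ADDIE_PHASES[:-1]:
        outcome = f"{phase} outcome (based on: {outcome})"
        # Evaluation is not deferred to the end of the process.
        log.append(f"{phase} -> evaluated formatively")
    log.append("Evaluation -> summative review of the whole intervention")
    return log

for entry in run_addie("learner needs analysis"):
    print(entry)
```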
Preparatory Phase:
This is the initial phase of the model and is performed only once for each analysis dimension. In this phase a specific set of
Abstract Tasks (ATs) must be defined, in order to create a conceptual framework against which the subsequent evaluations
can be compared.
This phase consists of: identification of the guidelines to be considered; and definition of a library of Abstract Tasks.
An AT is a description of what an evaluator has to do when inspecting an application. The AT shows the evaluator what to look for
in the application. It is a cross-reference list that the evaluator can use to judge the application, and it can serve as
a pattern, allowing the evaluator to re-use knowledge. An AT possesses the following items: AT Classification Code and
Title; Focus of Action; Intent; Activity Description; and Output.
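The items that make up an AT map naturally onto a record structure. The sketch below shows how a library of ATs might be stored; the field names follow the items listed above, while the example values are hypothetical and not drawn from the model's authors.

```python
from dataclasses import dataclass

@dataclass
class AbstractTask:
    """One entry in a library of Abstract Tasks (ATs) used during inspection.
    Field names follow the AT items listed above; the sample values are invented."""
    classification_code: str   # AT Classification Code
    title: str                 # AT Title
    focus_of_action: str       # the part of the application under inspection
    intent: str                # what the evaluator is trying to judge
    activity_description: str  # what the evaluator has to do
    output: str                # what the inspection produces for the report

# Hypothetical example entry:
at = AbstractTask(
    classification_code="NAV/1",
    title="Check course navigation",
    focus_of_action="Course menu and breadcrumbs",
    intent="Judge whether learners can always locate themselves",
    activity_description="Traverse every module and note dead ends",
    output="List of navigation defects with severity",
)
print(at.classification_code, at.title)
```

Storing ATs in this form is what allows them to be re-used as patterns across applications, as the description above suggests.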
Execution Phase:
This is performed each time a specific application must be evaluated, and consists of inspection performed by the evaluators. If
needed, the inspection can be followed by user-testing sessions involving real users. At the end, the evaluators must provide
formative and subjective feedback to the designers and developers. There are two major inspection types in this phase:
a. Systematic inspection: the evaluator uses the ATs to analyse the application rigorously and produce a report.
b. User-based evaluation: conducted only when there is disagreement among the evaluators on some inspection findings.
It aims to give the designers and developers feedback about the application.
The fourth model is Voigt and Swatman’s (2004) suggested use of Fricke’s model, which includes nine
evaluation forms that consider a variety of prescriptive and descriptive research questions. Fricke’s model is
designed to deal with both the stages of the e-learning system life cycle and a variety of learning
environments (Voigt and Swatman, 2004). This model also emphasises the importance of context when
evaluating e-learning systems. E-learning in a social context is an open system, in that the system influences
the environment and vice versa, making it vulnerable to a number of external/internal contextual forces.
It is clear that any context-situated learning research must first define what should be evaluated and where
context comes into play. Fricke’s model established a popular framework for the design and evaluation of
multimedia-based instruction. Fricke identified five evaluation categories: Instructional conditions;
Instructional outcomes; Instructional methods; Assumptions; and General conditions. The last two
categories, in particular, help to integrate contextual information into evaluation design: 'Assumptions' helps
to clarify the norms and values underlying the evaluation design, and 'General conditions' describes the
non-scientific nature of evaluations (Voigt and Swatman, 2004). The model suggests that evaluation be seen as an
ongoing process in the quest for transparency and better decision quality (Table 4).
Table 4. Fricke’s Evaluation Criteria: Contextual Variables (Voigt and Swatman, 2004)
C1 Learner's previous knowledge, attitudes & experiences
C2 Content to be learned
C3 Instructional outcomes
C4 Instructional methods
C5 Instructional settings
C6 Implicit learning and instructional theories
C7 Explicit learning and instructional theories
C8 Priorities of learning outcomes
C9 Financial resources and skills available
C10 Political guidelines
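An evaluation plan could carry Table 4's contextual variables as a simple lookup, so that each evaluation decision records which variables it took into account. The mapping below is an illustrative convenience, not part of Fricke's model itself.

```python
# Fricke's contextual variables (Table 4) as a lookup for tagging evaluation decisions.
CONTEXTUAL_VARIABLES = {
    "C1": "Learner's previous knowledge, attitudes & experiences",
    "C2": "Content to be learned",
    "C3": "Instructional outcomes",
    "C4": "Instructional methods",
    "C5": "Instructional settings",
    "C6": "Implicit learning and instructional theories",
    "C7": "Explicit learning and instructional theories",
    "C8": "Priorities of learning outcomes",
    "C9": "Financial resources and skills available",
    "C10": "Political guidelines",
}

def describe(codes: list[str]) -> list[str]:
    """Expand variable codes into readable labels for an evaluation report."""
    return [f"{c}: {CONTEXTUAL_VARIABLES[c]}" for c in codes]

# E.g. a decision shaped by the content to be learned and the available budget:
print(describe(["C2", "C9"]))
```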
evaluation allows the evaluator the opportunity to examine all angles. The main advantage of this model is
that it couples inspection and user testing to make the evaluation more practical and reliable, while still
keeping it cost-effective. Each of the processes starts with a basic inspection and identifies areas that could pose
problems. Following this, user testing is conducted. In addition, the use of Abstract Tasks (ATs) specifically
defined for the evaluation of e-learning systems, gives the evaluator a more detailed and practical evaluation.
Fricke's framework provides a rough idea of the categories we need to examine, and it recognises that
evaluation occurs at different points in the design process, which is consistent with the systemic nature of an
evolutionary learning process. The model is well structured and gives evaluators a good idea as to what, how,
and why they should go about the evaluation process (Reeves and Hedberg, 2003). The key to any good
evaluation framework is the ability to direct the process step by step and ensure that particular standards are
followed and maintained to ensure uniformity (Jochems et al., 2004). This model also highlights the role of
contextual variables throughout the evaluation process, and how strongly they influence the evaluation
outcomes. The model has been criticised for not conforming to the guidelines that have been set by
Kirkpatrick regarding evaluation (Reeves and Hedberg, 2003). This is primarily because the model is closely
linked with the ADDIE model, which focuses solely on a systematic approach to evaluation.
proposed model caters for these variables that can have tremendous impact on the evaluation process and
influence the ability of users to use the e-learning application effectively.
The proposed model (Figure 1) attempts to take into account all these misconceptions, critical factors and
the basic levels of evaluation, combining them into a new, improved model. An important aspect of the proposed
model is that it is not a standardised, pre-packaged process. Not all options can be formulated in a model, but
the main aspects can be highlighted and it is up to the evaluator to ensure all possible options are specified in
the evaluation. Most importantly, guidance on how to interpret results from the study should be provided.
7. CONCLUSION
Faced with the proliferation of e-learning materials, modules, courses and programmes, instructors and
learners need to ascertain which ones will best meet their needs. E-learning developers need to adopt
systematic processes in order to address the requirements of teachers and learners and ensure the quality of
their offerings. The evaluation of e-learning is a crucial issue for researchers attempting to understand the
impact and effectiveness of e-learning in an academic environment.
The proposed theoretical model presents a systematic process that can be used to assist with the effective
evaluation of e-learning. It incorporates Kirkpatrick’s model as a guideline and, most importantly, it adheres
to the conventional software development life cycle. Evaluation is a basic component of good innovation
design and implementation. The use of this model to evaluate e-learning remains subject to the evaluator's judgement.
The theoretical model should serve only as a guide when evaluating e-learning, and it is by no means ideal
for every situation; issues must be considered contextually. For this reason, further testing and analysis of the
model is required to determine its value and contribution to successive initiatives in differing contexts.
Ultimately the proposed theoretical model is intended to act as a platform from which to increase the
successful evaluation of e-learning systems in HEIs.
REFERENCES
Aivazidis, C., Lazaridou, M. and Hellden, G. (2006) A Comparison Between a Traditional and Online Environmental
Program. The Journal of Environmental Education. 37, 4: 45-54.
Beal, T. (2007) A.D.D.I.E. Meets the Kirkpatrick Four: A 3-Act Play. The E-learning Guild Research. 1, 1:1-12.
Garvin-Doxas, K. and Barker, L. (2004) Broadening DLESE. Unpublished Annual Report for DLESE. Colorado: Alliance
for Technology, Learning, and Society Evaluation and Research Group, University of Colorado.
Horton, W. (2001) Evaluating E-Learning. American Society for Training and Development, Alexandria.
Hricko, M. and Howell, S. (2006) Online Assessment and Measurement. Idea Group Inc., London.
Jochems, W., Van Merrienboer, J. and Koper, R. (2004) Integrated E-learning: Implications for Pedagogy, Technology
and Organisation. RoutledgeFalmer, New York.
Khan, B. (2001) Web-Based Training. New Jersey. Educational Technology Publications.
Lanzilotti, R., Ardito, C., Costabile, M. and Angeli, A. (2006) A Systematic Approach to the e-Learning Systems
Evaluation. School of Informatics. 9, 4: 42-53.
Leung, H. (2003) Evaluating the Effectiveness of E-learning. Computer Science Education. 13, 2: 123-136.
Maudsley, G. (2001) What issues are raised by evaluating problem-based undergraduate medical curricula? Making
healthy connections across the literature. Journal of Evaluation in Clinical Practice. 7, 3: 311–324.
Mostert, M. and Hodgkinson-Williams, C. (2006) Using ICTs in Teaching and Learning: A Survey of Academic Staff and
Students at Rhodes University. Unpublished report from Rhodes University. Rhodes University.
Reeves, T. and Hedberg, J. (2003) Interactive Learning Systems Evaluation. New Jersey. Educational Technology Pubs.
Rosenberg, M.J. (2006) Beyond E-Learning: Approaches and Technologies to Enhance Organizational Knowledge,
Learning and Performance. McGraw-Hill.
Stake, S. (1991) The purpose of evaluation. [Online] [Accessed 12 May 2007] Available at
http://www.cerlim.ac.uk/projects/efx/toolkit/evalpurpose.html.
Van Dam, N. (2004) The E-learning Field book. McGraw-Hill Companies, New York.
Voigt, C. and Swatman, P. (2004) Contextual e-learning evaluation: a preliminary framework. Journal of Educational
Media. 29, 3:175-187.