
A HOW-TO GUIDE
Some good practices to facilitate programme success. Adapted from The Institute for Higher Education Policy (2000).

A PROGRAMME EVALUATION
HANDBOOK: STEPS AND GOOD
PRACTICES IN TECHNOLOGY-
ENABLED EDUCATIONAL
ENVIRONMENTS

Shabera S. Jacobs
The University of the West Indies: Open
Campus
EDLM8130 - Evaluating Quality in Technology-
Enabled Educational Environments
Professor: Dr. Jason Marshall
Table of Contents
The Aim of This Handbook
Introduction
The Steps in the Evaluation Process: 1 & 2
What is Programme Evaluation?
Defining a Technology-enabled Learning (TEL) Environment
The Importance of Programme Evaluation in Technology-Enabled Learning Environments
The Evaluation Process: Steps 3, 4 & 5
Unique Features of Technology-Enabled Learning Environments
Important Quality Standards which Programmes Offered in Technology-Enabled Learning Environments Should Comply With
Types of Programme Evaluations
  Different Types of Programme Evaluation Models
    Kirkpatrick's Evaluation Model
    Stufflebeam's Context, Input, Process, Product (CIPP) Model
    The Logic Model
Conclusion
References
Appendix A
Appendix B
ABOUT THIS HANDBOOK
This handbook is a practical 'how to' guide to facilitate your adoption and execution of successful programme evaluations in technology-enabled learning environments. It is helpful to evaluators, educators and anyone who engages in technology-enabled learning environments. After engaging with this handbook and the recommended resources and tools provided, you will gain a comprehensive understanding of how to conduct high-quality programme evaluations in technology-enabled learning environments. Whether you are a novice or an experienced programme evaluator, this handbook provides guidance to facilitate the process of ensuring and assuring that educational programmes are of the highest quality.

As you peruse this handbook, be mindful that "We can't improve when we can't measure" (Bharath Mamidoju). The complete evaluation process requires team effort and meaningful ways to measure quality.
INTRODUCTION
Have you ever been dissatisfied as a participant of an academic programme? If yes, as I assume it is, think of the reasons why you were dissatisfied. Now that you are viewing academic programmes through the eyes of an evaluator, what are some measures that you would take to improve that programme? How would you assure quality?
Quality assurance is achieved through thoughtful planning,
implementation, monitoring and the employment of conducive
evaluation systems. There are various frameworks, such as the PDCA (Plan-Do-Check-Act) cycle, which can be used to facilitate the quality assurance initiative. The handbook also answers the
question: what next? After ensuring and assuring quality,
organizations are required to develop a culture of excellence in
an effort to sustain the quality of the programme. This
handbook was authored based on seven principles adapted
from Frye & Hemmer (2012), illustrated in figure 1.

Figure 1
Seven Principles Adapted from Frye & Hemmer (2012)

The Steps in the Evaluation Process: 1 & 2

The evaluation process includes specific steps such as pre-planning activities, developing the evaluation plan, collecting data, and reporting findings, as reflected in figure 2.
Figure 2
The Steps in the Evaluation Process
Note: Adapted from Yale Center of Teaching and Learning (2017).

The first step in the evaluation process is Planning. Be clear about the purpose of the evaluation, the various persons you will need to engage with and the programme resources. For example, the purpose may be to determine whether technology has been effectively integrated into the learning activities and assessments.

At this stage, document background information about the programme, such as established policies, historical student enrollment and results from previous evaluations. At this stage, also select the evaluation model which will be employed to guide the process. For example, you may document that, having determined this evaluation to be a summative assessment of the current programme, the aim is to improve the effective integration of technology in programme elements; as such, a Logic model will be used to guide the process. The subsequent sub-headings provide detailed explanations of key concepts to facilitate your effort at the planning stage.
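As an aid to this planning step, the following is a minimal sketch (in Python, purely illustrative and not part of the original handbook) of how the Step 1 decisions could be documented as a simple structured record that can be shared with the evaluation team. Every field value shown is a hypothetical example.

# An illustrative way (not prescribed by this handbook) to document the
# Step 1 planning decisions as a simple, shareable record. All field
# values below are hypothetical examples.

evaluation_plan = {
    "purpose": (
        "Determine whether technology has been effectively integrated "
        "into learning activities and assessments"
    ),
    "evaluation_type": "summative",
    "model": "Logic model",
    "stakeholders": ["programme coordinator", "instructors", "students", "IT support"],
    "background": {
        "established_policies": ["ICT integration policy (hypothetical)"],
        "historical_enrollment": {"2021": 180, "2022": 210},
        "previous_evaluations": ["previous mid-cycle review"],
    },
}

# A quick completeness check before moving on to Step 2.
required = ["purpose", "model", "stakeholders", "background"]
missing = [field for field in required if not evaluation_plan.get(field)]
print("Plan is complete" if not missing else f"Still to document: {missing}")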

At Step 2, having selected an evaluation model, use the model to further develop the plan by analyzing the goals and outcomes of the evaluation process. In the input component of a Logic model, for example, note the ICT resources available to instructors and students. Consider the unique features of a TEL environment that must be taken into consideration; details in this regard are provided in subsequent sections. Additionally, be mindful of the quality standards that programmes offered in technology-enabled learning environments should comply with to ensure that the programmes offered are of a high quality. Before moving on to Step 3, let us look at what programme evaluation is.
What is Programme Evaluation?

Think about it! "Every day you engage in random acts of evaluation — multiple times per day, in fact. When you get dressed in the morning, you implicitly ask yourself a set of questions and gather data to answer them" (Robinson S. B., 2022).

Programme evaluation is a systematic quality assurance process of making judgements about academic programmes through research and the presentation of findings and recommendations. It is, literally, applying systematic methods to gather, analyze, interpret and communicate information about a program to comprehend its design, application, conclusions or effects (Robinson S. B., 2022). For example, Rajashekara et al. (2020) used the logic model to guide the planning, implementation, and evaluation of the Leading Healthcare Improvement (LHI) course and were able to inform changes to the course based on the evaluation data collected.

Key learning points: At the programme evaluation (PE) planning stage, identify the issues and determine the questions to ask. You are clearly identifying the purpose for the evaluation.

Frequent evaluation of technology-enabled programmes is required to inform quality improvement initiatives. However, at times evaluation initiatives may be broad in scope and you may need to anticipate the needs of the organization. For instance, the U.S. Government Accountability Office (GAO) performs numerous program evaluations in response to legislative requests which are usually broad in scope. As such, the evaluators' responsibility is to identify the specific issues and determine the appropriate questions to ask (Wholey et al., 2010).

Now, let us take a closer look at the concept of a technology-enabled learning environment.
Defining a Technology-enabled Learning (TEL) Environment

Have you ever participated in, or facilitated, an online learning course? What was the experience like in terms of accessibility, flexibility, open communication, engagement, collaboration, content or materials, content presentation, facilitator's presence and student engagement? These are some of the elements of a successful online learning environment. According to Western Sydney University (2022), technology-enabled learning (TEL) is the use of information and communications technology (ICT) resources to facilitate and enhance student-centred learning. The design of TEL environments starts with the consideration of pedagogical principles. The selection of a programme evaluation model must consider the classroom environment in which the programme is executed, given the distinction between traditional and technology-enabled learning environments. For instance, the traditional learning environment is characterized by a teacher-centred approach, as opposed to the TEL environment, which adopts a student-centred approach. In a technology-enabled learning environment, the facilitator uses technology tools to support the achievement of learning outcomes. A TEL environment should reflect a universal design for teaching and learning in which students with diverse variabilities are accommodated. Programme evaluation is necessary to assess the gap between what is expected of the TEL environment and what actually occurs.

The Importance of Programme Evaluation in Technology-Enabled Learning Environments

"Methods change, but the standards of quality endure" (Barbados Accreditation Council, 2019). According to National Standards for Quality (2022), a quality online programme will recognize the need for quality evaluation. Quality is measured by assessment of evidence which is acquired through data collection. Figure 3 illustrates that evaluation is a significant element in the quality assurance process.

In a blended-learning Food Cost Control class we cover the topic "Popularity Index", which assesses the mix of menu items by classifying the results of the assessment in a matrix based on profitability and popularity. The final outcome (the classification) informs the decision-making process regarding each menu item: for instance, whether we should remove or repurpose a particular menu item. The importance of programme evaluation may be viewed from this perspective. The analysis of inputs and activities produces results which are used to inform the decision-making process regarding programme elements, the overall programme and the organization. It helps institutions to take corrective action where deviations from standards are observed. For instance, you may assess whether the use of ICT resources has become the focus of an online learning environment, as opposed to being integrated in support of achieving the learning objectives.
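To make the analogy concrete, here is a minimal sketch of how a popularity index calculation can feed a keep, reprice, repurpose or remove decision. It assumes one common menu-engineering convention (a popularity benchmark of 70 percent of an equal sales share and the Star/Plowhorse/Puzzle/Dog labels); the menu items and figures are invented for illustration.

# A minimal sketch (not from the handbook) of the "Popularity Index" analogy:
# each menu item's share of total sales is compared against a popularity
# benchmark and its contribution margin against the average, producing a
# two-by-two classification that informs keep/remove/repurpose decisions.
# The 70% benchmark and category names follow one common menu-engineering
# convention; treat them as illustrative assumptions.

from dataclasses import dataclass

@dataclass
class MenuItem:
    name: str
    portions_sold: int
    contribution_margin: float  # selling price minus food cost, per portion

def classify(items: list[MenuItem]) -> dict[str, str]:
    total_sold = sum(i.portions_sold for i in items)
    avg_margin = sum(i.contribution_margin for i in items) / len(items)
    popularity_benchmark = 0.70 * (1 / len(items))  # 70% of an equal share

    results = {}
    for item in items:
        popularity_index = item.portions_sold / total_sold
        popular = popularity_index >= popularity_benchmark
        profitable = item.contribution_margin >= avg_margin
        if popular and profitable:
            results[item.name] = "Star (retain and promote)"
        elif popular and not profitable:
            results[item.name] = "Plowhorse (reprice or re-cost)"
        elif not popular and profitable:
            results[item.name] = "Puzzle (reposition or repurpose)"
        else:
            results[item.name] = "Dog (candidate for removal)"
    return results

if __name__ == "__main__":
    menu = [
        MenuItem("Grilled fish", 120, 9.50),
        MenuItem("Veggie wrap", 40, 4.25),
        MenuItem("Steak", 60, 12.00),
    ]
    for name, verdict in classify(menu).items():
        print(f"{name}: {verdict}")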

Figure 3
3 Key Elements in the Quality Assurance Process
Note: Adapted from UNESCO IESALC (2021).


The Evaluation Process: Steps 3, 4 & 5

The third step in the evaluation process is to design the evaluation plan with details regarding the scope of the evaluation and the data collection methods. Think of the various participants who are impacted by the programme, the activities that are employed in the process and the actual outcomes that result from those activities. Continuing with our Logic model example, it may be determined whether ICT integration has met established institutional integration standards. Determine data collection methods such as review of existing data, surveys or interviews. At this stage, also determine the metrics to be used for data collection and analysis, for example, content delivery methods using technology.

In Step 4, conduct the evaluation by collecting and analyzing the data. You will choose whether the research aspect of the evaluation should be qualitative or quantitative. Analyze the data based on the research type and submit the report to the stakeholders.

The final step is to revise the programme, facilitating improvement and the maintenance of a culture of excellence within the online learning environment. For example, using the Logic model, you may detail the short-term, intermediate and long-term impact as a result of the evaluation. For instance, a short-term impact may be training for instructors to improve technology-enabled content delivery in an effort to facilitate an equitable learning environment by providing multiple means of representation.

To further assist with the above steps, the following sections provide
details regarding:
Unique features of technology-enabled learning environments that
must be taken into consideration when conducting a programme
evaluation;
Important quality standards that programmes offered in
technology-enabled learning environments should comply with to
ensure that the programmes offered are of a high quality;
The types of evaluations and evaluation models.
Unique Features of Technology-Enabled Learning Environments that Must be Taken into Consideration When Conducting a Programme Evaluation

Quality assurance strategies for TEL can be established based on the guidelines prepared by the Organization of Ibero-American States for Education, Science, and Culture (OEI). The guidelines identify four elements of focus for higher education institutions seeking to capitalize on technology innovations. The first is well-defined student profiles, which facilitate interventions to reinforce the use of technology tools, access to the learning environment and technology devices, as well as adaptive technology and communication mechanisms. The second feature is continuous training for staff, given the continuous growth of technology. The third feature is using evaluations to verify the authorship of tests. The fourth feature is the proper installation and management of ICT resources to prevent data loss and security breaches (UNESCO IESALC, 2021).
Important Quality Standards which Programmes Offered in
Technology-Enabled Learning Environments Should Comply With
to Ensure that The Programmes Offered are of a High Quality

Various organizations, such as accreditation bodies, have established standards for technology-enabled programmes. For instance, the Barbados Accreditation Council (BAC) sets quality assurance guidelines specifically for distance education. The BAC's quality assurance protocols for distance learning are classified under eight primary headings: institutional mission; governance and administration; curriculum and instruction; learning and information resources; faculty support; learner support; evaluation and assessment; and quality assurance and quality enhancement. To ensure that the needs of learners and other stakeholders are satisfied, the quality assurance and quality enhancement protocol includes a practical integrated framework to ensure a clear cycle of planning, implementing, monitoring, reflection and action (Barbados Accreditation Council, 2019).
TYPES OF PROGRAMME EVALUATIONS

Figure 4
Types of Evaluation

Programme evaluation may be classified based on whether the research method is quantitative or qualitative. However, it is the purpose of the evaluation which determines the research method.

The types of evaluation may also be based on whether the evaluation is taking place during the programme (formative) or at the end of the programme (summative).

Evaluation may also be classified as situation analysis (of both strengths and needs); building program theory; process evaluation; performance monitoring; impact evaluation; and economic evaluation (Wholey et al., 2010).

Key learning points: At the planning stage, determine what type of evaluation is required.

Different Types of Programme Evaluation Models


Various types of evaluation models include quasi-experimental models, Kirkpatrick's four-level model, the Logic Model, and the CIPP (Context/Input/Process/Product) model. Three of these four models are explained in subsequent paragraphs, and supporting examples are presented to facilitate your understanding of the models and inform your decision to select a suitable model.
Kirkpatrick's evaluation model

This model, which was developed by Donald Kirkpatrick in the 1950s, has four evaluation levels (Reaction, Learning, Behavior, Results) and has been revised over the years, with its most recent iteration presented in 2010 (Yale Center of Teaching and Learning, 2017).

Level 1 measures participants' reaction to the programme. You may ask questions which address how engaged the participants were throughout the programme and determine participants' view of the programme. Keep in mind that the objectives, activities and assessments in TEL environments should support multiple means of representation, engagement and expression.

At Level 2, assess whether participants understood the programme by comparing the intended programme content with what participants actually learned (Yale Center of Teaching and Learning, 2017). For instance, ask questions to assess whether the participants increased in knowledge, skills or experience, perhaps using a survey tool such as Survey Monkey to collect data. By way of example, Kirkpatrick's four-level model was adapted to evaluate training programmes for teachers, and the research concluded that the adapted Kirkpatrick evaluation model was effective in assessing the educational training for head teachers (Alsalamah & Callinan, 2021).

Level 3 assesses whether participants are using the acquired skills, knowledge or experience. Key questions to be answered at this level include whether there were any noticeable behavioral changes as a result of participating in the programme.

Level 4 analyses the intended outcomes of the programme and the organization as a whole. A possible question to ask at this stage is whether the intended goals of the programme were achieved.
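As one illustration of how a Level 2 judgement could be supported with data, the sketch below compares hypothetical pre- and post-programme assessment scores and reports the average learning gain. The participant IDs and scores are made up; the approach simply mirrors the pre- and post-training comparison described above.

# A small sketch, with made-up scores, of one way to support a Level 2
# (Learning) judgement: compare each participant's pre- and post-programme
# assessment results and report the average gain. All data are illustrative.

pre_scores  = {"P01": 55, "P02": 60, "P03": 48, "P04": 72}
post_scores = {"P01": 78, "P02": 74, "P03": 66, "P04": 85}

gains = {pid: post_scores[pid] - pre_scores[pid] for pid in pre_scores}
average_gain = sum(gains.values()) / len(gains)

print("Individual gains:", gains)
print(f"Average learning gain: {average_gain:.1f} points")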

This model can be implemented prior to the start of the programme, during the programme, or even after the programme. Examples of the resources and approaches that may be used at each level are provided in Appendix A. Additionally, figure 5 illustrates the New World Kirkpatrick Model, which enables the evaluator to start the evaluation process at any one of the four levels.


Figure 5
The New World Kirkpatrick Model

Note: Adapted from Kirkpatrick & Kirkpatrick (2021).


Stufflebeam's Context, Input, Process, Product (CIPP) Model

Daniel Stufflebeam created the CIPP model in the 1960s (Yale University, 2021). It is used in the decision-making process to improve program effectiveness or plan for future programmes by methodically gathering information to detect content and delivery strengths and limitations (Yale University, 2021). The model prioritizes continuous improvement by focusing on four aspects of a program: the overall goals or mission (Context evaluation); the plans and resources (Input evaluation); the activities or components (Process evaluation); and the outcomes or objectives (Product evaluation) (Yale University, 2021).

The CIPP evaluation process has four core values, which are illustrated in figure 6. At the Context evaluation stage, evaluators assess the ICT and other resources as well as the current circumstances of the program. At this stage, you may assess the primary goals and explore background information and the cultural setting (Yale University, 2021).

At the Input evaluation stage, key stakeholders are identified, the programme budget is evaluated and information regarding planning and implementation is gathered (Yale University, 2021). Examples of information to be gathered include human resources and deadlines.

The Process evaluation stage is the stage of action, where there is continuous improvement assessment and feedback. At this stage, measure completed programme elements, the level of satisfaction and the changes to be made (Yale University, 2021).

The programme outcome is determined at the Product evaluation stage, where the evaluator may ask questions regarding the sustainability and effectiveness of the programme. Aziz et al. (2018) employed Stufflebeam's CIPP evaluation model (1983) to evaluate the educational quality at schools. The research concluded that the schools provided quality education through various means such as advanced technology and effective communication.
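One convenient (though entirely optional) way to operationalize the CIPP model is to keep a checklist of guiding questions per stage, as in the illustrative sketch below. The questions are paraphrased examples consistent with the stage descriptions above, not an official instrument.

# An illustrative (not authoritative) way to organise CIPP guiding questions
# so each stage of the evaluation has its own checklist. The questions are
# paraphrased examples consistent with the stage descriptions above.

cipp_questions = {
    "Context": [
        "What are the programme's primary goals?",
        "What ICT resources and background circumstances shape the programme?",
    ],
    "Input": [
        "Who are the key stakeholders?",
        "Are the budget, staffing and deadlines adequate for the plan?",
    ],
    "Process": [
        "Which programme elements have been completed?",
        "How satisfied are participants, and what changes are needed?",
    ],
    "Product": [
        "Were the intended outcomes achieved?",
        "Is the programme sustainable and effective?",
    ],
}

for stage, questions in cipp_questions.items():
    print(f"{stage} evaluation:")
    for q in questions:
        print(f"  - {q}")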
Figure 6
The CIPP Model

Note: Adapted from Yale University (2021).

The Logic Model


A logic model is a planning tool which gives visual expression to
programme inputs, activities and intended outcomes. Logic models
basically have six categories: inputs, activities, outputs, short-term
outcomes, intermediate outcomes, and long-term outcomes
(Robinson A., 2018). These categories are presented in figure 7. See
Appendix B, Figure 1B to facilitate creating a Logic model.

Logic models are useful resources to connect program processes to program outcomes, and they effectively facilitate outcome evaluation, as they illustrate how program components cause behavior change (Robinson A., 2018). For instance, a logic model can be used to determine whether there has been any variance between the actual and intended outcomes of a technology-enabled programme.
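The sketch below shows one possible way to lay out the six logic-model categories for a technology-enabled programme and to check for variance between intended and observed short-term outcomes. All entries are hypothetical examples, not taken from the handbook.

# A minimal sketch of a logic model for a technology-enabled programme,
# using the six categories named above, plus a simple check of variance
# between intended and observed short-term outcomes. All entries are
# hypothetical examples.

logic_model = {
    "inputs": ["LMS licences", "ICT-trained instructors", "student devices"],
    "activities": ["online tutorials", "interactive assessments"],
    "outputs": ["90% of modules delivered online"],
    "short_term_outcomes": ["instructors apply multiple means of representation"],
    "intermediate_outcomes": ["improved student engagement scores"],
    "long_term_outcomes": ["sustained culture of quality in the TEL environment"],
}

intended_short_term = set(logic_model["short_term_outcomes"])
observed_short_term = {"instructors apply multiple means of representation"}

variance = intended_short_term - observed_short_term
print("No variance in short-term outcomes" if not variance
      else f"Outcomes not yet observed: {variance}")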
Figure 7
The Logic Model

Note: Adapted from Eastern Washington University (2022).

CONCLUSION

Effective programme evaluations will include pre-evaluation activities such as background checks of the programme to be evaluated, as well as of the institution. The process includes a series of steps: planning, conducting the evaluation, producing a report and recommendations for improvement, and establishing standards to ensure sustained quality assurance through cultivating a culture of quality within the organization as a whole. There are several theoretical frameworks, inclusive of the three covered by this handbook, which can be employed to facilitate the evaluation process in a technology-enabled learning environment. Knowledge is forever changing, especially in regards to technological advancements; evaluation of technology-enabled programmes should therefore be carried out consistently.
REFERENCES
Alsalamah, A., & Callinan, C. (2021). Adaptation of Kirkpatrick's Four-Level Model of Training Criteria to Evaluate Training Programmes for Head Teachers. Education Sciences, 11(3).

Barbados Accreditation Council. (2019). Code of Practice for the Assurance of Educational Quality and Standards in Distance Education. Barbados Accreditation Council. Retrieved from https://bac.gov.bb/wp-content/uploads/2021/05/Code-of-Practice-for-Distance-Education-2019.pdf

Eastern Washington University. (2022). Logic Models. Retrieved from Office of Grant and Research Development: https://inside.ewu.edu/ogrd/pre-award/proposal-development/writing-resources/logic-models/

Frye, A. W., & Hemmer, P. A. (2012). Program evaluation models and related theories: AMEE Guide No. 67. Medical Teacher, 34(5).

Kirkpatrick, J., & Kirkpatrick, W. K. (2021). An Introduction to The New World Kirkpatrick Model.

Rajashekara, S., Gregory, M. E., Godwin, K. M., Naik, A. D., Campbell, C. M., Gregory, M. E., . . . Engebretson, A. (2020). Using a Logic Model to Design and Evaluate a Quality Improvement Leadership Course. Academic Medicine, 95(8), 1201-1206.

Robinson, A. (2018). Using Logic Models for Program Planning and Evaluation. Retrieved from Creative Research Solutions: https://creativeresearchsolutions.com/using-logic-models-for-program-planning-and-evaluation/

The Institute for Higher Education Policy. (2000). Benchmarks For Success In Internet-Based Distance Education. Retrieved from https://www.ihep.org/wp-content/uploads/2014/05/uploads_docs_pubs_qualityontheline.pdf

UNESCO IESALC. (2021). Applying quality standards to strengthen blended and distance learning program: Content proposal for a Policy Brief. Retrieved from UNESCO: https://www.iesalc.unesco.org/wp-content/uploads/2021/06/Applying-quality-standards-strengthen-blended-distance-learning-program-PBC.pdf

Western Sydney University. (2022). Technology-Enabled Learning. Retrieved from Western Sydney University: https://www.westernsydney.edu.au/tel#:~:text=Technology%2DEnabled%20Learning%20(TEL),the%20consideration%20of%20pedagogical%20principles.

Wholey, J. S., Hatry, H. P., & Newcomer, K. E. (2010). Handbook of Practical Program Evaluation. San Francisco, CA: John Wiley & Sons.

Yale Center of Teaching and Learning. (2017). Kirkpatrick Model. Retrieved from Poorvu Center for Teaching and Learning: https://poorvucenter.yale.edu/Kirkpatrick

Yale University. (2021). CIPP Model. Retrieved from Poorvu Center for Teaching and Learning: https://poorvucenter.yale.edu/CIPP
Appendix A
Examples of Resources and Techniques for the Kirkpatrick Evaluation Model

Level 1 - Reaction. In what ways did participants like a particular program/training? How do participants feel?
1. Online assessments.
2. Interviews.
3. Find out whether participants are happy with the technology tools.

Level 2 - Learning. New skills/knowledge/attitudes? What was learned? What was not learned?
1. Measurement and evaluation is simple and straightforward for any group size.
2. Use of a control group to compare.
3. Exams, interviews or assessments prior to and immediately after the training.
4. A distinct, clear scoring process needs to be determined in order to reduce the possibility of inconsistent evaluation reports.
5. Interview, printed, or electronic type examinations can be carried out.

Level 3 - Transfer. Was the learning applied by the attendees?
1. This can be carried out through observations and interviews.
2. Evaluations have to be subtle until change is noticeable, after which a more thorough examination tool can be used.
3. Were the learned knowledge and gained skills used?
4. Surveys and close observation can be used to evaluate significant change, the importance of change, and how long this change will last.

Level 4 - Results. What are the final results of the training?
1. Intended results should be discussed with the participants.
2. Allow enough time to measure.
3. Improper observations and the inability to make a connection with the training input type will make it harder to see how the training program has made a difference in the workplace.

Note: Adapted from Kurt (2016).


APPENDIX B

How to Create a Logic Model

Follow this link to create a Logic model: http://toolkit.pellinstitute.org/evaluation-guide/plan-budget/use-a-logic-model-in-evaluation/
