
Response Paper #1

Adam Cohen
CUIN 7381

After my introduction to instructional evaluation, I believe evaluation should be considered from
the beginning of the design process. Prior to this reading, I was a firm believer in starting with
the goals in mind, one of the tenets of the Dick and Carey model for instructional design (Dick
et al., 2015). However, as I have reflected upon this information and other experiences, I have
begun to align with a more holistic approach similar to backward design, where, after creating
instructional goals, a designer needs to define what evidence is needed to know those goals are
met (Wiggins & McTighe, 2006). While Wiggins and McTighe refer to that specifically in the
context of assessment, I believe this also applies to the instructional evaluation process as a
whole.
Chris Lovato and David Wall define program evaluation as the “diligent investigation of a
program’s characteristics and merits,” where a program itself can refer to anything from a
single event to an entire curriculum (Lovato & Wall, 2013). This definition recognizes that
instructional evaluation involves a multitude of different aspects about the program, including
internal aspects such as instructional technique, materials, and results, and external aspects
such as the learning environment, impact, and cost. As I reflected on this definition, I realized
that in order for an endeavor of such magnitude to be successful (in that it truly evaluates the
program as a whole), the aspects, scope and goals of the evaluation need to be defined early in
the process of instructional design.
As mentioned above, backward design of instruction and curriculum involves thinking about
assessment (and, as I postulate, evaluation) early in the process. John Goldie distinguishes
between the two processes by positioning assessment as one part of the evaluation process (Goldie,
2006). He describes assessment as collecting data regarding student performance. Wiggins and
McTighe add to this definition by stating that this data needs to be evidence of the learners
meeting the desired results of the instruction or curriculum – the instructional goals (Wiggins &
McTighe, 2006). This evidence can range from learner satisfaction to societal impact
(Kirkpatrick, 2009). In addition to assessment of learner performance, evaluation can also
include gathering data on the teaching methods, the instructors, the materials and resources
provided to learners, the learning environment and culture, whether the program meets a
specific accreditation standard, and whether the current instruction can be disseminated to a
wider audience. Figure 1 illustrates this point graphically by showing that data from multiple
areas, including assessment, are funneled and analyzed together to create the larger
picture of the evaluation. Assessment is certainly a large piece of instructional evaluation, but
not the only consideration.

[Figure: Assessment of Learners; Evaluation of Materials, Methods, Environment; and Evaluation of Accreditation and Dissemination Potential funnel together into Instructional Evaluation.]

Figure 1: Relationship Between Assessment and Evaluation

Now that we have defined the main terms and what is involved in instructional evaluation, it is
important to discuss some methods of collecting the evaluation data. John Goldie helpfully
divides the different methods into two main groups based on the type of data: quantitative and
qualitative (Goldie, 2006).
Quantitative data can be helpful when specific outcomes or measures can be directly tested.
For example, a learning goal such as an increase in knowledge can be measured in an
experimental pre/post-test design. While this can be a very helpful way to measure learning, it
does have weaknesses. For one, demonstrating learning is low on the Kirkpatrick hierarchy of
evaluation. It is expected that instruction will cause learning, so learning alone is not an
impactful outcome to evaluate. This type of evaluation is also limited by the types of questions that are
asked. While well-written questions can certainly demonstrate deep levels of knowledge,
writing those questions can take significant effort.
Another common quantitative approach is via survey. As opposed to knowledge-based tests,
surveys can be adapted and adjusted for a variety of different stakeholders, including learners,
instructors/facilitators, and leaders. For example, if one is trying to evaluate the effect of the
instruction or curriculum on society, a survey could be sent out to those that would be affected
downstream of the instruction, such as patients in the case of a medical curriculum. Surveys
could also be used for observers of the learner’s behaviors as a method of measuring
behavioral changes. A third example would be surveying instructors or facilitators about the
benefits and challenges in using the instructional materials provided. There are a few
downsides to surveys as well. One common issue with surveys is that it can be difficult to
encourage the target population to fill them out. Additionally, similar to pre/post-tests,
questions need to be carefully written to capture valid and usable data. Finally, while you can
ask open-ended questions, it is often difficult to get in-depth information with survey results alone.
Qualitative data can allow evaluators to explore topics and questions in more depth than
quantitative data. Through recorded discussions with single participants (individual interviews) or multiple
participants (focus groups), evaluators can obtain rich descriptions of a variety of issues related
to the instruction. For example, if an evaluator wanted more information about how students
interacted with the materials presented to them, a focus group would allow them to probe the
positive and negative effects of those materials. Similar to surveys,
these techniques can be adapted to multiple different groups to answer different questions
which the evaluator has. Also similar to the prior techniques, questions need to be carefully
written in order to ensure the right information is being obtained. One disadvantage of
qualitative data is that it can take a significant amount of time to analyze and
understand in the context of the evaluation.
Prior to this initial foray into instructional evaluation, I had a vague idea of the place
evaluation has in the instructional design process. These readings have not only helped me
better define key concepts in the evaluation process, but have also helped me better understand
how evaluation fits into the larger picture. In the context of what I’ve learned in prior degree
coursework and curricular design work, I believe that evaluation, much like learning outcomes,
needs to be thought about, discussed and defined early in the instructional design process to
allow for better results at the end.

Works Cited
Dick, W., Carey, L., & Carey, J. O. (2015). The Systematic Design of Instruction (8th ed.). Pearson.
Goldie, J. (2006). AMEE Education Guide no. 29: Evaluating educational programmes. Medical
Teacher, 28(3), 210–224. https://doi.org/10.1080/01421590500271282
Kirkpatrick, D. (2009). The Kirkpatrick Model. https://www.kirkpatrickpartners.com/Our-
Philosophy/The-Kirkpatrick-Model
Lovato, C., & Wall, D. (2013). Programme evaluation: improving practice, influencing policy and
decision-making. In T. Swanwick (Ed.), Understanding medical education: Evidence, theory
and practice (2nd ed., pp. 385–399). John Wiley & Sons, Ltd.
Wiggins, G., & McTighe, J. (2006). Understanding by Design (2nd ed.). Pearson.
