Adam Cohen
CUIN 7381
[Figure: components of instructional evaluation, including assessment/evaluation of learners; evaluation of materials, methods, and environment; and evaluation of accreditation and dissemination potential.]
Instructional Evaluation
Now that we have defined the main terms and what is involved in instructional evaluation, it is
important to discuss some methods of collecting the evaluation data. John Goldie helpfully
divides the different methods into two main groups based on the type of data: quantitative and
qualitative (Goldie, 2006).
Quantitative data can be helpful when specific outcomes or measures can be directly tested.
For example, a learning goal such as an increase in knowledge can be measured in an
experimental pre/post-test design. While this can be a very helpful way to measure learning, it
does have weaknesses. For one, demonstrating learning sits low on the Kirkpatrick hierarchy of
evaluation: because instruction is expected to produce learning, showing that learning occurred
is not, by itself, an impactful outcome to evaluate. This type of evaluation is also limited by the
types of questions asked; while well-written questions can certainly demonstrate deep levels of
knowledge, writing those questions takes significant effort.
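To make the pre/post-test design more concrete, below is a minimal sketch in Python of how such gains might be analyzed. The scores, sample size, and choice of a paired t-test are illustrative assumptions on my part, not methods prescribed by the readings.

    # A minimal sketch, assuming hypothetical scores: paired pre/post-test
    # comparison for the same eight learners. The data and test choice are
    # illustrative assumptions, not taken from the readings.
    from scipy import stats

    pre_scores = [62, 70, 55, 68, 74, 60, 65, 71]   # hypothetical pre-test scores
    post_scores = [75, 82, 70, 77, 85, 72, 79, 80]  # hypothetical post-test scores

    # Paired t-test: did the same learners score higher after instruction?
    t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)

    mean_gain = sum(b - a for a, b in zip(pre_scores, post_scores)) / len(pre_scores)
    print(f"Mean gain: {mean_gain:.1f} points, t = {t_stat:.2f}, p = {p_value:.4f}")

Even a statistically significant gain here would only demonstrate learning, which, as noted above, remains a relatively low rung on the Kirkpatrick hierarchy.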
Another common quantitative approach is via survey. As opposed to knowledge-based tests,
surveys can be adapted and adjusted for a variety of different stakeholders, including learners,
instructors/facilitators, and leaders. For example, if one is trying to evaluate the effect of the
instruction or curriculum on society, a survey could be sent to those who would be affected
downstream of the instruction, such as patients in the case of a medical curriculum. Surveys
could also be given to observers of learners' behaviors as a way of measuring behavioral
change. A third example would be surveying instructors or facilitators about the
benefits and challenges of using the instructional materials provided. Surveys have a few
downsides as well. One common issue is that it can be difficult to get the target population to
fill them out. Additionally, as with pre/post-tests, questions need to be carefully written to
capture valid and usable data. Finally, while open-ended questions are possible, it is often
difficult to obtain in-depth information from survey results alone.
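As a rough illustration of how survey results might be summarized, the Python sketch below computes item-level statistics and a response rate. Every item, score, and count here is invented for demonstration.

    # A minimal sketch, assuming invented Likert-scale responses
    # (1 = strongly disagree, 5 = strongly agree) for two survey items.
    from statistics import mean, median

    invited = 20  # hypothetical number of people who received the survey
    responses = {
        "Materials were clear": [4, 5, 3, 4, 5, 4],
        "Instruction changed my practice": [3, 4, 2, 3, 4, 3],
    }

    for item, scores in responses.items():
        print(f"{item}: mean = {mean(scores):.2f}, median = {median(scores)}, n = {len(scores)}")

    # The low response rate (6 of 20) reflects the common weakness noted above.
    completed = len(next(iter(responses.values())))
    print(f"Response rate: {completed / invited:.0%}")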
Qualitative data can allow evaluators to explore topics and questions in more depth than
quantitative data. Through recorded discussions with single participants (individual
interviews) or multiple participants (focus groups), evaluators can obtain rich descriptions of a
variety of issues related to the instruction. For example, if an evaluator wanted more
information about how students interacted with the different materials presented to them, a
focus group would allow them to probe the positive and negative effects of those materials. As
with surveys, these techniques can be adapted to different groups to answer the evaluator's
different questions. Also as with the prior techniques, questions need to be carefully written to
ensure the right information is obtained. One disadvantage to
qualitative data is that the data can take a significant amount of time to analyze and
understand in the context of the evaluation.
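As one small, hypothetical illustration of that analysis burden, the sketch below tallies thematic codes an evaluator might assign to focus-group excerpts. The excerpts and code names are invented, and real qualitative coding is a far slower, more interpretive process than a simple tally.

    # A minimal sketch, assuming invented focus-group excerpts and codes:
    # counting how often each theme was assigned during qualitative coding.
    from collections import Counter

    coded_excerpts = [
        ("The videos were easy to follow", "materials_positive"),
        ("I never opened the workbook", "materials_negative"),
        ("The cases changed how I document", "behavior_change"),
        ("The videos helped me before the exam", "materials_positive"),
    ]

    theme_counts = Counter(code for _, code in coded_excerpts)
    for theme, count in theme_counts.most_common():
        print(f"{theme}: {count}")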
Prior to this initial foray into instructional evaluation, I had only a vague idea of the place
evaluation has in the instructional design process. These readings have not only helped me
better define key concepts in the evaluation process, but have also helped me better define
how evaluation fits into the larger picture. In context with what I’ve learned in prior degree
coursework and curricular design work, I believe that evaluation, much like learning outcomes,
needs to be thought about, discussed, and defined early in the instructional design process to
allow for better results at the end.
Works Cited
Dick, W., Carey, L., & Carey, J. O. (2015). The Systematic Design of Instruction (8th ed.). Pearson.
Goldie, J. (2006). AMEE Education Guide no. 29: Evaluating educational programmes. Medical
Teacher, 28(3), 210–224. https://doi.org/10.1080/01421590500271282
Kirkpatrick, D. (2009). The Kirkpatrick Model. https://www.kirkpatrickpartners.com/Our-Philosophy/The-Kirkpatrick-Model
Lovato, C., & Wall, D. (2013). Programme evaluation: improving practice, influencing policy and
decision-making. In T. Swanwick (Ed.), Understanding medical education: Evidence, theory
and practice (2nd ed., pp. 385–399). John Wiley & Sons, Ltd.
Wiggins, G., & McTighe, J. (2006). Understanding by Design (2nd ed.). Pearson.