
Inter-rater Agreement and Reliability Discussion

Two measures that can be taken to ensure inter-rater agreement and reliability are, first, to have two administrators observe a teacher during an instructional round, each looking specifically for the same strategy, and then meet afterward to discuss what they saw.
The second measure is to use a video of the lesson for the evaluation, with both evaluators looking for the same targeted areas. The evaluators can then compare their evidence and observations.
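
To make that agreement concrete, it can also be quantified. As a minimal sketch (the post does not name a particular statistic, so Cohen's kappa and the sample ratings below are my own illustration), two administrators' codes for the same lesson segments could be compared like this in Python:

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    # Cohen's kappa: agreement between two raters, corrected for chance.
    n = len(rater_a)
    # Observed agreement: fraction of segments both raters coded the same way.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal category rates.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((counts_a[c] / n) * (counts_b[c] / n)
              for c in counts_a.keys() | counts_b.keys())
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings: two administrators coding ten lesson segments
# as "evidence of engagement" (1) or "no evidence" (0).
a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
b = [1, 0, 0, 1, 0, 1, 1, 1, 1, 1]
print(f"Cohen's kappa: {cohens_kappa(a, b):.2f}")  # about 0.52

A kappa near 1 would mean the two observers are seeing the lesson the same way; a low kappa would signal that more calibration discussion, like the debrief described above, is needed.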
This is interesting to note because I was part of an instructional round team just this past
week, and we focused on one specific part of the lesson. Afterward, we debriefed and discussed
our findings; however, some of the evaluators still had trouble removing
judgment. For example, one evaluator stated that the students were engaged. What is the
evidence of this? That statement is a judgment, and it needed to be supported by evidence.
Using either the Danielson rubric or the NYSUT rubric standardizes the evaluation to some
degree. However, I believe that more training is needed so that evaluators learn to record
evidence rather than judgments.

