Inter-rater reliability is a way to measure the level of agreement between multiple raters or judges. It addresses how uniformly a rating scheme is applied. Inter-rater reliability can be evaluated using a number of different statistics.
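
For illustration only, one commonly used statistic is Cohen's kappa, which corrects raw agreement for the agreement expected by chance. The minimal Python sketch below assumes scikit-learn is installed and uses hypothetical labels, not this study's data:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical judgments from two raters on the same six items
# (illustrative only; not this study's actual data).
rater_a = ["Agree", "Agree", "Disagree", "Agree", "Disagree", "Agree"]
rater_b = ["Agree", "Agree", "Agree", "Agree", "Disagree", "Agree"]

# Cohen's kappa measures agreement between two raters while
# correcting for the agreement expected by chance alone.
kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.2f}")  # 0.57 for these labels
```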
This process is needed to make sure the collected data are authentic and correct. The collected data are checked against observations or prior research findings, or else reviewed with the assistance of people familiar with the field of study.
In this study, triangulation was used to establish the validity and reliability of the qualitative data. The data from this study were shared with two of my classmates in order to test their dependability. The table below shows the explanation of each theme and the agreement between the two raters.
Themes | Explanation | Agreement (Rater A) | Agreement (Rater B)
Q1: Teachers' perception | There are positive and negative perceptions of online teaching and learning, depending on the situation faced by the teacher. | Agree | Agree
Q13: | Guiding the students to master any topic in which they are weak. | Agree | Agree
The data were then analysed to determine validity and reliability using percent agreement: if the raters agree on every item, the IRR is 1 (100%), and if they disagree on every item, the IRR is 0 (0%). The resulting agreement was 88%, indicating that the data are valid.
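
A minimal sketch of this percent-agreement calculation, assuming each rater's judgments are stored as Python lists; the counts below are illustrative (15 agreements out of 17 themes gives roughly 88%), not the study's actual tally:

```python
# Percent agreement: IRR = (number of matching judgments) / (total items),
# so full agreement gives 1 (100%) and full disagreement gives 0 (0%).
def percent_agreement(rater_a, rater_b):
    if len(rater_a) != len(rater_b):
        raise ValueError("Both raters must judge the same number of items")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

# Illustrative counts only: 15 matching judgments out of 17 themes.
rater_a = ["Agree"] * 17
rater_b = ["Agree"] * 15 + ["Disagree"] * 2
print(f"IRR: {percent_agreement(rater_a, rater_b):.0%}")  # IRR: 88%
```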