GINA J. RYAN, LEISA L. MARSHALL & KALEN PORTER
Mercer University College of Pharmacy and Health Sciences, USA
Background
Class participation promotes active learning and is an important component
of seminar style classes. Yet, assigning a class participation grade is very
complicated because of its subjective nature. Assigning grades in the
extreme high or low range is easier than assessing those students in the
middle range. The students who consistently contribute to class discussion
or those who contribute little are easy to identify. Several scales have been
published to assist faculty members in assessing class participation (Bean
and Peterson, 1998; Craven and Hogan, 2001; Melvin, 1988). The use of
published scales may assist in the process, but assigning a class participation
grade remains difficult to objectify.
ACTIVE LEARNING IN HIGHER EDUCATION 8(1)
RYAN ET AL.: EVALUATION OF CLASS PARTICIPATION
Method
In the USA, two years of basic science college level pre-pharmacy course
work must be completed in order to be eligible for admission to pharmacy
school. Pharmacy school is a four-year program, and the Doctor of
Pharmacy is the degree earned. Pharmacy students generally take all
required didactic courses together during the first three years. Mercer
University College of Pharmacy and Health Sciences has approximately
140 students per class. In addition to required courses, second and third
year professional students take one elective class per 16-week term.
In the spring of 2005, second and third year professional students enrolled
in Diabetes Care (N = 30), Pediatric Pharmacotherapy (N = 40) and
Women's Health (N = 34) elective courses assessed the classroom participation
of their classmates and themselves. The demographics of the student
subjects are listed in Table 1. Each course was taught by a different faculty
member. In these seminar-style didactic courses, class participation
accounts for 20–25 per cent of the final grade. Each course met once per week.
Table 1 Demographics of the student subjects

Number                                   96
Age (years, mean ± SD)                   24.5 ± 3.3
Female, n (%)                            89 (92.7)
Male, n (%)                              7 (7.3)
3rd professional year standing, n (%)    50 (51.5)
2nd professional year standing, n (%)    46 (49.5)
Race, n (%)
  Caucasian                              63 (65.6)
  African American                       14 (14.6)
  Other                                  19 (19.8)
Average GPA (4-point scale, mean ± SD)   3.49 ± 0.43
Statistical analysis
Bias and precision were calculated to determine the ability of a peer grade
to predict a faculty grade. Bias is the average difference between a peer
grade and the faculty grade. Precision is the measure of random error of
the differences between the peer grade and the faculty grade. Precision is
calculated using a coefficient of variation which is the standard deviation
divided by the mean and expressed as a percentage. A correlation coefficient
was not used because a high degree of correlation does not indicate equity;
a high correlation therefore does not necessarily indicate accuracy or
validity. Also, when there is a small degree of variability, Pearson's R
is artificially suppressed (Bean and Peterson, 1998; Sheiner and Beal,
1981). Despite these facts, most of the literature that examines the
relationship between peer, faculty, and self-evaluation uses correlation as an
indicator of accuracy or equity. Moreover, in a meta-analysis of peer
assessment studies, Falchikov and Goldfinch (2000) recommend not using
correlation coefficients to determine the agreement between faculty and
peer evaluations. The association between peer, faculty, and self-evaluations
and cumulative GPA was determined by calculating Pearson R correlation
coefficients. A paired sample Student's t-test was used to determine the
statistical significance of the differences between peer, self-, and faculty
evaluations.
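As a concrete illustration (ours, not the authors' actual code; the function name and grade data are hypothetical), the bias and precision measures described above can be sketched as follows. We read the text literally: bias is the mean of the peer-minus-faculty differences, and precision is the coefficient of variation of those differences, that is, their standard deviation divided by their mean, expressed as a percentage.

```python
import statistics

def bias_and_precision(peer_grades, faculty_grades):
    """Bias: mean of the (peer - faculty) grade differences.

    Precision: coefficient of variation of those differences,
    i.e. their standard deviation divided by their mean,
    expressed as a percentage (per the description in the text).
    """
    diffs = [p - f for p, f in zip(peer_grades, faculty_grades)]
    bias = statistics.mean(diffs)
    cv = statistics.stdev(diffs) / bias * 100
    return bias, cv

# Hypothetical average grades for five students on the 4-point scale
peer = [3.0, 3.5, 2.5, 3.0, 3.5]
faculty = [3.5, 4.0, 3.0, 3.0, 4.0]
bias, cv = bias_and_precision(peer, faculty)
```

With these illustrative numbers the bias is negative, mirroring the pattern reported in the Results, where faculty grades tended to run higher than peer grades.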
Results
Peer versus faculty
Ninety-six students (95% of eligible students) participated in this study and
reported a total of 8881 peer evaluations and 272 self-evaluations.
At week five, 89 (92.7%) students completed evaluations, and 91 (94.8%)
and 92 (95.8%) students completed the evaluations at weeks 10 and 16,
respectively. The results of average peer, faculty, and self-evaluations are
presented in Figure 1. The distribution of faculty grades is illustrated in
Figure 2. Table 4 contains the bias and precision values of peer and self-
evaluations versus faculty evaluations. Faculty grades tended to be higher
than peer grades (p < 0.05). Self-evaluation grades were higher than faculty
grades (p < 0.05). The majority of students (66.7%) graded themselves as
category four, and 31.3 per cent reported a three for self-evaluation. Only 3.1
per cent of students graded themselves as category two. There was no
correlation between GPA and faculty evaluation (Pearson R = 0.10, p = 0.35),
peer evaluation (Pearson R = 0.11, p = 0.27), or self-evaluation (Pearson
R = 0.005, p = 0.95). The results of the student-opinion questionnaire are
included in Figure 3. There was no correlation between a student's GPA and
his/her opinion of this project (Spearman R = 0.02, p = 0.87).
Figure 1 Average faculty, peer and self-evaluation grades at weeks 5, 10 and 16
Figure 2 Distribution of faculty class participation grades
Discussion
Peer evaluations of class participation, using a forced-normal distribution
pattern, are not predictive of faculty evaluations of class participation. This
study collected over 8000 scores, a sample large enough to provide good
reliability. Despite the lack of predictive ability and the statistical
difference, no average peer grade was more than one point higher than a
faculty grade, and no final class participation grades were adjusted. This suggests
that even though the scores were statistically different, the difference
between the grades was not academically significant, that is, it did not
result in a change in a student’s grade. In these three courses, the average
faculty grades were higher than peer grades and the grades did not
conform to a bell-shaped distribution pattern (Figure 2).
56
049-062 ALH-074049.qxd 31/1/07 5:42 PM Page 57
RYA N E T A L . : E VA L U AT I O N O F C L A S S P A R T I C I P AT I O N
Student opinion
The student-opinion questionnaire results (Figure 3) indicated that stu-
dents did not like forced distribution grading. More than 80 per cent of
students reported they would have assigned higher grades if possible.
Although not specifically asked, the fact that most students wanted to give
higher grades and did not want to give lower grades indicates that most
students wanted to give more fours and fewer ones. Most students (80%)
thought the peer evaluation process was unfair and disagreed with their
average peer grade. This result is consistent with the fact that most self-
evaluation grades were higher than peer grades. Subjects (80%) reported
that evaluating their peers was difficult and did not recommend using this
system in future courses. In two out of the ten written comments, students
reported that they did not know their classmates well enough to evaluate
their performance. Five students stated that they thought their peers did
not assign grades based on participation, but instead gave higher grades to
their friends. This problem of ‘friendship bias’ in peer assessment has been
previously described (Love, 1981; Topping, 1998). The literature suggests
that allowing the students to be involved with the creation of the evalua-
tion criteria may improve student understanding and acceptance of the
assigned grades (Dochy et al., 1999). However, our students were not able
to contribute to the development of the grading criteria because our insti-
tution requires documentation of all grading criteria in the course syllabi
no later than the first day of class.
GPA, peer evaluations and faculty evaluations
There was no correlation between GPA and either peer or self-evaluations
of class participation. This may have been because there was a very small
degree of variability in the assigned grades, which, as noted in the
statistical analysis, artificially suppresses the correlation coefficient.
Limitations
The validity and reliability of the results of this research could have been
improved if students had been involved in the formulation of the grading
criteria because the students may have understood the scoring process
better (Pond et al., 1995). There is evidence that these students did not
completely understand this process. For example, most students (66.7%)
gave themselves a four, which may indicate they did not understand that
the self-evaluation grade could never affect their final grade. The class sizes
in this study were also large (30–40 students) for a seminar style course.
This may have led to decreased discrimination in the peer grading process.
Since it is the faculty member's job to learn student names and assess
student performance, we may have been more diligent in our
observations of class participation. Students, however, may not have been
as attentive in noting the class participation of others; and, therefore, might
have had less recall when assigning the grades. Smaller classes would have
meant that students could have known others better and would have had
fewer grades to assign. Hopefully, this would have made the task of assign-
ing grades less arduous, and students would have assigned grades with
greater discrimination.
Another obvious limitation was that the instructors were not forced to
grade in a normal distribution, thus increasing the chance of an observed
difference. Also, since our students had taken the vast majority of their
courses together, there was an increased risk of friendship bias. This was
a point that five students raised in the essay section of their opinion survey.
The majority (92.7%) of our participants were female, and the results may
have differed in a population with a different male to female ratio. Finally,
another limitation of this trial was that two of the electives were pass/fail and
an 85 per cent was required to receive a passing grade. The forced distribu-
tion of peer grades required that half of the students be assigned a grade
of less than 80 per cent for class participation. This discrepancy may be one
reason why the faculty grades were higher than peer grades.
Future research
Several points should be considered by faculty conducting future research
on the utility of peer evaluations for validating class participation grades.
Although use of an online survey made data collection easier, there were
technical difficulties which resulted in many students having to re-enter
data for the second session. There was some concern that student
participation in the online survey might decline throughout the semester.
However, 90–93 per cent of students completed the online surveys at each
of the three time points. No students complained about the technical dif-
ficulty in the opinion survey.
Leniency must be prevented and the problem of students assigning
grades based on non-performance related issues must be addressed. Rather
than forcing the students to grade in a normal distribution pattern, perhaps
students could grade in a distribution pattern derived from the faculty eval-
uations. Since grading class participation of each classmate is laborious,
giving students some incentive for accuracy in grading may improve the
quality of their evaluations and prevent bias. One could consider giving
students extra credit points if their evaluations fall within a certain range
of faculty or average peer evaluations. In addition, future researchers may
wish to schedule the first peer evaluation of class participation later in the
semester to allow students to become more familiar with each other prior
to the first evaluation. Finally, having a limit on the number of peers stu-
dents evaluate may improve quality of peer assessment (Falchikov and
Goldfinch, 2000). If the class size is large, limiting the number of peers a
student had to evaluate, rather than having all students evaluate all their
peers, may be a way to reduce the number of peer evaluations.
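A minimal sketch of that last suggestion follows (purely illustrative and not part of the original study; the function name, the cap k, and the fixed seed are our own choices). Each student is randomly assigned a limited subset of peers to evaluate rather than the whole class:

```python
import random

def assign_peer_evaluators(students, k=10, seed=42):
    """Assign each student k randomly chosen peers to evaluate.

    No student evaluates him- or herself, and k caps the grading
    workload regardless of class size.
    """
    rng = random.Random(seed)  # fixed seed for a reproducible roster
    assignments = {}
    for student in students:
        peers = [p for p in students if p != student]
        assignments[student] = rng.sample(peers, min(k, len(peers)))
    return assignments

roster = [f"student_{i:02d}" for i in range(1, 35)]  # e.g. a 34-person elective
assignments = assign_peer_evaluators(roster, k=10)
```

One trade-off of this simple scheme is that it does not guarantee every student is evaluated by the same number of peers; balancing that would require rotating evaluators through a fixed schedule rather than sampling independently for each student.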
To conclude, although peer evaluation in a forced-normal distribution pat-
tern was not statistically predictive of faculty evaluation of class participation,
peer grades were not considerably different from faculty grades. Self-
evaluations using this system were inflated relative to faculty evaluations. And
finally, students did not like grading their peers using these methods.
Acknowledgements
We thank Dottie Harris for her technical support with creating and administering the
online survey. We also appreciate Dr Richard Jackson’s assistance in the design of this
study.
References
BEAN, J. C. & PETERSON, D. (1998) ‘Grading Classroom Participation’, New Directions for
Teaching and Learning 74(1): 33–40.
BRINDLEY, C. & SCOFFIELD, S. (1998) ‘Peer Assessment in Undergraduate Programmes’,
Teaching in Higher Education 3(1): 79–89.
BURCHFIELD, C. M. & SAPPINGTON, J. (1999) ‘Participation in Classroom Discussion’,
Teaching of Psychology 26(4): 290–1.
CHENG, W. & WARREN, M. (1997) ‘Having Second Thoughts: Student Perception Before
and After Peer Assessment Exercise’, Studies in Higher Education 22(2): 233–9.
CRAVEN, J. A., III & HOGAN, T. (2001) ‘Assessing Student Participation in the
Classroom’, Science Scope 25(1): 36–40.
DOCHY, F., SEGERS, M. & SLUIJSMANS, D. (1999) ‘The Use of Self-, Peer and
Co-assessment in Higher Education: A Review’, Studies in Higher Education 24(3):
331–50.
FALCHIKOV, N. (1995) ‘Peer Feedback Marking: Developing Peer Assessment’,
Innovation in Education and Training 32(2): 175–87.
FALCHIKOV, N. & GOLDFINCH, J. (2000) ‘Student Peer Assessment in Higher
Education: A Meta-analysis Comparing Peer and Teacher Marks’, Review of Educational
Research 70(3): 287–322.
GOPINATH, C. (1999) ‘Alternatives to Instructor Assessment of Class Participation’,
Journal of Education for Business 75(1): 10–14.
LINDBLOM-YLANNE, S., PIHLAJAMAKI, H. & KOTKAS, T. (2006) ‘Self, Peer, and Teacher
Assessment of Student Essays’, Active Learning in Higher Education 7(1): 51–62.
LOVE, K. (1981) ‘Comparison of Peer Assessment Methods: Reliability, Validity,
Friendship Bias and User Reaction’, Journal of Applied Psychology 66(4): 451–7.
MELVIN, K. (1988) ‘Rating Class Participation: The Prof/Peer Method’, Teaching of
Psychology 15(3): 137–9.
ORSMOND, P. & MERRY, S. (1996) ‘The Importance of Marking Criteria in the Use of
Peer Assessment’, Assessment & Evaluation in Higher Education 21(3): 239–49.
PERSONS, O. (1998) ‘Factors Influencing Students’ Peer Evaluation in Cooperative
Learning’, Journal of Education for Business 73(4): 225–30.
POND, K., UL-HAQ, R. & WADE, W. (1995) ‘Peer Review: A Precursor to Peer
Assessment’, Innovations in Education & Teaching International 32(4): 314–23.
SHEINER, L. & BEAL, S. (1981) ‘Some Suggestions for Measuring Predictive
Performance’, Journal of Pharmacokinetics and Pharmacodynamics 9(4): 503–12.
TOPPING, K. (1998) ‘Peer Assessment Between Students in Colleges and Universities’,
Review of Educational Research 68(3): 249–76.
Biographical notes
GINA J. RYAN is a graduate of the University of California’s San Francisco School of
Pharmacy, where she also completed her general practice residency. Currently, she is
a Clinical Assistant Professor at Mercer University College of Pharmacy and Health
Sciences. In her academic position, she teaches endocrinology and oversees a clinical
pharmacy practice at Grady Health System Diabetes Clinic. Her research interests
include pedagogical research in pharmacy and improving the uses and techniques of
active learning. [email: ryan_gj@mercer.edu]
DR LEISA L. MARSHALL is Clinical Associate Professor, Department of Clinical and
Administrative Sciences, Mercer University College of Pharmacy and Health Sciences in
Atlanta, Georgia. She developed the seminar style elective, Women’s Health, five years
ago and offers the course each year to doctor of pharmacy students. Leisa supervises
doctor of pharmacy students in advanced practice experiences in the geriatric chronic
care setting and provides consulting pharmacy services to a continuous care retirement
community. [email: marshall_1@mercer.edu]
DR KALEN PORTER is a Clinical Assistant Professor in the Department of Clinical and
Administrative Pharmacy at University of Georgia College of Pharmacy. She obtained
her Pharm.D. from the University of Georgia and postgraduate training at Shands at the
University of Florida and Medical University of South Carolina. Her research interests
include pediatric infectious disease, pediatric asthma, and increasing education and
awareness among pharmacists and pharmacy students regarding the appropriate use of
medications in pediatric patients. [email: porter_kb@mercer.edu]
Address for all authors: Mercer University College of Pharmacy and Health Sciences,
3001 Mercer University Dr., Atlanta, Georgia 30341, USA.
DR HAOMIAO JIA is an Assistant Professor in the Department of Biostatistics at Columbia
University, USA. He gained his PhD at Case Western Reserve University, USA. His
research interests lie within the field of temporal–spatial analysis of disease distributions
and health outcome measures.
Address: Mercer University School of Medicine, 1400 College Street, Macon, Georgia
31207-0001, USA. [email: jia_h@mercer.edu]