
ARTICLE

Peer, professor and self-evaluation of class participation

GINA J. RYAN, LEISA L. MARSHALL & KALEN PORTER
Mercer University College of Pharmacy and Health Sciences, USA
HAOMIAO JIA
Mercer University School of Medicine, USA

Active Learning in Higher Education
Copyright © 2007 SAGE Publications (London, Los Angeles, New Delhi and Singapore)
Vol 8(1): 49–61   DOI: 10.1177/1469787407074049
ABSTRACT   The purpose of this project was to determine the validity of
peer and self-evaluations of class participation compared to professors'
class participation grades. Students (N = 96) evaluated themselves and
their classmates on class participation on a four-point scale, and students
were required to assign grades in a normalized distribution. Relative to
faculty evaluations, the bias and precision of the peer grades were 0.48
points and 36.3 per cent (p < 0.05) and of the self-evaluation scores 0.48
points and 77.5 per cent (p < 0.05). There was no correlation between a
student's grade point average and his/her opinion of this process
(R = 0.02). Students did not like peer assessment using a forced
distribution of grades.

KEYWORDS: faculty assessment and class participation, peer assessment,
self-assessment

Background
Class participation promotes active learning and is an important component
of seminar style classes. Yet, assigning a class participation grade is very
complicated because of its subjective nature. Assigning grades in the
extreme high or low range is easier than assessing those students in the
middle range. The students who consistently contribute to class discussion
or those who contribute little are easy to identify. Several scales have been
published to assist faculty members in assessing class participation (Bean
and Peterson, 1998; Craven and Hogan, 2001; Melvin, 1988). The use of
published scales may assist in the process, but assigning a class participation
grade remains difficult to objectify.

Multiple evaluators may increase the accuracy of class participation
grading; however, having more than one faculty member in class is not
always feasible. Peer evaluation and assessment of group and individual
projects, presentations, and essays are well documented in the literature
(Dochy et al., 1999; Lindblom-Ylanne et al., 2006; Topping, 1998). The
use of peer evaluation of class participation has been previously reported
(Gopinath, 1999; Melvin, 1988). Studies that compare peer assessments
with faculty assessment of the same activity are determining the validity
of the peer assessments, whereas studies that compare groups of peer
assessments or series of peer assessments are determining reliability.
In a study done in the United States of America (USA), Gopinath (1999)
compared peer and self class participation grades of 92 students from
three different masters in business administration courses to the grades
given by the instructors. Peer and self-grades were significantly higher
(p < 0.05) than the instructors' grades. Grade point average (GPA) was
the only significant predictor of the similarity between instructors’,
peers’, or self-grades. Student status, work experience or gender did not
contribute to the differences measured. By way of explanation, GPA is the
most common way that academic standing is determined in the USA. It
is a weighted average based on the grades received and the number of
credit hours taken. In the Gopinath (1999) study, students with high
GPAs scored themselves lower on class participation than students with
low GPAs.
Melvin (1988) studied peer evaluation by 144 students in seven different
courses to determine if average peer scores were equivalent to the professor’s
class participation grade. Students had to grade in a forced distribution to pre-
vent leniency errors but the professor did not. Peer grades were averaged with
the faculty’s grade to yield the final class participation grade if the peer grade
was at least one letter grade higher than the faculty grade; otherwise, no
adjustments were made. The correlation coefficient between peer grades and
the professor’s grade was 0.83–0.9, and there were no adjustments made in
any final class participation grades. Students rated the fairness of this grading
system 4.02 on a 5-point scale, with 5 being the most fair. It was reported that
there were no complaints regarding class participation grades from any of the
144 students. Burchfield and Sappington (1999) conducted a similar study
over three years with 144 students and found a correlation of 0.72 between
peer and instructor ranking of class participation.
The validity and reliability of peer evaluations is debatable. In a review of
peer and self-assessment Topping (1998) found that 18 out of 31 studies on
peer assessment were reliable and valid; however, in seven of the studies
reliability and validity were low. The following problems have been noted
when peer assessment is used: inflated grading of friends, lack of
discrimination among members of a group, individuals dominating to seek
higher marks, and students who do less work but still benefit from the
group grade (Pond et al., 1995). Dochy et al. (1999) reviewed the litera-
ture regarding self-assessment and found, in general, good students tend
to underrate themselves and poorer students tend to overrate themselves.
In another study of self-evaluation of class participation there was no cor-
relation between faculty and self-grades (Burchfield and Sappington,
1999).
The purpose of this project was to determine the validity of peer and
self-evaluations of class participation using a large sample size with mul-
tiple assessment time points. The similarity between peer, faculty, and self-
evaluations was the primary outcome measured. Students may accept
faculty class participation grades better if peer and faculty grades are sim-
ilar (Gopinath, 1999). Therefore, we examined student opinion of this
process. We also wished to find out if there is any correlation between
grade point averages (GPA) and peer and faculty evaluations of class
participation since there is some evidence that GPAs correlate with peer
evaluations (Gopinath, 1999; Persons, 1998). Those studies were carried
out in the discipline of business and management, so there is a need to
find out whether such a correlation exists in other disciplines and, if so,
how similar the findings are. This study examines this correlation in the
context of pharmacy education.

Method
In the USA, two years of basic science college level pre-pharmacy course
work must be completed in order to be eligible for admission to pharmacy
school. Pharmacy school is a four-year program and a Doctor of
Pharmacy is the degree earned. Pharmacy students generally take all
required didactic courses together during the first three years. Mercer
University College of Pharmacy and Health Sciences has approximately
140 students per class. In addition to required courses, second and third
year professional students take one elective class per 16-week term.
In the spring of 2005, second and third year professional students enrolled
in Diabetes Care (N = 30), Pediatric Pharmacotherapy (N = 40) and
Women's Health (N = 34) elective courses assessed the classroom participa-
tion of their classmates and themselves. The demographics of the student
subjects are listed in Table 1. Each course was taught by different faculty
members. In these seminar-style didactic courses class participation
accounts for 20–25 per cent of the final grade. Each course met once per
week for two hours throughout a 16-week semester.

Table 1  Student demographics

Number                                      96
Age (years ± SD)                            24.5 ± 3.3
Female n (%)                                89 (92.7)
Male n (%)                                  7 (7.3)
3rd professional year standing n (%)        50 (52.1)
2nd professional year standing n (%)        46 (47.9)
Race n (%)
  Caucasian                                 63 (65.6)
  African American                          14 (14.6)
  Other                                     19 (19.8)
Average GPA on a 4-point scale              3.49 ± 0.43

Throughout the semester, faculty evaluated student participation in case
presentations, small group
discussions, question and answer sessions, and other in-class activities.
Although the students had been in pharmacy school with each other for
1.5–2.5 years, some of them did not know each other by name. Therefore,
during the first week of class, students played games to introduce them-
selves and to learn each other’s names. In a further effort to ensure that
classmates knew each other, faculty also made a point of referring to
students by their name whenever possible. The instructors made notes on
student participation after each class meeting. All students in the classes were
required to complete peer and self-evaluations as a part of each course.
Students gave voluntary written informed consent to the use of their data in
this analysis.
Students and faculty used an online survey to rank classroom partici-
pation in one of four categories (Table 2). This assessment of class
participation occurred at weeks 5, 10, and 16 of the 16-week semester.
Leniency, or overestimation of grades, in peer review processes has been
previously documented (Brindley and Scoffield, 1998; Falchikov, 1995;
Orsmond and Merry, 1996; Topping, 1998). To reduce the effect of leniency
errors, the students were forced to assign grades in a normally distri-
buted pattern. For example, in a class of 30 students, five students were
given a four, ten students a three, ten students a two, and five students a
one. Students placed their peers within each category, but did not assign
ranks within each category. A median peer rating was computed for each
student on the 4-point scale. The faculty assessed student participation
at the same time points.

Table 2  Class participation grading scale

Category   Grade %   Description
4          100       Is a leader in class discussion and activities.
                     Generates meaningful comments on material. Often
                     provides new insights. Reveals outstanding command
                     of course materials.
3          90        Participates frequently and contributes consistently
                     to class discussion and activities. Demonstrates
                     better than average command of materials.
2          80        Participates in discussion on many occasions,
                     although not necessarily consistently. Satisfactory
                     command of materials.
1          70        Rarely participates in discussion voluntarily, or
                     only participates when asked a question directly,
                     and/or minimal command of the materials.

Faculty used the same grading scale without the forced distribution, since
it is our institution's policy not to use a forced distribution when
assigning grades. Faculty graded along the continuum
of the grading scale (that is, 2.3, 3.8, etc.). Self-assessment grades were
not included in the peer rating average. If the peer rating was at least
one letter grade higher than the faculty assigned grade, the two grades
were averaged. If the peer rating was lower than the faculty assigned
grade then only the faculty grade counted. Students were informed of
these policies. The grading criteria were adapted from Melvin (1988)
and are listed in Table 2. All students and faculty were blinded to other
ratings until all peer evaluations were submitted. Both faculty and aver-
age peer assessments were posted one week after all peer evaluations
were submitted.
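
The grading mechanics just described can be summarized in a short sketch.
The following Python is illustrative only, not the authors' implementation:
the function names are ours, the forced split generalizes the 5/10/10/5
example to a roughly 17/33/33/17 per cent pattern, and 'one letter grade'
is interpreted as one point on the four-point scale, consistent with the
100/90/80/70 mapping in Table 2.

    import statistics

    GRADE_PERCENT = {4: 100, 3: 90, 2: 80, 1: 70}  # category-to-percentage mapping (Table 2)

    def forced_quota(n):
        # Number of students to place in categories 4/3/2/1 for a class
        # of n, in the roughly normal pattern used in the study; for
        # n = 30 this gives 5 fours, 10 threes, 10 twos and 5 ones,
        # matching the example in the text.
        tail = round(n / 6)            # top and bottom categories
        mid = (n - 2 * tail) // 2      # one of the two middle categories
        return {4: tail, 3: mid, 2: n - 2 * tail - mid, 1: tail}

    def final_participation_grade(peer_ratings, faculty_grade):
        # Stated policy: average the median peer rating with the faculty
        # grade only when the peer rating is at least one letter grade
        # (one point) higher; otherwise the faculty grade stands alone.
        median_peer = statistics.median(peer_ratings)
        if median_peer >= faculty_grade + 1:
            return (median_peer + faculty_grade) / 2
        return faculty_grade

For example, forced_quota(30) returns {4: 5, 3: 10, 2: 10, 1: 5}, and
final_participation_grade([4, 4, 3, 4], 2.5) returns 3.25, since the median
peer rating of 4 is more than one point above the faculty grade of 2.5.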
At the end of the semester, student opinions of this process were collected
via an online version of the student questionnaire on peer grading (Table 3). The ques-
tions were derived from other published studies (Cheng and Warren, 1997;
Orsmond and Merry, 1996) and based on the instructors’ experiences with
issues that arise during the assessment of class participation. A Likert scale
was chosen because all course and faculty evaluations at our institution are
done with this scale and our students are very familiar with it. To preserve
anonymity, the GPA range, rather than the exact GPA, was collected, so that
no individual student could be identified by GPA.

Table 3  Student opinion of peer review process

Question                                              Disagree (%)  Agree (%)  Average (±SD)

I feel that peer evaluation was fair in helping to
  determine my class participation grade.                  70           12      2.0 ± 0.93
I participated more because I knew my peers were
  evaluating me.                                           55           21      2.4 ± 1.2
I agreed with my peer evaluation scores.                   91            4      2.6 ± 1.1
I found it easy to evaluate my peers on their class
  participation.                                           85           24      1.9 ± 0.95
I would recommend using peer evaluation grades in
  the future.                                              81           10      1.8 ± 0.95
I like the use of forced distribution grading.             43           22      1.6 ± 0.78
I would have given people lower scores if there was
  not a forced distribution of the grades.                 85           25      2.5 ± 1.1
I would have given people higher scores if there was
  not a forced distribution of the grades.                 82            9      3.8 ± 0.99

Questionnaire scale: 1 – strongly disagree; 2 – disagree; 3 – neutral; 4 – agree;
5 – strongly agree

Statistical analysis
Bias and precision were calculated to determine the ability of a peer grade
to predict a faculty grade. Bias is the average difference between a peer
grade and the faculty grade. Precision is the measure of random error of
the differences between the peer grade and the faculty grade. Precision is
calculated using a coefficient of variation which is the standard deviation
divided by the mean and expressed as a percentage. A correlation coefficient
was not used because a high degree of correlation does not indicate equity;
therefore, a high correlation does not necessarily indicate accuracy and
validity. Also, when there is a small degree of variability, the Pearson
correlation coefficient (R) is artificially suppressed (Bean and Peterson,
1998; Sheiner and Beal, 1981). Despite this, most of the literature that
examines the relationship between peer, faculty, and self-evaluation uses
correlation as an indicator of accuracy or equity. In addition, in a
meta-analysis of peer assessment studies, Falchikov and Goldfinch (2000)
recommend not using correlation coefficients to determine the agreement
between faculty and peer evaluations. The association between peer, faculty,
and self-evaluations
and cumulative GPA was determined by calculation of Pearson R correlation
coefficients. A paired sample Student's t-test was used to determine the
statistical significance of the differences between the average evaluations
in the three sessions. Statistical significance of the difference between
the average faculty, peer or self-evaluations was tested using a Wilcoxon
signed-rank test. The association between students' GPA ranks and opinions
of this project was calculated with a Spearman correlation coefficient.
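
To make the calculation concrete, here is a minimal Python sketch of the
bias and precision measures as defined above, together with the significance
tests named in this section. It is a reconstruction from the text, not the
authors' code; the grade arrays are invented for the example, and the
coefficient of variation follows the paper's wording (standard deviation of
the differences divided by their mean, taken here as a magnitude).

    import numpy as np
    from scipy import stats

    def bias_and_precision(peer, faculty):
        # Bias: the average difference between peer and faculty grades.
        # Precision: coefficient of variation of those differences,
        # expressed as a positive percentage.
        diff = np.asarray(peer, dtype=float) - np.asarray(faculty, dtype=float)
        bias = diff.mean()
        precision = diff.std(ddof=1) / abs(diff.mean()) * 100.0
        return bias, precision

    # Illustrative per-student average grades (not the study's data).
    peer = np.array([3.0, 2.5, 3.5, 3.0, 2.0, 3.5, 2.5, 3.0])
    faculty = np.array([3.4, 3.0, 3.9, 3.6, 2.9, 3.8, 3.1, 3.3])

    bias, precision = bias_and_precision(peer, faculty)
    t_stat, p_t = stats.ttest_rel(peer, faculty)   # paired Student's t-test
    w_stat, p_w = stats.wilcoxon(peer, faculty)    # Wilcoxon signed-rank test
    rho, p_s = stats.spearmanr(peer, faculty)      # Spearman rank correlation
    print(f"bias = {bias:.2f} points, precision = {precision:.1f}%")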

Results
Peer versus faculty
Ninety-six students (95% of eligible students) participated in this study and
reported a total of 8881 peer evaluations and 272 self-evaluations.
At week five 89 (92.7%) students completed evaluations, and 91 (94.8%)
and 92 (95.8%) students completed the evaluations at weeks 10 and 16,
respectively. The results of average peer, faculty, and self-evaluations are
presented in Figure 1. The distribution of faculty grades is illustrated in
Figure 2. Table 4 contains the bias and precision values of peer and self-
evaluations versus faculty evaluations. Faculty grades tended to be higher
than peer grades (p < 0.05). Self-evaluation grades were higher than faculty
grades (p < 0.05). The majority of students (66.7%) graded themselves as
category four, and 31.3 per cent reported a three for self-evaluation. Only 3.1
per cent of students graded themselves as category two. There was no cor-
relation between GPA and faculty evaluation (Pearson R = 0.10, p = 0.35),
peer evaluation (Pearson R = 0.11, p = 0.27), or self-evaluation (Pearson
R = 0.005, p = 0.95). The results of the student-opinion questionnaire are
included in Figure 3. There was no correlation between a student's GPA and
his/her opinion of this project (Spearman R = 0.02, p = 0.87).

Figure 1  Average class participation grades (faculty, peer and self) at
weeks 5, 10 and 16
*p < 0.05 compared to weeks 5 and 10
[Bar chart not reproduced; y-axis: grades, 0–4.5.]

Figure 2  Distribution of faculty class participation grades
[Histogram not reproduced; axes: frequency versus class participation grade.]

Table 4  Bias and precision of class participation grades

                    Peer versus faculty              Self versus faculty
                    Bias      Precision              Bias      Precision
                    (points)  (coefficient of        (points)  (coefficient of
                              variation, %)                    variation, %)

All                 0.48*     36.3                   0.48*     77.5
Diabetes            0.27*     118.8                  0.71*     88.4
Pediatrics          0.53*     64.7                   0.49*     125.8
Women's Health      0.60*     60.6                   0.77*     63.0

N = 96 students
*p < 0.05

Discussion
Peer evaluations of class participation, using a forced-normal distribution
pattern, are not predictive of faculty evaluations of class participation. This
study collected over 8000 scores, which is a large enough sample size to
provide good reliability. Despite the lack of predictive value and the
statistical difference, no average peer grade was more than one point higher
than a faculty grade, and no final class participation grades were adjusted. This suggests
that even though the scores were statistically different, the difference
between the grades was not academically significant, that is, it did not
result in a change in a student’s grade. In these three courses, the average
faculty grades were higher than peer grades and the grades did not
conform to a bell-shaped distribution pattern (Figure 2).

Self-evaluations versus faculty evaluations
The majority (66.7%) of students graded themselves a four. Self-evaluations
were consistently inflated relative to faculty grades. Inflation of self-
evaluation has been reported previously: weaker students tend to overrate
themselves, good students tend to underrate themselves (Burchfield and
Sappington, 1999; Gopinath, 1999), and the accuracy of self-evaluations
improves with time (Dochy et al., 1999). The results of the week five and
week 10 peer and faculty evaluations had little effect on self-evaluation
grades, which remained 3.6–3.7 throughout the semester and stayed
significantly different from both peer (p < 0.05) and faculty scores
(p < 0.05). This may indicate either that students did not become more
reflective in the evaluations of their peers or that student classroom
participation increased. The latter hypothesis is supported by the fact that
average faculty evaluations also increased over the semester.

Student opinion
The student-opinion questionnaire results (Figure 3) indicated that stu-
dents did not like forced distribution grading. More than 80 per cent of
students reported they would have assigned higher grades if possible.
Although not specifically asked, the fact that most students wanted to give
higher grades and did not want to give lower grades indicates that most
students wanted to give more fours and fewer ones. Most students (80%)
thought the peer evaluation process was unfair and disagreed with their
average peer grade. This result is consistent with the fact that most self-
evaluation grades were higher than peer grades. Subjects (80%) reported
that evaluating their peers was difficult and did not recommend using this
system in future courses. In two out of the ten written comments, students
reported that they did not know their classmates well enough to evaluate
their performance. Five students stated that they thought their peers did
not assign grades based on participation, but instead gave higher grades to
their friends. This problem of ‘friendship bias’ in peer assessment has been
previously described (Love, 1981; Topping, 1998). The literature suggests
that allowing the students to be involved with the creation of the evalua-
tion criteria may improve student understanding and acceptance of the
assigned grades (Dochy et al., 1999). However, our students were not able
to contribute to the development of the grading criteria because our insti-
tution requires documentation of all grading criteria in the course syllabi
no later than the first day of class.
GPA, peer evaluations and faculty evaluations
There was no correlation between GPA and either peer or self-evaluations
of class participation. This may have been because there was a very small
variation in GPA: the standard deviation was 0.43 points on a four-point
scale. This small variation made it mathematically difficult to detect any
correlation between GPA and any other variable. This may be why our results
differ from those of Persons (1998) and Gopinath (1999). Unfortunately, the
variations in GPA in Persons (1998) and Gopinath (1999) were not reported.

Limitations
The validity and reliability of the results of this research could have been
improved if students had been involved in the formulation of the grading
criteria because the students may have understood the scoring process
better (Pond et al., 1995). There is evidence that these students did not
completely understand this process. For example, most students (66.7%)
gave themselves a four, which may indicate they did not understand that
the self-evaluation grade could never affect their final grade. The class sizes
in this study were also large (30–40 students) for a seminar style course.
This may have led to decreased discrimination in the peer grading process.
Since it is the faculty member's job to learn student names and assess
student performance, we were probably more diligent in our observations of
class participation. Students, however, may not have been as attentive in
noting the class participation of others and, therefore, might
have had less recall when assigning the grades. Smaller classes would have
meant that students could have known others better and would have had
fewer grades to assign. Hopefully, this would have made the task of assign-
ing grades less arduous, and students would have assigned grades with
greater discrimination.
Another obvious limitation was that the instructors were not forced to
grade in a normal distribution, thus increasing the chance of an observed
difference. Also, since our students had taken the vast majority of their
courses together, there was an increased risk of friendship bias. This was a
point that five students complained about in the essay section of their
opinion survey.
The majority (92.7%) of our participants were female, and the results may
have differed in a population with a different male to female ratio. Finally,
another limitation to this trial was that two of the electives were pass/fail and
an 85 per cent was required to receive a passing grade. The forced distribu-
tion of peer grades required that half of the students be assigned a grade
of 80 per cent or less for class participation. This discrepancy may be one
reason why the faculty grades were higher than peer grades.

Future research
Several points should be considered by faculty conducting future research
on the utility of peer evaluations for validating class participation grades.
Although the use of an online survey made data collection easier, there were
technical difficulties which resulted in many students having to re-enter
data for the second session. There was some concern that student partici-
pation in the online survey might decline throughout the semester.
However, 90–93 per cent of students completed the online surveys at each
of the three time points. No students complained about the technical dif-
ficulty in the opinion survey.
Leniency must be prevented and the problem of students assigning
grades based on non-performance related issues must be addressed. Rather
than forcing the students to grade in a normal distribution pattern, perhaps
students could grade in a distribution pattern derived from the faculty eval-
uations. Since grading class participation of each classmate is laborious,
giving students some incentive for accuracy in grading may improve the
quality of their evaluations and prevent bias. One could consider giving
students extra credit points if their evaluations fall within a certain range
of faculty or average peer evaluations. In addition, future researchers may
wish to schedule the first peer evaluation of class participation later in the
semester to allow students to become more familiar with each other prior
to the first evaluation. Finally, in large classes, limiting the number of
peers each student must evaluate, rather than having all students evaluate
all their peers, may improve the quality of the peer assessments (Falchikov
and Goldfinch, 2000) and reduce the number of evaluations each student must
complete.
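
One way to implement the accuracy incentive suggested above, under our
assumption that 'within a certain range' means within a fixed tolerance of a
reference (faculty or average peer) rating; the tolerance and bonus values
below are illustrative, not from the study:

    def accuracy_bonus(student_rating, reference_rating, tolerance=0.5, bonus=1.0):
        # Award extra-credit points when the student's peer rating falls
        # within `tolerance` points of the reference rating
        # (hypothetical values, for illustration only).
        return bonus if abs(student_rating - reference_rating) <= tolerance else 0.0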
To conclude, although peer evaluation in a forced-normal distribution pat-
tern was not statistically predictive of faculty evaluation of class participation,
peer grades were not considerably different from faculty grades. Self-
evaluations using this system were inflated relative to faculty evaluations.
Finally, students did not like grading their peers using these methods.

Acknowledgements
We thank Dottie Harris for her technical support with creating and administering the
online survey. We also appreciate Dr Richard Jackson’s assistance in the design of this
study.

References
BEAN, J. C. & PETERSON, D. (1998) 'Grading Classroom Participation', New Directions for
Teaching and Learning 74(1): 33–40.
BRINDLEY, C. & SCOFFIELD, S. (1998) ‘Peer Assessment in Undergraduate Programmes’,
Teaching in Higher Education 3(1): 79–89.
BURCHFIELD, C. M. & SAPPINGTON, J. (1999) 'Participation in Classroom Discussion',
Teaching of Psychology 26(4): 290–1.
CHENG, W. & WARREN, M. (1997) 'Having Second Thoughts: Student Perceptions Before
and After a Peer Assessment Exercise', Studies in Higher Education 22(2): 233–9.
CRAVEN, J. A., III & HOGAN, T. (2001) 'Assessing Student Participation in the
Classroom’, Science Scope 25(1): 36–40.
DOCHY, F., SEGERS, M. & SLUIJSMANS, D. (1999) ‘The Use of Self-, Peer and
Co-assessment in Higher Education: A Review’, Studies in Higher Education 24(3):
331–50.
FALCHIKOV, N. (1995) 'Peer Feedback Marking: Developing Peer Assessment',
Innovations in Education and Training International 32(2): 175–87.
FALCHIKOV, N. & GOLDFINCH, J. (2000) ‘Student Peer Assessment in Higher
Education: A Meta-analysis Comparing Peer and Teacher Marks’, Review of Educational
Research 70(3): 287–322.
GOPINATH, C. (1999) ‘Alternatives to Instructor Assessment of Class Participation’,
Journal of Education for Business 75(1): 10–14.
LINDBLOM-YLANNE, S., PIHLAJAMAKI, H. & KOTKAS, T. (2006) ‘Self, Peer, and Teacher
Assessment of Student Essays’, Active Learning in Higher Education 7(1): 51–62.
LOVE, K. (1981) 'Comparison of Peer Assessment Methods: Reliability, Validity,
Friendship Bias and User Reaction', Journal of Applied Psychology 66(4): 451–7.
MELVIN, K. (1988) ‘Rating Class Participation: The Prof/Peer Method’, Teaching of
Psychology 15(3): 137–9.
ORSMOND, P. & MERRY, S. (1996) ‘The Importance of Marking Criteria in the Use of
Peer Assessment’, Assessment & Evaluation in Higher Education 21(3): 239–49.
PERSONS, O. (1998) 'Factors Influencing Students' Peer Evaluation in Cooperative
Learning’, Journal of Education for Business 73(4): 225–30.
POND, K., UL-HAQ, R. & WADE, W. (1995) 'Peer Review: A Precursor to Peer
Assessment’, Innovations in Education & Teaching International 32(4): 314–23.
SHEINER, L. & BEAL, S. (1981) ‘Some Suggestions for Measuring Predictive
Performance’, Journal of Pharmacokinetics and Pharmacodynamics 9(4): 503–12.
TOPPING, K. (1998) ‘Peer Assessment Between Students in Colleges and Universities’,
Review of Educational Research 68(3): 249–76.

Biographical notes
GINA J. RYAN is a graduate of the University of California, San Francisco School of
Pharmacy, where she also completed her general practice residency. Currently, she is
a Clinical Assistant Professor at Mercer University College of Pharmacy and Health
Sciences. In her academic position, she teaches endocrinology and oversees a clinical
pharmacy practice at Grady Health System Diabetes Clinic. Her research interests
include pedagogical research in pharmacy and improving the uses and techniques of
active learning. [email: ryan_gj@mercer.edu]
DR LEISA L. MARSHALL is Clinical Associate Professor, Department of Clinical and
Administrative Sciences, Mercer University College of Pharmacy and Health Sciences in
Atlanta, Georgia. She developed the seminar style elective, Women's Health, five years
ago and offers the course each year to doctor of pharmacy students. Leisa supervises
doctor of pharmacy students in advanced practice experiences in the geriatric chronic
care setting and provides consulting pharmacy services to a continuous care retirement
community. [email: marshall_1@mercer.edu]
DR KALEN PORTER is a Clinical Assistant Professor in the Department of Clinical and
Administrative Pharmacy at University of Georgia College of Pharmacy. She obtained
her Pharm.D. from the University of Georgia and postgraduate training at Shands at the
University of Florida and Medical University of South Carolina. Her research interests
include pediatric infectious disease, pediatric asthma, and increasing education and
awareness among pharmacists and pharmacy students regarding the appropriate use of
medications in pediatric patients. [email: porter_kb@mercer.edu]
Address for all authors: Mercer University College of Pharmacy and Health Sciences,
3001 Mercer University Dr., Atlanta, Georgia 30341, USA.
DR HAOMIAO JIA is an Assistant Professor in the Department of Biostatistics at Columbia
University, USA. He gained his PhD at Case Western Reserve University, USA. His
research interests lie within the field of temporal–spatial analysis of disease
distributions and health outcome measures.
Address: Mercer University School of Medicine, 1400 College Street, Macon, Georgia
31207-0001, USA. [email: jia_h@mercer.edu]
