DOI 10.1007/s10864-009-9083-8
ORIGINAL PAPER
Introduction
174 J Behav Educ (2009) 18:173–188
engaged in class discussion. For example, Boniecki and Moore (2003) used tokens
exchangeable for extra course credit to increase the number of student hands raised
to answer an instructor question, decrease the amount of time following an
instructor question until a hand was raised, and increase the number of questions or
comments from students. These variables were recorded by a research assistant
sitting in the back of the classroom. Similarly, Sommer and Sommer (2007)
increased class participation by giving a small amount of course credit for
participation on alternate days. Participation increased on both credit and noncredit
days but was significantly higher on the credit days. However, because neither of
these studies specifically targeted initially low responders, we do not know whether
such students found credit for class comments highly reinforcing.
Although awarding credit for participation sounds like a straightforward way to
improve class discussion, keeping track of individual student comments while
conducting class discussion could prove unmanageable for an instructor. In
examining a way to make assessment of class participation more manageable for the
teacher, Krohn et al. (2008) had students record their own comments in class. The
number of self-recorded student comments generally agreed (75–80% agreement)
with the number of comments recorded by observers. Disagreement regarding
number of comments usually resulted from students’ under-recording their
comments, resulting in very few instances where students claimed unwarranted
credit for participation. Krohn et al.’s self-recording procedures proved sufficiently
reliable and manageable with large classes for us to evaluate those procedures
specifically with the lowest responders in large classes.
A primary goal of the study was finding ways to promote class participation by
students who typically have a low rate of responding in large classes (50+
students). Obviously, some students in large classes regularly participate in class
discussions, while other students go through an entire course without volunteering
a single comment. Thus, we were particularly interested in determining whether a
small amount of contingent credit for voluntary participation would increase the
percentage of low-responding students who participated in class discussion while
self-recording their participation under both credit and non-credit conditions. Assigning
a small amount of credit for class participation, in combination with asking
students to record their own comments, would appear to be a relatively
inconspicuous way to promote low-responding students’ engagement in class
discussion. Because these students may be somewhat intimidated by the prospect
of earning course credit for participation, a small amount of credit for participation
presumably should encourage participation without creating excessive emphasis on
participation.
Thus, the focus of the current study was to assess the effects of two credit levels
(small vs. no credit), used in conjunction with two previously established self-
recording procedures, on the percentage of initially low-responding students who
subsequently participated in class discussion. The low-responding students approximated
the bottom quartile of class participants in the baseline (i.e., the first unit) of
each section of the course. The credit and self-recording procedures were used in all
sections of the course over a two-semester period.
Method
Participants
Students were selected from six sections of a large undergraduate course in human
development required of all students who wished to enter the Teacher Preparation
Program in a large state university in the Southeastern United States. Each section
had approximately 55 students. The study was conducted over two consecutive
semesters. Students were selected on the basis of their participation levels in the first
of five units in the course (i.e., Physical Development). Although the intent was to
select the bottom quartile of participants in the first unit for each section, several ties
in percentile ranks within sections permitted only an approximation of the bottom
quartile of participants within each section. The students ultimately selected are
referred to as low responders throughout this manuscript. Initially, 49 students were
identified as low responders the first semester and 45 the second semester across
sections.
First Semester
For Section A, students in the bottom 31.9% (n = 15) were identified as low
responders, with the percentage of these students participating across baseline days
ranging from 23 to 73%. For Section B, students in the bottom 32.1% (n = 17) were
identified as low responders, with the percentage of them participating across
baseline days ranging from 6 to 41%. For Section C, students in the bottom 29.8%
(n = 17) were identified as low responders, with the percentage of them
participating across baseline days ranging from 0 to 47%.
Second Semester
For Section A, students in the bottom 33.3% (n = 12) were identified as low
responders, with the percentage of these students participating across baseline days
ranging from 17 to 36%. For Section B, students in the bottom 24.0% (n = 14) were
identified as low responders, with the percentage of them participating across
baseline days ranging from 0 to 8%. For Section C, students in the bottom 25.9%
(n = 19) were identified as low responders, with the percentage of them
participating across baseline days ranging from 0 to 6%.
For 4 days in each of five units in all course sections during both semesters, students
were instructed to answer questions from their study guide over a specific section of
the instructor notes included in their reading material. Students reported at the
beginning of each class period either on a sign-in sheet passed around the class (first
semester) or on a record card (second semester) whether they had answered all the
assigned homework questions for that day. The homework assignments were
designed to prepare students for class discussions in the corresponding class
Students were asked to number or bullet their comments so that instructors could
easily distinguish between the comments.
Because some students did not submit the 3 × 5 notecards when credit was not
given for participation during the first semester, a more comprehensive record card
that included space for recording other credit-producing activities, as well as
participation, was used the second semester. Consequently, if students did not
submit the expanded record card on a particular day, they received no credit for the
day. The second-semester record card precisely delineated space for recording
comments and checking attendance, display of namecards, and completion of
homework assignments. In both semesters, students submitted their participation
records on the four discussion days in each unit.
Each semester, two graduate teaching assistants (GTAs) from another section of the course observed class
discussion 1 day in each unit. GTAs observed the 3rd day of each unit the first
semester and the 4th day of each unit the second semester. The selection of the
observation day was based primarily on the GTAs’ class schedules. The GTAs were
given the same instructions as the students regarding what constituted a comment
(included in the syllabus). The observers sat in the two front corners of the tiered
amphitheater in which the course was conducted. From their perspective, they could
see all of the students and the name cards in front of the students. The students had
been told that the GTAs would be observing the discussion. The same GTAs also
assisted with the administration of the unit exams and consequently were in the
classroom for the five observation days plus the five unit exam days across units,
allowing considerable time for students to habituate to their presence.
Inter-rater agreement between the low-responding students’ and observers’
records of class participation was established in a somewhat different manner from
that previously employed by Krohn et al. (2008) over a wide range of students in
large classes. In the Krohn et al. study, each observer’s total comments for a
particular student was compared with that student’s self-reported total for the day.
The smaller of the two totals was divided by the larger to determine percentage of
agreement between the observer and the student. Cases in which neither the student
nor the observer recorded a comment for the day were excluded from the reliability
assessment. The reliability calculations for each student were then averaged across
students in each section of the course for each unit.
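As a sketch, the Krohn et al. agreement calculation described above can be expressed as follows; the comment totals used here are hypothetical, not data from the study:

```python
def daily_agreement(student_total, observer_total):
    """Percentage agreement for one student on one observation day:
    the smaller of the two comment totals divided by the larger."""
    if student_total == 0 and observer_total == 0:
        return None  # zero-zero days were excluded from the assessment
    smaller, larger = sorted((student_total, observer_total))
    return 100 * smaller / larger

def section_agreement(pairs):
    """Average the per-student agreement for one unit of one section,
    skipping the excluded (zero-zero) cases."""
    scores = [a for a in (daily_agreement(s, o) for s, o in pairs) if a is not None]
    return sum(scores) / len(scores) if scores else None

# Hypothetical (student, observer) comment totals for one unit:
pairs = [(2, 2), (1, 2), (0, 0), (3, 2)]
print(round(section_agreement(pairs), 1))  # → 72.2
```

This per-student, smaller-over-larger form penalizes any disagreement symmetrically, whether the student over- or under-recorded.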
Because of much smaller ns in the current study plus numerous cases in which
neither the low-responding student nor the observer recorded a comment,
correlational analyses that included zero–zero comparisons were used in quantifying
the level of agreement between student and observer participation records. For the
first semester, the correlations ranged from .76 to .97 across units: Unit 1 = .76,
Unit 2 = .87, Unit 3 = .97, Unit 4 = .88, and Unit 5 = .92. The average
correlation between student and observer records the first semester was .88. For
the second semester, the correlations across units ranged from .81 to .94: Unit
1 = .81, Unit 2 = .94, Unit 3 = .93, Unit 4 = .88, and Unit 5 = .90. The average
correlation between student and observer records the second semester was .89.
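A minimal sketch of this correlational approach, using hypothetical per-student comment counts; unlike the percentage-agreement method, zero-zero pairs are retained in the calculation:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation between student-recorded and observer-recorded
    comment counts; zero-zero pairs stay in the data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical comment counts for six low responders on one observation day:
students  = [0, 0, 1, 2, 0, 1]
observers = [0, 0, 1, 2, 0, 2]
print(round(pearson_r(students, observers), 2))  # → 0.91
```

Retaining the zero-zero pairs credits the records for agreeing that no comment was made, which matters when many low responders make no comments at all.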
Credit Contingencies
In each semester, some credit was allotted for class discussion in two of the five course
units per section. However, the amount of credit differed slightly between the two
semesters. The first semester, students could receive one point for their first
comment and another point for a second comment. During the second semester,
students received two points for their first comment and one point for their second
comment. Our rationale for this change was that low-responding students may have
more difficulty in making their initial comment than in making additional
comments. Determination of credit for class participation was based strictly on
the participation reports of the students across all discussion days, even on days
when observers also recorded student comments.
Research Design
The same research design was used each semester. It was pre-determined that each
section of the course would have the credit contingency in two units, which would
be separated by a unit without credit for participation. This credit arrangement for
participation was described in the course syllabus. By random selection at the
beginning of each semester, two sections had a baseline, credit, non-credit, credit,
and non-credit sequence, with each phase equivalent to one unit in the course. A
third section had an extended baseline covering the first two units and then a credit,
non-credit, and credit sequence. Thus, the design included features of a reversal
and multiple-baseline across sections design. Students self-recorded comments in
all phases of the study. Students in each section were informed at the beginning of
each unit whether credit would be available for self-recorded participation that
unit.
Results
The findings are first presented for each semester and then compared across
semesters. Because of the relatively small ns, we analyzed the data through graphic
presentation rather than through analysis of variance. Figures 1 and 2 show the
mean percentage of low responders (identified by section in the baseline phase) who
participated in every phase of each section, displayed separately by semester.
Additionally, we identified the percentage of low-responding students showing
different levels of consistency in treatment effects in the various sections.
First Semester
Fig. 1 Percent of initially low-responding students participating in baseline, treatment, and withdrawal phases in the first semester (three panels, Sections A–C; y-axis: Percentage Participating; x-axis: Day Participation Recorded, days 1–20)
Table 1 Consistency of treatment effects for low responders in each section the first semester
Level of treatment effect Section A Section B Section C
hand, there was a 10% increase in the percentage of low responders from the highest
no-treatment percentage to the lowest treatment percentage.
Although the data patterns point to strong and consistent treatment effects for the
low-responding students as a group, some of these students participated minimally
even under the treatment conditions. Table 1 classifies consistency of treatment
effects for low responders as not consistent, partly consistent, mostly consistent, and
consistent. These classifications were based on day-to-day comparisons of
participation between the baseline and the two treatment phases: Not consistent
was defined as no gain in daily participation in the treatment phases compared to
participation on corresponding baseline days; partly consistent as more participation
on one or two treatment days compared to comparable baseline days; mostly
consistent as more participation on three treatment days compared to comparable
baseline days; and consistent as more participation on all four treatment days
compared to comparable baseline days.
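The classification rule above can be sketched as follows; the per-day comment counts are hypothetical, and the sketch simplifies by comparing one set of four treatment days against the four corresponding baseline days:

```python
def classify_consistency(baseline_days, treatment_days):
    """Classify a low responder's treatment effect by counting treatment
    days with more comments than the corresponding baseline day.
    Both arguments are per-day comment counts over four discussion days."""
    gains = sum(t > b for b, t in zip(baseline_days, treatment_days))
    if gains == 0:
        return "not consistent"
    if gains <= 2:
        return "partly consistent"
    if gains == 3:
        return "mostly consistent"
    return "consistent"

# Hypothetical student: gains over baseline on 3 of 4 treatment days
print(classify_consistency([0, 0, 1, 0], [1, 2, 1, 1]))  # → mostly consistent
```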
The percentages of low-responding students classified as not consistent, partly
consistent, mostly consistent, and consistent in treatment effects differed somewhat
across sections of the course. Despite the effectiveness of the treatment for some
participants (consistent treatment effect for 0.0–20.0% across sections), all sections
showed a substantial percentage of participants in the "not consistent" category
(ranging from 13.3 to 76.5% across sections). To some degree, this lack of
improvement appeared to be a function of the baseline participation levels of several
low-responding students. Four of these students in Section A, one in Section B, and
three in Section C made two comments on at least one baseline day. Given that
these students were responding at what would be the maximum credit level (two
comments per class period) on some baseline days, a consistent treatment effect was
difficult to demonstrate for them.
Second Semester
Figure 2 shows that baseline percentages for students identified as low responders
differed across sections: about 26% of the low responders participated in class
discussion in the baseline phase for Section A, about 5% in Section B, and about 2%
in Section C. With regard to the latter section, the low responders did not make a
single comment on most baseline days (which spanned 8 days across the first two
units of the course).
Treatment comparisons for the low-responding students revealed consistent
treatment effects for all sections. Despite the substantial percentage of low-
responding students who participated during baseline in Section A (26%), an
average of 18% more of these students participated during the treatment than the no-
treatment phases (including the baseline phase). In Section B, 41% more of the
initially low-responding students participated in the treatment than in the no-
treatment phases. In Section C, 33% more of the initially low-responding students
participated in the treatment than in the no-treatment phases. Figure 2 suggests that
increased participation of low-responding students was dependent on the credit
incentive. For example, in addition to the small percentage of low-responding
students who participated in baseline, very low percentages also participated in the
credit-withdrawal phases across sections.
Although the data showed strong and consistent treatment effects for the low-
responding students as a group in semester two, some of these students participated
minimally even under the treatment condition. As in Table 1, Table 2 classified
treatment effects for low responders as not consistent, partly consistent, mostly
consistent, and consistent treatment effects. These classifications were based on
day-to-day comparisons of participation between the baseline and the two treatment
phases and determined in the same manner as in the first semester.
Even though the percentages of low-responding students demonstrating not
consistent, partly consistent, mostly consistent, and consistent treatment effects
differed somewhat across sections of the course, all sections contained a substantial
percentage of participants in the "not consistent" category (ranging from 21.4 to
58.4% across sections). Despite the effectiveness of the treatment for other
participants (consistent effect for 14.2–31.6% across sections), some low responders
were unaffected by the credit contingency. A close examination of baseline data
showed that only one low responder made two comments (which would have
Fig. 2 Percent of initially low-responding students participating in baseline, treatment, and withdrawal phases in the second semester (three panels, Sections A–C; y-axis: Percentage Participating; x-axis: Day Participation Recorded, days 1–20)
maximized credit in the treatment phases) on any baseline day, leaving ample room
for virtually all the low-responding participants to increase their participation to
maximize credit under the credit contingency.
Consequently, although the percentage of low-responding students participating
increased substantially under treatment conditions, a sizeable percentage of
individual low-responding students did not increase the consistency of
their participation. Nonetheless, consistent effects were more frequent in the second
than the first semester. Consistent treatment effects for first semester ranged from
Table 2 Consistency of treatment effects for low responders in each section the second semester
Level of treatment effect Section A Section B Section C
0.0 to 20% across sections, whereas consistent effects in the second semester ranged
from 14.2 to 31.6% across sections. This difference may partly be attributable to the
higher levels of baseline participation in the first than the second semester, making
treatment gains less likely to be achieved in the first semester.
Alternatively, the combination of second-semester changes in the recording system
(reporting other credit-producing activities and participation on specially designed
record cards) and in the credit ratio for first and second daily comments (two points
for the first comment and one point for the second in the second semester vs. one
point for each of these comments in the first semester) may have accounted for the
difference in individual treatment effects. In any case, the second-semester credit
ratio appeared to work better than the first-semester ratio.
Discussion
the cases. Thus, the data on which the credit and non-credit comparisons were made
can be regarded as generally reliable. Virtually no students exaggerated their report
of participation to gain undeserved credit. The findings generally showed that the
agreement between low-responding students’ records and observers’ records of
class discussion was in the .76 to .97 range the first semester and in the .81 to .94
range for the second semester, with no consistent difference in level of agreement
under credit versus non-credit conditions.
The credit phases had a higher percentage of low-responding students participating
in class than did most non-credit phases. However, the percentage of low
responders in the credit phases was far from 100%. Plus, in most withdrawal-of-
credit phases in the first semester, the percentage of low-responding students
participating in discussion was lower than in the baseline period for all sections
(suggesting the possibility of a contrast effect even for students who initially
participated infrequently in class discussion). This pattern was less pronounced in
the second semester, but still occurred to a mild degree in Section A. Another
possibility is that the low-responding students were inclined to participate more than
normally at the outset of the course because of the newness of self-recording, even
without the provision of credit for self-recorded comments.
Although more of the low-responding students participated in class discussion
when credit was given for participation than when no credit was available, we do
not yet know why a sizeable percentage of these students did not respond to the
credit contingency. Several questions remain to be answered about the absence of
treatment effects for these low-responding students. For example, was the amount of
credit too small to reverse their inclination toward reticence? The composite credit
given in the study represented only about 3–5% of the course credit across the two
semesters, permitting students to make an A in the course without participating in
class discussion. In the future, the credit for participation could be escalated to an
extent that an A would not be possible without considerable participation credit.
Nonetheless, some students in the study complained about awarding any credit for
participation, indicating a personal dislike for that arrangement.
Also, it may be that participation in class discussion must be initiated early in the
course or not at all. However, this hypothesis is based more on experiential
observation than empirical research. We suspect that the longer students wait to
engage in class discussion, the more difficult participation may become. Thus, credit
for participation could be given at the outset of the course and every day thereafter,
as opposed to only in certain units in the course. This arrangement might produce
earlier and more sustained participation than the arrangement used in the current
study. Although not a direct parallel to this suggestion, Auster and MacRone (1994)
reported that early participation in college courses is a reliable predictor of
participation in later college classes.
Because minimal participation may largely be linked to characteristics students
bring to a course, it might be helpful to know whether the reticence observed in our
course was typical of the low-responding students’ verbalization in other courses of
comparable size. Similarly, determining how long this tendency had persisted in
their schoolwork might be valuable information. It may be that some of our low-
responding students have a history of reticence in all their courses that would be
extremely difficult for them and their instructors to overcome in large courses.
Although most of these students were planning to be teachers, which would require
substantial verbalization in group situations, some showed little inclination to either
demonstrate this skill or work toward developing the skill.
A realistic assessment of what was accomplished or not accomplished in this study
leads to the conclusions that low-responding students can reliably record their
participation in class discussion with minimal overstatement of their participation and
that providing a small amount of credit for reporting comments will help some highly
reticent students become more engaged in class discussion. However, for some low-
responding students, their silence may partly be a function of limited incentive to
become involved in class discussion, with their silence indicative of either disinterest
in the discussion or lack of sufficient credit for participating. On the other hand, some
low-responding students may be so intimidated by the prospect of discussion that no
amount of extrinsic rewards would enlist their engagement in discussion.
Howard et al. (2002) used surveys and interviews to identify possible reasons
why some students participate minimally in class discussion. These researchers
compared talkers (those who make two or more comments per class session) with
non-talkers (those who make fewer than two comments per class session). Talkers
were more likely than non-talkers to see participation as part of their responsibility
in a course, instead of an expendable option. Conversely, non-talkers were more
likely than talkers to indicate that they had limited knowledge of the subject matter,
their ideas were not well enough formulated, or they were shy.
Some educators (Angelo and Cross 1993; McKinney 2000) have recommended
allowing students more time to formulate their ideas before asking them to discuss
those ideas in class. For example, students might be given a minute to write down
their ideas about an issue to be discussed or share their ideas with an adjacent
student before voicing those ideas with the total class. Large classes, such as the one
targeted in the current study, likely will need to provide time for students to share
their ideas in pairs or small groups to help low-responding students make the
transition to speaking before the class as a whole. Although being able to express
one’s views in large groups is a valuable skill for students to develop, the
development of this skill may need to begin inconspicuously and develop gradually
for some students to become comfortable in sharing their views with large classes.
Although the current study showed that initially low-responding students can
make treatment gains in quantity of participation, an area that remains to be
researched is whether increased quantity will be accompanied by increased quality.
Several reports on student participation make reference to quality of participation
(e.g., Dallimore et al. 2004; Junn 1994). However, instead of operationalizing
quality of participation and then assessing quality within that operational
framework, these studies typically had students rate the global quality of discussion
in a course. Several studies have linked quality of student participation to the
concepts of higher order thinking and critical thinking, but with mixed results
regarding improvement of these thinking skills (Bradley et al. 2008; Ferguson 1986;
Smith 1977). Again, higher thinking levels were not assessed in terms of student
comments, but rather through written work, critical thinking tests, and student
ratings of satisfaction with the course experience.
References
Angelo, T. A., & Cross, P. (1993). Classroom assessment techniques: A handbook for college teachers
(2nd ed.). San Francisco, CA: Jossey-Bass.
Auster, C. J., & MacRone, M. (1994). The classroom as a negotiated social setting: An empirical study of
the effects of faculty members’ behavior on students’ participation. Teaching Sociology, 22, 289–
300. doi:10.2307/1318921.
Boniecki, K. A., & Moore, S. (2003). Breaking the silence: Using a token economy to reinforce classroom
participation. Teaching of Psychology (Columbia, Mo.), 30, 224–227. doi:10.1207/S15328023TOP
3003_05.
Bradley, M. E., Thom, L. R., Hayes, J., & Hay, C. (2008). Ask and you will receive: How question type
influences quantity and quality of online discussions. British Journal of Educational Technology, 39,
888–900. doi:10.1111/j.1467-8535.2007.00804.x.
Dallimore, E. J., Hertenstein, J. H., & Platt, M. B. (2004). Classroom participation and discussion
effectiveness: Student-generated strategies. Communication Education, 53, 103–115. doi:10.1080/
0363452032000135805.
Erway, E. A. (1972). Listening: The second speaker. Speech Journal, 10, 22–27.
Fassinger, P. A. (1995). Professors’ and students’ perceptions of why students participate in class.
Teaching Sociology, 24, 25–33. doi:10.2307/1318895.
Ferguson, N. B. (1986). Encouraging responsibility, active participation, and critical thinking in general
psychology students. Teaching of Psychology, 13, 217–218.
Garside, C. (1996). Look who’s talking: A comparison of lecture and group discussion teaching strategies
in developing critical thinking skills. Communication Education, 45, 212–227.
Harton, H. C., Richardson, D. S., Barreras, R. E., Rockloff, M. J., & Latane, B. (2002). Focused
interactive learning: A tool for active class discussion. Teaching of Psychology (Columbia, Mo.), 29,
10–15. doi:10.1207/S15328023TOP2901_03.
Howard, J. R., James, G. H., III, & Taylor, D. R. (2002). The consolidation of responsibility in the mixed-
age college classroom. Teaching Sociology, 30, 214–234. doi:10.2307/3211384.
Jones, R. C. (2008). The "why" of class participation. College Teaching, 56, 59–62. doi:10.3200/
CTCH.56.1.59-64.
Junn, E. (1994). ‘Pearls of wisdom’: Enhancing student class participation with an innovative exercise.
Journal of Instructional Psychology, 94, 385–387.
Karp, D. A., & Yoels, W. C. (1976). The college classroom: Some observations on the meaning of student
participation. Sociology and Social Research, 60, 421–439.
Krohn, K. R., Foster, L. N., McCleary, D. F., Aspiranti, K. B., Nalls, M. L., Quillivan, C. C., Taylor, C.
M., & Williams, R. L. (2008). Reliability of students’ self-recorded participation in class discussion
(submitted).
McKinney, K. (2000). Teaching the mass class: Active/interactive strategies that have worked for me. In
J. Sikora & T. O. Anoloza (Eds.), Introductory sociology resource manual (pp. 13–16). Washington
DC: ASA Teaching Resources Center.
Morrison, T. L., & Thomas, M. D. (1975). Self-esteem and classroom participation. The Journal of
Educational Research, 68, 374–377.
Porat, K. L. (1990). Listening: The forgotten skill. Momentum, 21(1), 66–68.
Schuelke, L. D. (1972). Subject matter relevance in interpersonal communication, skills, and
instructional accountability: A consensus model. Paper presented at the Annual Meeting of the
Speech Communication Association in Chicago, IL.
Smith, D. G. (1977). College classroom interactions and critical thinking. Journal of Educational
Psychology, 69, 180–190. doi:10.1037/0022-0663.69.2.180.
Sommer, R., & Sommer, B. A. (2007). Credit for comments, comments for credit. Teaching of
Psychology (Columbia, Mo.), 34, 104–106.
Weaver, R. R., & Qi, J. (2005). Classroom organization and participation: College students’ perceptions.
The Journal of Higher Education, 76, 570–601. doi:10.1353/jhe.2005.0038.