
J Behav Educ (2009) 18:173–188

DOI 10.1007/s10864-009-9083-8

ORIGINAL PAPER

Increasing Low-Responding Students’ Participation in Class Discussion

Lisa N. Foster · Katherine R. Krohn · Daniel F. McCleary · Kathleen B. Aspiranti · Meagan L. Nalls · Colin C. Quillivan · Cora M. Taylor · Robert L. Williams

Published online: 14 April 2009


© Springer Science+Business Media, LLC 2009

Abstract Students in six sections of a large undergraduate class were asked to record their class comments on notecards in all course units. Additionally, in some units, they received points toward their course grade based on their reported comments in class discussion. The study was conducted over a two-semester period, with slight variation in both the recording and crediting procedures across the two semesters. The primary goal of the study was to determine the effects of two credit and self-recording arrangements on initially low-responding students’ subsequent participation in class discussion (first semester n = 49, second semester n = 45). A higher percentage of low-responding students reported participating in class discussion when credit was given for participation than when no credit was awarded. Nonetheless, 39% of the initially low-responding students the first semester and 38% of the initially low-responding students the second semester did not participate in class discussion in any phase of the study.

Keywords Class discussion · College students · Low participation · Credit contingencies · Self-recording

Introduction

When managing discussion in large classes, one fundamental instructional


challenge is achieving a balance in discussion across students. At the extremes,
some students comment frequently in large classes, whereas others seldom or never

L. N. Foster · K. R. Krohn · D. F. McCleary · K. B. Aspiranti · M. L. Nalls · C. C. Quillivan · C. M. Taylor · R. L. Williams (corresponding author)
Department of Educational Psychology and Counseling, The University of Tennessee, Knoxville, TN 37996, USA
e-mail: bobwilliams@utk.edu


voluntarily comment. For example, Weaver and Qi (2005) reported that


approximately 25% of students participate in class discussion, with only 12%
doing so regularly. The class experience may be differentially beneficial for
participants and low responders. Anecdotally, we have found that some low
responders claim that they learn best by listening. These students need to be
assured that listening and participating are entirely compatible; in fact, effective
communication must include effective listening (Erway 1972; Porat 1990). Also,
remaining silent weakens the vitality of class discussion and leaves underdeveloped an important professional skill for the low responders—the ability to express their views in group situations.
Although one might expect that student participation is a reliable predictor of
course performance, minimal research has addressed this issue. Probably a more
accurate claim is that certain levels and types of participation may enhance
learning. For example, moderate levels of participation may facilitate learning
more than extreme levels. Plus, student questions for clarification about subject
matter and student responses to teacher content questions may be more beneficial
than student anecdotes regarding issues under discussion. Some research points to
positive effects of allowing students to discuss possible exam items on future exam
performance (Harton et al. 2002). Other educators have claimed that involving
students in class discussion can facilitate their critical thinking (Garside 1996;
Jones 2008; Smith 1977). This outcome would be most likely when class
discussion involves problem solving in which students provide evidence to support
their views.
General guidelines for promoting class participation begin with the instructor’s
management of the course. For example, subject matter relevant to students’
personal and professional interests is one way to enhance student discussion of the
subject matter (Schuelke 1972). When students can see a connection between what
is being discussed and issues in their own lives, they may be more likely to actively
engage in class discussion. After attempting to establish subject-matter relevancy,
instructors should then create a discussion climate that provides extensive
opportunity for students to engage in discussion. One suggestion would be to
initiate discussion early in the class period, which may be critical to having robust
discussion at later points in the class period.
On the student side, several factors can affect the ease of students’ class
participation. For example, preparation for class is imperative for informed class
discussion to occur (Fassinger 1995). Students should know what specific issues will
be discussed in a particular class period and should study the assigned content on
those issues prior to class. Even better, instructors should have students answer
questions in writing over the assigned content prior to coming to class. Also, the size
of the class and the location of the student’s seat in the room may affect the ease of
participation in class discussion (Karp and Yoels 1976; Morrison and Thomas
1975). Reticent students might benefit from selecting smaller classes when available
and sitting as close to the front and center of the room as possible.
In addition to the instructor’s scheduling homework assignments to help
students prepare for class discussion and providing ample opportunity for students
to participate in class, some students may need additional incentives to become


engaged in class discussion. For example, Boniecki and Moore (2003) used tokens
exchangeable for extra course credit to increase the number of student hands raised
to answer an instructor question, decrease the amount of time following an
instructor question until a hand was raised, and increase the number of questions or
comments from students. These variables were recorded by a research assistant
sitting in the back of the classroom. Similarly, Sommer and Sommer (2007) also
increased class participation by giving a small amount of course credit for class
participation on alternate days. Participation increased on both credit and noncredit
days but was significantly higher on the credit days. However, because neither of
these studies specifically targeted initially low responders, we do not know whether
such students found credit for class comments highly reinforcing.
Although awarding credit for participation sounds like a straightforward way to
improve class discussion, keeping track of individual student comments while
conducting class discussion could prove unmanageable for an instructor. In
examining a way to make assessment of class participation more manageable for the
teacher, Krohn et al. (2008) had students record their own comments in class. The
number of self-recorded student comments generally agreed (75–80% agreement)
with the number of comments recorded by observers. Disagreement regarding
number of comments usually resulted from students’ under-recording their
comments, resulting in very few instances where students claimed unwarranted
credit for participation. Krohn et al.’s self-recording procedures proved sufficiently
reliable and manageable with large classes for us to evaluate those procedures
specifically with the lowest responders in large classes.
A primary goal of the study was finding ways to promote class participation by
students who typically have a low rate of responding in large classes (50+
students). Obviously, some students in large classes regularly participate in class
discussions, while other students go through an entire course without volunteering
a single comment. Thus, we were particularly interested in determining whether a
small amount of contingent credit for voluntary participation would increase the
percentage of low-responding students who participated in class discussion, when
self-recording participation under both credit and non-credit conditions. Assigning
a small amount of credit for class participation, in combination with asking
students to record their own comments, would appear to be a relatively
inconspicuous way to promote low-responding students’ engagement in class
discussion. Because these students may be somewhat intimidated by the prospect
of earning course credit for participation, a small amount of credit for participation
presumably should encourage participation without creating excessive emphasis on
participation.
Thus, the focus of the current study was to assess the effects of two credit levels
(small vs. no credit), used in conjunction with two previously established self-
recording procedures, on the percentage of initially low-responding students who
subsequently participated in class discussion. The low-responding students approximated the bottom quartile of class participants in the baseline (i.e., the first unit) of
each section of the course. The credit and self-recording procedures were used in all
sections of the course over a two-semester period.


Method

Participants

Students were selected from six sections of a large undergraduate course in human
development required of all students who wished to enter the Teacher Preparation
Program in a large state university in the Southeastern United States. Each section
had approximately 55 students. The study was conducted over two consecutive
semesters. Students were selected on the basis of their participation levels in the first
of five units in the course (i.e., Physical Development). Although the intent was to
select the bottom quartile of participants in the first unit for each section, several ties
in percentile ranks within sections permitted only an approximation of the bottom
quartile of participants within each section. The students ultimately selected are
referred to as low responders throughout this manuscript. Initially, 49 students were
identified as low responders the first semester and 45 the second semester across
sections.

First Semester

For Section A, students in the bottom 31.9% (n = 15) were identified as low
responders, with the percentage of these students participating across baseline days
ranging from 23 to 73%. For Section B, students in the bottom 32.1% (n = 17) were
identified as low responders, with the percentage of them participating across
baseline days ranging from 6 to 41%. For Section C, students in the bottom 29.8%
(n = 17) were identified as low responders, with the percentage of them
participating across baseline days ranging from 0 to 47%.

Second Semester

For Section A, students in the bottom 33.3% (n = 12) were identified as low
responders, with the percentage of these students participating across baseline days
ranging from 17 to 36%. For Section B, students in the bottom 24.0% (n = 14) were
identified as low responders, with the percentage of them participating across
baseline days ranging from 0 to 8%. For Section C, students in the bottom 25.9%
(n = 19) were identified as low responders, with the percentage of them
participating across baseline days ranging from 0 to 6%.

General Discussion Arrangement

For 4 days in each of five units in all course sections during both semesters, students
were instructed to answer questions from their study guide over a specific section of
the instructor notes included in their reading material. Students reported at the
beginning of each class period either on a sign-in sheet passed around the class (first
semester) or on a record card (second semester) whether they had answered all the
assigned homework questions for that day. The homework assignments were
designed to prepare students for class discussions in the corresponding class


sessions. In all sections, instructors asked a combination of factual and comprehension questions concerning those issues. Virtually no instructor lecturing occurred
in the course. Instead, from the outset of each targeted class period each unit, the
instructor posed questions about the issues reflected in the instructor notes and
remained active in the discussion throughout the class period.
The agenda for discussion on a particular day was generally the same for all
instructors. These instructors were second-year graduate teaching assistants (GTAs),
who were coached by the same supervisor in the types of questions to ask about
content common to all sections. Although the instructors were given the liberty to
ask questions in their own style, they discussed similar issues across sections on any
particular day in the study. Additionally, the instructors were coached in how to
follow up on student comments—typically acknowledging the essence and
importance of each student comment and often linking the student’s comment to
a subsequent instructor question. Thus, although not specifically monitored
throughout the study, question and answer procedures used by each instructor
were designed to be highly similar across sections by equating content, discussion
agenda, and supervisory coaching. Also, because the most critical comparisons were
intra- rather than inter-participant, differences in instructional style across
instructors would be less damaging to the design of the study than differences in
instructional style across units within the same section.

Self-reporting of Class Comments

On the 4 days corresponding to the homework assignments in each unit, students


recorded their comments in class and submitted their participation records at the
conclusion of each class. On the three remaining days in each unit, class time was
largely devoted to video presentations, essay quizzes, practice exams, and multiple-
choice unit exams. As previously noted, the self-recording in the first unit (Physical
Development) was used to identify the low responders, whose self-recording
continued across the remaining units in the course (i.e., Cognitive Development,
Social Development, Psychological Development, and Values Development). Self-
recording was held constant across treatment phases (credit vs. non-credit), which
corresponded to the different units. Comments could consist of student responses to
content questions posed by the instructor, student content questions directed to the
instructor, and volunteered student opinions about content issues under discussion.
When students expressed opinions as agreement or disagreement with others’
perspectives, they were asked to articulate the rationale for their agreement or
disagreement for the comment to be recorded for course credit.
The current study employed two versions of student self-reporting of class
comments, both of which were used in the Krohn et al. (2008) study. In both
semesters, students recorded their comments each day on notecards they submitted
to the instructor at the conclusion of class. The notecard used the first semester was
a plain 3 by 5 notecard on which students wrote their name, the date, and any
comments they had made in class that day. As previously noted, questions, answers
to questions, or viewpoints concerning discussion issues were counted as comments.


Students were asked to number or bullet their comments so that instructors could
easily distinguish between the comments.
Because some students did not submit the 3 × 5 notecards when credit was not
given for participation during the first semester, a more comprehensive record card
that included space for recording other credit-producing activities, as well as
participation, was used the second semester. Consequently, if students did not
submit the expanded record card on a particular day, they received no credit for the
day. The second-semester record card precisely delineated space for recording
comments and checking attendance, display of namecards, and completion of
homework assignments. In both semesters, students submitted their participation
records on the four discussion days in each unit.

Reliability of Self-recorded Participation

Each semester two GTAs from another section of the course observed class
discussion 1 day in each unit. GTAs observed the 3rd day of each unit the first
semester and the 4th day of each unit the second semester. The selection of the
observation day was based primarily on the GTAs’ class schedules. The GTAs were
given the same instructions as the students regarding what constituted a comment
(included in the syllabus). The observers sat in the two front corners of the tiered
amphitheater in which the course was conducted. From their perspective, they could
see all of the students and the name cards in front of the students. The students had
been told that the GTAs would be observing the discussion. The same GTAs also
assisted with the administration of the unit exams and consequently were in the
classroom for the five observation days plus the five unit exam days across units,
allowing considerable time for students to habituate to their presence.
Inter-rater agreement between the low-responding students’ and observers’
records of class participation was established in a somewhat different manner from
that previously employed by Krohn et al. (2008) over a wide range of students in
large classes. In the Krohn et al. study, each observer’s total comments for a
particular student was compared with that student’s self-reported total for the day.
The smaller of the two totals was divided by the larger to determine percentage of
agreement between the observer and the student. Cases in which neither the student
nor the observer recorded a comment for the day were excluded from the reliability
assessment. The reliability calculations for each student were then averaged across
students in each section of the course for each unit.
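The smaller-divided-by-larger agreement calculation described above can be expressed as a short sketch (an illustration only, not the authors’ actual analysis code; the function name is ours):

```python
def percent_agreement(student_totals, observer_totals):
    """Mean smaller/larger agreement across a student's observed days,
    skipping days where neither the student nor the observer recorded
    a comment (zero-zero cases were excluded from reliability)."""
    ratios = []
    for s, o in zip(student_totals, observer_totals):
        if s == 0 and o == 0:
            continue  # excluded from the reliability assessment
        ratios.append(min(s, o) / max(s, o))
    return 100 * sum(ratios) / len(ratios) if ratios else None

# Example: student reported 2, 0, 1 comments; observer recorded 2, 0, 2
print(percent_agreement([2, 0, 1], [2, 0, 2]))  # 75.0
```

Averaging these per-student percentages across students in a section, as in Krohn et al. (2008), yields the unit-level reliability figure.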
Because of much smaller ns in the current study plus numerous cases in which
neither the low-responding student nor the observer recorded a comment,
correlational analyses that included zero–zero comparisons were used in quantifying
the level of agreement between student and observer participation records. For the
first semester, the correlations ranged from .76 to .97 across units: Unit 1 = .76,
Unit 2 = .87, Unit 3 = .97, Unit 4 = .88, and Unit 5 = .92. The average
correlation between student and observer records the first semester was .88. For
the second semester, the correlations across units ranged from .81 to .94: Unit
1 = .81, Unit 2 = .94, Unit 3 = .93, Unit 4 = .88, and Unit 5 = .90. The average
correlation between student and observer records the second semester was .89.
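The correlational alternative used here, which retains the zero–zero pairs that the smaller/larger method discards, amounts to a Pearson correlation between the two sets of comment counts. A minimal sketch (the data values are hypothetical):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between student-reported and observer-recorded
    comment counts; zero-zero pairs stay in the data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical counts for five low responders on one observation day
students = [0, 0, 1, 2, 1]
observers = [0, 0, 1, 2, 2]
print(round(pearson_r(students, observers), 2))  # 0.9
```

Including the zero–zero days matters for low responders, since agreement that a student made no comment is itself evidence of reliable self-recording.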


Credit Contingencies

Each semester allotted some credit for class discussion in two of the five course
units per section. However, the amount of credit was slightly different for the two
semesters. The first semester, students could receive one point for their first
comment and another point for a second comment. During the second semester,
students received two points for their first comment and one point for their second
comment. Our rationale for this change was that low-responding students may have
more difficulty in making their initial comment than in making additional
comments. Determination of credit for class participation was based strictly on
the participation reports of the students across all discussion days, even on days
when observers also recorded student comments.
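The two semesters’ credit schedules can be summarized in a small function (a sketch under our own naming; the study capped credit at two comments per class period):

```python
def participation_credit(num_comments, semester):
    """Points earned for self-recorded comments in a credit unit.
    Semester 1: 1 point each for the first and second comment.
    Semester 2: 2 points for the first comment, 1 for the second.
    Comments beyond the second earned no additional credit."""
    first, second = (1, 1) if semester == 1 else (2, 1)
    points = 0
    if num_comments >= 1:
        points += first
    if num_comments >= 2:
        points += second
    return points

print(participation_credit(1, semester=2))  # 2
print(participation_credit(3, semester=1))  # 2
```

The second-semester schedule front-loads the payoff, reflecting the rationale that a low responder’s first comment is the hardest to emit.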

Research Design

The same research design was used each semester. It was pre-determined that each
section of the course would have the credit contingency in two units, which would
be separated by a unit without credit for participation. This credit arrangement for
participation was described in the course syllabus. By random selection at the
beginning of each semester, two sections had a baseline, credit, non-credit, credit,
and non-credit sequence, with each phase equivalent to one unit in the course. A
third section had an extended baseline covering the first two units and then a credit,
non-credit, and credit sequence. Thus, the design included features of a reversal
and multiple-baseline across sections design. Students self-recorded comments in
all phases of the study. Students in each section were informed at the beginning of
each unit whether credit would be available for self-recorded participation that
unit.

Results

The findings are first presented for each semester and then compared across
semesters. Because of the relatively small ns, we analyzed the data through graphic
presentation rather than through analysis of variance. Figures 1 and 2 show the
mean percentage of low responders (identified by section in the baseline phase) who
participated in every phase of each section, displayed separately by semester.
Additionally, we identified the percentage of low-responding students showing
different levels of consistency in treatment effects in the various sections.

First Semester

Figure 1 shows the percentage of the initially low-responding students who participated during the various treatment and no-treatment phases. The mean percentage of low responders who participated was consistently greater in treatment than in no-treatment phases (including the baseline phase). The percentage of low-responding students who participated varied from 2% in a no-treatment phase (Section C) to 57% in a treatment phase (Section A). Within each section, there was at least a 30% increase in the percentage of low-responding students who participated from the lowest no-treatment percentage to the highest treatment percentage. On the other hand, there was a 10% increase in the percentage of low responders from the highest no-treatment percentage to the lowest treatment percentage.

Fig. 1 Percent of initially low-responding students participating in baseline, treatment, and withdrawal phases in the first semester (percentage participating plotted across the 20 recorded discussion days for Sections A, B, and C)

Table 1 Consistency of treatment effects for low responders in each section the first semester

Level of treatment effect    Section A       Section B       Section C
                             n    Percent    n    Percent    n    Percent
First treatment phase
  Not consistent (a)         2    13.3       7    41.2       9    52.9
  Partly consistent (b)      9    60.1       8    47.1       6    35.3
  Mostly consistent (c)      2    13.3       0     0.0       1     5.9
  Consistent (d)             2    13.3       2    11.7       1     5.9
Second treatment phase
  Not consistent (a)         7    46.7      10    58.8      13    76.5
  Partly consistent (b)      3    20.0       5    29.5       3    17.6
  Mostly consistent (c)      2    13.3       0     0.0       1     5.9
  Consistent (d)             3    20.0       2    11.7       0     0.0
Combined treatment phases
  Not consistent (e)         2    13.3       5    29.5       7    41.2
  Partly consistent (f)      9    60.1      10    58.8       9    52.9
  Mostly consistent (g)      1     6.6       0     0.0       0     0.0
  Consistent (h)             3    20.0       2    11.7       1     5.9

(a) Not consistent: participation on no treatment day exceeded the comparable baseline day
(b) Partly consistent: participation on treatment days exceeded 1 or 2 comparable baseline days
(c) Mostly consistent: participation on treatment days exceeded 3 comparable baseline days
(d) Consistent: participation on treatment days exceeded 4 comparable baseline days
(e) Not consistent: participation on no treatment day exceeded the comparable baseline day
(f) Partly consistent: participation on treatment days exceeded 1–4 comparable baseline days
(g) Mostly consistent: participation on treatment days exceeded 5–6 comparable baseline days
(h) Consistent: participation on treatment days exceeded 7–8 comparable baseline days
Although the data patterns point to strong and consistent treatment effects for the low-responding students as a group, some of these students participated minimally even under the treatment conditions. Table 1 classifies consistency of treatment effects for low responders as not consistent, partly consistent, mostly consistent, and consistent. These classifications were based on day-to-day comparisons of participation between the baseline and the two treatment phases: Not consistent was defined as no gain in daily participation in the treatment phases compared to participation on corresponding baseline days; partly consistent as more participation on one or two treatment days than on the comparable baseline days; mostly consistent as more participation on three treatment days; and consistent as more participation on all four treatment days.
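The single-phase classification rules just described can be sketched as a small function (an illustration under our own naming, applied to one treatment phase of four discussion days):

```python
def classify_consistency(baseline, treatment):
    """Classify a student's treatment effect from day-to-day comparisons
    of comment counts on the 4 treatment days versus the comparable
    baseline days."""
    gains = sum(t > b for b, t in zip(baseline, treatment))
    if gains == 0:
        return "not consistent"
    if gains <= 2:
        return "partly consistent"
    if gains == 3:
        return "mostly consistent"
    return "consistent"

# Hypothetical student: no baseline comments, comments on 3 of 4 credit days
print(classify_consistency([0, 0, 0, 0], [1, 0, 2, 1]))  # mostly consistent
```

The combined-phase categories in the lower panel of Table 1 apply the same comparison over all eight treatment days.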
The percentages of low-responding students classified as not consistent, partly
consistent, mostly consistent, and consistent in treatment effects differed somewhat


across sections of the course. Despite the effectiveness of the treatment for some
participants (consistent treatment effect for 0.0–20.0% across sections), all sections
showed a substantial percentage of participants in the ‘‘not consistent’’ category
(ranging from 13.3 to 76.5% across sections). To some degree, this lack of
improvement appeared to be a function of the baseline participation levels of several
low-responding students. Four of these students in Section A, one in Section B, and
three in Section C made two comments on at least one baseline day. Given that
these students were responding at what would be the maximum credit level (two
comments per class period) on some baseline days, a consistent treatment effect was
difficult to demonstrate for them.

Second Semester

Figure 2 shows that baseline percentages for students identified as low responders
differed across sections: about 26% of the low responders participated in class
discussion in the baseline phase for Section A, about 5% in Section B, and about 2%
in Section C. With regard to the latter section, the low responders did not make a
single comment on most baseline days (which spanned 8 days across the first two
units of the course).
Treatment comparisons for the low-responding students revealed consistent
treatment effects for all sections. Despite the substantial percentage of low-
responding students who participated during baseline in Section A (26%), an
average of 18% more of these students participated during the treatment than the no-
treatment phases (including the baseline phase). In Section B, 41% more of the
initially low-responding students participated in the treatment than in the no-
treatment phases. In Section C, 33% more of the initially low-responding students
participated in the treatment than in the no-treatment phases. Figure 2 suggests that
increased participation of low-responding students was dependent on the credit
incentive. For example, in addition to the small percentage of low-responding
students who participated in baseline, very low percentages also participated in the
credit-withdrawal phases across sections.
Although the data showed strong and consistent treatment effects for the low-responding students as a group in the second semester, some of these students participated minimally even under the treatment condition. As in Table 1, Table 2 classifies treatment effects for low responders as not consistent, partly consistent, mostly consistent, and consistent. These classifications were based on day-to-day comparisons of participation between the baseline and the two treatment phases, determined in the same manner as in the first semester.
Even though the percentages of low-responding students demonstrating not
consistent, partly consistent, mostly consistent, and consistent treatment effects
differed somewhat across sections of the course, all sections contained a substantial
percentage of participants in the ‘‘not consistent’’ category (ranging from 21.4 to
58.4% across sections). Despite the effectiveness of the treatment for other
participants (consistent effect for 14.2–31.6% across sections), some low responders
were unaffected by the credit contingency. A close examination of baseline data
showed that only one low responder made two comments (which would have maximized credit in the treatment phases) on any baseline day, leaving ample room for virtually all the low-responding participants to increase their participation to maximize credit under the credit contingency.

Fig. 2 Percent of initially low-responding students participating in baseline, treatment, and withdrawal phases in the second semester (percentage participating plotted across the 20 recorded discussion days for Sections A, B, and C)
Consequently, while the low-responding students as a group substantially
increased their percentage of participants under treatment conditions, a sizeable
percentage of individual low-responding students did not increase the consistency of
their participation. Nonetheless, consistent effects were more frequent in the second
than the first semester. Consistent treatment effects for the first semester ranged from 0.0 to 20.0% across sections, whereas consistent effects in the second semester ranged from 14.2 to 31.6% across sections (Table 2).

Table 2 Consistency of treatment effects for low responders in each section the second semester

Level of treatment effect    Section A       Section B       Section C
                             n    Percent    n    Percent    n    Percent
First treatment phase
  Not consistent (a)         6    50.0       3    21.4      11    57.9
  Partly consistent (b)      2    16.6       8    57.2       4    21.0
  Mostly consistent (c)      1     8.4       1     7.1       1     5.3
  Consistent (d)             3    25.0       2    14.3       3    15.8
Second treatment phase
  Not consistent (a)         7    58.4       5    35.7       8    42.1
  Partly consistent (b)      2    16.6       2    14.3       5    26.3
  Mostly consistent (c)      1     8.4       4    28.7       2    10.5
  Consistent (d)             2    16.6       2    14.3       4    21.1
Combined treatment phases
  Not consistent (e)         6    50.0       3    21.4       9    47.3
  Partly consistent (f)      2    16.6       5    35.7       4    21.1
  Mostly consistent (g)      1     8.4       4    28.7       0     0.0
  Consistent (h)             3    25.0       2    14.2       6    31.6

(a) Not consistent: participation on no treatment day exceeded the comparable baseline day
(b) Partly consistent: participation on treatment days exceeded 1 or 2 comparable baseline days
(c) Mostly consistent: participation on treatment days exceeded 3 comparable baseline days
(d) Consistent: participation on treatment days exceeded 4 comparable baseline days
(e) Not consistent: participation on no treatment day exceeded the comparable baseline day
(f) Partly consistent: participation on treatment days exceeded 1–4 comparable baseline days
(g) Mostly consistent: participation on treatment days exceeded 5–6 comparable baseline days
(h) Consistent: participation on treatment days exceeded 7–8 comparable baseline days

This difference may partly be attributable to the
higher levels of baseline participation in the first than the second semester, making
treatment gains less likely to be achieved in the first semester.
Otherwise, one would assume that the combination of second-semester changes
in the recording system (reporting other credit-producing activities and participation
on specially designed record cards) and the credit ratio for first and second daily
comments (two points for the first comment and one point for second comment the
second semester vs. one point for each of these comments in the first semester)
accounted for the difference in individual treatment effects. Certainly, the second-
semester credit ratio appeared to work better than the first-semester ratio.

Discussion

Although the designation of low-responding students was based on students’ self-report
of participation, observers agreed with this designation in virtually 100% of
the cases. Thus, the data on which the credit and non-credit comparisons were made
can be regarded as generally reliable. Virtually no students exaggerated their reported
participation to gain undeserved credit. Agreement between low-responding students’
records and observers’ records of class discussion ranged from .76 to .97 in the first
semester and from .81 to .94 in the second, with no consistent difference in level of
agreement under credit versus non-credit conditions.
The credit phases had a higher percentage of low-responding students partic-
ipating in class than did most non-credit phases. However, the percentage of low
responders participating in the credit phases was far from 100%. Moreover, in most
withdrawal-of-credit phases in the first semester, the percentage of low-responding
students participating in discussion was lower than in the baseline period for all
sections (suggesting the possibility of a contrast effect even for students who initially
participated infrequently in class discussion). This pattern was less pronounced in the
second semester but still occurred to a mild degree in Section A. Another possibility
is that the low-responding students were inclined to participate more than usual at the
outset of the course because of the novelty of self-recording, even without the
provision of credit for self-recorded comments.
Although more of the low-responding students participated in class discussion
when credit was given for participation than when no credit was available, we do
not yet know why a sizeable percentage of these students did not respond to the
credit contingency. Several questions remain to be answered about the absence of
treatment effects for these low-responding students. For example, was the amount of
credit too small to reverse their inclination toward reticence? The composite credit
given in the study represented only about 3–5% of the course credit across the two
semesters, permitting students to make an A in the course without participating in
class discussion. In the future, the credit for participation could be escalated to an
extent that an A would not be possible without considerable participation credit.
Nonetheless, some students in the study complained about awarding any credit for
participation, indicating a personal dislike for that arrangement.
Also, it may be that participation in class discussion must be initiated early in the
course or not at all. However, this hypothesis is based more on experiential
observation than empirical research. We suspect that the longer students wait to
engage in class discussion, the more difficult participation may become. Thus, credit
for participation could be given at the outset of the course and every day thereafter,
as opposed to only in certain units in the course. This arrangement might produce
earlier and more sustained participation than the arrangement used in the current
study. Although not a direct parallel to this suggestion, Auster and MacRone (1994)
reported that early participation in college courses is a reliable predictor of
participation in later college classes.
Because minimal participation may largely be linked to characteristics students
bring to a course, it might be helpful to know whether the reticence observed in our
course was typical of the low-responding students’ verbalization in other courses of
comparable size. Similarly, determining how long this tendency had persisted in
their schoolwork might be valuable information. It may be that some of our low-
responding students have a history of reticence in all their courses that would be
extremely difficult for them and their instructors to overcome in large courses.
Although most of these students were planning to be teachers, a profession requiring
substantial verbalization in group situations, some showed little inclination either to
demonstrate this skill or to work toward developing it.
A realistic assessment of what was accomplished or not accomplished in this study
leads to the conclusions that low-responding students can reliably record their
participation in class discussion with minimal overstatement of their participation and
that providing a small amount of credit for reporting comments will help some highly
reticent students become more engaged in class discussion. However, for some low-
responding students, their silence may partly be a function of limited incentive to
become involved in class discussion, with their silence indicative of either disinterest
in the discussion or lack of sufficient credit for participating. On the other hand, some
low-responding students may be so intimidated by the prospect of discussion that no
amount of extrinsic reward would enlist their engagement in discussion.
Howard et al. (2002) used surveys and interviews to identify possible reasons why
some students participate minimally in class discussion. These researchers compared
talkers (those who made two or more comments per class session) with non-talkers
(those who made fewer than two comments per class session). Talkers were more
likely than non-talkers to see participation as part of their responsibility in a course
rather than an expendable option. Conversely, non-talkers were more likely than
talkers to indicate that they had limited knowledge of the subject matter, that their
ideas were not well enough formulated, or that they were shy.
Some educators (Angelo and Cross 1993; McKinney 2000) have recommended
allowing students more time to formulate their ideas before asking them to discuss
those ideas in class. For example, students might be given a minute to write down
their ideas about an issue to be discussed or share their ideas with an adjacent
student before voicing those ideas with the total class. Large classes, such as the one
targeted in the current study, likely will need to provide time for students to share
their ideas in pairs or small groups to help low-responding students make the
transition to speaking before the class as a whole. Although being able to express
one’s views in large groups is a valuable skill for students to develop, its development
may need to begin inconspicuously and proceed gradually before some students
become comfortable sharing their views with large classes.
Although the current study showed that initially low-responding students can
make treatment gains in quantity of participation, an area that remains to be
researched is whether increased quantity will be accompanied by increased quality.
Several reports on student participation make reference to quality of participation
(e.g., Dallimore et al. 2004; Junn 1994). However, instead of operationalizing
quality of participation and then assessing quality within that operational
framework, these studies typically had students rate the global quality of discussion
in a course. Several studies have linked quality of student participation to the
concepts of higher order thinking and critical thinking, but with mixed results
regarding improvement of these thinking skills (Bradley et al. 2008; Ferguson 1986;
Smith 1977). Again, higher thinking levels were not assessed in terms of student
comments, but rather through written work, critical thinking tests, and student
ratings of satisfaction with the course experience.


Given the difficulty in judging the quality of student comments, especially students
judging the quality of their own comments, tracking quality will likely prove much
more complex than tracking quantity.
cues following each student’s comment as to the relevance of that comment or the
thinking level of the comment for the student to make an accurate and reliable
judgment as to the quality of the comment. Because of the added complexity of
assessing the student–teacher interaction related to quality of student participation,
we did not include that component in this initial study on student participation. We
viewed increasing participation by low-responding students as a sufficient first step
in engaging them in discussion. Both the self-recording and the credit contingencies
for quantity of participation can be implemented relatively easily to increase
percentage of participants.
The findings of this study point to the following conclusions: the percentage of
initially low-responding students who subsequently participated in class discussion
was moderately increased by giving a small amount of course credit for self-
reporting up to two comments each class session; treatment effects were consistent
when low-responding students were considered as a group; treatment effects were
inconsistent when participation of low-responding students was considered on an
individual basis (ranging from students who showed no consistency to students who
showed complete consistency in treatment gains from baseline to treatment phases);
and a small amount of credit for participation was not sufficient to mobilize
participation from the most reticent students. Teachers who are committed to
increasing the percentage of students participating in class discussion should
consider some kind of credit contingency to maximize voluntary participation from
most students, recognizing however that the most reticent students may not respond
to the credit contingency. Having students self-record their participation makes the
task of awarding credit for student responding more manageable and equitable.

References

Angelo, T. A., & Cross, K. P. (1993). Classroom assessment techniques: A handbook for college teachers
(2nd ed.). San Francisco, CA: Jossey-Bass.
Auster, C. J., & MacRone, M. (1994). The classroom as a negotiated social setting: An empirical study of
the effects of faculty members’ behavior on students’ participation. Teaching Sociology, 22, 289–
300. doi:10.2307/1318921.
Boniecki, K. A., & Moore, S. (2003). Breaking the silence: Using a token economy to reinforce classroom
participation. Teaching of Psychology, 30, 224–227. doi:10.1207/S15328023TOP3003_05.
Bradley, M. E., Thom, L. R., Hayes, J., & Hay, C. (2008). Ask and you will receive: How question type
influences quantity and quality of online discussions. British Journal of Educational Technology, 39,
888–900. doi:10.1111/j.1467-8535.2007.00804.x.
Dallimore, E. J., Hertenstein, J. H., & Platt, M. B. (2004). Classroom participation and discussion
effectiveness: Student-generated strategies. Communication Education, 53, 103–115. doi:10.1080/
0363452032000135805.
Erway, E. A. (1972). Listening: The second speaker. Speech Journal, 10, 22–27.
Fassinger, P. A. (1995). Professors’ and students’ perceptions of why students participate in class.
Teaching Sociology, 24, 25–33. doi:10.2307/1318895.


Ferguson, N. B. (1986). Encouraging responsibility, active participation, and critical thinking in general
psychology students. Teaching of Psychology, 13, 217–218.
Garside, C. (1996). Look who’s talking: A comparison of lecture and group discussion teaching strategies
in developing critical thinking skills. Communication Education, 45, 212–227.
Harton, H. C., Richardson, D. S., Barreras, R. E., Rockloff, M. J., & Latané, B. (2002). Focused
interactive learning: A tool for active class discussion. Teaching of Psychology, 29, 10–15.
doi:10.1207/S15328023TOP2901_03.
Howard, J. R., James, G. H., III, & Taylor, D. R. (2002). The consolidation of responsibility in the mixed-
age college classroom. Teaching Sociology, 30, 214–234. doi:10.2307/3211384.
Jones, R. C. (2008). The ‘‘why’’ of class participation. College Teaching, 56, 59–62. doi:10.3200/
CTCH.56.1.59-64.
Junn, E. (1994). ‘Pearls of wisdom’: Enhancing student class participation with an innovative exercise.
Journal of Instructional Psychology, 94, 385–387.
Karp, D. A., & Yoels, W. C. (1976). The college classroom: Some observations on the meaning of student
participation. Sociology and Social Research, 60, 421–439.
Krohn, K. R., Foster, L. N., McCleary, D. F., Aspiranti, K. B., Nalls, M. L., Quillivan, C. C., Taylor, C.
M., & Williams, R. L. (2008). Reliability of students’ self-recorded participation in class discussion
(submitted).
McKinney, K. (2000). Teaching the mass class: Active/interactive strategies that have worked for me. In
J. Sikora & T. O. Anoloza (Eds.), Introductory sociology resource manual (pp. 13–16). Washington
DC: ASA Teaching Resources Center.
Morrison, T. L., & Thomas, M. D. (1975). Self-esteem and classroom participation. The Journal of
Educational Research, 68, 374–377.
Porat, K. L. (1990). Listening: The forgotten skill. Momentum, 21(1), 66–68.
Schuelke, L. D. (1972). Subject matter relevance in interpersonal communication, skills, and
instructional accountability: A consensus model. Paper presented at the Annual Meeting of the
Speech Communication Association in Chicago, IL.
Smith, D. G. (1977). College classroom interactions and critical thinking. Journal of Educational
Psychology, 69, 180–190. doi:10.1037/0022-0663.69.2.180.
Sommer, R., & Sommer, B. A. (2007). Credit for comments, comments for credit. Teaching of
Psychology, 34, 104–106.
Weaver, R. R., & Qi, J. (2005). Classroom organization and participation: College students’ perceptions.
The Journal of Higher Education, 76, 570–601. doi:10.1353/jhe.2005.0038.
