Springer is collaborating with JSTOR to digitize, preserve and extend access to Journal of
Behavioral Education
J Behav Educ (2009) 18:173-188
DOI 10.1007/s10864-009-9083-8
ORIGINAL PAPER
Introduction
Method
Participants
Students were selected from six sections of a large undergraduate course in human
development required of all students who wished to enter the Teacher Preparation
Program in a large state university in the Southeastern United States. Each section
had approximately 55 students. The study was conducted over two consecutive
semesters. Students were selected on the basis of their participation levels in the first
of five units in the course (i.e., Physical Development). Although the intent was to
select the bottom quartile of participants in the first unit for each section, several ties
in percentile ranks within sections permitted only an approximation of the bottom
quartile of participants within each section. The students ultimately selected are
referred to as low responders throughout this manuscript. Initially, 49 students were
identified as low responders the first semester and 45 the second semester across
sections.
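The quartile-with-ties selection described above can be sketched as follows. This is an illustrative sketch, not the authors' procedure: the function name, the tie-breaking rule (expand past the cutoff to include all students tied at the cutoff value), and the sample data are ours.

```python
def select_low_responders(participation, target_fraction=0.25):
    """Return students whose participation falls in approximately the
    bottom quartile, expanding past the cutoff to include ties."""
    ranked = sorted(participation, key=participation.get)
    cutoff_index = max(1, round(len(ranked) * target_fraction))
    cutoff_value = participation[ranked[cutoff_index - 1]]
    # Ties at the cutoff value mean the selected group only
    # approximates the intended bottom quartile.
    return [s for s in ranked if participation[s] <= cutoff_value]

# Hypothetical participation rates for one section
section = {"s1": 0.10, "s2": 0.15, "s3": 0.15, "s4": 0.40,
           "s5": 0.55, "s6": 0.60, "s7": 0.70, "s8": 0.80}
low = select_low_responders(section)  # ties push selection past 25%
```

With the tie at 0.15, three of eight students (37.5%) are selected rather than an exact 25%, mirroring how the reported section percentages (e.g., 31.9%, 32.1%) exceed a strict quartile.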
First Semester
For Section A, students in the bottom 31.9% (n = 15) were identified as low
responders, with the percentage of these students participating across baseline days
ranging from 23 to 73%. For Section B, students in the bottom 32.1% (n = 17) were
identified as low responders, with the percentage of them participating across
baseline days ranging from 6 to 41%. For Section C, students in the bottom 29.8%
(n = 17) were identified as low responders, with the percentage of them
participating across baseline days ranging from 0 to 47%.
Second Semester
For Section A, students in the bottom 33.3% (n = 12) were identified as low
responders, with the percentage of these students participating across baseline days
ranging from 17 to 36%. For Section B, students in the bottom 24.0% (n = 14) were
identified as low responders, with the percentage of them participating across
baseline days ranging from 0 to 8%. For Section C, students in the bottom 25.9%
(n = 19) were identified as low responders, with the percentage of them
participating across baseline days ranging from 0 to 6%.
For 4 days in each of five units in all course sections during both semesters, students
were instructed to answer study-guide questions covering a specific section of
the instructor notes included in their reading material. Students reported at the
beginning of each class period either on a sign-in sheet passed around the class (first
semester) or on a record card (second semester) whether they had answered all the
assigned homework questions for that day. The homework assignments were
designed to prepare students for class discussions in the corresponding class
Students were asked to number or bullet their comments so that instructors could
easily distinguish between the comments.
Because some students did not submit the 3 × 5 notecards when credit was not
given for participation during the first semester, a more comprehensive record card
that included space for recording other credit-producing activities, as well as
participation, was used the second semester. Consequently, if students did not
submit the expanded record card on a particular day, they received no credit for the
day. The second-semester record card precisely delineated space for recording
comments and checking attendance, display of namecards, and completion of
homework assignments. In both semesters, students submitted their participation
records on the four discussion days in each unit.
Each semester two GTAs from another section of the course observed class
discussion 1 day in each unit. GTAs observed the 3rd day of each unit the first
semester and the 4th day of each unit the second semester. The selection of the
observation day was based primarily on the GTAs' class schedules. The GTAs were
given the same instructions as the students regarding what constituted a comment
(included in the syllabus). The observers sat in the two front corners of the tiered
amphitheater in which the course was conducted. From their perspective, they could
see all of the students and the name cards in front of the students. The students had
been told that the GTAs would be observing the discussion. The same GTAs also
assisted with the administration of the unit exams and consequently were in the
classroom for the five observation days plus the five unit exam days across units,
allowing considerable time for students to habituate to their presence.
Inter-rater agreement between the low-responding students' and observers'
records of class participation was established in a somewhat different manner from
that previously employed by Krohn et al. (2008) over a wide range of students in
large classes. In the Krohn et al. study, each observer's comment total for a
particular student was compared with that student's self-reported total for the day.
The smaller of the two totals was divided by the larger to determine percentage of
agreement between the observer and the student. Cases in which neither the student
nor the observer recorded a comment for the day were excluded from the reliability
assessment. The reliability calculations for each student were then averaged across
students in each section of the course for each unit.
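The Krohn et al. (2008) procedure described above can be sketched as a short calculation. The function name and daily counts are illustrative, not from the original study.

```python
def percent_agreement(student_totals, observer_totals):
    """Smaller daily total divided by the larger, averaged across days;
    days on which neither party recorded a comment are excluded."""
    ratios = []
    for s, o in zip(student_totals, observer_totals):
        if s == 0 and o == 0:
            continue  # zero-zero days excluded from the reliability check
        ratios.append(min(s, o) / max(s, o))
    return 100 * sum(ratios) / len(ratios)

# e.g., one student's comment counts on four observation days
student = [2, 0, 1, 3]
observer = [2, 0, 2, 3]
agreement = percent_agreement(student, observer)
```

Note that the zero-zero day is dropped before averaging, which is exactly the exclusion the current study could not afford given its many zero-zero cases.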
Because of the much smaller ns in the current study and the numerous cases in which
neither the low-responding student nor the observer recorded a comment,
correlational analyses that included zero-zero comparisons were used to quantify
the level of agreement between student and observer participation records. For the
first semester, the correlations ranged from .76 to .97 across units: Unit 1 = .76,
Unit 2 = .87, Unit 3 = .97, Unit 4 = .88, and Unit 5 = .92. The average
correlation between student and observer records the first semester was .88. For
the second semester, the correlations across units ranged from .81 to .94: Unit
1 = .81, Unit 2 = .94, Unit 3 = .93, Unit 4 = .88, and Unit 5 = .90. The average
correlation between student and observer records the second semester was .89.
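The correlational alternative used here can be sketched as a Pearson r over paired daily counts that retains zero-zero pairs. This is our illustrative sketch with made-up counts; the study's actual pairing of records is not reproduced.

```python
def pearson_r(xs, ys):
    """Pearson product-moment correlation of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Zero-zero pairs (days with no comments in either record) stay in
student = [0, 0, 1, 2, 0, 3]
observer = [0, 0, 1, 1, 0, 3]
r = pearson_r(student, observer)
```

Keeping the zero-zero pairs rewards agreement about non-participation, which matters when most observations for low responders are zeros.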
Credit Contingencies
In each semester, some credit was allotted for class discussion in two of the five
course units per section, but the amount of credit differed slightly between the two
semesters. The first semester, students could receive one point for their first
comment and another point for a second comment. During the second semester,
students received two points for their first comment and one point for their second
comment. Our rationale for this change was that low-responding students may have
more difficulty in making their initial comment than in making additional
comments. Determination of credit for class participation was based strictly on
the participation reports of the students across all discussion days, even on days
when observers also recorded student comments.
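The two credit schedules can be summarized in a small function. The function name is ours, not the authors'; the point values follow the description above.

```python
def discussion_points(n_comments, semester):
    """Credit for class participation on one discussion day.
    Semester 1: 1 point each for the first and second comment.
    Semester 2: 2 points for the first comment, 1 for the second."""
    if semester == 1:
        return min(n_comments, 2)
    first = 2 if n_comments >= 1 else 0
    second = 1 if n_comments >= 2 else 0
    return first + second
```

Under the second-semester schedule, the marginal value of the hardest step for a low responder, the first comment, is doubled, which is the rationale stated above.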
Research Design
The same research design was used each semester. It was pre-determined that each
section of the course would have the credit contingency in two units, which would
be separated by a unit without credit for participation. This credit arrangement for
participation was described in the course syllabus. By random selection at the
beginning of each semester, two sections had a baseline, credit, non-credit, credit,
and non-credit sequence, with each phase equivalent to one unit in the course. A
third section had an extended baseline covering the first two units and then a credit,
non-credit, and credit sequence. Thus, the design combined features of a reversal
design and a multiple-baseline-across-sections design. Students self-recorded comments in
all phases of the study. Students in each section were informed at the beginning of
each unit whether credit would be available for self-recorded participation that
unit.
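The phase arrangement can be sketched as follows, with one phase per course unit. The section labels are placeholders for the randomly assigned sections.

```python
# Phase sequence per section (one phase per unit), as described above.
sequences = {
    "sections_1_and_2": ["baseline", "credit", "non-credit",
                         "credit", "non-credit"],
    "section_3":        ["baseline", "baseline", "credit",
                         "non-credit", "credit"],
}
```

Staggering the extended baseline in one section is what supplies the multiple-baseline feature, while the credit/non-credit alternation supplies the reversal feature.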
Results
The findings are first presented for each semester and then compared across
semesters. Because of the relatively small ns, we analyzed the data through graphic
presentation rather than through analysis of variance. Figures 1 and 2 show the
mean percentage of low responders (identified by section in the baseline phase) who
participated in each phase of each section, displayed separately by semester.
Additionally, we identified the percentage of low-responding students showing
different levels of consistency in treatment effects in the various sections.
First Semester
Second Semester
Figure 2 shows that baseline percentages for students identified as low responders
differed across sections: about 26% of the low responders participated in class
discussion in the baseline phase for Section A, about 5% in Section B, and about 2%
in Section C. With regard to the latter section, the low responders did not make a
single comment on most baseline days (which spanned 8 days across the first two
units of the course).
Treatment comparisons for the low-responding students revealed consistent
treatment effects for all sections. Despite the substantial percentage of low-
responding students who participated during baseline in Section A (26%), an
average of 18% more of these students participated during the treatment than
during the no-treatment phases (including the baseline phase). In Section B, 41% more of the
initially low-responding students participated in the treatment than in the no-
treatment phases. In Section C, 33% more of the initially low-responding students
participated in the treatment than in the no-treatment phases. Figure 2 suggests that
increased participation of low-responding students was dependent on the credit
incentive. For example, in addition to the small percentage of low-responding
students who participated in baseline, very low percentages also participated in the
credit-withdrawal phases across sections.
Although the data showed strong and consistent treatment effects for the low-
responding students as a group in semester two, some of these students participated
minimally even under the treatment condition. As in Table 1, Table 2 classifies
treatment effects for low responders as not consistent, partly consistent, mostly
consistent, or consistent. These classifications were based on day-to-day
comparisons of participation between the baseline and the two treatment
phases and were determined in the same manner as in the first semester.
Even though the percentages of low-responding students demonstrating not
consistent, partly consistent, mostly consistent, and consistent treatment effects
differed somewhat across sections of the course, all sections contained a substantial
percentage of participants in the "not consistent" category (ranging from 21.4 to
58.4% across sections). Despite the effectiveness of the treatment for other
participants (consistent effect for 14.2-31.6% across sections), some low responders
were unaffected by the credit contingency. A close examination of baseline data
showed that only one low responder made two comments (which would have
Discussion