The present study compared gains in students’ critical thinking measured with general and subject-specific questions. Consistent with McMillan’s (1987) review and with more recent studies by Williams (2003) and Williams et al. (2004), larger pretest–posttest gains were found on tests containing questions related to the course in which the tests were given (e.g., introductory psychology) than on tests with questions focused on general topics. One possible reason for this finding is that although students are likely to be aware of the topics in general measures, they may not be required to engage in critical thinking about these everyday topics as often as they would be when studying a particular subject (e.g., psychology). In other words, it may be that as instructors teach the skills that foster critical thinking, much of the focus is within a subject-specific context. In addition, the length of time between the pretest and posttest was only about 40 min, and the subjects were first-year students. Therefore, the opportunity to engage in a variety of learning experiences over a longer time span, such as 3–4 years, ought to contribute to gains in critical thinking skills that may be more detectable with a general measure. In addition to being significant in both statistical and practical terms, this finding is noteworthy for another reason. As with intelligence, it is not unreasonable to think that an appreciable gain in critical thinking skills would occur only after exposure to a variety of courses and experiences over an extended period.
Therefore, given that one could expect only a small effect under the restricted conditions of this experiment, namely minimal exposure to the independent variable in a short time frame, these results are encouraging. Arguably, such minimal exposure would lead to a narrower range in the degree to which subjects had experienced the treatment, which would tend to attenuate its relationship with scores on the dependent variable. A related limitation concerns the short duration of the intervention: it becomes more difficult to determine the degree to which the students’ pretest and posttest scores actually reflect a level of critical thinking that was influenced by the treatment. Alternatively, given that the entire experiment (i.e., pretest, intervention, and posttest) took place within a single short session, it is possible that these findings are a function of other influences, such as the students’ orientation to what was required on the assessments. In future research, it will be helpful to reduce the effects of these confounding influences by assessing these skills more comprehensively (i.e., with varied measures) and over a longer period. An ongoing concern in controlled experiments like this one is the degree to which students put forth a genuine effort to complete the tasks involved. The students who took part in this study did so to help fulfill their research participation requirement in the introductory psychology course. This requirement is based solely on the number of research studies in which a student participates and has nothing to do with the quality of that participation. As an incentive for the students in this study to try their best in completing the critical thinking tests and the review questions, each student was promised a lottery ticket if his or her scores were above a hypothetical standard on all three tasks (i.e., pretest, review questions, posttest). In this study, the effectiveness of this incentive is questionable for two reasons.
First, although both groups showed a pretest–posttest improvement on the course-specific psychology subtest, it was somewhat surprising that the improvement was not larger, given that the critical thinking tests referred to a relatively short passage that students had read through for about 45 min. Second, the lower scores obtained on the review questions by students in the higher order condition (4.35 out of 8, on average) may suggest that these students were not engaged in higher order thinking as much as was expected. One way to address the concern regarding students’ motivation to try their best on each task in the experiment might be to provide a more valuable incentive. At the post-secondary level, one of the clearest and most immediate incentives is grades. Therefore, one possibility would be to
include the experimental tasks as a small part of a course with a corresponding weight in the final grade. To deal with the ethical issues involved, such an experiment would require a within-subjects design such that each student receives exactly the same materials from which he or she will be assigned a grade. The review questions could consist of both lower order questions that pertain to one half of the passage (e.g., based on one chapter) and higher order questions that pertain to the rest.
The findings in this study suggest that gains in students’ critical thinking skills are more clearly detected with items focusing on specific course content rather than on general issues assumed to be familiar to a student in any discipline. From this finding, two implications are worth noting. While Pascarella and Terenzini (2005) conclude in their extensive review that, controlling for incoming ability and maturational effects, most studies found a significant gain in critical thinking going from freshman to senior year, the format of assessing critical thinking used in these studies includes general, subject-specific, and self-rating measures. Thus, it is possible that some of the studies considered in their review that used general measures of critical thinking may have underestimated the extent of student gains. From another perspective, Halpern (2001) suggests that because critical thinking ought to be a skill that students should be able to use indefinitely after graduation, and therefore should be transferable to novel contexts outside of a particular course, gains over a longer enrollment (e.g., from first-year to fourth-year) might be more appropriately assessed with a general measure. Conversely, shorter term gains (e.g., one semester) might be more detectable with subject-specific measures.
The second implication has to do with identifying valid institutional (e.g., campus tutoring programs) and instructional processes (e.g., higher order questioning in class) in terms of their relation with intended outcomes, including gains in students’ critical thinking skills. Pascarella and Terenzini (2005) point out that while significant gains in critical thinking have been found as a result of attending college, we are only beginning to learn what actually contributes toward this improvement. Concerning research that examines the link between institutional and instructional processes and student outcomes, a more sensitive, course-based measure of critical thinking could help better determine the effectiveness of various educational processes. Using subject-specific measures of critical thinking may also help colleges to better assess gains in students’ critical thinking skills both within departments and across entire institutions, based on measures obtained from several departments. This would help to provide better empirical evidence concerning the factors that contribute toward student learning and development, which can be used to make more informed and justified choices both at the institutional level and within a particular course.