
Accepted Manuscript

Title: An Approach to Teaching Critical Thinking Across Disciplines Using Performance Tasks with a Common Rubric

Authors: Sarita Cargas, Sheri Williams, Martina Rosenberg

PII: S1871-1871(16)30159-6
DOI: http://dx.doi.org/10.1016/j.tsc.2017.05.005
Reference: TSC 440

To appear in: Thinking Skills and Creativity

Received date: 1-11-2016
Revised date: 31-3-2017
Accepted date: 21-5-2017

Please cite this article as: Cargas, Sarita, Williams, Sheri, & Rosenberg,
Martina, An Approach to Teaching Critical Thinking Across Disciplines Using
Performance Tasks with a Common Rubric. Thinking Skills and Creativity.
http://dx.doi.org/10.1016/j.tsc.2017.05.005

This is a PDF file of an unedited manuscript that has been accepted for publication.
As a service to our customers we are providing this early version of the manuscript.
The manuscript will undergo copyediting, typesetting, and review of the resulting proof
before it is published in its final form. Please note that during the production process
errors may be discovered which could affect the content, and all legal disclaimers that
apply to the journal pertain.
An Approach to Teaching Critical Thinking Across Disciplines Using Performance

Tasks with a Common Rubric

First Author
Sarita Cargas1, D.Phil., Assistant Professor
Affiliation: 1Honors College, University of New Mexico
Address: Honors College 2E, MSC 06 3890, 1 University of New Mexico, Albuquerque,
NM 87131-0001, USA
cargas@unm.edu
FAX 505-277-4271
Phone 505.277.7405

Second Author – Corresponding Author


Sheri Williams2, Ed.D., Assistant Professor
Affiliation: 2Department of Teacher Education and Educational Leadership, College of
Education, University of New Mexico
Address: Hokona Hall 380, MSCO5 3040, 1 University of New Mexico, Albuquerque,
NM 87131-0001, USA
ssw@unm.edu
FAX 505.277.0455
Phone 505.506.9550

Third Author
Martina Rosenberg3, Ph.D., Assistant Professor
Affiliation: 3Health Sciences Center, Department of Biochemistry & Molecular Biology,
University of New Mexico
Address: BRF 223 J, MSC 08 4670, 1 University of New Mexico, Albuquerque, NM
87131-0001, USA
MRosenberg@salud.unm.edu
FAX 505.272.6587
Phone 505.272.6778

Highlights

Title: An Approach to Teaching Critical Thinking Across Disciplines Using


Performance Tasks with a Common Rubric

• An approach to teaching critical thinking across disciplinary contexts is proposed.

• Performance tasks can be designed and explicitly taught using a common rubric with corrective feedback, aiding both the development and assessment of critical thinking.

• Performance tasks that prompt interpretation and evaluation-level thinking are powerful methods for improving the critical thinking skills of students in collegiate settings.

• Performance tasks may be used in many fields in higher education to foster student and instructor awareness of the tools and practices involved in critical thinking.

• Regular use of performance tasks in a problem-based learning environment can contribute to the transferability of critical thinking skills and dispositions.

Abstract

While there is a large body of literature discussing critical thinking in higher education,

there is a less substantial body of scholarship exploring methods for teaching it. There

are several tests being used nationally to assess critical thinking. Rather than just

assessing critical thinking, we explored the use of performance tasks with a common

rubric as a way of raising student and instructor awareness of the tools and practices

involved in critical thinking. In this exploratory study, faculty in three different fields,

Teacher Education, Social Sciences, and Life Sciences, designed performance tasks in a

problem-based learning environment that were appropriate to their disciplines and

aligned to the skills of critical thinking. Although the tasks differed for each cohort, they

were structured similarly and explicitly taught using a common rubric with corrective

feedback, aiding both the development and assessment of critical thinking. Students

completed a pre-post assessment on a critical thinking assessment test. Some cohorts

evidenced measurable improvements in critical thinking skills, with less discernible

improvement among other cohorts. Qualitative results tended to confirm the value of

student participation in rigorous and challenging performance tasks. We conclude that

using performance tasks with corrective feedback on a common rubric may be useful in

many fields. We further suggest that regular use of performance tasks in a problem-based

learning environment can contribute to the transferability of critical thinking skills and

dispositions.

Keywords: critical thinking; performance tasks; disciplinary contexts; higher education

1. Introduction

There is consensus about three aspects of U.S. higher education related to transferable skills today: one reflects positively on colleges and universities, and two reflect negatively. On the up side of the scale, faculty and administrators agree that a central function of pedagogy is to teach critical thinking (Arum & Roksa, 2010; Bailin & Siegel, 2003; Bok, 2006). Tilting the scale down, data show higher education is in general not

successfully teaching critical thinking at the tertiary level (Gabennesch, 2006; Halx &

Reybold, 2005; Pascarella & Terenzini, 2005; Snyder & Snyder, 2008; Stedman &

Adams, 2012; Willingham, 2008). Other studies indicate faculty members are not sure

what critical thinking is, nor are they able to define it, complicating attempts to investigate

and ameliorate the discrepancy (Halx & Reybold, 2005; Stedman & Adams, 2012).

There are several instruments measuring critical thinking that are currently used

nationally, namely the Critical Thinking Assessment Test and the Collegiate Learning

Assessment. We wanted to know if it was possible to use one of the tests as a model for

designing learning experiences that would teach students critical thinking in almost any

discipline, and at the same time help faculty become more skilled at facilitating critical

thinking. We chose the Critical Thinking Assessment Test (CAT) for several reasons.

First, the CAT is validated for assessing critical thinking and creativity generically, i.e.,

independent of the context of a particular discipline. Second, unlike other instruments

the CAT does not rely on self-report and it avoids the reliability problems with self-

reported data (Boud & Falchikov, 1989; Ross, 2006). Lastly, the CAT itself models a

performance task such that the assessment of defined skill sets could be integrated

without being perceived by the learner or the instructor as unrelated to the course. We

conjectured that if a formulaic way of creating performance tasks could be developed that

would teach students the language and skills of critical thinking, then it also could be

useful for promoting the disposition of critical thinking among students in the higher

education setting.

In what follows we begin with a discussion of the recent literature on critical

thinking. We concur with the urgent need to improve the pedagogy of critical thinking,

present a discussion of how it is defined, and review the data about selected instructional

practices that do and do not work. This includes an examination of the arguments for and

against teaching critical thinking in disciplinary contexts; that is, there is a debate about

whether or not critical thinking should be taught in stand-alone courses or if the skills

should be taught along with or applied to disciplinary material. We then present

examples of performance tasks we created and suggest a generic design that can be used

in many fields thereby demonstrating that teaching critical thinking can be based in

disciplinary content on a common rubric while at the same time illuminating the deeper

structure and fostering the transferability of the skills. In the study section of our paper,

we describe how we designed our common rubric and intervention for critical thinking

and share the results of our research. Finally, we present recommendations for further

exploration.

1.1.1. Critical thinking discussions in the academic literature

Daniel Kahneman, who won a 2002 Nobel Prize for his work on decision making

and judgment, provides evidence that critical thinking by any definition is not something

we do automatically or even can be trained to do automatically (2011). He claims that

thought processes are driven more by the qualities of speed and cognitive laziness. We

rely on “biases and heuristics” for the sake of fast decisions, which provide shortcuts in

thinking (p. 8). Common biases, which Kahneman emphasizes no one can escape,

include: “the halo effect” as when we like or dislike everything about an idea or person

based on limited information; “the mere exposure effect” when we like or believe

something because we have been exposed to it frequently; “the confirmation bias” or the

tendency to only recognize information that we agree with rather than seeking out

additional information; and the “affect heuristic” which is relying on emotions rather than

evidence when making a decision, to list only a very few stumbling blocks for complex

thinking (pp. 57, 64).

To counteract this trap evolution set up for us, Kahneman argues we must train

ourselves to recognize a situation that is likely to create implicit bias, and to intentionally slow

down and engage our deliberative system. Translated into critical thinking pedagogy,

students must be taught the habits of detecting and preventing these biases by knowing

when to pause to analyze an argument or statement, looking for alternative information,

and becoming conscious of doing all of the above. This conclusion is supported by

others who maintain that critical thinking is not a skill that is developed as a byproduct of

merely maturing (Walsh & Paul, 1986), that it is challenging and difficult to do (Dweck,

2002; Kuhn, 2000; Marin & Halpern, 2010) and that all around us there “are powerful

messages that confound efforts to think critically” (Marin & Halpern, 2010, p. 3).

Examples of the latter point include advertising that appeals to our emotions and sound

bites that simplify complex issues.

It is not surprising that the research also reveals that just being educated at any level

does not necessarily impart the ability to think critically. Scholars have speculated that

unless critical thinking is being taught explicitly it is not being taught effectively, even

though faculty may assume it is. Marin and Halpern (2010) found this to be true for

primary and secondary education, i.e., critical thinking was not being taught in teacher

training. And although teaching critical thinking is an accreditation requirement in

several fields such as nursing, Staib (2003) finds little evidence that it is actually being

taught in nursing programs. These findings are consistent with those that suggest that

“faculty’s knowledge of perceptions and concepts of critical thinking is severely lacking”

(Stedman & Adams, 2012, p. 9) and that “few are prepared to teach critical thinking”

(Halx & Reybold, 2005, p. 313).

1.1.2. Importance and urgency

Various scholars and stakeholders use the word urgency when calling for more

teaching of critical thinking: “Educators are not alone in their concern about the urgency

of teaching and learning critical thinking” (Abrami, Bernard, Borokhovski, Wade,

Surkes, Tamim, & Zhang, 2008, p. 1103). Abrami et al. cite Tsui’s (2002) finding that in

the United States, “a national survey of employers, policymakers, and educators found

consensus that the dispositional as well as the skills dimension of critical thinking should

be considered an essential outcome of a college education” (2008, p. 1103). Attention to

critical thinking is especially warranted at a time when the quality of university education

is being closely scrutinized (Arum & Roksa, 2010; Fethke & Policano, 2012). All this

points to the growing national recognition of the urgency of teaching critical thinking in

academia.

Because of this unfortunate situation in higher education and its ramifications on a

national scale, we argue there is an urgent need to develop a manageable method for

faculty to teach critical thinking. We will demonstrate that it is possible to use problems

germane to one’s discipline to promote students’ critical thinking. Although we did not

assess faculty improvement in critical thinking, it is worth noting that our method

requires instructors to become more familiar with their own understanding and

dispositions toward critical thinking. Again, research demonstrates that faculty are not

sure what critical thinking is and are not teaching it explicitly, which means they may not

be teaching it at all.

1.1.3. Definitions of critical thinking

Before we can begin to discuss methods of teaching critical thinking, we need to

understand its characteristics. We find that scholars who study critical thinking utilize a

range of definitions to describe it. Most contain overlapping elements and a few add

unique ones. Some descriptions focus on the skills involved in engaging in critical

thinking, some describe what having a disposition toward critical thinking consists of,

whereas others contain both. For example, Brookfield (2012) defines critical thinking in

terms of these skills: “identifying assumptions, checking out the validity of them,

examining different perspectives, and then making informed decisions” (p. 1). Halpern

(1998) also focuses on skills when she describes the activity as involving “solving

problems, formulating inferences, calculating likelihoods, and making decisions” (pp. 450-451). In addition to defining critical thinking as involving analytical thinking and problem solving, Bok (2006) includes the element of reflectiveness that is also found in

many definitions. Paul claims critical thinking is “thinking about your thinking, while

you’re thinking, in order to make your thinking better” (In Nosich, 2009, p. 2). Ennis

argues that reflective thinking is an essential component of critical thinking; it is

“reasonable, reflective thinking that is focused on what to believe or do” (1989, p. 4).

Less often in the definitions we find the requirement to see “both sides of an issue” and

“being open to new evidence that disconfirms your ideas” (Willingham, 2008, p. 21).

One educator of social justice goes so far as to say critical thinking must include a focus

on “information from multiple, non-dominant perspectives…de-centering students’

analytical frame…analyzing the effects of power and oppression” (Hackman, 2005, p.

106).

None of these definitions are recognized as confined to a discipline. All of them

assert that critical thinking is a general skill and/or disposition. However, some

disciplines adapt these definitions. Staib (2003) states, “some might think the scientific

method sums up critical thinking for the sciences” (p. 499). Likewise, Marin and

Halpern (2010) write that the goals of scientific literacy as stated by the National

Academy of Sciences; i.e., “observational and interpretive skills and making and

evaluating evidence” are similar to definitions of critical thinking (p. 12). Staib (2003)

also explains that an international committee of nurse experts took the American Philosophical Association’s consensus definition of critical thinking (APA, 1990) and did its own study to determine if

they agreed with it. They did accept most of the APA criteria, but amended the

disposition aspect to include “creativity and intuition” (p. 499), similar to the Australian

tradition of including creativity in educational goals (Newton & Fisher, 2009).

1.1.4. Arguments for and against teaching critical thinking in disciplinary contexts

Although definitions of critical thinking are not unique to a single discipline, there

is some debate about whether or not critical thinking can be taught in a general course

focused only on critical thinking and independent of the discipline. Although countless

undergraduate critical thinking classes are taught every year, a preponderance of scholars

argue against such stand-alone courses because they find critical thinking is more

successfully taught within the context of discipline specific knowledge (Bailin & Siegel,

2003; Hatton & Smith, 1995; Pithers & Soden, 2000). However, Royalty (1991) and Sa,

Stanovich, and West (1999) defend the need for specific critical thinking courses where

learning critical thinking is the main outcome. In their meta-analysis Abrami et al. use

Ennis’ typology of four ways of teaching critical thinking:

“…general, infusion, immersion, and mixed…. In the general course, critical

thinking skills and dispositions are learning objectives, without specific subject

matter content. In contrast, content is important in both the infusion and

immersion approaches. Critical thinking is an explicit objective in the infusion

course but not in the immersion course. In the mixed approach, critical thinking

is taught as an independent track within a specific subject matter” (2008, p. 1105).

Based on their analysis of over 150 studies, the authors found that the mixed approach of

teaching specific skills of critical thinking, along with applications to problems and

concepts of practice, had the largest effect on student learning. This lends support to our

choice of designing performance tasks with a common rubric in a problem-based learning

environment to teach critical thinking that can be adapted across disciplines.

1.1.5. Teaching critical thinking

Before we present our approach for teaching critical thinking, we review the

practices for teaching critical thinking that have been studied in the literature. In 2012,

Shim and Walczak reported that there was not a great deal of research on “specific

instructor-driven instructional practices that affect students’ critical thinking” (p. 16).

They report on two practices as effective, namely asking “challenging questions” in class

and requiring that students “integrate” ideas either through assignments that compare and

contrast or examine multiple perspectives (p. 24). Tsui (1999) also finds something

similar in that writing involving analysis, more than description, facilitates critical

thinking. And Browne and Freeman (2000) concur that asking frequent evaluative

questions in class aids critical thinking. Engagement and active learning are also cited in

the literature (Browne & Freeman, 2000; Snyder & Snyder, 2008; Gottesman & Hoskins,

2013). The latter present evidence that these practices are more effective than lectures

and memorization. The more active pedagogies are supported in an extensive report on

instructional practices found in the meta-analysis of Abrami, Bernard, Borokhovski,

Waddington, Wade, and Persson (2015). The authors present data on 341 effect sizes

from studies “that used standardized measures of critical thinking as outcome variables”

(p. 275). The studies they included were published between 1930-2009 with the

preponderance appearing post 2000, “i.e., 1990–1999, 26.6%; 2000–2009, 44.6%, for a

combination of more than 70%” (p. 292). They conclude that critical thinking disposition

and skills are facilitated by three kinds of pedagogy: various kinds of dialogue, using

authentic problems and examples, and mentoring. In discussing dialogue, the authors

include critical dialogue, debates, whole class and small group discussions. Authentic

problems and examples are defined as students addressing problems, including role-

playing. Examples of mentoring are one-on-one teacher-student interaction, peer-led

dyads, and internships. Other scholars also find moderate to high effect sizes with

pedagogy that provides recognition and regular feedback to students, leading to

improvement of critical thinking (Hattie, 2007; Huba & Freed, 2000; Marzano, Pickering

& Pollack, 2001). Black and Wiliam cite studies that show “firm evidence that

innovations designed to strengthen the frequent feedback that students receive about their

learning yield substantial learning gains” (1998, p. 1). In addition, Shiel reports “the

development and implementation of performance tasks supports the development of

assessment-capable learners, which has an effect size of 1.44 when the average effect size

is 0.40” (2017, p. 24).
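
For readers less familiar with the effect-size metric invoked in these claims, the following minimal Python sketch shows how a standardized mean-difference effect size is computed; Cohen's d is one common variant. The group statistics below are invented purely for illustration and do not reproduce any calculation from the cited studies.

    import math

    # Hypothetical group statistics: treatment vs. control means, SDs, and sizes.
    mean_treat, sd_treat, n_treat = 78.0, 10.0, 30
    mean_ctrl, sd_ctrl, n_ctrl = 74.0, 10.0, 30

    # Pooled standard deviation, then Cohen's d = mean difference / pooled SD.
    pooled_sd = math.sqrt(((n_treat - 1) * sd_treat**2 + (n_ctrl - 1) * sd_ctrl**2)
                          / (n_treat + n_ctrl - 2))
    d = (mean_treat - mean_ctrl) / pooled_sd
    print(f"Cohen's d = {d:.2f}")  # 0.40 here, matching the 'average' effect size cited above

On this reading, the quoted effect size of 1.44 for performance tasks describes a mean difference of nearly one and a half pooled standard deviations, far above the 0.40 average.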

While proficiency in critical thinking is generally context- and discipline-dependent, we concur with van Gelder (2005) that critical thinking skills are

transferable; that is, if students can develop a disposition for critical thinking, the

disposition will facilitate transfers across subject domains when students develop an

orientation of applying critical thinking to their work. Also, if performance tasks based

on a common rubric are used in problem-based environments with corrective feedback,

such interventions might indeed help students comprehend the underlying structure of

critical thinking and promote more frequent and deliberate use. This has been shown to

be essential for transferability (Bailin, Case, Coombs, & Daniels, 1999; Lang, 2016; van

Gelder, 2005).

As Kelly-Riley substantiated in a university-wide project, faculty can use rubrics

to “promote higher-order thinking abilities within specific contexts” (2007, p. 37). Thus,

we emphasize that our intervention in the creation of performance tasks with feedback

based on a common rubric is usable in almost any problem-based environment. Since it

has been established that there is a paucity of critical thinking instruction, providing

faculty a useful and manageable method for teaching the skill has the potential to make

an important contribution to undergraduate education.

1.2. Research questions

Two essential questions motivated this exploratory study: (1) can we create a

problem-based learning environment including performance task interventions based on a

common rubric that would improve students’ critical thinking skills and dispositions?

And (2) can a common intervention approach focused on critical thinking skills and

dispositions be used across disciplines?

2. Methods and measurements

To investigate the research questions in our exploratory study, both quantitative

and qualitative measures were used. Data were collected using the CAT instrument at the

beginning and end of five semester-long undergraduate courses. A questionnaire was

administered near the completion of the semester to assess student perceptions of their

mastery and use of critical thinking skills as a result of their classwork. Students’ written

reflections were analyzed to determine student perceptions of the quality of their own

thinking skills and strategies for solving the performance tasks. Instructors were queried

on their reflections of the use of performance tasks and feedback to improve critical

thinking skills and dispositions.

2.1. Study design

This exploratory study involved convenience sampling of 63 students in five

lower- and upper-division undergraduate classes in a research-intensive university in the

American southwest in the spring 2015. Classes represented three different scholarly

fields with their own disciplinary culture. Participants included 28 students in two

Teacher Education classes, 28 students in two Social Sciences classes, and seven students

in one Life Sciences class. Data collection for this study was primarily conducted during

class time, with the exception that Life Sciences students were asked for responses to the

questionnaire and reflection on their critical thinking skills outside of class. This

additional student effort may explain the lower participation rate in Life Sciences.

Demographics were collected from all participants in the study. Most were

juniors and seniors (66.6%). The majority were female (78.1%) and 18-20 years of age

(54.7%). A majority reported their race as White (78.1%) and 39.1% identified their

ethnicity as Spanish/Hispanic/Latino. Nearly all (90.6%) considered English as their

primary language. The sample was atypical of the student population at the institution;

i.e., female and White students were overrepresented in the study. The study sample

was 78.1% female while the institution was 55% female, and the study sample was

78.1% White while the institution was 46.4% White. This disproportionality may be

partly explained by the lack of diverse candidates in the Social Sciences and Teacher

Education cohorts.

2.2. Class Structure in the three Disciplines

2.2.1. Teacher education

Two undergraduate Teacher Education classes at the 300- and 400-level were

included in the study. Both classes were part of the core curriculum for teacher licensure.

The classes met face-to-face in seminar rooms where instructors facilitated learning in

small- and large-group discussions. Course structure involved independent completion of

required readings, participation in individual- and group-assignments, graded

assignments at four-to-six week intervals, and end-of-course written assessments. One

class addressed issues of literacy development across the core content areas in the

secondary school. The other class addressed the study of oral and written forms of

language development and teacher-child interactions in the elementary school.

2.2.2. Social sciences

One class was a 200-level Social Science general education course based in the

Honors College. The classes were small, taught seminar-style, and employed many

active learning strategies. The focus of this particular class was on defining the

interrelationship of globalization and human rights, and then using world hunger as a case

study involving both topics. The assessments included low- and high-stakes writing

assignments. The other was a 300-level class focused on an examination of critical

thinking through current issues.

2.2.3. Life sciences

This upper-level Biochemistry class, the second part of a two-semester series addressing metabolism in context, had an organic chemistry prerequisite. It enrolled

about one hundred third-year undergraduate majors and a few post-baccalaureate

students, most of whom planned to attend medical or other professional schools in the

health sciences. The course structure relied on a student-centered format that included pre-class reading and low-stakes assignments, in-class peer learning, application-level

individual or group assignments emphasizing scientific and quantitative literacy, mostly

graded, and summative assessments including three exams and a final.

2.3. Procedures

In our exploratory study the procedures involved adaptation of a common rubric

for use across the three disciplines in a problem-based learning environment (2.3.1),

design and implementation of a performance task intervention (2.3.2), pre-and-post

assessment of critical thinking skills (2.3.3), post assessment of student confidence in

using critical thinking skills (2.3.4), and student and instructor reflections (2.3.5). We

first adapted a common rubric based on elements of critical thinking and then developed

performance tasks for each discipline. Thus, all faculty used the common rubric while the

performance tasks differed.

2.3.1. The common rubric

The common rubric employed in this study, as shown in Appendix A, was based

on an application of specific problem-solving skills published by Mitchell, Anderson,

Sensibaugh, & Osgood (2011) and informed by the works of Andrade (2000). Our

adaptation of the rubric guided instructors in the design of performance tasks, provided

explicit expectations for students, and allowed instructors to apply the rubric to student

work across the disciplines. The criteria in the rubric were made explicit to the students

and served several important pedagogical purposes, namely to articulate the expected

learning outcomes of the performance tasks, to spell out the criteria and indicators of

critical thinking as a guide to problem resolution, to support student self-evaluation in

accord with the rating labels, and to help instructors identify student weaknesses and

track individual progress. Discussion surrounding the rubric – namely giving it to

students before they worked on the performance task and sharing the results of applying

the rubric to their answers – was an essential part of the creation of the learning

environment. It is in these discussions that the elements of critical thinking were made

explicit, which was necessary for learning to take place. The discussions also provided group

reflection on any issues that arose in the utilization of the skills. The problem that was the

topic of each performance task was part of the content of the discussions as well.

2.3.2. The performance tasks

In our exploratory study, performance tasks consisted of a set of written materials

and a set of higher-order questions for problem resolution, as aligned with the

expectations in the common rubric. In the performance tasks, students constructed a

solution, created a written product, or presented their results for peer and instructor

review. From this perspective, performance tasks differed from routine lessons in that

students progressed through a sequence of steps that encompassed a wide range of

activities appropriate across the disciplines (Marzano, Pickering & McTighe, 1993). As

McTighe and Ferrara (1998) explain, extended performance tasks serve as “an

assessment activity… that elicits one or more responses to a question or problem, and includes elements of effective problem solving, including meaningful context, thinking processes, and appropriate performances” (p. 43).

The goal of the performance tasks was to provide students with (1) practice in

using the skills of critical thinking; i.e., making claims or hypothesizing, analyzing,

evaluating, and integrating or synthesizing, and (2) assessment of these skills scored on a

common rubric to provide actionable feedback to students. We chose these skills, because

they are common to the definitions of critical thinking in the literature and recognized as

essential to problem solving. Thus, they are the success criteria in our common rubric.

We also included reflection as an essential element in developing critical thinking skills

and dispositions (Dewey, 1933; Hatton & Smith, 1995; Schön, 1991).

2.3.2.1. Format of performance tasks in teacher education

In the secondary Teacher Education class, we designed a task for students to

explore a controversial issue regarding arguments for and against grade-level retention.

Students completed an independent evaluation of the arguments as presented in a variety

of sources, including peer-reviewed research papers, relevant state- and national-reports,

and editorials and commentaries. Similar to the problem-based learning process defined

by Wood (2003), students met in small groups to identify conflicting values in the

documents and to create new claims in response to the arguments. Students produced a

written synthesis demonstrating their independent responsibility for learning. Another

task for the secondary teacher education class involved an extensive critical analysis of

the extant research on the teaching of literacy, culminating in a presentation of a

defensible and informed annotated bibliography. Students were expected to redefine or

extend traditional literacy stances, evaluate the extent of student-centered voice in

literacy learning, and make connections to the larger context of effective literacy practice.

In the elementary Teacher Education class, students examined a set of

predetermined articles in the teaching of writing genres in elementary classrooms, with

relevant data and irrelevant distractors, and then selected the most relevant information to

make a claim about gaps in current teaching practices. Students constructed new

approaches utilizing evidence from multiple texts to support their reasoning and

decisions, constructed innovative approaches, and tested their approach in presentations

for instructor and peer review. Another task for the elementary teacher education class

was designed using the same genres of sources but the topic was identifying evidence-

based best practices in teaching language acquisition in the elementary grades and

incorporating the information into a personal knowledge base for effective instruction.

Feedback was provided to the students on the extent to which they used critical thinking

skills/thought processes to effectively research, formulate questions, and construct new

approaches that could be used by aspiring classroom teachers to maximize teacher-child

engagement and interaction.

2.3.2.2. Format of performance tasks in social sciences

We designed the first performance task for Social Sciences on the safety of

genetically modified organisms (GMOs) for human health. We selected an excerpt from a

peer-reviewed journal article, an article from a popular magazine, an article from a self-

proclaimed specialist on the Internet, and excerpts from several book chapters which

reviewed the same experiment found in the Internet article but with different conclusions.

Thus, with a total of less than twenty pages of text, students were exposed to different

genres of sources, with differing qualities of evidence, and with stronger and weaker

arguments. In writing, students were first asked to state each of the author’s main claims,

then analyze the evidence provided for those claims, and then evaluate the conclusion.

Students were asked to synthesize the findings in the combined readings and draw a

conclusion about the safety of GMOs. We then used the common rubric to assess and

provide feedback to the students on their work. Finally, students were asked to reflect

about their use of critical thinking after completing the performance task.

The second performance task for the Social Science classes was designed using

the same genres of sources but the topic was the health claims of the Paleo Diet.

2.3.2.3. Format of performance tasks in life sciences

The performance tasks for students in the Life Sciences class used the same rubric

for assessing student responses but the topic of the performance task, while complex, was

not a controversial issue. Instead students reviewed a clinical case study, created

hypotheses, identified data to narrow down the hypotheses, analyzed/evaluated data,

integrated the findings, and reflected on the skills/thought processes needed to complete

the case. The task used in the Life Sciences class was most closely modeled on the work of Mitchell, Anderson, Sensibaugh, & Osgood (2011), who emulated procedural skills and the scientific method. In an interrupted case-study format, students were asked to form a hypothesis about a provided observation, design an experiment (identifying dependent and independent variables, constant factors, and controls) for a given hypothesis, evaluate given results in the form of quantitative data, and integrate that information to arrive at the most likely explanation of the scenario. The context of the

first task was related to the efficacy of plant-derived medicines and the second task used

a patient case focused on dysregulation of metabolic mechanisms in biochemistry.

2.3.3. Pre- and post-assessment of critical thinking

Critical thinking skills were measured using the Critical Thinking Assessment

Test (Stein, Haynes, & Redding, 2016) as a pre- and post-test. The secure test consisted

of 15 questions, mostly short essay responses to topics unrelated to the course content,

which took students about one hour to complete. As mentioned in the introduction, we

chose the CAT notably for validity, reliability, cultural fairness, and authenticity. The

instrument showed no floor or ceiling effect and included a range of critical thinking

skills, independent of content and discipline, as developed for undergraduate college

students (CAT Instrument Technical Information, 2016).

An all-day scoring workshop was conducted with ten faculty members at the

conclusion of the study. Two members of the research team were trained in scoring by

the CAT proprietors; they in turn trained the volunteer-faculty graders from the three

fields of study. Each question was scored by a minimum of two scorers and

disagreements were resolved by a third scorer. A scoring accuracy check, conducted by

the Tennessee Technological University (TTU) National Testing Center to determine

inter-rater reliability, found none of the scored questions had error rates that could lead to

misinterpretation of results (K. Harris, personal communication, December 1, 2015).

Test-retest reliability of the CAT version 4.0, which we used, was > 0.80. Refinements in

the test and the scoring guide have yielded scoring reliability = 0.92 between the first and

second scorer. A sample question from the CAT is included in Appendix B.
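
As an illustration of the scoring-reliability checks described above, the following Python sketch (using scipy) shows one common way to estimate agreement between a first and second scorer and to flag disagreements for resolution by a third scorer. The scores are hypothetical; the actual CAT scoring and accuracy checks are administered through the TTU National Testing Center.

    from scipy import stats

    # Hypothetical scores assigned to the same eight responses by two trained scorers.
    scorer1 = [3, 2, 4, 1, 3, 2, 4, 3]
    scorer2 = [3, 2, 4, 2, 3, 2, 4, 3]

    # Correlation between first and second scorer as a simple reliability estimate
    # (the study reports a scoring reliability of 0.92).
    r, p = stats.pearsonr(scorer1, scorer2)
    print(f"scorer 1 vs. scorer 2: r = {r:.2f}, p = {p:.3f}")

    # Flag responses where the two scorers disagree, to be resolved by a third scorer.
    disagreements = [i for i, (a, b) in enumerate(zip(scorer1, scorer2)) if a != b]
    print("responses needing a third scorer:", disagreements)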

For the data analysis procedure, we compared student pre-post tests using a paired

t-test. Rank correlation was used to determine if there was a relationship between

questions on the CAT instrument and the participants’ discipline, scores on college

entrance exams, grade-level standing, or demographics. To ascertain whether certain

variables in the study were predictive of other variables the following comparisons were

analyzed: a) all 15 CAT pre-post test questions to each other, b) the 15 CAT pre-post test

questions grouped by discipline, and c) the total CAT pre-post test results by standardized

tests widely used for college admissions in the United States (ACT and SAT), grade-level

standing by discipline, and demographics.

We used two tests to perform the data analysis: ANOVA for group comparisons

and Spearman's rho for correlations among ordinal variables. Spearman’s rho was

selected because it is a measure of correlation that is more general than Pearson's r. As

such, it was deemed appropriate for assessing the relationship between ordinal variables

exhibiting a monotonic relationship. Insignificant variables were eliminated using

stepwise selection. Pairwise comparisons were made for the variables that remained.
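
To make this pipeline concrete, the following Python sketch (using pandas and scipy) illustrates the kinds of computations described above: a paired t-test on pre/post CAT totals, Spearman's rho between two ordinal question scores, and a one-way ANOVA across disciplinary cohorts. The data frame, column names, and values are hypothetical stand-ins, not the study's data.

    import pandas as pd
    from scipy import stats

    # Hypothetical records: one row per student, with pre/post CAT totals,
    # two ordinal question scores, and the student's disciplinary cohort.
    df = pd.DataFrame({
        "cat_pre_total": [18, 22, 15, 27, 20, 24, 19],
        "cat_post_total": [21, 23, 17, 29, 22, 25, 20],
        "q13_pre": [2, 3, 1, 4, 2, 3, 2],
        "q14_pre": [2, 3, 2, 4, 3, 3, 2],
        "cohort": ["TeacherEd", "TeacherEd", "SocSci", "SocSci",
                   "SocSci", "LifeSci", "LifeSci"],
    })

    # Paired t-test comparing each student's pre- and post-test totals.
    t_stat, p_paired = stats.ttest_rel(df["cat_post_total"], df["cat_pre_total"])

    # Spearman's rho between two evaluative-thinking questions (ordinal scores).
    rho, p_rho = stats.spearmanr(df["q13_pre"], df["q14_pre"])

    # One-way ANOVA on post-test totals grouped by disciplinary cohort.
    groups = [g["cat_post_total"].values for _, g in df.groupby("cohort")]
    f_stat, p_anova = stats.f_oneway(*groups)

    print(f"paired t-test: t = {t_stat:.2f}, p = {p_paired:.3f}")
    print(f"Spearman rho (Q13 vs. Q14): rs = {rho:.2f}, p = {p_rho:.3f}")
    print(f"ANOVA across cohorts: F = {f_stat:.2f}, p = {p_anova:.3f}")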

2.3.4. Post assessment of student confidence in using critical thinking skills

We used self-appraisal to collect qualitative data from students regarding their

perceptions on how critical thinking was understood and used during the semester. At

the end of the semester, participants were asked to assess their level of confidence in

using the skills of critical thinking to solve problems (Subset 1), their mastery and use of

critical thinking skills (Subset 2), and the degree to which instructor feedback was helpful

in improving their critical thinking skills (Subset 3). Students rated the items on a 5-point

Likert rating scale from strongly disagree to strongly agree. We chose these values to

yield the greatest consistency from respondents (Bartram & Yielding, 1973). Responses

were analyzed using simple descriptive statistics; counts of the ordered categorical data were treated numerically in order to compute means. The questionnaire was administered in spring 2015 to all participants in the study, on paper in class or online in Life Sciences.

Table 1 displays the questions for each of the indicators of the disposition toward critical

thinking, categorized in the three subsets of confidence, mastery, and feedback.
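
The descriptive analysis of the questionnaire can be illustrated with the minimal Python sketch below, which pools hypothetical 5-point Likert responses by subset and reports a mean and standard deviation per subset. Item names and values are invented for illustration only.

    import pandas as pd

    # Hypothetical 5-point Likert responses (1 = strongly disagree ... 5 = strongly agree),
    # one column per questionnaire item.
    responses = pd.DataFrame({
        "confidence_q1": [4, 5, 3, 4, 4],
        "confidence_q2": [3, 4, 4, 5, 3],
        "mastery_q1": [4, 4, 3, 4, 5],
        "feedback_q1": [5, 5, 4, 4, 5],
    })

    # Items grouped into the study's three subsets.
    subsets = {
        "confidence": ["confidence_q1", "confidence_q2"],
        "mastery": ["mastery_q1"],
        "feedback": ["feedback_q1"],
    }

    # Simple descriptive statistics per subset: mean rating and spread.
    for name, cols in subsets.items():
        values = responses[cols].stack()  # pool all items in the subset
        print(f"{name}: mean = {values.mean():.2f}, sd = {values.std():.2f}")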

2.3.5. Student and instructor reflections

2.3.5.1. Student reflections

We analyzed samples of student reflections embedded in the performance tasks to

explore the degree to which students were disposed toward the critical thinking skills of

analysis, evaluation, and synthesis. On the reflective assignment, students were asked to

critically examine their skill level in completing the performance tasks and to identify

areas for growth. Students reflected on and evaluated the quality of their own thinking

skills and strategies for solving the problems in the performance tasks.

2.3.5.2. Instructor reflections

The instructors who accepted our invitation to use their classes for the study

implemented the performance tasks and provided feedback to students in the five classes

across the disciplines. We worked with the instructors to create the performance tasks,

grounding ourselves in direct classroom observations and interaction, and then

conducting open-ended conversations with the instructors lasting about an hour each. We

looked for general patterns in how the instructors perceived the value of the performance

tasks and scoring rubric. We used open coding to create categories that inductively

emerged from our informal observations (Creswell, 2007). Knowledge from the review

of the literature allowed us to see the patterns in the observations that were situated in the

instructor’s classroom settings.

3. Results

In this section we analyze the results of our exploratory mixed-methods study,

taking each research question in turn, i.e., (1) can we create a problem-based learning

environment using performance task interventions based on a common rubric that would

improve students’ critical thinking skills and dispositions? (3.1.). And (2) can a common

intervention approach focused on critical thinking skills and dispositions be used across

disciplines? (3.2.).

3.1. Research question 1

3.1.1. Differences in evaluative thinking across disciplines

Overall, the questions examining evaluative thinking and interpretation of

information (Q13 and Q14) accounted for most of the variability in the total CAT

scores across disciplines, indicating tasks that prompt interpretation and evaluation-level

thinking are powerful methods for improving the critical thinking skills of students in

collegiate settings. Spearman’s rho revealed a statistically significant relationship

between the responses on evaluation-level thinking in the CAT pre-post test questions. A

moderate to significant relationship existed between two of the pre-test questions on

evaluative thinking (Q13 and Q14 where rs [63] = 0.67, p < .05) and the post-test

questions (Q13 and Q14 where rs [63] = 0.7, p < .05). These pairings in the dataset were

found to be similar to the clusters posited by the designers of the CAT for strengths in

evaluative thinking, which include skills in identifying appropriate conclusions.

3.1.2. Differences in analysis and evaluation within the disciplines

When grouped by discipline, there was no evidence to suggest overall significant

gains in critical thinking from pre- to post-assessment on the CAT. However, a

statistically significant relationship was found among the Social Sciences students on the

questions concerning analysis/use of relevant information (Q13 and Q9 where rs [28] =

0.8, p < .05) and on the total questions (rs [28] = .75, p < .05). In the Life Sciences

cohort, a significant relationship was found on evaluative thinking in the pre-post

questions (Q13 and Q14) and by all questions (rs [7] = .87, p < .05). In the Teacher

Education cohort, Q13 on evaluation in the post-test showed significance (rs [29] = 0.89,

p < .05).

3.1.3. Differences by entering ACT and SAT

When CAT test results were compared to recognized college admissions tests,

namely the ACT and SAT, the scores correlated to each other; however, SAT had a

stronger correlation to the pre-test questions (rs [63] = 0.839, p < .05) and no correlation

to the post-test questions. The designers of the CAT instrument report similar results;

i.e., ACT and SAT are normally distributed and pre-college SAT contributes minimally

to critical thinking outcomes for students in their freshman year of college.

3.1.4. Differences by grade-level standing and demographics

When CAT test results were compared to grade-level standing, upper-class juniors demonstrated significance, albeit at a marginal level. We contend that the

development of critical thinking among upper-class students could have been influenced

by several factors unrelated to the study. Grade-level standing, maturity, courses taken, or

prior knowledge may have contributed to the results. No significant performance

differences were detected between students identifying as White or Hispanic.

3.2. Research question 2

3.2.1. Differences in using critical thinking skills

All 63 participants responded to the end-of-course questionnaire on their

perceived mastery of critical thinking skills. Following the example documented by Koon

and Murray (1995), we utilized an end-of-course questionnaire as a self-assessment of

students’ perceptions of the understanding and use of critical thinking skills. Our

justification for using the single questionnaire at the end of the study was supported in

part by the work of O’Connell and Dickinson (1993), who found student perceptions added value to the assessment of learning beyond conventional pre- and post-test results.

Table 2 displays the mean ratings on the indicators as coded by confidence,

mastery, and feedback. Results indicate the data points tended to be very close to the

mean with few differences and little spread. Despite posting the lowest performance on

the CAT instrument, Teacher Education students appeared to be somewhat more

confident in their mastery of critical thinking skills and in the value of instructor feedback

than students in other disciplines. In contrast, despite higher performance on the CAT

instrument, the small sample of Life Sciences students appeared to be somewhat less

confident in the use of critical thinking skills in their course. This may be partially

explained by the fact that critical thinking was not mentioned explicitly during the course;

although students used critical thinking in their assignments, this was a tacit rather than

an explicit objective.

3.2.2. Student reflection on use of critical thinking skills

We examined student written reflections to explore the degree to which students

were disposed toward critical thinking. Overall, the performance task interventions

appeared to enhance students’ perceptions of their own critical thinking skills, such as

analysis, evaluation, and synthesis. The tasks also provided instructors with an authentic

assessment of student learning in the discipline. Through their performances and

products, students put their critical thinking skills into action, applying what they were

learning, and discovering how this mattered to them.

Responses were coded by categories in the rubric criteria, namely analysis,

evaluation, and synthesis. Table 3 captures the key themes from the student reflections

and demonstrates that students, regardless of discipline, indicated they would be able to

use the skills of critical thinking in their future learning. A typical comment came from

one student who said, the instructor “provided us with an abundance of feedback that

many professors lack; this written and verbal feedback has helped me to identify

strengths and areas for growth as a critical thinker to my future learning.” Testimonials

such as this suggest the importance that feedback on a common rubric plays in students'

mastery and use of critical thinking.

When students were asked if their opinions changed on a controversial topic as a

result of new information, 75% reported their opinion changed, 21% reported their

opinion did not change, and 4% reported their opinion changed somewhat. When asked

how they could improve as critical thinkers, students offered specific strategies to

improve their motivation and success as critical thinkers. They said they would attend to

“close reading,” strive for “clarity of written thought,” make explanations “more relatable

to future work,” and “not let emotions cloud thinking.” One student said, “I went from

being unsure of what exactly critical thinking was all about to making it my initial

response to many questions I face every day.” This student’s experience was consistent

with Fletcher’s (2015) premise that explicit instruction moves students from doubt to

defensible action. Reflections from the cohort in Life Sciences indicated that other factors

might influence mastery of critical thinking, such as fixed mindsets and prior knowledge.

3.2.3. Instructor reflections on implementation of performance tasks

We found unexpected benefits for instructors who participated in the study, both

in terms of influencing their own beliefs about whether critical thinking can be taught and

in the quality of student work on the performance tasks. Our informal observations in the

classrooms tended to correspond to the information generated in the student reflections.

While our protocol did not include formal interviews with the instructors, we were able to

develop some general impressions using the open coding method described by Creswell

(2007).

What we discovered from our informal observations was that instructors were

generally inclined to express gratification with the way their students explored more than

one valid interpretation or viewpoint. In general, the instructors appeared to increase

their awareness of the use of performance tasks in generating multiple arguments based

on the assignments and explicit expectations in the rubric. Some registered surprise

about how well students were able to marshal quality support for their claims, regardless

of the particular position taken in the source documents. Some instructors intervened

directly when students were engaged in the learning environment by addressing student

weaknesses in critical thinking and interrupting incorrect judgments. Instructors noted

that the use of complex performance tasks helped slow down the students’ typical rush to

judgment and prompted students to defend their arguments instead.

As a result of instructor participation in the study, we observed instructors were

able to create performance tasks with greater ease. Designing rigorous performance tasks

aligned to the rubric had been time-intensive for some instructors, requiring more attention to logistics than routine classroom activities. Over the course of the study, however, instructors noted the design work provided them with a means

to enhance their pedagogy. In our investigation, we had the opportunity to learn from

interdisciplinary faculty how they used performance tasks and feedback on a common

rubric to deepen students’ critical thinking. Instructors agreed that the tasks were initially

challenging for students, but with repeated practice and regular feedback, students were

able to go beyond what was expected. Similar to the findings of Costa and Kallick

(2008), instructors noticed that students’ products and performances on the performance

tasks created evidence of critical thinking that moved well beyond the bubble of paper

and pencil exams.

4. Discussion

This study explored how critical thinking can be taught in college classes,

provided an approach to respond to the existing gap in the teaching of critical thinking,

and argued that skills connected to critical thinking are not dependent on a specific major.

We aimed to introduce a pedagogical method that most faculty could adapt into their

existing teaching strategies, thereby ensuring that students are explicitly asked to apply

critical thinking skills to problems in the context of the discipline.

First, we examined whether we could create a problem-based learning

environment with performance task interventions that would improve students’ critical

thinking skills and dispositions by using multiple measures. We can report gains on

individual questions and a trend towards improvement of the CAT scores, but we cannot

confirm critical thinking gain based on the overall aggregate CAT results for most of the

classes. However, students in one class did have a statistically significant gain between

the pre- and post-test. The Social Science class on globalization and human rights had an

extended focus on the question of how to approach world hunger. Although there was no

additional time spent on the specific performance tasks compared to the other classes in

this study, it is possible that the length of time focusing on one topic area as a class

contributed to the detected improvement in critical thinking. This finding justifies the

depth-over-breadth strategy that student-centered approaches are known for.

In terms of demographics, there were only marginal differences noted between

students identifying themselves as either Hispanic or White. This aligns with findings of

the CAT designers who noted “once the effects of entering SAT score(s)… were taken

into account, neither gender, race, nor ethnic background were significant predictors”

(Stein, Haynes, & Redding, 2016, p. 4). The instrument was designed to eliminate

cultural bias and we took care not to introduce these effects involuntarily when we

designed our tasks and the common rubric.

While the CAT results showed some limited improvements in critical thinking,

the qualitative data provided information that enhanced interpretation of findings which

could not be captured by the CAT instrument alone. The qualitative data demonstrated a

gain in critical thinking dispositions. We found students reported positive changes in

their critical thinking and showed overall positive attitudes towards deliberate practice

and reflection. Instructors regarded performance tasks as a beneficial way to foster

critical thinking development and transferability of skills in analysis, evaluation, and

synthesis of arguments and claims. The problem-based learning environment that we

created engendered lively discussions. As Abrami et al. demonstrate (2008), discussion

is a key component in teaching critical thinking skills.

Contrary to claims made by Huber and Kuncel (2016), our study suggests critical

thinking improves when the skills are explicitly taught and embedded in the content of

the discipline. Critical thinking is a skill that must be practiced and repeated over and

over across disciplines. Overcoming unconscious biases and reflexive jumps to

conclusion can best be accomplished if people have a disposition and attitudinal readiness

that ensures they will make themselves conscious of the need to apply the skills. If

students and faculty start developing a disposition toward critical thinking, they should

expect to see gains in critical thinking over the course of a semester-long study in the

disciplines. Overall the evidence presented here leaves us hopeful that further

investigations on the feasibility of explicit teaching of critical thinking can address an

unmet need and add value to students’ college experiences.

In the second part of this project we attempted to outline a blueprint for a

common intervention approach that focused on critical thinking skills and dispositions

and investigated its usability across disciplines. Because the initial pre-post assessment

data suggest, but do not strongly support the teachability of critical thinking, we can only

interpret our findings with caution. In our exploratory study, gains in critical thinking

reached significance level on evaluative thinking across the disciplines, but the aggregate

CAT results did not confirm overall critical thinking gain for most of the classes. We

therefore make relative statements about the power of performance task interventions

with cautious optimism that our goal to increase the use of performance tasks with

corrective feedback on a common rubric is warranted. Support for the notion that we

could design an intervention suitable to many disciplines comes from the qualitative data.

Student and instructor responses indicate we were able to create a problem-based

learning environment using subject-specific performance tasks in three different

disciplines. We found that a common rubric can be used to explicitly teach students the

skills required for critical thinking. For at least a semester, our subjects became aware

that the practice of hypothesizing and identifying claims, analyzing, evaluating, and

synthesizing results was applicable to their understanding and use of critical thinking.

Overall, our results also contribute to the research that supports particular methods for

teaching critical thinking. The three main techniques for promoting the skills, as

presented in Section 1.1.5, include fostering critical dialogue and discussion, using

authentic problems in the course, and mentoring. Our problem-based environment using

performance tasks achieves all three techniques because authentic problems from the

disciplines were used in the tasks, instructors mentored the students through corrective

feedback on student work, and class discussion about the students’ analysis, evaluation,

and synthesis took place.

4.1. Recommendations

Instructors can use this method to bring critical thinking into their classrooms

with little to no additional professional development required. They can adopt our rubric

or modify it as needed and follow our guidelines to create performance tasks on

problems important to their disciplines. Tasks can be done within the time constraints of

a course or as an extended homework assignment. What does need to take place within

class time is a presentation of the rubric that will be used to assess student work so the

skills of critical thinking are made explicit. Also, discussion of the student work needs to

occur during and after each task so students are able to use the feedback to reflect on their

work before they undertake a subsequent performance task. Through discussion, faculty provide mentoring that helps students build skills they can apply to critical thinking in subsequent performance tasks. Discussion of the

student results allows the class to further deepen their comprehension and critique of the

various viewpoints presented in the learning materials. In applying our technique, we

argue that any course can be turned into one that facilitates critical thinking when

students are given high-rigor tasks that elicit the knowledge and skills needed to solve

complex problems in their disciplines (Stylianides & Stylianides, 2008). Explicit and

intentional tasks with high-quality, relevant content, and a clear purpose predict higher

outcomes. Thus, following our method does not take away time from learning course

content. We contend instructors who feel anxious about having enough time in the

semester to add yet another item to their curriculum will find that our approach

maximizes time for learning. Finally, instructors wishing to collect more information

about their students’ thinking may choose to use a pre-test to determine the influence of

students’ prior experience in the understanding and use of critical thinking in solving

problems of practice in their disciplines.

4.2. Limitations of Study

Our study contains some limitations due to the disproportionate number of

participants among the three cohorts and the interpretive nature of qualitative research

(Atieno, 2009). Steps were taken to reduce these possible limitations through open coding

(Creswell, 2007) and triangulation of data, which included pre-post assessment on the

CAT, student end-of-course assessment, analysis of student written reflections, and

informal interviews with instructors. Even so, we acknowledge the findings from the

study are not generalizable. With the disproportionately low number of participants in

Life Sciences, the students’ dispositions toward critical thinking may not have been

representative of other students in their discipline. With regard to the end-of-course

student self-assessment, it should be noted that while self-appraisal generally produces

consistent results across tasks and short time periods, student self-assessment of skills

and dispositions corresponds only in part to the information generated by teacher

assessments (Boud & Falchikov, 1989; Ross, 2006). Finally, while we did not propose to

validate the questionnaire we employed, it appears that more redundancy in the survey

questions along with open-ended comment fields could aid in the interpretation of

participant responses.

4.3. Suggestions for Further Research

Performance tasks in problem-based learning environments may indeed hold

promise for deepening the language and skills of critical thinking across disciplinary

content. The outcomes of a task given to students are a powerful predictor of

performance on similar tasks (City, Elmore, Fiarman, & Teitel, 2009). As Lang reasons, “frequent and deliberate practice” enhances learning (2016, p. 21); thus it would be

informative to conduct studies that use performance tasks more frequently in a course,

and better yet, use them in a series of courses or a full program of studies. Affirming that

repeated practice of critical thinking skills correlates with a greater ability and disposition

for critical thinking would strengthen the argument that faculty must explicitly use the

language of critical thinking and students must repeatedly practice the skills.

Our study suggested two additional avenues for research. First, while much is

known about the powerful effects on critical thinking of generating and testing claims,

problem-solving, and receiving corrective feedback (Abrami et al., 2015; Hattie, 2015;

Marzano, Pickering, & Pollock, 2001), it would be useful to determine the effect of

performance tasks that span the length of a course on the transferability of critical

thinking skills. Randomized controlled studies of programs that devote a semester to one extended topical problem in a discipline could provide convincing evidence of the power of performance tasks to promote growth in critical thinking. Second, there is a lack of

research about increasing faculty knowledge of and facility in teaching critical thinking.

Further research is necessary if we are going to strengthen the argument for the routine

teaching of critical thinking across disciplines in the undergraduate curriculum.

5. Conclusions

Faculty in a cross-disciplinary collaboration embarked on this study with the aim

of exploring the teaching and learning of critical thinking by using generic performance

task interventions embedded in a problem-based learning environment, with feedback on

a common rubric, and an explicit focus on critical thinking skills and dispositions. Some

cohorts evidenced measurable improvements in critical thinking skills with less

discernible improvement among other cohorts. While the overall statistical results were

less than anticipated, we found these results were not the only factor informing instructor

decisions about using performance tasks to teach critical thinking. As Hattie observed

almost two decades ago, “achievement is enhanced as a function of feedback” and is

increased “to the degree that students and teachers set and communicate appropriate,

specific, and challenging goals” (1999, p. 2).
In our exploratory study, the qualitative

results tended to confirm the value of student participation in rigorous and challenging

performance tasks. We also observed that the practice of creating and scoring performance tasks can strengthen faculty members’ own critical thinking, motivate their deliberate inclusion of critical thinking in course designs, and prompt rich discussions about pedagogy that can lead to positive impacts on students’ critical thinking skills and dispositions.

We conclude that using performance tasks with feedback on a common rubric

may be useful in many fields. Regular use of performance tasks could have a substantial

impact in higher education if both faculty and student understanding of critical thinking

skills and dispositions is improved and if the tasks given to students are intentionally

integrated into every classroom experience.

Acknowledgements

This study was approved by the University’s Institutional Review Board (protocol

662865-2). We are grateful to the university statistics consultant for his support in the

data analysis and to the instructors for permitting us to conduct our study in their

classrooms. We also thank our colleagues who provided time and expertise to score the

CAT responses. Funding: This research did not receive any specific grant from funding

agencies in the public, commercial, or not-for-profit sectors. Expenses were offset with

funds available from the University’s College of Education.

References

Abrami, P., Bernard, R., Borokhovski, E., Wade, A., Surkes, M., Tamim, R., & Zhang, D.

(2008). Instructional interventions affecting critical thinking skills and

dispositions: A stage 1 meta-analysis. Review of Educational Research, 78(4),

1102-1134.

Abrami, P., Bernard, R., Borokhovski, E., Waddington, D., Wade, A., & Persson, T.

(2015). Strategies for teaching students to think critically: A meta-analysis.

Review of Educational Research, 85(2), 275-314.

American Philosophical Association. (1990). Critical thinking: A statement of expert

consensus for purposes of educational assessment and instruction. ERIC

document ED 315-423.

Andrade, H. G. (2000, February). Using rubrics to promote thinking and learning.

Educational Leadership, 57(5), 13-18.

Arum, R., & Roksa, J. (2011). Academically adrift: Limited learning on college

campuses. Chicago, IL: The University of Chicago Press.

Atieno, O. P. (2009). An analysis of the strengths and limitation of qualitative and

quantitative research paradigms. Problems of Education in the 21st Century, 13,

13-18.

Bailin, S., Case, R., Coombs, J., & Daniels, L. (1999). Common misconceptions of critical

thinking. Journal of Curriculum Studies, 31(3), 269-283.

Bailin, S., & Siegel, H. (2003). Critical thinking. In N. Blake, P. Smeyers, R. Smith, & P. Standish (Eds.), The Blackwell guide to the philosophy of education (pp. 181-193). Oxford, UK: Blackwell Publishing Ltd.

Bartram, P., & Yielding, D. (1973). The development of an empirical method of selecting

phrases used in verbal rating scales: A report on a recent experiment. Journal of

the Market Research Society, 15(3), 151-156.

Black, P. & Wiliam, D. (1998). Assessment and classroom learning. Assessment in

Education: Principles, Policy & Practice, 5(1), 1-65.

Bok, D. (2006). Our underachieving colleges: A candid look at how much students learn

and why they should be learning more. Princeton, NJ: Princeton University Press.

Boud, D., & Falchikov, N. (1989). Quantitative studies of student self-assessment in

higher education: A critical analysis of findings. Higher Education, 18, 529-549.

Brookfield, S. (2012). Teaching for critical thinking: Tools and techniques to help

students question their assumptions. San Francisco, CA: Jossey-Bass.

Browne, M. & Freeman, K. (2000). Distinguishing features of critical thinking

classrooms. Teaching in Higher Education, 5(3), 301-309.

CAT Instrument Technical Information (2016). Supported by the National Science

Foundation. Tennessee Technological University. Retrieved from

https://www.tntech.edu/assets/userfiles/resourcefiles/8783/1454517036_CAT%20

Technical%20Information%20V8.pdf

City, E. A., Elmore, R. F., Fiarman, S. E., & Teitel, L. (2009). Instructional rounds in

education: A network approach to improving teaching and learning. Cambridge,

MA: Harvard Education Press.

Costa, A., & Kallick, B. (Eds.). (2008). Learning and leading with habits of mind.

Alexandria, VA: Association for Supervision and Curriculum Development.

Creswell, J. W. (2007). Qualitative inquiry and research design: Choosing among five

approaches (2nd ed.). Thousand Oaks, CA: Sage.

Dewey, J. (1933). How we think: a restatement of the relation of reflective thinking to the

educative process. Boston, MA: D.C. Heath.

Dweck, C. S. (2002). Beliefs that make smart people dumb. In R. J. Sternberg (Ed.), Why

smart people can be so stupid. New Haven, CT: Yale University Press.

Ennis, R. (1989). Critical thinking and subject specificity: Clarification and needed

research. Educational Researcher, 18(3), 4-10.

Fethke, G. C., & Policano, A. J. (2012). Public no more: A new path to excellence for

America's public universities. Stanford, CA: Stanford University Press.

Fletcher, J. (2015). Teaching arguments: Rhetorical comprehension, critique, and

response. Portland, ME: Stenhouse Publishers.

Gabennesch, H. (2006). Critical thinking: What is it good for? (In fact, what is it?).

Skeptical Inquirer, 30(2), 36-41.

Gottesman, A. J., & Hoskins, S. G. (2013). CREATE cornerstone: Introduction to

scientific thinking, a new course for STEM-interested freshmen, demystifies

scientific thinking through analysis of scientific literature. CBE-Life Sciences

Education, 12(1), 59-72.

Hackman, H. (2005). Five components for social justice education. Equity & Excellence

in Education, 38, 103-109.

Halpern, D. (1998). Teaching critical thinking for transfer across domains: Dispositions,

skills, structure training, and metacognitive monitoring. American Psychologist,

53(4), 449-455.

Halx, M. & Reybold, E. (2005). A pedagogy of force: Faculty perspectives of critical

thinking capacity in undergraduate students. The Journal of General Education,

54(4), 293-315.

Hattie, J. (1999, August). Influences on student learning. Inaugural Lecture. Auckland,

New Zealand: University of Auckland.

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81-112.

Hattie, J. (2015). The applicability of Visible Learning to higher education. Scholarship

of Teaching and Learning in Psychology, 1(1), 79-91.

Hatton, N. & Smith, D. (1995). Reflection in teacher education: Towards definition and

implementation. Sydney, Australia: The University of Sydney: School of

Teaching and Curriculum Studies.

Huba, M. E., & Freed, J. E. (2000). Learner-centered assessment on college campuses: Shifting the focus from teaching to learning. Boston, MA: Allyn and Bacon.

Huber, C. R. & Kuncel, N. R. (2016). Does college teach critical thinking? A meta-

analysis. Review of Educational Research, 86(2), 431-468.

Kahneman, D. (2011). Thinking, fast and slow. New York: Farrar, Straus and Giroux.

Kelly-Riley, D. (2007). Washington State University critical thinking project: Improving

student learning outcomes through faculty practice. In T. W. Banta (Ed.), Assessing student achievement in general education (pp. 35-43). San Francisco, CA:

Jossey-Bass.

Koon, J., & Murray, H. G. (1995). Using multiple outcomes to validate student ratings of

overall teacher effectiveness. Journal of Higher Education, 66(1), 61-81.

Kuhn, D. (2000). Metacognitive development. Current Directions in Psychological Science, 9, 178-181.

Lang, J. (2016). Small teaching: Everyday lessons from the science of learning. San Francisco, CA: Jossey-Bass.

Marin, L. & Halpern, D. (2010). Pedagogy for developing critical thinking in

adolescents: Explicit instruction produces the greatest gains. Thinking Skills and

Creativity, 6, 1-13.

Marzano, R. J., Pickering, D., & McTighe, J. (1993). Assessing student outcomes:

Performance assessment using the dimensions of learning model. Aurora, CO:

Mid-Continent Regional Educational Lab.

Marzano, R., Pickering, D., & Pollock, J. (2001). Classroom instruction that works:

Research-based strategies for increasing student achievement. Alexandria, VA:

ASCD.

McTighe, J., & Ferrara, S. (1998). Assessing learning in the classroom: Student

assessment series. Annapolis Junction, MD: NEA Professional Library

Distribution Center.

Mitchell, S. M., Anderson, W., Sensibaugh, C., & Osgood, M. (2011). What really

matters: Assessing individual problem-solving performance in the context of

biological sciences. International Journal for the Scholarship of Teaching and

Learning, 5(1), 1-20.

Newton, C., & Fisher, K. (2009). Take 8. Learning spaces: The transformation of

educational spaces for the 21st century. Victoria, Australia: The Australian

Institute of Architects, ACT.

Nosich, G. (2009). Learning to think things through: A guide to critical thinking across

the curriculum. Upper Saddle River, NJ: Pearson Prentice Hall.

O’Connell, D. Q., & Dickinson, D. J. (1993). Student ratings of instruction as a function

of testing conditions and perceptions of amount learned. Journal of Research and

Development in Education, 27(1), 18-23.

Pascarella, E. T. & Terenzini, P. T. (2005). How college affects students: A third decade

of research. San Francisco, CA: Jossey-Bass.

Pithers, R. T., & Soden, R. (2000). Critical thinking in education: A review. Educational

Research, 42(3), 237-249.

Ross, J. A. (2006). The reliability, validity, and utility of self-assessment. Practical

Assessment, Research, and Evaluation, 11(10), 1-13.

Royalty, J. (1991). The generalizability of critical thinking: Paranormal beliefs versus

statistical reasoning. Journal of Genetic Psychology, 156(4), 477-488.

Sá, W., Stanovich, K., & West, R. (1999). The domain specificity and generality of belief

bias: Searching for generalizable critical thinking skill. Journal of Educational

Psychology, 91(3), 497-510.

Schön, D. A. (1991). The reflective turn: Case studies in and on educational practice. New York, NY: Teachers College Press, Columbia University.

Shiel, T. K. (2017). Designing and using performance tasks: Enhancing student learning and assessment. Thousand Oaks, CA: Corwin.

Shim, W., & Walczak, K. (2012). The impact of faculty teaching practices on the development of students’ critical thinking skills. International Journal of Teaching and Learning in Higher Education, 24(1), 16-30.

Snyder, L., & Snyder, M. (2008). Teaching critical thinking and problem solving skills. The Delta Pi Epsilon Journal, L(2), 90-99.

Staib, S. (2003). Teaching and measuring critical thinking. Journal of Nursing Education, 42(11), 498-508.

Stedman, N., & Adams, B. (2012, June). Identifying faculty’s knowledge of critical

thinking concepts and perceptions in critical thinking instruction in higher

education. NACTA Journal, 56(2), 9-14.

Stein, B., Haynes, A., & Redding, M. (2016). National dissemination of the CAT

instrument: Lessons learned and implications. Proceedings of the AAAS/NSF

Envisioning the Future of Undergraduate STEM Education: Research and

Practice Symposium.

Stylianides, A. J. & Stylianides, G. J. (2008). Studying the classroom implementation of

tasks: High-level mathematical tasks embedded in ‘real-life’ contexts. Teaching

and Teacher Education, 24(4), 859-875.

Tsui, L. (1999). Courses and instruction affecting critical thinking. Research in Higher

Education, 40(2), 185-200.

Tsui, L. (2002). Fostering critical thinking through effective pedagogy. The Journal of

Higher Education, 73(6), 740-763.

van Gelder, T. (2005). Teaching critical thinking: Some lessons from cognitive science.

College Teaching, 53(1), 41-46.

Walsh, D. & Paul, R. (1986). The goal of critical thinking: From educational ideal to

educational reality. Washington, D.C.: American Federation of Teachers. (ERIC

Document Reproduction Service No. ED 297 003)

Willingham, D. (2008). Critical thinking: Why is it so hard to teach? Arts Education

Policy Review, 109(4), 21-32.

Wood, D. F. (2003). ABC of learning and teaching in medicine: Problem based learning.

British Medical Journal, 326, 328-330.

Table 1
Student Questionnaire: Mastery and Use of Critical Thinking Skills

Subset 1: Confidence
  Q.5. I feel more confident, as a result of this course, in my mastery of critical thinking skills.
  Q.6. Solving problem-solving tasks did not create stress for me.

Subset 2: Mastery
  Q.1. The problem solving tasks in this course helped me develop my critical thinking skills.
  Q.2. I have developed solutions to problems that can be applied to other situations.
  Q.4. This course helped me analyze and evaluate claims.
  Q.7. This course helped me separate factual information from inferences.
  Q.9. This course helped me separate relevant from irrelevant information.
  Q.10. This course will help me apply critical thinking skills to my future learning.

Subset 3: Feedback
  Q.3. I used suggestions from instructor feedback in this course to improve my critical thinking skills.
  Q.8. Instructor feedback helped me identify strengths and areas for growth as a critical thinker.

Source: Questionnaire created employing coding from the defined skill sets in the CAT.

Table 2
Mean Ratings on Student Questionnaire by Discipline and Indicator (n=63), Spring 2015

Discipline                  Confidence   Mastery   Feedback   Overall
Teacher Education (n=28)    3.71         4.47      4.60
Social Sciences (n=28)      3.75         4.18      4.44
Life Sciences (n=7)         3.37         4.14      4.22
All Programs (n=63)                                           4.25
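To illustrate how indicator means like those in Table 2 can be computed from individual questionnaire responses, the sketch below pools Likert ratings by the subsets defined in Table 1. The response records and the function name are hypothetical; only the question-to-subset mapping comes from Table 1.

```python
# Minimal sketch: averaging 5-point Likert ratings by questionnaire
# subset (see Table 1). The two student records are invented examples.
SUBSETS = {
    "Confidence": ["Q5", "Q6"],
    "Mastery": ["Q1", "Q2", "Q4", "Q7", "Q9", "Q10"],
    "Feedback": ["Q3", "Q8"],
}

responses = [  # one dict per student: question id -> rating (1-5)
    {"Q1": 5, "Q2": 4, "Q3": 5, "Q4": 4, "Q5": 4,
     "Q6": 3, "Q7": 4, "Q8": 5, "Q9": 4, "Q10": 5},
    {"Q1": 4, "Q2": 4, "Q3": 4, "Q4": 5, "Q5": 3,
     "Q6": 4, "Q7": 5, "Q8": 4, "Q9": 5, "Q10": 4},
]

def subset_means(responses, subsets):
    """Mean rating per subset, pooled over students and items."""
    return {
        name: sum(r[q] for r in responses for q in qs)
              / (len(responses) * len(qs))
        for name, qs in subsets.items()
    }

print(subset_means(responses, SUBSETS))
# -> {'Confidence': 3.5, 'Mastery': 4.4166..., 'Feedback': 4.5}
```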

Table 3
Student Reflections on Influence of Performance Tasks on Enhancing Critical Thinking (CT) Skills

Analysis
  I “used critical thinking to separate bias from fact.”
  I was “able to compare [the] two viewpoints.”
  The assignments “required us to view readings without bias.”
  “This [activity] is a great way to show students how to back up their beliefs by using references. Having several pamphlets really reinforces… skill(s) necessary for critical thinking.”

Evaluation
  “I used critical thinking in evaluating each article’s points against those made in the others, and in adjusting my thought processes to include new information.”
  I was able to “compare the two viewpoints and evaluate the validity of their evidence.”
  “I think critical thinking was applied once there was conflicting information.”
  “Thinking of different validations and criteria was a challenge at first as well, but almost became the norm a few months into the course. I went from being unsure of what exactly critical thinking was all about to making it my initial response to many questions I face every day.”

Synthesis
  I learned “how perceptions can be biased.”
  I was prompted to “think of counter arguments.”
  I was able to “formulate my own opinion.”
  I was able to “resolve contradictions and defend my arguments with evidence.”
  Completing the task “empowered me to speak with credibility, instead of me just saying the argument was stupid.”

Appendix A: Common Rubric to Assess Quality of Response to a Performance Task
Performance levels: Below Expectation (0-3 points), Meets Expectation (4-6 points), Meets or Exceeds Expectation (7-9 points).

Hypothesize and/or Make Claims
  Below Expectation: Makes claims/hypotheses that are not specific to the observations.
  Meets Expectation: Generates one hypothesis or claim, but fails to provide specific/underlying reasons that are likely to have produced or contributed to the observations.
  Meets or Exceeds Expectation: Given a set of observations, makes a claim and/or generates hypotheses about 3 or more underlying reasons that are likely to have produced or contributed to the observations.

Analyze, Investigate
  Below Expectation: Rationale is not grounded in fact; contains fallacious and/or deceptive reasoning.
  Meets Expectation: Rationale has some legitimacy, but fails to be convincing. Makes a claim but does not explain why it is controversial. Does not specify the evidence needed to investigate the situation. Explains why some observations and/or data are irrelevant.
  Meets or Exceeds Expectation: Given a hypothesis or claim to be investigated, presents a sound rationale that describes what is known and unknown about the history/antecedents of the problem. Specifies the evidence needed to investigate the situation. Separates relevant from irrelevant observations or data; separates factual information from inferences.

Evaluate Results
  Below Expectation: Overstates the results; does not address what is known from the investigation.
  Meets Expectation: States one result, but fails to specify important values being observed.
  Meets or Exceeds Expectation: States 2 or more results that emerged from the evidence. Identifies important values (these may be expressed as magnitudes, sizes, or amounts in graphs/figures).

Synthesize, Integrate
  Below Expectation: Over-interprets the results; does not integrate information.
  Meets Expectation: Cites relevant results and identifies some observations and/or data that consistently support the hypothesis/claim, but fails to explain what the results mean.
  Meets or Exceeds Expectation: Interprets results; explains the link to the original basis for the observations; describes the history and/or antecedents of the problem. Specifies the evidence used to support the claim/hypothesis. Makes a new claim or counter-claim that integrates and synthesizes information from the investigation.

Reflect
  Below Expectation: Reflection is off-topic, aimless, and disorganized. Word choice is confusing. Sentences are awkward and distract the reader.
  Meets Expectation: Misses several components of the required reflection. Word choice is appropriate with few errors.
  Meets or Exceeds Expectation: Critically evaluates own performance on the task; reflects on the skills/thought processes needed to complete the task. Specifies 2 or more strategies or approaches used to complete the task. Identifies 1 or more areas of growth.

Total Points Possible: 45 _____ (%)
Adapted from: Mitchell, Anderson, Sensibaugh, & Osgood (2011) and informed by Andrade (2000).
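As a worked example of the rubric’s arithmetic (our illustration, not part of the original instrument): each of the five criteria is scored 0-9, so a response can earn at most 45 points, and a percentage is simply the total over 45.

```python
# Minimal sketch of rubric scoring under Appendix A's point scheme:
# five criteria, each rated 0-9. The scores below are hypothetical.
criteria_scores = {
    "Hypothesize and/or Make Claims": 7,
    "Analyze, Investigate": 6,
    "Evaluate Results": 8,
    "Synthesize, Integrate": 5,
    "Reflect": 7,
}

total = sum(criteria_scores.values())   # 33 of 45 possible points
percentage = 100 * total / 45           # 33/45 -> 73.3%
print(f"total = {total}/45 ({percentage:.1f}%)")
```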

Appendix B: Sample Released Test Item from CAT Instrument

“A scientist working at a government agency believes that an ingredient commonly used

in bread causes criminal behavior. To support his theory, the scientist notes the following

evidence:

 99.9% of the people who committed crimes consumed bread prior to committing

crimes.

 Crime rates are extremely low in areas where bread is not consumed.

Do the data presented by the scientist strongly support the theory? Yes ___ No ___

Are there other explanations for the data besides the scientist’s theory? If so, describe.

What kind of additional information or evidence would support the scientist’s theory?”
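To see why the scientist’s first statistic is weak evidence, consider a hypothetical population in which bread has no effect at all; the numbers below are invented for illustration and are not part of the CAT materials.

```python
# Invented population in which the crime rate is identical for bread
# eaters and non-eaters, i.e., bread has no effect on criminality.
bread_eaters, non_eaters = 999_000, 1_000      # 99.9% eat bread
criminals_bread, criminals_no_bread = 999, 1   # same 0.1% rate in each group

rate_bread = criminals_bread / bread_eaters        # 0.001
rate_no_bread = criminals_no_bread / non_eaters    # 0.001
share_bread = criminals_bread / (criminals_bread + criminals_no_bread)
print(f"{share_bread:.1%} of criminals ate bread")  # prints 99.9%
# The "99.9% of criminals ate bread" figure is reproduced even though
# the conditional crime rates are identical; what matters is comparing
# those rates, not the proportion of criminals who ate bread.
```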

Source: Overview of the CAT Instrument, https://www.tntech.edu/cat/, used by

institutions for course, program, and general education assessment. The Critical-thinking

Assessment Test (CAT) was developed with input from faculty across a wide range of

institutions and disciplines, with guidance from colleagues in the cognitive/learning

sciences and assessment and with support from the National Science Foundation (NSF).

NSF support also helped distribute the CAT and provides training, consultation, and

statistical support to users. Partial support for this work was provided by the National

Science Foundation’s TUES (formerly CCLI) Program under grants 0404911, 0717654,

and 1022789.
