
Thinking Skills and Creativity 27 (2018) 101–113


Undergraduate students demonstrate common false scientific reasoning strategies
Jenica Sera Woolley, Austen Michael Deal, Juliette Green, Faith Hathenbruck, Shelby Ann Kurtz, Trent K.H. Park, Samuel VarSelle Pollock, M. Bryant Transtrum, Jamie Lee Jensen
4102 LSB, Department of Biology, Brigham Young University, Provo, UT 84602, USA

ARTICLE INFO

Keywords:
Scientific reasoning skills
STEM
Strategies
Misconceptions
Intervention

ABSTRACT

American education is failing to fill the growing demand for science, technology, engineering, and mathematics (STEM) graduates. The lack of critical reasoning skills may be a causal factor in student attrition from STEM majors. Our objective in this study was to discover and describe common false strategies used by undergraduate students during the scientific reasoning process. Each of these false strategies is described, with accompanying examples from student responses, to illustrate the thinking patterns. We defined targeted areas for instruction that can lead to better performance, greater academic self-confidence, and increased retention in STEM degrees. Understanding how students think through problems and where they are making mistakes facilitates the creation of specialized programs to correct these false reasoning strategies and increase the scientific reasoning ability of students.

Corresponding author. E-mail address: Jamie.Jensen@byu.edu (J.L. Jensen).

https://doi.org/10.1016/j.tsc.2017.12.004
Received 25 April 2017; Received in revised form 17 November 2017; Accepted 17 December 2017
Available online 26 December 2017
1871-1871/© 2017 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

1. Introduction

Currently, only 40% of students who enter college with the intent to pursue a STEM degree actually do so (PCAST, 2012). STEM
degrees comprise less than 16% of Bachelor’s degrees awarded by American colleges (NCES, 2009). Over twenty-six years ago,
Project 2061 (AAAS) established a goal to advance science, math, and technology learning in the United States (Rutherford &
Ahlgren, 1990). Despite these goals, a 2006 assessment found that students in all developmental stages lack science comprehension
and reasoning skills (Grigg, Lauko, & Brockway, 2006). In a nation where the need for STEM professionals is increasing (Hurtado
et al., 2012), the percentage of students completing corresponding degrees is decreasing. To investigate the reasons for this attrition,
Seymour and Hewitt (1997) conducted a large-scale longitudinal study of college students in STEM degrees. They found two main
reasons for attrition: disappointment with the curriculum and a loss of academic self-confidence in the highly competitive en-
vironment. Performance in introductory courses is directly correlated with students’ retention in STEM degrees (Suresh, 2006).
Therefore, the lack of these critical reasoning skills may be a causal factor in student attrition from STEM majors. Our objective in this
study was to discover the common false strategies used by undergraduate students that lead to critical failures in using scientific
reasoning.
These science process and reasoning skills (SPARS) are necessary for the ‘scientific literacy’ described by the American Association for the Advancement of Science (AAAS, 1993; Colvill & Pattie, 2002). Science process skills first became popularized in 1967 when a group of working scientists, commissioned by the AAAS, observed scientists at
work and defined the skills they were using (AAAS, 1967). These skills were categorized as ‘basic’ skills and ‘integrated’ skills. Basic
skills included observing, using space/time relationships, using numbers, inferring, measuring, communicating, classifying, and
predicting and are closely associated with what Piaget defined as concrete operational skills (Beaumont-Walters & Soyibo, 2001;
Inhelder & Piaget, 1958; Zimmerman, 2000). Integrated skills included controlling variables, defining operationally, formulating
hypotheses, interpreting data, and experimenting, skills closely related to Piaget’s formal operational skills (Inhelder & Piaget, 1958;
Zimmerman, 2000; Huppert, Lomask, & Lazarowitz, 2002; a correlation of 0.73 was found by Padilla, Okey, & Dillashaw, 1983).
These skills are key factors in the ‘scientific literacy’ described by AAAS (AAAS, 1993; Colvill & Pattie, 2002).
These skills have gone by many names over the decades including scientific process skills (OECD, 1999), procedural skills (e.g.,
Gott & Duggan, 1994), experimental and investigative science (National Curriculum; DOE, 1995), habits of mind (AAAS, 1993),
scientific inquiry abilities (National Academy of Sciences, 1994), or scientific reasoning skills (Lawson et al., 2000). Kuhn (2002)
defined them as intentional knowledge-seeking behaviors and the coordination of theory and evidence; Zimmerman (2007, p.172)
defined them as “the skills involved in inquiry, experimentation, evidence evaluation, and inference that are done in the service of
conceptual change or scientific understanding”. Others have defined them as ‘critical thinking skills’ or ‘science process skills’ encompassing the skills used by scientists, such as identifying causal questions, proposing hypotheses, predicting outcomes, modeling,
using mathematical reasoning, interpreting graphs, etc. (Bransford et al., 2000; NSF, 2000).
A lack of these skills persists across all ages, including in up to 50% of college-age adults (Kuhn et al., 1992; Kuhn & Franklin, 2006; Lawson, 1992; Schauble & Glaser, 1990; see Zeineddin & Abd-El-Khalick, 2010, for a review). The emergence of these skills is not
developmentally or temporally constrained (Zimmerman, 2007). However, the full development of these skills likely does not occur
without some explicit instruction or practice (Kuhn & Franklin, 2006). This requires an understanding of the common mistakes
students make in using these skills.
Researchers have studied the development of several specific SPARS. For example, Kuhn and Phelps (1982) found that one of the
strategies that led to success on a controlling variables task was note-taking. Shaklee and colleagues found a variety of rules being
used when approaching evidence covariation tasks (e.g., Shaklee & Mims, 1981; Shaklee & Paszek, 1985; Shaklee et al., 1988).
Several researchers have also shown that prior knowledge plays an influential role in how students approach a controlling variables
task or how they interpret evidence (Koslowski, 1996; Kuhn et al., 1988). Yet much about the development of SPARS remains
unknown, especially in adult learners.
Taking a broad view, there has been research done on general pedagogical strategies that lead to better conceptual understanding
and critical thinking skills (see Bransford et al., 2000, for a detailed review). To summarize, a recent meta-analysis showed that
student-centered approaches to instruction lead to increased performance and decreased failure rates, with an effect size large enough that, had it been a medical treatment, the clinical trial would have been stopped early for benefit (Freeman et al., 2014). It has also been shown that these strategies can lead to
gains in scientific reasoning ability (e.g., Heiss et al. 1950; Howard and Miskowski 2005; Jensen & Lawson, 2011; Minner et al., 2009;
Renner et al., 1973; Rissing and Cogan 2009; Spiro and Knisely 2008). These types of findings are the basis for efforts outlined in
publications such as Vision & Change in biology education (AAAS, 2011), Reaching Students: What Research Says About Effective
Instruction in Undergraduate Science and Engineering (National Research Council, 2015) in all STEM fields, and Scientific Teaching
(Handelsman et al., 2007). Thus, beyond explicit instruction of these skills themselves, implicit instruction can take place using a
constructivist, student-centered approach to science that focuses on higher-order cognitive skills (HOCS; Zoller, 1993).
However, because the development of SPARS generally occurs during adolescence, most research exists only in the K-12 literature. Nonetheless, many college students’ scientific reasoning abilities are not proficient enough for them to excel in STEM
classes. Very little research has been done to characterize scientific reasoning in adults and understand the barriers that keep adults
from effectively using these skills. We hypothesize that common false strategies can be identified amongst college students that lead
to difficulty in using these skills appropriately and transferring them within STEM subjects. In this study, we have defined the specific
SPARS commonly used in STEM subjects and characterized common false strategies that college students use when attempting to
implement STEM-related SPARS. By doing so, we have defined targeted areas for instruction that may lead to better performance,
greater academic self-confidence, and increased retention in STEM degrees.

2. Research design

2.1. Defining SPARS

To test our hypothesis it was necessary to explicitly define SPARS. These skills first became popularized in 1967 when a group of working scientists, commissioned by AAAS (1967), were asked to define them. The skills were categorized as either ‘basic’ skills, which included observing, using space/time relationships, inferring, measuring, communicating, classifying, and predicting, and are closely associated with what Piaget defined as concrete operational skills (Beaumont-Walters & Soyibo, 2001; Inhelder & Piaget, 1958; Zimmerman, 2000), or ‘integrated’ skills, which included controlling variables, defining operationally, formulating hypotheses, interpreting data, and experimenting, and are closely related to Piaget’s formal operational skills (Inhelder & Piaget, 1958; Zimmerman, 2000; Huppert, Lomask, & Lazarowitz, 2002; Padilla, Okey, & Dillashaw, 1983). These skills are key factors in the ‘scientific literacy’ described by AAAS (AAAS, 1993; Colvill & Pattie, 2002), and have gone by many names over the decades, including scientific process skills (OECD, 1999), procedural skills (e.g., Gott & Duggan, 1994), experimental and investigative science skills (National Curriculum; DOE, 1995), habits of mind (AAAS, 1993), scientific inquiry abilities (National Academy of Sciences, 1994), and scientific reasoning skills (Lawson et al., 2000). Kuhn
(2002) defined them as intentional knowledge-seeking behaviors and the coordination of theory and evidence; Zimmerman (2007,
p.172) defined them as “the skills involved in inquiry, experimentation, evidence evaluation, and inference that are done in the


Fig. 1. Nine sub-skill categories of scientific process and reasoning skills.

service of conceptual change or scientific understanding.” The National Academies have defined them as ‘critical thinking skills’ or
‘science process skills’ encompassing the tools used by scientists, such as identifying causal questions, proposing hypotheses, predicting outcomes, modeling, using mathematical reasoning, and interpreting graphs (Bransford et al., 2000; NSF, 2000).
After evaluating the broad skill categories, we refined them into nine sub-skill categories, comprising the following:
Hypothetico-deductive (HD) Reasoning Type 1: Designing an Experiment, HD Reasoning Type 2: Predicting Results, HD Reasoning
Type 3: Drawing Conclusions, Proportional Reasoning, Probabilistic Reasoning, Identifying and Controlling Variables (ICV),
Correlational Reasoning, Interpreting Graphs, and Building Graphs, as shown in Fig. 1. We developed 10–15 items to test the use of
each sub-skill independently (e.g., a graphing question did not include data analysis skills), for a total of 80 test questions.

2.2. Subjects

We administered our test to undergraduate STEM and non-STEM majors from two large institutions in the West: a public open-enrollment institution and a highly selective private institution. The public institution has an enrollment of approximately 32,000
students with an average incoming freshman high school GPA of 3.27 and an average ACT score of 23. The private institution has an
enrollment of approximately 30,000 students with more competitive admissions criteria reflected by an average incoming freshman
high school GPA of 3.85 and an average ACT score of 29.

2.3. Characterizing false strategies

We distributed surveys containing the questions created in Section 2.1 and included a follow-up free-response question after each item
soliciting a description of the students’ reasoning process. We predicted that their false strategies would be illuminated as they
explained their reasoning. Responses were gathered from 304 individuals. Student responses were transcribed and distributed to nine
independent raters. Each rater reviewed all responses and identified emergent themes representing common false strategies using
grounded theory (Charmaz, 2014). All nine raters analyzed and corroborated findings until consensus was reached on a common set
of false strategies for each individual reasoning sub-skill. Each characterized false strategy was identified and quantified in the
student responses to written and digital surveys to assess the frequency of each common false strategy. If a response contained more
than one false strategy, it was recorded under both categories; unclear and correct responses were not included. These frequencies were generated relative to the total number of false strategies characterized. Thus, these are relative frequencies.
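In other words, the relative frequency of a given false strategy within a sub-skill is the number of responses coded with that strategy divided by the total number of false-strategy codes for that sub-skill, expressed as a percentage.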


Fig. 2. Relative frequencies of each common false strategy. Common false strategies are numbered from 1 up to a potential 6, corresponding with descriptions in the tables that follow. Numbers within the bars represent the relative frequencies in percent.

2.4. Triangulation of the data through interviews

In order to triangulate the data and confirm the emergent common false strategies from student responses, we selected 32
participants who employed the identified faulty reasoning strategies and interviewed them. We filmed the students giving step-by-
step accounts of how they solved the problems. Through analyzing the interviews, we effectively coded and confirmed the commonly
used false strategies.

3. Results

This research resulted in a set of common false strategies demonstrated by undergraduate students in each of the described SPARS categories. Using the responses to written surveys, we determined the frequency with which each false strategy occurred. These frequencies are illustrated for all SPARS categories in Fig. 2. Each of these false strategies is described below with accompanying examples from student responses to illustrate the thinking patterns.

3.1. Hypothetico-deductive reasoning

HD reasoning was divided into three subcategories: designing an experiment, predicting results, and drawing conclusions. Three,
three, and four common false strategies were identified in each subcategory, respectively, as shown in Table 1.

Table 1
Hypothetico-deductive reasoning false strategies. Each statement should be read with the following preface: “Students who struggle with this reasoning skill might…”.

HD Type 1: Designing an Experiment
1 Fail to correctly identify the independent variable
2 Confuse independent and dependent variables
3 Fail to control all variables

HD Type 2: Predicting Results
1 Predict results that are insufficient to support the hypothesis
2 Predict results that do not match the hypothesis
3 Rely on prior knowledge

HD Type 3: Drawing Conclusions
1 Fail to comprehend mixed or imperfect results
2 Focus on one set of data without considering multivariate conditions
3 Assume only combining variables can confirm a hypothesis
4 Rely on prior knowledge


3.1.1. HD type 1: designing an experiment


The first false strategy is failing to correctly identify the independent variable. This often leads students to design an experiment
that will not actually answer the question they are testing. For example, students were given a scenario where the concentration of
algaecide in a fishpond was increased resulting in fish death. Students were asked to design an experiment to determine if the
increased algaecide concentration was the cause of the death of the fish. The independent variable is the concentration of algaecide
(with the control being the normal concentration), but students often incorrectly identified the independent variable as the presence
or absence of the algaecide itself rather than the increased concentration. One student responded, “Bring more fishes and separate
them in two different ponds. One pond with the chemicals and the other one without them. Put the fishes [in] and see how they
behave in each situation.” Testing for the presence or absence of algaecide in the pond does not test the intended independent
variable—the concentration of algaecide—and indicates the presence of this false strategy.
The second false strategy was demonstrated by students who confused the independent with the dependent variable. Using the
same scenario of the algaecide, some students answered that “New fish should be added to the pond water, if the new fish dies it is the
algaecide killing the fish.” In this instance, the students are manipulating the dependent variable (the health of the fish) while holding
the independent variable constant (the concentration of the algaecide), drawing invalid conclusions since the test can neither logically support nor disprove the hypothesis.
The third false strategy is to fail to control every variable in an experiment. Again using the algaecide scenario, students would
often describe an experiment without providing any control group or other means of ensuring the only change between their two test
groups was the concentration of algaecide. For example, one student replied: “[You] could set up an experiment where one summer
she has a group of 5 fish living in the pond with no algae killer and the next summer have 5 fish live in the pond with algae killer. She
can then see if the algae killer is killing the fish.” The student is failing to account for a large number of uncontrolled variables that
could change over the course of a year, such as water levels, temperature, water nutrients, fish species, and weather, among many other variables. (Additionally, the student is demonstrating the second false strategy.)

3.1.2. HD type 2: predicting results


The first false strategy was demonstrated when the student predicts the results of only one of the treatment groups when these
results are not sufficient to support the hypothesis. For example, students were asked to predict the results of two groups of bacteria,
one wild-type, and one with particular genes mutated, when exposed to UV light, to support the idea that these genes are protective.
Students often answered that the bacteria containing the mutations would die, but failed to mention that the wild-type bacteria
should survive. One student replied, “I would predict the E.coli would die. This is because if the uvrA gene does not function the E.coli
will not be able function.” Students did not consider the fact that the death of the mutated bacteria alone was not sufficient evidence
to support the hypothesis.
The second false strategy was displayed when students predicted results of an experiment that either failed to show anything, or
supported the wrong hypothesis. In other words, they could not associate appropriate results with hypotheses. For example, students
were given two competing hypotheses to explain why water condenses on the inside of balloons when they are inflated by mouth:
either water vapor released during respiration condenses on the inside of the balloon, or air reacts with the plastic molecules on the
balloon in a chemical reaction that creates water. The experiment was to place a plastic bag over an empty plant pot (the control) and
another over a pot containing a plant (that presumably also respires). The students were asked to predict what the results of the
experiment would be given that both of the hypotheses were incorrect. (If both hypotheses were incorrect, neither bag should show condensation; water in both bags is precisely what the chemical-reaction hypothesis predicts.) A common prediction was, “If water was found in both bags both hypotheses are incorrect.” These students failed to recognize that this result supports the second hypothesis, thus demonstrating this false strategy.
The third false strategy was demonstrated when students utilized prior knowledge to answer a question instead of relying strictly
on the information provided. This false strategy commonly appears in almost all the reasoning categories. For example, referencing
the E. coli gene question given in a prior example, a student would assume both colonies of E. coli would die because bacteria are
damaged by UV light. In this example, one student came to the following conclusion stating, “With the UV-light, the bacteria should
be damaged enough, leading the DNA to ultimately die.” Instead of looking at the information given and drawing conclusions from
those facts, the student relies on prior knowledge to answer the question and consequently misses the more important trend in the
data.

3.1.3. HD type 3: drawing conclusions


The first false strategy was demonstrated by students who were unable to draw the correct conclusions when given mixed or
imperfect results because they could not account for error. In a question testing whether sight or smell had a greater effect on a
mouse’s sense of direction, students were given three different experiments where mice with impaired senses attempted to make their
way home. Students were given the results that 18 out of 20 mice found their way when sight was impaired, 7 out of 20 made it when
smell was impaired, and 6 out of 20 made it without both sight and smell. In this situation, several students concluded that sight must
play a role since 2 of the 20 mice who had sight impaired did not make it home. These students did not account for the possibility that
two mice did not make it home simply by chance alone. Even though the data suggests that smell is most likely the causal navigation
factor, a student responded, “It appears they both have support.” This leads us to believe that they were unable to account for error or
imperfect results.
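Expressed as success rates using the numbers given, 18/20 = 90% of mice returned home without sight, 7/20 = 35% without smell, and 6/20 = 30% without either; impairing smell accounts for nearly the entire drop, while the two failures among sight-impaired mice are small enough to be attributable to chance.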
The second false strategy was demonstrated when the student assumed that testing only one variable is enough to confirm or
preclude a hypothesis when two or more hypotheses have been put forth. Continuing with the mouse scenario, a student using this
false strategy would only look at the trial where the mice had their smell impaired to confirm that smell was the causal factor. They


Table 2
Proportional reasoning false strategies. Each statement should be read with the
following preface: “Students who struggle with this reasoning skill might…”.

Proportional

1 Swap ratios or units
2 Fail to solve problem to completion
3 Fail to use proportions
4 Fail to create a proper ratio using a colon

assumed that because far fewer mice made it home, the smell hypothesis was supported while the sight hypothesis was incorrect.
Alternatively, a student may look at the trial where 18 out of the 20 mice found their way home without sight, and conclude that
smell must be the primary sense in directionality. In other words, if one hypothesis is not supported, the other must be true. Assuming
that a confirmatory result for one hypothesis automatically precludes the possibility that the other hypothesis might also be true is a
clear demonstration of this false strategy.
The third false strategy takes into account the opposite assumption: only by combining the variables can we confirm a hypothesis.
In this situation, a student would look at only the trial where both sight and smell were impaired and assume that because very few
mice returned home both sight and smell must be important. This is problematic because the student overlooks the correct conclusion
that in the treatment where both senses were impaired, smell was the only sense having an effect. When relying on a multivariate
confirmation, a student misses trends that might be confirmed by single-variable trials.
The final false strategy was demonstrated when a student used prior knowledge to answer a question instead of relying on the information given in the problem, as described above in HD Type 2, strategy three.

3.2. Proportional reasoning

Four common false strategies were identified when dealing with proportional reasoning patterns, as shown in Table 2. The first
false strategy in proportional reasoning is when students swap the ratios or units around. For example, students were given a scenario
where they and their roommate are at a location 46 miles from home. The roommate leaves a half hour before them and drives at a
constant 32 miles per hour. Students were asked to calculate how fast they must drive to arrive home at the same time as their
roommate. One participant answered 43.135 miles per hour and wrote the following explanation: “46/32 = 1.4375 h − 0.5 h = 0.9375 h × 46 miles = 43.135 mph.” In this explanation, the student calculated how long it would take the
roommate to travel 46 miles going 32 miles per hour (∼1.4 h). The student then subtracted the half hour they lost to get a time
remaining of 0.9375 h. At this point, instead of dividing 46 miles by 0.9375 h to get miles per hour, the student multiplied hours by
miles, failing to recognize that their final answer was not given in miles per hour. This false strategy was assigned any time students
inverted a proportion.
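(For reference, the correct completion keeps the units consistent: the roommate needs 46/32 = 1.4375 h, leaving 1.4375 − 0.5 = 0.9375 h of driving time, so the required speed is 46 miles ÷ 0.9375 h ≈ 49.1 mph.)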
The second false strategy was that students often did not solve the problem to completion. One problem stated that a fish grows by 550% from birth to adolescence, then another 150% from its adolescent weight to adulthood. Students were then asked to
calculate the weight of a fish as an adult if it weighed two grams as a newborn. One student answered, 11 g, stating that they “used
0.550 × 2 and shifted some decimals.” This answer was correct for the growth to adolescence, but they forgot to find the growth all
the way to adulthood. Thus, not completing all steps of a problem was a common false strategy displayed.
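(Under the item’s implied reading of “grows by 550%,” which makes the adolescent weight 2 g × 5.50 = 11 g, the value the text treats as correct for that stage, completing the problem gives an adult weight of 11 g × 1.50 = 16.5 g.)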
In the third false strategy, we observed that some students failed to set up any proportion when given a proportional reasoning
question. This false strategy was commonly seen in a capture/recapture problem. In this case, 40 deer were captured and tagged. Three
weeks later, 40 deer were captured, of which 15 were already tagged. Students often answered that there were a total of 65 deer in
the population. One student explained their reasoning the following way: “65 since you captured a total of 80 deer but 15 were
already tagged so 80 − 15 = 65.” These students simply added together all deer who were captured and subtracted the 15 deer who
were captured twice. This showed that the students did not think of the problem as a proportion and only used the numbers that were
given, a very typical display of concrete operations.
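The proportional setup the problem calls for, assuming the tagged fraction of the recaptured sample reflects the whole population, is 15/40 = 40/N, which gives N = (40 × 40)/15 ≈ 107 deer.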
The fourth false strategy was indicative of a misunderstanding of the use of a colon in terms of ratios, a common notation utilized
in proportional reasoning problems. For example, students were given the following problem: “You notice in a population of rabbits
that there is a 1:3 ratio of brown rabbits to white rabbits. Natural selection favors white rabbits so that this ratio is doubled every
year. After a year, you find 24 white rabbits. How many brown rabbits are there?” A common answer given was eight brown rabbits.
One student explained this answer saying, “8 brown rabbits because it’s 1/3 the amount of white.” The student assumed that a 1:3
ratio was the same thing as the fraction 1/3 instead of 1/4, which led them to the wrong answer.
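For reference, in a 1:3 brown-to-white ratio, brown rabbits make up 1/(1 + 3) = 1/4 of the population; the fraction 1/3 describes brown rabbits relative to white rabbits, not relative to the whole.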

3.3. Probabilistic reasoning

Three common false strategies were identified when dealing with probabilistic reasoning patterns, as shown in Table 3. The first
false strategy was that students were not considering all the factors affecting the probability. For example, they were given a pie chart
denoting the fate of bacteria exposed to bacteriophages where 50,000 were infected and lived, 450,000 were infected and died, and
500,000 were not infected at all. When asked to determine the probability that if infected a bacterium would survive, students often


Table 3
Probabilistic reasoning false strategies. Each statement should be read with the following
preface: “Students who struggle with this reasoning skill might…”.

Probabilistic

1 Not consider all factors that affect the probability
2 Confuse the “and/or” rules of combining probabilities
3 Incorrectly identify the denominator

answered 5%, which they found by dividing 50,000 by 1,000,000. They failed to understand that only the bacteria that were actually infected affect the probability. This was additionally seen frequently when calculating genetic probabilities, when students failed to consider
both parents in the probability of inheriting a trait.
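Worked out, the conditional probability uses only the infected bacteria: 50,000 survivors out of 50,000 + 450,000 = 500,000 infected gives 10%, whereas dividing by all 1,000,000 bacteria, including the 500,000 never infected, yields the erroneous 5%.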
The second false strategy was demonstrated when students failed to recognize when to add probabilities and when to multiply
probabilities. These are referred to as the “and/or” rules of probabilities. In a question about blood types where it was given that out
of 200 people, 50 have type A, 65 have type B, 70 have type O, and 15 have type AB, students were asked to determine the probability
of choosing a person with type A or type AB blood at random. Students would sometimes answer 1.88% because they chose to
multiply the odds of picking a type A individual by the odds of picking a type AB individual. The answer should have been 32.5%
because the probabilities should be added rather than multiplied.
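Because a randomly chosen person cannot have both blood types, the events are mutually exclusive and the “or” rule adds the probabilities: 50/200 + 15/200 = 65/200 = 32.5%. The erroneous multiplication, (50/200) × (15/200) ≈ 1.88%, instead answers a different question: the probability of drawing a type A person and then a type AB person on two independent draws.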
The third false strategy is a case of not considering choice without replacement. For example, students were given a situation
where 4 out of 15 flies had a deformity that could only be passed to offspring if both parents have it. When asked to find the probability of
two flies mating and having offspring with the deformity, students often answered 7.1% instead of 5.7%. This is due to multiplying
the odds of choosing a deformed fly out of fifteen twice (4/15 × 4/15) instead of taking into account the loss of the first fly from the
population (i.e., 4/15 × 3/14).
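Worked out, sampling without replacement gives (4/15) × (3/14) = 12/210 ≈ 5.7%, whereas (4/15) × (4/15) = 16/225 ≈ 7.1% implicitly returns the first fly to the population before the second draw.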

3.4. Identifying and controlling variables

Five common false strategies were identified when students were asked to identify and control variables in an experimental
situation, as shown in Table 4. The first false strategy used was assuming that all possible variables—including unrelated ones—must
be tested in order to draw a conclusion. For example, students were asked which conditions would be necessary to test the effect of
UV light on a banana’s browning time and were given the following four treatments: 0 °C in the dark, 0 °C exposed to UV light, room
temperature in the dark, and room temperature exposed to UV light. When asked which treatments should be run to test the effect of
UV light, students with this false strategy answered that one “must run all four conditions to know,” even though either the first two
or the second two conditions, where light is the only changing variable, would be sufficient to test the effect.
The second false strategy was testing the opposite variables necessary to solve the question. In the same problem as above,
students using this false strategy chose the two conditions where UV light is present, but temperature is changing. In this case,
because UV light is constant and temperature is the variable changing, the opposite variable (temperature) was in fact being tested.
One student using this false strategy explained that they made this choice because “both [conditions] involved a UV light.” Students
with this false strategy presumably fixate on the variable being tested and assume it must be present in both test and control
conditions.
The third false strategy was demonstrated when students failed to recognize a confounding variable. Continuing with the same
scenario as above, students who used this false strategy chose answers that show a change in UV light but were careless in keeping the
variable of temperature constant. Specifically, in choosing the conditions with a banana at 0 °C in the dark and a banana at room
temperature in UV light in order to test the effects of UV light, they failed to acknowledge that the change in temperature could have
also been a factor in banana discoloration.
The fourth false strategy was demonstrated by students who mistakenly focused on one trial or variable, being unable to consider
other trials. For example, students were presented with data from four different snowboard racing trials testing two different waxes
on two snowboard types. Results showed that a carbon fiber board with natural wax took 38 s, a carbon fiber board with synthetic
wax took 23 s, a fiberglass board with synthetic wax took 24 s, and a fiberglass board with natural wax took 15 s. Based on the data
provided, a student using this false strategy concluded, “the fiberglass/natural wax combo is the fastest,” so “natural wax is better

Table 4
Identifying and controlling variables false strategies. Each statement should be read with the
following preface: “Students who struggle with this reasoning skill might…”.

Identifying and Controlling Variables

1 Test all possible (including unrelated) variables
2 Test opposite variables
3 Fail to keep variables constant
4 Focus on only one variable or trial
5 Rely on prior knowledge


Table 5
Correlational reasoning false strategies. Each statement should be read with the following preface: “Students who
struggle with this reasoning skill might…”.

Correlational

1 Conclude that the hypothesis is not confirmed if one example does not fit
2 Claim that “correlation is not causation” thus there is no relationship
3 Fail to organize data clearly
4 Confuse positive and negative correlation
5 Consider only confirmatory cases instead of reviewing entire data
6 Rely on prior knowledge

than synthetic wax and fiberglass is better than carbon fiber.” However, this student ignored the trial showing that synthetic wax
yields a faster race time on a carbon fiber board. Thus, it appears that the attractiveness of the fastest race time blinded the students to
the data yielded from other trials. The final common false strategy students had was relying on prior knowledge to find their answer,
as was described previously.

3.5. Correlational reasoning

Six common false strategies were identified when dealing with correlational reasoning patterns as shown in Table 5. The first false
strategy was demonstrated when a student failed to confirm a hypothesis if an outlier existed. One problem asked students if there
was a correlation between size and color in starfish, showing them a picture of 3 large purple, 1 small purple, and 8 small pink
starfish. One student portraying this false strategy said, “Had the small purple starfish been excluded the argument could have been
made that there is a causal correlation between size and color, but its presents [sic] suggests that the link is only coincidental.” This
student failed to confirm the pattern because of one outlier in the data.
The second false strategy was demonstrated by students who claimed that “correlation is not causation” and thus there is no
relationship. For example, students were given data about the color of bats and their diet and asked if there was a correlation between
the two. Even though the majority of brown bats ate fruit and the majority of black bats ate insects, students who used this false
strategy made claims such as the following: “There isn’t sufficient information to say wether [sic] or not the color is the linking factor.
Correlation is not causation.” By claiming that correlation does not imply causation, students were actually confusing correlation
with causation and were unable to disentangle the two.
The third false strategy was demonstrated by failing to properly organize the data before checking for a correlation. For example,
students were provided a randomly recorded table of individuals and their hair and eye color and asked to determine if hair and eye
color are correlated. It became evident that many students who claimed “no correlation” simply failed to organize the data in a way
that allowed them to see the pattern. When these same students were shown the data in an organized table, they easily recognized the
correlation.
The fourth false strategy was demonstrated by students who confused positive and negative correlations. One problem stated that
there is a positive correlation between mating success and length of feathers in birds and asked them to identify whether mating
success would be better with feathers 5 cm or 8.5 cm long. Students demonstrating this false strategy consistently chose 5 cm instead
of 8.5 cm. One student stated, “That depends on what the correlation between the length of feathers and mating success is…. If
mating success is correlated with smaller feathers then the 5 cm feathers would give you more mating success.” This student did not
understand that a positive correlation has both variables (mating success and feather length) increasing together and, as a consequence, incorrectly chose the opposite trend.
The fifth false strategy was demonstrated when students considered only confirmatory cases in the data rather than viewing the
data set as a whole. For example, students were presented with several bacterial species with accompanying optimal growth temperatures and GC content of the DNA and asked whether the growing temperature is correlated to the GC content. Upon encountering
one extreme case where both the growing temperature and GC content were high, students using this false strategy assumed a
correlation by presumably ending their search of the data after finding one confirmatory case. However, they failed to notice that
several other cases presented high growing temperatures with low GC content and low growing temperatures with high GC content.
Instead of analyzing the entire data set to recognize the lack of correlation, students utilizing this false strategy focused on one
confirmatory data point.
The final false strategy students had was relying on prior knowledge to find their answer. Regarding hair and eye color, one
student said that there was no correlation because “…the genes for eye color and hair color are not near each other on a strand of
DNA.” This student failed to analyze the data to find the correlation and relied on their prior knowledge to give their answer.

3.6. Graphing

As was described previously, the graphing category was divided into two sub-skills: building graphs and interpreting graphs. Five
common false strategies were identified in each subcategory, as shown in Table 6.


Table 6
Graphing false strategies. Each statement should be read with the following preface: “Students who struggle with this reasoning skill might…”.

Building Graphs
1 Confuse the x and y axes
2 Fail to recognize the full scale that needs to be graphed
3 Fail to construct axes to reflect the relationship between variables
4 Fail to include all variables on the graph
5 Rely on prior knowledge

Interpreting Graphs
1 Identify false trends because scale is not considered
2 Focus on one aspect of a graph without examining all the data
3 Fail to identify the meanings of points and slopes
4 Fail to compare control vs. experimental treatments
5 Rely on prior knowledge

3.6.1. Building graphs


The first false strategy involved the labeling of the x- and y-axes. Some students incorrectly labeled the vertical axis as the x-axis
and the horizontal as the y-axis. In addition, many students labeled the x-axis as the dependent variable and the y-axis as the
independent variable. Both errors indicated a lack of understanding of the axes.
The second false strategy indicated a lack of recognition of the full scale that needed to be graphed in order to see the pattern.
Responding to a question about the path of a thrown baseball given an equation, students failed to recognize that in order to
completely comprehend the path of the baseball, they needed to display the entire path of the ball from the release until it hit the
ground (from 0 to 30 feet). One student only included the points at a distance of one, two, and three feet. This student’s poorly
constructed graph gave the impression that the baseball traveled in a linear path upward instead of a parabolic path, thus misrepresenting the data.
The third false strategy was to incorrectly build the graph axes such that the relationship between variables was not apparent. For
example, students were asked to graph the results found from testing different pH levels of soil on the growth of bamboo. The correct
way to graph this data was with pH on the x-axis and the average plant growth on the y-axis. A student displaying this false strategy
created a graph with three axes: a primary y-axis indicating the pH, a secondary y-axis as the average plant height, and the x-axis as
each individual plant. The pH and plant height for each trial were graphed as separate points for each bamboo plant measured.
Consequently, the relationship between the two variables (pH and plant growth) was lost.
The fourth false strategy was demonstrated by students who failed to include all experimental variables on the graph. One
question asked students to graph three different variables on a single graph. The given table included data for temperature and
average rainfall during each month in the year. Students with this false strategy creatively came up with many incorrect ways to
misrepresent this data. In one of the most common examples seen, students graphed one independent variable on the x-axis and the
other on the y-axis, incorrectly showing average rainfall as a function of temperature. Another common example of this false strategy
was displayed when students simply left either temperature or average rainfall off their graph or drew two separate graphs for the
data.
The last false strategy was demonstrated by students using prior knowledge rather than the information presented in the problem,
as has been described previously. When given the problem with the path of a baseball, one student displaying this false strategy
wrote, “When a ball is thrown, the path starts going up, but as it travels, the gravity pulls it back down.” While this student had the
correct shape of the curve, no points were labeled and the data were not accurately displayed. The student used prior knowledge of the path of a baseball to draw the graph.

3.6.2. Interpreting graphs


The first false strategy under this sub-skill was demonstrated by students who failed to recognize the effect of scale on what a
graph is depicting. For example, students were presented with three graphs showing the internal temperature of three different solar
ovens from the time they started cooking cornbread to the time they returned to their original starting temperature. Students were
then given optimal cooking conditions for cornbread (high temperatures for at least two hours) and asked which solar oven was best.
Those who used this false strategy did not recognize the difference in the scaling of the x- and y-axes, which depended, respectively, on how long that particular solar oven took to return to its starting temperature and on the highest temperature the oven reached. As a consequence, these students incorrectly made direct comparisons between the graphs and chose their answer based on which curve was highest for the longest time, failing to recognize that the scale of the axes can lead to incorrect assumptions about the data being displayed.
The second false strategy for interpreting graphs was demonstrated by students who, when presented with a graph, focused on
one salient aspect rather than the accumulation of all the data presented. For example, a graph that plotted amount of dissolved
oxygen against the temperature of water showed fluctuations up and down but an overall general downward trend, or negative
correlation between these two variables. A student using this strategy may have focused on one upward point and concluded that
higher temperatures lead to more oxygen dissolved. However, they were unable to identify the overall negative correlation because
they focused on only one part or data point. This ultimately leads to the misidentification or overlooking of meaningful trends in the
graph.
The third false strategy was demonstrated by students who did not understand the difference between points and slopes. For
example, students were given a graph of enzyme activity (y-axis) and temperature (x-axis) that shows a clear parabolic trend peaking
at around 35 °C. Students were then asked to determine at what temperature the enzyme’s proteins completely denature. Students
commonly answered around 35 °C because that was where the graph began to slope downward. This false strategy led students to

believe that the turning point was key. However, the correct answer was actually the point on the graph where the protein was completely non-functional, which was the very bottom of the parabola (approximately 40 °C). Students utilizing this false strategy
focused on a key point on the graph without considering what that key point actually represents.
The fourth false strategy was demonstrated by students who failed to compare results to a control group when making inter-
pretations. Students were given a graph of wild-type and bacterial mutant survival rates when exposed to UV radiation. In the graph,
three lines were drawn: a horizontal line at 100% survival representing the wild type group; a line slightly curving down and
dropping below 0.1% survival, representing the trial with the gene A removed; and a line curving greatly, also dropping below 0.1%
survival, representing the trial with gene B removed. Students answered that the two genes are redundant presumably because they
only compared the two lines where each gene was mutated without comparing them to the wild-type group. Their reason was that
since the lines both drop below 0.1% survival and look relatively similar, the genes must have the same function and are therefore
redundant. However, when comparing the two mutants to the wild type, the absence of either gene drops survival rates significantly
from the normal bacteria. Thus, one can conclude that since the removal of either gene drops survival rates dramatically, both genes
must have an important role in UV protection.
The fifth false strategy again is demonstrated when students refer to prior knowledge rather than utilizing the information given
on the graph, as described previously. Referring back to the solar ovens, one student responded, “If cornbread is going to take two
hours to bake, the temp would be lower. Normal oven recepies [sic] are 375–425F, at about 20–40 min.” Instead of using the
information presented in the problem, this student relied on their prior knowledge to answer this question and consequently chose
the wrong answer.

4. Discussion

In this study, we successfully identified common false strategies used by undergraduate students in solving STEM-related scientific
reasoning problems. These false strategies impede both students’ ability to use reasoning skills appropriately and their ability to transfer them within STEM subjects. The results are both consistent with and add to previous findings.
The literature on hypothetico-deductive reasoning is primarily focused on defining the necessary skills to utilize this reasoning
pattern (e.g., Heisterkamp & Talanquer, 2015). Zimmerman (2000) suggested that to effectively participate in experimentation, scientists needed skills such as experimentation, concept formation, and evidence evaluation, while Moore (2012) found that students with hypothetico-deductive reasoning skills performed much better in STEM classes and identified several false conclusions students come to when answering hypothetico-deductive questions. Fischer et al. (2014) also outline eight epistemic activities involved in experimentation, among them experimental design, hypothesis generation, and drawing conclusions. These activities correspond directly to the three sub-skills of hypothetico-deductive reasoning outlined in our results. While the
literature focuses on identifying these skills and why they are important, little, if anything, has been done to characterize the false
strategies that students use when attempting to demonstrate these skills (Cracolice & Busby, 2015; Hilton & Hilton, 2016; Ding, Wei, & Liu, 2016). We found that ten common false strategies are regularly utilized by undergraduate students, causing them to consistently
fail to solve hypothetico-deductive reasoning problems.
Similar to the literature on hypothetico-deductive reasoning, much of the literature on proportional reasoning has focused on
suggesting the patterns that would demonstrate appropriate use of proportional reasoning (e.g., Kastberg et al., 2012; Martínez Ortiz,
2015; Hilton et al., 2012). Some work has been done on identifying false strategies. In a study done on prospective elementary school
teachers, these prospective educators were found to use false strategies similar to those of elementary-aged students (Valverde & Castro, 2012). These false strategies included focusing on one value, confusing additive and multiplicative relationships, and misinterpreting
ratios. These three common missteps correspond to our findings that students fail to use proportions and misinterpret ratios. In
addition, we found that students often swap ratios or units and that they often fail to complete all steps of a problem, adding further
understanding to student thought processes when performing proportional problems.
Much of the literature on probabilistic reasoning focuses on the ability of students (mostly of elementary age) to differentiate
between ‘and/or’ rules when working with probabilities (e.g., Rubel & Zolkower 2007) or the ability of students to recognize the
effects of chance (Remigio et al., 2014). Two other common false strategies proposed in the scientific literature are the outcome
approach and the representativeness heuristic (Konold et al., 1993). The outcome approach states that people do not identify their goal as
specifying probabilities, but as predicting the results of a single trial. Our characterization of other false strategies, such as not considering all factors affecting the probability and incorrectly identifying the denominator, appears to be a novel identification of mistakes made with mathematical probability problems.
Identifying and controlling variables has been the focus of much research. de Jong and van Joolingen (1998) identified many points of difficulty, including choosing the right variables to test, determining relationships between variables, designing conclusive experiments to test variables, drawing the correct conclusions from experiments, and interpreting experimental results. Kuhn (2007)
found that students oftentimes did not have the skills necessary to handle multivariable causality, even in cases where they demonstrated the skills to correctly design experiments. Boudreaux et al. (2008) also identified a lack of the skills necessary to interpret
results and make conclusions based on data provided from experiments with multiple variables. Leatham (2012) argues that the main cause of false ICV strategies is that students understand the meaning of “dependent” and “independent” variables but do not know how to identify the variables in differing contexts. Zhou et al. (2016) described three levels of identifying and controlling
variables in students: lower-end, intermediate, and higher-end. These range from merely recognizing a situation in which variables need to be controlled to inferring results from one. Our identified false reasoning strategies serve to elaborate on the current literature.


In studying correlational reasoning, several researchers have identified common missteps. Coleman et al. (2015) showed that
students often mistake correlation for causation. We did not find this to be a common mistake, but rather found the opposite (i.e., that
students failed to recognize correlations because they affirm that correlation does not equal causation). Zimmerman (2000) found
that subjects often only compare sums of treatments rather than creating relevant ratios when drawing conclusions about correlations. Similarly, one of the false strategies we identified was that students often do not organize the data in such a way as to view a
correlation properly. In addition, Zimmerman found that students often use their prior knowledge or bias to answer a question
incorrectly, a trend we saw frequently in all reasoning patterns. We found additional false strategies in correlational reasoning that
add to our understanding of the development of this reasoning pattern.
With regard to graphing, research has shown a correlation between the ability to build and interpret graphs and a student’s
scientific conceptual skills (Gültepe, 2012). Leonard and Patterson (2004) focused on the mistakes made while constructing graphs
including using incomplete/missing information, using incorrect scale, not labeling axes, or including too much/inappropriate data.
These observations match up well with the common mistakes we identified. Our findings also elaborate on other mistakes such as
switching the x and y axes and incorporating prior knowledge. Clement (1985) noticed that students often treated graphs as literal
“pictures” of the situation rather than plots of data, e.g., drawing a parabola to represent a hill instead of plotting the speed vs. position of a bike going up the hill, or confusing a graph’s slope with its height. We observed these same mistakes with the
addition of several more.
Knowing the false strategies that students use in solving scientific reasoning problems is of utmost importance to both educators
and researchers alike. By clearly describing these false strategies, we have offered direction for targeted and explicit instruction. In
addition, we have shown that these false steps do not just occur at the K-12 level, but persist into adulthood. By understanding exactly
where and how in the process of reasoning students are utilizing less than successful strategies, educators can make the necessary
corrections or additions to their curricula to help students overcome these missteps. Additionally, it allows for targeted future
hypothesis testing for researchers focusing on strategies to teach and improve scientific reasoning.
Some limitations within our study may suggest a need for caution in broadly applying these results. Although the insights given
here are a valuable starting point and have been supported by other studies showing that college-level students lack these reasoning
skills (Ding et al., 2016), future studies based on this subject would benefit from involving a more diverse set of college students. The
majority of our research was conducted at a private institution with a fairly homogeneous population of high-achieving students.
However, the fact that these false strategies persist in such a highly prepared and motivated student population suggests that these
false strategies are very prevalent and resistant to change. A larger study involving multiple diverse institutions would be broader in scope, more generalizable, and better able to determine whether the false reasoning strategies identified in this study are localized or widespread.
We have outlined various false strategies students use when faced with problems requiring scientific reasoning skills. Future research is warranted and might include studies on whether these strategies can be explicitly targeted with direct teaching and whether doing so leads to more accurate scientific reasoning. A recent meta-analysis of intervention studies (Engelmann et al., 2016) suggests that direct instruction can be successful, especially if it is constructivist in its approach. Building upon a current and accurate understanding of where students are starting could greatly benefit efforts to build such curricular interventions. If scientific reasoning ability is teachable, research could confirm that improving these skills narrows the gap between the 40% of students who intend to graduate with STEM degrees and the 16% who actually receive them.
In addition to explicit instruction, gains in scientific reasoning ability have been achieved through more implicit methodologies using well-known constructivist approaches to instruction. Many studies comparing traditional lecture to active learning have shown increases in student achievement in STEM (e.g., Freeman et al., 2007; Hake, 1998; Jensen & Finley, 1996; Knight & Wood, 2005; Nehm & Reilly, 2007; Udovic et al., 2002). Inquiry instruction is a form of active learning that seeks to mirror the actual process of science. It requires that learners engage in scientifically motivated questions, generate and test alternative explanations based on evidence, connect explanations with scientific knowledge, and communicate and justify their explanations (NRC, 2000). Research has shown that inquiry instruction using the learning cycle is an effective constructivist teaching method, leading to greater conceptual understanding and scientific reasoning gains than a traditional lecture format (e.g., Heiss et al., 1950; Howard & Miskowski, 2005; Jensen & Lawson, 2011; Minner et al., 2009; Renner et al., 1973; Rissing & Cogan, 2009; Spiro & Knisely, 2008). In addition, inquiry instruction improves student attitudes and motivation to learn (Berg et al., 2003; Gibson & Chase, 2002). The Vision and Change movement has emphasized the need to teach science as science is done, to stress science process skills rather than content alone, and to involve students in a more active learning approach (AAAS, 2011). Thus, implicit approaches to improving STEM education as a whole, coupled with explicit instruction on specific scientific reasoning skills, can lead to improved performance in STEM classes, increased confidence in abilities, and persistence in STEM degrees.

References

American Association for the Advancement of Science (AAAS) (1967). Science – a process approach. Washington, DC: AAAS.
American Association for the Advancement of Science (AAAS) (1993). Benchmarks for scientific literacy. New York: Oxford University Press.
American Association for the Advancement of Science (AAAS) (2011). Vision and change in undergraduate biology education: a call to action. Washington, DC: AAAS.
Beaumont-Walters, Y., & Soyibo, K. (2001). An analysis of high school students’ performance on five integrated science process skills. Research in Science &
Technological Education, 19, 133–145.
Berg, A. R., Bergendahl, C. B., & Lundberg, B. K. S. (2003). Benefiting from an open-ended experiment? A comparison of attitudes to, and outcomes of, an expository
versus an open-inquiry version of the same experiment. International Journal of Science Education, 25, 351–372.
Boudreaux, A., Shaffer, P. S., Heron, P. R., & McDermott, L. C. (2008). Student understanding of control of variables: Deciding whether or not a variable influences the behavior of a system. American Journal of Physics, 76, 163–170.
Bransford, J. D., Brown, A. L., & Cocking, R. R. (Eds.). How people learn: Brain, mind, experience, and school. National Academy of Sciences – National Research Council, Commission on Behavioral and Social Sciences and Education. Washington, DC: National Academies Press.
Charmaz, K. (2014). Constructing grounded theory. Los Angeles, CA: Sage.
Clement, J. (1985). Misconceptions in graphing. Paper presented at the meeting of the International Group for the Psychology of Mathematics Education, Noordwijkerhout, The Netherlands.
Coleman, A. B., Lam, D. P., & Soowal, L. N. (2015). Correlation, necessity, and sufficiency: Common errors in the scientific reasoning of undergraduate students for
interpreting experiments. Biochemistry and Molecular Biology Education, 43, 305–315.
Colvill, M., & Pattie, I. (2002). The building blocks for scientific literacy. Australian Primary & Junior Science Journal, 18, 20–23.
Cracolice, M. S., & Busby, B. D. (2015). Preparation for college general chemistry: More than just a matter of content knowledge acquisition. Journal of Chemical
Education, 92(11), 1790–1791.
Department of Education (1995). Science in the national curriculum 1995. London: HMSO.
Ding, L., Wei, X., & Mollohan, K. (2016). Does higher education improve student scientific reasoning skills? International Journal of Science and Math Education, 14,
619–634.
Engelmann, K., Neuhaus, B. J., & Fischer, F. (2016). Fostering scientific reasoning in education – meta-analytic evidence from intervention studies. Educational
Research and Evaluation, 22, 333–349.
Fischer, F., Kollar, I., & Ufer, S. (2014). Scientific reasoning and argumentation: Advancing an interdisciplinary research agenda in education. Frontline Learning
Research, 5(2), 28–45.
Freeman, S., O’Connor, E., Parks, J. W., Cunningham, M., Hurley, D., Haak, D., et al. (2007). Prescribed active learning increases performance in introductory biology.
CBE Life Science Education, 6, 132–139.
Freeman, S., Eddy, S. L., McDonough, M., Smith, M. K., Okoroafor, N., Jordt, H., et al. (2014). Active learning increases student performance in science, engineering,
and mathematics. PNAS, 111(23), 8410–8415.
Gültepe, N. (2012). Reflections on high school students’ graphing skills and their conceptual understanding of drawing chemistry graphs. Educational Sciences: Theory
and Practice, 16, 53–81.
Gibson, H. L., & Chase, C. (2002). Longitudinal impact of an inquiry-based science program on middle school students’ attitudes toward science. Science Education, 86,
693–705.
Gott, R., & Duggan, S. (1994). Investigative work in the science curriculum. Buckingham: Open University Press.
Grigg, W., Lauko, M., & Brockway, D. (2006). The nation’s report card: Science 2005 (NCES 2006-466). Washington, DC: U.S. Government Printing Office.
Hake, R. R. (1998). Interactive-engagement versus traditional methods: a six-thousand-student survey of mechanics test data for introductory physics courses.
American Journal of Physics, 66, 64–74.
Handelsman, J., Miller, S., & Pfund, C. (2007). Scientific teaching. New York: W. H Freeman and Company.
Heiss, E. D., Obourn, E. S., & Hoffman, C. W. (1950). Modern science teaching. New York: Macmillan.
Heisterkamp, K., & Talanquer, V. (2015). Interpreting data: The hybrid mind. Journal of Chemical Education, 92(12), 1988.
Hilton, A., & Hilton, G. (2016). Proportional reasoning: An essential component of scientific understanding. Teaching Science, 62(4), 32.
Hilton, A., Dole, S., Hilton, G., Goos, M., & O'Brien, M. (2012). Evaluating middle years students’ proportional reasoning. Singapore: Mathematics Education Research
Group of Australasia.
Howard, D. R., & Miskowski, J. A. (2005). Using a module-based laboratory to incorporate inquiry into a large cell biology course. CBE – Life Science Education, 4,
249–260.
Huppert, J., Lomask, S., & Lazarowitz, R. (2002). Computer simulations in high school: Students’ cognitive stages, science process skills and academic achievement in
microbiology. International Journal of Science Education, 24, 803–821.
Hurtado, S., Eagan, K., Pryor, J. H., Whang, H., & Tran, S. (2012). Undergraduate teaching faculty: The 2010–2011 HERI faculty survey. Los Angeles: Higher Education
Research Institute, UCLA.
Inhelder, B., & Piaget, J. (1958). The growth of logical thinking from childhood to adolescence. New York: Basic Books.
Jensen, M. S., & Finley, F. N. (1996). Changes in students’ understanding of evolution resulting from different curricular and instructional strategies. Journal of
Research in Science Teaching, 33, 879–900.
Jensen, J. L., & Lawson, A. E. (2011). Effects of collaborative group composition and inquiry instruction on reasoning gains and achievement in undergraduate biology.
CBE – Life Science Education, 10, 64–73.
Jong, T. D., & Joolingen, W. R. (1998). Scientific discovery learning with computer simulations of conceptual domains. Review of Educational Research, 68, 179–201.
http://dx.doi.org/10.3102/00346543068002179.
Kastberg, S. E., D'Ambrosio, B., & Lynch-Davis, K. (2012). Understanding proportional reasoning for teaching. Australian Mathematics Teacher, 68(3), 32–40.
Knight, J. K., & Wood, W. B. (2005). Teaching more by lecturing less. Cell Biology Education, 4, 298–310.
Konold, C., Pollatsek, A., Well, A., Lohmeier, J., & Lipson, A. (1993). Inconsistencies in students’ reasoning about probability. Journal for Research in Mathematics
Education, 24(5), 392. http://dx.doi.org/10.2307/749150.
Koslowski, B. (1996). Theory and evidence: The development of scientific reasoning. Cambridge: MIT Press.
Kuhn, D., & Franklin, S. (2006). The second decade: What develops (and how). In W. Damon, R. M. Lerner, D. Kuhn, & R. S. Siegler (Eds.), Handbook of child psychology: Cognition, perception and language (6th ed., Vol. 2, pp. 953–993). Hoboken, NJ: John Wiley & Sons.
Kuhn, D., & Phelps, E. (1982). The development of problem-solving strategies. In H. Reese (Vol. Ed.), Advances in child development and behavior: vol 17, (pp. 1–44).
New York: Academic Press.
Kuhn, D., Amsel, E., & O’Loughlin, M. (1988). The development of scientific thinking skills. Orlando, FL: Academic Press.
Kuhn, D., Schauble, L., & Garcia-Mila, M. (1992). Cross-domain development of scientific reasoning. Cognition & Instruction, 9, 285–327.
Kuhn, D. (2002). What is scientific thinking and how does it develop? In U. Goswami (Ed.). Blackwell handbook of childhood cognitive development (pp. 371–393).
Oxford: Blackwell Publishing.
Kuhn, D. (2007). Reasoning about multiple variables: Control of variables is not the only challenge. Science Education, 91, 710–726. http://dx.doi.org/10.1002/sce.
20214.
Lawson, A. E., Alkhoury, S., Benford, R., Clark, B. R., & Falconer, K. A. (2000). What kinds of scientific concepts exist?: Concept construction and intellectual
development in college biology. Journal of Research in Science Teaching, 37, 996–1018.
Lawson, A. E. (1992). The development of reasoning among college biology students—A review of research. Journal of College Science Teaching, 21, 338–344.
Leatham, K. (2012). Problem identifying independent and dependent variables. School Science and Mathematics, 112(6), 349–358.
Leonard, J. G., & Patterson, T. F. (2004). Simple computer graphing assignment becomes a lesson in critical thinking. NACTA, 48, 17–21 [June 2004].
Martínez Ortiz, A. (2015). Examining students’ proportional reasoning strategy levels as evidence of the impact of an integrated LEGO robotics and mathematics
learning experience. Journal of Technology Education, 26(2), 46–69.
Minner, D. D., Levy, A. J., & Century, J. (2009). Inquiry-based science instruction: What is it and does it matter? Results from a research synthesis years 1984–2002.
Journal of Research in Science Teaching, 47, 474–496.
Moore, J. C. (2012). Transitional to formal operational: Using authentic research experiences to get non-science students to think more like scientists. European Journal of Physics Education, 3(4), 1–12.
National Center for Education Statistics (2009). The nation’s report card: Reading 2009 (NCES 2010-458). Washington, DC: Institute of Education Sciences, U.S. Department of Education. Retrieved from: http://nces.ed.gov/nationsreportcard/pdf/main2009/2010458.pdf.
National Research Council (2000). Inquiry and the national science education standards: A guide for teaching and learning. Washington, DC: National Academy Press.
National Science Foundation (2000). Foundations volume 2: A monograph for professionals in science, mathematics, and technology education. Retrieved from: http://www.nsf.gov/pubs/2000/nsf99148/start.htm.

National Academy of Sciences (1994). National science education standards. Washington, DC: National Academy of Sciences.
National Research Council (2015). Reaching Students: What research says about effective instruction in undergraduate science and engineering. Washington, DC: National
Academy Press.
Nehm, R. H., & Reilly, L. (2007). Biology majors’ knowledge and misconceptions of natural selection. BioScience, 57, 263–272.
OECD (1999). Measuring student knowledge and skills: A new framework for assessment. Paris: OECD. ISBN 92-64-17053-7.
PCAST (2012). Engage to excel: Producing one million additional college graduates with degrees in science, technology, engineering, and mathematics. Retrieved August 1, 2013 from http://www.whitehouse.gov/sites/default/files/microsites/ostp/pcast-engage-to-excel-final_feb.pdf.
Padilla, M., Okey, J., & Dillashaw, R. (1983). The relationship between science process skills and formal thinking abilities. Journal of Research in Science Teaching, 20,
239–247.
Remigio, K. B., Yangco, R. T., & Espinosa, A. A. (2014). Analogy-enhanced instruction: Effects on reasoning skills in science. Malaysian Online Journal of Educational Sciences, 2(2), 1–9.
Renner, J. W., Stafford, D. G., Coffia, W. J., Kellogg, D. H., & Weber, M. C. (1973). An evaluation of the science curriculum improvement study. School Science &
Mathematics, 73, 291–318.
Rissing, S. W., & Cogan, J. G. (2009). Can an inquiry approach improve college student learning in a teaching laboratory? CBE – Life Science Education, 8, 55–61.
Rubel, L. H., & Zolkower, B. A. (2007). On blocks, stairs, and beyond: Learning about the significance of representations. Mathematics Teacher, 101(5), 340–344.
Rutherford, F. J., & Ahlgren, A. (1990). Science for all Americans. New York, NY: Oxford University Press.
Schauble, L., & Glaser, R. (1990). Scientific thinking in children and adults. Contributions to Human Development, 21, 9–27.
Seymour, E., & Hewitt, N. M. (1997). Talking about leaving: Why undergraduates leave the sciences. Boulder, CO: Westview Press.
Shaklee, H., & Mims, M. (1981). Development of rule use in judgments of covariation between events. Child Development, 52, 317–325.
Shaklee, H., & Paszek, D. (1985). Covariation judgment: Systematic rule use in middle childhood. Child Development, 56, 1229–1240.
Shaklee, H., Holt, P., Elek, S., & Hall, L. (1988). Covariation judgment: Improving rule use among children, adolescents and adults. Child Development, 59, 755–768.
Spiro, M. D., & Knisely, K. I. (2008). Alternation of generations and experimental design: A guided-inquiry lab exploring the nature of the her1 developmental mutant
of Ceratopteris richardii (C-Fern). CBE – Life Science Education, 7, 82–88.
Suresh, R. (2006). The relationship between barrier courses and persistence in engineering. Journal of College Student Retention, 8, 215–239.
Udovic, D., Morris, D., Dickman, A., Postlethwait, J., & Wetherwax, P. (2002). Workshop biology: Demonstrating the effectiveness of active learning in an introductory
biology course. Bioscience, 52, 272–281.
Valverde, G., & Castro, E. (2012). Prospective elementary school teachers’ proportional reasoning. PNA, 7, 1–19.
Zeineddin, A., & Abd-El-Khalick, F. (2010). Scientific reasoning and epistemological commitments: Coordination of theory and evidence among college science
students. Journal of Research in Science Teaching, 47, 1064–1093.
Zhou, S., Han, J., Koenig, K., Raplinger, A., Pi, Y., Li, D., et al. (2016). Assessment of scientific reasoning: The effects of task context, data, and design on student
reasoning in control of variables. Thinking Skills and Creativity, 19, 175–187.
Zimmerman, C. (2000). The development of scientific reasoning skills. Developmental Review, 20, 99–149.
Zimmerman, C. (2007). The development of scientific thinking skills in elementary and middle school. Developmental Review, 27, 172–223.
Zoller, U. (1993). Are lecture and learning compatible? Maybe for LOCS: Unlikely for HOCS (SYM). Journal of Chemical Education, 70, 195–197.
