
International Journal of Research & Method in Education

Journal homepage: https://www.tandfonline.com/loi/cwse20

Methodological pragmatism in educational research: from qualitative-quantitative to exploratory-confirmatory distinctions

Colin Foster

To cite this article: Colin Foster (2023): Methodological pragmatism in educational research:
from qualitative-quantitative to exploratory-confirmatory distinctions, International Journal of
Research & Method in Education, DOI: 10.1080/1743727X.2023.2210063

To link to this article: https://doi.org/10.1080/1743727X.2023.2210063

© 2023 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group

Published online: 05 May 2023.



Methodological pragmatism in educational research: from qualitative-quantitative to exploratory-confirmatory distinctions
Colin Foster
Department of Mathematics Education, Loughborough University, Loughborough, UK

ABSTRACT
Educational researchers continue to polarize into ‘qualitative’ and ‘quantitative’ camps, with these terms often functioning as global identity markers, rather than as styles of research that are available to anyone. Many scholars have lamented the drawbacks of researchers being siloed into opposing, apparently incommensurable research paradigms, and have advocated for more inclusive and mixed-methods approaches. However, mixed methods research is not necessarily a good fit for every researcher, research study or research question. In this theoretical paper, I argue that the distinctions commonly made between qualitative and quantitative research are fundamentally incoherent and that the challenges researchers face across both styles of research are essentially analogous. I present methodological pragmatism as an accessible and convenient, compatibilist framework for making research design choices which cut across qualitative-quantitative divides. I propose that the exploratory-(dis)confirmatory distinction is of considerably more practical relevance to educational researchers than qualitative-quantitative ones, and I outline how methodological pragmatism, while consistent with a degree of methodological specialism, recognizes the availability of all research methods for all researchers. Methodological pragmatism liberates researchers in education to conduct the most rigorous research possible by drawing on any methods from any tradition that will further their research goals.

ARTICLE HISTORY
Received 17 May 2022
Accepted 20 March 2023

KEYWORDS
Incommensurability; methodological expertise; pragmatism; qualitative research; quantitative research; research design; research paradigms

1. Introduction
Many scholars have lamented the tendency for empirical researchers in education, as well as in the
social sciences more broadly, to polarize into distinct methodological camps, labelling their research
– and even themselves – as ‘qualitative’ or ‘quantitative’ (e.g. Gorard and Taylor 2004; Ercikan and
Roth 2006). Not only does this division impede inter- as well as intra-disciplinary collaboration,
but the terms ‘qualitative’ and ‘quantitative’ are often poor descriptors of the intended distinction.
The qualitative-quantitative divide rests on philosophical and ideological differences and is much
more than a simple matter of whether ‘numbers’ are present in the generated or collected data
(Lund 2005). A ‘qualitative’ educational researcher conducting a semi-structured interview with a
teacher might ask them how many years they have been teaching, and write down a number,
but they would not consider this to be ‘quantitative data’ that required a fundamentally different,

CONTACT Colin Foster C.Foster@lboro.ac.uk Department of Mathematics Education, Loughborough University, Schofield
Building, Loughborough LE11 3TU, UK
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/),
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The terms on which this
article has been published allow the posting of the Accepted Manuscript in a repository by the author(s) or with their consent.

say ‘positivist’, analytic approach or epistemological/ontological framework (Dienes 2008). In a
similar way, a researcher undertaking a statistical analysis ought to think hard about how to mean-
ingfully interpret their findings in context, and how to construct a persuasive argument based on
them (Abelson 2012). But this highly ‘interpretive’ work does not mean that they have unwittingly
blundered into an ‘interpretivist paradigm’ and must deny the existence of an objective reality
(Denzin and Lincoln 2011).
This paper argues that the similarities between qualitative and quantitative research in education
are much stronger than the differences, and that much is to be gained by de-emphasizing the quali-
tative-quantitative distinction. It is a truism to say that all research in education, in whatever style, is
challenging (Picho and Artino Jr 2016); however, what may frequently be missed is that the chal-
lenges within qualitative and quantitative research are often highly similar. Specifically, I will
argue that research challenges associated with definitions of constructs, operationalization and
measurement, data reduction and synthesis, analytic interpretation and threats to validity/trust-
worthiness are all essentially analogous across qualitative and quantitative research. Better com-
munication across the qualitative-quantitative divide could lead not just to more mixed-methods
research – where this is understood as research that draws on a mixture of both quantitative and
qualitative methods (Johnson and Onwuegbuzie 2004). It could also facilitate a more unified and
deeper understanding derived from both of these research styles and traditions of what it means
to conduct high-quality educational research.
Educational research has long been criticized for its perceived lack of
rigour (e.g. Hargreaves 1996; Tooley and Darby 1998; Whitty and Wisby 2009), for being ‘not very
influential [or] useful’ (Burkhardt and Schoenfeld 2003, 3) and for being ‘often inaccessible, irrelevant,
or impenetrable’ (Rycroft-Smith and Macey 2021, 1). Hammersley (2005) argued that high levels of
government interference in educational research had weakened the enterprise. The notion of prac-
tice in schools and other learning settings being ‘research-informed’ or ‘evidence-informed’ is poly-
semous, and ‘evidence’ in education continues to be a contested term, with questions, for instance,
over the priority given to randomized controlled trials and a ‘what works’ agenda (see Davis 2018;
Thomas 2021). Challenges to the validity and generalisability of qualitative studies are mirrored
by a crisis of trust in quantitative studies, in the context of recent concerns about replication
crises across related fields and a perceived need to do better, more reproducible, open science (Gold-
acre 2010; Chambers 2019; Ritchie 2020).
In this paper, I propose methodological pragmatism as an accessible, convenient paradigm for con-
ducting educational research, in which all methods are available to all researchers, and the common-
alities in the challenges of applying these methods are stressed. The overarching principle of
methodological pragmatism is: There are no intrinsically good or bad methods; in each circumstance,
there will be methods that are more or less well suited to particular research questions. Methodological
pragmatism might well lead in certain situations to mixed methods – in the sense of a combination
of qualitative and quantitative methods, a combination of different qualitative methods, or a combi-
nation of different quantitative methods (Johnson and Onwuegbuzie 2004) – but this is by no
means inevitable. Methodological pragmatism makes no prior commitment to the superiority of
mixed methods; rather, whether or not to use qualitative, quantitative or mixed methods is a matter
to be determined purely pragmatically, based solely on the needs of the research. Consequently, for
particular studies, methodological pragmatism would be equally comfortable with an entirely qualitat-
ive approach or an entirely quantitative approach, and would not see either of these as being in any way
inferior to mixed-methods research. Defining methodological pragmatism in this way of course presup-
poses well-defined research questions, which themselves derive from researchers’ perspectives and
what they find ‘interesting’ (Davis 1971) – methodological pragmatism cannot in any sense be por-
trayed as a ‘neutral’ paradigm. However, I will argue that any individual methods of data collection/gen-
eration and analysis, deriving from any tradition, might be incorporated into an educational study in a
relatively unproblematic fashion. I will argue that methodological pragmatism offers a positive way to
navigate the challenges of conducting educational research, in the context of the systemic constraints
placed on researchers and what is valued in public and policy discourse (Evans 2015), so as to support a
more unified and coherent field of research to address these challenges.
In the remainder of this paper, I first (in Section 2) question the qualitative-quantitative distinc-
tions that are commonly made and argue that these are much less stark on close inspection. On
the contrary, I go on in Section 3 to argue that the challenges of doing research in education are
broadly similar between qualitative and quantitative styles of research, taking examples from
across the different stages of conducting a research study. In Section 4, I detail the nature of meth-
odological pragmatism, arguing that the exploratory-confirmatory distinction is of far more rel-
evance to researchers than qualitative-quantitative ones. In Section 5, I respond to some possible
criticisms, and in Section 6 I conclude with some suggestions on how unhelpful polarisations
between research and researchers might be obviated.

2. Questionable distinctions
This paper argues that attempts to differentiate qualitative from quantitative research lack coher-
ence, and that, in practice, research in education is often messy and fails to fit neatly into qualitat-
ive/quantitative categories. Below, I problematize some of the ways in which the qualitative-
quantitative distinction may often be made.

2.1. Numbers versus words


There is no straightforward distinction between qualities and quantities. As Gorard and Taylor (2004,
150) have pointed out:
Most methods of analysis use some form of number, such as ‘tend’, ‘most’, ‘some’, ‘all’, ‘none’, ‘few’ and so on …
The patterns in qualitative analysis are, by definition, numeric, and the things that are traditionally numbered are
qualities … The measurement of all useful quantities requires a prior consideration of theory leading to the
identification of a quality to be measured. (Gorard and Taylor 2004, 150)

Distinguishing data according to whether it contains numerical quantities is often difficult in practice,
and not very meaningful. Is the statement that ‘Most interview participants expressed X’ a quantitative
statement because it involves counting the frequency of such statements? It is not clear
that making this distinction assists the conduct of educational research.

2.2. Use of particular methods


In methods training courses, certain data collection/generation methods may often come to be
regarded as associated with either qualitative or quantitative studies, but this frequently seems to
make little sense. A survey, for instance, might contain both items that require a numerical
answer and items that require extended written responses, but it does not necessarily follow that
this would require a ‘mixed methods’ approach to analysis. The written responses might be
coded and categorized, and the frequency of occurrences of various themes might be counted.
During an interview, a student might be asked to order some cards containing statements in an
order of preference – this might not involve any numbers, but the rankings of the cards might sub-
sequently be handled statistically. It seems a category error to try to assign data collection methods
or analysis methods such as ‘interview’ or ‘survey’ to ‘qualitative’ or ‘quantitative’ categories, and the
attempt does not seem to serve any useful purpose.
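To make the card-sorting example concrete: data collected without any numbers can still be handled statistically later on. The sketch below uses entirely invented data and a hypothetical pair of students, and computes a Spearman rank correlation between two preference orderings by hand:

```python
# A minimal sketch (invented data): preference rankings from a card-sorting
# task, analysed statistically via a hand-computed Spearman rank correlation.

def spearman_rho(rank_a, rank_b):
    """Spearman's rho for two complete rankings with no ties."""
    n = len(rank_a)
    d_squared = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))

# Two students' rankings of the same five statement cards (1 = most preferred)
student_1 = [1, 2, 3, 4, 5]
student_2 = [2, 1, 3, 5, 4]
rho = spearman_rho(student_1, student_2)  # close to 1: high agreement
```

Nothing about the card-sorting activity itself fixes whether the analysis ends up ‘qualitative’ or ‘quantitative’; that choice comes later, from the research question and the data obtained.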
Whether statistical techniques are ultimately involved in an analysis might depend on many
factors, such as the size and properties of the eventually-acquired data set. A judgment might
need to be made at some stage, based on the properties of the data, as to whether it would be
meaningful to conduct statistical tests, or even use descriptive statistics, or whether this might be
inappropriate or unnecessary. A chi-squared test, for instance, might be conducted on categories
of qualitative data, but does this mean that frequency or ranked data must be considered
quantitative? If the chi-squared test is not in the end carried out, due perhaps to some cell frequen-
cies being too small, does the study then remain qualitative?
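The decision just described can be sketched in code. This is a minimal illustration with invented counts of coded interview themes; the expected-frequency check reflects the common rule of thumb that a chi-squared test becomes unreliable when any expected cell count falls below 5:

```python
# A minimal sketch (invented data): a chi-squared statistic computed on a
# contingency table of coded interview themes, with the usual rule-of-thumb
# check that no expected cell frequency falls below 5.

def chi_squared(table):
    """Return (chi-squared statistic, minimum expected cell frequency)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    statistic = 0.0
    min_expected = float("inf")
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            min_expected = min(min_expected, expected)
            statistic += (observed - expected) ** 2 / expected
    return statistic, min_expected

# Hypothetical counts: two coded themes, cross-tabulated by two teacher groups
table = [[18, 7],
         [9, 16]]
statistic, min_expected = chi_squared(table)
test_is_appropriate = min_expected >= 5  # if False, the inferential step is dropped
```

On this rule, whether the study ends up reporting an inferential statistic is a property of the data that happened to be obtained, not of the study’s ‘paradigm’.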
Sometimes entire research methodologies, such as ‘action research’ or ‘design research’, may be
designated qualitative, but this seems not to accord with the variety of ways in which those
approaches may be manifested in practice (see Burkhardt 2013). The categorization seems unhelpful.

2.3. Philosophical and theoretical distinctives


Behind the other differences mentioned above lies a deeper one, in which qualitative and quantitat-
ive research are distinguished according to philosophical and theoretical values and positions.
Conflation of research style and ideological standpoint may lead researchers to feel a need to
‘pick a side’, and this can result in the propagation of simplistic stereotypes. Quantitative researchers
are stereotyped as taking an apolitical, ‘positivist’ stance, and pursuing the impossible goal of elim-
inating all sources of bias in order to arrive at objective, neutral, value-free truths. But it seems unli-
kely that many experienced researchers within the social sciences would take this kind of absolutist
stance. Such a portrayal suggests a straw person, used to ridicule quantitative approaches as simplis-
tic, naïve, untheoretical and uncritical.
On the other hand, qualitative researchers may be stereotyped as vague, anecdotal and ‘merely’
journalistic, writing emotively, but unrigorous in their approaches. They may be characterized as
more interested in changing the world in personal ideological directions than in finding things
out, failing to account for their confirmation biases, unwilling to have their views challenged, and
cherry-picking evidence to fit a pre-determined narrative. Again, such a description seems a straw
person, and does not reflect the care and consideration with which high-quality qualitative research
is undertaken and the respect and primacy given to the data. It would seem that any of these criti-
cisms could potentially also be applied to quantitative research; either style of research can be con-
ducted more rigorously or less rigorously according to different notions of rigour.
The language of ‘paradigms’ is potentially problematic here (Trifonas 2009), because of assump-
tions ‘that adopting a method automatically means adopting an entire “paradigm”’ (Gorard and
Taylor 2004, 9). Researchers, especially early-career researchers, may feel pressured to ‘join a tribe’
and even ‘exclaim, before deciding on a topic and research questions, that they intend, for
example, to use “qualitative” methods of data collection or analysis’ (Gorard and Taylor 2004, 9).
Alternatively, they are ‘encouraged to count or measure everything, even where this is not necess-
arily appropriate’ (Gorard and Taylor 2004, 9), so that qualitative evidence is derided and quantitative
evidence is accepted uncritically. As Ercikan and Roth (2006, 14) argued, ‘polarization is confusing
to many and tends to limit research inquiry, often resulting in incomplete answers to research ques-
tions and potentially inappropriate inferences based on findings.’
Many beginning researchers in education are former schoolteachers, or otherwise bring experi-
ence working with children and young people. It is possible that they may be inclined to see the
kinds of skills needed for qualitative research, involving methods such as interviewing and classroom
observation, as being more closely aligned to their existing skill sets (Gorard and Taylor 2004, 146).
Studying phenomena in their natural settings, listening and trying to make sense of what people say,
feel and experience, and co-constructing knowledge alongside participants (see Denzin and Lincoln
2011) may seem more closely aligned with their previous roles in classrooms than more quantitative
methods, such as analyzing numerical data sets.
Researchers’ negative relationships with their own learning of mathematics may also contribute
to an aversion to statistics, which may (consciously or unconsciously) support biases against working
‘quantitatively’. On the other hand, researchers with a talent for mathematics, and perhaps less
inclined to spend time interacting with people, might feel more drawn to the spreadsheets and com-
puter coding of quantitative research. In such ways, researchers might readily polarize on the basis of
features associated with their background or personality, which should perhaps be irrelevant from
the point of view of finding the best ways to address the particular research questions in a
specific study. Over time, and supported by philosophical theorizing, it is possible to see how these
divisions might develop into supposedly ‘incommensurable paradigms’.
Having once ‘picked a side’, researchers might then set about laying out their epistemological and
ontological positions. However, this can sometimes be done with considerable philosophical naivety,
with stereotypical positions passed on uncritically from older to younger researchers. Philosophical
problems around objectivity/subjectivity, realism/idealism, relativism/constructivism, etc., when
taken in their strongest and most radical forms, and presented simplistically, could seem to threaten
the rationale for all styles of research. Choosing methods based on the researcher’s judgment of
which position they align more closely with in centuries-old, unresolved (and possibly unresolvable)
philosophical debates seems precarious, and researchers teaching methods courses in education
departments are unlikely to possess the kinds of strong background in philosophy that could
make such discussions enlightening.
Clearly, all researchers in education need to be as philosophically well-informed as possible
(e.g. Dienes 2008), with awareness of the challenges of ‘true objectivity’ in science (e.g. see
Egan 2002), the theory-ladenness of facts and the underdetermination of theory by evidence.
Awareness that theories can only ever be provisional, that causation in the social sciences is
probabilistic, not deterministic, and that critical scepticism (but not cynicism) is fundamental
to research is important for all researchers in education. But these do not
necessitate ‘picking a side’. Epistemological and ontological assumptions must be acknowl-
edged, and may be more important to some researchers than to others. Inescapably, this
will impact on the study’s design, analysis and interpretation. However, the methodological
pragmatist may view the weight given to these positionings as often excessive, and the ten-
dency to assume that they must represent fixed ‘positions’ over time for any particular
researcher unrealistic.
Theoretical/conceptual/analytic frameworks (see de Vaus 2002; Cai and Hwang 2019; Crotty
2020) should be equally important to qualitative or quantitative research, so ‘theory’ should
not be viewed as more the property of qualitative than quantitative research. It seems unhelpful
to make a strong distinction between ‘grand Theories’ (with a capital ‘T’), which operate as world-
views or meta-narratives (and tend to be more strongly associated with qualitative and theoreti-
cal research), and more local theories that build up from attempts to capture the ways in which
people behave in certain educationally-relevant situations, when either qualitative or
quantitative research would seem well placed to contribute to either kind. Ultimately, the
overriding consideration for the methodological pragmatist is that if setting out one’s various
epistemological and ontological positions does not seem to have much or any impact on the
details of a study’s design, analysis and interpretation, then it should be de-emphasized and
perhaps not even referred to at all.

3. Analogous challenges
It would be uncontroversial to note here that qualitative and quantitative research each possess
different advantages and disadvantages, and that each might be appropriate for different pur-
poses. This is typically the kind of message that young researchers receive on balanced
research methods courses and in sensible research methods books. However, the argument of
this paper goes much further than this. It is not just that qualitative and quantitative research
each have their own difficulties; frequently, they present analogous challenges to the researcher.
This has occasionally been noted – a recent tweet stated, ‘Being an applied statistician is a lot like
being an ethnographer’ (Women in Statistics and Data Science 2021). If the qualitative-quantitat-
ive distinction is, as argued here, fundamentally spurious, then we would expect that the chal-
lenges associated with qualitative and quantitative research would often be highly similar. The
sections below argue that this is indeed the case, taking examples from defining, operationalizing
and measuring constructs, data reduction and synthesis, analytic interpretation and threats to val-
idity/trustworthiness.

3.1. Definitions of constructs, operationalization and measurement


It is often assumed that the hard sciences have it easy when it comes to defining constructs and even
that science is inherently more straightforward than social science (e.g. ‘Imagine how hard physics
would be if electrons could think’, attributed to Murray Gell-Mann). Everyone knows what something
like ‘energy’ is, for instance, and so it can be easily and precisely measured. But this really fails to do
justice to the challenges within the quantitative disciplines, and would seem to be a social-scientist’s
perspective on how they imagine science to be (‘the grass is greener’). On the specific example of
energy, the Nobel-prize-winning physicist Richard Feynman noted that ‘It is important to realize
that in physics today, we have no knowledge of what energy is. … [T]here are formulas for calculat-
ing some numerical quantity, and when we add it all together it gives … always the same number. It
is an abstract thing in that it does not tell us the mechanism or the reasons for the various formulas.’
(Feynman, Leighton, and Sands 1965, 4-1). This would seem potentially very similar to the kind of difficulty that
an education researcher might have in defining and operationalizing a fuzzy construct, such as
‘learning’, ‘understanding’, ‘problem solving’ or ‘creativity’, where it is also difficult to say ‘what
exactly it is’.
Measurement is another issue often presumed to be straightforward for quantitative researchers.
However, even in a subject like physics, and with supposedly ‘everyday’ constructs, measurement
can be highly challenging (Vincent 2022). Consider temperature as an example. It might seem
straightforward to establish a temperature scale, calibrated with melting ice at 0°C and boiling
water at 100°C, but of course these values are context-dependent (varying with air pressure,
purity of the water, etc.). Even disregarding that, how might a scientist decide what should be
meant by, say, 50°C? What does ‘halfway between’ mean? If we use electrical resistance of a
metal to calibrate our scale with melting ice and boiling water, then 50°C will be halfway in electrical
resistance for that particular metal; if we use the expansion of a column of liquid, as in a glass ther-
mometer, 50°C will be halfway along the column. But these two temperatures will not be precisely
the same in terms of their hotness – heat will flow from one of them to the other, demonstrating one
to be of higher temperature than the other. So, even with something as apparently objective as ‘the
temperature’, important choices have to be made, and the exact definitions and conditions used
reported clearly. This does not seem fundamentally so different in principle from the challenges
of devising a scale for something like ‘self-esteem’, and triangulating it with other measures.
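The calibration point can be illustrated numerically. In the sketch below the physics is entirely invented for illustration: two instruments are calibrated at the same two fixed points, but because their underlying properties vary differently with temperature, linear interpolation assigns them different readings for the same physical state:

```python
# A minimal sketch (invented, simplified physics): two thermometers agree at
# the calibration points but disagree in between.

def scale_reading(prop_value, prop_at_0, prop_at_100):
    """Temperature reading from linear interpolation of a measured property."""
    return 100 * (prop_value - prop_at_0) / (prop_at_100 - prop_at_0)

def resistance(T):          # hypothetical: linear in true temperature
    return 100 + 0.4 * T

def column_length(T):       # hypothetical: slightly nonlinear in true temperature
    return 50 + 0.2 * T + 0.0005 * T ** 2

T = 50  # one and the same physical state
r_reading = scale_reading(resistance(T), resistance(0), resistance(100))
c_reading = scale_reading(column_length(T), column_length(0), column_length(100))
# Both instruments read 0 at melting ice and 100 at boiling water,
# yet they report different temperatures for this intermediate state.
```

The disagreement is not an error in either instrument; it follows from the choice of which property defines ‘halfway between’, just as a choice of operationalization shapes what a self-esteem scale reports.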
Social scientists sometimes assume that context is not an important or problematic issue in the
hard sciences, but the course that a chemical reaction, for instance, will take might depend on many
external factors (e.g. ambient temperature, pressure, presence of catalysts or other impurities, how
old the reagents are and where they have been purchased from, how they have been prepared,
purified and whether they have been kept refrigerated, and for how long). Control is not at all a
straightforward matter in science – and the challenges can be viewed as similar to those in the
social sciences. The challenges of conducting qualitative research would all seem to apply to con-
ducting quantitative research, and good quantitative researchers are very aware of this. The need
to carefully set up the conditions and measures before collecting statistical data is well-captured
in the statement attributed to Ronald Fisher: ‘To consult the statistician after an experiment is
finished is often merely to ask [them] to conduct a postmortem examination. [They] can perhaps
say what the experiment died of.’
A qualitative researcher might object that qualitative research does not involve measurement. But
a broad view of ‘measurement’ would challenge this. An extreme perspective is that ‘Anything that
exists, exists in some amount. And all amounts can be measured’ (attributed to Edward Thorndike). Conducting
semi-structured interviews to ascertain teachers’ views on some matter, for instance, may not
seem like ‘measurement’, but the findings that the researcher wishes to report are likely to
contain statements that attempt to capture the degree to which participants expressed this or that
opinion or related to this or that experience. Nuance in qualitative reporting is often about scaling
(i.e. measuring) qualities. A great deal of quantitative language is often needed in order to summarize
the findings from even the most qualitative piece of research, and this leads us on to the issue of data
reduction and synthesis.

3.2. Data reduction and synthesis


Data reduction is an equally important issue for both qualitative and quantitative research (‘To think
is to forget details, generalize, make abstractions’, Jorge Luis Borges). Quantitative data might consist
of a large spreadsheet of numbers: in order to draw a conclusion from these that is meaningful, infor-
mative and useful, they must be processed and packaged into an overall message for the reader. This
might be done by calculating summary statistics, such as the mean and standard deviation, and/or
by conducting inferential statistical tests to establish whether meaningful differences or correlations
can be detected that could tell us something about the wider population from which the sample is
taken. Qualitative data, on the other hand, might consist of large data files of transcribed text,
perhaps from interviews, but again, in order to draw a useful conclusion from these, this data
must be ‘reduced’ into a condensed message for the reader. In both cases, information, detail and
nuance must be discarded, and this sacrifice is necessary and acceptable in order to derive con-
clusions that will be informative to the researcher and be digestible and communicable to others.
Obviously, in neither case would a researcher merely present their raw data. The reduction of
quantitative data to, say, a mean and a standard deviation, can be viewed as being in some ways
analogous to the reduction of qualitative data to, say, a handful of themes. In both cases, researchers
recognize that this is a ‘lossy’ process, which must be handled with the greatest of care if the con-
clusions arrived at are to be valid, reliable and trustworthy. In both cases, the researcher must guard
against bias and be open and transparent about the details of the process, so that the consumer of
the research can have confidence in the findings. But to regard either process as inherently more or
less ‘rigorous’ or ‘objective’ seems unjustified. To suggest that the outcome of quantitative research
is ‘what’, whereas the outcome of qualitative research is ‘why’, seems not to match the details of
what researchers engage in. The two approaches would seem to have much more in common
than is generally acknowledged, and researchers from both traditions might have much to learn
from the experiences of the other. The essential challenges of doing research would seem to be
largely the same.
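The analogy can be made concrete. In this sketch (entirely invented data), a numeric sample is reduced to a mean and standard deviation, and coded interview excerpts are reduced to theme frequencies; both are deliberately ‘lossy’ summaries of richer raw data:

```python
# A minimal sketch (invented data) of data reduction in both styles.
from collections import Counter
from statistics import mean, stdev

# 'Quantitative' reduction: a numeric sample becomes two summary statistics
scores = [62, 71, 58, 90, 77, 66, 81]
summary = (round(mean(scores), 1), round(stdev(scores), 1))

# 'Qualitative' reduction: codes applied to transcripts become theme counts
codes = ["workload", "autonomy", "workload", "support", "workload", "autonomy"]
themes = Counter(codes).most_common()
# In both cases, detail is discarded in exchange for a digestible message.
```

In both branches the researcher must decide what to discard and report that decision transparently; neither reduction is inherently more ‘objective’ than the other.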
Portraying qualitative research as staying with the richness, subtlety and detail is not necessarily
accurate. As King, Keohane and Verba (1994) pointed out:
Even the most comprehensive description done by the best cultural interpreters with the most detailed contex-
tual understanding will drastically simplify, reify, and reduce the reality that has been observed. Indeed, the
difference between the amount of complexity in the world and that in the thickest of descriptions is still vastly
greater than the difference between this thickest of descriptions and the most abstract quantitative or formal analy-
sis. No description, no matter how thick, and no explanation, no matter how many explanatory factors go into it,
comes close to capturing the full ‘blooming and buzzing’ reality of the world. There is no choice but to simplify.
(King, Keohane, and Verba 1994, 43, original emphasis)

Raw data, of whatever kind, must always be reduced during analysis and synthesized, and no style of
research can escape the challenges that this brings.

3.3. Analytic interpretation


Some qualitative research styles are commonly described as ‘interpretivist’, and yet all research
findings require interpretation. This seems to be another example where qualitative and quantitative
researchers face essentially the same challenge. Interpretation is often perceived to be the most chal-
lenging aspect of quantitative research (Abelson 2012), and interpretation of quantitative data
involves much more than merely distinguishing correlation from causation (see Rohrer 2018). As
argued above, it is a mistake to think of qualitative research as inherently more nuanced. The
tools of statistics sometimes facilitate making highly nuanced statements, which can be hard to
express clearly in words. For example, perhaps two quantities have both increased, but one has
increased more than the other, or the gap between two quantities has decreased, although both
have increased. More complicated findings, such as a 3-way interaction or the results of a mediation
analysis or analysis of covariance, can be even harder to translate precisely into text, and this is surely
as challenging as attempting to capture and summarize subtle and nuanced statements made by
interview participants in qualitative research.
Quantitative science is often portrayed in terms of certainty, universality and ‘laws’, and yet scien-
tists and statisticians tend to talk more in terms of ‘models’ of the real world, which are acknowl-
edged to be deliberately simplified and imperfect representations of ‘reality’ (e.g. see Christensen,
Johnson, Turner and Christensen 2011). Models do not aim to capture all the details; they are intentionally slimmed-down, ‘reduced’ versions that are therefore efficient to work with. They are
always provisional and open to being improved with more data and theory: ‘All models are
wrong but some are useful’ (attributed to the statistician George Box, see Wasserstein 2010). In par-
ticular, this means that:

1. Models are descriptive, not prescriptive. They don’t say what will or must happen in the future; they
summarize patterns in what has been observed to happen in the past.
2. Models apply only approximately, and within certain, specified domains. No model can make pre-
dictions with 100% accuracy and there will always be exceptions.
3. Models are probabilistic, not deterministic; i.e. they are ‘good bets’ that work on average, other
things being equal, but not every time or for every case.

Such models, derived from quantitative data, do not seem epistemologically very different from a
speculative theory or framework derived from qualitative data. The challenges and limitations are
essentially the same in handling ideas that are tentative conjectures or hypotheses and attempting
to ensure that they are understood in that way.

3.7. Threats to validity/trustworthiness


Finally, we consider the issue of threats to validity, and again conclude that the threats to validity of
qualitative and quantitative research in education are essentially the same. In qualitative research,
words like ‘transparency’ or ‘authenticity’ may be preferred to ‘validity’ and ‘generalisability’
(Denzin and Lincoln 2011), but, even so, similar issues remain (Smith 2018). Researchers interested
in a particular topic often need to synthesize both qualitative and quantitative research in the same
literature review, so common ways of evaluating how much confidence ought to be placed in each
finding are helpful. Schoenfeld (2007, 81) posed three questions to ask about any study in education:

1. Why should one believe what the author says? (the issue of trustworthiness)
2. What situations or contexts does the research really apply to? (the issue of generality, or scope)
3. Why should one care? (the issue of importance) (Schoenfeld 2007, 81)

These would seem to be readily applicable and relevant across both qualitative and quantitative
studies, since, within all research in education, it would seem fair to say that ‘The function of a
research design is to ensure that the evidence obtained enables us to answer the initial question as
unambiguously as possible’ (de Vaus 2002, 9, original emphasis). All research claims, derived from
whatever tradition, need severe testing before acceptance. Many (even opposing) claims may
seem ‘obvious’ to someone, and one purpose of research is to distinguish statements that are
‘obvious and true’ from those that are ‘obvious and false’ (see Gage 1991).

Qualitative studies are often criticized for being small-scale and not generalizable, but sample size
may not be the most important factor in judging generalisability – and generalisability may not always
be the intention anyway (Smith 2018). A large quantitative study might equally be judged to have low
generalisability due to the ‘artificial’ nature of its design (e.g. a laboratory-based – rather than class-
room-based – setting, using researcher-devised materials, across a short timescale, etc.). However,
the ‘external invalidity’ (lack of ‘ecological validity’) of such studies may be considered a feature,
and not a bug, because it may facilitate theoretical conclusions being drawn with much greater confi-
dence (Mook 1983), in the same way that a qualitative interview situation would not be criticized for
being ‘artificial’. A plural, but not dualistic, approach to generalisability is helpful (Larsson 2009).
When evaluating research, Christian and Griffiths (2016, 223) criticized both ‘cherry-picked per-
sonal anecdotes and aggregated summary statistics. The anecdotes, of course, are rich and vivid,
but they’re unrepresentative. … Aggregate statistics, on the other hand, are the reverse: comprehen-
sive but thin.’ However, these different criticisms seem to have more to do with questions of scale (a
small amount of rich data versus a large amount of superficial data), rather than whether the
research should be considered qualitative or quantitative. Both qualitative and quantitative research-
ers can inappropriately ‘stack the deck’ in their literature reviews by cherry-picking sources in an
unsystematic review. The replication crisis has highlighted quantitative researchers’ excessive
‘researcher degrees of freedom’ at the analytic stage. This seems similar in character to allegations
of bias and subjective idiosyncrasy made against qualitative researchers when conducting their analyses. It would seem that similar considerations involving acknowledging the researchers’ positions
and interests/biases, as well as transparent reporting throughout, are important, along with pre-
registering analyses (of both kinds) in advance (Chambers 2019). It is often acknowledged that,
even in qualitative research, being an ‘outsider’ and aspiring to some level of objectivity can be valu-
able, though difficult (Thapar-Björkert and Henry 2004).
In both styles of research, bias is a considerable source of concern, with the researcher’s precon-
ceptions always at risk of exerting undue influence. But it does not seem clear that either qualitative
or quantitative research has a greater problem in this area. As for the other aspects considered
above, the challenges seem essentially the same, perhaps because all researchers bring the same
strengths and limitations of being human.

4. Methodological pragmatism and the exploratory-confirmatory distinction


4.1. Methodological pragmatism
There have been many previous attempts to address the qualitative-quantitative disconnect (e.g.
Scott 2007) and to advocate better ways to teach research methods courses within the social sciences
(e.g. Nind and Lewthwaite 2020; Sarafoglou, Hoogeveen, Matzke, and Wagenmakers 2020). In particu-
lar, Johnson and Onwuegbuzie (2004) argued for mixed methods research to be the third research
paradigm, which can bridge the chasm between qualitative and quantitative research. However,
methodological pragmatism does not position mixed methods as always the optimal research
style, superior to mono-method approaches (see Creswell and Creswell 2018; Ramlo 2020). Methodo-
logical pragmatism is an approach to making research design choices, rather than an endpoint; the
endpoint might be qualitative, quantitative or mixed, and each of these might be optimal for any par-
ticular study. For the methodological pragmatist, mixed methods is just one option.
Drawing on the thinking of the classical pragmatists (e.g. Charles Sanders Peirce, William James),
methodological pragmatism seeks to adopt a pluralist, compatibilist and open-minded, balanced
methodological position, considering all possibilities and seeking, as much as possible, to set to one side any prior ideological commitments for or against any particular methods. As stated in
Section 1, the overarching principle of methodological pragmatism is: There are no intrinsically
good or bad methods; in each circumstance, there will be methods that are more or less well suited
to particular research questions. The choice of research questions will of course be influenced by
the personal philosophy of the researcher, and what they regard as interesting and important.
However, once a research question is arrived at, methodological pragmatism dictates that all
methods are on the table; nothing is off limits.
None of this is to say that epistemological and ontological positions are of no relevance to edu-
cational research. It is important to acknowledge the illusory nature of ‘the view from nowhere’. All
researchers are constantly positioned in relation to their ontological and epistemological assump-
tions, and they need to be aware of what those are if they are to understand the warrants for the
methods that they use, the questions that they frame and the kind of ‘knowledge’ that they
expect their research to produce. Any researcher using any research design must be mindful of
the knowledge claims being made and how these can be substantiated. However, philosophical
pragmatism in all its forms seeks to question the overriding importance of such positionings.
Baggini (2018, 82, original emphasis) commented, ‘One consequence of adopting the pragmatist
viewpoint is that many philosophical problems are not so much solved as dissolved’. As Dewey
(1910, 19) wrote of philosophical questions, ‘We do not solve them: we get over them.’ While this
does not give researchers carte blanche for a ‘sloppy mishmash’ of methods, and any methods
deployed need to be ‘sympathetic’ to each other, it does open up researchers to consider absolutely
any method, from whatever source. While different positionings will affect how a method might be
understood and used, and our methodological biases are important to acknowledge, in order to try
to work against them, on the pragmatist view they should not rule out any methods a priori.
It is clearly challenging to navigate the many choices associated with a pragmatic approach
(Clarke and Visser 2019), but methodological pragmatism would seem to offer a way towards
greater methodological diversity and coherence within the field of educational research. It also
seems likely to lead to higher-quality research: research is hard enough without throwing away
half of the toolbox of available methods. Seeing research design as a logical problem, rather than
a logistical problem, the methodological pragmatist seeks to identify whatever approaches seem
most appropriate to achieve the research goals. As de Vaus (2002, 9) put it, ‘issues of sampling,
method of data collection (e.g. questionnaire, observation, document analysis), design of questions
are all subsidiary to the matter of “What evidence do I need to collect?”’
Crotty (2020, 15) pointed out that to solve problems in everyday life we all tend to use any
method that will achieve our desired purposes:
We may consider ourselves utterly devoted to qualitative research methods. Yet when we think about investi-
gations carried out in the normal course of our daily lives, how often do measuring and counting turn out to be
essential to our purposes?

Constraining our research questions and methods to accommodate our beliefs – prejudices, even –
about research styles, or limitations in our repertoire of known research methods, seems like the tail
wagging the dog.
However, this is not to say that every researcher in education must have facility in all possible
methods, and that methodological specialism is never defensible. The complexity of many
methods, and the years of experience that may be needed to develop expertise in, say, multilevel
Bayesian modelling, or conversation analysis, mean that not all researchers can be highly
skilled in every possible method. The typical, stylized doctoral journey of ‘Find a research interest,
Formulate research questions, Find methods to address those questions, Learn how to do those
methods, Carry out the study and Write it up’ becomes perhaps less practicable for the early-
career researcher, who, due to the systemic constraints they experience as an academic (see
Evans 2015), needs to publish research rapidly. In such a situation, capitalizing on already-known
methods (and using locally available expertise and equipment) is clearly an efficient and perfectly
valid, ‘pragmatic’ approach. Being methods-led to a certain degree can be entirely consistent with
methodological pragmatism. However, retreating into a tribe (see Becher and Trowler 2001) that
prides itself on not engaging with certain methods would seem to do a disservice to the field. Ignor-
ing half of the relevant literature because the researcher ‘doesn’t do statistics’ or ‘doesn’t do Theory’
is not productive. At the very least, all researchers should aspire to be equipped to critically read any
style of research that falls within their area of interest.

4.2. Exploratory and confirmatory research


A seemingly more productive distinction, one that cuts across the qualitative-quantitative divide, is that between exploratory and confirmatory research. This distinction should
not divide people from one another, since it would seem highly unlikely that any researcher would
want to specialize in only one or other of these. Exploratory and confirmatory research naturally work
in partnership, often in the hands of the same researcher, and largely do not draw on different,
specialized skills. They are in no sense in competition with one another.
Exploratory research (called ‘night science’ by Yanai and Lercher 2020) involves open-minded
investigation, in which strenuous attempts are made to acknowledge and set aside preconceptions
and avoid confirmation bias. This kind of research is necessarily descriptive and limited to particular
situations and contexts. It may lead to thick, rich qualitative descriptions, which may be coded and
organized to generate descriptive themes. Alternatively, it may involve collection of statistical data
(e.g. on populations, from n = 1 up to very large), with a view to using descriptive statistics (as
opposed to inferential statistics, such as statistical tests), and could be the first stage in a cross-validation design. Such research can provide proof of concept, existence proofs for a feasibility study,
and/or raise pertinent questions for discussion, and generate hypotheses and theories to be
tested out (Yarkoni 2022). In this way, exploratory research – whether qualitative or quantitative –
is a crucial source of ideas and discussion. The challenges of exploratory research, whether qualitat-
ive or quantitative, are for the researcher to be creative and open-minded, honest and open to all
possibilities, and to ‘listen attentively’ to the data, wherever it leads.
In sharp contrast to this is (dis)confirmatory research (‘day science’ in Yanai and Lercher’s [2020]
terms), which tests hypotheses/theories/conjectures and seeks to generalize beyond the particular
sample or group studied to some wider population of interest. In qualitative terms, this might
entail scaling up an earlier, exploratory study – perhaps conducting semi-structured interviews to
examine to what extent findings from an exploratory survey might generalize to, or surface again
in, a wider group. Alternatively, quantitative confirmatory research could entail conducting a
survey built around issues emerging from previous exploratory interviews, and seeking to
examine evidence for certain claims in a larger and more diverse sample. There is nothing inherently
‘statistical’ or ‘quantitative’ about the notion of ‘hypothesis testing’ or ‘conjecture testing’ – it does
not depend inexorably on a ‘positivist’ worldview or entail absolutes, such as ‘proof’ and establishing
‘laws’. Confirmatory quantitative research might operate within the hypothetico-deductive tradition,
whether frequentist or Bayesian, making predictions and consequently supporting or rejecting/chal-
lenging/falsifying theories, and making generalizable claims with statistical significance. But none of
this need presuppose any particular worldview on the part of the researchers involved, who might
be equally happy another day interpreting interview transcripts. The challenges of (dis)confirmatory
research are in stating beforehand (e.g. by pre-registration, see Chambers 2019) what is being looked
for and being transparent and reproducible in the search.
Neither exploratory nor confirmatory research should be viewed as ‘better’ – both are needed.
But, unlike the qualitative-quantitative distinction, the exploratory-confirmatory distinction
matters for practical reasons of rigour (see Chambers 2019). Every researcher should be clear
which kind of research they are engaged in at that moment. Failure to make this distinction has
led to questionable research practices in the quantitative world, including ‘p-hacking’ and
HARKing (Hypothesizing After the Results are Known, see Chambers 2019), where inferential stat-
istics have been used in studies that would be better characterized as exploratory. This means
that the resulting p-values are not corrected for multiple comparisons, and therefore are misinter-
preted. Reported findings thus give a false impression of statistical significance when seen divorced from the wider context of the other tests that were also carried out, and p-hacked findings should not be trusted or expected to replicate.

Figure 1. Exploratory and confirmatory research contrasted (whether qualitative or quantitative).
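The inflation of false positives under uncorrected multiple testing is easy to demonstrate by simulation. The following sketch (a hypothetical illustration constructed for this discussion, not an analysis from any study cited here) compares ‘studies’ that run a single test of a pure-noise outcome against ‘studies’ that run twenty such tests and report the first ‘significant’ result:

```python
import math
import random
import statistics

def p_value_two_sample(a, b):
    """Two-sided p-value for a difference in means, using a normal
    approximation (adequate for the sample sizes simulated here)."""
    se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(1)
N_STUDIES, TESTS_PER_STUDY, N = 1000, 20, 30

def study_finds_effect(n_tests):
    # Every 'outcome' is pure noise, so any significant result is a false positive.
    for _ in range(n_tests):
        a = [random.gauss(0, 1) for _ in range(N)]
        b = [random.gauss(0, 1) for _ in range(N)]
        if p_value_two_sample(a, b) < 0.05:
            return True  # the p-hacker stops here and reports this 'finding'
    return False

rate_single = sum(study_finds_effect(1) for _ in range(N_STUDIES)) / N_STUDIES
rate_multiple = sum(study_finds_effect(TESTS_PER_STUDY) for _ in range(N_STUDIES)) / N_STUDIES
print(f"false-positive rate with 1 test per study:  {rate_single:.2f}")
print(f"false-positive rate with {TESTS_PER_STUDY} tests per study: {rate_multiple:.2f}")
```

With twenty uncorrected tests, the chance of at least one spurious ‘significant’ result is roughly 1 − 0.95²⁰ ≈ 0.64 rather than the nominal 0.05, which is why exploratory analyses must not be reported as if they were confirmatory.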
Within the qualitative world, confusing exploratory and confirmatory research is equally proble-
matic. The qualitative analogue of p-hacking/HARKing is the narrative fallacy, in which post-hoc
explanations that sound plausible are arrived at after examining the data. Such accounts should
only ever be presented as hypotheses/conjectures, to be tested out further, separately, with new
data from new participants. Sometimes this distinction is blurred, with researchers perhaps reporting in the ‘Method’ section that their study does not seek to generalize from their sample, but then, in the ‘Discussion’ section, giving themselves licence to make wild, unsupported generalizations.
These are issues for both qualitative and quantitative research, and both styles of research would
benefit from greater clarity regarding the exploratory-(dis)confirmatory distinction (Figure 1).
Exploratory research, of whatever style, is always provisional until confirmatory research has either
backed it up or challenged it.

5. Possible objections
In this Section, we briefly consider some possible objections that might be made to the argument for
methodological pragmatism presented in this paper. The points below build on the discussion
above, and we consider in particular three possible objections.

5.1. An impoverished epistemology


Pragmatism, in all its flavours, has been attacked for being ‘light’ intellectually (e.g. see Madden 1980;
Slaney 2015), and methodological pragmatism may appear to be attempting to be atheoretical and
to sweep inescapable philosophical issues under the carpet. Without adequate attention to theory of
knowledge, how can we know whether any of this ‘pragmatic’ research is valid? Methodological
pragmatism, in its apparently inclusive acceptance of all methods, does not seem to provide the
intellectual machinery needed to assess or criticize any method.
In response to this, we might say that, on the contrary, methodological pragmatism provides the
most severe level of criticality possible: the pragmatist’s ultimate test for any method is whether it
‘works’, here in the sense of being able to answer real questions posed by real researchers in practice.
It is not so much that methodological pragmatism sees all methods as equally valid; it is that it does
not allow prior reservations based on crude categorisations of methods to exert an influence. As long
as a method is experienced as useful in achieving a particular set of research goals, then it justifies its
presence. Within methodological pragmatism, a method can suffer the most robust criticism, but
only on the grounds that it does not adequately contribute to answering a research question.
This would appear to place the burden on methods exactly where it properly belongs.

5.2. A short-termist orientation


A pragmatic concern for ‘results’ could be seen to prioritize short-term gains over longer-term goals
by addressing immediate research problems at the expense of the bigger picture. In this way, it
might limit creativity and innovation, preferring to depend on practical solutions that have been
shown to work in the past, rather than exploring novel, innovative methods that might bring fresh
benefits in the future.
In response, we might say that short-termism would seem to be a danger with all research, not
just that conducted from a methodological pragmatist perspective. The researcher becomes comfor-
table in what they know and does not feel inclined to consider options outside of their ‘bubble’.
However, it would seem that this danger should be at its lowest within methodological pragmatism,
given the imperative within this approach to be as inclusive as possible in considering all existing –
and potential – research methods from all possible traditions. Any narrowness of focusing on
immediate issues at the expense of broader concerns is perhaps most likely to enter in the planning
and devising of research questions, rather than in the selection of suitable methods to address them,
and in that case it does not seem that methodological pragmatism should be to blame.

5.3. Ethical questions


The most serious of the three concerns considered in this Section relates to ethics, and the responsible researcher should certainly ensure that any potential ethical concerns with their research approaches attract their fullest attention. A critic might argue that the potential inclusivity
of methodological pragmatism to all possible methods is ethically suspect. A principled researcher,
who eschews certain methods on personal or ideological grounds, may be seen as holding them-
selves to a higher ethical standard than the methodological pragmatist, who seemingly cares only
about ‘getting results’, regardless of the consequences.
In response, we note that, clearly, ethical vigilance must pervade all aspects of conducting any
educational research (BERA 2018). It is important to acknowledge that one aspect of this is
making productive use of available resources, including research funding, and especially the time
and energies of all of the participants involved. Methodological pragmatism encourages us to ask
hard questions about whether the chosen methods are worth everyone’s time, effort and money,
in terms of what they stand to reveal about the situation. If alternative methods could gain the
same or better information at lower ‘cost’ (interpreted broadly), then they should be preferred on
ethical, as well as pragmatic, grounds.
The pragmatist’s quest for ‘utility’ requires us to consider what is valued, and ‘the findings’ are
only one aspect of a research study. The study’s effects (both long-term and short-term) on partici-
pants and researchers are of central importance, and a method cannot be regarded as pragmatically
useful if it risks a negative impact on those involved. Methods which risk harming anyone are not
compatible with methodological pragmatism if our desired outcomes include, as they must, the
wellbeing of all concerned.

6. Conclusion
This paper has argued that the qualitative-quantitative distinction creates unnecessary divisions
among researchers in education who have otherwise similar goals, and inhibits intra- and inter-dis-
ciplinary collaboration. Collaboration is a value held strongly by many educational researchers, and it
seems something of a contradiction that our field continues to be so divided in this way. Researchers
committed to opposing, incommensurable paradigms operate from within different silos, even
when their substantive interests may seem to be extremely close (Wallace and Kuo 2020). Qualitative
and quantitative researchers focused on the same topic might nonetheless read quite distinct litera-
tures and conduct studies that fail to speak to each other or take advantage of all of the insights
gained. Gorard and Taylor (2004, 149) described this separation as ‘divisive and conservative in
nature’ and unlikely to lead to best progress in research and practice. Discarding half of the research
methods at one’s disposal, or half of one’s potential colleagues and collaborators, seems extremely
unwise, and a disservice to the field.

Although there are certainly divisions within qualitative and quantitative research, it would seem
that polarization into differing, at times even ‘opposing’, methodological camps is likely to entrench
misapprehensions on both sides about what characterizes the ‘other’, which can then become
mutually self-reinforcing. In such a way, a polarized division once created can easily become self-sus-
taining. If ‘qualitative’ researchers derive their understanding of what ‘quantitative’ researchers do
and believe largely from other ‘qualitative’ researchers, with the reverse being true for ‘quantitative’
researchers, then it is hard to see how the situation will improve (Lund 2005).
Having worked across the sciences and the social sciences, Chomsky (1979) described how he
experienced being judged within mathematics, not on the basis of his credentials and academic
background, but purely on the basis of the value of what he had to say. In contrast, he found
that within the social sciences he was constantly challenged about his right to speak on such
matters. He noted:
In mathematics, in physics, people are concerned with what you say, not with your certification. But in order to
speak about social reality, you must have the proper credentials, particularly if you depart from the accepted
framework of thinking. Generally speaking, it seems fair to say that the richer the intellectual substance of a
field, the less there is a concern for credentials, and the greater is concern for content. (6-7)

Judging the content rather than the person would seem to be an inclusive and equitable stance with
everything to commend it, and might enable educational researchers to be more accepting of other
researchers’ ‘right’ to draw pragmatically on any method that they consider of possible use. No one
should have to say whether they are ‘qualitative’ or ‘quantitative’ in order to get a hearing. Research
methods that have been developed should be seen as the property of the entire community, and
research-capacity building should aim to lead all researchers into that shared inheritance (Rees,
Baron, Boyask, and Taylor 2007).
The essential skills of doing research well and reading research intelligently are not specific to
either qualitative or quantitative research styles, and each has much to learn from the other.
Along with other unhelpful and simplistic binaries/dichotomies (e.g. objective/subjective, positi-
vist/interpretivist, inductive/deductive, science/social science), the qualitative-quantitative distinc-
tion has outlived any usefulness that it may once have had. Instead, researchers should be
liberated to draw pragmatically on any methods and research styles that will progress their research,
so as to make educational research more robust and useful to as many people as possible (Alvesson,
Gabriel, and Paulsen 2017).

Data availability statement


There is no data associated with this paper.

Disclosure statement
No potential conflict of interest was reported by the author.

ORCID
Colin Foster http://orcid.org/0000-0003-1648-7485

References
Abelson, R.P., 2012. Statistics as principled argument. New York: Psychology Press.
Alvesson, M., Gabriel, Y., and Paulsen, R., 2017. Return to meaning: a social science with something to say. Oxford: Oxford
University Press.
Baggini, J., 2018. How the world thinks. London: Granta Books.
Becher, T., and Trowler, P., 2001. Academic tribes and territories. Buckingham, UK: McGraw-Hill Education UK.
British Educational Research Association [BERA], 2018. Ethical guidelines for educational research. 4th edition. London:
BERA. https://www.bera.ac.uk/researchers-resources/publications/ethical-guidelines-for-educational-research-2018.
Burkhardt, H., 2013. Methodological issues in research and development. In: Y. Li, and J. N. Moschkovich, eds. Proficiency
and beliefs in learning and teaching mathematics: learning from Alan Schoenfeld and Günter Törner. Rotterdam, The
Netherlands: Sense Publishers, 203–235.
Burkhardt, H., and Schoenfeld, A.H., 2003. Improving educational research: toward a more useful, more influential, and
better-funded enterprise. Educational Researcher, 32 (9), 3–14. doi:10.3102/0013189X032009003.
Cai, J., and Hwang, S., 2019. Constructing and employing theoretical frameworks in (mathematics) education research.
For the Learning of Mathematics, 39 (3), 45–47.
Chambers, C., 2019. The seven deadly sins of psychology. Princeton: Princeton University Press.
Chomsky, N., 1979. Language and responsibility: based on conversations with Mitsou Ronat [translated by John Viertel].
New York: Pantheon.
Christensen, L.B., et al., 2011. Research methods, design, and analysis. Harlow, Essex, UK: Pearson.
Christian, B., and Griffiths, T., 2016. Algorithms to live by: the computer science of human decisions. London: Macmillan.
Clarke, E., and Visser, J., 2019. Pragmatic research methodology in education: possibilities and pitfalls. International
Journal of Research & Method in Education, 42 (5), 455–469. doi:10.1080/1743727X.2018.1524866.
Creswell, J.W., and Creswell, J.D., 2018. Research design: qualitative, quantitative, and mixed methods approaches (5th
edition). Thousand Oaks, CA: Sage Publications.
Crotty, M., 2020. The foundations of social research: meaning and perspective in the research process. London: Routledge.
Davis, M.S., 1971. That’s interesting! Philosophy of the Social Sciences, 1 (2), 309–344. doi:10.1177/004839317100100211.
Davis, B., 2018. What sort of science is didactics? For the Learning of Mathematics, 38 (3), 44–49.
Denzin, N.K., and Lincoln, Y.S., 2011. The Sage handbook of qualitative research. London: Sage.
de Vaus, D., 2002. Research design in social research. London: Sage.
Dewey, J., 1910. The influence of Darwin on philosophy and other essays in contemporary thought. New York: Henry Holt
and Company.
Dienes, Z., 2008. Understanding psychology as a science: an introduction to scientific and statistical inference. Basingstoke,
UK: Macmillan International Higher Education.
Egan, K., 2002. Getting it wrong from the beginning: our progressivist inheritance from Herbert Spencer, John Dewey, and
Jean Piaget. Yale: Yale University Press.
Ercikan, K., and Roth, W.M., 2006. What good is polarizing research into qualitative and quantitative? Educational
Researcher, 35 (5), 14–23. doi:10.3102/0013189X035005014.
Evans, J., 2015. How to be a researcher: a strategic guide for academic success. Hove, East Sussex, UK: Routledge.
Gage, N.L., 1991. The obviousness of social and educational research results. Educational Researcher, 20 (1), 10–16.
doi:10.3102/0013189X020001010.
Goldacre, B., 2010. Bad science: quacks, hacks, and big pharma flacks. London: McClelland & Stewart.
Gorard, S., and Taylor, C., 2004. Combining methods in educational and social research. Maidenhead, UK: McGraw-Hill
Education UK.
Hammersley, M., 2005. The myth of research-based practice: the critical case of educational inquiry. International Journal
of Social Research Methodology, 8 (4), 317–330. doi:10.1080/1364557042000232844.
Hargreaves, D.H., 1996. Teaching as a research-based profession: possibilities and prospects. Teacher Training Agency
annual lecture. Cambridge: University of Cambridge, Department of Education.
Johnson, R.B., and Onwuegbuzie, A.J., 2004. Mixed methods research: a research paradigm whose time has come.
Educational Researcher, 33 (7), 14–26. doi:10.3102/0013189X033007014.
King, G., Keohane, R.O., and Verba, S., 1994. Designing social inquiry: scientific inference in qualitative research. Princeton,
NJ: Princeton University Press.
Larsson, S., 2009. A pluralist view of generalization in qualitative research. International Journal of Research & Method in
Education, 32 (1), 25–38. doi:10.1080/17437270902759931.
Leighton, R.B., and Sands, M., 1965. The Feynman lectures on physics (volume 1). California: Addison-Wesley.
Lund, T., 2005. The qualitative-quantitative distinction: some comments. Scandinavian Journal of Educational Research,
49 (2), 115–132. doi:10.1080/00313830500048790.
Madden, E.H., 1980. Methodological pragmatism appraised. Metaphilosophy, 11 (1), 76–94. doi:10.1111/j.1467-9973.
1980.tb00096.x.
Mook, D.G., 1983. In defense of external invalidity. American Psychologist, 38 (4), 379. doi:10.1037/0003-066X.38.4.379.
Nind, M., and Lewthwaite, S., 2020. A conceptual-empirical typology of social science research methods pedagogy.
Research Papers in Education, 35 (4), 467–487. doi:10.1080/02671522.2019.1601756.
Picho, K., and Artino Jr, A.R., 2016. 7 deadly sins in educational research. Journal of Graduate Medical Education, 8 (4),
483–487. doi:10.4300/JGME-D-16-00332.1.
Ramlo, S.E., 2020. Divergent viewpoints about the statistical stage of a mixed method: qualitative versus quantitative
orientations. International Journal of Research & Method in Education, 43 (1), 93–111. doi:10.1080/1743727X.2019.
1626365.
Rees, G., et al., 2007. Research-capacity building, professional learning and the social practices of educational research.
British Educational Research Journal, 33 (5), 761–779. doi:10.1080/01411920701582447.
Ritchie, S., 2020. Science fictions: how fraud, bias, negligence, and hype undermine the search for truth. London:
Metropolitan Books.
Rohrer, J.M., 2018. Thinking clearly about correlations and causation: graphical causal models for observational data.
Advances in Methods and Practices in Psychological Science, 1 (1), 27–42. doi:10.1177/2515245917745629.
Rycroft-Smith, L., and Macey, D., 2021. Deep questions of evidence and agency: how might we find ways to resolve
tensions between teacher agency and the use of research evidence in mathematics education professional
development? In: R. Marks, ed. Proceedings of the British Society for Research into Learning Mathematics, 41 (2).
British Society for Research into Learning Mathematics. https://bsrlm.org.uk/wp-content/uploads/2021/08/
BSRLM-CP-41-2-18.pdf.
Sarafoglou, A., et al., 2020. Teaching good research practices: protocol of a research master course. Psychology Learning
& Teaching, 19 (1), 46–59. doi:10.1177/1475725719858807.
Schoenfeld, A.H., 2007. Method. In: F. K. Lester, ed. Second handbook of research on mathematics teaching and learning.
Charlotte, NC: National Council of Teachers of Mathematics, 69–107.
Scott, D., 2007. Resolving the quantitative-qualitative dilemma: a critical realist approach. International Journal of
Research & Method in Education, 30 (1), 3–17. doi:10.1080/17437270701207694.
Slaney, K.L., 2015. In: J. Martin, J. Sugarman, and K.L. Slaney, eds. The Wiley handbook of theoretical and philosophical
psychology: methods, approaches, and new directions for social sciences. Chichester, UK: Wiley Blackwell, 343–358.
Smith, B., 2018. Generalizability in qualitative research: misunderstandings, opportunities and recommendations for the
sport and exercise sciences. Qualitative Research in Sport, Exercise and Health, 10 (1), 137–149. doi:10.1080/2159676X.
2017.1393221.
Thapar-Björkert, S., and Henry, M., 2004. Reassessing the research relationship: location, position and power in fieldwork
accounts. International Journal of Social Research Methodology, 7 (5), 363–381. doi:10.1080/1364557092000045294.
Thomas, G., 2021. Experiment’s persistent failure in education inquiry, and why it keeps failing. British Educational
Research Journal, 47 (3), 501–519. doi:10.1002/berj.3660.
Tooley, J., and Darby, D., 1998. Educational research: a critique: a survey of published educational research: report presented
to OFSTED. London: Office for Standards in Education.
Trifonas, P.P., 2009. Deconstructing research: paradigms lost. International Journal of Research & Method in Education, 32
(3), 297–308. doi:10.1080/17437270903259824.
Vincent, J., 2022. Beyond measure: the hidden history of measurement. London: Faber.
Wallace, T.L., and Kuo, E., 2020. Publishing qualitative research in the Journal of Educational Psychology: synthesizing
research perspectives across methodological silos. Journal of Educational Psychology, 112 (3), 579–583. doi:10.
1037/edu0000474.
Wasserstein, R., 2010. George Box: a model statistician. Significance, 7 (3), 134–135. doi:10.1111/j.1740-9713.2010.00442.x.
Whitty, G., and Wisby, E., 2009. Quality and impact in educational research: some lessons from England under New
Labour. In: T. A. C. Besley, ed. Assessing the quality of educational research in higher education. Leiden, The
Netherlands: Brill, 137–153.
Women in Statistics and Data Science [@WomenInStat]. 2021, April 30. I’m going to begin today with a bold claim: Being
an applied statistician is a lot like being an ethnographer [Tweet]. Twitter. https://twitter.com/WomenInStat/status/
1388137491534917637?s=20.
Yanai, I., and Lercher, M., 2020. A hypothesis is a liability. Genome Biology, 21 (1), 1–5. doi:10.1186/s13059-020-02133-w.
Yarkoni, T. 2022. The generalizability crisis. Behavioral and Brain Sciences, 45. doi:10.1017/S0140525X20001685.