Methodological pragmatism in educational research: from qualitative-quantitative to exploratory-confirmatory distinctions
Colin Foster
To cite this article: Colin Foster (2023): Methodological pragmatism in educational research:
from qualitative-quantitative to exploratory-confirmatory distinctions, International Journal of
Research & Method in Education, DOI: 10.1080/1743727X.2023.2210063
1. Introduction
Many scholars have lamented the tendency for empirical researchers in education, as well as in the
social sciences more broadly, to polarize into distinct methodological camps, labelling their research
– and even themselves – as ‘qualitative’ or ‘quantitative’ (e.g. Gorard and Taylor 2004; Ercikan and
Roth 2006). Not only does this division impede inter- as well as intra-disciplinary collaboration,
but the terms ‘qualitative’ and ‘quantitative’ are often poor descriptors of the intended distinction.
The qualitative-quantitative divide rests on philosophical and ideological differences and is much
more than a simple matter of whether ‘numbers’ are present in the generated or collected data
(Lund 2005). A ‘qualitative’ educational researcher conducting a semi-structured interview with a
teacher might ask them how many years they have been teaching, and write down a number,
but they would not consider this to be ‘quantitative data’ that required a fundamentally different,
CONTACT Colin Foster C.Foster@lboro.ac.uk Department of Mathematics Education, Loughborough University, Schofield
Building, Loughborough LE11 3TU, UK
© 2023 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/),
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The terms on which this
article has been published allow the posting of the Accepted Manuscript in a repository by the author(s) or with their consent.
placed on researchers and what is valued in public and policy discourse (Evans 2015), so as to support a
more unified and coherent field of research to address these challenges.
In the remainder of this paper, I first (in Section 2) question the qualitative-quantitative distinctions that are commonly made and argue that these are much less stark on close inspection. Building on this, I go on in Section 3 to argue that the challenges of doing research in education are broadly similar between qualitative and quantitative styles of research, taking examples from across the different stages of conducting a research study. In Section 4, I detail the nature of methodological pragmatism, arguing that the exploratory-confirmatory distinction is of far more relevance to researchers than the qualitative-quantitative one. In Section 5, I respond to some possible criticisms, and in Section 6 I conclude with some suggestions on how unhelpful polarizations between research and researchers might be obviated.
2. Questionable distinctions
This paper argues that attempts to differentiate qualitative from quantitative research lack coherence, and that, in practice, research in education is often messy and fails to fit neatly into qualitative/quantitative categories. Below, I problematize some of the ways in which the qualitative-quantitative distinction is often made.
Distinguishing data according to whether it contains numerical quantities is often difficult in practice, and not very meaningful. Is a statement that 'Most interview participants expressed X' a quantitative statement because it involves counting the frequency of such statements? It does not seem clear that making this distinction assists the conduct of educational research.
Does planning a chi-squared test make a study quantitative? If the chi-squared test is not in the end carried out, due perhaps to some cell frequencies being too small, does the study then remain qualitative?
Sometimes entire research methodologies, such as ‘action research’ or ‘design research’, may be
designated qualitative, but this seems not to accord with the variety of ways in which those
approaches may be manifested in practice (see Burkhardt 2013). The categorization seems unhelpful.
Over time, and supported by philosophical theorizing, it is possible to see how these divisions might develop into supposedly 'incommensurable paradigms'.
Having once ‘picked a side’, researchers might then set about laying out their epistemological and
ontological positions. However, this can sometimes be done with considerable philosophical naivety,
with stereotypical positions passed on uncritically from older to younger researchers. Philosophical
problems around objectivity/subjectivity, realism/idealism, relativism/constructivism, etc., when
taken in their strongest and most radical forms, and presented simplistically, could seem to threaten
the rationale for all styles of research. Choosing methods based on the researcher’s judgment of
which position they align more closely with in centuries-old, unresolved (and possibly unresolvable)
philosophical debates seems precarious, and researchers teaching methods courses in education
departments are unlikely to possess the kinds of strong background in philosophy that could
make such discussions enlightening.
Clearly, all researchers in education need to be as philosophically well-informed as possible
(e.g. Dienes 2008), with awareness of the challenges of ‘true objectivity’ in science (e.g. see
Egan 2002), the theory-ladenness of facts and the underdetermination of theory by evidence.
An awareness that theories can only ever be provisional, that causation in the social sciences is probabilistic, not deterministic, and that critical scepticism (but not cynicism) is fundamental to research is important for all researchers in education. But these points do not necessitate 'picking a side'. Epistemological and ontological assumptions must be acknowledged, and may be more important to some researchers than to others. Inescapably, they will impact on a study's design, analysis and interpretation. However, the methodological pragmatist may view the weight given to these positionings as often excessive, and the tendency to assume that they must represent fixed 'positions' over time for any particular researcher as unrealistic.
Theoretical/conceptual/analytic frameworks (see de Vaus 2002; Cai and Hwang 2019; Crotty
2020) should be equally important to qualitative or quantitative research, so ‘theory’ should
not be viewed as more the property of qualitative than quantitative research. It seems unhelpful
to make a strong distinction between 'grand Theories' (with a capital 'T'), which operate as worldviews or meta-narratives (and tend to be more strongly associated with qualitative and theoretical research), and more local theories that build up from attempts to capture the ways in which people behave in certain educationally-relevant situations, when it would seem that either qualitative or quantitative research might be well placed to contribute to either kind. Ultimately, the
overriding consideration for the methodological pragmatist is that if setting out one’s various
epistemological and ontological positions does not seem to have much or any impact on the
details of a study’s design, analysis and interpretation, then it should be de-emphasized and
perhaps not even referred to at all.
3. Analogous challenges
It would be uncontroversial to note here that qualitative and quantitative research each possess
different advantages and disadvantages, and that each might be appropriate for different purposes. This is typically the kind of message that young researchers receive on balanced research methods courses and in sensible research methods books. However, the argument of
this paper goes much further than this. It is not just that qualitative and quantitative research
each have their own difficulties; frequently, they present analogous challenges to the researcher.
This has occasionally been noted – a recent tweet stated, ‘Being an applied statistician is a lot like
being an ethnographer’ (Women in Statistics and Data Science 2021). If the qualitative-quantitative distinction is, as argued here, fundamentally spurious, then we would expect that the challenges associated with qualitative and quantitative research would often be highly similar. The sections below argue that this is indeed the case, taking examples from defining, operationalizing and measuring constructs, data reduction and synthesis, analytic interpretation, and threats to validity/trustworthiness.
contain statements that attempt to capture the degree to which participants expressed this or that
opinion or related to this or that experience. Nuance in qualitative reporting is often about scaling
(i.e. measuring) qualities. A great deal of quantitative language is often needed in order to summarize
the findings from even the most qualitative piece of research, and this leads us on to the issue of data
reduction and synthesis.
Raw data, of whatever kind, must always be reduced during analysis and synthesized, and no style of
research can escape the challenges that this brings.
involves much more than merely distinguishing correlation from causation (see Rohrer 2018). As
argued above, it is a mistake to think of qualitative research as inherently more nuanced. The
tools of statistics sometimes facilitate making highly nuanced statements, which can be hard to
express clearly in words. For example, perhaps two quantities have both increased, but one has
increased more than the other, or the gap between two quantities has decreased, although both
have increased. More complicated findings, such as a 3-way interaction or the results of a mediation
analysis or analysis of covariance, can be even harder to translate precisely into text, and this is surely
as challenging as attempting to capture and summarize subtle and nuanced statements made by
interview participants in qualitative research.
Quantitative science is often portrayed in terms of certainty, universality and ‘laws’, and yet scientists and statisticians tend to talk more in terms of ‘models’ of the real world, which are acknowledged to be deliberately simplified and imperfect representations of ‘reality’ (e.g. see Christensen,
Johnson, Turner and Christensen 2011). Models are not intended to capture all the details; they are intentionally slimmed-down, ‘reduced’ versions that are therefore efficient to work with. They are
always provisional and open to being improved with more data and theory: ‘All models are
wrong but some are useful’ (attributed to the statistician George Box, see Wasserstein 2010). In particular, this means that:
1. Models are descriptive, not prescriptive. They don’t say what will or must happen in the future; they
summarize patterns in what has been observed to happen in the past.
2. Models apply only approximately, and within certain, specified domains. No model can make predictions with 100% accuracy, and there will always be exceptions.
3. Models are probabilistic, not deterministic; i.e. they are ‘good bets’ that work on average, other
things being equal, but not every time or for every case.
Such models, derived from quantitative data, do not seem epistemologically very different from a
speculative theory or framework derived from qualitative data. The challenges and limitations are
essentially the same in handling ideas that are tentative conjectures or hypotheses and attempting
to ensure that they are understood in that way.
Schoenfeld (2007, 81) proposed three questions that readers should ask of any piece of research:
1. Why should one believe what the author says? (the issue of trustworthiness)
2. What situations or contexts does the research really apply to? (the issue of generality, or scope)
3. Why should one care? (the issue of importance)
These would seem to be readily applicable and relevant across both qualitative and quantitative
studies, since, within all research in education, it would seem fair to say that ‘The function of a
research design is to ensure that the evidence obtained enables us to answer the initial question as
unambiguously as possible’ (de Vaus 2002, 9, original emphasis). All research claims, derived from
whatever tradition, need severe testing before acceptance. Many (even opposing) claims may
seem ‘obvious’ to someone, and one purpose of research is to distinguish statements that are
‘obvious and true’ from those that are ‘obvious and false’ (see Gage 1991).
Qualitative studies are often criticized for being small-scale and not generalizable, but sample size
may not be the most important factor in judging generalisability – and generalisability may not always
be the intention anyway (Smith 2018). A large quantitative study might equally be judged to have low
generalisability due to the ‘artificial’ nature of its design (e.g. a laboratory-based – rather than classroom-based – setting, using researcher-devised materials, across a short timescale, etc.). However,
the ‘external invalidity’ (lack of ‘ecological validity’) of such studies may be considered a feature,
and not a bug, because it may facilitate theoretical conclusions being drawn with much greater confidence (Mook 1983), in the same way that a qualitative interview situation would not be criticized for
being ‘artificial’. A plural, but not dualistic, approach to generalisability is helpful (Larsson 2009).
When evaluating research, Christian and Griffiths (2016, 223) criticized both ‘cherry-picked personal anecdotes and aggregated summary statistics. The anecdotes, of course, are rich and vivid, but they’re unrepresentative. … Aggregate statistics, on the other hand, are the reverse: comprehensive but thin.’ However, these different criticisms seem to have more to do with questions of scale (a small amount of rich data versus a large amount of superficial data) than with whether the research should be considered qualitative or quantitative. Both qualitative and quantitative researchers can inappropriately ‘stack the deck’ in their literature reviews by cherry-picking sources in an unsystematic review. The replication crisis has highlighted quantitative researchers’ excessive ‘researcher degrees of freedom’ at the analytic stage. This seems similar in character to allegations of bias and subjective idiosyncrasy made against qualitative researchers when conducting their analyses. It would seem that similar considerations involving acknowledging the researchers’ positions
and interests/biases, as well as transparent reporting throughout, are important, along with pre-
registering analyses (of both kinds) in advance (Chambers 2019). It is often acknowledged that,
even in qualitative research, being an ‘outsider’ and aspiring to some level of objectivity can be valuable, though difficult (Thapar-Björkert and Henry 2004).
In both styles of research, bias is a considerable source of concern, with the researcher’s preconceptions always at risk of exerting undue influence. But it does not seem clear that either qualitative
or quantitative research has a greater problem in this area. As for the other aspects considered
above, the challenges seem essentially the same, perhaps because all researchers bring the same
strengths and limitations of being human.
the personal philosophy of the researcher, and what they regard as interesting and important.
However, once a research question is arrived at, methodological pragmatism dictates that all
methods are on the table; nothing is off limits.
None of this is to say that epistemological and ontological positions are of no relevance to educational research. It is important to acknowledge the illusory nature of ‘the view from nowhere’. All researchers are constantly positioned in relation to their ontological and epistemological assumptions, and they need to be aware of what those are if they are to understand the warrants for the
methods that they use, the questions that they frame and the kind of ‘knowledge’ that they
expect their research to produce. Any researcher using any research design must be mindful of
the knowledge claims being made and how these can be substantiated. However, philosophical
pragmatism in all its forms seeks to question the overriding importance of such positionings.
Baggini (2018, 82, original emphasis) commented, ‘One consequence of adopting the pragmatist
viewpoint is that many philosophical problems are not so much solved as dissolved’. As Dewey
(1910, 19) wrote of philosophical questions, ‘We do not solve them: we get over them.’ While this
does not give researchers carte blanche for a ‘sloppy mishmash’ of methods, and any methods
deployed need to be ‘sympathetic’ to each other, it does open up researchers to consider absolutely
any method, from whatever source. While different positionings will affect how a method might be
understood and used, and our methodological biases are important to acknowledge in order to try to work against them, on the pragmatist view they should not rule out any methods a priori.
It is clearly challenging to navigate the many choices associated with a pragmatic approach
(Clarke and Visser 2019), but methodological pragmatism would seem to offer a way towards
greater methodological diversity and coherence within the field of educational research. It also
seems likely to lead to higher-quality research: research is hard enough without throwing away
half of the toolbox of available methods. Seeing research design as a logical problem, rather than
a logistical problem, the methodological pragmatist seeks to identify whatever approaches seem
most appropriate to achieve the research goals. As de Vaus (2002, 9) put it, ‘issues of sampling,
method of data collection (e.g. questionnaire, observation, document analysis), design of questions
are all subsidiary to the matter of “What evidence do I need to collect?”’
Crotty (2020, 15) pointed out that to solve problems in everyday life we all tend to use any
method that will achieve our desired purposes:
We may consider ourselves utterly devoted to qualitative research methods. Yet when we think about investi-
gations carried out in the normal course of our daily lives, how often do measuring and counting turn out to be
essential to our purposes?
Constraining our research questions and methods to accommodate our beliefs – prejudices, even –
about research styles, or limitations in our repertoire of known research methods, seems like the tail
wagging the dog.
However, this is not to say that every researcher in education must have facility in all possible methods, or that methodological specialism is never defensible. The complexity of many methods, and the years of experience that may be needed to develop expertise in, say, multilevel Bayesian modelling or conversation analysis, mean that not all researchers can be highly skilled in every possible method. The typical, stylized doctoral journey of ‘Find a research interest,
Formulate research questions, Find methods to address those questions, Learn how to do those
methods, Carry out the study and Write it up’ becomes perhaps less practicable for the early-career researcher, who, due to the systemic constraints they experience as an academic (see
Evans 2015), needs to publish research rapidly. In such a situation, capitalizing on already-known
methods (and using locally available expertise and equipment) is clearly an efficient and perfectly
valid, ‘pragmatic’ approach. Being methods-led to a certain degree can be entirely consistent with
methodological pragmatism. However, retreating into a tribe (see Becher and Trowler 2001) that
prides itself on not engaging with certain methods would seem to do a disservice to the field. Ignoring half of the relevant literature because the researcher ‘doesn’t do statistics’ or ‘doesn’t do Theory’
is not productive. At the very least, all researchers should aspire to be equipped to critically read any
style of research that falls within their area of interest.
from the wider context of the other tests that were also carried out, and p-hacked findings should
not be trusted or expected to replicate.
Within the qualitative world, confusing exploratory and confirmatory research is equally problematic. The qualitative analogue of p-hacking/HARKing is the narrative fallacy, in which post-hoc
explanations that sound plausible are arrived at after examining the data. Such accounts should
only ever be presented as hypotheses/conjectures, to be tested out further, separately, with new
data from new participants. Sometimes this distinction is blurred, with researchers reporting in the ‘Method’ section that their study does not seek to generalize from their sample, but then in the ‘Discussion’ section giving themselves licence to make wild, unsupported generalizations.
These are issues for both qualitative and quantitative research, and both styles of research would
benefit from greater clarity regarding the exploratory-(dis)confirmatory distinction (Figure 1).
Exploratory research, of whatever style, is always provisional until confirmatory research has either
backed it up or challenged it.
5. Possible objections
In this section, I briefly consider some possible objections that might be made to the argument for methodological pragmatism presented in this paper. The points below build on the discussion above, and I consider three possible objections in particular.
shown to work in the past, rather than exploring novel, innovative methods that might bring fresh
benefits in the future.
In response, it might be said that short-termism would seem to be a danger with all research, not just that conducted from a methodological pragmatist perspective. The researcher becomes comfortable in what they know and does not feel inclined to consider options outside of their ‘bubble’.
However, it would seem that this danger should be at its lowest within methodological pragmatism,
given the imperative within this approach to be as inclusive as possible in considering all existing –
and potential – research methods from all possible traditions. Any narrowness of focusing on
immediate issues at the expense of broader concerns is perhaps most likely to enter during the planning
and devising of research questions, rather than in the selection of suitable methods to address them,
and in that case it does not seem that methodological pragmatism should be to blame.
6. Conclusion
This paper has argued that the qualitative-quantitative distinction creates unnecessary divisions
among researchers in education who have otherwise similar goals, and inhibits intra- and inter-disciplinary collaboration. Collaboration is a value held strongly by many educational researchers, and it
seems something of a contradiction that our field continues to be so divided in this way. Researchers
committed to opposing, incommensurable paradigms operate from within different silos, even
when their substantive interests may seem to be extremely close (Wallace and Kuo 2020). Qualitative
and quantitative researchers focused on the same topic might nonetheless read quite distinct literatures and conduct studies that fail to speak to each other or take advantage of all of the insights
gained. Gorard and Taylor (2004, 149) described this separation as ‘divisive and conservative in
nature’ and unlikely to lead to best progress in research and practice. Discarding half of the research
methods at one’s disposal, or half of one’s potential colleagues and collaborators, seems extremely
unwise, and a disservice to the field.
Although there are certainly divisions within qualitative and quantitative research, it would seem
that polarization into differing, at times even ‘opposing’, methodological camps is likely to entrench
misapprehensions on both sides about what characterizes the ‘other’, which can then become
mutually self-reinforcing. In such a way, a polarized division, once created, can easily become self-sustaining. If ‘qualitative’ researchers derive their understanding of what ‘quantitative’ researchers do
and believe largely from other ‘qualitative’ researchers, with the reverse being true for ‘quantitative’
researchers, then it is hard to see how the situation will improve (Lund 2005).
Having worked across the sciences and the social sciences, Chomsky (1979) described how he
experienced being judged within mathematics, not on the basis of his credentials and academic
background, but purely on the basis of the value of what he had to say. In contrast, he found
that within the social sciences he was constantly challenged about his right to speak on such
matters. He noted:
In mathematics, in physics, people are concerned with what you say, not with your certification. But in order to
speak about social reality, you must have the proper credentials, particularly if you depart from the accepted
framework of thinking. Generally speaking, it seems fair to say that the richer the intellectual substance of a
field, the less there is a concern for credentials, and the greater is concern for content. (6-7)
Judging the content rather than the person would seem to be an inclusive and equitable stance with
everything to commend it, and might enable educational researchers to be more accepting of other
researchers’ ‘right’ to draw pragmatically on any method that they consider of possible use. No one
should have to say whether they are ‘qualitative’ or ‘quantitative’ in order to get a hearing. Research
methods that have been developed should be seen as the property of the entire community, and
research-capacity building should aim to lead all researchers into that shared inheritance (Rees,
Baron, Boyask, and Taylor 2007).
The essential skills of doing research well and reading research intelligently are not specific to
either qualitative or quantitative research styles, and each has much to learn from the other.
Along with other unhelpful and simplistic binaries/dichotomies (e.g. objective/subjective, positivist/interpretivist, inductive/deductive, science/social science), the qualitative-quantitative distinction has outlived any usefulness that it may once have had. Instead, researchers should be
liberated to draw pragmatically on any methods and research styles that will progress their research,
so as to make educational research more robust and useful to as many people as possible (Alvesson,
Gabriel, and Paulsen 2017).
Disclosure statement
No potential conflict of interest was reported by the author.
ORCID
Colin Foster http://orcid.org/0000-0003-1648-7485
References
Abelson, R.P., 2012. Statistics as principled argument. New York: Psychology Press.
Alvesson, M., Gabriel, Y., and Paulsen, R., 2017. Return to meaning: a social science with something to say. Oxford: Oxford
University Press.
Baggini, J., 2018. How the world thinks. London: Granta books.
Becher, T., and Trowler, P., 2001. Academic tribes and territories. Buckingham, UK: McGraw-Hill Education UK.
British Educational Research Association [BERA], 2018. Ethical guidelines for educational research. fourth edition. London:
BERA. https://www.bera.ac.uk/researchers-resources/publications/ethical-guidelines-for-educational-research-2018.
Burkhardt, H., 2013. Methodological issues in research and development. In: Y. Li, and J. N. Moschkovich, eds. Proficiency
and beliefs in learning and teaching mathematics: learning from Alan Schoenfeld and Günter Törner. Rotterdam, The
Netherlands: Sense Publishers, 203–235.
Burkhardt, H., and Schoenfeld, A.H., 2003. Improving educational research: toward a more useful, more influential, and better-funded enterprise. Educational Researcher, 32 (9), 3–14. doi:10.3102/0013189X032009003.
Cai, J., and Hwang, S., 2019. Constructing and employing theoretical frameworks in (mathematics) education research.
For the Learning of Mathematics, 39 (3), 45–47.
Chambers, C., 2019. The seven deadly sins of psychology. Princeton: Princeton University Press.
Chomsky, N., 1979. Language and responsibility: based on conversations with Mitsou Ronat [translated by John Viertel].
New York: Pantheon.
Christensen, L.B., et al., 2011. Research methods, design, and analysis. Harlow, Essex, UK: Pearson.
Christian, B., and Griffiths, T., 2016. Algorithms to live by: the computer science of human decisions. London: Macmillan.
Clarke, E., and Visser, J., 2019. Pragmatic research methodology in education: possibilities and pitfalls. International
Journal of Research & Method in Education, 42 (5), 455–469. doi:10.1080/1743727X.2018.1524866.
Creswell, J.W., and Creswell, J.D., 2018. Research design: qualitative, quantitative, and mixed methods approaches (5th
edition). Thousand Oaks, CA: Sage publications.
Crotty, M., 2020. The foundations of social research: meaning and perspective in the research process. London: Routledge.
Davis, M.S., 1971. That’s interesting! Philosophy of the Social Sciences, 1 (2), 309–344. doi:10.1177/004839317100100211.
Davis, B., 2018. What sort of science is didactics? For the Learning of Mathematics, 38 (3), 44–49.
Denzin, N.K., and Lincoln, Y.S., 2011. The Sage handbook of qualitative research. London: Sage.
de Vaus, D., 2002. Research design in social research. London: Sage.
Dewey, J., 1910. The influence of Darwin on philosophy and other essays in contemporary thought. New York: Henry Holt
and Company.
Dienes, Z., 2008. Understanding psychology as a science: an introduction to scientific and statistical inference. Basingstoke,
UK: Macmillan International Higher Education.
Egan, K., 2002. Getting it wrong from the beginning: our progressivist inheritance from Herbert Spencer, John Dewey, and
Jean Piaget. Yale: Yale University Press.
Ercikan, K., and Roth, W.M., 2006. What good is polarizing research into qualitative and quantitative? Educational
Researcher, 35 (5), 14–23. doi:10.3102/0013189X035005014.
Evans, J., 2015. How to be a researcher: a strategic guide for academic success. Hove, East Sussex, UK: Routledge.
Gage, N.L., 1991. The obviousness of social and educational research results. Educational Researcher, 20 (1), 10–16.
doi:10.3102/0013189X020001010.
Goldacre, B., 2010. Bad science: quacks, hacks, and big pharma flacks. London: McClelland & Stewart.
Gorard, S., and Taylor, C., 2004. Combining methods in educational and social research. Maidenhead, UK: McGraw-Hill
Education UK.
Hammersley, M., 2005. The myth of research-based practice: the critical case of educational inquiry. International Journal
of Social Research Methodology, 8 (4), 317–330. doi:10.1080/1364557042000232844.
Hargreaves, D.H., 1996. Teaching as a research-based profession: possibilities and prospects. Teacher training agency
annual lecture. Cambridge: University of Cambridge, Department of Education.
Johnson, R.B., and Onwuegbuzie, A.J., 2004. Mixed methods research: a research paradigm whose time has come.
Educational Researcher, 33 (7), 14–26. doi:10.3102/0013189X033007014.
King, G., Keohane, R.O., and Verba, S., 1994. Designing social inquiry: scientific inference in qualitative research. Princeton,
NJ: Princeton University Press.
Larsson, S., 2009. A pluralist view of generalization in qualitative research. International Journal of Research & Method in
Education, 32 (1), 25–38. doi:10.1080/17437270902759931.
Leighton, R.B., and Sands, M., 1965. The Feynman lectures on physics (volume 1). California: Addison-Wesley.
Lund, T., 2005. The qualitative-quantitative distinction: some comments. Scandinavian Journal of Educational Research,
49 (2), 115–132. doi:10.1080/00313830500048790.
Madden, E.H., 1980. Methodological pragmatism appraised. Metaphilosophy, 11 (1), 76–94. doi:10.1111/j.1467-9973.
1980.tb00096.x.
Mook, D.G., 1983. In defense of external invalidity. American Psychologist, 38 (4), 379. doi:10.1037/0003-066X.38.4.379.
Nind, M., and Lewthwaite, S., 2020. A conceptual-empirical typology of social science research methods pedagogy.
Research Papers in Education, 35 (4), 467–487. doi:10.1080/02671522.2019.1601756.
Picho, K., and Artino Jr, A.R., 2016. 7 deadly sins in educational research. Journal of Graduate Medical Education, 8 (4),
483–487. doi:10.4300/JGME-D-16-00332.1.
Ramlo, S.E., 2020. Divergent viewpoints about the statistical stage of a mixed method: qualitative versus quantitative
orientations. International Journal of Research & Method in Education, 43 (1), 93–111. doi:10.1080/1743727X.2019.
1626365.
Rees, G., et al., 2007. Research-capacity building, professional learning and the social practices of educational research.
British Educational Research Journal, 33 (5), 761–779. doi:10.1080/01411920701582447.
Ritchie, S., 2020. Science fictions: how fraud, bias, negligence, and hype undermine the search for truth. New York:
Metropolitan Books.
Rohrer, J.M., 2018. Thinking clearly about correlations and causation: graphical causal models for observational data.
Advances in Methods and Practices in Psychological Science, 1 (1), 27–42. doi:10.1177/2515245917745629.
Rycroft-Smith, L., and Macey, D., 2021. Deep questions of evidence and agency: how might we find ways to resolve
tensions between teacher agency and the use of research evidence in mathematics education professional development?
In: R. Marks, ed. Proceedings of the British Society for Research into Learning Mathematics, 41 (2). British Society for
Research into Learning Mathematics. https://bsrlm.org.uk/wp-content/uploads/2021/08/BSRLM-CP-41-2-18.pdf.
Sarafoglou, A., et al., 2020. Teaching good research practices: protocol of a research master course. Psychology Learning
& Teaching, 19 (1), 46–59. doi:10.1177/1475725719858807.
Schoenfeld, A.H., 2007. Method. In: F. K. Lester, ed. Second handbook of research on mathematics teaching and learning.
Charlotte, NC: National Council of Teachers of Mathematics, 69–107.
Scott, D., 2007. Resolving the quantitative-qualitative dilemma: a critical realist approach. International Journal of
Research & Method in Education, 30 (1), 3–17. doi:10.1080/17437270701207694.
Slaney, K.L., 2015. In: J. Martin, J. Sugarman, and K. L. Slaney, eds. The Wiley handbook of theoretical and philosophical
psychology: methods, approaches, and new directions for social sciences. Chichester, UK: Wiley Blackwell, 343–358.
Smith, B., 2018. Generalizability in qualitative research: misunderstandings, opportunities and recommendations for the
sport and exercise sciences. Qualitative Research in Sport, Exercise and Health, 10 (1), 137–149. doi:10.1080/2159676X.
2017.1393221.
Thapar-Björkert, S., and Henry, M., 2004. Reassessing the research relationship: location, position and power in fieldwork
accounts. International Journal of Social Research Methodology, 7 (5), 363–381. doi:10.1080/1364557092000045294.
Thomas, G., 2021. Experiment’s persistent failure in education inquiry, and why it keeps failing. British Educational
Research Journal, 47 (3), 501–519. doi:10.1002/berj.3660.
Tooley, J., and Darby, D., 1998. Educational research: a critique: a survey of published educational research: report presented
to OFSTED. London: Office for Standards in Education.
Trifonas, P.P., 2009. Deconstructing research: paradigms lost. International Journal of Research & Method in Education, 32
(3), 297–308. doi:10.1080/17437270903259824.
Vincent, J., 2022. Beyond measure: the hidden history of measurement. London: Faber.
Wallace, T.L., and Kuo, E., 2020. Publishing qualitative research in the journal of educational psychology: synthesizing
research perspectives across methodological silos. Journal of Educational Psychology, 112 (3), 579–583. doi:10.
1037/edu0000474.
Wasserstein, R., 2010. George box: a model statistician. Significance, 7 (3), 134–135. doi:10.1111/j.1740-9713.2010.00442.x.
Whitty, G., and Wisby, E., 2009. Quality and impact in educational research: some lessons from England under new
labour. In: T. A. C. Besley, ed. Assessing the quality of educational research in higher education. Leiden, The
Netherlands: Brill, 137–153.
Women in Statistics and Data Science [@WomenInStat], 2021, 30 April. I'm going to begin today with a bold claim: Being
an applied statistician is a lot like being an ethnographer [Tweet]. Twitter. https://twitter.com/WomenInStat/status/
1388137491534917637?s=20.
Yanai, I., and Lercher, M., 2020. A hypothesis is a liability. Genome Biology, 21 (1), 1–5. doi:10.1186/s13059-020-02133-w.
Yarkoni, T., 2022. The generalizability crisis. Behavioral and Brain Sciences, 45. doi:10.1017/S0140525X20001685.