Oxford Review of Education, 2017
https://doi.org/10.1080/03054985.2017.1316252

The invisible impact of educational research


Tim Cain and David Allan
Edge Hill University, UK

ABSTRACT
Although there are policy calls for educational research to discover 'what works' and thereby inform decision making directly, the research literature argues instead for research to have a 'conceptual' impact on practice. Empirical studies also suggest that, when teachers use research, their use is conceptual; research influences the content and the process of their thinking, changing attitudes and perceptions and making educational decision making more intelligent. This study investigates the ways in which educational research has achieved impact on practice from the perspective of the researchers. A sample of highly-rated impact case studies in the UK's research assessment exercise (REF2014) was subjected to content analysis, using qualitative coding techniques. Analysis shows that most research is 'invisible' to education practitioners because it is embedded in educational policies, technologies, and services. This 'invisible use' is unlikely to realise the conceptual benefits claimed for research utilisation. If educational research is to make educational decision making more intelligent at its point of use, it will be necessary to re-think current notions of quality in research impact.

KEYWORDS
Educational research; research impact; research utilisation; teacher thinking

Introduction
Currently there are calls for education in the public sector to be improved by evidence from
research, as part of a broader 'evidence-informed practice' movement that spans public
policy areas such as health, social work, and crime reduction (Bristow, Carter, & Martin,
2015). In education, the government in England expects research to provide teachers ‘with
evidence about what works’ in the expectation that this will improve the quality of teaching
(DfE, 2013/2014). This follows a major, government-sponsored report calling for a ‘revolution’
in education:
A change of culture … with more education about evidence … and whole new systems to run
trials as a matter of routine, to identify questions that matter to practitioners, to gather evi-
dence on what works best, and then, crucially, to get it read, understood, and put into practice.
(Goldacre, 2013, p. 7)
Goldacre (2013) suggested that educational practice could be transformed from being driven
by tradition, personal experience, and political ideology, into a more scientific and inde-
pendent endeavour if practitioners would implement research findings about 'what works
best'. Accepted by the government, this report gave weight to several policy initiatives to
promote evidence-based practice, both in education and public policy more generally. The
latter included a requirement that universities should be judged, in the periodic assessments
of their research, on the ‘impact’ of their research on the social world beyond academia. The
universities' accounts of research impact, submitted to the subsequent assessment exercise
(REF2014), provide an opportunity to investigate how research impacts on educational practice
from the perspective of the researchers. This article reviews theoretical and empirical
explanations of how research can impact on educational practice, and compares these with
a sample of highly-rated ‘impact case studies’ (ICSs) submitted to REF2014. It suggests that
the ICSs have a limited perspective on research impact and explores the ramifications of this
for policy.

How research can impact on educational practice


Four ways of conceptualising research impact are apparent in the literature about educa-
tional practice beyond universities. The first sees research findings as a commodity that can
be commissioned, accessed, and used by consumers in a ‘supply and demand’ relationship
(Behrstock, Drill, & Miller, 2009). From this perspective, empirical studies find that teachers
rarely use research but when they do, they prefer research that is relevant to their day-to-day
concerns, is easily accessible and understandable, and fits into highly pressurised profes-
sional lives which contain little time for reading and reflecting (Behrstock et al., 2009;
Beycioglu, Ozer, & Ugurlu, 2010; Borg, 2009; Dagenais et al., 2012; Everton, Galton, & Pell,
2000). Drawing on Baudrillard (1996), Brown (2015) theorises that the value of research for
teachers has two dimensions: its usability, seen as the extent to which it can answer pressing
concerns; and its ‘signifying value’, seen as the prestige it confers on users. From this per-
spective, ‘luxury’ research has a high ‘use value’ because it helps teachers achieve their prac-
tical objectives and high ‘signifying value’ because it is seen as better than other types of
information. Luxury research confers status on its possessor who is seen as able to under-
stand and to use difficult and complex knowledge and is thereby differentiated from other,
less knowledgeable colleagues.
A second way of conceptualising research impact sees research as informing decision
making at the level of national, local, or institutional policy-making. This perspective assumes
that research is used to support decisions when there is a clear choice between alternatives:
‘the term evidence suggests that the information shines a clear unambiguous light on how
to strengthen school performance' (Honig & Coburn, 2008, p. 582). Nevertheless, research
has established that this simple view of research-based decision making is only occasionally
realised in real-world decisions. Drawing on Weiss (1979), researchers commonly distinguish
between 'instrumental', 'conceptual', and 'symbolic' research use:
In instrumental utilization there is a concrete application of the research, and the research is nor-
mally translated into a material and useable form, such as a protocol. The research in this case is
used to make specific decisions/interventions. In conceptual utilization the research may change
one’s thinking but not necessarily one’s particular action. In this kind of research utilization, the
research informs and enlightens the decision maker … Symbolic utilization involves the use of
research as a persuasive or political tool to legitimate a position or practice. (Estabrooks, 1999,
p. 204, emphases in original)

The ‘instrumental’, ‘conceptual’, and ‘symbolic’ categorisation has been widely used to
debate research use (e.g. Cain, 2016; Hammersley, 2002; Ion & Iucu, 2014; Nutley, Walter, &
Davies, 2007), sometimes in more elaborated terms (Landry, Amara, & Lamari, 2001; Weiss,
1979). ‘Instrumental’ use is said to be problematic because individual school contexts and
teachers’ practices differ widely, making it difficult to transfer educational practices from
one context to another; this explains problems of ‘treatment fidelity’ when educational pro-
grammes are implemented in several contexts (Detrich, 1999). It is also argued that educa-
tional research cannot, in principle, determine ‘what works’ in teaching because ‘what works’
is open to competing interpretations which depend on competing values (e.g. Biesta, 2007,
2010). ‘Symbolic’ use is also critiqued because it implies superficial engagement with research
and the cynical use of it for political ends. There is therefore general agreement that in edu-
cational matters, the ‘conceptual’ use of educational research is to be preferred because it
implies intelligent and critical involvement of users. In this understanding, research provides
a view of reality and of what is achievable. Conceptual use of research leads to practitioners
'redefining issues, sensitising and altering perceptions' (Nisbet & Broadfoot, 1980, p. 22).
Nisbet and Broadfoot (1980) suggest that the influence of research on education is strongest,
‘where it raises new questions and contributes to transformations in the general paradigms’
(p. 11). Conceptual engagement with research implies that users select what is most relevant
and useful to them, and interpret research in the light of other knowledge (Hammersley,
2002). It also admits a variety of possible research methodologies, in contrast to the ‘instru-
mental’ view of research use, which admits only research that demonstrates ‘what works’
(Oancea & Pring, 2009).
A third perspective on research impact develops this conceptual view by assuming that
research informs teacher thinking. Several authors (e.g. Anwaruddin, 2015; Carr, 2006;
Korthagen, 2007; Matusov, 2017; Winch, Oancea, & Orchard, 2015) employ concepts from
Aristotle’s Nicomachean Ethics to locate education as not merely a technical activity, where
goals and outcomes can be specified in advance and achieved through the exercise of teach-
ers’ craft expertise or ‘techne’. Rather, education is fundamentally about travelling towards
morally worthwhile aims, which are both constituted and realised through teaching and
learning activities and interactions. The thinking that teachers need to achieve this is
described as ‘phronesis’ or ‘practical wisdom’, which is partially tacit because it cannot be
captured in propositional statements. Phronesis is ‘a mode of ethical reasoning in which the
notions of deliberation, reflection and judgement play a central part’ (Carr, 2006, p. 426); it
is ‘a capacity to grasp the salient features of a situation, deliberate imaginatively and holis-
tically and to make ethically and practically sound judgements in specific situations’ (Winch
et al., 2015, p. 205). From this perspective, ‘Teaching is an eventful, relational, dialogic, autho-
rial, experiential, and value-driven process. In other words, teaching is phronêtic’ (Matusov,
2017, pp. 94–119). However, Winch et al. (2015) demur, arguing that teacher thinking nec-
essarily combines ‘techne’ and ‘phronesis’ because it requires both technical knowledge and
situated understanding, together with critical reflection. They see research as contributing
to each type of knowledge. Educational research can provide ‘warrants for action, reference
points for decisions, and practical toolboxes’ that can increase teachers’ technical knowledge;
it can also enhance practical wisdom and reflection, by providing new theoretical frame-
works, by informing the grounds on which judgements are made, and by ‘enabling teachers
to discriminate autonomously between good sense and commonsense’ (p. 213).

Elsewhere, teacher thinking is described in terms of ‘personal’ and/or ‘practical’ theory.


Personal theory is owned by individuals; it is generated by experience and reflection on
experience, particularly those experiences that stand out as important cases. Grounded in
ethical values, it is a ‘living theory’ (Whitehead & McNiff, 2006) that lacks the coherence of
formal theory (Cain & Cursley, 2017) but is sufficiently flexible to guide action in varied and
constantly changing contexts. Whilst thoughtful action is the embodiment of personal, prac-
tical theory, action is underpinned by beliefs, identity, and ethical values (Handal & Lauvas,
1987; Korthagen & Vasalos, 2005). From this perspective, teachers develop through engaging
in research, researching their own practice, preferably within teacher discussion groups
(Cochran-Smith & Lytle, 1999). The role of externally-generated texts is to inform teachers’
own research by suggesting focuses for teacher inquiry; providing concepts and a language
to communicate these concepts; challenging existing thinking by demonstrating that edu-
cational practice can be seen from a variety of perspectives; and giving ideas for action (Cain,
2015; Wiliam, Lee, Harrison, & Black, 2004).
Finally, teachers’ individual, personal knowledge can contribute to organisational learning
through processes which include socialisation (when one person’s tacit knowledge becomes
the tacit knowledge of others), combination (when one person’s explicit knowledge is added
to the explicit knowledge of others), articulation (when one person’s tacit knowledge is
made explicit as it is shared with others), and internalisation (when explicit knowledge is
integrated into thinking so as to become unconscious) (Nonaka & Takeuchi, 1995). Teachers’
discussions, based around an identifiable topic and informed by research, have been shown
to provide a stimulus for collegial explicating, sharing, questioning, and critiquing both
internalised, tacit knowledge and knowledge that is codified in school policies (Cain, 2015;
Wiliam et al., 2004). From this perspective, research is more likely to contribute to organisa-
tional learning when a school's climate is focused on learning, experimentation, and valuing
new ideas, when there are frequent and useful interactions about teaching and learning,
and when there are high levels of trust in the school (Brown, Daly, & Liou, 2016).
In principle, these four perspectives are not mutually contradictory; they can be seen as
increasingly sophisticated accounts of research use. The commodity perspective emphasises
teachers’ choice, reminding us that research competes for attention with other research, and
with other types of information. The decision making perspective emphasises potential roles
for research in policy-making at various levels, which will usually be about setting direction
and goals, allocating resources and thereby creating the general conditions in which teaching
and learning take place. The teacher thinking perspective highlights the role of research to
improve teachers’ understanding of the teaching and learning which lie at the heart of the
educational enterprise. The organisational learning perspective focuses attention on the
social aspect of teacher development and the potential for an institutional culture in which
teacher thinking continuously develops.
However, the gap between educational research and practice is notoriously difficult to
overcome. Twenty years ago, Zeichner (1995) complained:
More often than not, knowledge presented to teachers generated through academic educational
research is presented in a reified form which does not invite teachers to engage with it intellec-
tually. Educational research … is often simply presented as given or used as the justification for
the imposition of some prescriptive program for teachers to follow. (p. 161)
Twelve years later, this problem had not disappeared. Korthagen (2007) found that research-
ers and practitioners form distinct communities with little commonality between them.
Researchers disagree among themselves about fundamental matters such as what meth-
odologies might yield practice-relevant findings or how the robustness of these might be
assured. Teachers’ preconceptions, which influence how new knowledge is understood, are
resistant to change because of teachers’ long socialisation as students. In the light of these
problems, both Korthagen (2007) and Zeichner (1995) argued for academics and teachers
to undertake collaborative research together. In this context, the Research Excellence
Framework (REF2014) provides an opportunity to explore what researchers claim about the
impact of research on educational practice, and to ascertain how longstanding research-
into-practice problems are being addressed.

The Research Excellence Framework (REF2014)


The Research Excellence Framework is, ‘a fearsome assessment undertaken on behalf of the
UK government to gauge the quality of research in UK universities’ (Kelly, 2016, p. 665).
Although the REF is unique to the UK, similar exercises are underway in Hong Kong and Australia. In
2011, the research funding councils in the UK announced that, as part of the regular national
research assessment exercise, publicly funded research within universities would be judged
not only on its quality as research, but also on its ‘impact’, defined as, ‘an effect on, change
or benefit to the economy, society, culture, public policy or services … beyond academia’
(HEFCE, 2011, p. 48). For the purposes of the exercise, impact was defined very broadly to
include, ‘an effect on, change or benefit to’:

• the activity, attitude, awareness, behaviour, capacity, opportunity, performance, policy, practice, process, or understanding
• of an audience, beneficiary, community, constituency, organisation, or individuals
• in any geographic location whether locally, regionally, nationally, or internationally.

Impact includes the reduction or prevention of harm, risk, cost, or other negative effects.
(HEFCE, 2011, p. 26)
This definition implies that educational research might achieve impact not only on teach-
ing but on educational arrangements more broadly, including educational structures, sys-
tems, and theories. Although material impact might be seen in changed activity, behaviour,
and performance, it might also be seen in less visible changes to attitude, awareness, and
understanding.
To enable assessment of research impact, universities were required to submit accounts
of impact; these were assessed according to published criteria and the results of the assess-
ment were made public. In each discipline, universities were required to complete two types
of templates: a description of the university department’s general approach to, and strategy
for, achieving impact; and a prescribed number of impact case studies (ICSs). The more
researchers who were entered in the exercise (and so the greater the potential funding), the
more case studies were required. ICSs were limited to four pages and were rated by expert
panels according to the ‘reach’ of the claimed impact (‘the spread or breadth of influence or
effect on the relevant constituencies’) and its significance (‘the intensity of the influence or
effect’) (HEFCE, 2011).
These case studies can be seen as a means for universities to explain the relevance of
their research to society, but they can also be seen as exercises in competition and performa-
tivity. Kelly (2016) reports that the ‘impact’ element of the REF led to £210 million funding
in 2015–2016 (in all disciplines) and each ICS cost around £7500 and took around 30 days
of work to construct. Intended to attract as large a share as possible of government funding,
they likely stress the individual contributions of researchers within the submitting university,
and downplay the extent to which research is almost always a communal enterprise, con-
tributing to a flow of ideas and debates between, as well as within, universities. In some
universities ICSs were constructed retrospectively, by groups of academics and high-level
administrators, including some with little knowledge of the disciplinary field. Nevertheless,
they represent a source of data that would be difficult to achieve otherwise: concentrated
and focused attempts to capture what is understood as research impact at a large scale, and
how this has been achieved. Their publication, together with the funding councils’ assess-
ment of their quality, enables examples of ‘research impact’ to be identified and provides an
opportunity to examine the relationship between educational research and practice in the
UK.
Both the impact assessment arrangements in REF2014, and the general principle that
governments should oversee assessments of research impact, have been criticised.
Arguments about the ‘impact agenda’ overlap with those about evidence-informed practice
more generally and include the following:

• It fails to capture important aspects of impact (Knowles & Burrows, 2014; Martin, 2011).
• Its costs outweigh its benefits (Kelly, 2016; Martin, 2011).
• It reshapes the working conditions and practices of academics and, ultimately, of
academic disciplines (Knowles & Burrows, 2014; Watermeyer, 2014, 2016).
• It denies the political nature of research and disguises the political motivations for the
REF (Colley, 2014).
• It appears to threaten research which is critical of powerful groups, particularly gov-
ernments, and establishes political control over universities (McKibbin, 2010, cited in
Colley, 2014).
• It reduces inquiry to questions of ‘what works’ (Biesta, Allan, & Edwards, 2011).
• It promotes an unhealthy compliance with performative demands and encourages
superficial engagement with research (Dunleavy, 2012; Fielding, 2003).
• It ignores a history of failure, on the part of narrowly conceived, behavioural research,
to generate valid and useful findings (James, 2013).
These criticisms notwithstanding, our purpose in this article is to examine the impact of
educational research on practice beyond universities and, in particular, on the teaching and
learning that are central to education.

Methods
The data for this study were sampled from the 'impact case studies' (ICSs) submitted for
assessment. We sampled those ICSs which are known to be highly graded, i.e. those from
universities which had had all their ICSs graded as either 4* (‘Outstanding impacts in terms
of their reach and significance’) or 3* (‘Very considerable impacts in terms of their reach and
significance’, HEFCE, 2012). This generated 65 ICSs from the universities of Bristol, Cardiff,
Durham, Edinburgh, Glasgow, King’s College London, London Metropolitan, Loughborough,
Manchester, Manchester Metropolitan, Newcastle, Nottingham, Oxford, Queen's (Belfast),
Sheffield, Southampton, Stirling, Ulster, and York. To this we added the ICSs from the Institute
of Education (IoE) at University College London (UCL) because the IoE submitted by far the
largest number of ICSs of any university, 96% of which were graded either 4* or 3*.
We took a qualitative analytical approach, borrowing from grounded theory (Charmaz,
2006) and making no judgement about the intentions of the authors or the validity of the
text. This was important because most of the academic writing around research impact takes
a critical perspective. In contrast, we adopted a social realist perspective (Archer, 1995),
assuming that claims have meaning in the world beyond the academy, and that these mean-
ings are constructed socially, e.g. by the writers and the readers of the case studies. When
tempted towards a critical perspective, we brought to mind that the case studies also sum-
marised the careers of senior colleagues whose work we respected and admired.
ICSs were downloaded from the REF2014 website (www.ref.ac.uk) and subjected to a
sentence-by-sentence analysis. Initial coding categorised statements relating to research,
practice, and intermediaries. Progressively more focused coding enabled distinctions to be
made between, for example, research which had been commissioned by policy-making
bodies, NGOs, and the employing universities; between users such as pre-schools, univer-
sities, and lifelong learners; and intermediaries such as policy, publishers, and campaigning
organisations. Often, the structural limitations of the ICSs frustrated attempts to discover
more about the journey from research to practice; what follows therefore is a general account
which raises more questions than answers but nevertheless provides a broad overview of
the process from research to practice.
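To make this workflow concrete, the sketch below illustrates how such a sampling-and-coding pipeline might look in code. It is a minimal illustration under stated assumptions, not the procedure used in this study: the records, the sentence splitter, and the keyword codebook are simplified, hypothetical stand-ins for the iterative, judgement-based qualitative coding described above.

```python
import re
from collections import Counter

# Hypothetical records: (university, ICS grades, ICS text).
# In the study, ICSs were downloaded from www.ref.ac.uk and grades
# were taken from the published REF2014 results.
submissions = [
    ("University A", ["4*", "3*"], "The research explored assessment. Findings informed policy."),
    ("University B", ["4*", "2*"], "The project tracked pupil outcomes in schools."),
]

# Sampling rule from the study: keep only universities whose ICSs
# were ALL graded 4* or 3*.
sampled = [(uni, text) for uni, grades, text in submissions
           if all(g in ("4*", "3*") for g in grades)]

# A toy codebook for an initial, open-coding pass. The actual analysis
# used progressively focused qualitative codes, not keyword matching.
codebook = {
    "research":     re.compile(r"\b(investigated|explored|tested|measured|tracked)\b", re.I),
    "practice":     re.compile(r"\b(teachers?|schools?|pupils?|classroom)\b", re.I),
    "intermediary": re.compile(r"\b(policy|publishers?|briefings?|campaigns?)\b", re.I),
}

# Sentence-by-sentence pass: tally which codes each sentence attracts.
code_counts = Counter()
for uni, text in sampled:
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        for code, pattern in codebook.items():
            if pattern.search(sentence):
                code_counts[code] += 1

print(code_counts)  # e.g. Counter({'research': 1, 'intermediary': 1})
```

In practice, of course, the coding decisions were interpretive rather than mechanical; the sketch only shows where the sampling rule and the sentence-level unit of analysis sit in the pipeline.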

Findings
The relationship between educational research and practice is mediated through four activ-
ities which were consistently found in almost all ICSs:

• research discovers problems with an aspect of educational practice;
• research is disseminated beyond the research community;
• research is incorporated into activities and resources; and
• there are changes in thinking and practice.
Although this list of activities might imply a straightforward path, the research to practice
process was usually messy and iterative, with each stage repeated several times as new
research projects were developed and previous ones were disseminated. Therefore, the
description below necessarily simplifies a process that is better understood as almost always
inherently complex. (A possible exception was the University of Manchester’s evaluations
of two educational programmes. This ICS reported on evaluations, commissioned by the
government, which led to immediate policy action, cutting the funding for one programme
and increasing it for the other.)

Research discovers problems with an aspect of educational practice


More than half the ICSs included an initial investigation of some aspect of educational prac-
tice. Case studies used terms including ‘investigated’, ‘explored’, ‘tested’, ‘measured’, and
‘tracked’ to describe this process. Although the research methodology was often not appar-
ent in the ICSs, such investigations tended to have an evaluative function, positioning some
aspects of practice as problematic and hence providing a rationale for action and further
research. For example, an account of an evaluation of an initiative to raise the achievement
of black Caribbean learners in schools found that practitioners had varied understandings
and practices, leading to inconsistent outcomes for pupils. These findings contributed to
establishing the need for a further research project in order, ‘to raise public awareness of
achievement issues affecting Black and Minority Ethnic Learners in Bristol schools’ (Bristol
University).
Investigations into practice were also used to evaluate the effects of research-inspired
change, although these were infrequent in the ICSs. One evaluation of research-inspired
change was Raising Early Achievement in Literacy. Here, research inspired a project involving
680 children. At the beginning of the project, ‘39% of children were judged to have low or
extremely low levels of involvement (e.g. showing interest, concentration)’. At the end, 97%
were showing, ‘moderate, high, or extremely high levels of involvement’ (Sheffield University).
A similar example was found in the case of an educational programme, involving Queen’s
University Belfast, which was evaluated by means of a cluster randomised trial which pro-
vided evidence of ‘a measurable and positive impact on preschool children’s attitudes to
respect for ethnic diversity’.

Research is disseminated beyond the research community


Every case study provided details of how research was published (the template required
this). In order to achieve impact, research was also disseminated beyond the research com-
munity, chiefly to audiences of policy-makers and practitioners. In education, these groups
are often mutually exclusive. The most common conduit for research to enter practice was,
according to the ICSs, via policy. All 85 case studies claimed some form of influence on public
policy, and in some cases, the only impact claimed was on public policy. Some ICSs presented
practical impact as necessarily following policy impact. For example, one ICS described the
principal beneficiaries as, ‘Politicians and civil servants and, by corollary, children in England’
(UCL).
A common way of disseminating research to policy-makers was through research reports
or briefing papers—in several ICSs, policy-makers had commissioned the research and the
ICSs documented how they responded to the research findings. In some case studies,
researchers were invited to give expert opinions to policy-making bodies—the UK’s parlia-
mentary Select Committees, for example. Researchers also presented findings at conferences,
attended by policy-makers. Several ICSs included citations from named civil servants, mem-
bers of parliament, and occasionally ministers, attesting to the influence of research on
policy. Researchers were sometimes asked to provide consultancy to policy-makers or to sit
on committees with a policy-making function.
The means for disseminating research to practitioners were more diffuse, probably reflect-
ing the fact that practitioners are a larger and less homogenous group than policy-makers.
(The analysis suggests that policy-makers can sometimes be reached fairly straightforwardly
because they perceive some need for research in order to justify decisions to the public. In
contrast, educational practice appears to have less reliance on research; for example, we
found no instances in the analysis of schools commissioning research.) Researchers dissem-
inated to practitioners via practitioner-facing publications, including web-based publica-
tions, via conferences and seminars, and via networks of professionals, including subject
associations. Whereas researchers seemed able to communicate directly to policy-makers,
communication to practitioners was often achieved with the help of intermediaries such as
teaching unions, subject associations, and local authorities.
Dissemination was also facilitated by networks. Various ICSs referred to collaborative
networks of school teachers, lecturers, medical professionals, and other interest groups. For
example, a Cardiff University ICS referred to work with voluntary sector organisations under
the umbrella organisation of the Council for Learning Outside the Classroom, to research and
promote the benefits of outdoor learning.
The role of the media in disseminating research was unclear. Many ICSs cited references
to research in newspaper articles, television and radio broadcasts, or social media. These
were claimed to raise public awareness and contribute to public debate. However, it was
difficult to ascertain how references to research in the media travelled into professional
debate and educational practice.

Research is incorporated into activities and resources


The straightforward dissemination of research to practitioners was relatively rare among the
ICSs although some research was disseminated via practitioner publications (for example,
UCL’s ‘Holocaust Education’ case study cited an article in the practitioner publication, Teaching
History). More frequently, research became incorporated into educational programmes,
resources, initial teacher education, professional development, and a mixture of these activ-
ities. One example was Twenty First Century Science, which was described as:
a research evidence-informed suite of GCSE courses developed by the Science Education Group
… having significant impact on the day-to-day practice of several thousand teachers and on
over 120,000 students annually from 2006 to date. (University of York)
Substantial educational programmes were also reported in, among others, Cardiff University’s
peer mentoring programme, which led to around 1674 fewer adolescents being regular
smokers; UCL’s Musical Futures programme which introduced an innovative pedagogy to
over a third of all secondary schools in England; and Queen’s University Belfast’s Promoting
Respect for Ethnic Diversity programme, which showed evidence of improved attitudes among
over 4000 preschool children. In other cases, there was evidence that research influenced
curriculum design and content, including the National Curricula of the four nations in the
UK.
Research was also incorporated into educational resources. The GCSE science programme,
referred to above, included:
… 10 textbooks, 8 teacher and technician resource packs, and over 20 workbooks and revision
guides … supported by dedicated computer-based materials. (University of York)
These resources, and others like them, involved the collaboration of other stakeholders,
including publishers and examination groups. Occasionally, the researchers were heavily
involved in constructing the resources. For example, the University of Durham’s Pupil Premium
Toolkit was a web-based, practitioner-facing resource, written by the research team. Similarly,
UCL’s Making Games case study documented the development of a piece of software, devel-
oped in partnership with a software publisher, and trialled through a process which has
much in common with design research. Other research-informed resources include measures
of school- and pupil-achievement, developed by the universities of Durham and Bristol, and
Loughborough University's research-based resources for teaching mathematics to engineers.
Another common route to impact was via the professional development of teachers;
indeed, one case from the University of York was entirely based on in-service education and
training in Tanzania and Kenya. Some ICSs described how research influenced practice
through the conduit of initial teacher education (ITE). Research findings were incorporated
into ITE for teachers of early years (Stirling University and Manchester Metropolitan University),
English (King’s College, London), music (UCL), and citizenship education (University of York).
Additionally, the University of Glasgow submitted a case study describing research that
changed the mode and structure of initial teacher education in Scotland. Thus research both contributed to the
content and influenced the structure of teachers’ professional development.

There are changes in thinking and practice


Most ICSs documented changes in practice. These included changes in, for example:

• educational provision, e.g. for early years children (University of Oxford);
• curriculum, e.g. the use of phonics to teach early reading (UCL);
• pedagogy, e.g. in teaching citizenship (University of York);
• assessment, e.g. the increased use of formative assessment (King's College, London); and
• structures, e.g. the development of low-cost, private schools in developing countries (Newcastle).

Occasionally (as in Manchester University's evaluations of two educational programmes,
referred to earlier), the changes were presented as concrete and tangible: research led to
revisions in funding, curricula, school performance tables, and so on. However, the effect of
these changes on educational practice (i.e. the interactions that lead to learning, rather than
the structures that provide the context for learning) was often more difficult to ascertain.
As ideas from research travel from the researchers to practitioners, it appears increasingly
difficult for the ICSs to document what practical changes actually mean for their expected
beneficiaries.
In general, ICSs provided an estimate of the number of educational practitioners who
had benefited from the research and sometimes extrapolated the number of children who
might have benefited. However, as the University of Bristol commented, even when there
were data to show improvements (in Bristol University’s case, to the educational attainment
of African Caribbean students and those of mixed heritage groups), ‘it would be difficult to
establish a direct cause effect relationship with these improvements and the research’.
Furthermore, as Manchester Metropolitan University stated, ‘change is gradual’ and therefore
difficult to monitor. In this case, research showed deleterious effects of ‘no touch’ policies
and argued that, as a result of research, ‘formerly dominant approaches to touch and alle-
gations of abuse began to be reversed, and mass vetting was considerably reduced’ (emphasis
added). It is difficult to see how this change could be measured.
As well as documenting practical changes, some ICSs pointed to changes in thinking.
These included the development of specific concepts such as the ‘restrictive-expansive’ con-
tinuum of apprenticeship models, described in ICSs from Southampton University and UCL,
the use of ‘Threshold Concepts’ in Higher Education (Durham University), and the develop-
ment of the concept of ‘performativity’ (UCL). Some ICSs claimed that professionals were
consciously employing new language; one ICS stated, ‘Practices of professionals have
changed as workers in city councils … recognise the need to change their language’ (i.e. to
reflect the concepts developed in Newcastle University’s research). These sit within the ‘con-
ceptual’ view of research use.

Discussion
Viewed from the perspective of the highly rated ICSs submitted to REF2014, with the limi-
tations that this implies, the relationship between research and practice is generally this:
research discovers problems with practice, persuades influential stakeholders of the nature
and importance of these problems, helps to create conceptual and practical tools for address-
ing these problems, and occasionally, evaluates the extent to which the problems have been
addressed. Rather than contributing to a dialogue with practitioners, and advancing the
professional learning of practitioners and organisations, research is more often used to gen-
erate technologies and justify policies. There is evidence that research impacts on educa-
tional structures and arrangements but very few indications in the ICSs of practitioners
engaging with research, interrogating and discussing it, bringing it into relationship with
other forms of knowledge, and reviewing their practice in its light. In the case of impact via
policy, it is unlikely that practitioners read policy documents, let alone the papers cited in
the policy documents, so are unlikely to understand the basis for research findings or the
quality of the research, or to form a view as to whether their current practice would be
improved by adopting research recommendations. The same is true of research that informs
technologies such as educational programmes, resources, or services. Although there is a
growing literature around research use by teachers (including Special Issues of Journal of
Education for Teaching, European Journal of Teacher Education, and Educational Research),
what is overlooked is that most research use, as seen from the researchers' perspective, is
almost certainly invisible to its end users in classrooms and places of learning. The predom-
inant use of research is ‘instrumental’ in Estabrooks’ (1999) terms but even Goldacre’s (2013)
much-criticised vision that research is 'read, understood and put into practice' is not realised
within the current ‘impact agenda’ as reflected in REF2014. Impact reaches practice, therefore,
at the level of decision making but without informing teacher thinking or organisational
learning. Educational researchers might see this as a narrow and impoverished account of
research in society but this is what is generated by the ‘impact agenda’ and made visible in
the highly rated impact case studies.
The difficulty of bridging the research–practice gap without the aid of intermediaries can
be understood with reference to Carlile’s work in the field of management studies. Carlile
(2004) found that sharing knowledge across boundaries (such as research and practice) can
be achieved relatively straightforwardly when the different communities share common
understandings and purposes, when the need for novelty is not great, and when members
of the communities understand the various differences and dependencies between them.
When these conditions are not met, transferring knowledge is difficult and requires great
effort. In education, researchers and practitioners have different goals, the pressure to gen-
erate continuous improvement produces continual reforms, teachers are not dependent on
research in order to do their jobs, and are ‘often lacking the skills, resources or motivation
to use evidence to innovate their practice’ (Brown & Zhang, 2016, p. 782). In this situation,
sharing knowledge is highly effortful. In contrast, the boundary between research and
intermediaries—the providers of policies and technologies—is easier to cross. In order to
gain advantage in a competitive market, the providers of policies and technologies require
novelty and research can assist by demonstrating why ‘new’ might be better than ‘old’. Having
become integrated into a policy or technology, the boundary between this and practice is
easy to cross because schools have a high dependency on both. In summary, research knowl-
edge can cross boundaries with intermediaries relatively easily because they share similar
interests and have at least some dependence on each other. The boundary between research
and practice is more difficult to cross, which explains why no ICS claims to have influenced
practice without intermediary support.
Nevertheless, there are missed opportunities here. Instead of inspiring discussion among
practitioners, educational researchers have mostly been concerned to influence policies,
resources, and services. The capacity of research to contribute to the intellectual
work of teaching has been submerged. In part, this situation is constructed by the REF criteria
that focus on ‘reach’, encouraging dissemination to large numbers and prioritising breadth
over depth of engagement. In part, it might reflect the fact that conceptual engagement
with research is risky and can lead to practitioners rejecting research. It can also be difficult
to track and impossible to quantify.
The invisibility of research has two consequences for teachers. First, the conceptual ben-
efits of engaging with research, described above, are not realised. Teachers use research but,
because the research is invisible to them, they are not able to critique it, to sharpen their
thinking, to focus on important questions, and to review their practice in the light of research.
Second (and perhaps more importantly) teachers have no opportunity to shape educational
research. Research continues to ask questions that might be important to the providers of
policy, resources, and services, but might be of limited interest to the teachers who shoulder
responsibility for actually implementing policy in teaching and learning interactions.
Furthermore, the implied power structure—research tells powerful others what is wrong
with education, they provide the means for teachers to act, and the teachers carry out the
actions—does nothing to improve teachers' professional autonomy (Cochran-Smith & Lytle,
1999; Lingard & Renshaw, 2010).
Across the English-speaking world there is a long tradition of teachers using research to
form communities of inquiry, to critique their teaching together, and to ground their thinking
in both values and evidence (e.g. Carr & Kemmis, 1986; Cochran-Smith & Lytle, 1999; Elliott,
1991; Feldman, 2016; Noffke, 1997; Stenhouse, 1975; Whitehead & McNiff, 2006). There is
potential for this tradition to be further enriched when teachers choose, engage with, and
critique published educational research as a means for guiding their own inquiries (Cain,
2015; Mockler & Groundwater-Smith, 2017; Wiliam et al., 2004). Certainly, the case for
research-informed teaching, set out in Goldacre (2013) and accepted by the English govern-
ment, views teachers as conscious users of research. If policy-makers continue to believe in
this as an achievable goal, it will be necessary to give more thought as to how it can be
achieved. Perhaps funding might be available for schools or networks of schools to commis-
sion research from universities. Perhaps more weight could be given to collaborative research,
with research questions, design, analysis, and dissemination being shared more or less
equally between universities and schools. Perhaps the ‘impact’ criteria could take more
account of research that treats practitioners as partners with researchers in the research
utilisation process, rather than as consumers. One thing seems clear: unless there are changes,
the majority of educational research impact will remain largely invisible at the point of use.

Disclosure statement
No potential conflict of interest was reported by the authors.

Notes on contributors
Tim Cain had an extensive career teaching music in secondary schools before becoming a teacher
educator in universities including Kingston University, Bath Spa University and the University of
Southampton. He moved to Edge Hill University in 2011 as Professor in Education. He directs the
research centre for Schools, Colleges and Teacher Education (SCaTE) and teaches research methods on
undergraduate and postgraduate programmes. His research interests centre around music education,
teacher research and research utilisation by teachers. His work in this area has appeared in Croatian,
Dutch, German, Italian, Spanish, and Slovene publications.
David Allan is a lecturer on the PGCE in Further Education and Training at Edge Hill University. His
research interests have focused on disaffection with learning, student voice, attitudes to learning, and
research impact. He is particularly interested in the Bourdieusian concepts of habitus and capital, and
how school structures can impact on learning. At present, David is principal investigator for a research
project - involving over forty schools in the north-west of England - that is exploring the use of Lesson
Study as a tool for empowering disengaged students.

References
Anwaruddin, S. M. (2015). Teachers’ engagement with educational research: Toward a conceptual
framework for locally-based interpretive communities. Education Policy Analysis Archives, 23(40),
1–25.
Archer, M. S. (1995). Realist social theory: The morphogenetic approach. Cambridge: Cambridge University
Press.
Baudrillard, J. (1996). The system of objects. (J. Benedict, Trans). London: Verso.
Behrstock, E., Drill, K., & Miller, S. (2009). Is the supply in demand? Exploring how, when, and why teachers
use research. Naperville, IL: Learning Point Associates.
Beycioglu, K., Ozer, N., & Ugurlu, C. T. (2010). Teachers’ views on educational research. Teaching and
Teacher Education, 26, 1088–1093.
Biesta, G. (2007). Why ‘what works’ won’t work: Evidence-based practice and the democratic deficit in
educational research. Educational Theory, 57, 1–22.
Biesta, G. J. (2010). Why ‘what works’ still won’t work: From evidence-based education to value-based
education. Studies in Philosophy and Education, 29, 491–503.
Biesta, G., Allan, J., & Edwards, R. (2011). The theory question in research capacity building in education:
Towards an agenda for research and practice. British Journal of Educational Studies, 59, 225–239.
Borg, S. (2009). English language teachers’ conceptions of research. Applied Linguistics, 30, 358–388.
Bristow, D., Carter, L., & Martin, S. (2015). Using evidence to improve policy and practice: The UK what
works centres. Contemporary Social Science, 10, 126–137.
Brown, C. (2015). Evidence-informed policy and practice in education: A sociological grounding. London:
Bloomsbury Publishing.
Brown, C., Daly, A., & Liou, Y. H. (2016). Improving trust, improving schools: Findings from a social
network analysis of 43 primary schools in England. Journal of Professional Capital and Community,
1, 69–91.
Brown, C. & Zhang, D. (2016). Is engaging in evidence-informed practice in education rational? What
accounts for discrepancies in teachers’ attitudes towards evidence use and actual instances of
evidence use in schools? British Educational Research Journal, 42, 780–801.
Cain, T. (2015). Teachers’ engagement with research texts: Beyond instrumental, conceptual or strategic
use. Journal of Education for Teaching, 41, 478–492.
Cain, T. (2016). Denial, opposition, rejection or dissent: Why do teachers contest research evidence?
Research Papers in Education, ahead of print.
Cain, T. & Cursley, J. (2017). Teaching music differently. Abingdon: Routledge.
Carlile, P. R. (2004). Transferring, translating, and transforming: An integrative framework for managing
knowledge across boundaries. Organization Science, 15, 555–568.
Carr, W. (2006). Education without theory? British Journal of Educational Studies, 54, 136–159.
Carr, W. & Kemmis, S. (1986). Becoming critical: Education, knowledge and action research. London: Falmer
Press.
Charmaz, K. (2006). Constructing grounded theory: A practical guide through qualitative analysis. London:
Sage.
Cochran-Smith, M. & Lytle, S. L. (1999). Relationships of knowledge and practice: Teacher learning in
communities. Review of Research in Education, 24, 249–305.
Colley, H. (2014). What (a) to do about ‘impact’: A Bourdieusian critique. British Educational Research
Journal, 40, 660–681.
Dagenais, C., Lysenko, L., Abrami, P. C., Bernard, R. M., Ramde, J., & Janosz, M. (2012). Use of research-
based information by school practitioners and determinants of use: A review of empirical research.
Evidence & Policy: A Journal of Research, Debate and Practice, 8, 285–309.
Department for Education. (2013/2014). Improving the quality of teaching and leadership. London:
DfE. Retrieved from www.gov.uk/government/policies/improving-the-quality-of-teaching-and-
leadership
Detrich, R. (1999). Increasing treatment fidelity by matching interventions to contextual variables
within the educational setting. School Psychology Review, 28, 608–616.
Dunleavy, P. (2012). REF advice note 1: Understanding HEFCE’s definition of impact. Retrieved from http://
blogs.lse.ac.uk/impactofsocialsciences/2012/10/22/dunleavy-ref-advice-1
Elliott, J. (1991). Action research for educational change. Buckingham: Open University Press.
Estabrooks, C. A. (1999). The conceptual structure of research utilization. Research in Nursing & Health,
22, 203–216.
Everton, T., Galton, M., & Pell, T. (2000). Teachers’ perspectives on educational research: Knowledge
and context. Journal of Education for Teaching: International Research and Pedagogy, 26, 167–182.
Feldman, A. (2016). An emergent history of educational action research in the English-speaking world.
In L. Rowell, C. L. Bruce, J. M. Shosh, & M. M. Riel (Eds.), The Palgrave international handbook of action
research (pp. 125–145). New York: Palgrave Macmillan.
Fielding, M. (2003). The impact of impact. Cambridge Journal of Education, 33, 289–295.
Goldacre, B. (2013). Building evidence into education. London: Department for Education. Retrieved
from www.gov.uk/government/news/building-evidence-into-education.
Hammersley, M. (2002). Educational research, policymaking and practice. London: Paul Chapman.
Handal, G. & Lauvas, P. (1987). Promoting reflective teaching: Supervision in action. Maidenhead: Open
University Press.
Higher Education Funding Council for England. (2011). Assessment framework and guidance on submissions.
Bristol: HEFCE.
Honig, M. I. & Coburn, C. (2008). Evidence-based decision making in school district central offices:
Toward a policy and research agenda. Educational Policy, 22, 578–608.
Ion, G. & Iucu, R. (2014). Professionals’ perceptions about the use of research in educational practice.
European Journal of Higher Education, 4, 334–347.
James, M. (2013). New (or not new) directions in evidence-based practice in education. Retrieved from
www.bera.ac.uk/wp-content/uploads/2014/02/Mary-james-New-or-not-new-directions-in-
evidence-based-policy.-Response-to-Ben-Goldacre.pdf
Kelly, A. (2016). Funding in English universities and its relationship to the Research Excellence
Framework. British Educational Research Journal, 42, 665–681.
Knowles, C. & Burrows, R. (2014). The impact of impact. Etnográfica, 18, 237–254.
Korthagen, F. A. J. (2007). The gap between research and practice revisited. Educational Research and
Evaluation, 13, 303–310.
Korthagen, F. & Vasalos, A. (2005). Levels in reflection: Core reflection as a means to enhance professional
growth. Teachers and Teaching, 11, 47–71.
Landry, R., Amara, N., & Lamari, M. (2001). Climbing the ladder of research utilization: Evidence from
social science research. Science Communication, 22, 396–422.
Lingard, B. & Renshaw, P. (2010). Teaching as a research-informed and research-informing profession. In
A. Campbell & S. Groundwater-Smith (Eds.), Connecting inquiry and professional learning in education:
International perspectives and practical solutions (pp. 26–39). Abingdon: Routledge.
Martin, B. R. (2011). The Research Excellence Framework and the ‘impact agenda’: Are we creating a
Frankenstein monster? Research Evaluation, 20, 247–254.
Matusov, E. (2017). Examining how and why to engage practitioners from across the learning landscape
in the research enterprise: Proposal for phronêtic research on education. Integrative Psychological
and Behavioral Science, 51, 94–119.
McKibbin, R. (2010). Good for business. London Review of Books, 32(4), 9–10.
Mockler, N. & Groundwater-Smith, S. (2017). Teacher research: A knowledge-producing profession? In
P. Grootenboer, C. Edwards-Groves, & S. Choy (Eds.), Practice theory perspectives on pedagogy and
education: Praxis, diversity and contestation (pp. 215–230). Singapore: Springer Nature.
Nisbet, J. D. & Broadfoot, P. (1980). The impact of research on policy and practice in education. Aberdeen:
Aberdeen University Press.
Noffke, S. E. (1997). Professional, personal, and political dimensions of action research. Review of
Research in Education, 22, 305–343.
Nonaka, I. & Takeuchi, H. (1995). The knowledge-creating company: How Japanese companies create the
dynamics of innovation. Oxford: Oxford University Press.
Nutley, S. M., Walter, I., & Davies, H. T. (2007). Using evidence: How research can inform public services.
Bristol: Policy Press.
Oancea, A. & Pring, R. (2009). The importance of being thorough: On systematic accumulations of ‘what
works’ in educational research. In D. Bridges, P. Smeyers, & R. Smith (Eds.), Evidence-based education
policy: What evidence? What basis? Whose policy? (pp. 11–35). Chichester: Wiley-Blackwell.
Stenhouse, L. (1975). An introduction to curriculum research and development. London: Heinemann.
Watermeyer, R. (2014). Issues in the articulation of ‘impact’: The responses of UK academics to ‘impact’
as a new measure of research assessment. Studies in Higher Education, 39, 359–377.
Watermeyer, R. (2016). Impact in the REF: Issues and obstacles. Studies in Higher Education, 41, 199–214.
Weiss, C. H. (1979). The many meanings of research utilization. Public Administration Review, 39, 426–431.
Whitehead, J. & McNiff, J. (2006). Action research: Living theory. London: Sage.
Wiliam, D., Lee, C., Harrison, C., & Black, P. (2004). Teachers developing assessment for learning: Impact
on student achievement. Assessment in Education, 11, 49–65.
Winch, C., Oancea, A., & Orchard, J. (2015). The contribution of educational research to teachers’
professional learning: Philosophical understandings. Oxford Review of Education, 41, 202–216.
Zeichner, K. M. (1995). Beyond the divide of teacher research and academic research. Teachers and
Teaching, Theory and Practice, 1, 153–172.
