
This article was downloaded by: [Michigan State University]

On: 24 December 2014, At: 15:48


Publisher: Routledge
Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered
office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK

Higher Education Research & Development
Publication details, including instructions for authors and subscription information:
http://www.tandfonline.com/loi/cher20

Systematic review methodology in higher education

Margaret Bearman (a), Calvin D. Smith (b), Angela Carbone (c), Susan Slade (c), Chi Baik (d), Marnie Hughes-Warrington (e) and David L. Neumann (f)

(a) HealthPEER, Monash University, Melbourne, Australia
(b) Griffith Institute for Higher Education, Griffith University, Gold Coast, Australia
(c) Office of the Pro-Vice Chancellor (Learning and Teaching), Monash University, Melbourne, Australia
(d) Centre for the Study of Higher Education, University of Melbourne, Melbourne, Australia
(e) Australian National University, Canberra, Australia
(f) School of Applied Psychology, Griffith University, Brisbane, Australia

Published online: 09 Oct 2012.

To cite this article: Margaret Bearman, Calvin D. Smith, Angela Carbone, Susan Slade, Chi Baik, Marnie Hughes-Warrington & David L. Neumann (2012) Systematic review methodology in higher education, Higher Education Research & Development, 31:5, 625-640, DOI: 10.1080/07294360.2012.702735

To link to this article: http://dx.doi.org/10.1080/07294360.2012.702735

Higher Education Research & Development, Vol. 31, No. 5, October 2012, 625–640

Systematic review methodology in higher education

Margaret Bearman (a)*, Calvin D. Smith (b), Angela Carbone (c), Susan Slade (c), Chi Baik (d), Marnie Hughes-Warrington (e) and David L. Neumann (f)

(a) HealthPEER, Monash University, Melbourne, Australia; (b) Griffith Institute for Higher Education, Griffith University, Gold Coast, Australia; (c) Office of the Pro-Vice Chancellor (Learning and Teaching), Monash University, Melbourne, Australia; (d) Centre for the Study of Higher Education, University of Melbourne, Melbourne, Australia; (e) Australian National University, Canberra, Australia; (f) School of Applied Psychology, Griffith University, Brisbane, Australia

Systematic review methodology can be distinguished from narrative reviews of the literature through its emphasis on transparent, structured and comprehensive approaches to searching the literature and its requirement for formal synthesis of research findings. There appears to be relatively little use of the systematic
research findings. There appears to be relatively little use of the systematic
review methodology within the higher education sector. This paper outlines the
systematic review methodology, including variations, explores debates regarding
systematic reviews from the educational literature and describes particular issues
for its application within higher education. We conclude that thoughtful use of
the systematic review methodology may be of benefit to the sector.
Keywords: higher education; research methodologies; research synthesis
methodologies; systematic review

Introduction
Systematic review methodology is a protocol-driven and quality-focused approach to
summarising healthcare evidence. Over two decades old, this methodology has revolu-
tionised healthcare delivery, funding and research and is strongly associated with the
now ubiquitous phrase ‘evidence-based practice’. The methodological standards are
supported by the work of an international not-for-profit organisation called the
Cochrane Collaboration, which ‘prepares, maintains and promotes systematic
reviews to inform healthcare decisions’ (Cochrane Collaboration, n.d.).
Evans and Benefield (2001) called for the use of systematic reviews within edu-
cation practice and policy, illustrating the debate with a description of their own sys-
tematic review on interventions to support primary students with emotional and
behavioural difficulties. In reflecting on their experience, they describe their work as
the equivalent of any significant research project and conclude that systematic
review is an excellent mechanism for revealing the gaps in current research and
providing direction for future funding decisions.
Since the late-1990s, the systematic review has been the literature review method-
ology of choice for certain sectors of the educational research community, led by organ-
isations such as the Campbell Collaboration, The Evidence for Policy and Practice

*Corresponding author. Email: margaret.bearman@monash.edu

ISSN 0729-4360 print/ISSN 1469-8366 online


© 2012 HERDSA
http://dx.doi.org/10.1080/07294360.2012.702735
http://www.tandfonline.com
Information and Co-ordinating Centre (EPPI-Centre) and Best Evidence Medical Edu-
cation (BEME). Medical, nursing and other health educational researchers, in particu-
lar, consider systematic review as a key methodology. For example, in 1999 the
prestigious Journal of the American Medical Association (JAMA) published a systema-
tic review on the effects of continuing medical education methodologies on doctor
behaviours or patient outcomes (Davis et al., 1999). The regard in which this JAMA systematic review is held is broadly indicated by the 849 publications citing it, according to a 2011 citation search on the Web of Knowledge. Examples of other
more recent systematic review topics in the health professions include high fidelity
simulation (Issenberg, Mcgaghie, Petrusa, Lee Gordon, & Scalese, 2005), reflective
practice (Mann, Gordon, & MacLeod, 2009) and workplace assessment (Miller &
Archer, 2010). In other parts of the educational research community there has been a
less warm response. In particular, there is a long-running debate about the relative
value of a methodology that has its foundations in the positivist paradigm, especially
that built upon quantitative measurement of efficacy (Clegg, 2005; Denzin, 2009;
Hammersley, 2001).
An examination of the higher education literature indicates the paucity of systematic
reviews published within higher education journals outside of the health professional
education domain. An Education Resources Information Center (ERIC)
search for the term ‘systematic review’ or ‘systematic literature review’, coupled
with synonyms for ‘higher education’, yielded 16 peer-reviewed journal articles, of
which five were systematic reviews in higher education (Gough, Kiwan, Sutcliffe,
Simpson, & Houghton, 2003; Jackson & Kile, 2004; McGrail, Rickard, & Jones, 2006;
Spelt, Biemans, Tobi, Luning, & Mulder, 2009; Trede, Macklin, & Bridges, 2012). The
limited use of systematic review within the higher education field is an interesting phenom-
enon as it is a methodology well used in other educational research sectors. As Evans and
Benefield (2001) note, systematic reviews are time-consuming, but this does not seem in
itself a sufficient hurdle.
This paper critically reflects on the recent and potential use of systematic review in
higher education research, a discussion overdue in the tertiary sector. We provide a
detailed explanation of the method, explore the controversies and discuss how systema-
tic review might best be applied within higher education research.

Literature reviews: themes and variations


In the higher education literature, the term ‘systematic review’ is used loosely. For
example, Konur (2007) refers to a ‘systematic review of four major anti-discriminatory
laws’ within a study of disabled students using computer-assisted learning. In this
instance, ‘systematic review’ refers simply to a methodical examination of a particular
set of documents – such a usage is indicative of the non-technical use of the term.
At the broadest level, a systematic approach to the literature can be distinguished
from a narrative review in that it uses a structured system of inquiry to find and
review publications. However, the term ‘systematic review’ is increasingly used to
refer to a specific type of literature review associated with a specific methodology. Lit-
erature reviews can be categorised as either ‘narrative reviews’ or ‘systematic reviews’
(Petticrew, 2001). It may be helpful to think of two types of narrative review. The first is
the traditional ‘narrative’ or ‘critical’ literature review, which presents a particular per-
spective on the literature, framed entirely through the perspective of the author. The
second is essentially a narrative approach, with some systematised elements, such as
reproducible searching strategies and a systematic presentation of the studies. These types
of narrative literature reviews will be familiar to most readers as these are usually found in
theses and less commonly published as papers. Likewise, systematic reviews are not all
the same. The standard definition of a systematic review is a literature review that uses a
specific methodology to produce a synthesis of available evidence in answer to a focused
research question. It is also worth distinguishing the Cochrane (in healthcare) and Campbell (in education, justice and social welfare) systematic reviews, which are derived from the Cochrane protocol for conducting systematic reviews. The Cochrane/
Campbell systematic review uses the general systematic review methodology, but
includes a range of other methodological choices, such as independent reviewers and a
specific protocol. As these review types are not as well known in higher education, it
is worth explaining these methodologies more closely.
The Cochrane handbook for systematic reviews of interventions (Higgins & Green,
2011) outlines key features of all systematic review methodologies. It states that a
systematic review has:

• a clearly stated set of objectives with pre-defined eligibility criteria for studies
• an explicit, reproducible methodology
• a systematic search that attempts to identify all studies that would meet the eligibility criteria
• an assessment of the validity of the findings of the included studies, for example through the assessment of risk of bias
• a systematic presentation, and synthesis, of the characteristics and findings of the included studies.

David Gough (2007b), Director of the EPPI-Centre, describes a nine-phase process for
systematic reviews:

(1) establishing the review question
(2) defining inclusion and exclusion criteria
(3) articulating the search strategy, including information sources
(4) screening the articles to see if they meet the inclusion and exclusion criteria
(5) reporting the results of the search strategy, usually through a flowchart
(6) extracting relevant data from included studies
(7) assessing the methodological quality or rigour of the included studies
(8) synthesising, either quantitatively or qualitatively, the collective evidence of the
included studies
(9) drawing conclusions and communicating these findings in a manner which is
relevant to readership.
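The screening step (phases 2 and 4 above) can be illustrated in code. The following is a hypothetical sketch, not drawn from the article or any of the reviews it discusses: pre-defined inclusion and exclusion criteria are applied mechanically to candidate records, with the criteria, field names and cutoff year all invented for the example.

```python
# Hypothetical sketch of phases 2 and 4: pre-defined inclusion/exclusion
# criteria applied to candidate records. All criteria and field names
# here are illustrative assumptions, not the article's.
CUTOFF_YEAR = 1985  # example eligibility threshold

def screen(record):
    """Return True if a record meets the illustrative criteria."""
    if record["year"] < CUTOFF_YEAR:
        return False  # exclusion criterion: published before cutoff
    if not record["peer_reviewed"]:
        return False  # exclusion criterion: not peer reviewed
    # inclusion criterion: topic match in the abstract
    return "higher education" in record["abstract"].lower()

candidates = [
    {"year": 2004, "peer_reviewed": True,
     "abstract": "An intervention study in Higher Education settings."},
    {"year": 1979, "peer_reviewed": True,
     "abstract": "Funding models in higher education."},
    {"year": 2010, "peer_reviewed": False,
     "abstract": "Higher education teaching quality."},
]
included = [r for r in candidates if screen(r)]
print(len(included), "of", len(candidates), "records included")  # prints: 1 of 3 records included
```

The point of making criteria explicit and executable in this way is exactly the reproducibility that the methodology demands: a second reviewer applying the same rules should reach the same inclusion decisions.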

It is worth noting that the synthesis phase frequently uses formal methods. Meta-analy-
sis is commonly used with quantitative data, particularly experimental data, to statisti-
cally estimate the combined result of individual studies. Qualitative synthesis can use a
range of different methods such as meta-ethnography, adapted grounded-theory tech-
niques and critical interpretive synthesis (Barnett-Page & Thomas, 2009).
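The quantitative synthesis step can be made concrete with a minimal sketch of fixed-effect, inverse-variance meta-analysis, one common way to statistically estimate the combined result of individual studies. The effect sizes and variances below are invented for illustration and are not taken from any review discussed here.

```python
# Minimal sketch (with invented data) of fixed-effect meta-analysis:
# each study's effect size is weighted by the inverse of its variance,
# so more precise studies contribute more to the pooled estimate.
import math

def pool_fixed_effect(effects, variances):
    """Combine per-study effect sizes using inverse-variance weights."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))             # standard error of pooled effect
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)  # 95% confidence interval
    return pooled, ci

# Three illustrative studies: standardised effect size and sampling variance
effects = [0.42, 0.55, 0.30]
variances = [0.04, 0.09, 0.02]
pooled, (lo, hi) = pool_fixed_effect(effects, variances)
print(f"pooled d = {pooled:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
# prints: pooled d = 0.37, 95% CI [0.16, 0.58]
```

Real meta-analyses add further machinery (heterogeneity statistics, random-effects models), but the weighted-average logic above is the core of the "combined result" the text refers to.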
If the above nine phases are part of all systematic reviews, what are the distinguish-
ing features of a Campbell or Cochrane type of systematic review? To answer this
question, it is first important to understand the role of these not-for-profit organisations
in the development of a culture of systematic review. In healthcare, the Cochrane
Collaboration is dedicated to the conduct of systematic reviews to deliver evidence for healthcare practice and provides a central role in standardising the methodology. In the
social sciences, the Campbell Collaboration, an international research network that pro-
duces systematic reviews in education, justice and social welfare, has modelled itself on
the Cochrane Collaboration. Both organisations are founded on the same principles:
working collaboratively, promoting enthusiasm, avoiding duplication, minimising
bias, keeping reviews and other outputs up-to-date, relevant and accessible, continuing
quality development, provision of continuity of governance, and promoting partici-
pation in the collaboration (Campbell Collaboration, n.d.; Cochrane Collaboration,
n.d.). The Campbell or Cochrane Collaboration systematic reviews differ from other
systematic reviews by:

• expanding the systematic search to unpublished reports to avoid publication bias, such as where results that support the original research hypotheses are more likely to be accepted into peer-reviewed journals (the so-called ‘file drawer problem’)
• having a team that usually spans national boundaries
• following a protocol (project plan) that is developed a priori and peer reviewed
• applying inclusion criteria, data extraction and quality assessment that are undertaken independently by at least two reviewers
• being subjected to peer review and editorial review from within the Campbell or Cochrane Collaboration.

See Table 1 for a comparison of systematic and narrative review methodologies. It is worth noting that these are broad categories and individual literature reviews may
contain elements of each approach. It may be that for some research questions the systematic review methodology is highly appropriate, while for others a narrative review may provide better answers. These, and other forms of presenting
and synthesising previous literature, can be conceived of as complementary rather
than competing.

Recent use of systematic review in higher education


The systematic review methodology is difficult to understand in abstract, particularly
given the variation in its application. We have provided three examples of relatively
recent systematic reviews in higher education within Table 2, describing how each
study has addressed each of the nine canonical phases described by Gough (2007b).
These examples show both the potential of systematic review (because the synthesised
conclusions have value for practitioners and policy-makers alike), as well as the varia-
bility of the systematic review methodology.
See Table 3 for the results of a scoping exercise to explore the extent of the systema-
tic review methodology within the higher education sector. Within this brief search of
the literature, we used an inclusive definition of ‘systematic review’ to incorporate
those reviews that mostly, although not definitively, followed the general nine-phase
process, and a definition of ‘higher education’ that excluded the vocational education
sector or school student attitudes towards higher education. We searched the ERIC database on title and abstract; a range of predominant journals, on full text, that we believed to be common references for higher education researchers; and the Campbell Collaboration and EPPI-Centre web libraries. Once duplicates were removed, we
Table 1. Comparing narrative and systematic literature review types.

Narrative review (traditional narrative or critical review):
  Review question (Phase 1): often general discussion or critical focus.
  Inclusion-exclusion criteria (Phase 2): no.
  Search strategy (Phase 3): not always described; focus tends to be significant publications.
  Screening method and results of search (Phases 4 and 5): no.
  Data extraction processes (Phase 6): no.
  Assessment of methodological quality and rigour (Phase 7): depends on individual researchers.
  Synthesis and conclusions (Phases 8 and 9): framed through expertise of individual researchers.

A literature review that employs some systematic techniques (narrative type):
  Review question: more focused than narrative review.
  Inclusion-exclusion criteria: no.
  Search strategy: sometimes provided; focus tends to be significant publications.
  Screening method and results of search: usually not transparent.
  Data extraction processes: not done or not reported.
  Assessment of methodological quality and rigour: rarely.
  Synthesis and conclusions: framed through expertise of individual researchers.

‘Non-Cochrane’ systematic review:
  Review question: clear research question.
  Inclusion-exclusion criteria: yes.
  Search strategy: yes; transparent and seeks to find all relevant literature.
  Screening method and results of search: yes, transparent.
  Data extraction processes: yes, transparent.
  Assessment of methodological quality and rigour: yes, using quality assessment tools.
  Synthesis and conclusions: use of formal synthesis methodologies, such as statistical analysis and meta-analysis.

Campbell/Cochrane systematic review:
  Review question: clear research question.
  Inclusion-exclusion criteria: yes.
  Search strategy: yes; transparent and seeks to find all relevant literature, including grey/unpublished literature.
  Screening method and results of search: yes; transparent and requires two or more independent reviewers to screen.
  Data extraction processes: yes; transparent and requires two or more independent reviewers.
  Assessment of methodological quality and rigour: yes, using quality assessment tools.
  Synthesis and conclusions: use of formal synthesis methodologies, such as statistical analysis and meta-analysis; inclusion of a ‘plain language’ summary.

Table 2. Three systematic reviews in higher education: processes and outcomes across the nine phases of an educational research systematic review methodology.

McGrail et al. (2006) Publish or perish: A systematic review of interventions to increase academic publication rates. Higher Education Research & Development.
  Review question (Phase 1): to review published literature that reports the effectiveness of measures designed to promote publication.
  Inclusion-exclusion criteria (Phase 2): inclusions: (1) reported implementation of a specific structured intervention with the aim of increasing publication rates; (2) the target group were academics or professionals involved in academic work; (3) the article provided data that assessed the effectiveness of the intervention. No exclusions listed.
  Search strategy (Phase 3): databases: Medline, CINAHL, ERIC, PsycINFO and Web of Science, 1984–2004. Search terms provided in general terms. Reference lists of articles also searched.
  Screening method and results of search (Phases 4 and 5): 17 articles met the criteria. Reasons for exclusion of articles given. No flow chart was provided. No comment regarding use of independent researchers to screen.
  Data extraction processes (Phase 6): full data extraction process not specifically articulated. Two independent reviewers extracted data regarding: number and type of participants; model of intervention (writing support groups, structured writing courses and provision of a writing coach); evaluation methods; impact on publication rate, calculated as publications per person per year, pre- and post-intervention; other impacts.
  Assessment of methodological quality and rigour (Phase 7): quality of papers described narratively; quality assessment scale or methodology not specifically articulated.
  Synthesis and conclusions (Phases 8 and 9): described positive effects on publication rates and other positive effects. Concludes that the evidence is limited but that all three models were beneficial, with a trend for support groups to be superior and structured writing courses to be less beneficial.

Cook, Erwin, & Triola (2010) Computerized virtual patients in health professions education: A systematic review and meta-analysis. Academic Medicine.
  Review question: how effective are ‘virtual patients’ in comparison with no intervention and alternate instructional methods, and what virtual patient design features are associated with higher learning outcomes?
  Inclusion-exclusion criteria: inclusions: (1) any health profession at any stage of learning; (2) learners must emulate the roles of health care providers via a computer. Exclusions: non-interactive cases presented on a computer; simulation delivered by means other than a computer; non-clinical simulations.
  Search strategy: databases: MEDLINE, EMBASE, CINAHL, ERIC, PsychINFO, Scopus, and the University of Toronto Research and Development Resource Base, all dates up until February 16, 2009. Additional studies identified by searching reference lists of all included articles. Search terms provided in general terms plus reference to a digital file with the complete list.
  Screening method and results of search: reviewers worked independently and in duplicate to screen for inclusion. Methodology provided for resolving disagreement. Interrater agreement calculated using the intraclass correlation coefficient (ICC). 48 articles from 698 original search yield. Flowchart provided.
  Data extraction processes: full data extraction only for comparative studies and rigorous qualitative studies. Data abstraction conducted by two or more independent reviewers; ICC calculated. Data included: study type; educational level; topic; aspects of interface and interaction; individual or group learning experience; type of outcome evaluated. A range of further data were extracted as relevant, e.g. comparison type, themes for qualitative studies.
  Assessment of methodological quality and rigour: quality assessment procedure described for comparative studies but not for ‘rigorous’ qualitative studies. Quality indicators included: representativeness of the intervention group; selection of the comparison group; and comparability of cohorts.
  Synthesis and conclusions: meta-analysis of comparative studies and pooling of themes from qualitative studies. Pooled effect sizes (ES) for studies of virtual patients compared with no intervention were large (>= 0.8) for a range of learning outcomes, with confidence intervals (CI) excluding small effects (< 0.5). Pooled ES for studies of virtual patients in comparison with non-computer instruction were small (0.17–0.10) with non-significant CI. Common themes identified from qualitative studies point to common experiences and possible design considerations.

Canadian Council on Learning (2006) How is quality in post-secondary education measured?
  Review question: how is quality in post-secondary education measured?
  Inclusion-exclusion criteria: inclusions: (1) post-1985, post-secondary education in OECD; (2) studies must identify the indicators used to measure quality or quality indicators, and analyse the tools or methods used to measure quality of post-secondary education as a whole. Exclusions not listed.
  Search strategy: databases: Academic Search Premier, Education Index Full Text, CBCA Business, CBCA Education, EconLit, ERIC, and Business Source Premier. Key journals handsearched. Grey literature and Internet searched. Bibliographies of papers searched.
  Screening method and results of search: 39 articles met the criteria. Flowchart provided.
  Data extraction processes: data extraction included: author, country, year of publication, population, study design, analysis methods, indicators. Indicators were further coded into: inputs (student, faculty and institution), outputs (learning environment, research and development), final outcomes (graduates, institution), measurement tools (direct assessment, ranking, actuarial data) and stakeholders. All studies were categorised according to a model of quality measurement as well as by type of study design.
  Assessment of methodological quality and rigour: developed a scoring rubric, which rated studies from ‘limited’ to ‘exemplary’ quality. Five dimensions of quality were assessed: content; data collection and data sources; sampling strategies; methodology and analysis; and the evidence provided to support the reliability and validity of quality measures.
  Synthesis and conclusions: synthesis takes the form of presenting the commonly considered indicators. The authors provide conclusions regarding the optimal methods to measure quality and describe the implications of these findings for policy makers.
Table 3. Scoping search strategies for systematic reviews in higher education in peer-reviewed journals or in recognised systematic review centres (November 2011).

Systematic review search terms: ‘systematic review’, ‘systematic literature review’. Higher education-related search terms (ERIC database only): ‘higher education’, ‘tertiary education’, ‘post-secondary education’. The Campbell Collaboration was searched for reviews from its Education Group; the EPPI Centre for any education-related review.

For each source, the columns give: total search yield / systematic reviews in higher education excluding health professions / systematic reviews from health professions or vocational education / publications that were not systematic reviews.

  ERIC database: 16 / 3 / 4 / 9
  Higher Education Research & Development: 5 / 1 / 0 / 4
  Journal of Higher Education: 2 / 0 / 0 / 2
  Review of Higher Education: 1 / 0 / 0 / 1
  Research in Higher Education: 1 / 0 / 0 / 1
  Quality in Higher Education: 5 / 0 / 0 / 5
  International Journal of Educational Research: 7 / 0 / 0 / 7
  Studies in Higher Education: 9 / 0 / 0 / 9
  Assessment and Evaluation in Higher Education: 4 / 0 / 0 / 4
  Educational Researcher: 3 / 0 / 0 / 3
  Campbell Collaboration: 9 / 0 / 0 / 9
  EPPI Centre: 16 / 2 / 5 / 9


found six systematic reviews in higher education. These were in disparate areas of
study and included reviews of: interventions to increase academic publication rates
(McGrail et al., 2006), interdisciplinary higher education (Spelt et al., 2009), links
between administration and student outcomes (Jackson & Kile, 2004), professional
identity development (Trede et al., 2012), work-based learning (Scesa & Williams,
2008) and personal development planning (Gough et al., 2003). This is far less than
the number of educational systematic reviews of school education on the EPPI
Centre website or the number of health professional education systematic reviews on
the BEME website. Only one of our example systematic reviews in Table 2 is drawn
from this list; the other two are taken from the health professional education literature
and the ‘grey’ (not readily sourced via academic publishers) literature, respectively.
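The scoping exercise above hinges on merging yields from several sources and removing duplicates before counting. As a minimal, hypothetical sketch (the record fields and entries are invented, not the article's actual data), deduplication on a normalised title key might look like:

```python
# Hypothetical sketch of the deduplication step in a scoping search:
# yields from several databases are merged, then duplicates collapsed
# on a normalised title key before screening. Records are invented.
def dedupe(records):
    """Keep the first record seen for each normalised title."""
    seen, unique = set(), []
    for rec in records:
        # normalise: lowercase, keep only letters and digits
        key = "".join(ch for ch in rec["title"].lower() if ch.isalnum())
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

merged = [
    {"source": "ERIC", "title": "Publish or Perish: A Systematic Review"},
    {"source": "EPPI", "title": "publish or perish - a systematic review"},
    {"source": "HERD", "title": "Interdisciplinary Higher Education"},
]
print(len(dedupe(merged)))  # prints: 2
```

Title normalisation of this kind is a crude stand-in for the manual duplicate checking reviewers actually perform, but it shows why the same review surfacing in ERIC and the EPPI Centre library should be counted once.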
The value of systematic review methodology to educational research


Given the lack of penetration of systematic reviews into the higher education litera-
ture, it is important to ask what benefits there are in adhering to such a rigorous
and time-consuming literature review methodology. As the examples in Table 2 indi-
cate, systematic review allows the reader to make their own judgements as to the
quality and meaning of the evidence, as well as a judgement on the quality of the
review methodology itself. The breadth of the search allows for all research to be con-
sidered, not just the most accessed or best known work. In this way, an educator or
educational researcher can quickly scan what is known about a topic, how trustworthy
the research conclusions are and what the collective implications of the research are.
A good systematic review saves reproduction of literature searching, directs readers to
quality literature and provides a formal synthesis of the research outputs. Gough
(2007a) summarises the key benefits of systematic review to educational policy and
practice as:

• access to the literature, with its ever increasing publication base
• providing trustworthiness and accountability of the literature review (Gough argues that the methods of systematic review allow us to ‘be clear about what we know and how we know it within different ideological and theoretical positions’ (p. 35))
• revealing conceptual and value positions, through mechanisms designed to make explicit the inevitable biases, which might otherwise be ‘hidden within the discourse of the account of knowledge’ (p. 35).

Critiques of systematic review within the educational research community


In contrast to the healthcare education sector, systematic review has been heavily cri-
tiqued by some sectors of the educational research community. It is important to
examine these debates, as they likely influence the use of the systematic review meth-
odology within the tertiary sector. We argue that the majority of concerns revolve
around three interrelated themes:

(1) Positioning of the positivist paradigm. There is a body of work arguing that sys-
tematic reviews privilege certain kinds of research paradigms, either explicitly
or through the language employed to describe them (e.g. Clegg, 2005; Denzin,
2009; Hammersley, 2001; Suri & Clarke, 2009). Associated with this is the
elevation of the systematic review over other sorts of literature reviews (Ham-
mersley, 2001; Suri & Clarke, 2009). For example, Suri and Clarke (2009)
write: ‘The rhetorical effect of terms such as evidence-based practice, systema-
tic reviews, clarity, comprehensive, reliable, objectivity, and replicable not only
discredits any opposition but also has the political impact of favouring positiv-
ism’ (p. 400). These arguments stem from the notion that systematic reviews,
like any other form of research practice, are ideological in conception and in
practice. Denzin (2009) explicitly links systematic review to the rise of neolib-
eral regulatory government processes and their associated emphasis on performance measurement.
(2) Adaptation of methodological choices to a social sciences context. Within the
educational research literature concerned with the elevation of systematic
review and its underpinning ‘measurement’ principles, there is an associated
unease regarding the lack of critical interrogation of systematic review methodology. In this argument, the emphasis on establishing a priori criteria for
the inclusion of studies, the transparency of method and the emphasis on
objectivity may lead to what MacLure (2005) terms ‘clarity bordering on stu-
pidity’ (p. 1). In particular, there is a concern that forms of research that are
heavily qualitative and descriptive in nature do not mesh with underlying
principles of systematic review. Qualitative synthesis methodologies within
a systematic review must take account of highly divergent traditions, such
as phenomenography, ethnography, discourse analysis, phenomenology and
so on. A range of publications note that formal qualitative synthesis is
challenged by variant analysis methods and presentation forms when attempting to
transform what is essentially context-specific information into a more
generalisable form (Barnett-Page & Thomas, 2009; Ring, Ritchie, Mandala, &
Jepson, 2010).
(3) Practical applicability to policy and practice. Clegg (2005) describes, in the
higher education sector, the limited use of the conclusions of systematic
review. She illustrates this with the ‘sense of lack’ practitioners experience
with respect to a systematic review of personal development planning in higher
education (Gough et al., 2003). Clegg (2005) argues that a focus
on outcomes means that the review does not provide sufficient information for
educationalists regarding particular mechanisms of what works and when.

These debates are important to acknowledge, not only to avoid the rhetorical position-
ing of ‘systematic review’ as unquestioningly superior, but also to avoid throwing the
proverbial baby out with the bathwater. There is no question that systematic review
derives from a strongly positivist paradigm, with its foundations in meta-analysis of
randomised controlled trials (Cochrane Collaboration, n.d.), but it is another step
altogether to discredit the process on that basis. Of course, the unquestioning acceptance
of any methodology is not good academic practice and it is important to critically inter-
rogate methodologies of systematic review and to understand the arguments surround-
ing synthesis of divergent studies (Andrews & Harlen, 2006). Such interrogation also occurs within
the healthcare systematic review community, where using quality appraisal scales with
qualitative research is debated (Dixon-Woods et al., 2006) or, in some situations, a
focus on processes rather than outcomes is suggested (Pawson, Greenhalgh, Harvey,
& Walshe, 2005). In all instances, the concept of evidence does not exist in isolation
within an ivory tower, but has broad societal and political implications.

Potential use of systematic reviews in higher education


The broader question of the applicability of systematic review to higher education is a
complex one because of the multi-disciplinary nature of this broad field. In exploring
the potential use of systematic review in higher education research, several issues
will need to be considered, including sectoral readiness, diversity of research traditions,
the feasibility of the uptake of the processes involved and the likely benefits of uptake.
We will use the example of our own work to provide context to these matters.

Illustrative example: measuring teaching quality


Our research team has commenced a Campbell/Cochrane-style systematic review in the
area of measuring teaching quality in higher education. This review is designed to
answer the question: what is known about the measurement of teaching quality? Our
primary aims are: to define teaching quality and identify the common constructs associ-
ated with teaching quality, to establish how teaching quality is typically measured or
assessed, to establish and evaluate what is known about the reliability of
available measures/instruments, and to recommend which instrument or set of
measures to use, or whether to develop our own. In line with the systematic review process, we
have pre-defined criteria for inclusion. Examples of our criteria are: all included
studies must report on instruments, or the development of instruments, that measure,
assess, appraise, evaluate or analyse teaching, and all included studies must report in
any tertiary teaching domain. We have established comprehensive search terms, includ-
ing common synonyms for the terms measurement, instrument, teaching and higher
education, and used these to search 13 databases, including ERIC, PsycINFO,
British Education Index, Current Contents, Expanded Academic, Proquest Education
Journals, A+ Education, CBCA Education, Campbell Collaboration, Medline,
CINAHL and PubMed, together with handsearching of reference lists and contact
with content experts. The resulting 11,173 articles were reduced through a scan of titles
to remove obviously unrelated publications, with the remainder independently
reviewed by two researchers to determine whether they met the set criteria, with disagreements
resolved by consensus or additional reviewers. This time-consuming and labour-inten-
sive phase of the review has resulted in 47 included articles. We are in the process of
extracting the data from these articles and establishing their methodological quality
using standardised descriptors, with each process again performed by two independent
researchers. Undertaking such an intensive collaborative exercise has alerted us to
many of the issues surrounding systematic review, which we discuss in the following
sections.

Sectoral compatibility and readiness


Systematic review may be readily accepted within medical education because the
research approaches most aligned with systematic review epistemologies are found
within medical research, which tends towards a more uniform approach in both
method and reporting. But educational
research celebrates a range of approaches, more commensurate with its diverse tra-
ditions in critical, feminist, post-structuralist, post-modern, hermeneutic, interpretivist
and other post-positivist approaches to knowledge. This is particularly true for
higher education, where the approaches to educational research may be strongly influ-
enced by various disciplinary research cultures.

Furthermore, for systematic review to be useful in the higher education sector, it has
to be applicable to the literature in the sector. For example, theory-based publication
simply does not suit systematic review. This is also a matter of philosophical alignment,
in that practitioners and researchers within the sector must wish to base decisions upon
formal synthesis of a broad range of evidence. In our view, there are subsectors within
higher education which have both research that is suited to systematic review and prac-
titioners who are open to using the results of systematic reviews.
We return to our example of measuring teaching quality. Preliminary results indi-
cate a range of appropriate studies, both quantitative and qualitative, that provide
empirical investigations of instruments that measure teaching quality. The topic itself
is contentious in nature, with political as well as practice implications. This, in our
view, is a good area for systematic review, where the collected evidence can provide
valuable information to on-the-ground educators as well as policy makers.

Managing diversity of research traditions


As stated before, the higher education sector is multi-disciplinary and lacks a single uni-
fying paradigm that underpins research and reporting. At various junctures in the sys-
tematic review process this can lead to challenging issues. In developing the question
and search terms, for instance, there may be different understandings of the meanings of
the words. For example, in our experience, not all members of our systematic review
team interpreted the word ‘instrument’ in the same manner. Similarly,
when searching the literature databases, terminology may be highly diverse and specific
keywords may not be mentioned in title, abstract or, indeed, the entire text. The prior
established criteria for inclusion and exclusion of papers are more likely to
result in both false positives and false negatives when there is ambiguity surrounding key
terms. As previously mentioned, synthesis must work with a wide variance in study
methodologies, reporting standards and epistemologies. Due to this variance, a team
approach to systematic review is critical within higher education, along with the use
of moderation to arrive at shared understandings of inclusion/exclusion criteria.

Feasibility of processes
The systematic review processes themselves do not pose any impediment to their appli-
cation in higher education, although they are time intensive. Our experience indicates
that the moderation of the selection rules is easily achieved with a small number of col-
leagues. If the criteria and their application are unambiguous to the researchers and the
readership, the whole process will flow relatively efficiently. This highlights the
importance of the process of establishing the selection rules and other analysis criteria. In our
experience, however, with many of the team unfamiliar with the systematic review
methodology, training becomes critical.
Although the processes themselves can be followed easily enough, the required
time investments could be a barrier to the feasibility of systematic review being routi-
nely used in higher education, especially where the process is not adequately supported
with funding or where expert guidance is not readily available.

Benefits
We believe that a growing awareness and use of systematic review will be an asset to
the higher education sector. The three examples provided earlier in this paper (Table 2)
provide meaningful conclusions for policymakers and practitioners. McGrail et al.
(2006) provide clear guidance for academics wishing to increase their publication
rates, Cook et al. (2010) clearly indicate that virtual patients are of educational
benefit and suggest
likely successful design choices for this type of computer-assisted learning, while the
Canadian Council on Learning’s (2006) report provides recommendations for
universities wishing to measure the quality of their institution in meaningful ways. As alluded
to earlier, there are specific subsectors within higher education that would particularly
benefit from systematic review.
In health professional education, another benefit of systematic reviews has been the
forming of international collaborations and partnerships through large review teams in
the BEME organisation. As with the Cochrane and Campbell Collaborations, these
research networks are inclusive and assist in dissemination of research results
through engagement of reviewers. More than that, the work helps define the discipline.
For example, BEME’s reviews are on self-assessment, student portfolios, feedback,
faculty development, inter-professional learning and educational games (BEME, n.d.).
These are all key topics within health professional education and reviewers are
drawn from all over the globe. It is appealing to consider the value of such an organ-
isation exclusively focused upon higher education.
One consequence of the emerging interest in the conduct of systematic review in
higher education might be the enhancement of the quality of research and reporting.
Systematic reviews can build upon the current work of educational researchers
through identifying specific strengths and weaknesses in the literature. Systematic
review may also help researchers and practitioners simultaneously understand
and influence the overall practice and status of higher education research.

Conclusion
Systematic review is a specific, carefully defined approach to the literature review,
which has not reached its full potential in its application to higher education. Indeed,
this article addresses the possibility that many within the higher education sector
are unaware of the precise details of the systematic review process. The value of systematic
reviews is in providing a transparent, comprehensive and structured approach
to searching, selecting and synthesising the literature. There are some caveats surround-
ing application of the methodology to the diverse traditions within higher education,
which is a particularly complex overlap of a variety of disciplines. Systematic
reviews should be applied only when they can provide a valid means to summarise
the literature and it should be acknowledged that this may not be the case for all
areas of higher education research. We propose that the systematic review method-
ology, associated with a culture of collaboration, rigour and methodological debate,
has potential for the higher education sector. We believe that systematic review can
help us make choices that will assist our students to learn because of, not
despite, our efforts.

Acknowledgements
The authors wish to acknowledge the insightful comments made by Professor Richard James
and Dr Phillip Dawson on earlier versions of this paper. This work has been supported by the
Australian Learning and Teaching Council through the strategic priority projects program
(SP10-1845).

References
Andrews, R., & Harlen, W. (2006). Issues in synthesizing research in education. Educational
Research, 48(3), 287–299.
Barnett-Page, E., & Thomas, J. (2009). Methods for the synthesis of qualitative research: A criti-
cal review. BMC Medical Research Methodology, 9(1), 59.
Best Evidence Medical Education (BEME). (n.d.). Retrieved November 23, 2011, from http://
www2.warwick.ac.uk/fac/med/beme/
Campbell Collaboration. (n.d.). Retrieved November 14, 2011, from http://www.
campbellcollaboration.org/
Canadian Council on Learning. (2006). Measuring quality in post-secondary education.
Retrieved November 17, 2011, from http://www.ccl-cca.ca/NR/rdonlyres/AA910EBF-
6AE9-4DB0-BEDC-987356CF19C6/0/MeasuringQualityinPSE.pdf
Clegg, S. (2005). Evidence-based practice in educational research: A critical realist critique of
systematic review. British Journal of Sociology of Education, 26(3), 415–428.
Cochrane Collaboration. (n.d.). Retrieved November 14, 2011, from http://www.cochrane.org/
Cook, D.A., Erwin, P.J., & Triola, M.M. (2010). Computerized virtual patients in health pro-
fessions education: A systematic review and meta-analysis. Academic Medicine, 85(10),
1589–1602.
Davis, D., O’Brien, M.A.T., Freemantle, N., Wolf, F.M., Mazmanian, P., & Taylor-Vaisey, A.
(1999). Impact of formal continuing medical education. Journal of the American Medical
Association, 282(9), 867–874.
Denzin, N.K. (2009). The elephant in the living room: Or extending the conversation about the
politics of evidence. Qualitative Research, 9(2), 139–160.
Dixon-Woods, M., Bonas, S., Booth, A., Jones, D., Miller, T., Shaw, R., … Young, B. (2006).
How can systematic reviews incorporate qualitative research? A critical perspective.
Qualitative Research, 6(1), 27–44.
Evans, J., & Benefield, P. (2001). Systematic reviews of educational research: Does the medical
method fit? British Educational Research Journal, 27(5), 527–541.
Evidence for Policy and Practice Information and Co-ordinating Centre. (EPPI-Centre). (n.d.).
Retrieved November 17, 2011, from http://eppi.ioe.ac.uk/cms/Default.aspx?tabid=53
Gough, D. (2007a). Giving voice: Evidence-informed policy and practice as a democratizing
process. In M. Reiss, R. DePalma, & E. Atkinson (Eds.), Marginality and difference in edu-
cation and beyond (pp. 31–45). London: Trentham Books.
Gough, D. (2007b). Weight of evidence: A framework for the appraisal of the quality and rel-
evance of evidence. Applied and Practice-based Research, 22(2), 213–228.
Gough, D., Kiwan, D., Sutcliffe, K., Simpson, D., & Houghton, N. (2003). A systematic map
and synthesis review of the effectiveness of personal development planning for improving
student learning. London: EPPI-Centre. Social Science Research Unit, Institute of
Education, University of London, Research Evidence in Education Library.
Hammersley, M. (2001). On ‘systematic’ reviews of research literatures: A narrative reply to
Evans and Benefield. British Educational Research Journal, 27(5), 543–554.
Higgins, J.P.T., & Green S. (Eds.). (2011). Cochrane handbook for systematic reviews of inter-
ventions Version 5.1.0 [updated March 2011]. The Cochrane Collaboration, 2011. Retrieved
February 12, 2012, from www.cochrane-handbook.org
Issenberg, B.S., McGaghie, W.C., Petrusa, E.R., Lee Gordon, D., & Scalese, R.J. (2005).
Features and uses of high-fidelity medical simulations that lead to effective learning: A
BEME systematic review. Medical Teacher, 27(1), 10–28.
Jackson, J.F.L., & Kile, K.S. (2004). Does a nexus exist between the work of administrators and
student outcomes in higher education? An answer from a systematic review of research.
Innovative Higher Education, 28(4), 285–301.
Konur, O. (2007). Computer-assisted teaching and assessment of disabled students in higher
education: The interface between academic standards and disability rights. Journal of
Computer Assisted Learning, 23(3), 207–219.
MacLure, M. (2005). ‘Clarity bordering on stupidity’: Where’s the quality in systematic review?
Journal of Education Policy, 20(4), 393–416.
McGrail, M.R., Rickard, C.M., & Jones, R. (2006). Publish or perish: A systematic review of
interventions to increase academic publication rates. Higher Education Research &
Development, 25(1), 19–35.
Mann, K., Gordon, J., & MacLeod, A. (2009). Reflection and reflective practice in health pro-
fessions education: A systematic review. Advances in Health Sciences Education, 14(4),
595–621.
Miller, A., & Archer, J. (2010). Impact of workplace based assessment on doctors’ education
and performance: A systematic review. British Medical Journal, 341, c5064.
Pawson, R., Greenhalgh, T., Harvey, G., & Walshe, K. (2005). Realist review – a new method of
systematic review designed for complex policy interventions. Journal of Health Services
Research & Policy, 10(Suppl. 1), 21–34.
Petticrew, M. (2001). Systematic reviews from astronomy to zoology: Myths and misconcep-
tions. British Medical Journal, 322(7278), 98–101.
Ring, N., Ritchie, K., Mandala, L., & Jepson, R. (2010). A guide to synthesising qualitative
research for researchers undertaking health technology assessments and systematic
reviews. Retrieved September 11, 2012, from http://www.healthcareimprovementscotland.
org/programmes/clinical__cost_effectiveness/programme_resources/synthesising_research.
aspx
Scesa, A., & Williams, R. (2008). Engagement in course development by employers not tra-
ditionally involved in higher education: Student and employer perceptions of its impact.
London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of
London, Research Evidence in Education Library.
Spelt, E.J.H., Biemans, H.J.A., Tobi, H., Luning, P.A., & Mulder, M. (2009). Teaching and
learning in interdisciplinary higher education: A systematic review. Educational
Psychology Review, 21(4), 365–378.
Suri, H., & Clarke, D. (2009). Advancements in research synthesis methods: From a methodo-
logically inclusive perspective. Review of Educational Research, 79(1), 395–430.
Trede, F., Macklin, R., & Bridges, D. (2012). Professional identity development: A review of the
higher education literature. Studies in Higher Education, 37(3), 365–384.
