Evaluation and Program Planning 45 (2014) 119–126
http://dx.doi.org/10.1016/j.evalprogplan.2014.04.002

Applying complexity theory: A review to inform evaluation design


Mat Walton *
School of Health and Social Services, Massey University, New Zealand

* Correspondence to: PO Box 756, Wellington 6140, New Zealand. Tel.: +64 4 8015799x63351. E-mail address: m.d.walton@massey.ac.nz

A R T I C L E  I N F O

Article history:
Received 12 July 2013
Received in revised form 4 April 2014
Accepted 6 April 2014
Available online 13 April 2014

Keywords:
Complexity theory
Methods
Evaluation design

A B S T R A C T

Complexity theory has increasingly been discussed and applied within evaluation literature over the past decade. This article reviews the discussion and use of complexity theory within academic journal literature. The aim is to identify the issues to be considered when applying complexity theory to evaluation. Reviewing 46 articles, two groups of themes are identified. The first group considers implications of applying complexity theory concepts for defining evaluation purpose, scope and units of analysis. The second group of themes considers methodology and method. Results provide a starting point for a configuration of an evaluation approach consistent with complexity theory, whilst also identifying a number of design considerations to be resolved within evaluation planning.

© 2014 Elsevier Ltd. All rights reserved.

Over the last decade, an increasing literature has considered the implications of complexity theory or the theory of Complex Adaptive Systems (CAS) perspectives in development, health and social service policy, implementation and evaluation (Barnes, Matka, & Sullivan, 2003; Forss, Marra, & Schwartz, 2011; Haynes, 2008; Patton, 2011; Plsek & Greenhalgh, 2001; Sanderson, 2000, 2009; Stern et al., 2012; Vincent, 2012). Complexity theory is not a single coherent body of thought. Whilst complex interventions are often considered to be those with multiple objectives, strategies and components, implemented across multiple sites by multiple actors, the use of complexity in this paper refers to understanding the social systems within which interventions are implemented as complex (Shiell, Hawe, & Gold, 2008). This is what Byrne refers to as a 'complexity theory frame of reference' (2011, p. 12). A focus on the complexity of systems implies that apparently simple interventions, as well as complicated interventions, may be candidates for evaluation from a complexity perspective.

The basics of a complexity theory frame of reference are now well described in multiple publications (Byrne & Callaghan, 2014; Eppel, Matheson, & Walton, 2011; Patton, 2011; Rickles, Hawe, & Shiell, 2007; Room, 2011). Briefly, a complex system is comprised of multiple interacting actors, objects and processes defined as a system based on interest or function (Gare, 2000). Complex systems are nested, which means that some elements of a complex system may themselves be complex systems, or some elements may be shared between multiple complex systems (Byrne & Callaghan, 2014). An example could be viewing a school as a complex system, interacting with other complex systems of households, communities and the wider education sector. The interaction of components in a complex system gives rise to 'emergent' properties, which cannot be understood by examining the individual system components (Goldstein, 1999). Instead, to understand the emergent phenomenon, the system from which it emerged must be understood as a whole (Anderson, Crabtree, Steele, & McDaniel, 2005), including identifying both the elements within a system and their interaction over time. The interactions within a complex system are non-linear, with the implication that change in one component of the system may have a negligible or large effect on the system as a whole (Byrne & Callaghan, 2014). Non-linearity also means that small differences between systems may, over time, lead to quite different emergent whole system properties (Room, 2011). While schools may appear similar, the education results might be quite different. The implication of non-linear relationships is a difficulty in predicting the type and scale of system adaptations to interventions (Morçöl, 2012). The system is open to feedback from the wider environment it is operating within, meaning that systems may differ between time, social and geographic contexts (Room, 2011).

A complex system may show stability of emergent properties over time, with change suggesting a system has moved from one 'attractor state' to another. When the attractor state of a system changes, at the point of change, there are a number of possible attractor states the system could move to, within a 'phase space' (Capra, 2005; Room, 2011). While it is difficult to predict if and how a system will change in response to interventions, one target may be to understand the phase space of possible attractor states. For example, a change of government administration will often bring with it a change in ideology, which will in turn define the range of intervention options available for responding.

Again using schools as an example, evidence of unhealthy diets of children within schools impacting upon education achievement may be addressed by focussing on individual student behaviour or the school food environment. The degree to which the school environment is regulated, such as allowing or banning competitive and less healthy food options, will be partly determined by the perceived role of state versus market held by decision makers (Fleischhacker, 2007; Walton, Signal, & Thomson, 2013). However, previous decisions that may limit government action in regulating products, such as international trade agreements, will also play a role in defining possible interventions and hence the phase space of the school health system. The potentially unintended impacts of outcomes from one complex system (e.g. trade) on other complex systems (e.g. schools) result from the open boundaries of systems.

There are several challenges for evaluation implied by the understanding of complex systems described above. To summarise, the challenges posed by complex social systems for evaluation relate to uncertainty in the nature and timing of impacts arising from interventions, due to the non-linear interactions within complex systems and the 'emergent' nature of system outcomes (Dyson & Todd, 2010). There are also likely to be differing values and valuation of outcomes from actors across different parts of a complex system, making judgements of 'what worked' contested (Barnes et al., 2003). Due to the open boundaries of complex systems, there are always multiple interventions operating and interacting, creating difficulties in identifying the effects of one intervention over another (Schwartz & Garcia, 2011).

Across the existing complexity informed literature, there is little consensus regarding what the key characteristics of a complexity informed policy or programme evaluation approach should be. Questions relating to the purpose of evaluation, how evaluation questions are defined, which concepts from complexity theory are most relevant, and broad evaluation design principles need to be considered before looking at detailed method considerations. To advance consideration of these broad evaluation design considerations, this paper reviews both practical examples and theoretical discussion of evaluation approaches using a complexity theory frame of reference. The aim of the review is to identify themes to be considered in applying a complexity frame of reference to evaluation.

1. Methods

This study provides a narrative thematic review of identified academic journal literature (Dixon-Woods, Agarwal, Jones, Young, & Sutton, 2005; Mays, Pope, & Popay, 2005) related to complexity theory and evaluation. The review draws upon 46 articles in peer-reviewed journals identified from a search of bibliographic databases (including Scopus, Web of Knowledge, Social Service Abstracts, Sociological Abstracts), limited to English language. Search terms were: complexity theory or complex adaptive system or CAS or soft system or eco* system; and policy eval* or prog* eval* or policy analysis or formative eval* or process eval* or outcome eval* or impact eval* or context eval*. This search identified 214 articles. Upon review of titles and keywords, 76 articles were selected for full review. Abstracts of papers citing these 76 articles were also reviewed for inclusion. In addition, reviewers of an earlier draft of this manuscript suggested a number of journals and articles for potential inclusion, which were hand searched. Forty-six articles were included in the full review. The most common reasons for exclusion were: no discussion of evaluation methods; and not explicitly informed by complexity theory or a related systems theory.

As with complex systems themselves, the boundaries of the relevant literature are open and boundary judgements can always be contested. Search terms were selected to focus attention on articles explicitly identifying with complexity theory or CAS, rather than wider application of 'systems thinking'. The search terms also limited articles to those with an evaluation component, rather than a more general policy, organisational or social science focus. Within the focus on complexity theory, an overlap between complexity and certain system theory fields is acknowledged (Midgley, 2008; Richardson, Gregory, & Midgley, 2007). For this reason, two 'systems' rather than complexity terms were included in the search strategy. Soft systems and ecological systems were considered terms referring to specific systems approaches, but also used in a broader way to distinguish from 'hard' systems theories (Maani & Cavana, 2000). Inclusion of these two terms doubled the articles identified. Despite this approach, search results indicate that much of the literature informed by social-ecological models in health promotion and psychology, or by systems informed operational research, has been excluded but may usefully contribute ideas to a complexity informed evaluation practice. Other terms relevant to complexity theory, such as 'context', were trialled but captured many articles well outside the complexity and systems fields.

A feature of evaluation literature is the large volume of work published in books, conference proceedings or project reports. These are obviously not captured in this review, which is limited to peer-reviewed journal articles. With a focus restricted to peer-reviewed journals and explicit reference to both complexity and evaluation, there is no claim that the current review provides a definitive statement of the issues and methods associated with complexity theory in evaluation practice. However, the aim of the review is to identify common themes in the application of complexity theory, and not to provide a definitive 'state of play'.

Notes from each paper were made under the following headings: where has complexity theory been applied to policy/programme evaluation; what design and methods are associated with complexity theory; what are the reported advantages/limitations of design and methods; if an opinion or theoretical paper, what are the suggested advantages or limitations of methods; what assumptions are being made about the nature of interventions; and what (if any) impacts on the policy process are discussed? The notes grouped under each question were compared to identify what the characteristics of a complexity informed evaluation approach are, when and where such an approach is appropriate, and implications for the policy process in which the approach is applied.

Twenty-three of the 46 papers were theoretical or opinion in nature, while 23 were focussed on describing or reflecting upon application of methods to a particular policy or programme. Table 1 shows the distribution of papers by year. It can be seen that the volume of peer-reviewed journal publications that consider a complexity theory frame of reference increased from 2008. The type of article has also changed. Earlier articles included in this review tended to be of a type that considered the potential of a complexity frame of reference. More recent articles are providing examples of where complexity concepts have been applied and providing more detailed consideration of complexity consistent methods. The included papers are briefly described in Table 2.

Table 1
Publication year of articles included in review.

Year of publication    Number
2012–2013              11
2010–2011              13
2008–2009              11
2006–2007              6
2005                   5

2. Results

The following themes were identified from the reviewed literature and are discussed in detail below:

• Developing an understanding of the system.
• Attractors, emergence and other complexity concerns.
• Defining the appropriate level and unit of analysis.
• Timing of evaluations.
• Participatory methods.
• Case study and comparison designs.
• Multiple and mixed methods.
• Layering theory to guide evaluation.

2.1. Developing an understanding of the system

A complex system is made up of many parts, the interaction of which creates emergent outcomes of interest to evaluation. Several authors discuss the need to develop a picture of the system in operation to aid analysis both of interaction and of changes in the system parts. Techniques such as system dynamics (Levy et al., 2010), social network analysis (Hawe, Shiell, Riley, & Gold, 2004) and agent-based modelling (Hoffer, Bobashev, & Morris, 2009; Morell, Hilscher, Magura, & Ford, 2010) have been used to examine interactions between system elements. In developing a picture of the system, in and of itself and to guide modelling, authors have drawn upon detailed ethnographic data (Hoffer et al., 2009) and variations of a theories of change approach developed by participants to identify elements within systems under study (Blackman, Wistow, & Byrne, 2013; Mason & Barnes, 2007). Such theories of change can be utilised to inform decisions regarding system boundaries (Verweij & Gerrits, 2013), which interactions between system elements to focus upon within Agent-Based Models (Hoffer et al., 2009; Morell et al., 2010), or case conditions included within a Qualitative Comparative Analysis (Blackman et al., 2013).
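Since several of the reviewed studies (Hoffer et al., 2009; Morell et al., 2010) use agent-based modelling to examine how interactions between system elements generate emergent, non-linear outcomes, a minimal sketch may help make the idea concrete. The toy model below is illustrative only and is not drawn from any of the reviewed evaluations; the ring topology, the threshold rule and all parameter values are arbitrary assumptions.

```python
import random

def run_system(n_agents=200, seed=1, initial_adopters=8, steps=50):
    """Toy agent-based model: agents sit on a ring and adopt a practice
    once at least two of their four nearest neighbours have adopted."""
    random.seed(seed)
    adopted = [False] * n_agents
    for i in random.sample(range(n_agents), initial_adopters):
        adopted[i] = True  # the 'intervention' seeds a few sites at random
    for _ in range(steps):
        snapshot = adopted[:]  # update synchronously from a fixed snapshot
        for i in range(n_agents):
            close = sum(snapshot[(i + d) % n_agents] for d in (-2, -1, 1, 2))
            if close >= 2:  # non-linear threshold rule
                adopted[i] = True
    return sum(adopted) / n_agents

# Identical intervention applied to systems differing only in where the
# seed sites happen to fall:
for seed in (1, 2, 3):
    print(f"seed={seed}: adoption after 50 rounds = {run_system(seed=seed):.0%}")
```

Even this caricature reproduces the qualitative point made in the introduction: whether the practice saturates the whole system depends on whether any two seed sites happen to fall close together, a small difference in initial conditions that is invisible in an aggregate count of the seeds themselves.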

Table 2
Papers included in review.

Reference Short description

Adam et al. (2012) Reviews recent evaluations of health system strengthening initiatives in low and middle income countries. Conclude that there
is a need for more comprehensive evaluations of a wider range of effects, with no evaluation exploring system effects that
reflect complex adaptive systems.
Barnes et al. (2003) Describes the challenges complexity adds to evaluation utilising the Health Action Zone evaluation as an example. Discusses
the use and limitations of Theory of Change method in the face of complexity.
Blackman et al. (2010) Discusses whether the comparison of health inequalities targets and policies between England, Wales and Scotland can be
considered evaluation, as the variability between sites makes it difficult to say ‘what works’. Questions limitations of systems
perspective, highlights discourses, and suggests New Institutional Theory could be useful.
Blackman et al. (2011) Utilises Qualitative Comparative Analysis (QCA) to examine efforts to reduce health inequalities. The authors explicitly draw
upon complexity theory to view QCA as a method to compare cases for identifying combinations of conditions that contribute
to an outcome.
Blackman et al. (2013) Discusses use of Qualitative Comparative Analysis (QCA) as a complexity theory consistent evaluation methodology for
understanding causal combinations of conditions. Provides an example of applying QCA to evaluate local authority based
interventions to reduce teenage conceptions.
Boustani et al. (2010) Considers how to design, implement and evaluate a health care service innovation. Discusses RAP process for design – a
collaborative qualitative process. Identifies N of 1 RCT and time-series analysis to test applicability of RCT findings into specific
contexts. Outlines an evaluative framework designed for particular intervention that: (1) defines borders of Complex Adaptive
Systems; (2) determines outcome measures of entire system; (3) sets pre and post intervention periods; and (4) sets frequency
of performance reports.
Bristow et al. (2009) Outlines a multi-method evaluation framework that integrates qualitative and quantitative, top-down and bottom-up. Paper
has a useful introduction on trends in evaluation (including complexity of goals and design) that require pluralist approaches.
Brousselle and Lessard (2011) Discusses challenges with current economic evaluation methods, including limitations of methods themselves, and degree to
which methods are used and useful to decision makers. Suggests some innovations in economic evaluation drawing upon
complexity theory.
Burgoyne (2010) Argues for a critical realist informed complex systems approach to evaluation in action learning. Action learning seems
concerned with development within organisations, and within a complex frame, developing the skills to read complex systems
and sensibly generalise to other situations.
Byrne (2013) Provides a summary of Byrne’s critical realist complexity theory and outlines implications for understanding complex
causality through case-comparison methods.
Cabrera, Colosi, and Lobdell (2008) Discusses systems thinking in evaluation and proposes four rules to foster ‘systems thinking’. Argues that rather than learn
new methods for evaluation, we can apply systems thinking to existing methods.
Callaghan (2008) Argues for a theory driven approach to evaluation utilising complexity theory and the idea of negotiated order. Focussing on the
role of agency in creating change in a complex system, negotiated order is used to understand how policies are implemented at the
local level and their interaction with existing organisational structures, policies, power relations, professional agendas, etc.
Dattée and Barlow (2010) Evaluation of attempts to meet Accident and Emergency target in Scotland. Uses case comparison approach. Focuses upon scale
in complex systems and suggests target did not take into account whole system level, but instead only A&E subsystem, which
meant out of hospital change needed was difficult to achieve.
Dickinson (2006) Focuses on evaluation of partnerships in health and social services–identifying partnerships as important to tackling wicked
issues. Suggests that both Theory of Change and Realist Evaluation would be useful to evaluate partnerships and that Critical
Realism provides the ontological position to combine these approaches and address their weaknesses.
Dyson and Todd (2010) Discusses the use of Theory of Change approach to a complex intervention in a complex setting. Advantages and disadvantages
of method considered. Implicit implications for policy process, role of evaluators, and what is ‘‘evidence’’.
Hawe et al. (2009) Considers advantages a complexity frame could offer. Table 7.2, p. 98: a shift of focus from knowledge to capability, with
consequences for measurement; less structured ‘‘dose monitoring’’ as the means of evaluating implementation and more use
of qualitative and narrative approaches; a focus on the structures in which knowledge is embedded (for example, through the
use of social network analysis); long time frames that incorporate the possibility of system phase transitions; measurement at
multiple levels; more observation and analysis of the preintervention context and the natural change processes within it; no
reason to abandon cluster randomised trial designs, as long as interventions adhere to a recognisable theory of action and that
remains replicable across the sites.
Hawe et al. (2004) Discusses methods used for context and process evaluation within PRISM trial in Victoria, Australia. Informed by system theory
and concerned with complex interventions. Outlines using multiple methods to assess changes in context over time, feedback
and intervention adaptations.
Haynes (2008) Provides an example of a three stage method for evaluating policy system change over time. Method utilises both qualitative
and quantitative data, but is essentially a qualitative analysis. Identifies phase shifts in policy systems over long time period.
Hoffer et al. (2009) Develops an Agent Based Model of a heroin market which draws upon an ethnographic case study. Model is used to answer two
research questions that seek to determine how two groups within the market affect the market's overall operation.
Israel and Wolf-Branigin (2011) Suggests that Agent Based Models are a useful addition to decision making regarding future programme directions. But need
evaluation prior to ABM development to provide data for model.
Kania et al. (2013) Developed indicators of a complex adaptive systems evaluation approach against which 54 health promotion evaluations were
compared. No explicit complex adaptive system evaluation was identified, although aspects of complexity were incorporated
in the majority of evaluations.
Lessard (2007) Argues that economic evaluation should draw upon complexity theory to deal with related issues of complexity and reflexivity.
Levy et al. (2010) Applies SimSmoke system dynamics model to evaluate effect of tobacco control policies in the Republic of Korea on smoking
prevalence and deaths.
Marchal et al. (2013) An example of applying complexity theory concepts to evaluating fee exemption policies for obstetric care. Argues that an
evaluation of a complex intervention should assess programme effectiveness and uncover causal mechanisms.
Mason and Barnes (2007) Considers how to construct Theory of Change in complex intervention and suggests they should be narrative (rather than logic
diagrams) and used to promote learning and intervention refinement, rather than giving evidence of ‘what works’.
Matheson et al. (2009) Comparative study of two community based interventions explicitly using a complexity lens to understand outcomes. Study
evaluates effectiveness at achieving goals, and focuses on mechanisms to understand variable success.
McLean et al. (2009) Provides detail of method to evaluate the Healthy Eating–Healthy Action Strategy. Not specifically informed by complexity
theory, but references papers that are, e.g. Barnes et al., 2003.
Morell et al. (2010) Proposes an evaluation approach in complex systems that utilises Agent Based Modelling alongside traditional evaluation
approaches. The modelling can be used to identify areas for evaluative activity, while the evaluation data may also be used to
inform the modelling. Suggest that the Agent Based Model can become a monitoring tool.
Munda (2004) Focus on policy development more than evaluation. Identifies issues of competing values, and scales (local, regional, etc.) in
defining policy problems, and suggests that these competing views of the system/policy need to be transparently identified
and considered within analysis. To do this, participatory processes are needed. Use of multi-criteria decision making tools is
appropriate.
Nordtveit (2010) Focusses on development programme design and evaluation. Argues against pre-packaged programmes. Suggests a three
phase approach. (1) Locally define programme focus and approach. (2) Analyse the complex systems within which programme
will run. (3) Use ideas from New Institutional Economics to evaluate cost-effectiveness of programme.
Olney (2005) Focuses on evaluation of library community outreach. Uses evaluation as an intervention planning tool (formative), as well as
considering outcomes.
Parsons (2007) Briefly discusses systems and complexity theories and tools and methods for studying complex systems. Suggests that
methods need to focus on relationships, how people recognise new emergent phenomena and the values and beliefs of those
within the system under study.
Radej (2011) Focuses on the problem of aggregation of evaluation findings in complex systems. Aggregation of micro-variables is not
appropriate, as it will not capture system wide outcomes; macro-level aggregation will not tell us much about what is leading to
outcomes. Argues for meso-level aggregation.
Rametsteiner and Weiss (2006) Paper describes a method for evaluating implementation of a policy across levels (national, local, organisation), and interaction
between levels. Worked example of forestry innovation policy in Austria. Also emphasises participatory methods, as system
defined through interviews/surveys to build up a picture of the policy network.
Rog (2012) Discusses the role of context in evaluation and proposes a framework that highlights five types of context to consider in
evaluation. Not informed explicitly by complexity theory, but similar notions of context used within framework.
Rogers (2008) Discusses the use of intervention logic for evaluation. Distinguishes between complicated and complex interventions.
Rothwell et al. (2010) Does not take a complexity theory perspective as such, but draws upon social-ecological theory and does refer to complex
adaptive systems. Evaluates the implementation of the Welsh Network of Healthy School Schemes.
Sanderson (2009) Theoretical paper that proposes combining insights from complexity theory and Deweyan pragmatism to develop a practical
rationality approach to evidence based policy. This approach blurs distinction between policy making and evaluation,
suggesting a learning process utilising trials, pilots and experimentation of policies within a deliberative process that combines
wisdom of social scientists, implementers and those impacted by policy.
Schensul (2009) Argues that combining modelling of complex systems with local knowledge is required for implementation and evaluation of
multilevel dynamic systems interventions. Highlights the use of ethnography and participatory action research combined with
network analysis and qualitative comparative analysis.
Simpson et al. (2013) Describes development and application of a complexity theory informed tool to evaluate implementation challenges. The tool
uses complexity theory concepts as a lens to view data and a group ‘sense-making’ process to consider analysis and possible
solutions.
Ssengooba, McPake, and Palmer (2012) Drawing upon complex adaptive systems and expectancy theories, the article details an evaluation approach that supports
implementation of performance-based contracting initiative in Uganda. The evaluation approach was a ‘theory-based’
evaluation, utilising multiple methods across multiple levels of the system under study.
Stewart and Ayres (2001) Focuses on systems thinking for policy design. Not informed explicitly by complexity theory, but relevant. Not considering
evaluation, but policy design with implied knock-on effects for how evaluation might be conducted.
Trenholm and Ferlie (2013) Utilised complexity theory within a qualitative case study design to examine the organisational response to TB epidemic across
London. Findings highlighted large number of organisations involved in the TB ‘system’, with different dynamics and
influences at different levels within the system (pan-London vs local authority). Concludes that complexity theory is a useful
perspective to consider organisational response, but also requires understanding of the wider organisational and policy context.
Verweij and Gerrits (2013) Argues that a complex systems approach is required for evaluating infrastructure projects and draws upon Byrne and Ragin to
highlight the use of case-comparison and Qualitative Comparative Analysis as the basis for understanding combinations of
causal conditions in complex systems.
Walker (2007) Provides a thought experiment of how a UK local employment initiative would have been evaluated if informed
by complexity ideas, and contrasts this with the actual evaluation.
Westhorp (2012) Argues that realist and complexity theory traditions in evaluation methodology are compatible, with complexity theory
providing a theoretical lens about how complex systems behave, upon which evaluation subject relevant substantive theories
can be layered.
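Several of the designs summarised in Table 2 (Blackman et al., 2011, 2013; Verweij & Gerrits, 2013) treat Qualitative Comparative Analysis as a complexity consistent way of identifying combinations of case conditions associated with an outcome. As a rough illustration of the core step of the method, the sketch below builds a crisp-set truth table from a handful of invented cases; the case data and condition names are entirely hypothetical, and a real QCA would add calibration, Boolean minimisation, and consistency and coverage measures not shown here.

```python
from collections import defaultdict

# Hypothetical crisp-set data: each case is coded 1/0 on three conditions
# and one outcome. All values are invented for illustration only.
cases = {
    "Area A": {"funding": 1, "partnership": 1, "deprivation": 0, "outcome": 1},
    "Area B": {"funding": 1, "partnership": 0, "deprivation": 0, "outcome": 0},
    "Area C": {"funding": 0, "partnership": 1, "deprivation": 1, "outcome": 1},
    "Area D": {"funding": 1, "partnership": 1, "deprivation": 1, "outcome": 1},
    "Area E": {"funding": 0, "partnership": 0, "deprivation": 1, "outcome": 0},
}
conditions = ["funding", "partnership", "deprivation"]

# Build the truth table: group cases by their whole configuration of
# conditions, rather than estimating a net effect for each condition.
rows = defaultdict(list)
for name, values in cases.items():
    config = tuple(values[c] for c in conditions)
    rows[config].append((name, values["outcome"]))

for config, members in sorted(rows.items(), reverse=True):
    outcomes = {outcome for _, outcome in members}
    verdict = "consistent" if len(outcomes) == 1 else "CONTRADICTORY"
    labels = ", ".join(name for name, _ in members)
    print(dict(zip(conditions, config)), "-> outcome", outcomes, f"[{verdict}: {labels}]")
```

In the reviewed applications the analogous truth table is then minimised to express which combinations of conditions appear sufficient for the outcome; the point of the sketch is simply that QCA compares configurations of conditions across whole cases, which is why Byrne (2013) and others regard it as consistent with a complexity frame.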
Because of the open nature of complex systems, boundaries are constructs, with decisions of inclusion and exclusion reflecting the positions of actors involved in boundary definitions (Lessard, 2007; Munda, 2004). For example, both Rametsteiner and Weiss (2006) and Matheson, Dew, and Cumming (2009) included national level policy makers within the systems under study and identified connections between local actors and national level policy makers as important to understanding the ability for some individuals to have system wide impact. A narrower focus on the local system would not have allowed for interaction with the national level to be identified, and likely reflects the interest of evaluators and others defining evaluation scope.

2.2. Attractors, emergence and other complexity concerns

Complexity theory includes a number of concepts that describe how complex systems develop and behave over time. Two concepts of central concern are emergence and attractor states (Byrne & Callaghan, 2014). Emergent properties of systems are generated through the operation of the system as a whole and cannot be identified through examining individual system parts. Attractor states depict a pattern of system behaviour and represent stability, with a change in attractor state representing a qualitative shift in the system, with likely impacts on emergent phenomena (Room, 2011). Complexity terms including emergence and attractors stand out in their absence from many of the papers that discuss evaluation from a complexity frame. In a review of health promotion evaluation applying a CAS perspective, Kania et al. (2013) found few evaluations that incorporated all nine indicators of a CAS approach they developed, and suggest that opening an evaluation to emergence and adaptation challenges the role of evaluating predetermined goals. Morell et al. (2010) and Marchal, Van Belle, De Brouwere, and Witter (2013) both suggest that the concepts of 'attractors' and 'emergence' provide a framework to understand both stability and change within complex systems. Of the papers that do consider attractors and emergence, keeping a holistic view of the system over long time periods is seen as important (Hawe, Bond, & Butler, 2009; Haynes, 2008). For example, Hawe et al. (2009, p. 97) state that 'emergent properties of change processes might be lost if evaluators localise their attention to lower-level micro-phenomena and fail to see the bigger picture'.

2.3. Defining the appropriate level and unit of analysis

Complex systems are made up of a diverse range of components, including individuals, organisations, physical resources and other complex systems (Byrne, 2011). Complex systems also show repeated patterns at different levels (O'Sullivan, 2004; Room, 2011). Whilst emergence is a system level property, to understand system changes that lead to emergence, a view of the changes within the system, and shifting focus between micro- and macro-views of the system, are required (Buijs, Eshuis, & Byrne, 2009). Within the reviewed literature there is a clear call for evaluation to focus upon multiple levels, whilst also noting the challenge this creates, with practical considerations often meaning a pragmatic single level is focussed upon. Practically, Blackman et al. (2013) suggest that utilising the system level contained within the policy intervention, such as local authority areas, is legitimate. Barnes et al. (2003) discuss theories of change as useful when evaluating local initiatives, yet limited for drawing evaluative conclusions at national level from multiple local initiatives. Developing multiple local evaluative system descriptions does not identify or explain emergent outcomes at higher levels of aggregation. Byrne (2013) and Blackman et al. (2013) would suggest that comparing multiple cases would allow for understanding the mechanism and context interaction that provides indications of macro-level influences. In contrast, Haynes (2008) outlines a method focused on identifying emergent outcomes at an aggregated level by interpreting quantitative monitoring data over time against a storyline of policy changes. This method does not consider emergent outcomes at sub-national level, for example whether there has been a change in inequality for certain groups, or local adaptation within the aggregate policy storyline.

Trenholm and Ferlie (2013) examined the management of TB within London, explicitly captured both pan-London and local authority levels, and found quite different dynamics influencing system responses at each level. Understanding the dynamics between levels of a system is suggested by Schensul (2009) as the focus of analysis, whilst Callaghan (2008, p. 404) recognises the need to 'understand both the dynamics of system change over time, at the macro- and micro-level, and also to explain how that occurs through the meaningful action of individuals in the local setting'.

Radej (2011) argues that assessment of the overall positive or negative impact of an intervention can only be made through aggregation of micro-level evaluative assessments. That is, through aggregation a more holistic understanding of the system and emergent system behaviour can be derived. However, a holistic understanding cannot be gained through simple aggregation of micro-level variables, as emergent system outcomes are greater than the sum of system parts. Radej (2011) proposes a meso-level analysis, where two or more domains that are weakly incommensurate are combined. An example here may be considering employment numbers and occupation trends together as a way of considering labour policies. In a hypothetical example, an increase in numbers employed can be assessed in relation to the types of jobs (sector, manual, skilled, part-time, etc.). While a simple growth in the number of jobs may indicate only a transition within a stable cyclical attractor state (job numbers can go up and down over time without changing the nature of the employment system), a change in the type of jobs being created may signal an emergent property. If a macro-indicator is used, such as labour force participation rates, then the nature of the attractor state shift may not be identifiable. In discussing economic evaluation, Brousselle and Lessard (2011) suggest that the complexity of findings may be better presented to decision-makers as disaggregated findings, rather than single cost-benefit or value-for-money calculations. They identify cost-consequence tables as a useful method.

2.4. Timing of evaluations

Two implications for the timing of evaluations are evident. First, non-linear interactions and the potential for sudden system transformation suggest we cannot predict when the effects of an intervention will present; therefore long evaluative time frames may be required (Dattée & Barlow, 2010; Hawe et al., 2009; Haynes, 2008; Marchal et al., 2013). Second, evaluation should, if possible, occur concurrently alongside programme development and implementation (Adam et al., 2012; Barnes et al., 2003). Within his macro-focussed method, Haynes (2008) used a 27 year period to identify system level changes linked to a corresponding 'storyline' of policy developments. Long timeframes pose a challenge to the question of what should be evaluated. Long timeframes also suggest that evaluative activity needs to be ongoing and that the line between evaluation and monitoring may be blurred. Attribution of outcomes to specific interventions becomes more complicated over time, with the number and variation of local adaptations, national level policy changes, and social and economic contextual changes likely to increase. Several authors point to the role of evaluation for understanding local adaptations and feeding back into implementation processes (Barnes et al., 2003; Burns, 2006; Olney, 2005; Simpson et al., 2013). Given the increase in complications of attributing interventions with outcomes as timeframes increase, a focus on local adaptations may be more immediate and relevant to current implementation decisions, and therefore provide a more tangible focus for evaluation.

2.5. Participatory methods

From a complexity frame, participatory methods have been used to: gather perspectives of actors across the system to develop system descriptions (Mason & Barnes, 2007; Rothwell et al., 2010; Verweij & Gerrits, 2013); understand how interventions are adapted at the local level (Barnes et al., 2003; Callaghan, 2008); and make explicit the different value claims of actors across the system (Callaghan, 2008; Lessard, 2007; Nordtveit, 2010; Olney, 2005). Weiss (1998) identified a continuum of participatory approaches, from stakeholder evaluation through to empowerment evaluation. Stakeholder evaluation utilises stakeholders to help shape the evaluation questions and interpret data. At the empowerment evaluation end of the continuum, those involved with a programme are supported to conduct the evaluation, aiming to empower the individuals and organisations involved through organisational learning. Methods associated with a complexity frame span this continuum. For example, in studying attempts to address health inequalities in the UK National Health Service, Blackman, Wistow, and Byrne (2011) applied Qualitative Comparative Analysis (QCA). Stakeholders from across the NHS sites studied were involved in designing data collection and defining the variables that went into the QCA analysis, as well as interpreting analysis results. Towards the empowerment end, Burns (2006) utilises an action research approach to evaluating system change through the Communities First programme in Wales.

2.6. Case study and comparison designs

Various forms of case study design are discussed within the complexity literature (Anderson et al., 2005; Barnes et al., 2003; Blackman et al., 2013; Burgoyne, 2010; Byrne, 2005; Dyson & Todd, 2010; Matheson et al., 2009). The advantages of a case study design are identified as the ability to develop a detailed understanding of a system (or a limited number of systems), in line with complexity theory concepts. For example: collecting information on the system history and initial conditions at the time of intervention; understanding horizontal complexity (organisations and actors that make up the system at a local level), as well as the interaction between horizontal and vertical complexities (the place of national level actors within local systems); and identifying local adaptation. Matheson et al. (2009) identify initial conditions (the level of health and social inequality within case study communities) and differences in access to central government actors between case study communities as important for understanding different experiences and activities between cases.

Ethnographic methods within case study designs are utilised by Schensul (2009) and Hoffer et al. (2009), providing a detailed study of the components within the systems under study and their interaction. While Hoffer et al. (2009) use the ethnographic study to develop an agent-based model to test the effect of possible mechanisms on emergent outcomes, Schensul (2009) suggests the use of case-comparison methods, including QCA, also discussed by Byrne (2013), Blackman et al. (2013) and Verweij and Gerrits (2013).

2.7. Multiple and mixed methods

Also widely used from a complexity frame is the use of multiple and mixed methods and data types. Bristow, Farrington, Shaw, and Richardson (2009) identify multi-method evaluation approaches as a logical response to the challenge of providing contextualised information on what works, while Rog (2012) suggests the variety of context issues evaluators are likely to face requires a portfolio of methodological strategies. In their review of health promotion evaluation and CAS, Kania et al. (2013) identified designs that mixed quantitative with qualitative methods as more able to identify why interventions worked. Complexity suggests no one piece of data will be able to provide a complete view of the system under study. An example is the evaluation of PRISM (Hawe et al., 2009), where time-trend comparison, network analysis, and narratives of practice accessed through interviews and activity diaries of key informants were all used. A drawback of multiple and mixed methods may be the volume of data generated and the requirement for interdisciplinary teams, with associated communication and coordination requirements, which can all push up the scale and cost of evaluations (Marchal et al., 2013).

2.8. Layering theory to guide evaluation

A notable feature of several articles reviewed was the utilisation of a complexity frame of reference as only one of multiple theoretical strands. A number of authors are also looking at integrating wider social science and evaluation theory to provide the bridge between a complexity theory ontological position and epistemological certainty. Approaches such as realism (Westhorp, 2012), new institutional theories (Blackman et al., 2010), Deweyan pragmatism (Sanderson, 2009) and Strauss's concept of negotiated order (Callaghan, 2008) are all discussed. What these have in common is a way of guiding consideration of different levels of a system and micro–macro interactions. Westhorp (2012) provides a set of complexity theory concepts that can be used to identify theories to guide evaluation and provide greater explanatory power. She suggests that multiple theories can be nested for explanation at multiple levels of a system. Examples of theories used this way in other articles reviewed include Trenholm and Ferlie's (2013) identification of New Public Management as having an influence within the TB system they studied, while Morell et al. (2010) drew upon Rogers' theory of diffusion of innovation to help organise data and create parameters for an agent-based model. The use of theory can aid in decisions regarding the appropriate level of analysis, defining system boundaries and the timing of evaluations.

3. Conclusion

This paper has sought to identify a range of implications of a complexity frame of reference for policy and programme evaluation. Though restricted to a small number of papers in the academic literature, some common themes have been identified. The themes can be broken into two groups. The first group identifies implications of applying complexity theory for defining the purpose, scope and relevant units of analysis for evaluation activity. The second group of themes considers method implications of applying a complexity frame of reference, including the use of case studies, mixed methods and participatory methods.

Across the reviewed literature, there was more consistency in terms of the second group of method related themes, with variable explicit application of complexity theory concepts. In part, the variable uptake of complexity theory concepts could reflect that complexity theory is not one coherent theoretical body of thought. Richardson and Cilliers (2001) identify three broad applications of complexity: new reductionism; soft complexity; and complexity thinking. They describe the new reductionism school as attempting to mathematically model complex systems to find the few simple rules that govern system behaviour. This has also been termed restricted complexity (Buijs et al., 2009).
While a few articles included non-linear modelling of complex systems (e.g. system dynamics and agent-based models), they were not of a type strictly concerned with identifying system rules. The second complexity school described by Richardson and Cilliers is soft complexity, or the uncritical use of a complexity metaphor. Several of the reviewed articles that have discussed complexity theory or CAS without detailing how complexity concepts are being utilised may fall into this school. The third school, labelled complexity thinking, takes the understanding of how complex systems behave over time as a series of ontological positions upon which epistemological approaches are considered. The reviewed articles that do explicitly discuss complexity concepts may be coming from this school. The reviewed literature, however, showed that the epistemological implications of a complexity thinking ontology are not yet fully resolved, with no indication of a widely preferred and established way to work through issues such as defining the appropriate unit of analysis for evaluation.

The commonality amongst the second group of themes suggests that evaluation practice may be usefully progressing with broad categories of method in spite of wider epistemological debates. These evaluations may well be complexity theory consistent, even if a number of aspects are left ill defined.

The themes identified in this article may provide a basic checklist for an evaluator interested in applying a complexity frame of reference. Whilst providing a broad configuration of method (case study design, mixed and participatory methods, developing a view of the system over time), the themes also point to a number of design considerations to be resolved (defining the system level and unit of analysis, the appropriate timeframe for evaluation, and the explicit application of complexity theory concepts).

As noted above, much of the evaluation literature is somewhat hidden in working papers, conference presentations and client reports. For this reason it is to be expected that a range of examples of complexity theory informed evaluations have been missed from this review. The themes identified should not be set in stone, but be subject to critical reflection and development and used as a starting point for discussion.

For example, three books, outside the scope of this review, have recently been published that consider the use of complexity theory and Complex Adaptive Systems science for evaluation and the social sciences (Byrne & Callaghan, 2014; Morell, 2010; Patton, 2011). The themes identified in this review can be seen within these books. For example, all three books layer complexity theory with other theoretical perspectives. Byrne and Callaghan (2014) utilise complexity theory alongside critical realism, as well as the relational sociology of Bourdieu, Strauss's notion of negotiated order and Freire's ideas of praxis. Patton explicitly utilises CAS as 'sensitising concepts' and situates Developmental Evaluation within his broader approach of utilisation focussed evaluation (Patton, 2011). Morell (2010) draws upon CAS in thinking about surprise in evaluation, but also applies ideas regarding innovation in organisational settings and the life-cycle of interventions.

All three books also provide guidance on developing an understanding of the system under study. Byrne and Callaghan (2014) place importance on case-comparison methods, where the system under study is considered a case. While aspects of programmes help define the boundaries of the case (system) under study (e.g. schools, households, communities), the open nature of complex systems also requires an active process of 'casing'. Relevant questions for understanding systems under study from a Developmental Evaluation perspective include asking about the nature of interrelationships within systems, their structure and processes, as well as patterns of outcomes (Patton, 2011). Complex system boundaries are socially constructed, so we should be asking about what systems are being targeted for change and what 'change' means to the various people involved. For Morell (2010), understanding the system is for the purpose of anticipating the likelihood of 'unforeseeable' surprises cropping up in evaluation. Informed by several CAS concepts, Morell suggests working through a set of questions at the start of the evaluation with stakeholders to work out how much surprise to expect and to manage expectations of the intervention and the evaluation. Thus participation is built into the evaluation from the start, and a close relationship with stakeholders throughout the evaluation lifecycle is part of an 'agile' evaluation. In a similar way, developmental evaluators (ideally) are part of the design and implementation team of the evaluation.

While Byrne and Callaghan get quite specific about the range of methods they see as consistent with a case-comparison approach to understanding causality in complex systems, neither Morell nor Patton place emphasis on case comparison, and both steer away from specific method guidance, with Patton advocating 'situational responsiveness'.

As with the three books discussed above, specific frameworks and guidance for applying a complexity frame of reference are available, and perhaps more can be expected given the apparent interest in the field shown by the increasing number of applied journal articles. Such frameworks provide guidance in navigating options and tensions in the eight themes identified here, while the themes may be useful in comparing complexity framed approaches.

Acknowledgements

This project is supported by the Marsden Fund Council from Government funding, administered by the Royal Society of New Zealand (MAU1107).

References

Adam, T., Hsu, J., De Savigny, D., Lavis, J. N., Rottingen, J. A., & Bennett, S. (2012). Evaluating health systems strengthening interventions in low-income and middle-income countries: Are we asking the right questions? Health Policy and Planning, 27(Suppl. 4), iv9–iv19.
Anderson, R. A., Crabtree, B. F., Steele, D. J., & McDaniel, R. R., Jr. (2005). Case study research: The view from complexity science. Qualitative Health Research, 15(5), 669–685. http://dx.doi.org/10.1177/1049732305275208
Barnes, M., Matka, E., & Sullivan, H. (2003). Evidence, understanding and complexity: Evaluation in non-linear systems. Evaluation, 9(3), 265–284.
Blackman, T., Hunter, D., Marks, L., Harrington, B., Elliott, E., Williams, G., et al. (2010). Wicked comparisons: Reflections on cross-national research about health inequalities in the UK. Evaluation, 16(1), 43–57.
Blackman, T., Wistow, J., & Byrne, D. (2011). A qualitative comparative analysis of factors associated with trends in narrowing health inequalities in England. Social Science and Medicine, 72(12), 1965–1974.
Blackman, T., Wistow, J., & Byrne, D. (2013). Using qualitative comparative analysis to understand complex policy problems. Evaluation, 19(2), 126–140.
Boustani, M. A., Munger, S., Gulati, R., Vogel, M., Beck, R. A., & Callahan, C. M. (2010). Selecting a change and evaluating its impact on the performance of a complex adaptive health care delivery system. Clinical Interventions in Aging, 5, 141–148.
Bristow, G., Farrington, J., Shaw, J., & Richardson, T. (2009). Developing an evaluation framework for crosscutting policy goals: The accessibility policy assessment tool. Environment and Planning A, 41(1), 48–62.
Brousselle, A., & Lessard, C. (2011). Economic evaluation to inform health care decision-making: Promise, pitfalls and a proposal for an alternative path. Social Science and Medicine, 72(6), 832–839.
Buijs, J.-M., Eshuis, J., & Byrne, D. (2009). Approaches to researching complexity in public management. In G. A. Teisman, A. van Buuren, & L. Gerrits (Eds.), Managing complex governance systems (pp. 37–55). New York: Routledge.
Burgoyne, J. G. (2010). Evaluating action learning: A critical realist complex network theory approach. Action Learning: Research and Practice, 7(3), 239–251.
Burns, D. (2006). Evaluation in complex governance arenas: The potential of large system action research. In B. Williams & I. Imam (Eds.), Systems concepts in evaluation: An expert anthology (pp. 181–196). Point Reyes, CA: American Evaluation Association.
Byrne, D. (2005). Complexity, configurations and cases. Theory, Culture & Society, 22(5), 95–111.
Byrne, D. (2011). Applying social science: The role of social research in politics, policy and practice. Bristol: The Policy Press.
Byrne, D. (2013). Evaluating complex social interventions in a complex world. Evaluation, 19(3), 217–228. http://dx.doi.org/10.1177/1356389013495617
Byrne, D., & Callaghan, G. (2014). Complexity theory and the social sciences: The state of the art. Oxon: Routledge.
Cabrera, D., Colosi, L., & Lobdell, C. (2008). Systems thinking. Evaluation and Program Planning, 31(3), 299–310.
Callaghan, G. (2008). Evaluation and negotiated order: Developing the application of complexity theory. Evaluation, 14(4), 399–411.
Capra, F. (2005). Complexity and life. Theory, Culture & Society, 22(5), 33–44. http://dx.doi.org/10.1177/0263276405057046
Dattée, B., & Barlow, J. (2010). Complexity and whole-system change programmes. Journal of Health Services Research and Policy, 15(Suppl. 2), 19–25.
Dickinson, H. (2006). The evaluation of health and social care partnerships: An analysis of approaches and synthesis for the future. Health and Social Care in the Community, 14(5), 375–383.
Dixon-Woods, M., Agarwal, S., Jones, D., Young, B., & Sutton, A. (2005). Synthesising qualitative and quantitative evidence: A review of possible methods. Journal of Health Services Research and Policy, 10(1), 45–53.
Dyson, A., & Todd, L. (2010). Dealing with complexity: Theory of change evaluation and the full service extended schools initiative. International Journal of Research and Method in Education, 33(2), 119–134.
Eppel, E., Matheson, A., & Walton, M. (2011). Applying complexity theory to New Zealand public policy: Principles for practice. Policy Quarterly, 7(1), 48–55.
Fleischhacker, S. (2007). Food fight: The battle over redefining competitive foods. The Journal of School Health, 77(3), 147.
Forss, K., Marra, M., & Schwartz, R. (Eds.). (2011). Evaluating the complex: Attribution, contribution, and beyond (Vol. 18). New Brunswick: Transaction Publishers.
Gare, A. (2000). Systems theory and complexity introduction. Democracy and Nature, 6(3), 327–339.
Goldstein, J. (1999). Emergence as a construct: History and issues. Emergence, 1(1), 49–72.
Hawe, P., Bond, L., & Butler, H. (2009). Knowledge theories can inform evaluation practice: What can a complexity lens add? New Directions for Evaluation, 2009(124), 89–100. http://dx.doi.org/10.1002/ev.316
Hawe, P., Shiell, A., Riley, T., & Gold, L. (2004). Methods for exploring implementation variation and local context within a cluster randomised community intervention trial. Journal of Epidemiology and Community Health, 58(9), 788–793.
Haynes, P. (2008). Complexity theory and evaluation in public management. Public Management Review, 10(3), 401–419.
Hoffer, L. D., Bobashev, G., & Morris, R. J. (2009). Researching a local heroin market as a complex adaptive system. American Journal of Community Psychology, 44(3), 273–286.
Israel, N., & Wolf-Branigin, M. (2011). Nonlinearity in social service evaluation: A primer on agent-based modeling. Social Work Research, 35(1), 20–24.
Kania, A., Patel, A. B., Roy, A., Yelland, G. S., Nguyen, D. T. K., & Verhoef, M. J. (2013). Capturing the complexity of evaluations of health promotion interventions: A scoping review. Canadian Journal of Program Evaluation, 27(1), 65–91.
Lessard, C. (2007). Complexity and reflexivity: Two important issues for economic evaluation in health care. Social Science and Medicine, 64(8), 1754–1765.
Levy, D. T., Cho, S. I., Kim, Y. M., Park, S., Suh, M. K., & Kam, S. (2010). SimSmoke model evaluation of the effect of tobacco control policies in Korea: The unknown success story. American Journal of Public Health, 100(7), 1267–1273.
Maani, K. E., & Cavana, R. Y. (2000). Systems thinking and modelling: Understanding change and complexity. Auckland: Pearson Education New Zealand Limited.
Marchal, B., Van Belle, S., De Brouwere, V., & Witter, S. (2013). Studying complex interventions: Reflections from the FEMHealth project on evaluating fee exemption policies in West Africa and Morocco. BMC Health Services Research, 13, 469.
Mason, P., & Barnes, M. (2007). Constructing theories of change: Methods and sources. Evaluation, 13(2), 151–170.
Matheson, A., Dew, K., & Cumming, J. (2009). Complexity, evaluation and the effectiveness of community-based interventions to reduce health inequalities. Health Promotion Journal of Australia, 20(3), 221–226.
Mays, N., Pope, C., & Popay, J. (2005). Systematically reviewing qualitative and quantitative evidence to inform management and policy-making in the health field. Journal of Health Services Research & Policy, 10, S6.
McLean, R. M., Hoek, J. A., Buckley, S., Croxson, B., Cumming, J., Ehau, T. H., et al. (2009). "Healthy eating – Healthy action": Evaluating New Zealand's obesity prevention strategy. BMC Public Health, 9. http://www.biomedcentral.com/1471-2458/9/452
Midgley, G. (2008). Systems thinking, complexity and the philosophy of science. E:CO Emergence: Complexity and Organization, 10(4), 55–73.
Morçöl, G. (2012). A complexity theory for public policy. New York: Routledge.
Morell, J. A. (2010). Evaluation in the face of uncertainty: Anticipating surprise and responding to the inevitable. New York: Guilford Press.
Morell, J. A., Hilscher, R., Magura, S., & Ford, J. (2010). Integrating evaluation and agent-based modeling: Rationale and an example for adopting evidence-based practices. Journal of MultiDisciplinary Evaluation, 6(14), 32–57.
Munda, G. (2004). Social multi-criteria evaluation: Methodological foundations and operational consequences. European Journal of Operational Research, 158(3), 662–677.
Nordtveit, B. H. (2010). Development as a complex process of change: Conception and analysis of projects, programs and policies. International Journal of Educational Development, 30(1), 110–117.
Olney, C. A. (2005). Using evaluation to adapt health information outreach to the complex environments of community-based organizations. Journal of the Medical Library Association, 93(4 Suppl.), S57–S67.
O'Sullivan, D. (2004). Complexity science and human geography. Transactions of the Institute of British Geographers, 29(1), 282–295.
Parsons, B. A. (2007). The state of methods and tools for social systems change. American Journal of Community Psychology, 39(3–4), 405–409.
Patton, M. Q. (2011). Developmental evaluation: Applying complexity concepts to enhance innovation and use. New York: The Guilford Press.
Plsek, P. E., & Greenhalgh, T. (2001). Complexity science: The challenge of complexity in health care. BMJ, 323(7313), 625–628. http://dx.doi.org/10.1136/bmj.323.7313.625
Radej, B. (2011). Synthesis in policy impact assessment. Evaluation, 17(2), 133–150.
Rametsteiner, E., & Weiss, G. (2006). Assessing policies from a systems perspective – Experiences with applied innovation systems analysis and implications for policy evaluation. Forest Policy and Economics, 8(5), 564–576.
Richardson, K. A., & Cilliers, P. (2001). What is complexity science? A view from different directions. Emergence, 3(1), 5–23.
Richardson, K. A., Gregory, W. J., & Midgley, G. (2007). Editorial introduction to the special double issue on complexity thinking and systems theory. Emergence: Complexity & Organization, 9(1/2), vi–viii.
Rickles, D., Hawe, P., & Shiell, A. (2007). A simple guide to chaos and complexity. Journal of Epidemiology & Community Health, 61, 933–937.
Rog, D. J. (2012). When background becomes foreground: Toward context-sensitive evaluation practice. New Directions for Evaluation, 2012(135), 25–40.
Rogers, P. J. (2008). Using programme theory to evaluate complicated and complex aspects of interventions. Evaluation, 14(1), 29–48.
Room, G. (2011). Complexity, institutions and public policy. Cheltenham: Edward Elgar Publishing.
Rothwell, H., Shepherd, M., Murphy, S., Burgess, S., Townsend, N., & Pimm, C. (2010). Implementing a social-ecological model of health in Wales. Health Education, 110(6), 471–489.
Sanderson, I. (2000). Evaluation in complex policy systems. Evaluation, 6(4), 433.
Sanderson, I. (2009). Intelligent policy making for a complex world: Pragmatism, evidence and learning. Political Studies, 57(4), 699–719.
Schensul, J. J. (2009). Community, culture and sustainability in multilevel dynamic systems intervention science. American Journal of Community Psychology, 43(3–4), 241–256.
Schwartz, R., & Garcia, J. (2011). Intervention Path Contribution Analysis (IPCA) for complex strategy evaluation: Evaluating the smoke-free Ontario strategy. In K. Forss, M. Marra, & R. Schwartz (Eds.), Evaluating the complex: Attribution, contribution and beyond (pp. 187–207). New Brunswick: Transaction Publishers.
Shiell, A., Hawe, P., & Gold, L. (2008). Complex interventions or complex systems? Implications for health economic evaluation. BMJ, 336(7656), 1281–1283. http://dx.doi.org/10.1136/bmj.39569.510521.AD
Simpson, K. M., Porter, K., McConnell, E. S., Colón-Emeric, C., Daily, K. A., Stalzer, A., et al. (2013). Tool for evaluating research implementation challenges: A sense-making protocol for addressing implementation challenges in complex research settings. Implementation Science, 8(1).
Ssengooba, F., McPake, B., & Palmer, N. (2012). Why performance-based contracting failed in Uganda – An "open-box" evaluation of a complex health system intervention. Social Science and Medicine, 75(2), 377–383.
Stern, E., Stame, N., Mayne, J., Forss, K., Davies, R., & Befani, B. (2012). Broadening the range of designs and methods for impact evaluations (Working Paper No. 38). London: Department for International Development. Retrieved from http://www.dfid.gov.uk/Documents/publications1/design-method-impact-eval.pdf
Stewart, J., & Ayres, R. (2001). Systems theory and policy practice: An exploration. Policy Sciences, 34, 94–97.
Trenholm, S., & Ferlie, E. (2013). Using complexity theory to analyse the organisational response to resurgent tuberculosis across London. Social Science and Medicine, 93, 229–237.
Verweij, S., & Gerrits, L. M. (2013). Understanding and researching complexity with qualitative comparative analysis: Evaluating transportation infrastructure projects. Evaluation, 19(1), 40–55. http://dx.doi.org/10.1177/1356389012470682
Vincent, R. (2012). Insights from complexity theory for evaluation of development action: Recognising the two faces of complexity. London: PANOS/IKM Emergent Research Programme. Retrieved from http://wiki.ikmemergent.net/files/1203-IKM_Emergent_Working_Paper_14-Complexity_Theory-March_2012.pdf
Walker, R. (2007). Entropy and the evaluation of labour market interventions. Evaluation, 13(2), 193–219.
Walton, M., Signal, L., & Thomson, G. (2013). Public policy to promote healthy nutrition in schools: Views of policymakers. Health Education Journal, 72(3), 283–291. http://dx.doi.org/10.1177/0017896912442950
Weiss, C. H. (1998). Evaluation (2nd ed.). Upper Saddle River: Prentice Hall.
Westhorp, G. (2012). Using complexity-consistent theory for evaluating complex systems. Evaluation, 18(4), 405–420. http://dx.doi.org/10.1177/1356389012460963

Mat Walton is a lecturer in the School of Health and Social Services, Massey University, New Zealand. His research focuses on public health policy design and evaluation, the application of complexity theory and policy interaction.
