European Journal of Operational Research 249 (2016) 908–918

Recent evidence on the effectiveness of group model building

Rodney J. Scott a,∗, Robert Y. Cavana b, Donald Cameron c

a The University of Queensland, Brisbane, QLD 4072, Australia
b Victoria University of Wellington, Wellington 6140, New Zealand
c The University of Queensland, Brisbane, QLD 4072, Australia

∗ Corresponding author. Tel.: +64 27 6127 402. E-mail addresses: rodney.scott@gmail.com (R.J. Scott), bob.cavana@vuw.ac.nz (R.Y. Cavana), donald.cameron@uq.edu.au (D. Cameron).

http://dx.doi.org/10.1016/j.ejor.2015.06.078

ARTICLE INFO

Article history:
Received 12 July 2014
Accepted 30 June 2015
Available online 8 July 2015

Keywords:
Behavioural OR
Group model building
System dynamics
Evidence
Literature review

ABSTRACT

Group model building (GMB) is a participatory approach to using system dynamics in group decision-making and problem structuring. This paper considers the published quantitative evidence base for GMB since the earlier literature review by Rouwette et al. (2002), to consider the level of understanding on three basic questions: what does it achieve, when should it be applied, and how should it be applied or improved? There have now been at least 45 such studies since 1987, utilising controlled experiments, field experiments, pretest/posttest, and observational research designs. There is evidence of GMB achieving a range of outcomes, particularly with regard to the behaviour of participants and their learning through the process. There is some evidence that GMB is more effective at supporting communication and consensus than traditional facilitation; however, GMB has not been compared to other problem structuring methods. GMB has been successfully applied in a range of contexts, but there is little evidence on which to select between different GMB tools, or to understand when certain tools may be more appropriate. There is improving evidence on how GMB works, but this has not yet been translated into changing practice. Overall the evidence base for GMB has continued to improve, supporting its use for improving communication and agreement between participants in group decision processes. This paper argues that future research in group model building would benefit from three main shifts: from single cases to multiple cases; from controlled settings to applied settings; and by augmenting survey results with more objective measures.

© 2015 Elsevier B.V. and Association of European Operational Research Societies (EURO) within the International Federation of Operational Research Societies (IFORS). All rights reserved.

1. Introduction

Participative and behavioural aspects of OR are important and underexplored (Hamalainen, Luoma, & Saarinen, 2013). One area with an emerging evidence-base is group model building ("GMB", Vennix, 1996), a participatory approach to the development of system dynamics models. Recent GMB literature has given more prominence to participant behaviour and interpersonal dynamics, to explore how GMB supports persuasion (Rouwette, Korzilius, Vennix, & Jacobs, 2011a), trust (Black & Andersen, 2012), and agreement (Rouwette, 2011).

The importance of involving the client in the modelling process has been acknowledged since the conception of system dynamics (Forrester, 1961). Practitioners observed that recommendations developed through system dynamics were not automatically adopted by the client (Greenberger, Crenson, & Crissey, 1976), and experimented with involving the client in the modelling process. This became known as "group model building" (Vennix, 1996). The term has been criticised as cosy, narrow and parochial (Andersen, Vennix, Richardson, & Rouwette, 2007), in that it fails to mention that the models in question are always system dynamics models. Authors have proposed that GMB should be considered as a sub-set of problem structuring methods (Andersen et al., 2007; Rouwette, Vennix, & Felling, 2009) or group decision support systems (Vennix, Andersen, Richardson, & Rohrbaugh, 1992). Nonetheless, the term and its limitation to system dynamics methods have been used in many publications (e.g. Akkermans & Vennix, 1997; Andersen & Richardson, 1997; Andersen, Richardson, & Vennix, 1997, 2007; Luna-Reyes et al., 2006; Richardson, 2013; Richardson & Andersen, 1995; Richardson, Andersen, Rohrbaugh, & Steinhurst, 1992; Rouwette & Vennix, 2006, 2011; Rouwette et al., 2009; Rouwette, Vennix, & Thijssen, 2000, 2002, 2011a, 2011b; Vennix, 1996, 1999; Vennix & Rouwette, 2000; Vennix, Scheper, & Willems, 1993, 1996; Zagonel, 2002, 2004). Maintaining this narrow definition allows this paper to build directly on an earlier literature review (Rouwette, Vennix, & Mullekom, 2002).

Despite over 100 publications on GMB methods (see Section 3), relatively few have described attempts at quantitative analysis.
This paper collates the available evidence on group model building, including studies of how it is used, what it achieves, and why. This information is used to reflect on the quantitative evidence base for GMB, to synthesise the conclusions, to consider how this addresses fundamental research questions, and to identify key opportunities for the future. The analysis is arranged around three themes: what group model building achieves; when it should be applied; and how it should be applied or improved. This is likely to be of interest to GMB practitioners in understanding the state of empirical evidence for their craft, and for GMB researchers in identifying further research opportunities. The study is also likely to be of interest to the broader OR community, as many of the research challenges (particularly balancing experimental control with external validity) are likely to be applicable to other participative and behavioural approaches.

This paper is arranged in four sections after this introduction. Section 2 summarises early research on GMB, as articulated by Rouwette et al. (2002). The methodology for augmenting and updating the literature review by Rouwette et al. (2002) is then explained in Section 3. Literature is presented and analysed in Section 4 to describe the quantitative evidence base for GMB. And finally in Section 5, there is a discussion of the implications of the research findings, and an exploration of research gaps and future research opportunities.

2. Early research on group model building

The first empirical study was conducted in 1988, and in the following 13 years, there were 19 studies on GMB that collected quantitative evidence regarding its use. In 2002, Rouwette et al. reviewed 107 GMB studies, including the 19 that attempted some sort of quantitative assessment. The studies that included quantitative evidence were published between 1987 and 2000. The review considered five aspects of the studies: the source of the data; what data were collected; how they were collected; when they were collected; and what was found. The different studies related to a range of intervention contexts and tools, described in different and incomplete ways. The evaluations consisted of post-workshop surveys or pretest/posttest questionnaires, and mostly relied on participants' own views of what the workshops had achieved. Of those using a pretest/posttest design, three used a single case study and two a field experiment. The authors expressed caution about biases introduced by measurement methods, and recommended direct comparison of different measurement methods to determine if they were associated with different results (Rouwette et al., 2002).

The conclusions of this review were relatively modest: GMB literature included a number of small-scale evaluations that demonstrated that participants believe GMB contributes to improved communication quality, insight, consensus and commitment to conclusions. The reasons for this success were unclear, as was the relative effectiveness of GMB versus other techniques (Rouwette et al., 2002).

Three papers at around this time contained recommendations on a research programme for the future of GMB. Andersen et al. (1997) proposed that more rigorous and consistent recording of the intervention context and tools was required, as well as evaluation of several explanatory hypotheses: systems thinking, group structure, chunking, gifted practitioner, group communication, or Hawthorne effect. While noting the barriers to effective research design, they recommended experiments to complement survey results, and the use of common survey methods to allow results from many studies to be aggregated. They also proposed: the inclusion of measurement methods that do not rely on participants reporting their own cognitive processes; that studies measure either mental models or their changes but not both in the same subjects; that some study of the enduring effects (if any) of GMB is conducted; and that mixed methods are used to improve the robustness of results. Coyle (2000) proposed an exploration into the wise balance between qualitative and quantitative GMB, through the development of a metric for measuring the presumed added understanding and confidence from quantitative modelling. Finally, Rouwette et al. (2002) echoed the earlier call from Andersen et al. (1997) to thoroughly record case research in a standardised format, while also calling for more reporting of unsuccessful case studies.

3. Methodology

These calls for more research into how GMB is conducted, what it achieves, and why, set the scene for several important quantitative studies over the coming decade, as well as a number of smaller pilot studies. This paper reviews this research, in order to reflect on the current quantitative evidence base regarding GMB.

3.1. Paper selection

A literature search was conducted to identify relevant evidence for GMB. This included past issues of five journals from 2001 to 2014 (European Journal of Operational Research, Journal of the Operational Research Society, Group Decision and Negotiation, System Dynamics Review, Systems Research and Behavioral Science), and past proceedings of two international conferences (Meeting of the International Society of Systems Sciences, and International Conference of the System Dynamics Society). Papers were selected that included quantitative evidence relating to GMB. The references cited in these papers were subsequently analysed to reveal additional research.

This method introduces several possible biases. First, it is possible that empirical GMB studies have been published elsewhere than the publications examined, and not subsequently referenced by empirical GMB studies within those publications examined. Secondly, it is possible that some papers were missed due to human error, where it was not immediately apparent that the paper related to GMB. Third, not all research is published, for a number of reasons including: ambivalence, commercial sensitivity, or a reluctance to publish findings from unsuccessful cases. It is not possible to measure these possible biases, and therefore caution must be taken in assuming that this paper describes all empirical research on GMB.

Papers were selected on the basis of three criteria: quantitative evidence, system dynamics tools, and a focus on client participation or group interaction (see Table 1).

Table 1. Selection criteria.
- Quantitative evidence: numerical or statistical data reported in the results of the study.
- System dynamics tools: one or more of the following tools used as part of the group process: behaviour over time graph, causal loop diagram, stock and flow model, simulation model.
- Focus on client participation or group interaction: a decision process involving more than one person, with reference to interaction between the participants in the creation or interpretation of the system dynamics tool.

Several studies were excluded that evaluated participant learning through use of system dynamics methods but that did not feature significant group interaction (e.g. Capelo & Dias, 2009; Cavaleri, Raphael, & Filletti, 2002; Gary & Wood, 2007, 2011; Hopper & Stave, 2008; Jensen, 2005; Kopainsky, Alessi, Pedercini, & Davidsen, 2009, 2010a, 2010b, 2011a, 2011b, 2012; Kopainsky & Saldarriaga, 2012; Kopainsky & Sawicka, 2011; Langley & Morecroft, 2004; Maani & Maharaj, 2003; Moxnes, 2004; Mulder, Lazonder, & de Jong, 2011; Plate, 2010; Stouten, Heeme, Gellynck, & Polet, 2012; Yasarcan, 2009).

Conversely, several papers were included that described individual work on system dynamics tools alternated with group feedback activities that did not use system dynamics tools (Borštnar, Kljajić, Škraba, Kofjač, & Rajkovič, 2011; Škraba, Kljajić, & Leskovar, 2003, 2007).
Papers were also excluded that described problem structuring methods or soft OR methods that were not necessarily system dynamics tools (Allsop & Taket, 2003; Berry, Bowman, Hernandez, & Pratt, 2006; Bryant & Darwin, 2004; Charnley & Engelbert, 2005; Cole, 2006; Fan, Shen, & Lin, 2007; Fjermestad, 2004; Franco, 2007; Halvorsen, 2001; Joldersma & Roelofs, 2004; McGurk et al., 2006; Phahlamohlaka & Friend, 2004; Rowe & Frewer, 2004; Rowe, Marsh, & Frewer, 2004; Rowe, Horlick-Jones, Walls, & Pidgeon, 2005; Shaw, 2003; Sørensen, Vidal, & Engström, 2004). The distinction between GMB and other problem structuring methods may not always be helpful (Andersen et al., 2007). However, maintaining a narrow focus on participative interventions using system dynamics methods allows direct comparison with the earlier review by Rouwette et al. (2002), to reflect on how the evidence base has changed in the intervening years. While this paper does not comment on the evidence to support other problem structuring methods, lessons from related fields are included where this is useful as a basis for recommending where further GMB research should be directed.

Some authors have begun to divide GMB practice into two related sub-fields based on how the model is perceived (Andersen et al., 2007; Zagonel, 2002). One perspective considers the model as an allegedly realistic representation of the external policy environment ("micro world" – Zagonel, 2002; "virtual world" – Sterman, 2000). The second perspective considers the model as a socially constructed artefact for building trust and agreement ("boundary object" – Black, 2013; Black & Andersen, 2012; Franco, 2013; Scott, Cavana, & Cameron, 2014b; Zagonel, 2002; "transitional object" – Eden & Ackermann, 2006). In many of the cases reviewed, it was not possible to clearly distinguish between each perspective, and indeed each may be the prevailing view at different points of the same GMB intervention (Zagonel, 2002). This paper includes both perspectives under the general category of GMB.

Finally, four papers were included that met the criteria for inclusion in this review, but did not describe case research. One paper measured the time commitment by participants and different members of the modelling team (Luna-Reyes et al., 2006). A second paper surveyed leading system dynamics practitioners on best-practice methods – while not explicitly inquiring about GMB, many of the responses related to client involvement and participation (Martinez-Moyano & Richardson, 2013). A third paper concerned the use of model validation in GMB, and conducted a meta-analysis of published GMB studies (Happach, Veldhuis, Vennix, & Rouwette, 2012). The fourth paper asked potential GMB clients to rate each of the reported outcomes of GMB based on importance to them in conducting group decision processes (Scott, Cavana, & Cameron, 2015).

3.2. Analysis

Each paper identified and selected as described above was read and summarised into a short synopsis arranged in five fields: the research design, the research context, the GMB tools used, the evaluation tools employed, and the results reported. These synopses formed the data set for subsequent analysis.

No framework was found for assessing the sufficiency of evidence for a group facilitation tool. Therefore, a framework was created as a composite of four partial frameworks from the literature. These four partial frameworks are each explored below: themes for evaluating interventions (Riecken & Boruch, 1974); evidence for effectiveness of group processes (Burlingame, MacKenzie, & Strauss, 2004); levels of outcomes (Rouwette et al., 2002); and extent of use (Mingers & Taylor, 1992).

Riecken and Boruch (1974) suggest that there are three logical themes for research into the application of a social intervention: what the intervention achieves; when it should be applied; and how it should be applied or improved.

In considering evidence for small group methods in psychology, Burlingame et al. (2004) propose that the effects of the intervention should be considered in terms of general efficacy (what does it achieve), differential efficacy with respect to context (what does it achieve in what situations), and differential efficacy with respect to method (what does it achieve compared to other alternative methods). As GMB consists of a range of tools (Rouwette et al., 2002), this might be further separated into differential efficacy compared to other group methods and differential efficacy between different GMB tools. Burlingame et al. (2004) also suggest that insights into the mechanism of change are important for informing future efforts to improve practice.

Efficacy in GMB literature is considered at four levels: individual, group, organisational and method/efficiency (Rouwette et al., 2002). Individual effects include cognitive changes such as insight as well as affective changes such as commitment to a decision. Group effects include group behaviours such as communication quality as well as group alignment through shared language and consensus. Efficiency refers to the time and resources taken to achieve these individual and group effects. It is likely to be impossible to attribute organisational level changes like system improvement to the GMB intervention (Shadish, Cook, & Campbell, 2001), so this has been omitted from the composite framework used in this paper.

In relation to SSM, Mingers and Taylor (1992) suggest that a baseline for understanding a group method is first to know how prevalent the method is, and how and where it is currently used.

These four partial frameworks were summarised into a composite framework, and used to identify the research questions for this paper (Table 2). Each of the included studies was assessed for its contribution to answering the research questions, and the collated results are reported in Section 4 of this paper.

4. Quantitative evidence relating to group model building

The literature review resulted in the selection of 26 new studies since 2001 that include quantitative evidence on GMB. This section describes: where these studies were published; the research designs; and their findings (what GMB achieves, when it should be used, and how it should be applied or improved).

4.1. Publication sources

The 19 studies identified by Rouwette et al. (2002) and an additional 26 papers in more recent literature were published in a variety of forms, including journal articles, conference papers, and student dissertations (see Table 3). While the System Dynamics Review and the proceedings of the International Conference of the System Dynamics Society remain the two most popular vehicles for publishing GMB evidence, the more recent studies differed in two ways from the earlier review by Rouwette et al. (2002): the increased representation in mainstream OR journals, and the decrease in research published only in student dissertations. These two differences are explained below.

Publication in OR journals such as Group Decision and Negotiation (Scott et al., 2015; Škraba et al., 2003), and the Journal of the Operational Research Society (Rouwette, 2011; Scott, Cavana, & Cameron, 2014a) may represent a greater interest by GMB authors and OR journals in how GMB practices relate to similar OR practices (Ackermann, Andersen, Eden, & Richardson, 2010; Andersen et al., 2007; Lane & Oliva, 1998; Rouwette et al., 2009, 2011b).

In both periods (1987–2000 and 2001–2014), the same research findings were often published in a combination of dissertations, conference papers and journal articles; where this occurred, only the "higher" publication was included in the analysis (journal articles were included in preference to conference papers, which were in turn included in preference to dissertations). The trend toward more journal articles and fewer unpublished dissertations can be explained in two ways. First, as a manifestation of more general trends in doctoral publication, toward the conversion of doctoral research into articles suitable for journal publication (Kamler, 2008). Alternatively, it may be that this paper does not capture all unpublished dissertational research – all but one dissertation cited by Rouwette et al. (2002) were from the same institution and known to the authors of that paper.
Table 2. Framework for assessing GMB evidence.

Theme: What does GMB achieve?
Dimensions: General efficacy (individual, group, efficiency); differential efficacy – method (individual, group, efficiency).
Research questions: Can GMB produce individual and group level effects? Does GMB produce individual and group level effects more effectively than other methods? Does GMB produce individual and group level effects more efficiently than other methods?

Theme: When should GMB be applied?
Dimensions: Current prevalence and application; differential efficacy – context (individual, group, efficiency).
Research questions: When/where is GMB used now? Is GMB more effective in certain settings than other settings? Is GMB more effective in certain settings than other methods?

Theme: How should GMB be applied or improved?
Dimensions: Current use; mechanism; differential efficacy – tool.
Research questions: How is GMB used now? How does GMB affect participants? Are some GMB tools more effective than others? Are different GMB tools more appropriate for certain contexts?

Table 3. Sources of published evidence for GMB.

Source | 1987–2000 (Rouwette et al., 2002) | 2001–2014
System Dynamics Review | 3 | 7
Group Decision and Negotiation | 1 | 2
Systems Research and Behavioural Science | 0 | 3
Journal of the Operational Research Society | 0 | 2
Other journal | 0 | 2
International Conference of the System Dynamics Society (a) | 9 | 9
Other conference proceedings (a) | 0 | 0
Dissertation (Masters or PhD thesis) (a) | 6 | 1
Total | 19 | 26

(a) Where the same research was presented in more than one publication type, only one publication is included. Journal articles were included in preference to conference papers, which were in turn included in preference to dissertations.

4.2. Research designs

The quantitative evidence for GMB consists of diverse forms that differ by study type, problem type, and sample size (both the number of cases and the number of individual research subjects). These are summarised in Table 4, and each is explained in detail below.

4.2.1. Sample size

The number of research subjects in the 26 studies above varied widely from single case studies involving a group of 9 participants (Cockerill et al., 2006; Mooy et al., 2001) to 12 groups featuring 174 subjects (Škraba et al., 2007; see Table 4). The number of research subjects divided the studies into two main groups: small case-research involving a small number of research subjects (14 studies featuring between 9 and 42 subjects) and controlled experiments with large research groups (7 studies featuring 56–174 subjects).

Consideration must be given to whether this pattern of small case research and large experiments remains appropriate. Small case studies are useful for exploratory research, because they require less investment of time and resources. Many of the promising findings from these exploratory studies are yet to be investigated by larger studies with greater statistical power (Cohen, 1988), and this represents the next logical step in GMB research. The large experimental studies may also be decreasing in their relevance to understanding GMB, for reasons discussed under "study type" below.

Beyond the number of research subjects, the number of cases is also important in small group research (Levine & Moreland, 1990). The dynamics of a group may introduce a systemic effect on the responses of all research subjects in that group in ways that cannot be attributed to the workshop method. This is particularly relevant in field experiments, when trying to make comparisons between a treatment group and control group, or between two different treatment groups. Recent GMB field experiments have compared a single treatment group with a single control group (Dwyer & Stave, 2008; Eskinasi et al., 2009; van Nistelrooij et al., 2012), which is a very small sample on which to make generalised conclusions (Levine & Moreland, 1990).

One study (Rouwette et al., 2011a) is notable as a meta-analysis of previously published case studies (including Mooy et al., 2001, and Eskinasi et al., 2009). Meta-analysis has the potential to increase the statistical power of the results and to explore possible reasons for their variation (Shadish et al., 2001).
publication, toward the conversion of doctoral research into articles
suitable for journal publication (Kamler, 2008). Alternatively, it may
4.2.2. Study type
be that this paper does not capture all unpublished dissertational re-
Recent studies represent more diverse study types than those ex-
search – all but one dissertation cited by Rouwette et al. (2002) were
plored by Rouwette et al. (2002), as shown in Table 5. Recent research
from the same institution and known to the authors of that paper.
includes five different study types (Cavana, Delahaye, & Sekaran,
4.2. Research designs 2001; Cook & Campbell, 1979): experiments (with a randomized
control group), field experiments (with a non-randomised con-
The quantitative evidence for GMB consists of diverse forms that trol group), pretest–posttest comparisons (with no control group),
differ by study type, problem type, and sample size (both the number posttest surveys (with no control group), and population surveys
of cases and the number of individual research subjects). These are (with no treatment group).
summarised in Table 4, and each explained in detail below. Early research on GMB included mostly posttest surveys, with
three single group pretest/posttest surveys and two field experi-
4.2.1. Sample size ments. Before 2001 there had not been an experiment in controlled
The number of research subjects in the 26 studies above var- conditions using randomised groups; and their new presence repre-
ied widely from single case studies involving a group of 9 partici- sents a shifting balance in GMB research, between external validity
pants (Cockerill et al., 2006; Mooy et al., 2001) to 12 groups featur- against experimental control (Blaikie, 1993). These experiments in-
ing 174 subjects (Škraba et al., 2007; see Table 4). The number of volve problems where participants do not expect their recommenda-
research subjects divided the studies into two main groups: small tions to be implemented (Borštnar et al., 2011; Fokkinga et al., 2009;
case-research involving a small number of research subjects (14 stud- McCardle-Keurentjes et al., 2008, 2009; Shields, 2001, 2002; Škraba
ies featuring between 9 and 42 subjects) and controlled experiments et al., 2003, 2007). These are in contrast to applied problems, where
with large research groups (7 studies featuring 56–174 subjects). participants have an expectation that their recommendations will be
Consideration must be given to whether this pattern of small implemented, and where participants may therefore feel affected by
case research and large experiments remains appropriate. Small case or responsible for the outcomes of the GMB process. In working on
studies are useful for exploratory research, because it requires less applied problems, the recommendations of the group have greater
Table 4. Research design of recent GMB studies.

Study | Study type | Problem type | Cases | Research subjects
Shields (2001) | Experiment | Not applied | 4 | 58
Shields (2002) | Experiment | Not applied | 14 | 56
Škraba et al. (2003) | Experiment | Not applied | 5 | 95
Škraba et al. (2007) | Experiment | Not applied | 12 | 174
McCardle-Keurentjes, Rouwette, and Vennix (2008) | Experiment | Not applied | 23 | 115
Fokkinga, Bleijenbergh, and Vennix (2009) | Experiment | Not applied | 2 | 18
McCardle-Keurentjes, Rouwette, Vennix, and Jacobs (2009) | Experiment | Not applied | 26 | 135
Borštnar et al. (2011) | Experiment | Not applied | 8 | 118
Dwyer and Stave (2008) | Field experiment | Applied | 2 | 32
Eskinasi, Rouwette, and Vennix (2009) | Field experiment | Applied | 2 | 23
van Nistelrooij, Rouwette, Verstijnen, and Vennix (2012) | Field experiment | Applied | 2 | 12
Mooy, Rouwette, Valk, Vennix, and Maas (2001) | Pretest/posttest | Applied | 1 | 9
Rouwette et al. (2011a) | Pretest/posttest | Applied | 7 | 42
Scott, Cavana, and Cameron (2013) | Pretest/posttest | Applied | 4 | 30
van den Belt (2004) | Pretest/posttest | Applied | 2 | 19
Cockerill, Passell, and Tidwell (2006) | Posttest | Applied | 1 | 9
Beall and Ford (2010) | Posttest | Applied | 9 | na
Rouwette (2011) | Posttest | Applied | 3 | 20
Happach et al. (2012) | Posttest | Applied | 86 | na
Videira, Lopes, Antunes, Santos, and Casanova (2012) | Posttest | Applied | 1 | 14
Rouwette, Bleijenbergh, and Vennix (2014) | Posttest | Applied | 1 | 11
Scott et al. (2014a) | Posttest | Applied | 4 | 40
Scott et al. (2014b) | Posttest | Applied | na | 30
Luna-Reyes et al. (2006) | Population survey | na | na | na
Martinez-Moyano and Richardson (2013) | Population survey | na | na | 27
Scott et al. (2015) | Population survey | na | na | 12

Applied = participants expected their proposed interventions would be applied to a real world setting; Not applied = participants knew that their proposed interventions would not be applied to a real world setting; na = not applicable or not reported.

Table 5. Comparison of research characteristics in early (1987–2000) and recent (2001–2014) GMB studies.

Study type | Studies published 1987–2000 | Studies published 2001–2014
Experiment | 0 | 8
Field experiment | 2 | 3
Pretest/posttest | 3 | 4
Posttest | 14 | 8
Population survey | 0 | 3
Total | 19 | 26

The combination of high validity applied settings and high control experimental settings should theoretically increase the reliability of the findings through triangulation (Jick, 1979). Other authors (Andersen et al., 1997; Rouwette et al., 2002; Scott et al., 2013) have supported the idea that these various research designs are each valuable and make important contributions. Unfortunately, each method explores slightly different topics, reducing the advantage of mixed methods research. Additionally, as experimental research has increased in prevalence it may simultaneously be declining in relevance, as GMB literature places increasing emphasis on social and behavioural dynamics. Early GMB research focussed primarily on the individual learning effects of participants (Andersen, Maxwell, Richardson, & Stewart, 1994; Richardson, Andersen, Maxwell, & Stewart, 1994). It is thought that these learning effects can be studied in controlled experiments, as the cognitive processes are thought to be influenced by the workshop method more than the decision context (Capelo & Dias, 2009; Cavaleri et al., 2002; Gary & Wood, 2007, 2011; Hopper & Stave, 2008; Jensen, 2005; Kopainsky et al., 2009, 2010a, 2010b, 2011a, 2011b, 2012; Kopainsky & Saldarriaga, 2012; Kopainsky & Sawicka, 2011; Langley & Morecroft, 2004; Maani & Maharaj, 2003; Moxnes, 2004; Mulder et al., 2011; Plate, 2010; Stouten et al., 2012; Yasarcan, 2010). More recently, there has been a split between research on learning effects of individuals through using system dynamics methods and the interpersonal effects of groups working together to create system dynamics models (GMB). Recent theory in group model building focuses on interpersonal persuasion (e.g. Rouwette et al., 2011a) and conditions that build group trust, agreement, and fuel further working together (e.g. Black & Andersen, 2012). Experimental research is likely to be less applicable for studying these effects, as the social dynamics likely differ between group decisions in applied and not applied contexts.

4.2.3. Measurement tools

The studies reported in Rouwette et al. (2002) include only surveys and pretest/posttest comparisons. In recent literature, a number of new tools were used, including interaction analysis (van Nistelrooij et al., 2012) and content analysis of the workshop conversations themselves (McCardle-Keurentjes et al., 2008), as well as a pretest/posttest/delayed comparison (Scott et al., 2013). Several studies compared multiple evaluation methods, either comparing survey results with interview results (Rouwette, 2011; Scott et al., 2015), or comparing survey results with pretest/posttest comparison (Scott et al., 2013). Each method revealed compatible results, which increases our confidence in the validity of the instruments used.

Surveys are convenient but limited (Baddeley, 1979). Individuals do not provide reliable descriptions of their own cognitive change (Doyle, 1997), due to: introspection illusion (where individuals describe what they think must have happened, rather than actual recollections, Nisbett & Wilson, 1977); hindsight bias (where individuals assume their current view is the one they have always held, Tversky & Kahneman, 1973); or subject bias (where individuals report what they think researchers want to hear, Orne, 1962). Recent literature includes a range of methods that do not rely on participant introspection (Fokkinga et al., 2009; McCardle-Keurentjes et al., 2008; Scott et al., 2013; van Nistelrooij et al., 2012). These more objective tools are preferable to increase our insight into actual (rather than perceived) effects of GMB.
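As one concrete illustration of a measure that does not rely on introspection, the sketch below scores group consensus directly from participants' rated decision preferences before and after a workshop, in the spirit of the pretest/posttest preference-tracking tools cited above. The ratings are hypothetical, and the distance-based index is one simple possibility rather than the instrument used in any reviewed study.

```python
# Illustrative only: consensus scored from rated decision preferences
# rather than from self-reported survey items. Consensus is proxied by the
# mean pairwise Euclidean distance between participants' preference
# vectors; a smaller posttest value indicates convergence. Data invented.
from itertools import combinations
import math

def mean_pairwise_distance(prefs):
    pairs = list(combinations(prefs, 2))
    return sum(math.dist(a, b) for a, b in pairs) / len(pairs)

# Ratings (1-7) of four policy options by five hypothetical participants
pretest  = [[7, 2, 5, 1], [3, 6, 2, 5], [5, 5, 1, 6], [2, 7, 4, 3], [6, 3, 6, 2]]
posttest = [[6, 3, 5, 2], [5, 4, 4, 3], [5, 4, 3, 3], [4, 5, 4, 3], [6, 4, 5, 2]]

pre, post = mean_pairwise_distance(pretest), mean_pairwise_distance(posttest)
print(f"mean pairwise distance: pretest {pre:.2f}, posttest {post:.2f}")
```

A delayed third measurement, as in the pretest/posttest/delayed design cited above (Scott et al., 2013), would indicate whether any such convergence endures beyond the workshop itself.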
4.3. What group model building achieves

There have now been at least 45 publications from 1987 to 2014 that provide quantitative evidence relating to the effectiveness of GMB. Almost all studies provide evidence that supports the efficacy of GMB (see Table 6). We can say with some confidence that GMB produces a range of cognitive and interpersonal outcomes that are considered beneficial to group decision processes.

Table 6. Number of studies reporting on each outcome in early (1987–2000) and recent (2001–2014) GMB literature.

Outcome | Studies 1987–2000 | Studies 2001–2014
Individual:
Positive reaction | 6 | 2
Insight | 19 (c) | 8 (c)
Commitment to conclusions | 12 | 11 (a)
Enduring individual effects | 1 (c) | 1 (c)
Group:
Communication quality | 12 | 12 (a, c)
Shared language | 1 | 2
Persuasion | 0 | 3 (c)
Consensus | 15 (b, c) | 10 (c)
Decision quality | 0 | 4 (c)
Power levelling | 0 | 1 (c)
Group cohesion | 0 | 2
Enduring group effects | 1 (c) | 1 (c)
Organisation:
System changes | 2 (c) | 2 (c)
Method:
Efficiency | 10 | 5 (c)

(a) Includes 1 finding that did not support GMB.
(b) Includes 2 findings that did not support GMB.
(c) Includes measurement methods that do not rely on participant perception.

4.3.1. General efficacy

Rouwette et al. (2002) described the outcomes of GMB as occurring at four levels: individual; group; organisation and method (see Table 6). Many of the outcomes explored in early research continue to be evaluated in recent studies, alongside several new outcomes. The new outcomes are persuasion, decision quality, power levelling, and group cohesion. Each is considered further below.

GMB studies on persuasion include both studies on the outcome (Eskinasi et al., 2009; Rouwette et al., 2011; Scott et al., 2013) and the mechanism (Eskinasi et al., 2009; Rouwette et al., 2011; Scott et al., 2014b).

Decision quality has been rated by participants (van den Belt et al., 2004; Rouwette et al., 2014), and also assessed using the number of variables considered (Dwyer & Stave, 2008; Fokkinga et al., 2009) and the extent to which discussion of the problem preceded discussion of the solution (Dwyer & Stave, 2008).

The study on power levelling used interaction analysis to demonstrate that, in GMB, less-powerful members are less disadvantaged in their contribution to discussion (van Nistelrooij et al., 2012).

In two studies, participants rated the dynamics of the group at the conclusion of the GMB intervention as more cohesive and collaborative than at the beginning (van den Belt, 2004; Videira et al., 2012).

Although many of the same outcomes were reported, the quality of evidence has improved. In particular, there has been a shift from survey results reporting on participants' perceptions of what the intervention achieved, to more objective methods. Before 2001, efficiency and communication quality had only been reported on the basis of participants' perceptions. Now, these have been reported using more objective measures (Borštnar et al., 2011; McCardle-Keurentjes et al., 2009). Additionally, there has been a greater focus on objective measures in the reporting of insight (Eskinasi et al., 2009; Rouwette et al., 2011; Scott et al., 2013) and consensus (Scott et al., 2013; Škraba et al., 2003, 2007).

The outcomes reported in GMB literature have been observed at different stages: during modelling workshops (e.g. McCardle-Keurentjes et al., 2008; van Nistelrooij et al., 2012), at the conclusion of the workshops (e.g. Rouwette, 2011; Scott et al., 2014a), or a long time after the intervention (e.g. Huz, 1999; Scott et al., 2013).
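To give a flavour of how interaction analysis can quantify an outcome such as power levelling, the sketch below computes the Gini coefficient of speaking turns per participant from a coded transcript; a lower value means contributions are spread more evenly across the group. The turn counts are hypothetical, and the coding scheme actually used by van Nistelrooij et al. (2012) is considerably richer than this single summary statistic.

```python
# Illustrative only: Gini coefficient of speaking turns as a crude proxy
# for power levelling in a facilitated session. Turn counts are invented.
def gini(counts):
    xs = sorted(counts)          # ascending turn counts per participant
    n, total = len(xs), sum(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n

normal_meeting = [52, 30, 9, 5, 4]     # two senior members dominate
gmb_workshop = [24, 22, 20, 18, 16]    # turns spread more evenly

print(f"normal meeting Gini: {gini(normal_meeting):.2f}")  # ~0.48
print(f"GMB workshop Gini:   {gini(gmb_workshop):.2f}")    # ~0.08
```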
4.3.2. Differential efficacy by method

Several studies directly compared GMB to other methods, but the relative effectiveness and appropriate context for GMB versus other methods remains unclear. Four field experiments have been used to compare GMB to traditional meeting facilitation. Two studies compared treatment and control groups in different organisations, and found greater consensus and commitment with GMB (Dwyer & Stave, 2008; Huz, 1999), but it is not clear that the groups in either field experiment were comparable. van Nistelrooij et al. (2012) tested a control and treatment group in the same organisation, and found that GMB is associated with greater power-levelling than a normal meeting. One further field experiment compared the conditions for persuasion and found GMB more effective than a "study day" (Eskinasi et al., 2009).

Three experiments compared GMB to other methods using randomised control trials, studying university students working on abstract problems. Two compared GMB to traditional facilitation, and found that GMB is associated with more sharing of hidden knowledge (McCardle-Keurentjes et al., 2008), but not more shared understanding, communication or commitment (McCardle-Keurentjes et al., 2009). Fokkinga et al. (2009) compared participation in the creation of causal loop diagrams to studying diagrams completed by others, and noted improved outcomes associated with GMB at a group and individual level. As discussed earlier, the artificial context for these experiments may affect the behaviour of participants, and therefore these studies may lack external validity (Shadish et al., 2001).

Several studies ask participants to compare the outcomes of the meeting to a hypothetical "normal" meeting (Mooy et al., 2001; Rouwette, 2011; Scott et al., 2014a; Vennix & Rouwette, 2000; Vennix et al., 1993). Participants in all studies believed that GMB results in faster and better outcomes than a normal meeting.

There is an apparent contradiction between the experimental results on shared understanding, communication quality, and commitment that did not support GMB (McCardle-Keurentjes et al., 2009), and both the field experiments (Dwyer & Stave, 2008; Eskinasi et al., 2009; Huz, 1999) and the posttest surveys without a control group (Rouwette, 2011; Scott et al., 2014a; Vennix & Rouwette, 2000; Vennix et al., 1993) that did support GMB as more effective than normal meetings. This apparent contradiction highlights the challenges of applied business research (Shadish et al., 2001): the abstract experimental research has unknown external validity; the small number of cases in the field experiments means the treatment and control groups may not be comparable; and the method used in the applied research (where participants compare their experience to another, hypothetical situation) is of unknown reliability (Scott et al., 2014a). On balance, the bulk of the evidence supports GMB as likely to be more effective than normal meetings at achieving individual and group outcomes in the group decision contexts in which it has been tested.

There have also been several calls to understand how GMB is both similar to and different from other related methods (Ackermann et al., 2010; Andersen et al., 2007; Lane & Oliva, 1998; Rouwette et al., 2009, 2011b). GMB has been compared only to "traditional facilitation" and "normal" meetings, and not to other problem structuring methods. The current state of our understanding with respect to what GMB achieves is summarised in Table 7 below. The research questions have been split for clarity and improved granularity where the results differ (for example, "Can GMB produce individual and group level effects?" is reported in Table 7 as "Can GMB produce individual level effects?" and "Can GMB produce group level effects?").
Table 7. Changes in knowledge about what GMB achieves.

Research question: Can GMB produce individual level effects?
Evidence 1987–2000: Agreed by participants in multiple studies.
Additional evidence 2001–2014: Observed independently in multiple studies, e.g. through changed decision preferences or elicited mental models.

Research question: Can GMB produce group level effects?
Evidence 1987–2000: Agreed by participants in multiple studies.
Additional evidence 2001–2014: Observed independently in multiple studies, e.g. through aligned decision preferences or shared knowledge.

Research question: Can GMB produce individual level effects more effectively than normal meetings?
Evidence 1987–2000: Agreed by participants in multiple studies, in comparison to hypothetical normal meetings.
Additional evidence 2001–2014: Rated more favourably by participants in multiple field experiments, in comparison with normal meetings.

Research question: Can GMB produce group level effects more effectively than normal meetings?
Evidence 1987–2000: Agreed by participants in multiple studies, in comparison to hypothetical normal meetings.
Additional evidence 2001–2014: Rated more favourably by participants in multiple field experiments when compared to normal meetings; mixed results in controlled experiments.

Research question: Can GMB produce individual and group level effects more effectively than other modelling methods?
Evidence 1987–2000: No evidence.
Additional evidence 2001–2014: No additional evidence.

Research question: Can GMB produce individual and group level effects more efficiently than normal meetings?
Evidence 1987–2000: Agreed by participants in multiple studies, in comparison to hypothetical meetings.
Additional evidence 2001–2014: No additional evidence.

Research question: Can GMB produce individual and group effects more efficiently than other modelling methods?
Evidence 1987–2000: No evidence.
Additional evidence 2001–2014: No additional evidence.

4.4. When group model building should be used

Group model building has now been evaluated and shown to be effective in a range of contexts, including policy making (Beall & Ford, 2010; Cockerill et al., 2006; Dwyer & Stave, 2008; van Nistelrooij et al., 2012), strategy development (Rouwette, 2011), strategy implementation (Scott et al., 2014a), and both intra- (Scott et al., 2013) and inter-organisational (Rouwette, 2011) agreement. All of these studies report positive outcomes; this provides some evidence to indicate use of GMB in these contexts, but nothing to guide when not to use group model building.

4.4.1. Current prevalence and application

These case studies provide examples of where GMB has been applied, but are likely to be a tiny sample of GMB practice, and provide no evidence about its prevalence or distribution. There remain unanswered questions about when and why GMB is chosen by practitioners, and when and why GMB practitioners are chosen by clients (see "negotiating entry" – Mingers & Rosenhead, 2004). In a related field, members of the OR profession were surveyed about their use of and decision to use soft systems methodology (Mingers & Taylor, 1992). A similar survey approach could be used in the system dynamics profession to explore when, how often, and why GMB is used.

Despite its apparent virtues, GMB is unlikely to be the best approach to every problem. Two factors appear important for choosing when to apply GMB: the outcomes desired in a given situation; and the likely effectiveness of GMB in achieving those outcomes compared to alternate methods. One study related the reported outcomes of GMB to potential-client objectives in a range of decision contexts (Scott et al., 2015). This study explored only a small range of contexts (group decisions commissioned by public servants in New Zealand), but begins the quantitative evidence base for understanding clients' objectives.

4.4.2. Differential efficacy by context

There remains no direct comparison of GMB effectiveness in different settings, to suggest when GMB might be most effective. Several studies have used identical or similar survey instruments (Dwyer & Stave, 2008; Eskinasi et al., 2009; Mooy et al., 2001; Rouwette, 2011; Scott et al., 2014a; Vennix & Rouwette, 2000; Vennix et al., 1993). Meta-analysis of these results could add greater statistical power to the analysis, as well as the opportunity to directly compare results from different contexts. Such a comparison has not yet been published. One unpublished dissertation described the conditions under which GMB supports learning, based on qualitative study of ten cases (Thompson, 2009).

It is only part of the story to consider that GMB is effective in certain situations, or that it is more effective in some situations than others. When faced with a problem setting, there are a large number of possible methods to choose from. As described in Section 4.1 above, GMB evidence should ideally identify when GMB is likely to be the best available approach to meeting the clients' objectives. Four field experiments compare GMB effectiveness to "normal" meetings, with findings that are likely to be most applicable in comparable settings. Two studies concern intra-organisational policy decisions (Huz, 1999; van Nistelrooij et al., 2012) and two concern inter-organisational policy decisions (Dwyer & Stave, 2008; Eskinasi et al., 2009). Two studies were conducted with a government client (Huz, 1999; van Nistelrooij et al., 2012) and two with stakeholders from the community sector (Dwyer & Stave, 2008; Eskinasi et al., 2009). All four studies reported GMB as more effective than "normal" meetings. No comparisons were found for other contexts.

Jackson and Keys (1984) provide one theoretical framework for understanding when to use different methods. Though this framework does not mention GMB by name, the focus in GMB literature on group participation and complex problems (Vennix, 1999) suggests GMB would be most effective in "complex pluralist" settings. This framework has not been tested empirically.

The empirical basis for selecting GMB is further complicated by multi-methodology approaches (Mingers & Brocklesby, 1997; Munro & Mingers, 2002). While some authors suggest choosing a single method for each system (Flood & Jackson, 1991), others suggest using mixed and multi-methodology, and adapting the selection of methods as the intervention progresses and the problem is better understood (Mingers & Brocklesby, 1997). Creating evidence-based guidelines for method selection in a multi-method environment may be an unreasonable expectation, due to the many variables and combinations that could be explored. The current state of our understanding with respect to when GMB should be used is summarised in Table 8 below.
Table 8. Changes in knowledge about when GMB should be used.

Research question: When/where is GMB used now?
Evidence 1987–2000: No evidence.
Additional evidence 2001–2014: No additional evidence.

Research question: Can GMB produce individual and group level effects in a range of settings?
Evidence 1987–2000: Case studies reporting effectiveness in: government, corporate, and community sectors; and both inter-organisational and intra-organisational settings.
Additional evidence 2001–2014: Additional case studies in similar settings.

Research question: Is GMB more effective in certain settings than other settings?
Evidence 1987–2000: No evidence.
Additional evidence 2001–2014: No additional evidence.

Research question: Is GMB more effective in certain settings than other methods would be in those settings?
Evidence 1987–2000: Single field experiment reporting GMB as more effective than "normal" meetings in intra-organisational decisions in a government setting.
Additional evidence 2001–2014: Multiple studies reporting GMB as more effective than "normal" meetings in both intra-organisational and inter-organisational decisions, and in government and community settings.

4.5. How to improve group model building

Group model building has evolved organically as an offshoot of system dynamics modelling. System dynamics was created within a positivist paradigm as a discipline for understanding the nonlinear behaviour of complex systems over time using stocks and flows, internal feedback loops and time delays (Forrester, 1961). The reported positive effects on group trust and agreement (Black & Andersen, 2012) were an unintended consequence that is now exploited by GMB practitioners. These effects were discovered and cultivated by talented practitioners, but further development will depend on adding more "science" to the "craft" (Andersen et al., 1997). There is evidence of varying quality for how it is used now, how it works, and how to select the right tools for the job, as explored below.

4.5.1. Current use

There is limited evidence for how GMB is currently applied. One literature review describes the processes used in 86 published case studies (Happach et al., 2012). One survey explores practitioners' views on best practice in system dynamics (Martinez-Moyano & Richardson, 2013), which includes reference to group and participatory methods such as GMB. Another survey documents the time commitment to various activities (Luna-Reyes et al., 2006). There is no direct evidence for which tools are used in practice, or how often. In a related field, a survey was used to understand when different tools are used and how they are combined (Munro & Mingers, 2002), and a similar study could usefully be undertaken with respect to GMB.

The workshop tools used in GMB literature vary widely. Many studies do not describe the process in a way that would allow them to be repeated (Andersen et al., 1997). GMB "scripts" (Andersen & Richardson, 1997; Hovmand et al., 2012) provide a new approach to documenting interventions, and describe the tools used with greater consistency and precision. The processes used in GMB literature range from: purely qualitative causal loop diagrams (Fokkinga et al., 2009) through to quantitative simulation (Eskinasi et al., 2009); participant-led modelling (Scott et al., 2013) and expert-led modelling (van den Belt, 2004); and single short workshops (Scott et al., 2014a) to in-depth GMB interventions lasting up to a year (Rouwette, 2011). It is therefore difficult to make general statements about the efficacy of GMB when the term describes such a wide range of processes.

4.5.2. Mechanism

There have recently been efforts to understand how and why GMB affects participants at an individual and group level (Eskinasi et al., 2009; Fokkinga et al., 2009; McCardle-Keurentjes et al., 2008; Rouwette et al., 2011a; Scott et al., 2013, 2014a; van Nistelrooij et al., 2012). These studies provide quantitative evidence to support a number of causative mechanisms that may contribute to its effectiveness. The concepts of cognitive bias (Scott et al., 2014a), persuasion (Eskinasi et al., 2009; Rouwette et al., 2011a; Scott et al., 2014b), and models as boundary objects (Black, 2013; Black & Andersen, 2012; Scott et al., 2014b; Zagonel, 2002) provide a theoretical basis for understanding why GMB is effective, and for identifying new tools to explore. Quantitative evidence has been published to support the modelling as persuasion (Eskinasi et al., 2009; Rouwette et al., 2011a; Scott et al., 2013, 2014b) and boundary object (Scott et al., 2014b) mechanisms. One study compares these proposed mechanisms, and finds comparatively greater support for the boundary object mechanism as an explanation for participants' experience of a GMB intervention (Scott et al., 2014b).

4.5.3. Differential efficacy – tools

GMB describes a range of system dynamics tools that have been applied in many different ways (Andersen et al., 1997). There is limited evidence for the effectiveness of different tools. Seven of the reviewed papers used only qualitative GMB tools, and 16 included quantitative GMB methods, but none directly compare the two. Rouwette et al. (2002) compared data from different case studies to provide indirect evidence that commitment, consensus and system change were more likely in quantitative models (than in qualitative-only models), but this could have been due to the longer time-commitment involved in the case studies that used quantitative GMB. Despite the development of a possible framework (Coyle, 2000), there has been no further study on this topic, which limits the ability of practitioners to select the most appropriate method for the job.

Several studies compare the presence or absence of GMB components, and how these contribute to learning outcomes. Experiments have evaluated the importance of the presence of a facilitator (Borštnar et al., 2011; Shields, 2001), the creation of causal loop diagrams (Fokkinga et al., 2009), and the opportunity for group feedback and discussion (Borštnar et al., 2011; Škraba et al., 2003, 2007). One study explored the relationship between the type of modelling used and the importance of facilitation (Shields, 2001), and suggested that facilitation is more important in simulation modelling than in conceptual analysis.

Other studies, without control groups, ask participants to rate the contribution of different components to the success of the intervention (Eskinasi et al., 2009; Scott et al., 2014a; Vennix & Rouwette, 2000; Vennix et al., 1993). There are limitations to the ability of individuals to describe their own learning (Doyle, 1997; Nisbett & Wilson, 1977), and the reliability of these findings is not clear.

Several authors have called for greater exploration into why some GMB interventions fail (Andersen et al., 1997; Rouwette & Vennix, 2006). To date this has been explored only qualitatively (Eskinasi & Fokkema, 2006; Größler, 2007), and it is not clear whether this is due to the context (see Section 4.4) or the methods used. Greater clarity in this area could be used to improve intervention design. The limited state of our understanding with respect to how GMB should be applied or improved is summarised in Table 9 below.
Table 9. Changes in knowledge about how GMB should be applied or improved.

Research question: How is GMB used now?
Evidence 1987–2000: No evidence.
Additional evidence 2001–2014: One literature review on case study practice, one survey on best practice, one survey on time required for each component.

Research question: How does GMB affect participants?
Evidence 1987–2000: Some explanations proposed, no quantitative evidence.
Additional evidence 2001–2014: Some evidence for modelling as persuasion and boundary object mechanisms.

Research question: Are some GMB tools more effective than others?
Evidence 1987–2000: Meta-analysis of quantitative and qualitative methods suggests quantitative methods more effective.
Additional evidence 2001–2014: Several experiments demonstrating the importance of facilitation, causal loop diagrams, and the opportunity for group feedback. Multiple studies ask participants to rate the importance of different components; a single experiment demonstrated the importance of facilitation.

Research question: Are different GMB tools more appropriate for certain contexts?
Evidence 1987–2000: Meta-analysis suggests quantitative methods produce greater commitment, consensus and system change, suggesting this tool effective.
Additional evidence 2001–2014: No additional evidence.

5. Conclusions

Twelve years after Rouwette et al. (2002) considered the state of GMB research, much has changed, but many challenges remain. There has been greater use of more objective measures that do not rely on participant recollection, new measurement methods have explored different outcomes, and several studies lift the lid on what happens during GMB sessions (rather than only evaluating change at the end). There are several interesting results from small studies that may warrant further attention: what clients want, the duration of participant change, persuasion, the model as boundary object, power-levelling, and the discovery of hidden knowledge. In almost all cases, GMB is reported to be effective in supporting individual and collective changes considered desirable in group-decision contexts.

Although the overall picture is now much richer and more complex, it is far from comprehensive. Evidence is still based on small sample sizes or single case studies, although the use of common methods may allow a meta-analysis in the future. Research is usually lacking in either experimental control or external validity, and each design has been used to measure different things. Finally, there is considerable scope for comparison between GMB and other methods, or between different GMB tools. For many of the research questions, the current literature lacks objective measures, external validity and direct comparison.

Continuing with the status quo seems unlikely to yield different results. More pilot studies will bring more loose ends (Andersen et al., 1997). Further controlled experiments in unrealistic conditions will be discounted by those who see group model building as a social process (Scott et al., 2014b). Additional survey-based studies will be challenged by research that suggests individuals are unreliable witnesses to their own learning processes (Nisbett & Wilson, 1977). Future research in group model building would therefore benefit from three main shifts: from single cases to multiple cases; from controlled settings to applied settings; and by augmenting survey results with more objective measures. Each of these shifts is explored below.

GMB literature contains a large number of pilot studies with interesting and novel findings. These were useful for determining lines of enquiry that warrant further investigation, but despite an increasing number of promising leads, GMB literature is not making the logical step to increased research scale. The findings from these pilot studies should be explored with comparative research across multiple cases (Ragin, 1987, 2000).
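To make this cross-case logic concrete, the sketch below (in Python) shows a minimal truth-table comparison in the spirit of Ragin's comparative method. The cases, the three intervention features and the consensus outcome are all hypothetical, invented purely for illustration; they are not drawn from the studies reviewed here.

```python
from collections import defaultdict

# Hypothetical cases, coded for the presence (1) or absence (0) of three
# intervention features, plus an observed outcome (consensus reached).
# All names and values are invented for illustration only.
cases = {
    "Case A": {"facilitation": 1, "simulation": 1, "client_led": 0, "consensus": 1},
    "Case B": {"facilitation": 1, "simulation": 0, "client_led": 0, "consensus": 1},
    "Case C": {"facilitation": 0, "simulation": 1, "client_led": 1, "consensus": 0},
    "Case D": {"facilitation": 1, "simulation": 1, "client_led": 1, "consensus": 1},
    "Case E": {"facilitation": 0, "simulation": 0, "client_led": 1, "consensus": 0},
}
conditions = ("facilitation", "simulation", "client_led")

# Build a truth table: group cases by their configuration of conditions,
# then compute the consistency of the outcome within each configuration.
truth_table = defaultdict(list)
for name, case in cases.items():
    configuration = tuple(case[c] for c in conditions)
    truth_table[configuration].append((name, case["consensus"]))

for configuration, members in sorted(truth_table.items()):
    outcomes = [outcome for _, outcome in members]
    consistency = sum(outcomes) / len(outcomes)  # share of cases reaching consensus
    print(dict(zip(conditions, configuration)), members, f"consistency={consistency:.2f}")
```

Even in this toy form, grouping cases by configuration makes visible which combinations of intervention features consistently co-occur with the outcome, which is precisely the step that isolated pilot studies cannot take.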
There are different views within the OR community on how to gather evidence regarding group processes. Some advocate controlled experiments to understand analytical processes (Finlay, 1998), whereas others advocate the use of social science methods to understand group and social processes in applied contexts (Checkland, 1999). There has been a gradual change in emphasis in GMB literature, from individual-level learning outcomes (e.g. Andersen et al., 1994; Richardson et al., 1994) to the study of group and social processes that build trust and agreement (e.g. Black & Andersen, 2012; Rouwette et al., 2011a; Scott et al., 2014b), which suggests that empirical studies should shift their emphasis from unrealistic controlled experiments to field experiments on applied problems. There are limits to what we can learn about these social processes through research where participants are not invested in the outcomes, and where social dynamics do not mirror practical applications.

It is on the basis of these two points that we conclude that the biggest opportunities to advance the group model building literature lie in field experiments exploring multiple comparable case studies that differ by a small number of key variables. This can be achieved in two ways. Where resources allow, large research projects can compare multiple interventions (e.g. Huz, 1999). Alternatively, multiple projects can be conducted in similar ways to allow for meaningful meta-analysis between different studies (e.g. Rouwette et al., 2011).

This latter approach will require the use of consistent research methods, including providing detailed context (Andersen et al., 1997; Scott et al., 2015), consistent description of the intervention (e.g. "scripts" – Hovmand et al., 2012) and common measurement tools. The only tool that has been reported in multiple studies is the CICC survey tool (Vennix et al., 1993), and continued use of this tool will increase the sample size available for meta-analysis. However, this tool relies on self-reporting and is therefore unreliable (Nisbett & Wilson, 1977; Orne, 1962; Tversky & Kahneman, 1973), and should be augmented or supplemented with repeated use of more objective measures.
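To illustrate what consistent reporting buys, the following sketch pools a standardised effect size (Cohen's d; Cohen, 1988) across several studies using fixed-effect inverse-variance weighting, with a standard large-sample approximation for the variance of d. The study names, effect sizes and sample sizes are invented for illustration; the point is that pooling of this kind is only possible when studies report comparable measures.

```python
import math

# Hypothetical per-study results: standardised mean difference (Cohen's d)
# and the sample sizes of the two groups. Numbers invented for illustration.
studies = [
    {"name": "Study 1", "d": 0.45, "n1": 18, "n2": 17},
    {"name": "Study 2", "d": 0.62, "n1": 25, "n2": 24},
    {"name": "Study 3", "d": 0.10, "n1": 12, "n2": 14},
]

def variance_of_d(d, n1, n2):
    # Large-sample approximation to the sampling variance of Cohen's d.
    return (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))

# Fixed-effect inverse-variance pooling: weight each study by 1/variance.
weights = [1 / variance_of_d(s["d"], s["n1"], s["n2"]) for s in studies]
pooled_d = sum(w * s["d"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled d = {pooled_d:.2f}, 95% CI = "
      f"({pooled_d - 1.96 * pooled_se:.2f}, {pooled_d + 1.96 * pooled_se:.2f})")
```

The calculation itself is trivial; the obstacle in the GMB literature is that studies rarely report effects on a common scale, which is why consistent instruments matter more than sophisticated synthesis techniques.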
A number of more objective measurement tools have been described in the literature, including two promising categories: tools that track participants' decision preferences over time (pretest–posttest – Rouwette et al., 2011a; pretest–posttest–delayed – Scott et al., 2013), and those that use session transcripts as a basis to analyse communication between participants (content analysis – McCardle-Keurentjes et al., 2008; sequential analysis – Franco & Rouwette, 2011; interaction analysis – van Nistelrooij et al., 2012). The analysis of decision preferences is useful for measuring effectiveness and requires only a small investment of time and effort. Analysis of session transcripts is more time consuming, but helps to lift the veil on what actually occurs during a group model building session. The use of these existing tools should be preferred where possible to the creation of new tools, as this will increase the available sample size for meta-analysis.
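As a minimal sketch of the first category, the code below computes two simple statistics from hypothetical pretest and posttest preference ratings: the mean shift in the group's preference for an option, and the change in dispersion as a crude proxy for consensus. Published pretest–posttest studies (e.g. Rouwette et al., 2011a; Scott et al., 2013) use more elaborate instruments and analyses; the ratings here are invented.

```python
from statistics import mean, stdev

# Hypothetical preference ratings (1-7 scale) for one policy option,
# collected from the same eight participants before and after a workshop.
pretest  = [2, 3, 5, 2, 6, 3, 4, 2]
posttest = [5, 5, 6, 4, 6, 5, 5, 4]

# Mean shift shows whether the group moved toward the option; a fall in
# standard deviation shows convergence, i.e. greater consensus.
shift = mean(posttest) - mean(pretest)
convergence = stdev(pretest) - stdev(posttest)

# Per-participant changes show whether the shift is broad-based or
# driven by a few individuals.
changes = [post - pre for pre, post in zip(pretest, posttest)]

print(f"mean shift: {shift:+.2f}")
print(f"reduction in dispersion: {convergence:+.2f}")
print(f"participants moving toward the option: {sum(c > 0 for c in changes)}/{len(changes)}")
```

Measures of this kind are objective in the sense that they record stated preferences at fixed points in time rather than asking participants, after the fact, whether they believe they changed their minds.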
Robust, comparative field experiments, using consistent research methods, provide an opportunity for GMB research to move beyond anecdote, experience, case studies and unrealistic controlled experiments. Such a research approach could be used to help inform practitioners on both when and how to use group model building approaches.
References

Ackermann, F., Andersen, D. F., Eden, C., & Richardson, G. P. (2010). Using a group decision support system to add value to group model building. System Dynamics Review, 26(4), 335–346.
Akkermans, H. A., & Vennix, J. A. (1997). Clients' opinions on group model-building: An exploratory study. System Dynamics Review, 13(1), 3–31.
Allsop, J., & Taket, A. (2003). Evaluating user involvement in primary healthcare. International Journal of Healthcare Technology & Management, 5, 34–44.
Andersen, D. F., Maxwell, T. A., Richardson, G. P., & Stewart, T. R. (1994). Mental models and dynamic decision making in a simulation of welfare reform. In Proceedings of the 1994 international system dynamics conference. Chestnut Hill: System Dynamics Society.
Andersen, D. F., & Richardson, G. P. (1997). Scripts for group model building. System Dynamics Review, 13(2), 107–129.
Andersen, D. F., Richardson, G. P., & Vennix, J. A. M. (1997). Group model building: Adding more science to the craft. System Dynamics Review, 13(2), 187–203.
Andersen, D. F., Vennix, J. A. M., Richardson, G. P., & Rouwette, E. A. J. A. (2007). Group model building: Problem structuring, policy simulation and decision support. Journal of the Operational Research Society, 58(5), 691–694.
Aronson, E., Wilson, T. D., & Brewer, M. (1998). Experimental methods. In D. Gilbert, S. Fiske, & G. Lindzey (Eds.), The handbook of social psychology (pp. 99–142). New York: Random House.
Baddeley, A. (1979). The limitations of human memory: Implications for the design of retrospective surveys. The Recall Method in Social Surveys, 9, 13–27.
Beall, A. M., & Ford, A. (2010). Reports from the field: Assessing the art and science of participatory environmental modeling. International Journal of Information Systems and Social Change, 1(2), 72–89.
Berry, H., Bowman, S. R., Hernandez, R., & Pratt, C. (2006). Evaluation tool for community development coalitions. Journal of Extension, 44. http://www.joe.org/joe/2006december/tt2.shtml (accessed 24.05.2014).
Black, L. J. (2013). When visuals are boundary objects in system dynamics work. System Dynamics Review, 29(2), 70–86.
Black, L. J., & Andersen, D. F. (2012). Using visual representations as boundary objects to resolve conflicts in collaborative model-building approaches. Systems Research and Behavioral Science, 29, 194–208.
Blaikie, N. (1993). Approaches to social enquiry. Cambridge: Polity.
Borštnar, M. K., Kljajić, M., Škraba, A., Kofjač, D., & Rajkovič, V. (2011). The relevance of facilitation in group decision making supported by a simulation model. System Dynamics Review, 27(3), 270–293.
Bryant, J. W., & Darwin, J. A. (2004). Exploring inter-organisational relationships in the health service: An immersive drama approach. European Journal of Operational Research, 152, 655–666.
Burlingame, G. M., MacKenzie, K. R., & Strauss, B. (2004). Small group treatment: Evidence for effectiveness and mechanisms of change. Bergin & Garfield's Handbook of Psychotherapy and Behavior Change, 5, 647–696.
Capelo, C., & Dias, J. F. (2009). A system dynamics-based simulation experiment for testing mental model and performance effects of using the balanced scorecard. System Dynamics Review, 25, 1–34.
Cavaleri, S., Raphael, M., & Filletti, V. (2002). Evaluating the performance efficacy of system thinking tools. In Proceedings of the 2002 international system dynamics conference. Albany: System Dynamics Society.
Cavana, R. Y., Delahaye, B. L., & Sekaran, U. (2001). Applied business research: Qualitative and quantitative methods. Brisbane: Wiley.
Charnley, S., & Engelbert, B. (2005). Evaluating public participation in environmental decision-making: EPA's superfund community involvement program. Journal of Environmental Management, 77, 165–182.
Checkland, P. (1999). Systems thinking, systems practice; soft systems methodology: A 30-year retrospective. West Sussex, England: John Wiley and Sons.
Cockerill, K., Passell, H., & Tidwell, V. (2006). Cooperative modeling: Building bridges between science and the public. Journal of the American Water Resources Association, 42(2), 457–471.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Hillsdale: Lawrence Erlbaum.
Cole, M. (2006). Evaluating the impact of community appraisals: Some lessons from South-West England. Policy & Politics, 34, 51–68.
Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis for field settings. Chicago: The University of Chicago Press.
Coyle, G. (2000). Qualitative and quantitative modelling in system dynamics: Some research questions. System Dynamics Review, 16(3), 225–244.
Doyle, J. K. (1997). The cognitive psychology of systems thinking. System Dynamics Review, 13, 253–265.
Dwyer, M., & Stave, K. (2008). Group model building wins: The results of a comparative analysis. In Proceedings of the 2008 international conference of the system dynamics society. Albany: System Dynamics Society.
Eden, C., & Ackermann, F. (2006). Where next for problem structuring methods. Journal of the Operational Research Society, 57(7), 766–768.
Eskinasi, M., & Fokkema, E. (2006). Lessons learned from unsuccessful modelling interventions. Systems Research and Behavioral Science, 23(4), 483–492.
Eskinasi, M., Rouwette, E., & Vennix, J. (2009). Simulating urban transformation in Haaglanden, the Netherlands. System Dynamics Review, 25(3), 182–206.
Fan, S., Shen, Q., & Lin, G. (2007). Comparative study of idea generation between traditional value management workshops and GDSS-supported workshops. Journal of Construction Engineering and Management, 133, 816–825.
Finlay, P. (1998). On evaluating the performance of GSS: Furthering the debate. European Journal of Operational Research, 107, 193–201.
Fjermestad, J. (2004). An analysis of communication mode in group support systems research. Decision Support Systems, 37, 239–263.
Flood, R., & Jackson, M. (1991). Creative problem solving. London: Wiley.
Fokkinga, B., Bleijenbergh, I., & Vennix, J. (2009). Group model building evaluation in single cases: A method to assess changes in mental models. In Proceedings of the 2009 international conference of the system dynamics society. Albany: System Dynamics Society.
Forrester, J. W. (1961). Industrial dynamics. Cambridge: The MIT Press.
Franco, L. A. (2007). Assessing the impact of problem structuring methods in multi-organizational settings: An empirical investigation. Journal of the Operational Research Society, 58, 760–768.
Franco, L. A. (2013). Rethinking Soft OR interventions: Models as boundary objects. European Journal of Operational Research, 231(3), 720–733.
Franco, L. A., & Rouwette, E. A. (2011). Decision development in facilitated modelling workshops. European Journal of Operational Research, 212(1), 164–178.
Gary, M. S., & Wood, R. E. (2007). Testing the effects of a system dynamics decision aid on mental model accuracy and performance on dynamic decision making tasks. In Proceedings of the 2007 international system dynamics conference. Albany: System Dynamics Society.
Gary, M. S., & Wood, R. E. (2011). Mental models, decision rules, and performance heterogeneity. Strategic Management Journal, 32, 560–594.
Greenberger, M., Crenson, M. A., & Crissey, B. L. (1976). Models in the policy process: Public decision making in the computer era. New York: Russell Sage Foundation.
Größler, A. (2007). System dynamics projects that failed to make an impact. System Dynamics Review, 23(4), 437–452.
Halvorsen, K. E. (2001). Assessing public participation techniques for comfort, convenience, satisfaction, and deliberation. Environmental Management, 28, 179–186.
Hamalainen, R. P., Luoma, J., & Saarinen, E. (2013). On the importance of behavioral operational research: The case of understanding and communicating about dynamic systems. European Journal of Operational Research, 228(3), 623–634.
Happach, R. M., Veldhuis, G. A., Vennix, J. A. M., & Rouwette, E. A. J. A. (2012). Group model validation. In Proceedings of the 2012 international conference of the system dynamics society. Albany: System Dynamics Society.
Hopper, M., & Stave, K. A. (2008). Assessing the effectiveness of systems thinking interventions in the classroom. In Proceedings of the 2008 international system dynamics conference. Albany: System Dynamics Society.
Hovmand, P. S., Andersen, D. F., Rouwette, E., Richardson, G. P., Rux, K., & Calhoun, A. (2012). Group model-building 'scripts' as a collaborative planning tool. Systems Research and Behavioral Science, 29(2), 179–193.
Huz, S. (1999). Alignment from group model building for systems thinking: Measurement and evaluation from a public policy setting (PhD thesis).
Jackson, M., & Keys, P. (1984). Towards a system of system methodologies. Journal of the Operational Research Society, 35(6), 473–486.
Jensen, E. (2005). Learning and transfer from a simple dynamic system. Scandinavian Journal of Psychology, 46, 119–131.
Jick, T. D. (1979). Mixing qualitative and quantitative methods: Triangulation in action. Administrative Science Quarterly, 23(4), 602–611.
Joldersma, C., & Roelofs, E. (2004). The impact of soft OR-methods on problem structuring. European Journal of Operational Research, 152, 696–708.
Kamler, B. (2008). Rethinking doctoral publication practices: Writing from and beyond the thesis. Studies in Higher Education, 33(3), 283–294.
Kopainsky, B., Alessi, S. M., & Davidsen, P. I. (2011). Measuring knowledge acquisition in dynamic decision making tasks. In Proceedings of the 2011 international system dynamics conference. Albany: System Dynamics Society.
Kopainsky, B., Alessi, S. M., Pedercini, M., & Davidsen, P. I. (2009). Exploratory strategies for simulation-based learning about national development. In Proceedings of the 2009 international system dynamics conference. Albany: System Dynamics Society.
Kopainsky, B., Alessi, S. M., & Pirnay-Dummer, P. (2011). Providing structural transparency when exploring a model's behavior: Effects on performance and knowledge acquisition. In Proceedings of the 2011 international system dynamics conference. Albany: System Dynamics Society.
Kopainsky, B., Pedercini, M., Davidsen, P. I., & Alessi, S. M. (2010a). A blend of planning and learning: Simplifying a simulation model of national development. Simulation & Gaming, 41, 641–662.
Kopainsky, B., Pirnay-Dummer, P., & Alessi, S. M. (2010b). Automated assessment of learners' understanding in complex dynamic systems. In Proceedings of the 2010 international system dynamics conference. Albany: System Dynamics Society.
Kopainsky, B., Pirnay-Dummer, P., & Alessi, S. M. (2012). Automated assessment of learners' understanding in complex dynamic systems. System Dynamics Review, 28(2), 131–156.
Kopainsky, B., & Saldarriaga, M. (2012). Assessing understanding and learning about system dynamics. In Proceedings of the 2012 international system dynamics conference. Albany: System Dynamics Society.
Kopainsky, B., & Sawicka, A. (2011). Simulator-supported descriptions of complex dynamic problems: Experimental results on task performance and system understanding. System Dynamics Review, 27(2), 142–172.
Lane, D. C., & Oliva, R. (1998). The greater whole: Towards a synthesis of system dynamics and soft systems methodology. European Journal of Operational Research, 107(1), 214–235.
Langley, P. A., & Morecroft, J. D. W. (2004). Performance and learning in a simulation of oil industry dynamics. European Journal of Operational Research, 155, 715–732.
Levine, J. M., & Moreland, R. L. (1990). Progress in small group research. Annual Review of Psychology, 41(1), 585–634.
Luna-Reyes, L. F., Martinez-Moyano, I. J., Pardo, T. A., Cresswell, A. M., Andersen, D. F., & Richardson, G. P. (2006). Anatomy of a group model-building intervention: Building dynamic theory from case study research. System Dynamics Review, 22(4), 291–320.
Maani, K. E., & Maharaj, V. (2003). Links between systems thinking and complex decision making. System Dynamics Review, 20(1), 21–48.
Martinez-Moyano, I. J., & Richardson, G. P. (2013). Best practices in system dynamics modeling. System Dynamics Review, 29(2), 102–123.
McCardle-Keurentjes, M. H. F., Rouwette, E. A. J. A., & Vennix, J. A. (2008). Effectiveness of group model building in discovering hidden profiles in strategic decision-making. In Proceedings of the 2008 international conference of the system dynamics society. Albany: System Dynamics Society.
McCardle-Keurentjes, M. H. F., Rouwette, E. A. J. A., Vennix, J. A., & Jacobs, E. (2009). Is group model building worthwhile? Considering the effectiveness of GMB. In Proceedings of the 2009 international conference of the system dynamics society. Albany: System Dynamics Society.
McGurk, B., Sinclair, A. J., & Diduck, A. (2006). An assessment of stakeholder advisory committees in forest management: Case studies from Manitoba, Canada. Society & Natural Resources, 19, 809–826.
Mingers, J., & Brocklesby, J. (1997). Multimethodology: Towards a framework for mixing methodologies. Omega, 25(5), 489–509.
Mingers, J., & Rosenhead, J. (2004). Problem structuring methods in action. European Journal of Operational Research, 152, 530–554.
Mingers, J., & Taylor, S. (1992). The use of soft systems methodology in practice. Journal of the Operational Research Society, 43(4), 321–332.
Mooy, R., Rouwette, E. A. J. A., Valk, G., Vennix, J. A. M., & Maas, A. (2001). Quantification and evaluation issues in group model building: An application to human resource management transition. In Proceedings of the 2001 international conference of the system dynamics society. Albany: System Dynamics Society.
Moxnes, E. (2004). Misperceptions of basic dynamics: The case of renewable resource management. System Dynamics Review, 20, 139–162.
Mulder, Y. G., Lazonder, A. W., & de Jong, T. (2011). Comparing two types of model progression in an inquiry learning environment with modelling facilities. Learning and Instruction, 21(5), 614–624.
Munro, I., & Mingers, J. (2002). The use of multimethodology in practice: Results of a survey of practitioners. Journal of the Operational Research Society, 59(4), 369–378.
Nisbett, R., & Wilson, T. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84(3), 231–259.
Orne, M. (1962). On the social psychology of the psychology experiment. American Psychologist, 17, 776–783.
Phahlamohlaka, J., & Friend, J. (2004). Community planning for rural education in South Africa. European Journal of Operational Research, 152, 684–695.
Plate, R. (2010). Assessing individuals' understanding of nonlinear causal structures in complex systems. System Dynamics Review, 26, 19–33.
Ragin, C. C. (1987). The comparative method: Moving beyond qualitative and quantitative strategies. Berkeley: University of California Press.
Ragin, C. C. (2000). Fuzzy-set social science. Chicago: University of Chicago Press.
Richardson, G. P. (2013). Concept models in group model building. System Dynamics Review, 29(1), 42–55.
Richardson, G. P., & Andersen, D. F. (1995). Teamwork in group model building. System Dynamics Review, 11(2), 113–137.
Richardson, G. P., Andersen, D. F., Maxwell, T. A., & Stewart, T. R. (1994). Foundations of mental model research. In Proceedings of the 1994 international system dynamics conference. Chestnut Hill: System Dynamics Society.
Richardson, G. P., Andersen, D. F., Rohrbaugh, J., & Steinhurst, W. (1992). Group model building. In Proceedings of the 1992 international conference of the system dynamics society. Albany: System Dynamics Society.
Riecken, H. W., & Boruch, R. F. (1974). Social experimentation: A method for planning and evaluating social intervention. New York: Academic Press.
Rouwette, E. A. J. A. (2011). Facilitated modelling in strategy development: Measuring the impact on communication, consensus and commitment. Journal of the Operational Research Society, 62(5), 879–887.
Rouwette, E. A. J. A., Bastings, I., & Blokker, H. (2011b). A comparison of group model building and strategic options development and analysis. Group Decision and Negotiation, 20(6), 781–803.
Rouwette, E., Bleijenbergh, I., & Vennix, J. (2014). Group model-building to support public policy: Addressing a conflicted situation in a problem neighbourhood. Systems Research and Behavioral Science. doi:10.1002/sres.2301.
Rouwette, E. A. J. A., Korzilius, H., Vennix, J. A. M., & Jacobs, E. (2011a). Modeling as persuasion: The impact of group model building on attitudes and behavior. System Dynamics Review, 27(1), 1–21.
Rouwette, E. A. J. A., & Vennix, J. A. M. (2006). System dynamics and organizational interventions. Systems Research and Behavioral Science, 23(4), 451–466.
Rouwette, E. A. J. A., & Vennix, J. A. M. (2011). Group model building. In R. Meyers (Ed.), Complex systems in finance and econometrics (pp. 484–496). New York: Springer.
Rouwette, E. A. J. A., Vennix, J. A. M., & Felling, A. J. (2009). On evaluating the performance of problem structuring methods: An attempt at formulating a conceptual model. Group Decision and Negotiation, 18(6), 567–587.
Rouwette, E. A. J. A., Vennix, J. A. M., & Mullekom, T. V. (2002). Group model building effectiveness: A review of assessment studies. System Dynamics Review, 18(1), 5–45.
Rouwette, E. A. J. A., Vennix, J. A. M., & Thijssen, C. M. (2000). Group model building: A decision room approach. Simulation & Gaming, 31(3), 359–379.
Rowe, G., & Frewer, L. J. (2004). Evaluating public participation exercises: A research agenda. Science, Technology & Human Values, 29, 512–556.
Rowe, G., Horlick-Jones, T., Walls, J., & Pidgeon, N. (2005). Difficulties in evaluating public engagement initiatives: Reflections on an evaluation of the UK public debate about transgenic crops. Public Understanding of Science, 14, 331–352.
Rowe, G., Marsh, R., & Frewer, L. J. (2004). Evaluation of a deliberative conference. Science, Technology & Human Values, 29, 88–121.
Scott, R. J., Cavana, R. Y., & Cameron, D. (2013). Evaluating immediate and long-term impacts of qualitative group model building workshops on participants' mental models. System Dynamics Review, 29(4), 216–236.
Scott, R. J., Cavana, R. Y., & Cameron, D. (2014a). Group model building and strategy implementation. Journal of the Operational Research Society. doi:10.1057/jors.2014.70.
Scott, R. J., Cavana, R. Y., & Cameron, D. (2014b). Mechanisms for understanding mental model change in group model building. Systems Research and Behavioral Science. doi:10.1002/sres.2303.
Scott, R. J., Cavana, R. Y., & Cameron, D. (2015). Group model building: Do clients value reported outcomes? Group Decision and Negotiation. doi:10.1007/s10726-015-9433-y.
Shadish, W. R., Cook, T. D., & Campbell, D. T. (2001). Experimental and quasi-experimental designs for generalized causal inference (2nd ed.). Wadsworth: Cengage Learning.
Shaw, D. (2003). Evaluating electronic workshops through analysing the 'brainstormed' ideas. Journal of the Operational Research Society, 54, 692–705.
Shields, M. (2001). An experimental investigation comparing the effects of case study, management flight simulator and facilitation of these methods on mental model development in a group setting. In Proceedings of the 2001 international conference of the system dynamics society. Albany: System Dynamics Society.
Shields, M. (2002). The role of group dynamics in mental model development. In Proceedings of the 2002 international conference of the system dynamics society. Albany: System Dynamics Society.
Škraba, A., Kljajić, M., & Borštnar, M. K. (2007). The role of information feedback in the management group decision-making process applying system dynamics models. Group Decision and Negotiation, 16(1), 77–95.
Škraba, A., Kljajić, M., & Leskovar, R. (2003). Group exploration of system dynamics models—Is there a place for a feedback loop in the decision process? System Dynamics Review, 19(3), 243–263.
Sørensen, L., Vidal, R., & Engström, E. (2004). Using soft OR in a small company: The case of Kirby. European Journal of Operational Research, 152, 555–570.
Sterman, J. D. (2000). Business dynamics: Systems thinking and modeling for a complex world. Boston: McGraw-Hill.
Stouten, H., Heeme, A., Gellynck, X., & Polet, H. (2012). Learning from playing with microworlds in policy making: An experimental evaluation in fisheries management. Computers in Human Behavior, 28, 757–770.
Thompson, J. P. (2009). How and under what conditions clients learn in system dynamics consulting engagements (PhD thesis). Glasgow: Strathclyde Business School.
Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5, 207–232.
van den Belt, M. (2004). Mediated modeling: A system dynamics approach to environmental consensus building. Washington, DC: Island Press.
van Nistelrooij, L. P. J., Rouwette, E. A. J. A., Verstijnen, I., & Vennix, J. A. M. (2012). Power-leveling as an effect of group model building. In Proceedings of the 2012 international conference of the system dynamics society. Albany: System Dynamics Society.
Vennix, J. A. (1999). Group model-building: Tackling messy problems. System Dynamics Review, 15(4), 379–401.
Vennix, J. A. M. (1996). Group model building: Facilitating team learning using system dynamics. Chichester: Wiley.
Vennix, J. A. M., Akkermans, H. A., & Rouwette, E. A. J. A. (1996). Group model-building to facilitate organizational change: An exploratory study. System Dynamics Review, 12(1), 39–58.
Vennix, J. A. M., Andersen, D. F., Richardson, G. P., & Rohrbaugh, J. (1992). Model-building for group decision support: Issues and alternatives in knowledge elicitation. European Journal of Operational Research, 59(1), 28–41.
Vennix, J. A. M., & Rouwette, E. A. J. A. (2000). Group model building: What does the client think of it now? In Proceedings of the 2000 international system dynamics conference. Albany: System Dynamics Society.
Vennix, J. A. M., Scheper, W., & Willems, R. (1993). Group model building: What does the client think of it? In Proceedings of the 1993 international system dynamics conference. Albany: System Dynamics Society.
Videira, N., Lopes, R., Antunes, P., Santos, R., & Casanova, J. L. (2012). Mapping maritime sustainability issues with stakeholder groups. Systems Research and Behavioral Science, 29(6), 596–619.
Yasarcan, H. (2010). Improving understanding, learning, and performances of novices in dynamic managerial simulation games. Complexity, 15(4), 31–42.
Zagonel, A. A. (2002). Model conceptualization in group model building: A review of the literature exploring the tension between representing reality and negotiating a social order. In Proceedings of the 2002 international conference of the system dynamics society. Albany: System Dynamics Society.
Zagonel, A. A. (2004). Developing an interpretive dialogue for group model building. In Proceedings of the 2004 international conference of the system dynamics society. Albany: System Dynamics Society.