Master thesis

Thesis
submitted in partial fulfilment of the
requirements for the degree of

Master of Science
in
Computer Science
Information Architecture Track

by

Peter van Buul

at the Delft University of Technology,
to be defended publicly on Thursday December 15, 2016 at 16:00.

Faculty of Electrical Engineering, Mathematics and Computer Science
Abstract
SAFe is a framework that applies both agile and lean practices
for developing software. The current trend is that increasingly more organizations
develop their software in a globally distributed setting. Although SAFe is being
deployed in such a setting, SAFe was not originally developed for such a setting but
for a co-located setting. Therefore, this research investigates the application of
SAFe in globally distributed settings. Five problems are discovered that can be
expected when SAFe is applied in distributed settings: incorrect execution of
SAFe, language barriers, time zone differences, increased communication effort, and
inefficient communication tools. Given these problems, four SAFe elements are
identified that can be expected to fail when SAFe is applied in distributed settings:
the PI planning, the inspect & adapt meeting, the DevOps team, and the system team.
Finally, a customization of SAFe for distributed settings is proposed. This
customization is focused on solving the discovered problems for the elements
identified to fail.
SAFe is a framework that applies both agile and lean practices for developing software. This
framework is publicly available and widely used in mainly the IT industry. The current trend is that
increasingly more organizations develop their software in a globally distributed setting. Although SAFe
is being deployed in such a setting, SAFe was not originally developed for such a setting, but for a co-
located setting. Therefore, it is interesting to research the application of SAFe in globally distributed
settings.
This research has discovered that the following five problems can be expected when SAFe is applied
in a distributed setting: incorrect execution of SAFe, language barriers, time zone differences,
increased communication effort, and inefficient communication tools.
Given these five problems, this research identified that the following four elements of SAFe can be
expected to fail when SAFe is applied in a distributed setting: the PI planning, the inspect & adapt
meeting, the DevOps team, and the system team.
This research is concluded by proposing a customization of SAFe for distributed settings. This
customization is focused on solving the discovered problems for the elements identified to fail. The PI
planning and inspect & adapt can be customized by having a Release Train Engineer at each location,
using a video conferencing system, using a digital program board & digital PI objectives, and extending
the PI planning over multiple days if needed. The DevOps team can be customized by enabling the
team to travel regularly to the other locations. Finally, the system team can be distributed over all
locations.
First and foremost, I would like to thank my supervisor, Rini van Solingen, for helping me give
direction to this research and for providing critical feedback throughout. Without his guidance
and feedback, the systematic approach and critical view in this report would not have been possible.
Second, I would like to thank my supervisor at Prowareness, Hendrik-Jan van der Waal, for giving his
insights on SAFe, and his critical feedback on the results of this research. Third, because I spent
most of my time working at Prowareness, I would like to thank all my colleagues for always being
there to help me, and for being a sparring partner to discuss ideas. Fourth, I would like to thank the
participants of the focus groups for their contributions to this research.
Last, but not least, I would like to thank my girlfriend, my family, and my friends for always supporting
me during this project. Without these loving and caring people, taking an interest in my progress,
reviewing my work, and helping me to finish the project, this result would not have been possible.
Finally, I present to you my thesis. I hope you will enjoy reading it.

Peter van Buul
Leiden, the Netherlands
December 1, 2016
1 It should be noted that all information regarding SAFe in this thesis is as interpreted by the author, and verified
with SAFe experts. Any information is therefore not officially supported by Scaled Agile, Inc., unless quoted
directly from Scaled Agile, Inc., in which case this is specified.
Table of Contents
Chapter 1 Introduction ........................................................................................................................ 4
1.1. Context .................................................................................................................................... 4
1.2. Research questions ................................................................................................................. 6
1.3. Reading guide.......................................................................................................................... 7
Chapter 2 Research background .......................................................................................................... 8
2.1. Globally Distributed Software Engineering............................................................................. 8
2.2. SAFe......................................................................................................................................... 9
2.3. Agile scaling frameworks ...................................................................................................... 14
2.4. Research scope ..................................................................................................................... 18
Chapter 3 Research methodology ..................................................................................................... 21
3.1. Research approach................................................................................................................ 21
3.2. Systematic Literature Review ............................................................................................... 22
3.3. Multiple informant methodology ......................................................................................... 24
3.4. Focus group ........................................................................................................................... 26
Chapter 4 Distributed SAFe problems ............................................................................................... 30
4.1. Problems of Distributed Agile Development: Systematic Literature Review ....................... 30
4.2. Distributed SAFe problems: Multiple informant methodology ............................................ 37
Chapter 5 Identification of failing SAFe elements ............................................................................. 42
5.1. Identification based on theory: Literature ............................................................................ 42
5.2. Identification based on theory: Expert focus group ............................................................. 43
5.3. Identification based on practice: Practitioner focus group .................................................. 49
5.4. Result of triangulation .......................................................................................................... 56
Chapter 6 Customizations of SAFe..................................................................................................... 59
6.1. Customizations of SAFe based on theory: Literature ........................................................... 59
6.2. Customizations of SAFe based on practical experience: Practitioner focus group .............. 61
6.3. Combining theory and practical experience ......................................................................... 65
Chapter 7 Discussion.......................................................................................................................... 68
7.1. Answers to the research questions....................................................................................... 68
7.2. Limitations............................................................................................................................. 69
7.3. Reflection .............................................................................................................................. 73
7.4. Recommendations for future research................................................................................. 76
Chapter 8 Conclusion ......................................................................................................................... 78
8.1. Summary ............................................................................................................................... 78
8.2. Conclusion ............................................................................................................................. 78
Bibliography .......................................................................................................................................... 81
List of tables .......................................................................................................................................... 93
List of figures ......................................................................................................................................... 98
Appendix A Systematic Literature Review protocol ....................................................................... 100
Appendix B Multiple informant protocol ....................................................................................... 102
Appendix C Expert focus group: protocol ...................................................................................... 103
Appendix D Practitioner focus group: protocol .............................................................................. 109
Appendix E Multiple informant execution ..................................................................................... 115
Appendix F List of SAFe elements .................................................................................................. 117
Appendix G Description of the Agile Release Train elements ........................................................ 120
Appendix H Rejected studies .......................................................................................................... 126
Appendix I Problems and challenges of accepted studies ............................................................ 128
Appendix J Result of SLR reviews .................................................................................................. 133
Appendix K Problem groups ........................................................................................................... 135
Appendix L Ungrouped problems .................................................................................................. 142
Appendix M Expert focus group: invitation letter ........................................................................... 143
Appendix N Expert focus group: attachment ................................................................................. 145
Appendix O Expert focus group: execution .................................................................................... 156
Appendix P Expert focus group: individual votes round 2 ............................................................. 177
Appendix Q Expert focus group: consequences per element round 3 ........................................... 179
Appendix R Expert focus group: individual votes focus group: round 4 ........................................ 185
Appendix S Expert focus group: visualization dot voting consequences: round 4 ........................ 190
Appendix T Survey.......................................................................................................................... 198
Appendix U Results of the survey ................................................................................................... 205
Appendix V Practitioner focus group: invitation letter .................................................................. 219
Appendix W Practitioner focus group: forms .................................................................................. 220
Appendix X Practitioner focus group: execution ........................................................................... 225
Appendix Y Practitioner focus group: categorization participants ................................................ 242
Appendix Z Practitioner focus group: individual votes round 2 .................................................... 243
Appendix AA Practitioner focus group: individual solutions round 3........................................... 248
Appendix BB Practitioner focus group: group solutions round 3 ................................................. 255
Appendix CC Practitioner focus group: individual votes round 4 ................................................ 257
Chapter 1 Introduction
In this chapter a short introduction to the research is given. First, the context for this research is
presented. Second, the research questions are described. Finally, a reading guide is given.
1.1. Context
This thesis presents the research that is conducted as part of the Information Architecture Master
track of the Computer Science programme of the Delft University of Technology. A workplace during
this research was provided by Prowareness. This research is done in the research chair on Global
Software Engineering, in the Software Engineering group of the Software Technology department in
the Faculty of Electrical Engineering, Mathematics and Computer Science of Delft University of
Technology.
The research presented in this thesis is on the Scaled Agile Framework® (SAFe®) in globally distributed
environments. SAFe and Scaled Agile Framework are registered trademarks of Scaled Agile, Inc.
[1]. To provide context, first, distributed development is briefly described. Second, a brief overview of
SAFe is presented. Third, the traditional way of software development is briefly sketched. Finally, the
agile scaling frameworks that are replacing this traditional way are briefly mentioned. A more elaborate
description of the agile scaling frameworks and SAFe can be found in Chapter 2.
1.1.1. Distributed
Because of the globalization of business in the 21st century, increasingly more companies develop
software in a globally distributed setting [2], [3], [4], [5], [6], [7], and [8]. When working in a globally
distributed environment, different problems and challenges can occur regarding, among other things,
communication, coordination, and time zone differences, according to [3] and [9]. Despite these
problems, the use of fully distributed teams can be successful, as presented in [10], [11], and [12].
However, the research of [10], [11], and [12] is focused on Scrum, with a small team. SAFe, on which
the research presented in this thesis focuses, is executed with multiple teams, possibly having
different problems.
1.1.2. SAFe
In this section, SAFe is briefly described. A full overview of SAFe can be found at the SAFe website
provided by Scaled Agile, Inc. www.scaledagileframework.com [1]. SAFe is a framework that applies
both agile and lean practices for developing software. This framework is publicly available and widely
used in mainly the IT industry. In the 10th Annual State of Agile report by VersionOne from 2015 [13],
27% of the respondents name SAFe as the method to scale agile. This ranks SAFe as the second most
used scaling method and the most used scaling framework.
As stated previously, the current trend is that increasingly more organizations develop their software
in a globally distributed setting. Although SAFe is being deployed in such a setting, as seen in multiple
case studies [14], [15], [16], and [17], SAFe was not originally developed for such a setting, but for a
co-located setting. Therefore, it is interesting to investigate the application of SAFe in globally
distributed settings.
The main workflow in SAFe for delivering value to a customer is using Value Streams. Deployment of
these Value Streams is done using Agile Release Trains which are continuously delivering new versions
of a solution to the customer. The Agile Release Train is a team of teams. In the Agile Release Train,
the agile teams are aligned via a single vision, roadmap, and program backlog. The Agile Release Train
iterates in a so-called program increment, PI, which lasts 8 to 12 weeks during which there are 4 to 6
two-week team iterations. During the team iterations the teams continuously add value to the
solution by finishing fully tested stories. At the end of each team iteration the integrated solution is
demoed.
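As an aside, the PI cadence described above (a PI of 8 to 12 weeks made up of two-week team iterations) can be sketched as a small calculation. This is purely an illustrative model by this author, not part of SAFe itself; the function name and validation rules are assumptions.

```python
# Illustrative model of an Agile Release Train's program increment (PI) cadence.
# The numbers follow the SAFe guidance described above: a PI of 8 to 12 weeks,
# composed of two-week team iterations, each ending with a system demo.

ITERATION_WEEKS = 2  # length of one team iteration

def iterations_per_pi(pi_weeks: int) -> int:
    """Return the number of two-week team iterations that fit in a PI."""
    if not 8 <= pi_weeks <= 12:
        raise ValueError("SAFe suggests a PI of 8 to 12 weeks")
    if pi_weeks % ITERATION_WEEKS != 0:
        raise ValueError("PI length must be a whole number of iterations")
    return pi_weeks // ITERATION_WEEKS

# A 10-week PI contains 5 team iterations.
print(iterations_per_pi(10))  # 5
```

For example, the shortest PI of 8 weeks yields 4 iterations and the longest of 12 weeks yields 6, matching the "4 to 6 two-week team iterations" stated above.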
The SAFe website states: “Along with the various Agile methods, the Manifesto provides the Agile
foundation for effective, empowered, self-organizing-teams. SAFe extends this foundation to the level
of teams of teams” [18]. The Agile Manifesto [19] is key to SAFe, and reads as follows:
We are uncovering better ways of developing software by doing it and helping others do it.
Through this work we have come to value:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

That is, while there is value in the items on the right, we value the items on the left more.

Figure 2: Agile Manifesto from [19]

1.1.3. Traditional project management frameworks
To put SAFe in context, a traditional software development process is presented. As well as
waterfall, traditional project management frameworks such as PRINCE2 and Rational Unified Process (RUP) are
described. These traditional frameworks use upfront planning for projects. This upfront planning,
however, does not work if the environment changes. Though both RUP and PRINCE2 can be used in
agile projects, these frameworks were not designed as such [20], [21].

In his article from 1970 [22], Royce describes the waterfall model as a model that was at that time widely
used in the manufacturing industry. The waterfall model works in 7 steps, which are executed one
after another, starting with the creation of system requirements and finishing with deploying to
operations. Strikingly, in his article Royce expresses his concerns about the waterfall model: “I believe
in this concept, but the implementation described above is risky and invites failure.” To counter this,
he proposes feedback and interaction between the steps. Though this is a good idea, no case has been
found where this is done in practice. In practice, there are rigid agreements without feedback and
interaction between the steps.
The upfront planning from traditional frameworks is symbolized in the PRINCE2 acronym which stands
for PRojects IN Controlled Environments [23]. This controlled environment only changes when the
controlling entity changes the environment according to plan, while agile frameworks such as SAFe
are designed to respond to unexpected change [19].
RUP is a traditional approach, of which the goal is: “to produce, within a predictable schedule and
budget, high-quality software that meets the needs of its end users.” [24]. RUP prescribes processes
and follows a predefined plan which is not in line with the Agile Manifesto. Moreover, the sixth best
practice in RUP is: “Control changes to software” [25], controlling the response to change rather than
embracing change.
Though none of the traditional frameworks explicitly considers a distributed setting, it seems they can
all be used in distributed development. In PRINCE2, delivery can be done by a supplier which is not
necessarily co-located. RUP and waterfall contain steps, handovers, or toll gates, in which the project
is given to the next team based on a set of predefined requirements. Because of these handover
points, the next team could be located in a different location.
Each of these four frameworks scales the development department; however, this scaling ends there.
SAFe scales using Value Streams which, if needed, include the business and other
stakeholders. This enables SAFe to scale on an organizational level rather than only on the IT level.
1.2. Research questions
This research answers the following three research questions:

RQ 1: What problems can be expected when SAFe is applied in distributed settings?
RQ 2: What SAFe elements can be expected to fail when applied in distributed settings?
RQ 3: What would SAFe look like, when customized for distributed settings?

These questions are answered in sequence, as each previous answer provides the required input for
the next answer. The answer to the first question results in distributed SAFe problems. Based on these
problems, the answer to the second research question identifies which SAFe elements can be expected
to fail. To answer the third research question, these failing elements serve as input on how to
customize SAFe. When problems are identified for this customized version, it is possible to repeat this
process with that input, starting again with research question one. Thus, a cycle is created that can be
used to continuously improve distributed SAFe, as visualized in Figure 4. However, in this research
each step is executed only once, and the customized version remains a theoretical proposal for now.
Figure 4: Research improvement cycle: RQ 1 (distributed SAFe problems) → RQ 2 (failing SAFe elements) → RQ 3 (customized distributed SAFe)
1.3. Reading guide
This thesis presents the result and approach taken for the research. The research background is given
in Chapter 2. The approach is described in detail in Chapter 3. Substantiation of the results and the
data is presented in Chapter 4 to Chapter 6. Discussion on both the results and approach can be found
in Chapter 7. The conclusions of the research can be read in Chapter 8.
Chapter 2 presents the research background for the research. First, different definitions of distributed
are presented. Second, SAFe is explored and a high-level description of the framework is given, as
interpreted by this author and verified with SAFe experts. Third, to give some perspective, the
different agile scaling frameworks are described. After this exploration, the scope of the research is
set to fit within the timeframe of the master thesis project.
Chapter 3 describes the approach taken and the methodologies that are used to answer the research
questions. For each methodology, the conditions, strengths, limitations, and protocols are presented.
Chapter 4 presents an answer to the first research question: “What problems can be expected when
SAFe is applied in distributed settings?”. First, the Distributed Agile Development problems are
presented. Second, the distributed SAFe problems are presented.
Chapter 5 gives an answer to the second research question: “What SAFe elements can be expected to
fail when applied in distributed settings?”. First, failing elements are identified based on the
distributed SAFe problems that were discovered in the previous chapter. Second, failing elements are
identified in an expert focus group. Third, failing elements are identified in a practitioner focus group.
Finally, these three identifications are combined.
Chapter 6 describes an answer to the third research question: “What would SAFe look like, when
customized for distributed settings?”. First, solutions are identified based on the failing elements
discovered in the previous chapter. Second, the solutions identified in the practitioner focus group are
presented. Finally, a proposal on how to customize SAFe is presented.
Chapter 7 discusses the results. First, the extent to which the research questions are answered is
presented. Second, the limitations of the research methodologies are discussed. Third, the answers to
the research questions are reflected upon. Finally, recommendations for future research are
presented.
Chapter 8 gives a summary of the research questions and presents the conclusions of the research.
Chapter 2 Research background
In this chapter the research background is presented. First, the research area of Globally Distributed
Software Engineering is described, after which two different definitions of distributed are given and
the definition that is applied in this research is clarified. Second, SAFe is explored and a high-level
description of the framework is presented. Third, the different agile scaling frameworks are given.
Finally, it is described how the scope of the research is adjusted to fit within the timeframe of this
master thesis project.
2.1. Globally Distributed Software Engineering
At the same time as this trend of globalization, agile software development methodologies have
become the most used approach for software development [13]. The application of these popular
agile methods in globally distributed settings is called Distributed Agile Development [7], [30], and
[31]. SAFe is such an agile methodology. The Systematic Literature Review, done as part of this
research, will thus focus on Distributed Agile Development.
When working in a globally distributed environment, different problems and challenges can occur
according to [3] and [9]. Examples of these problems are: difficulties with coordinating in multiple time
zones [32], [33], and [34], insufficient communication tools [7], [31], [35], and [10], and delay in
communication [6], [36], [4], and [37]. In the Systematic Literature Review, the problems of
Distributed Agile Development are reviewed.
Despite these problems, the use of fully distributed teams can be successful, as presented in [10], [11],
and [12]. The conclusion of these papers is “it is possible to create a distributed/outsourced Scrum
with the same velocity and quality as a collocated team”.
To solve the problems of distributed development, different solutions are applied, for example from [9]:
regular traveling to meet face to face [7], [31], [38], [39], [40], [32], [35]; frequent communication [7], [41],
[42], [33], [35], [10]; and using different communication channels [7], [32], [39], [33], [10], [12],
and [43].
Globally distributed is different from distributed as specified by Thomas Allen. In a globally
distributed setting, the working places are possibly located in different countries or time
zones. Though the definition of distributed as presented by Thomas Allen does not exclude this,
problems such as time zone differences and cultural differences rarely occur over a distance of 50
meters.
In this research, a globally distributed setting is considered. Therefore, from this point on, when talking
about distributed, globally distributed is meant.
2.2. SAFe
The first version of SAFe was launched in 2011. The version considered in this research is the latest
available version of SAFe at the start of this research (SAFe 4.0), which was launched on the 3rd of
January 2016. A full overview of SAFe 4.0 can be found at the SAFe website provided by Scaled Agile,
Inc. www.scaledagileframework.com [1]. The SAFe 4.0 big picture shows almost all elements that are
in SAFe and is shown in Figure 5. Some elements are not visible in this picture, for example the
PO Sync and Scrum of Scrums.
For organizations that want to adopt SAFe, the core values describe the culture that these
organizations need to develop. These values have to become the heart of the organization. The four
core values are:
Alignment
Built-in Quality
Transparency
Program Execution
The core value Alignment is to make sure that everyone has the same goal and vision so that they can
align on that. This explains why everyone does what they do. Built-in Quality is to make sure that
quality standards are high and maintained. This is required because “you can’t scale crappy code” [46].
Transparency is to give insight into what is being done. These insights provide data on which decisions
can be based to give direction for a project, rather than steering based on gut feeling. Lastly, Program
Execution is to focus on the program by putting the program before the team. This enables the
program to continuously deliver value.
The SAFe principles are the guidelines for decision making when working with SAFe. The nine SAFe
principles are:
Take an economic view
Apply systems thinking
Assume variability; preserve options
Build incrementally with fast, integrated learning cycles
Base milestones on objective evaluation of working systems
Visualize and limit WIP, reduce batch sizes, and manage queue lengths
Apply cadence, synchronize with cross-domain planning
Unlock the intrinsic motivation of knowledge workers
Decentralize decision-making
The lean-agile mindset is needed to support lean and agile development at scale in an entire
organization. The mindset consists of two parts, thinking lean and embracing agility. The core values,
SAFe principles and lean-agile mindset are explained in detail in [1].
When a single Agile Release Train can deliver the full solution to a customer, this train spans the entire
Value Stream. In this case three-level SAFe is in place.
The Agile Release Train iterates in a so-called program increment, or PI, of 8 to 12 weeks, consisting of
4 to 6 two-week iterations at the team level. During each team iteration the Scrum Masters and
Product Owners meet twice: during a Scrum of Scrums and a PO Sync. At the end of each team iteration
the integrated solution is demoed in the system demo, done by the system team.
During each program increment, all teams and persons who are part of the Agile Release Train come
together during three consecutive days for three events. First, the program increment planning (PI
planning) event, during which the next program increment is planned. Second, the Inspect and Adapt
meeting, during which the previous program increment is reviewed. Third, the Solution Demo, in which
the fully integrated solution is demoed. Again, these meetings correspond to the meetings known in
Scrum.
2.2.3. Events
The following events are defined in the SAFe framework. These events are visualized on a timeline in
Figure 6.
Daily event
Daily Scrum
Iteration events
Iteration Planning
Scrum of Scrums
PO Sync
Team Demo
System Demo
Iteration Retrospective
Program increment events
Pre-PI Planning
PI Planning
Post-PI Planning
Solution Demo
Inspect and Adapt
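The event list above can be summarized as a small grouping by cadence. This is an illustrative sketch by this author, not an artifact of SAFe; the dictionary keys and the helper function are assumptions.

```python
# SAFe events grouped by the cadence at which they occur, as listed above.
SAFE_EVENTS = {
    "daily": ["Daily Scrum"],
    "iteration": [
        "Iteration Planning",
        "Scrum of Scrums",
        "PO Sync",
        "Team Demo",
        "System Demo",
        "Iteration Retrospective",
    ],
    "program_increment": [
        "Pre-PI Planning",
        "PI Planning",
        "Post-PI Planning",
        "Solution Demo",
        "Inspect and Adapt",
    ],
}

def events_at(cadence: str) -> list[str]:
    """Return the SAFe events held at the given cadence."""
    return SAFE_EVENTS[cadence]

print(len(events_at("program_increment")))  # 5
```

Such a grouping makes explicit that most of the coordination load sits at the iteration and program increment cadences, which matters later when these events are examined in a distributed setting.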
2.2.4. Flow of a trigger/idea in SAFe
To deliver value, SAFe uses Value Streams. A Value Stream starts with a trigger from customers and
results in customers having a solution that adds value to the organization.
Triggers from customers start at the portfolio level and are formulated as epics. If an epic is accepted,
it is put on the portfolio backlog. Epics that are on the portfolio backlog are going to be realized by the
Value Streams. All epics are in line with the strategic themes of the organization, and therefore the
portfolio level is connected to the organization.
At the value stream level, epics that are defined at the portfolio level are split into capabilities.
These capabilities are sized so that they can be picked up in a single program increment by the Agile
Release Trains that are part of the Value Stream.
At the program level, the epics or capabilities coming from the portfolio level or value stream level are
split into features. A feature is planned and reviewed at the program increment boundaries. These
features are split into stories, which can be picked up by a team during an iteration. Teams can pick up
multiple stories during one iteration. A graphic representation of this flow is shown in Figure 7.
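The decomposition described above (epic → capability → feature → story) can be sketched as a simple containment hierarchy. The level names come from SAFe; the classes, the `story_count` helper, and the example items are illustrative assumptions by this author.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the SAFe work-item decomposition described above:
# portfolio epics split into capabilities, capabilities into features,
# and features into stories small enough for one team iteration.

@dataclass
class Story:
    name: str

@dataclass
class Feature:
    name: str
    stories: list[Story] = field(default_factory=list)

@dataclass
class Capability:
    name: str
    features: list[Feature] = field(default_factory=list)

@dataclass
class Epic:
    name: str
    capabilities: list[Capability] = field(default_factory=list)

    def story_count(self) -> int:
        """Total number of stories this epic decomposes into."""
        return sum(
            len(f.stories) for c in self.capabilities for f in c.features
        )

# Hypothetical example: one epic, one capability, one feature, two stories.
epic = Epic("Customer portal", [
    Capability("Self-service", [
        Feature("Login", [Story("Password reset"), Story("Two-factor auth")]),
    ]),
])
print(epic.story_count())  # 2
```

The nesting mirrors Figure 7: each level only ever refers to the level directly below it, which is what lets each SAFe level plan at its own cadence.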
2.3. Agile scaling frameworks
There are multiple other frameworks that scale agile. The company Agile Scaling maintains and
updates a matrix [47] in which many of these frameworks are described and compared. This matrix
can be found online via http://www.agilescaling.org/ask-matrix.html. Four frequently used
frameworks are Large-Scale Scrum (LeSS) [26], Disciplined Agile Delivery (DAD)3 [27], Nexus [28], and
Spotify [29]. This section describes these frameworks to give insight into what frameworks, other than
SAFe, are used in practice, and because these frameworks could offer a solution that SAFe does not.
- An agile/basic version that extends the Scrum Construction lifecycle with proven ideas from RUP
- An advanced/lean lifecycle
- A lean continuous delivery lifecycle
- An exploratory “Lean Startup” lifecycle

Note that in this research Distributed Agile Development is also discussed; this should not be confused with Disciplined Agile Delivery, which is also abbreviated as DAD.
2.3.3. Nexus
“Nexus is an exoskeleton that rests on top of multiple Scrum Teams” [28]. In this way, Nexus aims to solve the integration of complex software so that it results in working software. Nexus combines a maximum of 10 teams in a single unit of development, called a Nexus. When multiple of these units of development are used, this is called Scaled Professional Scrum. Nexus extends Scrum with a Nexus Integration Team, the Nexus Sprint Backlog, and additional events, for example the Nexus Daily Scrum. The Nexus framework is visualized in Figure 10.
Figure 10: Nexus scaling model from [51]
2.3.4. Spotify
Spotify scales agile using squads, tribes, chapters, and guilds. How these are related is visualized in Figure 11. The squad is “the basic unit of development” [29], similar to a Scrum team. Multiple squads working in related areas form a tribe. People with similar skills within a tribe form a chapter, for example the testers. Lastly, guilds consist of everyone interested in a certain topic across tribes; for example, the automated testing guild contains testers and developers from multiple tribes.
2.3.5. Comparing the different scaling frameworks to SAFe
When compared to SAFe, the other agile scaling frameworks Nexus, LeSS, and Spotify provide only a few artefacts, roles, and events in addition to those of regular Scrum. The four lifecycles in DAD contain more elements than the previous three, though still fewer than SAFe. Like SAFe, Spotify does not prescribe Scrum, but leaves the way of working to the teams themselves. LeSS states that it can be used in a distributed setting, whereas the other frameworks do not mention this. And although Spotify is used successfully in a distributed setting, it does not contain specific elements that support working distributed.
Each of these four frameworks scales the development department, but the scaling ends there. SAFe scales by reasoning in terms of Value Streams, which, if needed, include the business and the other stakeholders that are needed to deliver value. Consequently, SAFe has more roles, events, artefacts, and practices than the other frameworks. This enables SAFe to scale at an organizational level rather than only at the IT level.
In the 10th Annual State of Agile report by VersionOne from 2015 [13], SAFe is named as the most used
scaling method with 27%. Both LeSS and DAD are also mentioned, but are used significantly less, 4%
and 6% respectively, as shown in Figure 12. Nexus and Spotify are not mentioned in this report.
Figure 12: 10th State of Agile report - Scaling agile from [13]
Note that in this figure, Scrum of Scrums is mentioned as the most used scaling method. However, this method is a single meeting, not a framework. Thus, SAFe is the most used framework.
2.4. Research scope
This research is done as part of a master thesis project, for which the duration is fixed. SAFe is described in 2.2. Based on this description and the illustration provided in Figure 5, 80 roles, events, artefacts, and best practices can be identified. This list of 80 elements is presented in Appendix F. Researching all 80 elements within the timeframe of such a project is not feasible. Thus, the research must be scoped; this is done by looking at Essential SAFe.
Essential SAFe is the bare minimum required for an implementation to still be considered SAFe. It was first presented on the 9th of February 2016 in [52], on which this research was initially based; this version is shown in Figure 13. An update was presented on the 23rd of June 2016 in [53]. Although there are some differences between the two versions, these differences have no impact on this research.
Essential SAFe consists of the program level and the team level. The value stream level and portfolio
level are not part of essential SAFe. The focus of agile delivery is to deliver value via working software.
To deliver value, SAFe uses Value Streams, which are supported by one or more Agile Release Trains.
As such, we consider the Agile Release Train the core delivery construct of SAFe. Moreover, an implementation that does not use the Agile Release Train cannot be considered a SAFe implementation [53]. This research will therefore focus on the Agile Release Train, and in particular on the extent to which distribution impacts the Agile Release Train.
Another part of Essential SAFe is the team level. Distribution at the team level entails that the members within a team are distributed. However, in that case the field of research is Distributed Scrum. This is a different topic and cannot be considered distributed SAFe. Therefore, the team level is omitted in this research. Distribution within a team has been researched extensively, for example in [10], [11], and [12].
Thus, the scope of this research will be set to the Agile Release Train. The elements which are part of
the Agile Release Train have been numbered and are visualized in Figure 14.
Figure 14: Agile Release Train elements numbered, modified and reproduced with permission from © 2011-2016 Scaled Agile,
Inc. All rights reserved. Original Big Picture graphic found at scaledagileframework.com.
A description of all 34 elements can be found in 0.
Chapter 3 Research methodology
In this chapter the approach and different research methodologies used during this research are
described. First, the approach taken to answer each research question is presented, as well as the
applicability of that approach to answer the research question. Second, the conditions, strengths,
limitations, and protocols of the methodologies presented in the approach are given.
The Systematic Literature Review is applicable to answer this question as it is a method to evaluate
and interpret all available research relevant to the topic of Distributed Agile Development. This
corresponds with the method as stated in [54]: “A systematic literature review (often referred to as a systematic review) is a means of identifying, evaluating and interpreting all available research relevant to a particular research question, or topic area, or phenomenon of interest”. Besides this, the result of this Systematic Literature Review provides a background for the next steps of this research. This corresponds to one of the reasons to do a Systematic Literature Review given in [54].
Not all Distributed Agile Development problems are equally relevant in distributed SAFe. To discover
the relevant distributed SAFe problems, the problems of the Systematic Literature Review have been
filtered using a multiple informant methodology. The use of multiple informants enables the
researcher to objectively reach a decision, using relatively few resources, according to [55] and [56].
Thus, the use of multiple informants is applicable to discover the distributed SAFe problems, providing
an answer to the first research question: “What problems can be expected when SAFe is applied in
distributed settings?”.
On the 15th of February 2016, Google Scholar was searched using the following queries: ““Scaled Agile Framework” AND distributed AND problems” and ““Scaled Agile Framework” AND SAFe”. The first query yielded 96 hits, the second query 122.
practitioners was done to find an answer based on practice: the practitioner focus group. The overlap
of these identification methods provides an answer to the research question.
To provide an answer to this question based on theory, expertise on both SAFe and distributed development is required. A focus group “involves engaging a small number of people in an informal group discussion (or discussions), ‘focused’ around a particular topic or set of issues.” [57]. Within these discussions, the interactions among the participants can yield important data [58], as in these interactions the experience of the different participants is combined. Because a focus group combines the experience of both distributed and SAFe experts, it is an applicable method for answering this research question.
The second focus group aims to answer the question based on practical experience. Practical experience with SAFe differs per person, as each organization is unique. To answer the question properly, these differences must be taken into account. As mentioned previously, a focus group combines the experience of its participants. With experts from different companies participating, a focus group is an applicable method for answering this research question based on practice.
Additionally, a survey was done in an attempt to provide an answer based on practice to the question: “What SAFe elements can be expected to fail when applied in distributed settings?”. However, due to the low response rate, the results of this survey have not been used in this research. The survey and its results can be found in Appendix T and Appendix U, so that they can be used for future research. To emphasize: the survey is not used in this research.
Literature Review enables the researcher to analyze a wide range of variables, according to [54]. This
wide range of variables provides the researcher with a broad view of the topic as well as insight into
the current state of the field of research.
Besides these limitations, if any of the conditions is not fulfilled, this becomes a limitation as well and
should be handled as such.
In the first step, the search results from the Google Scholar search for Systematic Literature Reviews discussing the topics Distributed Agile Development and Distributed Scrum have been filtered. The search results have been accepted or rejected based on the criteria described in the protocol. This filtering resulted in a list of accepted papers.
In the second step, data has been extracted from the accepted papers. The extracted data consists of
the standard information, as described in [54], as well as additional information. The standard
information extracted is: study title, study author(s), study year, and publication details. This is
extended with additional information, namely, the problems or challenges presented in the study.
In the third step, the extracted data of the different studies was combined. Similar problems or
challenges have been grouped together, and presented as in Table 1.
Table 1: Problem groups example
Google Scholar has been used for this search because it has indexed many different databases, including those of different universities, providing a broad view of the available literature.
Figure 15: Visualization Systematic Literature Review protocol
The second condition is that the right number of informants should be consulted. According to [56], using 3 informants with different backgrounds is sufficient to eliminate less relevant problems. This is supported in [55], which gives multiple studies indicating that 2 or 3 informants are sufficient. However, [55] states that this only holds if the selection criteria apply to every informant.
3.3.3. Limitations of using multiple informants
A limitation of using multiple informants is that one could question whether the informants are individually able to judge the topic sufficiently. Another limitation could be that the informants as a group do not provide a broad enough view to make a sufficiently substantiated decision.
As a consensual approach has been taken, one could ask whether the group consensus provides the
best answer. According to [55], some studies indicate that the group consensus performs better than
the average of the individual informants, but is outperformed by the best informant [60], [61]. Also, it
is concluded that the unweighted mean of the individual informants tends to outperform group
consensus [60].
Besides these limitations, if any of the conditions is not fulfilled, this becomes a limitation as well and
should be handled as such.
In the first step, the informants mapped the Distributed Agile Development problems onto the core values of SAFe based on the degree to which each problem is considered by SAFe. Based on this mapping, problems that are considered by SAFe were filtered out. This results in the problems of Distributed Agile Development that are not considered in SAFe, and that are thus SAFe threats.
In the second step, the informants mapped the distributed SAFe threats onto the core values based on impact. Based on this mapping, the threats with a low impact were filtered out. This results in problems that are not considered by SAFe and have a high impact on SAFe: the distributed SAFe problems.
3.4. Focus group
To answer the question: “What SAFe elements can be expected to fail when applied in distributed
settings?”, two focus groups were used. Using a focus group provides the insights of experts with different backgrounds. The first focus group consisted of experts from different fields; this group answered the question based on theory. The second focus group consisted of practitioners; this second group answered the question based on practice.
The second condition is that if the participants are asked to individually judge a topic, all participants
should be able to do this based on their expertise.
The last condition is that participants should not have been previously involved in the research. If the
participants have been previously involved in the research, the results of the focus group could be
compromised. Participants could then have information on the topic that could influence the outcome
of the focus group. An example of this could be that a participant might strive towards a certain result
to understate the statements from their previous participation.
Besides these limitations, if any of the conditions is not fulfilled, this becomes a limitation as well and
should be handled as such.
In the first part of the expert focus group protocol, the participants identify which Agile Release Train elements are challenged by distribution. This is done in two rounds. First, the group identifies the specifically challenged Agile Release Train elements. For this, the group filters the Agile Release Train elements, first in subgroups, then plenarily. For the subgroup filtering, the participants are divided into two groups based on expertise; each group consists of at least a distributed expert, a SAFe expert, and a practitioner. Second, the group ranks the elements using dot voting. From this, the high risk Agile Release Train elements are identified. This first part is visualized in Figure 17.
In the second part of the expert focus group protocol, the participants identify the consequences for the high risk Agile Release Train elements. First, consequences are discovered during a plenary discussion. Next, these consequences are ranked using dot voting. This second part is visualized in Figure 18.
3.4.5. Practitioner focus group protocol
A short summary of the steps in the practitioner focus group protocol is presented below; these steps are visualized in Figure 19 and Figure 20. The full practitioner focus group protocol can be found in Appendix D.
In the first part of the practitioner focus group protocol, the participants identify which Agile Release Train elements are challenged by distribution. As in the expert focus group, this is done in two rounds. First, the group identifies the specifically challenged Agile Release Train elements. For this, the group filters the Agile Release Train elements, first in groups, then plenarily. For the group filtering, the participants were divided into two groups based on earlier participation in the research: those who participated in the survey were in one group, those who did not previously participate in the other. Second, the individuals ranked the elements using dot voting. From this, the high risk Agile Release Train elements were identified. This first part is visualized in Figure 19.
In the second part of the practitioner focus group protocol the participants identify solutions to
prevent the elements from failing. First, solutions are discovered during a discussion in two groups.
The participants are divided over the groups randomly. Next, these solutions are ranked individually
using dot voting. This second part is visualized in Figure 20.
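The dot-voting step used in both protocols amounts to a simple tally: each participant places a fixed number of dots on the elements they consider most at risk, and elements are ranked by their total number of dots. The sketch below illustrates this; the element names, vote counts, and the high-risk threshold are hypothetical illustrations, not data from the focus groups.

```python
# Minimal sketch of the dot-voting ranking step (hypothetical data).
from collections import Counter

def rank_by_dots(votes):
    """Tally dot votes and return elements ordered by number of dots."""
    return [elem for elem, _ in Counter(votes).most_common()]

def high_risk(votes, threshold):
    """Elements receiving at least `threshold` dots are marked high risk."""
    return {elem for elem, dots in Counter(votes).items() if dots >= threshold}

# Each entry represents one dot placed by a participant (hypothetical).
votes = ["PI Planning", "PI Planning", "Inspect and Adapt",
         "System Team", "PI Planning", "Inspect and Adapt"]

print(rank_by_dots(votes))
print(high_risk(votes, 2))
```

The threshold that separates "high risk" from the rest is a judgment call of the group in the protocol; the function above only makes the cut-off explicit.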
Figure 20: Practitioner focus group - part 2: identifying solutions
Chapter 4 Distributed SAFe problems
In this chapter an answer is presented to the first research question: “What problems can be expected when SAFe is applied in distributed settings?”. The research was split into two steps. First, a Systematic Literature Review on the problems of Distributed Agile Development and Distributed Scrum was done. Second, a multiple informant methodology was used to filter these problems and uncover the distributed SAFe problems.
““Scaled Agile Framework” AND distributed AND problems”, and ““Scaled Agile Framework” AND
SAFe”. The first query yielded 96 hits, the second query 122.
From this search, two studies discussing problems were found: [65] and [66]. These studies discuss the challenges of transitioning from a traditional organization to an organization where agile is scaled. However, they do not cover the distributed aspect required for this research. Therefore, a different approach is taken to find problems of distributed SAFe. For this approach, the problems of Distributed Agile Development and Distributed Scrum are researched in this Systematic Literature Review. On these topics there is enough literature to fulfill the condition.
The second condition is that a predefined search strategy is used for the review. The protocol, as
summarized in 3.2.4, page 23, fulfills the condition that a predefined search strategy is used.
A Systematic Literature Review [9], done prior to this research, discovered problem groups in Distributed Scrum. These problems are listed in Table 2, ordered by their class as classified in [9].
Table 2: Problem groups of Distributed Scrum ordered by class from [9]
It should be noted that these problems can also occur when working co-located; for example, “incorrect execution of Scrum” can also happen with co-located teams. Additionally, there is overlap between these problems; for example, “meetings at the office outside office hours” are due to “time differences”, which is a separate problem. Also, the classification itself, as presented in [9], could be argued with; for example, the problem “coordinating in multiple time zones is difficult” is classified as coordination, but could also be classified as time zone.
Although these observations are correct, the choices made in [9] regarding when problems are included, the way the problems are grouped, and the way they are classified are substantiated in [9]. This research therefore does not reevaluate these choices; the problems are used as presented.
Table 3: Challenges of Distributed Agile Development grouped by most applicable class from [3]
Challenge Class
Lack of informal communication Geographic dispersion
Increased effort to initiate contact Geographic dispersion
Reduced hours of collaboration Control and coordination breakdown
Lack of shared understanding Control and coordination breakdown
Increased dependency on technology Control and coordination breakdown
Increased complexity of the technical infrastructure Control and coordination breakdown
Communication delay Loss of communication richness
Loss of cohesion Loss of teamness
Reduced trust Loss of teamness
Perceived threat from low-cost alternatives Loss of teamness
Increased team size Loss of teamness
Differences in language Cultural differences
Differences in ethical values Cultural differences
Differences in organizational vision Cultural differences
Differences in managing individualism and collectivism Cultural differences
Differences in terms of agreement Cultural differences
Differences in time perception Cultural differences
Differences in quality assessment Cultural differences
Differences in design Cultural differences
As for the previous study, the choices made in [3] regarding the challenges and their classes are not reevaluated in this research. The challenges are used as presented in [3].
The rejected papers including the reason of rejection can be found in Appendix H. A list of all 184
problems and challenges identified by the accepted papers can be found in Appendix I.
In addition to the studies found, three studies were discovered that have themselves investigated Systematic Literature Reviews: [78], [79], and [80]. The studies referenced by these three have also been tested against the acceptance criteria. No new papers were found in this search. The results of these tests can be found in Appendix J.
4.1.6. Combining the results
The results as discussed in the previous sections were combined by grouping similar problems and challenges. Problems that occurred more than once have been grouped together. In Table 5 the 29 grouped problems are listed and summarized. How these problems are grouped can be found in Appendix K. There are 25 problems that could not be grouped; these can be found in Appendix L.
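The grouping step described above can be sketched as a small script: problems extracted from the accepted papers are mapped to a canonical problem group, and the number of mentions per group is counted. Note that the problem texts and the mapping below are hypothetical illustrations, not the actual data from the review.

```python
# Sketch of grouping extracted problems and counting mentions
# (mapping and problem texts are hypothetical).
from collections import Counter

def group_problems(extracted, mapping):
    """Count how often each canonical problem group is mentioned."""
    return Counter(mapping[p] for p in extracted if p in mapping)

mapping = {
    "teams in different time zones": "time zone differences",
    "meetings outside office hours": "time zone differences",
    "language issues": "language barriers",
}
extracted = ["teams in different time zones", "language issues",
             "meetings outside office hours"]

counts = group_problems(extracted, mapping)
# Problems mentioned more than once form a problem group (cf. Table 5).
groups = {g for g, n in counts.items() if n > 1}
print(groups)
```

In the actual review the mapping was built by hand, by judging which problem descriptions refer to the same underlying problem; the script only makes the counting explicit.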
It should be noted that some of these problems can also occur when working co-located. For example, language barriers can also occur when working co-located with team members of different nationalities. Thus, the problems presented do not occur exclusively when working distributed. However, the reviewed studies mention these problems as problems that can occur when working distributed. Moreover, that a problem can occur when working co-located does not mean that it cannot occur when working distributed. For this reason, these problems can be expected to occur when SAFe is applied in distributed settings.
Table 5: Problem groups
# Distributed Agile Development problems Times mentioned
8 Problems due to misinterpretation 8
Misinterpretation can come from misunderstanding during communication because of limited communication bandwidth, or because information is not accessible or even hidden. This can result in reduced cooperation and loss of information.
9 Problems due to lack of agile training 7
If customers or developers do not have a similar understanding of agile this
creates a gap between the skill levels of the involved parties. This gap makes it
difficult for these parties to work together.
10 Problems due to reduced trust 7
Many studies mention that there is reduced trust between team members
when working in a distributed setting. This reduced trust can lead to lack of
productivity and loss of teamness.
11 Problems due to time zone differences 7
Time zone differences can lead to having meetings outside office hours and
reduced availability for synchronous communication.
12 Problems due to people differences 6
People differences can be many things, difference in time perception, notion
of authority, individualism, or ethical values. These differences can make it
difficult to work together.
13 Problems due to lack of traditional management 6
Managing an agile project can be difficult because of the lack of traditional
management processes to steer the project. This lack of processes can cause
problems if teams do not function autonomously.
14 Problems due to difficulties with coordination 6
Working together with multiple sites in multiple time zones is difficult; this increases coordination costs and can lead to unnecessary delays and conflicting work.
15 Problems due to shared ownership and responsibility 6
In agile, teams get shared ownership of their own projects, which also gives them responsibility for these projects. When the teams work distributed, this can cause problems, as the teams do not feel this responsibility, which could result in avoidance of accountability.
16 Problems due to incorrect execution of Scrum* 6
Not executing Scrum properly results in many problems, such as features that are not ready at the end of the sprint and teams that get no feedback on their work because they do not hold retrospectives or reviews.
17 Problems due to cultural differences - organizational and national 5
Differences in culture can be differences of national culture between sites but
also differences in organizational culture between sites. These differences can
make it harder for sites to cooperate.
18 Problems due to the loss of informal contact 5
When working distributed the chitchat at the coffee corner is lost. This loss of
informal contact can create a lack of awareness of what is going on with the
team members, leading to less communication and collaboration.
19 Problems due to lack of collective vision 4
If there is no collective vision, teams miss the big picture. This results in less
focus and commitment of the teams.
20 Problems due to lack of requirement documents 4
Scrum does not provide formal documentation in the form of requirement documents; this can cause problems if the customer wants to work with fixed requirements. Teams can also have issues with communication if important decisions are not documented, leading to unclear requirements.
21 Problems due to lack of visibility 4
It is difficult to evaluate the current state of a project. This lack of visibility makes it difficult to create trust between sites.
22 Problems due to difficulties in knowledge sharing 4
Knowledge sharing when working distributed is difficult as knowledge is spread
over the different sites. If sharing of this knowledge is not done between sites
this can lead to a lack of domain knowledge in some sites.
23 Problems due to increased communication effort 4
Initiating contact in a distributed environment takes increased effort, as it cannot be initiated face to face; some tool must be used. This creates communication overhead and increases communication costs.
24 Problems due to increased team size 3
When the team size is increased, it becomes more difficult to work together as a team.
25 Problems due to different holidays 3
Different countries, cultures and religions have different holidays. When these
holidays are not overlapping, it is difficult to synchronize work between the
distributed sites.
26 Problems due to difficulties with agile decision making 3
Agile decision making is different from traditional decision making because
teams get more decision-making power. This results in management having to
let go and trust the teams to make the right decision.
27 Problems due to increased number of teams 3
When the number of teams increases, this creates difficulties for agile
practices. Agile practices must be scaled for this increased number of teams.
28 Problems due to silence of participants 2
During meetings, some participants can remain silent and passive due to
linguistic or cultural differences.
29 Problems due to increased number of sites 2
Working distributed means working with multiple sites. This can cause all kinds
of problems with communication and coordination. Many of these problems
are listed in this table.
This incorrect execution of methods also applies to the SAFe framework in a distributed setting. For readability, this problem is described in this thesis as “problems due to incorrect execution of SAFe”.
Although this step is logical, it should be clearly noted that this generalization is not substantiated in
any way.
It should be noted that not all 29 Distributed Agile Development problems from the Systematic Literature Review are used. Problems mentioned only twice are not considered; problems mentioned three times are. On the one hand, dismissing problems that are not mentioned often early in the process could result in an important problem being filtered out. On the other hand, taking all problems further in the research could result in solving a problem that does not occur often. If only two out of 13 Systematic Literature Reviews mention a problem, it can be considered a coincidence, as the others should have found the problem as well. Three mentions could also be a coincidence, but that chance is smaller. Moreover, dismissing those problems as well would result in four more problems being dismissed early in the process, which is not preferred at this stage of the research.
consultation with the informant it was decided to exclude this informant from the results. The fourth
criterion was met as the informants have all implemented SAFe in different companies.
The second condition is that the right number of informants should be consulted. For the multiple informant methodology, 3 informants that have implemented SAFe in different companies were consulted. As stated in 3.3.1, 3 informants with different backgrounds are sufficient to eliminate less relevant problems.
Explanation
-- Problem is not considered at all
- Problem is not really considered
-/+ Problem partly considered
+ Problem is considered
++ Problem is strongly considered
Problem | Alignment | Built-in Quality | Transparency | Program Execution | Result*
Coordination
Unavailability of people | + | -- | - | -/+ | +
Different execution of work practices | -/+ | ++ | + | ++ | ++
Time zone differences | -- | -- | -- | -- | --
Difficulties with coordination | + | -/+ | -/+ | + | +
Increased number of teams | ++ | + | ++ | ++ | ++
Communication
Lack of synchronous communication | - | -- | - | - | -
Loss of informal contact | - | - | - | - | -
Misinterpretation | ++ | -- | ++ | -/+ | ++
Increased communication effort | -- | -- | -- | -- | --
Cultural
Language barriers | -- | -- | -- | -- | --
Different holidays | -- | -- | -- | -- | --
Cultural differences - organizational and national | + | -- | + | -/+ | +
Agile expertise
Lack of agile training | - | -- | - | -/+ | -/+
Lack of traditional management | -/+ | -/+ | -/+ | + | +
Difficulties with agile decision making | + | -- | + | -/+ | +
Incorrect execution of SAFe | - | - | - | - | -
Lack of requirement documents | + | + | -/+ | + | +
Lack of visibility | + | - | ++ | - | ++
Teamness
Reduced trust | ++ | + | ++ | + | ++
Loss of cohesion | + | + | + | + | +
People differences | - | - | - | - | -
Shared ownership and responsibility | ++ | + | + | + | ++
Increased team size | + | -- | -- | + | +
Difficulties in knowledge sharing | + | -- | + | + | +
Lack of collective vision | ++ | -- | + | -/+ | ++
*Note on result: The value in the result column is the maximum value of the four other columns.
This table has been discussed and verified with three SAFe program consultants from Prowareness, using the protocol previously discussed in 0, page 25. It should be noted that in the execution of the protocol, some of the informants gave different values for the mapping. However, none of these differences affected the result column, and reaching a consensus on the values also did not affect the result.
From this table, the problems that are not really considered or not considered at all in SAFe can be extracted; these problems are therefore threats to SAFe.
4.2.3. Problem - Core Value mapping: impact
A second mapping between the problems and the core values is made, in which the core values are viewed as the things that SAFe tries to achieve: the goals of SAFe. In this mapping, the impact of each problem on the core value as a goal is assessed. This mapping is presented in Table 9.
Table 8: Legend for Table 9: degree of impact
Explanation
-- Problem has no impact
- Problem has little impact
-/+ Problem has moderate impact
+ Problem has severe impact
++ Problem has very severe impact
Problem | Alignment | Built-in Quality | Transparency | Program Execution | Result*
Coordination
Time zone differences | ++ | -- | + | + | ++
Communication
Lack of synchronous communication | + | + | - | - | +
Loss of informal contact | + | - | - | - | +
Increased communication effort | + | + | + | ++ | ++
Cultural
Language barriers | ++ | + | + | ++ | ++
Different holidays | -- | -- | - | - | -
Agile expertise
Incorrect execution of SAFe | ++ | + | + | ++ | ++
Teamness
People differences | - | - | -/+ | -/+ | -/+
*Note on result: The value in the result column is the maximum value of the four other columns.
This table has been discussed and verified with three SAFe program consultants from Prowareness, using the protocol previously discussed in 0, page 25. It should be noted that, as in the previous mapping, in some cases the informants gave different values during the execution of the protocol. The informants deviated more from each other when filling in this mapping, which resulted in some differences, also in the result column. However, these small changes did not affect the next steps of the process.
Selecting from the 9 problems that are not considered in SAFe, the problems that have a severe impact on the core values are the following.
Note that, as stated previously, these problems do not exclusively occur when working distributed. However, the fact that a problem can also occur when working co-located does not mean that it cannot occur when working distributed. Thus, these problems are expected to occur when applying SAFe in distributed settings.
Chapter 5 Identification of failing SAFe elements
In this chapter an answer is presented to the second research question: “What SAFe elements can be
expected to fail when applied in distributed settings?”. To do this, failing elements were identified based on the distributed SAFe problems. Because an identification based only on the insights of one person cannot be considered sufficiently substantiated, it is corroborated using triangulation. First, an identification was made based on insights gained from answering the first research question. Second, an identification was made based on theory using a
focus group in which distributed and SAFe experts participated. Third, an identification was made
based on practice using a focus group in which practitioners participated. Finally, the insights of the
three identification methods are combined to reach a conclusion on what SAFe elements can be
expected to fail in distributed settings.
Figure 24: Agile Release Train elements failing - result of identification based on literature, created based on [1]
5.1.1. Elements expected to fail
The argumentations of why the elements are classified as expected to fail are described below. Only
the elements that are expected to fail are presented.
5.2.1. Conditions for the expert focus group
The main condition for using a focus group, as stated in 3.4.1 page 26, is that it should be composed
of 6 to 12 people from different backgrounds. With the summer holiday season nearby, the choice was made to organize the meeting before the holidays, as otherwise two months would pass before another opportunity would arise. This left little time to recruit participants; the session was therefore held with 6 participants of different backgrounds and genders.
The second condition is that if the participants are asked to individually judge a topic, all participants
should be able to do this based on their expertise. The participants have been asked to vote
individually in the second round, thus they should all be able to individually judge the topic. The
participants for this focus group have three different backgrounds: SAFe program consultants, Release
Train Engineers with distributed experience (practitioners), and distributed experts from the academic
world. The level of expertise of the participants on the topics, distributed and SAFe, is different. This
difference could be a limitation and will be discussed in the limitations section.
The last condition is regarding previous involvement in the research. For the focus group, none of the
participants have been previously involved with the research. Therefore, no influence from this can
be expected.
In the first round the participants started in two groups, with each group containing a SAFe expert, a distributed expert, and a practitioner. These groups each identified elements that were specifically challenged by working distributed. After both groups had finished, a plenary session was held to consolidate the results; from this session, 9 elements were identified that are specifically challenged by working distributed. The result of this session can be found in Table 10.
Table 10: Expert focus group result of round 1
In the second round the participants individually dot voted on the elements, both on likelihood and on impact: likelihood is the likelihood that the element fails in a distributed setting, and impact is the impact if the element fails in a distributed setting.
Combining the scores of likelihood and impact gives the risk of an element failing. As stated in [81], Risk_i = P(Loss_i) * I(Loss_i), in which P is the probability and I the importance; in our case, P is the likelihood and I the impact. Combining the likelihood and impact scores thus gives the risk that the element fails in a distributed setting. The individual votes of the participants on likelihood and impact can be found in Appendix P. These individual votes are summed in Table 11. Based on this data, the calculated risk is visualized in Figure 26.
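As an illustration (not part of the thesis), the risk calculation can be sketched as follows, using the dot-vote totals of the three highest-ranked elements from this focus group.

```python
# Sketch of Risk_i = P(Loss_i) * I(Loss_i), with likelihood as P and
# impact as I, using the dot-vote totals of the expert focus group for
# the three highest-ranked elements.
likelihood = {"16. PI planning": 18, "15. Inspect & Adapt": 11, "14. Solution Demo": 4}
impact = {"16. PI planning": 18, "15. Inspect & Adapt": 11, "14. Solution Demo": 7}

# Risk per element: the product of its likelihood and impact votes.
risk = {element: likelihood[element] * impact[element] for element in likelihood}

# Rank the elements by descending risk of failing in a distributed setting.
for element, score in sorted(risk.items(), key=lambda kv: kv[1], reverse=True):
    print(element, score)
```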
Table 11: Votes on likelihood and impact

Figure 26: Risk of SAFe element failing (risk = likelihood * impact)

| Element | Risk |
|---|---|
| 16. PI planning | 324 |
| 15. Inspect & Adapt | 121 |
| 14. Solution Demo | 28 |
| 12. Program Kanban | 25 |
| 13. System Demo | 18 |
| 20. DevOps & System team | 18 |
| 2. Lean-agile mindset | 12 |
| 1. Core values | 0 |
| 6. Communities of Practice | 0 |
Figure 27 and Figure 28 show the distribution of the votes per expert group.
Figure 27: Distribution of the likelihood votes per expert group

| Element | Total | Practitioner | Distributed expert | SAFe expert |
|---|---|---|---|---|
| 16. PI planning | 18 | 7 | 6 | 5 |
| 15. Inspect & Adapt | 11 | 5 | 2 | 4 |
| 14. Solution Demo | 4 | 2 | 2 | 0 |
| 12. Program Kanban | 5 | 1 | 2 | 2 |
| 13. System Demo | 6 | 1 | 1 | 4 |
| 20. DevOps & System team | 3 | 0 | 3 | 0 |
| 2. Lean-agile mindset | 3 | 0 | 0 | 3 |
| 1. Core values | 3 | 2 | 1 | 0 |
| 6. Communities of Practice | 1 | 0 | 1 | 0 |
Figure 28: Distribution of the impact votes per expert group

| Element | Total | Practitioner | Distributed expert | SAFe expert |
|---|---|---|---|---|
| 16. PI planning | 18 | 7 | 4 | 7 |
| 15. Inspect & Adapt | 11 | 5 | 1 | 5 |
| 14. Solution Demo | 7 | 3 | 4 | 0 |
| 12. Program Kanban | 5 | 1 | 2 | 2 |
| 13. System Demo | 3 | 2 | 1 | 0 |
| 20. DevOps & System team | 6 | 0 | 6 | 0 |
| 2. Lean-agile mindset | 4 | 0 | 0 | 4 |
| 1. Core values | 0 | 0 | 0 | 0 |
| 6. Communities of Practice | 0 | 0 | 0 | 0 |
5.2.3. Analysis of expert focus group results
The top 3 elements of the likelihood and impact ranking, the events of the Agile Release Train, were further discussed in the focus group. That these events are likely to fail with high impact is not surprising, because these events are key to SAFe and part of essential SAFe.
From the results of the focus group, both the program Kanban and the DevOps & system team have at least the same risk score as the system demo, which had the lowest score of the events. Therefore, both the program Kanban and the DevOps & system team are added to the events as elements with a high risk of failure. Again, the elements with a risk of failing are marked red in the SAFe big picture in Figure 29; other elements are marked green.
Figure 29: Agile Release Train elements failing - result of expert focus group, created based on [1]
The reasoning of the focus group concerning each of the red elements is presented below.
5.2.3.1. PI planning
Both groups identified the PI planning (16) as being challenged by working distributed. There was not much discussion on this element. Holding a distributed meeting with a small team is already difficult, so holding one with an entire Agile Release Train will be even harder. Participation of all members of the Agile Release Train is key to the success of the meeting, and getting all team members to actively participate is difficult when working distributed.
5.2.3.3. Solution Demo
Like the previous two elements, both groups identified the solution demo (14) as being challenged without much discussion. However, the reasoning here was different: including the key stakeholders is more difficult when the meeting is done distributed, and without the key stakeholders present, the meeting is of little use. Therefore, the success of this meeting is also challenged by working distributed.
The second condition is that if the participants are asked to individually judge a topic, all participants
should be able to do this based on their experience. The participants are asked to vote individually in
the second round, thus they should all be able to individually judge the topic. The selected participants
are all Release Train Engineers. However, their experience differs. This is based on their answers to
the questions regarding their experience, that have been asked before the focus group by email. Based
on their experience, the participants have been divided into groups. This difference could be a
limitation and is discussed in the limitations section.
The last condition is that participants should not have been previously involved in the research. This
condition is violated, as one of the participants was also present at the previous focus group. Another participant had attempted to participate as an informant for the multiple informant methodology; this participant, however, did not feel confident answering the questions, so that session was stopped and its results were left out. These two participants, as well as one other participant, had previously taken part in the survey that was done as part of this research. The implications of this violation are handled in the limitations.
In the first round the participants started in two groups, divided based on previous participation.
Participants who had previously participated in the research were grouped together. Each group
identified elements that were specifically challenged by working distributed. After both groups had finished, a plenary session was held to consolidate the results; from this session, 13 elements were identified that are specifically challenged by working distributed. The result of this round can be found in Table 12.
The participants stated that some elements are challenged depending on the implementation. These
elements were: release management, DevOps, system team, release any time and customer.
Table 12: Practitioner focus group result of round 1
In the second round the participants individually dot voted on the elements, both on likelihood and on impact: likelihood is the likelihood that the element fails in a distributed setting, and impact is the impact if the element fails in a distributed setting. Because the experience of the participants differs, they have been categorized. Before the focus group, the participants were asked 3 questions regarding their experience on the topics distributed, SAFe, and distributed SAFe. Based on their responses, the participants have been categorized as beginner, intermediate, or expert. Using this categorization, the votes of the participants in the beginner category have been omitted from the results. How the categorization was done can be found in Appendix Y.
Same as in 5.2.2, combining the scores of likelihood and impact gives the risk of an element failing. The individual votes of the participants on likelihood and impact can be found in Appendix Z. These individual votes are summed in Table 13. Based on this, the calculated risk is visualized in Figure 31.
Table 13: Votes on likelihood and impact
Figure 31: Risk of SAFe element failing (risk = likelihood * impact)

| Element | Risk |
|---|---|
| 16. PI planning | 806 |
| 15. Inspect & Adapt | 390 |
| 32. Feature | 176 |
| 4. Implementing 1-2-3 | 170 |
| 17. Release Train Engineer | 108 |
| 25. Shared Services | 96 |
| 6. Communities of Practice | 72 |
| 10. Agile Release Train / Value Stream | 55 |
| 11. Architectural runway | 36 |
| 19. Product Management | 18 |
| 33. Enabler | 3 |
| 20. Business Owners | 2 |
| 26. User Experience | 1 |
Figure 32 and Figure 33 show the distribution of the votes per expert group.
Figure 32: Expert distribution on likelihood

| Element | Total | Intermediate | Expert |
|---|---|---|---|
| 16. PI planning | 31 | 9 | 22 |
| 15. Inspect & Adapt | 26 | 9 | 17 |
| 32. Feature | 11 | 1 | 10 |
| 4. Implementing 1-2-3 | 10 | 6 | 4 |
| 17. Release Train Engineer | 9 | 0 | 9 |
| 25. Shared Services | 16 | 5 | 11 |
| 6. Communities of Practice | 9 | 4 | 5 |
| 10. Agile Release Train / Value Stream | 5 | 1 | 4 |
| 11. Architectural runway | 6 | 3 | 3 |
| 19. Product Management | 2 | 0 | 2 |
| 33. Enabler | 3 | 1 | 2 |
| 20. Business Owners | 1 | 0 | 1 |
| 26. User Experience | 1 | 0 | 1 |
Figure 33: Expert distribution on impact

| Element | Total | Intermediate | Expert |
|---|---|---|---|
| 16. PI planning | 26 | 7 | 19 |
| 15. Inspect & Adapt | 15 | 8 | 7 |
| 32. Feature | 16 | 2 | 14 |
| 4. Implementing 1-2-3 | 17 | 10 | 7 |
| 17. Release Train Engineer | 12 | 2 | 10 |
| 25. Shared Services | 6 | 0 | 6 |
| 6. Communities of Practice | 8 | 2 | 6 |
| 10. Agile Release Train / Value Stream | 11 | 1 | 10 |
| 11. Architectural runway | 6 | 3 | 3 |
| 19. Product Management | 9 | 4 | 5 |
| 33. Enabler | 1 | 0 | 1 |
| 20. Business Owners | 2 | 0 | 2 |
| 26. User Experience | 1 | 0 | 1 |
5.3.3. Analysis of practitioner focus group results
The top 3 elements of the likelihood ranking (PI planning, inspect & adapt, and shared services) and the top 3 elements of the impact ranking (PI planning, implementing 1-2-3, and feature) were further discussed in the focus group: the PI planning, inspect & adapt, implementing 1-2-3, feature, and shared services. Interestingly, in contrast to the theory, the system demo and solution demo are not in the top 3; moreover, they were not identified as specifically challenged by working distributed at all. The reason for this, according to the participants of the focus group, is that, in contrast to what the theory states, in practice these events are done with the teams, not with the entire Agile Release Train.
The elements implementing 1-2-3 and feature were not identified as specifically challenged by working distributed in theory, but are challenged in practice. Implementing 1-2-3 is challenged because the different locations are usually trained by different coaches. This results in a different interpretation of SAFe per location, a difference that can cause serious problems when working together. Features are challenged because, when work on a feature is distributed across locations, there usually is a lack of common understanding of the feature. Additionally, working across locations means that dependencies can arise between locations. Together with a lack of common understanding, this can become a serious problem.
From the results of the focus group, the Release Train Engineer has a higher risk of failing than shared services. As in the expert focus group, the Release Train Engineer is therefore added to the elements with a high risk of failure. Again, the elements with a risk of failing are marked red in the SAFe big picture in Figure 34; other elements are marked green.
Figure 34: Agile Release Train elements failing - result of practitioner focus group, created based on [1]
The reasoning of the focus group concerning each of the red elements is presented below.
5.3.3.1. PI planning
Both groups identified the PI planning (16) as being challenged by working distributed, without much discussion. Doing the PI planning distributed is possible, but considerably harder. Additionally, this is the key event on which the Agile Release Train works: if this event fails, everything that will be done in the upcoming PI is compromised.
5.3.3.3. Feature
Both groups identified feature (32) as challenged by working distributed. There was some discussion, because it is the implementation of the feature that is challenged, not necessarily the feature itself. However, the groups decided that the feature is to be implemented by the Agile Release Train. This implementation, when done distributed, is more difficult because, when work on a feature is distributed across locations, there usually is a lack of common understanding of the feature. Additionally, working across locations means that there will be dependencies across locations.
Combining the results of the three identifications gives two elements that are identified in each of them: the PI planning and inspect & adapt, which have the highest risk of failing in a distributed setting. Additionally, two elements are identified with a high risk of failing depending on the implementation: DevOps and the system team. It should be noted that this does not mean that the other elements will not fail; it simply means that these are the most likely to fail. The result of the triangulation can be found in Table 14.
Table 14: Overview of triangulation result
Besides these four elements, the system demo and solution demo are also expected to fail based on theory. However, in practice these elements are not done with the entire Agile Release Train. Interestingly, one of the problems identified is “incorrect execution of SAFe”. Is the reason for this that SAFe is applied incorrectly by the organizations, or is SAFe itself incorrect and the way certain organizations execute it better? This question cannot be answered with the data gathered during this research; therefore, more research is required.
In Figure 35, the elements that have a high risk of failing are marked red, those with a high risk of failing depending on the implementation are marked yellow, and all other elements are marked green.
Figure 35: Agile Release Train elements failing - result of triangulation, created based on [1]
5.4.1. PI planning
The PI planning (16) is the key event in which the Agile Release Train synchronizes for the upcoming
PI. If this event fails, the next PI is compromised. Therefore, it is important that this event does not
fail. However, as indicated by both experts on the theory as well as practitioners, when done
distributed, there is a high chance that the PI planning fails. The biggest risk for distributed SAFe
therefore is the PI planning.
5.4.3. DevOps
The experts on theory indicate that DevOps (22) might fail, whereas the practitioners indicate that this depends on the implementation. In one implementation, DevOps is responsible for setting up and supporting the release process by making builds and new versions. In other implementations, the build process is fully automated and DevOps simply maintains the tools. Thus, depending on the situation, DevOps failing can have serious consequences for the Agile Release Train. Therefore, the third risk for distributed SAFe is DevOps.
Chapter 6 Customizations of SAFe
In this chapter a solution is proposed as an answer to the third research question: “What would SAFe
look like, when customized for distributed settings?”. First, customizations of SAFe, based on theory
are presented. Second, customizations of SAFe, based on practical experience are given. For this, the
results of the practitioner focus group are used. Finally, by combining the insights of both theory and
practical experience a solution is proposed.
If doing the PI planning co-located is not an option, there are other solutions to reduce the risk of the
PI planning failing. For example, by using a video conference system to get a live connection between
the locations. This way, the locations can see and hear each other even though they are physically not
together. The plenary parts of the PI planning can be done together, as though everyone is in a single
room. For the team breakouts, if a team on another location is needed they can also use the video
conference system to talk to one another. Notably, this solution is also given in the PI planning toolkit presented by SAFe. This toolkit is not publicly available; it can be bought and is available to SPCs. That this solution appears in the toolkit supports it as a solution for the PI planning.
Another solution, if the PI planning is not done co-located, is to have a facilitator present at each
location. This way, the facilitators can prepare the PI planning together, so that each location is
properly prepared for the PI planning. Additionally, during the PI planning each location has their own
facilitator ensuring that everything runs smoothly at that location. Finally, if things happen that affect multiple locations, the facilitators can solve these together and make sure that all locations are updated on what is happening. Possibly, these extra facilitators could also support the locations during the rest of the program increment, in which case they become a Release Train Engineer for their location.
Additionally, if the locations are spread over multiple time zones such that the overlapping time window is less than an 8-hour working day, the PI planning can be spread over more than two days. For example, with a 6-hour overlap, doing the PI planning across 3 days ensures that all agenda items can be covered at all locations. Note that if there are no overlapping hours during a working day, at least one location will have to work late or early.
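The scheduling rule above can be sketched as a small calculation. This is an illustration, not from the thesis; it assumes the regular PI planning agenda of two 8-hour days, i.e. 16 hours in total.

```python
import math

# Illustrative sketch: number of days needed for the PI planning when
# only `overlap_hours` per day are shared by all locations. Assumes the
# regular agenda of two 8-hour days (16 hours), which is an assumption
# for this example, not a figure from the thesis.
def pi_planning_days(overlap_hours, agenda_hours=16):
    if overlap_hours <= 0:
        raise ValueError("No overlap: at least one location must work late or early.")
    return math.ceil(agenda_hours / overlap_hours)

print(pi_planning_days(8))  # 2: the regular two-day schedule
print(pi_planning_days(6))  # 3: matches the 6-hour overlap example above
```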
Finally, a digital program board and digital PI objectives can be used. This way each location can see
and update the program board. Besides this, the digital PI objectives can be shared across the locations
of the Agile Release Train so each location can see the progress of all teams.
Although all solutions presented above will help, without the use of video conferencing the other solutions will not be sufficient for a successful PI planning.
These solutions are substantiated by the Elektra case study [16], which states the following regarding the PI planning: “A Joint PI planning with all involved individuals in one place is the preferred
and most collaborative solution to get to jointly develop PI objectives. However, a downscaled co-
located planning meetings might be an alternative if e.g. budget do not allow for meeting face-to-face.
Well-functioning conferencing equipment (video conferencing, remote presentation etc.) along with
electronic tools to reflect different artifacts of the planning (e.g. planning board, risk board, scrum of
scrums etc.) are mandatory prerequisites for such meeting. The time difference is managed through
recorded presentations, and spreading the planning out on more days.”.
If co-locating all members of the Agile Release Train is not possible, there are other solutions for inspect & adapt, just as for the PI planning: using a video conference system, and having a Release Train Engineer facilitating at each location. Additionally, if there is only a small overlapping window between the different locations, the inspect & adapt meeting could be held on another day, separated from the PI planning.
Where the PI planning relies heavily on video conferencing, inspect & adapt does not rely on it as heavily. Getting action points on how to improve can be done using, for example, a feedback form. Although the meeting works better with video conferencing, it can also be successful without it.
Another solution would be to provide each location with its own DevOps team. This way, each location should be able to solve everything itself, and less communication between the locations is needed. The downside is that it could result in the locations not working together: the locations might end up solving the same problem with different solutions, which then have to be integrated again later. To prevent this, the different DevOps teams need to meet regularly to discuss what they are working on.
The last solution is to have the DevOps team at one location, but have the team, or individual team members, travel regularly to the other locations. This way the team does not have to work distributed, and no alignment between multiple DevOps teams is needed. However, the team members might not be willing to travel on a regular basis.
Besides these solutions, there are solutions in which there is no separate DevOps team. For example,
by integrating the DevOps expertise into the teams of the Agile Release Train. This way, each team
can do the DevOps activities on its own. Or by automating the DevOps activities so that the teams do
not require DevOps. Although both solutions are good, they cannot be directly implemented. The
transition towards such a solution will take time, for this reason they are not considered in the
proposed solution.
Same as for DevOps, a co-located system team combined with regular traveling could be a solution.
Having multiple system teams is not a solution because the work of these system teams must be
integrated, only adding more overhead.
6.2.1. Practitioner focus group results
The solutions identified were ranked based on difficulty (scored as easiness) and impact: solutions that are easy to realize got a high difficulty score, and solutions with high impact also got a high score. Combining difficulty and impact gives a solution score, using the formula Solution Score = Difficulty * Impact; the higher the score, the better the solution. For each of the solutions, the votes and solution score are presented in Table 15 to Table 19. The solution scores are visualized in Figure 37 to Figure 41. The individual votes can be found in Appendix CC.
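The scoring can be sketched as follows. The individual vote values in this example are hypothetical (the real votes are in Appendix CC); only the resulting products are meant to mirror the kind of solution scores reported below.

```python
# Sketch of Solution Score = Difficulty * Impact, where an easy-to-realize
# solution receives a high difficulty (easiness) score. The vote values
# below are hypothetical placeholders, not the actual focus group votes.
votes = {
    "15.1 Good communication tools": (12, 13),                # (difficulty, impact)
    "15.2 Divide the topics over the locations": (9, 5),
}

# Higher scores indicate better solutions.
scores = {name: difficulty * impact for name, (difficulty, impact) in votes.items()}
best = max(scores, key=scores.get)
print(best, scores[best])
```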
Interestingly, some of the solutions are already part of SAFe. For example, “features ready before PI
planning”, and “make sure that each feature has 1 owner to manage dependencies”. However, the
practitioners identified these as being helpful, even though they are already part of SAFe. Possibly
because these solutions are not expressed clearly enough in SAFe and were therefore not recognized
as being part of SAFe.
Table 15: 16. PI planning - votes on difficulty & impact
Figure 37: Solution scores for the PI planning (16) solutions

| Solution | Solution Score |
|---|---|
| 16.1 Features ready before PI planning | 105 |
| 16.3 Good communication tools | 90 |
| 16.2 Minimal 1 x per period PI planning on-site with all teams | 70 |
Table 16: 15. Inspect & Adapt - votes on difficulty & impact
Figure 38: Solution scores for the inspect & adapt (15) solutions

| Solution | Solution Score |
|---|---|
| 15.1 Good communication tools | 156 |
| 15.3 First I&A meeting on 1 location | 108 |
| 15.2 Divide the topics over the locations | 45 |
Table 17: 32. Feature - votes on difficulty & impact

Figure 39: Solution scores for the feature (32) solutions

| Solution | Solution Score |
|---|---|
| 32.3 Make sure that each feature has 1 owner to manage dependencies | 168 |
| 32.4 Small features | 110 |
| 32.2 Have common understanding on features | 99 |
| 32.1 Keep coordinating dependencies (Scrum of Scrums) | 42 |
Table 18: 4. Implementing 1-2-3 - votes on difficulty & impact
Figure 40: Solution scores for the implementing 1-2-3 (4) solutions

| Solution | Solution Score |
|---|---|
| 4.1 Have 1 team to coordinate the trainings & implementation | 224 |
| 4.3 Strong vision promoted top down | 88 |
| 4.2 Coaches know culture & problems of location | 30 |
Table 19: 25. Shared Services - votes on difficulty & impact

Figure 41: Solution scores for the shared services (25) solutions

| Solution | Solution Score |
|---|---|
| 25.4 Involvement during planning events | 144 |
| 25.3 1 x per period visit each team | 143 |
| 25.2 Services give commitment visibility | 72 |
| 25.1 Distribute features by shared service impact (focus areas) | 56 |
6.2.2. Analysis of practitioner focus group results
During the focus group, multiple possible solutions were identified. However, not all elements that were discussed during the focus group correspond with the elements that resulted from the triangulation. The elements that do correspond are the PI planning and inspect & adapt; the solutions for these elements are discussed further below.
Not all solutions have the same potential. Based on the solution score, the solutions with high
potential can be selected. The solutions that scored above average for their element are listed below.
The inspect & adapt meeting is a part of the PI planning. Therefore, these solutions can be combined. The features that will be discussed in the PI planning should be ready, and all teams should know them and have prepared for the PI planning. During the PI planning and inspect & adapt meeting, the communication tools between sites should be good; there can be no problems with the tools during the event. Additionally, the first inspect & adapt should be done co-located. If the PI planning is done co-located, this can be done directly; otherwise, it should be done at a later moment. This way, any problems with the PI planning can be solved together.
As stated previously, features should be ready before the PI planning according to SAFe. When working distributed, there should be more emphasis on this. Therefore, in the proposed solution this is ensured by having a Release Train Engineer at each location who makes sure that the teams prepare for the PI planning.
Each location has its own Release Train Engineer. Before the PI planning each Release Train Engineer
ensures that all teams at his or her location prepare the features for PI planning. During the PI planning
and inspect & adapt the Release Train Engineer facilitates the events for his or her location.
By using video conferencing systems during the PI planning and inspect & adapt, plenary sessions are
done with all locations together. This conference system can also be used to contact the other
locations during the team breakouts. A digital program board is used so all locations can see and
update the board. And finally, if needed the PI planning is extended over multiple days to ensure that
the locations can work together simultaneously.
The DevOps team works as one team at one location and travels regularly to the other locations to answer questions and help the teams. The system team is distributed over the locations, so its members know what is going on at each location when the team must integrate the program.
In Table 20, these 6 solutions are mapped to the problems and elements identified in the previous studies. It should be noted that the solutions all help towards preventing the problems from happening; however, even with these solutions the problems can still occur.
Table 20: Solutions for problems and failing elements
Additionally, with regular traveling of the DevOps team, the team is present at each location for some
time, regardless of the time zone. Finally, distributing the system team over the different locations
ensures that, regardless of the time zone, there is always a team member of the system team available
for the teams of each location.
Additionally, with regular traveling of the DevOps team, questions for the DevOps team can be asked
directly to the team when they are on location. Finally, by distributing the system team over the
different locations, questions can always be asked directly to the member of the system team at the
location.
Besides this, with regular traveling of the DevOps team, and distributing the system team over the
different locations, no communication tools are needed to contact the DevOps or system team.
Chapter 7 Discussion
In this chapter the methodologies and results of this research are discussed. First, the answers to each
of the research questions are presented. Second, the limitations of the methodologies and their effect
on this research are discussed. Third, a reflection on the research is given. Finally, recommendations
for future research are proposed.
As no literature was found on the problems of distributed SAFe, a Systematic Literature Review was done into the problems of Distributed Agile Development. These problems can occur when applying SAFe in a distributed environment. However, if SAFe has practices to mitigate a problem, the problem is not expected to occur.
Therefore, these problems have been filtered using a multiple informant methodology with
consensual approach. The informants have mapped the problems on the core values of SAFe to
determine if the problems are considered in SAFe. The problems that are not considered in SAFe are
threats to SAFe. The informants have also mapped these threats on the core values of SAFe to
determine what the impact of the threat is. The threats (problems) with high impact are expected to occur when applying SAFe in a distributed setting. This resulted in the following five distributed SAFe problems:
1. Incorrect execution of SAFe
2. Language barriers
3. Time zone differences
4. Increased communication effort
5. Inefficient communication tools
These problems are expected to occur when applying SAFe in distributed settings. But note that these
problems do not exclusively occur when working distributed.
The findings give insight into the problems that can be expected in distributed SAFe. Because no
literature was found on the problems of distributed SAFe, these findings add knowledge to the research
area of Distributed Agile Development.
The overlap of these identification methods discovered the following four elements that can be
expected to fail when SAFe is applied in a distributed setting.
1. PI planning
2. Inspect & Adapt
3. DevOps
4. System team
Additionally, based on theory, the system demo and solution demo would also be expected to fail.
In practice, however, these are not done with the Agile Release Train. Therefore, these elements are
not expected to fail when executed in a distributed setting.
These findings correspond to the elements that one would expect to fail when applying SAFe in a
distributed setting. The added value of this research is that this expectation is now confirmed
scientifically. The result of this research can be used as a base on which to build further research
on distributed SAFe.
Each location has its own Release Train Engineer. Before the PI planning each Release Train Engineer
ensures that all teams at his or her location prepare the features for PI planning. During the PI planning
and inspect & adapt the Release Train Engineer facilitates the events for his or her location.
By using video conferencing systems during the PI planning and inspect & adapt, plenary sessions are
done with all locations together. This conference system can also be used to contact the other
locations during the team breakouts. A digital program board is used so all locations can see and
update the board. Finally, if needed, the PI planning is extended over multiple days to ensure that
the locations can work together simultaneously.
The DevOps team works as one team at one location, and travels regularly to the other locations to
answer questions and help the teams. The system team is distributed over the locations so they know
what is going on at the different locations when the team must integrate the solution.
7.2. Limitations
In this section, the limitations of the methodologies, as described in Chapter 3, and their effects on
this research are discussed.
A limitation of using a Systematic Literature Review is that the results might be biased. For the
Systematic Literature Review done in this research, this effect is limited, as the review only considered
other Systematic Literature Reviews. Besides this, investigating two other reviews of Systematic
Literature Reviews of Distributed Agile Development provided no new studies. Though some bias always
remains, in this way it is minimized as much as possible.
Another limitation is that the frame of reference differs for each person. This might result in a
slightly different execution of the protocol when applied by another researcher, and these small
differences can lead to different papers being accepted and different data being extracted. Though
the transparency provided by a predefined protocol does not avoid this, it does make the decisions
traceable and repeatable.
Finally, a Systematic Literature Review is limited to previously published studies. For this research,
however, this is sufficient, as the Systematic Literature Review is done to discover the current
problems in Distributed Agile Development; the next steps are based on these problems, rather than
on new insights gained from the Systematic Literature Review itself.
When using multiple informants, one could wonder whether the informants are able to sufficiently judge
the topic of SAFe. Because the informants all meet the selection criteria presented in 3.3.1, they
should be able to do so. Moreover, all selected informants are SPCs, and are thus certified to give
training on SAFe. Therefore, the informants are able to sufficiently judge the topic of SAFe. Besides
insight into SAFe, insight into distributed development also seems required. However, as the problems
judged by the informants come from the Systematic Literature Review done previously, this insight
should be based on that literature study. These problems, and the corresponding insights, are therefore
provided by the researcher.
Another limitation is that the informants might not be able to provide a broad enough view to make
a sufficiently substantiated decision. As the selected informants have implemented SAFe in different
companies, this difference in experience should be sufficient to provide a broad enough view. It
should be noted that the experts all work at Prowareness; however, this is not where their SAFe
experience comes from, so their backgrounds on SAFe are sufficiently different.
Finally, the consensual approach might not provide the preferred answer; other ways to aggregate the
answers could provide a better outcome. However, as described in 4.2, the changes made when consensus
was reached after each mapping did not change the outcome. Thus, the use of a consensual approach did
not limit the result of the methodology.
Due to differences in expertise, the condition that all participants should be able to individually
judge the topic when asked is not fulfilled. This is thus a limitation. To handle this limitation, it
must be made plausible that all participants have reached a level of expertise at which they can
properly judge the topics.
Before the voting, there are three steps in which the expertise of the participants is increased.
After these steps, all participants are expected to be able to properly judge the topics. Moreover,
all participants have to agree with the elements that will be voted on, and participants are not
expected to agree to things they do not understand.
The growth of knowledge before the participants vote individually is qualitatively visualized in
Figure 42 and Figure 43. In both figures, the red line visualizes the minimal expertise required to
properly judge the topics. No action was taken during the session to verify this knowledge level,
because questions regarding the knowledge on the topics could influence the participants.
Figure 42: SAFe knowledge level versus SAFe expertise
In the first step, preparation before the session, neither the SAFe consultants nor the practitioners
are expected to receive new information regarding SAFe. The distributed experts will receive a lot of
new information regarding SAFe, and will therefore improve their knowledge significantly. In the
second and third steps, all participants can gain new insights through discussion in a group,
increasing the knowledge level of all participants. The distributed experts, however, will increase
more than the SAFe consultants and practitioners. In the plenary discussion the increase is smaller,
as participating in a bigger group is harder. After these three steps, the knowledge level of all
participants is expected to be high enough to properly judge the topic of SAFe by themselves.
Figure 43: Distributed knowledge level versus distributed expertise
Similarly, neither the practitioners nor the distributed experts are expected to receive new
information regarding the problems of distributed development. The SAFe consultants will receive some
new information regarding these problems, and will therefore improve their knowledge. The second and
third steps proceed similarly: all participants can gain new insights in the group discussion,
increasing the knowledge level of all participants. The SAFe consultants, however, will increase more
than the distributed experts and practitioners. After these three steps, the knowledge level of all
participants is expected to be high enough to properly judge the topic of distributed development.
Due to differences in expertise, the condition that all participants should be able to individually
judge the topic when asked is not fulfilled. This is thus a limitation. To handle this limitation, it
must be made plausible that all participants have reached a level of expertise at which they can
properly judge the topics.
To be able to judge the topics, the level of expertise on both distributed development and SAFe has to
be sufficient. The selected participants are all Release Train Engineers; however, their experience
differs. Based on their answers to the questions asked before the focus group via email, the
participants have been grouped into three categories: beginner, intermediate, and expert. For the
results of the focus group, only the votes of the experts and intermediates are taken into account.
Though the votes of the beginners are not counted, their participation in the discussions is of value.
Another option to ensure that everyone would be able to individually judge the topics would be to
provide all participants with extra information on the topics of distributed development and SAFe.
However, because the goal of the focus group is to gain insight based on practice, the choice was made
not to provide the participants with additional information.
The second condition, that the participants should not have been previously involved in the research,
was violated. Therefore, this is also a limitation. To handle it, it must be made plausible that the
previous involvement did not affect the focus group.
One of the participants was present at the previous focus group. Another participant had attempted to
participate as an informant for the multiple informant methodology; however, that participant did not
feel confident answering the questions, so the session was stopped and the results were left out.
These participants, as well as one other participant, had previously taken part in the survey that was
done as part of this research. For each of these participants, the time between their participation in
the survey and the focus group was roughly 2 months. For the previous focus group, this time was
roughly 4 months, and for the informant session roughly 6 months. For none of the participants is the
research part of their daily work.
Additionally, in the previous focus group and in the informant session, the participants were asked to
give insight based on theory, whereas in this focus group the participants are asked to provide
insight based on practice. Therefore, given the time that has passed since their participation, and
the fact that the participants are asked for practical insights, it is unlikely that their previous
participation affected the outcome of the focus group.
7.3. Reflection
This section reflects on whether the methodologies that are used provide a sufficiently substantiated
answer to the research questions.
First, the perspective provided by the informants can be considered too limited. In the application of
the methodology, the selected informants were all SAFe experts. The informants' experience with
distributed development was primarily derived from information provided by the researcher, which was
based on the results of the Systematic Literature Review. By design, the informants' insights come
from the SAFe perspective rather than the distributed perspective. For answering a question that
requires both SAFe and distributed knowledge, having only SAFe experts can be considered too limited.
Including experts from the field of distributed development would have resulted in a broader, more
complete perspective.
Second, the consensual approach can be criticized for not generating enough discussion. When consensus
was being reached, one of the three informants was not present. Though this informant agreed with the
consensus that was found, he did not participate in the discussion. It would have been better if he
had; this, however, was not possible due to circumstances.
Third, there is no insight into the data behind each cell of the mapping of the problems on the core
values. The mapping provides a good overview of the data. However, because of the way the informants
were asked to fill in the mapping, it did not provide insight into their reasoning. Only where the
informants had different views was their reasoning given during the discussion, and this was not
documented. It would have been better if the reasoning for each cell of the mappings had also been
documented.
These reflections do not disqualify the approach taken. However, given these reflections, it can be
argued that the answer to the research question could be better substantiated using the previously
described adjustments. The multiple informant methodology provided validated insights using relatively
few resources. In hindsight, a different approach with more emphasis on discussion, such as a focus
group, could have yielded more insightful results. Besides this, such an approach would have enabled
the inclusion of distributed experts as well as SAFe experts. It would, however, have required
considerably more time and resources, limiting the time available to investigate the other research
questions.
To gain practical insights, a survey in the SAFe community was done initially. However, due to the low
response, the results of the survey could not be sufficiently substantiated. For this reason, the
results were excluded from the research and another method for gaining insights from practice was
required. At this point, the choice was made to conduct another focus group. Although the survey is
not used, it would have been good to analyze why the response to the survey was so low. This, however,
would not provide an answer to the research question, and was therefore not done.
The use of a focus group for gaining insights from practice can be argued against. First, there are
only 11 participants, while globally there are many more SAFe practitioners, so the question can be
asked whether these 11 participants are a good representation of all SAFe practitioners. Second, in
the focus group, discussion was used to reach agreement. However, reaching agreement was not always
possible, because implementations can differ. Therefore, the use of a focus group for gaining
practical insights might not have been the best approach; the use of, for example, multiple case
studies might have been better.
The proposed solution itself is not strongly substantiated, again because of the time constraints of
this master thesis project. Support for this solution could be increased by verifying it with
practitioners, for example via mail or a survey. However, even with that support, it should still be
validated in practice.
Both questions are relevant, and to find a definite answer to them, research into all aspects of SAFe
should be done. However, as stated previously, the Agile Release Train is a core construct of SAFe.
As such, the results of the research are applicable to SAFe, although possibly not to all of its
elements. To discover this, additional research must be done.
The answer to the question whether the results apply to SAFe as a whole or only to the Agile Release
Train differs for each research question. For the first research question, the problems that are
expected to occur are not specific to the Agile Release Train, so this answer can be viewed as an
answer for SAFe. For the second and third research questions, the research looked specifically into
the Agile Release Train, and thus the answer holds only for the Agile Release Train.
Additionally, the answers found in this research are rather obvious, so one could ask what the added
value of this research is. The added value is that these rather obvious answers are now confirmed
scientifically. This confirmation has been done using different techniques, which have been documented
extensively. This gives a result that can be used as a base on which to build further research on
distributed SAFe.
The focus of this research was on the problems and failing elements that are expected to occur when
SAFe is applied in a distributed setting. Regrettably, this means that the question "is distributed
SAFe possible?" cannot be answered based on this research. Based on the results, it would seem that it
is possible, provided the right precautions are taken to ensure a successful PI planning and
inspect & adapt.
In hindsight, this research has focused on discovering what goes wrong when applying SAFe in a
distributed setting, rather than on how to prevent or solve these problems. After the first focus
group, the focus of the research could have switched towards solving the problems for the PI planning.
However, if this had been done, the answer to the second research question would not have been
substantiated further, leaving an open end to that part. Personally, I think that from a scientific
viewpoint the choice to further substantiate the second research question was correct, although from
the viewpoint of a company it was not preferred, as it did not yield surprising new insights.
On a personal note, I think that the solution for the PI planning derived from the data of this
research is not the best possible solution. I think it would be better to do, in addition to the PI
planning, a mini PI planning every team iteration. Before the teams do their iteration planning
sessions, the Agile Release Train comes together, possibly digitally, to briefly discuss what features
the teams will be working on in the upcoming iteration. This way, teams know what the other teams will
be working on, and where to go for help. Additionally, this reduces the risk of problems occurring
when the PI planning fails; the mini PI planning can ensure that the train stays on track. Although
this may be a good solution, it is completely theoretical and not supported by this research. It could
be interesting to investigate this solution in future research.
Finally, this research gives problems and elements that are expected to fail, and a proposed solution.
The problems, elements, and solution are not validated in practice. Some validation is done using a
focus group, but no case study or anything of the sort has been done. However, validation using a
single case study is not sufficient, and doing multiple case studies requires much time and many
resources. Validation was also attempted via a survey, which would have made a good validation from
practice; regrettably, this survey did not get enough response to be included in the results.
Additionally, this research has mostly focused on the Agile Release Train. Additional research must be
done to discover whether the results hold for SAFe as a whole or only for the Agile Release Train.
Though all these recommendations are valid, the next step in this research would be to test the
proposed solution in the field. Problems observed during this test bring us back to a reformulated
version of the first research question: "What problems occur when the customized version of SAFe is
applied in a distributed setting?". Based on these problems, the second research question can be
reformulated as "What SAFe elements of the customized version of SAFe fail when applied in a
distributed setting?". The answers serve as input to further improve this customized version of SAFe.
Furthermore, the practitioners mentioned that they did not do the system demo and solution demo with
the Agile Release Train, as proposed by the theory. Interestingly, one of the problems identified is
"incorrect execution of SAFe". Is the reason for this that SAFe is applied incorrectly by the
organization, or is SAFe itself incorrect and the way certain organizations execute it better?
Additional research must be done to answer these questions.
Besides this, it could be interesting to investigate the solution of bi-weekly mini PI planning
sessions in addition to the PI planning, to reduce the risk of the train derailing due to a failed
PI planning.
Finally, based on these recommendations, a list of questions for future research is proposed. First,
regarding the problems and failing elements identified in this research. Second, regarding the
difference between theory and practice of the system demo and solution demo. Third, regarding the
customized version of distributed SAFe. Fourth, regarding the bi-weekly mini PI planning.
1. Can the problems identified to occur when SAFe is applied in a distributed setting, based on
theory, be verified using a focus group?
2. Are the failing elements, identified for the Agile Release Train, representative for all of SAFe?
3. What is the reason that practitioners execute the system demo and solution demo in a different way
than prescribed by theory?
4. What is the consequence of practitioners executing the system demo and solution demo in a
different way than prescribed by theory?
5. What problems occur when the customized version of SAFe is applied in a distributed setting?
6. What SAFe elements of the customized version of SAFe fail when applied in a distributed setting?
7. Can a bi-weekly mini PI planning help the Agile Release Train to stay on track?
Chapter 8 Conclusion
In this chapter, a summary is given on the approach taken to answer the research questions and the
conclusions of the research questions are presented.
8.1. Summary
In this research, the following three research questions have been answered:
1. What problems can be expected when SAFe is applied in distributed settings?
2. What SAFe elements can be expected to fail when applied in distributed settings?
3. What would SAFe look like, when customized for distributed settings?
To answer the first research question, a Systematic Literature Review was done on the topics of
Distributed Agile Development and Distributed Scrum. This resulted in a set of Distributed Agile
Development problems. To relate these problems to SAFe, a multiple informant methodology with a
consensual approach was used. This methodology filtered the problems and discovered the distributed
SAFe problems.
To answer the second research question, triangulation was used. First, the author identified failing
elements based on the distributed SAFe problems found for the previous research question. Second, a
focus group with both SAFe and distributed experts was done to find an answer based on theory: the
expert focus group. Third, a focus group was done to find an answer based on practice: the
practitioner focus group. The overlap of these identification methods provided the answer to the
research question.
To answer the third research question, insights from the author, based on the previous research, were
combined with the results of the second part of the practitioner focus group.
8.2. Conclusion
The following five problems with distributed SAFe are found as the answer to the first research
question, "What problems can be expected when SAFe is applied in distributed settings?":
1. Incorrect execution of SAFe
2. Language barriers
3. Time zone differences
4. Increased communication effort
5. Inefficient communication tools
These findings give insight into the problems that can be expected in distributed SAFe. Because no
literature was found on the problems of distributed SAFe, these findings add knowledge to the research
area of Distributed Agile Development.
The answer to the second research question, "What SAFe elements can be expected to fail when applied
in distributed settings?", is visualized in Figure 44. The following four elements are found to be
expected to fail, of which the failure of the DevOps and system teams depends on the implementation.
1. PI planning
2. Inspect & Adapt
3. DevOps
4. System team
Figure 44: Agile Release Train elements failing, created based on [1]
These findings correspond to the elements that one would expect to fail when applying SAFe in a
distributed setting. The added value of these findings is that the expectation is now confirmed
scientifically. Therefore, the result of this research can be used as a base on which to build future
research on distributed SAFe.
No answer has been found to the third research question: "What would SAFe look like, when customized
for distributed settings?". However, based on theory, this research does provide an indication of what
the answer could be. Based on the problems that are expected to occur, a solution is proposed for the
elements that are expected to fail. Therefore, theoretically, using the proposed solution in
distributed settings is more likely to succeed than applying standard SAFe.
Each location has its own Release Train Engineer. Before the PI planning each Release Train Engineer
ensures that all teams at his or her location prepare the features for PI planning. During the PI planning
and inspect & adapt, the Release Train Engineer facilitates the events for his or her location.
By using video conferencing systems during the PI planning and inspect & adapt, plenary sessions are
done with all locations together. This conference system can also be used to contact the other
locations during the team breakouts. A digital program board is used so all locations can see and
update the board. And finally, if needed, the PI planning is extended over multiple days to ensure that
the locations can work together simultaneously.
The DevOps team works as one team at one location, and travels regularly to the other locations to
answer questions and help the other teams. The system team is distributed over the locations so they
know what is going on at the different locations when the team must integrate the solution.
Bibliography
[1] D. Leffingwell, A. Yakyma, D. Jemilo, R. Knaster and I. Oren, "Scaled Agile Framework," Scaled
Agile, Inc., [Online]. Available: http://www.scaledagileframework.com/. [Accessed 8 February
2016].
[2] J. D. Herbsleb and D. Moitra, "Global software development," IEEE software, vol. 18, no. 2, pp.
16-20, 2001.
[3] K. Dullemond and B. van Gameren, "Technological support for distributed agile development,"
TU Delft, Delft University of Technology, 2009.
[4] J. D. Herbsleb, "Global software engineering: The future of socio-technical coordination," 2007.
[5] J. D. Herbsleb and A. Mockus, "An empirical study of speed and communication in globally
distributed software development," IEEE Transactions on software engineering, vol. 29, no. 6,
pp. 481-494, 2003.
[6] P. J. Agerfalk, B. Fitzgerald, H. Holmstrom Olsson, B. Lings, B. Lundell and E. Ó Conchúir, "A
framework for considering opportunities and threats in distributed software development," in
Proceedings of the International Workshop on Distributed Software Development (DiSD 2005),
2005.
[7] M. Paasivaara, S. Durasiewicz and C. Lassenius, "Using scrum in distributed agile development:
A multiple case study," in Fourth IEEE International Conference on Global Software Engineering,
2009.
[8] D. Šmite, C. Wohlin, T. Gorschek and R. Feldt, "Empirical evidence in global software
engineering: a systematic review," Empirical software engineering, vol. 15, no. 1, pp. 91-118,
2010.
[9] P. van Buul and R. van Solingen, "Insights from a structured literature review (SLR) on
documented case-studies of Scrum application in globally distributed settings," Delft Software
Engineering Research Group, Delft, 2016.
[10] J. Sutherland, G. Schoonheim, E. Rustenburg and M. Rijk, "Fully distributed scrum: The secret
sauce for hyperproductive offshored development teams," in Agile Conference (AGILE'08),
2008.
[11] J. Sutherland, G. Schoonheim and M. Rijk, "Fully distributed scrum: Replicating local
productivity and quality with offshore teams," in 42nd Hawaii International Conference on
System Sciences (HICSS'09), 2009.
[12] J. Sutherland, G. Schoonheim, N. Kumar, V. Pandey and S. Vishal, "Fully distributed scrum:
Linear scalability of production between San Francisco and India," in Agile Conference
(AGILE'09), 2009.
[13] VersionOne, "10th Annual State of Agile Report," 2016.
[14] D. Leffingwell, A. Yakyma, D. Jemilo, R. Knaster and I. Oren, "Infogain case study," Scaled Agile,
Inc., [Online]. Available: http://scaledagileframework.com/infogain-case-study/. [Accessed 9
May 2016].
[15] D. Leffingwell, A. Yakyma, D. Jemilo, R. Knaster and I. Oren, "John Deere case study," Scaled
Agile, Inc., [Online]. Available: http://scaledagileframework.com/john-deere-case-study-part-1/.
[Accessed 9 May 2016].
[16] D. Leffingwell, A. Yakyma, D. Jemilo, R. Knaster and I. Oren, "Elekta case study," Scaled
Agile, Inc., [Online]. Available: http://scaledagileframework.com/elekta-case-study/. [Accessed 23
May 2016].
[17] D. Leffingwell, A. Yakyma, D. Jemilo, R. Knaster and I. Oren, "Accenture case study," Scaled
Agile, Inc., [Online]. Available: http://scaledagileframework.com/accenture-case-study/.
[Accessed 9 May 2016].
[18] D. Leffingwell, A. Yakyma, D. Jemilo, R. Knaster and I. Oren, "Scaled Agile Framework - Lean-Agile
Mindset," Scaled Agile, Inc., [Online]. Available: http://scaledagileframework.com/lean-agile-mindset/.
[Accessed 26 September 2016].
[20] M. Hirsch, "Making RUP agile," OOPSLA 2002 Practitioners Reports, pp. 1-8, 2002.
[21] J. Hunt, "Agile Methods with RUP and PRINCE2," Agile Software Construction, pp. 193-210,
2006.
[22] W. W. Royce, "Managing the development of large software systems," proceedings of IEEE
WESCON, vol. 26, no. 8, pp. 328-338, 1970.
[24] P. Kruchten, The rational unified process: an introduction, Addison-Wesley Professional, 2004.
[25] Rational, "Rational Unified Process Best Practices for Software Development Teams," Rational,
the software development company, November 2001. [Online]. Available:
https://www.ibm.com/developerworks/rational/library/content/03July/1000/1251/1251_bestpractices_TP026B.pdf.
[Accessed 17 August 2016].
[26] C. Larman and B. Vodde, "LeSS," The LeSS Company B.V., 2014. [Online]. Available:
https://less.works/. [Accessed 18 April 2016].
[27] S. W. Ambler and M. Lines, "Going Beyond Scrum: Disciplined Agile Delivery," Disciplined Agile
Consortium. White Paper Series, pp. 1-16, October 2013.
[28] K. Schwaber, "Nexus Guide," Scrum.org, August 2015.
[29] H. Kniberg and A. Ivarsson, "Scaling Agile@ Spotify," Spotify, October 2012. [Online]. Available:
https://ucvox.files.wordpress.com/2012/11/113617905-scaling-agile-spotify-11.pdf.
[Accessed 11 May 2016].
[31] M. Paasivaara, S. Durasiewicz and C. Lassenius, "Distributed agile development: Using Scrum
in a large project," in IEEE International Conference on Global Software Engineering, 2008.
[32] M. Vax and S. Michaud, "Distributed Agile: Growing a practice together," in Conference Agile,
2008. AGILE'08., 2008.
[33] P. L. Bannerman, E. Hossain and R. Jeffery, "Scrum practice mitigation of global software
development coordination challenges: a distinctive advantage?," in 45th Hawaii International
Conference on System Science (HICSS), 2012.
[35] B. S. Drummond and J. Francis, "Yahoo! Distributed Agile: Notes from the world over," in Agile
Conference (AGILE'08), 2008.
[36] J. D. Herbsleb, D. J. Paulish and M. Bass, "Global software development at siemens: experience
from nine projects," in Proceedings. 27th International Conference on Software Engineering,
2005.
[37] E. Ó. Conchúir, H. Holmstrom, J. Agerfalk and B. Fitzgerald, "Exploring the assumed benefits of
global software development," in IEEE International Conference on Global Software
Engineering (ICGSE'06), 2006.
[38] R. K. Gupta and P. Manikreddy, "Challenges in Adapting Scrum in Legacy Global Configurator
Project," in IEEE 10th International Conference on Global Software Engineering (ICGSE), 2015.
[39] R. Vallon, C. Drager, A. Zapletal and T. Grechenig, "Adapting to Changes in a Project's DNA: A
Descriptive Case Study on the Effects of Transforming Agile Single-Site to Distributed Software
Development," in Agile Conference (AGILE'14), 2014.
[40] V. J. Wawryk, C. Krenn and T. Dietinger, "Scaling a running agile fix-bid project with near
shoring: Theory vs. reality and (best) practice," in IEEE Eighth International Conference on
Software Testing, Verification and Validation Workshops (ICSTW), 2015.
[41] R. Noordeloos, C. Manteli and H. Van Vliet, "From RUP to Scrum in global software
development: A case study," in IEEE Seventh International Conference on Global Software
Engineering (ICGSE), 2012.
[42] F. Zieris and S. Salinger, "Doing Scrum Rather Than Being Agile: A Case Study on Actual
Nearshoring Practices," in IEEE 8th International Conference on Global Software Engineering
(ICGSE), 2013.
[43] I. Therrien and E. LeBel, "From Anarchy to Sustainable Development: Scrum in Less Than Ideal
Conditions," in Agile Conference (AGILE'09), 2009.
[44] T. J. Allen, "Managing the flow of technology: technology transfer and the dissemination of technological information within the R&D organization," Massachusetts Institute of Technology, 1977.
[45] T. Allen and G. Henn, "The organization and architecture of innovation," Routledge, 2007.
[46] D. Leffingwell, A. Yakyma, D. Jemilo, R. Knaster and I. Oren, "Scaled Agile Framework - Core
Values," [Online]. Available: http://www.scaledagileframework.com/safe-core-values/.
[Accessed 25 February 2016].
[47] R. Dolman and S. Spearman, "ASK: Agile Scaling Knowledge - The Matrix," Agile Scaling, 2014. [Online]. Available: http://www.agilescaling.org/ask-matrix.html. [Accessed 18 April 2016].
[48] C. Larman and B. Vodde, "Large Scale Scrum - More with LeSS," Scrum Alliance, 30 December 2013. [Online]. Available: http://agileatlas.org/articles/item/large-scale-scrum-more-with-less. [Accessed 18 April 2016].
[49] "Disciplined Agile 2.0," Disciplined Agile Consortium, 2015. [Online]. Available:
http://www.disciplinedagiledelivery.com/introduction-to-dad/. [Accessed 19 April 2016].
[50] "Disciplined Agile 2.0 - posters," Disciplined Agile Consortium, 2015. [Online]. Available:
https://www.disciplinedagileconsortium.org/posters. [Accessed 26 October 2016].
[52] D. Leffingwell, A. Yakyma, D. Jemilo, R. Knaster and I. Oren, "Essential SAFe," Scaled Agile, Inc., 9 February 2016. [Online]. Available: http://www.scaledagileframework.com/first-things-first-essential-safe/. [Accessed 19 April 2016].
[53] D. Leffingwell, A. Yakyma, D. Jemilo, R. Knaster and I. Oren, "Essential SAFe," Scaled Agile, Inc., 23 June 2016. [Online]. Available: http://www.scaledagileframework.com/an-essential-update-on-essential-safe/. [Accessed 27 June 2016].
[54] B. Kitchenham and S. Charters, Guidelines for performing systematic literature reviews in software engineering, Technical Report, Ver. 2.3, EBSE, 2007.
[55] S. M. Wagner, C. Rau and E. Lindemann, "Multiple informant methodology: a critical review
and recommendations," Sociological Methods & Research, vol. 38, no. 4, pp. 582-618, 2010.
[57] S. Wilkinson, "Focus group research," in Qualitative research: Theory, method and practice, Sage, 2004, p. 177.
[59] B. Kitchenham, "Procedures for performing systematic reviews," Keele University, Keele, UK, vol. 33, pp. 1-26, 2004.
[60] D. Gigone and R. Hastie, "Proper analysis of the accuracy of group judgments," Psychological Bulletin, vol. 121, no. 1, pp. 149-167, 1997.
[61] G. W. Hill, "Group versus individual performance: Are N+1 heads better than one?," Psychological Bulletin, vol. 91, no. 3, pp. 517-539, 1982.
[62] D. W. Stewart and P. N. Shamdasani, Focus groups: Theory and practice, Sage Publications,
1990.
[63] S. Wilkinson, "Focus group methodology: A review," International Journal of Social Research
Methodology, vol. 1, no. 3, pp. 181-203, 1998.
[64] A. J. Onwuegbuzie, W. B. Dickinson, N. L. Leech and A. G. Zoran, "A qualitative framework for
collecting and analyzing data in focus group research," International journal of qualitative
methods, vol. 8, no. 3, pp. 1-21, 2009.
[66] T. Devos, "Case Study: Agility at Scale Wolters Kluwer Belgium," UHasselt, 2014.
[67] Y. I. Alzoubi, A. Q. Gill and A. Al-Ani, "Empirical studies of geographically distributed agile
development communication challenges: A systematic review," Information & Management,
vol. 53, no. 1, pp. 22-37, 2016.
[68] E. Hossain, M. A. Babar and H.-y. Paik, "Using scrum in global software development: a
systematic literature review," in Fourth IEEE International Conference on Global Software
Engineering (ICGSE), 2009.
[69] E. Hossain, M. A. Babar, H.-y. Paik and J. Verner, "Risk identification and mitigation processes
for using scrum in global software development: A conceptual framework," in Asia-Pacific
Software Engineering Conference (APSEC), 2009.
[70] Y. I. Alzoubi and A. Q. Gill, "Agile global software development communication challenges: A
systematic review," in Pacific Asia Conference on Information Systems (PACIS), 2014.
[71] U. Farooq and M. U. Farooq, "Exploring the Benefits and Challenges of Applying Agile Methods
in Offshore Development," Blekinge Institute of Technology, Karlskrona, Sweden, 2010.
[73] B. Rizvi, E. Bagheri and D. Gasevic, "A systematic review of distributed Agile software
engineering," Journal of Software: Evolution and Process, vol. 27, no. 10, pp. 723-762, 2015.
[75] S. M. Shah and M. Amin, "Investigating the Suitability of Extreme Programming for Global
Software Development," Blekinge Institute of Technology, Karlskrona, 2013.
[76] M. Rahman and A. Das, "Mitigation approaches for common issues and challenges when using Scrum in global software development," Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering, Blekinge, 2015.
[77] C. Gurram and S. G. Bandi, "Teamwork in Distributed Agile Software Development," Blekinge
Institute of Technology, School of Computing, Blekinge, 2013.
[78] G. K. Hanssen, D. Šmite and N. B. Moe, "Signs of agile trends in global software engineering
research: A tertiary study," in Sixth IEEE International Conference on Global Software
Engineering Workshop (ICGSEW), 2011.
[79] J. Verner, O. Brereton, B. Kitchenham, M. Turner and M. Niazi, "Risk Mitigation Advice for
Global Software Development from Systematic Literature Reviews," School of Computing and
Mathematics, Keele University, Keele, Staffordshire, UK, 2012.
[80] J. M. Verner, O. P. Brereton, B. A. Kitchenham, M. Turner and M. Niazi, "Risks and risk
mitigation in global software development: A tertiary study," Information and Software
Technology, vol. 56, no. 1, pp. 54-78, 2014.
[81] J. F. Yates and E. R. Stone, "The risk construct," John Wiley & Sons, 1992.
[83] S. Jalali and C. Wohlin, "Global software engineering and agile practices: a systematic review,"
Journal of Software: Evolution and Process, vol. 24, no. 6, pp. 643-659, 2012.
[84] S. Jalali and C. Wohlin, "Agile practices in global software engineering-A systematic map," in
5th IEEE International Conference on Global Software Engineering (ICGSE), 2010.
[85] A. A. Keshlaf and S. Riddle, "Risk management for web and distributed software development
projects," in Fifth International Conference on Internet Monitoring and Protection (ICIMP),
2010.
[88] M. Bano and D. Zowghi, "User involvement in software development and system success: a
systematic literature review," in Proceedings of the 17th International Conference on
Evaluation and Assessment in Software Engineering, 2013.
[90] M. Hummel, C. Rosenkranz and R. Holten, "The role of communication in agile systems
development," Business & Information Systems Engineering, vol. 5, no. 5, pp. 343-355, 2013.
[91] S. Ghobadi, "What drives knowledge sharing in software development teams: A literature
review and classification framework," Information & Management, vol. 52, no. 1, pp. 82-97,
2015.
[92] J. Portillo-Rodríguez, A. Vizcaíno, M. Piattini and S. Beecham, "Tools used in Global Software
Engineering: A systematic mapping review," Information and Software Technology, vol. 54, no.
7, pp. 663-685, 2012.
[93] M. Bano and D. Zowghi, "A systematic review on the relationship between user involvement
and system success," Information and Software Technology, vol. 58, pp. 148-169, 2015.
[94] S. Matalonga, M. Solari and G. Matturro, "Factors affecting distributed agile projects: a
systematic review," International Journal of Software Engineering and Knowledge Engineering,
vol. 23, no. 9, pp. 1289-1301, 2013.
[95] A. M. Razavi and R. Ahmad, "Agile development in large and distributed environments: A
systematic literature review on organizational, managerial and cultural aspects," in 8th
Malaysian Software Engineering Conference (MySEC), 2014.
[96] H. H. Khan, M. Naz’ri bin Mahrin and S. bt Chuprat, "Factors generating risks during
requirement engineering process in global software development environment," International
Journal of Digital Information and Wireless Communications (IJDIWC), vol. 4, no. 1, pp. 63-78,
2014.
[98] M. Usman, F. Azam and N. Hashmi, "Analysing and Reducing Risk Factor in 3-C's Model
Communication Phase Used in Global Software Development," in International Conference on
Information Science and Applications (ICISA), 2014.
[99] A. Mishra and D. Mishra, "Cultural Issues in Distributed Software Development: A Review," in
On the Move to Meaningful Internet Systems (OTM), 2014.
[100] M. Hummel, C. Rosenkranz and R. Holten, "Die Bedeutung von Kommunikation bei der agilen Systementwicklung [The importance of communication in agile systems development]," Wirtschaftsinformatik, vol. 55, no. 5, pp. 347-360, 2013.
[101] M. Salam and S. U. Khan, "Green software multisourcing readiness model (GSMRM) from vendor's perspective," Science International (Lahore), vol. 26, pp. 1421-1424, 2014.
[102] N. Manzoor and U. Shahzad, "Information Visualization for Agile Development in Large-Scale Organizations," Blekinge Institute of Technology, Karlskrona, 2012.
[103] M. A. Babar and M. Zahedi, "Global Software Development: A Review of the State-Of-The-Art
(2007-2011)," IT University of Copenhagen, Copenhagen, 2012.
[105] N. Rashid and S. U. Khan, "Green Agility for Global Software Development Vendors: A
Systematic Literature Review Protocol," in Proceedings of the Pakistan Academy of sciences,
2015.
[106] T. Dreesen, R. Linden, C. Meures, N. Schmidt and C. Rosenkranz, "Beyond the Border: A
Comparative Literature Review on Communication Practices for Agile Global Outsourced
Software Development Projects," University of Cologne, Cologne.
[107] H. Wang, "The review and research on agile oriented method in the pilot industry system," Wuhan University of Technology, Wuhan, 2014.
[108] H. Khalid, M. Ahmed, A. Sameer and F. Arif, "Systematic Literature Review of Agile Scalability
for Large Scale Projects," International Journal of Advanced Computer Science and Applications
(IJACSA), vol. 6, no. 9, pp. 63-75, 2015.
[109] B. J. da Silva Estácio and R. Prikladnicki, "A Set of Practices for Distributed Pair Programming," in International Conference on Enterprise Information Systems (ICEIS), 2014.
[110] K. Baseer, A. R. M. Reddy and C. S. Bindu, "A Systematic Survey on Waterfall Vs. Agile Vs. Lean
Process Paradigms," i-Manager's Journal on Software Engineering, vol. 9, no. 3, 2015.
[112] S. S. Islam Zada and S. Nazir, "Issues and implications of scrum on global software development," University of Peshawar, Peshawar, Pakistan.
[115] S. Dodda and R. Ansari, "The Use of SCRUM in Global Software Development: An Exploratory
Study," Blekinge Institute of Technology, Karlskrona, 2010.
[116] A. C. C. dos Santos, C. C. Borges, D. E. Carneiro and F. Q. da Silva, "Estudo baseado em Evidências sobre Dificuldades, Fatores e Ferramentas no Gerenciamento da Comunicação em Projetos de Desenvolvimento Distribuído de Software [An evidence-based study on difficulties, factors and tools in managing communication in distributed software development projects]," in Proceedings of the 7th Experimental Software Engineering Latin American Workshop (ESELAW), 2010.
[117] J. Montonen, "Key Challenges of Virtual Software Development Teams," Jyväskylän yliopisto, 2010.
[118] E. Cardozo, J. Neto, A. Barza, A. França and F. da Silva, "SCRUM and productivity in software
projects: a systematic literature review," in 14th International Conference on Evaluation and
Assessment in Software Engineering (EASE), 2010.
[119] S. Schneider, R. Torkar and T. Gorschek, "Solutions in global software engineering: A systematic
literature review," International Journal of Information Management, vol. 33, no. 1, pp. 119-
132, 2013.
[120] R. Sriram and S. Mathew, "Global software development using agile methodologies: A review
of literature," in International Conference on Management of Innovation and Technology
(ICMIT), 2012.
[122] E. Kupiainen, M. V. Mäntylä and J. Itkonen, "Using metrics in Agile and Lean Software
Development-A systematic literature review of industrial studies," Information and Software
Technology, vol. 62, pp. 143-163, 2015.
[123] C. Costa, C. Cunha, R. Rocha, A. C. C. França, F. Q. da Silva and R. Prikladnicki, "Models and Tools for Managing Distributed Software Development: A Systematic Literature Review".
[124] P. S. Saripalli and D. H. P. Darse, "Finding common denominators for agile software
development: a systematic literature review," Blekinge Institute of Technology, School of
Computing, Blekinge, 2011.
[125] Z. Li, P. Avgeriou and P. Liang, "A systematic mapping study on technical debt and its
management," Journal of Systems and Software, vol. 101, pp. 193-220, 2015.
[126] P. Räty, "Social Network Analysis in Software Engineering: A Literature Review and a case
study," Aalto University, Aalto, 2014.
[127] R. Prikladnicki and J. L. N. Audy, "Process models in the practice of distributed software
development: A systematic review of the literature," Information and Software Technology,
vol. 52, no. 8, pp. 779-791, 2010.
[128] M. Jiménez, M. Piattini and A. Vizcaíno, "Challenges and improvements in distributed software
development: A systematic review," Advances in Software Engineering, vol. 2009, pp. 1-14,
2009.
[129] R. Prikladnicki, D. Damian and J. L. N. Audy, "Patterns of evolution in the practice of distributed
software development: quantitative results from a systematic review," in 12th International
Conference on Evaluation and Assessment in Software Engineering (EASE), 2008.
[130] M. Jiménez and M. Piattini, "Problems and solutions in distributed software development: a
systematic review," in Software Engineering Approaches for Offshore and Outsourced
Development, Springer, 2008, pp. 107-125.
[131] J. Noll, S. Beecham and I. Richardson, "Global software development and collaboration:
barriers and solutions," ACM Inroads, vol. 1, no. 3, pp. 66-78, 2010.
[133] S. U. Khan, M. Niazi and R. Ahmad, "Barriers in the selection of offshore software development
outsourcing vendors: An exploratory study using a systematic literature review," Information
and Software Technology, vol. 53, no. 7, pp. 693-706, 2011.
[134] B. Lings, B. Lundell, P. J. Ågerfalk and B. Fitzgerald, "Ten strategies for successful distributed
development," in The transfer and diffusion of information technology for organizational
resilience, Springer, 2006, pp. 119-137.
[135] F. da Silva, R. Prikladnicki, A. Franca, C. Costa and R. Rocha, "Research and practice of
distributed software development project management: A systematic mapping study,"
Information and System Technology, vol. 24, no. 6, pp. 625-642, 2011.
[136] T. Ebling, J. L. N. Audy and R. Prikladnicki, "A Systematic Literature Review of Requirements Engineering in Distributed Software Development Environments," in International Conference on Enterprise Information Systems (ICEIS), 2009.
[138] M. Jiménez, M. Piattini and A. Vizcaíno, "A Systematic Review of Distributed Software
Development," in Handbook of Research on Software Engineering and Productivity
Technologies: Implications of Globalization: Implications of Globalization, IGI Global, 2009, pp.
209-225.
[139] S. U. Khan, M. Niazi and R. Ahmad, "Factors influencing clients in the selection of offshore
software outsourcing vendors: An exploratory study using a systematic literature review,"
Journal of systems and software, vol. 84, no. 4, pp. 686-699, 2011.
[140] S. U. Khan, M. Niazi and R. Ahmad, "Critical success factors for offshore software development
outsourcing vendors: A systematic literature review," in International Conference on Global
Software Engineering (ICGSE), 2009.
[141] A. Lopez, J. Nicolas and A. Toval, "Risks and safeguards for the requirements engineering
process in global software development," in International Conference on Global Software
Engineering (ICGSE), 2009.
[142] I. Nurdiani, R. Jabangwe, D. Šmite and D. Damian, "Risk Identification and Risk Mitigation
Instruments for Global Software Engineering: A systematic review and survey results," in
International Conference on Global Software Engineering Workshop (ICGSEW), 2011.
[143] J. Persson and L. Mathiassen, "A process for managing risks in Distributed teams," IEEE
Software, no. 99, pp. 20-29, 2011.
[145] R. Prikladnicki, J. L. Audy and F. Shull, "Patterns in effective distributed software development," IEEE Software, vol. 27, no. 2, pp. 12-15, 2010.
[146] D. Šmite and C. Wohlin, "A whisper of evidence in global software engineering," IEEE Software, vol. 28, no. 4, pp. 15-18, 2011.
[147] D. Šmite, C. Wohlin, R. Feldt and T. Gorschek, "Reporting empirical research in global software
engineering: a classification scheme," in International Conference on Global Software
Engineering (ICGSE), 2008.
[148] C. Treude, M.-A. Storey and J. Weber, "Empirical studies on collaboration in software development: A systematic literature review," Citeseer, 2009.
[149] N. Ali, S. Beecham and I. Mistrík, "Architectural knowledge management in global software
development: a review," in International Conference on Global Software Engineering (ICGSE),
2010.
[150] M. Alsudairi and Y. K. Dwivedi, "A multi-disciplinary profile of IS/IT outsourcing research,"
Journal of Enterprise Information Management, vol. 23, no. 2, pp. 215-258, 2010.
[151] H. Huang, "Cultural Issues in Globally Distributed Information Systems Development: A Survey
and Analysis," in Americas Conference on Information Systems (AMCIS), 2007.
[152] S. U. Khan, M. Niazi and R. Ahmad, "Critical barriers for offshore software development
outsourcing vendors: a systematic literature review," in Asia-Pacific Software Engineering
Conference (APSEC), 2009.
[153] J. Kroll, J. L. N. Audy and R. Prikladnicki, "Mapping the Evolution of Research on Global Software Engineering - A Systematic Literature Review," in International Conference on Enterprise Information Systems (ICEIS), 2011.
[155] A. Yalaho, "A conceptual model of ICT-supported unified process of international outsourcing
of software production," in International Enterprise Distributed Object Computing Conference
Workshops (EDOCW), 2006.
List of tables
Table 1: Problem groups example
Table 2: Problem groups of Distributed Scrum ordered by class from [9]
Table 3: Challenges of Distributed Agile Development grouped by most applicable class from [3]
Table 4: Google Scholar search results
Table 5: Problem groups
Table 6: Legend for Table 7: degree of consideration
Table 7: Problems mapped to core values on consideration
Table 8: Legend for Table 9: degree of impact
Table 9: Problems mapped to core values on impact
Table 10: Expert focus group result of round 1
Table 11: Votes on likelihood and impact
Table 12: Practitioner focus group result of round 1
Table 13: Votes on likelihood and impact
Table 14: Overview of triangulation result
Table 15: 16. PI planning - votes on difficulty & impact
Table 16: 15. Inspect & Adapt - votes on difficulty & impact
Table 17: 32. Feature - votes on difficulty & impact
Table 18: 4. Implementing 1-2-3 - votes on difficulty & impact
Table 19: 25. Shared services - votes on difficulty & impact
Table 20: Solutions for problems and failing elements
Table 21: Problem groups example
Table 22: Participants focus group form
Table 23: Table layout focus group
Table 24: Table layout focus group
Table 25: Rejected studies including reason for rejection
Table 26: Challenges of geographically distributed agile development from [67]
Table 27: Challenges of Distributed Scrum from [68]
Table 28: Risks of Distributed Scrum from [69]
Table 29: Challenges of Agile Global Software Development from [70]
Table 30: Challenges of applying agile methods in offshore development from [71]
Table 31: Challenges of Distributed Agile Software Development from [72]
Table 32: Challenges of Distributed Agile Software Engineering from [73]
Table 33: Challenges of Distributed Agile Development from [74]
Table 34: Challenges of Extreme Programming in Global Software Development from [75]
Table 35: Challenges of Distributed Scrum from [76]
Table 36: Challenges of Distributed Agile Software Development from [77]
Table 37: Result of studies subjected to the acceptance criteria from [78]
Table 38: Result of studies subjected to the acceptance criteria from [79]
Table 39: Result of studies subjected to the acceptance criteria from [80]
Table 40: Problems due to inefficient communication tools
Table 41: Problems due to unavailability of people
Table 42: Problems due to lack of synchronous communication
Table 43: Problems due to different execution of work practices
Table 44: Problems due to language barriers
Table 45: Problems due to lacking technical infrastructure
Table 46: Problems due to loss of cohesion
Table 47: Problems due to misinterpretation
Table 48: Problems due to lack of agile training
Table 49: Problems due to reduced trust
Table 50: Problems due to time zone differences
Table 51: Problems due to people differences
Table 52: Problems due to lack of traditional management
Table 53: Problems due to difficulties with coordination
Table 54: Problems due to shared ownership and responsibility
Table 55: Problems due to incorrect execution of Scrum
Table 56: Problems due to cultural differences - organizational and national
Table 57: Problems due to the loss of informal contact
Table 58: Problems due to lack of collective vision
Table 59: Problems due to lack of requirement documents
Table 60: Problems due to lack of visibility
Table 61: Problems due to difficulties in knowledge sharing
Table 62: Problems due to increased communication effort
Table 63: Problems due to increased team size
Table 64: Problems due to different holidays
Table 65: Problems due to difficulties with agile decision making
Table 66: Problems due to increased number of teams
Table 67: Problems due to silence of participants
Table 68: Problems due to increased number of sites
Table 69: Ungrouped problems
Table 70: Start setup round 1 - groups
Table 71: Results round 1 - group Rini
Table 72: Results round 1 - group Peter
Table 73: Start setup round 1 - plenary
Table 74: Results round 1 - plenary
Table 75: Result round 2 - dot voting individually
Table 76: Results round 2 - ranking on likelihood
Table 77: Results round 2 - ranking on impact
Table 78: Start setup round 3 - plenary
Table 79: Result round 3 - consequences System Demo
Table 80: Result round 3 - consequences Solution Demo
Table 81: Result round 3 - consequences Inspect & Adapt
Table 82: Result round 3 - consequences PI Planning
Table 83: Result round 4 - dot voting individually on 13. System Demo
Table 84: Result round 4 - dot voting individually on 14. Solution Demo
Table 85: Result round 4 - dot voting individually on 15. Inspect & Adapt
Table 86: Result round 4 - dot voting individually on 16. PI Planning
Table 87: Result round 4 - consequences System Demo ranked on likelihood
Table 88: Result round 4 - consequences Solution Demo ranked on likelihood
Table 89: Result round 4 - consequences Inspect & Adapt ranked on likelihood
Table 90: Result round 4 - consequences PI Planning ranked on likelihood
Table 91: Result round 4 - consequences System Demo ranked on impact
Table 92: Result round 4 - consequences Solution Demo ranked on impact
Table 93: Result round 4 - consequences Inspect & Adapt ranked on impact
Table 94: Result round 4 - consequences PI planning ranked on impact ........................................... 176
Table 95: Votes on elements - Distributed expert 1 ........................................................................... 177
Table 96: Votes on elements - SAFe expert 1 ..................................................................................... 177
Table 97: Votes on elements - Distributed Expert 2 ........................................................................... 177
Table 98: Votes on elements - SAFe expert 2 ..................................................................................... 178
Table 99: Votes on elements - Practitioner 1 ..................................................................................... 178
Table 100: Votes on elements – Practitioner 2................................................................................... 178
Table 101: 13.1 No integration no working system ............................................................................ 179
Table 102: 13.2 Unclear value............................................................................................................. 179
Table 103: 13.3 No feedback .............................................................................................................. 179
Table 104: 13.4 Bad team morale ....................................................................................................... 179
Table 105: 13.5 Annoyed customers .................................................................................................. 179
Table 106: 13.6 Unpredictably ............................................................................................................ 180
Table 107: 13.7 Rework ...................................................................................................................... 180
Table 108: 13.8 Delay .......................................................................................................................... 180
Table 109: 14.1 No clear / unknown value ......................................................................................... 180
Table 110: 14.2 Annoyed stakeholders / customers .......................................................................... 180
Table 111: 14.3 Bad morale ................................................................................................................ 180
Table 112: 14.4 Unpredictability .......................................................................................................... 181
Table 113: 14.5 No / late feedback ..................................................................................................... 181
Table 114: 14.6 Delay .......................................................................................................................... 181
Table 115: 14.7 Rework ...................................................................................................................... 181
Table 116: 15.1 No learning ................................................................................................................ 181
Table 117: 15.2 Fallback / unlearning ................................................................................................. 182
Table 118: 15.3 Dissatisfied customers............................................................................................... 182
Table 119: 15.4 No motivation ........................................................................................................... 182
Table 120: 16.1 No goal, no execution ................................................................................................ 182
Table 121: 16.2 No alignment between teams ................................................................................... 183
Table 122: 16.3 No real results ........................................................................................................... 183
Table 123: 16.4 Stakeholder annoyance ............................................................................................. 183
Table 124: 16.5 No teamness / commitment ..................................................................................... 183
Table 125: 16.6 Lack of transparency ................................................................................................. 183
Table 126: 16.7 Rework ...................................................................................................................... 183
Table 127: 16.8 Longer time to market .............................................................................................. 184
Table 128: Votes on consequences - Distributed Expert 1 ................................................................. 185
Table 129: Votes on consequences - SAFe expert 1 ........................................................................... 185
Table 130: Votes on consequences - Distributed Expert 2 ................................................................. 186
Table 131: Votes on consequences - SAFe expert 2 ........................................................................... 187
Table 132: Votes on consequences - Practitioner 1 ........................................................................... 188
Table 133: Votes on consequences - Practitioner 2 ........................................................................... 188
Table 134: Practitioner focus group - start setup round 1 - groups ................................................... 225
Table 135: Practitioner focus group - result round 1 - group Hanneke .............................................. 227
Table 136: Practitioner focus group - result round 1 - group Peter ................................................... 227
Table 137: Practitioner focus group - start setup round 1 - plenary .................................................. 229
Table 138: Practitioner focus group - result round 1 - plenary .......................................................... 231
Table 139: Practitioner focus group - result of round 2 - dot voting individually .............................. 232
Table 140: Practitioner focus group - result round 2 - ranking on likelihood ..................................... 233
Table 141: Practitioner focus group - result round 2 - ranking on impact ......................................... 233
Table 142: Practitioner focus group - start setup round 3 - plenary .................................................. 235
Table 143: Practitioner focus group - result round 3 - solutions PI planning ..................................... 235
Table 144: Practitioner focus group - result round 3 - solutions Inspect & Adapt ............................. 235
Table 145: Practitioner focus group - result round 3 - solutions Implementing 1-2-3 ....................... 235
Table 146: Practitioner focus group - result round 3 - solutions Feature........................................... 236
Table 147: Practitioner focus group - result round 3 - solutions Shared Services .............................. 236
Table 148: Practitioner focus group - result round 4 - dot voting individually on PI planning ........... 239
Table 149: Practitioner focus group - result round 4 - dot voting individually on Inspect & Adapt ... 239
Table 150: Practitioner focus group - result round 4 - dot voting individually on Implementing 1-2-3 ............ 239
Table 151: Practitioner focus group - result round 4 - dot voting individually on Feature ................ 240
Table 152: Practitioner focus group - result round 4 - dot voting individually on Shared Services ... 240
Table 153: Practitioner focus group - result round 4 - solutions PI planning ranked on difficulty ..... 240
Table 154: Practitioner focus group - result round 4 - solutions Inspect & Adapt ranked on difficulty ............ 240
Table 155: Practitioner focus group - result round 4 - solutions Implementing 1-2-3 ranked on difficulty ............ 240
Table 156: Practitioner focus group - result round 4 - solutions Feature ranked on difficulty .......... 240
Table 157: Practitioner focus group - result round 4 - solutions Shared Services ranked on difficulty ............ 240
Table 158: Practitioner focus group - result round 4 - solutions PI planning ranked on impact ........ 241
Table 159: Practitioner focus group - result round 4 - solutions Inspect & Adapt ranked on impact 241
Table 160: Practitioner focus group - result round 4 - solutions Implementing 1-2-3 ranked on impact ............ 241
Table 161: Practitioner focus group - result round 4 - solutions Feature ranked on impact ............. 241
Table 162: Practitioner focus group - result round 4 - solutions Shared Services ranked on impact 241
Table 163: Experience level categorization ........................................................................................ 242
Table 164: Expert experience level ..................................................................................................... 242
Table 165: Expert category ................................................................................................................. 242
Table 166: Practitioner focus group - votes on elements - Practitioner 1.......................................... 243
Table 167: Practitioner focus group - votes on elements - Practitioner 2.......................................... 243
Table 168: Practitioner focus group - votes on elements - Practitioner 3.......................................... 243
Table 169: Practitioner focus group - votes on elements - Practitioner 4.......................................... 244
Table 170: Practitioner focus group - votes on elements - Practitioner 5.......................................... 244
Table 171: Practitioner focus group - votes on elements - Practitioner 6.......................................... 245
Table 172: Practitioner focus group - votes on elements - Practitioner 7.......................................... 245
Table 173: Practitioner focus group - votes on elements - Practitioner 8.......................................... 245
Table 174: Practitioner focus group - votes on elements - Practitioner 9.......................................... 246
Table 175: Practitioner focus group - votes on elements - Practitioner 10........................................ 246
Table 176: Practitioner focus group - votes on elements - Practitioner 11........................................ 246
Table 177: Practitioner focus group - 16. PI planning - individual solutions ...................................... 248
Table 178: Practitioner focus group - 15. Inspect & Adapt - individual solutions .............................. 249
Table 179: Practitioner focus group - 4. Implementing 1-2-3 - individual solutions .......................... 249
Table 180: Practitioner focus group - 32. Feature - individual solutions ............................................ 250
Table 181: Practitioner focus group - 25. Shared Services - individual solutions ............................... 250
Table 182: Practitioner focus group - 16. PI planning - individual solutions - translated ................... 251
Table 183: Practitioner focus group - 15. Inspect & Adapt - individual solutions - translated ........... 252
Table 184: Practitioner focus group - 4. Implementing 1-2-3 - individual solutions - translated....... 252
Table 185: Practitioner focus group - 32. Feature - individual solutions - translated ........................ 253
Table 186: Practitioner focus group - 25. Shared Services - individual solutions - translated ........... 254
Table 187: Practitioner focus group - 16. PI planning - group solutions ............................................ 255
Table 188: Practitioner focus group - 15. Inspect & Adapt - group solutions .................................... 255
Table 189: Practitioner focus group - 4. Implementing 1-2-3 - group solutions ................................ 255
Table 190: Practitioner focus group - 32. Feature - group solutions .................................................. 255
Table 191: Practitioner focus group - 25. Shared Services - group solutions ..................................... 255
Table 192: Practitioner focus group - 16. PI planning - group solutions - translated ......................... 256
Table 193: Practitioner focus group - 15. Inspect & Adapt - group solutions - translated ................. 256
Table 194: Practitioner focus group - 4. Implementing 1-2-3 - group solutions - translated ............. 256
Table 195: Practitioner focus group - 32. Feature - group solutions - translated .............................. 256
Table 196: Practitioner focus group - 25. Shared Services - group solutions - translated.................. 256
Table 197: Practitioner focus group - votes on solutions - Practitioner 1 .......................................... 257
Table 198: Practitioner focus group - votes on solutions - Practitioner 2 .......................................... 257
Table 199: Practitioner focus group - votes on solutions - Practitioner 3 .......................................... 258
Table 200: Practitioner focus group - votes on solutions - Practitioner 4 .......................................... 258
Table 201: Practitioner focus group - votes on solutions - Practitioner 5 .......................................... 259
Table 202: Practitioner focus group - votes on solutions - Practitioner 6 .......................................... 259
Table 203: Practitioner focus group - votes on solutions - Practitioner 7 .......................................... 260
Table 204: Practitioner focus group - votes on solutions - Practitioner 8 .......................................... 261
Table 205: Practitioner focus group - votes on solutions - Practitioner 9 .......................................... 261
Table 206: Practitioner focus group - votes on solutions - Practitioner 10 ........................................ 262
Table 207: Practitioner focus group - votes on solutions - Practitioner 11 ........................................ 262
List of figures
Figure 1: Agile Manifesto from [19] ........................................................................................................ 5
Figure 2: Research improvement cycle ................................................................................................... 6
Figure 3: Four level SAFe picture from [1] .............................................................................................. 9
Figure 4: SAFe meeting timeline, created based on [1] ........................................................................ 12
Figure 5: Flow of a trigger in SAFe based on [1] ................................................................................... 13
Figure 6: LeSS scaling model from [26] ................................................................................................. 14
Figure 7: DAD scaling model from [50] ................................................................................................. 15
Figure 8: Nexus scaling model from [51] .............................................................................................. 16
Figure 9: Spotify scaling model from [29] ............................................................................................. 16
Figure 10: 10th State of Agile report - Scaling agile from [13] ............................................................... 17
Figure 11: Essential SAFe from [52] ...................................................................................................... 18
Figure 12: Agile Release Train elements numbered, modified and reproduced with permission from © 2011-2016 Scaled Agile, Inc. All rights reserved. Original Big Picture graphic found at scaledagileframework.com. .................. 19
Figure 13: Visualization Systematic Literature Review protocol .......................................................... 24
Figure 14: Visualization of multiple informant protocol ....................................................................... 25
Figure 15: Expert focus group - part 1: identifying failing elements .................................................... 27
Figure 16: Expert focus group - part 2: identifying consequences ....................................................... 27
Figure 17: Practitioner focus group - part 1: identifying failing elements ............................................ 28
Figure 18: Practitioner focus group - part 2: identifying solutions ....................................................... 29
Figure 19: Visualization Systematic Literature Review protocol .......................................................... 30
Figure 20: Plan-Do-Check-Adjust cycle in SAFe from [1] ...................................................................... 31
Figure 21: Visualization of multiple informant protocol ....................................................................... 37
Figure 22: Agile Release Train elements failing - result of identification based on literature, created based on [1] .......................................................................................................................................... 42
Figure 23: Expert focus group protocol visualization ........................................................................... 44
Figure 24: Graph of risk of SAFe element failing .................................................................................. 46
Figure 25: Graph expert distribution on likelihood of SAFe elements failing....................................... 47
Figure 26: Graph expert distribution on impact of SAFe elements failing ........................................... 47
Figure 27: Agile Release Train elements failing - result of expert focus group, created based on [1] . 48
Figure 28: Practitioner focus group protocol visualization round 1 & 2............................................... 50
Figure 29: Graph of risk of SAFe element failing .................................................................................. 52
Figure 30: Graph expert distribution on likelihood of SAFe elements failing ....................................... 53
Figure 31: Graph expert distribution on impact of SAFe elements failing ........................................... 54
Figure 32: Agile Release Train elements failing - result of practitioner focus group, created based on [1] .......................................................................................................................................................... 55
Figure 33: Agile Release Train elements failing - result of triangulation, created based on [1]........... 57
Figure 34: Practitioner focus group protocol visualization round 3 & 4............................................... 61
Figure 35: 16. PI planning solution score graph .................................................................................... 62
Figure 36: 15. Inspect & Adapt solution score graph............................................................................ 63
Figure 37: 32. Feature solution score graph ......................................................................................... 63
Figure 38: 4. Implementing 1-2-3 solution score graph........................................................................ 64
Figure 39: 25. Shared services solution score graph............................................................................. 64
Figure 40: Visualization SAFe expertise participants focus group ........................................................ 71
Figure 41: Visualization distributed expertise participants focus group .............................................. 72
Figure 42: Agile Release Train elements failing, created based on [1] ................................................. 79
Figure 43: Agile Release Train elements numbered, modified and reproduced with permission from © 2011-2016 Scaled Agile, Inc. All rights reserved. Original Big Picture graphic found at scaledagileframework.com. .................. 149
Figure 44: Picture start setup round 1 - groups .................................................................................. 158
Figure 45: Picture results round 1 - group Rini ................................................................................... 160
Figure 46: Picture results round 1 - group Peter ................................................................................ 161
Figure 47: Impression of plenary exercise .......................................................................................... 162
Figure 48: Picture results round 1 - plenary ....................................................................................... 164
Figure 49: Impression of dot voting .................................................................................................... 165
Figure 50: Picture result round 2 - dot voting individually ................................................................. 166
Figure 51: Picture results round 2 - combined ranking ...................................................................... 167
Figure 52: Example round 3 - post-its on wall ..................................................................................... 168
Figure 53: Picture result round 4 - dot voting individually on 13. System Demo ............................... 171
Figure 54: Picture result round 4 - dot voting individually on 14. Solution Demo ............................. 172
Figure 55: Picture result round 4 - dot voting individually on 15. Inspect & Adapt ........................... 173
Figure 56: Picture result round 4 - dot voting individually on 16. PI Planning ................................... 174
Figure 57: Graph overview consequences System Demo ................................................................... 190
Figure 58: Graph expert distribution of risk System Demo ................................................................ 190
Figure 59: Graph expert distribution on likelihood System Demo ..................................................... 191
Figure 60: Graph expert distribution on impact System Demo .......................................................... 191
Figure 61: Graph overview consequences Solution Demo ................................................................. 192
Figure 62: Graph Expert distribution of risk Solution Demo ............................................................... 192
Figure 63: Graph expert distribution of likelihood Solution Demo .................................................... 193
Figure 64: Graph expert distribution of impact Solution Demo ......................................................... 193
Figure 65: Graph overview consequences Inspect & Adapt ............................................................... 194
Figure 66: Graph expert distribution of risk Inspect & Adapt............................................................. 194
Figure 67: Graph expert distribution of likelihood Inspect & Adapt .................................................. 195
Figure 68: Graph expert distribution of impact Inspect & Adapt ....................................................... 195
Figure 69: Graph overview consequences PI planning ....................................................................... 196
Figure 70: Graph expert distribution of risk PI planning ..................................................................... 196
Figure 71: Graph expert distribution of likelihood PI planning .......................................................... 197
Figure 72: Graph expert distribution of impact PI planning ............................................................... 197
Figure 73: Practitioner focus group - picture start setup round 1 - groups ........................................ 226
Figure 74: Practitioner focus group - picture result round 1 - group Hanneke .................................. 228
Figure 75: Practitioner focus group - picture results round 1 - group Peter ...................................... 229
Figure 76: Practitioner focus group - picture start setup round 1 - plenary....................................... 230
Figure 77: Practitioner focus group - picture result round 1 - plenary ............................................... 232
Figure 78: Practitioner focus group - picture result round 2 – combined ranking ............................. 234
Figure 79: Practitioner focus group - picture result round 3 - solutions PI planning ......................... 236
Figure 80: Practitioner focus group - picture result round 3 - solutions Inspect & Adapt ................. 237
Figure 81: Practitioner focus group - picture result round 3 - solutions Implementing 1-2-3 ........... 237
Figure 82: Practitioner focus group - picture result round 3 - solutions feature................................ 238
Figure 83: Practitioner focus group - picture result round 3 - solutions Shared Services .................. 238
Appendix A Systematic Literature Review protocol
Based on the procedures and guidelines presented by Kitchenham and Charters in [59] and [54], the
review protocol is discussed.
From this search, two studies discussing problems were found: [65] and [66]. These studies discuss the
challenges of transitioning from a traditional organization to an organization where agile is scaled.
However, they do not cover the distributed aspect required for this research. Therefore, a different
approach is taken to find problems with distributed SAFe: the problems of Distributed Agile
Development and Distributed Scrum are researched in this Systematic Literature Review.
Review commissioning
This Systematic Literature Review was done as part of a master's thesis project, in the research chair on
Global Software Engineering, in the Software Engineering research group of the Software Technology
department of the Faculty of Electrical Engineering, Mathematics and Computer Science of Delft
University of Technology, and was not commissioned.
Research questions
The review is done to answer the research question: “What problems can be expected when SAFe is
applied in distributed settings?”.
Research protocol
Based on the review protocol proposed by [59] and [54], this section presents the protocol that has
been used in the Systematic Literature Review.
Search strategy
As for the initial search on distributed SAFe, Google Scholar is used for this search because it has
indexed many different databases, including those of different universities. For the search strategy,
the following queries were used:
The use of the keywords “Distributed Agile Development” and “Distributed Scrum” means that general
problems present in both Distributed Agile Development and Distributed Scrum are mentioned more
often than Scrum-specific problems. The consequences of this effect are mitigated by considering all
problems that are mentioned more than twice in the research. Besides, dismissing problems early in
the process could result in an important problem being missed.
Selection criteria
The selection criteria are applied to studies within the field of Globally Distributed Software
Engineering. Within this field, any study that discusses agile is considered. For acceptance of a paper,
the following selection criteria were used:
Selection procedures
Studies are selected based on the criteria mentioned in the previous section.
Project timetable
The Systematic Literature Review was started on the 1st of February 2016 and finished on the 1st of
April 2016.
Appendix B Multiple informant protocol
For the multiple informant methodology, the following protocol was used. The execution of this
protocol is presented in Appendix E.
1. An initial mapping between the problems and core values is created, based on insights from
the author
2. The mapping is presented to the first informant, who adjusts the mapping
3. The adjusted mapping is presented to the second informant, who adjusts the mapping
4. The adjusted mapping is presented to the third informant, who adjusts the mapping
5. The final mapping is presented to all three informants and a consensus is reached
6. During the final mapping discussion, a second mapping between the problems and core values
based on impact is presented to the informants and a consensus is reached
Appendix C Expert focus group: protocol
For the focus group the following protocol was used. The names of the participants of this focus group
have been anonymized, and are thus omitted from the protocol.
The participants are asked to identify, for each problem, which Agile Release Train elements they think
could experience difficulties if the problem occurred. The replies of the participants are put in Table
22, to be used as input for the session.
Table 22: Participants focus group form
In both rooms, the elements have been put on a table; the layout of the tables is presented in Table
23. The elements are ranked based on the input of the participants, with the most mentioned element
on top and the least mentioned element at the bottom.
Table 23: Table layout focus group
22. Vision
23. Roadmap
2. Lean-agile mindset
6. Communities of Practice
10. Agile Release Train / Value Stream
21. Release Management, Shared Services & User Experience
26. PI Objectives
5. Lean-agile leaders
12. Program Kanban
20. DevOps & System team
25. Milestones & Release
27. Feature
28. Enabler
1. Core values
4. Implementing 1-2-3
8. Weighted Shortest Job First
24. Metrics
29. Epics
11. Architectural runway
3. SAFe principles
9. Release any time
17. System Architect, Release Train Engineer & Product Management
18. Business Owners
19. Customer
7. Value Stream coordination
In addition, the following items are needed for the focus group session and have been arranged beforehand.
2 rooms
Lunch
4 bundles of post-its
2 flipcharts (with sheets)
8 prints of the attachment
2 prints of timeline & text
Example setup
Tape
Bundle of empty A4’s
Present / thank you for the participants
Central reception takes place in the hall; the participants will not be able to enter the group rooms at
this moment. The participants will be asked not to talk to the other participants about the focus group,
SAFe or distributed settings for the duration of the focus group. Everyone is welcomed.
The facilitators will be introduced, followed by a short explanation of the program for the rest of the
afternoon and a presentation on the role of this focus group in the research. Each participant is
provided with a printout of the numbered list of Agile Release Train elements.
Text by facilitator:
“Welcome everyone, thank you for your time. I am Rini van Solingen and this is Peter van Buul;
together we are investigating distributed SAFe as part of Peter's master's thesis at Delft University of
Technology, for which I am the responsible professor. We will be facilitating today’s session; the
objective of today is to gain insights into the challenges around SAFe elements in distributed settings.
The scientific nature of this session requires that the internal validity is guaranteed. For this reason,
we ask you to follow the instructions provided by us carefully; for example, when asked to do an
exercise individually, do so individually and do not discuss any aspect of the exercise. Additionally, we
ask you not to discuss SAFe, distributed settings or the focus group during breaks. If you want to discuss
these topics, please do so after the session is finished.”
Each group will go to their own room. In both rooms, the elements have been put on the table; the
layout of the tables is presented in Table 23. The elements are ordered based on the input of the
participants prior to the session. The groups will be announced.
Text by facilitator:
“In the first exercise, all 29 elements of the Agile Release Train will be discussed in a distributed setting.
A distributed setting means that the people who are part of the Agile Release Train work in two or
more locations. This will be done in groups of three; the first group is: (names group A), the second
group: (names group B). In each room there is a sheet with three columns on it: “undecided”,
“specifically challenged” and “not specifically challenged”. For now, all elements are put in the
“undecided” column; these have been ranked based on your input. The most mentioned element is
on top, the least mentioned at the bottom.
We ask you to decide together as a group, for each element, whether you expect it to be challenged
in a distributed setting. If you think an element will be challenged in a distributed setting, put it in the
column “specifically challenged”; otherwise, put it in the column “not specifically challenged”. This
exercise is time-boxed at 40 minutes; elements that have not been agreed upon by then will remain in
the “undecided” column. Is the exercise clear?”
After both groups have reached agreement, the groups come together in one of the group rooms. The results of the groups are merged. As in the previous round, the groups are asked to reach agreement on all elements.
- Reaching agreement - plenary (30 min)
Text by facilitator:
“As in the previous exercise, there are three columns: “undecided”, “specifically challenged” and “not specifically challenged”. Elements that both groups have identified as specifically challenged are put in the “specifically challenged” column. Elements that both groups have identified as not specifically challenged are put in the “not specifically challenged” column. All other elements are put in the “undecided” column.
As in the previous exercise, we ask you to reach agreement on each element. If you think an element will be challenged in a distributed setting, put it in the column “specifically challenged”; otherwise, put it in the column “not specifically challenged”. This exercise is time-boxed at 30 minutes; elements that have not been agreed upon by then will be dismissed. Is the exercise clear?”
Dot voting is done on two scales: impact and likelihood. Every participant gets one vote per element. The votes are divided individually on an empty sheet; no interaction occurs at this time. When all participants have divided their votes, the votes are handed in to the facilitator.
Text by facilitator:
“For this exercise, we will use dot voting. Is everyone familiar with dot voting?” (If needed, explain dot
voting)
“For this dot vote session you get one vote per element, so X votes in total. In this exercise there will be two topics to vote on: one on the likelihood of an element failing, the other on the impact that the element has when it fails. So you get X votes to divide based on how likely you think it is that an element fails in a distributed setting: more votes mean more likely to fail. You also get X votes to divide based on how big you think the impact is if an element fails in a distributed setting: more votes mean more impact if the element fails. You may put more than one vote on a single element, if you wish to do so.
We will now give each of you two sheets of paper; please write down your name, the topic that you vote on, and a number for each element that is voted for, so X, Y etcetera. After this, we ask you to divide your votes over the elements. When you are done, return your sheets to me. Please do not discuss the voting before we start. This exercise is time-boxed at 10 minutes. Is the exercise clear?”
The facilitator puts the results of both dot votes on the wall. The top 3 of both scales are used to proceed to the next phase, with a maximum of 6 elements.
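This selection rule, taking the union of the two top-3 lists so that at most 6 elements proceed, can be sketched as a small script. The element numbers and vote tallies below are hypothetical and purely illustrative:

```python
def select_elements(likelihood_votes, impact_votes, top_n=3):
    """Return the union of the top-N elements on each voting scale (at most 2*N)."""
    top_likelihood = sorted(likelihood_votes, key=likelihood_votes.get, reverse=True)[:top_n]
    top_impact = sorted(impact_votes, key=impact_votes.get, reverse=True)[:top_n]
    # An element appearing in both top-3 lists is counted once,
    # so the result contains between N and 2*N elements.
    return sorted(set(top_likelihood) | set(top_impact))

# Hypothetical tallies: element number -> total votes received
likelihood = {1: 5, 7: 4, 12: 3, 15: 1}
impact = {1: 6, 12: 4, 20: 3, 7: 1}
print(select_elements(likelihood, impact))  # -> [1, 7, 12, 20]
```

With these tallies, element 1 and 12 score high on both scales and are counted once, giving 4 rather than 6 selected elements.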
Text by facilitator:
“Based on the votes provided, these 3 elements have been selected as most likely to fail, and these 3 elements have been selected as those with the highest impact. For the next round we will use these X elements.”
The participants will individually write down the 5 biggest consequences they expect when an element fails.
Text by facilitator:
“In this exercise we ask each of you to individually write down the five biggest consequences of an element failing in a distributed setting. Please write down each consequence on a post-it, with the element number in the top right corner. This exercise is time-boxed at 15 minutes. Is the exercise clear?”
Each individual puts his or her consequences on the wall and gives a brief description of the
consequence.
Text by facilitator:
“In this exercise we ask each of you to present his or her consequences and put them on the wall with the corresponding element. Please be brief in elaborating the consequences; this exercise is time-boxed at 15 minutes. Is the exercise clear?”
The group will now discuss, per element, the consequences of failure. Any additional consequences that come up are discussed and put on the wall. Each consequence is put on a new post-it, with the problem-element identifier in the top right corner.
Text by facilitator:
“In this exercise we ask you to discuss the consequences with the group. If additional consequences come up during the discussion, we write them down and put them on the wall. This exercise is time-boxed at 30 minutes. Is the exercise clear?”
The consequences are numbered and again dot voted, based on likelihood and impact.
Text by facilitator:
“In the final exercise, we will again use dot voting on the consequences. Before we vote, the consequences are numbered; same as last time, you get one vote per consequence. We will again vote on the two topics of likelihood and impact. Consequences that are more likely to happen receive more votes, and consequences that have more impact receive more votes.
We will now give each of you two sheets of paper; please write down your name, the scale that you vote on, and a number for each consequence that is voted for. After this, we ask you to divide your votes over the consequences. When you are done, return your sheets to me. Please do not discuss the voting before we start. This exercise is time-boxed at 10 minutes. Is the exercise clear?”
The facilitator puts the results of both dot votes on the wall.
Text by facilitator:
“Based on the dot voting, the following ranking of the consequences has been created. This marks the end of the focus group. Thank you for your time.”
Closing 16:00
Appendix D Practitioner focus group: protocol
Based on experience from the previous focus group, the protocol for the Release Train Engineer focus group has been enhanced. The list of SAFe elements is extended, and some elements have been split into multiple elements. Forms are created to streamline the dot voting by asking the participants to count their votes themselves, allowing for faster processing of the results. The names of the participants of this focus group have been anonymized, and are thus omitted from the protocol.
Prior to the session an official invitation is sent to all participants; this invitation can be found in Appendix V.
In both rooms the elements have been put on a table; the layout of the tables is as presented in Table 24. The elements are ranked based on the results of the previous focus group. At the top are the elements that the previous focus group classified as specifically challenged, ranked by risk, highest at the top. Next is the item that remained undecided during the previous focus group, then the other items on which the two groups of the previous focus group had discussion, and finally the items that both groups classified as not specifically challenged. This way, the top of the table is expected to be specifically challenged in a distributed setting and the bottom not; the items in the middle are expected to cause discussion.
Table 24: Table layout focus group
6. Communities of Practice
2. Lean-agile mindset
8. Weighted Shortest Job First
24. Release Management
25. Shared Services
26. User Experience
4. Implementing 1-2-3
9. Release any time
17. Release Train Engineer
18. System Architect
19. Product Management
21. Customer
27. Vision
28. Roadmap
10. Agile Release Train / Value
Stream
31. PI Objectives
5. Lean-agile leaders
30. Milestones & Releases
32. Feature
33. Enabler
29. Metrics
34. Epics
11. Architectural runway
3. SAFe principles
20. Business Owners
7. Value Stream coordination
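The risk ordering used at the top of the table can be sketched as below. The thesis ranks the specifically challenged elements by risk; here risk is assumed to be the product of the likelihood and impact votes from the previous focus group — an illustrative assumption, as are the element names and vote pairs:

```python
def rank_by_risk(votes):
    """Rank elements by descending risk, taken here as likelihood * impact."""
    risk = {element: likelihood * impact
            for element, (likelihood, impact) in votes.items()}
    return sorted(risk, key=risk.get, reverse=True)

# Hypothetical (likelihood, impact) vote pairs per element
votes = {"PI planning": (5, 6), "Inspect & adapt": (4, 4), "System Demo": (2, 3)}
print(rank_by_risk(votes))  # -> ['PI planning', 'Inspect & adapt', 'System Demo']
```

Multiplying the two scales favors elements that score high on both, which matches the intent of placing the highest-risk elements at the top of the table.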
Besides this, two forms are created to streamline the dot voting process during the session. These
forms can be found in Appendix W.
In addition, the following items are needed for the focus group session and have been arranged in advance.
2 rooms
Lunch
4 bundles of post-its
2 flipcharts (with sheets)
16 prints of the attachment
3 prints of timeline & text
Example setup
Tape
15 prints of the forms for the first dot voting
15 prints of the forms for solution brainstorming
15 prints of the forms for the second dot voting
Presents / thank-you for the participants
Protocol during the session
Opening 13:00
Central reception in the hall; the participants will not be able to enter the group rooms at this moment. The participants will be asked not to talk to the other participants about the focus group, SAFe, or distributed settings for the duration of the focus group. A welcome to everyone.
The facilitators will be introduced, followed by a short explanation of the program for the rest of the afternoon and a presentation on the role of this focus group in the research. Each participant is provided with a printout of the numbered list of Agile Release Train elements.
Text by facilitator:
“Welcome everyone, thank you for your time. I am Peter van Buul and this is Rini van Solingen; together we are investigating distributed SAFe as part of my master thesis at Delft University of Technology, for which Rini is the responsible professor. Also with us is Hanneke Gieles, who will help with facilitating today’s session. The objective of today is to gain insights into the challenges, and their solutions, around SAFe elements in distributed settings. Discussion is essential for the focus group, so please make sure that all participants can tell their story; try not to dominate the discussion.
The scientific nature of this session requires that the internal validity is guaranteed. For this reason,
we ask you to follow the instructions provided by us carefully, for example, when asked to do an
exercise individually, do so individually, do not discuss any aspect of the exercise. Additionally, we ask
you not to discuss SAFe, distributed settings, or the focus group setting during breaks. If you want to discuss these topics, please do so after the session has finished.”
Each group will go to its own room. In both rooms, the elements have been put on the table; the layout of the tables is as presented in Table 24. The elements are ordered based on the previous focus group, as described in the preparation. The groups will then be announced.
Text by facilitator:
“In the first exercise all 34 elements of the Agile Release Train will be discussed in a distributed setting. A distributed setting means that the people who are part of the Agile Release Train work in two or more locations, not necessarily distributed over multiple time zones or continents. This will be done in two groups; the groups will be divided based on previous participation. Those who have participated in the survey that was done previously will join Hanneke in this room. The others will join me in the other room.
We ask you to decide together as a group, for each element, whether you expect it to be challenged in a distributed setting. If you think an element will be challenged in a distributed setting, put it in the column “specifically challenged”; otherwise, put it in the column “not specifically challenged”. This exercise is time-boxed at 40 minutes; elements that have not been agreed upon by then will remain in the “undecided” column. Is the exercise clear?”
After both groups have reached agreement, the groups come together in one of the group rooms. The results of the groups and those of the previous focus group are merged. As in the previous round, the groups are asked to reach agreement on all elements.
Text by facilitator:
“As in the previous exercise, there are three columns: “undecided”, “specifically challenged” and “not specifically challenged”. Elements that both groups have identified as specifically challenged are put in the “specifically challenged” column. Elements that both groups have identified as not specifically challenged are put in the “not specifically challenged” column. All other elements are put in the “undecided” column.
As in the previous exercise, we ask you to reach agreement on each element. If you think an element will be challenged in a distributed setting, put it in the column “specifically challenged”; otherwise, put it in the column “not specifically challenged”. This exercise is time-boxed at 30 minutes; elements that have not been agreed upon by then will be dismissed. Is the exercise clear?”
Dot voting is done on two scales: impact and likelihood. Every participant gets one vote per element. The votes are divided individually on an empty sheet; no interaction occurs at this time. When all participants have divided their votes, the votes are handed in to the facilitator.
Text by facilitator:
“For this exercise, we will use dot voting. Is everyone familiar with dot voting?” (If needed, explain dot
voting)
“For this dot vote session you get one vote per element, so X votes in total. In this exercise there will be two topics to vote on: one on the likelihood of an element failing, the other on the impact that the element has when it fails. So you get X votes to divide based on how likely you think it is that an element fails in a distributed setting: more votes mean more likely to fail. You also get X votes to divide based on how big you think the impact is if an element fails in a distributed setting: more votes mean more impact if the element fails. You may put more than one vote on a single element, if you wish to do so.
We will now give each of you a form; please write down the element number in the first column. After this, we ask you to divide your votes over the elements; count your votes and check that the sum is X. When you are done, return your sheets to me. Please do not discuss the voting before we start. This exercise is time-boxed at 10 minutes. Is the exercise clear?”
The facilitator puts the results of both dot votes on the wall. The top 3 of both scales are used to proceed to the next phase, with a maximum of 6 elements.
Text by facilitator:
“Based on the votes provided, these 3 elements have been selected as most likely to fail, and these 3 elements have been selected as those with the highest impact. For the next round we will use these X elements.”
Text by facilitator:
“This exercise is done individually. You are asked to write down possible solutions for each of the X Agile Release Train elements. Use the forms provided to write these down. This exercise is time-boxed at 10 minutes. Is the exercise clear?”
A plenary instruction of the exercise is given, then each group will go to its own room. The groups are asked to discuss the elements and, for each element, to reach consensus on the two best solutions.
Text by facilitator:
“This exercise is done in two groups; the first group consists of (names group A), and the second group of (names group B). These groups have been created randomly by coin flip. You are asked to discuss possible solutions for each of the X Agile Release Train elements. For each element, reach consensus on the two best solutions and write each solution down on a post-it. You can take the results of your individual brainstorm with you for reference; however, please do not write anything else on them, and hand them in after the exercise. This exercise is time-boxed at 30 minutes. Is the exercise clear?”
Both groups present their solutions one by one and group similar solutions together.
Text by facilitator:
“In this exercise we ask each group to present their solutions and put them on the wall with the corresponding element; similar solutions are put together. Please be brief in elaborating the solutions; this exercise is time-boxed at 20 minutes, so one minute per solution. Is the exercise clear?”
The solutions are numbered and again dot voted, based on difficulty and impact.
Text by facilitator:
“In the final exercise, we will again use dot voting. Before we start ranking, the solutions will be numbered. We will rank on the two topics of difficulty and impact. Solutions that are easier to realize receive more votes, and solutions that have a bigger impact receive more votes.
We will now give each of you a form; please fill in the form, writing down your name and the element that you are ranking in the corresponding box. When you are done, return your sheets to me. Please do not discuss the ranking before we start. This exercise is time-boxed at 10 minutes. Is the exercise clear?”
The facilitator puts the results of the rankings on the wall. The participants are asked for any last
remarks on the rankings and anything that stands out to them.
Text by facilitator:
“Based on the dot voting, the following ranking of the solutions has been created. Are there any remarks on the rankings? Is there anything that stands out? This marks the end of the focus group. Thank you for your time.”
Closing 16:00
Appendix E Multiple informant execution
The informants used to verify the mappings are all SAFe Program Consultants from Prowareness who have implemented SAFe at different companies. An initial mapping was made based on the insights of the author. This mapping was then presented to the informants, one informant at a time. Adjustments made by the first informant were marked, and the new version was presented to the next informant. The first meeting regarding the mapping took place on the 11th of April 2016; the final discussion was on the 31st of May 2016. The duration was longer than initially anticipated, as half of the initially planned meetings did not yield results and finding a timeslot with an informant proved difficult.
Timeline
11 April, 2016: mapping discussion with informant 1
12 April, 2016: mapping discussion with an informant, which was not usable⁹
19 April, 2016: mapping discussion with informant 2
19 April, 2016: mapping discussion with an informant was cancelled; rescheduling proved impossible
10 May, 2016: mapping discussion with informant 3
11 May, 2016: mapping discussion with informant 2, via mail, on the changes of informant 3
31 May, 2016: final mapping discussion with informant 1 and 3
Because the mappings had to be verified before the focus group session, a meeting with all three informants had to be planned before the session. However, planning such a meeting before the focus group session was not possible. Therefore, informant 2 was asked to review the changes made by informant 3; informant 2 agreed with the changes and the reasoning. Besides this, informant 2 also made a mapping on impact for all problems, which would be discussed with informants 1 and 3 in the final mapping discussion. Informant 2 agreed in advance with the changes made in the final mapping discussion, provided no “big” changes, a minus to a plus for example, would be made there. The final discussion was then held with informants 1 and 3.
Once the informant understood the frame of reasoning for the mapping, the mapping was discussed. One by one, each cell of the mapping was reviewed by the informant. Per row, the problem was explained and the informant reviewed the values for each core value. Changes made by the informant were explained and incorporated. Sometimes, the informant lost the frame of reasoning for the mapping; in that case the author would remind the informant of the frame of reasoning. After all cells were reviewed, the informant was thanked for his time.
⁹ The informant found it very difficult to reason within the frame required for the mapping, and therefore did not feel competent to answer the questions. Thus, the choice was made not to discuss the mapping and to exclude the informant from the results.
Final mapping discussion execution
The final mapping discussion was held with informants 1 and 3; in this discussion the final version of the mapping was made. The changes that informants 2 and 3 made to the mapping provided by informant 1 were discussed. The author explained the changes of informant 2, as he was not present, and informant 3 explained his own changes. For each change, the informants came to an agreement on the value that should be filled in. New insights provided by the informants during the discussion did result in some minor changes in the four core value columns. However, none of these changes changed the result column, thus not affecting the next steps in the process.
After the informants reached agreement on the mapping on consideration, the mapping on impact was discussed for the 9 problems that were not considered. The informants deviated more from each other when filling in this mapping, which resulted in some changes to the result column as well. However, these were small changes which did not affect the next steps of the process.
Appendix F List of SAFe elements
An elaborate description of each of these elements can be found on the SAFe website
www.scaledagileframework.com [1].
SAFe events:
SAFe roles:
31. Shared Services
32. User Experience
Level transcending roles
33. Customer
34. Enterprise
SAFe artefacts:
65. Value Stream
66. Value Stream coordination
67. Value Stream Kanban
Portfolio level best practices
68. Value Streams
69. Portfolio Kanban
Level transcending best practices
70. Weighted Shortest Job First
71. Lean-agile leaders
72. Communities of Practice
73. Lean-agile mindset
74. Implementing 1-2-3
75. Release any time
76. Continuous Integration
77. Develop on cadence
78. Model-based systems engineering
79. Set-based design
80. Agile architecture
Appendix G Description of the Agile Release Train
elements
Below, a brief description is provided of the different elements of the Agile Release Train. An elaborate
description of each of these elements can be found on the SAFe website
www.scaledagileframework.com [1].
1. Alignment
2. Built-in Quality
3. Transparency
4. Program Execution
2. Lean-agile mindset
A lean-agile mindset is needed to support lean and agile development at scale in an entire
organization. The mindset consists of two parts, thinking lean and embracing agility.
3. SAFe principles
The SAFe principles are the guidelines for decision making when working with SAFe. These principles
also apply for decision making in the Agile Release Train. The nine SAFe principles are:
4. Implementing 1-2-3
Implementing 1-2-3 is the basic deployment pattern for a successful deployment of SAFe that has been
developed over the years of implementing SAFe.
5. Lean-agile leaders
For a SAFe transformation to be successful, the current managers, executives and leaders of the
organization need to adopt and lead the change. They have the power to continuously challenge the
organization to become more agile. After these people have been trained they become the so-called
lean-agile leaders that drive agile from within the organization.
6. Communities of Practice
Communities of Practice are informal groups of people from different teams, Agile Release Trains or
even Value Streams that have a shared interest, which is the topic of the Community of Practice. Both
the experts on the topic as well as those who want to become an expert are part of the Community
of Practice. The goal of these Communities of Practice is to allow knowledge to be shared across
different Agile Release Trains or Value Streams.
The agile teams that are part of the Agile Release Train are aligned via a single vision, roadmap and program backlog. These teams iterate in a so-called program increment, or PI, of 8 to 12 weeks, consisting of 4 to 6 two-week team iterations. During the team iterations the teams continuously add value to the solution by finishing fully tested stories. At the end of each team iteration the integrated solution is demoed in the System Demo, done by the system team.
continuously add new functionalities, the architecture needs to be extended continuously; this is done by implementing enablers. The architectural runway provides a means to keep the architecture designed just right.
18. System Architect
The System Architect is the person, team, or teams that is/are responsible for the overall technical
architecture and engineering design of the system. The System Architect designs the common
technical direction of the system.
21. Customer
The customer receives the solution which addresses his or her current needs. The customer works together with Product Management and other key stakeholders to prioritize development. By actively participating in events, such as planning sessions and demos, the customer knows what the teams are doing and can steer the development of the solution. The customer is thus a part of the Value Stream that is supported by the Agile Release Train.
22. DevOps
Deploying to operations is required to deliver value to the customer; this complex process is supported by the DevOps team. This team is part of the Agile Release Train, enabling deployment of the solution developed by the train at any time.
26. User Experience
User Experience represents the user’s perception of the system including the user interface. The User
Experience designers support all teams of the Agile Release Train with anything related to interactions
with the user. They also educate the teams on user interface design and testing.
28. Roadmap
The roadmap represents the Agile Release Train deliverables; it consists of the committed PI objectives of the current PI and a forecast for the next one or two PIs. Product Management updates the roadmap according to the vision. A balance has to be found between planning too little, resulting in less alignment, and planning too much, resulting in an unresponsive queue which obstructs change.
29. Metrics
SAFe presents multiple metrics, with which different things are measured. The most important are progress and whether the desired solution is delivered; this is measured best on the working solution. SAFe also presents many other ways to measure progress, for example the epic burn-up chart, Value Stream performance metrics and the Agile Release Train self-assessment. All these measures are useful; however, they are all inferior to measuring the working solution.
A feature only adds value for the customer if the feature is released and added to the working solution at the customer. Releasing frequently enables frequent addition of value. However, releasing should only be done when it actually makes sense, as stated in Release any time.
31. PI Objectives
The PI objectives are a summary of the business and technical objectives of the teams and the Agile Release Train. They are formulated during the PI planning, and the teams commit to them for the upcoming PI. Formulating these objectives validates the teams’ understanding of the intent of the business regarding the features the teams work on. As a result, when the intent of the business is known, the goal of the team becomes to achieve the desired outcome rather than to finish the list of features.
32. Feature
A feature describes a service that the system can provide that satisfies a specific need of one or more users. The size of a feature is such that it can be picked up in a single PI by a single Agile Release Train; thus, a feature is planned and reviewed at the PI boundaries. Features are split into stories which can be picked up by a team during an iteration.
33. Enabler
Enablers are the technical initiatives that pave the architectural runway and are created as business initiatives consume the runway. Enablers are only created when needed, to prevent engineering too far ahead, which results in an over-engineered solution. Enablers are formulated at program level as features and at team level as stories. Enablers that change the architecture can be big; however, they have to be broken down into small pieces (enabler stories) so that teams can implement these during an iteration.
34. Epics
The biggest business initiatives are cast into epics, in the form of lightweight business cases. Epics can be split over multiple Value Streams or Agile Release Trains. The features that an Agile Release Train develops come from the epics that are defined at portfolio level.
Appendix H Rejected studies
Table 25 shows the rejected studies including the reason for rejection.
Table 25: Rejected studies including reason for rejection
[124] The study is not on distributed
[125] The study is not on agile
[126] The study is not on agile
[127] The study does not present problems or challenges
[128] The study is not on agile
[129] The study does not present problems or challenges
[130] The study is not on agile
[131] The study is not on agile
[132] The study is not on agile
[133] The study is not on agile
[6] The study is not on agile
[134] The study is not on agile
[135] The study is not on agile
[136] The study is not on agile
[137] The study is not on agile
[138] The study is not on agile
[139] The study is not on agile
[140] The study is not on agile
[141] The study is not on agile
[142] The study is not on agile
[143] The study is not on agile
[144] The study is not on agile
[145] The study is not on agile
[146] The study is not on agile
[147] The study is not on agile
[148] The study is not on agile
[149] The study is not on agile
[150] The study is not on agile
[151] The study is not on agile
[152] The study is not on agile
[153] The study is not on agile
[154] The study is not on agile
[155] The study is not on agile
Appendix I Problems and challenges of accepted
studies
Table 26 to Table 36 list the different challenges that are presented, per study.
Table 26: Challenges of geographically distributed agile development from [67]
Challenge
Time zone differences
Geographic differences
Team size
Number of teams
Coordination
Project domain
Project architecture
Customer involvement
Customer representative involvement
Project management process
Communication tools
Communication infrastructure
Organizational culture
Language
National culture
Trust in team or team members
Personal practice
Challenge
Lack of synchronous communication
Collaboration difficulties
Poor communication bandwidth
Wide range of tool support needed
Team management
Finding the correct office space
Coordinating with multiple sites
Risk
Asynchronous communication
Lack of group awareness
Poor communication bandwidth
Lack of tool support
Large team sizes
Lack of collaborative office environment
Increased number of sites
Table 29: Challenges of Agile Global Software Development from [70]
Challenge
People differences
Distance differences
Team issues
Technology issues
Architectural issues
Processes issues
Customer communication
Table 30: Challenges of applying agile methods in offshore development from [71]
Challenge
Daily Scrum difficult to organize
Silence of participants due to linguistic and cultural differences
Temporal difference
Difficult to truly evaluate the state of the project
Lack of confidence due to cultural and language differences
Challenge
Lack of communication and collaboration during all development stages
There is a lack of English language skills within project team members that minimizes the
communication levels
There is a lack of communication between the developers and the Product Owners
Lack of shared knowledge and information
The increased distance between Agile developers minimizes the level of communication and
collaboration
Some development teams have issues with poor infrastructures
The visibility level of development progress is low
The cultural differences between project stakeholders can lead to lack of awareness
There is a lack of trust between team members
There is a lack of understanding of authority with some team participants
The differences in culture that can reduce the team responsibility and moral
There is a lack of transparency from some members regarding cultural differences
The cultural differences reduce developers’ productivity
There is a lack of team management “configuration management”
The development team has estimation difficulties with the development cost, scope and development
schedule
The differences of the development countries make barriers to adapt within the different local
regulations
There is a security risk according to the distances between teams. “Some information could be lost
during the communication”
Increasing the number of sites creates difficulties for team control and management
Time differences between teams reduce the available time for synchronous communication
Different holiday schedules make it difficult for teams to synchronize work
Some stakeholders have a lack of Agile skills
Global development settings can lead to insufficient development meetings
Lack of formal documents “no standards”
There is an increase of documentation during development
The large number of team members creates difficulties in applying some of the agile practices
Some technical issues with Global Software Development can lead to insufficient applications for
some agile practices and methods
Challenge
Time zone differences
Lack of visibility on priority, requirements, demo and sprint reviews
Lack of synchronous communication
Inadequate infrastructure
Availability of the Scrum Master
Lack of team structure and roles and responsibility
Inexperience with agile methods
Remote coaches
Trust and lack of productivity
Work distribution with distributed human resources
Team missing the big picture
Lack of documentation, unclear requirements
Cost of synchronous communication
Having common or shared components
Handling sensitive data at the offsite
Lack of processes
Work pattern varies per culture
Regional holidays are different
Language differences
Challenge
Communication need versus communication impedance
Fixed requirements versus evolving requirements
People-oriented versus process-oriented control
Formal agreements versus informal agreements
Short iterations versus distance complexity
Team cohesion versus team dispersion
Table 34: Challenges of Extreme Programming in Global Software Development from [75]
Challenge
Lack of communication due to asynchronous coordination
Difficulty in having synchronous collaboration
Reduced productivity due to communication overhead
Lack of frequent/informal communication
Lack of trust
Lack of team cohesion
Lack of interaction with the customer
Cultural differences
Language barriers
Time zone differences
Conflicting work/unnecessary delays due to lack of coordination
Difficulty in having an on-site customer
Lack of experience of XP
Difficulty in configuration management and version control system
Difficulty in editing shared data simultaneously
Difficulties due to weak technical infrastructure
Lack of productivity due to geographical distance
Difficulty in coordination
Lack of accessibility of information
Difficulty in maintaining tacit knowledge
Difficulty in accepting shared ownership
Difficulty in making independent decisions due to dependency on the superiors
Customer not aligned with agile
Challenge
Lack of synchronous/overlapping working hours
Lack of face to face communication
Cultural differences (language/behavior)
Increased communication cost
Expansion of number of sub teams in onshore and offshore teams that suffer the effect of poor
communication process between teams
Over-reliance on one person per team for communication leads to misinterpretation and/or loss of
information
Long traveling time between distributed sites
Increased coordination cost
Reduced cooperation arising from misunderstanding
Reduced informal contact can lead to lack of critical task awareness
Inconsistent work practices can impinge on the effect of coordination
Manager must adopt local regulation
Manage project artefacts
Lack of trust/teamness
Lack of interpersonal relationship/poor team dynamics
Lack of domain knowledge
Lack of visibility
Skill differences
Technical issues
Challenge
Ineffective communication
Language barriers
Unavailability of people
Lack of customer communication
Information hiding
Lack of directness and honesty
Sense of belonging to a team
Feeling insecurity
Trust building
Commitment
Collective vision
Collective ownership
Cultural differences
Authority
Lack of training
Technical infrastructure
Avoidance of accountability
Balance workload
Decision making
Appendix J Result of SLR reviews
Table 37 to Table 39 show the studies from the different Systematic Literature Reviews that were
subjected to the acceptance criteria.
Table 37: Result of studies subjected to the acceptance criteria from [78]
Table 38: Result of studies subjected to the acceptance criteria from [79]
[147] Rejected The study is not on agile
[132] Rejected The study is not on agile
[148] Rejected The study is not on agile
Table 39: Result of studies subjected to the acceptance criteria from [80]
Appendix K Problem groups
Table 40 to Table 68 show the problem groups.
Table 40: Problems due to inefficient communication tools
# Ref Problem
1 [9] Hardware and tools not sufficient
2 [3] Increased dependency on technology
3 [67] Communication tools
4 [67] Communication infrastructure
5 [68] Wide range of tool support needed
6 [68] Finding the correct office space
7 [69] Lack of collaborative office environment
8 [69] Lack of tool support
9 [70] Technology issues
10 [75] Difficulty in editing shared data simultaneously
11 [75] Difficulty in configuration management and version control system
12 [76] Technical issues
# Ref Problem
1 [9] Product Owner not present
2 [67] Customer involvement
3 [67] Customer representative involvement
4 [70] Customer communication
5 [72] There is a lack of communication between the developers and the Product Owners
6 [72] Lack of communication and collaboration during all development stages
7 [73] Availability of the Scrum Master
8 [75] Difficulty in having an on-site customer
9 [75] Lack of interaction with the customer
10 [77] Unavailability of people
11 [77] Lack of customer communication
# Ref Problem
1 [3] Communication delay
2 [3] Reduced hours of collaboration
3 [68] Lack of synchronous communication
4 [69] Asynchronous communication
5 [73] Lack of synchronous communication
6 [73] Cost of synchronous communication
7 [75] Lack of communication due to asynchronous coordination
8 [75] Difficulty in having synchronous collaboration
9 [76] Lack of face to face communication
10 [76] Lack of synchronous/overlapping working hours
Table 43: Problems due to different execution of work practices
# Ref Problem
1 [9] Difference in reporting impediments
2 [9] Different work practices
3 [3] Differences in terms of agreement
4 [3] Differences in quality assessment
5 [3] Differences in design
6 [67] Personal practice
7 [73] Work pattern varies per culture
8 [76] Inconsistent work practices can impinge on the effect of coordination
9 [77] Balance workload
# Ref Problem
1 [3] Differences in language
2 [67] Language
3 [71] Lack of confidence due to cultural and language differences
4 [72] There is a lack of English language skills within project team members that minimizes
the communication levels
5 [73] Language differences
6 [75] Language barriers
7 [76] Cultural differences (language/behavior)
8 [77] Language barriers
# Ref Problem
1 [3] Increased complexity of the technical infrastructure
2 [67] Project architecture
3 [70] Architectural issues
4 [72] Some development teams have issues with poor infrastructures
5 [72] Some technical issues with Global Software Development can lead to insufficient
applications for some agile practices and methods
6 [73] Inadequate infrastructure
7 [75] Difficulties due to weak technical infrastructure
8 [77] Technical infrastructure
# Ref Problem
1 [3] Loss of cohesion
2 [69] Lack of group awareness
3 [70] Team issues
4 [74] Team cohesion versus team dispersion
5 [75] Lack of team cohesion
6 [75] Lack of productivity due to geographical distance
7 [76] Lack of interpersonal relationship/poor team dynamics
8 [77] Sense of belonging to a team
Table 47: Problems due to misinterpretation
# Ref Problem
1 [9] Misunderstanding
2 [68] Poor communication bandwidth
3 [69] Poor communication bandwidth
4 [72] There is a security risk according to the distances between teams. “Some information
could be lost during the communication”
5 [75] Lack of accessibility of information
6 [76] Over-reliance on one person per team for communication leads to misinterpretation
and/or loss of information
7 [76] Reduced cooperation arising from misunderstanding
8 [77] Information hiding
# Ref Problem
1 [9] Managing customers new to agile
2 [72] Some stakeholders have a lack of Agile skills
3 [73] Inexperience with agile methods
4 [75] Lack of experience of XP
5 [75] Customer not aligned with agile
6 [76] Skill differences
7 [77] Lack of training
# Ref Problem
1 [3] Reduced trust
2 [67] Trust in team or team members
3 [72] There is a lack of trust between team members
4 [73] Trust and lack of productivity
5 [75] Lack of trust
6 [76] Lack of trust/teamness
7 [77] Trust building
# Ref Problem
1 [9] Time differences
2 [9] Meetings at the office outside office hours
3 [67] Time zone differences
4 [71] Temporal difference
5 [72] Time differences between teams reduce the available time for synchronous
communication
6 [73] Time zone differences
7 [75] Time zone differences
Table 51: Problems due to people differences
# Ref Problem
1 [3] Differences in ethical values
2 [3] Differences in managing individualism and collectivism
3 [3] Differences in time perception
4 [70] People differences
5 [72] There is a lack of understanding of authority with some team participants
6 [77] Authority
# Ref Problem
1 [67] Project management process
2 [68] Team management
3 [70] Processes issues
4 [73] Lack of processes
5 [74] People-oriented versus process-oriented control
6 [76] Manage project artefacts
# Ref Problem
1 [9] Coordinating in multiple time zones is difficult
2 [67] Coordination
3 [68] Coordinating with multiple sites
4 [75] Conflicting work/unnecessary delays due to lack of coordination
5 [75] Difficulty in coordination
6 [76] Increased coordination cost
# Ref Problem
1 [72] The differences in culture can reduce the team responsibility and morale
2 [73] Lack of team structure and roles and responsibility
3 [73] Having common or shared components
4 [75] Difficulty in accepting shared ownership
5 [77] Avoidance of accountability
6 [77] Collective ownership
# Ref Problem
1 [9] Incorrect execution of Scrum
2 [9] Scrum of Scrums not effectively used
3 [9] Features not being deployment ready at end of sprint
4 [72] Global development settings can lead to insufficient development meetings
5 [72] There is an increase of documentation during development
6 [74] Short iterations versus distance complexity
Table 56: Problems due to cultural differences - organizational and national
# Ref Problem
1 [3] Differences in organizational vision
2 [67] Organizational culture
3 [67] National culture
4 [75] Cultural differences
5 [77] Cultural difference
# Ref Problem
1 [9] Informal contact is lost
2 [3] Lack of informal communication
3 [72] The increased distance between Agile developers minimizes the level of
communication and collaboration
4 [75] Lack of frequent/informal communication
5 [76] Reduced informal contact can lead to lack of critical task awareness
# Ref Problem
1 [9] Lack of focus
2 [73] Team missing the big picture
3 [77] Commitment
4 [77] Collective vision
# Ref Problem
1 [72] Lack of formal documents “no standards”
2 [73] Lack of documentation, unclear requirements
3 [74] Fixed requirements versus evolving requirements
4 [74] Formal agreements versus informal agreements
# Ref Problem
1 [71] Difficult to truly evaluate the state of the project
2 [72] The visibility level of development progress is low
3 [73] Lack of visibility on priority, requirements, demo and sprint reviews
4 [76] Lack of visibility
# Ref Problem
1 [3] Lack of shared understanding
2 [72] Lack of shared knowledge and information
3 [75] Difficulty in maintaining tacit knowledge
4 [76] Lack of domain knowledge
# Ref Problem
1 [3] Increased effort to initiate contact
2 [74] Communication need versus communication impedance
3 [75] Reduced productivity due to communication overhead
4 [76] Increased communication cost
# Ref Problem
1 [3] Increased team size
2 [67] Team size
3 [69] Large team sizes
# Ref Problem
1 [9] Different holidays
2 [72] Different holiday schedules make it difficult for teams to synchronize work
3 [73] Regional holidays are different
# Ref Problem
1 [75] Difficulty in making independent decisions due to dependency on the superiors
2 [76] Manager must adopt local regulation
3 [77] Decision making
# Ref Problem
1 [67] Number of teams
2 [72] The large number of team members creates difficulties in applying some of the agile
practices
3 [76] Expansion of number of sub teams in onshore and offshore teams that suffer the
effect of poor communication process between teams
# Ref Problem
1 [9] Silence / passivism
2 [71] Silence of participants due to linguistic and cultural differences
Table 68: Problems due to increased number of sites
# Ref Problem
1 [69] Increased number of sites
2 [72] Increasing the number of sites creates difficulties for team control and management
Appendix L Ungrouped problems
Table 69 shows the ungrouped problems.
Table 69: Ungrouped problems
# Ref Problem
1 [70] Distance differences
2 [72] The cultural differences between project stakeholders can lead to lack of awareness
3 [72] There is a lack of transparency from some members regarding cultural differences
4 [72] The differences of the development countries make barriers to adapt within the
different local regulations
5 [73] Handling sensitive data at the offsite
6 [73] Work distribution with distributed human resources
7 [9] No syncing between sites
8 [9] Planning a meeting with everyone present is difficult
9 [9] Integration difficulties
10 [9] Multiple Product Owners not in sync
11 [67] Geographic differences
12 [67] Project domain
13 [68] Collaboration difficulties
14 [72] There is a lack of team management “configuration management”
15 [72] The development team has estimation difficulties with the development cost, scope
and development schedule
16 [73] Remote coaches
17 [9] Not communicating all information to team
18 [72] The cultural differences reduce developers’ productivity
19 [77] Ineffective communication
20 [3] Perceived threat from low-cost alternatives
21 [77] Lack of directness and honesty
22 [77] Feeling insecurity
23 [9] No transparency between sites
24 [71] Daily Scrum difficult to organize
25 [76] Long traveling time between distributed sites
Appendix M Expert focus group: invitation letter
Dear <name>,
First of all, thank you very much for your willingness to participate in this research.
My name is Peter van Buul, and together with Rini van Solingen I am investigating distributed SAFe
as part of my master thesis in Information Architecture at Delft University of Technology. The
research is carried out by me as a graduate student assignment at Prowareness. The first step in the
research is to discover potential problems when applying SAFe in a distributed setting. As part of
this research, a focus group session is organized in which experts can contribute to the research.
This focus group will help identify possible problems with distributed SAFe.
The focus group session will be on Monday the 13th of June 2016, from 13:00 to 17:00, at
Prowareness, Brassersplein 1, 2612 CT Delft. The session can only start when all participants are
present, so could you please contact me by phone if you are delayed? We will arrange a quick lunch
for you, so being present at 12:30 will help us start punctually.
Information is provided in English to allow the international community to examine the research.
During the focus group the assignments will be in English as well; interactions, however, can be in
Dutch.
Attached you will find a mapping that is used as input for the focus group. Would you be so kind as
to fill in this mapping before the session and send it back before the 12th of June, so that it can
be used during the focus group? Please note that this can take about an hour of your time.
Appendix N Expert focus group: attachment
Agile Release Train elements as numbered in the picture on the previous page:
6. Communities of Practice
7. Value Stream coordination
10. Agile Release Train / Value Stream
11. Architectural runway
12. Program Kanban
17. System Architect, Release Train Engineer & Product Management
20. DevOps & System team
21. Release Management, Shared Services & User Experience
27. Feature
28. Enabler
The five problem columns of the mapping: problems due to incorrect execution of SAFe;
misunderstanding due to language barriers; problems due to time zone differences; no or less
communication due to increased communication difficulty; unable to communicate properly due to
inefficient communication tools.
Example mapping
An example of a mapping for cooking a four-course dinner with two persons, given the following problems: no gas on the stove, little cooking experience,
and too much distraction. The dinner elements (courses) that are expected to fail if a problem occurs are placed in the table.
Description of Agile Release Train elements
If you are new to SAFe, please watch https://www.youtube.com/watch?v=tmJ_mJw8xec, which is a
good 5-minute introduction video to SAFe.
There are two versions of SAFe: four level SAFe and three level SAFe. The difference between these
versions is an extra level, the value stream level. A clear distinction has to be made between the
value stream level and a Value Stream. The value stream level consists of many practices and is used
by large enterprises when a single Agile Release Train cannot handle a Value Stream, whereas a Value
Stream is the practice that SAFe uses to continuously deliver value to a customer. In three level
SAFe, a single Agile Release Train is used per Value Stream to continuously deliver value to the
customer. This research, and the focus group session, will focus on the Agile Release Train and
therefore on three level SAFe.
The different elements of the Agile Release Train in SAFe are described below. Figure 45 shows the
SAFe overview picture of three level SAFe, in which the different elements that are part of the Agile
Release Train are numbered. Below, a brief description is provided of each of these elements. An
elaborate description of each element can be found on the SAFe website
www.scaledagileframework.com [1].
Figure 45: Agile Release Train elements numbered, Modified and reproduced with permission from © 2011-2016 Scaled Agile,
Inc. All rights reserved. Original Big Picture graphic found at scaledagileframework.com
2. Lean-agile mindset
The lean-agile mindset is needed to support lean and agile development at scale in an entire
organization. The mindset consists of two parts: thinking lean and embracing agility.
3. SAFe principles
The SAFe principles are the guidelines for decision making when working with SAFe. These principles
also apply to decision making in the Agile Release Train. There are nine SAFe principles.
4. Implementing 1-2-3
Implementing 1-2-3 is the basic deployment pattern for a successful deployment of SAFe that has been
developed over the years of implementing SAFe.
5. Lean-agile leaders
For a SAFe transformation to be successful, the current managers, executives and leaders of the
organization need to adopt the change. They have the power to continuously challenge the
organization to become more agile. After they have been trained, they become the so-called
lean-agile leaders.
6. Communities of Practice
Communities of Practice are informal groups of people from different teams, Agile Release Trains or
even Value Streams that have a shared interest: the topic of the Community of Practice. Both the
experts on the topic and those who want to become an expert are part of the Community of
Practice. The goal of these Communities of Practice is to allow knowledge to be shared across different
Agile Release Trains or Value Streams.
managed to make the Value Streams function independently. Value Stream coordination contains the
different tools that SAFe provides to manage these dependencies.
The agile teams that are part of the Agile Release Train are aligned via a single vision, roadmap and
program backlog. These teams iterate in a so-called program increment, or PI, of 8 to 12 weeks,
consisting of 4 to 6 two-week team iterations. During the team iterations the teams continuously add
value to the solution by finishing fully tested stories. At the end of each team iteration the
integrated solution is demoed in the system demo, done by the system team.
execution, they actively participate in the PI, enable decentralized decision making by providing
the team members with the appropriate authority, and function as a coach for the teams, enabling
them to continuously improve their skills.
19. Customer
The customer receives the solution that solves their current needs. The customer works together with
Product Management and other key stakeholders to prioritize development. By actively participating
in events, such as planning sessions and demos, the customer knows what they get and can steer the
solution. The customer is thus part of the Value Stream that is supported by the Agile Release Train.
23. Roadmap
The roadmap is used to present the Agile Release Train deliverables; it consists of the committed PI
objectives of the current PI and a forecast for the next PI or two. Product Management updates the
roadmap according to the vision. A balance has to be found between planning too little, resulting
in less alignment, and planning too much, resulting in an unresponsive queue that obstructs
change.
24. Metrics
SAFe presents multiple metrics that can be used to measure different results. The most important is
progress and whether the desired solution is delivered; this is best measured by measuring the
working solution at the customer. SAFe also presents many other ways to measure progress at the
different levels, for example the epic burn-up chart and Value Stream performance metrics.
A feature only adds value for the customer once the feature is released and added to the working
solution at the customer. Releasing frequently enables frequent addition of value. However, releasing
should only be done when it actually makes sense, as stated in release any time.
26. PI Objectives
The PI objectives are a summary of the business and technical objectives of the team and Agile Release
Train. They are formulated during the PI planning and the teams commit to them for the upcoming PI.
Formulating these objectives validates the teams' understanding of the intent of the business
regarding the features the teams work on. As a result, when the intent of the business is known, the
goal of the team becomes achieving the desired outcome rather than finishing the list of features.
27. Feature
A feature describes a service that the system can provide that satisfies a specific need of one or more
users. The size of a feature is such that it can be picked up in a single PI by a single Agile Release
Train; thus a feature is planned and reviewed at the PI boundaries. Features are split into stories,
which can be picked up by a team during an iteration.
28. Enabler
Enablers are the technical initiatives that pave the architectural runway and are created as business
initiatives consume the runway. Enablers are only created when needed, to prevent engineering too
far ahead, which results in an over-engineered solution. Enablers are formulated at program level as
features and at team level as stories. Enablers that change architecture can be big; however, they
have to be broken down into small pieces (enabler stories) so that teams can implement them during
an iteration.
29. Epics
The biggest business initiatives are cast into epics, in the form of lightweight business cases. Epics
can be split over multiple Value Streams or Agile Release Trains. The features that an Agile Release
Train develops come from the epics that are defined at portfolio level.
Problem
Problems due to incorrect execution of SAFe
Not executing SAFe properly results in many problems: features that are not ready at the end of the
sprint/PI; teams that get no feedback on their work because there are no retrospectives, PI
plannings or demos; or an increase in documentation because SAFe does not require enough
documentation for the teams.
Misunderstanding due to language barriers
Many studies report problems due to language barriers. If the language used for communication is
not someone's native language, it can be hard for this person to follow a conversation or express
themselves. Also, speakers from different countries might have different dialects that can be hard
for others to follow.
Problems due to time zone differences
Time zone differences can lead to having meetings outside office hours and reduced availability for
synchronous communication.
No or less communication due to increased communication difficulty
Initiating contact in a distributed environment takes increased effort, as contact cannot be
initiated face to face; some tool has to be used. This creates communication overhead and increases
communication costs.
Unable to communicate properly due to inefficient communication tools
The most mentioned problem in the literature is being unable to communicate properly due to
inefficient communication tools. Both the hardware and tools used for communication are part of
the communication infrastructure. These problems are quite severe as there is a high dependency
on this infrastructure for communication.
Opening
Participants were received centrally in the hall. When they came in, they were asked not to talk to
the other participants about the focus group, SAFe or distributed development for the duration of
the focus group, except when asked to do so during the session. All participants were welcomed and
walked in by one of the facilitators.
Introduction
Rini did a plenary introduction and the list of Agile Release Train elements was provided to the
participants.
Peter explained the first exercise; after the explanation, the participants' questions were answered
by Rini. Once the exercise was clear, the first group went with Rini (group Rini) to the other
room to execute the first exercise. The other group stayed with Peter (group Peter) to execute the
first exercise.
Both groups started with the setup as presented in Table 70; a picture is provided in Figure 46.
Table 70: Start setup round 1 - groups
4. Implementing 1-2-3
8. Weighted Shortest Job First
24. Metrics
29. Epics
11. Architectural runway
3. SAFe principles
9. Release any time
17. System Architect, Release
Train Engineer & Product
Management
18. Business Owners
19. Customer
7. Value Stream coordination
Both groups did the exercise and progressed at roughly the same rate. The results of group
Rini are presented in Table 71 and Figure 47; the results of group Peter are presented in Table 72 and
Figure 48.
Table 71: Results round 1 - group Rini
10. Agile Release Train / Value 6. Communities of Practice
Stream
9. Release any time 12. Program Kanban
21. Release Management,
Shared Services & User-
Experience
17. System Architect, Release
Train Engineer & Product
Manager
26. PI Objectives
18. Business Owners
5. Lean-agile leaders
7. Value Stream coordination
25. Milestones & Release
29. Epics
27. Feature
8. Weighted Shortest Job First
28. Enabler
Figure 48: Picture results round 1 - group Peter
After the time-box was finished, the results of both groups were combined, which created a new
starting point. Items that both groups classified the same were put in the corresponding column;
this resulted in the start setup as presented in Table 73.
Table 73: Start setup round 1 - plenary
Peter explained the exercise; after this explanation, the group started with the exercise. A picture to
give an impression of the session is presented in Figure 49.
As the participants were all equal, the group found it hard to converge and reach a decision on the
elements. Though it took a little longer, this did not have any effect on the result. In the end the group
did not reach agreement on one item, "8. Weighted Shortest Job First", so they chose to leave that
item undecided. The result of the exercise is presented in Table 74 and Figure 50.
Table 74: Results round 1 - plenary
22. Vision 15. Inspect & Adapt
21. Release Management, 12. Program Kanban
Shared Services & User-
Experience
10. Agile Release Train / Value 2. Lean-agile mindset
Stream
23. Roadmap 20. DevOps & System team
5. Lean-agile leaders 1. Core Values
26. PI objectives 6. Communities of Practice
27. Feature
25. Milestones & Release
24. Metrics
28. Enabler
11. Architectural Runway
29. Epics
17. System Architect, Release
Train Engineer & Product
Management
3. SAFe principles
7. Value Stream coordination
18. Business Owners
Rini explained the next exercise, dot voting, after which the participants wrote down their votes
individually. No interaction occurred during this time. An impression of the exercise is shown in Figure
51.
Figure 51: Impression of dot voting
After all participants were done with voting, the facilitators put the results on the wall. These results
are presented in Table 75 and Figure 52. The ranking on likelihood is presented in Table 76, and the
ranking on impact in Table 77. The individual votes of the participants can be found in Appendix P.
Table 75: Result round 2 - dot voting individually
1 16. PI planning 18
2 15. Inspect & Adapt 11
3 14. Solution Demo 7
4 20. DevOps & System team 6
5 12. Program Kanban 5
6 1. Core values 4
7 13. System Demo 3
8 2. Lean-agile mindset 0
9 6. Communities of Practice 0
From these rankings the top 3 were taken and combined, resulting in the following items to be
discussed: "16. PI planning", "15. Inspect & Adapt", "14. Solution Demo", and "13. System Demo", as
shown in Figure 53.
Peter explained the exercise after which the participants each wrote down the consequences. For
each consequence a separate post-it was used, with the name of the participant and the element
number on it.
One by one, the participants put their post-its on the wall, with the corresponding element, as
presented in Figure 54, grouping similar consequences as they were put on the wall. If multiple
post-its contained the same problem, Rini summarized the problem on a separate post-it. At the start
the four elements were put on the wall with no post-its under them, as presented in Table 78.
Table 78: Start setup round 3 - plenary
16. PI planning 15. Inspect & Adapt 14. Solution Demo 13. System Demo
While the group discussed the consequences, there was no moderator, so the group did not converge to
a result. After this went on for a while, Rini started to moderate to get the group to a result,
asking them to look at the ungrouped consequences and either group them or make them a new group.
This way the consequences were mapped for each element. Table 79 to Table 82 present the
consequences of each of the elements. The build-up of each of the consequences can be found in
Appendix Q.
Table 79: Result round 3 - consequences System Demo
Number Consequence
13.1 No integration no working system
13.2 Unclear value
13.3 No feedback
13.4 Bad team morale
13.5 Annoyed customers
13.6 Unpredictability
13.7 Rework
13.8 Delay
Table 80: Result round 3 - consequences Solution Demo
Number Consequence
14.1 No clear / unknown value
14.2 Annoyed stakeholders / customers
14.3 Bad morale
14.4 Unpredictability
14.5 No/late feedback
14.6 Delay
14.7 Rework
Number Consequence
15.1 No learning
15.2 Fallback / unlearning
15.3 Dissatisfied / annoyed customer / stakeholders
15.4 No motivation
Number Consequence
16.1 No goal no execution
16.2 No alignment between teams
16.3 No real results
16.4 Stakeholder annoyance
16.5 No teamness / commitment
16.6 Lack of Transparency
16.7 Rework
16.8 Longer time to market
Peter explained the last exercise, dot voting, after which the participants started writing down their
votes individually. During this exercise no interactions occurred.
After all participants were done with voting, the facilitators put the results on the wall. The results are
presented per element, in Table 83 to Table 86, and in Figure 55 to Figure 58. The ranking on likelihood
can be found in Table 87 to Table 90, and the ranking on impact in Table 91 to Table 94. The individual
votes of the participants can be found in Appendix R. The visualization of the data can be found in
Appendix S.
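The rankings in Table 87 to Table 94 follow from tallying the dots each consequence received and sorting in descending order. A minimal sketch of that tally, using the Solution Demo likelihood totals from Table 88; breaking ties by consequence number is an assumption, as the text does not state how ties between equally voted consequences were resolved.

```python
# Sketch of the dot-vote tally behind the likelihood/impact rankings.
# Vote counts are the Solution Demo likelihood totals (Table 88);
# tie-breaking by consequence number is an assumption.
likelihood_votes = {
    "14.1 No clear / unknown value": 8,
    "14.2 Annoyed stakeholders / customers": 8,
    "14.3 Bad morale": 10,
    "14.4 Unpredictability": 5,
    "14.5 No/late feedback": 7,
    "14.6 Delay": 3,
    "14.7 Rework": 1,
}

# Most dots first; ties fall back to the consequence number.
ranking = sorted(likelihood_votes.items(), key=lambda kv: (-kv[1], kv[0]))
for rank, (consequence, dots) in enumerate(ranking, start=1):
    print(rank, consequence, dots)
```

Applied to the Table 88 totals, this reproduces the order 14.3, 14.1, 14.2, 14.5, 14.4, 14.6, 14.7.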
Table 83: Result round 4 - dot voting individually on 13. System Demo
Table 84: Result round 4 - dot voting individually on 14. Solution Demo
Table 85: Result round 4 - dot voting individually on 15. Inspect & Adapt
Table 86: Result round 4 - dot voting individually on 16. PI planning
Figure 55: Picture result round 4 - dot voting individually on 13. System Demo
Figure 56: Picture result round 4 - dot voting individually on 14. Solution Demo
Figure 57: Picture result round 4 - dot voting individually on 15. Inspect & Adapt
Figure 58: Picture result round 4 - dot voting individually on 16. PI planning
Table 88: Result round 4 - consequences Solution Demo ranked on likelihood
1 14.3 Bad morale 10
2 14.1 No clear / unknown value 8
3 14.2 Annoyed stakeholders / customers 8
4 14.5 No/late feedback 7
5 14.4 Unpredictability 5
6 14.6 Delay 3
7 14.7 Rework 1
Table 89: Result round 4 - consequences Inspect & Adapt ranked on likelihood
Table 93: Result round 4 - consequences Inspect & Adapt ranked on impact
Closing
Everybody was thanked for their time and cooperation with the studies, and a small gift was presented to the participants as a thank-you.
Appendix P Expert focus group: individual votes round 2
Table 95: Votes on elements - Distributed expert 1
Appendix Q Expert focus group: consequences per element round 3
13. System Demo
Table 101: 13.1 No integration no working system
Expert Note
SAFe expert 1 Technical debt as integration proven late leads to quick fixing and too big issues to
be able to resolve
SAFe expert 1 Save up trouble for the end (integrating late leads to trail of misery)
SAFe expert 2 Is the system integrated / working?
Table 102: 13.2 Unclear value
Expert Note
Distributed Expert 1 Unable to show added value
SAFe expert 2 No observation of a working system possible -> Where are we regarding PI objectives?
Table 103: 13.3 No feedback
Expert Note
SAFe expert 2 No feedback -> are we still doing what is of value?
Distributed Expert 1 Unable to gather feedback
SAFe expert 1 No user / customer feedback, so building too much or the wrong thing
Practitioner 1 No good feedback for next iteration, can't change to better customer solution
Table 104: 13.4 Bad team morale
Expert Note
Distributed Expert 2 Dissatisfied team
Practitioner 1 No good platform for the teams to show their results -> demotivation
SAFe expert 1 Team finger pointing (it works on our side)
Practitioner 2 Team morale
Table 105: 13.5 Annoyed customers
Expert Note
Practitioner 2 Stakeholder buy-in
Distributed Expert 2 Dissatisfied customers
Table 106: 13.6 Unpredictability
Expert Note
Distributed Expert 1 Current state of the system is not clear
Table 107: 13.7 Rework
Expert Note
Distributed Expert 2 Rework
Table 108: 13.8 Delay
Expert Note
Distributed Expert 2 Delay
Distributed Expert 2 Time to market longer
14. Solution Demo
Table 109: 14.1 No clear / unknown value
Expert Note
SAFe expert 2 What value has been delivered? = unclear
Practitioner 2 No acceptance
Distributed Expert 1 Unable to show added value to stakeholders
Table 110: 14.2 Annoyed stakeholders / customers
Expert Note
Practitioner 2 No stakeholder buy-in
Distributed Expert 2 Dissatisfied customers
Table 111: 14.3 Bad morale
Expert Note
SAFe expert 2 No transparency to main stakeholders -> no "demo or die" -> urgency -> is the system integrated?
SAFe expert 1 Disengagement of people on the train (just do my job)
Practitioner 1 No good platform for the teams to show their results -> demotivation
Distributed Expert 2 Dissatisfied teams
Practitioner 2 Morale of ART
Table 112: 14.4 Unpredictability
Expert Note
SAFe expert 1 Release Train cycle disappears, stops the cadence of releasing, unpredictability of releasing
Distributed Expert 1 Current state of the solution is not clear
Table 113: 14.5 No/late feedback
Expert Note
Practitioner 1 No good feedback; feedback on prod. will be higher
Distributed Expert 1 Unable to gather feedback
Table 114: 14.6 Delay
Expert Note
Distributed Expert 2 Delay
Distributed Expert 2 Time to market longer
Table 115: 14.7 Rework
Expert Note
Distributed Expert 2 Rework
15. Inspect & Adapt
Table 116: 15.1 No learning
Expert Note
Distributed Expert 1 Not possible to identify the current challenges
Distributed Expert 1 Not possible to improve the process
Practitioner 1 No improvements, no learning in the Release Train / teams
Practitioner 1 Not becoming a team, stuck in norming or storming phase
SAFe expert 2 No (cross team) learning
SAFe expert 1 No learning from mistakes, no improvement process
Table 117: 15.2 Fallback / unlearning
Expert Note
Distributed Expert 1 Current state is unclear
SAFe expert 1 Fallback into old processes and structures
Table 118: 15.3 Dissatisfied / annoyed customer / stakeholders
Expert Note
Distributed Expert 2 Dissatisfied customers
Table 119: 15.4 No motivation
Expert Note
Practitioner 1 People becoming unhappy, performance decrease
SAFe expert 2 No flow optimization owned & proposed by train members
SAFe expert 1 Assumptions rather than shared learning drive behavior, so less focus + engagement
Practitioner 2 Demotivated team
16. PI Planning
Table 120: 16.1 No goal no execution
Expert Note
SAFe expert 2 No alignment to a common goal -> "why" is not answered
Distributed Expert 2 Unclear PI's
SAFe expert 1 Teams cannot commit/focus, train does not execute anything
Distributed Expert 1 Lack of scope
Distributed Expert 1 Lack of focus
Distributed Expert 1 Lack of prioritization
Practitioner 1 Not working towards the same goal
Table 121: 16.2 No alignment between teams
Expert Note
SAFe expert 2 Missing out dependencies -> surprises -> less/no value delivery or replanning
Table 122: 16.3 No real results
Expert Note
SAFe expert 2 Just do work instead of leveraging knowledge in teams
Practitioner 1 No aligned planning, resulting in not working software
Distributed Expert 2 Workflow stops
Table 123: 16.4 Stakeholder annoyance
Expert Note
SAFe expert 1 Business stakeholders detached from execution
Practitioner 2 Stakeholders pissed off
Table 124: 16.5 No teamness / commitment
Expert Note
SAFe expert 2 Missing the feeling of "We are in this together"
SAFe expert 1 People feel insecure and uncertain
SAFe expert 1 Blaming and complaining culture due to lack of commitment
Practitioner 2 No commitment
Distributed Expert 2 Angry/distressed developers
Distributed Expert 1 Lack of commitment
Table 125: 16.6 Lack of Transparency
Expert Note
SAFe expert 1 Lack of transparency
Distributed Expert 1 Lack of transparency
Table 126: 16.7 Rework
Expert Note
Practitioner 2 Rework/repair
Practitioner 1 No aligned planning, resulting in big issues with integration
Table 127: 16.8 Longer time to market
Expert Note
Distributed Expert 2 Delay causes waiting time
Appendix R Expert focus group: individual votes round 4
Table 128: Votes on consequences - Distributed Expert 1
15.1 No learning 2 1
15.2 Fallback / unlearning 2 2
15.3 Dissatisfied / annoyed customer / stakeholders 1
15.4 No motivation
15.1 No learning 2 1
15.2 Fallback / unlearning 1 1
15.3 Dissatisfied / annoyed customer / stakeholders 1
15.4 No motivation 1 1
14.7 Rework
15.1 No learning 2 1
15.2 Fallback / unlearning
15.3 Dissatisfied / annoyed customer / stakeholders
15.4 No motivation 2 3
15.1 No learning 1
15.2 Fallback / unlearning 2 2
15.3 Dissatisfied / annoyed customer / stakeholders
15.4 No motivation 1 2
15.1 No learning 2 1
15.2 Fallback / unlearning 1 2
15.3 Dissatisfied / annoyed customer / stakeholders
15.4 No motivation 1 1
13.4 Bad team morale 1
13.5 Annoyed customers
13.6 Unpredictability
13.7 Rework
13.8 Delay
15.1 No learning 2 2
15.2 Fallback / unlearning
15.3 Dissatisfied / annoyed customer / stakeholders
15.4 No motivation 2 2
Chart: Expert distribution of likelihood - System Demo (x-axis: Consequences, y-axis: Number of votes)
Chart: Expert distribution of likelihood - Solution Demo (x-axis: Consequences, y-axis: Number of votes)
Chart: Expert distribution of likelihood - Inspect & Adapt (x-axis: Consequences 15.1 to 15.4, y-axis: Number of votes)
Chart: Expert distribution of likelihood - PI planning (x-axis: Consequences 16.1 to 16.8, y-axis: Number of votes)
Master Thesis Peter van Buul
Appendix U Results of the survey
The survey was executed from the 25th of July to the 31st of August. It consisted of three parts: demographics, challenged elements of the Agile Release Train, and options for future research. In the demographics part, the participants' experience with SAFe and with distributed working was asked. In the challenged-elements part, the nine elements identified by the focus group were rated on the likelihood and impact of failing. In the future-research part, the assumption that in any distributed SAFe implementation a conscious decision is made whether or not to hold the events co-located was checked. Additionally, the participants were asked how helpful possible solution directions for future research would be. A printed version of the survey, containing all questions, can be found in Appendix T.
The survey was distributed digitally using SurveyMonkey. It was spread to the community via social media: posted in the SAFe LinkedIn group, on Twitter, and on Facebook. Besides the community, some people were approached directly: the survey was spread among the Agile Release Train of T-Mobile, as well as to all participants of SAFe trainings at Prowareness. Additionally, the consultants at Prowareness shared the survey in their network. The response from the community was very low: in total 20 people took part via this route, all from LinkedIn. Both T-Mobile and the trainees responded better, with 14 and 26 replies respectively. Finally, the network produced 6 replies in total, which brings the total number of participants to 66, of which 42 completed the entire survey.
The survey initially ran during the summer holiday, until the 31st of August, which possibly caused the low response. The duration was therefore extended until the 20th of September to eliminate the summer holiday as a cause. Despite this, only 6 people participated during the extended period, which did not significantly raise the response rate.
Dear <name>,
First of all, thank you very much for your willingness to participate in this research.
My name is Peter van Buul, and together with Rini van Solingen I am investigating distributed SAFe, as part of my master thesis in Information Architecture at Delft University of Technology. The research is carried out by me as a graduation assignment at Prowareness. During this research, challenges with executing SAFe in a distributed environment have been discovered. As part of this research, an expert session is organized in which Release Train Engineers will discuss the practical side of this research.
The expert session will be held on Monday the 31st of October 2016, from 13:00 to 17:00, at Prowareness, Brassersplein 1, 2612 CT Delft. The session can only start when all participants are present. We will arrange lunch for you from 12:00; please arrive on time, as this will help us start punctually. Could you please contact me by phone if you are delayed?
The information is provided in English so that the international community can examine the research. During the expert session the assignments will be in English as well; interactions during the session, however, can be in Dutch.
The program for the day is expected to consist of the following 3 parts:
- Discussion on challenged distributed SAFe elements 13:15 – 15:00
- Discussion on solutions to prevent elements from failing 15:00 – 16:00
- Discussion on consequences of elements failing 16:00 – 17:00 (ultimate latest)
The program is not finalized yet, so the final program might be slightly different. To finalize the
program, I would like to ask you the following questions.
- How long have you worked with SAFe?
- What is your experience with distributed working?
- What is your experience with distributed in SAFe?
Additionally, I might ask you to prepare some things before the session.
Likelihood Impact
Name:
Element
Element
Element
Element
Element
Name:
Element
Solution
Element
Solution
Element
Solution
Element
Solution
Element
Solution
Element
Solution
Appendix X Practitioner focus group: execution
Opening
The participants were received centrally in the hall. When they came in, they were asked not to talk to the other participants about the focus group, SAFe, or distributed working for the duration of the focus group, except when asked to do so during the session. The participants were all welcomed and walked in by one of the facilitators.
Introduction
Peter did a plenary introduction, and the participants and facilitators introduced themselves. One of the participants was not present yet; however, given the tight schedule, the introduction was done without this participant. By the time the first exercise was explained, all participants were present.
Peter explained the first exercise, after which he answered some questions. Once the exercise was clear, the group was split in two. The three participants that had previously been involved with the research stayed in the room with Hanneke (group Hanneke) to execute the first exercise. The other 8 went with Peter to the other room (group Peter) to execute the first exercise.
Both groups started with the setup as presented in Table 134, a picture is provided in Figure 75.
Table 134: Practitioner focus group - start setup round 1 - groups
Figure 75: Practitioner focus group - picture start setup round 1 - groups
Both groups did the exercise; despite the difference in size, both progressed at roughly the same rate. The results of group Hanneke are presented in Table 135 and Figure 76, and the results of group Peter in Table 136 and Figure 77.
Table 135: Practitioner focus group - result round 1 - group Hanneke
Figure 76: Practitioner focus group - picture result round 1 - group Hanneke
Figure 77: Practitioner focus group - picture results round 1 - group Peter
After both groups had finished, the results of both groups were combined, which created a new starting point. Items that both groups classified the same were put in the corresponding column; this resulted in the start setup as presented in Table 137 and Figure 78.
Table 137: Practitioner focus group - start setup round 1 - plenary
Figure 78: Practitioner focus group - picture start setup round 1 - plenary
Peter explained the exercise, after which the group started. As the participants were asked to provide insight based on their practical experience, implementation differences made it hard to reach agreement on some items. Because of this, the discussions of the elements took more time than was available. Therefore, Hanneke came up with the idea to use a lean coffee technique, in which the participants discuss an item for 1 minute and then decide whether to add another minute or conclude the item. This way the processing speed was increased, so that the group stayed within the (extended) timebox.
When items are viewed from theory, the theory serves as a single truth against which to validate, which helps to resolve differences and reach agreement. When items are viewed from practice, however, differences stem from different implementations; in that case both views are correct, with the result that reaching agreement is not possible. This happened during the focus group: for five items the participants stated that no agreement could be reached, as the answer depends on the implementation. These five items were: Release Management, DevOps, system team, Release any time, and Customer. The result of the exercise is presented in Table 138 and Figure 79.
Table 138: Practitioner focus group - result round 1 - plenary
Hanneke explained the next exercise, dot voting, after which the participants wrote down their votes
individually. No interaction occurred during this time.
While Hanneke calculated the votes to determine the top 3 of likelihood and impact, Peter explained
the next exercise. The votes for each element can be found in Table 139. The ranking on likelihood can
be found in Table 140, and the ranking on impact in Table 141. The individual votes of the participants
can be found in Appendix Z.
Table 139: Practitioner focus group - result of round 2 - dot voting individually
4. Implementing 1-2-3 13 19
17. Release Train Engineer 13 15
32. Feature 11 16
25. Shared Services 17 6
33. Enabler 3 3
19. Product Management 2 9
20. Business Owners 2 3
26. User Experience 2 2
Figure 80: Practitioner focus group - picture result round 2 – combined ranking
Collect individual experience (round 3)
After Peter explained the exercise and the ranking was presented, the participants wrote down their solutions to the 5 elements that resulted from the combined ranking. The solutions that each participant came up with can be found in Appendix AA.
The participants were divided into two groups. Within each group, the solutions of the individuals
were discussed, based on this discussion the two best solutions per element were formulated. These
solutions were a combination of the ideas of the individuals. Each solution with corresponding
element was written down on a separate post-it to be used in the next round. The results of the groups
can be found in Appendix BB.
After Peter explained the exercise, a delegate from each group presented the solutions of their group and put the post-it with the solution on the wall under the corresponding element. This was done one at a time: first group A presented a solution, then group B, and so on. Before moving to the next solution, it was asked whether the solution was clear to all participants. Similar solutions were grouped together based on group consensus. At the start, the 5 elements were put on the wall with no post-its under them, as presented in Table 142. After all solutions were presented, the solutions were numbered for the dot voting in the next exercise. The result of this exercise can be found in Table 143 to Table 147, and in Figure 81 to Figure 85.
Table 142: Practitioner focus group - start setup round 3 - plenary
16. PI planning 15. Inspect & Adapt 4. Implementing 1-2-3 32. Feature 25. Shared Services
Table 143: Practitioner focus group - result round 3 - solutions PI planning
Number Solution
1 Features ready before PI planning
2 Minimal 1 x period PI Planning on-site with all teams
3 Good communication tools
Table 144: Practitioner focus group - result round 3 - solutions Inspect & Adapt
Number Solution
1 Good communication tools
2 Divide the topics over the locations
3 First I&A meeting on 1 location
Table 145: Practitioner focus group - result round 3 - solutions Implementing 1-2-3
Number Solution
1 Have 1 team to coordinate the trainings & implementation
2 Coaches know culture & problems of location
3 Strong vision promoted top down
Table 146: Practitioner focus group - result round 3 - solutions Feature
Number Solution
1 Keep coordinating dependencies (Scrum of Scrums)
2 Have common understanding on features
3 Make sure that each feature has 1 owner to manage dependencies
4 Small features
Table 147: Practitioner focus group - result round 3 - solutions Shared Services
Number Solution
1 Distribute features by shared service impact (focus areas)
2 Services give commitment
3 1 x per period visit each team visibility
4 Involvement during planning events
Figure 81: Practitioner focus group - picture result round 3 - solutions PI planning
Figure 82: Practitioner focus group - picture result round 3 - solutions Inspect & Adapt
Figure 83: Practitioner focus group - picture result round 3 - solutions Implementing 1-2-3
Figure 84: Practitioner focus group - picture result round 3 - solutions Feature
Figure 85: Practitioner focus group - picture result round 3 - solutions Shared Services
Dot voting (round 4)
Hanneke explained the last exercise, dot voting, after which the participants wrote down their votes individually. During this exercise no interactions between the participants occurred. However, some participants found the presented scales difficult to reason with; therefore, some extra explanation of the scales was provided to these participants, and this explanation was then repeated to all participants.
While Hanneke was calculating the results of the dot voting, Peter asked the participants an additional question: “You were asked to brainstorm on solutions; which of these solutions are actually used in practice, and which are just an idea but not realizable?”. The participants responded that all solutions were used in practice, except those that involved flying, due to the costs.
The results of the votes can be found in Table 148 to Table 152. Ranking on difficulty can be found in
Table 153 to Table 157, and ranking on impact can be found in Table 158 to Table 162. The individual
votes can be found in Appendix CC.
Table 148: Practitioner focus group - result round 4 - dot voting individually on PI planning
Table 149: Practitioner focus group - result round 4 - dot voting individually on Inspect & Adapt
Table 150: Practitioner focus group - result round 4 - dot voting individually on Implementing 1-2-3
Table 152: Practitioner focus group - result round 4 - dot voting individually on Feature
Table 153: Practitioner focus group - result round 4 - solutions PI planning ranked on difficulty
Table 154: Practitioner focus group - result round 4 - solutions Inspect & Adapt ranked on difficulty
Table 155: Practitioner focus group - result round 4 - solutions Implementing 1-2-3 ranked on difficulty
Table 156: Practitioner focus group - result round 4 - solutions Feature ranked on difficulty
Table 157: Practitioner focus group - result round 4 - solutions Shared Services ranked on difficulty
2 2. Services give commitment 11
3 1. Distribute features by shared service impact (focus areas) 9
4 3. 1 x per period visit each team visibility 8
Table 158: Practitioner focus group - result round 4 - solutions PI planning ranked on impact
Table 159: Practitioner focus group - result round 4 - solutions Inspect & Adapt ranked on impact
Table 160: Practitioner focus group - result round 4 - solutions Implementing 1-2-3 ranked on impact
Table 161: Practitioner focus group - result round 4 - solutions Feature ranked on impact
Table 162: Practitioner focus group - result round 4 - solutions Shared Services ranked on impact
Closing
Everyone was thanked for their time and cooperation with the studies, and a small gift was presented to the participants as a thank-you.
242
Appendix Z Practitioner focus group: individual votes round 2
Table 166: Practitioner focus group - votes on elements - Practitioner 1
Table 171: Practitioner focus group - votes on elements - Practitioner 6
16. PI planning 3 3
15. Inspect & Adapt 3 2
6. Communities of Practice
4. Implementing 1-2-3
17. Release Train Engineer 2 2
32. Feature 1 1
25. Shared Services 2 2
33. Enabler
19. Product Management 1 2
20. Business Owners
26. User Experience 1 1
Table 177: Practitioner focus group - 16. PI planning - individual solutions
Expert Solution
Practitioner 1 Iedereen invliegen
Practitioner 1 Binnen trein feature kampen maken
Practitioner 1 Microsoft Halo
Practitioner 1 Vaker & korter doen
Practitioner 2 Betere voorbereiding feature niveau
Practitioner 2 Increase common understanding of Value Stream by teammembers
Practitioner 2 Train RTE
Practitioner 2 Train Feature Engineers
Practitioner 2 Langere PI planning
Practitioner 2 Meer reconciliatie tussen teams in PI planningdag
Practitioner 2 Tussentijdse demo's in PI planningdag
Practitioner 3 Make sure all possible features are clear to everyone upfront
Practitioner 3 Have preliminary meeting in the weeks before PI day
Practitioner 3 Foresee needed infrastructure & test upfront
Practitioner 4 Good audio/video
Practitioner 4 RTE per site
Practitioner 4 Aligned Tooling for feature/ustracking, program board
Practitioner 5 Verhoog reisbudget: % van teams laten reizen
Practitioner 5 Video conferencing
Practitioner 5 Tooling voor PI planning
Practitioner 6 Starten op 1 lokatie, minstens 2 PI's
Practitioner 6 Goede tooling comm & planning / Kanban Board
Practitioner 7 Goede communicatiemiddelen
Practitioner 7 Mensen invliegen om het samen te doen
Practitioner 7 Mensen tijdelijk onsite halen om de PI planning min. 1 x onsite mee te maken
Practitioner 8 Collaboration - tooling
Practitioner 8 Take more time to prepare alignments
Practitioner 9 Overkoepelend overleg plannen waar afhankelijkheden gemanaged worden en
prioriteiten bepaald
Practitioner 10 Invliegen voor PI planning (duur!)
Practitioner 10 Perfecte videofaciliteiten
Practitioner 10 Professionele moderator
Practitioner 10 Digitale middelen / white boards ?
Practitioner 11 PI-planning op 1 locatie, 'vlieg' mensen in
248
Practitioner 11 Deel van PI met hele trein, mbv digitale communicatiemiddelen, deel van PI per
locatie
Practitioner 11 Aantal mini-PI's en daarna de grote volledige PI met hele trein of
vertegenwoordigers v/d locaties
Table 178: Practitioner focus group - 15. Inspect & Adapt - individual solutions
Name Solution
Practitioner 1 Iedereen invliegen
Practitioner 2 Logisch gevolg van verbetering op PI planningvlak
Practitioner 3 Involve everyone in this process
Practitioner 4 Good audio/video
Practitioner 4 RET/ organiser per site
Practitioner 4 Well trained facilitators
Practitioner 5 Zie PI planning (combineren)
Practitioner 6 Starten op 1 lokatie, minstens 2 PI's
Practitioner 6 Goede tooling comm
Practitioner 7 Goede communicatiemiddelen
Practitioner 7 Mensen invliegen om het samen op 1 lokatie te doen
Practitioner 7 Verschillende onderdelen van I&A door verschillende teams/locaties te laten
doen
Practitioner 8 Collaboration tooling
Practitioner 10 Invliegen (combi PI-planning)
Practitioner 10 Video faciliteiten
Practitioner 10 Wisslende locatie voor uitzending (rouleren)
Practitioner 11 I&A per locatie, delen uitkomsten door vertegenwoordigers van elke locatie
Table 179: Practitioner focus group - 4. Implementing 1-2-3 - individual solutions
Name Solution
Practitioner 1 Iedereen invliegen
Practitioner 1 Treden regionaal splitsen
Practitioner 1 Competing ART naast bestaande keten
Practitioner 3 Define implementation plan with roles and responsibility
Practitioner 3 Align different steps with all parties
Practitioner 4 Good trainers at all sites
Practitioner 4 Engaged leadership at all sites
Practitioner 4 Train all sites same timeframe
Practitioner 5 Zorg dat Key roles (bv SPC) dezelfde training op zelfde plaats en tijd krijgen
Practitioner 6 Starten op 1 lokatie, minstens 2 PI's
Practitioner 6 Op elke lokatie een Agile coach, die elkaar kennen, vertrouwen en alignment
kunnen vinden in samenwerking
Practitioner 7 Implementing Guild opzetten - Community of Impl. Practice
Practitioner 7 Automated testing & continuous deployment implementeren
Practitioner 8 Rotating teams & roles
Practitioner 8 Team x period local before going distributed
Table 180: Practitioner focus group - 32. Feature - individual solutions
Name Solution
Practitioner 1 Invliegen
Practitioner 1 Aantal teams beperken
Practitioner 1 Goede refinement
Practitioner 1 PI Planning
Practitioner 2 Train feature engineers in "Cultural Aspects"
Practitioner 2 Extend time spent "per feature" during PI planning
Practitioner 3 Make sure scope of feature is clear
Practitioner 3 Define input feature on each part
Practitioner 3 Define roles different people
Practitioner 3 Make correct stories and define owner
Practitioner 3 Make sure all stories are defined
Practitioner 3 Visit each others Daily Scrums
Practitioner 4 Clear acceptance criteria
Practitioner 4 Feature should be ready before planning it
Practitioner 4 Small features
Practitioner 5 Tooling (shared)
Practitioner 5 Extra aandacht voor scherpe en "just enough" acceptance criteria
Practitioner 6 Sharing tooling voor zowel communicatie als werkverdeling & planning
Practitioner 7 Goede Scrum of Scrums met goede comm. middelen
Practitioner 7 Goede afspraken maken wat ownership van Feature inhoudt
Practitioner 7 Goede comm.tools
Practitioner 8 Collaboration tooling
Practitioner 8 Extra Scrum of Scrums sessions
Practitioner 9 BIA's laten bepalen en werkpakketten bij BO-ers neerleggen
Practitioner 10 Afstemming tussen teams schedulen, ook tussen demo's
Practitioner 10 Encourage directe contacten tussen teams
Practitioner 11 Maak features zo klein mogelijk, zodat ze door ze door teams op 1 locatie kunnen
worden uitgevoerd
Table 181: Practitioner focus group - 25. Shared Services - individual solutions
Name Solution
Practitioner 1 Feature kamp bouwen met same focus
Practitioner 1 IBM Watson
Practitioner 1 Vliegen
Practitioner 1 Tijdelijk meedraaien in teams
Practitioner 2 Increase scrum practices in these teams
Practitioner 3 Make sure contact persons are known to all
Practitioner 3 Define and document Shared Services
Practitioner 4 Good agreements
Practitioner 4 Involvement during planning events
Practitioner 5 Zoals voor alle distributed teams zou moeten gelden: zorg voor "mental
closeness" (Jurgen Appelo) pas tools toe waarbij remote werkers elkaar beter leren
kennen
Practitioner 5 Verhoog reisbudget: shared service mensen brengen fysiek tijd door op verschillende
lokaties
Practitioner 6 Op alle lokaties 1 PI samenzitten
Practitioner 7 Shared Services team moet zich bij alles goed inleven in de ervaring/wens van
verschillende locaties
Practitioner 8 Collaboration tooling
Practitioner 9 Duidelijke afspraken maken waar te vinden hoe te onderhouden + gebruiken
Table 182: Practitioner focus group - 16. PI planning - individual solutions - translated
Expert Solution
Practitioner 1 Fly everyone to one location
Practitioner 1 Create feature camps within the train
Practitioner 1 Microsoft Halo
Practitioner 1 Do the PI planning more often with smaller iterations
Practitioner 2 Better preparation on feature level
Practitioner 2 Increase common understanding of Value Stream by team members
Practitioner 2 Train RTE
Practitioner 2 Train Feature Engineers
Practitioner 2 Longer PI planning
Practitioner 2 More reconciliation between teams during PI planning
Practitioner 2 Interim demo’s during PI planning
Practitioner 3 Make sure all possible features are clear to everyone upfront
Practitioner 3 Have preliminary meeting in the weeks before PI day
Practitioner 3 Foresee needed infrastructure & test upfront
Practitioner 4 Good audio/video
Practitioner 4 RTE per site
Practitioner 4 Aligned Tooling for feature/ustracking, program board
Practitioner 5 Increased travel budget: % of the teams travel
Practitioner 5 Video conferencing
Practitioner 5 Tooling for PI planning
Practitioner 6 Start at 1 location, during at least 2 PI’s
Practitioner 6 Good tooling for communication & planning / Kanban Board
Practitioner 7 Good communication tooling
Table 183: Practitioner focus group - 15. Inspect & Adapt - individual solutions - translated
Name Solution
Practitioner 1 Fly everyone to one location
Practitioner 2 Logical consequence of improvements on PI planning
Practitioner 3 Involve everyone in this process
Practitioner 4 Good audio/video
Practitioner 4 RET/ organizer per site
Practitioner 4 well trained facilitators
Practitioner 5 See PI planning (combine)
Practitioner 6 Start at 1 location, during at least 2 PIs
Practitioner 6 Good tooling for communication
Practitioner 7 Good communication tooling
Practitioner 7 Fly everyone to one location to let them do it together
Practitioner 7 Let different parts of I&A be done by different teams/locations
Practitioner 8 Collaboration tooling
Practitioner 10 Fly everyone to one location (combine with PI-planning)
Practitioner 10 Video tooling
Practitioner 10 Rotate location
Practitioner 11 I&A per location, share results by representatives of each location
Table 184: Practitioner focus group - 4. Implementing 1-2-3 - individual solutions - translated
Name Solution
Practitioner 1 Fly everyone to one location
Practitioner 1 Split based on region
Practitioner 1 Competing ART next to existing chain
Practitioner 3 Define implementation plan with roles and responsibility
Practitioner 3 Align different steps with all parties
Practitioner 4 Good trainers at all sites
Practitioner 4 Engaged leadership at all sites
Practitioner 4 Train all sites same timeframe
Practitioner 5 Ensure that key roles (e.g. SPC) get the same training at the same place and time
Practitioner 6 Start co-located at 1 location, for a duration of at least 2 PIs
Practitioner 6 An Agile coach on each location. These coaches should know and trust each other,
and be able to align their work
Practitioner 7 Set up an implementation guild - Community of Implementation Practice
Practitioner 7 Implement automated testing & continuous deployment
Practitioner 8 Rotating teams & roles
Practitioner 8 Team x period local before going distributed
Practitioner 8 Collaboration tooling
Practitioner 9 If there are no dependencies, DevOps teams can do OTAP (DTAP) tests & implementation
themselves -> the infrastructure should support this
Practitioner 10 Training on one location for new train
Practitioner 10 Circle the trainers around the locations
Practitioner 10 Don’t use video to train!
Practitioner 11 SPCs should be trained at 1 location during the same training
Practitioner 11 SPCs know the culture & problems of the location/region where they are implementing
SAFe
Table 185: Practitioner focus group - 32. Feature - individual solutions - translated
Name Solution
Practitioner 1 Fly everyone to one location
Practitioner 1 Limit the number of teams
Practitioner 1 Good refinement
Practitioner 1 PI Planning
Practitioner 2 Train feature engineers in "Cultural Aspects"
Practitioner 2 Extend time spent "per feature" during PI planning
Practitioner 3 Make sure scope of feature is clear
Practitioner 3 Define input feature on each part
Practitioner 3 Define roles different people
Practitioner 3 Make correct stories and define owner
Practitioner 3 Make sure all stories are defined
Practitioner 3 Visit each others Daily Scrums
Practitioner 4 Clear acceptance criteria
Practitioner 4 Feature should be ready before planning it
Practitioner 4 Small features
Practitioner 5 Tooling (shared)
Practitioner 5 Extra focus on well-defined and “just enough” acceptance criteria
Practitioner 6 Shared tooling for communication, workload division and planning
Practitioner 7 Good Scrum of Scrums with good communication tools
Practitioner 7 Make good agreements on what ownership of the feature is
Practitioner 7 Good communication tools
Practitioner 8 Collaboration tooling
Table 186: Practitioner focus group - 25. Shared Services - individual solutions - translated
Name Solution
Practitioner 1 Create a feature camp with the same focus
Practitioner 1 IBM Watson
Practitioner 1 Fly everyone to one location
Practitioner 1 Join the teams temporarily
Practitioner 2 Increase scrum practices in these teams
Practitioner 3 Make sure contact persons are known to all
Practitioner 3 Define and document Shared Services
Practitioner 4 Good agreements
Practitioner 4 Involvement during planning events
Practitioner 5 As should be for all distributed teams: ensure “mental closeness” (Jurgen Appelo); use
tools to let remote workers get to know one another better
Practitioner 5 Increase travel budget: Shared Services should be physically present on the different
locations
Practitioner 6 For 1 PI co-locate on every location
Practitioner 7 The Shared Services team must always empathize with the experience/wishes of the
different locations
Practitioner 8 Collaboration tooling
Practitioner 9 Clear agreements on where to find the shared services, how to maintain them and how to use them
Appendix BB Practitioner focus group: group solutions
round 3
As most participants felt more at ease when writing in Dutch, they were allowed to write down their
answers in Dutch; these answers can be found in Table 187 to Table 191. The translated versions
can be found in Table 192 to Table 196.
Table 187: Practitioner focus group - 16. PI planning - group solutions
Group Solution
Hanneke Features ready voor de planning sessie met de distr. teams
Hanneke Good audio/video Tooling
Peter Minimal 1 x period PI Planning on-site with all teams
Peter Communication Tools
Table 188: Practitioner focus group - 15. Inspect & Adapt - group solutions
Group Solution
Hanneke Good audio/video Tooling
Hanneke Eerste op 1 locatie
Peter Comms tools
Peter Divide the I&A topics over the locations
Table 189: Practitioner focus group - 4. Implementing 1-2-3 - group solutions
Group Solution
Hanneke Aligned coaches op verschillende locaties
Hanneke Coaches kennen cultuur en specifieke problemen locatie
Peter 1 team to coordinate the trainings/Impl
Peter Strong Vision promoted top down
Table 190: Practitioner focus group - 32. Feature - group solutions
Group Solution
Hanneke Gemeenschappelijk begrip
Hanneke Small features
Peter Keep co-ordinating dependencies (Scrum of Scrums)
Peter Make sure that a Feature has 1 owner to manage dependencies
Table 191: Practitioner focus group - 25. Shared Services - group solutions
Group Solution
Hanneke Geven Commitment
Hanneke Involvement during planning events
Peter Distribute features by shared service impact (focus areas)
Table 192: Practitioner focus group - 16. PI planning - group solutions - translated
Group Solution
Hanneke Features ready before PI planning
Hanneke Good audio/video Tooling
Peter Minimal 1 x period PI Planning on-site with all teams
Peter Communication Tools
Table 193: Practitioner focus group - 15. Inspect & Adapt - group solutions - translated
Group Solution
Hanneke Good audio/video Tooling
Hanneke First I&A meeting on 1 location
Peter Communication Tools
Peter Divide the I&A topics over the locations
Table 194: Practitioner focus group - 4. Implementing 1-2-3 - group solutions - translated
Group Solution
Hanneke Aligned coaches on different locations
Hanneke Coaches know culture & problems of location
Peter 1 team to coordinate the trainings/Impl
Peter Strong Vision promoted top down
Table 195: Practitioner focus group - 32. Feature - group solutions - translated
Group Solution
Hanneke Have common understanding on features
Hanneke Small features
Peter Keep co-ordinating dependencies (Scrum of Scrums)
Peter Make sure that a Feature has 1 owner to manage dependencies
Table 196: Practitioner focus group - 25. Shared Services - group solutions - translated
Group Solution
Hanneke Services give commitment
Hanneke Involvement during planning events
Peter Distribute features by shared service impact (focus areas)
Peter Visit each team 1x per period for visibility
Appendix CC Practitioner focus group: individual votes
round 4
Table 197: Practitioner focus group - votes on solutions - Practitioner 1
4.1 Have 1 team to coordinate the trainings & implementation 1 1
4.2 Coaches know culture & problems of location 1 2
4.3 Strong vision promoted top down 1
Table 204: Practitioner focus group - votes on solutions - Practitioner 8
32.1 Keep coordinating dependencies (Scrum of Scrums) 2 1
32.2 Have common understanding on features
32.3 Make sure that each feature has 1 owner to manage dependencies 2 1
32.4 Small features 2