
Globally Distributed Agile Release Trains

Adjusting the Scaled Agile Framework® (SAFe®) for distributed


environments

Master thesis

Peter van Buul


Globally Distributed Agile Release Trains
Adjusting the Scaled Agile Framework® (SAFe®) for distributed
environments

Thesis
submitted in partial fulfilment of the
requirements for the degree of

Master of Science
in

Computer Science
Information Architecture Track
by

Peter van Buul


born in Leiden

at the Delft University of Technology,
Faculty of Electrical Engineering, Mathematics and Computer Science,
to be defended publicly on Thursday, December 15, 2016 at 16:00.

Thesis committee: Prof.dr.ir. Rini van Solingen, TU Delft, university supervisor
Dr. Georgios Gousios, TU Delft
Ir. Hendrik-Jan van der Waal, Prowareness, company supervisor

An electronic version of this thesis is available at http://repository.tudelft.nl/


Globally Distributed Agile Release Trains
Adjusting the Scaled Agile Framework® (SAFe®) for distributed
environments

Author: Peter van Buul


Student ID: 1512269
Email: petervanbuul@gmail.com

Abstract
SAFe is a framework that applies both agile and lean practices
for developing software. The current trend is that increasingly more organizations
develop their software in a globally distributed setting. Although SAFe is being
deployed in such a setting, SAFe was not originally developed for such a setting but
for a co-located setting. Therefore, this research investigates the application of
SAFe in globally distributed settings. Five problems are discovered that can be
expected when SAFe is applied in distributed settings: incorrect execution of
SAFe, language barriers, time zone differences, increased communication effort, and
inefficient communication tools. Given these problems, four SAFe elements are
identified that can be expected to fail when SAFe is applied in distributed settings:
the PI planning, the inspect & adapt meeting, the DevOps team, and the system team.
Finally, a customization of SAFe for distributed settings is proposed. This
customization is focused on solving the discovered problems for the elements
identified to fail.

Thesis committee:

Prof.dr.ir. Rini van Solingen, TU Delft, university supervisor
Dr. Georgios Gousios, TU Delft
Ir. Hendrik-Jan van der Waal, Prowareness, company supervisor
Preface
This thesis presents the research that is conducted as part of the Information Architecture Master
track of the Computer Science programme of the Delft University of Technology. A workplace during
this research was provided by Prowareness. The research is focused on the Scaled Agile Framework®
(SAFe®) in globally distributed environments. SAFe and Scaled Agile Framework are registered
trademarks of Scaled Agile, Inc., referred to in [1]¹.

SAFe is a framework that applies both agile and lean practices for developing software. This
framework is publicly available and widely used in mainly the IT industry. The current trend is that
increasingly more organizations develop their software in a globally distributed setting. Although SAFe
is being deployed in such a setting, SAFe was not originally developed for such a setting, but for a co-
located setting. Therefore, it is interesting to research the application of SAFe in globally distributed
settings.

This research has discovered that the following five problems can be expected when SAFe is applied
in a distributed setting: incorrect execution of SAFe, language barriers, time zone differences,
increased communication effort, and inefficient communication tools.

Given these five problems, this research identified that the following four elements of SAFe can be
expected to fail when SAFe is applied in a distributed setting: the PI planning, the inspect & adapt
meeting, the DevOps team, and the system team.

This research is concluded by proposing a customization of SAFe for distributed settings. This
customization is focused on solving the discovered problems for the elements identified to fail. The PI
planning and inspect & adapt can be customized by having a Release Train Engineer at each location,
using a video conferencing system, using a digital program board & digital PI objectives, and extending
the PI planning over multiple days if needed. The DevOps team can be customized by enabling the
team to travel regularly to the other locations. Finally, the system team can be distributed over all
locations.

First, and foremost, I would like to thank my supervisor, Rini van Solingen, for guiding me in giving
direction to this research, and providing critical feedback during this research. Without his guidance
and feedback, the systematic approach and critical view in this report would not have been possible.
Second, I would like to thank my supervisor at Prowareness, Hendrik-Jan van der Waal, for giving his
insights on SAFe, and his critical feedback on the results of this research. Third, because most of my
time I have spent working at Prowareness, I would like to thank all my colleagues for always being
there to help me, and for being a sparring partner to discuss ideas. Fourth, I would like to thank the
participants of the focus groups for their contributions to this research.

Last, but not least, I would like to thank my girlfriend, my family, and my friends for always supporting
me during this project. Without these loving and caring people, taking an interest in my progress,
reviewing my work, and helping me to finish the project, this result would not have been possible.

Finally, I present to you my thesis. I hope you will enjoy reading it.

Peter van Buul
Leiden, the Netherlands
December 1, 2016

¹ It should be noted that all information regarding SAFe in this thesis is as interpreted by the author, and verified with SAFe experts. Any information is therefore not officially supported by Scaled Agile, Inc., unless quoted directly from Scaled Agile, Inc., in which case this is specified.

Table of Contents
Chapter 1 Introduction ........................................................................................................................ 4
1.1. Context .................................................................................................................................... 4
1.2. Research questions ................................................................................................................. 6
1.3. Reading guide.......................................................................................................................... 7
Chapter 2 Research background .......................................................................................................... 8
2.1. Globally Distributed Software Engineering............................................................................. 8
2.2. SAFe......................................................................................................................................... 9
2.3. Agile scaling frameworks ...................................................................................................... 14
2.4. Research scope ..................................................................................................................... 18
Chapter 3 Research methodology ..................................................................................................... 21
3.1. Research approach................................................................................................................ 21
3.2. Systematic Literature Review ............................................................................................... 22
3.3. Multiple informant methodology ......................................................................................... 24
3.4. Focus group ........................................................................................................................... 26
Chapter 4 Distributed SAFe problems ............................................................................................... 30
4.1. Problems of Distributed Agile Development: Systematic Literature Review ....................... 30
4.2. Distributed SAFe problems: Multiple informant methodology ............................................ 37
Chapter 5 Identification of failing SAFe elements ............................................................................. 42
5.1. Identification based on theory: Literature ............................................................................ 42
5.2. Identification based on theory: Expert focus group ............................................................. 43
5.3. Identification based on practice: Practitioner focus group .................................................. 49
5.4. Result of triangulation .......................................................................................................... 56
Chapter 6 Customizations of SAFe..................................................................................................... 59
6.1. Customizations of SAFe based on theory: Literature ........................................................... 59
6.2. Customizations of SAFe based on practical experience: Practitioner focus group .............. 61
6.3. Combining theory and practical experience ......................................................................... 65
Chapter 7 Discussion.......................................................................................................................... 68
7.1. Answers to the research questions....................................................................................... 68
7.2. Limitations............................................................................................................................. 69
7.3. Reflection .............................................................................................................................. 73
7.4. Recommendations for future research................................................................................. 76
Chapter 8 Conclusion ......................................................................................................................... 78
8.1. Summary ............................................................................................................................... 78
8.2. Conclusion ............................................................................................................................. 78

Bibliography .......................................................................................................................................... 81
List of tables .......................................................................................................................................... 93
List of figures ......................................................................................................................................... 98
Appendix A Systematic Literature Review protocol ....................................................................... 100
Appendix B Multiple informant protocol ....................................................................................... 102
Appendix C Expert focus group: protocol ...................................................................................... 103
Appendix D Practitioner focus group: protocol .............................................................................. 109
Appendix E Multiple informant execution ..................................................................................... 115
Appendix F List of SAFe elements .................................................................................................. 117
Appendix G Description of the Agile Release Train elements ........................................................ 120
Appendix H Rejected studies .......................................................................................................... 126
Appendix I Problems and challenges of accepted studies ............................................................ 128
Appendix J Result of SLR reviews .................................................................................................. 133
Appendix K Problem groups ........................................................................................................... 135
Appendix L Ungrouped problems .................................................................................................. 142
Appendix M Expert focus group: invitation letter ........................................................................... 143
Appendix N Expert focus group: attachment ................................................................................. 145
Appendix O Expert focus group: execution .................................................................................... 156
Appendix P Expert focus group: individual votes round 2 ............................................................. 177
Appendix Q Expert focus group: consequences per element round 3 ........................................... 179
Appendix R Expert focus group: individual votes round 4 ........................................ 185
Appendix S Expert focus group: visualization dot voting consequences: round 4 ........................ 190
Appendix T Survey.......................................................................................................................... 198
Appendix U Results of the survey ................................................................................................... 205
Appendix V Practitioner focus group: invitation letter .................................................................. 219
Appendix W Practitioner focus group: forms .................................................................................. 220
Appendix X Practitioner focus group: execution ........................................................................... 225
Appendix Y Practitioner focus group: categorization participants ................................................ 242
Appendix Z Practitioner focus group: individual votes round 2 .................................................... 243
Appendix AA Practitioner focus group: individual solutions round 3........................................... 248
Appendix BB Practitioner focus group: group solutions round 3 ................................................. 255
Appendix CC Practitioner focus group: individual votes round 4 ................................................ 257

Chapter 1 Introduction

In this chapter a short introduction to the research is given. First, the context for this research is
presented. Second, the research questions are described. Finally, a reading guide is given.

1.1. Context
This thesis presents the research that is conducted as part of the Information Architecture Master
track of the Computer Science programme of the Delft University of Technology. A workplace during
this research was provided by Prowareness. This research is done in the research chair on Global
Software Engineering, in the Software Engineering group of the Software Technology department in
the Faculty Electrical Engineering, Mathematics and Computer Science of Delft University of
Technology.

The research presented in this thesis is on the Scaled Agile Framework® (SAFe®) in globally distributed
environments. SAFe and Scaled Agile Framework are registered trademarks of Scaled Agile Inc.
referred to in [1]². To provide context, first, distributed is briefly described. Second, a brief overview of
SAFe is presented. Third, the original way of software development is briefly sketched. Finally, the
agile scaling frameworks that are replacing this original way are briefly mentioned. A more elaborate
description of the agile scaling frameworks and SAFe can be found in Chapter 2.

1.1.1. Distributed
Because of the globalization of business in the 21st century, increasingly more companies develop
software in a globally distributed setting [2], [3], [4], [5], [6], [7], and [8]. When working in a globally
distributed environment, different problems and challenges can occur regarding, among other things,
communication, coordination, and time zone differences, according to [3] and [9]. Despite these
problems, the use of fully distributed teams can be successful, as presented in [10], [11], and [12].
However, the research of [10], [11], and [12] is focused on Scrum, with a small team. SAFe, on which
the research presented in this thesis focuses, is executed with multiple teams, possibly having
different problems.

1.1.2. SAFe
In this section, SAFe is briefly described. A full overview of SAFe can be found at the SAFe website
provided by Scaled Agile, Inc. www.scaledagileframework.com [1]. SAFe is a framework that applies
both agile and lean practices for developing software. This framework is publicly available and widely
used in mainly the IT industry. In the 10th Annual State of Agile report by VersionOne from 2015 [13],
27% of the respondents name SAFe as the method to scale agile. This ranks SAFe as the second most
used scaling method and the most used scaling framework.

As stated previously, the current trend is that increasingly more organizations develop their software
in a globally distributed setting. Although SAFe is being deployed in such a setting, as seen in multiple
case studies [14], [15], [16], and [17], SAFe was not originally developed for such a setting, but for a
co-located setting. Therefore, it is interesting to investigate the application of SAFe in globally
distributed settings.

² It should be noted that all information regarding SAFe in this thesis is as interpreted by the author, and verified with SAFe experts. Any information is therefore not officially supported by Scaled Agile, Inc., unless quoted directly from Scaled Agile, Inc., in which case this is specified.

The main workflow in SAFe for delivering value to a customer is using Value Streams. Deployment of
these Value Streams is done using Agile Release Trains which are continuously delivering new versions
of a solution to the customer. The Agile Release Train is a team of teams. In the Agile Release Train,
the agile teams are aligned via a single vision, roadmap, and program backlog. The Agile Release Train
iterates in a so-called program increment, PI, which lasts 8 to 12 weeks during which there are 4 to 6
two-week team iterations. During the team iterations the teams continuously add value to the
solution by finishing fully tested stories. At the end of each team iteration the integrated solution is
demoed.

The SAFe website states: “Along with the various Agile methods, the Manifesto provides the Agile
foundation for effective, empowered, self-organizing-teams. SAFe extends this foundation to the level
of teams of teams” [18]. The Agile Manifesto [19] is key to SAFe, and reads as follows:

We are uncovering better ways of developing software by doing it and helping others do it.
Through this work we have come to value:

Individuals and interactions over processes and tools

Working software over comprehensive documentation

Customer collaboration over contract negotiation

Responding to change over following a plan

That is, while there is value in the items on the right, we value the items on the left more.

Figure 1: Agile Manifesto from [19]

1.1.3. Traditional project management frameworks
To put SAFe in context, the traditional waterfall software development process is presented, as well as
traditional project management frameworks such as PRINCE2 and Rational Unified Process (RUP).
These traditional frameworks use upfront planning for projects. This upfront planning, however, does
not work if the environment changes. Though both RUP and PRINCE2 can be used in agile projects,
these frameworks were not designed as such [20], [21].

In his article in 1970 [22], Royce describes the waterfall model as a model that was at that time widely
used in the manufacturing industry. The waterfall model works in 7 steps, which are executed one
after another, starting with the creation of system requirements and finishing with deployment to
operations. Strikingly, in his article Royce expresses his concerns about the waterfall model: “I believe
in this concept, but the implementation described above is risky and invites failure.”. To counter this,
he proposes feedback and interaction between the steps. Though this is a good idea, no case has been
found where this is done in practice. In practice, there are rigid agreements without feedback and
interaction between the steps.

The upfront planning from traditional frameworks is symbolized in the PRINCE2 acronym which stands
for PRojects IN Controlled Environments [23]. This controlled environment only changes when the
controlling entity changes the environment according to plan, while agile frameworks such as SAFe
are designed to respond to unexpected change [19].

RUP is a traditional approach, of which the goal is: “to produce, within a predictable schedule and
budget, high-quality software that meets the needs of its end users.” [24]. RUP prescribes processes
and follows a predefined plan which is not in line with the Agile Manifesto. Moreover, the sixth best
practice in RUP is: “Control changes to software” [25], controlling the response to change rather than
embracing change.

Though none of the traditional frameworks explicitly considers a distributed setting, it seems they can
all be used in distributed development. In PRINCE2, delivery can be done by a supplier which is not
necessarily co-located. RUP and waterfall contain steps, hand overs or toll gates in which the project
is given to the next team based on a set of predefined requirements. Because of these hand over
points the next team could be located in a different location.

1.1.4. Agile scaling frameworks


Agile scaling frameworks are currently replacing these traditional frameworks, because the upfront
planning of the traditional frameworks cannot handle unplanned changes well enough. Besides
SAFe, there are multiple agile scaling frameworks available which are designed to handle change. Four
frequently used frameworks are Large-Scale Scrum (LeSS) [26], Disciplined Agile Delivery (DAD) [27],
Nexus [28], and Spotify [29]. An elaborate description of these frameworks is provided in Section 2.3.

Each of these four frameworks scale the development department, however this scaling ends there.
SAFe scales reasoning with Value Streams which, if needed, include the business and other
stakeholders. This enables SAFe to scale on an organizational level rather than only on the IT level.

1.2. Research questions


The goal of the research presented in this thesis is to investigate if and how the Scaled Agile
Framework (SAFe) should be adjusted for distributed settings. To this end the following research
questions have been formulated.

1. What problems can be expected when SAFe is applied in distributed settings?


2. What SAFe elements can be expected to fail when applied in distributed settings?
3. What would SAFe look like, when customized for distributed settings?

These questions are answered in sequence, as the previous answer provides the required input for
the next answer. The answer to the first question results in distributed SAFe problems. Based on these
problems the answer to the second research question identifies which SAFe elements can be expected
to fail. To answer the third research question, these failing elements serve as input on how to
customize SAFe. When problems are identified for this customized version it is possible to repeat this
process with this input, starting again with research question one. Thus, a cycle is created that can be
used to continuously improve distributed SAFe, as visualized in Figure 4. However, in this research
each step is executed only once, and the customized version remains a theoretical proposal for now.

[Figure: RQ 1 Distributed SAFe problems → RQ 2 Failing SAFe elements → RQ 3 Customized distributed SAFe, feeding back into RQ 1]
Figure 4: Research improvement cycle

1.3. Reading guide
This thesis presents the result and approach taken for the research. The research background is given
in Chapter 2. The approach is described in detail in Chapter 3. Substantiation of the results and the
data is presented in Chapter 4 to Chapter 6. Discussion on both the results and approach can be found
in Chapter 7. The conclusions of the research can be read in Chapter 8.

Chapter 2 presents the research background for the research. First, different definitions of distributed
are presented. Second, SAFe is explored and a high-level description of the framework is given, as
interpreted by this author, and verified with SAFe experts. Third, to give some perspective the
different agile scaling frameworks are described. After this exploration, the scope of the research is
set to fit within the timeframe of the master thesis project.

Chapter 3 describes the approach taken and the methodologies that are used to answer the research
questions. For each methodology, the conditions, strengths, limitations, and protocols are presented.

Chapter 4 presents an answer to the first research question: “What problems can be expected when
SAFe is applied in distributed settings?”. First, the Distributed Agile Development problems are
presented. Second, the distributed SAFe problems are presented.

Chapter 5 gives an answer to the second research question: “What SAFe elements can be expected to
fail when applied in distributed settings?”. First, failing elements are identified based on the
distributed SAFe problems that were discovered in the previous chapter. Second, failing elements are
identified in an expert focus group. Third, failing elements are identified in a practitioner focus group.
Finally, these 3 identifications are combined.

Chapter 6 describes an answer to the third research question: “What would SAFe look like, when
customized for distributed settings?”. First, solutions are identified based on the failing elements
discovered in the previous chapter. Second, the solutions identified in the practitioner focus group are
presented. Finally, a proposal on how to customize SAFe is presented.

Chapter 7 discusses the results. First, the extent to which the research questions are answered is
presented. Second, the limitations of the research methodologies are discussed. Third, the answers to
the research questions are reflected upon. Finally, recommendations for future research are
presented.

Chapter 8 gives a summary of the research questions and presents the conclusions of the research.

Chapter 2 Research background

In this chapter the research background is presented. First, the research area of Globally Distributed
Software Development is described. After which, two different definitions of distributed are given and
the definition that is applied in this research is clarified. Second, SAFe is explored and a high-level
description of the framework is presented. Third, the different agile scaling frameworks are given.
Finally, it is described how the scope of the research is adjusted to fit within the timeframe of this
master thesis project.

2.1. Globally Distributed Software Engineering


Because of the globalization of business in the 21st century, increasingly more companies develop
software in a globally distributed setting [2], [3], [4], [5], [6], [7], and [8]. The area of research on
software development in a globally distributed setting is referred to in various ways. In [3] these are
listed as: Global software development (GSD), global software engineering (GSE), globally distributed
software engineering (GDSE), and globally distributed software development (GDSD). Common among
all these is the globally distributed setting in which SAFe is also being applied, according to case studies
[14], [15], [16], and [17].

At the same time as this trend of globalization, agile software development methodologies have
become the most used approach for software development [13]. The application of these popular
agile methods in globally distributed settings is called Distributed Agile Development, [7], [30] and
[31]. SAFe is such an agile methodology. The Systematic Literature Review, done as part of this
research, will thus focus on Distributed Agile Development.

When working in a globally distributed environment, different problems and challenges can occur
according to [3] and [9]. Examples of these problems are: difficulties with coordinating in multiple time
zones [32], [33], and [34], insufficient communication tools [7], [31], [35], and [10], and delay in
communication [6], [36], [4], and [37]. In the Systematic Literature Review, the problems of
Distributed Agile Development are reviewed.

Despite these problems, the use of fully distributed teams can be successful, as presented in [10], [11],
and [12]. The conclusion of these papers is “it is possible to create a distributed/outsourced Scrum
with the same velocity and quality as a collocated team”.

To solve the problems of distributed development, different solutions are applied, for example from [9]:
regular traveling to meet face to face [7], [31], [38], [39], [40], [32], [35], frequent communication [7],
[41], [42], [33], [35], and [10], and using different communication channels [7], [32], [39], [33], [10],
[12], and [43].

2.1.1. Definition of distributed


In [44], Thomas Allen presents the Allen Curve which describes the probability of communication
related to distance. From this research originates the critical distance of 50 meters for weekly technical
communication. Because of the advances in telecommunication, Thomas Allen revisited this research
later in [45]. His research shows that his previous conclusion holds. Therefore, the definition of
distributed based on the Allen Curve remains: when the distance between working places is more than
50 meters it is called distributed.
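As a minimal illustration, this definition can be expressed as a predicate over the pairwise distances between working places. The sketch below is an illustration only, assuming hypothetical distances and a simple threshold check; the function name and example values are not taken from the literature.

```python
# Hypothetical sketch: applying the 50-meter threshold from the Allen
# Curve as the working definition of "distributed". The function name
# and example distances are assumptions for illustration only.

ALLEN_THRESHOLD_M = 50  # critical distance for weekly technical communication [44]

def is_distributed(pairwise_distances_m):
    """A setting counts as distributed when any two working places
    are more than 50 meters apart."""
    return any(d > ALLEN_THRESHOLD_M for d in pairwise_distances_m)

print(is_distributed([30]))             # False: within the same building
print(is_distributed([30, 6_500_000]))  # True: a globally distributed pair of sites
```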

Globally distributed is different from distributed as specified by Thomas Allen. With globally
distributed, the working places are located at sites which are possibly in different countries or time
zones. Though the definition of distributed as presented by Thomas Allen does not exclude this,
problems such as time zone differences and cultural differences rarely occur over a distance of 50
meters.

In this research, a globally distributed setting is considered. Therefore, from this point on, when talking
about distributed, globally distributed is meant.

2.2. SAFe
The first version of SAFe was launched in 2011. The version considered in this research is the latest
available version of SAFe at the start of this research (SAFe 4.0), which was launched on the 3rd of
January 2016. A full overview of SAFe 4.0 can be found at the SAFe website provided by Scaled Agile,
Inc. www.scaledagileframework.com [1]. The SAFe 4.0 big picture shows almost all elements that are
in SAFe and is shown in Figure 5. Some elements are not made visible in this picture, for example the
PO sync and Scrum of Scrums.

Figure 5: Four level SAFe picture from [1]

2.2.1. Core values, SAFe principles, and lean-agile mindset


SAFe is built upon the core values and SAFe principles, which are the foundations of SAFe. Therefore,
first the core values and SAFe principles are discussed.

For organizations that want to adopt SAFe, the core values describe the culture that these
organizations need to develop. These values have to become the heart of the organization. The four
core values are:

• Alignment
• Built-in Quality
• Transparency
• Program Execution

The core value Alignment is to make sure that everyone has the same goal and vision so that they can
align on that. This explains why everyone does what they do. Built-in Quality is to make sure that
quality standards are high and maintained. This is required because “you can’t scale crappy code” [46].
Transparency is to give insight into what is being done. These insights provide data on which decisions
can be based to give direction for a project, rather than steering based on gut feeling. Lastly, Program
Execution is to focus on the program by putting the program before the team. This enables the
program to continuously deliver value.

The SAFe principles are the guidelines for decision making when working with SAFe. The nine SAFe
principles are:

1. Take an economic view


2. Apply systems thinking
3. Assume variability; preserve options
4. Build incrementally with fast, integrated learning cycles
5. Base milestones on objective evaluation of working systems
6. Visualize and limit Work In Progress, reduce batch sizes and manage queue lengths
7. Apply cadence, synchronize with cross-domain planning
8. Unlock the intrinsic motivation of knowledge workers
9. Decentralize decision-making

The lean-agile mindset is needed to support lean and agile development at scale in an entire
organization. The mindset consists of two parts, thinking lean and embracing agility. The core values,
SAFe principles and lean-agile mindset are explained in detail in [1].

2.2.2. The four levels of SAFe


With the introduction of SAFe 4.0 there are two different ways of implementing the SAFe framework,
three level SAFe and four level SAFe. The difference between these versions is an extra level: the value
stream level. The four levels in SAFe are, in order of scale: the team level, the program level, the value
stream level, and the portfolio level.

2.2.2.1. Team level


At the team level are the agile teams that build software. These agile teams use either Scrum or
Kanban, as long as they keep to the iteration length of 2 weeks. Each iteration
contains the following three meetings: iteration planning, iteration retrospective and team demo. On
a daily basis each team has a daily Scrum to synchronize teamwork. These meetings correspond to the
meetings known in Scrum.

2.2.2.2. Program level


At the program level a single Agile Release Train, or ART, is managed. An Agile Release Train contains
all necessary components, including the different agile teams, to deliver end to end value to the
customer. The only way to deliver value in SAFe is using a Value Stream. So, if a single Agile Release

Train can deliver the full solution to a customer this train spans the entire Value Stream. In this case
three level SAFe is in place.

The Agile Release Train iterates in a so-called program increment, or PI, of 8 to 12 weeks consisting of
4 to 6 two-week iterations from the team level. During each team iteration the Scrum Masters and
Product Owners meet twice, in a Scrum of Scrums and a PO sync. At the end of each team iteration
the integrated solution is demoed in the system demo done by the system team.

During each program increment all teams and persons which are part of the Agile Release Train come
together during three consecutive days for three events. First, the program increment planning (PI
planning) event during which the next program increment is planned. Second, the Inspect and Adapt
meeting during which the previous program increment is reviewed. Third, the Solution Demo, in which
the fully integrated solution is demoed. Again, these meetings correspond to the meetings known in
Scrum.

2.2.2.3. Value stream level


The value stream level is used when a single Agile Release Train cannot deliver the full solution. Then
multiple Agile Release Trains work together to deliver the full solution of a Value Stream. This is four
level SAFe. The value stream level iterates in the same cadence as the program level following the
program increment. A pre- and post-program increment planning event is added to synchronize the
different Agile Release Trains. The Solution Demo is used to demo the fully integrated solution of all
Agile Release Trains. Though the added meetings do not correspond with the meetings in Scrum, the
meetings do follow the pattern of the Scrum meetings: planning (PI planning), retrospective (Inspect
and Adapt), and demo (solution demo).

2.2.2.4. Portfolio level


At the portfolio level, the highest level of SAFe, all Value Streams supported by the organization are
handled. The portfolio level ensures that budgeting on the Value Streams is handled and that the
Value Streams realize the strategic themes of the enterprise.

2.2.3. Events
The following events are defined in the SAFe framework. These events are visualized on a timeline in
Figure 6.

• Daily event
  • Daily Scrum
• Iteration events
  • Iteration Planning
  • Scrum of Scrums
  • PO Sync
  • Team Demo
  • System Demo
  • Iteration Retrospective
• Program increment events
  • Pre-PI Planning
  • PI Planning
  • Post-PI Planning
  • Solution Demo
  • Inspect and Adapt

Figure 6: SAFe meeting timeline, created based on [1]
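As a minimal illustration of this cadence, the sketch below assumes a 10-week program increment of five two-week iterations (SAFe allows 8 to 12 weeks, i.e. 4 to 6 iterations) and prints which events fall in which week. Placing the program increment events in the final week is a simplification for illustration, not a SAFe prescription.

```python
# Minimal sketch of the SAFe event cadence, assuming a 10-week PI
# made up of five two-week iterations. Showing the PI-boundary events
# in the last week of the PI is a simplification.

ITERATION_WEEKS = 2
ITERATIONS_PER_PI = 5  # assumed here; SAFe allows 4 to 6

iteration_events = ["Iteration Planning", "Scrum of Scrums", "PO Sync",
                    "Team Demo", "System Demo", "Iteration Retrospective"]
pi_events = ["Pre-PI Planning", "PI Planning", "Post-PI Planning",
             "Solution Demo", "Inspect and Adapt"]

print("Every working day: Daily Scrum")
for i in range(1, ITERATIONS_PER_PI + 1):
    week = i * ITERATION_WEEKS
    print(f"Week {week:2}: iteration {i} -> " + ", ".join(iteration_events))
print(f"Week {ITERATIONS_PER_PI * ITERATION_WEEKS:2}: PI boundary -> " + ", ".join(pi_events))
```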

2.2.4. Flow of a trigger/idea in SAFe
To deliver value SAFe uses Value Streams. A Value Stream starts with a trigger from customers and
results in customers having a solution which adds value to the organization.

Triggers from customers start at the portfolio level and are formulated as epics. If an epic is accepted,
it is put on the portfolio backlog. Epics that are on the portfolio backlog are going to be realized by the
Value Streams. All epics are in line with the strategic themes of the organization, and therefore the
portfolio level is connected to the organization.

At the value stream level, epics that are defined on the portfolio level are split into capabilities. The
size of these capabilities is such that they can be picked up in a single program increment by the Agile
Release Trains that are part of the Value Stream.

At the program level, the epics or capabilities coming from the portfolio level or value stream level are
split into features. A feature is planned and reviewed at the program increment boundaries. These
features are split into stories which can be picked up by a team during an iteration. Teams can pick up
multiple stories during one iteration. A graphic representation of this flow is shown in Figure 7.

Figure 7: Flow of a trigger in SAFe based on [1]
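This work item hierarchy (epic → capability → feature → story) can also be sketched as a simple data model. The sketch below is an illustration of the flow only; the class and field names, and the example items, are assumptions rather than SAFe-prescribed terminology.

```python
# Illustrative data model of the flow of a trigger in SAFe: epics
# (portfolio level) split into capabilities (value stream level), which
# split into features (program level), which split into stories (team
# level). Names, fields, and example data are assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Story:            # picked up by one team within one iteration
    description: str

@dataclass
class Feature:          # planned and reviewed at PI boundaries
    name: str
    stories: List[Story] = field(default_factory=list)

@dataclass
class Capability:       # sized to fit one PI of an Agile Release Train
    name: str
    features: List[Feature] = field(default_factory=list)

@dataclass
class Epic:             # formulated from a customer trigger at portfolio level
    name: str
    capabilities: List[Capability] = field(default_factory=list)

epic = Epic("Self-service portal", [
    Capability("Account management", [
        Feature("Password reset", [Story("Reset password via email link")]),
    ]),
])
print(epic.capabilities[0].features[0].stories[0].description)
```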

2.3. Agile scaling frameworks
There are multiple other frameworks that scale agile. The company Agile Scaling maintains and
updates a matrix [47] in which many of these frameworks are described and compared. This matrix
can be found online via http://www.agilescaling.org/ask-matrix.html. Four frequently used
frameworks are Large-Scale Scrum (LeSS) [26], Disciplined Agile Delivery (DAD)³ [27], Nexus [28], and
Spotify [29]. This section describes these frameworks to give insight into what frameworks, other than
SAFe, are used in practice, and because these frameworks could offer a solution that SAFe does not.

2.3.1. Large-Scale Scrum


The goal of LeSS is “to apply Scrum to very large, multisite, and offshore product development” [48].
There are two versions of the framework, LeSS, which supports up to 10 teams, and LeSS Huge for
more teams. LeSS Huge is applied in settings up to a few thousand people spread over multiple sites.
LeSS attempts to stay similar to Scrum. Thus, LeSS uses one product backlog, one Definition of Done,
one Potentially Releasable Increment, one Product Owner (possibly multiple Area Product Owners),
and one sprint. Teams are not specialized, so teams have to communicate new insights that are gained
to other teams. The structure of LeSS is shown in Figure 8.

Figure 8: LeSS scaling model from [26]

2.3.2. Disciplined Agile Delivery


The goal of DAD is “to fill in the process gaps that Scrum purposely ignores” [27].
DAD uses a “hybrid approach which extends Scrum with proven strategies from Agile Modeling (AM),
Extreme Programming (XP), Unified Process (UP), Kanban, Lean software Development, Outside In
Development (OID) and several other methods.” [27] to scale agile. DAD uses four lifecycles which
organizations can apply according to their needs. As stated in [49], the four lifecycles are:

 An agile/basic version that extends the Scrum Construction lifecycle with proven ideas from
RUP
 An advanced/lean lifecycle

³ Note that in this research Distributed Agile Development is also discussed; this should not be confused with Disciplined Agile Delivery, which is abbreviated as DAD.

 A lean continuous delivery lifecycle
 An exploratory “Lean Startup” lifecycle

The structure of DAD is shown in Figure 9.

Figure 9: DAD scaling model from [50]

2.3.3. Nexus
“Nexus is an exoskeleton that rests on top of multiple Scrum Teams” [28]. In this way, Nexus aims to
solve the integration of the work of multiple teams into complex, working software. Nexus combines a
maximum of 10 teams in a single unit of development, called a Nexus. If multiple such units of
development are used, this is called Scaled Professional Scrum. Nexus extends Scrum with a Nexus Integration Team, the
Nexus Sprint Backlog and additional events, for example the Nexus Daily Scrum. The Nexus framework
is visualized in Figure 10.

Figure 10: Nexus scaling model from [51]

2.3.4. Spotify
Spotify scales agile using squads, tribes, chapters, and guilds. How these are related is visualized in
Figure 11. The squad is “the basic unit of development” [29], similar to a Scrum team. Multiple squads
working in related areas are called tribes. The people with similar skills within the tribe are a chapter,
for example testers. Lastly, guilds consist of all people interested in a certain topic across tribes; for example,
the automated testing guild contains testers and developers from multiple tribes.

Figure 11: Spotify scaling model from [29]

2.3.5. Comparing the different scaling frameworks to SAFe
When compared to SAFe, the other agile scaling frameworks Nexus, LeSS, and Spotify provide only
a few artefacts, roles, and events in addition to those of regular Scrum. The four lifecycles in DAD contain
more elements than the previous three, but still fewer than SAFe. Like SAFe, Spotify does not
prescribe Scrum, but leaves the way of working of a team to the teams themselves. LeSS states that it
can be used in a distributed setting, whereas the other frameworks do not mention this. Although
Spotify is used successfully in a distributed setting, it does not contain specific elements that support
working distributed.

Each of these four frameworks scales the development department; however, the scaling ends there.
SAFe scales its reasoning using Value Streams, which, if needed, include the business and other
stakeholders needed to deliver value. Consequently, SAFe has more roles, events, artefacts
and practices than the other frameworks. This enables SAFe to scale on an organizational level rather
than only on the IT level.

In the 10th Annual State of Agile report by VersionOne from 2015 [13], SAFe is named by 27% of the
respondents, making it the most used scaling framework. Both LeSS and DAD are also mentioned, but
are used significantly less, 4% and 6% respectively, as shown in Figure 12. Nexus and Spotify are not
mentioned in this report.

Figure 12: 10th State of Agile report - Scaling agile from [13]

Note that in this figure, Scrum of Scrums is mentioned as the most used scaling method. However, this
method is a single meeting, not a framework. Thus, SAFe is the most used framework.

2.4. Research scope
This research is done as part of a master thesis project, for which the duration is fixed. SAFe is
described in Section 2.2. Based on this description and the illustration provided in Figure 5, 80 roles, events,
artefacts, and best practices can be identified. This list of 80 elements is presented in Appendix F.
Researching all 80 elements within the timeframe of such a project is not feasible. Thus, the
research must be scoped; this is done by looking at essential SAFe.

Essential SAFe is the bare minimum required for an implementation to still be considered SAFe. It was first presented
on the 9th of February 2016 in [52], on which this research was initially based. This picture is shown in
Figure 13. An update was presented on the 23rd of June 2016 in [53]. Although there are some
differences, these differences have no impact on this research.

Figure 13: Essential SAFe from [52]

Essential SAFe consists of the program level and the team level. The value stream level and portfolio
level are not part of essential SAFe. The focus of agile delivery is to deliver value via working software.
To deliver value, SAFe uses Value Streams, which are supported by one or more Agile Release Trains.
As such, we consider the Agile Release Train as the core delivery construct of SAFe. Moreover, when
not using the Agile Release Train, an implementation cannot be considered to be a SAFe
implementation [53]. This research will therefore focus on the Agile Release Train, and in particular
the extent to which distributed impacts the Agile Release Train.

Another part of essential SAFe is the team level. Distribution at the team level entails that within the
team the members are distributed. However, in this case the field of research is Distributed Scrum.
This is a different topic, and cannot be considered as being distributed SAFe. Therefore, the team level
is omitted in this research. Distribution within a team has been researched extensively, for example
in [10], [11], and [12].

Thus, the scope of this research will be set to the Agile Release Train. The elements which are part of
the Agile Release Train have been numbered and are visualized in Figure 14.

Figure 14: Agile Release Train elements numbered, modified and reproduced with permission from © 2011-2016 Scaled Agile,
Inc. All rights reserved. Original Big Picture graphic found at scaledagileframework.com.

Below is a list of the 34 elements of the Agile Release Train⁴.

Agile Release Train best practices:

• Level transcending best practices
  1. Core values
  2. Lean-agile mindset
  3. SAFe principles
  4. Implementing 1-2-3
  5. Lean-agile leaders
  6. Communities of Practice
  7. Value Stream coordination
  8. Weighted Shortest Job First
  9. Release any time
• Program level best practices
  10. Agile Release Train / Value Stream
  11. Architectural runway
  12. Program Kanban

Agile Release Train events:

• Program level events
  13. System Demo
  14. Solution Demo
  15. Inspect and Adapt
  16. PI Planning

Agile Release Train roles:

• Program level roles
  17. Release Train Engineer
  18. System Architect
  19. Product Management
  20. Business Owners
• Level transcending roles
  21. Customer
• Spanning palette
  22. DevOps
  23. System team
  24. Release Management
  25. Shared Services
  26. User Experience

Agile Release Train artefacts:

• Spanning palette artefacts
  27. Vision
  28. Roadmap
  29. Metrics
  30. Milestones & Releases
• Program level artefacts
  31. PI Objectives
  32. Feature
  33. Enabler
• Level transcending artefacts
  34. Epics

⁴ A description of all 34 elements can be found in Appendix G.

Chapter 3 Research methodology

In this chapter the approach and different research methodologies used during this research are
described. First, the approach taken to answer each research question is presented, as well as the
applicability of that approach to answer the research question. Second, the conditions, strengths,
limitations, and protocols of the methodologies presented in the approach are given.

3.1. Research approach


This section describes the approach taken to answer each research question, as well as the
applicability of this approach to provide an answer for the research question.

3.1.1. Approach for research question 1


No literature was found on the problems of distributed SAFe, based on a literature search done by
the author⁵. Therefore, to answer the first research question “What problems can be expected when
SAFe is applied in distributed settings?”, a Systematic Literature Review was done on the topics of
Distributed Agile Development and Distributed Scrum. To relate these problems to SAFe and discover
distributed SAFe problems, a multiple informant methodology with consensual approach was used to
filter the problems.

The Systematic Literature Review is applicable to answer this question as it is a method to evaluate
and interpret all available research relevant to the topic of Distributed Agile Development. This
corresponds with the method as stated in [54]: “A systematic literature review (often referred to as a
systematic review) is a means of identifying, evaluating and interpreting all available research relevant
to a particular research question, or topic area, or phenomenon of interest.”. Besides this, the result
of this Systematic Literature Review provides a background for the next steps of this research. This
corresponds to one of the reasons to do a Systematic Literature Review in [54].

Not all Distributed Agile Development problems are equally relevant in distributed SAFe. To discover
the relevant distributed SAFe problems, the problems of the Systematic Literature Review have been
filtered using a multiple informant methodology. The use of multiple informants enables the
researcher to objectively reach a decision, using relatively few resources, according to [55] and [56].
Thus, the use of multiple informants is applicable to discover the distributed SAFe problems, providing
an answer to the first research question: “What problems can be expected when SAFe is applied in
distributed settings?”.

3.1.2. Approach for research question 2


To answer the second research question: “What SAFe elements can be expected to fail when applied
in distributed settings?” triangulation was used. First, the author identified failing elements based on
the distributed SAFe problems found in the previous research question. Such identification, based only on
the insights of one person, cannot be considered sufficiently substantiated. Therefore, this has been
substantiated using triangulation. Second, a focus group with both SAFe, and distributed experts was
done to find an answer based on theory: the expert focus group. And third, a focus group with

⁵ On the 15th of February 2016, Google Scholar was searched using the following queries: ““Scaled Agile Framework” AND distributed AND problems”, and ““Scaled Agile Framework” AND SAFe”. The first query yielded 96 hits, the second query 122.

practitioners was done to find an answer based on practice: the practitioner focus group. The overlap
of these identification methods provides an answer to the research question.

To provide an answer to this question based on theory, expertise on both SAFe and distributed is
required. A focus group “involves engaging a small number of people in an informal group discussion
(or discussions), ‘focused’ around a particular topic or set of issues.” [57]. Within these discussions, the
interactions among the participants can yield important data [58], as in these interactions the
experience of the different participants is combined. Because a focus group combines the experience
of both distributed and SAFe experts, it is applicable for answering this research question.

The second focus group aims to answer the question based on practical experience. Practical
experience with SAFe differs for each person, as each organization is unique. To answer the question
properly, these differences must be taken into account. As mentioned previously, a focus group
combines the experience of the participants. With experts from different companies participating,
a focus group is applicable for answering this research question based on practice.

Additionally, a survey was done in an attempt to provide an answer based on practice to the question:
“What SAFe elements can be expected to fail when applied in distributed settings?”. However, due to
low response the results of this survey have not been used in this research. The survey and the results
of the survey can be found in Appendix T and Appendix U, so that these can be used for future
research. To emphasize, the survey is not used in this research.

3.1.3. Approach for research question 3


To answer the third research question: “What would SAFe look like, when customized for distributed
settings?” insights from the author are combined with focus group results. The discussions within the
focus group can provide new insights on possible solutions. This does not provide a sufficiently
substantiated answer; substantiating it fully is not possible within the time constraints of this research.
However, the answer does provide an indication of where to focus future research. To this end,
providing an indication for future research, the use of a focus group is applicable.

3.2. Systematic Literature Review


A Systematic Literature Review is done to provide a background on which to base the following steps
of the research. In previous research [9], the author has explored the problems of Distributed Scrum
using a Systematic Literature Review. To gain an insight as broad as possible, this research explores
the topics of Distributed Agile Development and Distributed Scrum by reviewing Systematic Literature
Reviews. Doing yet another Systematic Literature Review is not expected to give new insights, as many
reviews have already been done on Distributed Agile Development and Distributed Scrum. For this
reason, a review of existing Systematic Literature Reviews is done, using the approach of a Systematic
Literature Review.

3.2.1. Conditions for a Systematic Literature Review


The main condition for using a Systematic Literature Review is that there should be sufficient literature
on the topic of the review to avoid coincidental findings. Another condition for using a Systematic
Literature Review is that it is undertaken in accordance with a predefined search strategy [54].

3.2.2. Strengths of a Systematic Literature Review


According to [54] and [59], the main strength of a Systematic Literature Review is that the predefined
search strategy makes it less likely that the results are biased. Additionally, using a Systematic

Literature Review enables the researcher to analyze a wide range of variables, according to [54]. This
wide range of variables provides the researcher with a broad view of the topic as well as insight into
the current state of the field of research.

3.2.3. Limitations of a Systematic Literature Review


The major disadvantage mentioned in [54] and [59] is that Systematic Literature Reviews require a lot
more effort than traditional literature reviews. This is a limitation if time is limited. Besides this, a
limitation is that the results of the review might be biased. Another limitation is that the frame of
reference is different for each person. As a result, the execution of the protocol might be
slightly different when applied by another researcher. These small differences can lead to different
papers being accepted and different data being extracted. Finally, the use of a Systematic Literature
Review is limited to previously published research.

Besides these limitations, if any of the conditions is not fulfilled, this becomes a limitation as well and
should be handled as such.

3.2.4. Systematic Literature Review protocol


A review protocol is created based on the procedures and guidelines presented by Kitchenham and
Charters in [59] and [54]. A short summary of the steps in the protocol is presented below and these
steps are visualized in Figure 15. The review protocol can be found in Appendix A.

In the first step, the search results from the search in Google Scholar⁶, on Systematic Literature
Reviews discussing the topics Distributed Agile Development and Distributed Scrum, have been
filtered. The search results have been accepted or rejected based on the criteria described in the
protocol. This filtering resulted in a list of accepted papers.

In the second step, data has been extracted from the accepted papers. The extracted data consists of
the standard information, as described in [54], as well as additional information. The standard
information extracted is: study title, study author(s), study year, and publication details. This is
extended with additional information, namely, the problems or challenges presented in the study.

In the third step, the extracted data of the different studies was combined: similar problems and
challenges were grouped together and presented as in Table 1.
Table 1: Problem groups example

Distributed Agile Development problems          Times mentioned
Problem A                                       12
  Description of problem A
Problem B                                       11
  Description of problem B
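
To illustrate this third step, the sketch below shows how extracted problems could be tallied into
problem groups. It is a minimal sketch under stated assumptions: the example phrasings and the
synonym mapping are hypothetical, and in the actual protocol the judgment of which phrasings describe
the same problem is made by hand, not by a lookup table.

    from collections import Counter

    # Hypothetical extraction results: (study, problem as phrased in that study).
    extracted = [
        ("Study A", "poor video conferencing tools"),
        ("Study B", "inefficient communication tools"),
        ("Study C", "communication tools are inefficient"),
        ("Study A", "product owner not available"),
    ]

    # In the actual protocol this normalization is a manual, judgment-based
    # step: similar phrasings are grouped under one problem-group name.
    groups = {
        "poor video conferencing tools": "Problems due to inefficient communication tools",
        "inefficient communication tools": "Problems due to inefficient communication tools",
        "communication tools are inefficient": "Problems due to inefficient communication tools",
        "product owner not available": "Problems due to unavailability of people",
    }

    # Count how often each problem group is mentioned across the studies.
    counts = Counter(groups[phrase] for _, phrase in extracted)

    # Print the problem groups, most mentioned first, mirroring Table 1.
    for group, times in counts.most_common():
        print(f"{group}: {times}")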


Figure 15: Visualization Systematic Literature Review protocol

3.3. Multiple informant methodology


To discover the relevant distributed SAFe problems, the problems of the Systematic Literature Review
were filtered using a multiple informant methodology with a consensual approach.

3.3.1. Conditions for using multiple informants


The first condition for using multiple informants concerns the selection of the informants. According
to [55] there are four selection criteria. First, the informants should be likely to be able to answer all
questions under investigation. Second, members of the organization should nominate the informants
as the most knowledgeable within the organization. Third, the selected informants themselves
should think they are competent to answer the questions. Fourth, the informants should have worked
with the topic long enough for their answers to be plausible.

The second condition is that the right number of informants should be consulted. According to [56],
using 3 informants with different backgrounds is sufficient to eliminate less relevant problems. This is
supported in [55], which cites multiple studies indicating that 2 or 3 informants are sufficient.
However, [55] states that this only holds if the selection criteria apply to every informant.

3.3.2. Strengths of using multiple informants


The strength of using multiple informants is that it enables a researcher to make a decision using
relatively few resources. Using multiple informants instead of a single key informant reduces the
impact of bias and random errors, according to [55]. It is also less costly than using a focus group or
other group decision schemes, according to [56]. Another advantage of using a consensual approach is
that the researcher does not have to perform the aggregation himself.

3.3.3. Limitations of using multiple informants
A limitation of using multiple informants is that one could wonder whether the informants are able to
individually judge the topic sufficiently. Another limitation could be that the informants as a group do
not provide a broad enough view to make a sufficiently substantiated decision.

As a consensual approach has been taken, one could ask whether the group consensus provides the
best answer. According to [55], some studies indicate that the group consensus performs better than
the average of the individual informants, but is outperformed by the best informant [60], [61]. Also, it
is concluded that the unweighted mean of the individual informants tends to outperform group
consensus [60].

Besides these limitations, if any of the conditions is not fulfilled, this becomes a limitation as well and
should be handled as such.

3.3.4. Multiple informant protocol


The consulted informants made two mappings in which Distributed Agile Development problems were
mapped onto the core values of SAFe. On both mappings consensus was reached. For this, a protocol
was used, which is presented in Appendix B. A short summary of the steps in the protocol is presented
below and is visualized in Figure 16.

In the first step, the informants mapped the Distributed Agile Development problems onto the core
values of SAFe based on consideration. Based on this mapping, problems that are considered by SAFe
were filtered out. This results in problems of Distributed Agile Development that are not considered
in SAFe and are thus SAFe threats.

In the second step, the informants mapped the distributed SAFe threats onto the core values based on
impact. Based on this mapping, the threats which have a low impact were filtered out. This results in
problems which are not considered by SAFe and have a high impact on SAFe: the distributed SAFe
problems.

Figure 16: Visualization of multiple informant protocol

3.4. Focus group
To answer the question “What SAFe elements can be expected to fail when applied in distributed
settings?”, two focus groups were used. Using a focus group provides the insights of experts with
different backgrounds. The first focus group consisted of experts from different fields; this group
answered the question based on theory. The second focus group consisted of practitioners; this group
answered the question based on practice.

3.4.1. Conditions for a focus group


The main condition for a focus group is that it should be composed of 6 to 12 people with different
backgrounds and preferably different genders, as stated in [62]. Although this book focuses on focus
group interviews, the rationale is the same.

The second condition is that if the participants are asked to individually judge a topic, all participants
should be able to do this based on their expertise.

The last condition is that participants should not have been previously involved in the research. If they
have been, the results of the focus group could be compromised: participants could have information
on the topic that influences the outcome of the focus group. For example, a participant might strive
towards a certain result to substantiate the statements from their previous participation.

3.4.2. Strengths of a focus group


As stated in 3.1.2, the strength of a focus group is that the interactions in a focus group combine the
experience of the participants. This is reinforced in [63]: “Focus group interactions reveal not only
shared ways of talking, but also shared experiences, and shared ways of making sense of these
experiences”.

3.4.3. Limitations of a focus group


The main limitation of using a focus group is that the data analysis is complex and can lead to
unwarranted conclusions. According to [64]: “just as using counts by themselves can be problematic,
mainly reporting and describing the themes that emerge from an analysis of focus groups also can be
misleading because, to the extent that any themes that might stem from dissenters are ignored, it can
lead to unwarranted analytical generalizations”. To avoid such unwarranted conclusions, it must be
explained how the data analysis was done.

Besides these limitations, if any of the conditions is not fulfilled, this becomes a limitation as well and
should be handled as such.

3.4.4. Expert focus group protocol


For each of the focus groups a protocol has been made to structure its execution. A short summary of
the steps in the expert focus group protocol is presented below; these steps are visualized in Figure 17
and Figure 18. The expert focus group protocol can be found in Appendix C.

In the first part of the expert focus group protocol the participants identify which Agile Release Train
elements are challenged by distributed, which is done in two rounds. First the group identifies the
specifically challenged Agile Release Train elements. For this, the group filters the Agile Release Train
elements, first in groups, then plenary. For the group filtering, the participants are divided into two
groups based on expertise; each group consists of at least a distributed expert, a SAFe expert, and a
practitioner. Second, the group ranks the elements using dot voting. From this, the high-risk Agile
Release Train elements are identified. This first part is visualized in Figure 17.

In the second part of the expert focus group protocol the participants identify the consequences of
the high risk Agile Release Train elements. First, consequences are discovered during a plenary
discussion. Next, these consequences are ranked using dot voting. This second part is visualized in
Figure 18.

Figure 17: Expert focus group - part 1: identifying failing elements

Figure 18: Expert focus group - part 2: identifying consequences

3.4.5. Practitioner focus group protocol
A short summary of the steps in the practitioner focus group protocol is presented below; these steps
are visualized in Figure 19 and Figure 20. The practitioner focus group protocol can be found in
Appendix D.

In the first part of the practitioner focus group protocol the participants identify which Agile Release
Train elements are challenged by distributed. As in the first focus group, this is done in two rounds.
First the group identifies the specifically challenged Agile Release Train elements. For this, the group
filters the Agile Release Train elements, first in groups, then plenary. For the group filtering, the
participants were divided into two groups based on earlier participation in the research: those who had
participated in the survey were in one group, those who had not previously participated in the other.
Second, the individuals ranked the elements using dot voting. From this, the high-risk Agile Release
Train elements were identified. This first part is visualized in Figure 19.

In the second part of the practitioner focus group protocol the participants identify solutions to
prevent the elements from failing. First, solutions are discovered during a discussion in two groups;
the participants are divided between these groups randomly. Next, the solutions are ranked individually
using dot voting. This second part is visualized in Figure 20.

Figure 19: Practitioner focus group - part 1: identifying failing elements

Figure 20: Practitioner focus group - part 2: identifying solutions

Chapter 4 Distributed SAFe problems

In this chapter an answer is presented to the first research question: “What problems can be expected
when SAFe is applied in distributed settings?”. The research was split into two steps. First, a Systematic
Literature Review on the problems of Distributed Agile Development and Distributed Scrum was done.
Second, a multiple informant methodology was used to filter these problems and uncover the
distributed SAFe problems.

4.1. Problems of Distributed Agile Development: Systematic Literature Review


To provide a background on which to base the following steps of the research, the literature was
searched for problems of Distributed Agile Development and Distributed Scrum. For this Systematic
Literature Review a protocol was used; the protocol is described in 3.2.4, page 23, and is visualized in
Figure 21.

Figure 21: Visualization Systematic Literature Review protocol

4.1.1. Conditions for the Systematic Literature Review


As stated in 3.2.1, page 22, the first condition for conducting a Systematic Literature Review is that
there should be sufficient literature on the topic of the review. To check whether this condition was
fulfilled for the topic of distributed SAFe, Google Scholar was searched on the 15th of February 2016
using the queries '"Scaled Agile Framework" AND distributed AND problems' and '"Scaled Agile
Framework" AND SAFe'. The first query yielded 96 hits, the second query 122.

This search yielded two studies discussing problems, [65] and [66]. These studies discuss the
challenges of transitioning from a traditional organization to an organization where agile is scaled.
However, they do not cover the distributed aspect required for this research. Therefore, a different
approach is taken to find problems of distributed SAFe: the problems of Distributed Agile Development
and Distributed Scrum are researched in this Systematic Literature Review. On these topics there is
enough literature to fulfill the condition.

The second condition is that a predefined search strategy is used for the review. The protocol, as
summarized in 3.2.4, page 23, fulfills this condition.

4.1.2. SAFe transformation problems


In the literature search on the 15th of February 2016, two papers were identified that discuss the
challenges of transitioning from a traditional organization to an organization where agile is scaled:
[65] and [66]. In [65] distribution is not mentioned as a common challenge for scaling agile, although,
for example, inter-team communication, team coordination, and dependency management are
mentioned. These problems overlap with the coordination and communication problems that can occur
when working distributed. In [66] distribution is mentioned as a challenge. Four levels towards agility
at scale are used: agile delivery, team agility, product agility, and enterprise agility. Different
challenges are mentioned for these levels; distribution is one of them, mentioned at both the team and
the enterprise level.

4.1.3. Problems of Distributed Scrum


SAFe applies the Plan-Do-Check-Adjust cycle at scale, which is the basis of Scrum and Agile, as shown
in Figure 22. Both Scrum and SAFe have their meetings structured based on this cycle, resulting in
similar meetings and the use of backlogs. Due to the similarities between Scrum and SAFe, problems
that occur in Distributed Scrum are expected to also be present in distributed SAFe. Additionally, SAFe
scales agile. Therefore, problems of Distributed Agile Development are researched as well.

Figure 22: Plan-Do-Check-Adjust cycle in SAFe from [1]

A Systematic Literature Review [9], done prior to this research, discovered problem groups in
Distributed Scrum. These problem groups are listed in Table 2, ordered by their class as classified in [9].
Table 2: Problem groups of Distributed Scrum ordered by class from [9]

Problem group Class


No syncing between sites Coordination
Planning a meeting with everyone present is difficult Coordination
Integration difficulties Coordination
Lack of focus Coordination
Coordinating in multiple time zones is difficult Coordination
Multiple Product Owners not in sync Coordination
Misunderstanding Culture
Difference in reporting impediments Culture
Different holidays Culture
Silence / passivism Culture
Different work practices Culture
Incorrect execution of Scrum Scrum
Scrum of Scrums not effectively used Scrum
Features not being deployment ready at end of sprint Scrum
Managing customers new to agile Scrum
Not communicating all information to team Communication
Product Owner not present Communication
Informal contact is lost Communication
Meetings at the office outside office hours Time zone
No transparency between sites Time zone
Time differences Time zone
Hardware and tools not sufficient Technical

It should be noted that these problems can also occur when working co-located; for example,
“incorrect execution of Scrum” can also happen with co-located teams. Additionally, there is overlap
between these problems: for example, “meetings at the office outside office hours” are due to “time
differences”, which is a separate problem. The classification itself, as presented in [9], could also be
debated: for example, the problem “coordinating in multiple time zones is difficult” is classified as
coordination, but could also be classified as time zone.

Although these observations are correct, choices were made in [9] regarding when problems are
included, the way the problems are grouped, and the way they are classified. This research does not
reevaluate these choices because they are substantiated in [9]. The problems are used in this research
as presented.

4.1.4. Problems of Distributed Agile Development


SAFe is designed to scale agile development; therefore the problems of Distributed Agile Development
can be expected to occur in SAFe as well. In their thesis, Dullemond and van Gameren listed the
challenges of Distributed Agile Development [3]. These challenges are listed in Table 3, grouped by
their most applicable class as classified in [3].

Table 3: Challenges of Distributed Agile Development grouped by most applicable class from [3]

Challenge Class
Lack of informal communication Geographic dispersion
Increased effort to initiate contact Geographic dispersion
Reduced hours of collaboration Control and coordination breakdown
Lack of shared understanding Control and coordination breakdown
Increased dependency on technology Control and coordination breakdown
Increased complexity of the technical infrastructure Control and coordination breakdown
Communication delay Loss of communication richness
Loss of cohesion Loss of teamness
Reduced trust Loss of teamness
Perceived threat from low-cost alternatives Loss of teamness
Increased team size Loss of teamness
Differences in language Cultural differences
Differences in ethical values Cultural differences
Differences in organizational vision Cultural differences
Differences in managing individualism and collectivism Cultural differences
Differences in terms of agreement Cultural differences
Differences in time perception Cultural differences
Differences in quality assessment Cultural differences
Differences in design Cultural differences

As for the previous study, the choices made in [3] regarding the challenges and their classes are not
reevaluated in this research. The challenges are used as presented in [3].

4.1.5. Extending the search


Besides these two Systematic Literature Reviews, an additional Systematic Literature Review was
done. The protocol that was used is described in 3.2.4, page 23. The results of the search are
presented in Table 4.
Table 4: Google Scholar search results

Query 1: "Distributed Agile Development" AND (problems OR challenges) AND "systematic literature review"
  Hits: 124; accepted: 9 — [67], [68], [69], [70], [71], [72], [73], [74], [75]

Query 2: "Distributed Scrum" AND (problems OR challenges) AND "systematic literature review"
  Hits: 140; accepted: 10 — [68], [69], [70], [67], [72], [73], [71], [76], [74], [77]

The rejected papers including the reason of rejection can be found in Appendix H. A list of all 184
problems and challenges identified by the accepted papers can be found in Appendix I.

In addition to the studies found, three studies were discovered that have also investigated Systematic
Literature Reviews: [78], [79] and [80]. The studies found in these studies have likewise been tested
against the acceptance criteria; no new papers were found in this search. The results of these tests can
be found in Appendix J.

4.1.6. Combining the results
The results discussed in the previous sections were combined by grouping similar problems and
challenges. Problems that occurred more than once have been grouped together. In Table 5 the 29
grouped problems are listed and summarized. How these problems are grouped can be found in
Appendix K. There are 25 problems that could not be grouped; these can be found in Appendix L.

It should be noted that some of these problems can also occur when working co-located. For example,
language barriers can also occur when working co-located with team members of different
nationalities. Thus, the problems presented do not occur exclusively when working distributed.
However, the reviewed studies mention them as problems that can occur when working distributed.
Moreover, the fact that a problem can occur when working co-located does not mean that it cannot
occur when working distributed. For this reason, these problems can be expected to occur when SAFe
is applied in distributed settings.
Table 5: Problem groups

# Distributed Agile Development problems Times mentioned


1 Problems due to inefficient communication tools 12
The most mentioned problem group is problems due to inefficient
communication tools. Both the hardware and tools used for communication
are part of the communication infrastructure. These problems are quite severe
as there is a high dependency on this infrastructure for communication.
2 Problems due to unavailability of people 11
Within this group, the person most often mentioned as unavailable is the Product Owner. This results
in difficulties with communication between the Product Owner and the developers. It is also mentioned
that the Scrum Master or developers are not available, resulting in communication difficulties as well.
3 Problems due to lack of synchronous communication 10
Lack of synchronous communication makes collaboration difficult because there are fewer hours in
which to collaborate. This lack of synchronous communication comes from the asynchronous nature of
distributed working, which results in few overlapping work hours.
4 Problems due to different execution of work practices 9
The different work practices can come from personal preferences or cultural
differences. These different work practices result in differences in design,
quality assessment, and terms of agreement. This makes it difficult to
coordinate and collaborate.
5 Problems due to language barriers 8
Many studies report problems due to language barriers. If the language used
for communication is not someone’s native language it can be hard for this
person to follow a conversation or express themselves. Also, speakers from
different countries might have different dialects that can be hard to follow for
others.
6 Problems due to lacking technical infrastructure 8
Lacking technical infrastructure is caused by increased complexity of the
infrastructure in big projects. This tends to lead to a weak infrastructure that
makes it difficult for the developers to work with.
7 Problems due to loss of cohesion 8
The loss of cohesion can make that teams no longer feel that they are one
team. This can lead to poor team dynamics and less productivity.

8 Problems due to misinterpretation 8
Misinterpretation can come from misunderstanding during communication because of small
communication bandwidth, or because information is not accessible or even hidden. This can result in
reduced cooperation and loss of information.
9 Problems due to lack of agile training 7
If customers or developers do not have a similar understanding of agile this
creates a gap between the skill levels of the involved parties. This gap makes it
difficult for these parties to work together.
10 Problems due to reduced trust 7
Many studies mention that there is reduced trust between team members
when working in a distributed setting. This reduced trust can lead to lack of
productivity and loss of teamness.
11 Problems due to time zone differences 7
Time zone differences can lead to having meetings outside office hours and
reduced availability for synchronous communication.
12 Problems due to people differences 6
People differences can be many things, difference in time perception, notion
of authority, individualism, or ethical values. These differences can make it
difficult to work together.
13 Problems due to lack of traditional management 6
Managing an agile project can be difficult because of the lack of traditional
management processes to steer the project. This lack of processes can cause
problems if teams do not function autonomously.
14 Problems due to difficulties with coordination 6
Working together with multiple sites in multiple time zones is difficult, this
increases coordination costs and can lead to unnecessary delays and conflicting
work.
15 Problems due to shared ownership and responsibility 6
In agile, teams get shared ownership of their own projects, which also gives them responsibility for
these projects. When the teams work distributed, this can cause problems, as the teams may not feel
this responsibility, which could result in avoidance of accountability.
16 Problems due to incorrect execution of Scrum* 6
Not executing Scrum properly results in many problems: features that are not ready at the end of the
sprint, and teams that get no feedback on their work because they do not hold retrospectives or
reviews.
17 Problems due to cultural differences - organizational and national 5
Differences in culture can be differences of national culture between sites but
also differences in organizational culture between sites. These differences can
make it harder for sites to cooperate.
18 Problems due to the loss of informal contact 5
When working distributed the chitchat at the coffee corner is lost. This loss of
informal contact can create a lack of awareness of what is going on with the
team members, leading to less communication and collaboration.
19 Problems due to lack of collective vision 4
If there is no collective vision, teams miss the big picture. This results in less
focus and commitment of the teams.

20 Problems due to lack of requirement documents 4
Scrum does not provide formal documentation in the form of requirement documents; this can cause
problems if the customer wants to work with fixed requirements. Teams can also have communication
issues if important decisions are not documented, leading to unclear requirements.
21 Problems due to lack of visibility 4
It is difficult to evaluate the current state of a project. This lack of visibility makes it difficult to
create trust between sites.
22 Problems due to difficulties in knowledge sharing 4
Knowledge sharing when working distributed is difficult as knowledge is spread
over the different sites. If sharing of this knowledge is not done between sites
this can lead to a lack of domain knowledge in some sites.
23 Problems due to increased communication effort 4
Initiating contact in a distributed environment takes increased effort, as it cannot be done face to
face; some tool must be used. This creates communication overhead and increases communication
costs.
24 Problems due to increased team size 3
When the team size is increased, it becomes more difficult to work together as a team.
25 Problems due to different holidays 3
Different countries, cultures and religions have different holidays. When these
holidays are not overlapping, it is difficult to synchronize work between the
distributed sites.
26 Problems due to difficulties with agile decision making 3
Agile decision making is different from traditional decision making because
teams get more decision-making power. This results in management having to
let go and trust the teams to make the right decision.
27 Problems due to increased number of teams 3
When the number of teams increases, this creates difficulties for agile
practices. Agile practices must be scaled for this increased number of teams.
28 Problems due to silence of participants 2
During meetings, some participants can remain silent and passive due to
linguistic or cultural differences.
29 Problems due to increased number of sites 2
Working distributed means working with multiple sites. This can cause all kinds
of problems with communication and coordination. Many of these problems
are listed in this table.

4.1.6.1. *Note on: problems due to incorrect execution of Scrum


From the literature search, the problem “problems due to incorrect execution of Scrum” is mentioned
multiple times. This problem, however, is Scrum specific and mentioned in the literature as such.
Looking at this problem from a framework perspective, the problem is the incorrect application of the
methods presented by the framework (Scrum) in a distributed setting. The problem described is thus
“problems due to incorrect execution of the methods presented by the framework”, in short
“problems due to incorrect execution of the framework”.

This incorrect execution of methods in the framework can also apply to distributed SAFe. For
readability, in this thesis the problem is described as “problems due to incorrect execution of SAFe”.

Although this step is logical, it should be clearly noted that this generalization is not substantiated in
any way.

4.2. Distributed SAFe problems: Multiple informant methodology


In this section, the 27 problems of Distributed Agile Development mentioned more than twice (see
Table 5) are filtered in two stages to discover the distributed SAFe problems. This filtering is done
using a multiple informant methodology with a consensual approach. The protocol used for this is
summarized in 3.3.4 and visualized in Figure 23 below.

Figure 23: Visualization of multiple informant protocol

It should be noted that not all 29 Distributed Agile Development problems from the Systematic
Literature Review are used: problems mentioned only twice are not considered, problems mentioned
three times are. On the one hand, dismissing problems that are not mentioned often early in the
process could result in an important problem being filtered out. On the other hand, taking all problems
further into the research could result in solving a problem that does not occur often. If only two out of
the 13 reviewed Systematic Literature Reviews mention a problem, it can be considered a coincidence,
as the others should have found the problem as well. Three mentions could also be a coincidence, but
the chance of this is smaller. Moreover, dismissing those problems as well would result in four more
problems being dismissed early in the process, which is not preferred at this stage of the research.

4.2.1. Conditions for using multiple informants


To check whether the first condition, regarding the selection of the informants, is fulfilled, the four
selection criteria presented in 3.3.1 should be met. All four criteria have been met. The first criterion
is met as all informants are SAFe Program Consultants (SPCs) and therefore have sufficient knowledge
of SAFe to create the mappings. The second criterion is met as the SPCs are regarded as the most
knowledgeable on SAFe. The third criterion is met: three informants did think they were competent.
During one of the sessions, one informant did not think he was competent to make the mapping; in
consultation with the informant, it was decided to exclude this informant from the results. The fourth
criterion is met as the informants have all implemented SAFe in different companies.

The second condition is that the right number of informants should be consulted. For the multiple
informant methodology, 3 informants that have implemented SAFe in different companies were
consulted. As stated in 3.3.1, 3 informants with different backgrounds are sufficient to eliminate less
relevant problems.

4.2.2. Problem - Core Value mapping: consideration


The problems were first mapped onto the core values of SAFe based on consideration. A high value (++)
means that the problem is strongly considered in that core value. In this mapping the core values are
seen as a means to an end: all elements in the SAFe framework support these core values. The
mapping thus provides insight into which problems might not be covered in SAFe. This mapping is
presented in Table 7.
Table 6: Legend for Table 7: degree of consideration

Value   Explanation
--      Problem is not considered at all
-       Problem is not really considered
-/+     Problem is partly considered
+       Problem is considered
++      Problem is strongly considered

Table 7: Problems mapped to core values on consideration

Problem \ Core value                                  Alignment  Built-in Quality  Transparency  Program Execution  Result*

Technical
Inefficient communication tools                       -          -                 -             -                  -
Lacking technical infrastructure                      -/+        ++                -/+           +                  ++

Coordination
Unavailability of people                              +          --                -             -/+                +
Different execution of work practices                 -/+        ++                +             ++                 ++
Time zone differences                                 --         --                --            --                 --
Difficulties with coordination                        +          -/+               -/+           +                  +
Increased number of teams                             ++         +                 ++            ++                 ++

Communication
Lack of synchronous communication                     -          --                -             -                  -
Loss of informal contact                              -          -                 -             -                  -
Misinterpretation                                     ++         --                ++            -/+                ++
Increased communication effort                        --         --                --            --                 --

Cultural
Language barriers                                     --         --                --            --                 --
Different holidays                                    --         --                --            --                 --
Cultural differences - organizational and national    +          --                +             -/+                +

Agile expertise
Lack of agile training                                -          --                -             -/+                -/+
Lack of traditional management                        -/+        -/+               -/+           +                  +
Difficulties with agile decision making               +          --                +             -/+                +
Incorrect execution of SAFe                           -          -                 -             -                  -
Lack of requirement documents                         +          +                 -/+           +                  +
Lack of visibility                                    +          -                 ++            -                  ++

Teamness
Reduced trust                                         ++         +                 ++            +                  ++
Loss of cohesion                                      +          +                 +             +                  +
People differences                                    -          -                 -             -                  -
Shared ownership and responsibility                   ++         +                 +             +                  ++
Increased team size                                   +          --                --            +                  +
Difficulties in knowledge sharing                     +          --                +             +                  +
Lack of collective vision                             ++         --                +             -/+                ++

*Note on result: the value in the result column is the maximum value of the four other columns.

This table has been discussed and verified with three SAFe Program Consultants from Prowareness,
using the protocol previously discussed in 3.3.4, page 25. It should be noted that in the execution of
the protocol some of the informants gave different values for the mapping. However, none of these
differences affected the result column, and reaching a consensus on the values did not affect the
result either.

From this table, the following problems can be extracted that are not (really) considered in SAFe and
therefore are threats to SAFe:

1. Incorrect execution of SAFe
2. Language barriers
3. Different holidays
4. People differences
5. Time zone differences
6. Increased communication effort
7. Loss of informal contact
8. Lack of synchronous communication
9. Inefficient communication tools
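
To make this filtering explicit: each symbol of the five-point scale in Table 6 can be treated as an
ordinal value, the result column is the maximum consideration over the four core values, and a problem
is a threat when its result is '-' or '--'. The sketch below illustrates this with two rows taken from
Table 7; it illustrates only the calculation, not the consensus process of the informants. The second
mapping, presented in 4.2.3, works the same way with the impact legend of Table 8, where the filter
instead keeps the problems whose resulting impact is high.

    # Ordinal order of the five-point scale from Table 6, low to high.
    SCALE = ["--", "-", "-/+", "+", "++"]
    RANK = {symbol: i for i, symbol in enumerate(SCALE)}

    # Two example rows from Table 7: consideration per core value, in the
    # order alignment, built-in quality, transparency, program execution.
    consideration = {
        "Time zone differences": ["--", "--", "--", "--"],
        "Reduced trust": ["++", "+", "++", "+"],
    }

    def result(values):
        # The result column is the maximum value of the four other columns.
        return max(values, key=RANK.__getitem__)

    # A problem is a SAFe threat if it is not (really) considered,
    # i.e. its result is '--' or '-'.
    threats = [p for p, v in consideration.items() if result(v) in ("--", "-")]
    print(threats)  # -> ['Time zone differences']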

4.2.3. Problem - Core Value mapping: impact
Another mapping between the problems and the core values is made, in which the core values are
viewed as the goals that SAFe tries to achieve. In this mapping, the impact of each problem on the
core value as a goal is assessed. This mapping is presented in Table 9.
Table 8: Legend for Table 9: degree of impact

Value   Explanation
--      Problem has no impact
-       Problem has little impact
-/+     Problem has moderate impact
+       Problem has severe impact
++      Problem has very severe impact

Table 9: Problems mapped to core values on impact

Problem \ Core value                    Alignment  Built-in Quality  Transparency  Program Execution  Result*

Technical
Inefficient communication tools         ++         +                 ++            ++                 ++

Coordination
Time zone differences                   ++         --                +             +                  ++

Communication
Lack of synchronous communication       +          +                 -             -                  +
Loss of informal contact                +          -                 -             -                  +
Increased communication effort          +          +                 +             ++                 ++

Cultural
Language barriers                       ++         +                 +             ++                 ++
Different holidays                      --         --                -             -                  -

Agile expertise
Incorrect execution of SAFe             ++         +                 +             ++                 ++

Teamness
People differences                      -          -                 -/+           -/+                -/+

*Note on result: the value in the result column is the maximum value of the four other columns.

This table has also been discussed and verified with three SAFe Program Consultants from Prowareness,
using the protocol previously discussed in 3.3.4, page 25. It should be noted that, as with the previous
mapping, in some cases the informants gave different values. The informants deviated more from each
other when filling in this mapping, which resulted in some differences, also in the result column.
However, these small differences did not affect the next steps of the process.

From the 9 problems that are not considered in the core values, the problems with a very severe
impact (++) on the core values are the following:

1. Incorrect execution of SAFe
2. Language barriers
3. Time zone differences
4. Increased communication effort
5. Inefficient communication tools

Note that, as stated previously, these problems do not occur exclusively when working distributed.
However, the fact that a problem can occur when working co-located does not mean that it cannot
occur when working distributed. Thus, these problems are expected to occur when applying SAFe in
distributed settings.

Chapter 5 Identification of failing SAFe elements

In this chapter an answer is presented to the second research question: “What SAFe elements can be
expected to fail when applied in distributed settings?”. To do this, failing elements were identified
based on the distributed SAFe problems. Because an identification based only on the insights of one
person cannot be considered sufficiently substantiated, triangulation is used. First, the identification
was made based on insights gained from answering the first research question. Second, an
identification was made based on theory, using a focus group in which distributed and SAFe experts
participated. Third, an identification was made based on practice, using a focus group in which
practitioners participated. Finally, the insights of the three identification methods are combined to
reach a conclusion on which SAFe elements can be expected to fail in distributed settings.

5.1. Identification based on theory: Literature


The elements of SAFe that are part of the Agile Release Train can be categorized into two groups:
elements which are expected to fail when applied in distributed settings and elements which are not
expected to fail. The elements were selected based on the problems found in the previous chapter. In
Figure 24, coloring has been applied based on this categorization: the elements that are expected to
fail are colored red, the elements that are not expected to fail are colored green. This does not mean
that the green elements will never fail; however, their failure is not deemed likely, given the problems
identified in Chapter 4.

Figure 24: Agile Release Train elements failing - result of identification based on literature, created based on [1]

5.1.1. Elements expected to fail
The argumentation for why these elements are classified as expected to fail is described below. Only
the elements that are expected to fail are presented.

5.1.1.1. Agile Release Train


If the Agile Release Train is spread over multiple time zones, it can be difficult to adhere to all
prescribed practices, which could result in an incorrect execution of SAFe, one of the problems
identified in the literature. An example of this, mentioned in the literature for Distributed Scrum, is
that a retrospective is skipped because it is difficult when working in multiple time zones. Applied to
SAFe, this would mean not doing the Inspect & Adapt because it is difficult. This will probably have a
high impact on the Agile Release Train, possibly even running it off its tracks. A consequence of the
increased communication effort is that there is less communication between the teams of the Agile
Release Train during the program increment; this lack of communication can cause the teams to go in
different directions. These are examples of why the Agile Release Train might fail.

5.1.1.2. System Demos


System demos are given every iteration. To give the system demo, the system should be integrated; if
this is not possible, partial integration can be used, but this is not preferable. The system team does
this integration, with assistance from the other teams if needed. However, if the teams are spread over
multiple time zones or locations, they are not always available to assist the system team with this
integration, so the integration could fail. Additionally, key stakeholders should be present at the
system demo; if these are spread over multiple locations, it is more difficult to get all stakeholders
present and actively participating. Because of these problems the system demo might fail.

5.1.1.3. PI planning, Inspect & Adapt, and Solution Demo


The program increment planning is essential for aligning the Agile Release Train, the inspect & adapt
is essential to continuously improve the program, and the solution demo is essential to gain feedback
on the created solution. Skipping any of these three events is likely to derail the Agile Release Train
in no time, due to lack of alignment, improvement, and feedback. When these events are done
distributed, inefficient communication tools and language barriers can make the communication during
the events problematic; if communication becomes too big a problem, this could also derail the train.
Even when the events are done co-located, language barriers can still make communication
problematic. Therefore, the program increment planning, inspect & adapt, and solution demo might fail.

5.1.1.4. Spanning palette teams


The teams that are part of the spanning palette (DevOps, system team, release management, shared
services, and user experience) are available to all teams of the Agile Release Train. When the teams
of the Agile Release Train are spread over multiple time zones, the spanning palette needs to be
available across those time zones. If these time zones span, for example, 14 hours, the spanning
palette teams have to be available for 14 hours. This is not necessarily a problem, but the extra load
on these teams can cause them to fail. Therefore, the spanning palette teams might fail.

5.2. Identification based on theory: Expert focus group


To identify which SAFe elements fail, a focus group has been used. This focus group was executed
according to the protocol presented in Chapter 3, page 26. This section describes how the conditions
for using a focus group are fulfilled; the results of the focus group and an analysis of these results are
also presented. A detailed description of the execution of the focus group can be found in Appendix O.

5.2.1. Conditions for the expert focus group
The main condition for using a focus group, as stated in 3.4.1, page 26, is that it should be composed
of 6 to 12 people from different backgrounds. With the summer holiday season nearby, the choice was
made to try to organize the meeting before the holidays, as otherwise two months would pass before
another opportunity would arise. There was therefore little time to recruit participants, and the session
was held with 6 participants of different backgrounds and genders.

The second condition is that if the participants are asked to individually judge a topic, all participants
should be able to do this based on their expertise. The participants have been asked to vote
individually in the second round, thus they should all be able to individually judge the topic. The
participants for this focus group have three different backgrounds: SAFe program consultants, Release
Train Engineers with distributed experience (practitioners), and distributed experts from the academic
world. The level of expertise of the participants on the topics, distributed and SAFe, is different. This
difference could be a limitation and will be discussed in the limitations section.

The last condition is regarding previous involvement in the research. For the focus group, none of the
participants have been previously involved with the research. Therefore, no influence from this can
be expected.

5.2.2. Expert focus group results


In this section the results of rounds 1 & 2 of the focus group are presented. The results of the other
rounds are not used in the research and are therefore not presented here; they can be found in
Appendix Q to Appendix S. A full description of the execution of the focus group, including the results
of the other rounds, can be found in Appendix O. The protocol used in the focus group is visualized in
Figure 25.

Figure 25: Expert focus group protocol visualization

In the first round the participants started in two groups, each containing a SAFe expert, a distributed
expert, and a practitioner. These groups each identified elements that were specifically challenged by
distributed. After both groups had finished, a plenary session was held to identify the specifically
challenged elements. From this session, 9 elements were identified that are specifically challenged by
distributed. The result of this session can be found in Table 10.
Table 10: Expert focus group result of round 1

Specifically challenged by distributed: PI planning (16), Solution Demo (14), System Demo (13),
Inspect & Adapt (15), Program Kanban (12), Lean-agile mindset (2), DevOps & system team (20),
Core Values (1), Communities of Practice (6)

Undecided: Weighted Shortest Job First (8)

Not specifically challenged by distributed: Customer (19), Implementing 1-2-3 (4), Release any
time (9), Vision (22), Release Management, Shared Services & User Experience (21), Agile Release
Train / Value Stream (10), Roadmap (23), Lean-agile leaders (5), PI objectives (26), Feature (27),
Milestones & Releases (25), Metrics (24), Enabler (28), Architectural Runway (11), Epics (29),
System Architect, Release Train Engineer & Product Management (17), SAFe principles (3), Value
Stream coordination (7), Business Owners (18)

In the second round the participants individually dot voted on the elements, both on likelihood and on
impact: likelihood being the likelihood that the element fails in a distributed setting, and impact being
the impact if the element does fail.

Combining the scores of likelihood and impact gives the risk of an element failing. As stated in [81],
Risk_i = P(Loss_i) × I(Loss_i), in which P is the probability and I the importance; in our case these are
the likelihood (P) and the impact (I). Combining the likelihood and impact scores thus gives the risk
that an element fails in a distributed setting. The individual votes of the participants on likelihood and
impact can be found in Appendix P. These individual votes are summed in Table 11. Based on this data,
the calculated risk is visualized in Figure 26.

Table 11: Votes on likelihood and impact

Element                        Likelihood   Impact   Risk
PI planning (16)                   18          18     324
Inspect & Adapt (15)               11          11     121
Solution Demo (14)                  4           7      28
Program Kanban (12)                 5           5      25
System Demo (13)                    6           3      18
DevOps & system team (20)           3           6      18
Lean-agile mindset (2)              3           4      12
Core values (1)                     3           0       0
Communities of Practice (6)         1           0       0
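
As a minimal sketch of this risk calculation, the snippet below takes the summed dot votes from
Table 11 and ranks the elements by risk = likelihood × impact, reproducing the ordering of Figure 26.
The tallying of individual participant votes into these totals is omitted.

    # Summed dot votes per element from Table 11: (likelihood, impact).
    votes = {
        "PI planning (16)": (18, 18),
        "Inspect & Adapt (15)": (11, 11),
        "Solution Demo (14)": (4, 7),
        "Program Kanban (12)": (5, 5),
        "System Demo (13)": (6, 3),
        "DevOps & system team (20)": (3, 6),
        "Lean-agile mindset (2)": (3, 4),
        "Core values (1)": (3, 0),
        "Communities of Practice (6)": (1, 0),
    }

    # Risk_i = P(Loss_i) * I(Loss_i) [81]: likelihood times impact.
    risk = {element: l * i for element, (l, i) in votes.items()}

    # Rank the elements from highest to lowest risk, as in Figure 26.
    for element, r in sorted(risk.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{element}: {r}")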

Figure 26: Graph of risk of SAFe element failing

Figure 27 and Figure 28 show the distribution of the votes per expert group. The vote counts
underlying these graphs are reproduced below.

Figure 27: Graph expert distribution on likelihood of SAFe elements failing

Element                        Total   Practitioner   Distributed expert   SAFe expert
PI planning (16)                  18        7                6                  5
Inspect & Adapt (15)              11        5                2                  4
Solution Demo (14)                 4        2                2                  0
Program Kanban (12)                5        1                2                  2
System Demo (13)                   6        1                1                  4
DevOps & system team (20)          3        0                3                  0
Lean-agile mindset (2)             3        0                0                  3
Core values (1)                    3        2                1                  0
Communities of Practice (6)        1        0                1                  0

Figure 28: Graph expert distribution on impact of SAFe elements failing

Element                        Total   Practitioner   Distributed expert   SAFe expert
PI planning (16)                  18        7                4                  7
Inspect & Adapt (15)              11        5                1                  5
Solution Demo (14)                 7        3                4                  0
Program Kanban (12)                5        1                2                  2
System Demo (13)                   3        2                1                  0
DevOps & system team (20)          6        0                6                  0
Lean-agile mindset (2)             4        0                0                  4
Core values (1)                    0        0                0                  0
Communities of Practice (6)        0        0                0                  0

5.2.3. Analysis of expert focus group results
The top 3 elements of both the likelihood ranking and the impact ranking, all events of the Agile
Release Train, were further discussed in the focus group. That these events are likely to fail, with high
impact, is obvious, because these events are key to SAFe and part of essential SAFe.

From the results of the focus group, both the program Kanban and the DevOps & system team have at
least the same risk score as the system demo, which had the lowest score of the events. Therefore,
both the program Kanban and the DevOps & system team are added to the events as elements with a
high risk of failure. Again, the elements with a risk of failing are marked red in the SAFe big picture in
Figure 29; the other elements are marked green.

Figure 29: Agile Release Train elements failing - result of expert focus group, created based on [1]

The reasoning of the focus group concerning each of the red elements is presented below.

5.2.3.1. PI planning
Both groups identified the PI planning (16) as being challenged by distributed; there was not much
discussion on this element. Holding a distributed meeting with a small team is already difficult, so
holding one with an entire Agile Release Train will be even harder. Participation of all members of the
Agile Release Train is key to the success of the meeting, and getting all team members to actively
participate is difficult when being distributed.

5.2.3.2. Inspect & Adapt


Like the PI planning, both groups identified the inspect & adapt meeting (15) as being challenged by
distributed, also without much discussion. Similar reasoning was applied as with the PI planning:
participation of all members of the Agile Release Train is key to the success of the meeting, and
participation becomes more difficult when the meeting is done distributed.

5.2.3.3. Solution Demo
Like the previous two elements, both groups identified the solution demo (14) as being challenged
without much discussion. However, the reasoning here was different: including the key stakeholders is
more difficult when the meeting is done distributed, and without the key stakeholders present, the
meeting is of little use. Therefore, the success of this meeting is also challenged by distributed.

5.2.3.4. Program Kanban


Both groups also identified the program Kanban (12) as being challenged by distributed. The program
Kanban is already difficult when working co-located because it is hard to get the same frame of
reference. When distribution is added, many signals in the communication are lost, making this even
more difficult.

5.2.3.5. System Demo


Both groups also identified the system demo (13) as being challenged by distributed. In addition to
the reasons given for the solution demo, the system demo is also being challenged because the work
of the different locations must be integrated every 2 weeks. This integration is more difficult when
the members of the Agile Release Train cannot work co-located.

5.2.3.6. DevOps & system team


For the DevOps & system team (20) there was some discussion. The integration which must be done
for the system demo is done by DevOps and the system team. Therefore, the question is whether the
problem lies with the system demo or with DevOps and the system team. Because this integration is
not only needed for the system demo, but also for example for releases, both the teams and the demo
are being challenged by distributed.

5.3. Identification based on practice: Practitioner focus group


The third part of the triangulation is to gain insights from practice. For this, a focus group with
practitioners was used. This focus group was executed according to the protocol presented in
Chapter 3, page 28. This section describes how the conditions for using a focus group are fulfilled
and presents the results of the focus group, together with an analysis of these results. A detailed
description of the execution of the focus group can be found in Appendix O.

5.3.1. Conditions for the practitioner focus group


The main condition for using a focus group, as stated in 3.4.1, page 26, is that it should be composed
of 6 to 12 people from different backgrounds. For this focus group, 11 participants from 7 different
companies were present. Though all participants were Release Train Engineers, the difference in
companies ensures the required difference in backgrounds. Additionally, both genders were present,
though the majority were men.

The second condition is that if the participants are asked to individually judge a topic, all participants
should be able to do this based on their experience. The participants were asked to vote individually in
the second round, so they should all be able to individually judge the topic. The selected participants
are all Release Train Engineers; however, their experience differs. This is based on their answers to
the questions regarding their experience, which were asked by email before the focus group. Based on
their experience, the participants have been divided into groups. This difference could be a limitation
and is discussed in the limitations section.

The last condition is that participants should not have been previously involved in the research. This
condition is violated, as one of the participants was also present at the previous focus group. Another
participant had attempted to participate as an informant for the multiple informant methodology; this
participant, however, did not feel confident answering the questions, so the session was stopped and
the results were left out. These two participants, as well as one other participant, had previously
taken part in the survey that was done as part of this research. The implications of this violation are
handled in the limitations.

5.3.2. Practitioner focus group results


In this section the results of rounds 1 & 2 of the focus group are presented. The results of the other
rounds are not presented here, as those results are analyzed in Chapter 6. A full description of the
execution of the focus group can be found in Appendix X. The protocol used in the focus group is
visualized in Figure 30.

Figure 30: Practitioner focus group protocol visualization round 1 & 2

In the first round the participants started in two groups, divided based on previous participation.
Participants who had previously participated in the research were grouped together. Each group
identified elements that were specifically challenged by distributed. After both groups had finished, a
plenary session was held to identify the specifically challenged elements. From this session 13
elements were identified that are specifically challenged by distributed. The result of this round can
be found in Table 12.

The participants stated that some elements are challenged depending on the implementation. These
elements were: release management, DevOps, system team, release any time and customer.

Table 12: Practitioner focus group result of round 1

Specifically challenged by distributed: Agile Release Train / Value Stream (10), PI planning (16),
Inspect & Adapt (15), Communities of Practice (6), Implementing 1-2-3 (4), Release Train
Engineer (17), Feature (32), Enabler (33), Shared Services (25), Product Management (19),
Business Owners (20), Architectural runway (11), User Experience (26)

Undecided: System Demo (13), Core values (1), Release Management (24), DevOps (22), System
team (23), Release any time (9), Customer (21)

Not specifically challenged by distributed: Program Kanban (12), Lean-agile mindset (2), Weighted
Shortest Job First (8), System Architect (18), Vision (27), Roadmap (28), Lean-agile leaders (5),
Milestones & Releases (30), Metrics (29), SAFe principles (3), Value Stream coordination (7),
Solution Demo (14), PI Objectives (31), Epics (34)

In the second round the participants individually dot voted on the elements, both on likelihood and on
impact: likelihood being the likelihood that the element fails in a distributed setting, and impact being
the impact if the element does fail. Because the experience of the participants differs, they have been
categorized. Before the focus group, the participants were asked 3 questions regarding their experience
on the topics distributed, SAFe, and distributed SAFe. Based on their responses, the participants have
been categorized as beginner, intermediate, or expert. Using this categorization, the votes of the
participants in the beginner category have been omitted from the results. How the categorization was
done can be found in Appendix Y.

Same as in 5.2.2, combining the scores of likelihood and impact gives the risk of an element failing.
The individual votes of the participants on likelihood and impact can be found in Appendix Z. These
individual votes are summed in Table 13. Based on this, the calculated risk is visualized in Figure 31.
Table 13: Votes on likelihood and impact

Element Likelihood Impact Risk


PI planning (16) 31 26 806
Inspect & Adapt (15) 26 15 390
Feature (32) 11 16 176
Implementing 1-2-3 (4) 10 17 170
Release Train Engineer (17) 9 12 108
Shared Services (25) 16 6 96
Communities of Practice (6) 9 8 72
Agile Release Train / Value Stream (10) 5 11 55
Architectural runway (11) 6 6 36
Product Management (19) 2 9 18
Enabler (33) 3 1 3
Business Owners (20) 1 2 2
User Experience (26) 1 1 1

51
Figure 31: Graph of risk of SAFe element failing
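To make this scoring step concrete, the following sketch (in Python) shows how the risk ranking can
be computed: per-participant dot votes are filtered by experience category (beginner votes omitted),
summed per element, and combined as risk = likelihood * impact. The vote tuples below are
illustrative stand-ins; the actual votes can be found in Appendix Z.

    from collections import defaultdict

    # (category, element, likelihood dots, impact dots); illustrative values,
    # not the actual votes from Appendix Z.
    votes = [
        ("expert",       "PI planning (16)",     3, 2),
        ("intermediate", "PI planning (16)",     2, 1),
        ("beginner",     "PI planning (16)",     1, 4),  # omitted below
        ("expert",       "Inspect & Adapt (15)", 2, 1),
    ]

    likelihood, impact = defaultdict(int), defaultdict(int)
    for category, element, l_dots, i_dots in votes:
        if category == "beginner":
            continue  # beginner votes are left out of the results
        likelihood[element] += l_dots
        impact[element] += i_dots

    # Risk = (likelihood * impact), as in Table 13.
    risk = {e: likelihood[e] * impact[e] for e in likelihood}
    for element, score in sorted(risk.items(), key=lambda kv: -kv[1]):
        print(element, score)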

Figure 32 and Figure 33 show the distribution of the votes per expert group.

Expert distribution on likelihood (number of votes per experience category):

Element                                   Intermediate  Expert  Total
16. PI planning                                      9      22     31
15. Inspect & Adapt                                  9      17     26
32. Feature                                          1      10     11
4. Implementing 1-2-3                                6       4     10
17. Release Train Engineer                           0       9      9
25. Shared Services                                  5      11     16
6. Communities of Practice                           4       5      9
10. Agile Release Train / Value Stream               1       4      5
11. Architectural runway                             3       3      6
19. Product Management                               0       2      2
33. Enabler                                          1       2      3
20. Business Owners                                  0       1      1
26. User Experience                                  0       1      1

Figure 32: Graph expert distribution on likelihood of SAFe elements failing

Expert distribution on impact (number of votes per experience category):

Element                                   Intermediate  Expert  Total
16. PI planning                                      7      19     26
15. Inspect & Adapt                                  8       7     15
32. Feature                                          2      14     16
4. Implementing 1-2-3                               10       7     17
17. Release Train Engineer                           2      10     12
25. Shared Services                                  0       6      6
6. Communities of Practice                           2       6      8
10. Agile Release Train / Value Stream               1      10     11
11. Architectural runway                             3       3      6
19. Product Management                               4       5      9
33. Enabler                                          0       1      1
20. Business Owners                                  0       2      2
26. User Experience                                  0       1      1

Figure 33: Graph expert distribution on impact of SAFe elements failing

5.3.3. Analysis of practitioner focus group results
The top 3 elements of the likelihood ranking (PI planning, inspect & adapt, and shared services) and
the top 3 elements of the impact ranking (PI planning, implementing 1-2-3, and feature) were further
discussed in the focus group. Thus, the PI planning, inspect & adapt, implementing 1-2-3, feature, and
shared services were discussed further. Interestingly, in contrast to the theory, the system demo and
solution demo are not in the top 3. Moreover, they are not even identified as specifically challenged
by a distributed setting. The reason for this, according to the participants of the focus group, is that in
contrast to what the theory prescribes, in practice these events are done per team, not with the entire
Agile Release Train.

The elements implementing 1-2-3 and feature were not identified as specifically challenged by a
distributed setting in theory, but are challenged in practice. Implementing 1-2-3 is challenged because
the different locations are usually trained by different coaches, resulting in a different interpretation
of SAFe per location. This difference can cause serious problems when working together. Features are
challenged because, when work on a feature is spread across locations, there usually is a lack of
common understanding of the feature. Additionally, working across locations means that
dependencies can arise between locations. Together with a lack of common understanding, this can
become a serious problem.

From the results of the focus group, the Release Train Engineer has a higher risk of failing than shared
services. As in the expert focus group, the Release Train Engineer is therefore added to the elements
with a high risk of failure. Again, the elements with a risk of failing are marked red in the SAFe big
picture in Figure 34; all other elements are marked green.

Figure 34: Agile Release Train elements failing - result of practitioner focus group, created based on [1]

The reasoning of the focus group concerning each of the red elements is presented below.

5.3.3.1. PI planning
Both groups identified the PI planning (16) as being challenged by a distributed setting, without much
discussion. Doing a distributed PI planning is possible, but considerably harder. Additionally, this is the
key event around which the Agile Release Train organizes its work: if this event fails, everything that
will be done in the upcoming PI is compromised.

5.3.3.2. Inspect & Adapt


Same as for the PI planning, both groups categorized inspect & adapt (15) as being challenged by a
distributed setting without much discussion. Since it is part of the PI planning event, if the PI planning
becomes harder, the inspect & adapt does as well. Additionally, doing the inspect & adapt without
everyone present would not improve the Agile Release Train.

5.3.3.3. Feature
Both groups identified feature (32) as challenged by a distributed setting. There was some discussion,
because it is the implementation of the feature that is challenged, not necessarily the feature itself.
However, the groups decided that the feature is to be implemented by the Agile Release Train. This
implementation, when done distributed, is more difficult: when work on a feature is spread across
locations, there usually is a lack of common understanding of the feature. Additionally, working across
locations means that there will be dependencies between locations.

5.3.3.4. Implementing 1-2-3


Same as for the previous elements, both groups identified implementing 1-2-3 (4) as challenged by a
distributed setting without much discussion. Implementing 1-2-3 is challenged because the people
working at different locations are usually trained by different coaches. This results in a different
interpretation of SAFe per location.

5.3.3.5. Release Train Engineer


Both groups also identified the Release Train Engineer (17) as challenged by a distributed setting
without much discussion. The Release Train Engineer has to ensure that all teams work together and
that the Agile Release Train gets into a flow. This becomes significantly harder if those teams are
distributed.

5.3.3.6. Shared Services


In the beginning it was unclear what the shared services (25) are, but once that was clear the group
quickly decided that they are challenged by a distributed setting. The shared services are usually very
busy, so it is hard to get priority from those services. Therefore, teams must wait for the service or
start doing the work themselves. If the teams are distributed, teams might not even know that a
service exists, and getting priority from the service becomes even harder.

5.4. Result of triangulation


In this section, the result of the triangulation is presented, after which the four failing elements are
again described to conclude the chapter.

Combining the results of the three identifications gives two elements that are identified by each of
them: the PI planning and inspect & adapt, which thus have the highest risk of failing in a distributed
setting. Additionally, two elements are identified with a high risk of failing depending on the
implementation: DevOps and the system team. It should be noted that this does not mean that the
other elements will not fail; it merely states that these four are the most likely to fail. The result of the
triangulation can be found in Table 14.

Table 14: Overview of triangulation result

Element                       Literature    Expert focus group    Practitioner focus group
PI Planning X X X
Inspect & Adapt X X X
DevOps X X X*
System team X X X*
System Demo X X
Solution Demo X X
Shared Services X X
Release management X
Agile Release Train X
User Experience X
Program Kanban X
Feature X
Implementing 1-2-3 X
Release Train Engineer X
* Note that DevOps and system team were identified to fail depending on the implementation
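For illustration, the triangulation itself can be expressed as a set intersection. The sketch below (in
Python) uses a simplified reading of Table 14: the single-source, low-risk elements are left out, and
the assignment of the system demo and solution demo to the literature and expert columns is an
assumption based on the surrounding text.

    # Sketch: triangulation as set intersection; elements identified by all
    # three approaches have the highest risk of failing. Simplified reading
    # of Table 14, not an exhaustive transcription of its columns.
    literature = {"PI planning", "Inspect & Adapt", "DevOps", "System team",
                  "System Demo", "Solution Demo"}
    expert_focus_group = {"PI planning", "Inspect & Adapt", "DevOps",
                          "System team", "System Demo", "Solution Demo"}
    practitioner_focus_group = {"PI planning", "Inspect & Adapt", "DevOps",
                                "System team", "Shared Services", "Feature",
                                "Implementing 1-2-3", "Release Train Engineer"}

    print(sorted(literature & expert_focus_group & practitioner_focus_group))
    # -> ['DevOps', 'Inspect & Adapt', 'PI planning', 'System team']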

Besides these four elements, the system demo and solution demo are also expected to fail based on
theory. However, in practice these events are not done with the entire Agile Release Train.
Interestingly, one of the problems identified is “incorrect execution of SAFe”. Is the reason for this
that SAFe is applied incorrectly by the organization, or is SAFe itself incorrect and is the way it is
executed by certain organizations better? This question cannot be answered with the data gathered
during this research; more research is required.

In Figure 35, the elements that have a high risk of failing are marked red, those with a high risk of
failing depending on the implementation are marked yellow, and all other elements are marked green.

Figure 35: Agile Release Train elements failing - result of triangulation, created based on [1]

5.4.1. PI planning
The PI planning (16) is the key event in which the Agile Release Train synchronizes for the upcoming
PI. If this event fails, the next PI is compromised. Therefore, it is important that this event does not
fail. However, as indicated by both the experts on theory and the practitioners, when done distributed
there is a high chance that the PI planning fails. The biggest risk for distributed SAFe therefore is the
PI planning.

5.4.2. Inspect & Adapt


In the inspect & adapt meeting (15), the Agile Release Train reflects on the previous PI. Using the
results of this meeting, the Agile Release Train improves. If this improvement mechanism fails,
problems will not be picked up and the Agile Release Train can get stuck. As for the PI planning, both
the experts on theory and the practitioners indicate that there is a high chance that the inspect &
adapt meeting fails in a distributed setting. The second risk for distributed SAFe thus is the inspect &
adapt meeting.

5.4.3. DevOps
The experts on theory indicate that DevOps (22) might fail, whereas the practitioners indicate that
this depends on the implementation. In one implementation, DevOps is responsible for setting up and
supporting the release process by making builds and new versions. In other implementations, the
build process is fully automated and DevOps simply maintains the tools. Thus, depending on the
situation, DevOps failing can have serious consequences for the Agile Release Train. Therefore, the
third risk for distributed SAFe is DevOps.

5.4.4. System team


Same as for DevOps, the experts on theory indicate that the system team (23) might fail, whereas the
practitioners indicate that this depends on the implementation. The system team can be responsible
for the integration of the solution, while in other cases it is responsible for the architecture that
supports the other teams. Again, depending on the situation, the system team failing can have serious
consequences for the Agile Release Train. Thus, the fourth risk for distributed SAFe is the system team.

Chapter 6 Customizations of SAFe

In this chapter, a solution is proposed as an answer to the third research question: “What would SAFe
look like, when customized for distributed settings?”. First, customizations of SAFe based on theory
are presented. Second, customizations of SAFe based on practical experience are given, using the
results of the practitioner focus group. Finally, by combining the insights of both theory and practical
experience, a solution is proposed.

6.1. Customizations of SAFe based on theory: Literature


In this section, possible solutions on how to customize SAFe for distributed settings based on theory
are presented. Previous research shows that solutions are required for the PI planning, inspect &
adapt, DevOps, and the system team. Based on the insights of previous research and observations
made at two PI planning sessions, one distributed and one co-located, solutions for each of these
elements are presented.

6.1.1. Solutions for PI planning


Though rather straightforward, the first solution when working distributed is to fly all members of the
Agile Release Train to one location and do the PI planning co-located. Rotating the location prevents
the members of one location from feeling inferior because they always have to travel. Doing the PI
planning co-located enables the Agile Release Train to work distributed while not having to deal with
the risk of a distributed PI planning.

If doing the PI planning co-located is not an option, there are other solutions to reduce the risk of the
PI planning failing. For example, a video conference system can be used to get a live connection
between the locations. This way, the locations can see and hear each other even though they are
physically not together. The plenary parts of the PI planning can be done together, as though everyone
were in a single room. During the team breakouts, teams can also use the video conference system to
talk to teams at other locations. Notably, this solution is also given in the PI planning toolkit presented
by SAFe. This toolkit is not publicly available; it can be purchased and is available to SPCs. This solution
being in the toolkit thus supports it as a solution for the PI planning.

Another solution, if the PI planning is not done co-located, is to have a facilitator present at each
location. This way, the facilitators can prepare the PI planning together, so that each location is
properly prepared. Additionally, during the PI planning each location has its own facilitator ensuring
that everything runs smoothly. Finally, if things happen that affect multiple locations, the facilitators
can solve these together and make sure that all locations are updated on what is happening. Possibly,
these extra facilitators could also support the locations during the rest of the program increment, in
which case they become Release Train Engineers for their locations.

Additionally, if the locations are spread over multiple time zones, such that the overlapping time
window is less than 8 hours in a working day, the PI planning can be spread over more than two days.
For example, with a 6-hour overlap, doing the PI planning across 3 days ensures that all agenda items
can be done with all locations present. Note that if there are no overlapping hours during a working
day, at least one location will have to work late or early.
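As a back-of-the-envelope check, the sketch below (in Python) computes the number of days needed
for a given overlap window. The 16-hour agenda is an assumption, based on the standard two-day,
8-hours-per-day PI planning.

    import math

    def pi_planning_days(agenda_hours, overlap_hours):
        # Days needed to fit the PI planning agenda into the daily window
        # in which the working hours of all locations overlap.
        if overlap_hours <= 0:
            raise ValueError("No overlap: one location must work late or early.")
        return math.ceil(agenda_hours / overlap_hours)

    # A standard co-located PI planning takes two 8-hour days (assumption);
    # with a 6-hour overlap this gives the 3 days mentioned above.
    print(pi_planning_days(agenda_hours=16, overlap_hours=6))  # -> 3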

Finally, a digital program board and digital PI objectives can be used. This way, each location can see
and update the program board. Besides this, the digital PI objectives can be shared across the locations
of the Agile Release Train so that each location can see the progress of all teams.

Although all solutions presented above will help, without the use of video conferencing the other
solutions will not be sufficient for a successful PI planning.

These solutions are substantiated in the Elektra case study [16], which states the following regarding
the PI planning: “A Joint PI planning with all involved individuals in one place is the preferred
and most collaborative solution to get to jointly develop PI objectives. However, a downscaled co-
located planning meetings might be an alternative if e.g. budget do not allow for meeting face-to-face.
Well-functioning conferencing equipment (video conferencing, remote presentation etc.) along with
electronic tools to reflect different artifacts of the planning (e.g. planning board, risk board, scrum of
scrums etc.) are mandatory prerequisites for such meeting. The time difference is managed through
recorded presentations, and spreading the planning out on more days.”.

6.1.2. Solutions for Inspect & Adapt


Inspect & adapt is part of the PI planning, so the same solutions can be applied. The first is to fly all
members of the Agile Release Train to one location and do it co-located, possibly changing the location
after each PI.

If co-locating all members of the Agile Release Train is not possible, there are other solutions for
inspect & adapt, just as for the PI planning: using a video conference system, and having a Release
Train Engineer facilitating at each location. Additionally, if there is only a small overlapping window
between the locations, the inspect & adapt meeting could be held on another day, separated from
the PI planning.

Where the PI planning relies heavily on video conferencing, inspect & adapt does not rely on it as
heavily. Getting action points on how to improve can also be done using, for example, a feedback
form. Although the meeting works better with video conferencing, it can also be successful without it.

6.1.3. Solutions for DevOps


The risk of DevOps failing depends on how SAFe is implemented. The following solutions can be
applied in any implementation, though they might not fit every implementation equally well. The first
solution is to distribute the DevOps team over the locations. This way, each location has its own
DevOps specialist(s) who supports the teams of the Agile Release Train. The downside is that the
DevOps team becomes a distributed team, which can lead to problems of its own. However, such a
team can be successful, as supported by the articles on fully distributed Scrum [10], [11], and [12].

Another solution is to provide each location with its own DevOps team, so that each location can solve
everything itself and less communication between the locations is needed. The downside is that this
could result in the locations not working together: they might end up solving the same problem with
different solutions, which then have to be integrated again later. To prevent this, the different DevOps
teams need to meet regularly to discuss what they are working on.

The last solution is to have the DevOps team at one location, but have the team, or team members,
travel regularly to the other locations. This way the team does not have to work distributed and no
alignment between multiple DevOps teams is needed. However, the team members might not be
willing to travel on a regular basis.

Besides these solutions, there are solutions in which there is no separate DevOps team: for example,
integrating the DevOps expertise into the teams of the Agile Release Train, so that each team can do
the DevOps activities on its own, or automating the DevOps activities so that the teams do not require
DevOps. Although both solutions are good, they cannot be implemented directly. The transition
towards such a solution takes time; for this reason they are not considered in the proposed solution.

6.1.4. Solutions for system team


As for DevOps, the solutions can be applied in any implementation, though they might not fit every
implementation equally well. Again, the system team can be distributed over the locations so that
each location has its own specialist. This way, when integrating, the system team knows what is going
on at each location. This has the same downside as for DevOps: the system team becomes a
distributed team, which can again be successful according to [10], [11], and [12].

As for DevOps, a co-located system team combined with regular traveling could be a solution. Having
multiple system teams is not a solution, because the work of these system teams must be integrated,
which only adds more overhead.

6.2. Customizations of SAFe based on practical experience: Practitioner focus group


In this section, solutions on how to customize SAFe based on practical experience are given, using the
results of the second half of the practitioner focus group. In this second half, the participants looked
into solutions for the top 3 elements on likelihood and on impact. The group identified solutions for
the PI planning, inspect & adapt, feature, implementing 1-2-3, and shared services. This was done in
two groups; these groups presented their solutions to each other, after which the participants dot
voted on impact and difficulty. The protocol used for this part of the focus group is visualized in
Figure 36.

Figure 36: Practitioner focus group protocol visualization round 3 & 4

6.2.1. Practitioner focus group results
The solutions identified were ranked based on difficulty and impact. Difficulty was scored inversely:
solutions that are easy to realize got a high score, and solutions with high impact also got a high score.
Combining difficulty and impact gives a solution score, using the formula Solution Score = (difficulty *
impact); the higher the score, the better the solution. For each of the solutions, the votes and solution
score are presented in Table 15 to Table 19. The solution scores are visualized in Figure 37 to Figure 41.
The individual votes can be found in Appendix CC.
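To make the scoring and selection reproducible, the sketch below (in Python) computes the solution
scores and selects the solutions that score above the average for their element, the selection used in
6.2.2. The vote totals are copied from Table 15 (PI planning).

    # Sketch: solution score = difficulty * impact; solutions scoring above
    # their element's average are kept as high-potential (see 6.2.2).
    votes = {  # solution: (difficulty, impact), totals from Table 15
        "16.1 Features ready before PI planning": (15, 7),
        "16.3 Good communication tools": (10, 9),
        "16.2 Minimal 1 x period PI Planning on-site with all teams": (5, 14),
    }

    scores = {name: difficulty * impact
              for name, (difficulty, impact) in votes.items()}
    average = sum(scores.values()) / len(scores)

    ranked = sorted(scores.items(), key=lambda kv: -kv[1])
    high_potential = [name for name, score in ranked if score > average]
    print(ranked)          # scores 105, 90, 70
    print(high_potential)  # the two above-average solutions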

Interestingly, some of the solutions are already part of SAFe, for example “features ready before PI
planning” and “make sure that each feature has 1 owner to manage dependencies”. The practitioners
nevertheless identified these as helpful, possibly because these practices are not expressed clearly
enough in SAFe and were therefore not recognized as being part of it.
Table 15: 16. PI planning - votes on difficulty & impact

Solution                                                     Difficulty  Impact  Solution Score
16.1 Features ready before PI planning                           15        7         105
16.3 Good communication tools                                    10        9          90
16.2 Minimal 1 x period PI Planning on-site with all teams        5       14          70


Figure 37: 16. PI planning solution score graph

Table 16: 15. Inspect & Adapt - votes on difficulty & impact

Solution                                      Difficulty  Impact  Solution Score
15.1 Good Communication tools                     12        13        156
15.3 First I&A meeting on 1 location               9        12        108
15.2 Divide the topics over the locations          9         5         45


Figure 38: 15. Inspect & Adapt solution score graph

Table 17: 32. Feature - votes on difficulty & impact

Solution                                                              Difficulty  Impact  Solution Score
32.3 Make sure that each feature has 1 owner to manage dependencies       14        12        168
32.1 Keep coordinating dependencies (Scrum of Scrums)                     11        10        110
32.4 Small features                                                        9        11         99
32.2 Have common understanding on features                                 6         7         42


Figure 39: 32. Feature solution score graph

Table 18: 4. Implementing 1-2-3 - votes on difficulty & impact

Solution                                                       Difficulty  Impact  Solution Score
4.3 Strong vision promoted top down                                16        14        224
4.1 Have 1 team to coordinate the trainings & implementation        8        11         88
4.2 Coaches know culture & problems of location                     6         5         30


Figure 40: 4. Implementing 1-2-3 solution score graph

Table 19: 25. Shared services - votes on difficulty & impact

Solution                                                           Difficulty  Impact  Solution Score
25.4 Involvement during planning events                                12        12        144
25.2 Services give commitment                                          11        13        143
25.1 Distribute features by shared service impact (focus areas)         9         8         72
25.3 1 x per period visit each team visibility                          8         7         56


Figure 41: 25. Shared services solution score graph

6.2.2. Analysis of practitioner focus group results
During the focus group, multiple possible solutions were identified. However, most of the elements
that were discussed do not correspond with the elements that resulted from the triangulation; only
the PI planning and inspect & adapt do. The solutions for these two elements are discussed further.

Not all solutions have the same potential. Based on the solution score, the solutions with high
potential can be selected. The solutions that scored above average for their element are listed below.

For the PI planning:

• Features ready before PI planning
• Good communication tools

For inspect & adapt:

• Good Communication tools
• First inspect & adapt meeting on 1 location

The inspect & adapt meeting is part of the PI planning. Therefore, these solutions can be combined.
The features that will be discussed in the PI planning should be ready: all teams should know them
and should have prepared for the PI planning. During the PI planning and inspect & adapt meeting,
the communication tools between the sites should be good; there can be no problems with the tools
during the event. Additionally, the first inspect & adapt should be done co-located. If the PI planning
is done co-located, this can be done directly; otherwise, it should be done at a later moment. This way,
any problems with the PI planning can be solved together.

As stated previously, features should already be ready before the PI planning according to SAFe. When
working distributed, there should be more emphasis on this. Therefore, in the proposed solution this
is done by having a Release Train Engineer at each location who makes sure that the teams prepare
for the PI planning.

6.3. Combining theory and practical experience


In this section, a solution is proposed on how to customize SAFe for distributed settings. This solution
is created by combining the insights of both theory and practical experience.

6.3.1. Proposed solution for distributed SAFe


Preferably both the PI planning and inspect & adapt meeting are done co-located. However, if this is
not possible the following solution is proposed, based on the insights from theory combined with the
insights from practical experience.

Each location has its own Release Train Engineer. Before the PI planning each Release Train Engineer
ensures that all teams at his or her location prepare the features for PI planning. During the PI planning
and inspect & adapt the Release Train Engineer facilitates the events for his or her location.

By using video conferencing systems during the PI planning and inspect & adapt, plenary sessions are
done with all locations together. This conference system can also be used to contact the other
locations during the team breakouts. A digital program board is used so all locations can see and
update the board. And finally, if needed the PI planning is extended over multiple days to ensure that
the locations can work together simultaneously.

The DevOps team works as one team at one location and travels regularly to the other locations to
answer questions and help the teams. The system team is distributed over the locations, so its
members know what is going on at each location when the team must integrate the solution.

6.3.2. Solving the problems


To give an overview, the presented solutions are numbered as follows.

1. Having a Release Train Engineer at each location
2. Using a video conferencing system
3. Using a digital program board & digital PI objectives
4. Extending the PI planning over multiple days
5. Regular traveling of the DevOps team to the other locations
6. Distributing the system team over all locations

In Table 20, these 6 solutions are mapped to the problems and elements identified in the previous
studies. It should be noted that the solutions all help towards preventing the problems from
happening; however, even with these solutions the problems can still occur.
Table 20: Solutions for problems and failing elements

Problem \ Element                  PI planning   Inspect & Adapt   DevOps   System team
Incorrect execution of SAFe             1               1
Language barriers                       3               3
Time zone differences                   4               4              5          6
Increased communication effort          2               2              5          6
Inefficient communication tools         2               2              5          6

6.3.2.1. Incorrect execution of SAFe


By having a Release Train Engineer at each location, the risk of incorrect execution of SAFe is limited.
The Release Train Engineer ensures that each location executes the PI planning and inspect & adapt
correctly. Additionally, during the PI, the Release Train Engineer can guide the teams at his or her own
location to ensure correct execution of SAFe at that location.

6.3.2.2. Language barriers


By using a digital program board and digital PI objectives, each location can see the dependencies
between the teams and see what the other teams are planning to do. When talking to a team at a
different location (with a different native language) these digital artefacts provide additional context
to understand each other. Although this does not solve the problem of language barriers, it does help
to mitigate the problem.

6.3.2.3. Time zone differences


By extending the PI planning and inspect & adapt over multiple days, depending on the overlap
between time zones, time zone differences are overcome. The events are done during the overlapping
hours of the time zones. If there are no overlapping hours during a working day, at least one location
will have to work late or early.

Additionally, with regular traveling of the DevOps team, the team is present at each location for some
time, regardless of the time zone. Finally, distributing the system team over the different locations
ensures that, regardless of the time zone, there is always a team member of the system team available
for the teams of each location.

6.3.2.4. Increased communication effort


By having a video conferencing system in place, the communication effort required to contact another
location is brought down. During a session with the conferencing system, questions can be asked
directly to the other location rather than having to send an email after watching a recorded session.

Additionally, with regular traveling of the DevOps team, questions for the DevOps team can be asked
directly to the team when they are on location. Finally, by distributing the system team over the
different locations, questions can always be asked directly to the member of the system team at the
location.

6.3.2.5. Inefficient communication tools


By using a video conferencing system, all locations can see and hear each other live, limiting the effect
of inefficient communication tools. Though not as efficient as sitting together in the same room, this
is more efficient than using e-mail or other asynchronous communication tools. Using a video
conferencing system can still be problematic if the connection is bad or fails. Therefore, this does not
fully overcome the problem of inefficient communication tools; it does, however, help to limit the
negative impact.

Besides this, with regular traveling of the DevOps team, and distributing the system team over the
different locations, no communication tools are needed to contact the DevOps or system team.

Chapter 7 Discussion

In this chapter the methodologies and results of this research are discussed. First, the answers to each
of the research questions are presented. Second, the limitations of the methodologies and their effect
on this research are discussed. Third, a reflection on the research is given. Finally, recommendations
for future research are proposed.

7.1. Answers to the research questions


In this section, the answers to the research questions are presented.

7.1.1. Answer to research question 1


To provide an answer to the first research question, “What problems can be expected when SAFe is
applied in distributed settings?”, first a Systematic Literature Review was done on problems in
Distributed Agile Development. Second, the problems resulting from this review were filtered using a
multiple informant methodology with a consensual approach.

As no literature was found on problems of distributed SAFe, a Systematic Literature Review was done
into problems of Distributed Agile Development. These problems can also occur when applying SAFe
in a distributed environment. However, if SAFe has practices to mitigate a problem, the problem is not
expected to occur.

Therefore, these problems have been filtered using a multiple informant methodology with a
consensual approach. The informants have mapped the problems on the core values of SAFe to
determine if the problems are considered in SAFe. The problems that are not considered in SAFe are
threats to SAFe. The informants have also mapped these threats on the core values of SAFe to
determine what the impact of the threat is. The threats (problems) with high impact are expected to
occur when applying SAFe in a distributed setting. This resulted in the following five distributed SAFe
problems.

1. Incorrect execution of SAFe
2. Language barriers
3. Time zone differences
4. Increased communication effort
5. Inefficient communication tools

These problems are expected to occur when applying SAFe in distributed settings. Note, however, that
these problems do not occur exclusively when working distributed.

The findings give insight into the problems that can be expected in distributed SAFe. Because no
literature was found on the problems of distributed SAFe, these findings add knowledge to the
research area of Distributed Agile Development.

7.1.2. Answer to research question 2


To provide an answer to the second research question, “What SAFe elements can be expected to fail
when applied in distributed settings?”, triangulation was used. First, failing elements were identified
based on the distributed SAFe problems. Second, a focus group with both SAFe and distributed experts
was done to find an answer based on theory: the expert focus group. Third, a focus group with
practitioners was done to find an answer based on practice: the practitioner focus group.

The overlap of these identification methods revealed the following four elements that can be
expected to fail when SAFe is applied in a distributed setting.

1. PI planning
2. Inspect & Adapt
3. DevOps
4. System team

Additionally, based on theory, the system demo and solution demo would also be expected to fail.
However, in practice these are not done with the Agile Release Train. Therefore, these items are not
expected to fail when executed in a distributed setting.

These findings correspond to the elements that one would intuitively expect to fail when applying
SAFe in a distributed setting. The added value of this research is that this expectation is now confirmed
scientifically. The result of this research can be used as a base on which to build further research on
distributed SAFe.

7.1.3. Answer to research question 3


The third research question, “What would SAFe look like, when customized for distributed settings?”,
has not been answered conclusively in this research. However, based on theory, this research does
provide an indication of what the answer could be. Based on the problems that are expected to occur,
a solution is proposed for the elements that are expected to fail. Thus, theoretically, the use of the
proposed solution in distributed settings is more likely to succeed than applying standard SAFe.
Further research is required to validate whether this solution works in practice.

7.1.3.1. Proposed solution for distributed SAFe


Preferably both the PI planning and inspect & adapt meeting are done co-located. However, if this is
not possible the following solution is proposed, based on the insights from theory combined with the
insights from practical experience.

Each location has its own Release Train Engineer. Before the PI planning each Release Train Engineer
ensures that all teams at his or her location prepare the features for PI planning. During the PI planning
and inspect & adapt the Release Train Engineer facilitates the events for his or her location.

By using video conferencing systems during the PI planning and inspect & adapt, plenary sessions are
done with all locations together. This conference system can also be used to contact the other
locations during the team breakouts. A digital program board is used so all locations can see and
update the board. And finally, if needed, the PI planning is extended over multiple days to ensure that
the locations can work together simultaneously.

The DevOps team works as one team at one location, and travels regularly to the other locations to
answer questions and help the teams. The system team is distributed over the locations so they know
what is going on at the different locations when the team must integrate the solution.

7.2. Limitations
In this section, the limitations of the methodologies, as described in Chapter 3, and their effects on
this research are discussed.

7.2.1. Limitations of the Systematic Literature Review


All conditions for the Systematic Literature Review have been fulfilled, thus no limitations come from
conditions not being fulfilled.

A limitation of using a Systematic Literature Review is that the results might be biased. For the
Systematic Literature Review done in this research, this effect is limited, as the review only considered
other Systematic Literature Reviews. Besides this, investigating two other reviews of the Systematic
Literature Reviews on Distributed Agile Development provided no new studies. Though there is always
some bias, this way the bias is minimized as much as possible.

Another limitation is that the frame of reference is different for each person. This might result in a
slightly different execution of the protocol when applied by another researcher. These small
differences can lead to different papers being accepted and different data being extracted. Though
the transparency provided by using a predefined protocol does not avoid this, it does make the
decisions traceable and repeatable.

Finally, the use of a Systematic Literature Review is limited to previously published studies. However,
for this research, this is sufficient as the Systematic Literature Review is done to discover the current
problems in Distributed Agile Development. Based on these problems, steps are taken, rather than
trying to gain new insights from the Systematic Literature Review.

7.2.2. Limitations of the multiple informant methodology


All conditions for the multiple informant methodology have been fulfilled, thus no limitations come
from conditions not being fulfilled.

Using multiple informants, one could wonder whether the informants are able to sufficiently judge
the topic of SAFe. Because the informants all meet the selection criteria presented in 3.3.1, they should
be able to do so. Moreover, all selected informants are SPCs, and are thus certified to give training on
SAFe. Therefore, the informants are able to sufficiently judge the topic of SAFe. Besides insight into
SAFe, insight into the topic of distributed development also seems required. However, as the problems
judged by the informants come from the Systematic Literature Review done previously, this insight is
based on that literature study. These problems, and the corresponding insights, are therefore provided
by the researcher.

Another limitation is that the informants might not be able to provide a broad enough view to make
a sufficiently substantiated decision. As the selected informants have implemented SAFe in different
companies, this difference in experience should be sufficient to provide a broad enough view. It should
be noted that the experts all work at Prowareness. However, this is not where their SAFe experience
comes from, so their backgrounds on SAFe are sufficiently different.

Finally, the use of the consensual approach might not provide the preferred answer; other ways to
aggregate the answers could provide a better outcome. However, as described in 4.2, for each mapping
the changes that were made when consensus was reached did not change the outcome. Thus, the use
of a consensual approach did not limit the result of the methodology.

7.2.3. Limitations of the expert focus group


The first limitation from 3.4.3 is that data analysis for a focus group is complex and can lead to
unwarranted conclusions. To gather warranted, measurable results, the discussions in the focus group
have been structured around the SAFe elements, and the participants were asked to dot vote
individually on the results of the discussions. This dot voting resulted in rankings, which are warranted.
Besides this, the structure makes transparent where the conclusions come from. Additionally, all data
gained from the focus group can be found in the appendixes, which allows researchers to interpret
the data for themselves and derive their own conclusions.

Due to differences in expertise, the condition that all participants should be able to individually judge
the topic when asked is not fulfilled. This is thus a limitation. To handle this limitation, it must be
made plausible that all participants have reached a level of expertise at which they can properly judge
the topics.

Before the voting is done there are three steps in which the expertise of the participants is increased.
After these steps all participants are expected to be able to properly judge the topics. Moreover, all
participants have to agree with the elements that will be voted on. Participants are not expected to
agree to things that they do not understand.

The growth of knowledge before the participants vote individually is visualized qualitatively in Figure
42 and Figure 43; in both figures, the red line visualizes the minimal expertise required to properly
judge the topics. No action was taken during the session to verify this knowledge level, because
questions regarding the knowledge on the topics could influence the participants.

[Qualitative line chart: SAFe knowledge level of the SAFe consultants, distributed experts, and
practitioners at four stages: starting point; preparation before the session (individually); discussion
of elements and problems in groups; plenary discussion. A red line marks the minimal required
expertise.]
Figure 42: Visualization SAFe expertise participants focus group

In the first step, preparation before the session, neither the SAFe consultants nor the practitioners are
expected to receive new information regarding SAFe. The distributed experts will receive a lot of new
information regarding SAFe, and will therefore improve their knowledge significantly. In the second
and third steps, through discussion in a group, all participants can gain new insights, increasing the
knowledge level of all participants. The distributed experts, however, will gain more than the SAFe
consultants and practitioners. In the plenary discussion the increase is smaller, as participating in a
bigger group is harder. After these three steps, the knowledge level of all participants is expected to
be high enough to properly judge the topic of SAFe by themselves.

[Qualitative line chart: distributed knowledge level of the SAFe consultants, distributed experts, and
practitioners at the same four stages. A red line marks the minimal required expertise.]

Figure 43: Visualization distributed expertise participants focus group

Similarly, neither the practitioners nor the distributed experts are expected to receive new information
regarding the problems of distributed development. The SAFe consultants will receive some new
information regarding these problems, and will therefore improve their knowledge. The second and
third steps proceed similarly: all participants can gain new insights in the group discussions, increasing
the knowledge level of all participants. The SAFe consultants, however, will gain more than the
distributed experts and practitioners. After these three steps, the knowledge level of all participants
is expected to be high enough to properly judge the topic of distributed development.

7.2.4. Limitations of the practitioner focus group


The first limitation from 3.4.3 is that data analysis for a focus group is complex and can lead to
unwarranted conclusions. To gather warranted measurable results, the discussions in the focus group
have been structured around the SAFe elements, and the participants are asked to dot vote
individually on the results of the discussions. This dot voting results in rankings, which are warranted.
Besides being warranted, this structure also makes transparent where the conclusions come from.
Additionally, all data gained from the focus group can be found in the appendixes, this allows
researchers to interpret the data for themselves and derive their own conclusions.

Due to differences in expertise, the condition that all participants should be able to individually judge
the topic when asked is not fulfilled. This is thus a limitation. To handle this limitation, it must be made
plausible that all participants have reached a level of expertise at which they can properly judge the
topics.

To be able to judge the topics, the level of expertise on both distributed development and SAFe has
to be sufficient. The selected participants are all Release Train Engineers; however, their experience
differs. Based on their answers to the questions asked before the focus group via email, the
participants have been grouped in three categories: beginner, intermediate, and expert. For the results
of the focus group, only the votes of the experts and intermediates are taken into account. Though
the votes of the beginners are not taken into account, their participation in the discussions is of value.

Another option to ensure everyone would be able to individually judge the topics would be to provide
all participants with extra information on the topics of distributed and SAFe. However, because the
goal of the focus group is to gain insight based on practice, the choice was made not to provide the
participants with additional information.

The second condition, that the participants should not have been previously involved in the research,
was violated. This is therefore also a limitation. To handle it, it must be made plausible that the
previous involvement did not affect the focus group.

One of the participants was present at the previous focus group. Another participant had attempted
to participate as an informant for the multiple informant methodology; this participant, however, did
not feel confident answering the questions, so the session was stopped and the results were left out.
These participants, as well as one other participant, had previously taken part in the survey that was
done as part of this research. For each of these participants, the time between their participation in
the survey and the focus group was roughly 2 months. For the previous focus group this was roughly
4 months, and for the informant session roughly 6 months. For none of the participants is the research
part of their daily work.

Additionally, in the previous focus group and in the informant session, the participants were asked to
give insight based on theory, whereas in this focus group they were asked to provide insight based on
practice. Therefore, given the time that has passed since their participation, and the fact that the
participants are asked for practical insights, it is unlikely that their previous participation affected the
outcome of the focus group.

7.3. Reflection
In this section, it is reflected upon how the methodologies that are used provide a sufficiently
substantiated answer to the research questions.

7.3.1. Reflection on research question 1


To answer the first research question, both a Systematic Literature Review and a multiple informant
methodology were used. The data on the Distributed Agile Development problems from the
Systematic Literature Review was combined with SAFe during the multiple informant methodology.
In the step from the Systematic Literature Review to the multiple informant methodology, 2 problems
were dismissed: “silence of participants” and “increased number of sites”. It might have been better
not to dismiss these problems, because no statistical argument can be made regarding this choice.
However, because the rationale for dismissing the problems, avoiding coincidental findings affecting
the research, still holds, this remains a defensible, if somewhat arbitrary, choice. Additionally, using a
multiple informant methodology might not have been the best approach.

First, the perspective provided by the informants can be considered too limited. In the application of
the methodology, the selected informants were all SAFe experts. The informants' experience with
distributed development was primarily derived from information provided by the researcher, which
was based on the results of the Systematic Literature Review. By design, the informants' insights thus
come from the SAFe perspective, rather than the distributed perspective. For answering a question
which requires both SAFe and distributed knowledge, having only SAFe experts can be considered too
limited. Including experts from the field of distributed development would have resulted in a broader,
more complete perspective.

Second, the consensual approach can be criticized for not generating enough discussion. When
consensus was being reached, one of the three informants was not present. Though the informant
agreed with the consensus that was found, he did not participate in the discussion. It would have been
better if the informant had participated in the discussion; this, however, was not possible due to
circumstances.

Third, there is no insight into the data behind each cell of the mapping of the problems on the core
values. The mapping provides a good overview of the data. However, because of the way that the
informants were asked to fill in the mapping, it did not provide insight into their reasoning. Only where
the informants had different views was their reasoning given during the discussion, and this was not
documented. It would have been better if the reasoning for each cell of the mappings had also been
documented.

These reflections do not disqualify the approach taken. However, given these reflections, it can be
argued that the answer to the research question could be better substantiated using the previously
described adjustments. The multiple informant methodology provided validated insights using
relatively few resources. In hindsight, using a different approach with more emphasis on discussion,
such as a focus group, could have yielded more insightful results. Besides this, such an approach would
have enabled the inclusion of distributed experts as well as SAFe experts. This, however, would have
required considerably more time and resources, limiting the time available to investigate the other
research questions.

7.3.2. Reflection on research question 2


To answer the second research question, triangulation was used. By combining insights from
literature, theory, and practice a substantiated conclusion is drawn on the elements that fail when
SAFe is applied in a distributed setting. For both the theoretical and practical insights a focus group
was used. The use of a focus group to gain theoretical insights is logical, as it enables the researcher
to combine the insights of distributed experts with those of SAFe experts.

To gain practical insights, initially a survey in the SAFe community was done. However, due to the low
response, the results of the survey could not be sufficiently substantiated. For this reason, the results
were excluded from the research and another method for gaining insights from practice was required.
At this point, the choice was made to conduct another focus group. Although the survey is not used,
it would have been good to analyze why the response to the survey was so low. This, however, would
not provide an answer to the research question, and was therefore not done.

The use of a focus group for gaining insights from practice can be argued against. First, there are only
11 participants, while globally there are many more SAFe practitioners. The question can be asked
whether these 11 participants are a good representation of all SAFe practitioners. Second, in the focus
group, discussion was used to reach agreement. However, reaching agreement was not always
possible, because implementations can differ. Therefore, the use of a focus group for gaining practical
insights might not have been the best approach; the use of, for example, multiple case studies might
have been better.

7.3.3. Reflection on research question 3


No answer to the third research question is given, only an indication of what the answer could be. As
indicated, the proposed solution should be validated in practice. This validation was not possible
within the timeframe of this research. If the results of the survey had been sufficiently substantiated,
there would have been time to validate the third research question.

The proposed solution itself is not strongly substantiated, again because of the time constraints of this
master thesis project. Support for this solution could be increased by verifying it with practitioners,
for example via mail or a survey. However, even with that support, it should still be validated in
practice.

7.3.4. Reflection on the results of the research


This research has focused on the Agile Release Train. Although the Agile Release Train is a core
construct of SAFe, as argued in 2.4, SAFe consists of much more than this. This raises two questions
regarding the research. First, it can be questioned whether the results of this research are applicable
to SAFe as a whole, or only to the Agile Release Train. Second, it can be asked whether the results can
be viewed as an answer to the research questions for SAFe, or only for the Agile Release Train.

Both questions are relevant, and to find a definite answer to them, research into all aspects of SAFe
should be done. However, as stated previously, the Agile Release Train is a core construct of SAFe. As
such, the results of the research are applicable to SAFe, although possibly not to all elements. To
discover this, additional research must be done.

The answer to the second question, whether the results can be viewed as an answer for SAFe or only
for the Agile Release Train, differs per research question. For the first research question, the problems
that are expected to occur are not specific to the Agile Release Train, so this answer can be viewed as
an answer for SAFe. For the second and third research questions, the research looked specifically into
the Agile Release Train, and thus these answers hold for the Agile Release Train only.

Additionally, the answers found in this research are rather obvious. Therefore, one could ask what the
added value of this research is. The added value of this research is that these rather obvious answers
are now confirmed scientifically. This confirmation has been done using different techniques, which
have been documented extensively. This gives a result that can be used as a base on which to build
further research on distributed SAFe.

The focus of this research was on the problems and failing elements that are expected to occur when
SAFe is applied in a distributed setting. Regrettably, this means that the question “is distributed SAFe
possible?” cannot be answered based on this research. Based on the results, it would seem that it is
possible, provided the right precautions are taken to ensure a successful PI planning and inspect &
adapt.

In hindsight, this research has focused on discovering what goes wrong when applying SAFe in a
distributed setting, rather than on how to prevent or solve these problems. After the first focus group,
the focus of the research could have switched towards solving the problems for the PI planning.
However, if this had been done, the answer to the second research question would not have been
substantiated further, leaving an open end to that part. Personally, I think that from a scientific
viewpoint the choice to further substantiate the second research question was correct, although from
the viewpoint of a company this was not preferred, as it did not yield surprising new insights.

On a personal note, I think that the solution for the PI planning derived from the data of this research
is not the best possible solution. I think it would be better to do, in addition to the PI planning, a mini
PI planning every team iteration. Before the teams do their iteration planning sessions, the Agile
Release Train comes together, possibly digitally, to briefly discuss what features the teams will be
working on in the upcoming iteration. This way, teams know what the other teams will be working on,
and where to go for help. Additionally, this reduces the risk of problems occurring when the PI planning
fails: the mini PI planning can ensure that the train stays on track. Although this may be a good
solution, it is completely theoretical and not supported by this research. It could be interesting to
investigate this solution in future research.

Finally, this research delivers problems and elements that are expected to fail, as well as a proposed solution. Neither the problems, the elements, nor the solution have been validated in practice. Some validation was done using a focus group, but no case study or anything of the sort has been done. However, validation using a single case study is not sufficient, and doing multiple case studies requires much time and resources. Validation was also attempted with a survey, which would have provided a good validation from practice; regrettably, this survey did not receive enough responses to be included in the results.

7.4. Recommendations for future research


Based on the reflection on each of the research questions, the answers to all of them could be better substantiated. For the first research question, it is recommended that the answer is validated, for example by using a focus group. For the second research question, it is recommended that the answer is further substantiated by doing an in-depth case study. For the third research question, it is recommended that the proposed solution is validated with practitioners and tested in practice.

Additionally, this research has mostly focused on the Agile Release Train. Additional research must be done to discover whether the results also hold for SAFe as a whole or only for the Agile Release Train.

Though all these recommendations are valid, the next step in this research would be to test the proposed solution in the field. Problems observed during this test bring us back to a reformulated version of the first research question: "What problems occur when the customized version of SAFe is applied in a distributed setting?" Based on these problems, the second research question can be reformulated as "What SAFe elements of the customized version of SAFe fail when applied in a distributed setting?". The answers serve as input to further improve the customized version of SAFe.

Furthermore, the practitioners mentioned that they did not do the system demo and solution demo with the Agile Release Train, as proposed by the theory. Interestingly, one of the problems identified is "incorrect execution of SAFe". Is the reason for this that SAFe is applied incorrectly by the organization, or is SAFe itself incorrect and the way it is executed by certain organizations better? Additional research must be done to answer these questions.

Besides this, it could be interesting to investigate the solution of bi-weekly mini PI planning sessions in addition to the PI planning, to reduce the risk of the train derailing due to a failed PI planning.

Finally, based on these recommendations, a list of questions for future research is proposed: first, regarding the problems and failing elements identified in this research; second, regarding the difference between theory and practice of the system demo and solution demo; third, regarding the customized version of distributed SAFe; and fourth, regarding the bi-weekly mini PI planning.

• Can the problems that are expected to occur when SAFe is applied in a distributed setting, identified based on theory, be verified using a focus group?
• Are the failing elements, identified for the Agile Release Train, representative for all of SAFe?

• What is the reason that practitioners execute the system demo and solution demo in a different way than prescribed by theory?
• What is the consequence of practitioners executing the system demo and solution demo in a different way than prescribed by theory?

• What problems occur when the customized version of SAFe is applied in a distributed setting?
• What SAFe elements of the customized version of SAFe fail when applied in a distributed setting?

• Can a bi-weekly mini PI planning help the Agile Release Train to stay on track?

Chapter 8 Conclusion

In this chapter, a summary is given of the approach taken to answer the research questions, and the conclusions for each research question are presented.

8.1. Summary
In this research, the following three research questions have been answered:

1. What problems can be expected when SAFe is applied in distributed settings?
2. What SAFe elements can be expected to fail when applied in distributed settings?
3. What would SAFe look like, when customized for distributed settings?

To answer the first research question, a Systematic Literature Review was done on the topics of Distributed Agile Development and Distributed Scrum. This resulted in a set of Distributed Agile Development problems. To relate these problems to SAFe, a multiple informant methodology with a consensual approach was used. This methodology filtered the problems and identified the distributed SAFe problems.

To answer the second research question, triangulation was used. First, the author identified failing elements based on the distributed SAFe problems found for the previous research question. Second, a focus group with both SAFe and distributed experts was held to find an answer based on theory: the expert focus group. Third, a focus group was held to find an answer based on practice: the practitioner focus group. The overlap of these identification methods provided the answer to the research question.

To answer the third research question, insights from the author, based on the previous research, were
combined with the results of the second part of the practitioner focus group.

8.2. Conclusion
The following five problems with distributed SAFe were found as the answer to the first research question: "What problems can be expected when SAFe is applied in distributed settings?".

1. Incorrect execution of SAFe
2. Language barriers
3. Time zone differences
4. Increased communication effort
5. Inefficient communication tools

These findings give insight into the problems that can be expected in distributed SAFe. Because no literature was found on the problems of distributed SAFe, these findings add knowledge to the research area of Distributed Agile Development.

The answer to the second research question, "What SAFe elements can be expected to fail when applied in distributed settings?", is visualized in Figure 44. The following four elements are expected to fail, of which DevOps and the system team depend on the implementation.

1. PI planning
2. Inspect & Adapt
3. DevOps
4. System team

Figure 44: Agile Release Train elements failing, created based on [1]

These findings correspond to the elements that one would expect to fail when applying SAFe in a
distributed setting. The added value of these findings is that the expectation is now confirmed
scientifically. Therefore, the result of this research can be used as a base on which to build future
research on distributed SAFe.

No answer has been found to the third research question: "What would SAFe look like, when customized for distributed settings?". However, based on theory, this research does provide an indication of what the answer could be. Based on the problems that are expected to occur, a solution is proposed for the elements that are expected to fail. Theoretically, therefore, applying the proposed solution in distributed settings is more likely to succeed than applying standard SAFe.

8.2.1. Proposed solution for distributed SAFe


Preferably, both the PI planning and the inspect & adapt meeting are held co-located. However, if this is not possible, the following solution is proposed, based on insights from theory combined with insights from practical experience.

Each location has its own Release Train Engineer. Before the PI planning, each Release Train Engineer ensures that all teams at his or her location prepare the features for the PI planning. During the PI planning and inspect & adapt, the Release Train Engineer facilitates the events for his or her location.

By using video conferencing systems during the PI planning and inspect & adapt, plenary sessions are held with all locations together. The conferencing system can also be used to contact the other locations during the team breakouts. A digital program board is used, so all locations can see and update the board. Finally, if needed, the PI planning is extended over multiple days to ensure that the locations can work together simultaneously.

The DevOps team works as one team at one location and travels regularly to the other locations to answer questions and help the other teams. The system team is distributed over the locations, so that its members know what is going on at the different locations when the team must integrate the solution.

Bibliography

[1] D. Leffingwell, A. Yakyma, D. Jemilo, R. Knaster and I. Oren, "Scaled Agile Framework," Scaled Agile, Inc., [Online]. Available: http://www.scaledagileframework.com/. [Accessed 8 February 2016].

[2] J. D. Herbsleb and D. Moitra, "Global software development," IEEE software, vol. 18, no. 2, pp.
16-20, 2001.

[3] K. Dullemond and B. van Gameren, "Technological support for distributed agile development,"
TU Delft, Delft University of Technology, 2009.

[4] J. D. Herbsleb, "Global software engineering: The future of socio-technical coordination," 2007.

[5] J. D. Herbsleb and A. Mockus, "An empirical study of speed and communication in globally
distributed software development," IEEE Transactions on software engineering, vol. 29, no. 6,
pp. 481-494, 2003.

[6] P. J. Agerfalk, B. Fitzgerald, H. Holmstrom Olsson, B. Lings, B. Lundell and E. Ó Conchúir, "A
framework for considering opportunities and threats in distributed software development," in
Proceedings of the International Workshop on Distributed Software Development (DiSD 2005),
2005.

[7] M. Paasivaara, S. Durasiewicz and C. Lassenius, "Using scrum in distributed agile development:
A multiple case study," in Fourth IEEE International Conference on Global Software Engineering,
2009.

[8] D. Šmite, C. Wohlin, T. Gorschek and R. Feldt, "Empirical evidence in global software
engineering: a systematic review," Empirical software engineering, vol. 15, no. 1, pp. 91-118,
2010.

[9] P. van Buul and R. van Solingen, "Insights from a structured literature review (SLR) on
documented case-studies of Scrum application in globally distributed settings," Delft Software
Engineering Research Group, Delft, 2016.

[10] J. Sutherland, G. Schoonheim, E. Rustenburg and M. Rijk, "Fully distributed scrum: The secret
sauce for hyperproductive offshored development teams," in Agile Conference (AGILE'08),
2008.

[11] J. Sutherland, G. Schoonheim and M. Rijk, "Fully distributed scrum: Replicating local
productivity and quality with offshore teams," in 42nd Hawaii International Conference on
System Sciences (HICSS'09), 2009.

[12] J. Sutherland, G. Schoonheim, N. Kumar, V. Pandey and S. Vishal, "Fully distributed scrum:
Linear scalability of production between San Francisco and India," in Agile Conference
(AGILE'09), 2009.

[13] VersionOne, "10th Annual State of Agile Report," 2016.

[14] D. Leffingwell, A. Yakyma, D. Jemilo, R. Knaster and I. Oren, "Infogain case study," Scaled Agile,
Inc., [Online]. Available: http://scaledagileframework.com/infogain-case-study/. [Accessed 9
May 2016].

[15] D. Leffingwell, A. Yakyma, D. Jemilo, R. Knaster and I. Oren, "John Deere case study," Scaled
Agile, Inc., [Online]. Available: http://scaledagileframework.com/john-deere-case-study-part-
1/. [Accessed 9 May 2016].

[16] D. Leffingwell, A. Yakyma, D. Jemilo, R. Knaster and I. Oren, "Elekta case study," Scaled Agile, Inc., [Online]. Available: http://scaledagileframework.com/elekta-case-study/. [Accessed 23 May 2016].

[17] D. Leffingwell, A. Yakyma, D. Jemilo, R. Knaster and I. Oren, "Accenture case study," Scaled
Agile, Inc., [Online]. Available: http://scaledagileframework.com/accenture-case-study/.
[Accessed 9 May 2016].

[18] D. Leffingwell, A. Yakyma, D. Jemilo, R. Knaster and I. Oren, "Scaled Agile Framework - Lean Agile Mindset," Scaled Agile, Inc., [Online]. Available: http://scaledagileframework.com/lean-agile-mindset/. [Accessed 26 September 2016].

[19] K. Beck, M. Beedle, A. Van Bennekum, A. Cockburn, W. Cunningham, M. Fowler, J. Grenning, J. Highsmith, A. Hunt, R. Jeffries, J. Kern, B. Marick, R. C. Martin, S. Mellor, K. Schwaber, J. Sutherland and D. Thomas, "The agile manifesto," 2001. [Online]. Available: http://agilemanifesto.org/.

[20] M. Hirsch, "Making RUP agile," OOPSLA 2002 Practitioners Reports, pp. 1-8, 2002.

[21] J. Hunt, "Agile Methods with RUP and PRINCE2," Agile Software Construction, pp. 193-210,
2006.

[22] W. W. Royce, "Managing the development of large software systems," proceedings of IEEE
WESCON, vol. 26, no. 8, pp. 328-338, 1970.

[23] "What is PRINCE2?," ILX Group 2016, [Online]. Available: https://www.prince2.com/eur/what-


is-prince2. [Accessed 19 April 2016].

[24] P. Kruchten, The rational unified process: an introduction, Addison-Wesley Professional, 2004.

[25] Rational, "Rational Unified Process Best Practices for Software Development Teams," Rational
the software development company, November 2001. [Online]. Available:
https://www.ibm.com/developerworks/rational/library/content/03July/1000/1251/1251_be
stpractices_TP026B.pdf. [Accessed 17 August 2016].

[26] C. Larman and B. Vodde, "LeSS," The LeSS Company B.V., 2014. [Online]. Available:
https://less.works/. [Accessed 18 April 2016].

[27] S. W. Ambler and M. Lines, "Going Beyond Scrum: Disciplined Agile Delivery," Disciplined Agile
Consortium. White Paper Series, pp. 1-16, October 2013.

[28] K. Schwaber, "Nexus Guide," Scrum.org, August 2015.

[29] H. Kniberg and A. Ivarsson, "Scaling Agile@ Spotify," Spotify, October 2012. [Online]. Available:
https://ucvox.files.wordpress.com/2012/11/113617905-scaling-agile-spotify-11.pdf.
[Accessed 11 May 2016].

[30] M. Korkala and P. Abrahamsson, "Communication in distributed agile development: A case study," in 33rd EUROMICRO Conference on Software Engineering and Advanced Applications (EUROMICRO 2007), 2007.

[31] M. Paasivaara, S. Durasiewicz and C. Lassenius, "Distributed agile development: Using Scrum
in a large project," in IEEE International Conference on Global Software Engineering, 2008.

[32] M. Vax and S. Michaud, "Distributed Agile: Growing a practice together," in Agile Conference (AGILE'08), 2008.

[33] P. L. Bannerman, E. Hossain and R. Jeffery, "Scrum practice mitigation of global software
development coordination challenges: a distinctive advantage?," in 45th Hawaii International
Conference on System Science (HICSS), 2012.

[34] H. Smits and G. Pshigoda, "Implementing scrum in a distributed software development organization," in Agile Conference (AGILE), 2007.

[35] B. S. Drummond and J. Francis, "Yahoo! Distributed Agile: Notes from the world over," in Agile
Conference (AGILE'08), 2008.

[36] J. D. Herbsleb, D. J. Paulish and M. Bass, "Global software development at Siemens: experience from nine projects," in Proceedings of the 27th International Conference on Software Engineering, 2005.

[37] E. Ó. Conchúir, H. Holmstrom, J. Agerfalk and B. Fitzgerald, "Exploring the assumed benefits of
global software development," in IEEE International Conference on Global Software
Engineering (ICGSE'06), 2006.

[38] R. K. Gupta and P. Manikreddy, "Challenges in Adapting Scrum in Legacy Global Configurator
Project," in IEEE 10th International Conference on Global Software Engineering (ICGSE), 2015.

[39] R. Vallon, C. Drager, A. Zapletal and T. Grechenig, "Adapting to Changes in a Project's DNA: A
Descriptive Case Study on the Effects of Transforming Agile Single-Site to Distributed Software
Development," in Agile Conference (AGILE'14), 2014.

[40] V. J. Wawryk, C. Krenn and T. Dietinger, "Scaling a running agile fix-bid project with near
shoring: Theory vs. reality and (best) practice," in IEEE Eighth International Conference on
Software Testing, Verification and Validation Workshops (ICSTW), 2015.

[41] R. Noordeloos, C. Manteli and H. Van Vliet, "From RUP to Scrum in global software
development: A case study," in IEEE Seventh International Conference on Global Software
Engineering (ICGSE), 2012.

[42] F. Zieris and S. Salinger, "Doing Scrum Rather Than Being Agile: A Case Study on Actual
Nearshoring Practices," in IEEE 8th International Conference on Global Software Engineering
(ICGSE), 2013.

[43] I. Therrien and E. LeBel, "From Anarchy to Sustainable Development: Scrum in Less Than Ideal
Conditions," in Agile Conference (AGILE'09), 2009.

[44] T. J. Allen, "Managing the flow of technology: technology transfer and the dissemination of
technological information within the R and D organization," Massachusetts Institute of
Technology, 1977.

[45] T. Allen and G. Henn, "The organization and architecture of innovation," Routledge, 2007.

[46] D. Leffingwell, A. Yakyma, D. Jemilo, R. Knaster and I. Oren, "Scaled Agile Framework - Core
Values," [Online]. Available: http://www.scaledagileframework.com/safe-core-values/.
[Accessed 25 February 2016].

[47] R. Dolman and S. Spearman, "ASK: Agile Scaling Knowledge - The Matrix," Agile Scaling, 2014. [Online]. Available: http://www.agilescaling.org/ask-matrix.html. [Accessed 18 April 2016].

[48] C. Larman and B. Vodde, "Large Scale Scrum - More with LeSS," Scrum Alliance, 30 December
2013. [Online]. Available: http://agileatlas.org/articles/item/large-scale-scrum-more-with-
less. [Accessed 18 April 2016].

[49] "Disciplined Agile 2.0," Disciplined Agile Consortium, 2015. [Online]. Available:
http://www.disciplinedagiledelivery.com/introduction-to-dad/. [Accessed 19 April 2016].

[50] "Disciplined Agile 2.0 - posters," Disciplined Agile Consortium, 2015. [Online]. Available:
https://www.disciplinedagileconsortium.org/posters. [Accessed 26 October 2016].

[51] "scrum.org - resources," Scrum.org, [Online]. Available:


https://www.scrum.org/Resources/The-Nexus-Guide. [Accessed 26 October 2016].

[52] D. Leffingwell, A. Yakyma, D. Jemilo, R. Knaster and I. Oren, "Essential SAFe," Scaled Agile, Inc., 9 February 2016. [Online]. Available: http://www.scaledagileframework.com/first-things-first-essential-safe/. [Accessed 19 April 2016].

[53] D. Leffingwell, A. Yakyma, D. Jemilo, R. Knaster and I. Oren, "Essential SAFe," Scaled Agile, Inc., 23 June 2016. [Online]. Available: http://www.scaledagileframework.com/an-essential-update-on-essential-safe/. [Accessed 27 June 2016].

[54] B. Kitchenham and S. Charters, Guidelines for performing systematic literature reviews in
software engineering, Technical report, Ver. 2.3 EBSE Technical Report. EBSE, 2007.

[55] S. M. Wagner, C. Rau and E. Lindemann, "Multiple informant methodology: a critical review
and recommendations," Sociological Methods & Research, vol. 38, no. 4, pp. 582-618, 2010.

[56] R. Libby and R. K. Blashfield, "Performance of a composite as a function of the number of judges," Organizational Behavior and Human Performance, vol. 21, no. 2, pp. 121-129, 1978.

[57] S. Wilkinson, "Focus group research," in Qualitative research: Theory, method and practice, Sage, 2004, p. 177.

[58] D. L. Morgan, Focus groups as qualitative research, Sage, 1996.

[59] B. Kitchenham, "Procedures for performing systematic reviews," Keele, UK, Keele University,
vol. 33, pp. 1-26, 2004.

[60] D. Gigone and R. Hastie, "Proper analysis of the accuracy of group judgments.," Psychological
Bulletin, vol. 121, no. 1, pp. 149-167, 1997.

[61] G. W. Hill, "Group versus individual performance: Are N+ 1 heads better than one?,"
Psychological bulletin, vol. 91, no. 3, pp. 517-539, 1982.

[62] D. W. Stewart and P. N. Shamdasani, Focus groups: Theory and practice, Sage Publications,
1990.

[63] S. Wilkinson, "Focus group methodology: A review," International Journal of Social Research
Methodology, vol. 1, no. 3, pp. 181-203, 1998.

[64] A. J. Onwuegbuzie, W. B. Dickinson, N. L. Leech and A. G. Zoran, "A qualitative framework for
collecting and analyzing data in focus group research," International journal of qualitative
methods, vol. 8, no. 3, pp. 1-21, 2009.

[65] A. Pitkänen, "Agile Transformation: A case study," Aalto University, 2015.

[66] T. Devos, "Case Study: Agility at Scale Wolters Kluwer Belgium," UHasselt, 2014.

[67] Y. I. Alzoubi, A. Q. Gill and A. Al-Ani, "Empirical studies of geographically distributed agile
development communication challenges: A systematic review," Information & Management,
vol. 53, no. 1, pp. 22-37, 2016.

[68] E. Hossain, M. A. Babar and H.-y. Paik, "Using scrum in global software development: a
systematic literature review," in Fourth IEEE International Conference on Global Software
Engineering (ICGSE), 2009.

[69] E. Hossain, M. A. Babar, H.-y. Paik and J. Verner, "Risk identification and mitigation processes
for using scrum in global software development: A conceptual framework," in Asia-Pacific
Software Engineering Conference (APSEC), 2009.

[70] Y. I. Alzoubi and A. Q. Gill, "Agile global software development communication challenges: A
systematic review," in Pacific Asia Conference on Information Systems (PACIS), 2014.

[71] U. Farooq and M. U. Farooq, "Exploring the Benefits and Challenges of Applying Agile Methods
in Offshore Development," Blekinge Institute of Technology, Karlskrona, Sweden, 2010.

[72] A. S. Alqahtani, J. D. Moore, D. K. Harrison and B. M. Wood, "The Challenges of Applying Distributed Agile Software Development: A Systematic Review," International Journal of Advances in Engineering & Technology, vol. 5, no. 2, pp. 23-36, 2013.

[73] B. Rizvi, E. Bagheri and D. Gasevic, "A systematic review of distributed Agile software
engineering," Journal of Software: Evolution and Process, vol. 27, no. 10, pp. 723-762, 2015.

[74] R. Noordeloos, "Agile Software Development in a Globally Distributed Environment," VU University Amsterdam, Amsterdam, 2012.

[75] S. M. Shah and M. Amin, "Investigating the Suitability of Extreme Programming for Global
Software Development," Blekinge Institute of Technology, Karlskrona, 2013.

[76] M. Rahman and A. Das, "Mitigation Approaches for Common Issues and Challenges When Using Scrum in Global Software Development," Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering, Blekinge, 2015.

[77] C. Gurram and S. G. Bandi, "Teamwork in Distributed Agile Software Development," Blekinge
Institute of Technology, School of Computing, Blekinge, 2013.

[78] G. K. Hanssen, D. Šmite and N. B. Moe, "Signs of agile trends in global software engineering
research: A tertiary study," in Sixth IEEE International Conference on Global Software
Engineering Workshop (ICGSEW), 2011.

[79] J. Verner, O. Brereton, B. Kitchenham, M. Turner and M. Niazi, "Risk Mitigation Advice for
Global Software Development from Systematic Literature Reviews," School of Computing and
Mathematics, Keele University, Keele, Staffordshire, UK, 2012.

[80] J. M. Verner, O. P. Brereton, B. A. Kitchenham, M. Turner and M. Niazi, "Risks and risk
mitigation in global software development: A tertiary study," Information and Software
Technology, vol. 56, no. 1, pp. 54-78, 2014.

[81] J. F. Yates and E. R. Stone, "The risk construct," John Wiley & Sons, 1992.

[82] F. Q. da Silva, C. Costa, A. C. C. Franca and R. Prikladnicki, "Challenges and solutions in distributed software development project management: a systematic literature review," in 5th IEEE International Conference on Global Software Engineering (ICGSE), 2010.

[83] S. Jalali and C. Wohlin, "Global software engineering and agile practices: a systematic review,"
Journal of Software: Evolution and Process, vol. 24, no. 6, pp. 643-659, 2012.

[84] S. Jalali and C. Wohlin, "Agile practices in global software engineering-A systematic map," in
5th IEEE International Conference on Global Software Engineering (ICGSE), 2010.

[85] A. A. Keshlaf and S. Riddle, "Risk management for web and distributed software development
projects," in Fifth International Conference on Internet Monitoring and Protection (ICIMP),
2010.

[86] E. Hossain, P. L. Bannerman and D. R. Jeffery, "Scrum Practices in Global Software Development: A Research Framework," in PROFES, 2011.

[87] I. Inayat, S. S. Salim, S. Marczak, M. Daneva and S. Shamshirband, "A systematic literature review on agile requirements engineering practices and challenges," Computers in Human Behavior, vol. 51, pp. 915-929, 2015.

[88] M. Bano and D. Zowghi, "User involvement in software development and system success: a
systematic literature review," in Proceedings of the 17th International Conference on
Evaluation and Assessment in Software Engineering, 2013.

[89] F. Q. da Silva, R. Prikladnicki, A. C. C. França, C. V. Monteiro, C. Costa and R. Rocha, "An evidence-based model of distributed software development project management: results from a systematic mapping study," Journal of Software: Evolution and Process, vol. 24, no. 6, pp. 625-642, 2012.

[90] M. Hummel, C. Rosenkranz and R. Holten, "The role of communication in agile systems
development," Business & Information Systems Engineering, vol. 5, no. 5, pp. 343-355, 2013.

[91] S. Ghobadi, "What drives knowledge sharing in software development teams: A literature
review and classification framework," Information & Management, vol. 52, no. 1, pp. 82-97,
2015.

[92] J. Portillo-Rodríguez, A. Vizcaíno, M. Piattini and S. Beecham, "Tools used in Global Software
Engineering: A systematic mapping review," Information and Software Technology, vol. 54, no.
7, pp. 663-685, 2012.

[93] M. Bano and D. Zowghi, "A systematic review on the relationship between user involvement
and system success," Information and Software Technology, vol. 58, pp. 148-169, 2015.

[94] S. Matalonga, M. Solari and G. Matturro, "Factors affecting distributed agile projects: a
systematic review," International Journal of Software Engineering and Knowledge Engineering,
vol. 23, no. 9, pp. 1289-1301, 2013.

[95] A. M. Razavi and R. Ahmad, "Agile development in large and distributed environments: A
systematic literature review on organizational, managerial and cultural aspects," in 8th
Malaysian Software Engineering Conference (MySEC), 2014.

[96] H. H. Khan, M. Naz’ri bin Mahrin and S. bt Chuprat, "Factors generating risks during
requirement engineering process in global software development environment," International
Journal of Digital Information and Wireless Communications (IJDIWC), vol. 4, no. 1, pp. 63-78,
2014.

[97] B. J. da Silva Estácio and R. Prikladnicki, "Distributed Pair Programming: A Systematic Literature Review," Information and Software Technology, vol. 63, pp. 1-10, 2015.

[98] M. Usman, F. Azam and N. Hashmi, "Analysing and Reducing Risk Factor in 3-C's Model
Communication Phase Used in Global Software Development," in International Conference on
Information Science and Applications (ICISA), 2014.

[99] A. Mishra and D. Mishra, "Cultural Issues in Distributed Software Development: A Review," in
On the Move to Meaningful Internet Systems (OTM), 2014.

[100] M. Hummel, C. Rosenkranz and R. Holten, "Die Bedeutung von Kommunikation bei der agilen
Systementwicklung," Wirtschaftsinformatik, vol. 55, no. 5, pp. 347-360, 2013.

[101] M. Salam and S. U. Khan, "Green software multisourcing readiness model (GSMRM) from vendor's perspective," Science International (Lahore), vol. 26, pp. 1421-1424, 2014.

[102] N. Manzoor and U. Shahzad, "Information Visualization for Agile Development in Large-Scale
Organizations," Blekinge Institiute of Technology, Karlskrona, 2012.

[103] M. A. Babar and M. Zahedi, "Global Software Development: A Review of the State-Of-The-Art
(2007-2011)," IT University of Copenhagen, Copenhagen, 2012.

[104] M. Simão Filho, P. R. Pinheiro and A. B. Albuquerque, "Task Allocation Approaches in Distributed Agile Software Development: A Quasi-systematic Review," in Software Engineering in Intelligent Systems, 2015.

[105] N. Rashid and S. U. Khan, "Green Agility for Global Software Development Vendors: A
Systematic Literature Review Protocol," in Proceedings of the Pakistan Academy of sciences,
2015.

[106] T. Dreesen, R. Linden, C. Meures, N. Schmidt and C. Rosenkranz, "Beyond the Border: A
Comparative Literature Review on Communication Practices for Agile Global Outsourced
Software Development Projects," University of Cologne, Cologne.

[107] H. Wang, "The review and research on agile oriented method in the pilot industry system," Wuhan University of Technology, Wuhan, 2014.

[108] H. Khalid, M. Ahmed, A. Sameer and F. Arif, "Systematic Literature Review of Agile Scalability
for Large Scale Projects," International Journal of Advanced Computer Science and Applications
(IJACSA), vol. 6, no. 9, pp. 63-75, 2015.

[109] B. J. da Silva Estácio and R. Prikladnicki, "A Set of Practices for Distributed Pair Programming," in International Conference on Enterprise Information Systems (ICEIS), 2014.

[110] K. Baseer, A. R. M. Reddy and C. S. Bindu, "A Systematic Survey on Waterfall Vs. Agile Vs. Lean
Process Paradigms," i-Manager's Journal on Software Engineering, vol. 9, no. 3, 2015.

[111] A. Johansson, "Toward improvements of teamwork in globally distributed agile teams," University of Gothenburg, Gothenburg, 2015.

[112] S. S. Islam Zada and S. Nazir, "Issues and implications of scrum on global software development," University of Peshawar, Peshawar, Pakistan.

[113] B. V. de Carvalho and C. H. P. Mello, "Scrum agile product development method - literature review, analysis and classification," University of Itajubá, Itajubá.

[114] H. H. Khan, M. N. Mahrin and S. Chuprat, "Risk Generating Situations of Requirement Engineering in Global Software Development," in International Conference on Informatics Engineering & Information Science (ICIEIS), 2013.

[115] S. Dodda and R. Ansari, "The Use of SCRUM in Global Software Development: An Exploratory
Study," Blekinge Institute of Technology, Karlskrona, 2010.

[116] A. C. C. dos Santos, C. C. Borges, D. E. Carneiro and F. Q. da Silva, "Estudo baseado em
Evidências sobre Dificuldades, Fatores e Ferramentas no Gerenciamento da Comunicação em
Projetos de Desenvolvimento Distribuído de Software," in Proceedings of 7th Experimental
Software Engineering Latin American Workshop (ESELAW), 2010.

[117] J. Montonen, "Key Challenges of Virtual Software Development Teams," Jyväskylän yliopisto, 2010.

[118] E. Cardozo, J. Neto, A. Barza, A. França and F. da Silva, "SCRUM and productivity in software
projects: a systematic literature review," in 14th International Conference on Evaluation and
Assessment in Software Engineering (EASE), 2010.

[119] S. Schneider, R. Torkar and T. Gorschek, "Solutions in global software engineering: A systematic
literature review," International Journal of Information Management, vol. 33, no. 1, pp. 119-
132, 2013.

[120] R. Sriram and S. Mathew, "Global software development using agile methodologies: A review
of literature," in International Conference on Management of Innovation and Technology
(ICMIT), 2012.

[121] N. K. Kamaruddin, N. H. Arshad and A. Mohamed, "Chaos issues on communication in agile global software development," in Business Engineering and Industrial Applications Colloquium (BEIAC), 2012.

[122] E. Kupiainen, M. V. Mäntylä and J. Itkonen, "Using metrics in Agile and Lean Software
Development-A systematic literature review of industrial studies," Information and Software
Technology, vol. 62, pp. 143-163, 2015.

[123] C. Costa, C. Cunha, R. Rocha, A. C. C. França, F. Q. da Silva and R. Prikladnicki, "Models and
Tools for Managing Distributed Software Development: A Systematic Literature".

[124] P. S. Saripalli and D. H. P. Darse, "Finding common denominators for agile software
development: a systematic literature review," Blekinge Institute of Technology, School of
Computing, Blekinge, 2011.

[125] Z. Li, P. Avgeriou and P. Liang, "A systematic mapping study on technical debt and its
management," Journal of Systems and Software, vol. 101, pp. 193-220, 2015.

[126] P. Räty, "Social Network Analysis in Software Engineering: A Literature Review and a case
study," Aalto University, Aalto, 2014.

[127] R. Prikladnicki and J. L. N. Audy, "Process models in the practice of distributed software
development: A systematic review of the literature," Information and Software Technology,
vol. 52, no. 8, pp. 779-791, 2010.

[128] M. Jiménez, M. Piattini and A. Vizcaíno, "Challenges and improvements in distributed software
development: A systematic review," Advances in Software Engineering, vol. 2009, pp. 1-14,
2009.

[129] R. Prikladnicki, D. Damian and J. L. N. Audy, "Patterns of evolution in the practice of distributed
software development: quantitative results from a systematic review," in 12th International
Conference on Evaluation and Assessment in Software Engineering (EASE), 2008.

[130] M. Jiménez and M. Piattini, "Problems and solutions in distributed software development: a
systematic review," in Software Engineering Approaches for Offshore and Outsourced
Development, Springer, 2008, pp. 107-125.

[131] J. Noll, S. Beecham and I. Richardson, "Global software development and collaboration:
barriers and solutions," ACM Inroads, vol. 1, no. 3, pp. 66-78, 2010.

[132] I. Steinmacher, A. P. Chaves and M. A. Gerosa, "Awareness support in global software development: a systematic review based on the 3C collaboration model," in 16th Conference on Collaboration and Technology (CRIWG), Maastricht, 2010.

[133] S. U. Khan, M. Niazi and R. Ahmad, "Barriers in the selection of offshore software development
outsourcing vendors: An exploratory study using a systematic literature review," Information
and Software Technology, vol. 53, no. 7, pp. 693-706, 2011.

[134] B. Lings, B. Lundell, P. J. Ågerfalk and B. Fitzgerald, "Ten strategies for successful distributed
development," in The transfer and diffusion of information technology for organizational
resilience, Springer, 2006, pp. 119-137.

[135] F. da Silva, R. Prikladnicki, A. Franca, C. Costa and R. Rocha, "Research and practice of distributed software development project management: A systematic mapping study," Information and Software Technology, vol. 24, no. 6, pp. 625-642, 2011.

[136] T. Ebling, J. L. N. Audy and R. Prikladnicki, "A Systematic Literature Review of Requirements
Engineering in Distributed Software Development Environments.," in International Conference
on Enterprise Information Systems (ICEIS), 2009.

[137] S. S. M. Fauzi, P. L. Bannerman and M. Staples, "Software Configuration Management in Global Software Development: A Systematic Map," in Asia Pacific Software Engineering Conference (APSEC), 2010.

[138] M. Jiménez, M. Piattini and A. Vizcaíno, "A Systematic Review of Distributed Software
Development," in Handbook of Research on Software Engineering and Productivity
Technologies: Implications of Globalization: Implications of Globalization, IGI Global, 2009, pp.
209-225.

[139] S. U. Khan, M. Niazi and R. Ahmad, "Factors influencing clients in the selection of offshore
software outsourcing vendors: An exploratory study using a systematic literature review,"
Journal of systems and software, vol. 84, no. 4, pp. 686-699, 2011.

[140] S. U. Khan, M. Niazi and R. Ahmad, "Critical success factors for offshore software development
outsourcing vendors: A systematic literature review," in International Conference on Global
Software Engineering (ICGSE), 2009.

[141] A. Lopez, J. Nicolas and A. Toval, "Risks and safeguards for the requirements engineering
process in global software development," in International Conference on Global Software
Engineering (ICGSE), 2009.

[142] I. Nurdiani, R. Jabangwe, D. Šmite and D. Damian, "Risk Identification and Risk Mitigation
Instruments for Global Software Engineering: A systematic review and survey results," in
International Conference on Global Software Engineering Workshop (ICGSEW), 2011.

[143] J. Persson and L. Mathiassen, "A process for managing risks in Distributed teams," IEEE
Software, no. 99, pp. 20-29, 2011.

[144] J. S. Persson, L. Mathiassen, J. Boeg, T. S. Madsen and F. Steinson, "Managing risks in distributed software projects: an integrative framework," Transactions on Engineering Management, vol. 56, no. 3, pp. 508-532, 2009.

[145] R. Prikladnicki, J. L. Audy and F. Shull, "Patterns in effective distributed software development,"
Software, vol. 27, no. 2, pp. 12-15, 2010.

[146] D. Šmite and C. Wohlin, "A whisper of evidence in global software engineering," IEEE Software, vol. 28, no. 4, pp. 15-18, 2011.

[147] D. Šmite, C. Wohlin, R. Feldt and T. Gorschek, "Reporting empirical research in global software
engineering: a classification scheme," in International Conference on Global Software
Engineering (ICGSE), 2008.

[148] C. Treude, M.-a. Storey and J. Weber, "Empirical studies on collaboration in software
development: A systematic literature review," Citeseer, 2009.

[149] N. Ali, S. Beecham and I. Mistrík, "Architectural knowledge management in global software
development: a review," in International Conference on Global Software Engineering (ICGSE),
2010.

[150] M. Alsudairi and Y. K. Dwivedi, "A multi-disciplinary profile of IS/IT outsourcing research,"
Journal of Enterprise Information Management, vol. 23, no. 2, pp. 215-258, 2010.

[151] H. Huang, "Cultural Issues in Globally Distributed Information Systems Development: A Survey
and Analysis," in Americas Conference on Information Systems (AMCIS), 2007.

[152] S. U. Khan, M. Niazi and R. Ahmad, "Critical barriers for offshore software development
outsourcing vendors: a systematic literature review," in Asia-Pacific Software Engineering
Conference (APSEC), 2009.

[153] J. Kroll, J. L. N. Audy and R. Prikladnicki, "Mapping the Evolution of Research on Global Software
Engineering-A Systematic Literature Review.," in International Conference on Enterprise
Information Systems (ICEIS), 2011.

[154] R. G. Rocha, C. Costa, C. Rodrigues, R. Ribeiro de Azevedo, I. H. Junior, S. Meira and R. Prikladnicki, "Collaboration Models in Distributed Software Development: a Systematic Review," CLEI Electronic Journal, vol. 14, no. 2, 2011.

[155] A. Yalaho, "A conceptual model of ICT-supported unified process of international outsourcing
of software production," in International Enterprise Distributed Object Computing Conference
Workshops (EDOCW), 2006.

List of tables
Table 1: Problem groups example ........................................................................................................ 23
Table 2: Problem groups of Distributed Scrum ordered by class from [9] ........................................... 32
Table 3: Challenges of Distributed Agile Development grouped by most applicable class from [3] .... 33
Table 4: Google Scholar search results ................................................................................................. 33
Table 5: Problem groups ....................................................................................................................... 34
Table 6: Legend for Table 7: degree of consideration .......................................................................... 38
Table 7: Problems mapped to core values on consideration ............................................................... 38
Table 8: Legend for Table 9: degree of impact ..................................................................................... 40
Table 9: Problems mapped to core values on impact .......................................................................... 40
Table 10: Expert focus group result of round 1 .................................................................................... 45
Table 11: Votes on likelihood and impact............................................................................................. 46
Table 12: Practitioner focus group result of round 1............................................................................ 51
Table 13: Votes on likelihood and impact............................................................................................. 51
Table 14: Overview of triangulation result ........................................................................................... 57
Table 15: 16. PI planning - votes on difficulty & impact ....................................................................... 62
Table 16: 15. Inspect & Adapt - votes on difficulty & impact ............................................................... 63
Table 17: 32. Feature - votes on difficulty & impact............................................................................. 63
Table 18: 4. Implementing 1-2-3 - votes on difficulty & impact ........................................................... 64
Table 19: 25. Shared services - votes on difficulty & impact ................................................................ 64
Table 20: Solutions for problems and failing elements ........................................................................ 66
Table 21: Problem groups example .................................................................................................... 101
Table 22: Participants focus group form ............................................................................................. 103
Table 23: Table layout focus group ..................................................................................................... 103
Table 24: Table layout focus group ..................................................................................................... 109
Table 25: Rejected studies including reason for rejection.................................................................. 126
Table 26: Challenges of geographically distributed agile development from [67] ............................. 128
Table 27: Challenges of Distributed Scrum from [68] ......................................................................... 128
Table 28: Risks of Distributed Scrum from [69] .................................................................................. 128
Table 29: Challenges of Agile Global Software Development from [70] ............................................ 129
Table 30: Challenges of applying agile methods in offshore development from [71]........................ 129
Table 31: Challenges of Distributed Agile Software Development from [72]..................................... 129
Table 32: Challenges of Distributed Agile Software Engineering from [73] ....................................... 130
Table 33: Challenges of Distributed Agile Development from [74] .................................................... 130
Table 34: Challenges of Extreme Programming in Global Software Development from [75] ............ 130
Table 35: Challenges of Distributed Scrum from [76] ......................................................................... 131
Table 36: Challenges of Distributed Agile Software Development from [77]..................................... 131
Table 37: Result of studies subjected to the acceptance criteria from [78] ....................................... 133
Table 38: Result of studies subjected to the acceptance criteria from [79] ....................................... 133
Table 39: Result of studies subjected to the acceptance criteria from [80] ....................................... 134
Table 40: Problems due to inefficient communication tools.............................................................. 135
Table 41: Problems due to unavailability of people ........................................................................... 135
Table 42: Problems due to lack of synchronous communication ....................................................... 135
Table 43: Problems due to different execution of work practices ..................................................... 136
Table 44: Problems due to language barriers ..................................................................................... 136
Table 45: Problems due to lacking technical infrastructure ............................................................... 136
Table 46: Problems due to loss of cohesion ....................................................................................... 136
Table 47: Problems due to misinterpretation..................................................................................... 137
Table 48: Problems due to lack of agile training................................................................................. 137
Table 49: Problems due to reduced trust ........................................................................................... 137
Table 50: Problems due to time zone differences .............................................................................. 137
Table 51: Problems due to people differences ................................................................................... 138
Table 52: Problems due to lack of traditional management .............................................................. 138
Table 53: Problems due to difficulties with coordination................................................................... 138
Table 54: Problems due to shared ownership and responsibility ...................................................... 138
Table 55: Problems due to incorrect execution of Scrum .................................................................. 138
Table 56: Problems due to cultural differences - organizational and national................................... 139
Table 57: Problems due to the loss of informal contact ..................................................................... 139
Table 58: Problems due to lack of collective vision ............................................................................ 139
Table 59: Problems due to lack of requirement documents .............................................................. 139
Table 60: Problems due to lack of visibility ........................................................................................ 139
Table 61: Problems due to difficulties in knowledge sharing ............................................................. 139
Table 62: Problems due to increased communication effort ............................................................. 140
Table 63: Problems due to increased team size ................................................................................. 140
Table 64: Problems due to different holidays..................................................................................... 140
Table 65: Problems due to difficulties with agile decision making ..................................................... 140
Table 66: Problems due to increased number of teams ..................................................................... 140
Table 67: Problems due to silence of participants.............................................................................. 140
Table 68: Problems due to increased number of sites ....................................................................... 141
Table 69: Ungrouped problems .......................................................................................................... 142
Table 70: Start setup round 1 - groups ............................................................................................... 156
Table 71: Results round 1 - group Rini ................................................................................................ 158
Table 72: Results round 1 - group Peter ............................................................................................. 159
Table 73: Start setup round 1 - plenary .............................................................................................. 161
Table 74: Results round 1 - plenary .................................................................................................... 162
Table 75: Result round 2 - dot voting individually .............................................................................. 165
Table 76: Results round 2 - ranking on likelihood............................................................................... 166
Table 77: Results round 2 - ranking on impact ................................................................................... 166
Table 78: Start setup round 3 - plenary .............................................................................................. 167
Table 79: Result round 3 - consequences System Demo .................................................................... 168
Table 80: Result round 3 - consequences Solution Demo .................................................................. 169
Table 81: Result round 3 - consequences Inspect & Adapt ................................................................ 169
Table 82: Result round 3 - consequences PI Planning ........................................................................ 169
Table 83: Result round 4 - dot voting individually on 13. System Demo ............................................ 169
Table 84: Result round 4 - dot voting individually on 14. Solution Demo .......................................... 170
Table 85: Result round 4 - dot voting individually on 15. Inspect & Adapt ........................................ 170
Table 86: Result round 4 - dot voting individually on 16. PI Planning ................................................ 170
Table 87: Result round 4 - consequences System Demo ranked on likelihood .................................. 174
Table 88: Result round 4 - consequences Solution Demo ranked on likelihood ................................ 174
Table 89: Result round 4 - consequences Inspect & Adapt ranked on likelihood .............................. 175
Table 90: Result round 4 - consequences PI Planning ranked on likelihood ...................................... 175
Table 91: Result round 4 - consequences System Demo ranked on impact....................................... 175
Table 92: Result round 4 - consequences Solution Demo ranked on impact ..................................... 175
Table 93: Result round 4 - consequences Inspect & Adapt ranked on impact ................................... 176
Table 94: Result round 4 - consequences PI planning ranked on impact ........................................... 176
Table 95: Votes on elements - Distributed expert 1 ........................................................................... 177
Table 96: Votes on elements - SAFe expert 1 ..................................................................................... 177
Table 97: Votes on elements - Distributed Expert 2 ........................................................................... 177
Table 98: Votes on elements - SAFe expert 2 ..................................................................................... 178
Table 99: Votes on elements - Practitioner 1 ..................................................................................... 178
Table 100: Votes on elements – Practitioner 2................................................................................... 178
Table 101: 13.1 No integration no working system ............................................................................ 179
Table 102: 13.2 Unclear value............................................................................................................. 179
Table 103: 13.3 No feedback .............................................................................................................. 179
Table 104: 13.4 Bad team morale ....................................................................................................... 179
Table 105: 13.5 Annoyed customers .................................................................................................. 179
Table 106: 13.6 Unpredictably ............................................................................................................ 180
Table 107: 13.7 Rework ...................................................................................................................... 180
Table 108: 13.8 Delay .......................................................................................................................... 180
Table 109: 14.1 No clear / unknown value ......................................................................................... 180
Table 110: 14.2 Annoyed stakeholders / customers .......................................................................... 180
Table 111: 14.3 Bad morale ................................................................................................................ 180
Table 112: 14.4 Unpredictably ............................................................................................................ 181
Table 113: 14.5 No / late feedback ..................................................................................................... 181
Table 114: 14.6 Delay .......................................................................................................................... 181
Table 115: 14.7 Rework ...................................................................................................................... 181
Table 116: 15.1 No learning ................................................................................................................ 181
Table 117: 15.2 Fallback / unlearning ................................................................................................. 182
Table 118: 15.3 Dissatisfied customers............................................................................................... 182
Table 119: 15.4 No motivation ........................................................................................................... 182
Table 120: 16.1 No goal no execution ................................................................................................ 182
Table 121: 16.2 No alignment between teams ................................................................................... 183
Table 122: 16.3 No real results ........................................................................................................... 183
Table 123: 16.4 Stakeholder annoyance ............................................................................................. 183
Table 124: 16.5 No teamness / commitment ..................................................................................... 183
Table 125: 16.6 Lack of transparency ................................................................................................. 183
Table 126: 16.7 Rework ...................................................................................................................... 183
Table 127: 16.8 Longer time to market .............................................................................................. 184
Table 128: Votes on consequences - Distributed Expert 1 ................................................................. 185
Table 129: Votes on consequences - SAFe expert 1 ........................................................................... 185
Table 130: Votes on consequences - Distributed Expert 2 ................................................................. 186
Table 131: Votes on consequences - SAFe expert 2 ........................................................................... 187
Table 132: Votes on consequences - Practitioner 1 ........................................................................... 188
Table 133: Votes on consequences - Practitioner 2 ........................................................................... 188
Table 134: Practitioner focus group - start setup round 1 - groups ................................................... 225
Table 135: Practitioner focus group - result round 1 - group Hanneke .............................................. 227
Table 136: Practitioner focus group - result round 1 - group Peter ................................................... 227
Table 137: Practitioner focus group - start setup round 1 - plenary .................................................. 229
Table 138: Practitioner focus group - result round 1 - plenary .......................................................... 231
Table 139: Practitioner focus group - result of round 2 - dot voting individually .............................. 232
Table 140: Practitioner focus group - result round 2 - ranking on likelihood ..................................... 233
Table 141: Practitioner focus group - result round 2 - ranking on impact ......................................... 233
Table 142: Practitioner focus group - start setup round 3 - plenary .................................................. 235
Table 143: Practitioner focus group - result round 3 - solutions PI planning ..................................... 235
Table 144: Practitioner focus group - result round 3 - solutions Inspect & Adapt ............................. 235
Table 145: Practitioner focus group - result round 3 - solutions Implementing 1-2-3 ....................... 235
Table 146: Practitioner focus group - result round 3 - solutions Feature........................................... 236
Table 147: Practitioner focus group - result round 3 - solutions Shared Services .............................. 236
Table 148: Practitioner focus group - result round 4 - dot voting individually on PI planning ........... 239
Table 149: Practitioner focus group - result round 4 - dot voting individually on Inspect & Adapt ... 239
Table 150: Practitioner focus group - result round 4 - dot voting individually on Implementing 1-2-3
............................................................................................................................................................ 239
Table 151: Practitioner focus group - result round 4 - dot voting individually on Feature ................ 240
Table 152: Practitioner focus group - result round 4 - dot voting individually on Shared Services ...... 240
Table 153: Practitioner focus group - result round 4 - solutions PI planning ranked on difficulty ..... 240
Table 154: Practitioner focus group - result round 4 - solutions Inspect & Adapt ranked on difficulty
............................................................................................................................................................ 240
Table 155: Practitioner focus group - result round 4 - solutions Implementing 1-2-3 ranked on difficulty
............................................................................................................................................................ 240
Table 156: Practitioner focus group - result round 4 - solutions Feature ranked on difficulty .......... 240
Table 157: Practitioner focus group - result round 4 - solutions Shared Services ranked on difficulty
............................................................................................................................................................ 240
Table 158: Practitioner focus group - result round 4 - solutions PI planning ranked on impact ........ 241
Table 159: Practitioner focus group - result round 4 - solutions Inspect & Adapt ranked on impact 241
Table 160: Practitioner focus group - result round 4 - solutions Implementing 1-2-3 ranked on impact
............................................................................................................................................................ 241
Table 161: Practitioner focus group - result round 4 - solutions Feature ranked on impact ............. 241
Table 162: Practitioner focus group - result round 4 - solutions Shared Services ranked on impact 241
Table 163: Experience level categorization ........................................................................................ 242
Table 164: Expert experience level ..................................................................................................... 242
Table 165: Expert category ................................................................................................................. 242
Table 166: Practitioner focus group - votes on elements - Practitioner 1.......................................... 243
Table 167: Practitioner focus group - votes on elements - Practitioner 2.......................................... 243
Table 168: Practitioner focus group - votes on elements - Practitioner 3.......................................... 243
Table 169: Practitioner focus group - votes on elements - Practitioner 4.......................................... 244
Table 170: Practitioner focus group - votes on elements - Practitioner 5.......................................... 244
Table 171: Practitioner focus group - votes on elements - Practitioner 6.......................................... 245
Table 172: Practitioner focus group - votes on elements - Practitioner 7.......................................... 245
Table 173: Practitioner focus group - votes on elements - Practitioner 8.......................................... 245
Table 174: Practitioner focus group - votes on elements - Practitioner 9.......................................... 246
Table 175: Practitioner focus group - votes on elements - Practitioner 10........................................ 246
Table 176: Practitioner focus group - votes on elements - Practitioner 11........................................ 246
Table 177: Practitioner focus group - 16. PI planning - individual solutions ...................................... 248
Table 178: Practitioner focus group - 15. Inspect & Adapt - individual solutions .............................. 249
Table 179: Practitioner focus group - 4. Implementing 1-2-3 - individual solutions .......................... 249
Table 180: Practitioner focus group - 32. Feature - individual solutions ............................................ 250
Table 181: Practitioner focus group - 25. Shared Services - individual solutions ............................... 250
Table 182: Practitioner focus group - 16. PI planning - individual solutions - translated ................... 251
Table 183: Practitioner focus group - 15. Inspect & Adapt - individual solutions - translated ........... 252
Table 184: Practitioner focus group - 4. Implementing 1-2-3 - individual solutions - translated....... 252
Table 185: Practitioner focus group - 32. Feature - individual solutions - translated ........................ 253
Table 186: Practitioner focus group - 25. Shared Services - individual solutions - translated ........... 254
Table 187: Practitioner focus group - 16. PI planning - group solutions ............................................ 255
Table 188: Practitioner focus group - 15. Inspect & Adapt - group solutions .................................... 255
Table 189: Practitioner focus group - 4. Implementing 1-2-3 - group solutions ................................ 255
Table 190: Practitioner focus group - 32. Feature - group solutions .................................................. 255
Table 191: Practitioner focus group - 25. Shared Services - group solutions ..................................... 255
Table 192: Practitioner focus group - 16. PI planning - group solutions - translated ......................... 256
Table 193: Practitioner focus group - 15. Inspect & Adapt - group solutions - translated ................. 256
Table 194: Practitioner focus group - 4. Implementing 1-2-3 - group solutions - translated ............. 256
Table 195: Practitioner focus group - 32. Feature - group solutions - translated .............................. 256
Table 196: Practitioner focus group - 25. Shared Services - group solutions - translated.................. 256
Table 197: Practitioner focus group - votes on solutions - Practitioner 1 .......................................... 257
Table 198: Practitioner focus group - votes on solutions - Practitioner 2 .......................................... 257
Table 199: Practitioner focus group - votes on solutions - Practitioner 3 .......................................... 258
Table 200: Practitioner focus group - votes on solutions - Practitioner 4 .......................................... 258
Table 201: Practitioner focus group - votes on solutions - Practitioner 5 .......................................... 259
Table 202: Practitioner focus group - votes on solutions - Practitioner 6 .......................................... 259
Table 203: Practitioner focus group - votes on solutions - Practitioner 7 .......................................... 260
Table 204: Practitioner focus group - votes on solutions - Practitioner 8 .......................................... 261
Table 205: Practitioner focus group - votes on solutions - Practitioner 9 .......................................... 261
Table 206: Practitioner focus group - votes on solutions - Practitioner 10 ........................................ 262
Table 207: Practitioner focus group - votes on solutions - Practitioner 11 ........................................ 262
List of figures
Figure 1: Agile Manifesto from [19] ........................................................................................................ 5
Figure 2: Research improvement cycle ................................................................................................... 6
Figure 3: Four level SAFe picture from [1] .............................................................................................. 9
Figure 4: SAFe meeting timeline, created based on [1] ........................................................................ 12
Figure 5: Flow of a trigger in SAFe based on [1] ................................................................................... 13
Figure 6: LeSS scaling model from [26] ................................................................................................. 14
Figure 7: DAD scaling model from [50] ................................................................................................. 15
Figure 8: Nexus scaling model from [51] .............................................................................................. 16
Figure 9: Spotify scaling model from [29] ............................................................................................. 16
Figure 10: 10th State of Agile report - Scaling agile from [13] ............................................................... 17
Figure 11: Essential SAFe from [52] ...................................................................................................... 18
Figure 12: Agile Release Train elements numbered, modified and reproduced with permission from ©
2011-2016 Scaled Agile, Inc. All rights reserved. Original Big Picture graphic found at
scaledagileframework.com. .................................................................................................................. 19
Figure 13: Visualization Systematic Literature Review protocol .......................................................... 24
Figure 14: Visualization of multiple informant protocol ....................................................................... 25
Figure 15: Expert focus group - part 1: identifying failing elements .................................................... 27
Figure 16: Expert focus group - part 2: identifying consequences ....................................................... 27
Figure 17: Practitioner focus group - part 1: identifying failing elements ............................................ 28
Figure 18: Practitioner focus group - part 2: identifying solutions ....................................................... 29
Figure 19: Visualization Systematic Literature Review protocol .......................................................... 30
Figure 20: Plan-Do-Check-Adjust cycle in SAFe from [1] ...................................................... 31
Figure 21: Visualization of multiple informant protocol ....................................................................... 37
Figure 22: Agile Release Train elements failing - result of identification based on literature, created
based on [1] .......................................................................................................................................... 42
Figure 23: Expert focus group protocol visualization ........................................................................... 44
Figure 24: Graph of risk of SAFe element failing .................................................................................. 46
Figure 25: Graph expert distribution on likelihood of SAFe elements failing....................................... 47
Figure 26: Graph expert distribution on impact of SAFe elements failing ........................................... 47
Figure 27: Agile Release Train elements failing - result of expert focus group, created based on [1] . 48
Figure 28: Practitioner focus group protocol visualization round 1 & 2............................................... 50
Figure 29: Graph of risk of SAFe element failing .................................................................................. 52
Figure 30: Graph expert distribution on likelihood of SAFe elements failing ....................................... 53
Figure 31: Graph expert distribution on impact of SAFe elements failing ........................................... 54
Figure 32: Agile Release Train elements failing - result of practitioner focus group, created based on
[1] .......................................................................................................................................................... 55
Figure 33: Agile Release Train elements failing - result of triangulation, created based on [1]........... 57
Figure 34: Practitioner focus group protocol visualization round 3 & 4............................................... 61
Figure 35: 16. PI planning solution score graph .................................................................................... 62
Figure 36: 15. Inspect & Adapt solution score graph............................................................................ 63
Figure 37: 32. Feature solution score graph ......................................................................................... 63
Figure 38: 4. Implementing 1-2-3 solution score graph........................................................................ 64
Figure 39: 25. Shared services solution score graph............................................................................. 64
Figure 40: Visualization SAFe expertise participants focus group ........................................................ 71
Figure 41: Visualization distributed expertise participants focus group .............................................. 72
Figure 42: Agile Release Train elements failing, created based on [1] ................................................. 79
Figure 43: Agile Release Train elements numbered, Modified and reproduced with permission from ©
2011-2016 Scaled Agile, Inc. All rights reserved. Original Big Picture graphic found at
scaledagileframework.com ................................................................................................................. 149
Figure 44: Picture start setup round 1 - groups .................................................................................. 158
Figure 45: Picture results round 1 - group Rini ................................................................................... 160
Figure 46: Picture results round 1 - group Peter ................................................................................ 161
Figure 47: Impression of plenary exercise .......................................................................................... 162
Figure 48: Picture results round 1 - plenary ....................................................................................... 164
Figure 49: Impression of dot voting .................................................................................................... 165
Figure 50: Picture result round 2 - dot voting individually ................................................................. 166
Figure 51: Picture results round 2 - combined ranking ...................................................................... 167
Figure 52: Example round 3 - post-it’s on wall ................................................................................... 168
Figure 53: Picture result round 4 - dot voting individually on 13. System Demo ............................... 171
Figure 54: Picture result round 4 - dot voting individually on 14. Solution Demo ............................. 172
Figure 55: Picture result round 4 - dot voting individually on 15. Inspect & Adapt ........................... 173
Figure 56: Picture result round 4 - dot voting individually on 16. PI Planning ................................... 174
Figure 57: Graph overview consequences System Demo ................................................................... 190
Figure 58: Graph expert distribution of risk System Demo ................................................................ 190
Figure 59: Graph expert distribution on likelihood System Demo ..................................................... 191
Figure 60: Graph expert distribution on impact System Demo .......................................................... 191
Figure 61: Graph overview consequences Solution Demo ................................................................. 192
Figure 62: Graph Expert distribution of risk Solution Demo ............................................................... 192
Figure 63: Graph expert distribution of likelihood Solution Demo .................................................... 193
Figure 64: Graph expert distribution of impact Solution Demo ......................................................... 193
Figure 65: Graph overview consequences Inspect & Adapt ............................................................... 194
Figure 66: Graph expert distribution of risk Inspect & Adapt............................................................. 194
Figure 67: Graph expert distribution of likelihood Inspect & Adapt .................................................. 195
Figure 68: Graph expert distribution of impact Inspect & Adapt ....................................................... 195
Figure 69: Graph overview consequences PI planning ....................................................................... 196
Figure 70: Graph expert distribution of risk PI planning ..................................................................... 196
Figure 71: Graph expert distribution of likelihood PI planning .......................................................... 197
Figure 72: Graph expert distribution of impact PI planning ............................................................... 197
Figure 73: Practitioner focus group - picture start setup round 1 - groups ........................................ 226
Figure 74: Practitioner focus group - picture result round 1 - group Hanneke .................................. 228
Figure 75: Practitioner focus group - picture results round 1 - group Peter ...................................... 229
Figure 76: Practitioner focus group - picture start setup round 1 - plenary....................................... 230
Figure 77: Practitioner focus group - picture result round 1 - plenary ............................................... 232
Figure 78: Practitioner focus group - picture result round 2 – combined ranking ............................. 234
Figure 79: Practitioner focus group - picture result round 3 - solutions PI planning ......................... 236
Figure 80: Practitioner focus group - picture result round 3 - solutions Inspect & Adapt ................. 237
Figure 81: Practitioner focus group - picture result round 3 - solutions Implementing 1-2-3 ........... 237
Figure 82: Practitioner focus group - picture result round 3 - solutions feature................................ 238
Figure 83: Practitioner focus group - picture result round 3 - solutions Shared Services .................. 238
Appendix A Systematic Literature Review protocol
Based on the procedures and guidelines presented by Kitchenham and Charters in [59] and [54], this appendix discusses the review protocol.

The need for a systematic review


The literature is searched for problems of Distributed Agile Development and Distributed Scrum
because there is little literature on problems of distributed SAFe. Google Scholar8 was searched on the
15th of February 2016 using the queries: ““Scaled Agile Framework” AND distributed AND problems”,
and ““Scaled Agile Framework” AND SAFe”. The first query yielded 96 hits, the second query 122.

From this search, two studies discussing problems were found: [65] and [66]. These studies discuss the challenges of transitioning from a traditional organization to an organization where agile is scaled. However, they do not cover the distributed aspect required for this research. Therefore, a different approach is taken to find problems of distributed SAFe: problems of Distributed Agile Development and Distributed Scrum are researched in this Systematic Literature Review.

Review commissioning
This Systematic Literature Review was done as part of a master thesis project, in the research chair on Global Software Engineering, in the Software Engineering research group of the Software Technology department in the Faculty of Electrical Engineering, Mathematics and Computer Science of Delft University of Technology, and was not commissioned.

Research questions
The review is done to answer the research question: “What problems can be expected when SAFe is
applied in distributed settings?”.

Research protocol
Based on the review protocol proposed in [59] and [54], this section presents the protocol that has been used in the Systematic Literature Review.

Search strategy
As for the initial search on distributed SAFe, Google Scholar is used for this search because it has indexed many different databases, including those of different universities. For the search strategy, the following queries were used:

• “Distributed Agile Development” AND (problems OR challenges) AND “systematic literature review”
• “Distributed Scrum” AND (problems OR challenges) AND “systematic literature review”

The use of the keywords “Distributed Agile Development” and “Distributed Scrum” means that general problems present in both Distributed Agile Development and Distributed Scrum are mentioned more often than Scrum-specific problems. The consequences of this effect are mitigated by considering all problems that are mentioned more than twice for the research. Moreover, dismissing problems early in the process could result in an important problem being missed.

Selection criteria

8 Google Scholar has been used for this search because it has indexed many different databases, including those of different universities, providing a broad view of the available literature.
The selection criteria are applied to studies within the field of Globally Distributed Software Engineering. Within this field, any study that discusses agile is considered. For acceptance of a paper, the following selection criteria were used:

• The study is a literature review
• The literature review is done systematically; only Systematic Literature Reviews are considered
• The study is published after the year 2000 and before the 1st of February 2016, because the subject of Globally Distributed Software Engineering is relatively new, and the review was started in February 2016
• The study is in English, in order to avoid misinterpretation due to language
• Regarding the subject of the study:
  - The study is on distributed development
  - The study is on agile or Scrum
  - The study identifies problems or challenges

For exclusion, no selection criteria have been used.
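
Purely as an illustration, the inclusion criteria above can be read as a predicate over study metadata. The sketch below is a minimal example of such a filter; the field names (title, year, language, is_systematic_review, topics) are hypothetical and not part of the protocol, and the year check simplifies the February 2016 cut-off.

from dataclasses import dataclass, field

@dataclass
class Study:
    title: str
    year: int
    language: str
    is_systematic_review: bool
    topics: set = field(default_factory=set)

def include(study: Study) -> bool:
    """Return True when a study meets every inclusion criterion."""
    return bool(
        study.is_systematic_review                      # systematic literature review only
        and 2000 < study.year <= 2016                   # simplified date window
        and study.language == "English"                 # avoid misinterpretation
        and "distributed development" in study.topics   # on distributed development
        and ({"agile", "scrum"} & study.topics)         # on agile or Scrum
        and "problems or challenges" in study.topics    # identifies problems or challenges
    )

candidates = [
    Study("Example SLR on distributed Scrum", 2014, "English", True,
          {"distributed development", "scrum", "problems or challenges"}),
    Study("Non-systematic review", 2012, "English", False,
          {"distributed development", "agile", "problems or challenges"}),
]
accepted = [s for s in candidates if include(s)]
print([s.title for s in accepted])  # prints ['Example SLR on distributed Scrum']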

Selection procedures
Studies are selected based on the criteria mentioned in the previous section.

Quality assessment and checklists


Only literature reviews that are done systematically are considered. Therefore, the findings and approach of the papers are well documented and could be repeated. Based on this, the quality of the accepted studies is assumed to be sufficient for this Systematic Literature Review.

Data extraction strategy


The extracted data consists of the standard information, as described in [54], as well as additional
information. The standard information extracted is: study title, study author(s), study year, and
publication details. This is extended with additional information, namely, the problems or challenges
presented in the study.

Synthesis of the extracted data


The gathered problems and challenges are grouped: similar problems and challenges have been grouped together and are presented as in Table 21. A small counting sketch follows the table.
Table 21: Problem groups example

Distributed Agile Development problems | Times mentioned
Problem A (description of problem A) | 12
Problem B (description of problem B) | 11
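
A minimal sketch of this counting step, assuming the problems extracted from each study have already been normalized to shared labels (the grouping of similar problems was a manual step in this research); the data shown are placeholders.

from collections import Counter

# Hypothetical extraction result: one list of problem labels per reviewed study.
problems_per_study = [
    ["communication", "time zone differences", "trust"],
    ["communication", "time zone differences"],
    ["communication", "tooling"],
]

counts = Counter(p for study in problems_per_study for p in study)
for problem, times in counts.most_common():
    print(f"{problem}: mentioned {times} times")

# Per the mitigation described in the search strategy, problems mentioned
# more than twice are carried forward.
carried_forward = [p for p, n in counts.items() if n > 2]
print(carried_forward)  # prints ['communication']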

Project timetable
The Systematic Literature Review was started on the 1st of February 2016 and was finished on the 1st of April 2016.

Research protocol review


The selection criteria and search strategy have been reviewed by the thesis supervisor and have been adjusted based on the feedback.
Appendix B Multiple informant protocol
For the multiple informant methodology, the following protocol was used; a small sketch of the mapping data structure follows the list. The execution of this protocol is presented in Appendix E.

1. An initial mapping between the problems and core values, based on consideration, is created from the insights of the author
2. The mapping is presented to the first informant, who adjusts the mapping
3. The adjusted mapping is presented to the second informant, who adjusts the mapping
4. The adjusted mapping is presented to the third informant, who adjusts the mapping
5. The final mapping is presented to all three informants and a consensus is reached
6. During the final mapping discussion, a second mapping between the problems and core values
based on impact is presented to the informants and a consensus is reached
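
A minimal sketch of how such a mapping could be recorded and adjusted, assuming one +/- value per problem and core value cell (Appendix E mentions values such as a minus or a plus); the problem and values shown are placeholders, not the actual mapping of this research.

# problem -> {core value -> "+" (considered) or "-" (not considered)}
mapping = {
    "Time zone differences": {
        "Alignment": "-", "Built-in Quality": "-",
        "Transparency": "-", "Program Execution": "+",
    },
}

def adjust(problem: str, core_value: str, new_value: str, informant: str) -> None:
    """Record one informant's adjustment (steps 2 to 4 of the protocol)."""
    old = mapping[problem][core_value]
    mapping[problem][core_value] = new_value
    print(f"{informant}: {problem} x {core_value}: {old} -> {new_value}")

adjust("Time zone differences", "Alignment", "+", "informant 1")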
Appendix C Expert focus group: protocol
For the focus group, the following protocol was used. The names of the participants of this focus group have been anonymized and are thus omitted from the protocol.

Prior to the focus group


Participants are selected with three different backgrounds: SAFe program consultants, Release Train Engineers with distributed experience (practitioners), and distributed experts from the academic world. Prior to the meeting, a list of problems of distributed agile development that are expected to be problematic is sent to the participants. For each of the problems an explanation is provided. Also, a list of Agile Release Train elements is sent, in which each element is numbered for later use. As not all experts present are familiar with SAFe, a brief description of each element is also provided. The invitation letter that is sent to the participants can be found in Appendix M and the attachment that is sent can be found in Appendix N.

The participants are asked to identify, for each problem, which Agile Release Train elements they think could experience difficulties if the problem occurred. The replies of the participants are put in Table 22, to be used as input for the session.
Table 22: Participants focus group form

Incorrect execution of SAFe | Language barriers | Time zone differences | Increased communication effort | Inefficient communication tools

Preparation of the focus group


The focus group session will be held at Prowareness; the reception will be in the central hall, where lunch will be provided. The participants are split into two groups, with every expertise represented in both groups. The groups will each have their own room; the participants will not be allowed to enter the rooms prior to the session.

In both rooms the elements have been put on a table; the layout of the tables is as presented in Table 23. The elements are ranked based on the input of the participants, with the most mentioned element on top and the least mentioned element at the bottom.
Table 23: Table layout focus group

Not specifically challenged by distributed | Undecided | Specifically challenged by distributed
(The elements below are placed in the Undecided column.)
16. PI Planning
14. Solution Demo
15. Inspect and Adapt
13. System Demo
22. Vision
23. Roadmap
2. Lean-agile mindset
6. Communities of Practice
10. Agile Release Train / Value
Stream
21. Release Management,
Shared Services & User
Experience
26. PI Objectives
5. Lean-agile leaders
12. Program Kanban
20 DevOps & System team
25. Milestones & Release
27. Feature
28. Enabler
1. Core values
4. Implementing 1-2-3
8. Weighted Shortest Job First
24. Metrics
29. Epics
11. Architectural runway
3. SAFe principles
9. Release any time
17. System Architect, Release
Train Engineer & Product
Management
18. Business Owners
19. Customer
7. Value Stream coordination

In addition, the following items are needed for the focus group session and have been arranged beforehand.

• 2 rooms
• Lunch
• 4 bundles of post-its
• 2 flipcharts (with sheets)
• 8 prints of the attachment
• 2 prints of timeline & text
• Example setup
• Tape
• Bundle of empty A4 sheets
• Present / thank-you for the participants

Protocol during the session


Opening 13:00
Central reception is in the hall; the participants will not be able to enter the group rooms at this moment. The participants will be asked not to talk to the other participants about the focus group, SAFe, or distributed working for the duration of the focus group. A welcome to everyone.

Introduction 13:00 – 13:15

The facilitators will be introduced, followed by a short explanation of the program for the rest of the afternoon and a presentation on the role of this focus group in the research. Each participant is provided with a printout of the numbered list of Agile Release Train elements.

- Introduction by Rini - plenary (15 min)

Text by facilitator:

“Welcome everyone, thank you for your time. I am Rini van Solingen and this is Peter van Buul; together we are investigating distributed SAFe as part of Peter’s master thesis at Delft University of Technology, for which I am the responsible professor. We will be facilitating today’s session; the objective of today is to gain insights into the challenges around SAFe elements in distributed settings.

The scientific nature of this session requires that the internal validity is guaranteed. For this reason, we ask you to follow the instructions provided by us carefully: for example, when asked to do an exercise individually, do so individually and do not discuss any aspect of the exercise. Additionally, we ask you not to discuss SAFe, distributed working or the focus group setting during breaks. If you want to discuss these topics, please do so after the session is finished.”

Combining experience in groups (round 1) 13:15 – 13:55

Each group will go to their own room. In both rooms, the elements have been put on the table; the layout of the tables is as presented in Table 23. The elements are ordered based on the input of the participants prior to the session. The composition of the groups will then be announced.

- Reaching agreement - groups of 3 (40 min)

Text by facilitator:

“In the first exercise, all 29 elements of the Agile Release Train will be discussed in a distributed setting, a distributed setting being one in which the people that are part of the Agile Release Train work in two or more locations. This will be done in groups of three; the first group is: (names group A), the second group: (names group B). In each room there is a sheet with three columns on it: “undecided”, “specifically challenged” and “not specifically challenged”. For now, all elements are put in the “undecided” column; these have been ranked based on your input, with the most mentioned element on top and the least mentioned at the bottom.

We ask you to decide together as a group, for each element, whether you expect it to be challenged in a distributed setting. If you think an element will be challenged in a distributed setting, put it in the column “specifically challenged”; else, put it in the column “not specifically challenged”. This exercise is time-boxed at 40 minutes; elements that have not been agreed upon by then will remain in the undecided column. Is the exercise clear?”

Collect and discuss group results (round 1) 13:55 – 14:25

After both groups have reached agreement, the groups come together in one of the group rooms. The results of the groups are merged. As in the previous round, the groups are asked to reach agreement on all elements.
- Reaching agreement - plenary (30 min)

Text by facilitator:

“As in the previous exercise, there are three columns: “undecided”, “specifically challenged” and “not specifically challenged”. Elements that both groups have identified as specifically challenged are put in the “specifically challenged” column. Elements that both groups have identified as not specifically challenged are put in the “not specifically challenged” column. All other elements are put in the “undecided” column.

As in the previous exercise, we ask you to reach agreement on each element. If you think an element will be challenged in a distributed setting, put it in the column “specifically challenged”; else, put it in the column “not specifically challenged”. This exercise is time-boxed at 30 minutes; elements that have not been agreed upon by then will be dismissed. Is the exercise clear?”

Dot voting (round 2) 14:25 – 14:35

Dot voting is done on two scales: impact and likelihood. Every participant gets 1 vote per
element. The votes are divided individually on an empty sheet; no interactions can occur at this time.
When all participants have divided their votes, the votes are handed in to the facilitator.

- Dot voting - individually (10 min)

Text by facilitator:

“For this exercise, we will use dot voting. Is everyone familiar with dot voting?” (If needed, explain dot
voting)

“For this dot voting session you get one vote per element, so X votes in total. In this exercise, there will be two topics on which to vote: the likelihood of the element failing, and the impact that the element has when it fails. So you get X votes to divide on how likely you think it is that the element fails in a distributed setting; more votes mean more likely to fail. You also get X votes to divide on how big you think the impact is if the element fails in a distributed setting; more votes mean more impact if the element fails. You may put more than one vote on a single element, if you wish to do so.

We will now give each of you two sheets of paper; please write down your name, the topic that you vote for, and a number for each element that is voted for (X, Y, etcetera). After this, we ask you to divide your votes over the elements. When you are done, return your sheets to me. Please do not discuss the voting before we start. This exercise is time-boxed at 10 minutes. Is the exercise clear?”

Analyzing the result with the group (round 2) 14:35 – 14:45

The facilitator puts the results of both dot votes on the wall. The top 3 of both scales are used to proceed to the next phase, with a maximum of 6 elements.

- Analyze dot vote and put on wall - plenary (10 min)

Text by facilitator:

“Based on the votes provided these 3 elements have been selected as most likely to fail. And these 3
elements have been selected as those with the highest impact. For the next round we will use these
X elements.”
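
A minimal sketch of this tally, assuming the handed-in voting sheets have been transcribed as participant-to-votes dictionaries; the participants, elements and numbers are placeholders, not the session results.

# participant -> {element -> number of votes}, one dictionary per scale
likelihood_votes = {
    "expert 1": {"16. PI Planning": 3, "15. Inspect and Adapt": 2, "13. System Demo": 1},
    "expert 2": {"16. PI Planning": 2, "14. Solution Demo": 2, "15. Inspect and Adapt": 2},
}
impact_votes = {
    "expert 1": {"16. PI Planning": 4, "14. Solution Demo": 2},
    "expert 2": {"15. Inspect and Adapt": 3, "16. PI Planning": 3},
}

def top3(votes_per_participant: dict) -> list:
    """Sum the votes per element over all participants and return the top 3."""
    totals: dict = {}
    for votes in votes_per_participant.values():
        for element, n in votes.items():
            totals[element] = totals.get(element, 0) + n
    return sorted(totals, key=totals.get, reverse=True)[:3]

# The union of both top 3s proceeds to the next round: at most 6 elements.
selected = set(top3(likelihood_votes)) | set(top3(impact_votes))
print(selected)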

Collect individual experience (round 3) 14:45 – 15:00
The participants will individually write down the 5 biggest consequences they think will happen when
an element fails.

- Writing down 5 consequences - individually (15 min)

Text by facilitator:

“In this exercise we ask each of you to individually write down the five biggest consequences of an element failing in a distributed setting. Please write down each consequence on a post-it with the element number in the top right corner. This exercise is time-boxed at 15 minutes. Is the exercise clear?”

Present findings to group (round 3) 15:00 – 15:15

Each individual puts his or her consequences on the wall and gives a brief description of the
consequence.

- Present consequences - plenary (15 min)

Text by facilitator:

“In this exercise we ask each of you to present your consequences and put them on the wall with the corresponding element. Please be brief in the elaboration of the consequences; this exercise is time-boxed at 15 minutes. Is the exercise clear?”

Collect and discuss group experience (round 3) 15:15 – 15:45

The group will now discuss, per element, the consequences of failure. Any additional consequences that come up are discussed and put on the wall. Each consequence is put on a new post-it with the problem-element identifier in the top right corner.

- Discussion on consequences - plenary (30 min)

Text by facilitator:

“In this exercise we ask you to discuss the consequences with the group. If additional consequences come up in the discussion, we write them down and put them on the wall. This exercise is time-boxed at 30 minutes. Is the exercise clear?”

Dot voting (round 4) 15:45 – 15:55

The consequences are numbered and again dot voted, based on likelihood and impact.

- Dot voting - individually (10 min)

Text by facilitator:

“In the final exercise, we will again use dot voting, now on the consequences. Before we vote, the consequences are numbered; as last time, you get one vote per consequence. We will again vote on the two topics of likelihood and impact. Consequences that are more likely to happen receive more votes, and consequences that have more impact receive more votes.

We will now give each of you two sheets of paper; please write down your name, the scale that you vote for, and a number for each element that is voted for. After this, we ask you to divide your votes over the elements. When you are done, return your sheets to me. Please do not discuss the voting before we start. This exercise is time-boxed at 10 minutes. Is the exercise clear?”

Analyzing the result with the group (round 4) 15:55 – 16:00
The facilitator puts the results of both dot votes on the wall.

- Analyze dot vote and put on wall - plenary (5 min)

Text by facilitator:

“Based on the dot voting the following ranking for the consequences has been created. This marks the
end of the focus group. Thank you for your time.”

Closing 16:00

Thank everyone for their time.
Appendix D Practitioner focus group: protocol
Based on experience from the previous focus group, the protocol for the Release Train Engineer focus group has been enhanced. The list of SAFe elements is extended, and some elements have been split into multiple elements. Forms are created to streamline the dot voting by asking the participants to count their votes themselves, allowing for faster processing of the results. The names of the participants of this focus group have been anonymized and are thus omitted from the protocol.

Prior to the focus group


Prior to the focus group, the participants have been asked to provide information regarding their experience with SAFe, distributed working and distributed SAFe. All participants have experience with SAFe; some are just starting with the transition to SAFe and others have worked with SAFe for a couple of years. To cover for this difference in experience, those who have just started with SAFe are asked to read up before the session. The other participants have worked with SAFe for a longer time and are expected to have enough experience. For the difference in distributed experience, no action is taken.

Prior to the session, an official invitation is sent to all participants; this invitation can be found in Appendix V.

Preparation of the focus group


The focus group session will be held at Prowareness; the reception will be in the central hall, where lunch will be provided. For round 1 and round 3, the participants are split into two groups. The first split is based on distributed SAFe expertise: one group contains all participants with distributed SAFe expertise, and the others are in the other group. For the second split, these groups are mixed so that both groups have equal experience and the less experienced participants can challenge the statements of those with experience. The groups will each have their own room; the participants will not be allowed to enter the rooms prior to the session.

In both rooms the elements have been put on a table; the layout of the tables is as presented in Table 24. The elements are ranked based on the results of the previous focus group. At the top are the elements that the previous focus group classified as specifically challenged, ranked by risk, highest at the top. Next is the item that remained undecided during the previous focus group, then the other items on which the two groups of the previous focus group had discussion, and finally the items that both groups classified as not specifically challenged. This way, the top of the list is expected to be specifically challenged by a distributed setting and the bottom not; the items in the middle are expected to cause discussion.
Table 24: Table layout focus group

Not specifically challenged by distributed | Undecided | Specifically challenged by distributed
(The elements below are placed in the Undecided column.)
16. PI planning
15. Inspect & Adapt
14. Solution Demo
12. Program Kanban
13. System Demo
22. DevOps
23. System team
1. Core values
6. Communities of Practice
2. Lean-agile mindset
8. Weighted Shortest Job First
24. Release Management
25. Shared Services
26. User Experience
4. Implementing 1-2-3
9. Release any time
17. Release Train Engineer
18. System Architect
19. Product Management
21. Customer
27. Vision
28. Roadmap
10. Agile Release Train / Value
Stream
31. PI Objectives
5. Lean-agile leaders
30. Milestones & Releases
32. Feature
33. Enabler
29. Metrics
34. Epics
11. Architectural runway
3. SAFe principles
20. Business Owners
7. Value Stream coordination

Besides this, two forms are created to streamline the dot voting process during the session. These
forms can be found in Appendix W.

In addition, the following items are needed for the focus group session and have been arranged beforehand.

• 2 rooms
• Lunch
• 4 bundles of post-its
• 2 flipcharts (with sheets)
• 16 prints of the attachment
• 3 prints of timeline & text
• Example setup
• Tape
• 15 prints of the forms for the first dot voting
• 15 prints of the forms for solution brainstorming
• 15 prints of the forms for the second dot voting
• Present / thank-you for the participants
Protocol during the session
Opening 13:00

Central reception is in the hall; the participants will not be able to enter the group rooms at this moment. The participants will be asked not to talk to the other participants about the focus group, SAFe, or distributed working for the duration of the focus group. A welcome to everyone.

Introduction 13:00 – 13:15

The facilitators will be introduced, followed by a short explanation of the program for the rest of the afternoon and a presentation on the role of this focus group in the research. Each participant is provided with a printout of the numbered list of Agile Release Train elements.

- Introduction by Peter - plenary (15 min)

Text by facilitator:

“Welcome everyone, thank you for your time. I am Peter van Buul and this is Rini van Solingen; together we are investigating distributed SAFe as part of my master thesis at Delft University of Technology, for which Rini is the responsible professor. Also with us is Hanneke Gieles, who will help with facilitating today’s session. The objective of today is to gain insights into the challenges around SAFe elements in distributed settings and their solutions. Discussion is essential for the focus group, so please make sure that all participants can tell their story, and try not to dominate the discussion.

The scientific nature of this session requires that the internal validity is guaranteed. For this reason, we ask you to follow the instructions provided by us carefully: for example, when asked to do an exercise individually, do so individually and do not discuss any aspect of the exercise. Additionally, we ask you not to discuss SAFe, distributed working or the focus group setting during breaks. If you want to discuss these topics, please do so after the session is finished.”

Combining experience in groups (round 1) 13:15 – 13:55

Each group will go to their own room. In both rooms, the elements have been put on the table; the layout of the tables is as presented in Table 24. The elements are ordered based on the previous focus group, as described in the preparation. The composition of the groups will then be announced.

- Reaching agreement - two groups (40 min)

Text by facilitator:

“In the first exercise, all 34 elements of the Agile Release Train will be discussed in a distributed setting, a distributed setting being one in which the people that are part of the Agile Release Train work in two or more locations, not necessarily distributed over multiple time zones or continents. This will be done in two groups; the groups will be divided based on previous participation. Those who have participated in the survey that was done previously will join Hanneke in this room. The others will join me in the other room.

We ask you to decide together as a group, for each element, whether you expect it to be challenged in a distributed setting. If you think an element will be challenged in a distributed setting, put it in the column “specifically challenged”; else, put it in the column “not specifically challenged”. This exercise is time-boxed at 40 minutes; elements that have not been agreed upon by then will remain in the undecided column. Is the exercise clear?”

Collect and discuss group results (round 1) 13:55 – 14:25
After both groups have reached agreement, the groups come together in one of the group rooms. The results of the groups and those of the previous focus group are merged. As in the previous round, the groups are asked to reach agreement on all elements.

- Reaching agreement - plenary (30 min)

Text by facilitator:

“As in the previous exercise, there are three columns: “undecided”, “specifically challenged” and “not specifically challenged”. Elements that both groups have identified as specifically challenged are put in the “specifically challenged” column. Elements that both groups have identified as not specifically challenged are put in the “not specifically challenged” column. All other elements are put in the “undecided” column.

As in the previous exercise, we ask you to reach agreement on each element. If you think an element will be challenged in a distributed setting, put it in the column “specifically challenged”; else, put it in the column “not specifically challenged”. This exercise is time-boxed at 30 minutes; elements that have not been agreed upon by then will be dismissed. Is the exercise clear?”

Dot voting (round 2) 14:25 – 14:35

Dot voting is done on two scales: impact and likelihood. Every participant gets 1 vote per
element. The votes are divided individually on an empty sheet; no interactions can occur at this time.
When all participants have divided their votes, the votes are handed in to the facilitator.

- Dot voting - individually (10 min)

Text by facilitator:

“For this exercise, we will use dot voting. Is everyone familiar with dot voting?” (If needed, explain dot
voting)

“For this dot voting session you get one vote per element, so X votes in total. In this exercise, there will be two topics on which to vote: the likelihood of the element failing, and the impact that the element has when it fails. So you get X votes to divide on how likely you think it is that the element fails in a distributed setting; more votes mean more likely to fail. You also get X votes to divide on how big you think the impact is if the element fails in a distributed setting; more votes mean more impact if the element fails. You may put more than one vote on a single element, if you wish to do so.

We will now give each of you a form; please write down the element numbers in the first column, then divide your votes over the elements, and count your votes to check that the sum is X. When you are done, return your sheets to me. Please do not discuss the voting before we start. This exercise is time-boxed at 10 minutes. Is the exercise clear?”

Analyzing the result with the group (round 2) 14:35 – 14:45

The facilitator puts the results of both dot votes on the wall. The top 3 of both scales are used to proceed to the next phase, with a maximum of 6 elements.

- Analyze dot vote and put on wall - plenary (10 min)

Text by facilitator:
“Based on the votes provided these 3 elements have been selected as most likely to fail. And these 3
elements have been selected as those with the highest impact. For the next round we will use these
X elements.”

Collect individual experience (round 3) 14:45 – 14:55

A plenary instruction of the exercise; individuals write down possible solutions.

- Brainstorming on solutions per element – individually (10 min)

Text by facilitator:

“This exercise is done individually. You are asked to write down possible solutions for each of the X Agile Release Train elements. Use the forms provided to write these down. This exercise is time-boxed at 10 minutes. Is the exercise clear?”

Collect group experience (round 3) 14:55 – 15:25

A plenary instruction of the exercise; then each group will go to their own room. The groups will be asked to discuss the elements and to reach consensus, for each element, on the two best solutions.

- Selecting top 2 solutions per element – groups (30 min)

Text by facilitator:

“This exercise is done in two groups: the first group consists of (names group A), and the second group of (names group B). These groups have been created randomly by coin flip. You are asked to discuss possible solutions for each of the X Agile Release Train elements. For each element, reach consensus on the two best solutions and write each solution down on a post-it. You can take the result of your individual brainstorm with you for reference; however, please do not write anything else on it, and hand it in after the exercise. This exercise is time-boxed at 30 minutes. Is the exercise clear?”

Present findings to group (round 3) 15:25 – 15:45

Both groups present their solutions one by one and group similar solutions together.

- Present solutions - plenary (20 min)

Text by facilitator:

“In this exercise we ask each group to present their solutions and put them on the wall with the corresponding element; similar solutions are put together. Please be brief in the elaboration of the solutions; this exercise is time-boxed at 20 minutes, so one minute per solution. Is the exercise clear?”

Dot voting (round 4) 15:45 – 15:55

The solutions are numbered and again dot voted, based on difficulty and impact.

- Dot voting - individually (10 min)

Text by facilitator:

“In the final exercise, we will again use dot voting. Before we start, the solutions will be numbered. We will rank on the two topics of difficulty and impact: solutions that are easy to realize receive more votes, and solutions that have a big impact receive more votes.
We will now give each of you a form; please fill in the form, writing down your name and the element that you are ranking in the corresponding box. When you are done, return your sheets to me. Please do not discuss the ranking before we start. This exercise is time-boxed at 10 minutes. Is the exercise clear?”

Analyzing the result with the group (round 4) 15:55 – 16:00

The facilitator puts the results of the rankings on the wall. The participants are asked for any last
remarks on the rankings and anything that stands out to them.

- Analyze dot vote and put on wall - plenary (5 min)

Text by facilitator:

“Based on the votes, the following ranking for the solutions has been created. Are there any remarks on the rankings? Is there anything that stands out? This marks the end of the focus group. Thank you for your time.”
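
One plausible way to combine the two rankings into a single solution score, for example for graphs such as Figures 35 to 39, is to sum the ease-of-realization votes and the impact votes per solution. This formula and the data below are assumptions for illustration; the thesis body defines the actual scoring.

# votes per solution on each scale (placeholder data)
ease_votes = {"solution A": 5, "solution B": 2, "solution C": 1}
impact_votes = {"solution A": 3, "solution B": 4, "solution C": 1}

solutions = set(ease_votes) | set(impact_votes)
score = {s: ease_votes.get(s, 0) + impact_votes.get(s, 0) for s in solutions}

# Print the solutions from highest to lowest combined score.
for solution, points in sorted(score.items(), key=lambda kv: kv[1], reverse=True):
    print(solution, points)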

Closing 16:00

Thank everyone for their time.
Appendix E Multiple informant execution
The informants used to verify the mappings are all SAFe Program Consultants from Prowareness who have implemented SAFe in different companies. An initial mapping was made based on the insights of the author. This mapping was then presented to the informants, one informant at a time. Adjustments made by the first informant were marked, and the new version was presented to the next informant. The first meeting regarding the mapping took place on the 11th of April 2016; the final discussion was on the 31st of May 2016. The duration was longer than initially anticipated, as half of the initially planned meetings did not yield results and finding a timeslot with an informant proved difficult.

Timeline
• 11 April 2016: mapping discussion with informant 1
• 12 April 2016: mapping discussion with an informant which was not usable9
• 19 April 2016: mapping discussion with informant 2
• 19 April 2016: a mapping discussion with an informant was cancelled; rescheduling proved impossible
• 10 May 2016: mapping discussion with informant 3
• 11 May 2016: mapping discussion with informant 2 via mail on the changes of informant 3
• 31 May 2016: final mapping discussion with informants 1 and 3

Because the mappings had to be verified before the focus group session, a meeting with all three informants had to be planned before the session. However, planning such a meeting proved impossible. Therefore, informant 2 was asked to review the changes made by informant 3; informant 2 agreed with the changes and the reasoning. Besides this, informant 2 also made a mapping on impact for all problems, which would be discussed with informants 1 and 3 in the final mapping discussion. Informant 2 agreed in advance with the changes of the final mapping discussion, provided no “big” changes (a minus changed to a plus, for example) would be made there. The final discussion was then held with informants 1 and 3.

Mapping discussion execution


The mapping discussions took place at the workplace of the author, at Prowareness, with access to a PC. This way, insights provided by the informants could be incorporated into the mapping immediately. At the start of each meeting, the frame of reasoning for the mapping was explained: the mapping shows whether the core value considers the problem.

Once the informant understood the frame of reasoning, the mapping was discussed. One by one, each cell of the mapping was reviewed by the informant. Per row, the problem was explained and the informant reviewed the values for each core value. Changes made by the informant were explained and incorporated. Sometimes the informant lost the frame of reasoning; in that case, the author reminded the informant of it.

After all cells were reviewed by the informant, the informant was thanked for his time.

9 The informant found it very difficult to reason within the frame required for the mapping and therefore did not feel competent to answer the questions. Thus, the choice was made not to discuss the mapping and to exclude the informant from the results.
Final mapping discussion execution
The final mapping discussion was held with informants 1 and 3; in this discussion, the final version of the mapping was made. The changes that informants 2 and 3 made to the mapping provided by informant 1 were discussed. The author explained the changes of informant 2, as he was not present, and informant 3 explained his own changes. For each change, the informants came to an agreement on the value that should be filled in. New insights provided by the informants during the discussion resulted in some minor changes in the four core value columns. However, none of these changes changed the result column, thus not affecting the next steps in the process.

After the informants reached agreement on the mapping on consideration, the mapping on impact was discussed for the 9 problems that were not considered. The informants deviated more from each other in filling in this mapping, which resulted in some changes to the result column as well. However, these were small changes which did not affect the next steps of the process.
Appendix F List of SAFe elements
An elaborate description of each of these elements can be found on the SAFe website
www.scaledagileframework.com [1].

SAFe events:

• Team level events
1. Daily Scrum
2. Iteration Planning
3. Team Demo
4. Iteration Retrospective
• Program level events
5. Scrum of Scrums
6. PO Sync
7. System Demo
8. PI Planning
9. Solution Demo
10. Inspect and Adapt
• Value stream level events
11. Pre PI Planning
12. Post PI Planning

SAFe roles:

• Team level roles
13. Agile Teams
14. Product Owner
15. Scrum Master
• Program level roles
16. System Architect
17. Product Management
18. Release Train Engineer
19. Business Owners
• Value stream level roles
20. Solution Architect
21. Solution Management
22. Value Stream Engineer
23. Supplier
• Portfolio level roles
24. Enterprise
25. Enterprise Architect
26. Program Portfolio Management
27. Epic Owners
• Spanning palette
28. DevOps
29. System team
30. Release Management
31. Shared Services
32. User Experience
• Level transcending roles
33. Customer
34. Enterprise

SAFe artefacts:

• Team level artefacts
35. Iteration goals
36. Team PI objectives
37. Stories
38. Team backlog
 Program level artefacts
39. Program PI objectives
40. Business feature
41. Enabler feature
 Value stream level artefacts
42. Solution intent
43. Solution context
44. Value Stream PI objectives
45. Business capability
46. Enabler capability
 Portfolio level artefacts
47. Strategic themes
48. Budgets
49. Business epic
50. Enabler epic
 Level transcending artefacts
51. Vision
52. Roadmap
53. Metrics
54. Milestones & Releases
55. Core values
56. SAFe principles
57. Nonfunctional requirements

SAFe best practices:

- Team level best practices
58. Innovation and planning iteration
59. ScrumXP / Kanban
60. Iterations
- Program level best practices
61. Agile Release Train
62. Architectural runway
63. Program Kanban
- Value stream level best practices
64. Economic framework
65. Value Stream
66. Value Stream coordination
67. Value Stream Kanban
- Portfolio level best practices
68. Value Streams
69. Portfolio Kanban
- Level transcending best practices
70. Weighted Shortest Job First
71. Lean-agile leaders
72. Communities of Practice
73. Lean-agile mindset
74. Implementing 1-2-3
75. Release any time
76. Continuous Integration
77. Develop on cadence
78. Model-based systems engineering
79. Set-based design
80. Agile architecture

Appendix G Description of the Agile Release Train
elements
Below, a brief description is provided of the different elements of the Agile Release Train. An elaborate
description of each of these elements can be found on the SAFe website
www.scaledagileframework.com [1].

Agile Release Train Practices:


1. Core values
The core values describe the culture that an organization needs to adopt when implementing SAFe.
These core values are therefore also the culture that the Agile Release Train needs to adopt. The four
core values are:

1. Alignment
2. Built-in Quality
3. Transparency
4. Program Execution

2. Lean-agile mindset
A lean-agile mindset is needed to support lean and agile development at scale in an entire
organization. The mindset consists of two parts: thinking lean and embracing agility.

3. SAFe principles
The SAFe principles are the guidelines for decision making when working with SAFe. These principles
also apply for decision making in the Agile Release Train. The nine SAFe principles are:

1. Take an economic view


2. Apply systems thinking
3. Assume variability; preserve options
4. Build incrementally with fast, integrated learning cycles
5. Base milestones on objective evaluation of working systems
6. Visualize and limit Work In Progress, reduce batch sizes and manage queue lengths
7. Apply cadence, synchronize with cross-domain planning
8. Unlock the intrinsic motivation of knowledge workers
9. Decentralize decision-making

4. Implementing 1-2-3
Implementing 1-2-3 is the basic deployment pattern for a successful deployment of SAFe that has been
developed over the years of implementing SAFe.

5. Lean-agile leaders
For a SAFe transformation to be successful, the current managers, executives and leaders of the
organization need to adopt and lead the change. They have the power to continuously challenge the
organization to become more agile. After these people have been trained they become the so-called
lean-agile leaders that drive agile from within the organization.

120
6. Communities of Practice
Communities of Practice are informal groups of people from different teams, Agile Release Trains or
even Value Streams that have a shared interest, which is the topic of the Community of Practice. Both
the experts on the topic as well as those who want to become an expert are part of the Community
of Practice. The goal of these Communities of Practice is to allow knowledge to be shared across
different Agile Release Trains or Value Streams.

7. Value Stream coordination


Value Streams are organized to be able to deliver value independently; however, in practice there are
dependencies between the different Value Streams. These dependencies have to be properly
managed to let the Value Streams function independently. Value Stream coordination contains the
different tools that SAFe provides to manage these dependencies.

8. Weighted Shortest Job First


Prioritizing items in SAFe is done using the Weighted Shortest Job First (WSJF). SAFe uses three values
to calculate the cost of delay, namely, user-business value (UBV), time criticality (TC), and risk
reduction-opportunity enablement value (RROE). This risk reduction-opportunity enablement value
consists of two parts: does this enable other new business opportunities, and what is the risk when
this is not done. The cost of delay is then calculated as follows:

Cost of Delay = UBV + TC + RROE

This cost of delay is used to calculate the WSJF as follows:

WSJF = Cost of Delay / Job Size
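
As a worked illustration of these formulas, the following Python sketch computes the WSJF for a few
backlog items; the item names and the relative scores are made-up example values, not data from
this research.

```python
# Sketch of WSJF prioritization; the backlog items and their relative scores
# are made up for illustration.

def wsjf(ubv: float, tc: float, rroe: float, job_size: float) -> float:
    """Weighted Shortest Job First: cost of delay divided by job size."""
    cost_of_delay = ubv + tc + rroe
    return cost_of_delay / job_size

# Hypothetical backlog items: (name, UBV, TC, RROE, job size)
backlog = [
    ("Feature A", 8, 3, 5, 8),
    ("Feature B", 5, 8, 2, 3),
    ("Feature C", 3, 2, 8, 5),
]

# Highest WSJF first: short jobs with a high cost of delay float to the top.
for name, ubv, tc, rroe, size in sorted(backlog, key=lambda i: wsjf(*i[1:]), reverse=True):
    print(f"{name}: WSJF = {wsjf(ubv, tc, rroe, size):.2f}")
```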

9. Release any time


When continuous delivery is possible, releasing can be done at any time. However, a product may
consist of multiple systems which require different release models; therefore, SAFe states: “Release
whatever you want, whenever it makes sense within the governance and business model”.

10. Agile Release Train / Value Stream


A Value Stream is used in SAFe to deliver value to a customer; when a single Agile Release Train on
its own supports the entire Value Stream, the two are inseparably linked. All resources required to
deliver the solution to the customer are part of the Agile Release Train, so that the Agile Release Train
spans the entire Value Stream.

The agile teams that are part of the Agile Release Train are aligned via a single vision, roadmap and
program backlog. These teams iterate in a so-called program increment, or PI, of 8 to 12 weeks,
consisting of 4 to 6 two-week team iterations. During the team iterations the teams continuously add
value to the solution by finishing fully tested stories. At the end of each team iteration the integrated
solution is demoed in the System Demo, done by the system team.

11. Architectural runway


Using the architectural runway, SAFe ensures that the architecture is designed just right: neither
over-engineered nor under-engineered. When business epics, features and stories are implemented,
the architectural runway is consumed; the runway is used to support their functionalities. So, to be
able to continuously add new functionalities, the architecture needs to be extended continuously,
which is done by implementing enablers. The architectural runway provides a means to keep the
architecture designed just right.

12. Program Kanban


The program Kanban is part of the governance model that SAFe uses to create a sustainable flow of
value to the customer. The program Kanban provides a flow of features for the program increments
of the Agile Release Train. This flow is regulated so as not to starve or overload the Agile Release Train.
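
This regulating effect can be illustrated with a minimal work-in-progress limit, as in the following
Python sketch; the class, the stage names and the limit value are assumptions for the example and
not part of SAFe's definition of the program Kanban.

```python
# Minimal sketch of WIP-limited flow regulation, the idea behind the program
# Kanban; the WIP limit and feature names are illustrative assumptions.
from collections import deque

class ProgramKanban:
    def __init__(self, wip_limit: int):
        self.funnel = deque()   # incoming feature ideas, waiting to be pulled
        self.in_progress = []   # features currently being worked on
        self.wip_limit = wip_limit

    def add(self, feature: str) -> None:
        self.funnel.append(feature)

    def pull(self):
        """Pull the next feature only when the WIP limit allows it, so the
        Agile Release Train is neither starved nor overloaded."""
        if self.funnel and len(self.in_progress) < self.wip_limit:
            feature = self.funnel.popleft()
            self.in_progress.append(feature)
            return feature
        return None

    def finish(self, feature: str) -> None:
        self.in_progress.remove(feature)

kanban = ProgramKanban(wip_limit=2)
for f in ["feature A", "feature B", "feature C"]:
    kanban.add(f)
print(kanban.pull())   # feature A
print(kanban.pull())   # feature B
print(kanban.pull())   # None: WIP limit reached until a feature finishes
kanban.finish("feature A")
print(kanban.pull())   # feature C
```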

Agile Release Train events:


13. System Demo
After every team iteration the system demo takes place. This demo functions as the primary measure
of progress for the Agile Release Train. During the system demo, the system team demonstrates the
fully integrated work of all teams that are part of the Agile Release Train to the stakeholders, sponsors
and customers.

14. Solution Demo


At the end of each program increment the system team demonstrates work that all agile teams that
are part of the Agile Release Train have done during the previous program increment. This is presented
to all stakeholders and everyone involved with the Agile Release Train. In the case where a single Agile
Release Train supports a Value Stream this is called the system demo. However, it is different from
the regular system demo, as more work is presented and everyone involved with the Agile Release
Train is present.

15. Inspect and Adapt


The inspect and adapt workshop is for the Agile Release Train what the retrospective is for a Scrum
team. At the end of every program increment the inspect and adapt workshop is held to reflect, gather
data, solve problems and take action based on learnings of the previous program increment. Everyone
involved with the Agile Release Train participates in the workshop, including stakeholders. The
results of the workshop are improvement stories, which can be added to the backlog of the PI planning.

16. Program Increment Planning


The program increment planning, in short PI planning, is one of the three events for synchronization
of all teams that are part of the Agile Release Train. The PI planning is facilitated by the Release Train
Engineer of the Agile Release Train and takes place over two days. First the teams get presentations
on the business context and vision. After these presentations the teams create plans for the next
program increment. The results of this planning are the program increment objectives, which are the
commitments of the teams for the upcoming program increment.

Agile Release Train roles:


17. Release Train Engineer
The Release Train Engineer is the Scrum Master of the Agile Release Train: the Release Train Engineer
facilitates the processes, removes impediments, manages risk and continuously improves the
program. Together with the System Architect and Product Management, these three make sure that
the train keeps moving forward.

18. System Architect
The System Architect is the person, team, or teams responsible for the overall technical
architecture and engineering design of the system. The System Architect designs the common
technical direction of the system.

19. Product Management


Product Management is the Product Owner of the Agile Release Train; Product Management is
responsible for the Program Vision and the Backlog, creating the framework within which the Product
Owners can operate.

20. Business Owners


The Business Owners are responsible for the value delivered by the Agile Release Train. They provide
leadership to the Agile Release Train by maintaining and updating the mission and vision. They
participate in the PI planning by providing the mission of the train, assigning business value to PI
objectives and approving the PI plan; if necessary, they defend this plan to management. During PI
execution they actively participate in the PI: they enable decentralized decision making by providing
the team members with the appropriate authority, and they function as a coach for the teams,
enabling them to continuously improve their skills.

21. Customer
The customer receives the solution that solves its current needs. The customer works together with
Product Management and other key stakeholders to prioritize development. By actively participating
in events, such as planning sessions and demos, the customer knows what the teams are doing and
can steer the development of the solution. The customer is thus a part of the Value Stream that is
supported by the Agile Release Train.

22. DevOps
Deploying to operations is required to deliver value to the customer; this complex process is supported
by the DevOps team. This team is part of the Agile Release Train to enable deployment of the solution
developed by the train at any time.

23. System team


One or more system teams are formed to assist the agile teams with integrating the solution, demoing
the solution and ensuring a proper development infrastructure.

24. Release Management


Release Management affects all teams and is thus shared across the Agile Release Train. Release
Management plans and manages the release of the solution; they also help to guide the solution
towards the business goals.

25. Shared Services


Shared Services consist of any specialized roles that are required for the success of the Agile Release
Train but that cannot be dedicated to the train full time, for example security specialists or technical
writers. These resources are available for all teams in the Agile Release Train and must be planned in
when they are needed.

26. User Experience
User Experience represents the user’s perception of the system including the user interface. The User
Experience designers support all teams of the Agile Release Train with anything related to interactions
with the user. They also educate the teams on user interface design and testing.

Agile Release Train artefacts:


27. Vision
The vision describes the long-term vision of the solution, using the input of the customer and
stakeholders as well as the features that are on the backlog. It provides context on the solution that
is being developed and sets the boundaries for content decisions and new features.

28. Roadmap
The roadmap represents the Agile Release Train deliverables; it consists of the committed PI
objectives of the current PI and a forecast for the next one or two PIs. Product Management updates
the roadmap according to the vision. A balance has to be obtained between planning not enough,
resulting in less alignment, and planning too much, resulting in an unresponsive queue which obstructs
change.

29. Metrics
SAFe presents multiple metrics, and with these metrics different things are measured. The most
important are progress and whether the desired solution is delivered, which are measured best on
the working solution. SAFe also presents many other ways to measure progress, for example the epic
burn-up chart, Value Stream performance metrics and the Agile Release Train self-assessment. All
these measures are useful; however, they are all inferior to measuring the working solution.
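
As an illustration of one of these metrics, the data behind an epic burn-up chart can be computed as
in the following sketch; the scope and per-iteration values are made up for the example.

```python
# Sketch of the data behind an epic burn-up chart: cumulative completed work
# tracked against the total scope; all numbers are made-up example values.
total_scope = 100                                # story points in the epic
completed_per_iteration = [8, 12, 10, 15, 11]    # points finished per iteration

done = 0
for iteration, points in enumerate(completed_per_iteration, start=1):
    done += points
    print(f"Iteration {iteration}: {done}/{total_scope} points completed")
```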

30. Milestones & Releases


SAFe distinguishes different kinds of milestones. PI milestones occur on the PI cadence and
objectively measure progress based on the working solution. Learning milestones occur ad hoc:
when a question arises, a hypothesis is formulated and validated against market conditions.
Fixed-date milestones occur when a preplanned event takes place, for example a customer demo or
a scheduled large-scale integration. Other milestones, which come from, for example, audits, could
be required for a release.

A feature only adds value for the customer if the feature is released and added to the working solution
at the customer. Releasing frequently enables frequent addition of value. However, releasing should
only be done when it actually makes sense, as stated in release any time.

31. PI Objectives
The PI objectives are a summary of the business and technical objectives of the teams and Agile
Release Train. They are formulated during the PI planning and the teams commit to them for the
upcoming PI. Formulating these objectives is done to validate the teams' understanding of the intent
of the business regarding the features the teams implement. The result of this is that, when the intent
of the business is known, the goal of the team becomes achieving the desired outcome rather than
finishing the list of features.

32. Feature

A feature describes a service that the system can provide that satisfies a specific need of one or more
users. The size of a feature is such that it can be picked up in a single PI by a single Agile Release Train;
thus, a feature is planned and reviewed at the PI boundaries. Features are split into stories, which can
be picked up by a team during an iteration.

33. Enabler
Enablers are the technical initiatives that pave the architectural runway and are created as business
initiatives consume the runway. Enablers are only created when needed, to prevent engineering too
far ahead, which results in an over-engineered solution. Enablers are formulated at program level as
features and at team level as stories. Enablers that change the architecture can be big; however, they
have to be broken down into small pieces (enabler stories) so that teams can implement these during
an iteration.

34. Epics
The biggest business initiatives are cast into epics, in the form of lightweight business cases. Epics
can be split over multiple Value Streams or Agile Release Trains. The features that an Agile Release
Train develops come from the epics that are defined at portfolio level.

Appendix H Rejected studies
Table 25 shows the rejected studies including the reason for rejection.
Table 25: Rejected studies including reason for rejection

Ref Reason for rejection


[82] The study is not on agile
[8] The study is not on agile
[83] The study does not present problems or challenges
[84] The study does not present problems or challenges
[85] The study is not on agile
[86] The study does not present problems or challenges on Distributed Agile
[87] The study is not on distributed
[88] The study is not on agile
[89] The study is not on agile
[90] The study is not on distributed
[91] The study is not on distributed
[92] The study is not on agile
[93] The study is not on agile
[94] The study does not present problems or challenges
[95] The study does not present problems or challenges
[96] The study is not on agile
[97] The study is not on agile
[98] The literature review is not done systematically
[99] The study is not on agile
[100] The study is not in English
[101] The literature review is not done systematically
[102] The study does not make a clear distinction between co-located and distributed problems
[103] The study is not on agile
[104] The study does not present problems or challenges
[105] The study does not present problems or challenges
[106] The study does not present problems or challenges
[107] The study is a pilot, not a complete Systematic Literature Review
[108] The study does not make a clear distinction between co-located and distributed problems
[109] The study is not on agile
[110] The study is not on distributed
[111] The literature review is not done systematically
[112] The literature review is not done systematically
[113] The study is not on distributed
[114] The study is not on agile
[115] The study does not present problems or challenges on Distributed Agile
[116] The study is not in English
[117] The literature review is not done systematically
[118] The study is not on distributed
[119] The study is not on agile
[120] The literature review is not done systematically
[121] The literature review is not done systematically
[122] The study is not on distributed
[123] The study is not on agile

[124] The study is not on distributed
[125] The study is not on agile
[126] The study is not on agile
[127] The study does not present problems or challenges
[128] The study is not on agile
[129] The study does not present problems or challenges
[130] The study is not on agile
[131] The study is not on agile
[132] The study is not on agile
[133] The study is not on agile
[6] The study is not on agile
[134] The study is not on agile
[135] The study is not on agile
[136] The study is not on agile
[137] The study is not on agile
[138] The study is not on agile
[139] The study is not on agile
[140] The study is not on agile
[141] The study is not on agile
[142] The study is not on agile
[143] The study is not on agile
[144] The study is not on agile
[145] The study is not on agile
[146] The study is not on agile
[147] The study is not on agile
[148] The study is not on agile
[149] The study is not on agile
[150] The study is not on agile
[151] The study is not on agile
[152] The study is not on agile
[153] The study is not on agile
[154] The study is not on agile
[155] The study is not on agile

Appendix I Problems and challenges of accepted
studies
Table 26 to Table 36 list the different challenges that are presented, per study.
Table 26: Challenges of geographically distributed agile development from [67]

Challenge
Time zone differences
Geographic differences
Team size
Number of teams
Coordination
Project domain
Project architecture
Customer involvement
Customer representative involvement
Project management process
Communication tools
Communication infrastructure
Organizational culture
Language
National culture
Trust in team or team members
Personal practice

Table 27: Challenges of Distributed Scrum from [68]

Challenge
Lack of synchronous communication
Collaboration difficulties
Poor communication bandwidth
Wide range of tool support needed
Team management
Finding the correct office space
Coordinating with multiple sites

Table 28: Risks of Distributed Scrum from [69]

Risk
Asynchronous communication
Lack of group awareness
Poor communication bandwidth
Lack of tool support
Large team sizes
Lack of collaborative office environment
Increased number of sites

Table 29: Challenges of Agile Global Software Development from [70]

Challenge
People differences
Distance differences
Team issues
Technology issues
Architectural issues
Processes issues
Customer communication

Table 30: Challenges of applying agile methods in offshore development from [71]

Challenge
Daily Scrum difficult to organize
Silence of participants due to linguistic and cultural differences
Temporal difference
Difficult to truly evaluate the state of the project
Lack of confidence due to cultural and language differences

Table 31: Challenges of Distributed Agile Software Development from [72]

Challenge
Lack of communication and collaboration during all development stages
There is a lack of English language skills within project team members that minimizes the
communication levels
There is a lack of communication between the developers and the Product Owners
Lack of shared knowledge and information
The increased distance between Agile developers minimizes the level of communication and
collaboration
Some development teams have issues with poor infrastructures
The visibility level of development progress is low
The cultural differences between project stakeholders can lead to lack of awareness
There is a lack of trust between team members
There is a lack of understanding of authority with some team participants
The differences in culture can reduce the team responsibility and morale
There is a lack of transparency from some members regarding cultural differences
The cultural differences reduce developers’ productivity
There is a lack of team management “configuration management”
The development team has estimation difficulties with the development cost, scope and development
schedule
The differences of the development countries make barriers to adapt within the different local
regulations
There is a security risk according to the distances between teams. “Some information could be lost
during the communication”
Increasing the number of sites creates difficulties for team control and management
Time differences between teams reduce the available time for synchronous communication
Different holiday schedules make it difficult for teams to synchronize work
Some stakeholders have a lack of Agile skills
Global development settings can lead to insufficient development meetings

Lack of formal documents “no standards”
There is an increase of documentation during development
The large number of team members creates difficulties in applying some of the agile practices
Some technical issues with Global Software Development can lead to insufficient applications for
some agile practices and methods

Table 32: Challenges of Distributed Agile Software Engineering from [73]

Challenge
Time zone differences
Lack of visibility on priority, requirements, demo and sprint reviews
Lack of synchronous communication
Inadequate infrastructure
Availability of the Scrum Master
Lack of team structure and roles and responsibility
Inexperience with agile methods
Remote coaches
Trust and lack of productivity
Work distribution with distributed human resources
Team missing the big picture
Lack of documentation, unclear requirements
Cost of synchronous communication
Having common or shared components
Handling sensitive data at the offsite
Lack of processes
Work pattern varies per culture
Regional holidays are different
Language differences

Table 33: Challenges of Distributed Agile Development from [74]

Challenge
Communication need versus communication impedance
Fixed requirements versus evolving requirements
People-oriented versus process-oriented control
Formal agreements versus informal agreements
Short iterations versus distance complexity
Team cohesion versus team dispersion

Table 34: Challenges of Extreme Programming in Global Software Development from [75]

Challenge
Lack of communication due to asynchronous coordination
Difficulty in having synchronous collaboration
Reduced productivity due to communication overhead
Lack of frequent/informal communication
Lack of trust
Lack of team cohesion
Lack of interaction with the customer

Cultural differences
Language barriers
Time zone differences
Conflicting work/unnecessary delays due to lack of coordination
Difficulty in having an on-site customer
Lack of experience of XP
Difficulty in configuration management and version control system
Difficulty in editing shared data simultaneously
Difficulties due to weak technical infrastructure
Lack of productivity due to geographical distance
Difficulty in coordination
Lack of accessibility of information
Difficulty in maintaining tacit knowledge
Difficulty in accepting shared ownership
Difficulty in making independent decisions due to dependency on the superiors
Customer not aligned with agile

Table 35: Challenges of Distributed Scrum from [76]

Challenge
Lack of synchronous/overlapping working hours
Lack of face to face communication
Cultural differences (language/behavior)
Increased communication cost
Expansion of number of sub teams in onshore and offshore teams that suffer the effect of poor
communication process between teams
Over reliance on one person per team for communication lead to misinterpretation and/or loss of
information
Long traveling time between distributed sites
Increased coordination cost
Reduce cooperation arising from misunderstanding
Reduce informal contact can lead to lack of critical task awareness
Inconsistent work practices can impinge of effect of coordination
Manager must adopt local regulation
Manage project artefacts
Lack of trust/teamness
Lack of interpersonal relationship/poor team dynamics
Lack of domain knowledge
Lack of visibility
Skill differences
Technical issues

Table 36: Challenges of Distributed Agile Software Development from [77]

Challenge
Ineffective communication
Language barriers
Unavailability of people
Lack of customer communication

Information hiding
Lack of directness and honesty
Sense of belonging to a team
Feeling insecurity
Trust building
Commitment
Collective vision
Collective ownership
Cultural differences
Authority
Lack of training
Technical infrastructure
Avoidance of accountability
Balance workload
Decision making

Appendix J Result of SLR reviews
Table 37 to Table 39 show the studies from the different Systematic Literature Reviews that were
investigated.
Table 37: Result of studies subjected to the acceptance criteria from [78]

Ref Accepted / Rejected Reason for rejection


[127] Rejected The study does not present problems or challenges
[8] Rejected The study is not on agile (from base search)
[128] Rejected The study is not on agile
[129] Rejected The study does not present problems or challenges
[68] Accepted (from base search)
[130] Rejected The study is not on agile
[131] Rejected The study is not on agile
[96] Rejected The study is not on agile (from base search)
[84] Rejected The study does not present problems or challenges (from base search)
[82] Rejected The study is not on agile (from base search)
[132] Rejected The study is not on agile
[133] Rejected The study is not on agile

Table 38: Result of studies subjected to the acceptance criteria from [79]

Ref Accepted / Rejected Reason for rejection


[6] Rejected The study is not on agile
[134] Rejected The study is not on agile
[135] Rejected The study is not on agile
[89] Rejected The study is not on agile (from base search)
[82] Rejected The study is not on agile (from base search)
[136] Rejected The study is not on agile
[137] Rejected The study is not on agile
[68] Accepted (from base search)
[83] Rejected The study does not present problems or challenges (from base search)
[84] Rejected The study does not present problems or challenges (from base search)
[138] Rejected The study is not on agile
[130] Rejected The study is not on agile
[128] Rejected The study is not on agile
[139] Rejected The study is not on agile
[140] Rejected The study is not on agile
[141] Rejected The study is not on agile
[131] Rejected The study is not on agile
[142] Rejected The study is not on agile
[143] Rejected The study is not on agile
[144] Rejected The study is not on agile
[127] Rejected The study does not present problems or challenges
[145] Rejected The study is not on agile
[129] Rejected The study does not present problems or challenges
[8] Rejected The study is not on agile (from base search)
[146] Rejected The study is not on agile

[147] Rejected The study is not on agile
[132] Rejected The study is not on agile
[148] Rejected The study is not on agile

Table 39: Result of studies subjected to the acceptance criteria from [80]

Ref Accepted / Rejected Reason for rejection


[6] Rejected The study is not on agile
[134] Rejected The study is not on agile
[149] Rejected The study is not on agile
[150] Rejected The study is not on agile
[123] Rejected The study is not on agile (from base search)
[135] Rejected The study is not on agile
[89] Rejected The study is not on agile (from base search)
[82] Rejected The study is not on agile (from base search)
[136] Rejected The study is not on agile
[137] Rejected The study is not on agile
[68] Accepted (from base search)
[151] Rejected The study is not on agile
[83] Rejected The study does not present problems or challenges (from base search)
[84] Rejected The study does not present problems or challenges (from base search)
[138] Rejected The study is not on agile
[130] Rejected The study is not on agile
[128] Rejected The study is not on agile
[133] Rejected The study is not on agile
[152] Rejected The study is not on agile
[96] Rejected The study is not on agile (from base search)
[140] Rejected The study is not on agile
[153] Rejected The study is not on agile
[141] Rejected The study is not on agile
[131] Rejected The study is not on agile
[142] Rejected The study is not on agile
[143] Rejected The study is not on agile
[144] Rejected The study is not on agile
[127] Rejected The study does not present problems or challenges
[145] Rejected The study is not on agile
[129] Rejected The study does not present problems or challenges
[154] Rejected The study is not on agile
[8] Rejected The study is not on agile (from base search)
[146] Rejected The study is not on agile
[147] Rejected The study is not on agile
[132] Rejected The study is not on agile
[148] Rejected The study is not on agile
[155] Rejected The study is not on agile

Appendix K Problem groups
Table 40 to Table 68 show the problem groups.
Table 40: Problems due to inefficient communication tools

# Ref Problem
1 [9] Hardware and tools not sufficient
2 [3] Increased dependency on technology
3 [67] Communication tools
4 [67] Communication infrastructure
5 [68] Wide range of tool support needed
6 [68] Finding the correct office space
7 [69] Lack of collaborative office environment
8 [69] Lack of tool support
9 [70] Technology issues
10 [75] Difficulty in editing shared data simultaneously
11 [75] Difficulty in configuration management and version control system
12 [76] Technical issues

Table 41: Problems due to unavailability of people

# Ref Problem
1 [9] Product Owner not present
2 [67] Customer involvement
3 [67] Customer representative involvement
4 [70] Customer communication
5 [72] There is a lack of communication between the developers and the Product Owners
6 [72] Lack of communication and collaboration during all development stages
7 [73] Availability of the Scrum Master
8 [75] Difficulty in having an on-site customer
9 [75] Lack of interaction with the customer
10 [77] Unavailability of people
11 [77] Lack of customer communication

Table 42: Problems due to lack of synchronous communication

# Ref Problem
1 [3] Communication delay
2 [3] Reduced hours of collaboration
3 [68] Lack of synchronous communication
4 [69] Asynchronous communication
5 [73] Lack of synchronous communication
6 [73] Cost of synchronous communication
7 [75] Lack of communication due to asynchronous coordination
8 [75] Difficulty in having synchronous collaboration
9 [76] Lack of face to face communication
10 [76] Lack of synchronous/overlapping working hours

Table 43: Problems due to different execution of work practices

# Ref Problem
1 [9] Difference in reporting impediments
2 [9] Different work practices
3 [3] Differences in terms of agreement
4 [3] Differences in quality assessment
5 [3] Differences in design
6 [67] Personal practice
7 [73] Work pattern varies per culture
8 [76] Inconsistent work practices can impinge of effect of coordination
9 [77] Balance workload

Table 44: Problems due to language barriers

# Ref Problem
1 [3] Differences in language
2 [67] Language
3 [71] Lack of confidence due to cultural and language differences
4 [72] There is a lack of English language skills within project team members that minimizes
the communication levels
5 [73] Language differences
6 [75] Language barriers
7 [76] Cultural differences (language/behavior)
8 [77] Language barriers

Table 45: Problems due to lacking technical infrastructure

# Ref Problem
1 [3] Increased complexity of the technical infrastructure
2 [67] Project architecture
3 [70] Architectural issues
4 [72] Some development teams have issues with poor infrastructures
5 [72] Some technical issues with Global Software Development can lead to insufficient
applications for some agile practices and methods
6 [73] Inadequate infrastructure
7 [75] Difficulties due to weak technical infrastructure
8 [77] Technical infrastructure

Table 46: Problems due to loss of cohesion

# Ref Problem
1 [3] Loss of cohesion
2 [69] Lack of group awareness
3 [70] Team issues
4 [74] Team cohesion versus team dispersion
5 [75] Lack of team cohesion
6 [75] Lack of productivity due to geographical distance
7 [76] Lack of interpersonal relationship/poor team dynamics
8 [77] Sense of belonging to a team

Table 47: Problems due to misinterpretation

# Ref Problem
1 [9] Misunderstanding
2 [68] Poor communication bandwidth
3 [69] Poor communication bandwidth
4 [72] There is a security risk according to the distances between teams. “Some information
could be lost during the communication”
5 [75] Lack of accessibility of information
6 [76] Over reliance on one person per team for communication lead to misinterpretation
and/or loss of information
7 [76] Reduce cooperation arising from misunderstanding
8 [77] Information hiding

Table 48: Problems due to lack of agile training

# Ref Problem
1 [9] Managing customers new to agile
2 [72] Some stakeholders have a lack of Agile skills
3 [73] Inexperience with agile methods
4 [75] Lack of experience of XP
5 [75] Customer not aligned with agile
6 [76] Skill differences
7 [77] Lack of training

Table 49: Problems due to reduced trust

# Ref Problem
1 [3] Reduced trust
2 [67] Trust in team or team members
3 [72] There is a lack of trust between team members
4 [73] Trust and lack of productivity
5 [75] Lack of trust
6 [76] Lack of trust/teamness
7 [77] Trust building

Table 50: Problems due to time zone differences

# Ref Problem
1 [9] Time differences
2 [9] Meetings at the office outside office hours
3 [67] Time zone differences
4 [71] Temporal difference
5 [72] Time differences between teams reduce the available time for synchronous
communication
6 [73] Time zone differences
7 [75] Time zone differences

Table 51: Problems due to people differences

# Ref Problem
1 [3] Differences in ethical values
2 [3] Differences in managing individualism and collectivism
3 [3] Differences in time perception
4 [70] People differences
5 [72] There is a lack of understanding of authority with some team participants
6 [77] Authority

Table 52: Problems due to lack of traditional management

# Ref Problem
1 [67] Project management process
2 [68] Team management
3 [70] Processes issues
4 [73] Lack of processes
5 [74] People-oriented versus process-oriented control
6 [76] Manage project artefacts

Table 53: Problems due to difficulties with coordination

# Ref Problem
1 [9] Coordinating in multiple time zones is difficult
2 [67] Coordination
3 [68] Coordinating with multiple sites
4 [75] Conflicting work/unnecessary delays due to lack of coordination
5 [75] Difficulty in coordination
6 [76] Increased coordination cost

Table 54: Problems due to shared ownership and responsibility

# Ref Problem
1 [72] The differences in culture can reduce the team responsibility and morale
2 [73] Lack of team structure and roles and responsibility
3 [73] Having common or shared components
4 [75] Difficulty in accepting shared ownership
5 [77] Avoidance of accountability
6 [77] Collective ownership

Table 55: Problems due to incorrect execution of Scrum

# Ref Problem
1 [9] Incorrect execution of Scrum
2 [9] Scrum of Scrums not effectively used
3 [9] Features not being deployment ready at end of sprint
4 [72] Global development settings can lead to insufficient development meetings
5 [72] There is an increase of documentation during development
6 [74] Short iterations versus distance complexity

Table 56: Problems due to cultural differences - organizational and national

# Ref Problem
1 [3] Differences in organizational vision
2 [67] Organizational culture
3 [67] National culture
4 [75] Cultural differences
5 [77] Cultural difference

Table 57: Problems due to the loss of informal contact

# Ref Problem
1 [9] Informal contact is lost
2 [3] Lack of informal communication
3 [72] The increased distance between Agile developers minimizes the level of
communication and collaboration
4 [75] Lack of frequent/informal communication
5 [76] Reduce informal contact can lead to lack of critical task awareness

Table 58: Problems due to lack of collective vision

# Ref Problem
1 [9] Lack of focus
2 [73] Team missing the big picture
3 [77] Commitment
4 [77] Collective vision

Table 59: Problems due to lack of requirement documents

# Ref Problem
1 [72] Lack of formal documents “no standards”
2 [73] Lack of documentation, unclear requirements
3 [74] Fixed requirements versus evolving requirements
4 [74] Formal agreements versus informal agreements

Table 60: Problems due to lack of visibility

# Ref Problem
1 [71] Difficult to truly evaluate the state of the project
2 [72] The visibility level of development progress is low
3 [73] Lack of visibility on priority, requirements, demo and sprint reviews
4 [76] Lack of visibility

Table 61: Problems due to difficulties in knowledge sharing

# Ref Problem
1 [3] Lack of shared understanding
2 [72] Lack of shared knowledge and information

3 [75] Difficulty in maintaining tacit knowledge
4 [76] Lack of domain knowledge

Table 62: Problems due to increased communication effort

# Ref Problem
1 [3] Increased effort to initiate contact
2 [74] Communication need versus communication impedance
3 [75] Reduced productivity due to communication overhead
4 [76] Increased communication cost

Table 63: Problems due to increased team size

# Ref Problem
1 [3] Increased team size
2 [67] Team size
3 [69] Large team sizes

Table 64: Problems due to different holidays

# Ref Problem
1 [9] Different holidays
2 [72] Different holiday schedules make it difficult for teams to synchronize work
3 [73] Regional holidays are different

Table 65: Problems due to difficulties with agile decision making

# Ref Problem
1 [75] Difficulty in making independent decisions due to dependency on the superiors
2 [76] Manager must adopt local regulation
3 [77] Decision making

Table 66: Problems due to increased number of teams

# Ref Problem
1 [67] Number of teams
2 [72] The large number of team members creates difficulties in applying some of the agile
practices
3 [76] Expansion of number of sub teams in onshore and offshore teams that suffer the
effect of poor communication process between teams

Table 67: Problems due to silence of participants

# Ref Problem
1 [9] Silence / passivism
2 [71] Silence of participants due to linguistic and cultural differences

Table 68: Problems due to increased number of sites

# Ref Problem
1 [69] Increased number of sites
2 [72] Increasing the number of sites creates difficulties for team control and management

Appendix L Ungrouped problems
Table 69 shows the ungrouped problems.
Table 69: Ungrouped problems

# Ref Problem
1 [70] Distance differences
2 [72] The cultural differences between project stakeholders can lead to lack of awareness
3 [72] There is a lack of transparency from some members regarding cultural differences
4 [72] The differences of the development countries make barriers to adapt within the
different local regulations
5 [73] Handling sensitive data at the offsite
6 [73] Work distribution with distributed human resources
7 [9] No syncing between sites
8 [9] Planning a meeting with everyone present is difficult
9 [9] Integration difficulties
10 [9] Multiple Product Owners not in sync
11 [67] Geographic differences
12 [67] Project domain
13 [68] Collaboration difficulties
14 [72] There is a lack of team management “configuration management”
15 [72] The development team has estimation difficulties with the development cost, scope
and development schedule
16 [73] Remote coaches
17 [9] Not communicating all information to team
18 [72] The cultural differences reduce developers’ productivity
19 [77] Ineffective communication
20 [3] Perceived threat from low-cost alternatives
21 [77] Lack of directness and honesty
22 [77] Feeling insecurity
23 [9] No transparency between sites
24 [71] Daily Scrum difficult to organize
25 [76] Long traveling time between distributed sites

Appendix M Expert focus group: invitation letter

Dear <name>,

First of all, thank you very much for your willingness to participate in this research.

My name is Peter van Buul, and together with Rini van Solingen we are investigating distributed SAFe,
as part of my master thesis in Information Architecture at the Technical University Delft. The research
is carried out by me as a graduate student assignment at Prowareness. The first step in the research
is to discover potential problems when applying SAFe in a distributed setting. As part of this research
a focus group session is organized in which experts can contribute to the research. This focus group
will help identify possible problems with distributed SAFe.

The focus group session will be Monday the 13th of June 2016, from 13:00 to 17:00, in Delft at
Prowareness, Brassersplein 1, 2612 CT Delft. The session can only start when all participants are
present. Could you please contact me via phone if you are delayed. We will arrange a quick lunch for
you, so being present at 12:30 will help to start punctually.

The program for the day will consist of 3 parts:


- Discussion on the mapping based on your input 13:15 – 15:00
- Discussion on likelihood and impact of elements failing 15:00 – 16:00
- Discussion on consequences of elements failing 16:00 – 17:00 (ultimate latest)

Information is provided in English to allow the international community to be able to examine the
research. During the focus group the assignments will be in English as well, interactions however can
be in Dutch.

Find attached a mapping which is used as input for the focus group. Would you be so kind to fill in
this mapping before the session and send it back before the 12th of June so that it can be used
during the focus group? Please mind that this can take about an hour of your time.

With best regards,

Peter van Buul, master student Technical University Delft


Mobile: +31652478416
Email: p.vanbuul@prowareness.nl

Appendix N Expert focus group: attachment

Agile Release Train Elements as numbered in the picture on the previous page.

Best practices:
1. Core values
2. Lean-agile mindset
3. SAFe principles
4. Implementing 1-2-3
5. Lean-agile leaders
6. Communities of Practice
7. Value Stream coordination
8. Weighted Shortest Job First
9. Release any time
10. Agile Release Train / Value Stream
11. Architectural runway
12. Program Kanban

Events:
13. System Demo
14. Solution Demo
15. Inspect and Adapt
16. PI Planning

Roles:
17. System Architect, Release Train Engineer & Product Management
18. Business Owners
19. Customer
20. DevOps & System team
21. Release Management, Shared Services & User Experience

Artefacts:
22. Vision
23. Roadmap
24. Metrics
25. Milestones & Releases
26. PI Objectives
27. Feature
28. Enabler
29. Epics

Attachment for focus group session


Please add to this table, for each problem (column), the elements of the Agile Release Train that you expect to fail if the problem occurs; if you don't know, go
to the next element. An example of such a mapping is provided on page 3. The elements of the Agile Release Train are listed above, numbered based on the
SAFe overview picture on page 1. A short description of each of the elements can be found on page 4, and a description of the problems on page 10. Please send
back the filled-in table before the 12th of June so that it can be used in the focus group. For questions, please contact me.

The five problem columns of the table are:
- Problems due to incorrect execution of SAFe
- Misunderstanding due to language barriers
- Problems due to time zone differences
- No or less communication due to increased communication difficulty
- Unable to communicate properly due to inefficient communication tools

Example mapping
An example of a mapping when cooking a four-course dinner with two persons, with the following problems: no gas on the stove, little cooking experience,
and too much distraction. The dinner elements (courses) that are expected to fail if a problem occurs are placed in the table.

Description of Agile Release Train elements
If you are new to SAFe, please watch https://www.youtube.com/watch?v=tmJ_mJw8xec, which is a
good 5-minute introduction video to SAFe.

There are two versions of SAFe: four level SAFe and three level SAFe. The difference between these
versions is an extra level, the value stream level. A clear distinction has to be made between the value
stream level and a Value Stream. The value stream level consists of many practices and is used by large
enterprises when a single Agile Release Train cannot handle a Value Stream, while a Value Stream is
the practice that is used by SAFe to continuously deliver value to a customer. In three level SAFe, a
single Agile Release Train is used per Value Stream to continuously deliver value to the customer. This
research, and the focus group session, will focus on the Agile Release Train, and therefore on three
level SAFe.

The different elements of the Agile Release Train in SAFe are described below. In Figure 45, the SAFe
overview picture of three level SAFe is shown, in which the different elements that are part of the Agile
Release Train are numbered. Below, a brief description is provided of the different elements of the
Agile Release Train. An elaborate description of each of these elements can be found on the SAFe
website www.scaledagileframework.com [1].

Figure 45: Agile Release Train elements numbered, Modified and reproduced with permission from © 2011-2016 Scaled Agile,
Inc. All rights reserved. Original Big Picture graphic found at scaledagileframework.com

Agile Release Train Practices:


1. Core values
The core values describe the culture that an organization needs to adopt when implementing SAFe.
These core values are therefore also the culture that the Agile Release Train needs to adopt. The four
core values are:

1. Alignment
2. Built-in Quality
3. Transparency
4. Program Execution

2. Lean-agile mindset
A lean-agile mindset is needed to support lean and agile development at scale in an entire
organization. The mindset consists of two parts: thinking lean and embracing agility.

3. SAFe principles
The SAFe principles are the guidelines for decision making when working with SAFe. These principles
also apply for decision making in the Agile Release Train. The nine SAFe principles are:

1. Take an economic view
2. Apply systems thinking
3. Assume variability; preserve options
4. Build incrementally with fast, integrated learning cycles
5. Base milestones on objective evaluation of working systems
6. Visualize and limit Work In Progress, reduce batch sizes and manage queue lengths
7. Apply cadence, synchronize with cross-domain planning
8. Unlock the intrinsic motivation of knowledge workers
9. Decentralize decision-making

4. Implementing 1-2-3
Implementing 1-2-3 is the basic deployment pattern for a successful deployment of SAFe that has been
developed over the years of implementing SAFe.

1. Train implementers and lean-agile change agents


2. Train all executives, managers and leaders
3. Train teams and launch Agile Release Trains

5. Lean-agile leaders
For a SAFe transformation to be successful, the current managers, executives and leaders of the
organization need to adopt and lead the change. They have the power to continuously challenge the
organization to become more agile. After these people have been trained, they become the so-called
lean-agile leaders.

6. Communities of Practice
Communities of Practice are informal groups of people from different teams, Agile Release Trains or
even Value Streams that have a shared interest, the topic of the Community of Practice. Both the
experts on the topic as well as those who want to become an expert are part of the Community of
Practice. The goal of these Communities of Practice is to allow knowledge to be shared across different
Agile Release Trains or Value Streams.

7. Value Stream coordination


Value Streams are organized to be able to deliver value independently; however, in practice there are
dependencies between the different Value Streams. These dependencies have to be properly
managed to let the Value Streams function independently. Value Stream coordination contains the
different tools that SAFe provides to manage these dependencies.

8. Weighted Shortest Job First


Prioritizing items in SAFe is done using the Weighted Shortest Job First (WSJF). SAFe uses three values
to calculate the cost of delay: user-business value (UBV), time criticality (TC), and risk reduction-
opportunity enablement value (RROE). This risk reduction-opportunity enablement value consists of
two parts: does this enable other new business opportunities, and what is the risk when this is not
done. The cost of delay is then calculated as follows:

Cost of Delay = UBV + TC + RROE

This cost of delay is used to calculate the WSJF as follows:

WSJF = Cost of Delay / Job Size

9. Release any time


When using SAFe to the fullest and continuous delivery is possible, releasing can be done at any time.
However, a product may consist of multiple systems which require different release models; therefore,
SAFe states: “Release whatever you want, whenever it makes sense within the governance and
business model”.

10. Agile Release Train / Value Stream


A Value Stream is used in SAFe to deliver value to a customer; in three level SAFe, which is used for
this research, a single Agile Release Train can on its own support the entire Value Stream. All resources
required to deliver the solution to the customer are part of the Agile Release Train, so that the Agile
Release Train spans the entire Value Stream.

The agile teams that are part of the Agile Release Train are aligned via a single vision, roadmap and
program backlog. These teams iterate in a so-called program increment, or PI, of 8 to 12 weeks,
consisting of 4 to 6 two-week team iterations. During the team iterations the teams continuously add
value to the solution by finishing fully tested stories. At the end of each team iteration the integrated
solution is demoed in the system demo, done by the system team.

11. Architectural runway


The architectural runway is the way that SAFe ensures that the architecture is designed just right:
neither over-engineered nor under-engineered. When business epics, features and stories are
implemented, the architectural runway is consumed; the runway is used to support their
functionalities. So, to be able to continuously add new functionalities, the architecture needs to be
extended continuously, which is done by implementing enablers. The architectural runway provides
a means to keep the architecture designed just right.

12. Program Kanban


The program Kanban is part of the governance model that SAFe uses to create a sustainable flow of
value to the customer. The program Kanban provides a flow of features for the program increments
of the Agile Release Train. Using the program Kanban, this flow is regulated so as not to starve or
overload the Agile Release Train.

Agile Release Train events:
13. System Demo
After every team iteration the system demo takes place. This demo functions as the primary measure
of progress for the Agile Release Train. During the system demo, the system team demonstrates the
fully integrated work of all teams that are part of the Agile Release Train to the stakeholders,
sponsors, and customers.

14. Solution Demo


At the end of each program increment the system team demonstrates work that all agile teams that
are part of the Agile Release Train have done during the previous program increment. This is presented
to all stakeholders and everyone involved with the Agile Release Train. In the case where a single Agile
Release Train supports a Value Stream this is called the system demo. However, it is different from
the regular system demo after every team iteration as more work is presented and the entire Agile
Release Train is present.

15. Inspect and Adapt


The inspect and adapt workshop is for the Agile Release Train what the retrospective is for a Scrum
team. At the end of every program increment the inspect and adapt workshop is held to reflect, gather
data, solve problems and take action based on learnings of the previous program increment. All
stakeholders and everyone involved with the Agile Release Train participate in the workshop. The
results of the workshop are improvement stories, which can be added to the backlog of the PI planning.

16. Program Increment Planning


The program increment planning, in short PI planning, is one of the three events for synchronization
of all teams that are part of the Agile Release Train. The PI planning is facilitated by the Release Train
Engineer of the Agile Release Train and takes place over two days. First the teams get presentations
on the business context and vision. After these presentations the teams create plans for the next
program increment. The results of this planning are the program increment objectives, which are the
commitments of the teams for the upcoming program increment.

Agile Release Train roles:


17. System Architect, Release Train Engineer & Product Management
First, the System Architect is the person or teams responsible for the overall technical
architecture and engineering design of the system. Second, the Release Train Engineer is the Scrum
Master of the Agile Release Train: the Release Train Engineer facilitates the processes, removes
impediments, manages risk and continuously improves the program. Last, Product Management is
the Product Owner of the Agile Release Train; Product Management is responsible for the Program
Vision and the Backlog, creating the framework within which the Product Owners can operate.
Together, these three coordinate the Agile Release Train and make sure that the train keeps moving
forward.

18. Business Owners


The Business Owners are responsible for the value delivered by the Agile Release Train. They provide
leadership to the Agile Release Train by maintaining and updating the mission and vision. They
participate in the PI planning by providing the mission of the train, assigning business value to PI
objectives and approving the PI plan; if necessary, they defend this plan to management. During PI
execution they actively participate in the PI: they enable decentralized decision making by providing
the team members with the appropriate authority, and they function as a coach for the teams,
enabling them to continuously improve their skills.

19. Customer
The customer receives the solution that solves its current needs. The customer works together with
Product Management and other key stakeholders to prioritize development. By actively participating
in events, such as planning sessions and demos, the customer knows what he gets and can steer the
solution. The customer is thus part of the Value Stream that is supported by the Agile Release Train.

20. DevOps & System team


Deploying to operations is required to deliver value to the customer; this complex process is supported
by the DevOps team. This team is part of the Agile Release Train to enable deployment of the solution
developed by the train at any time. One or more system teams are formed to assist the agile teams
with integrating the solution, demoing the solution and ensuring a proper development
infrastructure.

21. Release Management, Shared Services & User Experience


All three roles are shared across the Agile Release Train; what they do, however, is very different. First,
Release Management plans and manages the release of the solution; they also help to guide the
solution towards the business goals. Second, Shared Services consists of any specialized roles that are
required for the success of the Agile Release Train but that cannot be dedicated full time, such as
security specialists or technical writers. These resources are available for all teams in the Agile Release
Train and must be planned in when they are needed. Last, User Experience concerns the user's
perception of the system, including the user interface. The User Experience designers support all
teams of the Agile Release Train with anything related to interactions with the user. They also educate
the teams on user interface design and testing.

Agile Release Train artefacts:


22. Vision
The vision describes the long-term direction of the solution, using the input of the customer and
stakeholders as well as the features that are on the backlog. It provides context on the solution that
is being developed and sets the boundaries for content decisions and new features.

23. Roadmap
The roadmap is used to present the Agile Release Train deliverables; it consists of the committed PI
objectives of the current PI and a forecast for the next PI or two. Product Management updates the
roadmap according to the vision. A balance has to be found between planning too little, resulting
in less alignment, and planning too much, resulting in an unresponsive queue that obstructs
change.

24. Metrics
SAFe presents multiple metrics that can be used; with these metrics, different results are measured.
The most important is progress: whether the desired solution is delivered. This is best measured
by measuring the working solution at the customer. SAFe also presents many other ways to measure
progress at the different levels, for example the epic burn-up chart, Value Stream performance metrics,
and the Agile Release Train self-assessment. All these measures are useful; however, they are all
inferior to measuring the working solution at the customer.

25. Milestones & Releases


SAFe distinguishes different kinds of milestones. PI milestones occur on the PI cadence and
objectively measure progress based on the working solution. Learning milestones occur ad hoc:
when a question arises, a hypothesis is formulated and validated against market conditions.
Fixed-date milestones occur when a preplanned event takes place, for example a customer demo or a
scheduled large-scale integration. Other milestones, coming for example from audits, could be
required for a release.

A feature only adds value for the customer if the feature is released and added to the working solution
at the customer. Releasing frequently enables frequent addition of value. However, releasing should
only be done when it actually makes sense, as stated in release any time.

26. PI Objectives
The PI objectives are a summary of the business and technical objectives of the team and Agile Release
Train. They are formulated during the PI planning and the teams commit to them for the upcoming PI.
Formulating these objectives validates the teams' understanding of the intent of the business
regarding the features the teams work on. As a result, when the intent of the business is known,
the goal of the team becomes achieving the desired outcome rather than finishing the list of features.

27. Feature
A feature describes a service that the system can provide that satisfies a specific need of one or more
users. The size of a feature is such that it can be picked up in a single PI by a single Agile Release Train;
thus, a feature is planned and reviewed at the PI boundaries. Features are split into stories, which can
be picked up by a team during an iteration.

28. Enabler
Enablers are the technical initiatives that pave the architectural runway and are created as business
initiatives consume the runway. Enablers are only created when needed, to prevent engineering too
far ahead, which results in an over-engineered solution. Enablers are formulated at program level as
features and at team level as stories. Enablers that change the architecture can be big; however, they
have to be broken down into small pieces (enabler stories) so that teams can implement them during
an iteration.

29. Epics
The biggest business initiatives are cast into epics, in the form of lightweight business cases. Epics
can be split over multiple Value Streams or Agile Release Trains. The features that an Agile Release
Train develops come from the epics that are defined at portfolio level.

Description of Distributed Agile Development problems


The following problems of Distributed Agile Development have been identified to be discussed during
the focus group session.

Problems due to incorrect execution of SAFe
Not executing SAFe properly results in many problems: features that are not ready at the end of the
sprint/PI; teams that get no feedback on their work because there are no retrospectives, PI plannings,
or demos; or an increase in documentation because SAFe itself does not prescribe enough
documentation for the teams.

Misunderstanding due to language barriers
Many studies report problems due to language barriers. If the language used for communication is
not someone's native language, it can be hard for this person to follow a conversation or express
themselves. Also, speakers from different countries might have different dialects that can be hard to
follow for others.

Problems due to time zone differences
Time zone differences can lead to having meetings outside office hours and to reduced availability for
synchronous communication.

No or less communication due to increased communication difficulty
Initiating contact in a distributed environment takes increased effort, as this cannot be done
face to face; some tool has to be used. This creates communication overhead and increases
communication costs.

Unable to communicate properly due to inefficient communication tools
The most mentioned problem in the literature is being unable to communicate properly due to
inefficient communication tools. Both the hardware and the tools used for communication are part of
the communication infrastructure. These problems are quite severe, as there is a high dependency
on this infrastructure for communication.

Appendix O Expert focus group: execution
Description of the execution of the focus group session.

Opening

The participants were received centrally in the hall. When the participants came in, they were asked
not to talk to the other participants about the focus group, SAFe, or distributed working for the
duration of the focus group, except when asked to do so during the session. The participants were all
welcomed and walked in by one of the facilitators.

Introduction

Rini did a plenary introduction and the list of Agile Release Train elements was provided to the
participants.

Combining experience in groups (round 1)

Peter explained the first exercise; after the explanation, the questions of the participants were
answered by Rini. Once the exercise was clear, the first group went with Rini (group Rini) to the other
room to execute the first exercise. The other group stayed with Peter (group Peter) to execute the
first exercise.

Both groups started with the setup as presented in Table 70; a picture is provided in Figure 46.
Table 70: Start setup round 1 - groups

Not specifically challenged by distributed | Undecided | Specifically challenged by distributed
16. PI Planning
14. Solution Demo
15. Inspect and Adapt
13. System Demo
22. Vision
23. Roadmap
2. Lean-agile mindset
6. Communities of Practice
10. Agile Release Train / Value
Stream
21. Release Management,
Shared Services & User
Experience
26. PI Objectives
5. Lean-agile leaders
12. Program Kanban
20. DevOps & System team
25. Milestones & Release
27. Feature
28. Enabler
1. Core values

4. Implementing 1-2-3
8. Weighted Shortest Job First
24. Metrics
29. Epics
11. Architectural runway
3. SAFe principles
9. Release any time
17. System Architect, Release
Train Engineer & Product
Management
18. Business Owners
19. Customer
7. Value Stream coordination

Figure 46: Picture start setup round 1 - groups

Both groups did the exercise; the groups progressed at roughly the same rate. The results of group
Rini are presented in Table 71 and Figure 47; the results of group Peter are presented in Table 72 and
Figure 48.
Table 71: Results round 1 - group Rini

Not specifically challenged by distributed | Undecided | Specifically challenged by distributed
24. Metrics 20. DevOps & System team 16. PI planning
22. Vision 1. Core Values 14. Solution Demo
11. Architectural Runway 4. Implementing 1-2-3 15. Inspect & Adapt
23. Roadmap 19. Customer 13. System Demo
3. SAFe principles 2. Lean-agile mindset

10. Agile Release Train / Value 6. Communities of Practice
Stream
9. Release any time 12. Program Kanban
21. Release Management,
Shared Services & User-
Experience
17. System Architect, Release
Train Engineer & Product
Manager
26. PI Objectives
18. Business Owners
5. Lean-agile leaders
7. Value Stream coordination
25. Milestones & Release
29. Epics
27. Feature
8. Weighted Shortest Job First
28. Enabler

Table 72: Results round 1 - group Peter

Not specifically challenged by distributed | Undecided | Specifically challenged by distributed
22. Vision 16. PI planning
2. Lean-agile mindset 14. Solution Demo
23. Roadmap 15. Inspect & Adapt
10. Agile Release Train / Value 13. System Demo
Stream
26. PI Objectives 21. Release Management
Shared Services & User
Experience
6. Communities of Practice 20. DevOps & System team
25. Milestones & Release 12. Program Kanban
5. Lean-agile leaders 9. Release Any Time
27. Feature 8. Weighted Shortest Job First
1. Core Values
28. Enabler
4. Implementing 1-2-3
11. Architectural Runway
24. Metrics
17. System Architect, Release
Train Engineer & Product
Management
3. SAFe principles
18. Business Owners
7. Value Stream Coordination
19. Customer

29. Epics

Figure 47: Picture results round 1 - group Rini

Figure 48: Picture results round 1 - group Peter

Collect and discuss group results (round 1)

After the time-box was finished, the results of both groups were combined, which created a new
starting point. Items both groups classified the same were put in the corresponding column; this
resulted in the start setup as presented in Table 73.
Table 73: Start setup round 1 - plenary

Not specifically challenged by distributed | Undecided | Specifically challenged by distributed
22. Vision 2. Lean-agile mindset 16. PI Planning
23. Roadmap 6. Communities of Practice 14. Solution Demo
10. Agile Release Train / Value 21. Release Management, 15. Inspect and Adapt
Stream Shared Services & User
Experience
26. PI Objectives 20. DevOps & System team 13. System Demo

5. Lean-agile leaders 1. Core values 12. Program Kanban
25. Milestones & Release 4. Implementing 1-2-3
27. Feature 8. Weighted Shortest Job First
28. Enabler 9. Release any time
24. Metrics 17. System Architect, Release
Train Engineer & Product
Management
29. Epics 19. Customer
11. Architectural runway
3. SAFe principles
18. Business Owners
7. Value Stream coordination

Peter explained the exercise; after this explanation, the group started with it. A picture to
give an impression of the session is presented in Figure 49.

Figure 49: Impression of plenary exercise

As the participants were all equal, the group found it hard to converge and reach a decision on the
elements. Though this took a little longer, it did not have any effect on the result. In the end, the group
did not reach agreement on one item, "8. Weighted Shortest Job First", so they chose to leave that
item undecided. The result of the exercise is presented in Table 74 and Figure 50.
Table 74: Results round 1 - plenary

Not specifically challenged by distributed | Undecided | Specifically challenged by distributed
19. Customer 8. Weighted Shortest Job First 16. PI planning
4. Implementing 1-2-3 14. Solution Demo
9. Release any time 13. System Demo

22. Vision 15. Inspect & Adapt
21. Release Management, 12. Program Kanban
Shared Services & User-
Experience
10. Agile Release Train / Value 2. Lean-agile mindset
Stream
23. Roadmap 20. DevOps & System team
5. Lean-agile leaders 1. Core Values
26. PI objectives 6. Communities of Practice
27. Feature
25. Milestones & Release
24. Metrics
28. Enabler
11. Architectural Runway
29. Epics
17. System Architect, Release
Train Engineer & Product
Management
3. SAFe principles
7. Value Stream coordination
18. Business Owners

Figure 50: Picture results round 1 - plenary

Dot voting (round 2)

Rini explained the next exercise, dot voting, after which the participants wrote down their votes
individually. No interaction occurred during this time. An impression of the exercise is shown in Figure
51.

Figure 51: Impression of dot voting

Analyzing the result with the group (round 2)

After all participants were done with voting, the facilitators put the results on the wall. These results
are presented in Table 75 and Figure 52. The ranking on likelihood is presented in Table 76, and the
ranking on impact in Table 77. The individual votes of the participants can be found in Appendix P.
Table 75: Result round 2 - dot voting individually

Element Likelihood Impact


16. PI planning 18 18
14. Solution Demo 4 7
13. System Demo 6 3
15. Inspect & Adapt 11 11
12. Program Kanban 5 5
2. Lean-agile mindset 1 0
20. DevOps & System team 3 6
1. Core values 3 4
6. Communities of Practice 3 0

Figure 52: Picture result round 2 - dot voting individually

Table 76: Results round 2 - ranking on likelihood

Ranking Element Likelihood


1 16. PI planning 18
2 15. Inspect & Adapt 11
3 13. System Demo 6
4 12. Program Kanban 5
5 14. Solution Demo 4
6 20. DevOps & System team 3
7 1. Core values 3
8 6. Communities of Practice 3
9 2. Lean-agile mindset 1

Table 77: Results round 2 - ranking on impact

Ranking Element Impact

1 16. PI planning 18
2 15. Inspect & Adapt 11
3 14. Solution Demo 7
4 20. DevOps & System team 6
5 12. Program Kanban 5
6 1. Core values 4
7 13. System Demo 3
8 2. Lean-agile mindset 0
9 6. Communities of Practice 0

From these rankings, the top three on likelihood and the top three on impact were taken and
combined, resulting in the following four items to be discussed: “16. PI planning”, “15. Inspect &
Adapt”, “14. Solution Demo”, and “13. System Demo”, as shown in Figure 53.

Figure 53: Picture results round 2 - combined ranking
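
To make this combination step precise, the following sketch (Python; explanatory only and not part of the study, with the vote totals taken from Table 75) reproduces the selection: the top three on likelihood and the top three on impact are taken, and their union yields the four elements for round 3.

    # Explanatory sketch: round-2 selection of elements to discuss.
    # Vote totals are those of Table 75: (likelihood, impact).
    votes = {
        "16. PI planning": (18, 18),
        "14. Solution Demo": (4, 7),
        "13. System Demo": (6, 3),
        "15. Inspect & Adapt": (11, 11),
        "12. Program Kanban": (5, 5),
        "2. Lean-agile mindset": (1, 0),
        "20. DevOps & System team": (3, 6),
        "1. Core values": (3, 4),
        "6. Communities of Practice": (3, 0),
    }

    def top3(index):
        # Rank the elements on one criterion and keep the three highest.
        return sorted(votes, key=lambda e: votes[e][index], reverse=True)[:3]

    selected = set(top3(0)) | set(top3(1))
    # -> PI planning, Inspect & Adapt, System Demo, and Solution Demo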

Collect individual experience (round 3)

Peter explained the exercise, after which the participants each wrote down the consequences. For
each consequence, a separate post-it was used, with the name of the participant and the element
number on it.

Present findings to group (round 3)

One by one, the participants put their post-its on the wall under the corresponding element, as
presented in Figure 54, grouping similar consequences as they were put on the wall. If multiple
post-its contained the same problem, Rini summarized the problem on a separate post-it. At the start,
the four elements were put on the wall with no post-its under them, as presented in Table 78.
Table 78: Start setup round 3 - plenary

16. PI planning 15. Inspect & Adapt 14. Solution Demo 13. System Demo

Figure 54: Example round 3 - post-it’s on wall

Collect and discuss group experience (round 3)

As the group discussed the consequences, there was no moderator, so the group did not converge to
a result. After this went on for a while, Rini started to moderate in order to get the group to a result,
asking them to look at the ungrouped consequences and either group them or make them a new
group. This way, the consequences were mapped for each element. Table 79 to Table 82 present the
consequences for each of the elements. The build-up of each of the consequences can be found in
Appendix Q.
Table 79: Result round 3 - consequences System Demo

Number Consequence
13.1 No integration no working system
13.2 Unclear value
13.3 No feedback
13.4 Bad team morale
13.5 Annoyed customers
13.6 Unpredictability
13.7 Rework
13.8 Delay

Table 80: Result round 3 - consequences Solution Demo

Number Consequence
14.1 No clear / unknown value
14.2 Annoyed stakeholders / customers
14.3 Bad morale
14.4 Unpredictability
14.5 No/late feedback
14.6 Delay
14.7 Rework

Table 81: Result round 3 - consequences Inspect & Adapt

Number Consequence
15.1 No learning
15.2 Fallback / unlearning
15.3 Dissatisfied / annoyed customer / stakeholders
15.4 No motivation

Table 82: Result round 3 - consequences PI Planning

Number Consequence
16.1 No goal no execution
16.2 No alignment between teams
16.3 No real results
16.4 Stakeholder annoyance
16.5 No teamness / commitment
16.6 Lack of Transparency
16.7 Rework
16.8 Longer time to market

Dot voting (round 4)

Peter explained the last exercise, dot voting, after which the participants started writing down their
votes individually. During this exercise no interactions occurred.

Analyzing the result with the group (round 4)

After all participants were done with voting, the facilitators put the results on the wall. The results are
presented per element, in Table 83 to Table 86, and in Figure 55 to Figure 58. The ranking on likelihood
can be found in Table 87 to Table 90, and the ranking on impact in Table 91 to Table 94. The individual
votes of the participants can be found in Appendix R. The visualization of the data can be found in
Appendix S.
Table 83: Result round 4 - dot voting individually on 13. System Demo

Consequence Likelihood Impact

13.1 No integration no working system 10 15
13.2 Unclear value 8 7
13.3 No feedback 10 9
13.4 Bad team morale 8 9
13.5 Annoyed customers 4 5
13.6 Unpredictability 3 3
13.7 Rework 3 0
13.8 Delay 2 0

Table 84: Result round 4 - dot voting individually on 14. Solution Demo

Consequence Likelihood Impact


14.1 No clear / unknown value 8 10
14.2 Annoyed stakeholders / customers 8 8
14.3 Bad morale 10 11
14.4 Unpredictability 5 5
14.5 No/late feedback 7 5
14.6 Delay 3 2
14.7 Rework 1 1

Table 85: Result round 4 - dot voting individually on 15. Inspect & Adapt

Consequence Likelihood Impact


15.1 No learning 11 6
15.2 Fallback / unlearning 6 7
15.3 Dissatisfied / annoyed customer / stakeholders 0 2
15.4 No motivation 7 9

Table 86: Result round 4 - dot voting individually on 16. PI Planning

Consequence Likelihood Impact


16.1 No goal no execution 9 14
16.2 No alignment between teams 13 8
16.3 No real results 3 6
16.4 Stakeholder annoyance 6 5
16.5 No teamness / commitment 5 7
16.6 Lack of Transparency 5 5
16.7 Rework 5 3
16.8 Longer time to market 2 0

Figure 55: Picture result round 4 - dot voting individually on 13. System Demo

Figure 56: Picture result round 4 - dot voting individually on 14. Solution Demo

Figure 57: Picture result round 4 - dot voting individually on 15. Inspect & Adapt

Figure 58: Picture result round 4 - dot voting individually on 16. PI Planning

Table 87: Result round 4 - consequences System Demo ranked on likelihood

Ranking Consequence Likelihood


1 13.1 No integration no working system 10
2 13.3 No feedback 10
3 13.2 Unclear value 8
4 13.4 Bad team morale 8
5 13.5 Annoyed customers 4
6 13.6 Unpredictability 3
7 13.7 Rework 3
8 13.8 Delay 2

Table 88: Result round 4 - consequences Solution Demo ranked on likelihood

Ranking Consequence Likelihood

1 14.3 Bad morale 10
2 14.1 No clear / unknown value 8
3 14.2 Annoyed stakeholders / customers 8
4 14.5 No/late feedback 7
5 14.4 Unpredictability 5
6 14.6 Delay 3
7 14.7 Rework 1

Table 89: Result round 4 - consequences Inspect & Adapt ranked on likelihood

Ranking Consequence Likelihood


1 15.1 No learning 11
2 15.4 No motivation 7
3 15.2 Fallback / unlearning 6
4 15.3 Dissatisfied / annoyed customer / stakeholders 0

Table 90: Result round 4 - consequences PI Planning ranked on likelihood

Ranking Consequence Likelihood


1 16.2 No alignment between teams 13
2 16.1 No goal no execution 9
3 16.4 Stakeholder annoyance 6
4 16.5 No teamness / commitment 5
5 16.6 Lack of Transparency 5
6 16.7 Rework 5
7 16.3 No real results 3
8 16.8 Longer time to market 2

Table 91: Result round 4 - consequences System Demo ranked on impact

Ranking Consequence Impact


1 13.1 No integration no working system 15
2 13.3 No feedback 9
3 13.4 Bad team morale 9
4 13.2 Unclear value 7
5 13.5 Annoyed customers 5
6 13.6 Unpredictability 3
7 13.7 Rework 0
8 13.8 Delay 0

Table 92: Result round 4 - consequences Solution Demo ranked on impact

Ranking Consequence Impact


1 14.3 Bad morale 11
2 14.1 No clear / unknown value 10

3 14.2 Annoyed stakeholders / customers 8
4 14.4 Unpredictability 5
5 14.5 No/late feedback 5
6 14.6 Delay 2
7 14.7 Rework 1

Table 93: Result round 4 - consequences Inspect & Adapt ranked on impact

Ranking Consequence Impact


1 15.4 No motivation 9
2 15.2 Fallback / unlearning 7
3 15.1 No learning 6
4 15.3 Dissatisfied / annoyed customer / stakeholders 2

Table 94: Result round 4 - consequences PI planning ranked on impact

Ranking Consequence Impact


1 16.1 No goal no execution 14
2 16.2 No alignment between teams 8
3 16.5 No teamness / commitment 7
4 16.3 No real results 6
5 16.4 Stakeholder annoyance 5
6 16.6 Lack of Transparency 5
7 16.7 Rework 3
8 16.8 Longer time to market 0

Closing

Everybody was thanked for their time and cooperation with the study, and a small gift was presented
to the participants as a thank-you.

Appendix P Expert focus group: individual votes round
2
Table 95: Votes on elements - Distributed expert 1

Element Likelihood Impact


16. PI planning 2 3
14. Solution Demo 1 1
13. System Demo 1 1
15. Inspect & Adapt 2 1
12. Program Kanban 1
2. Lean-agile mindset 1
20. DevOps & System team 1 2
1. Core values
6. Communities of Practice 1

Table 96: Votes on elements - SAFe expert 1

Element Likelihood Impact


16. PI planning 2 5
14. Solution Demo
13. System Demo 4
15. Inspect & Adapt 1 2
12. Program Kanban 2 2
2. Lean-agile mindset
20. DevOps & System team
1. Core values
6. Communities of Practice

Table 97: Votes on elements - Distributed Expert 2

Element Likelihood Impact


16. PI planning 4 1
14. Solution Demo 1 3
13. System Demo
15. Inspect & Adapt
12. Program Kanban 2 1
2. Lean-agile mindset
20. DevOps & System team 2 4
1. Core values
6. Communities of Practice

Table 98: Votes on elements - SAFe expert 2

Element Likelihood Impact


16. PI planning 3 2
14. Solution Demo
13. System Demo
15. Inspect & Adapt 3 3
12. Program Kanban
2. Lean-agile mindset
20. DevOps & System team
1. Core values 3 4
6. Communities of Practice

Table 99: Votes on elements - Practitioner 1

Element Likelihood Impact


16. PI planning 3 2
14. Solution Demo 1 2
13. System Demo 1 2
15. Inspect & Adapt 2 2
12. Program Kanban 1 1
2. Lean-agile mindset
20. DevOps & System team
1. Core values
6. Communities of Practice 1

Table 100: Votes on elements – Practitioner 2

Element Likelihood Impact


16. PI planning 4 5
14. Solution Demo 1 1
13. System Demo
15. Inspect & Adapt 3 3
12. Program Kanban
2. Lean-agile mindset
20. DevOps & System team
1. Core values
6. Communities of Practice 1

Appendix Q Expert focus group: consequences per
element round 3
13. System Demo
Table 101: 13.1 No integration no working system

Expert Note
SAFe expert 1 Technical debt as integration proven late leads to quick fixing and too big issues to
be able to resolve
SAFe expert 1 Save up trouble for the end (integrating late leads to trail of misery)
SAFe expert 2 Is the system integrated / working?

Table 102: 13.2 Unclear value

Expert Note
Distributed Unable to show added value
Expert 1
SAFe expert 2 No observation of a worksystem possible -> Where are we regarding PI objectives?

Table 103: 13.3 No feedback

Expert Note
SAFe expert 2 No feedback -> are we still doing what is of value?
Distributed Unable to gather feedback
Expert 1
SAFe expert 1 No user / customer feedback so building to much or the wrong thing
Practitioner 1 No good feedback for next iteration can't change to better customer solution

Table 104: 13.4 Bad team morale

Expert Note
Distributed Dissatisfied team
Expert 2
Practitioner 1 No good platform for the teams to show their results -> demotivation
SAFe expert 1 Team finger pointing (it works on our side)
Practitioner 2 Team morale

Table 105: 13.5 Annoyed customers

Expert Note
Practitioner 2 Stakeholder buyin
Distributed Dissatisfied customers
Expert 2

Table 106: 13.6 Unpredictability

Expert Note
Distributed Current state of the system is not clear
Expert 1

Table 107: 13.7 Rework

Expert Note
Distributed Rework
Expert 2

Table 108: 13.8 Delay

Expert Note
Distributed Delay
Expert 2
Distributed Time to market longer
Expert 2

14. Solution Demo


Table 109: 14.1 No clear / unknown value

Expert Note
SAFe expert What value has been delivered? = unclear
2
Practitioner No acceptance
2
Distributed Unable to show added value to stakeholders
Expert 1

Table 110: 14.2 Annoyed stakeholders / customers

Expert Note
Practitioner No stakeholder buyin
2
Distributed Dissatisfied customers
Expert 2

Table 111: 14.3 Bad morale

Expert Note
SAFe expert No transparency to main stakeholders -> no "demo or die" -> urgency -> is the
2 system integrated?
SAFe expert Disengagement of people on the train (just do my job)
1
Practitioner No good platform for the teams to show their results -> demotivation
1

Distributed Dissatisfied teams
Expert 2
Practitioner Morale of ART
2

Table 112: 14.4 Unpredictability

Expert Note
SAFe expert Release Train cycle disappears, stops the cadence of releasing, unpredictability of
1 releasing
Distributed Current state of the solution is not clear
Expert 1

Table 113: 14.5 No / late feedback

Expert Note
Practitioner No good feedback, feedback on prod. Will be higher
1
Distributed Unable to gather feedback
Expert 1

Table 114: 14.6 Delay

Expert Note
Distributed Delay
Expert 2
Distributed Time to market longer
Expert 2

Table 115: 14.7 Rework

Expert Note
Distributed Rework
Expert 2

15. Inspect & Adapt


Table 116: 15.1 No learning

Expert Note
Distributed Not possible to identify the current challenges
Expert 1
Distributed Not possible to improve the process
Expert 1
Practitioner 1 No improvements no learning in the Release Train / teams
Practitioner 1 Not becoming a team in norming or storming phase
SAFe expert 2 No (cross team) learning
SAFe expert 1 No learning from mistakes, no improvement process

SAFe expert 1 Mediocrasy
Practitioner 2 Returning failures (no learning)

Table 117: 15.2 Fallback / unlearning

Expert Note
Distributed Current state is unclear
Expert 1
SAFe expert Fallback into old process and structures
1

Table 118: 15.3 Dissatisfied customers

Expert Note
Distributed Dissatisfied customers
Expert 2

Table 119: 15.4 No motivation

Expert Note
Practitioner 1 People becoming unhappy, performance decrease
SAFe expert 2 No flow optimization owned & proposed by train members
SAFe expert 1 Assumptions rather than shared learning drives behavior so less focus +
engagement
Practitioner 2 Demotivated team

16. PI Planning
Table 120: 16.1 No goal no execution

Expert Note
SAFe expert No alignment to a common goal -> "why" is not answered
2
Distributed Unclear PI's
Expert 2
SAFe expert Teams cannot commit/focus train does not execute anything
1
Distributed Lack of scope
Expert 1
Distributed Lack of focus
Expert 1
Distributed Lack of prioritization
Expert 1
Practitioner Not working towards the same goal
1

Table 121: 16.2 No alignment between teams

Expert Note
SAFe expert 2 Missing out dependencies -> surprises -> less/no value delivery or replanning

Table 122: 16.3 No real results

Expert Note
SAFe expert 2 Just do work instead of leveraging knowledge in teams
Practitioner 1 No aligned planning with result of not working software
Distributed Workflow stops
Expert 2

Table 123: 16.4 Stakeholder annoyance

Expert Note
SAFe expert Business stakeholders detached from execution
1
Practitioner Stakeholders pissed off
2

Table 124: 16.5 No teamness / commitment

Expert Note
SAFe expert 2 Missing the feeling of "We are in this together"
SAFe expert 1 People feel insecure and uncertain
SAFe expert 1 Blaming and complaining culture due to lack of commitment
Practitioner 2 No commitment
Distributed Angry/distressed developers
Expert 2
Distributed Lack of commitment
Expert 1

Table 125: 16.6 Lack of transparency

Expert Note
SAFe expert Lack of transparency
1
Distributed Lack of transparency
Expert 1

Table 126: 16.7 Rework

Expert Note
Practitioner Rework/repair
2
Practitioner No aligned planning with result of big issues with integration
1

Distributed Wrong solutions
Expert 2

Table 127: 16.8 Longer time to market

Expert Note
Distributed Delay causes waiting time
Expert 2

Appendix R Expert focus group: individual votes focus
group: round 4
Table 128: Votes on consequences - Distributed Expert 1

Consequence Likelihood Impact


13.1 No integration no working system 1 3
13.2 Unclear value 3 1
13.3 No feedback 3 2
13.4 Bad team morale 1
13.5 Annoyed customers
13.6 Unpredictability 1 1
13.7 Rework
13.8 Delay

14.1 No clear / unknown value 3 2


14.2 Annoyed stakeholders / customers 1
14.3 Bad morale 1
14.4 Unpredictability 1 1
14.5 No/late feedback 2 2
14.6 Delay
14.7 Rework 1

15.1 No learning 2 1
15.2 Fallback / unlearning 2 2
15.3 Dissatisfied / annoyed customer / stakeholders 1
15.4 No motivation

16.1 No goal no execution 3 3


16.2 No alignment between teams 3 1
16.3 No real results 1
16.4 Stakeholder annoyance 1
16.5 No teamness / commitment 1
16.6 Lack of Transparency 1 1
16.7 Rework
16.8 Longer time to market 1

Table 129: Votes on consequences - SAFe expert 1

Consequence Likelihood Impact


13.1 No integration no working system 1 2
13.2 Unclear value 1 2
13.3 No feedback 1
13.4 Bad team morale 1 2

13.5 Annoyed customers 1 1
13.6 Unpredictability 1 1
13.7 Rework 1
13.8 Delay 1

14.1 No clear / unknown value 1 1


14.2 Annoyed stakeholders / customers 1 2
14.3 Bad morale 2 2
14.4 Unpredictability 1 1
14.5 No/late feedback 1 1
14.6 Delay 1
14.7 Rework

15.1 No learning 2 1
15.2 Fallback / unlearning 1 1
15.3 Dissatisfied / annoyed customer / stakeholders 1
15.4 No motivation 1 1

16.1 No goal no execution 1 2


16.2 No alignment between teams 3 1
16.3 No real results 1
16.4 Stakeholder annoyance 2 1
16.5 No teamness / commitment 1 2
16.6 Lack of Transparency 1 1
16.7 Rework
16.8 Longer time to market

Table 130: Votes on consequences - Distributed Expert 2

Consequence Likelihood Impact


13.1 No integration no working system
13.2 Unclear value
13.3 No feedback
13.4 Bad team morale 3 4
13.5 Annoyed customers 2 4
13.6 Unpredictability
13.7 Rework 2
13.8 Delay 1

14.1 No clear / unknown value


14.2 Annoyed stakeholders / customers 3 3
14.3 Bad morale 3 3
14.4 Unpredictability
14.5 No/late feedback
14.6 Delay 1 1

14.7 Rework

15.1 No learning 2 1
15.2 Fallback / unlearning
15.3 Dissatisfied / annoyed customer / stakeholders
15.4 No motivation 2 3

16.1 No goal no execution 1 1


16.2 No alignment between teams
16.3 No real results 2 3
16.4 Stakeholder annoyance 2
16.5 No teamness / commitment
16.6 Lack of Transparency 1 2
16.7 Rework 2 2
16.8 Longer time to market

Table 131: Votes on consequences - SAFe expert 2

Consequence Likelihood Impact


13.1 No integration no working system 2 3
13.2 Unclear value 2 3
13.3 No feedback 2 1
13.4 Bad team morale 2 1
13.5 Annoyed customers
13.6 Unpredictability
13.7 Rework
13.8 Delay

14.1 No clear / unknown value 1 2


14.2 Annoyed stakeholders / customers 1 2
14.3 Bad morale 1 1
14.4 Unpredictability 1 1
14.5 No/late feedback 1
14.6 Delay 1 1
14.7 Rework 1

15.1 No learning 1
15.2 Fallback / unlearning 2 2
15.3 Dissatisfied / annoyed customer / stakeholders
15.4 No motivation 1 2

16.1 No goal no execution 2 4


16.2 No alignment between teams 3 2
16.3 No real results
16.4 Stakeholder annoyance

16.5 No teamness / commitment
16.6 Lack of Transparency 2 1
16.7 Rework 1 1
16.8 Longer time to market

Table 132: Votes on consequences - Practitioner 1

Consequence Likelihood Impact


13.1 No integration no working system 3 3
13.2 Unclear value 1 1
13.3 No feedback 1 2
13.4 Bad team morale 1 1
13.5 Annoyed customers 1
13.6 Unpredictability 1 1
13.7 Rework
13.8 Delay

14.1 No clear / unknown value 1 1


14.2 Annoyed stakeholders / customers 1 1
14.3 Bad morale 1 1
14.4 Unpredictability 1 2
14.5 No/late feedback 3 2
14.6 Delay
14.7 Rework

15.1 No learning 2 1
15.2 Fallback / unlearning 1 2
15.3 Dissatisfied / annoyed customer / stakeholders
15.4 No motivation 1 1

16.1 No goal no execution 3


16.2 No alignment between teams 2 3
16.3 No real results 1 1
16.4 Stakeholder annoyance
16.5 No teamness / commitment 2 1
16.6 Lack of Transparency
16.7 Rework 2
16.8 Longer time to market 1

Table 133: Votes on consequences - Practitioner 2

Consequence Likelihood Impact


13.1 No integration no working system 3 4
13.2 Unclear value 1
13.3 No feedback 3 4

13.4 Bad team morale 1
13.5 Annoyed customers
13.6 Unpredictability
13.7 Rework
13.8 Delay

14.1 No clear / unknown value 2 4


14.2 Annoyed stakeholders / customers 1
14.3 Bad morale 3 3
14.4 Unpredictability 1
14.5 No/late feedback
14.6 Delay
14.7 Rework

15.1 No learning 2 2
15.2 Fallback / unlearning
15.3 Dissatisfied / annoyed customer / stakeholders
15.4 No motivation 2 2

16.1 No goal no execution 2 1


16.2 No alignment between teams 2 1
16.3 No real results
16.4 Stakeholder annoyance 2 3
16.5 No teamness / commitment 2 3
16.6 Lack of Transparency
16.7 Rework
16.8 Longer time to market

Appendix S Expert focus group: visualization dot
voting consequences: round 4
Graphs System Demo
In Figure 59 an overview of the consequences of the system demo is presented. In Figure 60 to Figure
62 the distribution of the experts' votes on risk, likelihood and impact is shown.
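
The overview graphs plot, per consequence, the likelihood and impact totals of Tables 83 to 86 together with a combined "risk" total. This risk total is not tabulated separately; from the plotted values it corresponds to the sum of the likelihood and impact votes. A minimal illustration in Python (explanatory only, with values taken from Table 83):

    # Explanatory sketch: the "risk" series of the overview graphs
    # corresponds to likelihood + impact. Vote totals from Table 83.
    system_demo = {
        "13.1 No integration no working system": (10, 15),
        "13.2 Unclear value": (8, 7),
        "13.3 No feedback": (10, 9),
        "13.4 Bad team morale": (8, 9),
        "13.5 Annoyed customers": (4, 5),
        "13.6 Unpredictability": (3, 3),
        "13.7 Rework": (3, 0),
        "13.8 Delay": (2, 0),
    }
    risk = {c: likelihood + impact
            for c, (likelihood, impact) in system_demo.items()}
    # e.g. risk["13.1 No integration no working system"] == 25,
    # the tallest bar in Figure 59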

Overview consequences - System Demo
(Bar chart; y-axis: number of votes; x-axis: consequences; series: risk, likelihood and impact.)

Figure 59: Graph overview consequences System Demo

Expert distribution of risk - System Demo
(Bar chart; y-axis: number of votes; x-axis: consequences; series: SAFe expert, distributed expert and practitioner.)

Figure 60: Graph expert distribution of risk System Demo

Expert distribution of likelihood - System Demo
(Bar chart; y-axis: number of votes; x-axis: consequences; series: SAFe expert, distributed expert and practitioner.)

Figure 61: Graph expert distribution on likelihood System Demo

Expert distribution of impact - System Demo
(Bar chart; y-axis: number of votes; x-axis: consequences; series: SAFe expert, distributed expert and practitioner.)

Figure 62: Graph expert distribution on impact System Demo

Graphs Solution Demo
In Figure 63 an overview of the consequences of the solution demo is presented. In Figure 64 to Figure
66 the distribution of the experts' votes on risk, likelihood and impact is shown.

Overview consequences - Solution Demo
(Bar chart; y-axis: number of votes; x-axis: consequences; series: risk, likelihood and impact.)

Figure 63: Graph overview consequences Solution Demo

Expert distribution of risk - Solution Demo
(Bar chart; y-axis: number of votes; x-axis: consequences; series: SAFe expert, distributed expert and practitioner.)

Figure 64: Graph expert distribution of risk Solution Demo

Expert distribution of likelihood - Solution Demo
(Bar chart; y-axis: number of votes; x-axis: consequences; series: SAFe expert, distributed expert and practitioner.)

Figure 65: Graph expert distribution of likelihood Solution Demo

Expert distribution of impact - Solution Demo
(Bar chart; y-axis: number of votes; x-axis: consequences; series: SAFe expert, distributed expert and practitioner.)

Figure 66: Graph expert distribution of impact Solution Demo

Graphs Inspect & Adapt
In Figure 67 an overview of the consequences of inspect & adapt is presented. In Figure 68 to Figure
70 the distribution of the experts' votes on risk, likelihood and impact is shown.

Overview consequences - Inspect & Adapt
(Bar chart; y-axis: number of votes; x-axis: consequences; series: risk, likelihood and impact.)

Figure 67: Graph overview consequences Inspect & Adapt

Expert distribution of risk - Inspect & Adapt
(Bar chart; y-axis: number of votes; x-axis: consequences; series: SAFe expert, distributed expert and practitioner.)

Figure 68: Graph expert distribution of risk Inspect & Adapt

Expert distribution of likelihood - Inspect & Adapt
(Bar chart; y-axis: number of votes; x-axis: consequences; series: SAFe expert, distributed expert and practitioner.)

Figure 69: Graph expert distribution of likelihood Inspect & Adapt

Expert distribution of impact - Inspect & Adapt
(Bar chart; y-axis: number of votes; x-axis: consequences; series: SAFe expert, distributed expert and practitioner.)

Figure 70: Graph expert distribution of impact Inspect & Adapt

Graphs PI planning
In Figure 71 an overview of the consequences of the PI planning is presented. In Figure 72 to Figure
74 the distribution of the experts' votes on risk, likelihood and impact is shown.

Overview consequences - PI planning
(Bar chart; y-axis: number of votes; x-axis: consequences; series: risk, likelihood and impact.)

Figure 71: Graph overview consequences PI planning

Expert distribution of risk - PI planning
(Bar chart; y-axis: number of votes; x-axis: consequences; series: SAFe expert, distributed expert and practitioner.)

Figure 72: Graph expert distribution of risk PI planning

Expert distribution of likelihood - PI planning
(Bar chart; y-axis: number of votes; x-axis: consequences; series: SAFe expert, distributed expert and practitioner.)

Figure 73: Graph expert distribution of likelihood PI planning

Expert distribution of impact - PI planning
(Bar chart; y-axis: number of votes; x-axis: consequences; series: SAFe expert, distributed expert and practitioner.)

Figure 74: Graph expert distribution of impact PI planning

Appendix T Survey

Appendix U Results of the survey
The survey was executed from the 25th of July to the 31st of August. It consisted of three parts:
demographics, challenged elements of the Agile Release Train, and options for future research. In the
demographics part, the participants were asked about their experience with SAFe and with distributed
working. In the challenged-elements part, the nine elements identified by the focus group were rated
on the likelihood and the impact of failing. In the options-for-future-research part, the assumption that
in any distributed SAFe implementation a conscious decision is made whether or not to hold the events
co-located was checked. Additionally, the participants were asked how helpful possible solution
directions for future research would be. A printed version of the survey, containing all questions, can
be found in Appendix T.

The survey was spread digitally using SurveyMonkey. It was spread to the community using social
media: it was posted in the SAFe LinkedIn group, on Twitter, and on Facebook. Besides the community,
some people were approached directly: the survey was spread among the Agile Release Train of
T-Mobile, as well as among all participants of SAFe trainings at Prowareness. Additionally, the
consultants at Prowareness shared the survey in their network. The response of the community was
very low: in total 20 people from this route took part in the survey, all via LinkedIn. Both T-Mobile and
the trainees responded better, with 14 and 26 replies respectively. Finally, the network yielded 6
replies in total, which brings the total number of participants to 66 (20 + 14 + 26 + 6), of which 42
completed the entire survey.

The survey initially ran during the summer holiday, until the 31st of August, which was possibly the
cause of the low response. The duration was therefore extended until the 20th of September, to
eliminate the summer holiday as a cause of the low response. Despite this, only 6 persons participated
during the extended period, which did not significantly raise the response rate.

Appendix V Practitioner focus group: invitation letter

Dear <name>,

First of all, thank you very much for your willingness to participate in this research.

My name is Peter van Buul and, together with Rini van Solingen, I am investigating distributed SAFe
as part of my master thesis in Information Architecture at the Delft University of Technology. The
research is carried out by me as a graduate student assignment at Prowareness. During this research,
challenges with executing SAFe in a distributed environment have been discovered. As part of this
research, an expert session is organized in which Release Train Engineers will discuss the practical side
of this research.

The expert session will be on Monday the 31st of October 2016, from 13:00 to 17:00, in Delft at
Prowareness, Brassersplein 1, 2612 CT Delft. The session can only start when all participants are
present. We will arrange a lunch for you from 12:00; please be there on time, as this will help
punctuality. Could you please contact me via phone if you are delayed?

Information is provided in English to allow the international community to examine the research.
During the expert session, the assignments will be in English as well; interactions during the session,
however, can be in Dutch.

The program for the day is expected to consist of the following 3 parts:
- Discussion on challenged distributed SAFe elements 13:15 – 15:00
- Discussion on solutions to prevent elements from failing 15:00 – 16:00
- Discussion on consequences of elements failing 16:00 – 17:00 (ultimate latest)

The program is not finalized yet, so the final program might be slightly different. To finalize the
program, I would like to ask you the following questions.
- How long have you worked with SAFe?
- What is your experience with distributed working?
- What is your experience with distributed in SAFe?

Additionally, I might ask you to prepare some things before the session.

With best regards,

Peter van Buul, master student at Delft University of Technology


Mobile: +31652478416
Email: p.vanbuul@prowareness.nl

Appendix W Practitioner focus group: forms
Name:

Element | Likelihood: Votes / Total | Impact: Votes / Total

Total number of votes on likelihood:

Total number of votes on impact:

Name:

Element

Solution Difficulty Rank Impact Rank

Element

Solution Difficulty Rank Impact Rank

Element

Solution Difficulty Rank Impact Rank

Element

Solution Difficulty Rank Impact Rank

Element

Solution Difficulty Rank Impact Rank

Element

Solution Difficulty Rank Impact Rank

Name:

Element

Solution

Element

Solution

Element

Solution

Element

Solution

Element

Solution


Element

Solution

Appendix X Practitioner focus group: execution
Opening

The participants were received centrally in the hall. When the participants came in, they were asked
not to talk to the other participants about the focus group, SAFe, or distributed working for the
duration of the focus group, except when asked to do so during the session. The participants were all
welcomed and walked in by one of the facilitators.

Introduction

Peter did a plenary introduction, and the participants and facilitators introduced themselves. One of
the participants was not present yet; however, given the tight schedule, the introduction was done
without this participant. By the time the first exercise was explained, all participants were present.

Combining experience in groups (round 1)

Peter explained the first exercise; after the explanation, some questions were answered by Peter. Once
the exercise was clear, the group was split in two. The three participants that had previously been
involved with the research stayed in the room with Hanneke (group Hanneke) to execute the first
exercise. The other eight went with Peter to the other room (group Peter) to execute the first exercise.

Both groups started with the setup as presented in Table 134; a picture is provided in Figure 75.
Table 134: Practitioner focus group - start setup round 1 - groups

Not specifically challenged by distributed | Undecided | Specifically challenged by distributed
16. PI planning
15. Inspect & Adapt
14. Solution Demo
12. Program Kanban
13. System Demo
22. DevOps
23. System team
1. Core values
6. Communities of Practice
2. Lean-agile mindset
8. Weighted Shortest Job First
24. Release Management
25. Shared Services
26. User Experience
4. Implementing 1-2-3
9. Release any time
17. Release Train Engineer
18. System Architect
19. Product Management
21. Customer
27. Vision

28. Roadmap
10. Agile Release Train / Value
Stream
31. PI Objectives
5. Lean-agile leaders
30. Milestones & Releases
32. Feature
33. Enabler
29. Metrics
34. Epics
11. Architectural runway
3. SAFe principles
20. Business Owners
7. Value Stream coordination

Figure 75: Practitioner focus group - picture start setup round 1 - groups

Both groups did the exercise; despite the difference in size, both groups progressed at roughly the
same rate. The results of group Hanneke are presented in Table 135 and Figure 76; the results of group
Peter are presented in Table 136 and Figure 77.
Table 135: Practitioner focus group - result round 1 - group Hanneke

Not specifically challenged by distributed | Undecided | Specifically challenged by distributed
2. Lean-agile mindset 7. Value Stream coordination 1. Core values
3. SAFe principles 13. System Demo 4. Implementing 1-2-3
5. Lean-agile leaders 14. Solution Demo 6. Communities of Practice
8. Weighted Shortest Job First 10. Agile Release Train / Value
Stream
9. Release any time 11. Architectural runway
12. Program Kanban 15. Inspect & Adapt
18. System Architect 16. PI planning
22. DevOps 17. Release Train Engineer
23. System team 19. Product Management
24. Release Management 20. Business Owners
25. Shared Services 21. Customer
26. User Experience 31. PI Objectives
27. Vision 32. Feature
28. Roadmap 33. Enabler
29. Metrics 34. Epics
30. Milestones & Releases

Table 136: Practitioner focus group - result round 1 - group Peter

Not specifically challenged by distributed | Undecided | Specifically challenged by distributed
1. Core values 9. Release any time 4. Implementing 1-2-3
2. Lean-agile mindset 24. Release Management 6. Communities of Practice
3. SAFe principles 34. Epics 15. Inspect & Adapt
5. Lean-agile leaders 16. PI planning
7. Value Stream coordination 17. Release Train Engineer
8. Weighted Shortest Job First 22. DevOps
10. Agile Release Train / Value 23. System team
Stream
11. Architectural runway 25. Shared Services
12. Program Kanban 26. User Experience
13. System Demo 32. Feature
14. Solution Demo 33. Enabler
18. System Architect
19. Product Management
20. Business Owners
21. Customer
27. Vision

28. Roadmap
29. Metrics
30. Milestones & Releases
31. PI Objectives

Figure 76: Practitioner focus group - picture result round 1 - group Hanneke

Figure 77: Practitioner focus group - picture results round 1 - group Peter

Collect and discuss group results (round 1)

After both groups had finished, the results of both groups were combined, which created a new
starting point. Items both groups classified the same were put in the corresponding column; this
resulted in the start setup as presented in Table 137 and Figure 78.
Table 137: Practitioner focus group - start setup round 1 - plenary

Not specifically challenged by distributed | Undecided | Specifically challenged by distributed
12. Program Kanban 14. Solution Demo 16. PI planning
2. Lean-agile mindset 13. System Demo 15. Inspect & Adapt
8. Weighted Shortest Job First 22. DevOps 6. Communities of Practice
18. System Architect 23. System team 4. Implementing 1-2-3
27. Vision 1. Core values 17. Release Train Engineer
28. Roadmap 24. Release Management 32. Feature

5. Lean-agile leaders 25. Shared Services 33. Enabler
30. Milestones & Releases 26. User Experience
29. Metrics 9. Release any time
3. SAFe principles 19. Product Management
21. Customer
10. Agile Release Train / Value
Stream
31. PI Objectives
34. Epics
11. Architectural runway
20. Business Owners
7. Value Stream coordination

Figure 78: Practitioner focus group - picture start setup round 1 - plenary

Peter explained the exercise; after this explanation, the group started with it. As the
participants were asked to provide insight based on their practical experience, implementation
differences made it hard to reach agreement on some items. Because of this, the discussions of the
elements took more time than was available. Therefore, Hanneke came up with the idea to use a lean
coffee break technique, in which the participants discuss an item for one minute and then decide
whether to add another minute or to conclude the item. This way, the processing speed was increased
so that the group stayed within the (extended) timebox.

When differences are resolved by looking at the items from theory, the theory serves as a truth against
which to validate, and this helps to reach agreement. When looking at the items from practice,
however, such differences come from different implementations; in that case both views are correct,
with the result that reaching agreement is not possible. This happened during the focus group: for five
items the participants stated that no agreement could be reached, as the classification depends on
the implementation. These five items were: Release Management, DevOps, system team, Release any
time, and Customer. The result of the exercise is presented in Table 138 and Figure 79.
Table 138: Practitioner focus group - result round 1 - plenary

Not specifically challenged by distributed | Undecided | Specifically challenged by distributed
12. Program Kanban 13. System Demo 10. Agile Release Train / Value
Stream
2. Lean-agile mindset 1. Core values 16. PI planning
8. Weighted Shortest Job First 15. Inspect & Adapt
18. System Architect 24. Release Management 6. Communities of Practice
27. Vision 22. DevOps 4. Implementing 1-2-3
28. Roadmap 23. System team 17. Release Train Engineer
5. Lean-agile leaders 9. Release any time 32. Feature
30. Milestones & Releases 21. Customer 33. Enabler
29. Metrics 25. Shared Services
3. SAFe principles 19. Product Management
7. Value Stream coordination 20. Business Owners
14. Solution Demo 11. Architectural runway
31. PI Objectives 26. User Experience
34. Epics

Figure 79: Practitioner focus group - picture result round 1 - plenary

Dot voting (round 2)

Hanneke explained the next exercise, dot voting, after which the participants wrote down their votes
individually. No interaction occurred during this time.

Analyzing the result with the group (round 2)

While Hanneke calculated the votes to determine the top 3 of likelihood and impact, Peter explained
the next exercise. The votes for each element can be found in Table 139. The ranking on likelihood can
be found in Table 140, and the ranking on impact in Table 141. The individual votes of the participants
can be found in 0.
Table 139: Practitioner focus group - result of round 2 - dot voting individually

Element Likelihood Impact


11. Architectural runway 8 8
10. Agile Release Train / Value Stream 5 11
16. PI planning 31 27
15. Inspect & Adapt 26 16
6. Communities of Practice 10 8

4. Implementing 1-2-3 13 19
17. Release Train Engineer 13 15
32. Feature 11 16
25. Shared Services 17 6
33. Enabler 3 3
19. Product Management 2 9
20. Business Owners 2 3
26. User Experience 2 2

Table 140: Practitioner focus group - result round 2 - ranking on likelihood

Ranking Element Likelihood


1 16. PI planning 31
2 15. Inspect & Adapt 26
3 25. Shared Services 17
4 4. Implementing 1-2-3 13
4 17. Release Train Engineer 13
5 32. Feature 11
6 6. Communities of Practice 10
7 11. Architectural runway 8
8 10. Agile Release Train / Value Stream 5
9 33. Enabler 3
10 19. Product Management 2
10 20. Business Owners 2
10 26. User Experience 2

Table 141: Practitioner focus group - result round 2 - ranking on impact

Ranking Element Impact


1 16. PI planning 27
2 4. Implementing 1-2-3 19
3 15. Inspect & Adapt 16
3 32. Feature 16
4 17. Release Train Engineer 15
5 10. Agile Release Train / Value Stream 11
6 19. Product Management 9
7 11. Architectural runway 8
7 6. Communities of Practice 8
8 25. Shared Services 6
9 33. Enabler 3
9 20. Business Owners 3
10 26. User Experience 2

From these rankings, the top three on likelihood and the top three on impact were combined, with
“15. Inspect & Adapt” and “32. Feature” tied at rank three on impact, and the result was put on the
wall, as shown in Figure 80. For the following five items, solutions were discussed: “16. PI planning”,
“15. Inspect & Adapt”, “4. Implementing 1-2-3”, “32. Feature”, and “25. Shared Services”.

Figure 80: Practitioner focus group - picture result round 2 – combined ranking
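
As an illustration of this combination, the sketch below (Python; explanatory only and not part of the study, with the vote totals taken from Table 139) shows that keeping the tie at the cut-off yields exactly these five items.

    # Explanatory sketch: round-2 selection in the practitioner session.
    # Totals from Table 139: (likelihood, impact). Ties at the cut-off
    # are kept, which is why five elements result instead of four.
    votes = {
        "11. Architectural runway": (8, 8),
        "10. Agile Release Train / Value Stream": (5, 11),
        "16. PI planning": (31, 27),
        "15. Inspect & Adapt": (26, 16),
        "6. Communities of Practice": (10, 8),
        "4. Implementing 1-2-3": (13, 19),
        "17. Release Train Engineer": (13, 15),
        "32. Feature": (11, 16),
        "25. Shared Services": (17, 6),
        "33. Enabler": (3, 3),
        "19. Product Management": (2, 9),
        "20. Business Owners": (2, 3),
        "26. User Experience": (2, 2),
    }

    def top3_keeping_ties(index):
        # Keep every element scoring at least the third-highest total.
        totals = sorted((v[index] for v in votes.values()), reverse=True)
        return {e for e in votes if votes[e][index] >= totals[2]}

    selected = top3_keeping_ties(0) | top3_keeping_ties(1)
    # -> PI planning, Inspect & Adapt, Shared Services,
    #    Implementing 1-2-3, and Feature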

Collect individual experience (round 3)

After Peter had explained the exercise and the ranking had been presented, the participants wrote
down their solutions for the five elements that resulted from the combined ranking. The solutions that
each participant came up with can be found in Appendix AA.

Collect group experience (round 3)

The participants were divided into two groups. Within each group, the solutions of the individuals
were discussed; based on this discussion, the two best solutions per element were formulated. These
solutions were a combination of the ideas of the individuals. Each solution, with its corresponding
element, was written down on a separate post-it to be used in the next round. The results of the
groups can be found in Appendix BB.

Present findings to group (round 3)

After Peter explained the exercise, a delegate from each of the groups presented the solutions of their
group and put the post-it with the solution on the wall under the corresponding element. This was
done one at a time: first group A presented a solution, then group B, and so on. Before moving to the
next solution, it was asked whether the solution was clear to all participants. Similar solutions were
grouped together based on group consensus. At the start, the five elements were put on the wall with
no post-its under them, as presented in Table 142. After all solutions were presented, the solutions
were numbered for the dot voting in the next exercise. The result of this exercise can be found in
Table 143 to Table 147, and in Figure 81 to Figure 85.
Table 142: Practitioner focus group - start setup round 3 - plenary

16. PI planning | 15. Inspect & Adapt | 4. Implementing 1-2-3 | 32. Feature | 25. Shared Services

Table 143: Practitioner focus group - result round 3 - solutions PI planning

Number Solution
1 Features ready before PI planning
2 Minimal 1 x period PI Planning on-site with all teams
3 Good communication tools

Table 144: Practitioner focus group - result round 3 - solutions Inspect & Adapt

Number Solution
1 Good communication tools
2 Divide the topics over the locations
3 First I&A meeting on 1 location

Table 145: Practitioner focus group - result round 3 - solutions Implementing 1-2-3

Number Solution
1 Have 1 team to coordinate the trainings & implementation
2 Coaches know culture & problems of location
3 Strong vision promoted top down

Table 146: Practitioner focus group - result round 3 - solutions Feature

Number Solution
1 Keep coordinating dependencies (Scrum of Scrums)
2 Have common understanding on features
3 Make sure that each feature has 1 owner to manage dependencies
4 Small features

Table 147: Practitioner focus group - result round 3 - solutions Shared Services

Number Solution
1 Distribute features by shared service impact (focus areas)
2 Services give commitment
3 1 x per period visit each team visibility
4 Involvement during planning events

Figure 81: Practitioner focus group - picture result round 3 - solutions PI planning

Figure 82: Practitioner focus group - picture result round 3 - solutions Inspect & Adapt

Figure 83: Practitioner focus group - picture result round 3 - solutions Implementing 1-2-3

Figure 84: Practitioner focus group - picture result round 3 - solutions feature

Figure 85: Practitioner focus group - picture result round 3 - solutions Shared Services

Dot voting (round 4)

Hanneke explained the last exercise, dot voting, after which the participants wrote down their votes
individually. No interaction between the participants occurred during this exercise. However, some
participants found the presented scales difficult to reason with, so extra explanation of the scales
was provided to them, which was then repeated to all participants.

Analyzing the result with the group (round 4)

While Hanneke was calculating the results of the dot voting, Peter asked the participants an additional
question: “You were asked to brainstorm on solutions; which of these solutions are actually used
in practice, and which are just an idea but not realizable?”. The participants responded that all
solutions were used in practice, except those that involved flying, due to the costs.

The results of the votes can be found in Table 148 to Table 152. Ranking on difficulty can be found in
Table 153 to Table 157, and ranking on impact can be found in Table 158 to Table 162. The individual
votes can be found in Appendix CC.
Table 148: Practitioner focus group - result round 4 - dot voting individually on PI planning

Solution Difficulty Impact


1. Features ready before PI planning 15 7
2. Minimal 1 x period PI Planning on-site with all teams 5 14
3. Good communication tools 10 9

Table 149: Practitioner focus group - result round 4 - dot voting individually on Inspect & Adapt

Solution Difficulty Impact


1. Good communication tools 12 13
2. Divide the topics over the locations 9 5
3. First I&A meeting on 1 location 9 12

Table 150: Practitioner focus group - result round 4 - dot voting individually on Implementing 1-2-3

Solution Difficulty Impact


1. Have 1 team to coordinate the trainings & implementation 8 11
2. Coaches know culture & problems of location 6 5
3. Strong vision promoted top down 16 14

Table 151: Practitioner focus group - result round 4 - dot voting individually on Feature

Solution Difficulty Impact


1. Keep coordinating dependencies (Scrum of Scrums) 11 10
2. Have common understanding on features 6 7
3. Make sure that each feature has 1 owner to manage dependencies 14 12
4. Small features 9 11

Table 152: Practitioner focus group - result round 4 - dot voting individually on Shared Services

Solution Difficulty Impact


1. Distribute features by shared service impact (focus areas) 9 8
2. Services give commitment 11 13
3. 1 x per period visit each team visibility 8 7
4. Involvement during planning events 12 12

Table 153: Practitioner focus group - result round 4 - solutions PI planning ranked on difficulty

Ranking Solution Difficulty


1 1. Features ready before PI planning 15
2 3. Good communication tools 10
3 2. Minimal 1 x period PI Planning on-site with all teams 5

Table 154: Practitioner focus group - result round 4 - solutions Inspect & Adapt ranked on difficulty

Ranking Solution Difficulty


1 1. Good communication tools 12
2 2. Divide the topics over the locations 9
3 3. First I&A meeting on 1 location 9

Table 155: Practitioner focus group - result round 4 - solutions Implementing 1-2-3 ranked on difficulty

Ranking Solution Difficulty


1 3. Strong vision promoted top down 16
2 1. Have 1 team to coordinate the trainings & implementation 8
3 2. Coaches know culture & problems of location 6

Table 156: Practitioner focus group - result round 4 - solutions Feature ranked on difficulty

Ranking Solution Difficulty


1 3. Make sure that each feature has 1 owner to manage dependencies 14
2 1. Keep coordinating dependencies (Scrum of Scrums) 11
3 4. Small features 9
4 2. Have common understanding on features 6

Table 157: Practitioner focus group - result round 4 - solutions Shared Services ranked on difficulty

Ranking Solution Difficulty


1 4. Involvement during planning events 12

2 2. Services give commitment 11
3 1. Distribute features by shared service impact (focus areas) 9
4 3. 1 x per period visit each team visibility 8

Table 158: Practitioner focus group - result round 4 - solutions PI planning ranked on impact

Ranking Solution Impact


1 2. Minimal 1 x period PI Planning on-site with all teams 14
2 3. Good communication tools 9
3 1. Features ready before PI planning 7

Table 159: Practitioner focus group - result round 4 - solutions Inspect & Adapt ranked on impact

Ranking Solution Impact


1 1. Good communication tools 13
2 3. First I&A meeting on 1 location 12
3 2. Divide the topics over the locations 5

Table 160: Practitioner focus group - result round 4 - solutions Implementing 1-2-3 ranked on impact

Ranking Solution Impact


1 3. Strong vision promoted top down 14
2 1. Have 1 team to coordinate the trainings & implementation 11
3 2. Coaches know culture & problems of location 5

Table 161: Practitioner focus group - result round 4 - solutions Feature ranked on impact

Ranking Solution Impact


1 3. Make sure that each feature has 1 owner to manage dependencies 12
2 4. Small features 11
3 1. Keep coordinating dependencies (Scrum of Scrums) 10
4 2. Have common understanding on features 7

Table 162: Practitioner focus group - result round 4 - solutions Shared Services ranked on impact

Ranking Solution Impact


1 2. Services give commitment 13
2 4. Involvement during planning events 12
3 1. Distribute features by shared service impact (focus areas) 8
4 3. 1 x per period visit each team visibility 7
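The per-element rankings in Table 153 to Table 162 follow mechanically from the vote totals in
Table 148 to Table 152. As a minimal sketch (an illustration only, abridged to the PI planning data
from Table 148), the difficulty ranking of Table 153 can be reproduced as follows:

# Totals from Table 148: solution -> (difficulty votes, impact votes).
pi_planning = {
    "1. Features ready before PI planning": (15, 7),
    "2. Minimal 1 x period PI Planning on-site with all teams": (5, 14),
    "3. Good communication tools": (10, 9),
}

# Sort on the difficulty totals, highest first, reproducing Table 153;
# the same sort on the impact totals reproduces Table 158.
ranked = sorted(pi_planning.items(), key=lambda kv: kv[1][0], reverse=True)
for rank, (solution, (difficulty, _)) in enumerate(ranked, start=1):
    print(rank, solution, difficulty)
# -> 1 1. Features ready before PI planning 15
#    2 3. Good communication tools 10
#    3 2. Minimal 1 x period PI Planning on-site with all teams 5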

Closing

Everyone was thanked for their time and cooperation with the study, and a small gift was presented to
the participants as a thank-you.

Appendix Y Practitioner focus group: categorization
participants
How each of the participants was categorized can be found in Table 164 and Table 165. Table 163 shows
how the two are related: the experience level in Table 164 (the sum of the three sub-scores) maps onto
the expert category in Table 165.
Table 163: Experience level categorization

Expert category Experience level


Expert 3-4
Intermediate 5-7
Beginner 8-9

Table 164: Expert experience level

Expert | SAFe experience level | Distributed experience level | Distributed SAFe experience level | Experience level
Practitioner 1 1 1 1 3
Practitioner 2 1 1 1 3
Practitioner 3 1 1 1 3
Practitioner 4 1 1 1 3
Practitioner 5 1 1 1 3
Practitioner 6 1 1 3 5
Practitioner 7 1 1 3 5
Practitioner 8 1 3 3 7
Practitioner 9 3 3 3 9
Practitioner 10 1 2 1 4
Practitioner 11 1 1 1 3

Table 165: Expert category

Expert Expert category


Practitioner 1 Expert
Practitioner 2 Expert
Practitioner 3 Expert
Practitioner 4 Expert
Practitioner 5 Expert
Practitioner 6 Intermediate
Practitioner 7 Intermediate
Practitioner 8 Intermediate
Practitioner 9 Beginner
Practitioner 10 Expert
Practitioner 11 Expert
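The categorization itself can be stated in a few lines of code. The sketch below (an illustration, not
part of the study materials) encodes the ranges of Table 163 and reproduces a practitioner's category
from the sub-scores in Table 164:

def experience_level(safe: int, distributed: int, distributed_safe: int) -> int:
    # The overall experience level in Table 164 is the sum of the three
    # sub-scores (lower totals correspond to the Expert end of Table 163).
    return safe + distributed + distributed_safe

def expert_category(level: int) -> str:
    # Map a total experience level onto a category per Table 163.
    if 3 <= level <= 4:
        return "Expert"
    if 5 <= level <= 7:
        return "Intermediate"
    if 8 <= level <= 9:
        return "Beginner"
    raise ValueError(f"level {level} is outside the 3-9 range of Table 163")

# Example: Practitioner 9 scored 3 on all three sub-scores.
print(expert_category(experience_level(3, 3, 3)))  # -> Beginner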

Appendix Z Practitioner focus group: individual votes
round 2
Table 166: Practitioner focus group - votes on elements - Practitioner 1

Element Likelihood Impact


11. Architectural runway 1 1
10. Agile Release Train / Value Stream 1
16. PI planning 4 3
15. Inspect & Adapt 6 2
6. Communities of Practice
4. Implementing 1-2-3 2 4
17. Release Train Engineer 1
32. Feature
25. Shared Services
33. Enabler
19. Product Management 1
20. Business Owners
26. User Experience

Table 167: Practitioner focus group - votes on elements - Practitioner 2

Element Likelihood Impact


11. Architectural runway
10. Agile Release Train / Value Stream 3 3
16. PI planning 4 4
15. Inspect & Adapt
6. Communities of Practice 3 3
4. Implementing 1-2-3
17. Release Train Engineer
32. Feature 3 3
25. Shared Services
33. Enabler
19. Product Management
20. Business Owners
26. User Experience

Table 168: Practitioner focus group - votes on elements - Practitioner 3

Element Likelihood Impact


11. Architectural runway
10. Agile Release Train / Value Stream
16. PI planning 1 1
15. Inspect & Adapt

Master Thesis Peter van Buul


243
6. Communities of Practice
4. Implementing 1-2-3
17. Release Train Engineer 2 4
32. Feature 4 5
25. Shared Services 5 2
33. Enabler 1 1
19. Product Management
20. Business Owners
26. User Experience

Table 169: Practitioner focus group - votes on elements - Practitioner 4

Element Likelihood Impact


11. Architectural runway 1
10. Agile Release Train / Value Stream 4
16. PI planning 5 3
15. Inspect & Adapt 4
6. Communities of Practice
4. Implementing 1-2-3 2 3
17. Release Train Engineer 1
32. Feature
25. Shared Services 2 1
33. Enabler
19. Product Management
20. Business Owners
26. User Experience

Table 170: Practitioner focus group - votes on elements - Practitioner 5

Element Likelihood Impact


11. Architectural runway 1 1
10. Agile Release Train / Value Stream
16. PI planning 2 2
15. Inspect & Adapt 2 1
6. Communities of Practice 1 1
4. Implementing 1-2-3
17. Release Train Engineer 2 2
32. Feature 2 3
25. Shared Services 1 1
33. Enabler
19. Product Management 1 1
20. Business Owners 1 1
26. User Experience

Table 171: Practitioner focus group - votes on elements - Practitioner 6

Element Likelihood Impact


11. Architectural runway
10. Agile Release Train / Value Stream
16. PI planning 6 4
15. Inspect & Adapt 4 3
6. Communities of Practice
4. Implementing 1-2-3 3 6
17. Release Train Engineer
32. Feature
25. Shared Services
33. Enabler
19. Product Management
20. Business Owners
26. User Experience

Table 172: Practitioner focus group - votes on elements - Practitioner 7

Element Likelihood Impact


11. Architectural runway 2 2
10. Agile Release Train / Value Stream
16. PI planning 2 1
15. Inspect & Adapt 3 2
6. Communities of Practice 3 2
4. Implementing 1-2-3
17. Release Train Engineer 2
32. Feature
25. Shared Services 3
33. Enabler
19. Product Management 4
20. Business Owners
26. User Experience

Table 173: Practitioner focus group - votes on elements - Practitioner 8

Element Likelihood Impact


11. Architectural runway 1 1
10. Agile Release Train / Value Stream 1 1
16. PI planning 1 2
15. Inspect & Adapt 2 3
6. Communities of Practice 1
4. Implementing 1-2-3 3 4
17. Release Train Engineer
32. Feature 1 2

25. Shared Services 2
33. Enabler 1
19. Product Management
20. Business Owners
26. User Experience

Table 174: Practitioner focus group - votes on elements - Practitioner 9

Element Likelihood Impact


11. Architectural runway 2 2
10. Agile Release Train / Value Stream
16. PI planning 1
15. Inspect & Adapt 1
6. Communities of Practice 1
4. Implementing 1-2-3 3 2
17. Release Train Engineer 4 3
32. Feature
25. Shared Services 1
33. Enabler 2
19. Product Management
20. Business Owners 1 1
26. User Experience 1 1

Table 175: Practitioner focus group - votes on elements - Practitioner 10

Element Likelihood Impact


11. Architectural runway 1
10. Agile Release Train / Value Stream 1 2
16. PI planning 3 3
15. Inspect & Adapt 2 2
6. Communities of Practice 1 2
4. Implementing 1-2-3
17. Release Train Engineer 3
32. Feature 2
25. Shared Services 1
33. Enabler 1
19. Product Management 1
20. Business Owners 1
26. User Experience

Table 176: Practitioner focus group - votes on elements - Practitioner 11

Element Likelihood Impact


11. Architectural runway
10. Agile Release Train / Value Stream

16. PI planning 3 3
15. Inspect & Adapt 3 2
6. Communities of Practice
4. Implementing 1-2-3
17. Release Train Engineer 2 2
32. Feature 1 1
25. Shared Services 2 2
33. Enabler
19. Product Management 1 2
20. Business Owners
26. User Experience 1 1

Appendix AA Practitioner focus group: individual
solutions round 3
As most participants felt more at ease when writing in Dutch, they were allowed to write down their
answers in Dutch; these can be found in Table 177 to Table 181. The translated versions can be found
in Table 182 to Table 186.
Table 177: Practitioner focus group - 16. PI planning - individual solutions

Expert Solution
Practitioner 1 Iedereen invliegen
Practitioner 1 Binnen trein feature kampen maken
Practitioner 1 Microsoft Halo
Practitioner 1 Vaker & korter doen
Practitioner 2 Betere voorbereiding feature niveau
Practitioner 2 Increase common understanding of Value Stream by teammembers
Practitioner 2 Train RTE
Practitioner 2 Train Feature Engineers
Practitioner 2 Langere PI planning
Practitioner 2 Meer reconciliatie tussen teams in PI planningdag
Practitioner 2 Tussentijdse demo's in PI planningdag
Practitioner 3 Make sure all possible features are clear to everyone upfront
Practitioner 3 Have preliminary meeting in the weeks before PI day
Practitioner 3 Foresee needed infrastructure & test upfront
Practitioner 4 Good audio/video
Practitioner 4 RTE per site
Practitioner 4 Aligned Tooling for feature/ustracking, program board
Practitioner 5 Verhoog reisbudget: % van teams laten reizen
Practitioner 5 Video conferencing
Practitioner 5 Tooling voor PI planning
Practitioner 6 Starten op 1 lokatie, minstens 2 PI's
Practitioner 6 Goede tooling comm & planning / Kanban Board
Practitioner 7 Goede communicatiemiddelen
Practitioner 7 Mensen invliegen om het samen te doen
Practitioner 7 Mensen tijdelijk onsite halen om de PI planning min. 1 x onsite mee te maken
Practitioner 8 Collaboration - tooling
Practitioner 8 Take more time to prepare alignments
Practitioner 9 Overkoepelend overleg plannen waar afhankelijkheden gemanaged worden en
prioriteiten bepaald
Practitioner 10 Invliegen voor PI planning (duur!)
Practitioner 10 Perfecte videofaciliteiten
Practitioner 10 Professionele moderator
Practitioner 10 Digitale middelen / white boards ?
Practitioner 11 PI-planning op 1 locatie, 'vlieg' mensen in

Practitioner 11 Deel van PI met hele trein, mbv digitale communicatiemiddelen, deel van PI per
locatie
Practitioner 11 Aantal mini-PI's en daarna de grote volledige PI met hele trein of
vertegenwoordigers v/d locaties

Table 178: Practitioner focus group - 15. Inspect & Adapt - individual solutions

Name Solution
Practitioner 1 Iedereen invliegen
Practitioner 2 Logisch gevolg van verbetering op PI planningvlak
Practitioner 3 Involve everyone in this process
Practitioner 4 Good audio/video
Practitioner 4 RET/ organiser per site
Practitioner 4 Well trained facilitators
Practitioner 5 Zie PI planning (combineren)
Practitioner 6 Starten op 1 lokatie, minstens 2 PI's
Practitioner 6 Goede tooling comm
Practitioner 7 Goede communicatiemiddelen
Practitioner 7 Mensen invliegen om het samen op 1 lokatie te doen
Practitioner 7 Verschillende onderdelen van I&A door verschillende teams/locaties te laten
doen
Practitioner 8 Collaboration tooling
Practitioner 10 Invliegen (combi PI-planning)
Practitioner 10 Video faciliteiten
Practitioner 10 Wisslende locatie voor uitzending (rouleren)
Practitioner 11 I&A per locatie, delen uitkomsten door vertegenwoordigers van elke locatie

Table 179: Practitioner focus group - 4. Implementing 1-2-3 - individual solutions

Name Solution
Practitioner 1 Iedereen invliegen
Practitioner 1 Treden regionaal splitsen
Practitioner 1 Competing ART naast bestaande keten
Practitioner 3 Define implementation plan with roles and responsibility
Practitioner 3 Align different steps with all parties
Practitioner 4 Good trainers at all sites
Practitioner 4 Engaged leadership at all sites
Practitioner 4 Train all sites same timeframe
Practitioner 5 Zorg dat Key roles (bv SPC) dezelfde training op zelfde plaats en tijd krijgen
Practitioner 6 Starten op 1 lokatie, minstens 2 PI's
Practitioner 6 Op elke lokatie een Agile coach, die elkaar kennen, vertrouwen en alignment
kunnen vinden in samenwerking
Practitioner 7 Implementing Guild opzetten - Community of Impl. Practice
Practitioner 7 Automated testing & continuous deployment implementeren
Practitioner 8 Rotating teams & roles
Practitioner 8 Team x period local before going distributed

Practitioner 8 Collaboration tooling
Practitioner 9 Indien geen ketenafhankelijkheid kunnen devops zelf otap-testen +
implementeren-> dan wel goed omgevingenbeheer inrichten
Practitioner 10 Training op 1 lokatie voor nw trein
Practitioner 10 Opleidingsteam laten circuleren (routine, naar elkaar toe)
Practitioner 10 Geen video toepassen!
Practitioner 11 SPC's opgeleid op 1 locatie, met zelfde training
Practitioner 11 SPC's kennen cultuur en specifieke problemen van de locatie/regio waar ze de
implementatie doen

Table 180: Practitioner focus group - 32. Feature - individual solutions

Name Solution
Practitioner 1 Invliegen
Practitioner 1 Aantal teams beperken
Practitioner 1 Goede refinement
Practitioner 1 PI Planning
Practitioner 2 Train feature engineers in "Cultural Aspects"
Practitioner 2 Extend time spent "per feature" during PI planning
Practitioner 3 Make sure scope of feature is clear
Practitioner 3 Define input feature on each part
Practitioner 3 Define roles different people
Practitioner 3 Make correct stories and define owner
Practitioner 3 Make sure all stories are defined
Practitioner 3 Visit each others Daily Scrums
Practitioner 4 Clear acceptance criteria
Practitioner 4 Feature should be ready before planning it
Practitioner 4 Small features
Practitioner 5 Tooling (shared)
Practitioner 5 Extra aandacht voor scherpe en "just enough" acceptance criteria
Practitioner 6 Sharing tooling voor zowel communicatie als werkverdeling & planning
Practitioner 7 Goede Scrum of Scrums met goede comm. middelen
Practitioner 7 Goede afspraken maken wat ownership van Feature inhoudt
Practitioner 7 Goede comm.tools
Practitioner 8 Collaboration tooling
Practitioner 8 Extra Scrum of Scrums sessions
Practitioner 9 BIA's laten bepalen en werkpakketten bij BO-ers neerleggen
Practitioner 10 Afstemming tussen teams schedulen, ook tussen demo's
Practitioner 10 Encourage directe contacten tussen teams
Practitioner 11 Maak features zo klein mogelijk, zodat ze door ze door teams op 1 locatie kunnen
worden uitgevoerd

Table 181: Practitioner focus group - 25. Shared Services - individual solutions

Name Solution
Practitioner 1 Feature kamp bouwen met same focus

Practitioner 1 IBM Watson
Practitioner 1 Vliegen
Practitioner 1 Tijdelijk meedraaien in teams
Practitioner 2 Increase scrum practices in these teams
Practitioner 3 Make sure contact persons are known to all
Practitioner 3 Define and document Shared Services
Practitioner 4 Good agreements
Practitioner 4 Involvement during planning events
Practitioner 5 Zoals voor alle distributed teams zou moeten gelden: zorg voor "mental
closeness" (Jurgen Appelo) pas tools toe waarbij remote werkers elkaar beter leren
kennen
Practitioner 5 Verhoog reisbudget: shared service mensen brengen fysiek tijd door op verschillende
lokaties
Practitioner 6 Op alle lokaties 1 PI samenzitten
Practitioner 7 Shared Services team moet zich bij alles goed inleven in de ervaring/wens van
verschillende locaties
Practitioner 8 Collaboration tooling
Practitioner 9 Duidelijke afspraken maken waar te vinden hoe te onderhouden + gebruiken

Table 182: Practitioner focus group - 16. PI planning - individual solutions - translated

Expert Solution
Practitioner 1 Fly everyone to one location
Practitioner 1 Create feature camps within the train
Practitioner 1 Microsoft Halo
Practitioner 1 Do the PI planning more often with smaller iterations
Practitioner 2 Better preparation on feature level
Practitioner 2 Increase common understanding of Value Stream by team members
Practitioner 2 Train RTE
Practitioner 2 Train Feature Engineers
Practitioner 2 Longer PI planning
Practitioner 2 More reconciliation between teams during PI planning
Practitioner 2 Interim demos during PI planning
Practitioner 3 Make sure all possible features are clear to everyone upfront
Practitioner 3 Have preliminary meeting in the weeks before PI day
Practitioner 3 Foresee needed infrastructure & test upfront
Practitioner 4 Good audio/video
Practitioner 4 RTE per site
Practitioner 4 Aligned tooling for feature/user story tracking, program board
Practitioner 5 Increased travel budget: % of the teams travel
Practitioner 5 Video conferencing
Practitioner 5 Tooling for PI planning
Practitioner 6 Start at 1 location, for at least 2 PIs
Practitioner 6 Good tooling for communication & planning / Kanban Board
Practitioner 7 Good communication tooling

Practitioner 7 Fly everyone to one location to let them do it together
Practitioner 7 Get members temporarily on-site to experience at least 1 PI planning
Practitioner 8 Collaboration - tooling
Practitioner 8 Take more time to prepare alignments
Practitioner 9 Have an extra meeting to manage dependencies and priorities
Practitioner 10 Fly everyone to one location for PI planning (expensive!)
Practitioner 10 Have perfect video tooling
Practitioner 10 Professional moderator
Practitioner 10 Digital resources, such as white boards?
Practitioner 11 Fly everyone to one location for PI planning
Practitioner 11 Part of the PI planning with the whole train, using digital communication tooling;
part of the PI planning per location
Practitioner 11 A number of mini-PIs, and afterwards the big full PI with the whole train or
representatives of each location

Table 183: Practitioner focus group - 15. Inspect & Adapt - individual solutions - translated

Name Solution
Practitioner 1 Fly everyone to one location
Practitioner 2 Logical consequence of improvements on PI planning
Practitioner 3 Involve everyone in this process
Practitioner 4 Good audio/video
Practitioner 4 RTE/organizer per site
Practitioner 4 Well-trained facilitators
Practitioner 5 See PI planning (combine)
Practitioner 6 Start at 1 location, for at least 2 PIs
Practitioner 6 Good tooling for communication
Practitioner 7 Good communication tooling
Practitioner 7 Fly everyone to one location to let them do it together
Practitioner 7 Let different parts of I&A be done by different teams/locations
Practitioner 8 Collaboration tooling
Practitioner 10 Fly everyone to one location (combine with PI-planning)
Practitioner 10 Video tooling
Practitioner 10 Rotate location
Practitioner 11 I&A per location, share results by representatives of each location

Table 184: Practitioner focus group - 4. Implementing 1-2-3 - individual solutions - translated

Name Solution
Practitioner 1 Fly everyone to one location
Practitioner 1 Split based on region
Practitioner 1 Competing ART next to existing chain
Practitioner 3 Define implementation plan with roles and responsibility
Practitioner 3 Align different steps with all parties
Practitioner 4 Good trainers at all sites

Practitioner 4 Engaged leadership at all sites
Practitioner 4 Train all sites same timeframe
Practitioner 5 Ensure that key roles (e.g. SPC) get the same training at the same place and time
Practitioner 6 Start co-located at 1 location, for a duration of at least 2 PIs
Practitioner 6 An Agile coach on each location. These coaches should know and trust each other,
and be able to align their work
Practitioner 7 Set up an implementation guild - a Community of Implementation Practice
Practitioner 7 Implement automated testing & continuous deployment
Practitioner 8 Rotating teams & roles
Practitioner 8 Team x period local before going distributed
Practitioner 8 Collaboration tooling
Practitioner 9 If there is no chain dependency, DevOps can do the OTAP testing & deployment
themselves -> good environment management should then be set up
Practitioner 10 Training on one location for new train
Practitioner 10 Rotate the training team around the locations
Practitioner 10 Don’t use video to train!
Practitioner 11 SPCs should be trained at 1 location, in the same training
Practitioner 11 SPCs know the culture & problems of the location/region where they are implementing
SAFe

Table 185: Practitioner focus group - 32. Feature - individual solutions - translated

Name Solution
Practitioner 1 Fly everyone to one location
Practitioner 1 Limit the number of teams
Practitioner 1 Good refinement
Practitioner 1 PI Planning
Practitioner 2 Train feature engineers in "Cultural Aspects"
Practitioner 2 Extend time spent "per feature" during PI planning
Practitioner 3 Make sure scope of feature is clear
Practitioner 3 Define input feature on each part
Practitioner 3 Define roles for the different people
Practitioner 3 Make correct stories and define owner
Practitioner 3 Make sure all stories are defined
Practitioner 3 Visit each other’s Daily Scrums
Practitioner 4 Clear acceptance criteria
Practitioner 4 Feature should be ready before planning it
Practitioner 4 Small features
Practitioner 5 Tooling (shared)
Practitioner 5 Extra focus on well-defined and “just enough” acceptance criteria
Practitioner 6 Shared tooling for both communication and workload division & planning
Practitioner 7 Good Scrum of Scrums with good communication tools
Practitioner 7 Make good agreements on what ownership of the feature is
Practitioner 7 Good communication tools
Practitioner 8 Collaboration tooling

Practitioner 8 Extra Scrum of Scrums sessions
Practitioner 9 Have the BIAs determined and place the work packages with the BOs
Practitioner 10 Schedule when teams align, including demos
Practitioner 10 Encourage direct contact between teams
Practitioner 11 Make features as small as possible, so that they can be done by teams at a single location

Table 186: Practitioner focus group - 25. Shared Services - individual solutions - translated

Name Solution
Practitioner 1 Create a feature camp with the same focus
Practitioner 1 IBM Watson
Practitioner 1 Fly everyone to one location
Practitioner 1 Join the teams temporarily
Practitioner 2 Increase scrum practices in these teams
Practitioner 3 Make sure contact persons are known to all
Practitioner 3 Define and document Shared Services
Practitioner 4 Good agreements
Practitioner 4 Involvement during planning events
Practitioner 5 As should hold for all distributed teams: ensure “mental closeness” (Jurgen Appelo); use
tools that help remote workers get to know one another better
Practitioner 5 Increase the travel budget: Shared Services people should physically spend time at the
different locations
Practitioner 6 For 1 PI, co-locate at every location
Practitioner 7 The Shared Services team must in everything empathize with the experiences/wishes of the
different locations
Practitioner 8 Collaboration tooling
Practitioner 9 Clear agreements on where to find the shared services, and how to maintain and use them

Appendix BB Practitioner focus group: group solutions
round 3
As most participants felt more at ease when writing in Dutch, they were allowed to write down their
answers in Dutch; these can be found in Table 187 to Table 191. The translated versions can be found
in Table 192 to Table 196.
Table 187: Practitioner focus group - 16. PI planning - group solutions

Group Solution
Hanneke Features ready voor de planning sessie met de distr. teams
Hanneke Good audio/video Tooling
Peter Minimal 1 x period PI Planning on-site with all teams
Peter Communication Tools

Table 188: Practitioner focus group - 15. Inspect & Adapt - group solutions

Group Solution
Hanneke Good audio/video Tooling
Hanneke Eerste op 1 locatie
Peter Comms tools
Peter Divide the I&A topics over the locations

Table 189: Practitioner focus group - 4. Implementing 1-2-3 - group solutions

Group Solution
Hanneke Aligned coaches op verschillende locaties
Hanneke Coaches kennen cultuur en specifieke problemen locatie
Peter 1 team to coordinate the trainings/Impl
Peter Strong Vision promoted top down

Table 190: Practitioner focus group - 32. Feature - group solutions

Group Solution
Hanneke Gemeenschappelijk begrip
Hanneke Small features
Peter Keep co-ordinating dependencies (Scrum of Scrums)
Peter Make sure that a Feature has 1 owner to manage dependencies

Table 191: Practitioner focus group - 25. Shared Services - group solutions

Group Solution
Hanneke Geven Commitment
Hanneke Involvement during planning events
Peter Distribute features by shared service impact (focus areas)

Peter 1 x per period visit each team visibility

Table 192: Practitioner focus group - 16. PI planning - group solutions - translated

Group Solution
Hanneke Features ready before PI planning
Hanneke Good audio/video Tooling
Peter Minimal 1 x period PI Planning on-site with all teams
Peter Communication Tools

Table 193: Practitioner focus group - 15. Inspect & Adapt - group solutions - translated

Group Solution
Hanneke Good audio/video Tooling
Hanneke First I&A meeting on 1 location
Peter Communication Tools
Peter Divide the I&A topics over the locations

Table 194: Practitioner focus group - 4. Implementing 1-2-3 - group solutions - translated

Group Solution
Hanneke Aligned coaches on different locations
Hanneke Coaches know culture & problems of location
Peter 1 team to coordinate the trainings/Impl
Peter Strong Vision promoted top down

Table 195: Practitioner focus group - 32. Feature - group solutions - translated

Group Solution
Hanneke Have common understanding on features
Hanneke Small features
Peter Keep co-ordinating dependencies (Scrum of Scrums)
Peter Make sure that a Feature has 1 owner to manage dependencies

Table 196: Practitioner focus group - 25. Shared Services - group solutions - translated

Group Solution
Hanneke Services give commitment
Hanneke Involvement during planning events
Peter Distribute features by shared service impact (focus areas)
Peter 1 x per period visit each team visibility

Appendix CC Practitioner focus group: individual votes
round 4
Table 197: Practitioner focus group - votes on solutions - Practitioner 1

Solution Difficulty Impact


16.1 Features ready before PI planning
16.2 Minimal 1 x period PI Planning on-site with all teams 2 3
16.3 Good communication tools 1

15.1 Good Communication tools


15.2 Divide the topics over the locations 1 1
15.3 First I&A meeting on 1 location 2 2

4.1 Have 1 team to coordinate the trainings & implementation 1


4.2 Coaches know culture & problems of location
4.3 Strong vision promoted top down 3 2

32.1 Keep coordinating dependencies (Scrum of Scrums) 1


32.2 Have common understanding on features 2
32.3 Make sure that each feature has 1 owner to manage dependencies 1 1
32.4 Small features 2 1

25.1 Distribute features by shared service impact (focus areas) 1 2


25.2 Services give commitment
25.3 1 x per period visit each team visibility 3 2
25.4 Involvement during planning events

Table 198: Practitioner focus group - votes on solutions - Practitioner 2

Solution Difficulty Impact


16.1 Features ready before PI planning 2 1
16.2 Minimal 1 x period PI Planning on-site with all teams
16.3 Good communication tools 1 2

15.1 Good Communication tools 3 3


15.2 Divide the topics over the locations
15.3 First I&A meeting on 1 location

4.1 Have 1 team to coordinate the trainings & implementation


4.2 Coaches know culture & problems of location
4.3 Strong vision promoted top down 3 3

32.1 Keep coordinating dependencies (Scrum of Scrums)

32.2 Have common understanding on features 4 4
32.3 Make sure that each feature has 1 owner to manage dependencies
32.4 Small features

25.1 Distribute features by shared service impact (focus areas)


25.2 Services give commitment 4 4
25.3 1 x per period visit each team visibility
25.4 Involvement during planning events

Table 199: Practitioner focus group - votes on solutions - Practitioner 3

Solution Difficulty Impact


16.1 Features ready before PI planning 2 2
16.2 Minimal 1 x period PI Planning on-site with all teams
16.3 Good communication tools 1 1

15.1 Good Communication tools 1 2


15.2 Divide the topics over the locations 2 1
15.3 First I&A meeting on 1 location

4.1 Have 1 team to coordinate the trainings & implementation


4.2 Coaches know culture & problems of location 2 2
4.3 Strong vision promoted top down 1 1

32.1 Keep coordinating dependencies (Scrum of Scrums) 3 3


32.2 Have common understanding on features 1 1
32.3 Make sure that each feature has 1 owner to manage dependencies
32.4 Small features

25.1 Distribute features by shared service impact (focus areas) 1


25.2 Services give commitment 1 2
25.3 1 x per period visit each team visibility
25.4 Involvement during planning events 2 2

Table 200: Practitioner focus group - votes on solutions - Practitioner 4

Solution Difficulty Impact


16.1 Features ready before PI planning 1
16.2 Minimal 1 x period PI Planning on-site with all teams 2
16.3 Good communication tools 2 1

15.1 Good Communication tools 2 1


15.2 Divide the topics over the locations 1
15.3 First I&A meeting on 1 location 2

4.1 Have 1 team to coordinate the trainings & implementation 1 1
4.2 Coaches know culture & problems of location 1 2
4.3 Strong vision promoted top down 1

32.1 Keep coordinating dependencies (Scrum of Scrums) 1 2


32.2 Have common understanding on features
32.3 Make sure that each feature has 1 owner to manage dependencies 2 1
32.4 Small features 1 1

25.1 Distribute features by shared service impact (focus areas)


25.2 Services give commitment 1 2
25.3 1 x per period visit each team visibility 1
25.4 Involvement during planning events 2 2

Table 201: Practitioner focus group - votes on solutions - Practitioner 5

Solution Difficulty Impact


16.1 Features ready before PI planning 2
16.2 Minimal 1 x period PI Planning on-site with all teams 1 2
16.3 Good communication tools 1

15.1 Good Communication tools 1 2


15.2 Divide the topics over the locations
15.3 First I&A meeting on 1 location 2 1

4.1 Have 1 team to coordinate the trainings & implementation 2 2


4.2 Coaches know culture & problems of location 1
4.3 Strong vision promoted top down 1

32.1 Keep coordinating dependencies (Scrum of Scrums)


32.2 Have common understanding on features
32.3 Make sure that each feature has 1 owner to manage dependencies 2 2
32.4 Small features 2 2

25.1 Distribute features by shared service impact (focus areas)


25.2 Services give commitment
25.3 1 x per period visit each team visibility 2 2
25.4 Involvement during planning events 2 2

Table 202: Practitioner focus group - votes on solutions - Practitioner 6

Solution Difficulty Impact


16.1 Features ready before PI planning 2 1
16.2 Minimal 1 x period PI Planning on-site with all teams 1
16.3 Good communication tools 1 1

15.1 Good Communication tools 2 1
15.2 Divide the topics over the locations
15.3 First I&A meeting on 1 location 1 2

4.1 Have 1 team to coordinate the trainings & implementation 1 2


4.2 Coaches know culture & problems of location
4.3 Strong vision promoted top down 2 1

32.1 Keep coordinating dependencies (Scrum of Scrums)


32.2 Have common understanding on features
32.3 Make sure that each feature has 1 owner to manage dependencies 2 2
32.4 Small features 2 2

25.1 Distribute features by shared service impact (focus areas) 3 1


25.2 Services give commitment
25.3 1 x per period visit each team visibility
25.4 Involvement during planning events 1 3

Table 203: Practitioner focus group - votes on solutions - Practitioner 7

Solution Difficulty Impact


16.1 Features ready before PI planning 2
16.2 Minimal 1 x period PI Planning on-site with all teams 1
16.3 Good communication tools 1 2

15.1 Good Communication tools 1 2


15.2 Divide the topics over the locations 2 1
15.3 First I&A meeting on 1 location

4.1 Have 1 team to coordinate the trainings & implementation 2 2


4.2 Coaches know culture & problems of location
4.3 Strong vision promoted top down 1 1

32.1 Keep coordinating dependencies (Scrum of Scrums) 1 2


32.2 Have common understanding on features 1
32.3 Make sure that each feature has 1 owner to manage dependencies 2 2
32.4 Small features

25.1 Distribute features by shared service impact (focus areas) 2 2


25.2 Services give commitment
25.3 1 x per period visit each team visibility 2
25.4 Involvement during planning events 2

Table 204: Practitioner focus group - votes on solutions - Practitioner 8

Solution Difficulty Impact


16.1 Features ready before PI planning 3 2
16.2 Minimal 1 x period PI Planning on-site with all teams 1
16.3 Good communication tools

15.1 Good Communication tools 1


15.2 Divide the topics over the locations 1 2
15.3 First I&A meeting on 1 location 2

4.1 Have 1 team to coordinate the trainings & implementation 1 1


4.2 Coaches know culture & problems of location 1 1
4.3 Strong vision promoted top down 1 1

32.1 Keep coordinating dependencies (Scrum of Scrums) 1 1


32.2 Have common understanding on features
32.3 Make sure that each feature has 1 owner to manage dependencies 2 2
32.4 Small features 1 1

25.1 Distribute features by shared service impact (focus areas) 1 2


25.2 Services give commitment 2 1
25.3 1 x per period visit each team visibility 1 1
25.4 Involvement during planning events

Table 205: Practitioner focus group - votes on solutions - Practitioner 9

Solution Difficulty Impact


16.1 Features ready before PI planning 1
16.2 Minimal 1 x period PI Planning on-site with all teams 2 2
16.3 Good communication tools 1

15.1 Good Communication tools 2


15.2 Divide the topics over the locations 1 1
15.3 First I&A meeting on 1 location 2

4.1 Have 1 team to coordinate the trainings & implementation 1


4.2 Coaches know culture & problems of location 2 1
4.3 Strong vision promoted top down 2 1

32.1 Keep coordinating dependencies (Scrum of Scrums)


32.2 Have common understanding on features 2 2
32.3 Make sure that each feature has 1 owner to manage dependencies 1
32.4 Small features 2 1

25.1 Distribute features by shared service impact (focus areas) 1 2
25.2 Services give commitment 1
25.3 1 x per period visit each team visibility 2 1
25.4 Involvement during planning events 1

Table 206: Practitioner focus group - votes on solutions - Practitioner 10

Solution Difficulty Impact


16.1 Features ready before PI planning
16.2 Minimal 1 x period PI Planning on-site with all teams 2 2
16.3 Good communication tools 1 1

15.1 Good Communication tools 1


15.2 Divide the topics over the locations
15.3 First I&A meeting on 1 location 2 3

4.1 Have 1 team to coordinate the trainings & implementation


4.2 Coaches know culture & problems of location
4.3 Strong vision promoted top down 3 3

32.1 Keep coordinating dependencies (Scrum of Scrums) 2 1


32.2 Have common understanding on features
32.3 Make sure that each feature has 1 owner to manage dependencies 1 1
32.4 Small features 1 2

25.1 Distribute features by shared service impact (focus areas)


25.2 Services give commitment 2 3
25.3 1 x per period visit each team visibility 1
25.4 Involvement during planning events 1 1

Table 207: Practitioner focus group - votes on solutions - Practitioner 11

Solution Difficulty Impact


16.1 Features ready before PI planning 1 1
16.2 Minimal 1 x period PI Planning on-site with all teams 2
16.3 Good communication tools 2

15.1 Good Communication tools 1 1


15.2 Divide the topics over the locations 2
15.3 First I&A meeting on 1 location 2

4.1 Have 1 team to coordinate the trainings & implementation 1 2


4.2 Coaches know culture & problems of location 1
4.3 Strong vision promoted top down 1 1

32.1 Keep coordinating dependencies (Scrum of Scrums) 2 1
32.2 Have common understanding on features
32.3 Make sure that each feature has 1 owner to manage dependencies 2 1
32.4 Small features 2

25.1 Distribute features by shared service impact (focus areas) 1 1


25.2 Services give commitment 1 1
25.3 1 x per period visit each team visibility
25.4 Involvement during planning events 2 2

