
How Mature are You?

An Exploratory Investigation on Digital


Maturity and the Effects Management
Teams Have on Digital Transformation
Oscar Gustin, Victor Hellholm

Department of Business Administration


Master’s Program in Management
Master’s Thesis in Business Administration III, 30 Credits, Spring 2022
Supervisor: Vladimir Vanyushyn
Acknowledgements
We would like to thank the person who made us realize that this study was needed and that
the field of digital maturity needs further investigation. We would also like to acknowledge
all of the participants who provided us with their useful insights and knowledge within the
field. We have had some great discussions and have gained a great deal from them.

We would also like to thank our supervisor Vladimir Vanyushyn for his support in
developing this thesis; the insight and support given have been essential for the success of
this project.

May 12th, 2022


Umeå School of Business, Economics, and Statistics
Umeå University

Oscar Gustin Victor Hellholm


Abstract
With the rapid growth of industry 4.0 and the digital age, we can see that the use of digital tools,
systems and solutions is becoming more standardized in all sectors. We are currently
witnessing these tools becoming a much more integral part of future industry, thereby
putting pressure on current organizations to adapt. Digital maturity is a phenomenon that has
gained an abundance of exposure in recent years, and the importance of this phenomenon grows
in relation to the growth of the digital environment. According to academia, questions about
companies’ digital status have recently become topical. Digital maturity can act as a portal for
possibilities and change: for example, it enables exploiting opportunities, implementing technological
processes and averting business risks that stem from advanced technologies.

Academic research in this field mainly concentrates on quantitative aspects, and we have therefore
engaged in a qualitative approach in order to fill the gap within this field of research.
Moreover, we have identified that studies and contemporary assessment models do not consider
the importance of management and the aspect of change to the necessary extent. Academia mentions
the importance of converting digital maturity assessments to company actions through
systematic engagement from managers. However, most studies in the field have focused either on
giving overviews of different perspectives on assessments or on answering questions regarding
the success rate of digital transformations. We believe that management and change management
are key components of digital maturity and transformational success. A qualitative approach is
therefore suitable because of the exploratory nature of our interests.

To gather data, we conducted interviews with participants chosen through a purposive sampling
technique. The purposive sampling method was chosen to achieve a better and more qualitative
result since the participants all had expertise within the field of digital development, digital
maturity measurements and management. The interview questions were based on theories in our
conceptual framework and served as a foundation for the interview guide. The intention was to
see if our initial theories on digital maturity transformation would match the views of the
participants. By using a thematic analysis method, we connected the participants’ answers to our
identified framework themes, potentially altering our conceptual framework. The original
components of our conceptual framework were the importance of management and change
management, which included theories on organization and people, the change kaleidoscope, the
theory of constraints, the PDSA cycle and the RACI model. The results of the study showed that our
predicted components, the importance of management and change management, were essential for
the success of digital maturity. Our underlying theories also matched the participants’ views.
However, new theories and components were added based on the empirical findings. These
were: long-term solutions and strategies, strong and motivating leadership, the project triangle and
communication plans. Our study indicates that management and change management can be
success factors in the process of becoming more digitally mature.

Keywords: Digital maturity, digital maturity assessments, digital transformation, management,


leadership, change management, organization, people, change kaleidoscope, theory of
constraints, PDSA-cycle, RACI-model, project triangle, communication plans, top management
teams (TMT)

Table of contents
1. INTRODUCTION ................................................................................................................................................... 1
1.1 DIGITAL AGE AND INDUSTRY 4.0.......................................................................................................................... 1
1.2 DIGITAL MATURITY ............................................................................................................................................. 2
1.2.1 Digital Maturity Assessments ....................................................................................................................... 4
1.3 IMPORTANCE OF MANAGEMENT ........................................................................................................................... 7
1.3.1 Change management ................................................................................................................................... 7
1.3.2 TMT (Top Management Team)..................................................................................................................... 8
1.4 PROBLEM BACKGROUND ...................................................................................................................................... 9
1.5 RESEARCH GAP ................................................................................................................................................. 10
1.6 RESEARCH QUESTIONS AND PURPOSE OF STUDY ................................................................................................. 11
1.7 DELIMITATIONS ................................................................................................................................................. 11
2. THEORETICAL FRAMEWORK ........................................................................................................................ 13
2.1 DEFINING DIGITAL MATURITY ........................................................................................................................... 13
2.2 MAIN DIGITAL MATURITY CHARACTERISTICS .................................................................................................... 14
2.2.1 Organization ............................................................................................................................................. 15
2.2.2 People ....................................................................................................................................................... 16
2.3 ORGANIZATIONAL CHANGE................................................................................................................................ 16
2.4 CONTINUOUS IMPROVEMENT .............................................................................................................................. 17
2.4.1 The Theory of Constraints (TOC) ............................................................................................................... 17
2.4.2 PDSA-Cycle (Continuous improvement theory/model) ................................................................................ 18
2.4.3 RACI Model .............................................................................................................................................. 19
2.5 LITERATURE REVIEW ......................................................................................................................................... 22
2.6 CONCEPTUAL FRAMEWORK ................................................................................................................................ 25
3. SCIENTIFIC METHODOLOGY ......................................................................................................................... 26
3.1 RESEARCH PHILOSOPHY ..................................................................................................................................... 26
3.1.1 Ontology ................................................................................................................................................... 26
3.1.2 Epistemology ............................................................................................................................................. 27
3.1.3 Axiology .................................................................................................................................................... 28
3.2 RESEARCH APPROACH ....................................................................................................................................... 28
4. RESEARCH METHODOLOGY .......................................................................................................................... 30
4.1 RESEARCH DESIGN ............................................................................................................................................ 30
4.2 DATA COLLECTION ............................................................................................................................................ 32
4.2.1 Semi-structured Interviews......................................................................................................................... 32
4.2.2 Interview Guide ......................................................................................................................................... 33
4.3 DATA SAMPLING ............................................................................................................................................... 36
4.3.1 Sample Size ............................................................................................................................................... 36
4.3.2 Sampling technique.................................................................................................................................... 37
4.3.3 Presentation of participants ....................................................................................................................... 38
4.4 ANALYSIS OF COLLECTED DATA ........................................................................................................................ 39
4.5 ETHICAL CONSIDERATIONS ................................................................................................................................ 40
4.6 TRUTH CRITERIA ............................................................................................................................................... 42
5. EMPIRICAL FINDINGS...................................................................................................................................... 44

5.1 DIGITAL MATURITY ........................................................................................................................................... 44
5.2 DIGITAL MATURITY ASSESSMENTS .................................................................................................................... 49
5.3 IMPORTANCE OF MANAGEMENT ......................................................................................................................... 52
5.4 CHANGE MANAGEMENT ..................................................................................................................................... 54
6. DISCUSSION ........................................................................................................................................................ 59
6.1 DIGITAL MATURITY ........................................................................................................................................... 59
6.2 DIGITAL MATURITY ASSESSMENTS .................................................................................................................... 60
6.3 IMPORTANCE OF MANAGEMENT ......................................................................................................................... 61
6.3.1 Strong and Motivating Leadership ............................................................................................................. 61
6.3.2 Short-term Solutions .................................................................................................................................. 62
6.4 CHANGE MANAGEMENT ..................................................................................................................................... 62
6.4.1 Resource Planning..................................................................................................................................... 62
6.4.2 Follow-up.................................................................................................................................................. 63
6.4.3 Communication ......................................................................................................................................... 64
6.4.4 Prioritizing ................................................................................................................................................ 64
6.4.5 Foundation for Continuous Improvements.................................................................................................. 64
6.5 REVISED AND DEVELOPED CONCEPTUAL FRAMEWORK ....................................................................................... 66
7. CONCLUSION...................................................................................................................................................... 68
7.1 GENERAL CONCLUSIONS .................................................................................................................................... 68
7.2 CONTRIBUTIONS TO LITERATURE ....................................................................................................................... 70
7.3 MANAGERIAL IMPLICATIONS .............................................................................................................................. 70
7.4 SOCIETAL IMPLICATIONS .................................................................................................................................... 71
7.5 LIMITATIONS AND FURTHER RESEARCH .............................................................................................................. 71
REFERENCES.......................................................................................................................................................... 73
APPENDIX 1: SUCCESS RATE OF DIGITAL TRANSFORMATIONS BY KEY FACTORS ............................ 78

List of figures
FIGURE 1. A DIGITAL MATURITY ASSESSMENT USING A CLUSTER-BASED APPROACH ...................... 5
FIGURE 2. EXAMPLE OF ANOTHER DIGITAL MATURITY ASSESSMENT ................................................... 6
FIGURE 3. AN EXAMPLE OF ASSIGNING ROLES IN A RACI MODEL......................................................... 20
FIGURE 4. INITIAL CONCEPTUAL FRAMEWORK ........................................................................................ 25
FIGURE 5. REVISED AND DEVELOPED CONCEPTUAL FRAMEWORK. ..................................................... 66

List of tables
TABLE 1. OVERVIEW AND SUMMARY OF RESEARCH QUESTIONS AND CONNECTED PURPOSES .... 11
TABLE 2. SELECTED DEFINITION OF DIGITAL MATURITY. ...................................................................... 13
TABLE 3. THE FIVE MAIN DIGITAL MATURITY CHARACTERISTICS. ...................................................... 14
TABLE 4. THE RACI-MODEL 5-STEP PROCESS ............................................................................................. 21
TABLE 5. LITERATURE REVIEW .................................................................................................................... 24
TABLE 6. INTERVIEW GUIDE.......................................................................................................................... 35

1. Introduction
1.1 Digital Age and Industry 4.0
The industrial revolution has taken many different steps, from the early stages of steam power
and the introduction of mechanical manufacturing systems. The industry later evolved to the mass
use of electrical power and further towards the age of information of the 21st century. Now the
world is rapidly moving towards the standard of CPS (Cyber-Physical Systems), thus changing the
industry standard for cooperation between humans and artificial intelligence. The term industry
4.0 refers to the fourth stage of the industrial revolution, which we are currently in or beginning to
move into within some sectors (Hermann et al., 2016). This stage has already been implemented in
some areas and is expected to become a much more integral part of future industry. Industry 4.0 is
mainly represented by CPS, the Internet of Things (IoT) and cloud computing (Jasperneite 2012;
Kagermann, Wahlster, and Helbig 2013, cited in Xu et al., 2018, p. 2942).

The Internet of Things (IoT) refers to identifiable and interconnectable objects using some sort of
radio-frequency identification, usually referred to as RFID (Ashton 2009; Xu, He, and Li 2014).
The classification IoT has later been used to define different equipment connected to the
internet, such as sensors, actuators, and, in connection with Global Positioning Systems (GPS),
mobile devices that operate via Wi-Fi, Bluetooth, cellular networks or near field communication
(NFC) (Xu et al., 2018, p. 2944). With the use of IoT, companies can move to the new standard
for industry 4.0, since it is mainly focused on the use of cognitive computing techniques that
work along with industrial IoT (IIoT) applications. Advancements in this field have given
companies the possibility to build more integrated systems and to use IoT-enabled sensors and
other kinds of measurements that can be monitored and overseen in real time. This evolution
has affected everything from production, distribution and transportation to service and maintenance
in the industry context (Tao et al., 2016, cited in Xu et al., 2018, p. 2945).

Cloud computing is another part of industry 4.0. This technology can offer high performance for
a low cost (Zheng et al. 2014; Mitra et al. 2017). Using virtualization technology, cloud
computing can be used to share resources between computers and distribute the power to give
more efficient allocation for each use case. Data can be distributed and stored at localized setups
and used via access portals that are connected via the internet. In regard to industry 4.0, cloud
computing can move the flexibility of performing calculations and tasks from the localized
factories and company locations to more efficient server halls. These smart allocation systems
can be modular and flexible based on the cycle and variations within different processes (Xu et
al., 2018, p. 2947).

The last part of industry 4.0 is the interconnection of physical and digital systems, cyber-physical
systems (CPS). This is the main part of industry 4.0 (Lee et al., 2015). CPS are built and
designed with the goal of seamless integration between algorithms and physical components,
such as human interaction. The scaling of this aspect of industry 4.0 will enable adaptability,
capability, scalability, resiliency and safety as well as security. Broad implementation of this will
also enable a level of usability that will far exceed the simple embedded systems that we use
today (Xu et al., 2018, p. 2947).

1.2 Digital Maturity


Maturity can be defined as “the state of being complete, perfect or ready” (Simpson & Weiner,
1989). In that regard, maturity is an evolutionary progress and can demonstrate a specific ability
or accomplishment. In the fields of Information Systems (IS) and Management Science, maturity
models (MM) are continuously being improved and work as an informed approach towards
steady improvements (Lahrmann et al., 2011, p. 176). In the IS field, maturity is more regarded as
a measure to evaluate the capabilities of an organization (Rosemann & De Bruin, 2005).
Maturity models are often used within areas such as software engineering, quality management
or project management (Chanias & Hess, 2016, p. 3). The models are often used in a broad
capacity and have the purpose of standardizing improvement and optimizing processes. Maturity
models describe to which degree a company has reached different process goals and
improvements; these models also give the user the ability to see if they have reached perfection or
completion in different categories (Lahrmann et al., 2011, p. 177).

In the digital age, we have witnessed the emergence of new digital platforms through various
technologies such as cloud computing, internet of things and big data (Remane et al., 2017, p. 2;
Bharadwaj et al., 2013, p. 475). As a result of the technological exposure, companies from all
types of industries need to alter and adapt to new business models in order to seize new
opportunities and improve operations as the digital market environment keeps expanding
(Remane et al., 2017, p. 2; Kane et al., 2017, p. 3). In light of these changes, questions about
companies' digital status have become topical over recent years (Chanias & Hess, 2016, p. 2).
Most companies are recognizing the power of formulating a digital business strategy where the
use of digital resources could create new IT capabilities and craft new strategies surrounding
products and services (Bharadwaj et al., 2013, p. 474). Therefore, management and other
stakeholders are now more attentive to know how capable their company is of exploiting digital
business opportunities, in order to successfully keep up with the present digital transformation
(Chanias & Hess, 2016, p. 2). Moreover, a company's so-called digital maturity can impact many
aspects of the business depending on the level of maturity.
First and foremost, we need to break down what digital maturity means. To this day, there is no
single definition of digital maturity (Aslanova & Kulichkina, 2014, p. 443). The term has gained
popularity alongside the increasing topicality of digital transformation, which can be defined as
the application and use of modern technologies in the organization’s business (Aslanova &
Kulichkina, 2014, p. 443). According to Gill and VanBoskirk (2016, p. 2) digital maturity is
another way of evaluating and measuring a company’s digital sophistication and readiness. This
includes many dimensions of a business, examining for example the culture, technology and
insights of a company in order to measure their digital maturity. Moreover, there are several
other terms for digital maturity in recent literature that we are aware of. In BCG’s (2022) Digital
Acceleration Index, digital maturity is described as a measure of an organization's ability to
create value through digital means. Meanwhile Salviotti et al. (2019, p. 2) define it as the extent
of the learned ability to adapt to the ongoing digital changes and transformation efforts in an
appropriate manner. Although all of these definitions generally describe the phenomenon of
digital maturity, we have decided to adopt and follow the definition of Chanias and Hess (2016, p. 4),
which adequately defines digital maturity as the status of a company’s digital transformation.
This describes the phenomenon of digital maturity from a more managerial point of view
according to the authors. It describes what a company has already achieved in terms of
performing transformation efforts. Said efforts mostly describe changes in the operation, such as
product and process changes or acquired meta-abilities from successful change processes
(Chanias & Hess, 2016, p. 4).

Digital maturity can be seen as a portal for new possibilities and positive change. It should be
considered necessary for companies in present times, for example through its capability of exploiting
opportunities, implementing technological processes and averting business risks that stem from
advanced technologies (Chanias & Hess, 2016, p. 2; Matt et al., 2015, p. 4). Besides the
overarching impacts of digital maturity, there are micro-level factors that influence a company’s
ability to handle the change of IT-induced business.
Kane et al. (2017, p. 3) conducted a global survey with over 3,500 managers and executives, as
well as 15 interviews with executives and leaders. Through this collection of data, they were able
to recognize five key practices of companies that are developing into or considered as digitally
mature organizations.

1. Implementing systemic changes in how they organize and develop workforces, spur
workplace innovation and cultivate digitally minded cultures and experience -
Shifting to this type of work has previously had substantial impacts on organizational
behavior, corporate culture, leadership and talent recruitment (Kane et al., 2017, p. 3).
2. Playing the long game - Digitally mature organizations often have a longer planning
horizon compared to less digitally mature organizations, where strategy focuses on both
technology and business capabilities. Linking digital strategies to a company’s core
business strategy enables better response to rapidly changing environments (Kane et al.,
2017, p. 3).
3. Scaling small digital experiments into enterprise-wide initiatives that have business
impact - Digitally mature companies are twice as likely to succeed in converting small
digital experiments to enterprise-wide initiatives (Kane et al., 2017, p. 3).
4. Becoming talent magnets - Digitally mature organizations are better at creating
environments that inspire career growth for digital talent. These types of organizations
are better at understanding the value and the need for attracting new talent that will in
turn give them digital skills and experience (Kane et al., 2017, p. 4).
5. Securing leaders with the vision necessary to lead a digital strategy, and willingness
to commit resources to achieve this vision - Defining digital initiatives as part of the
company’s core strategy is something that these types of leaders often do. It is important
to find a leader who fits the criteria for developing a digitally mature company (Kane et
al., 2017, p. 4).

Whilst these practices are in no way definitive for achieving digital maturity, practices like these have
been shown to have a substantial impact on the maturity of a company in relation to digitalization. The
fundamental idea behind “maturing” in this sense is being able to adapt the organization to an
increasingly digital environment. It is derived from the psychological definition of “maturity”,
which is the learned ability to respond to the environment in an appropriate manner (Kane,
2017). Furthermore, it is a continuous process of adaptation in order to compete successfully and
it is not seen as something as simple as implementing new technology to meet expectations
(Kane et al., 2017, p. 5).

1.2.1 Digital Maturity Assessments


With the increasing exposure of digitalization and rise of industry 4.0, organizations shift their
focus to prioritizing a transformation of the business into a more digitally mature one. In order to
make a qualified assessment of a company’s maturity, several models have been created
throughout the years to handle the many areas that need to be assessed. Digital maturity
assessments spawned from the original idea which was to measure the overall maturity of a
company. The breakthrough model that pioneered the maturity assessment is today known as the
Capability Maturity Model (CMM), introduced and engineered by the Software Engineering
Institute at Carnegie Mellon in the early 1990s (De Bruin et al., 2005, p. 9). The efforts to cut
costs, improve quality, gain competitive advantage and reduce time to market were among
some areas that were handled, evaluated and compared by such maturity models (De Bruin et al.,
2005, p. 9).
Furthermore, digital maturity assessment models have become increasingly important
considering that companies need to adapt to the current business climate and respond to the
present threat of digital disruption (Gill & VanBoskirk, 2016).
There is no universal model for digital maturity assessment. Numerous models are used today
which all have different structures, but what they all have in common is that they share the same
general idea. That is, to gain competitive advantage along multiple performance indicators
(BCG, 2022). According to Chanias and Hess (2016, p. 5) the majority of assessment models
base their evaluation on four to five evolutionary maturity levels. Nevertheless, there are others
that base their evaluations differently, for example by using status levels, clusters, archetypes or
dimensions describing digital penetration and characteristics (Chanias & Hess, 2016, p. 5). A
cluster-based approach can be seen as described in Figure 1, which plots various firms' readiness
in a plane defined by digital readiness and impact.

Figure 1. A digital maturity assessment using a cluster-based approach. Retrieved from Remane
et al. (2017).

Regardless of whether you choose to use an assessment model that is built on levels,
dimensions, clusters or archetypes, these models will be very similar in their characterization and
classification of elements. According to Aslanova and Kulichkina (2014, p. 446), the elements
that mainly become characterized are:
1. Strategy
2. Organization
3. People
4. Technologies
5. Data

The later stage of a digital maturity assessment is then to classify or categorize the characteristics
that have been discovered within the different elements of the organization. The overall digital
maturity of an organization will be classified on, for example, a scale from 0 to 5, where 0 would
be “not possible” and 4-5 would be “managed” or “continuous improvement” (De Bruin et al.,
2005, p. 12). Another example is provided by Chanias and Hess (2016, p. 9), where ascending
maturity levels are categorized on a 1-5 scale, where 1 would imply that the organization
belongs in a “testing” phase and 5 would imply that they have “optimized” their digital
transformation and are indeed digitally mature. Figure 2 provides an example of assessing a
digital maturity score, from skeptics to differentiators.

The digital maturity assessment’s purpose is to provide organizations with insight, and it offers a
way to track an object of interest’s progress towards a targeted transformation (Remane et al.,
2017, p. 3). With the help of these progress insights, it is then up to top management and the
organization as a whole to undertake change efforts. According to a survey study made by
McKinsey (2018), success rates for such change efforts are low and the
establishment and goals of the top management team are a key factor influencing
transformational success.

Figure 2. Example of another digital maturity assessment by Forrester, retrieved from Gill and
VanBoskirk (2016).

1.3 Importance of Management
Management has many different components of success, and one of them is efficient goal management
and performance. For reaching goals, it is important to continuously monitor progress and
evaluate what has been done through measurement-based management (Tracy, 2014, p. 12), and
to base decisions on facts (Bergman & Klefsjö, 2012, p. 42). To achieve this, managers need to
focus on supporting processes, activities and resources to achieve set goals (Bergman & Klefsjö,
2012, p. 44). To achieve digital maturity, management plays a huge role in getting the
organization ready for both the digital transformation and the change in structure and processes
needed. Organizations moving towards digital maturity may feel like they need to work more in
cross-functional teams and with a higher degree of agile development to stay successful (Kane et
al., 2017, p. 10). Kane et al. (2017, p. 10) also declare that more than 70% of companies that
are digitally mature use cross-functional teams to organize their work, while only 30% of early-
stage organizations do. This further emphasizes the need for more agile methods if organizations
want to achieve digital maturity. Many companies moving towards digital maturity do not have
the previous or existing talent to succeed. Changing staff in an organization is not
always a fast and easy option, and it is not enough to change the core of the company. Companies
need to focus on building an environment that is founded on continuously developing internal
components and learning from experience (Kane et al., 2017, p. 13).
A continuous improvement approach is essential to keep up with increasing expectations from
customers, as new technical solutions and innovations can change the landscape in any business
sector (Bergman & Klefsjö, 2012, p. 46). This can be done by implementing standards for
continuous improvements through the PDSA “Plan-Do-Study-Act” model as described by
Deming (1993, cited in Bergman & Klefsjö, 2012, p. 46). This model is mainly used for
technical and quality-focused process improvements to ensure that the company always
keeps on improving. It can, however, also be useful when the business environment is changing,
and if used correctly, companies can move their development with the change and therefore have
a higher chance to succeed.

1.3.1 Change management


To drive change, managers need to improve performance in several areas, such as planning,
organizing, leading and controlling, to change current processes to fit the new strategic objectives,
in cooperation between the organization and its workers and with guidance from the company. This can
be defined as change management (Baca, 2005, p. 1; Bergman & Klefsjö, 2012, p. 415-416; Ha,
2014, p. 2; Murthy, 2007, p. 22). Change plays an important role in the business environment
and can affect companies in many different ways. Therefore, it is essential for managers to
understand the changes in the environment and how this affects their business to further be able
to move with the change (Todnem, 2005, p. 378). Bergman and Klefsjö (2012, p. 414) state that
there are four different objectives for being able to make continuous improvements and be a learning
organization that proactively works in changing environments.

The individual personnel of a company play a huge role in overall business development and
change management. Companies should focus on the individual development of their employees
to further extend the knowledge of the company. This is no guarantee but should be used to build
experience and to be used as development for company activities. Individual development is also
used to inspire coworkers to contribute to the organization, give a sense of worth and to promote
new thinking (Bergman & Klefsjö, 2012, p. 414). Learning together is, according to Bergman and
Klefsjö (2012, p. 416), an important step to achieve efficient change management and to see
organizations from the perspective of a team. Learning in a group is a collective skill and
requires good management. A clear vision and plan for innovation in coordination with good
management can lift individual ideas to group-level development. A group of highly skilled
innovators and developers is nothing without backing from the organization (Bergman &
Klefsjö, 2012, p. 416; Kushal et al., 2009, p. 5).

1.3.2 TMT (Top Management Team)


With the expansion of companies comes the move into new ventures, and companies that want
to expand often find themselves in a situation where the process of managing and planning for
every part of the business is too much for one single person. This is where the aspect of Top
Management Teams comes in. Eventually, companies grow to a certain point where the
management teams consist of people with a lot of different expertise in certain fields that need to
cooperate to reach multiple goals. This role differentiation exists in most, if not all top
management groups (Finkelstein et al., 2008, p. 2). The perspective of the CEO is important, but
to understand the overall strategizing group of a company can be beneficial to predict future
organizational outcomes (Finkelstein et al., 2008, p. 3). There are many different roles that make
up the composition of a TMT and all have certain areas of expertise and expectations within the
team. These usually have different titles but can be organized based on the different functional
areas such as: Accounting/finance, administration/legal, human resources, information
systems/technology, marketing, operations, research & development and strategy (Menz, 2012,
p. 76). Conflict between employees and the TMT can sometimes lead to a loss in performance as
a result of mistrust within the company. This can be limited if the TMT views the company
employees as customers (Leonard & Pakdil, 2016, p. 110). The TMT is an important piece in the
understanding of company development and is crucial for the success of digital maturity.

1.4 Problem Background
To get a better understanding of the topic of research, we wanted to understand how the world of
digital maturity measurements and digital transformations works.
According to a survey conducted by McKinsey (2018), prior research on digital transformations
has shown that it is difficult to achieve success. Efforts to change and transform the
organization have a low success rate: only 16 percent of respondents in a 1,793-person sample
say that their organization’s digital transformation has improved and sustained
performance in the long term. A further 7 percent of the sample say that improvements were made,
but that they could not be sustained in the long term. The question that we need to ask ourselves
is why digital transformations are so difficult to implement and sustain, and what is causing the
drawbacks. Academic literature suggests that a big component of digital transformation success
rate is dependent on the digital maturity of an organization (McKinsey, 2018; Kane, 2017; Gill &
VanBoskirk, 2016). Hence, digital maturity acts as a byproduct and foundation for
transformational success. Digital maturity measurements are relevant to understand where an
organization stands in terms of readiness to a digital transformation. A study by North et al.
(2019, p. 6) that investigated digital maturity and growth of SMEs in 427 companies concluded
that 77% of the companies are searching for digital opportunities and that only 24% search in a
systematic way. This shows that the interest for digital maturity and the transformation it can
lead to exists, but the procedure to get there is unclear or not pursued.

Furthermore, one of the major issues for the companies that choose to identify their digital
maturity is the struggle to convert identifications into actions (Gill & VanBoskirk, 2016, p. 10).
Moreover, the company might have identified and gained knowledge of what they need to
improve but fail to do so because of different factors that vary depending on the company in
question. Tabrizi et al. (2019, p. 2) describe a similar issue, noting that digital technologies
provide possibilities, but if there is a lack of effort to change or practices are flawed, the overall
transformation and maturity will be impacted negatively. These are some issues that are highly
relevant for the case of digital maturity, and most research before us has focused on the broad
picture, taking into account all factors of an organization that either help or slow their digital
maturity and transformation. We would like to address and focus on a particular field within an
organization to get in-depth, niche knowledge surrounding digital maturity. This will be more
closely described in the following chapter.

1.5 Research Gap
Previous research has mainly focused on a broad perspective of digital maturity and required
steps for companies to take. How companies take action and maintain this development over
longer periods of time has received limited attention in the research. For companies to sustain digital
maturity and progress in their journey, a clear plan for developing certain areas is necessary. We
believe that one of the aspects that is lacking in the field of research is management throughout the
development of digital maturity. McKinsey (2018) mentions the importance of
converting digital maturity assessments to company actions through systematic engagement
from managers within the company and that an important key to success is dependent on
leadership commitment. North et al. (2019, p. 6) also show that a very low percentage of SMEs
can count on leaders to initiate, coordinate and supervise digital initiatives and even fewer have
leaders that promote digital growth. The TMT also has a lot of power when it comes to setting the
standard for how much a company cares about digital maturity and digital development. If the TMT
can be effective in strategy and still have passion for digital development, in combination with
good relations between the company and the TMT, it could lead to better performance and a
better success rate in digital transformations (Leonard & Pakdil, 2016).

We have observed that most studies in this field have focused either on giving overviews of
different perspectives on digital maturity assessments or models and comparing them to similar
ones (e.g. Chanias & Hess, 2016; De Bruin et al., 2005; Rossmann, 2018), or on quantitative
approaches to answering questions regarding the success rate of digital transformations. There is
limited research focused on management approaches and on how organizations plan and use the
information gathered from digital maturity measurements to further develop their maturity.

We believe that management is one of the key components and that there are far too few studies
that handle the aspect of management in digital maturity research. As Westerman et al. (2014, p.
15) describe, so-called digital masters excel in both digital and leadership dimensions.
Committed leadership is “the lever that turns technology into transformation” (Westerman et al.,
2014, p. 13). We therefore want to investigate how management affects and influences
the implementation and success of digital maturity measurements and how companies can work
with continuous improvements to lead digital development and benefit from it.

1.6 Research Questions and Purpose of Study
How do Management Teams influence the implementation and success of digital maturity
transformations in companies?

Since we want to investigate how management influences the implementation and success of
digital maturity initiatives in companies, one of the purposes is to see how much management is
influencing digital transformation. We also want to investigate if current models for digital
maturity assessment are equipped to suggest real changes based on both evaluative and
performance indicators. Beyond these purposes, we also want to examine if the current digital
maturity assessment models can be improved in any way to further improve the success rate. The
research question and purposes are summarized within Table 1.

Research question: How do management teams influence the implementation and success of
digital maturity transformations in companies?

Purpose 1: Investigate how much influence management has on the digital transformation of a
company.

Purpose 2: Investigate whether current digital maturity models are equipped to suggest real
changes based on evaluative/performance indicators in the assessment.

Purpose 3: Examine whether digital maturity assessment models can be improved for more
successful transformation projects.
Table 1. Overview and summary of the research question and connected purposes

1.7 Delimitations
Delimitations are the characteristics that limit the scope and define the boundaries of your study
(Simon, 2011, p. 2). Delimitations are something that you control. They are conscious
exclusionary and inclusionary decisions that have arisen from limitations in the scope of the
same study (Simon & Goes, 2013, p. 3). Factors include choosing objectives, research questions,
variables of interest, theoretical perspectives and population of investigation (Simon, 2011, p. 2).
A delimitation section should cover the criteria of which participants are included. It should also
include the geographic region covered as well as profession and organizations involved (Simon,
2011, p. 2).

A delimitation of our study is that we have only chosen Swedish companies operating in
Sweden. This is because we could ensure better participants in terms of
experience and knowledge in the field. Our networks outside of Sweden did not allow us to find
any suitable participants for this study. It was also more suitable to look at companies operating
in Sweden as we then could draw comparisons that had a foundation in the Swedish market. We
did not limit the companies to a specific sector; both public and private sectors are included. We
believe that a problem with having a scattered geographical enrollment of participants is that it
could lead to weaker results, simply because there would be less empirical support and evidence.
Another delimitation is our sampling of participants. We have chosen to use a purposive
sampling method, which is based on previous knowledge and experience in the field. We used
this method because we believed that it would give us the most in terms of valuable insights,
discussions and analysis. Moreover, we have chosen to use a qualitative approach to our study.
This means that the data will not be statistically analyzed but analyzed through our chosen theories
in relation to our qualitative findings. Our research question and the purposes of this study are
suited to an exploratory research design, which is why we have consciously excluded any
quantitative aspects.

2. Theoretical Framework
2.1 Defining Digital Maturity
The term “digital maturity” does not have a universal definition, which allows us to choose a
fitting definition according to our research perspective. Previous research has been scrutinized in
order to find the best suited definition. We compiled many definitions on digital maturity but
found that most did not suit the perspective and approach that we wanted to take. As referred to
in chapter 1.2, and Table 2 below, the definition found in Chanias and Hess (2016) was best
suited for our approach. Ultimately, we wanted a definition that could handle both the aspect of
digital maturity and management at the same time. Hence, the following definition was chosen.

Author & Year: Chanias and Hess (2016, p. 4)
Title: How digital are we? Maturity models for the assessment of a company’s status in the
digital transformation
Definition of Digital Maturity: “The status of a company’s digital transformation”
Table 2. Selected definition of digital maturity.
We believe that this selection is the most balanced one in relation to digital maturity and top
management teams’ involvement. The digital transformation of a company includes the
importance of management as it is one of the main elements in being “mature” (Aslanova &
Kulichkina, 2014, p. 445). The overall status of a company’s digital transformation can be linked
to the importance of carrying out digital maturity assessments to understand the present status.

2.2 Main Digital Maturity Characteristics
In order to understand digital maturity as a whole it is vital that we break down and present the
most critical characteristics that together make digital maturity a unit. The following table is
inspired by Aslanova and Kulichkina (2014). The authors presented five characteristics that we
deem to be of utmost importance in digital maturity and as a result of that we have decided to
follow a similar approach.

Strategy: Integrating a strategy for the organization to achieve a high level of digital maturity.
Clear and specific actions are required, which should imply the implementation of new
technologies and resources to help improve performance indicators.

Organization: Arguably the most important component for high digital maturity. Readiness of
management to provide changes in organizational culture and business processes and to improve
management skills.

People: Engagement, motivation and participation of staff are needed to successfully implement
changes. Readiness and awareness of staff are a key component.

Technologies: In order to achieve high digital maturity, the digital competence of the employees
involved in the transformation is a key factor.

Data: Usage and management of data are a main prerequisite for digitalization. Harmonization
of data creation and generation as well as increasing volume and quality of data is needed.
Table 3. The five main digital maturity characteristics according to Aslanova & Kulichkina
(2014, p. 445).

Since we have identified a research gap that is focused on top management teams in correlation
to digital maturity implementation, the characteristics of digital maturity will be slimmed down
to only two characteristics. The other three characteristics will not be extensively explored, as they
stray too far from the identified research gap. However, we would like to acknowledge that
without all characteristics in place it would be difficult for a digital maturity assessment to
function as a tool for measuring a company’s status of digital transformation. The two
characteristics that will be further explored are organization and people. This is due to the fact
that they are both very important elements for achieving high digital maturity, but also because they
are interconnected with top management teams in organizations.

2.2.1 Organization
Leadership quality and management skills are an important component in order for an
organization to achieve high digital maturity (Aslanova & Kulichkina, 2014, p. 445). Readiness
to change will impact the performance and development of an organization. The process to
transform an organization into a digitally mature one can be a long one, therefore it is of major
interest that top management supports the entire journey. Such a process usually means that the
entire organization is affected, which could possibly lead to resistance in some areas. In regard to
that, it is essential that the right leadership skills are in place alongside the activeness of both top
management teams and other possible stakeholders (Matt et al., 2015, p. 5).
According to Irimiás and Mitev (2020, p. 8), change management has a positive effect on digital
maturity, meaning that the more effort to change is embedded into the organizational culture, the
better the digital development. Readiness to change does not only act as a one-way effect in the
organization. It is proven that developing change management skills will also have positive
effects on the green development of an organization (Irimiás & Mitev, 2020, p. 8). Furthermore,
the digital maturity of an organization has a positive effect on business performance. From this
data we can see that incorporating more change management into the organizational culture will
promote digital development which in turn has positive effects on both business performance and
green development (Irimiás & Mitev, 2020, p. 9). Salviotti et al. (2019, p. 4) and Westerman et
al. (2014, p. 14) also point out the importance of strategic factors for attaining high digital
maturity. The first one is a shared digital vision within top management teams. Aligning top
management with a vision surrounding the organization's digital future is important in order to
attain high digital maturity. Moreover, if they share a common vision of changes brought through
digital technologies, that will create competitive advantage compared to organizations whose top
management teams’ visions are absent (Salviotti et al., 2019, p. 4). A second strategic factor
regarding the organization is top management’s transformative vision. Salviotti et al. (2019, p. 4)
go on to point out that many organizations fail to derive value from digital technologies
because their leaders lack a transformative vision. Therefore, it is of importance that top
management teams communicate the nature of the transformation that is created from the digital
vision of the same leaders (Salviotti et al., 2019, p. 4).

2.2.2 People
For the organization to make the digital vision a reality, it is necessary to engage employees and
managers at all levels (Salviotti et al., 2019, p. 4; Matt et al., 2015, p. 5). Communication
and the digital vision of an organization go hand in hand. It is only through means of
communication that a vision can become reality. This could be done by, for example,
implementing new strategic plans, internal communication instruments and events (Salviotti et
al., 2019, p. 4). Furthermore, this creates a new level of adaptability within the organization and
paves a way for better digital change efforts on all levels. Transformations will change processes
and the overall structure of an organization. These types of changes will require new skills in
employees. According to Salviotti et al. (2019, p. 5) and Aslanova and Kulichkina (2014, p. 446)
there are two options in such a position: one is to bring in new talent through hiring, and the other is
to develop existing talent. The same authors claim that research has proven that hiring new talent or
developing existing ones will increase competitive advantage in a transformation. Moreover,
besides having the right staff in place, experts from both inside and outside the organization may
be needed as additional support (Matt et al., 2015, p. 5).

2.3 Organizational Change


To get a better understanding of the key aspects of change management, we have to understand
what makes organizational change work and which aspects need to be addressed during the
change.

Balogun & Hope-Hailey (2008, cited in Whittington et al., 2020, p. 469) have developed a
framework for identifying key elements of strategy that need to be taken into consideration while
proposing change programs. The name kaleidoscope comes from how the toy by the same name
changes and rearranges different elements into different patterns, the change kaleidoscope
arranges how different aspects of change can take on various forms in either supporting or
resisting change. The change kaleidoscope features eight different aspects that all affect strategic
change programs and by understanding what parts affect change in what way and understanding
where there might be resistance, managers might be able to focus their resources in the right
areas to be more successful and limit issues regarding organizational changes (Balogun & Hope-
Hailey, 2008, cited in Whittington et al., 2020, p. 469).

The first part of the change kaleidoscope is time and refers to the time available for change and
how drastically this can vary from case to case. A business might suffer from rapid
changes in the market and therefore require a more agile approach to not lose business or profit
from slow adaptation. At the same time, some companies or sectors might see the changes as
years away and may therefore take their time carefully planning their strategy for the upcoming
change (Balogun & Hope-Hailey, 2008, cited in Whittington et al., 2020, p. 469).

The scope of change is variable depending on how much of a change is needed to reach the goals
of the organization. If the company in question has a large and global presence, the scope is
likely to involve a high degree of breadth of change, while organizations with a long cultural
history might be more likely to require more depth of change (Balogun & Hope-Hailey, 2008,
cited in Whittington et al., 2020, p. 469-470). Preservation of some parts of the organization
might be required; capabilities, for example, might be one of the things that need to be revised during
organizational change.

Some capabilities need to be kept, either to balance the revenue of existing activities while
new streams take a while to come up to speed, or because they need a rework to be more efficient
(Balogun & Hope-Hailey, 2008, cited in Whittington et al., 2020, p. 469-470). While building a
plan for organizational change it can be beneficial to analyze the diversity of the company in
either experience, views or opinions. Diversity may help the organization to change current
processes and start new activities that might become key to the changing organization. However,
if an organization has been stuck with a similar strategy for a long time the organization can
become very homogeneous in their view of the business and the world. This could limit the
effectiveness of the change (Balogun & Hope-Hailey, 2008, cited in Whittington et al., 2020, p.
470). Capacity of change is another major point that needs to be considered during
organizational change and refers to the fact that change can be extremely costly. Not only
directly in financial terms but also through managers allocated time. It will most likely be the
responsibility of the TMT to allocate and prioritize these resources (Balogun & Hope-Hailey,
2008, cited in Whittington et al., 2020, p. 470).

Power is essential to be able to drive significant change. Although chief executives are seen to
have a lot of power, the reality is that resistance from the organization or pressure from key
stakeholders may limit the power of executives. In organizations with a strong hierarchical
structure, top management may have more power to change, and it can be difficult to alter this,
mainly because the organization expects this type of directness. In flatter, more networked or
learning organizations, collaboration and a more diverse power structure are more common and
desirable (Balogun & Hope-Hailey, 2008, cited in Whittington et al., 2020, p. 470). Capability to
manage change is unquestionably important, and since change is so complex, organizations need
to have access to skilled and capable change managers. These can either come from within the
firm or be hired externally, e.g., as consultants. It is also an advantage if the workforce is used to
and experienced with change as a common occurrence (Balogun & Hope-Hailey, 2008, cited in
Whittington et al., 2020, p. 470). The final part of the change kaleidoscope is readiness for
change. Is the company ready for a change and is there a need for it across the whole
organization? Is there widespread resistance, or are there parts of the organization that are more
or less ready for change than others? (Balogun & Hope-Hailey, 2008, cited in Whittington et al.,
2020, p. 470).

2.4 Continuous Improvement

2.4.1 The Theory of Constraints (TOC)


The theory of constraints is used to find and analyze key factors of resistance in change
management. This can be described as the need for managers to identify resistance to change in
various forms and use it to test and apply strategies and action plans to be able to successfully
complete an implementation that has support from the organization. The process is based on a
few key questions such as: “What to change; What to change to; and How to bring about the
change?” (Mabin et al., 2001, p. 168). Goldratt (1990, p. 5) expresses the importance of
verbalizing intuition, since it can be difficult to really understand an issue if not verbalized.
Usually, people have the intuition of some things being wrong within the organization, but to be
able to do something about it these intuitions need to be concretized through verbalization.
According to Goldratt (1990, p. 6-7) there are five steps to the theory of constraints, the first one
being to identify the system's constraints. Once identified, they also need to be prioritized based
on the impact they have on the organization; if this is not done, the list can easily be filled with
minor issues that skew the scope and waste the limited time available. The second step is
deciding how to exploit the system's constraints. After the constraints are defined, resources need
to be balanced so that everything the constraints will consume is supplied by the non-constraints.
There is no need to supply more than that, and the third step in the process is therefore to
subordinate everything else to that decision and limit the impact of the constraints (Goldratt,
1990, p. 6). The fourth step is to elevate the constraint; if it is elevated to the point where it can
reasonably be considered eliminated, other constraints will be highlighted, and it is time to
repeat the process from the first step. However, awareness of the reasoning behind the constraint
should be maintained, since one of the common issues is that old policies are at fault for the
constraints appearing in the first place.
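To make the iterative nature of these five steps concrete, a minimal sketch in Python is given below. The constraint names, impact scores and the simple loop structure are hypothetical illustrations of the logic described above, not part of Goldratt's (1990) formulation.

# A minimal sketch of the five focusing steps as an iterative loop.
# The constraint names and impact scores are hypothetical illustrations.

constraints = {
    "manual invoice approval": 8,   # impact on the organization (higher = worse)
    "legacy CRM integration": 5,
    "unclear data ownership": 3,
}

def identify(current_constraints):
    """Step 1: pick the constraint with the highest impact."""
    return max(current_constraints, key=current_constraints.get)

def exploit_and_subordinate(constraint):
    """Steps 2-3: decide how to exploit the constraint and let the
    non-constraints supply only what the constraint can consume."""
    print(f"Exploiting '{constraint}' and subordinating other activities to it")

def elevate(current_constraints, constraint):
    """Step 4: invest in relieving the constraint; here it is simply removed."""
    current_constraints.pop(constraint)

# Step 5: once a constraint is broken, return to step 1 instead of letting
# old policies (inertia) become the new constraint.
while constraints:
    current = identify(constraints)
    exploit_and_subordinate(current)
    elevate(constraints, current)

The sketch only illustrates the ordering of the steps; in practice each step is an organizational decision rather than a line of code.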

Companies need to review and update their policies in order not to be trapped in an endless cycle
of constraints (Goldratt, 1990, p. 6-7). Managers are constantly overwhelmed by problems that
need attention on a daily basis, and it can be hard for them to find time to correct the underlying
issues. If the goal is to maintain effective improvement, some main points need to be defined:
what to change, what to change to, and how to cause the change (Goldratt, 1990, p. 7; Mabin et
al., 2001, p. 171). What to change refers to the core issues that, once corrected, will have a major
impact on the organization. After the identification has occurred, it is easy to accidentally jump
straight into how to cause the change instead of focusing on what to change to. There is no point
in focusing on solutions before the new goals and metrics have been set, since this will only
cause confusion and panic. There needs to be a plan for where the changes will take us, and the
solutions that actually cause the change need to be simple and divided into digestible steps for
the organization to follow (Goldratt, 1990, p. 8). More recent literature has discussed the view
that this model has evolved into a management theory, both from the perspective of methodology
and other applications of business, while the original theory was focused on manufacturing
methods. The theory has been used more within the service sector and has also been applied in
critical chain project management. This is an interesting development, but studies supporting its
advantages within project management are lacking (Zeynep et al., 2014, p. 934).

2.4.2 PDSA-Cycle (Continuous improvement theory/model)


The PDSA cycle was developed by W. Edwards Deming, building on the cycle first introduced
by Walter Shewhart. The model takes the form of four simple and repeatable steps for continuous
improvement: plan, do, study and act (Connelly, 2021, p. 61). The first stage, plan, focuses on
developing a plan and identifying key objectives and owners of the task. This stage should also
identify when, how and where the plan will be implemented, and predictions and the intended
outcome should be decided here as well. The next stage is do; this is the action stage where the
decided plan is carried out. It is also important to properly document relevant data, successes,
issues and unexpected realizations. Study is the next stage and is the most important step in the
cycle for actually improving the process. At this stage, the results are compared to the predictions
made, and the learnings from the previous step are discussed and documented. The last stage,
act, focuses on what action to take next based on the findings from the cycle (Christoff, 2018,
p. 198). There are three different actions to take at this stage: adapt, which means improving the
change, continuing the testing, and documenting the plans and changes for the next cycle; adopt,
which means implementing the changes made at a larger scale and making a plan for the change
to be sustainable; and abandon, which means eliminating the change idea and trying a different
one (Christoff, 2018, p. 200).
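As an illustration of how the four stages and the three possible actions relate to each other, a minimal sketch is given below. The objective, the prediction, the run_change callback and the adopt/adapt thresholds are hypothetical and only meant to show the loop structure described by Connelly (2021) and Christoff (2018), not their actual procedure.

# A minimal sketch of one pass through the Plan-Do-Study-Act cycle.
# The objective, prediction and measurement are hypothetical illustrations.

def pdsa_cycle(predicted_hours_saved: float, run_change) -> str:
    # Plan: define the objective, owners, when/how/where, and a prediction.
    plan = {"objective": "reduce manual reporting time", "prediction": predicted_hours_saved}

    # Do: carry out the plan and document what actually happened.
    observed = run_change(plan)

    # Study: compare the result with the prediction and record the learning.
    print(f"Predicted {plan['prediction']}h saved, observed {observed}h")

    # Act: adopt, adapt or abandon based on the findings.
    if observed >= predicted_hours_saved:
        return "adopt"    # implement at a larger scale and plan for sustainability
    if observed >= 0.5 * predicted_hours_saved:
        return "adapt"    # improve the change, continue testing, document for the next cycle
    return "abandon"      # eliminate the change idea and try a different one

# Example run: predicted 10 hours saved per week, observed 7 -> "adapt"
print(pdsa_cycle(10.0, lambda plan: 7.0))

In practice the "study" stage would of course involve richer documentation than a single number; the sketch only shows how the outcome of one cycle feeds the decision for the next.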

2.4.3 RACI Model


RACI stands for Responsible, Accountable, Consult and Inform (Smith & Erwin, 2005, p. 5).
This model/matrix is a technique for identifying areas where there are differences or ambiguities
in the process. The model reconciles role conception, role expectation and role behavior all at the
same time (Smith & Erwin, 2005, p. 4). Essentially this means that the elements combine what
people think their job is, what others expect of that job and how the job is actually performed
(Smith & Erwin, 2005, p. 4). This charting system illustrates the goals of each task and
simultaneously illustrates the actions needed to achieve those goals (Ahmed, 2019, p. 6). By
using RACI, problem areas can be resolved openly in a cross-functional, collaborative effort
within the organization (Smith & Erwin, 2005, p. 2). Responsibility
Charting, which it is also known as, opens up possibilities to be more systematic and focused in
discussions regarding actions that need to be accomplished in order to successfully deliver a
service or product (Smith & Erwin, 2005, p.2).

To ensure that the RACI model works, there are some basic rules to follow. Only one person
should be in charge of a given task or activity. This is the “A” segment of the model, which refers
to “accountable” (Costello, 2012, p. 62). This person will answer for the completion and success
of the given task or activity, even though it might not be that person who carries out the work
related to the task. Moreover, there are people responsible for carrying out the necessary work to
complete a task; this is the “R” segment of the model. In many instances (often if the team or
project is small) the people that are accountable are the same ones carrying out the work
(Costello, 2012, p. 62). The “C” segment stands for consult; these people are assigned as experts
within the subject matter of the given task and act as support and guidance to the rest of the
team (Costello, 2012, p. 62). The last segment of the RACI model is the “I” people, those who
need to be kept informed, such as managers, partners or other stakeholders involved in the
project. It is vital that communication goes out to everyone across the RACI as well (Costello,
2012, p. 62).

Figure 3. An example of assigning roles in a RACI model.

RACI is a useful project tool but also an enterprise-wide mindset for distributing both
participation and expectations in completing work tasks (Costello, 2012, p. 63). Moreover, it acts
as a checklist or reference when, for example, assigning roles, resources, duration and cost
estimates. The structure and interface of the RACI chart make it possible to spot missing pieces
such as missing roles, missing stakeholders or missing work tasks (Khan & Quraishi, 2014,
p. 178). This further ensures that everyone on the team has assigned roles and tasks to do.
However, even though the RACI model provides clarity, managers must always recognize
symptoms of confusion during the process. Drifting away from roles or tasks is common during
most projects. Therefore, the identification and elimination of drift is important to ensure both
the project's and the company's well-being (Smith & Erwin, 2005, p. 4).
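To illustrate the structure, a RACI chart can be represented as a simple matrix with one automated check for the basic rule above, namely that every activity has exactly one accountable role. The activities and roles in the sketch below are hypothetical and not taken from Smith and Erwin (2005) or Costello (2012).

# A minimal sketch of a RACI chart as a matrix, with the basic rule that
# every activity has exactly one Accountable ("A") role.
# Activities and roles are hypothetical illustrations.

raci_chart = {
    "Define digital maturity criteria": {"CIO": "A", "IT manager": "R", "HR": "C", "Staff": "I"},
    "Run maturity assessment":          {"CIO": "A", "Consultant": "R", "IT manager": "C", "Staff": "I"},
    "Report results to stakeholders":   {"IT manager": "A", "Consultant": "R", "CIO": "I"},
}

def check_single_accountable(chart: dict) -> list:
    """Return the activities that violate the 'exactly one A' rule."""
    violations = []
    for activity, assignments in chart.items():
        accountable = [role for role, code in assignments.items() if code == "A"]
        if len(accountable) != 1:
            violations.append(activity)
    return violations

print(check_single_accountable(raci_chart))  # [] means every activity has one Accountable role

A check of this kind mirrors the manual review of a RACI chart: missing roles, missing activities or duplicated accountability become visible as soon as the chart is laid out as a matrix.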

As in all projects, it is vital that a process plan is made before giving the green light to the
project, and RACI is no exception. A 5-step process is presented by Smith and Erwin (2005, p. 6)
which pedagogically explains what the structure and planning of the RACI model can look like,
as presented in Table 4 below.

1. Identify work processes: start with high-impact areas first; the work process must be well
defined; fewer than ten activities is too narrow, while more than 25 activities is too broad; do not
chart a process that will soon change.

2. Determine the decisions and activities to chart: avoid obvious or generic activities such as
“attend meetings” or “prepare reports”; each activity or decision should begin with a good action
verb such as “evaluate”, “operate”, “approve” etc.

3. Prepare a list of roles or people involved in those tasks: roles can be individuals, groups or
entire departments and can include people outside your department or outside the company
(customers, suppliers etc.); roles are better than individual names, since the RACI chart should be
independent of personal relationships so that it would still be valid if all the roles were filled by
new people tomorrow.

4. Develop the RACI chart: for larger groups or more complex issues, an independent facilitator
is required; meeting time can be significantly reduced if a “straw model” list of decisions and
activities is completed prior to the meeting.

5. Get feedback and buy-in: distribute the RACI chart to everyone represented on the chart but
not present in the development meeting; capture their changes and revise the chart; reissue the
revised chart; update as necessary on an ongoing basis.
Table 4. The 5-step process to plan, structure and execute tasks through the RACI model.

2.5 Literature Review
Table 5 summarizes some of the key assessment models in digital maturity. The table gives an
overview of each model's structure, aim, and implications. A review of the literature has been
made in order to get better acquainted with existing models and to identify potential drawbacks.

Forrester, VanBoskirk (2017), Forrester's digital maturity model 5.0. Model structure: four stages
and four dimensions; the dimensions (culture, technology, organization, insights) determine the
digital maturity stage. Aim: companies fall into one of four maturity segments (differentiators,
collaborators, adopters, skeptics) based on assessment questions. Implication: map yourself to
one of the four segments and identify a digital strategy based on how mature you are; some
strategy goals are given in each segment, but they are very broad; the goal is to become a
differentiator.

McKinsey & Company (2015), Raising your digital quotient. Model structure: four dimensions
which determine digital maturity (strategy, digital capabilities, adaptive culture, aligning
organizational structures). Aim: companies should commit to the lessons provided within each of
the four dimensions in order to become more digitally mature. Implication: acknowledge your
current digital position and build on it through the practices provided in each dimension.

PwC (2016), Industry 4.0: Building the digital enterprise. Model structure: four stages and seven
dimensions; the dimensions determine Industry 4.0 capabilities and digital maturity. Aim:
companies fall into one of four stages (digital novice, vertical integrator, horizontal collaborator,
digital champion) based on the outcome of the seven dimensions. Implication: identify your
enterprise architecture and which capabilities you need in order to improve business processes;
pilot opportunities exist alongside every dimension to help the company transform, striving to
become a digital champion in the long term.

MIT Center for Digital Business and Capgemini Consulting (2012), The digital advantage: How
digital leaders outperform their peers in every industry. Model structure: four stages and two
dimensions; the dimensions (digital intensity and transformation management intensity)
determine your stage (beginners, conservatives, fashionistas, digirati). Aim: companies fall into
one of the four stages; high digital intensity allows companies to gain and manage more volume
with existing physical capacity, while high transformation management intensity has a direct
positive impact on profitability. Implication: technology-enabled initiatives and leadership
capabilities decide how digitally mature your company is; focus on the intensity of digital and
transformation management to become a digirati.

Booz & Company (2011, now PwC), Measuring industry digitization: Leaders and laggards in
the digital economy. Model structure: three stages and four dimensions; the dimensions
determine your maturity stage (leading, midfield, lagging). Aim: companies fall into one of the
three stages based on the assessment in relation to the four dimensions. Implication:
characteristics of input, processing, output and infrastructure show the overall degree of
“digitization”.

Berghaus & Back (2016), Stages in digital business transformations: Results of an empirical
maturity study. Model structure: five stages and nine dimensions; the dimensions determine your
maturity stage (promote & support, create & build, commit to transform, user-centered &
elaborated processes, data-driven enterprise). Aim: companies fall into one of the five maturity
stages based on the score of each category; cluster analysis is used. Implication: after defining
the maturity stage, individual maturity scores are also given to get an even better overview of
digital maturity; no direct suggestions for improvement are given if scores are low.

Deloitte (2018), Digital Maturity Model: Achieving digital maturity to drive growth. Model
structure: five clearly defined business categories/dimensions (customer, strategy, technology,
operations, organization & culture) with 28 sub-dimensions, giving a total of 179 individual
criteria to assess maturity; no stages. Aim: the different business dimensions and individual
criteria help companies to identify gaps, establish key areas to focus on and decide where to
start. Implication: the model suggests three steps to complete a transformation (imagine, deliver
and run): assess current digital maturity, prioritize capabilities to enhance business objectives
and measure the value and impact of initiatives on digital maturity.

Boston Consulting Group (2022), BCG's Digital Acceleration Index. Model structure:
questionnaire-based evaluation across 27 dimensions. Aim: to benchmark digital maturity in 36
categories. Implication: benchmarks broad capabilities, go-to-market capabilities and future-ready
tech functions; by focusing on these areas companies can gain competitive advantage.
Table 5. An overview of popular assessment models created by large companies and the
different components of them.

2.6 Conceptual Framework
The constructed conceptual framework builds on the initial research ideas that we developed
early on. We believe that in order to achieve digital transformation successfully, the digital
maturity of a company needs to be assessed. Furthermore, companies should look more closely
into the proposed components that we present in Figure 4 below. These are: Importance of
Management and Change Management. Each component is supported by various theories that
we believe can help companies to achieve a greater level of digital maturity. The belief is that
most contemporary maturity models do not take the aspect of management into consideration, or
at least not to the level that we believe they should. This primarily includes the characteristics of
organization and people. Moreover, they do not suggest how companies can change in order to
become digitally mature. This is where the aspect of change management and change efforts
come into play.

Figure 4. Initial conceptual framework based on chosen theories.

3. Scientific Methodology
This chapter will cover the essentials of scientific methodology needed in order to better
understand our own choices of research structure. It will explain our
assumptions and beliefs from a philosophical standpoint. Research philosophy can be explained
as a system of beliefs and assumptions about the development of knowledge (Saunders et al.,
2019, p. 130). Numerous assumptions will be made during the research process. These include
assumptions about ontology (assumptions about the realities you encounter), epistemology
(human knowledge) and axiology (how personal values affect the research process). In order to
design a stable research project, there needs to be a consistent set of assumptions. This will help
build a credible research philosophy which in turn underpins methodological choice, research
strategy, data collection techniques and analysis procedures (Saunders et al., 2019, p. 130-131).
Our study will adopt an interpretive paradigm, which will be discussed in the following chapter
by breaking down the paradigm into categories.

3.1 Research Philosophy

3.1.1 Ontology
Ontology derives from the Greek word for “existence” and can be seen as the theory of being
(Marsh & Furlong, 2002, p. 178). The questions surrounding this philosophical term focus on the
nature of “being”; to be more specific, what is the form and nature of reality, and what is there
that can be known about it? (Marsh & Furlong, 2002, p. 178). Moreover, there are two
ontological positions one can take depending on how one sees the world. The first is
foundationalism, more commonly known as objectivism. Objectivism is the position that social
phenomena exist whether we are aware of them or not. Furthermore, this means that there is an
objective view of reality independent of our role as observers (Bryman & Bell, 2018, p. 26).
Objectivism embraces realism, which implies that social entities are much like physical entities:
they exist regardless of how we think about, label or are aware of them (Saunders et al., 2019,
p. 135). Moreover, it also implies that our world has only one true social reality that is not
influenced or affected by the actions of any social actors. The second ontological position is
anti-foundationalism, better known as either subjectivism or social constructionism (Marsh &
Furlong, 2002, p. 178). The subjectivist/constructionist viewpoint suggests that categories such
as organizations and culture are subjective and socially constructed, made possible by the actions
and understanding of humans (Bryman & Bell, 2018, p. 27; Saunders et al., 2019, p. 137).

Ontology can be divided further based on what type of debates and discussions are covered,
depending on the research approach. A qualitative approach implies discussions surrounding the
meaning of concepts and, more specifically, the definition of a concept and the characteristics
that make it an entity (Goertz & Mahoney, 2012, p. 207). The discussion agenda of ontology
from a quantitative approach differs, since the focus lies mostly on data and measurement and
less on meaning. Instead, the focus lies on measuring and operationalizing a concept.
Operationalization in this case involves finding pointers in numerical data that are correlated
with each other (Goertz & Mahoney, 2012, p. 207). Your ontological assumption shapes the way
you see and study your research objects (Saunders et al., 2019, p. 133). In business research,
objects can be anything from organizations, events, working lives or management (Saunders et
al., 2019, p. 133).

This research project will adopt a subjectivist approach. This is because the research area
explored includes objects such as organizations, management and individuals, which we assume
to be social constructs. This is how we see the world of business through our own lens, and
therefore such an assumption is made. The world of business, and especially constructs such as
technology, digital maturity and assessments, does not consist of phenomena that exist
independently of us. They are the result of human actions and beliefs that have shaped these
phenomena throughout history. Since we are examining and exploring how management teams
influence the implementation of digital maturity transformations in companies, we want to know
how people perceive and experience that reality. To strengthen and support our choice further, a
subjectivist approach also means that the researcher will be interested in different opinions and
narratives that can help account for the different social realities of different social actors
(Saunders et al., 2019, p. 137).
In relation to the interpretive paradigm, the ontological view is, as previously discussed, socially
constructed, complex and rich (Saunders et al., 2007, p. 145). The data is seldom black and
white, meaning that there are most likely multiple meanings or different interpretations of it.

3.1.2 Epistemology
Epistemology is the theory of knowledge and reflects the view of what we can know about our
world (Marsh & Furlong, 2002, p. 178). It is also derived from the Greek language, “episteme”
meaning knowledge, and “logos” meaning theory (Bryman & Bell, 2018, p. 29). It refers to the
assumption about knowledge, what we deem to be valid and acceptable knowledge and how we
can communicate that knowledge to others (Saunders et al., 2019, p. 133). In the epistemological
context, focus lies with the question surrounding how we know about the world. Epistemology
considers logic, evidence and certainty as its key foundational components. The questions that
we ask ourselves from an epistemological side are whether an observer can identify real or
objective relations between social phenomena. And if so, how can they do it? (Marsh & Furlong,
2002, p. 178). Investigating knowledge claims follows naturally from the original ontological
standpoint that one takes in the beginning. Furthermore, there are different epistemological
standpoints to take. If your ontological standpoint is foundationalism, your epistemological
choice could be either positivism (quantitative method) or realism (quantitative/qualitative
method), depending on what method you deem necessary to use. Using a subjectivist/social
constructionist (anti-foundationalist) standpoint, on the other hand, implies that the researcher
has to gather information and knowledge through qualitative observations and interviews of
social actors. This is because the researcher needs to understand how these actors shape and
understand the world, considering that subjectivism suggests that everything is socially
constructed (Bryman & Bell, 2018, p. 29). Epistemology is important in research since it allows
us to answer the question of how we should conduct research and what type of data needs to be
collected in order to understand a phenomenon (Bryman & Bell, 2018, p. 29).

Our epistemological standpoint will guide us to what type of method to use in our research.
Since the interpretive paradigm lines up with a qualitative epistemological approach, we have
chosen to adopt that standpoint; it is a natural path to follow. Since the ontological assumption is
that everything is socially constructed, we need a rich and complex view of the organizational
reality along with individual contexts such as experiences and differences (Saunders et al., 2019,
p. 134). We will follow narratives, stories, perceptions and interpretations, which are the main
options for collecting data within the interpretive paradigm.
Within this paradigm, research findings are not expected to be generalizable and objective, which
is why we have not pursued that aim. This will contribute to a deeper understanding and give us
an interpretive view of knowledge.

3.1.3 Axiology
Axiology covers the roles of values and ethics in research. It is important that you recognize and
reflect upon your values and ethics when conducting a research project as it can have an impact
on the research (Saunders et al., 2019, p. 134). Of course, values and ethics can be of positive
impact, but that is for the researcher to deal with and reflect upon. Not only will the researcher
have to deal with their own values, but also the values of the people that they are researching.
Values will reflect the entire research philosophy as it is the driving force of choosing topics,
methods, interactions etc. According to Saunders et al. (2019, p. 134) it is important to
demonstrate axiological skill, articulating your values as a basis for making judgments about
what kind of research you are conducting and how you will do it.
As we have chosen an interpretivist approach, we are value-bound in our research. Moreover, we
are going to be very involved in the research and are therefore somewhat subjective since we
affect the data with our own interpretations and analysis (Saunders et al., 2019, p. 145). To carry
out this research in the best way possible, our goal is to be as objective as possible. However,
one of the key contributions of interpretivism is the researcher’s own interpretations of the data
(Saunders et al., 2019, p. 145). Therefore, knowing what value position you have (being as
objective as possible) could potentially ease the risk of compromising the research due to value
problems.

3.2 Research Approach


In order to establish sound research, researchers need to determine what type of logical
reasoning they want to follow. There are three different forms of reasoning in the scientific
literature: deduction, induction and abduction (Saunders et al., 2019; Rothchild, 2006; Reichertz,
2004). Deduction looks for predictions, induction looks for facts and abduction looks for theories
(Reichertz, 2013, p. 131).

Deduction translates into showing or holding a thing to be derived from something or drawing
conclusions from something known or assumed (Rothchild, 2006, p. 3). Another general
definition of deduction is “the deriving of a conclusion by reasoning” (Merriam-Webster, 2022).
Basically, forming a conclusion based on generally accepted facts or statements. Deduction is a
dominating research approach in natural sciences and is known by many people as the primary
way to conduct scientific research (Saunders et al., 2019, p. 154). An important theme in
deduction is the search to explain relationships between concepts and variables. The first step is
basically forming a hypothesis around two or more variables, or several hypotheses.

Induction comes from the word “induce”, which means to derive by reasoning, to lead to
something as a conclusion or suggestion based on different facts. The idea is to pull together
these facts or particulars in order to prove a general statement (Rothchild, 2006, p. 2). It is a
form of reasoning involving an element of probability. Induction generalizes results based on the
observations of many different particulars (Rothchild, 2006, p. 2). In contrast to deduction,
induction focuses on theory following data rather than the other way around (Saunders et al.,
2019, p. 155). In inductive case research, theories are developed from a data-driven standpoint
and some form of grounded theory approach is often used (Eisenhardt, 1989; Yin, 2003, cited in
Mantere & Ketokivi, 2013). For example, if you observe a small group of people enjoying the
same meal, you may induce that the meal is good and try it yourself, based on the generalization
of your observation.

Abduction is the type of reasoning that can bring about new beliefs. This type of reasoning
begins as almost all research begins, with some type of feeling. Feelings such as surprise, doubt
or anxiety are all reasons for new research and science (Reichertz, 2004, p. 159). Abduction
begins with a moment of surprise but is then replaced by the ability to understand and make
predictions about the phenomena. According to Reichertz (2004, p. 160) the starting point for
abduction is empirical data. Scientists then interpret the data, making it possible to arrive at new
ideas by recontextualizing the data (Reichertz, 2004, p. 160). Furthermore, abduction does not
start with any theories that motivate the researcher to go in a certain direction. Instead,
abduction seeks to find theories along the way, motivated by the feeling that a theory is needed
to explain the surprising facts that may arise at any time during the research (Reichertz, 2004,
p. 159).

In the context of our study, we believe that inductive reasoning is the most suitable option. This
is because induction is the favored choice when taking a stand in the interpretive paradigm.
Inductive reasoning requires in-depth research and qualitative methods of analysis, and it also
works better with smaller samples (Saunders et al., 2007, p. 145). This allows us to better find
patterns in our data. All of these recommendations suit our study and were the reason for
choosing induction as our research approach.

4. Research Methodology
This chapter's purpose is to establish different approaches to research and discuss how the
research of this thesis will be conducted in the best way possible. Research methodology is
defined by Leedy and Ormrod (2001, p. 14) as “the general approach the researcher takes in
carrying out the research project”.

4.1 Research Design


There are three main ways of conducting research: quantitative, qualitative and mixed. The
researcher needs to use the method most suited to answering the research question. If the
research question focuses on distinct and measurable results, the method used should be able to
encompass questions that can give numerical data. If the research question is better answered
through an understanding of people and more complex thought processes that cannot be directly
captured in numerical data, then a qualitative method is more applicable (Williams, 2007, p. 65).

Quantitative research emerged around 1250 A.D., driven by the need to quantify data. It has
since been developed and mainly used in Western culture as a way to create new knowledge
(Williams, 2007, p. 66). Quantitative research can be used to explain situations and relations
between variables. “Quantitative researchers seek explanations and predictions that will
generalize to other persons and places. The intent is to establish, confirm, or validate
relationships and develop generalizations that contribute to theory” (Leedy & Ormrod, 2001,
p. 102). Quantitative research also typically involves numeric data, and the researcher usually
uses some form of mathematical model and methodology to analyze the data collected
(Williams, 2007, p. 66). According to Leedy and Ormrod (2001), there are three broad styles of
quantitative research: descriptive, experimental and causal comparative. There are a few different
methods for conducting quantitative research, such as the descriptive research method,
correlational research, developmental designs, observational studies and surveys. The
correlational research method is based on examining the relationship between two characteristics
of the group of study (Williams, 2007, p. 67). In correlation-based studies, the researcher focuses
on finding out if two or more variables are related. The developmental design focuses on how
different characteristics of a study group change over time, and there are two types of designs:
cross-sectional and longitudinal. The cross-sectional study is based on a comparison between two
separate study groups with the same parameters, while the longitudinal study is based on
studying a group over a longer period of time (Williams, 2007, p. 67). Observational studies
primarily observe one particular aspect of human behavior and try to analyze it with as much
objectivity as possible. Lastly, the survey method is used to try to capture behavior in the
moment and is mainly used for sampling data from respondents who can be seen as
representative of a population. Survey research is one of the main ways to gather data in the
social sciences from a quantitative perspective (Williams, 2007, p. 67).

Qualitative research can be seen as a holistic approach that focuses on doing research in a natural
setting. This gives the researcher the possibility to develop a high level of detail from the actual
human experience (Williams, 2007, p. 67). One of the identifiers of qualitative research is that
the social aspects being investigated are seen from the participants' viewpoint. According to
Bryman & Bell (2018, p. 356), a few characteristics of qualitative research can be identified:
seeing the phenomena through the eyes of the participants, the ability to better represent the
context of the phenomena, the flexibility and lack of rigid structure in the investigation of certain
topics and phenomena, and the ability to develop concepts and theories based on the outcomes of
the research process. “There are five areas of qualitative research: Case study, ethnography study,
phenomenological study, grounded theory study and content analysis. These five areas are
representative of research that is built upon inductive reasoning and associated methodologies”
(Williams, 2007, p. 67). Qualitative research thus builds on inductive rather than deductive
reasoning. The observational nature of qualitative research poses questions that the researcher
tries to explain, and there is a close connection between the observer and the source that differs
from the quantitative research design (Williams, 2007, p. 67). Empirical data is collected within
this setting and used to explain different actions and phenomena related to social behavior.

Mixed method research, in turn, is the combination of qualitative and quantitative research,
collecting and analyzing both numeric and narrative data and combining the results. This is
usually done through close-ended surveys that collect a lot of data quickly through the
quantitative method, after which the researcher can conduct interviews with a smaller population
to get more detailed answers and ask more open-ended questions (Williams, 2007, p. 70). The
goal of using the mixed method approach is to maximize the strengths of both approaches while
minimizing their downsides. In some cases, being locked into either approach can have negative
effects on the outcome of the study, and in those cases the mixed method is preferable. The
mixed approach gives researchers the ability to further develop theories that are grounded in
both the quantitative and qualitative perspectives and can therefore give better insight into the
complex nature of phenomena from the participants' point of view and relation to them. The fact
that quantitative and qualitative research are not only compatible but also complementary makes
the mixed method highly relevant in the situations where it is needed.

This study will be based on a qualitative research design, mainly because it is best suited to the
nature of the research questions and because the subject as a whole requires more flexibility in
the analysis. The success of digital maturity measurements and their potential issues are difficult
to capture with quantitative methods, which would therefore not be a suitable approach for this
study. The research methodology of this paper will resemble a phenomenological study and thus
have a large focus on the experience of the participants, trying to gather data from their
knowledge within the study field (Williams, 2007, p. 69). This method relies on collecting data
through long-form interviews that serve the purpose of more deeply understanding the
participants' relation to the topic, after which the researcher focuses on finding clusters of data
that answer the research question. The data collected will be used in the analysis to identify
common themes between the participants' experiences (Creswell, 1998, cited in Williams, 2007,
p. 69).

4.2 Data Collection

4.2.1 Semi-structured Interviews


Since our research is based on a qualitative approach, semi-structured interviews proved to be
the best option. This is because semi-structured interviews allow us to ask more open questions
that do not require short and concise answers. Because of this we can have an open dialogue
with each participant, which will in turn provide us with personal thoughts and beliefs
surrounding the research subject. Since the sample size of the data collection will be rather small,
interviewing is a suitable way forward. Moreover, it is the most common choice of data
collection when it comes to small-scale qualitative research like this one (Drever, 1995). In a
semi-structured interview format, the general structure and main
questions will be outlined in advance. The more detailed structure and follow-up questions will
come naturally as the respondents cover each main question freely during the interviews. This
means that the respondents have a certain freedom in what to talk about, how to express it and
how much to say (Drever, 1995). According to Rabionet (2011, p. 563) qualitative interviewing
is a powerful tool that captures how people see and make meaning of their experiences. The
same author also proposes that in order to conduct a semi-structured interview, one should
follow six stages: Selecting the type of interview (1), establishing ethical guidelines (2), crafting
the interview protocol (3), conducting and recording the interview (4), analyzing and
summarizing the interview (5) and reporting the findings (6) (Rabionet, 2011, p. 563).

Stage 1: Selecting the kind of interview - As mentioned previously in this text, we chose this type
of interview as it was fitting for a qualitative approach with a smaller sample size. We wanted to
address some general fields of interest without making it too specific. This approach allows us to
address the general topics of interest and still get to hear the respondents’ stories in a natural
dialogue. An unstructured interview would risk straying too far from the subject matter and
particular fields of interest.

Stage 2: Establishing ethical guidelines - It is important for us to conduct our interviews in an
ethical way towards our respondents. Issues we address in our ethical guidelines are consent,
identity, confidentiality and protection. As always, information can be sensitive and therefore
needs to be protected, alongside the confidentiality of our respondents.

Stage 3: Crafting the interview protocol - There are two important components to crafting our
interview protocol. The first is how we as interviewers address and present ourselves to the
respondents. We need to create an environment that is safe and protected to ensure that our
respondents can answer each and every question in the best way possible. By starting our
interviews by addressing our ethical guidelines and the use and scope of the results, we aim to
build trust. The second component of crafting the protocol is to decide what questions should
be asked during the interviews. The questions should be in line with our research question and
purpose of our study to get the information necessary. It is of importance that we pay close
attention to the relationship between questions asked and the content produced by our
respondents (Rabionet, 2011, p. 565).

32
Stage 4: Conducting and recording the interviews - Since we need to put everything that is said
during the interviews to paper, we will use audio recording to capture the information. The audio
recordings will then be transcribed to paper. There are a few ways to go about this process,
another way of conducting and recording interviews would be to write notes simultaneously or
video recording (Rabionet, 2011, p. 565). We felt that audio recording suited our approach best.

Stage 5: Analyzing and summarizing the interviews - In this stage it is important to be thorough
and careful with each interview. We need to ensure that every recording is analyzed properly so
that we do not miss any information. This is where we pick out the pieces of information that we
think could help us form a result based on the purpose of our study. Studying qualitative data
analysis as well as previous semi-structured interview analyses will help us know what to look
for.

Stage 6: Reporting the findings - Studying and getting to know the qualitative body of
knowledge is vital to better understand how to report our findings (Rabionet, 2011, p. 565).
Issues with reporting findings also need to be considered. These could be trustworthiness,
disclosure and consequences for example. Learning to address such issues will help us greatly in
the last stage of our interview process.

4.2.2 Interview Guide


The interview guide serves as a framework for the researchers to follow and gives structure to
the questions asked. We decided to base the study questions on both the topics and the references
that are considered the main focus of the research, as reviewed in Table 6. The interview guide
serves as the main source of questions, but the exact wording and the follow-up questions will
change depending on the interviewee, since participants will have more or less experience
regarding the different topics discussed. There might also be some participants who share less,
or who do not understand a question well enough to deliver a fulfilling answer. In this case, the
interviewer needs to add follow-up questions to further develop the conversation. Exactly what
type of questions cannot be decided beforehand and needs to be decided in the moment by the
researcher, although we provide some examples of follow-up questions within the structured
interview guide shown in Table 6. No minimum time is set for the interviews, but one hour is
booked for each session to make sure that there is enough time. The interviews will also be
recorded and transcribed by computer software to minimize time spent transcribing. The
recordings and the automated transcripts will be reviewed and checked for mistakes made by the
automation.

Question 1: What do you think of when you hear the word “digital maturity”? Follow-up: Why
do you think that? Can you elaborate on that? Reference: Chanias & Hess (2016). Topic: Digital
Maturity.

Question 2: Do you have any experience in digital maturity assessments? Follow-up: If so, was
the experience meaningful? Topic: Digital Maturity Assessments.

Question 3: It is shown that success rates in digital transformation processes are low. Why do
you think that is? Follow-up: Would you be able to explain further? Reference: McKinsey
(2018). Topic: Digital Maturity.

Question 4: What role do you believe management has in the process of digital transformation?
Follow-up: Do you think management is the most important factor in digital transformation?
Reference: Kane et al. (2017). Topic: Importance of Management.

Question 5: What change efforts does management need to implement in order to be more
digitally mature? Follow-up: Is there a need to continuously develop current processes?
References: Kane et al. (2017); Bergman & Klefsjö (2016). Topics: Importance of Management;
Change Management.

Question 6: Do you see any drawbacks in the current models? Follow-up: What would you like
to change in existing models? Topic: Digital Maturity Assessments.

Question 7: A major issue for companies is converting identifications into actions. Do you
believe that digital maturity assessments should provide clearer suggestions on how to become
more digitally mature, or is it up to the organization to create their own path? Follow-up: Why?
Please explain. Reference: Gill & VanBoskirk (2016). Topics: Change Management; Digital
Maturity Assessments.
Table 6. Interview guide with questions, follow-up questions, references and topics.

4.3 Data Sampling

4.3.1 Sample Size


Sample size can be the determining factor in finding enough data to support a specified theory
or hypothesis. An adequate sample size is also important to avoid errors and biases, particularly
in quantitative studies. Finding the right sample size matters since it is related to the complexity
of the study population. While a large sample is less likely to result in biased findings, there are
diminishing returns as the sample size grows. Determining the right size depends on how much
solidity can be extracted from the sample without wasting the researchers' time and resources
(Taherdoost, 2016, p. 24). In qualitative research, however, there is a different focus. Since the
rule of diminishing returns also applies here, more data does not necessarily lead to more
information. Frequency is rarely important, as a single piece of data or a single occurrence can
be just as useful within the structure of qualitative studies (Ritchie et al., cited in Mason, 2010,
p. 1). This is because qualitative research is more focused on meaning and underlying
information than on making a generalized hypothesis about the discussed topic. Limited
resources are also a more relevant factor in qualitative research, since these methods usually take
a lot of time per participant to extract the information relevant to the study (Mason, 2010, p. 1).
Several factors should be part of determining the sample size of a qualitative study. Sample size
in qualitative research can be affected by whichever scientific paradigm the study falls under; a
study oriented towards positivism, for example, will require a larger sample size than in-depth
qualitative research in order to get a representative picture of the whole population. There is also
support for having relevant findings from only one participant. While having only one study
participant is highly rare, some reports have presented meaningful results (Boddy, 2016, p. 426).
Theoretical saturation could be one way to theoretically determine the sample size of qualitative
research, and practical research shows that samples of 12 may be the point where saturation
occurs within a relatively homogeneous population and there is no more gain in expanding the
sample size (Boddy, 2016; Mason, 2010).

Based on this knowledge regarding sample size within qualitative research, we will aim for a
sample size of five to ten participants. This is because we focus on a paradigm of subjectivism
and interpretivism, which emphasizes the subjective nature of knowledge and calls for a smaller
sample size. Regarding theoretical saturation, our study aims to investigate participants with
different perspectives and from different sides of the topic, and we will therefore aim for a
sample size below the saturation threshold described by Boddy (2016) and Mason (2010). This
is because the sample is expected to lean towards a more heterogeneous perspective. We have
chosen to build from a predetermined sample size towards the point of saturation. We began by
conducting three interviews and continued to interview participants until we believed we had
reached saturation at five participants. By doing this, we got enough information from the
participants to answer our questions without spending unnecessary time on conducting
interviews.

4.3.2 Sampling technique
Data is important in research as it acts as the foundation for a better understanding of a
theoretical framework (Etikan et al., 2016, p. 2). It is then up to the researcher to choose a
suitable manner of collecting data and to decide who it should be collected from. The type of
sampling technique needs to be in line with the type of research being conducted, because
improperly collected data will be impossible to analyze correctly (Etikan et al., 2016, p. 2).
There are a few different steps that are important in the process of data sampling. Taherdoost
(2016, p. 19) describes a process where the researcher clearly defines the target population for
the study and also defines a frame within which the sampling should be held, i.e., which part of
the population the focus is on. The chosen sampling technique can either be probability based or
non-probability based. Probability based sampling covers methods that choose samples at
random and can be divided into a few categories or more defined methods. In simple random
sampling, all cases in the population have an equal probability of being included. Systematic
sampling is based on taking a smaller sample by, for example, selecting every fifth member of
the population. Stratified random sampling is where a population is divided into subgroups and
a random sample is taken from each subgroup; the subgroups could be based on attributes such
as gender, occupation or industry. Cluster sampling is where the whole population is divided into
clusters and a random sample is taken from each cluster to be used in the final sample of the
research. Multi-stage sampling uses a step-by-step process where the researcher narrows down
the population; this can be used if researchers need to narrow down a sample based on many
levels of criteria (Taherdoost, 2016, p. 21).

Non-probability sampling is the other main type of sampling and is based around a selection of
participants rather than a random selection. This type of sampling is often used in case study
designs and qualitative research (Taherdoost, 2016, p. 22). Quota sampling is one of the methods
used in non-probability sampling and focuses on choosing participants based on the basis of
predetermined factors and characteristics, this results in the sample having the same
characteristics as the wider population (Davis, 2005, cited in Taherdoost, 2016, p. 22). Snowball
sampling is where the researchers use a few cases to help encourage others to take part in the
study and is used to increase sample size. This is normally used to reach small populations that
are difficult to access due to different aspects. This can be populations such as inaccessible
professions and secret societies (Breweton and Millward, 2001, cited in Taherdoost, 2016, p. 22).
Convenience sampling is based on the method of selecting participants that are easily available
and is convenient when the researchers do not have access to more premium sources. This is
commonly used by students, since it is an easy and fast way to get participants by for example
asking friends and fellow students to take part in the study (Ackoff, 1953, cited in Taherdoost,
2016, p. 22). Purposive sampling is a method focused on selecting participants based on their
experience and competence in the field of research. This method is used to get information that
normally is not obtainable from other sources and where the participants are essential (Maxwell,
1996, cited in Taherdoost, 2016, p. 22). Purposive sampling is also referred to as judgment
sampling. It does not require underlying theories or a particular set of participants and is a
nonrandom sampling method. Instead, the researcher decides on what needs to be known and
chooses their participants based on who is willing to provide them with information (Etikan et
al., 2016, p. 2). According to the same authors, it is important that the candidates for the
sampling have a certain virtue of knowledge and experience in the selected field. Purposive
sampling chooses participants based on the qualities that they possess rather than availability.

Although this sampling technique selects its participants primarily on the basis of knowledge and
experience, no component should be overlooked. Knowledge, experience, availability and the
ability to communicate and express opinions are all important factors for a solid candidate
(Etikan et al., 2016, p. 3).

Purposive sampling is a technique that suits our research question and the overall nature of this
study well. Since the question relies on experience from the real world of digital maturity
assessments and is based on understanding the work done by managers within this field, it is
important to use a non-probability-based sampling method. This ensures that the study has
participants who are well acquainted with the subject. It is also important that the participants
chosen have the expertise required to answer complicated questions regarding their experiences,
and participants were therefore sampled using the purposive sampling method. With careful
consideration, we have chosen the participants that would best suit the study and based the
choices on different experiences and positions to get a wide range of perspectives. This was done
to include participants who have been involved in different stages of the process, from
conducting assessments from an external position as consultants, to managers within
organizations that have either been part of previous assessments or are in the process of doing
one.

For the purposes of this study, we believe that a purposive sampling technique is a good starting
point. This is justified since the overall theme of the study is quite focused and dependent on
previous experience and knowledge regarding digital maturity measurements. By focusing on
finding participants who have this knowledge, we can get better data on real-world scenarios and
issues within this topic. It should be said that a perfect purposive study will not be possible, since
that would require access to the highest level of experts within the field, which we do not have
the resources to obtain. We have instead focused on participants at lower levels in the
organizations, with various levels of knowledge and expertise regarding digital maturity
assessments and with experience of, or a standpoint on, the connection between digital
transformations and management. We see this as a compromise within the purposive sampling
method.

4.3.3 Presentation of participants


The participants were chosen from within our own network, and the selection was further built
upon through suggestions from the participants' respective networks. We chose to contact
candidates who would be relevant to the study and who also had extensive experience within
management and change management connected to digital maturity assessments. In this chapter
we will further describe the participants, their roles and their experience within the field.

Participant 1 is the CEO of an IT/management consultancy company in Sweden. The company
is an SME that operates in both the public and private sectors. The participant has roughly 30
years of experience in IT and management consultancy. They work with various types of
projects in the IT industry, some of which have involved digital maturity, and this candidate was
therefore chosen partly because of their relevance in the field of digital maturity, but also because
of their extensive experience and knowledge. The participant has also created their own digital
maturity assessment models over the years, which adds value and aligns with our purposive
sampling method.

Participant 2 is a management consultant at an IT/management consultancy company in
Sweden. With a total of 10 years of experience of digitalization projects, this candidate fits our
requirements. The participant worked with digitalization projects before arriving at their
current company and is well aware of the phenomenon. They have worked closely with a few
assessment models that target the public sector. Their first interaction with digital maturity
assessments came through a collaboration between their previous company and Gothenburg
University. The university developed a model that has since been used extensively in Sweden.

Participant 3 works as a project manager and focuses on digital solutions connected to e-health.
With 15 years of experience as a project manager, Participant 3 has worked with many
different experts within the digital field. The participant works in the public sector, with
healthcare and digitalization in that sector. This participant was chosen for their years of
experience of both leading and developing digital solutions. The participant has also been part
of many digitization efforts within the healthcare sector and has seen what has worked and what
has not. The experience of leading groups through the digital transformation process is
something that we value, and we believe that this participant's thoughts are useful for our
analysis.

Participant 4 is the head of analysis at an agency that delivers digital government. They have
12-14 years of experience in the digitalization industry. Their projects are targeted
towards government agencies and the public sector. This participant has extensive knowledge
of digitalization as a whole, but also of digital maturity. During these 12-14 years,
several digital maturity projects have been carried out. This participant was chosen because of
their experience and field of work, and will hopefully contribute valuable insights to the analysis.

Participant 5 works as a development strategist with a focus on digital development. Their
experience as a developer is around four years, and before this they had six years of experience
as a manager. At the time of the interview, the participant was working closely with digital
development in a role directly connected to digital maturity measurement tools. The participant
works in the public sector, within the health department.

4.4 Analysis of Collected Data


There are different kinds of methods used for analyzing data in qualitative studies. One
established method is framework analysis, which is based on exploring important aspects of
society by following a rigorous framework with five distinct phases. These phases help the
researcher to understand and interpret the data gathered and are structured so that the researcher
can clearly draw conclusions or make further analysis based on the results (Furber, 2010, p. 97).
The first stage of the data analysis is familiarization, where the researchers immerse themselves
in the data. The researcher does not necessarily need to read every transcript in the entire data
set, but if the researcher has not taken part in all the data gathering, it is useful to read or listen
through the transcripts to better understand the context of key quotes or ideas from the
interviewees. The next stage, developing a theoretical framework, is the process of putting the
participants' answers into concrete groups or themes. This step is based on the notes that the
researchers took during the initial interviews. The next step is indexing, in which the transcripts
are gone through and matched against the constructed theoretical framework. This is done by
reading through each transcript and marking every sentence that matches a part of the
theoretical framework. This is a time-consuming effort, and the researchers may need to develop
a system to manage all the data analyzed; there are also computer programs that can help
automate this step (Furber, 2010, pp. 98-99), and a minimal sketch of what such automation
could look like is shown after this paragraph. Charting is the next step in the process and
involves reducing the original data into manageable sections by summarizing the data into
thematic charts. The summaries are then connected to the appropriate part of the theoretical
framework, preferably in a chart, which makes the information more accessible and easily
visible for the authors and readers (Furber, 2010, p. 99). The final step of the framework analysis
method is synthesizing, which revolves around reviewing the charts; at this stage the theoretical
framework may be revised if needed, and a final theoretical framework is agreed upon.
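
To make the indexing step more concrete, the sketch below shows one way such automation could look. It is only an illustration: the theme names, keywords and transcript sentences are hypothetical placeholders and do not represent our actual framework or data.

# Minimal sketch of automated indexing: matching transcript sentences
# against a theoretical framework using illustrative keywords.
from typing import Dict, List

# Hypothetical framework themes mapped to indicative keywords (placeholders)
FRAMEWORK: Dict[str, List[str]] = {
    "Importance of Management": ["management", "leadership", "leader"],
    "Change Management": ["change", "follow up", "routine"],
    "Digital Maturity Assessments": ["assessment", "model", "measurement"],
}

def index_transcript(sentences: List[str]) -> Dict[str, List[str]]:
    """Return, for each theme, the sentences that mention one of its keywords."""
    indexed: Dict[str, List[str]] = {theme: [] for theme in FRAMEWORK}
    for sentence in sentences:
        lowered = sentence.lower()
        for theme, keywords in FRAMEWORK.items():
            if any(keyword in lowered for keyword in keywords):
                indexed[theme].append(sentence)
    return indexed

# Example usage with an invented transcript fragment
transcript = [
    "The leadership needs to support the project after delivery.",
    "We used an assessment model to measure where we are today.",
]
for theme, hits in index_transcript(transcript).items():
    print(theme, "->", hits)

In practice, such keyword matching only gives a rough first pass; every match still has to be read and judged by the researcher before it is kept in the framework.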

Another method of analysis that could be used is thematic analysis, as described by Braun and
Clarke (2006, p. 77). The use of thematic analysis is very common, but it is not usually described
through a model as it is here. The core approach of thematic analysis is to find patterns
or themes within the collected data and organize the answers based on these themes. The first
step of thematic analysis is to familiarize yourself with the collected data, which is done by
transcribing and re-reading the interviews and notes. The second step is to code certain
interesting or inspiring parts of the data, which could mean gathering quotes or highlighting
certain insights. The third step is to generate potential themes and divide the answers into the
themes chosen by the researcher. The next step is to review the themes and refine any part that
seems unclear or poorly defined. After this step the themes are named so that they can later be
used within the report as thematic evidence or as a point of discussion.

We will be using the foundation of thematic analysis to structure and analyze the data gathered.
This foundation gives us the possibility to structure our findings based on the current
theoretical framework, and later make changes based on the themes that emerge from the data.
With that information gathered, we can begin to build a new theoretical framework based on
answers that may differ from our initial conceptual framework as seen in Figure 4. This method
gives us the chance to compare the participants' input with the theories in a structured analysis.
We will use this data to build a new conceptual framework that can serve as the basis for future
theoretical development or be used in the creation of new models within the field of digital
maturity. We will structure the findings in a table, showing the summarized answers connected
to each of our own conceptual themes, and also present new themes if the data does not match
previous ones, thereby expanding the conceptual framework; a simple sketch of such a thematic
chart is shown below.
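
As an illustration of the kind of thematic chart we intend to produce, the sketch below groups summarized answers per participant and theme. The themes and summaries are invented placeholders used only to show the structure, not our actual findings.

# Minimal sketch of a thematic chart: summarized answers grouped by theme.
# Themes and summaries below are placeholders, not our actual findings.
from collections import defaultdict

# (participant, theme, summary) triples as they might come out of the coding step
coded_summaries = [
    ("Participant A", "Importance of Management", "Leadership must stay present after delivery."),
    ("Participant B", "Change Management", "Projects die out when managers let go too early."),
    ("Participant A", "Change Management", "Understanding, time and resources are needed."),
]

# Group the summaries by theme so that each theme forms one row of the chart
chart = defaultdict(list)
for participant, theme, summary in coded_summaries:
    chart[theme].append(f"{participant}: {summary}")

for theme, entries in chart.items():
    print(theme)
    for entry in entries:
        print("  -", entry)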

4.5 Ethical Considerations


Ethical considerations are taken very seriously by both research institutes and universities (Mills
& Birks, 2014, p. 215). Ethical codes are important to understand, as researchers face various
ethical issues during the process of conducting research (Bell & Bryman, 2007, p. 63).
According to Gibson and Brown (2009, p. 60) it is vital to consider the general ethical issues in
order to be professional in your work. The general issues are thought to be: informed consent,
confidentiality, avoiding harm, integrity and professionalism (Gibson & Brown, 2009, p. 60).
While these can be seen as the main ethical issues, it should be noted that ethics is extremely
broad and spans many fields of research. According to Bell and Bryman (2007, p. 63),
management research should not follow the same ethical principles as other social research,
even though it tends to do so. Bell and Bryman (2007) suggest that the main ethical
principles of management research differ from those of other social research; therefore,
management research should adopt a different approach better designed for the field of
management. There are certain ethical areas of particular interest for this type of research that
differ from others. The areas taken into account are conflicts of interest and affiliation, power
relations, harm, wrongdoing and risk, confidentiality and anonymity (Bell & Bryman, 2007, p.
67). According to the same authors, these areas are all important to look more closely into
because of how they affect management research. Because of this, a slightly altered ethical
approach will be taken compared to a more “classical” approach as presented by, for example,
Gibson and Brown (2009). Based on their analysis of ethical differences, Bell and Bryman
(2007, p. 71) have identified eleven categories of ethical principles that suit our research
approach.

Harm to participants - Making sure that participants, researchers, and anyone else involved in the
research process are both mentally and physically well, and ensuring that no harm is done. We
have taken this into account when choosing our participants for the interviews.

Dignity - Respecting the dignity of anyone involved (participants, researchers or anyone else
involved) is required in order to avoid discomfort or anxiety.

Informed consent - Ensuring full consent of each research participant. We have ensured that we
have full consent from our participants by explaining what type of research we are doing and
what type of questions will be asked during the interview.

Privacy - Protecting and avoiding invasions of privacy for each research participant. Different
actions are taken to protect everyone's privacy. These are further explained in both the
confidentiality and anonymity sections.

Confidentiality - Research data is held confidential for everyone involved, whether individuals,
groups or organizations. The data collected will be kept confidential and will not be shared with
anyone else. This is something that we assured our participants of before conducting the
interviews. However, we have made it clear that the thesis will be published once it is done,
which is not a problem for any participant.

Anonymity - Protecting the anonymity of participants involved in the research. We have made it
clear to every participant that their contributions will be anonymous. Our participants know that
their real information (names, age etc.) will not be used in the research. Instead, fictional names
will be used to protect our participants and keep them anonymous. On top of that, we will not
disclose the names of the companies involved.

Deception - Acknowledging and avoiding deception throughout the process, whether through
lies or misleading behavior. To prevent this, nothing will be done that could be misleading in
relation to our research. Everything will be kept entirely truthful, and nothing will be altered
afterwards. The research findings are taken for what they are.

Affiliation - Any affiliations that may influence the research need to be disclosed. Both
professional and personal affiliations need to be considered. We have discussed our personal as
well as our professional affiliations, and there is nothing that will affect the research in any way
due to affiliations.

Honesty and transparency - Information about the research should be communicated in an open
and honest way to all interested parties. All participants are informed about the research and how
it will be used.

Reciprocity - There should be some mutual benefit for both researchers and participants, for
example through collaboration or active participation. After our thesis is done, any participant
who wants a copy of it will receive one. We will also send it to all companies involved so that
they may use it for inspiration in their own work.

Misrepresentation - Avoiding all misleading, misrepresenting and misunderstanding of research findings.

All of these ethical principles provide great guidance in ensuring that the research as a whole is
carried out in a way that is ethical towards both participants and researchers. Throughout the
process of collecting data, we will rely on these eleven principles, creating a safe and fair
environment for everyone.

4.6 Truth Criteria


Over time, many frameworks have been developed to enhance the credibility of both quantitative
and qualitative research (Mills & Birks, 2014, p. 229). Essentially, truth criteria validate the
research through different dimensions depending on the framework. Focusing on the credibility
of qualitative research, many proposed frameworks build on the criteria established by Lincoln
and Guba (1985). The criteria that cover the trustworthiness of qualitative studies are credibility,
transferability, dependability and confirmability (Lincoln & Guba, 1985; Mills & Birks, 2014,
p. 229). Furthermore, some contemporary frameworks may suggest different criteria than
traditional ones. We have decided to follow a rather traditional approach inspired by the work of
Lincoln and Guba (1985), which has been supported by many authors since. However, Cope
(2014) suggests that a further criterion should be included, which is covered in the text below.

The first criterion to consider is credibility. It refers to the truth of the data, the views of the
participants and the researchers' interpretation of those views (Cope, 2014, p. 89). To boost the
credibility of a study, researchers should describe their own experience as researchers and at the
same time verify the research findings with the participants. By showing engagement,
observation and audit trails, the credibility of the study is further supported (Cope, 2014, p. 89).
In our research we have considered the criterion of credibility and have therefore taken actions
to enhance it. During our interviews we always ask our participants whether our interpretation
of their answers is correct. Further, when transcribing the audio to paper we double-check that
the information is correct in order to eliminate any false or erroneous information.

Secondly, transferability is a criterion that describes the ability of a qualitative study to be
applied to other settings or groups. A study is only deemed transferable if individuals not
involved in the study feel that it gives meaning to them (Cope, 2014, p. 89). It can also be
seen as transferable if readers can associate the results with their own experiences.
Transferability is not seen as a necessary criterion for validation, as qualitative studies may not
aim for generalizations about a subject or phenomenon (Cope, 2014, p. 89). This is also somewhat
true for our research. We believe that it will be difficult to transfer our findings to a context
other than digital maturity.

Third, dependability is the next criterion to consider. It refers to the constancy of data over
similar conditions (Cope, 2014, p. 89), and can be achieved if other studies replicate the
findings with similar participants in similar conditions. If this can be accomplished, a study
would be considered dependable (Cope, 2014, p. 89). Researchers are often personally
involved in the research process, making it subjective. It is therefore difficult to replicate any
result that we reach, since our approach demands that we involve ourselves in the analysis of
the research.

Further down the criteria line is confirmability. In order to achieve this criterion, researchers
must show that the data collected reflects the participants' responses and is not altered by the
researchers' biases or viewpoints (Cope, 2014, p. 89). In a practical sense, researchers should
be able to explain how they arrived at a certain conclusion or finding based on the data
collected from participants. This can, for example, be done by using quotes from participants.
We will use strong quotes from participants in order to ensure confirmability and help readers
understand our findings.

Finally, the last criterion is authenticity. By faithfully exhibiting a participant's feelings and
emotions about their experiences, readers can understand the essence of the experience (Cope,
2014, p. 89). This is usually done by providing the reader with quotes from the participant
under the relevant theme. Quotes will be very useful for this part as well. The use of quotes
throughout the text combines confirmability and authenticity: readers will understand both the
emotional experience of the participants and our interpretation of it.

5. Empirical Findings
In this chapter we present the findings from our collected data. All of our data was collected
through purposively sampled, semi-structured interviews. Because we want to test our conceptual
framework, the thematic analysis described in Chapter 4 is used, so that our findings can be
compared with the conceptual framework. The data is placed into a specific category
depending on the answer. These categories are all part of the conceptual framework and theory.
If any statement from a participant does not fall into one of our categories, a new one is
made, thus building towards a new, revised framework founded on the responses of our
participants.

5.1 Digital Maturity


What do you think about when you hear the word digital maturity?

Participant 1:
When asked what the participant thought about when they heard digital maturity, the response
was:

“The ability to implement new solutions and make sure that something actually happens with
them.”

The answer was short and to the point, describing the phenomenon as a tool for implementing
solutions and following up on them afterwards. Since our aim was to dig deeper into the subject,
we asked the participant why they viewed digital maturity in this way. The overall
perception was that the participant viewed digital maturity not only as an ability to implement
new solutions, but as something much larger that involves the whole organization: a process that
involves management, leadership and administration in order to make it work.

“Digital maturity is also about making sure that the ones in charge of a project can ensure that
the benefits of a project occur in the operation. It is not only about the value realization of a
project, but also taking care of the project so that change actually happens. It is about
management, leadership and administration. Everything from personnel to work procedures.”

According to the participant, the initial stages of creating a project and implementing it are
considered easy for most. Organizations that have certain components such as time, resources
and quality can easily establish a project. However, digital maturity is more than that. Following
up on the process and guiding people in the right direction ensures that the project will actually
be delivered in the end.

“It is easy to define a project, and usually easy to implement an initial phase of one. If you have
time, resources and quality that makes it easier. But digital maturity is more than that.”

Participant 2:
The participant's idea of digital maturity was that it covered an extremely broad area and had many
types of applications for organizations. According to the participant it did not only mean how
well you could, for example, implement hardware or use technology. There were other parts to
digital maturity, components such as the structure of the organization, the leadership and the
digital heritage.

“I am thinking that it is something that a lot of organizations need to get better at and something
that covers a lot of parts in the organization. It has a very broad perspective. It is absolutely
about usage, but also about the structure inside the organization, how the leadership works and
what type of digital heritage the organization has in their system environment.”

Participant 3:
The participant gave clear answers to the first interview question and described their
associations with the term as complicated and difficult to grasp, saying:

“What is digital maturity and when are we mature? What should be mature, is it people or the
organization?”

The participant also mentioned that they first came into contact with the term digital maturity
through the development of the tool called DIMIOS. They also noted that the language used
when discussing subjects related to digital maturity is difficult and abstract. Participant 3 says:

“Personally, I think that it should be self-explanatory and it should be easy to understand what
we are talking about”

The participant goes on to explain that the language used is rather theoretical and that it can be
hard for people within the organizations to understand the vocabulary.

Participant 4:
This participant instantly thought about change management and daring to lead change. They
mostly talked about the ability and courage to lead change, which they saw as a very important
factor in digital maturity. The people responsible for a project might not always have the right
type of competence, but it is important that they have the courage to follow through on changes
anyway. According to the participant, digital maturity was not so much about the digital as it
was about change management.

“I think that digital maturity is not too much about the digital aspect. The digital part of it is a
tool that should be used, but it is more about knowing your organization inside and out and at
the same time driving change.”

Another topic that came up was whether digital maturity could be considered achieved simply
by digitalizing. The participant saw digital maturity as something far more complex than that.
When people in the organization are able to start questioning processes and realize that not
everything digital is good, then you have digitally matured.

“The question is if digital maturity is just about digitalizing processes etc. I think that when you
can start asking questions such as: do we need all of these processes? When you get to the point
where you realize that you might be better off with fewer processes, then you are more digitally
mature.”

Participant 5:
The participant expressed that the first thing that came to mind was how well equipped an
organization is to benefit from digital tools and other opportunities. They emphasized how
important it is for an organization to share the same picture of what digital maturity means to
them. It does not have to be the textbook definition, and digital maturity can differ from case to
case and from organization to organization. To benefit from digital tools, organizations need to
be able to see and capitalize on opportunities that come their way, and actively strive towards
development.

It has been shown that success rates in digital transformation processes are low. Why do you
think that is?
Participant 1:
The answer mainly focused on the follow-up of a project. The participant believed that a
significant reason for low success rates was the failure to prioritize follow-up after the initial
maturity assessments had been completed. Too much focus lies on simply completing the
assessments, whereas the pitfalls of the transformations lie in failing to follow up on the
project with measurable indicators that realize value.

“You focus too much on the project/assessment being delivered, and not two years after for
example. The value may not be achieved instantly. There is a lack of following up and using
measurements to indicate the value.”

The participant also gave a good example of how to ensure that the transformation succeeds. One
of their clients' projects includes a management review after the project is delivered, which
ensures that the value has been achieved. The belief was that it is sometimes good to put
pressure on the organization to complete their transformation or any given project. The
connection between projects and quality assurance is generally lacking.

“The management does a revision of the project some time after the project has been delivered.
This way they will know if the value has been achieved or if it needs more work and what needs
to change.”

Participant 2:
This participant thought that a potential factor for this could be too much faith in technology and
that organizations need to have a wider approach in projects. Organizations think that if you have
a thought-out solution and the technology required to deliver, it will implement and solve itself.
There is so much more to the process than just that. The biggest challenge is then to adjust the
organization in different required areas to go along with the proposed solutions for change.

“There is a bit too much faith in technology. You want to achieve value and organizations think
that it will occur after implementing a certain solution to the problem. This is where the real
challenge begins. To adjust areas such as work processes, behaviors and approaches. Only then
can you actually obtain value.”

A follow-up question based on the participant's answers was asked. When asked who was
in charge of implementing and making sure that the organization obtained value from a project,
the answer was simply management teams. At the same time, the participant believed that
management teams often take charge of a project but then move on to other tasks outside of it.
The result is that projects or processes often die out when management teams forget about them.

“The ones that need to be in charge of these things are management teams. They are the ones
who often run projects or operations, but at the same time that is where it tends to rupture. They
often go on to work with other solutions after the initial phase of a project which is not good for
the project itself. Guidance is needed and management teams are sometimes not aware of what is
needed in order to succeed.”

Participant 3:
The participant once again focused on the fact that it is difficult to understand what digital
maturity is and how to actually define it. It was also mentioned that digital maturity has been a
buzzword for a long time. The participant compared digital transformations to the
science-fiction Transformers characters, which can transform and change form without adding
or removing material. The connection the participant made is that digital transformations need
to change form without adding resources, and that this is the challenge organizations need to
solve to be successful. The participant said that one of the things that determines whether these
projects fail is the support from top management:

“It depends on the top management, what goals do they have? What prerequisites do they have?
But it is also about making difficult decisions and challenge the power structure from a worker’s
point of view “

Regarding the people within the organization, the participant said that there is an issue with
people losing their identification with their current or previous role when the changes start to
happen. People do not know their role within the “new” organization, and work therefore
needs to be done to make room for everyone after the transformation.

“Instead of being enabling, people within the organization might become obstructive when put in
a position without the right experience”

The participant also highlighted the need for the top team to understand what the technology can
do for the company and how to use data and data management effectively. This was focused on
extensively, and the participant mentioned that their organization still relies heavily on
traditional document management that has not yet been digitized.

Participant 4:
The participant identified several factors in relation to why transformations fail. One of them was
focusing too heavily on technology bits. In a digital transformation, the participant believed that
focus should mainly be on the operation and identifying the needs of the organization.

“The ones who develop technological solutions don’t always know about the needs of the
organization. It is important to identify. It is easy to focus too much on technology.”

Another factor was the legal side of a project. According to the participant, some bring in the
legal aspects too late. This means that you develop a project and implement a certain system or
solution, and then it does not work out legally. When asked to elaborate on what they meant by
the legal parts, the answer was:

“Often technological solutions are about getting data to flow somewhere inside the organization
or between organizations. You can build how many great technological solutions you want, but if
you are not allowed to use the data in the way the technological solutions were thought to work,
then it will fall. It is a recurring problem.”

Further, this means that you might develop a solution that is not fit for a certain organization
based on its needs or prerequisites. The last factor was that most people are bad at justifying
the value of an investment or a transformation. There needs to be a greater focus on what value
a project or investment will bring, and on making sure that you can achieve that value.

“There are large costs in relation to transformations. Because of this you have to be precise and
clear with how a project will bring value to the organization.”

Participant 5:
The participant expressed that they believed that some organizations that are in contact with
digital transformation processes have other main aspects of their business that tend to take up
most of the development time. They also mention that they believe that time and connection to
science are key components for a successful digital transformation. They also mention that
meticulous planning and investigation need to be done to be able to prepare the organization for
these kinds of projects.

“A county may have main focus areas such as infrastructure, social services or school related
tasks that they put their development, research and energy towards. Meanwhile, the digital
solutions that would have improved their work efficiency in these areas are not prioritized”

5.2 Digital Maturity Assessments
With our first question in this category, we wanted to get a better picture of each participant's
experience and level of knowledge of digital maturity assessments, and whether they had
previously worked with these types of assessments.

Do you have any experience in digital maturity assessments?

Participant 1:
The first participant had previous experience in the field and had worked on a few assessment
models. However, the aim of those models was not specifically to look at maturity levels. The
models were structured similarly to a digital maturity assessment model, and some were based
on known maturity assessment models.

“I did a model based off of Ernst & Young, where you have a scale from 0-4 where 4 is the
highest degree of maturity. I have previously done some variations of models in the past. Usually
looking at what you need to achieve in order to attain more effect in the organization.”

Participant 2:
The second participant had some brief experience in the field. They had worked on a
digitalization program for a municipality in Sweden. The participant was primarily responsible
for change management and value realization connected to the project. It was a large project
with a budget of several million Swedish crowns, spanning four years. There was also a
collaboration with Gothenburg University, which had launched a model called DIMIOS. That
same model was later brought into another workplace and helped them conduct several smaller
digital maturity assessments.

“I previously worked with a project targeted toward a municipality in Sweden. They wanted to
digitize certain parts in their operation, and we helped them with that. The project scale was
large and the prognosis was to deliver within four years. At this time we worked closely with
Gothenburg University and their own model called DIMIOS.”

Participant 3:
The participant said that they have no formal experience with current digital maturity
assessment models, but mentioned that they were involved in a project called “equal
social services” that focused on evaluating competencies within the organization and mobilizing
to serve the interests of the community. The participant worked with several other counties to
develop a questionnaire to help them understand their needs as an organization and the needs of
the public dependent on their services. The project led to the top teams of each county starting to
discuss relevant topics connected to competency and the needs of the public.

Participant 4:
This participant had previous and present experience in the field. Digital maturity is something
that they work with in most projects today. The participant told us that they had worked with
several different maturity assessment models, but also developed their own in-house model.
Their own maturity model is targeted toward government agencies. When asked if the participant
had personally been involved in these projects, the answer was:
“Yes, absolutely. I have been involved in several of these projects. It is something we offer in the
present day as well.”

Participant 5:
The participant mentioned that they are currently the contact person for a large digital maturity
initiative in their organization and have a good understanding of the tools used for digital
maturity measurements. This specific position is relatively new for the participant, but they have
prior experience of digital maturity and other development-related projects.

Do you see any drawbacks in the current models?

Participant 1:
The first participant thought that most maturity assessment models were good for managing
current applications, but that a drawback was that most models did not focus on the digitization
of new operations. According to the participant, some models focused more on digitalization,
but the problem there was that they focused solely on that.

“There are models out there that focus on digitalization of new operations, but then the problem
is that they focus solely on that. There are more areas to consider if you want to completely
succeed in digitally transforming. For example, the digital heritage”.

When asked what they meant by digital heritage, the participant responded that it was the
existing systems in an organization. For most organizations there is a large amount of IT
involved in the structure; for example, logistics and economy systems are usually already
somewhat digitized. If they are, they need to be accounted for in the overall digital
transformation as well; if they are not, they still need to be accounted for. Furthermore, the
participant wanted assessment models to be clearer about where a company positions itself
in digital maturity today, and to provide analysis on how to become better.

“I want a clear picture of where we are today. For example, we are currently at a 1 out of 5 on a
maturity scale. I also want an analysis or some form of suggestion on how to reach level 3 or
above. I have not seen that in any models.”

Participant 2:
The second participant did not have any particular experience or knowledge of assessment
models. At a previous job, a model targeting the public sector had been used; however, the
participant had not been directly involved in that work. Some other names of models came up
and there seemed to be a general knowledge of some of the big models used in Sweden. The
overall knowledge of assessment models was too vague for the participant to be able to identify
any drawbacks.

“ I am not the right person to answer that question. There are some models that I know of, for
example the Gartner model. On the EU level there are so many models, but I am not familiar
with any of those.”

Participant 3:
The participant had no direct connection to the current models and was therefore not able to
answer the question with confidence.

Participant 4:
The participant directly pointed out that it was difficult to measure maturity. This was because
most assessments start off with some sort of survey answered by higher-level employees, for
example strategic managers. Certain questions are hard to answer and hard to get a justified
answer from, such as: how good a leader would you say that you are? The participant believed
that self-assessments were difficult because they could portray the wrong picture of a company
depending on who answered the survey. We dug deeper into models and started discussing.
Something the participant noted was that it was difficult to choose a model and use it straight
away.

“We get a lot of questions regarding different models. For example, is this model good? It is the
same for almost every model, you need to bring it into your own organization and make it your
own. Almost every model need altering in order for it to work properly with the needs and
visions of your own organization.”

We continued to talk about different models, and the participant mentioned a client who
used Gartner's model. When they first did an assessment in 2013, they achieved a relatively high
score. They then did the same type of assessment in 2016 and 2017, where they scored much
lower than in 2013. According to the participant this was because they had gained more
knowledge about their own organization and where they wanted to be in the future; they had a
much clearer vision for change. Furthermore, the participant pointed out that a model was never
to be thought of as a solution.

“It is never a solution, merely a measure of temperature and to get an indication of where we
are today. I can’t see that you can just use a model that will fix everything and transform the
organization. It requires hard work.”

Lastly, we talked about how it is unlikely that a universal model will be developed since all
organizations differ. Instead, you have to choose a model that you like and believe has the right
approach for your organization, and then make it your own and alter it.

“You can’t just bring it back to the organization, it needs to be developed internally. We also
work with value realization, it is the same for those types of projects. You need to adapt the
models to your own organization.”

The participant believed that you can still make models that offer support in the form of good
suggestions for development and change, pointing to different categories and giving general
suggestions.

Participant 5:
Participant 5 expressed that they have experience with some of the current models, particularly
those made for a Swedish audience. They felt that the models should offer more guidance and
suggestions on what companies can do to further develop their digital maturity position. The
participant is currently managing a project in which digital maturity measurements are carried
out.

5.3 Importance of Management


One of the conceptual framework's most important components is the importance of
management, something which we believe heavily influences digital maturity and digital
transformations. In order to obtain information about the subject, we asked questions designed
around the importance of management.

What role do you believe management has in the process of digital transformation?

Participant 1:
According to the first participant it was the most prominent feature of digital transformation.
Even a “bad” project could work out and be handled well as long as management and the leaders
involved understand what type of changes needs to happen, support it, and adjust to the change
needed. If leadership is not present in change processes, you will not succeed.

“The structure of a project is only a part of success, it supports the transformation. Leadership
however always needs to be present. That is where it often crumbles. Almost always.”

Participant 2:
The participant emphasized the importance of management in the process of digital
transformation. It can be difficult to adjust the organization to new changes, therefore strong
leaders are needed to support the ones involved.

“It matters enormously. You need leaders with capabilities in the form of insights and strength in
order to see it through. Although the reality is that almost all managers have too much on their
plate to be able to take the time needed to change.”

When asked whether they felt that management was the most important component for success,
the participant replied that they did indeed think so. However, since the participant works with
these types of questions, they acknowledged a certain bias. Furthermore, the participant noted
that there are several other areas of interest as well. They believed that it is rather easy to
implement a completely automated process for something that is already partly digitized. This is
something that a lot of organizations do today, for example automating administrative tasks that
were previously handled by a human. By doing this, you miss out on the bigger digitalization
and the possibility of complete digital transformation, most likely because it is the easiest thing
to do and a short-term fix to a problem. To achieve digital maturity and transformation,
processes need to be broken down and changed from the core. This is where management and
leadership come into play.

“Sometimes you choose the easiest route and go for short-term solutions to fix a digitalization
problem. But then you miss out on the bigger picture which is the overall transformation. It is a
problem that many organizations face.”

Participant 3:
The participant mentioned that management is an extremely important part of digital
transformation and gave examples of two managers they had worked with. One of them did not
follow the new digital directives and did not encourage the department to follow them either.
The directive was to implement a new booking system so that the personnel did not need to run
around and discuss what time they were booked or what room was taken. The participant
mentioned that the department would have saved a lot of time if the manager had been more on
board with the changes. The participant then described another situation with a manager who
actively worked with digital development. This manager said yes to the new directives, and they
implemented a digital planning tool. Some issues with the previous routines did occur, but after
a period of time the system helped the department become more effective.

“I believe that management is extremely important. If the manager does not believe in the digital
change, the organization won’t either”

The participant said that in their sector it is important to highlight the cost benefits over time in
order to convince top management to allocate resources and give the green light for a project.
They also mentioned that having teams that support development is the only way to become
more digitally mature.

Participant 4:
Leadership was an extremely important factor. They said that if you want to make big changes,
you need a leader who can get their employees to go along on that journey. The participant gave
a good analysis of digital transformation: often these types of projects aim to make processes in
the organization more efficient. This also means that some jobs or tasks will disappear. It can be
hard to motivate your team to go along with a change if they know that their jobs or tasks will
disappear.

“If you need to motivate your team to go on a journey that will imply jobs and tasks
disappearing, then you need to be extremely engaged and strong. You need to make sure that
they understand how that change will be positive even if the environment will be different in the
future”.

When asked if leadership was the greatest pillar of digital transformation, the participant said
that they thought so. They believed that it would be difficult to transform if top management was
not included or engaged. Top management needs to be involved in that journey and has an
obligation to lead it.

Participant 5:
The participant believes that leadership makes a huge difference and gave two examples of how
it does so. The first is for management to make decisions that pave the way for digital systems
and methods to be implemented. The second is the actual change management: there need to be
new ways of working within the organization to match the digital transformation. The change
management needs to focus on working with the staff and developing new routines to handle
the work in different ways.

“There are two parts. the concrete changes such as budget, specification of the system and
infrastructural needs. Then there is the management part, where questions like: Why is this
change good for our organization? What benefits can we gain from changing our processes?
What can these changes do to improve the experience for our customers, staff and other
stakeholders?”

5.4 Change Management


What change efforts does management need to implement in order to be more digitally
mature?

Participant 1:
The first thing that came to mind was understanding change management: essentially,
understanding how and what you need to do in order to go from one step to another. According
to the participant, the transition of moving forward and changing within the organization can at
times be tough. If you can understand and grasp the value of change management, you will also
become more digitally mature. There were three important components involved according to
the participant: understanding, time and resources. These components apply to all processes
where change management is needed. They also mentioned leadership responsibility as a key
factor for this question.

“The first thing that I think about is change management. This means getting people to go from
one step to another. You need to understand the value of change management and the
responsibility that the organization has to change.”

Participant 2:
Participant 2 mentioned that a serious issue, which several studies show, is the lack of follow-up
on projects. As a project manager you can make choices on change, start the project and then
move on to work with the next need in the organization. Project managers tend to let go of
projects, and the support therefore disappears; they do not hold on long enough for change to
happen.

“Project managers tend to let go of a project in an early stage. There is a need to hold on and
see it through. Following up on a project is extremely important. It is super important. If you
don’t focus on the project the support for changes disappears as well.”

Participant 3:
The participant mentioned that if they were in the position of top management, they would use
one of the many digital maturity tools to develop the organization further, and perhaps implement
more support functions to determine what the community affected by the organization needs,
continuously monitoring changes in the community to develop the organization further.

“To use the resources within the organization and work together in new constellations and team
and take the time to understand each other and our competences”

Participant 3 says in the interview that there currently exists a gap between different roles in the
organization and that the language used between people is a problem. Some of the personnel
have a background in natural science, some in engineering or IT, and some in social science or
social studies. The participant mentions that the language barrier between the different
departments can be high, and it is therefore crucial for the organization to develop sustainable
ways to understand each other and to have patience when working in new constellations.

“There are always new concepts that are trending, and sometimes we don’t have the patience,
humility or knowledge to understand”

The participant mentioned that there is an issue with staff and managers not speaking up if they
do not understand the concepts or how a model should be used. They mention that it feels as if
their organization puts pressure on the managers, who are expected to know everything. The
participant also mentioned that more resources need to be allocated towards analyzing and
talking about issues in order to get everyone on the same page, and that this is not a quick
process but rather a lengthy one.

Participant 4:
The most important thing, according to the participant, is understanding the connection between
IT and the operation/organization.

“Digitalization is not an IT question, it can’t be an isolated IT question because then it will
never work. I think that you need to understand the connection between IT and organization.”

The participant believed that many leaders or project managers think too much about technology
and IT questions regarding the project. Instead, there needs to be more focus towards the
organization's structure, its needs, and the stakeholders involved. Implementing this will help
organizations become more digitally mature according to the participant.

Participant 5:
The participant focused on the fact that managers need to prioritize their change initiatives,
telling us that not all efforts need to, or can, be directed towards digital development. If the
change efforts are also designed to phase out old systems and processes, prioritizing becomes
even more essential in order to determine when ties to old systems can be cut and to be prepared
for it throughout the organization. This is not the case in the participant's organization today;
instead, there are many minor change efforts without ties to bigger initiatives with clear goals
and coordination.

“It comes down to prioritizing. I believe that it is important to develop smart goals with a clear
direction on where we want to go. After the plan is set, the top management needs to prioritize
what change efforts need to be focused on.”

The participant also mentioned that they would like to see change efforts organized from a
centralized point within the organization to ensure that the efforts made are compatible and
moving in the same direction.

A major issue for companies is converting identifications into actions. Do you believe that
digital maturity assessments should provide clearer suggestions on how to become more
digitally mature, or is it up to the organization to create its own path?

Participant 1:
They instantly mentioned that digital maturity assessments should propose better development
plans and provide more general guidance for organizations that assess their maturity. On the
same note, it is up to the organization itself to drive changes, since it knows its own structure
best. Although the participant believed it would be difficult to create an assessment model fit for
every business in the world, current models should provide better support.

“Current models should support companies better in terms of suggestions and guidance on what
to change. They should also include a framework for how to structure change work in
correlation to digital maturity.”

Current models are largely lacking when it comes to suggesting real changes based on
assessment results. It is difficult to pinpoint detailed suggestions or guidance tailored to every
business; however, there is a need to create general suggestions that could work for most, or at
least some, organizations. The participant highlighted a scenario where a company struggles
with handling and prioritizing resources and digital heritage. These types of areas may show up
in an assessment. The problem is knowing what areas to look into in order for things to change,
something that current models do not show. The participant believed that there should be
models equipped to show areas of improvement directly after assessments.

“They (the models) should be better at clarifying what to look into more concretely. What areas
work and what areas do we need to improve? Money and resources are other areas to
consider.”

The Gartner model was one model that the participant talked about; it describes how an
operation should ideally look for different sectors. However, one pitfall of the Gartner model
seemed to be that it was designed for American structures.

Participant 2:
The participant believed that clearer suggestions were sorely needed; driving questions regarding
change management was also one of their main areas of work, and this issue came up frequently.
They said that important discussions arose thanks to assessments, but that it was just as
important for managers to know what to do after the assessment. They believed that few people
would know exactly what to do without any suggestions or guidance.

“There is a huge need for clearer suggestions and more suggestions in the later stages of an
assessment. If we are doing digital maturity assessments, we also need to help organizations with
what they are to do with the results. It is very easy to see that it is an issue.”

A discussion came up regarding current models. The participant was informed that many of
today's models lack the ability to provide general suggestions based on the results. The
participant once again mentioned that they had raised the same types of issues before, and that
they had also created a form of guidance model based on assessment results.

“I pushed that issue, I have a picture/model where I showed how you could be guided based on
the results. The problem is to map out all guidelines towards the results. To put together the
pieces so to speak.”

Participant 3:
The participant mentioned that they would appreciate more structured guidelines on how
companies can build their digital development. The participant had experience of an
organization that took part in a digital maturity measurement and did not get enough support for
development. They also mentioned that building motivation is difficult in these kinds of digital
maturity measurements, and that a better explanation of why the measurement is important is
needed in order to build motivation among the staff and management within the organization.

“The digital maturity measurement only becomes an irritation, because they don’t know how to
answer the questions when there are little or low amounts of explanation to the subject”

Participant 4:
They thought that you could definitely give suggestions. However, something that we discussed
was that it is very difficult within the public sector, which is where they operate. Some things in
the public sector are open and can be pointed to, such as frameworks and guidance. However,
most parts of a project in the public sector need to be procured, such as training. In conclusion,
it is difficult to give suggestions to the public sector. The participant gave an example of this:
one company that had developed a model wanted to provide additional help based on the
assessment but could not because of the legal aspects.

“I know that they struggle with this when talking about the public sector. They want to provide
them with educational/training packages and certified consulting that can go in and educate. I
don’t know how to get that to work legally.”

Our discussion about improvements continued, and the participant said that on a general level
there are things one can point to in order to get better support. In their work they also try to
identify certain areas and connect them to other sectors with similar problems.

“We try to connect different agencies and have created a new development process based on
that. By combining them you can create value for similar problems in different sectors.”

In conclusion, suggestions can be given on a very general level, and some support can be found
based on that. The participant also stated that good digitalization depends on what the
organization looks like, along with other components that were not explicitly stated.

Participant 5:
The participant mentioned that they thought it would be good for organizations to get better guidance. They also told us that it would be beneficial for organizations to receive cases from similar efforts showing how those companies chose to deal with their problems. This would show what worked and what did not work in previous change efforts and would give the organization a little push in the right direction.

6. Discussion
6.1 Digital Maturity
When comparing the participants' responses to the question of what digital maturity is, the most common answers were change and leadership. For them, digital maturity was more than just a concept of digitalization; for most of them it was a rather complex phenomenon that covered many aspects and weighed several organizational and people-related factors. We could not match any of the answers directly to Chanias and Hess's (2016) definition of digital maturity, “the status of a company's digital transformation”. Nonetheless, the answers were detailed and descriptive, more so than the definition that we chose. The participants agreed that digital maturity is something that a lot of organizations struggle with and need to improve on. It is rather easy to define and initiate a project, but much more difficult to complete it.

We saw a pattern where change connected to digital maturity was prominent, since digital maturity is a basis for digital transformation. Digital transformation is all about transforming the organization, changing certain methods or patterns to become more effective. Something that was interesting when talking about change connected to digital maturity was that most of the participants talked about change management more than they did about digitalization in regard to digital maturity. One participant even mentioned that digital maturity was more about change management than it was about digitalizing. These answers also go in line with our
theoretical perspective on digital maturity. For example, Kane et al., (2017, p. 3) pointed out
that one of the key practices for digitally mature companies is to implement systemic changes
in workforce, workplace innovation and culture/experience in the organization. The
participants had similar thoughts to this key practice, understanding both the importance of
change and the importance of good leadership and structure. One of the participants had a
simple yet good answer that correlated to many of the other participants' answers as well.

“The ability to implement new solutions and make sure that something actually happens with
them.”

The second most common identification in relation to digital maturity was leadership. In this
case, leadership is something that covers a lot of areas. Besides being a good leader and being
responsible for the project and people involved, following-up and ensuring that the project was
delivered was something that seemed to be of importance to most participants. We could see
that nearly everyone had experience of this and that it was something to consider as an important factor in failure. According to the participants, many projects fail because there is no proper follow-up along the way from designated leaders. Projects can get lost, lose importance or simply die out over time because of poor structure and leadership. This was also true for the second question in this theme, why they thought that success rates for digital transformation processes were low. The participants mentioned both poor follow-up and lack of support from top management as the primary sources of failure in these types of situations.

Another important area in leadership that we noticed was the ability and desire to lead.
Identifying the right type of person to lead a change project is important if you want it delivered as promised. According to the participants, it was extremely important that leaders
knew their organization inside out. For them it was also about having the right prerequisites to
drive change, such as competence and resources.

When the participants were asked why they thought that digital transformation processes generally have a low success rate, the answers were mixed. As previously mentioned, follow-up and support were the main explanations identified. Furthermore, other participants mentioned having too much faith in technology as a factor in failure. That means focusing on implementing
technological solutions that do not necessarily provide efficiency or success. Instead,
organizations must identify where change needs to happen. That can imply changing structures
or resources for example.

To summarize, we saw an overarching pattern of two components (change and leadership) that
the participants thought were especially important when talking about digital maturity. In a
project involving a complex digital transformation, these were thought to have the biggest impact, for better or worse. When digging deeper into these two components, factors for success as well as
failure came up. For example, identifying the right type of change and the importance of
having leaders that follow up and support projects until the end.

6.2 Digital Maturity Assessments


Answers to the first question, which regarded the level of experience with digital maturity assessments, were quite divided. They showed that all of the participants had knowledge, and to some extent experience as well. However, only about half of the participants had actually worked hands-on with assessments previously. Those with work-related experience of digital maturity assessments had also developed their own models based on large, popular ones. Moreover, the other half without hands-on experience all pointed out that they had knowledge of such assessment models. Our understanding was that they had been part of projects involving digital maturity and assessments, but had never handled them themselves or come close to the action. An interesting observation we made after the interviews was that the participants with experience all agreed that it was extremely difficult to use assessment models without adjusting them to the specific organization. This is because all organizations are structured differently and require different things. It is therefore difficult to bring in an assessment model and expect it to fit the organization perfectly from the start. This relates to what BCG (2022) pointed out: there is no universal model for digital maturity assessment.

The overall impression we got when comparing the answers to the question “Do you see any
drawbacks in the current models?” was that a majority of our participants had something to add.
They identified several drawbacks of current models and did not talk much about what the
models offered in a positive way. However, it was valuable to get some insight into what our participants thought was “wrong” with current models. The answers were scattered amongst the participants, and each one offered a slightly different analysis. There was discussion about how models focus too much on one component, in this case digitalization. From the
discussion we learned that there was much more to digital maturity than just digitalizing.

Another answer was that it is difficult to measure maturity. This was mainly because assessments use surveys to get a better picture of the organization and its people. Surveys are of course not a hundred percent reliable, which makes it harder to trust the outcome. The last important contribution was that current models are not adaptable to different organizations. It would be difficult to bring a model into an organization without adjusting it and making it your own. This is something that the participants agreed on throughout our interviews.
This topic came up several times and seemed to be an important drawback to consider.

Lastly, we had some interesting discussions regarding possible future improvements of assessment models. The general idea that we got from our participants is that future models should consider giving better general guidance and advice based on the type of results obtained in the initial phase. It should be general in the sense that it should work for almost all types of organizations, at least organizations in the same sector or category. This would mean that the work of adapting models to organizations could be minimized. We also discussed how models should provide a clear picture of improvement. For example, if an organization is assessed with a low score, say 2 out of 5, what does it need to improve in order to attain a 4 or 5 out of 5? This is something that can be included to some extent in the assessment models.

6.3 Importance of Management

6.3.1 Strong and Motivating Leadership


The participants all expressed the importance of management within digital transformation processes, and while not everyone expressed this overwhelmingly, it is still a point that they believed to be essential. The participants also focused on how much good leadership can change the outcome of projects, which is supported by the theory from both the organizational and the people perspective. Matt et al. (2015, p. 5) emphasize that the right leadership is needed for digital transformations to be successful and that the top management team needs to be driving and leading the change in the right direction. The participants we interviewed all shared this thought as well; the collective response was that leaders in these kinds of projects need to be strong and that they need to be interested in and focused on digital maturity. The participants also focused on the idea that the leader should be able to convince the personnel of why the change is important, by making sure that everyone understands the goals and benefits of the change. We believe that the structure of the organization has a big impact on how successful leaders can be. This might be related to the size of the company and how difficult it can be to move in the same direction and to coordinate as the company size increases. Building structured management with strategies and goals over a long time might also be especially difficult when the organization goes through many top management changes, so that directives change often; this makes strong leadership even more important.

6.3.2 Short-term Solutions
One of the things that stood out to us while conducting these interviews was the connection to short-term solutions. Digital development and digital maturity were said to often be pursued through smaller, easy changes such as the automation of minor administrative or time-consuming tasks previously done by humans. This is certainly a step in the right direction and can help the overall efficiency of the organization. However, only committing to small and simple changes without a larger plan can result in minimal change in the long run, and the interviewees saw these as short-term “quick fixes” to problems with larger underlying issues that are either too complex or too large to handle with a short-term solution. This is where the importance of management comes in. By having structured management that is able to link the core business objectives to digital strategies, some organizations might be more efficient and can better sustain an evolving environment (Kane et al., 2017, p. 3). Participant 4 expressed that the journey of digital development and the drive toward digital maturity need to take disappearing jobs into consideration and plan for how the staff can fit into the new digitized structure. We believe that this is very important in the development of new organizational structures and practices, since the personnel will be hesitant to change if there is no place for them and their expertise within the new organizational structure. This ties into the notion of short-term solutions, where the easiest changes come in the form of automation and the elimination of manual tasks in favor of digitized solutions. If changes are not made from the perspective of sustainable change and planned for the long term, some of the personnel might suffer, since they might not have the skills and experience with the new digital solutions and changes. By working with long-term plans and goals, organizations might be able to lower the negative effects of short-term changes and solutions and become more effective.

6.4 Change Management

6.4.1 Resource Planning


Through the process of conducting interviews and researching the topic of digital maturity, we have found some connecting factors between our view of the subject and the data collected. Several of the key explanations for why digital maturity measurements fail, and why change efforts are not as successful as planned, lie within the foundation of change management: how managers can lead and make decisions that please their customers as well as their investors and other key stakeholders. Change management can be one of the most difficult things to get right, and the patience and clear direction needed to find the right way to success and to keep innovating and changing with the times can be demanding. One view from the study suggests that there are three steps for a manager to take to become more digitally mature: understanding, time and resources. Focusing on all of these factors might be the optimal way to engage in change. However, a perfect balance between them might not be possible and sometimes a
compromise has to be made. One thing that comes to mind when discussing this topic is the project triangle mentioned by Pollack et al. (2018). The project triangle is a tool often used in project management where a decision needs to be made between three criteria, namely time, resources and quality. According to the model, a project needs to determine what should be prioritized among these three, which means that not all of them can be given 100% priority in a specific project. This tool came to mind when this answer was given and made us think that it could be used as part of the development of digital transformation processes, which usually come in the form of one or several projects.
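
To make this trade-off concrete, the minimal sketch below illustrates how the project triangle's three criteria could be represented for a transformation project. It is purely our own illustration, not part of Pollack et al.'s (2018) model or of any participant's toolkit, and the assumption that priorities are weights summing to one is a simplification introduced here.

```python
from dataclasses import dataclass

@dataclass
class TrianglePriorities:
    """Illustrative representation of the project triangle: a project cannot
    give full priority to time, resources and quality at the same time."""
    time: float       # relative priority given to meeting the deadline
    resources: float  # relative priority given to staying within budget and staffing
    quality: float    # relative priority given to the quality of the outcome

    def validate(self) -> None:
        # Simplifying assumption: the three priorities compete for the same 100%.
        total = self.time + self.resources + self.quality
        if abs(total - 1.0) > 1e-9:
            raise ValueError(f"Priorities must sum to 1.0, got {total:.2f}")

# Hypothetical example: a transformation project that protects quality and
# deadlines at the expense of resource consumption.
priorities = TrianglePriorities(time=0.4, resources=0.2, quality=0.4)
priorities.validate()
print(priorities)
```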

Regarding the importance of resource planning, organizations might be more successful by using the theory of constraints (TOC). If there is an issue with resource planning, managers can use the
TOC to find what resources might slow down or make the transitions to a more digitally mature
organization difficult. By identifying the constraints and finding a way of limiting or removing
their negative effects, managers might be able to see issues that might come later as well as being
able to make plans in advance. By working with a cycle of identifying and resolving constraints,
organizations will be continuously evolving throughout their development journey, while at the
same time limiting resource loss and maintaining a sustainable environment for their employees.
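
The sketch below is a minimal, hypothetical illustration of the identify-and-resolve cycle described above. The resource names and figures are invented for illustration, and the TOC focusing steps are collapsed into a simple capacity top-up; it is not intended as a faithful implementation of Goldratt's (1990) method.

```python
def toc_improvement_cycle(resources, capacity_needed, rounds=3):
    """Illustrative Theory of Constraints loop: repeatedly identify the resource
    with the largest shortfall (the constraint), elevate it, and re-evaluate."""
    for _ in range(rounds):
        # 1. Identify the constraint: the resource furthest below what is needed.
        constraint = min(resources, key=lambda r: resources[r] - capacity_needed[r])
        shortfall = capacity_needed[constraint] - resources[constraint]
        if shortfall <= 0:
            print("No binding constraint remains.")
            break
        print(f"Constraint: {constraint} (shortfall {shortfall})")
        # 2. Resolve: here simplified to adding exactly the missing capacity.
        resources[constraint] += shortfall
    return resources

# Hypothetical capacities for a digital transformation effort.
resources = {"digital competence": 4, "project time": 6, "budget": 8}
capacity_needed = {"digital competence": 9, "project time": 7, "budget": 8}
toc_improvement_cycle(resources, capacity_needed)
```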

6.4.2 Follow-up
Another part that was gathered from the data, and also discovered during our initial research of the topic, is the lack of follow-up within these projects, especially when dealing with digital maturity measurements. There is no clear format for how this should be done, and no clear directives on how the companies themselves can work towards following up on their own work either. Participant 2 mentioned that there is a huge problem with projects finishing too early and not taking the last step of handing over the information and plan to the organization in a correct way. Most of the time, the organization is left with changes that they do not completely understand or believe in. This makes it very difficult to consistently develop new processes and to continually develop the organization to become more digitally mature. The change efforts made
might need changing or tweaking after the project has ended to keep up with the changing
environment and it is therefore crucial that organizations understand how the changes work and
for managers to drive the development further. This could especially be an issue when
organizations hire external professionals as consultants to make change efforts and to lead
projects regarding digital maturity. The obvious part of this is that they don’t have the connection
to the organization and therefore do not need to work within it after the project has ended.
Another part of this is the fact that consultants are expensive and that organizations might cut corners to save money, not seeing the value of a well-executed follow-up phase or a concluding stage in these projects. By including better practices for follow-up stages, building knowledge within the organization and giving the right tools to the people working with the process every day, organizations can have a much better chance of successful digitization efforts and of becoming more digitally mature.

6.4.3 Communication
If an organization wants to change into a more digitally adept state, one of the factors that can
determine success is communication. If managers are able to understand what is needed and can
convey this information to the organization in an effective way, they can develop better
strategies that can take advantage of a diverse group of employees in regard to competences.
Two of the participants discussed the topic of communication. One of them focused on how managers need to make sure that everyone within the organization speaks the same language, and on having patience with the communication gap that can exist between employees when it comes to digital development. To be able to become more digitally mature, organizations will have to have good relations with those who have digital knowledge. Managing a company that does not consist solely of experts in a certain field is something that managers constantly need to work on, and having a clear and defined plan for communication can be key. We believe that there can be a huge benefit for companies in working with communication plans as part of the digital maturity initiative. A good communication plan makes it easy for people within the organization to find the right people to contact and can also give directions on how communication should be conducted, as agreed through the directives of the organization or as part of the change effort or project where it is relevant.

6.4.4 Prioritizing
Prioritizing is another thing that has shown up in the results of this study and mainly refers to
how managers need to prioritize change efforts in order to be as effective as possible while
moving towards a more digitally mature practice. It also refers to how managers need to set smart goals so that they eventually can make the decision to prioritize the new, digital processes and move away from their previous and potentially more comfortable processes. The question of prioritizing is a difficult one, and we believe that a good strategy can be useful for making good plans and setting good priorities. This is where the Change Kaleidoscope can come in. By using the criteria of the kaleidoscope, organizations can structure their digital maturity development through the eight categories it defines. Based on these categories, organizations might be able to divide the work needed and, from there, prioritize according to what is most critical and most needed for a successful change (Balogun & Hope-Hailey, 2008, cited in Whittington et al., 2020, p. 469).
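
As a minimal illustration of how such a prioritization could be structured, the sketch below scores the kaleidoscope's contextual features for a hypothetical digital maturity initiative and sorts them by criticality. The feature names are those commonly attributed to Balogun and Hope-Hailey's kaleidoscope (time, scope, preservation, diversity, capability, capacity, readiness and power), and the scores and sorting rule are entirely invented for illustration rather than taken from the source.

```python
# Hypothetical scoring of the change kaleidoscope's eight contextual features
# for a digital maturity initiative. All scores are invented for illustration.
kaleidoscope = {
    "time": 2,          # how urgent the change is
    "scope": 4,         # how much of the organization is affected
    "preservation": 3,  # what must be kept intact during the change
    "diversity": 2,     # how varied staff attitudes and competences are
    "capability": 5,    # how experienced the organization is with change
    "capacity": 4,      # resources available for the change
    "readiness": 5,     # awareness of and commitment to the change
    "power": 3,         # the change leader's authority to push it through
}

# Simple prioritization rule: sort by the assumed criticality score, highest first.
for feature, score in sorted(kaleidoscope.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{feature:<13} criticality {score}/5")
```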

6.4.5 Foundation for Continuous Improvements


It was clear from the interviews that the participants were looking for something more than the current models have to offer. There was a clear preference for models that focus more on suggestions on how the organization could improve after a maturity measurement. Some of the participants wanted the models to suggest a framework based on cases that previous companies have been through, showing how other companies have chosen to handle different solutions. This might not be possible to standardize as part of the model, since different companies will have different focus points and issues. However, such a framework could be developed together with a consulting firm, or a collective of organizations using the same model could build a program for sharing development information and building a pool of cases for future digital development. This would require a model to be used in a broader sense, which might be an issue given the current inflation in the number of maturity models. One of the participants also mentioned issues regarding the legal possibility of giving suggestions to public organizations, which makes it difficult to work with them.

Regarding how companies can sustain and further their own development and start working on continuous improvements on their own, they can use the PDSA cycle. Most of the sustainability issues that organizations face relate to the aforementioned follow-up and longevity issues. Driving development with the PDSA cycle in focus could be a way for companies to own their development and continue working towards becoming more digitally mature. The steps of the PDSA cycle also give managers something to fall back on while developing new strategies and coming up with new change efforts. Another useful model for bringing structure to digital maturity development is the RACI model. This model offers a very structured approach to change management, especially regarding the structuring of roles and responsibilities within change efforts. Organizations might be able to achieve better results by including these parts of the RACI model in a new model for digital maturity measurement. If implemented correctly, PDSA cycles in combination with parts of the change kaleidoscope, the theory of constraints and the RACI model can make a new model for digital maturity measurement more effective than current ones.
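
As a small, purely illustrative sketch of how the PDSA stages and RACI roles could be combined in practice, the snippet below assigns roles to the four stages of one improvement cycle. The role names, tasks and assignments are our own invention, not taken from the interviews or from any specific PDSA or RACI source.

```python
from enum import Enum

class RACI(Enum):
    RESPONSIBLE = "R"  # does the work
    ACCOUNTABLE = "A"  # owns the outcome (one per task)
    CONSULTED = "C"    # provides input before decisions
    INFORMED = "I"     # kept up to date on progress

# Hypothetical RACI assignments for one PDSA cycle of a digital maturity effort.
raci_matrix = {
    "Plan: define the improvement goal": {"CIO": RACI.ACCOUNTABLE, "Project lead": RACI.RESPONSIBLE,
                                          "IT staff": RACI.CONSULTED, "All employees": RACI.INFORMED},
    "Do: pilot the new process":         {"Project lead": RACI.ACCOUNTABLE, "IT staff": RACI.RESPONSIBLE,
                                          "CIO": RACI.INFORMED},
    "Study: evaluate the pilot":         {"Project lead": RACI.ACCOUNTABLE, "IT staff": RACI.CONSULTED,
                                          "CIO": RACI.INFORMED},
    "Act: adopt, adapt or abandon":      {"CIO": RACI.ACCOUNTABLE, "Project lead": RACI.RESPONSIBLE,
                                          "All employees": RACI.INFORMED},
}

for task, roles in raci_matrix.items():
    assignments = ", ".join(f"{who}={role.value}" for who, role in roles.items())
    print(f"{task}: {assignments}")
```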

6.5 Revised and Developed Conceptual Framework

Figure 5. Revised and developed conceptual framework.

Considering our findings from the collected data, we have decided to alter our conceptual
framework. The analyzed results provided us with new and interesting topics that could
potentially help create a better understanding for what digital maturity is and how it should be
looked at when broken down into components. The data has given us new insights which in turn
has made us look at other theories and ways for improvement. The aim is to provide an even
better and more detailed conceptual framework that can provide guidance in a digital
transformation. Our suggested theories were in line with the responses we received, making them useful even at this stage. The responses regarding digital maturity and digital maturity assessments were largely in line with what we expected. Our view and stance on these phenomena have not changed after collecting the data; rather, the data has provided us with new insights that contribute to a deeper understanding of them.

When looking at the importance of management, organization and people were the main
components. Organization in this sense is the readiness of management to make changes in organizational culture and business processes and to improve management skills (Aslanova & Kulichkina, 2014, p. 445). The people component regards the engagement, motivation and participation of staff, which are needed to successfully implement changes. Readiness and awareness of staff are a key
component (Aslanova & Kulichkina, 2014, p. 445). Our findings suggest that the most important
features in relation to management are strong and motivating leadership as well as working with
long-term plans and goals. This means having structured management that is able to link the core
business objectives to digital strategies (Kane et al., 2017, p. 3). Moreover, the idea is that a leader should be able to convince the personnel of why the change is important, by making sure that everyone understands the goals and benefits of the change. This will hopefully
bring out engagement, motivation and participation of staff. The collected data corresponds well
with our initial suggestion on the importance of management aspects and its components.
Therefore, we will not change anything for this part. However, we believe that the structure of
the organization plays a big part in digital maturity and therefore that will be included in
“Organization”. Structure can impact how leaders are able to be successful. It can be related to
the size of the company and how it can be difficult to coordinate as the company size increases.
We will also consider long-term planning and goal setting as a vital component of
“Organization”.

Change management initially had four different theories. We noticed that all of our suggested
theories for change management could be connected to the collected data. The Theory of Constraints (TOC) could be applicable in resource planning, which was a topic that we
highlighted. By using TOC, managers can find what resources might slow down or make digital
transitions difficult. TOC is also useful to identify potential future issues and constraints.
The change kaleidoscope was found to be well suited as a tool for prioritizing workload. By using the
kaleidoscope, managers can divide and prioritize work through eight categories (Balogun &
Hope-Hailey, 2008, cited in Whittington et al., 2020, p. 469). As a result, the most critical
change needed can be prioritized and divided through different assets in the organization.
The last identified category on continuous improvement covered the drawbacks of digital
maturity on a more general note. Identifications on how difficult it can be to adapt and become
more digitally mature were made throughout this chapter. However, we found that our suggested
theories, the PDSA cycle and the RACI model, proved to be good for general guidance on improvement within the organization. These models can help organizations acquire a structure which can improve efficiency and build a foundation for digital maturity.

Furthermore, four new components were added to the conceptual framework after analyzing the
results. The data pointed to new categories which opened up for new theories. Firstly, we have
introduced the project triangle as a new component. We realized that understanding, time and
resources were all factors for success in change management. The project triangle handles similar criteria: time, resources and quality (Pollack et al., 2018), which is why this method is deemed to be useful. The project triangle is a tool that covers both resource planning and
prioritizing in a project and can be used to further enhance development of digital transformation
projects. The second component that we decided to add was communication plans. Our
understanding from our data is that communication is extremely important when talking about
change management and projects in general. This means that organizations should focus on establishing the right type of relationships, communicating plans to key players and giving directions. Finally, we also added sub-components to the importance of management. This was
mainly to clarify how important long-term solutions and strategies and strong and motivating
leadership were for each of the components under the importance of management.

7. Conclusion
In this chapter we present the conclusions that our study has arrived at, as well as contributions to literature, managerial implications, societal implications, limitations and further research.

7.1 General Conclusions


The aim of this study was to answer the following research question:

How do management teams influence the implementation and success of digital maturity
transformations in companies?

In order to arrive at any conclusions, interviews were conducted using a semi-structured approach. This qualitative approach opened up deeper conversations with each participant and gave them freedom in their answers. We wanted respondents' answers to be as truthful and in-depth as possible to get the most accurate results. It was also important for us that the answers contained relevant information, since we only interviewed six people.
was used, targeting people with experience and knowledge in the field studied. The use of a
thematic analysis to analyze the data was helpful, since we had several themes connected to our
theoretical framework. The idea was to connect the participant’s answers to our theoretical
themes to see if it matched our predictions and the conceptual framework.

Our study suggests that management teams influence the implementation and success of digital
maturity transformations with their management skills and leadership, their ability to create long-
term solutions and strategies, and their ability to apply change management in an environment that needs change. These are all components that should be looked at first if an organization
is evaluated with a low digital maturity score.

What we found from our questions was that digital maturity was perceived as something
extremely complex, spanning several categories in an organization. It was more than just
“digitalization” and being mature in that sense. Digital maturity's most prominent components, according to our participants, were leadership and change. These topics came up at some point with all of our participants. It was about having a long-term plan, implementing it, leading change, communicating and following up until the end. We have argued that studies point to difficulties in succeeding with digital transformation, and that this is directly linked to an organization's digital maturity. Moreover, we believe that digital maturity assessments are important in order to know where an organization's readiness to transform is positioned. We
strongly believe that an assessment should not only position readiness, but also guide and
suggest ways of improvement for struggling organizations. This is the reason behind our
constructed initial conceptual framework and our foundation of theories. According to our
findings, change management and leadership are two components that are of massive importance
to achieve digital maturity. Similarly, we predicted that these components would be prominent in
our responses. We believed in the importance of management and leadership as well as change
management.

We received relevant and actionable information in the responses from our participants, partly
due to our choice of a purposive sampling technique. All participants could answer each question rigorously, which was helpful for our analysis. After analyzing all data from our
participants, we decided to add four more components to our conceptual framework. We were
delighted to see that our initial components would not have to be removed, but instead proved to
be useful in relation to the empirical data. Furthermore, the project triangle and communication
plans were added, expanding the conceptual framework. The decision was made after we noticed
common features in our analysis that would go well with the named components. We also added
long-term solutions and strategies as well as strong and motivating leadership as sub-components
to the importance of management. This was a natural process that arose from our interviews with
the participants.

We do believe that our research question has been answered to the best of our capacity. The idea was to see how and what management teams need to do in order to influence and succeed with digital transformations in companies. By setting up theoretical themes and matching them with our data, we could see if our predictions were accurate. In general, the themes did match the responses that we acquired, indicating how managers should act and what they need to do in order to succeed in
this context. Another goal of this study was to answer the research purposes created in the
beginning. The different research purposes are stated once again:

Investigate how much influence management has on the digital transformation of a company.

Investigate whether current digital maturity models are equipped to suggest real changes based
on evaluative/performance indicators in the assessment.

Examine if digital maturity assessment models can be improved for more successful transformation projects.

We believe that we have thoroughly investigated how much influence management has on digital
transformation. Since our theoretical standpoint is based on management as a key factor,
management has been discussed and analyzed in correlation to digital maturity and digital
transformation. Secondly, we have investigated current digital maturity assessment models and
whether they can suggest changes directly based on evaluation. The overall indication that we got was that assessments do not suggest enough and should provide better suggestions on improvement. It is difficult for one model to fit every type of company or organization, and creating a universal model is not possible at this time. Assessment models can, and should, provide better suggestions based on evaluations, but within a reasonable range of sectors. Lastly, we have examined if assessment models can be improved to create more successful projects. This ties into our research question as well. The reasonable improvements that can be made to assessment models, whilst still keeping them on a general level, are to provide suggestions at the management and leadership level as well as the change management level. These
two important components generally fit into most organizations and most project types. When
talking about improvements on a more detailed level, such as suggestions based on specific
outcome, the model most probably needs to be tailored to the specific organization.

7.2 Contributions to Literature
This thesis is built upon previous literature discussing the factors of digital maturity and the lack
of discussion regarding a focus on management. McKinsey (2018) is one example that has shown low success rates for change efforts within digital maturity and also shows a lack of top management efforts to influence transformational success. This, in combination with the lack of previous research and the huge number of different models for digital maturity measurement, gave us the idea of building upon the previous knowledge in the field and comparing it to management literature and empirical data to find potential development points that future models
and research can be based on. Our goal is to inspire researchers within the field of digital
maturity or management to develop better and more in-depth models that can bring more success
to digital maturity measurements and for digital transformation processes.

We propose that additional components need to be factored into the research designs of studies
of digital maturity. Hence, we extend the work of Aslanova and Kulichkina (2014, p. 446) by building on their identification of the five elements of digital maturity: strategy, organization, people, technologies and data. We decided to narrow the scope and focus on the
organization and people aspects. Nevertheless, we believe that this study builds upon those
aspects of the original elements and therefore contributes to the literature of digital maturity. We
believe that the addition to these characteristics can inspire more researchers to further
investigate and build upon the field of digital maturity.

Methodologically, our study contributes by providing in-depth qualitative insights to a field mostly dominated by quantitative studies such as North et al. (2019, p. 6) and McKinsey (2018), which show that there is a large group of companies seeking digital maturity but that most will not achieve their goal. These statistics are useful to understand that there are issues that need
to be fixed, but they do not focus enough on the qualitative aspects of digital maturity. This
explains the exploratory nature of our approach, allowing us to identify what needs to change in
these organizations so that they can become more digitally mature from a management
standpoint. To do that, there is a need to focus on qualitative study within the field of research.

7.3 Managerial Implications


In order for companies and organizations to have a higher success rate within digital
transformations, we believe that there needs to be development in the areas that we focus on in
this study. To achieve a better environment for less digital organizations, the models created need to focus on the longevity of strategies and development purposes; this is where the implication for management comes in. With the proposed framework, managers will get a basic insight into what is needed of them for a successful digital transformation. Since the study is qualitative and the participants are invested in the subject, we believe that this study will make a great foundation for managers to further develop models based on the empirical data and the
findings in this report. Since several of the participants are specialists within the public sector,
we believe that this study might have extra usability within this particular field, and some of the
topics discussed can therefore be of good use within the public sector when developing strategies
for digital transformations.

We believe that this study can aid the top management teams of organizations in finding the main
points of development within the digital field. Based on the suggestions we give, managers from
higher up in the organization might be able to change their strategies regarding digital maturity
development and see the benefits of change management at the top level. We believe that this
study can be more beneficial in a more macro perspective and later developed into more concrete
and focused strategies. These are all factors that determine who is one of the key focus groups of
this study and should not be seen as a step-by-step guide for digital maturity development.

7.4 Societal Implications


During the study of digital maturity, it has become clear that many organizations have issues with sustaining development and continuously developing their products or services to follow the digital transformation within society. One of the aspects that this study highlights is the use of continuous improvement so that organizations can follow the development around them. In the greater context of the social environment, this means that services and products can be better suited for current and future generations and will in some cases greatly increase brand loyalty and trustworthiness. Coming generations will rely even more on digital structures and processes, and they are quickly growing up to become consumers. For companies, finding the balance between two distinctly different generations, from young to old, in this environment might be more difficult than ever. Building a workflow, structure and strategies that can bend and change over time is essential for companies to achieve a sustainable social environment.

By contributing to the development of more successful digital transformations, we believe that more organizations can become digitally mature. If organizations can see the benefits of
management in digital transformations, the overall environment from public to private sector and
many different industries might see gains and learn from each other in their journey to a more
digitally adept world. We believe that this might have a ripple effect if someone can develop a
sustainable method for digital maturity measurement that encourages organizations to take their
development to the next level. If the environment changes, the benefits of a more digital society
will become clear, and we will all benefit from it.

7.5 Limitations and Further Research


This study has focused on the influence that management teams have on the success of digital
maturity transformations in companies. As this is an exploratory qualitative study, we delved deeper into the phenomena of digital maturity transformations and their assessment models, and into how certain components influence the success of these. However, there were limitations to our study. Since we chose to select our participants according to knowledge and experience, we wanted the strongest candidates in the field. Due to a rather slim network, we could not choose from a substantial quantity of candidates. Instead, we had to limit our search to the network we had and chose the top candidates from there. This limitation meant that not all participants were part of management teams, although they still had knowledge in the field.

Digital maturity and digital maturity assessments are phenomena known to a world-wide
audience. The study would have benefited from international input, since differences in corporate environment and organizational culture could have changed the results of the study. However, due to our network, we had to limit the geographical boundaries to Sweden.

Further research is encouraged, especially in the areas where this study was limited. Academics should consider researching this field, taking similar or different directions. There is a possibility to add international aspects to this study and to include further participants with relevant knowledge and experience. Qualitative research on digital maturity is scarce, which should encourage further qualitative studies. Future studies should also consider a wider selection of industries and focus on investigating variation in digital maturity model application in firms facing various industry-specific challenges, such as varying supply chain structures, environmental dynamism and networking structures.

References
Ahmed, M. R. (2019). The RACI Matrix and its implications: a case of Unilever.

Aslanova, I. V., & Kulichkina, A. I. (2014). Digital maturity: Definition and model. Sloan Management Review.

Ashton, K. (2009). That ‘Internet of Things’ Thing. RFID Journal 22: 97–114.

Baca, C. M. (2005). Project manager's spotlight on change management. John Wiley & Sons,
Incorporated.

Berghaus, S. and Back, A. (2016). “Stages in Digital Business Transformation: Results of an Empirical Maturity Study”. Proceedings of the Tenth Mediterranean Conference on Information Systems (MCIS), Paphos, Cyprus, 2016.

Bergman, B. and Klefsjö, B. 2012. Kvalitet från behov till användning 5., uppdaterade och utök.
uppl. Lund: Studentlitteratur.

Booz and Company (2011). Measuring Industry Digitization: Leaders and Laggards in the
Digital Economy. Online Publication. https://www.strategyand.pwc.com/gx/en/insights/2002-
2013/measuring-industry-digitization/strategyand-measuring-industry-digitization-leaders-
laggards-digital-economy.pdf [Retrieved 1 March, 2022]

Bell, E., & Bryman, A. (2007). The ethics of management research: an exploratory content
analysis. British journal of management, 18(1), 63-77.

Bell, E., Bryman, A., & Harley, B. (2018). Business research methods. Oxford university press.

Braun, & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in
Psychology, 3(2), 77–101.

Chanias, S., & Hess, T. (2016). How digital are we? Maturity models for the assessment of a
company’s status in the digital transformation. [Retrieved January 27, 2022].
http://www.wim.bwl.unimuenchen.de/download/epub/mreport_2016_2.pdf.

Christoff, P. (2018). Running PDSA cycles. Current problems in pediatric and adolescent health
care. 48(8), (pp. 198–201).

Connelly, L.M. (2021). Using the PDSA Model Correctly. Medsurg nursing. 30(1), (pp.61–64).

Cope, D. G. (2014, January). Methods and meanings: credibility and trustworthiness of qualitative research. In Oncology nursing forum (Vol. 41, No. 1).

Deloitte. (2018). Digital Maturity Model, Achieving Digital Maturity to Drive Growth.
https://www2.deloitte.com/content/dam/Deloitte/global/Documents/Technology-Media-
Telecommunications/deloitte-digital-maturity-model.pdf [Retrieved March 2, 2022].

De Bruin, T., Rosemann, M., Freeze, R., & Kaulkarni, U. (2005). Understanding the main phases
of developing a maturity assessment model. In Australasian Conference on Information Systems
(ACIS) (pp. 8-19). Australasian Chapter of the Association for Information Systems.

Drever, E. (1995). Using Semi-Structured Interviews in Small-Scale Research. A Teacher's Guide.

BCG Global (2022). Digital Maturity. BCG. https://www.bcg.com/capabilities/digital-technology-data/digital-maturity [Retrieved February 9, 2022].

Etikan, I., Musa, S. A., & Alkassim, R. S. (2016). Comparison of convenience sampling and
purposive sampling. American journal of theoretical and applied statistics, 5(1), 1-4.

Finkelstein, S., Hambrick, D., & Cannella, A. (2008-10-08). Top Management Teams. In
Strategic Leadership: Theory and Research on Executives, Top Management Teams, and
Boards. Oxford University Press. [Retrieved February 8, 2022] https://oxford-
universitypressscholarship-
com.proxy.ub.umu.se/view/10.1093/acprof:oso/9780195162073.001.0001/acprof-
9780195162073-chapter-5.

Furber, C. (2010). Framework analysis: a method for analysing qualitative data. African Journal
of Midwifery and Women's health, 4(2), 97-100.

Gibson, W., & Brown, A. (2009). Working with qualitative data. Sage.

Gill, M., & VanBoskirk, S. (2016). The digital maturity model 4.0. Benchmarks: digital
transformation playbook.

Goertz, G., & Mahoney, J. (2012). Concepts and measurement: Ontology and epistemology.
Social Science Information, 51(2), 205-216.

Goldratt, E. M. (1990). Theory of constraints (pp. 1-159). Croton-on-Hudson: North River.

Ha, H. (2014). Change management for sustainability. Business Expert Press.

Hermann, Pentek, T., & Otto, B. (2016). Design Principles for Industrie 4.0 Scenarios. 2016 49th
Hawaii International Conference on System Sciences (HICSS). Hawaii, 2016-01. 3928–3937.

Irimiás, A., & Mitev, A. (2020). Change Management, Digital Maturity, and Green
Development: Are Successful Firms Leveraging on Sustainability? Sustainability, 12(10), 4019.

Kane, G. C., Palmer, D., & Phillips, A. N. (2017). Achieving digital maturity. MIT Sloan
Management Review

Kane, G. C. (2017). Digital Maturity, Not Digital Transformation. MIT Sloan Management
Review. [Retrieved February 8, 2022] https://sloanreview.mit.edu/article/digital-maturity-not-
digital-transformation/

Kushal, K., Ravishankar, S., & Kalla, H. (2009). Perspectives for effective management. Global
Media.

Lahrmann, G., Marx, F., Mettler, T., Winter, R. and Wortmann, F. (2011). Inductive Design of
Maturity Models: Applying the Rasch Algorithm for Design Science Research In:

Lee, J., B. Bagheri, and H. Kao. (2015). A Cyber-physical Systems Architecture for Industry 4.0-
based Manufacturing Systems. Manufacturing Letters (3), 18–23.

Leedy, P. & Ormrod, J. (2001). Practical research: Planning and design (7th ed.). Upper Saddle
River, NJ: Merrill Prentice Hall. Thousand Oaks: SAGE Publications.

Leonard, & Pakdil, F. (2016). Performance leadership (First edition.). Business Expert Press.

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. sage.

Mabin, J., Forgeson, S., & Green, L. (2001). Harnessing resistance: using the theory of
constraints to assist change management. Journal of European Industrial Training, (25), 168–
191.

Mantere, S., & Ketokivi, M. (2013). Reasoning in organization science. Academy of Management Review, 38(1), 70-89.

Marsh, D., & Furlong, P. (2002). A skin, not a sweater: ontology and epistemology in political
science. Theory and methods in political science, 2(1), 17-41.

Mason, M. (2010). Sample size and saturation in PhD studies using qualitative interviews. In
Forum qualitative Sozialforschung/Forum: qualitative social research 11(3).

Matt, C., Hess, T., & Benlian, A. (2015). Digital transformation strategies. Business &
information systems engineering, 57(5), 339-343.

McKinsey and Company (2018, October 29). Unlocking success in digital transformations.
McKinsey. https://www.mckinsey.com/business-functions/people-and-organizational-
performance/our-insights/unlocking-success-in-digital-transformations [Retrieved February 11,
2022].

McKinsey and Company (2015). Raising your Digital Quotient. Online Publication.
https://www.mckinsey.com/business-functions/strategy-and-corporate-finance/our-
insights/raising-your-digital-quotient [Retrieved February 28, 2022].

Menz. (2012). Functional Top Management Team Members: A Review, Synthesis, and Research
Agenda. Journal of Management, 38(1), 45–80.

Merriam-Webster. (2021, October 26). Deductive vs. Inductive vs. Abductive Reasoning.
Merriam-Webster.com; Merriam-Webster. https://www.merriam-webster.com/words-at-
play/deduction-vs-induction-vs-
abduction#:~:text=Inductive%20reasoning%2C%20or%20induction%2C%20is,conclusion%20fr
om%20what%20you%20know. [Retrieved March 31, 2022].

Mills, J., & Birks, M. (2014). Qualitative methodology: A practical guide. Sage.

MIT Center for Digital Business and Capgemini Consulting (2012). The Digital Advantage:
How Digital Leaders Outperform their Peers in Every Industry. https://www.capgemini.com/wp-
content/uploads/2017/07/The_Digital_Advantage__How_Digital_Leaders_Outperform_their_Pe
ers_in_Every_Industry.pdf [Retrieved March 1, 2022].

Mitra, Kundu, A., Chattopadhyay, M., & Chattopadhyay, S. (2017). A cost-efficient one time
password-based authentication in cloud environment using equal length cellular automata.
Journal of Industrial Information Integration, 5, 17–25.

Murthy, C. (2007). Change management. Himalaya Publishing House.

Pollack, J., Helm, J. and Adler, D. (2018), What is the Iron Triangle, and how has it changed?,
International Journal of Managing Projects in Business, 11(2), 527-547.
https://doi.org/10.1108/IJMPB-09-2017-0107

PWC (2016). Industry 4.0: Building the Digital Enterprise. Online Publication.
https://www.pwc.com/gx/en/industries/industries-4.0/landing-page/industry-4.0-building-your-
digital-enterprise-april-2016.pdf [Retrieved February 28, 2022].

Hemant, J., Padmal, V. and Atish, S. (2011) Service-Oriented Perspectives in Design Science
Research. In: DESRIST. Milwaukee, WI, USA, May 5-6, 2011. Proceedings in: Berlin,
Heidelberg: Springer Berlin Heidelberg, pp.176–191.

Rabionet, S. E. (2011). How I learned to design and conduct semi-structured interviews: an ongoing and continuous journey. Qualitative Report, 16(2), 563-566.

Reichertz, J. (2004). Abduction, deduction and induction in qualitative research. In A Companion to Qualitative Research, 159.

Reichertz, J. (2013). Induction, deduction. The SAGE handbook of qualitative data analysis, 123-
135.

Remane, G., Hanelt, A., Wiesboeck, F., & Kolbe, L. M. (2017, June). Digital Maturity in
Traditional industries-an Exploratory Analysis. In ECIS (p. 10).

Rosemann, M., & De Bruin, T. (2005). Towards a Business Process Management Maturity Model. In: 13th European Conference on Information Systems (ECIS 2005), Regensburg, Germany.

Rossmann, A. (2018). Digital maturity: Conceptualization and measurement model.

Rothchild, I. (2006). Induction, deduction, and the scientific method. In Society for the Study of
Reproduction (pp. 1-11).

Salviotti, G., Gaur, A., & Pennarola, F. (2019, September). Strategic factors enabling digital
maturity: An extended survey. In The 13th Mediterranean Conference on Information Systems
(MCIS) (1-13).

Saunders, M. (2019). Chapter 4: Understanding research philosophy and approaches to theory development. In: Saunders, M., Lewis, P., & Thornhill, A. (2019). Research methods for business students. 8th edition, Harlow, United Kingdom. Pearson Education Limited, pp. 128-170.

Saunders, M., Lewis, P., & Thornhill, A. (2007). Research methods for business students. 4th edition. Pearson Education Limited, England.

Simon, M. (2011). Assumptions, limitations and delimitations.

Simon, M. K., & Goes, J. (2013). Scope, limitations, and delimitations.

Simpson, J. A., & Weiner, E. S. C. (1989). The Oxford English Dictionary. Oxford: Oxford University Press.

Tabrizi, B., Lam, E., Girard, K., & Irvin, V. (2019). Digital transformation is not about
technology. Harvard business review, 13(March), 1-6.

Thaderdoost, H. (2016). Sampling methods in research methodology; How to choose a sampling technique for research. International Journal of Academic Research in Management, 5(2), 18-27.

Todnem By, R. (2005). Organizational Change Management: A Critical Review. Journal of Change Management, 5(4), 369-380.

Tracy, B. (2014). Management. 1st edition. Saranac Lake, New York: AMACOM.

VanBoskirk, S., Gill, M., Green, D., Berman, A., Swire, J., & Birrell, R. (2017). The digital
maturity model 5.0. Forrester Research.

Wernicke, B., Stehn, L., Sezer, A. A., & Thunberg, M. (2021). Introduction of a digital maturity
assessment framework for construction site operations. International Journal of Construction
Management, 1-11.

Westerman, G., Bonnet, D., & McAfee, A. (2014). Leading digital: Turning technology into
business transformation. Harvard Business Press.

Whittington, R., Regnér, P., Angwin, D., Johnson, G., & Scholes, K. (2020). Exploring strategy. 12th edition. Harlow: Pearson.

Williams, C. (2007). Research methods. Journal of Business & Economics Research (JBER),
5(3), pp. 65-72.

Xu, L. D., Xu, E. L., & Li, L. (2018). Industry 4.0: state of the art and future trends. International
Journal of Production Research, 56(8), 2941–2962.

Xu, He, W., & Li, S. (2014). Internet of Things in Industries: A Survey. IEEE Transactions on
Industrial Informatics, 10(4), 2233–2243.

Şimşit, Z. T., Günay, N. S., & Vayvay, Ö. (2014). Theory of Constraints: A Literature Review. Procedia - Social and Behavioral Sciences, (150), 930-936.

Zheng, X., Martin, P., Brohman, K., & Xu, L. (2014). Cloud Service Negotiation in Internet of
Things Environment: A Mixed Approach. IEEE Transactions on Industrial Informatics, 10(2),
1506–1515.

Appendix 1: Success rate of digital transformations by key factors
