Professional Documents
Culture Documents
Jeannette Paap
13383531
June 24th, 2022
Final version
Statement of originality
This document is written by Jeannette Paap who declares to take full responsibility for
the contents of this document. I declare that the text and the work presented in this document is
original and that no sources other than those mentioned in the text and its references have been
used in creating it. The Faculty of Economics and Business is responsible solely for the supervision of completion of the work, not for the contents.
Acknowledgements
First of all, I would like to thank Carlotta Bunzel for the supervision and feedback
throughout the process. Next, I would like to thank the participants who filled in the
questionnaire. I would like to thank Philippine de Planque for her efforts in collecting the data
and the support throughout the process of the thesis period. Lastly, I would like to thank my
friends and family, especially Anne Smits and Lisa Appel for motivating me.
Abstract
Do you follow your leader blindly? Leaders can emerge in various ways since leadership
functions can be performed by both humans and computers due to innovations such as artificial
intelligence. This creates the possibility of augmented leadership, where humans and computers
work together performing leadership functions. Effective leaders can contribute to the execution
of an organization’s strategy with their team, which means that employees need to follow their
leader to help achieve this strategy. Acceptance of the leader is key in accomplishing this. Yet, existing research provides little information on the acceptance of the augmented leader. The aim of our research is to examine this acceptance. Augmented leadership distribution (human default vs. algorithm default) and situational leadership behavior (person-focused vs. task-focused) are the manipulated variables in our model. Our results showed that acceptance of the augmented leadership distribution is higher when the human is prominent and the algorithm is supporting. For practical means, our study can help to gain insight into how to implement augmented leadership in a way that fosters acceptance and, ultimately, following of the leader.
collaboration, algorithm.
Table of contents
List of figures ......................................................................................................................................... 7
Introduction ........................................................................................................................................... 8
Theoretical framework ....................................................................................................................... 11
Augmented leadership .................................................................................................................... 11
Acceptance of augmented leadership............................................................................................. 14
Mediating role of transparency ...................................................................................................... 16
Situational leadership behavior ..................................................................................................... 19
Method.................................................................................................................................................. 22
Design ............................................................................................................................................... 22
Procedure ......................................................................................................................................... 22
Participants ...................................................................................................................................... 23
Vignettes ........................................................................................................................................... 24
Measurements .................................................................................................................................. 26
Acceptance .................................................................................................................................... 26
Transparency ................................................................................................................................ 26
Control variable ............................................................................................................................ 27
Manipulation check ...................................................................................................................... 27
Analytical Plan................................................................................................................................. 28
Results .................................................................................................................................................. 29
Hypothesis testing ............................................................................................................................ 33
Leader acceptance of the augmented leadership distribution..................................................... 33
Mediating effect of transparency on acceptance of the leader ................................................... 35
Moderating effect of situational leadership behavior on acceptance of the leader ................... 39
Discussion ............................................................................................................................................. 41
Findings ............................................................................................................................................ 41
Theoretical implications.................................................................................................................. 42
Practical implications ...................................................................................................................... 45
Strengths and limitations ................................................................................................................ 47
Future research................................................................................................................................ 49
Conclusion ........................................................................................................................................ 50
List of tables
Table 1 - Overview of distribution of participants across condition .................................................... 29
Table 2 - Correlation matrix................................................................................................................. 32
Table 3 - Means, standard deviations, and one-way analyses of variance in acceptance by augmented
leadership distribution .......................................................................................................................... 34
Table 4 - Regression analysis for control variable lay beliefs of AI by augmented leadership
distribution to acceptance of the leader ................................................................................................ 35
Table 5 - Regression analysis for mediation by transparency for augmented leadership distribution to
acceptance of the leader ........................................................................................................................ 36
Table 6 - Regression analysis for control variable lay beliefs of AI by augmented leadership
distribution to transparency .................................................................................................................. 38
Table 7 - Factorial ANOVA predicting acceptance of the leader by augmented leadership distribution
and situational leadership behavior ...................................................................................................... 40
List of figures
Figure 1 - The conceptual model .......................................................................................................... 10
Figure 2 - Regression coefficients of the mediating effect of transparency on the relationship between
augmented leadership distribution and acceptance of the leader ......................................................... 37
Introduction
First came automatization in production, then computers taking over simple jobs. What is next? Will algorithms be making the business decisions? Could algorithms be the new
leaders? According to Lee (2018), the innovations in data infrastructure, machine learning, and
artificial intelligence are revolutionizing how organizations are managed by leaders, thus how
leaders manage employees. Accordingly, the functions of leaders are shifting because of the
increasing collaboration between humans and algorithms (Wesche & Sonderegger, 2019).
Together with the evolving computers, algorithms change from being tools to partners which
can perform leadership functions (Höddinghaus, Sondern, & Hertel, 2021; Wesche, &
Sonderegger, 2019).
Augmentation means that humans and computers work closely together to perform a task by combining the strengths of both actors (Raisch, & Krakowski, 2021).
Leaders play a significant role towards employees, which influences how employees perform
their job (Iskamto, 2020; Voon, et al., 2011). Employees’ acceptance of the leader is essential
for the execution of the made decision by the leader since the execution of those decisions
contribute to the strategy of the organization (Thomassin Singh, 1998; Zagotta & Robinson,
2002). In addition, according to Wesche and Sonderegger (2019), the question of whether employees
accept algorithms as leaders is one of the most important aspects of algorithmic management,
where the computer performs managing tasks. For sustainable and effective leadership, voluntary compliance from employees as well as acceptance of the leader is essential (van
Quaquebeke & Eckloff, 2013). Therefore, it is meaningful to understand where the acceptance of the augmented leader comes from.
The question that arises is what the reason is for employees to accept the augmented
version of leadership. In general, algorithms are associated with a lack of transparency (Glikson & Woolley, 2020). This means that employees who do not have knowledge about the decision-making process could perceive the process, as well as its outcome, as ambiguous
(Ananny & Crawford, 2018). Humans have the ability to explain their processes and decisions
verbally when something is unclear for employees. In addition, an identified gap is whether more
background information about the leader’s process contributes to the reaction, such as
acceptance or nonacceptance, of the employee (Hiemstra, et al., 2019). Since transparency can
support the understanding of the process that an augmented leader follows, this could mean
that the acceptance of that augmented leader will increase. Therefore, the first research question
this study aims to answer is: is the relationship between augmented leadership distribution and acceptance of the leader mediated by transparency?
Accordingly, leaders are responsible for making decisions and are responsible for the
outcome of these decisions (Leyer, & Schneider, 2021). According to Fleishman et al. (1991),
there are two classifications of leadership behavior: task-focused and person-focused. Task-
focused leadership behavior deals with task accomplishment, where a leader intends to identify
operating procedures, task requirements and obtaining task information (Burke et al., 2006).
Person-focused leadership behavior deals with team interaction, where a leader intends to have
a good relationship with their followers (Fleishman, et al., 1991). Augmented leadership combines the best of both humans and computers to perform leadership tasks; the work itself is performed by employees, who are managed by leaders behaving in different ways (Raisch, & Krakowski, 2021; Wesche & Sonderegger, 2019). Since the degree
to which employees accept the behavior of their leaders being distributed either way, human
default or computer default, is uncertain, it would be relevant to examine the interaction with
the two classifications of situational leadership behavior. Therefore, the second research
question this study intends to answer is: is the relationship between augmented leadership distribution and acceptance of the leader moderated by situational leadership behavior?
The present study provides insight into how leadership distribution impacts employees.
For the acceptance of the leader and consequently leading employees effectively, this insight is
crucial for creating the best combination of the augmented leadership distribution, thus the
collaboration between both humans and computers. Therefore, this study contributes to the literature on how augmented leadership is developing and what influences its acceptance (Figure 1).
experimental vignette study to examine our questions. Augmented leadership distribution and
situational leadership behavior are manipulated, which created four different scenarios. For
practical contributions, the present study provides information on how to implement algorithms in organizations and how to distribute leadership between humans and algorithms in a way that employees will accept their leaders and, ultimately, the decisions they make. This could lead to recommended policy for designing the augmented leadership distribution.
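The 2 × 2 between-subjects design described above can be sketched as follows. This is a minimal illustration only; the condition labels are paraphrases of the two manipulated factors, not the exact wording of the vignettes.

```python
# Sketch of the 2 x 2 between-subjects vignette design: two manipulated
# factors, crossed, yield the four scenarios mentioned above.
# Labels are illustrative paraphrases, not the study's exact wording.
from itertools import product

distributions = ["human default", "algorithm default"]   # factor 1: distribution
behaviors = ["person-focused", "task-focused"]           # factor 2: behavior

# Each participant is assigned to exactly one of the four crossed scenarios.
scenarios = [f"{d} leader, {b} behavior"
             for d, b in product(distributions, behaviors)]
# len(scenarios) == 4
```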
Figure 1
The conceptual model
[Figure: conceptual model showing augmented leadership distribution predicting acceptance of the leader, with transparency as mediator and situational leadership behavior as moderator.]
Theoretical framework
Augmented leadership
Leadership consists of a process in which a common goal is attained through
influencing individuals. Grimm (2010) describes leadership as needing to deal with change.
Accordingly, team leadership is a dynamic process where social problem solving is needed by
generic responses to social problems (Burke et al., 2006). Being a leader means that certain
functions need to be executed such as decisions about the goals, ensuring an available network
for realizing the goals, and establishing that the goals are attained by making people do what
needs to be done (Grimm, 2010). The way of managing employees is changing due to these technological innovations.
Leadership functions can be performed by both humans and computers because of the
use of algorithms (Höddinghaus, et al., 2021). The promise of employing artificial intelligence
(AI) is supplying supplementary cognitive abilities which will improve the efficiency and
productivity of the leader (Kolbjørnsrud et al., 2016). The collaboration between humans and computers in performing leadership functions is called augmented leadership (Raisch, & Krakowski, 2021). In this augmented leadership the human and computer are both performing
leadership functions, working together on various tasks (Leyer & Schneider, 2021; Raisch, &
Krakowski, 2021). Additionally, the relationship between employees and leaders is influenced by the use of algorithms (Kellogg et al., 2020). Accomplishing a good balance in the collaboration between
humans and computers performing as a leader is important for the execution of the
organization’s strategy (Iskamto, 2020; Voon et al., 2011), and the overall business performance.
In augmented leadership, the actors human and computer are both present. The amount
of work they take on could differ (Daugherty & Wilson, 2018). In the case that a human is the
main actor in the augmented leadership, the human acts as the leader of their employees and
consults the computer system (algorithm) when the human feels like it is necessary (Raisch &
Krakowski, 2021). Algorithms perform a supporting role when being consulted. On the other
hand, when a computer is the main actor in the augmented leadership, the algorithm is in charge
of most of the leadership tasks and a human plays a role only in situations where this is necessary (Raisch & Krakowski, 2021). Both situations require the presence of a human as well as a computer,
hence the augmented leadership. The augmentation of leadership consists of performing the
same leadership functions as normal, however with the collaboration between humans and
computers the achieved outcomes are far better than either could achieve alone (Daugherty &
Wilson, 2018). This raises the question of which qualities each entity brings to the collaboration, the augmented leadership.
A capability that humans have is intuitive skills that contribute to thinking of the bigger
picture (Jarrahi, 2018). This means that humans can combine information from their employees
and make judgments based on fragmentary signs. Moreover, humans can engage in creative
thinking which most of the time requires improvisation (Wesche & Sonderegger, 2019). This
means that humans are able to act innovatively. Emotional intelligence is similarly a quality
that humans possess that is effective in collaborating with employees (Chamorro-Premuzic &
Ahmetoglu, 2016). Strategic thinking can be required as a leader, this is a task that is easier for
humans since they have the right understanding and can make sense of the different contexts in
which strategic thinking is necessary (Jarrahi, 2018). Employees perceive humans as less
controlling because humans cannot track everything themselves, whereas computers can (Kellogg et al., 2020). Humans cannot process that much information at one time, although computers with
algorithms can (Brynjolfsson & Mitchell, 2017; Cheng & Hackett, 2021; Glikson & Woolley,
2020). Moreover, humans are more likely to overlook some patterns that can be detected by algorithms (Parry et al., 2016). Consequently, aid from a computer would be beneficial for bridging the capabilities that humans do not possess themselves. The augmentation of leadership can provide this aid.
Computers can detect different layers of complexity and with that identify hidden
patterns in large data sets (Parry et al., 2016). They can use statistical models when there is
information missing and predict based on the data (Cheng & Hackett, 2021). Algorithms
possess a more developed processing capacity compared to humans, therefore algorithms make
data-based decisions faster and are less expensive in operation (Brynjolfsson & Mitchell, 2017;
Glikson & Woolley, 2020). This means that using algorithms can save time and money,
minimize risk, and increase productivity for certain tasks (Suen et al., 2019). By making use of algorithms, biases can be reduced because computer systems lack self-interest (Langer et al., 2019; Parry et al., 2016). Furthermore, something else that algorithms
do not possess is empathy and human emotions, which makes their response toward employees
differ from humans (Chamorro-Premuzic & Ahmetoglu, 2016). However, due to this lack of empathy, employees may show negative reactions toward such leaders, which can cause hurdles (Lee, 2018; Raveendhran & Fast, 2021). The
processes that are used by an algorithm are advanced, encompassing but opaque (Kellogg et al.,
2020). Accordingly, human assistance would be a useful complement; the augmentation of leadership is a way to bring the advantages of both actors together.
Being an effective leader involves performing all critical leadership functions (Burke,
et al., 2006). Leaders focus on the attainment of the goals that are formulated for the strategy of the organization; making decisions that contribute to the common goal is part of that. The
efficient collaboration between humans and computers enables managers to make enhanced
decisions (Kolbjørnsrud et al., 2016). Effective leadership includes the acceptance of the leader
(Yukl et al., 2009). However, both distributions of augmented leadership, human default or algorithm default, come with advantages and disadvantages for leaders who try to build a relationship with their employees in order to lead effectively (Chamorro-Premuzic & Ahmetoglu, 2016). Therefore, the augmentation of leadership, where humans and computers work together, should be an effective leadership style. The question that remains is which allocation of these actors employees accept.
Acceptance of augmented leadership
Part of leadership is making decisions that can contribute to the overarching strategy of
the organization. Thus, accepting the decision that leaders make is crucial for executing the
organization’s strategy (Thomassin Singh, 1998; Zagotta & Robinson, 2002). Consequently,
employees that accept their leader will more easily accept their leader’s decision. With the
influential process that leadership is, the question arises of how to exert this influence over
employees, in such a way that the employees agree to follow the decision and rules of their
leader (Wesche & Sonderegger, 2019). Furthermore, in principle, behaviors such as acceptance
or nonacceptance are influenced by knowing how employees feel about their leader (Hiemstra,
et al., 2019). Voluntary compliance and acceptance of the leader are both essential for sustainable and effective leadership (van Quaquebeke & Eckloff, 2013). Leadership thus needs to be accepted to be effective. Consequences of not accepting the leadership are a decrease in commitment (Öztekin et al., 2015), less productivity (Baker et al., 2002), and subsequently not contributing to the goals and strategy.
With leadership, there are two parties involved, the leader and the employee. The
augmented leader consists of two elements, human and computer. The relationship between
leader and employee is an exchange process where a unique relationship develops (Graen &
Uhl-Bien, 1995). How individuals view each other has an impact on the connection between
leaders and their employees, and the quality of the relationship may suffer as a result.
Employees identify themselves by looking to their leaders (Hogg et al., 2012). The relationship
between employees and a leader is a social exchange that can be disturbed by the way
employees perceive their leaders. The way people perceive others can be explained by social identity theory, according to which people perceive each other by means of self-evaluation and by finding common ground with other people (Hogg, 2001). This means that employees’ own perception is in play. How employees perceive their leader influences the
relationship between them and it could harm or favor the quality of the relationship (Turban &
Jones, 1988). Identification with someone else is essential for achieving feelings of closeness
and providing common ground between those individuals (Napier & Ferris, 1993). According
to the similarity attraction paradigm, people who perceive someone else to be more similar tend
to like that person more (Ensher & Murphy, 1997). Employees who perceive their leader as
similar have a tendency to be fond of their leader too (Turban & Jones, 1988). Additionally,
employees have a positive bias towards that person. Looking at social identity theory in terms of augmented leadership, identification with the leader, feeling close, and having common ground are thus expected to shape acceptance.
Employees who perceive leaders to be similar are expected to have a positive feeling
about their leader (Ensher & Murphy, 1997; Turban & Jones, 1988). The more distant the algorithm feels to employees, the more unfamiliar it is perceived to be (Trope & Liberman, 2010). Likewise, inaccessible entities, such as algorithms, are perceived as less
familiar to the employees due to their abstract nature (Popper, 2013). In augmented leadership
both humans and computers are present, meaning that there is a part of the leadership that will
not feel familiar to employees. The two situations of augmented leadership we examine in this
research differ in a way that one actor is prominent in the leadership and the other is supporting.
According to the social identity theory, employees will have more similarities with a human
and therefore perceive a human as more likeable. More agreement on the human leader is
probably due to the fact that the human part of the leadership is more prototypical (Hogg et al.,
2012). Thus, in the situation where the human is more prominent in the leadership, employees
can identify themselves more with the human part and will like that part of the leadership more compared to the algorithm part, which feels more distant (Mahmud et al., 2022). Conversely, in the situation where the algorithm is more prominent in the leadership, employees will try to identify with the algorithm, which is more difficult due to unfamiliarity (Lim & O’Connor,
1996). Therefore, we suspect that the acceptance of the leader will be higher for the augmented
leadership distribution where the human is in the lead and the algorithm acts supporting. This leads to the following hypothesis: acceptance of the leader is higher when the human is the default and is supported by the algorithm (versus when the algorithm is the default and is supported by the human).
Mediating role of transparency
A reason for low acceptance could be the limited knowledge that employees have of the automated part of the leadership distribution, the algorithm (Höddinghaus, et al., 2021). Especially algorithms are unknown, due to the black box
design which means that there is no clear perception of how algorithms operate (Mahmud et
al., 2022). Gabris and Ihrke (2000) describe that, in order for employees to accept, it is important
that the system that is deployed is procedurally fair and valid for employees. This means that when the procedure is perceived to be honest and reasonable, the employee is more likely to
accept it. Transparency helps to develop understanding (Mahmud et al., 2022). Relating this to social identity theory, transparency aids the understanding of the augmented leader, giving employees the information they need to identify themselves with their leaders.
Transparency indicates that “clear and open information” is a necessity in the exchange
between the leader and employees (Breuer, et al., 2020, p. 13). The line of reasoning should be made visible to others (Rasmussen et al., 2007). For transparent decision-making processes, decision-makers should evidently show
the principles behind the conclusions as well as the reasoning that has brought the decision-
maker to that conclusion (Rasmussen et al., 2007). Vital for successful decision-making is
information (Rodrigues & Hickson, 1995). Transparency should nurture the understanding
between the leader and employee, which as well provides traceability (Breuer, et al., 2020).
This means that employees should be aware of the rationale the leaders have while executing decisions. Transparent leaders can be described as possessing those characteristics that provide “transparent and open knowledge management” (p. 18). This raises the importance of understanding the allocation of augmented leadership in terms of transparency.
Buell and Norton (2011) found that giving an explanation of why certain advice is given
enhanced the acceptability of the given advice. Humans have the opportunity to communicate
to and with the employees and explain their reasons verbally to their employees about why they
made certain decisions due to the interaction between human leaders and employees (Zerilli et
al., 2018). To a certain extent this means that humans are transparent. However, humans also
can be less transparent and comprehensible in their decision-making (Höddinghaus et al., 2021).
Operational transparency can help to reduce this uncertainty (Buell, & Norton, 2011).
Contrastingly, algorithm processes are not transparent and can be perceived as vague due to the
limited access to some specific information (Glikson & Woolley, 2020; Kellogg et al., 2020).
Algorithms do not have the possibility to verbally explain their line of reasoning and likewise
cannot sense when employees do not understand something and need an explanation. Mostly
the reasoning, as well as the complexity of the algorithms, are not transparent and difficult to
understand (Faraj et al., 2018). Being transparent is among other things essential for enhanced
collaboration among employees and their leaders (Parris et al., 2016). Increasing the
understanding of a process or decision is the intention of being transparent (Cramer et al., 2008).
Viewing the two augmented leadership situations, both humans and algorithms play their own part in transparency.
Employees who have enough knowledge about their leader and the motives of their leader have
the opportunity to know their leader. Offering reasons for making decisions can help employees
to understand the rationale of their leader (Mahmud et al., 2022). Transparent leaders provide
the possibility for employees to see or know the complete picture which is necessary to evaluate
or judge their leader (Hogg et al., 2012; Kellogg et al., 2020; Mahmud et al., 2022). Then
employees can determine if they can identify with their leader and would agree with their leader.
Employees with knowledge and information about their leader can also compare themselves to
their leaders, even if the computer is the one leading them. The likability of a leader is
an outcome of employees identifying with their leaders and finding common ground (Ensher &
Murphy, 1997; Hogg et al., 2012; Napier & Ferris, 1993). Employees who are fond of their
augmented leader will agree more with them, accept the decisions made, and follow their leader.
When the human is prominent in the augmented leadership situation, employees can ask
their leader for an explanation when something is unclear (Önkal et al., 2009). Hence, information about the leader is open to the employee, and clarification can be requested.
In the other augmented leadership situation, when a computer is more prominent, asking the algorithm to explain why certain decisions were made is not possible (van Dongen & van Maanen, 2013).
Even with the support of the human, the human part of the augmented leadership also cannot
tell how and why the computer made certain decisions because of the black box design of
algorithms (Mahmud et al., 2022). Therefore, we suggest that the situation of augmented
leadership where an algorithm is more prominent is not as transparent as when the human is
more prominent in the augmented leadership. The following hypothesis is the result of this
reasoning:
The relationship between augmented leadership distribution and acceptance of the leader is mediated by transparency, in such a way that when the human is prominent in the augmented leadership, the transparency is higher, which will increase the acceptance of the leader.
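The mediation pattern expressed in this hypothesis can be illustrated with a small worked example. The numbers below are hypothetical, chosen only to show how an indirect effect through transparency would decompose into paths; they are not the study’s data, and the plain ordinary-least-squares sketch is an illustration rather than the exact analytical plan.

```python
# Illustration of the mediation logic: X codes the augmented leadership
# distribution (0 = algorithm default, 1 = human default), M is perceived
# transparency, Y is acceptance of the leader. All values are hypothetical.

def _center(v):
    m = sum(v) / len(v)
    return [x - m for x in v]

def slope(y, x):
    """OLS slope of y on a single predictor x (with intercept)."""
    xc, yc = _center(x), _center(y)
    return sum(a * b for a, b in zip(xc, yc)) / sum(a * a for a in xc)

def slopes2(y, x1, x2):
    """OLS slopes of y on two predictors x1 and x2 (with intercept)."""
    x1c, x2c, yc = _center(x1), _center(x2), _center(y)
    s11 = sum(a * a for a in x1c)
    s22 = sum(a * a for a in x2c)
    s12 = sum(a * b for a, b in zip(x1c, x2c))
    s1y = sum(a * b for a, b in zip(x1c, yc))
    s2y = sum(a * b for a, b in zip(x2c, yc))
    det = s11 * s22 - s12 ** 2
    return (s1y * s22 - s2y * s12) / det, (s2y * s11 - s1y * s12) / det

X = [0, 0, 0, 0, 1, 1, 1, 1]   # distribution condition (hypothetical)
M = [1, 2, 1, 2, 3, 4, 3, 4]   # transparency ratings (hypothetical)
Y = [1, 2, 1, 2, 4, 5, 4, 5]   # acceptance ratings (hypothetical)

a = slope(M, X)                # path a: distribution -> transparency
c = slope(Y, X)                # total effect: distribution -> acceptance
c_prime, b = slopes2(Y, X, M)  # direct effect c' and path b: M -> Y
indirect = a * b               # indirect (mediated) effect
# For OLS the total effect decomposes exactly: c == c_prime + a * b
```

With these made-up numbers, part of the effect of the distribution on acceptance runs through transparency (the indirect path a·b), which is the structure the hypothesis asserts.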
Situational leadership behavior
The relationship between leadership distribution and the acceptability of the leader can
also be influenced by the behavior of a leader. Leaders act with a focus on goal attainment,
which creates the behavior of the leader (Fleishman, et al., 1991). However, reaching a goal
comes with challenges since organizations must deal with the external environment, subsystems
within the organization that change, and individual employees (Fleishman, et al., 1991). In
dealing with these changes within an organization, leaders can behave in several ways. Two
classifications of leadership behaviors are known in the literature, behavior dealing with task
accomplishment, and behavior dealing with team interaction (Fleishman, et al., 1991).
Leadership behaviors have an impact on the performance outcome of their team (Burke et al.,
2006). Perceptions of the leader can be influenced by the nature of the task (Fleishman, et al.,
1991; Lee, 2018). Our research adopts the dichotomy of leadership behaviors of Fleishman et
al. (1991) since they classified the behaviors from two common themes.
Task-focused leadership behavior facilitates the understanding of task requirements, operation procedures, and task information (Burke, et al., 2006). Task-focused behavior involves transactional behavior such as motivating goal achievement (Patroom, 2018). It also includes promoting the accomplishment of goals through the minimization of role ambiguity as
well as conflict (Burke et al., 2006). Moreover, boundary spanning is a characteristic of task-
focused leader behavior, which refers to the emphasis on attaining resources and information
through communication and collaboration (Patroom, 2018). The utilization and monitoring of employees involve tasks that demand technical skills (Fleishman, et al., 1991).
According to de Winter and Hancock (2015), tasks such as information storing and signalling
controls are repetitive tasks that are performed best by computers. These tasks require technical
skills which algorithms are better at than humans since computers have the capacity and
complexity to process this (Brynjolfsson & Mitchell, 2017; Glikson & Woolley, 2020; Parry et
al., 2016). Subsequently, traits that are explaining task-focused behavior seem to match the
interactions, cognitive structures, and attitudes, with the goal to develop a team that can work
effectively (Burke et al., 2006). A person-focused leader is characterized as transformational,
which means that the leader tries to inspire their followers (Patroom, 2018). Moreover, this
leadership behavior involves maintaining a relationship with open communication, in which
mutual respect, the satisfaction of needs, and trust are the basis (Burke et al., 2006). Motivating
and empowering also characterize person-focused behavior; this includes arousing employees
to put in extra effort, providing autonomy, acting as a coach, and promoting
self-leadership for employees (Patroom, 2018). Tasks that require judgment, reasoning,
flexibility, and improvising demand human execution (de Winter & Hancock, 2015). These
tasks require social skills (Fleishman et al., 1991). Social skills are best performed by humans,
since algorithms lack the ability of intuition, social interaction, and empathy (Chamorro-
Premuzic & Ahmetoglu, 2016; Lee, 2018). Consequently, the characteristics that describe
person-focused behavior seem to match human capabilities, complementing the two different
actors of augmented leadership. This means that person-focused behavior appears to be
performed better by humans due to their skills and attributes, and because of the emphasis on
the traits and resources that are involved in such tasks. The two
leadership behaviors both have their own requirements for the best execution of the
corresponding tasks. For the acceptance of the augmented leader, the best-suited combination
seems to be when the default leadership function brings out the best in the behavioral tasks of
the default actor:
a. When human leadership is the default and is supported by an algorithm, acceptance of
the leader is higher when the leader performs person-focused behavior than when the
same leader performs task-focused behavior;
b. When algorithmic leadership is the default and is supported by a human, acceptance of
the leader is higher when the leader performs task-focused behavior than when the same
leader performs person-focused behavior.
Method
Design
An experimental vignette study combines an experiment with a survey, whereby the weaknesses
of both approaches counterbalance each other (Atzmüller & Steiner, 2010). The vignette
experiment is the core element of the method, accompanied by a traditional survey for measuring
constructs. Additionally, this method of research is generally used for examining the opinions,
beliefs, and attitudes of people (Petrinovich et al., 1993). It offers the possibility of high
control over the manipulations due to the hypothetical scenarios in which independent variables
are manipulated (Petrinovich et al., 1993). First, participants read the vignette, in which they
were asked to imagine themselves in a situation. Second, with this situation in mind, the
participants were asked to complete the questionnaire. Participants were randomly assigned to
one of the four conditions.
Procedure
Participants were invited to contribute to our study through several channels. We used
LinkedIn, Facebook, e-mail, WhatsApp, and verbal invitations to recruit our respondents.
The survey consisted of the vignette and the questionnaire, both in written form,
meaning that respondents needed to read them themselves. Participants filled in the survey in an
online setting, which ensures better external validity because participants could fill in the
questionnaire anywhere (Tröster & van Quaquebeke, 2021). After the invitation, participants
needed to give consent, followed by the general introduction of the scenario, where the
participants were first made aware of the importance of identifying with the described situation.
The scenario started with a one-page general introduction where participants were made aware of
what kind of situation they were in. This general information was followed by the in-depth
scenario on the next page, which was not the same for every participant due to the
manipulations, resulting in four different vignettes.
Participants were randomly assigned, which means that every participant had the same
chance of receiving each vignette. A between-subject design means that each participant only
reviews one vignette before answering the questionnaire (Atzmüller & Steiner, 2010; Evans et
al., 2015). After the participants completed the questionnaire, comparisons are made across the
participants per vignette group (Atzmüller & Steiner, 2010). Due to the between-subject design,
participants had no reference point, since they were only presented with one vignette; this can
otherwise harm the true judgement of participants (Aguinis & Bradley, 2014). Vignette
equivalence can help with this, by ensuring that the structure of the vignette is similar across
all vignettes (Evans et al., 2015) and that participants have sufficient information in terms of
context (Aguinis & Bradley, 2014). However, reading a vignette is
significantly different from experiencing such a situation, which creates a limitation of the
participant’s understanding (Lee, 2018). Furthermore, in order to receive credible results from
the experimental vignette method, the study design needs to be flawless (Sheringham et al.,
2021).
Participants
Participation in this survey was completely voluntary and anonymity was guaranteed
for the participants. Participants were made aware of the possibility of withdrawing from the
survey midway and of the possibility of attention checks. The survey was prepared in Qualtrics,
which is a survey tool with the possibility to export data to SPSS. The survey opened on April
12th 2022 and closed on May 18th 2022. Participants who failed to start the first item were
removed from the dataset. The attention check was examined, which resulted in 92% correct
answers. Based on the feedback from multiple participants, we concluded that the attention check
did not perform as we intended. In total, we recruited 171 participants. However, not every
participant filled in the questionnaire entirely; 83% of the participants finished the questionnaire
completely. Missing values were not a problem for the analyses, since SPSS handles these for
each separate analysis through built-in procedures. Moreover, excluding participants based
on the attention check or based on not finishing the survey did not provide significantly different
outcomes of the analysis. Participants were 56% female and 43% male; one participant identified
as non-binary and one participant preferred not to say. The youngest participant was 21 years old
and the oldest was 73 years old, with the average age of the sample being 31 years.
Most participants' highest education was a bachelor's degree (54%), followed by a master's
degree.
Vignettes
Using vignettes means that a "short, carefully constructed description" (Atzmüller &
Steiner, 2010, p. 128) of a situation is portrayed. The aim is to evaluate dependent variables
such as behaviors by presenting the participant with realistic situations (Aguinis & Bradley,
2014). Moreover, this allows for manipulating independent variables, which simultaneously
enhances both internal and external validity (Aguinis & Bradley, 2014). A vignette represents
a mixture of characteristics and is used to elicit judgments about situations (Atzmüller &
Steiner, 2010). Vignette studies utilize data that is self-reported by the participants, because
the participants answer questions in response to the situation of the vignette.
The augmented leadership distribution (human default vs. algorithm default) and leadership
behavior (person-focused vs. task-focused) are the factors used in the present study. This means
that two of the four scenarios include a human as the default actor of the augmented leadership
distribution, where the computer acts supportively, and two of the four scenarios include a
computer as the default actor of the augmented leadership distribution, where the human acts
supportively. Moreover, two of the four scenarios include a person-focused leader situation and
two of the four scenarios include a task-focused leader situation.
The four different vignettes are described in Appendix A. The scenarios are based on real
situations to make it easier for participants to imagine the scenario. Likewise, the realistic
situations make the vignettes more plausible to participants. The vignettes were introduced
by letting participants know what kind of scenario they were in. Specifically, the participants
were told that they needed to imagine looking for a new job, followed by an introduction of
a company where only one thing remained unclear before they could decide to accept the new
job. The offer that the company makes is almost acceptable; only the leadership philosophy was
not yet clear. In the specific scenarios, different leadership philosophies are explained, which
constituted the manipulations.
The first independent variable that was manipulated is the augmented leadership
distribution, which has two different types: human default, where the algorithm supports the
human, and algorithm default, where the human supports the algorithm. In this experiment, we
evaluate whether this distribution has an effect on acceptance of the leader and whether
transparency can explain this relation. In the scenario, this is divided into a situation where the
team manager named 'Alex' is in charge of the day-to-day business tasks and where the
automated system is consulted in certain cases. The other situation is one where the automated
system is in charge of most management tasks and where team manager 'Alex' is consulted in
special cases. In the written scenario, this is made clear by describing which actor is in charge
and which actor is consulted.
Leadership behavior was the second variable that was manipulated, which has two types:
person-focused behavior, focusing on the people performing the work, and task-focused
behavior, focusing on the task and its completion. We attempt to evaluate the difference in
acceptance of the leader between these behaviors. In the person-focused scenario, we used the
situation of performance and career development involving coaching, personalized feedback,
and future possibilities. In
the task-focused scenario, we used the situation of managing team projects consisting of task
Measurements
The present study consisted of two measurements, which were the mediator and the
dependent variable (Appendix B). Furthermore, a control variable, lay beliefs of AI, was
included. To end, two independent variables, the leadership distribution and leadership
behavior, were manipulated.
Acceptance
Measuring acceptance of the leader was done using a scale adapted from Höddinghaus et al.
(2020). We adjusted this scale in such a way that it was obvious whom the questions were about,
by replacing the X with the leadership system. In addition, we deleted the word decision from
the initial items. This scale of acceptance consisted of three items, which were 'I think I would
accept this leadership system’, ‘I think I would agree with this leadership system’, and ‘I think
I would endorse this leadership system and act accordingly.’. These items were rated on a 7-
point Likert scale ranging from strongly disagree (1) to strongly agree (7). The acceptance scale
revealed high reliability (Cronbach’s Alpha = .93). The corrected item-total correlation showed
that these items are strongly correlated with the total score (> .30). There were 3 missing values
for acceptance.
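SPSS reports Cronbach's alpha directly; purely as an illustration of what that reliability statistic computes, the calculation can be sketched in Python. The 7-point ratings below are hypothetical, not our actual data:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns.

    items: list of k lists, each holding one item's scores
    across the same n respondents.
    """
    k = len(items)
    # Sum of the variances of the individual items.
    item_var = sum(pvariance(col) for col in items)
    # Variance of each respondent's total score across items.
    totals = [sum(scores) for scores in zip(*items)]
    total_var = pvariance(totals)
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical 7-point ratings for three acceptance items.
accept = [
    [7, 6, 2, 5, 4, 1, 6, 3],
    [6, 7, 1, 5, 5, 2, 6, 2],
    [7, 6, 2, 4, 4, 1, 5, 3],
]
print(round(cronbach_alpha(accept), 2))
```

Alpha approaches 1 when the items covary strongly relative to their individual variances, which is why highly consistent items (as in our scales) yield values above .80.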
Transparency
The scale that was used for measuring transparency was an adapted scale from
Höddinghaus et al. (2020). We adjusted this scale in the same way as the acceptance scale, by
replacing the X with the leadership system. The transparency scale had three items, which were
'I think I could understand the decision-making processes of this leadership system very well',
'I think I could see through this leadership system's decision-making process', and 'I think the
decision-making processes of this leadership system are clear and transparent'. These three
items were rated on a 7-point Likert scale ranging from strongly disagree (1) to strongly agree
(7). High reliability was demonstrated for the transparency scale (Cronbach's Alpha = .87). All
items on the transparency scale correlated strongly with the total score, as shown by the
corrected item-total correlations (> .30). For transparency, there were 10 missing values.
Control variable
In our research, the variable lay beliefs of AI was examined as a control variable. The
scale that has been used for measuring lay beliefs of AI had 10 items which were rated on a 5-
point Likert scale ranging from strongly disagree (1) to strongly agree (5). A list of abilities was
stated and the question was asked whether participants think AI can perform these abilities
better than human intelligence. The abilities were ‘possesses abstract reasoning ability’, ‘has a
good short-term memory’, ‘has good long-term memory’, ‘processes information quickly’, ‘has
a high ability to learn', 'is good at problem-solving', 'flexibility/can adapt to new things', 'can
perform well on complex tasks', 'is good at initiating structure', and 'is good at personal
consideration'. The lay beliefs of AI scale was reliable (Cronbach's Alpha = .72). However,
examining the corrected item-total correlations showed that the items did not correlate strongly
with the total score (< .30). For lay beliefs of AI, there were 13 missing values.
Manipulation check
A manipulation check was used to check whether the manipulations were effective
(Lonati et al., 2018). In this survey, three manipulation questions were asked to verify the
augmented leadership distribution and the leadership behavior. The manipulation check for the
augmented leadership distribution in this
study consisted of the binary question “Who was most dominantly in charge in the described
leadership system in the scenario?”. Participants had two options to answer this question, either
the team manager or the automated system. Following was the manipulation question regarding
leadership behavior. The binary question was asked about which example was used in the
described scenario. For this question, the participants had the following two options, either
management of team project or performance and career evaluation. In addition, the last
manipulation check for checking the effectiveness of leadership behavior was the question of
what the nature of the leadership situation was in the described scenario. For this question, we
used a 5-point Likert scale ranging from definitely person-focused to definitely task-focused.
The aim is a successful manipulation check, meaning that conclusions about the relationship
between the manipulated variables and the dependent variable can be drawn.
Analytical Plan
Our data was analyzed using SPSS as statistical software. The data was cleaned
prior to the analysis. Before testing the hypotheses, the distribution of the variables was
analyzed, the data was tested for normality and outliers, and a correlation matrix was computed.
The analyses for testing the hypotheses followed. For hypothesis 1, a one-way ANOVA
was used to examine the relationship between the dependent variable acceptance and the
independent variable augmented leadership distribution. A mediation analysis followed for
hypothesis 2 to evaluate the impact of transparency. For hypothesis 3, a two-way
ANOVA was conducted to examine the influence of situational leadership behavior.
Results
Our survey was filled in by 171 participants. In the results section, analyses are reported
that were conducted with the data of these 171 participants. Computing the frequencies of each
scale shows that there were no data entry errors. For each scale, the mean, standard deviations,
minimum and maximum were assessed. For acceptance, the minimum score was 1, and the
maximum score was 7 (M = 4.09, SD = 1.52). The range of the transparency score was from 1.33
to 7 (M = 4.59, SD = 1.39). For lay beliefs of AI, the minimum score was 2.1 and the maximum
score was 5 (M = 3.66, SD = .54). Additionally, the data were examined for normality by looking
at skewness and kurtosis scores. The acceptable range is between −1 and 1, within which the
data is assumed to be normally distributed. For the acceptance scale, the statistics were within the
acceptable range, thus we assumed a normal distribution. The skewness and kurtosis scores for
the transparency scale were in the acceptable range, which indicated a normal distribution of
the data. Furthermore, the lay beliefs of AI scale was within the acceptable range of skewness
and kurtosis, therefore we assumed a normal distribution. The data were checked for outliers,
which were not detected. The distribution of participants across the different scenarios was
examined (Table 1). The distribution is not equal because participants were excluded by SPSS
from the data when they did not answer the manipulation check. The unequal distribution can
be attributed to this exclusion.
Table 1
Overview of distribution of participants across conditions
Leadership behavior
Human default 36 41
Algorithm default 49 45
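The skewness and kurtosis screening described above can be sketched as follows. The scores are hypothetical, and the ±1 cut-off is simply the rule of thumb applied in this thesis, not a universal criterion:

```python
from statistics import fmean, pstdev

def skewness(xs):
    # Third standardized moment of the sample.
    m, s = fmean(xs), pstdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * s ** 3)

def excess_kurtosis(xs):
    # Fourth standardized moment minus 3 (0 for a normal curve).
    m, s = fmean(xs), pstdev(xs)
    return sum((x - m) ** 4 for x in xs) / (len(xs) * s ** 4) - 3

def roughly_normal(xs, bound=1.0):
    # The screening rule used above: both statistics within [-1, 1].
    return abs(skewness(xs)) <= bound and abs(excess_kurtosis(xs)) <= bound

# Hypothetical acceptance scores on a 1-7 scale.
scores = [4, 5, 3, 4, 6, 2, 5, 4, 3, 5, 4, 6, 3, 4, 5]
print(roughly_normal(scores))
```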
The manipulation check questions were examined to determine whether the manipulation of the
variables worked. The aim of the manipulation check is to ensure that the manipulation has
been successful, which means that participants understood the variable that had been
manipulated and understood the scenario as we intended. The goal is that participants could
recall who was in charge of most leadership tasks and what the leadership behavior was in the
vignette they read.
The first manipulation check was for the augmented leadership distribution variable,
where the main actor of the leadership system was asked about through a binary question. A
crosstab and a Pearson chi-square test were conducted to verify how many participants could
remember the scenario. The Pearson chi-square was statistically significant, χ2(1) = 75.05, p <
.001, which indicates that there is a significant association between the variables. The crosstab
analysis showed that in total 84% of participants could recall the augmented leadership
distribution correctly. Of the participants who were in the scenario with the human as default in
the augmented leadership, 97% answered the manipulation check correctly. For the participants
who were in the scenario with the algorithm as default in the augmented leadership, 82%
answered the manipulation check correctly.
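The Pearson chi-square used here can be computed directly from the crosstab counts. As an illustration, a minimal sketch with hypothetical recall counts (the actual test was run in SPSS on our data):

```python
def pearson_chi_square(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under independence of rows and columns.
            expected = row_totals[i] * col_totals[j] / grand
            chi2 += (observed - expected) ** 2 / expected
    return chi2

# Hypothetical counts: rows = scenario shown (human default,
# algorithm default), columns = answer given in the check.
table = [[75, 2], [15, 68]]
chi2 = pearson_chi_square(table)
print(round(chi2, 2))
```

A value above the critical value of 3.84 (df = 1, α = .05) indicates a significant association between the scenario shown and the answer given.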
The same analysis was conducted to test the participants' understanding of the
leadership behavior. Participants answered a binary question about the leadership behavior that
was performed in the scenario they read. Again, a crosstab and a Pearson chi-square test were
conducted to verify how many participants could recall the scenario. The Pearson chi-square
was statistically significant, χ2(1) = 67.74, p < .001, which means that the observed distribution
is significantly different from the expected distribution. In total, 82% of participants correctly
remembered the leadership behavior performed in the scenario. The crosstab analysis showed
that of the participants who read the scenario with performance and career evaluation as the
leadership situation, 76% answered this question correctly. For the participants who read the
scenario with the management of team projects as the leadership situation, 89% answered
correctly.
For the last manipulation check, an independent samples t-test was performed to assess
whether participants correctly identified the behavior of their allocated scenario.
Levene's test was statistically significant (p < .01), which means that equal variances are not
assumed. Comparing the 80 participants who read the task-focused behavior vignette (M = 3.76,
SD = 1.05) with the 78 participants who read the person-focused behavior vignette (M =
3.18, SD = 1.25), the results revealed that the mean difference was statistically significant,
t(156) = 3.18, p < .01. A higher score corresponded to a task-focused rating and a lower score
corresponded to a person-focused rating. The results show that the participants who read the
task-focused vignette rated the leadership behavior as more task-focused. The decision was
made to not exclude participants who failed the manipulation check to minimize the possibility
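Because Levene's test was significant, equal variances were not assumed, which is handled by Welch's t-test. A sketch of that computation with hypothetical 5-point ratings (not our sample; SPSS produced the reported statistics):

```python
from math import sqrt
from statistics import fmean, variance

def welch_t(a, b):
    """Welch's t statistic and degrees of freedom
    (equal variances not assumed)."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)   # sample variances
    se2 = va / na + vb / nb
    t = (fmean(a) - fmean(b)) / sqrt(se2)
    # Welch-Satterthwaite approximation of the df.
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical ratings of "how task-focused was the scenario?"
task_group = [4, 5, 4, 3, 5, 4, 4, 5, 3, 4]
person_group = [3, 2, 4, 3, 2, 3, 4, 2, 3, 3]
t, df = welch_t(task_group, person_group)
print(round(t, 2), round(df, 1))
```

A positive t here means the task-focused group rated the scenario as more task-focused, mirroring the pattern in our data.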
In the correlation matrix (Table 2), information can be found about the variables and
their relationships. Means, standard deviations, correlations, and Cronbach's alphas are
presented in this table. Acceptance and transparency are positively correlated (p < .001), which
indicates that when transparency is high, the acceptance of the leader is likewise high.
Acceptance also correlated positively with the augmented leadership distribution (p < .001).
This means that the acceptance of the leader is higher when the human is the default actor.
Another variable that acceptance of the leader has a positive relation with is lay beliefs of AI
(p < .01). This correlation indicates that when participants have high lay beliefs of AI, the
acceptance of the leader is likewise higher. Transparency also correlated with the augmented
leadership distribution (p < .01), which implies that transparency is higher when the distribution
of augmented leadership is default human and an algorithm acts supportively. Another positive
correlation was found between transparency and lay beliefs of AI (p < .001). Moreover, no
statistically significant relationship was found with situational leadership behavior. The
correlations showed that lay beliefs of AI has a significant correlation with both the dependent
variable acceptance and the mediator variable transparency. Therefore, lay beliefs of AI was
included in the analysis as a control variable. Gender and age both did not correlate statistically
significantly with the other variables.
Table 2
Correlation matrix
Variable M SD 1 2 3 4 5 6
Hypothesis testing
Hypothesis 1 stated that the acceptance of the leader would be higher when the human is the
default and is supported by an algorithm in the augmented leadership. Previous analysis showed
that the distribution of augmented leadership where the human is the prominent actor and the
algorithm acts supportively scores higher on acceptance of the leader than the distribution
where the algorithm is the prominent actor and the human is supportive. To determine whether
the difference between the different distributions of augmented leadership was statistically
significant, a one-way ANOVA was conducted. Before performing the one-way ANOVA, the
assumptions must be met. The normality assumption was met since the scores for skewness and
kurtosis were in an acceptable range. For the homoscedasticity assumption, Levene's test was
utilized, which was not significant (p = .249). This indicated that the variances are equal across
groups, which meant that the assumption was met. Because of the between-subject study design,
the independence assumption was also met. The one-way ANOVA tested whether the human
default of the augmented leadership distribution is more accepted than the algorithm default of
the augmented leadership distribution (Table 3). Results revealed that the differences between
the two distributions of augmented leadership were statistically significant with a large effect,
F(1,166) = 3.35, p < .001, η2 = .14. Participants in the scenario with a prominent human in the
augmented leadership distribution reported higher acceptance (M = 4.73, SD = 1.38) compared
with participants in the scenarios with a prominent algorithm. In line with hypothesis 1, the
results showed that when the distribution of augmented leadership is default human, the
acceptance of the leader is higher than when the distribution of augmented leadership is default
algorithm.
Table 3
Means, standard deviations, and one-way analyses of variance in acceptance by augmented
leadership distribution
M SD M SD
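The one-way ANOVA statistics reported above (F and η2) can be illustrated with a hand computation. The acceptance scores below are hypothetical, not our sample, and the analysis itself was conducted in SPSS:

```python
from statistics import fmean

def one_way_anova(groups):
    """F statistic and eta squared for a one-way ANOVA."""
    all_scores = [x for g in groups for x in g]
    grand_mean = fmean(all_scores)
    # Between-group variation: group means around the grand mean.
    ss_between = sum(len(g) * (fmean(g) - grand_mean) ** 2 for g in groups)
    # Within-group variation: scores around their own group mean.
    ss_within = sum((x - fmean(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_scores) - len(groups)
    f = (ss_between / df_between) / (ss_within / df_within)
    eta_sq = ss_between / (ss_between + ss_within)
    return f, eta_sq

# Hypothetical acceptance scores per condition.
human_default = [5, 6, 4, 5, 6, 5, 4, 6]
algorithm_default = [3, 4, 2, 3, 4, 3, 2, 3]
f, eta_sq = one_way_anova([human_default, algorithm_default])
print(round(f, 2), round(eta_sq, 2))
```

η2 is the share of total variance in acceptance explained by the grouping, which is why it serves as the effect-size measure next to F.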
The one-way ANOVA was repeated with lay beliefs of AI acting as a control
variable, by performing a hierarchical regression analysis (Table 4). This analysis showed that
when adding lay beliefs of AI, the augmented leadership distribution still had a significant effect
as well as a large effect on acceptance, F(1,155) = 30.28, p < .001, η2 = .16. In addition to
model 1 with augmented leadership distribution, lay beliefs of AI was added in model 2. Results
displayed that the addition of lay beliefs of AI accounted for an increase of 5.2% in the explained
variance of acceptance. This increase in R2 was statistically significant (p < .01), which means
that the predictive power on acceptance improves when lay beliefs of AI is included. Both
models were overall significant. The results of the regression analysis indicate that when lay
beliefs of AI is higher, acceptance of the leader is likewise higher.
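The hierarchical step, comparing R2 with and without the control variable, can be sketched generically. The data below are hypothetical, and this is a plain OLS solver via the normal equations, not the SPSS procedure:

```python
from statistics import fmean

def ols_r2(y, predictors):
    """R-squared of an OLS regression of y on the given predictor
    columns (with intercept), via the normal equations."""
    n = len(y)
    X = [[1.0] + [col[i] for col in predictors] for i in range(n)]
    k = len(X[0])
    # Build X'X and X'y.
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)]
         for i in range(k)]
    b = [sum(X[r][i] * y[r] for r in range(n)) for i in range(k)]
    # Gaussian elimination with partial pivoting.
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            fct = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= fct * A[col][c]
            b[r] -= fct * b[col]
    beta = [0.0] * k
    for i in reversed(range(k)):
        beta[i] = (b[i] - sum(A[i][j] * beta[j]
                              for j in range(i + 1, k))) / A[i][i]
    fitted = [sum(beta[j] * X[r][j] for j in range(k)) for r in range(n)]
    ybar = fmean(y)
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, fitted))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# Hypothetical data: condition dummy (0 = algorithm default,
# 1 = human default), lay beliefs of AI, and acceptance.
condition = [1, 1, 1, 1, 0, 0, 0, 0]
lay_beliefs = [3.2, 4.1, 3.8, 3.0, 3.5, 2.9, 4.0, 3.1]
acceptance = [5.0, 6.2, 5.5, 4.8, 3.1, 2.5, 4.2, 3.0]
r2_model1 = ols_r2(acceptance, [condition])
r2_model2 = ols_r2(acceptance, [condition, lay_beliefs])
print(round(r2_model2 - r2_model1, 3))  # R-squared change
```

The R2 change is the quantity tested for significance in the hierarchical step; with nested models it can never be negative.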
Table 4
Regression analysis for control variable lay beliefs of AI by augmented leadership distribution
Note. Leadership distribution refers to manipulations of human default and algorithm default
Hypothesis 2 stated that transparency mediates the effect of the augmented leadership
distribution on the acceptance of the leader, in such a way that it will positively increase the
acceptance of the leader when the human is the default and is supported by an algorithm. The
correlations showed that transparency positively correlated with both acceptance and the
augmented leadership distribution. To test whether transparency mediates the previously
observed relationship between augmented leadership distribution and acceptance of the leader,
a regression analysis was performed using the PROCESS macro by Hayes (Table 5). Results
indicated a significant positive effect of human default versus algorithm default on acceptance
of the leader, b = 1.15, SE = .22, t(159) = 5.24, p < .001. We performed a linear regression with
acceptance as the dependent variable and, in model 1, the independent variable augmented
leadership distribution; the results indicated the predictive power of augmented leadership
distribution (SE = .21, t(159) = 3.52, p < .01). The model including augmented leadership
distribution and transparency showed a significant effect of transparency on acceptance, b =
.51, t(158) = 6.96, p < .001. This means that when transparency scores high, acceptance of the
leader increases. With these two steps, a reduction in the effect of augmented leadership
distribution on acceptance of the leader is observed, b = .85, SE = .20, t(158) = 4.19, p < .001.
The indirect
effect, .34, 95% CI [.14, .56], was significant since the bootstrap confidence interval does not
include zero (Figure 2). These results mean that there is in fact a mediation effect. Since the
direct effect of augmented leadership distribution on acceptance of the leader remained
significant, the mediation is partial. This partial mediation effect indicates that augmented
leadership distribution exerts some of its impact on acceptance via transparency. Furthermore,
the predictive power of the model improved from 16.7% to 35.6% when transparency was added.
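PROCESS estimates the indirect effect as the product of the a path (distribution → transparency) and the b path (transparency → acceptance, controlling for distribution) and bootstraps its confidence interval. A generic sketch of that idea on synthetic data (not our sample; PROCESS itself does considerably more):

```python
import random
from statistics import fmean

def slope(y, x):
    # Simple-regression slope of y on x (the a path: M on X).
    mx, my = fmean(x), fmean(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

def partial_slope(y, x1, x2):
    # Coefficient of x2 in an OLS regression of y on x1 and x2
    # (the b path: Y on M, controlling for X).
    m1, m2, my = fmean(x1), fmean(x2), fmean(y)
    s11 = sum((a - m1) ** 2 for a in x1)
    s22 = sum((a - m2) ** 2 for a in x2)
    s12 = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))
    s1y = sum((a - m1) * (b - my) for a, b in zip(x1, y))
    s2y = sum((a - m2) * (b - my) for a, b in zip(x2, y))
    return (s11 * s2y - s12 * s1y) / (s11 * s22 - s12 ** 2)

def indirect_effect(x, m, y):
    # a * b: the part of X's effect on Y carried by M.
    return slope(m, x) * partial_slope(y, x, m)

def bootstrap_ci(x, m, y, reps=2000, seed=1):
    # Percentile bootstrap CI for the indirect effect.
    rng = random.Random(seed)
    n = len(x)
    effects = []
    for _ in range(reps):
        idx = [rng.randrange(n) for _ in range(n)]
        effects.append(indirect_effect([x[i] for i in idx],
                                       [m[i] for i in idx],
                                       [y[i] for i in idx]))
    effects.sort()
    return effects[int(0.025 * reps)], effects[int(0.975 * reps)]

# Synthetic example with a built-in mediation path.
rng = random.Random(7)
x = [i % 2 for i in range(40)]                    # condition dummy
m = [0.9 * xi + rng.gauss(0, 0.5) for xi in x]    # "transparency"
y = [0.6 * mi + 0.4 * xi + rng.gauss(0, 0.5)
     for xi, mi in zip(x, m)]
lo, hi = bootstrap_ci(x, m, y)
print(lo, hi)  # a CI excluding zero indicates mediation
```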
Table 5
Regression analysis for mediation by transparency for augmented leadership distribution to
Note. Leadership distribution refers to manipulations of human default and algorithm default
Figure 2
Regression coefficients of the mediating effect of transparency on the relationship between
augmented leadership distribution (human vs. algorithm default) and acceptance of the leader
Note. Total effect of the augmented leadership distribution on acceptance: 1.15***; effect of
transparency on acceptance: .51***; direct effect with transparency in the model: .85***.
*** p < .001
Further analyses were done in which the control variable lay beliefs of AI was added
as a covariate in PROCESS by Hayes. This analysis showed that lay beliefs of AI did not
change the mediation results. To examine the influence of lay beliefs of AI on our model,
several hierarchical regression analyses were conducted. First, the influence of lay beliefs of AI
on the relationship between augmented leadership distribution and transparency was examined
(Table 6). Lay beliefs of AI was added in model 2, while model 1 contained augmented
leadership distribution. Results showed that adding lay beliefs of AI increased the explained
variance in transparency statistically significantly (p < .01), meaning that the predictive power
on transparency improves when lay beliefs of AI is included. Moreover, both models were
overall significant. Results suggest that when lay beliefs of AI increases, transparency likewise
increases (b = .67, p < .01). Next, the effect of lay beliefs of AI on the relationship of
transparency to acceptance was examined.
Table 6
Regression analysis for control variable lay beliefs of AI by augmented leadership distribution
to transparency
Note. Leadership distribution refers to manipulations of human default and algorithm default
Hypothesis 3 stated that situational leadership behavior moderates the relationship
between augmented leadership distribution and acceptance of the leader. The moderation was
hypothesized such that the effect when human leadership is the default and is supported by
algorithmic leadership on the acceptance of the leader is positive when the leader performs
person-focused behavior, and such that the effect when algorithmic leadership is the default and
is supported by human leadership on the acceptance of the leader is positive when the leader
performs task-focused behavior. To test whether the mean of acceptance of the leader changes
according to the levels of augmented leadership distribution and situational leadership behavior,
a two-way ANOVA was performed. Before conducting the two-way ANOVA, the assumptions
were checked. The data were assumed to be normally distributed, as the scores of skewness and
kurtosis were classified as acceptable. Levene's test was performed for the homoscedasticity
assumption, which was not significant (p = .43). The assumption is met since the variances are
equal across groups. This research used a between-subject design, which ensures independence
of observations; thus the independence assumption was met. The two-way ANOVA tested the
effects of the two factors on acceptance of the leader (Table 7). The results revealed that there
was a main effect of augmented leadership distribution on acceptance of the leader, F(1,164) =
27.70, p < .01, η2 = .15. However, no main effect was found for leadership behavior on
acceptance of the leader, and no interaction effect between augmented leadership distribution
and leadership behavior was discovered, F(1,164) = .73, p = .40, η2 = .004.
Hypothesis 3 cannot be supported based on this. This means that the effect of augmented
leadership distribution on acceptance of the leader was not conditional on the situational
leadership behavior.
Table 7
Factorial ANOVA predicting acceptance of the leader by augmented leadership distribution
and leadership behavior
Note. Interaction Distribution x Behavior: F = .73, p = .40, η2 = .004. Cell means (SD) for
acceptance in the four conditions: 4.89 (1.28), 4.59 (1.47), 3.54 (1.44), 3.62 (1.44).
Discussion
Findings
The aim of this study was to expand our knowledge about augmented leadership and its
effects on employees. One of the goals of this study was to investigate what drives acceptance
of a leader when the leader is augmented. Furthermore, this study anticipated developing an
understanding of the augmented leadership and acceptance relationship. More specifically, our
study's purpose was to examine how high acceptance of an augmented leader (default human
vs. default algorithm) is, and what role transparency and different leadership behaviors play in
this relationship. Transparency was tested as the mediator in our model, and situational
leadership behavior was tested as the moderator in our model.

Our results showed that acceptance of the leader is significantly higher when the
prominent actor in the distribution of augmented leadership is human and the algorithm is
supportive. This demonstrates support for hypothesis 1. Contrastingly, this means that the
acceptance of followers is lower when the prominent actor is the algorithm.
Furthermore, the results indicate that transparency mediates the relationship between
augmented leadership distribution and the acceptance of the leader. The bootstrap interval did
not include zero, which means that the indirect effect is significant. Likewise, the direct effect
of augmented leadership distribution on the acceptance of the leader was still significant; thus
the mediation that we hypothesized is a partial mediation. Following our analytical plan, a two-
way ANOVA was conducted to analyze the moderating effect of situational leadership behavior
on the relationship between augmented leadership distribution and acceptance of the leader.
Since no interaction effect between augmented leadership distribution and situational
leadership behavior was found, this hypothesis is not supported. The effect of the relationship between
augmented leadership distribution and acceptance of the leader is not conditional on the
situational leadership behavior.
Additional analyses found that lay beliefs of AI had an
impact on two relationships in our model. First, the relationship between our independent
variable and mediator, the augmented leadership distribution and transparency: with lay beliefs
of AI added, the explained variance increased from 7.8% to 14.5%. Second, the relationship
between augmented leadership distribution and acceptance, the relationship between the
independent variable and dependent variable: results show that adding lay beliefs of AI causes
an increase in predictive power here as well.
Theoretical implications
Our research contributes to the existing knowledge of augmented leadership and its
the leader is established such that acceptance of the leader is higher when the most prominent
actor in the distribution is human and is supported by an algorithm. These results are in line
with the social identity theory we used for hypothesizing this relationship. Since the prominent
actor in the most accepted augmented leadership distribution was human, employees would be
more likely to like their leader because of more similarities (Hogg, 2001). On the other hand,
employees would not be more likely to like their leader if the prominent actor of the augmented
leader is an algorithm. In both situations of the augmented leadership distribution, the human
and algorithm are present. However, the power dynamics are not equal in our scenarios, which
means that one of the actors is more prominent in one scenario than in the other. Consequently, the actions, abilities, and functions of the most prominent actor in the augmented leadership distribution are what employees see most and compare themselves with. In theory, there are several reasons why certain
leaders are effective in terms of social identity: prototype-based liking, the appearance of being influential, and constructs such as legitimacy, trust, and innovation (Hogg et al., 2012). The social identity theory that was used to hypothesize this relationship is thus supported by our findings.
Findings for transparency as a mediator are in line with the theorized relationship. Transparency mediates the relationship between augmented leadership distribution and acceptance of the leader in such a way that acceptance of the leader increases when the human is the prominent actor of the augmented leader and the algorithm is supporting. The mediation is partial, which means that transparency is only partly responsible for the relationship between augmented leadership distribution and acceptance of the leader. Transparency strengthens this relationship, but even without transparency the relationship still exists. The
knowledge that transparency mediates this relationship is in line with the arguments of Breuer et al. (2020), who stated on the one hand that transparency nurtures the understanding between leader and employee, and on the other hand that clear and open information is key to increasing acceptance of the leader when leadership is augmented. In line with the argument that explaining why things are done in a certain way increases acceptability (Buell & Norton, 2011), our results support this theory.
The moderating effect of situational leadership behavior on the effect of augmented leadership distribution on acceptance of the leader is not found as hypothesized. While the correlation table already showed no correlation between situational leadership behavior and any other variable, this analysis was still performed. Contrary to the argument that the capabilities of the dimensions of the behaviors and the capabilities of the prominent actors would complement each other, the moderating effect is not present; no evidence for an interaction effect was found. We conclude that the augmented leadership distribution has the same effect on acceptance of the leader regardless of situational leadership behavior. While this finding does not fit the theory, other explanations can be offered.
A possible explanation for not finding the hypothesized moderating effect can be that the participants did not recognize the different behaviors as we intended. The argument for the moderation was that the skills of task-focused behavior complement the abilities of an algorithm and the skills of person-focused behavior complement the abilities of a human. However, people have their own judgment of what skills are required (Lee, 2018). Therefore, the complementing skills and abilities should have been made explicit to the participants for them to understand the different behaviors in combination with the two augmented leadership distribution scenarios. Moreover, the vignettes were not pretested before conducting our research. This can mean that the questions were too difficult to answer due to the complexity of the situations. This has nothing to do with the manipulation check, since that only checks whether the participants remember the scenario. However, for the questions to be answered as we intended, the manipulated scenario also needs to be complete in terms of information for the participants. The leadership behavior is intertwined in the vignette; however, an actual explanation of what person-focused and task-focused behavior mean is not included in the vignettes. This can mean that the interpretation of this behavior was not as we intended and therefore the results differed from our hypothesis.
An unexpected finding is the impact that lay beliefs of AI have on the relationships between the independent variable and the mediator and between the independent variable and the dependent variable. In both cases, lay beliefs of AI increase the predictive power for the outcome variable. This means that when employees think highly of AI, acceptance of the augmented leadership increases. No theorizing was done for this finding before the analysis. However, trust in the leadership agents could play a role in explaining these findings (Höddinghaus et al., 2021). Their research contributed to the literature by emphasizing the trustworthiness of human and computer leaders. Thus, when employees perceive AI as having better abilities than humans, they would trust an AI that is implemented in the leadership distribution, which could be a reason for the observed impact on acceptance. Trust was identified as a component of acceptance (Höddinghaus et al., 2021). The impact of lay beliefs of AI is also found for transparency. Here the reason could be that when employees rate AI as having better abilities than humans, they have knowledge of and faith in AI. Knowledge and information are key to transparency (Breuer et al., 2020), which supports this rationale.
Practical implications
Our findings show that employees accept augmented leaders with a human as the main character (and an algorithm supporting) more as compared to an
augmented leader with an algorithm as a main character (and human supporting), irrespective
of the leadership behavior. Accepting the leader is important for multiple reasons, including the
execution of the strategy (Thomassin Singh, 1998; Zagotta & Robinson, 2002), increase in
commitment (Öztekin et al., 2015), productivity (Baker et al., 2002), and effective and
sustainable leadership (van Quaquebeke & Eckloff, 2013; Yukl et al., 2008). A first practical
implication is that knowledge about the distribution of the augmented leadership and its
consequences on the acceptance of that leader could help management make decisions on how
to implement certain leadership functions. It can be concluded that the decision for augmented
leadership should not be taken lightly and that it is important to decide on the allocation of actors in the augmented leadership. Managers can use the findings of our research when they want to start with augmented leadership. Before implementing such a project, they have a couple of points to consider, starting with the relationship between the augmented leadership distribution and the acceptance of the leader. This relationship is partially mediated by transparency. Management should increase transparency for the intended effect of improving
the acceptance of the leader. From theory, we know that guaranteeing transparency is possible
(Rasmussen et al., 2007). Essential are information and transparent, open knowledge management (Breuer et al., 2020; Rodrigues & Hickson, 1995). In practice, this means that
management should ensure that both humans and algorithms are transparent about their
execution of tasks. To conclude, this contributes to practice by providing ways to improve transparency, which in turn increases the acceptance of the leader. Increasing transparency can, for example, be achieved by implementing an open communication channel in which the organization shows and explains its way of working.
Following the practical implication of increasing transparency, the literature states that algorithms overall lack transparency (Glikson & Woolley, 2020; Mahmud et al., 2022).
Organizations can aid transparency through computer augmented transparency, which means
that leaders receive answers about work that is being done in the organization (Schildt, 2016).
With this information, the leader can adjust processes accordingly. According to Yeomans et
al. (2019), the way the explanation is communicated is important for algorithmic aversion or
affection. For a positive impact, a persuasive way of communication seems to be the best way, for
example, personalized conversation or illustration (Mahmud et al., 2022; Yeomans et al., 2019).
Lastly, lay beliefs of AI were found to have an impact on the relationship between the augmented leadership distribution and transparency and on the relationship between augmented leadership distribution and acceptance. How and why this impact exists has not been investigated. A high score on lay beliefs of AI means that participants believe that AI
performs these abilities better than a human could. This high score is beneficial for our model, which indicates that organizations should nurture employees' lay beliefs of AI. Organizations can contribute to this by educating employees about AI, organizing workshops, and providing training. Additionally, task objectivity can aid a positive attitude and trust toward the use of algorithms (Castelo et al., 2019). Practical steps for task objectivity are communicating how the task is set up and which elements serve as a priority.
Strengths and limitations
Our study consists of both strengths and limitations that are worth mentioning. First, the strengths of our research. To start, we used a between-subjects design, which ensures that no learning or transfer was possible across scenarios. This contributes
to the internal validity of this research. The vignette experiment ensures that manipulation can
be controlled by providing realistic situations for the manipulated variables, providing internal
as well as external validity (Aguinis & Bradley, 2014). Furthermore, this approach stimulates
judgments about situations by participants (Atzmüller & Steiner, 2010). Another strength of
our study is the built-in manipulation check. The manipulation check acts as an indicator of
internal validity (Aguinis & Bradley, 2014; Lonati et al., 2018). Our decision regarding the
manipulation check was to not exclude participants who failed the manipulation check, in order to retain the sample size.
The limitations mostly coincide with the strengths. The first limitation that this research
encountered is the experimental vignette method. Since the scenarios only resemble real-world events and participants do not actually experience these events, this harms the external
validity and generalizability of the results of our study. Participants’ understanding could be
reduced by using hypothetical situations (Lee, 2018). Moreover, the vignettes were not tested prior to data collection. Furthermore, the vignettes of this study were not equally distributed, which could lead to reduced statistical power. Besides, the number of participants for each vignette was too low to minimize the risk of losing statistical power. On the other hand, this
experimental vignette method allowed for high control and enables suggestions for causality
due to the manipulation of variables (Lonati et al., 2018). This in turn increases the internal
validity.
Regarding the method, another limitation identified is the use of a between-subjects design in our study. Participants were randomly assigned to a vignette and the results are based
on the comparison of those groups. This means that participants were offered one scenario with
either augmented distribution with default human or default algorithm with either a person-
focused leadership behavior or a task-focused leadership behavior. This method ignores how participants perceive the conditions of the other vignettes; thus it does not consider how participants' responses change between circumstances. An idea for future research could be to perform the research with a within-subjects design to include changes between conditions. However, this creates a much longer survey in which attention and manipulation are more difficult to establish. Moreover, with a within-subjects design, learning and transfer effects between conditions can occur.
Moreover, the attention check can be seen as a limitation. We added an attention check to our questionnaire to increase the validity of our research. We asked participants to rate a specific question with the answer "4. To a large extent". However, feedback from multiple participants pointed out that the number four was not present in the answer options, which was confusing. Additionally, some participants were confused by the fact that the questionnaire is about algorithms as leaders and thought it was a reversed question; they reasoned that they did not need to follow the instruction of the questionnaire, which in their imagination represented the computer. Partly because of this feedback, we did not exclude participants based on the attention check.
Obtaining participants was harder than we anticipated. The goal for the number of participants was around 200. However, in the end we needed to close the questionnaire earlier due to a lack of time for analyzing the data and interpreting the results. This means that our sample size is smaller, which reduces the power of the research. Moreover, the small sample size reduces the generalizability of the study. Staying with the participant-related limitations, another limitation was the language. The questionnaire was conducted in English; however, the majority of our participants have Dutch as their native language. For native Dutch speakers, the questionnaire can be difficult to read and understand.
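How a small sample erodes statistical power, as noted above, can be illustrated with a simple Monte-Carlo sketch. The group sizes and effect size below are hypothetical (not our design), and a normal approximation with known variance stands in for the t-test:

```python
import math
import random

def two_group_power(n_per_group, effect_d, reps=2000, alpha=0.05, seed=5):
    """Monte-Carlo power estimate for a two-sample z-test.

    Groups are drawn from unit-variance normals whose means differ by
    effect_d; power is the share of simulated studies with p < alpha.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        g1 = [rng.gauss(0, 1) for _ in range(n_per_group)]
        g2 = [rng.gauss(effect_d, 1) for _ in range(n_per_group)]
        diff = sum(g2) / n_per_group - sum(g1) / n_per_group
        se = math.sqrt(2 / n_per_group)              # known unit variances
        p = math.erfc(abs(diff) / se / math.sqrt(2))  # two-sided z-test p-value
        hits += p < alpha
    return hits / reps

# A medium effect (d = 0.5) with small versus larger cells.
power_small = two_group_power(15, 0.5)
power_large = two_group_power(40, 0.5)
```

The same effect that is detectable most of the time with the larger cells is missed far more often with the small cells, which is the concern raised about our per-vignette sample sizes.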
To end, a limitation occurred with our control variable. Lay beliefs of AI was used as a control variable in our analysis. However, the reliability analysis showed an overall score only just within the acceptable range. Therefore, an analysis of the separate items was performed. This showed unacceptable scores for 6 of the 10 items (below the .30 threshold). Moreover, investigating the normality of the lay beliefs of AI scale revealed that the data are not normally distributed, given unacceptable skewness and kurtosis scores for 5 of the 10 items (< -1 or > 1). Together this diminishes the interpretation of the lay beliefs of AI results.
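The scale checks described here, Cronbach's alpha, corrected item-total correlations, and skewness, can be sketched as follows. The items are simulated, not the actual lay-beliefs-of-AI scale:

```python
import random
import statistics as st

def cronbach_alpha(items):
    """items: one score list per item, all the same length (one score per respondent)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(st.variance(it) for it in items)
    return k / (k - 1) * (1 - item_var / st.variance(totals))

def item_total_corr(items, i):
    """Corrected item-total correlation: item i vs. the sum of the other items."""
    this = items[i]
    rest = [sum(resp) - resp[i] for resp in zip(*items)]
    mu, mv = st.fmean(this), st.fmean(rest)
    num = sum((a - mu) * (b - mv) for a, b in zip(this, rest))
    den = (sum((a - mu) ** 2 for a in this) * sum((b - mv) ** 2 for b in rest)) ** 0.5
    return num / den

def skewness(xs):
    """Simple (population-moment) skewness: m3 / m2**1.5."""
    mu = st.fmean(xs)
    n = len(xs)
    m2 = sum((x - mu) ** 2 for x in xs) / n
    m3 = sum((x - mu) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

# Simulated example: five items measuring one underlying trait.
rng = random.Random(11)
trait = [rng.gauss(0, 1) for _ in range(200)]
items = [[t + rng.gauss(0, 0.5) for t in trait] for _ in range(5)]

alpha = cronbach_alpha(items)                       # internal consistency
weakest = min(item_total_corr(items, i) for i in range(5))
skews = [skewness(it) for it in items]              # |skew| > 1 flags non-normality
```

In our analysis, items with corrected item-total correlations below .30 and |skewness| or |kurtosis| beyond 1 were the ones flagged as problematic.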
Future research
During our research, some limitations were identified. For future research in this particular field, it is recommended to tackle these limitations before conducting the study. First, a pretest of the manipulation check is recommended to ensure that the vignettes are perceived as intended. Since the experimental design limits external validity because participants do not actually experience the scenarios, future research could try to improve participants' immersion in these scenarios. Performing a pilot study of the vignettes and these manipulation checks is one way to achieve this (Hughes & Huby, 2004). Furthermore, to
enhance external validity, two other things can be done in future research. First, the study can be repeated over time. Second, creating a real-world setting would increase external validity as well. Moreover, future research should pay attention to the distribution of the vignettes: when not equally distributed, at least the number of participants per vignette should be higher than in our research. Continuing with improvements to the method, the attention check should be built in more carefully to make sure no confusion can arise. To end the recommendations regarding the method, making the questionnaire multilingual and giving participants the choice of language could prevent language-related difficulties.
The impact of leadership behaviors was not found in our research, although it was theorized. For future research, a different manipulation could be performed to test this effect again. Furthermore, additional analysis was done with lay beliefs of AI as a control variable. Results showed that lay beliefs of AI positively impact the predictive power for both transparency and acceptance. The how and why of the impact of lay beliefs of AI on our variables was not examined. Therefore, more research should be done on lay beliefs of AI and how it can aid the acceptance of augmented leadership. One thing to take into account when performing future research into lay beliefs of AI is the scale: the current scale shows some unreliable items, which hampers the interpretation of the results.
Conclusion
Our research investigated augmented leadership and its related variables, where acceptance, transparency, and lay beliefs of AI are the main variables for our research questions. Results showed that acceptance of augmented leaders is higher when the human is prominent in that leader (and an algorithm acts in a supporting role) as well as when transparency is in place. This highlights that the distribution of a leader impacts the way employees perceive and ultimately accept their leader, which is needed for effective leadership. The mediation of transparency in the relationship between augmented leadership distribution and acceptance of the leader is partial. Moreover, results showed that the effect of augmented leadership distribution on acceptance of the leader is not conditional on situational leadership behavior; thus no moderation effect is present. Additional analyses indicate that lay beliefs of AI influence both the acceptance of the leader and transparency. This research teaches us that accepting the augmented leader is essential and can be facilitated by a deliberate distribution of actors in augmented leadership, providing transparency, and helping employees learn about AI.
References
Aguinis, H., & Bradley, K. J. (2014). Best Practice Recommendations for Designing and
Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency
ideal and its application to algorithmic accountability. New Media & Society, 20(3),
973–989. https://doi.org/10.1177/1461444816676645
Atzmüller, C., & Steiner, P. M. (2010). Experimental Vignette Studies in Survey Research.
Baker, E., Avery, G. C., & Crawford, J. (2002). Satisfaction and Perceived Productivity When
Breuer, C., Hüffmeier, J., Hibben, F., & Hertel, G. (2020). Trust in teams: A taxonomy of
Brynjolfsson, E., & Mitchell, T. (2017). What can machine learning do? Workforce
Buell, R. W., & Norton, M. I. (2011). The Labor Illusion: How Operational Transparency
https://doi.org/10.1287/mnsc.1110.1376
Burke, C. S., Stagl, K. C., Klein, C., Goodwin, G. F., Salas, E., & Halpin, S. M. (2006). What
Castelo, N., Bos, M. W., & Lehmann, D. R. (2019). Task-Dependent Algorithm Aversion.
https://doi.org/10.1177/0022243719851788
Chamorro-Premuzic, T., & Ahmetoglu, G. (2016). The Pros and Cons of Robot Managers.
Cheng, M. M., & Hackett, R. D. (2021). A critical review of algorithms in HRM: Definition,
https://doi.org/10.1016/j.hrmr.2019.100698
Cramer, H., Evers, V., Ramlal, S., van Someren, M., Rutledge, L., Stash, N., Aroyo, L., &
based art recommender. User Modeling and User-Adapted Interaction, 18(5), 455–496.
https://doi.org/10.1007/s11257-008-9051-3
de Winter, J., & Hancock, P. (2015). Reflections on the 1951 Fitts List: Do Humans Believe
https://doi.org/10.1016/j.promfg.2015.07.641
Ensher, E. A., & Murphy, S. E. (1997). Effects of Race, Gender, Perceived Similarity, and
https://doi.org/10.1006/jvbe.1996.1547
Evans, S. C., Roberts, M. C., Keeley, J. W., Blossom, J. B., Amaro, C. M., Garcia, A. M.,
Stough, C. O., Canter, K. S., Robles, R., & Reed, G. M. (2015). Vignette methodologies
field studies. International Journal of Clinical and Health Psychology, 15(2), 160–170.
https://doi.org/10.1016/j.ijchp.2014.12.001
Faraj, S., Pachidi, S., & Sayegh, K. (2018). Working and organizing in the age of the learning
https://doi.org/10.1016/j.infoandorg.2018.02.005
Fleishman, E. A., Mumford, M. D., Zaccaro, S. J., Levin, K. Y., Korotkin, A. L., & Hein, M.
https://doi.org/10.1016/1048-9843(91)90016-u
Gabris, G. T., & Ihrke, D. M. (2000). Improving Employee Acceptance Toward Performance
Appraisal and Merit Pay Systems. Review of Public Personnel Administration, 20(1),
41–53. https://doi.org/10.1177/0734371x0002000104
Glikson, E., & Woolley, A. W. (2020). Human Trust in Artificial Intelligence: Review of
https://doi.org/10.5465/annals.2018.0057
https://doi.org/10.1016/1048-9843(95)90036-5
Hiemstra, A. M. F., Oostrom, J. K., Derous, E., Serlie, A. W., & Born, M. P. (2019).
5888/a000230
Höddinghaus, M., Sondern, D., & Hertel, G. (2021). The automation of leadership functions:
Would people trust decision algorithms? Computers in Human Behavior, 116, 106635.
https://doi.org/10.1016/j.chb.2020.106635
Hogg, M. A., van Knippenberg, D., & Rast, D. E. (2012). The social identity theory of
https://doi.org/10.1080/10463283.2012.741134
Hughes, R., & Huby, M. (2004). The construction and interpretation of vignettes in social
https://doi.org/10.1921/17466105.11.1.36
Jarrahi, M. H. (2018). Artificial intelligence and the future of work: Human-AI symbiosis in
https://doi.org/10.1016/j.bushor.2018.03.007
Kellogg, K. C., Valentine, M. A., & Christin, A. (2020). Algorithms at Work: The New
https://doi.org/10.5465/annals.2018.0174
Langer, M., König, C. J., & Papathanasiou, M. (2019). Highly automated job interviews:
205395171875668. https://doi.org/10.1177/2053951718756684
Leyer, M., & Schneider, S. (2021). Decision augmentation and automation with artificial
https://doi.org/10.1016/j.bushor.2021.02.026
Lonati, S., Quiroga, B. F., Zehnder, C., & Antonakis, J. (2018). On doing relevant and rigorous
19–40. https://doi.org/10.1016/j.jom.2018.10.003
Napier, B. J., & Ferris, G. R. (1993). Distance in organizations. Human Resource Management
Northouse, P. G. (2019). Leadership: Theory and Practice (8th ed.). SAGE Publications, Inc.
Öztekin, Z., İşçi, S., & Karadağ, E. (2015). The Effect of Leadership on
57–79. https://doi.org/10.1007/978-3-319-14908-0_4
Parris, D. L., Dapko, J. L., Arnold, R. W., & Arnold, D. (2016). Exploring transparency: a new
247. https://doi.org/10.1108/md-07-2015-0279
Parry, K., Cohen, M., & Bhattacharya, S. (2016). Rise of the Machines. Group & Organization
Petrinovich, L., O’Neill, P., & Jorgensen, M. (1993). An empirical study of moral intuitions:
467–478. https://doi.org/10.1037/0022-3514.64.3.467
Popper, M. (2013). Leaders perceived as distant and close. Some implications for psychological
https://doi.org/10.1016/j.leaqua.2012.06.008
Raisch, S., & Krakowski, S. (2021). Artificial Intelligence and Management: The Automation–
https://doi.org/10.5465/amr.2018.0072
Rasmussen, B., Jensen, K. K., & Sandoe, P. (2007). Transparency in decision-making processes
Raveendhran, R., & Fast, N. J. (2021). Humans judge, algorithms nudge: The psychology of
6486.1995.tb00793.x
Schildt, H. (2016). Big data and organizational design – the brave new world of algorithmic
https://doi.org/10.1080/14479338.2016.1252043
Sheringham, J., Kuhn, I., & Burt, J. (2021). The use of experimental vignette studies to identify
drivers of variations in the delivery of health care: a scoping review. BMC Medical
Suen, H. Y., Chen, M. Y. C., & Lu, S. H. (2019). Does the use of synchrony and artificial
Thomassin Singh, D. (1998). Incorporating cognitive aids into decision support systems: the
case of the strategy execution process. Decision Support Systems, 24(2), 145–163.
https://doi.org/10.1016/s0167-9236(98)00066-9
Tröster, C., & van Quaquebeke, N. (2021). When Victims Help Their Abusive Supervisors:
The Role of LMX, Self-Blame, and Guilt. Academy of Management Journal, 64(6),
1793–1815. https://doi.org/10.5465/amj.2019.0559
Turban, D. B., & Jones, A. P. (1988). Supervisor-subordinate similarity: Types, effects, and
https://doi.org/10.1037/0021-9010.73.2.228
van Quaquebeke, N., & Eckloff, T. (2013). Why follow? The interplay of leader categorization,
identification, and feeling respected. Group Processes & Intergroup Relations, 16(1),
68–86. https://doi.org/10.1177/1368430212461834
Wesche, J. S., & Sonderegger, A. (2019). When computers take the lead: The automation of
https://doi.org/10.1016/j.chb.2019.07.027
Yeomans, M., Shah, A., Mullainathan, S., & Kleinberg, J. (2019). Making sense of
https://doi.org/10.1002/bdm.2118
Yukl, G., O’Donnell, M., & Taber, T. (2009). Influence of leader behaviors on the leader‐
https://doi.org/10.1108/02683940910952697
Zerilli, J., Knott, A., Maclaurin, J., & Gavaghan, C. (2018). Transparency in Algorithmic and
Appendix A
General scenario
Imagine that you are currently looking for a new job as a sales agent. After some
interviews with different companies, you now have a job offer from “SecurInsure".
"SecurInsure" is a big insurance company that would like to hire you in their sales department.
The offer corresponds to your wishes on topics like pay and benefits.
However, you also really care about the leadership philosophy at the company because
you know that this will have a big effect on your daily work. Therefore, you talk to employees
from the sales department who explain to you how the team is managed. This is what you learn:
Vignettes
“SecurInsure” uses augmented leadership ( = the use of automated systems and analytics
to support people management), to manage their employees. The approach is based on the idea
that the combination of human and technological capabilities is stronger than either of them
alone. At the moment the company is trialing different distributions of how human managers
and automated systems can complement each other. In the sales department team managers carry out most management decisions and automated systems take on a consulting role.
In the team that you would be working in, Alex Stanton is the team manager. In day to day business Alex takes on most of the team management tasks. In certain leadership situations an automated system takes on a supporting role.
For example, Alex is in charge of the yearly performance and career development
assessment. The assessment entails coaching, personalized feedback and the evaluation of future possibilities at the company. The automated system supports Alex in the assessment of
the team member’s performance by e.g. providing performance analytics. However, it is clear
for all team members that it is Alex who takes the lead in all decisions involved in the
assessment and the automated system only consults when technical competencies are needed.
“SecurInsure” uses augmented leadership (= the use of automated systems and analytics
to support people management), to manage their employees. The approach is based on the idea
that the combination of human and technological capabilities is stronger than either of them
alone. At the moment the company is trialing different distributions of how human managers
and automated systems can complement each other. In the sales department team managers carry out most management decisions and automated systems take on a consulting role.
In the team that you would be working in, Alex Stanton is the team manager. In day to day business Alex takes on most of the team management tasks. In certain leadership situations an automated system takes on a supporting role.
For example, Alex is in charge of managing of team projects. The project management
entails the distribution of tasks as well as keeping track of deadlines and task accomplishment.
The automated system supports Alex in the project management process by e.g. providing
interpersonal coaching in case of conflict. However, it is clear for all team members that it is
Alex who takes the lead in all decisions involved in the management of team projects and the automated system only consults when technical competencies are needed.
“SecurInsure” uses augmented leadership (= the use of automated systems and analytics
to support people management), to manage their employees. The approach is based on the idea
that the combination of human and technological capabilities is stronger than either of them alone. At the moment the company is trialing different distributions of how human managers
and automated systems can complement each other. In the sales department an automated
system carries out most management decisions and team managers take on a consulting role.
In the team that you would be working in Alex Stanton is the team manager. However,
in day to day business an automated system takes on most of the team management tasks. Only in certain leadership situations does Alex take on a supporting role.
For example, the automated system is in charge of the yearly performance and career
development assessment. The assessment entails coaching, personalized feedback and the
evaluation of future possibilities at the company. Alex supports the automated system in the
assessment of the team member’s performance by e.g. providing interpersonal coaching in case
of conflict. However, it is clear for all team members that it is the automated system that takes
the lead in all decisions involved in the assessment and Alex only consults when interpersonal competencies are needed.
“SecurInsure” uses augmented leadership (= the use of automated systems and analytics
to support people management), to manage their employees. The approach is based on the idea
that the combination of human and technological capabilities is stronger than either of them
alone. At the moment the company is trialing different distributions of how human managers
and automated systems can complement each other. In the sales department an automated
system carries out most management decisions and team managers take on a consulting role.
In the team that you would be working in, Alex Stanton is the team manager. However,
in day to day business an automated system takes on most of the team management tasks. Only in certain leadership situations does Alex take on a supporting role.
For example, the automated system is in charge of managing of team projects. The
project management entails the distribution of tasks as well as keeping track of deadlines and
task accomplishment. Alex supports the automated system in the project management process
by e.g. providing interpersonal coaching in case of conflict. However, it is clear for all team
members that it is the automated system that takes the lead in all decisions involved in the
management of team projects and Alex only consults when interpersonal competencies are
needed.
Appendix B
Construct Item