
Accepting the collaboration between

humans and computers


Study on the influence of transparency and leadership behavior on the
relationship between augmented leadership and the acceptance of the leader

Jeannette Paap
13383531
June 24th, 2022
Final version

MSc in Business Administration – Leadership and Management track


Amsterdam Business School, University of Amsterdam
Academic year 2021/2022
EBEC approval number: 20220315110332
First supervisor: Carlotta Bunzel

Statement of originality

This document is written by Jeannette Paap, who declares to take full responsibility for the contents of this document. I declare that the text and the work presented in this document are

original and that no sources other than those mentioned in the text and its references have been

used in creating it. The Faculty of Economics and Business is responsible solely for the

supervision of completion of the work, not for the contents.



Acknowledgements

First of all, I would like to thank Carlotta Bunzel for the supervision and feedback

throughout the process. Next, I would like to thank the participants who filled in the

questionnaire. I would like to thank Philippine de Planque for her efforts in collecting the data

and the support throughout the process of the thesis period. Lastly, I would like to thank my

friends and family, especially Anne Smits and Lisa Appel for motivating me.

Abstract

Do you follow your leader blindly? Leaders can emerge in various ways since leadership

functions can be performed by both humans and computers due to innovations such as artificial

intelligence. This creates the possibility of augmented leadership, where humans and computers

work together performing leadership functions. Effective leaders can contribute to the execution

of an organization’s strategy with their team, which means that employees need to follow their

leader to help achieve this strategy. Acceptance of the leader is key in accomplishing this. Yet,

research on the collaboration of humans and computers in leadership, augmented leadership, does not provide information on the acceptance of the augmented leader. The aim of our research is to investigate the acceptance of augmented leadership by comparing two distributions of augmented leadership and examining whether transparency mediates this effect. We likewise investigated the circumstantial factor of situational leadership behavior by comparing the acceptance of two different leadership behaviors. We collected data through an online experimental vignette study in which augmented leadership distribution (human default vs. algorithm default) and situational leadership behavior (person-focused vs. task-focused) were manipulated to measure the variables of our model. Our results showed that acceptance of the augmented leadership distribution is higher when the human is prominent and the algorithm is supportive. Moreover, transparency partially mediates this relationship. No interaction effect with situational leadership behavior was found. In practical terms, our study offers insight into how to implement augmented leadership so that employees accept, and ultimately follow, their leader.

Key words: augmented leadership, acceptance, transparency, leadership behavior,

collaboration, algorithm.

Table of contents

List of tables
List of figures
Introduction
Theoretical framework
    Augmented leadership
    Acceptance of augmented leadership
    Mediating role of transparency
    Situational leadership behavior
Method
    Design
    Procedure
    Participants
    Vignettes
    Measurements
        Acceptance
        Transparency
        Control variable
        Manipulation check
    Analytical Plan
Results
    Hypothesis testing
        Leader acceptance of the augmented leadership distribution
        Mediating effect of transparency on acceptance of the leader
        Moderating effect of situational leadership behavior on acceptance of the leader
Discussion
    Findings
    Theoretical implications
    Practical implications
    Strengths and limitations
    Future research
    Conclusion

List of tables

Table 1 - Overview of distribution of participants across conditions
Table 2 - Correlation matrix
Table 3 - Means, standard deviations, and one-way analyses of variance in acceptance by augmented leadership distribution
Table 4 - Regression analysis for control variable lay beliefs of AI by augmented leadership distribution to acceptance of the leader
Table 5 - Regression analysis for mediation by transparency for augmented leadership distribution to acceptance of the leader
Table 6 - Regression analysis for control variable lay beliefs of AI by augmented leadership distribution to transparency
Table 7 - Factorial ANOVA predicting acceptance of the leader by augmented leadership distribution and situational leadership behavior

List of figures

Figure 1 - The conceptual model
Figure 2 - Regression coefficients of the mediating effect of transparency on the relationship between augmented leadership distribution and acceptance of the leader

Introduction

First automation in production, then computers taking over simple jobs. What is next? Would algorithms be making business decisions? Could algorithms be the new

leaders? According to Lee (2018), the innovations in data infrastructure, machine learning, and

artificial intelligence are revolutionizing how organizations are managed by leaders, and thus how

leaders manage employees. Accordingly, the functions of leaders are shifting because of the

increasing collaboration between humans and algorithms (Wesche & Sonderegger, 2019).

Together with the evolving computers, algorithms change from being tools to partners which

can perform leadership functions (Höddinghaus, Sondern, & Hertel, 2021; Wesche & Sonderegger, 2019).

Augmented leadership is leadership where humans closely collaborate with computers

to perform a task by combining the strengths of both actors (Raisch & Krakowski, 2021).

Leaders play a significant role towards employees, which influences how employees perform

their job (Iskamto, 2020; Voon et al., 2011). Employees’ acceptance of the leader is essential for the execution of the decisions made by the leader, since the execution of those decisions contributes to the strategy of the organization (Thomassin Singh, 1998; Zagotta & Robinson,

2002). In addition, according to Wesche and Sonderegger (2019), the question of whether employees

accept algorithms as leaders is one of the most important aspects of algorithmic management,

where the computer performs managing tasks. For sustainable and effective leadership,

voluntary compliance from employees as well as acceptance of the leader is essential (van

Quaquebeke & Eckloff, 2013). Therefore, it is meaningful to understand where the acceptance

or nonacceptance of employees comes from.

The question that arises is why employees would accept the augmented version of leadership. In general, algorithms are associated with a lack of transparency (Glikson & Woolley, 2020). This means that employees who lack knowledge about the decision-making process could perceive both the process and its outcome as ambiguous

(Ananny & Crawford, 2018). Humans have the ability to explain their processes and decisions

verbally when something is unclear for employees. In addition, a gap identified is whether more

background information about the leader’s process contributes to the reaction, such as

acceptance or nonacceptance, of the employee (Hiemstra, et al., 2019). Since transparency can

support the understanding of the process that an augmented leader proceeds, this could mean

that the acceptance of that augmented leader will increase. Therefore, the first research question

this study aims to answer is: is the relationship between augmented leadership distribution and

acceptance of the leader mediated by transparency?

Accordingly, leaders are responsible for making decisions and for the outcome of these decisions (Leyer & Schneider, 2021). According to Fleishman et al. (1991),

there are two classifications of leadership behavior: task-focused and person-focused. Task-

focused leadership behavior deals with task accomplishment, where a leader intends to identify

operating procedures, task requirements and obtaining task information (Burke et al., 2006).

Person-focused leadership behavior deals with team interaction, where a leader intends to have

a good relationship with their followers (Fleishman, et al., 1991). Augmented leadership

combines the best of both humans and computers to perform leadership tasks, these tasks are

being performed by employees who are being managed by their leaders, which are behaving in

different ways (Raisch, & Krakowski, 2021; Wesche & Sonderegger, 2019). Since the degree

to which employees accept the behavior of their leaders being distributed either way, human

default or computer default, is uncertain, it would be relevant to examine the interaction with

the two classifications of situational leadership behavior. Therefore, the second research

question this study intends to answer is: is the relationship between augmented leadership

distribution and acceptance of the leader moderated by the leadership behavior?



The present study provides insight into how leadership distribution impacts employees.

This insight is crucial for creating the best combination of the augmented leadership distribution, that is, the collaboration between humans and computers, so that the leader is accepted and employees can consequently be led effectively. Therefore, this study contributes to the

literature by presenting information on how employees’ acceptance of their augmented leader

develops and what influences it (Figure 1). We used a between-subject

experimental vignette study to examine our questions. Augmented leadership distribution and

situational leadership behavior are manipulated, which created four different scenarios. For

practical contributions, the present study provides information on how to implement algorithms

in leadership functions. It will challenge practitioners to design the collaboration between

humans and algorithms in a way that employees will accept their leaders and ultimately their

made decisions. This could lead to recommended policies for designing augmented

management within an organization.

Figure 1
The conceptual model

Augmented leadership distribution predicts acceptance of the leader; transparency mediates and leadership behavior moderates this relationship.

Theoretical framework

Augmented leadership

Leadership involves an influential process with a common goal (Northouse, 2019).

This means that leadership consists of a process where a common goal is attained through

influencing individuals. Grimm (2010) describes leadership as needing to deal with change.

Accordingly, team leadership is a dynamic process of social problem solving accomplished through generic responses to social problems (Burke et al., 2006). Being a leader means that certain

functions need to be executed such as decisions about the goals, ensuring an available network

for realizing the goals, and establishing that the goals are attained by making people do what

needs to be done (Grimm, 2010). The way of managing employees has changed due to the

revolution of the digital age (Lee, 2018).

Leadership functions can be performed by both humans and computers because of the

use of algorithms (Höddinghaus, et al., 2020). The promise of employing artificial intelligence

(AI) is supplying supplementary cognitive abilities which will improve the efficiency and

productivity of the leader (Kolbjørnsrud et al., 2016). The collaboration between humans and

computers performing leadership functions is called augmented leadership (Raisch, &

Krakowski, 2021). In this augmented leadership the human and computer are both performing

leadership functions, working together on various tasks (Leyer & Schneider, 2021; Raisch, &

Krakowski, 2021). Additionally, the relationship between employees and leaders is influenced

by the augmentation of humans and computers performing leadership functions (Höddinghaus,

et al., 2020). Accomplishing a good balance of the interaction of the collaboration between

humans and computers performing as a leader is important for the execution of the

organization’s strategy (Iskamto, 2020; Voon et al., 2011), and the overall business

performance (Daugherty & Wilson, 2018).



In augmented leadership, both actors, human and computer, are present. The amount

of work they take on could differ (Daugherty & Wilson, 2018). In the case that a human is the

main actor in the augmented leadership, the human acts as the leader of their employees and

consults the computer system (algorithm) when the human feels like it is necessary (Raisch &

Krakowski, 2021). Algorithms perform a supporting role when being consulted. On the other

hand, when a computer is the main actor in the augmented leadership, the algorithm is in charge

of most of the leadership tasks and a human plays a role in certain situations when necessary (Raisch

& Krakowski, 2021). Both situations require the presence of a human as well as a computer,

hence the augmented leadership. The augmentation of leadership consists of performing the

same leadership functions as normal, however with the collaboration between humans and

computers the achieved outcomes are far better than either could achieve alone (Daugherty &

Wilson, 2018). This raises the question of which qualities each entity brings to the collaboration, the

augmented leadership.

A capability that humans have is intuitive skills that contribute to thinking of the bigger

picture (Jarrahi, 2018). This means that humans can combine information from their employees

and make judgments based on fragmentary signs. Moreover, humans can engage in creative

thinking which most of the time requires improvisation (Wesche & Sonderegger, 2019). This

means that humans are able to act innovatively. Emotional intelligence is similarly a quality

that humans possess that is effective in collaborating with employees (Chamorro-Premuzie &

Ahmetoglu, 2016). Strategic thinking can be required as a leader, this is a task that is easier for

humans since they have the right understanding and can make sense of the different contexts in

which strategic thinking is necessary (Jarrahi, 2018). Employees perceive humans as less

controlling because humans cannot track everything themselves and computers can (Kellogg et al., 2020). Humans cannot process as much information at one time, whereas computers with

algorithms can (Brynjolfsson & Mitchell, 2017; Cheng & Hackett, 2021; Glikson & Woolley,

2020). Moreover, humans are more likely to overlook some patterns that can be detected by

algorithms (Parry et al., 2016). Consequently, aid from a computer would be beneficial for

bridging the capabilities that humans do not possess themselves. The augmentation of

leadership provides a solution for bridging these gaps.

Computers can detect different layers of complexity and with that identify hidden

patterns in large data sets (Parry et al., 2016). They can use statistical models when there is

information missing and predict based on the data (Cheng & Hackett, 2021). Algorithms

possess a more developed processing capacity compared to humans, therefore algorithms make

data-based decisions faster and are less expensive in operation (Brynjolfsson & Mitchell, 2017;

Glikson & Woolley, 2020). This means that using algorithms can save time and money,

minimize risk, and increase productivity for certain tasks (Suen et al., 2019). By making use of

algorithms, biases can be avoided because computer systems lack self-interest (Langer et al., 2019; Parry et al., 2016). Furthermore, something else that algorithms

do not possess is empathy and human emotions, which makes their response toward employees

differ from humans (Chamorro-Premuzie & Ahmetoglu, 2016). However, due to their

objectivity algorithms can be better at providing feedback to employees which removes

reactions toward leaders that can cause hurdles (Lee, 2018; Raveendhran & Fast, 2021). The

processes that are used by an algorithm are advanced, encompassing but opaque (Kellogg et al.,

2020). Accordingly, human assistance would be a useful complement. A way to bring the advantages of both actors together is the augmentation of leadership.

Being an effective leader involves performing all critical leadership functions (Burke,

et al., 2006). Leaders focus on the attainment of the goals that are formulated for the strategy

of the organization, and making decisions that contribute to the common goal is part of that. The

efficient collaboration between humans and computers enables managers to make enhanced

decisions (Kolbjørnsrud et al., 2016). Effective leadership includes the acceptance of the leader

(Yukl et al., 2009). However, for leaders trying to build a relationship with their employees in order to lead effectively, both distributions of augmented leadership, human default and algorithm default, come with advantages and disadvantages (Chamorro-Premuzic & Ahmetoglu,

2016). Therefore, the augmentation of leadership where both humans and computers work

together should be an effective leadership style. The raised question is which allocation of these

functions is most accepted by employees.

Acceptance of augmented leadership

Part of leadership is making decisions that can contribute to the overarching strategy of

the organization. Thus, accepting the decisions that leaders make is crucial for executing the

organization’s strategy (Thomassin Singh, 1998; Zagotta & Robinson, 2002). Consequently,

employees that accept their leader will more easily accept their leader’s decision. With the

influential process that leadership is, the question arises of how to exert this influence over employees in such a way that the employees agree to follow the decisions and rules of their

leader (Wesche & Sonderegger, 2019). Furthermore, in principle, behaviors such as acceptance

or nonacceptance are influenced by knowing how employees feel about their leader (Hiemstra,

et al., 2019). Voluntary compliance and acceptance of the leader are both esstional for

sustainable and effective leadership (van Quaquebeke & Eckloff, 2013). Leadership needs to

be accepted for commitment to the execution of the organization’s strategy. Consequences of

not accepting the leadership are a decrease in commitment (Öztekin et al., 2015), less

productivity (Baker et al., 2002), and subsequently not contributing to the goals and strategy.

Moreover, the absence of acceptance of AI systems impedes their manifestation and implementation in the workplace (Gill, 1995).

With leadership, there are two parties involved, the leader and the employee. The

relationship between these two is important in terms of the effectiveness of leadership. An augmented leader consists of two elements, human and computer. The relationship between

leader and employee is an exchange process where a unique relationship develops (Graen &

Uhl-Bien, 1995). How individuals view each other has an impact on the connection between

leaders and their employees, and the quality of the relationship may suffer as a result.

Employees identify themselves by looking to their leaders (Hogg et al., 2012). The relationship

between employees and a leader is a social exchange that can be disturbed by the way

employees perceive their leaders. The way people perceive others can be explained by social identity theory, which describes how people perceive each other by means of self-evaluation and finding common ground with other people (Hogg, 2001). This theory thus clarifies how employees view others, meaning that employees’ own perception is in play (Hogg, 2001). How employees perceive their leader influences the

relationship between them and it could harm or favor the quality of the relationship (Turban &

Jones, 1988). Identification with someone else is essential for achieving feelings of closeness

and providing common ground between those individuals (Napier & Ferris, 1993). According

to the similarity attraction paradigm, people who perceive someone else to be more similar tend

to like that person more (Ensher & Murphy, 1997). Employees who perceive their leader as

similar have a tendency to be fond of their leader too (Turban & Jones, 1988). Additionally,

employees have a positive bias towards that person. Looking at the social identity theory, in

terms of augmented leadership, identification with the leader, feeling close and having common

ground, would lead to more acceptance.

Employees who perceive leaders to be similar are expected to have a positive feeling

about their leader (Ensher & Murphy, 1997; Turban & Jones, 1988). The more distant the algorithm feels to employees, the more unfamiliar and subjective it is perceived to be (Trope &

Liberman, 2010). Likewise, inaccessible entities, such as algorithms, are perceived as less

familiar to the employees due to their abstract nature (Popper, 2013). In augmented leadership

both humans and computers are present, meaning that there is a part of the leadership that will

not feel familiar to employees. The two situations of augmented leadership we examine in this

research differ in a way that one actor is prominent in the leadership and the other is supporting.

According to the social identity theory, employees will have more similarities with a human

and therefore perceive a human as more likeable. More agreement on the human leader is

probably due to the fact that the human part of the leadership is more prototypical (Hogg et al.,

2012). Thus, in the situation where the human is more prominent in the leadership, employees can identify themselves more with the human part and will like that part of the leadership more than the algorithm part, which feels more distant (Mahmud et al., 2022). Conversely, in the situation where the algorithm is more prominent in the leadership, employees will try to identify with the algorithm, which is more difficult due to unfamiliarity (Lim & O’Connor,

1996). Therefore, we suspect that the acceptance of the leader will be higher for the augmented

leadership distribution where the human is in the lead and the algorithm acts in a supporting role. The

following hypothesis is the result of this reasoning:

Hypothesis 1: Followers’ acceptance of augmented leadership will be higher when the human is the default and is supported by the algorithm (versus when the algorithm is the default and is supported by the human).

Mediating role of transparency

An implication that appears in augmented leadership is the limited experience or

knowledge that employees have with the automated part of the leadership distribution, the

algorithm (Höddinghaus et al., 2021). Algorithms especially are unknown, due to the black box

design which means that there is no clear perception of how algorithms operate (Mahmud et

al., 2022). Gabris and Ihrke (2000) describe that, in order for employees to accept, it is important that the deployed system is procedurally fair and valid for employees. This means that when the procedure is perceived to be honest and reasonable, the employee is more likely to accept it. Transparency helps to develop understanding (Mahmud et al., 2022). Associating this

with social identity theory, transparency aids the understanding of the augmented leader, giving employees the information they need to identify with their leaders.

Transparency indicates that “clear and open information” is a necessity in the exchange

between the leader and employees (Breuer, et al., 2020, p. 13). The line of reasoning should be

uncovered, described, documented, and communicated for ensuring transparency (Rasmussen,

et al., 2007). For transparent decision-making processes, decision-makers should clearly show

the principles behind the conclusions as well as the reasoning that has brought the decision-

maker to that conclusion (Rasmussen et al., 2007). Information is vital for successful decision-making (Rodrigues & Hickson, 1995). Transparency should nurture the understanding

between the leader and employee, which as well provides traceability (Breuer, et al., 2020).

Meaning that employees should be aware of the rationale the leaders have while executing

leadership functions. Breuer et al. (2020) describe a category of transparency, task-related

transparency, as possessing those characteristics that provide “transparent and open knowledge

management” (p. 18). This raises the importance of understanding the allocation of augmented

leadership for ensuring open knowledge and information for employees.

Buell and Norton (2011) found that giving an explanation of why certain advice is given

enhanced the acceptability of the given advice. Because of the interaction between human leaders and employees, humans have the opportunity to communicate with employees and verbally explain why they made certain decisions (Zerilli et al., 2018). To a certain extent this means that humans are transparent. However, humans also

can be less transparent and comprehensible in their decision-making (Höddinghaus et al., 2021).

Operational transparency can aid in reducing this uncertainty (Buell & Norton, 2011).

Contrastingly, algorithm processes are not transparent and can be perceived as vague due to the
limited access to some specific information (Glikson & Woolley, 2020; Kellogg et al., 2019).

Algorithms do not have the possibility to verbally explain their line of reasoning and likewise

cannot sense when employees do not understand something and need an explanation. Mostly

the reasoning, as well as the complexity of the algorithms, are not transparent and difficult to

understand (Faraj et al., 2018). Being transparent is among other things essential for enhanced

collaboration among employees and their leaders (Parris et al., 2016). Increasing the

understanding of a process or decision is the intention of being transparent (Cramer et al., 2008).

Viewing the two augmented leadership situations, both humans and algorithms play a part in the transparency of the leadership.

Having transparency in place within the leader-member relationship is important.

Employees who have enough knowledge about their leader and the motives of their leader have

the opportunity to know their leader. Offering reasons for making decisions can help employees

to understand the rationale of their leader (Mahmud et al., 2022). Transparent leaders provide

the possibility for employees to see or know the complete picture which is necessary to evaluate

or judge their leader (Hogg et al., 2012; Kellogg et al., 2019; Mahmud et al., 2022). Then

employees can determine if they can identify with their leader and would agree with their leader.

Employees with knowledge and information about their leader can also compare themselves to

their leaders, even if the computer is leading the employees. The likability of a leader is

an outcome of employees identifying with their leaders and finding common ground (Ensher &

Murphy, 1997; Hogg et al., 2012; Napier & Ferris, 1993). Employees who are fond of their

augmented leader will agree more with them, accept the decisions made and follow their

execution (Hogg et al., 2012; Wesche & Sonderegger, 2019).

When the human is prominent in the augmented leadership situation, employees can ask their leader for an explanation when something is unclear (Önkal et al., 2009). Hence, information about the leader is open to the employee, who can ask for clarification. In the other augmented leadership situation, when a computer is more prominent, asking the algorithm to explain why certain decisions were made is not possible (van Dongen & van Maanen, 2013).

Even with the support of the human, the human part of the augmented leadership also cannot

tell how and why the computer made certain decisions because of the black box design of

algorithms (Mahmud et al., 2022). Therefore, we suggest that the situation of augmented

leadership where an algorithm is more prominent is not as transparent as when the human is

more prominent in the augmented leadership. The following hypothesis is the result of this

reasoning:

Hypothesis 2: The relationship between augmented leadership distribution and

acceptance of the leader is mediated by transparency in such a way that when the human

is prominent in the augmented leadership, compared to the algorithm being prominent

in the augmented leadership, the transparency is higher, which will increase the

acceptance of the leader.

Situational leadership behavior

The relationship between leadership distribution and the acceptability of the leader can

also be influenced by the behavior of a leader. Leaders act with a focus on goal attainment,

which creates the behavior of the leader (Fleishman, et al., 1991). However, reaching a goal

comes with challenges since organizations must deal with the external environment, subsystems

within the organization that change, and individual employees (Fleishman, et al., 1991). In

dealing with these changes within an organization, leaders can behave in several ways. Two

classifications of leadership behaviors are known in the literature, behavior dealing with task

accomplishment, and behavior dealing with team interaction (Fleishman, et al., 1991).

Leadership behaviors have an impact on the performance outcome of their team (Burke et al.,

2006). Perceptions of the leader can be influenced by the nature of the task (Fleishman, et al.,

1991; Lee, 2018). Our research adopts the dichotomy of leadership behaviors of Fleishman et

al. (1991) since they classified the behaviors from two common themes.

Leaders performing task-focused behavior participate in the facilitation of task

requirements, operation procedures, and task information (Burke, et al., 2006). Task-focused

behavior involves transactional behavior such as motivating goal achievement (Patroom, 2018).

A leader who performs task-focused behavior is characterized by initiating structure, which

includes promoting the accomplishment of goals through the minimization of role ambiguity as

well as conflict (Burke et al., 2006). Moreover, boundary spanning is a characteristic of task-

focused leader behavior, which refers to the emphasis on attaining resources and information

through communication and collaboration (Patroom, 2018). The utilization and monitoring of employees involve tasks that demand technical skills (Fleishman et al., 1991).

According to de Winter and Hancock (2015), tasks such as information storing and signalling

controls are repetitive tasks that are performed best by computers. These tasks require technical

skills which algorithms are better at than humans since computers have the capacity and

complexity to process this (Brynjolfsson & Mitchell, 2017; Glikson & Woolley, 2020; Parry et

al., 2016). Subsequently, traits that are explaining task-focused behavior seem to match the

capabilities of a computer (Langer & Landers, 2021).

Person-focused behavior performed by a leader includes the facilitation of behavioral

interactions, cognitive structures, and attitudes, with the goal to develop a team that can work

effectively (Burke, et al., 2006). A leader who behaves person-focused is characterized by being

transformational which means that the leader tries to inspire their followers (Patroom, 2018).

Moreover, this leadership behavior acts in maintaining a relationship with open communication

where mutual respect, satisfying needs, and trust are the basis (Burke et al., 2006). Motivational

and empowering are also characteristics to describe person-focused behavior, this includes

arousing employees to put in extra effort, providing autonomy, acting as a coach, and promoting

self-leadership for employees (Patroom, 2018). Tasks that require judgment, reasoning,

flexibility, and improvising demand human execution (de Winter & Hancock, 2015). Tasks involving personal motivation, such as supporting employees and enabling performance, require social skills (Fleishman et al., 1991). Social skills are best performed by humans since

algorithms lack the ability of intuition, social interaction, and empathy (Chamorro-Premuzie &

Ahmetoglu, 2016; Lee, 2018). Consequently, characteristics that are describing person-focused

behavior seem to match the traits of a human (Mahmud et al., 2022).

The two different behaviors, person-focused and task-focused, seem to complement the two different actors of augmented leadership. This means that person-focused

behavior appears to be performed better by humans due to their skills, attributes, and

capabilities. Furthermore, task-focused behavior appears to be conducted better by computers

because of the emphasis on the traits and resources that are involved in such tasks. The two

leadership behaviors both have their own requirements for the best execution of the

corresponding tasks. For the acceptance of the augmented leader, the best-suited combination

seems to be when the default leadership function brings out the best in the behavioral tasks of

that leader. The following hypothesis is the result of this reasoning:

Hypothesis 3: Situational leadership behavior moderates the relationship between leadership distribution and acceptance of the leader such that

a. the effect of human-default leadership supported by an algorithm on the
acceptance of the leader is higher when the leader performs person-focused
behavior as opposed to when the same leader performs task-focused behavior;
b. the effect of algorithm-default leadership supported by a human on the
acceptance of the leader is higher when the leader performs task-focused
behavior as opposed to when the same leader performs person-focused behavior.

Method

Design

The approach we took to test the hypotheses is an experimental vignette study. An

experimental vignette study combines an experiment with a survey, whereby the weaknesses of both approaches counterbalance each other (Atzmüller & Steiner, 2010). The vignette experiment is the core element of the method, complemented by a traditional survey for measuring

constructs. Additionally, this method of research is generally used for examining the opinions,

beliefs and attitudes of people (Petrinovich et al., 1993). It gives the possibility to have high

control of the manipulations due to the hypothetical scenarios where independent variables are

being manipulated (Petrinovich et al., 1993). First, participants read the vignette where they

were asked to imagine themselves in a situation. Secondly, with this situation in mind, the

participants were asked to complete the questionnaire. Participants were randomly assigned to

a 2 (leadership distribution: human default vs. algorithm default) x 2 (leadership behavior:

person-focused vs. task-focused) between-subject design. The two independent variables,

augmented leadership distribution and situational leadership behavior, were manipulated.
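To make the factorial structure concrete, the sketch below enumerates the four conditions and randomly assigns a participant to one of them. This is a minimal illustration with hypothetical labels; in the actual study, random assignment was handled by the survey tool.

```python
import itertools
import random

# The two manipulated factors of the 2 x 2 between-subject design.
# Labels are hypothetical stand-ins for the four written vignettes.
DISTRIBUTIONS = ["human_default", "algorithm_default"]
BEHAVIORS = ["person_focused", "task_focused"]

# The Cartesian product of the factors yields the four vignette conditions.
CONDITIONS = list(itertools.product(DISTRIBUTIONS, BEHAVIORS))

def assign_condition():
    """Randomly assign a participant to one of the four vignettes."""
    return random.choice(CONDITIONS)
```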

Procedure

Participants were invited to contribute to our study through several channels. We used

LinkedIn, Facebook, e-mail, WhatsApp, and verbal invitations for recruiting our respondents.

The survey consisted of the vignette and the questionnaire, which were both in written form

meaning that respondents needed to read it themselves. Participants filled in the survey in an

online setting, which ensures better external validity because participants could fill in the

questionnaire anywhere (Tröster & van Quaquebeke, 2021). After the invitation, participants needed to give consent. The scenario then started with a one-page general introduction that made participants aware of the kind of situation they were in and of the importance of identifying with the described situation. This general information was followed by the in-depth scenario on the next page, which was not the same for every participant due to the manipulations, resulting in four vignettes. After this, the questionnaire began.

Participants were randomly assigned, which means that every participant had the same

chance to be assigned to each scenario. Moreover, we used a between-subject design, which

means that each participant only reviews one vignette before answering the questionnaire

(Atzmüller & Steiner, 2010; Evans et al., 2015). After completing the questionnaire, participant

comparisons are made across the participants per vignette group (Atzmüller & Steiner, 2010).

Due to the between-subject design, participants had no reference point since they were presented with only one vignette; a reference point could otherwise distort participants’ true judgement (Aguinis

& Bradley, 2014). Vignette equivalence can help with this, by ensuring that the structure of the

vignette is similar across all vignettes (Evans, et al., 2015) and that participants have sufficient

information, in terms of context (Aguinis & Bradley, 2014). However, reading a vignette is

significantly different from experiencing such a situation, which creates a limitation of the

participant’s understanding (Lee, 2018). Furthermore, in order to receive credible results from

the experimental vignette method, the study design needs to be flawless (Sheringham et al.,

2021).

Participants

Participation in this survey was completely voluntary and anonymity was guaranteed

for the participants. Participants were made aware of the possibility of withdrawing from the

survey midway and of the possibility of attention checks. The survey was prepared in Qualtrics,

which is a survey tool with the possibility to export data to SPSS. The survey opened on April

12th 2022 and closed on May 18th 2022. Participants who failed to start the first item were

removed from the dataset. The attention check was examined, which resulted in 92% correct

answers. Based on the feedback from multiple participants, we concluded that the attention check did not perform as we intended. In total, we recruited 171 participants. However, not every

participant filled in the questionnaire entirely; 83% of the participants finished the questionnaire

completely. Missing values were not a problem since SPSS can process these for each separate analysis through built-in procedures. Moreover, excluding participants based

on attention check or based on not finishing the survey did not provide significantly different

outcomes of the analysis. Participants were 56% female and 43% male; one participant identified as non-binary and one preferred not to say. The youngest participant was 21 years old

and the oldest participant was 73 years old, with the average age of the sample being 31 years

old. Most participants’ highest education was a bachelor’s degree (54%), followed by a master’s

degree (30%). The majority (53%) of the participants worked full-time.

Vignettes

Using vignettes means that a “short, carefully constructed description” (p. 128) of a

situation is portrayed (Atzmüller & Steiner, 2010). The aim is to evaluate dependent variables

such as behaviors by presenting the participant with realistic situations (Aguinis & Bradley,

2014). Moreover, this allows for manipulating independent variables which simultaneously

enhances both internal and external validity (Aguinis & Bradley, 2014). A vignette represents

a mixture of characteristics and is used to elicit judgments about situations (Atzmüller &

Steiner, 2010). Vignette studies utilize data that is self-reported by the participants, because the participants answer questions in response to the situation of the vignette.

A 2 x 2 experimental design with augmented leadership distribution (human default vs. algorithm default) and leadership behavior (person-focused vs. task-focused) as the factors was used in the present study. This means that two of the four scenarios include a human as the default factor of the augmented leadership distribution, where the computer acts supportively, and two of the four scenarios include a computer as the default factor of the augmented leadership distribution, where the human acts supportively. Moreover, two of the four scenarios include a person-focused leader situation and two of the four scenarios include a task-focused leader situation.

The four different vignettes are described in appendix A. The scenarios are based on real

situations to make it easier for participants to imagine the scenario. Likewise, due to the realistic

situations in the vignettes, the level of common sense increases. The vignettes were introduced by letting participants know what kind of scenario they were in. Specifically, the participants were told that they needed to imagine they were looking for a new job, followed by an introduction of a company where only one thing was still unclear for the decision to accept the new job. The offer that the company makes is almost acceptable; only the leadership philosophy was not clear yet. In the specific scenarios, different leadership philosophies are explained, which constituted the manipulation of this study.

The first independent variable that was manipulated is the augmented leadership distribution, which has two different types: human default, where the algorithm supports the human, and algorithm default, where the human supports the algorithm. In this experiment, we manipulated augmented leadership distribution between subjects in an effort to assess the difference in effect on acceptance of the leader and whether transparency can explain this relation. In the

scenario, this is divided into a situation where the team manager named ‘Alex’ is in charge of

the day-to-day business tasks and where the automated system is consulted in certain cases. The

other situation is where the automated system is in charge of most management tasks and where

team manager ‘Alex’ is consulted in special cases. In the written scenario this is made clear by

repeating the factor that is in the lead of the management decisions.

Leadership behavior was the second variable that was manipulated, which has two types:

person-focused, concentrating on the employee and their well-being, and task-focused,

focusing on the task and its completion. We attempted to evaluate the difference in

acceptance of the leader. In the person-focused scenario, we used the situation of performance

and career development involving coaching, personalized feedback, and future possibilities. In

the task-focused scenario, we used the situation of managing team projects consisting of task

distribution, keeping track of deadlines, and task accomplishment.

Measurements

The present study consisted of two measurements: the mediator and the

dependent variable (Appendix B). Furthermore, a control variable, lay beliefs of AI, was

included. Finally, the two manipulated independent variables, leadership distribution and leadership behavior, were present in this study.

Acceptance

Measuring acceptance of the leader was done with an adapted scale from Höddinghaus et al.

(2020). We adjusted this scale in such a way that it was obvious whom the question is about,

by replacing the X with the leadership system. In addition, we deleted the word decision from

the initial items. This scale of acceptance consisted of three items which were ‘I think I would

accept this leadership system’, ‘I think I would agree with this leadership system’, and ‘I think

I would endorse this leadership system and act accordingly’. These items were rated on a 7-

point Likert scale ranging from strongly disagree (1) to strongly agree (7). The acceptance scale

revealed high reliability (Cronbach’s Alpha = .93). The corrected item-total correlation showed

that these items are strongly correlated with the total score (> .30). There were 3 missing values

for acceptance.
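For reference, Cronbach’s alpha for a scale of k items is computed from the item variances and the variance of the summed scale score:

```latex
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right)
```

where k is the number of items (k = 3 for this scale), \sigma^{2}_{Y_i} is the variance of item i, and \sigma^{2}_{X} is the variance of the total scale score.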

Transparency

The scale that was used for measuring transparency was an adapted scale of

Höddinghaus et al. (2020). We adjusted this scale the same way as we did with acceptance, by

replacing the X with the leadership system. The transparency scale had three items which were

‘I think I could understand the decision-making processes of this leadership system very well’,

‘I think I could see through this leadership system's decision-making process’, and ‘I think the

decision-making processes of this leadership system are clear and transparent’. These three

items were rated on a 7-point Likert scale ranging from strongly disagree (1) to strongly agree

(7). High reliability was demonstrated for the transparency scale (Cronbach’s Alpha = .87). All

items on the transparency scale strongly correlated with the total score, which appeared by the

corrected item-total (> .30). For transparency, there were 10 missing values.

Control variable

In our research, the variable lay beliefs of AI was examined as a control variable. The

scale that has been used for measuring lay beliefs of AI had 10 items which were rated on a 5-

point Likert scale ranging from strongly disagree (1) to strongly agree (5). A list of abilities was

stated and the question was asked whether participants think AI can perform these abilities

better than human intelligence. The abilities were ‘possesses abstract reasoning ability’, ‘has a

good short-term memory’, ‘has good long-term memory’, ‘processes information quickly’, ‘has

a high ability to learn’, ‘is good at problem-solving’, ‘flexibility/ can adapt to new things’, ‘can

perform well on complex tasks’, ‘is good at initiating structure’, and ‘is good at personal

consideration’. The lay beliefs of AI scale was reliable (Cronbach’s Alpha = .72). However, the corrected item-total correlations showed no strong correlation with the total score

(< .30). For lay beliefs of AI, there were 13 missing values.

Manipulation check

A manipulation check was used to check whether the manipulations were effective

(Lonati et al., 2018). In this survey, three manipulation questions were asked to verify the

effectiveness of the manipulation of the augmented leadership distribution and situational

leadership behavior. The manipulation check for the augmented leadership distribution in this

study consisted of the binary question “Who was most dominantly in charge in the described

leadership system in the scenario?”. Participants had two options to answer this question, either

the team manager or the automated system. Next came the manipulation question regarding

leadership behavior. The binary question was asked about which example was used in the

described scenario. For this question, the participants had the following two options, either

management of team projects or performance and career evaluation. In addition, the last

manipulation check for checking the effectiveness of leadership behavior was the question of

what the nature of the leadership situation was in the described scenario. For this question, we

used a 5-point Likert scale ranging from definitely person-focused to definitely task-focused.

The aim is a successful manipulation check, meaning that conclusions about the relationship

between the variables are more accurate (Hoewe, 2017).

Analytical Plan

Our data was analyzed using SPSS as statistical software. Cleaning the data was done

prior to the analysis. Before testing the hypotheses, the distribution of the variables was

analyzed, the data was tested for normality and outliers, and a correlation matrix was performed.

The multiple analyses for testing the hypotheses followed. For hypothesis 1, a one-way ANOVA

was used for examining the relationship between the dependent variable acceptance and the

independent variable augmented leadership distribution. This was followed by a PROCESS (Hayes) analysis for hypothesis 2 to evaluate the impact of transparency. For hypothesis 3, a two-way

ANOVA was conducted for discovering the influence of situational leadership behavior.
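As an illustration of this plan, the sketch below reproduces the three analyses in Python with SciPy and statsmodels. The thesis itself ran these in SPSS and PROCESS; column names are hypothetical, the condition variables are assumed to be coded 0/1 as in the note to Table 2, and the bootstrap confidence interval that PROCESS computes for the indirect effect is omitted.

```python
import pandas as pd
import scipy.stats as stats
import statsmodels.formula.api as smf

# Hypothetical export of the survey data: 'distribution' coded
# 1 = algorithm default, 0 = human default; 'behavior' 1 = task-focused.
df = pd.read_csv("vignette_data.csv")

# Hypothesis 1: one-way ANOVA of acceptance by leadership distribution.
groups = [g["acceptance"].dropna() for _, g in df.groupby("distribution")]
f_stat, p_value = stats.f_oneway(*groups)

# Hypothesis 2: regression-based mediation (the logic behind PROCESS model 4).
path_a = smf.ols("transparency ~ distribution", data=df).fit()                # a path
path_bc = smf.ols("acceptance ~ distribution + transparency", data=df).fit()  # b and c' paths
indirect_effect = path_a.params["distribution"] * path_bc.params["transparency"]

# Hypothesis 3: 2 x 2 factorial ANOVA including the interaction term.
factorial = smf.ols("acceptance ~ C(distribution) * C(behavior)", data=df).fit()

print(f_stat, p_value, indirect_effect)
print(factorial.summary())
```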

Results

Our survey was filled in by 171 participants. In the results section, analyses are reported

that were conducted with the data of these 171 participants. Computing the frequencies of each

scale shows that there were no data entry errors. For each scale, the mean, standard deviations,

minimum and maximum were assessed. For acceptance, the minimum score was 1, and the

maximum score was 7 (M = 4.09, SD = 1.52). The range of transparency score was from 1.33

to 7 (M = 4.59, SD = 1.39). For lay beliefs of AI, the minimum score was 2.1, and the maximum

score was 5 (M = 3.66, SD = .54). Additionally, the data were examined for normality by looking

at skewness and kurtosis scores. The acceptable range is between -1 and 1, which assumes that

the data is normally distributed. For the acceptance scale, the statistics were within the

acceptable range, thus we assumed normal distribution. The skewness and Kurtosis scores for

the transparency scale were in the acceptable range, which indicated a normal distribution of

the data. Furthermore, the lay beliefs of AI scale was within the acceptable range of skewness

and kurtosis, therefore we assumed normal distribution. The data were checked for outliers,

which were not detected. The distribution of participants across the different scenarios was

examined (Table 1). The distribution is not equal because participants were excluded by SPSS

from the data when they did not answer the manipulation check. The unequal distribution can

lead to a loss of statistical power, therefore checking for normality is important.

Table 1
Overview of distribution of participants across conditions
Leadership behavior

Augmented leadership distribution Person-focused Task-focused

Human default 36 41

Algorithm default 49 45
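The descriptive and normality checks above can be reproduced compactly; a sketch with hypothetical column names follows (pandas reports excess kurtosis, which matches the -1 to 1 heuristic used here):

```python
import pandas as pd

df = pd.read_csv("vignette_data.csv")  # hypothetical data export

for scale in ["acceptance", "transparency", "ai_beliefs"]:
    s = df[scale].dropna()
    print(scale,
          f"M = {s.mean():.2f}", f"SD = {s.std():.2f}",
          f"min = {s.min():.2f}", f"max = {s.max():.2f}",
          f"skewness = {s.skew():.2f}",   # acceptable between -1 and 1
          f"kurtosis = {s.kurt():.2f}")   # excess kurtosis, same range
```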

The manipulation check question was examined to know if the manipulation of the

variables worked. The aim of the manipulation check is to ensure that the manipulation has

been successful, which means that participants understand the variable that has been

manipulated (Hoewe, 2017). We included three manipulation check questions to test if

participants understood the scenario as we intended. The goal is that participants could recall

who was in charge of most leadership tasks and what the leadership behavior was in the vignette

they had read.

The first manipulation check was for the augmented leadership distribution variable,

where the main character of the leadership system was asked through a binary question. A

crosstabs and Pearson Chi-Square test were conducted to verify how many participants could

remember the scenario. The Pearson chi-square is statistically significant, χ2(1) = 75.05, p < .001, which indicates a significant association between the variables. Crosstabs

analysis showed that in total 84% of participants could recall the augmented leadership

distribution correctly. Of the participants that were in the scenario with humans as default in

the augmented leadership, 97% answered the manipulation check correctly. For the participants

that were in the scenario with an algorithm as default in the augmented leadership, 82%

correctly answered the manipulation check.
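The crosstabs and chi-square logic used here can be expressed as follows; this is a sketch in which 'check_distribution' (the binary answer) and 'check_correct' (a derived correctness flag) are hypothetical column names:

```python
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.read_csv("vignette_data.csv")  # hypothetical data export

# Crosstab of assigned condition against the manipulation-check answer.
table = pd.crosstab(df["distribution"], df["check_distribution"])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")

# Share of participants per condition who recalled their scenario correctly.
print(df.groupby("distribution")["check_correct"].mean())
```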

The same analysis was conducted for testing the understanding of the participants of the

leadership behavior. Participants answered a binary question about the leadership behavior that

was performed in the scenario they read. Again a crosstabs and Pearson Chi-Square test were

conducted to verify how many participants could recall the scenario. The Pearson chi-square is statistically significant, χ2(1) = 67.74, p < .001, which means that the observed distribution is

significantly different from the expected distribution. In total 82% of participants correctly

remembered the leadership behavior performed in the scenario. The Crosstabs analysis showed
31

that of the participants that read the scenario with performance and career evaluation as a

leadership situation, 76% answered this question correctly. For the participants that read the

scenario with the management of team projects as a leadership situation, 89% correctly

remembered this situation.

For the last manipulation check, an independent-samples t-test was performed to assess whether participants correctly identified the behavior in their allocated scenario. Levene's test was statistically significant (p < .01), which means that equal variances were not assumed. The 80 participants who read the task-focused behavior vignette (M = 3.76, SD = 1.05) scored significantly higher than the 78 participants who read the person-focused behavior vignette (M = 3.18, SD = 1.25), t(156) = 3.18, p < .01. A higher score corresponded to a task-focused rating and a lower score to a person-focused rating, so participants who read the task-focused vignette indeed rated the leadership behavior as more task-focused. The decision was made not to exclude participants who failed the manipulation check, to avoid discarding valuable information.
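The Levene-then-t-test logic of this check can be sketched as follows; the arrays are synthetic stand-ins with the reported group sizes, means, and standard deviations, so the printed statistics will only approximate those in the text.

```python
import numpy as np
from scipy.stats import levene, ttest_ind

# Synthetic stand-ins for the behavior ratings of the two vignette groups.
rng = np.random.default_rng(1)
task_focused = rng.normal(3.76, 1.05, 80)     # 80 task-focused readers
person_focused = rng.normal(3.18, 1.25, 78)   # 78 person-focused readers

lev_stat, lev_p = levene(task_focused, person_focused)
# If Levene's test is significant, equal variances are not assumed and
# Welch's correction (equal_var=False) is applied.
t_stat, t_p = ttest_ind(task_focused, person_focused, equal_var=lev_p >= .05)
print(f"Levene p = {lev_p:.3f}; t = {t_stat:.2f}, p = {t_p:.4f}")
```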

The correlation matrix (Table 2) presents information about the variables and their relationships: means, standard deviations, correlations, and Cronbach's alphas. Acceptance and transparency were positively correlated (p < .001), indicating that when transparency is high, acceptance of the leader is likewise high. Acceptance of the leader was furthermore positively correlated with augmented leadership distribution (p < .001), meaning that acceptance of the leader is higher when the distribution of augmented leadership has the human as default and an algorithm in a supporting role. Acceptance of the leader was also positively related to lay beliefs of AI (p < .01): when participants held high lay beliefs of AI, acceptance of the leader was higher. Transparency was positively correlated with augmented leadership distribution (p < .01), implying that transparency is higher when the distribution of augmented leadership has the human as default and an algorithm in a supporting role, and with lay beliefs of AI (p < .001). No statistically significant relationships were found with situational leadership behavior. Because lay beliefs of AI correlated significantly with both the dependent variable acceptance and the mediator transparency, it was included in the analyses as a control variable. Neither gender nor age correlated significantly with the other variables.

Table 2
Correlation matrix

Variable                     M      SD     1      2      3      4      5      6
1. Leader acceptance         4.09   1.52   (.93)
2. Transparency              4.59   1.39   .54**  (.87)
3. Augmented leadership      0.44   0.50   .38**  .27**  -
4. Leadership behavior       0.51   0.50   .00    .07    -.06   -
5. AI lay beliefs            3.66   0.54   .26**  .28**  .07    .11    (.72)
6. Gender                    -      -      .10    .06    .09    -.07   .12    -
7. Age                       -      -      .04    .10    -.02   .11    -.02   -.07

Note. Augmented leadership is coded as 1 = algorithm default, 0 = human default. Leadership behavior is coded as 1 = task-focused, 0 = person-focused. Cronbach's α between parentheses.
*p < .05, **p < .01

Hypothesis testing

Leader acceptance of the augmented leadership distribution

Hypothesis 1 stated that acceptance of the leader would be higher when the human is the default and is supported by an algorithm in the augmented leadership. The previous analysis showed that the distribution in which the human is the prominent actor and an algorithm acts supportively scored higher on acceptance of the leader than the distribution in which an algorithm is the prominent actor and the human is supportive. To determine whether the difference between the two distributions of augmented leadership is statistically significant, a one-way analysis of variance (ANOVA) was computed. Before performing the one-way ANOVA, its assumptions must be met. The normality assumption was met, since the scores for skewness and kurtosis were in the acceptable range. For the homoscedasticity assumption, Levene's test was used, which was not significant (p = .249); this indicated that the variances are equal across groups, so the assumption was met. Because of the between-subject study design, the independence assumption was also met. The one-way ANOVA tested whether the human default of the augmented leadership distribution is more accepted than the algorithm default (Table 3). Results revealed that the difference between the two distributions of augmented leadership was statistically significant, with a large effect, F(1, 166) = 27.49, p < .001, η² = .14. Participants in the scenario with a prominent human in the augmented leadership distribution reported higher acceptance (M = 4.73, SD = 1.38) than participants in the scenario with a prominent algorithm (M = 3.58, SD = 1.43). In line with hypothesis 1, the results showed that when the distribution of augmented leadership has the human as default, acceptance of the leader is higher than when it has the algorithm as default. Thus, hypothesis 1 is supported.


Table 3
Means, standard deviations, and one-way analysis of variance in acceptance by augmented leadership distribution

                           Human            Algorithm
Measure                    M      SD        M      SD       F(1, 166)   η²
Acceptance of leader       4.73   1.38      3.58   1.43     27.49***    .14

*** p < .001
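The one-way ANOVA and its effect size could be reproduced along the following lines; an illustrative sketch with synthetic data using the reported cell sizes (77 human default, 94 algorithm default) and hypothetical variable names.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic acceptance scores for the two distributions.
rng = np.random.default_rng(2)
df = pd.DataFrame({
    "distribution": ["human"] * 77 + ["algorithm"] * 94,
    "acceptance": np.concatenate([rng.normal(4.73, 1.38, 77),
                                  rng.normal(3.58, 1.43, 94)]),
})

# One-way ANOVA via an OLS model with a single categorical factor.
model = smf.ols("acceptance ~ C(distribution)", data=df).fit()
aov = sm.stats.anova_lm(model, typ=2)
print(aov)

# Eta squared = factor sum of squares / total sum of squares.
eta_sq = aov.loc["C(distribution)", "sum_sq"] / aov["sum_sq"].sum()
print(f"eta squared = {eta_sq:.2f}")
```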

The one-way ANOVA was repeated with lay beliefs of AI as a control variable by performing a hierarchical regression analysis (Table 4). This analysis showed that when lay beliefs of AI was added, the augmented leadership distribution still had a significant and large effect on acceptance, F(1, 155) = 30.28, p < .001, η² = .16. The influence of lay beliefs of AI on the relationship between augmented leadership distribution and acceptance was examined further in the hierarchical regression: Model 1 contained augmented leadership distribution, and lay beliefs of AI was added in Model 2. Results showed that the addition of lay beliefs of AI accounted for an additional 5.2% of the variance in acceptance. This increase in R² was statistically significant (p < .01), which means that adding lay beliefs of AI improves the prediction of acceptance. Both models were significant overall. The regression results indicate that as lay beliefs of AI increase, acceptance increases as well (b = .64, p < .01).


Table 4
Regression analysis for control variable lay beliefs of AI by augmented leadership distribution to acceptance of the leader

Variable                     b          95% CI           SE B    β       R²     ΔR²
Step 1                                                                   .14    .14***
  Constant                   3.58***    [3.92, 3.87]     .14
  Leadership distribution    1.15***    [.72, 1.58]      .22     .38
Step 2                                                                   .22    .05**
  Constant                   1.25       [-.22, 2.71]     .74
  Leadership distribution    1.21***    [.77, 1.64]      .22     .39
  Lay beliefs of AI          .64**      [.25, 1.04]      .20     .23

Note. Leadership distribution refers to manipulations of human default and algorithm default.
**p < .01, *** p < .001
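The hierarchical step from Model 1 to Model 2, and the significance of the R² change, amounts to comparing two nested regressions; a minimal sketch with synthetic data and hypothetical names (here the distribution is coded 1 = human default so the coefficient sign matches the reported b).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data loosely matching the reported coefficients.
rng = np.random.default_rng(3)
n = 171
distribution = rng.integers(0, 2, n)        # 1 = human default (hypothetical coding)
ai_lay_beliefs = rng.normal(3.66, 0.54, n)
acceptance = (3.58 + 1.15 * distribution
              + 0.64 * (ai_lay_beliefs - 3.66)
              + rng.normal(0, 1.3, n))
df = pd.DataFrame({"acceptance": acceptance,
                   "distribution": distribution,
                   "ai_lay_beliefs": ai_lay_beliefs})

m1 = smf.ols("acceptance ~ distribution", data=df).fit()                   # step 1
m2 = smf.ols("acceptance ~ distribution + ai_lay_beliefs", data=df).fit()  # step 2

# The significance of the R-squared change is the F-test comparing the nested models.
f_stat, f_p, _ = m2.compare_f_test(m1)
print(f"R2: {m1.rsquared:.3f} -> {m2.rsquared:.3f} "
      f"(delta = {m2.rsquared - m1.rsquared:.3f}); F = {f_stat:.2f}, p = {f_p:.4f}")
```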

Mediating effect of transparency on acceptance of the leader

Hypothesis 2 stated that transparency mediates the relationship between augmented leadership distribution and acceptance of the leader, such that acceptance of the leader increases when the human is the default and is supported by an algorithm. The correlations showed that transparency was positively related to both acceptance and augmented leadership distribution. To test whether transparency is the mechanism behind the previously observed relationship between augmented leadership distribution and acceptance of the leader, a regression analysis was performed using the PROCESS macro by Hayes (Table 5). Results indicated a significant positive effect of human default versus algorithm default on acceptance of the leader, b = 1.15, SE = .22, t(159) = 5.24, p < .001. A linear regression with acceptance as the dependent variable and augmented leadership distribution as the sole predictor (Model 1) had a predictive power of R² = .16. Including transparency in step 2 showed a significant positive relationship between augmented leadership distribution and transparency, b = .75, SE = .21, t(159) = 3.52, p < .01, and the predictive power of the model including both augmented leadership distribution and transparency increased to R² = .36. Furthermore, results showed a significant positive relationship between transparency and acceptance, b = .51, SE = .07, t(158) = 6.96, p < .001, meaning that when transparency is high, acceptance of the leader increases. Across these two steps, a reduction in the effect of augmented leadership distribution on acceptance of the leader was observed, b = .85, SE = .20, t(158) = 4.19, p < .001. The indirect effect, .34, 95% CI [.14, .56], was significant, since the bootstrap confidence interval does not include zero (Figure 2). These results indicate a mediation effect. Since the direct effect of augmented leadership distribution on acceptance of the leader remained significant, the mediation is partial; augmented leadership distribution exerts part of its impact on acceptance via transparency. Furthermore, the predictive power improved from 16.7% to 35.6% when transparency was added to the model, an increase of 18.9%. In conclusion, hypothesis 2 was supported based on these results.

Table 5
Regression analysis for mediation by transparency for augmented leadership distribution to acceptance of the leader

Variable                     b          95% CI           SE B    β       R²     ΔR²
Step 1                                                                   .14    .14***
  Constant                   3.58***    [3.92, 3.87]     .14
  Leadership distribution    1.15***    [.72, 1.58]      .22     .38
Step 2                                                                   .36    .20***
  Constant                   1.44***    [.77, 2.10]      .34
  Leadership distribution    .85***     [.45, 1.2]       .20     .28
  Transparency               .51***     [.36, .65]       .07     .46
Indirect effect              .34        [.14, .56]       .11

Note. Leadership distribution refers to manipulations of human default and algorithm default.
***p < .001

Figure 2
Regression coefficients of the mediating effect of transparency on the relationship between augmented leadership distribution and acceptance of the leader

[Path diagram: augmented leadership distribution (human vs. algorithm) → transparency → acceptance of the leader. Transparency → acceptance path: .51***; total effect of distribution on acceptance: 1.15***; direct effect: .85***.]
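The core of the PROCESS mediation test is the bootstrapped indirect effect a*b, judged significant when the 95% bootstrap confidence interval excludes zero. The sketch below illustrates that logic with synthetic data; the variable names and generated values are stand-ins, not the study data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data: x = distribution (1 = human default, hypothetical coding),
# m = transparency, y = acceptance, loosely matching the reported paths.
rng = np.random.default_rng(4)
n = 171
x = rng.integers(0, 2, n).astype(float)
m = 4.26 + 0.75 * x + rng.normal(0, 1.3, n)
y = 1.44 + 0.85 * x + 0.51 * m + rng.normal(0, 1.1, n)
df = pd.DataFrame({"x": x, "m": m, "y": y})

def indirect_effect(d):
    a = smf.ols("m ~ x", data=d).fit().params["x"]      # a-path: x -> m
    b = smf.ols("y ~ x + m", data=d).fit().params["m"]  # b-path: m -> y given x
    return a * b

# Percentile bootstrap of the indirect effect.
boot = [indirect_effect(df.sample(n, replace=True)) for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(df):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

If the bootstrap interval excludes zero, the indirect effect is taken to be significant, which is the criterion reported above.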

A further analysis added the control variable lay beliefs of AI as a covariate in PROCESS. This analysis showed that lay beliefs of AI did not significantly influence acceptance, b = .33, t(154) = 1.78, p = .08.

To examine the influence of lay beliefs of AI on our model, several hierarchical regression analyses were conducted. First, the influence of lay beliefs of AI on the relationship between augmented leadership distribution and transparency was examined (Table 6): Model 1 contained augmented leadership distribution, and lay beliefs of AI was added in Model 2. Results showed that adding lay beliefs of AI increased the explained variance in transparency by 6.7% compared to augmented leadership distribution alone. This increase in R² was statistically significant (p < .01), meaning that adding lay beliefs of AI improves the prediction of transparency. Both models were significant overall. The results suggest that as lay beliefs of AI increase, transparency likewise increases (b = .67, p < .01). Next, the effect of lay beliefs of AI on the relationship between transparency and acceptance was inspected; this analysis showed no significant effects.

Table 6
Regression analysis for control variable lay beliefs of AI by augmented leadership distribution to transparency

Variable                     b          95% CI           SE B    β       R²     ΔR²
Step 1                                                                   .07    .07**
  Constant                   4.26***    [3.99, 4.54]     .14
  Leadership distribution    .75**      [.33, 1.17]      .21     .27
Step 2                                                                   .15    .07**
  Constant                   1.82*      [.43, 3.21]      .70
  Leadership distribution    .73**      [.32, 1.14]      .21     .26
  Lay beliefs of AI          .67**      [.29, 1.04]      .19     .26

Note. Leadership distribution refers to manipulations of human default and algorithm default.
*p < .05, **p < .01, *** p < .001

Moderating effect of situational leadership behavior on acceptance of the leader

Hypothesis 3 stated that situational leadership behavior moderates the relationship between augmented leadership distribution and acceptance of the leader. The moderation was hypothesized such that the effect of human leadership as the default, supported by algorithmic leadership, on acceptance of the leader is positive when the leader performs person-focused behavior, and such that the effect of algorithmic leadership as the default, supported by human leadership, on acceptance of the leader is positive when the leader performs task-focused behavior. To test whether the mean of acceptance of the leader changes with the levels of augmented leadership distribution and situational leadership behavior, a two-way ANOVA was performed. Before conducting the two-way ANOVA, the assumptions of normality, homoscedasticity, and independence of observations must be met. The normality assumption was met: as mentioned before, we assumed acceptance of the leader to be normally distributed, since the skewness and kurtosis scores were acceptable. For the homoscedasticity assumption, Levene's test was performed, which was not significant (p = .43); the assumption is met since the variances are equal across groups. This research used a between-subject design, which ensures independence of observations, so the independence assumption was also met.

The two-way ANOVA tested whether there is an interaction between augmented leadership distribution and situational leadership behavior on acceptance of the leader (Table 7). The results revealed a main effect of augmented leadership distribution on acceptance of the leader, F(1, 164) = 27.70, p < .01, η² = .15. However, no main effect was found for leadership behavior on acceptance of the leader, F(1, 164) = .25, p = .62, η² = .002. Additionally, no interaction effect between leadership distribution and leadership behavior was discovered, F(1, 164) = .73, p = .40, η² = .004. Hypothesis 3 is therefore not supported: the effect of augmented leadership distribution on acceptance of the leader was not conditional on the situational leadership behavior.

Table 7
Factorial ANOVA predicting acceptance of the leader by augmented leadership distribution and situational leadership behavior

Effect                        F(1, 164)    p       η²
Distribution                  27.70        .000    .15
Behavior                      0.25         .62     .002
Distribution x Behavior       0.73         .40     .004

Cell means for acceptance

                  Human default                     Algorithm default
                  Person-focused   Task-focused     Person-focused   Task-focused
M                 4.89             4.59             3.54             3.62
SD                1.28             1.47             1.44             1.44
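The factorial test above corresponds to a two-way ANOVA with an interaction term; an illustrative sketch using the cell sizes from Table 1 and the cell means from Table 7, with synthetic scores and hypothetical variable names.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Build synthetic data cell by cell (sizes from Table 1, means from Table 7).
rng = np.random.default_rng(5)
cells = [("human", "person", 36, 4.89), ("human", "task", 41, 4.59),
         ("algorithm", "person", 49, 3.54), ("algorithm", "task", 45, 3.62)]
rows = []
for dist, beh, n, mean in cells:
    for score in rng.normal(mean, 1.4, n):
        rows.append({"distribution": dist, "behavior": beh, "acceptance": score})
df = pd.DataFrame(rows)

# The '*' in the formula expands to both main effects plus the interaction.
model = smf.ols("acceptance ~ C(distribution) * C(behavior)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```

A non-significant C(distribution):C(behavior) row in the output corresponds to the absent interaction effect reported above.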

Discussion

Findings

The aim of this study was to expand our knowledge about augmented leadership and its effects on employees. One goal was to investigate what drives acceptance of a leader when the leader is augmented. Furthermore, this study aimed to develop an understanding of the roles of transparency and leadership behavior in the relationship between augmented leadership and acceptance. More specifically, our study's purpose was to examine how an augmented leader (default human vs. default algorithm) is accepted, and what role transparency and different leadership behaviors play in this relationship. Transparency was tested as the mediator in our model, and situational leadership behavior as the moderator.

The results of the hypothesis testing showed that the acceptance of employees is significantly higher when the prominent actor in the distribution of augmented leadership is human and the algorithm is supportive, which supports hypothesis 1. Conversely, this means that the acceptance of followers is lower when the prominent actor is the algorithm and the human is supporting. Furthermore, the results indicate that transparency mediates the relationship between augmented leadership distribution and acceptance of the leader: the bootstrap interval did not include zero, so the indirect effect is significant. Because the direct effect of augmented leadership distribution on acceptance of the leader remained significant as well, the hypothesized mediation is partial. Following our analytical plan, a two-way ANOVA was conducted to analyze the moderating effect of situational leadership behavior on the relationship between augmented leadership distribution and acceptance of the leader. Since no interaction effect between augmented leadership distribution and situational leadership behavior was found, this hypothesis is not supported; the effect of augmented leadership distribution on acceptance of the leader is not conditional on the situational leadership behavior.

Additional analyses found that lay beliefs of AI had an impact on two relationships in our model. First, the relationship between our independent variable and the mediator, that is, between augmented leadership distribution and transparency: adding lay beliefs of AI to augmented leadership distribution increased the predictive power on transparency from 7.8% to 14.5%. Second, the relationship between the independent and dependent variables, augmented leadership distribution and acceptance: adding lay beliefs of AI increased the predictive power from 16.7% to 21.8%.

Theoretical implications

Our research contributes to the existing knowledge of augmented leadership and its impact on employees. The effect of augmented leadership distribution on acceptance of the leader is established such that acceptance is higher when the most prominent actor in the distribution is human and is supported by an algorithm. These results are in line with the social identity theory we used to hypothesize this relationship. Since the prominent actor in the most accepted augmented leadership distribution was human, employees would be more likely to like their leader because of greater perceived similarity (Hogg, 2001). Conversely, employees would not be more likely to like their leader if the prominent actor of the augmented leader is an algorithm. In both distributions of augmented leadership, the human and the algorithm are present; however, the power dynamics are not equal across our scenarios, meaning that one actor is more prominent in one situation than in the other. Consequently, the actions, abilities, and functions of the most prominent actor in the augmented distribution are what employees primarily see and compare themselves with. In theory, there are several reasons why certain leaders are effective in terms of social identity: prototype-based liking, the appearance of being influential, and constructs such as legitimacy, trust, and innovation (Hogg et al., 2012). Social identity theory, which was used to hypothesize this relationship, is thus found to apply to augmented leadership, which is a contribution to the literature.

The findings for transparency as a mediator are in line with the theorized relationship. Transparency positively mediates the relationship between augmented leadership distribution and acceptance of the leader, such that acceptance increases when the human is the prominent actor of the augmented leader and the algorithm is supporting. The mediation is partial, which means that transparency is only partly responsible for the relationship between augmented leadership distribution and acceptance of the leader: the relationship is strengthened by transparency, but it still exists when there is no transparency. The finding that transparency mediates this relationship is in line with the arguments of Breuer et al. (2020), who stated, on the one hand, that transparency nurtures the understanding between leader and employee and, on the other hand, that clear and open information is key in the leader-employee relationship. This understanding identifies an approach to increasing acceptance of the leader when leadership is augmented. In line with the argument that explaining why things are done in a certain way increases acceptability (Buell & Norton, 2011), our research supports this theory.

The hypothesized effect of situational leadership behavior on the relationship between augmented leadership distribution and acceptance of the leader was not found. Although the correlation table already showed no correlation between situational leadership behavior and any other variable, this analysis was still performed. Contrary to the argument that the capabilities of the behavioral dimensions and the capabilities of the prominent actors would complement each other, the moderating effect is not present; no evidence was found for an interaction effect. We conclude that the augmented leadership distribution has the same effect under person-focused behavior as under task-focused behavior, and thus that there is no moderation of acceptance of the leader. While this finding does not fit the theory, other explanations can be offered.

A possible explanation for not finding the hypothesized moderating effect is that participants failed to recognize the different behaviors as we intended. The argument for the moderation was that the skills of task-focused behavior complement the abilities of an algorithm and that the skills of person-focused behavior complement the abilities of a human. However, people have their own judgment of which skills are required (Lee, 2018). The complementary skills and abilities should therefore have been made explicit to participants for them to understand the different behaviors in combination with the two augmented leadership distribution scenarios. Moreover, the vignettes were not pretested before conducting our research, which may mean that the questions were too difficult to answer due to the complexity of the situations. This is separate from the manipulation check, which only verifies whether participants remember the scenario; for participants to answer the questions as we intended, the manipulated scenario also needs to be complete in terms of information. In our research, the manipulation of situational leadership behavior was intertwined in the vignette, but an explicit explanation of what person-focused and task-focused behavior means was not included. As a result, the interpretation of this behavior may not have been as we intended, and the results therefore differed from our hypothesis.

Our unexpected finding is the impact that lay beliefs of AI has on the relationships between the independent variable and the mediator and between the independent variable and the dependent variable. In both cases, lay beliefs of AI increases the predictive power on the outcome variable, which means that when employees think highly of AI, acceptance of the augmented leadership increases. No theorizing was done for this finding before the analysis. However, trust in the leadership agents could play a role in explaining it (Höddinghaus et al., 2021). Their research contributed to the literature by emphasizing the trustworthiness of human and computer leaders. Thus, when employees perceive AI as having better abilities than humans, they would trust AI when it is implemented in the leadership distribution, which could be a reason for the found impact on acceptance; trust has been identified as a component of acceptance (Höddinghaus et al., 2021). The impact of lay beliefs of AI is also found for transparency. Here, the reason could be that when employees rate AI as having better abilities than humans, they have knowledge of and faith in AI. Knowledge and information are key to transparency (Breuer et al., 2020), which supports this rationale.

Practical implications

On a practical level, our research presents implications for organizations as well as managers to apply in their work environment. Generally, employees accept an augmented leader with a human as the main actor (and an algorithm supporting) more than an augmented leader with an algorithm as the main actor (and a human supporting), irrespective of the leadership behavior. Accepting the leader is important for multiple reasons, including the execution of strategy (Thomassin Singh, 1998; Zagotta & Robinson, 2002), increased commitment (Öztekin et al., 2015), productivity (Baker et al., 2002), and effective and sustainable leadership (van Quaquebeke & Eckloff, 2013; Yukl et al., 2009). A first practical implication is that knowledge about the distribution of augmented leadership and its consequences for the acceptance of that leader can help management decide how to implement certain leadership functions. The decision for augmented leadership should not be taken lightly, and it is important to decide how the actors in the augmented leadership are allocated. Managers can use the findings of our research when they want to start with augmented leadership: before implementing such a project, they have a couple of decisions to make with which our research can help.

Next, transparency is relevant to take into consideration, since it is positively related to both the augmented leadership distribution and the acceptance of the leader, and it strengthens the relationship between the two. Management should increase transparency to achieve the intended effect of improving acceptance of the leader. From theory, we know that transparency can be guaranteed by uncovering, describing, documenting, and communicating the line of reasoning (Rasmussen et al., 2007); information and transparent, open knowledge management are essential (Breuer et al., 2020; Rodrigues & Hickson, 1995). In practice, this means that management should ensure that both humans and algorithms are transparent about their execution of tasks. In sum, this contributes to practice by providing ways to improve transparency, which in turn increases acceptance of the leader. Increasing transparency can be achieved by sharing information publicly (Schnackenberg & Tomlinson, 2016); implementing an open communication channel where the organization shows and explains its decisions is one way.

Following the practical implication of increasing transparency, the literature states that algorithms overall lack transparency (Glikson & Woolley, 2020; Mahmud et al., 2022). Organizations can aid transparency through computer-augmented transparency, which means that leaders receive answers about the work being done in the organization (Schildt, 2016); with this information, the leader can adjust processes accordingly. According to Yeomans et al. (2019), the way an explanation is communicated matters for algorithmic aversion or affection. For a positive impact, a persuasive style of communication seems to be the best way, for example personalized conversation or illustration (Mahmud et al., 2022; Yeomans et al., 2019).

Lay beliefs of AI was found to have an impact on the relationship between augmented leadership distribution and transparency and on the relationship between augmented leadership distribution and acceptance. How and why this impact on acceptance exists was not investigated. A high score on lay beliefs of AI means that participants believe that AI performs the relevant abilities better than a human could. This high score is beneficial for our model, which indicates that organizations should nurture employees' lay beliefs of AI. Organizations can therefore contribute by educating employees about AI, organizing workshops, and providing training. Additionally, task objectivity can foster a positive attitude and trust toward the use of algorithms (Castelo et al., 2019). Practical steps for task objectivity are communicating how the task is set up and which elements have priority, where contingency is important (Mahmud et al., 2022).

Strengths and limitations

Our study has both strengths and limitations that are worth mentioning. Starting with the strengths: we used a between-subject design, which ensures that no learning or transfer was possible across scenarios, contributing to the internal validity of this research. The vignette experiment ensures that the manipulation can be controlled while providing realistic situations for the manipulated variables, supporting internal as well as external validity (Aguinis & Bradley, 2014). Furthermore, this approach stimulates participants' judgments about situations (Atzmüller & Steiner, 2010). Another strength of our study is the built-in manipulation check, which acts as an indicator of internal validity (Aguinis & Bradley, 2014; Lonati et al., 2018). Our decision was not to exclude participants who failed the manipulation check, to avoid discarding valuable information.

The limitations largely coincide with the strengths. The first limitation this research encountered is the experimental vignette method: since the scenarios only resemble real-world events and participants do not actually experience those events, the external validity and generalizability of our results are harmed. Participants' understanding could be reduced by the use of hypothetical situations (Lee, 2018). Moreover, the vignettes were not pretested prior to data collection. Furthermore, the vignettes of this study were not equally distributed, which could reduce statistical power; in addition, the number of participants per vignette was too low to minimize the risk of losing statistical power. On the other hand, the experimental vignette method allowed for high control and enables suggestions of causality due to the manipulation of variables (Lonati et al., 2018), which in turn increases internal validity.

Regarding the method, another limitation is the use of a between-subject design. Participants were randomly assigned to a vignette, and the results are based on comparisons between those groups: each participant was offered one scenario, with either the human-default or the algorithm-default augmented distribution, combined with either person-focused or task-focused leadership behavior. This method ignores how participants would perceive the conditions of the other vignettes and thus does not capture how participants' responses change between circumstances. An idea for future research could be to use a within-subject design to include changes between conditions. However, this creates a much longer survey in which attention and manipulation are more difficult to establish; moreover, within-subject designs allow learning and transfer of knowledge, which interferes with the interpretation of the results.

Moreover, the attention check can be seen as a limitation. We added an attention check to our questionnaire to increase the validity of our research, asking participants to answer a specific question with option 4, "To a large extent". However, feedback from multiple participants pointed out that the number four was not shown with the answer options, which was confusing. Additionally, some participants were confused by the fact that the questionnaire is about algorithms as leaders and thought it was a reversed question, reasoning that they did not need to obey the questionnaire, which in their imagination represented the computer. Partly because of this feedback, we did not exclude participants based on the attention check.

Obtaining participants was harder than we had anticipated. The goal was around 200 participants; however, in the end we had to close the questionnaire early because of the limited time left for analyzing the data and interpreting the results. Our sample size is therefore smaller, which reduces the power of the research, and a small sample size also reduces the generalizability of the study. Staying with the participant side of the study, another limitation is language: the questionnaire was conducted in English, while the majority of our participants have Dutch as their native language. For native Dutch speakers, the questionnaire can be difficult to read and understand.

Finally, a limitation concerns our control variable, lay beliefs of AI. While conducting the reliability analysis, the overall score was only just above the acceptable threshold. Therefore, the separate items were analyzed, which showed unacceptable scores for six of the ten items (acceptable range > .30). Moreover, investigating the normality of the lay beliefs of AI scale revealed that the data are not normally distributed, because the skewness and kurtosis scores of five of the ten items fell outside the acceptable range of -1 to 1. Together, this limits the interpretability of the lay beliefs of AI results.

Future research

During our research, some limitations were identified; for future research in this field, it is recommended to tackle these limitations before conducting the study. First, a pretest of the manipulations is recommended to ensure that the vignettes are perceived as intended. Since the experimental design is weak in external validity, because participants do not actually experience the scenarios, future research could try to improve participants' immersion in these scenarios; performing a pilot study of the vignettes and their manipulation checks is one way to achieve this (Hughes & Huby, 2004). Furthermore, two other things can be done in future research to enhance external validity: the study can be repeated over time, and a real-world setting can be created. Future research should also pay attention to the distribution of the vignettes; when the distribution is unequal, the number of participants per vignette should at least be higher than in our research. Continuing with methodological improvements, the attention check should be built in more carefully so that no confusion can arise. Finally, making the questionnaire multilingual and giving participants the choice of language could improve their understanding while reading and answering it.

The theorized impact of leadership behaviors was not found in our research; for future research, a different manipulation could be used to test this effect again. Furthermore, additional analysis was done with lay beliefs of AI as a control variable, showing that lay beliefs of AI positively impacts the predictive power on both transparency and acceptance. How and why lay beliefs of AI impacts our variables was not examined, so more research should be done on lay beliefs of AI and how it can aid the acceptance of augmented leadership. One thing to take into account in future research on lay beliefs of AI is the scale: the current scale contains some unreliable items, which hampers the interpretation of the results.

Conclusion

Collaboration between humans and computers, specifically algorithms, is growing, which emphasizes the importance of knowing how the augmentation of leadership influences employees. With an experimental vignette study, we manipulated augmented leadership distribution and situational leadership behavior. The questionnaire consisted of multiple variables, with acceptance, transparency, and lay beliefs of AI as the main variables for our research questions. Results showed that acceptance of an augmented leader is higher when the human is prominent (and an algorithm acts in a supporting role) as well as when transparency is in place. This highlights that the distribution within augmented leadership impacts the way employees perceive and ultimately accept their leader, which is needed for effective leadership. The mediation by transparency of the effect of augmented leadership distribution on acceptance of the leader is partial. Moreover, the results showed that the effect of augmented leadership distribution on acceptance of the leader is not conditional on the situational leadership behavior; thus, no moderation effect is present. Additional analyses indicate that lay beliefs of AI influences both acceptance of the leader and transparency. This research teaches us that acceptance of the augmented leader is essential and can be achieved by an appropriate distribution of the actors in augmented leadership, by providing transparency, and by helping employees learn about AI.

References

Aguinis, H., & Bradley, K. J. (2014). Best practice recommendations for designing and implementing experimental vignette methodology studies. Organizational Research Methods, 17(4), 351–371. https://doi.org/10.1177/1094428114547952

Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973–989. https://doi.org/10.1177/1461444816676645

Atzmüller, C., & Steiner, P. M. (2010). Experimental vignette studies in survey research. Methodology, 6(3), 128–138. https://doi.org/10.1027/1614-2241/a000014

Baker, E., Avery, G. C., & Crawford, J. (2002). Satisfaction and perceived productivity when professionals work from home. Research and Practice in Human Resource Management, 15(1), 37–62.

Breuer, C., Hüffmeier, J., Hibben, F., & Hertel, G. (2020). Trust in teams: A taxonomy of perceived trustworthiness factors and risk-taking behaviors in face-to-face and virtual teams. Human Relations, 73(1), 3–34. https://doi.org/10.1177/0018726718818721

Brynjolfsson, E., & Mitchell, T. (2017). What can machine learning do? Workforce implications. Science, 358(6370), 1530–1534. https://doi.org/10.1126/science.aap8062

Buell, R. W., & Norton, M. I. (2011). The labor illusion: How operational transparency increases perceived value. Management Science, 57(9), 1564–1579. https://doi.org/10.1287/mnsc.1110.1376

Burke, C. S., Stagl, K. C., Klein, C., Goodwin, G. F., Salas, E., & Halpin, S. M. (2006). What type of leadership behaviors are functional in teams? A meta-analysis. The Leadership Quarterly, 17(3), 288–307. https://doi.org/10.1016/j.leaqua.2006.02.007

Castelo, N., Bos, M. W., & Lehmann, D. R. (2019). Task-dependent algorithm aversion. Journal of Marketing Research, 56(5), 809–825. https://doi.org/10.1177/0022243719851788

Chamorro-Premuzic, T., & Ahmetoglu, G. (2016). The pros and cons of robot managers. Harvard Business Review, 12.

Cheng, M. M., & Hackett, R. D. (2021). A critical review of algorithms in HRM: Definition, theory, and practice. Human Resource Management Review, 31(1), 100698. https://doi.org/10.1016/j.hrmr.2019.100698

Cramer, H., Evers, V., Ramlal, S., van Someren, M., Rutledge, L., Stash, N., Aroyo, L., & Wielinga, B. (2008). The effects of transparency on trust in and acceptance of a content-based art recommender. User Modeling and User-Adapted Interaction, 18(5), 455–496. https://doi.org/10.1007/s11257-008-9051-3

de Winter, J., & Hancock, P. (2015). Reflections on the 1951 Fitts list: Do humans believe now that machines surpass them? Procedia Manufacturing, 3, 5334–5341. https://doi.org/10.1016/j.promfg.2015.07.641

Ensher, E. A., & Murphy, S. E. (1997). Effects of race, gender, perceived similarity, and contact on mentor relationships. Journal of Vocational Behavior, 50(3), 460–481. https://doi.org/10.1006/jvbe.1996.1547

Evans, S. C., Roberts, M. C., Keeley, J. W., Blossom, J. B., Amaro, C. M., Garcia, A. M., Stough, C. O., Canter, K. S., Robles, R., & Reed, G. M. (2015). Vignette methodologies for studying clinicians' decision-making: Validity, utility, and application in ICD-11 field studies. International Journal of Clinical and Health Psychology, 15(2), 160–170. https://doi.org/10.1016/j.ijchp.2014.12.001

Faraj, S., Pachidi, S., & Sayegh, K. (2018). Working and organizing in the age of the learning algorithm. Information and Organization, 28(1), 62–70. https://doi.org/10.1016/j.infoandorg.2018.02.005

Fleishman, E. A., Mumford, M. D., Zaccaro, S. J., Levin, K. Y., Korotkin, A. L., & Hein, M. B. (1991). Taxonomic efforts in the description of leader behavior: A synthesis and functional interpretation. The Leadership Quarterly, 2(4), 245–287. https://doi.org/10.1016/1048-9843(91)90016-u

Gabris, G. T., & Ihrke, D. M. (2000). Improving employee acceptance toward performance appraisal and merit pay systems. Review of Public Personnel Administration, 20(1), 41–53. https://doi.org/10.1177/0734371x0002000104

Glikson, E., & Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14(2), 627–660. https://doi.org/10.5465/annals.2018.0057

Graen, G. B., & Uhl-Bien, M. (1995). Relationship-based approach to leadership: Development of leader-member exchange (LMX) theory of leadership over 25 years: Applying a multi-level multi-domain perspective. The Leadership Quarterly, 6(2), 219–247. https://doi.org/10.1016/1048-9843(95)90036-5

Grimm, J. W. (2010). Effective leadership: Making the difference. Journal of Emergency Nursing, 36(1), 74–77. https://doi.org/10.1016/j.jen.2008.07.012

Hiemstra, A. M. F., Oostrom, J. K., Derous, E., Serlie, A. W., & Born, M. P. (2019). Discriminated by an algorithm: A systematic review of discrimination and fairness by algorithmic decision-making in the context of HR recruitment and HR development. Journal of Personnel Psychology, 18(3), 138–147. https://doi.org/10.1027/1866-5888/a000230

Höddinghaus, M., Sondern, D., & Hertel, G. (2021). The automation of leadership functions: Would people trust decision algorithms? Computers in Human Behavior, 116, 106635. https://doi.org/10.1016/j.chb.2020.106635

Hoewe, J. (2017). Manipulation check. The International Encyclopedia of Communication Research Methods, 1–5. https://doi.org/10.1002/9781118901731.iecrm0135

Hogg, M. A. (2001). A social identity theory of leadership. Personality and Social Psychology Review, 5(3), 184–200. https://doi.org/10.1207/s15327957pspr0503_1

Hogg, M. A., van Knippenberg, D., & Rast, D. E. (2012). The social identity theory of leadership: Theoretical origins, research findings, and conceptual developments. European Review of Social Psychology, 23(1), 258–304. https://doi.org/10.1080/10463283.2012.741134

Hughes, R., & Huby, M. (2004). The construction and interpretation of vignettes in social research. Social Work and Social Sciences Review, 11(1), 36–51. https://doi.org/10.1921/17466105.11.1.36

Jarrahi, M. H. (2018). Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making. Business Horizons, 61(4), 577–586. https://doi.org/10.1016/j.bushor.2018.03.007

Kellogg, K. C., Valentine, M. A., & Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366–410. https://doi.org/10.5465/annals.2018.0174

Langer, M., König, C. J., & Papathanasiou, M. (2019). Highly automated job interviews: Acceptance under the influence of stakes. International Journal of Selection and Assessment, 27(3), 217–234. https://doi.org/10.1111/ijsa.12246

Lee, M. K. (2018). Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management. Big Data & Society, 5(1), 205395171875668. https://doi.org/10.1177/2053951718756684

Leyer, M., & Schneider, S. (2021). Decision augmentation and automation with artificial intelligence: Threat or opportunity for managers? Business Horizons, 64(5), 711–724. https://doi.org/10.1016/j.bushor.2021.02.026

Lonati, S., Quiroga, B. F., Zehnder, C., & Antonakis, J. (2018). On doing relevant and rigorous experiments: Review and recommendations. Journal of Operations Management, 64(1), 19–40. https://doi.org/10.1016/j.jom.2018.10.003

Napier, B. J., & Ferris, G. R. (1993). Distance in organizations. Human Resource Management Review, 3(4), 321–357. https://doi.org/10.1016/1053-4822(93)90004-n

Northouse, P. G. (2019). Leadership: Theory and practice (8th ed.). SAGE Publications, Inc.

Öztekin, Z., İşçi, S., & Karadağ, E. (2015). The effect of leadership on organizational commitment. Leadership and Organizational Outcomes, 57–79. https://doi.org/10.1007/978-3-319-14908-0_4

Parris, D. L., Dapko, J. L., Arnold, R. W., & Arnold, D. (2016). Exploring transparency: A new framework for responsible business management. Management Decision, 54(1), 222–247. https://doi.org/10.1108/md-07-2015-0279

Parry, K., Cohen, M., & Bhattacharya, S. (2016). Rise of the machines. Group & Organization Management, 41(5), 571–594. https://doi.org/10.1177/1059601116643442

Petrinovich, L., O'Neill, P., & Jorgensen, M. (1993). An empirical study of moral intuitions: Toward an evolutionary ethics. Journal of Personality and Social Psychology, 64(3), 467–478. https://doi.org/10.1037/0022-3514.64.3.467

Popper, M. (2013). Leaders perceived as distant and close. Some implications for psychological theory on leadership. The Leadership Quarterly, 24(1), 1–8. https://doi.org/10.1016/j.leaqua.2012.06.008

Pratoom, K. (2018). Differential relationship of person- and task-focused leadership to team effectiveness: A meta-analysis of moderators. Human Resource Development Review, 17(4), 393–439. https://doi.org/10.1177/1534484318790167

Raisch, S., & Krakowski, S. (2021). Artificial intelligence and management: The automation–augmentation paradox. Academy of Management Review, 46(1), 192–210. https://doi.org/10.5465/amr.2018.0072

Rasmussen, B., Jensen, K. K., & Sandoe, P. (2007). Transparency in decision-making processes governing hazardous activities. International Journal of Technology, Policy and Management, 7(4), 422. https://doi.org/10.1504/ijtpm.2007.015173

Raveendhran, R., & Fast, N. J. (2021). Humans judge, algorithms nudge: The psychology of behavior tracking acceptance. Organizational Behavior and Human Decision Processes, 164, 11–26. https://doi.org/10.1016/j.obhdp.2021.01.001

Rodrigues, S. B., & Hickson, D. J. (1995). Success in decision making: Different organizations, differing reasons for success. Journal of Management Studies, 32(5), 655–678. https://doi.org/10.1111/j.1467-6486.1995.tb00793.x

Schildt, H. (2016). Big data and organizational design – the brave new world of algorithmic management and computer augmented transparency. Innovation, 19(1), 23–30. https://doi.org/10.1080/14479338.2016.1252043

Schnackenberg, A. K., & Tomlinson, E. C. (2016). Organizational transparency. Journal of Management, 42(7), 1784–1810. https://doi.org/10.1177/0149206314525202

Sheringham, J., Kuhn, I., & Burt, J. (2021). The use of experimental vignette studies to identify drivers of variations in the delivery of health care: A scoping review. BMC Medical Research Methodology, 21(1). https://doi.org/10.1186/s12874-021-01247-4

Suen, H. Y., Chen, M. Y. C., & Lu, S. H. (2019). Does the use of synchrony and artificial intelligence in video interviews affect interview ratings and applicant attitudes? Computers in Human Behavior, 98, 93–101. https://doi.org/10.1016/j.chb.2019.04.012

Thomassin Singh, D. (1998). Incorporating cognitive aids into decision support systems: The case of the strategy execution process. Decision Support Systems, 24(2), 145–163. https://doi.org/10.1016/s0167-9236(98)00066-9

Trope, Y., & Liberman, N. (2010). Construal-level theory of psychological distance. Psychological Review, 117(2), 440–463. https://doi.org/10.1037/a0018963

Tröster, C., & van Quaquebeke, N. (2021). When victims help their abusive supervisors: The role of LMX, self-blame, and guilt. Academy of Management Journal, 64(6), 1793–1815. https://doi.org/10.5465/amj.2019.0559

Turban, D. B., & Jones, A. P. (1988). Supervisor-subordinate similarity: Types, effects, and mechanisms. Journal of Applied Psychology, 73(2), 228–234. https://doi.org/10.1037/0021-9010.73.2.228

van Quaquebeke, N., & Eckloff, T. (2013). Why follow? The interplay of leader categorization, identification, and feeling respected. Group Processes & Intergroup Relations, 16(1), 68–86. https://doi.org/10.1177/1368430212461834

Wesche, J. S., & Sonderegger, A. (2019). When computers take the lead: The automation of leadership. Computers in Human Behavior, 101, 197–209. https://doi.org/10.1016/j.chb.2019.07.027

Yeomans, M., Shah, A., Mullainathan, S., & Kleinberg, J. (2019). Making sense of recommendations. Journal of Behavioral Decision Making, 32(4), 403–414. https://doi.org/10.1002/bdm.2118

Yukl, G., O'Donnell, M., & Taber, T. (2009). Influence of leader behaviors on the leader-member exchange relationship. Journal of Managerial Psychology, 24(4), 289–299. https://doi.org/10.1108/02683940910952697

Zagotta, R., & Robinson, D. (2002). Keys to successful strategy execution. Journal of Business Strategy, 23(1), 30–34. https://doi.org/10.1108/eb040221

Appendix A

Vignettes of the study

General scenario

Imagine that you are currently looking for a new job as a sales agent. After some interviews with different companies, you now have a job offer from "SecurInsure". "SecurInsure" is a big insurance company that would like to hire you in their sales department. The offer corresponds to your wishes on topics like pay and benefits.

However, you also really care about the leadership philosophy at the company because you know that this will have a big effect on your daily work. Therefore, you talk to employees from the sales department who explain to you how the team is managed. This is what you learn:

Vignettes

Vignette 1_human default x person-focused

"SecurInsure" uses augmented leadership (= the use of automated systems and analytics to support people management), to manage their employees. The approach is based on the idea that the combination of human and technological capabilities is stronger than either of them alone. At the moment the company is trialing different distributions of how human managers and automated systems can complement each other. In the sales department team managers consult an automated system to inform their management decisions.

In the team that you would be working in, Alex Stanton is the team manager. In day to day business Alex takes on most of the team management tasks. In certain leadership situations Alex consults an automated decision-making system.

For example, Alex is in charge of the yearly performance and career development assessment. The assessment entails coaching, personalized feedback and the evaluation of future possibilities at the company. The automated system supports Alex in the assessment of the team member's performance by e.g. providing performance analytics. However, it is clear for all team members that it is Alex who takes the lead in all decisions involved in the assessment and the automated system only consults when technical competencies are needed.

Vignette 2_human default x task-focused

"SecurInsure" uses augmented leadership (= the use of automated systems and analytics to support people management), to manage their employees. The approach is based on the idea that the combination of human and technological capabilities is stronger than either of them alone. At the moment the company is trialing different distributions of how human managers and automated systems can complement each other. In the sales department team managers consult an automated system to inform their management decisions.

In the team that you would be working in, Alex Stanton is the team manager. In day to day business Alex takes on most of the team management tasks. In certain leadership situations Alex consults an automated decision-making system.

For example, Alex is in charge of managing of team projects. The project management entails the distribution of tasks as well as keeping track of deadlines and task accomplishment. The automated system supports Alex in the project management process by e.g. providing interpersonal coaching in case of conflict. However, it is clear for all team members that it is Alex who takes the lead in all decisions involved in the management of team projects and the automated system only consults when technical competencies are needed.

Vignette 3_algorithm default x person-focused

"SecurInsure" uses augmented leadership (= the use of automated systems and analytics to support people management), to manage their employees. The approach is based on the idea that the combination of human and technological capabilities is stronger than either of them alone. At the moment the company is trialing different distributions of how human managers and automated systems can complement each other. In the sales department an automated system carries out most management decisions and team managers take on a consulting role.

In the team that you would be working in, Alex Stanton is the team manager. However, in day to day business an automated system takes on most of the team management tasks. Only in certain leadership situations Alex steps in to consult.

For example, the automated system is in charge of the yearly performance and career development assessment. The assessment entails coaching, personalized feedback and the evaluation of future possibilities at the company. Alex supports the automated system in the assessment of the team member's performance by e.g. providing interpersonal coaching in case of conflict. However, it is clear for all team members that it is the automated system that takes the lead in all decisions involved in the assessment and Alex only consults when interpersonal competencies are needed.

Vignette 4_algorithm default x task-focused

"SecurInsure" uses augmented leadership (= the use of automated systems and analytics to support people management), to manage their employees. The approach is based on the idea that the combination of human and technological capabilities is stronger than either of them alone. At the moment the company is trialing different distributions of how human managers and automated systems can complement each other. In the sales department an automated system carries out most management decisions and team managers take on a consulting role.

In the team that you would be working in, Alex Stanton is the team manager. However, in day to day business an automated system takes on most of the team management tasks. Only in certain leadership situations Alex steps in to consult.

For example, the automated system is in charge of managing of team projects. The project management entails the distribution of tasks as well as keeping track of deadlines and task accomplishment. Alex supports the automated system in the project management process by e.g. providing interpersonal coaching in case of conflict. However, it is clear for all team members that it is the automated system that takes the lead in all decisions involved in the management of team projects and Alex only consults when interpersonal competencies are needed.

Appendix B

Items measured in the present study

Construct        Item
Acceptance       I think I would accept this leadership system.
                 I think I would agree with this leadership system.
                 I think I would endorse this leadership system and act accordingly.
Transparency     I think I could understand the decision-making processes of this leadership system very well.
                 I think I could see through this leadership system's decision-making process.
                 I think the decision-making processes of this leadership system are clear and transparent.