
Trust: nature, and influence on aircrew-automation interactions
An exploratory study of professional pilots' perspective

Sandro Guidetti

Supervisor: Dr John Huddlestone

A research paper submitted in partial fulfilment of the MSc Human Factors in


Aviation in the Faculty of Engineering, Environment and Computing

Academic year 2020/21 [August 2021]






Declaration of Originality
This project is all my own work and has not been copied in part or in whole from any other
source except where duly acknowledged. As such, all use of previously published work (from books, journals, magazines, internet, etc.) has been acknowledged within the main report with reference to an entry in the References list.
I agree that an electronic copy of this report may be stored and used for the purposes of
plagiarism prevention and detection. I understand that cheating and plagiarism constitute a
breach of University Regulations and will be dealt with accordingly.

Copyright Acknowledgement
I acknowledge that the copyright of this research project, the submitted assessments and
any ‘product’ developed as a result of this research project belongs to Coventry University.

Sandro Guidetti Date: 27 August 2021



Acknowledgement

To you, the anonymous participants. I want to thank you for the time offered
unconditionally, for sharing your knowledge, experience and perspective, and for
entrusting me with the information you have shared. Thanks for your professional insight,
kindness, and light moments. It has been a fantastic journey.

To Dr John Huddlestone, my thesis supervisor. Thanks for allowing me to explore my area of interest freely while guiding me when necessary. Thanks for the pragmatic approach and the non-compromising coaching on critical aspects. Finally, a huge thank you for being available and supportive when I really needed it.



Abstract

Aircraft automation complexity and usage have drastically increased over the past 40 years. While bringing clear safety improvements, advanced automation created new human factors challenges, and modern accident causal factors emerged. Studies have shown the significant influence of trust on adequate automation usage. This study aimed to gain a better understanding of the importance of trust in automation in aviation, potentially to inform future practice, training, and design. Semi-structured interviews with eleven highly experienced professional pilots were analysed through a reflective thematic analysis to identify the interconnections and influence of trust on aircrew-automation interactions. The analysis generated seven interconnected themes related to trust. Trust in automation was found to be part of a larger trust construct encompassing manufacturers and certification authorities. Trust in automation was found to influence human-automation interaction, as expected. However, the study also suggested that trust in automation probably does not flow linearly into automation usage and from there into aircrew-automation interactions. Instead, the quality of the aircrew-automation interaction likely has a direct and significant influence on trust in automation. Lack of automation feedback, limited system information, and poor human-automation interactions raised the most concern, mainly due to the impact on the sense of control. Crews are made to react to technology instead of collaborating with a technology designed around the human to achieve a common goal. Additionally, the study showed a latent need for a suitable aircrew-automation CRM protocol.



Table of Contents

Declaration of Originality iii

Copyright Acknowledgement iii

Acknowledgement iv

Abstract v

Table of Contents vi

1. Introduction

1.1 Context 1
1.2 Aim and Objectives 2
1.3 Approach 2
1.4 Researcher Aviation Background 2

2. Theoretical Background

2.1 Automation in Aviation


2.1.1 Definition 3
2.1.2 Automation Drive and Issues 3
2.1.3 Level of Automation 4
2.1.4 Function Allocation 5
2.1.5 Automation Usage and Interactions 6

2.2 Trust
2.2.1 Trust and Its Function 7
2.2.2 Trust in Automation 8
2.2.3 Trust in Organisations and Authorities 11



3. Method
3.1 Methodology 12
3.2 Targeted Sample 12
3.3 Participants 13
3.4 Equipment 13
3.5 Interview Protocol 14
3.6 Procedure

3.6.1 Ethics Considerations 15


3.6.2 Data Collection 15
3.6.3 Interviews Transcription 15

3.7 Analysis 16
3.8 Data Saturation 18

4. Results

4.1 Participants' Perspective on Automation 19


4.2 Themes Generated 19

4.2.1 Trust in the Aviation System 20


4.2.2 Trust in Automation 21
4.2.3 Usage of Automation 23
4.2.4 Healthy Scepticism 24
4.2.5 Self-Confidence 25
4.2.6 Aircrew-Automation Interactions 26
4.2.7 Sense of Control 27

5. Discussion 30

6. Conclusion and Recommendation 36

References 37

Appendices
Certificate of Ethical Approval A-02
Relevant Aircraft Types Operated by the Participants A-03
Participant Information Sheet A-04
Interview Guide A-06
Transcription Convention A-08
Intermediate Mind Map A-09
List of Codes per Themes A-10



1. Introduction

1.1. Context

In aviation, automation complexity and usage have drastically increased over the past 40 years. With the introduction of a new aeroplane generation, such as the A320 or B777, automation became ubiquitous and its functions less transparent. Consequently, pilots' roles evolved from aircraft controller to system manager (Boy, 2020, pp. 113–118). While advanced automation brought clear safety improvements to aviation, the evolution created new human factors challenges (Billings, 1997, pp. 81–117; Funk et al., 1999), and modern accident causal factors emerged (Kwak et al., 2018).

A recent study by Eurocontrol on Resolution Advisory (RA) compliance revealed worst-case compliance as low as 38% (Mateusz & Stanislaw, 2020), showing that a seemingly simple, safety-critical instruction provided by an autonomous system is not always adhered to. Different articles posit that increasing automation levels create new human-automation issues (e.g. Elias, 2019; Evjemo & Johnsen, 2019). Studies have shown that trust affects automation usage (e.g. Lewandowsky et al., 2000; Pearson et al., 2016).

Schaefer et al. (2016) discussed the importance of trust to foster effective human-
automation collaboration, as automation leans towards more autonomy. Parasuraman and
Riley (1997) showed that trust in automation is a balancing act, making clear that an inappropriate level of trust in automation likely leads to automation disuse or misuse.

Human errors are analysed during accident investigations, Threat and Error Management
(TEM) is used in operation, and strict application of procedures is reinforced during
training. However, deep consideration regarding human-machine cooperation appears
absent. Cognitive processes impacting system usage daily, such as trust in automation,
are barely discussed and practically not considered.



1.2. Aim and Objectives

The research aimed to comprehend how trust in automation is formed by aircrews and
understand the extent to which trust impacts aircrew-automation interactions. It further
intended to establish a tentative model presenting the interrelations and effects of trust elements, in the specific context of aeroplane flight operations.

The purpose was to identify the aspects of automation trust that should be considered, possibly to improve the design of advanced automation, the related operational procedures, and the associated training approach.

The specific objectives were, first, to establish the influence of trust on aircrew-automation collaboration effectiveness; secondly, to explore the factors influencing aircrews' propensity to trust automation; and thirdly, to discuss possible ways to improve trust in automation. The expected benefit was to lay down a basis on which enhanced training or operational practices could potentially be built.

1.3 Approach

The objective was to gain an expert view on the influence of trust on automation usage in
modern aeroplanes with advanced automation, with the longitudinal perspective of pilots
having a background in legacy aeroplanes. Accordingly, a non-probabilistic purposive sample of professional pilots with significant operational experience of advanced automation, but who had started their careers on classic aeroplanes, was recruited and analysed thematically.

1.4 Researcher Aviation Background

The researcher has twenty years of experience as a professional pilot with over 7’000
hours of flight experience, mostly on CS23 certified turboprops and business jets. Besides
substantial experience as an instructor and examiner, he was also involved in flight testing,
mainly cockpit upgrades. While advanced avionics and automation are commonly fitted in small business jets, flight control systems are typically mechanical. Hence, while fly-by-wire (FBW) flight controls mimicking mechanical systems could easily be related to, this was not the case when discussing control systems without displacement or force feedback.



2. Theoretical Background

2.1 Automation in Aviation

2.1.1 Definition

Mouloua (2016, p. 2) defined automation as “the execution of a task, function, service, or subtask by a machine agent”. In aviation, automation commonly refers to auto-pilot, auto-thrust, flight management system, crew-alerting system, autonomous safety systems, and advanced cockpit-related systems and functions (Mouloua et al., 2016, p. 2; Wiener & Curry, 1980, p. 13). Harris (2011, p. 221) presented today’s automation as managing almost everything in the aircraft, with the flight deck being “a huge flying computer interface”.

2.1.2 Automation Drive and Issues

Different factors were identified behind the automation drive (Harris, 2011, pp. 223–224; Wiener, 1988, p. 444; Wiener & Curry, 1980, pp. 3–4); availability of technology, safety, economics, and workload reduction are illustrative. Digital technology opened incredible
possibilities, permitting a giant leap in aircraft automation. However, technology might
sometimes be treated as an aim in itself, leading to a tendency to automate as much as
feasible instead of considering what makes sense. While modern automation benefits
materialised quickly, the drawbacks were not adequately considered (Boehm-Davis et al.,
1983, p. 956), and not perceived until much later (Wiener, 1988, p. 444); many remained
latent today (Boy, 2020, pp. 115–116). Parasuraman and Riley (1997, pp. 286–287) were
adamant that “automation does not simply supplant human activity but rather changes it,
often in ways unintended and unanticipated by the designers of automation”, creating new
demands and challenges for the human (Chialastri, 2012, pp. 95–96), when not leading
outright to accidents, e.g. (Mårtenson, 1995, p. 311,324; Spielman & Le Blanc, 2021, p.
69). Sheridan (2012, p. 1013) opined that humans will keep playing technology catch-up in the near future.

Automation brought dramatic safety improvements, such as ground proximity warning systems. Considering the predominance of human error in accidents (Kharoufah et al., 2018), attempting to automate humans out of the system seems a sensible use of technology. However, Bainbridge (1983, p. 775) demonstrated the ironies of this approach: firstly, designers will likely introduce their own human errors into designs. Secondly, pilots are still



expected to manage tasks that cannot be automated, or to take over when automation malfunctions. She argued convincingly that automated systems are human-machine systems, where the human element becomes more critical the more advanced the automation becomes. Strauch (2018, pp. 425, 428) revisited Bainbridge's paper, noting that while much progress has been made, the issues raised are still unresolved and continue to affect human-automation interaction today. He questioned whether those ironies can ever be solved. Finally, he raised two additional ironies: firstly, automation may conceal pilots' flawed skills; secondly, current training shortcomings open the possibility to “become qualified to operate automated systems without possessing the expertise necessary to be fully conversant with the capabilities of the systems they operate”. Boehm-Davis et al. (1983, pp. 955–956) shared a concurring perspective. In a study of airline pilot abnormal-events training, Casner et al. (2013, pp. 483–484) doubted the effectiveness of highly scripted training in preparing pilots for reality.

Reducing pilot workload was seen as another automation promise, one which enabled certifying large aeroplanes with two-pilot crews. While the different stakeholders involved all claimed to pursue flight safety, they obviously also had competing financial interests in the background (McLucas et al., 1981, pp. 2, 4–5). While the workload reduction afforded by automation remains an open question, the nature of the workload has shifted towards increasing cognitive demand. Furthermore, automation reduces workload in low-workload phases while increasing it in peak phases (Wiener, 1989).

2.1.3 Level of Automation (LoA)

The view in the 1970s was reportedly to automate in full measure (Dekker & Woods, 2002,
p. 240). Wiener and Curry (1980, p. 1) questioned what should be automated, considering the many human factors issues resulting from automation. Ferris (2010, p. 481) further discussed the complexity of the available automation flavours and the necessity to implement automation mixes adequate for human users. Wiener and Curry (1980, p. 13)
clarified that automation is not only about executing tasks. Cockpit alerting systems are
also automation; however, there is a distinction between the two. In the first case,
automation is in charge and pilots monitor, while for warning systems, the automation
monitors and pilots control. Both aspects could exist independently and with different LoA.

Sheridan and Verplank (1978, pp. 8–17) proposed ten LoA describing man-computer
decision-making, from the human doing everything to the computer being the sole



decision-maker. The scale aimed to show the existence of a continuum between total
human control and complete automation, rather than a binary situation (T. B. Sheridan,
2018, pp. 25–26). Sheridan (1992, pp. 357–358) refined the original scale, clarifying that
each higher level brings more room for machine errors, and reduces human intervention
latitude. He further discussed that how far we go might be a question of the level of trust placed in the human; however, deference to humans might also be politically motivated in terms of public acceptability of decisions. Finally, Parasuraman et al. (2000, pp. 287–
289) added a functional dimension covering the input-output aspects, noting that the ideal
level of automation is seldom the same along these different stages.

2.1.4 Function Allocation

Hancock and Scallen (1996, p. 24,26) saw function allocation as an essential element of
the human factors endeavour. Furthermore, function allocation decisions are at the heart
of human-machine design decisions; they should be the starting point (Chapanis, 1965,
pp. 1–2; Pritchett et al., 2014, p. 52). Fuld (1993, 2000) vividly criticised function allocation as a practical design process, labelling it “a useful theory but not a practical method”. In a
partial rebuttal, Hancock and Scallen (1996, p. 28) were adamant that the effort must be
continued and evolved towards dynamic function allocation.

Fitts (1951, pp. 5–11) probably initiated function allocation research, discussing the possible roles of man and machine. He proposed a list of the respective superior capabilities of men and machines, which informed function allocation. Fitts did not discuss functions that could be done equally well by man and machine, nor trade-off situations. Fitts's (1951, p. 5) guiding perspective was human performance and capacity limitations. However, function
allocation could be linked to economic or political aspects, or depend on engineering
uncertainties (Chapanis, 1965, pp. 5–6). Nonetheless, aviation certification standards may
impose allocating specific functions to crews. While certain elements of the Fitts list have
lost relevance due to the technological evolution, the value and relevance of his work
should be appreciated (see de Winter et al., 2014 for a scientific review).

Jordan (1963, pp. 162–163) stated that it is wrong to compare men's and machines' abilities in our quest to allocate specific functions to the better of the two. Appreciating that men and
machines are complementary would open brighter perspectives. Hancock and Scallen
(1996, p. 27) discussed the importance of considering the operating context dynamism
and its strong impact on the human side regarding human factors. Furthermore, functions



allocated are not discrete events; they create changes for the other system elements
(Dekker & Woods, 2002, p. 242). Pritchett (2014, pp. 53–57) proposed requirements for
effective function allocation, relevant to the different views expressed.

2.1.5 Automation Usage and Interactions

Automation usage refers to its intentional engagement or disconnection by the human user. Riley (1996, p. 33, 1995, p. 255) discussed the many factors that could influence pilots' individual automation usage decisions and developed a theoretical model of their relationships. Parasuraman and Riley (1997, pp. 233–236) stated that trust frequently determines automation usage. They further distinguished disuse, where automation is underused or neglected, for example by disregarding warnings; misuse, that is, over-reliance on automation, by not monitoring it adequately or engaging it when it should not be; and abuse, where automation usage is imposed inadequately by managers, or when its
design fails to consider the human element. Too much trust in very reliable automation
may lead to over-relying on it, possibly leading to monitoring errors (Parasuraman & Riley,
1997, pp. 238–239). Automation abuse mainly stems from focusing on how automation is
designed instead of considering how aircrew will use it, possibly leading to automation
distrust and disuse (Parasuraman & Riley, 1997, p. 249; Riley, 1995, pp. 253–255). Lee
(2008, pp. 407–408) restated the risk of vicious cycles developing from automation abuse,
leading to more abuse.

Discussing the new problems created by advanced automation, Billings (1991, pp. 266–269) was adamant that aircrews must retain a central management and control function.
Woods (1988, p. 70; 1985, p. 88) warned about the risk of responsibility/authority double-
bind when the automation scope of responsibility is not made clear. Aircrews should be
provided with salient information about automation to monitor it effectively; furthermore,
automation should be predictable and communicate its intents (Billings, 1996, pp. 10–13;
D. A. Norman, 1990, p. 15; Parasuraman & Riley, 1997, pp. 242–243; Sarter et al., 2007,
pp. 355–356). Notwithstanding, Ashleigh and Stanton (2001, pp. 98–99) showed that the
quality of interaction plays a major role in successful human-automation collaboration.



2.2 Trust

2.2.1 Trust and Its Function

Trust is a complex concept, nonetheless a critical element in relationships. While there is no unique definition of trust, researchers have a consensual view of the main aspects. In their study on cross-disciplinary views of trust, Rousseau et al. (1998, p. 395) defined
trust as: “a psychological state comprising the intention to accept vulnerability based upon
positive expectations of the intentions or behavior of another”; their definition is considered
adequate by many researchers (Castelfranchi & Falcone, 2010, pp. 30–31; Earle, 2010, p.
542).

Trust presupposes, firstly, the existence of a risk. However, this risk results from the trusting engagement, in which the potential negative consequences are more significant than the benefit envisioned (Luhmann, 1988, pp. 97, 100). Furthermore, Deutsch (1958, p. 266) was adamant that trusting should not be construed as meaning risk-taking or gambling.
Secondly, a situation of trust implies reliance on another party to achieve the expected
objective.

Trust is dynamic; moreover, it could be a cause, an effect, or a mediator (Rousseau et al., 1998, pp. 394–396). Mayer et al. (1995, pp. 715–720) proposed that trustee trustworthiness depends on three factors: ability, benevolence, and integrity; furthermore, the trustor's propensity to trust varies among individuals, from blind trust to a general reluctance to trust. The concept's validity was later confirmed by Colquitt et al.'s (2007, pp. 917–920) meta-analysis, which established the uniqueness of the three trustworthiness factors. Additionally, they found that trustworthiness had a more substantial influence on trust than trust propensity.

Trust enables people to collaborate effectively and deal with complex or unfamiliar situations whose uncertainties could otherwise discourage enterprise. When relying on technology, it permits humans to cope with situations where complete system understanding is impossible (J. D. Lee & See, 2004, p. 52; Lewicki et al., 1998, p. 446). Hoffman et al. (2013, p. 85) warned that when trust in automation is lost, it could be difficult to regain; furthermore, trust in automation could be quickly lost in some situations.

Lewicki et al. (1998, pp. 445–446) posited that trust and distrust are different constructs,
arguing that distrust is not simply low trust. Cho (2006, p. 26) concurred, advancing that, in situations of distrust, one party expects the other to defend its own self-interest and behave in harmful ways. Dimoka (2010, pp. 388–392) found that trust and distrust are likely



to differ, both in construct and dimension. Mayo (2015, p. 288) illustrated that, in situations of trust, things are perceived to be as they are, while distrust implies suspicion that things might not be what they appear to be.

Comparing confidence and trust, confidence has to do with recognisable or known aspects and the belief that things will happen as foreseen. In a sense, confidence is rooted in the past, while trust lies in the future (Earle, 2010, p. 542). Self-confidence is described by
Perry (2011, p. 219) as “a self-perceived measure of one’s belief in one’s own abilities,
dependent upon contextual background and setting”.

2.2.2 Trust in Automation

Regarding trust in automation, Lacher et al. (2014, p. 43) offered the following
view: “Trust is not a trait of the system; it is the status the system has in the mind of human
beings based upon their perception and expectation of system performance”.

Muir (1987) extended human-human trust models into the field of human-machine trust, to support the design of decision aids that users would trust and consequently use appropriately. While human-human models seemed applicable, she viewed machine
technical competencies as central to human-machine trust. Hoffman et al. (2013, pp. 83–
84) expressed that all technical limitations and flaws would impact human-automation
trust. Sheridan (1988, p. 429) suggested additional attributes of trust, considering
the aspects of control and command. Accordingly, the system should be understandable
for the human and use familiar procedures; furthermore, the system intentions should be
transparent, and its usefulness explicated.

Lee and Moray (1994; 1992) investigated the role of trust in operators' choice to use
automated or manual control, including the influence of self-confidence. Working on
elements influencing trust, they proposed automation performance, purpose and process
to link the different existing frameworks (see J. D. Lee & See, 2004, p. 60 for a review).
Lee and Moray (1994; 1992) established that allocation between manual and automatic
control was not simply a function of trust in the system, suggesting that trust in one control
system might not imply whole system trust. Trust was found to be affected dynamically by
system faults and performance. Finally, they concluded that operators' allocation decisions were conditioned by the difference between trust and self-confidence, cautioning that both aspects are subject to individual bias and could be miscalibrated.



While developing a model to study supervisor trust in automation, Muir (1994, pp. 1906–
1907) suggested that human-automation trust is probably “only part of a network of trust”
that would encompass, among others, designers, society and management. Parasuraman
and Riley (1997, p. 232) had a converging view, considering that trust in automation partly reflects the operator's trust in the system designer. In a follow-up study to understand the development of trust, Muir and Moray (1996) found that faith initially drove trust in automation, followed by automation dependability and then its predictability. Additionally, automation usage was found to correlate strongly and positively with trust in automation. They also observed that operators monitored distrusted automation more closely, but tended to monitor trusted automation complacently.

Lee et al. (2021) replicated Muir and Moray's (1996) study and found dependability to primarily drive trust development, contradicting the original study's findings. They acknowledged the possible cultural influence between the original Canadian participants and the Japanese participants they used. More interesting was the suggestion that participants who grew up in a computerised world may develop trust in automation differently from participants who did not have that ubiquitous technology exposure. This aspect was, however, not researched further. Contradicting both Lee et al.'s (2021) and Muir and Moray's (1996) findings, Balfe et al. (2018, p. 493) found in their real-world study that understanding was the most relevant factor governing trust in automation. They concluded that user trust in real operation is probably fundamentally different from trust in laboratory settings. Nonetheless, Chancey et al. (2017, p. 342) suggested that perceived risk in real operation had a different impact than a defined risk variable in an experiment, owing to the reality of potential consequences, which could be another reason for the different results.

The influence of trust on appropriate reliance on automation has been comprehensively reviewed by Lee and See (2004), crystallising four key aspects. Firstly, trust levels should match true automation capabilities. Secondly, context affects trust, whether it be individual differences, organisational environment, or culture. Thirdly, the reason for trusting should be defined, considering what should be trusted and for which purpose, and the information available to establish the automation's level of trustworthiness; when automation is to be trusted, and why, are possibly relevant supplemental aspects (Hoffman,
2017, pp. 148–150). Fourthly, trust is formed cognitively through the interaction of
analytical, analogical, and affective processes. While the governing process would depend
on several factors, there are compelling indications that affect has an overarching
influence (J. D. Lee & See, 2004, pp. 61–65). They consider that automation should be



designed to be trustable rather than trustworthy; furthermore, automation should be
comprehensible and understandable by operators, and that might require automation
simplification. Finally, hearsay and myths should be discussed during training to ensure
that operators comprehend effective automation capabilities. Ho et al. (2017, pp. 536–537)
found that personal history and system development history were additional important dimensions in specific situations.

Through a systematic review of research conducted between 2002 and 2013 concerning trust in automation, Hoff and Bashir (2015) synthesised a three-layer model of the variables affecting human-automation trust. The first layer is dispositional trust, covering the enduring aspects governing personal dispositions to trust automation, such as personal traits or
culture. The second is situational trust, encompassing context-dependent variables, which
could be internal, such as self-confidence, or external, such as workload or organisational
aspects. Finally, learned trust is the third layer, which is about the human perception of the
specific automation. It is influenced by knowledge and experience, and is affected by
different variables before and during interactions. In their design recommendations, they
stressed the importance of automation feedback and transparency, particularly when
levels of automation are high. In separate studies, Verberne et al. (2012, p. 799), Koo et al. (2015), and Dorneich (2015, p. 287) all showed that providing feedback and information increases the trustworthiness of highly automated systems. Nonetheless, Ashleigh and Stanton (2001, pp. 98–99) found that it was not mere information or feedback that mattered; the critical aspect was the quality of interaction.

In their meta-analysis of research conducted up to 2014, Schaefer et al. (2016) obtained a similar variable grouping, organised into three factors related to the human, the partner (that is, the automation), and the environment; however, two aspects differ. Firstly, they integrated the possibility that the human part consists of a team of humans; McNeese (2021, p. 67) showed that team performance and trust in automation were related.
Secondly, automation is labelled a partner, and the relevant variables give the impression
of an exchange between the human and the automation, rather than a solely human
perspective. This was possibly influenced by previous work on human-robot trust
(Hancock et al., 2011).



2.2.3 Trust in Organisations and Authorities

The model of organisational trust set out by Mayer et al. (1995, pp. 717–720) proposes that trustworthiness is based on three factors: ability, benevolence, and integrity. Nonetheless, Kim (2005, p. 622) cited credible commitment as an additional key factor of government trustworthiness. Notwithstanding, Grimmelikhuijsen and Knies (2017, p. 596) showed the validity of Mayer et al.'s (1995) three factors for assessing trust in governmental organisations.

Conceptually, regulators act as proxies for the public, acting as guarantors that regulated organisations will not endanger public safety; this mission is carried out through control and trust (Six & Verhoest, 2017, pp. 9–11). Noy et al. (2018, p. 74) stated the critical importance of achieving trust in institutions concerning automated driving. In the field of aviation, regulators tend to lack expert knowledge and delegate part of their oversight function and responsibilities to the industry being regulated, with little possibility to do otherwise (Downer, 2010), possibly reaching the point where the regulator appears captured by the industry (Niles, 2002, pp. 405–406). Furthermore, organisations might deviate from expected behaviours (Englehardt et al., 2021, pp. 2–5; Vaughan, 1999).

The B737 Max accidents demonstrated the limits of the organisation-regulator relationship, possibly impacting the trust stakeholders place in the system (The House Committee on Transportation & Infrastructure, 2020). Conversely, testing and vetting processes in certain military branches appear effective and trusted (N. Ho et al., 2017, p. 248; Lacher et al., 2014, p. 45).



3. Method

3.1 Methodology

The chosen research approach was qualitative, using thematic analysis. Semi-structured interviews were chosen (Adams, 2015, pp. 495–496). The data collection aimed to obtain rich accounts of pilots' perspectives on trust and automation, to explore the influence of trust on aircrew-automation collaboration. A constructionist approach was taken to integrate participants' expressions and the importance they gave to specific aspects (Byrne, 2021, p. 5). The analysis was mainly inductive, to explore the data openly; it was not informed by existing models or specific theories (Gareth et al., 2017, p. 22).

While different schools of thematic analysis exist (Braun & Clarke, 2021a, p. 39), reflective
thematic analysis (RTA) appeared to be the most suitable one, considering the research
objective, the chosen paradigms, and the sole researcher approach (Braun & Clarke,
2020, p. 6; Byrne, 2021, p. 3). In RTA, the researcher's subjectivity is used as an analytic resource (Braun & Clarke, 2019, p. 591, 2020, p. 3); Braun and Clarke (2016, pp. 740–741) are adamant that in RTA, themes do not pre-exist and simply emerge but are generated through the researcher's active participation. The RTA approach differs substantially from “coding reliability” thematic analysis (Braun & Clarke, 2021a, p. 39); furthermore, attempting to establish code accuracy or reliability contradicts the RTA concept and is discouraged (Braun & Clarke, 2019, p. 594; Byrne, 2021, p. 3).

3.2 Targeted Sample

The research aimed to gain an expert perspective on the influence of trust on automation
usage rather than achieving an overall perspective representing the entire pilot population.
Accordingly, a non-probability purposive sampling method was used (Campbell et al.,
2020, pp. 653–654); the sample was intended to be homogenous in terms of established
flight experience. The targeted participants were experienced aircrew flying as captains on either 4th-generation airliners or comparable business jets, or flying 4th/4.5th-generation fighters. The inclusion criteria were: more than ten years of experience as a professional pilot, either with a civil operator or as a military pilot, with a career started on classic aircraft types; operational flight experience as a commander on one or more of the aircraft generations listed above; and more than 5’000 hours of flight experience (2’000 hours for fighter pilots). No relevant exclusion criteria could be defined.



The target sample aimed to have a mix of pilots from different operational worlds and
represent various aeroplane types. The sample size was to be at least 10 participants.
Practical considerations, regarding time constraints and research framework, dictated the
sample size (Robinson, 2014, p. 29). Nevertheless, based on comparable studies (e.g. Lempereur & Lauri, 2006; Tušl et al., 2020; Weyer, 2016), it appeared reasonable to assume that 10 participants would provide sufficient meaningful information.

3.3 Participants

Twelve participants living and working in Europe were interviewed. One of those interviews
was later set aside, as the critical criterion of starting a flying career on legacy aeroplanes
was not met. The retained eleven participants were professional pilots operating as
captains. Four of them had military fighter backgrounds, three of whom later continued civil careers; the other seven received civil pilot training from the outset. Eight
participants were instructors, and three were also test pilots. The participants had an
average professional experience of 30 years (SD=9.3), an average total flight time of
12’545 hours (SD=5’027) with an average of 8’355 hours as pilot-in-command (SD=3’631).
Appendix A-03 shows the advanced aeroplanes flown. The participants' demographics
vastly exceeded the inclusion criteria.

Half the participants were direct contacts of the researcher. Of the remainder, two were proposed by other participants, two were contacted through professional networks, and two were reached through the academic network. Participants were not compensated for their participation and freely donated their time for the interview.

3.4 Equipment

A desktop computer with encrypted video conferencing capability (i.e. Zoom), including
audio and video recording, was used for the interviews by video-link. An iPad was used
concurrently as a backup audio recording device.

NVivo (no version number attached) was used for coding and analysing the interviews.
The NVivo transcription module was used for initial interview transcription.



3.5 Interview Protocol

Due to the Covid pandemic, the interviews were conducted via video-link, as per the University recommendations. Johnson (2019) found that face-to-face interviews provided richer data than video-link interviews. Furthermore, Krouwel (2019) noticed that more interaction took place face-to-face, although only to a limited extent; nonetheless, they credited the video-link approach with substantial advantages regarding time and cost savings. Jenner and Myers (2019) asserted that video-link interviews do not result in lower-grade data quality, but cautioned about time-delay issues, potentially resulting in challenging cross-talk. It was concluded that the planned interviews could be conducted satisfactorily via video-link, as the apparent superiority of face-to-face was somewhat limited; furthermore, video interviews offered undeniable flexibility and time savings.

An interview guide was developed iteratively over several weeks, and was finalised
through a pilot interview. Nevertheless, it was further adjusted throughout the interviews
(appendix A-06). The questions were organised to cover areas where trust was likely to
play a role. While the participants were aware that trust was the central point of the
research, trust was not mentioned directly by the researcher, except for the last question,
to let participants contribute amply and avoid leading them into finite and specific answers.
The participants’ perspective was queried concerning:

1. the evolution of automation and its impact on humans

2. confidence or doubt with specific automation systems

3. contexts that could influence automation usage

4. experienced automation issues

5. automatic systems transparent to the user

6. autonomous safety system able to take control

7. the possible arrival of hard artificial intelligence

8. what should be the dream human-automation set-up

9. influence of trust on automation usage



3.6 Procedure

3.6.1 Ethics Considerations

The research was approved by the Coventry University Ethics Committee (appendix A-02).

The research objectives and approach were transparent to the participants; they came
from different organisations and participated as private individuals. The interview files were
kept confidential, and the transcripts anonymised.

Participants were provided in advance with an information sheet (appendix A-04), sent by
e-mail together with a consent form. The signed consent forms were returned by e-mail to
the researcher before interviews took place; an e-mail confirming acceptance of the
conditions on the consent form was also accepted. Before formally starting the interview,
participants were re-informed verbally about the research purpose, the participation
conditions, their right to withdraw at any time, and then asked their permission to start
recording.

3.6.2 Data Collection

The interviews were conducted via video-link. They were audio and video recorded, except for one, which was solely audio recorded owing to security constraints at the participant's location.
The main questions were administered following the structure of the interview guide.
Prompts were used to investigate further and yield rich data on each aspect. Every
participant contributed to all the subjects queried.

Overall, the interviews went smoothly. Cross-talk occurred occasionally, likely caused by communication lag. The eleven interviews yielded a total of 11h55 of recording on the defined questions (M=01h05, SD=00h16).

3.6.3 Interviews Transcription

Transcription calls for choices regarding what is transcribed and how, yielding different
outcomes; consequently, the transcription approach should be coherent with the research
methodology. Transcription involves a level of interpretation and cannot be neutral
(Bucholtz, 2000, pp. 1440–1441; Green et al., 1997); thus, the approach and concept are
made transparent.



This research focuses on interview content, analysed through thematic analysis; therefore, a denaturalised transcription approach was the most appropriate (Oliver et al., 2005, p. 1276). The sentences were cleaned up to make them fluid but were not corrected, nor was the wording improved. Punctuation represents sentence flow and rhythm rather than grammatical usage (see convention in appendix A-08). The recordings were first processed through the NVivo transcription engine and then thoroughly verified, corrected, and adjusted by the interviewer. Only the audio data were used; gestures and facial expressions were not analysed, on cost-benefit grounds and owing to time constraints.

All information related to companies, places, and names was anonymised unless it referred to information in the public domain. Information regarding aeroplane make and model was retained for the analysis but anonymised in the extracts used to illustrate results.

3.7 Analysis

The analysis broadly followed Braun and Clarke's (2006, pp. 86–93) proposed steps, further informed by two worked examples (Byrne, 2021; Trainor & Bundon, 2020). To get familiar
with the data before coding, interviews were listened to, and transcripts were read several
times.

Coding of the first six interviews was approached with an open and curious mind. Sentences
were used to describe what seemed important, relevant, or peculiar, with a particular
interest in trust aspects and elements that could influence trust. Nonetheless, elements not
related to trust but that might be relevant or shed light later were also coded. The coding
approach was mostly semantic (Gareth et al., 2017, pp. 22–23).

Some codes were written interrogatively regarding the elements reported, some as versus codes, and some as expressions of the interviewees' opinions or experience. Some codes ended up as “containers” encapsulating similar views among interviewees, particularly when directly related to trust in automation or automation usage. Code names evolved as their meaning became crisper, while others were consolidated. Purely descriptive codes
were avoided to give life and context to the codes and grasp the main ideas (Saldaña,
2016, pp. 76–80). Occasionally, interview elements resulted in different codes, when they
appeared to cover different aspects simultaneously.

These first interviews yielded 202 codes. To consolidate them, tentative categories were then created to surface duplicate or closely related codes. It also



gave a first glimpse of possible meanings. The exercise was stopped when it appeared that grouping was being done for the sake of simplification and that essential discrete information was starting to be diluted. It ended with 126 codes grouped into 16 categories. The
path taken appeared to broadly match what Saldaña (2016, pp. 98–101) calls elemental
methods, specifically structural coding.

After coding the seventh interview, a fresh, in-depth review of the codes and categories
appeared necessary. A mind-map was used to materialise reflections, clarify the overall
picture and improve the codes. All previously coded transcripts were reviewed for coding
adequacy and consistency. It resulted in 100 codes in 16 categories. The remaining
interviews were coded on that basis, adding new codes as necessary. Nevertheless, codes and categories continued to evolve as coding progressed. After all interviews were coded, the categories were redefined to minimise overlap, and the codes were relocated accordingly. The coding exercise finally resulted in 150 codes grouped into 18 categories.

The next step was to make sense of the coded data from a holistic perspective, moving between the global picture and the codes from different angles. Through the process, the categories evolved towards interconnected themes. Mind-maps were used extensively, re-organising the codes and their grouping accordingly, creating a new mind-map, and so on (see appendix A-09 for an intermediate mind-map). In the process, some codes and categories were set aside as a coherent, broad picture was generated. From 134 codes in 14 categories, the analysis evolved iteratively into 51 codes forming seven interconnected themes (list of codes in appendix A-10).



3.8 Data Saturation

The premise of achieving saturation is that sufficient data are collected to ensure complete coverage of the researched subject (Morse, 1995). While Constantinou et al. (2017, pp. 585–586) see saturation as the key validity indicator in qualitative research, Braun and Clarke (2021b, pp. 208–211) believe that predicting data saturation in reflective thematic analysis is nearly impossible, and possibly irrelevant. However, in the present research, the quantum of data collected was limited by resources, and an a posteriori saturation evaluation would still provide a helpful validity indication.

The method proposed by Guest et al. (2020) was used to assess the level of data saturation achieved, using the first four interviews as the base, subsequent runs of two interviews, and a new-code threshold of 5%. Based on these assumptions, code saturation was achieved by the eighth interview (see table 1). Accordingly, the sample size appeared adequate.

Interview Number          01    02    03    04    05    06    07    08    09    10    11
New Codes in Interview    32     3     4     2     3     3     2     0     1     1     0
New Codes in Run                            41           6     5     2     1     2     1
% Saturation                                           85%   88%   95%   98%   95%   98%

Table 1. Estimated saturation versus interview number (base of 4, runs of 2)
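For illustration, the arithmetic behind Table 1 can be reproduced directly from the per-interview new-code counts. The short Python sketch below is not part of the study method; it simply assumes the counts reported in Table 1 and applies the base-and-run logic described above (base of four interviews, sliding runs of two, 5% new-information threshold), broadly in the spirit of Guest et al. (2020).

    # Illustrative sketch only: saturation estimate from the Table 1 code counts.
    new_codes = [32, 3, 4, 2, 3, 3, 2, 0, 1, 1, 0]   # new codes per interview 01-11
    BASE, RUN, THRESHOLD = 4, 2, 0.05

    base_total = sum(new_codes[:BASE])                # 41 codes from the base interviews

    for end in range(BASE + RUN, len(new_codes) + 1):
        run_new = sum(new_codes[end - RUN:end])       # new codes contributed by the run
        new_info = run_new / base_total               # proportion of new information
        flag = "  <- 5% threshold met" if new_info <= THRESHOLD else ""
        print(f"run {end - RUN + 1:02d}-{end:02d}: {run_new} new codes, "
              f"saturation ~ {1 - new_info:.0%}{flag}")

Under these assumptions, the script reproduces the run totals and saturation percentages of Table 1, with the 5% threshold first met at the 07-08 run, i.e. by the eighth interview.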



4. Results

4.1. Participants' Perspective on Automation

Overall, automation was seen as beneficial, particularly concerning safety. It has changed the nature of the work, which shifted from piloting to supervising systems; furthermore, the new capabilities increased demands and expectations. Moving from classic aircraft to advanced automation was described as a challenging giant step. Automation development was perceived as technology-driven rather than needs-driven, resulting in extraneous complexity. Finally, training was considered inadequate.

4.2. Themes Generated

Seven interlinked themes, related to trust and its influence on aircrew-automation collaboration, resulted from the analysis (see figure 1). Theme-specific results, illustrated by interview extracts, are presented in the subsequent sub-sections.

Figure 1. Themes and their interconnections



4.2.1 Trust in the Aviation System

Participants expressed serious doubts and concerns regarding upper organisations in the
aviation system: management, manufacturers, and certification authorities. Many
comments were linked to the 737 Max Manoeuvring Characteristics Augmentation System
(MCAS) design, certification, and accidents. Participants opined strongly that automation design was all about business decisions driven by profit, with human factors and safety being inferior concerns. Supposedly, manufacturers exploit the Operational
Suitability Data (OSD) process fully, to attain type rating commonality and create
competitive advantages. Participants expressed vivid concerns about autonomous devices
unknown to crews. P04 shared his view:

the Max system, which would supposed or implemented to avoid system failures and actually
overrode the pilot's input. Definitely not good, I see what, probably Boeing is probably a very
actual case where you can see what happen, when you try and hide or, when you not publish
information which is definitely, need to know for the two guys in front.

The technical expertise was seen as being in the manufacturers’ hands, while certification authorities were considered to lack the competencies and resources to maintain the balance of power. Furthermore, certification standards seemingly lag by one aircraft generation. Trust in the manufacturers' honesty and the authorities' capabilities was perceived as having declined steadily among the professional pilot community; one participant was “very upset” by this situation, while another labelled it “a pity”. Many participants saw the dramatic 737 Max fiasco as a direct consequence of inadequate oversight and economic pressures. P09 labelled the certification system as fundamentally flawed:

I think, we're into, you remember the phrase of the Disney film about the emperor's new clothes.
We have the manufacturers built it, he's got the manufacturers test pilots, he's got the
manufacturers test pilots talking to the allied certification authority test pilots, who have been
there talking to them; and they've done all the profiles in the simulator. It is a self-defeating
procedure because they all convince each other it's perfect, (...) until something goes wrong.

However, one participant described the multi-layered evaluation process in the military as
very dependable. When a new device or software load is released to the operational unit, it
works as expected and can be relied on. The process appears highly valued and trusted.

Participants felt strongly about management-by-compliance. Managers are perceived as attempting to discharge their duties and responsibilities by looking at strict procedure compliance without considering or understanding in-flight realities. Participants judged that



it reduced their ability to think and act as pilots, particularly about sensible automation
management in challenging situations.

Interestingly, one participant saw technical trust as incidental compared to the serious
concerns posed by entrusting automation with personal data. P03 explained:

A huge challenge there is data protection; because, is that data protected? or is my employer
going to look at the way that I set up my cockpit, or the way that I deal with certain systems?
And that potentially going to have ramifications on your job, so there's that trust issue. And I
think, at least a lot of pilots I talked to, that's where their main trust issues are; not so much with
the technical side, but with the whole data side that's involved; the data protection, and also,
what data goes into some of these systems, has this data been embedded? Is it quality data, or
is it just some weird data? So (...) but trust is definitely an issue.

4.2.2 Trust in Automation

The participants saw trust in automation as essential and exemplified its impact on
automation usage. The level of automation used appears highly influenced by trust.
Notwithstanding, P02 emphasised the importance of finding the right balance:

Trust, surely to a certain degree, improve it; beyond a certain degree it decreases it. Meaning,
it's like a stall, your pitch to a certain degree, increases lift at some point you get a buffet and if
you are overconfident, you get in a secondary stall.

However, avoiding over-trusting automation appeared difficult in a world of very reliable automation bringing great support. P02 continued:

Yeah, well, automation is a very, very comfortable thing. It's, is very nice to rely, and it gives you
the sensation of full (...) reliability.

P03 wondered whether trust had not reached a point where crews are no longer able to be assertive when using automation:

on the technical side, we have almost crossed the threshold where we are beginning to trust
automation too much; because it is very reliable, and the few instances where it does not act
like we expect, we're really surprised, and we're very reluctant to do something because we, at
least I, (constantly?) say: "I'm sure I missed something, because he, he wouldn't make that
mistake, I missed something, it's probably my mistake, I did something wrooong".

Nonetheless, the view is that trust should not be given as a blank check; knowing what is
trusted and why is necessary. Furthermore, trust should not exist without being verified.
Many participants raised the importance of being clear with expectations, as the commands given do not necessarily reflect what crews really want. P05 illustrated
the cognitive complexities involved:

So, yes, (...) trust is important. But you have to know what you're trusting, and do you know
everything the computer is doing? Is the computer know everything that you are doing? or that
you're thinking?

Most participants presented insufficient automation knowledge and understanding as strongly affecting their ability to trust automation. This ability was primarily rooted in an understanding of the automation philosophy and logic, comprehensive training, and transparent information about the system's functioning. P04 expanded:

I think it all being built in your training, that you, not only manage the systems, but you try and
understand as best as possible. And that comes all back to your training. Training means,
having also extra time, having not the commercial pressure in the back of your neck that
companies have nowadays, during training.

Several participants mentioned the importance of unconditional trust, required in certain situations or operations that depend on fully automatic systems for their performance. Trust bestowed on such systems appeared strongly linked with trustworthy certification processes. P09 commented on automatic approach and landing:

Trust is very deep, you must trust it, like we've said, for a CAT III landing. To trust (unknown?),
it's actually taking you down to the concrete, not to the airport building or the grass next to it or
whatever. So your implicit trust in a CAT III purely looking out the front and there's going to be a
runway when we get that, provided you've done your homework, set the autoland, done all your
checks, you've got the trust that it can fly, and would flare and land the aeroplane.

However, P06 raised a point regarding the tension created between requiring crews to
trust a system unequivocally while at the same time exposing them to system
malfunctions:

And it's ambivalent because on the one hand, after the simulator, a pilot should go out and trust
the system, the automation for his next flights in the real aircraft. On the other hand, every six
months, it's shown to him that the systems can fail very close to the ground.

Different accounts revealed that trust in automation could be severely affected by hearsay or myths propagated within a community, dramatically affecting how specific systems are used. P01 contrasted a widely held conviction with his operational experience:

I think it influences it greatly, and a lot of it is down to misinformation, social pressure. So, there
was a lot of question about the [Aeroplane-C] and its autothrust system, and it was nonsense.
And people just stop using it. They wouldn't trust it, they were overriding it, [...] And I left the



autopilot in and the autothrust in, the whole way down to 100 feet. At 100 feet, I took it out and
just put it in the rudder and landed, and both automation systems the flight path control and the
autothrust was absolutely, was excellent, excellent. Now, that was a system that had been
roundly chastised by ballroom talk, and it was just interference with people. Not using the
system properly.

Several participants evoked that trust in automation is affected by personal preferences, opinions, and the readiness to embrace new automation technology. One participant expressed that pilots are generally quite conservative and tend to rely on what has been proven in the past. However, criticism and doubts tend to disappear once crews gain experience and the system's reliability is established.

Finally, one participant reflected on the tension between the solutions engineers believe
they should provide, and what pilots perceive as their duties and responsibilities on board.
He saw the critical issue as negotiating a level of trust acceptable for a given situation,
noting that it would change with each automation generation.

4.2.3 Usage of Automation

While participants spoke about the importance of trust in automation, most pointed out that
automation usage was primarily governed by Standard Operating Procedures (SOPs).
Nonetheless, companies differed in their approaches, from very detailed SOPs to offering vast leeway to their crews. The reasons were, for example, cultural or operational. P04's experience is illustrative:

that was a clever way of [Carrier] to take on the problem of having various culture in the cockpit
was, you have to create one similar operation for very dissimilar cultures. So, the SOPs were
written really in the smallest details, almost to the level of: you had to touch this knob with two
fingers and the other with three fingers, almost to a very detailed level. Whereas in [Carrier] you
could, it was more open. It's probably more the culture [World Area A] civilised culture, saying
[World Area B] probably more dogmatic, more to the letter.

However, unwritten automation practices, built on false assumptions or defining socially acceptable behaviours, seemed to exist in some operations. Many of these habits were reported to conflict with the automation logic or to lead to sub-optimal automation usage.

Automation usage strategy was mainly linked with workload management, maintaining a sensible drawback-advantage balance, or in some cases achieving maximum capabilities. Nonetheless, several participants opined that high levels of automation could leave crews overwhelmed by it. One participant explained that it was not always easy
to get the priorities right, owing to conflicting expectations of different stakeholders.
Overall, automation was seen as a tool supporting the pilot. However, many participants
viewed advanced automation as subjugating. P07 exemplified:

Now, you're in downwind for a visual approach; why do you need all this automation? Flight
director Off, autopilot Off, thrust director Off, fly the plane based on final. Why do we need to
turn the heading bug? Why do you need the path down now? Get rid of all of this [laugh].

Many participants reported hand-flying regularly, with automation accordingly disengaged. However, some participants shared that many pilots lack automation knowledge, understanding, and proficiency. Some of these pilots were said to rely instead on their hand-flying skills, which was seen as inadequate. P06 shared his view:

it's also sometimes a problem when pilots have very good manual flying skills and good
overlook and a quick cross-check, but have a lack on automation use because the balanced
pilot has both skills, so he can decide which system he uses: his or the automation. But often
some pilots are skeptical for any reason, and on the other hand, they have also not the skill to
use automation proficiently.

4.2.4 Healthy Scepticism

All participants professed assertiveness in dealing with automation, while stating the
importance of having a “degree of scepticism” or maintaining a “healthy balance between
trust and scepticism”. One stated candidly that “a hole in the Swiss cheese” was
unavoidable. Many participants asserted that technology should not be blindly relied upon.
They articulated the importance of understanding how the system works, its weaknesses,
and what it is supposed to do. P02 claimed:

I trust sound understanding of what I am doing

Participants spoke about the importance of a large base of knowledge and experience as
the foundation to develop an assertive attitude towards automation. It was stressed that it
was not a matter of doubting automation, but rather ensuring healthy cooperation and
reacting as necessary. P01 shared:

So, it's not that I don't trust it, I know what it can do, but I also know the limits of what it can do.
So, if something doesn't work, it doesn't upset me, it doesn't scare me, it's just: oh, that's
interesting it's not doing what I supposed to do. Instantly take over manually or use another
mode or use that secondary mode or whatever.



Many participants saw the rich experience gained earlier in their careers, with less automated systems and constant cross-checking, as the foundation of their current cautious approach. Some expressed concerns regarding the new generation of pilots, who are unlikely to enjoy a similar exposure. Furthermore, the current training approach was viewed as relying largely on declarative knowledge and dogmatic SOPs, which was not seen as supporting the development of a healthy questioning attitude.

4.2.5 Self-Confidence

Participants were adamant that hand-flying skills and self-confidence were fundamental
aspects of flying. P07 expressed it vividly:

It's paramount, it's paramount. (...) I mean, trust in oneself, that's absolutely necessary, you
don't go in a plane if you don't trust yourself, of course.

Sharing his perspective on crews' duties and responsibilities, P06 stated:

The men are flying because when the systems fail, for many reasons, then the pilots are the
last line of defence. And they have to keep their manual flying skills up to date.

The capacity to hand-fly the aeroplane and take over safely from automated flight was seen as a critical skill to master in order to recover and continue safely after an automation malfunction. Participants expressed that the ability to operate automation fully and calmly depended greatly on crew confidence in flying proficiently at reduced levels of automation. The participants' confidence in their hand-flying skills was striking. While their confidence appeared rooted in substantial experience on classic aircraft, dedicated training to gain sureness in hand-flying FBW aircraft was deemed essential. P04 exemplified:

their training was based on manual skills; they would first demonstrate that you could fly an
[Manufacturer] fly-by-wire aircraft perfectly in full manual mode, until you got very confident in
your manual skills, and then you got to know the system. So I felt I, must say, I felt very
comfortable using the [Manufacturer] system.

Regular hand-flying practice was deemed necessary. While this perspective seemed shared by operators, actual policies and practices appeared to vary significantly. Furthermore, hand-flying beyond the customary minimum was reportedly criticised in some cockpits.

Achieving confidence in recovery abilities was reportedly trickier. While training is conducted in simulators, actual exposure comes from experiencing abnormal events during operation, which do not happen on command. Hence, overall experience, the breadth and realism of trained events, and understanding of the flight control systems were seen as crucial elements of readiness. P01 illustrated the issues:

And in extremis, the next thing you know is the autopilot disconnected, because it's reached its
limit, and it says I can't cope anymore. And now it's gone from really quite a benign environment
because it's doing its job, taking out the turbulence effects, and then all of a sudden you get the
full turbulence. So not only are you having to fly manually, but you're now in this really quite
hostile environment. And of course, we don't fly that, we don't train it, it just happens.

P03 shared his concerns, stemming from insufficient information and the opacity of flight control system functioning:

that's that very far tip that I, when it comes to feeling comfortable, would say, ok, if we get into
that very, very small niche, that's an area where, I'm honest, I'm not 100% comfortable with

4.2.6 Aircrew-Automation Interactions

One participant raised that crews tend to give more leeway to automation than to a fellow crew member hand-flying, for whom a strict deviation protocol exists and is enforced. In the absence of clearly defined decision thresholds and actions, crews tend to waver and give automation credit at any subtle sign of correction. P03 mentioned that
he never really thought about this issue until a discussion with a colleague a few days
before the interview. He contrasted the two situations:

if I fly, I'm pilot flying, manually, and my co-pilot says: "speed", I'm like: “yeap, correcting”. And
that's the response I have to give: "correcting", so that's the dialogue. Now, if we're watching
autopilot, this is how the dialogue typically goes: "ohh (...), if I'd be flying like that you'd be
calling speed right now", or: "look at, that's not very good speed control", and then, like you say,
we both look at each other: "ok?". And now it's sort of individual tolerance, if and when I'm going
to take action.

Many participants reported facing automation weaknesses regularly and explained the need for strategies to deal with these performance shortcomings. Such strategies appeared largely experience-based and often implied anticipating automation behaviour, or tricking it to ensure acceptable outcomes. Nonetheless, several participants also viewed the lack of automation proficiency as a significant issue. They felt that some crews lack understanding of what automation is doing and, more importantly, do not know what they want the aircraft to do. However, P11 pointed to human limitations:

I mean, as long as you're aware, it's nice. If you're ahead of the game, it's nice, but the trouble
is, like we all know, that sometimes you're not ahead of the game, sometimes you're behind.
And if then, the machine does something different you were expecting, and you're already
behind the play, then it becomes like challenging

A participant explained that aeroplanes communicate through the Flight Mode Annunciator
(FMA); hence, not understanding the FMA is similar to a couple where one talks and the
other one does not listen. Notwithstanding, other participants pointed to the rising
complexity of cockpit interfaces, which have become more demanding. The difficulty for
the automation to know what the crew truly want or think was also raised. P03 conveyed
the need for human-centred design:

I think, the human-machine interface is crucial; I mentioned, to (...) have the cockpit
communicate with me in a way, that caters towards me as a human

Several participants criticised the lack of information about the throw of the controls and about where the automation is within its operating envelope, limiting crew awareness of the remaining automation margin. P01 expanded:

So, I think, we do need to make pilots more aware of what the automation does. And where you
are in terms of normal operating, versus the entire operational envelope of that automated
system; we need to have a gage. We don't provide that feedback at the moment, sufficiently.

4.2.7 Sense of Control

The participants were adamant that their function was to command their aeroplane and, additionally, to be the last line of defence when things go awry. Accordingly, they want to be the ultimate decision-makers. However, many participants felt tensions between the level of control bestowed upon automation, the opacity of certain systems, and their duties and responsibilities. Nevertheless, one participant mentioned that this discussion was a recurring subject between engineers and pilots with every new automation generation.

Some participants clarified their concern as being a control-command-responsibility issue. They felt that who has command is unclear in certain flight envelope areas, or regarding
automated systems that are transparent to the pilot. The feeling is that the authority-
responsibility dyad is blurred and that a clear demarcation line must be drawn. P03
expressed that feeling:

then I cannot exercise, not just control, but I cannot exercise command, and then that has to be
clearly defined; and then, whoever assumes that command authority, somebody does, I mean if
it's designer, the software engineer, or the manufacturer, or the operator, then they need to say,
ok, yes, I assume the command authority; and I assume everything that comes with that.



Asserting the indivisibility of these two elements, P08 further stated:

So, as long as I have the responsibility, I want to have control. And if they take me that control
possibilities, I don't want to have responsibility. So we needs to be clear, who is responsible,
and the one who is responsible needs to have the possibility that he can act.

Incidentally, one participant explained that on [Aeroplane-G] they have to disengage automation in certain abnormal situations, reportedly to shift responsibility to the crew.

Autonomous safety systems were generally seen positively by the participants, provided that an override function exists. However, one participant expressed unconditional trust in the ones fitted to [Aeroplane-D], as those were his lifeline.

Most participants reflected on haptic control feedback. Many expressed an aversion to non-moving thrust levers and to controls without force or position feedback, as they felt deprived of an essential source of feedback for staying in the loop. One participant stated that he was very comfortable with such control systems; however, no participant suggested preferring controls without feedback.

System management concerns were widely discussed, with most participants mentioning the lack of information regarding system functioning. P01 talked about his latest type rating course:

To get the information as to how it actually works, was well beyond any of the training manuals,
or any of the manuals that were given to the pilot. We (were?) just not given the information. It
was just: you don't need to know that. You know, if it's in front of you, it's good. If it's not, we'll
take it away and we'll decide, okay. Yeah. So, I didn't like that.

The issue of having limited data from the aircraft systems to manage them effectively when they malfunction was raised by many participants. One of them pointed out that knowing a malfunction's operational effect does not say what is happening inside the aeroplane, nor does it shed light on the malfunction's root cause. Furthermore, some stressed the need to understand a situation rather than just mechanically apply procedures. P01 recalled:

So the first officer was very keen, very well trained, and he was going to just carry out that
action, because that's what it said to do. At which point I then said: why don't we find out what's
going wrong first, before we do anything else?

Several participants opined that manufacturers were playing to the lowest common denominator regarding the information provided. However, the approach was reported to vary significantly between manufacturers, depending on how they view the pilot inside the
system. Nonetheless, participants clarified that it was not about having as much information as possible, but about having adequate information from a line pilot's perspective. One participant suggested that better models were needed to determine what constitutes relevant information, as we currently tend to realise the inadequacies only after accidents occur. Another participant expressed that while having data is nice, data only become relevant information with the ability to filter and contextualise them.



5. Discussion

Unexpectedly, a substantial level of doubt exists toward aviation authorities and aircraft manufacturers. Latent feelings and concerns regarding the safety and integrity of the aviation system apparently grew over the past ten or twenty years. The two catastrophic B737 Max accidents possibly transformed these impressions into reality: first, by showing the dramatic consequences; second, by making public the flaws in the certification process and the evidence that the manufacturer concealed information and disregarded safety concerns, prioritising its business case instead (The House Committee on Transportation & Infrastructure, 2020, pp. 11–33). The low trustworthiness of authorities appears to be an issue of ability and benevolence; regarding manufacturers, benevolence and integrity are likely the key factors (Mayer et al., 1995). The analysis showed that trust in the organisation releasing automation for use influences trust in that automation, a view supported by Muir (1994) and Parasuraman and Riley (1997). The positive influence of a trusted higher organisation on automation trust appears more substantial than the negative influence when an organisation is doubted. However, these two opposites came from vastly different set-ups regarding certification, operational usage, and culture, which could also explain the apparent difference.

Trust is perceived as essential and strongly influences the automation level used; the current high automation reliability creates a complacency risk, echoing the conclusions reached by Muir and Moray (1996). The need to trust analytically and to verify is emphasised; however, when automation behaves unexpectedly, humans immediately doubt themselves rather than the automation. This apparent dichotomy could be explained by Lee and See's (2004) model, where the affective process strongly influences the analytical one. However, it could also be symptomatic of an automation complexity reportedly edging towards what aircrews can reasonably comprehend. Automation knowledge and understanding, together with comprehensive training, appear strongly linked to trust in automation; this seems inconsistent with the findings of both Muir and Moray (1996) and Lee et al. (2021), but is strongly supported by the conclusions Balfe et al. (2018) reached in their real-world study. Reportedly, hearsay can dramatically affect trust in automation and, consequently, its usage.

Achieving unconditional trust in automation is vital when crews must rely on automation, e.g. during category III autoland, where confidence in the automation certification plays a significant role, echoing the findings of Ho et al. (2017, p. 248). Training crews for such operations likely creates cognitive tensions. On the one hand, they are trained to achieve complete confidence in automation reliability; on the other hand, they are shown that system failures can occur. This contradiction might not be solvable; however, it shows the necessity of integrating trust aspects into training design. Automation myths and hearsay are frequent and affect crew trust, leading to inadequate automation usage; this is supported by Lee and See (2004).

The study showed that while trust greatly influences automation usage, actual usage is largely prescribed by SOPs. Furthermore, some operational requirements, such as autoland, are beyond human capabilities, imposing reliance on automation. Finally, in certain situations, pilots might decide to practise hand-flying. Consequently, situations can occur where pilots trust automation but decide not to use it, must rely on it without necessarily trusting it, or must use it despite not trusting it. Therefore, aviation operational rules, procedures, practices, and situations strongly mediate the relationship between trust in automation and reliance on automation. However, this reality should not downplay the impact of trust; it is conceivable that a significant divergence between the level of trust in automation and automation usage increases cognitive workload.

Nonetheless, automation usage is equally governed by workload management, although higher LoA could also increase workload. This latter point contradicts Balfe et al. (2015, pp. 62–63), who found that higher LoA reduce workload. However, in certain aviation situations, higher LoA require substantial programming investment while yielding limited benefits, hence workload increases. Crews may exhibit different levels of competence and confidence in hand-flying and in automation usage. Accordingly, choices are not always made rationally, but result from the perceived gap between their confidence in hand-flying and in automation usage. Hence, trust in automation and reliance on it may stem from a lack of hand-flying skills. Conversely, reliance on hand-flying skills may indicate a lack of confidence in automation management, which is not conducive to trusting it. This mechanism is conceptually similar to the findings of Lee and Moray (1994) regarding the confidence-trust interplay.
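
As a purely illustrative aid, and not a model proposed by this study, the Lee and Moray (1994) mechanism can be sketched as a simple decision rule in which reliance follows the sign of the trust–self-confidence gap. In the minimal Python sketch below, the function name, the 0–1 scale, and the margin parameter are assumptions introduced for illustration only.

    # Minimal illustrative sketch of the Lee and Moray (1994) trust/self-confidence
    # mechanism. The 0-1 scale, the names, and the margin are assumptions made for
    # illustration; this is not the model proposed in the present study.
    def relies_on_automation(trust_in_automation: float,
                             self_confidence: float,
                             margin: float = 0.0) -> bool:
        """Return True when the trust/self-confidence gap favours automation."""
        return (trust_in_automation - self_confidence) > margin

    # Example: strong hand-flying confidence (0.9) with only moderate trust in the
    # automation (0.6) would, under this simple rule, predict hand-flying.
    print(relies_on_automation(trust_in_automation=0.6, self_confidence=0.9))  # False

Under such a rule, the choice reflects the perceived gap rather than trust alone, which is the interplay referred to above.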

Realising the role played by the healthy scepticism attitude was remarkable. It appears to be the vital mechanism used to exploit automation confidently while, at the same time, observing it with a neutral dose of scepticism and acting decisively if needed. It apparently relies strongly on critical thinking. It is not about doubting; it is about ensuring a healthy relationship with automation. It seems based on the ability to maintain a state of trust and distrust at the same time. While Mayo (2015) discusses the region of “neither-trust-nor-distrust”, the present mechanism appears dynamic, hence the dual-state expression. It is probably best expressed by analogy with F. Scott Fitzgerald's (1936, p. 69) famous quote, although without referring to the intelligence aspect:

“The test of a first-rate intelligence is the ability to hold two opposing ideas in
mind at the same time and still retain the ability to function”.

The healthy scepticism approach appears to build on solid knowledge, experience, and critical thinking. Furthermore, a deep understanding of how systems work, their capabilities and their limitations seems necessary. Both Schaefer et al. (2016) and Hoff and Bashir (2015) demonstrated the importance of knowledge and understanding to trust, supporting the present perspective. Nonetheless, the participants' wealth of experience stems from early careers spent on classic aeroplanes, where navigation systems were less accurate and required frequent cross-checking, and where malfunctions were regular occurrences. Consequently, the healthy scepticism approach described might well be unique to the generation of aircrew who started their careers on legacy aeroplanes.

Having well-honed hand-flying skills and the necessary self-confidence to use them appears essential. Lee and Moray (1994) found that the gap between trust and self-confidence conditioned automation usage; consequently, high confidence in hand-flying skills should lead to low automation usage. However, the analysis showed that high hand-flying confidence could lead to higher automation usage, even in situations of limited trust in automation, apparently contradicting the findings of Lee and Moray (1994). Solid hand-flying skills were presented as the fallback plan when automation does not behave as expected. It is, therefore, most likely that, when equipped with a solid fallback plan, the perceived risk of using automation reduces dramatically, hence the readiness to use automation despite not entirely trusting it. Mature hand-flying skills and confidence certainly developed through substantial, routine exposure to flying less automated aeroplanes, becoming second nature. It is unlikely that newer-generation pilots could achieve the same hand-flying experience and confidence within the current aviation paradigm. Confidence to take over manually at the edge of the flight envelope was less pronounced; the critical issues seemed to be limited training and an incomplete understanding of the flight control system behaviour. Balfe et al. (2018) support the result regarding the criticality of understanding for ensuring trust in automation. Casner et al. (2013) doubted the adequacy of current training practice.

The absence of a human-automation Crew Resource Management (CRM) protocol governing when, and under which conditions, a take-over shall take place was not surprising. However, the realisation that the issue existed was striking. In the absence of defined tolerances, crews give automation larger leeway than they give their fellow crew members, hoping that the automation will correct itself. Crews are informed about what the automation is doing through FMA monitoring; however, they are not necessarily fully aware of the automation's game plan. Without advance, intuitive information about the automation's intents and remaining margins, crews are left guessing and reacting. This is supported by Koo et al. (2015) and Dorneich et al. (2015). Furthermore, some crews might lack a complete understanding of automation capability and logic to interact confidently with it. The issue appeared linked to the system complexity and the partial training provided.

The research showed that pilots' duties and responsibilities are very clear from their perspective. However, pilot purpose and authority appeared to be viewed differently by aircraft manufacturers: the former see themselves in a leadership role, while the latter look for responsive managers. Nonetheless, these tensions regarding how much control is given to pilots, or respectively how much control they retain, reportedly occurred with each automation generation. However, it would seem reasonable that the one bearing the ultimate responsibility also has ultimate authority (Billings, 1996, pp. 8–9). With FBW, control is technically with the flight control computers. However, there are situations where pilots appear unsure if they are still in command, primarily due to the paucity of automation transparency and the absence of flight envelope information.

Non-moving thrust levers and the lack of control force feedback are reportedly significant drawbacks to intuitively understanding what the aeroplane is doing and to feeling in control. This highlights the question of whether haptic feedback is genuinely advantageous for humans compared with receiving visual information, or whether the preference is anchored in years of mechanical control systems, making controls that mimic legacy flight controls appear the only acceptable approach. Alternatively, it might be a matter of personal preference.

Having access to an adequate level of information is necessary to effectively understand and manage the aeroplane's systems, particularly when they malfunction (D. A. Norman, 1990). The caveat is how the adequate level of information is determined; manufacturers and aircrew have different realities and agendas. Furthermore, individual differences appear to play a significant role in pilots' ability to use information effectively.

To summarise, participants had a positive attitude toward automation and technology. However, concerns were expressed about the future, particularly regarding how modern business-driven training is conducted. It was perceived as aiming at minimum compliance
rather than maximum competence. Trust in authorities and manufacturers does impact
trust in automation. However, authorities’ oversight effectiveness is doubted, and
manufacturers are perceived with suspicion. The B737 Max scandal probably exacerbated
this perception.

Trust does influence automation usage; this is, however, constrained by SOPs and practices. Nonetheless, a mismatch between trust in automation and its usage is likely to impact aircrew cognitive workload. Advanced automation was seen as highly reliable but complex, leading to a bias towards distrusting the human rather than the automation. Most participants displayed a healthy scepticism towards automation as a strategy to maintain a state of neutrality and readiness regarding its behaviour. Participants were highly confident about their flying skills and saw them as a fallback plan; hence, they confidently explored advanced automation features.

All participants displayed a will to learn and improve themselves, and consequently felt some frustration regarding information availability. Manufacturers are viewed as taking a lowest-common-denominator approach to providing information. This approach possibly creates fertile ground for very influential gossip about automation. Training is regarded as being built on compliance and dogmatic SOPs, rather than encouraging long-term competence development and critical thinking.

Lack of automation feedback, limited system information, and poor human-automation interactions raised the most concerns, particularly due to their impact on the sense of control. Crews are made to react to technology instead of collaborating with a technology designed around the human to achieve a common goal. As expected, trust in automation was found to influence human-automation interaction. However, it turned out that the most likely vital driver of trust in automation is high-quality human-automation interaction; consequently, trust would develop circularly, in a sort of virtuous loop. A model linking the different findings is proposed (see figure 2). The bold line flowing from Human-Automation Interaction to Trust in Automation expresses its likely predominance.
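
Purely as an illustration of the circular development suggested by the model, and not as part of the model itself, the loop can be read as an iterative update in which each interaction episode nudges trust towards the quality experienced. The exponential-smoothing rule, the 0–1 scale, and the rate value in the Python sketch below are assumptions invented for illustration.

    # Toy illustration of a circular trust-development dynamic. The smoothing update,
    # the 0-1 scale, and the rate are assumptions for illustration only; they are not
    # derived from the study's data or from the proposed trust loop model.
    def update_trust(trust: float, interaction_quality: float, rate: float = 0.2) -> float:
        """Move trust a small step towards the quality of the latest interaction."""
        return trust + rate * (interaction_quality - trust)

    trust = 0.5  # neutral starting point
    for quality in [0.8, 0.9, 0.7, 0.9]:  # a run of mostly good interaction episodes
        trust = update_trust(trust, quality)
    print(round(trust, 2))  # trust drifts upwards towards the experienced quality

The only point of the sketch is that, in such a loop, the sustained quality of the interaction, rather than any single property of the automation, sets the level towards which trust converges.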



Figure 2. Proposed trust loop model

The present study used a specific purposive sample of experienced professional pilots to explore the influence of trust on advanced automation from the perspective of pilots with a substantial background in earlier-generation aeroplanes. Hence, the findings are unlikely to be valid for other pilot demographics. However, their validity could be extended through hypothesis testing with other pilot groups.

Almost no literature or studies could be found on the influence of hearsay or gossip on automation usage or trust. Therefore, the finding in this regard could not be evaluated against scientific evidence.



6. Conclusion and Recommendations

As Riley (1996) showed, the impact of trust on automation usage and the complexity of the interactions involved cannot be denied. The present study focused on understanding how trust influences human-automation interaction in a modern aeroplane cockpit, based on a purposive sample of highly experienced professional pilots. The results indicate that the quality of the human-automation interaction is probably the most influential factor in trust in automation. A model showing the perceived circular nature of trust-in-automation development is proposed. This finding suggests that a human-automation team-centred approach should be considered for the design of advanced automation.

Further research should explore the specific differences in human-automation interaction between single-pilot and multi-crew operation, and how they impact trust in automation. It could potentially be extended to cover single-seat fighter operations with wingmen. Secondly, the impact of considerable experience on legacy aeroplanes, compared with pilots having flown only advanced automation aeroplanes, should be quantified to understand training relevance. Thirdly, there is a latent need for a suitable aircrew-automation CRM protocol; hence the necessity to research possible approaches. Finally, owing to the importance of trust in automation, how to integrate trust into training should be researched, both to ensure that aircrews are aware of its impact on their decisions and to integrate this aspect into training design.



References

Adams, W. C. (2015). Conducting Semi-Structured Interviews. In Handbook of Practical

Program Evaluation (pp. 492–505). John Wiley & Sons, Ltd.

https://doi.org/10.1002/9781119171386.ch19

Ashleigh, M. J., & Stanton, N. A. (2001). Trust: Key Elements in Human Supervisory

Control Domains. Cognition, Technology & Work, 3(2), 92–100.

https://doi.org/10.1007/PL00011527

Bainbridge, L. (1983). Ironies of automation. Automatica, 19(6), 775–779.

https://doi.org/10.1016/0005-1098(83)90046-8

Balfe, N., Sharples, S., & Wilson, J. R. (2015). Impact of automation: Measurement of

performance, workload and behaviour in a complex control environment. Applied

Ergonomics, 47, 52–64. https://doi.org/10.1016/j.apergo.2014.08.002

Balfe, N., Sharples, S., & Wilson, J. R. (2018). Understanding Is Key: An Analysis of

Factors Pertaining to Trust in a Real-World Automation System. Human Factors,

60(4), 477–495. https://doi.org/10.1177/0018720818761256

Billings, C. E. (1991). Toward a Human-Centered Aircraft Automation Philosophy.

International Journal of Aviation Psychology, 1(4), 261.

https://doi.org/10.1207/s15327108ijap0104_1

Billings, C. E. (1996). Human-Centered Aviation Automation: Principles and Guidelines

(TM 110381). NASA Ames Research Center.

Billings, C. E. (1997). Aviation automation: The search for a human-centered approach.

Lawrence Erlbaum Associates Publishers.

Boehm-Davis, D. A., Curry, R. E., Wiener, E. L., & Leon Harrison, R. (1983). Human

factors of flight-deck automation: Report on a NASA-industry workshop.

Ergonomics, 26(10), 953–961. https://doi.org/10.1080/00140138308963424

Boy, G. A. (2020). Aerospace Human Systems Integration. In A Framework of Human

Systems Engineering: Applications and Case Studies (pp. 113–128). John Wiley &

Sons, Ltd. https://doi.org/10.1002/9781119698821.ch7



Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative

Research in Psychology, 3(2), 77–101. https://doi.org/10.1191/1478088706qp063oa

Braun, V., & Clarke, V. (2016). (Mis)conceptualising themes, thematic analysis, and other

problems with Fugard and Potts’ (2015) sample-size tool for thematic analysis.

International Journal of Social Research Methodology, 19(6), 739–743.

https://doi.org/10.1080/13645579.2016.1195588

Braun, V., & Clarke, V. (2019). Reflecting on reflexive thematic analysis. Qualitative

Research in Sport, Exercise and Health, 11(4), 589–597.

https://doi.org/10.1080/2159676X.2019.1628806

Braun, V., & Clarke, V. (2020). One size fits all? What counts as quality practice in

(reflexive) thematic analysis? Qualitative Research in Psychology, 0(0), 1–25.

https://doi.org/10.1080/14780887.2020.1769238

Braun, V., & Clarke, V. (2021a). Can I use TA? Should I use TA? Should I not use TA?

Comparing reflexive thematic analysis and other pattern-based qualitative analytic

approaches. Counselling and Psychotherapy Research, 21(1), 37–47.

https://doi.org/10.1002/capr.12360

Braun, V., & Clarke, V. (2021b). To saturate or not to saturate? Questioning data saturation

as a useful concept for thematic analysis and sample-size rationales. Qualitative

Research in Sport, Exercise and Health, 13(2), 201–216.


https://doi.org/10.1080/2159676X.2019.1704846

Bucholtz, M. (2000). The politics of transcription. Journal of Pragmatics, 32(10), 1439–

1465. https://doi.org/10.1016/S0378-2166(99)00094-6

Byrne, D. (2021). A worked example of Braun and Clarke’s approach to reflexive thematic

analysis. Quality & Quantity. https://doi.org/10.1007/s11135-021-01182-y

Campbell, S., Greenwood, M., Prior, S., Shearer, T., Walkem, K., Young, S., Bywaters, D.,

& Walker, K. (2020). Purposive sampling: Complex or simple? Research case

examples. Journal of Research in Nursing, 25(8), 652–661.

https://doi.org/10.1177/1744987120927206



Casner, S. M., Geven, R. W., & Williams, K. T. (2013). The Effectiveness of Airline Pilot

Training for Abnormal Events. Human Factors, 55(3), 477–485.

https://doi.org/10.1177/0018720812466893

Castelfranchi, C., & Falcone, R. (2010). Trust theory: A socio-cognitive and computational

model. J. Wiley.

Chancey, E. T., Bliss, J. P., Yamani, Y., & Handley, H. A. H. (2017). Trust and the

Compliance–Reliance Paradigm: The Effects of Risk, Error Bias, and Reliability on

Trust and Dependence. Human Factors, 59(3), 333–345.

https://doi.org/10.1177/0018720816682648

Chapanis, A. (1965). On the Allocation of Functions between Men and Machines.

Occupational Psychology, 39(1), 1–11.

Chialastri, A. (2012). Automation in Aviation. In Automation. IntechOpen.

https://doi.org/10.5772/49949

Cho, J. (2006). The mechanism of trust and distrust formation and their relational

outcomes. Journal of Retailing, 82(1), 25–35.

https://doi.org/10.1016/j.jretai.2005.11.002

Colquitt, J. A., Scott, B. A., & LePine, J. A. (2007). Trust, trustworthiness, and trust

propensity: A meta-analytic test of their unique relationships with risk taking and job

performance. Journal of Applied Psychology, 92(4), 909–927.


https://doi.org/10.1037/0021-9010.92.4.909

Constantinou, C. S., Georgiou, M., & Perdikogianni, M. (2017). A comparative method for

themes saturation (CoMeTS) in qualitative interviews. Qualitative Research, 17(5),

571–588. https://doi.org/10.1177/1468794116686650

de Winter, J. C. F., & Dodou, D. (2014). Why the Fitts list has persisted throughout the

history of function allocation. Cognition, Technology & Work, 16(1), 1–11.

http://dx.doi.org/10.1007/s10111-011-0188-1

Dekker, S. W. A., & Woods, D. D. (2002). MABA-MABA or Abracadabra? Progress on

Human–Automation Co-ordination. Cognition, Technology & Work, 4(4), 240–244.

http://dx.doi.org/10.1007/s101110200022



Deutsch, M. (1958). Trust and suspicion. Journal of Conflict Resolution, 2(4), 265–279.

https://doi.org/10.1177/002200275800200401

Dimoka, A. (2010). What Does the Brain Tell Us About Trust and Distrust? Evidence from a

Functional Neuroimaging Study. MIS Quarterly, 34(2), 373–396.

https://doi.org/10.2307/20721433

Dorneich, M. C., Dudley, R., Rogers, W., Letsu-Dake, E., Whitlow, S. D., Dillard, M., &

Nelson, E. (2015). Evaluation of information quality and automation visibility in

information automation on the flight deck. 2015-January, 284–288.

https://doi.org/10.1177/1541931215591058

Downer, J. (2010). Trust and technology: The social foundations of aviation regulation.

The British Journal of Sociology, 61(1), 83–106. https://doi.org/10.1111/j.1468-

4446.2009.01303.x

Earle, T. C. (2010). Trust in Risk Management: A Model-Based Review of Empirical

Research. Risk Analysis, 30(4), 541–574. https://doi.org/10.1111/j.1539-

6924.2010.01398.x

Elias, B. (2019). Cockpit Automation, Flight Systems Complexity, and Aircraft Certification:

Background and Issues for Congress (No. R45939; pp. 1–30). Congressional

Research Service.

Englehardt, E., Werhane, P. H., & Newton, L. H. (2021). Leadership, Engineering and
Ethical Clashes at Boeing. Science and Engineering Ethics, 27(1), 1–17.

https://doi.org/10.1007/s11948-021-00285-x

Evjemo, T. E., & Johnsen, S. O. (2019). Lessons Learned from Increased Automation in

Aviation: The Paradox Related to the High Degree of Safety and Implications for

Future Research. Proceedings of the 29th European Safety and Reliability

Conference (ESREL), 3076–3083.

Ferris, T., Sarter, N., & Wickens, C. D. (2010). Cockpit Automation: Still Struggling to Catch

Up…. In E. Salas & D. Maurino (Eds.), Human Factors in Aviation (2nd ed., pp.

479–503). Academic Press. https://doi.org/10.1016/B978-0-12-374518-7.00015-8

Fitts, P. M., Viteles, M. S., Barr, N. L., Brimhall, D. R., Finch, G., Gardner, E., Grether, W.

F., Kellum, W. E., & Stevens, S. S. (1951). Human Engineering for an Effective Air-



Navigation and Traffic-Control System. Ohio State University Research Foundation.

https://apps.dtic.mil/sti/citations/ADB815893

Fitzgerald, F. S. (1936). The Crack-Up. Edmund Wilson.

Fuld, R. B. (1993). The Fiction of Function Allocation. Ergonomics in Design, 1(1), 20–24.

https://doi.org/10.1177/106480469300100107

Fuld, R. B. (2000). The fiction of function allocation, revisited. International Journal of

Human-Computer Studies, 52(2), 217–233. https://doi.org/10.1006/ijhc.1999.0286

Funk, K., Lyall, B., Wilson, J., Vint, R., Niemczyk, M., Suroteguh, C., & Owen, G. (1999).

Flight Deck Automation issues. The International Journal of Aviation Psychology,

9(2), 109–123. https://doi.org/10.1207/s15327108ijap0902_2

Gareth, T., Hayfield, N., Clark, V., & Braun, V. (2017). Thematic Analysis. In C. Willig & W.

S. Roger (Eds.), The Sage handbook of qualitative research in psychology, 2e (2nd

edition). SAGE Inc.

Green, J., Franquiz, M., & Dixon, C. (1997). The Myth of the Objective Transcript:

Transcribing as a Situated Act. TESOL Quarterly, 31(1), 172–176.

https://doi.org/10.2307/3587984

Grimmelikhuijsen, S., & Knies, E. (2017). Validating a scale for citizen trust in government

organizations. International Review of Administrative Sciences, 83(3), 583–601.

https://doi.org/10.1177/0020852315585950
Guest, G., Namey, E., & Chen, M. (2020). A simple method to assess and report thematic

saturation in qualitative research. PLOS ONE, 15(5).

https://doi.org/10.1371/journal.pone.0232076

Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y. C., de Visser, E. J., &

Parasuraman, R. (2011). A Meta-Analysis of Factors Affecting Trust in Human-

Robot Interaction. Human Factors, 53(5), 517–527.

https://doi.org/10.1177/0018720811417254

Hancock, P. A., & Scallen, S. F. (1996). The Future of Function Allocation. Ergonomics in

Design, 4(4), 24–29. https://doi.org/10.1177/106480469600400406

Harris, D. (2011). Human Performance on the Flight Deck. Ashgate.



Ho, N., Sadler, G. G., Hoffmann, L. C., Zemlicka, K., Lyons, J., Fergueson, W.,

Richardson, C., Cacanindin, A., Cals, S., & Wilkins, M. (2017). A Longitudinal Field

Study of Auto-GCAS Acceptance and Trust: First-Year Results and Implications.

Journal of Cognitive Engineering and Decision Making, 11(3), 239–251.

Ho, N. T., Sadler, G. G., Hoffmann, L. C., Lyons, J. B., & Johnson, W. W. (2017). Trust of a

Military Automated System in an Operational Context. Military Psychology, 29(6),

524–541. https://doi.org/10.1037/mil0000189

Hoff, K. A., & Bashir, M. (2015). Trust in Automation: Integrating Empirical Evidence on

Factors That Influence Trust. Human Factors, 57(3), 407–434.

https://doi.org/10.1177/0018720814547570

Hoffman, R. R. (2017). A Taxonomy of Emergent Trusting in the Human-Machine

Relationship. In P. J. Smith, R. R. Hoffman, & D. D. Woods (Eds.), Cognitive

systems engineering: The future for a changing world. CRC Press, Taylor & Francis

Group.

Hoffman, R. R., Johnson, M., Bradshaw, J. M., & Underbrink, A. (2013). Trust in

Automation. IEEE Intelligent Systems, 28(1), 84–88.

https://doi.org/10.1109/MIS.2013.24

Jenner, B. M., & Myers, K. C. (2019). Intimacy, rapport, and exceptional disclosure: A

comparison of in-person and mediated interview contexts. International Journal of


Social Research Methodology, 22(2), 165–177.

https://doi.org/10.1080/13645579.2018.1512694

Johnson, D. R., Scheitle, C. P., & Ecklund, E. H. (2019). Beyond the In-Person Interview?

How Interview Quality Varies Across In-person, Telephone, and Skype Interviews.

Social Science Computer Review. https://doi.org/10.1177/0894439319893612

Jordan, N. (1963). Allocation of functions between man and machines in automated

systems. Journal of Applied Psychology, 47(3), 161–165.

https://doi.org/10.1037/h0043729

Kharoufah, H., Murray, J., Baxter, G., & Wild, G. (2018). A review of human factors

causations in commercial air transport accidents and incidents: From to 2000–2016.



Progress in Aerospace Sciences, 99, 1–13.

https://doi.org/10.1016/j.paerosci.2018.03.002

Kim, S.-E. (2005). The Role of Trust in the Modern Administrative State: An Integrative

Model. Administration & Society, 37(5), 611–635.

https://doi.org/10.1177/0095399705278596

Koo, J., Kwac, J., Ju, W., Steinert, M., Leifer, L., & Nass, C. (2015). Why did my car just do

that? Explaining semi-autonomous driving actions to improve driver understanding,

trust, and performance. International Journal on Interactive Design and

Manufacturing, 9(4), 269–275. https://doi.org/10.1007/s12008-014-0227-2

Krouwel, M., Jolly, K., & Greenfield, S. (2019). Comparing Skype (video calling) and in-

person qualitative interview modes in a study of people with irritable bowel

syndrome – an exploratory comparative analysis. BMC Medical Research

Methodology, 19, 1–9. http://dx.doi.org/10.1186/s12874-019-0867-9

Kwak, Y.-P., Choi, Y.-C., & Choi, J. (2018). Analysis between Aircraft Cockpit Automation

and Human Error Related Accident Cases. International Journal of Control and

Automation, 11(3), 179–192. https://doi.org/10.14257/ijca.2018.11.3.16

Lacher, A., Grabowski, R., & Cook, S. (2014, March 22). Autonomy, Trust, and

Transportation. 2014 AAAI Spring Symposium Series. 2014 AAAI Spring

Symposium Series.
https://www.aaai.org/ocs/index.php/SSS/SSS14/paper/view/7701

Lee, J. D. (2008). Review of a Pivotal Human Factors Article: “Humans and Automation:

Use, Misuse, Disuse, Abuse”. Human Factors, 50(3), 404–410.

https://doi.org/10.1518/001872008X288547

Lee, J. D., & Moray, N. (1994). Trust, self-confidence, and operators’ adaptation to

automation. International Journal of Human-Computer Studies, 40(1), 153–184.

https://doi.org/10.1006/ijhc.1994.1007

Lee, J. D., & See, K. A. (2004). Trust in Automation: Designing for Appropriate Reliance.

Human Factors, 46(1), 50–80.



Lee, J., & Moray, N. (1992). Trust, control strategies and allocation of function in human-

machine systems. Ergonomics, 35(10), 1243–1270.

https://doi.org/10.1080/00140139208967392

Lee, J., Yamani, Y., Long, S. K., Unverricht, J., & Itoh, M. (2021). Revisiting human-

machine trust: A replication study of Muir and Moray (1996) using a simulated

pasteurizer plant task. Ergonomics, 1–14.

https://doi.org/10.1080/00140139.2021.1909752

Lempereur, I., & Lauri, M. A. (2006). The Psychological Effects of Constant Evaluation on

Air line Pilots: An Exploratory Study. The International Journal of Aviation

Psychology, 16(1), 113–133. https://doi.org/10.1207/s15327108ijap1601_6

Lewandowsky, S., Mundy, M., & Tan, G. P. A. (2000). The dynamics of trust: Comparing

humans to automation. Journal of Experimental Psychology: Applied, 6(2), 104–

123. https://doi.org/10.1037/1076-898X.6.2.104

Lewicki, R. J., McAllister, D. J., & Bies, R. J. (1998). Trust and Distrust: New Relationships

and Realities. The Academy of Management Review, 23(3), 438–458.

https://doi.org/10.2307/259288

Luhmann, N. (1988). Familiarity, Confidence, Trust: Problems and Alternatives. In D.

Gambetta (Ed.), Trust: Making and breaking cooperative relations. B. Blackwell.

Mårtenson, L. (1995). The Aircraft Crash at Gottröra: Experiences of the Cockpit Crew.
International Journal of Aviation Psychology, 5(3), 305.

https://doi.org/10.1207/s15327108ijap0503_5

Mateusz, M., & Stanislaw, D. (2020). The Assessment of Pilot Compliance with TCAS

RAs, TCAS Mode Selection and Serviceability Using ATC Radar Data. Eurocontrol.

Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An Integrative Model of

Organizational Trust. The Academy of Management Review, 20(3), 709–734.

https://doi.org/10.2307/258792

Mayo, R. (2015). Cognition is a matter of trust: Distrust tunes cognitive processes.

European Review of Social Psychology, 26(1), 283–327.

https://doi.org/10.1080/10463283.2015.1117249



McLucas, J. L., Drinkwater, I., & Leaf, H. W. (1981). Report of the President’s Task Force

on Aircraft Crew Complement.

McNeese, N. J., Demir, M., Chiou, E. K., & Cooke, N. J. (2021). Trust and Team

Performance in Human–Autonomy Teaming. International Journal of Electronic

Commerce, 25(1), 51–72. https://doi.org/10.1080/10864415.2021.1846854

Morse, J. M. (1995). The Significance of Saturation. Qualitative Health Research, 5(2),

147–149. https://doi.org/10.1177/104973239500500201

Mouloua, M., Hancock, P., Jones, L., & Vincenzi, D. (2016). Automation in Aviation

Systems: Issues and Considerations. In J. A. Wise, V. D. Hopkin, & D. J. Garland

(Eds.), Handbook of Aviation Human Factors (2nd ed.).

Muir, B. M. (1987). Trust between humans and machines, and the design of decision aids.

International Journal of Man-Machine Studies, 27(5), 527–539.

https://doi.org/10.1016/S0020-7373(87)80013-5

Muir, B. M. (1994). Trust in automation: Part I. Theoretical issues in the study of trust and

human intervention in automated systems. Ergonomics, 37(11), 1905–1922.

https://doi.org/10.1080/00140139408964957

Muir, B. M., & Moray, N. (1996). Trust in automation. Part II. Experimental studies of trust

and human intervention in a process control simulation. Ergonomics, 39(3), 429–

460. https://doi.org/10.1080/00140139608964474
Niles, M. C. (2002). On the Hijacking of Agencies (and Airplanes): The Federal Aviation

Administration, Agency Capture, and Airline Security. American University Journal

of Gender, Social Policy & the Law, 10(2), 381–442.

Norman, D. A. (1990). The Problem of Automation: Inappropriate Feedback and

Interaction, Not Over-Automation. Philosophical Transactions of the Royal Society

of London.

Noy, I. Y., Shinar, D., & Horrey, W. J. (2018). Automated driving: Safety blind spots. Safety

Science, 102, 68–78. https://doi.org/10.1016/j.ssci.2017.07.018

Oliver, D. G., Serovich, J. M., & Mason, T. L. (2005). Constraints and Opportunities with

Interview Transcription: Towards Reflection in Qualitative Research. Social Forces,

84(2), 1273–1289. https://doi.org/10.1353/sof.2006.0023



Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse,

abuse. Human Factors; Santa Monica, 39(2), 230–253.

Parasuraman, R., Sheridan, T. B., & Wickens, C. D. (2000). A model for types and levels of

human interaction with automation. IEEE Transactions on Systems, Man, and

Cybernetics - Part A: Systems and Humans, 30(3), 286–297.

https://doi.org/10.1109/3468.844354

Pearson, C. J., Welk, A. K., & Mayhorn, C. B. (2016). In Automation We Trust? Identifying

Varying Levels of Trust in Human and Automated Information Sources. Proceedings

of the Human Factors and Ergonomics Society 2016 Annual Meeting, 60, 201–205.

https://doi.org/10.1177/1541931213601045

Perry, P. (2011). Concept Analysis: Confidence/Self-confidence. Nursing Forum, 46(4),

218–230. https://doi.org/10.1111/j.1744-6198.2011.00230.x

Pritchett, A. R., Kim, S. Y., & Feigh, K. M. (2014). Measuring Human-Automation Function

Allocation. Journal of Cognitive Engineering and Decision Making, 8(1), 52–77.

https://doi.org/10.1177/1555343413490166

Riley, V. (1996). Operator Reliance on Automation: Theory and Data. In R. Parasuraman &

M. Mouloua (Eds.), Automation and human performance: Theory and applications

(pp. 19–35). Lawrence Erlbaum Associates.

Riley, V. (1995). What avionics engineers should know about pilots and automation.
Proceedings of 14th Digital Avionics Systems Conference, 252–257.

https://doi.org/10.1109/DASC.1995.482836

Robinson, O. C. (2014). Sampling in Interview-Based Qualitative Research: A Theoretical

and Practical Guide. Qualitative Research in Psychology, 11(1), 25–41.

https://doi.org/10.1080/14780887.2013.801543

Rousseau, D. M., Sitkin, S. B., Burt, R. S., & Camerer, C. (1998). Not so Different After All:

A Cross-Discipline View of Trust. Academy of Management Review, 23(3), 393–

404. https://doi.org/10.5465/AMR.1998.926617

Saldaña, J. (2016). The coding manual for qualitative researchers (3rd ed.). SAGE.

Sarter, N. B., Mumaw, R. J., & Wickens, C. D. (2007). Pilots’ Monitoring Strategies and

Performance on Automated Flight Decks: An Empirical Study Combining Behavioral



and Eye-Tracking Data. Human Factors, 49(3), 347–357.

https://doi.org/10.1518/001872007X196685

Schaefer, K. E., Chen, J. Y. C., Szalma, J. L., & Hancock, P. A. (2016). A Meta-Analysis of

Factors Influencing the Development of Trust in Automation: Implications for

Understanding Autonomy in Future Systems. Human Factors, 58(3), 377–400.

Sheridan, T. B. (1988). Trustworthiness of Command and Control Systems. IFAC

Proceedings Volumes, 21(5), 427–431. https://doi.org/10.1016/S1474-

6670(17)53945-2

Sheridan, T. B. (1992). Telerobotics, automation, and human supervisory control. MIT

Press.

Sheridan, T. B. (2018). Comments on “Issues in Human–Automation Interaction Modeling:

Presumptive Aspects of Frameworks of Types and Levels of Automation” by David

B. Kaber. Journal of Cognitive Engineering and Decision Making, 12(1), 25–28.

https://doi.org/10.1177/1555343417724964

Sheridan, T. B., & Verplank, W. L. (1978). Human and Computer Control of Undersea

Teleoperators. Massachusetts Institute of Technology.

https://apps.dtic.mil/sti/citations/ADA057655

Sheridan, T. R. (2012). Human Supervisory Control. In G. Salvendy (Ed.), Handbook of

human factors and ergonomics (4th ed.). Wiley.


Six, F., & Verhoest, K. (2017). Trust in regulatory regimes: Scoping the field. In F. Six & K.

Verhoest (Eds.), Trust in regulatory regimes. Edward Elgar Publishing.

Spielman, Z., & Le Blanc, K. (2021). Boeing 737 MAX: Expectation of Human Capability in

Highly Automated Systems. In M. Zallio (Ed.), Advances in Human Factors in

Robots, Drones and Unmanned Systems (pp. 64–70). Springer International

Publishing. https://doi.org/10.1007/978-3-030-51758-8_9

Strauch, B. (2018). Ironies of Automation: Still Unresolved After All These Years. IEEE

Transactions on Human-Machine Systems, 48(5), 419–433.

https://doi.org/10.1109/THMS.2017.2732506

The House Committee on Transportation & Infrastructure. (2020). The Design,

Development & Certification of the Boeing 737 Max [Final Committee Report].



Trainor, L. R., & Bundon, A. (2020). Developing the craft: Reflexive accounts of doing

reflexive thematic analysis. Qualitative Research in Sport, Exercise and Health,

0(0), 1–22. https://doi.org/10.1080/2159676X.2020.1840423

Tušl, M., Rainieri, G., Fraboni, F., De Angelis, M., Depolo, M., Pietrantoni, L., & Pingitore,

A. (2020). Helicopter Pilots’ Tasks, Subjective Workload, and the Role of External

Visual Cues During Shipboard Landing. Journal of Cognitive Engineering and

Decision Making, 14(3), 242–257. https://doi.org/10.1177/1555343420948720

Vaughan, D. (1999). The Dark Side of Organizations: Mistake, Misconduct, and Disaster.

Annual Review of Sociology, 25, 271–305.

Verberne, F. M. F., Ham, J., & Midden, C. J. H. (2012). Trust in Smart Systems: Sharing

Driving Goals and Giving Information to Increase Trustworthiness and Acceptability

of Smart Systems in Cars. Human Factors, 54(5), 799–810.

https://doi.org/10.1177/0018720812443825

Weyer, J. (2016). Confidence in hybrid collaboration. An empirical investigation of pilots’

attitudes towards advanced automated aircraft. Safety Science, 89, 167–179.

https://doi.org/10.1016/j.ssci.2016.05.008

Wiener, E. L. (1988). Cockpit Automation. In E. L. Wiener & D. C. Nagel (Eds.), Human

Factors in Aviation (pp. 433–461). Academic Press. https://doi.org/10.1016/B978-0-

08-057090-7.50019-9
Wiener, E. L. (1989). Human Factors of Advanced Technology (‘Glass Cockpit’) Transport

Aircraft (No. 177528). NASA Ames Research Center.

Wiener, E. L., & Curry, R. E. (1980). Flight-deck automation: Promises and problems (TM

81206). NASA Ames Research Center. https://doi.org/10.1080/00140138008924809

Wood, D. (1988). The Effect of Automation on the Human’s Role: Experience from Non-

Aviation Industries. In S. D. Norman & H. W. Orlady (Eds.), Flight Deck Automation:

Promises and Realities. https://core.ac.uk/reader/42825630

Woods, D. D. (1985). Cognitive Technologies: The Design of Joint Human-Machine

Cognitive Systems. AI Magazine, 6(4), 86–86.

https://doi.org/10.1609/aimag.v6i4.511



Appendices



Certificate of Ethical Approval



Applicant: Sandro Guidetti
Project Title: Dispositions, development, and influence of trust on aircrew-
automation collaboration: an exploratory study of professional
pilots perspective.

This is to certify that the above named applicant has completed the Coventry University Ethical
Approval process and their project has been confirmed and approved as Medium Risk

Date of approval: 03 Apr 2021


Project Reference Number: P120256




Relevant Aircraft Types Operated by the Participants

4th generation airliners
  Airbus: A320, A330, A340
  Boeing: B747-8, B777, B787, MD-11 *
  Embraer: E190E2

4th generation business jets
  Dassault: Falcon 7X
  Gulfstream: G650

4th/4.5th generation fighters
  Boeing: F/A-18 *, F-15 *
  Dassault: Mirage 2000
  Lockheed Martin: F-16 **
  Panavia: Tornado

* originally McDonnell Douglas
** originally General Dynamics



Participant information sheet

Dispositions, development, and influence of trust on aircrew-automation
collaboration: an exploratory study of professional pilots perspective.

PARTICIPANT INFORMATION SHEET


You are being invited to take part in research on the influence of trust on aeroplane automation
usage. Sandro Guidetti, an MSc student at Coventry University, is leading this research. Before you
decide to take part, it is important you understand why the research is being conducted and
what it will involve. Please take time to read the following information carefully.

What is the purpose of the study?


The purpose of the study is to understand how trust in automation is formed by aircrews, and
understand the extent to which trust impacts aircrew-automation collaboration. It further aims to
establish a tentative model explaining these interrelations and predictors in the specific context
of aeroplane flight operations.

Why have I been chosen to take part?


You are invited to participate in this study because of your substantial operational experience as
a pilot on 4th generation airliners, comparable business jets, or 4th/4.5th generation
fighters. Accordingly, you will bring expert insight into human-advanced automation
collaboration.

What are the benefits of taking part?


By sharing your experiences with us, you will be helping Sandro Guidetti and Coventry
University to better understand the relationship between trust and aeroplane automation usage.

Are there any risks associated with taking part?


This study has been reviewed and approved through Coventry University’s formal research
ethics procedure. There are no significant risks associated with participation.

Do I have to take part?


No – it is entirely up to you. If you do decide to take part, please keep this Information Sheet
and complete the Informed Consent Form to show that you understand your rights in relation to
the research, and that you are happy to participate. Please note down your participant number
(which is on the Consent Form) and provide this to the lead researcher if you seek to withdraw
from the study at a later date. You are free to withdraw your information from the project data
set until the data are fully anonymised in our records, that is two weeks after the interview took
place. You should note that your data may be used in the production of formal research outputs
(e.g. journal articles, conference papers, theses and reports) prior to this date and so you are
advised to contact the university at the earliest opportunity should you wish to withdraw from the
study. To withdraw, please contact the lead researcher (contact details are provided below).
Please also contact the Research Support Office (ethics.eec@coventry.ac.uk) so that your
request can be dealt with promptly in the event of the lead researcher’s absence. You do not
need to give a reason. A decision to withdraw, or not to take part, will not affect you in any way.



Participant information sheet, cont’d

What will happen if I decide to take part?


You will be asked a number of questions regarding your experiences, perspective or opinion
related to trust, advanced automation operation, and human-automation collaboration. The
interview will take place via video link at a time that is convenient to you. Ideally, we would like
to audio record your responses (and will require your consent for this), so the environment
should be fairly quiet. The interview should take around one hour to complete.

Data Protection and Confidentiality


Your data will be processed in accordance with the General Data Protection Regulation 2016
(GDPR), which is the EU legislation having direct effect in the United Kingdom, and the UK Data
Protection Act 2018. All information collected about you will be kept strictly confidential. Unless
they are fully anonymised in our records, your data will be referred to by a unique participant
number rather than by name. If you consent to being audio recorded, all recordings will be
destroyed once they have been transcribed. Your data will only be viewed by the
researcher/research team. All personal or confidential data will be stored on a password-
protected computer file stored in Coventry University OneDrive. All paper records will be
scanned and stored as confidential electronic data; the physical document will be immediately
shredded (destroyed) after being successfully scanned and uploaded to OneDrive. Your
consent information will be kept separately from your responses in order to minimise risk in the
event of a data breach. The lead researcher will take responsibility for data destruction and all
collected data will be destroyed on or before the 31st of December 2021.

Data Protection Rights


Coventry University is a Data Controller for the information you provide. You have the right to
access information held about you. Your right of access can be exercised in accordance with
the General Data Protection Regulation and the Data Protection Act 2018. You also have other
rights including rights of correction, erasure, objection, and data portability. For more details,
including the right to lodge a complaint with the Information Commissioner’s Office, please visit
www.ico.org.uk. Questions, comments and requests about your personal data can also be sent
to the University Data Protection Officer - enquiry.igu@coventry.ac.uk

What will happen with the results of this study?


The results of this study may be summarised in published articles, reports and presentations.
Quotes or key findings will always be made anonymous in any formal outputs unless we have
your prior and explicit written permission to attribute them to you by name.

Making a Complaint
If you are unhappy with any aspect of this research, please first contact the lead researcher,
Sandro Guidetti, mobile: +41XX XXX XXXX, guidettis@uni.coventry.ac.uk. If you still have
concerns and wish to make a formal complaint, please write to:

Dr. John Huddlestone


Associate Professor in Human Factors
Coventry University
Coventry CV1 5FB
Email: john.huddlestone@coventry.ac.uk

In your letter please provide information about the research project, specify the name of the
researcher and detail the nature of your complaint.



Interview Guide

Semi-structured Interview Guide (V4.8)

Introduction
Thank you for agreeing to participate in this research. I would like to interview you to
understand better the relationship between trust and modern automation usage by pilots;
hopefully, this would help improve the integration of human factors in automation design,
training and use. There are no right or wrong answers; my interest is in your unique
perspective and experience.
Your participation in this research is voluntary; you may decline to answer any question or
stop the interview at any time and for any reason. The interview should take about one hour,
depending on how much information you would like to share. I would like to video and audio
record the interview with your permission because I don’t want to miss any of your
comments. The recording will be kept confidential; it will be subsequently transcribed in an
anonymous form, which means that any information included in my research report will not
identify you as the respondent. Do you have any questions about what I have just explained?
May I start recording the interview?

Establishing Rapport
Before we begin, it would be nice if you could tell me a little bit about your relationship with
aviation. What brought you into a cockpit?
In the following discussion, automation is not just about autopilot and auto-thrust, but it refers
to the whole range of automatic systems on board.

Questions
1) Aircraft cockpit and automation have evolved tremendously over the past 20 years,
how do you feel about this evolution?
◦ impact on humans (pilots)

2) Considering your overall experience, are there automation systems, or sub-systems,


you were more comfortable with?
◦ sources of confidence or doubts
◦ influence of automation philosophy

3) In your interactions with automation, are there contexts that could influence how you
use it or manage it? (e.g. time, situation, experience, procedures, skills, self-
confidence, etc)



Interview Guide, cont’d

4) Could you share an experience where automation did not behave as expected?
◦ feelings or reaction on the moment
◦ medium term implications or changes

5) How do you feel about automatic systems that do not keep the pilot in the loop about
what is happening in the background? (e.g. reduced redundancy, alarm inhibition,
abnormals management, etc)
◦ view on advantages / issues
◦ aspects influencing acceptance or refusal

6) What is your opinion regarding autonomous safety systems that could override the
crew, e.g. Auto-GCAS, automatic EDM, automatic RA manoeuvre?
◦ view on advantages / issues
◦ aspects influencing acceptance or refusal

7) It is likely that automation will evolve towards more intelligent or autonomous systems
(e.g. to support decision making) that might adapt to changing situations, making them
unpredictable. How do you feel about this likely evolution?
◦ what would you need to work confidently with such systems
◦ authority/responsibility paradigm

8) If you could design the dream aircraft automation for a perfect human-automation
collaboration, what would be the key elements?

9) In your opinion, how does trust influence automation usage by pilots?

Closing Part
• These are all the questions I have. Is there anything else you would like to share, or
that you would like to ask me about this study?
• How did you feel about the interview? Was it conducted adequately? Did you gain
something out of it?

I would like to thank you very much for your contribution and the time offered.



Transcription Convention

Each entry lists what appears in the recording, how it is transcribed, and an example:

• what is said cannot be heard, cannot be understood, or cannot be understood due to cross-talk: information in parentheses, e.g. (inaudible), (unintelligible), (cross-talk)

• doubt about what the person is saying: doubtful word in parentheses with a question mark, e.g. (course?)

• probable misspelling: word heard followed by the likely word with a question mark, in parentheses, e.g. life/(lie?)

• common aviation acronyms: acronym in capital letters, e.g. EFB, CAS

• interviewee name: random pseudonym, e.g. Mike, Hector

• information to be anonymised: actual data replaced by the data type, in brackets, e.g. [age], [Co Name], [Place]

• reported thoughts or speech of self or others: text in quotation marks, e.g. he said: "I am lost"

• what is being said is not meant in its literal sense: information in brackets after what has been said, e.g. I was completely lost [figure of speech]

• tone (only if changing the sentence meaning): information in brackets, e.g. (sarcastic), (ironical)

• non-verbal sounds (only if meaningful for analysis): information in brackets, e.g. [laughed], [sighed]

• speech emphasised (only if meaningful for analysis): word underlined, e.g. it was scary

• long silence or pause, i.e. > 3 sec (only if meaningful for analysis): three points in parentheses, i.e. (...)

• double words, "you know", hesitation, interjections, etc.: elements cleaned up

• interviewee short acknowledgement or negation during interviewer talk: in parentheses where it happened, e.g. (Mike: "yeap")

• words not expressed in English: translated into English in parentheses, prefixed with tr:, e.g. (tr: vehicle)



Intermediate Mind-map supporting reflective analysis



List of Codes per Theme

Trust in the Aviation System
737 Max case
automation design driven by technology and business
automation reliability, engineering errors, weak design and flawed assumptions
certification process
crew training and certification leeway
erosion of trust in authorities and OEMs
management by strict compliance
OEM dictating SOP and practices
personal data protection concerns
regulation and oversight

Trust in Automation
importance of trust in automation
TiA at ease with the system philosophy
TiA certification or validation process
TiA complacency
TiA conservatism-progressivism
TiA data entry integrity
TiA fully automatic functions
TiA live or die
TiA myths and hearsay
TiA sense of control
TiA system design, capability, reliability, and redundancy
TiA training, system knowledge, understanding, and usage proficiency

Usage of Automation
AU cost-benefit
AU crew interaction
AU individual mood and habits
AU induced by technology
AU SOPs, practices, and culture
AU system understanding and usage proficiency
AU trust in automation

Healthy Scepticism
assertive approach
balancing trust and scepticism
exposed to cross-checking needs and malfunctions existence
negative impact of dogmatic SOPs

Self-Confidence
confidence in basic flying skills
confidence to take-over
flight control system understanding

Aircrew-Automation Interactions
automation monitoring-action protocol
automation operational weaknesses
automation use proficiency
automation-human interaction and understanding
feeling or reaction after automation issue
flight envelope awareness
system feedback

Sense of Control
autonomous system and override authority
control-command and responsibility
determination of the information needed by crews
from data to information needs context
haptic control feedback
mandated automation use and liability
pilot purpose, duty and authority
systems management
