Automation Interactions: An Exploratory Study of Professional Pilots' Perspective
Sandro Guidetti
Copyright Acknowledgement
I acknowledge that the copyright of this research project, the submitted assessments and
any ‘product’ developed as a result of this research project belongs to Coventry University.
To you, the anonymous participants: thank you for the time you offered unconditionally, for sharing your knowledge, experience, and perspective, and for entrusting me with the information you shared. Thank you for your professional insight, kindness, and the light moments. It has been a fantastic journey.
Aircraft automation complexity and usage have drastically increased over the past 40
years. While bringing clear safety improvements, advanced automation created new human
factors challenges, and modern accident causal factors emerged. Studies have shown the
significant influence of trust on adequate automation usage. The study aimed to gain a
better understanding of the importance of trust in automation in aviation, potentially to
inform future practice, training, and design. Semi-structured interviews of eleven highly
experienced professional pilots were analysed through a reflexive thematic analysis to
identify the interconnections and influence of trust on aircrew-automation interactions. The
analysis generated seven interconnected themes related to trust. Trust in automation was
found to be part of a larger trust construct in manufacturers and certification authorities.
Nonetheless, trust in automation was found to influence human-automation interaction, as
expected. However, the study also suggested that trust in automation probably does not flow linearly into automation usage, which would then shape aircrew-automation interactions. Instead,
the aircrew-automation interaction quality likely has a direct and significant influence on
trust in automation. Lack of automation feedback, limited system information, and poor human-automation interactions raised most concerns, mainly due to their impact on the sense of control. Crews are made to react to technology instead of collaborating with a
technology designed around the human to achieve a common goal. Additionally, the study showed a latent need for a suitable aircrew-automation Crew Resource Management (CRM) protocol.
Acknowledgement
Abstract
Table of Contents
1. Introduction
1.1 Context
1.2 Aim and Objectives
1.3 Approach
1.4 Researcher Aviation Background
2. Theoretical Background
2.2 Trust
2.2.1 Trust and Its Function
2.2.2 Trust in Automation
2.2.3 Trust in Organisation and Authorities
3.1 Methodology
3.3 Participants
3.4 Equipment
3.7 Analysis
3.8 Data Saturation
4. Results
5. Discussion
References
Appendices
Certificate of Ethical Approval A-02
Relevant Aircraft Types Operated by the Participants A-03
Participant Information Sheet A-04
Interview Guide A-06
Transcription Convention A-08
Intermediate Mind Map A-09
List of Codes per Themes A-10
1.1 Context
In aviation, automation complexity and usage have drastically increased over the past 40
years. With the introduction of a new aeroplane generation, such as the A320 or B777,
automation became ubiquitous and its functions less transparent. Consequently, pilots’ roles
evolved from aircraft controller to system manager (Boy, 2020, pp. 113–118). While
advanced automation brought clear safety improvements to aviation, the evolution created
new human factors challenges (Billings, 1997, pp. 81–117; Funk et al., 1999), and modern
accident causal factors emerged (Kwak et al., 2018).
Schaefer et al. (2016) discussed the importance of trust to foster effective human-
automation collaboration, as automation leans towards more autonomy. Parasuraman and
Riley (1997) showed that trust in automation is a balancing act, making clear that an inappropriate level of trust in automation likely leads to automation disuse or misuse.
Human errors are analysed during accident investigations, Threat and Error Management
(TEM) is used in operation, and strict application of procedures is reinforced during
training. However, deep consideration regarding human-machine cooperation appears
absent. Cognitive processes impacting system usage daily, such as trust in automation, are barely discussed and, in practice, not considered.
1.2 Aim and Objectives
The research aimed to comprehend how trust in automation is formed by aircrews and
understand the extent to which trust impacts aircrew-automation interactions. It further
intended to establish a tentative model to present trust elements interrelations and effects,
in the specific context of aeroplane flight operations.
The purpose was to identify the aspects of automation trust that should be considered, possibly to improve the design of advanced automation, the related operational procedures, and the training approach.
The specific objectives were, first, to establish the influence of trust on aircrew-automation collaboration effectiveness; second, to explore the factors influencing aircrew propensity to trust automation; and third, to discuss possible ways to improve trust in automation. The expected benefit was to lay a foundation on which enhanced training or operational practices could be built.
1.3 Approach
The objective was to gain an expert view on the influence of trust on automation usage in
modern aeroplanes with advanced automation, with the longitudinal perspective of pilots
having a background in legacy aeroplanes. Accordingly, a non-probabilistic purposive sample of professional pilots with significant operational experience with advanced automation, but who had started their careers on classic aeroplanes, was interviewed and the data analysed thematically.
1.4 Researcher Aviation Background
The researcher has twenty years of experience as a professional pilot with over 7’000
hours of flight experience, mostly on CS23 certified turboprops and business jets. Besides
substantial experience as an instructor and examiner, he was also involved in flight testing,
mainly cockpit upgrades. While advanced avionics and automation are commonly fitted in
small business jets, flight control systems are typically mechanical. Hence, while the researcher could readily relate to fly-by-wire (FBW) flight controls that mimic mechanical systems, this was not the case for control systems without displacement or force feedback.
2.1.1 Definition
Different factors were identified behind the drive to automate (Harris, 2011, pp. 223–224; Wiener, 1988, p. 444; Wiener & Curry, 1980, pp. 3–4); availability of technology, safety, economics, and workload reduction are illustrative. Digital technology opened incredible
possibilities, permitting a giant leap in aircraft automation. However, technology might
sometimes be treated as an aim in itself, leading to a tendency to automate as much as
feasible instead of considering what makes sense. While modern automation benefits
materialised quickly, the drawbacks were not adequately considered (Boehm-Davis et al.,
1983, p. 956), and not perceived until much later (Wiener, 1988, p. 444); many remained
latent today (Boy, 2020, pp. 115–116). Parasuraman and Riley (1997, pp. 286–287) were
adamant that “automation does not simply supplant human activity but rather changes it,
often in ways unintended and unanticipated by the designers of automation”, creating new
demands and challenges for the human (Chialastri, 2012, pp. 95–96), when not leading
outright to accidents (e.g. Mårtenson, 1995, pp. 311, 324; Spielman & Le Blanc, 2021, p.
69). Sheridan (2012, p. 1013) opined that humans will keep playing technology catch-
up for the near future.
Reducing pilot workload was seen as another automation promise, which enabled
certifying large aeroplanes with two-man crews. While the different stakeholders involved
all claimed to pursue flight safety, obviously, they also had different competing financial
interests in the background (McLucas et al., 1981, pp. 2, 4–5). While the workload reduction
afforded by automation remains an open question, the nature of workload has shifted
towards increasing cognitive demand. Furthermore, automation reduces workload in low
workload phases while increasing it in peak phases (Wiener, 1989).
The view in the 1970s was reportedly to automate in full measure (Dekker & Woods, 2002,
p. 240). Wiener and Curry (1980, p. 1) questioned what should be automated, considering
the many human factor issues resulting from automation. Ferris (2010, p. 481) further
discussed the complexity of the available automation flavours and the necessity to
implement automation mixes adequate for human users. Wiener and Curry (1980, p. 13)
clarified that automation is not only about executing tasks. Cockpit alerting systems are
also automation; however, there is a distinction between the two. In the first case,
automation is in charge and pilots monitor, while for warning systems, the automation
monitors and pilots control. Both aspects could exist independently and with different levels of automation (LoA).
Sheridan and Verplank (1978, pp. 8–17) proposed ten LoA describing man-computer decision-making, from the human doing everything to the computer deciding and acting autonomously, ignoring the human.
Hancock and Scallen (1996, pp. 24, 26) saw function allocation as an essential element of
the human factors endeavour. Furthermore, function allocation decisions are at the heart
of human-machine design decisions; they should be the starting point (Chapanis, 1965,
pp. 1–2; Pritchett et al., 2014, p. 52). Fuld (1993, 2000) vividly criticised function allocation
as a practical design process, labelling it “a useful theory but not a practical method”. In a
partial rebuttal, Hancock and Scallen (1996, p. 28) were adamant that the effort must be
continued and evolved towards dynamic function allocation.
Fitts (1951, pp. 5–11) probably initiated function allocation research, discussing the possible roles of man and machine. He proposed a list of the respective superior capabilities of men and machines that informed function allocation. Fitts did not discuss functions that could be done equally well by man and machine, nor trade-off situations. Fitts's (1951, p. 5) guiding perspective was human performance and capacity limitations. However, function
allocation could be linked to economic or political aspects, or depend on engineering
uncertainties (Chapanis, 1965, pp. 5–6). Nonetheless, aviation certification standards may
impose allocating specific functions to crews. While certain elements of the Fitts list have
lost relevance due to the technological evolution, the value and relevance of his work
should be appreciated (see de Winter et al., 2014 for a scientific review).
Jordan (1963, pp. 162–163) stated that it is wrong to compare the abilities of men and machines in the quest to allocate specific functions to the better performer. Appreciating that men and machines are complementary would open brighter perspectives. Hancock and Scallen (1996, p. 27) discussed the importance of considering the operating context dynamism
and its strong impact on the human side regarding human factors. Furthermore, function allocation should arguably reflect that dynamism rather than remain static.
Discussing the new problems created by advanced automation, Billings (1991, pp. 266–
269) was adamant that aircrews must retain a central management and control function.
Woods (1988, p. 70; 1985, p. 88) warned about the risk of responsibility/authority double-
bind when the automation scope of responsibility is not made clear. Aircrews should be
provided with salient information about automation to monitor it effectively; furthermore,
automation should be predictable and communicate its intents (Billings, 1996, pp. 10–13;
D. A. Norman, 1990, p. 15; Parasuraman & Riley, 1997, pp. 242–243; Sarter et al., 2007,
pp. 355–356). Notwithstanding, Ashleigh and Stanton (2001, pp. 98–99) showed that the
quality of interaction plays a major role in successful human-automation collaboration.
Trust presupposes, firstly, the existence of a risk. However, this risk results from the trusting engagement, in which the potential negative consequences are more significant than the benefit envisioned (Luhmann, 1988, pp. 97, 100). Furthermore, Deutsch (1958, p. 266) was adamant that trusting should not be construed as meaning risk-taking or gambling.
Secondly, a situation of trust implies reliance on another party to achieve the expected
objective.
Trust enables people to collaborate effectively and deal with complex or unfamiliar situations, whose uncertainties could otherwise discourage enterprise. When relying on technology, it permits humans to cope with situations where complete system understanding is impossible (J. D. Lee & See, 2004, p. 52; Lewicki et al., 1998, p. 446). Hoffman et al. (2013, p. 85) warned that once trust in automation is lost, it could be difficult to regain; furthermore, trust in automation could be lost quickly in some situations.
Lewicki et al. (1998, pp. 445–446) posited that trust and distrust are different constructs,
arguing that distrust is not equivalent to low trust. Cho (2006, p. 26) concurred, advancing that in situations of distrust, the trustor expects the other party to pursue its self-interest and behave in harmful ways. Dimoka (2010, pp. 388–392) found that trust and distrust are likely distinct constructs, associated with the activation of different brain areas.
Comparing confidence and trust, confidence has to do with recognisable or known aspects
and the belief that things will happen as foreseen. In a sense, confidence is rooted in the past, while trust lies in the future (Earle, 2010, p. 542). Self-confidence is described by
Perry (2011, p. 219) as “a self-perceived measure of one’s belief in one’s own abilities,
dependent upon contextual background and setting”.
Regarding trust in automation, Lacher et al. (2014, p. 43) offered the following
view: “Trust is not a trait of the system; it is the status the system has in the mind of human
beings based upon their perception and expectation of system performance”.
Muir (1987) extended human-human trust models into the field of human-machine trust, to
support the design of decision aids that users would trust and, consequently, use appropriately. While human-human models seemed applicable, she viewed machine
technical competencies as central to human-machine trust. Hoffman et al. (2013, pp. 83–
84) expressed that all technical limitations and flaws would impact human-automation
trust. Sheridan (1988, p. 429) suggested additional attributes of trust, considering
the aspects of control and command. Accordingly, the system should be understandable
for the human and use familiar procedures; furthermore, the system intentions should be
transparent, and its usefulness explicated.
Lee and Moray (1992, 1994) investigated the role of trust in operators' choice to use
automated or manual control, including the influence of self-confidence. Working on
elements influencing trust, they proposed automation performance, purpose and process
to link the different existing frameworks (see J. D. Lee & See, 2004, p. 60 for a review).
Lee and Moray (1992, 1994) established that allocation between manual and automatic control was not simply a function of trust in the system, suggesting that trust in one control system might not imply trust in the whole system. Trust was found to be affected dynamically by system faults and performance. Finally, they concluded that operators' allocation decisions were conditioned by the difference between trust and self-confidence, cautioning that both aspects are subject to individual bias and could be miscalibrated.
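Their conclusion can be caricatured as a simple decision rule; a minimal sketch, in which the bias and threshold parameters are hypothetical illustration values, not figures reported by Lee and Moray:

```python
# Minimal sketch of Lee and Moray's (1992, 1994) finding that allocation
# between manual and automatic control follows the difference between
# trust in the automation and the operator's self-confidence.
# 'individual_bias' and 'threshold' are hypothetical parameters added to
# reflect the miscalibration they cautioned about.

def choose_control(trust: float, self_confidence: float,
                   individual_bias: float = 0.0,
                   threshold: float = 0.0) -> str:
    """Return 'automatic' or 'manual' given trust and self-confidence (0..1)."""
    if (trust - self_confidence) + individual_bias > threshold:
        return "automatic"   # trust exceeds self-confidence: delegate
    return "manual"          # self-confidence dominates: keep control

# Example: trust slightly above self-confidence leads to delegation.
print(choose_control(trust=0.7, self_confidence=0.6))  # -> automatic
```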
Lee et al. (2021) re-enacted Muir and Moray's (1996) study and found dependability to primarily drive trust development, contradicting the original study's findings. They acknowledged the possible cultural influence between the original Canadian participants and the Japanese participants they used. More interesting was the suggestion that participants having grown up in a computerised world may develop trust in automation differently from participants who did not have that ubiquitous technology exposure. This aspect was, however, not researched further. Contradicting both Lee et al. (2021) and Muir and Moray (1996), Balfe et al. (2018, p. 493) found in their real-world study that understanding was the most relevant factor governing trust in automation. They concluded that user trust in real operation is probably fundamentally different from trust in laboratory
settings. Nonetheless, Chancey et al. (2017, p. 342) suggested that perceived risk in real
operation had a different impact than a defined risk variable in an experiment, owing to the
reality of potential consequences, which could be another reason for the different results.
Through a systematic review of research made between 2002 and 2013 concerning trust
in automation, Hoff and Bashir (2015) synthesised a three-layer model of the variables affecting human-automation trust. The first layer is dispositional trust, covering the
enduring aspect governing personal dispositions to trust automation, like personal traits or
culture. The second is situational trust, encompassing context-dependent variables, which
could be internal, such as self-confidence, or external, such as workload or organisational
aspects. Finally, learned trust is the third layer, which is about the human perception of the
specific automation. It is influenced by knowledge and experience, and is affected by
different variables before and during interactions. In their design recommendations, they
stressed the importance of automation feedback and transparency, particularly when
levels of automation are high. In separate studies, Verberne et al. (2012, p. 799), Koo et al. (2015), and Dorneich (2015, p. 287) all showed that providing feedback and information increases the trustworthiness of highly automated systems. Nonetheless, Ashleigh and Stanton (2001, pp. 98–99) found that it was not mere information or feedback that mattered; the critical aspect was the quality of interaction.
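To fix the vocabulary of Hoff and Bashir's (2015) model described above, the three layers can be sketched as plain data structures; the field names are illustrative choices drawn from the examples in the text, not Hoff and Bashir's formal variable list:

```python
# Sketch of Hoff and Bashir's (2015) three-layer model of trust in
# automation; field names are illustrative, not the authors' taxonomy.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DispositionalTrust:
    """Enduring, person-level dispositions to trust automation."""
    culture: str = ""
    personality_traits: List[str] = field(default_factory=list)

@dataclass
class SituationalTrust:
    """Context-dependent variables, internal and external."""
    self_confidence: float = 0.5      # internal, e.g. operator state
    workload: float = 0.5             # external, e.g. task demands
    organisational_setting: str = ""  # external

@dataclass
class LearnedTrust:
    """Perception of the specific automation, before and during use."""
    prior_knowledge: float = 0.5        # formed before interaction
    observed_performance: float = 0.5   # updated during interaction
```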
In their meta-analysis of research conducted up to 2014, Schaefer et al. (2016) obtained a similar grouping of variables, organised into three factors related to the human, the partner (that is, the automation), and the environment; however, two aspects differ. Firstly, they integrated the possibility that the human part consists of a team of humans; McNeese (2021, p. 67) showed that team performance and trust in automation were related. Secondly, automation is labelled a partner, and the relevant variables give the impression of an exchange between the human and the automation, rather than a solely human
perspective. This was possibly influenced by previous work on human-robot trust
(Hancock et al., 2011).
The model of organisational trust proposed by Mayer et al. (1995, pp. 717–720) posits that trustworthiness is based on three factors: ability, benevolence, and integrity. Nonetheless, Kim (2005, p. 622) cited credible commitment as an additional key factor of government trustworthiness. Notwithstanding, Grimmelikhuijsen and Knies (2017, p. 596) showed the validity of Mayer et al.'s (1995) three factors for assessing trust in governmental organisations.
Conceptually, regulators act as proxies for the public, being guarantors that regulated organisations will not endanger public safety, a mission carried out through control and trust (Six & Verhoest, 2017, pp. 9–11). Noy et al. (2018, p. 74) stated the critical importance of achieving trust in institutions concerning automated driving. In the field of aviation, regulators tend to lack expert knowledge and delegate part of their oversight function and responsibilities to the industry being regulated, with little possibility to do otherwise (Downer, 2010), possibly reaching a point where the regulator appears captured by the industry (Niles, 2002, pp. 405–406). Furthermore, organisations might deviate from expected behaviours (Englehardt et al., 2021, pp. 2–5; Vaughan, 1999).
3.1 Methodology
The chosen research approach was qualitative, using thematic analysis. Semi-structured
interviews were chosen (Adams, 2015, pp. 495–496). The data collection aimed to obtain
rich accounts of pilots' perspectives about trust and automation, to explore the influence of
trust on aircrew-automation collaboration. A constructionist approach was taken to
integrate participants' expressions and the importance of specific aspects (Byrne, 2021, p.
5). The analysis was mainly inductive to explore the data openly; it was not informed by
existing models or specific theories (Gareth et al., 2017, p. 22).
While different schools of thematic analysis exist (Braun & Clarke, 2021a, p. 39), reflexive thematic analysis (RTA) appeared to be the most suitable one, considering the research objective, the chosen paradigms, and the sole-researcher approach (Braun & Clarke, 2020, p. 6; Byrne, 2021, p. 3). In RTA, the researcher's subjectivity is used as an analytic resource (Braun & Clarke, 2019, p. 591, 2020, p. 3); Braun and Clarke (2016, pp. 740–741) are adamant that in RTA, themes do not pre-exist or simply emerge, but are generated through the researcher's active participation. The RTA approach differs substantially from “coding reliability” thematic analysis (Braun & Clarke, 2021a, p. 39); furthermore, attempting to establish code accuracy or reliability contradicts the RTA concept and is discouraged (Braun & Clarke, 2019, p. 594; Byrne, 2021, p. 3).
The research aimed to gain an expert perspective on the influence of trust on automation
usage rather than achieving an overall perspective representing the entire pilot population.
Accordingly, a non-probability purposive sampling method was used (Campbell et al.,
2020, pp. 653–654); the sample was intended to be homogenous in terms of established
flight experience. The targeted participants were experienced aircrew flying as captains on either 4th-generation airliners or similar business jets, or on 4th/4.5th-generation fighters. The inclusion criteria were: more than ten years of experience as a professional pilot, either with a civil operator or as a military pilot, with a career started on classic aircraft types; operational flight experience as a commander on one or more of the aircraft generations listed above; and more than 5’000 hours of flight experience (2’000 hours for fighter pilots). No relevant exclusion criteria could be defined.
3.3 Participants
Twelve participants living and working in Europe were interviewed. One of those interviews
was later set aside, as the critical criterion of starting a flying career on legacy aeroplanes
was not met. The retained eleven participants were professional pilots operating as captains. Four of them had military fighter backgrounds, three of whom later continued civil careers; the other seven received civil pilot training from the onset. Eight
participants were instructors, and three were also test pilots. The participants had an
average professional experience of 30 years (SD=9.3), an average total flight time of
12’545 hours (SD=5’027) with an average of 8’355 hours as pilot-in-command (SD=3’631).
Appendix A-03 shows the advanced aeroplanes flown. The participants' demographics
vastly exceeded the inclusion criteria.
Half the participants were direct contacts of the researcher. Regarding the remainder, two
were proposed by other participants, two were contacted through professional networks,
and two were reached through the academic network. Participants were not compensated for
their participation and donated their time for the interview freely.
3.4 Equipment
A desktop computer with encrypted video conferencing capability (i.e. Zoom), including
audio and video recording, was used for the interviews by video-link. An iPad was used
concurrently as a backup audio recording device.
NVivo (version not recorded) was used for coding and analysing the interviews.
The NVivo transcription module was used for initial interview transcription.
Due to the Covid pandemic, the interviews were conducted via video-link, as per the
University recommendations. Johnson (2019) found that face-to-face interviews provided
richer data compared to video-link interviews. Furthermore, Krouwel (2019) noticed that
more interactions took place face-to-face, although only marginally; nonetheless, they credited the video-link approach with substantial advantages regarding time and cost savings. Jenner and Myers (2019)
asserted that video-link interviews do not result in lower-grade data quality, but cautioned
about time-delay issues, potentially resulting in challenging cross-talk. It was concluded that the planned interviews could be conducted satisfactorily via video-link, as the apparent
superiority of face-to-face was somewhat limited; furthermore, video interviews offered
undeniable flexibility and time savings.
An interview guide was developed iteratively over several weeks, and was finalised
through a pilot interview. Nevertheless, it was further adjusted throughout the interviews
(appendix A-06). The questions were organised to cover areas where trust was likely to
play a role. While the participants were aware that trust was the central point of the
research, trust was not mentioned directly by the researcher, except for the last question,
to let participants contribute amply and avoid leading them into finite and specific answers.
The participants’ perspective was queried concerning:
The research was approved by Coventry University Ethics Committee (appendix A-02).
The research objectives and approach were transparent to the participants; they came
from different organisations and participated as private individuals. The interview files were
kept confidential, and the transcripts anonymised.
Participants were provided in advance with an information sheet (appendix A-04), sent by
e-mail together with a consent form. The signed consent forms were returned by e-mail to
the researcher before interviews took place; an e-mail confirming acceptance of the
conditions on the consent form was also accepted. Before formally starting the interview,
participants were re-informed verbally about the research purpose, the participation
conditions, their right to withdraw at any time, and then asked their permission to start
recording.
The interviews were conducted via video-link. They were audio and video recorded, except for one, which was solely audio recorded owing to security constraints at the participant's location.
The main questions were administered following the structure of the interview guide.
Prompts were used to investigate further and yield rich data on each aspect. Every
participant contributed to all the subjects queried.
Overall, the interviews went smoothly. Cross-talks occurred occasionally, likely caused by
communication lag. The eleven interviews yielded a total of 11h55 of recording about the
defined questions (M=01h05, SD=00h16).
Transcription calls for choices regarding what is transcribed and how, yielding different
outcomes; consequently, the transcription approach should be coherent with the research
methodology. Transcription involves a level of interpretation and cannot be neutral
(Bucholtz, 2000, pp. 1440–1441; Green et al., 1997); thus, the approach and concept are
made transparent.
All information related to companies, places, and names was anonymised unless referring to information in the public domain. Information regarding aeroplane make and model was retained for the analysis, but anonymised in the extracts used to illustrate results.
3.7 Analysis
The analysis broadly followed Braun and Clarke's (2006, pp. 86–93) proposed steps, further
informed by two worked examples (Byrne, 2021; Trainor & Bundon, 2020). To get familiar
with the data before coding, interviews were listened to, and transcripts were read several
times.
The coding of the first six interviews was approached with an open and curious mind. Sentences
were used to describe what seemed important, relevant, or peculiar, with a particular
interest in trust aspects and elements that could influence trust. Nonetheless, elements not
related to trust but that might be relevant or shed light later were also coded. The coding
approach was mostly semantic (Gareth et al., 2017, pp. 22–23).
Some codes were written interrogatively regarding the elements reported, some as “versus” comparisons, and some as expressions of the interviewees' opinions or experiences. Some codes ended up as “containers” to encapsulate similar views among interviewees, particularly when directly related to trust in automation or automation usage. Some code names evolved as their meaning became crisper, while others were consolidated. Purely descriptive codes
were avoided to give life and context to the codes and grasp the main ideas (Saldaña,
2016, pp. 76–80). Occasionally, interview elements were assigned several codes when they appeared to cover different aspects simultaneously.
These first interviews yielded 202 codes. To consolidate them, tentative categories were then created to help reveal duplicate or closely related codes.
After coding the seventh interview, a fresh, in-depth review of the codes and categories
appeared necessary. A mind-map was used to materialise reflections, clarify the overall
picture and improve the codes. All previously coded transcripts were reviewed for coding
adequacy and consistency. It resulted in 100 codes in 16 categories. The remaining
interviews were coded on that basis, adding new codes as necessary. Nevertheless,
codes and categories continued to evolve as coding progressed. After all interviews were coded, the categories were redefined to minimise overlap, and the codes were relocated accordingly. The coding exercise finally resulted in 150 codes grouped into 18
categories.
The next step was to make sense of the coded data from a holistic perspective, moving from the global picture to the codes and vice-versa, from different angles. Through the process, the categories evolved towards interconnected themes. Mind-maps were used extensively, re-organising the codes and their grouping accordingly, creating a new mind-map, and so on (see appendix A-09 for an intermediate mind-map). In the process, some codes and categories were set aside as a coherent, broad picture was generated. From 134 codes in 14 categories, the analysis evolved iteratively into 51 codes forming seven interconnected themes (list of codes in appendix A-10).
The premise of achieving saturation is that sufficient data are collected to ensure complete coverage of the researched subject (Morse, 1995). While Constantinou et al. (2017, pp. 585–586) see saturation as the key validity indicator in qualitative research, Braun and Clarke (2021b, pp. 208–211) believe that predicting data saturation in reflexive thematic analysis is nearly impossible, and possibly irrelevant. However, in the present research, the quantum of data collected was limited by resources, and an a-posteriori saturation evaluation would still provide a helpful validity indication.
The method proposed by Guest et al. (2020) was used to assess the level of data saturation achieved, using the first four interviews as the base, subsequent runs of two interviews advancing one interview at a time, and a new-code threshold of 5%. Based on these assumptions, code saturation
was achieved by the eighth interview (see table 1). Accordingly, the sample size appeared
adequate.
Table 1: Code saturation assessment (method of Guest et al., 2020)

Interview Number         01  02  03  04  05  06  07  08  09  10  11
New Codes in Interview   32   3   4   2   3   3   2   0   1   1   0
New Codes in Run (base)              41       6   5   2   1   2   1
% Saturation                             85% 88% 95% 98% 95% 98%
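The calculation behind table 1 is simple enough to reproduce; a minimal sketch, assuming the per-interview new-code counts above, a base of four interviews, runs of two interviews advancing one interview at a time, and the 5% threshold:

```python
# Sketch of the Guest et al. (2020) saturation assessment reported in
# table 1; counts are the new codes identified in each interview.
new_codes = [32, 3, 4, 2, 3, 3, 2, 0, 1, 1, 0]

BASE_SIZE, RUN_LENGTH, THRESHOLD = 4, 2, 0.05
base = sum(new_codes[:BASE_SIZE])  # 41 unique codes in the base

for end in range(BASE_SIZE + RUN_LENGTH, len(new_codes) + 1):
    run = sum(new_codes[end - RUN_LENGTH:end])  # new codes in this run
    ratio = run / base                          # proportion of new information
    status = "saturated" if ratio <= THRESHOLD else "not yet saturated"
    print(f"run ending at interview {end:02d}: {run} new codes, "
          f"{1 - ratio:.0%} saturation, {status}")
```

Run against the counts in table 1, this sketch reproduces the reported percentages and flags the first run at or below the 5% threshold as the one ending with the eighth interview.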
4. Results
Overall, automation was seen as beneficial, particularly concerning safety. It has changed the nature of the work, which shifted from piloting to supervising systems; furthermore, the new capabilities increased demands and expectations. Moving from classic aircraft to advanced
automation was described as a challenging giant step. Automation development was
perceived as technology-driven rather than needs-driven, resulting in extraneous
complexity. Finally, training was considered inadequate.
Participants expressed serious doubts and concerns regarding upper organisations in the
aviation system: management, manufacturers, and certification authorities. Many
comments were linked to the 737 Max Manoeuvring Characteristics Augmentation System
(MCAS) design, certification, and accidents. Participants opined strongly that automation design was all about business decisions driven by profit, with human factors and safety being secondary concerns. Supposedly, manufacturers exploit the Operational
Suitability Data (OSD) process fully, to attain type rating commonality and create
competitive advantages. Participants expressed vivid concerns about autonomous devices
unknown to crews. P04 shared his view:
the Max system, which would supposed or implemented to avoid system failures and actually
overrode the pilot's input. Definitely not good, I see what, probably Boeing is probably a very
actual case where you can see what happen, when you try and hide or, when you not publish
information which is definitely, need to know for the two guys in front.
The technical expertise was seen as being in the manufacturers’ hands, while certification authorities were considered to lack the competencies and resources to maintain the balance of power. Furthermore, certification standards seemingly lag by one aircraft generation. Trust in the manufacturers' honesty and the authorities' capabilities was perceived
as having declined steadily among the professional pilot community; one participant was
“very upset” by this situation, while another labelled it “a pity”. Many participants saw the 737 Max dramatic fiasco as a direct consequence of inadequate oversight and economic pressures. P09 labelled the certification system as fundamentally flawed:
I think, we're into, you remember the phrase of the Disney film about the emperor's new clothes.
We have the manufacturers built it, he's got the manufacturers test pilots, he's got the
manufacturers test pilots talking to the allied certification authority test pilots, who have been
there talking to them; and they've done all the profiles in the simulator. It is a self-defeating
procedure because they all convince each other it's perfect, (...) until something goes wrong.
However, one participant described the multi-layered evaluation process in the military as
very dependable. When a new device or software load is released to the operational unit, it
works as expected and can be relied on. The process appears highly valued and trusted.
Interestingly, one participant saw technical trust as incidental compared to the serious
concerns posed by entrusting automation with personal data. P03 explained:
A huge challenge there is data protection; because, is that data protected? or is my employer
going to look at the way that I set up my cockpit, or the way that I deal with certain systems?
And that potentially going to have ramifications on your job, so there's that trust issue. And I
think, at least a lot of pilots I talked to, that's where their main trust issues are; not so much with
the technical side, but with the whole data side that's involved; the data protection, and also,
what data goes into some of these systems, has this data been embedded? Is it quality data, or
is it just some weird data? So (...) but trust is definitely an issue.
The participants saw trust in automation as essential and exemplified its impact on
automation usage. The level of automation used appears highly influenced by trust.
Notwithstanding, P02 emphasised the importance of finding the right balance:
Trust, surely to a certain degree, improve it; beyond a certain degree it decreases it. Meaning,
it's like a stall, your pitch to a certain degree, increases lift at some point you get a buffet and if
you are overconfident, you get in a secondary stall.
Yeah, well, automation is a very, very comfortable thing. It's, is very nice to rely, and it gives you
the sensation of full (...) reliability.
P03 wondered whether trust had reached a point where crews are no longer able to be assertive when using automation:
on the technical side, we have almost crossed the threshold where we are beginning to trust
automation too much; because it is very reliable, and the few instances where it does not act
like we expect, we're really surprised, and we're very reluctant to do something because we, at
least I, (constantly?) say: "I'm sure I missed something, because he, he wouldn't make that
mistake, I missed something, it's probably my mistake, I did something wrooong".
Nonetheless, the view is that trust should not be given as a blank cheque; knowing what is trusted and why is necessary. Furthermore, trust should not exist without being verified. Many participants raised the importance of being clear about expectations, as the following extracts illustrate:
So, yes, (...) trust is important. But you have to know what you're trusting, and do you know
everything the computer is doing? Is the computer know everything that you are doing? or that
you're thinking?
I think it all being built in your training, that you, not only manage the systems, but you try and
understand as best as possible. And that comes all back to your training. Training means,
having also extra time, having not the commercial pressure in the back of your neck that
companies have nowadays, during training.
Trust is very deep, you must trust it, like we've said, for a CAT III landing. To trust (unknown?),
it's actually taking you down to the concrete, not to the airport building or the grass next to it or
whatever. So your implicit trust in a CAT III purely looking out the front and there's going to be a
runway when we get that, provided you've done your homework, set the autoland, done all your
checks, you've got the trust that it can fly, and would flare and land the aeroplane.
However, P06 raised a point regarding the tension created between requiring crews to
trust a system unequivocally while at the same time exposing them to system
malfunctions:
And it's ambivalent because on the one hand, after the simulator, a pilot should go out and trust
the system, the automation for his next flights in the real aircraft. On the other hand, every six
months, it's shown to him that the systems can fail very close to the ground.
Different accounts revealed that trust in automation could be severely affected by hearsay
or myths propagated within a community, dramatically affecting how specific systems are
used. P01 contrasted a widely held conviction with his operational experience:
I think it influences it greatly, and a lot of it is down to misinformation, social pressure. So, there
was a lot of question about the [Aeroplane-C] and its autothrust system, and it was nonsense.
And people just stop using it. They wouldn't trust it, they were overriding it, [...] And I left the
Finally, one participant reflected on the tension between the solutions engineers believe
they should provide, and what pilots perceive as their duties and responsibilities on board.
He saw the critical issue as negotiating a level of trust acceptable for a given situation,
noting that it would change with each automation generation.
While participants spoke about the importance of trust in automation, most pointed out that
automation usage was primarily governed by Standard Operating Procedures (SOPs).
Nonetheless, companies differed in their approaches, from very detailed SOPs to offering
vast leeway to their crews. The reasons were, for example, cultural or operational. P04's experience is illustrative:
that was a clever way of [Carrier] to take on the problem of having various culture in the cockpit
was, you have to create one similar operation for very dissimilar cultures. So, the SOPs were
written really in the smallest details, almost to the level of: you had to touch this knob with two
fingers and the other with three fingers, almost to a very detailed level. Whereas in [Carrier] you
could, it was more open. It's probably more the culture [World Area A] civilised culture, saying
[World Area B] probably more dogmatic, more to the letter.
Automation usage strategy was mainly linked with workload management, maintaining a
sensible drawback-advantage balance, or in some cases achieving maximum capabilities.
Nonetheless, several participants opined that high levels of automation could lead to being overwhelmed by it. One participant explained that it was not always easy to let go of the automation:
Now, you're in downwind for a visual approach; why do you need all this automation? Flight
director Off, autopilot Off, thrust director Off, fly the plane based on final. Why do we need to
turn the heading bug? Why do you need the path down now? Get rid of all of this [laugh].
it's also sometimes a problem when pilots have very good manual flying skills and good
overlook and a quick cross-check, but have a lack on automation use because the balanced
pilot has both skills, so he can decide which system he uses: his or the automation. But often
some pilots are skeptical for any reason, and on the other hand, they have also not the skill to
use automation proficiently.
All participants professed assertiveness in dealing with automation, while stating the
importance of having a “degree of scepticism” or maintaining a “healthy balance between
trust and scepticism”. One stated candidly that “a hole in the Swiss cheese” was
unavoidable. Many participants asserted that technology should not be blindly relied upon.
They articulated the importance of understanding how the system works, its weaknesses,
and what it is supposed to do. P02 claimed:
Participants spoke about the importance of a large base of knowledge and experience as
the foundation to develop an assertive attitude towards automation. It was stressed that it
was not a matter of doubting automation, but rather ensuring healthy cooperation and
reacting as necessary. P01 shared:
So, it's not that I don't trust it, I know what it can do, but I also know the limits of what it can do.
So, if something doesn't work, it doesn't upset me, it doesn't scare me, it's just: oh, that's
interesting it's not doing what I supposed to do. Instantly take over manually or use another
mode or use that secondary mode or whatever.
4.2.5 Self-Confidence
Participants were adamant that hand-flying skills and self-confidence were fundamental
aspects of flying. P07 expressed it vividly:
It's paramount, it's paramount. (...) I mean, trust in oneself, that's absolutely necessary, you
don't go in a plane if you don't trust yourself, of course.
The men are flying because when the systems fail, for many reasons, then the pilots are the
last line of defence. And they have to keep their manual flying skills up to date.
The capacity to hand-fly the plane and take over from automated flight was seen as a critical skill for recovering and continuing safely after experiencing automation malfunctions. Participants expressed that the ability to operate automation fully and
peacefully depended greatly on crew confidence to fly proficiently at reduced levels of
automation. The participants' confidence in their hand-flying skills was striking. While their confidence appeared rooted in substantial experience in classic aircraft, dedicated training to gain sureness in hand-flying FBW aircraft was deemed crucial. P04 exemplified:
their training was based on manual skills; they would first demonstrate that you could fly an
[Manufacturer] fly-by-wire aircraft perfectly in full manual mode, until you got very confident in
your manual skills, and then you got to know the system. So I felt I, must say, I felt very
comfortable using the [Manufacturer] system.
Regular hand-flying practice was deemed necessary. While this perspective seemed shared by operators, actual policies and practices appeared to vary significantly. Furthermore, hand-flying beyond the customary minimum was reportedly criticised in some cockpits. Achieving confidence in recovery abilities was reportedly trickier. While training is conducted in simulators, actual exposure is about experiencing abnormal events during operation, which do not happen on command. Hence, overall experience, and the breadth and depth of that exposure, appeared decisive:
And in extremis, the next thing you know is the autopilot disconnected, because it's reached its
limit, and it says I can't cope anymore. And now it's gone from really quite a benign environment
because it's doing its job, taking out the turbulence effects, and then all of a sudden you get the
full turbulence. So not only are you having to fly manually, but you're now in this really quite
hostile environment. And of course, we don't fly that, we don't train it, it just happens.
P03 shared his concerns, stemming from insufficient information and the opacity of the flight control system's functioning:
that's that very far tip that I, when it comes to feeling comfortable, would say, ok, if we get into
that very, very small niche, that's an area where, I'm honest, I'm not 100% comfortable with
One participant raised that crews tend to give more leeway to automation than to a fellow crew member hand-flying, for whom a strict deviation protocol exists and is enforced. In the absence of clearly defined decision thresholds and actions, crews tend to wander and give credit to the automation in the presence of any subtle sign of correction. P03 mentioned that
he never really thought about this issue until a discussion with a colleague a few days
before the interview. He contrasted the two situations:
if I fly, I'm pilot flying, manually, and my co-pilot says: "speed", I'm like: “yeap, correcting”. And
that's the response I have to give: "correcting", so that's the dialogue. Now, if we're watching
autopilot, this is how the dialogue typically goes: "ohh (...), if I'd be flying like that you'd be
calling speed right now", or: "look at, that's not very good speed control", and then, like you say,
we both look at each other: "ok?". And now it's sort of individual tolerance, if and when I'm going
to take action.
Many participants reported facing automation weaknesses regularly, and explained the
need to have strategies to deal with these performance shortcomings. This appeared
largely experience-based and often implies anticipating automation behaviour, or tricking it into ensuring acceptable outcomes. Nonetheless, several participants also viewed the lack of automation proficiency as a significant issue. They felt that some crews lack understanding of what the automation is doing and, more importantly, do not know what they want the aircraft to do. However, P11 pointed to human limitations:
I mean, as long as you're aware, it's nice. If you're ahead of the game, it's nice, but the trouble
is, like we all know, that sometimes you're not ahead of the game, sometimes you're behind.
A participant explained that aeroplanes communicate through the Flight Mode Annunciator
(FMA); hence, not understanding the FMA is similar to a couple where one talks and the
other one does not listen. Notwithstanding, other participants pointed to the rising
complexity of cockpit interfaces, which have become more demanding. The difficulty for
the automation to know what the crew truly want or think was also raised. P03 conveyed
the need for human-centred design:
I think, the human-machine interface is crucial; I mentioned, to (...) have the cockpit
communicate with me in a way, that caters towards me as a human
Several participants criticised the lack of information about the throw of the controls and about where the automation is inside its operating envelope, limiting crew awareness of the remaining automation margin. P01 expanded:
So, I think, we do need to make pilots more aware of what the automation does. And where you
are in terms of normal operating, versus the entire operational envelope of that automated
system; we need to have a gage. We don't provide that feedback at the moment, sufficiently.
The participants were adamant that their function was to command their aeroplane and, additionally, to be the last line of defence when things go awry. Accordingly, they want to be
the ultimate decision-maker. However, many participants felt tensions between the level of
control bestowed upon automation, the opacity of certain systems, and their duties and
responsibilities. Nevertheless, one participant mentioned that this discussion was a
recurring subject between engineers and pilots with every new automation generation.
then I cannot exercise, not just control, but I cannot exercise command, and then that has to be
clearly defined; and then, whoever assumes that command authority, somebody does, I mean if
it's designer, the software engineer, or the manufacturer, or the operator, then they need to say,
ok, yes, I assume the command authority; and I assume everything that comes with that.
So, as long as I have the responsibility, I want to have control. And if they take me that control
possibilities, I don't want to have responsibility. So we needs to be clear, who is responsible,
and the one who is responsible needs to have the possibility that he can act.
Autonomous safety systems were generally seen positively by the participants, provided that an override function exists. However, one participant expressed his unconditional trust in the ones fitted to [Aeroplane-D], as those were his lifeline.
Most participants reflected on haptic control feedback. Many expressed an aversion to non-moving thrust levers and controls without force or position feedback, as they felt deprived of an essential source of feedback to stay in the loop. One participant stated that
he was very comfortable with such control systems; however, no participant suggested
preferring controls without feedback.
System management concerns were widely discussed, most participants mentioning the
lack of information regarding system functioning. P01 talked about his latest type rating
course:
To get the information as to how it actually works, was well beyond any of the training manuals,
or any of the manuals that were given to the pilot. We (were?) just not given the information. It
was just: you don't need to know that. You know, if it's in front of you, it's good. If it's not, we'll
take it away and we'll decide, okay. Yeah. So, I didn't like that.
The issue of having limited data from the aircraft systems, to effectively manage them
when they malfunction, was raised by many participants. One of them pointed out that
knowing the malfunction's operational effect does not say what is happening inside the aeroplane, nor does it shed light on the malfunction's root cause. Furthermore, some stressed
the need to understand a situation rather than just mechanically applying procedures. P01
recalled:
So the first officer was very keen, very well trained, and he was going to just carry out that
action, because that's what it said to do. At which point I then said: why don't we find out what's
going wrong first, before we do anything else?
Several participants opined that manufacturers were catering to the lowest common denominator regarding the information provided. However, the approach was reported to vary significantly between manufacturers, depending on how they view the pilot within the overall system.
5. Discussion
Unexpectedly, a substantial level of doubt exists toward aviation authorities and aircraft
manufacturers. Latent feelings and concerns, regarding the safety and integrity of the
aviation system, apparently grew over the past ten or twenty years. Possibly, the two B737 Max catastrophic accidents transformed these impressions into reality: first, by showing the dramatic consequences; second, by making public facts about certification process flaws and evidence that the manufacturer concealed information and disregarded safety concerns, prioritising its business case instead (The House Committee on Transportation & Infrastructure, 2020, pp. 11–33). The low trustworthiness of the authorities appears an issue of
ability and benevolence; regarding manufacturers, benevolence and integrity are likely the
key factors (Mayer et al., 1995). The analysis showed that trust in the organisation
releasing automation for use influences automation trust. Muir (1994) and Parasuraman
and Riley (1997) had supporting views. The positive influence of a trusted higher organisation on automation trust appears more substantial than the negative influence when an organisation is doubted. However, these two opposites came from vastly different
set-ups regarding certification, operational usage, and culture, which could also explain the
apparent difference.
Trust is perceived as essential and strongly influences the automation level used; the
current high automation reliability creates complacency risk, rejoining the conclusions
reached by Muir and Moray (1996). The need to trust analytically and to verify is emphasised; however, when automation behaves unexpectedly, humans immediately doubt themselves rather than the automation. This apparent dichotomy could be explained by Lee and See's (2004) model, where the affective process strongly influences the analytical one. However, it could also be symptomatic of an automation complexity reportedly edging beyond what aircrews could reasonably comprehend. Automation knowledge and understanding, together with comprehensive training, appear strongly linked to trust in automation; this seems inconsistent with both Muir and Moray's (1996) and Lee et al.'s (2021) findings, but strongly supported by Balfe et al.'s (2018) conclusions in their real-world study. Reportedly, hearsay could dramatically affect trust in automation and, consequently, its usage.
Achieving unconditional trust in automation is vital when crews must rely on automation, e.g. during category III autoland, where confidence in the automation certification plays a significant role, rejoining Ho et al.'s (2017, p. 248) findings. Training crews for such operations likely creates cognitive tensions: on the one hand, they are trained to achieve complete confidence in automation reliability; on the other hand, they are regularly shown that the systems can fail very close to the ground.
The study showed that while trust greatly influences automation usage, actual usage is
largely prescribed by SOPs. Furthermore, some operational requirements, such as
autoland, are beyond human capabilities, imposing reliance on automation. Finally, in certain situations, pilots might decide to practice hand-flying. Consequently, situations could occur where pilots trust automation but decide not to use it, must rely on it without necessarily trusting it, or must use it despite not trusting it. Therefore, aviation operational rules, procedures, practices, and situations strongly mediate the relationship between trust in automation and reliance on automation. However, this reality should not downplay the impact of trust; it is conceivable that a significant divergence between the level of trust in automation and automation usage increases cognitive workload.
The role played by the healthy-scepticism attitude was remarkable. It appears to be the vital mechanism used to exploit automation confidently while, at the same time, observing it with a neutral dose of scepticism and acting decisively if needed. It apparently relies strongly on critical thinking. It is not about doubting; it is about ensuring a healthy relationship with automation. It seems based on the ability to maintain a state of trust and distrust at the same time. While Mayo (2015) discussed the region of “neither-trust-nor-distrust”, the present mechanism appears dynamic, hence the dual-state expression. It is reminiscent of F. Scott Fitzgerald's famous observation:
“The test of a first-rate intelligence is the ability to hold two opposing ideas in
mind at the same time and still retain the ability to function”.
The healthy-scepticism approach appears to build on solid knowledge, experience, and critical thinking. Furthermore, a deep understanding of how systems work, their capabilities, and their limitations seems necessary. Both Schaefer et al. (2016) and Hoff and Bashir (2015) demonstrated the importance of knowledge and understanding to trust, supporting the present perspective. Nonetheless, the participants' wealth of experience stems from early careers spent on classic aeroplanes, where navigation systems were less accurate and required frequent cross-checking, and where malfunctions were regular occurrences. Consequently, the healthy-scepticism approach described might well be unique to the aircrew generation that started their careers on legacy aeroplanes.
Having well-honed hand-flying skills and the necessary self-confidence to use them
appears essential. Lee and Moray (1994) found that the gap between trust and self-
confidence conditioned automation usage; consequently, high confidence in hand-flying
skills should lead to low automation usage. However, the analysis showed that high hand-flying confidence could lead to higher automation usage, even in situations of limited trust in automation, apparently contradicting Lee and Moray's (1994) findings. Solid hand-flying skills were presented as the plan B when automation does not behave as expected. It is, therefore, most likely that a solid fall-back plan dramatically reduces the perceived risk of using automation, hence the readiness to use automation despite not entirely trusting it. Mature hand-flying skills and confidence certainly developed through substantial exposure to flying less automated aeroplanes as a matter of routine, becoming second nature. It is unlikely that newer-generation pilots could achieve the same
hand-flying experience and confidence with the current aviation paradigm. Confidence to
manually take over when at the edge of the flight envelope was less pronounced. The
critical issues seemed limited training and incomplete understanding of the flight control
system behaviour. Balfe et al. (2018) support the result regarding the criticality of
understanding to ensure automation trust. Casner et al. (2013) doubted the adequacy of
current training practice.
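To make the contrast with Lee and Moray (1994) explicit, the two readings can be sketched schematically; the notation below is illustrative only and is drawn neither from Lee and Moray (1994) nor from the present data. Writing \(T\) for trust in automation, \(SC\) for self-confidence in manual flying, \(\tau\) for a decision threshold, and \(R\) for the perceived risk of engaging automation:

\[
\text{Lee and Moray (1994):}\quad \text{use automation} \iff T - SC > \tau
\]
\[
\text{present reading:}\quad \text{use automation} \iff T - R(SC) > \tau, \qquad \frac{\partial R}{\partial SC} < 0
\]

Under the second reading, stronger hand-flying self-confidence lowers the perceived risk of relying on automation, so automation usage can increase with self-confidence rather than decrease, consistent with the fall-back-plan account above.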
The research showed that pilots' duties and responsibilities are very clear from their
perspective. However, pilots' purpose and authority appeared to be viewed differently by aircraft
manufacturers: the former see a leadership role, while the latter look for responsive
managers. Nonetheless, these tensions over how much control is given to pilots and,
respectively, how much control they retain, reportedly occurred with each automation
generation. However, it would seem reasonable that those bearing the ultimate
responsibility also have ultimate authority (Billings, 1996, pp. 8–9). With FBW, control
technically rests with the flight control computers. However, there are situations where pilots
appear unsure whether they are still in command, primarily due to the paucity of automation
transparency and the absence of flight envelope information.
Non-moving thrust levers and the lack of control force feedback are reportedly significant
drawbacks to intuitively understanding what the aeroplane is doing and to feeling in control.
This highlights the question of whether haptic feedback is advantageous for humans
compared to receiving visual information, or whether the preference is anchored in years of
mechanical control systems, making controls that mimic legacy flight controls appear the
only acceptable approach. Alternatively, it might be a matter of personal preference.
Trust does influence automation usage; this influence is, however, constrained by SOPs and
practices. Nonetheless, a mismatch between trust in automation and its usage is likely to impact
aircrew cognitive workload. Advanced automation was seen as highly reliable but
complex, leading to a bias towards distrusting humans. Most participants displayed a
healthy scepticism towards automation as a strategy to maintain a state of neutrality and
readiness regarding its behaviour. Participants were highly confident about their flying
skills and saw them as a fall-back plan; hence, they confidently explored advanced
automation features.
All participants displayed a will to learn and improve themselves and consequently felt some
frustration regarding information availability. Manufacturers are viewed as taking a lowest-
common-denominator approach to providing information. This approach possibly
creates fertile ground for highly influential gossip about automation. Training is regarded as
being built on compliance and dogmatic SOPs rather than encouraging long-term
competence development and critical thinking.
The present study used a specific purposive sample of experienced professional pilots to
explore the influence of trust on advanced automation from the perspective of pilots
with a substantial background in earlier-generation aeroplanes. Hence, the findings are
unlikely to be valid for other pilot demographics. However, their validity could be extended
through hypothesis testing with other pilot groups.
As Riley (1996) showed, the impact of trust on automation usage and the complexity of
the interactions involved cannot be denied. The present study focused on understanding
how trust influences human-automation interaction in a modern aeroplane cockpit, based
on a purposive sample of highly experienced professional pilots. The results indicate that
the quality of the human-automation interaction is probably the most influential factor in
trust in automation. A model showing the perceived circular nature of trust in automation
development is proposed. This finding suggests that a human-automation team-centred
approach should be considered for the design of advanced automation.
References

https://doi.org/10.1002/9781119171386.ch19
Ashleigh, M. J., & Stanton, N. A. (2001). Trust: Key Elements in Human Supervisory
https://doi.org/10.1007/PL00011527
https://doi.org/10.1016/0005-1098(83)90046-8
Balfe, N., Sharples, S., & Wilson, J. R. (2015). Impact of automation: Measurement of performance, workload and behaviour in a complex control environment. Applied Ergonomics, 47, 52–64.
Balfe, N., Sharples, S., & Wilson, J. R. (2018). Understanding Is Key: An Analysis of Factors Pertaining to Trust in a Real-World Automation System. Human Factors, 60(4), 477–495.
https://doi.org/10.1207/s15327108ijap0104_1
Boehm-Davis, D. A., Curry, R. E., Wiener, E. L., & Leon Harrison, R. (1983). Human
Systems Engineering: Applications and Case Studies (pp. 113–128). John Wiley &
Braun, V., & Clarke, V. (2016). (Mis)conceptualising themes, thematic analysis, and other
problems with Fugard and Potts’ (2015) sample-size tool for thematic analysis.
https://doi.org/10.1080/13645579.2016.1195588
Braun, V., & Clarke, V. (2019). Reflecting on reflexive thematic analysis. Qualitative
https://doi.org/10.1080/2159676X.2019.1628806
Braun, V., & Clarke, V. (2020). One size fits all? What counts as quality practice in
https://doi.org/10.1080/14780887.2020.1769238
Braun, V., & Clarke, V. (2021a). Can I use TA? Should I use TA? Should I not use TA?
https://doi.org/10.1002/capr.12360
Braun, V., & Clarke, V. (2021b). To saturate or not to saturate? Questioning data saturation
1465. https://doi.org/10.1016/S0378-2166(99)00094-6
Byrne, D. (2021). A worked example of Braun and Clarke’s approach to reflexive thematic
Campbell, S., Greenwood, M., Prior, S., Shearer, T., Walkem, K., Young, S., Bywaters, D.,
https://doi.org/10.1177/1744987120927206
Casner, S. M., Geven, R. W., & Williams, K. T. (2013). The Effectiveness of Airline Pilot Training for Abnormal Events. Human Factors, 55(3), 477–485. https://doi.org/10.1177/0018720812466893
Castelfranchi, C., & Falcone, R. (2010). Trust theory: A socio-cognitive and computational
model. J. Wiley.
Chancey, E. T., Bliss, J. P., Yamani, Y., & Handley, H. A. H. (2017). Trust and the
https://doi.org/10.1177/0018720816682648
https://doi.org/10.5772/49949
Cho, J. (2006). The mechanism of trust and distrust formation and their relational
https://doi.org/10.1016/j.jretai.2005.11.002
Colquitt, J. A., Scott, B. A., & LePine, J. A. (2007). Trust, trustworthiness, and trust
propensity: A meta-analytic test of their unique relationships with risk taking and job
Constantinou, C. S., Georgiou, M., & Perdikogianni, M. (2017). A comparative method for
571–588. https://doi.org/10.1177/1468794116686650
de Winter, J. C., F, & Dodou, D. (2014). Why the Fitts list has persisted throughout the
http://dx.doi.org/10.1007/s10111-011-0188-1
http://dx.doi.org/10.1007/s101110200022
https://doi.org/10.1177/002200275800200401
Dimoka, A. (2010). What Does the Brain Tell Us About Trust and Distrust? Evidence from a
https://doi.org/10.2307/20721433
Dorneich, M. C., Dudley, R., Rogers, W., Letsu-Dake, E., Whitlow, S. D., Dillard, M., &
https://doi.org/10.1177/1541931215591058
Downer, J. (2010). Trust and technology: The social foundations of aviation regulation.
4446.2009.01303.x
6924.2010.01398.x
Elias, B. (2019). Cockpit Automation, Flight Systems Complexity, and Aircraft Certification:
Background and Issues for Congress (No. R45939; pp. 1–30). Congressional
Research Service.
Englehardt, E., Werhane, P. H., & Newton, L. H. (2021). Leadership, Engineering and
Ethical Clashes at Boeing. Science and Engineering Ethics, 27(1), 1–17.
https://doi.org/10.1007/s11948-021-00285-x
Evjemo, T. E., & Johnsen, S. O. (2019). Lessons Learned from Increased Automation in
Aviation: The Paradox Related to the High Degree of Safety and Implications for
Ferris, T., Sarter, N., & Wickens, C. D. (2010). Cockpit Automation: Still Struggling to Catch
Up…. In E. Salas & D. Maurino (Eds.), Human Factors in Aviation (2nd ed., pp.
Fitts, P. M., Viteles, M. S., Barr, N. L., Brimhall, D. R., Finch, G., Gardner, E., Grether, W.
F., Kellum, W. E., & Stevens, S. S. (1951). Human Engineering for an Effective Air-
https://apps.dtic.mil/sti/citations/ADB815893
Fuld, R. B. (1993). The Fiction of Function Allocation. Ergonomics in Design, 1(1), 20–24.
https://doi.org/10.1177/106480469300100107
Funk, K., Lyall, B., Wilson, J., Vint, R., Niemczyk, M., Suroteguh, C., & Owen, G. (1999).
Gareth, T., Hayfield, N., Clark, V., & Braun, V. (2017). Thematic Analysis. In C. Willig & W.
Green, J., Franquiz, M., & Dixon, C. (1997). The Myth of the Objective Transcript:
https://doi.org/10.2307/3587984
Grimmelikhuijsen, S., & Knies, E. (2017). Validating a scale for citizen trust in government
https://doi.org/10.1177/0020852315585950
Guest, G., Namey, E., & Chen, M. (2020). A simple method to assess and report thematic
https://doi.org/10.1371/journal.pone.0232076
Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y. C., de Visser, E. J., &
https://doi.org/10.1177/0018720811417254
Hancock, P. A., & Scallen, S. F. (1996). The Future of Function Allocation. Ergonomics in
Richardson, C., Cacanindin, A., Cals, S., & Wilkins, M. (2017). A Longitudinal Field
Ho, N. T., Sadler, G. G., Hoffmann, L. C., Lyons, J. B., & Johnson, W. W. (2017). Trust of a military automated system in an operational context. Military Psychology, 29(6), 524–541. https://doi.org/10.1037/mil0000189
Hoff, K. A., & Bashir, M. (2015). Trust in Automation: Integrating Empirical Evidence on Factors That Influence Trust. Human Factors, 57(3), 407–434. https://doi.org/10.1177/0018720814547570
systems engineering: The future for a changing world. CRC Press, Taylor & Francis
Group.
Hoffman, R. R., Johnson, M., Bradshaw, J. M., & Underbrink, A. (2013). Trust in
https://doi.org/10.1109/MIS.2013.24
Jenner, B. M., & Myers, K. C. (2019). Intimacy, rapport, and exceptional disclosure: A
https://doi.org/10.1080/13645579.2018.1512694
Johnson, D. R., Scheitle, C. P., & Ecklund, E. H. (2019). Beyond the In-Person Interview?
How Interview Quality Varies Across In-person, Telephone, and Skype Interviews.
https://doi.org/10.1037/h0043729
Kharoufah, H., Murray, J., Baxter, G., & Wild, G. (2018). A review of human factors
https://doi.org/10.1016/j.paerosci.2018.03.002
Kim, S.-E. (2005). The Role of Trust in the Modern Administrative State: An Integrative
https://doi.org/10.1177/0095399705278596
Koo, J., Kwac, J., Ju, W., Steinert, M., Leifer, L., & Nass, C. (2015). Why did my car just do
Krouwel, M., Jolly, K., & Greenfield, S. (2019). Comparing Skype (video calling) and in-
Kwak, Y.-P., Choi, Y.-C., & Choi, J. (2018). Analysis between Aircraft Cockpit Automation
and Human Error Related Accident Cases. International Journal of Control and
Lacher, A., Grabowski, R., & Cook, S. (2014, March 22). Autonomy, Trust, and
Symposium Series.
https://www.aaai.org/ocs/index.php/SSS/SSS14/paper/view/7701
Lee, J. D. (2008). Review of a Pivotal Human Factors Article: “Humans and Automation:
https://doi.org/10.1518/001872008X288547
Lee, J. D., & Moray, N. (1992). Trust, control strategies and allocation of function in human-machine systems. Ergonomics, 35(10), 1243–1270. https://doi.org/10.1080/00140139208967392
Lee, J. D., & Moray, N. (1994). Trust, self-confidence, and operators' adaptation to automation. International Journal of Human-Computer Studies, 40(1), 153–184. https://doi.org/10.1006/ijhc.1994.1007
Lee, J. D., & See, K. A. (2004). Trust in Automation: Designing for Appropriate Reliance. Human Factors, 46(1), 50–80. https://doi.org/10.1518/hfes.46.1.50_30392
Lee, J., Yamani, Y., Long, S. K., Unverricht, J., & Itoh, M. (2021). Revisiting human-
machine trust: A replication study of Muir and Moray (1996) using a simulated
https://doi.org/10.1080/00140139.2021.1909752
Lempereur, I., & Lauri, M. A. (2006). The Psychological Effects of Constant Evaluation on
Lewandowsky, S., Mundy, M., & Tan, G. P. A. (2000). The dynamics of trust: Comparing
123. https://doi.org/10.1037/1076-898X.6.2.104
Lewicki, R. J., McAllister, D. J., & Bies, R. J. (1998). Trust and Distrust: New Relationships
https://doi.org/10.2307/259288
Mårtenson, L. (1995). The Aircraft Crash at Gottröra: Experiences of the Cockpit Crew.
International Journal of Aviation Psychology, 5(3), 305.
https://doi.org/10.1207/s15327108ijap0503_5
Mateusz, M., & Stanislaw, D. (2020). The Assessment of Pilot Compliance with TCAS
RAs, TCAS Mode Selection and Serviceability Using ATC Radar Data. Eurocontrol.
Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An Integrative Model of Organizational Trust. Academy of Management Review, 20(3), 709–734. https://doi.org/10.2307/258792
Mayo, R. (2015). Cognition is a matter of trust: Distrust tunes cognitive processes. European Review of Social Psychology, 26(1), 283–327. https://doi.org/10.1080/10463283.2015.1117249
McNeese, N. J., Demir, M., Chiou, E. K., & Cooke, N. J. (2021). Trust and Team
147–149. https://doi.org/10.1177/104973239500500201
Mouloua, M., Hancock, P., Jones, L., & Vincenzi, D. (2016). Automation in Aviation
Muir, B. M. (1987). Trust between humans and machines, and the design of decision aids. International Journal of Man-Machine Studies, 27(5–6), 527–539. https://doi.org/10.1016/S0020-7373(87)80013-5
Muir, B. M. (1994). Trust in automation: Part I. Theoretical issues in the study of trust and human intervention in automated systems. Ergonomics, 37(11), 1905–1922. https://doi.org/10.1080/00140139408964957
Muir, B. M., & Moray, N. (1996). Trust in automation. Part II. Experimental studies of trust and human intervention in a process control simulation. Ergonomics, 39(3), 429–460. https://doi.org/10.1080/00140139608964474
Niles, M. C. (2002). On the Hijacking of Agencies (and Airplanes): The Federal Aviation
of London.
Noy, I. Y., Shinar, D., & Horrey, W. J. (2018). Automated driving: Safety blind spots. Safety
Oliver, D. G., Serovich, J. M., & Mason, T. L. (2005). Constraints and Opportunities with
Parasuraman, R., Sheridan, T. B., & Wickens, C. D. (2000). A model for types and levels of
https://doi.org/10.1109/3468.844354
Pearson, C. J., Welk, A. K., & Mayhorn, C. B. (2016). In Automation We Trust? Identifying
of the Human Factors and Ergonomics Society 2016 Annual Meeting, 60, 201–205.
https://doi.org/10.1177/1541931213601045
218–230. https://doi.org/10.1111/j.1744-6198.2011.00230.x
Pritchett, A. R., Kim, S. Y., & Feigh, K. M. (2014). Measuring Human-Automation Function
https://doi.org/10.1177/1555343413490166
Riley, V. (1995). What avionics engineers should know about pilots and automation. Proceedings of the 14th Digital Avionics Systems Conference, 252–257. https://doi.org/10.1109/DASC.1995.482836
Riley, V. (1996). Operator Reliance on Automation: Theory and Data. In R. Parasuraman & M. Mouloua (Eds.), Automation and Human Performance: Theory and Applications (pp. 19–35). Lawrence Erlbaum Associates.
https://doi.org/10.1080/14780887.2013.801543
Rousseau, D. M., Sitkin, S. B., Burt, R. S., & Camerer, C. (1998). Not so Different After All:
404. https://doi.org/10.5465/AMR.1998.926617
Saldaña, J. (2016). The coding manual for qualitative researchers (3rd ed.). SAGE.
Sarter, N. B., Mumaw, R. J., & Wickens, C. D. (2007). Pilots’ Monitoring Strategies and
https://doi.org/10.1518/001872007X196685
Schaefer, K. E., Chen, J. Y. C., Szalma, J. L., & Hancock, P. A. (2016). A Meta-Analysis of Factors Influencing the Development of Trust in Automation: Implications for Understanding Autonomy in Future Systems. Human Factors, 58(3), 377–400. https://doi.org/10.1177/0018720816634228
Press.
https://doi.org/10.1177/1555343417724964
Sheridan, T. B., & Verplank, W. L. (1978). Human and Computer Control of Undersea
https://apps.dtic.mil/sti/citations/ADA057655
Spielman, Z., & Le Blanc, K. (2021). Boeing 737 MAX: Expectation of Human Capability in
Publishing. https://doi.org/10.1007/978-3-030-51758-8_9
Strauch, B. (2018). Ironies of Automation: Still Unresolved After All These Years. IEEE
https://doi.org/10.1109/THMS.2017.2732506
Development & Certification of the Boeing 737 Max [Final Committee Report].
Tušl, M., Rainieri, G., Fraboni, F., De Angelis, M., Depolo, M., Pietrantoni, L., & Pingitore,
A. (2020). Helicopter Pilots’ Tasks, Subjective Workload, and the Role of External
Vaughan, D. (1999). The Dark Side of Organizations: Mistake, Misconduct, and Disaster.
Verberne, F. M. F., Ham, J., & Midden, C. J. H. (2012). Trust in Smart Systems: Sharing
https://doi.org/10.1177/0018720812443825
https://doi.org/10.1016/j.ssci.2016.05.008
08-057090-7.50019-9
Wiener, E. L. (1989). Human Factors of Advanced Technology (‘Glass Cockpit’) Transport
Wiener, E. L., & Curry, R. E. (1980). Flight-deck automation: Promises and problems (TM
Wood, D. (1988). The Effect of Automation on the Human’s Role: Experience from Non-
https://doi.org/10.1609/aimag.v6i4.511
Certificate of Ethical Approval

Dispositions, development, and influence of trust on aircrew-automation collaboration: an exploratory study of professional pilots perspective.
P120256
This is to certify that the above-named applicant has completed the Coventry University Ethical
Approval process and their project has been confirmed and approved as Medium Risk.
Participant Information Sheet

Dispositions, development, and influence of trust on aircrew-automation
collaboration: an exploratory study of professional pilots perspective.
Making a Complaint
If you are unhappy with any aspect of this research, please first contact the lead researcher,
Sandro Guidetti, mobile: +41XX XXX XXXX, guidettis@uni.coventry.ac.uk. If you still have
concerns and wish to make a formal complaint, please write to:
In your letter please provide information about the research project, specify the name of the
researcher and detail the nature of your complaint.
Interview Guide

Introduction
Thank you for agreeing to participate in this research. I would like to interview you to
understand better the relationship between trust and modern automation usage by pilots;
hopefully, this would help improve the integration of human factors in automation design,
training and use. There are no right or wrong answers; my interest is in your unique
perspective and experience.
Your participation in this research is voluntary; you may decline to answer any question or
stop the interview at any time and for any reason. The interview should take about one hour,
depending on how much information you would like to share. I would like to video and audio
record the interview with your permission because I don’t want to miss any of your
comments. The recording will be kept confidential; it will be subsequently transcribed in an
anonymous form, which means that any information included in my research report will not
identify you as the respondent. Do you have any questions about what I just explained?
May I start recording the interview?
Establishing Rapport
Before we begin, it would be nice if you could tell me a little bit about your relationship with
aviation. What brought you into a cockpit?
In the following discussion, automation is not just about autopilot and auto-thrust, but it refers
to the whole range of automatic systems on board.
Questions
1) Aircraft cockpits and automation have evolved tremendously over the past 20 years;
how do you feel about this evolution?
◦ impact on humans (pilots)
3) In your interactions with automation, are there contexts that could influence how you
use it or manage it? (e.g. time, situation, experience, procedures, skills, self-
confidence, etc.)
4) Could you share an experience where automation did not behave as expected?
◦ feelings or reaction on the moment
◦ medium term implications or changes
5) How do you feel about automatic systems that do not keep the pilot in the loop about
what is happening in the background? (e.g. reduced redundancy, alarm inhibition,
abnormals management, etc.)
◦ view on advantages / issues
◦ aspects influencing acceptance or refusal
6) What is your opinion regarding autonomous safety systems that could override the
crew, e.g. Auto-GCAS, automatic EDM, automatic RA manoeuvre?
◦ view on advantages / issues
◦ aspects influencing acceptance or refusal
7) It is likely that automation will evolve towards more intelligent or autonomous systems
(e.g. to support decision making) that might adapt to changing situations, making them
unpredictable. How do you feel about this likely evolution?
◦ what would you need to work confidently with such systems
◦ authority/responsibility paradigm
8) If you could design the dream aircraft automation for a perfect human-automation
collaboration, what would be the key elements?
Closing Part
• These are all the questions I have. Is there anything else you would like to share, or
that you would like to ask me about this study?
• How did you feel about the interview? Was it conducted adequately? Did you gain
something out of it?
I would like to thank you very much for your contribution and the time offered.
Transcription Convention

[ ] — information in parentheses after what has been said; indicates that what is being said is not meant in its literal sense. Example: “I was completely lost [figure of speech]”