Angela Aristidou
UCL School of Management
a.aristidou@ucl.ac.uk
Giulia Cappellaro
Bocconi University
giulia.cappellaro@unibocconi.it
INTRODUCTION
Artificial Intelligence (AI) algorithms introduce novel practices that have the potential to transform
knowledge work and expertise, and alter coordination and control mechanisms inside
organizations (Alaimo & Kallinikos, 2021; Faraj, Pachidi & Sayegh, 2018). While traditional
perspectives emphasize the risk of technologies and algorithms substituting human tasks (Lane &
Saint-Martin, 2021), recent streams of literature focus on the augmentation of human capability,
allowing humans to collaborate closely with AI algorithms on a task (Jarrahi, 2018; Lebovitz,
Levina, & Lifshitz-Assaf, 2021).
Indeed, AI tools studied so far in experimental and controlled settings often demonstrate
stark positive performance effects, which has further fortified research interest in them.
Nevertheless, scholars have had little opportunity to empirically examine the dynamics of real-life
implementations of AI tools, defined as the use of AI tools by professionals as part of their daily
work (Bailey et al., 2019; Berente et al., 2021), for example by physicians for actionable patient
care, which is distinct from using AI tools for research purposes, as a process separate from the
typical organizational flow. This is because AI tool implementations in professional settings often
suffer from the deployment problem, with few systems moving beyond the experimental stage.
AI tools within research, experimental and controlled settings are often understood as
‘perfectly explainable’, a view that underpins the increasing desire of leaders and policymakers in
public good services, such as healthcare, to adopt them as the answer to concerns on capacity and
access. Yet, there is a clash with the less controlled, evolving and ‘imperfect’ world of
organizations into which AI tools are brought. Rather than viewing AI tools as perfectly
explainable solutions to mounting societal needs – such as healthcare access and capacity issues –
we argue for a focus on the implementation dynamics of AI tools in real-life organizational settings
to shift scholarly attention to the imperfections that arise and how organizational members and
other stakeholders deal with them in order to overcome the deployment problem. This shift in
focus opens the window for scholars to examine the ambiguity of implementation and its
unintended consequences within the organization. In the context of organizations of public interest,
such consequences may carry additional implications for broader communities and our societies.
In recent literature on technology, work and digital innovation, scholars have uncovered
key factors shaping the implementation of digital technologies in organizations (e.g. Kellogg,
2021; Berente and Yoo 2012, Berente et al. 2016, Lifshitz-Assaf 2017; Barley 2015, Pine and
Mazmanian 2017, Kellogg et al. 2020; Beane 2019, Christin 2017, Leonardi 2011). Recent studies
of AI in organizations (e.g. Lebovitz, Levina, & Lifshitz-Assaf, 2021; Raisch & Krakowski, 2021;
Benbya, Davenport & Pachidi, 2020) collectively suggest, importantly, that the challenge of AI
introduction
to real-life settings can be traced to the specific characteristics of AI technology, such as its opacity,
complexity and learning. When AI characteristics are contained within controlled research
conditions, they do not pose a concern. Existing studies suggest, however, that to introduce AI
tools to real-life use in professionalized settings, these AI characteristics should be managed.
Managing the AI introduction, in effect, becomes a question of how to govern AI (the set of
expectations around AI use), which differs from governing other digital technologies because of
the AI’s distinguishing characteristics. Yet, while recent literature has empirically demonstrated
the challenges of AI introduction, how organizations overcome these in practice remains underexplored.
In this paper, we demonstrate how an AI-specific process of anticipatory governance can
overcome what we refer to as AI governance constraints, i.e. the lack of clarity of
expectations around how to govern AI tools when brought into real-life settings for use in daily
work. The research study on which we report in this manuscript draws on the examination of the
clinical implementation of an AI tool in a healthcare setting and the process through which local
and remote actors and organizations across sectors (private technology developer; public sector
hospitals; open source community and lay patient public communities) generated new practices
towards the governance of this AI tool. These practices aimed not only to overcome the AI
governance constraints around the specific AI tool, but also to future-proof the AI tool and at the
same time generate capacity in
the broader healthcare system for future AI tools’ introductions. In this sense, the governance
process we witnessed and demonstrate in this paper is anticipatory. Our focus on what works in
practice (the practices emerging among dispersed actors in the introduction of the AI tool) builds
on our understanding that in the evolving landscape of emerging technologies (such as AI),
imperfection should be expected and can only be managed through situated, bottom-up activity.
THEORETICAL FRAMEWORK
Technology, work and digital innovation theorists suggest key factors that may drive resistance
to the implementation of digital technologies. These include the technology posing a challenge to the
professionals’ identity and jurisdictions (Berente and Yoo 2012, Berente et al. 2016, Lifshitz-Assaf
2017) and the material properties of the technology itself not fitting with the local work practices
(e.g. Barley, 1986; Beane 2019, Christin 2017, Leonardi 2011). In addressing the conditions under
which these barriers can be minimized, technology, work and digital innovation scholars have
revealed a range of ways to subvert or circumvent resistance, through articulation work, such as
tinkering, repairing, reminding, filtering (Berg, 1997, Timmerman 1997; Maiers, 2017), mutual
tuning of the technology and the work tasks (Barrett, Oborn, Orlikowski & Yates, 2012), translation
(Spyridonidis and Currie, 2016), and boundary spanning and alignment in practice.
Literature on technology, work and digital innovation has also highlighted some key factors that
may drive resistance to the implementation of AI technology, beyond resistance to other digital
technologies. A first such characteristic is the AI’s opacity,
meaning that AI tools are viewed by their users as a ‘black box’ in which humans cannot ‘see’ what
elements combine to generate the AI tool’s outputs. Another AI characteristic underscored is its
complexity, as AI involves larger scale data sets and more sophisticated algorithms than other
digital technologies (Anthony, 2021). Scholars have also distinguished AI from other digital
technologies through its learning characteristic; as AI algorithms can teach themselves, improving
without deliberate human input, meaning that the AI tool itself evolves over time. These three AI
characteristics (opacity, complexity, learning) are explicitly or implicitly put forward in existing
studies as the sources of what may be termed an AI implementation liability: the AI technology’s
disadvantage in being implemented because of its distinguishing characteristics. Researchers have
also offered some emerging insights on how to address specific AI characteristics in order to
identify conditions under which AI implementation liability can be minimized. For example, it is
suggested that the AI’s learning characteristic could be addressed by enabling professionals to
‘teach’ the AI (cite), and that both AI opacity and complexity could be addressed
through collaboratively developing the AI tool within the organization (Singer et al., 2022).
While each of these directions holds the promise of adding to our understanding of an
important open puzzle, i.e. how to overcome AI implementation liability, the foundations of this
work are incomplete. We need to extend these foundations to account for the fact that AI is
fragmented, in the sense that AI sourcing (of elements: models; data; training) is usually both
proprietary and open source, spanning organizations and sectors, with elements of the AI tool
spread across and beyond organizational boundaries and around the world. We already know
through past literature that inside the organization there are articulators who help employers use
algorithms to facilitate improved decision making,
coordination, and organizational learning (Kellogg et al. 2020), as well as professionals with
expertise to solve pressing problems (DiBenigno 2020) such as digital interactivity professionals
(Truelove 2019). We also know that the private sector includes technology vendors (Myers 2020,
Kellogg 2021), and that even outside of the boundaries of organizations we would find online
communities (O’Mahony and Bechky 2008, Fayard et al. 2016, Lindberg and Levina 2018),
platform organizations that help focal organizations harness work and expertise from the crowd
(Lifshitz-Assaf 2017), and arbiters of the digital economy such as online content creators (Powell
et al. 2017, Christin and Lewis 2021). However, unlike digital technologies that may be
characterized by an integrated development-to-implementation pipeline within a single
organization overseeing the implementation of the technology, AI is very rarely sourced solely
from one organization, and often
there is no key organization to orchestrate aspects of the AI’s development to the point of
implementation. Rather, AI fragmentation captures the fact that actors contributing to the training
of the AI model (e.g. a public organization’s professionals) may have different expectations from
the actors contributing the data (e.g. lay people such as patients, whose images are used in the AI
model), and from those actors programming the AI model (e.g. members of an open source
community).
Scholars have already pursued some ways to address specific AI characteristics in order to
identify conditions under which AI implementation liability can be minimized, but we need to
extend the foundations of this line of research to include the characteristic of AI fragmentation
because – alongside the three already noted (AI opacity, complexity and learning) – the fourth AI
characteristic also contributes to the AI implementation liability. Without
accounting for AI fragmentation, we do not fully understand why – while there is a recent
proliferation of AI tools that are highly effective in research labs – these are not yet adopted in
real-world settings by their intended end-users, or are adopted and soon abandoned. Conversely, we
also cannot currently explain how some AI implementations are nevertheless successful (i.e. the
AI tools are adopted and not abandoned) despite the known resistance to AI tools stemming
from AI characteristics.
The governance of an AI technology characterized by opacity, complexity, learning and
fragmentation must allow for the technology to be implemented despite the AI implementation
liability; for its ability to adapt, change and interact; and for it to be able to evolve over time.
Understanding how AI is governed is crucial for policymakers and regulators and for its
sustainability because it enables stakeholders to discuss and decide how the technology should be
used and evolve. AI governance constraints, i.e. the lack of clarity of expectations around how to
govern AI tools, are not only a concern for the implementation of AI but also for its sustainability.
Because so far AI governance has been limited to high-level policies and frameworks that are
removed from daily organizational work, it offers little guidance on how to overcome AI
governance constraints, which arise in addition to the challenges noted in the broader literature on
digital technology implementation.
The perspective of anticipatory governance, which has been developed by Science and Technology
Studies scholars to make sense of how to “collectively imagine, critique and thereby shape the
issues presented by emerging technologies before they become reified in specific ways” (Barben
et al., 2008: 992), can help to address this question of how to overcome AI governance constraints
in professionalized work settings. In the process of anticipatory governance, expert and lay actors
dispersed across organizations and sectors make sense of how to govern an emerging technological
system taking advantage of its openness before lock-in of values and trajectories set in.
We find that this future-looking perspective was particularly useful in our empirical
setting, where the AI tool was characterized by fragmentation alongside opacity, learning and
complexity. Our anticipatory governance analysis contributes to the literatures of
technology, work and digital innovation, and extends the concept of anticipatory governance in
several ways. First, we show that the AI governance constraints that may arise in the
development and introduction of an AI technology are what we label as: temporal, spatial,
evolutionary and commons. Second, we show that the emergence of new governance constraints
that are characteristic of AI technology allows for new and modified practices that can help
mitigate AI implementation liability. We show three key practices: situated integration, normative
consensus and distributed foresight, and how these are mobilized through two mechanisms:
diversified openness and mediated democratization. Third, we suggest that AI implementation
and sustainability may be better accomplished through an anticipatory governance process that
allows for the spanning of organizations and sectors, and accounts for the multiple and imperfect
elements, open and proprietary, of the technology.
METHODOLOGY
Empirical Context
Our study is set at O-Hospital (pseudonym), a highly specialized research and teaching
oncological hospital in the UK. We follow the clinical implementation of Cancer-AI, an AI tool
based on an innovative algorithm that allows auto-contouring of tumors, focusing specifically on
the organs at risk. This context offers an ideal empirical setting to address our research question.
Healthcare is considered one of the key sectors in which Artificial Intelligence (AI) will
have a great impact on professional judgements as data-driven tools for diagnostic, treatment, and
operations management are being rapidly developed in this field (Bohr & Memarzadeh, 2020).
While extensive knowledge has accumulated on the technical features of AI, less is known about the
implementation of such systems in real organizational settings, e.g., in hospitals, treatment centers,
and communities. Thus, not much is understood about the consequences of AI technologies in
practice. Scholars have called for studying the unforeseen effects of AI technologies in healthcare
(Davenport & Dreyer, 2018; Lebovitz, Lifshitz-Assaf & Levina, 2021), given their impact not only
on organizational processes but also on the broader communities they serve.
Our study is able to trace in real time one of the first cases of clinical implementation of an AI
technology. This specific AI algorithm was originally designed to compute hospital data to
accurately identify tumours on patient scans, cutting processing times and treatment planning by
up to 90%. Cancer-AI is an amalgam of proprietary and open source elements. The original project
idea dated back to 2016, and originated from a collaboration agreement between a private provider
and O-Hospital to exchange data for the development of machine learning models. In 2017 the
deep learning toolkit was developed, followed in 2018 by the clinical testing and in 2019 by the
introduction of Cancer-AI as a research tool at O-Hospital. At the end of 2020, the private company
open-sourced the deep learning toolkit and O-Hospital retrained the model using hospital data. In
2021 Cancer-AI moved from the research to the clinical implementation phase and this is the
process we followed.
Methods
Given our interest in an unexplored phenomenon, we relied upon an inductive research design
(Edmondson & McManus, 2007). Specifically, we adopted an ethnographic research design that
follows in real time the clinical implementation of Cancer-AI at O-Hospital. The design is
longitudinal and multilevel. All the organizational levels concerned with
the clinical implementation of the AI technology are included in the data collection and analysis
(e.g., R&D, ward levels), although the study focuses primarily on the professionals developing
and using the AI tool. Following appropriate ethics and regulatory approvals, fieldwork started in
Data Sources. We triangulate three sources of data: interviews, observations and archival
data. By applying a snowball sampling approach, we interviewed all the organizational members
involved with the AI tool. We first reconstructed the typical clinical path followed by a patient
case, the professionals involved, and at what point(s) in the process the AI tool would be used and
by whom. This preliminary analysis allowed us to identify the relevant categories of informants.
We identified the actual informants belonging to each category and conducted semi-structured
interviews with each of them. These informants were interviewed twice, i.e., before and after the
implementation of the AI tool. We then conducted interviews with R&D, technical staff, IT,
clinical engineering staff, and private developer staff, for a total of 50 interviews.
The core data source is non-participant observations, within and outside the hospital. We
collected observations from two types of sources. First, we observed professionals’ group
meetings. These include weekly multidisciplinary meetings where professionals review and
approve patients’ treatment plans (i.e. team review meetings); monthly departmental research
meetings; and governance meetings. Meetings last from 30 to 120 minutes. Second, we took part
in stakeholders’ meetings, patient and public engagement meetings, and we participated in policy
discussions on AI implementation regulations in the NHS. Overall, we attended a large share of
these meetings. An unexpected feature of our fieldwork was the impact of the Covid-19 pandemic
on health systems worldwide. Because of Covid-19 physical distancing policies, most of the group
meetings took place remotely, and this enabled us greater access to this source of data. We
complemented these virtual observations with
systematic field visits to O-Hospital throughout the duration of the study. One of our research team
members has been granted employee-level access to within-hospital physical spaces and hospital-
wide archival documentation and ongoing access to the hospital’s internal platform and
communication systems. During field visits, she has had the opportunity to become deeply familiar
with O-Hospital’s physical settings; she has spent days in the wards observing the daily work of
medical professionals, and she has been allocated an office desk in the same corridor as the
professionals she observed.
Finally, we complemented observation and interview data with extensive archival data
produced by the organization and by professionals regarding the introduction, use and evaluation
of Cancer-AI.
Data Analysis. We moved iteratively between the data, emerging theory and relevant
literature (Miles & Huberman, 1994) following an approach of gradual abstraction that moved
from raw data to categories and themes. We started by coding the characteristics of Cancer-AI as
reported by our informants. We then coded for evidence of Cancer-AI implementation achievement
and focused on the process leading to this outcome. We specifically coded for all the actions that
professionals enacted in the process,
focusing not only on those aimed at the specific in-situ implementation but also those aimed at
facilitating future scaling up of Cancer-AI in other hospitals within the National Health
Service (NHS). Moving back and forth from the literature, we drew on and extended recent insights
from Science and Technology Studies on anticipatory governance (Barben et al., 2008).
FINDINGS
Our analysis shows that the implementation of Cancer-AI in and around O-Hospital was achieved
through an anticipatory governance process (Figure 1), which allowed actors to overcome the potential
implementation liabilities of the technology, while accounting for the need to leave it
“incomplete”, “partial” and open to future evolutions and innovations. We define governance as
the set of practices and mechanisms ensuring delivery of the AI technology’s capabilities for the
organization. Specifically, the anticipatory governance process was based on three sets of practices:
situated integration, normative consensus and distributed foresight. These practices are arranged
on a temporal and spatial continuum from being time-
specific and site-bounded, to being forward-looking and spatially dispersed. Hence, these practices
account for the implementation liabilities of the first implementation workplace but also anticipate
those of future workplaces. Together, the three sets of practices leverage the incompleteness
and ambiguity characterizing the AI system to “govern” it, leaving space for the technology to be
always in flux, and turning it into a “perfectly imperfect” organizing. We first illustrate the
governance constraints emerging from the implementation of Cancer-AI in O-Hospital, and then
show how professionals dealt with them.
Governance constraints. The governance of the introduction and implementation of Cancer-AI
in O-Hospital was consistently pointed to by our informants as the key factor to be addressed to
guarantee the transition of Cancer-AI from a
research tool in the hands of a few, to its widespread and sustained use by professionals in a real
organizational setting. In our earlier conversations with both clinical professionals and
professionals working in the R&D Department of O-Hospital, the difference between these two
uses of the AI tool emerged starkly. Informants argued that “the reason why so much across the
board doesn't get implemented (…) is that it does not speak to the real life situation in the NHS”.
The Lead of the Research Governance Clinical Informatics (RGCI) Unit at O-Hospital explained:
It is not the same when you are looking at an AI in a lab or an experimental setting and if
you are looking at it in an actual hospital with real people. There is something to be said about
that. I'm so disappointed every time I see all these bloody papers by the computer scientists talking
about how brilliant their AI is and I'm like, everything is better in a laboratory because you control
everything. It is not the same (Interview, RGCI Lead).
Professionals at O-Hospital were dealing with something they defined as “unknown and
unprecedented”. First, Cancer-AI was a fragmented mixture of open source and proprietary
elements, both homegrown and externally developed, as explained by
a professional working in the RGCI Unit: “The puzzle that you're dealing with here is essentially
that you're trying to create governance around something that's unprecedented. Open source,
homegrown AI that is clinically implemented” (interview, RGCI 3). Second, they had to manage
the unknowns deriving from the AI tool within an existing governance framework at the field level
of the National Health Service (NHS). Hence, the degree of freedom granted in the choice
and development of alternatives around the governance of Cancer-AI was potentially restricted by
being embedded in an institutional infrastructure that did not yet accommodate novel AI
technologies. This mismatch was perceived “like a square peg in a round hole”:
Nobody knows the algorithm itself only what it does, could do, or does not do or could not
do. What is hard actually, it is not so much the governance in an unprecedented area, but merging
the governance of an unprecedented area with a well-established governance framework. That is
what is hard because we have a very well established research governance framework in the UK,
which is primarily designed for clinical trials, and it is not well adapted for other study types. And
has actually been very slow adapting to the data of the AI stuff. That is quite frustrating because
when you have got the national regulator under-regulating when you are dealing with other sites
and put up obstacles or ask for information (Interview, RGCI Lead).
Finally, informants also perceived the challenges and risks of being the first movers at the national
level. Cancer-AI was not designed to stay only within O-Hospital but to spread across other
hospitals within the NHS. Hence, they felt that they had “all the UK's R&D offices' eyes on you while you
are doing it” and that “whatever you come up with, it's going to be known as the protocol of default.
The choice, the default choice for other hospitals” (Interview, RGCI Lead). Hence, professionals
at O-Hospital had to deal with different governance constraints in the implementation of Cancer-
AI: expectations on its current and future uses (temporal constraints), expectations on the local
O-Hospital site and on other hospitals within the NHS (spatial constraints), expectations on its version
at the time of introduction but also on its evolving versions (evolutionary constraints), and expectations
from an expanded pool of current and future stakeholders (commons constraints). In the following,
we narrate how professionals at O-Hospital dealt with these constraints.
Practices of Situated Integration. A first set of practices aimed at the contextual assimilation
of the technology in the site. These practices refer to activities of normalizing Cancer-AI, making
sure it mirrored professionals’ work and mindset, and integrating it within the existing
organizational governance infrastructure by partially restricting the open source components of the
technology.
Minimizing professionals’ resistance in use. Due to the AI’s opacity, actors have an imperfect
understanding of the functioning behind the algorithm. We found that professionals were able to
overcome this liability by, on the one hand, simulating humans’ norms and decision-making and,
on the other, leaving space for humans’ agency in the interpretation and use of Cancer-AI in the
work routine. As to the former, the clinical professionals and the private company developing the
technology embedded the typical human workflow in Cancer-AI, ensuring the AI simulated the
way of working of professionals. For example, Cancer-AI simulated the peer review system that
professionals routinely perform, as explained during a professional meeting:
This cartoon shows the structure of the actual U-Net model. What's quite interesting is that,
paralleling the human workflows, the output segmentation is actually based on a majority vote from
an ensemble of three different model instances actually working together, just as we actually
perform contour peer review as a group of three human instances (observation, professional
meeting 4)
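To make the voting logic described in this excerpt concrete, the sketch below shows a per-voxel majority vote over an ensemble of three segmentation masks. It is purely illustrative: the function name, array shapes and use of NumPy are our assumptions, not details of Cancer-AI's actual implementation.

```python
# Illustrative sketch (not Cancer-AI's actual code): per-voxel majority voting
# over an ensemble of three segmentation model outputs, mirroring the
# "group of three" peer-review logic described in the excerpt above.
import numpy as np

def majority_vote(masks: list) -> np.ndarray:
    """Combine binary segmentation masks (values in {0, 1}, identical shapes)
    by per-voxel majority vote across model instances."""
    stacked = np.stack(masks, axis=0)       # shape: (n_models, *volume_shape)
    votes = stacked.sum(axis=0)             # per-voxel count of positive votes
    majority = len(masks) // 2 + 1          # e.g. 2 out of 3 instances
    return (votes >= majority).astype(np.uint8)

# Hypothetical usage: three model instances each predict an organ-at-risk mask.
predictions = [np.random.randint(0, 2, size=(64, 64, 64)) for _ in range(3)]
consensus_mask = majority_vote(predictions)
```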
As to the latter, both in the development and in the implementation phases, individual
physicians were granted the discretion to intervene on the perceived gaps and lacunae of the tool.
Indeed, all our informants consistently reported that they did not take the Cancer-AI outputs for
granted; rather, both individually and in collegial meetings, they explained the importance of
checking and, where needed, editing those outputs:
You are meant to go back and have a look. You do not just take it for granted it's gonna be
done right. So, you always go back and tweak it if you want to, if you wish to. If you disagree.
(Interview, oncologist 8)
Informants noted that edits were “not necessarily massively” but happened “almost
always” (Interview, oncologist 5). By doing so, physicians were able to overcome the threat related
to the opacity of the tool, and increased their trust in the technology, knowing that the ultimate
decision remained in their hands:
I think that the biggest fear with any black box would be the results, the computer might
just make a mistake or, you know, but I think as long as you still have the ability to review their
output from the black box, and I think it's absolutely fine (Interview, oncologist 4)
Partially controlling the learning capabilities. Situated integration also included practices
aimed at introducing a partial closure to the openness and potential for infinite evolution of the
Cancer-AI tool. The aim of partial closing practices was to create organizational “scaffolds” of
protocols and rules to protect the flow of data and information of the open source component of
Cancer-AI and hence protect individual professionals from being exposed to risks. The head of the
R&D Department explained:
You cannot push risk on individuals. No, definitely you should not. You need routines in the
sense of repetitive action patterns that are done by the whole group of co-workers, and that is what
we are creating. We are creating the structure for those routines to develop those collective actions
so everybody knows what is expected of them, and then within that they can innovate and they can
be creative. And if it doesn't fit, then you modify it. It is not a straitjacket, but a scaffold. It is a
brilliant way of being right in the thick of something quite exciting and innovative and from an
organizational perspective (Interview, Lead R&D)
More precisely, the R&D and Governance Department created a trusted cloud environment
to “filter” the open source component of Cancer-AI and make it work within the organization,
protecting the identity and guaranteeing the security of data and information. O-Hospital created:
“an Azure Landing zone of its own, i.e., a structure that allows a hospital to manage all the key
requisites to implement an AI technology in the cloud, managing things like costs, mapping, how
you connect things in and out of the hospital, identity, security in order to be able to run this
model” (interview, physician 1). A senior physician explained this practice during a stakeholder meeting:
What we end up with is that we can build into the hospital, a sort of structure where you
have the hospital's existing data network, which is of course, completely firewalled. And then you
can build these types of processes in order to implement AI and open source models, with the very
high levels of trust and security that sit in hospitals own cloud subscription. I also tried to develop
the use of the concept of a “trusted clinical environment”, and this is where you have actually stood
up a structure like this in the cloud in order to apply a more mature AI to clinical data and allow
clinicians to actually see the first level of output of that data. (Observation, Stakeholder Meeting
3)
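The "trusted clinical environment" described above is cloud infrastructure rather than application code, but the underlying "filtering" logic can be sketched briefly. The snippet below is a hypothetical illustration of one such gate, in which only allow-listed, pseudonymized fields leave the hospital boundary; the field names, the allow-list and the salting scheme are all invented for illustration.

```python
# Hypothetical sketch of the "filtering" idea behind a trusted clinical
# environment: identifiable fields stay inside the hospital boundary, and only
# allow-listed, pseudonymized data is exported to the open source model.
import hashlib

SAFE_FIELDS = {"scan_voxels", "scanner_protocol", "anatomical_site"}  # assumed allow-list

def deidentify(record: dict, salt: str) -> dict:
    """Drop every field not on the allow-list and replace the patient
    identifier with a salted hash before data leaves the hospital network."""
    token = hashlib.sha256((salt + record["patient_id"]).encode()).hexdigest()
    safe = {key: value for key, value in record.items() if key in SAFE_FIELDS}
    safe["pseudonym"] = token  # stable token; not reversible to the identifier
    return safe

# Hypothetical usage: `record` never crosses the firewall; `model_input` may.
record = {"patient_id": "ID-0001", "name": "REDACTED",
          "scan_voxels": [0, 1, 1], "scanner_protocol": "CT-A",
          "anatomical_site": "head-and-neck"}
model_input = deidentify(record, salt="hospital-managed-secret")
```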
Practices of Normative Consensus. While the practices of situated integration created a
local order for the contingent use of Cancer-AI in the specific hospital work setting, a second set
of practices was developed over time to broaden participation and deliberation around the uses
and evolutions of Cancer-AI technology. These practices aimed to engage both lay and expert
stakeholders in the discovery and understanding of the AI technology and in creating opportunities
for feedback and deliberation.
Lay engagement activities. First, senior physicians engaged with patients and members of
the public both in the development phase and later on when discussing how to roll Cancer-AI out
in other hospitals. Explaining to patients the unknowns characterizing the AI tool was considered
a necessary step to maximize the potential of the technology, given the primary role of patients in
the data management process. A senior physician explained this point in an early internal meeting:
The quality of deep learning artificial intelligence algorithms is highly dependent on the data
used to create models. However, data driven technologies need a patient mandate. We know that
patients trust hospitals to look after their data. And we know the clinical workforce understands
this data. Open source software is a proven route to the latest machine learning developments and
the cloud now has the compute resources for machine learning. We will ask the patients on how
they want to see the parts of this jigsaw put together. By engaging patients we aim also to ensure
that patients, regardless of where they live, have access to this new clinical pathway. (observation,
internal meeting 5)
A number of patient and public engagement meetings were organized to give voice to the
concerns of patients regarding the use of Cancer-AI. Our observations and analysis of those
meetings revealed how patients consistently claimed that the introduction of AI tools in medical
care should not change the traditional patient-doctor relationship, and how doctors should remain
the primary actors responsible for medical care. When asked “what matters most to you” in relation
to the AI tool, one patient representative answered:
It should stay within the hospital and doctors will discuss everything with me… Thinking
about supermarkets, first they moved to self-checkout and now there are shops where you can just
walk out without seeing a cashier. I do not want that and I want reassurance from my doctors that
there will still be people in my care (observation, PPE meeting 2)
Other patient representatives highlighted how “it’s important that AI mustn’t result in
any lowering of standards, so the person at the end should always be a very skilled expert
professional and you won’t end up with a less skilled professional looking at the scan”
(observation, PPE meeting 2). Some even suggested not to use the term AI as: “hearing ‘AI’ and
‘cloud’ is really scary and might make people worry that anyone has access to their data”
(observation, PPE meeting 2), and rather call it a tool that the clinician can use to help make a
decision. Overall, as one doctor explained to us, the purpose of engaging patients in the discussion
on the use of Cancer-AI was ultimately educational, and aimed at introducing lay stakeholders to
the technology and its implications.
Expert engagement activities. Practices to broaden participation around the development and
evolution of Cancer-AI technology involved not only lay people, but also experts outside O-
Hospital. This involved specifically the medical professional community expert in coding and open
source, which acted as a source of feedback and improvement for Cancer-AI. A senior doctor recalled
how:
Being part of this Community for 25 years, what has been fantastic is this real rich culture
of sharing every time we have a new Software tool; in particular, the fact that we do site visits, we
share expertise, we share little bits of code snippets and scripts for our new treatment planning
system. And I feel it is really building on that on that kind of community spirit, our concept really
(Interview, physician 1)
GitHub was the open source community where Cancer-AI was originally discussed as a
medical imaging deep learning library to train and deploy models on Azure Machine Learning.
Here, senior doctors of O-Hospital posted updates and contributed to discussions on the toolkit’s
evolution.
Practices of Distributed Foresight. A third set of practices contained a future-looking
component, and focused on activities able to anticipate what was going to be necessary for the
acceptance and future use of Cancer-AI beyond O-Hospital.
Seeding openness. A first practice aimed to develop Cancer-AI’s capabilities beyond the first
implementation site. This was done by
envisioning a logic of local, incremental training of the model behind the AI tool. That is, the
training data for the original model came from multiple different sites around the world, with
different scanners, and scanning protocols, using standardized and accepted segmentation
protocols, with the idea of having as baseline “something that was robust to the kind of different
natures of input data that could be used” (Interview, physician 1). Then the model used a tool kit
that was published as open source. The key feature of the toolkit is that it allowed professionals
“to implement an existing model, but also to retrain models within a hospital environment”
(Interview, physician 1). By foregrounding openness as a defining feature of the AI model, they
seeded the capacity for other sites to adapt and retrain it.
In the view of our informants, this practice was essential to ensure the potential scaling up
of Cancer-AI at the NHS level and overcome the temporal and spatial governance constraints. In
a meeting with peers from other NHS hospitals, the lead physician of O-Hospital explained the
envisioned approach:
We envision development cycle where clinical teams could collect the patient data sets,
apply their expertise to curate the data labels and take that data and actually retrain existing
machine learning models for maximal benefits, because hospital protocols might change, scanning
equipment changes over time, or even patient demographics change a little bit over time. They
could also develop new models and we have this kind of blueprint for evaluating the performance
then going on to actually commissioning the models so that you can then deploy them for patient
benefit. (Observation, inter-organizational meeting 8)
The practice ensured that, ultimately, the deployment “is down to whoever's using it”.
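The development cycle envisioned in this quote (collect and curate local data, retrain an existing open source model, evaluate, then commission for deployment) can be outlined as simple control flow. The sketch below is hypothetical: the helper functions stand in for a real toolkit and for clinical evaluation, and the Dice-style threshold is an invented example parameter, not a value from the study.

```python
# Hypothetical outline of the envisioned retrain-evaluate-commission cycle;
# the helper functions are stand-ins, not the actual open-sourced toolkit.
from dataclasses import dataclass

@dataclass
class Model:
    weights: str
    locally_tuned: bool = False

def load_pretrained(weights: str) -> Model:
    return Model(weights)            # stand-in: load the multi-site baseline model

def fine_tune(model: Model, scans, labels) -> Model:
    model.locally_tuned = True       # stand-in: retrain on locally curated data
    return model

def dice_score(model: Model, scans, labels) -> float:
    return 0.90                      # stand-in: overlap metric from clinical evaluation

def local_cycle(baseline: str, scans: list, labels: list, threshold: float = 0.85):
    """One pass of the cycle described above: curate, retrain, evaluate,
    and commission only if local performance meets an agreed bar."""
    model = load_pretrained(baseline)
    train_scans, train_labels = scans[:-10], labels[:-10]   # curated training split
    hold_scans, hold_labels = scans[-10:], labels[-10:]     # held-out evaluation split
    model = fine_tune(model, train_scans, train_labels)
    score = dice_score(model, hold_scans, hold_labels)
    commissioned = score >= threshold                       # deploy for patient benefit
    return model, score, commissioned
```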
Values’ molding. Envisioning the potential for scalability of Cancer-AI meant not only
working on the capabilities of the model, but also on its acceptability in the value system of end
users. Hence, practices of foresight also worked to mold future societal values around AI use in
healthcare. Professionals engaged in deliberations centered on the ethical implications of the use
of Cancer-AI in real hospital settings. They argued
for the need to develop ethical and compliance frameworks to make AI safe and fair. Given the
lack of existing frameworks they could rely upon, professionals drew upon and translated ethical
frameworks from other industries. A senior physician explained this choice during a research
meeting with clinical professionals involved in the project:
I think that as much as we need clinical evaluation frameworks, we actually need ethical and
governance frameworks. Therefore, for our project we are actually working with two different
frameworks, each of which has its own strengths. We have been working with a group based with
the Aletheia framework and that is part of an international collaboration with radiation oncology
community. Their framework from the Aerospace industry really came around from staff concerns
about the introduction of AI technology to highly skilled staff group. The health data governance
is still developing in that particular framework. (Observation, research meeting 7)
In doing so, their aim was to develop future ethical guidelines that could be adopted by the
NHS, with an emphasis on two principles. The first was the value of clinical specialism, i.e.
ensuring that skilled clinical professionals remain central to AI-supported care.
The second was the principle of public innovation, which meant framing AI tools as “an innovation
that originates and comes from within the public healthcare setting” (Interview, physician 4).
The two practices of seeding openness and values’ molding were mutually reinforcing, as a senior
physician explained during a stakeholder meeting:
I believe strongly in the importance of trying to innovate within the NHS and that is why
I believe in this principle of using the Open Source. We have discussed some of the solutions and
some of the reasons why cloud is particularly good. (Observation, Stakeholder Meeting 8).
Together, the three sets of practices – situated integration, normative consensus and
distributed foresight – are mobilized in the overall process of anticipatory governance through two
mechanisms. The first, diversified openness, refers to the ability to differentially leverage the
incompleteness of the AI tool to either open or restrict the alternative uses of AI temporally and
spatially, and it is based on the practices of partially controlling the learning component (situated
integration), expert engagement activities (normative consensus) and seeding openness
(distributed foresight). The second, mediated democratization, refers to the capacity to gradually
and incrementally envision ways to democratize access to and deliberation around the AI tool
(from skilled professionals, to patients and future beneficiaries), and it is based on the practices of
minimizing professionals’ resistance in use (situated integration), lay engagement activities
(normative consensus) and values’ molding (distributed foresight).
DISCUSSION
Our analysis is ongoing, and our emergent findings point to a number of contributions to the
emergent literature on the development and sustainability of AI tools in organizations and in the
hands of frontline staff. Our research offers a longitudinal and multi-sited first-hand empirical
examination of the development, adoption and sustained use of an AI tool in a healthcare setting
that was, by all accounts, successful in the sense that it was adopted in the target organization, not
abandoned, and continuously developed, reappropriated for further purposes and re-deployed in
other organizations. This was particularly surprising because the AI tool at hand is characterized
by extreme AI fragmentation, as well as AI opacity, learning and complexity. The AI tool at the
heart of our empirical examination is fragmented in the sense that the sourcing of its key elements
(models; data; training) is both proprietary and, importantly, open source, and because different
elements of the AI tool are sourced from organizations and non-organizational entities across
private, public and community sectors. For example, we noted firsthand how actors contributed to
the training of the AI model from the public sector (local O-Hospital’s oncologists), and the data
originated from national patient medical records, while one of the models of the AI tool was derived
from an open access international online community platform (GitHub) and combined with
another model created by members of a private, international technology developer organization
(MegaTech).
To make sense of how AI implementation liability was overcome in the setting of our
empirical study, we drew on and extended an anticipatory governance perspective. Our study
shows that AI governance constraints can be overcome through an iterative process of situated
integration, normative consensus and
distributed foresight that are propelled forward through diversified openness and mediated
democratization. Importantly, we see these as being developed in concert, in order to reflect and
inform on one another. In the process of anticipatory governance, first, activities emerge at a
local level that stretches beyond the specific organization in which the technology is targeted for
implementation. The practice ‘site’ includes dispersed activities of actors across organizational
boundaries, that contribute to the understanding of how to set expectations around the new tool in
order to meet pressing governance needs of the here and now. Next, the new set of expectations
(governance) around the AI tool are stretched against the imagination of the engaged stakeholders,
in the sense that all are aiming to predict what is going to be necessary in the future to sustain the
technology’s use.
We elaborate the significance of this model in three different areas. First, scholars studying
technology, work and digital innovation gain an account of how and why AI
governance constraints arise. The temporal governance constraint is that the AI technology
requires a set of expectations on its use both for now and in the future. The spatial constraint is
that the AI tool requires governance on the local site (i.e. of the specific organization and system
in which it is adopted) but also globally wherever it may be used next. The evolutionary constraints
are closely tied to the AI learning characteristic, as the AI tool requires setting expectations on its
version at the time of deliberation and introduction but also on its versions as these may evolve
through its learning ability. The commons constraint is that the AI technology is not bound by its
current stakeholders (the organizations and entities that are currently developing and using it):
its open source elements and need for large datasets require that the technology account for an
expanded, previously unaccounted-for pool of stakeholders, rendering the AI a common good.
Second, AI tools become increasingly effective in research labs, and attempts to transition them
to real-world work settings also increase. This trend, combined with the AI governance constraints
generated through the AI’s distinguishing characteristics, creates ambiguity among the dispersed
stakeholders and actors involved in the research, development, adoption and use of the AI tools. It
may, in turn, limit the ability of these dispersed communities and stakeholders to coordinate their
efforts towards the AI’s sustainable use and thus void the possibility of AI delivering on its promise.
Third, our model of situated integration, normative
consensus and distributed foresight may afford the possibility of helping overcome AI governance
constraints. We show that the emergence of new governance constraints that are characteristic of
AI technology also may allow for new and modified practices that can help mitigate AI
implementation liability. We also show how the three key practices (situated integration,
normative consensus and distributed foresight) are mobilized through two mechanisms: diversified
openness and mediated democratization. Past work highlights the positive role of including less
powerful actors in the initial adoption and ongoing local troubleshooting meetings related to
modifying digital technology and related routines at the local site/organization of adoption (Barrett
et al. 2012, Sergeeva et al. 2020). Our work demonstrates that who is and is not powerful in relation
to AI technologies is open to debate as multiple actors are equally needed (through their data, their
training of the AI, their modelling) in order for the AI to be developed and sustained and therefore
each is expert in their aspects and the role of each is significant in different ways. Additionally,
because we examine AI governance efforts in a specific site, place and time period as a starting
point, we are able through our practice lens to trace the relevance, effects and challenges of
tentative modes of governance in the heterogeneous array of governance efforts observed. In this
way, we provide a model of AI Governance that is grounded in organizational life and through the
bottom up actions and interactions of actors responding to a real-world challenge, rather than
models imposed top-down through vague policy recommendations and aspirational protocols.
Finally, we show that AI implementation and sustainability may be better accomplished
through an anticipatory governance process that allows for the spanning of organizations and
sectors, and accounts for the multiple and imperfect open and proprietary elements of the
technology. The strong vision of openness was demonstrated through the public embrace of the
open source elements of the AI tool, initially lauded as the response to the AI characteristic of
opacity. We found that the open source elements of the AI technology implemented in O-Hospital
were fundamental in incentivizing professionals to actually use the technology in daily practice,
while acknowledging and overcoming its partial imperfection. At the same time, in an interesting
twist, these widely acknowledged open elements of the AI tool became a barrier to overcoming the
AI implementation liability. Our findings show that the AI tool’s open source properties generate
further governance constraints, as none of the actors engaged in this process fully embraces
openness in practice. Rather, what our findings show are iterative rounds of cutting down the
openness of the AI tool in order to counter AI fragmentation. Furthermore, our work underscores that
modifying the digital technology and related routines at a local level is insufficient to account for
the prospective needs of the technology, i.e. in order to continue to be used in the future and in
remote locations (other organizations and countries). By working through these governance
constraints at the local level, actors share what they learn with others facing the same issues with
the same AI tool or with similar AI tools in existence or in the making. In this way, the local actors
future-proof their
organizations (public hospitals or private vendors), their profession (R&D, radiology, Clinical
Engineering) and the health ecosystem centred around the UK’s National Health Service. For this
reason, anticipatory governance, with its emphasis on thinking in advance about societal values
and institutional change so as to leverage the relative openness of the technology before lock-in of
values and trajectories set in, is a valuable new perspective for emerging technologies at a similarly
early stage of real-world implementation.
REFERENCES
Alaimo, C., & Kallinikos, J. (2021). Managing by data: Algorithmic categories and organizing.
Organization Studies, 42(9), 1385-1407.
Anteby, M., Chan, C. K., & DiBenigno, J. (2016). Three lenses on occupations and professions in
organizations: Becoming, doing, and relating. Academy of Management Annals, 10(1), 183-
244.
Anthony, C. (2021). When Knowledge Work and Analytical Technologies Collide: The Practices
and Consequences of Black Boxing Algorithmic Technologies. Administrative Science
Quarterly, 00018392211016755.
Bailey, D., S. Faraj, P. Hinds, G. von Krogh and P. Leonardi (2019). Special Issue of organization
science: Emerging technologies and organizing. Organization Science 30(3): 642-646.
Barben, D., Fisher, E., Selin, C., & Guston, D. H. (2008). Anticipatory governance of
nanotechnology: Foresight, engagement, and integration. In E. J. Hackett, O. Amsterdamska,
M. Lynch, & J. Wajcman (Eds.), The Handbook of Science and Technology Studies (3rd ed.,
pp. 979-1000). MIT Press.
Barley SR (1986) Technology as an occasion for structuring: Evidence from observations of CT
Scanners and the social order of radiology departments. Admin. Sci. Quart. 31(1):78–108.
Barley SR (2015) Why the Internet makes buying a car less loathsome: How technologies change
role relations. Acad. Management Discoveries 1(1):5–35.
Barley, S. R. (1990). Images of imaging: Notes on doing longitudinal field work. Organization
science, 1(3), 220-247.
Barrett M, Oborn E, Orlikowski WJ, Yates J (2012) Reconfiguring boundary relations: Robotic
innovations in pharmacy work. Organ. Sci. 23(5):1448–1466.
Beane M (2019) Shadow learning: Building robotic surgical skill when approved means fail.
Admin. Sci. Quart. 64(1):87–123.
Benbya, H., T. H. Davenport and S. Pachidi (2020). Artificial intelligence in organizations:
Current state and future opportunities. MIS Quarterly Executive 19(4).
Berente N, Lyytinen K, Yoo Y, King JL (2016) Routines as shock absorbers during organizational
transformation: Integration, control, and NASA’s enterprise information system. Organ. Sci.
27(3):551–572.
Berente N, Lyytinen K, Yoo Y, Maurer C (2019) Institutional logics and pluralistic responses to
enterprise system implementation: A qualitative meta-analysis. Management Inform. Systems
Quart. 43(3):873–902.
Berente N, Yoo Y (2012) Institutional contradictions and loose coupling: Post-implementation
of NASA’s enterprise information system. Inform. Systems Res. 23(2):376–396.
Berente, N., Gu, B., Recker, J., & Santhanam, R. (2021). Managing Artificial Intelligence. MIS
Quarterly, 45(3).
Bohr, A., & Memarzadeh, K. (2020). The rise of artificial intelligence in healthcare applications.
In Artificial Intelligence in healthcare (pp. 25-60). Academic Press.
Choudhury, P., Starr, E., & Agarwal, R. (2020). Machine learning and human capital
complementarities: Experimental evidence on bias mitigation. Strategic Management
Journal, 41(8), 1381-1411.
Christin A (2017) Algorithms in practice: Comparing web journalism and criminal justice. Big
Data Society 4(2):1–14.
Christin A, Lewis R (2021) The drama of metrics: Status, spectacle, and resistance among
YouTube drama creators. Social Media Soc. 1:1–14.
Currie G, Spyridonidis D (2019) Sharing leadership for diffusion of innovation in professionalized
settings. Human Relations 72(7): 1209–1233.
Davenport, T. H. and K. Dreyer (2018). AI will change radiology, but it won’t replace radiologists.
Harvard Business Review 27.
DiBenigno J (2020) Rapid relationality: How peripheral experts build a foundation for influence
with line managers. Admin. Sci. Quart. 65(1):20–60.
Jarrahi, M. H. (2018). Artificial intelligence and the future of work: Human-AI symbiosis in
organizational decision making. Business Horizons 61(4): 577-586.
Kellogg KC (2021) Covert operations: Managing vendor intentional secrecy during ML tool
development in a high technology organization. Presentation, MIT Economic Sociology
Working Group Seminar, Cambridge, MA.
Kellogg KC, Myers JE, Gainer L, Singer SJ (2020) Moving violations: Pairing an illegitimate
learning hierarchy with trainee status mobility for acquiring new skills when traditional
expertise erodes. Organ. Sci. 32(1):181–209.
Kellogg KC, Valentine MA, Christin A (2020) Algorithms at work: The new contested terrain of
control. Acad. Management Ann. 14(1):366–410.
Lane, M., & Saint-Martin, A. (2021). The impact of Artificial Intelligence on the labour market:
What do we know so far? OECD Working Paper.
Lebovitz, S., H. Lifshitz-Assaf and N. Levina (2021). To engage or not to engage with AI for
critical judgments: How professionals deal with opacity when using AI for medical diagnosis.
Organization Science Special Issue on Theorizing Emerging Technologies.
Lebovitz, S., N. Levina and H. Lifshitz-Assaf (2021). Is AI ground truth really “true”? The dangers
of training and evaluating AI tools based on experts’ know-what. Management Information
Systems Quarterly.
Leonardi PM (2011) When flexible routines meet flexible technologies: Affordance, constraint,
and the imbrication of human and material agencies. Management Inform. Systems Quart.
35(1):147–167.
Lifshitz-Assaf H (2017) Dismantling knowledge boundaries at NASA: From problem solvers to
solution seekers. Admin. Sci. Quart. 63(4):746–782.
Majchrzak A, Rice RE, Malhotra A, King N, Ba SL (2000) Digital technology introduction and
integration: The case of a computer-supported inter-organizational virtual team. Management
Inform. Systems Quart. 24(4):569–600.
Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook.
Sage.
Myers JE (2020) Direct vs. indirect vendor channels and the scaling of worker voice around digital
technologies. Academy of Management Proc. (Academy of Management, Briarcliff Manor,
NY), 17349.
Pine K, Mazmanian M (2017) Artful and contorted coordinating: The ramifications of imposing
formal logics of task jurisdiction on situated practice. Acad. Management J. 60(2):720–742.
Powell WW, Oberg A, Korff V, Oelberger C, Kloos K (2017) Institutional analysis in a digital
era: Mechanisms and methods to understand emerging fields. Krücken G, Mazza C, Meyer R,
Walgenbach P, eds. New Themes in Institutional Analysis: Topics and Issues from European
Research (Edward Elgar Publishing, Northampton, MA).
Raisch, S. and S. Krakowski (2021). Artificial intelligence and management: The automation–
augmentation paradox. Academy of Management Review 46(1): 192-210.
Sowa, K., A. Przegalinska and L. Ciechanowski (2021). Cobots in knowledge work: Human–AI
collaboration in managerial professions. Journal of Business Research 125: 135-142.
Truelove E (2019) The changing nature of professional work inside an incumbent firm in the age
of social media: Examining the challenge of coproduction. Doctoral dissertation,
Massachusetts Institute of Technology, Cambridge, MA.
Figure 1 AI implementation and Anticipatory Governance Process