Preparing to work with artificial intelligence: assessing WHS when using AI in the workplace
Authors:
Andreas Cebulla (College of Business, Government and Law, Flinders University, Adelaide, Australia),
andreas.cebulla@flinders.edu.au
Published as: Cebulla, A., Szpak, Z. and Knight, G. (2023), "Preparing to work with artificial intelligence:
assessing WHS when using AI in the workplace", International Journal of Workplace Health Management,
Vol. 16 No. 4, pp. 294-312. https://doi.org/10.1108/IJWHM-09-2022-0141
Abstract
Purpose – Artificial Intelligence (AI) systems play an increasing role in organisation management, process
and product development. This study identifies risks and hazards that AI systems may pose to the work health
and safety (WHS) of those engaging with or exposed to them. A conceptual framework of organisational measures for identifying and minimising those risks is proposed.
Design/methodology/approach – The study interviewed 30 experts in data science, technology and WHS; 12 representatives of nine organisations using
or preparing to use AI; and ran online workshops, including with 12 WHS inspectors. The research mapped
AI ethics principles endorsed by the Australian government onto the AI Canvas, a tool for tracking AI
implementation from ideation via development to operation. Fieldwork and analysis developed a matrix of
WHS and organisational-managerial risks and risk minimisation strategies relating to AI use at each
implementation stage.
Findings – The study identified psychosocial, work stress and workplace relational risks that organisations and
employees face during AI implementation in a workplace. Privacy, business continuity and gaming risks were
also noted. All may persist and reoccur during the lifetime of an AI system. Alertness to such risks may be best maintained by embedding their monitoring in organisational processes and culture.
Originality – A collaborative project involving sociologists, economists and computer scientists, the study
relates abstract AI ethics principles to concrete WHS risks and hazards. It translates principles typically
applied at the societal level to workplaces and proposes a process for assessing AI system risks.
This article reports on research on how the use of artificial intelligence (AI) in workplaces may affect
the health and safety of workers, and which processes could be applied to avoid or contain harm. The
COVID-19 pandemic, which led to increased use of advanced technologies, including technologies driven
by AI, accentuated the urgency for such exploration. Contributors to a recent special issue of the
International Journal of Workplace Health Management (IJWHM, Vol. 15 No. 3, 2022) illustrated the
health implications of the growth in teleworking/working from home (Pataki-Bittó and Kun, 2022),
associated novel forms of remote workplace management and monitoring (Jeske, 2022), and new challenges
to managing relationships with customers frequenting a workplace in healthcare (Cumberland et al., 2022).
The COVID-19 pandemic also changed how organisations were going about their business. Notably,
it accelerated the digitalisation of workplaces, including the use of AI, the exploitation of ‘big data’ in
production and service industries (McKinsey Analytics, 2021; PWC, 2021), and process methods no longer
guided by human input of instructions but driven by unsupervised machine coding (Chhillar and Aguilera,
2022).
The Oxford Reference (2023) dictionary defines artificial intelligence as “[t]he theory and
development of computer systems able to perform tasks normally requiring human intelligence, such as
visual perception, speech recognition, decision-making, and translation between languages”. Everyday
examples of tools constructed with AI include automatic image content generation, facial recognition and
predictive texting (now also including ChatGPT). These tools share a reliance on the processing of large
quantities of data for their construction and efficacy, which is facilitated by today’s computational
technology. AI promises speed and precision beyond the capacity of human beings. AI
systems can assist humans by taking over or eliminating dangerous or tedious tasks in everyday and work
settings (Shubhendu and Vijay, 2013). In workplaces, AI has found application in workforce management,
process management, and product and service development tools. In workforce management, AI
applications are reported in human resources (Black and van Esch, 2020) and monitoring and surveillance
(Mateescu and Nguyen, 2019). In process management, AI applications can be seen in automated
warehousing (Bustamante et al., 2020) and smart factories (Capgemini, 2020; Wilson and Daugherty, 2018).
In product and service development, AI has found applications, amongst others, in agriculture to improve
irrigation and seeding (Rural Industries, 2016; Talaviya et al., 2020), medicine to improve diagnostic
instrumentation (Choy et al., 2018; Davenport and Kalakota, 2019) and in care settings to assist individual
support tasks (Loveys et al., 2022). In manufacturing, automated product inspection tools built on AI are
challenging traditional manual product quality assurance processes (CBInsights, 2022; Azamfirei et al.,
2023).
Whilst the potential gains from AI are rarely disputed, albeit perhaps occasionally exaggerated, using
AI can contribute to new health risks (Todolí-Signes, 2021). This study examined AI’s potential health and
safety impacts on workers. It adopts a broadly preventative position, echoing Karanika-Murray and
Ipsen (2022, p. 259), who, in their guest editorial to the IJWHM Special Issue, argued that “…there is a need
for primary and proactive initiatives and a focus on the whole organisation. Preventative approaches are […]”. This position casts the organisation, rather than the individual worker, as the principal
mediator or actor. Workplace practices are typically subject to WHS regulation, new International
Organization for Standardization (ISO) standards and product liability regulations, but in the field of AI the
application of such rules and regulations is largely untested (Dignam, 2020). We thus asked what
commercial and other users of AI need to be aware of – and do – to avoid or contain risks¹ associated with
the disruption that new technologies, such as AI, cause to workplace processes, safety, and worker
wellbeing. This paper builds on and develops research previously reported in Cebulla et al. (2021, 2022).

¹ We use the term ‘risk’ to refer to the potentiality of an event without implying any a priori (typically negative) connotation.
AI adoption in workplaces involves changes to processes or products that likely also entail re-arranging,
fragmenting or substituting for human activity. AI adoption then shifts the boundaries of common job
descriptions, content and expertise, requiring new role assignments as the operation and management of the
AI change established working practices (Faraj et al., 2018). Time previously allocated to tasks that are now
automated gets re-assigned, while the servicing of the AI is delegated up or down the organisational
structure (Pepito and Locsin, 2018). Workers may experience such processes negatively, lowering job
engagement (Braganza et al., 2020). When appropriately applied, AI can improve the quality of materials or
services, but reports on accidents involving AI (Arnold and Toner, 2021) and on prediction failures (Chakravorti, 2022) counsel caution.
A conventional view is to assess AI adoption risks – along with business risks more generally – from
a leadership perspective, as this is where strategic decisions are taken and responsibility ultimately rests. But
because digitalisation leads to increased interconnectivity (Dery et al., 2017; Baethge-Kinsky, 2020), a
leadership perspective may not suffice to capture these risks. There is an emerging debate about how AI use
in business induces complacency in business leaders as they place unwarranted trust in the technology (de
Cremer, 2022; Walsh, 2019). AI systems may also desensitise shopfloor workers to the riskier aspects of AI use.
Current regulation of safe working practices barely protects against such risks, taking a
predominantly mechanistic approach and connecting safety to good work design (Safe Work Australia,
2015). Such regulations are rooted in a notion of workplaces that are physically protected from accident risk and where
supervisory systems can judge human capacity to adapt to workflows and successfully manage workloads.
Data-driven applications of AI have less obvious physical safety implications but increase the scope for
psychosocial harms by raising the cognitive and emotional demands associated with workplace tasks and roles.
Methods
The research explored the implications of AI adoption in a workplace for worker and management health and safety, involving data science and
technology experts, including AI specialists, and AI users in academia, the public sector and business, and
WHS inspectors. Given the novelty of the topic, the research adopted an exploratory, inductive qualitative
approach (Bingham and Witkowsky, 2022; Swedberg, 2020). The data collected from individuals familiar
with the uses of AI inside and outside of workplaces integrated impressions, stipulations and projections
with actual experiences from within workplaces. Based on the findings from that research, a framework of
actions for identifying and containing WHS risks when using AI in workplaces was developed.
The research with data science and technology experts, AI users, and WHS inspectors was conducted as a
three-phased fieldwork program between July 2020 and February 2021, namely:
• Interviews with data science and technology experts, and WHS inspectors; and two online
workshops seeking input from an audience beyond technology and WHS specialists (phase 1);
• Interviews with representatives of organisations using or preparing for the use of AI in the workplace (phase 2);
• An online workshop with WHS specialists to discuss the findings from our study (phase 3).
During phase 1, interviews were conducted, individually, with data science and technology experts,
and WHS inspectors with interest and expertise in, or responsibility for, workplaces uses of AI. Participants
were identified via social media, notably LinkedIn, and professional networks, such as regional public fora.
Thirty interviews were completed by October 2020 with participants from industry (16), government (5),
WHS specialists (4), academia (2), research organisations (2), and one representative of an AI professional
network. Interviews lasted between 40 minutes and about one hour; were conducted by phone or via video
conferencing tools (Zoom, MS Teams); and were recorded with the participant’s consent, which 15
participants granted. About one-third of the interviews were co-attended by two or more researchers, who
also took notes of the conversations, with particular attention given to detailing those that could not be
recorded. Whilst we took a pragmatic approach to determining the number of interviews, guided by
resources and deadlines framing the commissioned part of this research, the data collection achieved
saturation before all interviews had been completed (Baker and Edwards, 2012).
Parallel with these interviews, two public online workshops were conducted in August 2020,
advertised via the researchers’ and their employers’ social media sites. A selection of AI interest groups
identified in web searches was also approached directly. The workshops were intended to reach a wider non-
technical, non-specialist audience than the interviews, although data science and technology experts again
made up most of the 22 participants. A detailed participant breakdown is unavailable since no further
personal or professional data were collected besides registering emails and participants’ names.
Phase 2 involved in-depth interviews with organisations using, or having put into place structures
and processes in preparation for using, AI, and further expert interviews. Participants in this phase were
selected for their experience in introducing AI in the workplace. In the absence of a register of AI users,
participants were identified through contacts established during phase 1 and additional searches of publicly
available sources such as AI industry networks, innovation centres and innovation labs (mostly university-
based). Other sources included websites advertising, promoting, selling or otherwise exploring AI applications.
Twelve individuals from nine organisations participated. They included senior managers and data
scientists in local government; data mining scientists, chief executives and WHS managers from three
Australian state and territory governments; chief executives and senior managers of a specialist
manufacturing company, a software company and a disability service provider. Semi-structured interviews
were conducted by phone or video conferencing, lasting between 40 and 80 minutes; about half of the
interviews were attended by two researchers, the remainder by one interviewer only. Interviews were
recorded where permission had been granted, and researchers took notes. As in phase 1, resources and deadlines guided the number of interviews conducted.
The final phase 3, conducted during March 2021, engaged 12 WHS inspectors identified by the
funding organisation prior to meeting in an online workshop. The workshop was conducted via
teleconference, lasted approximately 1.5 hours and was recorded with participants’ consent. All five members of the research team attended.
The fieldwork phases adopted a sequential approach, whereby phases 2 and 3 built on the findings of their respective predecessors.
In the phase 1 interviews, participants were asked to explore current or potential uses of AI in their
workplace and workplaces more generally, and any ethical or WHS matters that these uses raised or might
raise. Participants were invited to provide examples based on their own experience or observation, or as they envisaged them.
The workshops of phase 1 started with a short presentation about the research objectives. Brief
illustrations of current uses of AI in workplaces and the generic debate about the ethics of AI followed to
guide the conversation, which sought to generate further examples of AI risks in workplaces.
Phase 1 resulted in an initial inventory of WHS hazards and risks when using AI in the workplace
(hereafter referred to as AI WHS risks), which was then further developed and populated with case
examples and proposed responses or preventions in phase 2 of our fieldwork between November 2020 and
February 2021.
This second phase sought to (1) explore how organisations used or intended to use, and prepared the
application of, AI at work, (2) understand management and employee roles in those processes, (3) identify
risk factors affecting worker safety, and (4) establish principles to safeguard their health. At the end of phase
2, a more detailed AI WHS risk list had emerged, including examples of how businesses had managed those
risks, distinguishing between different stages from the conception and development to the use of AI in the
workplace.
In phase 3, the findings were taken to the workshop with WHS experts for discussion and validation.
Conceptual framework
The fieldwork’s conceptual starting point was the AI ethics principles endorsed by the Australian
Government and developed in 2019/20 by the country’s Commonwealth Scientific and Industrial Research
Organisation (CSIRO), the principal government agency responsible for scientific research (Dawson et al.,
2019). They were presented to study participants to guide the conversations and stimulate discussion about
uses of AI in workplaces that may jeopardise meeting these ethics principles and what could be done to prevent this.
The Australian Government’s AI ethics principles concerned the need for AI and its applications to
protect, preserve or instil: (i) human, societal and environmental wellbeing, (ii) human-centred values, (iii)
fairness, (iv) privacy protection and security, (v) reliability and safety, (vi) transparency and explainability, (vii) contestability and (viii) accountability (see Figure I).

[Figure I near here]
To further assist interviews and group conversations about AI WHS risks, we introduced the AI Canvas, a
conceptual model of the stages of AI development in a business context developed by Agrawal et al. (2018)
(see Figure II). The AI Canvas model identifies seven stages, from the initial ideation of potential AI uses in
an organisation, via design, set up and testing of the AI system, to full operation. This sub-division sought to focus attention on risks specific to each stage.
Combining the ethics principles and the AI Canvas into a matrix added a temporal, sequential
dimension, making it possible to explore when as well as how AI system implementation may contravene ethical
principles and, more specifically, pose actual WHS risks. That matrix was shared for reference and
discussion with study participants before (via email) or during (online screen sharing) the interviews and
workshops.
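For readers who prefer a concrete representation, the matrix can be thought of as a two-dimensional lookup from principle and stage to a list of risks. The following is a minimal sketch in Python; the variable names and the example entry are ours, for illustration only, and do not reproduce the study’s actual materials.

# Illustrative sketch (not the study's actual tooling): the ethics-principles
# x AI Canvas matrix as a dictionary keyed by (principle, stage), with each
# cell collecting free-text risk notes raised by participants.

AI_ETHICS_PRINCIPLES = [
    "human, societal and environmental wellbeing", "human-centred values",
    "fairness", "privacy protection and security", "reliability and safety",
    "transparency and explainability", "contestability", "accountability",
]

AI_CANVAS_STAGES = [  # Agrawal et al.'s (2018) seven stages
    "prediction", "judgement", "action", "outcome", "training", "input", "feedback",
]

# One cell per principle/stage intersection.
matrix = {(p, s): [] for p in AI_ETHICS_PRINCIPLES for s in AI_CANVAS_STAGES}

# Hypothetical example entry of the kind collected in interviews:
matrix[("privacy protection and security", "training")].append(
    "data leaks and disclosure of personal information"
)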
[Figure II near here]
Data analysis revisited the interview and workshop recordings, and aggregated and compared the notes
researchers had taken of the conversations. At least two researchers analysed interviews before the findings were consolidated.
The analysis had an emic focus (Gaber and Gaber, 2010), populating the matrix of AI Ethics
Principles and the AI Canvas with AI WHS risks and possible pre-emptive measures suggested by study
participants. The Ayoa Mind Mapping software was used to assist in this process. The matrix was updated as new data were collected.
The researchers then applied the information collected in the fieldwork to develop a framework of
actions for identifying and minimising AI WHS risks, building on the AI Ethics/Canvas matrix and also the wider literature.
Findings
The workshops and interviews (phase 1) revealed early on that participants struggled to give
concrete meaning to some of the AI ethics principles, which were seen as referring to similar or related
ethical challenges and values. In discussions, participants condensed the eight principles to just three, which
they considered to capture the range of principles effectively. The three aggregates reflected a concern for
human wellbeing (combining the first three of the ethics principles in Figure I), safety (principles 4 and 5) and accountability and oversight (principles 6 to 8).
The explorations then turned to applying these three overarching principles to workplace situations
to give them meaning specific to that context. In discussing the principle of human wellbeing, participants focused on the implications for workers of newly
developed AI systems. This included AI’s potential impact on workers’ control over their job and workload, and on their working conditions.
The safety principle, in contrast, was discussed in terms of potential threats to personal data security
and the manipulation of AI systems for personal advantage by those familiar with its functions (‘gaming’,
notably, in the context of performance measurement). Worker safety was, above all, understood to mean the
protection of privacy from AI intrusion and the prevention of associated harm. But it also included harm that
may result from the physical characteristics of workplaces changing with the installation of AI driven
technologies.
Finally, participants linked the accountability and oversight principle of AI systems to autonomous
decision-making systems changing human-machine interactions as those systems determine and potentially
overrule decisions previously taken by a human. Moreover, these systems may replace or alter human-
human (e.g., worker-supervisor) interactions (e.g., in human resource management applications) or reporting lines.
In short, participants translated the abstract AI ethics principles to the workplace context as follows: human wellbeing as job control and workload; safety as privacy and harm; and accountability and oversight as supervisor/peer and organisational relations.
To distinguish this concretisation from the original AI ethics principles, we refer to it as the work
arrangements and governance (WAG). Because of their conceptual proximity, in the sections below the sequence of the WAG is changed, still commencing with job control and workload, but now
followed by supervisor/peer and organisational relations, then concluding with privacy and harm.
With the three WAGs, the study defined the principal areas in which sound, ethical workplace relations and
conditions are worked out. The research next sought to establish how and when AI applications might
challenge these. The AI Canvas had initially been chosen to facilitate this part of the investigation because
its seven-stage model offered a focus on AI WHS risk specific to each stage and thus a granular account of
potential AI WHS risks. It was still possible to do so, but in working with the AI Canvas, participants again
sought simplifications to the instrument. Those less familiar with AI or machine learning (ML) terminology
or how businesses may approach their application from a commercial perspective found the specialist
language and technical connotations in the AI Canvas problematic. The solution was to retain the seven
stages of the AI Canvas but not to insist on maintaining all distinctions between stages when seeking to
identify AI WHS risks. The seven stages were thus aggregated into three broader categories, labelled
ideation (consisting of the AI Canvas stages of prediction, judgement, action), development (outcome, training) and operation (input, feedback).
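Expressed programmatically, this aggregation is a simple many-to-one mapping. The sketch below (in Python, with labels taken from the aggregation just described) is illustrative only; the function name is ours.

# Sketch of the seven-to-three aggregation of AI Canvas stages described above.
STAGE_CATEGORY = {
    "prediction": "ideation",
    "judgement": "ideation",
    "action": "ideation",
    "outcome": "development",
    "training": "development",
    "input": "operation",
    "feedback": "operation",
}

def category(stage: str) -> str:
    """Return the broader implementation category for an AI Canvas stage."""
    return STAGE_CATEGORY[stage]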
Table 1 shows the most frequently mentioned occasions and circumstances that participants thought
might produce WHS risks and hazards in relation to each WAG and AI Canvas stage. These examples were
collected during fieldwork phases 1 and 2. For most of the AI stages, participants identified a mix of
psychological (e.g., communication, work stress) or physical (e.g., changes to the workplace environment)
risks pertaining to employees alongside business continuity and reputational risks that may affect employers.
Contributions also repeatedly noted risks associated with the opaqueness of complex data systems
employed by AI and different levels of access and understanding of these amongst a workforce, potentially
giving rise to conflict, manipulation and inappropriate, unintended use. Few, though, remarked on the risk of physical harm.
In several instances, it was found that AI WHS risks may be present at different stages of AI
implementation, either because they persist or impact later stages. For example, not giving sufficient
consideration to employee job satisfaction may originate during early implementation stages because of a
failure to conduct (adequate or any) pay-off assessments but emerge only during full operation. A
consequence of AI scope transgressions is that risks may feature as ‘boundary creep’ or ‘worker resistance’
during AI operations. These lasting consequences pointed to the importance of reflecting early on the
potentially multiple operational and organisational factors that AI implementation in a workplace affects.
The risk of scope transgression, for example, is captured as part of the Ideation and Operation phases. The risk in the Ideation phase is at the intersections of
“Supervisor/peer and organisational relations” and “Judgement”, and of “Job control and workload” and
“Judgement”. In the Operation phase, the risk is captured at the intersection of “Supervisor/peer and organisational relations” and “Input”.
[Table 1 near here]
A more detailed version of Table 1 than can be presented here was developed into an AI risk assessment
scorecard, which may be used to support users in identifying and rating the likelihood and impact of AI
WHS risks, and thus to prioritise preventative or remedial measures (Cebulla et al., 2021).
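The scorecard itself is documented in Cebulla et al. (2021) and is not reproduced here; the sketch below merely illustrates the kind of likelihood-impact prioritisation such a scorecard supports. The five-point scales, the multiplicative priority score and the example risks are assumptions for illustration, not the scorecard’s actual content.

from dataclasses import dataclass

@dataclass
class RiskRating:
    """A single AI WHS risk rated on assumed 1-5 likelihood/impact scales."""
    description: str
    likelihood: int  # 1 = rare ... 5 = almost certain (assumed scale)
    impact: int      # 1 = negligible ... 5 = severe (assumed scale)

    @property
    def priority(self) -> int:
        # Conventional risk-matrix product; the actual scorecard may weight differently.
        return self.likelihood * self.impact

risks = [
    RiskRating("boundary creep: data collection outside workplace", 3, 4),
    RiskRating("worker resistance to data sharing", 4, 2),
]

# Highest-priority risks first, to guide preventative or remedial measures.
for r in sorted(risks, key=lambda r: r.priority, reverse=True):
    print(f"{r.priority:>2}  {r.description}")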
AI Action Framework – applying findings to develop a model for containing AI WHS risks
The question for practitioners remained what to do to avoid or at least contain AI WHS risks.
To answer this question, we have used the data collected in this study, including preventative and remedial
measures suggested in interviews and workshops, to construct a framework of actions for identifying and
containing AI WHS risks. We took turns reviewing items included in the matrix, interview notes and
recordings for examples of identification or minimisation strategies that had been proposed or adopted, and distilled these into
actions.
In addition, the framework development drew on a review of the literature undertaken concurrently
with the fieldwork (for details, see Cebulla et al., 2021), which contributed to populating Table 1 with risk examples.
The following sections summarise the action framework’s key messages. In doing so,
each AI Canvas category (in the columns of Figure III) is addressed sequentially, elaborating in each
instance on the three WAGs (identified in the first (left) column). The WAG and the actions proposed
regarding an AI Canvas category are identified in italics at the start of each section, corresponding to entries
in Figure III.
[Figure III near here]
Job control and workload - selecting solutions offering maximum benefit at minimum loss.
The intersection of “Job control and workload” and “Prediction” in Table 1 triggers consideration of
how an AI system may impact the workforce. AI systems that significantly alter, reduce or displace jobs,
i.e., AI used mainly for cost reduction purposes, especially if they also fail to enhance working conditions,
bear psychosocial risks for those affected. The ideation stage defines the business problem an AI system is
intended to address. The solution that AI may offer can generate new value propositions with profound
implications for what the organisation and its parts do and how individual members, i.e., the workers, make
sense of what their organisation is doing (Wessel et al., 2021). This research suggests value propositions be assessed for such implications from the outset.
Consideration should be given to AI’s potential for augmentation beyond the immediate commercial
objectives for AI use, such as more reliable or customised products and services (IEEE, undated; WEF,
2020). Could the greater productivity that AI use promises also translate into labour time savings to benefit
work-life balance (Crane, 2021)? What other potential might there be for promoting better working
conditions, including less physically or psychologically demanding work? How can job autonomy be
enhanced so as to improve mental wellbeing? Any reflection of AI use in this context would consider the
need and scope for rewriting job descriptions, modifying work tasks and work patterns and schedules, and
any implications for skills training and development. Some modifications may be more practically realisable
and acceptable to the workforce than others (ADAPT, 2017; van de Poel, 2016).
Supervisor/peer and organisational relations – configuring modes for communicating the purpose and
functioning of AI. The intersection of “Supervisor/peer and organisational relations” and “Prediction” in
Table 1 captures the need to think about how the purpose of the AI will be communicated. Detection of AI
risks is aided by inclusive communication, which is already used to stimulate positive attitudes to change
and AI buy-in (Baethge-Kinsky, 2020; Matsumoto and Ema, 2020; Makarius et al., 2020). It could also
benefit AI WHS risk awareness early in the ideation process. Communication across organisations would be
about more than providing information. It becomes about employee engagement, which includes fact-
finding, enabling organisations and their members to identify AI risks collectively, using two-way communication. Changes to workplace relations should be
considered during the AI ideation stage, especially if new job roles or task delegation are involved, rather
than only minor changes to an employee’s daily activities. An AI system thus may become a new
intermediary, for instance, when wearable devices, such as augmented reality (AR) glasses, provide instant
access to information, which substitutes for consultation with colleagues, likely also affecting lines of
accountability. Automated performance monitoring systems are another example, which may not only add to
but replace conventional, inter-personal performance reviews. How is this likely to affect intra-
organisational relations? Human resource management provides early insights into the current limits of AI
use with lessons for other sectors (Charlwood and Guenole, 2022; Giermindl et al., 2021; Robert et al., 2020).
Privacy and harm - identifying ethical, moral, social principles, and personal and collective conditions and
rights. The intersections of “Privacy and harm” with “Prediction” and “Judgement” in Table 1 cue attention
to reputational risks and encourage one to go beyond financial motivations to determine if an AI solution is
worth pursuing. Aloisi and de Stefano (2020, p. 52) have argued that new technology deepens the risk of ‘hierarchy and
control over the workforce’, with associated physical or psychological harm and infringement of privacy.
Ideation should consider these risks, their probability, and scope for prevention. Could AI-driven
organisational innovation or an over-reliance on it adversely affect the workers’ right to a healthy and safe
workplace? How is this likely to affect trust within organisations and amongst workers? How might this be mitigated?
AI risks may also be introduced from the outside, via external contractors (Duke, 2022). Outsourcing
system development and management adds communication layers and complexity. Involved actors may not
speak the same language, have different agendas, knowledge, expectations, and (technical) understanding,
and aspire to other benefits/costs. Due diligence and risk management become of primary importance with outsourcing.
Job control and workload - mapping potential disruptions to work processes and needs for changes. The
intersections of “Job control and workload” with “Outcome” and “Training” in Table 1 focus on how an AI
solution will alter the way people work. The challenge to job control and workload during the development
stage is the risk of new unwelcome outcome measures and the collection of uneven or intrusive employee
(performance) data to support AI development. These measures often signal the redefining of job roles and
uncertain, possibly inequitable impacts on individuals (Fountaine et al., 2019). The result may be resistance
to change (Holmström and Hällgren, 2021). Resistance is likely when the ideation stage has neglected these impacts.
Changing job roles will affect working patterns, and new divisions of labour may need to be planned
and mapped out. New job roles may emerge that require new positions and, thus, resourcing. Organisations
may need new staff or face skill shortages and an abundance of ‘old’ skills. Although primarily about
training the AI, this stage should also be used to identify secondary impacts on workplace capacities not
previously apparent.
Supervisor/peer and organisational relations - delineating lines of reporting and accountability. Table 1
shows that the intersections of “Supervisor/peer and organisational relations” with “Outcome” and
“Training” emphasise the importance of considering how AI can affect worker incentives and relationships.
In defining expected outcomes from AI and commencing testing, questions of judgement, decision-
making and authority edge to the fore. To the extent that role differentiation occurs, some job roles will have
a greater capacity to shape outcomes and to benefit from them than others. Those holding those roles may
have different dispositions towards working with new technologies, especially fundamentally opaque AI
technologies (e.g., Lebovitz et al., 2022). There may be direct (physical) and relational risks, such as the
gaming of AI systems (Beard and Longstaff, 2018), that may have beneficial effects for some but
detrimental effects on others – those not gaming the system and hence losing out.
To the extent that new technology forces individual workers to adapt, it affects intra-organisational
expectations and processes, including amongst peers and, notably, concerning supervisors. The training
stage may reveal new complexities of the proposed AI system, including its interconnectivity across the organisation.
Accountability requires mechanisms to address AI predictions and recommendations that conflict with
workers’ experience, intuition, or sense of justice, potentially causing dissonance and psychosocial harm.
Privacy and harm - anticipating direct and indirect effects of AI systems use on individual task performance.
The intersection of “Privacy and harm” with “Training” in Table 1 emphasises that AI systems may gather
or produce sensitive data. As organisational or product data are finally used during AI system testing, the
focus extends from personal physical and psychosocial harms to include those explicitly concerned with
privacy, which, in turn, may also relate to psychosocial harms. With data gathering, cyber security risks also
emerge. Workplace-specific AI systems likely incorporate process data with a discrete risk of disclosing
personal data, be they about how tasks are completed, work processes chosen, or concern socio-
demographics, pay rates and other data deemed critical to organisational efficiency calculations.
This type of data gathering and use entails surveillance risks. Information flowing into AI systems
may be used to improve an organisation’s aggregate performance and individual workers’ performance by
monitoring specific behaviours, ranging from toilet breaks to typing speeds (Gartner, 2019; Scassa, 2021;
Yu et al., 2018). With the use of wearables, intended or incidental surveillance may not end at the factory
gate or the office door. This additional data collection may not have been intended or disclosed, and may thus
breach prior agreements or legislation. Strict data use protocols and oversight should limit ‘usage
creep’.
Job control and workload - maintaining competencies, capabilities and capacity of workers.
During AI operation, the focus shifts to the longer-term impacts of AI systems on operators and end-users
and their interactions with the system. Failure to ensure that relevant competencies are in place may
undermine business continuity should new processes be disrupted. The intersection of “Job control and
workload” with “Input” in Table 1 recognises the fact that AI systems may fundamentally change how
people work. The impact of AI on working conditions, such as changes in speed, tasks, decision-making,
and technical skill requirements, will become evident, emphasising the importance of providing AI skills
training to ensure safe and efficient use of AI systems and prepare individuals for collaborative work with
AI (Kolbjørnsrud et al., 2017). Makarius et al. (2020) conceive of this process as one of growing utility of
AI systems to organisations, with opportunities for workplace improvement, notably enhancing employee
communication and resolution. As inherited job roles and responsibilities change, new risks and hazards
emerge associated with worker unfamiliarity, and status change, including loss/degradation and emotional
reactions to working with AI (Hornung and Smolnik, 2022; Mirbabaie et al., 2022). As pointed out in the
intersection between “Supervisor/peer and organisational relations” and “Input” (Table 1), employees may resist such changes, for example, the sharing of their data.
Almost unique to AI-enabled work environments is ‘algorithmic distance’, which emerges when
‘organisational power [is] exercised through automated routines or algorithms’ (Bartley et al., 2019, p.8).
Supervisor/peer relations may, for instance, be automated via AI in performance review contexts, when
remote monitoring (e.g., movement sensors) replaces supervisor assessment, significantly modifying
feedback and potentially also appeals processes. Our research participants additionally pointed to a physical
and relational gap in workplaces when human interaction is replaced with machine-generated AI messaging.
Participants disagreed and the literature does not provide a clear answer about whether physical or
relational gaps have a positive, negative, or neutral effect on WHS and worker wellbeing. The lack of
consensus may reflect a gap in current knowledge but also an ambiguity of AI systems’ impacts, which may
have ‘good’ and ‘bad’ effects depending on the mode of implementation, use and the worker demographic concerned.
Privacy and harm - embedding system protections, quality assurance and wellbeing checks. Krzywdzinski
et al. (2022) found that workers expect new assisting AI technologies to contribute to WHS and stress
prevention, and seek evidence of that effect (Table 1, the intersection of “Privacy and harm” with “Input”).
The same arguably applies to personal data. Both call for feedback controls to be put in place to record the
AI systems’ human and organisational impact, such as for responding to technostress (Graveling, 2020;
Tarafdar et al., 2015); or modifying the speed, sequencing or monitoring of work where they cause new harms.
Agrawal et al.’s (2018) concept of feedback within their AI Canvas is about using outcomes to
improve the AI algorithm to achieve maximum benefits to the business. To this, we would add that this
objective should be achieved within its original confines and intended duration (Table 1, the intersection of
“Privacy and harm” and “Feedback”). A more comprehensive review than of the AI algorithm alone is
required to ascertain the continuing value of the AI outcomes in a workplace context. Feedback loops would
need to return to the initial stages of the risk and hazard assessment process at the ideation stage. It will need
to ask whether the implementation of the AI system has met or at least contributed to achieving ‘good
workplace’ WHS objectives (specifically, enhanced workplace wellbeing as would have been defined at that
stage). The questions to ask at this point are: are those initial objectives still valid and desirable, and what
else should be done to use AI for ‘social good’ within the workplace?
Discussion
The framework presented here is a proposal for how AI risk may be assessed systematically and
preventatively in a workplace, the questions that ought to be asked and when they ought to be asked. It
invites self-reflection alongside pre-emptive impact assessment. The framework does not and cannot claim
to have conclusively covered known risks, whilst we also expect new, unknown risks to emerge in the
future as AI development continues apace. In conjunction with the detailed risk assessment scorecard
previously developed during this study (Cebulla et al., 2021), the framework is a tool to assist with that task.
An important message emerging from our fieldwork and the literature to date is that consultation and
communication are central to countering organisational complacency when working with AI. This also indicates
how the framework may be used most productively. Sanderson et al. (2022, p. 5) stress the importance of
‘diverse perspectives and collaborative teams in the design, development and use of AI technology’ to
promote the beneficent use of AI. To ensure comprehensive involvement, management, data managers, and
relevant business units (including those some steps removed from the area directly impacted by the AI system) should all be involved.
As our research noted the temporality of the AI WHS risks, that is, the same risk carrying weight
beyond its initial occurrence, persisting in similar or different shape later on, workforce engagement may
need to be embedded in organisational processes and culture. This need is accentuated by the AI Canvas’
inclusion of a feedback stage for reflecting on how an AI system has been implemented and is being used.
AI implementation is here understood as a loop of checks and balances, and checks again.
AI may thus change not only WHS but also how workplaces ought to prepare for accelerating change
and emergence of AI WHS risks which, like AI risks more generally, are incompletely understood and, for now, incompletely regulated.
The use of AI in workplaces has then wider societal and regulatory implications. Societal, because
AI changes workplaces and communications and interactions in those workplaces, but also beyond, for
instance, through new ways of supply chain management. Like any technology, AI affects social relations.
The difference here is that, in the past, humans used technology to assist them in doing their work; today AI
may be used to determine what that work is and how it is to be done. Job roles and responsibilities become
interchangeable or inversed. Accountability, however, looks set to remain as is, i.e., with the performing human (Elish, 2019).
In the absence of bespoke statutory WHS regulation of psychosocial harm from AI and impact on
workers beyond workplace boundaries, the fallback for keeping AI WHS risks in check is existing
legislation, such as anti-discrimination laws used to prevent or halt algorithmic bias discriminating based on
gender, ethnicity/colour or disability. In some instances, AI workplace impacts may fall under the auspices
of current WHS regulation, despite its limitations. These may be imperfect controls, but they nonetheless
require businesses to demonstrate to the outside world that their use of AI is fair and legal – and that they can be held to account for it.
As AI evolves, now reshaping work processes with the proliferation of generative AI, research must
continue to dissect the social and relational as well as commercial cost-benefits of that technology and
develop recommendations for managing those newly emerging risks, including through the means of WHS
oversight. This would be supported by a living inventory of AI risks and their impact on WHS, and risk mitigation strategies.
Conclusion
This paper has presented an action framework for assessing and containing WHS risks associated
with the use of AI in workplaces and identified in interviews and group discussions with AI and WHS
specialists, and organisations using or preparing to use AI in the workplace. Study participants identified a
workplace risk dimension of AI beyond physical or psychological, work stress-related WHS risks and
hazards commonly identified in and monitored by safe-work guidelines. They stressed the link between AI
systems, their intended benefits or purpose, and their effects on workplace relations, (data) privacy and
business continuity. The risk assessment framework developed from those discussions suggests a sequence
of intra-organisational ‘Q&A’ (questions and answers) and engagement with a view to achieving a socially
and collegially aware workplace with sound WHS and employee welfare processes in place. It makes the monitoring of AI WHS risks a continuing organisational task.
References
Agrawal, A., Gans, J. and Goldfarb, A. (2018), “A Simple Tool to Start Making Decisions with the Help of AI”, Harvard Business Review, 17 April.
Aloisi, A. and de Stefano, V. (2020), “Regulation and the future of work: The employment relationship as an innovation facilitator”, International Labour Review, Vol. 159 No. 1, pp. 47-69.
https://doi.org/10.1111/ilr.12160.
Arnold, Z. and Toner, H. (2021), “AI Accidents: An Emerging Threat”, CSET policy brief, Center for
Security and Emerging Technology, Georgetown University, Washington, DC, United States.
https://doi.org/10.51593/20200072.
Azamfirei,V., F. Psarommatis and Y. Lagrosen (2023), “Application of automation for in-line quality
inspection, a zero-defect manufacturing approach”, Journal of Manufacturing Systems, Vol. 67, pp. 1-22.
https://doi.org/10.1016/j.jmsy.2022.12.010.
Baker, S. and Edwards, R. (2012), “How many qualitative interviews is enough? Expert voices and early career reflections on sampling and cases in qualitative research”, National Centre for Research Methods Review Paper.
Bartley, T., Soener, M. and Gershenson, C. (2019), “Power at a distance: Organizational power across boundaries”, Sociology Compass, Vol. 13 No. 10.
Beard, M. and Longstaff, S. (2018), “Ethical by Design: Principles for good technology”, The Ethics Centre, Sydney.
Bingham, A.J. and Witkowsky, P. (2022), “Deductive and inductive approaches to qualitative data analysis”, in Vanover, C., Mihas, P. and Saldaña, J. (Eds), Analyzing and Interpreting Qualitative Data: After the Interview, SAGE Publications, pp. 133-146.
Black, J.S. and van Esch, P. (2020), “AI-enabled recruiting: What is it and how should a manager use it?”, Business Horizons, Vol. 63 No. 2, pp. 215-226.
Bustamante, F., A. Dekhne, J. Herrmann and V. Singh (2020), Improving warehouse operations—digitally,
available at: https://www.mckinsey.com/capabilities/operations/our-insights/improving-warehouse-
Capgemini (2020), Smart Factories. How can manufacturers realize the potential of digital industrial revolution, Capgemini Research Institute.
Cebulla, A., Szpak, Z., Knight, G., Howell, C., and Hussain, S. (2021), “Ethical use of artificial intelligence in the workplace”, Final Report, Centre for Work Health and Safety, NSW Government, Sydney.
Cebulla, A., Szpak, Z., Howell, C., Knight, G., & Hussain, S. (2022). “Applying ethics to AI in the
workplace: the design of a scorecard for Australian workplace health and safety”, AI & Society, 38, pp.919–
935, https://doi.org/10.1007/s00146-022-01460-9 .
Chakravorti, B. (2022), “Why AI Failed to Live Up to Its Potential During the Pandemic”, Harvard Business
Review, 17 March.
Charlwood, A. and Guenole, N. (2022), “Can HR adapt to the paradoxes of artificial intelligence?”, Human Resource Management Journal, Vol. 32 No. 4, pp. 729-742.
Chhillar, D. and Aguilera, R. V. (2022), “An Eye for Artificial Intelligence: Insights into the Governance of
Artificial Intelligence and Vision for Future Research”, Business & Society.
https://doi.org/10.1177/00076503221080959.
Choy, G., O. Khalilzadeh, M. Michalski, S. Do, A. E. Samir, O.S. Pianykh, J.R. Geis, P.V. Pandharipande,
J.A. Brink, and K.J. Dreyer (2018), “Current Applications and Future Impact of Machine Learning in Radiology”, Radiology, Vol. 288 No. 2, pp. 318-328.
Crane, J. (2021), “How AI in the Workplace Could Shorten Your Workweek”, available at:
https://www.spiceworks.com/hr/hr-strategy/guest-article/how-ai-in-the-workplace-could-shorten-your-
Cumberland, D.M., Ellinger, A.D. and Deckard, T.G. (2022), “Listening and learning from the COVID-19
frontline in one US healthcare system”, International Journal of Workplace Health Management, Vol. 15 No. 3.
Davenport, T.A. and Kalakota, R. (2019), “The potential for artificial intelligence in healthcare”, Future Healthcare Journal, Vol. 6 No. 2, pp. 94-98.
Dawson, D., Schleiger, E., Horton, J., McLaughlin, J., Robinson, C., Quezada, G., Scowcroft, J. and
Hajkowicz, S. (2019), “Artificial Intelligence: Australia’s Ethics Framework”, Data61 CSIRO, Australia.
De Cremer, D. (2022), “With AI entering organizations, responsible leadership may slip!”, AI and Ethics, Vol.
2, pp.49–51. https://doi.org/10.1007/s43681-021-00094-9.
Dery, K., Sebastian, I. M. and van der Meulen, N. (2017), “The Digital Workplace Is Key to Digital Innovation”, MIS Quarterly Executive, Vol. 16 No. 2, available at:
https://aisel.aisnet.org/misqe/vol16/iss2/4.
Dignam, A. (2020), “Artificial intelligence, tech corporate governance and the public interest regulatory
response”, Cambridge Journal of Regions, Economy and Society, Vol. 13 No.1, pp.37–54.
https://doi.org/10.1093/cjres/rsaa002.
Duke, S.A. (2022), “Deny, dismiss and downplay: developers’ attitudes towards risk and their role in risk
creation in the field of healthcare‑AI”, Ethics and Information Technology, Vol. 24 No. 1.
https://doi.org/10.1007/s10676-022-09627-0.
Elish, M.C. (2019), “Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction”, Engaging Science, Technology, and Society, Vol. 5, pp. 40-60.
Faraj, S., Pachidi, S. and Sayegh, K. (2018), “Working and organizing in the age of the learning algorithm”, Information and Organization, Vol. 28 No. 1, pp. 62-70.
Fountaine, T., McCarthy, B. and Saleh, T. (2019), “Building the AI-powered organization”, Harvard Business Review, Vol. 97 No. 4, pp. 62-73.
Gaber, J. and Gaber, S. (2010), “Using face validity to recognize empirical community observations”, Evaluation and Program Planning, Vol. 33 No. 2, pp. 138-146.
Giermindl, L.M., Strich, F., Christ, O., Leicht-Deobald, U. and Redzepi, A. (2021), “The dark sides of
people analytics: reviewing the perils for organisations and employees”, European Journal of Information
Systems, https://doi.org/10.1080/0960085X.2021.1927213.
Graveling, R. (2020), “The mental health of workers in the digital era: how recent technical innovation and
its pace affects the mental well-being of workers”, European Parliament, Directorate-General for Internal Policies.
Holmström, J. and Hällgren, M. (2021), “AI management beyond the hype: exploring the co-constitution of AI and organizational context”, AI & Society.
Hornung, O. and Smolnik, S. (2022), “AI invading the workplace: negative emotions towards the
organizational use of personal virtual assistants”, Electron Markets, Vol. 32, pp.123–138.
https://doi.org/10.1007/s12525-021-00493-0.
IEEE (undated), “A Call to Action for Businesses Using AI. Ethically Aligned Design for Business”.
Jarrahi, M. H., Newlands, G., Lee, M. K., Wolf, C. T., Kinder, E. and Sutherland, W. (2021), “Algorithmic management in a work context”, Big Data & Society, Vol. 8 No. 2.
Jeske, D. (2022), “Remote workers' experiences with electronic monitoring during Covid-19: implications
and recommendations”, International Journal of Workplace Health Management, Vol. 15 No. 3, pp.393-
409. https://doi.org/10.1108/IJWHM-02-2021-0042.
Karanika-Murray, M. and Ipsen, C. (2022), “Guest editorial: Reshaping work and workplaces: learnings
from the pandemic for workplace health management”, International Journal of Workplace Health Management, Vol. 15 No. 3.
Kirkman, B. L. and Mathieu, J. E. (2005), “The Dimensions and Antecedents of Team Virtuality”, Journal of Management, Vol. 31 No. 5, pp. 700-718.
Kolbjørnsrud, V., Amico, R. and Thomas, R.J. (2017), “Partnering with AI: how organizations can win over skeptical managers”, Strategy & Leadership, Vol. 45 No. 1, pp. 37-43. https://doi.org/10.1108/SL-12-2016-0085.
Krzywdzinski, M., Pfeiffer, S., Evers, M. and Gerber, C. (2022), “Measuring work and workers. Wearables
and digital assistance systems in manufacturing and logistics”, Discussion Paper SP III 2022–301, WZB Berlin Social Science Center, Berlin.
Lebovitz, S., Lifshitz-Assaf, H. and Levina, N. (2022), “To engage or not to engage with AI for critical
judgments: How professionals deal with opacity when using AI for medical diagnosis”, Organization
Science. https://doi.org/10.1287/orsc.2021.1549.
Lee, M. K. (2018), “Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management”, Big Data & Society, Vol. 5 No. 1.
Loveys, K., Prina, M., Axford, C., Ristol Domènec, Ò., Weng, W., Broadbent, E., Pujari, S., Jang, H., Han, Z.A. and Thiyagarajan, J.A. (2022), “Artificial intelligence for older people receiving long-term care: a systematic review”, The Lancet Healthy Longevity, Vol. 3 No. 4, pp. e286-e297.
Makarius, E.E., Mukherjee, D., Fox, J.D. and Fox, A.K. (2020), “Rising with the machines: A
sociotechnical framework for bringing artificial intelligence into the organization”, Journal of Business Research, Vol. 120, pp. 262-273.
Mateescu, A. and Nguyen, A. (2019), “Workplace Monitoring & Surveillance”, Data & Society Research Institute, New York, NY.
Matsumoto, T. and Ema, A. (2020), “Proposal of the Model Identifying Risk Controls for AI Services”, The
34th Annual Conference of the Japanese Society for Artificial Intelligence, Kumamoto, Japan, June 12, 2020.
Mayer, A.-S., Strich, F. and Fiedler, M. (2020), “Unintended Consequences of Introducing AI Systems for
Decision Making”, MIS Quarterly Executive, Vol. 19 No. 4, Article 6.
Mayer, B., Helm, S., Barnett, M. and Arora, M. (2022), “The impact of workplace safety and customer
misbehavior on supermarket workers’ stress and psychological distress during the COVID-19 pandemic”,
International Journal of Workplace Health Management, Vol. 15 No. 3. https://doi.org/10.1108/IJWHM-03-2021-0074.
Mirbabaie, M., Brünker, F., Möllmann (Frick), N.R.J. & Stieglitz, S. (2022), “The rise of artificial
intelligence – understanding the AI identity threat at the workplace”, Electron Markets, Vol. 32, pp.73–99.
https://doi.org/10.1007/s12525-021-00496-x.
Pataki-Bittó, F. and Kun, Á. (2022), “Exploring differences in the subjective well-being of teleworkers prior
to and during the pandemic”, International Journal of Workplace Health Management, Vol. 15 No. 3,
pp.320-338. https://doi.org/10.1108/IJWHM-12-2020-0207.
Pepito, J. A. and Locsin, R. (2018), “Can nurses remain relevant in a technologically advanced future?”, International Journal of Nursing Sciences.
https://doi.org/10.1016/j.ijnss.2018.09.013.
Robert, L.P., Pierce, C., Marquis, L., Kim, S. and Alahmad, R. (2020), “Designing fair AI for managing
employees in organizations: a review, critique, and design agenda”, Human–Computer Interaction, Vol. 35 Nos 5-6, pp. 545-575.
Rural Industries (2016), “Artificial intelligence”, Rural Industries Research & Development Corporation, Canberra.
Safe Work Australia (2015), “Principles of good work design. A work health and safety handbook”, Safe Work Australia, Canberra.
Sanderson, C., Douglas, D., Lu, Q., Schleiger, E., Whittle, J., Lacey, J., Newnham, G., Hajkowicz, S.,
Robinson, C. and Hansen, D. (2022), “AI Ethics Principles in Practice: Perspectives of Designers and Developers”.
Scassa, T. (2021), “Privacy in the Precision Economy: The Rise of AI-Enabled Workplace Surveillance”.
Shubhendu, S. and Vijay, J.F. (2013), “Applicability of Artificial Intelligence in Different Fields of Life”, International Journal of Scientific Engineering and Research, Vol. 1 No. 1.
Swedberg, R. (2020) “Exploratory Research,” In Elman, C., Gerring, J. and Mahoney, J. (eds.) (2020) The
Production of Knowledge: Enhancing Progress in Social Science, Strategies for Social Inquiry, Cambridge University Press, Cambridge.
Talaviya, T., D. Shah, N. Patel, H. Yagnik and M. Shah (2020), “Implementation of artificial intelligence in
agriculture for optimisation of irrigation and application of pesticides and herbicides”, Artificial Intelligence in Agriculture, Vol. 4, pp. 58-73.
Tambe, P., Cappelli, P. and Yakubovich, V. (2019), “Artificial Intelligence in Human Resources
Management: Challenges and a Path Forward”, California Management Review, Vol. 61 No. 4, pp.15–42.
https://doi.org/10.1177/0008125619867910.
Todolí-Signes, A. (2021), “Making algorithms safe for workers: occupational risks associated with work
managed by artificial intelligence”, Transfer: European Review of Labour and Research, Vol. 27 No. 4, pp.
433–452. https://doi.org/10.1177/10242589211035040.
Trenerry, B., Chng S., Wang Y., Suhaila, Z.S., Lim, S.S., Lu, H.Y. and Oh, P.H. (2021), “Preparing
Workplaces for Digital Transformation: An Integrative Review and Framework of Multi-Level Factors”, Frontiers in Psychology, Vol. 12.
Walsh, M. (2019), “When Algorithms Make Managers Worse”, Harvard Business Review, 8 May.
WEF (2020), “Companion to the Model AI Governance Framework. Implementation and Self-Assessment Guide for Organizations”, World Economic Forum, Geneva.
Wessel, L. K., Baiyere, A., Ologeanu-Taddei, R., Cha, J. and Jensen, T. B. (2021), “Unpacking the
Difference between Digital Transformation and IT-enabled Organizational Transformation”, Journal of the Association for Information Systems, Vol. 22 No. 1, pp. 102-129.
Wilson, H.J. and Daugherty, P.R. (2018), “Collaborative Intelligence: Humans and AI Are Joining Forces”, Harvard Business Review, Vol. 96 No. 4, pp. 114-123.
Yu, Z., Du, H., Xiao, D., Wang, Z., Han, Q. and Guo, B. (2018), “Recognition of human computer
operations based on keystroke sensing by smartphone microphone”, IEEE Internet of Things Journal, Vol.
5, pp.1156-1168. https://doi.org/10.1109/JIOT.2018.2797896.
Figure I AI ethics principles at a glance
1. Human, societal and environmental wellbeing: AI systems should benefit individuals, society and the
environment.
2. Human-centred values: AI systems should respect human rights, diversity, and the autonomy of
individuals.
3. Fairness: AI systems should be inclusive and accessible, and should not involve or result in unfair
discrimination against individuals, communities or groups.
4. Privacy protection and security: AI systems should respect and uphold privacy rights and data
protection, and ensure the security of data.
5. Reliability and safety: AI systems should reliably operate in accordance with their intended purpose.
6. Transparency and explainability: There should be transparency and responsible disclosure so people
can understand when they are being significantly impacted by AI, and can find out when an AI system
is engaging with them.
7. Contestability: When an AI system significantly impacts a person, community, group or environment,
there should be a timely process to allow people to challenge the use or outcomes of the AI system.
8. Accountability: People responsible for the different phases of the AI system lifecycle should be
identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems
should be enabled.
Table 1 AI WHS risks and hazards, by WAG and AI Canvas stage

Ideation

Supervisor/peer and organisational relations
- Prediction: Inadequate, or inadequately defined and communicated, purpose of AI; insufficient transparency, contestability; context stripped from communication between employees, replaced by management by algorithm; worker manipulation or exploitation; undue reliance on AI decisions; lack of process for triggering oversight
- Judgement: AI used out of scope; AI undermining company core values; gaming risk
- Action: Inadequate chain of accountability, reporting or governance structure, outsourcing of design responsibility

Privacy and harm
- Prediction: Resolutions affecting ethical, moral and social principles, e.g., predicting health conditions/pregnancy contravening privacy (in case of employee monitoring for greater productivity); over-reliance leading to diminished diligence on-site
- Judgement: Technical failure, human error, security breach, processes/essential services impacted; reputational risk
- Action: Physical and psychosocial hazards, unnecessary harm, avoidable death
Development and Operation
(Development: Outcome – choose performance measures; Training – data to train AI for better predictions. Operation: Input – data for predictions after training the AI algorithm; Feedback – using outcomes to improve algorithm)

Job control and workload
- Outcome: Workers impeded from modifying outcomes of/challenging AI recommendations (e.g., automated advice leaving queries unanswered)
- Training: Insufficient consideration given to interconnectivity/interoperability of AI systems, and their secondary effects on job roles, tasks and processes
- Input: Worker competences, skills (not) meeting AI job requirements/overburdening; discontinuity of service (e.g., failure to anticipate shock events, seasonal factors)
- Feedback: Irreversible impacts on responsibilities; inadequate integration of AI into mechanical or electrical processes; no offline systems or processes to review veracity of AI predictions/decisions

Supervisor/peer and organisational relations
- Outcome: Outcome measures not aligned with healthy workplace dynamics (e.g., efficiency vs competition, equity/fairness); worker-AI interface adversely affecting the status of workers (e.g., differential rewards)
- Training: Unrepresentative training data; data not fit for purpose (e.g., untrusted past indicators); continuity and change in responsibilities and accountability
- Input: Worker resistance (e.g., to data sharing); insufficient safety understanding (e.g., resulting from acceleration); boundary creep (e.g., data collection outside workplace); supervisor/peer mediation via/supervisor substitution with AI
- Feedback: Assessment process review requirement

Privacy and harm
- Outcome: Adverse, differential, additional (unforeseen) work task and process effects (e.g., acceleration across internal supply chain)
- Training: Cyber security vulnerability; data leaks and disclosure of personal information
- Input: Physical workplace impact (e.g., design/cobot, temperature); insecure data storage, cyber security vulnerability
- Feedback: Personal data storage beyond time
Source: Authors’ own creation based on interviews with AI, computer science, WHS experts, online
workshops & case studies
Figure III Framework of actions for identifying and minimising AI WHS risks