
Case Study Assignment

Name of Fellow: ASTRODITA ADYA SETA

Name of Case Study: Implementing Artificial Intelligence (AI)-based clinical decision support to
detect patient safety incidents in real-world settings [M] AI and Patient Safety
Please answer the following questions based on your reading of the case study.

Questions

1. Describe the sociotechnical context within which the AI-tool was implemented (hint: think
people, processes, technologies). (minimum 150 words)
Artificial intelligence (AI) is increasingly being adopted by organizations, including those in the
health sector, yet implementation is often carried out without careful consideration of the employees
who will be working alongside it. If employees do not understand or cannot work with AI, it is unlikely
to bring value to an organization. The ways in which employees and AI can collaborate at different
sociotechnical levels therefore need to be investigated. Accordingly, we can develop a model of AI
integration based on Socio-Technical Systems (STS) theory that combines the dimensions of AI novelty
and scope. We take an organizational socialization approach to build an understanding of the process
of integrating AI into the organization. This underscores the importance of AI socialization as a core
process in successfully integrating AI systems and employees, and highlights the cognitive, relational,
and structural implications of doing so: the people who use the tool, the processes it reshapes, and
the technologies it must connect with all have to be considered together.

2. Discuss key considerations for selecting and deploying AI-based tools in hospitals. (minimum
150 words)
Some healthcare organizations are turning to artificial intelligence and machine learning
because of the enhancements these advanced technologies can bring to patient care, operations
and security. But assessing the promise of these technologies can be difficult and time-consuming
unless you are an expert.
Key considerations include the following: AI and machine learning are augmentative tools; the
size and quality of the training data sets matter; real-world applicability is a must; and the tools
must be trained and validated. AI is a tool that enhances our capability, allowing humans to do more
than we could on our own. It is designed to augment human insight, not replace it. For example, a
doctor can use AI to access the distilled expertise of hundreds of clinicians when choosing the best
possible course of action, far more than he or she could ever do by getting a second or third opinion.
Vendor claims should also be weighed by analysing AI recommendations carefully. Much of the
buzz around AI and machine learning comes from the creators of AI tools themselves. That is
understandable, because their discussion is focused on what AI can do to improve healthcare and
other realms, but it makes independent evaluation essential.
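The train-and-validate point above can be sketched as a simple hold-out split, in which a tool is evaluated on data it was never trained on. This is a generic illustration only; the function name and the stand-in records are made up, not from the case study.

```python
import random

def train_validation_split(records, validation_fraction=0.2, seed=42):
    """Shuffle the records and hold out a fraction for validation,
    so the tool is later evaluated on data it was not trained on."""
    rng = random.Random(seed)       # fixed seed makes the split reproducible
    shuffled = records[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - validation_fraction))
    return shuffled[:cut], shuffled[cut:]

records = list(range(100))          # stand-ins for de-identified patient records
train, validation = train_validation_split(records)
print(len(train), len(validation))  # → 80 20
```

In practice hospitals would also want the validation data to reflect their own patient population, which is the "real-world applicability" point made above.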

3. Who should be involved in implementation? (minimum 150 words)
The greatest challenge for AI in these healthcare domains is not whether the technologies
will be capable enough to be useful, but rather ensuring their adoption in daily clinical practice. For
widespread adoption to take place, AI systems must be approved by regulators such as government
agencies or national health authorities, integrated with electronic health record (EHR) systems,
accredited by the relevant national bodies, standardised to a sufficient degree that similar products
work in a similar fashion, taught to clinicians, paid for by public or private payer organisations, and
updated over time in the field. Implementation should therefore involve regulators, hospital
leadership, EHR vendors and IT teams, clinicians, and the payer organisations that fund the tools.
These challenges will ultimately be overcome, but doing so will take much longer than it will take
for the technologies themselves to mature.

4. Is there a role for patients? If yes, how should they be involved? (minimum 150 words)
The integration of artificial intelligence (AI) into the health care system is not only changing
dynamics such as the role of health care providers but is also creating new potential to improve
patient safety outcomes and the quality of care. The term AI can be broadly defined as a computer
program that is capable of making intelligent decisions. The operational definition of AI adopted
here is the ability of a computer or health care device to analyze extensive health care data,
reveal hidden knowledge, identify risks, and enhance communication. In this regard, AI
encompasses machine learning and natural language processing. Machine learning enables
computers to utilize labeled (supervised learning) or unlabeled (unsupervised learning) data to
identify latent information or make predictions about the data without explicit programming.
Among different types of AI, machine learning and natural language processing specifically have
societal impacts in the health care domain and are also frequently used in the health care field.
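The distinction drawn above between supervised learning (labeled data) and unsupervised learning (unlabeled data) can be sketched in a few lines of pure Python. The heart-rate readings, labels, and function names below are made-up toy material for illustration, not clinical values from the case study.

```python
def nearest_neighbour_predict(labelled, x):
    """Supervised learning: predict the label of x from labelled
    (value, label) example pairs by taking the closest example."""
    value, label = min(labelled, key=lambda pair: abs(pair[0] - x))
    return label

def two_means_1d(values, iters=10):
    """Unsupervised learning: split unlabelled 1-D values into two
    clusters (k-means with k = 2; assumes both clusters stay non-empty)."""
    lo, hi = min(values), max(values)
    for _ in range(iters):
        a = [v for v in values if abs(v - lo) <= abs(v - hi)]
        b = [v for v in values if abs(v - lo) > abs(v - hi)]
        lo = sum(a) / len(a)    # recompute cluster centres
        hi = sum(b) / len(b)
    return sorted(a), sorted(b)

# Supervised: readings labelled "normal"/"elevated" (made-up numbers).
labelled = [(62, "normal"), (70, "normal"), (110, "elevated"), (125, "elevated")]
print(nearest_neighbour_predict(labelled, 118))  # → elevated

# Unsupervised: similar readings, but without any labels attached.
print(two_means_1d([62, 70, 110, 125, 66, 119]))  # → ([62, 66, 70], [110, 119, 125])
```

The supervised function needs the labels up front; the unsupervised one discovers the two groups on its own, which mirrors the "identify latent information without explicit programming" point above.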

5. Challenges with clinical decision support alerts are not unique to AI-based tools. Discuss.
(minimum 150 words)
Artificial intelligence (AI) for healthcare presents potential solutions to some of the
challenges faced by health systems around the world. However, it is well established in
implementation and innovation research that novel technologies are often resisted by healthcare
leaders, which contributes to their slow and variable uptake. Although research on various
stakeholders’ perspectives on AI implementation has been undertaken, very few studies have
investigated leaders’ perspectives on the issue of AI implementation in healthcare. It is essential to
understand the perspectives of healthcare leaders because they have a key role in the
implementation process of new technologies in healthcare.

6. Examine ways to monitor AI-based tools during and after implementation. (minimum 150
words)
Regulatory aspects (as described in phase I), data governance and model governance play
an important role in clinical implementation and should be addressed appropriately. Before
widespread clinical implementation is possible, AI models must be submitted to the relevant
accreditation bodies.
Furthermore, implementation efforts should be accompanied by clear and standardised
communication of AI model information towards end users to promote transparency and trust, for
example, by providing an ‘AI model facts label’. To ensure that AI models will be safely used once
they are implemented, users (e.g. physicians, nurses and patients) should be properly educated,
particularly on how to use them without jeopardising the clinician–patient relationship. Specific AI
education programmes can help and have already been introduced.
After implementation, hospitals should run a dedicated quality management system and
monitor AI model performance during the entire life span, enabling timely identification of
worsening model performance, and react whenever necessary (e.g. retire, retrain, adjust or switch
to an alternative model). Governance of the required data and of the AI model deserves special
consideration. Data governance covers items such as data security, data quality, data access and
overall data accountability (see also the FAIR guideline), while model governance covers aspects
such as model adjustability, model version control and model accountability. Besides enabling the
timely identification of declining model performance, governing AI models is also vital to gaining
patients’ trust. Once a model is retired, the corresponding assets, such as documentation and
results, should be stored for 15 years (although no consensus on retention terms has been reached
yet), similar to clinical trials.
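The life-span monitoring described above can be sketched as a rolling check on recent performance that flags the model for review when it drops below an agreed threshold. Everything here is an illustrative assumption (class name, window size, threshold), not a mechanism from the case study.

```python
from collections import deque

class ModelPerformanceMonitor:
    """Track a rolling accuracy over recent predictions and flag the
    model for review when performance falls below a set threshold."""

    def __init__(self, window=100, min_accuracy=0.85):
        self.window = deque(maxlen=window)  # recent hit/miss results
        self.min_accuracy = min_accuracy

    def record(self, prediction, outcome):
        """Log one prediction against its later-confirmed clinical outcome."""
        self.window.append(prediction == outcome)

    def rolling_accuracy(self):
        return sum(self.window) / len(self.window) if self.window else None

    def needs_review(self):
        """True once rolling accuracy drops below the agreed threshold."""
        acc = self.rolling_accuracy()
        return acc is not None and acc < self.min_accuracy

monitor = ModelPerformanceMonitor(window=10, min_accuracy=0.8)
for pred, actual in [(1, 1)] * 7 + [(1, 0)] * 3:  # 7 correct, 3 incorrect
    monitor.record(pred, actual)
print(monitor.rolling_accuracy(), monitor.needs_review())  # → 0.7 True
```

A real deployment would log versioned model identifiers alongside each result, which is where the model version control and accountability items above come in.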
