Ethical Issues in Artificial Intelligence in Healthcare

Abstract

AI raises legal, ethical, and philosophical questions about privacy, surveillance, bias, and human judgment. New digital technologies have also raised concerns about inaccuracy and data breaches. In healthcare, such failures can be harmful for patients, who encounter doctors at their most vulnerable. Artificial intelligence in healthcare therefore raises legal and ethical difficulties for which there are, as yet, no clear rules. This review emphasizes algorithmic transparency, the privacy and security of all stakeholders, and the cybersecurity of linked vulnerabilities.

Introduction
Healthcare systems face growing medical demand, chronic illness, and resource constraints. As digital health technologies are adopted more widely, healthcare data is expanding. If properly harnessed, this data could let health professionals concentrate on the causes of disease and monitor the effectiveness of preventive actions and therapies. Policymakers, lawmakers, and other decision-makers should therefore be kept informed. Computer and data scientists, as well as clinical entrepreneurs, contend that artificial intelligence (AI), particularly machine learning, will be one of the most crucial components of healthcare reform (1). AI refers to computer programs that can reason and learn; it encompasses adaptability, sensory understanding, and interaction. Traditional computational algorithms are software programs that follow a fixed set of rules and do the same work consistently, like an electronic calculator: "if this is the input, then this is the output." AI systems, by contrast, learn their rules (a function) from training data (the input). AI may transform healthcare by gleaning new insights from the massive amounts of digital data generated throughout care delivery (2).
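
To make this contrast concrete, the following minimal Python sketch (the data and the hand-written rule are invented purely for illustration) places a fixed-rule program beside a model that learns its rule from examples:

```python
from sklearn.linear_model import LinearRegression

# Traditional algorithm: the rule is written by hand and never changes.
def fixed_rule(x):
    return 0.5 * x + 10  # "if this is the input, then this is the output"

# Machine learning: the rule (a function) is learned from input-output pairs.
X = [[10], [20], [30], [40]]   # hypothetical training inputs
y = [15, 20, 25, 30]           # hypothetical observed outputs
model = LinearRegression().fit(X, y)

print(fixed_rule(25))            # output of the hand-written rule
print(model.predict([[25]])[0])  # output of the rule learned from data
```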

Generally, AI is implemented as a system consisting of both software and hardware, and algorithms dominate the software side. For the purpose of creating AI algorithms, an artificial neural network (ANN) serves as a conceptual framework: a model of the human brain in which neurons are connected by weighted communication channels. AI algorithms identify intricate non-linear relationships in large datasets (analytics), and training corrects algorithmic flaws, improving the accuracy (confidence) of prediction models (3, 4).
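
As a rough illustration of the weighted connections described above (a minimal sketch with invented weights, not a clinical model), a single artificial neuron combines its inputs through weights and a non-linear activation; in a real ANN, training adjusts the weights to reduce prediction error:

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs, passed through a sigmoid activation.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical example: two input features with illustrative weights.
output = neuron(inputs=[0.8, 0.2], weights=[1.5, -0.7], bias=0.1)
print(output)
```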

New technologies may introduce inaccuracies and data breaches. In high-risk healthcare settings, errors may have serious repercussions for patients. This is crucial because patients encounter professionals at their most vulnerable (5). If harnessed properly, AI can deliver evidence-based management and medical decision guides to clinicians (AI-Health). It offers gains in diagnostics, medication development, epidemiology, individualized treatment, and operational efficiency. As Ngiam and Khor note, integrating AI solutions into medical practice requires a strong governance structure to safeguard people from harm, including unethical conduct (6–17). The Hippocratic Oath, which still underpins medical ethics, comes from Hippocrates (18–24).

Machine learning healthcare applications (ML-HCAs) became a clinical reality after the FDA approved an automatic detection approach based on machine learning (ML). Such algorithms learn from vast data sets and generate predictions without explicit programming (25).

Utilization of AI in Medical Research

Utilization of electronic health record (EHR) data is an important area of AI-based health research. If the underlying database and IT infrastructure do not prevent the propagation of inconsistent or low-quality data, the collected data can be challenging to use. Even so, AI applied to electronic health records can support scientific research, quality improvement, and better clinical care. AI that has been constructed and trained with sufficient data may help identify clinical best practices from healthcare data before they travel the conventional road of scientific publication, guideline formulation, and clinical support tools. By studying clinical practice patterns derived from electronic health data, AI may also contribute to the development of new clinical practice models for healthcare delivery (26).
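
As a hedged illustration of the data-quality point (the column names and plausibility bounds below are invented for this sketch), a research pipeline might screen out inconsistent EHR records before any model is trained:

```python
import pandas as pd

# Hypothetical EHR extract; the fields are assumptions for this example.
records = pd.DataFrame({
    "patient_id": [1, 2, 3, 4],
    "age":        [54, -3, 67, 212],      # -3 and 212 are implausible
    "sbp_mmHg":   [128, 141, None, 119],  # systolic blood pressure
})

# Keep only records that pass basic plausibility and completeness checks,
# so inconsistent or low-quality data does not propagate into training.
clean = records[records["age"].between(0, 120) & records["sbp_mmHg"].notna()]
print(clean)
```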

Incorporating AI into the Pharmaceutical Research and Development Process

Future AI implementation is anticipated to streamline and accelerate pharmaceutical development. By employing robots and models of genetic targets, medications, organs, illnesses and their progression, pharmacology, safety, and effectiveness, AI may shift drug discovery from a labor-intensive to a capital- and data-intensive process. Artificial intelligence may make the drug research and development process faster and more cost-effective. Although, as with any medication trial, finding a lead chemical does not ensure the creation of a safe and effective therapeutic, artificial intelligence has already been used to discover prospective Ebola virus therapies (26).
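
A common pattern behind such discovery work is virtual screening: scoring candidate compounds with a trained model and prioritizing the most promising for laboratory testing. The sketch below is purely illustrative (the descriptors, labels, and compound names are invented) and is not a description of the cited Ebola work:

```python
from sklearn.ensemble import RandomForestClassifier

# Hypothetical numeric descriptors for known compounds, with labels
# indicating whether each showed activity in past assays.
X_train = [[320, 1.2], [410, 0.4], [280, 2.1], [500, 0.9]]
y_train = [1, 0, 1, 0]

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Score unseen candidates and rank them for follow-up testing.
candidates = {"cmpd-A": [300, 1.8], "cmpd-B": [480, 0.5]}
scores = {name: model.predict_proba([feats])[0][1]
          for name, feats in candidates.items()}
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```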

Ethical Difficulties

It is debated whether AI "fits within current legal categories or if a new category with its particular characteristics and consequences should be formed." The implementation of AI in clinical practice offers tremendous potential to enhance healthcare, but it also raises ethical concerns that must be resolved. To realize the full promise of artificial intelligence in healthcare, four significant ethical concerns must be addressed: (1) informed consent for the use of data, (2) safety and transparency, (3) algorithmic fairness and bias, and (4) data privacy (27). The question of whether AI systems may be granted legal status is not just a legal but also a political one (Resolution of the European Parliament of 16 February 2017) (28).

The objective is to assist policymakers in proactively addressing the morally challenging issues created by mandating AI in healthcare institutions (17). Concern about the lack of algorithmic transparency has driven the vast majority of legal discourse on artificial intelligence. The increasing prevalence of AI in high-risk circumstances has heightened the need for responsible, egalitarian, and visible AI development and governance. The two most significant features of transparency are the accessibility and the comprehensibility of information. Frequently, information on the performance of algorithms is made intentionally difficult to access (29).
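
One modest, concrete step toward accessible and comprehensible information about an algorithm is to publish which inputs drive its predictions. The sketch below (synthetic data and hypothetical feature names, and only one of many possible transparency measures) extracts feature importances from a trained model:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for clinical features; real transparency reporting
# would also cover data provenance, validation results, and failure modes.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

feature_names = ["age", "blood_pressure", "bmi", "glucose"]  # hypothetical
for name, importance in zip(feature_names, model.feature_importances_):
    print(f"{name}: {importance:.2f}")
```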

Machines that can operate according to uncorrected principles and learn new behavioral patterns purportedly threaten our ability to identify the producer or operator responsible for a wrong. The moral underpinning of society and the basis of the culpability principle in law are both at risk because of this allegedly "ever-widening" gap, which is alarming. The use of AI may render us unable to hold anybody liable for harm caused. The magnitude of the threat is unclear, but the employment of such machines could significantly restrict the human capacity to assign responsibility and to assume responsibility for decision-making (30).

Modern computational techniques may conceal the reasoning behind an Artificial Intelligence System's (AIS) output, making meaningful inspection difficult. Consequently, the method through which an AIS creates its outputs is opaque: a process used by an AIS may be so complex that it is effectively hidden from a non-technically trained clinical user while remaining straightforward for a specialist competent in computer science (5).

AI applied to a role in healthcare must adapt to a constantly shifting environment with frequent interruptions while upholding ethical standards to protect the well-being of patients (24). However, a simple and crucial aspect of determining the safety of any medical software is the ability to examine the program and understand how it might fail. By analogy, just as the ingredients and physiologic mechanisms of drugs, or the workings of mechanical systems, can be examined, so too should the methods by which software applications are created.

Why is Responsibility Necessary?

When the environment or situation shifts, AI systems may fail abruptly and severely; AI may move instantaneously from being incredibly brilliant to exceedingly naïve. Even when AI bias is minimized, all AI technologies will have limitations. For a human decision-maker to be successful, he or she must understand the constraints of the system, and the system must be tailored to the needs of the person. Physicians may grow complacent while using a medical diagnostic and therapeutic system that is 99.9% accurate, losing engagement with their profession and perhaps letting their skills suffer as a result. In addition, individuals may accept the outcomes of decision-support systems without considering their limitations. This kind of failure has recurred in other fields, such as the justice system, where judges have changed their rulings based on risk evaluations that were afterwards shown to be erroneous (32).
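
A short worked example shows why even a "99.9% accurate" system has limits that a clinician must understand. Assuming, purely for illustration, a test with 99.9% sensitivity and 99.9% specificity applied to a condition affecting 1 in 10,000 people, Bayes' rule gives a surprisingly low probability that a flagged patient is actually ill:

```python
# Illustrative numbers only: 99.9% sensitivity and specificity,
# and a hypothetical prevalence of 1 in 10,000.
sensitivity = 0.999
specificity = 0.999
prevalence = 1 / 10_000

# P(disease | positive result) via Bayes' rule.
true_pos = sensitivity * prevalence
false_pos = (1 - specificity) * (1 - prevalence)
ppv = true_pos / (true_pos + false_pos)
print(f"P(disease | positive) = {ppv:.1%}")  # roughly 9%
```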

The use of AI without human supervision also raises cybersecurity concerns. As stated in a RAND Perspectives paper, the use of artificial intelligence for surveillance or cybersecurity in the context of national security might generate a new attack vector based on "data diet" vulnerabilities, that is, manipulation of the data a system consumes. The research also addresses domestic security problems, such as the (increasing) use of artificial agents by governments for citizen monitoring. These have been identified as possible threats to the basic rights of people. These challenges are grave because they threaten vital infrastructures, putting lives, human security, and access to resources at risk. Weaknesses in cybersecurity may pose a significant hazard since they are generally concealed and identified only after the fact, once the harm has been done (28).
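
As a toy illustration of a "data diet" attack (invented data and a deliberately crude attack, shown only to convey the mechanism), flipping a small fraction of training labels degrades the model that is learned from them:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=2, n_redundant=0,
                           random_state=0)

clean = LogisticRegression().fit(X, y)

# Poison the "data diet": silently flip 10% of the training labels.
y_poisoned = y.copy()
idx = rng.choice(len(y), size=len(y) // 10, replace=False)
y_poisoned[idx] = 1 - y_poisoned[idx]
poisoned = LogisticRegression().fit(X, y_poisoned)

print("clean model accuracy:   ", clean.score(X, y))
print("poisoned model accuracy:", poisoned.score(X, y))
```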

In recent years, debate over the practicality, engineering, and morality of lethal autonomous weapon systems (LAWS) has intensified. These machines would combine the wide discretion of AI autonomy with the ability to kill and harm people. While such advances may bring substantial benefits, several issues have been raised about the morality of creating and deploying LAWS (33).

In AI research and development, it is not uncommon to encounter selection bias in the datasets used to train algorithms. As shown by Buolamwini and Gebru, automated face recognition systems and the accompanying datasets are biased, resulting in lower accuracy in recognizing darker-skinned persons, especially women. Machine learning requires a vast quantity of data points, yet the bulk of regularly used clinical trial research datasets are derived from selected groups. It follows that the resulting algorithms may be less effective when applied to underprivileged and, hence, potentially underrepresented patient groups (34).
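
A minimal safeguard suggested by this finding is to report performance per subgroup rather than a single overall figure. The sketch below (synthetic data and an invented group attribute) shows the reporting pattern:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))
y = (X[:, 0] + rng.normal(scale=0.5, size=400) > 0).astype(int)
group = rng.choice(["A", "B"], size=400, p=[0.9, 0.1])  # B underrepresented

model = LogisticRegression().fit(X, y)

# Report accuracy per group: one overall number can hide a weak subgroup.
for g in ["A", "B"]:
    mask = group == g
    print(g, f"accuracy = {model.score(X[mask], y[mask]):.2f}",
          f"(n = {mask.sum()})")
```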

Who Bears the Responsibility?

The Association for the Advancement of Artificial Intelligence recommends testing AISs. Before implementing such robots and artificial intelligence systems, it is crucial to develop, test, evaluate, and analyze their dependability, performance, safety, and ethical compliance, both logically and statistically/probabilistically. Verification and validation may assist clinicians in justifying the use of an AIS. Clinical ethics prohibit unaccountable behavior, yet both physicians and AISs may be opaque. An AIS cannot take part in human care if no one can be held accountable for it. Managers of AIS users must make it quite clear that doctors cannot avoid responsibility by blaming the AIS (5).
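
In engineering terms, the verification step can be as simple as a pre-deployment gate: the model must clear performance thresholds, fixed in advance, on held-out data before being allowed anywhere near clinical use. A hedged sketch follows (the thresholds and data are illustrative, not regulatory guidance):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a clinical dataset.
X, y = make_classification(n_samples=1000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = model.predict(X_te)

# Illustrative acceptance thresholds, fixed before evaluation.
THRESHOLDS = {"accuracy": 0.85, "recall": 0.80}
results = {"accuracy": accuracy_score(y_te, pred),
           "recall": recall_score(y_te, pred)}

approved = all(results[m] >= t for m, t in THRESHOLDS.items())
print(results, "-> deploy" if approved else "-> reject")
```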

Bias in the Use of AI

There is evidence that AI algorithms can absorb and apply human and societal biases at scale. Though the algorithm itself is not entirely blameless, the responsibility lies more with the data it uses. Models may be trained on data that encodes human judgments or reflects the second-order effects of social or historical injustices. In addition, the collection and use of data may introduce bias, and user-generated data can operate as a feedback loop that reinforces prejudice. To our knowledge, there are as yet no recommendations or established criteria for reporting and comparing these models, but such guidance should be developed to steer researchers and physicians in the future (36, 37).
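
One direction such reporting criteria could take (a hypothetical structure, not an established standard) is a machine-readable summary published alongside each model:

```python
import json

# Hypothetical reporting fields; every value below is an invented example.
model_report = {
    "intended_use": "triage decision support, not autonomous diagnosis",
    "training_data": {"source": "single-center EHR extract (example)",
                      "known_gaps": ["underrepresentation of group B"]},
    "performance": {"overall_accuracy": 0.91,
                    "per_group_accuracy": {"A": 0.93, "B": 0.78}},
    "last_audit": "2023-01-01",
}
print(json.dumps(model_report, indent=2))
```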

AI is evolving from a "nice-to-have" into an essential component of contemporary digital systems. As our reliance on AI for decision-making increases, it becomes imperative that these decisions be made ethically and without unfair biases; there is a clear need for visible, explicable, and responsible AI systems. In several domains, AI algorithms already surpass humans at improving patient pathways and surgical results. Given that AI is expected to supplement, coexist with, or replace existing systems, entering the coming era of healthcare without using AI is probably unscientific and immoral (38).

Conclusion

The rising use of AI in healthcare necessitates that it be ethically responsible. Data bias must be prevented by using algorithms based on unbiased, real-time data. There must be varied and diverse programming teams, as well as periodic evaluations of each algorithm and of its application within a system. AI cannot totally replace clinical judgment, but it can assist physicians in making better judgments. Where medical expertise is lacking in resource-limited settings, AI might be used for screening and assessment. Since all AI decisions are made by algorithms, even the quickest are methodical in comparison with human decision-making. Therefore, even if such actions do not yet have legal consequences (because effective legal frameworks have not been constructed), they inevitably entail responsibility, borne not by the technology but by the people who invented it and the people who use it. While moral dilemmas surround the use of AI, it is likely to coexist with or replace present systems, ushering in the era of artificial intelligence in healthcare, and not adopting AI may itself be unscientific and immoral.
