
Artificial intelligence (AI) has the potential to transform healthcare, from enhancing diagnostic accuracy to predicting patient outcomes. However, as AI becomes more pervasive in healthcare, it raises important ethical justice issues that must be addressed.

Access and equity is one ethical justice issue raised by AI in healthcare. Because only about 45% of people in developing countries are connected to the internet, and only about 20% in the least developed countries, there is a risk that certain groups or communities may be left behind or excluded
from the benefits of AI, which could exacerbate existing health disparities. Topol (2019) argues
that one of the ethical challenges of AI in healthcare is ensuring access and equity for all people,
regardless of their geographic location, socioeconomic status, cultural background, or health
condition. AI can potentially reduce health disparities by providing low-cost and high-quality
services to underserved populations. However, AI can exacerbate existing inequalities if it is not
designed with diversity and inclusion in mind. Makri (2019) suggests that AI systems should be
designed to promote access and equity by addressing the digital divide and ensuring that AI systems are accessible to all, regardless of socioeconomic status or geographic location. To make AI accessible to all people, several measures can be taken:
● Automating and standardizing the collection of race/ethnicity and language data (illustrated in the sketch after this list)
● Prioritizing the use of the data for identifying disparities and improving outcomes
● Providing internet access and digital literacy training to underserved populations
● Developing AI systems that are transparent, explainable, and accountable
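As a rough illustration of the first two recommendations, the sketch below shows how race/ethnicity and preferred-language fields might be captured against a controlled vocabulary rather than as free text, so that later disparity analyses can group records consistently. The category lists, field names, and the PatientDemographics class are illustrative assumptions, not an established coding standard.

    from dataclasses import dataclass
    from datetime import date

    # Illustrative controlled vocabularies; a real system would adopt an agreed
    # standard (for example, national census categories or HL7 value sets).
    RACE_ETHNICITY_CATEGORIES = {
        "american_indian_or_alaska_native", "asian", "black_or_african_american",
        "hispanic_or_latino", "native_hawaiian_or_pacific_islander", "white",
        "multiple", "declined_to_answer", "unknown",
    }
    PREFERRED_LANGUAGES = {"en", "es", "zh", "vi", "ar", "other", "unknown"}

    @dataclass
    class PatientDemographics:
        """A standardized demographic record captured at registration."""
        patient_id: str
        race_ethnicity: str
        preferred_language: str
        collected_on: date

        def __post_init__(self) -> None:
            # Rejecting free-text values keeps downstream disparity reports
            # comparable across clinics and time periods.
            if self.race_ethnicity not in RACE_ETHNICITY_CATEGORIES:
                raise ValueError(f"Unrecognized race/ethnicity: {self.race_ethnicity!r}")
            if self.preferred_language not in PREFERRED_LANGUAGES:
                raise ValueError(f"Unrecognized language code: {self.preferred_language!r}")

    # Example: a record captured during patient registration.
    record = PatientDemographics("pt-001", "hispanic_or_latino", "es", date.today())

Standardizing the categories at the point of collection is what makes the next step, prioritizing the data for identifying disparities, tractable across different clinics and systems.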

Automation and job displacement is another ethical justice issue related to AI in healthcare. Examples of healthcare workflows that can be automated include appointment scheduling, patient admission and discharge processes, health-record management, and billing. Tasks that were once performed by humans may increasingly be handled by AI, which could lead to job
displacement and other economic consequences. This raises ethical questions about how to
ensure that the benefits of AI are shared fairly and how to support healthcare workers who may
be negatively impacted by automation. One approach is to focus on retraining workers for new
roles that complement automated processes. It may also be preferable to slow the rollout of automation in some areas until retraining programs and clear compensation arrangements are in place.
This can help ensure that workers are able to adapt to changes brought about by automation and
continue to contribute value within their organizations. It is also important for organizations to
consider trends in technological job displacement and address resulting impacts on the safety,
health, and well-being of workers.
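To make concrete what automating a workflow such as appointment scheduling can look like, here is a minimal sketch that assigns patients to the earliest open slot; the slot data, function name, and booking logic are illustrative assumptions, not a description of any real scheduling product.

    from datetime import datetime

    # Open appointment slots, kept sorted by time (illustrative data only).
    open_slots = [
        datetime(2024, 6, 3, 9, 0),
        datetime(2024, 6, 3, 9, 30),
        datetime(2024, 6, 3, 10, 0),
    ]
    booked: dict[str, datetime] = {}

    def book_earliest_slot(patient_id: str) -> datetime:
        """Assign the patient the earliest remaining slot, the kind of routine
        clerical step that an automated workflow takes over from staff."""
        if not open_slots:
            raise RuntimeError("No open slots remaining")
        slot = open_slots.pop(0)
        booked[patient_id] = slot
        return slot

    print(book_earliest_slot("pt-001"))  # 2024-06-03 09:00:00
    print(book_earliest_slot("pt-002"))  # 2024-06-03 09:30:00

Even a simple routine like this replaces a task that front-desk staff would otherwise perform, which is why retraining and transition support matter alongside the efficiency gains.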

Another major challenge for ethical justice in healthcare AI is ensuring inclusivity and the integration of different information technologies. The
integration of different technologies can pose several challenges, including technical
compatibility issues, data privacy and security concerns, and ethical and legal considerations. For
example, different technologies may have different data standards or formats, making it difficult
to share or combine data from different sources. Several recommendations can help make AI more inclusive:
● Compensate for the disadvantages of some technologies and combine the advantages of
others. By combining different AI techniques, synergetic AI systems can leverage the
strengths of each approach while compensating for their weaknesses.
● Apply synergetic AI (see the sketch after this list). The idea behind synergetic AI is that each AI technique has its
strengths and limitations, and combining them can lead to better performance and more
accurate results. For example, machine learning algorithms are effective at processing
large amounts of data and finding patterns, but they can struggle with handling
uncertainty or making decisions in complex or changing environments. Expert systems,
on the other hand, are good at reasoning and decision-making but may require a lot of
manual knowledge engineering to build.
● Include the processes of communication, evolution, and cooperation among artificial systems in research on their integration.
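A minimal sketch of this synergetic idea follows, built around a hypothetical risk-scoring task: a stand-in for a learned model produces a statistical score, while hand-written expert rules catch well-understood edge cases and can override the model. All function names, weights, and thresholds are illustrative assumptions, not clinical guidance or any particular deployed system.

    from typing import Optional

    def ml_risk_score(age: int, systolic_bp: int, creatinine: float) -> float:
        """Stand-in for a trained machine-learning model; in practice this
        would be a model fitted to historical records (weights are made up)."""
        raw = 0.01 * age + 0.002 * systolic_bp + 0.1 * creatinine
        return max(0.0, min(1.0, raw / 3.0))

    def expert_rules(age: int, systolic_bp: int, creatinine: float) -> Optional[str]:
        """Hand-written clinical-style rules that flag cases regardless of
        what the statistical model says (thresholds are illustrative only)."""
        if systolic_bp >= 180:
            return "escalate: blood pressure above crisis threshold"
        if creatinine >= 4.0:
            return "escalate: possible acute kidney injury"
        return None

    def combined_assessment(age: int, systolic_bp: int, creatinine: float) -> str:
        """Synergetic decision: rules handle known edge cases, the learned
        score handles the broad statistical middle."""
        override = expert_rules(age, systolic_bp, creatinine)
        if override is not None:
            return override
        score = ml_risk_score(age, systolic_bp, creatinine)
        return "high risk: clinician review" if score >= 0.5 else "routine follow-up"

    print(combined_assessment(age=67, systolic_bp=185, creatinine=1.1))  # rule fires
    print(combined_assessment(age=45, systolic_bp=120, creatinine=0.9))  # model decides

The design point is that neither component needs to be perfect on its own: the rules make the system's behavior predictable in critical situations, while the learned score covers the many cases the rules do not anticipate.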

Bias is one of the main ethical concerns with AI in healthcare. AI algorithms are only as
good as the data they are trained on, and if the data contains biases, the resulting algorithms will
also be biased. For example, facial recognition software has been found to have higher error rates
for people with darker skin tones, which can have serious consequences in healthcare, such as
misdiagnosis and delayed treatment. Obermeyer et al. (2019) found that a widely used algorithm for allocating care-management resources systematically underestimated the health needs of Black patients relative to equally sick white patients, leading to inequities in care. To prevent bias and discrimination, several measures can be taken:
● Validate the data set used for machine learning. This means checking that the data does not encode discriminatory patterns, intentional or otherwise, and that it accurately represents the population being studied.
● Apply methodologies and software solutions that detect and prevent discrimination based
on factors such as race, national origin, gender, political opinion, religious belief, age,
social and economic status, or privacy. These tools can help ensure that AI algorithms are fair and unbiased (a minimal per-group error-rate check is sketched after this list).
● Adapt algorithms at the national level to account for differences in socio-economic
conditions and the state of national health systems. An algorithm designed with diversity in mind helps ensure that everyone has access to high-quality healthcare.
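As a small, hedged example of the kind of check such tools might perform, the sketch below computes error rates separately for each patient group on a toy validation set and flags a large gap. The records, group labels, and the 0.10 threshold are invented for illustration and do not reproduce the methodology of Obermeyer et al. (2019).

    from collections import defaultdict

    # Toy validation records: (patient_group, true_label, predicted_label).
    # In practice these would come from a held-out evaluation set.
    records = [
        ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0),
        ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 0, 0),
    ]

    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, truth, prediction in records:
        totals[group] += 1
        errors[group] += int(truth != prediction)

    # Per-group error rate: share of records the model got wrong in each group.
    error_rates = {group: errors[group] / totals[group] for group in totals}
    print(error_rates)  # roughly {'group_a': 0.33, 'group_b': 0.67}

    # A large gap between groups is a signal to audit the training data and
    # the model before deployment; the 0.10 threshold is illustrative.
    gap = max(error_rates.values()) - min(error_rates.values())
    if gap > 0.10:
        print(f"Warning: error-rate gap of {gap:.2f} between groups")

Checks like this only surface disparities; deciding how to correct them, for example by reweighting data or changing the target the model predicts, remains a substantive ethical and clinical decision.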

In conclusion, AI has the potential to transform healthcare by improving diagnosis, treatment, and patient outcomes. However, AI also raises important ethical justice issues related
to bias, automation and job displacement, and access and equity. These challenges must be
addressed to ensure that AI is used in an ethical and just manner in healthcare. Potential solutions
include ensuring diverse and representative data sets, supporting healthcare workers impacted by
automation, and promoting access and equity for all. As AI continues to evolve and become
more pervasive in healthcare, it is essential to remain vigilant and to prioritize ethical justice
considerations in the design and implementation of AI systems.

References:

Topol, E. J. (2019). High-performance medicine: The convergence of human and artificial intelligence. Nature Medicine, 25(1), 44-56.

Makri, A. (2019). Bridging the digital divide in health care. The Lancet Digital Health, 1(5), e204-e205.

López, L., Green, A. R., Tan-McGrory, A., King, R., & Betancourt, J. R. (2011). Bridging the digital divide in health care: The role of health information technology in addressing racial and ethnic disparities. The Joint Commission Journal on Quality and Patient Safety, 37(10), 437-445.

Chang, C. C., Tamers, S. L., & Swanson, N. (2022, February 15). The role of technological job displacement in the future of work. NIOSH Science Blog, Centers for Disease Control and Prevention. https://blogs.cdc.gov/niosh-science-blog/2022/02/15/tjd-fow/

Nunes, A. (2021, November 2). Automation doesn't just create or destroy jobs — it transforms them. Harvard Business Review. https://hbr.org/2021/11/automation-doesnt-just-create-or-destroy-jobs-it-transforms-them

Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an
algorithm used to manage the health of populations. Science, 366(6464), 447-453.
