Access and equity constitute a central justice issue for AI in healthcare. With only about 45% of
people in developing countries connected to the internet, and just 20% in the least developed
countries, there is a real risk that certain groups or communities will be left behind or excluded
from the benefits of AI, which could exacerbate existing health disparities. Topol (2019) argues
that one of the ethical challenges of AI in healthcare is ensuring access and equity for all people,
regardless of their geographic location, socioeconomic status, cultural background, or health
condition. AI can potentially reduce health disparities by providing low-cost and high-quality
services to underserved populations. However, AI can exacerbate existing inequalities if it is not
designed with diversity and inclusion in mind. Makri (2019) suggests that AI systems should be
designed to promote access and equity by addressing issues of digital divide and ensuring that AI
systems are accessible to all, regardless of socioeconomic status or geographic location. To make
AI accessible to all people, the following measures are recommended:
● Automating and standardizing the collection of race/ethnicity and language data
● Prioritizing the use of the data for identifying disparities and improving outcomes
● Providing internet access and digital literacy training to underserved populations
● Developing AI systems that are transparent, explainable, and accountable
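The first recommendation above, automating and standardizing the collection of race/ethnicity and language data, can be sketched as a small normalization step. The category names and mapping table below are illustrative assumptions, not an official coding standard:

```python
# Minimal sketch: map free-text race/ethnicity entries to standard
# categories so disparities can be measured consistently.
# The categories and mappings here are illustrative assumptions.

RACE_ETHNICITY_MAP = {
    "african american": "Black or African American",
    "black": "Black or African American",
    "caucasian": "White",
    "white": "White",
    "hispanic": "Hispanic or Latino",
    "latino": "Hispanic or Latino",
}

def standardize_race_ethnicity(raw_value: str) -> str:
    """Normalize a free-text entry to a standard category."""
    key = raw_value.strip().lower()
    # Entries that do not match any category are kept, not guessed at.
    return RACE_ETHNICITY_MAP.get(key, "Unknown or Declined")

records = ["Black", "caucasian", "  Latino ", "prefer not to say"]
standardized = [standardize_race_ethnicity(r) for r in records]
# standardized == ["Black or African American", "White",
#                  "Hispanic or Latino", "Unknown or Declined"]
```

Routing unmatched entries to an explicit "Unknown or Declined" category, rather than silently dropping them, keeps the denominator honest when the data are later used to identify disparities.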
A major challenge in ensuring justice in healthcare AI is achieving inclusivity and the
integration of different information technologies. The
integration of different technologies can pose several challenges, including technical
compatibility issues, data privacy and security concerns, and ethical and legal considerations. For
example, different technologies may have different data standards or formats, making it difficult
to share or combine data from different sources. Several measures can help make AI more
inclusive:
● Compensate for the disadvantages of some technologies and combine the advantages of
others. By combining different AI techniques, synergetic AI systems can leverage the
strengths of each approach while compensating for their weaknesses.
● Apply synergetic AI. The idea behind synergetic AI is that each AI technique has its
strengths and limitations, and combining them can lead to better performance and more
accurate results. For example, machine learning algorithms are effective at processing
large amounts of data and finding patterns, but they can struggle with handling
uncertainty or making decisions in complex or changing environments. Expert systems,
on the other hand, are good at reasoning and decision-making but may require a lot of
manual knowledge engineering to build.
● Include in the research the processes of communication, evolution and cooperation of
artificial systems.
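The synergetic combination described above can be sketched as a pipeline in which a statistical score (standing in for a trained machine-learning model) is filtered through expert-system rules that encode clinician knowledge. All feature names, weights, thresholds, and rules below are illustrative assumptions, not a clinical protocol:

```python
# Hedged sketch of "synergetic AI": an ML-style risk score combined with
# rule-based overrides. Everything here is a toy illustration.

def ml_risk_score(patient: dict) -> float:
    """Toy stand-in for a trained model: weighted sum of two features."""
    return 0.6 * patient["symptom_severity"] + 0.4 * patient["age"] / 100

def expert_rules(patient: dict, score: float) -> str:
    """Rule layer: handles cases the statistical model is weak on."""
    if patient.get("known_allergy") and score > 0.5:
        return "refer"      # hard rule overrides the model entirely
    if 0.4 <= score <= 0.6:
        return "review"     # model is uncertain -> route to a human
    return "treat" if score > 0.6 else "monitor"

patient = {"symptom_severity": 0.9, "age": 70, "known_allergy": True}
score = ml_risk_score(patient)       # 0.6*0.9 + 0.4*0.7 = 0.82
decision = expert_rules(patient, score)
# decision == "refer": the allergy rule escalates a high-risk case
```

The division of labor mirrors the text: the score handles pattern-finding over data, while the rules handle reasoning the model struggles with, such as hard safety constraints and uncertainty handling.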
Bias is one of the main ethical concerns with AI in healthcare. AI algorithms are only as
good as the data they are trained on, and if the data contains biases, the resulting algorithms will
also be biased. For example, facial recognition software has been found to have higher error rates
for people with darker skin tones, which can have serious consequences in healthcare, such as
misdiagnosis and delayed treatment. Obermeyer et al. (2019) found that a widely used algorithm
for managing patient care systematically underestimated the health needs of Black patients
relative to equally sick white patients, because it used healthcare costs as a proxy for illness,
leading to inequities in care. To prevent bias and discrimination, the following measures are
recommended:
● Validate the data set used for machine learning. This means checking that there is no
intentional discrimination in the data and that it accurately represents the population
being studied.
● Apply methodologies and software solutions that detect and prevent discrimination based
on factors such as race, national origin, gender, political opinion, religious belief, age,
social and economic status or privacy. These tools can help ensure that AI algorithms are
fair and unbiased.
● Adapt algorithms at the national level to account for differences in socio-economic
conditions and the state of national health systems. An algorithm designed with diversity in
mind helps ensure that everyone has access to high-quality healthcare.
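The first two recommendations, validating the data set and detecting discrimination, can be sketched as a simple fairness audit that compares error rates across demographic groups in a labeled validation set. The records below are synthetic illustrations, not real patient data:

```python
# Minimal sketch of a bias audit: compute per-group error rates.
# A large gap between groups is a signal, not proof, that the model
# or its training data should be examined for bias.

from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, y_true, y_pred) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, y_true, y_pred in records:
        totals[group] += 1
        errors[group] += int(y_true != y_pred)
    return {g: errors[g] / totals[g] for g in totals}

validation = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 1),
    ("B", 1, 0), ("B", 1, 0), ("B", 0, 0), ("B", 1, 1),
]
rates = error_rates_by_group(validation)
# rates == {"A": 0.25, "B": 0.5}: group B sees double the error rate
```

This kind of audit is exactly what standardized demographic data collection (recommended earlier) makes possible: without the group labels, disparate error rates cannot be measured at all.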
References:
Makri A. Bridging the digital divide in health care. The Lancet Digital Health. 2019 Sep
1;1(5):e204–5.
López L, Green AR, Tan-McGrory A, King R, Betancourt JR. Bridging the digital divide in
health care: the role of health information technology in addressing racial and ethnic disparities.
Jt Comm J Qual Patient Saf. 2011 Oct;37(10):437–45.
Chang CC, Tamers SL, Swanson N. The role of technological job displacement in the future of
work [Internet]. NIOSH Science Blog, CDC; 2022. Available from:
https://blogs.cdc.gov/niosh-science-blog/2022/02/15/tjd-fow/
Nunes A. Automation Doesn’t Just Create or Destroy Jobs — It Transforms Them. Harvard
Business Review [Internet]. 2021 Nov 2 [cited 2023 Mar 14]; Available from:
https://hbr.org/2021/11/automation-doesnt-just-create-or-destroy-jobs-it-transforms-them
Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used
to manage the health of populations. Science. 2019;366(6464):447–53.