Ashley Teraishi
CST 300 Writing Lab
October 4, 2020
AI Bias in Healthcare
When used correctly, AI technology can greatly benefit the healthcare industry. However, programs
that discriminate against people based on gender or race can cause serious harm. Therein lies the
ethical issue regarding the fairness of such software in healthcare environments. AI has the
potential to reduce bias and discrimination, but it can also perpetuate bias at scale (Silberg &
Manyika, 2019). Because of the seriousness of the potential ramifications, it is crucial to look at
all sides of this issue. The question arises whether it is ethical to use AI in healthcare decisions,
knowing that there is a potential for bias. To address the different sides of this issue, we can look
at the history of the problem and the perspectives of the key stakeholders.
Background
One of the earliest reported incidents of AI discrimination came to light in 1988, when the
British Medical Journal published an article stating, “discrimination in medicine against women
and members of ethnic minorities has long been suspected, but it has now been proved” (Lowry
& Macpherson, 1988). Between 1982 and 1986, as many as 60 applicants per year were denied
an interview at St. George’s Medical School because of their race or gender. This discrepancy
occurred because the computer program used during the screenings of applicants favored men
and people with European names (Lowry & Macpherson, 1988). However, it is vital to note that
the program did not introduce this bias, but rather it reflected the bias already present in the
application process. The computer program was trained with data from previous application
periods, where the school indeed favored men and people with European names. The results
provided by the computer program had a 90-95% correlation with the decisions made by humans
regarding applicants (Lowry & Macpherson, 1988). The idea that AI programs highlight existing
biases rather than introduce new ones adds another layer of complexity to this ethical dilemma.
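The mechanism at work in the St. George's case, a model trained to imitate past human decisions and thereby absorbing their bias, can be illustrated with a short Python sketch. The data, the two features, and the "learning" rule below are entirely hypothetical and greatly simplified; they are meant only to show how a model fit to biased decisions reproduces them:

```python
# Hypothetical admissions history: (is_male, has_european_name, was_interviewed)
# The historical decisions themselves are biased toward men and European names.
history = [
    (1, 1, 1), (1, 1, 1), (1, 0, 1), (0, 1, 1),
    (0, 0, 0), (0, 0, 0), (1, 0, 0), (0, 1, 0),
]

def learned_weight(feature_index):
    """Crude 'learning': how much more common this feature was among
    interviewed applicants than among rejected applicants."""
    interviewed = [a for a in history if a[2] == 1]
    rejected = [a for a in history if a[2] == 0]
    rate_in = sum(a[feature_index] for a in interviewed) / len(interviewed)
    rate_out = sum(a[feature_index] for a in rejected) / len(rejected)
    return rate_in - rate_out

w_male = learned_weight(0)
w_name = learned_weight(1)

def score(is_male, has_european_name):
    """Screening score for a new applicant, based only on learned weights."""
    return w_male * is_male + w_name * has_european_name

# Two applicants identical in every relevant qualification:
print(score(1, 1) > score(0, 0))  # True: the model inherits the historical bias
```

Nothing in the code mentions discrimination explicitly; the bias enters purely through the training data, which is exactly why it went unnoticed at St. George's for years.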
Although this issue began decades ago, it is still relevant today, and instances of AI bias
continue to occur. For example, a prominent health services company, Optum, created a widely
used algorithm that predicts which patients may need extra medical care. This algorithm
“dramatically underestimated the health needs of the sickest black patients, amplifying
long-standing racial disparities in medicine” (Johnson, 2019). Without the bias, the number of
black patients flagged as needing more medical care would have more than doubled (Johnson,
2019). The creators of the computer program did not include this bias intentionally. In an attempt
to avoid such issues, race was specifically excluded from the design of the algorithm. However,
it analyzed how much each patient would cost the healthcare system in the future. According to
the data used to train the program, “black patients incurred about $1,800 less in medical costs per
year than white patients with the same number of chronic conditions; thus the algorithm scored
white patients as equally at risk of future health problems as black patients who had many more
diseases” (Johnson, 2019). This situation shows that issues can occur even when bias is not
intentionally introduced. The Covid-19 pandemic has made this issue
particularly important to address. In the United States, Covid-19 has disproportionately impacted
racial and ethnic minorities (Röösli, Rice, & Hernandez-Boussard, 2020). Because hospital
resources during this pandemic have often been scarce, discussions have arisen regarding the use
of AI “for optimal allocation of limited resources such as ventilators and ICU beds” (Röösli et
al., 2020). However, using AI for such sensitive decisions is dangerous because it can perpetuate
and worsen the racial disparities already unfolding. “These tools are built from biased data
reflecting biased healthcare systems and are thus themselves also at high risk of bias” (Röösli et
al., 2020). AI bias can be a very difficult problem to avoid, identify, and fix, but we must figure
out how to address it before entrusting AI with such high-stakes decisions.
Stakeholders
When addressing this issue, two of the main stakeholders to consider are patients and the
companies that create AI technology for healthcare use. Technology companies want
their products to be marketable and profitable. Patients, on the other hand, may be strongly
opposed to introducing AI into sensitive healthcare decisions, especially when the programs in
use may have an underlying bias.
The position of the technology companies is that AI should
continue to be used because it has the potential to be beneficial for patients and healthcare
workers. It can streamline the healthcare industry and save time and money during diagnosis,
treatment, research, patient monitoring, and more. Additionally, it can reduce instances of
discrimination if used correctly because machine learning algorithms will only ever consider the
factors that can improve prediction accuracy (Silberg & Manyika, 2019). To support this
position, one could use a claim of policy to state that AI should be used in healthcare, and
regulations should be implemented to mitigate bias. These companies and corporations benefit
through profits and notoriety, depending on the outcome of the situation. It is in their best
interest to create the best product so that they can remain profitable. To develop the best product,
they must work to identify and mitigate bias in their software.
On the other hand, patients who are against AI value the need to receive proper care,
regardless of minority status. Their position is that AI should not be used in the healthcare
industry because it has the potential to discriminate based on race, gender, or socioeconomics.
“Vulnerable groups such as minorities and patients with disabilities may not be sufficiently
represented in the data, and their needs may not be adequately accounted for if these groups are
not carefully considered during the design of the AI system” (Asan et al., 2020). Being
inadequately accounted for in the datasets used to train AI programs is exceedingly harmful when
those programs are applied to decisions that directly affect people's lives, such as healthcare. With
such data discrepancies, predictions will not be as accurate, unexpected issues may arise, and
ultimately patients may not receive the proper care. Because there is a potential to
unintentionally cause harm and perpetuate bias, these practices should not be used in medicine.
All doctors take an oath to do no harm, so the consequences must be carefully considered. To support this
position, one could use a claim of value and state that AI in healthcare introduces another way in
which patients can be overlooked or misrepresented. If this happens, those patients will not
receive the same quality of care as others; thus, their health is at stake.
The position of the technology companies can be evaluated through the ethical framework of
consequentialism, which holds that “an action is morally
right if the consequences of that action are more favorable than unfavorable” (Fieser, n.d.).
According to this stakeholder, the correct course of action is to continue using and bettering AI
technology for healthcare, because it will create more favorable outcomes in the long-term. “AI
has shown significant potential in the area of mining medical records, designing treatment plans,
robotics mediated surgeries, medical management and supporting hospital operations, clinical
virtual nursing, and connected healthcare devices” (Asan et al., 2020). At this
point, AI still has the potential to discriminate based on race or gender, but if tech companies and
healthcare providers continue to work towards reducing the bias, it will make for a much more
favorable outcome for everyone. As seen with the application discrimination issue at St.
George’s Medical School, the program provided definitive proof of a long-suspected bias in the
application process and the medical industry as a whole (Lowry & Macpherson, 1988).
Companies and organizations can use this information to their advantage by taking on issues of
real-life biases while improving AI software. Using the software in such a way will highlight
issues of biases so that they may be addressed. This, combined with the aforementioned
applications for AI in healthcare, shows that the positive consequences will outweigh the
negative.
In contrast, the position of patients against AI can be looked at through the ethical
framework of deontology. Deontology states that an action is moral if it honors one's duties,
regardless of its consequences (Fieser, n.d.). Within duty-based
frameworks, there are four main duty theories to consider. First is that every person has duties to
God, duties to oneself, and duties to others. In this case, we would look at the duties to others,
which state we must “avoid wronging others, treat people as equals, and promote the good of
others” (Fieser, n.d.). Next is the rights theory. “Most generally, a ‘right’ is a justified claim
against another person’s behavior - such as my right to not be harmed by you” (Fieser, n.d.).
John Locke, a prominent 17th-century philosopher, argued that “the laws of nature mandate that
we should not harm anyone’s life, health, liberty, or possessions” (Fieser, n.d.). Rights are
universal and are the same for everyone, regardless of race, gender, disability, or
socioeconomics. The third duty-based theory states that we must treat others with dignity, “to
treat people as an end, and never as a means to an end” (Fieser, n.d.). According to this theory, it
is immoral to continue using these programs in the hope that they will create a brighter future,
because they hurt people along the way. The fourth and final duty-based theory states that we have
prima facie duties, such as fidelity, justice, beneficence, and
nonmaleficence (Fieser, n.d.). In this case, the most important duty would be nonmaleficence, the
duty not to harm others.
According to this stakeholder, the correct course of action is to stop using AI in the
healthcare field because it has the potential to indirectly harm patients and their health, which
goes against their rights and the duty of healthcare providers. Patients have a right to not be
harmed and to receive proper care, and healthcare providers have a duty to help patients. If
discriminated against, patients will inadvertently be harmed, and consequently, their health may
suffer.
Personal Perspective
I believe that AI should continue to be used in healthcare, together with
regulations that will address bias. My position on the issue aligns most closely with that of the
companies that create AI for healthcare. AI can be extremely beneficial for the healthcare
industry. Addressing the biases in AI can also help to address biases that already exist in the
healthcare industry, making it a more inclusive and fair environment for all people. If we were to
get rid of AI in healthcare, it would slow down progress, and it would not fix the issue because
the programs reflect, rather than create, the
discrimination seen in the industry. Companies that develop the software should work closely
with data scientists, AI researchers, and ethicists to ensure the “fairness” of their decisions.
Additionally, humans can regularly review the decisions made by the programs to adjust
for errors. Combining AI with human oversight may be the most effective
way for healthcare providers to approach the issue. For example, AI models can provide
suggestions, and physicians can consider those recommendations for their final decision (Hague,
2019).
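Such human review can be supported by simple automated fairness checks. The sketch below is an illustration, not any particular toolkit's API; the audit data is hypothetical, and the 0.8 threshold is borrowed from the common "four-fifths rule" of thumb:

```python
def selection_rate(flags):
    """Fraction of patients flagged for an extra-care program."""
    return sum(flags) / len(flags)

def disparate_impact(flags_a, flags_b):
    """Ratio of group B's selection rate to group A's.

    The 'four-fifths rule' of thumb treats a ratio below 0.8 as
    evidence that the model should be re-examined for bias.
    """
    return selection_rate(flags_b) / selection_rate(flags_a)

# Hypothetical audit data: 1 = flagged for extra care, 0 = not flagged.
white_flags = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]   # 70% flagged
black_flags = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]   # 30% flagged

ratio = disparate_impact(white_flags, black_flags)
print(round(ratio, 2))  # 0.43
print(ratio < 0.8)      # True: this model warrants re-examination
```

A check like this cannot prove a model is fair, but run regularly against real decisions, it can surface disparities like the Optum case before they compound.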
There is no simple fix to the issue of AI bias. However, large corporations such as IBM
have created software toolkits to make “fairness” easier to check and maintain in machine
learning models. Over time, similar software toolkits and fairness testing could become industry
standard. Additionally, laws and regulations can be put in place to
ensure the safety of patients. For example, if a machine learning model is found to be biased, it
should be taken off the market and rendered inactive until the bias has been addressed as best as
possible. I believe that with work and time, the benefits will outweigh the negatives, and the
healthcare industry will be better for all patients.
References
Asan, O., Bayrak, A. E., & Choudhury, A. (2020). Artificial intelligence and human trust in
healthcare: Focus on clinicians. Journal of Medical Internet Research, 22(6), e15154.
doi:10.2196/15154
Hague, D. C. (2019). Benefits, pitfalls, and potential bias in health care AI. North Carolina
Medical Journal.
Johnson, C. (2019, October 25). Racial bias in a medical algorithm favors white patients over
sicker black patients. The Washington Post.
https://www.washingtonpost.com/health/2019/10/24/racial-bias-medical-algorithm-favors
-white-patients-over-sicker-black-patients/
Lowry, S., & Macpherson, G. (1988). A blot on the profession. British Medical Journal.
http://europepmc.org/backend/ptpmcrender.fcgi?accid=PMC2545288&blobtype=pdf
Röösli, E., Rice, B., & Hernandez-Boussard, T. (2020). Bias at warp speed: How AI may
contribute to the disparities gap in the time of COVID-19. Journal of the American
Medical Informatics Association.
Silberg, J., & Manyika, J. (2020, July 22). Tackling bias in artificial intelligence (and in
https://www.mckinsey.com/featured-insights/artificial-intelligence/tackling-bias-in-artific
ial-intelligence-and-in-humans