
Jerry Kim Do

CST 300 Writing Lab


9 October 2022
Artificially Decided

Every second, there is a good chance that someone is making a critical, irreversible decision about their life. Often, people must also make important decisions for other people's lives. Medical professionals make such decisions constantly, and those decisions may single-handedly determine the well-being of their patients. Decisions about diagnosis, health analysis, and treatment can take a great deal of time to complete and execute. This is where automation in decision-making could be implemented: a machine would automatically analyze a patient's data and decide on a diagnosis and treatment. This, however, raises the question of whether it is a good idea to have artificial intelligence make important medical decisions, as people feel unease and distrust toward a lifeless machine attempting to mimic cognitive decisions about the health of human lives. The debate primarily revolves around the trust and safety of artificial intelligence in healthcare.

Issue History

The workload of a physician is stressful and requires tenacity, and automation can help. Due to the heavy workload of the profession, 68% of surveyed physicians in 2021 experienced burnout, a figure that had nearly doubled since the previous year (Shanafelt, 2022). In addition, the supply of physicians cannot meet the demand for medical decisions, as there are only about 1.1 million physicians currently in the United States (Michas, 2022). With the current United States population, that works out to roughly one doctor per 300 people, a shortage projected to become a major problem by the year 2034 if current trends continue (Robenznieks, 2022). Because of the shortages and frequent burnout, artificial intelligence could greatly benefit doctors by taking on part of the workload.
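As a rough check on that ratio (assuming, as an approximation, a United States population of about 330 million in 2022), the figure follows directly from the physician count:

$$\frac{330{,}000{,}000\ \text{people}}{1{,}100{,}000\ \text{physicians}} \approx 300\ \text{people per physician}$$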


The problem is that while artificial intelligence may execute its instructions perfectly, its system may not be inherently suited to every scenario. There have been mortality reports in the healthcare system in which an artificial intelligence system was involved, for example the death of Annette Monachelli in 2013 from a brain aneurysm overlooked by the artificial intelligence system of the time (Schulte, 2019). The incident began when, hearing Monachelli's concerns about head pain, a local physician ordered a head scan through the health center's software system. However, the order was never transmitted through the system because it was overruled. The overrule came from the electronic health records system, whose artificial intelligence, after scanning Monachelli's limited data, concluded there was nothing wrong with the patient. Monachelli died two months later without ever being diagnosed with her brain aneurysm. Unfortunate events such as Monachelli's death, combined with an already-existing distrust of technology, set back the advancement of artificial intelligence use in healthcare. Despite 90% of hospitals having artificial intelligence strategies in place within their systems, only 7% of those strategies have been made fully operational in professional practice (Landi, 2019). Surveys suggest this is because about half of United States doctors are anxious about using artificial intelligence in their practices, and not without good reason.

Stakeholder Analysis

There are two stakeholder groups, each of which includes doctors, patients, their loved ones, and medical authorities. The first group opposes the use of artificial intelligence in making important medical decisions, and the second group supports it. While both believe the patient's health is of utmost importance in healthcare, their different values and claims support different positions on the matter.


Stakeholder 1 – The Medical AI Opposers

Values

The opponents of medical artificial intelligence place more value in the human and less in the machine. They see greater diligence and trustworthiness in the human than in the machine. Their biggest motivation for opposing the use of artificial intelligence is security: they value the safety of their personal information and the cognitive ability of humans over artificial intelligence.

Position

The opposers stand against artificial intelligence involvement in important medical decisions, especially diagnosis and the prescription of services and products. This position stems from the belief that decisions about the well-being of a real human have to be made by real cognitive minds that have been educated in the matter, not by imitators or simulators.

Claims

Because they place higher value in diligence and human cognition, they believe that having humans handle tedious but important tasks puts better quality into healthcare. With a claim of value, the opponents of medical artificial intelligence argue that the extensive research and knowledge acquired through medical study make the human more reliable, and therefore able to save more lives, than a machine. With a claim of fact, they also argue that the data used to train artificial intelligence are highly unrepresentative, as there is not enough data for every disease and every human population (Axbom, 2019); this creates a risk of bias and increased health inequalities. Most important to the economy, they claim that artificial intelligence should not be brought into healthcare because it especially puts physicians such as radiologists and pathologists at risk, given artificial intelligence's reliance on diagnoses based on scanning biological images (Staff, 2022).
Stakeholder 2 – The Medical AI Supporters

Values

Unlike the first group, the supporters of medical artificial intelligence place their highest value in automation. They see greater efficiency in, and place more trust in, artificial intelligence than they do in humans. They fear that in the medical world, a lack of efficiency costs lives. Since machines do not get tired, they can outperform human workers at the same tasks.

Position

The advocates believe that machines are just as trustworthy as humans, if not more so. Although 90% of hospitals have artificial intelligence strategies in place, the supporters hold that most of those strategies should be made fully operational. They believe in particular that medical artificial intelligence should be used more often in hospitals and that technological advancement should be pushed. For the sake of the human lives that could be saved through efficiency, the group feels obligated to support artificial intelligence services in healthcare.

Claims

Because they place higher value in automation, the medical artificial intelligence supporters believe artificial intelligence will make life easier than humans ever could. With a claim of fact, they state that the artificial intelligence software already in use has improved quality control for medical patients. They also make a claim of value that humans are more prone to bias and mistakes than emotionless, tireless automation, and thus artificial intelligence is more trustworthy and carries less risk. They reason with a claim of cause that artificial intelligence services will not take jobs away from medical professionals such as radiologists and pathologists but rather complement them, as both fields face a shortage of active workers (Staff, 2022). This leads to their policy claim that artificial intelligence should serve not as a replacement but as a utility for physicians: it can help produce diagnoses that physicians then check and explain afterward.

Argument Question

Both groups have valid views and points about artificial intelligence in healthcare; however, one decision has to be made. Should hospitals continue to advance and invest further in artificial intelligence that ultimately plays a critical role in important medical decisions?

Stakeholder Arguments

Stakeholder 1 – The Medical AI Opposers

Ethical Framework – Care Ethics Framework

In the debate over whether artificial intelligence should be applied, the opposers use the Care Ethics framework, which defines one's general obligation to other people, much as a professor is obligated to prevent damage to their classroom furniture even though their job contract does not necessarily say they have to.

Applying the Framework

The physician's most important duty is to act in the best interest of their patients' health. They are obligated to care for the health of their patients and, with that, to personally oversee the entire process of patient care rather than hand it off to a machine that will do part of the work for them. Physicians are also obligated to provide the human services that bring a sense of emotion and empathy to care.

Stakeholder’s Recommended Course of Action

With obligation as the motive, physicians themselves should determine their patients' diagnoses, treatments, and personalized services and products. Done this way, the quality of healthcare increases through genuine emotion and empathy, such as encouragement, warnings, stories, and reassurance that does not come from artificial scripts.

What is at Stake

With the human mind at work rather than the machine's, there would be far less possibility of biased mistakes. An artificial intelligence system, trained only on the data available to it, could misjudge a patient's special needs because that patient's situation is underrepresented in the machine's training data. A physician who personally sees a patient through the process can determine the patient's particular needs through study and experience. It is not just biased mistakes that are at stake, but technical ones as well. Once in a while an individual physician may make a mistake, but if an entire artificial intelligence system follows instructions that repeat the same mistake indefinitely across the medical field, the whole healthcare process would be undermined and many lives would be at stake. The opposers do not want another case in which artificial intelligence overlooks a patient's fatal condition (Schulte, 2019).

Stakeholder 2 – The Medical AI Supporters

Ethical Framework – Utilitarianism Framework

In the debate over whether artificial intelligence should be applied, the supporters use the Utilitarianism framework: applying artificial intelligence is right because of how much benefit it provides, at relatively little cost in pain, to both patients and physicians.

Applying the Framework

The supporters argue that while forgoing artificial intelligence may add more quality in diligence on the physician's part, it wastes mental effort on tedious tasks and puts physicians at a disadvantage when their focus is needed elsewhere. Artificial intelligence, trained through automated trial and error along with numerous other methods, would rarely make mistakes and is more precise than human workers. By that reasoning, tedious work is exactly what technology is meant to handle. A physician may diagnose patients with the same sickness without mistake, but it takes a great deal of work and time to produce such a repetitive result. A diagnosis by artificial intelligence would therefore save both the physician's and the patient's time by reducing how long it takes to determine treatment.

Stakeholder’s Recommended Course of Action

It is not just the field of diagnostics that needs time reduction, but also areas such as quality control, customer care, monitoring, inventory management, and service and product personalization (Thormundsson, 2022). By applying artificial intelligence to only the tedious categories, physicians can focus their service on the human parts, such as emotion and empathy. All hospitals should invest further in the artificial intelligence strategies they already have.

What is at Stake

Some say time is money, and in the medical field’s case, time is survivability. With the

time saved and accuracy gained, many patients’ lives would be saved.

Student Position

Personally, I find the Utilitarianism Framework used by the supporters of artificial intelligence in healthcare to be more beneficial and progressive than the Care Ethics Framework of the opposers. Benefiting both patients and physicians is most ideal, and the time saved from tedious physician tasks could instead be devoted to the empathy and emotion that the opposers valued and argued for.


References

Axbom, P. (2019, October 20). Lack of Representation in AI Puts Vulnerable People at Risk. AI Now Institute of New York University. Retrieved September 30, 2022, from https://axbom.com/representation-ai-sweden/

Landi, H. (2019, April 25). Nearly Half of U.S. Doctors Say They Are Anxious About Using AI-Powered Software: Survey. Fierce Healthcare. Retrieved September 30, 2022, from https://www.fiercehealthcare.com/practices/nearly-half-u-s-doctors-say-they-are-anxious-about-using-ai-powered-software-survey

Michas, F. (2022, June 8). Total Number of Active Physicians in the U.S. Statista. Retrieved September 30, 2022, from https://www.statista.com/statistics/186269/total-active-physicians-in-the-us/

Robenznieks, A. (2022, April 13). Doctor Shortages Are Here – and They'll Get Worse If We Don't Act Fast. The American Medical Association. Retrieved September 30, 2022, from https://www.ama-assn.org/practice-management/sustainability/doctor-shortages-are-here-and-they-ll-get-worse-if-we-don-t-act

Schulte, F. (2019, March 18). Death By 1,000 Clicks: Where Electronic Health Records Went Wrong. Kaiser Health News. Retrieved September 30, 2022, from https://khn.org/news/death-by-a-thousand-clicks/

Shanafelt, T. (2022, September 13). Changes in Burnout and Satisfaction With Work-Life Integration in Physicians During the First 2 Years of the COVID-19 Pandemic. Mayo Clinic Proceedings. Retrieved September 30, 2022, from https://www.mayoclinicproceedings.org/article/S0025-6196(22)00515-8/fulltext

Staff, G. (2022, March 2). Arguing the Pros and Cons of Artificial Intelligence in Healthcare. Health IT Analytics. Retrieved September 30, 2022, from https://healthitanalytics.com/news/arguing-the-pros-and-cons-of-artificial-intelligence-in-healthcare

Thormundsson, B. (2022, March 17). AI Use Cases in the Pharma and Healthcare Industry as of 2020. Statista. Retrieved September 30, 2022, from https://www.statista.com/statistics/1197960/ai-pharma-healthcare-global/
