
Case Study: When AI Makes a Bad Decision, Who Is Legally Responsible?

Princess Wella Mae Tibon

Saint Joseph College


Albert Paluga
Ethics

“In submitting this assignment/essay, I confirm that I am the sole author of this work and I have
not intentionally included the work of anyone else without proper acknowledgement or
citation/reference. I confirm that no one else composed any part of the assignment/essay.
I confirm that I am NOT:
• Presenting another person’s work as my own, including written work, images, designs, or web content.
• Purchasing a paper, partial papers, or other academic work from a 3rd party and presenting it as my own.
• Paraphrasing or condensing ideas from another person’s work without proper citation.
• Failing to document direct quotations with a proper citation.
• Copying word-for-word, using select phrases from another’s work, or failing to properly cite all sources from which data, examples, ideas, or theories are found.
• Copying and pasting content and changing a few words without citation.”
Brief Description of the Case

This case concerns artificial intelligence (AI), also known as machine intelligence: a class of
technologies that enables machines to perform tasks that normally require human intelligence,
such as visual perception, speech recognition, and decision-making.

The case study examines the legal repercussions when AI systems make harmful errors in
judgment. It looks into instances where bad decisions made by AI systems led to injury,
financial loss, or even fatalities.

The case study considers legal responsibility and accountability when AI systems make
incorrect decisions. It surveys the legal frameworks and doctrines, such as strict liability,
product liability, and negligence, that may be used to hold persons accountable for injuries
caused by AI.

The case study also examines the difficulty of proving that an AI system is to blame for a
harm, including the challenge of identifying the actual cause of a failure and the potential for
convoluted chains of responsibility.

Taken as a whole, the case study highlights the importance of considering legal responsibility
and liability when designing, producing, and deploying AI systems, together with the need for
ongoing oversight and accountability to ensure they do no harm.

List of Relevant Facts

1. The case includes situations in which AI systems have made choices that harmed people or
produced unfavorable outcomes.
2. The case examines various legal frameworks and strategies, such as strict liability, product
liability, and negligence, that can be used to hold companies responsible for harms caused
by AI.
3. The case emphasizes the difficulty of assigning accountability for AI-generated harms,
including the possibility of complicated chains of responsibility and the challenge of
pinpointing a problem's precise origin.
Problem Description

It is entirely possible for AI to prove inaccurate or unreliable. If anomalies go unnoticed,
AI-driven systems can make mistakes and cause harm. Here are some problems that can be
deduced from the case.

1.) AI Algorithm Bias

One of the main issues with artificial intelligence is algorithmic bias, since it poses a risk of
harm, especially in systems that control vital infrastructure such as transportation or
healthcare. There are a number of approaches to dealing with algorithmic bias, although
correcting it can be a difficult undertaking. Do not discard labeled data from the learning
process merely because it does not match other examples; if it is representative of a distinct
group, keep it. Instead, train models on the smaller, group-specific datasets and combine
them into a bigger ensemble model.
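The idea of keeping group-specific data and combining per-group models can be sketched in a few lines. This is a hypothetical illustration, not a method from the case itself: the threshold classifier, the group datasets, and all names here are invented for demonstration.

```python
# Sketch: train one simple model per demographic group's labeled data,
# then combine them into a majority-vote ensemble instead of discarding
# the smaller datasets. All data and names below are illustrative.

def train_threshold(samples):
    """Learn a 1-D decision threshold as the midpoint between class means."""
    pos = [x for x, y in samples if y == 1]
    neg = [x for x, y in samples if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def ensemble_predict(thresholds, x):
    """Majority vote over the per-group threshold models."""
    votes = sum(1 for t in thresholds if x >= t)
    return 1 if votes * 2 > len(thresholds) else 0

# Small labeled datasets for three groups: (feature, label) pairs
group_a = [(0.2, 0), (0.3, 0), (0.8, 1), (0.9, 1)]
group_b = [(0.1, 0), (0.4, 0), (0.7, 1), (0.8, 1)]
group_c = [(0.3, 0), (0.5, 0), (0.6, 1), (0.9, 1)]

models = [train_threshold(g) for g in (group_a, group_b, group_c)]
print(ensemble_predict(models, 0.85))  # high value: all models vote 1
print(ensemble_predict(models, 0.15))  # low value: all models vote 0
```

Because every group contributes a model, no group's labeled examples are thrown away, which is the intuition behind the ensemble approach described above.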
2.) AI Is Vulnerable to Cyber Attacks

Government agencies, home computers, and cellphones all use artificial intelligence. Despite
the many advantages modern technology offers, it also introduces new attack surfaces.
Criminals might turn artificial intelligence against the very systems that deploy it, causing
system failures or gaining unauthorized access. As AI becomes smarter and gains the ability
to make decisions, it will also be able to carry out automated cyberattacks without human
involvement. As a result, cyber security is one of the largest challenges facing artificial
intelligence technologies.
3.) Lack of Skilled Workers

Artificial intelligence is a technology that will be around for some time, so you should start
planning your AI strategy by investing in the personnel and technical expertise required to
build AI applications. Finding experts with the skill set needed to create a custom artificial
intelligence solution for your company may seem challenging at first, given the current
shortage of skilled engineers in this field. To locate new developers who are knowledgeable in
this technology, establish links with regional institutions and AI teaching platforms. For your
company's AI activities, also look into other options, such as the apprenticeship programs
offered by Google, IBM, Microsoft, and others. The alternative is to hire a software
outsourcing firm.

List of Probable Solutions

Most people now recognize that while AI has the potential to transform industries and boost
efficiency, it also raises moral dilemmas and practical limitations that must be taken into
account.
1.) How can we reduce bias in AI?

Since bias is a characteristic of all people, it can be seen and felt in everything we produce,
especially in technological endeavors. Open-source data science may help address the
problem of bias in the creation of artificial intelligence (AI). By decreasing bias in AI, an open,
collaborative data science strategy can help create a more just and equitable society.

2.) Detecting and responding to threats more quickly

By utilizing AI, you can understand your networks and spot potential dangers more quickly
than ever. AI-powered tools can comb through voluminous data to spot anomalous behavior
and swiftly identify malicious activity, such as a fresh zero-day attack. AI can automate
numerous security procedures, such as patch management, making it simpler to keep on top
of your cyber-security requirements. By automating some processes, such as diverting traffic
away from a vulnerable server or informing your IT team of potential problems, it can help
you react to attacks more quickly.
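The anomaly-spotting step described above can be sketched with a simple statistical baseline. This is a minimal illustration under invented assumptions (the traffic numbers, the z-score rule, and the 3.0 threshold are all hypothetical), not how any particular security product works:

```python
# Sketch: flag anomalous network activity by z-score against the
# traffic baseline. Data and threshold below are illustrative only.
import statistics

def find_anomalies(requests_per_minute, threshold=3.0):
    """Return indices of readings whose z-score exceeds the threshold."""
    mean = statistics.mean(requests_per_minute)
    stdev = statistics.stdev(requests_per_minute)
    return [i for i, v in enumerate(requests_per_minute)
            if abs(v - mean) / stdev > threshold]

# Normal traffic with one sudden spike, e.g. a possible attack
traffic = [100, 98, 103, 101, 99, 102, 100, 97, 500, 101, 100, 99]
print(find_anomalies(traffic))  # only the spike at index 8 is flagged
```

A real system would learn a far richer baseline than a single mean, but the principle is the same: automate the comparison of current behavior against what is normal, so the IT team is alerted quickly.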

3.) Partner with nearby educational facilities

Many businesses can collaborate with regional (or national) educational institutions, gaining
access to a new pool of qualified candidates while enabling the college or university to help
graduates find employment. You can offer internships, co-op work placements, and
apprenticeships at your business, which helps develop the next generation of skilled workers.
Best of all, it lets you closely follow how talent develops in your sector.

Chosen Solution

My chosen solution is to partner with nearby educational facilities. Finding a trustworthy AI
partner is essential for integrating AI into the classroom successfully. This may be a tech firm,
a regional institution, or a nonprofit with a focus on AI teaching. The right partner can offer
the support, training, and direction teachers need to integrate AI into their teaching practices
successfully. In conclusion, the use of artificial intelligence in the classroom offers both
teachers and students a unique opportunity. In addition to helping students develop crucial
21st-century abilities such as critical thinking and problem-solving, AI can offer them
individualized and engaging learning experiences. However, introducing the technology into
the classroom also brings a number of difficulties, including issues with data security and
ethics, the need for ongoing training and support, and the possibility of unequal access to
technology and digital skills.
