abuse: one can use AI for bad purposes, e.g., cyberattacks and political manipulation
malfunction: AI may fail in various ways (e.g., giving wrong or biased outputs) for various
reasons (e.g., related to the training process or the training data)
security: people may attack an AI to affect its performance or to steal data
explainability: it has been hard to describe in a human-understandable way why an AI gives a
certain output
privacy: AI both enables and requires the extensive collection, processing, and tracking of
personal data; to be discussed in §9.1
(https://canvas.nus.edu.sg/courses/45147/pages/ss9-dot-ethics) when we look into ethics
data scarcity: high-quality data may not be available for training
7.1. Abuse
We saw in §5 (https://canvas.nus.edu.sg/courses/45147/pages/ss5-dot-use-cases) many ways in
which one can use AI to benefit people. The same technology is capable of causing harm to
people too when used with ill intent. The power of AI makes the resulting harm more severe and
harder to avoid. Here are a few examples of how AI may be abused.
Deepfakes (https://canvas.nus.edu.sg/courses/45147/pages/ss3-dot-capabilities-
vision#deepfakes) and natural language generation AI
(https://canvas.nus.edu.sg/courses/45147/pages/ss2-dot-capabilities-language#nlg) can be used
to spread misinformation and to manipulate public opinion. In §1.3
(https://canvas.nus.edu.sg/courses/45147/pages/ss1-dot-why-care#zelenskyy) , we gave an
example in the war between Russia and Ukraine.
Deepfakes (https://canvas.nus.edu.sg/courses/45147/pages/ss3-dot-capabilities-
vision#deepfakes) and natural language generation AI
(https://canvas.nus.edu.sg/courses/45147/pages/ss2-dot-capabilities-language#nlg) can also be
used in impersonation, scams, and social engineering attacks.
AI robotics (https://canvas.nus.edu.sg/courses/45147/pages/ss4-dot-capabilities-robots) can be
used to automate physical and cyber weapons. We will discuss these further in §9
(https://canvas.nus.edu.sg/courses/45147/pages/ss9-dot-ethics) when we look into ethics.
Cyberattackers can use AI to help them in many ways.
https://canvas.nus.edu.sg/courses/45147/pages/ss7-dot-challenges-and-issues 1/12
18/11/2023, 16:38 §7. Challenges and issues: HS1501 Artificial Intelligence and Society [2310]
References: [2] DARPAtv (https://www.youtube.com/@DARPAtv). “DARPA Cyber Grand Challenge: Visualization Overview”. YouTube, 22 Sep. 2023.
7.2. Malfunction
AI sometimes makes mistakes. The mistakes can range from innocent to fatal. They can be due
to unexpected scenarios, low-quality training data, or poor engineering/programming choices,
amongst other reasons. Let us look at each of these causes in turn, and then discuss some good
practices for preventing and handling failures.
Unexpected scenarios
We saw in §6.1 (https://canvas.nus.edu.sg/courses/45147/pages/ss6-dot-technical-
background#train) that AI learns from the data provided to train it. If the training data does not
cover a scenario that an AI encounters, then the AI may respond unpredictably. Here are two
examples.
Even when the developers have anticipated the scenarios, the training data used may still be
ill-suited or insufficiently representative for the purpose. In this case, bias present in
the training data leads to biased results. Here are two examples.
An AI was used to assess which pneumonia patients were at high risk. It was mostly accurate,
but it erroneously classified patients with a history of asthma as low-risk. In reality, such patients
had higher rates of survival only because they were sent directly to intensive care. This
mistake was caused by the use of data that was not fit for the purpose.
In 2015, a user reported that the Google Photos (https://www.google.com/photos/) app
misclassified two dark-skinned people as “gorillas”, which echoes racist tropes. Google
apologized for the incident. Reportedly, as of 2023, Google Photos still does not classify any
(gorilla or not) photo as “gorillas” unless the word itself appears in the photo. One potential
reason for the incident is that the training data used did not contain enough photos of dark-
skinned people.
References: [1] Rich Caruana, et al. “Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission
(https://doi.org/10.1145/2783258.2788613) ”. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data
Mining (KDD '15). Association for Computing Machinery, New York, NY, USA, pp. 1721–1730, 2015. [2] Anonymous. “(2015-06-03) Incident
Number 16”. In S. McGregor, ed., Artificial Intelligence Incident Database. Responsible AI Collaborative, https://incidentdatabase.ai/cite/16
(https://incidentdatabase.ai/cite/16) . Last accessed on 27 Sep. 2023. [3] Nico Grant and Kashmir Hill. “Google’s Photo App Still Can’t Find Gorillas.
And Neither Can Apple’s”. The New York Times, 22 May 2023. https://www.nytimes.com/2023/05/22/technology/ai-photo-labels-google-
One common consequence of poor programming choices is overfitting, in which the AI model
learns the specifics of the training data instead of patterns that generalize
(https://canvas.nus.edu.sg/courses/45147/pages/ss6-dot-technical-background#ml) to unseen data.
One possible reason for overfitting is that the AI model used is too complex for the data
involved. Another possible reason is that the model is trained for too long for the amount of
training data used.
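A small sketch of overfitting, under made-up assumptions: polynomial models stand in for AI models of different complexity, and the data are synthetic points around the line y = 2x.

```python
import numpy as np

# Synthetic data for illustration: the true pattern is y = 2x plus noise.
rng = np.random.default_rng(0)
x_train = np.linspace(-1, 1, 10)
y_train = 2 * x_train + rng.normal(0, 0.3, 10)
x_test = np.linspace(-1, 1, 100)
y_test = 2 * x_test  # noise-free ground truth

def fit_and_errors(degree):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_err, test_err

simple_train, simple_test = fit_and_errors(1)   # matches the data's complexity
complex_train, complex_test = fit_and_errors(9) # 10 parameters for 10 points

# The overly complex model fits the training points almost perfectly,
# i.e., it memorizes the noise -- yet it typically does worse on unseen data.
assert complex_train < simple_train
```

The degree-9 polynomial is "too complex for the data involved" in exactly the sense of the paragraph above: it has as many parameters as there are training points.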
human-in-the-loop: include humans to oversee the system and to provide advice when
needed
think failure: expect that systems will fail and that some unlikely event with huge impact will
happen; design safeguards and contingency plans accordingly
backup plans: have different systems back one another up
minimization of dependencies: where possible, keep parts of the system running even when
others fail
fail fast: detect problems early in the development cycle, e.g., by implementing system and
administrative procedures for faster reporting, and by carrying out testing alongside
development
Gall's law: do not build complex systems from scratch; instead, build them from simpler
systems that work
Know when and how to escalate issues quickly. When systems fail, stay calm, act quickly, identify
the root cause of the failure, remediate, and contain the damage, e.g., by reconfiguring one
system to fulfil another's role. Learn from past incidents.
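The "backup plans" and "fail fast" practices above can be sketched in a few lines: wrap the primary system so that, when it fails, a simpler backup takes over and the failure is surfaced for escalation. All names here are hypothetical.

```python
# Hypothetical recommender service illustrating graceful degradation.

def primary_recommender(user):
    raise RuntimeError("model server unreachable")  # simulate a failure

def backup_recommender(user):
    return ["most-popular-item"]  # less personalized, but keeps the service up

def recommend(user):
    try:
        return primary_recommender(user)
    except Exception as err:
        # fail fast: in a real system, report and escalate the incident here
        print(f"primary failed ({err}); falling back")
        return backup_recommender(user)

result = recommend("alice")  # the user still gets a (degraded) answer
```

The point is not the three tiny functions but the shape: the service keeps responding, the failure is visible, and a human can investigate the root cause afterwards.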
Listen to Prof. Yu talk about how Netflix (https://www.netflix.com/) improves the resilience of its
video streaming service in the video below.
7.3. Security
People or agencies may attack an AI, e.g., to steal, modify, or destroy data, or to prevent the
system from functioning properly. Such attacks may be carried out by insiders (e.g., laid-off or
disgruntled employees) or by state-funded, high-end espionage. They may target individuals,
companies, or critical information infrastructures (CIIs) such as hospitals, railway systems,
payment systems, and power plants and networks. They can cause substantial financial loss,
disruption, and damage to reputation.
We will talk about two kinds of attacks on AI and discuss how to defend against such attacks.
Data poisoning
Data poisoning refers to a kind of attack in which the training data
(https://canvas.nus.edu.sg/courses/45147/pages/ss6-dot-technical-background#train) are
manipulated to affect the behaviour of an AI negatively.
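A toy sketch of data poisoning, under made-up assumptions (a nearest-centroid classifier on one-dimensional synthetic data): the attacker injects mislabelled points into the training set, dragging one class's centroid across the decision boundary.

```python
import numpy as np

# Synthetic data for illustration: two well-separated clusters.
rng = np.random.default_rng(1)
clean_x = np.concatenate([rng.normal(-2, 0.5, 50), rng.normal(2, 0.5, 50)])
clean_y = np.array([0] * 50 + [1] * 50)
test_x = np.concatenate([rng.normal(-2, 0.5, 50), rng.normal(2, 0.5, 50)])
test_y = clean_y

def train_centroids(x, y):
    return x[y == 0].mean(), x[y == 1].mean()

def accuracy(c0, c1, x, y):
    pred = (np.abs(x - c1) < np.abs(x - c0)).astype(int)
    return float((pred == y).mean())

c0, c1 = train_centroids(clean_x, clean_y)  # honest training

# Poisoning: inject points far to the right, mislabelled as class 0,
# dragging class 0's centroid across the decision boundary.
poison_x = np.concatenate([clean_x, np.full(60, 8.0)])
poison_y = np.concatenate([clean_y, np.zeros(60, dtype=int)])
p0, p1 = train_centroids(poison_x, poison_y)

clean_acc = accuracy(c0, c1, test_x, test_y)
poisoned_acc = accuracy(p0, p1, test_x, test_y)
assert poisoned_acc < clean_acc  # the poisoned model performs much worse
```

Real attacks (like the one on chatbots trained on user conversations) are subtler, but the mechanism is the same: corrupt the training data and the trained behaviour follows.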
References: [1] Amy Craft. “Microsoft shuts down AI chatbot after it turned into a Nazi”. CBS News, 25 Mar. 2016.
https://www.cbsnews.com/news/microsoft-shuts-down-ai-chatbot-after-it-turned-into-racist-nazi/ [2]
“Learning from Tay’s introduction”. Official Microsoft Blog, 25 Mar. 2016. https://blogs.microsoft.com/blog/2016/03/25/learning-tays-
Evasion
Sometimes it is possible to specially design an input, called an adversarial example, that tricks
an AI into producing wrong outputs. Some adversarial examples even look normal or innocent to
human eyes. Watch Prof. Yu present a few examples in the video below.
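A minimal sketch of crafting an adversarial example against a toy linear classifier, using the fast gradient sign method (FGSM). The weights and input here are made up for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# A hypothetical trained linear classifier (weights chosen for illustration).
w = np.array([1.0, -2.0, 3.0])
b = 0.0

x = np.array([0.5, -0.5, 0.5])   # original input, confidently classified as 1
assert sigmoid(w @ x + b) > 0.5

# FGSM: for true label y = 1, the gradient of the loss w.r.t. x is
# (p - 1) * w, whose sign is -sign(w). Stepping along it lowers the score
# while changing each feature by at most eps.
eps = 0.6
x_adv = x - eps * np.sign(w)

assert np.max(np.abs(x_adv - x)) <= eps + 1e-12  # small, bounded perturbation
assert sigmoid(w @ x_adv + b) < 0.5              # yet the prediction flips
```

Against deep image classifiers the same idea applies pixel-wise, which is why the perturbation can be invisible to human eyes while still flipping the output.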
Defence
Here are a few cyber defence measures that are specific to AI.
References: [1] Nicolas Papernot, et al. “Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks (https://doi.ieeecomputersociety.org/10.1109/SP.2016.41) ”. In 2016 IEEE Symposium on Security and Privacy (SP), pp. 582–597, 2016.
recognition (https://canvas.nus.edu.sg/courses/45147/pages/ss3-dot-capabilities-
vision#face) .
To subvert the entire system, one would then need to subvert all the constituent models
successfully.
The failure of one but not all of the constituent models may indicate an attack.
Ensemble learning also improves the accuracy of AI systems.
It can also be used to counter data scarcity, as we will see in §7.5.
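The ensemble defence described above can be sketched in a few lines: several independently trained models vote on the same input, and disagreement among them is flagged as a possible sign of attack. The predictions here are hypothetical stand-ins.

```python
# Majority vote over an ensemble; disagreement may indicate that one
# constituent model has failed or been fooled by an adversarial example.
def vote(predictions):
    decision = max(set(predictions), key=predictions.count)
    suspicious = len(set(predictions)) > 1
    return decision, suspicious

# All models agree: accept the answer.
assert vote(["cat", "cat", "cat"]) == ("cat", False)

# One model disagrees: still answer "cat", but flag the input for review.
assert vote(["cat", "dog", "cat"]) == ("cat", True)
```

To subvert this system outright, an attacker would need an input that fools all three models at once, which is harder than fooling any single one.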
7.4. Explainability
As we saw in §6.1 (https://canvas.nus.edu.sg/courses/45147/pages/ss6-dot-technical-background)
and §6.2 (https://canvas.nus.edu.sg/courses/45147/pages/ss6-dot-technical-background) , in
machine learning, models are not coded by humans but are chosen automatically by the
algorithmic process of training. In fact, the models chosen are often too large and too complicated
for humans to comprehend. As a result, outputs produced by current AI models typically do
not come with human-comprehensible explanations of why they were given. Such explanations
are important because they make it easier for humans to trust the AI; they are also useful
in diagnosing malfunctions and in detecting attacks. Additional effort is needed to make such
explanations available, and methods for doing so are referred to as Explainable AI (XAI).
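To give a taste of how some XAI methods work, here is a minimal occlusion-style sketch (simpler than LIME or LRP, but in the same spirit): hide each input feature in turn and measure how much the model's output changes. The model and numbers are made up for illustration.

```python
import numpy as np

# A hypothetical black-box scoring model (here secretly linear).
w = np.array([0.1, 2.0, -0.3])

def model_score(x):
    return float(w @ x)

x = np.array([1.0, 1.0, 1.0])
base = model_score(x)

# Occlusion: zero out each feature in turn; the output change measures
# how much that feature contributed to this particular prediction.
importance = []
for i in range(len(x)):
    occluded = x.copy()
    occluded[i] = 0.0
    importance.append(abs(model_score(occluded) - base))

# Feature 1 (the one with weight 2.0) matters most for this input.
assert importance.index(max(importance)) == 1
```

The heatmaps in the demos below follow the same logic: highlight the parts of the input that, when changed, most affect the output.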
Watch one demonstration from Prof. Yu of using LIME to explain an evasion attack in the video
below.
Try it out: see how LRP (layer-wise relevance propagation) can be used to present explanations by following the steps below.
1. Open the “XAILab Demo: Explainable VQA” page by the Fraunhofer Institute for
Telecommunications at https://lrpserver.hhi.fraunhofer.de/visual-question-answering/
(https://lrpserver.hhi.fraunhofer.de/visual-question-answering/) .
2. Click a picture in #1.
3. Type in a question in #2.
4. Press the enter key.
5. Wait for the answer to appear in #3.
6. The areas relevant in producing the answer are shown in #4.
7. Try again with different pictures and different questions.
8. Evaluate the quality of the outputs.
1. Open the “Explainable AI Demos: Image Classification” page by the Fraunhofer Institute for
Telecommunications at https://lrpserver.hhi.fraunhofer.de/image-classification
(https://lrpserver.hhi.fraunhofer.de/image-classification) .
2. At the bottom right-hand corner, select “Adversarial Attacks” in the drop-down list.
3. Choose one of the images on the right.
4. The page displays what the AI classifies the image to be, and a heatmap showing parts of the
image that contribute to this classification.
5. Compare the heatmap with where you would expect the AI to focus if it were to classify the
image correctly.
6. Try again with different images under “Adversarial Attacks”.
7. Compare the heatmaps with those for the images under “General Images”.
7.5. Data scarcity
For example, one can rotate, flip, crop, or adjust the contrast of images used for training object
recognition.
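This idea, often called data augmentation, can be sketched as follows: each labelled image yields several extra training examples via label-preserving transformations. The 3×3 array here is a stand-in for a real image.

```python
import numpy as np

image = np.arange(9).reshape(3, 3)  # a tiny stand-in for a real image

# One labelled example becomes five, at no extra data-collection cost.
augmented = [
    image,
    np.fliplr(image),            # horizontal flip
    np.flipud(image),            # vertical flip
    np.rot90(image),             # 90-degree rotation
    np.clip(image * 1.2, 0, 8),  # brightness/contrast tweak
]

assert len(augmented) == 5
assert all(a.shape == (3, 3) for a in augmented)
```

The transformations must not change the label: a flipped cat is still a cat, so the augmented examples remain valid training data.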
One can train another model to generate training data as follows. A generator model
generates data. The generated data and real data are mixed and fed into a discriminator
model, which identifies whether each input is real or generated. During training, the two
models improve in tandem, so that by the end the generator can generate
realistic data for training.
This generator–discriminator combination is known as a generative adversarial network
(GAN).
For example, at the beginning of the COVID-19 pandemic, GANs were used to
produce synthetic lung CT scans and X-ray images for training. Here are some
synthetic X-ray images generated by a GAN.
Image source: Rutwik Gulakala, Bernd Markert and Marcus Stoffel. “Generative adversarial network based data augmentation
for CNN based detection of Covid-19 (https://doi.org/10.1038/s41598-022-23692-x) ”. Scientific Reports, vol. 12, art. number
19186, 2022.
As a side remark, GANs are very useful in generating realistic images for other
purposes too.
One can re-train a trained model to adapt it to a different context.
This approach is known as transfer learning.
For example, one can reuse the linguistic features learned by translation models for
more popular languages to obtain models for less popular ones.
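A toy sketch of why transfer learning helps, on a single-parameter model with made-up numbers: fine-tuning from a parameter learned on a related, data-rich task converges faster on the small target dataset than training from scratch.

```python
# Small target task: three labelled points from y = 2.2 x.
xs, ys = [1.0, 2.0, 3.0], [2.2, 4.4, 6.6]

def fine_tune(w, steps=5, lr=0.05):
    """A few steps of gradient descent on mean squared error."""
    for _ in range(steps):
        grad = sum(2 * x * (w * x - y) for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

w_pretrained = 2.0  # learned on a big, related task where y was about 2 x
w_scratch = 0.0     # blank initialization

# With the same small budget of data and steps, starting from the
# pre-trained weight ends up much closer to the true value 2.2.
assert abs(fine_tune(w_pretrained) - 2.2) < abs(fine_tune(w_scratch) - 2.2)
```

Real transfer learning re-uses far richer structures (e.g., the linguistic features of a large translation model), but the economics are the same: the knowledge from the data-rich task substitutes for data the target task lacks.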
Use ensemble learning, in which a few smaller neural networks are used instead of one
big neural network.
The principle behind this approach is that bigger neural networks typically require more
training data to perform well.
For example, instead of using one model to recognize images of ice kacang, one can
combine a number of models that recognize shredded ice, sweet corn, red beans, a pink
colour, an inverted cone shape, etc., for which training may be easier
and more training data may be available.
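The ice kacang example can be sketched as a combination of component detectors. The detectors below are hypothetical stand-ins that just check tags describing a photo; in practice each would be a small model trained on its own, more plentiful data.

```python
# Combine easy-to-train component detectors into one harder recognizer.
def looks_like_ice_kacang(photo_tags, detectors, threshold=4):
    votes = sum(1 for detect in detectors if detect(photo_tags))
    return votes >= threshold  # enough component features present

detectors = [
    lambda tags: "shredded ice" in tags,
    lambda tags: "sweet corn" in tags,
    lambda tags: "red beans" in tags,
    lambda tags: "pink" in tags,
    lambda tags: "inverted cone" in tags,
]

full = {"shredded ice", "sweet corn", "red beans", "pink", "inverted cone"}
assert looks_like_ice_kacang(full, detectors)
assert not looks_like_ice_kacang({"pink", "red beans"}, detectors)
```

Each component detector needs far less (and far more readily available) training data than a monolithic ice kacang recognizer would.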
While there are ways to make some models work with small amounts of data, it is still
important to find more high-quality data to obtain the best results.
7.6. Reflection
We saw that, although AI can be very useful, it also brings many challenges and issues.
A number of solutions are available to counter the existing problems, but these problems are
far from completely solved, and new problems will likely arise with the rapid advancement
of AI.
As a user, how worried are you about AI giving you wrong information?
What measures would you take personally to protect yourself against the negative effects of
AI?
Do you think that AI will do more good than bad to people?