
ICT500

Emerging Technologies
Workshop 5:
Ethical and social issues in the emerging
technologies

Submitted By: Gagandeep
Solution To Activity 1

 These are the ethical concerns surrounding the use of AI in education:
 AI bias: If AI systems are trained on skewed data, they
may be biased. This can result in AI programmes that
discriminate against particular student groups.
 AI privacy: AI systems collect large quantities of
students' performance, behaviour, and personal data.
This information must be protected from
unauthorised access and use.
 AI agency: AI systems have the potential to automate
teacher-intensive duties such as marking papers and
giving feedback. This might result in less classroom
control for teachers and less learning autonomy for
students.
 AI transparency: AI systems are frequently intricate
and opaque, making it difficult to understand how
they reach their decisions. This might cause teachers
and students to lose faith in AI systems.
Solution To Activity 2

 An AI-powered health system creates a cancer risk
prediction model. Although the model is very
accurate, it might be discriminatory or violate
people's privacy. The healthcare system takes action
to lessen these dangers, including establishing a
patient education programme and an independent
oversight board. The health system concludes that
using the model has more advantages than
disadvantages. Here is a perspective on the moral
dilemma presented in this case:
 The AI cancer risk prediction model is a tool. Like any
other tool, it can be used for good or for bad. The
healthcare system must apply the model in an ethical
and patient-centred manner. By adopting measures
to reduce bias and safeguard patient privacy, the
health system can help ensure that the model is used
for good.
Solution To Activity 3

 Here are a few succinct examples of mitigation technologies
to safeguard personal data from new technology:
 Encryption: Protects data by scrambling it so that only
holders of the key can read it.
 Biometric authentication: Uses physical traits, such as
fingerprints, to confirm a user's identity.
 Data minimisation: Gathers only the minimum amount of
personal information needed.
 Pseudonymisation: Replaces personal identifiers with
artificial ones (pseudonyms).
 Privacy-preserving analytics: Examines data without
identifying individuals.
 In addition to these technical safeguards, individuals can
protect their personal data by reading privacy policies,
installing programs only from trusted sources, keeping
software up to date, using strong passwords, and being
careful about the data they post online.
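Two of the techniques above, pseudonymisation and data minimisation, can be combined in a few lines of code. The sketch below is purely illustrative (the field names, salt value, and `pseudonymize` helper are invented for this example, not taken from any real system): it replaces a student's name with a salted hash and keeps only the fields an analysis actually needs.

```python
import hashlib

def pseudonymize(record, salt="example-salt"):
    # Pseudonymisation: replace the direct identifier (the name)
    # with a salted hash that cannot be trivially reversed.
    pseudonym = hashlib.sha256((salt + record["name"]).encode()).hexdigest()[:12]
    # Data minimisation: retain only the field the analysis needs,
    # dropping the name and email entirely.
    return {"id": pseudonym, "score": record["score"]}

student = {"name": "Alice", "email": "alice@example.com", "score": 87}
print(pseudonymize(student))
```

Note that pseudonymised data is still personal data if the mapping back to identities can be recovered (e.g. if the salt is known), which is why it is usually combined with access controls and encryption rather than used alone.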
Solution To Activity 4
 Transparency and accountability: AI systems should be
transparent and accountable, so that individuals can
understand how they work and make decisions.
 Individual control over data: Individuals should have
control over their data, including the right to access,
correct, and delete it.
 Investment in AI safety: Research on AI safety is
essential to ensure that AI systems are beneficial to
humanity.
 Awareness of bias: Because AI systems have the
potential to be biased, it is crucial to be aware of
this possibility and take precautions against it.
 When considering whether we can trust AI to
represent us in legal or accounting matters in the
future, it is crucial to remember that AI systems are
still under development. They are fallible and prone
to errors. However, AI systems could be highly useful
in these roles: they can access and process enormous
amounts of data quickly and effectively, and they can
spot patterns and trends that people might miss.
