
Student no. 465691

CONSULTATION RESPONSE TO THE REGULATOR’S CONSULTATION ON THE
REGULATION OF CHATGPT

Introduction

As a digital rights company that believes the digital sphere must be a place where
the fundamental principles of freedom of expression and privacy are upheld without
worry, we welcome this opportunity to provide input on the potential regulation of
ChatGPT. As the use of this application becomes increasingly attractive to people
from all walks of life, it is very important that regulatory frameworks address the key
concerns highlighted below, in order to ensure responsible deployment.

Executive Summary

i. An explanation of what ChatGPT is and the different ways in which it can
be used in different settings.
ii. Data protection/privacy concerns arising from ChatGPT usage: its heavy
reliance on past personal data, which may be sensitive, and the
dangerous possibility of its misuse, exemplified by the Cambridge
Analytica scandal.
iii. How ChatGPT's complexity makes it harder for us to identify and
address its bias issues, how the underrepresentation of women in the
sector leads to discriminatory outcomes, and how certain scenarios are
better off without ChatGPT, such as hospitals, where empathy may be
needed.
iv. Copyright concerns for OpenAI and its users, especially in academic
contexts.

What is “ChatGPT”?

v. ChatGPT, which is short for Chat Generative Pre-Trained Transformer, is
an artificial intelligence application made by the company OpenAI. It has
various uses, but its ability to generate human-like responses makes it
particularly well suited to chatbots in any setting, whether for companies
or the student classroom (Huallpa, 2023).

vi. Some of its features include:

• Answering questions
• Debugging code
• Multilingual support
• Self-improvement abilities

Data protection/privacy concerns

vii. Privacy is a major concern that has to be taken into account when
considering the usage of ChatGPT. In order to form its algorithms, AI
relies on past personal data to identify trends, which it may gain from
past conversations; information such as hospital records, bank
transactions and biometrics could all be stored by AI. The inability to
guard this information safely can lead to its misuse for a variety of
reasons. For example, international applications used across different
countries may not be subject to the same data protection laws, which
can result in data leaks and unauthorised access (Garcia, 2024).

viii. There is also a significant issue regarding consent to the information
being collected. An instance of this occurred when Facebook permitted
Cambridge Analytica to strategically reach voters in 2016 by accessing the
personal data of 80 million Facebook users without their explicit consent
(Andreotta, Kirkham and Rizzi, 2021). While some might perceive this as
relatively benign, albeit manipulative, given its primary use for advertising,
it prompts consideration of the potential consequences if personal data
were in the hands of malicious actors. As a result, many users will find it
difficult to trust the application.

ix. Following on from the issue of trust, transparency issues arise with
ChatGPT because of its “blind spots”. AI has been able to solve many
complex problems in the world precisely because AI is itself very
complex; companies' reluctance to disclose information about their
algorithms in order to maintain competitiveness makes them
exceedingly difficult to fully comprehend, even for experts.
Consequently, addressing dataset biases becomes nearly impossible
(Mirghaderi, Sziron and Hildt, 2023).

Ethical concerns

x. ChatGPT primarily operates based on the data it has accumulated from
past interactions. If this data contains numerous inaccuracies or biases
that ChatGPT simply repeats, the application itself becomes flawed. This
can lead to the dissemination of misleading information and responses,
potentially resulting in harmful consequences in everyday situations.
xi. Women account for only 22% of AI professionals. Given this lack of
representation in the sector, gender bias is bound to be embedded in AI;
for instance, corporations intentionally employ default female voices, as
with Alexa and Siri, for stereotypical reasons (Gupta and Mishra, 2022).
Furthermore, because AI algorithm development depends heavily on
training data shaped by society's historical gender biases, that data
ought to be selected with much more care. As a result, AI could be said
to be unfair and discriminatory (Nadeem, Abedin and Marjanovic, 2020),
and information given during conversations with ChatGPT may therefore
contain bias and discrimination.
xii. The introduction of ChatGPT in workplace settings prompts concerns due
to the specific dynamics of certain environments. While artificial
intelligence is sometimes viewed as a replacement for human labor, it
lacks emotional understanding. Environments like hospitals, which rely
heavily on empathy, cannot solely rely on ChatGPT's recommendations
(Wang et al., 2023).
xiii. Copyright concerns represent another significant challenge with ChatGPT.
Its capacity to generate text resembling original content, potentially drawn
from previous academic reports, complicates the attribution of authorship.
This raises legal implications not only for ChatGPT itself but also for users,
particularly those employing the application for academic endeavors (Wu,
Duan and Ni, 2023). In 2023, OpenAI faced a lawsuit in San Francisco
when two authors based in the US alleged that OpenAI had improperly
utilized their academic works to develop their application. It was also
claimed that OpenAI extracted information from thousands of books
without authorization, violating the authors' copyrights (Brittain, 2023).

Reference list

Andreotta, A.J., Kirkham, N. and Rizzi, M. (2021). AI, big data, and the future of consent. AI
& SOCIETY, 37. doi:https://doi.org/10.1007/s00146-021-01262-5.

Brittain, B. (2023). Lawsuit says OpenAI violated US authors’ copyrights to train AI chatbot.
Reuters. [online] 29 Jun. Available at: https://www.reuters.com/legal/lawsuit-says-openai-
violated-us-authors-copyrights-train-ai-chatbot-2023-06-29/.

Garcia, J. (2024). Privacy Concerns in International AI Applications. International Journal of
Transcontinental Discoveries, ISSN: 3006-628X, [online] 5(1). Available at:
https://internationaljournals.org/index.php/ijtd/article/view/29/28 [Accessed 2 Mar. 2024].

Goyal, A. and Aneja, R. (2020). Artificial intelligence and income inequality: Do
technological changes and worker’s position matter? Journal of Public Affairs, 20(4).
doi:https://doi.org/10.1002/pa.2326.

Gupta, A. and Mishra, M. (2022). Ethical Concerns While Using Artificial Intelligence in
Recruitment of Employees. Business Ethics and Leadership, 6(2), pp.6–11.
doi:https://doi.org/10.21272/bel.6(2).6-11.2022.

Huallpa, J.J. (2023). Exploring the ethical considerations of using Chat GPT in university
education. Periodicals of Engineering and Natural Sciences, [online] 11(4), pp.105–115.
doi:https://doi.org/10.21533/pen.v11i4.3770.

Mirghaderi, L., Sziron, M. and Hildt, E. (2023). Ethics and Transparency Issues in Digital
Platforms: An Overview. AI, 4(4), pp.831–844. doi:https://doi.org/10.3390/ai4040042.

Nadeem, A., Abedin, B. and Marjanovic, O. (2020). Gender Bias in AI: A Review of
Contributing Factors and Mitigating Strategies. [online] Available at:
https://aisel.aisnet.org/cgi/viewcontent.cgi?article=1048&context=acis2020.

Wang, C., Liu, S., Yang, H., Guo, J., Wu, Y. and Liu, J. (2023). Ethical Considerations of
Using ChatGPT in Health Care. Journal of Medical Internet Research, [online] 25(1),
p.e48009. doi:https://doi.org/10.2196/48009.

Wu, X., Duan, R. and Ni, J. (2023). Unveiling Security, Privacy, and Ethical Concerns of
ChatGPT. Journal of Information and Intelligence.
doi:https://doi.org/10.1016/j.jiixd.2023.10.007.
