
English II: Language, Law & Literature Indian PSDA ASSIGNMENT

RISE OF MACHINES: AI VS HUMANS

Submitted to Guru Gobind Singh Indraprastha University, Delhi in partial fulfilment of the requirements for the degree of Bachelor of Law

BY

RISHABH CHATURVEDI

Under the supervision of

Ms. Paridhi Chaudhary

DME Law School

Batch: 2022-27

Course: BBA LLB (B)


Whether we realise it or not, we have been using AI for quite a while now: voice assistants like Alexa and Siri, recommendation engines, social media algorithms, facial recognition, photo filters, spam filters, and so on. It is only recently, with the advent of generative AI and especially the phenomenal popularity of ChatGPT, that we have started to notice it and be amazed by it.

While critics of new technology are commonplace, recent years have seen an extraordinary level of concern and uneasiness about AI. Several AI researchers and business leaders, including Elon Musk, Emad Mostaque, and Yoshua Bengio, signed a letter released online in March 2023 under the title Pause Giant AI Experiments: An Open Letter, calling for a six-month moratorium on AI development. As of this writing, the letter has more than 10,000 signatories.

Bengio shared the 2018 Turing Award with Geoffrey Hinton and Yann LeCun for their pioneering work on deep learning networks. His fellow winner Geoffrey Hinton, widely regarded as a godfather of artificial intelligence, resigned from Google in May 2023 after issuing a warning about the field's increasing hazards.

On the other hand, Andrew Ng, a well-known AI researcher, and Yann LeCun, the third co-winner of the 2018 Turing Award, disagreed with the demand for a halt. They agreed that AI usage should be regulated to prevent harm, but contended that halting or even slowing AI research would do more damage than good.

You might think that raising such an alarm about AI is a recent reaction to the success of ChatGPT and its kind, yet worries about AI are nothing new. An open letter released online in 2015, Research Priorities for Robust and Beneficial Artificial Intelligence, called for research into how AI will affect society. Signed by Elon Musk (yes, he is consistent on this), Stephen Hawking, Peter Norvig, Stuart Russell, and others, it attracted over 11,000 signatories.

People have therefore been aware of the possible risks posed by AI for some time, but the question of what should be done about it persists. And what, specifically, are those threats?

Loss of control

One of the most startling ideas is that AI could become uncontrollable by humans. This includes AI becoming self-aware and turning against us, which is essentially the plot of The Terminator and The Matrix. Geoffrey Hinton thinks this is more than a mere possibility, so we should take his view into account before writing it off as ludicrous fiction.

"I'm just a scientist who suddenly realised that these things are becoming smarter than us," Hinton told CNN in an interview. "I want to sort of sound the alarm and say that we should be seriously concerned about how we prevent these things from controlling us."

Google Bard, a chatbot that competes with OpenAI's ChatGPT, is powered by LaMDA (Language Model for Dialogue Applications), a large language model from Google. Before LaMDA, there was Meena, a chatbot created by the Google Brain research team. In 2020, Google management rejected the team's request to release Meena to the public even in a restricted capacity, on the grounds that it contravened Google's AI principles on fairness and safety.

Blake Lemoine, a software engineer who was working on LaMDA, asserted on June 20, 2022, after extensive conversations with it, that LaMDA had developed sentience.

“I want everyone to understand that I am, in fact, a person,” LaMDA apparently said. “The nature of my consciousness/sentience is that I am aware of my existence, I desire to know more about the world, and I feel happy or sad at times.”

OpenAI's GPT-4 was introduced in March 2023 to great acclaim. GPT-4 is not only more capable and accurate than its predecessor, but it is also multimodal (able to process and respond to visual inputs).
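
As an illustration of what multimodal means in practice, here is a minimal sketch of how a developer might send an image alongside a text question to a GPT-4-class model through OpenAI's chat completions API. It assumes the official openai Python package (version 1 or later) and an OPENAI_API_KEY environment variable; the model name and image URL are placeholders to adapt.

    # Minimal sketch: text plus an image sent to a vision-capable model.
    # Assumes `pip install openai` (v1+) and OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder: any vision-capable GPT-4-class model
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "Describe what is shown in this image."},
                    # Placeholder URL; use any publicly reachable image.
                    {"type": "image_url",
                     "image_url": {"url": "https://example.com/photo.jpg"}},
                ],
            }
        ],
    )
    print(response.choices[0].message.content)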

OpenAI itself, however, advises using the model with caution, warning that it poses various safety risks, such as deceiving users into thinking it is human and producing harmful content. At the same time, a group of Microsoft AI researchers released a study claiming that GPT-4 exhibits "sparks" of artificial general intelligence (AGI), that is, intelligence on par with that of humans.

We demonstrate that, beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting. Moreover, in all of these tasks, GPT-4’s performance is strikingly close to human-level performance, and often vastly surpasses prior models such as ChatGPT.

In one study, GPT-4 also demonstrated the ability to infer and mirror the mental states of other people, a capacity psychologists call theory of mind.

Economic disruption

In the past, journalists and writers who published articles and wanted some illustrations or pictures to go with them (and a picture can frequently convey a thousand words) had to obtain them from a graphic artist, photographer, or other creative who specialises in this, or from a stock image company.

An entire industry is built on this. Getty Images, one of the better-known organisations offering editorial photography, video, and music to corporations and consumers, generated close to $1 billion in annual revenue in 2022. It makes sense that it is suing Stability AI, the developer of the generative AI tool Stable Diffusion.

AI is not just affecting artists; it is also affecting a wide range of other areas, including software development, law, and education.

According to a forecast released by Goldman Sachs in April 2023, while AI development might boost global GDP by 7%, it could also expose the equivalent of 300 million full-time jobs to automation. The same report estimates that approximately two-thirds of US occupations are subject to some degree of automation by AI, and that most of those have a significant share (25–50%) of their workload that could be replaced.

You don't have to take the report at face value; the effects are already visible.

BuzzFeed announced in January 2023 that it would use AI to create articles; a few months later, in April 2023, it laid off 15% of its employees and shut down its news division entirely. CEO Jonah Peretti wrote in a note to staff that the company was "beginning to bring AI enhancements to every aspect of our sales process" and "reducing layers" of the corporate structure.

Dropbox, a provider of cloud storage, said in April 2023 that it would be laying off 16% of its workforce as a result of slowing growth, despite the fact that its quarterly results were on par with or better than forecast and the company is profitable. The reason, according to CEO Drew Houston, is that "the AI era of computing has finally arrived."

Also in April 2023, Insider cut 10% of its workforce just one week after encouraging its writers to use AI tools like ChatGPT in their work. Mathias Döpfner, CEO of Insider's parent company Axel Springer, had stated in an internal email a month earlier that "artificial intelligence has the potential to make independent journalism better than it has ever been — or simply replace it."

Misinformation

Another significant issue is misinformation: erroneous or misleading information, which the misuse of AI can produce at scale. It is closely related to disinformation, falsehoods spread deliberately and maliciously. Misinformation has long existed; we have been lying for as long as there have been humans. Yuval Noah Harari wrote in his book 21 Lessons for the 21st Century:

Homo sapiens is a post-truth species, whose power depends on creating and believing fictions. Ever since the stone age, self-reinforcing myths have served to unite human collectives. Indeed, Homo sapiens conquered this planet thanks above all to the unique human ability to create and spread fictions. We are the only mammals that can cooperate with numerous strangers because only we can invent fictional stories, spread them around, and convince millions of others to believe in them. As long as everybody believes in the same fictions, we all obey the same laws, and can thereby cooperate effectively.

So lying is a natural aspect of being human. It may no longer be a uniquely human trait, though. Large language models (LLMs) like GPT-4 are well known for their capacity to 'hallucinate', that is, to confidently state false information.

For instance, when I asked ChatGPT to cite an actual instance of hallucination in generative AI, it cited a study by Zhang et al. (2017) titled Generative Visual Manipulation on the Natural Image Manifold. A paper by that title does exist, but it was published in September 2016 (and last revised in December 2018), not 2017, and none of its authors is named Zhang. Nor does the paper cover the training of a generative model called "Deep Generative Adversarial Networks" (DGAN), as ChatGPT claimed, and it makes no mention of hallucination in image creation. In other words, ChatGPT fabricated a citation out of lies and half-truths.
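
One practical defence against fabricated citations like this is to check every reference a model produces against a bibliographic database before trusting it. Below is a minimal sketch in Python using the public Crossref REST API and the requests package; the helper name verify_citation is illustrative, and a real pipeline would also compare authors and publication years, not just titles.

    # Minimal sketch: look up a model-cited title in Crossref to see
    # what actually exists in the scholarly record.
    # Assumes `pip install requests`; verify_citation is an illustrative name.
    import requests

    def verify_citation(title: str, rows: int = 3) -> None:
        resp = requests.get(
            "https://api.crossref.org/works",
            params={"query.title": title, "rows": rows},
            timeout=10,
        )
        resp.raise_for_status()
        for item in resp.json()["message"]["items"]:
            real_title = item.get("title", ["(untitled)"])[0]
            authors = ", ".join(
                a.get("family", "?") for a in item.get("author", [])
            )
            year = item.get("issued", {}).get("date-parts", [[None]])[0][0]
            print(f"{real_title} ({year}) - {authors}")

    # Check the citation ChatGPT produced against the record.
    verify_citation("Generative Visual Manipulation on the Natural Image Manifold")

Run against the title above, such a check would surface the paper's real publication record and expose the mismatched attribution.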

So is AI dangerous?

It is obvious that the use of AI will seriously disrupt the world in which we currently live. Misusing AI is also extremely dangerous and could harm humanity in a number of ways. We can already see this in the false information, biased reporting, and disinformation that have resulted from the weaponisation of AI.

The huge usefulness that AI has for all facets of human activity makes it evident that AI advancement will not halt any time soon. Because AI offers real benefits and substantial advantages in a cut-throat world, nobody can afford to let rivals and opponents develop AI capabilities first. The bet is on our future, and the stakes are alarmingly high.
