October 4, 2023
Megan Shipley
Engl 110
Assignment Two
The starting point for this essay, to me, seems to be that each of these authors agrees that AI, or artificial intelligence, is "here to stay," so we are going to need to accept this fact and adapt to this kind of intelligence interfering with, or aiding, our day-to-day life, whichever perspective you may take. I would argue that by putting such a large focus on this in a college-level writing course, it is indeed already part of our educational existence and therefore a topic that needs to be addressed. As I stated in my first essay, my father works for a company that has been developing technology in this area, and it surely sparked the interest of my brother and me in making our schoolwork easier. In fact, is that not the point of machines? Humans have invented them to make life easier. However, I would say the consensus among these authors is that just because things are easier does not necessarily mean they are better. In this case, AI and LLMs, specifically ChatGPT, can in theory make it easier to generate text, but is this text "better"?
Wolfram describes ChatGPT as based on probability, but he himself points out that always picking the highest-probability words can lead to writing that is less exciting and essays that lack creativity (Wolfram). Chomsky et al. refer to ChatGPT and other AI programs as "marvels of machine learning" (Chomsky et al.). They describe these programs as foreshadowing "artificial general intelligence," but they are quick to point out that the differences between these programs and how humans use language leave much to be desired. Murati talks about a writer's
unique voice and how machines have a difficult time portraying this. She goes on to say even
further problems occur when we consider human thinking/thoughts. She explains that all
languages are learned but points out that the “mechanics of the mind” is still a mystery. How do
we train a machine to use and convey language? And whose language do we tell it to use? She
goes on to say that "Humans also use their senses as a part of language," and we all know it is impossible for a machine to smell, taste, or touch. It is equally difficult for a machine to convey these sensory experiences.
In this way, each of the authors agrees that we will need to rely on programmers and developers to add the human element into the AI experience. This includes values and emotions that are not universal but unique to each writer, expressed through their values and their voice, something that we all know can never be duplicated by a machine, no matter how advanced the technology becomes.
The potential of artificial intelligence is thrilling and colossal. These machines have quickly figured out how to appreciate and comprehend language. "They raise new
questions about how human beings relate to machines and how that symbiosis of communication
will evolve as the future rushes toward us" (Murati). There are many people who have writing-based careers. Their main purpose in life is to write articles and share information. They all have their own words, thoughts, and beliefs that their writing is based around. The main challenge for artificial intelligence is figuring out how these machines can understand information like the human brain can and how they can apply that information to produce "human-grade responses too" (Murati). When you read the poem used in the article, it brings imagery to your mind, and you can tell that there are emotions and feelings portrayed through the words. You would assume it is the original poem written by Pablo Neruda word for word, but it is a version of the poem written by GPT-3 that includes just bits and pieces of the original words.
Essentially the poem is made to seem like it is written by Pablo Neruda, but GPT-3 has studied
patterns of language and uses its knowledge to create a version of the poem that looks like
Neruda could have written it. This style of AI is growing rapidly, and the people who oversee all that is happening within artificial intelligence and GPT-3 have become authors and editors of a new language (Murati).
Language is important all around the world; it is what keeps us connected with one another and with our surroundings. Language is used for communication, knowledge transfer, cultural identity, economic growth, innovation, access to information, political discourse, empathy and understanding, law and governance, and social integration. A vast issue with language within artificial intelligence is this: if we cannot write exact rules of language, how can we teach it to a machine? People have been
searching for the answers that are now starting to appear (Murati). The fuel of language is thought. Thoughts constantly occur in humans; every second of every day you are generating thoughts in your own mind. "Our thoughts manifest through action and emotion but are communicated through language" (Murati). The first thing you are taught as a child is language: how to talk. It takes years upon years to become completely conscious of your thoughts and how to communicate them. In high school I was required to take a language class. I chose Spanish, and we spent two trimesters learning just the very basics. Learning this new language required a lot of repetition, repeating the same phrase or word until it stuck in your mind. "A child learns by experiencing patterns, learning what is most likely to make sense in a new context" (Murati).
It takes a lot of repetition to educate these machines on many different topics, but not every task is equally hard for them. If I were to ask Siri on my phone what three plus three is, she would very quickly come back with the answer six. "There is little creativity in arithmetic, so machines make excellent calculators, accountants, and modelers" (Murati). That makes complete sense when you compare a machine's ability to learn facts using numbers with its ability to learn language and thoughts. GPT-3 has learned trillions of words from numerous different sources on the internet. Text is generated by starting with one word and continuing to the next. As words keep getting added on, the machine looks at all the words it has so far and uses them to come up with the next one. "The
origin of predicting what word comes next has roots in Russian literature. Scan the letters of this
text and you will notice that three consonants rarely appear in a row. Four consonants, hardly
ever" (Murati). There are different probabilities and patterns that fuel the information that artificial intelligence produces, just as there is rhyme and reason behind the way humans use language.
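The word-by-word generation described above can be sketched in a toy form. Everything in this sketch is invented for illustration: the probability table is made up, and real GPT-3 computes its probabilities with a neural network over a huge vocabulary, conditioning on the whole preceding text rather than just the last word. The sketch does, however, show Wolfram's point: always taking the single most likely word gives the same flat output every time, while weighted random choice keeps the output varied.

```python
import random

# Invented toy table: given the previous word, the probability of each
# possible next word. (Illustration only; not real GPT-3 data.)
NEXT_WORD_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "kitchen": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"barked": 0.7, "slept": 0.3},
    "kitchen": {"floor": 1.0},
}

def next_word(prev, greedy=False):
    """Pick the next word from the table, or None if there is no entry."""
    probs = NEXT_WORD_PROBS.get(prev)
    if not probs:
        return None
    if greedy:
        # Always take the single most likely word: "flat" writing.
        return max(probs, key=probs.get)
    # Weighted random choice: higher-probability words are favored
    # but not guaranteed, so the output varies between runs.
    words = list(probs)
    return random.choices(words, weights=[probs[w] for w in words])[0]

def generate(start, greedy=False):
    """Start with one word and keep appending the predicted next word."""
    text = [start]
    while (word := next_word(text[-1], greedy=greedy)) is not None:
        text.append(word)
    return " ".join(text)

print(generate("the", greedy=True))  # always "the cat sat"
print(generate("the"))               # sampled: varies from run to run
```

A real model looks at the entire text so far, not just one previous word; closing that gap is exactly what the transformer architecture discussed below was designed to do well.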
"As the context becomes more detailed - for instance, walking into a kitchen covered in mud - that list shrinks further. Our minds develop this sorting naturally through experiences, but to train GPT-3's mind, the system must review hundreds of billions of different data points and work out the patterns among them" (Murati). This passage from the article explains how these machines must learn all this information. For example, suppose you walk into a kitchen covered in mud: if you walk to the left, you slip and fall; if you walk to the right, there is not as much mud over there, so you don't. Those are two potential outcomes of the situation, but what if you walked straight ahead, or didn't move at all and stayed where you were? Those are also possible outcomes. There could be a million different ways this scenario plays out. We as humans can sort through them by physical experience and can choose what we want to do in that moment. In order for these machines to produce a valuable answer, they must be taught many of the possible scenarios that could happen. That is just one question; now imagine the number of questions GPT-3 has to answer on a daily basis, and how many
scenarios there are. Since the recent surge of artificial intelligence, scientists have been working very hard on ways to make this teaching easier and their time more productive. They have come up with an NLP model that can work all under one system instead of having to deal with the mathematics and language separately. It can also now handle billions of inputs and outputs within milliseconds (Murati).
Transformer machine learning models have helped the production of ChatGPT. In 2017, researchers were working with a network called an LSTM. "This finding hinted that a neural network with good enough next character or word prediction capabilities should have developed an understanding of language" (Murati). After this finding, the transformer was introduced and put into play. Transformers proved very useful because of their rapid next-word prediction. "This led to the creation of the first GPT: the transformer language model that was pretrained on a large corpus of text, which achieved excellent performance on every task using only a little bit of finetuning" (Murati). With GPT as the base model, researchers kept improving the next-word prediction and language abilities to launch upgraded versions, leading to GPT-3.
I see artificial intelligence as life-changing. It is used all around me, from social media to my dad's job, to learning about it in school, to using it online to make online orders easier, and it has many other benefits. I have used artificial intelligence to make old yearbook photos of myself as part of a trend on TikTok. I downloaded an app called "EPIK," and I was required to upload 8 photos of strictly just my face where it could get a good look at my facial features. I had to pay $5.99 to use it, but I was curious how it could take me as a 20-year-old girl in 2023 and completely transform me into looking the way my mom did when she graduated high school many years ago. It took a few hours to process, but when I received the photos back, I was blown away. I was so interested in how it could make me look so different that I kept downloading different artificial intelligence apps and trying them all out. Another app I used created professional headshots, turned me into a police officer, a bride, a professional soccer player, and even made me look pregnant. There were a ton more options on what I could turn myself into; I was so fascinated. I have also loved mixing different songs together to create one. I was never very good at it, but I had an app on my family computer at home where I would have to manually edit the songs myself. Now I can ask an artificial intelligence, and it will combine and create the song for me.
In conclusion, it is obvious from the amount of attention that large language models such as ChatGPT are receiving that this is something that is "here to stay." There are many differing opinions on whether this is a service or a disservice to our daily lives, most markedly our education system, as this is where much of the use, and misuse, of this technology is happening. People outside of education may not realize the impact this is having, but in academia this is certainly the most pressing and debated topic currently.
Since this intelligence is indeed artificial, it will likely always be missing the human element, and this seems to be an area where people on all sides of this debate agree. Even proponents of this technology, such as Wolfram, agree that while ChatGPT works on the probability of words that are a likely fit, those words will never match the words that come from humans. He himself says that ChatGPT essays are often "flat." This is where he and Chomsky agree that even if ChatGPT is here to stay, there is much more to be done on the human side of this technology. It will be up to the human developers and programmers to produce large language models that more closely resemble human thoughts, feelings, and values. Or it will be up to humans to choose not to engage with these types of technology in order to keep life a little more real. No one can predict the future, so we will all have to agree to just wait and see where this takes us.
Chomsky, Noam, Ian Roberts, and Jeffrey Watumull. "Noam Chomsky: The False Promise of ChatGPT." The New York Times, 8 Mar. 2023, www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-
Murati, Ermira. "Language & Coding Creativity." Daedalus, vol. 151, no. 2, 2022, pp. 156-167, doi: https://doi.org/10.1162/daed_a_01907
Wolfram, Stephen. "What Is ChatGPT Doing ... and Why Does It Work?" Stephen Wolfram Writings, Feb. 2023, writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/
After receiving feedback from my first assignment, I knew I had a lot to improve. I needed to add in-text citations more throughout the paper, and I feel I did a better job at that. I also feel I had a much better understanding of what was being said in the articles now that I had already done this once with assignment one. I worked harder to better convey what I am trying to point out using direct quotes from the articles.