
Literature Review: "The AI Debate, and What We Can Learn From It"

Elliot Crook

NUAMES

English 1010: Introduction to Writing

Laura J. Southwick

October 28, 2023



Literature Review: "The AI Debate, and What We Can Learn From It"

Not long ago, a large area of discussion opened when some of the first generative AI programs were made publicly available. This single event prompted a flood of opinions and articles from a wide range of people. Some authors worried about AI being misused in the future, or even creating some strange dystopia. Many were elated at the possibility of AI-enhanced civilizations that could achieve greater heights than ever before. Some were simply trying to mediate and prepare for the inevitable change soon to come. There are many views, and I want to explore quite a few in this essay. I want to lay out a relatively accurate landscape of the current discussion and bring together what we can learn from this debate. We'll mostly focus on the authors, their theses, data points, and their proposed solutions.

Good Uses of AI

One of the most prominent views of this topic finds AI to be an absolute game-changer that will allow humanity to achieve greater heights than ever before. The best example of AI for the purposes of this essay is ChatGPT. Due to its accessibility, it is becoming a very valuable tool for education. Angela Duckworth, in her article "Op-Ed: Don't Ban Chatbots in Classrooms – Use Them to Change How We Teach," describes ChatGPT as a great tool for both teachers and students. Students can use it to proofread essays and get feedback on already-written material. They could also use it to generate ideas to include in an essay, instead of having it generate an entire essay for them.



Another contender in this position is Sal Khan with his TED Talk "How AI Could Save (Not Destroy) Education," where he uses the AI Khanmigo as an example. This AI is different in that it has been specifically designed for education, helping both students and teachers. Students who interact with Khanmigo find that it can proofread essays and give feedback like before, but it can also walk them through complex math and programming problems without ever giving away the answer completely. It is more of a tool to assist the student in arriving at the correct answer than a tool for providing the correct answer. It assists teachers in a similar way, offering guidance on the best ways to instruct students so that the material stays with them long term.

Chen also takes a side with this in her article "AI Will Transform Teaching and Learning. Let's Get It Right," though to a less extreme extent than these two authors. She mentions many of the same points, but a few new ones include using AI to simulate a student for teachers, constantly providing the newest material to teach, shifting focus toward students becoming the essay's architects, enabling students to learn without judgment, and using AI to create tests and review the responses to determine a student's expertise within a field much more accurately. These are all great suggestions to think about the next time you discuss AI.

These three articles combined create a compelling case for AI's potential use in a classroom, but it is not the only practical use of AI, as Darrell West and John Allen demonstrate in their article "How Artificial Intelligence Is Transforming the World." They take a more neutral stance, simply stating some of the innovative applications, but their propositions include using AI for educated financial decisions, national security through camera footage, identifying potential problems in medical imagery, predicting whether people are likely to become repeat offenders, self-driving cars, and creating smart cities with optimized service delivery. It is important to remember that although much of the current discussion centers on AI's educational potential, it can do many more things than just teaching and assisting.

Potential Problems

Unfortunately, no piece of technology ever arrives without its complications. This is something even Duckworth and Khan briefly take note of, describing ChatGPT as the "ultimate cheating tool." Others, though, like Vidhi Chugh in her article "Ethics in Generative AI," take much more time pointing out the ways that AI could be used for nefarious purposes. Her article highlights the issues of AI being used to generate harmful content for malicious individuals to spread, to create deepfakes, and to spread false information. Then there are other problems, like whether an AI breaches copyright if it includes copyrighted material in its training data, or the repercussions of it containing personal information in that data. Even a technology like AI has its repercussions.

These are things that Neil Selwyn also describes in their article "The Future of AI and Education: Some Cautionary Notes." Selwyn takes a more neutral stance, but they do lay out potential problems with AI and how to fix them. They list many of the same points as Chugh (in fact, West, Allen, and Chen also touch on them briefly), but Selwyn includes new potential problems, such as AI programs being biased because their training data is biased. There is also an economic side to this argument. Chugh states in her article that AI has the potential to replace some existing jobs that could be done by what she describes as "lesser humans," that is, jobs that do not require much cognitive function to perform. Selwyn takes this even further, describing how the competing factions behind AI technology, such as the companies, creators, CEOs, and governments, all want different things out of AI, which could create rifts between them and affect AI's performance.

Another point introduced only by Selwyn is AI's environmental impact. It is not mentioned in any other article, but Selwyn uses the example of bitcoin. According to Selwyn, "a product such as bitcoin (is) estimated to incur an annual energy consumption equivalent to that of Thailand or Norway (de Vries et al., 2022)". Selwyn uses this example to illustrate that generative AI could have the same consequences on an even larger scale, since AI will be far more popular than bitcoin. A very large portion of power would go straight to AI when it could be used for other purposes, like powering homes or businesses.

What to Do

Sometimes it is a little hard to determine what to do in such a complicated landscape, but multiple authors here have provided their own solutions for how to use AI effectively in the future. The first, and possibly biggest, is to address AI's ethics. An AI potentially containing sensitive information, biased information, or copyrighted material could be a big problem. The easiest way to combat this is to curate the training material better: do not include anything unless you know it is okay to include, such as by specifically gaining permission to use it, and do not include biased information at all. This must also go along with providing better access to information, so that AIs have larger databases to work from and therefore become smarter.

It is also important to familiarize yourself with AI ethical standards, so as not to accidentally overstep them. Those include human rights and dignity, peaceful and just societies, diversity and inclusion, and environmental flourishing, all of which are provided by UNESCO, along with ten other important things to keep in mind. By following these, it becomes much harder for AI to be used for wrongdoing, or for an accidental breach of privacy or copyright. Along with this, it is best to engage with ethical AI communities so you get the best feedback. There are many out there, so it should not be too difficult.

Some solutions provided are very political in nature, mostly given by West and Allen. They point to things like government spending on AI, creating AI ethics boards to issue policy recommendations, and voting for local politicians who will enact good AI policies. These are possibly some of the best ways to encourage safe and ethical production of AI. Regulating things like this will become key soon, so it is best to get started early.

To combat people using AI maliciously, it is best to penalize its malicious use and improve cybersecurity. Both of these combined will limit the amount of harm people can do with AI, and there is real harm involved. This could go along well with educating people on digital literacy and encouraging them to always investigate any claims or information they find online. This will allow more people to understand AI on a higher level, so the decisions made by those people will end up being better than if they did not fully understand the issue.

Employing even some of these changes will lead to a better future for AI. After all, AI's impact will likely be similar to how many other technological revolutions changed the world. We survived every other big change, so with the right mindset and ideas, we can use AI effectively and correctly. I highly recommend looking into the articles provided, since summarizing their contents really does not do justice to the authors' hard work and dedication.

Conclusion

All things considered, AI is a very big topic, and for good reason. Every point made by each author was valuable to the conversation, and there is much more discussion to be had. It is almost certain that artificial intelligence will change the world, and it is difficult to predict how. What the debate does show, however, is that many people have different views on AI, all with different conclusions and solutions. Many are very similar, but it is nearly impossible to fit every perspective into a few camps. Even Duckworth and Khan, as similar as their theses are, do not cite the exact same evidence for their conclusions. I find it rather fascinating, since people treat this topic like it is a line graph of opinions, when the human mind is much more complicated than a simple one-dimensional graph could ever depict. So keep in mind that everyone's opinion is different, and it is a very vast world out there where very few people fit neatly into segmented groups.



References

Chen, C. (2023). AI will transform teaching and learning. Let's get it right. Stanford University. https://hai.stanford.edu/news/ai-will-transform-teaching-and-learning-lets-get-it-right

Chugh, V. (2023). Ethics in generative AI. DataCamp. https://www.datacamp.com/tutorial/ethics-in-generative-ai

Duckworth, A., & Ungar, L. (2023, January 19). Op-Ed: Don't ban chatbots in classrooms – use them to change how we teach. Los Angeles Times. https://www.latimes.com/opinion/story/2023-01-19/chatgpt-ai-education-testing-teaching-changes

Khan, S. (2023). How AI could save (not destroy) education. TED. https://www.ted.com/talks/sal_khan_how_ai_could_save_not_destroy_education/transcript

Selwyn, N. (2022). The future of AI and education: Some cautionary notes. Wiley Online Library. https://onlinelibrary.wiley.com/doi/epdf/10.1111/ejed.12532?domain=p2p_domain&token=P92ZHGAIUNHFWC7AJHXJ

West, D., & Allen, J. (2018). How artificial intelligence is transforming the world. Brookings. https://www.brookings.edu/articles/how-artificial-intelligence-is-transforming-the-world/
