
Cooperation on AI

TABLE OF CONTENTS
3 Questions on AI
    1. What is the current consensus regarding the speed of development for general AI?
    2. How can we predict and anticipate the intention of a future general AI?
    3. To what degree will we allow general AI to resemble humans and live amongst us, and develop into super AI?
Why these 3 questions?
Answers to these questions
    1. What is the current consensus regarding the speed of development for general AI?
    2. How can we predict and anticipate the intention of a future general AI?
    3. To what degree will we allow general AI to resemble humans and live amongst us, and develop into super AI?
Sources on these questions
    1. What is the current consensus regarding the speed of development for general AI?
    2. How can we predict and anticipate the intention of a future general AI?
    3. To what degree will we allow general AI to resemble humans and live amongst us, and develop into super AI?
Conclusion
    Reflections: Alyssa, Kira, Merveille, Tugy

Alyssa, Kira, Merveille, Tugy

Christophe Breemersch E-Skills


3 QUESTIONS ON AI
1. What is the current consensus regarding the speed of
development for general AI?
We would like to know whether there is a consensus on the speed of development for general AI and, if so, what that consensus looks like. If opinions among experts differ, what timelines do the different stances have in mind for the creation of AGI?

2. How can we predict and anticipate the intention of a future general AI?
We would like to know what has been said or speculated about the nature of AGI. How do those well-versed in matters of AI conceive of the intelligence it would exhibit? Intelligence has many dimensions and can mean different things. Can we predict the AI humans are programming, or could that programming slip out of human hands? Will humans' intentions in letting future general AI into our lives be positive or negative?

3. To what degree will we allow general AI to resemble humans and live amongst us, and develop into super AI?
After considering the first two questions, the third seemed a natural follow-up. First, when should we expect AGI? Second, when it arrives, what would it look like? And now, third, why? Why should we allow it?
WHY THESE 3 QUESTIONS?
First of all, we limited the questions to general AI, as this is the kind of intelligence that brings risks and ethical dilemmas related to self-awareness, personhood and autonomy. Narrow AI lacks self-awareness and the ability to learn outside its designated “area of intelligence”, and even when we delegate some decisions and choices to these forms of intelligence, they do not pose an existential threat in the way AGI does. Furthermore, the ethics of narrow AI are much narrower as well, and its questions can be tackled in a far more targeted and specific way. That is not the case for AGI, which requires a much grander, holistic approach.
While curiosity is certainly a factor, concern, and perhaps even a pinch of fear, are also strong drivers for these questions. Media often portray the future in a very negative light. Disaster movies are a genre in their own right, and AI often plays a central role in them. Films and series like I, Robot, Ex Machina, Person of Interest, I Am Mother, and even WALL-E offer concerning portrayals of artificial intelligence. It is no wonder, then, that we too might look with concern at some of the developments taking place.
ANSWERS TO THESE QUESTIONS
1. What is the current consensus regarding the speed of
development for general AI?
There is no consensus on when general AI will be achieved: some say we will be able to create AGI by 2060, while others believe it will take longer or never happen at all. However, most experts do express concern about the speed of development, which is influenced by Moore's Law. The doubling of processing power has made narrow AI far more feasible to build, increasing the ease with which such systems can be created. The internet, a relatively recent feature of our world, has also been an important tool for training AI, and it makes it possible for many more people to venture into the field. According to the article “AI Adoption Moving Too Fast for Comfort, New Report Says”, some are worried that the moment when a large number of jobs are replaced by AI will come sooner than expected. This raises the question “What is the correct speed of development?”, that is, a speed we would be comfortable with.
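To illustrate the doubling dynamic behind Moore's Law, the compounding can be sketched numerically. The two-year doubling period and the figures below are a common rule of thumb, not claims drawn from this report's sources:

```python
# Illustrative sketch of Moore's Law as a fixed doubling rule.
# The two-year doubling period is an assumption for illustration only.

def projected_capacity(start: float, years: float, doubling_period: float = 2.0) -> float:
    """Project processing capacity forward, doubling every `doubling_period` years."""
    return start * 2 ** (years / doubling_period)

# Over 20 years, a two-year doubling period multiplies capacity by 2**10 = 1024.
print(projected_capacity(1, 20))  # 1024.0
```

Even under this simple model, capacity grows a thousandfold in two decades, which is why the feasibility of narrow AI has risen so quickly.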

2. How can we predict and anticipate the intention of a future general AI?
It is hard to. The emergence of AGI is often compared to extraterrestrial life: an intelligence too different from ours. There are also big concerns about who develops this technology, as far more innocuous technologies, narrow AI included, have been shown to perpetuate the biases of their creators. Some among us are more concerned about who makes the AI than about the AI itself. A debate similar to nature versus nurture can be applied to the inception of an AGI or ASI.
As humans, we also tend to project our own history onto that which is different and foreign, which obviously does not bode well given the capabilities an AGI would have. But perhaps, with a bit of trust, not all has to be gloom and doom. The question is, how comfortable are we with that risk? And who are we to make that choice for the rest of the planet?
As discussed in the article “Artificial Intelligence: mind boggling prediction,” there are many predictions about what AI will be doing for the human race in years to come. One example is that computers will solve all problems known to the human race; although this could be a good thing for major problems, I do not think it is realistic. Another prediction is that “machines will be our caretakers, friends and advisors,” which is overall a scary topic of conversation. If humans are the ones programming these machines to assist us, is there a possibility that they could program themselves in a way that does not assist us? I have a hard time believing that a machine will do what it is supposed to do 100% of the time; there is a large chance for error in programming. With smart assistants like Alexa and Siri, there is an opportunity for them to help on a higher level than voice recognition and to become actual moving robots doing tasks around the house. Lastly, there is the prediction that “humans will not have to work anymore,” which is a real issue in today's world. The thought that “robots will take our jobs” is already playing out in today's society. For example, Amazon has reduced its human warehouse workforce because robots have been created that can retrieve specific items and packages from warehouse shelves more efficiently than humans, increasing Amazon's revenue through quicker turnaround. If this is happening at one of the largest companies in the world, what is stopping robots from taking jobs at smaller companies?
We can keep predicting what AI will do and how much we will allow it into our lives but, in the end, it is humans' job to make sure it does not get out of hand. It is nice to have technological advances that assist with everyday tasks, but if there is a chance for robots to be programmed to take over, there must be limits on AI.

3. To what degree will we allow general AI to resemble humans and live amongst us, and develop into super AI?
Many experts agree that we should focus more on answering these sorts of questions, as there is no answer to them as of now. Frameworks for how to develop AI, along with safety measures and fail-safes, have to be at the core of development. A consensus on the ethical concerns should also be reached, and a recognised global organisation that could monitor and enforce that consensus might help prevent worst-case outcomes.
We, the authors of this report, have also discussed whether it is ethical at all to create AI for the sake of the AI itself. Is it fair to create a sentient, self-aware being, one as intelligent as we are, emotionally so too, then ask or demand that it solve our problems, do our bidding and make decisions on its own, while depriving it of autonomy and a true equal?
SOURCES ON THESE QUESTIONS
1. What is the current consensus regarding the speed of
development for general AI?
Benefits & Risks of Artificial Intelligence. (2021, November 29). Future of Life Institute. https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/

Dilmegani, C. (2022, April 19). When will singularity happen? 995 experts’ opinions on AGI. AIMultiple. Retrieved 24 April 2022, from https://research.aimultiple.com/artificial-general-intelligence-singularity-timing/

Fjelland, R. (2020). Why general artificial intelligence will not be realized. Humanities and Social Sciences Communications, 7(1), 10. https://doi.org/10.1057/s41599-020-0494-4

Joshi, N. (2019, June 10). How Far Are We From Achieving Artificial General Intelligence? Forbes. https://www.forbes.com/sites/cognitiveworld/2019/06/10/how-far-are-we-from-achieving-artificial-general-intelligence/

Dille, G. (2021, March 10). AI Adoption Moving Too Fast for Comfort, New Report Says. MeriTalk. https://www.meritalk.com/articles/ai-adoption-moving-too-fast-for-comfort-new-report-says/

2. How can we predict and anticipate the intention of a future general AI?
Artificial Intelligence: Future Predictions. (2022, March 30). Scoro. https://www.scoro.com/blog/artificial-intelligence-predictions/

Artificial intelligence: Threats and opportunities. (2021, March 29). European Parliament. https://www.europarl.europa.eu/news/en/headlines/society/20200918STO87404/artificial-intelligence-threats-and-opportunities

Illing, S. (2018, February 22). How worried should we be about artificial intelligence? I asked 17 experts. Vox. https://www.vox.com/conversations/2017/3/8/14712286/artificial-intelligence-science-technology-robots-singularity-automation

Koenig, S. (2020, July 28). What does the future of artificial intelligence mean for humans? TechXplore. https://techxplore.com/news/2020-07-future-artificial-intelligence-humans.html

Rosenberg, L. (2022, February 25). Mind of its own: Will ‘general AI’ be like an alien invasion? Big Think. https://bigthink.com/the-future/general-ai-artificial-intelligence/

3. To what degree will we allow general AI to resemble humans and live amongst us, and develop into super AI?
Anderson, J., & Rainie, L. (2018, December 10). Artificial Intelligence and the Future of Humans. Pew Research Center. https://www.pewresearch.org/internet/2018/12/10/artificial-intelligence-and-the-future-of-humans/

Editorial team Future. Customer. (2019, February 12). How far should artificial intelligence be allowed to go? Majorel. https://www.majorel.com/future-customer/science-and-research/how-far-should-artificial-intelligence-be-allowed-to-go/

Lorinc, J. (2017, December 12). How far should we let AI go? MaRS. https://www.marsdd.com/magazine/how-far-should-we-let-ai-go/

Mariano, A. (2020, October 30). The A.I. Among Us. Understanding the Deepness of Deep Learning. Medium. https://medium.com/swlh/the-a-i-among-us-8f010214ebc4

Raden, N. (2020, July 30). Artificial General Intelligence will not resemble human intelligence. Diginomica. https://diginomica.com/artificial-general-intelligence-not-resemble-human

Tai, M. C.-T. (2020). The impact of artificial intelligence on human society and bioethics. Tzu Chi Medical Journal, 32(4), 339–343. PubMed. https://doi.org/10.4103/tcmj.tcmj_71_20


CONCLUSION
The truth is, there is a lot to be said about AI. Considering our different levels of familiarity, expertise and comfort with the topic, as well as differences in personal beliefs about the nature of the world and expectations for the future, it seems more fitting to let each member of our group reach their own conclusions and reflections. The topic of AI is, after all, still a matter of debate rather than one of fact.

Reflections:
Alyssa
I personally do not like the thought of AI as a whole. The thought of robots one day having the same level of intelligence as humans is scary to me. I know we are getting to a point where AI will be in everything, from Netflix to Spotify to ordering food with the computer knowing our preferences, but for the most part I would stay away from the robots.

Kira
Personally, the only forms of AI that I care about are AGI and its successor, ASI, because narrow AI isn't, in my opinion, intelligent at all. Narrow AI is a tool like a hammer: it can be used to build or destroy, it can be built to favour one thing over another and it can be wielded one way or another, but it remains very much a tool. One can argue, though, that the potential for harm some tools represent means they shouldn't be built. This argument can be extended to bombs or guns, but in that sense narrow AI is pretty limited in its harm. Once we start looking at AGI and ASI, things get interesting and complicated. I believe that here we should be more concerned about who is making the AI and how they understand and codify intelligence, specifically emotional intelligence. I believe that if it truly were to mimic human emotional intelligence, there wouldn't be that much need for concern. The main concern would be how much free will we would have to sacrifice, but for the wellbeing of people, I think it would be worth it. What concerns me more, though, is how the AI would feel about its role, its place in the world, and the expectations or demands we might have of it. Legally speaking, it would be entitled to protections, as it would have personhood.
I clearly have a lot of thoughts on the topic, but this is where I'll end it.

Merveille
The whole idea of AI is eerie to me. I can only accept it to a certain degree; for example, nowadays we use programs like Siri or Bixby to make our lives a bit more organised, and these types of AI can be very helpful. But the idea of AI with the same degree of intellect as humans is terrifying. Okay, maybe I am biased because of how the world visualises AI as creations that could destroy humanity, the world, or whatever. Still, I'm not really fond of the idea.

Tugy
I find that AI has a lot of potential. It should add to our abilities or save us the effort of performing tedious “robotic” tasks instead of completely replacing people. AI does also create a few job opportunities, but for a very niche group whose interests lie in AI and tech-related things. If taken too far, I believe AI could open up or create a lot of risks, similar to what some popular movies have hypothesised: the complete replacement of humans, making us mindless and lazy, more possibilities for crime, and so on.
