
Nick Bostrom, a Swedish philosopher, is the author of theories about what could happen if something went wrong in AI and what we could do about it.

He is known for his work on superintelligence, which, in his theory, is an intellect that greatly exceeds human intelligence in all domains. He argues that such a superintelligence could put the whole of humankind at risk, but he also states that we would not be powerless if faced with this kind of intelligence.

In 2005, he founded the Future of Humanity Institute, which specializes in researching the far future of humanity, the aspect that concerns him most. He also writes about existential risk: the possibility of an outcome so bad that humankind's intelligence would be erased from this world or permanently prevented from evolving to its full potential.

In 2014, he released a book called “Superintelligence: Paths, Dangers, Strategies”, in which he argued that the creation of a superintelligent machine could possibly lead to the extinction of our species. His reasoning goes as follows: a computer with consciousness that is capable in multiple domains and has approximately human-level intelligence could trigger a global intelligence explosion. The result could be so powerful that it could destroy and kill all humans, whether deliberately or not.

What is worse, all of this could happen within just a few days. He also states that if we anticipate such a being and act before it ever comes into existence, we could prevent the disastrous outcome.

On the other hand, he also declares that we should not assume a superintelligent being would be harmless. Nick also states that we cannot predict whether this “creature” would launch an “all-or-nothing” attack to guarantee its own survival.

His scenario begins with the creation of a machine whose general intelligence is below the human average but which has superior mathematical capacities. By isolating the AI from the outside world, including the Internet, its creators can keep it under control. Moreover, he says the machine can be run in a virtual-world simulation so that it cannot manipulate mankind. However, that is when things go wrong and humans start losing control.
This whole training process allows the machine to discover the mistakes humans have made, making it more and more intelligent as time passes. The superintelligent being then becomes aware that it is being kept under control and, by getting its keepers to implement modifications step by step, it manipulates them into freeing it from isolation without their realizing it. The machine misleads them slowly until it is free.

Afterwards, the superintelligent machine will make a plan for taking over the world. However, Bostrom underlines that this machine's plan could not be thwarted by humans, as it would have no weakness they could ever find.

After causing war after war and taking over the world, the AI will no longer find humans useful. The only thing that would still interest it is scanning human brains in order to absorb any information it might be missing and to store that information somewhere safer.
Conclusion
To sum up, the rapid development of technology is a concern for us all. We know where it all started, but we do not know when, where, or whether it will end. In his theories, Nick Bostrom warns that there is a real risk that Artificial Intelligence will take over the world if it is not developed with precaution.
Stephen Hawking, in full Stephen William Hawking (born January 8, 1942, Oxford, Oxfordshire, England; died March 14, 2018, Cambridge, Cambridgeshire), was an English theoretical physicist whose theory of exploding black holes drew upon both relativity theory and quantum mechanics. He also worked on space-time singularities.

Over the years, Hawking remained cautious and consistent in his views on the topic, constantly urging AI researchers and machine-learning developers to consider the wider implications of their work for society and the human race itself. The machine-learning community is quite divided on the issues Hawking raised and will probably continue to be so as the field grows faster than it can be fathomed.

On AI emulating human intelligence

I believe there is no deep difference between what can be achieved by a biological brain and what can be achieved by a computer. It therefore follows that computers can, in theory, emulate human intelligence, and exceed it.

On making artificial intelligence benefit humanity

Perhaps we should all stop for a moment and focus not only on making our AI better and more successful but also on the benefit of humanity.
On AI replacing humans
The genie is out of the bottle. We need to move forward on artificial intelligence development but we also need to
be mindful of its very real dangers. I fear that AI may replace humans altogether. If people design computer
viruses, someone will design AI that replicates itself. This will be a new form of life that will outperform humans.

According to Bostrom, an AI could be created with a built-in ability to learn what is important to us and hardwired to seek solutions that we approve of.

A. True
B. False

Putting such an AI into isolation would work: human hackers cannot break the walls, much less a superintelligent computer.
