
Futuristic United Nations General Assembly

Background Guide
Letter from the Executive Board

Dear Delegates,

It is our distinct honor to welcome you to DPS Eldeco Model United
Nations 2017! More specifically, it is our pleasure to serve as the
Executive Board for the United Nations Futuristic General Assembly
(UNFGA), an international agency of particular importance in the time
of great need that lies ahead.

We would like to begin by telling you a bit about ourselves.

Mitali Bhasin is a college student of Humanities and one of the most
active young people in the city. Her wide range of interests encompasses
poetry, writing, entrepreneurship and, of course, MUNs, where her skills
have earned her recognition at a remarkably young age; she truly is a
prodigy. She is known for her keen sense of rationality, which works in
tandem with her strength of excellent research, and for the famous
Survivor strategy of "Outwit, Outplay, Outlast" in the MUN circuit. Her
keen interest in technology and in highly competitive arenas makes her
well suited to the responsibility of Assembly Co-Chairperson for this
committee. Her expertise is sure to provide you with an extensive
learning experience.

Avinash Singh Vishen is a first-year law student at Symbiosis Law
College, Noida, originally hailing from Lucknow, and will be serving as
the other Assembly Co-Chairperson. Having spent around four years in
this circuit, he has done quite a few MUNs, both in the capacity of a
delegate and of an Executive Board member. He has an undying interest
in policy studies and litigation, because of which he expects the delegates
of his committee to be thoroughly researched and aware of all aspects of
the agenda at hand. He looks forward to meeting you all this August.

Your Assembly Vice-Chairperson, Aryan Srivastava (not to be confused
with your Secretary-General), is a class 12 student studying Science with
Maths and Psychology. He too has attended quite a few MUNs, both as a
delegate and as an EB member. He always looks forward to logical
arguments, even when the research content is a bit thin: he considers
logic the pillar on which an argument is built over facts and research. He
looks forward to meeting you all very soon.

We are especially excited to be chairing UNFGA. This committee has a
topic at hand which, if not now, will certainly matter to us in the coming
future.
With regard to the substance of this committee, we feel that few topics
are as underdeveloped and underappreciated as the threat from artificial
intelligence to human beings, considering recent events. In the wake of
incidents ranging from Facebook's recent AI activity to the shutting
down of Yahoo's AI servers, there are many questions to be answered
regarding the balancing of technological and theoretical interests. For
example, should there be more comprehensive guidelines, laws, and
incentives that encapsulate a technology-friendly attitude? Or is the
status quo sufficient for current and future needs and desires? Will the
future of Artificial Intelligence (AI) and development be predicated
upon the traditionalist neoclassical economic model, or is a shift in our
approach to the market and our understanding of corporate
environmental and social terms necessary in order to maximize
happiness for our societies and ensure appropriate protection for
humans and related interests?

These are just some of the ambitious questions that this committee will
have to answer over the course of the conference in August. Come prepared
for some engaging debate, and be ready to shape the future of global
technological discourse!

Sincerely,
Mitali Bhasin

Assembly Co-Chairperson, United Nations Futuristic General Assembly

Avinash Singh Vishen

Assembly Co-Chairperson, United Nations Futuristic General Assembly

Aryan Srivastava
Assembly Vice Chairperson, United Nations Futuristic General Assembly
Contact Details:
+91-87-26-499100
aryansrivastava1999@gmail.com

About the Committee
The UNGA is the democratic heart of the UN, a forum for decision-
making where all 193 member states each have a single vote. Unlike the
Security Council, which is dominated by the five permanent members –
Russia, UK, US, France and China – every country is invited to send a
representative to the general assembly. It was established as a founding
institution of the UN in 1945 as the “deliberative, policymaking and
representative organ of the United Nations”.
The general assembly has a range of vital decisions to make within the
UN system, including appointing the Secretary-General, electing the non-
permanent members of the Security Council and approving the UN
regular budget.
Most importantly, it is the main global forum for discussing international
political cooperation, threats to peace and economic development, as well
as the huge range of social, humanitarian and cultural issues that come
under the remit of the United Nations.
The general assembly discusses and makes decisions on just about
anything you can think of. The 69th assembly, which is coming to an end,
has discussed how to further economic development in Africa, the
situation in the occupied territories of Azerbaijan and the role of
diamonds in fuelling conflict. Most recently the assembly approved a set
of principles to resolve disputes between bankrupt countries and their
creditors, which comes after years of lobbying by Argentina and Greece
for a debt restructuring process that would shield them from draconian
cutbacks threatening their political and economic stability.
The general assembly is funded out of the UN regular budget which is
paid for by member states, based on a scale relating to their ability to pay.

The regular budget for 2014-15 was $5.4bn (£3.5bn) with $663m
allocated for the general assembly, economic and social council and
conference management.
If countries fail to pay their dues they may have their vote taken away.
Yemen is banned from voting because it is in arrears, while another four
countries in arrears – Comoros, Guinea-Bissau, São Tomé and Príncipe
and Somalia – have been allowed to vote until the end of 2015.
At the end of 2014 the state of payments towards the UN was described
by a senior UN official as “alarming”, with a funding gap of more than
$950m.
Decisions on important questions, such as those on peace and security,
admission of new members and budgetary matters, require a two-thirds
majority. Decisions on other questions are by simple majority.
Each country has one vote. Member States in arrears on the payment of
their financial contributions may, in certain cases, still be granted the
right to vote.
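To make these voting rules concrete, here is a minimal sketch in Python
(added for illustration; it assumes, as Article 18 of the UN Charter does,
that majorities are counted among members present and voting, with
abstentions excluded):

def resolution_passes(yes: int, no: int, important: bool) -> bool:
    # Assumption: majorities are counted among members present and
    # voting, so abstentions and absentees are excluded entirely.
    present_and_voting = yes + no
    if present_and_voting == 0:
        return False
    if important:
        # Important questions (peace and security, admission of new
        # members, budgetary matters) need a two-thirds majority.
        return yes >= 2 * present_and_voting / 3
    # All other questions pass by a simple majority.
    return yes > no

# Example: 120 in favour, 50 against on an important question.
# 120 out of 170 is about 70.6%, above two-thirds, so it passes.
print(resolution_passes(120, 50, important=True))  # True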

Setting of the Committee
The Committee is set in 2030, by which time certain developments and
progress have been made, in equitable terms, in technological
advancement across the marketing, economic and share sectors, in
keeping with each country's individual state profile and with the laws
and policies governing the areas of artificial intelligence and technology.
In recent times, certain parameters have defined and drawn boundaries
around these developments, small and large, or "progress" as some may
call it.
Before moving on, there are a few terms you must be aware of before
starting your research.
1. AI (Artificial Intelligence): man-made technology capable of drawing
its own deductions and inferences. It can also alter its understanding over
time, even though it is initially programmed to run on commands.
2. Patent: a government authority or licence conferring a right or title for
a set period, especially the sole right to exclude others from making,
using, or selling an invention.
3. Existential Crisis: a threat to human existence arising from some risk
factor that obligates and commands on its own.

Gravitas of the Situation [Background
Timeline: 2017-2030 (Present Day)]
*Note: Delegates are advised to be aware of the technological
terms related to AI and Robotics, in order to understand the
language of the Background Guide.

What worries you about the coming world of artificial intelligence?


Too often the answer to this question resembles the plot of a sci-fi thriller.
People worry that developments in A.I. will bring about the “singularity”
— that point in history when A.I. surpasses human intelligence, leading to
an unimaginable revolution in human affairs. Or they wonder whether
instead of our controlling artificial intelligence, it will control us, turning
us, in effect, into cyborgs.

July 2017: The beginning of the threat. Facebook, a renowned social
media site, has its artificial intelligence program develop an unknown
language of communication within its systems, all by itself, causing the
firm to shut down the AI server for that programme.
December 2017: Yahoo and Google, two of the most influential and
widely used search engines, experience a technical shutdown that
cascades into a mass server outage, caused by faults at an interior level of
the programs.
May 2018: Trans-electric servers fail to alter the coding language and
the email clouding system is disconnected, leaving emails inaccessible to
client computers and resulting in the loss of emails from all servers and
the entire cloud.

November 2018: Bing, a relatively less known and less used search
engine, hits the stock market with its highest ever ratings of search usage,
following the failures of Yahoo and Google the previous year.
2019: Military data servers all around the world experience a database
shutdown, causing military AI to run launch codes in the silos on its own
and threatening a coded nuclear missile launch worldwide. The crisis is
resolved with immediate effect by the Technical Directorate, Military
Wing of the FSB.
2020: A major chunk of highly confidential data is leaked from the CIA
and MI6, as reported by Reuters, causing statistics to show a whopping
76% rise in AI advancement and division management across all the
technical firms in the world, especially in the USA, Russia, the UK,
India and Germany.
2022: The Ministry of State Security, China, releases a report which
extensively mentions the term "Sleeper Bugs", which have the capability
of terminating all-sourcing databases (technical, military, AI, etc.) and
have never been used before. They can also be used to steal or transfer
information or raw data from one outsourcing server to another without
leaving a traceable IP.
2023, 2024 & 2025: Policies of different technical gigantic firms come
into being over the course of these three years regarding the condition of
a possible threat to mankind from Artificial Intelligence, but are yet not
disclosed in the public, for reasons very obvious.
2026: The international media press governments all over the world to
clarify their stands on the growing alarm over AI threats. Several
conspiracy theories also emerge, surprisingly backed by valid logic.

2027, 2028 & 2029: The basic elements of programming loose
commanding structure and now start following self made commands,
which in turn flip the switches between the agencies and governments, all
across the globe, forcing both of them to work together, in its entirety.
The Following setup, prepared by the Artificial Intelligence Research and
Analysis Command, Government of USA (est. 2018), ensures the
breakdown of self structured controlled working of AI and is launched
with the permission of the POTUS, Mr. Dwayne ‘The Rock’ Johnson,
causing public defamation, as this setup is capable of killing the cerebral
circuits of the entire regime of AI, including the ones which are beneficial
and not harmful to us in any way as well as those which can counter attack
and secure already affected and harmful AI. India, Russia and Nigeria
have now developed a technique, through which the R&AW, FSB and
NIA (Nigeria) now have access to worldwide Trigger Management System
(TMS) on which they have been working together since 2013, which can
eliminate the research capacities of all the Intelligence agencies and
techno firms across the globe, which if done, will have serious
consequences.
2030 (Current Setting): The world is set to counter the possible threat
posed to mankind by artificial intelligence. Countries like India, the
USA and Israel are equipped with programs capable of self-destruction:
if an AI programme starts running commands of its own will, the
Trigger Management System (TMS) will initiate itself simultaneously
and cause that AI's entire programming code to vanish from the coding-
decoding strata, ultimately destroying that particular programme type in
all known functional agencies around the globe. AI poses a grave threat
to the existence of humankind, and indeed of life itself. The
development of machines capable of producing more of their own kind,
fed on common sense and self-commanding prospects, makes them very
dangerous and us vulnerable, on the scale of an existential crisis.
Governments around the world have not yet made any official
statements or adopted policy frameworks regarding the situation at
hand. Most of them still deny the threat from AI, dismissing it as a
misleading myth, even though they are fully aware of all the events that
have taken place, which they have claimed to be "technical faults".

Involvement of United Nations (Past Actions)


The United Nations has been deeply concerned about this issue from
the very start, and the Secretary-General, His Excellency Mr. Vikas
Swaroop (India), has himself emphasized this pertinent issue. The UN
Technical Department (Weapons, Robotics and Analysis) has been
heavily involved and has prevented the UN Cloud Servers from
crashing; as these stand as the only functional email servers in the world,
it is imperative for the UN to ensure their security and full functionality.
Also, the UN Crash Command X (UNCCX) is a system developed
under the direct supervision of the Secretary-General in 2029 which
enables all data servers to align and act at once, like a single unit,
unifying the entire internet and all proxy IP servers; it can help stand
against the self-constructing programs and save the coding-decoding
strata. It must also be noted that the UNCCX is a single-unit functional
system which operates only under extreme safeguards and the combined
authorization of the Secretary-General, the Deputy Secretary-General
and the Under-Secretary-General for General Assemblies; if it were
commanded by any one particular state or non-state actor, it could risk
the safety of the entire globe at once.

For more details, visit:
http://www.unicri.it/special_topics/Robotics_Artificial_Intelligence/

Categorical Classification of AI
• WHAT IS AI?
From SIRI to self-driving cars, artificial intelligence (AI) is progressing
rapidly. While science fiction often portrays AI as robots with human-like
characteristics, AI can encompass anything from Google’s search
algorithms to IBM’s Watson to autonomous weapons.
Artificial intelligence today is properly known as narrow AI (or weak AI),
in that it is designed to perform a narrow task (e.g. only facial recognition
or only internet searches or only driving a car). However, the long-term
goal of many researchers is to create general AI (AGI or strong AI).
While narrow AI may outperform humans at whatever its specific task is,
like playing chess or solving equations, AGI would outperform humans at
nearly every cognitive task.

• WHY RESEARCH AI SAFETY?


In the near term, the goal of keeping AI’s impact on society beneficial
motivates research in many areas, from economics and law to technical
topics such as verification, validity, security and control. Whereas it may
be little more than a minor nuisance if your laptop crashes or gets hacked,
it becomes all the more important that an AI system does what you want it
to do if it controls your car, your airplane, your pacemaker, your
automated trading system or your power grid. Another short-term

challenge is preventing a devastating arms race in lethal autonomous
weapons.
In the long term, an important question is what will happen if the quest
for strong AI succeeds and an AI system becomes better than humans at
all cognitive tasks. As pointed out by I.J. Good in 1965, designing smarter
AI systems is itself a cognitive task. Such a system could potentially
undergo recursive self-improvement, triggering an intelligence explosion
leaving human intellect far behind. By inventing revolutionary new
technologies, such a super intelligence might help us eradicate war,
disease, and poverty, and so the creation of strong AI might be the biggest
event in human history. Some experts have expressed concern, though,
that it might also be the last, unless we learn to align the goals of the AI
with ours before it becomes super intelligent.
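To make Good's argument concrete, consider a toy numerical sketch
(purely illustrative; the growth rule and numbers are assumptions for the
example, not claims from the research literature):

# Toy model of recursive self-improvement, for illustration only.
# 'capability' is an abstract design-skill score, normalized so that
# 1.0 means human level; the 0.5 gain factor is an arbitrary assumption.
capability = 1.0
for generation in range(10):
    # Each system designs its successor; a better designer adds more.
    capability += 0.5 * capability
    print(f"generation {generation + 1}: capability = {capability:.1f}")

# Capability multiplies by 1.5 every generation, so after ten
# generations it is roughly 57 times the starting level. Any rule in
# which the improvement grows with the improver's own capability
# gives this kind of runaway curve: the "intelligence explosion".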
There are some who question whether strong AI will ever be achieved,
and others who insist that the creation of super intelligent AI is guaranteed
to be beneficial. We have to recognize both of these possibilities, but also
recognize the potential for an artificial intelligence system to intentionally
or unintentionally cause great harm. We believe research today will help
us better prepare for and prevent such potentially negative consequences
in the future, thus enjoying the benefits of AI while avoiding pitfalls.

• HOW CAN AI BE DANGEROUS?


Most researchers agree that a super intelligent AI is unlikely to exhibit
human emotions like love or hate, and that there is no reason to expect
AI to become intentionally benevolent or malevolent. Instead, when
considering how AI might become a risk, experts think two scenarios
most likely:
The AI is programmed to do something devastating: Autonomous
weapons are artificial intelligence systems that are programmed to kill. In

the hands of the wrong person, these weapons could easily cause mass
casualties. Moreover, an AI arms race could inadvertently lead to an AI
war that also results in mass casualties. To avoid being thwarted by the
enemy, these weapons would be designed to be extremely difficult to
simply “turn off,” so humans could plausibly lose control of such a
situation. This risk is one that’s present even with narrow AI, but grows as
levels of AI intelligence and autonomy increase.
The AI is programmed to do something beneficial, but it develops a
destructive method for achieving its goal: This can happen whenever we
fail to fully align the AI’s goals with ours, which is strikingly difficult. If you
ask an obedient intelligent car to take you to the airport as fast as possible,
it might get you there chased by helicopters and covered in vomit, doing
not what you wanted but literally what you asked for. If a super intelligent
system is tasked with an ambitious geoengineering project, it might wreak
havoc with our ecosystem as a side effect, and view human attempts to
stop it as a threat to be met.
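A minimal sketch of this second failure mode (illustrative only; the
routes, scores and objective function are invented for the example):

# An optimizer that does literally what it was asked, for illustration.
routes = [
    {"name": "highway at legal speed", "minutes": 45, "lawful": True},
    {"name": "reckless shortcut",      "minutes": 20, "lawful": False},
]

def objective(route):
    # The stated goal: "get there as fast as possible".
    # Nothing else (safety, legality, comfort) is scored at all.
    return -route["minutes"]

best = max(routes, key=objective)
print(best["name"])  # "reckless shortcut": goal satisfied, intent violated

# Patching the objective, e.g. penalizing unlawful routes, helps only
# until the next unstated preference is violated; capturing the full
# human intent in one objective function is the strikingly hard part.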
As these examples illustrate, the concern about advanced AI isn’t
malevolence but competence. A super-intelligent AI will be extremely
good at accomplishing its goals, and if those goals aren’t aligned with ours,
we have a problem. You’re probably not an evil ant-hater who steps on
ants out of malice, but if you’re in charge of a hydroelectric green energy
project and there’s an anthill in the region to be flooded, too bad for the
ants. A key goal of AI safety research is to never place humanity in the
position of those ants.

• WHY THE RECENT INTEREST IN AI SAFETY?
Stephen Hawking, Elon Musk, Steve Wozniak, Bill Gates, and many
other big names in science and technology have recently expressed

concern in the media and via open letters about the risks posed by AI,
joined by many leading AI researchers. Why is the subject suddenly in the
headlines?
The idea that the quest for strong AI would ultimately succeed was long
thought of as science fiction, centuries or more away. However, thanks to
recent breakthroughs, many AI milestones, which experts viewed as
decades away merely five years ago, have now been reached, making
many experts take seriously the possibility of super intelligence in our
lifetime. While some experts still guess that human-level AI is centuries
away, most AI researchers at the 2015 Puerto Rico Conference guessed
that it would happen before 2060. Since it may take decades to complete
the required safety research, it is prudent to start it now.
Because AI has the potential to become more intelligent than any human,
we have no surefire way of predicting how it will behave. We can’t use
past technological developments as much of a basis because we’ve never
created anything that has the ability to, wittingly or unwittingly, outsmart
us. The best example of what we could face may be our own evolution.
People now control the planet, not because we’re the strongest, fastest or
biggest, but because we’re the smartest. If we’re no longer the smartest,
are we assured to remain in control?

• THE TOP MYTHS ABOUT ADVANCED AI


A captivating conversation is taking place about the future of artificial
intelligence and what it will/should mean for humanity. There are
fascinating controversies where the world’s leading experts disagree, such
as: AI’s future impact on the job market; if/when human-level AI will be
developed; whether this will lead to an intelligence explosion; and whether
this is something we should welcome or fear. But there are also many
examples of boring pseudo-controversies caused by people

misunderstanding and talking past each other. To help ourselves focus on
the interesting controversies and open questions — and not on the
misunderstandings — let’s clear up some of the most common myths.

• TIMELINE MYTHS
The first myth regards the timeline: how long will it take until machines
greatly supersede human-level intelligence? A common misconception is
that we know the answer with great certainty.
One popular myth is that we know we’ll get superhuman AI this century.
In fact, history is full of technological over-hyping. Where are those fusion
power plants and flying cars we were promised we’d have by now? AI has
also been repeatedly over-hyped in the past, even by some of the founders
of the field. For example, John McCarthy (who coined the term “artificial
intelligence”), Marvin Minsky, Nathaniel Rochester and Claude Shannon
wrote this overly optimistic forecast about what could be accomplished
during two months with stone-age computers: “We propose that a 2
month, 10 man study of artificial intelligence be carried out during the
summer of 1956 at Dartmouth College […] An attempt will be made to
find how to make machines use language, form abstractions and concepts,
solve kinds of problems now reserved for humans, and improve
themselves. We think that a significant advance can be made in one or
more of these problems if a carefully selected group of scientists work on
it together for a summer.”
On the other hand, a popular counter-myth is that we know we won’t get
superhuman AI this century. Researchers have made a wide range of
estimates for how far we are from superhuman AI, but we certainly can’t
say with great confidence that the probability is zero this century, given the
dismal track record of such techno-skeptic predictions. For example,
Ernest Rutherford, arguably the greatest nuclear physicist of his time, said
in 1933 — less than 24 hours before Szilard’s invention of the nuclear
chain reaction — that nuclear energy was “moonshine.” And Astronomer
Royal Richard Woolley called interplanetary travel “utter bilge” in 1956.
The most extreme form of this myth is that superhuman AI will never

arrive because it’s physically impossible. However, physicists know that a
brain consists of quarks and electrons arranged to act as a powerful
computer, and that there’s no law of physics preventing us from building
even more intelligent quark blobs.
There have been a number of surveys asking AI researchers how many
years from now they think we’ll have human-level AI with at least 50%
probability. All these surveys have the same conclusion: the world’s
leading experts disagree, so we simply don’t know. For example, in such a
poll of the AI researchers at the 2015 Puerto Rico AI conference, the
average (median) answer was by year 2045, but some researchers guessed
hundreds of years or more.
There’s also a related myth that people who worry about AI think it’s only
a few years away. In fact, most people on record worrying about
superhuman AI guess it’s still at least decades away. But they argue that as
long as we’re not 100% sure that it won’t happen this century, it’s smart to
start safety research now to prepare for the eventuality. Many of the safety
problems associated with human-level AI are so hard that they may take
decades to solve. So it’s prudent to start researching them now rather than
the night before some programmers drinking Red Bull decide to switch
one on.

• CONTROVERSY MYTHS
Another common misconception is that the only people harboring
concerns about AI and advocating AI safety research are luddites who
don’t know much about AI. When Stuart Russell, author of the standard
AI textbook, mentioned this during his Puerto Rico talk, the audience
laughed loudly. A related misconception is that supporting AI safety
research is hugely controversial. In fact, to support a modest investment in
AI safety research, people don’t need to be convinced that risks are high,

merely non-negligible — just as a modest investment in home insurance is
justified by a non-negligible probability of the home burning down.
It may be that media have made the AI safety debate seem more
controversial than it really is. After all, fear sells, and articles using out-of-
context quotes to proclaim imminent doom can generate more clicks than
nuanced and balanced ones. As a result, two people who only know about
each other’s positions from media quotes are likely to think they disagree
more than they really do. For example, a techno-skeptic who only read
about Bill Gates’ position in a British tabloid may mistakenly think Gates
believes super intelligence to be imminent. Similarly, someone in the
beneficial-AI movement who knows nothing about Andrew Ng’s position
except his quote about overpopulation on Mars may mistakenly think he
doesn’t care about AI safety, whereas in fact, he does. The crux is simply
that because Ng’s timeline estimates are longer, he naturally tends to
prioritize short-term AI challenges over long-term ones.

• MYTHS ABOUT THE RISKS OF SUPERHUMAN AI
Many AI researchers roll their eyes when seeing this headline: “Stephen
Hawking warns that rise of robots may be disastrous for mankind.” And
many have lost count of how many similar articles they’ve seen.
Typically, these articles are accompanied by an evil-looking robot carrying
a weapon, and they suggest we should worry about robots rising up and
killing us because they’ve become conscious and/or evil. On a lighter
note, such articles are actually rather impressive, because they succinctly
summarize the scenario that AI researchers don’t worry about. That
scenario combines as many as three separate misconceptions: concern
about consciousness, evil, and robots.

If you drive down the road, you have a subjective experience of colors,
sounds, etc. But does a self-driving car have a subjective experience? Does
it feel like anything at all to be a self-driving car? Although this mystery of
consciousness is interesting in its own right, it’s irrelevant to AI risk. If you
get struck by a driverless car, it makes no difference to you whether it
subjectively feels conscious. In the same way, what will affect us humans is
what super intelligent AI does, not how it subjectively feels.
The fear of machines turning evil is another red herring. The real worry
isn’t malevolence, but competence. A super intelligent AI is by definition
very good at attaining its goals, whatever they may be, so we need to
ensure that its goals are aligned with ours. Humans don’t generally hate
ants, but we’re more intelligent than they are – so if we want to build a
hydroelectric dam and there’s an anthill there, too bad for the ants. The
beneficial-AI movement wants to avoid placing humanity in the position
of those ants.
The consciousness misconception is related to the myth that machines
can’t have goals. Machines can obviously have goals in the narrow sense
of exhibiting goal-oriented behavior: the behavior of a heat-seeking missile
is most economically explained as a goal to hit a target. If you feel
threatened by a machine whose goals are misaligned with yours, then it is
precisely its goals in this narrow sense that trouble you, not whether the
machine is conscious and experiences a sense of purpose. If that heat-
seeking missile were chasing you, you probably wouldn’t exclaim: “I’m
not worried, because machines can’t have goals!”
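A toy sketch of goals in this narrow sense (illustrative only; a trivial
pursuit loop standing in for the missile's guidance):

# Goal-directed behavior with no consciousness anywhere, for illustration.
position, target = 0.0, 10.0
for step in range(8):
    # Close a fixed fraction of the remaining gap each step.
    position += 0.5 * (target - position)
    print(f"step {step + 1}: position = {position:.2f}")

# The loop relentlessly approaches 10.0. Saying it "has the goal of
# reaching the target" predicts its behavior economically; whether it
# subjectively experiences purpose never enters the description.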
In fact, the main concern of the beneficial-AI movement isn’t with robots
but with intelligence itself: specifically, intelligence whose goals are
misaligned with ours. To cause us trouble, such misaligned superhuman
intelligence needs no robotic body, merely an internet connection – this
may enable outsmarting financial markets, out-inventing human

researchers, out-manipulating human leaders, and developing weapons we
cannot even understand. Even if building robots were physically
impossible, a super-intelligent and super-wealthy AI could easily pay or
manipulate many humans to unwittingly do its bidding.
The robot misconception is related to the myth that machines can’t
control humans. Intelligence enables control: humans control tigers not
because we are stronger, but because we are smarter. This means that if
we cede our position as smartest on our planet, it’s possible that we might
also cede control.

What will happen if AI takes control?


“These are interesting issues to contemplate, but they are not pressing.”
They concern situations that may not arise for hundreds of years, if ever.
At the moment, there is no known path from our best A.I. tools (like the
Google computer program that recently beat the world’s best player of the
game of Go) to “general” A.I. — self-aware computer programs that can
engage in common-sense reasoning, attain knowledge in multiple
domains, feel, express and understand emotions and so on.
This doesn’t mean we have nothing to worry about. On the contrary, the
A.I. products that now exist are improving faster than most people realize
and promise to radically transform our world, not always for the better.
They are only tools, not a competing form of intelligence. But they will
reshape what work means and how wealth is created, leading to
unprecedented economic inequalities and even altering the global balance
of power.
It is imperative that we turn our attention to these imminent challenges.

What is artificial intelligence today? Roughly speaking, it’s technology that
takes in huge amounts of information from a specific domain (say, loan
repayment histories) and uses it to make a decision in a specific case
(whether to give an individual a loan) in the service of a specified goal
(maximizing profits for the lender). Think of a spreadsheet on steroids,
trained on big data. These tools can outperform human beings at a given
task.
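A minimal sketch of such a tool (illustrative only: a tiny logistic-
regression loan scorer using scikit-learn, with invented features and data;
a real system would train on millions of repayment histories):

# A "spreadsheet on steroids": decide a specific case (approve a loan)
# from domain data (repayment histories) toward a specified goal.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [annual income in $1000s, debt-to-income ratio, prior defaults]
X = np.array([[80, 0.2, 0], [35, 0.6, 2], [120, 0.1, 0], [40, 0.5, 1]])
y = np.array([1, 0, 1, 0])  # 1 = repaid in full, 0 = defaulted

model = LogisticRegression().fit(X, y)

applicant = np.array([[55, 0.3, 0]])
decision = model.predict(applicant)[0]
print("approve" if decision == 1 else "decline")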
This kind of A.I. is spreading to thousands of domains (not just loans),
and as it does, it will eliminate many jobs. Bank tellers, customer service
representatives, telemarketers, stock and bond traders, even paralegals
and radiologists will gradually be replaced by such software. Over time
this technology will come to control semiautonomous and autonomous
hardware like self-driving cars and robots, displacing factory workers,
construction workers, drivers, delivery workers and many others.
Unlike the Industrial Revolution and the computer revolution, the A.I.
revolution is not taking certain jobs (artisans, personal assistants who use
paper and typewriters) and replacing them with other jobs (assembly-line
workers, personal assistants conversant with computers). Instead, it is
poised to bring about a wide-scale decimation of jobs — mostly lower-
paying jobs, but some higher-paying ones, too.
This transformation will result in enormous profits for the companies that
develop A.I., as well as for the companies that adopt it. Imagine how
much money a company like Uber would make if it used only robot
drivers. Imagine the profits if Apple could manufacture its products
without human labor. Imagine the gains to a loan company that could
issue 30 million loans a year with virtually no human involvement. (As it
happens, my venture capital firm has invested in just such a loan
company.)

We are thus facing two developments that do not sit easily together:
enormous wealth concentrated in relatively few hands and enormous
numbers of people out of work. What is to be done?
Part of the answer will involve educating or retraining people in tasks A.I.
tools aren’t good at. Artificial intelligence is poorly suited for jobs
involving creativity, planning and “cross-domain” thinking — for example,
the work of a trial lawyer. But these skills are typically required by high-
paying jobs that may be hard to retrain displaced workers to do. More
promising are lower-paying jobs involving the “people skills” that A.I.
lacks: social workers, bartenders, concierges — professions requiring
nuanced human interaction. But here, too, there is a problem: How many
bartenders does a society really need?
The solution to the problem of mass unemployment, I suspect, will
involve “service jobs of love.” These are jobs that A.I. cannot do, that
society needs and that give people a sense of purpose. Examples include
accompanying an older person to visit a doctor, mentoring at an
orphanage and serving as a sponsor at Alcoholics Anonymous — or,
potentially soon, Virtual Reality Anonymous (for those addicted to their
parallel lives in computer-generated simulations). The volunteer service
jobs of today, in other words, may turn into the real jobs of the future.
Other volunteer jobs may be higher-paying and professional, such as
compassionate medical service providers who serve as the “human
interface” for A.I. programs that diagnose cancer. In all cases, people will
be able to choose to work fewer hours than they do now.
Who will pay for these jobs? Here is where the enormous wealth
concentrated in relatively few hands comes in. It strikes me as
unavoidable that large chunks of the money created by A.I. will have to be
transferred to those whose jobs have been displaced. This seems feasible

only through Keynesian policies of increased government spending,
presumably raised through taxation on wealthy companies.
As for what form that social welfare would take, I would argue for a
conditional universal basic income: welfare offered to those who have a
financial need, on the condition they either show an effort to receive
training that would make them employable or commit to a certain
number of hours of “service of love” voluntarism.
To fund this, tax rates will have to be high. The government will not only
have to subsidize most people’s lives and work; it will also have to
compensate for the loss of individual tax revenue previously collected
from employed individuals.
This leads to the final and perhaps most consequential challenge of A.I.
The Keynesian approach I have sketched out may be feasible in the
United States and China, which will have enough successful A.I.
businesses to fund welfare initiatives via taxes, but what about the other
countries?
They face two insurmountable problems. First, most of the money being
made from artificial intelligence will go to the United States and China.
A.I. is an industry in which strength begets strength: The more data you
have, the better your product; the better your product, the more data you
can collect; the more data you can collect, the more talent you can attract;
the more talent you can attract, the better your product. It’s a virtuous
circle, and the United States and China have already amassed the talent,
market share and data to set it in motion.
For example, the Chinese speech-recognition company iFlytek and
several Chinese face-recognition companies such as Megvii and
SenseTime have become industry leaders, as measured by market
capitalization. The United States is spearheading the development of

autonomous vehicles, led by companies like Google, Tesla and Uber. As
for the consumer internet market, seven American or Chinese companies
— Google, Facebook, Microsoft, Amazon, Baidu, Alibaba and Tencent —
are making extensive use of A.I. and expanding operations to other
countries, essentially owning those A.I. markets. It seems American
businesses will dominate in developed markets and some developing
markets, while Chinese companies will win in most developing markets.
The other challenge for many countries that are not China or the United
States is that their populations are increasing, especially in the developing
world. While a large, growing population can be an economic asset (as in
China and India in recent decades), in the age of A.I. it will be an
economic liability because it will comprise mostly displaced workers, not
productive ones.
So if most countries will not be able to tax ultra-profitable A.I. companies
to subsidize their workers, what options will they have? I foresee only one:
Unless they wish to plunge their people into poverty, they will be forced
to negotiate with whichever country supplies most of their A.I. software —
China or the United States — to essentially become that country’s
economic dependent, taking in welfare subsidies in exchange for letting
the “parent” nation’s A.I. companies continue to profit from the
dependent country’s users. Such economic arrangements would reshape
today’s geopolitical alliances.
One way or another, we are going to have to start thinking about how to
minimize the looming A.I.-fueled gap between the haves and the have-
nots, both within and between nations. Or to put the matter more
optimistically: A.I. is presenting us with an opportunity to rethink
economic inequality on a global scale. These challenges are too far-
ranging in their effects for any nation to isolate itself from the rest of the
world.

CASE STUDY: Stephen Hawking on Threat from
Artificial Intelligence
*Note: This is a quoted opinion paper, included in this guide
in order to facilitate the understanding of the agenda from a
broader perspective.

With the Hollywood blockbuster Transcendence playing in cinemas, with
Johnny Depp and Morgan Freeman showcasing clashing visions for the
future of humanity, it's tempting to dismiss the notion of highly intelligent
machines as mere science fiction. But this would be a mistake, and
potentially our worst mistake in history.
Artificial-intelligence (AI) research is now progressing rapidly. Recent
landmarks such as self-driving cars, a computer winning at Jeopardy! and
the digital personal assistants Siri, Google Now and Cortana are merely
symptoms of an IT arms race fuelled by unprecedented investments and
building on an increasingly mature theoretical foundation. Such
achievements will probably pale against what the coming decades will
bring.
The potential benefits are huge; everything that civilization has to offer is a
product of human intelligence; we cannot predict what we might achieve
when this intelligence is magnified by the tools that AI may provide, but
the eradication of war, disease, and poverty would be high on anyone's
list. Success in creating AI would be the biggest event in human history.
Unfortunately, it might also be the last, unless we learn how to avoid the
risks. In the near term, world militaries are considering autonomous-
weapon systems that can choose and eliminate targets; the UN and
Human Rights Watch have advocated a treaty banning such weapons. In
the medium term, as emphasized by Erik Brynjolfsson and Andrew

McAfee in The Second Machine Age, AI may transform our economy to
bring both great wealth and great dislocation.
Looking further ahead, there are no fundamental limits to what can be
achieved: there is no physical law precluding particles from being
organized in ways that perform even more advanced computations than
the arrangements of particles in human brains. An explosive transition is
possible, although it might play out differently from in the movie: as Irving
Good realized in 1965, machines with superhuman intelligence could
repeatedly improve their design even further, triggering what Vernor
Vinge called a "singularity" and Johnny Depp's movie character calls
"transcendence".
One can imagine such technology outsmarting financial markets, out-
inventing human researchers, out-manipulating human leaders, and
developing weapons we cannot even understand. Whereas the short-term
impact of AI depends on who controls it, the long-term impact depends
on whether it can be controlled at all.
So, facing possible futures of incalculable benefits and risks, the experts
are surely doing everything possible to ensure the best outcome, right?
Wrong. If a superior alien civilization sent us a message saying, "We'll
arrive in a few decades," would we just reply, "OK, call us when you get
here – we'll leave the lights on"? Probably not – but this is more or less
what is happening with AI. Although we are facing potentially the best or
worst thing to happen to humanity in history, little serious research is
devoted to these issues outside non-profit institutes such as the Cambridge
Centre for the Study of Existential Risk, the Future of Humanity Institute,
the Machine Intelligence Research Institute, and the Future of Life Institute.
All of us should ask ourselves what we can do now to improve the
chances of reaping the benefits and avoiding the risks.

More to go through
Videos:

• Stuart Russell – The Long-Term Future of (Artificial) Intelligence
• Humans Need Not Apply
• Nick Bostrom on Artificial Intelligence and Existential Risk
• Stuart Russell Interview on the long-term future of AI
• Value Alignment – Stuart Russell: Berkeley IdeasLab Debate Presentation at the World Economic Forum
• Social Technology and AI: World Economic Forum Annual Meeting 2015
• Stuart Russell, Eric Horvitz, Max Tegmark – The Future of Artificial Intelligence
• Talks from the Beneficial AI 2017 conference in Asilomar, CA
• Jaan Tallinn on Steering Artificial Intelligence

Media Articles:

• Concerns of an Artificial Intelligence Pioneer
• Transcending Complacency on Superintelligent Machines
• Why We Should Think About the Threat of Artificial Intelligence
• Stephen Hawking Is Worried About Artificial Intelligence Wiping Out Humanity
• Artificial Intelligence could kill us all. Meet the man who takes that risk seriously
• Artificial Intelligence Poses ‘Extinction Risk’ To Humanity Says Oxford University’s Stuart Armstrong
• What Happens When Artificial Intelligence Turns On Us?
• Can we build an artificial superintelligence that won’t kill us?
• Artificial intelligence: Our final invention?
• Artificial intelligence: Can we keep it in the box?
• Science Friday: Christof Koch and Stuart Russell on Machine Intelligence (transcript)
• Transcendence: An AI Researcher Enjoys Watching His Own Execution
• Science Goes to the Movies: ‘Transcendence’
• Our Fear of Artificial Intelligence

Essays by AI Researchers:

• Stuart Russell: What do you Think About Machines that Think?
• Stuart Russell: Of Myths and Moonshine
• Jacob Steinhardt: Long-Term and Short-Term Challenges to Ensuring the Safety of AI Systems
• Eliezer Yudkowsky: Why value-aligned AI is a hard engineering problem

Research Articles:

• Intelligence Explosion: Evidence and Import (MIRI)
• Intelligence Explosion and Machine Ethics (Luke Muehlhauser, MIRI)
• Artificial Intelligence as a Positive and Negative Factor in Global Risk (MIRI)
• Basic AI drives
• Racing to the Precipice: a Model of Artificial Intelligence Development
• The Ethics of Artificial Intelligence
• The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents
• Wireheading in mortal universal agents

Research Collections:

• Bruce Schneier – Resources on Existential Risk, p. 110
• Aligning Superintelligence with Human Interests: A Technical Research Agenda (MIRI)
• MIRI publications

Case Studies:

• The Asilomar Conference: A Case Study in Risk Mitigation (Katja Grace, MIRI)
• Pre-Competitive Collaboration in Pharma Industry (Eric Gastfriend and Bryan Lee, FLI): A case study of pre-competitive collaboration on safety in industry.

Blog posts and talks:

• AI control
• AI Impacts
• No time like the present for AI safety work
• AI Risk and Opportunity: A Strategic Analysis
• Where We’re At – Progress of AI and Related Technologies: An introduction to the progress of research institutions developing new AI technologies.
• AI safety
• Wait But Why on Artificial Intelligence
• Response to Wait But Why by Luke Muehlhauser
• Slate Star Codex on why AI-risk research is not that controversial
• Less Wrong: A toy model of the AI control problem
• What Should the Average EA Do About AI Alignment?

Books:

• Superintelligence: Paths, Dangers, Strategies
• Our Final Invention: Artificial Intelligence and the End of the Human Era
• Facing the Intelligence Explosion
• E-book about the AI risk (including a “Terminator” scenario that’s more plausible than the movie version)

Organizations:

• Machine Intelligence Research Institute: A non-profit organization whose mission is to ensure that the creation of smarter-than-human intelligence has a positive impact.
• Centre for the Study of Existential Risk (CSER): A multidisciplinary research center dedicated to the study and mitigation of risks that could lead to human extinction.
• Future of Humanity Institute: A multidisciplinary research institute bringing the tools of mathematics, philosophy, and science to bear on big-picture questions about humanity and its prospects.
• Global Catastrophic Risk Institute: A think tank leading research, education, and professional networking on global catastrophic risk.
• Organizations Focusing on Existential Risks: A brief introduction to some of the organizations working on existential risks.
• 80,000 Hours: A career guide for AI safety researchers.

Questions a Resolution must answer
1. What firm policies and stances do governments around the world
hold regarding this issue in its entirety?
2. Is there a way in which all the parameters of the AI threat, in all its
aspects, can be defined so as to make sure the self-commanding
structure never gets to operate on its own will?
3. How can military AI be made secure, specifically using all the current
resources we possess, so that our militaries stay out of the current
scenario and remain unaffected?

Nature of Reports and Evidence

1. All UN reports (from any agency of the UN) will be deemed valid,
regardless of any counter-claims.
2. State-owned news outlets and government agencies, e.g. Xinhua from
the PRC, the BBC from the UK, the CIA, R&AW, Mossad, etc.
3. Reports, facts and statistics from Wikipedia, WikiLeaks and Reuters
shall not be considered standing factual evidence in committee, as the
UN has discredited the last two sources and the first is open to broad
public editing and therefore subject to opinions both valid and invalid.

