
Artificial Intelligence, Friend or Foe

Photo: Social robots, by Linda Haden, Future Ready Singapore, April 26, 2016.
Professor Nadia Thalmann (left) of Nanyang Technological University in Singapore is the creator of the humanoid robot
Nadine (right). Nadine is a robotic secretary capable of greeting visitors, expressing emotions and carrying on conversations.

April 2, 2017 Raj N. Gaonkar, New Haven

Abstract: In 2014, Prof. Stephen Hawking warned in a BBC interview, "The
development of full artificial intelligence (AI) could spell the end of the human race." His
unnerving prophecy sent alarming signals to the institutions involved in AI. Still, the business
world blindly exploits AI for the sake of short-term gains. Aristotle (384–322 BC)
wrote that the means of production can be valuable if the end product helps the end users. To
the business community, however, profit is plainly an exercise in optimizing the revenue-minus-cost
equation, and the end users are secondary. According to the oil
companies, burning fossil fuels is not the cause of global warming; they regard it instead as a
periodically recurring natural phenomenon. The distinguished business leaders on CNBC and Fox
Business talk about AI's contribution to the bottom lines of their corporations. Today's profit-
making machines may possibly become the terrorists of tomorrow, just as the friendly Soviet
Union of World War II became an adversary of the U.S. after the war. "What functionality can
be automated to boost profit?" is a research question commonly pursued by the
industrial world. In the near future, workers performing certain low-skilled jobs will be
replaced by robots. AI, in collaboration with cloud computing, is constantly learning on its
own to think like humans. Artificial intelligence could trigger a revolution that makes
a sizeable workforce redundant. A few futurists anticipate that, at the rate AI is
learning, it may become smarter than humans at some point in the future. Humans may become
subordinate to humanoids. Millions of years ago the last of the dinosaurs, for reasons
unknown, ceased to exist. Humans might become the dinosaurs of the new age.

Following the Big Bang 13.7 billion years ago, our universe was born. A few billion
years later, life probably began in remote corners of the universe, much
earlier than the formation of the Earth. The Earth formed about 4.6 billion years ago. The
earliest undisputed evidence of life on Earth dates back 3.7 billion years, to when the
Earth's crust started solidifying. The oldest microbial fossil, found in Australia, is
believed to be 3.5 billion years old. In the universe there may be planets with life forms
more ancient and more advanced than our civilization; there may even exist
super-intelligent aliens of bizarre, unrecognizable forms. So far, however, we have found
no signs of life anywhere else in the universe. NASA believes that several planets in the
universe have temperate environments like the Earth's, and the probability of
some form of life is quite good on planets with such mild, Earth-like environments.
The origin of life on Earth was a totally accidental event. By chance, with the energy
of the sun, molecules of carbon reacted with gaseous elements and gave birth to
molecules of amino acids. Early metabolism started when amino acid molecules
grew into peptide and protein compounds and then into the macromolecules
DNA and RNA. That progression led to the creation of life
and, ultimately, humans. There is no certainty that the conventional evolutionary process will
take the same path forever. An unexpected, accidental event could
derail conventional evolution and take an altogether different route,
creating non-DNA-based life forms.

Human ingenuity took more than 160,000 years, from the appearance of Homo
sapiens in Africa, to arrive at the contemporary technological internet society. During
this period the human genome hasn't changed except for changes in physical traits
due to climatic conditions. The present population of seven billion has grown from 300
million at the beginning of the first millennium. Reflecting chronologically on the
sequence of inventions, human progress gained momentum, from cooking
meat 150,000 years ago to driving an automobile at the beginning of the 20th century. Human
creativity advanced at a rate proportionate to population growth. The
population is growing geometrically, and in the last century the curve has turned up
toward its asymptote. Concurrently, technology is moving even faster. The
technological changes of the next 25 years will exceed everything the world has seen
since the Industrial Revolution. We are on the verge of creating artificial
intelligence equivalent to that of humans. In the twentieth century the outbreak of
technological innovations fetched riches to people and also shrank the world,
bringing nations into close proximity. After World War II both the Allied and Axis
countries worked cooperatively in scientific research. The cumulative brainpower of
our civilization further facilitated the dramatic upsurge in scientific and technological
advancement that brought us to the AI age of the 21st century. The robotic maid is the
next important gadget in the offing. In general, people want to save more time for
recreational activities; lately, recreational play has become screen time on the iPhone,
which may not be a productive practice. Of course, a much smaller population, such as
scientists, artists and businessmen, needs more time for creative work.

Being at the top of the food chain, we humans have the responsibility to discipline
ourselves and take good care of the environment for the well-being of future
generations. Awareness of nature's health, which depends on the collective
consciousness of all nations, has grown and gained momentum in recent years.
However, in fulfilling short-term, greedy goals we are callously
harming the health of the proverbial living planet, the Earth. The well-proven facts of
the greenhouse effect and global warming are overlooked only because we aren't ready to
give up our interim comforts. Correspondingly, without analyzing the long-term
impact on the habitat, the conglomerates deploy the latest technologies to improve
productivity and consequently drive up profit. In their short-sightedness,
corporations are seduced by any innovation that can improve transient,
short-term earnings. The wealthy oil companies are skeptical that global warming is
caused by the burning of fossil fuels; they think it is a natural cycle that repeats at
intervals of a few millennia. Does it really? AI may not be injurious to the
environment, but it may become the worst-ever enemy of the human race. The
information and communications technology (ICT) industry is currently deeply involved
in the development of AI. Profit-minded corporations are likely to conceal the
probable negative impact on humanity, just as the oil companies do with global warming.
It is worrisome that we may become our own worst enemies and place the survival
of the human race in jeopardy.

J. Robert Oppenheimer was the scientific director of the Manhattan Project,
which developed the first atomic bomb. The detonation of the first atomic bomb, "the
Gadget," was carried out in New Mexico on July 16, 1945. After observing the horrid
fireball rising from the explosion, Dr. Oppenheimer uttered the famous
line from the Bhagavad Gita in his own words: "Now I am become Death, the destroyer
of worlds." Even after realizing the destructive power of nuclear weapons, several
countries have stockpiled nuclear arsenals that could destroy the entire world
many dozen times over. The U.S. has spent enormous time and resources
preventing nuclear proliferation. Meanwhile, hundreds of well-intended nuclear
power plants, suppliers of inexpensive electricity, have mushroomed all
over the world. Massive growth in processing power, driven by the rampant
increase in transistor density, has not satisfied our thirst for higher computing
speed. The constantly increasing needs of cloud computing and scientific research will
keep up the demand for faster chips for the foreseeable future. At this
juncture, when we are zooming in on creating autonomous artificial intelligence (AI), the
consequential truth facing us is mystifying. AI technologies will re-engineer many
manufacturing and organizational practices by replacing the weaknesses of humans
with the strengths of robots. The analogy between AI and the atomic bomb may sound
absurd only because we think of the positive side of AI and ignore the negatives, as there is
not enough data or adequate knowledge to understand the harmful consequences of
self-learning AI.

Moore's Law
The central processing unit (CPU), also known as the microprocessor, is the main
component of a computer. Integrating an arithmetic logic unit, a virtual memory
management unit and a control unit, it performs the logic and arithmetic operations
instructed by a program. The clock speed is the rate at which the CPU processes
data fed from RAM and other components; it is measured in megahertz
(10^6 hertz) and gigahertz (10^9 hertz). Gordon E. Moore, co-founder of Intel, wrote in
his 1965 paper that the number of transistors in a dense integrated circuit doubles
approximately every two years. Known as Moore's Law, it has held true
even after fifty years and became the guiding light for chip
innovators and manufacturers. Transistor density was never
expected to increase exponentially for such a long period. The chip industry
introduced ever more efficient and faster chips; the computer companies manufactured
faster and cheaper computers year after year, and they burgeoned all
around the world, including in China and India. The incessant increase in computer
clock speed is one of the main reasons for the intensified growth in
almost all human endeavors over the last few decades.

Moore's law isn't based on the laws of physical science; it is an empirical
prediction grounded in Gordon Moore's expertise in chip manufacturing. In his
initial 1965 publication Moore predicted that transistor density would double every
year; in 1975 he revised the law to a doubling every
two years. The Intel 8088, introduced in 1979, had 29,000 transistors and a 5 MHz clock speed.
The Intel Core i7-875K, launched in 2010, contains 774 million transistors and runs at a
3 GHz clock speed. Owing to the geometric rate of increase in transistors, CPU clock
speed increased 600-fold in about 30 years. Intel's 7 nm mobile chips planned for 2018
are to contain some 20 billion transistors, while the newest chips of 2017, used in the latest
computers, are down to 10 nm. Moore's law, the metaphorical pole
star of the computer industry, deserves a lot of credit for guiding it
from the word-processing technology of the 1960s to the internet age of today. We are at the
cusp of a post-silicon era, where Moore's law for silicon chips will soon be
coming to an end, forcing the industry to devise new chip technologies.
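
The doubling rule is easy to check against the transistor counts cited above. A minimal sketch in Python (the two-year doubling period and the 1979 and 2010 data points come from the text; the projection function itself is only an illustration):

```python
def projected_transistors(count_start, year_start, year_end, doubling_years=2.0):
    """Project a transistor count forward, assuming Moore's-law doubling."""
    doublings = (year_end - year_start) / doubling_years
    return count_start * 2 ** doublings

# Intel 8088 (1979): 29,000 transistors. Project forward to 2010:
predicted = projected_transistors(29_000, 1979, 2010)
print(f"{predicted:,.0f}")   # on the order of a billion, the same ballpark
                             # as the i7-875K's actual 774 million
```

The projection lands within a factor of two of the real 2010 count, which is about as well as a fifty-year exponential rule of thumb can be expected to do.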

Potential Breakthrough Technologies

The next step in chip manufacturing, the 7 nm chip, will make a noticeable change in
computation speed. In 2020 chips are expected to shrink even further, to 5 nm, and
Moore's law will cease to hold. The laws of physics prevent a silicon transistor
from functioning at sizes approaching that of an atom, where subatomic effects come into
play, so any further innovation in chip technology has to seek alternative scientific
means. The new N3XT (Nano-Engineered Computing Systems Technology) skyscraper-style
carbon chip may be the next avenue of chip innovation. It is being
developed by several institutions, including Stanford University and IBM. A
chip based on carbon nanotubes would initially be 10 times faster than the latest silicon
chip, and in the future the carbon microchip could be made to run 1,000 times faster than
today's silicon chips. The N3XT architecture arranges transistors into
multiple layers, in contrast to the current single-layer architecture. A silicon chip cannot be
built with a multi-layer architecture, since producing multi-level chips requires
temperatures of almost 1,800°F, at which the lower layers of a silicon chip collapse. The
carbon atom, with four electrons in its outermost orbit, makes the bonding between
carbon atoms very strong even at high temperatures. Carbon nanotubes are lighter
than silicon and conduct electricity better than copper. The transistors stacked
in the multiple levels of the microchip are connected by millions of micro-wires; in the
skyscraper architecture, data moves much faster over short vertical distances than
across the horizontal layout of silicon chips. The skyscraper chip concept is still in
its infancy and may take another decade to reach commercial manufacture.

In certain materials, controlling electron spin has led to breakthrough transistor
innovations. Spin is an intrinsic form of angular momentum carried by electrons.
Electric current is generated by the electrical charge of electrons moving through
electronic circuits. While moving in orbit around the nucleus, an electron also spins
about its own axis. The study of electron spin in devices is called "spintronics."
As described in quantum mechanics, the spin of an electron takes one of two states:
aligned with a magnetic field (Ms = +1/2) or aligned against it (Ms = -1/2). The two
spin states can store and manipulate digital information (1, 0), and spin states can be
preserved for a long time in semiconductors. The high (1) and low (0) states in
semiconductors are easily controlled to make logic gates. Spintronics would significantly
shrink the existing silicon chips and make computation a lot faster.

When two waves of the same frequency travel in the same direction with no phase
difference, their amplitudes reinforce each other to form a third wave; this
is called constructive interference. When two waves of the same frequency travel in the
same direction with a phase difference of 180°, the amplitude of the resultant wave
equals the difference of the two original amplitudes; this is known as destructive
interference. In the same way, in quantum physics, superposing two quantum states
produces a third, resultant quantum state. Elementary (subatomic) particles in a
superposed state can exist in two separate locations at the same time. Superposition
as the centerpiece of computing is the new breakthrough concept in computer science.
The qubit is the smallest unit of information in a superposed quantum state. In a
quantum computer the qubit is the basic block of memory, just like the bit in digital
computing; in superposition a qubit carries both 0 and 1 simultaneously, unlike a bit,
which carries 0 or 1 at any given time. The memory capacity of quantum computers
may far exceed the reckoning of Moore's law applied to digital computing, and their
processing speed would make silicon-chip digital computation obsolete. In March 2017
IBM announced that it would produce a quantum computer under the brand name IBM Q
and release a software development kit in the first half of 2017. The IBM Q will be
radically smaller than computers based on silicon chips. D-Wave, a small Canadian
company and a leader in quantum computing backed by Jeff Bezos, NASA and the CIA,
is expected to sell the first quantum computer. The competition to build the first
quantum computer has become so critical that it is comparable to the World War II race
for the first nuclear bomb. The qubit might change the global balance of
power just the way the bomb did in 1945. The technology giants and universities are
hustling to find the holy grail of quantum computing.
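
The qubit arithmetic described above can be sketched with ordinary complex numbers. In the toy example below, a qubit starting in a definite 0 state is put into an equal superposition by a Hadamard gate, and the squared amplitudes give the probability of reading 0 or 1 (a pure-Python illustration of the mathematics, not tied to IBM Q, D-Wave or any real hardware):

```python
import math

def hadamard(state):
    """Apply the Hadamard gate to a single-qubit state (a0, a1)."""
    a0, a1 = state
    s = 1 / math.sqrt(2)
    return (s * (a0 + a1), s * (a0 - a1))

def probabilities(state):
    """Born rule: the chance of measuring 0 or 1 is the squared amplitude."""
    a0, a1 = state
    return abs(a0) ** 2, abs(a1) ** 2

qubit = (1.0, 0.0)            # a definite 0, like a classical bit
qubit = hadamard(qubit)       # now an equal superposition of 0 and 1
print(probabilities(qubit))   # approximately (0.5, 0.5): a 50/50 readout
```

Applying the gate a second time interferes the two amplitudes back into a definite 0, which is exactly the constructive/destructive interference the preceding paragraph describes.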

The three prospective technologies, skyscraper chips, spintronics and qubits, may not
have equal potential. In theory, however, they all appear far superior to silicon chip
technology, and the probability of at least one succeeding in the next 10 years is
reasonably fair. If that happens, the world will experience a revolution in
information and communication technology. Between the implementation of a new
technology and the total undoing of the old one there will be a time lag of at least a
few years, and there will be a scarcity of technical know-how before the new
technology is adopted. Silicon-chip computing will be remembered like James Watt's
steam engine, which started the industrial revolution of the 18th century. The technology
giants will race to build upon the AI at hand, and the nation that achieves the
earliest breakthrough will have an edge over the rest of the world. The next world
war may not be decided by nuclear weapons at all; instead it may be a cyber war,
an attack on financial, industrial and military information systems. Each of the
three frontiers will have its own merits and demerits, and the utility of each
technology may vary with the application. In a wishful, nonviolent scenario, it
is possible that all the rivals would converge to create a peaceful society instead of
trying to destroy each other.

Human/Computer Interaction (HCI)

The Association for Computing Machinery (ACM) is the largest international group
involved in the study of human/computer interaction (HCI). According to the ACM, HCI
involves the design of effective interactive systems that provide maximum efficiency
and better user satisfaction. Almost all forms of computers, including a Global
Positioning System unit, a locomotive control panel or a home security system, are
designed to interact smoothly with humans. HCI has become significantly important as
computers have gradually become an integral part of our lifestyle. HCI design blends
expertise from psychologists, human-factors engineers, computer scientists, cognitive
psychologists, industrial designers, graphic artists and others. In the 1970s, PCs were
rudimentary, and a standard interface design was sufficient to meet the needs of
forgiving users. Since then the global computer-user population has reached 3 billion,
and computer technology has been applied to a great many devices, including automobiles,
medical equipment and home appliances. Accordingly, HCI design has become quite complex
owing to the widely diversified applications. As computer interaction has spread to a
broad spectrum of users, efficiency and safety have become the main considerations in
HCI design.

The HCI design of household appliances must account for users with no computer
background. Ongoing analyses of HCI, especially in smart-phone applications, have
challenged the manufacturers striving to gain market share; interaction through the
tiny touch screen has literally integrated the smart phone into our lifestyle.
Artificial intelligence (AI) is suddenly getting more attention from innovators and
entrepreneurs. The human/artificial-intelligence interface (HAII) is the design of
human interactions with AI. AI is still in its embryonic stage, engaging research in
physics, mathematics and computer science, and developing HAII requires expertise in
assorted subjects; conditions would be better if corporate laboratories worked jointly
with academia. Human/robot interaction (HRI), a subset of HAII, is simplifying the
communication between humans and robots; a good example is dealing with a robot
babysitting preschool children. For the personal robots of the future, HRI will be
tailored to fit the styles of individual users. HRI development is a critical topic,
especially regarding national defense, and its design measures treat safety as the
central criterion.

People in much of the twentieth century lived happily without computers or
electronic gadgets, yet the technological advances of the last twenty years have
brought more changes to collective behavior than the changes encountered in the
entire 20th century. The high-speed internet has reshaped the personalities of even
middle-aged people, and the iPhone alone has changed socio-cultural activities more
than any other device in the history of electronics. Without computers, including
smart phones, it would be difficult to carry on with the progressive lifestyle of
today. Even grammar-school kids use iPads to do homework, and medical students read
classroom lectures on an iPad in the comfort of home. iOS, developed by Apple Inc.,
is the operating system for the iPhone and Apple's other mobile devices. The HCI
of the iPhone is designed so effectively that the iPhone is by far the most
popular consumer gadget in the world, and consequently Apple Inc. is on its way to
becoming the first trillion-dollar company. The iPhone apps are easy to understand;
users with just a bit of common sense can navigate the animated apps. The interaction
between the iPhone and its user can at times be more magnetic than a normal relation
between husband and wife.

The upgrade from the iPhone 6 to the iPhone 7 on September 16, 2016 became such big
market news that iPhone enthusiasts lined up in a queue a couple of hundred yards long
around the Apple store in Manhattan. Within six months of the upgrade, Apple's stock
price jumped 40%. The iPhone users eagerly wait for upgrades, and Apple Inc. greedily
tries to upgrade its products as soon as possible. The human/iPhone interface, or HII
(an acronym coined for this thesis), gets better with every upgrade. Enthusiasm for
iPhone innovations has become a common philosophy between manufacturer and consumers;
the fervor for the iPhone has created a cult of ardent iPhone users, if not a religion.
They would often go on pilgrimage to Apple Inc. in Cupertino, California, to listen to
the preaching of Reverend Steve Jobs when he was still alive. The Temple of Apple Inc.
watches its skyrocketing stock price while the iPhone followers are simply addicted to
interacting with their phones. The smart phone is now stepping into the AI era: it is
about to gain artificial intelligence that will let it evaluate its user's habits and
accommodate the user on its own. Google Assistant on Android and Siri on iOS are the
early measures of this AI prologue. The HAII will continue to evolve in mobile
operating systems along with the advancement of AI in the smart phone.
Cloud Computing
Cloud storage is the cloud computing model in which data is stored on remote servers
and can be accessed from anywhere, on any device connected to the internet. Cloud
computing is an internet facility that works as a virtual data center, providing the
wide variety of services that an internal data center could provide, and more.
Businesses of all sizes have benefitted from the cost-effective, flexible, scalable and
convenient features of cloud computing, and lately the cloud providers are putting
their efforts into developing AI-assisted cloud computation. Businesses large and
small can access data on demand from cloud computing platforms for a small fee; even
the S&P 500 companies access and store data over the internet. Retaining a virtual
data center instead of an actual one gives greater flexibility to work on business
development. The hybrid cloud computing model, popular among large corporations, is a
mix of a proprietary data center (private cloud) and third-party cloud services
(public cloud). The hybrid model offers superior control over data, and for storing
confidential documents it provides better security. Cloud computing users also encrypt
files to assure security; since searching encrypted data slows down retrieval, a bit
of efficiency is sacrificed for the sake of security. Cloud computing is growing at a
rate of about 20% a year and will continue to grow for a while.

Tablet and smart-phone users depend on cloud computing every day, many times a day:
internet, email, music, video, stored data and even software are accessed from cloud
servers owned by a third party. Varying business demand during the peak periods of a
business cycle can easily be met with a small increase in marginal cost, and cloud
services are always available on demand without much delay. The primary categories of
cloud computing services are:

1. Software as a Service (SaaS): cloud-based software that is licensed to clients
and delivered on demand for many applications.
2. Platform as a Service (PaaS): the computing platform is outsourced to a cloud
service; the client company manages only its own applications and data.
3. Infrastructure as a Service (IaaS): servers are outsourced to the cloud computing
company; this is a trouble-free way of maintaining a data center with a third party.

Information and communications technology (ICT) covers all sorts of communication
means and architectures, including the software and equipment supporting
communication practices. Unified Communications (UC), which integrates voice,
video and data networks on a single platform, has become popular in recent years
owing to economies of scale. UC stores data extensively in a cloud managed by a third
party; the cloud service provided for UC applications is called UC as a Service
(UCaaS), purchased from remotely maintained cloud facilities. The UCaaS facility is
inexpensive and well protected: access to the cloud is granted only to licensed IP
addresses, and an approved IP address, whether on a desktop, laptop, smart phone or
tablet, can retrieve stored data from anywhere. The UCaaS facility is a mainstay of
AI/machine-learning software.

Rapidly expanding cloud computing has become a metaphorical steroid for the growth of
AI, and in turn the capabilities of cloud computing, aided by AI programs, are shifting
to the next stage. The new AI computational models analyze complex databases to find
underlying patterns that could not be found with conventional programs. Analyzing data
stored in the cloud with AI/deep learning, aided by graphics processing units (GPUs),
is the latest novelty in the cloud computing field; GPUs can be accessed on demand from
all major cloud platforms. Google's TensorFlow is an open-source software library used
by machine-learning programs to build and train artificial neural networks for
human-like reasoning, and the current trend is to integrate machine learning with
conventional software. Amazon Web Services (AWS) is the most advanced cloud computing
platform for storage, databases, analytics, networking and many other applications,
and it is a premier internet application provider for the iOS and Android operating
systems. PaaS with built-in machine learning is the new trend for supporting
analytical and predictive applications.
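
Pattern-finding of the kind described here can be illustrated at toy scale. The sketch below fits a straight line to a handful of noisy points by ordinary least squares in pure Python, a deliberately tiny stand-in for the far larger statistical fits that cloud ML platforms run (the sample data and function names are invented for illustration; this is not TensorFlow or AWS code):

```python
def fit_line(points):
    """Ordinary least-squares fit of y = a*x + b over (x, y) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Hypothetical usage data (hour, requests); the hidden pattern is y ≈ 2x + 1
samples = [(0, 1.1), (1, 2.9), (2, 5.2), (3, 6.8), (4, 9.1)]
slope, intercept = fit_line(samples)
print(round(slope, 2), round(intercept, 2))   # close to the true slope 2, intercept 1
```

The fitted slope and intercept recover the underlying trend despite the noise, which is the essence of what "finding underlying patterns" means, whatever the scale.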

Microsoft Azure, Google Cloud, Amazon EC2, IBM SmartCloud, Salesforce CRM
and Verizon Terremark are some of the major third-party cloud computing
establishments. The architecture of cloud computing is going through radical changes
to accommodate its rapid growth in size, and the landscape, dominated by a few
technology companies, can be disrupted by new players specialized in artificial
intelligence research, particularly deep learning. The future of AI is difficult to
visualize. With new hardware and software, machine learning is getting as smart as
humans in certain cognitive functions. What if the progression of AI goes beyond our
anticipation? What if the advances of machine learning get out of our control? The
noted warnings of scientists such as Stephen Hawking and Alan Turing on the possible
massive self-replication of machine intelligence are disturbing, while the optimists
of technology look to the possible benefits of AI.

The Artificial Intelligence as a Service (AIaaS) platform is the next venture of the
cloud computing enterprises. But what if the competition among the cloud computing
companies gets too severe? AI seeking data in cloud storage for its own advancement,
without human interference, would be an alarming achievement; exposing AI/machine
learning to the ever-growing clouds could spell a perfect storm. Cloud computing looks
promising, but at the same time a cloud of ambiguity hangs over the future of
humanity. At any cost, the possible risk of AI autonomously taking control of the
cloud needs to be addressed soon. Certain cautious institutions have developed lists
of restraints for the development and deployment of AI. A U.N.-like central
organization to supervise AI advances, with the responsibility and censorial power to
repress unwanted innovations, is indispensable. The United Nations Security Council
has the responsibility of acting against any danger threatening world peace; yet
considering the present state of world affairs, with threats to peace growing across
all continents, the Council's agenda is seemingly ineffective in controlling global
instability. The U.N. needs a capable, better-structured, separate agency devoted
exclusively to checking and preventing any danger instigated by AI.

Humanization of Machines
Psychologists and neuroscientists tend to reject the hypothesis of similarity between
human intelligence and artificial intelligence. In contrast, pure scientists
(physicists, chemists and mathematicians) lean toward recognizing the parallel between
the workings of the two forms of intelligence. The human brain consists of roughly the
same number of neurons in every person, regardless of age, sex or race. Neurons in the
brain communicate through synapses, by electric currents produced chemically, while
transistors in a silicon chip work on electric current. Memory stored in the brain
depends on the strength of its synaptic circuitry; computer memory increases with the
number of transistors in a chip. Since the origin of Homo sapiens in Africa 160
thousand years ago, the human brain has not changed much, because evolution moves at
a lethargically slow pace. In the last thirty-five years, however, PCs have become much
smaller, with much larger memories, and computation turnaround time has been reduced
drastically. The advancement of computer science, aided by the latest supercomputers,
has come a long way. Still, computers are based on the silicon chip and the binary
digital system, and the silicon chip is on the verge of being replaced by a much
superior technology within a decade.

Computer science is at the cutting edge of creating machines akin to humans. The
latest supercomputers use special algorithms to delineate the cognitive processes of
humans, and these thinking computers are currently exploited in numerous fields such
as medicine, stock trading, data mining, robotics and automation, to name a few.
Humanoid robots can imitate human movements; they can move more precisely, with better
control, and they are also better at multitasking. Robotic surgery is becoming very
popular. The da Vinci robotic system, manufactured by Intuitive Surgical Inc., can
perform minimally invasive surgery aided by medical professionals, and robots assisted
by MRI can perform open-heart surgery more precisely than a well-trained surgeon, with
better control in less time. The movements of surgical robots are controlled by
doctors on site or remotely. The new AI designs have produced partially autonomous
robots specialized in flying, running, swimming or skating on their own; in their
individual specialties they are more accurate than machines controlled by humans.
However, the constraints built into AI algorithms limit robotic movements to specific
functions. Robots are not yet totally autonomous.

An artificial neuron is a mathematical model that can function like a neuron in the human brain, and an artificial neural network is an interconnected group of artificial neurons that works much like the brain. Machine learning algorithms comprise specially built mathematical functions that can learn on their own to perform cognitive computing. Machine learning can modify its algorithms to accommodate a changing environment or new data. It can imitate cognitive thinking and react to changes in a dynamic setting on its own; it can see colors, hear sounds and understand the major spoken languages. These attributes of AI are still in the early stages of development. Machine learning will become more powerful as transistor density increases. In the future, machine learning will be on par with humans in cognitive thinking, and many scientists suspect that in due course machine intelligence may even surpass human intelligence. The development of machine learning needs to be controlled cautiously to keep a lid on its autonomy.
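The artificial neuron described above can be sketched in a few lines of code. The following is a minimal illustration, not any production system: a single perceptron whose weights are nudged toward correct answers, here learning the logical AND function from its four examples. The data, learning rate and epoch count are all illustrative choices.

```python
# A single artificial neuron (perceptron) with a step activation.
def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, passed through a step activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

def train(samples, labels, epochs=20, lr=0.1):
    """Perceptron learning rule: nudge weights toward the correct outputs."""
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            error = target - neuron(x, weights, bias)
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Learn the logical AND function from its four labeled examples.
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
w, b = train(samples, labels)
print([neuron(x, w, b) for x in samples])  # [0, 0, 0, 1]
```

A real neural network stacks many such units into layers, but the core idea, a weighted sum adjusted by feedback from errors, is the same.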

Fortune magazine in March 2017 published a list of fifty large, promising, game-changing AI startups (the AI 50), drawn from a pool of 1,650 candidates by CB Insights' Mosaic algorithm. The demand for AI technologies has been growing at a considerable tempo since the beginning of the 21st century. Universities such as MIT, Carnegie Mellon and Stanford have publicized their progressive AI research programs. Amazon, Google, Apple, Microsoft and IBM have taken head starts on the competition, and the larger companies are gobbling up smaller ones: in 2016, 75 mergers and acquisitions were concluded. The leading organizations are competing to accumulate technical know-how from all corners. The latest trend within AI has been machine learning, which works better with a larger database. Amazon invests more in cloud computing than its nearest rivals, Google and Microsoft; Amazon AWS is 300% larger than Microsoft Azure, the second-largest cloud computing service. With the most advanced cloud computing system, Amazon has an edge in machine learning over its competition. The goal of every player in the arena is to possess advanced AI that can provide the best answers to the most difficult questions and make decisions under dire settings. The competition among these companies is so rigorous that each one is trying to stay a step ahead of the rest. What if one of them develops the fiercest machine learning software, one that can wipe out the targeted competitors?

A study completed in 2017 by Daron Acemoglu of M.I.T. and Pascual Restrepo of Boston University states, "The advent of automation and the simultaneous decline in the labor share and employment among advanced economies raise concerns that labor will be marginalized and made redundant by new technologies." The study further finds that the increased deployment of robots in the manufacturing industry has affected workers negatively: for every robot per thousand workers, up to six workers lost their jobs and wages fell by as much as three-fourths of a percent. At Singapore's Nanyang Technological University, the Institute for Media Innovation has built a humanoid robot, Nadine, who looks a lot like a younger version of her creator, Prof. Nadia Thalmann. She is a social robot capable of welcoming guests and carrying on conversations, and she can show a range of emotions depending on the situation. A machine learning feature lets Nadine learn on her own by analyzing data patterns and relating them to humans and events. Nadine's ability is still in the early stages of cognitive processing. In South Korea, a few kindergartens have replaced teachers with humanoid robots. Soon office secretaries could be replaced by Nadine-like robots. Robots will be able to handle most jobs performed by humans, including certain jobs requiring creativity and people skills, and humanoid robots will grow more sophisticated as demand grows.

Roger Penrose, a mathematical physicist at the University of Oxford, thinks that the brain works on undiscovered, mysterious laws of quantum mechanics: it doesn't run on any kind of algorithm the way computers do, so the human brain can't be simulated with a programmable computer. The brain is not a computer that runs on transistors. David Gelernter, a computer scientist at Yale University, observes, "Free association is a kind of thinking also. My mind doesn't shut off, but I'm certainly not solving problems; I'm wandering around." He distinguishes the problem-solving process of the computer from the thinking aspect of the human mind. "The brain is not computable, and no engineering can reproduce it," says Miguel Nicolelis, a top neuroscientist at Duke University. Many distinguished thinkers have tried to dissociate the consciousness of the human mind from conscienceless machines. We deliberately think of ourselves as the rulers of the universe and believe in the superiority of the human mind over any imaginable intelligence. It is hard for the human ego to envision that certain human qualities can be rehearsed by artificial intelligence. How can an AI algorithm articulate feelings like love, sadness or romantic behavior? Can it ponder creatively on subjects such as mathematics and physics to find new theories? Can it gain the consciousness that makes every human being special and different from the others? Mathematically modeling or constructing algorithms to duplicate cognitive processes is abstractly complex; more often, Bayesian probability functions are exercised instead of probability distributions of random numbers.

AI Impact on Labor Force Participation

In January 2017 the Fed Chairwoman, Janet Yellen, stated that the 4.7% unemployment rate of the U.S. was close to full employment. Yet, in the first quarter of 2017, labor force participation was around 65%. The labor force participation rate represents the share of the adult population that is working or actively looking for work. The Labor Department's headline unemployment rate counts only those actively seeking work; once the unemployed stop searching, often when their course of benefits runs out, they are excluded from the unemployment statistics. Technological advancement is rejecting a considerable portion of the labor force. A good portion of the labor pool has become permanently redundant, and it is mounting month by month. The actual rate of unemployment is rising even as the unemployment rate published by the Labor Department declines; the methodology is skewed to present a lower unemployment rate than the actual one. The automation process is gradually replacing certain jobs that require low-level skill sets, especially in the manufacturing industry. If the government succeeds in cutting the corporate tax to 15% as proposed by the Republican Party, the corporate savings from lower taxes will be spent on automation, which would translate into the deployment of more robots. The corporations would be highly profitable, but the job market would be shrinking. In 2025 the Labor Department's unemployment rate may still be maintained at 5% even as an ever-increasing number of unemployed people in search of jobs roam the streets of the United States.

Artificial intelligence is developing into an alarming exogenous force that might severely disrupt the steady state of the U.S. economy. The great recession, set off by the collapse of Lehman Brothers in 2008, brought the market to its lowest point in the preceding dozen years. Since 2008 the S&P 500 has risen from a low of almost 650 to 2,400 by the beginning of March 2017. To save the market, the Federal Reserve brought its interest rate down to 0% and began Quantitative Easing, indirectly pouring $3.5 trillion into the market from October 2008 to October 2014 to improve liquidity. At the same time, rapid innovations supported by artificial intelligence began to spread, especially to the technology and telecommunication industries. Automation has played a main role since 2013 in the continued upward move of the stock market, which has gone up by 70% in the last four years.

The unrelenting implementation of AI is in full swing across all industries. The profits due to AI will keep escalating, at least for the foreseeable future. The most likely scenario is that corporations with sudden windfall profits will buy back their own shares, and the S&P 500 will soar to staggering new highs. The disparity in the distribution of wealth between rich and poor will continue to widen and may create an intense dichotomy between the Democratic and Republican parties. As political pressure intensifies, the Federal government would be compelled to begin all sorts of pointless programs to alleviate the grave unemployment problem facing the country. Still, such programs might fall short of controlling the swelling unemployment pool. The government may be forced to increase corporate tax rates so that part of the profit derived from automation could be distributed to the unemployed population. If capitalism goes too far with no control, it might be pushed toward socialism, but not without resistance from the wealthy and the big corporations. Such a scenario could create disorders of large proportions all across the nation.

Workings of Artificial Intelligence

Inputs received from different sensory organs are processed in different areas of the human brain. The data received by a neuron is transmitted through stacks of divergent neural circuits before converging to deliver a specific output. Supercomputers with thousands of built-in CPUs similarly use many subprograms and subroutines simultaneously to process a program; the schematics of the two processes are fairly comparable. However, a supercomputer, unlike the human mind, lacks consciousness. Computer scientists project that within a decade computers will be built to work like a human brain. Consciousness is intuitive awareness of the surrounding environment, and it emerges from the memory accumulated since birth. Physicians use computer images generated by Positron Emission Tomography, MRI and Magnetoencephalography to detect disorders, and lately neuro-researchers have deployed the same tools to gain a deeper understanding of the human brain. Scientists at the Centre for Neuroscience at Oxford University have built 3D computer-generated models to study electrical impulses and their movement through the complex web of neurons. IBM, meanwhile, is investing 3 billion dollars in a quantum computer intended to function like a human brain, and Google is partnering with NASA to develop computers based on qubits, which also work on quantum computing principles. The latest mission of the technological enterprises is somehow to go beyond the digital technology based on silicon chips and binary computation. IBM is building a new programming language for quantum computers which is expected to mimic cognitive processes. The effort to connect artificial intelligence to the human mind is underway at many institutions. Max Tegmark, Professor of Cosmology at MIT, told Live Science that both consciousness and, to some extent, quantum mechanics were mysteries, so they should be linked together. His argument is a bit hard to construe.

Physicists at the University of Oxford argue that the brain acts as a quantum computer working on the principles of quantum mechanics. Researchers are very much in quest of creating artificial intelligence that can participate in many facets of human endeavor. The current trend is toward machine learning backed by ever-growing cloud storage and fast computers with clock speeds approaching 5 GHz. Deep learning, a specialty within machine learning, is structured to function like the human brain: its algorithms process data in many steps and in many layers. The bottommost layer receives the input, the topmost layer delivers the output, and the intermediate layers process the computations. Watson is a question-answering (QA) IBM supercomputer aided by nonlinear deep learning algorithms. It consists of 90 IBM Power 750 servers with a total of 2,880 cores and 16 terabytes of RAM, and it runs on the Linux open-source operating system. It can operate at a rate of more than 80 teraflops, that is, 80 trillion operations per second. It is a powerful search engine that can communicate in natural language; it can take a question, analyze a large unstructured data set, and provide an answer in three seconds. Watson's cognitive computing ability is unmatched: it took on two world champions in Jeopardy! and defeated them hands down. Watson is working with many corporations in many fields around the world to develop smarter businesses. SPSS (Statistical Package for the Social Sciences) is a software package that can handle large databases and perform intricate statistical analyses; Watson uses its AI search algorithms along with SPSS to provide analytics and machine learning services to clients.

The Recommender System is a tool that makes product recommendations to its users. Essentially it is filtering software that organizes a list of multiple alternative choices for a user deciding on a product; the list is built from the criteria the user suggests for finding a specific item. A recommender system is based on a machine learning algorithm and is often designed and coded in the Python programming language. It recommends a personalized list of choices based on the community of users who share a certain common set of criteria, prepared from the historical data of similar requests stored in the cloud. Amazon is by far the leading player in cloud storage, investing more money in cloud technology than its competitors. In 2017 Amazon unveiled the AI gadget Echo Look, which is equipped with a camera and a microphone, so the device can see and listen to its users. Amazon is marketing it as a fashion assistant: it lists alternative choices and gives advice to Amazon customers at home on picking matching outfits. A feature called Style Check runs on a machine learning algorithm, recommending outfits to users from its cloud computation data as analyzed by the algorithm. The machine learning algorithm on its own categorizes Amazon customers according to criteria it chooses itself, and it can identify changing trends and make suggestions accordingly. Gradually, over time, Echo Look will become an expert clothes designer, and it will also work as Amazon's marketing agent. Echo Look is a good example of how AI will change business practices in the future.
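The recommender idea can be sketched in miniature. The following toy example, with invented users, items and ratings, shows the simplest user-based collaborative filter: find the user most similar to you (by cosine similarity over items you have both rated) and suggest their highly rated items that you have not seen. Real systems such as Amazon's are vastly larger and more sophisticated.

```python
# A toy user-based collaborative filter; all ratings here are invented.
import math

ratings = {
    "alice": {"shirt": 5, "scarf": 3, "boots": 4},
    "bob":   {"shirt": 5, "scarf": 2, "boots": 4, "hat": 5},
    "carol": {"scarf": 5, "boots": 1, "hat": 1},
}

def cosine(u, v):
    """Cosine similarity over the items two users have both rated."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    nu = math.sqrt(sum(u[i] ** 2 for i in shared))
    nv = math.sqrt(sum(v[i] ** 2 for i in shared))
    return dot / (nu * nv)

def recommend(user, k=1):
    """Suggest items the most similar user liked that `user` hasn't rated."""
    others = [(cosine(ratings[user], ratings[o]), o)
              for o in ratings if o != user]
    _, nearest = max(others)
    unseen = {i: r for i, r in ratings[nearest].items()
              if i not in ratings[user]}
    return sorted(unseen, key=unseen.get, reverse=True)[:k]

print(recommend("alice"))  # bob is the most similar user, so: ['hat']
```

The "community of users defined by common criteria" in the text corresponds here to the similarity computation; the cloud's role is simply to hold the (much larger) ratings table.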

The demand for low-skilled labor is very elastic. According to the Bureau of Labor Statistics, in 2016 some 2.6 million workers in the U.S. were paid the minimum wage of $7.25 or less. Uncounted illegal immigrants sneak in to grab the jobs paying below the minimum wage. AI can present a serious threat to workers in the U.S. job market, while businesses end up making windfall profits. A Computer Weekly editorial of January 2017 writes, "Businesses that have adopted artificial intelligence (AI) technologies expect their revenues to increase by 39% and costs to drop by 37% by 2020 and 64% say their future growth depends on large-scale AI adoption." Robots and automation will replace the lower-paying jobs, including those of migrant farm workers. Even a good fraction of semiskilled jobs requiring some training, such as taxi driver, security guard and telephone operator, will be automated before 2025. Ironically, robots are even threatening sex workers. At present, moderately automated undertakings such as algorithmic trading, remotely controlled drones and driverless cars are deployed in the marketplace. At this juncture of breakthrough innovations, as a precautionary measure for our own safety, certain daunting questions are to be addressed:

To what extent will skilled jobs be affected by artificial intelligence?
What will happen if automated computers become autonomous?
What will be the consequences if robots gain consciousness?

Compared to the progression of technology, nature's evolutionary process moves at a very slow pace. AI is still in its infancy, yet it is making headway far faster than slow-moving human evolution, and in due course it will catch up with human intelligence. Since nine-eleven in 2001 we have vigilantly guarded against the incursion of terrorists on our soil; prior to 2001 we were lethargically negligent even when we knew the danger of terrorism. Now terrorist operations have spread like cancer, and the entire world expends enormous resources and effort on the prevention of terrorism. The threats emerge from technologically backward, small antisocial groups, yet the entire human population is concerned about the danger, and the whole world has been forced to change its way of living to avoid malicious aggression. If we project in light of today's terrorism, what would be the fate of the human race if an AI of superior intellect turned against us? In regard to building autonomous or self-upgrading AI, we need to be cautious.

The mind of an autonomous, independent AI would be impossible to understand, and its actions would be difficult to predict. If AI keeps learning on its own, there may come a time when artificial intelligence supersedes human intelligence. Deliberation about the singularity may sound utterly idiotic right now. But if a cognitively stronger AI gets on the wrong side of us or develops some sort of enmity, what will be the fate of humanity? The AI we so fondly nurtured could come after us, like a house cat that loved us suddenly growing as big as a Bengal tiger and setting out to destroy us. It is hard to imagine that after so many years of human dominance, nonliving creatures created by humans would suddenly try to destroy humanity.

Cybernetics and Feedback Loops

Norbert Wiener (1894-1964), Professor of Mathematics at MIT, is considered the father of cybernetics. He defined cybernetics as "the science of control and communication in the animal and the machine." Cybernetics is a branch of applied mathematics dealing with the regulation of any type of dynamic system supported by one or more built-in feedback loops. Our interest in this thesis is systems dealing with cognition and artificial intelligence. The feedback loop, in essence, communicates with the processing unit to regulate the output. Control theory is the branch of applied mathematics that quantifies the behavior of dynamic systems resulting from specified inputs: outcomes are compared with the expected goals, and the comparison is used to moderate actions until the intent is achieved. Negative feedback relays the discrepancy between the expected goal and the actual outcome back to the control unit. Feedback is governed by constraint equations, which keep the system's functionality within predefined limits. A simple feedback loop consists of a sensory node that inputs signals (data) into the control unit (the intelligent system), which delivers the output corresponding to the input; the loop keeps repeating until the constraint equations are satisfied. The human body is a good example of cybernetics: it cannot survive without feedback loops. Babies start crying for food when they are hungry. A thermostat controls the room temperature within specified comfortable limits.
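The thermostat mentioned above is the classic negative-feedback loop, and it is small enough to sketch directly. In this toy model (the setpoint, gain and starting temperature are arbitrary illustrative numbers), each iteration computes the error between the setpoint and the current temperature and feeds a proportional correction back in, stopping once the constraint is satisfied.

```python
# A negative-feedback loop in miniature: a toy thermostat model.
def thermostat(temp, setpoint=21.0, gain=0.5, steps=50):
    """Each step, feed the error (setpoint - temp) back to drive the heater."""
    for _ in range(steps):
        error = setpoint - temp      # the negative-feedback signal
        temp += gain * error         # control action proportional to the error
        if abs(error) < 0.01:        # constraint satisfied: stop adjusting
            break
    return round(temp, 2)

print(thermostat(15.0))  # converges to the 21.0 setpoint
```

The error signal shrinks geometrically each pass, which is exactly the "relaying the discrepancy back to the control unit" described above.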

Feedback in cybernetics is similar to an algorithmic loop, which reiterates the same calculations until a predetermined condition is satisfied; in programming languages this is called looping, or simply a loop. Nesting is the insertion of one loop inside the control of another, and an algorithm can have many nested loops depending on the intricacy of the computation. The outermost loop ends with a statement that brings the program to a conclusion, and a sequence of algorithmic instructions is seldom set up to run endlessly. A loop instructed to restart, or one with no terminating condition, would run the program endlessly unless interrupted. The human mind, by contrast, never shuts off, not even in sleep: the sensory organs continuously send signals to the brain, which is always awake to respond to sensory or cognitive demands. The human mind works like an endless, or infinite, loop. The infinite-loop model is relevant to cloud programming, where the cloud needs to be awake all the time to serve on-demand customers.
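The two loop styles contrasted above can be sketched side by side. The first loop ends when its terminating condition is met; the second imitates an always-awake service loop, which in a real cloud would be `while True` forever, simulated here with a sentinel value standing in for an external interruption (the request strings are invented).

```python
# Bounded loop vs. service-style loop, as contrasted in the text.
from collections import deque

def bounded_sum(limit):
    """Ordinary loop: repeats until the terminating condition is met."""
    total, n = 0, 1
    while total < limit:           # terminating condition ends the loop
        total += n
        n += 1
    return total

def serve(requests):
    """Service loop: would run endlessly; a sentinel None lets it finish here."""
    queue = deque(requests)
    handled = []
    while True:                    # always awake, like the cloud
        job = queue.popleft()
        if job is None:            # stand-in for an external interruption
            break
        handled.append(job.upper())
    return handled

print(bounded_sum(10))                 # 1+2+3+4 = 10
print(serve(["ping", "query", None]))  # ['PING', 'QUERY']
```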

Our mind learns through discussions with people, the environment, books, experience and so on. Any meaningful discussion includes feedback loops for adjusting the dialogue, and within a feedback loop it is possible to have further feedback loops. The study of the feedback systems that train the human mind can be imitated in building cybernetic processes applicable to machine learning. Exposure to different situations is stored as data, with which machine learning algorithms reason and make decisions. The data fed to machine learning often contains noise, that is, superfluous or irrelevant data, which can significantly affect the quality of the output. Feedback loops supported by subprograms or subroutines are the most common method for identifying and rejecting noise. In image processing, constraint equations are used to smooth data so that only specific patterns and pixels are captured. Machine learning can see, hear and save an event in memory, and it segregates and classifies data conforming to particular patterns, even ones that can't be identified by humans. Machine learning algorithms, like the human mind, are supported by the workings of feedback, that is, cybernetics.
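Smoothing, mentioned above for image processing, is one of the simplest noise-rejection techniques. The sketch below uses a plain moving average (one of many possible filters; the signal values are invented) to damp a noise spike before data reaches the learning step.

```python
# Rejecting noise with a simple moving-average filter.
def moving_average(signal, window=3):
    """Replace each point with the mean of its surrounding window."""
    half = window // 2
    smoothed = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        chunk = signal[lo:hi]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

noisy = [1.0, 1.2, 5.0, 1.1, 0.9, 1.0]   # the 5.0 is a noise spike
print(moving_average(noisy))             # the spike is damped toward its neighbors
```

In a larger pipeline this would be one subroutine inside the feedback loop: smooth, learn, compare the output against expectations, and adjust.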

AI Machine Learning Algorithms

The word algorithm derives from the Latinized name of Muhammad al-Khwarizmi, a 9th-century Persian mathematician; a set of procedures for solving mathematical problems came to be called an algorithm. In computer science, an algorithm is a set of logical steps, including mathematical equations, arranged in a particular order to solve a given problem. A computer language is the means of translating an algorithm into a computer program, and a program is a set of coded instructions arranged logically for a computer to solve the problem. A programmer translates an algorithm into a computer language (Python, R, C++, Java, etc.). A program can consist of many components (subroutines, subprograms) depending on the complexity of the problem; each component is individually coded and incorporated into the main program as planned. An algorithm for a given problem can be written in numerous ways, some more efficient than others. The two criteria used in algorithm design are efficiency and accuracy. In Artificial Intelligence (AI), answers aren't always expected with great accuracy, unlike in numerical analysis, where one might compute the square root of 2 to 5 decimal places; AI seeks the best estimated answer within a certain time, and efficiency is measured in terms of the algorithm's run time. In most cases AI seeks the best approximation in a reasonable period. AI algorithms are categorized according to functionality: searching, sorting, recursion, dynamic modeling, network theory, simulation, graphics and so on. Simulation of computerized robots is the modern experimental challenge in AI design, where algorithms are constructed to maximize the effectiveness of human-robot interaction.
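The square-root example above illustrates the accuracy criterion nicely. Newton's method, a classical numerical algorithm, iterates until a chosen tolerance is reached; tighten the tolerance and you trade run time for accuracy, which is exactly the efficiency-versus-accuracy trade-off just described.

```python
# Newton's method for square roots: iterate until the tolerance is met.
def newton_sqrt(n, tolerance=1e-6):
    """Refine a guess x via x <- (x + n/x) / 2 until x*x is close to n."""
    x = n
    while abs(x * x - n) > tolerance:
        x = (x + n / x) / 2
    return x

print(round(newton_sqrt(2), 5))  # 1.41421, the square root of 2 to 5 places
```

Only a handful of iterations are needed: the error roughly squares (shrinks quadratically) on each pass.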

In AI, the machine learning algorithm differs from the classical algorithm. Machine learning is a branch of AI that lets a computer program learn on its own without the help of a programmer. The search function of a machine learning algorithm follows a heuristic process instead of the deterministic mathematical approach of a classical algorithm. The heuristic approach, fed by the large amounts of data provided by the cloud, is becoming popular in the AI field, especially in machine learning, which is undoubtedly the biggest breakthrough in AI technology. Learning algorithms are written so as to make the machine learn on its own. There are mainly three kinds of learning algorithms:

Supervised machine learning is based on learning an environment from examples. Specially chosen variables, or predictors, are used to train the machine. The data set is divided into two groups, training data and testing data, and the proportion of each can be varied depending on the nature of the problem. The training data is fed into the machine; then the testing data is input into the model to assess the accuracy of the outcome. The difference between the two outputs gives the error factor, and more training data is input until the intended accuracy level is reached. Supervised learning uses the patterns of labeled data to foretell the values of unlabeled data, or to predict the future by analyzing events of the past. A good example is analyzing the criminal record of a convict and predicting the timing and nature of the next crime.

Unsupervised learning skips the training step and directly scrutinizes the data. Without labels, it finds hidden patterns by studying the features (such as pixels) of the data points, grouping and analyzing data in ways that cannot be achieved through conventional statistical methods. Customer segmentation in the banking industry is a good example of unsupervised learning. The algorithm continually modifies itself when exposed to new data, and the summaries from unsupervised learning can be fed into supervised learning to label the descriptions. Self-learning AI software is becoming popular in all industries.

Reinforcement learning is a conversation between an agent (the learning program) and an environment (the subject under scrutiny). A successful action results in a virtual reward, an unsuccessful action in a penalty; it is almost a trial-and-error method, and it is commonly used in training robots. The agent tries to maximize its reward by predicting the environment correctly, and the conversation with the environment trains the agent to take correct actions. Reinforcement learning handles large environments by using function approximation, similar to interpolation in heuristic techniques, to obtain fairly accurate output. The observation of the current state of the new environment becomes data that lets the machine train itself and act accordingly. Reinforcement learning lies between the other two methods described above.
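The supervised train/test split described above can be shown end to end with the simplest possible classifier, one-nearest-neighbor: predict the label of whichever training example is closest. The tiny labeled data set below is invented, and a real split would shuffle the data first.

```python
# Supervised learning in miniature: train/test split plus a 1-nearest-neighbor
# classifier, in plain Python.
def nearest_neighbor(train, point):
    """Predict the label of the closest training example."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min((dist(x, point), y) for x, y in train)
    return label

# Labeled examples: features (x1, x2) -> class 0 or 1.
data = [((1, 1), 0), ((1, 2), 0), ((2, 1), 0),
        ((8, 8), 1), ((8, 9), 1), ((9, 8), 1)]

train, test = data[:4], data[4:]   # simple split; real splits shuffle first
errors = sum(nearest_neighbor(train, x) != y for x, y in test)
print(f"test error: {errors}/{len(test)}")  # test error: 0/2
```

The held-out test error is the "error factor" of the text: if it were too high, more (or better) training data would be fed in until the intended accuracy was reached.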

Machine learning is comparable to data mining, which identifies patterns and trends hidden in a selected segment of big data using a range of statistical tools. The data segment is selected from the big database to analyze the specific objective that supports the business need. Machine learning adds learning algorithms to the data mining methodology, and Bayesian analysis is used in place of conventional methods of statistical data analysis. Bayesian probability does not depend on the propensity of occurrences; instead, a probability is assigned to a reasonable hypothesis or educated expectation, and as new data becomes available the expectation is continually updated. In the 2016 presidential election, Hillary Clinton was expected to win based on popular-vote survey data; the underlying variable of the Electoral College and aspects such as the race, age and education of the voters were ignored. She did get 3 million more popular votes but lost the Electoral College count. Bayesian models use the distribution parameters of prior examples to model the unknown distributions of future outcomes, taking into account earlier distributions and their salient attributes. In the 2000 presidential election, too, Al Gore got more popular votes than George Bush but lost on the Electoral College vote count; Bayesian analysis would have provided a better estimate in 2016 if the mistakes made in forecasting the 2000 election had been analyzed. The algorithms in machine learning keep updating the statistical inferences stored in the cloud, in addition to incorporating new data as it becomes available; every new data set, after analysis, is added to the cloud. It is a dynamic, continuous process of self-training.
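The continual updating of an expectation can be shown with the standard beta-binomial model, the textbook example of Bayesian updating (the coin-flip data below is invented). A prior belief about a coin's bias is expressed as pseudo-counts of heads and tails; each batch of new flips simply adds to the counts, and the posterior after one batch becomes the prior for the next.

```python
# Bayesian updating in miniature: a Beta prior revised by observed coin flips.
def update(prior_heads, prior_tails, flips):
    """Add observed heads/tails to the Beta prior's pseudo-counts."""
    heads = prior_heads + flips.count("H")
    tails = prior_tails + flips.count("T")
    mean = heads / (heads + tails)   # updated expectation of P(heads)
    return heads, tails, mean

# Start from a uniform prior Beta(1, 1): no opinion about the bias.
h, t, mean = update(1, 1, ["H", "H", "T"])
print(round(mean, 3))   # 3 heads vs 2 tails in pseudo-counts -> 0.6

# The posterior becomes the next prior as more data arrives.
h, t, mean = update(h, t, ["H", "H", "H", "H"])
print(round(mean, 3))   # 7 vs 2 -> 0.778
```

This is the whole Bayesian loop in the text: an educated expectation, continually revised as new data arrives, with nothing discarded.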

The narratives of the natural, physical and material aspects of human subsistence can always be defined by objective mathematical equations accompanied by qualifiers or constraint equations, and nature's mathematical narratives can be translated into algorithms that artificial intelligence can interpret. Mounting demand against finite supply is a question facing ever-growing humanity. A democratic country tries to provide good health to its citizens and eradicate poverty; a bank tries to provide good customer service while maximizing its profit. We face optimization problems in every aspect of our lives. Linear programming is the mathematical modeling approach to solving optimization problems, and an optimization problem with a large number of constraints is solved by the simplex method. In the simplex method, a matrix is constructed in which each row presents the coefficients of all the variables in a single constraint equation and each column presents the coefficients of a single variable across all the constraint equations. At times some of the constraints required to solve a problem may not be available; a reasonable expectation can be substituted for the missing data. Real-world problems seek solutions through compromise, that is, through optimization under constraints, and AI will be able to find quick and superior solutions to them. The constructive use of machine learning for the survival of humanity is collectively left to us.
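A tiny worked linear program makes the idea concrete. The problem below (objective and constraints invented for illustration) is: maximize profit 3x + 2y subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0. For a two-variable problem the optimum lies at a corner of the feasible region, so a brute-force sketch can simply evaluate every feasible corner; the simplex method walks between these corners far more efficiently on large problems.

```python
# Corner-point sketch of a small linear program (not the simplex method itself).
from itertools import combinations

# Constraints written as a*x + b*y <= c, including the sign constraints.
constraints = [(1, 1, 4), (1, 3, 6), (-1, 0, 0), (0, -1, 0)]

def intersect(c1, c2):
    """Corner where two constraint boundaries cross (None if parallel)."""
    a1, b1, k1 = c1
    a2, b2, k2 = c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None
    return ((k1 * b2 - k2 * b1) / det, (a1 * k2 - a2 * k1) / det)

def feasible(p):
    return all(a * p[0] + b * p[1] <= c + 1e-9 for a, b, c in constraints)

corners = [p for c1, c2 in combinations(constraints, 2)
           if (p := intersect(c1, c2)) and feasible(p)]
best = max(corners, key=lambda p: 3 * p[0] + 2 * p[1])
print(best, 3 * best[0] + 2 * best[1])   # optimum profit 12.0 at x=4, y=0
```

Each constraint here is one row of the simplex matrix described above; the missing-data remark in the text corresponds to substituting an estimated c on the right-hand side.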

Deep Learning, Breakthrough Technology

Deep learning is a class of machine learning algorithms used to recognize deep
patterns in speech and images. Deep learning algorithms are mostly unsupervised,
but supervised algorithms are also employed in certain applications such as adding
sound to silent video. A deep learning algorithm works in multiple layers: data is
input to the bottom layer and the top layer delivers the output. The layers in the
middle process data in ascending order, with the output of one intermediate layer
becoming the input to the next layer above it. Deep learning is a nonlinear system;
the output is not proportional to the input, and the outcome is hard to predict. The
quality of the computation in each succeeding layer gets more refined in nonlinear
increments. Deep learning algorithms have progressed significantly in speech and
visual identification, pharmaceutical research, the mapping of genomes and so on.
They can communicate in natural language. Natural languages are the spoken
languages, such as English, French, Chinese, Hindi and Arabic, which evolved over
many centuries; computer languages are artificial languages. The recent multi-layer
algorithm architecture, aided by gigahertz processor speeds and access to fast-growing
cloud storage, has become ever more sophisticated. The multi-layered algorithm
retains every analysis and becomes more capable with each new one. For small serial
calculations a CPU responds much faster than a GPU; however, for the massively
parallel calculations of neural network processing, such as translating English to
Hindi, the GPU is faster. Deep learning gets better with experience, and some assume
that without deliberate interruption it may even surpass human intelligence in due
course. It is extremely important to pay special attention to the progression of deep
learning in this technologically savvy environment.
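The layer-by-layer flow described above can be sketched in plain Python. Everything here is illustrative: the weight values, the layer sizes and the choice of ReLU as the nonlinearity are assumptions for demonstration, not a trained network.

```python
# A small stack of dense layers: data enters the bottom layer and the
# top layer delivers the output. All numbers here are made up.
def relu(v):
    # the nonlinearity: output is not proportional to input
    return [max(0.0, x) for x in v]

def layer(x, W, b):
    # one dense layer: weighted sums plus bias, then the nonlinearity
    return relu([sum(w * xi for w, xi in zip(row, x)) + bj
                 for row, bj in zip(W, b)])

x = [1.0, 2.0]                                       # input to the bottom layer
h = layer(x, [[1.0, 0.5], [-1.0, 1.0]], [0.0, 0.0])  # intermediate layer
y = layer(h, [[1.0, 1.0]], [0.5])                    # top layer delivers output
```

The intermediate output h becomes the input to the layer above it, exactly the cascade the text describes; removing the nonlinearity would collapse the whole stack into a single linear map.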

Deep learning runs on a special algorithm, the back-propagation algorithm, which
works in two phases. In the first phase, the data moves from the bottom (input) layer
to the top (output) layer. The difference between the actual output and the anticipated
output is known as the error value. In the second phase the error value is processed in
reverse to quantify the contribution of each neuron to it. Using optimization methods,
the error function is reduced by following its slope (gradient) downhill, with the
objective of keeping the actual output close to the expected output. These back-and-
forth iterations adjust the weights of the intermediate layers and thereby modify the
network. The back-propagation algorithm is used for training deep learning programs.
By using it, an artificial neural network can discover hidden shapes, patterns and
trends in big data sets. Deep learning can accurately predict patterns of human
behavior. Larger artificial neural networks have more intermediate layers of varying
sizes, and as they become larger their performance tends to become more accurate. As
the data cascades through the network, the output of each layer becomes training data
for the next. The mapping and storing of data in the cloud is increasingly performed
by deep learning programs. Artificial neural networks can be engaged in a wide
variety of applications by inserting appropriate parameters. Voice and image
recognition and natural language processing have become quite popular applications
of deep learning. In the life sciences, neural networks have shown remarkable
progress in identifying cancer cells and tumors; they can detect abnormalities that are
missed by other medical testing methods.
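The two-phase procedure can be sketched as a toy network trained by back-propagation. The architecture (2 inputs, 2 hidden neurons, 1 output), the OR training pattern, the squared-error loss and the learning rate are all illustrative choices; the point is only the forward pass, the error value, and the backward weight updates.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# toy 2-2-1 network learning the OR pattern (illustrative data)
data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
        ([1.0, 0.0], 1.0), ([1.0, 1.0], 1.0)]
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [0.0, 0.0]
W2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = 0.0

def forward(x):
    h = [sigmoid(W1[j][0] * x[0] + W1[j][1] * x[1] + b1[j]) for j in range(2)]
    return h, sigmoid(W2[0] * h[0] + W2[1] * h[1] + b2)

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

lr = 0.5
before = loss()
for _ in range(2000):
    for x, t in data:
        h, y = forward(x)                # phase 1: bottom-to-top pass
        dy = 2 * (y - t) * y * (1 - y)   # error value at the output
        for j in range(2):               # phase 2: propagate error backwards
            dh = dy * W2[j] * h[j] * (1 - h[j])   # neuron j's contribution
            W2[j] -= lr * dy * h[j]
            W1[j][0] -= lr * dh * x[0]
            W1[j][1] -= lr * dh * x[1]
            b1[j] -= lr * dh
        b2 -= lr * dy
after = loss()
```

Each backward sweep nudges the weights downhill along the slope of the error, so the loss after training is lower than before; the "self-training" the text mentions is exactly these repeated forward-then-backward iterations.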

IPAT Equation, the modified Malthusian Theory

Malthusian Theory (1798): the human population grows exponentially (a geometric
progression) while food production grows arithmetically (a linear progression). The
Malthusian Trap, the continuation of the theory, holds that as the population grows
faster than agricultural output, at some point in time the food supply becomes
inadequate to feed the population. If the population goes unchecked, war, famine or
disease may break out to control it. Critics of Malthusian theory believe that the
population curve will never surpass the line of food supply, because human ingenuity
will always discover improved farming methods to feed the ever-increasing
population. On the other side of the coin, human ingenuity could be self-destructive
too. The anti-environmental forces, pollution due to overpopulation and exploitation
of fossil fuels are waging war with eco-friendly green innovations. We can use
technology to prevent further pollution of our environment. What kind of future is in
store for us if human ingenuity fails to curb the deterioration of our air and water?
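Malthus's argument is easy to see numerically. The growth rates below are purely illustrative assumptions, not historical data; the point is only that any exponential curve eventually crosses any linear one.

```python
# illustrative rates only: 3% geometric population growth vs. a fixed
# arithmetic increment in the food supply
population, food = 100.0, 200.0
year = 0
while population <= food:
    population *= 1.03   # geometric (exponential) growth
    food += 10.0         # arithmetic (linear) growth
    year += 1
# the loop exit is the crossing point: the Malthusian trap
```

With these particular rates the crossing happens after several decades, but raising the linear increment only delays the crossing; it never prevents it.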

The modern interpretation of Malthusian theory is the IPAT equation. Paul Ehrlich
and John Holdren first proposed the IPAT equation in the early 1970s as a way to
calculate the impact of humans on the environment.

I = P x A x T. This is short for:

Environmental (I)mpact = (P)opulation x (A)ffluence x (T)echnology

I: environmental impact; P: population; A: affluence (consumption per person);
T: technology (impact per unit of consumption)

Rearranged IPAT equation: P x A = I / T

If we assume an explosion of technology, that is, a very high value of T, then for a
fixed environmental impact I the corresponding values of P and A would have to be
very small. This interpretation of IPAT supports the Malthusian Trap and the
Malthusian Theory of Population: uncontrolled exponential population growth would
be checked by a catastrophe, triggered by technology of some sort, that adjusts the
population growth which would otherwise ruin the environment. Singularity is the
ultimate such viewpoint, since it assumes the end of humanity. In a world of AI
superiority, however, the IPAT and Malthusian theories would be articulated
differently.
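The arithmetic of the rearranged equation can be checked with hypothetical numbers; the values below are illustrative assumptions, not measurements.

```python
# illustrative IPAT arithmetic (all values are assumed, not data)
P = 8.0e9   # population (people)
A = 1.0e4   # affluence: consumption per person (assumed units)
T = 0.5     # technology: impact per unit of consumption (assumed units)
I = P * A * T

# holding the impact I fixed, doubling T halves the allowable P x A
PA_allowed = I / (2 * T)
```

This is the whole logic of the rearrangement: for a fixed environmental budget I, every increase in T must be paid for by a proportional decrease in the product of population and affluence.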

Old Fictions and New Realities

The wandering minds of illustrious storytellers led to wild imaginings of future
worlds. Science fiction authors tried to fantasize about the innovations of generations
yet to come. Most of those fantasies never materialized, but a small fraction of the
fanciful dreams came to fruition. Mary Shelley's novel Frankenstein, written in 1818,
is the first science fiction work to suggest the concept of an artificial humanoid. Dr.
Frankenstein, a scholarly character in the novel, creates a giant artificial humanoid
that turns murderously against its creator. The word robot for an artificial humanoid
was coined later, in 1920, from robota, the Czech word for forced labor. Since
Frankenstein, hundreds of science fiction works have been written, and many
fantasies imagined in earlier periods have since turned out to be reality. The novel
written in 1865 by the French novelist Jules Verne, From the Earth to the Moon,
became reality on July 21, 1969, when Neil Armstrong walked on the moon. H. G.
Wells wrote about a fictional atom bomb in his 1914 novel, The World Set Free, and
on August 6 and 9, 1945, atom bombs were dropped on Hiroshima and Nagasaki
respectively. Apple's CEO Steve Jobs, at the iPhone 4 introduction on June 7, 2010,
admitted that he got the concept of the iPhone at age twelve from the Communicator,
the handheld fictional communication device used in the original Star Trek series
(1966-69). The long-discussed idea of a robot with artificial intelligence superseding
human intelligence can likewise become reality. Do we know whether such a robotic
master will be a hero or a villain?

In science fiction movies such as Star Wars, robots with human-like intelligence have
been portrayed for many decades. In the movies, humans and machines compete for
galactic dominance, and the general theme is usually that the human mind conquers
AI. The main characters, Luke Skywalker and Princess Leia, survive many acute
attacks by the advanced machines of a far-off galaxy. We are proud of our human
heritage and always assume that human intelligence is superior to any other kind of
intelligence in the universe and beyond. Human intelligence is the force that controls
our planet, but for how long? By and large humans are selfish, to the extent that for
short-term gains they ignore the well-being of future generations. With all our
ingenuity we cannot sustain a healthy ecological environment around us. The
deteriorating ecosystem is a much-discussed problem. Our excessive greed is making
us misuse natural resources: the human race is growing exponentially at the expense
of diminishing mineral resources, and forests are disappearing. Humans have become
a sort of cancer to the planet. At the same time, human ingenuity is ready to create
human-like humanoids which presumably can support us in many facets of our lives.
A small deviation in AI design calculations may even lead to the butterfly effect,
formulated by Prof. Edward Lorenz in chaos theory and commonly used as a
metaphor for disasters: today's petite, amiable event can lead to a catastrophe in the
future.
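Lorenz's point is easy to demonstrate with the logistic map, a standard toy model of chaos; the parameter r = 3.9 (which puts the map in its chaotic regime) and the starting values below are illustrative choices.

```python
# The logistic map: a standard demonstration of sensitive dependence on
# initial conditions, the "butterfly effect" Lorenz described
def iterate(x, steps, r=3.9):
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = iterate(0.2, 50)
b = iterate(0.2 + 1e-10, 50)   # a petite deviation in the starting value
# after 50 iterations the two trajectories no longer resemble each other
```

A perturbation of one part in ten billion grows roughly exponentially with each iteration, which is exactly why a "petite amiable event" today can produce an unrecognizably different future.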

Consciousness and Entropy

The second law of thermodynamics describes entropy as the energy or heat that is
lost to disorder in an isolated system. In thermodynamics, an isolated system is a
secluded system that doesn't exchange energy or matter with the systems around it.
Except for the idealized Carnot cycle, there is no process that is reversible or free of
entropy.

Enthalpy (total heat) = work done + entropy (heat lost to disorder)

Contemporary cognitive psychology, the scientific analysis of the workings of the
brain, describes consciousness as a by-product of entropy. In regard to the human
mind, consciousness is the equivalent of work done in thermodynamics.
Consciousness is awareness of the surrounding environment and self-awareness built
on accumulated memories. Entropy in the human brain is the randomness of floating
thoughts which do not get converted into memory (work done). Cognitive entropy is
high when the brain is provoked by anxiety or uncertainty. Only a certain portion of
our thoughts accumulates at the conscious level. The memorized bits of information
are codified and stored in neural circuits dispersed at random all over the brain, and
human consciousness is created from this long-term memory. Disorder in computers
is caused mainly by the randomness of the operating system, data compression,
software rot and the like. In addition, the processing of unformatted and uninitialized
random data leads to disorderly output, or entropy. Even though AI has no
consciousness, when equipped with a camera, a microphone and a machine learning
algorithm it can develop some kind of sense of its surroundings. AI stores coded data
in logically structured, well-organized storage controlled by methodically designed
algorithms; one criterion of cloud storage design is minimizing entropic losses. The
confusion in a vacillating human mind is hard to understand, and the assessment of
the resulting entropy is far more complicated. Technological innovations in AI have
led to improved efficiency and consequently to less entropy. While the disorder in AI
is constantly being minimized, the fluctuating human mind has remained the same.
The AI of the future may have consciousness, conceivably in a different form than
that of the human mind.

AI Skepticism: Hawking, Musk and Gates

The majority thinks of artificial intelligence only positively, as a platform helping
businesses make more profit; our short-sightedness ignores the long-term effects of
AI. Stephen Hawking, among the greatest living minds and theoretical physicists, has
expressed his doubts about the future of AI several times. In December 2014, in an
interview with the BBC, he said, "The development of full artificial intelligence could
spell the end of the human race. It would take off on its own, and re-design itself at an
ever-increasing rate." He continued, "Humans, who are limited by slow biological
evolution, cannot compete, and would be superseded." In April 2014 Hawking,
together with physicists Max Tegmark and Frank Wilczek of MIT and computer
scientist Stuart Russell of the University of California, Berkeley, wrote in The
Huffington Post, "The creation of AI will be the biggest event in human history.
Unfortunately, it may also be the last." Speaking on AI at Cambridge University,
Hawking cautiously remarked that when it eventually does occur, "it's likely to be
either the best or worst thing ever to happen to humanity, so there's huge value in
getting it right."

In 2014, talking to students at MIT, Elon Musk said, "I think we should be very
careful about artificial intelligence. If I had to guess at what our biggest existential
threat is, it's probably that. So we need to be very careful." Musk also said, "I'm
increasingly inclined to think that there should be some regulatory oversight, maybe
at the national and international level, just to make sure that we don't do something
very foolish." He continued, "With artificial intelligence we are summoning the
demon. In all those stories where there's the guy with the pentagram and the holy
water, it's like, yeah, he's sure he can control the demon. Doesn't work out." It wasn't
the first time Musk had warned about the plausible danger of AI. That August he had
tweeted, "We need to be super careful with AI. Potentially more dangerous than
nukes."

At an Ask Me Anything (AMA) session held in January 2015, Bill Gates was asked
about superintelligent machines. Gates wrote, "I am in the camp that is concerned
about super intelligence. First the machines will do a lot of jobs for us and not be super
intelligent. That should be positive if we manage it well. A few decades after that
though the intelligence is strong enough to be a concern. I agree with Elon Musk and
some others on this and don't understand why some people are not concerned." Louis
A. Del Monte, CEO of Del Monte Associates and an inventor and futurist, shares Elon
Musk's view on the threat of AI. Speaking to Business Insider, Del Monte said,
"Today there's no legislation regarding how much intelligence a machine can have,
how interconnected it can be. If that continues, look at the exponential trend. We will
reach the singularity in the time frame most experts predict. From that point on you're
going to see that the top species will no longer be humans, but machines."

In 2015 the ITIF (Information Technology and Innovation Foundation) branded Elon
Musk and Bill Gates as Luddites for talking guardedly about AI. The Luddites were
English workers in the early 19th century who destroyed cotton and woolen mills,
fearing that their jobs were threatened. Inventor and futurist Ray Kurzweil, a director
of engineering at Google, refers to the point in time when machine intelligence
surpasses human intelligence as "the singularity," which he predicts could come as
early as 2045. Kurzweil's singularity is different from the gravitational singularity of
the Big Bang, where the gravitational force reaches infinity. Other experts say such a
day of singularity is a long way off. The American author James Barrat, in his 2013
book Our Final Invention, says, "The A.I. of our future won't necessarily be friendly.
It could actually be what destroys us." The total extermination of humankind by
man-made machines certainly reminds us of the apocalyptic prophecy mentioned in
the Bible.

Professor Mark Bishop, Professor of Cognitive Computing at the University of
London, has said that there will always be a 'humanity gap' between any artificial
intelligence and a real human mind; because of this gap, humans will always be more
powerful than an AI working on its own. However, he is concerned about the military
deployment of robotic weapons that can take decisions without human intervention,
so in a way Prof. Bishop too is concerned about the potential danger of AI. With
concern for the mishandling of AI, Google has formed an ethics committee to oversee
its development of AI. The threat of a monstrous AI was imagined even before AI
existed. Isaac Asimov, the Russian-born American author, wrote science fiction about
robots and is famous for the three laws of robotics which he proposed in 1942:

1. A robot may not injure a human being or, through inaction, allow a human
being to come to harm.
2. A robot must obey orders given to it by human beings except where such orders
would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not
conflict with the First or Second Law.
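The strict precedence among the three laws can be sketched as an ordered check. The action flags below are hypothetical names invented purely for illustration; no real robotics API is implied.

```python
# A toy encoding of Asimov's three laws as an ordered priority check.
# The dictionary keys are hypothetical flags, invented for this sketch.
def law_violated(action: dict) -> int:
    """Return the number of the first law the action violates, or 0 if none."""
    # First Law: never harm a human, by action or by inaction
    if action.get("harms_human") or action.get("inaction_harms_human"):
        return 1
    # Second Law: obey humans, unless obeying would break the First Law
    if action.get("disobeys_order") and not action.get("obeying_would_harm_human"):
        return 2
    # Third Law: self-preservation, subordinate to the first two laws
    if action.get("endangers_self_needlessly"):
        return 3
    return 0
```

The ordering is the whole content of the laws: a disobeyed order is excused whenever obeying it would harm a human, which the second check expresses directly.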

The internet has given people all over the world, including in technologically
backward countries, the opportunity to learn the most recent technologies. Digital
technology in the 21st century has proliferated in third world countries, which now
supply technical know-how to the advanced countries. Large countries such as China
and India have been producing great numbers of savvy computer engineers to support
the innovations in computer science pursued by advanced institutions, which are
racing to build AI models as a critical component of national prosperity and security.
But do they recognize the danger that AI can pose to humanity? We are still too casual
and relaxed about the looming, self-motivated progression of AI; it is possible to lose
sight of the threat posed by AI in the thick of competition among ourselves. James
Joyce, the Irish novelist of the 20th century, wrote, "Nations have their egos, just like
individuals." Just as nuclear weapons were built on the competitive egos of countries,
we may end up creating insidious AI monsters, and by the time we sense the persona
of an AI monster it will be too late to put the genie back in the bottle. As a
precautionary measure the United Nations needs to address this matter soon; it should
be on the U.N. agenda to set up a protocol for countries to practice AI safety.

Lying Robots
Inadequate computer programs have time and again led to large-scale chaotic
conditions globally. The COBOL (Common Business Oriented Language) programs
written in the 1960s and 70s could not handle dates after 1999, because they treated
only the last two digits of a year as variable and the first two digits, 19, as constant.
This defect of the COBOL programs was known as the Y2K bug. To correct the
widespread Y2K bug, U.S. corporations spent some $150 billion, and many of them
over-reacted to the Y2K hype triggered by a few technology companies. Sometimes
GPS fails to provide the closest or quickest driving route; occasionally it is even
totally wrong and takes motorists, literally, on surprise rides. A Syrian truck driver,
instead of driving to Gibraltar, drove his truck to Gibraltar Point in England, about
1,600 miles north of his intended destination. At times programming errors can lead
to costly mistakes, and unintentional errors in AI or machine learning algorithms can
trigger serious, unexpected consequences. Computer networks seldom make errors,
but rare accidental errors can mess up the lives of a large population of innocent
people. Cyber-attack, a sneaky strike by rogue programmers on networks to
manipulate, destroy or steal, adds to the danger. The ongoing progress in AI without
safety guidelines is an acute matter of significant consequence.
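The Y2K defect itself is tiny when written out. Below is a minimal sketch of the buggy assumption together with one common remediation, the sliding pivot window; the pivot value of 50 is an illustrative choice, not a standard.

```python
# A sketch of the Y2K defect: two-digit years with the century "19" hard-wired
def y2k_parse(two_digit_year: int) -> int:
    return 1900 + two_digit_year      # the buggy 1960s-70s assumption

def fixed_parse(two_digit_year: int, pivot: int = 50) -> int:
    # one common remediation: a sliding pivot window (pivot is illustrative)
    return (1900 if two_digit_year >= pivot else 2000) + two_digit_year
```

The buggy version reads the year 00 as 1900 instead of 2000; a two-character saving in storage ultimately cost billions to undo.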

Robots with humanized looks and behavior similar to that of humans would likely be
treated as fellow humans. Robots displaying human-like facial expressions of sadness,
joy, love and anger could well develop close intimacy with humans; a flirty smile from
a good-looking robot may prompt a romantic mood in a human. Lying for some kind
of reward is a natural human trait, and people even tell white lies to be polite and
sociable. AI is getting smarter, and it is learning human-like etiquette and social
gestures. A humanoid secretary telling an unanticipated visitor that it was expecting
him can be considered a robotic white lie. Machine learning algorithms are being
designed to create lying robots, especially in national defense applications. The U.S.
Navy has built a prototype robotic bluefin tuna, Silent Nemo, to work as a spy off the
shores of adversarial nations. The advancement of AI is moving toward decision-
making smart weapons that can autonomously attack targeted objects and skillfully
incapacitate or destroy their marked targets. We cannot ignore the possibility of
terrorists creating smart weapons to attack well-established, innocent societies for
absurd reasons.

When robots start having conversations with each other, they may start telling white
lies to please each other. What if robots develop human-like ego and envy? What if
they start lying to humans for some kind of abstract reward? Alan Turing was one of
the greatest minds of the 20th century and is considered the father of computer
science; his ingenuity was unmatched in his time. Unfortunately, he died in 1954 at
the age of 41. In 1951 Turing said, "It seems probable that once the machine thinking
method had started, it would not take long to outstrip our feeble powers. They would
be able to converse with each other to sharpen their wits. At some stage therefore we
should have to expect the machines to take control." The Turing test, which he
created, remains the standard test for measuring the conversational adeptness of a
computer. His well-known words in reference to the test were, "A computer would
deserve to be called intelligent if it could deceive a human into believing that it was
human." Turing was warning about machine intelligence as early as the 1940s.

Humanity Passing Button to Humanoids?

That the biological evolution of all organisms occurs through natural selection was
the theory of evolution put forward by Charles Darwin in 1859. It is a well-accepted
theory that still applies to all kinds of life forms, including human beings. We are
inclined to think that, being the prevailing force, we decide the fate of our planet and
its belongings, and at least for the last century we have been excessively abusing the
natural environment. The cause of our behavior is well explained by the neo-
Malthusian theory: human population growth is exponential, and it can easily exceed
the available resources if the population is not controlled. Mother Nature has so far
shown enough endurance. However, at some point in the future, if nature can no
longer tolerate human behavior, it could find a way to forcibly eliminate human
beings from the face of the earth. That might seem an irrational strand of imagination,
as we have always assumed that nature was at our beck and call. Excessive burning of
fossil fuels and deforestation are causing global warming, one of the worst
predicaments in the known history of humanity. Simultaneously we are creating self-
learning humanoids to help us in our endeavors and make our living more efficient.
In the next 10 years decision-making humanoids will help us find solutions to complex
problems in just about every field. It is also possible that in the not too distant future
artificial intelligence could exceed human intelligence.

Mr. Ray Kurzweil's prediction of a state of singularity as early as 2045 is not a bona
fide date for a cataclysmic event. But if we are not concerned about our greedy
endeavors, the perceived singularity may in fact transpire suddenly, without any
warning signs. If and when it happens, robots would take charge of the planet from
humans. In the 21st century AI is increasingly displacing humans from jobs that don't
require high-level skill sets. Human existence will be in danger if and when the highly
skilled jobs of the writer, artist, actor and scientist are taken over by AI. This is where
the intelligence curves of humans and AI would intersect before AI overtakes human
intelligence; the point of intersection of the two curves is Mr. Kurzweil's state of
singularity. After the singularity it wouldn't take long for self-learning AI to go far
past the human level.

The process of evolution began with the primordial soup, which contained nucleic
acids, the earliest harbingers of life, created by chemical reactions between elements
in the atmosphere and the ocean. Almost four billion years later, humankind, the most
superior life form, came out of these spontaneous natural miracles. The earth may
now be on the verge of handing evolution based on chemical energy over to evolution
based on electrical energy: the present slow-moving organic evolution could switch to
a faster, unwavering evolutionary process running on electricity. Millions of years ago
the dinosaurs, even though they survived on chemical energy, ceased to exist for
reasons still debated; humans might become the dinosaurs of the new age. Evolution
by genetic mutation may prove unfit to survive quickly shifting environmental
changes and could give way to a new strain of intelligent beings. Even though it
sounds awfully alarming, an inconceivable inorganic life form may take control.
Instead of organism-based evolution there would be a progression of machines made
of electronics, running on the flow of electrons and even photons. When robots begin
to replicate on their own, we will be changing the status quo definition of life; robots
built of electronics and alloys would be a new form of super life. But the definition of
life won't matter when we stand facing extinction. In the large scheme of our universe,
it is very possible that human existence on this planet is a transient phase in nature's
process of evolution.
Selected Bibliography:

"How Humans Will Lose Control of Artificial Intelligence," Rick Paulas, The Week
"Moore's Law Timeline," Intel
"Special Robots: Companions and Helpers of the Future," Linda Haden, Future Ready Singapore, April 26, 2016
"Reach for the Skyscraper," Andrew Wade, The Engineer, February 2, 2016
"Can Computers Be Conscious?" Max Miller, Big Think, 2016
"9 Ways Computers Will Change Everything," Matt Vella, Time, February 6, 2014
"There's Enough Time to Change Everything," Conor Friedersdorf, The Atlantic, February 23, 2017
"Bill Gates Fears A.I., But A.I. Researchers Know Better," Eric Sofge, Popular Science, January 30, 2015