
Impact of Artificial Intelligence

Malhar Sharma

A Report
on
Impact of Artificial Intelligence

By
Malhar Sharma
(19CSU170)

Submitted to
Ms. Divyabha Vashisth

6th May, 2020


Copyright Notice

© NCU 2019

All rights reserved.

No part of this report may be reproduced in any form or by any means without permission in writing from the publisher.
Acknowledgement

I would like to express my gratitude for the kind support and help of many individuals and sources. I would like to extend my sincere thanks to all of them.

I am highly indebted to my English professor, Ms. Divyabha Vashisth, for encouraging me to take up this study.

I would also like to express my gratitude towards my parents and my friends for their kind cooperation and encouragement, which helped me in the completion of this project.
Preface

Artificial Intelligence is a vast field, encompassing issues that require more space than a single book can cover. Drawing inspiration from multiple fields of knowledge such as neurology, psychology, philosophy and literature, to name only a few, artificial intelligence is an amalgamation of many more refined subjects.
AI has a wide spectrum of applications including natural
language processing, search engines, medical diagnosis,
bioinformatics and cheminformatics, detecting credit
card fraud, stock market analysis, classifying DNA
sequences, speech and handwriting recognition, object
recognition in computer vision, game playing, machine
learning and robot locomotion.
Some scientists and futurologists predict that in the near future AI will make digital humans, artificial life and artificial immortality possible.
Table of Contents

Acknowledgement
Preface
1. Introduction
2. Discussion
   i. How Artificial Intelligence Is Changing Modern Life
   ii. AI Is Changing Education
3. Conclusions
4. Recommendations
5. References
6. Bibliography
7. Glossary
Appendix 1
Introduction

In computer science, artificial intelligence (AI), sometimes called machine intelligence,
is intelligence demonstrated by machines, in contrast to
the natural intelligence displayed by humans and
animals. Computer science defines AI research as the
study of “intelligent agents”: any device that perceives its
environment and takes actions that maximize its chance
of successfully achieving its goals. Colloquially, the term
“artificial intelligence” is used to describe machines that
mimic “cognitive” functions that humans associate with
other human minds, such as “learning” and “problem
solving”.
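
To make the "intelligent agent" definition above concrete, the following minimal sketch (in Python) shows the perceive-decide-act loop it describes. The thermostat agent and its percepts are invented purely for illustration and are not drawn from any particular AI library.

# Minimal sketch of the perceive-decide-act loop behind the
# "intelligent agent" definition. All names here are illustrative.

class ThermostatAgent:
    """A trivially simple agent whose goal is to keep a room at 21 C."""

    def __init__(self, target_temp=21.0):
        self.target_temp = target_temp

    def act(self, percept):
        # Choose the action expected to move the environment toward the goal.
        if percept < self.target_temp - 0.5:
            return "heat"
        if percept > self.target_temp + 0.5:
            return "cool"
        return "idle"


def run(agent, temperatures):
    """Feed the agent a stream of percepts and collect its actions."""
    return [agent.act(t) for t in temperatures]


if __name__ == "__main__":
    agent = ThermostatAgent()
    print(run(agent, [18.0, 20.8, 23.5]))  # ['heat', 'idle', 'cool']
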
As machines become increasingly capable, tasks
considered to require “intelligence” are often removed
from the definition of AI, a phenomenon known as the AI
effect. A quip in Tesler’s Theorem says “AI is whatever
hasn’t been done yet.” For instance, optical character
recognition is frequently excluded from things
considered to be AI, having become a routine technology.
Modern machine capabilities generally classified as AI
include successfully understanding human speech,
competing at the highest level in strategic game systems
(such as chess and Go), autonomously operating cars,
intelligent routing in content delivery networks,
and military simulations.
Artificial intelligence can be classified into three different
types of systems: analytical, human-inspired, and
humanized artificial intelligence. Analytical AI has only
characteristics consistent with cognitive intelligence;
generating a cognitive representation of the world and
using learning based on past experience to inform future
decisions. Human-inspired AI has elements from
cognitive and emotional intelligence; understanding
human emotions, in addition to cognitive elements, and
considering them in their decision making. Humanized
AI shows characteristics of all types of competencies (i.e.,
cognitive, emotional, and social intelligence), is able to
be self-conscious and is self-aware in interactions with
others.
In the twenty-first century, AI techniques have
experienced a resurgence following concurrent advances
in computer power, large amounts of data, and
theoretical understanding; and AI techniques have
become an essential part of the technology industry,
helping to solve many challenging problems in computer
science, software engineering and operations research.
Discussion

How Artificial Intelligence Is Changing Modern Life

There is a multitude of ways that artificial intelligence is changing our day-to-day life. In some of the largest industries in the world, this ever-growing technology is emerging as a force to be reckoned with. Already we are seeing artificial intelligence creep into our education systems, our businesses, and our financial structures. Here’s how:

AI Is Changing Education
Artificial intelligence powered education programs are already helping students learn basic math and writing skills. These programs can so far teach students only the fundamentals of a subject, but at the rate this technology is improving, it is safe to say it will be able to teach higher-level thinking in the future. Artificial intelligence also allows for an individualized learning experience: this type of technology can show which subjects a student is struggling in and allow teachers to focus on building up specific skill sets.
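
As a purely illustrative sketch of the individualized-learning idea described above, the snippet below flags the topics a hypothetical student is struggling in from recent quiz scores; the data, threshold and function names are all invented for the example.

# Illustrative sketch (invented data) of how a tutoring system might
# flag the topics a student is struggling in from recent quiz scores.

from statistics import mean

def weak_topics(scores_by_topic, threshold=0.6):
    """Return topics whose average score falls below the threshold."""
    return sorted(
        topic for topic, scores in scores_by_topic.items()
        if mean(scores) < threshold
    )

quiz_scores = {
    "fractions": [0.4, 0.5, 0.55],
    "geometry":  [0.9, 0.85],
    "algebra":   [0.7, 0.65, 0.8],
}

print(weak_topics(quiz_scores))  # ['fractions']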

Over half of all students aged 14 and under who were surveyed reported using either a smartphone or a laptop for homework each day. With the expansion of technological knowledge and accessibility,
we are seeing the very road map of education change. In the
future, a combination of artificial intelligence tutoring and
support software will give an opportunity for students anywhere
around the globe to learn any subject at their own pace and on
their own time.
Artificial intelligence has been able to automate simple actions
like grading, which is relieving teachers and professors from
time-consuming work. Teachers spend a lot of their time grading
and reporting for their students, but it is now possible for
educators to automate their grading for almost all types of
multiple-choice testing. Essay grading software is still in its early years, but it is emerging as a tool that, with improvement, can help teachers focus more on classroom management than on assessment.
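
The automated multiple-choice grading mentioned above amounts to comparing each response against an answer key. The following is a minimal, hypothetical sketch of that idea; the answer key and student responses are invented.

# Minimal sketch of automated multiple-choice grading: compare each
# student's answers against an answer key. Data below is invented.

def grade(answer_key, responses):
    """Return the fraction of questions answered correctly."""
    correct = sum(1 for q, ans in answer_key.items() if responses.get(q) == ans)
    return correct / len(answer_key)

answer_key = {1: "B", 2: "D", 3: "A", 4: "C"}
student    = {1: "B", 2: "A", 3: "A", 4: "C"}

print(f"Score: {grade(answer_key, student):.0%}")  # Score: 75%
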
Artificial intelligence (AI) is awakening fear and enthusiasm in
equal measures.  Some have likened the advances in AI to
“summoning the devil” and there are concerns that AI
threatens to end humanity. AI can scare people, perhaps due to the science fiction notion that machines will take all of our jobs, ‘wake up’, and do unintended things. However, where
some see danger, others see opportunity!

This section pulls together information from a series of articles on AI and machine learning, its impact on the future world of work, and the implications for occupational safety and health (OSH).

It’s likely that the upwards trend in the capabilities of AI systems will continue; that systems will eventually become capable of
solving a wide range of tasks (rather than a new system having
to be built for each new problem), and that the adoption of AI
within many industries will continue.  Evidence suggests AI is
currently unable to reproduce human behaviour or surpass
human thinking; it’s likely to stay a complementary workforce
tool for a very long time to come.  However, steady gradual
improvements in AI could reach a point where AI exceeds
current expectations. The continued development of AI will depend on public opinion regarding its benefits and moral acceptability, on businesses continuing to gain competitive advantage from using it, and on continued funding for research and development.
It is difficult to determine where this technology might create
new jobs in the future, yet easier to see which tasks AI might
take from humans.  It’s likely that any routine, repetitive task
will be automated.  This shift to automation has happened for
centuries, but what is different today is that it affects many
more industries.  It’s likely that we will adapt to technological
changes by inventing entirely new types of work, and by taking
advantage of our uniquely human capabilities.
Historically, automating a task has made it quicker and
cheaper, which has increased demand for humans to carry out
tasks around those which can’t be automated.  In addition,
rather than replacing jobs altogether, technology has changed
the nature of some jobs, along with the skills required to do
them.  As the workplace, jobs and tasks change, knowledge will
need to be updated, and skills will need to adapt.  ‘Soft’ skills,
such as collaboration, flexibility and resilience, will become
increasingly important.  The challenge will be to develop our
skills as quickly as the technological advancements are being
made. Therefore, we may need to ask ourselves what the health and safety risks might be if the technology advances faster than the skills required to work with it.
In the future, if over-reliance is placed on technology, people could become disconnected from the process. They may cease to understand how things work (become de-skilled) or fail to appreciate how bad things are when they go wrong. Whilst an AI system can present data and recommendations, the decision on what action to take is one for humans. However,
if humans blindly follow automated instructions, without
knowing how to question them, this could have negative
implications for OSH.
Greater numbers of workers will be ‘new’ to their roles and
tasks (with resulting implications for risk management). 
Therefore ongoing workforce training and re-learning will be
increasingly important in the future.
In a future where benefits and risks are ‘incalculable’, it will be
how humans choose to use the technology that decides
whether it’s good or bad.  To harness the power and benefits of
machine learning we need to decide what we want machines to
‘learn’ and/or do, and what questions we want them to answer. 
It is clearly important that controls and goals for AI are set,
and that a lot more empirical work needs to be done to gain a
better understanding of how goal systems (in AI) should be
built, and what values the machines should have.  Once this is
done, it will provide an idea of what sort of things should be
put in a regulatory framework, or whether existing regulatory
frameworks are robust enough.
If AI is seen to contribute to business success via enabling a
better understanding of customers, along with a more rapid
response to their needs, then its uptake within the world of
work is likely to continue.  In the future, many tasks will have
the opportunity of input from AI.  However, rather than
replacing humans, it is the combination of AI and humans that
is likely to bring the greatest benefits to the working world. 
Therefore, we might conclude that it will be how AI ‘interacts’
with humans that will influence its role in the future world of
work.  If human values are carefully articulated and embedded
into AI systems then socially unacceptable outcomes might be
prevented.
So, does AI present opportunity or danger?  Will machines take
all the jobs or create more than they destroy?  Opinions on this
are divided, and the reality is likely to be somewhere in
between the two extremes.  AI will continue to change the
world of work, and workers will need to engage in life-long
learning, developing their skills and changing jobs more often
than they did in the past.
In the future, as humans increasingly work together with AI,
the challenge for us in HSE’s Foresight Centre is to ensure that
we anticipate any negative health and safety consequences,
assess the risks, and share this knowledge to benefit the future
working world.
Conclusions
Artificial intelligence and its technology are one side of life that constantly interests and surprises us with new ideas, topics, innovations, products and more. AI is still not implemented at the level the films portray it (i.e. fully intelligent robots); however, there are many important attempts to reach that level and to compete in the market, such as the robots sometimes shown on TV, as well as the less visible projects under development in industrial companies.

In this report we have gone through the definitions of AI, a brief history of the field, its applications, and its impact on education and the world of work. This is not the end of AI; there is more to come from it. Who knows what AI can do for us in the future; perhaps it will one day give us a whole society of robots.
Recommendations

Recognize the social risks implied by artificial intelligence

The first step in resolving a problem is to recognize that it exists. According to the EIU, the risk that artificial intelligence poses for the future of employment and privacy is undeniable. Faced with this reality, there is no room for complacency or resignation.

Explain, educate and boost transparency

To demand blind faith in algorithms is a sure road to
misinformation and distortion about artificial
intelligence. “The biggest challenge facing AI is the
possible lack of confidence in the technology due to a
lack of transparency on how machines arrive at their
decisions”, says Manuela Veloso, head of the machine learning department at Carnegie Mellon University and one of 14 global experts interviewed for the
report. Those at the vanguard of the AI revolution need
to explain their work and the plans for society in the
simplest way possible. They have a lot of power and that
brings with it a lot of responsibility, the EIU says.
Adapt training and education to the new artificial intelligence society

This involves working with three tools: professional training, which has fallen by the wayside in many countries and needs to be given more importance, the report says; maintaining the focus on STEM subjects (Science, Technology, Engineering and Mathematics); and not forgetting the importance of the humanities, whose value will grow as a result of an expected increase in the demand for soft skills such as team building, cooperation and critical thought. All of this makes close collaboration between teachers, businesses and lawmakers more necessary than ever in the face of the ongoing evolution of training needs.
What is a recommendation in the era of artificial intelligence?

What happens when you choose a restaurant on the Internet? Or when you want to buy a book, find a song, or get hooked on a new series? BBVA Data & Analytics explains in a recent article how the different recommendation systems used by Amazon, Netflix, and Spotify work, and how they will extend beyond the consumer and entertainment sectors.
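
One simple idea behind such recommendations is co-occurrence: items that many users consume together are suggested to each other's audiences. The sketch below illustrates only that toy idea; the real systems used by Amazon, Netflix and Spotify are far more sophisticated, and all data and names here are invented.

# Toy sketch of the co-occurrence idea behind "people who liked X also
# liked Y" recommendations. Real systems are far more sophisticated;
# the data and approach here are illustrative only.

from collections import Counter
from itertools import combinations

histories = [
    {"book_a", "book_b", "book_c"},
    {"book_a", "book_c"},
    {"book_b", "book_c", "book_d"},
]

# Count how often every pair of items appears in the same user's history.
co_counts = Counter()
for items in histories:
    for pair in combinations(sorted(items), 2):
        co_counts[pair] += 1

def recommend(item, top_n=2):
    """Items most often consumed together with the given item."""
    scores = Counter()
    for (a, b), n in co_counts.items():
        if item == a:
            scores[b] += n
        elif item == b:
            scores[a] += n
    return [other for other, _ in scores.most_common(top_n)]

print(recommend("book_a"))  # e.g. ['book_c', 'book_b']
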
Regulation and improving the treatment of data
As the report categorically states, the use of data is going
to be one of the defining questions of the 21st century. It
calls for the creation of specific regulations that allow the
appropriate use of aggregate anonymous data in response
to current doubts on cybersecurity and privacy. However,
these regulations should not prevent the movement of
data beyond state borders.

Build bridges and enhance communication

The report notes many gaps in understanding with
respect to the development of artificial intelligence, but
probably the biggest one is between company technical
experts and political leaders.
Good public policies could lessen the negative effects of artificial intelligence without limiting the positive ones, for example in the labour market, the report concludes.
References

• Concerns of an Artificial Intelligence Pioneer
• Transcending Complacency on Superintelligent Machines
• Why We Should Think About the Threat of Artificial Intelligence
• Stephen Hawking Is Worried About Artificial Intelligence Wiping Out Humanity
• Artificial Intelligence Could Kill Us All. Meet the Man Who Takes That Risk Seriously
• Artificial Intelligence Poses ‘Extinction Risk’ to Humanity, Says Oxford University’s Stuart Armstrong
• What Happens When Artificial Intelligence Turns On Us?
• Can We Build an Artificial Superintelligence That Won’t Kill Us?
• Artificial Intelligence: Our Final Invention?
• Artificial Intelligence: Can We Keep It in the Box?
• Science Friday: Christof Koch and Stuart Russell on Machine Intelligence (transcript)
• Science Goes to the Movies: ‘Transcendence’
• Our Fear of Artificial Intelligence


Bibliography

• https://en.wikipedia.org/wiki/Artificial_intelligence
• https://www.shponline.co.uk/technology
• https://dzone.com/articles/ai-survey-2018-insights-and-suggestions
• https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/
• https://www.livemint.com/
• https://dzone.com/articles/ai-and-its-impact-on-humanity
Glossary

• DATA SCIENCE: also known as data-driven science, an interdisciplinary field about scientific processes and systems used to extract knowledge or insights from data.
• COMPUTER VISION: an interdisciplinary field that deals with how computers can be made to gain high-level understanding from digital images or videos.
• DEEP LEARNING: the application of artificial neural networks that contain more than one hidden layer to learning tasks.
• HUMAN-COMPUTER INTERACTION: researches the design and use of computer technology, focused on the interfaces between people (users) and computers.
• INTELLIGENCE: one’s capacity for logic, understanding, self-awareness, learning, emotional knowledge, planning, creativity and problem solving.
• ALGORITHM: a self-contained sequence of actions to be performed. Algorithms perform calculation, data processing, and/or automated reasoning tasks.
• ARTIFICIAL GENERAL INTELLIGENCE: the intelligence of a machine that could successfully perform any intellectual task that a human being can. It is a primary goal of artificial intelligence research and a common topic in science fiction and futurism.

Appendix 1
A Short History of AI

This appendix is based primarily on Nilsson's book and is written from the prevalent current perspective, which focuses on data-intensive methods and big data. However important, this focus
has not yet shown itself to be the solution to all problems. A
complete and fully balanced history of the field is beyond the
scope of this document.

The field of Artificial Intelligence (AI) was officially born and christened at a workshop organized by John McCarthy in 1956
at the Dartmouth Summer Research Project on Artificial
Intelligence. The goal was to investigate ways in which
machines could be made to simulate aspects of intelligence—
the essential idea that has continued to drive the field forward
ever since. McCarthy is credited with the first use of the term
“artificial intelligence” in the proposal he co-authored for the
workshop with Marvin Minsky, Nathaniel Rochester, and
Claude Shannon. Many of the people who attended soon led
significant projects under the banner of AI, including Arthur
Samuel, Oliver Selfridge, Ray Solomonoff, Allen Newell, and
Herbert Simon.

Although the Dartmouth workshop created a unified identity for the field and a dedicated research community, many of the
technical ideas that have come to characterize AI existed much
earlier. In the eighteenth century, Thomas Bayes provided a
framework for reasoning about the probability of events. In the
nineteenth century, George Boole showed that logical reasoning
—dating back to Aristotle—could be performed systematically
in the same manner as solving a system of equations. By the
turn of the twentieth century, progress in the experimental
sciences had led to the emergence of the field of statistics which
enables inferences to be drawn rigorously from data. The idea
of physically engineering a machine to execute sequences of
instructions, which had captured the imagination of pioneers
such as Charles Babbage, had matured by the 1950s, and
resulted in the construction of the first electronic computers.
Primitive robots, which could sense and act autonomously, had
also been built by that time.

The most influential ideas underpinning computer science came from Alan Turing, who proposed a formal model of
computing. Turing's classic essay, Computing Machinery and
Intelligence, imagines the possibility of computers created for
simulating intelligence and explores many of the ingredients
now associated with AI, including how intelligence might be
tested, and how machines might automatically learn. Though
these ideas inspired AI, Turing did not have access to the
computing resources needed to translate his ideas into action.

Several focal areas in the quest for AI emerged between the 1950s and the 1970s. Newell and Simon pioneered the foray into
heuristic search, an efficient procedure for finding solutions in
large, combinatorial spaces. In particular, they applied this idea
to construct proofs of mathematical theorems, first through
their Logic Theorist program, and then through the General
Problem Solver. In the area of computer vision, early work in
character recognition by Selfridge and colleagues laid the basis
for more complex applications such as face recognition. By the
late sixties, work had also begun on natural language
processing. “Shakey”, a wheeled robot built at SRI International,
launched the field of mobile robotics. Samuel's Checkers-
playing program, which improved itself through self-play, was
one of the first working instances of a machine learning system.
Rosenblatt's Perceptron, a computational model based on
biological neurons, became the basis for the field of artificial
neural networks. Feigenbaum and others advocated the case for
building expert systems—knowledge repositories tailored for
specialized domains such as chemistry and medical diagnosis.
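
As an aside, Rosenblatt's perceptron mentioned above can be written down in a few lines. The following is a minimal sketch of the perceptron learning rule, trained here on the logical AND function; the data, learning rate and epoch count are chosen only for illustration.

# Minimal sketch of Rosenblatt's perceptron learning rule: weights are
# nudged whenever the current prediction is wrong. Data is illustrative.

def train_perceptron(samples, labels, epochs=10, lr=0.1):
    """Learn weights and a bias for linearly separable binary data (labels 0/1)."""
    n = len(samples[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            error = y - pred
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

# Learn the logical AND function.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
w, b = train_perceptron(X, y)
print([1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0 for x in X])
# expected: [0, 0, 0, 1]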

Early conceptual progress assumed the existence of a symbolic system that could be reasoned about and built upon. But by the
1980s, despite this promising headway made into different
aspects of artificial intelligence, the field still could boast no
significant practical successes. This gap between theory and
practice arose in part from an insufficient emphasis within the
AI community on grounding systems physically, with direct
access to environmental signals and data. There was also an
overemphasis on Boolean (True/False) logic, overlooking the
need to quantify uncertainty. The field was forced to take
cognizance of these shortcomings in the mid-1980s, since
interest in AI began to drop, and funding dried up. Nilsson calls
this period the “AI winter.”

A much-needed resurgence in the nineties built upon the idea that “Good Old-Fashioned AI” was inadequate as an end-to-end
approach to building intelligent systems. Rather, intelligent
systems needed to be built from the ground up, at all times
solving the task at hand, albeit with different degrees of
proficiency. Technological progress had also made the task of
building systems driven by real-world data more feasible.
Cheaper and more reliable hardware for sensing and actuation
made robots easier to build. Further, the Internet’s capacity for
gathering large amounts of data, and the availability of
computing power and storage to process that data, enabled
statistical techniques that, by design, derive solutions from
data. These developments have allowed AI to emerge in the
past two decades as a profound influence on our daily lives.

In summary, the following is a list of some of the traditional sub-areas of AI. Some of them are
currently “hotter” than others for various reasons. But that is
neither to minimize the historical importance of the others, nor
to say that they may not re-emerge as hot areas in the future.

Search and Planning deal with reasoning about goal-directed behaviour. Search plays a key role, for example, in chess-playing
programs such as Deep Blue, in deciding which move
(behaviour) will ultimately lead to a win (goal).
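
A minimal way to see this kind of goal-directed search is plain minimax over a tiny hand-made game tree, as sketched below. This is only an illustration of the principle; programs such as Deep Blue add alpha-beta pruning, handcrafted evaluation functions and massive parallel search.

# Tiny sketch of game-tree search: plain minimax over a hand-made tree.
# Numbers are terminal evaluations of positions.

def minimax(node, maximizing=True):
    """Return the best achievable score from this node."""
    if isinstance(node, (int, float)):      # leaf: a position's score
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Each inner list is a choice point for the opponent after our move.
game_tree = [
    [3, 5],      # opponent will answer this move with 3
    [2, 9],      # opponent will answer this move with 2
    [0, 1],      # opponent will answer this move with 0
]

best_move = max(range(len(game_tree)),
                key=lambda i: minimax(game_tree[i], maximizing=False))
print(best_move)  # 0: the move whose worst-case outcome is best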

The area of Knowledge Representation and Reasoning involves processing information (typically in large amounts) into a
structured form that can be queried more reliably and
efficiently. IBM's Watson program, which beat human
contenders to win the Jeopardy challenge in 2011, was largely
based on an efficient scheme for organizing, indexing, and
retrieving large amounts of information gathered from various
sources.[159]
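
A toy way to illustrate "structuring information so it can be queried" is an inverted index that maps words to the documents containing them, as sketched below. This is only a sketch of the general idea, not IBM Watson's actual pipeline; the documents are invented.

# Toy illustration of structuring text so it can be queried efficiently:
# an inverted index from words to the documents containing them.

from collections import defaultdict

documents = {
    "doc1": "alan turing proposed a formal model of computing",
    "doc2": "deep blue played chess using search",
    "doc3": "turing imagined machines that learn",
}

index = defaultdict(set)
for doc_id, text in documents.items():
    for word in text.lower().split():
        index[word].add(doc_id)

def query(*words):
    """Return documents containing all of the given words."""
    results = [index.get(w, set()) for w in words]
    return sorted(set.intersection(*results)) if results else []

print(query("turing"))            # ['doc1', 'doc3']
print(query("turing", "learn"))   # ['doc3']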

Machine Learning is a paradigm that enables systems to automatically improve their performance at a task by observing
relevant data. Indeed, machine learning has been the key
contributor to the AI surge in the past few decades, ranging
from search and product recommendation engines, to systems
for speech recognition, fraud detection, image understanding,
and countless other tasks that once relied on human skill and
judgment. The automation of these tasks has enabled the
scaling up of services such as e-commerce.
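
The essence of "improving performance by observing data" can be shown with a very small example: fit a nearest-centroid classifier on labelled points, then use it to label new ones. The sketch below is purely illustrative; the data and labels are invented.

# Minimal sketch of learning from data: fit a nearest-centroid
# classifier on labelled examples, then use it to label new points.

from statistics import mean

def fit(examples):
    """Compute the mean point (centroid) of each class."""
    centroids = {}
    for label in {lbl for _, lbl in examples}:
        points = [x for x, lbl in examples if lbl == label]
        centroids[label] = tuple(mean(dim) for dim in zip(*points))
    return centroids

def predict(centroids, x):
    """Assign x to the class whose centroid is closest."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(centroids, key=lambda lbl: dist(centroids[lbl]))

training = [((1, 1), "spam"), ((2, 1), "spam"), ((8, 9), "ham"), ((9, 8), "ham")]
model = fit(training)
print(predict(model, (2, 2)))   # spam
print(predict(model, (7, 9)))   # ham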

As more and more intelligent systems get built, a natural question to consider is how such systems will interact with each
other. The field of Multi-Agent Systems considers this question,
which is becoming increasingly important in on-line
marketplaces and transportation systems.

From its early days, AI has taken up the design and construction of systems that are embodied in the real world.
The area of Robotics investigates fundamental aspects of
sensing and acting—and especially their integration—that
enable a robot to behave effectively. Since robots and other
computer systems share the living world with human beings,
the specialized subject of Human Robot Interaction has also
become prominent in recent decades.

Machine perception has always played a central role in AI, partly in developing robotics, but also as a completely
independent area of study. The most commonly studied
perception modalities are Computer Vision and Natural
Language Processing, each of which is attended to by large and
vibrant communities.

Several other focus areas within AI today are consequences of the growth of the Internet. Social Network Analysis investigates
the effect of neighbourhood relations in influencing the
behaviour of individuals and communities. Crowdsourcing is
yet another innovative problem-solving technique, which relies
on harnessing human intelligence (typically from thousands of
humans) to solve hard computational problems.

Although the separation of AI into sub-fields has enabled deep technical progress along several different fronts, synthesizing
intelligence at any reasonable scale invariably requires many
different ideas to be integrated. For example, the AlphaGo
program that recently defeated the current human champion at
the game of Go used multiple machine learning algorithms for
training itself, and also used a sophisticated search procedure
while playing the game.
