
PROJECT REPORT

ON
ARTIFICIAL INTELLIGENCE
Submitted in Partial Fulfillment of the Requirements for the Degree of
Bachelor of Business Administration (Computer Aided Management)
OF

MAHARSHI DAYANAND UNIVERSITY, ROHTAK


(Session: 2015-2016)

Submitted To:
Mrs. Snehlata
Lecturer, BBA Department

Submitted By:
Name: Arvind Singh
Class: BBA (CAM) 6th Sem.
Roll No. 24012
University No.

D. A. V. CENTENARY COLLEGE
NH-3, NIT FARIDABAD

ACKNOWLEDGEMENT
I am very much thankful to Mrs. Snehlata (Project Guide) for giving me this opportunity and for her guidance, which helped me throughout the preparation of this report. She also provided me with valuable suggestions and excellent guidance about this project, which proved very helpful in applying my theoretical knowledge in the practical field.

I am thankful to M.D. University, Rohtak, for giving me this valuable exposure to the field of Research Methodology.

Finally, I am also thankful to my friends, who gave me their constructive advice, educative suggestions, encouragement, co-operation and motivation to prepare this report.

(Arvind Singh)

PREFACE
The title of my project is ARTIFICIAL INTELLIGENCE.
This project report describes how Artificial Intelligence has evolved from the twentieth century till today. It covers the emergence of AI and the programming languages and programming code used in a computer to make it more user-friendly. This report also explains why AI has become an essential need in today's world and how it is influencing our lives. It also describes the business uses of AI, the scope and uses of AI, and the professions related to AI in which a person can make a career.

(Arvind Singh)

CONTENTS

S.No.   Topic
1.      Introduction To The Topic
2.      Review of Literature
3.      Research Methodology
        a) Objectives of the study
        b) Scope of the study
        c) Data Collection
        d) Limitations of the study
4.      Data Analysis & Interpretation
5.      Conclusion
6.      Recommendation & Suggestions
7.      Bibliography

Introduction to Artificial Intelligence

Artificial intelligence (AI): computers with the ability to mimic or duplicate the functions of the human brain.

Artificial intelligence systems: the people, procedures, hardware, software, data, and knowledge needed to develop computer systems and machines that demonstrate the characteristics of intelligence.
What is Artificial Intelligence?


Artificial Intelligence is a branch of Science which deals with helping machines find
solutions to complex problems in a more human-like fashion. This generally involves
borrowing characteristics from human intelligence, and applying them as algorithms in a
computer-friendly way. A more or less flexible or efficient approach can be taken
depending on the requirements established, which influences how artificial the intelligent
behaviour appears.

AI is generally associated with Computer Science, but it has many important links with
other fields such as Maths, Psychology, Cognition, Biology and Philosophy, among many
others. Our ability to combine knowledge from all these fields will ultimately benefit our
progress in the quest of creating an intelligent artificial being.

(John McCarthy, Stanford University)

What is artificial intelligence?


It is the science and engineering of making intelligent machines, especially intelligent
computer programs. It is related to the similar task of using computers to understand human
intelligence, but AI does not have to confine itself to methods that are biologically
observable.

Yes, but what is intelligence?


Intelligence is the computational part of the ability to achieve goals in the world. Varying
kinds and degrees of intelligence occur in people, many animals and some machines.

Isn't there a solid definition of intelligence that doesn't depend on relating it to human intelligence?

Not yet. The problem is that we cannot yet characterize in general what kinds of computational procedures we want to call intelligent. We understand some of the mechanisms of intelligence and not others.

Motivation
Computers are fundamentally well suited to performing mechanical computations, using
fixed programmed rules. This allows artificial machines to perform simple monotonous
tasks efficiently and reliably, which humans are ill-suited to. For more complex problems,
things get more difficult... Unlike humans, computers have trouble understanding specific
situations, and adapting to new situations. Artificial Intelligence aims to improve machine
behavior in tackling such complex tasks.
Together with this, much of AI research is allowing us to understand our intelligent
behavior. Humans have an interesting approach to problem-solving, based on abstract
thought, high-level deliberative reasoning and pattern recognition. Artificial Intelligence
can help us understand this process by recreating it, then potentially enabling us to enhance
it beyond our current capabilities.

Technology
There are many different approaches to Artificial Intelligence, none of which are either
completely right or wrong. Some are obviously more suited than others in some cases, but
any working alternative can be defended. Over the years, trends have emerged based on the
state of mind of influential researchers, funding opportunities as well as available computer
hardware.
Over the past five decades, AI research has mostly been focusing on solving specific
problems. Numerous solutions have been devised and improved to do so efficiently and
reliably. This explains why the field of Artificial Intelligence is split into many branches, ranging from Pattern Recognition to Artificial Life, including Evolutionary Computation and Planning.

Applications
The potential applications of Artificial Intelligence are abundant. They stretch from the
military for autonomous control and target identification, to the entertainment industry for
computer games and robotic pets. Let's also not forget big establishments dealing with huge amounts of information, such as hospitals, banks and insurance companies, which can use AI to predict customer behaviour and detect trends.
As you may expect, the business of Artificial Intelligence is becoming one of the major
driving forces for research. With an ever growing market to satisfy, there's plenty of room
for more personnel. So if you know what you're doing, there's plenty of money to be made
from interested big companies!

Artificial Intelligence involves the study of:

automated recognition and understanding of signals
reasoning, planning, and decision-making
learning and adaptation

AI has made substantial progress in recognition and learning and in some planning and reasoning problems, but many open research problems remain.

AI Applications

Improvements in hardware and algorithms have enabled AI applications in industry, finance, medicine, and science.

The rational agent view of AI draws on many disciplines:

Philosophy: logic, methods of reasoning, mind as a physical system, foundations of learning, language, rationality.
Mathematics: formal representation and proof, algorithms, computation, (un)decidability, (in)tractability.
Probability/Statistics: modeling uncertainty, learning from data.
Economics: utility, decision theory, rational economic agents.
Neuroscience: neurons as information processing units.
Psychology/Cognitive Science: how people behave, perceive, process and represent information and knowledge.
Computer engineering: building fast computers.
Control theory: designing systems that maximize an objective function over time.
Linguistics: knowledge representation, grammars.

Perceptive system: a system that approximates the way a human sees, hears, and feels objects.

Vision system: captures, stores, and manipulates visual images and pictures.

Robotics: mechanical and computer devices that perform tedious tasks with high precision.

Expert system: stores knowledge and makes inferences.

Artificial intelligence is the branch of computer science concerned with making computers behave like humans. The term was coined in 1956 by John McCarthy, then at Dartmouth College. Artificial intelligence includes:

Games Playing: programming computers to play games such as chess and checkers.

Expert Systems: programming computers to make decisions in real-life situations (for example, some expert systems help doctors diagnose diseases based on symptoms).

Natural Language: programming computers to understand natural human languages.

Neural Networks: systems that simulate intelligence by attempting to reproduce the types of physical connections that occur in animal brains.

Robotics: programming computers to see and hear and react to other sensory stimuli.

Currently, no computers exhibit full artificial intelligence (that is, are able to simulate human behavior). The greatest advances have occurred in the field of games playing. The best computer chess programs are now capable of beating humans. In May 1997, an IBM supercomputer called Deep Blue defeated world chess champion Garry Kasparov in a chess match.


In the area of robotics, computers are now widely used in assembly plants, but they are capable only of very limited tasks. Robots have great difficulty identifying objects based on appearance or feel, and they still move and handle objects clumsily.
Natural-language processing offers the greatest potential rewards because it would allow people to interact with computers without needing any specialized knowledge. You could simply walk up to a computer and talk to it. Unfortunately, programming computers to understand natural languages has proved to be more difficult than originally thought. Some rudimentary translation systems that translate from one human language to another are in existence, but they are not nearly as good as human translators. There are also voice recognition systems that can convert spoken sounds into written words, but they do not understand what they are writing; they simply take dictation. Even these systems are quite limited - you must speak slowly and distinctly.
In the early 1980s, expert systems were believed to represent the future of artificial intelligence and of computers in general. To date, however, they have not lived up to expectations. Many expert systems help human experts in such fields as medicine and engineering, but they are very expensive to produce and are helpful only in special situations. Today, the hottest area of artificial intelligence is neural networks, which are proving successful in a number of disciplines such as voice recognition and natural-language processing.
There are several programming languages that are known as AI languages because they are used almost exclusively for AI applications. The two most common are LISP and Prolog.
Expert systems can:

Provide a high potential payoff or significantly reduced downside risk
Capture and preserve irreplaceable human expertise
Provide expertise needed at a number of locations at the same time or in a hostile environment that is dangerous to human health
Provide expertise that is expensive or rare
Develop a solution faster than human experts can
Provide expertise needed for training and development to share the wisdom of human experts with a large number of people
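
To make the expert-system idea above concrete, here is a minimal sketch of a forward-chaining rule engine in Python. The rules, facts and the diagnosis example are invented for illustration only; real expert systems use far larger knowledge bases and more sophisticated inference.

# Minimal sketch of a forward-chaining expert system (illustrative only).
# The rules and facts below are made-up examples, not from any real system.

rules = [
    # (set of conditions that must all be known, conclusion to add)
    ({"fever", "cough"}, "possible_flu"),
    ({"possible_flu", "body_ache"}, "recommend_rest_and_fluids"),
    ({"rash", "fever"}, "refer_to_specialist"),
]

def infer(facts, rules):
    """Repeatedly apply rules until no new conclusions can be drawn."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

observed = {"fever", "cough", "body_ache"}
print(infer(observed, rules))   # adds 'possible_flu', then 'recommend_rest_and_fluids'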

CHAPTER 2
REVIEW
OF
LITERATURE

HISTORY OF ARTIFICIAL INTELLIGENCE

The history of artificial intelligence (AI) began in antiquity, with myths, stories and
rumors of artificial beings endowed with intelligence or consciousness by master
craftsmen; as Pamela McCorduck writes, AI began with "an ancient wish to forge the
gods."
The seeds of modern AI were planted by classical philosophers who attempted to describe
the process of human thinking as the mechanical manipulation of symbols. This work
culminated in the invention of the programmable digital computer in the 1940s, a machine
based on the abstract essence of mathematical reasoning. This device and the ideas behind
it inspired a handful of scientists to begin seriously discussing the possibility of building an
electronic brain.
The field of AI research was founded at a conference on the campus of Dartmouth College
in the summer of 1956. Those who attended would become the leaders of AI research for
decades. Many of them predicted that a machine as intelligent as a human being would
exist in no more than a generation and they were given millions of dollars to make this
vision come true. Eventually it became obvious that they had grossly underestimated the
difficulty of the project. In 1973, in response to the criticism of James Lighthill and ongoing pressure from Congress, the U.S. and British governments stopped funding

undirected research into artificial intelligence. Seven years later, a visionary initiative by
the Japanese Government inspired governments and industry to provide AI with billions of
dollars, but by the late 80s the investors became disillusioned and withdrew funding again.
This cycle of boom and bust, of "AI winters" and summers, continues to haunt the field.
Undaunted, there are those who make extraordinary predictions even now.
Progress in AI has continued, despite the rise and fall of its reputation in the eyes of
government bureaucrats and venture capitalists. Problems that had begun to seem
impossible in 1970 have been solved and the solutions are now used in successful
commercial products. However, no machine has been built with a human level of
intelligence, contrary to the optimistic predictions of the first generation of AI researchers.
"We can only see a short distance ahead," admitted Alan Turing, in a famous 1950 paper
that catalyzed the modern search for machines that think. "But," he added, "we can see
much that must be done."
Realistic humanoid automatons were built by craftsmen from every civilization, including Yan Shi, Hero of Alexandria, Al-Jazari and Wolfgang von Kempelen. The oldest known automatons were the sacred statues of ancient Egypt and Greece. The faithful believed that craftsmen had imbued these figures with very real minds, capable of wisdom and emotion. Hermes Trismegistus wrote that "by discovering the true nature of the gods, man has been able to reproduce it."

Formal reasoning
Artificial intelligence is based on the assumption that the process of human thought can be
mechanized. The study of mechanical - or "formal" - reasoning has a long history.
Chinese, Indian and Greek philosophers all developed structured methods of formal
deduction in the first millennium BCE. Their ideas were developed over the centuries by
philosophers such as Aristotle (who gave a formal analysis of the syllogism), Euclid (whose Elements was a model of formal reasoning), al-Khwārizmī (who developed algebra and gave his name to "algorithm") and European scholastic philosophers such as William of Ockham and Duns Scotus.

Majorcan philosopher Ramon Llull (1232-1315) developed several logical machines devoted to the production of knowledge by logical means. Llull described his machines as mechanical entities that could combine basic and undeniable truths by simple logical operations, produced by the machine by mechanical means, in such ways as to produce all the possible knowledge. Llull's work had a great influence on Gottfried Leibniz, who redeveloped his ideas.

Gottfried Leibniz speculated that human reason could be reduced to mechanical calculation. In the 17th century, Leibniz, Thomas Hobbes and René Descartes explored the possibility that all rational thought could be made as systematic as algebra or geometry. Hobbes famously wrote in Leviathan: "reason is nothing but reckoning". Leibniz envisioned a universal language of reasoning (his characteristica universalis) which would reduce argumentation to calculation, so that "there would be no more need of disputation between two philosophers than between two accountants. For it would suffice to take their pencils in hand, sit down to their slates, and say to each other (with a friend as witness, if they liked): Let us calculate." These philosophers had begun to articulate the physical symbol system hypothesis that would become the guiding faith of AI research.
In the 20th century, the study of mathematical logic provided the essential breakthrough
that made artificial intelligence seem plausible. The foundations had been set by such
works as Boole's The Laws of Thought and Frege's Begriffsschrift. Building on Frege's system, Russell and Whitehead presented a formal treatment of the foundations of mathematics in their masterpiece, the Principia Mathematica, in 1913. Inspired by Russell's success, David Hilbert challenged mathematicians of the 1920s and 30s to answer this fundamental question: "can all of mathematical reasoning be formalized?" His question was answered by Gödel's incompleteness proof, Turing's machine and Church's lambda calculus. Their answer was surprising in two ways.

The ENIAC, at the Moore School of Electrical Engineering.
First, they proved that there were, in fact, limits to what mathematical logic could accomplish.
But second (and more important for AI) their work suggested that, within these limits, any form
of mathematical reasoning could be mechanized. The Church-Turing thesis implied that a
mechanical device, shuffling symbols as simple as 0 and 1, could imitate any conceivable
process of mathematical deduction. The key insight was the Turing machine - a simple
theoretical construct that captured the essence of abstract symbol manipulation. This invention
would inspire a handful of scientists to begin discussing the possibility of thinking machines.

Computer science

Calculating machines were built in antiquity and improved throughout history by many
mathematicians, including (once again) philosopher Gottfried Leibniz. In the early 19th century,
Charles Babbage designed a programmable computer (the Analytical Engine), although it was
never built. Ada Lovelace speculated that the machine "might compose elaborate and scientific
pieces of music of any degree of complexity or extent". (She is often credited as the first
programmer because of a set of notes she wrote that completely detail a method for calculating
Bernoulli numbers with the Engine.)
The first modern computers were the massive code breaking machines of the Second World War
(such as Z3, ENIAC and Colossus). The latter two of these machines were based on the
theoretical foundation laid by Alan Turing and developed by John von Neumann.

The birth of artificial intelligence 1943-1956

The IBM 702: a computer used by the first generation of AI researchers.


In the 1940s and 50s, a handful of scientists from a variety of fields (mathematics, psychology,
engineering, economics and political science) began to discuss the possibility of creating an
artificial brain. The field of artificial intelligence research was founded as an academic discipline
in 1956.

Cybernetics and early neural networks


The earliest research into thinking machines was inspired by a confluence of ideas that became
prevalent in the late 30s, 40s and early 50s. Recent research in neurology had shown that the
brain was an electrical network of neurons that fired in all-or-nothing pulses. Norbert Wiener's
cybernetics described control and stability in electrical networks. Claude Shannon's information
theory described digital signals (i.e., all-or-nothing signals). Alan Turing's theory of computation
showed that any form of computation could be described digitally. The close relationship
between these ideas suggested that it might be possible to construct an electronic brain.
Examples of work in this vein include robots such as W. Grey Walter's turtles and the Johns
Hopkins Beast. These machines did not use computers, digital electronics or symbolic reasoning;
they were controlled entirely by analog circuitry.
Walter Pitts and Warren McCulloch analyzed networks of idealized artificial neurons and showed
how they might perform simple logical functions. They were the first to describe what later
researchers would call a neural network. One of the students inspired by Pitts and McCulloch
was a young Marvin Minsky, then a 24-year old graduate student. In 1951 (with Dean Edmonds)
he built the first neural net machine, the SNARC. Minsky was to become one of the most
important leaders and innovators in AI for the next 50 years.

Turing's test
In 1950 Alan Turing published a landmark paper in which he speculated about the possibility of
creating machines that think. He noted that "thinking" is difficult to define and devised his
famous Turing Test. If a machine could carry on a conversation (over a teleprinter) that was
indistinguishable from a conversation with a human being, then it was reasonable to say that the
machine was "thinking". This simplified version of the problem allowed Turing to argue
convincingly that a "thinking machine" was at least plausible and the paper answered all the most
common objections to the proposition. The Turing Test was the first serious proposal in the
philosophy of artificial intelligence.

Game AI
In 1951, using the Ferranti Mark 1 machine of the University of Manchester, Christopher Strachey wrote a checkers program and Dietrich Prinz wrote one for chess. Arthur Samuel's checkers program, developed in the middle 50s and early 60s, eventually achieved sufficient skill to challenge a respectable amateur. Games would continue to be used as a measure of progress in AI throughout its history.

Symbolic reasoning and the Logic Theorist


When access to digital computers became possible in the middle fifties, a few scientists
instinctively recognized that a machine that could manipulate numbers could also manipulate
symbols and that the manipulation of symbols could well be the essence of human thought. This
was a new approach to creating thinking machines.
In 1955, Allen Newell and (future Nobel Laureate) Herbert A. Simon created the "Logic
Theorist" (with help from J. C. Shaw). The program would eventually prove 38 of the first 52
theorems in Russell and Whitehead's Principia Mathematica, and find new and more elegant proofs for some.
Simon said that they had "solved the venerable mind/body problem, explaining how a system
composed of matter can have the properties of mind." (This was an early statement of the
philosophical position John Searle would later call "Strong AI": that machines can contain minds
just as human bodies do.)

Dartmouth Conference 1956: the birth of AI


The Dartmouth Conference of 1956 was organized by Marvin Minsky, John McCarthy and two
senior scientists: Claude Shannon and Nathaniel Rochester of IBM. The proposal for the
conference included this assertion: "every aspect of learning or any other feature of intelligence
can be so precisely described that a machine can be made to simulate it". The participants
included Ray Solomonoff, Oliver Selfridge, Trenchard More, Arthur Samuel, Allen Newell and
Herbert A. Simon, all of whom would create important programs during the first decades of AI
research. At the conference Newell and Simon debuted the "Logic Theorist" and McCarthy
persuaded the attendees to accept "Artificial Intelligence" as the name of the field. The 1956

Dartmouth conference was the moment that AI gained its name, its mission, its first success and
its major players, and is widely considered the birth of AI.

The golden years 1956-1974


The years after the Dartmouth conference were an era of discovery, of sprinting across new
ground. The programs that were developed during this time were, to most people, simply
"astonishing": computers were solving algebra word problems, proving theorems in geometry
and learning to speak English. Few at the time would have believed that such "intelligent"
behavior by machines was possible at all. Researchers expressed an intense optimism in private
and in print, predicting that a fully intelligent machine would be built in less than 20 years.
Government agencies like ARPA poured money into the new field.

The work
There were many successful programs and new directions in the late 50s and 1960s. Among the
most influential were these:

Reasoning as search
Many early AI programs used the same basic algorithm. To achieve some goal (like winning a
game or proving a theorem), they proceeded step by step towards it (by making a move or a
deduction) as if searching through a maze, backtracking whenever they reached a dead end. This
paradigm was called "reasoning as search".
The principal difficulty was that, for many problems, the number of possible paths through the
"maze" was simply astronomical (a situation known as a "combinatorial explosion").
Researchers would reduce the search space by using heuristics or "rules of thumb" that would
eliminate those paths that were unlikely to lead to a solution.
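
A minimal sketch of this "reasoning as search" idea is given below in Python: a depth-first search that backtracks at dead ends, with an optional heuristic acting as a rule of thumb to order the choices. The maze and heuristic are assumptions made up for illustration; this is not a reconstruction of any historical program such as the General Problem Solver.

# Sketch of "reasoning as search": depth-first search with backtracking.
# The graph of possible moves and the heuristic are illustrative assumptions.

def search(state, goal, moves, path=None, visited=None, heuristic=None):
    """Return a list of states from `state` to `goal`, or None if no path is found."""
    path = [state] if path is None else path + [state]
    visited = set() if visited is None else visited
    visited.add(state)
    if state == goal:
        return path
    candidates = [s for s in moves.get(state, []) if s not in visited]
    if heuristic is not None:          # a "rule of thumb" to try promising moves first
        candidates.sort(key=heuristic)
    for nxt in candidates:
        result = search(nxt, goal, moves, path, visited, heuristic)
        if result is not None:         # success somewhere down this branch
            return result
    return None                        # dead end: backtrack

maze = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": [], "E": ["GOAL"]}
print(search("A", "GOAL", maze))       # ['A', 'C', 'E', 'GOAL']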
Newell and Simon tried to capture a general version of this algorithm in a program called the
"General Problem Solver". Other "searching" programs were able to accomplish impressive tasks
like solving problems in geometry and algebra, such as Herbert Gelernter's Geometry Theorem

Prover (1958) and SAINT, written by Minsky's student James Slagle (1961). Other programs
searched through goals and sub goals to plan actions, like the STRIPS system developed at
Stanford to control the behavior of their robot Shakey.

An example of a semantic network

Natural language
An important goal of AI research is to allow computers to communicate in natural languages like
English. An early success was Daniel Bobrow's program STUDENT, which could solve high
school algebra word problems.
A semantic net represents concepts (e.g. "house", "door") as nodes and relations among concepts (e.g. "has-a") as links between the nodes. The first AI program to use a semantic net was written by Ross Quillian, and the most successful (and controversial) version was Roger Schank's Conceptual Dependency theory.
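
As a rough illustration, a small semantic net can be represented in Python as a dictionary of labelled links; the concepts and relations below are assumptions chosen only to show the node-and-link idea.

# Tiny semantic network: concepts are nodes, labelled links are relations.
# The facts below ("house has-a door", etc.) are illustrative assumptions.

semantic_net = {
    "house":    [("has-a", "door"), ("has-a", "roof"), ("is-a", "building")],
    "door":     [("made-of", "wood")],
    "building": [("is-a", "structure")],
}

def related(concept, relation, net):
    """Return all concepts linked from `concept` by the given relation."""
    return [target for rel, target in net.get(concept, []) if rel == relation]

print(related("house", "has-a", semantic_net))   # ['door', 'roof']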
Joseph Weizenbaum's ELIZA could carry out conversations that were so realistic that users occasionally were fooled into thinking they were communicating with a human being and not a program. But in fact, ELIZA had no idea what she was talking about. She simply gave a canned response or repeated back what was said to her, rephrasing her response with a few grammar rules. ELIZA was the first chatterbot.
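
The flavour of ELIZA's canned, pattern-matched replies can be sketched in a few lines of Python. The patterns and responses here are invented for illustration and are far simpler than Weizenbaum's original script.

# ELIZA-style canned responses: match a keyword pattern, echo part of the input back.
# The patterns and replies are illustrative assumptions, not the original script.
import re

chat_rules = [
    (r"\bI need (.+)", "Why do you need {0}?"),
    (r"\bI am (.+)", "How long have you been {0}?"),
    (r"\b(mother|father|family)\b", "Tell me more about your family."),
]

def respond(sentence):
    for pattern, template in chat_rules:
        match = re.search(pattern, sentence, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please go on."             # default canned reply

print(respond("I am feeling anxious"))  # How long have you been feeling anxious?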

Micro-worlds
In the late 60s, Marvin Minsky and Seymour Papert of the MIT AI Laboratory proposed that AI
research should focus on artificially simple situations known as micro-worlds. They pointed out
that in successful sciences like physics, basic principles were often best understood using
simplified models like frictionless planes or perfectly rigid bodies. Much of the research focused
on a "blocks world," which consists of colored blocks of various shapes and sizes arrayed on a
flat surface.
This paradigm led to innovative work in machine vision by Gerald Sussman (who led the team),
Adolfo Guzman, David Waltz (who invented "constraint propagation"), and especially Patrick
Winston. At the same time, Minsky and Papert built a robot arm that could stack blocks, bringing
the blocks world to life. The crowning achievement of the micro-world program was Terry
Winograd's SHRDLU. It could communicate in ordinary English sentences, plan operations and
execute them.

The optimism
The first generation of AI researchers made these predictions about their work:

1958, H. A. Simon and Allen Newell: "within ten years a digital computer will be
the world's chess champion" and "within ten years a digital computer will discover
and prove an important new mathematical theorem."

1965, H. A. Simon: "machines will be capable, within twenty years, of doing any
work a man can do."

1967, Marvin Minsky: "Within a generation ... the problem of creating 'artificial
intelligence' will substantially be solved."

1970, Marvin Minsky (in Life Magazine): "In from three to eight years we will
have a machine with the general intelligence of an average human being."

The money
In June 1963, MIT received a $2.2 million grant from the newly created Advanced Research
Projects Agency (later known as DARPA). The money was used to fund project MAC which
subsumed the "AI Group" founded by Minsky and McCarthy five years earlier. ARPA continued
to provide three million dollars a year until the 70s. ARPA made similar grants to Newell and
Simon's program at CMU and to the Stanford AI Project (founded by John McCarthy in 1963).
Another important AI laboratory was established at Edinburgh University by Donald Michie in
1965. These four institutions would continue to be the main centers of AI research (and funding)
in academia for many years.
The money was proffered with few strings attached: J. C. R. Licklider, then the director of
ARPA, believed that his organization should "fund people, not projects!" and allowed researchers
to pursue whatever directions might interest them. This created a freewheeling atmosphere at
MIT that gave birth to the hacker culture, but this "hands off" approach would not last.

The first AI winter 1974-1980


In the 70s, AI was subject to critiques and financial setbacks. AI researchers had failed to
appreciate the difficulty of the problems they faced. Their tremendous optimism had raised
expectations impossibly high, and when the promised results failed to materialize, funding for AI
disappeared. At the same time, the field of connectionism (or neural nets) was shut down almost
completely for 10 years by Marvin Minsky's devastating criticism of perceptrons. Despite the
difficulties with public perception of AI in the late 70s, new ideas were explored in logic
programming, commonsense reasoning and many other areas.

The problems
In the early seventies, the capabilities of AI programs were limited. Even the most impressive
could only handle trivial versions of the problems they were supposed to solve; all the programs
were, in some sense, "toys". AI researchers had begun to run into several fundamental limits that
could not be overcome in the 1970s. Although some of these limits would be conquered in later
decades, others still stymie the field to this day.

Limited computer power: there was not enough memory or processing speed to accomplish anything truly useful. For example, Ross Quillian's successful work on natural language was demonstrated with a vocabulary of only twenty words, because that was all that would fit in memory. Hans Moravec argued in 1976 that computers were still millions of times too weak to exhibit intelligence. He suggested an analogy: artificial intelligence requires computer power in the same way that aircraft require horsepower. Below a certain threshold, it's impossible, but, as power increases, eventually it could become easy. With regard to computer vision, Moravec estimated that simply matching the edge and motion detection capabilities of the human retina in real time would require a general-purpose computer capable of 10^9 operations per second (1,000 MIPS). As of 2011, practical computer vision applications require 10,000 to 1,000,000 MIPS. By comparison, the fastest supercomputer in 1976, the Cray-1 (retailing at $5 million to $8 million), was only capable of around 80 to 130 MIPS, and a typical desktop computer at the time achieved less than 1 MIPS.

Intractability and the combinatorial explosion: in 1972 Richard Karp (building on Stephen Cook's 1971 theorem) showed there are many problems that can probably only be solved in exponential time (in the size of the inputs). Finding optimal solutions to these problems requires unimaginable amounts of computer time except when the problems are trivial. This almost certainly meant that many of the "toy" solutions used by AI would probably never scale up into useful systems.

Commonsense knowledge and reasoning: many important artificial intelligence applications like vision or natural language require simply enormous amounts of information about the world: the program needs to have some idea of what it might be looking at or what it is talking about. This requires that the program know most of the same things about the world that a child does. Researchers soon discovered that this was a truly vast amount of information. No one in 1970 could build a database so large and no one knew how a program might learn so much information.

Moravec's paradox: proving theorems and solving geometry problems is comparatively easy for computers, but a supposedly simple task like recognizing a face or crossing a room without bumping into anything is extremely difficult. This helps explain why research into vision and robotics had made so little progress by the middle 1970s.

The frame and qualification problems. AI researchers (like John McCarthy) who used
logic discovered that they could not represent ordinary deductions that involved planning
or default reasoning without making changes to the structure of logic itself. They
developed new logics (like non-monotonic logics and modal logics) to try to solve the
problems.

The agencies which funded AI research (such as the British government, DARPA and
NRC) became frustrated with the lack of progress and eventually cut off almost all funding
for undirected research into AI. The pattern began as early as 1966 when the ALPAC report appeared criticizing machine translation efforts. After spending 20 million dollars, the NRC ended all support. In 1973, the Lighthill report on the state of AI research in England criticized the utter failure of AI to achieve its "grandiose objectives" and led to the dismantling of AI research in that country. (The report specifically mentioned the combinatorial explosion problem as a reason for AI's failings.) DARPA was deeply disappointed with researchers working on the Speech Understanding Research program at CMU and canceled an annual grant of three million dollars. By 1974, funding for AI
projects was hard to find.
Hans Moravec blamed the crisis on the unrealistic predictions of his colleagues. "Many
researchers were caught up in a web of increasing exaggeration." However, there was
another issue: since the passage of the Mansfield Amendment in 1969, DARPA had been
under increasing pressure to fund "mission-oriented direct research, rather than basic
undirected research". Funding for the creative, freewheeling exploration that had gone on
in the 60s would not come from DARPA. Instead, the money was directed at specific
projects with clear objectives, such as autonomous tanks and battle management systems.

Critiques from across campus


Several philosophers had strong objections to the claims being made by AI researchers. One of the earliest was John Lucas, who argued that Gödel's incompleteness theorem showed that a formal system (such as a computer program) could never see the truth of certain statements, while a human being could. Hubert Dreyfus ridiculed the broken promises of the 60s and critiqued the assumptions of AI, arguing that human reasoning actually involved very little "symbol processing" and a great deal of embodied, instinctive, unconscious "know how". John Searle's Chinese Room argument, presented in 1980, attempted to show that a program could not be said to "understand" the symbols that it uses (a quality called "intentionality"). If the symbols have no meaning for the machine, Searle argued, then the machine cannot be described as "thinking".
These critiques were not taken seriously by AI researchers, often because they seemed so far off
the point. Problems like intractability and commonsense knowledge seemed much more

immediate and serious. It was unclear what difference "know how" or "intentionality" made to an
actual computer program. Minsky said of Dreyfus and Searle "they misunderstand, and should be
ignored." Dreyfus, who taught at MIT, was given a cold shoulder: he later said that AI
researchers "dared not be seen having lunch with me."JosephWeizenbaum, the author of ELIZA,
felt his colleagues' treatment of Dreyfus was unprofessional and childish. Although he was an
outspoken critic of Dreyfus' positions, he "deliberately made it plain that theirs was not the way
to treat a human being."
Weizenbaum began to have serious ethical doubts about AI when Kenneth Colby wrote
DOCTOR, a chatterbot therapist. Weizenbaum was disturbed that Colby saw his mindless
program as a serious therapeutic tool. A feud began, and the situation was not helped when Colby
did not credit Weizenbaum for his contribution to the program. In 1976, Weizenbaum published
Computer Power and Human Reason which argued that the misuse of artificial intelligence has
the potential to devalue human life.

Perceptrons and the dark age of connectionism


The perceptron was a form of neural network introduced in 1958 by Frank Rosenblatt, who had
been a schoolmate of Marvin Minsky at the Bronx High School of Science. Like most AI
researchers, he was optimistic about their power, predicting that "perceptrons may eventually be
able to learn, make decisions, and translate languages." An active research program into the
paradigm was carried out throughout the 60s but came to a sudden halt with the publication of
Minsky and Papert's 1969 book Perceptrons. It suggested that there were severe limitations to
what perceptrons could do and that Frank Rosenblatt's predictions had been grossly exaggerated.
The effect of the book was devastating: virtually no research at all was done in connectionism for
10 years. Eventually, a new generation of researchers would revive the field and thereafter it
would become a vital and useful part of artificial intelligence. Rosenblatt would not live to see
this, as he died in a boating accident shortly after the book was published.
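
For concreteness, the sketch below implements a Rosenblatt-style perceptron learning rule in plain Python. The training data (the logical AND function), learning rate and number of epochs are assumptions chosen only to illustrate how the weights are adjusted.

# Rosenblatt-style perceptron trained on the logical AND function (illustrative).

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), target) pairs, with targets 0 or 1."""
    w = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            output = 1 if (w[0] * x1 + w[1] * x2 + bias) > 0 else 0
            error = target - output
            # Perceptron learning rule: nudge the weights toward the target.
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            bias += lr * error
    return w, bias

and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
for (x1, x2), _ in and_data:
    print(x1, x2, "->", 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0)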

CHAPTER 3
RESEARCH
METHODOLOGY

RESEARCH METHODOLOGY:
Research methodology is the process used to collect information and data for the purpose of making business decisions. The methodology may include publication research, interviews, surveys and other research techniques, and could include both present and historical information.
Research methodology is a way to find out the result of a given problem on a specific matter, also referred to as the research problem. In methodology, the researcher uses different criteria for solving or investigating the given research problem, and different sources use different types of methods for solving it. If we think about the word methodology, it is the way of searching for or solving the research problem.
In research methodology, the researcher always tries to investigate the given question systematically and to find out all the answers till the conclusion. If the researcher does not work systematically on the problem, there is less possibility of finding the final result. While exploring research questions, a researcher faces a lot of problems that can be effectively resolved by using the correct research methodology.

OBJECTIVE OF THE STUDY

To learn about AI components and applications.
To know the various areas where the user can interact with AI.
To gather knowledge about the components of AI.
To know the advantages and disadvantages of AI.

Scope of the study


This study comprises in-depth coverage of the emergence of AI and the evolution of AI through various versions, as well as the use of AI in various areas such as hospitals and institutions. Only a few versions have been covered. However, this study is limited to the extent that it is only related to the study of the emergence of AI. It does not focus on the wide spectrum of effects which AI has on our social life and on businesses. There is no doubt in saying that technology is changing our view of life, but side by side it has some side effects, or we can say bad effects, especially on our children and society, and this can't be neglected.

DATA COLLECTION
Secondary data is data collected by someone other than the user. Common sources of secondary data for social science include censuses, organizational records and data collected through qualitative methodologies or research. Primary data, by contrast, is collected by the investigator conducting the research.
Secondary data analysis saves time that would otherwise be spent collecting data and,
particularly in the case of quantitative data, provides larger and higher-quality databases that
would be unfeasible for any individual researcher to collect on their own. In addition, analysts of
social and economic change consider secondary data essential, since it is impossible to conduct a
new survey that can adequately capture past change and/or developments.

SOURCES OF SECONDARY DATA


As is the case in primary research, secondary data can be obtained from different research
strands:
Prior documentation such as census, housing, social security and electoral statistics, and other related databases; internet searches, libraries, progress reports, etc. It does not include interviews, as these collect primary data for analysis to generate information.
A clear benefit of using secondary data is that much of the background work needed has already
been carried out, for example: literature reviews, case studies might have been carried out,
published texts and statistics could have been already used elsewhere, media promotion and
personal contacts have also been utilized.
This wealth of background work means that secondary data generally have a pre-established
degree of validity and reliability which need not be re-examined by the researcher who is reusing such data.

Furthermore, secondary data can also be helpful in the research design of subsequent primary research and can provide a baseline with which the collected primary data results can be compared. Therefore, it is always wise to begin any research activity with a review of the
secondary data.

Advantages of Secondary data


1. It is economical. It saves efforts and expenses.
2. It is time saving.
3. It helps to make primary data collection more specific since with the help of
secondary data, we are able to make out what are the gaps and deficiencies and
what additional information needs to be collected.
4. It helps to improve the understanding of the problem.
5. It provides a basis for comparison for the data that is collected by the researcher.

Disadvantages of Secondary Data


1. Secondary data seldom fits exactly into the framework of the marketing research factors. Reasons for this are:
   a. Unit of secondary data collection: suppose you want information on disposable income, but the data is available only on gross income. The information may not be the same as required.
2. Accuracy of secondary data is not known.
3. Data may be outdated.

LIMITATIONS OF THE STUDY

This study is not very extensive because of lack of time.
In this study the data is not first-hand, because it is collected from secondary sources.
This report is related only to the emergence of Artificial Intelligence.
It is a one-sided study which basically covers how AI emerged and developed.
This study does not focus on the impact of AI on society.

CHAPTER 4
DATA ANALYSIS
AND
INTERPRETATION

Artificial intelligence (AI) is the intelligence exhibited by machines or software. It is also an academic field of study, which studies the goal of creating intelligence. Major AI researchers and textbooks define this field as "the study and design of intelligent agents", where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1955, defines it as "the science and engineering of making intelligent machines".
AI research is highly technical and specialized, and is deeply divided into subfields that often fail
to communicate with each other. Some of the division is due to social and cultural factors:
subfields have grown up around particular institutions and the work of individual researchers. AI
research is also divided by several technical issues. Some subfields focus on the solution of
specific problems. Others focus on one of several possible approaches or on the use of a
particular tool or towards the accomplishment of particular applications.
The central problems (or goals) of AI research include reasoning, knowledge, planning, learning,
natural language processing (communication), perception and the ability to move and manipulate
objects. General intelligence is still among the field's long-term goals. Currently popular approaches include
statistical methods, computational intelligence and traditional symbolic AI. There are a large
number of tools used in AI, including versions of search and mathematical optimization, logic,
methods based on probability and economics, and many others. The AI field is interdisciplinary,
in which a number of sciences and professions converge, including computer science,
mathematics, psychology, linguistics, philosophy and neuroscience, as well as other specialized
fields such as artificial psychology.
The field was founded on the claim that a central property of humans, intelligence (the sapience of Homo sapiens), "can be so precisely described that a machine can be made to simulate it."
This raises philosophical issues about the nature of the mind and the ethics of creating artificial
beings endowed with human-like intelligence, issues which have been addressed by myth, fiction
and philosophy since antiquity. Artificial intelligence has been the subject of tremendous
optimism but has also suffered stunning setbacks. Today it has become an essential part of the
technology industry, providing the heavy lifting for many of the most challenging problems in
computer science.

Knowledge-based interfaces to existing statistical software are one way of applying artificial
intelligence methods in data analysis and interpretation. In this paper we discuss our experiences
from the implementations of knowledge-based systems for time series analyses, in particular for
eliminating seasonal variations in statistical studies of industry and trade, and for multivariate
tabular analysis. We further discuss the problems of realizing knowledge-based systems in a
production environment and present an integrated conceptual model for knowledge-based data
analysis and interpretation.
Current practice for the management and extraction of meaningful information from available
data, to inform understanding of water supply system performance, is limited, time consuming
and of inadequate accuracy. This is primarily due to reliance on human data analysis and
interpretation, which is unfeasible for the volume and complexity of data involved. Further, it is
often only practical and economically justifiable to instigate such analysis in a reactive manner to
improve problem definition such as the determination of the magnitude of a burst once it has
been recognized through a breach of serviceability limits. It is not cost effective to rely on human
data analysis and interpretation and the inherent inefficiency contributes to lowered regulatory
compliance and standards of service. A new automated approach that applies Artificial Neural
Network (ANN) technology has been developed to provide more efficient and consistent analysis
of large data volumes. ANNs are trained to infer the future behavior of a system, or to classify
current behavior, by analyzing data describing its past performance, so that problem solving is a
matter of learning by example rather than programming. This is especially attractive for domains
where an understanding of the problem to be solved is limited, but where training data is readily
available, for example, flow and pressure time series from a water supply system. The system
developed can be applied to historic flow time series data to construct a grey box model for the
detection and classification of burst mains. Initial ANN data analysis to construct a probability
density model of the future flow profile was followed by application of a Fuzzy Inference
System for classification such that confidence intervals could be assigned to the detected burst
events. This artificial intelligence analysis system was applied to four month periods of sample
flow data (provided by a major UK water company) from district meter areas of various size,
complexity and connectivity and successfully identified a number of burst events within the data;
some of which could not have been detected by manual analysis. When integrated with real time

communications, the 'intelligent' analysis system developed can be readily applied to an online
environment providing the capability to proactively manage water supply systems. This paper
was presented at the 8th Annual Water Distribution Systems Analysis Symposium which was
held with the generous support of Awwa research foundation (AwwaRF).
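
The grey-box model and fuzzy inference system described above are not reproduced here, but the general idea of training a neural network to flag unusual flow readings might be sketched roughly as follows. The synthetic data, features, class labels and the use of scikit-learn's MLPClassifier are all assumptions made for illustration; they do not represent the authors' actual system.

# Rough illustration only: classify flow readings as normal vs. burst-like.
# The synthetic data and features are assumptions, not the paper's model.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# One feature vector per reading: [flow rate, hour of day]; label 1 = suspected burst.
normal = np.column_stack([rng.normal(10, 1, 500), rng.integers(0, 24, 500)])
burst = np.column_stack([rng.normal(18, 2, 50), rng.integers(0, 24, 50)])
X = np.vstack([normal, burst])
y = np.array([0] * 500 + [1] * 50)

model = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
model.fit(X, y)

# A new, unusually high night-time flow reading should be flagged as burst-like.
print(model.predict([[19.0, 3]]))        # expected: [1]
print(model.predict_proba([[19.0, 3]]))  # class probabilities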


The Advantages of Artificial Intelligence (AI)

Jobs: depending on the level and type of intelligence these machines receive in the future, it will obviously have an effect on the type of work they can do, and how well they can do it (they can become more efficient). As the level of AI increases, so will their competency to deal with difficult, complex, even dangerous tasks that are currently done by humans - a form of applied artificial intelligence.

They don't stop: as they are machines, there is no need for sleep, they don't get ill, and there is no need for breaks or Facebook; they are able to go, go, go! There may obviously be the need for them to be charged or refueled; however, the point is that they are definitely going to get a lot more work done than we can. Take the finance industry, for example: there are constant stories arising of artificial intelligence in finance and of stock traders soon being a thing of the past.

No risk of harm: when we are exploring new undiscovered land or even planets, if a machine gets broken or dies, there is no harm done, as they don't feel and they don't have emotions. For humans, going on the same type of expeditions a machine does may simply not be possible, or they would be exposing themselves to high-risk situations.

Act as aids: they can act as 24/7 aids to children with disabilities or the elderly; they could even act as a source for learning and teaching. They could even be part of security, alerting you to possible fires that you are in threat of, or fending off crime.

Their function is almost limitless: as the machines will be able to do everything (but just better), their use essentially has no boundaries. They will make fewer mistakes, they are emotionless, they are more efficient, and they are basically giving us more free time to do as we please.

The Disadvantages of Artificial Intelligence (AI)

Over-reliance on AI: as you may have seen in many films such as The Matrix, I, Robot or even kids' films such as WALL-E, if we rely on machines to do almost everything for us, we become very dependent - so much so that they have the potential to ruin our lives if something were to go wrong. Although the films are essentially just fiction, it wouldn't be too smart not to have some sort of backup plan for potential issues on our part.

Human feel: as they are machines, they obviously can't provide you with that human touch and quality - the feeling of togetherness and emotional understanding. Machines lack the ability to sympathize and empathize with your situation, and may act irrationally as a consequence.

Inferior: as machines will be able to perform almost every task better than us in practically all respects, they will take up many of our jobs, which will then result in masses of people who are jobless and, as a result, feel essentially useless. This could then lead to issues of mental illness, obesity problems, etc.

Misuse: there is no doubt that this level of technology in the wrong hands can cause mass destruction; robot armies could be formed, or they could perhaps malfunction or be corrupted, in which case we could be facing a scene similar to that of Terminator (hey, you never know).

Ethically wrong? People say that the gift of intuition and intelligence was God's gift to mankind, and so to replicate that would be, in a way, to play God. Therefore it is not right to even attempt to clone our intelligence.

INTRODUCTION TO ROBOTICS
Robotics research often focuses on increasing robot capability. If end users do not perceive
these increases, however, user acceptance may not improve. In this work, we explore the idea of
perceived capability and how it relates to true capability, differentiating between physical and
social capabilities. We present a framework that outlines their potential relationships, along with
two user studies, on robot speed and speech, exploring these relationships. Our studies identify
two possible consequences of the disconnect between the true and perceived capability: (1)
under-perception: true improvements in capability may not lead to perceived improvements and
(2) over-perception: true improvements in capability may lead to additional perceived
improvements that do not actually exist.

Roboticists often focus on increasing robot capability: we make robots faster, help them perceive the world and enable them to interact with people. The goal is to make robots a part of everyday life, with purposes ranging from entertainment to assisting with tedious or dangerous tasks.
An integral part of making this goal a reality is acceptance, which is largely affected by users' perceptions. Therefore, increasing robot capability does not necessarily lead to increased acceptance, since it is actually the user's perception of capability - the robot's perceived capability - that determines acceptance.
For example, imagine a robot designed to clean the home. If a user perceives the robot's capability to be lacking, they may be reluctant to allow it to handle their possessions. On the other hand, if the user overestimates the robot's capability, this can lead to unmet expectations and disappointment.
In this paper, we focus on the idea of perceived capability and the disconnect between it and the robot's true capability. We start by conjecturing a framework that links true and perceived capability and then outline their possible relationships. Within this framework, we investigate the effects of two important robot capabilities, speed and speech, on perceived capability via two user studies.
Framework for Perceived Capability
Prior work in anthropomorphism, mental models, and sense making shows there is often a disconnect between users' perceptions and a robot's true capability. This occurs largely because people are unfamiliar with and lack experience of robots; hence, their knowledge is often based on popular culture, which depicts a wide variety of robots with a multitude of capabilities.
In order to investigate this disconnect, we introduce a framework which enumerates the possible relationships between true and perceived capability. Our framework distinguishes between physical capability (e.g., doing laundry) and social capability (e.g., understanding what someone is saying), and between a particular skill and overall capability.
Elizabeth Cha, Anca D. Dragan and Siddhartha S. Srinivasa are with the Robotics Institute, School of Computer Science, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States ({lizcha, adragan, siddh}@cs.cmu.edu).

SPEED STUDY
Our first study is about under-perception. An increase in true capability does not necessarily
lead to an increase in perceived capability.
We exposed users to two videos of a robot performing a physical task next to an actor. In each
scenario, the robot enters the kitchen and retrieves a microwave meal while standing next to the
actor. In one video, the robot moves faster and spends less time planning.

A. Study Design
1) Manipulated Variables: We manipulated speed by producing two videos of a robot performing a physical task, retrieving a microwave meal, at two different speeds (2.3 minutes vs. 1.15 minutes to complete the task). The lower speed represents the state of the art, while the higher speed represents a capability well beyond current robot manipulation skills in both planning and execution time.

2) Participants: We recruited 20 participants (12 females and 8 males) through Amazon Mechanical Turk. All participants were located in the United States, were primary English speakers, and ranged in age from 23 to 65 years (M=39.0, SD=10.55); 40% of participants were male and 60% were female. Participants were compensated $4 for successful completion of the study.
Participants were told that they were taking part in a survey to design better home robots. All participants successfully answered a set of control questions about the videos they were shown, and none had previously participated in a study with the robot. This resulted in 14 participants spread across the 2 conditions. On average, participants rated their familiarity with robots as 2.2 (SD=1.28) and their previous level of interaction as 1.3 (SD=0.47) on a 7-point Likert scale.

3) Procedure: We opted for a within-subjects design, in which participants were shown both the slow and fast videos, in order to enable direct comparisons and to avoid relying on absolute ratings (as suggested by a pilot study). The order of the videos was counterbalanced to negate ordering effects.
Participants were given a link to the study through Amazon Mechanical Turk. After reading the instructions and giving their consent, participants were shown the first video of the robot, labeled Robot 1. To continue the study, participants had to watch the entire video. They then answered questions about the video and the robot's perceived capability. They repeated this process after watching the second video.

4) Hypothesis: We hypothesize that even doubling the speed of the robot will not affect its perceived capability, as the robot is still much slower than a human.

5) Measures: We used the perceived capability measures outlined in Section.

Results
Our hypothesis predicted that speed would not significantly affect a robot's perceived physical capability. To test this hypothesis, we performed a non-inferiority test to show that the difference between the slow and fast robot is significantly greater than a negative margin. Setting α = 5%, a one-tailed paired t-test supported the hypothesis: −0.296 > −0.3, t(25) = 1.796, p = 0.042.
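As an aside, the sketch below shows in Python (using SciPy) how a one-tailed paired test against a non-inferiority margin of this kind can be computed. It is not the study's actual analysis code; the two rating vectors and the -0.3 margin are invented purely for illustration.

# Illustrative non-inferiority check on paired Likert ratings (hypothetical data).
import numpy as np
from scipy import stats

slow = np.array([5, 4, 5, 3, 4, 5, 4, 4, 5, 3, 4, 5, 4, 4])  # ratings after the slow video
fast = np.array([5, 4, 4, 4, 4, 5, 5, 4, 5, 3, 4, 5, 4, 5])  # ratings after the fast video
margin = -0.3                                                 # non-inferiority margin

diff = slow - fast
# One-tailed paired test of H0: mean(diff) <= margin against H1: mean(diff) > margin.
t_stat, p_two_sided = stats.ttest_1samp(diff, popmean=margin)
p_one_sided = p_two_sided / 2 if t_stat > 0 else 1 - p_two_sided / 2
print(f"mean difference = {diff.mean():.3f}, t = {t_stat:.3f}, one-sided p = {p_one_sided:.3f}")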

Despite a difference in true capability that would be considered very large from a robotic
algorithmic standpoint, the mean ratings were not significantly different. Since speed is a
physical trait, we did not expect it to affect perceived social capability: we found no significant
difference in the perceived social capability ratings.

We confirmed in a follow-up question that users did perceive the difference in speed, but that this had a very small effect on the robot's perceived capability, i.e., on their perceptions of what tasks the robot is capable of performing. Participants said, for example, that despite the robot being faster, it was either still not fast enough, or that the speed did not matter because it did not change the robot's ability to perform other tasks, which therefore still could not be trusted.
These results suggest that true improvements in capability do not necessarily lead to perceived
improvements in capability: there can be a disconnect between the two.

[Figure: perceived capability results. Panel (a) shows the percentage of participants choosing the robot as more capable; panel (b) shows average Likert ratings of physical and conversational capability under the success and failure conditions.]

CAPABILITIES OF EXPERT SYSTEMS

APPLICATION OF EXPERT SYSTEMS & ARTIFICIAL INTELLIGENCE

Credit granting
Information management and retrieval
AI and expert systems embedded in products
Plant layout
Hospitals and medical facilities
Help desks and assistance
Employee performance evaluation
Loan analysis
Virus detection
Repair and maintenance
Shipping
Marketing
Warehouse optimization

Artificial Intelligence involves the study of:

o automated recognition and understanding of signals
o reasoning, planning, and decision-making
o learning and adaptation

AI has made substantial progress in recognition and learning, and in some planning and reasoning problems, but many open research problems remain. Improvements in hardware and algorithms have led to AI applications in finance, medicine, and science.

Different Types of Artificial Intelligence

1. Modeling how ideal agents should think
2. Modeling how ideal agents should act

Modern AI focuses on the latter; this report follows the same engineering approach, in which success is judged by how well the agent performs.

Turing (1950), "Computing Machinery and Intelligence"

"Can machines think?" becomes "Can machines behave intelligently?"

Operational test for intelligent behavior: the Imitation Game

It suggests the major components required for AI:
1. knowledge representation
2. reasoning
3. language and image understanding
4. learning

Cognitive Science approach

Try to get inside our minds, e.g., conduct experiments with people to try to reverse-engineer how we reason, learn, remember, and predict

Problems

Humans don't behave rationally (e.g., insurance)

The reverse engineering is very hard to do

The brain's hardware is very different from a computer program

Logic-based approach

Represent facts about the world via logic

Use logical inference as a basis for reasoning about these facts

Can be a very useful approach to AI

E.g., theorem-provers (a toy illustration of rule-based inference follows below)
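To make the idea concrete, here is a toy sketch of rule-based logical inference in Python. The facts and if-then rules are invented for the example and are not taken from any particular theorem prover.

# Tiny forward-chaining inference over propositional if-then rules (illustrative only).
facts = {"robot_in_kitchen", "meal_in_fridge"}
rules = [
    ({"robot_in_kitchen", "meal_in_fridge"}, "can_fetch_meal"),
    ({"can_fetch_meal"}, "user_is_served"),
]

changed = True
while changed:                      # keep applying rules until no new fact is derived
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))                # now includes "can_fetch_meal" and "user_is_served"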

Limitations

Does not account for an agent's uncertainty about the world

E.g., difficult to couple to vision or speech systems

Has no way to represent goals, costs, etc. (important aspects of real-world environments)

Decision theory/Economics

Set of future states of the world

Set of possible actions an agent can take

Utility = gain to an agent for each action/state pair

An agent acts rationally if it selects the action that maximizes its utility, or its expected utility if there is uncertainty (a small sketch of this follows below)

Emphasis is on autonomous agents that behave rationally (make the best predictions, take the best actions), on average over time, within computational limitations (bounded rationality)
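The sketch below illustrates this decision-theoretic view in Python; the states, probabilities, and utility numbers are made up purely to show how an agent can pick the action with the highest expected utility.

# Selecting the action with the maximum expected utility (hypothetical numbers).
p_states = {"sunny": 0.7, "rainy": 0.3}              # the agent's beliefs about future states

utility = {                                          # gain for each (action, state) pair
    ("go_outside", "sunny"): 10, ("go_outside", "rainy"): -5,
    ("stay_home", "sunny"): 2, ("stay_home", "rainy"): 3,
}

def expected_utility(action):
    return sum(p * utility[(action, state)] for state, p in p_states.items())

best_action = max(["go_outside", "stay_home"], key=expected_utility)
print(best_action, expected_utility(best_action))    # go_outside 5.5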

What's involved in Intelligence?

Ability to interact with the real world

to perceive, understand, and act

e.g., speech recognition and understanding and synthesis

e.g., image understanding

e.g., ability to take actions, have an effect

Reasoning and Planning

modeling the external world, given input

solving new problems, planning, and making decisions

ability to deal with unexpected problems, uncertainties

Learning and Adaptation

we are continuously learning and adapting

our internal models are always being updated

e.g., a baby learning to categorize and recognize animals

Example: DARPA Grand Challenge

Cash prizes ($1 to $2 million) were offered to the first robots to complete a long course completely unassisted, stimulating research in vision, robotics, planning, machine learning, reasoning, etc.

2004 Grand Challenge: a 150-mile route in the Nevada desert. The furthest any robot went was about 7 miles, although the hardest terrain was at the beginning of the course.

2005 Grand Challenge: a 132-mile race through narrow tunnels, winding mountain passes, etc. Stanford finished 1st and CMU 2nd, both in about 6 hours.

2007 Urban Grand Challenge: held in November 2007 in Victorville, California.

Summary of State of AI Systems in Practice

Speech synthesis, recognition and understanding

very useful for limited vocabulary applications

unconstrained speech understanding is still too hard

Computer vision

works for constrained problems (hand-written zip-codes)

understanding real-world, natural scenes is still too hard

Learning

adaptive systems are used in many applications, but they have their limits

Planning and Reasoning

only works for constrained problems: e.g., chess

real-world is too complex for general systems

Overall:

many components of intelligent systems are doable

there are many interesting research problems remaining

Intelligent Systems in Your Everyday Life

Post Office

automatic address recognition and sorting of mail

Banks

automatic check readers, signature verification systems

automated loan application classification

Customer Service

automatic voice recognition

The Web

Identifying your age, gender, location, from your Web surfing

Automated fraud detection

Digital Cameras

Automated face detection and focusing

Artificial intelligence (AI) is the human-like intelligence exhibited by machines or software. The
AI field is interdisciplinary, in which a number of sciences and professions converge, including
computer science, psychology, linguistics, philosophy and neuroscience, as well as other
specialized fields such as artificial psychology. Major AI researchers and textbooks define the
field as "the study and design of intelligent agents", where an intelligent agent is a system that
perceives its environment and takes actions that
maximize its chances of success. John McCarthy, who coined the term in 1955, defines it as "the
science and engineering of making intelligent machines". AI research is highly technical and
specialised, and is deeply divided into subfields that often fail to communicate with each other.
Some of the division is due to social and cultural factors: subfields have grown up around
particular institutions and the work of individual researchers. AI research is also divided by
several technical issues. Some subfields focus on the solution of specific problems. Others focus
on one of several possible approaches or on the use of a particular tool or towards the
accomplishment of particular applications. The central problems (or goals) of AI research include

reasoning, knowledge, planning, learning, natural language processing (communication),


perception and the ability to move and manipulate objects. General intelligence (or "strong AI")
is still among the field's long term goals. Currently popular approaches include statistical
methods, computational intelligence and traditional symbolic AI. There are a large number of
tools used in AI, including versions of search and mathematical optimization, logic, methods
based on probability and economics, and many others. The field was founded on the claim that a
central property of humans, intelligence (the sapience of Homo sapiens), "can be so precisely
described that a machine can be made to simulate it." This raises philosophical issues about the
nature of the mind and the ethics of creating artificial beings endowed with human-like
intelligence, issues which have been addressed by myth, fiction and philosophy since antiquity.
Artificial intelligence has been the subject of tremendous optimism but has also suffered
stunning setbacks. Today it has become an essential part of the technology industry, providing
the heavy lifting for many of the most challenging problems in computer science.

Data Analysis
It is the process of systematically applying statistical and/or logical techniques to describe and
illustrate, condense and recap, and evaluate data. According to Shamoo and Resnik (2003), various analytic procedures "provide a way of drawing inductive inferences from data and distinguishing the signal (the phenomenon of interest) from the noise (statistical fluctuations) present in the data".
While data analysis in qualitative research can include statistical procedures, many times
analysis becomes an ongoing iterative process where data is continuously collected and analyzed
almost simultaneously. Indeed, researchers generally analyze for patterns in observations through
the entire data collection phase (Savenye, Robinson, 2004). The form of the analysis is
determined by the specific qualitative approach taken (field study, ethnography, content analysis,
oral history, biography, unobtrusive research) and the form of the data (field notes, documents,
audiotape, videotape).
An essential component of ensuring data integrity is the accurate and appropriate analysis of
research findings. Improper statistical analyses distort scientific findings, mislead casual readers

(Shepard, 2002), and may negatively influence the public perception of research. Integrity issues are just as relevant to the analysis of non-statistical data.
Considerations/issues in data analysis
There are a number of issues that researchers should be cognizant of with respect to data
analysis. These include:

Having the necessary skills to analyze

Concurrently selecting data collection methods and appropriate analysis

Drawing unbiased inference

Inappropriate subgroup analysis

Following acceptable norms for disciplines

Determining statistical significance

Lack of clearly defined and objective outcome measurements

Providing honest and accurate analysis

Manner of presenting data

Environmental/contextual issues

Data recording method

Partitioning text when analyzing qualitative data

Training of staff conducting analyses

Reliability and Validity

Combinatorial Explosion

Early AI researchers developed algorithms that imitated the step-by-step reasoning that humans use when they solve puzzles or make logical deductions. By the late 1980s and 1990s, AI research had also developed highly successful methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.
For difficult problems, most of these algorithms can require enormous computational resources: most experience a "combinatorial explosion", where the amount of memory or computer time required becomes astronomical once the problem goes beyond a certain size. The search for more efficient problem-solving algorithms is a high priority for AI research.
Human beings solve most of their problems using fast, intuitive judgements rather than the conscious, step-by-step deduction that early AI research was able to model. AI has made some progress at imitating this kind of "sub-symbolic" problem solving: embodied agent approaches emphasize the importance of sensorimotor skills to higher reasoning; neural net research attempts to simulate the structures inside the brain that give rise to this skill; and statistical approaches to AI mimic the probabilistic nature of the human ability to guess.
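A toy calculation makes the combinatorial explosion visible; the branching factor and depths below are arbitrary illustrative numbers, not measurements of any real system.

# How the number of candidate solutions grows with search depth (illustrative only).
branching_factor = 10                        # choices available at each step
for depth in (5, 10, 15, 20):
    print(depth, branching_factor ** depth)  # 10**5, 10**10, 10**15, 10**20 states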

Having necessary skills to analyze


A tacit assumption of investigators is that they have received training sufficient to demonstrate a
high standard of research practice. 'Unintentional scientific misconduct' is likely the result of
poor instruction and follow-up. A number of studies suggest this may be the case more often than

believed (Nowak, 1994; Silverman, Manson, 2003). For example, Sica found that adequate
training of physicians in medical schools in the proper design, implementation and evaluation of
clinical trials is abysmally small (Sica, cited in Nowak, 1994). Indeed, a single course in
biostatistics is the most that is usually offered (Christopher Williams, cited in Nowak, 1994).
A common practice of investigators is to defer the selection of analytic procedure to a research
team statistician. Ideally, investigators should have substantially more than a basic understanding of the rationale for selecting one method of analysis over another. This allows investigators to better supervise the staff who conduct the data analyses and to make informed decisions.

Concurrently selecting data collection methods and appropriate analysis


While methods of analysis may differ by scientific discipline, the optimal stage for determining
appropriate analytic procedures occurs early in the research process and should not be an
afterthought. According to Smeeton and Goda (2003), "Statistical advice should be obtained at the stage of initial planning of an investigation so that, for example, the method of sampling and design of questionnaire are appropriate".
Drawing unbiased inference
The chief aim of analysis is to distinguish between an event occurring as either reflecting a true
effect versus a false one. Any bias occurring in the collection of the data, or selection of method
of analysis, will increase the likelihood of drawing a biased inference. Bias can occur when recruitment of study participants falls below the minimum number required to demonstrate statistical power, or when a sufficient follow-up period needed to demonstrate an effect is not maintained (Altman, 2001).
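One common safeguard is to estimate the required sample size before recruiting. The sketch below assumes the statsmodels package and an illustrative medium effect size, and is only meant to show the general idea rather than a prescribed procedure.

# Rough a-priori sample-size estimate for a two-group comparison (illustrative values).
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,   # assumed medium effect size (Cohen's d)
    alpha=0.05,        # significance level
    power=0.80,        # desired statistical power
)
print(round(n_per_group))   # roughly 64 participants per group under these assumptions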
Inappropriate subgroup analysis
When failing to demonstrate statistically different levels between treatment groups, investigators
may resort to breaking down the analysis to smaller and smaller subgroups in order to find a
difference. Although this practice may not inherently be unethical, these analyses should be

proposed before beginning the study, even if the intent is exploratory in nature. If the study is exploratory, the investigator should make this explicit so that readers understand that the research is more of a hunting expedition than one that is primarily theory driven. Although a researcher may not have a theory-based hypothesis for testing relationships between previously untested variables, a theory will have to be developed to explain an unanticipated finding. Indeed, in exploratory science there are no a priori hypotheses and therefore no hypothesis tests. Although theories can often drive the processes used in the investigation of qualitative studies, many times patterns of behavior or occurrences derived from analyzed data can result in developing new theoretical frameworks rather than being determined a priori (Savenye, Robinson, 2004).

It is conceivable that multiple statistical tests could yield a significant finding by chance alone
rather than reflecting a true effect. Integrity is compromised if the investigator only reports tests
with significant findings, and neglects to mention a large number of tests failing to reach
significance. While access to computer-based statistical packages can facilitate application of
increasingly complex analytic procedures, inappropriate uses of these packages can result in
abuses as well.
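A simple way to keep a battery of tests honest is to correct the p-values for multiple comparisons. The snippet below applies a plain Bonferroni correction to a set of invented p-values; it is a generic illustration, not a recommendation for any specific study.

# Bonferroni correction for a batch of hypothetical p-values.
p_values = [0.005, 0.04, 0.20, 0.03, 0.50]   # results of five separate tests
alpha = 0.05

adjusted = [min(p * len(p_values), 1.0) for p in p_values]
significant = [p_adj < alpha for p_adj in adjusted]
print(list(zip(adjusted, significant)))      # only the first test remains significant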
Following acceptable norms for disciplines
Every field of study has developed its accepted practices for data analysis. Resnik (2000) states
that it is prudent for investigators to follow these accepted norms. Resnik further states that the
norms are based on two factors:
(1) the nature of the variables used (i.e., quantitative, comparative, or qualitative),
(2) assumptions about the population from which the data are drawn (i.e., random distribution,
independence, sample size, etc.). If one uses unconventional norms, it is crucial to clearly state
this is being done, and to show how this new and possibly unaccepted method of analysis is
being used, as well as how it differs from other more traditional methods. For example, Schroder,
Carey, and Vanable (2003) juxtapose their identification of new and powerful data analytic

solutions developed to count data in the area of HIV contraction risk with a discussion of the
limitations of commonly applied methods.

Determining significance
While the conventional practice is to establish a standard of acceptability for statistical
significance, with certain disciplines, it may also be appropriate to discuss whether attaining
statistical significance has a true practical meaning, i.e., clinical significance. Jeans (1992)
defines clinical significance as the potential for research findings to make a real and important
difference to clients or clinical practice, to health status or to any other problem identified as a
relevant priority for the discipline.
Kendall and Grove (1988) define clinical significance in terms of what happens when
troubled and disordered clients are now, after treatment, not distinguishable from a meaningful
and representative non-disturbed reference group. Thompson and Noferi (2002) suggest that
readers of counseling literature should expect authors to report either practical or clinical
significance indices, or both, within their research reports. Shepard (2003) questions why some
authors fail to point out that the magnitude of observed changes may be too small to have any clinical or practical significance; sometimes a supposed change is described in some detail, but the investigator fails to disclose that the trend is not statistically significant.
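Reporting an effect size alongside the p-value is one way to speak to practical significance. The short sketch below computes Cohen's d for two invented groups of scores; the data and the interpretation threshold are illustrative assumptions.

# Cohen's d as a simple effect-size (practical significance) measure, on made-up data.
import numpy as np

treatment = np.array([24, 27, 30, 29, 26, 31, 28, 25])
control = np.array([22, 25, 24, 27, 23, 26, 24, 25])

pooled_sd = np.sqrt((treatment.var(ddof=1) + control.var(ddof=1)) / 2)
cohens_d = (treatment.mean() - control.mean()) / pooled_sd
print(f"Cohen's d = {cohens_d:.2f}")   # by convention, around 0.8 or more is a large effect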
Lack of clearly defined and objective outcome measurements

No amount of statistical analysis, regardless of its level of sophistication, will correct poorly
defined objective outcome measurements. Whether done unintentionally or by design, this
practice increases the likelihood of clouding the interpretation of findings, thus potentially
misleading readers.
Provide honest and accurate analysis
The basis for this issue is the urgency of reducing the likelihood of statistical error. Common
challenges include the exclusion of outliers, filling in missing data, altering or otherwise
changing data, data mining, and developing graphical representations of the data (Shamoo,
Resnik, 2003).
Manner of presenting data
At times investigators may enhance the impression of a significant finding by determining how
to present derived data (as opposed to data in its raw form), which portion of the data is shown,
why, how and to whom (Shamoo, Resnik, 2003). Nowak (1994) notes that even experts do not
agree in distinguishing between analyzing and massaging data. Shamoo (1989) recommends that
investigators maintain a sufficient and accurate paper trail of how data was manipulated for
future review.
Environmental/contextual issues
The integrity of data analysis can be compromised by the environment or context in which data was collected, e.g., face-to-face interviews vs. focus groups. The interaction occurring within a dyadic relationship (interviewer-interviewee) differs from the group dynamic occurring within a focus group because of the number of participants and how they react to each other's responses.
Since the data collection process could be influenced by the environment/context, researchers
should take this into account when conducting data analysis.
Data recording method
Analyses could also be influenced by the method in which data was recorded. For example,
research events could be documented by:

a. recording audio and/or video and transcribing later


b. a researcher-administered or self-administered survey
c. a closed-ended or open-ended survey
d. preparing ethnographic field notes from a participant/observer
e. requesting that participants themselves take notes, compile and submit them to researchers.
While each methodology employed has rationale and advantages, issues of objectivity and
subjectivity may be raised when data is analyzed.
Partitioning the text
During content analysis, staff researchers or raters may use inconsistent strategies in analyzing
text material. Some raters may analyze comments as a whole while others may prefer to dissect
text material by separating words, phrases, clauses, sentences or groups of sentences. Every
effort should be made to reduce or eliminate inconsistencies between raters so that data
integrity is not compromised.
Training of Staff conducting analyses
A major challenge to data integrity could occur with the unmonitored supervision of inductive
techniques. Content analysis requires raters to assign topics to text material (comments). The
threat to integrity may arise when raters have received inconsistent training, or may have
received previous training experience(s). Previous experience may affect how raters perceive the
material or even perceive the nature of the analyses to be conducted. Thus one rater could assign
topics or codes to material that is significantly different from another rater. Strategies to address
this would include clearly stating a list of analyses procedures in the protocol manual, consistent
training, and routine monitoring of raters.
Reliability and Validity

Researchers performing either quantitative or qualitative analyses should be aware of challenges to reliability and validity. For example, in the area of content analysis, Gottschalk (1995) identifies three factors that can affect the reliability of analyzed data:

stability, or the tendency for coders to consistently re-code the same data in the same way over a period of time

reproducibility, or the tendency for a group of coders to classify category membership in the same way

accuracy, or the extent to which the classification of a text corresponds to a standard or norm statistically

The potential for compromising data integrity arises when researchers cannot consistently demonstrate stability, reproducibility, or accuracy of data analysis; a minimal illustration of checking inter-coder agreement follows.
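Agreement between coders can be checked numerically. The sketch below assumes two raters' category assignments (invented here) and uses scikit-learn's Cohen's kappa as one possible reproducibility measure; other agreement statistics could equally be used.

# Inter-rater agreement on content-analysis categories (hypothetical codings).
from sklearn.metrics import cohen_kappa_score

rater_1 = ["positive", "neutral", "negative", "positive", "neutral", "positive"]
rater_2 = ["positive", "neutral", "negative", "neutral", "neutral", "positive"]

kappa = cohen_kappa_score(rater_1, rater_2)
print(f"Cohen's kappa = {kappa:.2f}")   # 1.0 means perfect agreement, 0 means chance-level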
According to Gottschalk (1995), the validity of a content analysis study refers to the correspondence of the categories (the classification that raters assigned to text content) to the conclusions, and to the generalizability of results to a theory (did the categories support the study's conclusion, and was the finding adequately robust to support or be applied to a selected theoretical rationale?).
Extent of analysis
Upon coding text material for content analysis, raters must classify each code into an appropriate
category of a cross-reference matrix. Relying on computer software to determine a frequency or
word count can lead to inaccuracies. One may obtain an accurate count of that word's
occurrence and frequency, but not have an accurate accounting of the meaning inherent in each
particular usage (Gottschalk, 1995). Further analyses might be appropriate to discover the
dimensionality of the data set or identify new meaningful underlying variables.
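The frequency count itself is trivial to automate, which is exactly why it can miss meaning. The minimal sketch below, run on an invented respondent comment, counts words but says nothing about how they are used.

# Raw word-frequency count of a respondent comment; it counts tokens, not meanings.
from collections import Counter

comment = "the robot was fast but the robot still felt too slow to trust"
counts = Counter(comment.lower().split())
print(counts.most_common(2))   # e.g. [('the', 2), ('robot', 2)]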
Whether statistical or non-statistical methods of analyses are used, researchers should be aware
of the potential for compromising data integrity. While statistical analysis is typically performed
on quantitative data, there are numerous analytic procedures specifically designed for qualitative
material including content, thematic, and ethnographic analysis. Regardless of whether one

studies quantitative or qualitative phenomena, researchers use a variety of tools to analyze data
in order to test hypotheses, discern patterns of behavior, and ultimately answer research
questions. Failure to understand or acknowledge data analysis issues presented can compromise
data integrity.
Analysis of data is a process of inspecting, cleaning, transforming, and modeling data with the
goal of discovering useful information, suggesting conclusions, and supporting decision-making.
Data analysis has multiple facets and approaches, encompassing diverse techniques under a
variety of names, in different business, science, and social science domains.
Data mining is a particular data analysis technique that focuses on modeling and knowledge
discovery for predictive rather than purely descriptive purposes. Business intelligence covers
data analysis that relies heavily on aggregation, focusing on business information. In statistical
applications, some people divide data analysis into descriptive statistics, exploratory data analysis (EDA), and confirmatory data analysis (CDA). EDA focuses on discovering new features in the data and CDA on confirming or falsifying existing hypotheses. Predictive
analytics focuses on application of statistical models for predictive forecasting or classification,
while text analytics applies statistical, linguistic, and structural techniques to extract and classify
information from textual sources, a species of unstructured data. All are varieties of data
analysis.
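In practice, the descriptive-statistics part of such an analysis is often only a few lines of code. The sketch below assumes the pandas library and a small invented table of survey responses, purely to show what a first descriptive pass can look like.

# Quick descriptive statistics on a small, invented survey dataset.
import pandas as pd

df = pd.DataFrame({
    "age": [23, 35, 41, 29, 52, 38],
    "rating": [4, 5, 3, 4, 2, 5],   # 7-point Likert responses
})

print(df.describe())   # count, mean, std, min, quartiles and max for each column
print(df.corr())       # a simple exploratory look at pairwise correlations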
Data integration is a precursor to data analysis, and data analysis is closely linked to data
visualization and data dissemination. The term data analysis is sometimes used as a synonym
for data modeling.

CHAPTER 5
CONCLUSION

CONCLUSIONS
1. Artificial Intelligence has come a long way since its beginnings, and it is still evolving day by day.
2. Artificial Intelligence helps us easily understand the various areas in which users can interact with it.
3. AI has made our lives much easier and has contributed greatly to the development of our society and our businesses.
4. As a profession or career, AI offers a very wide spectrum of opportunities in the future and has great scope for people who are interested in pursuing it professionally.
5. In the future we will experience technology that is beyond our imagination today.

CHAPTER-6
RECOMMENDATION
&
SUGGESTION

SUGGESTIONS:

There must be enough time for the full study.

The data should be genuine and collected from primary sources.

The study should cover the whole of artificial intelligence, not only its emergence.

The study should not be limited to the history and development of artificial intelligence.

The study should focus on the impact of artificial intelligence on society and on all areas of application.

CHAPTER-7
BIBLIOGRAPHY

BIBLIOGRAPHY

Artificial Intelligence, C.S.V. Murthy, Himalaya Publishing House, New Delhi, 2002.

Artificial Intelligence, from Wikipedia, the free encyclopedia.

Artificial Intelligence - The Cutting Edge of Business, Bajaj & Nag, New Delhi, 2000.

Department of Electronics (1999), Information Technology Bill along with Cyber Laws, Government of India, published in Electronic Information & Planning, New Delhi.

GICC Report on Internet and Artificial Intelligence (2000), Government of India, New Delhi.

IEEE (1999), 'Artificial Intelligence Perspective from Different Parts of the World', published in Information Technology, Special Issue, November, New Delhi.

Information & Technology (IT) (1997), India's Advantage in Information Technology, Vol. 1, Issue 3, pp. 117-124, December, New Delhi.

WEBSITES:
http://newint.org/books/reference/world-development/case-studies/2013/03/14/artificial intelligence-in-developing-world/
http://wikieducator.org/History_of_Artificial Intelligence_Development_%26_Generation_of_Artificial Intelligence
http://www.byte-notes.com/advantages-and-disadvantages-artificial intelligence
http://www.informationq.com/uses-of-artificial intelligence-in-different-fields-areassectors-industries-banking/
