ON
ARTIFICIAL INTELLIGENCE
Submitted in partial fulfillment of the requirements for the degree of
Bachelor of Business Administration (Computer Aided Management)
OF
Submitted To: Mrs. Snehlata
Submitted By: Arvind Singh
D. A. V. CENTENARY COLLEGE
NH-3, NIT FARIDABAD
ACKNOWLEDGEMENT
I am very thankful to Mrs. Snehlata (project guide) for giving me this opportunity and for her guidance throughout the preparation of this report. She also provided valuable suggestions and excellent guidance on this project, which helped me apply my theoretical knowledge in a practical field.
I am thankful to M.D. University, Rohtak, for giving me this valuable exposure to the field of Research Methodology.
Finally, I am thankful to my friends, who gave me their constructive advice, educative suggestions, encouragement, co-operation and motivation while preparing this report.
(Arvind Singh)
PREFACE
The title of my project is ARTIFICIAL INTELLIGENCE.
This project report describes how Artificial Intelligence has evolved from its beginnings to the present day. It covers the emergence of AI, and the programming languages and techniques used to make computers more user-friendly. The report also explains why AI has become an essential need in today's world and how it influences our lives, describes the business uses of AI, and surveys the scope of AI and the professions in which a person can build a career.
(Arvind Singh)
CONTENTS

S.No.  Topic
1.
2.     Review of Literature
3.     Research Methodology
       a) Objectives of the study
       b) Scope of the study
       c) Data Collection
       d) Limitations of the study
4.     Data Analysis and Interpretation
5.     Conclusion
6.
7.     Bibliography
Computers with the ability to mimic or duplicate the functions of the human
brain
AI is generally associated with Computer Science, but it has many important links with
other fields such as Maths, Psychology, Cognition, Biology and Philosophy, among many
others. Our ability to combine knowledge from all these fields will ultimately benefit our
progress in the quest of creating an intelligent artificial being.
Not yet. The problem is that we cannot yet characterize in general what kinds of computational procedures we want to call intelligent. We understand some of the mechanisms of intelligence and not others.
Motivation
Computers are fundamentally well suited to performing mechanical computations, using
fixed programmed rules. This allows artificial machines to perform simple monotonous
tasks efficiently and reliably, which humans are ill-suited to. For more complex problems,
things get more difficult... Unlike humans, computers have trouble understanding specific
situations, and adapting to new situations. Artificial Intelligence aims to improve machine
behavior in tackling such complex tasks.
Together with this, much of AI research is allowing us to understand our intelligent
behavior. Humans have an interesting approach to problem-solving, based on abstract
thought, high-level deliberative reasoning and pattern recognition. Artificial Intelligence
can help us understand this process by recreating it, then potentially enabling us to enhance
it beyond our current capabilities.
Technology
There are many different approaches to Artificial Intelligence, none of which are either
completely right or wrong. Some are obviously more suited than others in some cases, but
any working alternative can be defended. Over the years, trends have emerged based on the
state of mind of influential researchers, funding opportunities as well as available computer
hardware.
Over the past five decades, AI research has mostly been focusing on solving specific
problems. Numerous solutions have been devised and improved to do so efficiently and
reliably. This explains why the field of Artificial Intelligence is split into many branches.
Applications
The potential applications of Artificial Intelligence are abundant. They stretch from the military, for autonomous control and target identification, to the entertainment industry, for computer games and robotic pets. Let's also not forget large establishments dealing with huge amounts of information, such as hospitals, banks and insurance companies, which can use AI to predict customer behaviour and detect trends.
As you may expect, the business of Artificial Intelligence is becoming one of the major
driving forces for research. With an ever growing market to satisfy, there's plenty of room
for more personnel. So if you know what you're doing, there's plenty of money to be made
from interested big companies!
AI Applications
Mathematics: formal representation and proof, algorithms, computation, (un)decidability, (in)tractability
Control theory: design of systems that maximize an objective function over time
Perceptive system: a system that approximates the way a human sees, hears, and feels objects
Vision system
Robotics
Expert system
The branch of computer science concerned with making computers behave like humans. The term was coined in 1956 by John McCarthy at the Dartmouth conference.
Currently, no computers exhibit full artificial intelligence (that is, are able to simulate human behavior). The greatest advances have occurred in the field of games playing. The best computer chess programs are now capable of beating humans. In May 1997, an IBM supercomputer called Deep Blue defeated world chess champion Garry Kasparov.
CHAPTER 2
REVIEW
OF
LITERATURE
The history of artificial intelligence (AI) began in antiquity, with myths, stories and
rumors of artificial beings endowed with intelligence or consciousness by master
craftsmen; as Pamela McCorduck writes, AI began with "an ancient wish to forge the
gods."
The seeds of modern AI were planted by classical philosophers who attempted to describe
the process of human thinking as the mechanical manipulation of symbols. This work
culminated in the invention of the programmable digital computer in the 1940s, a machine
based on the abstract essence of mathematical reasoning. This device and the ideas behind
it inspired a handful of scientists to begin seriously discussing the possibility of building an
electronic brain.
The field of AI research was founded at a conference on the campus of Dartmouth College
in the summer of 1956. Those who attended would become the leaders of AI research for
decades. Many of them predicted that a machine as intelligent as a human being would
exist in no more than a generation and they were given millions of dollars to make this
vision come true. Eventually it became obvious that they had grossly underestimated the
difficulty of the project. In 1973, in response to the criticism of James Lighthill and ongoing pressure from Congress, the U.S. and British governments stopped funding
undirected research into artificial intelligence. Seven years later, a visionary initiative by
the Japanese Government inspired governments and industry to provide AI with billions of
dollars, but by the late 80s the investors became disillusioned and withdrew funding again.
This cycle of boom and bust, of "AI winters" and summers, continues to haunt the field.
Undaunted, there are those who make extraordinary predictions even now.
Progress in AI has continued, despite the rise and fall of its reputation in the eyes of
government bureaucrats and venture capitalists. Problems that had begun to seem
impossible in 1970 have been solved and the solutions are now used in successful
commercial products. However, no machine has been built with a human level of
intelligence, contrary to the optimistic predictions of the first generation of AI researchers.
"We can only see a short distance ahead," admitted Alan Turing, in a famous 1950 paper
that catalyzed the modern search for machines that think. "But," he added, "we can see
much that must be done."
Realistic humanoid automatons were built by craftsmen from every civilization, including Yan Shi, Hero of Alexandria, Al-Jazari and Wolfgang von Kempelen. The oldest known automatons were the sacred statues of ancient Egypt and Greece. The faithful believed that craftsmen had imbued these figures with very real minds, capable of wisdom and emotion. Hermes Trismegistus wrote that "by discovering the true nature of the gods, man has been able to reproduce it."
Formal reasoning
Artificial intelligence is based on the assumption that the process of human thought can be mechanized. The study of mechanical, or "formal", reasoning has a long history. Chinese, Indian and Greek philosophers all developed structured methods of formal deduction in the first millennium BCE. Their ideas were developed over the centuries by philosophers such as Aristotle (who gave a formal analysis of the syllogism), Euclid (whose Elements was a model of formal reasoning), al-Khwārizmī (who developed algebra and gave his name to "algorithm") and European scholastic philosophers such as William of Ockham and Duns Scotus.
Gottfried Leibniz, who speculated that human reason could be reduced to mechanical calculation.
In the 17th century, Leibniz, Thomas Hobbes and René Descartes explored the possibility that all rational thought could be made as systematic as algebra or geometry. Hobbes famously wrote in Leviathan: "reason is nothing but reckoning". Leibniz envisioned a universal language of reasoning (his characteristica universalis) which would reduce argumentation to calculation, so that "there would be no more need of disputation between two philosophers than between two accountants. For it would suffice to take their pencils in hand, to sit down to their slates, and to say to each other (with a friend as witness, if they liked): Let us calculate." These philosophers had begun to articulate the physical symbol system hypothesis that would become the guiding faith of AI research.
In the 20th century, the study of mathematical logic provided the essential breakthrough that made artificial intelligence seem plausible. The foundations had been set by such works as Boole's The Laws of Thought and Frege's Begriffsschrift. Building on Frege's system, Russell and Whitehead presented a formal treatment of the foundations of mathematics.
The ENIAC, at the Moore School of Electrical Engineering. This photo has been artificially
darkened, obscuring details such as the women who were present and the IBM equipment in use.
The results of Gödel, Church and Turing were surprising in two ways. First, they proved that there were, in fact, limits to what mathematical logic could accomplish. But second (and more important for AI), their work suggested that, within these limits, any form of mathematical reasoning could be mechanized. The Church-Turing thesis implied that a mechanical device, shuffling symbols as simple as 0 and 1, could imitate any conceivable process of mathematical deduction. The key insight was the Turing machine, a simple theoretical construct that captured the essence of abstract symbol manipulation. This invention would inspire a handful of scientists to begin discussing the possibility of thinking machines.
Computer science
Main articles: history of computer hardware and history of computer science
Calculating machines were built in antiquity and improved throughout history by many
mathematicians, including (once again) philosopher Gottfried Leibniz. In the early 19th century,
Charles Babbage designed a programmable computer (the Analytical Engine), although it was
never built. Ada Lovelace speculated that the machine "might compose elaborate and scientific
pieces of music of any degree of complexity or extent". (She is often credited as the first
programmer because of a set of notes she wrote that completely detail a method for calculating
Bernoulli numbers with the Engine.)
The first modern computers were the massive code breaking machines of the Second World War
(such as Z3, ENIAC and Colossus). The latter two of these machines were based on the
theoretical foundation laid by Alan Turing and developed by John von Neumann.
Turing's test
In 1950 Alan Turing published a landmark paper in which he speculated about the possibility of
creating machines that think. He noted that "thinking" is difficult to define and devised his
famous Turing Test. If a machine could carry on a conversation (over a teleprinter) that was
indistinguishable from a conversation with a human being, then it was reasonable to say that the
machine was "thinking". This simplified version of the problem allowed Turing to argue
convincingly that a "thinking machine" was at least plausible and the paper answered all the most
common objections to the proposition. The Turing Test was the first serious proposal in the
philosophy of artificial intelligence.
Game AI
In 1951, using the Ferranti Mark 1 machine of the University of Manchester, Christopher Strachey wrote a checkers program and Dietrich Prinz wrote one for chess. Arthur Samuel's checkers program, developed in the middle 50s and early 60s, eventually achieved sufficient skill to challenge a respectable amateur. Games would continue to be used as a measure of progress in AI throughout its history.
The Dartmouth conference was the moment that AI gained its name, its mission, its first success and its major players, and is widely considered the birth of AI.
The work
There were many successful programs and new directions in the late 50s and 1960s. Among the
most influential were these:
Reasoning as search
Many early AI programs used the same basic algorithm. To achieve some goal (like winning a
game or proving a theorem), they proceeded step by step towards it (by making a move or a
deduction) as if searching through a maze, backtracking whenever they reached a dead end. This
paradigm was called "reasoning as search".
The principal difficulty was that, for many problems, the number of possible paths through the
"maze" was simply astronomical (a situation known as a "combinatorial explosion").
Researchers would reduce the search space by using heuristics or "rules of thumb" that would
eliminate those paths that were unlikely to lead to a solution.
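The "reasoning as search" paradigm described above can be sketched in a few lines of Python. This is a minimal illustration, not a historical program: depth-first search with backtracking, plus a heuristic that orders the most promising moves first. The toy problem (reaching a target number with +1, -1 and *2 moves) and the heuristic are invented for illustration.

```python
# A minimal sketch of "reasoning as search": step toward a goal,
# backtrack at dead ends, and use a heuristic to try promising
# moves first. Toy problem and heuristic are illustrative only.

def solve(state, goal, moves, heuristic, path=None, limit=20):
    """Search from `state` toward `goal`, backtracking at dead ends."""
    if path is None:
        path = [state]
    if state == goal:
        return path
    if len(path) > limit:          # depth bound: give up on this branch
        return None
    # Heuristic "rule of thumb": try the most promising moves first.
    for nxt in sorted(moves(state), key=heuristic):
        if nxt in path:            # avoid revisiting states (cycles)
            continue
        found = solve(nxt, goal, moves, heuristic, path + [nxt], limit)
        if found:                  # success somewhere down this branch
            return found
    return None                    # backtrack: no move from here works

# Toy problem: reach 7 from 0 using +1, -1 or *2 moves.
moves = lambda n: [n + 1, n - 1, n * 2]
heuristic = lambda n: abs(7 - n)   # prefer states closer to the goal
print(solve(0, 7, moves, heuristic))  # → [0, 1, 2, 4, 8, 7]
```

Without the heuristic ordering, the same search would wander through many more branches of the "maze"; the combinatorial explosion mentioned above is exactly what such rules of thumb try to contain.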
Newell and Simon tried to capture a general version of this algorithm in a program called the
"General Problem Solver". Other "searching" programs were able to accomplish impressive tasks
like solving problems in geometry and algebra, such as Herbert Gelernter's Geometry Theorem
Prover (1958) and SAINT, written by Minsky's student James Slagle (1961). Other programs
searched through goals and sub goals to plan actions, like the STRIPS system developed at
Stanford to control the behavior of their robot Shakey.
Natural language
An important goal of AI research is to allow computers to communicate in natural languages like
English. An early success was Daniel Bobrow's program STUDENT, which could solve high
school algebra word problems.
A semantic net represents concepts (e.g. "house", "door") as nodes and relations among concepts (e.g. "has-a") as links between the nodes. The first AI program to use a semantic net was written by Ross Quillian and the most successful (and controversial) version was Roger Schank's conceptual dependency theory.
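The node-and-link structure just described can be sketched very simply: a semantic net is a set of (concept, relation, concept) triples that can be queried by following labeled links. The concepts and relations below are invented examples in the spirit of the "house, door" illustration above, not Quillian's actual data.

```python
# A minimal sketch of a semantic net: concepts are nodes, labeled
# relations are links, stored here as (subject, relation, object)
# triples. Example concepts/relations are invented for illustration.
semantic_net = {
    ("house", "has-a", "door"),
    ("house", "has-a", "roof"),
    ("house", "is-a", "building"),
    ("door", "is-a", "opening"),
}

def related(net, concept, relation):
    """All nodes reachable from `concept` via `relation` links."""
    return {obj for subj, rel, obj in net
            if subj == concept and rel == relation}

print(sorted(related(semantic_net, "house", "has-a")))  # → ['door', 'roof']
```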
Joseph Weizenbaum's ELIZA could carry out conversations that were so realistic that users occasionally were fooled into thinking they were communicating with a human being and not a program. But in fact, ELIZA had no idea what she was talking about. She simply gave a canned response or repeated back what was said to her, rephrasing her response with a few grammar rules. ELIZA was the first chatterbot.
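The pattern-matching trick behind such programs can be sketched in a few lines: match the input against a handful of templates and echo part of it back, falling back to a canned response when nothing matches. The two rules below are invented for illustration; the real ELIZA used a much larger script of patterns.

```python
import re

# A toy ELIZA-style responder: match the input against a few patterns
# and echo part of it back in a template, with a canned fallback.
# These two rules are invented; the real ELIZA script was far larger.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I),   "How long have you been {0}?"),
]
FALLBACK = "Please tell me more."

def reply(text):
    for pattern, template in RULES:
        m = pattern.match(text.strip())
        if m:
            # Echo the matched fragment back, minus trailing punctuation.
            return template.format(m.group(1).rstrip(".!?"))
    return FALLBACK  # canned response when nothing matches

print(reply("I feel tired today."))   # → Why do you feel tired today?
print(reply("The weather is nice."))  # → Please tell me more.
```

As the text notes, there is no understanding anywhere in this loop; the apparent intelligence comes entirely from the reader.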
Micro-worlds
In the late 60s, Marvin Minsky and Seymour Papert of the MIT AI Laboratory proposed that AI
research should focus on artificially simple situations known as micro-worlds. They pointed out
that in successful sciences like physics, basic principles were often best understood using
simplified models like frictionless planes or perfectly rigid bodies. Much of the research focused
on a "blocks world," which consists of colored blocks of various shapes and sizes arrayed on a
flat surface.
This paradigm led to innovative work in machine vision by Gerald Sussman (who led the team),
Adolfo Guzman, David Waltz (who invented "constraint propagation"), and especially Patrick
Winston. At the same time, Minsky and Papert built a robot arm that could stack blocks, bringing
the blocks world to life. The crowning achievement of the micro-world program was Terry Winograd's SHRDLU. It could communicate in ordinary English sentences, plan operations and execute them.
The optimism
The first generation of AI researchers made these predictions about their work:
1958, H. A. Simon and Allen Newell: "within ten years a digital computer will be
the world's chess champion" and "within ten years a digital computer will discover
and prove an important new mathematical theorem."
1965, H. A. Simon: "machines will be capable, within twenty years, of doing any
work a man can do."
1967, Marvin Minsky: "Within a generation ... the problem of creating 'artificial
intelligence' will substantially be solved."
1970, Marvin Minsky (in Life Magazine): "In from three to eight years we will
have a machine with the general intelligence of an average human being."
The money
In June 1963, MIT received a $2.2 million grant from the newly created Advanced Research
Projects Agency (later known as DARPA). The money was used to fund project MAC which
subsumed the "AI Group" founded by Minsky and McCarthy five years earlier. ARPA continued
to provide three million dollars a year until the 70s. ARPA made similar grants to Newell and
Simon's program at CMU and to the Stanford AI Project (founded by John McCarthy in 1963).
Another important AI laboratory was established at Edinburgh University by Donald Michie in
1965. These four institutions would continue to be the main centers of AI research (and funding)
in academia for many years.
The money was proffered with few strings attached: J. C. R. Licklider, then the director of
ARPA, believed that his organization should "fund people, not projects!" and allowed researchers
to pursue whatever directions might interest them. This created a freewheeling atmosphere at
MIT that gave birth to the hacker culture, but this "hands off" approach would not last.
The problems
In the early seventies, the capabilities of AI programs were limited. Even the most impressive
could only handle trivial versions of the problems they were supposed to solve; all the programs
were, in some sense, "toys". AI researchers had begun to run into several fundamental limits that
could not be overcome in the 1970s. Although some of these limits would be conquered in later
decades, others still stymie the field to this day.
Limited computer power: There was not enough memory or processing speed to
accomplish anything truly useful. For example, Ross Quillian's successful work on
natural language was demonstrated with a vocabulary of only twenty words, because that
was all that would fit in memory. Hans Moravec argued in 1976 that computers were still
millions of times too weak to exhibit intelligence. He suggested an analogy: artificial
intelligence requires computer power in the same way that aircraft require horsepower.
Below a certain threshold, it's impossible, but, as power increases, eventually it could
become easy. With regard to computer vision, Moravec estimated that simply matching the edge and motion detection capabilities of the human retina in real time would require a general-purpose computer capable of 10^9 operations/second (1000 MIPS). As of 2011, practical computer vision applications require 10,000 to 1,000,000 MIPS. By comparison, the fastest supercomputer in 1976, the Cray-1 (retailing at $5 million to $8 million), was only capable of around 80 to 130 MIPS, and a typical desktop computer at the time achieved less than 1 MIPS.
Intractability and the combinatorial explosion: many problems can probably only be solved in exponential time, so solutions stay feasible only when the problems are trivial. This almost certainly meant that many of the "toy" solutions used by AI would probably never scale up into useful systems.
The frame and qualification problems. AI researchers (like John McCarthy) who used
logic discovered that they could not represent ordinary deductions that involved planning
or default reasoning without making changes to the structure of logic itself. They
developed new logics (like non-monotonic logics and modal logics) to try to solve the
problems.
The agencies which funded AI research (such as the British government, DARPA and
NRC) became frustrated with the lack of progress and eventually cut off almost all funding
for undirected research into AI. The pattern began as early as 1966 when the ALPAC report appeared criticizing machine translation efforts. After spending 20 million dollars, the NRC
ended all support. In 1973, the Lighthill report on the state of AI research in England
criticized the utter failure of AI to achieve its "grandiose objectives" and led to the
dismantling of AI research in that country. (The report specifically mentioned the
combinatorial explosion problem as a reason for AI's failings.) DARPA was deeply
disappointed with researchers working on the Speech Understanding Research program at
CMU and canceled an annual grant of three million dollars. By 1974, funding for AI
projects was hard to find.
Hans Moravec blamed the crisis on the unrealistic predictions of his colleagues. "Many
researchers were caught up in a web of increasing exaggeration." However, there was
another issue: since the passage of the Mansfield Amendment in 1969, DARPA had been
under increasing pressure to fund "mission-oriented direct research, rather than basic
undirected research". Funding for the creative, freewheeling exploration that had gone on
in the 60s would not come from DARPA. Instead, the money was directed at specific
projects with clear objectives, such as autonomous tanks and battle management systems.
To AI researchers, practical problems like intractability and commonsense knowledge seemed much more immediate and serious than philosophical critiques. It was unclear what difference "know how" or "intentionality" made to an actual computer program. Minsky said of Dreyfus and Searle "they misunderstand, and should be ignored." Dreyfus, who taught at MIT, was given a cold shoulder: he later said that AI researchers "dared not be seen having lunch with me." Joseph Weizenbaum, the author of ELIZA, felt his colleagues' treatment of Dreyfus was unprofessional and childish. Although he was an outspoken critic of Dreyfus' positions, he "deliberately made it plain that theirs was not the way to treat a human being."
Weizenbaum began to have serious ethical doubts about AI when Kenneth Colby wrote DOCTOR, a chatterbot therapist. Weizenbaum was disturbed that Colby saw his mindless
program as a serious therapeutic tool. A feud began, and the situation was not helped when Colby
did not credit Weizenbaum for his contribution to the program. In 1976, Weizenbaum published
Computer Power and Human Reason which argued that the misuse of artificial intelligence has
the potential to devalue human life.
CHAPTER- 3
RESEARCH
METHODOLOGY
RESEARCH METHODOLOGY:
Research methodology is the process used to collect information and data for the purpose of making business decisions. The methodology may include publication research, interviews, surveys and other research techniques, and could include both present and historical information.
Research methodology is a way to find the result of a given problem on a specific matter, also referred to as the research problem. In methodology, the researcher uses different criteria for solving the given research problem, and different sources use different types of methods. In essence, methodology is the way of investigating and solving the research problem.
In research methodology, the researcher always tries to investigate the given question systematically and find all the answers through to a conclusion. If the researcher does not work systematically on the problem, there is less possibility of reaching a final result. In exploring research questions, a researcher faces many problems that can be effectively resolved by using a correct research methodology.
AI is changing our view of life, but side by side it has some side effects, or we can say bad effects, especially on our children and society, and this cannot be neglected.
DATA COLLECTION
Secondary data is data collected by someone other than the user. Common sources of secondary data for social science include censuses, organizational records and data collected through qualitative methodologies or research. Primary data, by contrast, is collected by the investigator conducting the research.
Secondary data analysis saves time that would otherwise be spent collecting data and,
particularly in the case of quantitative data, provides larger and higher-quality databases that
would be unfeasible for any individual researcher to collect on their own. In addition, analysts of
social and economic change consider secondary data essential, since it is impossible to conduct a
new survey that can adequately capture past change and/or developments.
Furthermore, secondary data can also be helpful in the research design of subsequent primary
research and can provide a baseline with which the collected primary data results can be
compared to. Therefore, it is always wise to begin any research activity with a review of the
secondary data.
CHAPTER 4
DATA ANALYSIS
AND
INTERPRETATION
AI textbooks define this field as "the study and design of intelligent agents, where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success." John McCarthy, who coined the term in 1955, defines it as "the science and engineering of making intelligent machines".
AI research is highly technical and specialized, and is deeply divided into subfields that often fail
to communicate with each other. Some of the division is due to social and cultural factors:
subfields have grown up around particular institutions and the work of individual researchers. AI
research is also divided by several technical issues. Some subfields focus on the solution of
specific problems. Others focus on one of several possible approaches or on the use of a
particular tool or towards the accomplishment of particular applications.
The central problems (or goals) of AI research include reasoning, knowledge, planning, learning,
natural language processing (communication), perception and the ability to move and manipulate
objects. General intelligence is still among the field's long-term goals. Currently popular approaches include
statistical methods, computational intelligence and traditional symbolic AI. There are a large
number of tools used in AI, including versions of search and mathematical optimization, logic,
methods based on probability and economics, and many others. The AI field is interdisciplinary,
in which a number of sciences and professions converge, including computer science,
mathematics, psychology, linguistics, philosophy and neuroscience, as well as other specialized
fields such as artificial psychology.
The field was founded on the claim that a central property of humans, intelligence (the sapience of Homo sapiens), "can be so precisely described that a machine can be made to simulate it."
This raises philosophical issues about the nature of the mind and the ethics of creating artificial
beings endowed with human-like intelligence, issues which have been addressed by myth, fiction
and philosophy since antiquity. Artificial intelligence has been the subject of tremendous
optimism but has also suffered stunning setbacks. Today it has become an essential part of the
technology industry, providing the heavy lifting for many of the most challenging problems in
computer science.
Knowledge-based interfaces to existing statistical software are one way of applying artificial
intelligence methods in data analysis and interpretation. In this paper we discuss our experiences
from the implementations of knowledge-based systems for time series analyses, in particular for
eliminating seasonal variations in statistical studies of industry and trade, and for multivariate
tabular analysis. We further discuss the problems of realizing knowledge-based systems in a
production environment and present an integrated conceptual model for knowledge-based data
analysis and interpretation.
Current practice for the management and extraction of meaningful information from available
data, to inform understanding of water supply system performance, is limited, time consuming
and of inadequate accuracy. This is primarily due to reliance on human data analysis and
interpretation, which is unfeasible for the volume and complexity of data involved. Further, it is
often only practical and economically justifiable to instigate such analysis in a reactive manner to
improve problem definition such as the determination of the magnitude of a burst once it has
been recognized through a breach of serviceability limits. It is not cost effective to rely on human
data analysis and interpretation and the inherent inefficiency contributes to lowered regulatory
compliance and standards of service. A new automated approach that applies Artificial Neural
Network (ANN) technology has been developed to provide more efficient and consistent analysis
of large data volumes, ANNs are trained to infer the future behavior of a system, or to classify
current behavior, by analyzing data describing its past performance, so that problem solving is a
matter of learning by example rather than programming. This is especially attractive for domains
where an understanding of the problem to be solved is limited, but where training data is readily
available, for example, flow and pressure time series from a water supply system. The system
developed can be applied to historic flow time series data to construct a grey box model for the
detection and classification of burst mains. Initial ANN data analysis to construct a probability
density model of the future flow profile was followed by application of a Fuzzy Inference
System for classification such that confidence intervals could be assigned to the detected burst
events. This artificial intelligence analysis system was applied to four month periods of sample
flow data (provided by a major UK water company) from district meter areas of various size,
complexity and connectivity and successfully identified a number of burst events within the data;
some of which could not have been detected by manual analysis. When integrated with real time
communications, the 'intelligent' analysis system developed can be readily applied to an online
environment providing the capability to proactively manage water supply systems. This paper
was presented at the 8th Annual Water Distribution Systems Analysis Symposium which was
held with the generous support of the Awwa Research Foundation (AwwaRF).
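The grey-box detection idea described above can be sketched in miniature. The following Python sketch is not the paper's actual ANN or Fuzzy Inference System; it stands in for both with a simple per-hour statistical profile of historic flow and a linear, fuzzy-style confidence ramp, and all flow figures are hypothetical.

```python
import statistics

def build_flow_profile(history):
    """Build an expected flow profile (mean, spread) per hour of day
    from historic flow time series. history: list of (hour, flow) pairs."""
    by_hour = {}
    for hour, flow in history:
        by_hour.setdefault(hour, []).append(flow)
    return {h: (statistics.mean(v), statistics.stdev(v)) for h, v in by_hour.items()}

def burst_confidence(profile, hour, flow):
    """Fuzzy-style confidence in [0, 1] that an observed flow is a burst:
    0 below 2 sigma above the expected mean, 1 above 4 sigma, linear between."""
    mean, sd = profile[hour]
    z = (flow - mean) / sd
    return min(1.0, max(0.0, (z - 2.0) / 2.0))

# Hypothetical night-time flows (L/s) at hour 3 over several normal days,
# then one observation with a large excess suggesting a burst.
history = [(3, f) for f in [5.1, 4.9, 5.0, 5.2, 4.8, 5.0]]
profile = build_flow_profile(history)
print(burst_confidence(profile, 3, 5.1))   # normal flow -> 0.0
print(burst_confidence(profile, 3, 9.0))   # large excess -> 1.0
```

A real system would replace the per-hour mean/spread with an ANN-learned probability density of the flow profile, but the shape of the pipeline (model the expected profile, then assign a graded confidence to deviations) is the same.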
Artificial Intelligence is a branch of Science which deals with helping machines find solutions to
complex problems in a more human-like fashion. This generally involves borrowing
characteristics from human intelligence, and applying them as algorithms in a computer friendly
way. A more or less flexible or efficient approach can be taken depending on the requirements
established, which influences how artificial the intelligent behavior appears.
AI is generally associated with Computer Science, but it has many important links with other
fields such as Maths, Psychology, Cognition, Biology and Philosophy, among many others. Our
ability to combine knowledge from all these fields will ultimately benefit our progress in the
quest of creating an intelligent artificial being.
Jobs: depending on the level and type of intelligence these machines achieve in the
future, it will obviously have an effect on the type of work they can do, and how well
they can do it (they can become more efficient). As the level of AI increases, so will their
competency to deal with difficult, complex, even dangerous tasks that are currently done
by humans, a form of applied artificial intelligence.
They don't stop: as they are machines, there is no need for sleep, they don't get ill, and there
is no need for breaks or Facebook; they are able to go, go, go! There obviously may be
the need for them to be charged or refuelled; however, the point is that they are definitely
going to get a lot more work done than we can. Take the finance industry, for example:
there are constant stories of artificial intelligence in finance, and of stock traders
soon being a thing of the past.
No risk of harm: when we are exploring new undiscovered land or even planets, if a
machine gets broken or dies, there is no harm done, as machines don't feel and don't have
emotions. For humans, going on the same kind of expedition a machine goes on may simply
not be possible, or would expose them to high-risk situations.
Act as aids: they can act as 24/7 aids to children with disabilities or the elderly; they
could even act as a source for learning and teaching. They could even be part of security,
alerting you to possible fires that threaten you, or fending off crime.
Their function is almost limitless: as the machines will be able to do everything (but
just better), their use essentially has no boundaries. They will
make fewer mistakes, they are emotionless, they are more efficient, and they are basically
giving us more free time to do as we please.
Over-reliance on AI: as you may have seen in many films, such as The Matrix, I, Robot or
even kids' films such as WALL-E, if we rely on machines to do almost everything for us
we become very dependent, so much so that they have the potential to ruin our lives if
something were to go wrong. Although the films are essentially just fiction, it would
be smart to have some sort of backup plan for potential issues on our part.
Human feel: as they are machines, they obviously can't provide you with that
human touch and quality, the feeling of togetherness and emotional understanding;
machines will lack the ability to sympathize and empathize with your situation, and
may act irrationally as a consequence.
Inferior: as machines will be able to perform almost every task better than us in
practically all respects, they will take up many of our jobs, which will then result in
masses of people who are jobless and, as a result, feel essentially useless. This could
then lead to issues of mental illness, obesity problems, etc.
Misuse: there is no doubt that this level of technology in the wrong hands could cause
mass destruction, where robot armies could be formed, or the machines could malfunction
or be corrupted, in which case we could be facing a scene similar to that of The Terminator
(hey, you never know).
Ethically wrong? People say that the gift of intuition and intelligence was God's gift to
mankind, and so to replicate that would be, in a way, to play God. It would therefore not be
right to even attempt to clone our intelligence.
INTRODUCTION TO ROBOTICS
Robotics research often focuses on increasing robot capability. If end users do not perceive
these increases, however, user acceptance may not improve. In this work, we explore the idea of
perceived capability and how it relates to true capability, differentiating between physical and
social capabilities. We present a framework that outlines their potential relationships, along with
two user studies, on robot speed and speech, exploring these relationships. Our studies identify
two possible consequences of the disconnect between the true and perceived capability: (1)
under-perception: true improvements in capability may not lead to perceived improvements and
(2) over-perception: true improvements in capability may lead to additional perceived
improvements that do not actually exist.
Roboticists often focus on increasing robot capability: we make robots faster, help them
perceive the world and enable them to interact with people. The goal is to make robots a part of
everyday life, with purposes ranging from entertainment to assisting with tedious or dangerous
tasks.
An integral part of making this goal a reality is acceptance, which is largely affected by users'
perceptions. Therefore, increasing robot capability does not necessarily lead to increased
acceptance, since it is actually the user's perception of capability (the robot's perceived
capability) that determines acceptance.
For example, imagine a robot designed to clean the home. If a user perceives the robot's
capability to be lacking, they may be reluctant to allow it to handle their possessions. On the
other hand, if the user overestimates the robot's capability, this can lead to unmet expectations
and disappointment.
In this paper, we focus on the idea of perceived capability, and the disconnect between it and
the robot's true capability. We start by conjecturing a framework that links true and perceived
capability and then outline their possible relationships. Within this framework, we investigate the
effects of two important robot capabilities, speed and speech, on perceived capability via two
user studies.
Framework for Perceived Capability
Prior work in anthropomorphism, mental models, and sense making shows there is often a
disconnect between users' perceptions and a robot's true capability. This occurs largely because
people are unfamiliar and lack experience with robots. Hence, their knowledge is often based on
popular culture, which depicts a wide variety of robots with a multitude of capabilities.
In order to investigate this disconnect, we introduce a framework which enumerates the
possible relationships between true and perceived capability. Our framework distinguishes
between physical capability (e.g., doing laundry) and social capability (e.g., understanding what
someone is saying), and between a particular skill and overall capability.
Elizabeth Cha, Anca D. Dragan and Siddhartha S. Srinivasa are with the Robotics Institute,
School of Computer Science, Carnegie Mellon University, Pittsburgh, Pennsylvania, United
States. {lizcha, adragan, siddh}@cs.cmu.edu
SPEED STUDY
Our first study is about under-perception. An increase in true capability does not necessarily
lead to an increase in perceived capability.
We exposed users to two videos of a robot performing a physical task next to an actor. In each
scenario, the robot enters the kitchen and retrieves a microwave meal while standing next to the
actor. In one video, the robot moves faster and spends less time planning.
A. Study Design
1) Manipulated Variables: We manipulated speed by producing two videos of a robot
performing a physical task (retrieving a microwave meal) at two different speeds
(2.3 minutes vs. 1.15 minutes to
complete the task). The lower speed represents the state of the art, while the higher speed
represents a capability well beyond current robot manipulation skills in both planning and
execution time.
2) Participants: We recruited 20 participants (12 females
and 8 males) through Amazon Mechanical Turk. All participants were located in the United
States, were primarily English speakers, and ranged in age from 23 to 65 (M=39.0, SD=10.55) years.
40% of participants were male and 60% were female. Participants were compensated $4
for successful completion of the study.
Participants were told that they were taking part in a survey to design better home robots. All
participants successfully answered a set of control questions about the videos they were shown
and none had previously participated in a study with the robot.
This resulted in 14 participants spread across the 2 conditions. On average, participants rated
their familiarity with robots as 2.2 (SD=1.28) and their previous level of interaction as 1.3
(SD=0.47) on a 7-point Likert scale.
3) Procedure: We opted for a within-subjects design, where participants were shown both the
slow and fast videos, in order to enable direct comparisons and stay away from absolute ratings
(as suggested by a pilot study). The order of the videos shown was counterbalanced to negate
ordering effects.
Participants were given a link through Amazon Mechanical Turk to the study. After reading the
instructions and giving their consent, participants were shown the first video of the robot,
labeled Robot 1. To continue the study, participants had to watch the entire video. They then
answered questions about the video and perceived capability. They repeated this process after
watching the second video.
4) Hypothesis: The speed of the robot will not affect perceived capability, as the robot is
still much slower than a human.
B. Results
Our hypothesis predicted that speed would not significantly affect a robot's perceived physical
capability. To test this hypothesis, we performed a non-inferiority test to show that the difference
between the slow and fast robot is significantly greater than a negative margin. Setting the
significance level α = 5%, a one-tailed paired t-test supported the hypothesis: the mean
difference of 0.296 was significantly greater than the negative margin.
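The non-inferiority test used in this section can be sketched as follows. The ratings below are invented for illustration (they are not the study's data), and the critical value comes from a standard t-table for df = 13 at the 5% one-tailed level.

```python
import statistics

def noninferiority_t(a, b, margin):
    """One-tailed paired t statistic for H0: mean(a - b) <= -margin
    against H1: mean(a - b) > -margin (a non-inferiority test)."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean = statistics.mean(diffs)
    se = statistics.stdev(diffs) / n ** 0.5   # standard error of the mean
    return (mean + margin) / se

# Hypothetical 7-point capability ratings from 14 participants:
slow = [4, 5, 4, 5, 4, 4, 5, 4, 5, 4, 4, 5, 4, 5]
fast = [4, 5, 5, 5, 4, 4, 5, 4, 5, 4, 5, 5, 4, 5]
t = noninferiority_t(fast, slow, margin=0.5)
# Compare t against the one-tailed critical value t(0.95, df=13) = 1.771
# from a t-table; a t above the critical value rejects H0.
print(t > 1.771)
```

The margin of 0.5 here is a hypothetical choice; in practice the non-inferiority margin must be fixed before the data are examined.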
Despite a difference in true capability that would be considered very large from a robotics
and algorithmic standpoint, the mean ratings were not significantly different. Since speed is a
physical trait, we did not expect it to affect perceived social capability: we found no significant
difference in the perceived social capability ratings.
We confirmed in a follow-up question that users did perceive the difference in speed, but this had
a very small effect on the robot's perceived capability, i.e. on their perceptions of what tasks the
robot is capable of performing. They said, for example, that despite the robot being faster, it was
either still not fast enough, or it did not matter, because the robot's speed did not impact its
ability to do other tasks and thus it could not be trusted to do more.
These results suggest that true improvements in capability do not necessarily lead to perceived
improvements in capability: there can be a disconnect between the two.
[Figure: (a) percent of participants choosing each robot as more capable and (b) average
Likert ratings of capability, for the Functional and Conversational conditions under Success
and Failure; recoverable values include 79.15%, 65.60%, 30.91% and 29.01% for (a), and
mean ratings of 5.592, 5.183, 4.143 and 3.932 for (b).]
Credit granting
Information management and retrieval
AI and expert systems embedded in products
Plant layout
Hospitals and medical facilities
Help desks and assistance
Employee performance evaluation
Loan analysis
Virus detection
Repair and maintenance
Shipping
Marketing
Warehouse optimization
The list above is a sample of AI applications. The central problems of AI include knowledge
representation, reasoning, language/image understanding, and learning, and these arise across
industry (e.g., insurance) and in tools such as theorem-provers, each with its own problems and
limitations. Decision theory and economics contribute the view that an agent acts rationally if it
selects the action that maximizes its utility. Grand challenges remain in areas such as computer
vision and learning. Overall, AI is already at work in the post office, in banks, in customer
service, on the Web, and in digital cameras.
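The decision-theoretic rule that an agent acts rationally if it selects the action maximizing its utility can be sketched directly. The insurance numbers below are hypothetical, chosen only to illustrate the computation.

```python
def rational_action(actions, outcomes, utility):
    """Pick the action with the highest expected utility.
    outcomes[a] is a list of (probability, monetary result) pairs."""
    def expected_utility(a):
        return sum(p * utility(r) for p, r in outcomes[a])
    return max(actions, key=expected_utility)

# Hypothetical insurance decision: pay a premium, or risk a 1% chance
# of a 10,000 loss.
actions = ["insure", "do_nothing"]
outcomes = {
    "insure":     [(1.0, -150)],                 # certain premium
    "do_nothing": [(0.99, 0), (0.01, -10_000)],  # small chance of big loss
}
# With a risk-neutral (linear) utility, expected utilities are
# -150 vs. -100, so the agent declines the insurance.
print(rational_action(actions, outcomes, utility=lambda wealth: wealth))
```

A concave (risk-averse) utility function can flip the choice toward insuring, which is why expected-utility agents need an explicit utility function rather than just expected money.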
Artificial intelligence (AI) is the human-like intelligence exhibited by machines or software. The
AI field is interdisciplinary, in which a number of sciences and professions converge, including
computer science, psychology, linguistics, philosophy and neuroscience, as well as other
specialized fields such as artificial psychology. Major AI researchers and textbooks define the
field as "the study and design of intelligent agents", where an intelligent agent is a system that
perceives its environment and takes actions that
maximize its chances of success. John McCarthy, who coined the term in 1955, defines it as "the
science and engineering of making intelligent machines". AI research is highly technical and
specialised, and is deeply divided into subfields that often fail to communicate with each other.
Some of the division is due to social and cultural factors: subfields have grown up around
particular institutions and the work of individual researchers. AI research is also divided by
several technical issues. Some subfields focus on the solution of specific problems. Others focus
on one of several possible approaches, on the use of a particular tool, or on the
accomplishment of particular applications. The central problems (or goals) of AI research include
reasoning, knowledge representation, planning, learning, natural language processing and
perception.
Data Analysis
It is the process of systematically applying statistical and/or logical techniques to describe and
illustrate, condense and recap, and evaluate data. According to Shamoo and Resnik (2003),
various analytic procedures provide a way of drawing inductive inferences from data and
distinguishing the signal (the phenomenon of interest) from the noise (statistical fluctuations)
present in the data.
While data analysis in qualitative research can include statistical procedures, many times
analysis becomes an ongoing iterative process where data is continuously collected and analyzed
almost simultaneously. Indeed, researchers generally analyze for patterns in observations through
the entire data collection phase (Savenye, Robinson, 2004). The form of the analysis is
determined by the specific qualitative approach taken (field study, ethnography content analysis,
oral history, biography, unobtrusive research) and the form of the data (field notes, documents,
audiotape, videotape).
An essential component of ensuring data integrity is the accurate and appropriate analysis of
research findings. Improper statistical analyses distort scientific findings, mislead casual readers
(Shepard, 2002), and may negatively influence the public perception of research. Integrity issues
are just as relevant to analysis of non-statistical data as well.
Considerations/issues in data analysis
There are a number of issues that researchers should be cognizant of with respect to data
analysis. These include:
Environmental/contextual issues
Deduction, reasoning and problem solving
Early AI researchers developed algorithms that imitated the step-by-step reasoning that humans
use when they solve puzzles or make logical deductions. By the late 1980s and 1990s, AI
research had also developed highly successful methods for dealing with uncertain or incomplete
information, employing concepts from probability and economics.
For difficult problems, most of these algorithms can require enormous computational resources;
most experience a "combinatorial explosion": the amount of memory or computer time required
becomes astronomical when the problem goes beyond a certain size. The search for more
efficient problem-solving algorithms is a high priority for AI research.
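The combinatorial explosion can be made concrete with a few lines of Python: a brute-force search over orderings of n items (for example, candidate routes in a planning problem) must examine n! possibilities, and n! outgrows any hardware very quickly.

```python
import math

# Number of candidate orderings a brute-force search must examine.
for n in [5, 10, 15, 20]:
    print(n, math.factorial(n))

# Even at a billion candidates per second, n = 20 alone needs decades:
seconds = math.factorial(20) / 1e9
print(round(seconds / (3600 * 24 * 365)))   # roughly 77 years
```

This is why AI research invests so heavily in heuristics and pruning: they shrink the portion of the search space that actually has to be examined.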
Human beings solve most of their problems using fast, intuitive judgements rather than the
conscious, step-by-step deduction that early AI research was able to model.AI has made some
progress at imitating this kind of "sub-symbolic" problem solving: embodied agent approaches
emphasize the importance of sensorimotor skills to higher reasoning; neural net research
attempts to simulate the structures inside the brain that give rise to this skill; statistical
approaches to AI mimic the probabilistic nature of the human ability to guess
believed (Nowak, 1994; Silverman, Manson, 2003). For example, Sica found that adequate
training of physicians in medical schools in the proper design, implementation and evaluation of
clinical trials is abysmally small (Sica, cited in Nowak, 1994). Indeed, a single course in
biostatistics is the most that is usually offered (Christopher Williams, cited in Nowak, 1994).
A common practice of investigators is to defer the selection of analytic procedure to a research
team statistician. Ideally, investigators should have substantially more than a basic
understanding of the rationale for selecting one method of analysis over another. This can allow
investigators to better supervise staff who conduct the data analysis and make informed
decisions.
proposed before beginning the study, even if the intent is exploratory in nature. If the study is
exploratory in nature, the investigator should make this explicit so that readers understand that
the research is more of a hunting expedition rather than being primarily theory driven. Although
a researcher may not have a theory-based hypothesis for testing relationships between previously
untested variables, a theory will have to be developed to explain an unanticipated finding.
Indeed, in exploratory science, there are no a priori hypotheses, therefore there are no
hypothesis tests. Although theories can often drive the processes used in the investigation of
qualitative studies, many times patterns of behavior or occurrences derived from analyzed data
can result in developing new theoretical frameworks, rather than being determined a priori
(Savenye, Robinson, 2004).
It is conceivable that multiple statistical tests could yield a significant finding by chance alone
rather than reflecting a true effect. Integrity is compromised if the investigator only reports tests
with significant findings, and neglects to mention a large number of tests failing to reach
significance. While access to computer-based statistical packages can facilitate application of
increasingly complex analytic procedures, inappropriate uses of these packages can result in
abuses as well.
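The risk that multiple statistical tests yield a significant finding by chance alone can be demonstrated by simulation. The sketch below uses the fact that, under the null hypothesis, a p-value is uniformly distributed on [0, 1]; the counts of tests and runs are arbitrary illustration choices.

```python
import random

random.seed(0)

# Estimate the probability that at least one of 20 independent tests on
# pure noise comes out "significant" at alpha = 0.05.
alpha, tests, runs = 0.05, 20, 10_000
hits = 0
for _ in range(runs):
    # Under the null hypothesis each p-value is uniform on [0, 1].
    if any(random.random() < alpha for _ in range(tests)):
        hits += 1
print(hits / runs)   # close to 1 - 0.95**20, about 0.64
```

In other words, a researcher who runs 20 unrelated tests and reports only the "significant" one has roughly a 64% chance of reporting pure noise, which is exactly the integrity problem described above.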
Following acceptable norms for disciplines
Every field of study has developed its accepted practices for data analysis. Resnik (2000) states
that it is prudent for investigators to follow these accepted norms. Resnik further states that the
norms are based on two factors:
(1) the nature of the variables used (i.e., quantitative, comparative, or qualitative),
(2) assumptions about the population from which the data are drawn (i.e., random distribution,
independence, sample size, etc.). If one uses unconventional norms, it is crucial to clearly state
this is being done, and to show how this new and possibly unaccepted method of analysis is
being used, as well as how it differs from other more traditional methods. For example, Schroder,
Carey, and Vanable (2003) juxtapose their identification of new and powerful data analytic
solutions developed to count data in the area of HIV contraction risk with a discussion of the
limitations of commonly applied methods.
Determining significance
While the conventional practice is to establish a standard of acceptability for statistical
significance, with certain disciplines, it may also be appropriate to discuss whether attaining
statistical significance has a true practical meaning, i.e., clinical significance. Jeans (1992)
defines clinical significance as the potential for research findings to make a real and important
difference to clients or clinical practice, to health status or to any other problem identified as a
relevant priority for the discipline.
Kendall and Grove (1988) define clinical significance in terms of what happens when
troubled and disordered clients are now, after treatment, not distinguishable from a meaningful
and representative non-disturbed reference group. Thompson and Noferi (2002) suggest that
readers of counseling literature should expect authors to report either practical or clinical
significance indices, or both, within their research reports. Shepard (2003) questions why some
authors fail to point out that the magnitude of observed changes may be too small to have any
clinical or practical significance; sometimes, a supposed change may be described in some
detail, but the investigator fails to disclose that the trend is not statistically significant.
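Practical significance is commonly reported through an effect size alongside the p-value. One widely used index (not named by the authors cited above, so this is an illustrative choice) is Cohen's d, the difference in group means divided by the pooled standard deviation; the outcome scores below are hypothetical.

```python
import statistics

def cohens_d(treated, control):
    """Standardized effect size: difference in means over the pooled SD."""
    n1, n2 = len(treated), len(control)
    pooled_var = ((n1 - 1) * statistics.variance(treated)
                  + (n2 - 1) * statistics.variance(control)) / (n1 + n2 - 2)
    return (statistics.mean(treated) - statistics.mean(control)) / pooled_var ** 0.5

# Hypothetical outcome scores for a treated and a control group.
treated = [52, 47, 51, 49, 50, 51]
control = [50, 48, 49, 51, 47, 50]
print(round(cohens_d(treated, control), 2))   # 0.51, a medium effect
```

Reporting d (or a comparable index) lets readers judge whether a statistically significant difference is large enough to matter to clients or clinical practice.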
Lack of clearly defined and objective outcome measurements
No amount of statistical analysis, regardless of the level of the sophistication, will correct poorly
defined objective outcome measurements. Whether done unintentionally or by design, this
practice increases the likelihood of clouding the interpretation of findings, thus potentially
misleading readers.
Provide honest and accurate analysis
The basis for this issue is the urgency of reducing the likelihood of statistical error. Common
challenges include the exclusion of outliers, filling in missing data, altering or otherwise
changing data, data mining, and developing graphical representations of the data (Shamoo,
Resnik, 2003).
Manner of presenting data
At times investigators may enhance the impression of a significant finding by determining how
to present derived data (as opposed to data in its raw form), which portion of the data is shown,
why, how and to whom (Shamoo, Resnik, 2003). Nowak (1994) notes that even experts do not
agree in distinguishing between analyzing and massaging data. Shamoo (1989) recommends that
investigators maintain a sufficient and accurate paper trail of how data was manipulated for
future review.
Environmental/contextual issues
The integrity of data analysis can be compromised by the environment or context in which data
was collected, i.e., face-to-face interviews vs. a focus group. The interaction occurring within a
dyadic relationship (interviewer-interviewee) differs from the group dynamic occurring within a
focus group because of the number of participants, and how they react to each other's responses.
Since the data collection process could be influenced by the environment/context, researchers
should take this into account when conducting data analysis.
Data recording method
Analyses could also be influenced by the method in which data was recorded. For example,
research events could be documented by handwritten field notes, audiotape or videotape.
Reliability of the subsequent analysis rests on criteria such as stability, or the tendency for
coders to consistently re-code the same data in the same way over a period of time.
The potential for compromising data integrity arises when researchers cannot consistently
demonstrate stability, reproducibility, or accuracy of data analysis.
According to Gottschalk (1995), the validity of a content analysis study refers to the
correspondence of the categories (the classification that raters assigned to text content) to the
conclusions, and the generalizability of results to a theory (did the categories support the study's
conclusion, and was the finding adequately robust to support or be applied to a selected
theoretical rationale?).
Extent of analysis
Upon coding text material for content analysis, raters must classify each code into an appropriate
category of a cross-reference matrix. Relying on computer software to determine a frequency or
word count can lead to inaccuracies. One may obtain an accurate count of that word's
occurrence and frequency, but not have an accurate accounting of the meaning inherent in each
particular usage (Gottschalk, 1995). Further analyses might be appropriate to discover the
dimensionality of the data set or identity new meaningful underlying variables.
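The limitation of raw word counts described above is easy to demonstrate: a frequency count treats every occurrence of a word alike, regardless of meaning. The sentence below is an invented example.

```python
from collections import Counter

# A raw frequency count is blind to meaning and context.
text = ("the patient presented a cold manner; "
        "the patient caught a cold last winter")
counts = Counter(text.replace(";", "").split())
print(counts["cold"])   # 2, although one 'cold' is a demeanor, one an illness
```

An accurate count of occurrence and frequency therefore says nothing by itself about the meaning inherent in each particular usage, which is why human coding of categories remains necessary.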
Whether statistical or non-statistical methods of analyses are used, researchers should be aware
of the potential for compromising data integrity. While statistical analysis is typically performed
on quantitative data, there are numerous analytic procedures specifically designed for qualitative
material including content, thematic, and ethnographic analysis. Regardless of whether one
studies quantitative or qualitative phenomena, researchers use a variety of tools to analyze data
in order to test hypotheses, discern patterns of behavior, and ultimately answer research
questions. Failure to understand or acknowledge data analysis issues presented can compromise
data integrity.
Analysis of data is a process of inspecting, cleaning, transforming, and modeling data with the
goal of discovering useful information, suggesting conclusions, and supporting decision-making.
Data analysis has multiple facets and approaches, encompassing diverse techniques under a
variety of names, in different business, science, and social science domains.
Data mining is a particular data analysis technique that focuses on modeling and knowledge
discovery for predictive rather than purely descriptive purposes. Business intelligence covers
data analysis that relies heavily on aggregation, focusing on business information. In statistical
applications, some people divide data analysis into descriptive statistics, exploratory data
analysis (EDA), and confirmatory data analysis (CDA). EDA focuses on discovering new
features in the data and CDA on confirming or falsifying existing hypotheses. Predictive
analytics focuses on application of statistical models for predictive forecasting or classification,
while text analytics applies statistical, linguistic, and structural techniques to extract and classify
information from textual sources, a species of unstructured data. All are varieties of data
analysis.
Data integration is a precursor to data analysis, and data analysis is closely linked to data
visualization and data dissemination. The term data analysis is sometimes used as a synonym
for data modeling.
CHAPTER 5
CONCLUSION
1. Artificial Intelligence has come a long way from its beginnings to the present day, and it
is still evolving day by day.
2. Artificial Intelligence helps us easily understand the various areas in which a user can
interact with intelligent systems.
3. AI has really made our lives much easier, and it has been contributing a great deal to the
development of our society and our businesses.
4. If we talk about taking up AI as a profession/career, AI has a very wide spectrum in the
future. It has a very wide scope for people who are interested in choosing AI as a
professional field.
5. In the future we will be experiencing technology which is beyond our imagination right
now.
CHAPTER-6
RECOMMENDATION
&
SUGGESTION
The study should cover the whole of artificial intelligence, not only the emergence of
artificial intelligence.
The study should not be limited to the history and development of artificial
intelligence.
The study should focus on the impact of artificial intelligence on society and all
related areas.
CHAPTER-7
BIBLIOGRAPHY
GICC Report on Internet and Artificial Intelligence 2000, Government of India, New
Delhi.
IEEE 1999, 'Artificial Intelligence: Perspective from Different Parts of the World'.
WEBSITES:
http://newint.org/books/reference/world-development/case-studies/2013/03/14/artificial-intelligence-in-developing-world/
http://wikieducator.org/History_of_Artificial_Intelligence_Development_%26_Generation_of_Artificial_Intelligence
http://www.byte-notes.com/advantages-and-disadvantages-artificial-intelligence
http://www.informationq.com/uses-of-artificial-intelligence-in-different-fields-areas-sectors-industries-banking/