
A SEMINAR REPORT ON

ARTIFICIAL INTELLIGENCE

-By Avinash Kumar

ABSTRACT

This paper is an introduction to Artificial Intelligence (AI). Artificial intelligence is intelligence exhibited by an artificial entity, generally assumed to be a computer. AI systems are now in routine use in economics, medicine, engineering and the military, as well as being built into many common home computer software applications, traditional strategy games such as computer chess, and other video games.

This report explains the basic ideas of AI and its application to various fields. It clarifies the distinction between the computational and conventional categories of AI, and covers advanced approaches such as neural networks, fuzzy systems and evolutionary computation. AI is applied to typical problems such as pattern recognition and natural language processing, and such systems are at work throughout the world as a kind of artificial brain.

Intelligence involves mechanisms, and AI research has discovered how to make computers
carry out some of them and not others. If doing a task requires only mechanisms that are well
understood today, computer programs can give very impressive performances on these tasks.
Such programs should be considered "somewhat intelligent". AI research is related to the similar task of using computers to understand human intelligence.

We can learn something about how to make machines solve problems by observing other
people or just by observing our own methods. On the other hand, most work in AI involves
studying the problems the world presents to intelligence rather than studying people or
animals. AI researchers are free to use methods that are not observed in people or that involve
much more computing than people can do. We also discuss the conditions for considering a machine intelligent, arguing that if a machine can successfully pretend to be human to a knowledgeable observer, then it should certainly be considered intelligent.

CONTENTS

CHAPTER  TITLE

1  Introduction
   1.1 Introduction
   1.2 History
2  Dive into AI
   2.1 Categories of AI
3  Case Study on Artificial Intelligence
   3.1 Google DeepMind
   3.2 Machine Learning
   3.3 Deep Reinforcement Learning
   3.4 AlphaGo
   3.5 Healthcare
   3.6 Controversies
4  Goals of AI
   4.1 Main Goals of Artificial Intelligence
   4.2 Typical Problems to Which AI Methods Are Applied
   4.3 Other Fields in Which AI Methods Are Implemented
5  Applications of AI
   5.1 Applications of Artificial Intelligence
   5.2 Future Scope of AI
   Conclusion

CHAPTER-1
INTRODUCTION

1.1 INTRODUCTION

ARTIFICIAL:-
Artificial, simply defined, describes objects that are made or produced by human beings rather than occurring naturally.

INTELLIGENCE:-
Intelligence, simply defined, entails a set of problem-solving skills: the ability to resolve genuine problems or difficulties as they are encountered, to create an effective product, and to find or create new problems, thereby laying the groundwork for the acquisition of new knowledge.

ARTIFICIAL INTELLIGENCE:-
Artificial intelligence is a branch of science which deals with helping machines find solutions to complex problems in a more human-like fashion. This generally involves borrowing characteristics from human intelligence and applying them as algorithms in a computer-friendly way. A more or less flexible or efficient approach can be taken depending on the requirements established, which influences how artificial the intelligent behaviour appears.

Artificial intelligence is generally associated with computer science, but it has many important links with other fields such as mathematics, psychology, cognition, biology and philosophy, among many others. Our ability to combine knowledge from all these fields will ultimately benefit our progress in the quest to create an intelligent artificial being.

In the popular mind, A.I. is mainly associated with the development of robots, but its main field of practical application has been as an embedded component in areas of software development that require computational understanding and modelling, such as finance and economics, data mining and the physical sciences.

In robotics, one aim of A.I. is to build computational models of human thought processes. It is not enough to make a program that merely seems to behave the way humans do; the goal is a program that does it the way humans do it.

Computer science poses similar problems: building computers capable of understanding high-level languages was itself once taken to be A.I.

Fig. 1.1 Artificial Intelligence

1.2 HISTORY

Fig.1.2 Evolution of Artificial Intelligence

The intellectual roots of AI, and the concept of intelligent machines, may be found in Greek
mythology. Intelligent artifacts appear in literature since then, with real mechanical devices
actually demonstrating behaviour with some degree of intelligence. After modern computers
became available following World War-II, it has become possible to create programs that
perform difficult intellectual tasks.

1.2.1 The Timeline


1950s: The Beginnings of Artificial Intelligence (AI) Research

With the development of the electronic computer in 1941 and the stored-program computer in 1949, the conditions for research in artificial intelligence were in place; still, a link between human intelligence and machines was not widely recognized until the late 1950s. The first working AI programs were written in 1951 to run on the Ferranti Mark I machine at the University of Manchester (UK): a draughts-playing program written by Christopher Strachey and a chess-playing program written by Dietrich Prinz.

The person who finally coined the term artificial intelligence, and who is regarded as the father of AI, is John McCarthy. In 1956 he organized a conference, "The Dartmouth Summer Research Project on Artificial Intelligence", to draw on the talent and expertise of others interested in machine intelligence for a month of brainstorming. In the following years AI research centers began forming at Carnegie Mellon University as well as the Massachusetts Institute of Technology (MIT), and new challenges were faced:
1) The creation of systems that could efficiently solve problems by limiting the search.
2) The construction of systems that could learn by themselves.

1960s:-

By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense and laboratories had been established around the world. AI's founders were profoundly optimistic about the future of the new field: Herbert Simon predicted that "machines will be capable, within twenty years, of doing any work a man can do", and Marvin Minsky agreed, writing that "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved".

During the 1960s, America's federal government pushed harder for the development of AI. The Department of Defense backed several programs in order to stay ahead of Soviet technology, and the U.S. also started to commercially market robotics to various manufacturers. Expert systems rose in popularity with the creation of Edward Feigenbaum and Robert K. Lindsay's DENDRAL. DENDRAL had the ability to map the complex structures of organic chemicals but, like many AI inventions, began to tangle its results once too many factors were built into the program. The same predicament befell the program SHRDLU, which connected robotics to a computer so that the user could ask questions and give commands in English.

1980s:-

In the early 1980s, AI research was revived by the commercial success of expert systems, a form of AI program that simulated the knowledge and analytical skills of one or more human experts. By 1985 the market for AI had reached over a billion dollars. At the same time, Japan's fifth-generation computer project inspired the U.S. and British governments to restore funding for academic research in the field. In the 1990s and early 21st century, AI achieved its greatest successes, albeit somewhat behind the scenes: artificial intelligence is used for logistics, data mining, medical diagnosis and many other areas throughout the technology industry.
1990s:-
From the 1990s until the turn of the century, AI reached some incredible landmarks with the creation of intelligent agents, which use their surrounding environment to solve problems in the most efficient and effective manner. In 1997, a computer (named Deep Blue) beat a world chess champion for the first time. In 1995, the VaMP car drove an entire 158 km racing track without any help from human intelligence. In 1999, humanoid robots began to gain popularity, as well as the ability to walk around freely. Since then, AI has been playing a big role in certain commercial markets and throughout the World Wide Web. The more advanced AI projects, like fully adapting common sense knowledge, have taken a back burner to more lucrative industries.

Fig.1.3 Time v/s Human Progress

CHAPTER-2
DIVE INTO AI

2.1 CATEGORIES OF A.I

AI divides roughly into two schools of thought:

1. Conventional AI
2. Computational Intelligence (CI)

1. Conventional AI:-

Conventional AI mostly involves methods now classified as machine learning, characterized by formalism and statistical analysis. This is also known as symbolic AI, logical AI, neat AI and Good Old-Fashioned Artificial Intelligence (GOFAI).

Methods include:

 Expert systems: apply reasoning capabilities to reach a conclusion. An expert system can process large amounts of known information and provide conclusions based on it (a minimal sketch follows this list).
 Case-based reasoning
 Bayesian networks
 Behavior-based AI: a modular method of building AI systems by hand.
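To make the expert-system idea concrete, here is a minimal sketch of the forward-chaining reasoning loop at its heart. The facts and rules below are invented purely for illustration; real systems such as MYCIN used hundreds of rules with certainty factors.

# A minimal forward-chaining "expert system" sketch (rules invented).
# Facts are strings; each rule maps a set of premises to a conclusion.
RULES = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles", "not_vaccinated"}, "recommend_isolation"),
]

def infer(facts):
    """Repeatedly apply rules until no new conclusions can be drawn."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"has_fever", "has_rash", "not_vaccinated"}))
# -> includes 'suspect_measles' and 'recommend_isolation'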

2. Computational Intelligence (CI):-

Computational Intelligence involves iterative development or learning (e.g. parameter tuning in connectionist systems). Learning is based on empirical data and is associated with non-symbolic AI, scruffy AI and soft computing.

Methods include:

 Neural networks: systems with very strong pattern-recognition capabilities.
 Fuzzy systems: techniques for reasoning under uncertainty; widely used in modern industrial and consumer product control systems.
 Evolutionary computation: applies biologically inspired concepts such as populations, mutation and survival of the fittest to generate increasingly better solutions to a problem. These methods divide most notably into evolutionary algorithms (e.g. genetic algorithms) and swarm intelligence (e.g. ant algorithms). A small genetic-algorithm sketch follows.
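The flavour of evolutionary computation can be seen in a toy genetic algorithm. The problem (maximising the number of 1s in a bit-string) and all parameters are invented for illustration:

import random

# Toy genetic algorithm sketch: evolve a bit-string toward all ones
# using selection ("survival of the fittest") and mutation.
TARGET_LEN, POP_SIZE, GENERATIONS = 20, 30, 100

def fitness(bits):
    return sum(bits)                     # number of 1s in the string

def mutate(bits, rate=0.05):
    return [b ^ (random.random() < rate) for b in bits]

population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == TARGET_LEN:
        break
    survivors = population[: POP_SIZE // 2]           # selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

print("generation:", gen, "best:", max(population, key=fitness))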

Fig.2.1 Global Computational Intelligence brief history

CHAPTER-3
CASE STUDY ON ARTIFICIAL INTELLIGENCE

3.1 GOOGLE DEEPMIND

DeepMind Technologies Limited is a British artificial intelligence company founded in September 2010 and acquired by Google in 2014. The company has created a neural network that learns how to play video games in a fashion similar to that of humans, as well as a Neural Turing Machine, a neural network that may be able to access an external memory like a conventional Turing machine, resulting in a computer that mimics the short-term memory of the human brain.

The company made headlines in 2016 after its AlphaGo program beat a human professional
Go player for the first time.

3.1.1 History

The start-up was founded by Demis Hassabis, Shane Legg and Mustafa Suleyman in 2010. Hassabis and Legg first met at University College London's Gatsby Computational Neuroscience Unit. On 26 January 2014, Google announced that it had agreed to acquire DeepMind Technologies for $500 million.

Since then major venture capital firms Horizons Ventures and Founders Fund have invested
in the company, as well as entrepreneurs Scott Banister and Elon Musk. Jaan Tallinn was an
early investor and an adviser to the company. The sale to Google took place after Facebook
reportedly ended negotiations with DeepMind Technologies in 2013. The company was
afterwards renamed Google DeepMind and kept that name for about two years.

In 2014, DeepMind received the "Company of the Year" award from the Cambridge Computer Laboratory.

After Google's acquisition the company established an artificial intelligence ethics board. The
ethics board for AI research remains a mystery, with both Google and DeepMind declining to
reveal who sits on the board. DeepMind, together with Amazon, Google, Facebook, IBM,

and Microsoft, is a member of Partnership on AI, an organization devoted to the society-AI
interface.

3.2 Machine learning

DeepMind Technologies' goal is to "solve intelligence", which they are trying to achieve by
combining "the best techniques from machine learning and systems neuroscience to build
powerful general-purpose learning algorithms". They are trying to formalize intelligence in
order to not only implement it into machines, but also understand the human brain, as Demis
Hassabis explains:

[...] attempting to distil intelligence into an algorithmic construct may prove to be the best
path to understanding some of the enduring mysteries of our minds.

In 2016, Google Research released a paper regarding AI safety and avoiding undesirable behaviour during the AI learning process. DeepMind has also released several publications via its website.

To date, the company has published research on computer systems that are able to play games, and on developing these systems, ranging from strategy games such as Go to arcade games. According to Shane Legg, human-level machine intelligence can be achieved "when a machine can learn to play a really wide range of games from perceptual stream input and output, and transfer understanding across games [...]". Research describing an AI playing seven different Atari 2600 video games (the Pong game in Video Olympics, Breakout, Space Invaders, Seaquest, Beamrider, Enduro, and Q*bert) reportedly led to the company's acquisition by Google. Hassabis has mentioned the popular e-sport game StarCraft as a possible future challenge, since it requires a high level of strategic thinking and handling imperfect information.

3.3 Deep reinforcement learning

As opposed to other AIs, such as IBM's Deep Blue or Watson, which were developed for a pre-defined purpose and only function within their scope, DeepMind claims that its system is not pre-programmed: it learns from experience, using only raw pixels as data input. Technically it uses deep learning on a convolutional neural network, with a novel form of Q-learning, a form of model-free reinforcement learning. The system is tested on video games, notably early arcade games such as Space Invaders or Breakout. Without altering the code, the AI begins to understand how to play the game, and after some time plays, for a few games (most notably Breakout), a more efficient game than any human ever could.
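The tabular form of the Q-learning update underlying this approach can be sketched in a few lines; DeepMind's deep variant replaces the table with a convolutional network over raw pixels. The toy environment and parameters below are invented for illustration:

import random
from collections import defaultdict

# Tabular Q-learning sketch on a toy 1-D walk: states 0..4, actions -1/+1,
# reward 1 for reaching state 4. DeepMind's DQN replaces this table with
# a convolutional neural network reading raw pixels.
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1
ACTIONS = (-1, 1)
Q = defaultdict(float)

def greedy(state):
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

for episode in range(200):
    s = 0
    while s != 4:
        a = random.choice(ACTIONS) if random.random() < EPSILON else greedy(s)
        s2 = max(0, min(4, s + a))
        r = 1.0 if s2 == 4 else 0.0
        # Core update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, a2)] for a2 in ACTIONS) - Q[(s, a)])
        s = s2

print({k: round(v, 2) for k, v in sorted(Q.items())})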

For most games (Space Invaders, Ms Pac-Man, Q*Bert for example), DeepMind plays below
the current World Record. The application of DeepMind's AI to video games is currently for
games made in the 1970s and 1980s, with work being done on more complex 3D games such
as Doom, which first appeared in the early 1990s.

3.4 AlphaGo

In October 2015, a computer Go program called AlphaGo, powered by DeepMind, beat the European Go champion Fan Hui, a 2 dan (out of a possible 9 dan) professional, five to zero. This was the first time an artificial intelligence (AI) defeated a professional Go player. Previously, computers were only known to have played Go at "amateur" level. Go is considered much more difficult for computers to win than games such as chess, because its much larger number of possibilities makes it prohibitively difficult for traditional AI methods such as brute-force search. The announcement of the news was delayed until 27 January 2016 to coincide with the publication of a paper in the journal Nature describing the algorithms used. In March 2016 it beat Lee Sedol, a 9 dan Go player and one of the highest-ranked players in the world, 4-1 in a five-game match.

Fig.3.1 AlphaGo Logo

3.5 Healthcare

In July 2016, a collaboration between DeepMind and Moorfields Eye Hospital was
announced. DeepMind would be applied to the analysis of anonymised eye scans, searching
for early signs of diseases leading to blindness.

In August 2016, a research programme with University College London Hospital was
announced with the aim of developing an algorithm that can automatically differentiate
between healthy and cancerous tissues in head and neck areas.

There are also projects with the Royal Free London NHS Foundation Trust and Imperial
College Healthcare NHS Trust to develop new clinical mobile apps linked to electronic
patient records.

3.6 Controversies

In April 2016 New Scientist obtained a copy of a data-sharing agreement between DeepMind
and the Royal Free London NHS Foundation Trust. The latter operates the three London
hospitals where an estimated 1.6 million patients are treated annually. The revelation has
exposed the ease with which private companies can obtain highly sensitive medical
information without patient consent. The agreement shows DeepMind Health is gaining
access to admissions, discharge and transfer data, accident and emergency, pathology and
radiology, and critical care at these hospitals. This included personal details such as whether
patients had been diagnosed with HIV, suffered from depression or had ever undergone an
abortion. The agreement is seen as controversial and its legality has been questioned.
Officials from Google have yet to make a statement on the matter.

The concerns were widely reported and have led to a complaint to the Information Commissioner's Office (ICO), arguing that the data should be pseudonymised and encrypted.

In May 2016, New Scientist published a further article claiming that the project had failed to
secure approval from the Confidentiality Advisory Group of the Medicines and Healthcare
Products Regulatory Agency.

CHAPTER-4
GOALS OF A.I.

4.1 Main Goals of Artificial Intelligence:-


The general problem of simulating (or creating) intelligence has been broken down into a
number of specific sub-problems. These consist of particular traits or capabilities that
researchers would like an intelligent system to display. The traits described below have
received the most attention.

4.1.1. Deduction, reasoning, problem solving:-

For difficult problems, most of these algorithms can require enormous computational
resources most experience a "combinatorial explosion": the amount of memory or computer
time required becomes astronomical when the problem goes beyond a certain size. The search
for more efficient problem-solving algorithms is a high priority for AI research.

Human beings solve most of their problems using fast, intuitive judgements rather than the
conscious, step-by-step deduction that early AI research was able to model. AI has made
some progress at imitating this kind of "sub-symbolic" problem solving: embodied agent
approaches emphasize the importance of sensorimotor skills to higher reasoning; neural net
research attempts to simulate the structures inside the brain that give rise to this skill;
statistical approaches to AI mimic the probabilistic nature of the human ability to guess.
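To make the notion of state-space search from the first paragraph concrete, here is a minimal breadth-first search sketch. The toy problem (reach a target number using "+1" and "*2" moves) is invented for illustration, and the count of explored states hints at how quickly search spaces grow:

from collections import deque

# Breadth-first search sketch over an abstract state space (illustrative).
# Each state expands into successor states; the frontier can grow
# exponentially with depth: the "combinatorial explosion" in miniature.
def bfs(start, is_goal, successors):
    frontier, seen = deque([(start, [start])]), {start}
    while frontier:
        state, path = frontier.popleft()
        if is_goal(state):
            return path, len(seen)        # solution and states explored
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None, len(seen)

# Toy problem: reach 37 from 1 using "+1" and "*2" moves.
path, explored = bfs(1, lambda s: s == 37, lambda s: (s + 1, s * 2))
print(path, "explored:", explored)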

4.1.2. Knowledge representation:-

Knowledge representation and knowledge engineering are central to AI research. Many of the problems machines are expected to solve will require extensive knowledge about the world. Among the things that AI needs to represent are: objects, properties, categories and relations between objects; situations, events, states and time; causes and effects; knowledge about knowledge (what we know about what other people know); and many other, less well researched domains. A representation of "what exists" is an ontology: the set of objects, relations, concepts and so on that the machine knows about. The most general are called upper ontologies, which attempt to provide a foundation for all other knowledge.
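As a toy illustration of knowledge representation, the sketch below encodes a tiny "is-a" hierarchy with property inheritance; the concepts and properties are invented for illustration:

# Toy ontology sketch (illustrative): an "is-a" hierarchy with property
# inheritance, the kind of structure a knowledge representation encodes.
ISA = {"canary": "bird", "bird": "animal"}
PROPS = {"canary": {"colour": "yellow"},
         "bird": {"can_fly": True},
         "animal": {"alive": True}}

def lookup(concept, prop):
    """Walk up the is-a chain until the property is found."""
    while concept is not None:
        if prop in PROPS.get(concept, {}):
            return PROPS[concept][prop]
        concept = ISA.get(concept)
    return None

print(lookup("canary", "can_fly"))   # True, inherited from 'bird'
print(lookup("canary", "alive"))     # True, inherited from 'animal'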

4.1.3. Planning:-
Intelligent agents must be able to set goals and achieve them. They need a way to visualize
the future and be able to make choices that maximize the utility (or "value") of the available
choices.
In classical planning problems, the agent can assume that it is the only thing acting on the
world and it can be certain what the consequences of its actions may be. However, if the
agent is not the only actor, it must periodically ascertain whether the world matches its
predictions and it must change its plan as this becomes necessary, requiring the agent to
reason under uncertainty.

4.1.4. Natural language processing:-

Natural language processing gives machines the ability to read and understand the languages
that humans speak. A sufficiently powerful natural language processing system would enable
natural language user interfaces and the acquisition of knowledge directly from human-
written sources, such as Internet texts. Some straightforward applications of natural language processing include information retrieval (or text mining) and machine translation. A common method of processing and extracting meaning from natural language is semantic indexing. Increases in processing speed and the falling cost of data storage make indexing large volumes of abstractions of the user's input much more efficient.
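A very small retrieval sketch in this spirit: rank documents for a query using tf-idf weights, which favour terms that are frequent in a document but rare across the collection. The documents and query are invented for illustration:

import math
from collections import Counter

# Minimal tf-idf retrieval sketch (illustrative): rank documents by how
# often they contain the query terms, down-weighting common terms.
docs = ["the cat sat on the mat",
        "dogs and cats make good pets",
        "stock markets fell sharply today"]
tokenized = [d.split() for d in docs]

def idf(term):
    n = sum(1 for doc in tokenized if term in doc)
    return math.log(len(docs) / n) if n else 0.0

def score(query, doc):
    tf = Counter(doc)
    return sum(tf[t] * idf(t) for t in query.split())

query = "cat on mat"
ranked = sorted(range(len(docs)),
                key=lambda i: score(query, tokenized[i]), reverse=True)
print([docs[i] for i in ranked])   # the first document ranks highest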

4.1.5. Motion and manipulation:-

The field of robotics is closely related to AI. Intelligence is required for robots to be able to
handle such tasks as object manipulation and navigation, with sub-problems of localization
(knowing where you are, or finding out where other things are), mapping (learning what is
around you, building a map of the environment), and motion planning (figuring out how to
get there) or path planning (going from one point in space to another point, which may
involve compliant motion - where the robot moves while maintaining physical contact with
an object).

4.1.6. Perception:-

Machine perception is the ability to use input from sensors (such as cameras, microphones, sonar and other, more exotic ones) to deduce aspects of the world. Computer vision is the ability to analyze visual input. A few selected subproblems are speech recognition, facial recognition and object recognition.

4.1.7. Social intelligence:-

Affective computing is the study and development of systems and devices that can recognize, interpret, process, and simulate human affects. It is an interdisciplinary field spanning computer science, psychology and cognitive science, whose origins may be traced as far back as early philosophical inquiries into emotion. A motivation for the research is the ability to simulate empathy: the machine should interpret the emotional state of humans and adapt its behaviour to them, giving an appropriate response to those emotions.

Emotion and social skills play two roles for an intelligent agent. First, it must be able to
predict the actions of others, by understanding their motives and emotional states. (This
involves elements of game theory, decision theory, as well as the ability to model human
emotions and the perceptual skills to detect emotions.) Also, in an effort to facilitate human-
computer interaction, an intelligent machine might want to be able to display emotions—even
if it does not actually experience them itself—in order to appear sensitive to the emotional
dynamics of human interaction.

4.1.8. General intelligence:-

Most researchers think that their work will eventually be incorporated into a machine with
general intelligence (known as strong AI), combining all the skills above and exceeding
human abilities at most or all of them. A few believe that anthropomorphic features like
artificial consciousness or an artificial brain may be required for such a project.

Many of the problems above may require general intelligence to be considered solved. For example, even a straightforward, specific task like machine translation requires that the machine read and write in both languages (NLP), follow the author's argument (reason), know what is being talked about (knowledge), and faithfully reproduce the author's intention (social intelligence). A problem like machine translation is therefore considered "AI-complete": in order to solve this particular problem, you must solve all the problems.

4.2 Typical problems to which AI methods are applied :-

 Pattern recognition (a small sketch follows this list)
   o Optical character recognition
   o Handwriting recognition
   o Speech recognition
   o Face recognition
 Natural language processing, translation and chatterbots
 Non-linear control and robotics
 Computer vision, virtual reality and image processing
 Game theory and strategic planning
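For instance, optical character recognition can be prototyped in a few lines with a nearest-neighbour classifier. This sketch assumes the scikit-learn library and its bundled 8x8 handwritten-digit images:

# Pattern-recognition sketch: classify handwritten digits with k-nearest
# neighbours (assumes scikit-learn is installed; digits are 8x8 images).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))  # typically ~0.98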

4.3 Other fields in which AI methods are implemented :-

4.3.1 Automation:-

Automation is the use of machines, control systems and information technologies to optimize
productivity in the production of goods and delivery of services. The correct incentive for
applying automation is to increase productivity, and/or quality beyond that possible with
current human labor levels so as to realize economies of scale, and/or realize predictable
quality levels. automation greatly decreases the need for human sensory and mental
requirements while increasing load capacity, speed, and repeatability.

4.3.2 Cybernetics:-

Cybernetics is in some ways the science of organisation, with special emphasis on the dynamic nature of the system being organised. The human brain is just such a complex organisation qualifying for cybernetic study: it has all the characteristics of feedback, storage, etc., and is in this respect also typical of many large businesses or government departments. One branch of cybernetics is artificial intelligence, where the aim is to show how artificially manufactured systems can demonstrate intelligent behaviour.

4.3.3 Hybrid intelligent system :-

Hybridization of different intelligent systems is an innovative approach to constructing computationally intelligent systems, combining artificial neural networks, fuzzy inference systems, rough sets, approximate reasoning and derivative-free optimization methods such as evolutionary computation, swarm intelligence, bacterial foraging and so on. The integration of different learning and adaptation techniques, to overcome individual limitations and achieve synergetic effects through hybridization or fusion, has in recent years contributed to the emergence of a large number of new, superior intelligent systems known as hybrid intelligent systems.

4.3.4 Intelligent agent:-

In artificial intelligence, an intelligent agent (IA) is an autonomous entity which observes its environment through sensors and acts upon it using actuators (i.e. it is an agent), directing its activity towards achieving goals.
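The agent abstraction reduces to a simple sense-decide-act loop. The thermostat-style reflex agent below is invented for illustration:

# Sketch of the sense-decide-act loop of a simple reflex agent
# (a thermostat-style example, invented for illustration).
class ThermostatAgent:
    def __init__(self, target):
        self.target = target

    def decide(self, percept):
        """Map the sensed temperature to an action."""
        if percept < self.target - 1:
            return "heat_on"
        if percept > self.target + 1:
            return "heat_off"
        return "idle"

agent = ThermostatAgent(target=20.0)
for sensed_temp in (17.5, 19.8, 22.3):   # simulated sensor readings
    print(sensed_temp, "->", agent.decide(sensed_temp))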

4.3.5 Intelligent control:-

Intelligent control, or self-organising/learning control, is an emerging discipline designed to deal with control problems that resist precise modelling: rather than being model-based, it is experience-based. Intelligent control is an amalgam of the disciplines of artificial intelligence, systems theory and operations research. It uses the most recent experiences or evidence to improve its performance through a variety of learning schemas, which for practical implementation must demonstrate rapid learning convergence, be temporally stable, and be robust to parameter changes and internal and external disturbances.

4.3.6 Automated reasoning:-

The study of automated reasoning helps produce software that allows computers to reason
completely, or nearly completely, automatically. Although automated reasoning is considered
a sub-field of artificial intelligence, it also has connections with theoretical computer science,
and even philosophy.

4.3.7 Data mining:-

Data mining (the analysis step of the "Knowledge Discovery in Databases" process, or KDD), an interdisciplinary subfield of computer science, is the computational process of discovering patterns in large data sets using methods at the intersection of artificial intelligence, machine learning, statistics, and database systems. The overall goal of the data mining process is to extract information from a data set and transform it into an understandable structure for further use.
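A typical data-mining step is unsupervised pattern discovery. A minimal clustering sketch, assuming the scikit-learn library and invented data, might look like this:

# Data-mining sketch: discover groups in unlabelled data with k-means
# clustering (assumes scikit-learn is installed; data invented).
from sklearn.cluster import KMeans

points = [[1.0, 1.1], [0.9, 1.0], [1.2, 0.8],     # cluster near (1, 1)
          [8.0, 8.2], [7.9, 8.1], [8.3, 7.8]]     # cluster near (8, 8)

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print("labels: ", model.labels_)
print("centres:", model.cluster_centers_)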

4.3.8 Behavior-based robotics:-

Behavior-based robotics is a branch of robotics that bridges artificial intelligence (AI), engineering and cognitive science. Its dual goals are:
(1) To develop methods for controlling artificial systems, ranging from physical robots to simulated ones and other autonomous software agents.
(2) To use robotics to model and understand biological systems more fully, typically animals ranging from insects to humans. (See also: cognitive robotics.)

4.3.9 Developmental robotics:-

 Developmental robotics (DevRob), sometimes called epigenetic robotics, is a methodology that uses metaphors from neural development and developmental psychology to develop the mind of autonomous robots.
 The program that simulates the functions of the genome to develop a robot's mental capabilities is called a developmental program.

4.3.10 Evolutionary robotics:-

Evolutionary robotics (ER) is a methodology that uses evolutionary computation to develop controllers for autonomous robots.

4.3.11 Chatbot:-

A chatterbot (chatter robot) is a type of conversational agent: a computer program designed to simulate an intelligent conversation with one or more human users via auditory or textual methods.
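At its simplest, a chatterbot is a loop that matches patterns in the user's text and emits a canned or templated reply. The patterns below are invented for illustration, in the spirit of ELIZA:

import re

# Minimal pattern-matching chatterbot sketch (ELIZA-style; rules invented).
RULES = [
    (re.compile(r"\bhello\b|\bhi\b", re.I), "Hello! How can I help you?"),
    (re.compile(r"i feel (.+)", re.I),      "Why do you feel {0}?"),
    (re.compile(r"\bbye\b", re.I),          "Goodbye!"),
]

def reply(text):
    for pattern, response in RULES:
        match = pattern.search(text)
        if match:
            return response.format(*match.groups())
    return "Tell me more."

print(reply("hi there"))        # Hello! How can I help you?
print(reply("I feel tired"))    # Why do you feel tired?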

4.3.12 Knowledge Representation:-

Knowledge representation (KR) is an area of artificial intelligence research aimed at representing knowledge in symbols to facilitate inferencing from those knowledge elements, creating new elements of knowledge. The KR can be made independent of the underlying knowledge model or knowledge base system (KBS), such as a semantic network.

American Association for Artificial Intelligence (AAAI) :-

Founded in 1979, the American Association for Artificial Intelligence (AAAI) is a nonprofit
scientific society devoted to advancing the scientific understanding of the mechanisms
underlying thought and intelligent behaviour and their embodiment in machines. AAAI also
aims to increase public understanding of artificial intelligence, improve the teaching and
training of AI practitioners, and provide guidance for research planners and funders
concerning the importance and potential of current AI developments and future directions.

CHAPTER-5
APPLICATIONS OF A.I.

5.1 Applications of Artificial Intelligence:-

Artificial intelligence has been used in a wide range of fields including medical diagnosis,
stock trading, robot control, law, scientific discovery and toys.

Fig. 5.1 Applications of AI

5.1.1 Hospitals and medicine:-

A medical clinic can use artificial intelligence systems to organize bed schedules, make a
staff rotation, and provide medical information.

Artificial neural networks are used as clinical decision support systems for medical
diagnosis, such as in Concept Processing technology in EMR software.

Other tasks in medicine that can potentially be performed by artificial intelligence include:

 Computer-aided interpretation of medical images. Such systems help scan digital images, e.g. from computed tomography, for typical appearances and highlight conspicuous sections, such as possible diseases. A typical application is the detection of a tumor.
 Heart sound analysis.

5.1.2 Heavy industry:-

Robots have become common in many industries. They are often given jobs that are considered dangerous to humans. Robots have proven effective in jobs that are very repetitive, where a lapse in concentration may lead to mistakes or accidents, and in other jobs which humans may find degrading.

5.1.3 Game Playing:-

 Game playing prospered greatly with the Digital Revolution, and helped introduce people, especially children, to a life of dealing with various types of artificial intelligence.
 You can also buy machines that can play master-level chess for a few hundred dollars. There is some AI in them, but they play well against people mainly through brute-force computation, looking at hundreds of thousands of positions.
 The internet is the best example, where one can buy a machine and play various games.

Fig.5.2 Game implementation of AI

5.1.4 Speech Recognition:-


In the 1990s, computer speech recognition reached a practical level for limited purposes.
Thus United Airlines has replaced its keyboard tree for flight information by a system using
speech recognition of flight numbers and city names. It is quite convenient. On the other
hand, while it is possible to instruct some computers using speech, most users have gone back
to the keyboard and the mouse as still more convenient.

5.1.5 Understanding Natural Language:-


Just getting a sequence of words into a computer is not enough. Parsing sentences is not
enough either. The computer has to be provided with an understanding of the domain the text
is about, and this is presently possible only for very limited domains.

5.1.6 Computer Vision:-

The world is composed of three-dimensional objects, but the inputs to the human eye and to computers' TV cameras are two-dimensional. Some useful programs can work solely in two dimensions, but full computer vision requires partial three-dimensional information that is not just a set of two-dimensional views. At present there are only limited ways of representing three-dimensional information directly, and they are not as good as what humans evidently use.

5.1.7 Expert Systems:-


A "knowledge engineer" interviews experts in a certain domain and tries to embody their
knowledge in a computer program for carrying out some task. How well this works depends
on whether the intellectual mechanisms required for the task are within the present state of
AI. One of the first expert systems was MYCIN in 1974, which diagnosed bacterial infections
of the blood and suggested treatments. It did better than medical students or practicing
doctors, provided its limitations were observed.

5.1.8 Heuristic Classification:-


One of the most feasible kinds of expert system given the present knowledge of AI is to put
some information in one of a fixed set of categories using several sources of information. An
example is advising whether to accept a proposed credit card purchase. Information is
available about the owner of the credit card, his record of payment and also about the item he
is buying and about the establishment from which he is buying it (e.g., about whether there
have been previous credit card frauds at this establishment).

Fig. 5.3 Heuristic Classification
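A minimal sketch of such a classifier: combine several information sources into a score, then map the score to a category. The features, weights and thresholds below are invented for illustration:

# Heuristic classification sketch: accept/refer/decline a credit card
# purchase by scoring several information sources (weights invented).
WEIGHTS = {
    "owner_good_payment_record": +2.0,
    "item_is_high_risk":         -1.5,
    "merchant_prior_fraud":      -2.5,
    "amount_within_usual_range": +1.0,
}

def classify(evidence):
    score = sum(WEIGHTS[k] for k, present in evidence.items() if present)
    if score >= 1.0:
        return "accept"
    if score >= -1.0:
        return "refer_to_human"
    return "decline"

print(classify({"owner_good_payment_record": True,
                "item_is_high_risk": True,
                "merchant_prior_fraud": False,
                "amount_within_usual_range": True}))   # -> accept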

5.2 FUTURE SCOPE OF A.I

 In the next 10 years, technologies in narrow fields such as speech recognition will continue to improve and will reach human levels.
 In 10 years, AI will be able to communicate with humans in unstructured English using text or voice, navigate (not perfectly) in an unprepared environment, and will have some rudimentary common sense (and domain-specific intelligence).
 We will recreate some parts of the human (animal) brain in silicon. The feasibility of this is demonstrated by tentative hippocampus experiments in rats. There are two major projects aiming for human brain simulation, CCortex and IBM Blue Brain.
 There will be an increasing number of practical applications based on digitally recreated aspects of human intelligence, such as cognition, perception, rehearsal learning, or learning by repetitive practice.
 The development of meaningful artificial intelligence will require that machines acquire some variant of human consciousness.
 Systems that do not possess self-awareness and sentience will at best always be very brittle.
 Without these uniquely human characteristics, truly useful and powerful assistants will remain a goal to achieve. To be sure, advances in hardware, storage and parallel processing architectures will enable ever greater leaps in functionality.
 Systems that are able to demonstrate conclusively that they possess self-awareness, language skills, and surface, shallow and deep knowledge about the world around them and their role within it will be needed going forward.

CONCLUSION

We conclude that if a machine can successfully pretend to be human to a knowledgeable observer, then it should certainly be considered intelligent. AI systems are now in routine use in various fields such as economics, medicine, engineering and the military, as well as being built into many common home computer software applications, traditional strategy games, etc.

AI is an exciting and rewarding discipline. AI is the branch of computer science that is concerned with the automation of intelligent behavior. The revised definition of AI is: AI is the study of the mechanisms underlying intelligent behavior through the construction and evaluation of artifacts that attempt to enact those mechanisms. So it is concluded that AI works as an artificial human brain which has unbelievable artificial thinking power.
