
Artificial Intelligence

1 The Concept

Syllabus
What is AI? : The AI Problems, The Underlying Assumption, What is an AI Technique, The Level of the Model, Criteria for Success, Some General References, One Final Word.

Contents
1.1 The Concept of Artificial Intelligence (AI) ... Winter-12, 14, 16, 17, 19, Summer-18, 20 ... Marks 7
1.2 AI Problem ... Winter-12 ... Marks 7
1.3 The Underlying Assumption
1.4 What is an AI Technique ?
1.5 The Level of the Model ... Winter-19 ... Marks 3
1.6 Criteria for Success
1.7 Some General References
1.8 AI Terms
1.9 The Environments ... Winter-18, 19, Summer-19 ... Marks 3
1.10 Different Types of Agents
1.11 Designing an Agent System
1.12 One Final Word
1.13 University Questions with Answers
1.1 The Concept of Artificial Intelligence (AI)    GTU : Winter-12, 14, 16, 17, 19, Summer-18, 20

1.1.1 Introduction

Many human mental activities, such as commonsense reasoning, understanding language, driving an automobile and doing mathematics, are said to demand "intelligence". Several computer systems have been built that can perform such tasks : systems that can diagnose diseases, understand natural language text, interpret speech and solve quadratic equations have been developed. We can say that all such systems possess some degree of artificial intelligence.

The field of AI attempts not just to understand the most intelligent entity, the human being, but also to build intelligent entities whose features can match those of human beings.

1.1.2 Various Definitions of AI

1. AI may be defined as the branch of computer science that is concerned with the automation of intelligent behaviour. (Luger 1993)
2. Systems that think like humans :
3. "The exciting new effort to make computers think ... machines with minds, in the full and literal sense." (Haugeland 1985)
4. "The automation of activities that we associate with human thinking, activities such as decision making, problem solving, learning ..." (Bellman 1978)
5. Systems that act like humans :
6. "The art of creating machines that perform functions that require intelligence when performed by people." (Kurzweil 1990)
7. "The study of how to make computers do things at which, at the moment, people are better." (Rich and Knight 1991)
8. Systems that think rationally :
9. "The study of mental faculties through the use of computational models." (Charniak and McDermott 1985)
10. "The study of the computations that make it possible to perceive, reason and act." (Winston 1992)
11. Systems that act rationally :
12. "Computational intelligence is the study of the design of intelligent agents." (Poole et al 1998)
13. "AI is concerned with intelligent behaviour in artifacts." (Nilsson 1998)

These definitions vary along two main dimensions. The first dimension concerns the thought process and reasoning, and the second dimension concerns the behaviour of the machine. The first seven definitions are based on comparisons to human performance, whereas the remaining definitions measure success against an ideal concept of intelligence, "how to think", which we call rationality. A system is rational if it does the "right thing" given what it knows.

Historically, four approaches have been followed in AI : Acting Humanly, Thinking Humanly, Thinking Rationally and Acting Rationally. Let us consider the four approaches in detail.

1) Acting Humanly

Turing Test : For testing intelligence, Alan Turing (1950) proposed a test called the Turing test. He suggested a test based on common features that can match with human beings. The computer would need to possess the following capabilities :
a) Natural language processing to enable it to communicate successfully in English.
b) Knowledge representation to store what it knows and what it hears.
c) Automated reasoning to make use of the stored information to answer the questions being asked and to draw conclusions.
d) Machine learning to adapt to new circumstances and to detect and make new predictions by finding patterns.

Turing also considered physical interaction between the interrogator and the computer. The Turing test avoids this, but the Total Turing Test includes a video signal so that the interrogator can test the subject's perceptual abilities, as well as the opportunity for the interrogator to pass physical objects "through the hatch". To pass the Total Turing Test, the computer will in addition need the following capabilities :
e) Computer vision to perceive objects.
f) Robotics to manipulate objects.

2) Thinking Humanly

If we are to say that a given program thinks like a human, we should know how humans think. For that, the theory of the human mind needs to be explored.

TECHNICAL PUBLICATIONS - An up thrust for knowledge
There are two ways to do this : through introspection, trying to catch our own thoughts as they go by, and through psychological experiments. If the program's input/output behaviour and timing match the corresponding human behaviours, that is evidence that some of the program's mechanisms could also be operating in humans. The interdisciplinary field of cognitive science brings together computer models from AI and experimental techniques from psychology to construct precise and testable theories of the workings of the human mind.

3) Thinking Rationally

The "laws of thought" approach was proposed by Aristotle. This idea provided patterns for argument structures that always yielded correct conclusions when given correct premises. For example : "Ram is a man", "All men are mortal", therefore "Ram is mortal". These laws of thought were supposed to govern the operation of the mind; their study initiated the field called logic, which can be implemented to create intelligent systems.

4) Acting Rationally

An agent (Latin agere - to do) is something that acts. But computer agents are expected to have more attributes that distinguish them from mere "programs", because they need to operate under autonomous control, perceiving their environment, persisting over a prolonged time period, adapting to change and being capable of taking on other goals. A rational agent is expected to act so as to achieve the best outcome or, when there is uncertainty, the best expected outcome. The laws of thought emphasise correct inference, which should be incorporated in a rational agent.

1.1.3 The Foundation of AI

Now we discuss the various disciplines that contributed ideas, viewpoints and techniques to AI.

Philosophy provides a base to AI by providing theories of the relationship between the physical brain and the mental mind, and rules for drawing valid conclusions. It also provides information about the origins of knowledge and how knowledge leads to action.

Mathematics gives a strong base to AI for developing concrete and formal rules for drawing valid conclusions, various methods for computation, and techniques to deal with uncertain information.

Economics supports AI in two ways : how to make decisions so as to maximize payoff, and how to make decisions under uncertain circumstances.

Neuroscience gives information related to brain processing, which helps in developing data-processing theories.

Psychology provides strong concepts of how humans and animals think and act, which helps AI in developing models of the processes of thinking and action.

1.1.4 The Strong and Weak AI

After taking a brief look at the various disciplines that contribute towards AI, let us now look at the concepts of strong and weak AI, which also give a basic foundation for developing automated systems.

1.1.4.1 Strong AI

This concept was put forward by John Searle in 1980 in his article "Minds, Brains and Programs". Strong AI provides theories for developing some form of computer-based AI that can truly reason and solve problems. A strong form of AI is said to be sentient or self-aware.

Strong AI can be categorized as,
Human-like AI - In which the computer program thinks and reasons much like a human mind.
Non-human-like AI - In which the computer program develops a totally non-human sentience, and a non-human way of thinking and reasoning.

1.1.4.2 Weak AI

Weak artificial intelligence research deals with the creation of some form of computer-based AI that cannot truly reason and solve problems. Such systems can reason and solve problems only in a limited domain; such a machine would, in some ways, act as if it were intelligent, but it would not possess true intelligence.

There are several fields of weak AI, one of which is natural language. Much of the work in this field has been done with computer simulations of intelligence based on predefined sets of rules. Very little progress has been made in strong AI. Depending on how one defines one's goals, a moderate amount of progress has been made in weak AI.
1.1.5 What AI Can Do Today

1.1.5.1 Autonomous Planning and Scheduling

NASA's Remote Agent program became the first on-board autonomous planning program to control the scheduling of operations for a spacecraft. Such remote agents can use constraints to plan spacecraft operations, detecting, diagnosing and recovering from problems as they occur.

1.1.5.2 Game Playing

A computer chess program named Deep Blue, developed by IBM, defeated world chess champion Garry Kasparov in an exhibition match in 1997. Such game-playing programs can be developed using AI techniques.

1.1.5.3 Autonomous Control

The ALVINN computer vision system was trained to steer a car to keep it following a lane. It was made to travel 2850 miles, during which the system was in control 98 % of the time and a human took over only 2 % of the time. AI can give more theories to develop such systems.

1.1.5.4 Diagnosis

Heckerman (1991) describes a case where a leading expert on lymph-node pathology scoffs at a program's diagnosis of an especially difficult case. The machine points out the major factors influencing its decision and explains the interaction of several of the symptoms in this case. If such diagnostic programs are developed using AI, then highly accurate diagnoses can be made.

1.1.5.5 Logistics Planning

In 1991, during the Persian Gulf crisis, U.S. forces deployed a dynamic analysis and replanning tool named DART for automated logistics planning and scheduling for transportation. AI can provide techniques for making fast and accurate plans.

1.1.5.6 Robotics

For doing complex and critical tasks, systems can be developed using AI techniques. For example, surgeons can use robot assistants in microsurgery, which can generate 3D vision of a patient's internal anatomy.

1.1.5.7 Language Understanding and Problem Solving

PROVERB is a computer program which is expert in solving crossword puzzles. It makes use of constraints on possible word fillers, a large database of past puzzles and a variety of information sources including dictionaries and online databases such as a list of movies and the actors that appear in them.

AI does not generate magic or science fiction; rather, it develops science, engineering and mathematics systems. Recent progress in understanding the theoretical basis for intelligence has gone hand in hand with improvements in the capabilities of real systems. The subfields of AI have become more integrated, and AI has found common ground with other disciplines.

1.1.6 Human Vs Machine

1.1.6.1 Will a Machine Behave Exactly as a Human ?

Here are the considerable differences between human and machine.
1) Machines do not have life, as they are mechanical. On the other hand, humans are made of flesh and blood; life is not mechanical for humans.
2) Humans have feelings and emotions, and they can express these emotions. Machines have no feelings and emotions; they just work as per the details fed into their mechanical brain.
3) Humans can do anything original; machines cannot.
4) Humans have the capability to understand situations and behave accordingly. On the contrary, machines do not have this capability.
5) While humans behave as per their consciousness, machines just perform as they are taught.
6) Humans perform activities as per their own intelligence. On the contrary, machines only have an artificial intelligence.

1.1.6.2 Comparisons between Human and Machines

1) Brains are analogue; machines are digital.
2) The brain uses content-addressable memory; in a machine, information in memory is accessed by polling its precise memory address. This is known as byte-addressable memory.
3) The brain is a massively parallel machine; machines are modular and serial.
4) Processing speed is not fixed in the brain; a machine has a fixed speed specification.
5) The brain's short-term memory is not like RAM.
6) No hardware/software distinction can be made with respect to the brain or mind.
7) Synapses are far more complex than electrical logic gates.
8) Unlike a machine, processing and memory management are performed by the same components in the brain.
9) The brain is a self-organizing system.
10) Brains have bodies, and the brain is much, much bigger than any [current] machine.

1.1.7 List of Expert Systems Influential in the AI Field

1. MACSYMA - Advised the user on how to solve complex maths problems.
2. DENDRAL - Advised the user on how to interpret the output from a mass spectrograph.
3. CENTAUR, INTERNIST, PUFF, CASNET - All are medical expert systems for various purposes.
4. DELTA - Locomotive engineering.
5. Drilling Advisor - Oilfield prospecting tasks.
6. ExperTax - Tax minimisation advice.
7. XSEL - Computer sales.
8. PROSPECTOR - Interpreted geological data as potential evidence for mineral deposits. (Duda, Hart, 1976)
9. NAVEX - Monitored radar data and estimated the velocity and position of the space shuttle. (Marsh, 1984)
10. R1/XCON - Configured VAX computer systems on the basis of customers' needs. (McDermott, 1980)
11. COOKER ADVISOR - Provides repair advice with respect to canned-soup sterilizing machines. (Texas Instruments, 1986)
12. VENTILATOR MANAGEMENT ASSISTANT - Scrutinised the data from hospital breathing-support machines and provided accounts of the patient's condition. (Fagan, 1978)
13. MYCIN - Diagnosed blood infections of the sort that might be contracted in hospital.
14. CROP ADVISOR - Developed by ICI to advise cereal-grain farmers on appropriate fertilizers and pesticides for their farms.
15. OPTIMUM-AV - A planner used by the European Space Agency to help in the assembly, integration and verification of spacecraft.

1.2 AI Problem    GTU : Winter-12

Much of the early work in AI focused on formal tasks, such as game playing and theorem proving. For example, the Logic Theorist was an early attempt to prove mathematical theorems. Game playing and theorem proving share the property that people who do them well are considered to be displaying intelligence. Despite this, it appeared that computers could perform well at those tasks simply by being fast at exploring a large number of solution paths and then selecting the best one. But no computer is fast enough to overcome the combinatorial explosion generated by most problems.

Another focus of AI is the sort of problem solving we do every day, for instance when we decide how to get to work in the morning, often called commonsense reasoning. In investigating this sort of reasoning, Newell, Shaw and Simon built the General Problem Solver (GPS), which they applied to several commonsense tasks as well as to performing symbolic manipulations of logical expressions. However, no attempt was made to create a program with a large amount of knowledge about a particular problem domain. Only quite simple tasks were selected.

As AI research progressed and techniques for handling larger amounts of world knowledge were developed, progress was made in dealing with problem solving in specialized domains such as medical diagnosis and chemical analysis.

Perception (vision and speech) is another area of AI problems. Natural language understanding and problem solving in specialized domains are other areas related to AI problems. The problem of understanding spoken language is a perceptual problem and is hard to solve because it is more analog-related than digital-related. Many people can perform one or more specialized tasks for which carefully acquired expertise is necessary. Examples of such tasks include engineering design, scientific discovery, medical diagnosis and financial planning. Programs that can solve problems in these domains also fall under the aegis of artificial intelligence.

The tasks that are the targets of work in AI can be categorized as follows :
1. Mundane tasks - Perception (vision and speech), Natural language (understanding, generation, translation), Commonsense reasoning, Robot control.
2. Formal tasks - Games (chess, etc.), Mathematics (geometry, logic, integral calculus, etc.).
3. Expert tasks - Engineering (design, fault finding, manufacturing planning), Scientific analysis, Medical diagnosis, Financial analysis.
A person who knows how to perform tasks from several of the categories shown in the above list learns the necessary skills in a standard order. First the perceptual, linguistic and commonsense skills are learned; later, expert skills such as engineering, medicine or finance are acquired. It might seem that the earlier skills are easier and thus more amenable to computerized duplication than the later, more specialized ones. For this reason, much of the initial work in AI was concentrated in those early areas. However, the problem areas where AI is now flourishing most as a practical discipline are primarily the domains that require only specialized expertise without the assistance of commonsense knowledge. Expert systems (AI programs) aim at solving part, or perhaps all, of a significant practical problem that previously required high human expertise.

When one is building an expert system, the following questions need to be considered before one can progress further :
What are the underlying assumptions about intelligence ?
What kinds of techniques will be useful for solving AI problems ?
At what level, if at all, can human intelligence be modelled ?
How will it be realised when an intelligent program has been built ?

1.3 The Underlying Assumption

A physical symbol system consists of a set of entities called symbols, which are patterns that can occur as components of another entity called an expression. At any instant the system will contain a collection of these symbol structures. In addition, the system also contains a collection of processes that operate on expressions to produce other expressions : processes of creation, modification, reproduction and destruction. A physical symbol system is a machine that produces through time an evolving collection of symbol structures.

Following are examples of physical symbol systems :

Formal logic : The symbols are words like "and", "or", "not", "for all x" and so on. The expressions are statements in formal logic which can be true or false. The processes are the rules of logical deduction.

Algebra : The symbols are "+", "×", "x", "y", "1", "2", "3", etc. The expressions are equations. The processes are the rules of algebra, which allow you to manipulate a mathematical expression and retain its truth.

A digital computer : The symbols are the zeros and ones of computer memory; the processes are the operations of the CPU that change memory.

Chess : The symbols are the pieces, the processes are the legal chess moves, and the expressions are the positions of all the pieces on the board.

The physical symbol system hypothesis claims that both of these are also true of human thought : the symbols are encoded in our brains, the expressions are thoughts, and the processes are the mental operations of thinking. In a running artificial intelligence program, the symbols are data, the expressions are more data, and the processes are the programs that manipulate the data.

The importance of the physical symbol system hypothesis is twofold. It is a significant theory of the nature of human intelligence, and it forms the basis of the belief that it is possible to build programs that can perform intelligent tasks which are currently performed by people.

1.4 What is an AI Technique ?

Intelligence requires knowledge, but knowledge possesses less desirable properties such as : 1. It is voluminous. 2. It is difficult to characterize accurately. 3. It is constantly changing. 4. It differs from data by being organised in a way that corresponds to its application.

An AI technique is a method that exploits knowledge that is represented so that :
The knowledge captures generalizations : situations that share properties are grouped together rather than being given separate representation.
It can be understood by the people who must provide the knowledge, although for many programs the bulk of the data may come automatically, such as from instrument readings.
It can be easily modified to correct errors and reflect changes in real conditions.
It can be widely used even if it is incomplete or inaccurate.
It can be used to help overcome its own sheer bulk by helping to narrow the range of possibilities that must usually be considered.

In many AI domains, people must supply the knowledge to programs in a form that people understand and that is acceptable to the program.

Following are three important AI techniques :

Search - Provides a way of solving problems for which no more direct approach is available.

Use of knowledge - Provides a way of solving complex problems by exploiting the structures of the objects that are involved.

Abstraction - Provides a way of separating important features and variations from the many unimportant ones that would otherwise overwhelm any process.
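As an illustration of the search technique, the following sketch shows a breadth-first search over a tiny, hypothetical state space (the `roads` map and city names are made-up examples, not from the text). The search systematically explores paths outward from the start state, which is exactly the kind of fallback used when no more direct approach is available.

```python
from collections import deque

def breadth_first_search(graph, start, goal):
    """Explore states outward from `start` until `goal` is found.

    `graph` maps each state to the list of states reachable in one step.
    Returns the list of states on a shortest path, or None if no path exists.
    """
    frontier = deque([[start]])          # paths waiting to be extended
    visited = {start}                    # states already reached
    while frontier:
        path = frontier.popleft()        # take the oldest (shallowest) path
        state = path[-1]
        if state == goal:
            return path
        for neighbour in graph.get(state, []):
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(path + [neighbour])
    return None

# A tiny hypothetical state space: cities joined by one-way roads.
roads = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": ["F"],
    "E": ["F"],
}

print(breadth_first_search(roads, "A", "F"))  # ['A', 'B', 'D', 'F']
```

Because the frontier is processed first-in first-out, the first path that reaches the goal is guaranteed to be a shortest one in terms of the number of steps.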
1.5 The Level of the Model    GTU : Winter-19

Before starting to do something, it is a good idea to decide exactly what one is trying to do. One should ask the following questions for self-analysis :
What is the goal in trying to produce programs that do the tasks the same way people do ?
Are we trying to produce programs that do the tasks the same way people do, or programs that simply do the tasks in whatever way appears easiest ?

Efforts to build programs that perform tasks the way people do can be divided into two classes. The first class contains programs that attempt to solve problems that do not really fit our definition of AI tasks; they are problems that a computer could easily solve, although that easy solution would exploit mechanisms not available to people. The second class contains programs that attempt to model human performance on tasks that fall more clearly within our definition of AI tasks; they do things that are not trivial for the computer.

Reasons for modeling human performance for these kinds of tasks :
To test psychological theories of human performance, e.g. the PARRY program, which exploited a model of human paranoid behaviour to simulate the conversational behaviour of a paranoid person.
To enable computers to understand human reasoning, for example, for a computer to be able to read a newspaper story and then answer a question such as "Why did Ravana lose the game ?".
To enable people to understand computer reasoning; in many cases people are reluctant to rely on the output of a computer unless they can understand how the machine arrived at its result.
To exploit what knowledge we can collect from people, and to ask for assistance from the best-performing people in dealing with their tasks.

1.6 Criteria for Success

One of the most important questions to answer in any scientific or engineering research project is "How will we know if we have succeeded ?". So in AI we have to ask ourselves, how will we know if we have constructed a machine that is intelligent ? The question is as hard as the largely unanswerable question "What is intelligence ?".

To measure progress we use a proposed method known as the Turing Test. Alan Turing suggested this method to determine whether a machine can think. To conduct this test, we need two people and the machine to be evaluated. One person acts as the interrogator, who is in a separate room from the computer and the other person. The interrogator can ask questions of either the person or the computer by typing questions and receiving typed responses. However, the interrogator knows them only as A and B and aims to determine which is the person and which is the machine. The goal of the machine is to fool the interrogator into believing that it is the person. If the machine succeeds at this, then we will conclude that the machine can think.

1.7 Some General References

The early work that is now generally recognized as AI was done in the period of 1943 to 1955. The first AI thoughts were formally put forward by Warren McCulloch and Walter Pitts (1943). Their idea of AI was based on three theories : firstly basic physiology (the function of neurons in the brain), secondly the formal analysis of propositional logic, and thirdly Turing's theory of computation.

Later, Donald Hebb in 1949 demonstrated a simple updating rule for modifying the connection strengths between neurons. His rule, called Hebbian learning, is considered to be an influential model in AI.

There was much other great early work that can be recognized as AI, but it was Alan Turing who first articulated a complete vision of AI in his 1950 article named "Computing Machinery and Intelligence".

The real birth year of AI is 1956, when John McCarthy held a workshop on automata theory, neural nets and the study of intelligence, where other researchers also presented their papers, and they came out with a new field in computer science called AI.

From 1952 to 1969 a large amount of work was done with great success.

Newell and Simon presented the General Problem Solver (GPS). Within the limited class of puzzles it could handle, it turned out that the order in which the program considered subgoals and possible actions was similar to that in which humans approached the same problems. GPS was probably the first program which had the "thinking humanly" approach.

Herbert Gelernter (1959) constructed the Geometry Theorem Prover, which was capable of proving quite tricky mathematical theorems.

At MIT in 1958, John McCarthy made a major contribution to the AI field : the development of the high-level language LISP, which became the dominant AI programming language.

In 1958, McCarthy published a paper entitled "Programs with Common Sense", in which he described the Advice Taker, a hypothetical program that can be seen as the first complete AI system. Like the Logic Theorist and the Geometry Theorem Prover, McCarthy's program was designed to use knowledge to search for solutions to problems. The program was also designed so that it could accept new axioms in the normal course of operation, thereby allowing it to achieve competence in new areas without being reprogrammed. The Advice Taker thus embodied the central principles of knowledge representation and reasoning.
Artificial intelligen
Artificial Inteligence
1- 14
The Concept Artificial Intelligence-The Concept
Advice
Taker thus embodied the 1-15
The

centntra
come
without being
reprogrammed.
Artificial Inteligence
and reasoning AI has finally
representation of methodolo8y
Prncples of knowledge terms Hidden
of McCulloch In based on

on the
neural
showed how.a
networks
and Pitts d I n 1990s Al emerged
as a
science.

In
approaches
recent years
field. This
model is
Early work building
method.
and Cowan (1963) la scientific
the the AI
flourished. The work of
Winogard individual Conge ge nur firmly
under
(HMMS) have
come to
dominate
and second is,
an model theory
of could
elements
collectively
represent
Hebb's lean
ept, with Markov Models
one
mathematical

is igorous corpus
real
a large
and parallelism. two aspects of training
on
robustness
Wi 8 based on

dro, 1962),ethods
in process
corresponding
increase
(Widrow Hoff, and 1960; models are generated by a
these
by Bernie Widrow
Frank Rosenblatt (1962) with
hihis 62) to a n e w
ep who
enhanced
were speech data. Systems led
called his
Rosenblatt
networks

proved the perceptron


and
adalines, by
convergence
theorem,

of a
showing thas
perception to matrh
perceptrons Judea Pearl's
(1988)
Probabilistic

probability theory
Reasoning
in
in Intelligent
AL. Later Bayesian
network w a s
invented

the connection
strengths acceptance of with reasoning support.
algorithm ould adjust uncertain knowledge along
can represent
which the idea of
in 1986 promoted
existed.
a match
data, provided such to Eric Hovitz and
David Hackerman of decision
appeared conduct the laws
ELIZA program Judea Pearl, according to
1965,
Weizenbaum's ELIZA program appeared to carry on a serious conversation on any topic, basically by borrowing and manipulating the sentences given to it by a human.

None of the programs developed so far had real knowledge of their subject matter; they solved problems by trying out combinations of simple steps, an approach that came to be called "weak" methods. Researchers realized that complicated reasoning tasks demand large amounts of domain knowledge, and hence more powerful knowledge representation.

The DENDRAL program, developed by Buchanan in 1969, was based on these principles. It was a unique program that effectively used domain knowledge in problem solving. In the mid-1970s MYCIN, a program for a specific domain, was developed to diagnose blood infections. It used expert knowledge to diagnose illnesses and prescribe treatments. This program is also known as the first program which addressed the problem of reasoning with uncertain or incomplete information. Expert systems built on these ideas followed.

A similar but slower revolution occurred in robotics, computer vision and knowledge representation. A larger architecture called SOAR, worked out by Allan Newell, John Laird and Paul Rosenbloom, grew out of this work; in 1987 a complete agent based on it was developed. Many such agents now work on the "Internet"; AI technologies underlie many Internet tools, such as search engines, recommender systems and websites, and AI systems have become so common that the "bot" suffix has entered everyday language.

While developing complete agents it was realized that the previously isolated subfields of AI need to reorganize when their results are to be tied together. Today, in particular, it is widely appreciated that sensory systems (vision, sonar, speech recognition, etc.) cannot deliver perfectly reliable information about the environment; hence reasoning and planning systems must be able to handle uncertainty. AI has also been drawn into much closer contact with other fields, such as control theory and economics, that also deal with agents.

Within a very short time a number of knowledge representation languages were developed, such as predicate calculus, semantic networks, frames and objects. Some of them are based on mathematical logic, such as PROLOG. Although PROLOG goes back to 1972, it did not attract widespread attention until a more efficient version was introduced in 1979.

As real, useful, strong work on AI was put forward by researchers, AI emerged as a big industry. In 1981 the Japanese announced the fifth generation project, a 10-year plan to build intelligent computers running PROLOG. The US also formed the Microelectronics and Computer Technology Corporation (MCC) for research in AI. Overall, the AI industry boomed from a few million dollars in 1980 to billions of dollars in 1988. But soon after that the AI industry had a huge setback, as companies suffered when they failed to deliver on extravagant promises. In the late 1970s more research was done by psychologists on neural networks, which continued into the 1980s.

1.8 AI Terms

1.8.1 Agents and its Environment

An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.

For example, consider a human as an agent. A human has eyes, ears and other organs which are sensors; hands, legs, mouth and other body parts work as actuators. Let us consider another example of an agent, a robot. A robotic agent might have cameras and infrared rangefinders as sensors, and various motors as actuators.
TECHNICAL PUBLICATIONS - An up thrust for knowledge
More examples of agents:

1. Software agent - Sensors: keystrokes, file contents, network packets. Actuators: screen display, files, network packets.
2. Internet shopping agent - Sensors: HTML, DHTML, script pages (text and graphics). Actuators: follow URL, form filling, display to user.

1.8.2 The Agent Terminology

1) Percept

The term percept refers to the agent's perceptual inputs at any given instant.

Examples:
1) A human agent perceives a bird flying in the sky through the eyes and takes its percepts.
2) A robotic agent perceives the temperature of a boiler through a camera (photograph) and takes control action.

2) Percept Sequence

An agent's percept sequence is the complete history of everything the agent has perceived. The agent has a choice of action at any given instant, and that choice can depend on the entire percept sequence recorded; the changes in perception form the historical case.

For example, a robotic agent monitoring the temperature of a boiler will be sensing it and keep on maintaining the percept sequence. This percept sequence will help the agent to know how the temperature fluctuates, and action will be taken depending on the percept sequence for controlling the temperature.

3) Agent Function

It is defined as a mathematical function which maps each and every possible percept sequence to a possible action. This function has a percept sequence as input and it gives an action as output. An agent function can be represented in tabular form.

Example: An ATM machine is an agent; it displays a menu for withdrawing money when an ATM card is inserted. Only when provided with the percept sequence of a transaction type and account number does the user get cash.

4) Agent Program

When we want to develop an agent we need to tabulate the agent program functions. This can practically lead to infinitely many functions describing any given agent; hence we need to put a bound on the length of the percept sequences that we need to consider. This table of functions characterizes the agent externally; internally, the agent function will be implemented by an agent program.

Note:
An agent function is an abstract mathematical description. An agent program is a concrete implementation, running on the agent architecture.

1.8.3 Architecture of Agent

The agent program runs on some sort of computing device, which is called the architecture. The program we choose has to be one that the architecture will accept and run. The architecture makes the percepts from the sensors available to the program, runs the program, and feeds the program's action choices to the effectors as they are generated. The relationship among agents, architectures and programs can be summed up as follows:

Agent = Architecture + Program

[Fig. 1.8.1 Agent and its environment: the agent receives percepts from the environment through sensors and performs actions on it through actuators.]

1.8.4 Schematic of AI's Agent Performing Action

Fig. 1.8.2 illustrates the agent's action process as specified by the architecture. This can also be termed the agent's structure.
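The terminology of section 1.8.2 can be tied together in a short, illustrative sketch: an agent function stored as a table, an agent program that consults it, and an "architecture" loop that feeds percepts to the program. All names, percepts and actions below are invented for the ATM-style example, and the table is only a toy.

```python
# Toy illustration of section 1.8: agent function (table), agent program
# (concrete implementation), and architecture (runs the program).
# All percepts, actions and entries here are invented.

# Agent function: maps a whole percept sequence (a tuple) to an action.
AGENT_FUNCTION = {
    ("card_inserted",): "show_menu",
    ("card_inserted", "pin_correct"): "ask_amount",
    ("card_inserted", "pin_correct", "amount_entered"): "dispense_cash",
}

def agent_program(history, percept):
    """Agent program: a concrete implementation of the agent function."""
    history.append(percept)
    return AGENT_FUNCTION.get(tuple(history), "wait")

def architecture(percepts):
    """Architecture: feeds each sensor percept to the program in turn
    and collects the program's action choices for the actuators."""
    history = []
    return [agent_program(history, p) for p in percepts]

print(architecture(["card_inserted", "pin_correct", "amount_entered"]))
# ['show_menu', 'ask_amount', 'dispense_cash']
```

Note how the table is keyed on the entire percept sequence, not just the latest percept, which is exactly why such tables must be bounded in length to stay finite.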

[Fig. 1.8.2 Agent's action process: the architecture hosts the agent program, which implements the agent function; perception maps to, and leads to, action.]

1.8.5 Role of An Agent Program

An agent program internally implements the agent function. An agent program takes the current percept from the sensors as input and returns an action to the effectors (actuators).

[Fig. 1.8.3 Role of an agent program in agent architecture: input (current percepts from sensors) -> agent program (agent function) -> output (action made through actuators).]

1.8.6 Simple Example for Tabulation of an Agent

Consider a shopping agent on the internet, called a bot.

Tabulation of percepts and action mapping:

Sr. No. | Sequence of Percepts | Actions
1. | [Type URL of greeting mygreeting.com] | Display website.
2. | [Navigation and observation of greetings to be purchased] | Clicks on the link.
3. | [To get details of greeting (which is purchased), in terms of a form] | Form filling.
4. | [To perceive completion of process] | Receiving receipt.

1.8.7 The Weak and Strong Agent

An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors/actuators.

1.8.7.1 Weak Agent

A weak notion says that an agent is a hardware or software based computer system that has the following properties:

1] Autonomy: Agents operate without direct intervention of humans and have control over their actions and internal state.
2] Social ability: Agents interact with other agents (and possibly humans) via an agent communication language.
3] Reactivity: Agents perceive their environment and respond in a timely and rational fashion to changes that occur in it.
4] Pro-activeness: Agents do not simply act in response to their environment; they are capable of taking the initiative, generating their own goals and acting to achieve them.

1.8.7.2 Strong Agent

A stronger notion says that an agent has mental properties, such as knowledge, belief, intention and obligation. In addition, an agent has other properties such as:

1. Mobility: Agents can move around from one machine to another and across different system architectures and platforms.
2. Veracity: Agents do not knowingly communicate false information.
3. Rationality: Agents will try to achieve their goals and will not act in such a way that would prevent their goals from being achieved.

Strong AI is associated with human traits such as consciousness, sentience, sapience and self-awareness.

1. Consciousness: To have subjective experience and thought.
2. Self-awareness: To be aware of oneself as a separate individual, especially to be aware of one's own thoughts.
3. Sentience: The ability to feel perceptions and emotions subjectively.
4. Sapience: The capacity for wisdom.

1.8.8 Rational Behaviour and Omniscience
1.8.8.1 Rational Agent

If every entry in the agent function is filled correctly, then the agent will always do the right thing; such an agent is called a rational agent. Doing the right thing makes the agent most successful, so now we need certain methods to measure the success (that is, the performance) of a rational agent.

When an agent is working in the environment, it generates a sequence of actions according to the percepts it receives. This sequence of actions leads to various states of the environment. If this sequence of environment-state changes is as desired, then we can say that the agent has performed well. So if the tasks change, the measuring conditions will change automatically, and hence no fixed measure is suitable for all agents.

As a general rule, it is better to design performance measures according to what one wants in the environment, rather than according to how one thinks the agent should behave.

The rationality depends upon 4 things:

1) The performance measure that defines the criterion of success.
2) The agent's prior knowledge about the environment.
3) The actions that the agent can perform.
4) The agent's percept sequence till the current date.

Based on the above 4 statements, a rational agent can be defined as follows: For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has. The following figure depicts the performance measure metric.

[Fig. 1.8.4 Optimal performance triangle: an agent coupled with a complex environment perceives a sequence and generates an optimal sequence of actions, which results in the desired sequence of states - the optimal/right action/behaviour.]

1.8.8.2 The Good and the Bad Agent

The concept of rational behaviour leads to two types of agents, the good agent and the bad agent. Most of the time the good and bad behaviour of the agent depends on the environment. If the environment is completely known, then we get the agent's good behaviour, as depicted in Fig. 1.8.5.

[Fig. 1.8.5 Good agent: environment known + rational behaviour -> good agent.]

If the environment is unknown, then the agent can act badly, as depicted in Fig. 1.8.6.

[Fig. 1.8.6 Bad agent: environment unknown + irrational behaviour -> bad agent.]

1.8.8.3 Omniscience, Learning and Autonomy

An omniscient agent knows the actual outcome of its actions and can act accordingly, but in reality omniscience is impossible.

Rationality is not the same as perfection. Rationality maximizes expected performance, whereas perfection maximizes actual performance.

For increasing performance, an agent must do some actions in order to modify future percepts. This is called information gathering, which is an important part of rationality. Also, an agent should explore (understand) the environment to increase performance, i.e., for doing more correct actions.

Learning is another important activity an agent should do, so as to gather information. An agent may know the environment completely (which is practically not possible) in certain cases, but if it is not known, the agent needs to learn on its own.

To the extent that an agent relies on the prior knowledge of its designer rather than on its own percepts, we say that the agent lacks autonomy. A rational agent should be autonomous; it should learn what it can do to compensate for partial or incorrect prior knowledge.

[Fig. 1.8.7 The relationship between rationality and omniscience: rationality depends on the percept sequence and maximizes expected performance; perfection maximizes actual performance; rationality is not the same as perfection.]

1.8.9 Agent and its Environment

1.8.9.1 Agent Description - A BLACK BALLS PICKER

Consider the following example.

The Picker World (Environment)

It is a simple and made-up world, so one can invent many variations. It has two buckets at two locations, L1 and L2 (for simplicity, consider a square area for each location), full of BLACK and WHITE colour balls.

The Picker and its Perceptions

The picker perceives at which location it is. It can perceive whether there is a BLACK ball at the given location.

The Agent Actions

The picker can choose to MOVE LEFT or MOVE RIGHT, PICK UP BLACK BALL, or be idle, that is, do nothing.

A function can be devised as follows: if the current location's bucket has more BLACK BALLS then PICK, otherwise MOVE to the other square.

Diagram Depicting Black Ball Picker

Following is the partial tabulation of a simple agent function for the black ball picker.

Percept Sequence | Action
[L1, No Black Ball] | Right
[L1, More Black Balls] | Pick
[L2, No Black Ball] | Left
[L2, More Black Balls] | Pick
[L1, No Black Ball], [L1, No Black Ball] | Right
[L1, No Black Ball], [L1, More Black Balls] | Pick
[L1, No Black Ball], [L1, No Black Ball], [L1, No Black Ball] | Right
[L1, No Black Ball], [L1, No Black Ball], [L1, More Black Balls] | Pick

[Fig. 1.8.8 Black ball picker world with two buckets at two locations.]

1.9 The Environments - GTU : Winter-18,19, Summer-19

1.9.1 Nature of Environment

In the previous section we have seen various types of agents; now let us see the details of the environment where the agent is going to work. A task environment is essentially a problem to which an agent is a solution.

The range of task environments that might arise in AI is obviously vast. We can, however, identify a fairly small number of dimensions along which task environments can be categorized. These dimensions determine, to a large extent, the appropriate agent design and the applicability of each of the principal families of techniques for agent implementation.
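As a concrete illustration of an agent solving a task environment, the black-ball-picker rule of section 1.8.9 can be sketched in a few lines. The percept encoding below is invented for illustration; the rule itself follows the text (pick while black balls remain at the current location, otherwise move to the other square).

```python
# Sketch of the black-ball-picker agent function from section 1.8.9.
# A percept is a (location, status) pair; the encoding is invented.

def picker_agent(percept):
    """Reflex rule for the two-bucket picker world."""
    location, status = percept
    if status == "more_black_balls":
        return "Pick"
    # No black balls here: move to the other location.
    return "Right" if location == "L1" else "Left"

print(picker_agent(("L1", "no_black_ball")))     # Right
print(picker_agent(("L2", "more_black_balls")))  # Pick
```

Because the rule looks only at the current percept, the infinite table of percept sequences collapses into one short function - the point of implementing an agent function by an agent program.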

1.9.2 Types of Task Environment

1.9.2.1 Fully Observable Vs Partially Observable

If an agent's sensors give it access to the complete state of the environment at each point of time, then the environment is fully observable.

In some environments, if there is noise or the agent has inaccurate sensors, such that some states of the environment are missing, then such an environment is partially observable.

Example

Fully Observable: The puzzle game environment is fully observable, where the agent can see all the aspects that are surrounding it. That is, the agent can see all the squares of the puzzle game along with the values (if any added) in them.

More examples: 1) Image analysis. 2) Tic-tac-toe.

Partially Observable: The poker game environment is partially observable. The game of poker is a card game that shares betting rules and usually (but not always) hand rankings. In this game the agent is not able to perceive other players' betting. Also, the agent cannot see other players' cards. It has to play with reference to its own cards and with current betting knowledge.

More examples: 1) Interactive Science Tutor. 2) Military Planning.

1.9.2.2 Deterministic Vs Stochastic

If from the current state of the environment and the action the agent can deduce the next state of the environment, then it is a deterministic environment; otherwise it is a stochastic environment.

If the environment is deterministic except for the actions of other agents, we say that the environment is strategic.

Examples

Deterministic: In image analysis, based on the current percept of the image the agent can take its next action, or it can process the remaining part of the image with current knowledge. Finally, it can produce all the detailed aspects of the image.

Strategic: An agent playing the tic-tac-toe game is in a strategic environment, as from the current state the agent decides the next state, except for the actions of other agents.

More examples: 1) Video analysis. 2) Trading agent.

Stochastic: A boat driving agent is in a stochastic environment, as the next driving action is not based on the current state alone. In fact, it has to see the goal, and from all current and previous percepts the agent needs to take action.

More examples: 1) Car driving. 2) Robot firing in a crowd.

1.9.2.3 Episodic Vs Sequential

In an episodic environment the agent's experience is divided into atomic episodes, such that each episode consists of the agent perceiving and then performing a single action. In this environment the choice of action depends only on the episode itself; previous episodes do not affect the current action.

In a sequential environment, on the other hand, the current decision could affect all future decisions.

Episodic environments are much simpler than sequential environments because the agent does not need to think ahead.

Example

Episodic Environment: An agent finding the defective part of an assembled computer machine. Here the agent will inspect the current part and take an action which does not depend on previous decisions (previously checked parts).

More examples: 1) Blood testing for a patient. 2) Card games.

Sequential Environment: A game of chess is a sequential environment, where the agent takes action based on all previous decisions.

More examples: 1) Chess with a clock. 2) Refinery controller.
1.9.2.4 Static Vs Dynamic

If the environment can change while the agent is deliberating, then we say the environment is dynamic for that agent; otherwise it is static.

Static environments are easy to tackle, as the agent need not worry about changes around it (as it will not change) while taking actions.

Dynamic environments keep on changing continuously, which makes the agent be more attentive in making decisions to act.

If the environment itself does not change with time but the agent's performance does, then we say that the environment is semidynamic.

Examples

Static: In the crossword puzzle game the environment, that is, the values held in the squares, can only change by the action of the agent.

More examples: 1) 8-queen puzzle. 2) Semidynamic.

Dynamic: An agent driving a boat is in a dynamic environment, because the environment can change (a big wave can come, it can be more windy) without any action of the agent.

More examples: 1) Car driving. 2) Tutor.

1.9.2.5 Discrete Vs Continuous

In a discrete environment the environment has fixed, finite, discrete states over time, and each state has associated percepts and actions.

Whereas a continuous environment is not stable at any given point of time and changes randomly, thereby making the agent learn continuously so as to make decisions.

Example

Discrete: A game of tic-tac-toe depicts a discrete environment, where every state is stable, has an associated percept, and is the outcome of some action.

More examples: 1) 8-queen puzzle. 2) Crossword puzzle.

Continuous: A boat driving environment is continuous, where the state changes are continuous and the agent needs to perceive continuously.

More examples: 1) Part Picking Robot. 2) Flight Controller.

1.9.2.6 Single Agent Vs Multiagent

In a single agent environment we have a well defined single agent who takes decisions as the environment changes and acts.

In a multiagent environment there can be various agents or various groups of agents which are working together to take decisions and act. In a multiagent environment we can have a competitive multiagent environment, in which many agents are working in parallel to maximize the performance of an individual, or there can be a co-operative multiagent environment, where all agents have a single goal and they work to get high performance of all of them together.

Examples

Multiagent independent environment: Many agents in a game of Maze.

Multiagent cooperative environment: Fantasy football. [Here many agents work together to achieve the same goal.]

Multiagent competitive environment: Trading agents. [Here many agents are working, but opposite to each other.]

Multiagent antagonistic environment: Wargames. [Here multiple agents are working opposite to each other, but one side (agent/agent team) has a negative goal.]

Single agent environment: Boat driving. [Here a single agent perceives and acts.]

1.9.2.7 Complexity Comparison of Task Environment

Following is the rising order of complexity of the various task environments (low to high):

Observable -> Partially observable
Deterministic -> Stochastic
Episodic -> Sequential
Static -> Dynamic
Discrete -> Continuous
Single agent -> Multiple agents
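The dimensions above can be recorded as a simple data table and compared mechanically. The sketch below classifies two of the section's examples; the classifications are illustrative (in particular, treating the crossword puzzle as sequential is an assumption not stated in the text), and the complexity count simply applies the low-to-high ordering just given.

```python
# Illustrative sketch: recording the task-environment dimensions of
# section 1.9.2 for two example tasks. Classifications are assumptions
# made for the example, not definitive.

ENVIRONMENTS = {
    "crossword_puzzle": {
        "observable": "fully", "determinism": "deterministic",
        "episodes": "sequential", "change": "static",
        "states": "discrete", "agents": "single",
    },
    "boat_driving": {
        "observable": "partially", "determinism": "stochastic",
        "episodes": "sequential", "change": "dynamic",
        "states": "continuous", "agents": "single",
    },
}

# The "high complexity" side of each dimension, per section 1.9.2.7.
COMPLEX = {"partially", "stochastic", "sequential",
           "dynamic", "continuous", "multi"}

def harder(a, b):
    """Count dimensions on which environment b is more complex than a."""
    return sum(ENVIRONMENTS[b][k] in COMPLEX and
               ENVIRONMENTS[a][k] not in COMPLEX
               for k in ENVIRONMENTS[a])

print(harder("crossword_puzzle", "boat_driving"))  # 4
```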

1.9.3 More Types of Task Environment

We can further classify task environments based on the specific problem domains from which they follow.

1) Monitoring and Surveillance Environment
Example: An agent monitoring incoming people at some gathering where only authorized people are allowed.

2) Time Constrained Environment
Example: The chess-with-a-clock environment, where the move should be done in a specified amount of time.

3) Decision Making Environment
Example: The executive agent who is monitoring the profit of an organization can help top level management to take decisions.

4) Process Based Environment
Example: The image processing agent who can take input and synthesize it to produce the required output and details about the image.

5) Personal or User Environment
Example: A small scale agent which can be used as a personal assistant, who can help to remember daily tasks and give notifications about work, etc.

6) Buying Environment
Example: An online book shopping bot (agent) who buys books online as per user requirements.

7) Automated Task Environment
Example: A cadburry manufacturing firm can use an agent who automates the complete procedure of cadburry making.

8) Industrial Task Environment
Example: An agent developed to make the architecture of a building or the layout of a building.

9) Learning Task Environment (Educational)
Example: We can have an agent who is learning some act or some theories presented to it, and later it can play it back, which will be helpful for others to learn that act or those theories.

10) Problem Solving Environment
Example: We can have an agent who solves different types of problems, like the travelling salesman problem or any general purpose mathematics or statistics problem.

11) Scientific and Engineering Task Environment
Example: An agent doing scientific calculations for aeronautics purposes, or an agent developed to design road maps or a bridge structure.

12) Biological Task Environment
Example: An agent working to design some chemical component helpful for medicine.

13) Space Task Environment
Example: An agent that is working in space for observing the space environment and recording details about it.

14) Research Task Environment
Example: An agent working in a research lab where it is made to grasp (learn) knowledge, represent it and draw conclusions from it, which will help researchers for further study.

15) Network Task Environment
Example: An agent developed to automatically carry data over a computer network based on certain conditions like a time limit or data size limit (the same type of agent can be developed for physically transferring items or mails over the same network).

16) Repository Task Environment
Example: If a data repository is to be maintained, then an agent can be developed to arrange data based on criteria, which will be helpful for searching later on.

1.10 Different Types of Agents

1.10.1 Intelligent Agent

"An intelligent agent is an intelligent actor, who observes and acts upon an environment." The intelligent agent is a magnum opus.

[Fig. 1.10.1 Intelligent agent: an agent with knowledge, belief, goals, desires and obligations perceives the environment (world with agents) through sensors and acts on it through actuators.]

The term intelligent thinker is different from intelligent agent.

[Fig. 1.10.2 Intelligent agent: an entity which performs 1. Perception and 2. Action.]

Example:
1) A robotic agent (cameras, infrared range finders).
2) An embedded real time software system agent.
3) A human agent (eyes, ears and other organs).

Characteristics of an Intelligent Agent (IA):
1) The IA must learn and improve through interaction with the environment.
2) The IA must adapt online and in the real time situation.
3) The IA must learn quickly from large amounts of data.
4) The IA must accommodate new problem solving rules incrementally.
5) The IA must have memory which must exhibit storage and retrieval capacities.
6) The IA should be able to analyze itself in terms of behaviour, error and success.

1.10.2 Different Forms of Agents (Types of Agents)

In artificial intelligence there are different forms of intelligent agents and sub-agents. As the degree of perceived intelligence and capability varies, it is possible to frame an intelligent agent's behaviour into four categories:

1. Simple reflex agents.
2. Model based reflex agents.
3. Goal based agents.
4. Utility based agents.

In the following section we discuss each type of agent in detail.

1.10.2.1 Agent Type 1 - Simple Reflex Agent

These agents select actions on the basis of the current percept, ignoring the rest of the percept history.

[Fig. 1.10.3 Simple reflex agent: the sensor answers "What the world is like now?"; a condition-action rule decides "What action I should do now?", which goes to the actuator in the environment.]

Property:
1) These agents are very simple, but their intelligence is limited.
2) They will work only if the correct decision can be made on the basis of only the current percept - that is, only if the environment is fully observable.
3) A little bit of unobservability can cause serious trouble.
4) If a simple reflex agent works in a partially observable environment, then it can lead to infinite loops.
5) Infinite loops can be avoided if the simple reflex agent can try out possible actions, i.e., can randomize its actions.
6) A randomized simple reflex agent will perform better than a deterministic simple reflex agent.

Example of a condition-action rule: In an ATM agent system, if the PIN matches with the given account number then the customer gets money.

Procedure: SIMPLE-REFLEX-AGENT
Input: Percept
Output: An action.
Static: rules, a set of condition-action rules.
1. state <- INTERPRET-INPUT(percept)
2. rule <- RULE-MATCH(state, rules)
3. action <- RULE-ACTION(rule)
4. return action.

1.10.2.2 Agent Type 2 - Model Based Reflex Agent

The internal state of the agent stores the current state of the environment, which describes part of the unseen world, i.e., how the world evolves and the effect of the agent's own actions. It means that the agent stores a model of the possibilities around it; hence it is called a model based reflex agent.

Property:
1) It has the ability to handle partially observable environments.
2) Its internal state is updated continuously, which can be shown as: Old internal state + Current percept = Updated state.

For example: A car driving agent which maintains its own internal state and then takes action as the environment appears to it.

[Fig. 1.10.4 Model based reflex agent: the sensors feed "What the world is like now?", combined with the state, "How the world evolves?" and "What my actions do?"; condition-action rules then decide "What action I should do now?" for the actuators.]

Procedure: REFLEX-AGENT-WITH-STATE
Input: Percept
Output: An action.
Static: state, a description of the current world state; rules, a set of condition-action rules; action, the most recent action, initially none.
1. state <- UPDATE-STATE(state, action, percept)
2. rule <- RULE-MATCH(state, rules)
3. action <- RULE-ACTION(rule)
4. return action.

1.10.2.3 Agent Type 3 - Goal Based Agent

A goal based agent stores state descriptions as well as goal state information.

Property:
1) A goal based agent works simply towards achieving its goal.
2) For tricky goals it needs searching and planning.
3) A goal gives only two discrete states: a) Happy, b) Unhappy.
4) We can quickly change a goal based agent's behaviour for a new/unknown goal, because the goal information appears in a proper and explicit manner; goals are dynamic in nature.

For example: A military planning robot which provides a certain plan of action to be taken. Its environment is too complex, and the expected performance is also high.

[Fig. 1.10.5 Goal-based agent: from the sensors and state, the agent works out "What the world is like now?", "How the world evolves?", "What my actions do?" and "What it will be like if I do action A?"; the goals then decide "What action I should do now?" for the actuators.]

For example: An agent searching a solution for the 8-queen puzzle.

1.10.2.4 Agent Type 4 - Utility Based Agent

In a complex environment goals alone are not enough for agent designs. In addition to goals we can have a utility function.

Property:
1) A utility function maps a state on to a real number, which describes the associated degree of best performance.
2) Goals give us only two outcomes: achieved or not achieved. But utility based agents provide a way in which the likelihood of success can be measured against the importance of the goals.
3) A rational agent which is utility based can maximize the expected value of the utility function, i.e., more perfection can be achieved.

[Fig. 1.10.6 Utility-based agent: from the sensors and state the agent works out "What the world is like now?", "What it will be like if I do action A?" and "How happy I will be in such a state?"; the utility then decides "What action I should do now?" for the actuators.]

1.10.3 The Learning Agent

If an agent is to operate in initially unknown environments, then the agent should be a self-learner. It should observe, gain and store information. A learning agent can be divided into 4 conceptual components:

1) Learning Element - Which is responsible for making improvements.
2) Performance Element - Which is responsible for selecting external actions.
3) Critic - It tells how the agent is doing and determines how the performance element should be modified to do better in the future.
4) Problem Generator - It is responsible for suggesting actions that will lead to new and informative experiences for the agent. The agent can ask the problem generator for suggestions.

The performance standard distinguishes part of the incoming percept as a reward (success) or penalty (failure) that provides direct feedback on the quality of the agent's behaviour.
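Maximizing the expected value of a utility function can be shown in a short sketch. The states, actions, utilities and outcome probabilities below are invented for illustration; the point is only the shape of the computation: a stochastic outcome model, an expected-utility sum, and an argmax over actions.

```python
# Sketch of utility-based action selection: the utility function maps a
# state to a real number; the agent picks the action with the highest
# expected outcome utility. All numbers here are invented.

UTILITY = {"goal": 1.0, "near_goal": 0.6, "start": 0.0}

# action -> list of (probability, resulting state): a stochastic model.
OUTCOMES = {
    "move_fast": [(0.5, "goal"), (0.5, "start")],
    "move_safe": [(1.0, "near_goal")],
}

def expected_utility(action):
    return sum(p * UTILITY[s] for p, s in OUTCOMES[action])

def choose_action(actions):
    return max(actions, key=expected_utility)

print(choose_action(["move_fast", "move_safe"]))  # move_safe
```

A pure goal test would treat both actions alike (neither guarantees the goal); the utility comparison, 0.6 against 0.5, is what lets the agent prefer the safer move.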
ough
thro.

earning
pertormance

their
improve
can
seen
More Types of Agents
All four type
ypes agent
we
have
1.10.4
and there by become learning agernt do classification of agents based on various like
We can aspects -

ment
from environment and
For example which
continuously
learns
then do 1) Task they perform. 2) Their various control architecture.
agent
roplane driving
Aerop 3) Depending on sensitivity of their sensors, and effectiveness of their action and
safe plane driving internal states they possess.
of Learming Agent
1.10.3.1 Components basic knowledge and learm Following are various types of agents, based on above classification criteria
Base/Learner/Learming
element-It holds ngs
1) environment.
1. Physical Agents: A physical agent is an entity which perceives through sensors
unfamiliar
from the Capable system is and acts through actuators.
system/Performing elements
is the actual
respo S16le
2) Capable/Efficient actions.
Performance element
agent. It 2. Temporal Agents A temporal agent may use time based stored information to
external
for selecting
actions. offer instructions or data acts to a computer program or human being and takes
and decides
perceives
feedback. It reflects fault and analyze cor program inputs percepts to adjust its next behaviour.
3)
It
gives
Faultreflector element maximum success.
orrective 3. Spatial Agents That relate to the physical real-world.
actions in order to get
element -
It generate new and informative experi 4. Processing Agents -
That solve a problem like speech recognition.
4) New problem generator t
suggests new actions. 5. Input Agents That process and make sense of sensor inputs- eg. neural
makes difference between incoming percept as a network based agents.
The performance standard reward
ree

on the quality of the agent's behaviour.


(or penalty), that indícate
direct feedback 6. Decision Agents That are geared upto do decision making
7. Believable Agents An agent exhibiting a personality via the use of an artificial
Performance character (the agent is embedded) for the interaction.
standaro
8. Computational Agents That can do some complex, lengthy scientiic
computations as per problem requirements.
Crtic 9. Information Gathering Agents Who can collect
(perceive) and store data.
Feedback 10. Entertaining Agents Who can perform something which can entertain human
like gaming agents.
Changes
Learning Performance 11. Biological Agents Their reasoning engine works almost identical to human
element element
Knowledge brain.
Leerning
Goals 12. World Agents That incorporate a combination of all the other classes of agents
Problem to allow autonomous behaviours.
generator 13. Life Like
ActuatorsS Agents Which are combinations of other classes of agents which will
behave like real world characters. (For example A robotic dog)
Environment Sensors
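The learning-agent cycle of section 1.10.3.1 (performance element, critic, learning element, problem generator) can be sketched in code. The class below is a minimal illustration: the numeric percepts, the 0.5 performance standard and the update rule are all invented for the example.

```python
# Sketch of a learning agent: the critic scores each percept against a
# performance standard, the learning element adjusts the performance
# element, and the problem generator can suggest exploratory actions.
import random

class LearningAgent:
    def __init__(self):
        self.preference = 0.5   # crude internal state tuned by learning

    def performance_element(self, percept):
        # Selects an external action from the current percept.
        return "act_high" if percept >= self.preference else "act_low"

    def critic(self, percept, standard=0.5):
        # Reward (+1) or penalty (-1) relative to the performance standard.
        return 1 if percept >= standard else -1

    def learning_element(self, feedback):
        # Nudges the performance element to do better in the future.
        self.preference -= 0.1 * feedback

    def problem_generator(self):
        # Suggests a new action to gain informative experience.
        return random.choice(["act_low", "act_high"])

    def step(self, percept, explore=False):
        action = self.problem_generator() if explore else self.performance_element(percept)
        self.learning_element(self.critic(percept))
        return action

agent = LearningAgent()
print(agent.step(0.9))   # -> act_high
```

The `explore` flag shows where the problem generator overrides the performance element to gather new experience.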
1.11 Designing an Agent System
When we are specifying agents we need to specify the performance measure, the environment, and the agent's sensors and actuators. We group all these under the heading of the task environment.

Acronymically we call this the PEAS ([P]erformance, [E]nvironment, [A]ctuators, [S]ensors) description.

1.11.1 The Steps in Designing an Agent
1) Define the problem area (i.e. task environment) in complete form. Example - Vacuum world, automated taxi driver, automated face recognition system.
2) Define or tabulate the PEAS.
3) Define or tabulate the agent functions (i.e. percept sequence and action columns).
4) Design the agent program.
5) Design an architecture to implement the agent program.
6) Implement the agent program.

The agent system may be a single-agent or a multiple-agent system. If the system is multi-agent, then we need to consider communication and co-operation strategies among the multiple agents.

1.11.2 Agent Types and Their PEAS Description, Examples According to Their Uses

I) General purpose (uses for the common man)
1. An automated taxi driver - Performance measure : Safe, fast, legal, comfortable trip, maximize profits. Environment : Roads, other traffic, pedestrians, customers. Actuators : Steering, accelerator, brake, signal, horn, display, engine. Sensors : Cameras, sonar, speedometer, GPS, odometer, accelerometer, engine sensors, keyboard.
2. An automated face recognizer - Performance measure : Correct, efficient feature recognition. Environment : Human face, web/video software, web system. Actuators : Display of classification, feature extraction. Sensors : Camera/video camera, infrared light, keyboard, mouse.
3. Part-picking robot - Performance measure : Percentage of parts in correct bins. Environment : Conveyor belt with parts; bins. Actuators : Jointed arm and hand. Sensors : Camera, joint angle sensors.
4. ATM system - Performance measure : Secure, reliable, fast service. Environment : ATM machine, human (customer). Actuators : Display menu/screen with options, validity checks. Sensors : Touch screen.

II) Business, Industrial purpose
1. E-commerce system - Performance measure : Secure, reliable, fast business processing. Environment : E-commerce websites, human (customer). Actuators : Display of product lists with prices, forms. Sensors : Keyboard, mouse.
2. Refinery controller - Performance measure : Maximize purity, yield, safety. Environment : Refinery, operators. Actuators : Valves, pumps, heaters, displays. Sensors : Temperature, pressure, chemical sensors.

III) Scientific / Research purpose
1. Satellite image analysis system - Performance measure : Correct image categorization. Environment : Downlink from orbiting satellite. Actuators : Display of scene categorization. Sensors : Color pixel arrays.
2. Chemical reaction analyzer (in a chemistry research lab) - Performance measure : Correct recording of reaction results. Environment : Chemistry research lab where chemicals are available for carrying out reactions. Actuators : Recording of lab reaction results. Sensors : Recording instruments, knowledge database of chemicals and their characteristics.

IV) Medical purpose
1. Medical diagnosis system - Performance measure : Healthy patient, minimize costs, lawsuits. Environment : Patient, hospital, staff. Actuators : Display of questions, tests, diagnoses, treatments, referrals. Sensors : Keyboard entry of symptoms, findings, patient's answers.
2. Blood testing lab system - Performance measure : Correct reporting of each test. Environment : Blood sample with specified components. Actuators : Detailed reporting of each test. Sensors : Test conduction procedures and results.
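A PEAS description is just a four-part record, so it is natural to hold it in a small data structure. The sketch below is illustrative; the field values are taken from the automated taxi driver example, and the class name and layout are assumptions, not a prescribed format.

```python
# A PEAS (Performance, Environment, Actuators, Sensors) task-environment
# description held as a simple record.
from dataclasses import dataclass, field

@dataclass
class PEAS:
    agent_type: str
    performance: list = field(default_factory=list)
    environment: list = field(default_factory=list)
    actuators: list = field(default_factory=list)
    sensors: list = field(default_factory=list)

taxi = PEAS(
    agent_type="Automated taxi driver",
    performance=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "signal", "horn", "display"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer", "keyboard"],
)
print(taxi.agent_type, len(taxi.sensors))
```

Tabulating each agent type this way is exactly step 2 of the design procedure in section 1.11.1.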
V) Educational purpose
1. Interactive English tutor - Performance measure : Maximize student's score on the test. Environment : Set of students, testing agency. Actuators : Display of exercises, suggestions, corrections. Sensors : Keyboard entry.
2. A casio teacher - Performance measure : Learner should be able to play specific musical pieces. Environment : Group of learners or a single learner, sample music database of casio piece details. Actuators : Display of each note, presentation of pieces. Sensors : Inputs from the learner, from the mouse or playing a key, keyboard.

1.11.3 The Detail Example of PEAS - Interactive English Tutor Agent

I) The [P]erformance Measures
The Interactive English Tutor agent system must achieve the following performance measures :
1) All the students must get maximum knowledge regarding the English subject, such as vocabulary, verbal soft skills (i.e. communication skills), reading and writing skills.
2) All the students must score good marks in the English test.

II) The [E]nvironment
In the Interactive English Tutor agent system the environment has the following properties :
1) All the students have different grasping power and IQ (Intelligence Quotient).
2) Software modules which give demonstrations.

III) The [A]ctuators (Actions)
The software model (agent program) will be executed on the agent architecture (i.e. operating system). The actions performed by the interactive English tutor are :
1) Audio/video demonstration on different topics.
2) Practical assignments on verbal/written skills, report generation, letter writing, etc.
3) Monitoring and inspection (i.e. checking) of the practical assignments, providing suggestions and corrections to the students.
4) Online test conduction and result analysis.
5) Student's speech and video recording.

IV) The [S]ensors
The sensors play a crucial role in providing input to the Interactive English Tutor agent system. The following sensors are required to support the sequence of perception events :
1) Keyboard for the GUI interface.
2) Mouse, and mike for audio recording.
3) Headphone for listening.
4) Video/web cameras for video shooting.

1.12 One Final Word
After taking a brief tour of AI history and its related work, it can be seen that the goal of AI is to construct working programs that solve problems which are useful for the well-being of humans.
In AI a major issue is to acquire a large amount of data and processed knowledge, so that the system can deal with almost all the problems and at least solve the toy problems. It becomes harder to access appropriate things when required once the amount of knowledge grows.
A good programming language is required to process knowledge related to AI. LISP has been the most commonly used language for AI programming. Specifically, AI programs are easiest to build using languages that have been designed to support symbolic rather than primarily numeric computation.
AI is still a bud in industry, yet to bloom. In our syllabus we are going to study some of the basic but major topics related to AI.

Answer in Brief
1. Define AI. (Refer section 1.1)
2. What is AI ? (Refer section 1.1)
3. What is meant by robotic agent ? (Refer section 1.1)
4. What are advantages one can infer when machines perform intelligently ? (Refer section 1.1)
5. Define an agent. (Refer section 1.8)
6. What is the role of an agent program ? (Refer section 1.8)
7. Define rational agent. (Refer section 1.8)
8. List down the characteristics of an intelligent agent. (Refer section 1.10)
9. Give the general model of a learning agent. (Refer section 1.10)
10. Explain in detail the history of AI. (Refer section 1.1)
11. What are various domains of AI ? (Refer section 1.1)
12. Discuss in detail the structure of an agent with a suitable diagram. (Refer section 1.8)


Artiñicial intelligence 1 - 42 Artificial lntelligence The Conca The Concept
****** Artificial intelligence-
43.
******

What is an ideal rational agent ? (Refer section 1.8)


1-43
Artficiel Intelligence8
14. Explain properties of environment. (Refer section 1.9)
Summer - 18
. Name at least 5 agent types with percepts actions and goals with environment.

41
(Refer section 1.9)
test. (Refer section 1.1)
Discuss Turning
W h a t are requirements of intelligent agents ? (Refer section 1.10) Q.5
(Refer section 1.10) Winter- 18
17. Discuss model based agents and goals based agents. wmww.d
with goals. (Refer section 1.10)
18. Give the structure of a n agent discuss different task domain of artificial intelligence. 31
their PEAS. (Refer section 1.11) Q.6 Define and
19. List few agent types and describe (Refer section 1.9)
section 1.11)
20. What is meant by PEAS ? (Refer
Summer 19
how a n Al system is diferent from
a cornvolutional! computing
computing system.
s
21. What is AI ? Explain
(Refer section 1.1) (Refer section 1.9) 141
words in the context of AI: Intelligence
characteristics of Al. (Refer section 1.1) Q.7 Define the following
22. What is Al ? State various
23. Explain the nature and scope of AlL. Why game playing problems are considered Al vrohi Winter 19
lems ?
(Refer section 1.1)
how AI techniques improve
the term "Artificial Intelligence". Explain
24. What a r e Al techniques? (Refer
section 14) Q.8 Define 1.1 and 1.4)
25. Define AI and justify with suitable example how does conventional computing diflerent
real-world problem solving. (Refer sections
the What is the significance of the Turing Test" in Al : Explain how it is performed.
section 1.1)
intelligent computing. (Refer Q.9 41
26. Explan desirable properties of Al internal representation and Al softoare. (Refer section 1
(Refer section 1.1)
1.1)
a.10 Enlist and discuss the major task domains of Artificial Intelligence.
Questions with Answers (Refer section 1.9)
1.13 University nwww.wwwwnw

Winter 12 Summer-20
wwwww.w
wwww.w wi

What is intelligence Discuss types of problems requiring intelligence to solbe it. Q.11 Define the following wods in the contert of Al:
Q.1
i) Intelligence. (Refer section 1.1)
Define Al.(Refer sections 1.1.2 and 1.2)

Winter -14 O00


the characteristics of Al problem. (Refer section 1.12)
Q.2 Define Al ? Explain
Winter 16

(Refer section 1.1)


Q.3 Discuss following: i) Turing test
Winter 17
wmmnommmmmuá

Q4 Discuss: Turning test. (Refer section 1.1)

TECHNICAL PUBLICATIONS -
An up thrust for knowledge

An up thrust for
knowledge
TECHNICAL PUBLICATIONS -
4 Knowledge Representation Issues

Syllabus
Logical Agents : Knowledge-based agents, The Wumpus world, Logic, Propositional logic, Propositional theorem proving, Effective propositional model checking, Agents based on propositional logic.
First Order Logic : Representation Revisited, Syntax and Semantics of First Order Logic, Using First Order Logic.

Contents
4.1 Representation and Mappings ... Winter-14, 18, 19, Summer-16, 18, 20 ... Marks 7
4.2 Approaches to Knowledge Representation ... Summer-15, 17, 18, 20, Winter-18 ... Marks 7
4.3 University Questions with Answers

4.1 Representation and Mappings                GTU : Winter-14, 19, Summer-16, 18, 20

4.1.1 Introduction
Some knowledge programs require search-based problem solving to be implemented. Knowledge can be a particular state or a path toward a solution. Before being used, this knowledge must be represented in a particular way, with a certain format. Knowledge Representation (KR) is an important issue in computer science in general and in AI in particular. "The dominant paradigm for building intelligent systems since the early 1970s has been based on the premise that intelligence presupposes knowledge." Generally, knowledge is represented in the system's knowledge base, which consists of data structures and programs. In addition, the intelligent system is expected to have a program called an inference engine that implements the reasoning patterns necessary for the task at hand. Thus current AI theory and practice dictate that intelligent systems be knowledge based, consistent with this simple knowledge base plus inference engine architecture. This emphasis on knowledge has led to suggestions that AI can be arguably called "applied epistemology".

4.1.2 Issues in Knowledge Representation
There are several issues that must be considered when representing various kinds of real-world knowledge.

Important attributes
Are any attributes of objects so basic that they occur in almost every problem domain ? If there are such attributes, then we need to make sure that they are handled appropriately in each of the mechanisms we propose. If such attributes exist, what are they ?
There are two attributes of general significance : instance and isa. These attributes are important because each supports property inheritance.

Relationships among attributes
What about the relationship between the attributes of an object ? Relationships among attributes that need to be handled carefully are inverses, existence in an isa hierarchy, techniques for reasoning about values, and single-valued attributes.
Consider an example of an inverse :
band(John Zorn, Naked City)
This can be treated as "John Zorn plays in the band Naked City" or "John Zorn's band is Naked City". Another representation is
band = Naked City
band-members = John Zorn, Bill Frissell, Fred Frith, Joey Barron, etc.

Granularity
At what level should the knowledge be represented and what are the primitives ? Primitives are fundamental concepts such as holding, seeing and playing, and as English is a very rich language with over half a million words, it is clear we will find difficulty in deciding which words to choose as our primitives in a series of situations.
E.g. "Tom feeds a dog" could become :
feeds(tom, dog)
If Tom gives the dog a bone, we could write :
gives(tom, dog, bone)
Are these the same ? In any sense does giving an object food constitute feeding ?
If give(x, food) → feed(x), then we are making progress. But we need to add certain inferential rules.
How do we represent relationships ? "Louise is Bill's cousin" could be expanded into primitives such as
(brother or sister)((father or mother)(bill))
Suppose the cousin is called Chris : we then do not know whether Chris is male or female, and whether son or daughter applies. Clearly, different levels of understanding require different levels of primitives, and these need many rules to link together apparently separate but similar primitives. Obviously there is a potential storage problem, and the underlying question must be what level of comprehension is needed.
Granularity is the level at which the knowledge is to be represented. For this, one should have a complete understanding of the primitive (basic) knowledge that needs to be represented in the system. Other significant issues are existence in an isa hierarchy, techniques for reasoning about values, and single-valued attributes. The set of objects whose knowledge is stored should be clearly identified, and the attributes required for reasoning should be identified.

4.1.3 The Techniques of Representation and Mappings
Nevertheless, AI is used to solve complex problems by means of manipulating knowledge, as well as some of the large bodies of knowledge that are encountered within them.
Knowledge is required so as to create solutions for new problems. In a representation there are two different entities that must be considered :
- Facts : truths in some relevant world. These are the things that we want to represent.
- Representations of facts in some chosen formalism. These are the things that will actually be manipulated.

Structuring of these entities can be done at two levels :
- The knowledge level, at which facts are described.
- The symbol level, at which representations of objects at the knowledge level are defined in terms of symbols that can be manipulated by programs.

[Fig. 4.1.1 Mappings between facts and representations : facts map to an internal representation used by the reasoning program; English understanding maps English sentences to facts, and English generation maps facts back to sentences.]

Our main goal is to focus on facts, on representations, and on the two-way mappings that must exist between them, as shown in Fig. 4.1.1 above. The links in the figure are called representation mappings :
- Forward representation mappings map from facts to representations.
- Backward representation mappings map the other way.

One representation of facts concerns natural language (particularly English) sentences. Regardless of the representation for facts that we use in a program, we may also need to be concerned with an English representation of those facts, in order to facilitate getting information into and out of the system. We must also have mapping functions from English sentences to the representation which we are actually going to use, and from it back to sentences, as shown in Fig. 4.1.1. For example, we can use mathematical logic as the representation formalism. Consider the sentence below.

Tommy is a dog. This fact can be represented in logic as : dog(Tommy)

Suppose also that we have a logical representation of the fact "all dogs have tails" : ∀x : dog(x) → hastail(x). Using the deductive mechanisms of the logic, we may generate the new representation object hastail(Tommy). Using an appropriate backward mapping function, we could then generate the English sentence "Tommy has a tail". Or we can make use of this representation of a new fact to cause us to take some appropriate action, or to derive representations of additional facts.

Consider the example of the Mutilated Checkerboard Problem. Consider a normal checkerboard from which two squares, in opposite corners, have been removed. The task is to cover all the remaining squares exactly with dominoes, each of which covers two squares. No overlapping, either of dominoes on top of each other or of dominoes over the boundary of the mutilated board, is allowed. Can this task be done ?

[Fig. 4.1.2 A mutilated checkerboard : number of black squares = 30, number of white squares = 32.]

An argument follows :
- The mutilated checkerboard contains 32 white squares and 30 black squares in total.
- Since every domino covers two neighbouring squares, a black one and a white one, the first thirty dominoes cover 30 black squares and 30 white squares, leaving two white squares and zero black squares.
- These two white squares cannot be adjacent, and so cannot be covered by the remaining domino.
- Hence it is impossible to cover all 62 squares with 31 dominoes.

[Fig. 4.1.3 Observation : a partial covering of the fields on the board.]

An observation which can be made is that the number of black squares covered corresponds to the number of dominoes in the partial covering.
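The colour-parity argument behind the mutilated checkerboard is easy to check mechanically. The sketch below simply counts the colours of the squares that remain after removing two opposite corners; the coordinate/colour convention is an arbitrary choice for the example.

```python
# Mutilated checkerboard: remove two opposite corners of an 8x8 board and
# count the colours of the remaining squares. Every domino covers one
# black and one white square, so unequal colour counts make a perfect
# cover by 31 dominoes impossible.

def colour_counts(removed):
    counts = {"black": 0, "white": 0}
    for r in range(8):
        for c in range(8):
            if (r, c) not in removed:
                # Squares whose coordinates sum to an even number share a colour.
                counts["white" if (r + c) % 2 == 0 else "black"] += 1
    return counts

counts = colour_counts(removed={(0, 0), (7, 7)})  # opposite corners, same colour
print(counts)                                # 62 squares: 30 of one colour, 32 of the other
print(counts["black"] != counts["white"])    # True -> no exact cover exists
```

Representing the problem at the level of colours, rather than individual squares, is exactly the granularity choice that makes the impossibility proof trivial.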
The same is true for the number of white fields, which enforces the number of black squares to coincide with the number of white squares when the interplay between covered squares on the edge of the board and the dominoes contained in the partial covering is investigated.

4.2 Approaches to Knowledge Representation        GTU : Summer-15, 17, 18, 20, Winter-18

A good system for the representation of knowledge in a particular domain should possess the following properties :
- Representational adequacy - It is the ability to represent all of the kinds of knowledge that are needed in that domain.
- Inferential adequacy - It is the ability to manipulate the representational structures in such a way as to derive new structures corresponding to new knowledge inferred from old.
- Inferential efficiency - It is the ability to incorporate into the knowledge structure additional information that can be used to focus the attention of the inference mechanisms in the most promising direction.
- Acquisitional efficiency - The ability to acquire new information easily.

Two types of approaches to knowledge representation :
1) Simple relational knowledge
2) Inheritable knowledge

1) Simple relational knowledge
This is the simplest way of storing facts. It uses the relational method, where each fact about a set of objects is set out sequentially in columns.
This type of representation can be used as the basis for small inference procedures in inference engines.
For example :

Player    Weight    Age    Play cricket
Monu      70        30     Right H.
Sonu      65        -      Right H.
Bablee    50        29     Left H.
Soni      45        -      Right H.
Moni      42        -      Left H.

Player_info ('Monu', 70, 30, Right H.)

2) Inheritable knowledge
Inheritable knowledge extends relational knowledge with object associativity, like attribute-value co-relations.
- All data should be organised into a hierarchy of classes.
- Objects inherit values from being members of a class.
- Classes must be arranged in a generalization hierarchy.
- Every individual frame can represent the collection of attributes and values associated with an individual node.

[Fig. 4.2.1 Inheritable knowledge : a cricket hierarchy in which R.H. batsman and L.H. batsman are linked by isa to Player, with instances such as bablee and moni carrying values like equal-handed and team BPL Indore.]

Properties of the inheritance hierarchy :
1) Arrows point from an object to its value.
2) Boxed nodes represent objects and values of attributes of objects.
3) It may also be called a slot-and-filler structure.

Algorithm : retrieve a value for an attribute of an instance object :
1. Find the object in the knowledge base.
2. If there is a value for the attribute, report that value.
3. Otherwise, look for a value of the attribute instance ; if found, report it.
4. Otherwise, go to that node and find a value for the attribute ISA ; if there is no value, search using isa until a value is found for the attribute.

Many programs rely on more than one technique.

4.2.1 Inferential Knowledge
Property inheritance is not the only useful form of inference ; we can also represent the knowledge as formal logic :
All dogs have tails : ∀x : dog(x) → hastail(x)
Set of rules :
1) Define the required facts.
2) Additional statements are verified as true or false.
3) Logic provides a powerful structure for describing relationships.

4.2.2 Procedural Knowledge
- Procedural knowledge can be expressed in different ways in a program.
- Procedural knowledge clearly differs from propositional knowledge.
- Procedural knowledge basically involves knowing how to do something.
- Procedural knowledge follows implicit learning.

4.2.2.1 Advantages of Procedural Knowledge
1) Property-specific knowledge can be specified.
2) Extended logical inference is possible.

4.2.2.2 Disadvantages of Procedural Knowledge
1) Consistency : all deductions are not always correct.
2) Completeness : all cases are not easy to represent.

There are multiple techniques for knowledge representation. Different representation formalisms are :
- Rules
- Logic
- Natural language
- Database systems
- Semantic nets
- Frames

Database systems - They are used in representing simple relational knowledge, which consists of declarative facts and can be seen as a set of relations of the same sort used in database systems. The table below shows an example of such a system.

Player    Height    Weight    Bats-Throws
Ram       6-0       180       Right-Right
Shyam     5-10      170       Right-Right
Veer      6-2       215       Left-Left
Tarun     6-3       205       Left-Right

Semantic nets - Semantic nets are useful for representing inheritable knowledge. Inheritable knowledge is the most useful for property inheritance, in which elements of specific classes inherit attributes and values from more general classes in which they are included. Frames also play a big role in representing this knowledge. In order to support property inheritance, objects must be organized into classes and classes must be arranged in a generalization hierarchy. Fig. 4.2.2 shows some additional baseball knowledge inserted into a structure that is so arranged.

[Fig. 4.2.2 Inheritance hierarchy : Person (handed : Right) ← isa - Adult male (height : 178) ← isa - Baseball player (height : 195, bats : equal to handed, batting average : .252) ← isa - Pitcher (batting average : .106) and Fielder (batting average : .262) ; instances Three-Finger Brown (team : Chicago Cubs) and Pee-Wee Reese (team : Brooklyn Dodgers).]
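The baseball hierarchy of Fig. 4.2.2 can be sketched as a slot-and-filler structure, with retrieval walking the instance/isa links. The dictionary encoding below is an illustrative assumption; the node names and values follow the figure.

```python
# Slot-and-filler encoding of an inheritance hierarchy. retrieve() walks
# instance and isa links upward until it finds a value for the attribute.
net = {
    "Person":             {"handed": "Right"},
    "Adult male":         {"isa": "Person", "height": 178},
    "Baseball player":    {"isa": "Adult male", "height": 195, "batting average": 0.252},
    "Pitcher":            {"isa": "Baseball player", "batting average": 0.106},
    "Fielder":            {"isa": "Baseball player", "batting average": 0.262},
    "Three-Finger Brown": {"instance": "Pitcher", "team": "Chicago Cubs"},
    "Pee-Wee Reese":      {"instance": "Fielder", "team": "Brooklyn Dodgers"},
}

def retrieve(obj, attribute):
    node = obj
    while node is not None:
        slots = net[node]
        if attribute in slots:
            return slots[attribute]          # most specific value wins
        node = slots.get("instance") or slots.get("isa")
    return None

print(retrieve("Three-Finger Brown", "batting average"))  # 0.106 (from Pitcher)
print(retrieve("Three-Finger Brown", "height"))           # 195 (from Baseball player)
print(retrieve("Pee-Wee Reese", "handed"))                # Right (from Person)
```

Because the search stops at the first value found, the more specific class always overrides the more general one, which is exactly the property-inheritance behaviour described in the text.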
The isa hierarchy is normally used in semantic nets / frames. Lines represent attributes of objects, and boxed nodes represent objects and values of attributes of objects. A correct deduction from Fig. 4.2.2 would be : the height of Three-Finger Brown is 195 cm. An incorrect deduction would be : the height of Three-Finger Brown is 178 cm. The semantic network structure shown in Fig. 4.2.2 is a slot-and-filler structure. It may also be called a collection of frames.

Predicate logic
- Predicate logic is used to represent inferential knowledge.
- Logic provides a powerful structure in which to describe relationships among values.
- It can be combined with some other powerful description language with a hierarchy.

Production rules
- Production rules are useful in representing procedural knowledge.
- Procedural knowledge is a form of operational knowledge which specifies what to do when.
- Previously this was done using a programming language such as LISP. However, it was hard to reason with this method ; hence in AI programs procedural knowledge is represented using production rules.

Relational knowledge is made up of objects consisting of :
- Attributes
- Corresponding associated values.

We extend the base further by allowing inference mechanisms such as property inheritance :
- Elements inherit values from being members of a class.
- Data must be organised into a hierarchy of classes (Fig. 4.2.3).
- Boxed nodes represent objects and values of attributes of objects. Values can themselves be objects with attributes, and so on.
- Arrows point from an object to its value.
- This structure is known as a slot-and-filler structure, a semantic network, or a collection of frames.

[Fig. 4.2.3 Property inheritance hierarchy : Jazz and Avant Garde Jazz - isa → Musician ; Miles Davis (instance of Jazz ; bands : Miles Davis Group, Miles Davis Quintet) ; John Zorn (instance of Avant Garde Jazz ; bands : Naked City, Masada).]

The algorithm to retrieve a value for an attribute of an instance object :
1. Find the object in the knowledge base.
2. If there is a value for the attribute, report it.
3. Otherwise, look for a value of instance ; if none, fail.
4. Otherwise, go to that node, find a value for the attribute and then report it.
5. Otherwise, search through using isa until a value is found for the attribute.

Knowledge Representation (KR) is an important issue in computer science in general and in AI in particular. "The dominant paradigm for building intelligent systems since the early 1970s has been based on the premise that intelligence presupposes knowledge." Generally, knowledge is represented in the system's knowledge base, which consists of data structures and programs. In addition, the intelligent system is expected to have a program called an inference engine that implements the reasoning patterns necessary for the task at hand. Thus current AI theory and practice dictate that intelligent systems be knowledge based, consistent with this simple knowledge base plus inference engine architecture. This emphasis on knowledge has led to suggestions that AI can be arguably called "applied epistemology".

The approach described above may be termed the symbol-manipulation approach. Historically, however, AI grew out of work in which another approach, neural networks (or connectionism, parallel distributed processing, or non-symbolic representations), played the major role ; but this approach was outplayed by the symbol-manipulation approach until the 80s, when neural networks again got an important role.
Artificial lntelligence Knowled 4-13
4-12
epresentation Is ues ArtificialIntelligence

Knowledge epresentation lssues


A final approach is mentioned by Davis (2001, p. 8138): statistical analysis of large corpora of data.

The approaches to KR have parallels in theories of psychology as well as in epistemology. We will start by considering neural networks, then "symbolic" approaches, and finally consider large corpora of data (which is most related to library and information science, concerned with large bibliographical and full-text databases).

Neural networks

While biological neural networks exist, for example, in the human brain, Artificial Neural Networks (ANN) are mathematical or computational models of information processing. There is no precise agreed definition amongst researchers as to what a neural network is, but the original inspiration for the technique was the examination of bioelectrical networks in the brain formed by neurons and their synapses. In a neural network model, simple nodes (or "neurons") are connected together to form a network of nodes, hence the term "neural network".

[Figure: four input nodes (Input #1 to Input #4) feed a hidden layer, which feeds a single output node.]

Fig. 4.2.4 A model of a neural net
(copied from Government web site: http://smig.usgs.gov/SMIG/features_0902/tualatin_ann.fig3.gif. USGS-authored information is in the public domain)

In a typical neural network, each node operates on a principle similar to a biological neuron. In a biological neuron, each incoming synapse of a neuron has a weight associated with it. When the weight of each synapse, times its input, summed up for all incoming synapses, is greater than a threshold value, then the neuron fires, sending a value to another neuron in the network.

Each artificial neural network node attempts to emulate this behaviour. Each node has a set of input lines which are analogous to input synapses in a biological neuron. Each node also has an "activation function" (also known as a "transfer function"), which tells a node when to fire, similar to a biological neuron. In its simplest form, this activation function can just generate a '1' if the summed input is greater than some value, or a '0' otherwise. Activation functions, however, do not have to be this simple - in fact, to create networks that can do useful work, they almost always have to be more complex, for at least some of the nodes in the network.

A feedforward neural network, which is one of the more common neural network schemes, is composed of a set of these nodes and connections. The nodes are arranged in layers. The connections are typically formed by connecting each of the nodes in a given layer to all of the neurons in the next layer; in this way every node in a given layer is connected to every node in the next layer.

Typically there are at least three layers in a feedforward network - an input layer, a hidden layer and an output layer. The input layer does no processing - it is simply where the data vector is fed into the network. The input layer then feeds into the hidden layer. The hidden layer, in turn, feeds into the output layer. The actual processing in the network occurs in the nodes of the hidden layer and the output layer.

"When enough neurons are connected together in layers, the network can be 'trained' to do useful things using a training algorithm. Feedforward networks, in particular, are very useful, when trained appropriately, to do intelligent classification or identification type tasks on unfamiliar data." (Wikipedia : Neural network, 2005).

As an approach to knowledge representation, a neural network is much like behaviourism in psychology. Behaviourism dominated American psychology from about 1913 to 1970. Its main interest was how to shape the behaviour of animals and human beings by confronting such organisms with different kinds of stimuli-patterns. It is thus very much an input-output approach (or stimulus-response approach). Behaviourists tried to avoid mental terms (e.g. 'memory') or replaced them with terms referring to relations between stimuli and responses (e.g. replacing 'memory' with 'delayed response'). Although most behaviourists treated the brain as a 'black box', some behaviourists preferred to look at brain processes, structures and brain models, and the idea of neural networks was put forward for the first time by the behaviourist Donald O. Hebb in 1949.

Both the computer technology of neural nets and classical behaviourism are closely related to classical British empiricist ideas in epistemology.

TECHNICAL PUBLICATIONS - An up thrust for knowledge
The basic idea may be that knowledge is represented in the brain as a result of repetitions of similar stimuli-processes, which is why learning follows the laws of association; the view is based on empiricism. From our point of view the most important issue is that what represents wanted behaviour is provided by somebody who is in control of the learning process. It is his or her view of what is considered true, relevant and important knowledge. This knowledge is not formulated and provided directly, but is indirectly managed by manipulating the stimulation of the system or the organism (simplified: by feedback which involves rewards and/or punishment).

Symbol representation

There are several approaches to knowledge representation in AI which can be seen as subcategories of the symbol-representation approach. They all share the condition that the knowledge is explicated by the use of some kind of symbolic language and is installed in the system "manually", piece by piece. The three most important kinds of symbolic systems may be a) logic based systems, b) semantic networks and c) frame-based systems.

a) Logic based representations

Knowledge may be represented in computers by programmers writing declarative sentences such as "Socrates is human" and "if somebody is human, then she is mortal" using mathematical logic. "A major advantage of many logics adopted for knowledge representation is that they are sound and complete, which means that derivability and provability lead to the same set of consequences, given a knowledge base. It has however turned out to be difficult to find logics that are both expressively adequate for knowledge representation and also computationally tractable. Attempts to find an acceptable compromise to the expressiveness versus tractability trade-off generally use variations of first-order logic, following one of two approaches. The first approach limits the expressiveness of the language of representation by restricting the form of the formulas that can be admitted in the knowledge base. The second approach redefines the provability relation of first-order logic to make it computationally tractable. Relational databases, widely used to represent 'simple' facts, such as people's addresses or salaries, constitute a good example of the first approach" (Kramer and Mylopoulos, 1992, p. 745-746). The second approach may, for example, make a slight change in the semantics of existential quantification, which makes large representations computationally tractable but has a remarkable impact on the provability relation.

Logic based systems may also use procedural representations. Declarative representations treat the intended meaning of a knowledge base as something that imposes constraints on knowledge base operations. Procedural representations, on the other hand, reverse this dependency by identifying the meaning of a knowledge base with its operations (Kramer and Mylopoulos, 1992, p. 746).

b) Semantic networks

Semantic networks are knowledge representation schemes involving nodes and links between nodes. The nodes represent objects or concepts and the links represent relations between nodes. The links are directed and labeled. Semantic nets were originally motivated by cognitive models of human memory.

According to Kramer and Mylopoulos (1992, p. 747-748) their popularity and success can best be understood as a convenient compromise between the declarative and the procedural extremes, while "others have argued that semantic networks offer a fundamentally different representational paradigm that is object centered in the sense that it is based on object descriptions rather than arbitrary propositions and focuses on knowledge organization".

WordNet is an example of a semantic net. The semantic web is a concept that represents a major research program with semantic networks. For many persons this idea of a semantic web represents the kind of knowledge organization with the most promising prospects.

[Figure: a semantic net in which a Manufacturer creates a Brand, a Retailer carries the Brand, and the Brand has attributes (Size, Color) and is-a links to Category and Segment.]

Fig. 4.2.5 Semantic Nets
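The "Socrates" example of the logic based approach can be sketched as a toy forward-chaining program. The fact and rule encodings below are illustrative assumptions of this sketch; real logic-based systems use full first-order theorem provers rather than this simple propagation loop.

```python
# Declarative facts as (predicate, subject) pairs.
facts = {("human", "socrates")}

# Rule: "if somebody is human, then she is mortal."
rules = [(("human",), "mortal")]

def forward_chain(facts, rules):
    """Repeatedly apply the rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            for pred, subject in list(derived):
                if pred in premises and (conclusion, subject) not in derived:
                    derived.add((conclusion, subject))
                    changed = True
    return derived

# Derives ("mortal", "socrates") from the fact and the rule.
print(forward_chain(facts, rules))
```

Because the rule system is sound and the loop runs to a fixed point, derivability here coincides with what the rule logically entails, mirroring the soundness/completeness point in the quotation above.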
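A semantic net of labelled, directed links, in the spirit of Fig. 4.2.5, can be sketched as a set of triples. The relation names below paraphrase the figure and are approximate readings of it, not an exact transcription.

```python
# Each link is a (source node, relation label, destination node) triple.
links = [
    ("Manufacturer", "creates", "Brand"),
    ("Retailer", "carries", "Brand"),
    ("Brand", "is-a", "Category"),
    ("Brand", "has-attribute", "Size"),
    ("Brand", "has-attribute", "Color"),
]

def related(node, label):
    """Follow every link with the given label leaving `node`."""
    return [dst for src, rel, dst in links if src == node and rel == label]

print(related("Manufacturer", "creates"))   # ['Brand']
print(related("Brand", "has-attribute"))    # ['Size', 'Color']
```

The directed, labeled links are what distinguish a semantic net from a plain graph: the label carries the meaning of the relation between the two concepts.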
c) Frame-based representations

Frame-based systems are knowledge representation systems that use frames, a notion originally introduced by Minsky (1975), as their primary means to represent domain knowledge. A frame is a structure for representing a concept or situation such as "restaurant" or "being in a restaurant". Attached to a frame are several kinds of information, for instance, definitional and descriptive information and how to use the frame. Frames are supposed to capture the essence of concepts or stereotypical situations, for example going out for dinner, by clustering all relevant information for these situations together. This means, in particular, that a great deal of procedurally expressed knowledge should be part of the frames. Collections of such frames are to be organized in frame systems in which the frames are interconnected (Davis, 2001, p. 8137-8138). Obviously, frame-based systems are in many ways similar to object-oriented programming languages; indeed, the two theories interacted strongly in their development.

The chief advantages of frame-based architectures are expressivity, flexibility and ease of use. The chief disadvantages are lack of precision and lack of a well defined model of inference. The architecture provides a wealth of features and options for both representation and inference, but only a weak underlying model. Hence, in a complex case, it is difficult to predict how these features will interact or to explain unexpected interactions, which makes debugging and updating difficult.

From a psychological point of view the tendency to overuse frames as explanations has been criticized: "I am not going to argue against the existence (whatever that may be) of organised knowledge structures. What I will do is place doubts on the explanatory value of concepts as frames, conventions, scripts and so on... Even if there are structures like frames and scripts, they are relatively easy for people to override. People can still use arbitrary knowledge of the world to understand sentences and scenes: you cannot exclude any part of the knowledge base in advance, using some general prestructuring of that knowledge. Therefore the content of such structures as frames and scripts must themselves be analyzable and subject to reasoning by their users, which puts us back where we started. What we have gained is a summary of the regularities frequently, or typically, exhibited. The structures themselves tell nothing about people's cognitive capacities, only about what are probably rather ephemeral habits of thought which people can change. In terms of Billig (1987) frames and scripts lack any kind of 'witcraft'. Frames, scripts and related structures summarize some of the patterns that emerge when people don't bother to ..." (Vliet, 1992).

The general epistemic aspects of KR in computer science: "The KR [Knowledge Representation] architectures we have considered above [logic-based systems, semantic networks and frame-based systems], with other proposals of a more or less similar flavour, such as production systems, constitute what may be called the classical, or (with some hedging) the knowledge-based approach to AI. Knowledge representation, in this view, involves large, complex structures of symbols, defined and assembled by hand. This approach to AI essentially derives from a line of philosophical thought running from Descartes through Leibniz, Frege, and Russell. In the late 1980s and 1990s, however, as a result of the inherent difficulty of this line of research, and of the limited progress that has been made, this approach to AI has been challenged by two alternative methodologies: neural networks, and statistical analysis of large corpora" (Davis, 2001, p. 8137-8138).

The symbolic forms of knowledge representation thus correspond to cognitivism in psychology and to rationalism in epistemology, while neural networks correspond both to behaviourism in psychology and to empiricism in epistemology. They may be said to ignore the subjective side of knowledge representation. In the symbolic form of KR the person in control of the programming tasks is defining and assembling the knowledge. Nothing is said about whether different subjects (representing different traditions or paradigms) would or should define and assemble different kinds of knowledge. In neural networks the person in control of the stimulation is determining what the organism or the system should learn. Nothing is said about how persons' criteria may be connected to subjective views and socio-cultural factors. In both cases it is assumed, without any kind of examination, that the knowledge representation is "objective". What kinds of perspective are missing may be uncovered by the following quotation: "More recently, Clancey (1991) argues that the knowledge level has relativistic properties. A knowledge-level description is of an agent in its environment. It is a theory, not possessed by the agent being studied" (Clancey, 1992, p. 743). Yes! The description of an agent's or a system's knowledge level (or, generalized: a description of its knowledge) has relative properties and implies a theory of the agent or system from specific perspectives. This is a basic point in pragmatic theories of knowledge, but it has yet to be fully implemented in the understanding of knowledge representation. In spite of the recognition of AI as applied epistemology there has not been much systematic investigation of the relation between epistemological theories on the one hand and theories of knowledge representation on the other hand. This is odd, because any theory of knowledge representation must be based on a theory of knowledge. Epistemology is the theory of knowledge.
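Returning to frames: the slot-and-filler structure described earlier, with frames interconnected through "is-a" links, can be sketched in Python. The slot names, fillers and the small hierarchy below are invented for illustration; they are not taken from Minsky's original system.

```python
# Each frame is a named collection of slots; "is-a" links frames together.
# The sketch assumes an acyclic is-a chain.
frames = {
    "restaurant": {
        "is-a": "establishment",
        "serves": "food",
        "event-sequence": ["enter", "order", "eat", "pay", "leave"],
    },
    "fast-food-restaurant": {"is-a": "restaurant", "service": "counter"},
}

def get_slot(frame_name, slot):
    """Look up a slot, inheriting along is-a links when it is missing locally."""
    frame = frames.get(frame_name)
    while frame is not None:
        if slot in frame:
            return frame[slot]
        frame = frames.get(frame.get("is-a"))
    return None

print(get_slot("fast-food-restaurant", "serves"))   # inherited: 'food'
```

The inherited default ("a restaurant serves food") illustrates both the convenience and the imprecision noted above: the frame supplies a stereotype that a particular case may silently override.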
In overviews of knowledge representation in AI (such as Davis, 2001 and Kramer and Mylopoulos, 1992) only empiricism and rationalism are reflected as epistemological positions. There is an obvious need to expand such overviews by coverage of different epistemological approaches. Hermeneutics has been regarded in the field of AI first and foremost by Winograd & Flores (1987). Additional contributions include Mallery, Hurwitz and Duffy (1992); Chalmers (1999) and Fonseca and Martin (2005). There seems to be a need for a more direct application of historicist/hermeneutical/pragmatic approaches to knowledge representation.

In the next section we shall very briefly suggest such an approach to knowledge representation. The main point will be put on the subjectivity of the person doing the representation, in line with the thoughts introduced in the first part of this article.

Analysis of large corpora

Among alternatives to both neural nets and symbolic KRs, E. Davis mentions text corpora:

"The statistical approach to AI involves taking very large corpora of data and analyzing them in great depth using statistical techniques. These statistics can then be used to guide new tasks. The resulting data, as compared to the knowledge-based approach, are extremely shallow in terms of their semantic content, since the categories extracted must be easily derived from the data, but they can be immensely detailed and precise in terms of statistical relations. Moreover, techniques such as maximum entropy analysis allow a collection of statistical indicators, each individually quite weak, to be combined effectively into strong collective evidence. From the point of view of knowledge representation, the most interesting data corpora are online libraries of text. Libraries of pure text exist online containing billions of words; libraries of extensively annotated texts exist containing hundreds of thousands to millions of words, depending on the type of annotation. Now, in 2001, statistical methods of natural language analysis are, in general, comparable in quality to carefully hand-crafted natural language analyzers; however, they can be created for a new language or a new domain at a small fraction of the cost in human labor" (Davis, 2001, p. 8138).

Large corpora of data may be approached by methods related to empiricism, which seems to be what Davis is suggesting. There is an important difference, however, between traditional empiricist approaches to knowledge representation and "text corpora" approaches. In the traditional approach, what is considered knowledge by the person doing the representation is represented; there is only one voice present. In large corpora of texts many voices are present (what kind of voices varies according to how the text corpus is selected, e.g. if it consists of newspapers or scholarly papers).

Large corpora of texts consist of documents, each of which is itself a system of arguments and knowledge claims. We are now in the realm of Library and Information Science (LIS) rather than in computer science in a narrow sense. What are represented in LIS are representations of documents (thus meta-representations). If, for example, the text corpus is an academic corpus from the same domain as the person doing the representation (e.g. computer science), then different suggestions and voices on how best to perform the task at hand are present in the very material to be (meta) represented. Different paradigms contain arguments in favour of specific ways to do the representation.

In other words: the texts to be organized are voices, which probably will contain different implications for how this knowledge should be organized (and by the way also implications for how texts should be selected in the first hand). This argument may be expanded also to cases in which the corpus is not in the domain of knowledge representation: any document has implicit or explicit criteria of relevance, which are of importance for organizing those documents.

If we consider the domain of Arts, then the criteria for how best to represent arts depend on what is considered (good) art. As discussed by Ørom (2003), different traditions in Arts have different implications for how arts should be represented. In corpora there are different voices, not just the programmer's voice. The programmer may ignore these voices and provide knowledge representations based on his own voice alone, or the programmer may consider those voices and provide a knowledge representation which represents a dialog between himself and the voices in the corpora. This way it is possible to use corpora based on pragmatic epistemologies rather than empiricist epistemologies (see also Hjorland and Nissen Pedersen, in press).

Answer in Brief

1. Write a note on representations and mappings. (Refer section 4.1)
2. What are various approaches to knowledge representation ? (Refer section 4.2)
3. Explain neural nets. (Refer section 4.1)

4.3 University Questions with Answers

Winter-14

Q.1 Explain the different issues in knowledge representation. (Refer section 4.1.2)
7 Uncertainty
Syllabus
Uncertainty - Acting under Uncertainty, Basic Probability Notation, The Axioms of Probability, Inference Using Full Joint Distributions.

Contents
7.1 Acting Under Uncertainty
7.2 Utility Theory
7.3 The Basic Probability Notation Winter-18, Marks 4

7.4 University Question with Answer

(7-1)
Artificial Intelligence 7-2
Uncertainty
7.1 Acting Under Uncertainty
Introduction
An agent working in a real-world environment almost never has access to the whole truth about its environment. Therefore, the agent needs to work under uncertainty.
The agents we have seen earlier make the epistemological commitment that facts (expressed as propositions) are either true, false, or unknown. When an agent knows enough facts about its environment, the logical approach enables it to derive plans which are guaranteed to work.
But when an agent works with uncertain knowledge, it might be impossible to construct a complete and correct description of how its actions will work. If a logical agent cannot conclude that any particular course of action achieves its goal, then it will be unable to act.
The right thing a logical agent can do is take a rational decision. The rational decision depends on the following things :
- The relative importance of the various goals.
- The likelihood of, and the degree to which, the goals will be achieved.

An agent would possess some basic prior knowledge of the world (assume that the knowledge is represented in first-order logic sentences). Using first-order logic to handle real-world problem domains fails for three main reasons, as discussed below :

1) Laziness : It is too much work to list the complete set of antecedents or consequents needed to ensure an exceptionless rule, and too hard to use such rules.

2) Theoretical ignorance : A particular problem may not have a complete theory for the domain.

3) Practical ignorance : Even if all the rules are known, particular aspects of the problem may not have been checked yet, or some details may not have been considered at all (missing details).
The agent's knowledge can provide it with only a degree of belief in the relevant sentences. To this degree of belief, probability theory is applied. Probability assigns a numerical degree of belief between 0 and 1 to each sentence.

Probability provides a way of summarizing the uncertainty that comes from our laziness and ignorance.

Assigning a probability of 0 to a given sentence corresponds to an unequivocal belief that the sentence is false. Assigning a probability of 1 corresponds to an unequivocal belief that the sentence is true. Probabilities between 0 and 1 correspond to intermediate degrees of belief in the truth of the sentence.

The beliefs depend on the percepts of the agent at a particular time. These percepts constitute the evidence on which probability assertions are based. Assigning a probability to a proposition is analogous to saying whether the given logical sentence (or its negation) is entailed by the knowledge base, rather than whether it is true or not. When more sentences are added to the knowledge base, the entailments keep changing. Similarly, the probability would also keep changing with additional knowledge.
All probability statements must therefore indicate the evidence with respect to which the probability is being assessed. As the agent receives new percepts, its probability assessments are updated to reflect the new evidence. Before the evidence is obtained, we talk about prior or unconditional probability; after the evidence is obtained, we talk about posterior or conditional probability. In most cases, an agent will have some evidence from its percepts and will be interested in computing the posterior probabilities of the outcomes it cares about.

Uncertainty and rational decisions

The presence of uncertainty drastically changes the way an agent makes decisions. At a particular time an agent can have various available decisions, from which it has to make a choice. To make such choices an agent must have preferences between the different possible outcomes of the various plans.

A particular outcome is a completely specified state, along with the expected factors related to the outcome.

For example : Consider a car-driving agent who wants to reach the airport by a specific time, say 7.30 pm. Here, factors like whether the agent arrived at the airport on time, and what the length of the waiting duration at the airport is, are attached to the outcome.

7.2 Utility Theory

Utility theory is used to represent and reason with preferences. The term utility in the current context means "the quality of being useful".

Utility theory says that every state has a degree of usefulness, called utility. The agent will prefer the states with higher utility.

The utility of a state is relative to the agent for which the utility function is calculated, on the basis of that agent's preferences.



For example : The payoff functions for games are utility functions. The utility of a state in which black has won a game of chess is obviously high for the agent playing black and low for the agent playing white.

Someone loves deep chocolate ice cream and someone loves chocochip ice cream; there is no measure here that can count taste or preferences. A utility function can even account for altruistic behavior, simply by including the welfare of others as one of the factors contributing to the agent's own utility.

Decision theory

Preferences, as expressed by utilities, are combined with probabilities for making rational decisions. This theory of rational decision making is called decision theory.

Decision theory can be summarized as,
Decision theory = Probability theory + Utility theory

The principle of Maximum Expected Utility (MEU) : Decision theory says that an agent is rational if and only if it chooses the action that yields the highest expected utility, averaged over all the possible outcomes of the action.

Design for a decision theoretic agent

The following algorithm sketches the structure of an agent that uses decision theory to select actions.

The algorithm :

function DT-AGENT(percept) returns an action
    static : belief_state, probabilistic beliefs about the current state of the world
             action, the agent's action

    update belief_state based on action and percept
    calculate outcome probabilities for actions,
        given action descriptions and current belief_state
    select action with highest expected utility,
        given probabilities of outcomes and utility information
    return action

A decision theoretic agent that selects rational actions.

The decision theoretic agent is identical, at an abstract level, to the logical agent. The primary difference is that the decision theoretic agent's knowledge of the current state is uncertain; the agent's belief state is a representation of the probabilities of all possible actual states of the world.
As time passes, the agent accumulates more evidence and its belief state changes.
Given the belief state, the agent can make probabilistic
predictions of action outcomes
and hence select the action with highest expected
utility.
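The MEU principle above can be sketched numerically: pick the action whose expected utility, averaged over its possible outcomes, is highest. The two candidate departure times, their outcome probabilities and the utilities below are invented for illustration, loosely following the airport example of section 7.1.

```python
# action -> list of (probability_of_outcome, utility_of_outcome) pairs
actions = {
    "leave_at_6pm": [(0.95, 100), (0.05, -1000)],  # short wait, small risk of missing flight
    "leave_at_5pm": [(0.99, 80), (0.01, -1000)],   # long wait, but almost surely on time
}

def expected_utility(outcomes):
    """Average utility, weighted by the probability of each outcome."""
    return sum(p * u for p, u in outcomes)

def meu_action(actions):
    """The rational choice under MEU: the action with highest expected utility."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

print(meu_action(actions))   # leave_at_5pm
```

With these numbers the earlier departure wins (expected utility 69.2 versus 45), even though its best-case outcome is worse: the small chance of a catastrophic outcome dominates the comparison.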

7.3 The Basic Probability Notation                                  GTU : Winter-18

The probability theory uses the propositional logic language with additional expressiveness. Probability theory uses prior probability statements, which apply before any evidence is obtained, and conditional probability statements, which include the evidence explicitly.
7.3.1 Propositions

1) The propositions (assertions) are attached with a degree of belief.
2) Complex propositions can be formed using standard logical connectives.
   For example : [(Cavity = True) ∧ (Toothache = False)] and [Cavity ∧ ¬Toothache] are the same assertion.

The random variable :
1) The basic element of the language is the random variable.
2) It refers to a "part" of the world whose "status" is initially unknown.
   For example : In the toothache problem, 'Cavity' is a random variable which can refer to my left wisdom tooth or right wisdom tooth.
3) Random variables are like symbols in propositional logic.
4) Random variables are represented using capital letters, whereas an unknown random variable is represented with a lowercase letter.
   For example : P(a) = 1 - P(¬a)
5) Each random variable has a domain of values that it can take on. That is, the domain is the set of allowable values for the random variable.
   For example : The domain of Cavity can be <true, false>.
6) A random variable's proposition asserts what value is drawn for the random variable from its domain.
   For example : Cavity = True is a proposition, saying that "there is a cavity in my lower left wisdom tooth".
7) Random variables are divided into three kinds, depending on their domain. The types are as follows :
   i) Boolean random variables : These are random variables that can take up only boolean values.
      For example : Cavity; it takes the value either true or false.


   ii) Discrete random variables : They take values from a countable domain, and also include the boolean domain. The values in the domain must be mutually exclusive and exhaustive.
      For example : Weather; it has domain <sunny, rainy, cloudy, cold>.
   iii) Continuous random variables : They take values from the real numbers. The domain can be either the entire real line or a subset of it, like the interval [2, 3].
      For example : X = 4.14 asserts that X has the exact value 4.14.
      Propositions involving continuous random variables can have inequalities, like X ≤ 4.14.

7.3.2 Atomic Events

1) An atomic event is a complete specification of the state of the world about which the agent is uncertain.
2) They are represented as variables which are assigned values from the real world.
   For example : If the world consists of Cavity and Toothache, then there are four distinct atomic events,
   a) Cavity = False ∧ Toothache = True
   b) Cavity = False ∧ Toothache = False
   c) Cavity = True ∧ Toothache = False
   d) Cavity = True ∧ Toothache = True
3) Properties of atomic events :
   i) They are mutually exclusive - at most one can actually be the case.
      For example : (Cavity ∧ Toothache) and (Cavity ∧ ¬Toothache) can not both be the case.
   ii) The set of all possible atomic events is exhaustive - at least one must be the case. That is, the disjunction of all atomic events is logically equivalent to true.
   iii) Any particular atomic event entails the truth or falsehood of every proposition, whether simple or complex.
      For example : The atomic event (Cavity ∧ ¬Toothache) entails the truth of Cavity and the falsehood of (Cavity ⇒ Toothache).
   iv) Any proposition is logically equivalent to the disjunction of all atomic events that entail the truth of the proposition.
      For example : The proposition Cavity is equivalent to the disjunction of the atomic events (Cavity ∧ Toothache) and (Cavity ∧ ¬Toothache).
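The four atomic events of the Cavity/Toothache world, and their two key properties (mutual exclusion and exhaustiveness), can be checked by enumerating every complete truth assignment:

```python
from itertools import product

# One atomic event = one complete assignment of truth values to all variables.
variables = ["Cavity", "Toothache"]
atomic_events = [dict(zip(variables, values))
                 for values in product([True, False], repeat=len(variables))]

print(len(atomic_events))   # 4 distinct atomic events

# Exhaustive and mutually exclusive: every possible world matches exactly one event.
world = {"Cavity": True, "Toothache": False}
matches = [e for e in atomic_events if e == world]
print(len(matches))         # 1
```

The same enumeration underlies property iv): the proposition Cavity corresponds to exactly the two events in the list with Cavity = True.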

7.3.3 Prior Probability (Unconditional Probability)

1) The prior (unconditional) probability is associated with a proposition 'a'.
2) It is the degree of belief accorded to a proposition in the absence of any other information.
3) It is written as P(a).
   For example : If the probability that Ram has a cavity is 0.1, then the prior probability is written as, P(Cavity = true) = 0.1 or P(cavity) = 0.1.
4) It should be noted that as soon as new information is received, one should reason with the conditional probability of 'a' depending upon the new information.

5) When it is required to express the probabilities of all the possible values of a random variable, a vector of values is used. It is represented using P(a). This represents values for the probabilities of each individual state of 'a'.
   For example : P(Weather) = <0.7, 0.2, 0.08, 0.02> represents the four equations,
   P(Weather = Sunny) = 0.7
   P(Weather = Rain) = 0.2
   P(Weather = Cloudy) = 0.08
   P(Weather = Cold) = 0.02
6) The expression P(a) is said to define the prior probability distribution for the random variable 'a'.
7) To denote the probabilities of all combinations of random variables, the expression P(a1, a2) can be used. This is called the joint probability distribution of the random variables a1, a2. Any number of random variables can be mentioned in the expression.
8) A simple example of a joint probability distribution is,
   P(Weather, Cavity), which can be represented as a 4 x 2 table of probabilities (Weather's four values by Cavity's two values).
9) A joint probability distribution that covers the complete set of random variables is called the full joint probability distribution.
10) A simple example of a full joint probability distribution is,
   If the problem world consists of the 3 random variables Weather, Cavity and Toothache, then the full joint probability distribution would be,
   P(Weather, Cavity, Toothache)
   It will be represented as a 4 x 2 x 2 table of probabilities.
11) Prior probability for continuous random variables :
i) For a continuous random variable it is not feasible to represent a vector of all possible values because the values are infinite. For a continuous random variable the probability is defined as a density function with a parameter x, which indicates that the random variable takes some value x.

For example : Let the random variable X denote tomorrow's temperature in Chennai. It would be represented as,

P(X = x) = U[25 - 37] (x)

This sentence expresses the belief that X is distributed uniformly between 25 and 37 degrees Celsius.

ii) The probability distribution for a continuous random variable is called a probability density function.
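As a sketch, the uniform density above can be expressed as a small function. The interval endpoints come from the temperature example; the helper name is our own.

```python
def uniform_density(x, low=25.0, high=37.0):
    """Density of U[low, high] at point x.

    For a continuous random variable, P(X = x) is a density value:
    1 / (high - low) inside the interval and 0 outside, so the total
    area under the curve is 1.
    """
    if low <= x <= high:
        return 1.0 / (high - low)
    return 0.0

# Density of tomorrow's temperature at 30 degrees Celsius: 1/12
print(round(uniform_density(30.0), 4))  # 0.0833
```

Note that the returned value is a density, not a probability; probabilities come from integrating the density over an interval.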

7.3.4 Conditional Probability


1) When an agent obtains evidence concerning previously unknown random variables in the domain, prior probabilities are no longer used. Based on the new information, conditional or posterior probabilities are calculated.

2) The notation is P(a | b) where a and b are any propositions. P(a | b) is read as "the probability of a given that all we know is b". That is, when b is known it indicates the probability of a.

For example : P(Cavity | Toothache) = 0.8
It means that, if a patient has toothache (and no other information is known), then the chance of having a cavity is 0.8.
3) Prior probabilities are in fact a special case of conditional probability. A prior can be represented as P(a), which means the probability of 'a' conditioned on no evidence.
4) Conditional probability can be defined in terms of unconditional probabilities. The equation would look like,

P(a | b) = P(a ∧ b) / P(b), which holds whenever P(b) > 0 ... (7.3.1)

The above equation can also be written as,

P(a ∧ b) = P(a | b) P(b)

This is called the product rule. In other words it says, for 'a' and 'b' to be true we need 'b' to be true and we need 'a' to be true given b. It can also be written as,

P(a ∧ b) = P(b | a) P(a)
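A quick numeric check of equation (7.3.1) and the product rule; the two input numbers below are illustrative, not from the text.

```python
# Illustrative values: suppose P(a AND b) = 0.12 and P(b) = 0.2
p_a_and_b = 0.12
p_b = 0.2

# Equation (7.3.1): P(a | b) = P(a AND b) / P(b), valid since P(b) > 0
p_a_given_b = p_a_and_b / p_b

# Product rule: P(a | b) * P(b) recovers the conjunction P(a AND b)
assert abs(p_a_given_b * p_b - p_a_and_b) < 1e-12
print(round(p_a_given_b, 2))  # 0.6
```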


5) Conditional probabilities are used for probabilistic inferencing.

6) The P notation can also be used for conditional distributions. P(X | Y) gives the values of P(X = xi | Y = yj) for each possible pair i, j.

The following are the individual equations,

P(X = x1 ∧ Y = y1) = P(X = x1 | Y = y1) P(Y = y1)
P(X = x1 ∧ Y = y2) = P(X = x1 | Y = y2) P(Y = y2)

These can be combined into a single equation as,

P(X, Y) = P(X | Y) P(Y)

7) Conditional probabilities should not be treated as logical implications. That is, "when 'b' holds then the probability of 'a' is something" is a conditional probability and is not to be mistaken for a logical implication. Treating it as an implication is wrong on two points. One is that P(a) always denotes a prior probability, which does not require any evidence. Secondly, P(a | b) = 0.7 is relevant only when b is the available evidence, and it will keep on altering as information is updated; logical implications do not change over time.

7.3.5 The Probability Axioms

Axioms give the semantics of probability statements. The basic axioms (Kolmogorov's axioms) serve to define the probability scale and its end points.

1) All probabilities are between 0 and 1. For any proposition a, 0 ≤ P(a) ≤ 1.

2) Necessarily true (i.e., valid) propositions have probability 1, and necessarily false (i.e., unsatisfiable) propositions have probability 0.

P(true) = 1     P(false) = 0

3) The probability of a disjunction is given by

P(a ∨ b) = P(a) + P(b) - P(a ∧ b)

This axiom connects the probabilities of logically related propositions. The rule states that the cases where 'a' holds, together with the cases where 'b' holds, certainly cover all the cases where 'a ∨ b' holds; but summing the two sets of cases counts their intersection twice, so we need to subtract P(a ∧ b).

Note : The axioms deal only with prior probabilities rather than conditional probabilities; this is because conditional probability can be defined in terms of prior probabilities.

Using the axioms of probability

From the basic probability axioms the following facts can be deduced.

P(a ∨ ¬a) = P(a) + P(¬a) - P(a ∧ ¬a)   (by axiom 3 with b = ¬a)
P(true) = P(a) + P(¬a) - P(false)       (by logical equivalence)
1 = P(a) + P(¬a)                        (by axiom 2)
P(¬a) = 1 - P(a)                        (by algebra)
Let the discrete variable D have the domain <d1, ..., dn>. Then,

Σ (i = 1 to n) P(D = di) = 1

That is, any probability distribution on a single variable must sum to 1.


It is also true that any joint probability distribution on any set of variables must sum to 1. This can be seen simply by creating a single megavariable whose domain is the cross product of the domains of the original variables.

Atomic events are mutually exclusive, so the probability of a conjunction of two different atomic events is zero, by axiom 2.

From axiom 3, we can derive the following simple relationship : The probability of a proposition is equal to the sum of the probabilities of the atomic events in which it holds; that is,

P(a) = Σ P(ei), summed over ei ∈ e(a) ... (7.3.2)
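These consequences of the axioms can be checked numerically. The four-value distribution below is illustrative (the same numbers as the earlier Weather vector).

```python
# A distribution over a single discrete variable D with domain <d1..d4>
dist = {"d1": 0.7, "d2": 0.2, "d3": 0.08, "d4": 0.02}

# Any single-variable distribution must sum to 1
assert abs(sum(dist.values()) - 1.0) < 1e-9

# Complement rule P(not a) = 1 - P(a), with a = "D is d1"
p_a = dist["d1"]
p_not_a = sum(p for d, p in dist.items() if d != "d1")
assert abs(p_not_a - (1.0 - p_a)) < 1e-9

# Equation (7.3.2): a proposition's probability is the sum over the
# atomic events in which it holds, e.g. a = "D is d2 or d3"
print(round(dist["d2"] + dist["d3"], 2))  # 0.28
```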

7.3.6 Inference using Full Joint Distribution


Probabilistic inference means computation, from observed evidence, of posterior probabilities for query propositions. The knowledge base used for answering the query is represented as a full joint distribution. Consider a simple example consisting of three boolean variables, Toothache, Cavity, Catch. The full joint distribution is a 2 x 2 x 2 table, as shown below.

               Toothache               ¬Toothache
            Catch    ¬Catch         Catch    ¬Catch
Cavity      0.108    0.012          0.072    0.008
¬Cavity     0.016    0.064          0.144    0.576

Note that the probabilities in the joint distribution sum to 1.


One particular common task in inferencing is to extract the distribution over some subset of variables, or over a single variable. This distribution over a subset of variables is called a marginal probability.

For example : P(Cavity) = 0.108 + 0.012 + 0.072 + 0.008 = 0.2


This process is called marginalization or summing out, because the variables other than Cavity (whose probability is being computed) are summed out.

The general marginalization rule is as follows. For any sets of variables Y and Z,

P(Y) = Σz P(Y, z) ... (7.3.3)

It indicates that the distribution over Y can be obtained by summing out all the other variables from any joint distribution containing Y.

A variant of the general marginalization rule involves conditional probabilities, using the product rule,

P(Y) = Σz P(Y | z) P(z) ... (7.3.4)

This rule is called the conditioning rule.


For example : Computing the probability of a cavity, given evidence of a toothache, is as follows,

P(Cavity | Toothache) = P(Cavity ∧ Toothache) / P(Toothache)
                      = (0.108 + 0.012) / (0.108 + 0.012 + 0.016 + 0.064) = 0.6

Normalization constant : It is a value that remains constant across the distribution and ensures that the distribution adds up to 1. α is used to denote such a constant.

For example : We can compute the probability of a cavity, given evidence of a toothache, as follows,

P(Cavity | Toothache) = P(Cavity ∧ Toothache) / P(Toothache)
                      = (0.108 + 0.012) / (0.108 + 0.012 + 0.016 + 0.064) = 0.6

Just to check, we can also compute the probability that there is no cavity given a toothache,

P(¬Cavity | Toothache) = P(¬Cavity ∧ Toothache) / P(Toothache)
                       = (0.016 + 0.064) / (0.108 + 0.012 + 0.016 + 0.064) = 0.4

Notice that in these two calculations the term 1/P(Toothache) remains constant, no matter which value of Cavity we calculate. With the α notation we can write the above two equations as one,

P(Cavity | Toothache) = α P(Cavity, Toothache)
                      = α [P(Cavity, Toothache, Catch) + P(Cavity, Toothache, ¬Catch)]
                      = α [<0.108, 0.016> + <0.012, 0.064>]

= α <0.12, 0.08> = <0.6, 0.4>

From the above one can extract a general inference procedure.

Consider the case in which the query involves a single variable. Let X be the query variable (Cavity in the example), let E be the set of evidence variables (just Toothache in the example), let e be the observed values for them, and let Y be the remaining unobserved variables (just Catch in the example). The query is P(X | e) and it can be evaluated as

P(X | e) = α P(X, e) = α Σy P(X, e, y) ... (7.3.5)

where the summation is over all possible y's (i.e. all possible combinations of values of the unobserved variables Y). Notice that together the variables X, E and Y constitute the complete set of variables for the domain, so P(X, e, y) is simply a subset of probabilities from the full joint distribution.
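Equation (7.3.5) can be sketched directly over the dentist full joint table; the function name and the dictionary representation are our own choices, not from the text.

```python
# Full joint distribution from the table, keyed by
# (cavity, toothache, catch) truth values.
JOINT = {
    (True,  True,  True):  0.108, (True,  True,  False): 0.012,
    (True,  False, True):  0.072, (True,  False, False): 0.008,
    (False, True,  True):  0.016, (False, True,  False): 0.064,
    (False, False, True):  0.144, (False, False, False): 0.576,
}
VARS = ("cavity", "toothache", "catch")

def query(x, evidence):
    """P(x | evidence) = alpha * sum over y of P(x, e, y) -- eq. (7.3.5)."""
    unnorm = {}
    for value in (True, False):
        total = 0.0
        for event, p in JOINT.items():
            assignment = dict(zip(VARS, event))
            if assignment[x] == value and all(
                    assignment[var] == val for var, val in evidence.items()):
                total += p          # sums out the unobserved variables
        unnorm[value] = total
    alpha = 1.0 / sum(unnorm.values())   # normalization constant
    return {value: alpha * p for value, p in unnorm.items()}

result = query("cavity", {"toothache": True})
print(round(result[True], 1), round(result[False], 1))  # 0.6 0.4
```

With empty evidence the same function returns the marginal P(Cavity) = <0.2, 0.8>, matching the summing-out example above.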

7.3.7 Independence

Independence is a relationship between two different variables (or sets of variables) in a full joint distribution. It is also called marginal or absolute independence of the variables. Independence indicates whether the two variables affect each other's probability.

The independence between variables X and Y can be written as follows,

P(X | Y) = P(X) or P(Y | X) = P(Y) or P(X, Y) = P(X) P(Y)

For example : The weather is independent of one's dental problem, which can be shown by the equation below.

P(Toothache, Catch, Cavity, Weather) = P(Toothache, Catch, Cavity) P(Weather)

The following diagram shows factoring a large joint distribution into smaller distributions, using absolute independence. Weather and dental problems are independent.

Fig. 7.3.1 Factoring a large joint distribution into smaller distributions
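The factoring in Fig. 7.3.1 can be sketched numerically. The marginals below are illustrative (the Weather vector from earlier in this chapter, and P(Cavity) = 0.2 from the dentist table).

```python
# Independent marginals: a 4-value Weather and a 2-value Cavity
p_weather = {"sunny": 0.7, "rain": 0.2, "cloudy": 0.08, "cold": 0.02}
p_cavity = {True: 0.2, False: 0.8}

# Absolute independence lets the 4 x 2 joint be built as a product:
# P(Weather, Cavity) = P(Weather) P(Cavity)
joint = {(w, c): pw * pc
         for w, pw in p_weather.items()
         for c, pc in p_cavity.items()}

# The factored joint is still a proper distribution (sums to 1),
# yet is specified by only 4 + 2 numbers instead of 8.
assert abs(sum(joint.values()) - 1.0) < 1e-9
print(round(joint[("sunny", True)], 2))  # 0.14
```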

7.3.8 Bayes' Rule


Bayes' rule is derived from the product rule.

The product rule can be written as,

P(a ∧ b) = P(a | b) P(b) ... (7.3.6)
P(a ∧ b) = P(b | a) P(a) ... (7.3.7)   [because conjunction is commutative]

Equating the right-hand sides of equation (7.3.6) and equation (7.3.7) and dividing by P(a),

P(b | a) = P(a | b) P(b) / P(a)

This equation is called Bayes' rule (also Bayes' theorem or Bayes' law). This rule is very useful in probabilistic inference.

The generalized Bayes' rule is,

P(Y | X) = P(X | Y) P(Y) / P(X)

(where P has the same meaning)

We can have a more general version, conditionalized on some background evidence e,

P(Y | X, e) = P(X | Y, e) P(Y | e) / P(X | e)

The general form of Bayes' rule with normalization is,

P(y | x) = α P(x | y) P(y)

Applying Bayes' Rule

1) It requires a total of three terms (one conditional probability and two unconditional probabilities) for computing one conditional probability.

For example : The probability that a patient having low sugar also has high blood pressure is 50 %.

Let, m be the proposition, 'patient has low sugar'.
s be the proposition, 'patient has high blood pressure'.

Suppose we assume that the doctor knows the following unconditional facts,
i) Prior probability of (m) = 1/50,000
ii) Prior probability of (s) = 1/20

Then we have,
P(s | m) = 0.5
P(m) = 1/50000
P(s) = 1/20
P(m | s) = P(s | m) P(m) / P(s)
         = (0.5 x 1/50000) / (1/20)
         = 0.0002

That is, we can expect that 1 in 5000 patients with high blood pressure will have low sugar.
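The calculation above can be sketched as a small helper; the function name is our own.

```python
def bayes(p_b_given_a, p_a, p_b):
    """Bayes' rule: P(a | b) = P(b | a) P(a) / P(b)."""
    return p_b_given_a * p_a / p_b

# Low sugar (m) and high blood pressure (s) example from the text:
p_m_given_s = bayes(p_b_given_a=0.5,      # P(s | m)
                    p_a=1 / 50000,        # P(m)
                    p_b=1 / 20)           # P(s)
print(round(p_m_given_s, 6))  # 0.0002, i.e. 1 in 5000
```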

2) Combining evidence in Bayes' rule.

Bayes' rule is helpful for answering queries conditioned on multiple evidences.

For example : When the toothache and catch evidences are both available, a cavity is quite likely to exist, which can be represented as

P(Cavity | Toothache ∧ Catch) = α <0.108, 0.016> = <0.871, 0.129>

By using Bayes' rule to reformulate the problem :

P(Cavity | Toothache ∧ Catch) = α P(Toothache ∧ Catch | Cavity) P(Cavity) ... (7.3.8)

For this reformulation to work, we need to know the conditional probabilities of the conjunction Toothache ∧ Catch for each value of Cavity. That might be feasible for just two evidence variables, but again it will not scale up. If there are n possible evidence variables (X-rays, diet, oral hygiene, etc.), then there are 2^n possible combinations of observed values for which we would need to know conditional probabilities.
The notion of conditional independence can be used here. Toothache and Catch are independent, given the presence or the absence of a cavity. Each is directly caused by the cavity, but neither has a direct effect on the other. Toothache depends on the state of the nerves in the tooth, whereas the probe's accuracy depends on the dentist's skill, to which the toothache is irrelevant.

Mathematically, this property is written as,

P(Toothache ∧ Catch | Cavity) = P(Toothache | Cavity) P(Catch | Cavity) ... (7.3.9)

This equation expresses the conditional independence of Toothache and Catch, given Cavity.

Substituting equation (7.3.9) into equation (7.3.8) gives the probability of a cavity,

P(Cavity | Toothache ∧ Catch) = α P(Toothache | Cavity) P(Catch | Cavity) P(Cavity)

Now the information requirements are the same as for inference using each piece of evidence separately : the prior probability P(Cavity) for the query variable and the conditional probability of each effect, given its cause.


Conditional independence assertions can allow probabilistic systems to scale up; moreover, they are much more commonly available than absolute independence assertions. When there are n variables, given that they are all conditionally independent, the size of the representation grows as O(n) instead of O(2^n).

For example :

Consider the dentistry example, in which a single cause directly influences a number of effects, all of which are conditionally independent, given the cause.

The full joint distribution can be written as,

P(Cause, Effect1, ..., Effectn) = P(Cause) Πi P(Effecti | Cause)

Such a probability distribution is called a naive Bayes model - "naive" because it is often used (as a simplifying assumption) in cases where the "effect" variables are not actually conditionally independent given the cause variable. The naive Bayes model is sometimes called a Bayesian classifier.
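The naive Bayes computation can be sketched as follows. The conditional probabilities below are derived from the dentist full joint table (e.g. P(Toothache | Cavity) = 0.12 / 0.2 = 0.6); the function name is our own.

```python
def naive_bayes_posterior(prior, cond, observed_effects):
    """P(Cause | effects) = alpha * P(Cause) * product of P(effect | Cause)."""
    unnorm = {}
    for cause, p in prior.items():
        for effect in observed_effects:
            p *= cond[effect][cause]
        unnorm[cause] = p
    alpha = 1.0 / sum(unnorm.values())     # normalize to sum to 1
    return {cause: alpha * p for cause, p in unnorm.items()}

prior = {True: 0.2, False: 0.8}            # P(Cavity)
cond = {
    "toothache": {True: 0.6, False: 0.1},  # P(Toothache | Cavity)
    "catch":     {True: 0.9, False: 0.2},  # P(Catch | Cavity)
}

post = naive_bayes_posterior(prior, cond, ["toothache", "catch"])
print(round(post[True], 3), round(post[False], 3))  # 0.871 0.129
```

Because the dentist example really is conditionally independent given Cavity, this reproduces the <0.871, 0.129> result computed from the full joint table earlier, using only 1 + 2n numbers instead of 2^(n+1).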

Answer in Brief

1. Explain the process of inference using full joint distribution with example.

(Refer section 7.3.6)


2. Define Dempster-Shafer theory.

Ans. : The Dempster-Shafer theory is designed to deal with the distinction between uncertainty and ignorance. Rather than computing the probability of a proposition, it computes the probability that the evidence supports the proposition.
3. Define : Bayes' theorem.

Ans. : In probability theory and applications, Bayes' theorem (alternatively called Bayes' law or Bayes' rule) links a conditional probability to its inverse.

P(b | a) = P(a | b) P(b) / P(a)

This equation is called Bayes' rule or Bayes' theorem.

4. What is reasoning by default ?

Ans. : We can do qualitative reasoning using techniques like default reasoning. Default reasoning treats conclusions not as "believed to a certain degree", but as "believed until a better reason is found to believe something else".


5. What logics are used in reasoning with uncertain information ?

Ans. : There are two approaches that can be taken for reasoning with uncertain information in which logic is used.

Non-monotonic logic is used in the default reasoning process. Default reasoning also uses another type of logic called default logic.

The second approach towards such reasoning handles vagueness, and uses fuzzy logic. Fuzzy logic is a method for reasoning with logical expressions describing membership in fuzzy sets.
6. Define prior probability.

Ans. : The prior (unconditional) probability is associated with a proposition 'a'. The prior probability is the degree of belief accorded to a proposition in the absence of any other information. It is written as P(a). For example, if the probability that Ram has a cavity is 0.1, then the prior probability is written as,

P(Cavity = true) = 0.1 or P(Cavity) = 0.1

7. State the types of approximation methods.

Ans. : For approximate inferencing, randomized sampling algorithms (Monte Carlo algorithms) are used. There are two approximation methods used in randomized sampling : 1) Direct sampling algorithms and 2) Markov chain sampling algorithms.

In direct sampling, samples are generated from a known probability distribution. In Markov chain sampling, each event is generated by making a random change to the preceding event.
8. What do you mean by hybrid Bayesian network ?

Ans. : A network with both discrete and continuous variables is called a hybrid Bayesian network. In a hybrid Bayesian network, a continuous variable is represented by discretization into intervals, because it can take infinitely many values.

For specifying the hybrid network, two kinds of distribution are specified : the conditional distribution for a continuous variable given discrete or continuous parents, and the conditional distribution for a discrete variable given continuous parents.
9. Define computational learning theory.

Ans. : Computational learning theory is a mathematical field related to the analysis of machine learning algorithms.

Computational learning theory is used in the evaluation of sample complexity and computational complexity. Sample complexity targets the question : how many training examples are needed to learn a successful hypothesis ? Computational complexity evaluates how much computational effort is needed to learn a successful hypothesis.

In addition to performance bounds, computational learning theory also deals with the time complexity and feasibility of learning.


10. Give the full specification of a Bayesian network.

Ans. : Bayesian network definition : It is a data structure which is a graph, in which each node is annotated with quantitative probability information.

The nodes and edges in the graph are specified as follows :

1) A set of random variables makes up the nodes of the network. Variables may be
discrete or continuous.
2) A set of directed links or arrows connects pairs of nodes. If there is an arrow from
node X to node Y, then X is said to be a parent of Y.
3) Each node Xi has a conditional probability distribution P(Xi | Parents(Xi)) that quantifies the effect of the parents on the node.

4) The graph has no directed cycles (and hence is a directed, acyclic graph, or DAG).

The set of nodes and links is called the topology of the network.

7.4 University Question with Answer

Winter-18

Q.1 Explain Bayes' theorem. (Refer section 7.3.8)
