
Machine Intelligence

UNIT 1: Introduction (5 HOURS)


App/System/Case study: Medical Diagnosis System, Self-Driving Vehicle.
Contents: Introduction, Foundation and history of AI, AI problems and techniques, AI applications, Turing test, Impact and ethical concerns of AI, Intelligent agents, PEAS representation of agents, Structure of agents
Further reading: Examples of autonomous agents
Instructional Objectives
 • Define an agent
 • Define an intelligent agent
 • Define a rational agent
 • Explain bounded rationality
 • Discuss different types of environments
 • Explain different agent architectures

On completion of this lesson the students will be able to:

 • Understand what an agent is and how an agent interacts with the environment
 • Given a problem situation, the students should be able to
   o identify the percepts available to the agent, and
   o identify the actions that the agent can execute
What is Artificial Intelligence?

Thinking Humanly
 • "The exciting new effort to make computers think… machines with minds, in the full and literal sense." (Haugeland, 1985)
 • "The automation of activities such as decision making, problem solving and learning." (Bellman, 1978)

Thinking Rationally
 • "The study of mental faculties through the use of computational models." (Charniak and McDermott, 1985)
 • "The study of the computations that make it possible to perceive, reason and act." (Winston, 1992)

Acting Humanly
 • "The art of creating machines that perform functions that require intelligence when performed by people." (Kurzweil, 1990)
 • "The study of how to make computers do things at which, at the moment, people are better." (Rich and Knight, 1991)

Acting Rationally
 • "Computational Intelligence is the study of the design of intelligent agents." (Poole et al., 1998)
 • "AI… is concerned with intelligent behavior in artifacts." (Nilsson, 1998)
Agent and Environment

[Diagram: the agent acts on the environment through actions and receives percepts from the environment.]
Agents
 • Operate in an environment
 • Perceive their environment through sensors
 • Act upon their environment through actuators/effectors
 • Have goals

An agent is anything that perceives and acts on its environment. Agents are expected to operate autonomously, perceive their environment, persist over a prolonged time period, adapt to change, and create and pursue goals.
Sensors and Effectors/Actuators
 • An agent perceives its environment through sensors
   o The complete set of inputs at a given time is called a percept
   o The current percept, or a sequence of percepts, can influence the actions of an agent
 • It can change the environment through effectors
   o An operation involving an actuator is called an action
   o Actions can be grouped into action sequences
Agents
 • Have sensors and actuators
 • Have goals
 • Implement a mapping from percept sequences to actions
 • Are evaluated by a performance measure
 • Autonomous agent: decides autonomously which actions to take in the current situation to maximize progress towards its goals
Performance
 • Behavior and performance of intelligent agents are described in terms of the agent function (a minimal sketch follows below)
   o Mapping from perception history (percept sequence) to actions
   o Ideal mapping: specifies which actions an agent should take at any point in time
 • Performance measure: a subjective measure to characterize how successful an agent is (e.g. speed, power usage, accuracy, cost, etc.)
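As a rough sketch of this mapping from percept sequences to actions, the code below models an agent whose program receives the percept history and returns an action, together with a simple perceive-act loop that accumulates a performance score. The names used here (Agent, step, run, percept, execute) are assumptions made only for this illustration, not a standard API.

    # Minimal sketch of an agent as a mapping from percept sequences to actions.
    # Class and hook names (Agent, step, run, percept, execute) are illustrative
    # assumptions for this lesson, not a standard library.

    class Agent:
        def __init__(self, program):
            self.program = program          # maps a percept sequence to an action
            self.percept_sequence = []      # complete history of percepts so far

        def step(self, percept):
            # record the new percept, then let the agent program choose an action
            self.percept_sequence.append(percept)
            return self.program(self.percept_sequence)

    def run(agent, environment, steps=10):
        # simple agent-environment loop: perceive, act, accumulate performance
        score = 0
        for _ in range(steps):
            percept = environment.percept(agent)          # assumed environment hook
            action = agent.step(percept)
            score += environment.execute(agent, action)   # assumed environment hook
        return score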
Examples of Agents
 • Humans
   o Eyes, ears, skin, taste buds, etc. for sensors
   o Hands, fingers, legs, mouth for effectors
 • Robots
   o Cameras, infrared sensors, etc. for sensors
   o Grippers, wheels, lights, speakers, etc. for actuators
 • Software agent (softbot)
   o Functions act as sensors
   o Functions act as actuators
Types of Agents: Robots

[Images: a robot assembly line in a car factory; military robots; Sophia, a humanoid robot; the Rashmi robot; the MIT robot Cog moving its arms; robots delivering takeout orders on the streets of Washington, D.C.]
Types of Agents
 • Softbots
 • Expert systems
 • Autonomous spacecraft
 • Intelligent buildings
Agents
 • Fundamental faculties of intelligence
   o Acting
   o Sensing
   o Understanding, reasoning, learning
 • In order to act you must sense; blind action is not a characteristic of intelligence
 • Robotics: sensing and acting, understanding not necessary
 • Sensing needs understanding to be useful
INTELLIGENT AGENTS
Intelligent Agent
 o Must sense.
 o Must act.
 o Must be autonomous (to some extent).
 o Must be rational.
Rational Agent
 • AI is about building rational agents.
 • An agent is something that perceives and acts.
 • A rational agent always does the right thing.
   o What are the functionalities (goals)?
   o What are the components?
   o How do we build them?
RATIONALITY
 • PERFECT RATIONALITY
   o Assumes that the rational agent knows all and will take the action that maximizes her utility.
   o Human beings do not satisfy this definition of rationality.
 • BOUNDED RATIONALITY
   o Because of the limitations of the human mind, humans must use approximate methods to handle many tasks.
Rationality
The proposed definition requires:
 • Information gathering/exploration
   ◦ To maximize future rewards
 • Learning from percepts
   ◦ Extending prior knowledge
 • Agent autonomy
   ◦ To compensate for incorrect prior knowledge
RATIONALITY
 • Rational action: the action that maximizes the expected value of the performance measure given the percept sequence to date (see the sketch after this list)
   o Rational = Best?
     • Yes, to the best of its knowledge
   o Rational = Optimal?
     • Yes, to the best of its abilities
     • And within its constraints
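To make "maximizes the expected value of the performance measure" concrete, the small sketch below chooses, among the available actions, the one with the highest expected score. The outcome_model and performance functions are hypothetical placeholders assumed for this example; the lesson does not define them.

    # Hypothetical sketch: choose the action with the highest expected value of
    # the performance measure. outcome_model(action, percepts) is assumed to
    # return a {outcome: probability} dict, and performance(outcome) a score.

    def rational_action(actions, percepts, outcome_model, performance):
        def expected_value(action):
            return sum(prob * performance(outcome)
                       for outcome, prob in outcome_model(action, percepts).items())
        return max(actions, key=expected_value)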
Omniscience
 • A rational agent is not omniscient
   o It doesn’t know the actual outcome of its actions
   o It may not know certain aspects of its environment
 • Rationality must take into account the limitations of the agent
   o Percept sequence, background knowledge and feasible actions
   o Deal with the expected outcome of actions
Bounded Rationality
 • Evolution did not give rise to optimal agents, but to agents that are, at best, locally optimal in some sense.
 • In 1957, Simon proposed the notion of bounded rationality: the property of an agent that behaves in a manner that is as nearly optimal with respect to its goals as its resources allow.
Vacuum-cleaner world
 • Percepts: location and contents, e.g., [A, Dirty]
 • Actions: Left, Right, Suck, NoOp


A vacuum-cleaner function
Percept Sequence Action
[A, Clean] Right
[A, Dirty] Suck
[B, Clean] Left
[B, Dirty] Suck
[A, Clean], [A, Clean] Right
[A, Clean], [A, Dirty] Suck
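The agent function table above can be read as an executable lookup. The following is a minimal table-driven sketch, assuming percepts are encoded as (location, status) pairs; only the percept sequences listed in the table are covered, and anything else falls back to NoOp.

    # Table-driven vacuum agent: look up the percept sequence seen so far in the
    # agent function table above. The (location, status) encoding is an
    # assumption of this sketch; unlisted sequences default to NoOp.

    table = {
        (('A', 'Clean'),):                 'Right',
        (('A', 'Dirty'),):                 'Suck',
        (('B', 'Clean'),):                 'Left',
        (('B', 'Dirty'),):                 'Suck',
        (('A', 'Clean'), ('A', 'Clean')):  'Right',
        (('A', 'Clean'), ('A', 'Dirty')):  'Suck',
    }

    def table_driven_vacuum_agent(percept_sequence):
        return table.get(tuple(percept_sequence), 'NoOp')

    # e.g. the percept sequence [A, Dirty] maps to Suck
    print(table_driven_vacuum_agent([('A', 'Dirty')]))   # -> Suck

The simple reflex agent below avoids storing the percept history altogether and reacts only to the current location and its status.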

import random


class Environment(object):
    def __init__(self):
        # instantiate locations and conditions
        # 0 indicates Clean and 1 indicates Dirty
        self.locationCondition = {'A': 0, 'B': 0}
        # randomize conditions in locations A and B
        self.locationCondition['A'] = random.randint(0, 1)
        self.locationCondition['B'] = random.randint(0, 1)


class SimpleReflexVacuumAgent(Environment):
    def __init__(self, Environment):
        print(Environment.locationCondition)
        # instantiate performance measurement
        Score = 0
        # place vacuum at a random location
        vacuumLocation = random.randint(0, 1)
        if vacuumLocation == 0:
            print("Vacuum is randomly placed at Location A.")
            # if A is Dirty, suck and mark clean
            if Environment.locationCondition['A'] == 1:
                print("Location A is Dirty.")
                Environment.locationCondition['A'] = 0
                Score += 1
                print("Location A has been Cleaned.")
            # if B is Dirty, move to B, then suck and mark clean
            if Environment.locationCondition['B'] == 1:
                print("Location B is Dirty.")
                print("Moving to Location B...")
                Score -= 1
                Environment.locationCondition['B'] = 0
                Score += 1
                print("Location B has been Cleaned.")
        elif vacuumLocation == 1:
            print("Vacuum is randomly placed at Location B.")
            # if B is Dirty, suck and mark clean
            if Environment.locationCondition['B'] == 1:
                print("Location B is Dirty.")
                Environment.locationCondition['B'] = 0
                Score += 1
                print("Location B has been Cleaned.")
            # if A is Dirty, move to A, then suck and mark clean
            if Environment.locationCondition['A'] == 1:
                print("Location A is Dirty.")
                print("Moving to Location A...")
                Score -= 1
                Environment.locationCondition['A'] = 0
                Score += 1
                print("Location A has been Cleaned.")
        # done cleaning
        print(Environment.locationCondition)
        print("Performance Measurement: " + str(Score))


# driver: create an environment and run the reflex agent once
theEnvironment = Environment()
theVacuum = SimpleReflexVacuumAgent(theEnvironment)

OUTPUT (sample run 1)
{'A': 0, 'B': 1}
Vacuum is randomly placed at Location A.
Location B is Dirty.
Moving to Location B...
Location B has been Cleaned.
{'A': 0, 'B': 0}
Performance Measurement: 0

OUTPUT (sample run 2)
{'A': 1, 'B': 1}
Vacuum is randomly placed at Location B.
Location B is Dirty.
Location B has been Cleaned.
Location A is Dirty.
Moving to Location A...
Location A has been Cleaned.
{'A': 0, 'B': 0}
Performance Measurement: 1


THANK YOU
POLL QUESTIONS
1) What is meant by agent’s percept sequence?
o Used to perceive the environment
o Complete History of actuators
o Complete history of perceived things
o None of the mentioned

2) Which action sequences are used to achieve the agent’s goal?


o Search
o Plan
o Retrieve
o Both Search and Plan
POLL QUESTIONS
1) Which element is used for selecting actions?
o Perceive
o Performance
o Learning
o Actuator

2) An agent's behavior can best be described by
o Perception sequence
o Agent function
o Sensors and actuators
o Environment in which agent is performing
